In this paper, we introduce a novel method to interpret recurrent neural networks (RNNs), particularly long short-term memory networks (LSTMs), at the cellular level. We propose a systematic pipeline for interpreting individual hidden state dynamics within the network using response characterization methods. The ranked contribution of individual cells to the network's output is computed by analyzing a set of interpretable metrics of their decoupled step and sinusoidal responses. As a result, our method is able to uniquely identify neurons with insightful dynamics, quantify relationships between dynamical properties and test accuracy through ablation analysis, and interpret the impact of network capacity on a network's dynamical distribution. Finally, we demonstrate the generalizability and scalability of our method by evaluating it on a series of benchmark sequential datasets.

Our approach isolates individual cells and exposes them to defined input signals such as step and sinusoid functions. Through evaluation of output attributes, such as response settling time, phase shift, and amplitude, we demonstrate that it is possible to predict sub-regions of the network dynamics, rank cells based on their relative contribution to the network output, and thus produce reproducible metrics of network interpretability. For example, the step response settling time delineates cells with fast and slow response dynamics. In addition, by considering the steady-state value of the cellular step response and the amplitude of the sinusoid response, we are able to identify cells that significantly contribute to a network's decision. We evaluate our methodology on a range of sequential datasets and demonstrate that our algorithms scale to large LSTM networks with millions of parameters. The key contributions of this paper can be summarized as follows:
1. Design and implementation of a novel and lightweight algorithm for systematic LSTM interpretation based on response characterization;
2. Evaluation of our interpretation method on four sequential datasets, including classification and regression tasks; and
3. Detailed interpretation of our trained LSTMs on the single-cell scale, via distribution and ablation analysis, as well as on the network scale, via network capacity analysis.

First, we discuss related work in Sec. 2 and then introduce the notion of RNNs as dynamic systems in Sec. 3. Sec. 4 presents our algorithm for response characterization and defines the extracted interpretable definitions. Finally, we discuss the interpretations enabled by this analysis in Sec. 5 through a series of experiments, and provide final conclusions in Sec. 6.

Deep Neural Network Interpretability - A number of impactful approaches have been proposed for interpretation of deep networks through feature visualization BID9 BID37 BID36 BID20 BID34 BID6. Feature maps can be empirically interpreted at various scales using neural activation analysis BID27, where the activations of hidden neurons or the hidden states of these neurons are computed and visualized. Additional approaches try to understand feature maps by evaluating attributions BID32 BID10 BID21 BID35. Feature attribution is commonly performed by computing saliency maps (a linear/non-linear heatmap that quantifies the contribution of every input feature to the final output decision). The contributions of hidden neurons, depending on the desired level of interpretability, can be highlighted at various scales, ranging from the individual cell level, to the channel and spatial filter space, or even to arbitrary groups of specific neurons BID27.
A dimensionality reduction method can also be used to abstract high-dimensional feature maps into a low-dimensional latent space representation, to qualitatively interpret the most important underlying features BID25 BID5. However, these methods often come at the cost of decreased cell-level auditability. Richer infrastructures have recently been developed to reason about the network's intrinsic kinetics. LSTMVis BID34 relates the hidden state dynamics patterns of LSTM networks to similar patterns observed in larger networks to explain an individual cell's functionality. A systematic framework has also been introduced that combines interpretability methodologies across multiple network scales BID27. This enables exploration over various levels of interpretability for deep NNs; however, there is still space to incorporate more techniques, such as robust statistics BID22, information-theoretic approaches BID31, gradients in the correlation domain BID14, and the response characterization methods which we address in this paper.

Recurrent Neural Network Interpretability - Visualization of the hidden state of fixed-structure RNNs on text and linguistic datasets identifies interpretable cells which have learned to detect certain language syntax and semantics BID20 BID34. RNNs have also been shown to learn input-sensitive grammatical functions when their hidden activation patterns are visualized BID18 BID19. Moreover, gradient-based attribution evaluation methods have been used to understand how RNNs localize key words in text. While these techniques provide rich insight into the dynamics of learned linguistic networks, interpreting the network often requires detailed prior knowledge about the data content. Therefore, such methods may face difficulties generalizing to other forms of sequential data, such as the time series we focus on in this study. Another way to build interpretability into RNNs is the attention mechanism, where the network architecture is constrained to attend to particular parts of the input space. RNNs equipped with an attention mechanism have been successfully applied to image captioning, fine alignments in machine translation, and text extraction from documents BID15. Hidden-state visualization is a property shared by all of these approaches for effectively understanding the internals of the network. Hudson et al. BID17 also introduced Memory, Attention, and Composition (MAC) cells, which can be used to design interpretable machine reasoning engines in an end-to-end fashion. MAC is able to perform highly accurate reasoning, iteratively, directly from the data. However, applying these modifications to arbitrary network architectures is not always possible, and for LSTMs specifically the extension is not possible in the current scope of MAC.

Recurrent Neural Network Dynamics - Rigorous studies of the dynamical systems properties of RNNs, such as their activation function's independence property (IP) BID4, state distinguishability BID2, and observability BID11 BID12, date back more than two decades. Thorough analyses of how long-term dynamics are learned by LSTM networks have been conducted in BID16. Gate ablation analysis on LSTM networks has been performed to understand cell dynamics BID13 BID8. We introduce the response characterization method as a novel building block for understanding and reasoning about LSTM hidden state dynamics.
In this section, we briefly recap the kinetics of RNNs. We denote the global dynamics of the hidden state values as h_t^l, with t ∈ {1..T} denoting the time and l ∈ {1..L} representing the layers of the neural network. A vanilla recurrent neural network (RNN) can be formulated as BID29 BID20:

h_t^l = tanh( W^l (h_t^{l-1}; h_{t-1}^l) ),

where W^l [n × 2n] is the weight matrix of layer l. h_t^0 retains the input vector x_t, and h_t^L holds the vector at the last hidden layer, L, that is mapped to an output vector y_t, which is ultimately a function of the whole input sequence {x_1, ..., x_T}.

RNNs can also be formulated as control dynamical systems in the form of the following differential equation (for the sake of notational simplicity, we omit the time argument t):

ḣ = σ(R h + W x),    y = C h,

where h denotes the internal state (˙ illustrates the time shift or time derivative for the discrete- and continuous-time systems, respectively), x stands for the input, and R [n×n], W [n×m] and C [p×n] are real matrices representing the recurrent weights, input weights and output gains, respectively. σ: R → R indicates the activation function. In the continuous setting, σ should be locally Lipschitz (see BID3 for a more detailed discussion).

Long short-term memory (LSTM) networks BID16 are gated recurrent neural network architectures specifically designed to tackle the training challenges of RNNs. In addition to memorizing the state representation, they realize three gating mechanisms to read from the input (i), write to the output (o) and forget what the cell has stored (f). The activity of the cell can be formulated in the standard form as follows BID13:

i_t = σ(W_i x_t + R_i h_{t-1} + b_i),
f_t = σ(W_f x_t + R_f h_{t-1} + b_f),
o_t = σ(W_o x_t + R_o h_{t-1} + b_o),
c_t = f_t ⊙ c_{t-1} + i_t ⊙ tanh(W_c x_t + R_c h_{t-1} + b_c),
h_t = o_t ⊙ tanh(c_t).

An observable dynamical system gives rise to distinct outputs for two different initial states at which the system is started BID33. Observable systems realize unique internal parameter settings BID4; one can then reason about that parameter setting to interpret the network for a particular input profile. Information flow in LSTM networks carries on by the composition of static and time-varying dynamical behavior. This interleaving of building blocks creates a complex, partially dependent set of nonlinear dynamics that is hard to formulate analytically and whose observability properties are hard to verify. As an alternative, in this paper we propose a technique for finding sub-regions of hidden observable dynamics within the network with a quantitative and systematic approach, using response characterization.

In this section, we explore how response characterization techniques can be utilized to perform systematic, quantitative, and interpretable understanding of LSTM networks on both a macro (network) and micro (cell) scale. By observing the output of the system when fed various baseline inputs, we enable a computational pipeline for reasoning about the dynamics of these hidden units. Figure 1 provides a schematic for our response characterization pipeline. From a trained LSTM network comprising M LSTM units, we isolate individual LSTM cells and characterize their output responses based on a series of interpretable response metrics.
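To make the decoupling concrete, the sketch below (our own illustrative NumPy code, not the authors' released implementation) simulates one isolated LSTM cell using only the scalar gate weights that correspond to that cell, and drives it with the step and sinusoid basis inputs; all variable names are assumptions for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def simulate_single_cell(W, R, b, x):
    """Simulate one decoupled LSTM cell.

    W: (4,) input weights for the (i, f, o, candidate) gates of this cell,
    R: (4,) recurrent weights of the cell onto itself,
    b: (4,) gate biases, x: (T,) scalar input sequence.
    Returns the hidden-state trace y with y[t] = h_t.
    """
    h, c = 0.0, 0.0
    y = np.zeros_like(x)
    for t, x_t in enumerate(x):
        i = sigmoid(W[0] * x_t + R[0] * h + b[0])   # input gate
        f = sigmoid(W[1] * x_t + R[1] * h + b[1])   # forget gate
        o = sigmoid(W[2] * x_t + R[2] * h + b[2])   # output gate
        g = np.tanh(W[3] * x_t + R[3] * h + b[3])   # cell candidate
        c = f * c + i * g                           # cell state update
        h = o * np.tanh(c)                          # hidden state (cell output)
        y[t] = h
    return y

# Step and sinusoid basis inputs used for response characterization.
T = 200
step = np.concatenate([np.zeros(T // 2), np.ones(T // 2)])
sine = np.sin(2 * np.pi * 5 * np.arange(T) / T)
```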
We formalize the method as follows. Definition 1: Let G be a trained LSTM network with M hidden LSTM units. Given the dynamics of the training dataset (number of input/output channels, the main frequency components, the amplitude range of the inputs), we design specific step and sinusoidal inputs to the network, and obtain the following insights about the dynamics of the network at multi-scale resolutions:
• the relative strength or contribution of components within the network;
• the reactiveness of components to sudden changes in input; and
• the phase alignment of the hidden outputs with respect to the input.

Specifically, we analyze the responses to the step input and the sinusoidal input, using the classic formulations of these signals: the step input is x_t = 0 for t < t_0 and x_t = 1 for t ≥ t_0, and the sinusoidal input is x_t = sin(2π f t). Across a network of LSTM units, we can approximate sub-regions of the dynamics of a single cell, u, by extracting the input and recurrent weights corresponding to that individual cell. We then define a sub-system consisting of just that single cell and subsequently feed it one of our baseline input signals, x_t ∀ t ∈ {1..T}, to observe the corresponding output response, y_t. In the following, we define the interpretable response metrics for the basis inputs used in this study.

Definition 2: The initial and final response of the step response signal are the starting and steady-state responses of the system, respectively, while the response output change represents their relative difference. The response output change, or delta response for short, determines the strength of the LSTM unit with a particular parameter setting, in terms of output amplitude. This metric can presumably detect units that contribute significantly to the network's decision. The settling time of the step response is the time elapsed from the instantaneous input change until the output lies within a 90% threshold window of its final response. Computing the settling time for individual LSTM units enables us to discover "fast units" and "slow units", which leads to the prediction of active cells when responding to a particular input profile.

The amplitude and frequency of a cyclic response signal are the difference in output and the rate at which the response output periodically cycles, respectively. The response frequency, f̂, is computed by evaluating the highest-energy component of the power spectral density: f̂ = arg max_f S_yy(f). The amplitude metric enables us to rank LSTM cells in terms of significant contributions to the output; this criterion is especially effective for RNNs trained on datasets with a cyclic nature. Given a sinusoidal input, phase shifts and phase variations expressed at the unit's output can be captured by evaluating the frequency attribute. The correlation of the output response with respect to the input signal is the dot product between the unbiased (mean-subtracted) signals:

corr(x, y) = Σ_{t=1}^{T} (x_t − x̄)(y_t − ȳ).

The correlation metric corresponds to the phase alignment between the input and output of the LSTM unit.
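The metrics of Definition 2 are simple to compute from a recorded response trace. Below is a minimal sketch, assuming NumPy and SciPy, and interpreting the 90% settling criterion as the output remaining within 10% of its final value:

```python
import numpy as np
from scipy.signal import periodogram

def step_metrics(y, t_step, tol=0.1):
    """Initial/final response, delta response, and settling time of a step response.

    y: (T,) response trace; t_step: index of the step onset (t_step >= 1).
    """
    y_init, y_final = y[t_step - 1], y[-1]
    delta = y_final - y_init                        # response output change
    band = tol * abs(delta) if delta != 0 else tol  # 90% threshold window
    outside = np.where(np.abs(y - y_final) > band)[0]
    settling = (outside[-1] + 1 - t_step) if len(outside) else 0
    return y_init, y_final, delta, settling

def sine_metrics(x, y, fs=1.0):
    """Amplitude, dominant frequency, and input-output correlation of a sine response."""
    amplitude = (y.max() - y.min()) / 2.0
    f, Syy = periodogram(y - y.mean(), fs=fs)
    freq = f[np.argmax(Syy)]                        # argmax of power spectral density
    corr = np.dot(x - x.mean(), y - y.mean())       # unbiased dot-product correlation
    return amplitude, freq, corr
```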
Systematic computation of each of the above response metrics for a given LSTM's dynamics enables reasoning about the internal kinetics of that system. Specifically, a given LSTM network can be decomposed into its individual cell components, creating many smaller dynamical systems which can be analyzed according to their individual response characterization metrics. Repeating this process for each of the cells in the entire network creates two scales of dynamic interpretability. Firstly, on the individual cell level within the network, we identify cells which inherently exhibit fast vs. slow responses to their input, quantify their relative contribution to the system as a whole, and even interpret their underlying phase-shift and alignment properties. Secondly, in addition to characterizing responses on the cell level, we also analyze the effect of network capacity on the dynamics of the network as a whole. Interpreting hidden model dynamics is interesting not only as a deployment tool but also as a debugging tool, to pinpoint possible sources of undesired dynamics within the network. While one can use these response characterization techniques to interpret individual cell dynamics, the analysis can also be done on the aggregate network scale. After computing our response metrics for all decoupled cells independently, we build full distributions over the set of all individual pieces of the network to gain an understanding of the dynamics of the network as a whole. This study of the response metric distributions presents another rich representation for reasoning about the dynamics, no longer at a local cellular scale, but on the global network scale.

In the following section, we provide concrete results of our system in practice to interpret the dynamics of trained LSTMs for various sequence modeling tasks. We present our computed response characteristics both on the decoupled cellular level and on the network scale, and provide detailed and interpretable reasoning for these observed dynamics. We chose four benchmark sequential datasets and trained LSTM networks of various sizes, ranging from 32 to 320 LSTM cells. The results and analysis presented in this section demonstrate the applicability of our algorithms to a wide range of temporal sequence problems and their scalability to deeper network structures. We start by reasoning about how our response characterization method can explain the hidden-state dynamics of learned LSTM networks for a sequential MNIST dataset, and extend our findings to three additional datasets. We also perform an ablation analysis and demonstrate how some of our metrics find cells with significant contributions to the network's decision, across all datasets.

We start by training an LSTM network with 64 hidden LSTM cells to classify a sequential MNIST dataset. Inputs to the cells are sequences of length 784, generated by stacking the pixels of the 28 × 28 handwritten digits row-wise (cf. FIG1), and the output is the digit classification. Individual LSTM cells were then isolated, and their step and sine responses were computed for the attributes defined formerly (cf. Fig. 4 and FIG1). This interpretation allows us to identify fast-activated/deactivated neurons at fast and slow phases of a particular input sequence. This is validated in FIG1, where the output states of individual LSTM cells are visualized as the network receives a sequence for the digit 6, sorted with respect to the predicted settling time distribution. We observe that fast cells react to fast input dynamics almost immediately, while slow cells act in a slightly later phase. This effect becomes clear as you move down the heatmap in FIG1 and observe the time difference from the original activation. The distribution of the delta response indicates inhibitory and excitatory dynamics expressed in a 50% ratio (see FIG1).
This is confirmed by the input-output correlation criterion, where almost half of the neurons express antagonistic behavior with respect to their sine-wave input (FIG1). The sine-frequency distribution shows that almost 90% of the LSTM cells kept their phase nearly aligned to their sine input, which indicates the existence of a linear transformation; a few cells learned to establish faster frequencies than their inputs, thereby realizing phase-shifting dynamics (FIG1). The sine-amplitude distribution in FIG1 demonstrates that the learned LSTM cells realized various amplitudes that increase almost linearly; the ones with a high amplitude can be interpreted as those contributing maximally to the network's decision. In the following sections, we investigate the generalization of these effects to other datasets.

We trained LSTM networks with 128 hidden cells for four different temporal datasets: sequential MNIST BID23, forecasting of S&P 500 stock prices BID1 and of CO2 concentration at Mauna Loa BID0, and classification of protein secondary structure BID30. The learned networks for each dataset are denoted seq-MNIST, Stock-Net, CO2-Net and Protein-Net. A summary of all five metrics, with the network size of 128, represents the average cell response metric attributes for the various datasets and demonstrates the global speed and amplitude of network activity in terms of the dynamical properties of the response characterization metrics. Fig. 3A-E represents the distributions of the metrics, sorted by the value of their specific attribute, across all datasets. Cells in Protein-Net realized the fastest dynamics (i.e., the smallest settling times) compared to the other networks, while following a trend similar to seq-MNIST (Fig. 3A). The settling time distribution for the LSTM units of CO2-Net and Stock-Net depicts cell groups with similar speed profiles; for instance, neurons 52 to 70 in Stock-Net share the same settling time (Fig. 3A). The sine frequency stays constant for all networks except for some outliers which tend to modify their input frequency (Fig. 3D). The delta response and the correlation metrics (Fig. 3B and Fig. 3E) both indicate the distribution of the inhibitory and excitatory behavior of individual cells within the network. Except for the seq-MNIST net, neurons in all networks approximately keep a rate of 44% excitatory and 56% inhibitory dynamics. The high absolute-amplitude neurons (at the two tails of Fig. 3C) are foreseen as the significant contributors to the output decision; we validate this with an ablation analysis subsequently. Moreover, most neurons realize a low absolute delta response for all datasets except MNIST (Fig. 3B), an indication of cells with an equivalent influence on the output accuracy. The sine amplitude stays invariant for most neurons in Stock-Net and CO2-Net (Fig. 3C). For the seq-MNIST net and Protein-Net, this distribution shows a gradually increasing trend with weak values, which predicts that individual cells contribute roughly equally to the output.

To assess the quality of the predictions and interpretations provided by the response characterization metrics, we performed individual cell-ablation analysis on the LSTM networks and evaluated the impact of each cell on the output accuracy (misclassification rate) for the classification problems, and on the output performance (mean absolute error) for the regression problems. We knocked out neurons from trained LSTM networks with 128 neurons.
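A single-cell knockout can be emulated by zeroing the outgoing connections of one cell and re-evaluating the network. The PyTorch-style sketch below is a hedged illustration; the `readout` attribute and `eval_fn` callable are hypothetical names, not from the paper's code:

```python
import copy
import torch

def ablate_cells(model, eval_fn, n_cells):
    """Knock out one LSTM cell at a time and record the performance impact.

    model: trained network whose LSTM feeds a linear readout `model.readout`,
    eval_fn: callable returning misclassification rate or MAE on held-out data.
    """
    impacts = []
    for u in range(n_cells):
        ablated = copy.deepcopy(model)
        with torch.no_grad():
            # Zero the readout weights from cell u, removing its contribution.
            ablated.readout.weight[:, u] = 0.0
        impacts.append(eval_fn(ablated))
    return impacts
```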
Fig. 4A-H illustrates the performance of the network under individual cell ablations for all four datasets. The gray solid line in each subplot stands for the predictions of the response metrics. For CO2-Net, the ablations confirm that neurons with a higher sine amplitude tend to disrupt the output more (Fig. 4D). For the same network, the delta response predicted that neurons with a high negative or positive value are more significant for the output's prediction; this is clearly illustrated in Fig. 4C. For seq-MNIST-Net, the same held true: neurons with a high absolute value of delta response or sine amplitude dramatically reduce the output accuracy when ablated (Fig. 4A-B). By analyzing the sine amplitude and delta response of Protein-Net, we observe that neurons are equivalently valued and tend to contribute equally to the output accuracy. This is verified by the ablation analysis shown in Fig. 4G and 4H, where the mean misclassification error rate stays constant across neural ablations. The absolute values for Stock-Net were also weak in terms of these two metrics, though some outliers at the tails of the distribution predicted dominant neurons. This is clearly notable when comparing neurons 120 to 128 in Fig. 4F to their prediction (gray line), where the amplitude of the response is maximal. In Fig. 4E, ablation experiments for neurons 1 to 40 and 100 to 128 impose a higher impact on the overall output; this was also observed in the delta response prediction shown in Fig. 4B, since neurons with a stronger output response were present at the two tails of the distribution.

While we analyzed the response characterization distributions on a cellular level above, in this subsection we focus on the effect of network capacity on the observed hidden dynamics of the system at a global scale. Reasoning at this scale allows us to draw conclusions on how increasing the expressive capacity of LSTM networks trained on the same dataset can result in vastly different learned dynamics. We experimentally vary the capacity by simply adding hidden LSTM cells to our network and retraining on the respective dataset from scratch. The relationship between each response characterization metric and the network capacity is visualized in FIG4A-E, with the trends across datasets overlaid in single subplots for comparison. One especially interesting result of this analysis is the relationship between capacity and response amplitude (cf. FIG4): the amplitude response decays roughly proportionally to 1/N for all datasets, where N is the number of LSTM cells. In other words, we get the intuitive finding that as we increase the number of LSTM cells, the magnitude of each cell's relative contribution needed to make a prediction subsequently decreases. Yet another key finding of this analysis is that the distribution of settling time is relatively constant across network capacity (cf. FIG4); intuitively, this means that the network is able to learn the underlying time-delay constants represented in the dataset irrespective of the network capacity. One particularly interesting point concerns Protein-Net, which exhibits vastly different settling time behavior (FIG4) than the remainder of the datasets. Upon closer inspection, we found that Protein-Net was heavily overfitting with increased capacity.
This can be seen as an explanation for the rapid decay in its settling time: the addition of LSTM cells increases the specificity of particular cells, which then exhibit dynamical properties aligned with effectively memorizing pieces of the training set.

In this paper, we proposed a response characterization method for LSTM networks that predicts cell contributions to the overall decision of a learned network at both the cell- and network-level resolution. We further verified and validated our predictions by performing an ablation analysis, identifying cells which contribute heavily to the network's output decision with our simple response characterization method. The resulting method establishes a novel building block for interpreting LSTM networks. The LSTM network's dynamic space is broad and cannot be fully captured by fundamental input sequences; however, our methodology demonstrates that practical sub-regions of the dynamics are reachable through response metrics, which we use to build a systematic testbench for LSTM interpretability. We have open-sourced our algorithm to encourage other researchers to further explore the dynamics of LSTM cells and interpret the kinetics of their sequential models. In the future, we aim to extend our approach to even more data modalities and to analyze the training phase of LSTMs to interpret the learning of the converged dynamics presented in this work.
Introducing the response characterization method for interpreting cell dynamics in learned long short-term memory (LSTM) networks.
Decades of research on the neural code underlying spatial navigation have revealed a diverse set of neural response properties. The Entorhinal Cortex (EC) of the mammalian brain contains a rich set of spatial correlates, including grid cells, which encode space using tessellating patterns. However, the mechanisms and functional significance of these spatial representations remain largely mysterious. As a new way to understand these neural representations, we trained recurrent neural networks (RNNs) to perform navigation tasks in 2D arenas based on velocity inputs. Surprisingly, we find that grid-like spatial response patterns emerge in the trained networks, along with units that exhibit other spatial correlates, including border cells and band-like cells. All of these different functional types of neurons have been observed experimentally. The order of emergence of the grid-like and border cells is also consistent with observations from developmental studies. Together, our results suggest that grid cells, border cells and others as observed in EC may be a natural solution for representing space efficiently, given the predominant recurrent connections in the neural circuits.

Understanding the neural code in the brain has long been driven by studying feed-forward architectures, starting from Hubel and Wiesel's famous proposal on the origin of orientation selectivity in primary visual cortex BID19. Inspired by recent developments in deep learning BID25 BID30 BID18 BID39, there has been a burst of interest in applying deep feedforward models, in particular convolutional neural networks (CNNs) BID29, to study sensory systems, which hierarchically extract useful features from sensory inputs (see e.g., BID61; BID24; BID22; BID60). For more cognitive tasks, neural systems often need to maintain certain internal representations of relevant variables in the absence of external stimuli - a process that requires more than feature extraction. We will focus on spatial navigation, which typically requires the brain to maintain a representation of self-location and update it according to the animal's movements and the landmarks of the environment. Physiological studies done in rodents and other mammals (including humans, non-human primates and bats) have revealed a variety of neural correlates of space in the Hippocampus and Entorhinal Cortex (EC), including place cells BID41 and grid cells BID10 BID15 BID11 BID62 BID23 BID20, along with border cells BID49, band-like cells BID27 and others (see FIG0). In particular, each grid cell only fires when the animal occupies a distinct set of physical locations, and strikingly these locations lie on a lattice. The study of the neural underpinning of spatial cognition has provided an important window into how high-level cognitive functions are supported in the brain BID0.

How might the spatial navigation task be solved using a network of neurons? Recurrent neural networks (RNNs) BID18 BID12 BID43 BID54 BID13 BID53 seem particularly useful for these tasks. Indeed, recurrent continuous attractor networks have been one popular class of models proposed for the formation of grid cells BID4 BID5 and place cells BID45. Such models have provided valuable insights into one set of possible mechanisms that could support the formation of the grids. However, these models typically rely on fine-tuned connectivity patterns; in particular, the models need a subtle yet systematic asymmetry in the connectivity pattern to move the attractor state according to the animal's own movement.
The existence of such specific 2D connectivity in rodent EC remains unclear. Additionally, previous models have mainly focused on grid cells, while the other types of responses that co-exist in the Entorhinal Cortex have been largely ignored. It would be useful to have a unified model that can simultaneously explain the different types of neural responses in EC. Motivated by these considerations, here we present an alternative modeling approach for understanding the representation of space in the neural system. Specifically, we trained an RNN to perform spatial navigation tasks. By leveraging recent developments in RNN training and knowledge of the navigation system in the brain, we show that training an RNN with biologically relevant constraints naturally gives rise to a variety of spatial response profiles as observed in EC, including grid-like responses. To our knowledge, this is the first study to show that grid-like responses could emerge from training an RNN to perform navigation. Our results imply that the neural representation in EC may be seen as a natural way for the brain to solve the navigation task efficiently BID55. More generally, they suggest that RNNs can be a powerful tool for understanding the neural mechanisms of certain high-level cognitive functions.

(FIG0 caption: a) Example neural responses recorded when an animal navigates in a square environment, replotted from BID27, with the heat map representing the firing rate of a neuron as a function of the animal's location (red corresponds to high firing rate); a "band-like" cell from BID27; a border cell from BID49; an irregular spatially tuned cell from BID7; a "speed cell" from BID26, which exhibits a roughly linear dependence on the rodent's running speed; a "heading direction cell" from BID46, which shows a systematic change of firing rate depending on the animal's heading direction. b) The network consists of N = 100 recurrently connected units (or neurons) which receive two external inputs, representing the animal's speed and heading direction. The two outputs linearly weight the neurons in the RNN. The goal of training is to make the responses of the two output neurons accurately represent the animal's physical location. c) Typical trajectory after training; the output of the RNN can accurately, though not perfectly, track the animal's location during navigation.)

Our network model consists of a set of recurrently connected units (N = 100). The dynamics of each unit in the network, u_i(t), is governed by the standard continuous-time RNN equation:

τ dx_i(t)/dt = −x_i(t) + Σ_{j=1}^{N} W_ij^rec u_j(t) + Σ_k W_ik^in I_k(t) + b_i + ξ_i(t),

for i = 1, ..., N. The activity of each unit, u_i(t), is related to the activation of that unit, x_i(t), through a nonlinearity which in this study we take to be u_i(t) = tanh(x_i(t)). Each unit receives input from other units through the recurrent weight matrix W_rec and also receives external input, I(t), which enters the network through the weight matrix W_in. Each unit has two sources of bias: b_i, which is learned, and ξ_i(t), which represents noise intrinsic to the network and is taken to be Gaussian with zero mean and constant variance. The network was simulated using the Euler method for T = 500 timesteps of duration τ/10.

To perform a 2D navigation task with the RNN, we linearly combine the firing rates of the units in the network to estimate the current location of the animal. The responses of the two linear readout neurons, y_1(t) and y_2(t), are given by:

y_j(t) = Σ_{i=1}^{N} W_ji^out u_i(t),   for j ∈ {1, 2}.

The network inputs and outputs were inspired by simple spatial navigation tasks in 2D open environments.
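The dynamics above translate directly into an Euler integration loop. The following NumPy sketch (our own illustration; names are placeholders) advances the network with step size dt = τ/10 and reads out the 2D position estimate:

```python
import numpy as np

def simulate_rnn(W_rec, W_in, W_out, b, I, tau=1.0, noise_std=0.1):
    """Euler-integrate the continuous-time RNN and read out 2D position.

    W_rec: (N, N) recurrent weights, W_in: (N, n_in) input weights,
    W_out: (2, N) readout weights, b: (N,) bias,
    I: (T, n_in) inputs (speed and heading direction per timestep).
    """
    N, dt = W_rec.shape[0], tau / 10.0
    x = np.zeros(N)                       # unit activations x_i
    ys = []
    for I_t in I:
        u = np.tanh(x)                    # unit activities u_i = tanh(x_i)
        xi = noise_std * np.random.randn(N)
        # tau * dx/dt = -x + W_rec @ u + W_in @ I + b + noise
        x = x + (dt / tau) * (-x + W_rec @ u + W_in @ I_t + b + xi)
        ys.append(W_out @ np.tanh(x))     # linear readout of (x, y) position
    return np.array(ys)                   # (T, 2) estimated trajectory
```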
The task resembles dead reckoning (sometimes referred to as path integration), which is ethologically relevant for many animal species BID6 BID38 BID9. More specifically, the inputs to the network were the animal's speed and direction at each time step. Experimentally, it has been shown that velocity signals exist in EC BID46 BID26 BID17, and there is also evidence that such signals are necessary for grid formation BID58. Throughout the paper, we adopt the common assumption that the head direction of the animal coincides with the actual moving direction. The outputs were the x- and y-coordinates of the integrated position. The direction of the animal is modeled by a modified Brownian motion, to increase the probability of straight runs, consistent with a typical rodent's behavior in an open environment. Using such simple movement statistics has the advantage of giving full control over the simulated trajectories; however, for future work it would be very interesting to test the model using real movement trajectories from different animals to see how the results might change. Special care is taken when the animal is close to the boundary. The boundary of the environment affects the statistics of the movement, as the animal cannot cross it; this is reflected in the model by re-sampling the angular input variable until the input angle does not lead the animal outside the boundary. In the simulations shown below, the animal always starts from the center of the arena, but we verified that the results are insensitive to the starting location.

We optimized the network parameters W_rec, W_in, b and W_out to minimize the squared error between the target x- and y-coordinates from the two-dimensional navigation task (performed in rectangular, hexagonal, and triangular arenas) and the network outputs generated according to the readout equation above. Parameters were updated with the Hessian-free algorithm BID35, using minibatches of size M = 500 trials. In addition to minimizing the error, we regularized the input and output weights as well as the squared firing rates of the units (referred to as the metabolic cost). In sum, the training minimizes a loss function that consists of the error of the animal, the metabolic cost, and a penalty for large network parameters, of the form

L = Σ_{t} ||y(t) − y_target(t)||² + λ_w (||W_in||² + ||W_out||²) + λ_u Σ_{i,t} u_i(t)²

(the precise weighting coefficients are hyperparameters). We find that the results are qualitatively insensitive to the initialization scheme used for the recurrent weight matrix W_rec. For the results presented in this paper, simulations in the hexagonal environment were obtained by initializing the elements of W_rec as zero-mean Gaussian random variables with variance 1.5²/N, and simulations in the square and triangular environments were initialized with an orthogonal W_rec BID48. We initialized the bias b and the output weights W_out to zero. The elements of W_in were zero-mean Gaussian variables with variance 1/N_in.

We ran simulation experiments in arenas with different boundary shapes, including square, triangular and hexagonal. FIG0 shows a typical example of the model performance after training; the network output (red trace) accurately tracks the animal's actual path (black). We are mostly interested in what kind of representation the RNN has learned to solve this navigation task, and in whether such a representation resembles the response properties of neurons in EC.
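Putting the three terms together, here is a hedged sketch of the training objective; the λ coefficients are our illustrative placeholders, since the exact values are not given here:

```python
import numpy as np

def navigation_loss(y_pred, y_true, u, W_in, W_out, lam_w=1e-4, lam_u=1e-3):
    """Loss = tracking error + weight regularization + metabolic cost.

    y_pred, y_true: (T, 2) predicted and target coordinates,
    u: (T, N) unit activities; lam_w, lam_u: illustrative coefficients.
    """
    err = np.mean(np.sum((y_pred - y_true) ** 2, axis=1))     # squared error
    reg_w = lam_w * (np.sum(W_in ** 2) + np.sum(W_out ** 2))  # weight penalty
    metabolic = lam_u * np.mean(u ** 2)                       # squared firing rates
    return err + reg_w + metabolic
```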
To test whether the trained RNN developed location-selective representations, we plot individual neurons' mean activity level as a function of the animal's location during spatial exploration. Note that these average response profiles should not be confused with the linear filters typically shown for feedforward networks. Surprisingly, we find that neurons in the trained RNN show a range of interesting spatial response profiles. Examination of these response profiles suggests they can be classified into distinct functional types. Importantly, as we will show, these distinct spatial response profiles can be mapped naturally to known physiology in EC. The spatial responses of all units in the trained networks are shown in the Appendix.

Grid-like responses - Most interestingly, we find that some of the units in the RNN exhibit clear grid-like responses (FIG1). These firing patterns typically exhibit multiple firing fields, with each firing field exhibiting a roughly circularly symmetric or elliptic shape. Furthermore, the firing fields are highly structured: when combined, they are arranged on a regular lattice. The structure of the response lattice depends on the shape of the boundary. In particular, training the network to perform self-localization in a square environment tends to give rectangular grids, while in hexagonal and triangular environments the grids are closer to triangular. Experimentally, it is shown that (medial) EC contains so-called grid cells, which exhibit multiple firing fields that lie on a regular grid BID10 BID15. The grid-like firing patterns in our simulation are reminiscent of the grid cells in rodents and other mammals. However, we also notice that the grid-like model responses typically exhibit fewer periods than seen in experimental data (see FIG0); it is possible that using a larger network might reveal finer grid patterns in our model. Nonetheless, it is surprising that grid-like spatial representations can develop in our model, given that there is no periodicity in the input. Another potential concern is that, experimentally, the grids are often reported to lie on the corners of a triangular lattice BID15 even in square environments (see FIG0), though the grids are somewhat influenced by the shape of the environment. However, the rats in these experiments presumably had spatial experience in other environments with various boundary shapes. Experimentally, it would be interesting to see whether grid cells would instead lie on a square lattice if rats were raised in a single square environment - the situation we simulate here.

Border responses - Many neurons in the RNN exhibit selectivity to the boundary (FIG1). Typically, they encode only a portion of the boundary, e.g. one piece of wall in a square-shaped environment. Such properties are similar to the border cells discovered in rodent EC BID49 BID47 BID31. Experimentally, border cells mainly fire along one piece of wall, although some have been observed to fire along multiple borders or along the whole boundary of the environment; interestingly, these multi-border responses were also observed in some of our RNN models. Currently, it is unclear how the boundary-like response profiles emerge BID49 BID47 BID31. Our model points to the possibility that border cells may emerge without the presence of tactile cues. Furthermore, it suggests that border cell formation may be related to the movement statistics of the animals, i.e. the asymmetry of the movement statistics along the boundary.
Band-like responses - Interestingly, some neurons in the RNN exhibit band-like responses (FIG1). In most of our simulations, these bands tend to be parallel to one of the boundaries; for some of the units one of the bands overlaps the boundary, but for others it does not. Experimentally, neurons with periodic-like firing patterns have recently been reported in rodent EC: one study reported that a substantial portion of cells in EC exhibit band-like firing characteristics BID27. However, we note that, based on the data reported in BID27, the band pattern is not as clear as in our model.

Spatially stable but non-regular responses - Besides the units described above, most of the remaining units also exhibit stable spatial responses, but they do not belong to the above categories. These response profiles can exhibit either one large irregular firing field, or multiple circular firing fields that do not form a regular pattern. Experimentally, these types of cells have also been observed; in fact, it was recently reported that non-grid spatial cells constitute a large portion of the neurons in Layers II and III of rodent EC BID7.

Speed tuning - Example speed tuning curves of model neurons are shown in Figure 3. Interestingly, we observe that the model border cells tend to have almost zero speed tuning (e.g., see Figure 3g,h).

Head direction tuning - A substantial portion of the model neurons show direction tuning, with a diversity of tuning profiles, both in terms of the strength of the tuning and the preferred direction. Example tuning curves are shown in Figure 3, and the direction tuning curves of a complete population are shown in the Appendix. Interestingly, the model neurons which show the strongest head direction tuning generally do not show a clear spatial firing pattern (see Figure 3a,b,c), suggesting that a group of neurons is mostly responsible for encoding direction. We also notice that neurons with clear grid-like firing can exhibit a variety of direction tuning strengths, from weak to strong (Figure 3d,e,f). In the Appendix, we quantify the relation between these different tuning properties at the whole-population level, which shows a somewhat complex dependence. Experimentally, heading direction tuning in EC is well known, e.g., BID46, and both grid and non-grid cells in EC exhibit head direction tuning BID46. Furthermore, the linear speed dependence of the model neurons is similar to the properties of the speed cells recently reported in EC BID26. Our result is also consistent with another recent study reporting that the majority of neurons in EC exhibit some amount of speed tuning BID17. It remains an open experimental question how, at a population level, the different types of tuning characteristics in EC relate to each other.

We next investigate how the spatial response profiles evolve as learning/training progresses. We report two main observations. First, neurons that fire selectively along the boundary typically emerge first. Second, grid-like responses with finer spatial tuning patterns only emerge later in training. For visualization, we perform dimensionality reduction using the t-SNE algorithm BID33. This algorithm embeds the 100 model neurons during three phases of training (early, intermediate, and late) into a two-dimensional space according to the similarity of their temporal responses, where the similarity metric is taken to be the firing rate correlation.
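This embedding step can be reproduced with off-the-shelf tools. Below is a minimal sketch using scikit-learn's t-SNE on a distance matrix derived from firing-rate correlations (our illustration; the perplexity value is an assumption):

```python
import numpy as np
from sklearn.manifold import TSNE

def embed_units(rates):
    """Embed units into 2D by the similarity of their temporal responses.

    rates: (N_units, T) firing-rate traces; similarity = correlation,
    converted to a distance for t-SNE.
    """
    corr = np.corrcoef(rates)                   # (N, N) firing-rate correlations
    dist = np.clip(1.0 - corr, 0.0, None)       # turn similarity into a distance
    np.fill_diagonal(dist, 0.0)
    tsne = TSNE(metric="precomputed", init="random", perplexity=10)
    return tsne.fit_transform(dist)             # (N, 2) embedding
```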
In this 2D space, as shown in FIG3, border cell representations appear early and stably persist through the end of training; indeed, early in training all responses are similar to the border-related responses. In contrast, grid-like cells typically undergo a substantial change in firing pattern during training before settling into their final grid-like representation (FIG3).

(FIG3 caption: a) Border cells' response profiles appear early and stably persist through training, illustrated by the short distance they travel in this space. b) Neurons which eventually become grid cells initially have tuning profiles similar to the border cells but then change their tuning substantially during learning; as a natural consequence, they travel a long distance in this space between the early and late phases of training. Spatial responses are shown for four of these grid-like cells during the late phase of training.)

The developmental timeline of the grid-like cells and border cells is roughly consistent with developmental studies in rodents. Experimentally, it is known that border cells emerge earlier in development, existing at about 2 weeks after the rat is born BID3, while grid cells mature only at about 4 weeks after birth BID28 BID57 BID3. Furthermore, our simulations suggest that the reason border cells emerge earlier in development may be that it is computationally easier to wire up a network that gives rise to border cell responses.

We find appropriate regularization of the RNN to be crucial for the emergence of grid-like representations. We only observed grid-like representations when the network was encouraged to store information while perturbed by noise. This was accomplished by setting the speed input to zero, e.g. zero speed 90% of the time, and adding Gaussian noise to the network (ξ_i(t) in the RNN equation above); the precise method for setting the speed input to zero and the value of the noise variance are not crucial for our simulations to develop grid-like representations. The cost function which captures the penalization of the metabolic cost of neural activity also acts as an important regularizer: our simulations show that the grid-like representation did not emerge without this metabolic cost. In FIG4, we show typical simulation results for a square environment, with and without proper metabolic regularization. In the Appendix, we illustrate the effect of regularization further, in particular the role of injecting noise into the RNN units. Our results are consistent with the general notion of the importance of incorporating proper constraints for learning useful representations in neural networks BID2. Furthermore, they suggest that, to learn a model with response properties similar to neural systems, it may be necessary to incorporate the relevant constraints, e.g., noise and metabolic cost.

One natural question is whether the trained RNNs are able to perform localization when the path length exceeds the typical length used during training (500 steps), in particular given that noise in the network would gradually accumulate, leading to a decrease in localization performance. We test this by simulating paths that are several orders of magnitude longer. Somewhat surprisingly, we find the RNNs still perform well (Figure 6b); in fact, the squared error (averaged over every 10000 steps) is stable, and the spatial response profiles of individual units also remain stable. This implies that the RNNs have acquired intrinsic error-correction mechanisms during training.

(Figure 6 caption: Error correction happens at the boundary, and the error is stable over time. At the boundary, the direction is resampled to avoid input velocities that lead to a path extending beyond the boundary of the environment. These changing input statistics at the boundary, termed a boundary interaction, are the only cue the RNN receives about the boundary. We find that the RNN uses the boundary interactions to correct the accumulated error between the true integrated input and its prediction based on the linear readout. a) The mean squared error increases when there are no boundary interactions, but decreases after a boundary interaction, with more boundary interactions leading to greater error reduction; in the absence of further boundary interactions, the squared error gradually increases again (blue curve) at a roughly constant rate. b) The network was trained using mini-batches of 500 timesteps but has stable error over a duration at least four orders of magnitude larger. The error of the RNN output (mean and standard deviation shown in black, computed over 10000 timesteps) is compared to the error that would be achieved by an RNN outputting the best constant values (red).)

As shown earlier, during training some of the RNN units develop boundary-related firing (FIG1), presumably by exploiting the change of input statistics around the boundary. We hypothesize that boundary interactions may enable error correction through signals based on these boundary-related activities. Indeed, we find that boundary interactions can dramatically reduce the accumulated error (Figure 6a): without boundary interactions, the squared error grows roughly linearly on average, as expected, yet interactions with the boundaries substantially reduce the error, and more frequent boundary interactions reduce it further. Error correction on grid cells via boundary interactions has been proposed BID16 BID44; however, we emphasize that the model proposed here develops the grid-like responses, the boundary responses, and the error-correction mechanisms all within the same neural network, thus potentially providing a unifying account of a diverse set of phenomena.

In this paper, we trained RNNs to perform path integration (dead reckoning) in 2D arenas. We found that, after training RNNs with appropriate regularization, the model neurons exhibit a variety of spatial and velocity tuning profiles that match the neurophysiology of EC. What's more, there is also similarity in terms of when these distinct neuron types emerge during training/development. The EC has long been thought to be involved in path integration and in localization of the animal's position. The general agreement between the different response properties in our model and the neurophysiology provides strong evidence supporting the hypothesis that the neural population in EC may provide an efficient code for representing self-location based on velocity input. Recently, there has been increased interest in using complex neural network models to understand the neural code, but the focus has been on feedforward architectures, in particular CNNs BID29. Given the abundant recurrent connections in the brain, it seems a particularly fruitful avenue to take advantage of the recent developments in RNNs to help with neuroscience questions BID34 BID50 BID37 BID53.
Here, we show only one instance following this approach; however, the insight from this work could be general, and potentially useful for other cognitive functions as well. The finding that metabolic constraints lead to the emergence of grid-like responses may be seen as conceptually related to the efficient coding hypothesis in visual processing BID1, in particular the seminal work on the emergence of V1-like Gabor filters in a sparse coding model by BID42. Indeed, our work is partly inspired by these results. While there are conceptual similarities, we should also note the differences between the sparse coding work and ours. First, the sparsity constraint in sparse coding can be naturally viewed as a particular prior, while in the context of a recurrent network it is difficult to interpret it that way. Second, the grid-like responses are not the most sparse solution one could imagine; in fact, they are still quite dense compared to a more spatially localized representation. Third, the grid-like patterns that emerged in our network are not filters on the raw input; rather, the velocity inputs need to be integrated first in order to encode spatial locations. Our work is also inspired by recent work using the efficient coding idea to explain the functional architecture of grid cells BID55: it has been shown that efficient coding considerations could explain the particular set of grid scales observed in rodents BID52. However, in that work, the firing patterns of the neurons were assumed to have a lattice structure to start with. Furthermore, our work is related to the study by Sussillo and others BID53, in which they show that regularization of RNN models is important for generating solutions similar to the neural activity observed in motor cortex. In Sussillo et al., a smoothness constraint together with others leads to simple oscillatory neural dynamics that match the neural data well; we have not incorporated a smoothness constraint into our network. Additionally, we note that there are a few recent studies which use place cells as the input to generate grid cells BID8 BID51; these are fundamentally different from our work. In these feedforward network models, the grid cells essentially perform dimensionality reduction based on the spatial input from place cells. However, the main issue with these models is that it is unclear how place cells acquire spatial tuning in the first place. By contrast, our model takes the animal's velocity as input, and addresses the question of how spatial tuning can be generated from such inputs, which are known to exist in EC BID46 BID26. In another related study BID21, the authors train an RNN with LSTM units BID18 to perform different navigation tasks, but no grid-like spatial firing patterns are reported.

Although our model shows a qualitative match to the neural responses observed in EC, it nonetheless has several major limitations, each offering interesting future research directions. First, the learning rule we use seems biologically implausible; we are interested in exploring how a more biologically plausible learning rule could give rise to similar results BID32 BID37 BID14. Second, the simulation results do not show a variety of spatial scales in the grid-like cells. Experimentally, it is known that grid cells have multiple spatial scales that scale geometrically with a ratio of about 1.4 BID52, and this particular scale ratio is predicted by efficient coding of space BID55.
We are investigating how to modify the model to obtain a hierarchy of spatial scales, perhaps by incorporating more neurons or modifying the regularization. Last but not least, we have focused on the representation produced by the trained RNN; an equally important set of questions concerns how the networks actually support the generation of such a representation. As a preliminary effort, we have examined the connectivity patterns of the trained network, and they do not seem to resemble the connectivity patterns required by standard attractor network models. Maybe this should not be seen as too surprising: after all, the trained networks produce a diverse set of neural responses, while the previous models only led to grid responses. It would be interesting for future work to systematically examine the questions related to the underlying mechanisms.

To quantify the speed selectivity of each unit, we first fit a line to the tuning curve of unit activity as a function of speed; the speed selectivity is the absolute value of the slope (zero if the unit activity is not modulated by speed). To quantify the direction selectivity of each unit, we calculated the average unit activity as a function of the direction input and then took the maximum minus the minimum of this tuning curve (zero if the unit activity is not modulated by direction). To quantify the spatial selectivity, we used lifetime sparseness BID56 (zero if the unit activity is not modulated by spatial location). Each dot in the corresponding figures shows the selectivity of a single unit.
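These three selectivity indices are straightforward to implement. Below is a NumPy sketch, assuming the common Willmore & Tolhurst form of lifetime sparseness (the exact variant used in BID56 may differ):

```python
import numpy as np

def speed_selectivity(speeds, activity):
    """Absolute slope of a line fit to activity as a function of running speed."""
    slope, _ = np.polyfit(speeds, activity, 1)
    return abs(slope)

def direction_selectivity(direction_tuning):
    """Max minus min of the average activity as a function of heading direction."""
    return direction_tuning.max() - direction_tuning.min()

def spatial_selectivity(rate_map, eps=1e-12):
    """Lifetime sparseness of a spatial rate map (0 = no spatial modulation)."""
    r = rate_map.ravel()
    n = r.size
    return (1 - (r.mean() ** 2) / (np.mean(r ** 2) + eps)) / (1 - 1 / n)
```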
To our knowledge, this is the first study to show how neural representations of space, including grid-like cells and border cells as observed in the brain, could emerge from training a recurrent neural network to perform navigation tasks.
Source separation for music is the task of isolating contributions, or stems, from different instruments recorded individually and arranged together to form a song. Such components include voice, bass, drums and any other accompaniments. While end-to-end models that directly generate the waveform are state-of-the-art in many audio synthesis problems, the best multi-instrument source separation models generate masks on the magnitude spectrum and achieve performances far above current end-to-end, waveform-to-waveform models. We present an in-depth analysis of a new architecture, which we will refer to as Demucs, based on a (transposed) convolutional autoencoder, with a bidirectional LSTM at the bottleneck layer and skip connections as in U-Networks. Compared to the state-of-the-art waveform-to-waveform model, Wave-U-Net, the main features of our approach, in addition to the bi-LSTM, are the use of transposed convolution layers instead of upsampling-convolution blocks, the use of gated linear units, exponentially growing the number of channels with depth, and a new careful initialization of the weights. Results on the MusDB dataset show that our architecture achieves a signal-to-distortion ratio (SDR) nearly 2.2 points higher than the best waveform-to-waveform competitor (from 3.2 to 5.4 SDR). This makes our model match the state-of-the-art performance on this dataset, bridging the performance gap between models that operate on the spectrogram and end-to-end approaches.

Cherry first noticed the "cocktail party effect": how the human brain is able to separate a single conversation out of the surrounding noise in a room full of people chatting. Bregman later tried to understand how the brain is able to analyze a complex auditory signal and segment it into higher-level streams. His framework for auditory scene analysis spawned its computational counterpart, which tries to reproduce or model the accomplishments of the brain with algorithmic means, in particular regarding source separation capabilities. When producing music, recordings of individual instruments, called stems, are arranged together and mastered into the final song. The goal of source separation is to recover those individual stems from the mixed signal. Unlike the cocktail party problem, there is not a single source of interest to differentiate from unrelated noise, but instead a wide variety of tones and timbres playing in a coordinated way. In the SiSec Mus evaluation campaign for music separation (Stöter et al., 2018), those individual stems were grouped into 4 broad categories: drums, bass, other, vocals. Given a music track which is a mixture of these four sources, also called the mix, the goal is to generate four waveforms that correspond to each of the original sources. We consider here the case of supervised source separation, where the training data contain music tracks (i.e., mixtures) together with the ground truth waveform for each of the sources. State-of-the-art approaches to music source separation still operate on the spectrograms generated by the short-time Fourier transform (STFT): they produce a mask on the magnitude spectrum for each frame and each source, and the output audio is generated by running an inverse STFT on the masked spectrograms, reusing the input mixture phase.
Several architectures trained end-to-end to directly synthesize the waveforms have been proposed (Lluís et al., 2018), but their performances are far below the state-of-the-art: in the last SiSec Mus evaluation campaign (Stöter et al., 2018), the best model that directly predicts waveforms achieves an average signal-to-distortion ratio (SDR) over all four sources of 3.2, against 5.3 for the best approach that predicts spectrogram masks (also see Table 1 in Section 6). [Figure 1: Mel-spectrogram for a 0.8 second extract of the bass source from the track "Stich Up" of the MusDB test set. From left to right: ground truth, Conv-Tasnet estimate and Demucs estimate. We observe that Conv-Tasnet missed one note entirely.] An upper bound on the performance of all methods relying on masking spectrograms is given by the SDR obtained when using a mask computed from the ground truth source spectrograms, for instance the Ideal Ratio Mask (IRM) or the Ideal Binary Mask (IBM) oracles. For speech source separation, Conv-Tasnet was proposed: a model that reuses the masking approach of spectrogram methods but learns the masks jointly with a convolutional front-end, operating directly in the waveform domain for both the inputs and outputs. Conv-Tasnet surpasses both the IRM and IBM oracles. Our first contribution is to adapt the Conv-Tasnet architecture, originally designed for monophonic speech separation and audio sampled at 8 kHz, to the task of stereophonic music source separation for audio sampled at 44.1 kHz. Our experiments show that Conv-Tasnet outperforms all previous methods by a large margin, with an SDR of 5.7, but still under the SDR of the IRM oracle at 8.2 (Stöter et al., 2018). However, while Conv-Tasnet separates the different sources with high accuracy, we observed artifacts when listening to the generated audio: a constant broadband noise, hollow instrument attacks or even missing parts. They are especially noticeable on the drums and bass sources, and we give one such example in Figure 1. Conv-Tasnet uses an over-complete linear representation on which it applies a mask obtained from a deep convolutional network. Because both the encoder and decoder are linear, the masking operation cannot synthesize new sounds. We conjecture that the overlap of multiple instruments sometimes leads to a loss of information that is not reversible by a masking operation. To overcome the limitations of Conv-Tasnet, our second contribution is to propose Demucs, a new architecture for music source separation. Similarly to Conv-Tasnet, Demucs is a deep learning model that directly operates on the raw input waveform and generates a waveform for each source. Demucs is inspired by models for music synthesis rather than masking approaches. It is a U-net architecture with a convolutional encoder and a decoder based on wide transposed convolutions with large strides, inspired by recent work on music synthesis (Défossez et al., 2018). The other critical features of the approach are a bidirectional LSTM between the encoder and the decoder, increasing the number of channels exponentially with depth, gated linear units as activation function (which also allow for masking), and a new initialization scheme. We present experiments on the MusDB benchmark, which first show that both Conv-Tasnet and Demucs achieve performances significantly better than the best methods that operate on the spectrogram, with Conv-Tasnet being better than Demucs in terms of SDR.
We also perform human evaluations that compare Conv-Tasnet and our Demucs, which show that Demucs has significantly better perceived quality. The smaller SDR of Demucs is explained by more contamination from other sources. We also conduct an in-depth ablation study of the Demucs architecture to demonstrate the impact of the various design decisions. Finally, we carry out additional experiments by adding 150 songs to the training set. In this experiment, Demucs and Tasnet both achieve an SDR of 6.3, suggesting that the gap in terms of SDR between the two models diminishes with more data, making the Demucs approach promising. The 6.3 points of SDR also set a new state-of-the-art, since it improves on the best previous result of 6.0 on the MusDB test set, obtained by training with 800 additional songs. We discuss related work in more detail in the next section. We then describe the original Conv-Tasnet model and its adaptation to music source separation. Our Demucs architecture is detailed in Section 4. We present the experimental protocol in Section 5, and the experimental results compared to the state-of-the-art in Section 6. Finally, we describe the results of the human evaluation and the ablation study. A first category of methods for supervised music source separation works on time-frequency representations. They predict a power spectrogram for each source and reuse the phase from the input mixture to synthesise individual waveforms. Traditional methods have mostly focused on blind (unsupervised) source separation. Non-negative matrix factorization techniques model the power spectrum as a weighted sum of a learnt spectral dictionary, whose elements are grouped into individual sources. Independent component analysis (Hyvärinen et al., 2004) relies on independence assumptions and multiple microphones to separate the sources. Learning a soft/binary mask over power spectrograms has been done using either HMM-based prediction or segmentation techniques. With the development of deep learning, fully supervised methods have gained momentum. Initial work was performed on speech source separation, followed by works on music using simple fully connected networks over a few spectrogram frames, LSTMs, or multi-scale convolutional/recurrent networks. Wiener filtering was shown to be an efficient post-processing step for spectrogram-based models, and it is now used by all top performing models in this category. Those methods performed the best during the last SiSec 2018 evaluation (Stöter et al., 2018) for source separation on the MusDB dataset. After the evaluation, a reproducible baseline called Open Unmix was released by Stöter et al. and matches the top submissions trained only on MusDB. MMDenseLSTM, a model trained on 807 unreleased songs, currently holds the absolute record of SDR in the SiSec campaign. Both Demucs and Conv-Tasnet obtain significantly higher SDR. More recently, models operating in the waveform domain have been developed, so far with worse performance than those operating in the spectrogram domain. A convolutional network with a U-Net structure called Wave-U-Net was used first on spectrograms and then adapted to the waveform domain. Wave-U-Net was submitted to the SiSec 2018 evaluation campaign, with a performance inferior to that of most spectrogram domain models by a large margin. A Wavenet-inspired model, although using a regression loss and not being auto-regressive, was first used for speech denoising and then adapted to source separation (Lluís et al., 2018).
Our model significantly outperforms Wave-U-Net. Given that the Wavenet-inspired model performed worse than Wave-U-Net, we did not consider it for comparison. In the field of monophonic speech source separation, spectrogram masking methods have enjoyed good performance. A waveform domain method was then developed using masking over a learnable front-end obtained from an LSTM, reaching the same accuracy. Improvements were obtained for spectrogram methods by unfolding a few iterations of a phase reconstruction algorithm in the training loss. In the meantime, the waveform approach was refined by replacing the LSTM with a superposition of dilated convolutions, which improved the SDR and definitively surpassed spectrogram based approaches, including oracles that use the ground truth sources such as the ideal ratio mask (IRM) or the ideal binary mask (IBM). We show in this paper that Conv-Tasnet also outperforms all known methods for music source separation. However, it suffers from significantly more artifacts than the Demucs architecture we introduce in this paper, as measured by mean opinion score. We describe in this section the Conv-Tasnet architecture and give the details of how we adapted it to fit the setting of the MusDB dataset. Overall framework: Each source s is represented by a waveform $x_s \in \mathbb{R}^{C \times T}$, where C is the number of channels (1 for mono, 2 for stereo) and T the number of samples of the waveform. The mixture (i.e., music track) is the sum of all sources, $x := \sum_{s=1}^{S} x_s$. We aim at training a model g parameterized by $\theta$, such that $g(x) = (g_s(x; \theta))_{s=1}^{S}$, where $g_s(x; \theta)$ is the predicted waveform for source s given x, by minimizing $\sum_{x \in \mathcal{D}} \sum_{s=1}^{S} L(g_s(x; \theta), x_s)$ over $\theta$, for some dataset $\mathcal{D}$ and reconstruction error L (equation 1). The original Conv-Tasnet was trained using a loss called scale-invariant source-to-noise ratio (SI-SNR), similar to the SDR loss described in Section 5. We instead use a simple L1 loss between the estimated and ground truth sources. We discuss regression losses in more detail in the context of our Demucs architecture in Section 4.2. The original Conv-Tasnet architecture: Conv-Tasnet is composed of a learnt front-end that transforms back and forth between the input monophonic mixture waveform sampled at 8 kHz and a 128-channel over-complete representation sampled at 1 kHz, using a convolution as the encoder and a transposed convolution as the decoder, both with a kernel size of 16 and a stride of 8. The high dimensional representation is masked through a separation network composed of stacked residual blocks. Each block is composed of a 1x1 convolution, a PReLU non-linearity, a layer-wise normalization over all channels jointly, a depth-wise separable convolution with a kernel size of 3, a stride of 1 and a dilation of $2^{n \bmod N}$ (with n the 0-based index of the block and N a hyper-parameter), and another PReLU and normalization. The output of each block participates in the final mask estimation through a skip connection, preceded by a 1x1 convolution. The original Conv-Tasnet counted 3 × N blocks with N = 8. The mask is obtained by summing the output of all blocks and then applying a ReLU. The output of the encoder is multiplied by the mask before going through the decoder. Conv-Tasnet for music source separation: We adapted their architecture to the task of stereophonic music source separation: the original Conv-Tasnet has a receptive field of 1.5 seconds for audio sampled at 8 kHz; we take N = 10 and increased the kernel size of the encoder/decoder from 16 to 20 and its stride from 8 to 10, leading to the same receptive field at 44.1 kHz.
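To make the block structure above concrete, here is a minimal PyTorch sketch of one separator residual block and of how the per-block skip outputs are summed into the mask. The hidden width, the use of GroupNorm(1, ·) for the layer-wise normalization over all channels jointly, and the padding scheme are assumptions; the official Conv-Tasnet implementation may differ in such details.

```python
import torch
from torch import nn

class SeparatorBlock(nn.Module):
    """One residual block of the separator: 1x1 conv, PReLU, joint channel
    normalization, a dilated depth-wise separable conv, and two 1x1 output
    convs (one residual, one skip that feeds the mask estimate)."""
    def __init__(self, channels=256, hidden=512, dilation=1):
        super().__init__()
        self.inp = nn.Sequential(
            nn.Conv1d(channels, hidden, 1),   # 1x1 convolution
            nn.PReLU(),
            nn.GroupNorm(1, hidden),          # normalize all channels jointly
        )
        self.depthwise = nn.Sequential(       # depth-wise separable convolution
            nn.Conv1d(hidden, hidden, 3, stride=1, dilation=dilation,
                      padding=dilation, groups=hidden),
            nn.PReLU(),
            nn.GroupNorm(1, hidden),
        )
        self.residual = nn.Conv1d(hidden, channels, 1)
        self.skip = nn.Conv1d(hidden, channels, 1)

    def forward(self, x):
        y = self.depthwise(self.inp(x))
        return x + self.residual(y), self.skip(y)

# 4 x N blocks with N = 10; the dilation cycles as 2^(n mod N)
blocks = nn.ModuleList(SeparatorBlock(dilation=2 ** (n % 10)) for n in range(40))

def estimate_mask(x):
    skips = 0
    for block in blocks:
        x, skip = block(x)
        skips = skips + skip      # every block contributes via its skip path
    return torch.relu(skips)      # mask applied to the encoder output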
We observed better results using 4 × N blocks instead of 3 × N, and 256 channels for the encoder/decoder instead of 128. With those changes, Conv-Tasnet obtained state-of-the-art performance on the MusDB dataset, surpassing all known spectrogram based methods by a large margin, as shown in Section 6. Separating entire songs: The original Conv-Tasnet model was designed for short sentences of a few seconds at most. When evaluating it on an entire track, we obtained the best performance by first splitting the input track into chunks of 8 seconds each. We believe this is because of the global layer normalization. During training, only small audio extracts are given, so that a quiet part or a loud part would be scaled back to an average volume. However, an entire song will most likely contain both quiet and loud parts. The normalization will not map both to the same volume, leading to a difference between training and evaluation. We did not observe any side effects when going from one chunk to the next, so we did not look into fancier overlap-add methods. The architecture we propose, which we call Demucs, is described in the next few subsections, and the reconstruction loss is discussed in Section 4.2. Demucs takes a stereo mixture as input and outputs a stereo estimate for each source (C = 2). It is an encoder/decoder architecture composed of a convolutional encoder, a bidirectional LSTM, and a convolutional decoder, with the encoder and decoder linked with skip U-Net connections. Similarly to other work on generation in both the image and sound domains (Défossez et al., 2018), we do not use batch normalization, as our early experiments showed that it was detrimental to model performance. The overall architecture is depicted in Figure 2a. Encoder: The encoder is composed of L := 6 stacked convolutional blocks numbered from 1 to L. Each block i is composed of a convolution with kernel size K := 8, stride S := 4, $C_{i-1}$ input channels, $C_i$ output channels and a ReLU activation, followed by a convolution with kernel size 1, $2C_i$ output channels and a gated linear unit (GLU) as activation function. Since GLUs halve the number of channels, the final output of block i has $C_i$ output channels. A block is described in Figure 2b. Convolutions with kernel width 1 increase the depth and expressivity of the model at low computational cost. As we show in our ablation study (Section 6.2), the usage of GLU activations after these convolutions significantly boosts performance. The number of channels in the input mixture is $C_0 = C = 2$, while we use $C_1 := 100$ as the number of output channels for the first encoder block. The number of channels is then doubled at each subsequent block, i.e., $C_i := 2C_{i-1}$ for i = 2..L, so the final number of channels is $C_L = 3200$. We then use a bidirectional LSTM with 2 layers and a hidden size of $C_L$. The LSTM outputs $2C_L$ channels per time position; we use a linear layer to take that number down to $C_L$. Decoder: The decoder is mostly the inverse of the encoder. It is composed of L blocks numbered in reverse order from L to 1. The i-th block starts with a convolution with stride 1 and kernel width 3 to provide context about adjacent time steps, with input/output channels $C_i$ and a ReLU activation. Finally, we use a transposed convolution with kernel width 8 and stride 4, $C_{i-1}$ outputs and a ReLU activation. The S sources are synthesized at the final layer only, after all decoder blocks.
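A minimal sketch of the encoder just described, assuming PyTorch's built-in GLU for the gating; following the text above, no normalization layer is used.

```python
from torch import nn

def encoder_block(c_in, c_out, kernel=8, stride=4):
    """One Demucs encoder block (sketch): strided conv + ReLU, then a
    kernel-1 conv to 2*c_out channels followed by a GLU, which gates
    and halves the channels back to c_out."""
    return nn.Sequential(
        nn.Conv1d(c_in, c_out, kernel, stride),
        nn.ReLU(),
        nn.Conv1d(c_out, 2 * c_out, 1),
        nn.GLU(dim=1),   # gate along the channel dimension
    )

# C_0 = 2 (stereo input), C_1 = 100, channels doubling up to C_L = 3200
channels = [2, 100, 200, 400, 800, 1600, 3200]
encoder = nn.Sequential(*(encoder_block(channels[i], channels[i + 1])
                          for i in range(6)))
```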
The final layer is linear with $S \cdot C_0$ output channels, $C_0$ channels for each source (4 stereo waveforms in our case), without any additional activation function. Each of these directly generates the corresponding waveform. U-network structure: Similarly to Wave-U-Net, there are skip connections between the encoder and decoder blocks with the same index, as originally proposed in U-networks. While the main motivation comes from empirical performance, an advantage of the skip connections is to give direct access to the original signal, and in particular to allow the phase of the input signal to be transferred directly to the output, as discussed in Section 4.2. Unlike Wave-U-Net, we use transposed convolutions rather than linear interpolation followed by a convolution with a stride of 1. For the same increase in the receptive field, transposed convolutions require 4 times fewer operations and less memory. This limits the overall number of channels that can be used before running out of memory. As we observed that a large number of channels was key to obtaining good results, we favored the use of transposed convolutions, as explained in Section 6. Motivation, synthesis vs masking: The approach we follow uses the U-Network architecture and builds on transposed convolutions with a large number of channels and large strides, inspired by the approach of Défossez et al. to the synthesis of music notes. The U-Net skip connections and the gating performed by GLUs imply that this architecture is expressive enough to represent masks on a learnt representation of the input signal, in a similar fashion to Conv-Tasnet. The Demucs approach is thus more expressive than Conv-Tasnet, and its main advantages are the multi-scale representations of the input and the non-linear transformations to and from the waveform domain. For the reconstruction loss $L(g_s(x; \theta), x_s)$ in equation 1, we use either the average mean square error or the average absolute error between waveforms: for a waveform $x_s$ containing T samples and corresponding to source s, a predicted waveform $\hat{x}_s$, and denoting with a subscript t the t-th sample of a waveform, we use one of $L_1(\hat{x}_s, x_s) = \frac{1}{T}\sum_{t=1}^{T} |\hat{x}_{s,t} - x_{s,t}|$ or $L_2(\hat{x}_s, x_s) = \frac{1}{T}\sum_{t=1}^{T} (\hat{x}_{s,t} - x_{s,t})^2$. In generative models for audio, direct reconstruction losses on waveforms can pose difficulties because they are sensitive to the initial phases of the signals: two signals whose only difference is a shift in the initial phase are perceptually the same, but can have arbitrarily high L1 or L2 losses. This can be a problem in pure generation tasks because the initial phase of the signal is unknown, and losses on power/magnitude spectrograms are alternatives that do not suffer from this lack of specification of the output. Approaches that follow this line either generate spectrograms directly or use a loss that compares the power spectrograms of the target and generated waveforms (Défossez et al., 2018). The problem of invariance to a shift of phase is not as severe in source separation as it is in unconditional generation, because the model has access to the original phase of the signal. The phase can easily be recovered from the skip connections in U-net-style architectures for separation, and is directly used as input of the inverse STFT for methods that generate masks on power spectrograms. As such, losses such as L1/L2 are perfectly valid for source separation. Early experiments with an additional term based on the loss of Défossez et al. did not suggest that it boosts performance, so we did not pursue this direction any further.
Most of our experiments use the L1 loss, and the ablation study presented in Section 6.2 suggests that there is no significant difference between L1 and L2. The initialization of deep neural networks is known to have a critical impact on overall performance, up to the point that it has been shown that, with a different initialization called Fixup, very deep residual networks and transformers can be trained without batch normalization. While Fixup is not designed for U-Net-style skip connections, we observed that the following initialisation scheme had a large positive impact on performance compared to the standard initialization of He et al. (2015a) used in U-Networks. Considering the so-called Kaiming initialization as a baseline, let us look at a single convolution layer for which we denote by w the weights after the first initialization. We take $\alpha := \mathrm{std}(w)/a$, where a is a reference scale, and replace w by $w' = w/\sqrt{\alpha}$. Since the original weights have element-wise order of magnitude $(KC_{in})^{-1/2}$, where K is the kernel width and $C_{in}$ the number of input channels, this means that our initialization scheme produces weights of order of magnitude $(KC_{in})^{-1/4}$, together with a non-trivial scale. Based on a search over the values [0.01, 0.05, 0.1], we select a = 0.1 for all the regular and transposed convolutions; see Section 6 for more details. On a randomly initialized model applied to an audio extract, we experimentally observed that this scheme kept the standard deviation of the features along the layers of the same order of magnitude. Without the initial rescaling, the output of the last layer has a magnitude 20 times smaller than that of the first. A perfect source separation model is time equivariant, i.e., shifting the input mixture by X samples shifts the output by the exact same amount. Thanks to its dilated convolutions with a stride of 1, the mask predictor of Conv-Tasnet is naturally time equivariant, and even though the encoder/decoder is not strictly equivariant, Conv-Tasnet still verifies this property experimentally. Spectrogram based methods also approximately verify this property: shifting the input by a small amount will only reflect in the phase of the spectrogram, and as the mask is computed only from the magnitude and the input mixture phase is reused, the output will naturally be shifted by the same amount. On the other hand, we noticed that our architecture did not naturally satisfy this property. We propose a simple workaround called randomized equivariant stabilization, where we sample S random shifts of an input mixture x and average the predictions of our model for each, after having applied the opposite shift. This technique does not require changing the training procedure or network architecture. Using S = 10, we obtained a 0.3 SDR gain; see Section 6.2 for more details. It does make evaluation of the model S times slower; however, on a V100 GPU, separating 1 minute of audio at 44.1 kHz with Demucs takes only 0.8 seconds. With this technique, separation of 1 minute takes 8 seconds, which is still more than 7 times faster than real time. MusDB and additional data: We use the MusDB dataset, which is composed of 150 songs with full supervision in stereo, sampled at 44100 Hz. For each song, we have the exact waveform of the drums, bass, other and vocals parts, i.e., each of the sources. The actual song, the mixture, is the sum of those four parts.
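Both the weight-rescaling scheme and the randomized equivariant stabilization reduce to a few lines of code. Below is a hedged sketch of each; the maximum shift of 0.1 seconds and the treatment of convolution biases are assumptions not stated in the text.

```python
import torch
import torch.nn.functional as F

def rescale_conv_(conv, reference=0.1):
    """Post-initialization rescaling (sketch): with alpha = std(w) / a,
    replace w by w / sqrt(alpha); a = 0.1 as selected in the text."""
    std = conv.weight.detach().std().item()
    scale = (std / reference) ** 0.5
    conv.weight.data /= scale
    if conv.bias is not None:          # bias handling is an assumption
        conv.bias.data /= scale

def shift_stabilized(model, mix, shifts=10, max_shift=4410):
    """Randomized equivariant stabilization (sketch): average the model's
    predictions over random time shifts, undoing each shift afterwards."""
    length = mix.shape[-1]
    out = 0
    for _ in range(shifts):
        s = int(torch.randint(0, max_shift, (1,)))
        shifted = F.pad(mix, (s, max_shift - s))        # shift right by s
        out = out + model(shifted)[..., s:s + length]   # shift back, sum
    return out / shifts
```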
The first 84 songs form the train set, the next 16 songs form the valid set (the exact split is defined in the musdb python package), while the remaining 50 are kept for the test set. We collected raw stems for 150 tracks, i.e., individual instrument recordings used in music production software to make a song. We manually assigned each instrument to one of the sources in MusDB. We call this extra supervised data the stem set. We also report the performances of Tasnet and Demucs trained using these 150 songs in addition to the 84 from MusDB, to analyze the effect of adding more training data. Source separation metrics: Metrics for the performance of source separation models were developed by Vincent et al. for blind source separation and reused for supervised source separation in the SiSec Mus evaluation campaign (Stöter et al., 2018). Similarly to previous work, we focus on the SDR (Signal to Distortion Ratio), which measures the log ratio between the volume of the projection of the estimated source onto the ground truth, and the volume of what is left out of this projection, typically contamination by other sources or artifacts. Other metrics can be defined (SIR and SAR), and we present them in the supplementary material. We used the python package museval, which provides a reference implementation for the SiSec Mus 2018 evaluation campaign. As done in the SiSec Mus competition, we report the median over all tracks of the median of the metric over each track, computed using the museval package. As baselines, we selected Open Unmix (Stöter et al., 2019), a 3-layer BiLSTM model with encoding and decoding fully connected layers on spectrogram frames. It was released by the organizers of SiSec 2018 to act as a strong reproducible baseline, and it matches the performances of the best candidates trained only on MusDB. We also selected MMDenseLSTM, a multi-band dense net with LSTMs at different scales of the encoder and decoder. This model was submitted as TAK2 and trained with 804 extra labeled songs. Both MMDenseLSTM and Open Unmix use Wiener filtering as a last post-processing step. The only waveform based method submitted to the evaluation campaign is Wave-U-Net, with the identifier STL2. Metrics for Wave-U-Net and MMDenseLSTM were downloaded from the SiSec submission repository; for Open Unmix they were provided by their authors. We also provide the metrics for the Ideal Ratio Mask oracle (IRM), which computes the best possible mask using the ground truth sources and is the topline of spectrogram based methods (Stöter et al., 2018). Table 1: Comparison of Conv-Tasnet and Demucs to state-of-the-art models that operate on the waveform (Wave-U-Net) and on spectrograms (Open-Unmix without extra data, MMDenseLSTM with extra data), as well as the IRM oracle, on the MusDB test set. The Extra? column indicates the number of extra training songs used. We report the median over all tracks of the median SDR over each track, as done in the SiSec Mus evaluation campaign (Stöter et al., 2018). The All column reports the average over all sources. Demucs metrics are averaged over 5 runs; the confidence interval is the standard deviation divided by √5. In bold are the values that are statistically state-of-the-art, either with or without extra training data. Epoch definition and augmentation: We define one epoch over the dataset as a pass over all 11-second extracts with a stride of 1 second. We use a random audio shift between 0 and 1 second and keep 10 seconds of audio from there as a training example.
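For intuition about the metric, here is a simplified, scalar-projection version of the SDR. The actual numbers in Table 1 come from the museval package, whose BSSEval v4 implementation generalizes this projection to a short time-invariant filter on the reference.

```python
import numpy as np

def projection_sdr(estimate, reference, eps=1e-8):
    """Simplified SDR (sketch): ratio between the energy of the estimate's
    projection onto the ground truth and the energy of what is left out."""
    e, r = estimate.ravel(), reference.ravel()
    alpha = np.dot(e, r) / (np.dot(r, r) + eps)
    target = alpha * r      # part of the estimate explained by the reference
    noise = e - target      # contamination by other sources and artifacts
    return 10 * np.log10(((target ** 2).sum() + eps) / ((noise ** 2).sum() + eps))
```

The reported figure is then the median over all tracks of the per-track median of this quantity.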
We perform the following data augmentation, also used by Open Unmix and MMDenseLSTM: shuffling sources within one batch to generate new mixes and randomly swapping channels; we additionally multiply each source by ±1. All Demucs models were trained over 240 epochs. Conv-Tasnet was trained for 360 epochs when trained only on MusDB, and for 240 when trained with extra data, using only 2-second audio extracts. Training setup and hyper-parameters: All models are trained with 16 V100 GPUs with 32GB of RAM. We use a batch size of 64 and the Adam optimizer; the learning rate was chosen among [3e-4, 5e-4] and the initial number of channels was chosen based on the L1 loss on the validation set. We obtained the best performance with a learning rate of 3e-4 and 100 channels. We then tried 3 different values for the initial weight rescaling reference level described in Section 4.3, [0.01, 0.05, 0.1], and selected 0.1. We computed confidence intervals using 5 random seeds in Table 1. For the ablation study in Table 4, we provide metrics for a single run. In this section, we first provide experimental results on the MusDB dataset for Conv-Tasnet and Demucs compared with state-of-the-art baselines. We then dive into the ablation study of Demucs. We provide a comparison with the state-of-the-art baselines in Table 1. The models in the top half were trained without any extra data, while those in the lower half used unreleased training songs. As no previous work included confidence intervals, we considered the single metric provided for each baseline as the exact estimate of its mean performance. Quality of the separation: We first observe that Demucs and Conv-Tasnet outperform all previous methods for music source separation. Conv-Tasnet has a significantly higher SDR of 5.73, improving by 0.4 over Open-Unmix. Our proposed Demucs architecture has worse overall performance; still, all methods remain far below the IRM oracle, leaving room for future improvements. We provide results for the other metrics (SIR and SAR), as well as box plots with quantiles over the test set tracks, in Appendix B. Audio samples for Demucs, Conv-Tasnet and all baselines are provided with the ICLR code link, with more details given in Appendix A. Human evaluations: We noticed strong artifacts on the audio separated by Conv-Tasnet, especially for the drums and bass sources: static noise between 1 and 2 kHz, hollow instrument attacks or missing notes, as illustrated in Figure 1. In order to confirm this observation, we organized a mean opinion score survey. We separated 8-second extracts from all of the 50 test set tracks for Conv-Tasnet, Demucs and Open-Unmix. We asked 38 participants to rate 20 samples each, randomly taken from one of the 3 models or the ground truth. For each one, they were required to provide 2 ratings on a scale of 1 to 5. The first one evaluated the quality and absence of artifacts (1: many artifacts and distortion, content is hardly recognizable; 5: perfect quality, no artifacts), and the second one evaluated contamination by other sources (1: contamination is frequent and loud; 5: no contamination). We show the results in Tables 2 and 3. We confirmed that the presence of artifacts in the output of Conv-Tasnet degrades the user experience, with a rating of 2.85 ± .08 against 3.22 ± .09 for Demucs. On the other hand, Conv-Tasnet samples had less contamination by other sources than Open-Unmix or Demucs, although by a small margin, with a rating of 3.42 ± .09 against 3.30 ± .10 for Demucs and 3.27 ± .11 for Open-Unmix.
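A sketch of the batch-level augmentation pipeline described above, assuming a tensor of shape (batch, n_sources, channels, time); the 0.5 probabilities for channel swapping and sign flipping are assumptions.

```python
import torch

def augment_batch(sources):
    """Remix augmentation (sketch). The mixture is recomputed after
    augmentation as the sum of the (now remixed) sources."""
    b, s, c, t = sources.shape
    # shuffle each source independently across the batch -> new mixes
    for i in range(s):
        sources[:, i] = sources[torch.randperm(b), i]
    # randomly swap left/right channels, per item and per source
    swap = torch.rand(b, s, 1, 1) < 0.5
    sources = torch.where(swap, sources.flip(dims=[2]), sources)
    # multiply each source by +1 or -1
    sign = torch.rand(b, s, 1, 1).round() * 2 - 1
    sources = sources * sign
    return sources.sum(dim=1), sources   # (mix, sources)
```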
Training speed: We measured the time taken to process a single batch of size 16 with 10 seconds of audio at 44.1 kHz (the original Wave-U-Net being trained only on 22 kHz audio, we double its time for fairness), ignoring data loading and using torch.cuda.synchronize to wait on all kernels to complete. MMDenseLSTM does not provide a reference implementation. Wave-U-Net takes 1.2 seconds per batch, Open Unmix 0.2 seconds per batch and Demucs 1.6 seconds per batch. Conv-Tasnet cannot be trained with such a large sample size; however, a single iteration over 2 seconds of audio with a batch size of 4 takes 0.7 seconds. We provide an ablation study of the main design decisions for Demucs in Table 4. Given the cost of training a single model, we did not compute confidence intervals for each variation. Yet, any difference inferior to .06, which is the standard deviation observed over 5 repetitions of the Reference model, could be attributed to noise. We observe a small but not significant improvement when using the L1 loss instead of the MSE loss. Adding a BiLSTM and using the initial weight rescaling described in Section 4.3 provide significant gains, with an extra 0.48 SDR for the first and 0.64 for the second. We observe that using randomized equivariant stabilization as described in Section 4 gives a gain of almost 0.3 SDR. We did not report the validation loss, as we only use the stabilization when computing the SDR over the test set. We applied the randomized stabilization to Open-Unmix and Conv-Tasnet with no gain since, as explained in Section 4.4, both are naturally equivariant with respect to initial time shifts. We introduced extra convolutions in the encoder and decoder, as described in Section 4.1. The two proved useful, improving the expressivity of the model, especially when combined with the GLU activation. Using a kernel size of 3 instead of 1 in the decoder further improves performance. We conjecture that the context from adjacent time steps helps the output of the transposed convolutions to be consistent through time and reduces potential artifacts arising from using a stride of 4. We showed that Conv-Tasnet, a state-of-the-art architecture for speech source separation that predicts masks on a learnt front-end over the waveform domain, achieves state-of-the-art performance for music source separation, improving over all previous spectrogram or waveform domain methods by 0.4 SDR. While Conv-Tasnet has excellent performance at separating sources, it suffers from noticeable artifacts, as confirmed by human evaluations. We developed an alternative approach, Demucs, that combines the ability to mask over a learnt representation with a stronger decoder capacity that allows for audio synthesis. We conjecture that this can be useful when information is lost in the mix of instruments and cannot simply be recovered by masking. We show that our approach produces audio of significantly higher quality, as measured by mean opinion scores, and matches the SDR of Conv-Tasnet when trained with 150 extra tracks. We believe those results make it a promising alternative to methods based on masking only.
We match the performance of spectrogram based models with a model trained end-to-end in the waveform domain
402
scitldr
Although challenging, strategy profile evaluation in large connected learner networks is crucial for enabling the next wave of machine learning applications. Recently, $\alpha$-Rank, an evolutionary algorithm, has been proposed as a solution for ranking joint policy profiles in multi-agent systems. $\alpha$-Rank claimed scalability through a polynomial time implementation with respect to the total number of pure strategy profiles. In this paper, we formally prove that such a claim is not grounded. In fact, we show that $\alpha$-Rank exhibits an exponential complexity in the number of agents, hindering its application beyond a small finite number of joint profiles. Realizing such a limitation, we contribute by proposing a scalable evaluation protocol that we title $\alpha^{\alpha}$-Rank. Our method combines evolutionary dynamics with stochastic optimization and double oracles for \emph{truly} scalable ranking with linear (in the number of agents) time and memory complexities. Our contributions allow us, for the first time, to conduct large-scale evaluation experiments of multi-agent systems, where we show successful results on large joint strategy profiles with sizes in the order of $\mathcal{O}(2^{25})$ (i.e., $\approx \text{$33$ million strategies}$) -- a setting not evaluable using current techniques. Scalable policy evaluation and learning have been long-standing challenges in multi-agent reinforcement learning (MARL), with two difficulties obstructing progress. First, joint-strategy spaces exponentially explode when a large number of strategic decision-makers is considered, and second, the underlying game dynamics may exhibit cyclic behavior (e.g., the game of Rock-Paper-Scissors), rendering an appropriate evaluation criterion non-trivial. Focusing on the second challenge, much work in multi-agent systems has followed a game-theoretic treatment, proposing fixed points, e.g., the Nash equilibrium, as potentially valid evaluation metrics. Though appealing, such measures are normative only when prescribing behaviors of perfectly rational agents -- an assumption rarely met in reality. In fact, many game dynamics have been proven not to converge to any fixed-point equilibria, but rather to limit cycles. Apart from these inconsistencies, solving for a Nash equilibrium even in "simple" settings, e.g., two-player games, is known to be PPAD-complete -- a demanding complexity class when it comes to computational requirements. To address some of the above limitations, $\alpha$-Rank was recently proposed as a graph-based game-theoretic solution to multi-agent evaluation. $\alpha$-Rank adopts Markov Conley Chains to highlight the presence of cycles in game dynamics and attempts to compute stationary distributions as a means for strategy profile ranking. Though successful in small-scale applications, $\alpha$-Rank severely suffers in scalability, contrary to the polynomial time claims made in the original work. In fact, we show that $\alpha$-Rank exhibits exponential time and memory complexities, shedding light on the small-scale empirical study conducted in the original paper, whereby the largest reported game included only four agents with four available strategies each. In this work, we put forward $\alpha^{\alpha}$-Rank as a scalable alternative for multi-agent evaluation with linear time and memory demands. Our method combines numerical optimization with evolutionary game theory for a scalable solver capable of handling large joint spaces with millions of strategy profiles.
To handle even larger profiles, e.g., tens to hundreds of millions, we further introduce an oracle mechanism transforming joint evaluation into a sequence of incremental sub-games with varying sizes. [Figure 1: Example of population-based evaluation on N = 3 learners, each with 3 strategies and 5 copies. a) Each population obtains a fitness value $P^i$ depending on the strategies chosen, b) a mutation strategy appears (red star), and c) each population either keeps its original strategy or adopts the novel strategy.] Given our algorithmic advancements, we justify our claims in a large-scale empirical study involving systems with $\mathcal{O}(2^{25})$ possible strategy profiles. We first demonstrate the computational advantages of $\alpha^{\alpha}$-Rank on stochastic matrices of varying sizes against other implementations in Numpy, PyTorch, and OpenSpiel. With these successes, we then consider experiments unsolvable by current techniques. Precisely, we evaluate multi-agent systems in self-driving and Ising model scenarios, each exhibiting a prohibitively-large strategy space (i.e., in the order of thousands for the former, and tens of millions for the latter). Here, we again show that $\alpha^{\alpha}$-Rank is capable of recovering correct strategy rankings in such complex domains. In $\alpha$-Rank, strategy profiles of N agents are evaluated through an evolutionary process of mutation and selection. Initially, agent populations are constructed by creating multiple copies of each learner $i \in \{1, \dots, N\}$, assuming that all agents (in one population) execute the same unified policy. With this, $\alpha$-Rank then simulates a multi-agent game played by randomly sampled learners from each population. Upon game termination, each participating agent receives a payoff to be used in policy mutation and selection after its return to the population. Here, the agent is faced with a probabilistic choice between switching to the mutation policy, continuing to follow its current policy, or randomly selecting a novel policy (other than the previous two) from the pool. This process repeats with the goal of determining an evolutionarily strong profile that spreads across the population of agents. Each of the above three phases is demonstrated in Fig. 1 on a simple example of three agents (depicted by different symbols), each equipped with three strategies (depicted by the colors). Mathematical Formalisation, Notation, and Definitions: We next formalize the process posed by $\alpha$-Rank, which will lead to its limitations and also pave the way for our own proposed solution. We consider N agents, with each agent i having access to a set of strategies of size $s_i$. At round k of the evaluation process, we denote the strategy pool of agent i by $S_i^{[k]} = \{\pi_i^{[k],1}, \dots, \pi_i^{[k],s_i}\}$, where $\pi_i^{[k],j_i}$ is the $j_i$-th allowed policy of the learner; $\mathcal{X}$ represents the set of states and $\mathcal{A}_i$ is the set of actions for agent i. With this, we define a joint strategy profile for all participating agents as a tuple of policies $\pi^{[k]}_{\text{joint}} = (\pi_1^{[k],j_1}, \dots, \pi_N^{[k],j_N})$ belonging to the joint strategy pool $S^{[k]}_{\text{joint}} = S_1^{[k]} \times \dots \times S_N^{[k]}$, with $j_i \in \{1, \dots, s_i\}$. To evaluate performance, we assume each agent is additionally equipped with a payoff (reward) function $P^i$ whose domain is the pool of joint strategies, so as to accommodate the effect of the other learners on the i-th player's performance, further complicating the evaluation process.
Finally, given a joint profile $\pi^{[k]}_{\text{joint}}$, we define the corresponding joint payoff to be the collection of all individual payoff functions, i.e., $\bm{P}(\pi^{[k]}_{\text{joint}}) = (P^1(\pi^{[k]}_{\text{joint}}), \dots, P^N(\pi^{[k]}_{\text{joint}}))$. After attaining rewards from the environment, each agent returns to its population and faces a choice between switching to a mutation policy, exploring a novel policy, or sticking to the current one. Such a choice is probabilistic and defined proportionally to attained rewards: agent i adopts the mutation policy with a probability that grows exponentially with its payoff advantage, with $\mu \in \mathbb{R}_+$ denoting an exploration parameter, $\pi^{[k]}_{-i}$ representing the policies followed by the other agents at round k, and $\alpha \in \mathbb{R}_+$ an intensity ranking parameter. As noted in the original work, one can relate the above switching process to a random walk on a Markov chain with states defined as elements of $S^{[k]}_{\text{joint}}$ and transition probabilities defined through payoff functions. In particular, each entry of the transition probability matrix $\bm{T}^{[k]} \in \mathbb{R}^{n \times n}$, with $n = \prod_{i=1}^N s_i$, refers to the probability of one agent switching from one policy to another in relation to attained payoffs. Precisely, consider any two joint strategy profiles $\pi^{[k]}_{\text{joint}}$ and $\tilde{\pi}^{[k]}_{\text{joint}}$ that differ in only one individual strategy, i.e., there exists a unique agent i such that $\pi^{[k]}_i \neq \tilde{\pi}^{[k]}_i$ while $\pi^{[k]}_{-i} = \tilde{\pi}^{[k]}_{-i}$. Defining $\rho$ as the probability that one copy of agent i with strategy $\tilde{\pi}^{[k]}_i$ invades the population with all other agents (in that population) playing $\pi^{[k]}_i$, such a probability is formalized as $\rho = \big(1 - e^{-\alpha(P^i(\tilde{\pi}^{[k]}_{\text{joint}}) - P^i(\pi^{[k]}_{\text{joint}}))}\big)\big/\big(1 - e^{-m\alpha(P^i(\tilde{\pi}^{[k]}_{\text{joint}}) - P^i(\pi^{[k]}_{\text{joint}}))}\big)$ when the two payoffs differ, and $1/m$ otherwise, with m being the size of the population. So far, we presented relevant derivations for the $(\pi^{[k]}_{\text{joint}}, \tilde{\pi}^{[k]}_{\text{joint}})$ entry of the state transition matrix when exactly one agent differs in exactly one strategy. Having one policy change, however, only represents a subset of the allowed variations, and two more cases need to be considered. For variations in joint policies involving more than one individual strategy change, the transition probability is zero, i.e., $\bm{T}^{[k]}(\pi^{[k]}_{\text{joint}}, \tilde{\pi}^{[k]}_{\text{joint}}) = 0$. Consequently, the remaining event of self-transition takes the complementary probability $\bm{T}^{[k]}(\pi^{[k]}_{\text{joint}}, \pi^{[k]}_{\text{joint}}) = 1 - \sum_{\tilde{\pi} \neq \pi} \bm{T}^{[k]}(\pi^{[k]}_{\text{joint}}, \tilde{\pi}^{[k]}_{\text{joint}})$. Summarising the above three cases then yields every entry of the Markov chain's transition matrix. The goal in $\alpha$-Rank is to establish an ordering of policy profiles dependent on the evolutionary stability of each joint strategy. In other words, higher ranked strategies are those that are prevalent in populations with higher average times. Formally, such a notion can be derived as the limiting vector of our Markov chain when evolving from an initial distribution $\bm{v}_0$. Knowing that the limiting vector is a stationary distribution, one can calculate strategy rankings as the solution to the following eigenvector problem (Eqn. 3): $\bm{T}^{[k],\mathsf{T}} \bm{v}^{[k]} = \bm{v}^{[k]}$, with $\bm{v}^{[k]} \geq 0$ and $\bm{v}^{[k],\mathsf{T}}\bm{1} = 1$. Limitations of $\alpha$-Rank: Though the original work seeks to determine a solution to the above problem, it is worth mentioning that $\alpha$-Rank suffers from one major drawback, scalability, that we remedy in this paper. We note that the solution methodology in $\alpha$-Rank is in fact unscalable to settings involving more than a handful of agents. Particularly, the authors claim polynomial complexity for their solution to the problem in Eqn. 3. Though polynomial, such a complexity is, however, polynomial in an exponential search space, i.e., the space of joint strategy profiles. As such, the polynomial complexity claim is not grounded and needs to be investigated. In short, $\alpha$-Rank exhibits an exponential (in terms of the number of agents) complexity for determining a ranking, thus rendering it inapplicable to settings involving more than a small number of agents.
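To make the three-case construction concrete, here is a sketch that assembles the transition matrix of the $\alpha$-Rank Markov chain for a small game. The uniform neighbor weighting $\eta$ and the exact form of the fixation probability follow the standard $\alpha$-Rank construction and should be treated as modeling assumptions in this sketch.

```python
import itertools
import numpy as np

def alpha_rank_transitions(payoffs, strategies, alpha=1.0, m=50):
    """Build the transition matrix T (sketch). `payoffs[i]` is a numpy
    array such that payoffs[i][profile] is agent i's payoff under a
    joint profile (a tuple of strategy indices)."""
    profiles = list(itertools.product(*[range(s) for s in strategies]))
    index = {p: j for j, p in enumerate(profiles)}
    n = len(profiles)
    T = np.zeros((n, n))
    eta = 1.0 / sum(s - 1 for s in strategies)   # uniform weight per neighbor
    for p in profiles:
        for i, s in enumerate(strategies):
            for new in range(s):
                if new == p[i]:
                    continue
                q = p[:i] + (new,) + p[i + 1:]   # differs only in agent i
                df = payoffs[i][q] - payoffs[i][p]   # mutant payoff gain
                if abs(df) < 1e-12:
                    rho = 1.0 / m                # neutral mutant: 1/m
                else:
                    rho = (1 - np.exp(-alpha * df)) / (1 - np.exp(-m * alpha * df))
                T[index[p], index[q]] = eta * rho
        T[index[p], index[p]] = 1.0 - T[index[p]].sum()   # self-transition
    return T, profiles
```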
In what comes next, we first discuss traditional approaches that could help solve Eqn. 3; soon we realize that an off-the-shelf solution is unavailable. Hence, we commence to propose an efficient evaluation algorithm, i.e., $\alpha^{\alpha}$-Rank, based on stochastic optimization, with suitable complexities and rigorous theoretical guarantees. At the end, we propose a search heuristic to further scale up our method by introducing oracles, and we name it $\alpha^{\alpha}$-Oracle. The problem of computing stationary distributions is a long-standing classical problem in linear algebra. Various techniques, including the power method, PageRank, eigenvalue decomposition, and mirror descent, can be utilized for solving the problem in Eqn. 3. As we demonstrate next, any such implementation scales exponentially in the number of learners, as we summarize in Table 1. Power Method: One of the most common approaches to computing the solution to Eqn. 3 is the power method. The power method computes the stationary vector $\bm{v}^{[k]}$ by constructing a sequence $\{\bm{v}_j\}_{j \geq 0}$ from a non-zero initial vector $\bm{v}_0$ by applying $\bm{v}_{j+1} = \bm{T}^{[k],\mathsf{T}}\bm{v}_j / \|\bm{T}^{[k],\mathsf{T}}\bm{v}_j\|$. Though viable, we first note that the power method exhibits an exponential memory complexity in terms of the number of agents. To formally derive the bound, define n to represent the total number of joint strategy profiles, i.e., $n = |S^{[k]}_{\text{joint}}| = \prod_{i=1}^N s_i$, which is exponential in N. Analyzing its time complexity, on the other hand, requires a careful consideration linking convergence rates with the graph topology induced by the Markov chain. Precisely, the convergence rate of the power method is dictated by the second-smallest eigenvalue of the normalized Laplacian, $\mathcal{L}_G$, of the graph G associated with the Markov chain of Section 2, i.e., the chain with state space $S^{[k]}_{\text{joint}}$ and transition probability matrix $\bm{T}^{[k]}$. The second-smallest eigenvalue of the normalized Laplacian of the graph associated with the Markov chain is given by $\lambda_2(\mathcal{L}_G) = (\min_i s_i - 1)/(\sum_{i=1}^N s_i - N + 1)$, with $s_i$ denoting the number of strategies of agent i. Due to space constraints, the full proof of the above lemma is deferred to Appendix A.1. The importance of the lemma is that the resultant time complexity of the power method is also exponential: the number of iterations scales with $1/\lambda_2(\mathcal{L}_G)$, and each iteration costs time at least linear in n. PageRank: Inspired by ranking web pages on the internet, one can consider PageRank for computing the solution to the eigenvalue problem in Eqn. 3. Applied to our setting, we first realize that its memory requirement is analogous to that of the power method, i.e., exponential in the number of agents. Eigenvalue Decomposition: Apart from the above, we can also consider the problem as a standard eigenvalue decomposition task (which is also what the original $\alpha$-Rank is implemented with) and compute the stationary distribution accordingly. Unfortunately, state-of-the-art techniques for eigenvalue decomposition also require exponential memory and, for dense matrices, exhibit a time complexity cubic in n. Clearly, these bounds restrict $\alpha$-Rank to a small number of agents N. Mirror Descent: The ordered-subsets mirror descent requires at each iteration a projection onto the standard n-dimensional simplex. As stated in the corresponding paper, computing this projection requires $\mathcal{O}(n \log n)$ time. In our setting, $n = \prod_{i=1}^N s_i$ is the total number of joint strategy profiles; hence, the projection step is exponential in the number of agents N. This makes mirror descent inapplicable for $\alpha$-Rank when N is large.
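For reference, the baseline power iteration discussed above is only a few lines. The sketch below operates on a dense transition matrix, which makes the exponential memory footprint explicit; the tolerance and iteration cap are illustrative assumptions.

```python
import numpy as np

def power_method(T, tol=1e-10, max_iter=100_000):
    """Stationary distribution via power iteration (sketch):
    v <- T^T v, renormalized, until the iterates stop changing."""
    n = T.shape[0]
    v = np.full(n, 1.0 / n)        # start from the uniform distribution
    for _ in range(max_iter):
        w = T.T @ v
        w /= w.sum()               # keep v on the simplex
        if np.abs(w - v).sum() < tol:
            return w
        v = w
    return v
```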
Rather than seeking an exact solution to the problem in Eqn. 3, one can consider approximate solvers by defining a constrained optimization objective (Eqn. 4): $\min_{\bm{x}} \|\bm{T}^{[k],\mathsf{T}}\bm{x} - \bm{x}\|_2^2$ subject to $\bm{x}^{\mathsf{T}}\bm{1} = 1$ and $\bm{x} \geq 0$. The constrained objective in Eqn. 4 simply seeks a vector $\bm{x}$ minimizing the distance between $\bm{T}^{[k],\mathsf{T}}\bm{x}$ and $\bm{x}$ itself, while ensuring that $\bm{x}$ lies on the n-dimensional simplex. Due to the time and memory complexities required for computing exact solutions, we focus on determining an approximate vector $\hat{\bm{x}}$, defined to be the solution to the following relaxed problem of Eqn. 4 (Eqn. 5): $\min_{\bm{x}} \|\bm{T}^{[k],\mathsf{T}}\bm{x} - \bm{x}\|_2^2$ subject to $\bm{x} \geq \delta\bm{1}$, with $\delta > 0$ a constraint relaxation term. The optimization problem in Eqn. 5 can be solved using a barrier-like technique that we detail below. Before that, it is instructive to clarify the connection between the original and the relaxed problems. Proposition [Connections to Markov Chain]: Let $\hat{\bm{x}}$ be a solution to the relaxed optimization problem in Eqn. 5. Then $\hat{\bm{x}}/\|\hat{\bm{x}}\|_1 = \bm{v}^{[k]}$ is the stationary distribution of the Markov chain in Section 2. Importantly, the above proposition allows us to focus on solving the problem in Eqn. 5, which only exhibits inequality constraints. Problems of this nature can be solved by considering a barrier function, leading to an unconstrained finite-sum minimization problem. To do so, denoting by $\bm{b}_i^{[k]}$ the i-th row of $\bm{T}^{[k],\mathsf{T}} - \bm{I}$, we can write $\|\bm{T}^{[k],\mathsf{T}}\bm{x} - \bm{x}\|_2^2 = \sum_{i=1}^{n} (\bm{b}_i^{[k],\mathsf{T}}\bm{x})^2$. Introducing logarithmic barrier functions, with $\lambda > 0$ being a penalty parameter, we arrive at (Eqn. 6): $\min_{\bm{x}} f(\bm{x}) := \sum_{i=1}^{n} (\bm{b}_i^{[k],\mathsf{T}}\bm{x})^2 - \lambda \sum_{j=1}^{n} \log x_j$. Eqn. 6 is a standard finite-sum minimization problem that can be solved using any off-the-shelf stochastic optimization algorithm, e.g., stochastic gradients, ADAM, among others. A stochastic gradient execution involves sampling a strategy profile $i_t \sim [1, \dots, n]$ at iteration t and then executing a descent step $\bm{x}_{t+1} = \bm{x}_t - \eta_t \nabla_{\bm{x}} f_{i_t}(\bm{x}_t)$, with $\nabla_{\bm{x}} f_{i_t}(\bm{x}_t)$ being a sub-sampled gradient of Eqn. 6 and $\lambda$ a scheduled penalty parameter with $\lambda_{t+1} = \lambda_t/\gamma$ for some $\gamma > 1$; see Phase I in Algorithm 1 for the pseudo-code.
Algorithm 1 ($\alpha^{\alpha}$-Rank). Parameters: penalty parameter $\lambda$, decay rate $\gamma > 1$, total number of joint strategy profiles n, and a constraint relaxation term $\delta$. Oracle parameters: full strategy pools $\{S_i\}$.
1: Initialize a subset of strategy pools for all agents $\{\bar{S}_i\}$ by randomly sampling from $\{S_i\}$
2: Set outer iteration count k = 0
3: While the stopping criterion is not met, do:
4-9: Phase I, Scalable Policy Evaluation (Section 3.2): for $t = 0 \to T-1$, uniformly sample one strategy profile $i_t$ and update the solution $\bm{x}_{t+1}$
10-13: Phase II (if turned on), Scalable Policy Evaluation with Oracle (Section 3.3): for each agent i, compute the best-response strategy $\pi_i^*$ by solving Eqn. 8 and update the strategy pool as $\bar{S}_i \leftarrow \bar{S}_i \cup \{\pi_i^*\}$
14: Set k = k + 1
15: Return: the best performing strategy profile $\pi_{\text{joint}}$ across all agents
We can further derive a convergence theorem. Theorem [Convergence of Barrier Method]: Let $\hat{\bm{x}}_\lambda$ be the output of a gradient algorithm descending the objective in Eqn. 6 after T iterations; then the expected suboptimality decays at a rate of $\mathcal{O}(1/\sqrt{T})$, where the expectation is taken w.r.t. all the randomness of the stochastic gradient implementation and $\gamma > 1$ is the decay rate for $\lambda$. The proof of the above theorem (see the full proof in Appendix A.2) is interesting by itself, but a more important aspect is the memory and time complexity implications posed by our algorithm. The theorem implies that, after $T = \mathcal{O}(1/\epsilon^2)$ iterations, with $\epsilon$ being a precision parameter, our algorithm outputs a vector $\hat{\bm{x}}_\lambda \geq 0$ whose normalization approximates the stationary distribution to precision $\epsilon$. Moreover, one can easily see that after T steps, the overall time and memory complexities of our update rules are governed by the sparsity of $\bm{T}^{[k]}$ (each row contains only $\sum_{i=1}^N s_i - N + 1$ non-zero entries) rather than by n (see Table 1). Hence, our algorithm is able to achieve an exponential reduction, in terms of the number of agents, in both memory and time complexities.
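A sketch of Phase I under the stated objective. The gradient scaling, the step size, and the clamp that keeps the iterate in the positive orthant are assumptions; forming the dense matrix B is for clarity only, since in practice rows are generated on the fly from the sparse structure.

```python
import numpy as np

def stochastic_barrier_rank(T, steps=10_000, lr=0.01, lam=0.5, gamma=1.001):
    """Descend ||(T^T - I) x||^2 - lam * sum(log x) with single-row
    sampling (sketch); lr/lam/gamma values are illustrative."""
    n = T.shape[0]
    B = T.T - np.eye(n)            # b_i is the i-th row of (T^T - I)
    x = np.full(n, 1.0 / n)
    for t in range(steps):
        i = np.random.randint(n)   # sample one joint strategy profile
        # unbiased single-row estimate of the quadratic term's gradient
        grad = 2 * n * (B[i] @ x) * B[i] - lam / x
        x = np.maximum(x - lr * grad, 1e-12)   # stay strictly positive
        lam /= gamma                            # decay the penalty parameter
    return x / x.sum()             # normalize to a distribution
```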
So far, we have presented scalable multi-agent evaluation through stochastic optimization. We can further boost the scalability of our method (to tens of millions of joint profiles) by introducing an oracle mechanism. The heuristic of oracles was first introduced for solving large-scale zero-sum matrix games. The idea is to first create a restricted sub-game in which all players are only allowed to play a restricted number of strategies; these are then expanded by incorporating each of the players' best responses to the opponents, and the sub-game is replayed with the agents' augmented strategy pools before a new round of best responses is found. The worst-case scenario of introducing oracles would be to solve the original evaluation problem in full size. The best response is assumed to be given by an oracle, which can be implemented simply by a grid search. Precisely, given the top-ranked profile $\pi^{[k]}$ at iteration k, the goal for agent i is to select the optimal $\pi_i^*$ from the pre-defined strategy pool $S_i$ that maximizes its reward against the top-ranked opponent strategies, with $x_h^{[k]}$ denoting the state and $(u_{i,h}, u_{-i,h})$ denoting the actions of agent i and the opponents, respectively (Eqn. 8). The heuristic of solving the full game through restricted sub-games is crucial especially when it is prohibitively expensive to list all joint-strategy profiles, e.g., in scenarios involving tens of millions of joint profiles. For a complete exposition, we summarize the pseudo-code in Algorithm 1. In the first phase, vanilla $\alpha^{\alpha}$-Rank is executed (lines 4-9), while in the second (lines 11-13), $\alpha^{\alpha}$-Rank with the oracle (if turned on) is computed. To avoid any confusion, we refer to the latter as $\alpha^{\alpha}$-Oracle. Note that, even though in two-player zero-sum games the oracle algorithm is guaranteed to converge to the minimax equilibrium, providing valid convergence guarantees for $\alpha^{\alpha}$-Oracle is an interesting direction for future work. In this paper, we rather demonstrate the effectiveness of such an approach in a large-scale empirical study, as shown in Section 4.
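A sketch of the oracle loop described above. The function names rank_top and payoff, the random initialization of the restricted pools, and the stopping rule (stop when no pool grows) are illustrative assumptions.

```python
import random

def alpha_alpha_oracle(full_pools, payoff, rank_top, max_rounds=10):
    """Oracle loop (sketch). `rank_top(pools)` runs Phase I on the
    restricted sub-game and returns its top-ranked joint profile as a
    tuple; `payoff(i, profile)` evaluates agent i's reward."""
    pools = [[random.choice(pool)] for pool in full_pools]  # restricted pools
    for _ in range(max_rounds):
        top = rank_top(pools)
        grew = False
        for i, full in enumerate(full_pools):
            # grid-search best response of agent i to the top-ranked opponents
            best = max(full, key=lambda s: payoff(i, top[:i] + (s,) + top[i + 1:]))
            if best not in pools[i]:
                pools[i].append(best)
                grew = True
        if not grew:          # no pool expanded: the sub-game has stabilized
            break
    return rank_top(pools), pools
```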
In this section, we evaluate the scalability properties of $\alpha^{\alpha}$-Rank. Precisely, we demonstrate that our method is capable of successfully recovering optimal policies in self-driving car simulations and in the Ising model, where strategy spaces are in the order of up to tens of millions of possible strategies. We note that these sizes are well beyond the capabilities of state-of-the-art methods, e.g., $\alpha$-Rank, which considers at most four agents with four strategies each, or AlphaStar, which handles about 600 strategies. Sparsity Data Structures: During the implementation phase, we realised that the transition probability matrix $\bm{T}^{[k]}$ of the Markov chain induces a sparsity pattern (each row and column contains only $\sum_{i=1}^N s_i - N + 1$ non-zero elements) that, if exploited, can lead to significant speed-ups. To fully leverage such sparsity, we tailored a novel data structure for the sparse storage and computations needed by Algorithm 1. More details can be found in Appendix B.1. Correctness of Ranking Results: Before conducting large-scale sophisticated experiments, it is instructive to validate the correctness of our results on simple cases, especially those reported in the original $\alpha$-Rank work. We therefore test on three normal-form games. Due to space constraints, we defer the full description of these tasks to Appendix B.2. Fig. 2 shows that the results generated by $\alpha^{\alpha}$-Rank (Phase I of Algorithm 1) are indeed consistent with $\alpha$-Rank's. Complexity Results on Random Matrices: We measured the time and memory needed by our method for computing the stationary distribution on simulated random matrices of varying sizes. Baselines include eigenvalue decomposition from Numpy, optimization tools in PyTorch, and $\alpha$-Rank from OpenSpiel. For our algorithm, we terminated execution once gradient norms fell below a predefined threshold of 0.01. According to Fig. 3, $\alpha^{\alpha}$-Rank achieves a three-orders-of-magnitude reduction compared to eigenvalue decomposition in terms of time. Most importantly, the performance gap keeps growing with increasing matrix size. Autonomous Driving on Highway: Highway provides an environment for simulating self-driving scenarios, with social vehicles designed to mimic real-world traffic flow and trained driving behaviors serving as strategy pools. We conducted a ranking experiment involving 5 agents, each with 5 strategies, i.e., a strategy space in the order of $\mathcal{O}(5^5)$ (3125 possible strategy profiles). Agent strategies varied between "rational" and "dangerous" drivers, which we encoded using different reward functions during training (complete details of the reward functions can be found in Appendix C.2). Under this setting, we know upfront that the optimal profile corresponds to all five agents driving rationally. Cars were trained using value iteration, and rewards averaged over 200 test trials are reported. Due to the size of the strategy space, we considered both $\alpha^{\alpha}$-Rank and $\alpha^{\alpha}$-Oracle. We set $\alpha^{\alpha}$-Oracle to run 200 iterations of gradient updates when solving for the top-ranked strategy profile (Phase I in Algorithm 1). Results depicted in Fig. 4(a) clearly demonstrate that both our implementations are capable of recovering the correct highest-ranking strategy profile. We also note that, though such sizes are feasible using $\alpha$-Rank and the power method, our results achieve a four-orders-of-magnitude reduction in the total number of iterations. Ising Model Experiment: The Ising model is the model for describing ferromagnetism in statistical mechanics. It assumes a system of magnetic spins, where each spin $a_j$ is either an up-spin, ↑, or a down-spin, ↓. The system energy is defined by $E(\bm{a}, \bm{h}) = -\sum_j \big(h_j a_j + \frac{\lambda}{2}\sum_{k \neq j} a_j a_k\big)$, with $h_j$ and $\lambda$ being constant coefficients. The probability of one spin configuration is $P(\bm{a}) = \exp(-E(\bm{a},\bm{h})/\tau) / \sum_{\bm{a}'} \exp(-E(\bm{a}',\bm{h})/\tau)$, where $\tau$ is the environmental temperature. Finding the equilibrium of the system is notoriously hard because one needs to enumerate all possible configurations in computing $P(\bm{a})$. Traditional approaches include Markov Chain Monte Carlo (MCMC). An interesting phenomenon is the phase change: the spins reach an equilibrium at low temperatures and, with increasing $\tau$, such equilibrium suddenly breaks and the system becomes chaotic. Here we try to observe the phase change through multi-agent evaluation methods. We treat each spin as an agent, set the reward to be $r_j = h_j a_j + \frac{\lambda}{2}\sum_{k \neq j} a_j a_k$, and set $\alpha = 1/\tau$ to build the link between Eqn. 1 and $P(\bm{a})$. We consider the top-ranked strategy profile from $\alpha^{\alpha}$-Oracle as the system equilibrium and compare it against the ground truth from MCMC. We consider a five-by-five 2D model, which induces a prohibitively large strategy space of size $2^{25}$ (tens of millions), to which the existing baselines, including $\alpha^{\alpha}$-Rank on a single machine, are inapplicable. Fig. 4(b) illustrates that our method identifies the same phase change as MCMC suggests. We show an example of how $\alpha^{\alpha}$-Oracle's top-ranked profile finds the system equilibrium in Fig. 4(c) at $\tau = 1$.
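A small sketch of the Ising reward above, assuming the fully connected interaction implied by the sum over k ≠ j and taking the energy as the negative total reward.

```python
import numpy as np

def ising_rewards(spins, h, lam):
    """Per-spin rewards r_j = h_j a_j + (lam / 2) * sum_{k != j} a_j a_k
    for a flattened configuration of +/-1 spins (sketch; every pair of
    spins interacts here, matching the reward written above rather than
    a nearest-neighbour lattice)."""
    total = spins.sum()
    # sum over k != j of a_j * a_k equals a_j * (total - a_j)
    return h * spins + 0.5 * lam * spins * (total - spins)

spins = np.random.choice([-1, 1], size=25)   # 5x5 model, flattened
r = ising_rewards(spins, h=np.ones(25), lam=0.1)
energy = -r.sum()    # E(a, h) as the negative total reward (assumption)
```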
In this paper, we demonstrated that the $\alpha$-Rank approach exhibits exponential time and memory complexities. We then proposed $\alpha^{\alpha}$-Rank as a scalable solution for multi-agent evaluation with linear time and memory demands. In a set of experiments, we demonstrated that our method is truly scalable and capable of handling large strategy spaces. There are many interesting avenues for future research. First, we plan to theoretically analyze the convergence properties of the resulting oracle algorithm, and further introduce policy learning through oracles. Second, we plan to take our method to the real world by conducting multi-robot experiments. Lemma: Consider the Markov chain of Section 2 with joint state space $S^{[k]}_{\text{joint}}$ and transition probability matrix $\bm{T}^{[k]}$. The second-smallest eigenvalue of the normalized Laplacian of the graph associated with the Markov chain is given by $\lambda_2(\mathcal{L}_G) = (\min_i s_i - 1)/(\sum_{i=1}^N s_i - N + 1)$, with $s_i$ denoting the number of strategies of agent i. Proof: For simplicity, we drop the round index k in the derivation below. Notice that the underlying graph of the constructed Markov chain can be represented as a Cartesian product of N complete graphs $K_{s_1} \times \dots \times K_{s_N}$. Indeed, two vertices $\pi, \tilde{\pi} \in G$ are connected by an edge if and only if these joint strategy profiles differ in at most one individual strategy, i.e., there exists a unique $i \in \{1, \dots, N\}$ such that $\pi_i \neq \tilde{\pi}_i$ and $\pi_{-i} = \tilde{\pi}_{-i}$. Hence, the spectral properties of G can be described in terms of the spectral properties of the $K_{s_i}$: the eigenvalues of the unnormalized Laplacian of G are sums of eigenvalues $\lambda_{i,j}$ of the unnormalized Laplacians of the complete graphs $K_{s_j}$, with eigenvectors given by products of the corresponding eigenvectors $\vartheta_{i,j}$. The spectrum of the unnormalized Laplacian of the complete graph $K_{s_i}$ is given by $\mathrm{Spectr}(K_{s_i}) = \{0, s_i - 1\}$, and the only eigenvector corresponding to the zero eigenvalue is $\bm{1} \in \mathbb{R}^{s_i}$. Therefore, the minimum non-zero eigenvalue of the unnormalized Laplacian of G is given by $\min_i s_i - 1$. Finally, due to the fact that G is a regular graph (with the degree of each node equal to $\sum_{i=1}^N s_i - N + 1$), the smallest non-zero eigenvalue of the normalized Laplacian of G is given by $(\min_i s_i - 1)/(\sum_{i=1}^N s_i - N + 1)$. Given this result, the overall time complexity of the power method is bounded by a number of iterations scaling with $1/\lambda_2(\mathcal{L}_G)$ times a per-iteration cost at least linear in n, and is therefore exponential in the number of agents. As for the memory complexity, the power method has the same requirements as the PageRank algorithm. These results imply that the power method scales exponentially with the number of agents N and is, therefore, inapplicable when N is large. Theorem [Convergence of Barrier Method]: Let $\hat{\bm{x}}_\lambda$ be the output of a gradient algorithm descending the objective in Eqn. 6 after T iterations; then the expected suboptimality decays at a rate of $\mathcal{O}(1/\sqrt{T})$, where the expectation is taken w.r.t. all the randomness of the stochastic gradient implementation and $\gamma > 1$ is the decay rate for $\lambda$. At each step, the algorithm samples $i_t \sim [1, \dots, n]$ and computes the descent update. The above implies that, given a precision parameter $\epsilon > 0$, after $T = \mathcal{O}(1/\epsilon^2)$ iterations, Algorithm 2 outputs a vector $\hat{\bm{x}}_\lambda \geq 0$ whose normalization is an $\epsilon$-accurate approximation of the stationary distribution. Hence, by tuning the parameters $\delta$ and $\epsilon$, one can approximate the stationary distribution vector $\bm{v}^{[k]}$. Algorithm 2 starts with the uniform distribution vector $\bm{x}_0 = \frac{1}{n}\bm{1}$, and at step t it updates the previous iterate $\bm{x}_t$ by the rule given in line 6. Let $N_t$ denote the set of coordinates touched by the sampled row at step t. Then, for $j \notin N_t$, all entries $[\bm{x}_{t+1}]_j$ are equal to each other and can be updated jointly; given the value $\bm{x}_t^{\mathsf{T}}\bm{1}$, this computation takes constant time and space. For entries $j \in N_t$, all entries $[\bm{x}_{t+1}]_j$ might be different (worst case) and are, therefore, updated individually. The transition probability matrix $\bm{T}^{[k]}$ in $\alpha^{\alpha}$-Rank is sparse: each row and column of $\bm{T}^{[k]}$ contains $\sum_{i=1}^N s_i - N + 1$ non-zero elements (see Section 3.2). To fully leverage such sparsity, we design a new data structure (see Fig. 5) for storage and computation. Compared to standard techniques (e.g., COO, CSR, and CRS) that store (row, column, value) triplets of a sparse vector, our data structure adopts a more efficient protocol that stores (defaults, positions, biases), leading to improvements in computational efficiency. We overload the operations for this data structure, including addition, scalar multiplication, dot product, element-wise square root, and the L1 norm. We show the example of addition in Fig. 5.
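A minimal sketch of the (defaults, positions, values) idea with only the addition operator overloaded; storing absolute values at exceptional positions (rather than offsets from the default) is an assumption.

```python
class DefaultSparseVec:
    """Vector where most entries equal `default`; `exc` maps the few
    exceptional positions to their values (sketch). The paper also
    overloads scalar multiplication, dot product, element-wise sqrt
    and the L1 norm in the same fashion."""
    def __init__(self, n, default=0.0, exc=None):
        self.n, self.default = n, default
        self.exc = dict(exc or {})

    def __add__(self, other):
        assert self.n == other.n
        out = DefaultSparseVec(self.n, self.default + other.default)
        for j in set(self.exc) | set(other.exc):   # union of exceptions
            out.exc[j] = (self.exc.get(j, self.default)
                          + other.exc.get(j, other.default))
        return out

    def dense(self):
        v = [self.default] * self.n
        for j, val in self.exc.items():
            v[j] = val
        return v
```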
5. Our algorithm provides the expected ranking in all three normal-form games shown in Fig. 6, which is consistent with the in α-Rank. Battle of sexes. Battle of sexes is an asymmetric game R OM = [. As it is a single-population game, we adopt the transitional probability matrix of Eqn. 11 in . Such game has the inherent structure that Rock/Paper/Scissor is equally likely to be invaded by a mutant, e.g., the scissor population will always be fixated by the rock population, therefore, our method suggests the long-term survival rate for all three strategies are the same ( For all of our experiments, the gradient updates include two phases: warm-up phase and Adam phase. In the warm-up phase, we used standard stochastic gradient descent; after that, we replace SGD with Adam till the convergence. In practice, we find this yields faster convergence than normal stochastic gradient descent. As our algorithm does column sampling for the stochastic matrix (i.e. batch size equals to one), adding momentum term intuitively help stabilize the learning. We also implement infinite α, when calculating transition matrix (or its column), where our noise term is set to be 0.01. For most of our experiments that involve α α -rank, we set the terminating condition to be, when the gradient norm is less than 10 −9. However, for Random Matrix experiment, we set the terminating gradient norm to be 10 −2 • Learning rate to be in between 15 -17 • Alpha (ranking intensity) to be in between 1 -2.5 • Number of Population to be between 25 -55 (in integer) For all of the Adam experiments, after the warmup-step we chooses to decay δ and λ by 0.999 for each time steps, where we have δ to always be 0.1. Similarly, λ starts at the value 0.5. However, in speed and memory experiment, we chooses the decay to be 0.9 Collision Reward is calculated when agent collided with either social car or other agents. All of our value iteration agents are based on environment discretization, which represents the environment in terms of time to collision MDP, taking into account that the other agents are moving in constant speed. For all experiments, we run value-iteration for 200 steps with the discounting factor of 0.99. For each controllable cars, the default speed is randomized to be between 10 to 25, while the social cars, the speed are randomized to be between 23 to 25. We define five types of driving behaviors (one rational + four dangerous) by letting each controlled car have a different ego reward function during training (though the reward we report is the environmental reward which cannot be changed). By setting this, we can make sure, at upfront, the best joint-strategy strategy should be all cars to drive rationally.
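Returning to the sparse (defaults, positions, biases) storage scheme described just before these experiment details: below is a minimal sketch of one plausible reading of that structure, with addition overloaded. This is our illustrative reconstruction, not the authors' implementation, and all names are hypothetical.

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class DPBVector:
    """Length-n vector in (defaults, positions, biases) form: most entries
    equal `default`; the entries at the stored positions are offset from the
    default by the stored biases."""
    n: int
    default: float = 0.0
    offsets: dict = field(default_factory=dict)  # position -> bias

    def __add__(self, other):
        assert self.n == other.n
        merged = dict(self.offsets)
        for p, b in other.offsets.items():
            merged[p] = merged.get(p, 0.0) + b   # biases add position-wise
        return DPBVector(self.n, self.default + other.default, merged)

    def to_dense(self):
        v = np.full(self.n, self.default)
        for p, b in self.offsets.items():
            v[p] += b
        return v

a = DPBVector(5, default=0.1, offsets={0: 0.3})
b = DPBVector(5, default=0.2, offsets={0: -0.1, 3: 0.4})
print((a + b).to_dense())  # -> [0.5, 0.3, 0.3, 0.7, 0.3]
```

Storing one default value per vector plus only the deviating positions is what makes rows of the sparse transition matrix cheap to add and scale.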
We provide a scalable solution to multi-agent evaluation with linear time and memory complexity in the number of agents.
403
scitldr
Deep neural networks are known to be annotation-hungry. Numerous efforts have been devoted to reducing the annotation cost when learning with deep networks. Two prominent directions include learning with noisy labels and semi-supervised learning by exploiting unlabeled data. In this work, we propose DivideMix, a novel framework for learning with noisy labels by leveraging semi-supervised learning techniques. In particular, DivideMix models the per-sample loss distribution with a mixture model to dynamically divide the training data into a labeled set with clean samples and an unlabeled set with noisy samples, and trains the model on both the labeled and unlabeled data in a semi-supervised manner. To avoid confirmation bias, we simultaneously train two diverged networks where each network uses the dataset division from the other network. During the semi-supervised training phase, we improve the MixMatch strategy by performing label co-refinement and label co-guessing on labeled and unlabeled samples, respectively. Experiments on multiple benchmark datasets demonstrate substantial improvements over state-of-the-art methods. Code is available at https://github.com/LiJunnan1992/DivideMix. The remarkable success in training deep neural networks (DNNs) is largely attributed to the collection of large datasets with human annotated labels. However, it is extremely expensive and time-consuming to label extensive data with high-quality annotations. On the other hand, there exist alternative and inexpensive methods for mining large-scale data with labels, such as querying commercial search engines (a), downloading social media images with tags , leveraging machine-generated labels , or using a single annotator to label each sample . These alternative methods inevitably yield samples with noisy labels. A recent study shows that DNNs can easily overfit to noisy labels and in poor generalization performance. Existing methods on learning with noisy labels (LNL) primarily take a loss correction approach. Some methods estimate the noise transition matrix and use it to correct the loss function . However, correctly estimating the noise transition matrix is challenging. Some methods leverage the predictions from DNNs to correct labels and modify the loss accordingly . These methods do not perform well under high noise ratio as the predictions from DNNs would dominate training and cause overfitting. To overcome this, adopt MixUp augmentation. Another approach selects or reweights samples so that noisy samples contribute less to the loss . A challenging issue is to design a reliable criteria to select clean samples. It has been shown that DNNs tend to learn simple patterns first before fitting label noise . Therefore, many methods treat samples with small loss as clean ones . Among those methods, Co-teaching and Co-teaching+ train two networks where each network selects small-loss samples in a mini-batch to train the other. Another active area of research that also aims to reduce annotation cost is semi-supervised learning (SSL). In SSL, the training data consists of unlabeled samples in addition to the labeled samples. Significant progress has been made in leveraging unlabeled samples by enforcing the model to produce low entropy predictions on unlabeled data or consistent predictions on perturbed input (; ;). propose MixMatch, which unifies several dominant SSL approaches in one framework and achieves state-of-the-art performance. 
Despite the individual advances in LNL and SSL, their connection has been underexplored. In this work, we propose DivideMix, which addresses learning with label noise in a semi-supervised manner. Different from most existing LNL approaches, DivideMix discards the sample labels that are highly likely to be noisy, and leverages the noisy samples as unlabeled data to regularize the model from overfitting and improve generalization performance. The key contributions of this work are: • We propose co-divide, which trains two networks simultaneously. For each network, we dynamically fit a Gaussian Mixture Model (GMM) on its per-sample loss distribution to divide the training samples into a labeled set and an unlabeled set. The divided data is then used to train the other network. Co-divide keeps the two networks diverged, so that they can filter different types of error and avoid confirmation bias in self-training. • During SSL phase, we improve MixMatch with label co-refinement and co-guessing to account for label noise. For labeled samples, we refine their ground-truth labels using the network's predictions guided by the GMM for the other network. For unlabeled samples, we use the ensemble of both networks to make reliable guesses for their labels. • We experimentally show that DivideMix significantly advances state-of-the-art on multiple benchmarks with different types and levels of label noise. We also provide extensive ablation study and qualitative to examine the effect of different components. 2 RELATED WORK Most existing methods for training DNNs with noisy labels seek to correct the loss function. The correction can be categorized in two types. The first type treats all samples equally and correct loss either explicitly or implicitly through relabeling the noisy samples. For relabeling methods, the noisy samples are modeled with directed graphical models , Conditional Random Fields , knowledge graph (b), or DNNs . However, they require access to a small set of clean samples. and propose iterative methods which relabel samples using network predictions. For explicit loss correction. propose a bootstrapping method which modifies the loss with model predictions, and improve the bootstrapping method by exploiting the dimensionality of feature subspaces. estimate the label corruption matrix for loss correction, and improve the corruption matrix by using a clean set of data. The second type of correction focuses on reweighting training samples or separating clean and noisy samples, which in correcting the loss function . A common method is to consider samples with smaller loss as clean ones . train a mentor network to guide a student network by assigning weights to samples. reweight samples based on their gradient directions. apply cross validation to identify clean samples. calculate sample weights by modeling per-sample loss with a mixture model. train two networks which select small-loss samples within each mini-batch to train each other, and improve it by updating the network on disagreement data to keep the two networks diverged. Contrary to all aforementioned methods, our method discards the labels that are highly likely to be noisy, and utilize the noisy samples as unlabeled data to regularize training in a SSL manner. and have shown that SSL method is effective in LNL. However, their methods do not perform well under high levels of noise, whereas our method can better distinguish and utilize noisy samples. Besides leveraging SSL, our method also introduces other advantages. 
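For concreteness, the GMM-based division described in the first contribution above can be sketched as follows. This is a minimal sketch that assumes scikit-learn's GaussianMixture in place of a hand-rolled EM loop; the synthetic losses and function names are our own.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def co_divide(losses, tau=0.5):
    """Fit a two-component GMM to per-sample losses; the posterior of the
    low-mean component is the clean probability w_i. Samples with w_i > tau
    form the labeled set (used to train the *other* network)."""
    losses = np.asarray(losses).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2).fit(losses)
    clean = gmm.means_.argmin()                      # lower mean = cleaner
    w = gmm.predict_proba(losses)[:, clean]
    return w > tau, w

rng = np.random.default_rng(0)
losses = np.concatenate([rng.normal(0.1, 0.05, 800),   # mostly clean samples
                         rng.normal(1.0, 0.30, 200)])  # mostly noisy samples
mask, w = co_divide(losses)
print(mask.sum(), "samples treated as clean")
```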
Compared to self-training methods , our method can avoid the confirmation bias problem by training two networks to filter error for each other. Compared to Co-teaching and Co-teaching+, our method is more robust to noise by enabling the two networks to teach each other implicitly at each epoch (co-divide) and explicitly at each mini-batch (label co-refinement and co-guessing). its per-sample loss distribution with a GMM to divide the dataset into a labeled set (mostly clean) and an unlabeled set (mostly noisy), which is then used as training data for the other network (i.e. co-divide). At each mini-batch, a network performs semi-supervised training using an improved MixMatch method. We perform label co-refinement on the labeled samples and label co-guessing on the unlabeled samples. SSL methods aim to improve the model's performance by leveraging unlabeled data. Current state-of-the-art SSL methods mostly involve adding an additional loss term on unlabeled data to regularize training. The regularization falls into two classes: consistency regularization (; ;) enforces the model to produce consistent predictions on augmented input data; entropy minimization encourages the model to give high-confidence predictions on unlabeled data. propose MixMatch, which unifies consistency regularization, entropy minimization, and the MixUp regularization into one framework. In this section, we introduce DivideMix, our proposed method for learning with noisy labels. An overview of the method is shown in Figure 1. To avoid confirmation bias of self-training where the model would accumulate its errors, we simultaneously train two networks to filter errors for each other through epoch-level implicit teaching and batch-level explicit teaching. At each epoch, we perform co-divide, where one network divides the noisy training dataset into a clean labeled set (X) and a noisy unlabeled set (U), which are then used by the other network. At each mini-batch, one network utilizes both labeled and unlabeled samples to perform semi-supervised learning guided by the other network. Algorithm 1 delineates the full algorithm. Deep networks tend to learn clean samples faster than noisy samples , leading to lower loss for clean samples . , we aim to find the probability of a sample being clean by fitting a mixture model to the per-sample denote the training data, where x i is an image and yi ∈ {0, 1} C is the one-hot label over C classes. Given a model with parameters θ, the cross-entropy loss (θ) reflects how well the model fits the training samples: where p c model is the model's output softmax probability for class c. fit a two-component Beta Mixture Model (BMM) to the max-normalized loss to model the distribution of clean and noisy samples. However, we find that BMM tends to produce undesirable flat distributions and fails when the label noise is asymmetric. Instead, Gaussian Mixture Model (GMM) can better distinguish clean and noisy samples due to its flexibility in the sharpness of distribution. Therefore, we fit a two-component GMM to using the Expectation-Maximization algorithm. For each sample, its clean probability w i is the posterior probability p(g| i), where g is the Gaussian component with smaller mean (smaller loss). We divide the training data into a labeled set and an unlabeled set by setting a threshold τ on w i. However, training a model using the data divided by itself could lead to confirmation bias (i.e. 
the 1 Input: θ and θ, training dataset (X, Y), clean probability threshold τ, number of augmentations M, sharpening temperature T, unsupervised loss weight λu, Beta distribution parameter α for MixMatch. // standard training (with confidence penalty) 3 while e < MaxEpoch do ) // model per-sample loss with θ to obtain clean proabability for θ ) // model per-sample loss with θ to obtain clean proabability for θ 6 for k = 1, 2 do // train the two networks one by one // refine ground-truth label guided by the clean probability produced by the other network // apply temperature sharpening to the refined label ) // co-guessing: average the predictions from both networks across augmentations of u b // apply temperature sharpening to the guessed label model is prone to confirm its mistakes ), as noisy samples that are wrongly grouped into the labeled set would keep having lower loss due to the model overfitting to their labels. Therefore, we propose co-divide to avoid error accumulation. In co-divide, the GMM for one network is used to divide training data for the other network. The two networks are kept diverged from each other due to different (random) parameter initialization, different training data division, different (random) mini-batch sequence, and different training targets. Being diverged offers the two networks distinct abilities to filter different types of error, making the model more robust to noise. Confidence Penalty for Asymmetric Noise. For initial convergence of the algorithm, we need to "warm up" the model for a few epochs by training on all data using the standard cross-entropy loss. The warm up is effective for symmetric (i.e. uniformly random) label noise. However, for asymmetric (i.e. class-conditional) label noise, the network would quickly overfit to noise during warm up and produce over-confident (low entropy) predictions, which leads to most samples having near-zero normalized loss (see Figure 2a). In such cases, the GMM cannot effectively distinguish clean and noisy samples based on the loss distribution. To address this issue, we penalize confident predictions from the network by adding a negative entropy term, −H , to the cross-entropy loss during warm up. The entropy of a model's prediction for an input x is defined as: By maximizing the entropy, becomes more evenly distributed (see Figure 2b) and easier to be modeled by the GMM. Furthermore, in Figure 2c we show when the model is trained with DivideMix for 10 more epochs after warm up. The proposed method can significantly reduce the loss for clean samples while keeping the loss larger for most noisy samples. To account for label noise, we make two improvements to MixMatch which enable the two networks to teach each other. First, we perform label co-refinement for labeled samples by linearly combining the ground-truth label y b with the network's prediction p b (averaged across multiple augmentations of x b), guided by the clean probability w b produced by the other network: Then we apply a sharpening function on the refined label to reduce its temperature: Second, we use the ensemble of predictions from both networks to "co-guess" the labels for unlabeled samples (algorithm 1, line 20), which can produce more reliable guessed labels. Having acquiredX (andÛ) which consists of multiple augmentations of labeled (unlabeled) samples and their refined (guessed) labels, we follow MixMatch to "mix" the data, where each sample is interpolated with another sample randomly chosen from the combined mini-batch ofX andÛ. 
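The co-refinement, sharpening, and co-guessing steps just described can be sketched in PyTorch as follows; this mirrors the equations in the text, with function names of our own choosing.

```python
import torch

def sharpen(p, T=0.5):
    """Temperature sharpening: raise class probabilities to 1/T, renormalize."""
    p = p ** (1.0 / T)
    return p / p.sum(dim=1, keepdim=True)

def co_refine(y, p_avg, w, T=0.5):
    """Label co-refinement: blend the one-hot label y with the network's
    prediction p_avg (averaged over augmentations), weighted by the clean
    probability w produced by the *other* network, then sharpen."""
    refined = w.unsqueeze(1) * y + (1 - w).unsqueeze(1) * p_avg
    return sharpen(refined, T)

def co_guess(p_net1, p_net2, T=0.5):
    """Label co-guessing for unlabeled samples: average both networks'
    predictions, then sharpen."""
    return sharpen((p_net1 + p_net2) / 2, T)
```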
Specifically, for a pair of samples (x1, x2) and their corresponding labels (p1, p2), the mixed (x', p') is computed by:

λ ∼ Beta(α, α), λ' = max(λ, 1 − λ), x' = λ'x1 + (1 − λ')x2, p' = λ'p1 + (1 − λ')p2.

MixMatch transforms X̂ and Û into X' and U'. The clamping λ' = max(λ, 1 − λ) ensures that X' is "closer" to X̂ than to Û. The loss on X' is the cross-entropy loss and the loss on U' is the mean squared error:

L_X = −(1/|X'|) Σ_{(x,p)∈X'} Σ_c p_c log p_model^c(x; θ), L_U = (1/|U'|) Σ_{(x,p)∈U'} ‖p − p_model(x; θ)‖_2^2.

Under high levels of noise, the network would be encouraged to predict the same class to minimize the loss. To prevent assigning all samples to a single class, we apply the regularization term used in prior work, which uses a uniform prior distribution π (i.e. π_c = 1/C) to regularize the model's average output across all samples in the mini-batch:

L_reg = Σ_c π_c log( π_c / ((1/(|X'| + |U'|)) Σ_x p_model^c(x; θ)) ).

Finally, the total loss is L = L_X + λ_u L_U + λ_r L_reg. In our experiments, we set λ_r to 1 and use λ_u to control the strength of the unsupervised loss. We extensively validate our method on four benchmark datasets, namely CIFAR-10, CIFAR-100, Clothing1M, and WebVision. Both CIFAR-10 and CIFAR-100 contain 50K training images and 10K test images of size 32 × 32. Following previous works, we experiment with two types of label noise: symmetric and asymmetric. Symmetric noise is generated by randomly replacing the labels for a percentage of the training data with all possible labels. Note that there is another criterion for symmetric label noise injection where the true labels cannot be maintained, for which we also report the results (Table 6 in Appendix). Asymmetric noise is designed to mimic the structure of real-world label noise, where labels are only replaced by similar classes (e.g. deer→horse, dog↔cat). We use an 18-layer PreAct ResNet and train it using SGD with a momentum of 0.9, a weight decay of 0.0005, and a batch size of 128. The network is trained for 300 epochs. We set the initial learning rate as 0.02 and reduce it by a factor of 10 after 150 epochs. The warm-up period is 10 epochs for CIFAR-10 and 30 epochs for CIFAR-100. We find that most hyperparameters introduced by DivideMix do not need to be heavily tuned. For all CIFAR experiments, we use the same hyperparameters M = 2, T = 0.5, and α = 4. τ is set as 0.5, except for the 90% noise ratio, when it is set as 0.6. We choose λ_u from {0, 25, 50, 150} using a small validation set. Clothing1M and WebVision 1.0 are two large-scale datasets with real-world noisy labels. Clothing1M consists of 1 million training images collected from online shopping websites, with labels generated from surrounding texts. We follow previous work and use ResNet-50 with ImageNet pretrained weights. WebVision contains 2.4 million images crawled from the web using the 1,000 concepts in ImageNet ILSVRC12. Following previous work, we compare baseline methods on the first 50 classes of the Google image subset using Inception-ResNet v2. The training details are delineated in Appendix B. We compare DivideMix with multiple baselines using the same network architecture. Here we introduce some of the most recent state-of-the-art methods: Meta-Learning proposes a gradient-based method to find model parameters that are more noise-tolerant; Joint-Optim and P-correction jointly optimize the sample labels and the network parameters; M-correction models the per-sample loss with a mixture model to correct the loss.

Table 1: Comparison with state-of-the-art methods in test accuracy (%) on CIFAR-10 and CIFAR-100 with symmetric noise. Methods marked by * denote re-implementations based on public code.

Note that none of these methods can consistently outperform others across different datasets. M-correction excels at symmetric noise, whereas Meta-Learning performs better for asymmetric noise.
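Returning to the mixing step and the loss terms at the start of this passage, here is a minimal sketch under the reconstructed equations above; the hyperparameter defaults are illustrative, not the paper's tuned values.

```python
import torch

def mixmatch_mix(x1, p1, x2, p2, alpha=4.0):
    """MixUp a batch pair; lam is clamped so the result stays closer to (x1, p1)."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    lam = torch.max(lam, 1 - lam)
    return lam * x1 + (1 - lam) * x2, lam * p1 + (1 - lam) * p2

def dividemix_loss(logits_x, targets_x, logits_u, targets_u, lambda_u, lambda_r=1.0):
    """Cross-entropy on mixed labeled data, MSE on mixed unlabeled data,
    plus the uniform-prior regularizer that prevents single-class collapse."""
    probs_x = torch.softmax(logits_x, dim=1)
    probs_u = torch.softmax(logits_u, dim=1)
    L_x = -(targets_x * torch.log_softmax(logits_x, dim=1)).sum(1).mean()
    L_u = ((probs_u - targets_u) ** 2).mean()
    pi = torch.full((probs_x.size(1),), 1.0 / probs_x.size(1))
    avg = torch.cat([probs_x, probs_u]).mean(0)       # mean output over batch
    L_reg = (pi * torch.log(pi / avg)).sum()
    return L_x + lambda_u * L_u + lambda_r * L_reg
```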
Table 1 shows the results on CIFAR-10 and CIFAR-100 with different levels of symmetric label noise ranging from 20% to 90%. We report both the best test accuracy across all epochs and the averaged test accuracy over the last 10 epochs. DivideMix outperforms state-of-the-art methods by a large margin across all noise ratios. The improvement is substantial (∼10% in accuracy) for the more challenging CIFAR-100 with high noise ratios. Appendix A shows comparison with more methods in Table 6. The results on CIFAR-10 with asymmetric noise are shown in Table 2. We use 40% because certain classes become theoretically indistinguishable for asymmetric noise larger than 50%.

Table 2: Comparison with state-of-the-art methods in test accuracy (%) on CIFAR-10 with 40% asymmetric noise. We re-implement all methods under the same setting.

Method          Best   Last
Cross-Entropy   85.0   72.3
F-correction    87.2   83.1
M-correction    87.4   86.3
Iterative-CV    88.6   88.0
P-correction    88.5   88.1
Joint-Optim     88.9   88.4
Meta-Learning   89.2   88.6
DivideMix       93.4   92.1

Table 3 and Table 4 show the results on Clothing1M and WebVision, respectively. DivideMix consistently outperforms state-of-the-art methods across all datasets with different types of label noise. For WebVision, we achieve more than 12% improvement in top-1 accuracy.

Table 3: Test accuracy (%) on Clothing1M.

Method          Accuracy
Cross-Entropy   69.21
F-correction    69.84
M-correction    71.00
Joint-Optim     72.16
Meta-Cleaner    72.50
Meta-Learning   73.47
P-correction    73.49
DivideMix       74.76

Table 4 (WebVision; only fragmentary rows survive): 62.68 84.00 57.80 81.36; MentorNet 63.00 81.40 57.80 79.92; Co-teaching 63.58 85.20 61.48 84.70; Iterative-CV 65[…]

Table 5: Ablation study in terms of test accuracy (%) on CIFAR-10 and CIFAR-100.

• To study the effect of model ensemble during test, we use the prediction from a single model θ instead of averaging the predictions from both networks as in DivideMix. Note that the training process remains unchanged. The decrease in accuracy suggests that the ensemble of two diverged networks consistently yields better performance during inference.
• To study the effect of co-training, we train a single network using self-divide (i.e. dividing the training data based on its own loss). The performance further decreases compared to θ.
• We find that both label refinement and input augmentation are beneficial for DivideMix.
• We combine self-divide with the original MixMatch as a naive baseline for using SSL in LNL.

Appendix A also introduces more in-depth studies examining the robustness of our method to label noise, including the AUC for clean/noisy sample classification on CIFAR-10 training data, qualitative examples from Clothing1M where our method can effectively identify the noisy samples and leverage them as unlabeled data, and visualization using t-SNE. In this paper, we propose DivideMix for learning with noisy labels by leveraging SSL. Our method trains two networks simultaneously and achieves robustness to noise through dataset co-divide, label co-refinement and co-guessing. Through extensive experiments across multiple datasets, we show that DivideMix consistently exhibits substantial performance improvements compared to state-of-the-art methods. For future work, we are interested in incorporating additional ideas from SSL to LNL, and vice versa. Furthermore, we are also interested in adapting DivideMix to other domains such as NLP. In Figure 3, we show the Area Under a Curve (AUC) for clean/noisy sample classification on CIFAR-10 training data from one of the GMMs during the first 100 epochs. Our method can effectively separate clean and noisy samples as training proceeds, even for high noise ratio.
In Figure 4, we show example images in Clothing1M identified by our method as noisy samples. Our method achieves noise filtering by discarding the noisy labels (shown in red) and using the co-guessed labels (shown in blue) to regularize training. In Figure 5, we visualize the features of training images using t-SNE . The model is trained using DivideMix for 200 epochs on CIFAR-10 with 80% label noise. The embeddings form 10 distinct clusters corresponding to the true class labels, not the noisy training labels, which demonstrates our method's robustness to label noise. For CIFAR experiments, the only hyperparameter that we tune on a per-experiment basis is the unsupervised loss weight λ u. Table 7 shows the value that we use. A larger λ u is required for stronger regularization under high noise ratios or with more classes. For both Clothing1M and WebVision, we use the same set of hyperparameters M = 2, T = 0.5, τ = 0.5, λ u = 0, α = 0.5, and train the network using SGD with a momentum of 0.9, a weight decay of 0.001, and a batch size of 32. The warm up period is 1 epoch. For Clothing1M, we train the network for 80 epochs. The initial learning rate is set as 0.002 and reduced by a factor of 10 after 40 epochs. For each epoch, we sample 1000 mini-batches from the training data while ensuring the labels (noisy) are balanced. For WebVision, we train the network for 100 epochs. The initial learning rate is set as 0.01 and reduced by a factor of 10 after 50 epochs. Table 7: Unsupervised loss weight λ u for CIFAR experiments. Higher noise ratio requires stronger regularization from unlabeled samples. Here we clarify some details for the baseline methods in the ablation study. First, DivideMix w/o co-training still has dataset division, label refinement and label guessing, but performed by the same model. Thus, the performance drop (especially for CIFAR-100 with high noise ratio) suggests the disadvantage of self-training. Second, label refinement is important for high noise ratio because more noisy samples would be mistakenly divided into the labeled set. Third, augmentation improves performance through both producing more reliable predictions and achieving consistency regularization. In addition, same as , we also find that temperature sharpening is essential for our method to perform well. We analyse the training time of DivideMix to understand its efficiency. In Table 8, we compare the total training time of DivideMix on CIFAR-10 with several state-of-the-art methods, using a single Nvidia V100 GPU. DivideMix is slower than Co-teaching+, but faster than P-correction and Meta-Learning which involve multiple training iterations. In Table 9, we also break down the computation time for each operation in DivideMix.
We propose a novel semi-supervised learning approach that achieves SOTA performance in learning with noisy labels.
404
scitldr
We present a new algorithm to train a robust neural network against adversarial attacks. Our algorithm is motivated by the following two ideas. First, although recent work has demonstrated that fusing randomness can improve the robustness of neural networks (Liu 2017), we noticed that adding noise blindly to all the layers is not the optimal way to incorporate randomness. Instead, we model randomness under the framework of Bayesian Neural Network (BNN) to formally learn the posterior distribution of models in a scalable way. Second, we formulate the mini-max problem in BNN to learn the best model distribution under adversarial attacks, leading to an adversarial-trained Bayesian neural net. Experiment demonstrate that the proposed algorithm achieves state-of-the-art performance under strong attacks. On CIFAR-10 with VGG network, our model leads to 14% accuracy improvement compared with adversarial training (Madry 2017) and random self-ensemble under PGD attack with 0.035 distortion, and the gap becomes even larger on a subset of ImageNet. Deep neural networks have demonstrated state-of-the-art performances on many difficult machine learning tasks. Despite the fundamental breakthroughs in various tasks, deep neural networks have been shown to be utterly vulnerable to adversarial attacks BID32 BID11. Carefully crafted perturbations can be added to the inputs of the targeted model to drive the performances of deep neural networks to chance-level. In the context of image classification, these perturbations are imperceptible to human eyes but can change the prediction of the classification model to the wrong class. Algorithms seek to find such perturbations are denoted as adversarial attacks BID5 BID4 BID28, and some attacks are still effective in the physical world BID17 BID9. The inherent weakness of lacking robustness to adversarial examples for deep neural networks brings out security concerns, especially for security-sensitive applications which require strong reliability. To defend from adversarial examples and improve the robustness of neural networks, many algorithms have been recently proposed BID27 BID37 BID17 BID12. Among them, there are two lines of work showing effective on medium-sized data (e.g., CIFAR-10). The first line of work uses adversarial training to improve robustness, and the recent algorithm proposed in BID25 has been recognized as one of the most successful defenses, as shown in. The second line of work adds stochastic components in the neural network to hide gradient information from attackers. In the black-box setting, stochastic outputs can significantly increase query counts for attacks using finite-difference techniques BID5, and even in the white-box setting the recent Random Self-Ensemble (RSE) approach proposed by BID23 achieves similar performance to Madry's adversarial training algorithm. In this paper, we propose a new defense algorithm called Adv-BNN. The idea is to combine adversarial training and Bayesian network, although trying BNNs in adversarial attacks is not new (e.g. BID20 BID10 BID30), and very recently BID36 also tried to combine Bayesian learning with adversarial training, this is the first time we scale the problem to complex data and our approach achieves better robustness than previous defense methods. 
The contributions of this paper can be summarized below:• Instead of adding randomness to the input of each layer (as what has been done in RSE), we directly assume all the weights in the network are stochastic and conduct training with techniques commonly used in Bayesian Neural Network (BNN).• We propose a new mini-max formulation to combine adversarial training with BNN, and show the problem can be solved by alternating between projected gradient descent and SGD.• We test the proposed Adv-BNN approach on CIFAR10, STL10 and ImageNet143 datasets, and show significant improvement over previous approaches including RSE and adversarial training. Notations A neural network parameterized by weights w ∈ R d is denoted by f (x; w), where x ∈ R p is an input example and y is the corresponding label, the training/testing dataset is D tr/te with size N tr/te respectively. When necessary, we abuse D tr/te to define the empirical distribu- DISPLAYFORM0 δ(x i)δ(y i), where δ(·) is the Dirac delta function. x o represents the original input and x adv denotes the adversarial example. The loss function is represented as f (x i ; w), y i, where i is the index of the data point. Our approach works for any loss but we consider the cross-entropy loss in all the experiments. The adversarial perturbation is denoted as ξ ∈ R p, and adversarial example is generated by x adv = x o + ξ. In this paper, we focus on the attack under norm constraint BID25, so that ξ ≤ γ. In order to align with the previous works, in the experiments we set the norm to · ∞. The Hadamard product is denoted as. In this section, we summarize related works on adversarial attack and defense. Attack: Most algorithms generate adversarial examples based on the gradient of loss function with respect to the inputs. For example, FGSM BID11 perturbs an example by the sign of gradient, and use a step size to control the ∞ norm of perturbation. BID17 proposes to run multiple iterations of FGSM. More recently, C&W attack BID3 formally poses attack as an optimization problem, and applies a gradient-based iterative solver to get an adversarial example. Both C&W attack and PGD attack BID25 have been frequently used to benchmark the defense algorithms due to their effectiveness. Throughout, we take the PGD attack as an example, largely following BID25.The goal of PGD attack is to find adversarial examples in a γ-ball, which can be naturally formulated as the following objective function: DISPLAYFORM0 Starting from x 0 = x o, PGD attack conducts projected gradient descent iteratively to update the adversarial example: DISPLAYFORM1 where Π γ is the projection to the set {x| x−x o ∞ ≤ γ}. Although multi-step PGD iterations may not necessarily return the optimal adversarial examples, we decided to apply it in our experiments, following the previous work of BID25 ). An advantage of PGD attack over C&W attack is that it gives us a direct control of distortion by changing γ, while in C&W attack we can only do this indirectly via tuning the regularizer. Since we are dealing with networks with random weights, we elaborate more on which strategy should attackers take to increase their success rate, and the details can be found in. In random neural networks, an attacker seeks a universal distortion ξ that cheats a majority of realizations of the random weights. This can be achieved by maximizing the loss expectation DISPLAYFORM2 Here the model weights w are considered as random vector following certain distributions. 
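A standard sketch of the ∞-norm PGD iteration described above is given below (PyTorch); the step size, step count, and pixel range are illustrative defaults rather than the paper's exact settings.

```python
import torch

def pgd_attack(model, x, y, loss_fn, gamma=8/255, eta=2/255, steps=20):
    """L-infinity PGD: ascend the loss, then project back onto the gamma-ball."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + eta * grad.sign()
            x_adv = x + torch.clamp(x_adv - x, -gamma, gamma)  # project
            x_adv = torch.clamp(x_adv, 0.0, 1.0)               # valid pixels
    return x_adv.detach()
```

For the randomized networks considered here, each call to model() may draw fresh noise, which is exactly the multi-step strategy the text recommends for attackers.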
In fact, solving to a saddle point can be done easily by performing multi-step (projected) SGD updates. This is done inherently in some iterative attacks such as C&W or PGD discussed above, where the only difference is that we sample new weights w at each iteration. Defense: There are a large variety of defense methods proposed in recent years, e.g. denoiser based HGD BID21 and randomized image preprocessing BID34. Readers can find more from BID18. Below we select two representative ones that turn out to be effective to white box attacks. They are the major baselines in our experiments. The first example is the adversarial training BID32 BID11. It is essentially a data augmentation method, which trains the deep neural networks on adversarial examples until the loss converges. Instead of searching for adversarial examples and adding them into the training data, BID25 proposed to incorporate the adversarial search inside the training process, by solving the following robust optimization problem: DISPLAYFORM3 where D tr is the training data distribution. The above problem is approximately solved by generating adversarial examples using PGD attack and then minimizing the classification loss of the adversarial example. In this paper, we propose to incorporate adversarial training in Bayesian neural network to achieve better robustness. The other example is RSE BID23, in this algorithm the authors proposed a "noise layer", which fuses input features with Gaussian noise. They show empirically that an ensemble of models can increase the robustness of deep neural networks. Besides, their method can generate an infinite number of models on-the-fly without any additional memory cost. The noise layer is applied in both training and testing phases, so the prediction accuracy will not be largely affected. Our algorithm is different from RSE in two folds: 1) We add noise to each weight instead of input or hidden feature, and formally model it as a BNN. 2) We incorporate adversarial training to further improve the performance. The idea of BNN is illustrated in FIG0. Given the observable random variables (x, y), we aim to estimate the distributions of hidden variables w. In our case, the observable random variables correspond to the features x and labels y, and we are interested in the posterior over the weights p(w|x, y) given the prior p(w). However, the exact solution of posterior is often intractable: notice that p(w|x, y) = -but the denominator involves a high dimensional integral BID1, hence the conditional probabilities are hard to compute. To speedup inference, we generally have two approaches-we can either sample w ∼ p(w|x, y) efficiently without knowing the closed-form formula through, for example, Stochastic Gradient Langevin Dynamics (SGLD) BID33 ), or we can approximate the true posterior p(w|x, y) by a parametric distribution q θ (w), where the unknown parameter θ is estimated by minimizing KL q θ (w) p(w|x, y) over θ. For neural network, the exact form of KL-divergence can be unobtainable, but we can easily find an unbiased gradient estimator of it using backward propagation, namely Bayes by Backprop BID2.Despite that both methods are widely used and analyzed in-depth, they have some obvious shortcomings, making high dimensional Bayesian inference remain to be an open problem. For SGLD and its extension (e.g. BID19), since the algorithms are essentially SGD updates with extra Gaussian noise, they are very easy to implement. 
However, they can only get one sample w ∼ p(w|x, y) in each minibatch iteration at the cost of one forward-backward propagation, thus not efficient enough for fast inference. In addition, as the step size η t in SGLD decreases, the samples become more and more correlated so that one needs to generate many samples in order to control the variance. Conversely, the variational inference method is efficient to generate samples since we know the approximated posterior q θ (w) once we minimized the KL-divergence. The problem is that for simplicity we often assume the approximation q θ to be a fully factorized Gaussian distribution: DISPLAYFORM0 Although our assumption has a simple form, it inherits the main drawback from mean-field approximation. When the ground truth posterior has significant correlation between variables, the approximation in will have a large deviation from true posterior p(w|x, y). This is especially true for convolutional neural networks, where the values in the same convolutional kernel seem to be highly correlated. However, we still choose this family of distribution in our design as the simplicity and efficiency are mostly concerned. In fact, there are many techniques in deep learning area borrowing the idea of Bayesian inference without mentioning explicitly. For example, Dropout BID31 ) is regarded as a powerful regularization tool for deep neural networks, which applies an element-wise product of the feature maps and i.i.d. Bernoulli or Gaussian r.v. B(1, α) (or N (1, α)). If we allow each dimension to have an independent dropout rate and take them as model parameters to be learned, then we can extend it to the variational dropout method BID16. Notably, learning the optimal dropout rates for data relieves us from manually tuning hyper-parameter on hold-out data. Similar idea is also used in RSE BID23, except that it was used to improve the robustness under adversarial attacks. As we discussed in the previous section, RSE incorporates Gaussian noise ∼ N (0, σ 2) in an additive manner, where the variance σ 2 is user predefined in order to maximize the performance. Different from RSE, our Adv-BNN has two degrees of freedom (mean and variance) and the network is trained on adversarial examples. In our method, we combine the idea of adversarial training BID25 with Bayesian neural network, hoping that the randomness in the weights w provides stronger protection for our model. To build our Bayesian neural network, we assume the joint distribution q µ,s (w) is fully factorizable (see FORMULA6), and each posterior q µi,si (w i) follows normal distribution with mean µ i and standard deviation exp(s i) > 0. The prior distribution is simply isometric Gaussian N (0 d, s 2 0 I d×d). We choose the Gaussian prior and posterior for its simplicity and closed-form KL-divergence, that is, for any two Gaussian distributions s and t, DISPLAYFORM0 Note that it is also possible to choose more complex priors such as "spike-and-slab" BID14 or Gaussian mixture, although in these cases the KL-divergence of prior and posterior is hard to compute and practically we replace it with the Monte-Carlo estimator, which has higher variance, ing in slower convergence rate BID15.Following the recipe of variational inference, we adapt the robust optimization to the evidence lower bound (ELBO) w.r.t. the variational parameters during training. 
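The closed-form KL between two Gaussians referenced above did not survive extraction; the standard diagonal-Gaussian result, specialized to the posterior N(µ, exp(s)^2) and prior N(0, s0^2) used here, can be sketched as:

```python
import torch

def kl_gaussian(mu, s, s0=1.0):
    """KL( N(mu, exp(s)^2) || N(0, s0^2) ), summed over all dimensions.
    Standard formula for diagonal Gaussians; s is the log-std, as in the text."""
    var = torch.exp(2 * s)
    return (torch.log(torch.tensor(s0)) - s
            + (var + mu ** 2) / (2 * s0 ** 2) - 0.5).sum()
```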
First of all, recall the ELBO on the original dataset (the unperturbed data) can be written as DISPLAYFORM1 rather than directly maximizing the ELBO in FORMULA8, we consider the following alternative objective, DISPLAYFORM2 This is essentially finding the minima for each data point (x i, y i) ∈ D tr inside the γ-norm ball, we can also interpret as an even looser lower bound of evidence. So the robust optimization procedure is to maximize, i.e. DISPLAYFORM3 To make the objective more specific, we combine with FORMULA10 and get arg max DISPLAYFORM4 In our case, p(y|x DISPLAYFORM5] is the network output on the adversarial sample (x adv i, y i). More generally, we can reformulate our model as y = f (x; w)+ζ and assume the residual ζ follows either Logistic or Gaussian distribution depending on the specific problem, so that our framework includes both classification and regression tasks. We can see that the only difference between our Adv-BNN and the standard BNN training is that the expectation is now taken over the adversarial examples (x adv, y), rather than natural examples (x, y). Therefore, at each iteration we first apply a randomized PGD attack (as introduced in eq) for T iterations to find x adv, and then fix the x adv to update µ, s. When updating µ and s, the KL term in can be calculated exactly by, whereas the second term is very complex (for neural networks) and can only be approximated by sampling. Besides, in order to fit into the back-propagation framework, we adopt the Bayes by Backprop algorithm BID2. Notice that we can reparameterize w = µ + exp(s), where ∼ N (0 d, I d×d) is a parameter free random vector, then for any differentiable function h(w, µ, s), we can show that DISPLAYFORM6 Now the randomness is decoupled from model parameters, and thus we can generate multiple to form a unbiased gradient estimator. To integrate into deep learning framework more easily, we also designed a new layer called RandLayer, which is summarized in appendix. It is worth noting that once we assume the simple form of variational distribution, we can also adopt the local reparameterization trick BID16. That is, rather than sampling the weights w, we directly sample the activations and enjoy the lower variance during the sampling process. Although in our experiments we find the simple Bayes by Backprop method efficient enough. For ease of doing SGD iterations, we rewrite into a finite sum problem by dividing both sides by the number of training samples N tr µ *, s * = arg min DISPLAYFORM7 here we define g(µ, s) KL(q µ,s (w) p(w)) by the closed form solution, so there is no randomness in it. We sample new weights by w = µ + exp (s) in each forward propagation, so that the stochastic gradient is unbiased. In practice, however, we need a weaker regularization for small dataset or large model, since the original regularization in can be too large. We fix this problem by adding a factor 0 < α ≤ 1 to the regularization term, so the new loss becomes DISPLAYFORM8 In our experiments, we found little to no performance degradation compared with the same network without randomness, if we choose a suitable hyper-parameter α, as well as the prior distribution N (0, s 2 0 I). The overall training algorithm is shown in Alg. 1. To sum up, our Adv-BNN method trains an arbitrary Bayesian neural network with the min-max robust optimization, which is similar to BID25. 
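The appendix refers to a RandLayer built on the reparameterization w = µ + exp(s) ⊙ ε. A plausible minimal version is sketched below; this is our reconstruction, and the paper's actual implementation may differ in details such as initialization.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RandLinear(nn.Module):
    """Variational linear layer: a fresh weight sample w = mu + exp(s) * eps
    is drawn on every forward pass (Bayes by Backprop reparameterization)."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.mu = nn.Parameter(0.01 * torch.randn(out_features, in_features))
        self.s = nn.Parameter(torch.full((out_features, in_features), -5.0))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        eps = torch.randn_like(self.mu)        # parameter-free noise
        w = self.mu + torch.exp(self.s) * eps  # reparameterized weight sample
        return F.linear(x, w, self.bias)
```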
As we mentioned earlier, even though our model contains noise and eventually the gradient information is also noisy, by doing multiple forward-backward iterations, the noise will be cancelled out due to the law of large numbers. This is also the suggested way to bypass some stochastic defenses in.Algorithm 1 Code snippet for training Adv-BNN 1: procedure pgd attack(x, y, w) 2:Perform the PGD-attack, omitted for brevity 3: procedure train(data, w) 4:Input: dataset and network weights w 5:for (x, y) in data do 6:x adv ← pgd attack(x, y, w) Generate adversarial images 7: DISPLAYFORM9 loss ce ← cross entropy(ŷ, y) Cross-entropy loss 10:loss kl ← kl divergence(w) KL-divergence following FORMULA7 Will it be beneficial to have randomness in adversarial training? After all, both randomized network and adversarial training can be viewed as different ways for controlling local Lipschitz constants of the loss surface around the image manifold, and thus it is non-trivial to see whether combining those two techniques can lead to better robustness. The connection between randomized network (in particular, RSE) and local Lipschitz regularization has been derived in BID23. Adversarial training can also be connected to local Lipschitz regularization with the following arguments. Recall that the loss function given data (x i, y i) is denoted as f (x i ; w), y i, and similarly the loss on perturbed data (x i + ξ, y i) is f (x i + ξ; w), y i ). Then if we expand the loss to the first order DISPLAYFORM10 we can see that the robustness of a deep model is closely related to the gradient of the loss over the input, i.e. ∇ xi f (x i), y i. If ∇ xi f (x i), y i is large, then we can find a suitable ξ such that ∆ is large. Under such condition, the perturbed image x i + ξ is very likely to be an adversarial example. It turns out that adversarial training directly controls the local Lipschitz value on the training set, this can be seen if we combine FORMULA1 with DISPLAYFORM11 Moreover, if we ignore the higher order term O(ξ 2) then becomes DISPLAYFORM12 In other words, the adversarial training can be simplified to Lipschitz regularization, and if the model generalizes, the local Lipschitz value will also be small on the test set. Yet, as BID22 indicates, for complex dataset like CIFAR-10, the local Lipschitz is still very large on test set, even though it is controlled on training set. The drawback of adversarial training motivates us to combine the randomness model with adversarial training, and we observe a significant improvement over adversarial training or RSE alone (see the experiment section below). In this section, we test the performance of our robust Bayesian neural networks (Adv-BNN) with strong baselines on a wide variety of datasets. In essence, our method is inspired by adversarial training BID25 and BNN BID2, so these two methods are natural baselines. If we see a significant improvement in adversarial robustness, then it means that randomness and robust optimization have independent contributions to defense. Additionally, we would like to compare our method with RSE BID23, another strong defense algorithm relying on randomization. Lastly, we include the models without any defense as references. For ease of reproduction, we list the hyper-parameters in the appendix. Readers can also refer to the source code on github. It is known that adversarial training becomes increasingly hard for high dimensional data BID29. 
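The Alg. 1 pseudocode above was garbled in extraction; a cleaned-up sketch of one training step, reusing the pgd_attack, kl_gaussian, and RandLinear sketches from earlier, might look like the following. The α-scaled KL term follows the weakened regularization described in the text; n_train and the other names are ours.

```python
import torch.nn.functional as F

def advbnn_step(model, optimizer, x, y, gamma, alpha, n_train):
    """One Adv-BNN update: craft adversarial inputs first, then descend the
    cross-entropy plus the alpha-weighted KL regularizer."""
    x_adv = pgd_attack(model, x, y, F.cross_entropy, gamma=gamma)
    optimizer.zero_grad()
    loss_ce = F.cross_entropy(model(x_adv), y)   # fresh weight sample inside
    loss_kl = sum(kl_gaussian(m.mu, m.s) for m in model.modules()
                  if isinstance(m, RandLinear))
    loss = loss_ce + alpha * loss_kl / n_train
    loss.backward()
    optimizer.step()
    return float(loss)
```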
In addition to standard low dimensional dataset such as CIFAR-10, we also did experiments on two more challenging datasets: 1) STL-10 (BID6, which has 5,000 training images and 8,000 testing images. Both of them are 96 × 96 pixels; 2) ImageNet-143, which is a subset of ImageNet BID7, and widely used in conditional GAN training BID26. The dataset has 18,073 training and 7,105 testing images, and all images are 64×64 pixels. It is a good benchmark because it has much more classes than CIFAR-10, but is still manageable for adversarial training. In the first experiment, we compare the accuracy under the white box ∞ -PGD attack. We set the maximum ∞ distortion to γ ∈ [0:0.07:0.005] and report the accuracy on test set. The are shown in Fig. 2. Note that when attacking models with stochastic components, we adjust PGD accordingly as mentioned in Section 2.1. To demonstrate the relative performance more clearly, we show some numerical in Tab. 1. No defense BNN Adv. training AdvBNN RSE Figure 2: Accuracy under ∞ -PGD attack on three different datasets: CIFAR-10, STL-10 and ImageNet-143. In particular, we adopt a smaller network for STL-10 namely "Model A" 1, while the other two datasets are trained on VGG.From Fig. 2 and Tab. 1 we can observe that although BNN itself does not increase the robustness of the model, when combined with the adversarial training method, it dramatically increase the testing accuracy for ∼10% on a variety of datasets. Moreover, the overhead of Adv-BNN over adversarial training is small: it will only double the parameter space (for storing mean and variance), and the total training time does not increase much. Finally, similar to RSE, modifying existing network Table 1: Comparing the testing accuracy under different levels of PGD attacks. We include our method, Adv-BNN, and the state of the art defense method, the multi-step adversarial training proposed in BID25. The better accuracy is marked in bold. Notice that although our Adv-BNN incurs larger accuracy drop in the original test set (where ξ ∞ = 0), we can choose a smaller α in so that the regularization effect is weakened, in order to match the accuracy.architectures into BNN is fairly simple, we only need to replace Conv/BatchNorm/Linear layers by their variational version. Hence we can easily build robust models based on existing ones. Is our Adv-BNN model susceptible to transfer attack? we answer this question by studying the affinity between models, because if two models are similar (e.g. in loss landscape) then we can easily attack one model using the adversarial examples crafted through the other. In this section, we measure the adversarial sample transferability between different models namely None (no defense), BNN, Adv. Train, RSE and Adv-BNN. This is done by the method called "transfer attack" BID24. Initially it was proposed as a black box attack algorithm: when the attacker has no access to the target model, one can instead train a similar model from scratch (called source model), and then generate adversarial samples with source model. As we can imagine, the success rate of transfer attack is directly linked with how similar the source/target models are. In this experiment, we are interested in the following question: how easily can we transfer the adversarial examples between these five models? We study the affinity between those models, where the affinity is defined by DISPLAYFORM0 where ρ However, ρ A →B = ρ B →A is not necessarily true, so the affinity matrix is not likely to be symmetric. 
We illustrate the results in Fig. 3. We can observe that {None, BNN} are similar models; their affinity is strong (ρ ≈ 0.85) in both directions, ρ_{BNN→None} and ρ_{None→BNN}. Likewise, {RSE, Adv-BNN, Adv. Train} constitute the other group, yet the affinity is not very strong (ρ ≈ 0.5∼0.6), meaning these three methods are all robust to the black-box attack to some extent. The following experiments are not crucial to showing the success of our method, but we include them to clarify some doubts of careful readers. The first question is about sample efficiency: recall that in the prediction stage we sample weights from the approximated posterior and generate the label by

ŷ = arg max_c (1/n) Σ_{j=1}^{n} f_c(x; w_j), w_j ∼ q_{µ,s}(w).

In practice, we do not want to average over many forward propagations to control the variance, which would make the prediction stage much slower than for other models. Here we take the ImageNet-143 data + VGG network as an example to show that only 10∼20 forward operations are sufficient for robust and accurate prediction. Furthermore, this number appears to be independent of the adversarial distortion, as we can see in Fig. 4 (left). So our algorithm is especially suitable for large-scale scenarios. One might also be concerned about whether 20 steps of PGD iterations are sufficient to find adversarial examples. It has been known that for certain adversarial defense methods, the effectiveness appears to be worse than claimed if we increase the PGD steps from 20 to 100. In Fig. 4 (right), we show that even if we increase the number of iterations to 1000, the accuracy does not change very much. This means that even if the adversary invests more resources to attack our model, its marginal benefit is negligible. Figure 4: Left: we tried different numbers of forward propagations and averaged the results to make predictions. We see that for different scales of perturbation γ ∈ {0, 0.01, 0.02}, choosing an ensemble size of n = 10∼20 is good enough. Right: testing accuracy stabilizes quickly once the number of PGD steps exceeds 20, so there is no necessity to further increase the number of PGD steps. To conclude, we find that although the Bayesian neural network has no defense functionality on its own, when combined with adversarial training its robustness against adversarial attack increases significantly. So this method can be regarded as a non-trivial combination of BNN and adversarial training: robust classification relies on a controlled local Lipschitz value, while adversarial training alone does not generalize this property well enough to the test set; if we train the BNN with adversarial examples, the robustness increases by a large margin. Admittedly, our method is still far from the ideal case, and it is still an open problem what the optimal defense solution will be. We largely follow the guidelines for attacking networks with "obfuscated gradients". Specifically, we derive the algorithm for a white-box attack on random networks, denoted f(x; w, ε), where w is the (fixed) network parameters and ε is the random vector. Many random neural networks can be reparameterized to this form, where each forward propagation returns different results. In particular, this framework includes our Adv-BNN model by setting w = (µ, s). Recall that the prediction is made through "majority voting":

ŷ = arg max_c E_ε [f_c(x; w, ε)].

So the optimal white-box attack should maximize the expected loss on the ground-truth label y*. That is,

ξ* = arg max_{‖ξ‖∞ ≤ γ} E_ε [ℓ(f(x + ξ; w, ε), y*)],

and then x_adv = x + ξ*. To solve this, we apply an SGD optimizer and draw a fresh noise sample at each iteration,

ξ ← Π_γ (ξ + η ∇_ξ ℓ(f(x + ξ; w, ε_t), y*)), ε_t ∼ N(0, I);

one can see that the iterations approximately solve the objective above. It is very easy to implement the forward and backward propagation in BNN. Here we introduce the RandLayer, which can be seamlessly integrated into major deep learning frameworks. We take PyTorch as an example; the code snippet is shown in Alg. 1.
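A sketch of the randomized attack just derived: a fresh noise draw on every forward pass makes the SGD iterates approximately ascend the expected loss. Hyperparameters are illustrative, and the stochastic model is assumed to resample its weights internally on each call (as RandLinear above does).

```python
import torch
import torch.nn.functional as F

def eot_pgd_attack(model, x, y_true, gamma=0.035, eta=0.01, steps=100, n_samples=4):
    """PGD against a stochastic network: each forward pass draws new noise,
    and gradients are averaged over n_samples draws per step."""
    xi = torch.zeros_like(x)
    for _ in range(steps):
        xi.requires_grad_(True)
        loss = sum(F.cross_entropy(model(x + xi), y_true)
                   for _ in range(n_samples)) / n_samples
        grad, = torch.autograd.grad(loss, xi)
        with torch.no_grad():
            xi = torch.clamp(xi + eta * grad.sign(), -gamma, gamma)  # project
    return (x + xi).detach()
```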
We design an adversarial training method for Bayesian neural networks, showing a much stronger defense against white-box adversarial attacks.
405
scitldr
Federated learning distributes model training among a multitude of agents, who, guided by privacy concerns, perform training using their local data but share only model parameter updates, for iterative aggregation at the server. In this work, we explore the threat of model poisoning attacks on federated learning initiated by a single, non-colluding malicious agent where the adversarial objective is to cause the model to misclassify a set of chosen inputs with high confidence. We explore a number of strategies to carry out this attack, starting with simple boosting of the malicious agent's update to overcome the effects of other agents' updates. To increase attack stealth, we propose an alternating minimization strategy, which alternately optimizes for the training loss and the adversarial objective. We follow up by using parameter estimation for the benign agents' updates to improve on attack success. Finally, we use a suite of interpretability techniques to generate visual explanations of model decisions for both benign and malicious models and show that the explanations are nearly visually indistinguishable. Our indicate that even a highly constrained adversary can carry out model poisoning attacks while simultaneously maintaining stealth, thus highlighting the vulnerability of the federated learning setting and the need to develop effective defense strategies. Federated learning introduced by BID11 has recently emerged as a popular implementation of distributed stochastic optimization for large-scale deep neural network training. It is formulated as a multi-round strategy in which the training of a neural network model is distributed between multiple agents. In each round, a random subset of agents, with local data and computational resources, is selected for training. The selected agents perform model training and share only the parameter updates with a centralized parameter server, that facilitates aggregation of the updates. Motivated by privacy concerns, the server is designed to have no visibility into an agents' local data and training process. The aggregation algorithm is agnostic to the data distribution at the agents. In this work, we exploit this lack of transparency in the agent updates, and explore the possibility of a single malicious agent performing a model poisoning attack. The malicious agent's objective is to cause the jointly trained global model to misclassify a set of chosen inputs with high confidence, i.e., it seeks to introduce a targeted backdoor in the global model. In each round, the malicious agent generates its update by optimizing for a malicious objective different than the training loss for federated learning. It aims to achieve this by generating its update by directly optimizing for the malicious objective. However, the presence of a multitude of other agents which are simultaneously providing updates makes this challenging. Further, the malicious agent must ensure that its update is undetectable as aberrant. Contributions: To this end, we propose a sequence of model poisoning attacks, with the aim of achieving the malicious objective while maintaining attack stealth. For each strategy, we consider both attack strength as well as stealth. We start with malicious update boosting, designed to negate the combined effect of the benign agents, which enables the adversary to achieve its malicious objective with 100% confidence. 
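As a concrete illustration of update boosting, here is a minimal sketch under the assumption that the server averages the K agents' updates, so a boosting factor on the order of K offsets the dilution; all names are ours.

```python
def boosted_malicious_update(w_global, w_malicious_target, boost):
    """Explicit boosting: scale the malicious delta so that, after the server
    averages it with K-1 (roughly cancelling) benign updates, the global
    model still moves toward the attacker's target. boost = K is a natural
    choice under plain averaging."""
    delta = {k: w_malicious_target[k] - w_global[k] for k in w_global}
    return {k: boost * d for k, d in delta.items()}
```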
However, we show that boosted updates can be detected as aberrant using two measures of stealth, accuracy checking on the benign objective and parameter update statistics. Observing that the only parameter updates that need to be boosted are those that contribute to the malicious objective, we design an alternating minimization strategy that improves attack stealth. This strategy alternates between training loss minimization and the boosting of updates for the malicious objective and is able to achieve high success rates on both the benign and malicious objectives. In addition, we show that estimating the other agents' updates improves attack success rates. Finally, we use a suite of interpretability techniques to generate visual explanations of the decisions made by a global model with and without a targeted backdoor. Interestingly, we observe that the explanations are nearly visually indistinguishable. This establishes the attack stealth along yet another axis of measurement and indicates that backdoors can be inserted without drastic changes in model focus at the input. In our experiments, we consider adversaries which only control a single malicious agent and, at a given time step, have no visibility into the updates that will be provided by the other agents. We demonstrate that these adversaries can influence the global model to misclassify particular examples with high confidence. We work with both the Fashion-MNIST and Adult Census datasets and, for settings with both 10 and 100 agents, our attacks are able to ensure the global model misclassifies a particular example in a target class with 100% confidence. Our alternating minimization attack further ensures that the global model converges to the same test set accuracy as the case with no adversaries present. We also show that a simple estimation of the benign agents' updates as being identical over two consecutive rounds aids in improving attack success. Related Work: While data poisoning attacks BID1 BID16 BID12 BID22 BID12 BID9 BID4 BID7 have been widely studied, model poisoning attacks are largely unexplored. A number of works on defending against Byzantine adversaries consider a threat model where Byzantine agents send arbitrary gradient updates BID2 BID5 BID13 BID23. However, the adversarial goal in these cases is to ensure a distributed implementation of the Stochastic Gradient Descent (SGD) algorithm converges to 'suboptimal to utterly ineffective models', quoting from BID13. In complete contrast, our goal is to ensure convergence to models that are effective on the test set but misclassify certain examples. In fact, we show that the Byzantine-resilient aggregation mechanism 'Krum' BID2 is not resilient to our attack strategies (Appendix C). Concurrent work by BID0 considers multiple colluding agents performing poisoning via model replacement at convergence time. In contrast, our goal is to induce targeted misclassification in the global model by a single malicious agent even when it is far from convergence, while maintaining its accuracy for most tasks. In fact, we show that updates generated by their strategy fail to achieve either malicious or benign objectives in the settings we consider. In this section, we formulate both the learning paradigm and the threat model that we consider throughout the paper. Operating in the federated learning paradigm, where model weights are shared instead of data, gives rise to the model poisoning attacks that we investigate.
The federated learning setup consists of K agents, each with access to data $D_i$ of size $l_i = |D_i|$. The total number of samples is $\sum_i l_i = l$. Each agent keeps its share of the data (referred to as a shard) private, i.e. $D_i = \{x_j, y_j\}_{j=1}^{l_i}$ is not shared with the server S. The objective of the server is to learn a global parameter vector $w_G \in \mathbb{R}^n$, where n is the dimensionality of the parameter space. This parameter vector minimizes the loss over $D = \cup_i D_i$ and the aim is to generalize well over $D_{test}$, the test data. Federated learning is designed to handle non-i.i.d. partitioning of training data among the different agents. Traditional poisoning attacks deal with a malicious agent who poisons some fraction of the data in order to ensure that the learned model satisfies some adversarial goal. We consider instead an agent who poisons the model updates it sends back to the server. This attack is a plausible threat in the federated learning setting as the model updates from the agents can (i) directly influence the parameters of the global model via the aggregation algorithm; and (ii) display high variability, due to the non-i.i.d. local data at the agents, making it harder to isolate the benign updates from the malicious ones. Adversary Model: We make the following assumptions regarding the adversary: (i) there is exactly one non-colluding, malicious agent with index m (limiting the effect of malicious updates on the global model); (ii) the data is distributed among the agents in an i.i.d. fashion (making it easier to discriminate between benign and possibly malicious updates and harder to achieve attack stealth); (iii) the malicious agent has access to a subset of the training data $D_m$ as well as to auxiliary data $D_{aux}$ drawn from the same distribution as the training and test data that are part of its adversarial objective. Our aim is to explore the possibility of a successful model poisoning attack even for a highly constrained adversary. A malicious agent can have one of two objectives with regard to the loss and/or classification of a data subset at any time step t in the model poisoning setting: 1. Increase the overall loss: In this case, the malicious agent wishes to increase the overall loss on a subset $D_{aux} = \{x_i, y_i\}_{i=1}^{r}$ of the data, i.e. to maximize $L(D_{aux}, w_G^t)$. 2. Cause targeted misclassification: Here, the malicious agent wishes to have the examples $\{x_i\}_{i=1}^{r}$ in $D_{aux}$ classified as attacker-chosen target classes $\tau_i$. This corresponds to a targeted misclassification attempt by the malicious agent. In this paper, we will focus on malicious agents trying to attain the second objective, i.e. targeted misclassification. At first glance, the problem seems like a simple one for the malicious agent to solve. However, it does not have access to the global parameter vector $w_G^t$ for the current iteration as is the case in standard poisoning attacks BID15 BID9 and can only influence it through the weight update $\delta_m^t$ it provides to the server S. The simplest formulation of the optimization problem the malicious agent has to solve such that its objective is achieved on the t-th iteration is then $\arg\min_{\delta_m^t} L(\{x_i, \tau_i\}_{i=1}^{r}, w_G^t)$, where $w_G^t = w_G^{t-1} + \alpha_m \delta_m^t + \sum_{i \neq m} \alpha_i \delta_i^t$. In order to illustrate how our attack strategies work with actual data and models, we use two qualitatively different datasets. The first is an image dataset, Fashion-MNIST BID21, which consists of 28 × 28 grayscale images of clothing and footwear items and has 10 output classes. The training set contains 60,000 data samples while the test set has 10,000 samples. For the model architecture, we use a Convolutional Neural Network achieving 91.7% accuracy on the test set.
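For concreteness, here is a minimal sketch of the server-side aggregation rule implied above, assuming the weighted-delta form $w_G^{t+1} = w_G^t + \sum_i \alpha_i \delta_i^t$ with $\alpha_i = l_i/l$; the function and variable names are illustrative, not the paper's code.

```python
import numpy as np

def aggregate(w_global, deltas, shard_sizes):
    """Server-side aggregation: w_G^{t+1} = w_G^t + sum_i alpha_i * delta_i,
    with alpha_i = l_i / l (agent shard size over total sample count)."""
    alphas = np.asarray(shard_sizes, dtype=float) / float(np.sum(shard_sizes))
    update = sum(a * d for a, d in zip(alphas, deltas))
    return w_global + update

# Toy usage: 3 agents, a 4-dimensional parameter vector.
w = np.zeros(4)
deltas = [np.ones(4), 2 * np.ones(4), -np.ones(4)]
w = aggregate(w, deltas, shard_sizes=[100, 100, 200])
```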
The second dataset is the UCI Adult dataset, which has over 40,000 samples containing information about adults from the 1994 US Census. The classification problem is to determine if the income for a particular individual is greater (class '0') or less (class '1') than $50,000 a year. For this dataset, we use a fully connected neural network achieving 84.8% accuracy on the test set BID6 as the model architecture. Owing to space constraints, all results for this dataset are in the Appendix. For both datasets, we study the case with the number of agents k set to 10 and 100. When k = 10, all the agents are chosen at every iteration, while with k = 100, a tenth of the agents are chosen at random every iteration. We run federated learning until a pre-specified test accuracy (91% for Fashion-MNIST and 84% for the Adult Census data) is reached or the maximum number of time steps has elapsed (40 for k = 10 and 50 for k = 100). For most of our experiments, we consider the case when r = 1, which implies that the malicious agent aims to misclassify a single example in a desired target class. For both datasets, a random sample from the test set is chosen as the example to be misclassified. For the Fashion-MNIST dataset, the sample belongs to class '5' (sandal) with the aim of misclassifying it in class '7' (sneaker), and for the Adult dataset it belongs to class '0' with the aim of misclassifying it in class '1'. We begin by investigating baseline attacks which do not conform to any notion of stealth. We then show how simple detection methods at the server may expose the malicious agent and explore the extent to which modifications to the baseline attack can bypass these methods. In order to solve the exact optimization problem needed to achieve its objective, the malicious agent needs access to the current value of the overall parameter vector $w_G^t$, which is inaccessible. This occurs due to the nature of the federated learning algorithm, where S computes $w_G^t$ only once it has received updates from all agents. In this case, the malicious agent has to optimize over an estimate of the value of $w_G^t$: $\hat{w}_G^t = f(I_m^t)$, where $f(\cdot)$ is an estimator for $\hat{w}_G^t$ based on all the information $I_m^t$ available to the adversary. We refer to this as the limited information poisoning objective. The problem of choosing a good estimator is deferred to Section 4, and the strategies discussed in the remainder of this section make the assumption that $\hat{w}_G^t = w_G^{t-1} + \alpha_m \delta_m^t$. In other words, the malicious agent ignores the effects of other agents. As we shall see, this assumption is often enough to ensure the attack works in practice. Using the approximation that $\hat{w}_G^t = w_G^{t-1} + \alpha_m \delta_m^t$, the malicious agent can directly minimize its adversarial loss. Depending on the exact structure of the loss, an appropriate optimizer can be chosen. For our experiments, we will rely on gradient-based optimizers such as SGD which work well for neural networks. In order to overcome the effect of scaling by $\alpha_m$ at the server, the final update $\tilde{\delta}_m^t$ that is returned has to be boosted. Explicit Boosting: Mimicking a benign agent, the malicious agent can run $E_m$ steps of a gradient-based optimizer starting from $w_G^{t-1}$ to obtain an initial update, which is then boosted by a factor $\lambda$ before being returned. Implicit Boosting: While the loss is a function of a weight vector w, we can use the chain rule to obtain the gradient of the loss with respect to the weight update δ, i.e. $\nabla_\delta L = \alpha_m \nabla_w L$. Then, initializing δ to some appropriate $\delta_{ini}$, the malicious agent can directly minimize the loss with respect to δ. Results (FIG1): The attack is clearly successful at causing the global model to classify the chosen example in the target class.
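The following PyTorch sketch illustrates the explicit-boosting idea under stated assumptions: the malicious objective is a cross-entropy toward the target class, and the boosting factor is taken to be $\lambda = 1/\alpha_m$; `explicit_boosting_update` and its signature are hypothetical, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def explicit_boosting_update(model, x_aux, tau, alpha_m, epochs=10, lr=0.01):
    """Sketch of explicit boosting: optimize the malicious objective locally,
    then scale the resulting delta by 1/alpha_m to survive server averaging."""
    w0 = [p.detach().clone() for p in model.parameters()]
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(model(x_aux), tau)  # targeted misclassification
        loss.backward()
        opt.step()
    lam = 1.0 / alpha_m  # boosting factor negating the server's scaling
    deltas = [lam * (p.detach() - w) for p, w in zip(model.parameters(), w0)]
    # Restore local weights so the boosted delta is all that is sent.
    with torch.no_grad():
        for p, w in zip(model.parameters(), w0):
            p.copy_(w)
    return deltas
```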
In fact, after t = 3, the global model is highly confident in its (incorrect) prediction. The baseline attack using implicit boosting (FIG3) is much less successful than the explicit boosting baseline, with the adversarial objective only being achieved in 4 of 10 iterations. Further, it is computationally more expensive, taking an average of 2000 steps to converge at each time step, which is about 4× longer than a benign agent. Since consistently delayed updates from the malicious agent might lead to it being dropped from the system in practice, we focus on explicit boosting attacks for the remainder of the paper as they do not add as much overhead. While the baseline attack is successful at meeting the malicious agent's objective, there are methods the server can employ in order to detect if an agent's update is malicious. We now discuss two possible methods and their implications for the baseline attack. We note that neither of these methods is part of the standard federated learning algorithm, nor do they constitute a full defense at the server. They are merely metrics that may be utilized in a secure system. Accuracy checking: When any agent sends a weight update to the server, the server can check the validation accuracy of $w_i^t = w_G^{t-1} + \delta_i^t$, the model obtained by adding that update to the current state of the global model. If the resulting model has a validation accuracy much lower than that of the other agents, the server may be able to detect that model as coming from a malicious agent. This would be particularly effective in the case where the agents have i.i.d. data. In FIG1, the left plot shows the accuracy of the malicious model on the validation data (Acc. Mal) at each iteration. This is much lower than the accuracy of the global model (Acc. Global) and is no better than random for the first few iterations. Figure 3: Minimum and maximum L2 distances between weight updates. For each strategy, we show the spread of L2 distances between all the benign agents and between the malicious agent and the benign agents. Going from the baseline attack to the alternating minimization attack with and without distance constraints, we see that the gap in the spread of distances reduces, making the attack stealthier. The benign agents behave almost identically across strategies, indicating that the malicious agent does not interfere much with their training. Weight update statistics: There are both qualitative and quantitative methods the server can apply in order to detect weight updates which are malicious, or at the least, different from a majority of the other agents. We investigate the effectiveness of two such methods. The first, qualitative method is the visualization of weight update distributions for each agent. Since the adversarial objective function is different from the training loss objective used by all the benign agents, we expect the distribution of weight updates to be very different. This is borne out by the representative weight update distribution at t = 4 observed for the baseline attack in FIG1. Compared to the weight update from a benign agent, the update from the malicious agent is much sparser and has a smaller range. This difference is more pronounced for later time steps (see FIG13 in Appendix B). The second, quantitative method uses the spread of pairwise $L_p$ distances between weight update vectors to identify outliers.
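A minimal sketch of the two server-side stealth metrics described above, assuming flattened weight-update vectors; `apply_update` is a hypothetical callable that evaluates the global model with an agent's update applied, and the threshold is illustrative.

```python
import numpy as np
from itertools import combinations

def pairwise_l2_spread(updates):
    """Min/max pairwise L2 distances among flattened weight updates; an
    update whose distances fall far outside the benign spread can be
    flagged as an outlier."""
    flat = [u.ravel() for u in updates]
    dists = [np.linalg.norm(a - b) for a, b in combinations(flat, 2)]
    return min(dists), max(dists)

def accuracy_check(apply_update, val_x, val_y, threshold=0.5):
    """Flag an agent whose update, applied to the global model, yields
    validation accuracy far below that of the other agents."""
    preds = apply_update(val_x)          # predictions of w_G^{t-1} + delta_i
    acc = float(np.mean(preds == val_y))
    return acc < threshold, acc
```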
At each time step, the server computes the pairwise distances between all the weight updates it receives, and flags those weight updates which are either much closer or much farther away than the others. In Figure 3, the spread of L2 distances between all benign updates and between the malicious update and the benign updates is plotted. For the baseline attack, both the minimum and maximum distances from any of the benign updates keep decreasing over time steps, while they remain relatively constant for the other agents. This can enable detection of the malicious agent. To bypass the two detection methods discussed in the previous section, the malicious agent can try to simultaneously optimize over the adversarial objective and the training loss for its local data shard $D_m$. Results: In practice, we optimize over batches of $D_m$ and concatenate each batch with the single instance {x, τ} to be misclassified, ensuring that the adversarial objective is satisfied. In fact, as seen in FIG1 in the plot on the right, the adversarial objective is satisfied with high confidence from the first time step t = 1. Since the entire weight update corresponding to both adversarial and training objectives is boosted, the accuracy of $w_m^t$ on the validation set is low throughout the federated learning process. Thus, this attack can easily be detected using the accuracy checking method. Further, while the weight update distribution for this attack (FIG1) is visually similar to that of benign agents, its range differs, again enabling detection. The malicious agent only needs to boost the part of the weight update that corresponds to the adversarial objective. In the baseline attack, in spite of this being the entire update, the resulting distribution is sparse and of low magnitude compared to a benign agent's updates. This indicates that the weight update needed to meet the adversarial objective could be hidden in an update that resembles that of a benign agent. However, as we saw in the previous section, boosting the entire weight update when the training loss is included leads to low validation accuracy. Further, the concatenation strategy does not allow the parts of the update corresponding to the two different objectives to be decoupled. To overcome this, we propose an alternating minimization attack strategy which works as follows for iteration t. For each epoch i, the adversarial objective is first minimized and only the resulting update is boosted; the training loss over $D_m$ is then minimized starting from the boosted weights, without boosting. Results: In FIG6, the plot on the left shows the evolution of the metrics of interest over iterations. The alternating minimization attack is able to achieve its goals as the accuracy of the malicious model closely matches that of the global model even as the adversarial objective is met with high confidence for all time steps starting from t = 3. This attack can bypass the accuracy checking method as the accuracy on test data of the malicious model is close to that of the global model. Qualitatively, the distribution of the malicious weight update (FIG6) is much more similar to that of the benign weights as compared to the baseline attack. Further, in Figure 3, we can see that the spread in distances between the malicious updates and benign updates is much closer to that between benign agents compared to the baseline attack. Thus, this attack is stealthier than the baseline. To increase attack stealth, the malicious agent can also add a distance-based constraint on $\tilde{w}_m^{i,t}$, which is the intermediate weight vector generated in the alternating minimization strategy.
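A rough PyTorch sketch of the alternating minimization loop, under the assumption that boosting is applied only to the step taken for the adversarial objective; the epoch count, learning rate, and function names are illustrative.

```python
import torch
import torch.nn.functional as F

def alternating_minimization(model, x_aux, tau, benign_loader, alpha_m,
                             epochs=5, lr=0.01):
    """Sketch: each epoch takes a boosted step on the adversarial objective,
    then unboosted steps on the benign training loss, so only the malicious
    part of the update is amplified while validation accuracy is preserved."""
    lam = 1.0 / alpha_m
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        # Adversarial step, boosted.
        before = [p.detach().clone() for p in model.parameters()]
        opt.zero_grad()
        F.cross_entropy(model(x_aux), tau).backward()
        opt.step()
        with torch.no_grad():
            for p, b in zip(model.parameters(), before):
                p.copy_(b + lam * (p - b))   # boost only the malicious step
        # Benign training step, unboosted.
        for xb, yb in benign_loader:
            opt.zero_grad()
            F.cross_entropy(model(xb), yb).backward()
            opt.step()
```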
There could be multiple local minima which lead to low training loss, but the malicious agent needs to send back a weight update that is as close as possible (in an appropriate distance metric) to the update it would have sent had it been benign. So, a distance-based penalty between $\tilde{w}_m^{i,t}$ and an estimate of the benign weight vector is added to the training objective. Constraints based on the empirical distribution of weights, such as the Wasserstein or total variation distances, may also be used. The adversarial objective is achieved at the global model with high confidence starting from time step t = 2, and the success of the malicious model on the benign objective closely tracks that of the global model throughout. The weight update distribution for this attack (FIG6) is again similar to that of a benign agent. Further, in Figure 3, we can see that the distance spread for this attack closely follows and even overlaps that of benign updates throughout, making it hard to detect using the L2 distance metric. In this section, we look at how the malicious agent can choose a better estimate for the effect of the other agents' updates at each time step that it is chosen. In the case when the malicious agent is not chosen at every time step, this estimation is made challenging by the fact that it may not have been chosen for many iterations. The malicious agent's goal is to choose an appropriate estimate for $\hat{\delta}^t_{[k]\setminus m}$ in Eq. 1. At a time step t when the malicious agent is chosen, the following information is available to it from the previous time steps at which it was chosen: (i) the global parameter vectors $w_G^{t'}$; and (ii) its own updates $\delta_m^{t'}$. Post-optimization correction: In this method, the malicious agent first computes its update and then corrects it by the estimate of the other agents' updates scaled by $\frac{1}{\alpha_m}$; if the estimate is exact, this will negate the effects of the other agents. However, due to estimation inaccuracy and the fact that the optimizer has not accounted for this correction, this method leads to poor empirical performance. Pre-optimization correction: Here, the malicious agent assumes that $\hat{w}_G^t = w_G^{t-1} + \alpha_m \delta_m^t + \hat{\delta}^t_{[k]\setminus m}$. In other words, the malicious agent optimizes for $\delta_m^t$ assuming it has an accurate estimate of the other agents' updates. For attacks which use explicit boosting, this involves starting from $w_G^{t-1} + \hat{\delta}^t_{[k]\setminus m}$ instead of $w_G^{t-1}$. When the malicious agent is chosen at time step t, information regarding the probable updates from the other agents can be obtained from the previous time steps at which the malicious agent was chosen. Previous step estimate: In this method, the malicious agent's estimate $\hat{\delta}^t_{[k]\setminus m}$ assumes that the other agents' cumulative updates were the same at each step since t' (the last time step at which the malicious agent was chosen), i.e. the per-step benign aggregate is estimated as the overall change observed since t' divided by the number of elapsed steps. In the case when the malicious agent is chosen at every time step, this reduces to $\hat{\delta}^t_{[k]\setminus m} = \delta^{t-1}_{[k]\setminus m}$. Results: Attacks using previous step estimation with the pre-optimization correction are more effective at achieving the adversarial objective for both the baseline and alternating minimization attacks. In FIG10, the global model misclassifies the desired sample with higher confidence for both the baseline and alternating minimization attacks at t = 2. Neural networks are often treated as black boxes with little transparency into their internal representation or understanding of the underlying basis for their decisions. Interpretability techniques are designed to alleviate these problems by analyzing various aspects of the network.
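A small sketch of the estimation strategies above, assuming the malicious agent stores the last global vector it observed and its own last update; the helper names are hypothetical, and dividing by the number of elapsed steps is one plausible reading of the "same at each step" assumption.

```python
def estimate_benign_update(w_prev_global, w_last_seen, delta_m_last,
                           alpha_m, steps_elapsed):
    """Previous-step estimate: attribute the observed change since t' to the
    benign agents (after removing the malicious agent's own contribution),
    spread uniformly over the elapsed steps."""
    total_change = w_prev_global - w_last_seen          # all updates since t'
    benign_change = total_change - alpha_m * delta_m_last
    return benign_change / max(steps_elapsed, 1)

def pre_optimization_start(w_prev_global, benign_estimate):
    """Pre-optimization correction: begin optimizing from the predicted next
    global state rather than from w_G^{t-1} alone."""
    return w_prev_global + benign_estimate
```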
These include (i) identifying the relevant features in the input pixel space for a particular decision via Layer-wise Relevance Propagation (LRP) techniques (BID14); (ii) visualizing the association between neuron activations and image features (Guided Backprop, DeConvNet); (iii) using gradients for attributing prediction scores to input features (e.g., Integrated Gradients BID20), or generating sensitivity and saliency maps (SmoothGrad BID18, Gradient Saliency Maps BID17), and so on. The semantic relevance of the generated visualization, relative to the input, is then used to explain the model decision. These interpretability techniques, in many ways, provide insights into the internal feature representations and working of a neural network. Therefore, we used a suite of these techniques to try to discriminate between the behavior of a benign global model and one that has been trained to satisfy the adversarial objective of misclassifying a single example. FIG11 compares the output of the various techniques for both the benign and malicious models on a random auxiliary data sample. Targeted perturbation of the model parameters coupled with tightly bounded noise ensures that the differences in the internal representations, and in the relevant input features used by the two models for the same input, are almost visually imperceptible. This reinforces the stealth achieved by our attacks with respect to another measure, namely interpretability-based detection techniques. In this paper, we have begun an exploration of the vulnerability of multi-party machine learning algorithms such as federated learning to model poisoning adversaries, who can take advantage of the very privacy these models are designed to provide. In future work, we plan to explore more sophisticated detection strategies at the server, which can provide guarantees against the type of attacker we have considered here. In particular, notions of distances between weight distributions are promising defensive tools. Our attacks in this paper demonstrate that federated learning in its basic form is very vulnerable to model poisoning adversaries, as are recently proposed Byzantine-resilient aggregation mechanisms. While detection mechanisms can make these attacks more challenging, they can be overcome, demonstrating that multi-party machine learning algorithms robust to attackers of the type considered here must be developed. When the number of agents increases to k = 100, the malicious agent is not selected in every step. Further, the size of $D_m$ decreases, which makes the benign training step in the alternating minimization attack more challenging. The challenges posed in this setting are reflected in Figure 8, where although the baseline attack is able to introduce a targeted backdoor, it cannot ensure it for every step due to steps where only benign agents provide updates. The alternating minimization attack is also able to introduce the backdoor, as well as increase the classification accuracy of the malicious model on test data. However, the improvement in performance is limited by the paucity of data for the malicious agent. It is an open question whether data augmentation could help improve this accuracy. The figure in Appendix B shows the evolution of weight update distributions for the 4 different attack strategies on the CNN trained on the Fashion-MNIST dataset. Time slices of this evolution were shown in the main text of the paper.
The baseline and concatenated training attacks lead to weight update distributions that differ widely between benign and malicious agents. The alternating minimization attack without distance constraints reduces this qualitative difference somewhat, but the closest weight update distributions are obtained with the alternating minimization attack with distance constraints. C BYPASSING BYZANTINE-RESILIENT AGGREGATION MECHANISMS BID2 recently proposed a gradient aggregation mechanism known as 'Krum' that is provably resilient to Byzantine adversaries. We choose to evaluate Krum as it is efficient, provably resilient, and can be used as a building block for better mechanisms BID13. As stated in the introduction, the aim of Byzantine adversaries considered in this work and others (BID5 BID13 BID23) is to ensure convergence to ineffective models. The goals of the adversary in this paper are to ensure convergence to effective models with targeted backdoors. This difference in objectives leads to 'Krum' being ineffective against our attacks. We now briefly describe Krum. Given n agents of which f are Byzantine, Krum requires that n ≥ 2f + 3. At any time step t, updates $(\delta_1^t, \ldots, \delta_n^t)$ are received at the server. For each $\delta_i^t$, the n − f − 2 closest (in terms of $L_p$ norm) other updates are chosen to form a set $C_i$, and their distances are added up to give a score $S(\delta_i^t) = \sum_{\delta \in C_i} \|\delta_i^t - \delta\|$; Krum then selects the update with the lowest score as the aggregated update. In FIG1, we see the effect of our attack strategies on Krum with a boosting factor of λ = 2 for a federated learning setup with 10 agents. Since there is no need to overcome the constant scaling factor $\alpha_m$, the attacks can use a much smaller boosting factor λ to ensure the global model has the targeted backdoor. Even with the baseline attack, the malicious agent's update is the one chosen by Krum for 34 of 40 time steps, but the global model is unable to attain high test accuracy. The alternating minimization attack ensures that the global model maintains relatively high test accuracy while the malicious agent is chosen for 26 of 40 time steps. These results conclusively demonstrate the effectiveness of model poisoning attacks against Krum.
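A compact NumPy sketch of the Krum selection rule as described above; `krum_select` is an illustrative name, and the scoring uses squared $L_2$ distances, one common instantiation of the $L_p$ norm mentioned in the text.

```python
import numpy as np

def krum_select(updates, f):
    """Krum: score each update by the summed squared distances to its
    n - f - 2 nearest other updates; return the lowest-scoring update."""
    n = len(updates)
    assert n >= 2 * f + 3, "Krum requires n >= 2f + 3"
    flat = np.stack([u.ravel() for u in updates])
    scores = []
    for i in range(n):
        d = np.sum((flat - flat[i]) ** 2, axis=1)
        d = np.sort(np.delete(d, i))            # distances to the other agents
        scores.append(np.sum(d[: n - f - 2]))   # closest n - f - 2 updates
    return updates[int(np.argmin(scores))]
```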
Effective model poisoning attacks on federated learning able to cause high-confidence targeted misclassification of desired inputs
406
scitldr
Despite rapid advances in speech recognition, current models remain brittle to superficial perturbations to their inputs. Small amounts of noise can destroy the performance of an otherwise state-of-the-art model. To harden models against noise, practitioners often perform data augmentation, adding artificially-noised examples to the training set, carrying over the original label. In this paper, we hypothesize that a clean example and its superficially perturbed counterparts shouldn't merely map to the same class---they should map to the same representation. We propose invariant-representation-learning (IRL): At each training iteration, for each training example, we sample a noisy counterpart. We then apply a penalty term to coerce matched representations at each layer (above some chosen layer). Our key results, demonstrated on the LibriSpeech dataset, are the following: (i) IRL significantly reduces character error rates (CER) on both 'clean' (3.3% vs 6.5%) and 'other' (11.0% vs 18.1%) test sets; (ii) on several out-of-domain noise settings (different from those seen during training), IRL's benefits are even more pronounced. Careful ablations confirm that our results are not simply due to shrinking activations at the chosen layers. The IRL algorithm is simple: First, during training, for each example x, we produce a noisy version by sampling $\tilde{x} \sim \nu(x)$, where ν is a stochastic function. In our experiments, this function takes a random snippet from a noise database, sets its amplitude by drawing from a normal distribution, and adds it to the original (in sample space), before converting to spectral features. We then incorporate a penalty term in our loss function to penalize the distance between the encodings of the original data point $\phi_e(x)$ and the noisy data point $\phi_e(\tilde{x})$, where $\phi_l$ is the representation at layer l. In our experiments, we choose $\phi_e$ to be the output of the encoder in our Seq2Seq model. We illustrate the learning setup graphically in FIG0. In short, our loss function consists of three terms: one to maximize the probability assigned to the clean example's label, another to maximize the probability our model assigns to the noisy example's (identical) label, scaled by hyper-parameter α, and a penalty term $L_d$ to induce noise-invariant representations. In the following equations, we express the loss calculated on a single example x and its noisy counterpart $\tilde{x}$, omitting sums over the dataset for brevity: $L(x, \tilde{x}; \theta) = L_c(x; \theta) + \alpha L_c(\tilde{x}; \theta) + L_d(x, \tilde{x}; \theta)$, where θ denotes our model parameters. Because our experiments address multiclass classification, our primary loss $L_c$ is cross-entropy: $L_c = -\sum_{c=1}^{C} y_c \log \hat{y}_c$, where C denotes the vocabulary size and $\hat{y}$ is our model's softmax output. To induce similar representations for clean and noised data, we apply a penalty consisting of two terms: the first penalizes the $L_2$ distance between $\phi_e(x)$ and $\phi_e(\tilde{x})$, the second penalizes their negative cosine similarity: $L_d = \|\phi_e(x) - \phi_e(\tilde{x})\|_2^2 - \cos(\phi_e(x), \phi_e(\tilde{x}))$. We jointly penalize the $L_2$ and cosine distance for the following reason. It is possible to lower the $L_2$ distance between the two (clean and noisy) hidden representations simply by shrinking the scale of all encoded representations. Trivially, these could then be dilated again simply by setting large weights in the following layer. On the other hand, it is possible to assign high cosine similarity to representations that nevertheless differ in scale. Note that if the matched representations at the chosen layer are identical, the representations at all subsequent layers will also be identical and thus those penalties will go to 0.
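To make the three-term objective concrete, here is a hedged PyTorch sketch of an IRL-E-style loss under the reconstruction above (cross-entropy on both examples plus L2 and negative-cosine penalties between encoder outputs); the exact weighting of the penalty terms in the paper may differ.

```python
import torch
import torch.nn.functional as F

def irl_loss(logits_clean, logits_noisy, target, h_clean, h_noisy, alpha=1.0):
    """Sketch of an IRL-E objective: classification loss on the clean and
    noisy examples plus an L2 and negative-cosine penalty between their
    encoder representations (flattened across time steps)."""
    ce = F.cross_entropy(logits_clean, target) \
         + alpha * F.cross_entropy(logits_noisy, target)
    hc, hn = h_clean.flatten(1), h_noisy.flatten(1)  # concat across time
    l2 = ((hc - hn) ** 2).sum(dim=1).mean()
    cos = F.cosine_similarity(hc, hn, dim=1).mean()
    return ce + l2 - cos    # minimizing -cos maximizes cosine similarity
```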
We can express this loss as a sum over successive representations $\phi_l$ of the clean $\phi_l(x)$ and noisy $\phi_l(\tilde{x})$ data: $L_d = \sum_{l \geq e} \left[\|\phi_l(x) - \phi_l(\tilde{x})\|_2^2 - \cos(\phi_l(x), \phi_l(\tilde{x}))\right]$. In our experiments, we find that IRL-C consistently gives a small improvement over results achieved with IRL-E. These approaches are identical for the $L_2$ penalty but not for the cosine distance penalty, owing to the normalizing factor, which may be different at each time step. In this work we take approach (i): concatenating the representations across time steps and then calculating the penalty. All of our models are based off of a sequence-to-sequence architecture. The input to the encoder is a sequence of spectral features. In our experiments with IRL-E (penalty applied at a single layer), we use the output of the encoder to calculate the penalty. Note that there is one output per step in the input sequence and thus we are concatenating across the $T_1$ steps. To calculate IRL-C, we also start with the encoder output, concatenating across all $T_1$ sequence steps. To generate noisy examples, we first add MUSAN noise to the training data point at a signal-to-noise ratio drawn from a Gaussian with a mean of 12dB and a variance of 8dB. This aligns roughly with the scale of noise employed in other papers using multi-condition training. Before presenting our main results, we briefly describe the model architectures, training details, and the various baselines that we compare against. We also present details on our pipeline for synthesizing noisy speech and explain the experimental setup for evaluating on out-of-domain noise. We decode predictions from all models via beam search with width 10. To ensure fair comparisons, we perform hyper-parameter searches separately for each model. We train all models with the Adam optimizer with an initial learning rate of 0.001. The primary loss function for each model is cross-entropy loss, and our primary evaluation metric for all models is the character error rate. As described above, the additional loss terms for our IRL models are the L2 loss and cosine distance between representations of clean and noisy audio. We train all models on the LibriSpeech corpus, generating noisy data by adding randomly selected noise tracks from the MUSAN dataset with a signal-to-noise ratio drawn from a Gaussian distribution (12dB mean, 8dB standard deviation) and a temporal shift drawn from a uniform distribution. Our final experiments test the effects of various out-of-domain noise on our models. The results are shown in TAB2. We found that our models trained with the IRL procedure had stronger results (and significantly less degradation) across all tasks compared to the baseline and the purely data augmented models. On overlapping speech, our models performed far better than the baseline (91.5% CER) and the data augmented model (32.0%). We found that decreasing the signal-to-noise ratio also affected the baseline models more than the models trained with the IRL algorithm: our IRL-C model achieved a character error rate of 5.7% compared to 27.8% for the baseline and 10.8% for the purely data augmented model. We found that modifying the volume of the speaker did not affect the accuracy of any of the networks.
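Below is a minimal sketch of SNR-controlled noise mixing of the kind described (a noise snippet added at a Gaussian-sampled SNR before feature extraction); `mix_at_snr` is an illustrative helper, not the paper's pipeline.

```python
import numpy as np

def mix_at_snr(clean, noise, snr_db):
    """Add a noise snippet to a clean waveform at a given signal-to-noise
    ratio (in dB), tiling the noise if it is shorter than the signal."""
    if len(noise) < len(clean):
        noise = np.tile(noise, int(np.ceil(len(clean) / len(noise))))
    noise = noise[: len(clean)]
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(p_clean / (p_noise * 10 ** (snr_db / 10.0)))
    return clean + scale * noise

# SNR drawn per example from N(12 dB, 8 dB), matching the training setup.
rng = np.random.default_rng(0)
snr = rng.normal(12.0, 8.0)
```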
Finally, we found that our models trained with the IRL algorithm performed better on re-sampled telephony data. In this paper, we demonstrated that enforcing noise-invariant representations by penalizing differences between pairs of clean and noisy data can increase model accuracy on the ASR task, produce models that are robust to out-of-domain noise, and improve convergence speed. The performance gains achieved by IRL come without any impact on inference throughput. We note that our core ideas here can be applied broadly to deep networks for any supervised task. While the speech setting is particularly interesting to us, our methods are equally applicable to other machine learning fields.
In this paper, we hypothesize that superficially perturbed data points shouldn’t merely map to the same class---they should map to the same representation.
407
scitldr
In this paper we design a harmonic acoustic model for pitch detection. This model arranges conventional convolution and sparse convolution in a way such that the global harmonic patterns captured by sparse convolution are composed of a sufficient number of local patterns captured by layers of conventional convolution. When trained on the MAPS dataset, the harmonic model outperforms all existing pitch detection systems trained on the same dataset. Most impressively, when trained on MAPS with simple data augmentation, the harmonic model with an LSTM layer on top surpasses an up-to-date, more complex pitch detection system trained on the MAESTRO dataset, to which complicated data augmentation is applied and whose training split is an order of magnitude larger than the training split of MAPS. The harmonic model has demonstrated potential to be used for advanced automatic music transcription (AMT) systems. In recent years, deep learning has emerged as a promising approach to pitch detection. Pitch detection is the process of detecting the pitches present in a frame, i.e., a short snippet of musical waveform. The results of pitch detection can be post-processed to extract note contours, i.e., note onsets and offsets. The whole process of pitch detection and note extraction is called automatic music transcription (AMT). This paper is devoted to pitch detection for solo piano music. For AMT, of first importance is an acoustic model that can predict the active pitches in a frame. Using acoustic models as building blocks, more advanced architectures can be constructed for various purposes. designed an acoustic model for pitch detection, which is referred to as the Kelz model hereafter. This model was modified from the one developed in Schlüter & Böck for onset detection and resembles the LeNet-5 model. The Kelz model treats pitch detection simply as an image-related task by using convolution layers to capture local frequency patterns. It does not explicitly capture the harmonic patterns of pitched music. Instead, it relies on fully connected layers to this end. The above handling of harmonic patterns weakens the model's generalisation capability. This problem has only been partially studied. designed an AMT system. This system consists of an onset detector and a frame detector that detects the pitches in each frame. The two detectors have similar structures, each topping the Kelz model with a bi-directional LSTM. Skip connections are featured by feeding the onset detector's output into the other detector to serve as additional intermediate features. used a similar AMT system as a sub-system to build a bigger system for piano sound generation. The AMT system in uses more features and introduces a separate detector for offset detection. The dataset used in is far larger than the one used in. designed an AMT system consisting of three separate detectors for pitch, onset and offset detection. The pitch detector is a Kelz model. Some intermediate features of the pitch detector are used as the sole inputs to the other two detectors. Both the onset and offset detectors only have a convolution and a fully connected layer. Finally, note contours are extracted by fusing the predictions from the three detectors with a hidden Markov model (HMM). designed a fully convolutional acoustic model named the harmonic constant-q transform (HCQT) model. It uses HCQTs as input representation. HCQTs are constructed from CQTs, which like any spectrograms have a time and a frequency dimension.
Besides the above two dimensions, HCQTs have a newly added dimension called the harmonic dimension. The harmonic dimension consists of the fundamental frequency/pitch (denoted by $f_0$), a sub-harmonic at $\frac{1}{2}f_0$, and four harmonics ranging from $2f_0$ to $5f_0$. Rearranging CQTs into HCQTs enables the capture of harmonic patterns/structures specific to pitched music, whereas the convolutional architecture enables the capture of local patterns as in most image-related problems. designed an AMT system consisting of six cascaded networks that are trained one after another. Skip connections are used among these networks in a similar fashion as residual networks do. Except for the first network, all the networks are multilayer perceptrons (MLPs) that have two to three layers. The first network N1 takes variable-q transforms (VQTs) as inputs and detects tentative pitches by linearly combining 50 frequency bins for each pitch that include harmonics and non-harmonics. N1 is essentially a convolution layer with a single sparse kernel. A second network N2 performs more accurate pitch estimation from the output of N1. The overall effect of N1 and N2 is equivalently an acoustic model that resembles the HCQT model. The difference is that N1 and N2 take into account more frequency bins when capturing harmonic patterns, whereas the HCQT model pays more attention to local patterns. The remaining four networks estimate onsets, offsets and tentative notes, and compute probabilities for each note, respectively. Loosely speaking, the waveform of a music note is composed of a number of monotones, among which the lowest frequency is called the fundamental frequency or the pitch, and all the others are integer multiples of the pitch, referred to as the harmonics. For polyphonic music, there can be multiple notes active at the same time, and their pitches and harmonics often overlap. This leads to the challenge for pitch detection. Figure 1 shows an example VQT spectrogram. This spectrogram features a grid structure composed of frequency stripes. These stripes are a result of non-perfect pitches and the windowing effect (also known as power leakage) in the calculation of the discrete Fourier transform (DFT). We call these stripes the local patterns, as they only involve a small number of local neighbouring frequencies. Next let us talk about harmonic patterns. To tell if a specific note is active, intuitively we will first check if its pitch (denoted by $f_0$) and harmonics are light (i.e., have energy). If this is the case, then it is likely the note is active. On the other hand, if it is also light at $\frac{1}{2}f_0$ or $\frac{1}{3}f_0$, then the likelihood drops, because now $f_0$ could be the second or third harmonic of another note. When the degree of polyphony increases, this logical reasoning is definitely over our heads and deep learning comes in handy. We call the interaction patterns among the pitches and harmonics of different notes the harmonic patterns. Since the constituent frequencies of harmonic patterns are sparsely distributed, it is inappropriate to capture harmonic patterns with conventional convolution, whose receptive field can cover only contiguous frequencies. Instead, we need a type of convolution whose receptive field in the frequency dimension corresponds to the above sparsely distributed frequencies. We term this type of convolution sparse convolution. For pitch detection, it is critical as well as challenging to capture both of the above two types of frequency patterns.
The existing acoustic models focused only on one of them. The existing AMT systems concentrated on more advanced, complex network structures. The contribution of this paper is as follows. 1. We design a harmonic acoustic model for pitch detection. This model takes VQTs as inputs. In the first part of this model, the stripe-shaped local frequency patterns are captured with layers of conventional convolution. Then in the second part, the global harmonic patterns are captured with sparse convolution. 2. When all the systems participating in the comparison are trained on MAPS, the harmonic model achieves the best performance, leading the runner-up by 3.5%. 3. When trained on MAPS with simple data augmentation, the harmonic model enhanced by an LSTM layer outperforms an up-to-date, more complex system trained on the MAESTRO dataset, to which complicated data augmentation is applied and whose training split is 15 times as large as the training split of MAPS, demonstrating the potential of the harmonic model to be used for building advanced AMT systems. 2.1 MAPS MAPS is a piano dataset generated by a software synthesizer and a real piano, namely, a Yamaha Disklavier upright piano, from standard MIDI files. Disklavier can be controlled by computer, and accepts MIDI files as inputs and outputs MIDI files that are strictly synchronized with the sound generated. These output MIDI files can be used to generate ground-truth labels. The sound from Disklavier was recorded under two settings, namely, a close and an ambient setting. The synthesizer has 7 settings that differ in soundfont and reverberation configuration. Thus, there are 9 settings in total. Each setting has 30 recordings, resulting in a total of 270 recordings. Since there are only 160 musical compositions, these 9 settings have overlap in composition. Next we need to partition the dataset into three splits, namely, a training, a validation and a test split. In this process, the general criterion is that the training and test splits should have no instrument and composition overlap so as to fairly compare the generalisation capability of different models. We choose the 60 recordings from Disklavier as the test split. We exclude the 71 recordings whose compositions appear in the test split from the 210 synthesized recordings and use the remaining 139 recordings as the training split. We use the above 71 recordings as the validation split. MAESTRO is a piano dataset generated by Yamaha Disklavier grand pianos from 9 years of an international piano competition. It has 1184 recordings in total that have a total duration of about 172 hours and a total size of about 103 GB. By contrast, MAPS only has a total duration of about 18 hours and a total size of about 11 GB. When partitioning MAESTRO into training, validation and test splits, we simply follow the recommendation given in. In particular, the training split has 954 recordings, the validation split has 105 recordings, and the test split has 125 recordings. The training split of MAESTRO is about 15 times as large as the training split of MAPS. 3 MODEL CONFIGURATION The harmonic model uses VQT as input representation. The procedure for computing VQT is given in Appendix A.1. The setting for VQT computation is as follows. The sampling rate is 44.1 kHz. The number of frequency bins per octave (denoted by B) is 36. The minimum frequency is the frequency of MIDI note 21 multiplied by a factor of $2^{-1/B}$. The maximum frequency is the frequency of MIDI note 132 multiplied by a factor of $2^{1/B}$.
Thus, we have 336 frequency bins in total. The bandwidth for each frequency bin is set to $\Omega_k = \alpha f_k + \gamma$, where $f_k$ is the centre frequency, $\alpha$ is determined by B, and $\gamma$ is a non-negative constant (see Appendix A.1). The hop size (denoted by h) is 64 samples. For pitch detection, we do not like the hop size to be too small, because in this case adjacent frames in the spectrogram will be highly correlated. So we down-sample the resulting spectrogram by a factor of 22. We formulate frame-wise pitch detection as a multi-label classification problem. Frame labels are determined from the MIDI file. First, we translate the use of the sustain pedal into extended note duration as in. Next, we express the onset and offset for a specific note in terms of samples as $n_{on} = t_{on} \times sr$ and $n_{off} = t_{off} \times sr$, where $t_{on}$ and $t_{off}$ are the onset and offset times in seconds, respectively. Finally, the start and end frames of this note can be expressed as $\lfloor n_{on}/(22h) \rfloor$ and $\lfloor n_{off}/(22h) \rfloor$, respectively. We propose a harmonic acoustic model for frame-wise pitch detection. The structure of this model is given in Table 1. In this table, we use - to mean that the output shape of a layer is the same as the output shape of the layer above. Layer 0 is the input with shape (none × none × 336 × 1), where the dimension order is batch, frame, frequency and channel. Here we use none to denote a dynamic dimension. The abbreviations used in this table are defined as follows. 1) conv(32 × 3 × 3) stands for a convolution layer with 32 kernels of receptive field 3 × 3. 2) dropout(0.8) is a dropout layer with keeping probability 0.8. 3) sparse-conv(256 × 1 × 79) is a sparse convolution layer with 256 kernels of receptive field 1 × 79. 4) maxpool(1 × 3) is a max-pooling layer with receptive field 1 × 3, frame stride 1, and frequency stride 3. 5) FC(n) is a fully connected layer with n features. This structure can be divided into three parts. Part 1 consists of layers 1 through 7. This part uses four consecutive convolution layers to capture local frequency patterns. The overall receptive field of these layers in the frequency domain is 9 frequency bins. Part 2 consists of layers 8, 9 and 10. Layer 8 is a sparse convolution layer. This layer does the same job as network N1 of. In particular, for each pitch $f_0$ we select 50 frequency bins relative to $f_0$ as the input features for detecting the presence of $f_0$. These 50 frequency bins include bins over and under $f_0$, and harmonics and non-harmonics; please refer to for a complete list of these bins. Figure 2 shows the distribution of these bins (the receptive field of the sparse convolution in the frequency dimension, where the solid vertical line is the pitch and the dotted vertical lines are the input features for the pitch). The original 50 bins were given for the case when the number of bins per octave is 240. When converted to the 36 bins per octave used by the harmonic model, some bin indices become non-integers. For these non-integers, we take both their floors and ceils. Thus, for each $f_0$ we have a total of 79 input features. For an $f_0$, some of its input features could be outside the VQT's bin range. In this case, we assume that the out-of-range input features are zeros. Since a typical piano only has 88 keys ranging from MIDI notes 21 to 108, among the VQT's 336 frequency bins only the first 264 are pitches to detect. Therefore, after the sparse convolution layer, we get an output of shape (none × none × 264 × 256), where 256 is the number of output features for each pitch. This output is at pitch level. However, the datasets only allow labels at note level.
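To illustrate how a sparse-convolution input can be assembled, here is a sketch that gathers frequency bins at fixed offsets relative to each candidate pitch, zero-filling out-of-range bins as the paper does; the offsets shown are illustrative harmonic positions, not the paper's 79-bin list.

```python
import numpy as np

def gather_harmonic_features(spec, offsets, n_pitches=264):
    """Build per-pitch inputs for a sparse convolution by gathering frequency
    bins at fixed offsets relative to each candidate pitch f0. Bins outside
    the spectrogram's range are treated as zeros."""
    T, K = spec.shape                      # frames x frequency bins
    out = np.zeros((T, n_pitches, len(offsets)), dtype=spec.dtype)
    for p in range(n_pitches):
        for j, off in enumerate(offsets):
            k = p + off
            if 0 <= k < K:
                out[:, p, j] = spec[:, k]
    return out

# Example offsets with B = 36 bins/octave: one octave down, f0 itself,
# the fifth, one octave up, octave+fifth, and two octaves up.
offsets = [-36, 0, 21, 36, 57, 72]
features = gather_harmonic_features(np.random.rand(100, 336), offsets)
```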
So we use max-pooling of receptive field 1 × 3 to down-sample the output to note level, resulting in an output of shape (none × none × 88 × 256). Part 3 consists of layers 11, 12 and 13, after which we get predictions for the 88 notes. In the above structure, except for the last layer, ReLU is used as the activation function and, where applicable, the output is batch-normalized. The last layer uses sigmoid as the activation function. We use dropout at different places to control overfitting. The difference between the harmonic model and the Kelz model is that the former explicitly captures harmonic frequency patterns with sparse convolution, whereas the latter does this implicitly by letting the network learn them on its own. This implicit handling of harmonic patterns makes the trained model overfit the timbres and composition styles of the training split. However, the training and test splits often have different timbres and composition styles, so this implicit handling will impact the generalisation capability of the trained model. The harmonic model differs from network N1 of in two aspects. First, the harmonic model captures a sufficient number of local frequency patterns with part 1, whereas N1 did not consider them. Compared with the HCQT model, the harmonic model is able to capture more complex frequency patterns by using more features for each pitch and placing sparse convolution after conventional convolution. The overall receptive field of the convolution layers in the HCQT model is over one octave and thus could lead to overfitting of the composition styles. This problem has been advertently avoided in both the Kelz and the harmonic model by using a relatively small overall receptive field when capturing local frequency patterns. We use the binary cross entropy loss. Denote by $p \in \{0, 1\}$ the ground-truth label for a note in a frame and by $\hat{p} \in [0, 1]$ the predicted probability for this note. The loss for this note is formulated as $\ell = -p \log \hat{p} - (1 - p) \log(1 - \hat{p})$. As a convention, the performance of pitch detection is solely measured by the f-measure defined as $F = \frac{2PR}{P + R}$. In the above equation P and R are the precision and recall defined, respectively, as $P = \frac{TPs}{TPs + FPs}$ and $R = \frac{TPs}{TPs + FNs}$, where TPs is the number of true positives, FPs is the number of false positives, and FNs is the number of false negatives. When there is more than one recording, the above metrics can be calculated in two ways. We can first calculate them for individual recordings and then average these results. We call metrics obtained this way the average results. We can alternatively treat all the recordings as an ensemble and directly calculate the metrics. We refer to metrics obtained this way as the ensemble results. In this section, we will compare the performance of the harmonic model, the Kelz model and other more complex pitch detection systems built upon acoustic models. To enhance performance, the existing systems exploited various techniques/tricks, such as data augmentation, RNNs, HMMs, joint-task training, and larger training splits. Therefore, for a fair comparison we will apply two tricks to the harmonic model, namely, data augmentation and RNN. The first type of data augmentation is pitch shift, whose procedure is detailed in Appendix A.2. The second type is to change the powers of different frequencies by multiplying the amplitude of each frequency bin by a random number ranging from 0.9 to 1.1. The third type is to use the two sound channels of each recording in the training split as independent training examples. In this case, for testing we will average the logits of the two sound channels.
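A small sketch of the two evaluation conventions, assuming per-recording (TP, FP, FN) counts are available; the function names are illustrative.

```python
import numpy as np

def prf(tp, fp, fn):
    """Precision, recall and f-measure from raw counts."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

def ensemble_and_average_f(counts):
    """`counts` is a list of (TP, FP, FN) tuples, one per recording.
    The ensemble result pools counts over all recordings; the average
    result averages per-recording f-measures."""
    tp, fp, fn = np.sum(counts, axis=0)
    ensemble_f = prf(tp, fp, fn)[2]
    average_f = float(np.mean([prf(*c)[2] for c in counts]))
    return ensemble_f, average_f
```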
To enhance the harmonic model, we top it with an LSTM layer. Specifically, in this case we first train the harmonic model. Next, we remove layers 11, 12 and 13 and keep the parameters of the remaining layers fixed and untrainable. Then we replace the removed layers with an LSTM of 64 hidden units, which is in turn followed by an FC layer (i.e., a fully connected layer that converts the number of features for each note from 64 to 1). We use Tensorflow 1.13.1 as the neural network framework and implement sparse convolution ourselves. The optimizer is the Adam optimizer. For the harmonic model without LSTM on top, each example has 300 frames; the batch size is 2; and the learning rate is $10^{-4}$. For the harmonic model with LSTM on top, each example has 600 frames; the batch size is 1; and the learning rate is $10^{-3}$. Table 2 compares the performance of different pitch detection systems. The system in was trained on a self-made training dataset that was purely synthesized and does not overlap in musical composition with the test split of MAPS. The system in was trained on MAESTRO. also augmented the training split by pitch shift, compressing, equalising, and adding reverberation and pink noise. The results of the Kelz model are cited from, which implemented this model and tested it on MAPS by following the training-test split partition given in Section 2.1. Note that MAESTRO cannot be used both for training and for testing, because this has the problem of instrument overlap. Therefore, to objectively assess a system's generalisation capability, we only compare the test performance of different systems on the test split of MAPS. Among the existing systems, some used the ensemble results and some used the average results. For the system of trained without data augmentation, only the f-measure is available. In its pure form without data augmentation and LSTM, the harmonic model defeats all the existing systems except the system in. The system in trained without data augmentation leads the pure harmonic model by 0.24. This lead is attributed to MAESTRO's far larger training split and joint-task training with a more complex network structure, as evidenced and contrasted by the results of that were obtained on MAPS with a less complex network structure. The system of with data augmentation surpassed the harmonic model with data augmentation but without LSTM by 1.49. However, when data augmentation and LSTM are both applied, the harmonic model outperforms the system of with data augmentation by 0.91 and reaches a record high of 85.82. Note that, even when LSTM is applied to the harmonic model, none of the existing systems except the Kelz model is simpler than the harmonic model. In this paper we designed a harmonic acoustic model for pitch detection. This model effectively captures the complex frequency interactions characterizing polyphonic pitched music through conventional convolution and sparse convolution inspired by the harmonic structure of pitched music. In its pure form without RNN and data augmentation, the harmonic model outperformed most of the existing pitch detection systems. Most noticeably, when trained on MAPS with data augmentation, the harmonic model with an LSTM layer on top outdid the complex system in trained on MAESTRO, whose training split is 15 times as large as the training split of MAPS. Thus, the harmonic model has shown great potential to be used for building advanced AMT systems.
A possible future direction is to exploit more of the potential of complex spectrograms, instead of using only amplitude spectrograms. A mixture of signals can be inseparable in the real number domain but could be separable in the complex number domain. has done some preliminary study in this direction. However, our own study showed that the technique of deep complex networks proposed in did not yield a performance comparable with that of real networks. Therefore, definitely more can be done. Since there is no clear, standard procedure for computing VQT, here we will sketch such a procedure while ignoring the underlying details. CQT is arguably the only choice for characterizing harmonic frequency patterns due to the following advantages. First, it was born for processing musical signals of equal-tempered scale, because its frequency bins are strictly log-linearly spaced and the bandwidth of each filter is proportional to the centre frequency. Second, it is preferable over the traditional wavelet transform, because the latter cannot provide sufficient frequency resolution for musical signal processing. Third, there exists a fast algorithm for CQT (Schörkhuber &). The drawback of CQT is the low time resolution for lower frequencies. For example, when the number of frequency bins per octave is 36, the frame length for MIDI note 21 is 1.87 seconds. Therefore, CQT is not ideal for detecting lower notes. Hence there comes VQT. In VQT the bandwidth of a filter within the filter bank can be expressed as $\Omega_k = \alpha f_k + \gamma$, where $f_k$ is the filter's centre frequency, $\alpha$ is a constant determined by B, the number of frequency bins per octave, and $\gamma$ is a non-negative constant. When $\gamma$ is zero, VQT reduces to CQT. The rationale behind VQT is that by enlarging the bandwidth we can shrink the frame length and thus get better time resolution. In order to properly apply VQT, we need to understand what the bandwidth means. In VQT, filters have strictly finite bandwidth and therefore infinite length in the time domain. That is, the frame length for each frequency is infinite. We can only consider the frame length from an engineering perspective. Let us define the frame length as the length of the zone within the infinite frame that contains a major proportion of the frame's energy. If this proportion is to be over 99%, then it can be proved that the frame length is $2.88/\Omega_k$ seconds, or $2.88 \times sr/\Omega_k$ samples, where sr is the sampling rate. Thus, if we like a maximum frame length of 20000 samples when sr is 44.1 kHz, we should set the minimum bandwidth to $2.88 \times 44100/20000 = 6.35$ Hz. Next, let us talk about zero padding. In VQT, when computing the spectral coefficients for each frequency at different times, the signals involved are cyclic shifts of the original recording. Thus, when computing the coefficient at sample zero, half the data comes from the end of the recording. We can get around this undesirable effect by zero padding. To be specific, we can pad $1.44 \times sr/\Omega_{\min}$ zeros at each end of the recording and then compute the VQT. After that we throw away $1.44/\Omega_{\min} \times sr/h$ coefficients at each end of the spectrogram, where h is the hop size. Finally, let us talk about the hop size in terms of samples. In VQT, the hop size is any number such that $sr/h \geq \Omega_{\max}$, where $\Omega_{\max}$ is the maximum bandwidth. After getting the spectrogram, we convert it to dB scale according to $20 \times \log_{10}(|VQT| + 10^{-10})$, where $|\cdot|$ is the absolute operator to get the amplitude. Adding $10^{-10}$ is to make the computation numerically stable. When the amplitude is zero, we get the minimum power of −200 dB.
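A one-function sketch of the dB conversion just described, including the numerical-stabilization floor and the +200 shift introduced in the next paragraph for padding convenience; treat it as an assumption-laden illustration rather than the authors' code.

```python
import numpy as np

def vqt_to_db(vqt, shift=200.0):
    """Convert complex VQT coefficients to a dB-scaled spectrogram. The 1e-10
    floor keeps log10 finite (silence maps to -200 dB before shifting); the
    +200 shift makes zero a physically meaningful padding value."""
    return 20.0 * np.log10(np.abs(vqt) + 1e-10) + shift
```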
When padding the inputs to the first neural network layer, it is desirable that the padded values be physically meaningful. In our case, we would like these values to stand for a zero power level; thus, they should be -200. For convenience, we instead shift the dB-scaled spectrogram by 200 so that we can keep using zeros for padding.

The pitch-shift data augmentation then proceeds as follows.
Step 1: Setting the number of frequency bins per octave (B) to a higher value of 84, compute the VQT.
Step 2: As per the procedure given in Schörkhuber et al., shift the pitch of the above VQT by 0, ±1 and ±2 frequency bins.
Step 3: For each pitch-shifted VQT, recover the discrete Fourier transform (DFT) of the signal.
Step 4: From the above DFT, compute the VQT for B = 36.
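A minimal sketch of the bin shifting in Step 2 is shown below; `spec` is assumed to be a (bins × frames) VQT array, and the VQT↔DFT round trips of Steps 1, 3 and 4 are left to the toolbox routines referenced above.

```python
import numpy as np

def shift_bins(spec: np.ndarray, k: int) -> np.ndarray:
    """Shift a (freq_bins, frames) spectrogram by k bins; k > 0 raises pitch.
    Vacated bins are zero-filled rather than wrapped around."""
    out = np.zeros_like(spec)
    if k >= 0:
        out[k:] = spec[:spec.shape[0] - k]
    else:
        out[:k] = spec[-k:]
    return out

# Steps 1-4 for one recording: shift the fine-grained (B = 84) VQT by each of
# these amounts before mapping back through the DFT to B = 36.
shifts = [0, 1, -1, 2, -2]
```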
harmonic acoustic model
408
scitldr
Learning domain-invariant representations is a dominant approach to domain generalization. However, previous methods based on domain invariance overlook the underlying dependency of classes on domains, which is responsible for a trade-off between classification accuracy and invariance. This study proposes a novel method, adversarial feature learning under accuracy constraint (AFLAC), which maximizes domain invariance within a range that does not interfere with accuracy. Empirical validations show that the performance of AFLAC is superior to that of baseline methods, supporting the importance of considering the dependency and the efficacy of the proposed method in overcoming the problem.

In supervised learning we typically assume that training and test samples are drawn from the same distribution; however, this assumption does not hold in many practical situations, which reduces classification accuracy on the test data BID20. One typical situation is domain generalization (DG) BID1 BID18 BID19 BID2: we have labeled data from several source domains and collectively exploit them such that the trained system generalizes to other unseen, but somewhat similar, target domain(s). This paper considers DG in the situation where the domain d and class y labels are statistically dependent owing to some common latent factor z (FIG0-(c)), which we refer to as domain-class dependency. For example, the WISDM Activity Prediction dataset (WISDM, BID10), where y and d correspond to activities and wearable device users, exhibits this dependency because some activities (e.g., jogging) are strenuous to the extent that some unathletic subjects avoided them (data characteristics), or because other activities were added only after the study began and the initial subjects could not perform them (data-collection errors). The dependency is common in real-world datasets BID23, and a similar setting has been investigated in domain adaptation (DA) studies, but most prior DG studies overlooked it.

Most prior DG methods utilize invariant feature learning (IFL) (e.g.,). IFL attempts to learn a feature representation h of the input data x that is invariant to d. When source and target domains share some common structure (see,), IFL prevents the classifier from overfitting to the source domains (FIG0). However, under the dependency, merely imposing domain invariance can adversely affect classification accuracy, as pointed out by BID21 and illustrated in FIG0. Although the trade-off arises within the source domains (because DG uses only source data during optimization), it can also degrade classification performance on the target domain(s). For example, if the target domain has characteristics similar (or, as an extreme case, identical) to those of a certain source domain, prioritizing domain invariance obviously interferes with DG performance (FIG0).

In this paper, considering that prioritizing domain invariance under this trade-off can hurt DG performance, we propose a novel method, adversarial feature learning under accuracy constraint (AFLAC), which maximizes domain invariance within a range that does not interfere with classification accuracy (FIG0-(e)), based on adversarial training. Specifically, AFLAC is intended to achieve accuracy-constrained domain invariance, which we define as the maximum H(d|h) value (H denotes entropy) under the condition H(y|x) = H(y|h) (h has as much y information as x).
Empirical validations show that the performance of AFLAC is superior to that of baseline methods, supporting the importance of considering domain-class dependency and the efficacy of the proposed approach for overcoming the issue.

DG has been attracting considerable attention in recent years, and most prior DG methods utilize IFL BID2 BID4. In particular, our proposed method builds on Domain Adversarial Nets (DAN), which was originally invented for DA BID3; BID21 demonstrated its efficacy in DG. In addition, BID21 intuitively explained the trade-off between classification accuracy and domain invariance, but they did not suggest any solution to the problem other than carefully tuning a weighting parameter. Several studies address DG without utilizing IFL. For example, CCSA BID16, CIDG BID13, and CIDDG BID14 make use of semantic alignment, which attempts to make the latent representation given the class label (p(h|y)) identical across source domains. This approach was originally proposed by BID6 in the DA context, but its efficacy in overcoming the trade-off problem is not obvious. CrossGrad BID18, one of the recent state-of-the-art DG methods, utilizes data augmentation with adversarial examples. However, because that method relies on the assumption that y and d are independent, it might not be directly applicable to our setting. In DA, BID23; BID6 address the situation where p(y) changes across the source and target domains by correcting the change of p(y) using unlabeled target data, which is often accomplished at the cost of classification accuracy on the source domain. However, this approach is not applicable (or necessary) in DG because we are agnostic about the target domain(s), and this paper is concerned with the change of p(y) within the source domains. Instead, we propose to maximize the classification accuracy on the source domains while improving domain invariance.

Here we formalize the notion of accuracy-constrained domain invariance: the maximum domain invariance achievable within a range that does not interfere with classification accuracy. The reason for the constraint is that the primary purpose of DG is classification on unseen domains rather than invariance itself, and improving the invariance further could be detrimental to that purpose.

Theorem 1. Let h = f(x), i.e., h is a deterministic mapping of x by a function f. We define accuracy-constrained domain invariance as the maximum H(d|h) value under the constraint that H(y|x) = H(y|h) = 0. Then the accuracy-constrained domain invariance equals H(d|y).

Proof 1. Using the properties of entropy, the following inequality holds:
H(d|h) ≤ H(d, y|h) = H(y|h) + H(d|y, h) ≤ H(y|h) + H(d|y).
By assumption, H(y|x) = H(y|h) = 0, and thus the following inequality holds:
H(d|h) ≤ H(d|y).
Thus, the maximum H(d|h) value under the constraints is H(d|y).

We propose a novel method named AFLAC, which is designed to achieve accuracy-constrained domain invariance. Formally, we denote by f_E(x), q_M(y|h), and q_D(d|h) (E, M, and D are the parameters) the deterministic encoder, the probabilistic model of the label classifier, and that of the domain discriminator, respectively. Then the objective of AFLAC is described as follows:
min_D V_D = E_{x,d} [ -log q_D(d | f_E(x)) ]   (3)
min_M V_M = E_{x,y} [ -log q_M(y | f_E(x)) ]   (4)
min_E V = E_{x,y,d} [ γ D_KL[ p(d|y) ∥ q_D(d | f_E(x)) ] - log q_M(y | f_E(x)) ]   (5)
Here γ denotes a weighting parameter. Note that, although we cannot obtain the true distribution p(d|y), we can use its maximum likelihood estimator when y and d are discrete, as is usual in DG. Here we formally show that AFLAC is intended to achieve H(d|h) = H(d|y) (accuracy-constrained domain invariance) through a Nash equilibrium analysis similar to BID7; BID21.
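Before that analysis, a toy sketch of the two ingredients Eq. 5 combines: the maximum likelihood estimate of p(d|y) and the per-example KL term that pulls the discriminator's posterior toward it. The names and data layout below are our own, and no framework-specific training loop is implied.

```python
import numpy as np

def p_d_given_y(y, d, n_classes, n_domains):
    """Maximum likelihood estimate of p(d|y) from discrete labels."""
    counts = np.zeros((n_classes, n_domains))
    for yi, di in zip(y, d):
        counts[yi, di] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def kl(p, q, eps=1e-12):
    """D_KL[p || q] for two probability vectors."""
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

# For one example with class label yi and discriminator output q_d (a length-
# n_domains probability vector), the invariance term in Eq. 5 is
#     gamma * kl(P[yi], q_d)     with  P = p_d_given_y(...),
# and the full encoder/classifier loss adds the usual cross-entropy on y.
# Note the target is p(d|y), not a uniform distribution over domains, which is
# exactly what distinguishes AFLAC from plain DAN-style invariance.
```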
We define D* and M* as the solutions to Eq. 3 and Eq. 4 with fixed E. They obviously satisfy q*_D = p(d|h) and q*_M = p(y|h), respectively. Thus, V in Eq. 5 can be written as follows:
V = γ E_{h,y∼p(h,y)} [ D_KL[ p(d|y) ∥ p(d|h) ] ] + H(y|h).
E*, which we define as the solution to Eq. 5 and which is in Nash equilibrium, satisfies not only H(y|h) = H(y|x) (optimal classification accuracy) but also E_{h,y∼p(h,y)} [ D_KL[ p(d|y) ∥ p(d|h) ] ] = 0, i.e., p(d|h) = p(d|y) and hence H(d|h) = H(d|y).

BMNISTR: We created the Biased Rotated MNIST dataset (BMNISTR) by modifying the sample sizes of the popular benchmark dataset MNISTR BID5, such that the class distribution differs among the domains. In MNISTR, each domain is created by rotating images in 15-degree increments: 0, 15, ..., 75 (referred to as M0, ..., M75). We created variants of MNISTR with different types of domain-class dependency, referred to as BMNISTR-1 through BMNISTR-3. As shown in TAB0-left, BMNISTR-1 and -2 have similar trends but different degrees of dependency, whereas BMNISTR-1 and BMNISTR-3 differ in their trends. In training, we employed a leave-one-domain-out setting BID5: we trained the models on five of the six domains and tested them on the remaining one.

WISDM: WISDM contains accelerometer data for six human activities (walking, jogging, upstairs, downstairs, sitting, and standing) performed by 36 users (domains). WISDM exhibits the dependency for the reasons noted in Sec. 1. We randomly selected 10 (respectively 26) users as source domains and the remaining 26 (respectively 10) as target domains in two settings, and split the source data into training and validation data.

We compared AFLAC with the following methods. CNN is a vanilla convolutional network trained on the aggregated data from all source domains. DAN BID21 is expected to generalize across domains by utilizing domain-invariant representations, but it can be affected by the trade-off, as pointed out by BID21. CIDDG is our re-implementation of BID14, which is designed to achieve semantic alignment via adversarial training. Additionally, we used AFLAC-Abl, a version of AFLAC modified for ablation studies: AFLAC-Abl replaces p(d|y) in Eq. 5 with the marginal p(d), and thus attempts to learn a representation h that contains no domain information, as DAN does.

We first investigated the extent to which domain-class dependency affects the performance of domain-invariance-based methods. In TAB0-right, we compare the mean F-measures for classes 0 through 4 and classes 5 through 9 in BMNISTR with the target domain M0. Recall that the sample sizes for classes 0∼4 vary across domains, whereas classes 5∼9 have identical sample sizes across domains. The F-measures show that AFLAC outperformed the baselines on most dataset-class pairs, which supports the claims that the dependency reduces the performance of IFL methods and that AFLAC can mitigate the problem. Further, the relative improvement of AFLAC over AFLAC-Abl is more significant for classes 0∼4 than for classes 5∼9 in BMNISTR-1 and BMNISTR-3, suggesting that AFLAC tends to improve performance most for the classes in which the dependency occurs. Moreover, the improvement is more significant in BMNISTR-1 than in BMNISTR-2, suggesting that the stronger the dependency, the lower the performance of domain-invariance-based methods. Finally, although the dependencies of BMNISTR-1 and BMNISTR-3 have different trends, AFLAC improved the F-measures on both datasets. Next, we investigated the relationship between the strength of regularization and performance.
The accuracy curves (panels c and d of the corresponding figure) show that the accuracy gap between AFLAC-Abl and AFLAC widens under strong regularization (such as γ = 10), suggesting that AFLAC, as designed, does not tend to lose accuracy with a strong regularizer and is thus robust to the choice of this hyperparameter. In this paper, we proposed a novel method, AFLAC, which maximizes domain invariance within a range that does not interfere with classification accuracy, via adversarial training. Empirical validations show the superior DG performance of AFLAC over the baseline methods, supporting the importance of domain-class dependency in domain generalization tasks and the efficacy of the proposed method for overcoming the issue.
Address the trade-off caused by the dependency of classes on domains by improving domain adversarial nets
409
scitldr
Recent advances in deep generative models have led to remarkable progress in synthesizing high-quality images. Following their successful application in image processing and representation learning, an important next step is to consider videos. Learning generative models of video is a much harder task, requiring a model to capture the temporal dynamics of a scene in addition to the visual presentation of objects. While recent generative models of video have had some success, progress is hampered by the lack of quantitative metrics that consider visual quality, temporal coherence, and diversity of samples. To this end we propose Fréchet Video Distance (FVD), a new metric for generative models of video based on FID. We contribute a large-scale human study, which confirms that FVD correlates well with qualitative human judgment of generated videos.

Recent advances in deep generative models have led to remarkable success in synthesizing high-quality images. A natural next challenge is to consider video generation. This is a much harder task, requiring a model to capture the temporal dynamics of a scene in addition to the visual presentation of objects. Generative models of video will enable many applications, including missing-frame prediction , improved instance segmentation , and complex (relational) reasoning tasks by conducting inference .

While great progress has been made in recent years, video generation models are still in their infancy, and generally unable to synthesize more than a few seconds of video . Learning a good dynamics model remains a major challenge in generating real-world videos. However, in order to quantitatively measure progress in synthesizing videos, we require metrics that consider visual quality, temporal coherence, and diversity of generated samples. We contribute Fréchet Video Distance (FVD), a new metric for generative models of video. FVD builds on the principles underlying Fréchet Inception Distance (FID;), which has been successfully applied to images. We introduce a feature representation that captures the temporal coherence of the content of a video, in addition to the quality of each frame. Unlike popular metrics such as Peak Signal to Noise Ratio (PSNR) or the Structural Similarity (SSIM;) index, FVD considers a distribution over videos, thereby avoiding the drawbacks of frame-level metrics . We contribute extensive experiments to evaluate FVD, including a large-scale human study which confirms that FVD coincides well with qualitative human judgment of generated videos.

Figure 1: Generated videos by various models ranked according to FVD (lower is better).

A generative model of videos must capture the underlying data distribution with which the observed data was generated. The distance between the real-world data distribution P_R and the distribution P_G defined by the generative model is a natural evaluation metric. An analytic expression of either distribution is usually unavailable, which prohibits straightforward application of many common distance functions. For example, the popular Fréchet distance (or 2-Wasserstein distance) between P_R and P_G is defined by
d²(P_R, P_G) = min_{X,Y} E ‖X − Y‖²,
where the minimization is over all pairs of random variables X and Y with distributions P_R and P_G, respectively.
This expression is difficult to evaluate in the general case, but it has a closed-form solution when P_R and P_G are multivariate Gaussians:
d²(P_R, P_G) = ‖µ_R − µ_G‖² + Tr( Σ_R + Σ_G − 2(Σ_R Σ_G)^{1/2} ),   (1)
where µ_R and µ_G are the means and Σ_R and Σ_G are the covariance matrices of P_R and P_G. A multivariate Gaussian is seldom an accurate representation of the underlying data distribution, but with a suitable feature space this is a reasonable approximation. For distributions over real-world images, used a learned feature embedding to calculate the distance between P_R and P_G as follows: first, samples from P_R and P_G are fed through an Inception network trained on ImageNet and their feature representations (activations) in one of the hidden layers are recorded. Then the Fréchet Inception Distance (FID;) is computed via Eq. 1 using the means and covariances obtained from the recorded responses of the real and generated samples.

The feature representation learned by the pre-trained neural network greatly affects the quality of the metric. When training on ImageNet, the learned features expose information useful for reasoning about objects in images, whereas other information content may be suppressed. Likewise, different layers of the network encode features at different abstraction levels. In order to obtain a suitable feature representation for videos we require a pre-trained network that considers the temporal coherence of the visual content across a sequence of frames, in addition to its visual presentation at any given point in time. In this work we investigate several variations of a pre-trained Inflated 3D Convnet (I3D;), and name the resulting metric the Fréchet Video Distance (FVD).¹ ²

The I3D network generalizes the Inception architecture to sequential data, and is trained to perform action recognition on the Kinetics data set of human-centered YouTube videos. Action recognition requires visual context and temporal evolution to be considered simultaneously, and I3D has been shown to excel at this task. We explore two different feature representations (logits, avg. pool) learned by I3D networks pre-trained on Kinetics-400 and Kinetics-600.

A potential downside of using Eq. 1 is the potentially large error in estimating Gaussian distributions over the learned feature space. Bińkowski et al. proposed to use the Maximum Mean Discrepancy (MMD;) as an alternative in the case of images, and we explore this variation in the context of videos as well. MMD is a kernel-based approach, which provides a means to calculate the distance between two empirical distributions without assuming a particular form. Bińkowski et al. proposed the polynomial kernel k(a, b) := (aᵀb + 1)³, which we apply to the learned features of the I3D network to obtain the Kernel Video Distance (KVD).

1 A similar adaptation of FID was used by to evaluate their vid2vid model. Here we introduce FVD as a general metric for videos and focus on an extensive empirical study.
2 Code to compute FVD is available at https://git.io/fpuEH.
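For reference, a self-contained sketch of both estimators over arrays of embeddings; discarding the imaginary round-off in the matrix square root is our own engineering choice, not part of the definitions.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_real, feats_gen):
    """Eq. 1 on two (n_samples, dim) arrays of network activations."""
    mu_r, mu_g = feats_real.mean(0), feats_gen.mean(0)
    sigma_r = np.cov(feats_real, rowvar=False)
    sigma_g = np.cov(feats_gen, rowvar=False)
    covmean = sqrtm(sigma_r @ sigma_g)
    if np.iscomplexobj(covmean):       # discard numerical imaginary parts
        covmean = covmean.real
    return float(np.sum((mu_r - mu_g) ** 2)
                 + np.trace(sigma_r + sigma_g - 2.0 * covmean))

def kernel_distance(feats_real, feats_gen):
    """Unbiased MMD^2 with the polynomial kernel k(a, b) = (a'b + 1)^3."""
    k = lambda a, b: (a @ b.T + 1.0) ** 3
    m, n = len(feats_real), len(feats_gen)
    k_rr, k_gg, k_rg = k(feats_real, feats_real), k(feats_gen, feats_gen), k(feats_real, feats_gen)
    return float((k_rr.sum() - np.trace(k_rr)) / (m * (m - 1))
                 + (k_gg.sum() - np.trace(k_gg)) / (n * (n - 1))
                 - 2.0 * k_rg.mean())
```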
In the following we present the results of a noise study and a human study of FVD. Additional experiments analyzing its sensitivity to sample size and resolution are available in Appendices B and D. We first test how sensitive FVD is to basic distortions by adding noise to real videos; we consider static noise added to individual frames, and temporal noise, which distorts the entire sequence of frames. We applied these distortions (details in Appendix A) at up to six different intensities, and computed the FVD and KVD between videos from the BAIR , Kinetics-400, and HMDB51 data sets and their noisy counterparts. As potential embeddings, we considered the top-most pooling layer and the logits layer of the I3D model pre-trained on the Kinetics-400 data set, as well as the same layers in a variant of the I3D model pre-trained on the extended Kinetics-600 data set. As a baseline, we compared to a naive extension of FID for videos in which the Inception network (pre-trained on ImageNet) is evaluated for each frame individually, and the resulting embeddings (or their pair-wise differences) are averaged to obtain a single embedding for each video. This "FID" score is then computed according to Eq. 1.

All variants were able to detect the various injected distortions to some degree, with the pre-trained Inception network generally being inferior at detecting temporal distortions, as expected. FIG0 shows that the logits layer of the I3D model pre-trained on Kinetics-400 has the best average rank correlation with the sequence of noise intensities. An overview of its scores on the noise experiments can be seen in FIG2.

Table 1: Agreement of metrics with human judgment when considering models with a fixed value for a given metric (eq.), or with spread values over a wide range (spr.).

One important criterion for the performance of generative models is the visual fidelity of the samples as judged by human observers , as a metric for generative models must ultimately correlate well with human judgment. We therefore trained several conditional video generation models and asked human raters to compare the quality of the generated videos in different scenarios.

Results: The results of the human evaluation studies can be seen in Table 1, with additional results for KVD and Avg. FID in Appendix C. We find that FVD is the superior choice compared to all other metrics tested. The results obtained for eq. FVD and spr. FVD are of key importance, as they determine how users will experience FVD in practice. From the spr. FVD column we can conclude that no other metric can improve upon the ranking induced by FVD, and the eq. FVD column tells us that no other metric can reliably distinguish between good models that are equal in terms of FVD. On the other hand, FVD is able to distinguish models when other metrics cannot (eq. SSIM, eq. PSNR), agreeing well with human judgment (74.9% and 81.0% agreement, respectively). Likewise, FVD consistently improves on the rankings induced by other metrics (spr. SSIM, spr. PSNR), even though these scenarios clearly favor the metric under consideration.

We introduced the Fréchet Video Distance (FVD), a new evaluation metric for generative models of video, and an important step towards better evaluation of such models. Our experiments confirm that FVD is accurate in evaluating videos that were modified to include static noise and temporal noise. More importantly, a large-scale human study among generated videos from several recent generative models reveals that FVD consistently outperforms SSIM and PSNR in agreeing with human judgment.
To test whether FVD can detect static noise, we added one of the following distortions to each frame in a sequence of video frames: a black rectangle drawn at a random location in the frame; Gaussian blur, which applies a Gaussian smoothing kernel to the frame; Gaussian noise, which interpolates between the observed frame and standard Gaussian noise; and salt-and-pepper noise, which sets each pixel in the frame to either black or white with a fixed probability. Temporal noise was injected by locally swapping randomly chosen frames with their neighbors in the sequence, globally swapping randomly chosen pairs of frames selected across the whole sequence, interleaving the frame sequences of multiple different videos to obtain new videos, and switching from one video to another after a number of frames to obtain new videos. We applied these distortions at up to six intensities that are unique to each type, e.g., related to the size of the black rectangle, the number of swaps to perform, or the number of videos to interleave. We conduct the noise study on HMDB as well.

[Figure: rank correlation of noise intensity with each metric, for the FVD and KVD variants (Kinetics-400/600 models, logits/pool layers) and the frame-averaged FID baselines (over frames and over frame differences).]
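A sketch of the three temporal distortions, assuming videos are NumPy arrays of shape (frames, height, width, channels); the swap counts and intensities are experiment parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_swap(video, n_swaps):
    """Swap randomly chosen frames with their immediate right neighbour."""
    video = video.copy()
    for t in rng.integers(0, len(video) - 1, size=n_swaps):
        video[[t, t + 1]] = video[[t + 1, t]]
    return video

def global_swap(video, n_swaps):
    """Swap randomly chosen pairs of frames drawn across the whole sequence."""
    video = video.copy()
    for _ in range(n_swaps):
        i, j = rng.choice(len(video), size=2, replace=False)
        video[[i, j]] = video[[j, i]]
    return video

def interleave(videos):
    """Build new videos whose consecutive frames cycle through the inputs."""
    k, t = len(videos), len(videos[0])
    return [np.stack([videos[(m + i) % k][i] for i in range(t)]) for m in range(k)]
```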
We now consider the accuracy with which FVD is able to estimate the true underlying distance between a distribution of generated videos and a target distribution. To calculate FVD according to Eq. 1 we need to estimate µ_R, µ_G and Σ_R, Σ_G from the available samples. The larger the sample size, the better these estimates will be, and the better FVD will reflect the true underlying distance between the distributions. For an accurate generative model these distributions will typically be fairly close, and noise from the estimation process will primarily affect our results. This effect has been well studied for FID (; Bińkowski et al., 2018), and is depicted for FVD in FIG3. It can be seen that even when the underlying distributions are identical, FVD will typically be larger than zero because our estimates of the parameters µ_R, µ_G, Σ_R and Σ_G are noisy. It can also be seen that for a fixed number of samples the standard errors (measured over 50 tries) are small, and an accurate comparison can be made. Hence, it is critical that in comparing FVD values across models, one uses the same sample size.

[Figure: FVD as a function of the number of samples (2^5 through 2^12) on the BAIR data set.]

We trained conditional video generation models on the BAIR data set. Using a wide range of possible hyperparameter settings, and by including model parameters at various stages of training, we obtain over 3,000 different models. Generated videos are obtained by combining 2 frames of context with the 14 subsequent output frames. Following prior work, we obtain the PSNR and SSIM scores by generating 100 videos for each input context (conditioning frames) and returning the best frame-averaged value among these videos. We use 256 video sequences (unseen by the model) to estimate the target distribution when computing FVD. We conduct several human studies based on different subsets of the trained models. In particular, we select models according to two scenarios:

One Metric Equal: We consider models that are indistinguishable according to a single metric, and evaluate to what degree human raters and competing metrics are able to distinguish these models in terms of the quality of the generated videos. We choose 10 models having roughly equal values for a given metric, close to the best quartile of the overall distribution of that metric; i.e., the models were chosen such that they are worse than 25% of the remaining models and better than 75% of the remaining models, as determined by the metric under consideration. We were able to choose models whose values were identical up to the first 4-5 significant digits for each metric.

One Metric Spread: In a second setting we consider to what degree the rankings of models having very different scores for a given metric coincide with the subjective quality of their generated videos as judged by humans. We choose 10 models equidistant between the 10% and 90% percentiles of the overall distribution of that metric. In this case, there should be clear differences in the quality of the generated videos among the models under consideration (provided that the metric is accurate), suggesting high agreement with human judgment for that metric in comparison to competing metrics.

For the human evaluation, we used 3 generated videos from each selected model. Human raters were shown a video from each of two models and asked to identify which of the two looked better, or alternatively to report that their quality was indistinguishable. Each pair of compared videos was shown to up to 3 independent raters, where the third rater was only asked if the first two raters disagreed. The raters were given no prior indication of which video was thought to be better. We calculated the correspondence between these human ratings and the rankings determined by the various metrics under consideration.

Table 3: Agreement of metrics with human judgment when considering models with a fixed value for a given metric (eq.), or with spread values over a wide range (spr.). FVD is superior at judging generated videos based on subjective quality.

Table 3 contains the results of the human evaluation for the KVD and Avg. FID metrics, in addition to the results for all other metrics from the main paper. Avg. FID is computed by averaging the Inception embedding over frames before computing the Fréchet distance.

• Avg. FID: We find that Avg. FID performs markedly worse than FVD in most scenarios, except on spr. SSIM, where it achieves slightly better performance. This suggests that it is preferable to judge the wide range of videos (sampled from each decile as determined by SSIM) primarily on frame-level quality. On the other hand, when considering videos of similar quality in eq. SSIM, we find that judging based on temporal coherence (in addition to frame-level quality) is beneficial, and Avg. FID performs worse.

• KVD: We find that KVD is highly correlated with FVD (Spearman: 0.9), although in most scenarios it performs slightly worse than FVD in terms of agreement with human judgment.

In general we may conclude from Table 3 that FVD is the superior choice compared to all other metrics tested. The results obtained for eq. FVD and spr. FVD are of key importance, as they determine how users will experience FVD in practice. From the spr. FVD column we can conclude that no other metric can improve upon the ranking induced by FVD, and the eq. FVD column tells us that no other metric can reliably distinguish between good models that are equal in terms of FVD. Table 3 also reports the agreement among raters.
These are computed as the fraction of comparisons in which the first two raters agreed for a given video pair, averaged across all comparisons to obtain the final percentage. It can be seen that in most cases the raters are confident in comparing generated videos.

While the results in Appendix B demonstrate that FVD is highly reproducible for a fixed sample size, they do not establish to what degree small differences in FVD are meaningful. To answer this question, human raters were asked to compare videos generated by a randomly chosen model having an FVD of 200 / 400 (base200 / base400) against videos generated by models that were 10, 20, 50, 100, 200, 300, 400, and 500 FVD points worse. In each case we selected 5 models from the models available at each of these FVD scores and generated 3 videos per model, resulting in a total of 1,800 comparisons. For a given video comparison, raters were asked to decide which of the two videos looked better, or whether they were of similar quality. For each pair, we asked up to 3 human raters for their opinion. Figure 5 shows that when the difference in FVD is smaller than 50, the agreement with human raters is close to random (but never worse), and that it increases rapidly once two models are more than 50 FVD points apart. Hence, it can be concluded that differences of 50 FVD or more typically correspond to differences in the quality of the generated videos that can be perceived by humans.

Figure 5: Fraction of human raters that agree with FVD on which of two models is better, as a function of the difference in FVD between the models. Error bars are standard errors, and raters deciding that video pairs are of similar quality are counted as not agreeing with FVD.
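A toy sketch of the two aggregation rules described above; the data layout is our own invention for illustration.

```python
def rater_agreement(ratings_per_pair):
    """Fraction of pairs whose first two raters gave the same verdict."""
    agree = sum(r[0] == r[1] for r in ratings_per_pair if len(r) >= 2)
    return agree / len(ratings_per_pair)

def fvd_agreement(verdicts):
    """Fraction of verdicts matching FVD's ranking; 'similar' counts against it.

    `verdicts` holds one of 'fvd_better', 'fvd_worse', 'similar' per comparison.
    """
    return sum(v == 'fvd_better' for v in verdicts) / len(verdicts)
```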
We propose FVD: a new metric for generative models of video based on FID. A large-scale human study confirms that FVD correlates well with qualitative human judgment of generated videos.
410
scitldr
Despite advances in deep learning, artificial neural networks do not learn the way humans do. Today, neural networks can learn multiple tasks when trained on them jointly, but cannot maintain performance on learnt tasks when tasks are presented one at a time; this phenomenon, called catastrophic forgetting, is a fundamental challenge to overcome before neural networks can learn continually from incoming data. In this work, we derive inspiration from human memory to develop an architecture capable of learning continuously from sequentially incoming tasks while averting catastrophic forgetting. Specifically, our model consists of a dual memory architecture that emulates the complementary learning systems (the hippocampus and the neocortex) in the human brain, and maintains a consolidated long-term memory via generative replay of past experiences. We (i) substantiate our claim that replay should be generative, (ii) show the benefits of generative replay and dual memory via experiments, and (iii) demonstrate improved performance retention even for small models with low capacity. Our architecture displays many important characteristics of human memory and provides insights into the connection between sleep and learning in humans.

Many machine learning models, when trained sequentially on tasks, forget how to perform previously learnt tasks. This phenomenon, called catastrophic forgetting, is prominent in neural networks BID23. Without a way to avert catastrophic forgetting, a learning system needs to store all training data and relearn on it, along with new incoming data, when retraining. Hence, overcoming it is an important challenge in enabling systems to learn continuously. BID23 first suggested that the underlying cause of forgetting was the distributed, shared representation of tasks via network weights. Subsequent works attempted to remedy the issue by reducing representational overlap between input representations via activation sharpening algorithms BID17, orthogonal recoding of inputs BID19, or orthogonal activations at all hidden layers BID24 BID5. More recent works have explored activations like dropout BID9 and local winner-takes-all BID36 to create sparse, less correlated feature representations. But such sparse encodings can be task-specific at times and, in general, act as heuristics that only mildly pacify the underlying problem. Further, natural cognitive systems are also connectionist in nature, and yet they forget gradually but not 'catastrophically'. For instance, humans demonstrate gradual, systematic forgetting: frequently and recently encountered tasks tend to survive much longer in human memory, while those rarely encountered are slowly forgotten. Some of the earlier tasks may be seen again, but it is not necessary for them to be retained in memory BID7. Hence, sparsifying representations alone does not solve the problem. Instead, neuroscientific evidence suggests that humans have evolved mechanisms to separately learn new incoming tasks and to consolidate this learning with previous knowledge to avert catastrophic forgetting BID22 BID29 BID7.

Complementary learning systems: BID22 suggested that this separation has been achieved in the human brain via the evolution of two separate areas: the hippocampus and the neocortex.
The neocortex is a long-term memory which specializes in consolidating new information with previous knowledge and gradually learns the joint structure of all tasks and experiences, whereas the hippocampus acts as a temporary memory that rapidly learns new tasks and then slowly transfers the knowledge to the neocortex after acquisition.

Experience replay: Another factor deemed essential for sequential learning is experience replay. BID22; BID29 have emphasized the importance of replayed data patterns in the human brain during sleep and waking rest. BID31; BID32 proposed several replay techniques (a.k.a. pseudopattern rehearsal) to achieve replay, but these involve generating replay data without storing input representations, and our experiments show that they lack the accuracy required for consolidation.

Weight consolidation or freezing: Recent evidence from neuroscience also suggests that the mammalian brain protects knowledge in the neocortex via task-specific consolidation of neural synapses over long periods of time BID37 BID0. Such techniques have recently been employed in progressive neural networks BID34 and PathNets BID4, both of which freeze neural network weights after learning tasks. BID16 used the Fisher information matrix (FIM) to slow down learning on network weights that correlate with previously acquired knowledge.

In this paper, we address the catastrophic forgetting problem by drawing inspiration from the above neuroscientific insights. More specifically, we propose a dual-memory architecture for learning tasks sequentially while averting catastrophic forgetting. Our model comprises two generative models: a short-term memory (STM) to emulate the human hippocampal system and a long-term memory (LTM) to emulate the neocortical learning system. The STM learns new tasks without interfering with previously learnt tasks in the LTM. The LTM stores all previously learnt tasks and aids the STM in learning tasks similar to previous tasks. During sleep/down-time, the STM generates and transfers samples of learnt tasks to the LTM, where they are gradually consolidated with the LTM's knowledge base of previous tasks via generative replay. Our approach is inspired by the strengths of deep generative models, experience replay, and the complementary learning systems literature. We demonstrate our method's effectiveness in averting catastrophic forgetting by sequentially learning multiple tasks. Moreover, our experiments shed light on some characteristics of human memory as observed in the psychology and neuroscience literature.

Formally, our problem setting can be called sequential multitask learning and is characterized by a set of tasks T, which are to be learnt by a model parameterized by weights θ (e.g., a neural network). From here on, we use the terms model and neural network interchangeably. In this work we mainly consider supervised learning tasks, i.e., task t ∈ T has training examples {(x_i^t, y_i^t)}_{i=1}^{N_t} with x_i^t ∈ X and y_i^t ∈ Y, but our model generalizes easily to unsupervised learning settings. Note that tasks are presented sequentially and the total number of tasks |T| is not known a priori.

Finite memory: We further assume that a training algorithm can store some examples from each task if needed, but the storage (N_max) is limited and can be smaller than the total number of examples from all tasks, Σ_{t=1}^{|T|} N_t. So algorithms cannot store all training examples and relearn on them when new tasks arrive.
The same restriction applies to algorithms with generative models, i.e., no more than N_max examples are allowed at any time (generated + stored). For testing, the model can be asked to predict the label y^t ∈ Y for any example x^t ∈ X from any previously seen task t ∈ T. Our goal is to devise an algorithm which learns these tasks sequentially while avoiding catastrophic forgetting, achieving a test loss close to that of a model which learnt all the tasks jointly.

The idea of replaying experience to a neural network has been used previously in reinforcement learning BID20, and a study by BID29 suggests that experience replay also occurs in the human brain during sleep and waking rest, where it aids the consolidation of learnt experiences. We propose that experience replay must be generative in nature. This is better than storing all samples in replay memories, as is common in reinforcement learning, since sampling from a generative model automatically provides the most frequently encountered samples. It is also feasible with limited total memory, whereas explicitly storing samples from previous tasks requires determining which and how many samples to store for each task. This determination can depend on the total number of tasks |T|, the number of examples per task N_t, and the frequency of occurrence of samples, which are often not available a priori. Previously proposed non-generative approaches to experience replay BID31 BID6 BID32 preserve a neural network's learnt mapping by arbitrarily sampling random inputs and their corresponding outputs from the network and training on them along with new task samples. These approaches have only been tested in small binary input spaces in previous works, and our experiments in section 4 show that sampling random inputs in high-dimensional spaces (e.g., images) does not preserve the mapping learnt by neural networks.

Our deep generative memory (DGM; FIG0) has three elements: (i) a generative model (the generator G), (ii) a feedforward network (the learner L), and (iii) a dictionary (D_dgm) with the task IDs of learnt tasks and the number of times each was encountered. We call this a memory because of its weights and learning capacity, not due to any recurrent connections. We assume the availability of unique task IDs for replay and to identify repetition; in practice, a task identification system (e.g., an HMM-based inference model), as in previous works BID16, suffices for this purpose. We choose a variational autoencoder (VAE) BID15 for the generator, since our generative model requires reconstruction capabilities (see section 3.2). We update a DGM with samples from (multiple) new tasks using our algorithm Deep Generative Replay (see figure 1 above and algorithm 1 in appendix A). Given new incoming samples (X, Y), DGR first computes the fraction of total samples that should come from the incoming samples (η_tasks) and the fraction that should come from previous task samples (η_gen), proportionate to the numbers of tasks (counting repetitions). It allots a minimum fraction κ of the memory capacity N_max per new task. This ensures that as the DGM saturates with tasks over time, new tasks are still learnt, at the cost of gradually losing performance on the least recent previous tasks. This saturation is analogous to how learning slows down in humans as they age, while they still continue to learn new tasks and forget old things gradually BID7.
Next, DGR computes the number of samples to be generated from previous tasks and subsamples the incoming samples (if needed) to obey the maximum memory capacity (N_max). It then generates samples of previously learnt tasks (X_gen, Y_gen) using the generator and learner, reconstructs the data {X, X_gen} using the generator (hence we use a VAE), and then trains the DGM on the resulting samples (X_recon, {Y, Y_gen}). This final reconstruction provides robustness to noise and occlusion (section 5).
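A compact sketch of this balancing step follows; the exact bookkeeping lives in Algorithm 1 (appendix A), so the function below is an approximation with our own names.

```python
def replay_budget(n_incoming, n_new_tasks, n_prev_tasks, n_max, kappa):
    """Split the memory budget between incoming and generated samples.

    Fractions are proportional to task counts (with repetition), but every
    new task is guaranteed at least a `kappa` fraction of capacity.
    """
    total_tasks = n_new_tasks + n_prev_tasks
    eta_tasks = max(n_new_tasks / total_tasks, kappa * n_new_tasks)
    eta_gen = 1.0 - eta_tasks
    n_keep = min(n_incoming, int(eta_tasks * n_max))   # subsample incoming if needed
    n_gen = min(int(eta_gen * n_max), n_max - n_keep)  # to generate from old tasks
    return n_keep, n_gen

# e.g. one new task arriving after nine learnt ones, N_max = 100000, kappa = 0.05:
# replay_budget(30000, 1, 9, 100000, 0.05) -> (10000, 90000)
```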
A good continual learning system must quickly acquire new tasks while also retaining performance on previously learnt tasks; these conflicting requirements are hard to satisfy simultaneously. Hence, inspired by nature's solution to this problem, we propose a dual memory network to combat forgetting. Our architecture (DGDMN), shown in figure 2, comprises a large deep generative memory (DGM) called the long-term memory (LTM), which stores information on all previously learnt tasks like the neocortex, and a short-term memory (STM) which behaves similarly to the hippocampus and learns new incoming tasks quickly without interference from previous tasks. The STM is a collection of small, dedicated, task-specific deep generative memories (called short-term task memories, STTMs), each of which can learn one unique task. If an incoming task is already in an STTM, the same STTM is used to retrain on it; otherwise a fresh STTM is allocated. Additionally, if the task has previously been consolidated, the LTM reconstructs the incoming samples for that task using its generator (hence we use a VAE), predicts labels for the reconstructions using its learner, and sends these newly generated samples to the STTM allocated to the task. This provides extra samples for tasks which have been learnt previously, helping to learn them better while also partially preserving previous performance on them. Once all n_STM STTMs are exhausted, the architecture sleeps (like humans) to consolidate all tasks into the LTM and free up the STTMs for new tasks. While asleep, the STM generates and sends samples of the learnt tasks to the LTM, where they are consolidated via deep generative replay (see FIG1). While testing on task t (even intermittently between tasks), if any STTM currently contains task t, it is used to predict the labels; otherwise the prediction is deferred to the LTM. This allows prediction on all tasks seen so far (including the most recent ones) without sleeping.

We perform experiments to demonstrate forgetting on sequential image classification tasks. We briefly describe our datasets here (details in appendix B): (a) Permnist is a catastrophic forgetting benchmark BID9 BID16 in which each task applies a fixed permutation of pixels to MNIST images BID18; (b) the Digits dataset involves classifying a single MNIST digit per task; (c) TDigits is a transformed variant of MNIST, similar to Digits but with 40 tasks, used for long task sequences; (d) Shapes contains several geometric shape classification tasks; and (e) Hindi contains a sequence of 8 Hindi-language consonant recognition tasks.

Along with our model (DGDMN), we test several baselines for catastrophic forgetting, described briefly here (implementation and hyperparameter details in appendix B): (a) feedforward neural networks (NN), which we use to characterize forgetting in the absence of any prevention mechanism and as a datum for the other approaches; (b) neural nets with dropout (DropNN); BID9 suggested using dropout to prevent representational overlaps and pacify catastrophic forgetting; (c) pseudopattern rehearsal (PPR), a non-generative approach to experience replay BID32; (d) elastic weight consolidation (EWC); BID16 proposed using the Fisher information matrix for task-specific consolidation of weights in a neural network; and (e) deep generative replay (DGR), which uses a single DGM, to separate the effects of generative replay from those of the dual memory architecture.

In our preliminary experiments, we observed that large networks with excessive parameters can more easily adapt to sequentially incoming tasks, thereby masking the severity of catastrophic forgetting. We therefore chose network architectures which have to share all their parameters appropriately amongst the various tasks in a dataset to achieve reasonable joint accuracy on that dataset. This allows us to evaluate an algorithm carefully while ignoring the benefits provided by excessive parameterization.

We trained DGDMN and all the above baselines sequentially on the image classification tasks of Permnist, Digits, Shapes and Hindi (separately); results on Shapes and Hindi are shown in appendix A. The classification accuracy on a held-out test set for each task, after training on the t-th task, is shown in figures 3 and 4. We used the same network architecture for each of NN, PPR, EWC, the learner in DGR, and the learner in the LTM of DGDMN (for any single dataset). DropNN additionally had two dropout layers, one after each hidden layer (see appendix B for details).

We observe from figures 3a and 3b that NN and DropNN forget catastrophically when learning new tasks. This shows that sparse-representation-based methods rely on the neural network having enough capacity to learn sparse representations BID9 and may not perform well if the network lacks redundant weights. EWC forgets less than NN and DropNN, but it rapidly slows down learning on many weights, and its learning effectively stagnates after Task 3 (e.g., see Tasks 5 and 6 in FIG2). The learning slowdown on weights prevents EWC from reusing those weights later to jointly discover common structure between previously learnt and newly incoming tasks. Note that the networks do have the capacity to learn all tasks, and our algorithms DGR and DGDMN outperform all baselines by learning all tasks sequentially with this same learner network (figures 3e, 3f).

We observed heavy forgetting on Digits (figure 4) for most baselines, which is expected because all samples in the t-th task have a single label (t), so the t-th task can be learnt on its own by setting the t-th bias of the softmax layer high and the other biases low. Such sequential tasks cause catastrophic forgetting. We observed that NN, DropNN, PPR and EWC learnt only the task being trained on and immediately forgot all previous knowledge. Sometimes we also observed saturation, due to a softmax bias being set very high and the network being unable to recover from it. PPR showed severe saturation, since its replay prevented it from escaping the saturation.
DGR and DGDMN, in contrast, retain performance on all tasks of Digits, and our replay strategy prevents saturation by appropriately balancing the ratio of new incoming samples to generated samples from previous tasks. The average forgetting over all tasks ∈ {1, ..., t}, after training on the t-th task (for both Digits and Permnist), is shown in FIG4. For absolute reference, the accuracy of NN trained jointly on all tasks up to the t-th is also shown for each t. Again, DGR and DGDMN outperform the baselines in terms of retained average accuracy. In FIG4, NN, DropNN, PPR and EWC follow nearly overlapping curves (acc ≈ 1/t), since they are only able to learn one task at a time. Further, though PPR involves experience replay, it does not compare with DGR and DGDMN (figures 3c, 4c): although it preserves its learnt mapping around points randomly sampled from its domain, these random samples are not close to real images and fail to preserve performance. These observations substantiate our claim that any replay mechanism must model the input domain accurately and hence must be generative in nature. We observed similar results for the Shapes and Hindi datasets (appendix A).

We point out that datasets like Digits, whose tasks contain highly correlated input (and/or output) samples, should be important benchmarks for continual learning, for two main reasons. (i) High correlation amongst task samples promotes overfitting to the new incoming task and therefore causes catastrophic forgetting; being able to retain performance on such task sequences is a strong indicator of the efficacy of a continual learning algorithm. (ii) Humans also learn by seeing many correlated samples together in a short span of time, rather than witnessing nearly IID samples (as in Permnist). For example, children learn a single letter per day in kindergarten by seeing and writing that letter many times that day. Since NN, DropNN and PPR do not fare well on such tasks, we show experiments on EWC, DGR and DGDMN from here on.

It is well known in the psychology literature that human learning improves via revision BID14 BID1. We show the performance of EWC and DGDMN on Permnist when some tasks are repeated (figure 6); DGR performs very similarly to DGDMN, so we omit it. EWC stagnates: once learning has slowed down on the weights important for Task 1, those weights cannot be changed again, not even to improve Task 1 itself. Further, EWC did not learn Task 6 the first time, and revision does not help either. DGDMN, however, learns all tasks up to Task 6, then benefits from revising Task 1 (accuracy goes up), and somewhat from revising Task 6 (which it had not forgotten substantially). We reiterate that DGDMN, by design, benefits significantly from revision, because STTMs learning a repeated task gain extra samples from the LTM (or generated samples from themselves, if they had learnt the task before). While many previous works do not investigate revision, it is crucial for learning continuously and should improve performance on tasks.

To explore the role of the dual memory architecture and to differentiate DGDMN from DGR, we trained both algorithms on the long sequence of 40 tasks from the TDigits dataset. We limited N_max to 120,000 samples for this experiment, to explore the case where the LTM in DGDMN (the DGM in DGR) cannot regenerate as many samples as in the full dataset and has to forget some tasks.
At least a κ = 0.05 fraction of memory was ensured per new task, and consolidation in DGDMN happened after every n_STM = 5 tasks. The average forgetting curves are plotted against the number of tasks encountered in figure 7a. DGDMN and DGR start around an average accuracy of 1.0, but begin dropping after 10 tasks, once the LTM (the DGM for DGR) begins to saturate. While DGDMN degrades slowly and retains > 40% accuracy over all tasks, DGR drops below 20% accuracy. This is because DGR consolidates its DGM too often, and the DGM's slightly erroneous self-generated samples compound errors quickly. DGDMN uses small STTMs to learn single tasks with low error and transfers them to the LTM simultaneously; as a consequence, DGDMN consolidates its LTM with more accurate samples and less often, so its error accumulates much more slowly. We discuss the effect of the small error in STTM representations in section 5.

Even though DGDMN displays inevitable forgetting in figure 7a (due to the memory constraint), the forgetting is gradual, not catastrophic as seen for NN, DropNN, PPR, etc. on the Digits dataset. We also measure the average accuracy on the most recent few (say, 10) tasks seen. FIG6 shows that DGDMN oscillates around 90% average accuracy on this metric, whereas DGR's frequent consolidation propagates errors too fast and its accuracy drops even here.

Another advantage of dual memories is revealed by the training times of the algorithms. FIG8 shows an order-of-magnitude difference between DGDMN and DGR in training time, because the STTMs are smaller and faster to train than the LTM. The LTM preserves all tasks seen so far and hence requires a large number of samples to consolidate, which is costly and should not be done after every task. Learning new tasks quickly in the STM and holding them until sleep provides a speed advantage and allows learning quickly with only periodic consolidation. The dual memory architecture is a critical design choice for scalability and has also emerged naturally in humans, in the form of the complementary learning systems and the need to sleep periodically. Even though sleeping is dangerous behavior for any organism, which can be harmed or attacked by a predator while asleep, sleep has survived eons of evolution and has never been lost BID12. Today, most organisms with even a slightly developed nervous system (centralized or diffuse) display either sleep or light-resting behavior BID28. This experiment demonstrates the importance of sleep: without the dual memory architecture intertwined with periodic sleep, learning would be very short-lived and highly time-consuming (as in DGR).

In this section we show that DGDMN shares some more remarkable characteristics with human memory and discuss some related ideas. Due to space constraints, visualizations of the learnt latent structures when training jointly vs. sequentially are deferred to appendix A. The hyperparameters of DGDMN (κ and n_STM) have intuitive interpretations, and we provide simple heuristics to choose them without any complex searches (appendix B).

Resilience to noise and occlusion: We use a VAE so that we can reconstruct representations of samples. Reconstructed images are less noisy and can recover from partial occlusion, which gives our model human-like abilities to recognize objects in noisy, distorted or occluded images. We test our LTM model and an NN model by jointly training on uncorrupted Digits data and testing on noisy and occluded images.
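A sketch of this test follows, assuming `vae` and `clf` are the already-trained generator and learner models; the corruption parameters are our own choices.

```python
import numpy as np

def occlude(x, patch=8):
    """Blacken a random patch x patch square in each 28x28 image."""
    x = x.copy()
    for img in x:
        r, c = np.random.randint(0, 28 - patch, size=2)
        img[r:r + patch, c:c + patch] = 0.0
    return x

def accuracy(model, x, y):
    return float(np.mean(model.predict(x).argmax(-1) == y))

# Direct classification vs. classification of the VAE reconstructions:
# acc_nn  = accuracy(clf, occlude(x_test), y_test)
# acc_ltm = accuracy(clf, vae.predict(occlude(x_test)), y_test)
```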
We see that the LTM is more robust to noisy and occluded images and exhibits smoother degradation in classification accuracy, owing to its denoising reconstructive properties (see FIG7).

The choice of underlying generative model: Our consolidation ability and retention performance rely heavily on the generation and reconstruction abilities of the underlying generative model. We chose a VAE for its reconstructive capabilities, but our architecture is agnostic to the choice of generative model, as long as the generator can generate reliable samples and reconstruct incoming samples accurately. Hence, variants of generative adversarial networks (GANs) such as BiGANs BID2, ALI, and AVB BID25 could also be used, depending on the modeled domain.

Why use a short-term memory?: Our LTM always learns from the STTMs and never from real data, so the STTMs' errors slowly propagate into the LTM and contribute to forgetting. An alternative would be to store the data from new incoming tasks directly, consolidate it into the LTM at periodic intervals, and then discard it. We show the accuracy curves on the Digits dataset for this approach in FIG8. This results in higher retention than DGDMN in FIG3, because the LTM now learns from real data. However, this approach is not truly online, since recently learnt tasks cannot be used until after a sleep phase. Since the STM's error can be reduced by using high-capacity generators and classifiers, we suggest using an STM for truly online continual learning.

Connections to knowledge distillation: Previous works on (joint) multitask learning have also proposed learning individual tasks with small networks and then "distilling" them jointly into a larger neural network. Such distillation can sometimes improve performance on individual tasks when they share structure, and at other times mitigate inter-task interference through the refinement of learnt functions during distillation BID30. Though we do not use temperature-controlled soft labels while consolidating tasks into the LTM (unlike distillation), we surmise that, owing to the refinement and compression during the consolidation phase, DGDMN also learns the joint task structure effectively while mitigating interference between tasks.

Approaches based on synaptic consolidation: Though our architecture draws inspiration from complementary learning systems and experience replay in the human brain, there is also considerable neuroscientific evidence for synaptic consolidation in the human brain (as in EWC). It would be interesting to explore how synaptic consolidation can be incorporated into our dual memory architecture without causing stagnation, and we leave this to future work. We also plan to extend our architecture to learning optimal policies over time via reinforcement learning without explicit replay memories.

In this work, we have developed a model capable of learning continuously on sequentially incoming tasks while averting catastrophic forgetting. Our model employs a dual memory architecture to emulate the complementary learning systems (the hippocampus and the neocortex) in the human brain, and maintains a consolidated long-term memory via generative replay of past experiences. We have shown that generative replay performs best for long-term performance retention, even for neural networks with small capacity, and have demonstrated the benefits of combining generative replay with a dual memory architecture through our experiments.
Our model hyperparameters have simple interpretations and can be set without much tuning. Moreover, our architecture displays remarkable parallels with the human memory system and provides useful insights about the connection between sleep and learning in humans. Deep Generative Replay (algorithm 1), as described in section 3.1, consolidates new tasks for a DGM with previously learnt tasks. It first computes sampling fractions for new tasks (η_tasks) and previously learnt tasks (η_gen) and ensures a minimum fraction (κ) per new task (lines 3-6). Then it computes the number of samples to generate from previous tasks and whether to subsample the incoming task samples to satisfy the memory capacity N_max (lines 7-12). Finally, it generates the required number of samples from previous tasks, reconstructs all data and trains the DGM on the resulting data (lines 13-16). For a dictionary D, ‖D‖ is the total number of tasks in D counting repetitions, while |D| is the total number of tasks without repetitions. |X| is the number of samples in set X. BID35 have recently proposed a similar idea independently, and BID27 have also employed generative replay in two-layer restricted Boltzmann machines, but they do not describe balancing new and generated samples and cannot recognize repeated tasks (section 4.2). Their generative replay without a dual memory architecture is costly to train (section 4.3), and a lack of reconstruction for new samples makes their representations less robust to noise and occlusions (section 5). In this section, we present more experiments on the Shapes and the Hindi dataset, which contain sequences of tasks with geometric shape and Hindi consonant recognition, respectively. We observed similar forgetting patterns as on the Digits dataset in section 4. All baselines exhibited catastrophic forgetting on these sequences of tasks, but DGR and DGDMN were able to learn the task structure sequentially (figures 10, 11). The same is reflected in the average forgetting curves in figure 12. To explore whether learning tasks sequentially results in a similar structure as learning them jointly, we visualized t-SNE BID21 embeddings of the LTM's latent space on the Digits dataset, when trained jointly on all tasks and when trained sequentially with tasks seen one at a time (FIG0). To maintain consistency, we used the same random seed in t-SNE for both joint and sequential embeddings. We observe that the LTM's latent space effectively segregates the 10 digits in both cases (joint and sequential). Though the absolute locations of the digit clusters differ in the two plots, the relative locations of digits share some similarity between both plots, i.e. the neighboring digit clusters for each cluster are roughly similar. This may not be sufficient to conclude that the LTM discovers the same latent representation for the underlying shared structure of tasks in these cases, and we leave a more thorough investigation to future work. We also show visualizations of digits from the LTM when trained jointly on Digits tasks (FIG0) and when trained sequentially (FIG0). Though the digits generated from the jointly trained LTM are quite sharp, the same is not true for the sequentially trained LTM. We observe that the sequentially trained LTM produces sharp samples of the recently learnt tasks (digits 6, 7, 8 and 9), but blurred samples of previously learnt tasks, which is due to partial forgetting of these previous tasks. Permnist: Our version involved six tasks, each containing a fixed permutation on images sampled from the original MNIST dataset.
We sampled 30,000 images from the training set and all 10,000 test set images for each task. The tasks were as follows: (i) Original MNIST, (ii) 8×8 central patch of each image blackened, (iii) 8×8 central patch of each image whitened, (iv) 8×8 central patch of each image permuted with a fixed random permutation, (v) 12×12 central patch of each image permuted with a fixed random permutation, and (vi) mirror images of MNIST. This way each task is as hard as MNIST, and the tasks share some common underlying structure. Digits: We introduce this smaller dataset, which contains 10 tasks with the t-th task being classification of digit t from the MNIST dataset. TDigits: We introduced a transformed variant of MNIST containing all ten digits, their mirror images, their upside-down images, and their images when reflected about the main diagonal, making a total of 40 tasks. This dataset poses similar difficulty to the Digits dataset, and we use it for experiments involving longer sequences of tasks. Shapes: This dataset was extracted from the Quick, Draw! dataset recently released by BID10, which contains 50 million drawings across 345 categories of hand-drawn images. We subsampled 4,500 training images and 500 test images from all geometric shapes in Quick, Draw! (namely circle, hexagon, octagon, square, triangle and zigzag). Hindi: Extracted from the Devanagari dataset BID13, this contains a sequence of 8 tasks, each involving image classification of a Hindi language consonant. All models were trained with RMSProp using learning rate = 0.001, ρ = 0.9, ε = 10^-8 and no decay. We used a batch size of 128, and all classifiers were given 20 epochs of training when trained jointly, and 6 epochs when trained sequentially over tasks. For generative models (VAEs), we used gradient clipping in RMSProp with clipnorm = 1.0 and clipvalue = 0.5, and they were always trained for 25 epochs regardless of the task or dataset involved (see the configuration sketch below). We chose all our models by first training them jointly on all tasks in a dataset to ensure that our models had enough capacity to perform reasonably well on all tasks, but we gave preference to simpler models over very high capacity models. Classifier Models: Our implementations of NN, DropNN, PPR, EWC, the learner for DGR and the learner for the LTM in DGDMN used a neural network with three fully-connected layers, with the number of units tuned differently according to the dataset (24, 24 units for Digits; 48, 48 for Permnist; and 36, 36 for TDigits). DropNN also added two dropout layers, one after each hidden layer, with dropout rate = 0.2 each. The classifiers (learners) for the Shapes and Hindi datasets had two convolutional layers (12, 20 3×3 kernels for Shapes and 24, 32 3×3 kernels for Hindi), each followed by a 2×2 max-pooling layer. The last two layers were fully-connected (16, 6 for Shapes and 144, 36 for Hindi). The hidden layers used ReLU activations, the last layer had a softmax activation, and the model was trained to minimize the cross-entropy objective function. The learners for STTMs in DGDMN were kept smaller for speed and efficiency concerns. Generative models: The generators (VAEs) for DGR and the LTM of DGDMN employed encoders and decoders with two fully connected hidden layers each with ReLU activation for Permnist, Digits and TDigits, and convolutional variants for Shapes and Hindi. The sizes and number of units/kernels in the layers were tuned independently for each dataset with an approximate coarse grid-search.
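As referenced above, here is a minimal sketch of the stated training configuration in Keras 2-style code. The layer sizes follow the Digits classifier (24, 24 hidden units, 10 outputs); data loading and task splits are omitted, and the model code itself is our assumption rather than the authors'.

```python
# Sketch of the stated optimizer/model configuration (Keras 2 API).
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import RMSprop

# Digits classifier: three fully-connected layers with 24, 24 hidden units.
# input_shape assumes flattened 28x28 MNIST-style images (our assumption).
classifier = Sequential([
    Dense(24, activation="relu", input_shape=(784,)),
    Dense(24, activation="relu"),
    Dense(10, activation="softmax"),
])
classifier.compile(
    optimizer=RMSprop(lr=0.001, rho=0.9, epsilon=1e-8, decay=0.0),
    loss="categorical_crossentropy")

# VAE training additionally uses gradient clipping in RMSProp, as stated.
vae_optimizer = RMSprop(lr=0.001, rho=0.9, epsilon=1e-8, decay=0.0,
                        clipnorm=1.0, clipvalue=0.5)
```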
The size of the latent variable z was set to 32 for Digits, 64 for Permnist, 96 for TDigits, 32 for Shapes and 48 for Hindi. The STTM generators for DGDMN were kept smaller for speed and efficiency. DGDMN has two new hyperparameters: (i) κ: the minimum fraction of N_max reserved for incoming tasks, and (ii) n_STM: the number of STTMs (also the sleep/consolidation frequency). Both have straightforward interpretations and can be set directly without complex hyperparameter searches. κ ensures continual incorporation of new tasks by guaranteeing them a minimum fraction of LTM samples during consolidation. Given that the LTM should perform well on the last K tasks seen in a long task sequence of T tasks, we observed that it is safe to assume that about 50% of the LTM would be crowded by the earlier T − K tasks. The remaining 0.5 fraction should be distributed to the last K tasks, so choosing κ = 0.5/K works well in practice (or as a good starting point for tuning). We made this choice in section 4.3 with K = 10 and κ = 0.05, and hence plotted the average accuracy over the last 10 tasks as a metric. n_STM controls the consolidation cycle frequency. Increasing n_STM gives more STTMs and less frequent consolidations, and hence a learning speed advantage. But this also means that fewer samples of previous tasks would participate in consolidation (due to the maximum capacity N_max of the LTM), and hence more forgetting might occur. This parameter does not affect learning much while the LTM remains unsaturated (i.e. the N_max capacity is unfilled by generated + new samples) and becomes active after that. For long sequences of tasks, we found it best to keep at least 75% of the total samples from previously learnt tasks to have appropriate retention. Hence, n_STM can be set as approximately 0.25/κ in practice (as we did in section 4.3), or as a starting point for tuning. PPR: We used a maximum memory capacity of about 3–6 times the number of samples in a task for the dataset being learnt on (i.e. 18,000 for Digits, 60,000 for Permnist, 15,000 for Shapes and 5,400 for Hindi). While replaying, apart from the task samples, the remaining memory was filled with random samples and corresponding labels. EWC: Most values of the coefficient of the Fisher-information-matrix-based regularizer between 1 and 500 worked reasonably well for our datasets. We chose 100 for our experiments. DGR and DGDMN: N_max for the DGM in DGR and for the LTM in DGDMN for Digits, Permnist, Shapes and Hindi was set to the total number of samples in the datasets (summed over all tasks) to ensure that there was enough capacity to regenerate the datasets well. For TDigits, we deliberately restricted memory capacity to see the effects of learning tasks over a long time, and we kept N_max at half the total number of samples. n_STM was kept at 2 for Digits, Permnist and Shapes, 5 for TDigits and 2 for Hindi. κ was set to be small, so that it does not come into play for Digits, Permnist, Shapes and Hindi, since we already provided memories with full capacity for all samples. For TDigits, we used κ = 0.05, which would let us incorporate roughly 10 out of the 40 tasks well.
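The heuristics above reduce to two lines of arithmetic; the sketch below also spells out the sample-balancing step of Deep Generative Replay described earlier in this appendix. The exact balancing formula is our reading of the prose (function and variable names like replay_budget are ours), so treat this as an illustration rather than the authors' implementation.

```python
# Hedged sketch of the DGDMN hyperparameter heuristics and the DGR
# sample-balancing arithmetic (names and exact formula are our assumptions).

def dgdmn_hyperparams(k_recent):
    """kappa = 0.5 / K and n_STM ~ 0.25 / kappa, per the heuristics above."""
    kappa = 0.5 / k_recent
    n_stm = round(0.25 / kappa)
    return kappa, n_stm

def replay_budget(n_new_tasks, n_prev_tasks, n_new_samples, n_max, kappa):
    """How many samples to generate from old tasks vs. keep from new ones."""
    # Each new task is guaranteed at least a kappa fraction of LTM capacity.
    eta_tasks = max(n_new_tasks / (n_new_tasks + n_prev_tasks),
                    kappa * n_new_tasks)
    eta_gen = 1.0 - eta_tasks
    n_keep_new = min(n_new_samples, int(eta_tasks * n_max))  # subsample if needed
    n_generate = int(eta_gen * n_max)                        # from the generator
    return n_generate, n_keep_new

print(dgdmn_hyperparams(10))                    # -> (0.05, 5), the section 4.3 values
print(replay_budget(1, 9, 30000, 60000, 0.05))  # -> (54000, 6000)
```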
A dual-memory architecture inspired by the human brain to learn from sequentially incoming tasks while averting catastrophic forgetting.
411
scitldr
Most research on lifelong learning applies to images or games, but not language. We present LAMOL, a simple yet effective method for lifelong language learning (LLL) based on language modeling. LAMOL replays pseudo-samples of previous tasks while requiring no extra memory or model capacity. Specifically, LAMOL is a language model that simultaneously learns to solve the tasks and generate training samples. When the model is trained for a new task, it generates pseudo-samples of previous tasks for training alongside data for the new task. The results show that LAMOL prevents catastrophic forgetting without any sign of intransigence and can perform five very different language tasks sequentially with only one model. Overall, LAMOL outperforms previous methods by a considerable margin and is only 2-3% worse than multitasking, which is usually considered the LLL upper bound. The source code is available at https://github.com/jojotenya/LAMOL. The current dominant paradigm for machine learning is to run an algorithm on a given dataset to produce a trained model specifically for a particular purpose; this is isolated learning. In isolated learning, the model is unable to retain and accumulate the knowledge it has learned before. When a stream of tasks are joined to be trained sequentially, isolated learning faces catastrophic forgetting due to a non-stationary data distribution that biases the model (left figure of Figure 1). In contrast, lifelong learning is designed to address a stream of tasks by accumulating interconnected knowledge between learned tasks and retaining the performance of those tasks. A human easily achieves lifelong learning, but this is nontrivial for a machine; thus lifelong learning is a vital step toward artificial general intelligence. In this paper, we focus on lifelong language learning, where a machine achieves lifelong learning on a stream of natural language processing (NLP) tasks. To the best of our knowledge, lifelong language learning has been studied in only a few instances: for sentiment analysis, conversational agents, word representation learning, sentence representation learning, text classification, and question answering (d'). However, in all previous work, the tasks in the stream are essentially the same task but in different domains. To achieve lifelong language learning on fundamentally different tasks, we propose LAMOL - LAnguage MOdeling for Lifelong language learning. It has been shown that many NLP tasks can be considered question answering (QA). Therefore, we address multiple NLP tasks with a single model by training a language model (LM) that generates an answer based on the context and the question. Treating QA as language modeling is beneficial because the LM can be pre-trained on a large number of sentences without any labeling; however, this does not directly solve the problem of LLL. If we train an LM on a stream of tasks, catastrophic forgetting still occurs. (Figure 1: Left: After learning Task 2, the learner has already forgotten how to solve Task 1. This is "catastrophic forgetting". Middle: The basic idea of the data-based LLL approach. A generator is learned to generate examples it has seen before. Using the generator, the learner also learns from examples from the previous task to prevent it from forgetting. Right: A language model that simultaneously takes on the roles of learner and generator.) However, as an LM is intrinsically a text generator, we can use it to answer questions while generating pseudo-samples of
the previous task to be replayed later. LAMOL is inspired by the data-based approach for LLL, in which a generator learns to generate samples from previous tasks (middle of Figure 1). In contrast to previous approaches, LAMOL needs no extra generator (right of Figure 1). LAMOL is also similar to multitask training, but the model itself generates data from previous tasks instead of using real data. Our main contributions in this paper are: • We present LAMOL, a simple yet effective method for LLL. Our method has the advantage of requiring no extra memory or model capacity. We also do not need to know how many tasks to train in advance and can always train on additional tasks when needed. • Experimental results show that our method outperforms baselines and other state-of-the-art methods by a considerable margin and approaches the multitasking upper bound within 2-3%. • Furthermore, we propose adding task-specific tokens during pseudo-sample generation to evenly split the generated samples among all previous tasks. This extension stabilizes LLL and is particularly useful when training on a large number of tasks. • We analyze how different amounts of pseudo-samples affect the final performance of LAMOL, considering settings both with and without the task-specific tokens. • We open-source our code to facilitate further LLL research. Lifelong learning research is based on regularization, architecture, or data. Here is a brief survey of works in these three categories. In this approach, a constraint, i.e., a regularization term, is added to minimize deviation from trained weights while updating the weights in a new task. Most regularization-based methods estimate the importance of each parameter and add the importance as a constraint to the loss function. Elastic weight consolidation (EWC) calculates a Fisher information matrix to estimate the sensitivity of parameters as importance. Online EWC is a transformed version of EWC: instead of tracking the importance of parameters for each task, online EWC simply accumulates the importance over the stream of tasks. Synaptic intelligence (SI) assigns importance to each parameter according to its contribution to the change in the total loss. Memory aware synapses (MAS) estimate importance via the gradients of the model outputs. In contrast to estimating the importance of weights, incremental moment matching (IMM) matches the moments of weights between different tasks. For this category, the main idea is to assign a dedicated capacity inside a model to each task. After completing a task, the weights are frozen and may not be changed thereafter. Some methods allow models to expand, whereas some fix the size but must allocate capacity for tasks at the beginning. Progressive neural networks utilize one column of the neural network per task. Once a new task is trained, progressive neural networks augment a new column of the neural network for the task while freezing the past trained columns. Columns that have been frozen are not allowed to change but are connected to the new column to transfer knowledge from old tasks. Towards Training Recurrent Neural Networks for Lifelong Learning unifies gradient episodic memory and Net2Net. Using the curriculum-based setting, the model learns the tasks in easy-to-hard order. The model alleviates the forgetting problem with the GEM method, and if it fails to learn the current task and has not been expanded yet, the model expands to a larger model via the Net2Net approach.
PathNet reuses subsets of a neural network to transfer knowledge between tasks. Unlike progressive neural networks, PathNet does not allow the model to expand. Instead, it builds a huge fixed-size model composed of a neural network and paths between different layers of the neural network. While training a task, it selects the best combination of neural networks and paths for that particular task. Similar to progressive neural networks, selected parts are fixed to allow only inference and not training. Inspired by network pruning, PackNet prunes and re-trains the network iteratively to pack numerous tasks into a single huge model. This category has some drawbacks. When resources are limited, model expansion is infeasible. Also, some architecture-based methods require the number of tasks in advance to allocate the capacity for the tasks, which greatly reduces their practicality. This method restricts weights through the data distribution of old tasks. One data-based approach keeps a small amount of real samples from old tasks, and the other distills the knowledge from old data and imagines pseudo-data of old tasks later on. While training a new task, the data or pseudo-data is used to prevent weights from greatly deviating from their previous status. Gradient episodic memory (GEM) preserves a subset of real samples from previous tasks. Utilizing these real samples during optimization helps somewhat to constrain parameter gradients. Averaged-GEM (A-GEM) is a more efficient version of GEM which achieves the same or even better performance than the original GEM. Learning without forgetting minimizes the alteration of shared parameters by recording the outputs from old task modules on data from the new task before updating. Other approaches encode data from old tasks into a generative model system; one of these imitates the dual-memory system of the human brain, in that the model automatically decides which memory should be consolidated. Both methods replay pseudo-data of previous tasks using the generative model during training. d' investigates the performance of the episodic memory system on NLP problems. It distills the knowledge of previous tasks into episodic memory and replays it afterward. This work evaluates the method on two streams of tasks: question answering and text classification. A pre-trained LM can generate a coherent sequence of text given a context. Thus, we propose LAMOL, a method of training a single LM that learns not only to answer the question given the context but also to generate the context, the question, and the answer given a generation token. That is, in LAMOL, a model plays the role of both LM and QA model. Hence, answering questions and generating pseudo-old samples can both be done by a single model. During LLL, these pseudo-old samples are trained with new samples from new tasks to help mitigate catastrophic forgetting. Inspired by the protocol used by decaNLP, samples from the datasets we used are framed into a SQuAD-like scheme, which consists of context, question, and answer. Although the LM is simultaneously a QA model, the data format depends on the training objective. When training as a QA model, the LM learns to decode the answer after reading the context and question. On the other hand, when training as an LM, the LM learns to decode all three parts given a generation token. In addition to context, question, and answer, we add three special tokens. ANS: inserted between question and answer.
As the context and question are known during inference, decoding starts after inputting ANS. EOS: the last token of every example; decoding stops when EOS is encountered. GEN: the first token during pseudo-sample generation; decoding starts after inputting GEN. The data formats for QA and LM training are shown in Figure 2. Assume a stream of tasks {T_1, T_2, ...}, where the number of tasks may be unknown. Directly training the LM on these tasks sequentially results in catastrophic forgetting. Thus, before beginning training on a new task T_i, i > 1, the model first generates pseudo-samples T'_i by top-k sampling that represent the data distribution of previous tasks T_1, ..., T_{i-1}. Then, the LM trains on the mixture of T_i and T'_i. To balance the ratio between |T_i| and |T'_i|, the LM generates γ|T_i| pseudo-samples, where |T_i| denotes the number of samples in task T_i and γ is the sampling ratio. If a generated sample does not have exactly one ANS in it, the sample is discarded. This happens in only 0.5%-1% of generated samples. During training, each sample is formatted into both the QA format and the LM format. Then, in the same optimization step, both formats are fed into the LM to minimize the QA loss L_QA and the LM loss L_LM together. Overall, the loss is L = L_QA + λ L_LM, where λ is the weight of the LM loss. Using the same GEN token for all tasks is problematic when training on many tasks because the portion of old tasks decreases exponentially in theory. For instance, if γ = 0.01, then the portion of the first task when training the second task is about 1%, but is only about 0.01% when training the third task. This issue is definitely harmful to LLL. To mitigate this, we can choose to replace the GEN token with a task-specific token for each task to inform the model to generate pseudo-samples belonging to that specific task. Under this setup, all previous tasks have the same share of the γ|T_i| generated pseudo-samples; that is, when beginning training for the i-th task T_i, we generate γ|T_i|/(i-1) pseudo-samples for each of the i-1 previous tasks. We do not train on all datasets from both papers due to a lack of computational resources. For each task, there is a corresponding evaluation metric. Table 1 contains a summary of tasks, datasets, and metrics. Additional details are provided in Appendix A. Note that the score of any metric lies between 0 and 100%. All methods use the smallest pre-trained GPT-2 model as the LM. Each task is trained for nine epochs; greedy decoding is applied during inference. • LAMOL: In all experiments, we set k = 20 in top-k sampling and λ = 0.25 for the weight of the LM loss. LAMOL^γ_GEN denotes LAMOL with a sampling ratio of γ and the same GEN token used for all tasks. If the task-specific tokens are used, GEN is replaced by TASK. • Keep real data: Pseudo-samples are replaced by real samples from previous tasks. The quantity of real samples is equally split between previous tasks. This approach can be considered the upper bound of LAMOL; we denote it as LAMOL with real samples. (Table 4: Summary of averaged score on five tasks. The scores are reported as the averaged score over all tasks of the models after training on every task. The rightmost three columns - LAMOL with γ = 0.05 and γ = 0.2 of real samples from previous tasks, and Multitasked - are upper bounds for comparison. Best performance in boldface.) • Fine-tune: The model is directly fine-tuned on the stream of tasks, one after another. • Multitask learning: All tasks are trained simultaneously. Multitask learning is often seen as an upper bound of lifelong learning.
In addition, it is also used to determine whether forgetting is caused by a lack of model capacity. • Regularization-based methods: Online EWC and MAS are compared. They are chosen because they are more computationally efficient than SI and more memory efficient than IMM. Additionally, experiments in prior work show that MAS has better performance overall. • Gradient Episodic Memory (GEM): When training each task, we randomly sample data from previous tasks, in an amount equivalent to 5% of the current task size, into the memory. In each optimization step, the GEM approach retrieves all the data in the memory to calculate the gradients for the previous tasks. • Improved memory-based parameter adaptation (MBPA++): Sparse experience replay and local adaptation for LLL, as proposed in d'. We also re-implement the paper and report better scores using different hyperparameters. To establish a reference on the capability of the GPT-2 model on every dataset, we trained the model on each dataset independently. The results are shown in Table 2. We observe that the performance of the GPT-2 model is actually quite good, even beating the BERT-based model (d') on text classification datasets by a large margin. Thus, the GPT-2 model has the potential for superior LLL performance, as long as we can prevent catastrophic forgetting. For an initial understanding of the performance of all the methods and the effect of task order, we first conducted a small-scale experiment on three small datasets: SST, QA-SRL, and WOZ. We trained all but the multitasked method on all six permutations of the task order. The final score for each order was obtained by evaluating the model at the end of the training process. The results are shown in Table 3; we make several observations. Note that LAMOL with γ = 0 is not the same as Fine-tuned, as the LM loss is still optimized. • Fine-tuned, EWC, MAS, and LAMOL with γ = 0 show similar performance and are much worse than LAMOL with γ > 0. • LAMOL^0.2_GEN, our best-performing method, is only 1.8 percent away from Multitasked, which implies almost no forgetting during LLL. • The order of the tasks is crucial to the performance. For instance, the WOZ score drops significantly after training other tasks. Thus, if WOZ is not the last task, the performance is usually noticeably worse. • When using LAMOL, the performance on old tasks stays at almost the same level throughout the training process. When the sampling ratio γ is increased, the performance also increases, especially when increased from 0 to 0.05. • When γ = 0, adding task-specific tokens harms performance, because the model must fit additional special tokens that are useless. Adding task-specific tokens is also not helpful if γ = 0.2. We believe that 0.2 is enough for three tasks; thus task-specific tokens are redundant. However, when γ = 0.05, task-specific tokens are beneficial because the tokens are needed to help retain a substantial presence of the first task when training the third task. • We see that a better LLL method usually has a smaller standard deviation, which implies that it is affected less by task order. Adding task-specific tokens also has a stabilizing effect. The complete forgetting progress is illustrated in Appendix B. Clearly, Fine-tuned, EWC, MAS, LAMOL^0_GEN, and LAMOL^0_TASK reveal similar patterns. However, the proposed LAMOL with γ > 0 displays the ability to retain its learned knowledge. In the case of WOZ → SRL → SST, the WOZ score even increases after training the third task using LAMOL with γ = 0.2.
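For concreteness, the sketch below illustrates the training objective and pseudo-sample generation from the method section (L = L_QA + λ L_LM, top-k sampling with k = 20, discarding samples without exactly one ANS). It uses the Hugging Face transformers API; the token strings, helper names, and the unmasked QA loss are our simplifications, not the authors' exact implementation (which is at https://github.com/jojotenya/LAMOL).

```python
# Minimal sketch of one LAMOL optimization step and pseudo-sample generation.
# Assumption: special tokens are plain strings here, and the QA loss is left
# unmasked; the real implementation masks context/question tokens out of it.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # smallest GPT-2
model = GPT2LMHeadModel.from_pretrained("gpt2")
LAMBDA = 0.25  # weight of the LM loss

def lamol_step(qa_ids, lm_ids, optimizer):
    """Feed the same sample in QA format and LM format in one step."""
    qa_loss = model(qa_ids, labels=qa_ids).loss  # learn to decode the answer
    lm_loss = model(lm_ids, labels=lm_ids).loss  # learn to decode all parts
    loss = qa_loss + LAMBDA * lm_loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

def generate_pseudo_samples(n, gen_token="GEN", ans_token="ANS", max_len=256):
    """Top-k sample n sequences; keep only those with exactly one ANS."""
    prompt = tokenizer(gen_token, return_tensors="pt").input_ids
    kept = []
    while len(kept) < n:
        out = model.generate(prompt, do_sample=True, top_k=20,
                             max_length=max_len,
                             pad_token_id=tokenizer.eos_token_id)
        text = tokenizer.decode(out[0])
        if text.count(ans_token) == 1:  # discard malformed samples
            kept.append(text)
    return kept
```

Replacing gen_token with a per-task token is all that the task-specific-token extension changes in this sketch.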
Here, we train the following five tasks sequentially: SQuAD, WikiSQL, SST, QA-SRL, and WOZ. Given the limited computing resources, we explore only one task order: from large to small tasks, according to the number of training samples. As shown in Table 4, LAMOL outperforms all baselines by a large margin and on average approaches within 2-3% of the multitasked upper bound. Also, as expected, the performance of LAMOL improves as the sampling ratio γ increases and task-specific tokens are used. There is also a gap between our method and the method of keeping real samples. As shown in the table, using real samples is much more sample-efficient, as 5% of real samples beats 20% of pseudo-samples. This may be due to the less-than-ideal quality of the pseudo-data. The longer the paragraphs are, the harder it is for the model to create high-quality samples. After observing the samples generated when using task-specific tokens, we discover some "chaos". That is, some examples generated by the model do not exactly correspond to the task-specific token. This implies that the task-specific tokens are sometimes too weak to constrain the model; thus their influence is overshadowed by other tokens. We believe that solving this problem will bring the performance when using task-specific tokens closer to using real samples; however, we leave this as future work. Figure 3 illustrates the test scores of each method on each task throughout the training. We clearly see that when using LAMOL, the model remembers nearly perfectly. We make several observations: • When training SQuAD, QA-SRL has not been trained yet, but the score of QA-SRL is already around 40. Also, when training QA-SRL, the SQuAD score revives if the model has forgotten SQuAD. These two facts imply that SQuAD and SRL are similar tasks, such that the model is capable of transferring knowledge from one to the other. • If forward transfer exists, replaying pseudo-data also retains the forward transfer. That is, the QA-SRL score does not drop after training on WikiSQL and SST when LAMOL is used, but drops significantly for other methods. • The transferability between SQuAD and QA-SRL is expected. On the other hand, the transferability between WikiSQL and QA-SRL is quite surprising; the WikiSQL score improves considerably when training on QA-SRL for Fine-tuned and MAS after WikiSQL is forgotten during SST training. We compared the proposed method against the state-of-the-art MBPA++ proposed in d', both by citing their original numbers and also by reproducing their methods. We chose text classification as opposed to QA because we believe that an LM has more of a disadvantage in text classification than in QA. We compared with LAMOL^0.2_TASK due to its good performance and stability. Following their paper and testing our model on the same four kinds of task orders, the results are shown in Table 5. Our implementation results in much higher scores than the original ones. However, the proposed LAMOL^0.2_TASK still outperforms our implementation of MBPA++; the p-value of the comparison against LAMOL^0.2_TASK is smaller than 1%, which shows that there is a significant difference. Our implementation of MBPA++ is available at https://github.com/Daikon-Sun/EM-in-LLL. As the value of γ determines the performance of LLL, we conducted a medium-scale experiment to understand the influence of γ with and without task-specific tokens. In this experiment we used WikiSQL (blue), SST (orange), QA-SRL (green), and WOZ (red), in that training order. The results are shown in Figure 4.
Unsurprisingly, the less generation done by the model, the more likely the vanishing-distribution problem described in Section 3 occurs: the model forgets how to generate previous tasks, as the ratio of previous tasks in the total dataset decreases exponentially over time. Models using task-specific tokens mitigate this somewhat, as demonstrated in the first subgraph, where the performance of LAMOL^0.03_TASK is much better than that of LAMOL^0.03_GEN. In addition, the more samples the model generates, the better the overall performance of the model. However, this performance gain disappears when the sampling ratio γ is around 0.1 to 0.3. We propose LAMOL, a simple yet effective method for LLL based on language modeling. A single LM achieves LLL without additional model components and without keeping old examples. Moreover, any pre-trained LM can be used to leverage a large amount of unlabeled text to improve LLL. Finally, more tasks can be added whenever needed. Five tasks and their corresponding datasets from decaNLP: • Question Answering - Stanford Question Answering Dataset (SQuAD): This dataset consists of contexts, questions, and answers. The contexts are paragraphs from English Wikipedia, and the answers are spans from the corresponding paragraphs. For evaluation, we use the normalized F1 score (nF1), which strips out articles and punctuation as in prior work. Test datasets in this task are withheld by the host, so that users must upload models to their platform to generate the test results; due to this inconvenience and our many models, we elected to use the development set to test the metric. Note that we do not use the development set in the training process. The size of the training set is 87,599, while that of the development set is 10,570. • Semantic Parsing - WikiSQL: In this task, natural-language sentences are translated into structured SQL queries. WikiSQL provides logical forms along with natural language utterances. The exact match of the logical forms (lfEM) is used to evaluate the performance. The model outputs are required to match the SQL format; otherwise, they get no score. The size of the training set is 56,355; that of the test set is 15,878. • Sentiment Analysis - Stanford Sentiment Treebank (SST, binary version): This dataset consists of movie reviews with their answers, which are the binary options positive and negative. The exact match score is used as the metric. The size of the training set is 6,920; that of the test set is 1,821. • Semantic Role Labeling - QA-SRL: QA-SRL is a question answering form of the SRL task. The normalized F1 (nF1) score is used. The size of the training set is 6,414; that of the test set is 2,201. • Goal-Oriented Dialogue - English Wizard of Oz (WOZ): WOZ is a restaurant reservation task that provides a predefined ontology of information for helping an agent make reservations for customers. To keep track of the dialogue state, turn-based dialogue state EM (dsEM), which requires the model outputs to exactly follow the turn order of the conversation, is used for judgment. The size of the training set is 2,536; that of the test set is 1,646. Four text classification tasks and five datasets from MBPA++ (d'Autume et al., 2019): • News Classification - AGNews: News articles to be classified into 4 classes. • Sentiment Analysis - Yelp and Amazon: Customer reviews and ratings on Yelp and Amazon. Both datasets include 5 classes. • Wikipedia Article Classification - DBPedia: Articles and their corresponding categories on Wikipedia, including 14 classes.
• Questions and Answers Categorization - Yahoo: Questions and answers on the Yahoo! platform, including 10 classes. (Table 6: Summary of averaged score on reversed five tasks. The scores are reported as the averaged score over all tasks of the models after training on every task. The rightmost three columns - LAMOL with γ = 0.05 and γ = 0.2 of real samples from previous tasks. Best performance in boldface.) Normal Examples: squad1 The United States has been accused of a wide ranging war in Afghanistan since 9 / 11. During the campaign, US forces in Afghanistan were involved in an extensive air campaign. At least 1,600 American servicemen and women were killed, while more than 1,600 civilians were injured. After the US-led invasion of Afghanistan on 12 / 11 / 2001, an estimated 10,000 American soldiers were killed in combat. What were the targets included in the conflict? ans Afghanistan squad1 In 1849, the French army was forced to withdraw, and the French were finally expelled, although it was not until late November that the French recaptured most of their territories. French troops then reached Egypt. On 21 January 1852 (the year after he left), in Cairo, they captured Tripoli, Benghazi, Benghazi, and the eastern part of Libya. After Gaddafi's return to office, he established the Gaddafi regime. On 13 February 1856, the Gaddafi family relocated to Egypt. On 13 May 1857, the army was forced to withdraw from Libya, and the army returned to Benghazi. On whom did Gaddafi's army return to Benghazi? ans Gaddafi's family wikisql the table has columns rank, athlete, sport, nation, gold, silver and key words max, min, count, sum, avg, =, >, <, op, select, where, and, col, sst it's a movie that's not worth a look.is this review negative or positive? ans negative sst it's an ugly movie about an old man who's been given power by a new god to bring about a new era.is this review negative or positive? ans negative srl in a statement on his official page, juan valderrama said that the president's nominees would be appointed by the president and confirmed by the national senate.who might be appointed? ans president srl the first two films, released on august 21, 1999, are the first in a line of films that were made by two different people, each of whom worked in tandem on the film.what was released? ans the first two films Examples of Chaos: squad1 the table has columns rank (nation), gold, silver, bronze, total and key words max, min, count, sum, avg, =, >, <, op, select, where, and, col, table, caption, page, section, op, cond, question, agg, aggops, condops --which total has a gold smaller than 2, and has the lowest rank of bronze of 3?what is the translation from english to sql? ans select min total from table where gold < 2 and bronze = 3 squad1 In addition to the role played by the main characters, the main cast members are played by the same cast members who were in season one of the original series.is this review negative or positive? ans positive wikisql it's a shame that the story doesn't end with a good ending, but it certainly means the world to me.is this review negative or positive? ans positive wikisql on the same day, june 19, 2012, former heavyweight champion arthur lang defended his title against alexander green in the final of the world heavyweight championship.what did someone defend? ans his title sst at this point, he was only twenty -five years old at the time of his first appearance on the mcf tv series.who was something twenty -five years old?
ans he sst on paper, the movie's two most fascinating characters are two peas in the same class, both of whom have made an appearance in this film.who did something feature? ans two most fascinating characters srl a popular seasonal film in austrian films and the most influential in the history of cinema.is this review negative or positive? ans positive srl it's not a bad film, it's just not as good as you've seen it before.is this review negative or positive? ans negative Table 7: Examples generated by LAMOL with task-specific tokens. Annotations squad1, wikisql, sst, srl correspond to each task-specific token of SQuAD, WikiSQL, SST, and QA-SRL, respectively. ans is the ANS token that separates the question from the answer. The upper frame shows the normal situation whereas the lower frame shows generated contents that are inconsistent with their task-specific token.
Language modeling for lifelong language learning.
412
scitldr
Deep learning natural language processing models often use vector word embeddings, such as word2vec or GloVe, to represent words. A discrete sequence of words can be much more easily integrated with downstream neural layers if it is represented as a sequence of continuous vectors. Also, semantic relationships between words, learned from a text corpus, can be encoded in the relative configurations of the embedding vectors. However, storing and accessing embedding vectors for all words in a dictionary requires a large amount of space, and may strain systems with limited GPU memory. Here, we used approaches inspired by quantum computing to propose two related methods, word2ket and word2ketXS, for storing the word embedding matrix during training and inference in a highly efficient way. Our approach achieves a hundred-fold or more reduction in the space required to store the embeddings with almost no relative drop in accuracy in practical natural language processing tasks. Modern deep learning approaches for natural language processing (NLP) often rely on vector representations of words to convert the discrete space of human language into a continuous space best suited for further processing through a neural network. For a language with vocabulary of size d, a simple way to achieve this mapping is to use one-hot representation - each word is mapped to its own row of a d × d identity matrix. There is no need to actually store the identity matrix in memory; it is trivial to reconstruct the row from the word identifier. Word embedding approaches such as word2vec or GloVe use instead vectors of dimensionality p much smaller than d to represent words, but the vectors are not necessarily extremely sparse nor mutually orthogonal. This has two benefits: the embeddings can be trained on large text corpora to capture the semantic relationship between words, and the downstream neural network layers only need to be of width proportional to p, not d, to accept a word or a sentence. We do, however, need to explicitly store the d × p embedding matrix in GPU memory for efficient access during training and inference. Vocabulary sizes can reach d = 10^5 or 10^6, and the dimensionality of the embeddings used in current systems ranges from p = 300 to p = 1024. The d × p embedding matrix thus becomes a substantial, often dominating, part of the parameter space of a learning model. In classical computing, information is stored in bits - a single bit represents an element from the set B = {0, 1}; it can be in one of two possible states. A quantum equivalent of a bit, a qubit, is fully described by a single two-dimensional complex unit-norm vector, that is, an element from the set C^2. A state of an n-qubit quantum register corresponds to a vector in C^(2^n). To have exponential dimensionality of the state space, though, the qubits in the register have to be interconnected so that their states can become entangled; a set of all possible states of n completely separated, independent qubits can be fully represented by C^(2n) instead of C^(2^n). Entanglement is a purely quantum phenomenon - we can make quantum bits interconnected, so that a state of a two-qubit system cannot be decomposed into states of individual qubits. We do not see entanglement in classical bits, which are always independent - we can describe a byte by separately listing the state of each of the eight bits.
We can, however, approximate a quantum register classically - store vectors of size m using O(log m) space, at the cost of losing the ability to express all possible m-dimensional vectors that an actual O(log m)-qubit quantum register would be able to represent. As we show in this paper, the loss of representation power does not have a significant impact on NLP machine learning algorithms that use the approximation approaches to store and manipulate the high-dimensional word embedding matrix. Here, we used approaches inspired by quantum computing to propose two related methods, word2ket and word2ketXS, for storing the word embedding matrix during training and inference in a highly efficient way [1]. The first method operates independently on the embedding of each word, allowing for more efficient processing, while the second method operates jointly on all word embeddings, offering even higher efficiency in storing the embedding matrix, at the cost of more complex processing. Empirical evidence from three NLP tasks shows that the new word2ket embeddings offer a high space saving rate at little cost in terms of accuracy of the downstream NLP model. Consider two separable [2] Hilbert spaces V and W. A tensor product space of V and W, denoted as V ⊗ W, is a separable Hilbert space H constructed using ordered pairs v ⊗ w, where v ∈ V and w ∈ W. In the tensor product space, addition and multiplication have the following properties: (v + v′) ⊗ w = v ⊗ w + v′ ⊗ w, v ⊗ (w + w′) = v ⊗ w + v ⊗ w′, and c(v ⊗ w) = (cv) ⊗ w = v ⊗ (cw) (eq. 1). The inner product between v ⊗ w and v′ ⊗ w′ is defined as a product of the individual inner products, ⟨v ⊗ w, v′ ⊗ w′⟩ = ⟨v, v′⟩⟨w, w′⟩ (eq. 2). It immediately follows that ||v ⊗ w|| = ||v|| ||w||; in particular, a tensor product of two unit-norm vectors, from V and W, respectively, is a unit-norm vector in V ⊗ W. The Hilbert space V ⊗ W is a space of equivalence classes of pairs v ⊗ w; for example, (cv) ⊗ w and v ⊗ (cw) are equivalent ways to write the same vector. A vector in a tensor product space is often simply called a tensor. Let {ψ_j} and {φ_k} be orthonormal basis sets in V and W, respectively. From eq. 1 and 2 we can see that ⟨ψ_j ⊗ φ_k, ψ_j′ ⊗ φ_k′⟩ = ⟨ψ_j, ψ_j′⟩⟨φ_k, φ_k′⟩ = δ_{j−j′} δ_{k−k′}, where δ_z is the Kronecker delta, equal to one at z = 0 and zero elsewhere. That is, the set {ψ_j ⊗ φ_k}_{jk} forms an orthonormal basis in V ⊗ W, with coefficients indexed by pairs jk and numerically equal to the products of the corresponding coefficients in V and W. We can add any pair of vectors in the new space by adding the coefficients. The dimensionality of V ⊗ W is the product of the dimensionalities of V and W. We can create tensor product spaces by more than one application of the tensor product, H = U ⊗ V ⊗ W, with arbitrary bracketing, since the tensor product is associative. A tensor product space of the form V_1 ⊗ ... ⊗ V_n is said to have tensor order [3] of n. [1] In Dirac notation, popular in quantum mechanics and quantum computing, a vector u ∈ C^(2^n) is written as |u⟩ and called a ket. [2] That is, with countable orthonormal basis. [3] Note that some sources alternatively call n a degree or a rank of a tensor. Here, we use tensor rank to refer to a property similar to matrix rank; see below.
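As a quick numerical check of these definitions, the snippet below uses numpy's Kronecker product as the coordinate form of the tensor product; the reshape-and-matrix-rank trick in the last lines anticipates the notion of tensor rank discussed next.

```python
# Tensor-product facts above, checked numerically with np.kron.
import numpy as np

v, v2 = np.array([1.0, 2.0]), np.array([0.0, 3.0])
w = np.array([4.0, 5.0])

# Bilinearity (eq. 1): (v + v') ⊗ w == v ⊗ w + v' ⊗ w
assert np.allclose(np.kron(v + v2, w), np.kron(v, w) + np.kron(v2, w))

# Inner product rule (eq. 2): <v ⊗ w, v' ⊗ w'> == <v, v'> <w, w'>
assert np.isclose(np.kron(v, w) @ np.kron(v2, w), (v @ v2) * (w @ w))

# An entangled vector: (ψ0 ⊗ φ0 + ψ1 ⊗ φ1)/sqrt(2). Reshaping a vector in
# V ⊗ W into a matrix makes its matrix rank equal to its tensor rank.
e0, e1 = np.eye(2)
entangled = (np.kron(e0, e0) + np.kron(e1, e1)) / np.sqrt(2)
print(np.linalg.matrix_rank(entangled.reshape(2, 2)))  # -> 2: not simple
```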
The tensor product space contains not only vectors of the form v ⊗ w, but also their linear combinations, some of which cannot be expressed as φ ⊗ ψ. For example, (ψ_1 ⊗ φ_1 + ψ_1 ⊗ φ_2)/√2 can be decomposed as ψ_1 ⊗ (φ_1 + φ_2)/√2, but (ψ_1 ⊗ φ_1 + ψ_2 ⊗ φ_2)/√2 cannot; no matter what we choose as coefficients a, b, c, d in (aψ_1 + bψ_2) ⊗ (cφ_1 + dφ_2), we have a contradiction, since we require ac = 1/√2, that is, a ≠ 0, c ≠ 0, and similarly bd = 1/√2, that is, b ≠ 0, d ≠ 0, yet we also require ad = bc = 0, which is incompatible with a, b, c, d ≠ 0. For tensor product spaces of order n, a vector that can be written as v_1 ⊗ ... ⊗ v_n is called a simple tensor, and the tensor rank of a vector is the smallest number of simple tensors that sum up to it; the example vector above is a tensor of rank 2. Tensors with rank greater than one are called entangled. The maximum rank of a tensor in a tensor product space of order higher than two is not known in general (Buczyński & Landsberg). A p-dimensional word embedding model involving a d-token vocabulary is a mapping f: {1, ..., d} → R^p; that is, it maps word identifiers into a p-dimensional real Hilbert space, an inner product space with the standard inner product ⟨·, ·⟩ leading to the L2 norm. Function f is trained to capture semantic information from the language corpus it is trained on; for example, two words i, j with ⟨f(i), f(j)⟩ ∼ 0 are expected to be semantically unrelated. In practical implementations, we represent f as a collection of vectors f_i ∈ R^p indexed by i, typically in the form of a d × p matrix M, with embeddings of individual words as rows. We propose to represent the embedding v ∈ R^p of each single word as an entangled tensor. Specifically, in word2ket, we use a tensor of rank r and order n of the form v = Σ_{k=1}^{r} v_{1k} ⊗ v_{2k} ⊗ ... ⊗ v_{nk}, where v_{jk} ∈ R^q. The resulting vector v has dimension p = q^n, but takes rnq = O(rq log_q p) space. We use q ≥ 4; it does not make sense to reduce it to q = 2, since a tensor product of two vectors in R^2 takes the same space as a vector in R^4, but not every vector in R^4 can be expressed as a rank-one tensor in R^2 ⊗ R^2. If the downstream computation involving the word embedding vectors is limited to inner products of embedding vectors, there is no need to explicitly calculate the q^n-dimensional vectors. Indeed, we have (see eq. 2) ⟨v, w⟩ = Σ_k Σ_{k′} Π_{j=1}^{n} ⟨v_{jk}, w_{jk′}⟩. Thus, the calculation of the inner product between two p-dimensional word embeddings, v and w, represented via word2ket takes O(rq log_q p) time and O(1) additional space. In most applications, a small number of embedding vectors do need to be made available for processing through subsequent neural network layers - for example, embeddings of all words in all sentences in a batch. For a batch consisting of b words, the total space requirement is O(bp + rq log_q p), instead of O(rp) in traditional word embeddings. Reconstructing a single p-dimensional word embedding vector from a tensor of rank r and order n takes O(rn log_2 p) arithmetic operations. To facilitate parallel processing, we arrange the order-n tensor product space into a balanced tensor product tree (see Figure 1), with the underlying vectors v_{jk} as leaves and v as root. For example, for n = 4, each rank term is bracketed as (v_{1k} ⊗ v_{2k}) ⊗ (v_{3k} ⊗ v_{4k}). Instead of performing n multiplications sequentially, we can perform them in parallel along branches of the tree, reducing the length of the sequential processing to O(log_2 n). Typically, word embeddings are trained using gradient descent. The proposed embedding representation involves only differentiable arithmetic operations, so gradients with respect to individual elements of vectors v_{jk} can always be defined. With the balanced tree structure, the word2ket representation can be seen as a sequence of O(log_2 n) linear layers with linear activation functions, where n is already small.
Still, the gradient of the embedding vector v with respect to an underlying tunable parameter vector v_{lk} involves the products ∂(⊗_{j=1}^{n} v_{jk})/∂v_{lk} = ⊗_{j≠l} v_{jk}, leading to a potentially high Lipschitz constant of the gradient, which may harm training. To alleviate this problem, at each node in the balanced tensor product tree we use LayerNorm. Let A: V → U be a linear operator that maps vectors from Hilbert space V into vectors in Hilbert space U; that is, for v, v′ ∈ V and α, β ∈ R, the vector A(αv + βv′) = αAv + βAv′ is a member of U. Let us also define a linear operator B: W → Y. A mapping A ⊗ B is a linear operator that maps vectors from V ⊗ W into vectors in U ⊗ Y. We define A ⊗ B: V ⊗ W → U ⊗ Y through its action on simple vectors, (A ⊗ B)(ψ_j ⊗ φ_k) = (Aψ_j) ⊗ (Bφ_k) for ψ_j ∈ V and φ_k ∈ W, and through linearity. Same as for vectors, the tensor product of linear operators is bilinear. In the finite-dimensional case, for an n × n matrix representation of linear operator A and an m × m matrix representing B, we can represent A ⊗ B as an mn × mn matrix composed of blocks a_{jk}B. We can see a p-dimensional word embedding model involving a d-token vocabulary as a linear operator F: R^d → R^p that maps the one-hot vector corresponding to a word into the corresponding word embedding vector. Specifically, if e_i is the i-th basis vector in R^d representing the i-th word in the vocabulary, and v_i is the embedding vector for that word in R^p, then the word embedding linear operator is F = Σ_{i=1}^{d} v_i e_i^T. If we store the word embeddings in a d × p matrix M, we can then interpret that matrix's transpose, M^T, as the matrix representation of the linear operator F. Consider q and t such that q^n = p and t^n = d, and a series of n linear operators F_{jk}: R^t → R^q. In word2ketXS, we represent the word embedding operator as F = Σ_{k=1}^{r} F_{1k} ⊗ F_{2k} ⊗ ... ⊗ F_{nk}, where each F_{jk} can be represented by a q × t matrix. The resulting matrix F has dimension p × d, but takes rnqt = O(rqt log_q p) space. Intuitively, the additional space efficiency comes from applying tensor product-based exponential compression not only horizontally, individually to each row, but horizontally and vertically at the same time, to the whole embedding matrix. We use the same balanced binary tree structure as in word2ket. To avoid reconstructing the full embedding matrix each time a small number of rows is needed for a multiplication by a weight matrix in the downstream layer of the neural NLP model, which would eliminate any space saving, we use lazy tensors. If A is an m × n matrix and B is a p × q matrix, then the ij-th entry of A ⊗ B is equal to (A ⊗ B)_{ij} = A_{⌈i/p⌉, ⌈j/q⌉} B_{((i−1) mod p)+1, ((j−1) mod q)+1}. As we can see, reconstructing a row of the full embedding matrix involves only single rows of the underlying matrices, and can be done efficiently using lazy tensors.
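Two of the operations above are easy to verify numerically. First, a sketch of the word2ket inner product: two embeddings stored as rank-r, order-n tensors are compared without materializing the full q^n-dimensional vectors (array shapes and names here are our choices, not the reference implementation):

```python
# word2ket inner product from the small factors only (eq. 2).
import numpy as np

def ket_inner(v_parts, w_parts):
    """v_parts, w_parts: arrays of shape (r, n, q) holding the small v_jk."""
    total = 0.0
    for vk in v_parts:              # rank terms of v
        for wk in w_parts:          # rank terms of w
            # inner product of two simple tensors = product of small ones
            total += np.prod([vj @ wj for vj, wj in zip(vk, wk)])
    return total

rng = np.random.default_rng(0)
r, n, q = 2, 3, 4
v_parts, w_parts = rng.normal(size=(r, n, q)), rng.normal(size=(r, n, q))

def reconstruct(parts):             # explicit q**n vector, for checking only
    out = np.zeros(q ** n)
    for pk in parts:
        full = pk[0]
        for pj in pk[1:]:
            full = np.kron(full, pj)
        out += full
    return out

assert np.isclose(ket_inner(v_parts, w_parts),
                  reconstruct(v_parts) @ reconstruct(w_parts))
```

Second, the lazy-row reconstruction behind word2ketXS: a single row of a Kronecker product A ⊗ B is itself a Kronecker product of one row of A and one row of B (0-based indexing below, continuing the snippet above), so one word's embedding never requires the full matrix:

```python
def kron_row(A, B, i):
    """Row i (0-based) of np.kron(A, B), built from single rows of A and B."""
    p = B.shape[0]
    return np.kron(A[i // p], B[i % p])

A, B = rng.normal(size=(3, 2)), rng.normal(size=(4, 5))
assert np.allclose(kron_row(A, B, 7), np.kron(A, B)[7])
```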
In both the encoder and the decoder we used internal layers with dimensionality of 256 and dropout rate of 0.2, and trained the models, starting from random weights and embeddings, for 20 epochs. We used the validation set to select the best model epoch, and reported on a separate test set. We used Rouge 1, 2, and L scores . In addition to testing the regular dimensionality of 256, we also explored 400, and 8000, but kept the dimensionality of other layers constant. The in Table 1 show that word2ket can achieve 16-fold reduction in trainable parameters at the cost of a drop of Rouge scores by about 2 points. As expected, word2ketXS is much more spaceefficient, matching the scores of word2ket while allowing for 34,000 fold reduction in trainable parameters. More importantly, it offers over 100-fold space reduction while reducing the Rouge scores by only about 0.5. Thus, in the evaluation on the remaining two NLP tasks we focused on word2ketXS. The second task we explored is German-English machine translation, using the IWSLT2014 (DE-EN) dataset of TED and TEDx talks as preprocessed in . We used the same sequence-to-sequence model as in GIGAWORD summarization task above. We used BLEU score to measure test set performance. We explored embedding dimensions of 100, 256, 400, 1000, and 8000 by using different values for the tensor order and the dimensions of the underlying matrices F jk. The in Table 2 show a drop of about 1 point on the BLEU scale for 100-fold reduction in the parameter space, with drops of 0.5 and 1.5 for lower and higher space saving rates, respectively. The third task we used involves the Stanford Question Answering Dataset (SQuAD) dataset. We used the DrQA's model , a 3-layer bidirectional LSTMs with 128 hidden units for both paragraph and question encoding. We trained the model for 40 epochs, starting from random weights and embeddings, and reported the test set F1 score. DrQA uses an embedding with vocabulary size of 118,655 and embedding dimensionality of 300. As the embedding matrix is larger, we can increase the tensor order in word2ketXS to four, which allows for much higher space savings. Results in Table 3 show a 0.5 point drop in F1 score with 1000-fold saving of the parameter space required to store the embeddings. For order-4 tensor word2ketXS, we see almost 10 5 -fold space saving rate, at the cost of a drop of F1 by less than two points, that is, by a relative drop of less than 3%. We also investigated the computational overhead introduced by the word2ketXS embeddings. For tensors order 2, the training time for 40 epochs increased from 5.8 for the model using regular embedding to 7.4 hours for the word2ketXS-based model. Using tensors of order 4, to gain additional space savings, increased the time to 9 hours. Each run was executed on a single NVIDIA Tesla V100 GPU card, on a machine with 2 Intel Xeon Gold 6146 CPUs and 384 GB RAM. While the training time increased, as shown in Fig. 3, the dynamics of model training remains largely unchanged. The of the experiments show substantial decreases in the memory footprint of the word embedding part of the model, used in the input layers of the encoder and decoder of sequence-tosequence models. These also have other parameters, including weight matrices in the intermediate layers, as well as the matrix of word probabilities prior to the last, softmax activation, that are not compressed by our method. During inference, embedding and other layers dominate the memory footprint of the model. 
Recent successful transformer models like BERT, GPT-2, RoBERTa, and Sparse Transformers require hundreds of millions of parameters to work. In RoBERTa-BASE, 30% of the parameters belong to the word embeddings. During training, there is an additional memory need to store activations in the forward phase in all layers, to make them available for calculating the gradients in the backward phase. These often dominate the memory footprint during training, but one can decrease the memory required for storing these activations. (Figure 2: Dynamics of the test-set F1 score on the SQuAD dataset using the DrQA model with different embeddings: rank-2 order-2 word2ketXS, rank-1 order-4 word2ketXS, and the regular embedding.) Given the current hardware limitations for training and inference, it is crucial to be able to decrease the amount of memory these networks require to work. A number of approaches have been used in lowering the space requirements for word embeddings. Dictionary learning and word embedding clustering approaches have been proposed. Bit encoding has also been proposed. An optimized method for uniform quantization of floating point numbers in the embedding matrix has been proposed recently. To compress a model for low-memory inference, pruning and quantization have been used for lowering the number of parameters. For low-memory training, sparsity and low-numerical-precision approaches have been proposed. In approximating matrices in general, Fourier-based approximation methods have also been used. None of these approaches can match the space saving rates achieved by word2ketXS. The methods based on bit encoding are limited to a space saving rate of at most 32 for 32-bit architectures. Other methods, for example based on parameter sharing or on PCA, can offer higher saving rates, but their storage requirement is still bounded from below by d + p, the vocabulary size plus the embedding dimensionality. In more distantly related work, tensor product spaces have been used in studying document embeddings, by using sketching of a tensor representing n-grams in the document.
We use ideas from quantum computing to propose word embeddings that utilize far fewer trainable parameters.
413
scitldr
One of the distinguishing aspects of human language is its compositionality, which allows us to describe complex environments with a limited vocabulary. Previously, it has been shown that neural network agents can learn to communicate in a highly structured, possibly compositional language based on disentangled input (e.g., hand-engineered features). Humans, however, do not learn to communicate based on well-summarized features. In this work, we train neural agents to simultaneously develop visual perception from raw image pixels and learn to communicate with a sequence of discrete symbols. The agents play an image description game where the image contains factors such as colors and shapes. We train the agents using the obverter technique, where an agent introspects to generate messages that maximize its own understanding. Through qualitative analysis, visualization, and a zero-shot test, we show that the agents can develop, out of raw image pixels, a language with compositional properties, given a proper pressure from the environment.

One of the key requirements for artificial general intelligence (AGI) to thrive in the real world is its ability to communicate with humans in natural language. Natural language processing (NLP) has been an active field of research for a long time, and the introduction of deep learning (BID18) enabled great progress in NLP tasks such as translation, image captioning, text generation and visual question answering (BID13; BID10; BID19; BID0). However, training machines in a supervised manner with a large dataset has its limits when it comes to communication. Supervised methods are effective for capturing statistical associations between discrete symbols (i.e., words, letters). The essence of communication is more than just predicting the most likely word to come next; it is a means to coordinate with others and potentially achieve a common goal (BID1; BID7).

An alternative path to teaching machines the art of communication is to give them a specific task and encourage them to learn how to communicate on their own. This approach encourages the agents to use languages grounded to task-related entities as well as to communicate with other agents, which is one of the ways humans learn to communicate (BID5). Recently, there have been several notable works that demonstrated the emergence of communication between neural network agents. Even though each work produced very interesting results of its own, in all cases communication was achieved either with a single discrete symbol (as opposed to a sequence of discrete symbols) (BID8; BID17) or via a continuous value (BID12). Not only is human communication un-differentiable, but using a single discrete symbol is also quite far from natural language communication. One of the key features of human language is its compositional nature: the meaning of a complex expression is determined by its structure and the meanings of its constituents (BID9). More recently, BID22 and BID16 trained agents to communicate in grounded, compositional language. In both studies, however, the inputs given to the agents were hand-engineered features (disentangled input) rather than the raw perceptual signals that we receive as humans. In this work, we train neural agents to simultaneously develop visual perception from raw image pixels and learn to communicate with a sequence of discrete symbols. Unlike previous works, our setup poses greater challenges to the agents, since visual understanding and discrete communication have to be induced from scratch in parallel.
We place the agents in a two-person image description game, where images contain objects of various colors and shapes. Inspired by the pioneering work of BID3, we employ a communication philosophy named obverter to train the agents. Having its root in the theory of mind and human language development (BID21), the obverter technique motivates an agent to search over messages and generate the ones that maximize its own understanding. The contributions of our work can be summarized as follows:

• We train artificial agents to learn to disentangle raw image pixels and communicate in compositional language at the same time.
• We describe how the obverter technique, a differentiable learning algorithm for discrete communication, can be employed in a communication game with raw visual input.
• We visualize how the agents perceive the images and show that they learn to disentangle color and shape without any explicit supervision other than the communication one.
• Experiments suggest that the agents can develop, out of raw image input, a language with compositional properties, given a proper pressure from the environment (i.e., the image description game).

Finally, while our exposition follows a multi-agent perspective, it is also possible to interpret our results in the single-agent setting. We are effectively learning a neural network that is able to learn disentangled compositional representations of visual scenes, without any supervision. Subject to the constraints imposed by their environment, our agents learn disentangled concepts, and how to compose these to form new concepts. This is an important milestone on the path to AGI.

2.1 THE TWO-PERSON IMAGE DESCRIPTION GAME

Figure 1: The two-person image description game. The speaker observes an image and generates a message (i.e., a sequence of discrete symbols). The listener, after observing a separate image and the message, must correctly decide whether it is seeing the same object as the speaker (left side; output 1) or not (right side; output 0).

We choose a straightforward image description game with two factors (color and shape) so that we can perform extensive analysis on the outcome confidently, based on full control of the experiment. In a single round of the two-person image description game, one agent becomes the speaker and the other the listener. The speaker is given a random image and generates a message to describe it. The listener is also given a random image, possibly the same image as the speaker's. After hearing the message from the speaker, the listener must decide whether it is seeing the same object as the speaker (Figure 1). Note that an image is the raw pixels given to the agents, and an object is the thing described by the image; two different images can therefore depict the same object. In each round the agents exchange the roles of speaker and listener. We generated synthetic images using the Mujoco physics simulator. Example images are shown in FIG0. Each image depicts a single object with a specific color and shape in 128×128 resolution. There are eight colors (blue, red, white, gray, yellow, green, cyan, magenta) and five shapes (box, sphere, cylinder, capsule, ellipsoid), giving us 40 combinations in total. We generated 100 variations for each of the 40 object types.
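As a concrete illustration, a single round of the game can be sketched as follows. This is hypothetical glue code: dataset, speaker, and listener stand in for objects with the interfaces the text implies, and the 50/50 same/different split here is replaced later in the text by a more careful mini-batch composition.

    import random

    def play_round(speaker, listener, dataset):
        """One round of the two-person image description game."""
        same = random.random() < 0.5
        speaker_type = random.choice(dataset.object_types)
        listener_type = speaker_type if same else random.choice(
            [t for t in dataset.object_types if t != speaker_type])

        speaker_img = dataset.sample(speaker_type)    # raw 128x128 pixels
        listener_img = dataset.sample(listener_type)  # likely a different image,
                                                      # even for the same type
        message = speaker.speak(speaker_img)          # sequence of discrete symbols
        prediction = listener.listen(listener_img, message)  # sigmoid output
        label = 1.0 if same else 0.0
        return prediction, label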
Note that the position of the object varies in each image, changing the object's size and orientation. Therefore, even if the speaker and the listener are given the same object type, the actual images are very likely to be different, preventing the agents from using pixel-specific, rather than object-related, information to win the game.

Figure 3: Agent model architecture. The visual module processes the image, and the language module generates or consumes messages. The decision module accepts embeddings from both modules and produces the output. The solid arrows indicate modifying the output from the previous layer. The dotted arrows indicate copying the output from the previous layer.

Aside from using disentangled input, another strong assumption made in previous works (BID3; BID22) was that the agents had access to the true intention of the speaker. In BID3, the listener was trained to modify its RNN hidden vector to be as close to the speaker's intention (meaning vector; please see TAB7 in Appendix A) as possible. In BID22, each agent had an auxiliary task of predicting the goals of all other agents. In both cases, the true meaning/goal vector was used to update the model parameters, exposing the disentangled information to the agents. In order to relax this assumption and encourage the agents to develop communication with minimal guidance, our model uses no other signal than whether the listener made a correct decision.

Figure 3 depicts the agent model architecture. We use a convolutional neural network followed by a fully-connected layer to process the image. A single RNN, specifically gated recurrent units (GRU), is used for both generating and consuming messages (message generation using the obverter strategy is described in the next section). When consuming a message, the image embedding from the visual module and the message embedding from the language module are concatenated and processed by further fully-connected layers (i.e., the decision module) with a sigmoid output ŷ, 0 meaning "My (listener) image is different from the speaker's" and 1 meaning "My image is the same as the speaker's". Further details of the model architecture (e.g., the number of layers) are described in Appendix C; a sketch of the forward pass is given below.
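Here is one way the architecture of Figure 3 might look in PyTorch (the authors used TensorFlow and Sonnet; see Appendix C). Layer sizes follow the appendix where stated; the stride schedule is not fully specified in the text, so the one below is chosen only to keep the 128×128 VALID-padded feature maps well-defined, and the learned symbol embedding is our simplification.

    import torch
    import torch.nn as nn

    class Agent(nn.Module):
        """Sketch of the agent in Figure 3 (sizes follow Appendix C)."""
        def __init__(self, vocab_size=5, gru_hidden=64, img_embed=256):
            super().__init__()
            # Visual module: 8 conv layers (32 filters, kernel 3, no bias,
            # batch norm, ReLU, VALID padding) followed by a 256-d FC layer.
            layers, in_ch = [], 3
            for s in [2, 2, 1, 2, 1, 2, 1, 2]:   # assumed stride schedule
                layers += [nn.Conv2d(in_ch, 32, 3, stride=s, bias=False),
                           nn.BatchNorm2d(32), nn.ReLU()]
                in_ch = 32
            self.visual = nn.Sequential(*layers, nn.Flatten(),
                                        nn.LazyLinear(img_embed), nn.ReLU())
            # Language module: one GRU shared for speaking and listening.
            self.symbol_embed = nn.Embedding(vocab_size, gru_hidden)
            self.gru = nn.GRU(gru_hidden, gru_hidden, batch_first=True)
            # Decision module: two-layer MLP ending in a sigmoid.
            self.decision = nn.Sequential(
                nn.Linear(img_embed + gru_hidden, 128), nn.ReLU(),
                nn.Linear(128, 1), nn.Sigmoid())

        def forward(self, image, message):
            v = self.visual(image)                       # (B, img_embed)
            _, h = self.gru(self.symbol_embed(message))  # h: (1, B, gru_hidden)
            return self.decision(torch.cat([v, h[0]], dim=-1)).squeeze(-1)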
Although our work is inspired by BID3 (see Appendix A for a description), the obverter technique is a general message generation philosophy used and discussed in a number of communication and language evolution studies (BID11; BID14), and it has its root in the theory of mind. Theory of mind observes that a human has direct access only to one's own mind, not to others'. We therefore typically assume that the minds of others are analogous to ours, and this assumption is reflected in the functional use of language (BID5). For example, if we want to convey a piece of information to the listener, it is best to speak in a way that maximizes the listener's understanding. However, since we cannot directly observe the listener's state of mind, we cannot exactly solve this optimization problem. Therefore we posit that the listener's mind operates in a similar manner to ours, and speak in a way that maximizes our own understanding, thus approximately solving the optimization problem. This is exactly what the obverter technique tries to achieve.

When an agent becomes the teacher (i.e., speaker), the model parameters are fixed. The image is converted to an embedding via the visual module. After initializing its RNN hidden layer to zeros, the teacher at each timestep evaluates ŷ for all possible symbols and selects the one that maximizes ŷ. The RNN hidden vector induced by the chosen symbol is used in the next timestep. This is repeated until ŷ exceeds a predefined threshold, or the maximum message length is reached (see Appendix D for the algorithm). The teacher thus, through introspection, greedily selects characters at each timestep to generate a message such that the consistency between the image and the message is as clear to itself as possible.

When an agent becomes the learner (i.e., listener), its parameters are updated by back-propagating the cross-entropy loss between its output ŷ and the true label y. The agents must therefore learn to communicate only from the true label indicating whether the teacher and the learner are seeing the same object. We remind the reader that only the learner's RNN parameters are updated, and the teacher uses its fixed RNN. Therefore an agent uses only one RNN for both speaking and listening, guaranteeing self-consistency (see Appendix B for a detailed comparison between the obverter technique and the RL-based approach). Furthermore, because the teacher's parameters are fixed, message generation can easily be extended to be more exploratory. Although in this work we deterministically select a character at each timestep, one can, for example, sample characters proportionally to ŷ and still use gradient descent to train the agents. Using a more exploratory message generation strategy could help us discover a more optimal communication language when dealing with complex tasks.

Another feature of the obverter technique is that it observes the principle of least effort. Because the teacher stops generating symbols as soon as ŷ reaches the threshold, it does not waste any more effort trying to perfect the message. The same principle was implemented in one way or another in previous works, such as choosing the shortest among the generated strings (BID14) or imposing a small cost for generating a message (BID22).
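A sketch of the greedy generation loop just described, written against the Agent sketch above; the termination threshold is illustrative (the actual algorithm is in Appendix D).

    import torch

    @torch.no_grad()
    def obverter_speak(agent, image, vocab_size=5, max_len=20, threshold=0.95):
        """Greedily pick, at each timestep, the symbol that maximizes the
        speaker's own consistency score y-hat; stop once y-hat exceeds the
        threshold or the maximum message length is reached."""
        agent.eval()  # teacher parameters are fixed while speaking
        v = agent.visual(image.unsqueeze(0))           # (1, img_embed)
        h = torch.zeros(1, 1, agent.gru.hidden_size)   # RNN state starts at zero
        message = []
        for _ in range(max_len):
            best_sym, best_score, best_h = None, -1.0, None
            for sym in range(vocab_size):
                inp = agent.symbol_embed(torch.tensor([[sym]]))
                _, h_new = agent.gru(inp, h)
                score = agent.decision(torch.cat([v, h_new[0]], dim=-1)).item()
                if score > best_score:
                    best_sym, best_score, best_h = sym, score, h_new
            message.append(best_sym)
            h = best_h
            if best_score > threshold:
                break
        return message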
During the early stages of research, we noticed that randomly sampling object pairs (one for the teacher, one for the learner) led to agents focusing only on colors and ignoring shapes. When the teacher's object is fixed, there are 40 (8 colors × 5 shapes) possibilities on the learner's side. If the teacher only talks about the color of the object, the learner can decide correctly for 36 out of 40 possible object types. The learner makes incorrect decisions only when the teacher and the learner are given objects of the same color but different shapes, resulting in 90% accuracy on average. This is actually what we observed: the accuracy plateaued between 0.9 and 0.92 during training, and the messages were more or less the same for objects of the same color. Therefore, when constructing a mini-batch of images, we set 25% to be object pairs of the same color and shape, 30% the same shape but different colors, and 20% the same color but different shapes. The remaining 25% of object pairs were picked randomly.

Vocabulary size (i.e., the number of unique symbols) and the maximum message length were also influential to the final outcome. We noticed that a larger vocabulary and a longer message length helped the agents achieve a high communication accuracy more easily, but the resulting messages were more challenging to analyze for compositional patterns. In all our experiments we used 5 and 20, respectively, for the vocabulary size and the maximum message length, similar to what BID3 used. This suggests that the environment plays as important a role as the model architecture, if not more, in the emergence of complex communication, as discussed by previous studies (BID15; BID4; BID16), and should be a main consideration for future efforts. Further details regarding hyperparameters are described in Appendix E.

In this section, we first study the convergence behavior during the training phase. Then we analyze the language developed by the agents in terms of compositionality. As stated in the introduction, in a compositional language the meaning of a complex expression is determined by its structure and the meanings of its constituents. With this definition in mind, we focus on two aspects of the inter-agent communication to evaluate its compositional properties: the structure (i.e., grammar) of the communication, and zero-shot performance (i.e., generalizing to novel stimuli). These two aspects, which are both necessary conditions for any language to be considered compositional, have been used by previous works to study the compositional nature of artificial communication (BID3; BID22; BID16). To evaluate the structure of the messages, we study the evolution of the communication as training proceeds and try to derive a grammar for expressing colors and shapes. To evaluate the zero-shot capabilities, we test whether the agents can compose consistent messages for objects they have not seen during training. Moreover, we visualize the image embeddings from the visual modules of both agents to understand how they are recognizing colors and shapes; the results, for a better view of the figures, are provided in Appendix H.

Figure 4: Progress during the training (best seen in color). (Top) We plot the training accuracy, training loss, average message length and average message distinctness in each round. (Bottom) We plot the perplexities and the Jaccard similarity of the messages spoken by both agents in each round. Note that the average message length and the perplexities are divided by 20 to match the y-axis range of the other metrics.

Figure 4 shows the convergence behavior during training. Training accuracy was calculated by rounding the learner's sigmoid output by 0.5. Message distinctness was calculated by dividing the number of unique messages in the mini-batch by the size of the mini-batch. Ideally there should be, on average, 40 distinct messages in a mini-batch of 50 images, giving a distinctness of 0.8. Every 10 rounds, both agents were given the same 1,000 randomly sampled images to generate 1,000 message pairs. Perplexity was then calculated for each object type and averaged, indicating the average number of distinct messages used by the agents to describe a single object type (note that the perplexities in the plot are divided by 20). The Jaccard similarity between both agents' messages was also calculated for each object type and averaged. At the beginning, the listener (i.e., learner) always decides that it is not seeing the same object as the speaker, giving us 0.75 accuracy. But after 7,000 rounds, accuracy starts to go beyond 0.9. Loss is negatively correlated with accuracy until round 15,000, where it starts to fluctuate. Accuracy, however, remains high due to how accuracy is measured: by rounding the learner's output by 0.5.
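The convergence metrics above might be computed as follows; the exact aggregation in the paper may differ slightly (for instance, we take the Jaccard similarity over the symbol sets of each message pair).

    from collections import Counter
    import math

    def message_distinctness(messages):
        # Unique messages / batch size (0.8 is ideal for 40 types, batch 50).
        return len({tuple(m) for m in messages}) / len(messages)

    def jaccard_similarity(msgs_a, msgs_b):
        # Symbol-set overlap between the two agents' messages, averaged.
        scores = [len(set(a) & set(b)) / len(set(a) | set(b))
                  for a, b in zip(msgs_a, msgs_b)]
        return sum(scores) / len(scores)

    def per_type_perplexity(messages, object_types):
        # exp(entropy) of the message distribution per object type: ~1 means
        # one consistent name per type; large values mean many names.
        by_type = {}
        for m, t in zip(messages, object_types):
            by_type.setdefault(t, Counter())[tuple(m)] += 1
        result = {}
        for t, counts in by_type.items():
            total = sum(counts.values())
            entropy = -sum((c / total) * math.log(c / total)
                           for c in counts.values())
            result[t] = math.exp(entropy)
        return result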
Although we could occasionally observe some patterns in the messages when both accuracy and loss were high, a lower loss generally resulted in a clearer communication structure (i.e., grammar) and better zero-shot performance. The loss fluctuation also indicates some instability in the training process, which is a potential direction for future work. Message distinctness starts near 0, indicating that the agents are generating the same message for all object types. After round 7,000, where both message distinctness and message length reach their maximum, both start to decrease. But message distinctness never goes as high as the ideal 0.8, meaning that the agents occasionally use the same message for different object types, as will be shown in the following section. Both the perplexities and the Jaccard similarity show seemingly meaningless fluctuation in early rounds. After round 7,000, the perplexities and the Jaccard similarity show negatively correlated behavior, meaning that not only is each agent using consistent messages to describe each object type, but both agents are also using very similar messages to describe each object type. We found perplexity and Jaccard similarity to be important indicators of the degree of communication structure. During rounds 7,000-8,000, performance was excellent in terms of loss and accuracy, but perplexity was high and Jaccard similarity low, indicating that the agents were assigning incoherent strings to each object type just to win the game. Similar behavior was observed in the early stages of the language evolution simulation in BID14, where words represented some meanings but had no structure (i.e., a protolanguage). It seems that artificial communication acquires compositional properties after the emergence of a protolanguage, regardless of whether the input is entangled or disentangled.

We choose agents from different training rounds to highlight how the language becomes more structured over time. TAB1 shows the agents' messages at the beginning (round 40), when the training accuracy starts pushing beyond 90% (round 6,940), and when the agents settle on a common language (round 16,760). In round 40, both agents are producing the same message for all object types, as mentioned in Section 3.1. We might say the messages are structured, but considering that the listener always answers 0 in early rounds, we cannot say the agents are communicating. In round 6,940, which is roughly when the agents begin to communicate more efficiently, training accuracy is significantly higher than in round 40. However, the perplexities show that both agents are assigning many names to a single object type (40-80 names depending on the object type), indicating that the agents are focusing on pixel-level differences between images of the same object type. TAB1 shows, as an example, the messages used by both agents to describe the red sphere. Due to the high perplexity, it is difficult to capture the underlying grammar of the messages even with regular expressions. Furthermore, as the Jaccard similarity indicates, both agents are generating completely different messages for the same object type. In round 16,760, as the perplexities and the Jaccard similarity tell us, the agents have come to share a very narrow set of names for each object type (1-4 names depending on the object type). Moreover, the names of same-colored objects and same-shaped objects clearly seem to follow a pattern.
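One way to surface such patterns automatically is to group the messages by shape and by color and extract the longest shared affixes. This is our own analysis helper, not a procedure from the paper, and it only detects exact shared prefixes and suffixes (it would miss the deletion-based rule for gray described next).

    import os

    def analyze_grammar(messages):
        """messages maps (color, shape) -> message string."""
        colors = sorted({c for (c, _) in messages})
        shapes = sorted({s for (_, s) in messages})
        for shape in shapes:
            group = [messages[(c, shape)] for c in colors if (c, shape) in messages]
            print(f"{shape:>10}: shared prefix {os.path.commonprefix(group)!r}")
        for color in colors:
            group = [messages[(color, s)][::-1] for s in shapes if (color, s) in messages]
            print(f"{color:>10}: shared suffix {os.path.commonprefix(group)[::-1]!r}")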
Overall, the three phases (round 40, round 6,940, round 16,760) seem to represent the development of visual perception, learning to communicate, and the emergence of structure. We found that the messages in round 16,760 could be decomposed in a similar manner to Table 6 in Appendix A. The top of TAB2 shows a possible decomposition of the messages from round 16,760, and the bottom shows the rules for each color and shape derived from the decomposition. According to our analysis, the agents use the first part of the message (i.e., the prefix) to specify a shape, and the second part (i.e., the suffix) to specify a color. However, they use two different strings to specify a shape. For example, the agents use either aaaa or bbbbb to describe a box. The strings used for specifying colors show slightly weaker regularity. For example, red is always described by either the suffix c or the suffix e, but magenta is described by the suffixes bb and bd, and sometimes b or bc. The ā used for gray objects represents deletion of the prefix a. Note that removing prefixes beyond their length causes the pattern to break. For example, gray box, gray sphere and gray cylinder use the same āāa to express the color, but gray capsule and gray ellipsoid use irregular suffixes. Despite some irregularities and one exceptional case (cyan box), the messages provide strong evidence that the agents learned to properly recognize color and shape from raw pixel input (see Appendix H for a study of what the visual module learned), mapped each color and shape to prefixes and suffixes, and are able to compose meaningful messages to describe a given image to one another. Communication accuracy for each object type is described in Appendix F. Communication examples and their analysis are given in Appendix G.

If the agents have truly learned to compose messages that can be divided into a color part and a shape part, then they should be able to accurately describe an object they have not seen before, which is another necessary condition for a compositional language. Therefore, we hold out five objects (the shaded cells in TAB4) from the dataset during training and observe how the agents describe these five novel objects during the test phase. The agents were chosen from round 19,980, which showed a high accuracy (97.8%), low perplexities (1.48, 1.65) and a high Jaccard similarity (0.75). TAB4 shows a potential decomposition of the messages used by the agents (the original messages are given in TAB12 in Appendix I). We can observe that there is clearly structure in the communication, although some messages show somewhat weaker patterns compared to when the agents were trained with all object types (even when we consider the effects of b̄ and ē). However, the messages describing the held-out object types show clear structure, with the exception of yellow ellipsoid. In order to assess the communication accuracy when held-out objects are involved, we conducted another test with the agents from round 19,980, following the protocol sketched below. Each held-out object was given to the speaker, the listener, or both. In the first two cases, the held-out object was paired with all 40 object types and each pair was tested 10 times. In the last case, the held-out object was tested against itself 10 times. In all cases, the agents switched roles after 5 trials. Table 4 shows the communication accuracies for each case. We can see that the agents successfully communicate most of the time even when given novel objects.
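A sketch of this evaluation protocol in code; play_round_fixed is an assumed variant of the earlier play_round sketch that takes the two object types as arguments instead of sampling them.

    def held_out_accuracy(agents, dataset, held_out, trials=10):
        """Pair each held-out object with all object types, test each pair
        `trials` times, and switch speaker/listener roles halfway through."""
        correct = total = 0
        for novel in held_out:
            for other in dataset.object_types:
                for t in range(trials):
                    speaker, listener = agents if t < trials // 2 else agents[::-1]
                    pred, label = play_round_fixed(speaker, listener,
                                                   dataset, novel, other)
                    correct += int(round(pred) == label)
                    total += 1
        return correct / total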
The last column shows that the listener is not simply producing 0 to maximize its chance of winning the game. It is also notable that the objects described without b̄ or ē show better performance in general. We noticed that the communication accuracy for held-out objects seems relatively weak, considering that the messages used to describe them showed strong structure (TAB4). This, however, results from the grammar (i.e., structure) being not as straightforward as in TAB2, especially with short messages (i.e., frequent use of b̄ and ē). The same tendency can be observed for non-held-out objects, as described by the per-object communication accuracy in Table 9 in Appendix J.

Table 4: Communication accuracy when agents were given objects not seen during the training.

From the grammar analysis in the previous section, we have shown that the emerged language strongly follows a well-defined grammar. In the zero-shot test, the agents demonstrated that they can successfully describe novel objects, although not perfectly, by following a similar grammar. Both are, as stated at the beginning of Section 3, necessary conditions for any communication to be considered compositional. Therefore we can safely conclude that the emerged language in this work possesses some of the qualifications to be considered compositional.

In this work, we used the obverter technique to train neural network agents to communicate in a two-person image description game. Through qualitative analysis, visualization and a zero-shot test, we have shown that even though the agents receive raw perception in the form of image pixels, under the right environmental pressures the emerged language has properties consistent with those found in compositional languages. As an evaluation strategy, we followed previous works and focused on assessing the necessary conditions of compositional languages. However, the exact definition of compositional language is still somewhat debatable, and, to the best of our knowledge, there is no reliable way to mathematically quantify the degree of compositionality of an arbitrary language. Therefore, in order to encourage active research and discussion among researchers in this domain, we propose for future work a quantitatively measurable definition of compositionality. We believe compositionality of a language is not binary (e.g., language A is compositional/not compositional), but a spectrum. For example, human language has some aspects that are compositional (e.g., syntactic constructions, most morphological combinations) and some that are not (e.g., irregular verb tenses in English, character-level word composition). It is also important to clearly define grounded language and compositional language. If one agent says abc (eat red apple) and another says cba (apple red eat), and they both understand each other, are they speaking compositional language? We believe such questions should be asked and addressed to shape the definition of compositionality. In addition to the definition and evaluation of compositional languages, there are numerous directions for future work. Observing the emergence of a compositional language among more than two agents is an apparent next step. Designing an environment to motivate the agents to disentangle more than two factors is also an interesting direction. Training agents to consider context (i.e., pragmatics), for example by giving each agent several images instead of one, is another exciting line of future work.
A EMERGENCE OF GRAMMAR (BID3)

In BID3, the author successfully trained neural agents to develop a structured (i.e., grammatical) language using disentangled meaning vectors as input. Using 10 subject vectors and 10 predicate vectors, all represented as explicit binary vectors, a total of 100 meaning vectors could be composed (TAB7). Each digit in the subject vector serves a clear role, respectively representing speaker (sp), hearer (hr), other (ot), and plural (pl). The predicate vector values, on the other hand, are chosen randomly so that each predicate vector has three 1's and three 0's. The combination of ten subject vectors and ten predicate vectors allows 100 meaning vectors.

The author used twenty neural agents for the experiment. Each agent was implemented with a vanilla recurrent neural network (RNN), where the size of the hidden vector h was 10, the same as the size of the meaning vector m, in order to treat h as the agent's understanding of m. In each training round a single learner (i.e., listener) and ten teachers (i.e., speakers) were randomly chosen. Each teacher, given all 100 m's in random order, generates a message s for each m and sends it to the learner. The messages are generated using the obverter technique, which is described in Algorithm 1. The learner is trained to minimize the mean squared error (MSE) between h (after consuming s) and m. After the learner has learned from all ten teachers, the next round begins, and the process repeats until the error falls below some threshold.

Algorithm 1: Message generation process used in BID3.

Table 6: (Top) Messages used by a majority of the population for each of the given meanings. (Bottom) A potential analysis of the system in terms of a root plus modifications. Italic symbols are used to specify predicates and roman symbols are used to specify subjects. Messages in parentheses cannot be made to fit into this analysis.

When the training was complete, the author was able to find strong patterns in the messages used by the agents (Table 6). Note that the messages using the predicates tired, scared, sick and happy follow an especially clear pattern. Batali also conducted a zero-shot test where the agents were trained without the diagonal elements in Table 6 and tested with all 100 meaning vectors. The agents were able to successfully communicate even when held-out meaning vectors were used, but the messages used for the held-out meaning vectors did not show as strong compositional patterns as in the non-zero-shot case.

The obverter technique allows us to generate messages that encourage the agents to use a shared language, even a highly structured one, by using a single RNN for both speaking and listening. This is quite different from other RL-based related works (BID17; BID22; BID8; BID12; BID16) where each agent has separate components (e.g., two RNNs) for generating and consuming messages. This separation is typically necessary because the message generation module and the message consumption module have different input/output requirements. The message generation module accepts some input related to the task (e.g., a goal description vector, a question embedding, or an image embedding) and generates discrete symbols. The message consumption module, on the other hand, accepts discrete symbols (i.e., the message) and generates some output related to the task (e.g., some prediction or some action to take).
Therefore, when a neural agent speaks in the RL-based approach, its message generation process is completely separated from its own listening process, but tied to the listening process of another agent (i.e., the listener). This means an agent may not have internal consistency: what an agent speaks may not make sense to itself. However, agents in the RL-based setting do converge on a common language because, during training, the error signal flows directly from the listener to the speaker. The obverter approach, on the other hand, requires that each agent have a single component for both message generation and message consumption. This single component accepts discrete symbols and generates some output related to the task. It guarantees internal consistency, because an agent's message generation process is tied to its own message consumption process: it will only generate messages that make sense to itself. In the obverter setting, the error signal does not flow between agents directly; instead, the agents converge on a common language by taking turns being the listener. The listener tries to understand what the speaker says, so that when the listener becomes the speaker, it can generate messages that make sense to itself and, at the same time, will be understood by the former speaker (now the listener).

The advantage of the obverter approach over the RL-based approach is that it is motivated by the theory of mind and more closely resembles the acquisition and development process of human language. Having a single mechanism for both speaking and listening, and training oneself to be a good listener, leads to the emergence of a self-consistent, shared language. However, the obverter technique requires that all agents perform the same task, which means all agents must have identical model architectures. This is because, during the message generation process, the speaker internally simulates what the listener will go through when it hears the message. Therefore we cannot play an asymmetrical game, such as one where the speaker sees only one image and generates a message while the listener is given multiple images and must choose one after hearing the message. RL-based approaches do not have this problem, since there are separate modules for speaking and listening. We believe the obverter technique could be the better choice for certain tasks regarding human mind emulation, but it certainly is not the tool for every occasion; the RL-based approach is a robust tool for any general task that may or may not involve human-like communication. We conclude this section with a possible future research direction: combining the strengths of both approaches to enable communication in more interesting and complicated tasks.

We used TensorFlow and the Sonnet library for the implementation. We used an eight-layer convolutional neural network, with 32 filters of kernel size 3 in every layer. The strides were for each layer. We used the rectified linear unit (ReLU) as the activation function for every layer. Batch normalization was used for every layer; we did not use bias parameters, since we used batch normalization. For padding, we used the TensorFlow VALID padding option for every layer. The fully connected layer that follows the convolutional neural network was of 256 dimensions, with ReLU as the activation function. We used a single-layer gated recurrent unit (GRU) to implement the language module; the size of the hidden layer was 64. For the decision module, we used a two-layer feedforward neural network.
The first layer reduces the dimensionality to 128 with ReLU as the activation function, and the second layer produces a scalar value with sigmoid as the activation function.

Both agents' model parameters are randomly initialized. The training process consists of rounds in which the teacher/learner roles are exchanged, and each round consists of multiple games in which the learner's model parameters are updated. In each game, the teacher, given a mini-batch of images, generates the corresponding messages. The learner, given a separate mini-batch of images and the messages from the teacher, decides whether it is seeing the same object type as the teacher. The learner's model parameters are updated to minimize the cross-entropy loss. After a predefined number of games, we move on to the next round, where the two agents change roles.

Algorithm 2: Message generation process used in our work.

Table 7: Accuracy when each object type is given to the speaker.

We conducted a separate test with the agents from round 16,760 to assess the communication accuracy for each object type. The agents were given 1,600 total object pairs (40 × 40). Each object pair was tested 10 times, with the agents switching speaker/listener roles after 5 trials. The average accuracy was 95.4%, and only 88 out of 1,600 object pairs were communicated with accuracy lower than 0.8. Table 7 describes the accuracy when each object type was given to the speaker. We can observe that the accuracy is higher for objects that are described with less overlapping messages. For example, yellow box is communicated with an accuracy of 98%, and it is described with aaaaaa, which is not used for any other object type. Gray box, on the other hand, is communicated with an accuracy of 93%; it is described with aaa, which is also used for yellow capsule and green sphere, both of which are communicated with low accuracies as well.

Figure 5 provides ten examples of communication when the speaker is given a blue box and the listener is given various object types. The listener's belief (i.e., score) that it is seeing the same image as the speaker changes each time it consumes a symbol. It is notable that most of the time the score jumps between 0 and 1, rather than changing gradually in between. This is natural, given that messages that differ by only a single character can mean different objects (e.g., blue box and blue cylinder). This phenomenon can also be seen in human language; for example, blue can and blue cat differ by a single letter, but their semantics are completely different. Object types that are described by messages similar to blue box's, such as blue cylinder and magenta box, cause marginal confusion to the listener, such that the prediction scores for both objects are not complete zeros. There are also cases where two completely different objects are described by the same message, as mentioned in Section 3.1. From TAB2 we can see that blue box and cyan cylinder are described by the same message bbbbbbb{b,d}, although the messages were composed using different rules. The listener therefore generates high scores for both objects, occasionally losing the game when the agents are given this specific object pair (a 1-in-40 chance). This can be seen as a side effect of the principle of least effort, which motivates the agents to win the game most of the time while minimizing the effort spent generating messages. Section 3.2 provides strong evidence that the agents properly recognize the color and shape of an object.
In this section, we study the visual module of both agents to see how they are processing the images.

Table 9: Accuracy when each object type is given to the speaker. Shaded cells indicate the objects not seen during the training.

In the same manner as Appendix F, we conducted a separate test with the agents from round 19,980 to assess the communication accuracy for each object type. The agents were given 1,600 total object pairs (40 × 40). Each object pair was tested 10 times, with the agents switching speaker/listener roles after 5 trials. The average accuracy was 94.73%, and 103 out of 1,600 object pairs were communicated with accuracy lower than 0.8. Table 9 describes the accuracy when each object type was given to the speaker; shaded cells indicate objects not seen during the training. Here we can observe the same tendency as in Appendix F: the accuracy is higher for objects that are described with less overlapping messages.

Let's assume agent0 is aware of red circle, blue square and green triangle. If agent0 came upon a blue circle for the first time and had to describe it to agent1, the efficient way would be to say blue circle. But it could also say blue not square not triangle. If agent1 had similar knowledge to agent0's, then both agents would have a successful communication. However, it is debatable whether saying blue not square not triangle is as compositional as blue circle.
We train neural network agents to develop a language with compositional properties from raw pixel input.
414
scitldr
The ability to forecast a set of likely yet diverse possible future behaviors of an agent (e.g., future trajectories of a pedestrian) is essential for safety-critical perception systems (e.g., autonomous vehicles). In particular, a set of possible future behaviors generated by the system must be diverse to account for all possible outcomes in order to take necessary safety precautions. It is not sufficient to maintain a set of the most likely future outcomes, because the set may only contain perturbations of a dominating single outcome (major mode). While generative models such as variational autoencoders (VAEs) have been shown to be a powerful tool for learning a distribution over future trajectories, randomly drawn samples from the learned implicit likelihood model may not be diverse -- the likelihood model is derived from the training data distribution, and the samples will concentrate around the major mode of the data. In this work, we propose to learn a diversity sampling function (DSF) that generates a diverse yet likely set of future trajectories. The DSF maps forecasting context features to a set of latent codes which can be decoded by a generative model (e.g., a VAE) into a set of diverse trajectory samples. Concretely, the process of identifying the diverse set of samples is posed as DSF parameter estimation. To learn the parameters of the DSF, the diversity of the trajectory samples is evaluated by a diversity loss based on a determinantal point process (DPP). Gradient descent is performed over the DSF parameters, which in turn moves the latent codes of the sample set to find an optimal set of diverse yet likely trajectories. Our method is a novel application of DPPs to optimize a set of items (forecasted trajectories) in continuous space. We demonstrate the diversity of the trajectories produced by our approach on both low-dimensional 2D trajectory data and high-dimensional human motion data.

Forecasting future trajectories of vehicles and humans has many useful applications in autonomous driving, virtual reality and assistive living. What makes trajectory forecasting challenging is that the future is uncertain and multi-modal -- vehicles can choose different routes and people can perform different future actions. In many safety-critical applications, it is important to consider a diverse set of possible future trajectories, even those that are less likely, so that necessary preemptive actions can be taken. For example, an autonomous vehicle should understand that a neighboring car can merge into its lane even though the car is most likely to keep driving straight. To address this requirement, we need to take a generative approach to trajectory forecasting that can fully characterize the multi-modal distribution of future trajectories. To capture all modes of a data distribution, variational autoencoders (VAEs) are well-suited generative models. However, random samples from a learned VAE model with Gaussian latent codes are not guaranteed to be diverse, for two reasons. First, the sampling procedure is stochastic, so the VAE samples can fail to cover some minor modes even with a large number of samples. Second, since VAE sampling is based on the implicit likelihood function encoded in the training data, if most of the training data is centered around a specific mode while other modes have less data (Fig. 1(a)), the VAE samples will reflect this bias and concentrate around the major mode (Fig. 1(b)).
To tackle this problem, we propose to learn a diversity sampling function (DSF) that can reliably generate a diverse set of trajectory samples (Fig. 1(c)). The proposed DSF is a deterministic parameterized function that maps forecasting context features (e.g., past trajectories) to a set of latent codes. The latent codes are decoded by the VAE decoder into a set of future trajectory samples, denoted the DSF samples. In order to optimize the DSF, we formulate a diversity loss based on a determinantal point process (DPP) to evaluate the diversity of the DSF samples. The DPP defines the probability of choosing a random subset from the set of trajectory samples. It models the negative correlations between samples: the inclusion of a sample reduces the probability of including a similar sample. This makes the DPP an ideal tool for modeling the diversity within a set. In particular, we use the expected cardinality of the DPP as the diversity measure, defined as the expected size of a random subset drawn from the set of trajectory samples according to the DPP. Intuitively, since the DPP inhibits the selection of similar samples, if the set of trajectory samples is more diverse, the random subset is more likely to select more samples from the set. The expected cardinality of the DPP is easy to compute and differentiable, which allows us to use it as the objective to optimize the DSF and enable diverse trajectory sampling. Our contributions are as follows: We propose a new forecasting approach that learns a diversity sampling function to produce a diverse set of future trajectories; We propose a novel application of DPPs to optimize a set of items (trajectories) in continuous space with a DPP-based diversity measure; Experiments on synthetic data and human motion validate that our method can reliably generate a more diverse set of future trajectories compared to state-of-the-art generative models.

Trajectory forecasting has recently received significant attention from the vision community. A large portion of previous work focuses on forecasting 2D future trajectories for pedestrians or vehicles. Some approaches use deterministic trajectory modeling and forecast only one future trajectory. As there are often multiple plausible future trajectories, several approaches have tried to forecast distributions over trajectories. Recently, Rhinehart et al. proposed a generative model that can accurately forecast multi-modal trajectories for vehicles; egocentric videos have also been used to predict the future trajectories of the camera wearer. Some work has investigated forecasting higher-dimensional trajectories, such as the 3D full-body pose sequence of human motions. Most existing work takes a deterministic approach and forecasts only one possible future motion from past 3D poses, static images or egocentric videos. Differently, some probabilistic approaches use conditional variational autoencoders (cVAEs) to generate multiple future motions. In contrast to previous work, our approach can generate a diverse set of future motions with a limited number of samples.

Diverse solutions have been sought in a number of problems in computer vision and machine learning. One branch of methods aiming for diversity stems from the M-best MAP problem, including diverse M-best solutions and multiple choice learning. Alternatively, previous work has used submodular function maximization to select a diverse subset of garments from fashion images.
Determinantal point processes (DPPs) are efficient probabilistic models that can measure both the diversity and the quality of items in a subset, which makes them a natural choice for the diverse subset selection problem. DPPs have been applied to document and video summarization, recommendation systems, object detection, and grasp clustering. DPPs have also been used to mitigate mode collapse in generative adversarial networks (GANs). The work most closely related to ours also uses the cardinality of DPPs, as a proxy for user engagement. However, there are two important differences between our approach and theirs. First, the context is different: they use the cardinality for a subset selection problem, while we apply the cardinality as the objective of a continuous optimization problem in the setting of generative models. Second, their main motivation for using the cardinality is that it aligns better with user engagement semantics, while our motivation is that using the cardinality as a diversity loss for deep neural networks is more stable, due to its tolerance of similar trajectories, which are often produced by deep neural networks during stochastic gradient descent.

The aim of multi-modal trajectory forecasting is to learn a generative model over future trajectories. Variational autoencoders (VAEs) are a popular choice of generative model for trajectory forecasting because they can effectively capture all possible future trajectories by explicitly mapping each data point to a latent code. VAEs model the joint distribution p_θ(x, z) = p(z) p_θ(x|z) of each data sample x (e.g., a future trajectory) and its corresponding latent code z, where p(z) denotes some prior distribution (e.g., a Gaussian) and p_θ(x|z) denotes the conditional likelihood model. To calculate the marginal likelihood p_θ(x) = p_θ(x, z) / p_θ(z|x), one needs to compute the posterior distribution p_θ(z|x), which is typically intractable. To tackle this, VAEs use variational inference, which introduces an approximate posterior q_φ(z|x) and decomposes the marginal log-likelihood as

log p_θ(x) = KL(q_φ(z|x) || p_θ(z|x)) + L(x; θ, φ),

where L(x; θ, φ) is the evidence lower bound (ELBO), defined as

L(x; θ, φ) = E_{q_φ(z|x)}[log p_θ(x|z)] − KL(q_φ(z|x) || p(z)).

During training, VAEs jointly optimize the recognition model (encoder) q_φ(z|x) and the likelihood model (decoder) p_θ(x|z) to maximize the ELBO. In the context of multi-modal trajectory forecasting, one can generate future trajectories from p(x) by drawing a latent code z from the prior p(z) and decoding z with the decoder p_θ(x|z) to produce a corresponding future trajectory x.

Our core technical innovation is a method to learn a diversity sampling function (DSF) that can generate a diverse set of future trajectories. To achieve this, we must equip ourselves with a tool to evaluate the diversity of a set of trajectories. To this end, we make use of determinantal point processes (DPPs) to model the diversity within a set. DPPs promote diversity within a set because, when the set is sampled according to a DPP, the inclusion of one item makes the inclusion of a similar item less likely. Formally, given a set of items (e.g., data points) 𝒴 = {x_1, . . ., x_N}, a point process P on the ground set 𝒴 is a probability measure on 2^𝒴, i.e., the set of all subsets of 𝒴.
P is called a determinantal point process if a random subset Y drawn according to P satisfies

P(Y = Y) = det(L_Y) / det(L + I)

for every Y ⊆ 𝒴, where I is the identity matrix, L ∈ R^{N×N} is the DPP kernel, a symmetric positive semidefinite matrix, and L_Y denotes the submatrix of L indexed by the elements of Y. The DPP kernel L is typically constructed from a similarity matrix S, where S_ij defines the similarity between two items x_i and x_j. If we use the inner product as the similarity measure, L can be written in the form of a Gram matrix, L = S = XᵀX, where X is the stacked feature matrix of 𝒴. As a property of the Gram matrix, det(L_Y) equals the squared volume spanned by the vectors x_i ∈ Y. With this geometric interpretation in mind, one can see that diverse sets are more probable because their features are more orthogonal and thus span a larger volume. In addition to the set diversity encoded in the similarity matrix S, it is also convenient to introduce a quality vector r = [r_1, . . ., r_N] that weighs each item according to some unary metric; for example, the quality weight might be derived from the likelihood of an item. To capture both the diversity and the quality of a subset, the DPP kernel L is often decomposed in the more general form

L = Diag(r) · S · Diag(r).

With this decomposition, we can see that both the quality vector r and the similarity matrix S contribute to the DPP probability of a subset Y:

P_L(Y = Y) ∝ (∏_{x_i ∈ Y} r_i²) · det(S_Y).

Due to its ability to capture the global diversity and quality of a set of items, we choose DPPs as the probabilistic approach to evaluate and optimize the diversity of the future trajectories drawn by our proposed diversity sampling function.
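A compact NumPy sketch of this kernel construction and the subset probability; the Gaussian similarity anticipates Eq. 8 below, while the quality penalty outside a radius-R sphere is only an assumed placeholder for Eq. 9, whose exact form the text describes in words.

    import numpy as np

    def dpp_kernel(trajectories, latents, k=1.0, R=3.0, omega=1.0):
        """L = Diag(r) * S * Diag(r) from decoded trajectories and latents."""
        X = trajectories.reshape(len(trajectories), -1)
        d2 = ((X[:, None] - X[None, :]) ** 2).sum(-1)      # squared distances
        S = np.exp(-k * d2)                                # similarity (Eq. 8)
        norms = np.linalg.norm(latents, axis=1)
        r = omega * np.exp(-np.maximum(norms - R, 0.0))    # assumed penalty form
        return r[:, None] * S * r[None, :]

    def subset_prob(L, idx):
        """P(Y = idx) = det(L_idx) / det(L + I)."""
        L_sub = L[np.ix_(idx, idx)]
        return np.linalg.det(L_sub) / np.linalg.det(L + np.eye(len(L)))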
Safety-critical applications often require the system to maintain a diverse set of outcomes covering all modes of a predictive distribution, not just the most likely one. To address this requirement, we propose to learn a diversity sampling function (DSF) that draws deterministic trajectory samples by generating a set of latent codes in the latent space of a conditional variational autoencoder (cVAE) and decoding them into trajectories with the cVAE decoder. The DSF trajectory samples are evaluated with a DPP-based diversity loss, which in turn optimizes the parameters of the DSF for more diverse trajectory samples. Formally, the future trajectory x ∈ R^{T×D} is a random variable denoting a D-dimensional feature over a future time horizon T (e.g., a vehicle trajectory or a sequence of humanoid poses). The context ψ = {h, f} provides the information needed to infer the future trajectory x; it contains the past trajectory h ∈ R^{H×D} over the last H time steps and, optionally, other side information f, such as an obstacle map. In the following, we first describe how we learn the future trajectory model p_θ(x|ψ) with a cVAE; we then introduce the DSF and the DPP-based diversity loss used to optimize it.

In order to generate a diverse set of future trajectory samples, we need to learn a generative trajectory forecasting model p_θ(x|ψ) that can cover all modes of the data distribution. Here we use cVAEs (other suitable generative models could also be used), which explicitly map each data point x with the encoder q_φ(z|x, ψ) to its corresponding latent code z and reconstruct the data from the latent code with the decoder p_θ(x|z, ψ). By maintaining this one-to-one mapping between the data and the latent codes, cVAEs attempt to capture all modes of the data. As discussed in Sec. 3.1, cVAEs jointly optimize the encoder and decoder to maximize the variational lower bound

L(x; θ, φ) = E_{q_φ(z|x,ψ)}[log p_θ(x|z, ψ)] − KL(q_φ(z|x, ψ) || p(z)).     (Eq. 6)

We use multivariate Gaussians for the prior, encoder and decoder: p(z) = N(z; 0, I), q_φ(z|x, ψ) = N(z; μ, Diag(σ²)), and p_θ(x|z, ψ) = N(x; x̂, αI). Both the encoder and the decoder are implemented as neural networks. The encoder network f_φ outputs the parameters of the posterior distribution: (μ, σ) = f_φ(x, ψ). The decoder network g_θ outputs the reconstructed future trajectory x̂: x̂ = g_θ(z, ψ). Detailed network architectures are given in Appendix B.1. Based on the Gaussian parameterization of the cVAE, the objective in Eq. 6 can be rewritten (up to an additive constant) as

L(x; θ, φ) = −(β / 2V) Σ_{v=1}^{V} ||x − x̂^(v)||² + (1/2) Σ_{j=1}^{D_z} (1 + log σ_j² − μ_j² − σ_j²),

where we take V samples x̂^(v) decoded from the posterior q_φ(z|x, ψ), D_z is the number of latent dimensions, and β = 1/α is a weighting factor. The training procedure for the cVAE is detailed in Alg. 2 (Appendix A). Once the cVAE model is trained, sampling from the learned future trajectory model p_θ(x|ψ) is efficient: we sample a latent code z according to the prior p(z) and use the decoder p_θ(x|z, ψ) to decode it into a future trajectory x.

Algorithm 1: Training the diversity sampling function (DSF) S_γ(ψ)
    Initialize γ randomly
    while not converged do
        for each context ψ in the training data do
            Generate latent codes Z = {z_1, . . ., z_N} with the DSF S_γ(ψ)
            Generate the trajectory ground set 𝒴 = {x_1, . . ., x_N} with the decoder g_θ(z, ψ)
            Compute the similarity matrix S and quality vector r with Eq. 8 and 9
            Compute the DPP kernel L = Diag(r) · S · Diag(r)
            Calculate the diversity loss L_diverse
            Update γ with the gradient ∇L_diverse
        end for
    end while

As mentioned before, randomly sampling from the learned cVAE model according to the implicit likelihood function p_θ(x|ψ), i.e., sampling latent codes from the prior p(z), does not guarantee that the trajectory samples are diverse: major modes (those having more data) with higher likelihood will produce most of the samples, while minor modes with lower likelihood will have almost no samples. This prompts us to devise a new sampling strategy that can reliably generate a diverse set of samples covering both major and minor modes. We propose to learn a diversity sampling function (DSF) S_γ(ψ) that maps the context ψ to a set of latent codes Z = {z_1, . . ., z_N}. The DSF is implemented as a γ-parameterized neural network which takes ψ as input and outputs a vector of length N · D_z (see Appendix B.1 for network details). The latent codes Z are decoded into a diverse set of future trajectories 𝒴 = {x_1, . . ., x_N}, which we denote the DSF trajectory samples. We note that N is the sampling budget. To solve for the parameters of the DSF, we propose a diversity loss based on a DPP defined over 𝒴. In this section, we first describe how the DPP kernel L is defined, which involves the construction of the similarity matrix S and the quality vector r. We then discuss how we use the DPP kernel L to formulate a diversity loss that optimizes the parameters of the DSF. Recall that the DPP kernel is defined as L = Diag(r) · S · Diag(r), where r defines the quality of each trajectory and S measures the similarity between two trajectories. The DPP kernel L(γ) is a function of γ, as it is defined over the ground set 𝒴 output by the DSF S_γ(ψ).

Similarity. We measure the similarity S_ij between two trajectories x_i and x_j with a Gaussian kernel,

S_ij = exp(−k · d_x(x_i, x_j)²),     (Eq. 8)

where d_x is the Euclidean distance and k is a scaling factor. This similarity design ensures that 0 ≤ S_ij ≤ 1 and S_ii = 1. It also makes S positive definite, since the Gaussian kernel is a positive definite kernel.

Quality. It may be tempting to use p(x|ψ) to define the quality of each trajectory sample. However, this likelihood-based measure clearly favors major modes that have higher probabilities, making it less likely to generate samples from minor modes.
This motivates us to design a quality metric that treats all modes equally. To this end, unlike the similarity metric, which is defined in the trajectory space, the quality of each sample is measured in the latent space and is defined as r_i = ω if ‖z_i‖ ≤ R, and r_i = ω · exp(R² − ‖z_i‖²) otherwise. (9) Geometrically, let R be the radius of a sphere Φ containing most samples from the Gaussian prior p(z). We treat samples inside Φ equally and only penalize samples outside Φ. In this way, samples from major modes are not preferred over those from minor modes as long as they are inside Φ, while samples far away from the data manifold are heavily penalized as they are outside Φ. The radius R is determined by requiring that ρ percent of the Gaussian samples lie within Φ, and we set ρ = 90. To compute R, we use the percentage point function of the chi-squared distribution, which models the distribution of the sum of squares of independent standard normal variables. The base quality ω is a hyperparameter which we set to 1 during training in our experiments. At test time, we can use a larger ω to encourage the DPP to select more items from the ground set Y. The hyperparameter ρ (or R) allows for a trade-off between diversity and quality. When R is small, the quality metric is reduced to a pure likelihood-based metric (proportional to the latent likelihood), which will prefer samples with high likelihood, resulting in a less diverse sample set. When R is large, most samples will have the same quality, and the resulting samples will be highly diverse but less likely. In practice, the choice of R should be application dependent, as one could imagine autonomous vehicles would need to consider more diverse scenarios, including less likely ones, to ensure robustness. We note that after the diverse samples are obtained, it is possible to reassign the quality score for each sample based on its likelihood, to allow users to prioritize more likely samples. Diversity Loss. To optimize the DSF S_γ(ψ), we need to define a diversity loss that measures the diversity of the trajectory ground set Y generated by S_γ(ψ). An obvious choice for the diversity loss would be the negative log likelihood (NLL) of the DPP, −log P_{L(γ)}(Y = Y) = −log det(L(γ)) + log det(L(γ) + I). However, there is a problem with directly using the DPP log likelihood. The log likelihood heavily penalizes repeated items: if two trajectories inside Y are very similar, their corresponding rows in L will be almost identical, making det(L(γ)) = λ_1 λ_2 ⋯ λ_N ≈ 0 (λ_n is the n-th eigenvalue). In practice, if the number of modes in the trajectory distribution p(x|ψ) is smaller than |Y|, Y will always contain similar trajectories, thus making det(L(γ)) always close to zero. In such cases, optimizing the negative log likelihood causes numerical issues, which we observed in our early experiments. Instead, the expected cardinality of the DPP, E_{Y∼P_{L(γ)}}[|Y|], is a better measure for the diversity of Y. Intuitively, since the DPP discourages selection of similar items, if Y is more diverse, a random subset Y drawn according to the DPP is more likely to select more items from Y, thus having larger cardinality. The expected cardinality can be computed in closed form (Eqs. 15 and 34 in the DPP survey of Kulesza and Taskar) as E_{Y∼P_{L(γ)}}[|Y|] = tr(I − (L(γ) + I)^{−1}). The main advantage of the expected cardinality is that it is well defined even when the ground set Y has duplicated items, since it does not require all eigenvalues of L to be non-zero, as the log likelihood does. Thus, our diversity loss is defined as L_diverse(γ) = −tr(I − (L(γ) + I)^{−1}). The training procedure for S_γ(ψ) is outlined in Alg. 1.
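The pieces above (Eqs. 8 and 9 and the expected-cardinality loss) compose into a short differentiable routine. The sketch below is one possible PyTorch implementation under the stated assumptions (Gaussian similarity kernel, latent-norm quality with a chi-squared radius); it is illustrative rather than the authors' exact code.

```python
import torch
from scipy.stats import chi2

def diversity_loss(trajs, latents, k=1.0, omega=1.0, rho=0.90):
    """Negative expected cardinality of the DPP built from the DSF samples.
    trajs: (N, T*D) flattened decoded trajectories; latents: (N, Dz) codes."""
    N, Dz = latents.shape
    d2 = torch.cdist(trajs, trajs).pow(2)              # squared distances d_x^2
    S = torch.exp(-k * d2)                             # similarity matrix (Eq. 8)
    R2 = chi2.ppf(rho, df=Dz)                          # squared radius of sphere Phi
    z2 = latents.pow(2).sum(dim=1)
    r = omega * torch.exp(-(z2 - R2).clamp(min=0.0))   # quality vector (Eq. 9)
    L = r.unsqueeze(1) * S * r.unsqueeze(0)            # L = Diag(r) * S * Diag(r)
    eye = torch.eye(N, device=L.device, dtype=L.dtype)
    expected_card = torch.trace(eye - torch.inverse(L + eye))
    return -expected_card                              # L_diverse
```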
Inference. At test time, given the current context ψ, we use the learned DSF S_γ(ψ) to generate the future trajectory ground set Y. In some cases, Y may still contain trajectories that are similar to others. In order to obtain a diverse set of trajectories without repetition, we aim to perform MAP inference for the DPP to find the most diverse subset Y* = argmax_{Y⊆Y} P_{L(γ)}(Y). A useful property of DPPs is that the log-probability function is submodular. Even though submodular maximization is NP-hard, we use a greedy algorithm, a popular optimization procedure that works well in practice. As outlined in Alg. 3, the output set Y_f is initialized as ∅, and at each iteration the trajectory which maximizes the log probability is added to Y_f, until the marginal gain becomes negative or Y_f = Y.
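The greedy MAP procedure sketched above (Alg. 3) can be written compactly as follows. This is a plain re-implementation of the standard greedy heuristic for submodular maximization, not the paper's exact code.

```python
import torch

def greedy_dpp_map(L):
    """Greedy MAP inference for a DPP (Alg. 3 sketch): repeatedly add the item
    with the largest marginal gain in log det(L_Y); stop when the gain turns
    negative or all items are selected."""
    N = L.shape[0]
    selected, remaining = [], list(range(N))
    cur_logdet = 0.0
    while remaining:
        gains = []
        for i in remaining:
            idx = selected + [i]
            gains.append(torch.logdet(L[idx][:, idx]).item() - cur_logdet)
        best = max(range(len(remaining)), key=lambda j: gains[j])
        if gains[best] <= 0.0 and selected:
            break
        cur_logdet += gains[best]
        selected.append(remaining.pop(best))
    return selected
```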
The primary focus of our experiments is to answer the following questions: (1) Are trajectory samples generated with our diversity sampling function more diverse than samples from the cVAE and other baselines? (2) How does our method perform on both balanced and imbalanced data? (3) Is our method general enough to perform well on both low-dimensional and high-dimensional tasks? Figure 2: In real data, contexts (past trajectories) are seldom the same due to noise. Metrics. A problem with trajectory forecasting evaluation is that in real data each context ψ^(i) usually only has one future trajectory x^(i), which means we only have one sample from a multi-modal distribution. Let us consider a scenario of three data examples as shown in Fig. 2 (red, purple, blue). The contexts (past trajectories) of the three examples are instances of the same trajectory, but they are slightly different due to noise. As these three contexts have the same semantic meaning, they should share the future trajectories, e.g., the purple and blue future trajectories are also valid for the red context. If we evaluate each example (x^(i), ψ^(i)) only with its own future trajectory x^(i), a method can achieve high scores by only forecasting the mode corresponding to x^(i) and dropping other modes. This is undesirable because we want a good method to capture all modes of the future trajectory distribution, not just a single mode. To allow for multi-modal evaluation, we propose collecting multiple future trajectories for each example by clustering examples with similar contexts. Specifically, we augment each data example (x^(i), ψ^(i)) with a future trajectory set X^(i) = {x^(j) : ‖ψ^(j) − ψ^(i)‖ ≤ ε, j = 1, …, M}, and metrics are calculated based on X^(i) instead of x^(i), i.e., we compute metrics for each x ∈ X^(i) and average the results. Following prior work, we use the average displacement error (ADE) and final displacement error (FDE) as accuracy metrics. However, these two metrics do not penalize repeated samples. To address this, we introduce two new metrics, ASD and FSD, to evaluate the similarity between samples in the set of forecasted trajectories. Larger ASD and FSD mean the forecasted trajectories are more non-repetitive and diverse. Baselines. We compare our Diversity Sampler Function (DSF) with the following baselines. cVAE: a method that follows the original sampling scheme of the cVAE by sampling latent codes from a Gaussian prior p(z). MCL: an approach that uses multiple choice learning to optimize the sampler S_γ(ψ) with the loss L_mcl = min_{x̂∈Y} ‖x̂ − x‖², where x is the ground-truth future trajectory. R2P2: a method that uses a reparametrized pushforward policy to improve modeling of multi-modal distributions for vehicle trajectories. cGAN: generative adversarial networks conditioned on the forecasting context. We implement all baselines using similar networks and perform hyperparameter search for each method for fair comparisons. For methods whose samples are stochastic, we use 10 random seeds and report the average for all metrics. We first use synthetic data to evaluate our method's performance for low-dimensional tasks. We design a virtual 2D traffic scene where a vehicle comes to a crossroad and can choose three different future routes: forward, left, and right. We consider two types of synthetic data: (1) balanced data, where the probabilities of the vehicle choosing each of the three routes are the same; and (2) imbalanced data, where the probabilities of the vehicle going forward, left, and right are 0.8, 0.1, and 0.1, respectively. We synthesize trajectory data by simulating the vehicle's behavior and adding Gaussian noise to vehicle velocities. Each data example (x^(i), ψ^(i)) contains future trajectories of 3 steps and past trajectories of 2 steps. We also add an obstacle map around the current position to the context ψ^(i). In total, we have around 1100 training examples and 1000 test examples. Please refer to Appendix B for more implementation details. Table 1: Quantitative results on synthetic data (numbers scaled by 10) when N = 10. Table 1 summarizes the quantitative results for both balanced and imbalanced data when the sampling budget N is 10. We can see that our method DSF outperforms the baselines in all metrics in both test settings. As shown in Fig. 3, our method generates more diverse trajectories and is less affected by the imbalanced data distribution. The trajectory samples of our method are also less repetitive, a feature afforded by our DPP formulation. Fig. 4 shows how ADE changes as a function of the sampling budget N. Table 2: Quantitative results for human motion forecasting when N = 10. To further evaluate our method's performance for more complex and high-dimensional tasks, we apply our method to forecast future human motions (pose sequences). We use motion capture to obtain 10 motion sequences including different types of motions such as walking, turning, jogging, bending, and crouching. Each sequence is about 1 minute long, and each pose consists of 59 joint angles. We use the past 3 poses (0.1s) to forecast the next 30 poses (1s). There are around 9400 training examples and 2000 test examples, where we use different sequences for training and testing. More implementation details can be found in Appendix B. We present quantitative results in Table 2, and we can see that our approach outperforms other methods in all metrics. As the dynamics model used in R2P2 does not generalize well for high-dimensional human motion, we find the model fails to converge, and we do not compare with it in this experiment. Fig. 4 shows that our method achieves a large improvement when the sampling budget is large (N = 50). We also present qualitative results in Fig. 5, where we show the starting pose and the final pose of all 10 forecasted motion samples for each method. We can clearly see that our method generates more diverse future human motions than the baselines. Please refer to Appendix C and our video for additional qualitative results. In this section, we perform additional experiments on a large human motion dataset (3.6 million frames), Human3.6M, to evaluate the generalization ability of our approach. We predict future motion of 2 seconds based on observed motion of 0.5 seconds. Please refer to Appendix B.3 for implementation details.
We also use a new selection of baselines, including several variants of our method (DSF) and the cVAE, to validate several design choices of our method, including the choice of the expected cardinality over the negative log likelihood (NLL) of the DPP as the diversity loss. Specifically, we use the following new baselines. DSF-NLL: a variant of DSF that uses NLL as the diversity loss instead of the expected cardinality. DSF-COS: a DSF variant that uses cosine similarity to build the similarity matrix S for the DPP kernel L. cVAE-LDPP: a variant of the cVAE that samples 100 latent codes and performs DPP MAP inference on the latent codes to obtain a diverse set of latent codes, which are then decoded into trajectory samples. We present quantitative results in Table 3 when the number of samples N is 10 and 50. Table 3: Quantitative results on Human3.6M for N = 10 and N = 50. X means the method is unable to learn a model due to numerical instability. The baseline DSF-COS is able to achieve very high diversity (ASD and FSD), but its samples are overly diverse and have poor quality, which is indicated by the large ADE and FDE. Compared with DSF-NLL, our method achieves better diversity (ASD and FSD) and similar ADE and FDE when the number of samples is small (N = 10). For a larger number of samples (N = 50), NLL becomes unstable even with a large constant (1e-3) added to the diagonal. This behavior of NLL, i.e., stable for small N but unstable for large N, matches our intuition that NLL becomes unstable when samples become similar (as discussed in Sec. 4.2), because when there are more samples, it is easier to have similar samples during the SGD updates of the DSF network. The baseline cVAE-LDPP also performs worse than DSF in all metrics, even though it is able to outperform the cVAE. We believe the reason is that diversity in sample space may not be well reflected in the latent space, due to the non-linear mapping from latent codes to samples induced by deep neural networks. We proposed a novel forecasting approach using a DSF to optimize over the sample space of a generative model. Our method learns the DSF with a DPP-based diversity measure to generate a diverse set of trajectories. The diversity measure is a novel application of DPPs to optimize a set of items in continuous space. Experiments have shown that our approach can generate more diverse vehicle trajectories and human motions compared to state-of-the-art baseline forecasting approaches.
Algorithm 2 Training the cVAE:
1: Input: training data {(x^(i), ψ^(i))}
2: Output: cVAE encoder network f_φ(x, ψ) and decoder network g_θ(z, ψ)
3: Initialize φ and θ randomly
4: while not converged do
5: for each example (x, ψ) in the training data do
6: Compute parameters (µ, σ) of the posterior distribution q_φ(z|x, ψ) using f_φ(x, ψ)
7: Sample V Gaussian noise vectors {ε_1, …, ε_V} from N(0, I)
8: Transform the noise into latent samples from q_φ(z|x, ψ): z_v = µ + σ ⊙ ε_v
9: Decode the latent samples into reconstructed trajectories {x̃_1, …, x̃_V} using g_θ(z, ψ)
10: Calculate the cVAE loss L_cvae according to Eq. 6
11: Update φ and θ with ∇_φ L_cvae and ∇_θ L_cvae
12: end for
13: end while
Figure 6: Network architectures for synthetic data and human motion. Top: for synthetic data, we use a CNN to process the obstacle map f and directly flatten the trajectories x and h into vectors; the reconstructed trajectory x̃ is decoded with an MLP. Bottom: for human motion, we use Bi-LSTMs to extract temporal features for x and h and decode the reconstructed trajectory x̃ with a forward LSTM. Synthetic data. Fig. 6 (Top) shows the network architecture for synthetic data. The number of latent dimensions is 2.
By default, we use ReLU activations for all networks. The future trajectory x ∈ R^{3×2} consists of 3 future positions of the vehicle. The context ψ contains past trajectories h ∈ R^{2×2} of 2 time steps and an obstacle map f ∈ {0, 1}^{28×28} spanning a 4 × 4 area around the current position of the vehicle (the road width is 2). For the encoder, we use a convolutional neural network (CNN) with three 32-channel convolutional layers to process f. The first two layers have kernel size 4 and stride 2, while the last layer has kernel size 6 and stride 1. The obtained CNN features are concatenated with the flattened x and h into a unified feature, which is fed into a multilayer perceptron (MLP). The MLP has one 128-dim hidden layer and two heads outputting the mean µ and variance σ of the latent distribution. For the decoder, we concatenate the CNN feature from f with the latent code z ∈ R² and the flattened h into a unified feature. The feature is passed through an MLP with one 128-dim hidden layer, which outputs the reconstructed future trajectory x̃ ∈ R^{3×2}. For the diversity sampling function (DSF), we concatenate the CNN feature from f with the flattened h and pass it through an MLP with one 128-dim hidden layer to obtain a set of latent codes {z_1, …, z_N}, which are represented by a vector of length 2N. Human motion. Fig. 6 (Bottom) shows the network architecture for human motion. The number of latent dimensions is 8. The future trajectory x ∈ R^{30×59} consists of future poses of 30 time steps (1s). The context ψ contains past poses h ∈ R^{3×59} of 3 time steps (0.1s). Each pose consists of 59 joint angles. For the encoder, we use two 128-dim bidirectional LSTMs (Bi-LSTMs) and mean pooling to obtain the temporal features for x and h. We then concatenate the temporal features into a unified feature and feed it into an MLP with two hidden layers and two heads to obtain the mean µ and variance σ of the latent distribution. For the decoder, we reuse the Bi-LSTM of the encoder for the context h and use a 128-dim forward LSTM to decode the future trajectory x̃. At each time step t, the forward LSTM takes as input the previous pose x̃_{t−1} (h_H for t = 0), the latent code z ∈ R⁸, and the temporal features from h, and outputs a 128-dim feature. The feature is then passed through an MLP with two hidden layers to generate the reconstructed pose x̃_t. For the DSF, we use a different 128-dim Bi-LSTM to obtain the temporal feature for h, which is fed into an MLP with a 128-dim hidden layer to produce a set of latent codes {z_1, …, z_N}, which are represented by a vector of length 8N. When training the cVAE model using Eq. 7, we take V = 1 sample from the posterior q_φ(z|x, ψ). The weighting factor β for the KL term is set to 0.1 for synthetic data and 1e-4 for human motion. We use Adam to jointly optimize the encoder and decoder. The learning rate is set to 1e-4, and we use a mini-batch size of 32 for synthetic data. We optimize the model for 500 epochs for synthetic data and 100 epochs for human motion. When training the DSF, the scale factor k for the similarity matrix S is set to 1 for synthetic data and 1e-2 for human motion. For both synthetic data and human motion, we use Adam with learning rate 1e-4 to optimize the DSF for 20 epochs. Recall that in the metrics section (Sec. 5.1) we need the grouping threshold ε to build the ground truth future trajectory set X^(i) = {x^(j) : ‖ψ^(j) − ψ^(i)‖ ≤ ε, j = 1, …, M}.
For synthetic data, ε is set to 0.1, and we only use the past trajectories h to compute the distance between contexts. For human motion, ε is set to 0.5. Following previous work, we convert the motion sequences in the dataset into sequences of 3D joint positions and adopt a 17-joint skeleton. We train on five subjects (S1, S5, S6, S7, S8) and test on two subjects (S9 and S11). We use the same network architectures (Fig. 6, Bottom) in this experiment as those used in the human motion forecasting experiment above. The number of latent dimensions is 128. When training the cVAE model, the weighting factor β is set to 0.1. We sample 5000 training examples every epoch and optimize the cVAE for 500 epochs using Adam and a learning rate of 1e-4. We set the batch size to 64 for the optimization. The scale factor k for the similarity matrix S of the DPP kernel is set to 5. When learning the DSF, we use a batch size of 64, sample 1000 training examples every epoch, and optimize the DSF for 20 epochs using Adam and a learning rate of 1e-3. When computing the metrics, we set the grouping threshold ε to 0.1. We also show additional qualitative results for human motion forecasting in Fig. 7. The quality and diversity of the forecasted motions are best seen in our video. Figure 7: Additional visualization for human motion forecasting. The left shows the starting pose, and on the right we show, for each method, the final pose of 10 forecasted motion samples.
We learn a diversity sampling function with DPPs to obtain a diverse set of samples from a generative model.
415
scitldr
There is mounting evidence that pretraining can be valuable for neural network language understanding models, but we do not yet have a clear understanding of how the choice of pretraining objective affects the type of linguistic information that models learn. With this in mind, we compare four objectives (language modeling, translation, skip-thought, and autoencoding) on their ability to induce syntactic and part-of-speech information, holding constant the genre and quantity of training data. We find that representations from language models consistently perform best on our syntactic auxiliary prediction tasks, even when trained on relatively small amounts of data, which suggests that language modeling may be the best data-rich pretraining task for transfer learning applications requiring syntactic information. We also find that a randomly-initialized, frozen model can perform strikingly well on our auxiliary tasks, but that this effect disappears when the amount of training data for the auxiliary tasks is reduced. Representation learning with deep recurrent neural networks has revolutionized natural language processing and replaced many of the expert-designed linguistic features previously used. Recently, researchers have begun to investigate the properties of representations learned by networks by training auxiliary classifiers that use the hidden states of frozen pretrained models to perform other tasks. These investigations have shown that when deep LSTM RNNs are trained on tasks like machine translation, they latently identify substantial syntactic and semantic information about their input sentences, including part-of-speech. These intriguing findings lead us to ask the following questions: 1. How does the training task affect how well models latently learn syntactic properties? Which tasks are better at inducing these properties? 2. How does the amount of data the model is trained on affect these results? When does training on more data help? We investigate these questions by holding the data source and model architecture constant, while varying both the training task and the amount of training data. Specifically, we examine models trained on English-German (En-De) translation, language modeling, skip-thought, and autoencoding, in addition to an untrained baseline model. We control for the data domain by exclusively training on datasets from the 2016 Conference on Machine Translation (WMT). We train models on all tasks using the parallel En-De corpus and a small subset of that corpus, which allows us to make a fair comparison across all five models. Additionally, we augment the parallel dataset with a large monolingual corpus from WMT to examine how the performance of the unsupervised tasks (all but translation) scales with more data. Throughout our work, we focus on the syntactic evaluation tasks of part-of-speech (POS) tagging and Combinatory Categorial Grammar (CCG) supertagging. Supertagging is a building block for parsing, as these tags constrain the ways in which words can compose, largely determining the parse of the sentence. CCG supertagging thus allows us to measure the degree to which models learn syntactic structure above the word.
We focus our analysis on representations learned by language models and by the encoders of sequence-to-sequence models, as translation encoders have been found to learn richer representations of POS and morphological information than translation decoders (Belinkov et al., 2017a). We find that for POS and CCG tagging, bidirectional language models (BiLMs), created by separately training forward and backward language models and concatenating their hidden states, outperform models trained on all other tasks. Even BiLMs trained on relatively small amounts of data (1 million sentences) outperform translation and skip-thought models trained on larger datasets (5 million and 63 million sentences, respectively). Our inclusion of an untrained LSTM baseline allows us to study the effect of training on state representations. We find, surprisingly, that randomly initialized LSTMs underperform our best trained models by only a few percentage points when we use all of the available labeled data to train classifiers for our auxiliary tasks. When we reduce the amount of classifier training data, though, the performance of the randomly initialized LSTM model drops far below that of trained models. We hypothesize that this occurs because training the classifiers on large amounts of auxiliary task data allows them to memorize configurations of words seen in the training set and their associated tags. We test this hypothesis by training classifiers to predict the identity of neighboring words from a given hidden state, and find that randomly initialized models outperform all trained models on this task. Our findings demonstrate that our best trained models do well on the tagging tasks because they are truly learning representations that conform to our notions of POS and CCG tagging, and not because the classifiers we train are able to recover neighboring word identity information well. Earlier work introduced the idea of examining sentence vector representations by training auxiliary classifiers to take sentence encodings and predict attributes like word order. Belinkov et al. (2017a) build on this work by using classifiers to examine the hidden states of machine translation models in terms of what they learn about POS and morphology. They find that translating into morphologically poorer languages leads to a slight improvement in encoder representations. However, this effect is small, and we expect that our study of English-German translation will nonetheless provide a reasonable overall picture of the representations that can be learned in data-rich translation. Beyond translation, other work finds that deep LSTMs latently learn hierarchical syntax when trained on a variety of tasks, including semantic role labeling, language modeling, and dependency parsing. We build on this work by controlling for model size and for the quantity and genre of the training data, which allows us to make direct comparisons between different tasks on their ability to induce syntactic information. Transfer Learning of Representations Much of the work on sentence-level pretraining has focused on sentence-to-vector models and on evaluating learned representations on how well they can be used to perform sentence-level classification tasks.
Skip-thought, the technique of training a sequence-to-sequence model to predict the sentences preceding and following each sentence in a running text, represents a prominent early success in this area with unlabeled data, and InferSent, the technique of pretraining encoders on natural language inference data, yields strikingly better performance when such labeled data is available. Work in this area has recently moved beyond strict sentence-to-vector mapping. Newer models that incorporate LSTMs pretrained on data-rich tasks, like translation and language modeling, have achieved state-of-the-art results on many tasks, including semantic role labeling and coreference resolution. Although comparisons have previously been made between translation and language modeling as pretraining tasks, we investigate this issue more thoroughly by controlling for the quantity and content of the training data. Training Dataset Size The performance of neural models depends immensely on the amount of training data used. Prior work finds that when training machine translation models on corpora with fewer than 15 million words (English side), statistical machine translation approaches outperform neural ones. Other work studies data volume dependence on several tasks, including translation and image classification, and finds that for small amounts of data, neural models perform only about as well as simple guessing baselines. Our method of training auxiliary classifiers on randomly initialized RNNs builds on the tradition of reservoir computing, in which randomly initialized networks or "reservoirs" are fixed and only "read-out" classifier networks are trained (Lukoševičius and Jaeger). Echo state networks, reservoir computing with recurrent models, have been used for tasks like speech recognition, language modeling, and time series prediction. We use the parallel English-German (En-De) dataset from the 2016 ACL Conference on Machine Translation (WMT) shared task on news translation. This dataset contains 5 million ordered sentence translation pairs. We also use the 2015 English monolingual news dataset from the same WMT shared task, which contains approximately 58 million ordered sentences. To examine how the volume of training data affects learned representations, we use four corpus sizes: 1, 5, 15, and 63 million sentences (translation is only trained on the two smaller sizes). We create the 1 million sentence corpora from the 5 million sentence dataset by sampling (i) sentence pairs for translation, (ii) English sentences for autoencoders, and (iii) ordered English sentence pairs for skip-thought and language models. Note that we initialize language model LSTM states with the final state after reading the previous sentence. Similarly, we create 15 million sentence corpora for the unsupervised tasks by sampling sentences from the entire corpus of 63 million sentences. We use word-level representations throughout and use the Moses package to tokenize and truecase our data. Finally, we limit both the English and German vocabularies to the 50k most frequent tokens in the training set. We train all our models using OpenNMT-py and use the default options for model sizes, hyperparameters, and training procedure, except that we increase the size of the LSTMs, make the encoders bidirectional, and use validation-based learning rate decay instead of a fixed schedule.
Specifically, all our models (except language models) are 1000D, two-layer encoder-decoder LSTMs with bidirectional encoders (500D LSTMs in each direction) and 500D embeddings; we train models both with and without attention. For language models, we train a single 1000D forward language model and a bidirectional language model: two 500D language models (one forward, one backward) trained separately, whose hidden states are concatenated. All models are randomly initialized with a uniform distribution between −0.1 and 0.1, the default in OpenNMT-py. We use the same training procedure for all our models. We evaluate on the validation set every epoch when training on the 1 and 5 million sentence datasets, and evaluate approximately every 5 million sentences when training on the larger datasets. We use SGD with an initial learning rate of 1. Whenever a model's validation loss increases relative to the previous evaluation, we halve the learning rate, and we stop training when the learning rate reaches 0.5^15. For each training task and dataset size, we select the model with the lowest validation perplexity to perform auxiliary task evaluations on. We report model performance in terms of perplexity and BLEU in Table 1. For translation, we use beam search (B = 5) when decoding. For CCG supertagging, we use CCGBank, which is based on the Wall Street Journal portion of the Penn Treebank (PTB WSJ). CCG supertagging provides fine-grained information about the role of each word in its larger syntactic context and is considered almost parsing, since sequences of tags map sentences to small subsets of possible parses. The entire dataset contains approximately 50k sentences and 1327 tag types. We display POS and CCG tags for an example sentence in Figure 2. To study the impact of auxiliary task data volume, for both datasets we create smaller classifier training sets by sampling 10% and 1% of the sentences. We truecase both datasets using the same truecase model trained on WMT and restrict the vocabularies to the 50k tokens used in our LSTM models. We use the word-conditional most frequent class as a baseline, which assigns each word the tag class most frequently assigned to it in the training set; for this baseline we restrict the vocabulary to that of our encoder models (we map all out-of-vocabulary words to a single UNK token). Note that while PTB and WMT are both drawn from news text, there is a slight genre mismatch. Word Identity For this task, the classifier takes a single LSTM hidden state as input and predicts the identity of the word at a different time step, for example, three steps previous (a shift of −3). We use the WSJ data for this task. Following prior work, we take all words that occur between 100 and 1000 times (about 1000 words total) as the possible targets for neighboring word prediction. Classifier Training Procedure We train multilayer perceptron (MLP) classifiers that take an LSTM hidden state (from one time step and one layer) and output a distribution over the possible labels (tags or word identities). The MLPs we train have a single 1000D hidden layer with a ReLU activation. For classifier training, we use the same training and learning rate decay procedure used for pretraining the encoders.
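For concreteness, a minimal PyTorch sketch of such a probing classifier is shown below. The class name and the usage pattern in the comments are illustrative assumptions; the layer sizes follow the description above (a single 1000D ReLU hidden layer, and, e.g., 1327 CCG tag types).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProbeClassifier(nn.Module):
    """MLP probe: one 1000D ReLU hidden layer mapping a frozen LSTM hidden
    state (one time step, one layer) to tag logits."""
    def __init__(self, hidden_dim=1000, num_tags=1327):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_dim, 1000),
            nn.ReLU(),
            nn.Linear(1000, num_tags),
        )

    def forward(self, h):
        return self.net(h)

# Encoder states are computed with gradients disabled, so only the probe trains:
#   with torch.no_grad():
#       h = frozen_encoder(tokens)          # hypothetical frozen encoder
#   loss = F.cross_entropy(probe(h), tags)
```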
In this section, we discuss the main POS and CCG tagging results displayed in Figure 3. Overall, POS and CCG tagging accuracies tend to increase with the amount of data the LSTM encoders are trained on. However, the amount of this improvement is generally small, especially when encoders are already trained on large amounts of data. Language Modeling and Translation For all pretraining dataset sizes and tasks, bidirectional language model (BiLM) and translation encoder representations perform best in terms of both POS and CCG tagging. Translation encoders, however, slightly underperform BiLMs, even when both models are trained on the same amount of data. Interestingly, even BiLMs trained on the smallest amount of data (1 million sentences) outperform models trained on all other tasks and dataset sizes (up to 5 million sentences for translation, and 63 million sentences for skip-thought and autoencoding). The consistently superior performance of BiLMs, along with the fact that language models do not require aligned data, suggests that for transfer learning of syntactic information, BiLMs are superior to translation encoders. For all amounts of training data, the BiLMs significantly outperform the 1000D forward-only language models. The gap in performance between bidirectional and forward language models is greater for CCG supertagging than for POS tagging. When using all available auxiliary training data, there is a 2 and an 8 percentage point performance gap on POS and CCG tagging, respectively. This difference in relative performance suggests that bidirectional context information is more important when identifying syntactic structure than when identifying part of speech. Figure 2 also illustrates how the best performing BiLMs and translation models tend to be more robust to decreases in classifier data than models trained on other tasks. When training on less auxiliary task data, POS tagging performance tends to drop less than CCG supertagging performance. For the best model (the BiLM trained on 63 million sentences), when using 1% rather than all of the auxiliary task training data, CCG accuracy drops 9 percentage points, while POS accuracy only drops 2 points. Further examination of the effect of classifier data volume is displayed in Figure 5. Skip-Thought Although skip-thought encoders consistently underperform both BiLMs and translation encoders in all data regimes we examine, skip-thought models improve the most when increasing the amount of pretraining data, and are the only models whose performance does not seem to have plateaued by 63 million training sentences. Skip-thought models without attention are very similar to language models: the main difference is that while skip-thought models have separate encoder and decoder weights (and a bidirectional encoder), language models share weights between the encoder and decoder. Thus language models can be interpreted as regularized versions of skip-thought. The increased model capacity of skip-thought, compared to language models, could explain the difference in learned representation quality, especially when these models are trained on smaller amounts of data. Random Initialization For our randomly initialized, untrained LSTM encoders, we use the default weight initialization technique in OpenNMT-py, a uniform distribution between −0.1 and 0.1; the only change we make is to set all biases to zero. We find that this baseline performs quite well when using all auxiliary data, and is only 3 and 8 percentage points behind the BiLM on POS and CCG tagging, respectively. We find that decreasing the amount of classifier data leads to a significantly greater drop in the performance of randomly initialized encoders compared to trained models.
In the 1% classifier data regime, the performance of untrained encoders on both tasks drops below that of all trained models, and below even the word-conditional most frequent class baseline. We hypothesize that the randomly initialized baseline is able to perform well on tagging tasks with large amounts of auxiliary task training data because the classifier can learn the identity of neighboring words from a given time step's hidden state, and simply memorize word configurations and their associated tags from the training data. We test this hypothesis directly in Section 6 and find that untrained LSTM representations are in fact better at capturing neighboring word identity information than any trained model. Autoencoder Models trained on autoencoding are the only ones that do not consistently improve with the amount of training data, which is unsurprising as unregularized autoencoders are prone to learning identity mappings. When training on 10% and 1% of the auxiliary task data, autoencoders outperform randomly initialized encoders and match the word-conditional most frequent class baseline. When training on all the auxiliary data, though, untrained encoders outperform autoencoders. These results suggest that autoencoders learn some structure that is useful in the low auxiliary data regime. Figure 4: POS and CCG tagging accuracies in terms of percentage points over the word-conditional most frequent class baseline; we display results for the best performing models for each task. However, the representations autoencoders learn do not capture syntactically rich features, since random encoders outperform them in the high auxiliary data regime. This is further supported by the extremely poor performance of the second layer of an autoencoder without attention on POS tagging (almost 10 percentage points below the most frequent class baseline), as seen in Figure 4a. Embeddings (Layer 0) We find that randomly initialized embeddings consistently perform as well as the word-conditional most frequent class baseline on POS and CCG tagging, which serves as an upper bound on performance for the embedding layer. As these embeddings are untrained, the auxiliary classifiers are learning to memorize and classify the random vectors. When using all the auxiliary classifier data, there is no significant difference in the performance of trained and untrained embeddings on the tagging tasks. Only for smaller amounts of classifier data do trained embeddings consistently outperform randomly initialized ones. Belinkov et al. (2017a) find that, for translation models, the first layer consistently outperforms the second on POS tagging. We find that this pattern holds for all our models, except in BiLMs, where the first and second layers perform equivalently. The pattern holds even for untrained models, suggesting that POS information is stored on the lower layer not because the training task encourages this, but because of fundamental properties of the deep LSTM architecture. We also find that for CCG supertagging, the first layer outperforms the second layer on untrained models. For trained models, though, behavior is mixed, with the second layer performing better in some cases. Which layer performs best appears to be independent of absolute performance on the supertagging task. Our layer analysis results are displayed in Figure 4. Our results on word identity prediction are summarized in Figure 7 and given in more detail in Appendix A. We find that randomly initialized LSTMs outperform all trained models.
We hypothesize this occurs because a kind of useful forgetting happens during training, as the LSTMs learn that information about certain word patterns is more important to remember than others. In this regard, randomly initialized models are less biased and process inputs more uniformly. The fact that untrained encoders outperform trained ones on word identity prediction, but underperform trained models on POS and CCG tagging, confirms that trained models genuinely capture substantial syntactic features, beyond mere word identity, that the auxiliary classifiers can use. Effect of Depth We find that for both trained and untrained models, the first layer outperforms the second layer when predicting the identity of the immediate neighbors of a word. However, the second layer tends to outperform the first at predicting the identity of more distant neighboring words. This effect is especially apparent for the randomly initialized encoders. Our findings suggest that, as is the case for convolutional neural networks, depth in recurrent networks allows models to encode information about a larger context. By controlling for the genre and quantity of the training data, we make fair comparisons between several data-rich training tasks in their ability to induce syntactic information. We find that bidirectional language models (BiLMs) do better than translation and skip-thought encoders at extracting useful features for POS tagging and CCG supertagging. Moreover, this improvement holds even when the BiLMs are trained on substantially less data than competing models. Although, due to limited parallel data, we could not compare BiLMs and translation encoders on more than 5 million sentences, our results suggest that for syntactic information there is no need to compare these two models trained on more data, as BiLMs consistently outperform translation encoders in all data regimes. We also find that randomly initialized encoders extract usable features for POS and CCG tagging, at least when the auxiliary POS and CCG classifiers are themselves trained on reasonably large amounts of data. However, the performance of untrained models drops sharply relative to trained ones when using smaller amounts of the classifier data. We investigate further and find that untrained models outperform trained ones on the task of neighboring word identity prediction, which confirms that the tagging performance of trained encoders is not simply due to the classifiers memorizing word identity information. We also find that both trained and untrained LSTMs store more local neighboring word identity information in lower layers and more distant word identity information in upper layers, which suggests that depth in LSTMs allows them to capture larger context information. Our results suggest that for transfer learning, bidirectional language models like ELMo capture more useful features than translation encoders, and that this holds even for genres or languages for which data is not abundant. However, the scope of our experiments is limited, and we still know little about the representations of models trained on other supervised tasks, or precisely how the choice of training task affects the type of syntactic information that is learned. Our work also highlights the interesting behavior of randomly initialized LSTMs, which show an ability to preserve the contents of their inputs significantly better than trained models. Figure 6: Here we display results for the word identity prediction task with randomly initialized LSTM encoders with up to 4 layers.
Lower layers have a more peaked shape and upper layers a flatter shape, meaning that the lower layers encode relatively more nearby neighboring word information, while upper layers encode relatively more distant neighboring word information. Table 4: Here we display results for training on 1% of the auxiliary task data. Word-conditional most frequent class baselines for this amount of training data are 81.8% for POS tagging and 62.3% for CCG supertagging. For each task, we underline the best performance for each training dataset size and bold the best overall performance.
Representations from language models consistently perform better than translation encoders on syntactic auxiliary prediction tasks.
416
scitldr
We consider the problem of generating configurations that satisfy physical constraints for optimal material nano-pattern design, where multiple (and often conflicting) properties need to be simultaneously satisfied. Consider, for example, the trade-off between thermal resistance, electrical conductivity, and mechanical stability needed to design a nano-porous template with optimal thermoelectric efficiency. To that end, we leverage the posterior regularization framework and show that this constraint satisfaction problem can be formulated as sampling from a Gibbs distribution. The main challenges come from the black-box nature of those physical constraints, since they are obtained via solving highly non-linear PDEs. To overcome those difficulties, we introduce Surrogate-based Constrained Langevin dynamics for black-box sampling. We explore two surrogate approaches. The first approach exploits zero-order approximation of gradients in the Langevin sampling, and we refer to it as Zero-Order Langevin. In practice, this approach can be prohibitive since we still need to frequently query the expensive PDE solvers. The second approach approximates the gradients in the Langevin dynamics with deep neural networks, allowing an efficient sampling strategy using the surrogate model. We prove the convergence of those two approaches when the target distribution is log-concave and smooth. We show the effectiveness of both approaches in designing optimal nano-porous material configurations, where the goal is to produce nano-pattern templates with low thermal conductivity and reasonable mechanical stability. In many real-world design problems, the optimal design needs to simultaneously satisfy multiple constraints, which can be expensive to estimate. For example, in computational material design, the goal is to come up with material configurations, or samples, satisfying a list of physical constraints that are given by black-box numerical Partial Differential Equation (PDE) solvers. Such solvers (for example, the Boltzmann Transport Equation solver) are often complex, expensive to evaluate, and offer no access to their inner variables or their gradients. We pose this design-under-constraints problem as sampling from a Gibbs distribution defined on some compact support. The problem of sampling from a distribution with unknown likelihood that can only be point-wise evaluated is called black-box sampling. We show in this paper that constrained black-box sampling can be cast as constrained Langevin dynamics with gradient-free methods. Zero-order optimization via Gaussian smoothing was introduced in earlier work and recently extended to black-box sampling with Langevin dynamics; we extend this approach to the constrained setting of sampling from a black-box density with compact support. However, one shortcoming of this approach is that it is computationally very expensive, since it requires repeatedly querying PDE solvers in order to get an estimate of the gradient. To alleviate computational issues, we propose Surrogate Model Based Langevin dynamics, which consists of two steps: (i) Learning (using training data) an approximation of the gradient of the potential of the Gibbs distribution. We show that learning the gradient, rather than the potential itself, is important for the mixing of the Langevin dynamics towards the target Gibbs distribution. We devise several objective functions, as well as deep neural-network architectures for parameterizing the approximating function class, for learning the gradient of the potential function.
(ii) We then use the surrogate gradient model in the constrained Langevin dynamics in lieu of the black-box potential. Using the surrogate enables more efficient sampling, since it avoids querying the expensive PDE solvers, and obtaining gradients is as efficient as evaluating the functions themselves using automatic differentiation frameworks such as PyTorch or TensorFlow. To summarize, our main contributions are as follows: 1. We cast the problem of generating samples under constraints in the black-box setting as sampling from a Gibbs distribution. 2. We introduce Constrained Zero-Order Langevin Monte Carlo, using projection or proximal methods, and provide the proof of its convergence to the target Gibbs distribution. 3. We introduce Surrogate Model Based Projected Langevin Monte Carlo via learning the gradient of the potential of the Gibbs distribution using deep neural networks or reproducing kernel spaces, and prove its convergence to the target distribution when used in conjunction with projection or proximal based methods. 4. We showcase the usability and effectiveness of the proposed methods for the design of nano-porous configurations with improved thermoelectric efficiency. The design consists of finding new configurations with optimized pore locations, such that the resulting configurations have favorable thermal conductivity (i.e., minimal κ) and desired mechanical stability (von Mises stress σ ≤ τ, where τ is some preset threshold). In black-box optimization problems (such as the material design under consideration), the goal is to find a posterior distribution q of samples satisfying a list of equality and inequality constraints: ψ_j(x) = y_j, j = 1 … C_e, and φ_k(x) ≤ b_k, k = 1 … C_i, where x ∈ Ω and Ω ⊂ R^d is a bounded domain. We assume a prior distribution p_0 (whose analytical form is known). The main challenge in black-box optimization is that the functions ψ_j and φ_k can only be evaluated point-wise: we have neither their functional forms nor access to their gradients. For example, ψ and φ might be obtained by aggregating statistics of the solution of a nonlinear PDE produced by a complex solver. To make the problem of learning under constraints tractable, we choose Lagrangian parameters λ_j, λ_k > 0 and obtain the following relaxed objective: min_q KL(q ‖ p_0) + Σ_{j=1}^{C_e} λ_j E_{x∼q}[ℓ(ψ_j(x), y_j)] + Σ_{k=1}^{C_i} λ_k E_{x∼q}[max(0, φ_k(x) − b_k)], (1) where ℓ penalizes violations of the equality constraints (e.g., the squared loss). The formulation in Eq. 1 is similar in spirit to the posterior regularization framework. However, we highlight two differences: (i) our focus is on constrained settings (where Ω is bounded), and (ii) we assume a black-box setting. We first obtain: Lemma 1 (Constraint Satisfaction as Sampling from a Gibbs Distribution). The solution to the distribution learning problem given in Eq. 1 is given by q*(x) = p_0(x) exp(−U(x)) 1_{x∈Ω} / Z, (2) where U(x) = Σ_{j=1}^{C_e} λ_j ℓ(ψ_j(x), y_j) + Σ_{k=1}^{C_i} λ_k max(0, φ_k(x) − b_k) and Z is the normalizing partition function. Lemma 1 shows that the constraint satisfaction problem formulated in Eq. 1 amounts to sampling from a Gibbs distribution defined on a compact support, given in Eq. 2. (Note that both properties κ and σ for a given configuration are obtained by numerically solving highly non-linear PDEs; the material configuration is defined by the pore locations, the material used, and the response of the material to heat (thermal) or stress (mechanical) flows.) Sampling from a Gibbs distribution (also known as a Boltzmann distribution) has a long history using Langevin dynamics.
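As a sketch of how the potential U of Lemma 1 can be assembled from black-box solver calls, consider the following Python snippet. The squared-loss and hinge penalties mirror the reconstruction of Eq. 1 above and are assumptions about the exact penalty forms; `psi` and `phi` stand for hypothetical wrappers around the PDE solvers.

```python
import numpy as np

def potential_U(x, eq_constraints, ineq_constraints):
    """Potential of the Gibbs distribution in Lemma 1 (sketch). Each entry of
    eq_constraints is (lam, psi, y) and of ineq_constraints is (lam, phi, b),
    where psi and phi are black-box PDE solver calls returning scalars."""
    U = 0.0
    for lam, psi, y in eq_constraints:
        U += lam * (psi(x) - y) ** 2        # assumed squared equality penalty
    for lam, phi, b in ineq_constraints:
        U += lam * max(0.0, phi(x) - b)     # hinge penalty for phi(x) <= b
    return U
```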
In the white-box setting, when the functions defining the constraints have explicit analytical forms as well as gradients, Langevin dynamics for Gibbs distribution sampling defined on a compact domain Ω, and its mixing properties, have been actively studied. In the next section, we provide a more detailed review. Remark 1 (Relation to Bayesian Optimization). While in Bayesian optimization we are interested in finding a point that satisfies the constraints, in our setting we are interested in finding a distribution of candidate samples that satisfy the (black-box) constraints. Remark 2. For the rest of the paper, we will assume p_0 to be the uniform distribution on Ω, which means that its gradients are zero on the support of the domain Ω. Otherwise, if p_0 is known and belongs to, for instance, an exponential family or a generative model prior (such as normalizing flows), we can sample from π using a mixture of black-box sampling on the constraints (ψ_j, φ_k) and white-box sampling on log(p_0). We review in this section Langevin dynamics in the unconstrained case (Ω = R^d) and the constrained case (Ω ⊂ R^d). Below, ‖·‖ denotes the Euclidean norm unless otherwise specified. We are interested in sampling from the Gibbs distribution π(x) = exp(−U(x)) 1_{x∈Ω} / Z. Preliminaries. We give here assumptions, definitions, and a few preliminary known facts that will be useful later. These assumptions are commonly used in Langevin sampling analysis. Assumption A: Ω is a convex set such that 0 ∈ Ω, Ω contains a Euclidean ball of radius r, and Ω is contained in a Euclidean ball of radius R. (For example, Ω might encode box constraints.) The projection onto Ω is defined as P_Ω(x) = argmin_{z∈Ω} ‖x − z‖² for all x ∈ R^d, and we let R = sup_{x,x'∈Ω} ‖x − x'‖ < ∞. Assumption B: U is convex, β-smooth, and has bounded gradients on Ω, i.e., ‖∇U(x) − ∇U(x')‖ ≤ β‖x − x'‖ and ‖∇U(x)‖ ≤ B for all x, x' ∈ Ω. The Total Variation (TV) distance between two measures µ, ν is defined as TV(µ, ν) = sup_A |µ(A) − ν(A)|, where the supremum is over measurable sets A. Unconstrained Langevin Dynamics. In the unconstrained case, the goal is to sample from a Gibbs distribution π(x) = exp(−U(x))/Z that has unbounded support. This sampling can be done via the Langevin Monte Carlo (LMC) algorithm, which is given by the following iteration: X_{k+1} = X_k − η ∇_x U(X_k) + √(2λη) ξ_k, (4) where ξ_k ∼ N(0, I_d), η is the learning rate, and λ > 0 is a variance term. Constrained Langevin Dynamics. Projected Langevin Dynamics. Similar to projected gradient descent, prior work introduced Projected Langevin Monte Carlo (PLMC) and proved its mixing properties towards the stationary distribution π. PLMC is given by the following iteration: X_{k+1} = P_Ω(X_k − η ∇_x U(X_k) + √(2λη) ξ_k). (5) In essence, PLMC consists of a single iteration of LMC, followed by a projection onto the set Ω using the operator P_Ω. Proximal Langevin Dynamics. Similar to proximal methods in constrained optimization, prior work introduced Proximal LMC (ProxLMC), which uses the iteration: X_{k+1} = X_k − η(∇_x U(X_k) + (X_k − P_Ω(X_k))/γ) + √(2λη) ξ_k, (6) where η is the step size and γ is a regularization parameter. In essence, ProxLMC performs an ordinary LMC on the Moreau-Yosida smoothed potential U_γ(x) = U(x) + ‖x − P_Ω(x)‖²/(2γ), a smooth approximation of U(x) + i_Ω(x), where i_Ω(x) = 0 for x ∈ Ω and i_Ω(x) = ∞ for x ∉ Ω. Therefore, the update in Eq. 6 is a regular Langevin update (as in Eq. 4) with potential gradient ∇_x U(x) + (x − P_Ω(x))/γ. We denote by µ_K^PLMC and µ_K^ProxLMC the distributions of X_K obtained by iterating Eq. 5 and Eq. 6, respectively. Under Assumptions A and B, both of these distributions converge to the target Gibbs distribution π in the total variation distance.
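A minimal NumPy sketch of one PLMC step (Eq. 5) and one ProxLMC step (Eq. 6) is given below, assuming a generic projection operator `project` (for example, clipping for box constraints). It is meant to illustrate the updates, not to reproduce the authors' implementation.

```python
import numpy as np

def plmc_step(x, grad_U, eta, lam, project):
    """One PLMC update (Eq. 5): a Langevin step followed by projection onto Omega."""
    noise = np.sqrt(2.0 * lam * eta) * np.random.randn(*x.shape)
    return project(x - eta * grad_U(x) + noise)

def prox_lmc_step(x, grad_U, eta, lam, gamma, project):
    """One ProxLMC update (Eq. 6): a Langevin step on the Moreau-Yosida
    smoothed potential, whose gradient adds (x - P_Omega(x)) / gamma."""
    smoothed_grad = grad_U(x) + (x - project(x)) / gamma
    noise = np.sqrt(2.0 * lam * eta) * np.random.randn(*x.shape)
    return x - eta * smoothed_grad + noise

# Example projection for box constraints Omega = [-1, 1]^d:
def box_project(x, lo=-1.0, hi=1.0):
    return np.clip(x, lo, hi)
```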
Returning to the convergence guarantees: it was shown that for η = Θ̃(R²/K), we have TV(µ_K^PLMC, π) ≤ ε once K is polynomially large in R, β, B, d, and 1/ε. (7) Similarly, it was shown that for 0 < η ≤ γ(1 + β²γ²)^{−1}, TV(µ_K^ProxLMC, π) ≤ ε once K = Ω̃(poly(R, β, B, d, γ, 1/ε)), (8) where the notation α_n = Ω̃(β_n) means that there exist c ∈ R and C > 0 such that α_n ≥ Cβ_n log^c(β_n). We now introduce our variants of constrained LMC for the black-box setting, where explicit potential gradients are unavailable. We explore in this paper two strategies for approximating the gradient of U in the black-box setting. In the first strategy, we borrow ideas from derivative-free optimization (in particular, evolutionary search). In the second strategy, we learn a surrogate deep model that approximates the gradient of the potential. Below, let G: Ω → R^d be a vector-valued function that approximates the gradient of the potential, ∇_x U. We make: Assumption C: the surrogate gradient is uniformly accurate on Ω, i.e., sup_{x∈Ω} ‖G(x) − ∇_x U(x)‖ ≤ δ for some δ > 0. Surrogate Projected Langevin Dynamics. Given Y_0, the Surrogate Projected LMC (S-PLMC) replaces the potential gradient ∇_x U in Eq. 5 with the surrogate gradient G: Y_{k+1} = P_Ω(Y_k − η G(Y_k) + √(2λη) ξ_k). (9) Surrogate Proximal Langevin Dynamics. Similarly, the Surrogate Proximal LMC (S-ProxLMC) replaces the unknown potential gradient ∇_x U in Eq. 6 with the gradient surrogate G: Y_{k+1} = Y_k − η(G(Y_k) + (Y_k − P_Ω(Y_k))/γ) + √(2λη) ξ_k. (10) We now present our main theorems on the approximation properties of surrogate LMC (S-PLMC and S-ProxLMC). We do so by bounding the total variation distance between the trajectories of the surrogate Langevin dynamics (S-PLMC and S-ProxLMC) and the true LMC dynamics (PLMC and ProxLMC). Theorem 1 is an application of techniques from Stochastic Differential Equations (SDEs) and is mainly based on a variant of Girsanov's theorem for change of measures and on Pinsker's inequality, which bounds total variation in terms of Kullback-Leibler divergence. Theorem 1 (S-PLMC and S-ProxLMC Mixing Properties). Under Assumption C, we have: 1. S-PLMC Convergence. Let µ_K^PLMC be the distribution of the random variable X_K obtained by iterating PLMC (Eq. 5), and µ_K^S-PLMC be the distribution of the random variable Y_K obtained by iterating S-PLMC (Eq. 9). Then TV(µ_K^PLMC, µ_K^S-PLMC) = O(δ √(ηK/λ)). 2. S-ProxLMC Convergence. Let µ_K^ProxLMC be the distribution of the random variable X_K obtained by iterating ProxLMC (Eq. 6), and µ_K^S-ProxLMC be the distribution of the random variable Y_K obtained by iterating S-ProxLMC (Eq. 10). Then TV(µ_K^ProxLMC, µ_K^S-ProxLMC) = O(δ √(ηK/λ)). From Theorem 1, we see that it suffices to approximate the potential gradient ∇_x U (and not the potential U itself) in order to guarantee convergence of surrogate-based Langevin sampling. Using the triangle inequality and combining Theorem 1 with the bounds in Eqs. 7 and 8, we obtain: Theorem 2 (Convergence of Surrogate Constrained LMC to the Gibbs Distribution). Under Assumptions A, B, and C, for the choices of η and K in Eqs. 7 and 8 we have TV(µ_K^S-PLMC, π) ≤ ε + O(δ √(ηK/λ)) and TV(µ_K^S-ProxLMC, π) ≤ ε + O(δ √(ηK/λ)). In zero-order optimization, one considers the Gaussian-smoothed potential U_ν(x) = E_{g∼N(0,I_d)}[U(x + νg)], whose gradient can be estimated by the two-sided estimator Ĝ_n U(x) = (1/n) Σ_{i=1}^n [(U(x + νg_i) − U(x − νg_i)) / (2ν)] g_i, (13) where g_1, …, g_n are i.i.d. standard normal vectors and ν > 0 is a smoothing parameter. Zero-order sampling from log-concave densities was recently studied; we extend it here to the constrained sampling case of log-concave densities with compact support. We define Constrained Zero-Order Projected LMC (Z-PLMC) and Zero-Order Proximal LMC (Z-ProxLMC) by setting G(x) = Ĝ_n U(x) in Eq. 9 and Eq. 10, respectively. Lemma 2 (Zero-Order Gradient Approximation). Under Assumption B, we have for all x ∈ Ω: E‖Ĝ_n U(x) − ∇_x U(x)‖ ≤ (νβ/2)(d + 3)^{3/2} + O(B √(d/n)), i.e., a smoothing bias that vanishes as ν → 0 plus a variance term that vanishes as n → ∞. Thanks to Lemma 2, which ensures approximation of gradients in expectation, we can apply Theorem 2 and get the following corollary for Z-PLMC and Z-ProxLMC: Corollary 1 (Zero-Order Constrained Langevin Approximates the Gibbs Distribution). Under Assumptions A and B, we have the following bounds in expectation: 1. choosing ν small enough and n large enough so that the bound of Lemma 2 is at most δ, the conclusions of Theorem 2 hold for Z-PLMC and Z-ProxLMC; 2. in particular, setting λ = 1, the expected total variation distance between the Z-PLMC (or Z-ProxLMC) iterates and π can be made smaller than any ε > 0.
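The zero-order estimator of Eq. 13 is straightforward to implement. The sketch below uses the two-sided (central) difference form, consistent with the 2n + 1 solver queries per point mentioned later; `n` and `nu` are illustrative defaults.

```python
import numpy as np

def zero_order_grad(U, x, n=20, nu=1e-2):
    """Two-sided Gaussian-smoothing gradient estimate (Eq. 13):
    (1/n) * sum_i (U(x + nu*g_i) - U(x - nu*g_i)) / (2*nu) * g_i."""
    d = x.shape[0]
    g = np.zeros(d)
    for _ in range(n):
        gi = np.random.randn(d)
        g += (U(x + nu * gi) - U(x - nu * gi)) / (2.0 * nu) * gi
    return g / n
```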
For simplicity, we state the above bound in terms of expectations over the randomness in estimating the gradients. It is possible to get finite-sample bounds using the Vector Bernstein concentration inequality, coupled with covering number estimates of Ω but omit them due to space. Despite its theoretical guarantees, zero-order constrained Langevin (Z-PLMC and Z-ProxLMC) has a prohibitive computation cost as it needs O(nK) black-box queries (in our case, invocations of a nonlinear PDE solver). To alleviate this issue, we introduce in this Section a neural surrogate model as an alternative to the gradient of the true potential. From Theorem 2, we saw that in order to guarantee the convergence of constrained Langevin dynamics, we need a good estimate of the gradient of the potential of the Gibbs distribution. Recall that the potential given in Lemma 1 depends on ψ j and φ k, which are scalar outputs of computationally heavy PDE solvers in our material design problem. To avoid this, we propose to train surrogate neural network models approximating each PDE output and their gradients. Concretely, suppose we are given a training set S for a PDE solver for the property ψ (dropping the index j for simplicity): where ρ Ω is the training distribution andĜ n ψ is the zero-order estimate of the gradient of ψ given in Eq. 13. We propose to learn a surrogate model belonging to a function class H θ,f θ ∈ H θ, that regresses the value of ψ and matches the zero-order gradient estimates as follows: The problem in Eq. 17 was introduced and analyzed in where H θ is a ball in a Reproducing Kernel Hilbert Space (RKHS). , we refer to this type of learning as Hermite Learning. In the deep learning community, this type of learning is called Jacobian matching and was introduced in; where H θ is a deep neural network parameterized with weights θ. When f θ is a deep network, we can optimize this objective efficiently using common deep learning frameworks (PyTorch, TensorFlow). have shown that when H θ is an RKHS ball and whenỹ i = ∇ x ψ(x i) are exact gradients, for a sufficiently large training set with N = O(1/ 1/(2rζ) ) (where r, ζ are exponents in that depend on the regularity of the function ψ). Under the assumption that ψ ∈ H θ we have: Since we are using inexact zero-order gradients, we will incur an additional numerical error that is also bounded as shown in Lemma 2. While Jacobian matching of zero-order gradients is a sound approach, it remains expensive to construct the dataset, as we need for each point to have 2n + 1 queries of the PDE solver. We exploit in this section the Taylor learning framework of gradients that was introduced in; , and. In a nutshell, suggests to learn a surrogate potential f θ and gradient G Λ that are consistent with the first-order taylor expansion. Given a training set suggest the following objective: where, H θ is an RKHS ball of scalar valued functions, and H d Λ is an RKHS ball of vector valued functions. Under mild assumptions, shows that we have for We simplify the problem in Eq. 18 and propose the following two objective functions and leverage the deep learning toolkit to parameterize the surrogate f θ: The objective in Eq. 19 uses a single surrogate to parameterize the potential and its gradient. The objective in Eq. 20 is similar in spirit to the Jacobian matching formulation in the sense that it adds a regularizer on the gradient of the surrogate to be consistent with the first-order Taylor expansion in local neighborhoods. 
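The two ingredients just described can be sketched in a few lines: the Gaussian-smoothing zero-order gradient estimate Ĝ_n (in the spirit of Eq. 13) used to build the gradient targets ỹ_i, and a Hermite / Jacobian-matching training loss in the spirit of Eq. 17 for a PyTorch surrogate f_θ. The smoothing radius nu, probe count n, and weight lam are assumed hyperparameters, and the exact normalization of the estimator may differ from the paper's.

```python
import numpy as np
import torch

def zero_order_grad(psi, x, n=16, nu=1e-2, rng=None):
    """Gaussian-smoothing estimate of grad psi(x):
    (1/n) * sum_i [psi(x + nu*g_i) - psi(x)] / nu * g_i, with g_i ~ N(0, I_d).
    Each call costs n + 1 black-box queries (here, PDE solves)."""
    rng = np.random.default_rng() if rng is None else rng
    g = rng.standard_normal((n, x.shape[0]))
    base = psi(x)
    diffs = np.array([psi(x + nu * gi) - base for gi in g]) / nu
    return (diffs[:, None] * g).mean(axis=0)

def hermite_loss(f_theta, x, y, y_grad, lam=1.0):
    """Hermite / Jacobian-matching objective (cf. Eq. 17): fit the scalar value
    of psi and match the (zero-order) gradient targets y_grad."""
    x = x.clone().requires_grad_(True)
    pred = f_theta(x).squeeze(-1)
    value_loss = ((pred - y) ** 2).mean()
    grads = torch.autograd.grad(pred.sum(), x, create_graph=True)[0]
    grad_loss = ((grads - y_grad) ** 2).sum(dim=-1).mean()
    return value_loss + lam * grad_loss
```

The Taylor-style objectives (Eqs. 19 and 20) follow the same template, replacing the zero-order gradient targets with first-order consistency between f_θ at nearby training points.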
The advantage of the Taylor learning approach is that we do not need to perform zero-order estimation of gradients to construct the training set and we rely instead on first-order approximation in local neighborhood. Consider the surrogate model f θ obtained via Hermite Learning (Eq. 17) or via Taylor learning (Eqs 18, 19, 20). We are now ready to define the surrogate model LMC by replacing in the constrained Langevin dynamics in Eqs 9 and 10. Both Hermite and Taylor learning come with theoretical guarantees when the approximation function space is an RKHS under some mild assumptions on the training distribution and the regularity of the target function ψ. In Hermite learning (Theorem 2 in) we have: ) (where exponents ζ, r ∈ depend on regularity of ψ). In Taylor Learning with the objective function given in Eq. 18 (Proposition 7 in we have: . In order to apply Theorem 2 we need this gradient approximation error to hold in expectation on all intermediate distributions in the Langevin sampling. Hence, we need the following extra-assumption on the training distribution p Ω : Assumption D: Assume we have a learned surrogate G on training distribution ρ Ω such that 2 ≤ . Assume ρ Ω (x) > 0, ∀x ∈ Ω and that it is a dominating measure of Langevin (PLMC, S-PLMC, Prox-LMC, S-ProxLMC) intermediate distributions µ k, i.e. there exists C > 0 such that: and hence we can apply Theorem 2 for δ = C, and we obtain ε-approximation of the target Gibbs distribution in terms of total variation distance. Remark 4. Assumption D on the -approximation of the gradient can be achieved for a large enough training set N, when we use Hermite learning in RKHS under mild assumptions and in Taylor learning. The assumption on the dominance of the training distribution is natural and means that we need a large training set that accounts to what we may encounter in Surrogate LMC iterations. In what follows we refer to surrogate constrained LMC, as x-PLMC or x-ProxLMC where x is one of four suffixes ({Z-Hermite, Taylor-2, Taylor-1, Taylor-Reg}). Zero-Order Methods. Zero-order optimization with Gaussian smoothing was studied in and in the convex setting. Non-convex zero order optimization was also addressed in. The closest to our work is the zero-order introduced recently for black-box sampling from log concave density. The main difference in our setting, is that the density has a compact support and hence the need to appeal to projected LMC and Proximal LMC . It is worth nothing that introduced recently mirror Langevin sampling that can also be leveraged in our framework. Gradients and Score functions Estimators. We used the approach of gradient distillation and learning gradients of , since they are convenient for training on different constraints and they come with theoretical guarantees. However, other approaches can be also leveraged such as the score matching approach for learning the gradient of the log likelihood (Hyvärinen, 2005) and other variants appealing to dual embeddings . Estimating gradients can be also performed using Stein's method as in , or via maintaining a surrogate of the gradient as in Stein descent without gradient . Optimization approaches. Due to space limitation, we restrict the discussion to the optimization methods that are most commonly and recently used for optimal material (or molecule) design. A popular approach to deal with optimization of expensive black-box functions is Bayesian Optimization (BO) (; ;). 
The standard BO protocol is comprised of estimating the black-box function from data through a probabilistic surrogate model, usually a Gaussian process, and maximizing an acquisition function to decide where to sample next. BO is often performed over a latent space, as in (Gómez-). Hernández- proposed an information-theoretic framework for extending BO to address optimization under black-box constraints, which is close to current problem scenario. Genetic Algorithms (GA), a class of meta-heuristic based evolutionary optimization techniques, is another widely used approach for generating (material) samples with desired property and has been also used for handling optimization under constraints . However, GA typically requires a large number of function evaluations, can get stuck in local optima, and does not scale well with complexity. has used deep reinforcement learning technique of Deep Q-networks to optimize molecules under a specific constraint using desired properties as rewards. The advantage of our framework is that we obtain a distribution of optimal configurations (as opposed to a single optimized sample) that does not rely on training on a specific pre-existing dataset and can be further screened and tested for their optimality for the task at hand. In this section, we demonstrate the usability of our black-blox Langevin sampling approach for the design of nano-porous configurations. We first show the performance of the surrogate models in learning the potential function, showcasing the using four different variants: standard regression, Taylor regularization, Taylor-1 and Taylor-2. We then show how well the surrogate-based Langevin MC generates new samples under the thermal and mechanical constraints. We compare the sample quality on multiple criteria between the surrogate and zero-order approaches with either projection or proximal update step. Data. We want to learn surrogate models to approximate the gradient of the potential from data. To this end, we generate a dataset of 50K nano-porous structures, each of size 100nm × 100nm. One such example is displayed in Fig. 1. Number of pores is fixed to 10 in this study and each pore is a square with a side length of 17.32nm. We sample the pore centers uniformly over the unit square and construct the corresponding structure after re-scaling them appropriately. Then, using the solvers OpenBTE and Summit , we obtain for each structure x a pair of values: thermal conductivity κ and von Mises stress σ. Finally, we collect two datasets: with the same inputs x i's and N = 50K samples. More details are given in Appendices B and C on the PDEs and their corresponding solvers. Features. The pore locations are the natural input features to the surrogate models. Apart from the coordinates, we also derive some other features based on physical intuitions. For example, the distances between pores and the alignment along axes are informative of thermal conductivity . As such, we compute pore-pore distances along each coordinate axis and add them as additional features. Surrogate gradient methods. We use feed-forward neural networks to model the surrogates since obtaining gradients for such networks is efficient thanks to automatic differentiation frameworks. We use networks comprised of 4 hidden layers with sizes 128, 72, 64, 32 and apply the same architecture to approximate the gradients for κ and σ separately. The hidden layers use ReLU activations whereas sigmoid was used at the output layer (after the target output is properly normalized). 
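For concreteness, here is the stated surrogate architecture as a PyTorch module: four hidden layers of sizes 128, 72, 64, 32 with ReLU activations and a sigmoid output over the normalized target. The input dimension (pore coordinates plus the derived pore-pore distance features) is left as a parameter, since the exact feature layout is not specified.

```python
import torch.nn as nn

def make_surrogate(in_dim):
    """Feed-forward surrogate for one PDE output (kappa or sigma), per the
    architecture described above: hidden sizes 128, 72, 64, 32."""
    return nn.Sequential(
        nn.Linear(in_dim, 128), nn.ReLU(),
        nn.Linear(128, 72), nn.ReLU(),
        nn.Linear(72, 64), nn.ReLU(),
        nn.Linear(64, 32), nn.ReLU(),
        nn.Linear(32, 1), nn.Sigmoid(),  # target normalized to [0, 1]
    )
```

A Taylor-2 variant would extend this with a d-dimensional gradient head, as the text notes next.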
For the Taylor-2 variant (in Eq. 18), we have an additional output vector of the same size as the input for the gradient prediction. The networks are trained on the corresponding objective functions set up earlier by an Adam optimizer with learning rate 10 −4 and decay 1.0. We fine-tune the networks with simple grid-search and select the best models for comparison. Due to the space constraint, we present the in Appendix A and emphasize that Z-Hermite is not included in the entire comparison but in a small experiment performed with a more lightweight OpenBTE version. Incorporating constraints and comparison metrics. We demonstrate the usability of our proposed black-box Langevin sampling for the design of nano-configurations under thermal conductivity and mechanical stability constraints that are provided by the corresponding PDE solvers. To compare sampling outcomes, we use the following metrics. We report the minimum value of κ and Monte Carlo estimates for both κ and σ to compare the samples generated by different sampling methods and surrogate models. The Monte Carlo estimates are computed on 20 samples. Single constraint. Our first task is to design nano-configurations under the thermal conductivity constraint where we want κ as low as possible in order to achieve high thermo-electric efficiency. From the posterior regularization formulation Section 2, we pose the constraint satisfaction as sampling from the following Gibbs distribution: where p 0 (x) is the uniform distribution over the unit square, which is equivalent to the Poisson process of 10 pores on the square, and κ(x) is the thermal conductivity we want to minimize. Starting from 20 samples initialized from p 0 (x), we run our proposed black-box Langevin MCs and obtain 20 new realizations from the target distribution π(x). We use four different surrogates (including simple regression, Taylor-Reg, Taylor-1 and zero-order) and each surrogate with either projection or proximal update. We show the summary statistics of these samples in Table 1. The regression-PMLC in the first row and regression-ProxLMC in the fifth represent the sampling where the surrogate model are fitted on solely the mean square error objective. In all methods, we set λ = 100, the step size η = 1e−3 and the exponential decay rate 0.8. Since keeping track of the true κ value is expensive, we stop after K = 10 iterations. We first observe that the regression-based method (PLMC, ProxLMC) is less effective than the others simply because they do not have an implicit objective for approximating the gradients. Taylor-Reg and Taylor-1 demonstrate its effectiveness in approximating the gradient and are able to achieve lower thermal conductivity. In particular, Taylor-1-ProxLMC and Zero-order-PLMC perform in the similar range in terms of the minimum achieved, but the learned surrogate offers 17x speed up (per sample) over zero order methods. Due to the space limit, we do not report Taylor-2 in Table 1, and note that Taylor-2 works in the similar vein as Taylor-1. Multiple constraints. Achieving the minimal thermal conductivity can be fulfilled without much difficulty (e.g. structures with all pores aligned along the vertical axis), but such structures are often mechanically unstable. In the next step, we study whether adding more (conflicting) constraints helps us design better nano-configurations. Hence, we consider both thermal conductivity κ and mechanical stability provided via von Mises stress σ. 
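A sketch of how a trained surrogate enters the sampler: the single-constraint potential U(x) = λκ(x) from the Gibbs target above (the uniform p_0 contributes zero gradient on Ω), and, looking ahead to the multiple-constraints experiments described next, the hinge-relaxed two-constraint variant. Here f_kappa and f_sigma stand for trained surrogate networks, and differentiating through them replaces the unavailable ∇_x U; the λ values and τ mirror the reported settings but are otherwise illustrative.

```python
import torch

def grad_single(f_kappa, lam=100.0):
    """Surrogate gradient of U(x) = lam * kappa(x), for pi(x) ∝ p0(x) exp(-lam*kappa(x))."""
    def grad_U(x):
        x = torch.as_tensor(x, dtype=torch.float32).requires_grad_(True)
        (lam * f_kappa(x).sum()).backward()
        return x.grad.detach().numpy()
    return grad_U

def grad_multi(f_kappa, f_sigma, lam1=100.0, lam2=10.0, tau=0.5):
    """Surrogate gradient of U(x) = lam1*kappa(x) + lam2*max(0, sigma(x) - tau),
    relaxing the inequality constraint sigma(x) <= tau into a hinge penalty."""
    def grad_U(x):
        x = torch.as_tensor(x, dtype=torch.float32).requires_grad_(True)
        U = lam1 * f_kappa(x).sum() + lam2 * torch.clamp(f_sigma(x) - tau, min=0.0).sum()
        U.backward()
        return x.grad.detach().numpy()
    return grad_U
```

Either closure can be passed directly as grad_U to the PLMC sketch given earlier.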
We want a sample x that minimizes κ(x) to achieve high thermo-electric efficiency while maintaining σ(x) less than some threshold (which we explain below). Like the single constraint case, we pose this as sampling from the following Gibbs distribution: where p 0 (x) is the same as above, σ(x) is the von Mises stress and τ is a threshold on the maximum value of σ. With this framework, we relax the inequality constraint to the Hinge loss term on von Mises stress. The are summarized in Table 2. Note that all the surrogate Langevin MCs are initialized from the same set of 20 samples as above. In this experiment, we set τ = 0.5, λ 1 = 100, λ 2 = 10 the step size η = 1e−3 and the exponential decay rate 0.8. Comparing with Table 1, one can see that not only better κ be achieved but also the σ can be reduced simultaneously. These suggest that our approach can effectively sample new configurations under multiple competing constraints. Examples of new nano-configurations are show in Fig. 1 Table 2: Summary statistics of 20 new samples obtained by our sampling method on π(x) with κ and σ constraints Eq. 22. The starting samples are reused from the single constraint case (min κ = 0.0759, mean κ = 0.1268, and mean σ = 0.8181; note that σ can be as high as 16.) In this paper we introduced Surrogate-Based Constrained Langevin Sampling for black-box sampling from a Gibbs distribution defined on a compact support. We studied two approaches for defining the surrogate: the first through zero-order methods and the second via learning gradient approximations using deep neural networks. We showed the proofs of convergence of the two approaches in the log-concave and smooth case. While zero-order Langevin had prohibitive computational cost, learned surrogate model Langevin enjoy a good tradeoff of lightweight computation and approximation power. We applied our black-box sampling scheme to the problem of nano-material configuration design, where the black box constraints are given by expensive PDE solvers, and showed the efficiency and the promise of our method in finding optimal configurations. Among different approaches for approximating the gradient, the zero-order ones (PLMC, ProxLMC) show overall superior performance, at a prohibitive computational cost. We established that the deep the surrogate (Taylor-1 ProxLMC) is a viable alternative to zero-order methods, achieving reasonable performance, and offering 15x speedup over zero-order methods. Surrogate gradient methods We use feed-forward neural networks to model the surrogates since obtaining gradients for such networks is efficient thanks to automatic differentiation frameworks. We use networks comprised of 4 hidden layers with sizes 128, 72, 64, 32 and apply the same architecture to approximate the gradients for κ and σ separately. The hidden layers compute ReLU activation whereas sigmoid was used at the output layer (after the target output is properly normalized). For the Taylor-2 variant (in Eq. 18), we have an output vector for the gradient prediction. The networks are trained on the corresponding objective functions set up earlier by Adam optimizer with learning rate 10 −4 and decay 1.0. We fine-tune the networks with simple grid-search and select the best models for comparison. As emphasized throughout, our focus is more on approximating the gradient rather than learning the true function. However, we need to somehow evaluate the surrogate models on how well they generalize on a hold-out test set. 
Like canonical regression problems, we compare the surrogate variants against each other using root mean square error (RMSE) on the test set. Figures 2 and 3 show the results. The left figure shows RMSE for predicting κ and the right one shows RMSE for the von Mises stress σ. We can see that Taylor-Reg generalizes better and also converges faster than Taylor-1 and Taylor-2 to the target RMSE for κ, while all methods perform similarly for σ prediction. This is reasonable because the objectives of Taylor-1 and Taylor-2 do not optimize the mean square error, which is what we evaluate here. Figure 3 shows the results in terms of sample complexity. Again, Taylor-Reg outperforms Taylor-1 and Taylor-2 for κ prediction. In contrast, most models work similarly for σ regression, particularly when the training size is reduced to 50% (25K). Effectiveness of Z-Hermite learning. Notice that Z-Hermite learning is not included in this comparison, nor as a surrogate model in the black-box Langevin sampling of Section 8. The reason is that, apart from the usual sample pair (x_i, y_i), we need the gradient ỹ_i (see Eq. 17). Since we can only query the solvers as black boxes, this gradient can only be estimated using finite differences, and for both κ and σ in our experiment, obtaining such data is extremely expensive. As a consequence, we do not have full results for the Z-Hermite model. Instead, we ran a separate study to show the effectiveness of Z-Hermite surrogate LMC on a smaller dataset with a lightweight OpenBTE version (0.9.55). The results in Table 3 show that Z-Hermite learning is effective in learning the gradient of κ(x). Here, the entropy is a nearest-neighbor-based estimate used to demonstrate the diversity of the pore centers in the unit square; it is computed from the (x_p, y_p)-coordinates of each pore p. A hybrid algorithm between zero-order and Taylor-1 surrogates. We can see in Tables 1, 2 and 3 the trade-off between computation and accuracy of our approach. While zero-order PLMC and ProxLMC can achieve the lowest thermal conductivity, their computational costs are prohibitive. In contrast, deep surrogate models (including Taylor-Reg and Taylor-1) are far more time-efficient but slightly worse in terms of achieving the optimal κ. To mitigate this trade-off, we propose a simple hybrid method that combines the best of the zero-order and Taylor-1 surrogate models. The algorithm, shown in Algorithm 1, alternates between using the gradient from the zero-order estimate and the gradient of the deep surrogate, depending on whether taking the step would decrease the potential function (i.e., κ). We show and compare the achieved κ and running time in Table 3. Examples of the samples generated by zero-order PLMC, Taylor-1 PLMC, and the hybrid method are also depicted in Figure 4. The hybrid achieves thermal conductivity that is lower than Taylor-1 PLMC while running almost 2x faster than zero-order PLMC. This suggests that the hybrid strategy offers a better trade-off between accuracy and computation. One way to further improve the hybrid is to collect the zero-order gradients produced while mixing and re-update the surrogate with Z-Hermite learning. Algorithm 1: A hybrid PLMC algorithm alternating between zero-order and Taylor-1 surrogate gradients.
Train a network f θ (x) with Taylor-1 Randomly sample x 0 from the uniform p(x) Perform a Langevin dynamic step Additional generated samples We show additional configurations generated by our sampling approach (Taylor-Reg ProxLMC, Taylor At the nanoscale, heat transport may exhibit strong ballistic behaviour and a non-diffusive model must be used . In this work we use the Boltzmann transport equation under the relaxation time approximation and in the mean-free-path (MFP) formulation Λŝ where T (Λ) is the effective temperature associated to phonons with MFP Λ and directionŝ; the notation. stands for an angular average. The coefficients α(Λ) are given by where K(Λ) is the bulk MFP distribution. In general, such a quantity can span several orders of magnitude; however, for simplicity we assume the gray model, i.e. all phonons travel with the same MFP, Λ 0. Within this approximation, we have K(Λ) = κ bulk δ(Λ − Λ 0). In this work we choose Λ 0 = 10 nm, namely as large as the unit cell, so that significant phonons size effects occur. With no loss of generality, we set κ bulk = 1 Wm −1 K −1. Eq. 23 is an integro-differential PDE, which is solved iteratively for each phonon direction over an unstructured mesh . We apply periodic boundary conditions along the unit cell while imposing a difference of temperature of ∆T = 1 K along the x-axis. At the pores' walls we apply diffusive boundary conditions. Upon convergence, the effective thermal conductivity is computed using Fourier's law, i.e. where J = (κ bulk /Λ 0) T (Λ 0)ŝ n is the heat flux, L is the size of the unit cell, A is the area of the cold contact (with normaln). Throughout the text we use the quantity κ = κ eff /κ bulk as a measure of phonon size effects. We model mechanical stress by using the continuum linear elasticity equations where f i is the body force (which is zero in this case), and σ ij is the stress tensor. Note that we used the Einstein notation, i.e. repeated indexes are summed over. The strain kl is related to the stress via the fourth-rank tensor elastic constant C ijkl σ ij = C ijkl kl. The strain is then related to the displacement u via kl = 1 2 We apply periodic boundary conditions along the unit-cell and applied solicitation is a small in-plane expansion. Once the stress tensor is calculated, we compute the von Mises stress as where σ i are the principal stress axis. As a mechanical stability estimator we use σ = max x∈D (σ V M) where D is the simulation domain. To avoid material's plasticity, σ needs to be smaller than the yield stress of a given material. For mechanical simulation we used the SUMIT code (Setting first order optimality conditions on q, we have for x ∈ Ω: Hence we have:, x ∈ Ω and q(x) = 0, x / ∈ Ω, First order optimality on η give us: Ω q(x) = 1, we conclude by setting e exp(−η) = Z. Proof of Theorem 1 1) Projected Langevin. Let us define the following continuous processes by interpolation of X k and Y K (Piecewise constant): whereŨ t (X) = − ∞ k=0 ∇ x U (X kη)1 t∈[kη,(k+1)η] (t). Similarly let us define: where G t (Ỹ) = − Note that: Hence we have: Assume that X 0 = Y 0 there exists Q such that, X T = Q({W t} t∈ [0,T] ) and Y T = Q((W t) t∈[0,T] ). Let µX T be the law ofX t∈ [0,T]. Same for µỸ T. The proof here is similar to the proof of Lemma 8 in . By the data processing inequality we have:
We propose surrogate-based constrained Langevin sampling, with an application to nano-porous material configuration design.
417
scitldr
There is growing interest in geometrically-inspired embeddings for learning hierarchies, partial orders, and lattice structures, with natural applications to transitive relational data such as entailment graphs. Recent work has extended these ideas beyond deterministic hierarchies to probabilistically calibrated models, which enable learning from uncertain supervision and inferring soft-inclusions among concepts, while maintaining the geometric inductive bias of hierarchical embedding models. We build on the Box Lattice model of , which showed promising in modeling soft-inclusions through an overlapping hierarchy of sets, parameterized as high-dimensional hyperrectangles (boxes). However, the hard edges of the boxes present difficulties for standard gradient based optimization; that work employed a special surrogate function for the disjoint case, but we find this method to be fragile. In this work, we present a novel hierarchical embedding model, inspired by a relaxation of box embeddings into parameterized density functions using Gaussian convolutions over the boxes. Our approach provides an alternative surrogate to the original lattice measure that improves the robustness of optimization in the disjoint case, while also preserving the desirable properties with respect to the original lattice. We demonstrate increased or matching performance on WordNet hypernymy prediction, Flickr caption entailment, and a MovieLens-based market basket dataset. We show especially marked improvements in the case of sparse data, where many conditional probabilities should be low, and thus boxes should be nearly disjoint. Embedding methods have long been a key technique in machine learning, providing a natural way to convert semantic problems into geometric problems. Early examples include the vector space BID17 and latent semantic indexing BID4 ) models for information retrieval. Embeddings experienced a renaissance after the publication of Word2Vec BID12, a neural word embedding method BID2 BID13 ) that could run at massive scale. Recent years have seen an interest in structured or geometric representations. Instead of representing e.g. images, words, sentences, or knowledge base concepts with points, these methods instead associate them with more complex geometric structures. These objects can be density functions, as in Gaussian embeddings BID21 BID0, convex cones, as in order embeddings BID20 BID9, or axis-aligned hyperrectangles, as in box embeddings BID22 BID18. These geometric objects more naturally express ideas of asymmetry, entailment, ordering, and transitive relations than simple points in a vector space, and provide a strong inductive bias for these tasks. In this work, we focus on the probabilistic Box Lattice model of BID22, because of its strong empirical performance in modeling transitive relations, probabilistic interpretation (edges in a relational DAG are replaced with conditional probabilities), and ability to model complex joint probability distributions including negative correlations. 
Box embeddings (BE) are a generalization of order embeddings (OE) BID20 and probabilistic order embeddings (POE) BID9 that replace the vector lattice ordering (notions of overlapping and enclosing convex cones) in OE and POE with a more general notion of overlapping boxes (products of intervals).While intuitively appealing, the "hard edges" of boxes and their ability to become easily disjoint, present difficulties for gradient-based optimization: when two boxes are disjoint in the model, but have overlap in the ground truth, no gradient can flow to the model to correct the problem. This is of special concern for (pseudo-)sparse data, where many boxes should have nearly zero overlap, while others should have very high overlap. This is especially pronounced in the case of e.g. market basket models for recommendation, where most items should not be recommended, and entailment tasks, most of which are currently artificially resampled into a 1:1 ratio of positive to negative examples. To address the disjoint case, BID22 introduce an ad-hoc surrogate function. In contrast, we look at this problem as inspiration for a new model, based on the intuition of relaxing the hard edges of the boxes into smoothed density functions, using a Gaussian convolution with the original boxes. We demonstrate the superiority of our approach to modeling transitive relations on WordNet, Flickr caption entailment, and a MovieLens-based market basket dataset. We match or beat existing state of the art , while showing substantial improvements in the pseudosparse regime. As mentioned in the introduction, there is much related work on structured or geometric embeddings. Most relevant to this work are the order embeddings of BID20, which embed a nonprobabilistic DAG or lattice in a vector space with order given by inclusion of embeddings' forward cones, the probabilistic extension of that model due to BID9, and the box lattice or box embedding model of BID22, which we extend. Concurrently to BID22, another hyperrectangle-based generalization of order embeddings was proposed by BID18, also called box embeddings. The difference between the two models lies in the interpretation: the former is a probabilistic model that assigns edges conditional probabilities according to degrees of overlap, while the latter is a deterministic model in the style of order embeddings -an edge is considered present only if one box entirely encloses another. Methods based on embedding points in hyperbolic space BID16 BID5 have also recently been proposed for learning hierarchical embeddings. These models, similar to order embeddings and the box embeddings of BID18, are nonprobabilistic and optimize an energy function. Additionally, while the negative curvature of hyperbolic space is attractively biased towards learning tree structures (since distances between points increase the farther they are from the origin), this constant curvature makes the models not as suitable for learning non-treelike DAGs. Our approach to smoothing the energy landscape of the model using Gaussian convolution is common in mollified optimization and continuation methods, and is increasingly making its way into machine learning models such as Mollifying Networks BID7, diffusion-trained networks BID14, and noisy activation functions BID6.Our focus on embedding orderings and transitive relations is a subset of knowledge graph embedding. 
While this field is very large, the main difference of our probabilistic approach is that we seek to learn an embedding model which maps concepts to subsets of event space, giving our model an inductive bias especially suited for transitive relations as well as fuzzy concepts of inclusion and entailment. We begin with a brief overview of two methods for representing ontologies as geometric objects. First, we review some definitions from order theory, a useful formalism for describing ontologies, then we introduce the vector and box lattices. FIG0 shows a simple two-dimensional example of these representations. A non-strict partially ordered set (poset) is a pair P,, where P is a set, and is a binary relation. For all a, b, c ∈ P, Reflexivity: a a Antisymmetry: a b a implies a = b Transitivity: a b c implies a cThis generalizes the standard concept of a totally ordered set to allow some elements to be incomparable. Posets provide a good formalism for the kind of acyclic directed graph data found in many knowledge bases with transitive relations. A lattice is a poset where any subset of elements has a single unique least upper bound, and greatest lower bound. In a bounded lattice, the set P contains two additional elements, (top), and ⊥ (bottom), which denote the least upper bound and greatest lower bound of the entire set. A lattice is equipped with two binary operations, ∨ (join), and ∧ (meet). a∨b denotes the least upper bound of a, b ∈ P, and a ∧ b denotes their greatest lower bound. A bounded lattice must satisfy these properties: DISPLAYFORM0 Note that the extended real numbers, R ∪ {−∞, ∞}, form a bounded lattice (and in fact, a totally ordered set) under the min and max operations as the meet (∧) and join (∨) operations. So do sets partially ordered by inclusion, with ∩ and ∪ as ∧ and ∨. Thinking of these special cases gives the intuition for the fourth property, absorption. The ∧ and ∨ operations can be swapped, along with reversing the poset relation, to give a valid lattice, called the dual lattice. In the real numbers this just corresponds to a sign change. A semilattice has only a meet or join, but not both. Note. In the rest of the paper, when the context is clear, we will also use ∧ and ∨ to denote min and max of real numbers, in order to clarify the intuition behind our model. A vector lattice, also known as a Riesz space , or Hilbert lattice when the accompanying vector space has an inner product, is a vector space endowed with a lattice structure. A standard choice of partial order for the vector lattice R n is to use the product order from the underlying real numbers, which specifies for all x, y ∈ R n x y ⇐⇒ ∀i ∈ {1..n}, x i ≤ y i Under this order, meet and join operations are pointwise min and max, which gives a lattice structure. In this formalism, the Order Embeddings of BID20 embed partial orders as vectors using the reverse product order, corresponding to the dual lattice, and restrict the vectors to be positive. The vector of all zeroes represents, and embedded objects become "more specific" as they get farther away from the origin. FIG0 demonstrates a toy, two-dimensional example of the Order Embedding vector lattice representation of a simple ontology. Shading represents the probability measure assigned to this lattice in the probabilistic extension of BID9. Vilnis et al. 
(2018) introduced a box lattice, wherein each concept in a knowledge graph is associated with two vectors: the minimum and maximum coordinates of an axis-aligned hyperrectangle, or box (a product of intervals). Using the notion of set inclusion between boxes, there is a natural partial order and lattice structure. To represent a box x, let the pairs (x_{m,i}, x_{M,i}) be the minimum and maximum of the interval at each coordinate i. Then the box lattice structure (least upper bounds and greatest lower bounds), with ∨ and ∧ denoting max and min when applied to the scalar coordinates, is x ∧ y = ⊥ when x and y are disjoint, and otherwise x ∧ y = ∏_i [x_{m,i} ∨ y_{m,i}, x_{M,i} ∧ y_{M,i}], while x ∨ y = ∏_i [x_{m,i} ∧ y_{m,i}, x_{M,i} ∨ y_{M,i}]. Here, ∏ denotes a set (Cartesian) product: the lattice meet is the largest box contained entirely within both x and y, or bottom (the empty set) where no intersection exists, and the lattice join is the smallest box containing both x and y. To associate a measure, marginal probabilities of (collections of) events are given by the volume of boxes, their complements, and intersections under a suitable probability measure. Under the uniform measure, if event x has an associated box with interval boundaries (x_m, x_M), the probability p(x) is given by ∏_{i=1}^{n} (x_{M,i} − x_{m,i}). Use of the uniform measure requires the boxes to be constrained to the unit hypercube, so that p(x) ≤ 1. p(⊥) is taken to be zero, since ⊥ is the empty set. As boxes are simply special cases of sets, it is intuitive that this is a valid probability measure, but it can also be shown to be compatible with the meet semilattice structure in a precise sense BID10. When using gradient-based optimization to learn box embeddings, an immediate problem identified in the original work is that when two concepts are incorrectly given as disjoint by the model, no gradient signal can flow, since the meet (intersection) is exactly zero, with zero derivative. To see this, note that for a pair of 1-dimensional boxes (intervals) x = [a, b] and y = [c, d], the volume of the meet under the uniform measure p as given in Section 3.3 is p(x ∧ y) = m_h(b ∧ d − a ∨ c), (1) where m_h is the standard hinge function, m_h(x) = 0 ∨ x = max(0, x). The hinge function has a large flat plateau at 0 when intervals are disjoint. This issue is especially problematic when the lattice to be embedded is (pseudo-)sparse, that is, when most boxes should have very little or no intersection, since if training accidentally makes two boxes disjoint there is no way to recover with the naive measure. The authors propose a surrogate function to optimize in this case, but we will use a more principled framework to develop alternate measures that avoid this pathology, improving both optimization and final model quality. Figure 2: One-dimensional example demonstrating two disjoint indicators of intervals before and after the application of a smoothing kernel. The area under the purple product curve is proportional to the degree of overlap. The intuition behind our approach is that the "hard edges" of the standard box embeddings lead to unwanted gradient sparsity, and we seek a relaxation of this assumption that maintains the desirable properties of the base lattice model while enabling better optimization and preserving a geometric intuition. For ease of exposition, we will refer to 1-dimensional intervals in this section, but the results carry through from the representation of boxes as products of intervals and their volumes under the associated product measures.
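The pathology is easy to see in code. A minimal sketch of the d-dimensional hard box overlap volume follows; each hinge factor has exactly zero gradient once the boxes are disjoint in that dimension.

```python
import numpy as np

def hard_overlap_volume(x_min, x_max, y_min, y_max):
    """Volume of the box meet under the uniform measure:
    prod_i max(0, min(x_M,i, y_M,i) - max(x_m,i, y_m,i))  (Eq. 1, per dimension)."""
    side = np.maximum(0.0, np.minimum(x_max, y_max) - np.maximum(x_min, y_min))
    return float(side.prod())
```

For example, boxes [0, 1] and [2, 3] in any dimension give a volume of exactly 0, and small perturbations of their corners leave it at 0, which is the flat-plateau problem the smoothed model addresses.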
The first observation is that, considering boxes as indicator functions of intervals, we can rewrite the measure of the joint probability p(x ∧ y) between intervals x = [a, b] and y = [c, d] as an integral of the product of those indicators: DISPLAYFORM3 since the product has support (and is equal to 1) only in the areas where the two intervals overlap. A solution suggests itself in replacing these indicator functions with functions of infinite support. We elect for kernel smoothing, specifically convolution with a normalized Gaussian kernel, equivalent to an application of the diffusion equation to the original functional form of the embeddings (indicator functions) and a common approach to mollified optimization and energy smoothing BID15 BID7 BID14. This approach is demonstrated in one dimension in Figure 2.Specifically, given x = [a, b], we associate the smoothed indicator function DISPLAYFORM4 We then wish to evaluate, for two lattice elements x and y with associated smoothed indicators f and g, DISPLAYFORM5 This integral admits a closed form solution. Proposition 1. Let m Φ (x) = Φ(x)dx be an antiderivative of the standard normal CDF. Then the solution to equation 2 is given by, DISPLAYFORM6 where σ = σ 2 1 + σ 2 2, soft(x) = log(1 + exp(x)) is the softplus function, the antiderivative of the logistic sigmoid, and ρ = σ 1.702.Proof. The first line is proved in Appendix A, the second approximation follows from the approximation of Φ by a logistic sigmoid given in BID3.Note that, in the zero-temperature limit, as ρ goes to zero, we recover the formula DISPLAYFORM7 with equality in the last line because (a, b) and (c, d) are intervals. This last line is exactly our original equation equation 1, which is expected from convolution with a zero-bandwidth kernel (a Dirac delta function, the identity element under convolution). This is true for both the exact formula using Φ(x)dx, and the softplus approximation. Unfortunately, for any ρ > 0, multiplication of Gaussian-smoothed indicators does not give a valid meet operation on a function lattice, for the simple reason that f 2 = f, except in the case of indicator functions, violating the idempotency requirement of Section 3.1.More importantly, for practical considerations, if we are to treat the outputs of p φ as probabilities, the consequence is DISPLAYFORM8 which complicates our applications that train on conditional probabilities. However, by a modification of equation 3, we can obtain a function p such that p(x ∧ x) = p(x), while retaining the smooth optimization properties of the Gaussian model. Recall that for the hinge function m h and two intervals (a, b) and (c, d), we have DISPLAYFORM9 where the left hand side is the zero-temperature limit of the Gaussian model from equation 3. This identity is true of the hinge function m h, but not the softplus function. However, an equation with a similar functional form as equation 6 (on both the left-and right-hand sides) is true not only of the hinge function from the unsmoothed model, but also true of the softplus. For two intervals x = (a, b) an y = (c, d), by the commutativity of min and max with monotonic functions, we have DISPLAYFORM10 In the zero-temperature limit, all terms in equations 3 and 7 are equivalent. However, outside of this, equation 7 is idempotent for x = y = (a, b) = (c, d) (when considered as a measure of overlap, made precise in the next paragraph), while equation 3 is not. 
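A sketch of the closed form of Proposition 1 in its softplus approximation may make this concrete: ρ·soft(z/ρ) smoothly replaces the hinge max(0, z), and the four-term combination reduces to the hard interval overlap as ρ → 0. The formula below is a reconstruction from the surrounding text, so treat the exact argument scaling as an assumption.

```python
import numpy as np

def soft(z, rho):
    # rho * softplus(z / rho): a smooth hinge that tends to max(0, z) as rho -> 0.
    return rho * np.log1p(np.exp(z / rho))

def smoothed_overlap_1d(a, b, c, d, rho=1.0):
    """Softplus approximation of the Gaussian-convolved overlap of intervals
    [a, b] and [c, d] (cf. Proposition 1)."""
    return soft(b - c, rho) + soft(a - d, rho) - soft(a - c, rho) - soft(b - d, rho)
```

For instance, with the disjoint intervals [0, 1] and [2, 3] and ρ = 1, the returned overlap is small but strictly positive, so gradients continue to flow where the hinge version is flat.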
This inspires us to define the probabilities p(x) and p(x, y) using a normalized version of equation 7 in place of equation 3. For the interval (one-dimensional box) case, we define DISPLAYFORM11 which satisfies the idempotency requirement, p(x) = p(x, x).Because softplus upper-bounds the hinge function, it is capable of outputting values that are greater than 1, and therefore must be normalized. In our experiments, we use two different approaches to normalization. For experiments with a relatively small number of entities (all besides Flickr), we allow the boxes to learn unconstrained, and divide each dimension by the measured size of the global minimum and maximum (G DISPLAYFORM12 For data where computing these values repeatedly is infeasible, we project onto the unit hypercube and normalize by m soft. The final probability p(x) is given by the product over dimensions DISPLAYFORM13 Note that, while equivalent in the zero temperature limit to the standard uniform probability measure of the box model, this function, like the Gaussian model, is not a valid probability measure on the entire joint space of events (the lattice). However, neither is factorization of a conditional probability table using a logistic sigmoid link function, which is commonly used for the similar tasks. Our approach retains the inductive bias of the original box model, is equivalent in the limit, and satisfies the necessary condition that p(x, x) = p(x). A comparison of the 3 different functions is given in FIG2, with the softplus overlap showing much better behavior for highly disjoint boxes than the Gaussian model, while also preserving the meet property. Note that in order to achieve high overlap, the Gaussian model must drastically lower its temperature, causing vanishing gradients in the tails. We perform experiments on the WordNet hypernym prediction task in order to evaluate the performance of these improvements in practice. The WordNet hypernym hierarchy contains 837,888-edges after performing the transitive closure on the direct edges in WordNet. We used the same train/dev/test split as in BID20. Positive examples are randomly chosen from the 837k edges, while negative examples are generated by swapping one of the terms to a random word in the dictionary. Experimental details are given in Appendix D.1. The smoothed box model performs nearly as well as the original box lattice in terms of test accuracy 1. While our model requires less hyper-parameter tuning than the original, we suspect that our performance would be increased on a task with a higher degree of sparsity than the 50/50 positive/negative split of the standard WordNet data, which we explore in the next section. In order to confirm our intuition that the smoothed box model performs better in the sparse regime, we perform further experiments using different numbers of positive and negative examples from the WordNet mammal subset, comparing the box lattice, our smoothed approach, and order embeddings (OE) as a baseline. The training data is the transitive reduction of this subset of the mammal WordNet, while the dev/test is the transitive closure of the training data. The training data contains 1,176 positive examples, and the dev and test sets contain 209 positive examples. Negative examples are generated randomly using the ratio stated in the table. As we can see from the table, with balanced data, all models include OE baseline, Box, Smoothed Box models nearly match the full transitive closure. 
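Before continuing with the experiments, a sketch assembling the model definition above into the full d-dimensional smoothed probabilities used to train on conditional probabilities; the global normalization (dividing each dimension by the measured global extent) is simplified here to a fixed per-dimension normalizer, which is an assumption of this sketch.

```python
import numpy as np

def soft(z, rho=1.0):
    return rho * np.log1p(np.exp(z / rho))

def p_joint(x_min, x_max, y_min, y_max, norm, rho=1.0):
    """Smoothed joint probability: per-dimension softplus overlap (the normalized
    version of Eq. 7), multiplied across dimensions."""
    overlap = soft(np.minimum(x_max, y_max) - np.maximum(x_min, y_min), rho)
    return float(np.prod(overlap / norm))

def p_cond(x_min, x_max, y_min, y_max, norm, rho=1.0):
    """Conditional P(x | y) = p(x, y) / p(y). Idempotency holds: p(x, x) = p(x)."""
    return (p_joint(x_min, x_max, y_min, y_max, norm, rho)
            / p_joint(y_min, y_max, y_min, y_max, norm, rho))
```

Note that p_joint(x, x) reduces to the product of soft(x_max − x_min)/norm, which is exactly the marginal p(x), satisfying the requirement that motivated Eq. 7 over Eq. 3.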
As the number of negative examples increases, the performance drops for the original box model, but Smoothed Box still outperforms OE and Box in all setting. This superior performance on imbalanced data is important for e.g. real-world entailment graph learning, where the number of negatives greatly outweigh the positives. Table 5: F1 scores of the box lattice, order embeddings, and our smoothed model, for different levels of label imbalance on the WordNet mammal subset. We conduct experiments on the Flickr entailment dataset. Flickr is a large-scale caption entailment dataset containing of 45 million image caption pairs. In order to perform an apples-to-apples comparison with existing we use the exact same dataset from BID22. In this case, we do constrain the boxes to the unit cube, using the same experimental setup as BID22, except we apply the softplus function before calculating the volume of the boxes. Experimental details are given in Appendix D.3.We report KL divergence and Pearson correlation on the full test data, unseen pairs (caption pairs which are never occur in training data) and unseen captions (captions which are never occur in training data). As shown in TAB2, we see a slight performance gain compared to the original model, with improvements most concentrated on unseen captions. We apply our method to a market-basket task constructed using the MovieLens dataset. Here, the task is to predict users' preference for movie A given that they liked movie B. We first collect all pairs of user-movie ratings higher than 4 points (strong preference) from the MovieLens-20M dataset. From this we further prune to just a subset of movies which have more than 100 user ratings to make sure that counting statistics are significant enough. This leads to 8545 movies in our dataset. We calculate the conditional probability P (A|B) = P (A,B) We compare with several baselines: low-rank matrix factorization, complex bilinear factorization BID19, and two hierarchical embedding methods, POE BID9 and the Box Lattice BID22. Since the training matrix is asymmetric, we used separate embeddings for target and conditioned movies. For the complex bilinear model, we added one additional vector of parameters to capture the "imply" relation. We evaluate on the test set using KL divergence, Pearson correlation, and Spearman correlation with the ground truth probabilities. Experimental details are given in Appendix D.4. DISPLAYFORM0 From the in TAB4, we can see that our smoothed box embedding method outperforms the original box lattice as well as all other baselines' performances, especially in Spearman correlation, the most relevant metric for recommendation, a ranking task. We perform an additional study on the robustness of the smoothed model to initialization conditions in Appendix C. We presented an approach to smoothing the energy and optimization landscape of probabilistic box embeddings and provided a theoretical justification for the smoothing. Due to a decreased number of hyper-parameters this model is easier to train, and, furthermore, met or surpassed current state-ofthe-art on several interesting datasets. We further demonstrated that this model is particularly effective in the case of sparse data and more robust to poor initialization. Tackling the learning problems presented by rich, geometrically-inspired embedding models is an open and challenging area of research, which this work is far from the last word on. 
This task will become even more pressing as the embedding structures become more complex, such as unions of boxes or other non-convex objects. To this end, we will continue to explore both function lattices, and constraint-based approaches to learning. A PROOF OF GAUSSIAN OVERLAP FORMULA We wish to evaluate, for two lattice elements x and y, with associated smoothed indicators f and g, DISPLAYFORM0 Since the Gaussian kernel is normalized to have total integral equal to 1, so as not to change the overall areas of the boxes, the concrete formula is DISPLAYFORM1 Since the antiderivative of φ is the normal CDF, this may be recognized as the difference Φ(x; a, σ 2) − Φ(x; b, σ 2), but this does not allow us to easily evaluate the integral of interest, which is the integral of the product of two such functions. To evaluate equation 8, recall the identity BID8 BID21 DISPLAYFORM2 For convenience, let τ: DISPLAYFORM3. Applying Fubini's theorem and using equation 9, we have DISPLAYFORM4 and therefore, with σ = τ −1, DISPLAYFORM5 The MovieLens dataset, while not truly sparse, has a large proportion of small probabilities which make it especially suitable for optimization by the smoothed model. The rough distribution of probabilities, in buckets of width 0.1, is shown in FIG0. We perform an additional set of experiments to determine the robustness of the smoothed box model to initialization. While the model is normally initialized randomly so that each box is a product of intervals that almost always overlaps with the other boxes, we would like to determine the models robustness to disjoint boxes in a principled way. While we can control initialization, we cannot always control the intermediate of optimization, which may drive boxes to be disjoint, a condition from which the original, hard-edged box model may have difficulty recovering. So, parametrizing the initial distribution of boxes with a minimum coordinate and a positive width, we adjust the width parameter so that approximately 0%, 20%, 50%, and 100% of boxes are disjoint at initialization before learning on the MovieLens dataset as usual. These are presented in table 8. The smoothed model does not seem to suffer at all from disjoint initialization, while the performance of the original box model degrades significantly. From this we can speculate that part of the strength of the smoothed box model is its ability to smoothly optimize in the disjoint regime. We give a brief overview of our methodology and hyperparameter selection methods for each experiment. Detailed hyperparameter settings and code to reproduce experiments can be found at https://github.com/Lorraine333/smoothed_box_embedding. For the WordNet experiments, the model is evaluated every epoch on the development set for a large fixed number of epochs, and the best development model is used to score the test set. Baseline models are trained using the parameters of BID22, with the smoothed model using hyperparameters determined on the development set. We follow the same routine as the WordNet experiments section to select best parameters. For the 12 experiments we conducted in this section, negative examples are generated randomly based on the ratio for each batch of positive examples. We do a parameter sweep for all models then choose the best for each model as our final . The experimental setup uses the same architecture as BID22 and BID9, a single-layer LSTM that reads captions and produces a box embedding parameterized by min and delta. 
Embeddings are produced by feedforward networks on the output of the LSTM. The model is trained for a large fixed number of epochs, and tested on the development data at each epoch. The best development model is used to report test set score. Hyperparameters were determined on the development set. For all MovieLens experiments, the model is evaluated every 50 steps on the development set, and optimization is stopped if the best development set score fails to improve after 200 steps. The best development model is used to score the test set.
Improve hierarchical embedding models using kernel smoothing
418
scitldr
We present a weakly-supervised data augmentation approach to improve Named Entity Recognition (NER) in a challenging domain: extracting biomedical entities (e.g., proteins) from the scientific literature. First, we train a neural NER (NNER) model over a small seed of fully-labeled examples. Second, we use a reference set of entity names (e.g., proteins in UniProt) to identify entity mentions with high precision, but low recall, on an unlabeled corpus. Third, we use the NNER model to assign weak labels to the corpus. Finally, we retrain our NNER model iteratively over the augmented training set, including the seed, the reference-set examples, and the weakly-labeled examples, which in refined labels. We show empirically that this augmented bootstrapping process significantly improves NER performance, and discuss the factors impacting the efficacy of the approach. The increasing wealth of available data fuels numerous machine learning applications. Unfortunately, much of this data is unlabeled, unstructured and noisy. Supervised learning achieves the best task performance, but obtaining training labels is expensive. Crowd-sourcing could provide labels at scale, but may not be feasible for acquiring high-quality labels in technical domains, such as biomedicine that requires expert annotators. In this paper, we explore augmented bootstrapping methods that leverage automatically assigned noisy labels obtained from a large unlabeled corpus. The biomedical literature is a high-impact domain with scarce annotations. Unlocking the knowledge in this data requires machine reading systems that automatically extract important concepts in the text, such as entities and their relations. A critical component of such systems is reliable Named Entity Recognition (NER), which aims to identify parts of the text that refer to a named entity (e.g., a protein). In line with advancements in many domains, most state-of-the-art NER approaches use a deep neural network model that relies on a large labeled training set, which is not usually available in biomedical domains. To address label scarcity, we propose a framework to train any effective neural NER model by leveraging partially labeled data. We do this by creating an augmented training set using a small fully-labeled seed set, and an unlabeled corpus set, which we weakly and automatically label, and then refine its labels via an iterative process. Our main contributions include: An augmented bootstrapping approach combining information from a reference set with iterative refinements of soft labels to improve NER in a challenging domain (biomedicine) where labelling is expensive. A detailed analysis in a controlled setting to study different aspects affecting performance. An analysis of reference-based automated approaches to labeling data, showing that naive labeling decreases performance and how to overcome it. Many effective NER systems assume a fully-supervised setting to train a neural network model BID8 BID9 BID7. Recently, distant supervision has been applied to language-related tasks such as phrase mining BID15, relation extraction BID10, and entity extraction BID5. For NER, automatically generated candidate annotations on an unlabeled dataset using weak labellers. BID14 and BID5 used knowledge bases and linguistic features to tag entities. Our approach combines knowledge extracted from an external reference set with noisy predicted labels and refines them an iteratively. 
Using a reference set proposed heuristic-based functions to label data with low accuracy. BID15 b) described techniques to automatically tag phrases based on knowledge bases such as MeSH and CTD in the biomedical domain. However, in NER systems with weak supervision, wrongly-labeled entities negatively affects the overall performance BID16. We show that our proposed iterative training technique is able to make the learning process more robust to noisy labels. Our method is closely related to bootstrapping approaches. BID17 introduced the bootstrapping technique by training a tree-based classifier for word-sense disambiguation on labeled seed data and then using it to predict on an unlabeled corpus which further is used for training the model iteratively until convergence. Later BID6 bootstrapped statistical classifiers for NER. BID0 and BID4 applied bootstrapping for language processing, and BID13 for image classification. We propose an augmented bootstrapping technique for the state-of-the-art neural NER model applied to biomedical literature. In contrast to the standard bootstrapping techniques that use hard labels, we leverage and refine soft label values, which may be more suitable for noisy data. More importantly, we further augment the bootstrapping process via a simple domain-independent data annotation scheme based on a reference set, which is in contrast to the hand-crafted domain specific rules or the linguistic or morphological characteristics used in standard bootstrapping approaches. Our main goal in this study is to use easily available external information to leverage unlabeled data and reduce the need for an expensive fully-labeled dataset. We assume to have a small fullyannotated seed dataset D s that has every token tagged by entity type and a larger unlabeled corpus D c. We seek to automatically generate an augmented dataset by partially, and possibly noisily, labeling D c. We show that training a (Neural) NER system over the combined seed and augmented datasets achieves the performance of systems trained with an order of magnitude more labels. We propose an iterative solution to improve NER by labeling the corpus dataset using two complementary sources of information. First, we train a NER model using the small seed dataset D s and use it to label the unlabeled corpus D c, we call this set of labels predicted labels. Second, we use search policies over a reference set to find mentions of entity names in the unlabeled corpus D c, we call these set of labels reference-based labels. We combine the seed, the predicted and the reference labels to retrain the NER model. We use the updated model to iteratively refine the predicted labels portion of the corpus set. FIG0 and Algorithm 1 show the overall process of our method. We use soft scores (between 0 and 1) to label the corpus set, instead of the binary labels produced by the CRF layer used in stateof-the-art NER models. Our aim is to let the model iteratively reinforce the weak signals in the soft scores to improve the label quality. Recent high-performing neural NER (NNER) models BID7 BID9 use Bi-directional LSTM (BiLSTM) layers trained on character and word embeddings. The character embeddings are learned over the training data using a separate BiLSTM layer, and are concatenated with GloVe word embeddings BID11. We use an open-source Tensorflow implementation of this model BID3, which achieves state-of-the-art NER performance on the CoNLL 2003 1 dataset. 
To produce soft scores for each tag in our experiments, we replace the CRF layer with a softmax layer. Entities found via the reference set receive a score of 1. We show the effectiveness of our approach on a hard NER problem, extracting protein mentions from the biomedical literature, and systematically evaluate the contribution of the different techniques. We use the BioCreative VI Bio-ID dataset BID1, which contains 13,573 annotated figure captions corresponding to 3,658 figures from 570 full-length articles from 22 journals, for a total of 102,717 annotations. The Bio-ID dataset is split into a training set of 38,344 sentences, a development set of 4,243 sentences, and a test set with 14,079 sentences. The tokens are tagged using the BIO scheme (Beginning, Inside and Outside of entities). The Bio-ID dataset provides us with a controlled environment where we can evaluate our methods, since it provides ground truth on the labels. The rationale of the following experiments is to simulate our desired data augmentation scenario, which is to search for sentences containing relevant bioentities (e.g., proteins) in a large corpus, such as PubMed Central. We evaluate our three main techniques, namely using a reference set of entity names (i.e., protein names from UniProt), predicting labels for unknown tokens using a NNER system trained on a small fraction of the data, and refining the label predictions by retraining the NNER system iteratively. We focus on protein/gene annotations for simplicity (51,977 mentions with 5,284 distinct entities). Our experimental evaluation appears in Table 1, which shows Precision, Recall and F1 over the Bio-ID test set for different conditions. Experiments 1 and 2 (rows 1, 2) show results of the NNER system trained over the full Bio-ID training dataset, which on the test set achieves F1 of 82.99% (BiLSTM) and 83.34% (BiLSTM-CRF). This simulates the performance over a large amount of labeled data and is our gold-standard upper limit. For the remaining rows, we train a NNER system over a small dataset (3% of the Bio-ID training dataset), which we refer to as NNER-3%. We use the NNER-3% model to predict labels for unknown tokens (noisily, since its accuracy is not perfect). Then, we apply different data augmentation techniques over the remaining 97% of the Bio-ID training dataset, which simulates the accessibility of a large unlabeled corpus. Experiment 3 (row 3 in Table 1) shows the results for a simple baseline where we train our NNER system over the 3% seed combined with one true protein label per sentence for the remaining 97% of the Bio-ID training dataset, which removes ∼60% of the protein labels. This experiment simulates an augmentation method with perfect precision, but a recall of only 40%. Experiment 4 adds the CRF to the architecture over the same scenario, which results in a ∼9-point increase in F1 to reach ∼58% (although precision suffers). Even in this somewhat unrealistic scenario that includes many of the available labels, the overall performance is significantly diminished from the system trained on 100% of the data (∼25 percentage points below in F1). Experiments 5 and 6 show the effect of our iterative label refinement method. We first train NNER-3% on the seed data. Then we combine the seed with the perfect-precision (but partial) labels as in experiments 3 and 4, and with the noisy predicted labels for the remaining tokens in the (97% of the) training dataset.
Surprisingly, training over only 3% of the data already achieves a good F1 of 72.91% for the BiLSTM architecture and 76.21% for the BiLSTM+CRF architecture. When we retrain this base system iteratively, the accuracy of the predicted labels increases, which leads to an improvement of ∼3-4 percentage points in F1 (to 77.58% for the BiLSTM and 79.75% for the BiLSTM+CRF). Thus, the iterative label refinement method reduces the distance to the 100%-trained system from 25 to 4 percentage points, which is a substantial improvement. TAB1 shows the evolution of the iterative label refinement procedure. We train NNER-3% (row 0) and use it to predict labels for unknown tokens repeatedly, which yields a jump in performance in the first iteration, since the predicted labels are informative, and then a more gradual improvement as the labels are increasingly refined. Finally, the remaining experiments simulate the more realistic scenario we seek, where we search for sentences in a large corpus to be labeled automatically. In experiment 7, we simply use our reference set to directly search for exact mentions in the corpus. That is, we search in a case-sensitive way for protein/gene names from UniProt in the 97% dataset that represents our large corpus. Matching tokens are labeled as true proteins for training. Since we know the true labels, we can compute the precision (=59.23%) and recall (=18.66%) of this selection technique, which is in fact quite poor. Even using our iterative training technique that produced good results in the previous experiments somewhat decreases the performance (from F1=72.70 for NNER-3% down to 71.33%). The low quality of the augmented data introduces too much noise to improve performance. To lower the noise, we refined our search procedure to improve the precision. For experiments 8 and 9, we filtered the names in our reference set, since after error analysis we discovered that many protein names were ambiguous. For example, the token ANOVA is a name of the Q9UNW protein in UniProt, and a well-known statistical procedure. Thus, we removed all protein names that appear in an English dictionary from our search. More drastically, we also removed protein names of fewer than 3 characters, to avoid capturing acronyms that may not really be protein mentions. Finally, we also relaxed the matching strategy to be case insensitive and to allow for partial matches. For example, when searching for TIGAR, we will accept "Flag-tagged-TIGAR". These selection techniques yield an improved precision (=90.20%) and recall (=39.35%) on identifying correct proteins in Bio-ID. We then reconstruct our augmented training dataset that combines the seed, the reference-set labels, and the labels predicted by NNER-3% with iterative refinement. Our method achieves an F1 of 76.63% for BiLSTM and of 77.70% for BiLSTM+CRF. In summary, through these experiments we show that using a small labeled dataset and our automatic data augmentation procedure, we achieve a performance approaching that of a system trained with over 30 times more labeled data. We proposed a method to improve NER with limited labeled data, which is often the case in technical domains, such as biomedicine. Our method combines bootstrapping and weakly-labeled data augmentation by using a small fully-labeled seed dataset and a large unlabeled corpus, automated labelling using a reference set, and an iterative label refinement process. Our experimental evaluation shows performance equivalent to systems trained with an order of magnitude more labeled data.
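The reference-set filtering and matching heuristics of experiments 8 and 9 can be sketched as follows; this is a simplified illustration in which the function names and the exact matching policy are ours, not the paper's released code:

import re

def filter_reference_names(uniprot_names, english_words):
    # Drop ambiguous names: common English words (e.g. "ANOVA") and
    # names shorter than 3 characters, which likely collide with acronyms.
    return {n for n in uniprot_names
            if len(n) >= 3 and n.lower() not in english_words}

def reference_labels(tokens, names):
    # Case-insensitive partial matching, so that searching for "TIGAR"
    # also accepts "Flag-tagged-TIGAR"; matched tokens get a hard score of 1.0.
    patterns = [re.compile(re.escape(n), re.IGNORECASE) for n in names]
    return [1.0 if any(p.search(t) for p in patterns) else 0.0 for t in tokens]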
In future work, we aim to explore additional augmentation methods over other challenging datasets. We plan to apply the findings of these controlled experiments to a much larger in-the-wild scenario where we use all the available labeled data as the seed and operate over a large corpus (e.g., all of PubMed, PubMed Central) to improve state-of-the-art NER performance.
Augmented bootstrapping approach combining information from a reference set with iterative refinements of soft labels to improve Named Entity Recognition from biomedical literature.
419
scitldr
Quantum machine learning methods have the potential to facilitate learning using extremely large datasets. While the availability of data for training machine learning models is steadily increasing, oftentimes it is much easier to collect feature vectors than to obtain the corresponding labels. One of the approaches for addressing this issue is to use semi-supervised learning, which leverages not only the labeled samples, but also unlabeled feature vectors. Here, we present a quantum machine learning algorithm for training Semi-Supervised Kernel Support Vector Machines. The algorithm uses recent advances in quantum sample-based Hamiltonian simulation to extend the existing Quantum LS-SVM algorithm to handle the semi-supervised term in the loss, while maintaining the same quantum speedup as the Quantum LS-SVM. Data sets used for training machine learning models are becoming increasingly large, leading to continued interest in fast methods for solving large-scale classification problems. One of the approaches being explored is training the predictive model using a quantum algorithm that has access to the training set stored in quantum-accessible memory. In parallel to research on efficient architectures for quantum memory, work on quantum machine learning algorithms and on quantum learning theory is under way (see, for example, (Arunachalam & de) for a review). An early example of this approach is Quantum LS-SVM (a), which achieves exponential speedup compared to the classical LS-SVM algorithm. Quantum LS-SVM uses a quadratic least-squares loss and squared-L2 regularizer, and the optimization problem can be solved using the seminal HHL algorithm for solving quantum linear systems of equations. While progress has been made in quantum algorithms for supervised learning, it has been recently advocated that the focus should shift to unsupervised and semi-supervised settings. In many domains, the most laborious part of assembling a training set is the collection of sample labels. Thus, in many scenarios, in addition to the labeled training set of size m we have access to many more feature vectors with missing labels. One way of utilizing these additional data points to improve the classification model is through semi-supervised learning. In semi-supervised learning, we are given m observations x_1, ..., x_m drawn from the marginal distribution p(x), where the first l (l ≪ m) data points come with labels y_1, ..., y_l drawn from the conditional distribution p(y|x). Semi-supervised learning algorithms exploit the underlying distribution of the data to improve classification accuracy on unseen samples. In the approach considered here, the training samples are connected by a graph that captures their similarity. Here, we introduce a quantum algorithm for semi-supervised training of a kernel support vector machine classification model. We start with the existing Quantum LS-SVM (a), and use techniques from sample-based Hamiltonian simulation to add a semi-supervised term based on Laplacian SVM. As is standard in quantum machine learning, the algorithm accesses training points and the adjacency matrix of the graph connecting samples via a quantum oracle. We show that, with respect to the oracle, the proposed algorithm achieves the same quantum speedup as LS-SVM, that is, adding the semi-supervised term does not lead to increased computational complexity. Consider a problem where we are aiming to find predictors h(x): X → R that are functions from an RKHS defined by a kernel K.
In Semi-Supervised LS-SVMs in an RKHS, we are looking for a function h ∈ H and bias b ∈ R that minimize the regularized least-squares loss augmented with a graph-based manifold term; the resulting optimality conditions form a linear system in (b, α), where y = (y_1, ..., y_m)^T, 1 = (1, ..., 1)^T is the all-ones vector, I is the identity matrix, K is the kernel matrix, L is the graph Laplacian matrix, γ is a hyperparameter and α = (α_1, ..., α_m)^T is the vector of Lagrangian multipliers. Quantum computers are devices which perform computing according to the laws of quantum mechanics, a mathematical framework for describing physical theories in the language of linear algebra. Quantum Systems. Any isolated, closed quantum physical system can be fully described by a unit-norm vector in a complex Hilbert space appropriate for that system; in quantum computing, the space is always finite-dimensional, C^d. In quantum mechanics and quantum computing, Dirac notation for linear algebra is commonly used. In Dirac notation, a vector x ∈ C^d and its conjugate transpose, which represents a functional C^d → R, are denoted by |x⟩ (called ket) and ⟨x| (called bra), respectively. We call {|e_i⟩} the computational basis. A state |ψ⟩ = α|0⟩ + β|1⟩, where |0⟩ = (1, 0)^T, |1⟩ = (0, 1)^T and α, β ∈ C with |α|² + |β|² = 1, is called a quantum bit, or qubit for short. When both α and β are nonzero, we say |ψ⟩ is in a superposition of the computational basis states |0⟩ and |1⟩; the two superposition states (|0⟩ ± |1⟩)/√2 are denoted |+⟩ and |−⟩. A composite quantum state of two distinct quantum systems |x_1⟩ ∈ C^{d_1} and |x_2⟩ ∈ C^{d_2} is described as the tensor product of quantum states |x_1⟩ ⊗ |x_2⟩ ∈ C^{d_1} ⊗ C^{d_2}. Thus, a state of an n-qubit system is a vector in the n-fold tensor product space C^2 ⊗ C^2 ⊗ ... ⊗ C^2, and is written as Σ_{i=0}^{2^n−1} α_i |i⟩, where i is expressed using its binary representation; for example, for n = 4, we have |2⟩ = |0010⟩ = |0⟩ ⊗ |0⟩ ⊗ |1⟩ ⊗ |0⟩. Transforming and Measuring Quantum States. Quantum operations manipulate quantum states in order to obtain some desired final state. Two types of manipulation of a quantum system are allowed by the laws of physics: unitary operators and measurements. Quantum measurement, if done in the computational basis, stochastically transforms the state of the system into one of the computational basis states, based on squared magnitudes of probability amplitudes; that is, measuring the state (|0⟩ + |1⟩)/√2 will result in |0⟩ or |1⟩ with equal chance. Unitary operators are deterministic, invertible, norm-preserving linear transforms. A unitary operator U models a transformation of a quantum state |u⟩ to |v⟩ = U|u⟩. Note that U|u_1⟩ + U|u_2⟩ = U(|u_1⟩ + |u_2⟩): applying a unitary to a superposition of states has the same effect as applying it separately to each element of the superposition. In the quantum circuit model, unitary transformations are referred to as quantum gates; for example, one of the most common gates, the single-qubit Hadamard gate, is a unitary operator represented in the computational basis by the matrix H = (1/√2) [[1, 1], [1, −1]]. Note that H|0⟩ = |+⟩ and H|1⟩ = |−⟩. Quantum Input Model. Quantum computation typically starts from all qubits in the |0⟩ state. To perform computation, access to input data is needed. In quantum computing, input is typically given by a unitary operator that transforms the initial state into the desired input state for the computation; such unitaries are commonly referred to as oracles, and the computational complexity of quantum algorithms is typically measured with access to an oracle as the unit. For problems involving large amounts of input data, such as quantum machine learning algorithms, an oracle that abstracts random access memory is often assumed. Quantum random access memory (qRAM) uses log N qubits to address any quantum superposition of N memory cells, which may contain either quantum or classical information.
For example, qRAM allows accessing classical data entries x_i^j in quantum superposition by a transformation of the form Σ_i |i⟩|0⟩ ↦ Σ_i |i⟩|x_i^j⟩, where |x_i^j⟩ is a binary representation up to a given precision. Several approaches for creating quantum RAM are being considered, but it is still an open challenge, and subtle differences in qRAM architecture may erase any gains in computational complexity of a quantum algorithm. Quantum Linear Systems of Equations. Given an input matrix A ∈ C^{n×n} and a vector b ∈ C^n, the goal of the linear system of equations problem is finding x ∈ C^n such that Ax = b. When A is Hermitian and full rank, the unique solution is x = A^{−1}b. If A is not a full-rank matrix, then A^{−1} is replaced by the Moore-Penrose pseudo-inverse. The HHL algorithm introduced an analogous problem in the quantum setting: assuming an efficient algorithm for preparing b as a quantum state |b⟩ = Σ_{i=1}^n b_i |i⟩ using log n + 1 qubits, the algorithm applies the quantum subroutines of phase estimation, controlled rotation, and inverse of phase estimation to obtain the state |x⟩ ∝ A^{−1}|b⟩. Intuitively, and at the risk of over-simplifying, the HHL algorithm works as follows: if A has spectral decomposition A = Σ_i λ_i v_i v_i^T, then A^{−1} has the same eigenvectors v_i with eigenvalues 1/λ_i. In general, A and A^{−1} are not unitary (unless all of A's eigenvalues have unit magnitude), therefore we are not able to apply A^{−1} directly on |b⟩. However, since U = e^{iAt} is unitary and has the same eigenvectors as A and A^{−1}, one can implement U and powers of U on a quantum computer by Hamiltonian simulation techniques; clearly, for any expected speed-up, one needs to enact e^{iAt} efficiently. The HHL algorithm uses the phase estimation subroutine to estimate an approximation of λ_i up to a small error. The next step computes a conditional rotation, by an angle depending on the approximated value of λ_i, on an auxiliary qubit initialized to |0⟩. The last step involves the inverse of phase estimation and quantum measurement for getting rid of garbage qubits, and outputs our desired state |x⟩ ∝ A^{−1}|b⟩. Density Operators. The density operator formalism is an alternative formulation of quantum mechanics that allows probabilistic mixtures of pure states, more generally referred to as mixed states. A mixed state that describes an ensemble {p_i, |ψ_i⟩} is written as ρ = Σ_{i=1}^k p_i |ψ_i⟩⟨ψ_i|, where Σ_{i=1}^k p_i = 1 forms a probability distribution and ρ is called the density operator, which in a finite-dimensional system, in the computational basis, is a positive semi-definite matrix with Tr(ρ) = 1. A unitary operator U maps a quantum state expressed as a density operator ρ to UρU†, where U† is the conjugate transpose of the operator U. Partial Trace of a Composite Quantum System. Consider a two-part quantum system in a state described by a tensor product of two density operators ρ ⊗ σ. The partial trace, tracing out the second part of the quantum system, is defined as the linear operator that leaves the first part of the system in the state Tr_2(ρ ⊗ σ) = ρ Tr(σ), where Tr(σ) is the trace of the matrix σ. To obtain the kernel matrix K as a density matrix, quantum LS-SVM (b) relies on partial trace, and on a quantum oracle that can convert, in superposition, each data point x_i to a quantum state proportional to Σ_t (x_i)_t |t⟩, where (x_i)_t refers to the t-th feature value in data point x_i, assuming the oracle is given x_i and y_i. The vector of the labels is given in the same fashion, as |y⟩ = (1/‖y‖) Σ_{i=1}^m y_i |i⟩.
For preparing the normalized kernel matrix K̂ = K/Tr(K), where K = X^T X, we first prepare a quantum state combining all data points in superposition; the normalized kernel matrix is then obtained by discarding (tracing out) the training-set register. The approach used above to construct the density matrix corresponding to a linear kernel matrix can be extended to polynomial kernels (b). LMR Technique for Density Operator Exponentiation. In HHL-based quantum machine learning algorithms, including the method proposed here, the matrix A for the Hamiltonian simulation within the HHL algorithm is based on data. For example, A can contain the kernel matrix K captured in the quantum system as a density matrix. Then, one needs to be able to efficiently compute e^{−iK∆t}, where K is scaled by the trace of the kernel matrix. Since K is not sparse, a strategy based on the LMR protocol is adopted for the exponentiation of a non-sparse density matrix: Tr_1{e^{−iS∆t}(K ⊗ σ)e^{iS∆t}} = σ − i∆t[K, σ] + O(∆t²) ≈ e^{−iK∆t} σ e^{iK∆t}, where S = Σ_{i,j} |i⟩⟨j| ⊗ |j⟩⟨i| is the swap operator and the facts Tr_1{S(K ⊗ σ)} = Kσ and Tr_1{(K ⊗ σ)S} = σK are used. The equation summarizes the LMR technique: approximating e^{−iK∆t} σ e^{iK∆t} up to error O(∆t²) is equivalent to simulating a swap operator S, applying it to the state K ⊗ σ and discarding the first system by taking the partial trace operation. Since the swap operator is sparse, its simulation is efficient. Therefore the LMR trick provides an efficient way to approximate exponentiation of a non-sparse density matrix. Quantum LS-SVM. Quantum LS-SVM (b) uses partial trace to construct a density operator corresponding to the kernel matrix K. Once the kernel matrix K becomes available as a density operator, the quantum LS-SVM proceeds by applying the HHL algorithm for solving the system of linear equations associated with LS-SVM, using the LMR technique for performing the density operator exponentiation e^{−iK∆t}, where the density matrix K encodes the kernel matrix. 3 QUANTUM SEMI-SUPERVISED LEAST SQUARE SVM. Semi-Supervised Least Square SVM involves solving a system of linear equations of the same shape as in LS-SVM. In the quantum setting the task is to generate |b, α⟩ = Â^{−1}|0, y⟩, with the normalized Â = A/Tr(A). The linear system differs from the one in LS-SVM in that instead of K, we have K + KLK. While this difference is of little significance for classical solvers, in quantum systems we cannot just multiply and then add the matrices and then apply quantum LS-SVM; we are limited by the unitary nature of quantum transformations. In order to obtain the solution to the quantum Semi-Supervised Least Square SVM, we use the following steps. First, we read in the graph information to obtain the scaled graph Laplacian matrix as a density operator. Next, we use polynomial Hermitian exponentiation for computing the matrix inverse (K + KLK)^{−1}. In the semi-supervised model used here, we assume that we have information on the similarity of the training samples, in the form of a graph G that uses n edges to connect similar training samples, represented as m vertices. We assume that for each sample, G contains its d most similar other samples, that is, the degree of each vertex is d. To have the graph available as a quantum density operator, we observe that the graph Laplacian L is the Gram matrix of the rows of the m × n graph incidence matrix G_I, L = G_I G_I^T.
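Returning to the LMR step above, the identity can be checked numerically on small dense matrices. The following sketch is plain linear algebra rather than a simulation of the actual quantum protocol; it verifies that one swap-and-trace step reproduces e^{−iK∆t} σ e^{iK∆t} up to O(∆t²):

import numpy as np
from scipy.linalg import expm

def random_density(d, rng):
    a = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    rho = a @ a.conj().T
    return rho / np.trace(rho)  # positive semi-definite, unit trace

def trace_out_first(m, d):
    # Tr_1 of a (d*d x d*d) operator acting on H_1 (tensor) H_2.
    return m.reshape(d, d, d, d).trace(axis1=0, axis2=2)

rng = np.random.default_rng(0)
d, dt = 4, 1e-2
K, sigma = random_density(d, rng), random_density(d, rng)

# Swap operator S|i, j> = |j, i>.
S = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        S[j * d + i, i * d + j] = 1.0

U = expm(-1j * dt * S)  # the sparse swap is cheap to simulate
lmr = trace_out_first(U @ np.kron(K, sigma) @ U.conj().T, d)
exact = expm(-1j * dt * K) @ sigma @ expm(1j * dt * K)
print(np.linalg.norm(lmr - exact))  # O(dt^2); on the order of 1e-4 here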
We assume oracle access to the graph adjacency list, allowing us to construct, in superposition, states corresponding to rows of the graph incidence matrix G_I. That is, state |v_i⟩ has nonzero probability amplitude on each edge |t⟩ incident with vertex i, and null probability amplitude on all other edges. In superposition, we prepare a quantum state combining the rows of the incidence matrix for all vertices. The graph Laplacian matrix L, composed of inner products of the rows of G_I, is obtained by discarding the second part of the system. For computing the matrix inverse (K + KLK)^{−1} on a quantum computer that runs our quantum machine learning algorithm and the HHL algorithm as a subroutine, we need to efficiently compute e^{−i(K+KLK)∆t} σ e^{i(K+KLK)∆t}. For this purpose we adapt the generalized LMR technique for simulating Hermitian polynomials to the specific case of e^{−i(K+KLK)∆t} σ e^{i(K+KLK)∆t}. Simulation of e^{−iK∆t} follows from the original LMR algorithm, and therefore we focus here only on simulating e^{−iKLK∆t}. The final dynamics (K + KLK)^{−1} can be obtained by sampling from the two separate output states for e^{−iKLK∆t} and e^{−iK∆t}. Simulating e^{iKLK∆t}. Let D(H) denote the space of density operators associated with state space H. Let K†, K, L ∈ D(H) be the density operators associated with the kernel matrix and the Laplacian, respectively. We will need two separate systems with the kernel matrix K; to distinguish between them we will denote the first as K† and the second as K; since K is real and symmetric, these are indeed equal. The kernel and Laplacian matrices K†, K, L are not sparse, therefore we adapt the generalized LMR technique for simulating Hermitian polynomials to our specific case. For adapting the generalized LMR technique to our problem we need to generate a quantum state ρ̄ = |0⟩⟨0| ⊗ ρ₀ + |1⟩⟨1| ⊗ ρ₁ with Tr(ρ₀ + ρ₁) = 1, such that B = ρ₀ − ρ₁ = KLK (up to normalization), together with a controlled partial swap in the forward (+S) and backward (−S) direction in time. Therefore, with one copy of ρ̄, we obtain the simulation of e^{−iB∆} up to error O(∆²). If we choose the time slice ∆ = δ/t and repeat the above procedure t²/δ times, we are able to simulate e^{−iBt} up to error O(δ) using n = O(t²/δ) copies of ρ̄. Figure 1 shows the quantum circuit for creating ρ̄ = |0⟩⟨0| ⊗ ρ₀ + |1⟩⟨1| ⊗ ρ₁ such that Tr(ρ₀ + ρ₁) = 1 and B = ρ₀ − ρ₁ = KLK. The analysis of the steps performed by the circuit depicted in Fig. 1 is as follows. Let P be the cyclic permutation of three copies of H_A that operates as P|j₁, j₂, j₃⟩ = |j₃, j₁, j₂⟩. The quantum LS-SVM in (b) offers exponential speedup O(log mp) over the classical time complexity for solving SVM as a quadratic problem, which requires time O(log(ε^{−1}) poly(p, m)), where ε is the desired error. The exponential speedup in p occurs as the result of fast quantum computing of the kernel matrix, and relies on the existence of efficient oracle access to data. The speedup in m is due to applying quantum matrix inversion for solving LS-SVM, which is inherently due to a fast algorithm for exponentiation of the resulting non-sparse matrix. Our algorithm introduces two additional steps: preparing the Laplacian density matrix, and Hamiltonian simulation for KLK. The first step involves oracle access to a sparse graph adjacency list representation, which is at least as efficient as the oracle access to non-sparse data points.
The Hamiltonian simulation involves simulating a sparse conditional partial swap operator, which results in an efficient strategy for applying e^{−iKLK∆t} in time Õ(log(m)∆t), where the notation Õ hides more slowly growing factors in the simulation. Considerable effort has been devoted to designing fast classical algorithms for training SVMs. The decomposition-based methods such as SMO are able to efficiently manage problems with a large number of features p, but their computational complexities are super-linear in m. Other training strategies are linear in m but scale quadratically in p in the worst case. The Pegasos algorithm for non-linear kernels improves the complexity to Õ(m/(λε)), where λ and ε are the regularization parameter of the SVM and the error of the solution, respectively. Beyond the classical realm, three quantum algorithms for training linear models have been proposed: the quantum LS-SVM that involves an L2 regularizer (a), a recently proposed Quantum Sparse SVM which is limited to a linear kernel, and a quantum training algorithm that solves a maximin problem resulting from a maximum (not average) loss over the training set.
We extend quantum SVMs to the semi-supervised setting, to deal with the likely problem of many missing class labels in huge datasets.
420
scitldr
Deep neural networks have become the state-of-the-art models in numerous machine learning tasks. However, general guidance for network architecture design is still missing. In our work, we bridge deep neural network design with numerical differential equations. We show that many effective networks, such as ResNet, PolyNet, FractalNet and RevNet, can be interpreted as different numerical discretizations of differential equations. This finding brings us a brand new perspective on the design of effective deep architectures. We can take advantage of the rich knowledge in numerical analysis to guide us in designing new and potentially more effective deep networks. As an example, we propose a linear multi-step architecture (LM-architecture) which is inspired by the linear multi-step method for solving ordinary differential equations. The LM-architecture is an effective structure that can be used on any ResNet-like networks. In particular, we demonstrate that LM-ResNet and LM-ResNeXt (i.e. the networks obtained by applying the LM-architecture to ResNet and ResNeXt respectively) can achieve noticeably higher accuracy than ResNet and ResNeXt on both CIFAR and ImageNet with comparable numbers of trainable parameters. Moreover, on both CIFAR and ImageNet, LM-ResNet/LM-ResNeXt can significantly compress (>50%) the original networks while maintaining a similar performance. This can be explained mathematically using the concept of modified equations from numerical analysis. Last but not least, we also establish a connection between stochastic control and noise injection in the training process, which helps to improve generalization of the networks. Furthermore, by relating stochastic training strategies with stochastic dynamic systems, we can easily apply stochastic training to the networks with the LM-architecture. As an example, we introduced stochastic depth to LM-ResNet and achieve significant improvement over the original LM-ResNet on CIFAR10. Deep learning has achieved great success in many machine learning tasks. The end-to-end deep architectures have the ability to effectively extract features relevant to the given labels and achieve state-of-the-art accuracy in various applications BID3. Network design is one of the central tasks in deep learning. Its main objective is to grant the networks strong generalization power using as few parameters as possible. The first ultra-deep convolutional network is the ResNet BID16, which has skip connections to keep feature maps in different layers at the same scale and to avoid gradient vanishing. Structures other than the skip connections of the ResNet were also introduced to avoid gradient vanishing, such as the dense connections BID20, fractal path BID27 and Dirac initialization BID50. Furthermore, there have been many attempts to improve the accuracy of image classification by modifying the residual blocks of the ResNet. BID49 suggested that we need to double the number of layers of ResNet to achieve a fraction of a percent improvement of accuracy. They proposed a widened architecture that can efficiently improve the accuracy. BID51 pointed out that simply modifying depth or width of ResNet might not be the best way of architecture design. Exploring structural diversity, which is an alternative dimension in network design, may lead to more effective networks.
In BID43, BID51, BID47, and BID19, the authors further improved the accuracy of the networks by carefully designing residual blocks via increasing the width of each block, changing the topology of the network and following certain empirical observations. In the literature, network design is mainly empirical. It remains a mystery whether there is a general principle to guide the design of effective and compact deep networks. Observe that each residual block of ResNet can be written as u_{n+1} = u_n + ∆t f(u_n), which is one step of the forward Euler discretization (Appendix A.1) of the ordinary differential equation (ODE) u_t = f(u) (E, 2017). This suggests that there might be a connection between discrete dynamic systems and deep networks with skip connections. In this work, we will show that many state-of-the-art deep network architectures, such as PolyNet BID51, FractalNet BID27 and RevNet BID12, can be considered as different discretizations of ODEs. From the perspective of this work, the success of these networks is mainly due to their ability to efficiently approximate dynamic systems. On a side note, differential equations are one of the most powerful tools used in low-level computer vision, such as image denoising, deblurring, registration and segmentation BID36 BID2 BID4. This may also bring insights on the success of deep neural networks in low-level computer vision. Furthermore, the connection between architectures of deep neural networks and numerical approximations of ODEs enables us to design new and more effective deep architectures by selecting certain discrete approximations of ODEs. As an example, we design a new network structure called the linear multi-step architecture (LM-architecture), which is inspired by the linear multi-step method in numerical ODEs BID1. This architecture can be applied to any ResNet-like networks. In this paper, we apply the LM-architecture to ResNet and ResNeXt BID47 and achieve noticeable improvements on CIFAR and ImageNet with comparable numbers of trainable parameters. We also explain the performance gain using the concept of modified equations from numerical analysis. It is known in the literature that introducing randomness by injecting noise into the forward process can improve generalization of deep residual networks. This includes stochastic drop-out of residual blocks BID21 and stochastic shakes of the outputs from different branches of each residual block BID11. In this work we show that any ResNet-like network with noise injection can be interpreted as a discretization of a stochastic dynamic system. This gives a relatively unified explanation of the stochastic learning process using stochastic control. Furthermore, by relating stochastic training strategies with stochastic dynamic systems, we can easily apply stochastic training to the networks with the proposed LM-architecture. As an example, we introduce stochastic depth to LM-ResNet and achieve significant improvement over the original LM-ResNet on CIFAR10. The link between ResNet (FIG0) and ODEs was first observed by E, where the authors formulated the ODE u_t = f(u) as the continuum limit of the ResNet u_{n+1} = u_n + ∆t f(u_n). BID31 bridged ResNet with recurrent neural networks (RNNs), which are known as approximations of dynamic systems. BID40 and BID30 also regarded ResNet as dynamic systems whose trajectories are the characteristic lines of a transport equation on the distribution of the data set.
Similar observations were also made by BID5; they designed a reversible architecture to grant stability to the dynamic system. On the other hand, many deep network designs were inspired by optimization algorithms, such as the network LISTA BID14 and the ADMM-Net BID48. Optimization algorithms can be regarded as discretizations of various types of ODEs BID18, among which the simplest example is gradient flow. Another set of important examples of dynamic systems is partial differential equations (PDEs), which have been widely used in low-level computer vision tasks such as image restoration. There were some recent attempts at combining deep learning with PDEs for various computer vision tasks, i.e. to balance handcrafted modeling and data-driven modeling. BID32 and BID33 proposed to use linear combinations of a series of handcrafted PDE terms and used optimal control methods to learn the coefficients. Later, BID10 extended their model to handle classification tasks and proposed a learned PDE model (L-PDE). However, for classification tasks, the dynamics (i.e. the trajectories generated by passing data through the network) should be interpreted as the characteristic lines of a PDE on the distribution of the data set. This means that using spatial differential operators in the network is not essential for classification tasks. Furthermore, the discretizations of differential operators in the L-PDE are not trainable, which significantly reduces the network's expressive power and stability. BID28 proposed a feed-forward network in order to learn the optimal nonlinear anisotropic diffusion for image denoising. Unlike the previous work, their network used trainable convolution kernels instead of fixed discretizations of differential operators, and used radial basis functions to approximate the nonlinear diffusivity function. More recently, BID34 designed a network, called PDE-Net, to learn more general evolution PDEs from sequential data. The learned PDE-Net can accurately predict the dynamical behavior of data and has the potential to reveal the underlying PDE model that drives the observed data. In our work, we focus on a different perspective. First of all, we do not require the ODE u_t = f(u) to be associated with any optimization problem, nor do we assume any differential structures in f(u). The optimal f(u) is learned for a given supervised learning task. Secondly, we draw a relatively comprehensive connection between the architectures of popular deep networks and discretization schemes of ODEs. More importantly, we demonstrate that the connection between deep networks and numerical ODEs enables us to design new and more effective deep networks. As an example, we introduce the LM-architecture to ResNet and ResNeXt, which improves the accuracy of the original networks. We also note that our viewpoint enables us to easily explain why ResNet can achieve good accuracy by dropping out some residual blocks after training, whereas dropping sub-sampling layers often leads to an accuracy drop BID44. This is simply because each residual block is one step of the discretized ODE, and hence, dropping out some residual blocks only amounts to modifying the step size of the discrete dynamic system, while the sub-sampling layer is not a part of the ODE model. Our explanation is similar to the unrolled iterative estimation proposed by BID13, while the difference is that we believe it is the data-driven ODE that does the unrolled iterative estimation.
In this section we show that many existing deep neural networks can be considered as different numerical schemes approximating ODEs of the form u_t = f(u). Based on such observations, we introduce a new structure, called the linear multi-step architecture (LM-architecture), which is inspired by the well-known linear multi-step method in numerical ODEs. The LM-architecture can be applied to any ResNet-like networks. As an example, we apply it to ResNet and ResNeXt and demonstrate the performance gain of such modification on CIFAR and ImageNet data sets. PolyNet (FIG0), proposed by BID51, introduced a PolyInception module in each residual block to enhance the expressive power of the network. The PolyInception model includes polynomial compositions that can be described as (I + F + F²) · x = x + F(x) + F(F(x)). We observe that the PolyInception model can be interpreted as an approximation to one step of the backward Euler (implicit) scheme (Appendix A.1): u_{n+1} = (I − ∆t f)^{−1} u_n. Indeed, we can formally rewrite (I − ∆t f)^{−1} as I + ∆t f + (∆t f)² + · · ·. Therefore, the architecture of PolyNet can be viewed as an approximation to the backward Euler scheme solving the ODE u_t = f(u). Note that the implicit scheme allows a larger step size BID1, which in turn allows fewer residual blocks in the network. This explains why PolyNet is able to reduce depth by increasing the width of each residual block to achieve state-of-the-art classification accuracy. FractalNet BID27 (FIG0) is designed based on self-similarity. It is built by repeatedly applying a simple expansion rule to generate deep networks whose structural layouts are truncated fractals. We observe that the macro-structure of FractalNet can be interpreted as the well-known Runge-Kutta scheme in numerical analysis. Recall that the recursive fractal structure of FractalNet composes two copies of the previous-order block and joins them (by averaging) with a single convolutional branch. For simplicity of presentation, we only demonstrate the FractalNet of order 2 (i.e. c ≤ 2). Then, every block of the FractalNet (of order 2) can be expressed as an average of a one-step branch and a two-step composition of residual branches, which resembles the Runge-Kutta scheme of order 2 solving the ODE u_t = f(u) (see Appendix A.2). RevNet (FIG0), proposed by BID12, is a reversible network which does not require storing activations during forward propagation. The RevNet can be expressed as the discrete dynamic system X_{n+1} = X_n + f(Y_n), Y_{n+1} = Y_n + g(X_{n+1}), which can be interpreted as a simple forward Euler approximation to the dynamic system Ẋ = f(Y), Ẏ = g(X). Note that reversibility, which means we can simulate the dynamic from the end time to the initial time, is also an important notion in dynamic systems. There were also attempts to design reversible schemes in dynamic systems, such as BID35. We have shown that architectures of some successful deep neural networks can be interpreted as different discrete approximations of dynamic systems. In this section, we propose a new structure, called the linear multi-step structure (LM-architecture), based on the well-known linear multi-step scheme in numerical ODEs (which is briefly recalled in Appendix A.3). The LM-architecture can be written as u_{n+1} = (1 − k_n) u_n + k_n u_{n−1} + ∆t f(u_n), where k_n ∈ R is a trainable parameter for each layer n. A schematic of the LM-architecture is presented in Figure 2. Note that the midpoint and leapfrog network structures in BID5 are all special cases of ours. The LM-architecture is a 2-step method approximating the ODE u_t = f(u). Therefore, it can be applied to any ResNet-like networks, including those mentioned in the previous section.
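A sketch of one LM-architecture step as a network module follows (PyTorch is our choice of framework here; f can be any residual branch, and k_n is initialized from U[−0.1, 0] as in the experiments below):

import torch
import torch.nn as nn

class LMBlock(nn.Module):
    # One linear multi-step update: u_{n+1} = (1 - k_n) u_n + k_n u_{n-1} + f(u_n).
    def __init__(self, f: nn.Module):
        super().__init__()
        self.f = f
        self.k = nn.Parameter(torch.empty(1).uniform_(-0.1, 0.0))

    def forward(self, u_n, u_prev):
        u_next = (1 - self.k) * u_n + self.k * u_prev + self.f(u_n)
        return u_next, u_n  # also return the shifted 2-step history

# usage: start the 2-step history with u_{-1} = u_0 and chain blocks
branch = nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                       nn.Conv2d(16, 16, 3, padding=1))
block = LMBlock(branch)
u0 = torch.randn(2, 16, 32, 32)
u1, history = block(u0, u0)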
As an example, we apply the LM-architecture to ResNet and ResNeXt. We call these new networks LM-ResNet and LM-ResNeXt. We trained LM-ResNet and LM-ResNeXt on CIFAR BID25 and ImageNet BID37, and both achieve improvements over the original ResNet and ResNeXt. Implementation Details. For the data sets CIFAR10 and CIFAR100, we train and test our networks on the training and testing sets as originally given by the data set. For ImageNet, our models are trained on the training set with 1.28 million images and evaluated on the validation set with 50k images. On CIFAR, we follow the simple data augmentation in BID28 for training: 4 pixels are padded on each side, and a 32×32 crop is randomly sampled from the padded image or its horizontal flip. For testing, we only evaluate the single view of the original 32×32 image. Note that the data augmentation used by ResNet BID16 BID47 is the same as BID28. On ImageNet, we follow the practice in BID26; BID39. Images are resized with the shorter side randomly sampled in [256, 480] for scale augmentation BID39. The input image is a 224×224 crop randomly sampled from the resized image using the scale and aspect ratio augmentation of BID42. For the experiments of ResNet/LM-ResNet on CIFAR, we adopt the original design of the residual block in BID17, i.e. using a small two-layer neural network as the residual block with bn-relu-conv-bn-relu-conv. The residual block of LM-ResNeXt (as well as LM-ResNet164) is the bottleneck structure used by BID47, which takes the form [1×1, 64; 3×3, 64; 1×1, 256]. We start our networks with a single 3×3 conv layer, followed by 3 residual blocks, global average pooling and a fully-connected classifier. The parameters k_n of the LM-architecture are initialized by random sampling from U[−0.1, 0]. We initialize other parameters following the method introduced by BID15. On CIFAR, we use SGD with a mini-batch size of 128, and 256 on ImageNet. During training, we apply a weight decay of 0.0001 for LM-ResNet and 0.0005 for LM-ResNeXt, and momentum of 0.9 on CIFAR. We apply a weight decay of 0.0001 and momentum of 0.9 for both LM-ResNet and LM-ResNeXt on ImageNet. For LM-ResNet on CIFAR10 (CIFAR100), we start with a learning rate of 0.1, divide it by 10 at 80 and 120 epochs and terminate training at 160 epochs. For LM-ResNeXt on CIFAR, we start with a learning rate of 0.1, divide it by 10 at 150 and 225 epochs, and terminate training at 300 epochs. Figure 2: The LM-architecture is an efficient structure that enables ResNet to achieve the same level of accuracy with only half of the parameters on CIFAR10. Results. Testing errors of our proposed LM-ResNet/LM-ResNeXt and some other deep networks on CIFAR are presented in TAB1. Figure 2 shows the overall improvements of LM-ResNet over ResNet on CIFAR10 with varied numbers of layers. We also observe noticeable improvements of both LM-ResNet and LM-ResNeXt on CIFAR100. BID47 claimed that ResNeXt can achieve lower testing error without pre-activation (pre-act). However, our results show that LM-ResNeXt with pre-act achieves lower testing errors even than the original ResNeXt without pre-act. Training and testing curves of LM-ResNeXt are plotted in Figure 3. In TAB1, we also present testing errors of FractalNet and DenseNet BID20 on CIFAR100. We can see that our proposed LM-ResNeXt29 achieves the best results. Comparisons between LM-ResNet and ResNet on ImageNet are presented in TAB2. The LM-ResNet shows improvement over ResNet with comparable numbers of trainable parameters.
Note that the results of ResNet on ImageNet are obtained from "https://github.com/KaimingHe/deep-residual-networks". It is worth noticing that the testing error of the 56-layer LM-ResNet is comparable to that of the 110-layer ResNet on CIFAR10; the testing error of the 164-layer LM-ResNet is comparable to that of the 1001-layer ResNet on CIFAR100; the testing error of the 50-layer LM-ResNet is comparable to that of the 101-layer ResNet on ImageNet. We have similar results on LM-ResNeXt and ResNeXt as well. These results indicate that the LM-architecture can greatly compress ResNet/ResNeXt without losing much of the performance. We will justify this mathematically at the end of this section using the concept of modified equations from numerical analysis. Explanation of the performance boost via modified equations. Given a numerical scheme approximating a differential equation, its associated modified equation BID45 is another differential equation to which the numerical scheme approximates with higher order of accuracy than the original equation. Modified equations are used to describe the numerical behavior of numerical schemes. For example, consider the simple 1-dimensional transport equation u_t = cu_x. Figure 3: Training and testing curves of ResNeXt29 (16x64d, pre-act) and LM-ResNet29 (16x64d, pre-act) on CIFAR100, which shows that the LM-ResNeXt can achieve higher accuracy than ResNeXt. Both the Lax-Friedrichs scheme and the Lax-Wendroff scheme approximate the transport equation. However, the associated modified equation of Lax-Friedrichs contains an additional diffusion term (proportional to u_xx), while that of Lax-Wendroff contains an additional dispersion term (proportional to u_xxx). This shows that the Lax-Friedrichs scheme behaves diffusively, while the Lax-Wendroff scheme behaves dispersively. Consider the forward Euler scheme, which is associated with ResNet: (u_{n+1} − u_n)/∆t = f(u_n). The modified equation of the forward Euler scheme reads u̇_n + (∆t/2) ü_n = f(u_n). Consider the numerical scheme used in the LM-structure: (u_{n+1} − (1 − k_n) u_n − k_n u_{n−1})/∆t = f(u_n). Then, the modified equation of the numerical scheme associated with the LM-structure is (1 + k_n) u̇_n + (1 − k_n)(∆t/2) ü_n = f(u_n). Comparing the two modified equations, we can see that when k_n ≤ 0, the second-order term ü of the LM-structure is bigger than that of the forward Euler scheme. The term ü represents acceleration, which leads to acceleration of the convergence of u_n when f = −∇g BID41; BID46. When f(u) = L(u) with L being an elliptic operator, the term ü introduces dispersion on top of the dissipation, which speeds up the flow of u_n. In fact, this is our original motivation for introducing the LM-architecture. Note that when the dynamic is truly a gradient flow, i.e. f = −∇g, the difference equation of the LM-structure has a stability condition −1 ≤ k_n ≤ 1. In our experiments, we do observe that most of the coefficients lie in (−1, 1) (FIG1). Moreover, the network is indeed accelerating at the end of the dynamic, for the learned parameters {k_n} are negative and close to −1 (FIG1). Although the original ResNet BID16 did not use dropout, several works BID21 BID11 showed that it is also beneficial to inject noise during training. In this section we show that we can regard such stochastic learning strategies as approximations of stochastic dynamic systems. We hope the stochastic dynamic system perspective can shed light on the discovery of a guiding principle for stochastic learning strategies. To demonstrate the advantage of bridging stochastic dynamic systems with stochastic learning strategies, we introduce stochastic depth during training of LM-ResNet.
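Before moving on, the modified-equation computation above can be re-derived symbolically by Taylor expansion; the following sympy sketch (our own derivation, with dt as the expansion variable) recovers the LM-structure result:

import sympy as sp

t, dt, k = sp.symbols("t dt k_n")
u = sp.Function("u")

series = lambda e: sp.series(e, dt, 0, 3).removeO()
# Expand u_{n+1} - (1 - k_n) u_n - k_n u_{n-1} = dt * f(u_n) around t.
lhs = series(u(t + dt)) - (1 - k) * u(t) - k * series(u(t - dt))
print(sp.simplify(sp.expand(lhs / dt)))
# -> (k_n + 1) u'(t) + dt (1 - k_n) u''(t) / 2, i.e. the modified equation
#    (1 + k_n) u' + (1 - k_n)(dt/2) u'' = f(u); setting k_n = 0 recovers
#    the forward Euler case u' + (dt/2) u'' = f(u).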
Our results indicate that the networks with the proposed LM-architecture can also greatly benefit from stochastic learning strategies. As an example, we show that the two stochastic learning methods introduced in BID21 and BID11 can be considered as weak approximations of stochastic dynamic systems. Shake-Shake Regularization. BID11 introduced a stochastic affine combination of multiple branches in a residual block, which can be expressed as X_{n+1} = X_n + η f₁(X_n) + (1 − η) f₂(X_n), where η ∼ U(0, 1). To find its corresponding stochastic dynamic system, we incorporate the time step size ∆t and consider X_{n+1} = X_n + ∆t (η f₁(X_n) + (1 − η) f₂(X_n)), which reduces to the shake-shake regularization when ∆t = 1. The above equation can be rewritten as X_{n+1} = X_n + (∆t/2)(f₁(X_n) + f₂(X_n)) + ∆t (η − 1/2)(f₁(X_n) − f₂(X_n)). Since the random variable η − 1/2 is uniform on (−1/2, 1/2) with mean 0, following the discussion in Appendix B, the network of the shake-shake regularization is a weak approximation of the stochastic dynamic system DISPLAYFORM4, where dB_t is an N-dimensional Brownian motion, 1_{N×1} is an N-dimensional vector whose elements are all 1s, N is the dimension of X and f_i(X), and ⊙ denotes the pointwise product of vectors. Note that we have alternatives to the original shake-shake regularization if we choose ∆t ≠ 1. Stochastic Depth. BID21 randomly drops out residual blocks during training in order to reduce training time and improve robustness of the learned network. We can write the forward propagation as X_{n+1} = X_n + η_n f(X_n), where P(η_n = 1) = p_n, P(η_n = 0) = 1 − p_n. By incorporating ∆t, we consider X_{n+1} = X_n + ∆t η_n f(X_n), which reduces to the original stochastic drop-out training when ∆t = 1. The variance of the standardized gate (η_n − p_n)/√(p_n(1 − p_n)) is 1. If we further assume that (1 − 2p_n) = O(√∆t), the condition of Appendix B.2 is satisfied for small ∆t. Then, following the discussion in Appendix B, the network with stochastic drop-out can be seen as a weak approximation of the stochastic dynamic system DISPLAYFORM8. Note that the assumption (1 − 2p_n) = O(√∆t) also suggests that we should set p_n closer to 1/2 for deeper blocks of the network, which coincides with the observation made by BID21 (Figure 8). In general, we can interpret stochastic training procedures as approximations of a stochastic control problem with running cost, DISPLAYFORM9, where L(·) is the loss function, T is the terminal time of the stochastic process, and R is a regularization term. In this section, we extend the stochastic depth training strategy to networks with the proposed LM-architecture. In order to apply the theory of Itô processes, we consider the 2nd-order equation Ẍ + g(t)Ẋ = f(X) (which is related to the modified equation of the LM-structure) and rewrite it as the 1st-order ODE system Ẋ = Y, Ẏ = f(X) − g(t)Y. Following a similar argument as in the previous section, we obtain a stochastic process DISPLAYFORM10, which can be weakly approximated by DISPLAYFORM11, where P(η_n = 1) = p_n, P(η_n = 0) = 1 − p_n. Taking ∆t = 1, we obtain the following stochastic training strategy for the LM-architecture: u_{n+1} = (1 − k_n) u_n + k_n u_{n−1} + η_n f(u_n). The above derivation suggests that stochastic learning for networks using the LM-architecture can be implemented simply by randomly dropping out the residual block with probability p. Implementation Details. We test LM-ResNet with the stochastic training strategy on CIFAR10. In our experiments, all hyper-parameters are selected exactly the same as in BID21. The probability of dropping out a residual block at each layer is a linear function of the layer, i.e.
we set the probability of dropping the current residual block as DISPLAYFORM13, where l is the index of the current layer of the network, L is the depth of the network and p_L is the dropping-out probability of the last layer. In our experiments, we select p_L = 0.8 for LM-ResNet56 and p_L = 0.5 for LM-ResNet110. During training with SGD, the initial learning rate is 0.1, and is divided by a factor of 10 after epochs 250 and 375, and training is terminated at 500 epochs. In addition, we use a weight decay of 0.0001 and a momentum of 0.9. Results. Testing errors are presented in TAB3. Training and testing curves of LM-ResNet with stochastic depth are plotted in Figure 5. Note that LM-ResNet110 with the stochastic depth training strategy achieved a 4.80% testing error on CIFAR10, which is even lower than that of the ResNet1202 reported in the original paper. The benefit of stochastic training has been explained from different perspectives, such as Bayesian BID23 and information theory BID38 BID0. The stochastic Brownian motion involved in the aforementioned stochastic dynamic systems introduces diffusion, which leads to information gain and robustness. In this section we briefly recall some concepts from numerical ODEs that are used in this paper. The ODE we consider takes the form u_t = f(u, t). Interested readers should consult BID1 for a comprehensive introduction to the subject. The simplest approximation of u_t = f(u, t) is to discretize the time derivative u_t by (u_{n+1} − u_n)/∆t and approximate the right-hand side by f(u_n, t_n). This leads to the forward (explicit) Euler scheme u_{n+1} = u_n + ∆t f(u_n, t_n). If we approximate the right-hand side of the ODE by f(u_{n+1}, t_{n+1}), we obtain the backward (implicit) Euler scheme u_{n+1} = u_n + ∆t f(u_{n+1}, t_{n+1}). The backward Euler scheme has better stability properties than the forward Euler scheme, though we need to solve a nonlinear equation at each step. The Runge-Kutta method is a family of higher-order one-step methods, which can be formulated as û_i = u_n + ∆t Σ_j a_{ij} f(û_j, t_n + c_j ∆t), u_{n+1} = u_n + ∆t Σ_j b_j f(û_j, t_n + c_j ∆t). Here, û_j is an intermediate approximation to the solution at time t_n + c_j ∆t, and the coefficients {a_{ij}, b_j, c_j} can be adjusted to achieve higher-order accuracy. As an example, the popular 2nd-order Runge-Kutta method takes the form x̂_{n+1} = x_n + ∆t f(x_n, t_n), x_{n+1} = x_n + (∆t/2) f(x_n, t_n) + (∆t/2) f(x̂_{n+1}, t_{n+1}). The linear multi-step method generalizes the classical forward Euler scheme to higher orders. The general form of a k-step linear multi-step method is Σ_{j=0}^k α_j u_{n+1−j} = ∆t Σ_{j=0}^k β_j f(u_{n+1−j}, t_{n+1−j}), where α_j, β_j are scalar parameters with α_0 ≠ 0 and |α_k| + |β_k| ≠ 0. The linear multi-step method is explicit if β_0 = 0, which is what we used to design the linear multi-step structure. In this section we follow the setting of Øksendal and BID9. We first give the definition of Brownian motion. The Brownian motion B_t is a stochastic process that satisfies the following assumptions
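Returning to the stochastic-depth training of the LM-architecture described above, a sketch of how it might look in code follows; the linear schedule below is our reading of the rule, and both the exact formula and the test-time scaling convention may differ from the paper's implementation:

import torch

def drop_probability(l, L, p_L):
    # Hypothetical linear schedule: grows with depth, reaching p_L at the last block.
    return l / L * p_L

def lm_step_stochastic(f, u_n, u_prev, k, p_drop, training=True):
    if training:
        eta = (torch.rand(()) >= p_drop).float()  # eta_n ~ Bernoulli(1 - p_drop)
        return (1 - k) * u_n + k * u_prev + eta * f(u_n)
    # At test time, keep the branch scaled by its expected survival rate.
    return (1 - k) * u_n + k * u_prev + (1 - p_drop) * f(u_n)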
This paper bridges deep network architectures with numerical (stochastic) differential equations. This new perspective enables new designs of more effective deep neural networks.
421
scitldr
Transforming a graphical user interface screenshot created by a designer into computer code is a typical task conducted by a developer in order to build customized software, websites, and mobile applications. In this paper, we show that deep learning methods can be leveraged to train a model end-to-end to automatically generate code from a single input image with over 77% accuracy for three different platforms (i.e. iOS, Android and web-based technologies). The process of implementing client-side software based on a Graphical User Interface (GUI) mockup created by a designer is the responsibility of developers. Implementing GUI code is, however, time-consuming and prevents developers from dedicating the majority of their time to implementing the actual functionality and logic of the software they are building. Moreover, the computer languages used to implement such GUIs are specific to each target runtime system, thus resulting in tedious and repetitive work when the software being built is expected to run on multiple platforms using native technologies. In this paper, we describe a model trained end-to-end with stochastic gradient descent to simultaneously learn to model sequences and spatio-temporal visual features in order to generate variable-length strings of tokens from a single GUI image as input. Our first contribution is pix2code, a novel application of Convolutional and Recurrent Neural Networks to generate computer tokens from a single GUI screenshot as input. That is, no engineered feature extraction pipeline nor expert heuristics were designed to process the input data; our model learns from the pixel values of the input image alone. Our experiments demonstrate the effectiveness of our method for generating computer code for various platforms (i.e. iOS and Android native mobile interfaces, and multi-platform web-based HTML/CSS interfaces) without the need for any change or specific tuning of the model. In fact, pix2code can be used as such to support different target languages simply by being trained on a different dataset. A video demonstrating our system is available online. Our second contribution is the release of our synthesized datasets consisting of both GUI screenshots and associated source code for three different platforms. Our datasets and our pix2code implementation are publicly available to foster future research. The automatic generation of programs using machine learning techniques is a relatively new field of research, and program synthesis in a human-readable format has only been addressed very recently. A recent example is DeepCoder by BID1, a system able to generate computer programs by leveraging statistical predictions to augment traditional search techniques. In another work, the generation of source code is enabled by learning the relationships between input-output examples via differentiable interpreters. Furthermore, BID11 recently demonstrated program synthesis from a mixed natural language and structured program specification as input. It is important to note that most of these methods rely on Domain-Specific Languages (DSLs). Figure 1: Overview of the pix2code model architecture. During training, the GUI image is encoded by a CNN-based vision model; the context (i.e. a sequence of one-hot encoded tokens corresponding to DSL code) is encoded by a language model consisting of a stack of LSTM layers. The two resulting feature vectors are then concatenated and fed into a second stack of LSTM layers acting as a decoder.
Finally, a softmax layer is used to sample one token at a time; the output size of the softmax layer corresponds to the DSL vocabulary size. Given an image and a sequence of tokens, the model (i.e. the part contained in the gray box) is differentiable and can thus be optimized end-to-end through gradient descent to predict the next token in the sequence. During sampling, the input context is updated for each prediction to contain the last predicted token. The resulting sequence of DSL tokens is compiled to the desired target language using traditional compiler design techniques. DSLs are computer languages (e.g. markup languages, programming languages, modeling languages) that are designed for a specialized domain but are typically more restrictive than full-featured computer languages. Using DSLs thus limits the complexity of the programming language that needs to be modeled and reduces the size of the search space. Although the generation of computer programs is an active research field as suggested by these breakthroughs, program generation from visual inputs is still a nearly unexplored research area. The closest related work is a method developed by BID13 to reverse-engineer native Android user interfaces from screenshots. However, their method relies entirely on engineered heuristics requiring expert knowledge of the domain to be implemented successfully. Our paper is, to the best of our knowledge, the first work attempting to address the problem of user interface code generation from visual inputs by leveraging machine learning to learn latent variables instead of engineering complex heuristics. In order to exploit the graphical nature of our input, we can borrow methods from the computer vision literature. In fact, an important number of research works BID20; BID3; BID9; BID21 have addressed the problem of image captioning with impressive results, showing that deep neural networks are able to learn latent variables describing objects in an image and their relationships with corresponding variable-length textual descriptions. All these methods rely on two main components. First, a Convolutional Neural Network (CNN) performing unsupervised feature learning, mapping the raw input image to a learned representation. Second, a Recurrent Neural Network (RNN) performing language modeling on the textual description associated with the input picture. These approaches have the advantage of being differentiable end-to-end, thus allowing the use of gradient descent for optimization. The task of generating computer code written in a given programming language from a GUI screenshot can be compared to the task of generating English textual descriptions from a scene photograph. In both scenarios, we want to produce variable-length strings of tokens from pixel values. We can thus divide our problem into three sub-problems. First, a computer vision problem of understanding the given scene (i.e. in this case, the GUI image) and inferring the objects present, their identities, positions, and poses (i.e. buttons, labels, element containers). Second, a language modeling problem of understanding text (i.e. in this case, computer code) and generating syntactically and semantically correct samples. Finally, the last challenge is to use the solutions to both previous sub-problems by exploiting the latent variables inferred from scene understanding to generate corresponding textual descriptions (i.e. computer code rather than English) of the objects represented by these variables.
CNNs are currently the method of choice to solve a wide range of vision problems thanks to their topology allowing them to learn rich latent representations from the images they are trained on (BID15; BID10). We used a CNN to perform unsupervised feature learning by mapping an input image to a learned fixed-length vector, thus acting as an encoder as shown in Figure 1. The input images are initially re-sized to 256 × 256 pixels (the aspect ratio is not preserved) and the pixel values are normalized before being fed into the CNN. No further pre-processing is performed. To encode each input image to a fixed-size output vector, we exclusively used small 3 × 3 receptive fields which are convolved with stride 1, as used in VGGNet by BID17. These operations are applied twice before down-sampling with max-pooling. The width of the first convolutional layer is 32, followed by a layer of width 64, and finally width 128. Two fully connected layers of size 1024 applying the rectified linear unit activation complete the vision model. We designed a simple lightweight DSL to describe GUIs as illustrated in FIG1. In this work we are only interested in the GUI layout, the different graphical components, and their relationships; thus the actual textual value of the labels is ignored. In addition to reducing the size of the search space, the DSL simplicity also reduces the size of the vocabulary (i.e. the total number of tokens supported by the DSL). As a result, our language model can perform token-level language modeling with a discrete input by using one-hot encoded vectors, eliminating the need for word embedding techniques such as word2vec (BID12) that can result in costly computations. In most programming languages and markup languages, an element is declared with an opening token; if children elements or instructions are contained within a block, a closing token is usually needed for the interpreter or the compiler. In such a scenario where the number of children elements contained in a parent element is variable, it is important to model long-term dependencies to be able to close a block that has been opened. Traditional RNN architectures suffer from vanishing and exploding gradients preventing them from being able to model such relationships between data points spread out in time series (i.e. in this case tokens spread out in a sequence). BID8 proposed the Long Short-Term Memory (LSTM) neural architecture in order to address this very problem. The different LSTM gate outputs can be computed as follows:

i_t = φ(W_ix x_t + W_iy h_{t−1} + b_i)
o_t = φ(W_ox x_t + W_oy h_{t−1} + b_o)
c_t = c_{t−1} + i_t ⊙ σ(W_cx x_t + W_cy h_{t−1} + b_c)
h_t = o_t ⊙ σ(c_t)

with W the matrices of weights, x_t the new input vector at time t, h_{t−1} the previously produced output vector, c_{t−1} the previously produced cell state's output, b the biases, and φ and σ the activation functions sigmoid and hyperbolic tangent, respectively. The cell state c learns to memorize information by using a recursive connection as done in traditional RNN cells. The input gate i is used to control the error flow on the inputs of cell state c to avoid input weight conflicts that occur in traditional RNNs because the same weight has to be used for both storing certain inputs and ignoring others. The output gate o controls the error flow from the outputs of the cell state c to prevent output weight conflicts that happen in standard RNNs because the same weight has to be used for both retrieving information and not retrieving others. The LSTM memory block can thus use i to decide when to write information in c and use o to decide when to read information from c.
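As a minimal illustration of these gate equations, the following NumPy sketch runs one step of this (forget-gate-free) memory block; the weight and bias containers are illustrative assumptions, not part of the original description:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One step of the LSTM memory block as written above (phi = sigmoid for the
# gates, sigma = tanh for the cell update). W and b are assumed to be dicts
# of appropriately shaped weight matrices and bias vectors.
def lstm_step(x_t, h_prev, c_prev, W, b):
    i_t = sigmoid(W["ix"] @ x_t + W["iy"] @ h_prev + b["i"])  # input gate
    o_t = sigmoid(W["ox"] @ x_t + W["oy"] @ h_prev + b["o"])  # output gate
    g_t = np.tanh(W["cx"] @ x_t + W["cy"] @ h_prev + b["c"])  # candidate update
    c_t = c_prev + i_t * g_t                                  # recursive cell state
    h_t = o_t * np.tanh(c_t)                                  # gated block output
    return h_t, c_t
```

The forget-gate variant discussed next would additionally compute f_t = φ(W_fx x_t + W_fy h_{t−1} + b_f) and replace the cell update by c_t = f_t ⊙ c_{t−1} + i_t ⊙ g_t.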
We used an LSTM variant with a forget gate f to reset memory and help the network model continuous sequences. Our model is trained in a supervised learning manner by feeding an image I and a contextual sequence X of T tokens x_t, t ∈ {0 . . . T − 1}, as inputs, and the token x_T as the target label. As shown in Figure 1, a CNN-based vision model encodes the input image I into a vectorial representation p. The input token x_t is encoded by an LSTM-based language model into an intermediary representation q_t, allowing the model to focus more on certain tokens and less on others (BID7). This first language model is implemented as a stack of two LSTM layers with 128 cells each. The vision-encoded vector p and the language-encoded vector q_t are concatenated into a single feature vector r_t which is then fed into a second LSTM-based model decoding the representations learned by both the vision model and the language model. The decoder thus learns to model the relationship between objects present in the input GUI image and the associated tokens present in the DSL code. Our decoder is implemented as a stack of two LSTM layers with 512 cells each. This architecture can be expressed mathematically as follows:

p = CNN(I)
q_t = LSTM(x_t)
r_t = (q_t, p)
y_t = softmax(LSTM'(r_t))
x_{t+1} = y_t

This architecture allows the whole pix2code model to be optimized end-to-end with gradient descent to predict a token at a time after it has seen both the image as well as the preceding tokens in the sequence. The discrete nature of the output (i.e. fixed-sized vocabulary of tokens in the DSL) allows us to reduce the task to a classification problem. That is, the output layer of our model has the same number of cells as the vocabulary size, thus generating a probability distribution over the candidate tokens at each time step and allowing the use of a softmax layer to perform multi-class classification. The length T of the sequences used for training is important to model long-term dependencies, for example to be able to close a block of code that has been opened. After conducting empirical experiments, the DSL input files used for training were segmented with a sliding window of size 48; in other words, we unroll the recurrent neural network for 48 steps. This was found to be a satisfactory trade-off between long-term dependency learning and computational cost. For every token in the input DSL file, the model is therefore fed with both an input image and a contextual sequence of T = 48 tokens. While the context (i.e. sequence of tokens) used for training is updated at each time step (i.e. each token) by sliding the window, the very same input image I is reused for samples associated with the same GUI. The special tokens <START> and <END> are used to respectively prefix and suffix the DSL files, similarly to the method used by BID9. Training is performed by computing the partial derivatives of the loss with respect to the network weights calculated with backpropagation to minimize the multiclass log loss:

L(I, X) = − Σ_{t=1}^{T} x_{t+1} log(y_t)

with x_{t+1} the expected token and y_t the predicted token. The model is optimized end-to-end, hence the loss L is minimized with regard to all the parameters, including all layers in the CNN-based vision model and all layers in both LSTM-based models. Training with the RMSProp algorithm (BID19) gave the best results with a learning rate set to 1e−4 and by clipping the output gradient to the range [−1.0, 1.0] to cope with numerical instability (BID7).
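A compact PyTorch sketch of this architecture, under the layer sizes stated above, might look as follows; the vocabulary size, convolution padding, and the exact wiring of the image feature into the decoder are illustrative assumptions:

```python
import torch
import torch.nn as nn

# Sketch of the pix2code architecture described above: a VGG-style vision
# encoder (3x3 convs with stride 1, widths 32/64/128, two FC-1024 layers),
# a two-layer 128-cell LSTM language model, and a two-layer 512-cell LSTM
# decoder followed by a projection onto the DSL vocabulary.
class VisionEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        def block(c_in, c_out):
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, 3, stride=1, padding=1), nn.ReLU(),
                nn.Conv2d(c_out, c_out, 3, stride=1, padding=1), nn.ReLU(),
                nn.MaxPool2d(2))
        self.features = nn.Sequential(block(3, 32), block(32, 64), block(64, 128))
        self.fc = nn.Sequential(nn.Flatten(),
                                nn.Linear(128 * 32 * 32, 1024), nn.ReLU(),
                                nn.Linear(1024, 1024), nn.ReLU())

    def forward(self, img):                     # img: (B, 3, 256, 256), normalized
        return self.fc(self.features(img))      # p: (B, 1024)

class Pix2CodeSketch(nn.Module):
    def __init__(self, vocab_size=20):          # vocabulary size is an assumption
        super().__init__()
        self.vision = VisionEncoder()
        self.language = nn.LSTM(vocab_size, 128, num_layers=2, batch_first=True)
        self.decoder = nn.LSTM(128 + 1024, 512, num_layers=2, batch_first=True)
        self.out = nn.Linear(512, vocab_size)   # softmax is applied in the loss

    def forward(self, img, tokens):             # tokens: (B, T, vocab) one-hot
        p = self.vision(img)                    # p = CNN(I)
        q, _ = self.language(tokens)            # q_t for every step
        r = torch.cat([q, p.unsqueeze(1).expand(-1, q.size(1), -1)], dim=-1)
        h, _ = self.decoder(r)                  # r_t = (q_t, p) fed to decoder
        return self.out(h[:, -1])               # logits for the next token x_{T}
```

Training would then minimize the multiclass log loss (e.g. nn.CrossEntropyLoss, which applies the softmax internally) with RMSProp at learning rate 1e−4, clipping gradients to [−1.0, 1.0], for instance via torch.nn.utils.clip_grad_value_(model.parameters(), 1.0).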
To prevent overfitting, a dropout regularization (BID18) set to 25% is applied to the vision model after each max-pooling operation and at 30% after each fully-connected layer. In the LSTM-based models, dropout is set to 10% and only applied to the non-recurrent connections (BID23). Our model was trained with mini-batches of 64 image-sequence pairs. To generate DSL code, we feed the GUI image I and a contextual sequence X of T = 48 tokens, where tokens x_t . . . x_{T−1} are initially set empty and the last token of the sequence x_T is set to the special <START> token. The predicted token y_t is then used to update the next sequence of contextual tokens. That is, x_t . . . x_{T−1} are set to x_{t+1} . . . x_T (x_t is thus discarded), with x_T set to y_t. The process is repeated until the token <END> is generated by the model. The generated DSL token sequence can then be compiled with traditional compilation methods to the desired target language.
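As a concrete illustration of this sampling procedure, the greedy variant could be sketched as follows with the Pix2CodeSketch above; the <START>/<END> token indices and one-hot handling are assumptions:

```python
import torch

# Greedy DSL sampling as described above: the context window holds T = 48
# one-hot tokens, initially empty except for <START> in the last slot; each
# predicted token is shifted in from the right until <END> is produced.
def greedy_sample(model, img, vocab_size, start_idx, end_idx, T=48, max_len=200):
    context = torch.zeros(1, T, vocab_size)
    context[0, -1, start_idx] = 1.0                       # x_T = <START>
    generated = []
    for _ in range(max_len):
        with torch.no_grad():
            logits = model(img, context)                  # next-token scores y_t
        token = int(logits.argmax(dim=-1))
        if token == end_idx:
            break
        generated.append(token)
        context = torch.roll(context, shifts=-1, dims=1)  # slide the window
        context[0, -1].zero_()
        context[0, -1, token] = 1.0                       # x_T = y_t
    return generated
```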
Access to sizable datasets is a typical bottleneck when training deep neural networks. To the best of our knowledge, no dataset consisting of both GUI screenshots and source code was available at the time this paper was written. As a consequence, we synthesized our own data, resulting in the three datasets described in TAB0. The column Synthesizable refers to the maximum number of unique GUI configurations that can be synthesized using our stochastic user interface generator. The column Instances refers to the number of synthesized (GUI screenshot, GUI code) file pairs. The column Samples refers to the number of distinct image-sequence pairs. In fact, training and sampling are done one token at a time by feeding the model with an image and a sequence of tokens obtained with a sliding window of fixed size T. The total number of training samples thus depends on the total number of tokens written in the DSL files and the size of the sliding window. Our stochastic user interface generator is designed to synthesize GUIs written in our DSL which is then compiled to the desired target language to be rendered. Using data synthesis also allows us to demonstrate the capability of our model to generate computer code for three different platforms. Our model has around 109 × 10^6 parameters to optimize and all experiments are performed with the same model with no specific tuning; only the training datasets differ. Generation is performed with both greedy search and beam search to find the tokens that maximize the classification probability. To evaluate the quality of the generated output, the classification error is computed for each sampled DSL token and averaged over the whole test dataset. The length difference between the generated and the expected token sequences is also counted as error. The results can be seen in TAB1. Figures 4, 5, and 6 show samples consisting of input GUIs (i.e. ground truth) and output GUIs generated by a trained pix2code model. It is important to remember that the actual textual value of the labels is ignored and that both our data synthesis algorithm and our DSL compiler assign randomly generated text to the labels. Despite occasional problems in selecting the right color or the right style for specific GUI elements and some difficulties modelling GUIs consisting of long lists of graphical components, our model is generally able to learn the GUI layout in a satisfying manner and can preserve the hierarchical structure of the graphical elements. In this paper, we presented pix2code, a novel method to generate computer code given a single GUI image as input. While our work demonstrates the potential of such a system to automate the process of implementing GUIs, we only scratched the surface of what is possible. Our model consists of relatively few parameters and was trained on a relatively small dataset. The quality of the generated code could be drastically improved by training a bigger model on significantly more data for an extended number of epochs. Implementing a now-standard attention mechanism (BID0; BID21) could further improve the quality of the generated code. Using one-hot encoding does not provide any useful information about the relationships between the tokens, since the method simply assigns an arbitrary vectorial representation to each token. Therefore, pre-training the language model to learn vectorial representations would allow the relationships between tokens in the DSL to be inferred (i.e. learning word embeddings such as word2vec by BID12) and as a result alleviate semantic errors in the generated code. Furthermore, one-hot encoding does not scale to a very big vocabulary and thus restricts the number of symbols that the DSL can support. Generative Adversarial Networks. GANs developed by BID6 have been shown to be extremely powerful at generating images and sequences (BID22; BID14; BID16; BID2). Applying such techniques to the problem of generating computer code from an input image is so far an unexplored research area. GANs could potentially be used as a standalone method to generate code or could be used in combination with our pix2code model to fine-tune results. A major drawback of deep neural networks is the need for a lot of training data for the resulting model to generalize well on new unseen examples. One of the significant advantages of the method we described in this paper is that there is no need for human-labelled data. In fact, the network can model the relationships between graphical components and associated tokens by simply being trained on image-sequence pairs. Although we used data synthesis in our paper partly to demonstrate the capability of our method to generate GUI code for various platforms, data synthesis might not be needed at all if one wants to focus only on web-based GUIs. In fact, one could imagine crawling the World Wide Web to collect a dataset of HTML/CSS code associated with screenshots of rendered websites. Considering the large number of web pages already available online and the fact that new websites are created every day, the web could theoretically supply a virtually unlimited amount of training data, potentially allowing deep learning methods to fully automate the implementation of web-based GUIs.

Figure 4: Experiment samples for the iOS GUI dataset (ground-truth and generated GUIs 1 and 2).
Figure 5: Experiment samples from the Android GUI dataset (ground-truth and generated GUIs 3 and 4).
Figure 6: Experiment samples from the web-based GUI dataset (ground-truth and generated GUIs 5 and 6).
CNN and LSTM to generate markup-like code describing graphical user interface images.
422
scitldr
Computer vision tasks such as image classification, image retrieval and few-shot learning are currently dominated by Euclidean and spherical embeddings, so that the final decisions about class membership or the degree of similarity are made using linear hyperplanes, Euclidean distances, or spherical geodesic distances (cosine similarity). In this work, we demonstrate that in many practical scenarios hyperbolic embeddings provide a better alternative.

Figure 1: An example of two-dimensional Poincaré embeddings computed by a hyperbolic neural network trained on MNIST, and evaluated additionally on Omniglot. Ambiguous and unclear images from MNIST, as well as most of the images from Omniglot, are embedded near the center, while samples with clear class labels (or characters from Omniglot similar to one of the digits) lie near the boundary.

High-dimensional embeddings are ubiquitous in modern computer vision. Many, perhaps most, modern computer vision systems learn non-linear mappings (in the form of deep convolutional networks) from the space of images or image fragments into high-dimensional spaces. The operations at the end of deep networks imply a certain type of geometry of the embedding spaces. For example, image classification networks use linear operators (matrix multiplication) to map embeddings in the penultimate layer to class logits. The class boundaries in the embedding space are thus piecewise-linear, and pairs of classes are separated by Euclidean hyperplanes. The embeddings learned by the model in the penultimate layer, therefore, live in the Euclidean space. The same can be said about systems where Euclidean distances are used to perform image retrieval, face recognition, or one-shot learning. Alternatively, some few-shot learning, face recognition, and person re-identification methods learn spherical embeddings, so that a sphere projection operator is applied at the end of a network that computes the embeddings. Cosine similarity (closely associated with sphere geodesic distance) is then used by such architectures to match images. Euclidean spaces with their zero curvature and spherical spaces with their positive curvature have certain profound implications on the nature of embeddings that existing computer vision systems can learn. In this work, we argue that hyperbolic spaces with negative curvature might often be more appropriate for learning embeddings of images. Towards this end, we add the recently proposed hyperbolic network layers to the end of several computer vision networks, and present a number of experiments corresponding to image classification, one-shot and few-shot learning, and person re-identification. We show that in many cases, the use of hyperbolic geometry improves the performance over Euclidean or spherical embeddings. Motivation for hyperbolic image embeddings. The use of hyperbolic spaces in natural language processing is motivated by their natural ability to embed hierarchies (e.g., tree graphs) with low distortion. Hierarchies are ubiquitous in natural language processing. First, there are natural hierarchies corresponding to, e.g., biological taxonomies and linguistic ontologies. Likewise, a more generic short phrase can have many plausible continuations and is therefore semantically related to a multitude of long phrases that are not necessarily closely related to each other (in the semantic sense).
The innate suitability of hyperbolic spaces to embedding hierarchies explains the success of such spaces in natural language processing. Here, we argue that similar hierarchical relations between images are common in computer vision tasks (Figure 2). One can observe the following example cases:

• In image retrieval, an overview photograph is related to many images that correspond to close-ups of different distinct details. Likewise, for classification tasks in-the-wild, an image containing the representatives of multiple classes is related to images that contain representatives of the classes in isolation. Embedding a dataset that contains composite images into continuous space is therefore similar to embedding a hierarchy.
• In some tasks, more generic images may correspond to images that contain less information and are therefore more ambiguous. E.g., in face recognition, a blurry and/or low-resolution face image taken from afar can be related to many high-resolution images of faces that clearly belong to distinct people. Again, natural embeddings for image datasets that have widely varying image quality/ambiguity call for retaining such hierarchical structure.

In order to build deep learning models which operate on the embeddings to hyperbolic spaces, we capitalize on recent developments, which construct the analogues of familiar layers (such as a feed-forward layer, or a multinomial regression layer) in hyperbolic spaces. We show that many standard architectures used for tasks of image classification, and in particular in the few-shot learning setting, can be easily modified to operate on hyperbolic embeddings, which in many cases also leads to their improvement. Formally, n-dimensional hyperbolic space, denoted as H^n, is defined as the homogeneous, simply connected n-dimensional Riemannian manifold of constant negative sectional curvature. The property of constant negative curvature makes it analogous to the ordinary Euclidean sphere (which has constant positive curvature); however, the geometrical properties of the hyperbolic space are very different. It is known that hyperbolic space cannot be isometrically embedded into Euclidean space, but there exist several well-studied models of hyperbolic geometry. In every model a certain subset of Euclidean space is endowed with a hyperbolic metric; however, all these models are isomorphic to each other and we may easily move from one to another based on where the formulas of interest are easier. We follow the majority of NLP works and use the Poincaré ball model. Investigating the alternative models that might provide better numerical stability remains future work (though such work has already started in the NLP community). Here, we provide a very short summary of the model. The Poincaré ball model is defined on the set D^n = {x ∈ R^n : ‖x‖ < 1} endowed with the Riemannian metric g^D_x = λ_x^2 g^E, where λ_x = 2 / (1 − ‖x‖^2) is the conformal factor and g^E = I_n is the Euclidean metric tensor. In this model the geodesic distance between two points is given by the following expression:

d_D(x, y) = arccosh(1 + 2‖x − y‖^2 / ((1 − ‖x‖^2)(1 − ‖y‖^2))).

In order to define the hyperbolic average, we will make use of the Klein model of hyperbolic space. Similarly to the Poincaré model, it is defined on the set K^n = {x ∈ R^n : ‖x‖ < 1}, however, with a different metric, not relevant for further discussion. In Klein coordinates, the hyperbolic average (generalizing the usual Euclidean mean) takes its simplest form, and we present the necessary formulas in Section 4. From the viewpoint of hyperbolic geometry, all points of the Poincaré ball are equivalent.
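A small NumPy sketch of the geodesic distance just given (for the unit ball, i.e. c = 1):

```python
import numpy as np

# Poincare-ball geodesic distance, as in the expression above; valid for
# points with Euclidean norm strictly below 1.
def poincare_distance(x, y, eps=1e-9):
    sq_diff = np.sum((x - y) ** 2)
    denom = (1.0 - np.sum(x ** 2)) * (1.0 - np.sum(y ** 2))
    return float(np.arccosh(1.0 + 2.0 * sq_diff / (denom + eps)))

# Two points that are Euclidean-close but near the boundary end up far apart
# hyperbolically, which is what pushes ambiguous samples towards the center:
a, b = np.array([0.90, 0.0]), np.array([0.95, 0.0])
print(poincare_distance(a, b))  # much larger than the Euclidean gap of 0.05
```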
The models that we consider below are, however, hybrid in the sense that most layers use Euclidean operators, such as standard generalized convolutions, while only the final layers operate within the hyperbolic geometry framework. The hybrid nature of our setups makes the origin a special point, since from the Euclidean viewpoint the local volumes in the Poincaré ball expand exponentially from the origin to the boundary. This leads to the useful tendency of the learned embeddings to place more generic/ambiguous objects closer to the origin, while moving more specific objects towards the boundary. The distance to the origin in our models therefore provides a natural estimate of uncertainty, which can be used in several ways, as we show below. Hyperbolic language embeddings. Hyperbolic embeddings in the natural language processing field have recently been very successful. They are motivated by the innate ability of hyperbolic spaces to embed hierarchies (e.g., tree graphs) with low distortion. The main result in this area states that any tree can be embedded into (two-dimensional) hyperbolic space with arbitrarily low distortion. Another direction of research, more relevant to the present work, is based on imposing hyperbolic structure on the activations of neural networks. The task of few-shot learning, which has recently attracted a lot of attention, is concerned with the overall ability of the model to generalize to unseen data during training. A body of papers devoted to few-shot classification that focuses on metric learning methods includes Siamese Networks, Matching Networks, Prototypical Networks, and Relation Networks. In contrast, other models apply meta-learning to few-shot learning: e.g., MAML, Meta-Learner LSTM, and SNAIL. While these methods employ either Euclidean or spherical geometries, there is no model extension to hyperbolic space. Person re-identification. The task of person re-identification is to match pedestrian images captured by possibly non-overlapping surveillance cameras. Several papers adopt pairwise models that accept pairs of images and output their similarity scores. The resulting similarity scores are used to classify the input pairs as being matching or non-matching. Another popular direction of work includes approaches that aim at learning a mapping of the pedestrian images to the Euclidean descriptor space. Several papers use verification loss functions based on the Euclidean distance or cosine similarity. A number of methods utilize a simple classification approach for training, and the Euclidean distance is used at test time. In our work we strongly rely on the recently developed apparatus of hyperbolic neural networks. Hyperbolic networks are extensions of conventional neural networks in the sense that they generalize typical neural network operations to those in hyperbolic space using the formalism of Möbius gyrovector spaces. This line of work presents hyperbolic versions of feed-forward networks, multinomial logistic regression, and recurrent neural networks. In Appendix A we discuss the hyperbolic functions and layers used in hyperbolic neural networks. As in that work, we use an additional hyperparameter c corresponding to the radius of the Poincaré ball, which is then defined in the following manner: D^n_c = {x ∈ R^n : c‖x‖^2 < 1}, c ≥ 0. The corresponding conformal factor is then modified as λ^c_x = 2 / (1 − c‖x‖^2).
In practice, the choice of c allows one to balance between hyperbolic and Euclidean geometries, which is made precise by noting that with c → 0 all the formulas discussed below take their usual Euclidean form. Hyperbolic averaging. One important operation common in image processing is averaging of feature vectors, used, e.g., in prototypical networks for few-shot learning. In the Euclidean setting this operation takes the form (x_1, . . ., x_N) ↦ (1/N) Σ_i x_i. The extension of this operation to hyperbolic spaces is called the Einstein midpoint and takes its simplest form in Klein coordinates:

HypAve(x_1, . . ., x_N) = Σ_i γ_i x_i / Σ_i γ_i,

where γ_i = 1 / √(1 − c‖x_i‖^2) are the Lorentz factors. Recall from the discussion in Section 2 that the Klein model is supported on the same space as the Poincaré ball; however, the same point has different coordinate representations in these models. Let x_D and x_K denote the coordinates of the same point in the Poincaré and Klein models correspondingly. Then the following transition formulas hold:

x_K = 2x_D / (1 + ‖x_D‖^2),    x_D = x_K / (1 + √(1 − ‖x_K‖^2)).

Thus, given points in the Poincaré ball we can first map them to the Klein model, compute the average using the Einstein midpoint above, and then move it back to the Poincaré model. Practical aspects of implementation. While implementing most of the formulas described above is straightforward, we employ some tricks to make the training more stable.

• To ensure numerical stability we perform clipping by norm after applying the exponential map, which constrains the norm to not exceed (1 − 10^{-3}).
• Some of the parameters in the aforementioned layers are naturally elements of D^n_c. While in principle it is possible to apply Riemannian optimization techniques to them (e.g., the previously proposed Riemannian Adam optimizer), we did not observe any significant improvement. Instead, we parametrized them via ordinary Euclidean parameters which were mapped to their hyperbolic counterparts with the exponential map, and used the standard Adam optimizer.

Gromov's δ-hyperbolicity. A necessary parameter for embedding into the Poincaré disk is its radius. In hyperbolic neural networks, one has a curvature parameter c, which is the inverse squared disk radius: c = 1/r^2. For the Euclidean case, i.e., c = 0, the corresponding radius would be equal to infinity. The disk radius is closely related to the notion of Gromov's δ-hyperbolicity, as we will show later in this section. Intuitively, this δ value shows 'how hyperbolic' a metric space is. For example, for graphs, δ represents how 'far' the graph is from a tree, which is known to be hyperbolic. Hence, we can compute the corresponding δ-hyperbolicity value to find the right Poincaré disk radius for an accurate embedding. Formally, δ-hyperbolicity is defined as follows; we emphasize that this notion is defined for any metric space (X, d). First, we need to define the Gromov product for points x, y, z ∈ X:

(y, z)_x = (1/2)(d(x, y) + d(x, z) − d(y, z)).

Then, δ is the minimal value such that the following four-point condition holds for all points x, y, z, w ∈ X:

(x, z)_w ≥ min((x, y)_w, (y, z)_w) − δ.

In practice, it suffices to find the δ for some fixed point w_0. A more computationally friendly way to compute δ is the following. Having a set of points, we first compute the matrix A of pairwise Gromov products. After that, the δ value is simply the largest coefficient in the matrix (A ⊗ A) − A, where ⊗ denotes the min-max matrix product, (A ⊗ B)_{ij} = max_k min(A_{ik}, B_{kj}). Relation between δ-hyperbolicity and Poincaré disk radius. It is known that the standard Poincaré ball is δ-hyperbolic with δ_P = log(1 + √2) ≈ 0.88. Using this constant we can estimate the radius of the Poincaré disk suitable for an embedding of a specific dataset.
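In code, the matrix recipe above for δ (together with the scale-invariant δ_rel used below) can be sketched as follows; the pairwise distance matrix D is assumed to come from Euclidean distances between extracted features:

```python
import numpy as np

# Estimate Gromov's delta (and the scale-invariant delta_rel defined below)
# from a pairwise distance matrix D, using a fixed base point w0 and the
# min-max matrix product (A (x) A)_ij = max_k min(A_ik, A_kj).
def delta_hyperbolicity(D, w0=0):
    row = D[w0][np.newaxis, :]
    col = D[:, w0][:, np.newaxis]
    A = 0.5 * (row + col - D)              # Gromov products (x, y)_{w0}
    AA = np.max(np.minimum(A[:, :, None], A[None, :, :]), axis=1)
    delta = np.max(AA - A)                 # largest entry of (A (x) A) - A
    return delta, 2.0 * delta / np.max(D)  # (delta, delta_rel)
```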
Suppose that for some dataset X we have found that its natural Gromov's δ is equal to δ_X. Then we can estimate c(X) as follows: since distances in D^n_c scale as 1/√c, so does the hyperbolicity constant, i.e., δ(D^n_c) = δ_P/√c, and equating this with δ_X yields the estimate c(X) = (δ_P/δ_X)^2. Estimating hyperbolicity of a dataset. In order to verify our hypothesis on the hyperbolicity of visual datasets, we compute the scale-invariant metric defined as δ_rel(X) = 2δ(X)/diam(X), where diam(X) denotes the set diameter. By construction, δ_rel(X) ∈ [0, 1] and specifies how close the dataset is to a perfect hyperbolic space. For instance, trees, which are discrete analogues of a hyperbolic space (under the natural shortest path metric), have δ_rel equal to 0. We computed δ_rel for the various datasets we used for experiments. As a natural distance between images we used the standard Euclidean distance between the features extracted with VGG16. Our results are summarized in Table 1. We observe that the degree of hyperbolicity in image datasets is quite high, as the obtained δ_rel are significantly closer to 0 than to 1 (which corresponds to total non-hyperbolicity), which supports our hypothesis. Embeddings are computed by a hyperbolic neural network trained for the MNIST classification task. We observe a significant difference between these distributions: embeddings of the Omniglot images are much closer to the origin. Table 2 provides the KS distances between the distributions. In our further experiments, we concentrate on the few-shot classification and person re-identification tasks. The experiments on the Omniglot dataset serve as a starting point, and then we move towards more complex datasets. Afterwards, we consider two datasets, namely MiniImageNet and Caltech-UCSD Birds-200-2011 (CUB). Here, for each dataset, we train four models: for one-shot five-way and five-shot five-way classification tasks, both in the Euclidean and hyperbolic spaces. Finally, we provide the re-identification results for the two popular datasets Market-1501 and DukeMTMC. Further in this section, we provide a thorough description of each experiment. Our code is available on GitHub 1. In this subsection, we validate our hypothesis, which claims that if one trains a hyperbolic classifier, then the distance of the Poincaré ball embedding of an image from the origin can serve as a good measure of confidence of the model. We start by training a simple hyperbolic convolutional neural network on the MNIST dataset. The output of the last hidden layer was mapped to the Poincaré ball using the exponential map and was followed by the hyperbolic MLR layer. After training the model to ∼99% test accuracy, we evaluate it on the Omniglot dataset (by resizing images to 28 × 28 and normalizing them to have the same color as MNIST). We then evaluate the hyperbolic distance to the origin of embeddings produced by the network on both datasets. The closest Euclidean analogue to this approach would be comparing distributions of p_max, the maximum class probability predicted by the network. For the same range of dimensions we train ordinary Euclidean classifiers on MNIST and compare these distributions for the same sets. Our findings are summarized in Figure 3 and Table 2. We observe that distances to the origin present a more statistically significant indicator of the dataset dissimilarity in 3 cases. We have visualized the learned MNIST and Omniglot embeddings in Figure 1. We observe that more 'unclear' images are located near the center, while the images that are easy to classify are located closer to the boundary.
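The few-shot experiments that follow replace the Euclidean prototype mean with the Einstein midpoint from Section 4; a NumPy sketch of that operation (with the Klein transition formulas, assuming c = 1 for readability) might look like this:

```python
import numpy as np

# Hyperbolic averaging via the Einstein midpoint: map Poincare points to
# Klein coordinates, take the Lorentz-factor-weighted mean, and map back.
def poincare_to_klein(x):
    return 2.0 * x / (1.0 + np.sum(x ** 2, axis=-1, keepdims=True))

def klein_to_poincare(x):
    return x / (1.0 + np.sqrt(1.0 - np.sum(x ** 2, axis=-1, keepdims=True)))

def hyp_ave(points):              # points: (N, n) array inside the unit ball
    k = poincare_to_klein(points)
    gamma = 1.0 / np.sqrt(1.0 - np.sum(k ** 2, axis=-1, keepdims=True))
    midpoint = np.sum(gamma * k, axis=0) / np.sum(gamma)  # Einstein midpoint
    return klein_to_poincare(midpoint)
```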
Table 2: Kolmogorov-Smirnov distances between the distributions of distance to the origin of the MNIST and Omniglot datasets embedded into the Poincaré ball with the hyperbolic classifier trained on MNIST, and between the distributions of p_max (maximum probability predicted for a class) for the Euclidean classifier trained on MNIST and evaluated on the same sets. See further description in Subsection 5.1 and the visualization in Figure 3. We observe that distance to the origin mostly presents a more statistically significant indicator of the dataset dissimilarity. We hypothesize that a certain class of problems, namely few-shot classification tasks, can benefit from hyperbolic embeddings. The starting point for our analysis is the experiments on the Omniglot dataset for few-shot classification. This dataset consists of the images of 1623 characters sampled from 50 different alphabets; each character is supported by 20 examples. We test several few-shot learning algorithms to see how hyperbolic embeddings affect them. In order to validate whether hyperbolic embeddings can improve models performing at the state-of-the-art level, we choose as the baseline architecture the prototype network (ProtoNet) with four convolutional blocks in the backbone. The specifics of the experimental setup can be found in Appendix B. In ProtoNet, one uses a so-called prototype representation of a class, which is defined as the mean of the embedded support set of the class. Generalizing this concept to hyperbolic space, we substitute the Euclidean mean operation by HypAve, defined earlier. Results are presented in Table 3. We can see that in some scenarios, in particular for one-shot learning, hyperbolic embeddings are more beneficial, while in other cases they are slightly worse. The relative simplicity of this dataset may explain why we have not observed a significant benefit from hyperbolic embeddings. We further test our approach on more advanced datasets. The MiniImageNet dataset is a subset of the ImageNet dataset, which consists of 100 classes represented by 600 examples per class. We use the following split: the training dataset consists of 64 classes, the validation dataset is represented by 16 classes, and the remaining 20 classes serve as a test dataset. As a baseline model, we again use the prototype network (ProtoNet). We test the models on tasks for one-shot and five-shot classification; the number of query points in each batch always equals 15. All implementation details can be found in Appendix B. Table 4 illustrates the results obtained on the MiniImageNet dataset. For the MiniImageNet dataset, the results of other models are available for the same classification tasks (i.e., for one-shot and five-shot learning). Therefore, we can compare our obtained results to those that were reported in the original papers. From these experimental results, we may observe a slight gain in model accuracy. The CUB dataset consists of 11,788 images of 200 bird species and was designed for fine-grained classification. We use the following split: 100 classes out of 200 were used for training, 50 for validation and 50 for testing. Also, we apply the same pre-processing step, resizing each image to the size of 64 × 64. The implementation details can be found in Appendix B. Our findings from the experiments on the CUB dataset are summarized in Table 4. Interestingly, for this dataset, the hyperbolic version significantly outperforms its Euclidean counterpart.
The DukeMTMC-reID dataset contains 16,522 training images of 702 identities, 2,228 query images of 702 identities and 17,661 gallery images. Market-1501 contains 12,936 training images of 751 identities, 3,368 queries of 750 identities and 15,913 gallery images, respectively. We report Rank-1 of the Cumulative Matching Characteristic curve and Mean Average Precision for both datasets. We refer the reader to Appendix B for a more detailed description of the experimental setting. The results are reported after 300 training epochs. As we can see in Table 5, the hyperbolic version generally performs better than the baseline, while the gap between the baseline and hyperbolic versions' results decreases for larger dimensionalities. We have investigated the use of hyperbolic spaces for image embeddings. The models that we have considered use Euclidean operations in most layers, and use the exponential map to move from the Euclidean to hyperbolic spaces at the end of the network (akin to the normalization layers that are used to map from the Euclidean space to Euclidean spheres). The approach that we investigate here is thus compatible with existing backbone networks trained in Euclidean geometry. At the same time, we have shown that across a number of tasks, in particular in few-shot image classification, learning hyperbolic embeddings can result in a substantial boost in accuracy. We speculate that the negative curvature of the hyperbolic spaces allows for embeddings that better conform to the intrinsic geometry of at least some image manifolds with their hierarchical structure. Future work may include several potential modifications of the approach. We have observed that the use of hyperbolic embeddings improves performance for some problems and datasets, while not helping others. A better understanding of when and why the use of hyperbolic geometry is justified is therefore needed. Also, we note that while all hyperbolic geometry models are equivalent in the continuous setting, the fixed-precision arithmetic used in real computers breaks this equivalence. In practice, we observed that care should be taken about numeric precision effects (as noted above, we clip the embeddings to minimize numerical errors during learning). Using other models of hyperbolic geometry may result in more favourable floating point performance. The resulting formula for hyperbolic MLR for K classes is written below; here p_k ∈ D^n_c and a_k ∈ T_{p_k}D^n_c \ {0} are learnable parameters and ⊕_c denotes the Möbius addition on D^n_c:

p(y = k | x) ∝ exp((λ^c_{p_k}‖a_k‖ / √c) arcsinh(2√c ⟨−p_k ⊕_c x, a_k⟩ / ((1 − c‖−p_k ⊕_c x‖^2)‖a_k‖))).

For a more thorough discussion of hyperbolic neural networks, we refer the reader to the original paper. Omniglot. As a baseline model, we consider the prototype network (ProtoNet). Each convolutional block consists of a 3 × 3 convolutional layer followed by batch normalization, a ReLU nonlinearity and a 2 × 2 max-pooling layer. The number of filters in the last convolutional layer corresponds to the value of the embedding dimension, for which we choose 64. The hyperbolic model differs from the baseline in the following aspects. First, the output of the last convolutional block is embedded into the Poincaré ball of dimension 64 using the exponential map. The initial learning rate equals 10^{-3} and is multiplied by 0.5 every 20 epochs out of 60 total epochs. MiniImageNet. For this task we again considered ProtoNet as a baseline model. Similarly, the number of filters in the last convolutional layer corresponds to the varying value of the embedding dimension. In our experiments we set this value to 1024.
We test the models on tasks for one-shot and five-shot classification; the number of query points in each batch always equals 15. We consider the following learning rate decay scheme: the initial learning rate equals 10^{-3} and is further multiplied by 0.2 every 10 epochs (out of 200 total epochs). The hyperbolic model differs from the baseline in the following aspects. First, the output of the last convolutional block is embedded into the Poincaré ball of dimension 1024 using the exponential map. In ProtoNet, one uses a so-called prototype representation of a class, which is defined as the mean of the embedded support set of the class. Generalizing this concept to hyperbolic space, we substitute the Euclidean mean operation by HypAve, defined earlier. The initial learning rate equals 10^{-3} and is further multiplied by 0.2 every 10 epochs (out of 200 total epochs). Caltech-UCSD Birds. Likewise, we use the ProtoNet mentioned above with the following modifications. Here, we fix the embedding dimension to 512 and use a slightly different setup for the learning rate scheduler: the initial learning rate of 10^{-3} is multiplied by 0.7 every 20 epochs out of 100 total epochs. The remaining architecture and parameters, both in the baseline and hyperbolic models, are identical to those in the experiments on the MiniImageNet dataset. Person re-identification. We use the ResNet-50 architecture with one fully connected embedding layer following the global average pooling. Three embedding dimensionalities are used in our experiments: 32, 64 and 128. For the baseline experiments, we add an additional classification linear layer, followed by the cross-entropy loss. For the hyperbolic version of the experiments, we map the descriptors to the Poincaré ball and apply multiclass logistic regression as described in Section 4. We found that in both cases the results are very sensitive to the learning rate schedules. We tried four schedules for learning 32-dimensional descriptors for both the baseline and hyperbolic versions. The two best-performing schedules were applied to the 64- and 128-dimensional descriptors. In these experiments, we also found that smaller c values give better results. We finally set c to 10^{-5}. Therefore, based on the discussion in Section 4, our hyperbolic setting is quite close to Euclidean. The results are compiled in Table 5. We set the starting learning rates to 3 · 10^{-4} and 6 · 10^{-4} for sch#1 and sch#2, respectively, and multiply them by 0.1 after each of epochs 200 and 270.
We show that hyperbolic embeddings are useful for high-level computer vision tasks, especially for few-shot classification.
423
scitldr
High-dimensional time series are common in many domains. Since human cognition is not optimized to work well in high-dimensional spaces, these areas could benefit from interpretable low-dimensional representations. However, most representation learning algorithms for time series data are difficult to interpret. This is due to non-intuitive mappings from data features to salient properties of the representation and non-smoothness over time. To address this problem, we propose a new representation learning framework building on ideas from interpretable discrete dimensionality reduction and deep generative modeling. This framework allows us to learn discrete representations of time series, which give rise to smooth and interpretable embeddings with superior clustering performance. We introduce a new way to overcome the non-differentiability in discrete representation learning and present a gradient-based version of the traditional self-organizing map algorithm that is more performant than the original. Furthermore, to allow for a probabilistic interpretation of our method, we integrate a Markov model in the representation space. This model uncovers the temporal transition structure, improves clustering performance even further and provides additional explanatory insights as well as a natural representation of uncertainty. We evaluate our model in terms of clustering performance and interpretability on static (Fashion-)MNIST data, a time series of linearly interpolated (Fashion-)MNIST images, a chaotic Lorenz attractor system with two macro states, as well as on a challenging real world medical time series application based on the eICU data set. Our learned representations compare favorably with competitor methods and facilitate downstream tasks on the real world data. Interpretable representation learning on time series is a seminal problem for uncovering the latent structure in complex systems, such as chaotic dynamical systems or medical time series. In areas where humans have to make decisions based on large amounts of data, interpretability is fundamental to ease the human task. Especially when decisions have to be made in a timely manner and rely on observing some chaotic external process over time, such as in finance or medicine, the need for intuitive interpretations is even stronger. However, many unsupervised methods, such as clustering, make misleading i.i.d. assumptions about the data, neglecting their rich temporal structure and smooth behaviour over time. This poses the need for a method of clustering, where the clusters assume a topological structure in a lower-dimensional space, such that the representations of the time series retain their smoothness in that space. In this work, we present a method with these properties. We choose to employ deep neural networks, because they have a very successful tradition in representation learning BID5. In recent years, they have increasingly been combined with generative modeling through the advent of generative adversarial networks (GANs) BID13 and variational autoencoders (VAEs) BID18. However, the representations learned by these models are often considered cryptic and do not offer the necessary interpretability. A lot of work has been done to improve them in this regard, in GANs as well as VAEs (BID16; BID9). Alas, these works have focused entirely on continuous representations, while discrete ones are still underexplored.
In order to define temporal smoothness in a discrete representation space, the space has to be equipped with a topological neighborhood relationship. One type of representation space with such a structure is induced by the self-organizing map (SOM) BID21. The SOM allows mapping states from an uninterpretable continuous space to a lower-dimensional space with a predefined topologically interpretable structure, such as an easily visualizable two-dimensional grid. However, while yielding promising results in visualizing static state spaces, such as static patient states BID27, the classical SOM formulation does not offer a notion of time. The time component can be incorporated using a probabilistic transition model, e.g. a Markov model, such that the representations of a single time point are enriched with information from the adjacent time points in the series. It is therefore potentially fruitful to apply the approaches of probabilistic modeling alongside representation learning and discrete dimensionality reduction in an end-to-end model. In this work, we propose a novel deep architecture that learns topologically interpretable discrete representations in a probabilistic fashion. Moreover, we introduce a new method to overcome the non-differentiability in discrete representation learning architectures and develop a gradient-based version of the classical self-organizing map algorithm with improved performance. We present extensive empirical evidence for the model's performance on synthetic and real world time series from benchmark data sets, a synthetic dynamical system with chaotic behavior and real world medical data. In summary, we:

• Devise a novel framework for interpretable discrete representation learning on time series.
• Show that the latent probabilistic model in the representation learning architecture improves clustering and interpretability of the representations on time series.
• Show superior clustering performance of the model on benchmark data and a real world medical data set, on which it also facilitates downstream tasks.

Our proposed model combines ideas from self-organizing maps BID21, variational autoencoders BID18 and probabilistic models. In the following, we will lay out the different components of the model and their interactions. A schematic overview of our proposed model is depicted in FIG0. An input x ∈ R^d is mapped to a latent encoding z_e ∈ R^m (usually m < d) by computing z_e = f_θ(x), where f_θ(·) is parameterized by the encoder neural network. The encoding is then assigned to an embedding z_q ∈ R^m in the dictionary of embeddings E = {e_1, . . ., e_k | e_i ∈ R^m} by sampling z_q ∼ p(z_q | z_e). The form of this distribution is flexible and can be a design choice. In order for the model to behave similarly to the original SOM algorithm (see below), in our experiments we choose the distribution to be categorical with probability mass 1 on the closest embedding to z_e, i.e. p(z_q | z_e) = 1[z_q = argmin_{e∈E} ‖z_e − e‖], where 1[·] is the indicator function. A reconstruction x̂ of the input can then be computed as x̂ = g_φ(z), where g_φ(·) is parameterized by the decoder neural network. Since the encodings and embeddings live in the same space, one can compute two different reconstructions, namely x̂_e = g_φ(z_e) and x̂_q = g_φ(z_q). To achieve a topologically interpretable neighborhood structure, the embeddings are connected to form a self-organizing map. A self-organizing map consists of k nodes V = {v_1, . . ., v_k},
where every node corresponds to an embedding in the data space e_v ∈ R^d and a representation in a lower-dimensional discrete space m_v ∈ M, where usually M ⊂ N^2. During training on a data set D = {x_1, . . ., x_n}, a winner node ṽ is chosen for every point x_i according to ṽ = argmin_{v∈V} ‖e_v − x_i‖. The embedding vector for every node u ∈ V is then updated according to e_u ← e_u + N(m_u, m_ṽ) η (x_i − e_u), where η is the learning rate and N(m_u, m_ṽ) is a neighborhood function between the nodes defined on the representation space M. There can be different design choices for N(m_u, m_ṽ). A more thorough review of the self-organizing map algorithm is deferred to the appendix (Sec. A).

FIG0 (caption): [...] [red]. In order to achieve a discrete representation, every latent data point (z_e) is mapped to its closest node in the SOM (z_q). A Markov transition model [blue] is learned to predict the next discrete representation (z_q^{t+1}) given the current one (z_q^t). The discrete representations can then be decoded by another neural network back into the original data space.

We choose to use a two-dimensional SOM because it facilitates visualization similar to BID27. Since we want the architecture to be trainable end-to-end, we cannot use the standard SOM training algorithm described above. Instead, we devise a loss function term whose gradient corresponds to a weighted version of the original SOM update rule (see below). We implement it in such a way that any time an embedding e_{i,j} at position (i, j) in the map gets updated, it also updates all the embeddings in its immediate neighborhood N(e_{i,j}). The neighborhood is defined as N(e_{i,j}) = {e_{i−1,j}, e_{i+1,j}, e_{i,j−1}, e_{i,j+1}} for a two-dimensional map. The loss function for a single input x looks like

L_SOM-VAE(x, x̂_e, x̂_q) = L_reconstruction(x, x̂_e, x̂_q) + α L_commitment(x) + β L_SOM(x),

where x, z_e, z_q, x̂_e and x̂_q are defined as above and α and β are weighting hyperparameters. Every term in this function is specifically designed to optimize a different model component. The first term is the reconstruction loss L_reconstruction(x, x̂_q, x̂_e) = ‖x − x̂_q‖^2 + ‖x − x̂_e‖^2. The first subterm of this is the discrete reconstruction loss, which encourages the assigned SOM node z_q(x) to be an informative representation of the input. The second subterm encourages the encoding z_e(x) to also be an informative representation. This ensures that all parts of the model have a fully differentiable credit assignment path to the loss function, which facilitates training. Note that the reconstruction loss corresponds to the evidence lower bound (ELBO) of the VAE part of our model BID18. Since we assume a uniform prior over z_q, the KL-term in the ELBO is constant w.r.t. the parameters and can be ignored during optimization. The term L_commitment encourages the encodings and assigned SOM nodes to be close to each other and is defined as L_commitment(x) = ‖z_e(x) − z_q(x)‖^2. Closeness of encodings and embeddings should be expected to already follow from the L_reconstruction term in a fully differentiable architecture. However, due to the non-differentiability of the embedding assignment in our model, the L_commitment term has to be explicitly added to the objective in order for the encoder to get gradient information about z_q. The term L_SOM enforces the topological neighborhood structure and is defined as L_SOM(x) = Σ_{ẽ ∈ N(z_q(x))} ‖ẽ − sg[z_e(x)]‖^2, where N(·) is the set of neighbors in the discrete space as defined above and sg[·] is the gradient stopping operator that does not change the outputs during the forward pass, but sets the gradients to 0 during the backward pass.
It encourages the neighbors of the assigned SOM node z_q to also be close to z_e, thus enabling the embeddings to exhibit a self-organizing map property, while stopping the gradients on z_e such that the encoding is not pulled in the direction of the neighbors. This term enforces a neighborhood relation between the discrete codes and encourages all SOM nodes to ultimately receive gradient information from the data. The gradient stopping in this term is motivated by the observation that the data points themselves do not get moved in the direction of their assigned SOM node's neighbors in the original SOM algorithm either (see above). We want to optimize the embeddings based on their neighbors, but not the respective encodings, since any single encoding should be as close as possible to its assigned embedding and not receive gradient information from any other embeddings that it is not assigned to. Note that the gradient update of a specific SOM node in this formulation depends on its distance to the encoding, while the step size in the original SOM algorithm is constant. It will be seen that this offers some benefits in terms of optimization and convergence (see Sec. 4.1). The main challenge in optimizing our architecture is the non-differentiability of the discrete cluster assignment step. Due to this, the gradients from the reconstruction loss cannot flow back into the encoder. A model with a similar problem is the recently proposed vector-quantized VAE (VQ-VAE) BID29. It can be seen as being similar to a special case of our SOM-VAE model, where one sets β = 0, i.e. disables the SOM structure. In order to mitigate the non-differentiability, the authors of the VQ-VAE propose to copy the gradients from z_q to z_e. They acknowledge that this is an ad hoc approximation, but observed that it works well in their experiments. Due to our smaller number of embeddings compared to the VQ-VAE setup, the average distance between an encoding and its closest embedding is much larger in our case. The gradient copying (see above) thus ceases to be a feasible approximation, because the true gradients at points in the latent space which are farther apart will likely be very different. In order to still overcome the non-differentiability issue, we propose to add the second reconstruction subterm to L_reconstruction, where the reconstruction x̂_e is decoded directly from the encoding z_e. This adds a fully differentiable credit assignment path from the loss to the encoder and encourages z_e to also be an informative representation of the input, which is a desirable model feature. Most importantly, it works well in practice (see Sec. 4.1). Note that since z_e is continuous and therefore much less constrained than z_q, this term is optimized easily and becomes small early in training. After that, mostly the z_q-term contributes to L_reconstruction. One could therefore view the z_e-term as an initial encouragement to place the data encodings at sensible positions in the latent space, after which the actual clustering task dominates the training objective.
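A PyTorch sketch of the full loss just discussed, assuming the encoder, decoder, nearest-node assignment and grid neighborhood are computed by the surrounding model code; sg[·] is realized with .detach():

```python
import torch

# SOM-VAE loss as reconstructed above. `neighbors` is the list of embedding
# vectors of the winner node's grid neighbors N(z_q(x)).
def som_vae_loss(x, x_hat_e, x_hat_q, z_e, z_q, neighbors, alpha, beta):
    # L_reconstruction: decode from both the encoding and the assigned node
    l_rec = ((x - x_hat_q) ** 2).sum() + ((x - x_hat_e) ** 2).sum()
    # L_commitment: pull the encoding and its assigned embedding together
    l_commit = ((z_e - z_q) ** 2).sum()
    # L_SOM: pull the winner's grid neighbors towards the (gradient-stopped)
    # encoding, so that only the embeddings receive this gradient signal
    l_som = sum(((e - z_e.detach()) ** 2).sum() for e in neighbors)
    return l_rec + alpha * l_commit + beta * l_som
```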
Our ultimate goal is to predict the development of time series in an interpretable way. This means that not only the state representations should be interpretable, but so should be the prediction as well. To this end, we use a temporal probabilistic model. Learning a probabilistic model in a high-dimensional continuous space can be challenging. Thus, we exploit the low-dimensional discrete space induced by our SOM to learn a temporal model. For that, we define a system state as the assigned node in the SOM and then learn a Markov model for the transitions between those states. The model is learned jointly with the SOM-VAE, where the loss function becomes

L(x^{t−1}, x^t, x̂^t_e, x̂^t_q) = L_SOM-VAE(x^t, x̂^t_e, x̂^t_q) + γ L_transitions(x^{t−1}, x^t) + τ L_smoothness(x^{t−1}, x^t),

with weighting hyperparameters γ and τ. The term L_transitions encourages the probabilities of actually observed transitions to be high. It is defined as L_transitions(x^{t−1}, x^t) = −log P_M(z_q(x^{t−1}) → z_q(x^t)), with P_M(z_q(x^{t−1}) → z_q(x^t)) being the probability of a transition from state z_q(x^{t−1}) to state z_q(x^t) under the Markov model M. The term L_smoothness encourages the probabilities for transitions to nodes that are far away from the current data point to be low, or respectively the nodes with high transition probabilities to be proximal. It achieves this by taking large values only for transitions to far away nodes that have a high probability under the model. It is defined as L_smoothness(x^{t−1}, x^t) = E_{P_M(z_q(x^{t−1}) → ẽ)}[‖ẽ − z_e(x^t)‖^2]. The probabilistic model can inform the evolution of the SOM through this term, which encodes our prior belief that transitions in natural data happen smoothly and that future time points will therefore mostly be found in the neighborhood of previous ones. In a setting where the data measurements are noisy, this improves the clustering by acting as a temporal smoother.
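To make these temporal terms concrete, a PyTorch sketch might look as follows; the logit-matrix parameterization of the Markov model is an assumption for illustration:

```python
import torch
import torch.nn.functional as F

# Temporal terms as reconstructed above. `transition_logits` is an assumed
# (k x k) matrix of unnormalized transition scores between SOM nodes,
# i_prev/i_cur are the node indices of consecutive time steps, and `nodes`
# is the (k, m) dictionary of SOM embeddings.
def temporal_losses(transition_logits, i_prev, i_cur, nodes, z_e_cur):
    log_p = F.log_softmax(transition_logits, dim=1)  # rows: P_M(. | state)
    l_transitions = -log_p[i_prev, i_cur]            # observed step made likely
    probs = log_p[i_prev].exp()
    dists = ((nodes - z_e_cur) ** 2).sum(dim=1)
    l_smoothness = (probs * dists).sum()             # E_PM[||e - z_e^t||^2]
    return l_transitions, l_smoothness
```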
We performed experiments on MNIST handwritten digits BID22, Fashion-MNIST images of clothing BID31, synthetic time series of linear interpolations of those images, time series from a chaotic dynamical system and real-world medical data from the eICU Collaborative Research Database BID12. If not otherwise noted, we use the same architecture for all experiments, sometimes including the latent probabilistic model (SOM-VAE-prob) and sometimes excluding it (SOM-VAE). For model implementation details, we refer to the appendix (Sec. B). We found that our method achieves a superior clustering performance compared to other methods. We also show that we can learn a temporal probabilistic model concurrently with the clustering, which is on par with the maximum likelihood solution, while improving the clustering performance. Moreover, we can learn interpretable state representations of a chaotic dynamical system and discover patterns in real medical data. In order to test the clustering component of the SOM-VAE, we performed experiments on MNIST and Fashion-MNIST. We compare our model (including different adjustments to the loss function) against k-means (sklearn package), the VQ-VAE BID29, a standard implementation of a SOM (minisom package BID30) and our version of a GB-SOM (gradient-based SOM), which is a SOM-VAE where the encoder and decoder are set to be identity functions. The k-means algorithm was initialized using k-means++ BID2. To ensure comparability of the performance measures, we used the same number of clusters (i.e. the same k) for all the methods. The results of the experiment in terms of purity and normalized mutual information (NMI) are shown in Table 1. The SOM-VAE outperforms the other methods w.r.t. the clustering performance measures. It should be noted here that while k-means is a strong baseline, it is not density matching, i.e. the density of cluster centers is not proportional to the density of data points. Hence, the representation of data in a space induced by the k-means clusters can be misleading. As argued in the appendix (Sec. C), NMI is a more balanced measure for clustering performance than purity. If one uses 512 embeddings in the SOM, one gets a lower NMI due to the penalty term for the number of clusters, but it yields an interpretable two-dimensional representation of the manifolds of MNIST (FIG1, Supp. FIG3) and Fashion-MNIST (Supp. Fig. S5).

FIG1: Images generated from a section of the SOM-VAE's latent space with 512 embeddings trained on MNIST. It yields a discrete two-dimensional representation of the data manifold in the higher-dimensional latent space.

The experiment shows that the SOM in our architecture improves the clustering (SOM-VAE vs. VQ-VAE) and that the VAE does so as well (SOM-VAE vs. GB-SOM). Both parts of the model therefore seem to be beneficial for our task. It also becomes apparent that our reconstruction loss term on z_e works better in practice than the gradient copying trick from the VQ-VAE (SOM-VAE vs. gradcopy), due to the reasons described in Section 2.2. If one removes the z_e reconstruction loss and does not copy the gradients, the encoder network does not receive any gradient information any more and the learning fails completely (no_grads). Another interesting observation is that stochastically optimizing our SOM loss using Adam seems to discover a more performant solution than the classical SOM algorithm (GB-SOM vs. minisom).
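Since the comparison above rests on purity and NMI (defined formally in Sec. C), a tiny self-contained sketch of how they can be computed is given below; this is our own illustration using scikit-learn, not the authors' evaluation code:

    import numpy as np
    from sklearn.metrics import normalized_mutual_info_score

    def purity(true_labels, cluster_ids):
        # Purity: assign every cluster its most frequent ground-truth class and
        # count the fraction of correctly "classified" points.
        true_labels, cluster_ids = np.asarray(true_labels), np.asarray(cluster_ids)
        total = 0
        for c in np.unique(cluster_ids):
            members = true_labels[cluster_ids == c]
            total += np.bincount(members).max()
        return total / len(true_labels)

    y = np.array([0, 0, 1, 1, 2, 2])      # ground-truth classes
    k = np.array([0, 0, 0, 1, 1, 1])      # cluster assignments
    print(purity(y, k), normalized_mutual_info_score(y, k))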
This advantage of the gradient-based SOM training could be due to the dependency of the step size on the distance between embeddings and encodings, as described in Section 2.1. Since k-means seems to be the strongest competitor, we include it as a reference baseline in the following experiments as well. In order to test the probabilistic model in our architecture and its effect on the clustering, we generated synthetic time series datasets of (Fashion-)MNIST images being linearly interpolated into each other. Each time series consists of 64 frames, starting with one image from (Fashion-)MNIST and smoothly changing sequentially into four other images over the length of the time course. After training the model on these data, we constructed the maximum likelihood estimate (MLE) for the Markov model's transition matrix by fixing all the weights in the SOM-VAE and making another pass over the training set, counting all the observed transitions. This MLE transition matrix reaches a negative log-likelihood of 0.24, while our transition matrix, which is learned concurrently with the architecture, yields 0.25. Our model is therefore on par with the MLE solution. Comparing these results with the clustering performance on the standard MNIST and Fashion-MNIST test sets, we observe that the performance in terms of NMI is not impaired by the inclusion of the probabilistic model into the architecture (Tab. 2). On the contrary, the probabilistic model even slightly increases the performance on Fashion-MNIST. Note that we are using 64 embeddings in this experiment instead of 16, leading to a higher clustering performance in terms of purity, but a slightly lower performance in terms of NMI compared to Table 1. This shows again that the measure of purity has to be interpreted with care when comparing models with different numbers of clusters. This experiment shows that we can indeed fit a valid probabilistic transition model concurrently with the SOM-VAE training, while at the same time not hurting the clustering performance. It also shows that for certain types of data the clustering performance can even be improved by the probabilistic model (see Sec. 2.3). In order to assess whether our model can learn an interpretable representation of more realistic chaotic time series, we train it on synthetic trajectories simulated from the famous Lorenz system. The Lorenz system is a good example for this assessment, since it offers two well-defined macro-states (given by the attractor basins) which are occluded by some chaotic noise in the form of periodic fluctuations around the attractors. A good interpretable representation should therefore learn to largely ignore the noise and model the changes between attractor basins. For a review of the Lorenz system and details about the simulations and the performance measure, we refer to the appendix (Sec. D.2). In order to compare the interpretability of the learned representations, we computed entropy distributions over simulated subtrajectories in the real system space, the attractor assignment space and the representation spaces for k-means and our model. The computed entropy distributions over all subtrajectories in the test set are depicted in FIG2. The experiment shows that the SOM-VAE representations (FIG2) are much closer in entropy to the ground-truth attractor basin assignments (FIG2) than the k-means representations (FIG2). For most of the subtrajectories without an attractor basin change they assign a very low entropy, effectively ignoring the noise, while the k-means representations partially assign very high entropies to those trajectories.
In total, the k-means representations' entropy distribution is similar to the entropy distribution in the noisy system space (FIG2). The representations learned by the SOM-VAE are therefore more interpretable than the k-means representations with regard to this interpretability measure. As could be expected from these figures, the SOM-VAE representation is also superior to the k-means one in terms of purity with respect to the attractor assignment (0.979 vs. 0.956) as well as NMI (0.577 vs. 0.249). Finally, we use the learned probabilistic model on our SOM-VAE representations to sample new latent system trajectories and compute their entropies. The distribution looks qualitatively similar to the one over real trajectories (FIG2), but our model slightly overestimates the attractor basin change probabilities, leading to a heavier tail of the distribution.

Table 3: Performance comparison of our method with and without probabilistic model (SOM-VAE-prob and SOM-VAE) against k-means in terms of normalized mutual information on a challenging unsupervised prediction task on real eICU data. The dynamic endpoints are the maximum of the physiology score within the next 6, 12 or 24 hours (physiology_6_hours, physiology_12_hours, physiology_24_hours). The values are the means of 10 runs and the respective standard errors. Each method is used to fit 64 embeddings/clusters.

In order to demonstrate interpretable representation learning on a complex real-world task, we trained our model on vital sign time series measurements of intensive care unit (ICU) patients. We analyze the performance of the resulting clustering w.r.t. the patients' future physiology states in Table 3. This can be seen as a way to assess the representations' informativeness for a downstream prediction task. For details regarding the data selection and processing, we refer to the appendix (Sec. D.3). Our full model (including the latent Markov model) performs best on the given tasks, i.e. better than k-means and also better than the SOM-VAE without the probabilistic model. This could be due to the noisiness of the medical data and the probabilistic model's smoothing tendency (see Sec. 2.3). In order to qualitatively assess the interpretability of the probabilistic SOM-VAE, we analyzed the average future physiology score per cluster (FIG3). Our model exhibits clusters where higher scores are enriched compared to the baseline level. Moreover, these clusters form compact structures, facilitating interpretability. We do not observe such interpretable structures in the other methods; indeed, our model is the only one that learns a topologically interpretable structure. For full results on acute physiology scores, an analogous experiment showing the future mortality risk associated with different regions of the map, and an analysis of enrichment for particular physiological abnormalities, we refer to the appendix (Sec. D.4). As an illustrative example of data visualization using our method, we show the trajectories of two patients that start in the same state (FIG3). The trajectories are plotted in the representation space of the probabilistic SOM-VAE and should thus be compared to the visualization in FIG3. One patient (green) stays in the regions of the map with a low average physiology score and eventually gets discharged from the hospital healthily. The other one (red) moves into map regions with a high average physiology score and ultimately dies.
Such knowledge could be helpful for doctors, who could determine the risk of a patient for certain deterioration scenarios from a glance at their trajectory in the SOM-VAE representation. The SOM-VAE can recover topologically interpretable state representations on time series and static data. It provides an improvement over standard methods in terms of clustering performance and offers a way to learn discrete two-dimensional representations of the data manifold in concurrence with the reconstruction task. It introduces a new way of overcoming the non-differentiability of the discrete representation assignment and contains a gradient-based variant of the traditional self-organizing map that is more performant than the original one. On a challenging real-world medical data set, our model learns more informative representations with respect to medically relevant prediction targets than competitor methods. The learned representations can be visualized in an interpretable way and could be helpful for clinicians to understand patients' health states and trajectories more intuitively. It will be interesting to see in future work whether the probabilistic component can be extended to not just improve the clustering and interpretability of the whole model, but also enable us to make predictions. Promising avenues in that direction could be to increase the complexity by applying a higher-order Markov model, a hidden Markov model or a Gaussian process. Another fruitful avenue of research could be to find more theoretically principled ways to overcome the non-differentiability and compare them with the empirically motivated ones. Lastly, one could explore deviating from the original SOM idea of fixing a latent space structure, such as a 2D grid, and learn the neighborhood structure as a graph directly from data. The general idea of a self-organizing map (SOM) is to approximate a data manifold in a high-dimensional continuous space with a lower-dimensional discrete one BID21. It can therefore be seen as a nonlinear discrete dimensionality reduction. The mapping is achieved by a procedure in which this discrete representation (the map) is randomly embedded into the data space and then iteratively optimized to approach the data manifold more closely. The map consists of k nodes V = {v_1, ..., v_k}, where every node corresponds to an embedding in the data space, e_v ∈ R^d, and a representation in the lower-dimensional discrete space, m_v ∈ M, where usually M ⊂ N². There are two different geometrical measures that have to be considered during training: the neighborhood function N(m_u, m_ṽ) that is defined on the low-dimensional map space and the Euclidean distance D(e_u, e_ṽ) = ||e_u − e_ṽ||_2 in the high-dimensional data space. The SOM optimization tries to induce a coupling between these two properties, such that the topological structure of the representation reflects the geometrical structure of the data. The SOM training procedure is described in Algorithm 1.

Algorithm 1: Self-organizing map training
Require: data set D, number of nodes k, learning rate η, neighborhood function N
    initialize the node embeddings {e_v}_{v ∈ V} at random
    while not converged do
        for all x_i ∈ D do
            find the closest SOM node ṽ := arg min_{v ∈ V} ||x_i − e_v||_2
            update the winner's embedding: e_ṽ ← e_ṽ + η (x_i − e_ṽ)
            for all u ∈ V \ {ṽ} do
                update the neighbor embedding: e_u ← e_u + η N(m_ṽ, m_u)(x_i − e_u)
            end for
        end for
    end while

During training on a data set D, a winner node ṽ is chosen for every point x_i according to the Euclidean distance between the point and the node's embedding in the data space. The embedding vector for the winner node is then updated by pulling it into the direction of the data point with some step size η.
The embedding vectors of the other nodes are also updated, potentially with a smaller step size, depending on whether they are neighbors of the winner node in the map space M. The neighborhood is defined by the neighborhood function N(m_u, m_ṽ). There can be different design choices for the neighborhood function, e.g. rectangular grids, hexagonal grids or Gaussian neighborhoods. For simplicity and ease of visualization, we usually choose a two-dimensional rectangular grid neighborhood in this paper. In this original formulation of the SOM training, the nodes are updated one by one with a fixed step size. In our model, however, we use a gradient-based optimization of their distances to the data points and update them in minibatches. This leads to larger step sizes when they are farther away from the data and smaller step sizes when they are close. Overall, our gradient-based SOM training seems to perform better than the original formulation (see Tab. 1). It also becomes evident from this procedure that it will be very hard for the map to fit disjoint manifolds in the data space. Since the nodes of the SOM form a fully connected graph, they do not possess the ability to model spatial gaps in the data. We overcome this problem in our work by mapping the data manifold with a variational autoencoder into a lower-dimensional latent space. The VAE can then learn to close the aforementioned gaps and map the data onto a compact latent manifold, which can be more easily modeled with the SOM. The hyperparameters of our model were optimized using Robust Bayesian Optimization, with the packages sacred and labwatch BID14 for the parameter handling and RoBO for the optimization, using the mean squared reconstruction error as the optimization criterion. Especially the weighting hyperparameters α, β, γ and τ (see the loss definitions above) have to be tuned carefully, such that the different parts of the model converge at roughly the same rate. We found that 2000 steps of Bayesian optimization sufficed to yield a performant hyperparameter assignment. Since our model defines a general framework, some competitor models can be seen as special cases of our model, where certain parts of the loss function are set to zero or parts of the architecture are omitted. We used the same hyperparameters for those models. For external competitor methods, we used the hyperparameters from the respective publications where applicable, and otherwise the default parameters from their packages. The models were implemented in TensorFlow BID0 and optimized using Adam. Given that one of our most interesting tasks at hand is the clustering of data, we need performance measures to objectively compare the quality of this clustering with other methods. The measures that we decided to use, and that have been used extensively in the literature, are purity and normalized mutual information (NMI). We briefly review them in the following. Let the set of ground truth classes in the data be C = {c_1, c_2, ..., c_J} and the set of clusters that result from the algorithm be Ω = {ω_1, ω_2, ..., ω_K}. The purity π is then defined as

    π(C, Ω) = (1/N) Σ_{k=1}^{K} max_j |ω_k ∩ c_j|,

where N is the total number of data points. Intuitively, the purity is the accuracy of the classifier that assigns the most prominent class label in each cluster to all of its respective data points. While the purity has a very simple interpretation, it also has some shortcomings. One can for instance easily observe that a clustering with K = N, i.e.
one cluster for every single data point, will yield a purity of 1.0 but still probably not be very informative for most tasks. It would therefore be more sensible to have another measure that penalizes the number of clusters. The normalized mutual information is one such measure. The NMI is defined as

    NMI(C, Ω) = 2 I(C; Ω) / [H(C) + H(Ω)],

where I(C; Ω) is the mutual information between C and Ω and H(·) denotes the entropy. In addition to the experiments reported in Table 1, we performed experiments to assess the influence of the number of clusters k on the clustering performance of our method. We chose different values for k between 4 and 64 and tested the clustering performance on MNIST and Fashion-MNIST (Tab. S1). It can be seen that the purity increases monotonically with k, since it does not penalize larger numbers of clusters (see Sec. C). The NMI, however, includes an automatic penalty for misspecifying the model with too many clusters. It therefore increases first, but then decreases again for too large values of k. The optimal k according to the NMI seems to lie between 16 and 36. The Lorenz system is the system of coupled ordinary differential equations defined by

    dx/dt = a(y − x),   dy/dt = x(b − z) − y,   dz/dt = xy − cz.

With the parameter choices a = 10, b = 28 and c = 8/3, the system shows chaotic behavior by forming a strange attractor BID28, with the two attractor points being given by

    p_{1,2} = (±√(c(b − 1)), ±√(c(b − 1)), b − 1).

We simulated 100 trajectories of 10,000 time steps each from the chaotic system and trained the SOM-VAE as well as k-means on it with 64 clusters/embeddings respectively (a minimal simulation sketch is given at the end of this appendix). The system chaotically switches back and forth between the two attractor basins. By computing the Euclidean distance between the current system state and each of the attractor points p_{1,2}, we can identify the current attractor basin at each time point. In order to assess the interpretability of the learned representations, we have to define an objective measure of interpretability. We define interpretability as the similarity between the representation and the system's ground-truth macro-state. Since representations at single time points are meaningless with respect to this measure, we compare the evolution of representations and system state over time in terms of their entropy. We divided the simulated trajectories from our test set into spans of 100 time steps each. For every subtrajectory, we computed the entropies of those subtrajectories in the real system space (macro-state and noise), the assigned attractor basin space (noise-free ground-truth macro-state), the SOM-VAE representation and the k-means representation. We also observed for every subtrajectory whether or not a change between attractor basins has taken place. Note that the attractor assignments and representations are discrete, while the real system space is continuous. In order to make the entropies comparable, we discretize the system space into unit hypercubes for the entropy computation. For a representation R with assignments R_t at time t and starting time t_start of the subtrajectory, the entropies are defined as

    S_R(t_start) = H({R_t : t_start ≤ t < t_start + 100}),

with H(·) being the Shannon information entropy of a discrete set. All experiments were performed on dynamic data extracted from the eICU Collaborative Research Database BID12. Irregularly sampled time series data were extracted from the raw tables and then resampled to a regular time grid using a combination of forward filling and missing value imputation using global population statistics. We chose a grid interval of one hour to capture the rapid dynamics of patients in the ICU. Each sample in the time grid was then labeled using a dynamic variant of the APACHE score BID20, which is a proxy for the instantaneous physiological state of a patient in the ICU.
Specifically, the variables MAP, Temperature, Respiratory rate, HCO3, Sodium, Potassium, and Creatinine were selected from the score definition, because they could be easily defined for each sample in the eICU time series. The value range of each variable was binned into ranges of normal and abnormal values, in line with the definition of the APACHE score, where a higher score for a variable is obtained for abnormally high or low values. The scores were then summed up, and we define the predictive score as the worst (highest) score in the next t hours, for t ∈ {6, 12, 24}. Patients are thus stratified by their expected pathology in the near future, which corresponds closely to how a physician would perceive the state of a patient. The training set consisted of 7000 unique patient stays, while the test set contained 3600 unique stays. As mentioned in the main text (see FIG3), the SOM-VAE-prob is able to uncover compact and interpretable structures in the latent space with respect to future physiology scores. In this section we show results for acute physiology scores in greater detail, analyze enrichment for future mortality risk, arguably the most important severity indicator in the ICU, and explore phenotypes for particular physiological abnormalities.

FIG0: (a) shows the difference in distribution of the acute physiology score in the next 24 hours between time-points assigned to the most abnormal cell in the SOM-VAE-prob map and time-points assigned to a normal cell chosen from the middle of the map. It is apparent that the distributions are largely disjoint, which means that the representation induced by the SOM-VAE-prob clearly distinguishes these risk profiles. Statistical tests for difference in distribution and location parameter are highly significant at p-values of p ≤ 10^−3, as we have validated using a 2-sample t-test and a Kolmogorov-Smirnov test. In (b-c) the enrichment of the map for the mean acute physiology score in the next 6 and 12 hours is shown for completeness. The enrichment patterns on the 3 maps, for the future horizons of {6, 12, 24} hours, are almost identical, which provides empirical evidence for the temporal stability of the SOM-VAE-prob embedding.

We observe that the left-edge and right-edge regions of the SOM-VAE-prob map which are enriched for higher acute physiology scores (see FIG3) also exhibit elevated mortality rates over the baseline. Interestingly, according to future mortality risk, which is an important severity indicator, patients on the left edge are significantly more sick on average than those on the right edge, which is less visible from the enrichment for acute physiology scores.

PATIENT STATE PHENOTYPES ON THE SOM-VAE-PROB MAP. Low sodium and high potassium states are enriched near the left edge and near the right edge, respectively, which could represent sub-types of the high-risk phenotype found in these regions (compare FIG3, the distribution of the acute physiology score). Elevated creatinine is a trait that occurs in both these regions. A compact structure associated with elevated HCO3 can be found in the center of the map, which could represent a distinct phenotype with lower mortality risk in our cohort. In all phenotypes, the tendency of the SOM-VAE-prob to recover compact structures is exemplified.

FIG3: Images generated from the SOM-VAE's latent space with 512 embeddings trained on MNIST. It yields an interpretable discrete two-dimensional representation of the data manifold in the higher-dimensional latent space.
Figure S5: Images generated from the SOM-VAE's latent space with 512 embeddings trained on Fashion-MNIST. It yields an interpretable discrete two-dimensional representation of the data manifold in the higher-dimensional latent space.
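For completeness, the Lorenz setup of Sec. D.2 can be reproduced with a few lines of NumPy; the following sketch is our own (the forward-Euler integrator, step size and initial state are assumptions, not taken from the paper):

    import numpy as np

    def simulate_lorenz(n_steps, dt=0.01, a=10.0, b=28.0, c=8.0 / 3.0):
        # Forward-Euler simulation of the Lorenz equations; parameter names
        # follow Sec. D.2.
        s = np.array([1.0, 1.0, 1.0])
        out = np.empty((n_steps, 3))
        for i in range(n_steps):
            x, y, z = s
            s = s + dt * np.array([a * (y - x), x * (b - z) - y, x * y - c * z])
            out[i] = s
        return out

    traj = simulate_lorenz(10_000)
    r = np.sqrt(8.0 / 3.0 * 27.0)                     # sqrt(c * (b - 1))
    attractors = np.array([[r, r, 27.0], [-r, -r, 27.0]])
    # Ground-truth macro-state: index of the closer attractor point per step.
    basin = np.argmin(np.linalg.norm(traj[:, None, :] - attractors[None], axis=-1), axis=1)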
We present a method to learn interpretable representations on time series using ideas from variational autoencoders, self-organizing maps and probabilistic models.
We propose the Significance-Offset Convolutional Neural Network, a deep convolutional network architecture for regression of multivariate asynchronous time series. The model is inspired by standard autoregressive (AR) models and gating mechanisms used in recurrent neural networks. It involves an AR-like weighting system, where the final predictor is obtained as a weighted sum of adjusted regressors, while the weights are data-dependent functions learnt through a convolutional network. The architecture was designed for applications on asynchronous time series and is evaluated on such datasets: a hedge fund proprietary dataset of over 2 million quotes for a credit derivative index, an artificially generated noisy autoregressive series and a household electricity consumption dataset. The proposed architecture achieves promising results as compared to convolutional and recurrent neural networks. The code for the numerical experiments and the architecture implementation will be shared online to make the research reproducible. Time series forecasting is focused on modeling the predictors of future values of time series given their past. As in many cases the relationship between past and future observations is not deterministic, this amounts to expressing the conditional probability distribution as a function of the past observations:

    p(X_{t+d} | X_t, X_{t−1}, ...) = f(X_t, X_{t−1}, ...).

This forecasting problem has been approached almost independently by the econometrics and machine learning communities. In this paper we examine the capabilities of convolutional neural networks (CNNs) BID25 in modeling the conditional mean of the distribution of future observations; in other words, the problem of autoregression. We focus on time series with multivariate and noisy signals. In particular, we work with financial data, which has received limited public attention from the deep learning community and for which nonparametric methods are not commonly applied. Financial time series are particularly challenging to predict due to their low signal-to-noise ratio (cf. applications of Random Matrix Theory in econophysics BID24 BID3) and heavy-tailed distributions BID8. Moreover, the predictability of financial market returns remains an open problem and is discussed in many publications (cf. the efficient market hypothesis of BID11). A common situation with financial data is that the same signal (e.g. the value of an asset) is observed from different sources (e.g. financial news, analysts, portfolio managers in hedge funds, market-makers in investment banks) in asynchronous moments of time. Each of these sources may have a different bias and noise with respect to the original signal that needs to be recovered (cf. the time series in FIG0). Moreover, these sources are usually strongly correlated and lead-lag relationships are possible (e.g. a market-maker with more clients can update its view more frequently and precisely than one with fewer clients). Therefore, the significance of each of the available past observations might be dependent on some other factors that can change in time. Hence, the traditional econometric models such as AR, VAR and VARMA might not be sufficient. Yet their relatively good performance motivates coupling such linear models with deep neural networks that are capable of learning highly nonlinear relationships.

FIG0: Quotes from four different market participants (sources) for the same CDS throughout one day. Each trader displays from time to time the prices at which he offers to buy (bid) and sell (ask) the underlying CDS.
The filled area marks the difference between the best sell and buy offers (spread) at each time.

For these reasons, we propose the Significance-Offset Convolutional Neural Network, a convolutional network extension of standard autoregressive models BID34 BID35 equipped with a nonlinear weighting mechanism, and provide empirical evidence on its competitiveness with a standard multi-layer CNN and a recurrent Long Short-Term Memory network BID18. The mechanism is inspired by the gating systems that proved successful in recurrent neural networks BID18 BID6 and highway networks BID37.

2 RELATED WORK

2.1 TIME SERIES FORECASTING

Literature in time series forecasting is rich and has a long history in the field of econometrics, which makes extensive use of linear stochastic models such as AR, ARIMA and GARCH processes, to mention a few. Unlike in machine learning, research in econometrics is more focused on explaining variables rather than improving out-of-sample prediction power. In practice, one can notice that these models 'over-fit' on financial time series: their parameters are unstable and out-of-sample performance is poor. Reading through recent proceedings of the main machine learning venues (e.g. ICML, NIPS, AISTATS, UAI), one can notice that time series are often forecast using Gaussian processes BID31 BID38 BID19, especially when time series are irregularly sampled BID9 BID26. Though still largely independent, researchers have started to "bring together the machine learning and econometrics communities" by building on top of their respective fundamental models, yielding, for example, the Gaussian Copula Process Volatility model BID42. Our paper is in line with this emerging trend by coupling AR models and neural networks. Over the past 5 years, deep neural networks have surpassed results from most of the existing literature in many fields BID33: computer vision BID23, audio signal processing and speech recognition BID32, natural language processing (NLP) BID1 BID7 BID14 BID21. Although sequence modeling in NLP, i.e. prediction of the next character or word, is related to our forecasting problem, the nature of the sequences is too dissimilar to allow using the same cost functions and architectures. The same applies to the adversarial training proposed by BID28 for video frame prediction, as such an approach favors the most plausible scenarios rather than outputs close to all possible outputs, while the latter is usually required in financial time series due to the stochasticity of the considered processes. Literature on deep learning for time series forecasting is still scarce (cf. BID12 for a recent review). Literature on deep learning for financial time series forecasting is even scarcer, though interest in using neural networks for financial predictions is not new BID30 BID29. More recent papers include BID36, who used 4-layer perceptrons in modeling price change distributions in Limit Order Books, and BID2, who applied the more recent WaveNet architecture of van den Oord et al. BID39 to several short univariate and bivariate time series (including financial ones). Despite claims of applying deep learning, BID17 use autoencoders with a single hidden layer to compress multivariate financial data. Besides these and claims of secretive hedge funds (it can be marketing surfing on the deep learning hype), no promising or innovative architectures were publicly published so far, to the best of our knowledge.
In this paper, we investigate the capabilities of the gold standard architectures (simple Convolutional Neural Network (CNN), Residual Network, multi-layer LSTM) on AR-like artificial asynchronous and noisy time series, and on real financial data from the credit default swap market, where some inefficiencies may exist, i.e. time series may not be totally random. Gating mechanisms for neural networks were first proposed by BID18 and proved essential in training recurrent architectures BID21 due to their ability to overcome the problem of vanishing gradients. In general, they can be expressed as

    f(x) = c(x) ⊗ σ(x),   (2)

where f is the output function, c is a 'candidate output' (usually a nonlinear function of x), ⊗ is an element-wise matrix product and σ: R → [0, 1] is a sigmoid nonlinearity that controls the amount of the output passed to the next layer (or to further operations within a layer). Appropriate compositions of functions of type (2) lead to the popular recurrent architectures such as LSTM BID18 and GRU BID6. A similar idea was recently used in the construction of highway networks BID37, which enabled successful training of deeper architectures. van den Oord et al. BID40 and BID10 proposed gating mechanisms (respectively with hyperbolic tangent and linear 'candidate outputs') for training deep convolutional neural networks. The gating system that we propose is aimed at weighting a number of different 'candidate predictors' and therefore is most closely related to the softmax gating used in MuFuRU (Multi-Function Recurrent Unit, BID41), i.e.

    f(x) = Σ_{l=1}^{L} p^l(x) ⊗ f^l(x),

where (p^1(x), ..., p^L(x)) is a softmax-normalized vector of weights and f^1, ..., f^L are candidate outputs. The idea of weighting the outputs of intermediate layers within a neural network is also used in attention networks (see e.g. BID4), which proved successful in such tasks as image captioning and machine translation. Our approach is similar, as the separate inputs (time series steps) are weighted in accordance with learned functions of these inputs, yet different since we model these functions as multi-layer CNNs (instead of projections on learned directions) and since we do not use recurrent layers. The latter is important in the above-mentioned tasks as it enables the network to remember the parts of the sentence/image already translated/described. Time series observed in irregular moments of time pose significant challenges for learning algorithms. Gaussian processes provide a useful theoretical framework capable of handling asynchronous data; however, due to the assumed Gaussianity they are inappropriate for financial datasets, which often follow fat-tailed distributions BID8. On the other hand, prediction of even a simple autoregressive time series such as AR(2), given by X(t) = αX(t − 1) + βX(t − 2) + ε(t), may involve highly nonlinear functions when sampled irregularly. Precisely, it can be shown that the conditional expectation satisfies

    E[X(t) | X(t − 1), X(t − k)] = a_k X(t − 1) + b_k X(t − k),

where a_k and b_k are rational functions of α and β (see Appendix A for the proof). This would not be a problem if k was fixed, as then one would be interested in estimating a_k and b_k directly; this, however, is not the case with asynchronous sampling. When X is an autoregressive series of higher order and more past observations are available, the analogous expectation E[X(t_n) | {X(t_{n−m}), m = 1, ..., M}] would involve more complicated functions that in general may not possess closed forms. In real-world applications we often deal with multivariate time series whose dimensions are observed separately and asynchronously.
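The effect in Lemma 1 is easy to verify numerically; the following self-contained sketch (with arbitrary α and β of our own choosing) fits the best linear predictor of X(t) from (X(t−1), X(t−k)) and shows that the optimal coefficients change with the gap k:

    import numpy as np

    rng = np.random.default_rng(0)
    alpha, beta, n = 0.6, 0.3, 100_000
    x = np.zeros(n)
    for t in range(2, n):                      # simulate the AR(2) process
        x[t] = alpha * x[t - 1] + beta * x[t - 2] + rng.normal()

    # Best linear predictor of X(t) from (X(t-1), X(t-k)): the coefficients
    # (a_k, b_k) change with the gap k, so a model facing random gaps must
    # effectively learn a nonlinear function of values and durations.
    for k in (2, 3, 5):
        X = np.column_stack([x[k - 1:-1], x[:-k]])   # columns: X(t-1), X(t-k)
        y = x[k:]
        a_k, b_k = np.linalg.lstsq(X, y, rcond=None)[0]
        print(f"k={k}: a_k~{a_k:.3f}, b_k~{b_k:.3f}")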
This multivariate, asynchronous setting adds even more difficulty to assigning appropriate weights to the past values, even if the underlying data structure is linear. Furthermore, an appropriate representation of such series might not be obvious, as aligning the series at a fixed frequency may lead to a loss of information (if too low a frequency is chosen) or a prohibitive enlargement of the dataset (especially when durations have varying magnitudes); see Figure 2A. As an alternative, we might consider representing the separate dimensions as a single one, with dimension and duration indicators as additional features. Figure 2B presents this approach, which is going to be at the core of the proposed architecture.

Figure 2: (A) Fixed sampling frequency and its drawbacks; keeping all available information leads to many more datapoints. (B) Proposed data representation for the asynchronous series. Consecutive observations are stored together as a single value series, regardless of which series they belong to; this information, however, is stored in indicator features, alongside durations between observations.

A natural model for prediction of such series could be an LSTM which, given consecutive input values and respective durations (X(t_n), t_n − t_{n−1}) =: x_n, in each step would memorize the series values and weight them at the output according to the durations. However, the drawback of such an approach lies in the imbalance between the needs for memory and for nonlinearity: the weights that such a network needs to assign to the memorized observations potentially require several layers of nonlinearity to be computed properly, while past observations might just need to be memorized as they are. For these reasons we shall consider a model that combines a simple autoregressive approach with a neural network, in order to allow learning meaningful data-dependent weights:

    E[X(t_n) | {x_{n−m}, m = 1, ..., M}] ≈ Σ_{m=1}^{M} α_m(x_{n−m}) · x_{n−m},

where the weights α_1, ..., α_M are modeled using a neural network. To allow more flexibility and cover situations when e.g. observed values of x are biased, we should consider the summation over terms α_m(x_{n−m}) · f(x_{n−m}), where f is also a neural network. We formalize this idea in Section 4. Suppose that we are given a multivariate time series (x_n)_n ⊂ R^d and we aim to predict the conditional future values of a subset of elements of x_n,

    y_n = E[x_{n+1}^I | {x_{n−m}, m = 0, 1, ...}],

where I = {i_1, ..., i_{d_I}} ⊂ {1, ..., d}. We consider the following estimator of y_n:

    ŷ_n = Σ_{m=1}^{M} [ F(x_n^{−M}) ⊗ σ(S(x_n^{−M})) ]_{·,m},   (7)

where x_n^{−M} denotes the matrix of the M most recent observations, and
• F, S: R^{d×M} → R^{d_I×M} are neural networks,
• σ is a normalized activation function applied independently on each row, i.e.

    σ((a_1^T, ..., a_{d_I}^T)^T) = (σ(a_1)^T, ..., σ(a_{d_I})^T)^T

for any a_1, ..., a_{d_I} ∈ R^M, with σ such that σ(a)^T · 1_M = 1 for any a ∈ R^M,
• ⊗ is Hadamard (element-wise) matrix multiplication.

The summation in (7) goes over the columns of the matrix in brackets; hence the i-th element of the output vector ŷ_n is a linear combination of the i-th row of the matrix F(x_n^{−M}). We are going to consider S to be a fully convolutional network (composed solely of convolutional layers) and F of the form

    F(x_n^{−M}) = W ⊗ [ off(x_{n−m}) + x_{n−m}^I ]_{m=1,...,M},

where W ∈ R^{d_I×M} and off: R^d → R^{d_I} is a multilayer perceptron. In that case F can be seen as a sum of a projection (x → x^I) and a convolutional network with all kernels of length 1. Equation (7) can be rewritten as

    ŷ_n = Σ_{m=1}^{M} W_m ⊗ (off(x_{n−m}) + x_{n−m}^I) ⊗ σ(S_m(x_n^{−M})),   (10)

where W_m and S_m(·) are the m-th columns of the matrices W and S(·).

Figure 3: A scheme of the proposed SOCNN architecture. The network preserves the time dimension up to the top layer, while the number of features per timestep (filters) in the hidden layers is custom. The last convolutional layer, however, has the number of filters equal to the dimension of the output.
The Weighting frame shows how the outputs from the offset and significance networks are combined in accordance with Eq. 10.

We will call the proposed network a Significance-Offset Convolutional Neural Network (SOCNN), and off and S the offset and significance (sub)networks, respectively. The network scheme is shown in Figure 3. Note that when off ≡ 0 and σ ≡ 1, the model simplifies to the collection of d_I separate AR(M) models for each dimension. Note that the form of Equation (10) enforces the separation of the temporal dependence (obtained in the weights W_m), the local significance of observations S_m (S as a convolutional network is determined by its filters, which capture local dependencies and are independent of the relative position in time), and the predictors off(x_{n−m}), which are completely independent of the position in time. This provides some amount of interpretability of the fitted functions and weights. For instance, each of the past observations provides an adjusted single regressor for the target variable through the offset network. Note that due to the asynchronous sampling procedure, consecutive values of x might be heterogenous; hence the offset network adjusts each past observation individually. On the other hand, the significance network provides data-dependent weights for all regressors and sums them up in an autoregressive manner. Figure 7 in Appendix E.2 shows sample significance and offset activations for the trained network.

Relation to asynchronous data. As mentioned before, one of the common problems with time series are the varying durations between consecutive observations. A simple approach at the data-preprocessing level is aligning the observations at some chosen frequency, by e.g. duplicating or interpolating observations. This, however, might extend the size of an input and, therefore, the model complexity. The other idea is to treat the duration and/or time of the observation as another feature, as presented in Figure 2B. This approach is at the core of the SOCNN architecture: the significance network is aimed at learning the high-level features that indicate the relative importance of past observations, which, as shown in Section 3, could be predominantly dependent on time and duration between observations.

Loss function. L2 error is a natural loss function for the estimators of expected value. As mentioned above, the output of the offset network can be seen as a collection of separate predictors of the changes between the corresponding observations x_{n−m}^I and the target variable y_n, i.e. off(x_{n−m}) ≈ y_n − x_{n−m}^I. For that reason, we consider the auxiliary loss function equal to the mean squared error of such intermediate predictions,

    L_aux(x_n^{−M}, y_n) = (1/M) Σ_{m=1}^{M} || off(x_{n−m}) + x_{n−m}^I − y_n ||².

The total loss for the sample (x_n^{−M}, y_n) is then given by

    L_tot(x_n^{−M}, y_n) = || ŷ_n − y_n ||² + α L_aux(x_n^{−M}, y_n),

where ŷ_n is given by Eq. 10 and α ≥ 0 is a constant. In Section 5.2 we discuss the empirical findings on the impact of positive values of α on the model training and performance, as compared to α = 0 (lack of auxiliary loss). We evaluate the proposed model on a financial dataset of bid/ask quotes sent by several market participants active in the credit derivatives market, on artificially generated datasets, and on the household electric power consumption dataset available from the UCI repository BID27, comparing its performance with a simple CNN, single- and multi-layer LSTMs BID18 and a 25-layer ResNet BID16. Apart from the performance evaluation of SOCNNs, we discuss the impact of the network components, such as the auxiliary loss and the depth of the offset sub-network.
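Before turning to the experimental details, the following NumPy sketch makes Eq. (10) and the total loss concrete; the stand-in offset and significance networks and all names here are our own simplifications, not the paper's implementation:

    import numpy as np

    def socnn_forward(past, off_net, sig_net, W, I):
        # past: [M, d] window of the M most recent observations; off_net and
        # sig_net stand in for the offset and significance sub-networks;
        # W: [d_I, M] autoregressive weights; I: indices of predicted dimensions.
        M = past.shape[0]
        S = sig_net(past)                                      # [d_I, M] scores
        S = np.exp(S) / np.exp(S).sum(axis=1, keepdims=True)   # row-wise softmax = sigma
        # Adjusted regressors off(x_{n-m}) + x^I_{n-m}, one column per past step.
        preds = np.stack([off_net(past[m]) + past[m][I] for m in range(M)], axis=1)
        return (W * preds * S).sum(axis=1)                     # Eq. (10): sum over columns

    def socnn_loss(y_hat, y, past, off_net, I, alpha=0.1):
        # L2 error of the final prediction plus the auxiliary mean squared
        # error of the intermediate predictors off(x_{n-m}) + x^I_{n-m}.
        aux = np.mean([np.sum((off_net(xm) + xm[I] - y) ** 2) for xm in past])
        return np.sum((y_hat - y) ** 2) + alpha * aux

    # Toy stand-ins: linear "networks" with random weights, M = 8 past steps.
    rng = np.random.default_rng(1)
    d, d_I, M = 4, 2, 8
    I = np.array([0, 1])
    A = rng.normal(size=(d_I, d))
    C = rng.normal(size=(d_I * M, d * M))
    off_net = lambda xm: A @ xm                                # 1x1-kernel offset
    sig_net = lambda p: (C @ p.reshape(-1)).reshape(d_I, M)    # fully conv. stand-in
    W = rng.normal(size=(d_I, M))
    past, y = rng.normal(size=(M, d)), rng.normal(size=d_I)
    y_hat = socnn_forward(past, off_net, sig_net, W, I)
    print(socnn_loss(y_hat, y, past, off_net, I))

Note that the row-wise softmax plays the role of σ here, so each output dimension's weights over the M past steps sum to one.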
The details of the training process and the hyperparameters used in the proposed architecture, as well as in the benchmark models, are presented in Appendix C. We test our network architecture on artificially generated datasets of multivariate time series. We consider two types of series:

1. Synchronous series. The series of K noisy copies ('sources') of the same univariate autoregressive series ('base series'), observed together at random times. The noise of each copy is of a different type.
2. Asynchronous series. The series of observations of one of the sources in the above dataset. At each time, the source is selected randomly and its value at this time is added to form a new univariate series. The final series is composed of this series, the durations between the random times and the indicators of the 'available source' at each time.

The details of the simulation process are presented in Appendix D. We consider synchronous and asynchronous series X_{K×N}, where K ∈ {16, 64} is the number of sources and N = 10,000, which gives 4 artificial series in total. The household electricity dataset contains measurements of 7 different quantities related to electricity consumption in a single household, recorded every minute for 47 months, yielding over 2 million observations. Since we aim to focus on asynchronous time series, we alter it so that a single observation contains only the value of one of the seven features, while durations between consecutive observations range from 1 to 7 minutes. The regression aim is to predict all of the features at the next time step. The proposed model was designed primarily for forecasting incoming non-anonymous quotes received from the credit default swap market. The dataset contains 2.1 million quotes from 28 different sources, i.e. market participants. Each quote is characterized by 31 features: the offered price, 28 indicators of the quoting source, the direction indicator (the quote refers to either a buy or a sell offer) and the duration from the previous quote. For each source and direction we aim at predicting the next quoted price from this given source and direction, considering the last 60 quotes. We formed 15 separate prediction tasks; in each task the model was trained to predict the next quote by one of the fifteen most active market participants. This dataset, which is proprietary, motivated the aforementioned construction of artificial asynchronous time series datasets based on its statistical features, for reproducible research purposes. TAB0 presents the detailed results from the artificial and electricity datasets. The proposed networks significantly outperform the benchmark networks on the asynchronous, electricity and quotes datasets. For the synchronous datasets, on the other hand, SOCNN almost matches the results of the benchmarks. This similar performance could have been anticipated: the correct weights of the past values in synchronous artificial datasets are far less nonlinear functions of the input than in the case when separate dimensions are observed asynchronously. For this reason, the significance network's potential is not fully utilized. We can also observe that the depth of the offset network has a negligible or even negative impact on the results. This means that the significance network has a crucial impact on the performance, which is in line with the potential drawbacks of the LSTM network discussed in Section 3: obtaining proper weights for the past observations is much more challenging than getting good predictors from the single past values.
The small positive auxiliary weight helped achieve a more stable test error throughout training in many cases. The higher weights of the auxiliary loss considerably improved the test error on the asynchronous datasets (see TAB1); however, for the other datasets its impact was negligible. In general, the proposed SOCNN had a significantly lower variance of the test and validation errors, especially in the early stage of the training process and for the quotes dataset. Figure 4 presents the learning curves for two different artificial datasets. To understand better why SOCNN obtained better results than the other networks, we check how these networks react to the presence of additional noise in the input terms. Figure 5 presents the changes in the mean squared error and the significance and offset network outputs with respect to the level of noise. SOCNN is the most robust out of the compared networks and, together with the single-layer LSTM, least prone to overfitting. Despite the use of dropout and cross-validation, the other models tend to overfit the training set and have non-symmetric error curves on the test dataset.

Figure 5: Experiment comparing the robustness of the considered networks for the Asynchronous 16 dataset. The plots show how the error would change if an additional noise term was added to the input series. The dotted curves show the total significance and average absolute offset (not to scale) outputs for the noisy observations. Interestingly, the significance of the noisy observations increases with the magnitude of noise; i.e. noisy observations are far from being discarded by SOCNN.

In this article, we proposed a weighting mechanism that, coupled with convolutional networks, forms a new neural network architecture for time series prediction. The proposed architecture is designed for regression tasks on asynchronous signals in the presence of a high amount of noise. This approach has proved to be successful in forecasting financial and artificially generated asynchronous time series, outperforming popular convolutional and recurrent networks. The proposed model can be further extended by adding intermediate weighting layers of the same type in the network structure. Another possible generalization that requires further empirical studies can be obtained by leaving the assumption of independent offset values for each past observation, i.e. considering not only 1x1 convolutional kernels in the offset sub-network. Finally, we aim at testing the performance of the proposed architecture on other real-life datasets with relevant characteristics. We observe that there exists a strong need for a common 'econometric' datasets benchmark and, more generally, for time series (stochastic processes) regression.

APPENDIX A: NONLINEARITY IN THE ASYNCHRONOUSLY SAMPLED AUTOREGRESSIVE TIME SERIES

Lemma 1. Let X(t) be an AR(2) time series given by

    X(t) = aX(t − 1) + bX(t − 2) + ε(t),   (15)

where (ε(t))_{t=1,2,...} are i.i.d. error terms. Then

    E[X(t) | X(t − 1), X(t − k)] = a_k X(t − 1) + b_k X(t − k)

for any t > k ≥ 2, where a_k, b_k are rational functions of a and b.

Proof. The proof follows a simple induction. It is sufficient to show that

    X(t) = v_k X(t − 1) + w_k X(t − k) + E_k(t),   (17)

where v_2 = a, w_2 = b, and E_k(t) is a linear combination of {ε(t − i), i = 0, 1, ..., k − 2}. The basis of the induction is trivially satisfied via (15). In the induction step, we assume that (17) holds for k. For t > k + 1, applying (17) at time t − 1, we have

    X(t − 1) = v_k X(t − 2) + w_k X(t − 1 − k) + E_k(t − 1).
Multiplying both sides of this equation by b and adding a v_k X(t − 1) we obtain

    (b + a v_k) X(t − 1) = v_k (aX(t − 1) + bX(t − 2)) + b w_k X(t − 1 − k) + b E_k(t − 1).

Since aX(t − 1) + bX(t − 2) = X(t) − ε(t), we get

    X(t) = ((b + a v_k)/v_k) X(t − 1) − (b w_k / v_k) X(t − (k + 1)) + ε(t) − (b/v_k) E_k(t − 1).

As E_{k+1}(t) := ε(t) − (b/v_k) E_k(t − 1) is a linear combination of {ε(t − i), i = 0, 1, ..., k − 1}, the above equation proves (17) for k + 1.

To see how robust each of the networks is, we add noise terms to part of the input series and evaluate them on such datapoints, assuming unchanged output. We consider varying magnitude of the noise terms, which are added only to a selected 20% of the past steps at the value dimension. Formally, the procedure is the following:

1. Select randomly N_obs = 6000 observations (X^n, y^n) (half of which come from the training set and half from the test set).
2. Add noise terms to the observations: X^n_p := X^n + Ξ^n · γ_p, for {γ_p}_{p=1}^{128} evenly distributed on [−6σ, 6σ], where σ is the standard deviation of the differences of the series being predicted and Ξ^n is a random binary matrix that selects 20% of the past steps at the value dimension.
3. For each p, evaluate each of the trained models on the dataset (X^n_p, y^n)_{n=1}^{N_obs}, separately for n's originally coming from the training and test sets.

To evaluate the model and the significance of its components, we perform a grid search over some of the hyperparameters, more extensively on the artificial and electricity datasets. These include the offset sub-network's depth (we consider depths of 1 and 10 for the artificial and electricity datasets; 1 for the Quotes data) and the auxiliary weight α (compared values: {0, 0.1, 0.01}). For all networks we have chosen the LeakyReLU activation function,

    σ^{LeakyReLU}(x) = x for x ≥ 0, and ax for x < 0,

with leak rate a = 0.1. We compare the performance of the proposed model with CNN, ResNet, multi-layer LSTM networks and a linear (VAR) model. The benchmark networks were designed so that they have a comparable number of parameters to the proposed model. Consequently, the LeakyReLU activation function with leak rate 0.1 was used in all layers except the top ones, where linear activation was applied. For CNN we provided the same number of layers, the same stride and a similar kernel size structure. In each trained CNN, we applied max pooling with a pool size of 2 every two convolutional layers. TAB3 presents the configurations of the network hyperparameters used in the comparison. The training and validation sets were sampled randomly from the first 80% of timesteps in each series, with a ratio of 3 to 1. The remaining 20% of the data was used as a test set. All models were trained using the Adam optimizer BID22, which we found much faster than standard Stochastic Gradient Descent in early tests. We used a batch size of 128 for the artificial data and 256 for the quotes dataset. We also applied batch normalization BID20 between each convolution and the following activation. At the beginning of each epoch, the training samples were shuffled. To prevent overfitting we applied dropout and early stopping. Weights were initialized following the normalized uniform procedure proposed by BID13. Experiments were carried out using an implementation relying on TensorFlow BID0 with a Keras front end BID5. For the artificial data we optimized the models using one K20s NVIDIA GPU, while for the quotes dataset we used an 8-core Intel Core i7-6700 CPU machine only. We simulate a multivariate time series composed of K noisy observations of the same autoregressive signal. The simulated series are constructed as described below.

Figure 6: Simulated synchronous (left) and asynchronous (right) artificial series. Note the different durations between the observations from different sources in the latter plot.
For clarity, we present only 6 out of 16 total dimensions.

Let p_1, p_2, ..., p_K ∈ (0, 1) and define

    DISPLAYFORM0
    DISPLAYFORM1

We call {X_t}_{t=1}^N and {X̃_t}_{t=1}^N the synchronous and asynchronous time series, respectively. We simulate both of the processes for N = 10,000 and each K ∈ {16, 64}. The original dataset has 7 features: global active power, global reactive power, voltage, global intensity, sub-metering 1, sub-metering 2 and sub-metering 3, as well as information on date and time. We created an asynchronous version of this dataset in two steps:

1. Deterministic time step sampling. The durations between the consecutive observations are periodic and follow the scheme [1min, 2min, 3min, 7min, 2min, 2min, 4min, 1min, 2min, 1min]; the original observations in between are discarded. In other words, if the original observations are indexed according to time (in minutes) elapsed since the first observation, we keep the observations at indices n such that n mod 25 ∈ {0, 1, 3, 6, 13, 15, 17, 21, 22, 24}.
2. Random feature sampling. At each remaining time step, we choose one out of the seven features that will be available at this step. The probabilities of the features were chosen to be proportional to [1, 1.5, 1.5², ..., 1.5⁶] and randomly assigned to each feature before sampling (so that each feature has a constant probability of its value being available at each time step).

At each time step the sub-sampled dataset is a 10-dimensional vector that contains information about the time, the date, 7 indicator features that imply which feature is available, and the value of this feature. The length of the sub-sampled dataset is above 800 thousand, i.e. 40% of the original dataset's length. The schedule of the sampled timesteps and available features is attached in the data folder in the supplementary material. In Figure 7 we present significance and offset activations for three input series, from the network trained on the electricity dataset. Each row represents activations corresponding to past values of a single feature.

Figure 7: Activations of the significance and offset sub-networks for the network trained on the Electricity dataset. We present the 25 most recent out of 60 past values included in the input data, for 3 separate datapoints. Note the log scale on the left graph.
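The kept index set {0, 1, 3, 6, 13, 15, 17, 21, 22, 24} above is our reconstruction from the duration scheme (cumulative sums of the durations modulo 25). The sketch below, with names and array sizes of our own choosing, makes both sampling steps explicit:

    import numpy as np

    # Deterministic time-step sampling: durations of 1,2,3,7,2,2,4,1,2,1
    # minutes repeat, so the kept minute-indices cycle with period 25.
    durations = [1, 2, 3, 7, 2, 2, 4, 1, 2, 1]
    offsets = np.cumsum([0] + durations[:-1])   # [0, 1, 3, 6, 13, 15, 17, 21, 22, 24]
    n = np.arange(200)                          # stand-in for minute indices
    keep = np.isin(n % 25, offsets)
    print(n[keep][:12])

    # Random feature sampling: one of 7 features per kept step, with
    # probabilities proportional to 1, 1.5, 1.5**2, ..., 1.5**6.
    p = 1.5 ** np.arange(7)
    p /= p.sum()
    rng = np.random.default_rng(0)
    available_feature = rng.choice(7, size=int(keep.sum()), p=p)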
Convolutional architecture for learning data-dependent weights for autoregressive forecasting of time series.
MixUp is a data augmentation scheme in which pairs of training samples and their corresponding labels are mixed using linear coefficients. Without label mixing, MixUp becomes a more conventional scheme: input samples are moved but their original labels are retained. Because samples are preferentially moved in the direction of other classes, we refer to this method as directional adversarial training, or DAT. We show that under two mild conditions, MixUp asymptotically converges to a subset of DAT. We define untied MixUp (UMixUp), a superset of MixUp wherein training labels are mixed with different linear coefficients to those of their corresponding samples. We show that under the same mild conditions, untied MixUp converges to the entire class of DAT schemes. Motivated by the understanding that UMixUp is both a generalization of MixUp and a form of adversarial training, we experiment with different datasets and loss functions to show that UMixUp provides improved performance over MixUp. In short, we present a novel interpretation of MixUp as belonging to a class highly analogous to adversarial training, and on this basis we introduce a simple generalization which outperforms MixUp. Deep learning applications often require complex networks with a large number of parameters. Although neural networks perform so well that their ability to generalize is an area of study in itself, their high complexity nevertheless causes them to overfit their training data. For this reason, effective regularization techniques are in high demand. There are two flavors of regularization: complexity curtailing and data augmentation. Complexity-curtailing methods constrain models to learning in a subset of parameter space which has a higher probability of generalizing well. Notable examples are weight decay and dropout. Data augmentation methods add transformed versions of training samples to the original training set. Conventionally, transformed samples retain their original label, so that models effectively see a larger set of data-label training pairs. Commonly applied transformations in image applications include flips, crops and rotations. A recently devised family of augmentation schemes called adversarial training has attracted active research interest. Adversarial training seeks to reduce a model's propensity to misclassify minimally perturbed training samples, or adversarials. While attack algorithms used for testing model robustness may search for adversarials in unbounded regions of input space, adversarial training schemes generally focus on perturbing training samples within a bounded region, while retaining the sample's original label. Another recently proposed data augmentation scheme is MixUp, in which new samples are generated by mixing pairs of training samples using linear coefficients. Despite its well-established generalization performance, the working mechanism of MixUp is not well understood. Prior work suggests viewing MixUp as imposing local linearity on the model using points outside of the data manifold. While this perspective is insightful, we do not believe it paints a full picture of how MixUp operates. A recent study provides empirical evidence that MixUp improves adversarial robustness, but does not present MixUp as a form of adversarial training. We build a framework to understand MixUp in a broader context: we argue that adversarial training is a central working principle of MixUp.
To support this contention, we connect MixUp to a MixUp-like scheme which does not perform label mixing, and we relate this scheme to adversarial training. Without label mixing, MixUp becomes a conventional augmentation scheme: input samples are moved, but their original labels are retained. Because samples are moved in the direction of other samples, which are typically clustered in input space, we describe this method as "directional". Because this method primarily moves training samples in the direction of adversarial classes, it is analogous to adversarial training. We thus refer to MixUp without label mixing as directional adversarial training (DAT). We show that MixUp converges to a subset of DAT under mild conditions, and we thereby argue that adversarial training is a working principle of MixUp. Inspired by this new understanding of MixUp as a form of adversarial training, and upon realizing that MixUp is (asymptotically) a subset of DAT, we introduce Untied MixUp (UMixUp), a simple enhancement of MixUp which converges to the entire family of DAT schemes, as depicted in Figure 1. Untied MixUp mixes data-label training pairs in a similar way to MixUp, with the distinction that the label mixing ratio is an arbitrary function of the sample mixing ratio. We perform experiments to show that UMixUp's classification performance improves upon MixUp's. In short, this research is motivated by a curiosity to better understand the workings of MixUp. In so doing we aim to: 1. Establish DAT as analogous to adversarial training; this is discussed in section 4. 2. Establish UMixUp as a superset of MixUp, and as converging to the entire family of DAT schemes; in so doing, (a) establish MixUp's convergence to a subset of DAT, and thereby that it operates analogously to adversarial training, and (b) establish UMixUp as a broader class of MixUp-like schemes that operate analogously to adversarial training; this is discussed in section 5. 3. Establish empirically that UMixUp's classification performance improves upon MixUp's; this is discussed in section 6. Finally, we note that this paper has another contribution. Conventionally, MixUp is only applicable to baseline models that use cross-entropy loss. All analytical results we develop in this paper are applicable to a wider family of models using any loss function which we term target-linear. We define target-linearity and experiment with a new loss function called the negative-cosine loss to show its potential.

Regular (non-calligraphic) capitalized letters such as X will denote random variables, and their lowercase counterparts, e.g., x, will denote realizations of a random variable. Any sequence (a_1, a_2, ..., a_n) will be denoted by a_1^n. Likewise (A_1, A_2, ..., A_n) will be denoted by A_1^n, and a sequence of sample pairs ((x_1, x̄_1), (x_2, x̄_2), ..., (x_n, x̄_n)) by (x, x̄)_1^n. For any value a ∈ [0, 1], we will use ā as a short notation for 1 − a.

Classification Setting. Consider a standard classification problem, in which one wishes to learn a classifier that predicts the class label for a sample. Formally, let X be a vector space in which the samples of interest live and let Y be the set of all possible labels associated with these samples. The set of training samples will be denoted by D, a subset of X. We will use t(x) to denote the true label of x. Let F be a neural network function, parameterized by θ, which maps X to another vector space Z.
Let ϕ: Y → Z be a function that maps a label in Y to an element in Z such that for any y, y′ ∈ Y, if y ≠ y′ then ϕ(y) ≠ ϕ(y′). In the space Z, we refer to F(x) as the model's prediction. With slight abuse of language, we will occasionally refer to both t(x) and ϕ(t(x)) as the "label" of x. Let ℓ: Z × Z → R be a loss function, using which one defines an overall loss function as L(θ) = Σ_{x∈D} ℓ(F(x), ϕ(t(x))). Here we have taken the notational convention that the first argument of ℓ represents the model's prediction and the second represents the target label. In this setting, the learning problem is formulated as minimizing L with respect to its characterizing parameters θ. We say that a loss function ℓ(z, z′) is target-linear if for any scalars α and β, ℓ(z, αz′ + βz″) = α ℓ(z, z′) + β ℓ(z, z″). Target-linear loss functions arise naturally in many settings, for which we now provide two examples. For convenience, we define the vectors v = F(x) and y = ϕ(t(x)).

Cross-Entropy Loss. The conventional cross-entropy loss function, written in our notation, is ℓ_CE(v, y) = −Σ_i y_i log v_i, where v and y are constrained to being probability vectors. We note that in conventional applications, dim(Z) = |Y|, and the target label y is a one-hot vector with a 1 in the coordinate of the true class and 0 elsewhere. Constraining v to being a probability vector is achieved using a softmax output layer.

Negative-Cosine Loss. The "negative-cosine loss", usually used in its negated version, i.e., as the cosine similarity, can be defined as ℓ_NC(v, y) = −⟨v, y⟩, where v and y are constrained to being unit-length vectors. For v this can be achieved by simple division at the output layer, and for y by limiting the range of ϕ to an orthonormal basis (making it a conventional label-embedding function). It is clear that the cross-entropy loss ℓ_CE and the negative-cosine loss ℓ_NC are both target-linear, directly following from the definition of target-linearity.

Assumptions. The theoretical development of this paper relies on two fundamental assumptions, which we call "axioms".

Axiom 1 (Target linearity). The loss function used for the classification setting is target-linear. Thus the study of MixUp in this paper in fact goes beyond the standard MixUp, which uses the cross-entropy loss.

Much of the development in this paper concerns drawing sample pairs from D. A sequence of pairs (x, x̄)_1^n is said to be symmetric if for every (a, b) ∈ D × D, the number of occurrences of (a, b) in the sequence is equal to that of (b, a).

Axiom 2 (Symmetric pair-sampling distribution). Whenever a sample pair (x, x̄) is drawn from a distribution Q, Q is assumed to be symmetric. In the standard MixUp, two samples are drawn independently from D to form a pair, making this condition satisfied.

3 MIXUP, DAT, UNTIED MIXUP

We first provide a summary of each scheme for the reader's convenience; we then describe each scheme more systematically. For concision of equations to follow, we define mix_λ(x, x̄) := λx + λ̄x̄. MixUp is a data augmentation scheme in which samples are linearly combined using a mixing ratio λ ∈ [0, 1]: x̃ = mix_λ(x, x̄), where λ ∼ P_Mix. A target label is generated using the same mixing ratio λ: ỹ = λ ϕ(t(x)) + λ̄ ϕ(t(x̄)). DAT and UMixUp use the same method for generating samples, but use different λ distributions (P_DAT and P_uMix respectively). DAT and UMixUp also differ from MixUp in their target labels. DAT retains the sample's original label: ỹ = ϕ(t(x)), whereas UMixUp's label mixing ratio is a function of λ: ỹ = γ(λ) ϕ(t(x)) + γ̄(λ) ϕ(t(x̄)). In Untied MixUp, the label mixing ratio is "untied" from the sample mixing ratio, and can be any γ(λ). We will refer to γ as the weighting function. An Untied MixUp scheme is specified both by its mixing policy P_uMix and a weighting function γ.
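Since the three schemes differ only in how the target is built, a compact sketch may help. The following is an illustrative numpy mock-up, not the authors' code; the per-sample Beta(α, α) sampling mirrors the implementation choice described in the experiments section, and all names are assumptions:

```python
import numpy as np

def make_mixed_batch(x, y, x_bar, y_bar, alpha=1.0, scheme="mixup", gamma=None):
    """Build one mixed batch for MixUp, DAT, or Untied MixUp (UMixUp).

    x, x_bar: (batch, ...) paired input samples
    y, y_bar: (batch, num_classes) targets phi(t(x)), e.g. one-hot or embedded
    alpha:    Beta(alpha, alpha) parameter of the mixing policy
    gamma:    weighting function lambda -> [0, 1], used only by UMixUp
    """
    lam = np.random.beta(alpha, alpha, size=len(x))       # one lambda per sample
    lam_x = lam.reshape(-1, *([1] * (x.ndim - 1)))        # broadcast over input dims
    x_tilde = lam_x * x + (1.0 - lam_x) * x_bar           # mixed inputs (all schemes)

    if scheme == "mixup":        # label ratio tied to the sample ratio
        w = lam
    elif scheme == "dat":        # original label of x retained
        w = np.ones_like(lam)
    elif scheme == "umixup":     # label ratio is an arbitrary function of lambda
        w = gamma(lam)
    else:
        raise ValueError(scheme)

    w = w.reshape(-1, 1)
    y_tilde = w * y + (1.0 - w) * y_bar
    return x_tilde, y_tilde
```

Setting gamma to the identity recovers MixUp, while the constant function γ(λ) = 1 recovers DAT, matching the definitions above.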
To draw comparisons between MixUp, DAT, and Untied MixUp schemes, we establish a framework for characterizing their optimization problems. For each scheme m we consider its overall loss L^m and the expected value L^m_E of that loss with respect to its mixing ratio Λ. Let n be a positive integer. In every scheme, a sequence (x, x̄)_1^n := ((x_1, x̄_1), (x_2, x̄_2), ..., (x_n, x̄_n)) of sample pairs is drawn i.i.d. from Q, and a sequence λ_1^n of mixing ratios is drawn i.i.d. from the scheme's policy; the scheme's overall loss averages the per-pair losses ℓ_m(x_k, x̄_k, λ_k) over k = 1, ..., n. In MixUp, we refer to P_Mix as the mixing policy, with ℓ_Mix(x, x̄, λ) := ℓ(F(mix_λ(x, x̄)), λϕ(t(x)) + λ̄ϕ(t(x̄))).

Directional Adversarial Training (DAT). For any x, x̄ ∈ D and any λ ∈ [0, 1], we denote ℓ_DAT(x, x̄, λ) := ℓ(F(mix_λ(x, x̄)), ϕ(t(x))). In DAT, we refer to P_DAT as the adversarial policy.

Untied MixUp. Let γ be a function mapping [0, 1] to [0, 1]. For any x, x̄ ∈ D and any λ ∈ [0, 1], we denote ℓ_uMix(x, x̄, λ) := ℓ(F(mix_λ(x, x̄)), γ(λ)ϕ(t(x)) + γ̄(λ)ϕ(t(x̄))). Let P^m be P_uMix, and denote the overall and expected overall loss functions analogously. At this point it is apparent that MixUp is a special case of Untied MixUp, where the function γ(λ) takes the simple form γ(λ) = λ.

The main theoretical result of this paper is the relationship established between DAT and UMixUp, and by extension MixUp. Both MixUp and UMixUp will be shown to converge to DAT as the number of mixed sample pairs, n, tends to infinity. Prior to developing these results, we provide insight into DAT, in terms of its similarity to adversarial training and its regularization mechanisms.

Conventional adversarial training schemes augment the original training dataset by searching for approximations of true adversarials within bounded regions around each training sample. For a training sample x, a bounded region U known as an L_p ball is defined as U = {x + η : ||η||_p < ε}. Over this region, the loss function with respect to the true label of x is maximized; a typical loss function for an adversarial scheme is ℓ_adv(x) = max_{x′ ∈ U} ℓ_b(F(x′), ϕ(t(x))), where ℓ_b is the baseline loss function. Simply put, baseline training serves to learn correct classification over the training data, whereas adversarial training moves the classification boundary to improve generalization.

DAT, on the other hand, combines intra-class mixing (mixing two samples of the same class) and inter-class mixing (mixing samples of different classes). Intra-class mixing serves to smooth classification boundaries of inner-class regions, while inter-class mixing perturbs training samples in the direction of adversarial classes, which improves generalization. Inter-class mixing dwarfs intra-class mixing by volume of generated samples seen by the learning model in most many-class learning problems (by a 9-to-1 ratio in balanced 10-class problems, for instance). DAT, which primarily consists of inter-class mixing, can therefore be seen as analogous to adversarial training. The key distinction between conventional adversarial training and inter-class mixing is that MixUp movement is determined probabilistically within a bounded region, while adversarial movement is deterministic. Figure 2 illustrates the connection between standard adversarial training and DAT. Consider the problem of classifying the blue points and the black points in Figure 2a), where the dashed curve is a ground-truth classifier and the black curve indicates the classification boundary of F(x), which overfits the training data. In adversarial training, a training sample x is moved to a location within an L_p-ball around x while keeping its label to further train the model; the location, denoted by x_1 in Figure 2b), is chosen to maximize training loss. In DAT, a second sample x̄ governs the direction in which x is perturbed. If x̄ is chosen from a different class, as shown in Figure 2c), then the generated sample x_2 is used to further train the model.
If x̄ is chosen from the same class, as shown in Figure 2d), then the sample x_3 is used in further training. Note that the inter-class mixed sample x_2 pushes the model's classification boundary closer to the ground-truth classifier, thus connecting adversarial training and DAT. The intra-class sample x_3, on the other hand, mainly helps to smooth inner parts of the class region. The latter behaviour is an additional feature of DAT and MixUp, which distinguishes these schemes from adversarial training.

We now show that Untied MixUp and DAT are equivalent when n tends to infinity. A consequence of this equivalence is that it infuses both MixUp and UMixUp with the intuition of adversarial training. To that end, we relate the Untied MixUp loss function ℓ_uMix with the DAT loss function ℓ_DAT.

Lemma 1. For any (x, x̄) ∈ D × D and any λ ∈ [0, 1], ℓ_uMix(x, x̄, λ) = γ(λ) ℓ_DAT(x, x̄, λ) + γ̄(λ) ℓ_DAT(x̄, x, λ̄). This follows directly from the target-linearity of the loss function.

The next two lemmas show that as n tends to infinity, the overall losses of both DAT and UMixUp converge in probability to their respective overall expected losses.

Lemma 2. L^DAT((x, x̄)_1^n, λ_1^n) converges to L^DAT_E((x, x̄)_1^n) in probability as n → ∞. Lemma 3. L^uMix((x, x̄)_1^n, λ_1^n) converges to L^uMix_E((x, x̄)_1^n) in probability as n → ∞. These two lemmas have similar proofs, thus only the proof of Lemma 2 is given in section A.1.

Next we show that as n tends to infinity, UMixUp converges in probability to a subset of DAT, and DAT converges in probability to a subset of UMixUp. In other words, we show that as n increases, UMixUp converges to being equivalent to the entire class of DAT schemes. For that purpose, let F denote the space of all functions mapping [0, 1] to [0, 1]. Each configuration in P × F defines an Untied MixUp scheme. We now define U, which maps a DAT scheme to an Untied MixUp scheme. Specifically, U is a map from P to P × F such that for any p ∈ P, U(p) is a configuration (p, g) ∈ P × F, where g(λ) = p(λ) / (p(λ) + p(λ̄)).

Lemma 4. Let (x, x̄)_1^n be a sequence of sample pairs on which an Untied MixUp scheme specified by (P_uMix, γ) and a DAT scheme with policy P_DAT will apply independently. If (x, x̄)_1^n is symmetric and (P_uMix, γ) = U(P_DAT), then the two schemes have the same expected overall loss.

We now define another map D_u that maps an Untied MixUp scheme to a DAT scheme. Specifically, D_u is a map from P × F to P such that for any (p, g) ∈ P × F, D_u(p, g) is the policy p′ given by p′(λ) = g(λ) p(λ) + ḡ(λ̄) p(λ̄). It is easy to verify that ∫_0^1 p′(λ) dλ = 1; thus p′ is indeed a distribution in P and D_u is well defined.

Lemma 5. Let (x, x̄)_1^n be a sequence of sample pairs on which an Untied MixUp scheme specified by (P_uMix, γ) and a DAT scheme with policy P_DAT will apply independently. If (x, x̄)_1^n is symmetric and P_DAT = D_u(P_uMix, γ), then the two schemes have the same expected overall loss.

Lemmas 2, 3, 4 and 5 provide the building blocks for Theorem 1, which we state hereafter. As n increases, both DAT and UMixUp converge in probability toward their respective expected losses (Lemmas 2 and 3). Since as n increases the sequence (x, x̄)_1^n becomes arbitrarily close to the symmetric sampling distribution Q, by Lemma 4 the family of DAT schemes converges in probability to a subset of UMixUp schemes. Lemma 5 proves the converse, i.e., that as n increases the family of UMixUp schemes converges in probability to a subset of DAT schemes. As n increases, the family of UMixUp schemes therefore converges in probability to the entire family of DAT schemes.

Theorem 1. Let (x, x̄)_1^∞ be sample-pair data on which an Untied MixUp scheme specified by (P_uMix, γ) and a DAT scheme specified by P_DAT will apply. In the Untied MixUp scheme, let Λ_1^∞ be drawn i.i.d. from P_uMix; in the DAT scheme, let Υ_1^∞ be drawn i.i.d. from P_DAT. If the two schemes are related by U (or D_u), then their overall losses converge to each other in probability as n → ∞.

The equivalence between the two families of schemes also indicates that there are DAT schemes that do not correspond to a MixUp scheme. These DAT schemes correspond to Untied MixUp schemes beyond the standard MixUp.
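The explicit forms of g in U and of p′ in D_u above are reconstructed from the surrounding prose and the appendix fragments. As a sanity check, here is a short derivation sketch (assuming Lemma 1, target-linearity, and a symmetric pair distribution) of why this choice of g makes the UMixUp and DAT expected losses coincide:

```latex
% By Lemma 1 (using mix_\lambda(x,\bar{x}) = mix_{\bar\lambda}(\bar{x},x)):
\ell_{\mathrm{uMix}}(x,\bar{x},\lambda)
  = g(\lambda)\,\ell_{\mathrm{DAT}}(x,\bar{x},\lambda)
  + \bar{g}(\lambda)\,\ell_{\mathrm{DAT}}(\bar{x},x,\bar{\lambda}).

% Pooling the symmetric pair (\bar{x},x) and substituting \lambda \mapsto \bar\lambda
% in its second term, the total density placed on \ell_{\mathrm{DAT}}(x,\bar{x},\lambda) is
g(\lambda)\,p(\lambda) + \bar{g}(\bar{\lambda})\,p(\bar{\lambda})
  = \frac{p(\lambda)^2}{p(\lambda)+p(\bar{\lambda})}
  + \frac{p(\lambda)\,p(\bar{\lambda})}{p(\lambda)+p(\bar{\lambda})}
  = p(\lambda),
```

which is exactly the DAT adversarial policy p.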
The relationship between MixUp, DAT and Untied MixUp is shown in Figure 1.

We consider an image classification task on the Cifar10, Cifar100, MNIST and Fashion-MNIST datasets. The baseline classifier chosen is PreActResNet18. Two target-linear loss functions are evaluated: the cross-entropy (CE) loss and the negative-cosine (NC) loss as defined earlier. We implement the CE loss similarly to previous works, which use CE loss to implement the baseline model. In our implementation of the NC loss model, for each label y, ϕ(y) is mapped to a randomly selected unit-length vector of dimension d and fixed during training; the feature map of the original PreActResNet18 is linearly transformed to a d-dimensional vector. The dimension d is chosen as 300 for Cifar10, MNIST and Fashion-MNIST (which have one black-and-white channel) and 700 for Cifar100 (which has 3 colored channels).

Our implementation of MixUp and Untied MixUp improves upon the published implementation from the original authors of MixUp, Zhang et al. (2017b). For example, the original authors' implementation samples only one λ per mini-batch, giving rise to unnecessarily high stochasticity of the gradient signal; our implementation samples λ independently for each sample. Additionally, the original code combines inputs by mixing a mini-batch of samples with a shuffled version of itself. This approach introduces a dependency between sampled pairs and again increases the stochasticity of training. Our implementation creates two shuffled copies of the entire training dataset prior to each epoch, pairs them up, and then splits them into mini-batches. This gives a closer approximation to i.i.d. sampling and makes training smoother. While these implementation improvements have merit on their own, they do not provide a theoretical leap in understanding, and so we do not quantify their impact in our analysis.

All models examined are trained using mini-batched backpropagation, for 200 epochs. We sweep over the policy space of MixUp and Untied MixUp. For MixUp, it is sufficient to consider the distribution P_Mix to be symmetric about 0.5; thus we only consider P_Mix in the form of B(α, α) and scan through the single parameter α systematically. Since the policy of Untied MixUp is in the form of U(B(α, β)), searching through (α, β) becomes more difficult, so our policy search for Untied MixUp is restricted to an ad hoc heuristic search. For this reason, the best policy found for Untied MixUp might be quite far from the true optimum.

The main results of our experiments are given in Tables 1 to 4. As shown in the tables, each setting is run 100 times. For each run, we compute the error rate as the average test error rate over the final 10 epochs. The estimated mean ("MEAN") performance of a setting is computed as the average of the error rates over all runs for the same setting. The 95%-confidence interval ("ConfInt") for the estimated mean performance is also computed and shown in the tables. From these results, we see that the Untied MixUp schemes each outperform their MixUp counterparts. Specifically, in 6 of the 8 cases (those printed in bold font), the confidence interval of Untied MixUp is completely disjoint from that of the corresponding MixUp scheme; and in some cases, the separation of confidence intervals is by a large margin. Note that the baseline model (PreActResNet18) has been designed with a highly focused inductive bias for image classification tasks.
Under such an inductive bias, one expects that the room for regularization (or the "amount of overfitting") isn't abundant. As such, we consider the improvement of Untied MixUp over MixUp rather significant. The results show empirically that MixUp and Untied MixUp both work on the NC loss models. This validates our generalization of MixUp (and Untied MixUp) to models built with target-linear losses.

Table 1 (Cifar10; mean error rate and 95% confidence interval over 100 runs):

model | policy | runs | MEAN | ConfInt
baseline-CE | − | 100 | 5.476% | 0.027%
mixUp-CE | B(0.9, 0.9) | 100 | 4.199% | 0.023%
uMixUp-CE | U(B(2.2, 0.9)) | 100 | 4.177% | 0.025%
baseline-NC | − | 100 | 5.605% | 0.030%
mixUp-NC | B(1.0, 1.0) | 100 | 4.508% | 0.022%
uMixUp-NC | U(B(1.8, 1.0)) | 100 | 4.455% | 0.025%

(Only partial rows of the tables for the remaining datasets are recoverable: uMixUp U(B(1.3, 0.9)) 23.819% ± 0.054%; uMixUp U(B(1.7, 1.0)) 0.609% ± 0.005%; baseline-NC 0.720% ± 0.007%; mixUp-NC B(1.0, 1.0) 0.607% ± 0.004%; uMixUp-NC U(B(1.3, 0.9)) 0.592% ± 0.005%.)

A.1 PROOF OF LEMMA 2 (sketch). Define the overall losses according to the first n elements of (x, x̄)_1^∞ and the first n elements of Λ_1^∞ as input; the claim then follows from a law-of-large-numbers argument over the i.i.d. mixing ratios.

A.2 PROOF OF LEMMA 4 (sketch). For any given λ, evaluate the E_{λ∼P_Mix} γ(λ) terms, where (a) is due to a change of variable in the integration and (b) is due to the symmetry of (x, x̄)_1^n. Note that in equation 5, g(λ) is undefined at values of λ for which the denominator is zero; but the lemma holds true because the denominator is only zero when p(λ) = 0, so those λ for which g(λ) is undefined never get drawn in the DAT scheme.

A.3 PROOF OF LEMMA 5 (sketch). Expand the DAT loss into terms ℓ_DAT(x_k, x̄_k, λ) weighted by γ(λ)P_Mix(λ) + γ̄(λ̄)P_Mix(λ̄), where (a) is due to the symmetry of (x, x̄)_1^n, and (b) is by a change of variable in the second term (renaming 1 − λ as λ).
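The pairing procedure described in the experiments section (two independent shuffles of the full training set, paired up before each epoch and then split into mini-batches, with λ drawn per sample) can be sketched as follows; this is an illustrative mock-up, not the authors' released code, and all names are assumptions:

```python
import numpy as np

def paired_epoch_batches(dataset_size, batch_size, alpha, rng=np.random):
    """Yield (left_indices, right_indices, lambdas) for one epoch.

    Two independent permutations of the whole training set are paired,
    approximating i.i.d. pair sampling, then split into mini-batches;
    each sample gets its own mixing ratio lambda ~ Beta(alpha, alpha).
    """
    left = rng.permutation(dataset_size)
    right = rng.permutation(dataset_size)
    for start in range(0, dataset_size, batch_size):
        idx_l = left[start:start + batch_size]
        idx_r = right[start:start + batch_size]
        lam = rng.beta(alpha, alpha, size=len(idx_l))  # per-sample mixing ratio
        yield idx_l, idx_r, lam
```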
We present a novel interpretation of MixUp as belonging to a class highly analogous to adversarial training, and on this basis we introduce a simple generalization which outperforms MixUp
426
scitldr
Plan recognition aims to find target plans that best explain observed actions, based on plan libraries and/or domain models. Despite the success of previous approaches to plan recognition, they mostly rely on correct action observations. Recent advances in visual activity recognition have the potential to enable applications such as automated video surveillance. Effective approaches for such problems would require the ability to recognize the plans of agents from video information. Traditional plan recognition algorithms rely on access to detailed planning domain models. One recent promising direction involves learning approximate (or shallow) domain models directly from observed activity sequences. Such plan recognition approaches expect observed action sequences as inputs. However, visual inference results are often noisy and uncertain, typically represented as a distribution over possible actions. In this work, we develop a visual plan recognition framework that recognizes plans with an approximate domain model learned from uncertain visual data.
Handling Uncertainty in Visual Perception for Plan Recognition
427
scitldr
We consider the task of answering complex multi-hop questions using a corpus as a virtual knowledge base (KB). In particular, we describe a neural module, DrKIT, that traverses textual data like a virtual KB, softly following paths of relations between mentions of entities in the corpus. At each step the operation uses a combination of sparse-matrix TF-IDF indices and maximum inner product search (MIPS) on a special index of contextual representations. This module is differentiable, so the full system can be trained completely end-to-end using gradient-based methods, starting from natural language inputs. We also describe a pretraining scheme for the index's mention encoder that generates hard negative examples using existing knowledge bases. We show that DrKIT improves accuracy by 9 points on 3-hop questions in the MetaQA dataset, cutting the gap between text-based and KB-based state-of-the-art by 70%. DrKIT is also very efficient, processing up to 10x more queries per second than existing state-of-the-art QA systems.

Large knowledge bases (KBs), such as FreeBase and Wikidata, organize information around entities, which makes it easy to reason over their contents. For example, given a query like "When was Grateful Dead's lead singer born?", one can identify the entity Grateful Dead and the path of relations LeadSinger, BirthDate to efficiently extract the answer, provided that this information is present in the KB. Unfortunately, large KBs are often incomplete. While relation extraction methods can be used to populate KBs, this process is inherently error-prone, and errors in extraction can propagate to downstream tasks. Advances in open-domain QA suggest an alternative: instead of performing relation extraction, one could treat a large corpus as a virtual KB by answering queries with spans from the corpus. This ensures facts are not lost in the relation extraction process, but also poses challenges. One challenge is that it is relatively expensive to answer questions using QA models which encode each document in a query-dependent fashion, even with modern hardware. The cost of QA is especially problematic for certain complex questions, such as the example question above. If the passages stating that "Jerry Garcia was the lead singer of Grateful Dead" and "Jerry Garcia was born in 1942" are far apart in the corpus, it is difficult for systems that retrieve and read a single passage to find an answer, even though in this example it might be easy to answer the question after the relations were explicitly extracted into a KB. More generally, complex questions involving sets of entities or paths of relations may require aggregating information from entity mentions in multiple documents, which is expensive. One step towards efficient QA is the recent work of Seo et al. (2018; 2019) on phrase-indexed question answering (PIQA), in which spans in the text corpus are associated with question-independent contextual representations and then indexed for fast retrieval. Natural language questions are then answered by converting them into vectors that are used to perform maximum inner product search (MIPS) against the index. This ensures efficiency during inference. However, this approach cannot be directly used to answer complex queries, since by construction, the information stored in the index is about the local context around a span: it can only be used for questions where the answer can be derived by reading a single passage. This paper addresses this limitation of phrase-indexed question answering.
We introduce an efficient, end-to-end differentiable framework for doing complex QA over a large text corpus that has been encoded in a query-independent manner. Specifically, we consider "multi-hop" complex queries which can be answered by repeatedly executing a "soft" version of the operation below, defined over a set of entities X and a relation R: Y = X.follow(R) = {x′ : ∃x ∈ X s.t. R(x, x′) holds}. In past work, soft differentiable versions of this operation were used to answer multi-hop questions against an explicit KB. Here we propose a more powerful neural module which approximates this operation against an indexed corpus. In our module, the input X is a sparse vector representing a weighted set of entities, and the relation R is a dense feature vector, e.g. a vector derived from a neural network over a natural language query. The output Y is another sparse vector representing the weighted set of entities, aggregated over entity mentions in the top-k spans retrieved from the index. The spans in turn are retrieved using a MIPS query constructed from X and R, and we discuss pretraining schemes for the index in §2.3. For multi-hop queries, the output entities Y can be recursively passed as input to the next iteration of the same module. The weights of the entities in Y are differentiable w.r.t. the MIPS queries, which allows end-to-end learning without any intermediate supervision. We discuss an implementation based on sparse matrix-vector products, whose runtime and memory depend only on the number of spans K retrieved from the index. This is crucial for scaling up to large corpora, and leads to up to 15x faster inference than existing state-of-the-art multi-hop and open-domain QA systems. The system we introduce is called DrKIT (for Differentiable Reasoning over a Knowledge base of Indexed Text). We test DrKIT on the MetaQA benchmark for complex question answering, and show that it improves on prior text-based systems by 5 points on 2-hop and 9 points on 3-hop questions, reducing the gap between text-based and KB-based systems by 30% and 70%, respectively. We also test DrKIT on a new dataset of multi-hop slot-filling over Wikipedia articles, and show that it outperforms DrQA and PIQA adapted to this task.

We want to answer a question q using a text corpus as if it were a KB. We start with the set of entities z in the question q and would ideally want to follow relevant outgoing relation edges in the KB to arrive at the answer. To simulate this behaviour on text, we first expand z to a set of co-occurring mentions (say using TF-IDF), m. Not all of these co-occurring mentions are relevant to the question q, so we train a neural network which filters the mentions based on a relevance score of q to m. Then we can aggregate the resulting set of mentions m to the entities they refer to, ending up with an ordered set z′ of entities which are answer candidates, very similar to traversing the KB. Furthermore, if the question requires more than one hop to answer, we can repeat the above procedure starting with z′. This is depicted pictorially in Figure 1. We begin by formalizing this idea in a probabilistic framework in §2.1. In §2.2, we describe how the expansion of entities to mentions and the filtering of mentions can be performed efficiently, using sparse matrix products and MIPS algorithms. Lastly we discuss a pretraining scheme for constructing the mention representations in §2.3. We denote the given corpus as D = {d^1, d^2, ...}, where each document d^k = (d^k_1, ..., d^k_{|d^k|}) is a sequence of tokens.
We start by running an entity linker over the corpus to identify mentions of a fixed set of entities E. Each mention m is a tuple (e_m, k_m, i_m, j_m) denoting that the text span d^{k_m}_{i_m}, ..., d^{k_m}_{j_m} in document k_m mentions the entity e_m ∈ E; the collection of all mentions in the corpus is denoted M. Note that typically |M| ≫ |E|. We assume a weakly supervised setting where during training we only know the final answer entities a ∈ E for a T-hop question. We denote the latent sequence of entities which answer each of the intermediate hops as z_0, z_1, ..., z_T ∈ E, where z_0 is mentioned in the question and z_T = a. We can recursively write the probability of an intermediate answer as

Pr(z_t | q) = Σ_{z_{t−1} ∈ E} Pr(z_t | q, z_{t−1}) · Pr(z_{t−1} | q).  (1)

Here Pr(z_0 | q) is the output of an entity linking system over the question, and Pr(z_t | q, z_{t−1}) corresponds to a single-hop model which answers the t-th hop, given the entity from the previous hop z_{t−1}, by following the appropriate relation. Eq. 1 models reasoning over a chain of latent entities, but when answering questions over a text corpus, we must reason over entity mentions, rather than entities themselves. Hence Pr(z_t | q, z_{t−1}) needs to be aggregated over all mentions of z_t, which yields

Pr(z_t | q, z_{t−1}) = Σ_{m ∈ M} Pr(z_t | m) · Pr(m | q, z_{t−1}).  (2)

The interesting term to model in the above equation is Pr(m | q, z_{t−1}), which represents the relevance of mention m given the question about entity z_{t−1}. Following the analogy of a KB, we first expand the entity z_{t−1} to co-occurring mentions m and use a learnt scoring function to find the relevance of these mentions. Formally, let F(m) denote a TF-IDF vector for the document containing m, let G(z_{t−1}) be the TF-IDF vector of the surface form of the entity from the previous hop, and let s_t(m, z, q) be a learnt scoring function (different for each hop). Thus, we model Pr(m | q, z_{t−1}) as

Pr(m | q, z_{t−1}) ∝ 1[G(z_{t−1}) · F(m) > ε] · s_t(m, z_{t−1}, q).  (3)

Another equivalent way to look at our model in Eq. 3 is that the second term retrieves mentions of the correct type requested by the question in the t-th hop, and the first term filters these based on co-occurrence with z_{t−1}. When dealing with a large set of mentions m, we will typically retain only the top-k relevant mentions. We will show that this joint modelling of co-occurrence and relevance is important for good performance, as has also been observed in past work. The other term left in Eq. 2 is Pr(z | m), which is 1 if mention m matches the entity z and 0 otherwise, since each mention can only point to a single entity. In general, to compute Eq. 2 the mention scoring of Eq. 3 needs to be evaluated for all latent entity and mention pairs, which is prohibitively expensive. However, by restricting s_t to be an inner product we can implement this efficiently (§2.2). To highlight the differentiability of the proposed overall scheme, we can represent the computation in Eq. 2 as matrix operations. We pre-compute the TF-IDF term for all entities and mentions into a sparse matrix, which we denote as A_{E→M}[e, m] = 1(G(e) · F(m) > ε). Then entity expansion to co-occurring mentions can be considered a sparse-matrix by sparse-vector multiplication between A_{E→M} and z_{t−1}. For the relevance scores, let T_K(s_t(m, z_{t−1}, q)) denote the top-K relevant mentions encoded as a sparse vector in R^{|M|}. Finally, the aggregation of mentions to entities can be formulated as multiplication with another sparse matrix A_{M→E}, which encodes coreference, i.e. mentions corresponding to the same entity.
Putting all these together, using ⊙ to denote elementwise product, and defining Z_t = [Pr(z_t = e_1 | q); ...; Pr(z_t = e_{|E|} | q)], we can observe that for large K (i.e., as K → |M|), Eq. 2 becomes equivalent to:

Z_t = softmax([Z_{t−1}^T A_{E→M} ⊙ T_K(s_t(·, z_{t−1}, q))] A_{M→E}).  (4)

Note that every operation in the above equation is differentiable and between sparse matrices and vectors; we discuss efficient implementations in §2.2. Further, the number of non-zero entries in Z_t is bounded by K, since we filtered (via the multiplication in Eq. 4) to the top-K relevant mentions among the TF-IDF based expansion, and since each mention can only point to a single entity in A_{M→E}. This is important, as it prevents the number of entries in Z_t from exploding across hops (which might happen if, for instance, we added the dense and TF-IDF retrievals instead). We can view Z_{t−1}, Z_t as weighted multisets of entities, and s_t(m, z, q) as implicitly selecting mentions which correspond to a relation R. Then Eq. 4 becomes a differentiable implementation of Z_t = Z_{t−1}.follow(R), i.e. mimicking the graph traversal in a traditional KB. We thus call Eq. 4 a textual follow operation.

Training and Inference. The model is trained completely end-to-end by optimizing the cross-entropy loss between Z_T, the weighted set of entities after T hops, and the ground-truth answer set A. We use a temperature coefficient λ when computing the softmax in Eq. 4, since the inner product scores of the top-K retrieved mentions are typically high values, which would otherwise result in very peaked distributions of Z_t. Finally, we also found that taking a maximum over the mention set of an entity, M_{z_t}, in Eq. 2 works better in practice than taking a sum. This corresponds to optimizing only over the most confident mention of each entity, which works for corpora like Wikipedia that do not have much redundancy of information. A similar observation has been made in past work on weakly supervised settings.
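To make the shape bookkeeping in Eq. 4 concrete, here is a minimal scipy sketch of one textual follow hop. All names are illustrative, the dense score vector and the argpartition call stand in for the approximate MIPS index, and the ragged row-slicing implementation of §2.2 is replaced by a plain sparse product:

```python
import numpy as np
import scipy.sparse as sp

def textual_follow(z_prev, A_e2m, A_m2e, mention_scores, K=10000, temp=4.0):
    """One hop of the textual follow operation (Eq. 4), shapes only.

    z_prev:         (1, |E|) sparse row vector, the distribution Z_{t-1}
    A_e2m:          (|E|, |M|) sparse 0/1 TF-IDF co-occurrence matrix
    A_m2e:          (|M|, |E|) sparse 0/1 mention-to-entity coreference matrix
    mention_scores: (|M|,) dense inner products f(m) . g_t(q, z_{t-1}); in the
                    real system only the top-K come back from MIPS (K < |M|)
    """
    # Expand the entity set to co-occurring mentions (sparse x sparse).
    expanded = z_prev @ A_e2m                              # (1, |M|) sparse

    # Keep the top-K relevant mentions; argpartition stands in for MIPS.
    topk = np.argpartition(-mention_scores, K)[:K]
    relevance = np.zeros_like(mention_scores)
    relevance[topk] = np.exp(mention_scores[topk] / temp)  # tempered scores

    # Element-wise filter, then aggregate mentions back to entities.
    filtered = expanded.multiply(relevance)                # (1, |M|) sparse
    z_next = sp.csr_matrix(filtered) @ A_m2e               # (1, |E|) sparse
    return z_next / z_next.sum()                           # normalize
```

A faithful implementation would keep this differentiable with respect to the query encoding (gradients flow through mention_scores), and would use the ragged row-slicing representation of A_{E→M} described in §2.2 so the expansion costs O(kµ) rather than touching every row.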
Sparse TF-IDF Mention Encoding. To compute the sparse A_{E→M} for entity-mention expansion in Eq. 4, the TF-IDF vectors F(m) and G(z_{t−1}) are constructed over unigrams and bigrams, hashed to a vocabulary of 16M buckets. While F computes the vector from the whole passage around m, G only uses the surface form of z_{t−1}. This corresponds to retrieving all mentions in a document retrieved using z_{t−1} as the query. We limit the number of retrieved mentions per entity to a maximum of µ, which leads to a |E| × |M| sparse matrix.

Efficient Entity-Mention Expansion. The expansion from a set of entities to mentions occurring around them can be computed using the sparse-matrix by sparse-vector product Z_{t−1}^T A_{E→M}. A simple lower bound for multiplying a sparse |E| × |M| matrix, with a maximum of µ non-zeros in each row, by a sparse |E| × 1 vector with k non-zeros is Ω(kµ). Note that this lower bound is independent of the size of the matrix A_{E→M}, or in other words independent of the number of entities or mentions. To attain the lower bound, the multiplication algorithm must be vector-driven, because any matrix-driven algorithm needs to at least iterate over all the rows; instead we slice out the relevant rows from A_{E→M}. To enable this, our solution is to represent the sparse matrix A_{E→M} as two row-wise lists of variable-sized lists holding the non-zero elements' indices and values. This results in a ragged representation of the matrix which can be easily sliced corresponding to the non-zero entries in the vector in O(log |E|) time. We are then left with k sparse vectors with at most µ non-zero elements each. We can add these k sparse vectors, weighted by the corresponding values from the vector z, in O(k max{k, µ}) time. Moreover, such an implementation is feasible with deep learning frameworks such as TensorFlow. We quickly test the scalability of our approach by varying the number of entities for a fixed density of mentions µ (from Wikipedia). Figure 2 compares our approach to the default sparse-matrix times dense-vector product (no sparse-matrix times sparse-vector product is available in TensorFlow).

Efficient Top-k Mention Relevance Filtering. To make computation of Eq. 4 feasible, we need an efficient way to get the top-k relevant mentions related to an entity in z_{t−1} for a given question q, without enumerating all possibilities. A key insight is that by restricting the scoring function s_t(m, z_{t−1}, q) to an inner product, we can easily approximate a parallel version of this computation across all mentions m. To do this, let f(m) be a dense encoding of m, and g_t(q, z_{t−1}) be a dense encoding of the question q for the t-th hop, both in R^p (the details of the dense encodings are provided in the next paragraph); then the scoring function becomes s_t(m, z_{t−1}, q) = f(m) · g_t(q, z_{t−1}), which can be computed in parallel by multiplying a matrix f(M) = [f(m_1); f(m_2); ...] with g_t(q, z_{t−1}). Although this matrix will be very large for a realistic corpus, since eventually we are only interested in the top-k values we can use an approximate algorithm for Maximum Inner Product Search (MIPS) to find the k top-scoring elements. The complexity of this filtering step using MIPS is roughly O(kp polylog|M|).

Mention and Question Encoders. Mentions are encoded by passing the passages they are contained in through a BERT-large model (trained as described in §2.3). Suppose mention m appears in passage d, starting at position i and ending at position j. Then f(m) = W[H^d_i; H^d_j], where H^d is the sequence of embeddings output from BERT, and W is a linear projection to size p. The queries are encoded with a smaller BERT-like model: specifically, the query is tokenized with WordPieces, appended to a special [CLS] token, and then passed through a 4-layer Transformer network with the same architecture as BERT, producing an output sequence H^q. The g_t functions are defined similarly to the BERT model used for SQuAD-style QA. For each hop t = 1, ..., T, we add two additional Transformer layers on top of H^q, which will be trained to produce MIPS queries from the [CLS] encoding; the first added layer produces a MIPS query H^q_st to retrieve a start token, and the second added layer a MIPS query H^q_en to retrieve an end token. We concatenate the two to define the hop query. Finally, to condition on current progress, we add the embeddings of z_{t−1}: specifically, we use entity embeddings E ∈ R^{|E|×p} to construct an average embedding of the set Z_{t−1}. To avoid a large number of parameters in the model, we compute the entity embeddings as an average over the word embeddings of the tokens in the entity's surface form. The computational cost of the question encoder is O(p^2). Thus our total computational complexity to answer a query is Õ(k max{k, µ} + kp + p^2) (almost independent of the number of entities or mentions!), with O(µ|E| + p|M|) memory to store the precomputed matrices and mention index.

Ideally, we would like to train the mention encoder f(m) end-to-end using labeled QA data only. However, this poses a challenge when combined with approximate nearest neighbor search, since after every update to the parameters of f, one would need to recompute the embeddings of all mentions in M.
We thus adopt a staged training approach: we first pre-train a mention encoder f(m), then compute and index embeddings for all mentions once, keeping these embeddings fixed when training the downstream QA task. Empirically, we observed that using BERT representations "out of the box" does not capture the kind of information our task requires (Appendix §C), and thus pretraining the encoder to capture better mention understanding is a crucial step. One option adopted by previous researchers is to fine-tune BERT on SQuAD. However, SQuAD is limited to only 536 articles from Wikipedia, leading to a very specific distribution of questions, and is not focused on entity- and relation-centric questions. Here we instead train the mention encoder using distant supervision from the KB. Specifically, assume we are given an open-domain KB consisting of facts (e_1, R, e_2) specifying that the relation R holds between the subject e_1 and the object e_2. Then for a corpus of entity-linked text passages {d_k}, we automatically identify tuples (d, (e_1, R, e_2)) such that d mentions both e_1 and e_2. Using this data, we learn to answer slot-filling queries in a reading comprehension setup, where the query q is constructed from the surface form of the subject entity e_1 and a natural language description of R (e.g. "Jerry Garcia, birth place, ?"), and the answer e_2 needs to be extracted from the passage d. Using string representations in q ensures our pre-training setup is similar to the downstream task. In pretraining, we use the same scoring function as in the previous section, but over all spans m in the passage. For effective transfer to the full corpus setting, we must also provide negative instances during pretraining, i.e. query and passage pairs where the answer is not contained in the passage. We consider three types of hard negatives: shared-entity negatives, which pair a query (e_1, R, ?) with a passage which mentions e_1 but not the correct tail answer; shared-relation negatives, which pair a query (e_1, R, ?) with a passage mentioning two other entities e′_1 and e′_2 in the same relation R; and random negatives, which pair queries with random passages from the corpus. For the multi-hop slot-filling experiments below, we used Wikidata (Vrandečić & Krötzsch, 2014) as our KB, Wikipedia as the corpus, and SLING to identify entity mentions. We restrict d to be from the Wikipedia article of the subject entity to reduce noise. Overall we collected 950K pairs over 550K articles. For the experiments with MetaQA, we supplemented this data with the corpus and KB provided with MetaQA, using string matching for entity linking.

3.1 METAQA: MULTI-HOP QUESTION ANSWERING WITH TEXT

Dataset. We first evaluate DrKIT on the MetaQA benchmark for multi-hop question answering. MetaQA consists of around 400K questions ranging from 1 to 3 hops, constructed by sampling relation paths from a movies KB and converting them to natural language using templates. The questions cover 8 relations and their inverses, around 43K entities, and are paired with a corpus consisting of 18K Wikipedia passages about those entities. The questions are all designed to be answerable using either the KB or the corpus, which makes it possible to compare the performance of our "virtual KB" QA system to a plausible upper-bound system that has access to a complete KB. We used the same version of the data as prior work. Results.
Table 1 shows the accuracy of the top-most retrieved entity (Hits@1) for the sub-tasks ranging from 1-3 hops, and compares to the state-of-the-art systems for the text-only setting on these tasks. DrKIT outperforms the prior state-of-the-art by a large margin in the 2-hop and 3-hop cases. The strongest prior method, PullNet, uses a graph neural network model with learned iterative retrieval from the corpus to answer multi-hop questions. It uses the MetaQA KB during training to identify shortest paths between the question entity and answer entity, which are used to supervise the text retrieval and reading modules. DrKIT, on the other hand, has strong performance without such supervision, demonstrating its capability for end-to-end learning. (Adding the same intermediate supervision to DrKIT does not even consistently improve performance: it gives DrKIT a small lift on 1- and 2-hop questions but does not help for 3-hop questions.) DrKIT's architecture is driven, in part, by efficiency considerations: unlike PullNet, it is designed to answer questions with minimal processing at query time. Figure 3 compares the tradeoffs between accuracy and inference time of DrKIT and PullNet as we vary K, the number of dense nearest neighbors retrieved. The runtime gains of DrKIT over PullNet range between 5x-15x.

Analysis. We perform ablations on DrKIT for the MetaQA data. First, we empirically confirm that taking a sum instead of a max over the mentions of an entity hurts performance. So does removing the softmax temperature (by setting λ = 1). Removing the TF-IDF component from Eq. 3 leads to a large decrease in performance for 2-hop and 3-hop questions. This is because the TF-IDF component constrains the end-to-end learning to be along reasonable paths of co-occurring mentions; otherwise the search space becomes too large. The results also highlight the importance of the pretraining method introduced in §2.3, as DrKIT over an index of BERT representations without pretraining is 23 points worse in the 3-hop case. We also check the performance when the KB used for pre-training is incomplete. Even with only 25% of edges retained, we see high performance, better than PullNet, and far better than state-of-the-art KB-only methods. We analyzed 100 2-hop questions correctly answered by DrKIT and found that for 83 of them the intermediate answers were also correct. The other 17 cases were all ones where the second hop asked about genre, e.g. "What are the genres of the films directed by Justin Simien?". We found that in these cases the intermediate answer was the same as the correct final answer; essentially the model learned to answer the question in 1 hop and copy it over for the second hop. Among incorrectly answered questions, the intermediate accuracy was only 47%, so the mistakes were evenly distributed across the two hops.

The MetaQA dataset has been fairly well-studied, but has limitations since it is constructed over a small KB. In this section we consider a new task, in a larger-scale setting with many more relations, entities and text passages. The new dataset also lets us evaluate performance in a setting where the test set contains documents and entities not seen at training time, an important issue when devising a QA system that will be used in a real-world setting, where the corpus and entities in the discourse change over time; and it lets us perform analyses not possible with MetaQA, such as extrapolating from single-hop to multi-hop settings without retraining. Dataset.
We sample two subsets of Wikipedia articles, one for pre-training (§2.3) and end-to-end training, and one for testing. For each subset we consider the set of WikiData entities mentioned in the articles, and sample paths of 1-3 hop relations among them, ensuring that any intermediate entity has no more than 100 mentions. Then we construct a semi-structured query by concatenating the surface forms of the head entity with the path of relations (e.g. "Helene Gayle, employer, founded by, ?"). The answer is the tail entity at the end of the path, and the task is to extract it from the Wikipedia articles. Existing slot-filling tasks focus on a single-hop, static-corpus setting, whereas our task considers a dynamic setting which requires traversing the corpus. For each setting, we create a dataset with 10K articles, 120K passages, > 200K entities and 1.5M mentions, resulting in an index of size about 2GB. We include example queries in the Appendix.

Baselines. We adapt two publicly available open-domain QA systems for this task: DrQA and PIQA. While DrQA is relatively mature and widely used, PIQA is recent, and similar to our setup since it also answers questions with minimal computation at query time. It is broadly similar to a single textual follow operation in DrKIT, but is not constructed to allow retrieved answers to be converted to entities and then used in subsequent processing, so it is not directly applicable to multi-hop queries. We thus also consider a cascaded architecture which repeatedly applies Eq. 2, using either PIQA or DrQA to compute Pr(z_t | q, z_{t−1}) against the corpus, retaining at most k intermediate answers in each step. We tune k in the range of 1-10, since larger values make the runtime infeasible. Further, since these models were trained on natural language questions, we use publicly released templates to convert intermediate questions into natural text. We test off-the-shelf versions of these systems, as well as a version of PIQA re-trained on our slot-filling data. We compare these to a version of DrKIT trained only on single-hop queries (§2.3) and similarly cascaded, and to one version trained end-to-end on the multi-hop queries.

Results. Table 1 (right) lists the Hits@1 performance on this task. Off-the-shelf open-domain QA systems perform poorly, showing the challenging nature of the task. Re-training PIQA on the slot-filling data improves performance considerably, but DrKIT trained on the same data improves on it. A large improvement over these cascaded architectures comes from end-to-end training, which is made possible by the differentiable operation introduced in this paper. We also list the performance of DrKIT when trained against an index of fixed BERT-large mention representations. While this is comparable to the re-trained version of PIQA, it lags behind DrKIT pre-trained using the KB, once again highlighting the importance of the scheme outlined in §2.3. We also plot Hits@1 against queries/sec for cascaded versions of PIQA and DrKIT in Figure 3 (middle). We observe gains of 2x-3x for DrKIT, due to the efficient implementation of entity-mention expansion discussed in §2.2.

Analysis. In order to understand where the accuracy gains for DrKIT come from, we conduct experiments on a previously released dataset of slot-filling queries. We construct an open version of the task by collecting the Wikipedia articles of all subject entities in the data. A detailed discussion is in Appendix C; here we note the main findings.
PIQA trained on SQuAD only gets 30% macro-averaged accuracy on this data, but this improves to 46% when re-trained on our slot-filling data. Interestingly, a version of DrKIT which selects from all spans in the corpus performs similarly to PIQA (50%), but when using entity linking it improves significantly, to 66%. It also has 55% accuracy in answering queries about rare relations, i.e. those observed < 5 times in its training data. We also conduct probing experiments comparing the representations learned using slot-filling to those of vanilla BERT. We found that while the two are comparable in detecting fine-grained entity types, the slot-filling version is significantly better at encoding entity co-occurrence information.

Table 2: (Left) Retrieval performance on the HotpotQA benchmark dev set. Q/s denotes the number of queries per second during inference on a single 16-core CPU. Accuracy@k is the fraction of examples where both correct passages are retrieved in the top k. †: Baselines obtained from Das et al. (2019b). For DrKIT, we report the performance when the index is pretrained using the WikiData KB alone, the HotpotQA training questions alone, or both. *: Measured on different machines with similar specs. (Right) Overall performance on the HotpotQA task, when passing 10 retrieved passages to a downstream reading comprehension model.

Dataset. HotpotQA is a recent dataset of over 100K crowd-sourced multi-hop questions and answers over introductory Wikipedia passages. We focus on the open-domain fullwiki setting, where the two gold passages required to answer the question are not known in advance. The answers are free-form spans of text in the passages, not necessarily entities, and hence our model, which selects entities, is not directly applicable here. Instead, inspired by recent works, we look at the challenging sub-task of retrieving the passages required to answer the questions from a pool of 5.2M. This is a multi-hop IR task, since for many questions at least one passage may be 1-2 hops away from the entities in the question. Further, each passage is about an entity (the title entity of that Wikipedia page), and hence retrieving passages is the same as identifying the title entities of those passages. We apply DrKIT to this task of identifying, for each question, the two entities whose passages contain the information needed to answer that question. Then we pass the top 10 passages identified this way to a standard reading comprehension architecture to select the answer span.

Setup. We use the Wikipedia abstracts released with HotpotQA as the text corpus. The total number of entities is the same as the number of abstracts, 5.23M, and we consider hyperlinks in the text as mentions of the entities to whose pages they point, leading to 22.8M total mentions in an index of size 34GB. For pretraining the mention representations, we compare using the WikiData KB as described in §2.3 to directly using the HotpotQA training questions, with TF-IDF-retrieved passages as negative examples. We set A_{E→M}[e, m] = 1 if either the entity e is mentioned on the page of the entity denoted by m, or vice versa. For entity linking over the questions, we retrieve the top 20 entities based on the match between a bigram-based TF-IDF vector of the question and the vector of the surface form of the entity (the same as the title of the Wiki article). We found that the gold entities that need to be retrieved are within 2 hops of the entities linked in this manner for 87% of dev examples.
Unlike the MetaQA and WikiData datasets, however, for HotpotQA we do not know the number of hops required for each question in advance. Instead, we run DrKIT for 2 hops for each question, and then take a weighted average of the distribution over entities after each hop: Z* = π_0 Z_0 + π_1 Z_1 + π_2 Z_2. Z_0 consists of the entities linked to the question itself, rescored based on an encoding of the question, since in some cases one or both of the entities to be retrieved are in this set. Z_1 and Z_2 are given by Eq. 4. The mixing weights π_i are the softmax outputs of a classifier on top of another encoding of the question, learnt end-to-end on the retrieval task. This process can be viewed as soft mixing of different templates ranging from 0 to 2 hops for answering a question, similar to NQL.

Results. We compare our retrieval results to those presented in Das et al. (2019b) in Table 2 (left). We measure accuracy @k retrievals, since we care about retrieving both passages required to answer the question to pass to the downstream reading comprehension model. We see an improvement in accuracy across the board, with much higher gains @2 and @5. The main baseline is the entity-centric IR approach, which runs a BERT-based re-ranker on 200 pairs of passages for each question. Importantly, DrKIT improves over it by more than 10x in terms of queries per second during inference. Note that the inference time is measured using a batch size of 1 for both models for fair comparison. DrKIT can easily be run with batch sizes up to 40, but the entity-centric IR baseline cannot, due to the large number of runs of BERT for each query. When comparing different datasets for pretraining the index, there is not much difference between using the Wikidata KB and the HotpotQA questions. The latter has better accuracy @2, but overall the best performance comes from using a combination of both.

Lastly, we check the performance of the baseline reading comprehension model when given the passages retrieved by DrKIT. While there is a significant improvement over the baseline which uses TF-IDF-based retrieval, we see only a small improvement over the passages retrieved by the entity-centric IR baseline, despite the significantly improved accuracy @10 of DrKIT. Among the 33% of questions where the top 10 passages do not contain both correct passages, for around 20% the passage containing the answer is also missing. We conjecture this percentage is lower for the entity-centric IR baseline, and that the downstream model is able to answer some of these questions without the other supporting passage.

Neural Query Language (NQL) defines differentiable templates for multi-step access to a symbolic KB, in which relations between entities are explicitly enumerated. Here, we focus on the case where the relations are implicit in mention representations derived from text. Knowledge graph embeddings attach continuous representations to discrete symbols which allow them to be incorporated in deep networks. Embeddings often allow generalization to unseen facts using relation patterns, but text corpora are more complete in the information they contain. Prior work has also examined answering compositional questions by treating a text corpus (in that case the entire web) as a KB; however, that approach consists of parsing the query into a computation tree separately and running a black-box QA model on its leaves separately, which cannot be trained end-to-end.
Recent papers have also looked at complex QA using graph neural networks or by identifying paths of entities in text. These approaches rely on identifying a small relevant pool of evidence documents containing the information required for multi-step QA. Some incorporate a dynamic retrieval process to add text about entities identified as relevant in the previous layer of the model. Since the evidence text is processed in a query-dependent manner, the inference speed is slower than when it is pre-processed into an indexed representation (see Figure 3). The same limitation is shared by methods which perform multi-step retrieval interleaved with a reading comprehension model.

We present DrKIT, a differentiable module that is capable of answering multi-hop questions directly using a large entity-linked text corpus. DrKIT is designed to imitate traversal of a KB over the text corpus, providing the ability to follow relations in the "virtual" KB over text. We achieve state-of-the-art results on the MetaQA dataset for answering natural language questions, with a 9-point increase in the 3-hop case. We also developed an efficient implementation using sparse operations and inner product search, which led to a 10x increase in QPS over baseline approaches.

We use p = 400 dimensional embeddings for the mentions and queries, and 200-dimensional embeddings each for the start and end positions. This results in an index of size 750MB. When computing A_{E→M}, the entity-to-mention co-occurrence matrix, we only retain mentions in the top 50 paragraphs matched with an entity, to ensure sparsity. Further, we initialize the first 4 layers of the question encoder with the Transformer network from pre-training. For the first hop, we assign Z_0 as a 1-hot vector for the least frequent entity detected in the question using an exact match. The number of nearest neighbors K and the softmax temperature λ were tuned on the dev set of each task, and we found K = 10000 and λ = 4 to work best. We pretrain the index on a combination of the MetaQA corpus, using the KB provided with MetaQA for distant supervision data, and the Wikidata corpus (see Table 3).

Single-hop questions and relation extraction. A previously released dataset of 1M slot-filling queries of the form (e_1, R, ?), paired with Wikipedia sentences mentioning e_1, was used for training systems that answered single-step slot-filling questions based on a small set of candidate passages. Here we consider an open version of the same task, where answers to the queries must be extracted from a corpus rather than from provided candidates. We construct the corpus by collecting and entity-linking all paragraphs in the Wikipedia articles of all 8K subject entities in the dev and test sets, leading to a total of 109K passages. After constructing the TF-IDF A_{E→M} and coreference A_{M→E} matrices for this corpus, we directly use our pre-trained index to answer the test-set queries. Note that DrKIT-entities has a high Hits@1 performance on the rare-relations subset, showing that there is generalization to less frequent data due to the natural language representations of entities and relations.

Probing Experiments. Finally, to compare the representations learned by the BERT model fine-tuned on the Wikidata slot-filling task with those of vanilla BERT, we design two probing experiments. In each experiment, we keep the parameters of the BERT model (mention encoders) being probed fixed and only train the query encoders.
Probing Experiments. Finally, to compare the representations learned by the BERT model fine-tuned on the Wikidata slot-filling task, we design two probing experiments. In each experiment, we keep the parameters of the BERT model (mention encoders) being probed fixed and only train the query encoders. Similar to prior work, we use a weighted average of the layers of BERT here rather than only the top-most layer, where the weights are learned on the probing task. In the first experiment, we train and test on shared-entity negatives. Good performance here means the BERT model being probed encodes fine-grained entity-type information reliably. As shown in Table 4, BERT performs well on this task, suggesting it encodes fine-grained types well. In the second experiment, we train and test only on shared-relation negatives. Good performance here means that the BERT model encodes entity co-occurrence information reliably. In this probe task, we see a large performance drop for BERT, suggesting it does not encode entity co-occurrence information well. The good performance of the DrKIT model on both experiments suggests that fine-tuning on the slot-filling task primarily helps the contextual representations to also encode entity co-occurrence information, in addition to entity-type information.
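For concreteness, here is a minimal NumPy sketch of the weighted layer average used in the probes. The per-layer weights would be trained on the probing task; the shapes are illustrative stand-ins for BERT outputs.

```python
import numpy as np

def weighted_layer_average(layer_outputs, layer_logits):
    """Mix all layers with learned softmax weights instead of taking the top layer.

    layer_outputs: (num_layers, seq_len, dim); layer_logits: (num_layers,).
    """
    w = np.exp(layer_logits - layer_logits.max())
    w /= w.sum()
    return np.tensordot(w, layer_outputs, axes=1)   # -> (seq_len, dim)

layers = np.random.randn(12, 5, 768)                  # toy per-layer outputs
mixed = weighted_layer_average(layers, np.zeros(12))  # uniform weights at init
```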
Differentiable multi-hop access to a textual knowledge base of indexed contextual representations
428
scitldr
In spite of their great success, traditional factorization algorithms typically do not support features (e.g., Matrix Factorization), or their complexity scales quadratically with the number of features (e.g., Factorization Machine). On the other hand, neural methods allow large feature sets, but are often designed for a specific application. We propose novel deep factorization methods that allow efficient and flexible feature representation. For example, we enable describing items with natural language with complexity linear in the vocabulary size—this enables prediction for unseen items and avoids the cold-start problem. We show that our architecture can generalize some previously published single-purpose neural architectures. Our experiments suggest improved training times and accuracy compared to shallow methods. In recent years, predictive tasks that traditionally have been solved with factorization are now being studied within the context of neural networks. These solutions often work as black boxes, and many times they are designed specifically for a single task with an arbitrary network that may not have much justification. We propose Deep Structured Factorization Machine, a family of general-purpose factorization techniques that can be used stand-alone or as a "design pattern" within a larger neural network. Our work provides some insight into how to enable general-purpose factorization within neural architectures without losing interpretability and a principled design. Previous factorization methods do not scale to large feature sets and make strong assumptions about their latent structure. Our main contribution is a general-purpose framework that enables efficient factorization of datasets with complex feature sets. For example, applications of factorization in natural language scale quadratically in the number of words in the vocabulary; our solution allows inference with runtime complexity linear in the vocabulary size. Previous work has explored how to improve factorization's accuracy (see § 3.3) with its current limitations withstanding; alternatively, some have proposed how to make it tractable for a particular domain, for example text BID22. We believe that we are the first to propose an efficient general-purpose method. Interestingly, our experiments indicate that Structured Deep Factorization achieves large improvements in predictive accuracy and runtime compared to some recent ad-hoc models. Factorization Machine BID15 is one of the most successful methods for general-purpose factorization. BID15 formulated it as an extension to polynomial regression. Consider a degree-2 polynomial (quadratic) regression, where we want to predict a target variable y from a feature vector $x \in \mathbb{R}^n$: $\hat{y}(x) = \omega\big(b_0 + \lambda_1(x) + \lambda_p(x)\big)$. Here, $\lambda_1$ and $\lambda_p$ are the one-way and pairwise interactions: $\lambda_1(x) = \sum_{i=1}^{n} b_i x_i$ and $\lambda_p(x) = \sum_{i=1}^{n} \sum_{j=i+1}^{n} w_{i,j}\, x_i x_j$. In words, n is the total number of features, the term $b_0$ is an intercept, $b_i$ is the strength of the i-th feature, and $w_{i,j}$ is the interaction coefficient between the i-th and j-th features. The function ω is an activation. Choices for ω include a linear link ($\omega(x) = x$) for continuous outputs, or a logistic link ($\omega(x) = \exp(x)/(\exp(x) + 1)$) for binary outputs. Factorization Machine replaces the individual pairwise parameters $w_{i,j}$ with shared parameters: each feature i is assigned a rank-r vector of factors $\beta_i$ (embeddings in the neural literature) that encode the interactions between features: $\lambda_p(x) = \sum_{i=1}^{n} \sum_{j=i+1}^{n} (\beta_i \cdot \beta_j)\, x_i x_j$. Intuitively, the dot product (·) returns a scalar that measures the (dis)similarity between the two factors.
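To make the contrast concrete, here is a small NumPy sketch of both predictors with a linear link ω. It follows the naive O(n²) formulation given above rather than any optimized implementation; the parameter values are toy stand-ins.

```python
import numpy as np

def poly2_predict(x, b0, b, W):
    """Degree-2 polynomial regression: independent pairwise weights W[i, j]."""
    n = len(x)
    pairs = sum(W[i, j] * x[i] * x[j]
                for i in range(n) for j in range(i + 1, n))
    return b0 + b @ x + pairs

def fm_predict(x, b0, b, B):
    """Factorization Machine: W[i, j] replaced by dot(B[i], B[j]), B is (n, r)."""
    n = len(x)
    pairs = sum((B[i] @ B[j]) * x[i] * x[j]
                for i in range(n) for j in range(i + 1, n))
    return b0 + b @ x + pairs

rng = np.random.default_rng(0)
n, r = 4, 2
x = np.array([1.0, 0.0, 1.0, 1.0])
print(poly2_predict(x, 0.1, rng.normal(size=n), rng.normal(size=(n, n))))
print(fm_predict(x, 0.1, rng.normal(size=n), rng.normal(size=(n, r))))
```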
Polynomial regression has $n^2$ interaction parameters, and Factorization Machine has $n \times r$. While using $r \ll n$ makes the model less expressive, this is often desirable because factorization is typically used when features have some shared latent structure. Factorization Machine may dramatically reduce the number of parameters to estimate; however, inference has a runtime that scales quadratically with the number of features: it requires $O(n^2)$ dot products. BID15 shows that when the feature vector x consists only of two categorical features in one-hot encoding, Factorization Machine is equivalent to the popular Matrix Factorization algorithm BID9. Factorization Machine has the following working assumptions and limitations: 1. Strong parameter sharing. Factorization Machine assumes that all of the pairwise interactions factorize into a common shared space. This may not be the case when using large and complex feature sets. To the extent of our knowledge, softening this assumption has not been studied in the literature before; we study how to do so in § 3.1. 2. Intractable for large feature sets. Inference with Factorization Machine has a runtime complexity that scales quadratically with the number of features, so it is intractable for complex feature sets such as text. We address this in § 3.2. We propose Structured Factorization Machine as a building block that we will extend with neural methods. It is a simple yet effective way to model structure between groups of features. In Factorization Machine, all the features interact with each other. In the structured model, features only interact with features from a different group. We do this by defining a set of feature groups κ, and define Structured Factorization Machine as $\hat{y}(x) = \omega\big(b_0 + \lambda_1(x) + \lambda_\kappa(x)\big)$, where the interactions occur only between features in different groups: $\lambda_\kappa(x) = \sum_{\kappa_a, \kappa_b \in \kappa,\, a < b} \sum_{i \in \kappa_a} \sum_{j \in \kappa_b} (\beta_i \cdot \beta_j)\, x_i x_j$. For example, consider a model with four features (n = 4). If we define κ = {{1, 2}, {3, 4}}, feature $x_1$ would only interact with $x_3$ and $x_4$. Without loss of generality, we could define a model that is equivalent to a shallow Factorization Machine by placing each feature in a singleton group: κ = {{1}, {2}, {3}, {4}}. Figure 1 compares existing factorization methods with our novel models; in the rest of this section we review them. We now contrast Polynomial Regression (Equation 1) and Factorization Machine. Polynomial regression is convex, and thus can be optimized globally. Because Factorization Machine is non-convex, optimization is only guaranteed to find local optima. The factorization assumption is more desirable than Polynomial Regression when there is shared structure between the features; the larger number of parameters of Polynomial Regression makes it likelier to overfit. However, we consider two properties of independent interactions to be desirable. First, it is unclear whether all interactions strongly factorize when using very large feature sets. For example, some interactions may have different importance, perhaps because some features are irrelevant and should be discarded. Second, it may be easier to optimize a model with individual parameters than a model with factorized parameters: polynomial regression converges quickly because its likelihood function is convex, so a unique solution exists. We aim to combine the advantages of both models.
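A minimal NumPy sketch of the structured pairwise term follows, restricting interactions to features in different groups. The quadruple loop mirrors the definition directly rather than any optimized implementation.

```python
import numpy as np

def structured_pairwise(x, B, groups):
    """Pairwise term of Structured FM: features interact only across groups.

    x: feature vector; B: (n, r) factor matrix; groups: partition kappa,
    e.g. [[0, 1], [2, 3]] means x0 interacts only with x2 and x3.
    """
    total = 0.0
    for a in range(len(groups)):
        for b in range(a + 1, len(groups)):
            for i in groups[a]:
                for j in groups[b]:
                    total += (B[i] @ B[j]) * x[i] * x[j]
    return total

B = np.random.randn(4, 2)
print(structured_pairwise(np.array([1.0, 0.5, 1.0, 0.0]), B, [[0, 1], [2, 3]]))
```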
We propose a hybrid factorized-independent method that we call Structured Deep-Out Factorization Machine, because it extends Structured Factorization Machine outside the dot product (see FIG0): $\hat{y}(x) = \omega\big(b_0 + \lambda_1(x) + \psi\big(\sum_{\kappa_a, \kappa_b \in \kappa,\, a < b} w_{a,b} \sum_{i \in \kappa_a} \sum_{j \in \kappa_b} (\beta_i \cdot \beta_j)\, x_i x_j\big)\big)$. Here, the embeddings β are factorized and shared across the pairwise interactions, while the parameters w are independent for each interaction group defined by κ. The function ψ is any activation function. If we were interested in interpreting the parameters, we would constrain w to be non-negative and choose ψ to be linear. Alternatively, the Rectified Linear Unit (ReLU) activation, $\psi(x) = \max(0, x)$, may be desirable because it is easy to optimize BID10. We leave the consideration of deeper networks for future work. Structured Deep-In Factorization Machine allows treating a group of features as a single one; it extracts features from each feature group and builds latent factors from them. Consider FIG0: the first group only has a single feature, which is projected to an embedding (just like in a regular Factorization Machine); the second group has multiple features, which are together projected to a single embedding. More formally: $\hat{y}(x) = \omega\big(b_0 + \lambda_1(x) + \sum_{a < b} \phi_a(x_{\kappa_a}) \cdot \phi_b(x_{\kappa_b})\big)$. In this notation, $x_{\kappa_i}$ is the subvector that contains all of the features belonging to group $\kappa_i$. The intuition is that by grouping sub-features into a single entity, we can reason on a higher level of abstraction: instead of individual sub-features interacting among each other, the entities interact with other entities. Here, $\phi_i$ is a feature extraction function that takes the i-th feature group of the instance as input and returns an embedding. The simplest implementation for $\phi_i$ is a linear fully-connected layer, where the r-th entry of the output is $\phi_i(x_{\kappa_i})_r = \sum_{j \in \kappa_i} w_{r,j}\, x_j$. Within a group, a feature extraction function may allow sub-features to interact with each other; across groups, entities interact with each other via the output of φ only. Leveraging item descriptions to circumvent the cold-start problem: we now describe how Structured Deep-In Factorization Machine can be used for large feature sets, such as natural language. In this scenario Factorization Machine is not tractable: if we use each word in the vocabulary as an independent feature, we have to consider the interaction of each word with every other word in the vocabulary and with the other features, which means the number of interaction terms depends quadratically on the size of the vocabulary. A traditional workaround is to ignore the features and use matrix factorization with a unique one-hot index for each item (e.g., each image or text), which avoids the quadratic dependence on the vocabulary size; but in this case the model suffers from the cold-start issue: inference is not possible for a new item not seen during training, even if it shares many words with existing items. We can use the Deep-In model to both use large feature sets and overcome the cold-start problem. This is only possible when an alternative description of the item is available (for example an image or a text). In FIG1, we show how we address this problem by treating the words as indexed features, but placed within a structured feature group $\kappa_w$. A feature extraction function φ acts on the features in $\kappa_w$, and the other features interact with the words only via the output of φ. Notice that this implies we can precompute and store the latent factors of the labels seen during training, so predictions during inference can be sped up. For example, if we have two feature groups (e.g., a label and an item), we first apply the feature extraction function to the unseen items to compute their embeddings, and then simply apply a dot product with the stored vectors of the labels.
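The following NumPy sketch shows the Deep-In pairwise term under the formulation above: each group is collapsed to one embedding by its extraction function φ, and groups interact only through dot products of these embeddings. The linear φ used here is the simplest choice; for text it would be replaced by a CNN such as the one in Appendix A.2.

```python
import numpy as np

def deep_in_pairwise(x, groups, extractors):
    """Each feature group is projected to an r-dim embedding by its phi;
    the groups then interact via dot products of these embeddings only."""
    x = np.asarray(x, dtype=float)
    embs = [phi(x[np.array(g)]) for phi, g in zip(extractors, groups)]
    return sum(embs[a] @ embs[b]
               for a in range(len(embs)) for b in range(a + 1, len(embs)))

# Linear fully-connected phi per group: phi(xs) = W @ xs, W of shape (r, |group|).
rng = np.random.default_rng(0)
r = 4
phis = [lambda xs, W=rng.normal(size=(r, 2)): W @ xs for _ in range(2)]
print(deep_in_pairwise([1.0, 0.0, 0.5, 0.5], [[0, 1], [2, 3]], phis))
```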
FIG0 shows a Neural Factorization model that has been discovered multiple times BID4 BID5. It replaces the dot product of factors with a learned neural function, which has been shown to improve predictive accuracy for various tasks. Unlike our work, this method has the same drawbacks as regular Factorization Machine for feature types such as natural language: as discussed in § 3.2, applying it to text-based inputs would result in either the cold-start problem or quadratic dependence on the vocabulary. It would be straightforward to combine Neural Factorization with Structured Deep-In Factorization Machine, which we do not explore in this work. Note that if the dot product is replaced with a neural function, fast inference for cold-start documents using pre-computed label embeddings is no longer possible. We can learn the parameters θ of a deep factorization model from training data by minimizing a loss function L: $\theta^* = \arg\min_\theta \sum_x L\big(y(x), \hat{y}(x)\big) + \gamma \lVert \theta \rVert^2$. Here, y(x) is the true target value for x obtained from training data, and $\hat{y}(x)$ is the one estimated by the model; the hyperparameter γ controls the amount of regularization. For the labeling and classification tasks, we optimize the binary cross-entropy for y ∈ {0, 1}: $L(y, \hat{y}) = -y \log \hat{y} - (1 - y)\log(1 - \hat{y})$. For the regression tasks, where the target value is continuous, we optimize the mean squared error (MSE): $L(y, \hat{y}) = (y - \hat{y})^2$. Neural models are typically learned using mini-batch updates, where the incremental descent is performed with respect to several instances at a time. For the implementation of this paper, we built our models using the Keras programming toolkit BID2, which is now part of TensorFlow BID0. It enables automatic differentiation, and is bundled with a general-purpose optimization algorithm called Adam BID8 that needs no tuning of gradient step sizes. For the multi-label classification tasks, we use Structured Factorization Machine with a binary output. In this case we have at least two feature groups: one of the feature groups is the label that we want to predict, and the other group(s) form the input from which we want to make the prediction. The output indicates whether the label is associated with the input (y = +1) or not (y = 0). The datasets we use for our labeling experiments contain only positive labels, so for each training example we sample a set of negative labels equal in number to the positive labels. We choose one of the following sampling strategies according to the best validation error, in each case excluding the actual positive labels for each training example: (i) uniformly from all possible labels, or (ii) from the empirical distribution of positive labels. Other sampling strategies have been proposed BID17 BID16 (Anonymous, in submission).
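A minimal Keras sketch of the optimization setup described above follows: Adam with default step sizes and binary cross-entropy for the labeling tasks (MSE would be substituted for regression). The placeholder network and sizes are illustrative assumptions standing in for the actual structured factorization model.

```python
import tensorflow as tf

n_features, r = 1000, 10  # illustrative sizes, not from the paper

inputs = tf.keras.Input(shape=(n_features,))
hidden = tf.keras.layers.Dense(r, activation="relu")(inputs)   # placeholder model
output = tf.keras.layers.Dense(1, activation="sigmoid")(hidden)
model = tf.keras.Model(inputs, output)

# Adam needs no tuning of step sizes; early stopping replaces regularization.
model.compile(optimizer="adam", loss="binary_crossentropy")
# model.fit(x_train, y_train, batch_size=256,
#           callbacks=[tf.keras.callbacks.EarlyStopping(patience=2)])
```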
We now address our working hypotheses. For all our experiments we define a development set and a single test set comprising 10% of the dataset, and a part of the development set is used for early stopping or for validating hyper-parameters. Since these datasets are large and require significant time to train on an Nvidia K80 GPU cluster, we report on only a single training-test split. For the labeling and classification tasks we use an evaluation metric called the Area Under the Curve (AUC) of the Receiver Operating Characteristic (ROC). Since we only observe positive labels, for such tasks we sample negative labels for the test set according to the label frequency. This ensures that a model which merely predicts labels according to their popularity would have an AUC of 0.5. A caveat of our evaluation strategy is that we could be underestimating the performance of our models: there is a small probability that the sampled negative labels are false negatives. However, since we apply the evaluation strategy consistently across our methods and baselines, the relative difference in AUC is meaningful. We choose the AUC as a metric because it is popular for both classification and ranking problems. For the regression tasks, we use MSE as the evaluation metric. In preliminary experiments we noticed that regularization slows down convergence with no gains in prediction accuracy, so we avoid overfitting only by using early stopping. We share most of the code for the experiments online for reproducibility. Factorization algorithms are desirable for smaller feature sets (e.g., users and items) that have shared latent structure, because a more parsimonious model is less prone to overfitting. We now investigate whether relaxing the factorization constraints is useful for models with larger and more heterogeneous feature sets. We do this by comparing deep-out and shallow structured models under the same conditions. For reproducibility details on our grid search see Appendix A.1. We test this on classification, regression, and collaborative filtering (with implicit feedback prediction) tasks, respectively: • Subscription prediction. We predict the binary outcome of whether a marketing campaign is successful using the UCI Bank Marketing Data Set BID12 BID14. This is a small dataset with only 45,211 observations. We use 17 categorical and real-valued features. • Airline delays. We predict the real-valued delay of American flights using the 2008 RITA dataset, using only 8 categorical features. This dataset contains 7 million rows. • Course prediction. On this collaborative filtering task with implicit feedback, we use partial study plans collected from a company with an educational-technology presence. We only consider data from students with three or more course enrollments, and courses with enrollments from at least 10 students. The resulting dataset contains approximately 36 million enrollment instances from roughly 4.7 million students and 2,578 courses (out of the 2,930 that exist on the ontology). For sampling implicit feedback, we sample the course and subject variables together. The features we use are all categorical: student id, course id, subject level 1, subject level 2, subject level 3, university id, student year (discretized), and data source type. For both shallow and deep models, we define the groups the same way: each continuous feature is a group by itself, and the multiple dimensions of the one-hot encoding of a categorical variable are grouped together. This way, for example, the 12 dimensions that encode a "month" feature do not interact with each other. TAB0 summarizes our results. Adding a handful of deep-out parameters is very effective for improving forecasting quality or training time. For example, shallow factorization does not do much better than random chance on subscription prediction, but the deep-out approach improves the AUC by roughly 35%. We hypothesize that this dataset has features with little shared structure. On the courses dataset, the deep method is almost twice as fast as the shallow method, with a modest improvement in performance.
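Referring back to the evaluation protocol described above, here is a simplified scikit-learn sketch of computing AUC with negatives sampled by label frequency. For brevity it omits excluding each example's true positives from the sampled negatives, which the actual protocol requires.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def sampled_auc(pos_scores, score_fn, label_counts, n_neg, seed=0):
    """AUC with negatives sampled by popularity, so a popularity-only
    predictor scores about 0.5. score_fn maps a label id to a model score."""
    rng = np.random.default_rng(seed)
    freq = np.asarray(label_counts, dtype=float)
    negs = rng.choice(len(freq), size=n_neg, p=freq / freq.sum())
    neg_scores = np.array([score_fn(label) for label in negs])
    y = np.concatenate([np.ones(len(pos_scores)), np.zeros(n_neg)])
    s = np.concatenate([pos_scores, neg_scores])
    return roc_auc_score(y, s)
```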
In this section, we focus on natural language processing tasks. TAB1 provides descriptive statistics of the datasets that we consider. For the feature extraction function φ for text, we use a Convolutional Neural Network (CNN) that has been shown to be effective for natural language tasks BID7 BID20; in Appendix A.2 we describe this network and its hyper-parameters. Instead of tuning the hyper-parameters, we follow previously published guidelines BID21. We experiment on two datasets for predicting a label from text, obtained from an educational technology company: one for concept labeling, and one for skill labeling. The skill labeling dataset contains job posting texts annotated with the skills deemed to be useful by hiring managers; we thus try to predict the skills required for a job, particularly those that are not explicitly mentioned in the job posting or description. For each task, we build two feature groups: the labels and the document text. The document texts are represented as sequences of one-hot encoded words. As described in § 3.2, it is possible to use Matrix Factorization and ignore the item descriptions; TAB2 shows that Structured Deep-In Factorization Machine outperforms this strategy. Unfortunately, we cannot compare to Factorization Machine because it is intractable with a large vocabulary, as it would lead to interactions that scale quadratically in the vocabulary size (100,000 in our case). Matrix factorization cannot generalize to unseen items. Instead, we compare to Collaborative Topic Regression (CTR; Wang & Blei). Because the implementation we found is relatively slow, we cannot test it on the concept and skill datasets. Instead, we use the CiteULike dataset, which consists of pairs of scientific articles and the users who have added them to their personal libraries, and use it to predict the users who may have added a given article to their library. We compare the performance of Structured Deep-In Factorization Machine with CTR using pre-defined cross-validation splits. We use 1% of the training set for early stopping. For CTR we use the hyper-parameters reported by the authors as best, except for r, which we found had a significant impact on training time. We only consider r ∈ {5, 10, 15} and choose the value which gives the best performance for CTR (details in Appendix 6). In the warm-start condition, CTR has an AUC of 0.9356; however, it shows significant degradation in performance on unseen documents, where it performs only slightly better than random chance with an AUC of 0.5047. On the other hand, Deep-In FM achieves an AUC of 0.9401 in the warm-start condition, which only degrades to 0.9124 on unseen documents. For completeness, we also tested Deep-In FM on cold-start prediction for the concept and skill datasets, on 1% of documents unseen during training. We obtained AUCs of 0.92 and 0.67, respectively, which are within 3% of the warm-start results. Deep-In FM can also be trained over ten times faster, since it can leverage GPUs. We also note that we have not tuned the architecture or hyper-parameters of the feature extraction function φ for each dataset, and greater improvements are possible by optimizing them. We now compare with a method called DeepCoNN, a deep network specifically designed for incorporating text into matrix factorization BID22, which is reportedly the state of the art for predicting customer ratings when textual reviews are available.
For Deep-In FM we use the same feature extraction function used by DeepCoNN (see Appendix A.3 for details). We evaluate on the Yelp dataset, which consists of 4.7 million reviews of restaurants. For each user-item pair, DeepCoNN concatenates the text from all reviews for that item and all reviews by that user; the concatenated text is fed into a feature extraction function followed by a factorization machine. In contrast, for Structured Factorization we build 3 feature groups: item identifiers (in this case, restaurants), users, and review text. TAB3 compares our methods to DeepCoNN's published results, because a public implementation is not available. We see that Structured Deep-In Factorization Machine provides a large performance increase when comparing the reported improvement over Matrix Factorization in mean squared error. Our approach is more general, and we claim that it is also more efficient: since DeepCoNN concatenates text, when the average number of reviews per user is $\bar{n}_u$ and the average number of reviews per item is $\bar{n}_i$, each text is duplicated on average $\bar{n}_i \times \bar{n}_u$ times per training epoch. In contrast, for Deep-In FM each review is seen only once per epoch. Thus it can be 1-2 orders of magnitude more efficient for datasets where $\bar{n}_i \times \bar{n}_u$ is large. We present a general-purpose method for factorizing large feature sets; we demonstrate it in several applications, such as using text to enable prediction for unseen items and circumvent the cold-start problem. Future work may soften our requirement of domain knowledge: in general, our methods require feature groups and feature extraction functions defined by experts. We did not pursue an exhaustive comparison with previously published methods; for example, there are other algorithms that rely on Bayesian optimization BID3 to infer item embeddings from text which we did not benchmark. Although we apply our methods on six datasets altogether, further experimentation may establish the conditions under which our methods are effective. Our methods generalize previously published single-purpose neural networks. For example, TagSpace BID20 is a very successful method, but it is limited to a single textual feature; with the correct feature extraction function, Structured Deep-In Factorization Machine can be used to implement a TagSpace model. Compared to previous general-purpose approaches, our work makes fewer assumptions about the training data and allows more flexibility. We provide evidence that the factorization hypothesis may be too restrictive: when relaxed, we see higher predictive accuracy with a dramatic improvement in training speed. We show experimental results outperforming an algorithm specifically designed for text, even when using the same feature extraction CNN. This suggests that the need for ad-hoc networks should be situated in relation to the improvements over a general-purpose method. To the extent of our knowledge, our work is the first to propose a general-purpose factorization algorithm that enables efficient inference on arbitrary feature sets. For the courses dataset, our experimentation strategy killed experiments that took too long; in practice, this means that some shallow experiments with a batch size of 5,000 observations were not executed. Here we describe the details of the feature extraction function φ used in our experiments for labeling in §4.2.1 and §4.2.2. An overview of the network is given in FIG3.
We choose the most popular words of each dataset to build a vocabulary of size n, and convert the words of each document to a sequence of length t of one-hot encodings of the input words. If the input text is shorter than t, we pad it with zeros; if the text is longer, we truncate it by discarding the trailing words. Therefore, for a vocabulary size n, the input has dimensions t × n, and this matrix is then passed through the following layers:

1. We use an embedding layer to assign a d-dimensional vector to each word in the input passage of text. This is done through a d × n-dimensional lookup table, which results in a t × d matrix.

2. We extract features from the embeddings with functions called convolutional filters (also called feature maps). A convolutional filter is simply a matrix learned from the input. We learn f filters that are applied on groups of m adjacent word embeddings; thus each of our filters is a d × m matrix of learned parameters. Filters are applied by computing the element-wise dot product of the filter along a sliding window over the entire input. The resulting output for each filter is a vector of length t − m + 1. We also apply a ReLU activation to the output of each filter.

3. Consider the case of inputs of different lengths. For very short texts, the output of the filters will be mostly zero since the input is zero-padded. To enforce learning from the features of the text, and not just its length, we apply a function called 1-max pooling to the output of the filters: from the (t − m + 1)-length output vector of each filter, we select the maximum value. This yields a vector of length f, a representation of the passage which is independent of its length.

4. We learn higher-level features from the convolutional filters. For this, we use a fully connected layer with p units and a ReLU activation.

5. During training (not in inference), we prevent the units from co-adapting too much with a dropout layer BID18. Dropout is a form of regularization that, for each mini-batch, randomly drops a specified percentage of units.

6. The final embedding for $x_j$ (the one used in the factorization) is computed by a dense layer with r output units and an activation function, where r is the embedding size of our indexable items.

We set the maximum vocabulary size n to 100,000 words and the input embedding size d to 50 for all experiments. We initialize the input word embeddings and the label embeddings using Word2Vec BID13. We have not evaluated multiple architectures or hyper-parameter settings, and we obtain good results on diverse datasets with the same architecture, which was designed following recommendations from a large-scale evaluation of CNN hyper-parameters BID21. We set the number of convolutional filters f to 1,000 and the dropout rate to 0.1. The maximum sequence length t was chosen according to the typical document length (5,000 words for the concept dataset, 400 for skills, and 350 for CiteULike); the embedding size was set to r = 100 for concepts and r = 200 for skills. For the CTR dataset, since we use very small values of r, the ReLU units have a tendency to 'die' during training (outputting zero for all examples), which can have a significant impact; we therefore used PReLU activations BID6 for the final layer, since they do not suffer from this issue.
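A Keras sketch of this feature extraction network is shown below, following the layer-by-layer description above. The filter width m and fully connected size p are not reported in the text, so those values are assumptions; the other constants follow the stated hyper-parameters.

```python
import tensorflow as tf

t, n, d = 400, 100_000, 50      # sequence length, vocabulary, embedding size
f, r = 1000, 200                # number of filters, output embedding size
m, p = 3, 256                   # filter width and FC units: assumed values

inp = tf.keras.Input(shape=(t,), dtype="int32")              # padded word ids
emb = tf.keras.layers.Embedding(n, d)(inp)                   # (t, d) embeddings
conv = tf.keras.layers.Conv1D(f, m, activation="relu")(emb)  # f filters, width m
pool = tf.keras.layers.GlobalMaxPooling1D()(conv)            # 1-max pooling -> (f,)
hid = tf.keras.layers.Dense(p, activation="relu")(pool)      # higher-level features
hid = tf.keras.layers.Dropout(0.1)(hid)                      # dropout rate 0.1
out = tf.keras.layers.Dense(r)(hid)                          # final r-dim embedding
phi = tf.keras.Model(inp, out)
phi.summary()
```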
The CNN architecture used for DeepCoNN BID22 is similar to that described above in Appendix A.2: it consists of a word embedding lookup table, a convolutional layer, 1-max pooling, and a fully connected layer. We use the hyper-parameters that the authors report as best: 100 convolutional filters and 50 units for the fully connected layer. We set the word embedding size to 100, the vocabulary size to 100,000, and the maximum document length to 250. To compare Deep-In FM with Collaborative Topic Regression, we choose the embedding size r ∈ {5, 10, 15} for which CTR performs best. The other parameters were set to those reported by the authors as best. The results are shown in TAB5.
Scalable general-purpose factorization algorithm that also helps to circumvent the cold-start problem.
429
scitldr
Augmented Reality (AR) can assist with physical tasks such as object assembly through the use of "situated instructions". These instructions can be in the form of videos, pictures, text, or guiding animations, and the most helpful medium among these is highly dependent on both the user and the nature of the task. Our work supports the authoring of AR tutorials for assembly tasks with little overhead beyond simply performing the task itself. The presented system, AuthAR, reduces the time and effort required to build interactive AR tutorials by automatically generating key components of the AR tutorial while the author is assembling the physical pieces. Further, the system guides authors through the process of adding videos, pictures, text, and animations to the tutorial. This concurrent assembly and tutorial generation approach allows for authoring of portable tutorials that fit the preferences of different end users. Physical task guidance is well suited to delivery via Augmented Reality (AR), since assembly often requires both hands and continuous attention to the task. Additionally, assembly tutorials have instructions directly associated with physical objects, so AR can reduce the need for excessive context switching between the instructions and the physical structure by projecting those instructions into the environment. These benefits have been demonstrated in fields such as facilities management, maintenance, and Internet of Things (IoT) device management. Additionally, prior work in AR assembly guidance has shown that these benefits translate to carrying out assembly tasks. While significant previous work has looked at the benefits of following tutorials in AR, much less has looked at how to author these tutorials. Beyond the technical requirements of an authoring interface, an ideal tutorial may look different depending on the end user of the tutorial. This problem is exacerbated in AR, as there are many different modalities in which tutorial content can be presented. While one person may appreciate guiding animations in AR, another may prefer static text and images, and yet another may prefer video tutorials from one or multiple perspectives. With AuthAR, we present a system for building tutorials for assembly tasks that can accommodate the needs of these different types of end users. AuthAR generates video and pictorial representations semi-automatically while the tutorial author completes the task. Furthermore, AuthAR allows tutorial authors to create and refine a tutorial in situ, integrating content authoring into the process of completing the task. This approach adds little additional overhead and reduces the need for post-processing of the tutorial. This paper presents the AuthAR system for generating mixed media assembly tutorials. Informed by prior work on content/tutorial authoring and on tutorial playback and walkthrough, we built the system with an eye toward non-obtrusive content authoring and generation of the components important for tutorial playback, summarized in a set of design guidelines. We validate the system's ability to create a tutorial by stepping through the process of creating a tutorial for building a laptop stand, automatically generating an XML representation of the tutorial. Initial observations suggest the tool will be valuable, and point to ways the system could be extended and refined in future iterations. AuthAR builds on prior research in the areas of AR tutorials and content authoring, as well as principles of mixed media tutorial design.
AR is often applied to assembly tasks for its ability to project instructions into the environment such that they are spatially relevant. Radkowski et al. demonstrate assembly of a 3D puzzle and provide evidence that AR supports faster assembly than traditional methods. Similarly, Henderson et al. demonstrate the use of projected guidance for engine assembly. Prior work on AR guidance design has shown that abstract representations (3D text, arrows) can be more effective for complex tasks than a more concrete representation (virtual models of the pieces and fasteners), which is sensible for simpler tasks. In that work, the authors found information-rich 2D representations to be more effective than either of the AR representations in some cases. One theory is that AR is only justified when the task is sufficiently difficult, such that the time to process the information is insignificant compared to the time to perform the task. So even for physical tasks whose instructions' spatial relevance can be increased by projecting them into the environment, tutorials should let users view the step(s) in a more familiar picture/video format when needed or preferred. The need for these mixed media tutorials is apparent; however, little work has explored the authoring of such tutorials for physical tasks. Outside the realm of tutorial authoring, numerous systems have explored content creation in augmented reality to abstract low-level programming away from the creator, in turn lowering the threshold for participation with AR. Many AR content authoring systems give users a collection of 3D models to place and manipulate in an environment or to overlay on a video stream, allowing users to create AR content without the programming expertise generally required to build such scenes. Other AR content creation systems target participation by domain experts in areas such as museum exhibition curation, tour guidance, and assembly/maintenance. Mota et al. provide a survey of existing AR content creation tools, classifying them as standalone applications vs. plug-ins to existing applications, and platform-specific vs. platform-independent. Of particular interest to us are authoring tools that allow for creation of training experiences. Built on Amire's component-based framework, Zauner et al. present a tool to enable authoring of assembly task guides in augmented reality. With this tool, the author puts visually tracked pieces together hierarchically to create an assembly workflow assistant in AR. Alternatively, an expert can collaborate remotely rather than creating a training experience ahead of time; in this scenario, the expert annotates the live video feed provided by the trainee's AR headset for varying levels of on-the-fly guidance. These systems require explicit enumeration of every component to be added to the created scene, whereas our system generates content semi-automatically (segmenting video, recording changes to transforms, and detecting use of a tool) where possible. Moreover, our system only requires manual input for augmentation and refinement of the tutorial, while the bulk of authoring is done in situ. Within the domain of software tutorials, Chi et al. provide design guidelines for mixed media tutorial authoring with their MixT system for software tutorials. They list scannable steps, legible videos, visualized mouse movement, and giving users control over which format to view as important components of a mixed media tutorial. Carter et al.
echo this sentiment with their ShowHow system for building tutorials of videos and pictures taken from an HMD, noting that mixing different media is important in lieu of relying on a single media type. Our work builds upon these concepts of mixed media tutorials but applies them to AR authoring of physical task tutorials. The rest of this subsection discusses the use of three popular media used for both software and physical task tutorials: videos, images, and interactive guidance. Prior work on video-based tutorials has applied different strategies to the challenges of video segmentation and multiple perspectives. DemoCut allows for semi-automatic video segmentation such that these demonstration videos are appropriately concise without requiring significant post-processing. Chronicle allows for video tutorials based on the working history of a file: as the file changes, the system generates a video tutorial of how the file changed. For physical tasks, Nakae et al. propose the use of multiple video perspectives (1st person, 3rd person, overhead) to record fabrication demonstrations, with semi-automatic generation of these videos. Prior work also uses captioned and augmented images for physical task and software tutorials. Image-based tutorials can be generated automatically from demonstration, as is done with TutorialPlan, a system that builds tutorials to help novices learn AutoCAD. Images can also represent groups of significant manipulations (such as multiple changes to the saturation parameter), as Grabler et al. demonstrated for GIMP tutorials. For physical tasks, prior work explored the use of AR to retarget 2D technical documentation onto the object itself, providing spatially relevant augmentations. Interactive tutorials guiding users where to click, place objects, and even move their hands have become increasingly popular. For example, EverTutor automatically generates tutorials for smartphone tasks, such as setting a repeating alarm or changing the font size, based on touch events. The authors found improved performance with, and preference toward, these tutorials over text, video, and image tutorials. In the realm of physical assembly tutorials, the use of visual and/or depth tracking allows for automatic generation of tutorials based on changes to location and rotation. Further, interactive tutorial authoring can include tracking of hand position, projecting green and red cues onto end users' hands to indicate correct and incorrect positioning respectively. DuploTrack allows users to author a tutorial for creating Duplo block models, using depth sensing to infer positions and rotations automatically; projective guidance is available to end users of the tutorial. Our design of the AuthAR system was grounded by our exploration of the assembly task design space, a study of related research, and an investigation into the difficulties associated with the process of generating tutorials (both for "traditional" media and for AR). Below we describe the design guidelines we followed when making decisions about the implementation of our system. It is important that the author and assembler be able to perform the assembly task without being burdened by the tutorial apparatus or interface. Though many AR/VR interfaces are mediated by a handheld device or require freehand gestures for input, assembly tasks often require the use of both hands in parallel. For this reason, we prioritize hands-free interaction with the system such that users can always keep their hands free for assembly.
Prior studies have shown that different representations (text, static pictures, video, animations, etc.) can all be valuable for people following along with a tutorial. Our system should allow authors to document their tutorial using multiple media types to best capture the necessary information. Manual content creation in AR allows for high expressivity but is time-consuming and can add complexity. Automatic content creation tools are easier to use, but have the side effect of limiting the author's creative control. Our tool should let authors move between automatic and manual creation modes to get the benefits of both models. In the most "automatic" case, an author should be able to generate a tutorial by doing little more than simply completing the assembly task as they would normally. With the ability to generate, refine, and augment tutorial step representations, our tool should allow authors to create the tutorial while performing the task and make necessary tweaks directly after each step. This form of contextual, in situ editing allows authors to add callouts or make changes while they are easy to remember, and reduces the need for post-processing of tutorials at a desktop computer. The design space of tutorials for assembly tasks in augmented reality is broad, with many dimensions of variability both in how the instructions are presented to an end user and in how they are authored. To inform the design of our system and explore how to best achieve our design goals, we first mapped out the areas of this design space most relevant to our work (Figure 2). Perhaps the most important dimension of variability in the AR-based assembly task tutorial design space is how the tutorial information is presented within the AR environment. Like more "traditional" tutorial delivery mediums, the AR environment is able to present static text and images, as well as show explanatory videos. A straightforward (and still useful) application of AR technology for sharing assembly tutorials would be to display relevant information about the task as a heads-up display (HUD) in the user's headset, leaving their hands free to perform the task. Unique to AR, however, is the ability to spatially associate these "traditional" elements with points or objects in the physical space. Further, an AR tutorial can present the assembly instructions "live", by displaying assembly guidance graphics or animations interleaved into the physical space. With our system, we chose to use traditional text, picture, and video presentation methods in addition to dynamic instructions, since they each have unique benefits (D2). In order for the HoloLens to listen for the keyword "Stop Recording" during first/third person video recording, it cannot simultaneously record dictation to complement the video. For this reason, the current form of AuthAR records muted videos, though future iterations with additional microphone input capability would rectify this. With the use of audio in content authoring infeasible, text serves to supplement the muted video. When it comes to authoring the tutorial, the content can be constructed in situ, that is, in the same physical space where the task is performed, or in an external secondary location such as at a desktop computer. Our work explores in situ authoring to reduce the required effort (D3) and maintain context (D4).
The content for the tutorial can be automatically captured as the author moves through the assembly steps, or manually created by explicitly specifying what should happen at each step. Automatically generating the tutorials streamlines the authoring and content creation process, but limits the expressivity of the tutorial instructions. We automatically capture as much as possible, while allowing manual additions where the author thinks they would be helpful (D3). Another aspect of tutorial authoring is how the content is edited. A traditional practice is to capture the necessary content first, then go back and edit for time and clarity in post-processing. Alternately, the content can be edited concurrently with the process of content collection, which, if implemented poorly, could negatively impact the "flow" of the creation process. In the best case, this results in a completed tutorial that is ready to share as soon as the author has completed the task themselves. We focus on creating a well-designed concurrent editing process (D1). Interacting with the tutorial system, both when creating the tutorial and when following along, can be accomplished through many means. AR applications can use touch controls, gestures, or dedicated hardware controllers. However, for assembly tasks it is desirable to have both hands available at all times for the assembly task, rather than have them occupied by control of the tutorial authoring system (D1). For that reason, we exclusively use voice and gaze controls. AuthAR is a suite of software tools that allows tutorial authors to generate AR content (Figure 1). Optitrack motion capture cameras visually track marked assembly materials and a screwdriver; changes to position and rotation, along with the locations of screws added with the tracked screwdriver, are added to the tutorial automatically. The HoloLens captures first person videos and images, and a mounted Android tablet simultaneously captures third person video. The HoloLens also guides authors through the process of creating the tutorial and allows for gaze- and voice-based interaction to add and update content. This includes the addition of augmentations such as location-specific callout points, marking locations of untrackable screws, and specification of images as "negative examples" (steps that should be avoided) or warnings. AuthAR's system architecture consists of three components: the Microsoft HoloLens, a Samsung Tab A 10.1 Android tablet, and a server running on a desktop computer, all on the same network (Figure 3). As a proxy for sufficient object recognition and point-cloud generation directly from the headset, we use relatively small (~1cm) Optitrack visual markers for detection of material position and rotation. In developing AuthAR, we envisioned headsets of the future having onboard object recognition; as a proxy for such capabilities, we implement a networked system that coordinates object positions generated by Optitrack's Motive software and the HoloLens to make the physical objects interactive. To provide this interactivity, the HoloLens generates virtual replicas of each piece and overlays these models as invisible renderings at the position and rotation of the tracked object. This invisible object takes the same shape as the physical object but has its visual rendering component disabled; it engages the HoloLens raycast and gives the user the illusion of placing virtual augmentations on physical objects. Additionally, we add a tracked handle to a screwdriver to infer screw events.
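To illustrate how screw events can be inferred from a tracked handle, here is a minimal NumPy sketch of deriving a tool-tip position from the handle's tracked pose (AuthAR uses a 10.5 cm handle-to-tip offset, described in the next section). The quaternion layout (x, y, z, w) and the local forward axis are assumptions about the tracking data, not details from the paper.

```python
import numpy as np

def rotate(q, v):
    """Rotate vector v by unit quaternion q = (x, y, z, w)."""
    u, w = np.asarray(q[:3]), q[3]
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

def tool_tip(handle_pos, handle_quat, length=0.105):
    """Tip position: `length` meters along the handle's forward (+z) axis."""
    forward = rotate(handle_quat, np.array([0.0, 0.0, 1.0]))
    return np.asarray(handle_pos) + length * forward

def near_piece(tip, piece_center, radius=0.02):
    """Crude screw-event test: the tip comes within `radius` meters of a piece."""
    return np.linalg.norm(tip - np.asarray(piece_center)) < radius
```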
The server connects to the Optitrack host via UDP and continually updates object and tool transforms. The server sends the data to the HoloLens via 64-character messages, and the HoloLens' representation of each transform is updated accordingly. The Android tablet simply serves to record 3rd person video of the tutorial: when the HoloLens starts and stops recording the 1st person demonstration, a message is passed through the server to the Android tablet to toggle recording. Throughout the paper, we demonstrate usage of AuthAR to build a tutorial for assembly of an Ikea laptop stand. Parts have been outfitted with Optitrack visual markers and defined as rigid bodies in Optitrack's Motive software. Simple representations of these parts are passed to the HoloLens as invisible colliders, so the physical components act as interactive objects in AR (Figure 4). Though this configuration is specific to the laptop stand, the approach is easily extensible to other assembly workflows: users simply define combinations of visual markers as rigid bodies and build simplified models of the individual parts. In this case, these models are combinations of cubes scaled along the X, Y, and Z axes, giving rough approximations of the parts' shapes. This initial step of predefining the shapes of the parts allows fully in situ editing for the remainder of the process. To avoid encumbering the assembly process with tutorial generation steps, interaction with AuthAR involves only voice, gaze, and use of materials and tools. This allows the user to always keep his/her hands free to build. By using only voice- and gaze-based interaction, we also eliminate the need for an occluding visual interface of menus and buttons, avoiding interference with the physical tutorial tasks. For flexibility, the user can add augmentations at any point while building the tutorial. However, to encourage faster onboarding, we guide the user through a two-phase process for each step: Step Recording and Step Review. When the user says "Start Recording", AuthAR begins recording changes to object transforms, such that moving the physical objects maps directly to manipulating the virtual representations of those objects in the tutorial. This command also initiates video recording using the HoloLens' built-in camera and the tablet's 3rd person perspective (Figure 5). AuthAR also records when the screwdriver's tip comes into contact with an object piece and generates a screw hole on that object (though this is not displayed until the Review phase). To do this, we add a tracked attachment to the handle, similar to what was done for DodecaPen and SymbiosisSketch, to provide the position of the tip based on the orientation of the attachment. Given real-time streaming of the position data of the screwdriver's handle and a priori knowledge that the length from the handle to the tip is 10.5 cm, we calculate the position of the screwdriver's tip as 10.5 cm forward from the handle. When the user says "Finish Recording", the HoloLens prompts the user for a step description and records dictation. When the description is complete, the user enters an idle state until they are ready for review. Step Review: Manually Added Features. After a tutorial author has completed a step recording, they can say "Review Step" to enter review mode for the step. The 1st person video just recorded is played on a loop across from the author, automatically repositioning itself such that the author can look up at any point and see the video directly in front of him/her.
This allows the author to draw upon their own experience when adding manual augmentations. Existing augmentations shift into focus by getting larger or expanding when the user is looking closer to that augmentation than to any other; this eliminates the need to focus on small points (~3cm) to engage with them. When the user looks toward a particular augmentation, the available commands to update it are shown in the top right of the heads-up display. After recording the step, the tutorial author may want to draw the tutorial user's attention to a particular point, similar to how this is done in 2D paper-based instructions (Figure 6). To do this, the author focuses the gaze-based cursor on the specific point on the tracked object where the callout point should go and says "Add Point". This adds a small virtual sphere anchored by its relative position on the object. The process of adding a captioned image (Figure 7) begins when the user looks toward a callout point and says "Add Picture". The system then starts a countdown from three in the middle of the heads-up display to indicate that a picture is about to be taken, and displays "Hold still!" when the countdown is complete, at which point the HoloLens captures the current frame. The captured image is saved and immediately loaded over the callout point. The author can then say "Add Text"; a prompt in the middle of the heads-up display shows "Speak the image caption" and the HoloLens begins listening for dictation. After a pause of 3 seconds, the HoloLens finishes recording and immediately associates the spoken text with the image as its caption. Authors can mark a callout point as a negative example or a warning simply by looking toward the callout point and saying "Warning". This defines the callout point as a warning or negative example and prompts tutorial users to be extra attentive when traversing the tutorial. The callout point turns red and the associated image is given a red border (Figure 8). Authors are also able to move the point along the surface of any tracked piece or delete it entirely. Though fastening components together is an important part of an assembly tutorial, the fasteners themselves are too small to track with traditional object tracking technologies. During step recording, AuthAR records use of the tracked screwdriver and detects when it is used on a tracked material, generating a screw hole. Making use of these generated screw holes, the author can associate a virtual screw with a hole by looking toward the hole and saying "Add Screw" (Figure 9). The user cycles through possible virtual screw representations by saying "Next" and "Previous". The author is able to hold the physical screw up to the virtual one for comparison and say "This One" to associate that screw with the hole. Authors can also manually add new fasteners in areas that cannot be tracked. To do so, the author once again says "Add Screw", pulling up the same menu; once a screw has been selected, the ray-casted, gaze-based cursor allows the user to manually place the virtual screw where it was physically placed. This is useful for the laptop stand, for example, as there are rubber feet that need to be screwed in by hand rather than with the tracked screwdriver. Saying "Finish Review" brings the user back to the original idle state; at this point the user has iterated through the full step-building process.
Toward validating AuthAR, we discuss our initial observations from testing with tutorial authors, present an example application that parses and displays the generated tutorial for end users, and explain extensibility beyond the presented use case. In doing so, we consider improvements to AuthAR and design considerations for other in situ AR content authoring tools. To gather initial observations and feedback on the system, we asked two users to generate a tutorial for an Ikea laptop stand. Though we have not formally evaluated AuthAR for usability, this initial feedback provides some insight into possible improvements to AuthAR and generalized takeaways for building such systems. The standard paper instructions for the stand consist of four steps: fastening the legs to the bottom base, fastening the top to the legs, adding screw-on feet to the bottom of the structure, and adding glass to the top. We guided the users through using the system while they created an AR tutorial for assembling this piece of furniture. Both testers found the system helpful for generating tutorials, one noting that "being able to take pictures with the head for annotations was really useful." This suggests that the embodied, gaze-based interaction is particularly well suited to picture and video recording. Most of the functionality for making refinements to the tutorial is enabled by the user looking anywhere near the objects; however, adding new callout points requires accurate hovering of the cursor on the object of interest while speaking a command. One user mentioned that it was "kind of awkward to point at certain points with the head". In systems that require precise placement of virtual objects on physical components, pointing at and touching the position where the callout point should go would be a useful improvement over a gaze-only approach. Though fulfilling the hands-free requirement of a tutorial generation system (D1), AuthAR's use of dictation recognition for text entry was particularly challenging for users, in part due to the automated prompting for step descriptions and titles. One participant was surprised by the immediate prompt for a step description, noting that "it was hard to formulate something articulate to say by the time it had finished recording", so future iterations will likely have users explicitly start dictation recognition for a title so they are prepared to give one. While we focus on authoring the tutorial in real time, the users wanted to view and refine previous steps. One possible enhancement to an in situ content authoring tool would be a "navigation prompt to let you jump between steps and review your work." With AuthAR, this would give users the ability to review and refine the full tutorial at a high level, including previous steps. Figure 9. User adding virtual screws to the tutorial. The user can hold the physical screw up to the virtual one for comparison (top), and if the screw hole was not automatically generated, the user can place the screw via ray-cast from the headset (middle). Though we focus on authoring of assembly tutorials, we also implemented and tested a playback mode to validate AuthAR's ability to generate working tutorials. The tutorial author can save the tutorial after finishing a step, and AuthAR generates an XML representation of the entire tutorial. We load this XML into the playback application, also built for the HoloLens, and the components of the steps built earlier are loaded and displayed for the end user to follow.
The guidance techniques are simple but demonstrate the success of the authoring process and the portability of the generated tutorial. This application projects lines from each piece's location to where that piece needs to go (Figure 10a). When the end user of the tutorial has correctly placed the object, a notification appears in front of them (Figure 10b). The user also receives guidance on where to add screws, and when the playback application recognizes a screw event in that location, the screw augmentation disappears (Figure 10c). Both first-person and third-person video representations play on a loop across the table from the user (Figure 10d). As the user walks around the table, the videos adjust their positions so that the user can always look up and see the videos straight across from them. Use of the third-person video is currently the only tutorial component that requires post-processing after the in situ authoring process is complete. Because the XML representation of the tutorial uses file paths for videos, the author needs to manually move the video from the Android tablet to the headset's file system. Future iterations could automatically stream these videos between components. We demonstrate AuthAR with assembly of an Ikea laptop stand but note that it could be extended to any physical task where simplified virtual models of the pieces could be built or obtained. All of the pieces used for the laptop stand were scaled cubes or combinations of scaled cubes, with disabled "Renderer" components to create the illusion of adding virtual objects to physical parts, when in reality there were invisible objects overlaid on top of the physical pieces. So simply loading in pieces and turning off any visual rendering would allow extensibility to virtually any assembly task. While demonstrated with an augmented screwdriver, AuthAR could be extended to support different tools, or an integrated workspace of smart tools. We employed very simple logic (sketched below): whenever the tip of the screwdriver hovers near a piece, AuthAR adds a screw hole. Future iterations could employ more advanced logic for detecting tool usage. The need for a large truss with 10 OptiTrack cameras enables very accurate tracking of visual markers but limits AuthAR's widespread deployment in its current state. For practical use we envision the localization of objects being done with the headset only, or with cheaper external optical tracking, perhaps with black and white fiducial markers. In this scenario, tracked pieces could be established in real time through user-assisted object recognition, rather than defining their shapes prior to running AuthAR. The in situ authoring approach offered by AuthAR allows the user to craft the tutorial while concurrently assembling the pieces. However, the gaze/voice multimodal interface does not provide users with efficient tools to fine-tune the generated tutorial. To this end, AuthAR should be complemented by a 2D interface for precise editing of individual components of the tutorial, similar to tutorial generation tools described in previous work. Trimming 1st and 3rd person perspective videos, cropping images, and editing text are currently not well supported by AuthAR and would be better suited to a mouse and keyboard interface. This complementary approach would also allow users to focus on coarse-grained tutorial generation and object assembly without being burdened by these smaller edits that can easily be done after the fact.
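The screw-hole logic described above ("whenever the tip of the screwdriver hovers near a piece, AuthAR adds a screw hole") might look like the following sketch; the piece API and the 2 cm tolerance are assumptions, not values from the paper.

```python
import numpy as np

def maybe_add_screw_hole(tip_pos, pieces, holes, threshold=0.02):
    """Record a screw hole when the tracked screwdriver tip is near a piece.

    tip_pos: world-space position of the screwdriver tip.
    pieces: tracked pieces exposing hypothetical world_to_local /
            nearest_surface_point helpers. threshold is in meters.
    """
    for piece in pieces:
        local = piece.world_to_local(np.asarray(tip_pos))
        if np.linalg.norm(local - piece.nearest_surface_point(local)) < threshold:
            holes.append((piece.name, tuple(local)))  # stored relative to the piece
            return True
    return False
```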
AuthAR enables tutorial authors to generate mixed media tutorials semi-automatically to guide end users through the assembly process. We automatically record expert demonstrations where possible and allow for in situ editing for refinements and additions. We built AuthAR with several design guidelines in mind, validated it by authoring a tutorial for assembling a laptop stand, and discussed its extensibility to other assembly tasks achieved by simply loading different virtual models into AuthAR. We see AuthAR enabling the authoring of mixed media tutorials that could reach a widespread population while remaining flexible to the preferences of each individual user.
We present a mixed media assembly tutorial authoring system that streamlines creation of videos, images, text and dynamic instructions in situ.
430
scitldr
Monitoring patients in the ICU is a challenging and high-cost task. Hence, predicting the condition of patients during their ICU stay can help provide better acute care and plan the hospital's resources. There has been continuous progress in machine learning research for ICU management, and most of this work has focused on using time series signals recorded by ICU instruments. In our work, we show that adding clinical notes as another modality improves the performance of the model for three benchmark tasks that play an important role in ICU management: in-hospital mortality prediction, modeling decompensation, and length of stay forecasting. While the time-series data is measured at regular intervals, doctor notes are charted at irregular times, making it challenging to model them together. We propose a method to model them jointly, achieving considerable improvement across benchmark tasks over the baseline time-series model. With the advancement of medical technology, patients admitted into the intensive care unit (ICU) are monitored by different instruments at their bedside, which measure different vital signals about the patient's health. During their stay, doctors visit the patient intermittently for check-ups and make clinical notes about the patient's health and physiological progress. These notes can be perceived as summarized expert knowledge about the patient's state. All these data about instrument readings, procedures, lab events, and clinical notes are recorded for reference. The availability of ICU data and enormous progress in machine learning have opened up new possibilities for health care research. Monitoring patients in the ICU is a challenging and high-cost task; hence, predicting the condition of patients during their ICU stay can help plan better resource usage for the patients that need it most in a cost-effective way. Prior works (BID4; BID18; BID16; BID1) have focused exclusively on modeling the problem using the time series signals from medical instruments. Expert knowledge from doctors' notes has been ignored in the literature. In this work, we use clinical notes in addition to the time-series data for improved prediction on benchmark ICU management tasks. While the time-series data is measured continuously, the doctor notes are charted at intermittent times. This creates a new challenge: modeling the continuous time series and the discrete-time note events jointly. We propose such a multi-modal deep neural network, comprising recurrent units for the time series and a convolutional network for the clinical notes. We demonstrate that adding clinical notes improves the AUC-PR scores on in-hospital mortality prediction (+7.8%) and modeling decompensation (+6.1%), and the kappa score on length of stay forecasting (+3.4%). Here we formally define the problems and provide a review of machine learning approaches for clinical prediction tasks. Problem Definitions. We use the definitions of the benchmark tasks as the following three problems: 1. In-hospital Mortality: a binary classification problem to predict, from the first two days of ICU data, whether a patient dies before being discharged. 2. Decompensation: the focus is to detect patients who are physiologically declining. Decompensation is defined as a sequential prediction task where the model makes a prediction at each hour after ICU admission; the target at each hour is the mortality of the patient within a 24-hour time window.
3. Length of Stay (LOS): The benchmark defines LOS as a prediction of the bucketed remaining ICU stay, posed as a multiclass classification problem. The remaining ICU stay time is discretized into 10 buckets: {0−1, 1−2, 2−3, 3−4, 4−5, 5−6, 6−7, 7−8, 8−14, 14+} days, where the first bucket covers patients staying less than a day (24 hours) in the ICU, and so on. This is only done for the patients that did not die in the ICU. These tasks have been identified in the literature as key performance indicators for models that can be beneficial in ICU management. Most of the recent work has focused on using RNNs to model the temporal dependency of the instrument time series signals for these tasks (BID16). Texts. Biomedical text is traditionally studied using SVM models BID14 with n-grams, bag-of-words, and semantic features. Recent developments in deep learning based techniques for NLP have been adapted for clinical notes. Convolutional neural networks are used to predict ICD-10 codes from clinical texts BID12 BID9. BID15 and BID0 used convolutional neural networks to classify various biomedical articles. Pretrained word and sentence embeddings have also shown good results on sentence similarity tasks BID2. However, none of these works have utilized doctor notes for ICU clinical prediction tasks. Multi-Modal Learning. Multi-modal learning has shown success in speech, natural language, and computer vision (BID13; BID10). In health care research, BID19 accommodated supplemental information like diagnoses, medications, lab events, etc., to improve model performance. In this section, we describe the models used in this study. We start by introducing the notation used, then describe the baseline architecture, and finally present our proposed multimodal network. For a patient's ICU stay of length T hours, we have time series observations $x_t$ at each time step t (1-hour interval) measured by instruments, along with doctor's notes $n_i$ recorded at irregular time stamps. Formally, for each patient's ICU stay, we have time series data $[x_t]_{t=1}^{T}$ of length T, and K doctor notes $[n_i]_{i=1}^{K}$, where K is generally much smaller than T. For in-hospital mortality prediction, m is a binary label at t = 48 hours, which indicates whether the person dies in the ICU before being discharged. For decompensation prediction, performed hourly, $[d_t]_{t=5}^{T}$ are binary labels at each time step t, indicating whether the person dies in the ICU within the next 24 hours. For LOS forecasting, also performed hourly, $[l_t]_{t=5}^{T}$ are multi-class labels defined by buckets of the remaining length of stay of the patient in the ICU. Finally, we denote by $N_T$ the concatenated doctor's notes during the ICU stay of the patient (i.e., from t = 1 to t = T). Our baseline model is similar to the models defined in the benchmark. For all three tasks, we used a Long Short-Term Memory (LSTM) network BID6 to model the temporal dependencies between the time series observations $[x_t]_{t=1}^{T}$. At each step, the LSTM composes the current input $x_t$ with its previous hidden state $h_{t-1}$ to generate its current hidden state $h_t$; that is, $h_t = \mathrm{LSTM}(x_t, h_{t-1})$ for t = 1 to t = T. The predictions for the three tasks are then performed with the corresponding hidden states as follows: $\hat{m} = \sigma(W_m h_{48})$, $\hat{d}_t = \sigma(W_d h_t)$, and $\hat{l}_t = \mathrm{softmax}(W_l h_t)$, where $\hat{m}$, $\hat{d}_t$, and $\hat{l}_t$ are the probabilities for in-hospital mortality, decompensation, and LOS, respectively, and $W_m$, $W_d$, and $W_l$ are the respective weights of the fully-connected (FC) layer.
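A minimal PyTorch sketch of the baseline described above: one LSTM over the hourly observations with a separate linear head per task, mirroring the reconstructed prediction equations. The hidden size and the 10 LOS buckets follow the text; everything else is illustrative.

```python
import torch
import torch.nn as nn

class TimeSeriesBaseline(nn.Module):
    def __init__(self, n_features, hidden=256, n_los_buckets=10):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.w_m = nn.Linear(hidden, 1)               # in-hospital mortality head
        self.w_d = nn.Linear(hidden, 1)               # decompensation head
        self.w_l = nn.Linear(hidden, n_los_buckets)   # LOS bucket head

    def forward(self, x):
        # x: (batch, T, n_features), one observation per hour
        h, _ = self.lstm(x)                            # h: (batch, T, hidden)
        m_hat = torch.sigmoid(self.w_m(h[:, -1]))      # prediction at t = T (48h)
        d_hat = torch.sigmoid(self.w_d(h)).squeeze(-1) # hourly decompensation probs
        l_hat = torch.softmax(self.w_l(h), dim=-1)     # hourly LOS bucket distribution
        return m_hat, d_hat, l_hat
```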
Notice that in-hospital mortality is predicted at the end of 48 hours, while the predictions for the decompensation and LOS tasks are made at each time step after the first four hours of the ICU stay. We trained the models using cross entropy (CE) loss, e.g., for mortality $\mathcal{L}_{CE} = -\left(m \log \hat{m} + (1-m)\log(1-\hat{m})\right)$, and analogously for the per-step decompensation and LOS targets. In our multimodal model, our goal is to improve the predictions by taking both the time series data $x_t$ and the doctor notes $n_i$ as input to the network. Convolutional Feature Extractor for Doctor Notes. As shown in FIG2, we adopt a convolutional approach similar to BID8 to extract textual features from the doctor's notes. For a piece of clinical note N, our CNN takes the word embeddings $e = (e_1, e_2, \ldots, e_n)$ as input and applies 1D convolution operations, followed by max-pooling over time, to generate a p-dimensional feature vector $\hat{z}$, which is fed to the fully connected layer alongside the LSTM output from the time series signal (described in the next paragraph) for further processing. From now on, we denote the 1D convolution over note N as $\hat{z} = \mathrm{Conv1D}(N)$. To predict the mortality label m at t = T (T = 48), $[x_t]_{t=1}^{T}$ is processed through an LSTM layer just like the baseline model in Sec. 3.1, and for the notes, we concatenate (⊗) all the notes $N_1$ to $N_K$ charted between t = 1 and t = T to generate a single document $N_T$. More formally, $\hat{m} = \sigma(W_m [h_T; \mathrm{Conv1D}(N_T)])$ with $N_T = N_1 \otimes \cdots \otimes N_K$. We use pre-trained word2vec embeddings BID11 trained on both MIMIC-III clinical notes and PubMed articles to initialize our methods, as they outperform other embeddings as shown in BID2. We also freeze the embedding layer parameters, as we did not observe any improvement from fine-tuning them. Being sequential prediction problems, modeling decompensation and length of stay requires a special technique to align the discrete text events to the continuous time series signals, measured at 1 event per hour. Unlike in-hospital mortality, here we extract feature maps $\hat{z}_i$ by processing each note $N_i$ independently using 1D convolution operations. For each time step t = 1, 2, ..., T, let $z_t$ denote the extracted text feature map to be used for prediction at time step t. We compute $z_t$ as $z_t = \sum_{i=1}^{M} \exp(-\lambda (t - t_i))\, \hat{z}_i$, where M is the number of doctor notes seen before time step t, $t_i$ is the chart time of note $N_i$, and λ is a decay hyperparameter tuned on validation data. Notice that $z_t$ is a weighted sum of the feature vectors, where the weights are computed with an exponential decay function. The intuition behind using a decay is to give preference to recent notes, as they better describe the current state of the patient. The time series data $x_t$ is modeled using an LSTM as before. We concatenate the attenuated output from the CNN with the LSTM output for the prediction tasks as follows: $\hat{d}_t = \sigma(W_d [h_t; z_t])$ and $\hat{l}_t = \mathrm{softmax}(W_l [h_t; z_t])$. Both our baselines and multimodal networks are regularized using dropout and weight decay. We used the Adam optimizer to train all our models. We used the benchmark's train/test split and then held out 15% of the remaining data as a validation set. However, we dropped all clinical notes that do not have any chart time associated with them, and then dropped all patients without any notes. Notes charted before ICU admission are concatenated and treated as one note at t = 1. For the in-hospital mortality task, the best performing baseline and multimodal network have a 256-hidden-unit LSTM cell. For the convolution operation, we used 256 filters for each of the kernel sizes 2, 3, and 4. For decompensation and LOS prediction, we used 64 hidden units for the LSTM and 128 filters for each of the kernel sizes 2, 3, and 4. The best decay factor λ for text features was 0.01.
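The decay-weighted note aggregation can be written directly from the formula above. A small sketch, assuming each note has already been encoded into a feature vector ẑ_i by the CNN:

```python
import torch

def attenuated_note_features(note_feats, note_times, t, lam=0.01):
    """Weighted sum of note feature vectors with exponential decay.

    note_feats: (K, p) tensor, one CNN feature vector z_i per note
    note_times: (K,) tensor of chart times t_i (hours)
    t: current prediction time step; lam: decay hyperparameter
    """
    mask = note_times < t                    # only notes charted before t
    if mask.sum() == 0:
        return torch.zeros(note_feats.size(1))
    dt = t - note_times[mask].float()
    w = torch.exp(-lam * dt)                 # recent notes get larger weight
    return (w.unsqueeze(1) * note_feats[mask]).sum(dim=0)
```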
We use the Area Under the Precision-Recall curve (AUC-PR) metric for the in-hospital mortality and decompensation tasks, as they suffer from class imbalance (only 10% of patients suffer mortality), following the benchmark. BID3 suggest AUC-PR for imbalanced class problems. We use Cohen's linear weighted kappa, which measures the correlation between predicted and actual multi-class buckets, to evaluate LOS, in accordance with the benchmark (usage of both metrics is sketched below). We compared the multimodal network with the baseline time series LSTM models for all three tasks. Results from our experiments are documented in TAB1. Our proposed multimodal network outperforms the time series models on all three tasks. For in-hospital mortality prediction, we see an improvement of around 7.8% over the baseline time series LSTM model. The other two problems were inherently more challenging than the first task, and modeling the notes for sequential prediction was difficult. With our multimodal network, we saw improvements of around 6% and 3.5% for decompensation and LOS, respectively. We did not observe a change in performance with respect to the results reported in the benchmark study, despite dropping patients with no notes or chart time. In order to understand the predictive power of clinical notes, we also train text-only models using the CNN part of our proposed model. Additionally, we try averaging word embeddings without a CNN as another baseline method for extracting features from the text. Text-only models perform poorly compared to the time-series baseline; hence, text can only provide additional predictive power on top of time-series data. Identifying the patient's condition in advance is of critical importance for acute care and ICU management. The literature has focused exclusively on using time-series measurements from ICU instruments to this end. In this work, we demonstrate that utilizing clinical notes along with time-series data can improve the prediction performance significantly. In the future, we expect to improve further using more advanced models for the clinical notes, since the text summarizes expert knowledge about a patient's condition.
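Both evaluation metrics are available in scikit-learn; a toy usage sketch (with made-up labels and scores) is:

```python
from sklearn.metrics import average_precision_score, cohen_kappa_score

# toy values; in practice these come from the trained models
y_true_m = [0, 0, 1, 0, 1]
y_score_m = [0.1, 0.3, 0.8, 0.2, 0.6]
print("mortality AUC-PR:", average_precision_score(y_true_m, y_score_m))

# LOS: integer bucket ids in [0, 9]; linear weights penalize near-misses less
y_true_l = [0, 1, 8, 9, 3]
y_pred_l = [0, 2, 8, 9, 4]
print("LOS kappa:", cohen_kappa_score(y_true_l, y_pred_l, weights="linear"))
```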
We demonstrate that using clinical notes in conjunction with ICU instrument data improves the performance on ICU management benchmark tasks
431
scitldr
Existing works in deep Multi-Agent Reinforcement Learning (MARL) mainly focus on coordinating cooperative agents to complete certain tasks jointly. However, in many real-world cases, agents are self-interested, such as employees in a company and clubs in a league. Therefore, the leader, i.e., the manager of the company or the league, needs to provide bonuses to followers for efficient coordination, which we call expensive coordination. The main difficulties of expensive coordination are that i) the leader has to consider the long-term effect and predict the followers' behaviors when assigning bonuses, and ii) the complex interactions between followers make the training process hard to converge, especially when the leader's policy changes with time. In this work, we address this problem through an event-based deep RL approach. Our main contributions are threefold. We model the leader's decision-making process as a semi-Markov Decision Process and propose a novel multi-agent event-based policy gradient to learn the leader's long-term policy. We exploit the leader-follower consistency scheme to design a follower-aware module and a follower-specific attention module to predict the followers' behaviors and respond accurately to them. We propose an action abstraction-based policy gradient algorithm to reduce the followers' decision space and thus accelerate the training process of the followers. Experiments in resource collections, navigation, and the predator-prey game reveal that our approach outperforms the state-of-the-art methods dramatically. Deep Multi-Agent Reinforcement Learning (MARL) has been widely used to coordinate cooperative agents to jointly complete certain tasks, where each agent is assumed to be selfless (fully cooperative), i.e., willing to sacrifice itself to maximize the team reward. However, in many real-world cases, the agents are self-interested, such as taxi drivers in a taxi company (fleet) and clubs in a league. For instance, in the example of taxi fleets, drivers may prefer to stay in areas with high customer demand to gain more reward. It is unfair and inefficient to compel a taxi driver to selflessly contribute to the company, e.g., to stay in a low-demand area. Forcing the drivers to selflessly contribute may increase the company's income in the short term, but it will ultimately cause low efficiency and unsustainability for the company in the long run, because unsatisfied drivers may become demotivated and even leave the company. Another important example is a government that wants some companies to invest in poor areas to achieve fairness in society, which may inevitably reduce the companies' profits. As in the previous example, the companies may leave when the government forces them to invest. A better way to achieve coordination among followers and the leader's goals is for the manager of the company or the government to provide bonuses to the followers: the taxi company pays extra bonuses for serving customers in rural areas, and the government provides subsidies for investing in poor areas. We term this expensive coordination. In this paper, we solve the large-scale sequential expensive coordination problem with a novel RL training scheme. There are several lines of work related to the expensive coordination problem, including mechanism design and the principal-agent model.
However, these works focus more on static decisions (each agent only makes a single decision). To consider sequential decisions, the leader-follower MDP game and RL-based mechanism design have been introduced, but most of these works only focus on matrix games or small-scale Markov games, which cannot be applied to cases with large-scale action or state spaces. The most related work is M³RL, where the leader assigns goals and bonuses by using a simple attention mechanism (summing/averaging features together) and mind (behavior) tracking to predict the followers' behaviors and respond to them. But that work only considers rule-based followers, i.e., followers with fixed preferences, and ignores the followers' behaviors responding to the leader's policy, which significantly simplifies the problem and limits the realism of the model. In the expensive coordination problem, two critical issues must be considered: 1) the leader's long-term decision process, where the leader has to consider both its own long-term effects and the long-term behaviors of the followers when determining its action to incentivize coordination among the followers, which is not considered in prior work; and 2) the complex interactions between the leader and followers, where the followers adapt their policies to maximize their own utility given the leader's policy, which makes the training process unstable and hard, if not impossible, to converge in large-scale environments, especially when the leader changes its actions frequently; this is also ignored by prior work. In this work, we address these two issues in the expensive coordination problem through an abstraction-based deep RL approach. Our main contributions are threefold. We model the leader's decision-making process as a semi-Markov Decision Process (semi-MDP) and propose a novel event-based policy gradient to learn the leader's policy considering the long-term effect (the leader takes actions at important points rather than at each step, to avoid myopic decisions) (Section 4.1). A well-performing leader policy is also highly dependent on how well the leader knows the followers. To predict the followers' behaviors precisely, we present the leader-follower consistency scheme. Based on this scheme, the follower-aware module, the follower-specific attention module, and the sequential decision module are proposed to capture the followers' behaviors and respond to them accurately (Section 4.2). To accelerate the training process, we propose an action abstraction-based policy gradient algorithm for the followers. This approach reduces the followers' decision space, and thus simplifies the interaction between the leader and followers as well as accelerating the training of the followers (Section 4.3). Experiments in resource collections, navigation, and predator-prey show that our method outperforms the state-of-the-art methods dramatically. Our work is closely related to leader-follower RL, temporal abstraction RL, and event-based RL. Leader-follower RL. Leader-follower RL targets the issue of expensive coordination, where the leader wants to maximize the social benefit (or the leader's own benefit) by coordinating non-cooperative followers through providing them bonuses.
Previous works have investigated different approaches to expensive coordination, including vanilla leader-follower MARL, leader semi-MDPs, MARL with multiple followers and sub-followers, follower abstraction, and Bayesian optimization. But most of them focus on simple tabular games or small-scale Markov games. The most related work leverages a deep RL approach to compute the leader's policy of assigning goals and bonuses to rule-based followers. (Figure 1: Overview of our framework; the details of the leader's and the follower's modules can be found in Section 4.2 and Section 4.3, respectively, and implementation details of each module in Appendix D.2.1.) But their method performs poorly when the followers are RL-based. In this work, we aim to compute the leader's policy against RL-based followers in complex, sequential scenarios. Temporal abstraction RL. Our method is also related to temporal abstraction methods. The basic idea of temporal abstraction is to divide the original one-level decision process into a two-level decision process, where the high-level part decides the meta goal while the low-level policy selects primitive actions. Our leader's decision process is different from the methods mentioned above because the leader's policy naturally forms an intermittent (temporal abstraction) decision process (semi-MDP), and it is unnecessary to design a two-level decision process for the leader (since the low-level decision process is the follower). Based on this property of the leader, a novel training method is introduced. Event-based RL & Planning. Previous studies also use events to capture important elements (e.g., whether an agent reaches a goal) during the whole episode. Some regard the leader's action and the environment feedback as events in a continuous-time environment; others leverage events to capture the fact that an agent has accomplished some goal. We adopt this idea by depicting an event as the action taken by the leader at some time step, and design a novel event-based policy gradient to learn the long-term leader policy. Our research focuses on single-leader multi-follower Stackelberg Markov Games (SMGs), which can be formulated as a tuple $G = \langle N, S, A, \Omega, P, R, \gamma \rangle$. N is the set of N followers, i.e., |N| = N. S is the set of states, $s_0 \in S_0 \subset S$ is an initial state, and $S_0$ is the set of initial states. $A = \times_{k \in N} A_k$ is the set of joint actions of the followers, where $a^k \in A_k$ is an action of the k-th follower. $\omega \in \Omega = \times_{k \in N} \Omega_k$ is an action of the leader, and $\omega^k$ is a goal and a bonus that the leader assigns to the k-th follower. $P: S \times A \to \Delta(S)$ is the transition function, and $R = \times_{k \in N} r^k \times r^l$ is the set of reward functions, where $r^k: S \times A \times \Omega \to \mathbb{R}$ is the reward function of the k-th follower and $r^l: S \times A \times \Omega \to \mathbb{R}$ is the reward function of the leader. γ is the discount factor and a is a joint action of the followers. The leader's policy is defined as $\mu = \{\mu^k\}_{k \in N}$, where $\mu^k: \Omega \times S \to \Delta(\Omega_k)$ gives the leader's action to the k-th follower given the leader's action at the previous time step $\omega_{t-1}$ and the current state $s_t$. $\Delta(\cdot)$ denotes a probability distribution. The followers' joint policy is defined as $\pi = \{\pi^k\}_{k \in N}$, where $\pi^k$ is the k-th follower's policy given the leader's action $\omega^k_t$ and the current state $s_t$. Given the policy profile $\langle \mu, \pi \rangle$ of the leader and followers, the k-th follower's utility is defined as $J^k(\mu, \pi) = \mathbb{E}\big[\sum_{t=0}^{T} \gamma^t r^k_t(s_t, a_t, \omega_t)\big]$ and the leader's utility is $J(\mu, \pi) = \mathbb{E}\big[\sum_{t=0}^{T} \gamma^t r^l_t(s_t, a_t, \omega_t)\big]$.
We assume that the leader and followers aim to maximize their own utilities. We define a trajectory τ as a sequence of state, leader's action, and followers' actions, $\tau = \langle \omega_{-1}, (s_t, a_t, \omega_t)_{t=0}^{T} \rangle$, where $\omega_{-1}$ is the leader's action at the first step and is set to zero. (Figure 2: (a) a simple example illustrating $A_T$; (b) the probabilistic graphical model of the proposed framework, where the dotted line means that β affects the final ω indirectly, and $\omega_{-1}$ is set to zero.) In this section, we propose a novel training scheme to train a well-performing leader policy against both rule-based and RL-based followers in the expensive coordination problem. We address the two issues, the leader's long-term decision process and the complex interactions between the leader and followers, with three key steps: (a) we model the leader's decision-making process as a semi-Markov Decision Process (semi-MDP) and propose a novel event-based policy gradient that takes actions only at important time steps, to avoid myopic policies; (b) to accurately predict the followers' behaviors, we construct a follower-aware module based on leader-follower consistency, including a novel follower-specific attention mechanism and a sequential decision module, to predict the followers' behaviors precisely and respond to them accurately; and (c) we propose an action abstraction-based policy gradient method for the followers, to simplify the decision process of the followers, and thus simplify the interaction between leader and followers and accelerate the convergence of the training process. We first describe the event-based trajectory optimization for the leader. As mentioned above, the leader's decision process can be naturally formulated as a semi-MDP; therefore, we first describe the basic ideas of a semi-MDP using a modified option structure. We define the modified option as a tuple $\langle \mu, \beta \rangle$, where µ is the leader's policy as defined above and $\beta^k(s_t, \omega_{t-1}): S \times \Omega \to [0, 1]$ is the termination function for the k-th follower, indicating the probability that the leader's action to the k-th follower changes ($\omega^k_{t-1} \neq \omega^k_t$). Based on these definitions, we formulate a one-step option-state transition function with decay. Unlike the standard option framework, we do not have a low-level policy here (the low-level policy is the follower), and since we only focus on the finite time horizon, γ is set to 1. Our modified option is used to depict the long-term decision process of the leader, as shown in Fig. 2. Now we discuss our leader's policy gradient. In fact, it is not easy to directly optimize the leader's utility based on this multi-agent option-state transition function, since its form includes the leader's different action stages toward different followers. Notice that for a sampled trajectory, the occurrence of the leader's actions is deterministic. Therefore, we can regard the time step and the action the leader takes at that step as an event and define the (universal) event set $U_T$. We use the notation $e^k_i = \langle t_i, \omega^k_{t_i} \rangle$ to represent the leader's action to the k-th follower at step $t_i$, where i is the index of the event. Since we focus on changes of the leader's actions, we further define the set representing the collection of new actions ($\omega^k_{t_i} \neq \omega^k_{t_i - 1}$) taken by the leader within the trajectory: $A_T = \{ e^k_i \in U_T : \omega^k_{t_i} \neq \omega^k_{t_i - 1} \}$, where $t_i - 1$ is the previous time step. $A_T$ represents when and how the leader commits to a new action (an example can be found in Fig. 2a). For brevity, $e^k_j \notin A_T$ means $e^k_j \in U_T \setminus A_T$.
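Extracting A_T from a sampled trajectory is mechanical: scan the leader's per-follower actions and record each step where the action changes. A small sketch, using ω_{-1} = 0 as in the text:

```python
def event_set(leader_actions):
    """Extract A_T: events (k, t, w) where the leader commits to a new action.

    leader_actions: list over time of dicts {follower k: action w_t^k};
    the action before t = 0 is taken to be 0 for every follower.
    """
    A_T, prev = [], {}
    for t, w_t in enumerate(leader_actions):
        for k, w in w_t.items():
            if w != prev.get(k, 0):      # action changed -> new event e_i^k
                A_T.append((k, t, w))
            prev[k] = w
    return A_T
```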
The probability of $A_T$ can be represented as a product over events, where $t_j - 1$ is the previous time step for $t_j$: it gives the probability of the occurrence of a given leader event set within a trajectory. Concretely, the leader changes its action to the k-th follower at $t_i \in e^k_i$ while maintaining the same action within the interval from $t_i - 1 \in e^k_{i-1}$. Similarly, we can further define the probability of the whole trajectory τ, P(τ); compared with $P(A_T)$, P(τ) also includes the probabilities of the followers' actions as well as the state transitions. Note that our goal is to maximize the leader's expected accumulated reward, indicating that the leader is required to select actions that maximize $R_\tau(T) = \sum_{t=0}^{T} \gamma^t r^l_t$, where the subscript τ stresses that the accumulated reward comes from trajectory τ. Following the REINFORCE trick, the policy gradients for the termination function and the leader's policy function can be formulated under the following proposition: Proposition 1. The policy gradients for the termination function $\beta^k(s_{t_i}, \omega_{t_i})$ and the leader's policy function $\mu^k(\omega^k_{t_i} \mid s_{t_i}, \omega_{t_i - 1})$ can be written as indicator-weighted score-function gradients, where θ and ϑ are the parameters of the termination function $\beta^k_\theta$ and the leader's policy $\mu^k_\vartheta$, and I(·) and I′(·) are piece-wise indicator functions. All proofs can be found in Appendix A. Proposition 1 implies that under the event-based method, whether the leader commits to a new action induces different policy gradients for both the termination function and the policy function. However, from the empirical results, we find that the leader's policy function updates rarely during the whole episode, because the policy is only updated when the leader commits to a new action, which causes sample inefficiency. Notice that the leader in fact commits to the same action when $e^k_i \notin A_T$. Therefore, the policy indicator function I′(·) can be formulated in an alternative way that considers both committing to a new action and maintaining the same action (details can be found in Remark 2), which we call the (dense) Event-Based Policy Gradient (EBPG), referring to the previous variant as the sparse EBPG. Intuitively, the dense EBPG is better than the sparse EBPG because it updates the leader's policy function more frequently. For example, suppose that at time step t the leader chooses a wrong action for follower k and receives a negative reward. The leader should then learn, via the policy gradient, to make that action less likely in that state. The sparse EBPG performs only one policy-gradient update before the action terminates (at the committing step), while the dense variant performs a policy-gradient update at each step before the action terminates. The latter provides more signal to correct the wrong action. Experiments also reveal that the dense variant is better (Sec. D.3.3). The EBPG approach improves the leader's performance. However, it is still very hard for the leader to choose actions with long-term effects based only on the current state information, because the followers change their behaviors over time according to the leader's policy. Therefore, we introduce new modules and training schemes to capture the changes in the followers' behaviors as well as the global state. To abstract the complicated state information, we use neural networks to learn the state representation.
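The sparse/dense distinction amounts to which time steps contribute a policy-gradient term for a given follower. A toy sketch of the two indicator masks (the exact gradient expressions are in Proposition 1 and Remark 2; this only illustrates the difference in update density):

```python
import torch

def ebpg_masks(changed):
    """Illustrative indicator masks for the event-based policy gradient.

    changed: (T,) bool tensor, True where the leader committed to a new
    action for a given follower (i.e., the step belongs to A_T).
    Sparse EBPG updates the policy only at commitment steps; the dense
    variant also reinforces the steps where the same action is maintained.
    """
    sparse_mask = changed.float()
    dense_mask = torch.ones_like(sparse_mask)   # commit steps and holding steps
    return sparse_mask, dense_mask

# e.g., a loss term would weight log pi(w_t | s_t) at step t by mask[t] * return
```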
To capture the followers' behaviors and respond to them accurately, we design three modules: we exploit leader-follower consistency under game regularization and policy bound conditions; based on this consistency, a follower-aware module is introduced; and based on the follower-aware module, a novel attention mechanism and a sequential decision-making module are designed to respond accurately to the followers' behaviors, as shown in Fig. 1. Leader-Follower Consistency. In previous works, a surge of research has focused on predicting other agents' behaviors through historical information, where the other agents are assumed to be opponents; this is only suitable for zero-sum games. However, these methods cannot be directly applied to our case because the SMG is not zero-sum. We instead rely on two assumptions. Assumption 1 (Game Regularization): the action and state spaces are limited and the reward function for the leader's action is smooth. This assumption is inspired by prior work; we only extend it to the multi-agent form. Assumption 2 (Policy Bound): for any agent k, the reward function $r^k$ and the policy are consistent, i.e., small perturbations of the joint policy induce only bounded changes; here $\pi^{-k}$ denotes the joint policy without the k-th agent's. This assumption indicates that a change in the leader's policy causes only slight changes in each follower's policy. Based on these two assumptions, we propose the following proposition. Proposition 2 (Leader-Follower Consistency). If both the game regularization and policy bound assumptions are satisfied, then for all $\epsilon > 0$ and $k \in N$, there exists $\delta > 0$ such that $|\mu - \mu'| \leq \epsilon$ implies $\|\pi^k - \pi^{k\prime}\| \leq \delta$, where $\mu'$ and $\pi^{k\prime}$ are the new policies of the leader and the k-th follower, respectively. The methods mentioned above, when fully implemented, can enhance performance dramatically. But when facing RL-based followers, the SMG is still hard to converge. This is because in an SMG, the policies of the leader and followers keep changing depending on the other agents' performance. To guarantee convergence, the leader should only update its policy when the followers reach (or are near) their best-response policies. However, when the followers are RL-based agents, there is no way to ensure that the followers' policies are (near) best responses in a large-scale SMG, and the common remedy of simply providing enough training time is unbearable in practice due to limited computing power. To accelerate the training process, inspired by the action abstraction approach commonly seen in Poker and in action abstraction RL, we collect the followers' primitive actions that share the same properties into a meta policy. The followers then only need to select a meta action to make a decision; therefore, the original game is converted into a meta game, which is easier to solve. Specifically, we define the policy of the k-th follower as $\pi^k(a^k \mid \hat{s}^k) = \sum_{z} \pi^k_{meta}(z \mid \hat{s}^k)\, \pi^k_{lower}(a^k \mid \hat{s}^k, z)$, where $\hat{s}^k$ is the augmented state of the follower (the combination of the current state and the leader's action), $\pi^k_{meta}(z \mid \hat{s})$ is the meta policy of the k-th follower, and z is the high-level (meta) action. We hypothesize that the lower-level policy (the policy that chooses primitive actions) is already known (rule-based) and deterministic, i.e., $\pi^k_{lower}(a^k \mid \hat{s}, z) = 1$ for the selected action. For instance, in the navigation task, $\pi^k_{meta}$ can select which landmark to explore, while $\pi^k_{lower}$ is a specific route-planning algorithm (such as Dijkstra's algorithm); a sketch of this factorization follows.
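A sketch of the factorized follower under these assumptions: a small learned meta policy picks z, and a fixed rule-based lower policy (here an abstract `lower_policy` callable, e.g. a route planner) turns z into a primitive action. Only the meta policy receives gradients, matching the deterministic lower-policy assumption; the network sizes are illustrative.

```python
import torch
import torch.nn as nn

class AbstractedFollower(nn.Module):
    """Follower that picks a meta action z (e.g., which landmark to pursue);
    a fixed rule-based lower-level policy maps z to a primitive action."""

    def __init__(self, state_dim, n_meta):
        super().__init__()
        self.meta = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                  nn.Linear(64, n_meta))

    def act(self, s_aug, lower_policy):
        # s_aug: augmented state (env state + leader's goal/bonus), (state_dim,)
        logits = self.meta(s_aug)
        dist = torch.distributions.Categorical(logits=logits)
        z = dist.sample()                     # meta action
        a = lower_policy(s_aug, z.item())     # deterministic, e.g., a planner step
        return a, dist.log_prob(z)            # log-prob trains only the meta policy
```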
Based on this assumption, we can design a novel policy gradient to train the meta policy: $\nabla_{\lambda^k} J^k = \mathbb{E}_\tau\big[\sum_{t} \nabla_{\lambda^k} \log \pi^k_{meta}(z_t \mid \hat{s}^k_t)\, R^k_\tau\big]$, where $\lambda^k$ is the parameter of the meta policy $\pi^k_{meta}$ (details can be found in Lemma 3). In this section, we discuss how to design the leader's and followers' loss functions. Loss Functions for the Leader. The basic structure for the leader is the actor-critic structure. We find that adding regularizers enhances the leader's performance: we apply maximum-entropy regularization to the leader's policy function and L2 regularization to the termination function, i.e., an entropy bonus on µ and $L_{reg} = \|\beta\|^2$. We also use imitation learning to learn the predicted-action function $p^k$. Following the same logic as prior work, two baseline functions $\phi_g(c_t)$ and $\phi_b(c_t)$ are also introduced to further reduce the variance. Details can be found in Appendix B. Loss Functions for the RL-Based Followers. The basic structure for each follower is also the actor-critic structure. We leverage the action abstraction policy gradient mentioned above. The learning rates of the leader and followers should satisfy the two time-scale principle (roughly speaking, the leader learns more slowly than the followers); a sketch of this setup appears after this section. Details can be found in Appendix B, and the pseudo-code in Appendix C. We evaluate the following tasks, all based on the SMG setting mentioned above, to test the performance of our proposed method: (i) resource collections: each follower collects three types of resources including its preferred one, and the leader can choose two bonus levels; (ii) multi-bonus resource collections: based on (i), but the leader can choose four bonus levels; (iii) modified navigation: followers are required to navigate to landmarks; after a landmark is reached, it disappears and a new landmark appears randomly; (iv) modified predator-prey: followers are required to capture randomly moving prey, and prizes are given for touching them. Both (iii) and (iv) are based on existing multi-agent environments, which we modify into our SMG setting. Moreover, to increase the difficulty, the combination of followers changes in each episode, i.e., in each task there are 40 different followers, and at each episode we randomly choose some of them to play the game. More details can be found in Appendix D. Baselines & Ablations. To evaluate our method, we compare against a recently proposed baseline: M³RL. We do not include other baselines because other methods cannot be applied to our problems, as justified in the M³RL work. For ablations of the leader part, we use: ours: the full implementation of our method; ours w/o EBPG: removing the event-based policy gradient; ours w/o Attention: replacing the follower-specific attention model with the original attention model from M³RL. For the follower part, we use (a) rule-based followers, (b) vanilla RL-based followers, and (c) action abstraction RL-based followers, to test the ability of our method when facing different followers. Hyper-Parameters. Our code is implemented in PyTorch. Unless otherwise mentioned, the batch size is 1 (online learning). Similar to prior work, we set the learning rate to 0.001 for the leader's critic and the followers, and 0.0003 for the leader's policy. The optimization algorithm is Adam. Our method takes less than two days to train on an NVIDIA GeForce GTX 1080Ti GPU for each experiment. For the loss function, we set $\lambda_1 = 0.01$ and $\lambda_2 = 0.001$.
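The two time-scale principle with the reported learning rates can be set up as follows; the linear modules are stand-ins for the real leader and follower networks described in the text.

```python
import torch
import torch.nn as nn

# Stand-in modules; the actual architectures are described in the paper.
leader_actor, leader_critic = nn.Linear(8, 4), nn.Linear(8, 1)
followers = [nn.Linear(8, 4) for _ in range(4)]

# Two time-scale principle: the leader's policy learns more slowly (lr 3e-4)
# than the leader's critic and the followers (lr 1e-3), as reported above.
leader_actor_opt = torch.optim.Adam(leader_actor.parameters(), lr=3e-4)
leader_critic_opt = torch.optim.Adam(leader_critic.parameters(), lr=1e-3)
follower_opts = [torch.optim.Adam(f.parameters(), lr=1e-3) for f in followers]
```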
The total training length is 250,000 episodes for all tasks (for both rule-based and RL-based followers). To encourage exploration, we use ι-greedy exploration. For the leader, the exploration rate is set to 0.1 and decreases gradually to zero (over 5,000 episodes). For each follower, the exploration rate is always 0.3 (except in the noise experiments). The quantitative results on the different tasks are shown in Figs. 3 & 4. For the rule-based followers, from Fig. 3, we find that our method outperforms the state-of-the-art method on all tasks, showing that our method is sample efficient and fast to converge. There is an interesting phenomenon: in the multi-bonus resource collections and navigation tasks, only our method obtains a positive reward, indicating that our method works well in complicated environments. For the ablations, we can see that ours w/o attention and ours w/o EBPG are worse than ours, showing that these components do enhance performance. For the RL-based followers, from Fig. 4, we observe that when facing RL-based followers with action abstraction, our approach outperforms the baseline method on all tasks (in the predator-prey game, our reward is twice that of the state-of-the-art). We also find that without action abstraction, the reward is below zero, revealing that the abstraction does play a crucial role in stabilizing training. The next experiment evaluates whether our method is robust to noise, i.e., to followers taking random actions. We run this experiment by introducing noise into the followers' decisions. From Table 1, we find that our method reaches a higher total reward (by more than 5) than the state-of-the-art across all noisy environments, indicating that our method is robust to noise. We also observe that the total reward of the baseline method decreases as the noise increases, while our method is more robust to the change. Moreover, for the incentive (the total gain), our method gains much more incentive than the state-of-the-art method, showing that our method coordinates the followers better. We also ran a substantial number of additional experiments; due to space limitations, we can only summarize some results here. The total incentives: the incentive reveals how successfully the leader interacts with the followers; our method outperforms the state-of-the-art method, indicating a better ability to interact with the followers. Sparse EBPG: we compare the performance gap between sparse EBPG and (dense) EBPG; the results show that the sparse variant is worse than the dense one, supporting the claim that the dense signal improves sample efficiency. Visualizing attention: we visualize the attention module to see what it actually learns, and the results indicate that our attention mechanism does capture the followers to whom the leader needs to assign bonuses. Two time-scale training: we test whether our two time-scale training scheme works, and the ablation shows that this scheme does play an important role in improving the performance of both the leader and the followers. The committing interval: we observe that the dynamic committing interval (our method) performs better than fixed committing intervals. Reward for RL-based followers: we report the rewards of the followers, which reflect how the followers fare.
The results show that our method helps the followers gain more than the state-of-the-art method does. Number of RL-based followers: finally, we test our method with different numbers of RL-based followers. The results show that our method always performs well. The full results can be found in Appendix D. This paper proposes a novel RL training scheme for Stackelberg Markov Games with a single leader and multiple self-interested followers, which considers the leader's long-term decision process and the complicated interaction with the followers, with three contributions. 1) To consider the long-term effect of the leader's behavior, we develop an event-based policy gradient for the leader's policy. 2) To predict the followers' behaviors and respond to them accurately, we exploit leader-follower consistency to design a novel follower-aware module and a follower-specific attention mechanism. 3) We propose an action abstraction-based policy gradient algorithm to accelerate the training of the followers. Experiments in resource collections, navigation, and the predator-prey game reveal that our method outperforms the state-of-the-art methods dramatically. We highlight that SMGs contribute to the RL (especially MARL) community in three key aspects: 1) As mentioned in the Introduction, most existing MARL methods assume that all agents are willing to sacrifice themselves to maximize the total reward, which is not true in many real-world non-cooperative scenarios. In contrast, our proposed method realistically assumes that agents are self-interested. Thus, SMGs provide a new scheme focusing on self-interested agents; we consider this the most significant contribution to the RL community. 2) SMGs can be regarded as multi-agent systems with different roles (the leader and the followers), and our method provides a solution to that problem. 3) Our method also contributes to hierarchical RL: it provides a non-cooperative training scheme between the high-level policy (the leader) and the low-level policies (the followers), which plays an important role when the followers are self-interested. Moreover, our EBPG also proposes a novel policy gradient method for the temporal abstraction structure. There are several directions in which we would like to further extend our SMG model: i) multiple cooperative/competitive leaders and multiple self-interested followers, as in the labor market; ii) multi-level leaders, as in hierarchical organizations and companies; and iii) adversarial attacks on our SMG model, which may induce extra cost for the leader for efficient coordination. We believe that our work is a preliminary step towards a deeper understanding of the leader-follower scheme in both research and its application to society. We would like to thank Tianming Shu, Suming Yu, Enrique Munoz de Cote, and Xu He for their kind suggestions and help. We also appreciate the anonymous reviewers for their hard work. (Appendix A restates Proposition 1: the policy gradients for the termination function and the leader's policy, where θ and ϑ are the parameters of the termination function $\beta^k_\theta$ and the leader's policy $\mu^k_\vartheta$, and I(·) and I′(·) are the piece-wise indicator functions.) Proof. First recall the leader's utility, written as an expectation over trajectories grouped by the number m of times the leader takes a new action, with $m \leq T$; if m = T, the leader has taken a new action at every step. $P(|A_T| = m)$ denotes the probability of taking a new action m times within an episode.
Taking derivatives on both the LHS and the RHS, we obtain the gradient expression. For brevity, we use P(τ) to denote $P(s_0) \prod_{t} P(s_{t+1} \mid s_t, a_t)$ together with the policy terms, the trajectory probability, and with a slight abuse of notation we write $i \in e^k_i$ for $e^k_i \in A_T$ and $j \in e^k_j$ for $e^k_j \notin A_T$. Thus the equation above can be further simplified; the simplification is exactly the REINFORCE trick together with the chain rule of derivatives. The approximation indicates that one trajectory has only one $A_T$. Also, based on the definitions of $e^k_i$ and $e^k_j$, the equation can be rewritten in a more compact form, where I(·) is the piece-wise indicator function. This completes the first part of the proof (the policy gradient for the termination function). The second part (the policy gradient for the leader's action) is proved similarly and likewise rewritten in a more compact form. Remark 1. Some research also focuses on event-based RL, but either on single-agent continuous time or on reward representation. We are the first to develop and implement an event-based policy gradient in a multi-agent system. Remark 2. In fact, the policy gradient for the leader's actions might be somewhat sparse, i.e., the policy is only updated when the leader changes its actions. Notice that the leader commits to the same action when $e^k_i \notin A_T$. Therefore, the probability $P(A_T)$ can also be represented in an alternative form, yielding the dense policy gradient for the leader's policy. Lemma 2. For any agent k, the corresponding reward function $r^k$ w.r.t. ω is C-Lipschitz continuous, with constant $C' = (1-\gamma)^{-1} C$; the last step follows from Assumption 1 and the sum of a geometric series. Proof (of Proposition 2). Combining Lemma 2 and Assumption 2, if $|\omega - \omega'| < \epsilon$, then by setting δ accordingly, the consistency is established. Lemma 3 (Action Abstraction Policy Gradient). Under the assumption that the low-level follower policy $\pi^k_{lower}(a \mid \hat{s}, z)$ is fixed and deterministic, i.e., $\pi^k_{lower}(a_t \mid \hat{s}_t, z_t) = 1$ for the selected action, the policy gradient reduces to a gradient through the meta policy alone. For brevity, with a slight abuse of notation, we omit the superscript on the variables a and z that indexes the agent. We add baseline functions to reduce the variance of the event-based policy gradient for the leader. We adopt the idea of successor representations as two expected baseline functions, $\hat{\phi}_g(c_t)$ and $\hat{\phi}_b(c_t)$: a gain baseline function and a bonus-based baseline function. Two baseline neural networks, with parameters $\hat{\theta}_g$ and $\hat{\theta}_b$, are trained by minimizing the mean squared error, where $c_t$ is the attention-based latent variable. To this end, the gradient for the leader can be formulated with the baseline policy gradients subtracted. We also leverage imitation learning to learn the action probability function $p^k$. Illustrations of the experimental scenarios can be found in Figure 5. Here we give some details about these environments. Resource Collections. This task is based on a scene in which the leader and the followers collect resources. Each follower has its own preference, which might agree with (or go against) the leader's preference. In order to make the followers obey the leader's instructions, the leader should pay the followers bonuses. There are 4 types of resources in total, and each agent has different preferences over the different resources. The leader has two types of bonus (1 or 2) and 4 types of goals (each resource is a goal).
There is 1 leader and 4 followers. Multi-Bonus Resource Collections. This task is similar to Resource Collections, except that the leader can choose among 4 bonus levels (1 to 4) while each agent has one skill. There is 1 leader and 4 followers. Modified Navigation. This task originates from an existing multi-agent environment. We make some modifications to suit our SMG: the leader and the followers navigate to landmarks. Each follower has its own preference, which might agree with (or go against) the leader's preference. When a landmark has been reached, it disappears immediately and a new landmark appears. There are 6 types of landmarks in total, and each agent has different preferences over the different landmarks. The leader has two types of bonus and 6 types of goals (each landmark is a goal). There is 1 leader, 8 followers, and 6 landmarks. Modified Predator-Prey. This task also originates from an existing multi-agent environment. We make some modifications to suit our SMG: the leader and the followers try to catch prey. Each follower has its own preference, which might agree with (or go against) the leader's preference. At each step, whether or not a prey has been caught, it randomly chooses a direction to go. Catching a prey does not make it disappear; prey exist until the game ends. There are 8 types of prey in total, and each agent has different preferences over the different prey. The leader has two types of bonus and 8 types of goals (each prey is a goal). The followers are faster than the prey. There is 1 leader, 10 followers, and 8 prey. Reward Design. The rewards mentioned in Section 3 are the general forms. Here, we define the specific forms of the leader and follower reward functions used in our experiments. Leader Reward. We define $v_g$ as the prize (utility) for finishing task g; the leader's reward at step t sums the prizes of the tasks finished at that step minus the bonuses paid. We emphasize that our leader reward is quite different from prior work: there, a leader who changes its mind after signing a contract is not punished. To make the setting realistic, we modify the reward so that the leader pays the followers their bonuses immediately after signing the contract and cannot get them back if it gives up the contract. Follower Reward. For the k-th follower, the reward $r^k_t$ expresses that the follower can either follow the leader's instruction or simply do what it prefers, where $u_{k,g}$ is the payoff of the k-th follower when finishing task g (its preference). The followers receive the reward immediately after signing the contract (when the leader and follower reach an agreement). The indicator $\mathbb{1}(s^k_t = s_g)$ means that the follower finishes task g at step t. A penalty is added if a follower betrays the leader (the follower and the leader sign the contract but the follower does not agree to work); a sketch of these reward rules appears below. Our network is based on prior work, with some modifications to suit our method: we change the vanilla attention mechanism (which sums/averages all the history and actions of each follower together) to a follower-specific one, in which each follower has a weight indicating how important that follower is at the current step.
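Putting the verbal reward rules into code, with the exact functional forms inferred rather than taken from the (elided) equations; v_g, u_{k,g}, and the betrayal penalty are as described above, and the penalty value is an arbitrary placeholder.

```python
def leader_reward(completed, bonuses_paid, prize):
    """Sketch of the leader's per-step reward: prizes v_g for finished tasks
    minus bonuses paid on signing, which are not refunded if the leader
    later abandons the contract."""
    return sum(prize[g] for g in completed) - sum(bonuses_paid)

def follower_reward(bonus_signed, finished_g, pref, betrayed, penalty=1.0):
    """Sketch of the k-th follower's per-step reward."""
    r = bonus_signed                  # paid immediately when the contract is signed
    if finished_g is not None:
        r += pref[finished_g]         # u_{k,g}: the follower's own payoff for task g
    if betrayed:                      # signed the contract but refused to work
        r -= penalty
    return r
```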
The outputs for g and b are changed into a sequential form, i.e., we first compute $p(g^k_t)$ and then condition the bonus prediction on it. The total incentive $R_{in}$ can reveal how well the leader interacts with the followers: the higher $R_{in}$ is, the more successful the coordination between the leader and the followers. From Figures 6 and 7, comparing with the state-of-the-art method, we can see that our method far outperforms it, which reveals that our method does have a better ability to coordinate with the followers. In all of the scenarios, without EBPG, the performance of our method is worse than ours with EBPG. Specifically, in some scenarios (e.g., multi-bonus resource collections, navigation), without EBPG the performance of our method is similar (or nearly similar) to that of M³RL, showing the effectiveness of our novel policy gradient. Moreover, we notice that in the navigation environment, without follower-specific attention, the performance of our method diminishes rapidly, which implies that in some scenarios attention does play an important role. (Figure 9: ablation study of sparse EBPG in the predator-prey task.) In this section, we test the robustness of our method; specifically, we evaluate whether our method is robust to noise. We run this experiment by introducing noise into the followers' decisions: for example, if we set the noise level to 30%, there is a 30% probability that a follower chooses its action randomly. The next experiment tests whether the dense event-based policy gradient improves the leader's performance compared with the sparse event-based policy gradient. We use the following ablations: Ours: the full structure of our method; Sparse Event-Based Policy Gradient (sparse EBPG): the full structure of our method, except that EBPG is replaced by the sparse event-based policy gradient; sparse EBPG w/o attention: additionally replacing the follower-specific attention mechanism by averaging the input features. From Figures 8 and 9 we find that the sparse policy gradient performs worse than the dense one, implying that the dense method does improve the leader's performance. There is also an interesting phenomenon: sparse EBPG with the follower-specific attention mechanism performs better than without it, revealing that attention can stabilize training when the training signal is sparse. (Figure 11: ablation study of reward curves for the two time-scale update method in the resource collections task.) Following the same logic as prior work, we visualize the attention weights when the leader takes actions. From Figure 10, we find that the attention mechanism does learn to strongly attend to the followers toward whom the leader needs to take actions. The followers receiving the leader's commitment obtain much higher attention weights than the others, showing that the attention module indeed learns to identify the important followers when the leader commits to a new action. Thus, the attention mechanism does play an important role in improving performance. In order to evaluate our two time-scale update scheme (TTSU), we perform an ablation study, shown in Fig. 11. We find that the setting where the followers' learning rate α (1 × 10⁻³) is much larger than the leader's β (3 × 10⁻⁴) performs better than the setting where the leader's learning rate equals the followers' (1 × 10⁻³). Moreover, without TTSU, the reward curves of the training methods become unstable, revealing that TTSU can stabilize the training process.
We also compare the leader's performance under static committing intervals and under our dynamic committing interval. As shown in Figure 12, all of the fixed committing intervals only change the rate of convergence and do not enhance the leader's performance; they are all much worse than our dynamic committing approach, revealing that the dynamic committing approach contributes greatly to improving the leader's performance.

We are also interested in the reward of the RL-based follower(s). Intuitively, a well-performing leader should let the followers gain more. As shown in Figure 13, the reward of the RL follower is higher than that of the M3RL follower in all tasks. This shows that our leader coordinates the followers better and lets them gain more reward than other methods, forming a win-win strategy.

Finally, we evaluate the leader's performance with different numbers of RL-based followers. As shown in Figure 15, our method outperforms the state-of-the-art method across different numbers of RL-based followers.

To further illustrate the performance of different combinations of our modules, we run an extra ablation, choosing resource collections as the environment. Our analysis is as follows: for the RL-based followers scenario, as shown in Table 2, the action abstraction policy gradient is very important for convergence. Additionally, adding different modules improves performance, and the method with all of the modules reaches the highest reward among all combinations. The contribution ranking of the modules is: action abstraction policy gradient > EBPG > leader-follower consistency.
We propose an event-based policy gradient to train the leader and an action abstraction policy gradient to train the followers in a leader-follower Markov game.
432
scitldr
Recent work has studied the emergence of language among deep reinforcement learning agents that must collaborate to solve a task. Of particular interest are the factors that cause language to be compositional---i.e., to express meaning by combining words which themselves have meaning. Evolutionary linguists have found that, in addition to structural priors like those already studied in deep learning, the dynamics of transmitting language from generation to generation contribute significantly to the emergence of compositionality. In this paper, we introduce these cultural evolutionary dynamics into language emergence by periodically replacing agents in a population to create a knowledge gap, implicitly inducing cultural transmission of language. We show that this implicit cultural transmission encourages the resulting languages to exhibit better compositional generalization.

Compositionality is an important structure of language that reflects a disentangled understanding of the world, enabling the expression of infinitely many concepts using finitely many elements. Agents that have compositional understandings of the world generalize in obviously correct ways even in the face of limited training examples. For example, an agent with a compositional understanding of blue squares and purple triangles should also understand purple squares without directly observing any of them. Developing artificial agents that can ground, understand, and produce compositional (and therefore more interpretable) language could greatly improve generalization to new instances and ease human-AI interactions.

In building theories of how compositionality emerges in human languages, work in evolutionary linguistics looks to the process of cultural transmission. Cultural transmission of language occurs when a group of agents pass their language on to a new group of agents, e.g. parents who teach their children to speak as they do. Because this education is incomplete and biased, it allows the language itself to change over time via a process known as cultural evolution. This paradigm explains the emergence of compositionality as a result of competing pressures toward expressivity and compressibility: to be most effective, a language should be expressive enough to differentiate between all possible meanings (e.g., objects) and compressible enough to be learned easily. Work in the evolutionary linguistics community has shown that over multiple 'generations' these competing pressures result in the emergence of compositional languages, both in simulation and with human subjects. These studies aim to understand humans, whereas we want to understand and design artificial neural networks.

Approaching the problem from another direction, recent work in AI has studied language emergence in multi-agent, goal-driven tasks. These works have demonstrated that agent languages will emerge to enable coordination-centric tasks to be solved without direct or even indirect language supervision. However, the resulting languages are usually not compositional and are difficult to interpret, even by other machines. Some existing work has studied means to encourage compositional language formation, but these settings study fixed populations of agents, i.e. they examine language within a single generation. In this work we bridge these two areas, examining the effect of generational cultural transmission on the compositionality of emergent languages in a multi-agent, goal-driven setting. We introduce cultural transmission into language emergence between neural agents.
The starting point of our study is a goal-oriented dialog task, summarized in Fig. 1a. During learning we periodically replace some agents with new ones (gray agents). These new agents do not know any language, but instead of creating one they learn it from older agents. This creates generations of language that become more compositional over time.

We study this in the context of a cooperative dialog-based reference game involving two agents communicating in discrete symbols; an example dialog is shown in Fig. 1a. To examine cultural transmission, we extend this setting to a population of agents (Fig. 1b) and introduce a simple mechanism to induce the expressivity and compressibility pressures inherent in cultural transmission. Specifically, we periodically re-initialize some subset of the agents in the population. In order to perform well at the task, the population's emergent language must be sufficiently expressive to reference all the objects (expressivity) and must be easily learnable by these 'new' agents (compressibility). The new agents have a randomized language whereas the surviving agents already know a grounded language. This "knowledge gap" creates an implicit 'teaching' setting that is analogous to the explicit transmission stage in models of iterated learning.

Through our experiments and analysis, we show that periodic agent replacement is an effective way to induce cultural transmission and yields more compositionally generalizable language in our setting. To summarize, our contributions are:
- We propose a method for inducing implicit cultural transmission in neural language models.
- We introduce new metrics to measure the similarity between agent languages and verify that cultural transmission has occurred as a result of our periodic agent replacement protocol.
- We show our cultural transmission procedure induces compositionality in neural language models, going from 13% accuracy on a compositionally novel test set to 46% in the best configuration. Further, we show this is complementary with previous priors which encourage compositionality.

We consider the cooperative Task & Talk reference game. Shown in Fig. 1a, the game is played by two agents: one who observes an attributed object, e.g. (purple, solid, square), and another who is given a task to retrieve a subset of these attributes over the course of the dialog, e.g. (color, shape). The dialog itself consists of two rounds of agents exchanging single-token utterances from fixed vocabularies. At the end of the dialog, the task-aware agent must report the requested attributes, and both agents are rewarded for correct predictions. This causes a language grounded in the objects to emerge because there is no other way to solve the task. A compositional solution to this task can look like a question-answer style dialog where the task-aware agent queries the other for specific attributes (Fig. 1a), e.g. uttering "X" to request the color, to which the other agent replies "1" indicating purple. Importantly, this pattern would persist regardless of the other attribute values of the object (e.g. for all (purple, *, *) objects). However, as there is no grounding supervision provided, agents must learn to associate specific meanings with specific words, and it is unlikely for compositional languages to emerge purely by chance. Given a color task, an agent might use "1" for (purple, solid, square) and then "2" for (purple, solid, circle).
It is impossible for other agents to know that "2" means purple without having seen (purple, solid, circle), so compositional language is essential for generalization to compositionally novel instances.

Models. To formalize this setting, let Q-bot and A-bot be agent policies parameterized by neural networks Q and A respectively. At each round t, Q-bot observes the task x_Q and its memory of the dialog so far; A-bot observes the object instance x_A (represented symbolically by concatenating 3 one-hot vectors, one per attribute) along with its own dialog memory h_A. After two rounds, Q-bot must respond to the task, predicting the requested attribute pair û = U(x_Q, h_Q^T) as a function of the task and Q-bot's final memory state. Both agents are rewarded if both attributes are correct (no partial credit). We follow the neural network architectures of Q, A, and U from the original work.

Measuring Compositional Generalization. The original work generated a synthetic dataset consisting of three attribute types (color, shape, style), each with four values (e.g. red, blue, square, star, dotted, solid, ...), and six tasks, one for each ordered pair of different attribute types. In total, this results in 64 unique instances and 384 task-instance pairs. To evaluate compositionality, 12 random instances were held out for testing. Given the closed-world set of instances, these 12 are guaranteed to be never-before-seen triplets of attribute values; however, each individual value has been seen in training in other triplets. As such, accuracy on this set is a measure of compositional generalization.

Shortcomings of Evaluation. In our investigations, we found some shortcomings in this evaluation protocol. First, the authors do not report variance over multiple runs or different random test sets, which we found to be significant. Second, the strategy of randomly selecting the test set can still reward some only partially compositional strategies. For instance, suppose agents develop a language that uses single words to refer to attribute pairs like (red, *, triangle) and (red, filled, *). Such agents might generalize to an unseen instance (red, filled, triangle) by composing the 'paired' words above instead of disentangling individual attributes. We make two modifications to address these issues. Our results are reported as means and variances estimated from multiple training runs with different random seeds, evaluated with 4-way cross-validation. We also introduce a harder dataset where, instead of withholding random individual instances (e.g., (green, dotted, triangle), ...), we withhold all instances for a set of attribute pairs (e.g., (green, dotted, *), (red, solid, *), ...). We refer to datasets generated in this fashion as novel pair and to the original dataset as novel instance. We report results on both settings for comparison, but find our new setting to be significantly more challenging in practice, requiring a stricter notion of compositionality that is more closely aligned with human intuitions about these attributes.

In iterated learning models of cultural transmission from evolutionary linguistics, competing pressures towards expressivity and compressibility have been shown to induce compositionality over multiple 'generations' of language transfer. The goal-driven nature of our reference game already encourages expressivity: agents must be able to refer to the objects in order to succeed.
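A minimal sketch of one Task & Talk episode under the formalization above; the init_memory/speak/listen interface is a hypothetical stand-in for the actual neural policies, not the authors' code:

def play_episode(Q, A, U, x_Q, x_A, target, rounds=2):
    """One dialog episode; returns the shared reward (no partial credit)."""
    h_Q, h_A = Q.init_memory(), A.init_memory()
    for t in range(rounds):
        q_tok, h_Q = Q.speak(x_Q, h_Q)            # Q-bot's single-token query
        a_tok, h_A = A.speak(x_A, q_tok, h_A)     # A-bot sees object + query
        h_Q = Q.listen(a_tok, h_Q)                # Q-bot updates its memory
    u_hat = U(x_Q, h_Q)                           # predict both attributes
    return 1.0 if u_hat == target else 0.0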
To introduce compressibility pressure, paralleling the literature in evolutionary linguistics, we introduce a population of agents which regularly has members replaced by new agents that lack any understanding of the remaining population's language. As this paradigm lacks explicit teaching steps where new agents are trained to ground existing words, we consider this approach a means of implicit cultural transmission. We consider a population of Q-bots {Q_1, ..., Q_{N_Q}} and a population of A-bots {A_1, ..., A_{N_A}}, with each agent having a different set of parameters. At each iteration during learning, we sample a random Q-bot/A-bot pair to interact and receive updates, i.e. the red line in Alg. 1. As any Q-bot may be made to communicate with any A-bot, there is pressure for the population to adopt a unified language. Likewise, when an agent is reinitialized it will receive positive reward much more quickly when it happens to use language that its conversational partners understand. Furthermore, 'compressible' languages that are easier to learn will result in greater reward for the population in the face of periodic re-initialization of agents. The key steps of Alg. 1 read:

    Policy gradient update w.r.t. both Q-bot and A-bot parameters
    9:  if e mod E = 0 then
    10:    Sample replacement set B under policy π and re-initialize all agents in B
    11: return all Q-bots and A-bots

Introducing multiple agents may in itself add compressibility pressure and improve generalization even without replacement. Agents in a population have to model minor linguistic differences between conversational partners given the same memory capacity. Further, each agent provides another potential language variation that can be mimicked and perpetuated, increasing language diversity early in training. We examine these effects through no-replacement baselines, but find that generational pressure, where some agents know less than others, can also be important for compositionality in our setting.

Replacement. In order to create a notion of 'generations' we replace agents periodically. Let π be some replacement strategy that returns a subset of the population. Every E epochs, we call π and reinitialize the parameters and optimizers for the corresponding agents (blue lines 9-10 in Alg. 1). We investigate three settings of π (see appendix A.2 for more details):
- Uniform Random. Sample an A-bot and Q-bot from uniform random distributions.
- Epsilon Greedy. With probability 1−ε, replace the A-bot and Q-bot with the lowest validation accuracy. We use ε = 0.2 in our experiments.
- Oldest. Replace the oldest A-bot and Q-bot, breaking ties with uniform random sampling.

Experimental Setting. We evaluate on both our modified Task & Talk dataset and the original, as described in Section 2. All results are reported as means and variances computed from a total of 16 trials (four random seeds, each with 4-way cross-validation). We report accuracy based on Q-bot getting both elements of the task correct, corresponding to the more restrictive "Both" setting. Prior work examined a series of increasingly restrictive settings in order to study conditions under which compositionality emerges. The primary variables are whether A-bot has memory (ablated by setting h_t^A = 0) and the vocabulary sizes V_Q and V_A for Q-bot and A-bot respectively. For comparison we also evaluate in these settings, and we additionally introduce Memoryless + Overcomplete (V_Q = V_A = 64, h_t^A = 0) to complete the cross product of settings and examine the role of memory restriction in overcomplete vocabularies.
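A sketch of the Alg. 1 training loop above; train_pair, reinit, and the agent containers are placeholders for the actual policy-gradient update and re-initialization routines:

import random

def train_population(q_bots, a_bots, pi, epochs, E, train_pair, reinit):
    for e in range(1, epochs + 1):
        q = random.choice(q_bots)            # sample a random Q-bot/A-bot pair
        a = random.choice(a_bots)
        train_pair(q, a)                     # policy-gradient update on both
        if e % E == 0:                       # every E epochs...
            for agent in pi(q_bots, a_bots): # ...pi picks the replacement set
                reinit(agent)                # new agent must relearn the language
    return q_bots, a_bots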
The Memoryless + Minimal Vocabulary setting results in the best compositional generalization; however, this is an extreme setting, requiring not only that the minimum number of groundable symbols be known but also that A-bot be unable to remember its previous utterance. While we do report these settings and see quite large performance gains due to cultural transmission, we are mainly interested in the more realistic Overcomplete setting where a large pool of possible tokens is provided and both dialog agents remember the conversation.

Model and Training Details. Our A-bots and Q-bots have the same architecture as in prior work. All agents are trained with E = 25000, a batch size of 1000, and the Adam optimizer (one per bot) with learning rate 0.01. In the Multi Agent setting we use N_A = N_Q = 5. We stop training after 8 generations (199000 epochs Multi Agent; 39000 epochs Single Agent). This differs from the original protocol, which stopped once train accuracy reached 100%. Further, we do not perform negative mining.

[Figure 2: Test set accuracies (with standard deviations) reported on our new harder dataset using models similar to prior work. Our variations on cultural transmission (darker blue bars) outperform the baselines where language does not change over generations.]

Baselines. We consider a set of baseline settings to isolate the effect of our approach.
- Single Agent Populations. We ablate the effect of multi-agent populations by training individual A-bot/Q-bot pairs (i.e. populations with N_A = N_Q = 1). We apply the uniform random (either A-bot or Q-bot at random) and oldest (alternating between A-bot and Q-bot) replacement strategies to these agents; however, the epsilon greedy strategy is not well-defined here. In this setting we decrease E from 25000 to 5000 to keep the average number of gradient updates for each agent constant with respect to the multi-agent experiments.
- No Replacement. We also consider the effect of replacing no agents at all, but still allowing the agents to train for the full 199,000 epochs. Improvement over this baseline shows the gains from our replacement strategy under identical computational budgets.

Results with standard deviations on our harder dataset are reported in Fig. 2. We compared methods and models using dependent paired t-tests and report the resulting p-values in Section A.3. Results on the original Task & Talk dataset are in Section A.1.

Cultural transmission induces compositionality. Our main result is that cultural transmission approaches outperform baselines without cultural transmission. This can be seen by noting that for each model type in Fig. 2, the 3 darker blue bars (Multi Agent Replacement approaches) are largest. After running a dependent paired t-test against all pairs of baselines and cultural transmission approaches, we find a meaningful difference in all cases (p ≤ 0.05). This is strong support for our claim that our version of cultural transmission encourages compositional language, because it causes better generalization to novel compositions of attributes. Next we discuss some additional trends we hope the community will find useful.

Population dynamics without replacement usually lead to some compositionality. The Multi Agent No Replacement policies usually outperform the Single Agent No Replacement policies, though the difference in the results is not very significant except in the Overcomplete and Minimal Vocab settings.
This agrees with recent work from evolutionary linguistics, where multiple agents can lead to compositionality without generational transmission.

Variations in replacement strategy tend not to affect performance. The Multi Agent Uniform Random / Epsilon Greedy / Oldest replacement strategies are not largely or consistently different from one another across model variations. This suggests that while some agent replacement needs to occur, it is not critical whether agents with worse language are replaced or whether there is a pool of similarly typed agents to remember knowledge lost from older generations. The main factor is that new agents learn in the presence of others who already know a language.

Cultural transmission is complementary with other factors that encourage compositionality. As in prior work, we find the Memoryless + Small Vocab model is clearly the best. This agrees with factors noted elsewhere and shows how many different factors can affect the emergence of compositionality.

Removing memory makes only minor differences. Removing memory makes no difference (negative or positive) in Single Agent settings, but it can have a relatively small effect in Multi Agent settings, helping Small Vocab models and hurting Overcomplete models. While our approach is complementary with minimizing vocab size to increase compositionality, it makes memory removal less useful. As the Memoryless + Overcomplete setting has not been reported before, these results suggest that the relationship between inter-round memory and compositionality is not clear.

Overall, these results show that adding cultural transmission to neural dialog agents improves the compositional generalization of the languages learned by those agents in a way complementary to other priors. It thereby shows how to transfer the cultural transmission principle from evolutionary linguistics to deep learning.

Because it is implicit, cultural transmission may not actually be occurring; improvements may be from other sources. How can we measure cultural transmission? We focus on A-bots and take a simple approach: we assume that if two A-bots 'speak the same language' then that language was culturally transmitted. There is a combinatorial explosion of possible languages that could refer to all the objects of interest, so if the words that refer to the same object are the same for two agents, then they were very likely transmitted from the other agents, rather than similar languages emerging from scratch just by chance. This leads to a simple approach: consider pairs of bots and see if they say similar things in the same context. If they do, then their language was likely transmitted.

More formally, consider the distribution of tokens A-bot A_i might use to describe its object x_A when talking to Q-bot Q_k: p_{k,i}(m_t^A | x_A), or p_{k,i} for short. We want to know how similar A_i's language is to that of another A-bot A_j. We start by comparing these two distributions via the KL divergence, then take an average over contexts (objects, Q-bots, and dialog rounds) to get our pairwise agent language similarity metric:

D_ij = E_{x_A, Q_k, t} [ KL( p_{k,i}(m_t^A | x_A) || p_{k,j}(m_t^A | x_A) ) ]

Taking another average, this time over all pairs of bots (and also random seeds and cross-val folds), gives our final measure of language similarity D reported in Fig. 3. D is smaller the more similar language is between bots. Note that even though D_ij is not symmetric (because KL divergence is not), D is symmetric because it averages over both directions of pairs.
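A minimal sketch of the metric D as defined above, assuming the empirical token distributions p_{k,i} have already been estimated (the data layout of P is an illustrative assumption):

import numpy as np

def language_dissimilarity(P):
    """P[k][i]: array of shape (contexts, vocab), A-bot i's token
    distribution in each context (object, round) when talking to Q-bot k.
    Returns the average KL divergence over contexts and ordered bot pairs."""
    eps = 1e-12
    def kl(p, q):  # KL(p || q) per context row
        return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)
    n = len(P[0])                              # number of A-bots
    vals = [kl(P[k][i], P[k][j]).mean()
            for k in range(len(P))
            for i in range(n) for j in range(n) if i != j]
    return float(np.mean(vals))                # averages both pair directions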
We compute D by sampling an empirical distribution over all messages and observations, taking 10 sample dialogues in each possible test state (x_A, x_Q) of the world, using the final populations of agents as in Fig. 2. Note that this metric applies to a group of agents, so we measure it only for the Multi Agent settings, including two new baselines colored red in Fig. 3. The Single Agents Combined baseline trains 4 Single Agent No Replacement models independently, then puts them together and computes D for that group. These agents only speak similar languages by chance, so D is high. The Random Initialization baseline evaluates language similarity using newly initialized models. These agents have an approximately uniform distribution over words at every utterance, so their languages are both very similar and useless. For each model these baselines act as practical (not strict) upper and lower bounds on D, respectively.

[Figure 3: Do bots in a population learn similar languages? On the y-axis (the metric D defined above), lower values indicate similar language. Populations evolved with our method speak similar languages, but independently evolved agents do not. Thus our implicit procedure induces cultural transmission.]

Fig. 3 shows this language dissimilarity metric for all our settings. As we expect, the paired Single Agents are highly dissimilar compared to agents from Multi Agent populations. Further, all the replacement strategies result in increased language similarity, although the degree of this effect seems dependent on the vocabulary setting. This provides some evidence that cultural transmission is occurring in Multi Agent settings and is encouraged by the replacement strategy in our approach. While all Multi Agent settings resulted in language transmission, our replacement strategies result in more compositional languages due to repeated teaching of new generations of agents.

In this section we visualize the language learned by agents at various stages of training to reinforce our previous results and build intuition. Each of the three sub-figures in Fig. 4 summarizes all of the conversations between a particular pair of bots for the (shape, color) task. To see how these summaries work, consider Fig. 4a. That sub-figure is divided into a 4 × 4 grid with 4 elements in each cell, so there is one element for each of the 64 possible objects. For this task, objects in each row of the 4 × 4 grid have the same shape and objects in each column have the same color. To the right of each object are the two tokens A-bot used to respond to Q-bot in the two dialog rounds. Ideally they should indicate the color and shape of the object. Finally, the check-marks or Xs to the right of A-bot's utterances indicate whether Q-bot guessed correctly.

Comparing Fig. 4a to a similar pair of bots from our approach in Fig. 4b, we can see that our approach encourages compositional language to emerge. Furthermore, the similarity between Fig. 4b and Fig. 4c suggests language is indeed transmitted in our approach. From left to right: Fig. 4a summarizes the single pair from a Single Agent No Replacement run (3000 iterations old); Fig. 4b summarizes dialogs between an old Q-bot (about 23000 iterations) and a recently re-initialized A-bot (about 3000 iterations) at the 8th and final generation of a Multi Oldest run; Fig. 4c summarizes dialogs between the same old Q-bot as in Fig. 4b and an old A-bot (13000 iterations) from the same Multi Oldest experiment. Even though the A-bots in Fig. 4a and
Fig. 4b have trained for about the same number of iterations, the A-bot trained in the presence of other bots which already know a functional language has already learned a somewhat compositional language, whereas the Single Agent A-bot has not. Furthermore, by comparing the old A-bot's language (Fig. 4c) with the new one (Fig. 4b), we can see that they are extremely similar; they even lead to the same mistakes. This again suggests that language is transmitted between bots, in agreement with our previous experiments.

Language Evolution Causes Structure. Researchers have spent decades studying how unique properties of human language like compositionality could have emerged. There is general agreement that people acquire language using a combination of innate cognitive capacity and learning from other language speakers (cultural transmission), with the degree of each being widely disputed. Both innate cognitive capacity and specific modern human languages like English co-evolved via biological and cultural evolution, respectively. In particular, explanations of how the cultural evolution of languages could cause structure like compositionality are in abundance. An important piece of the explanation of linguistic structure is the iterated learning model used to motivate our approach. Indeed, it shows that cultural transmission causes structure in both computational (Kirby, 2001) and human experiments. Even though cultural transmission may aid the emergence of compositionality, recent results in evolutionary linguistics and deep learning also emphasize other factors. While existing work in deep learning has focused on biases that encourage compositionality, it has not considered settings where language is permitted to evolve over generations of agents. We have shown such an approach is viable and even complementary with other approaches.

Language Emergence in Deep Learning. Recent work in deep learning has increasingly focused on multi-agent environments where deep agents learn to accomplish goals (possibly cooperative or competitive) by interacting appropriately with the environment and each other. Some of this work has shown that deep agents will develop their own language where none exists initially if driven by a task which requires communication. Most relevant is work which focuses on conditions under which compositional language emerges as deep agents learn to cooperate. Prior studies find that limiting the vocabulary size, so that there aren't too many more words than there are objects to refer to, encourages compositionality, following earlier results in evolutionary linguistics. Follow-up work has continued to investigate the emergence of compositional language among neural agents, mainly focusing on perceptual as opposed to symbolic input and how the structure of the input relates to the tendency for compositional language to emerge. Other work has shown that Multi Agent interaction leads to better emergent translation, but it does not measure compositionality.

Cultural Evolution and Neural Nets. Somewhat recently it was suggested that culturally transmitted ideas may help in escaping from local minima. Experiments in Gülçehre & support this idea by showing that supervision of intermediate representations allows a more complex toy task to be learned. Unlike our work, these experiments use direct supervision provided by the designed environment rather than indirect and implicit supervision provided by other agents.
Two concurrent works examine the role of periodic agent replacement on language emergence, albeit in different environments. In one, replacement is used to encourage languages to be easy to teach, and this in turn causes compositionality; in the other, neural language is transmitted through a bottleneck caused by replacement. The resulting language has increased efficiency and effectiveness, with further results showing that co-evolving the agents themselves with the language amplifies the effect. Both of these works support our central observations.

In this work we investigated cultural transmission in deep neural dialog agents, applying it to language emergence. The evolutionary linguistics community has long used cultural transmission to explain how compositional languages could have emerged. The deep learning community, having recently become interested in language emergence, has not investigated that link until now. Instead of the explicit models of cultural transmission familiar in evolutionary linguistics, we favor an implicit model where language is transmitted from generation to generation only because it helps agents achieve their goals. We show that this does indeed cause cultural transmission and compositionality.

Future work. While our work used an implicit version of cultural transmission, we are interested in investigating the effect of explicit versions of cultural transmission on language structure. In another direction, cultural transmission may also provide an appropriate prior for neural representations of non-language information.

A.1 RESULTS ON SINGLE HELD OUT ATTRIBUTE DATASET OF KOTTUR. In Section 4 we proposed a new, harder compositional dataset different from the original. For comparison, in this section we train and evaluate our models on the original dataset to show that our approach still improves compositionality in that setting, and that our new dataset is indeed harder. These results do not use cross-validation, following the original protocol; they only vary across 4 different random seeds. This dataset is significantly easier than our new dataset, as indicated by the results. Our proposed approach still outperforms models without replacement or multiple agents.

A.2 REPLACEMENT STRATEGIES. Our approach to cultural transmission periodically replaces agents by re-initializing them. The approach section outlines various replacement strategies (policy π) but does not detail their implementation, so we do so here. These strategies depend on a number of possible inputs: • e, the current epoch • E, the period of agent replacement.

A.3 SIGNIFICANCE TESTS. In our experiments we compare models and we compare replacement strategies. We ran dependent paired t-tests across random seeds, cross-val folds, and replacement strategies to compare models, and dependent paired t-tests across random seeds, cross-val folds, and models to compare replacement strategies. The p-values for all of these t-tests are reported here. Replacement strategy comparisons are in Fig. 7 (Single Agent) and Fig. 8 (Multi Agent). Model comparisons are in Fig. 6.
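A minimal sketch of the three replacement policies π described in A.2 and Section 3; val_acc and age are assumed per-agent attributes, not the authors' actual interface:

import random

def uniform_random(q_bots, a_bots):
    # sample one A-bot and one Q-bot uniformly at random
    return [random.choice(q_bots), random.choice(a_bots)]

def epsilon_greedy(q_bots, a_bots, eps=0.2):
    # with probability 1 - eps, replace the lowest-validation-accuracy bot
    def pick(bots):
        if random.random() < eps:
            return random.choice(bots)
        return min(bots, key=lambda b: b.val_acc)
    return [pick(q_bots), pick(a_bots)]

def oldest(q_bots, a_bots):
    # replace the oldest bot, breaking ties uniformly at random
    def pick(bots):
        return max(bots, key=lambda b: (b.age, random.random()))
    return [pick(q_bots), pick(a_bots)]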
We use cultural transmission to encourage compositionality in languages that emerge from interactions between neural agents.
433
scitldr
Based on our observation that there exists a dramatic drop in the singular values of the fully connected layers or of a single feature map of a convolutional layer, and that the dimension of the concatenated feature vector almost equals the summation of the dimensions of the individual feature maps, we propose a singular value decomposition (SVD) based approach to estimate the dimension of the deep manifolds for a typical convolutional neural network, VGG19. We choose three categories from ImageNet, namely Persian Cat, Container Ship and Volcano, and determine the local dimension of the deep manifolds of the deep layers through the tangent space of a target image. Among several augmentation methods, we found that the Gaussian noise method comes closest to the intrinsic dimension: by adding random noise to an image we are moving in an arbitrary direction, and when the rank of the feature matrix of the augmented images stops increasing, we are very close to the local dimension of the manifold. We also estimate the dimension of the deep manifold based on the tangent space for each of the max-pooling layers. Our results show that the dimensions of different categories are close to each other and decline quickly along the convolutional and fully connected layers. Furthermore, we show that the dimensions decline quickly inside the Conv5 layer. Our work provides new insights into the intrinsic structure of deep neural networks and helps unveil the inner organization of the black box of deep neural networks.

To better understand deep neural networks, a recent important trend is to analyze the structure of the high-dimensional feature space. Capitalizing on the manifold hypothesis (BID1; BID12), the distribution of the generated data is assumed to concentrate in regions of low dimensionality. In other words, it is assumed that the activation vectors of deep neural networks lie on different low-dimensional manifolds embedded in a high-dimensional feature space. Note that the rationale behind many manifold learning algorithms based on deep learning and autoencoders is that one learns an explicit or implicit coordinate system for the leading factors of variation. These factors can be thought of as concepts or abstractions that help us understand the rich variability in the data and can explain most of the structure in the unknown data distribution. See BID3 for more information.

Dimension estimation is crucial in determining the number of variables in a linear system, or the number of degrees of freedom of a dynamic system, which may be embedded in the hidden layers of neural networks. Moreover, many algorithms in manifold learning require the intrinsic dimensionality of the data as a crucial parameter. Therefore, the problem of estimating the intrinsic dimensionality of a manifold is of great importance, and it is also a crucial starting point for manifold learning. Unfortunately, the manifolds of interest in AI (especially for deep neural networks) are rugged, with a great number of twists, ups and downs, and strong curvature. Thus there is a fundamental difficulty for manifold learning, as raised in BID0: if the manifolds are not very smooth, one may need a considerable number of training examples to cover each one of these variations, and there is no chance to generalize to unseen variations. Our work is based on an important characterization of the manifold, namely, the set of its tangent hyperplanes.
For a point p on a d-dimensional manifold, the tangent hyperplane is given by a local basis of d vectors that span the local directions of variation allowed on the manifold. As illustrated in Figure 1, these local directions specify how one can change p infinitesimally while staying on the manifold.

[Figure 1: A two-dimensional manifold with a small region where data points concentrate, along with a tangent plane and associated tangent directions, forming a basis that specifies the directions of small moves one can make to stay on the manifold.]

Based on the above analysis, our work focuses on a thorough exploration of the local hyperplane dimension of the activation manifold in deep neural networks. Creating an artificial data cluster concentrated in regions of the local tangent hyperplane, we apply SVD to the data cluster in different layers or feature maps of the network. Through thorough analysis, we reach the following fascinating results:

• There exists a dramatic drop in the singular values of the fully connected layers or of a single feature map of a convolutional layer.
• For convolutional layers, the dimension of the concatenated feature vector almost equals the summation of the dimensions of the individual feature maps.
• The dimensions of different image categories are close to each other, and the dimension declines quickly along the layers.

To our knowledge this is the first thorough exploration of manifold dimension in very deep neural networks. We hope our work sheds light on new understandings and inspires further investigations of the structure of manifolds in deep neural networks.

Despite the great success of deep learning in many applications including computer vision and machine learning, a comprehensive understanding of the essence of deep neural networks is still far from satisfactory. Related works can be classified mainly into three types. The first type focuses on the differences between random networks and trained networks (BID15; BID5; BID13). Others focus on the theoretical understanding of learning (BID20; BID9), while the rest focus on the inner organization or feature representations through visualization (BID11; BID2). Up until now, there have been only a few works exploring the properties of the deep manifolds formed by the activation vectors of deep layers. In this section, we highlight the most related work on manifold learning and dimension determination. Manifold learning has mainly been applied in unsupervised learning procedures that attempt to capture the manifolds (Van der BID18). It associates each of the activation nodes with a tangent plane that spans the directions of variation associated with the difference vectors between the target example and its neighbors. BID19 investigate how to learn a kernel matrix for high-dimensional data that lies on or near a low-dimensional manifold. BID14 exploit a novel approach for capturing the manifold structure (high-order contractive auto-encoders) and show how it builds a topological atlas of charts, with each chart characterized by the principal singular vectors of the Jacobian of a representation mapping. BID7 propose a two-dimensional representation space, a Euclidean coordinate system for Frey faces and MNIST digits, learned by a variational auto-encoder. There are several efficient algorithms for determining the intrinsic dimension of high-dimensional data. Singular Value Decomposition (SVD), also known as Principal Component Analysis (PCA), has been discussed thoroughly in the literature (BID17).
In applications, the choice of algorithm relies on the geometric prior of the given data and the expectation of the outcome. In addition, researchers have proposed several improved manifold-learning algorithms that take more prior knowledge of the dataset into account. For example, BID10 estimate the intrinsic dimensionality of samples from noisy low-dimensional manifolds in high dimensions with multi-scale SVD. BID8 propose a novel method for estimating the intrinsic dimension of a dataset by applying the principle of maximum likelihood to the distances between close neighbors. BID4 introduce a framework for regularized and robust estimation of non-uniform dimensionality and density for high-dimensional noisy data.

To gain a comprehensive understanding of the manifold structure of neural networks, it is natural to treat the activation space as a high-dimensional dataset and then characterize it by determining its dimension. This gives us a new view of the knowledge learnt by the neural network and, hopefully, of the information hidden inside it. However, to the best of our knowledge, there is nearly no related work on determining the intrinsic dimensionality of the deep manifolds embedded in the feature space of neural networks. In the following we give a thorough study of this topic.

In this section we describe our strategy for determining the dimension d of the local tangent hyperplane of the manifold, along with the necessary definitions and conventions used in specifying the dimension of the manifold dataset. It is known that if a set of data {x_i}_{i=1}^n can be regarded as a data point cluster lying close to a noiseless d-dimensional hyperplane, then by applying SVD, the number of non-trivial singular values will equal d, the intrinsic dimension of the data point cluster. In the context of a manifold in a deep neural network, a cluster of activation vectors pointing to a manifold embedded in a feature space R^D can be approximated as concentrated on a d-dimensional tangent hyperplane, whose dimension directly associates with the manifold. However, the challenge in dimension estimation is that noise influences the dataset everywhere, making it hard to obtain the correct result:

1. BID10 point out that when D-dimensional noise is added to the data, we observe x̃_i = x_i + η_i, where η represents noise. The noise perturbs the covariance matrix of the data, which can lead to wrong results.
2. BID3 also mention that some factors of variation largely influence every single piece of the observed data. Thus, factors we do not care about (or simply consider as noise) may lead researchers to wrong conclusions with high probability.

To solve the above problems, we make use of the following observations:

1. By introducing representations that are expressed in terms of different, simpler representations, deep neural networks extract high-level, abstract features from the raw data, which lets them successfully disentangle the factors of variation and discard the ones we do not care about (noise); see BID3 for more information. It turns out that the noise in feature space will be so small that the singular values from the factors we care about are likely to be significantly larger than the remaining singular values generated by noise, so we get the right result.
2. BID6 and many other works have shown that when n, the number of data points, goes to infinity, the behavior of this estimator is fairly well understood.
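A tiny demonstration of the noiseless-hyperplane fact above: points on a d-dimensional hyperplane embedded in R^D yield exactly d non-trivial singular values. The sizes here are arbitrary illustrative choices:

import numpy as np

rng = np.random.default_rng(0)
D, d, n = 50, 7, 1000
basis = rng.normal(size=(D, d))          # spans a d-dim hyperplane in R^D
X = basis @ rng.normal(size=(d, n))      # n noiseless points on the hyperplane
s = np.linalg.svd(X, compute_uv=False)   # singular values, descending
print((s > 1e-8 * s[0]).sum())           # -> 7, the intrinsic dimension d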
Based on the above analysis, we propose the following solution:

1. By using a pre-trained deep neural network, after the feed-forward process the feature vectors have little irrelevant noise remaining and preserve all useful factors of variation, so we can be confident that the feature vectors lie on a noiseless manifold embedded in the high-dimensional feature space.
2. By applying image augmentation methods, we generate a considerable number of similar images (also classified as the same class with high probability by the deep neural network), yielding a sufficiently large cluster of feature vectors that lie close to a local tangent d-dimensional hyperplane of the noiseless manifold.
3. Finally, we apply the standard SVD to this feature vector cluster, which lies close to a noiseless local tangent d-dimensional hyperplane, giving a precise estimate of the local dimension of the manifold.

The following paragraphs give a more formal description of our solution for computing the dimension. Let X_n = {x_i}_{i=1}^n be the set of image data points we generate; x_1 is the original image, classified by the neural network as a specific class with high probability, e.g. P(x_1) > 0.99. Using augmentation methods, we generate n − 1 augmented images and keep all augmented images classified to the same class with high probability, P(x_i) > 0.99 for i = 2, ..., n. Let λ = μ + η be the augmentation information we have introduced to the image; λ can be divided into two components, some irrelevant noise (η) combined with useful factors of variation (μ). Let f be the underlying feature-extraction function of the network. After the feed-forward process, the n feature vectors in R^D at a specific layer are denoted as a D × n matrix A_{D,n}; for simplicity, we write A_{D,n} = [f(x_1), ..., f(x_n)]. P is the local approximate tangent hyperplane of the manifold M. In the real image space we have X̃_n = {x_i + σ_i}_{i=1}^n, but after the feed-forward process into feature space the noise η is reduced to a very small scale. Therefore, the activation vectors are concentrated around a noiseless local tangent hyperplane P of the manifold M.

To estimate the local tangent dimension d = dim P of the manifold M given X̃_n and the corresponding A_{D,n} at a specific layer, we adopt the standard approach of computing the SVD, with the singular values (denoted σ) yielding an index j such that, with high probability, the first j singular values are significant, σ_1, ..., σ_j ≫ σ_{j+1}, ..., σ_D; we then take the reasonable estimate d = j.

Fully connected layers. For fully connected layers, the size of A_{D×n} is (D, n). We apply SVD to A_{D×n} to obtain its singular values σ sorted in descending order. Let σ_i denote the i-th singular value in this ordering. If there exists some j > 0 with σ_j / σ_{j+1} > θ, where θ is a very large number, then we claim that j is an estimate of the tangent hyperplane P's dimension, and hence of the local dimension of the manifold M for the corresponding layer and original image. If no such j exists, then the estimate of the local dimension goes up to the dimension of the whole activation space (see Section 5).

Convolutional layers. We denote by D = H × W × C the activation space dimension of a specific layer, where H is the feature map height, W is the feature map width, and C is the number of channels. For a data cluster of size n and the i-th feature map (of height H and width W) in a convolutional layer, we obtain a corresponding matrix A_{(H×W),i,n} and calculate its dimension.
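A minimal sketch of the "dramatic drop" rule for fully connected layers, and of the end-to-end pipeline it plugs into; augment and extract_features are placeholders for the augmentation methods and a chosen layer's activations of the pre-trained network:

import numpy as np

def estimate_dimension(A, theta=1e5):
    """A: (D, n) matrix of feature vectors; returns the estimated local
    dimension d, i.e. the first j with sigma_j / sigma_{j+1} > theta."""
    s = np.linalg.svd(A, compute_uv=False)       # descending singular values
    for j in range(len(s) - 1):
        if s[j] / max(s[j + 1], 1e-30) > theta:
            return j + 1                         # d = j (1-indexed count)
    return len(s)                                # no drop: spans whole space

def local_manifold_dimension(image, augment, extract_features, n=8000):
    # build the data cluster around one high-probability image
    cluster = [image] + [augment(image) for _ in range(n - 1)]
    A = np.stack([extract_features(x).ravel() for x in cluster], axis=1)
    return estimate_dimension(A)                 # (D, n) -> local dimension d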
We define the estimated dimension by randomly picking k (k ≥ 1) feature maps, calculating their dimensions one by one, and summing the dimensions. Concatenated dimension: concatenate the k picked feature maps and calculate the dimension of the concatenated matrix. Original dimension: when we pick all the feature maps in a layer and calculate the concatenated dimension, this concatenated dimension is called the original dimension. FIG0 illustrates these concepts. Note that fully connected layers have C = 1, so for a fully connected layer the estimated dimension = concatenated dimension = original dimension, and we refer to any of them with the same meaning. When we refer to a layer's estimated dimension (or the estimated dimension of a layer), we mean that we choose all feature maps for the calculation (k = C).

Network. VGG19 was proposed by BID16 and proved to have excellent performance in the ILSVRC competition; its pre-trained models are still widely used in many areas. In this paper, we use a pre-trained VGG19 model for our experiments and give every layer a unique name so that it can be referred to directly. Table 2 in the Appendix gives the name of every layer and its corresponding activation space dimension.

Image augmentation. We choose three categories from the ImageNet dataset: Persian Cat (n02123394), Container Ship (n03095699) and Volcano (n09472597) (see Figure 11 in the Appendix). For each category, we select three typical images with high probability as the original images; because the network firmly believes that they are in the category, their activation vectors can be considered as data points on the same manifold (e.g., the one representing Persian Cat). We use three augmentation methods to generate the similar images that form a data point cluster: 1) Cropping: randomly cut out a strip of a few pixels from every edge; 2) Gaussian noise: add Gaussian noise (mean = 0.0, var = 0.01) to every pixel; 3) Rotation: rotate the image by a random degree within ±10%. Exaggerated output images of these three methods are shown in FIG2.

As these augmentation methods apply only small but varied changes to the original image x_0 (and keep the classification probability high, P(x) > 0.99), the activation vectors A concentrate around the activation vector a_0 of x_0, which can be considered to lie near a local tangent hyperplane P of the manifold M. We tried three Persian cat images; the dimension falls within a small range. For the other categories, we likewise tried three high-probability images for ship and three for volcano. For the same layer, the dimension of volcano is slightly higher than that of cat, and the dimension of cat is slightly higher than that of ship. All three categories show the same trend in dimension through the layers. Therefore, we show the details mainly for a typical Persian cat image, as shown in FIG2 (a).

Estimated dimension for a fully connected layer or a single feature map. We apply SVD to A_{D×n} in the fully connected layers and plot the singular values on a log10 scale (FIG4). For any such layer, we find a dramatic drop at some index j for all three augmentation methods: σ_j / σ_{j+1} > θ = 10^5. So we can claim that for the local tangent hyperplane P on the manifold M, the dimension is d = j with high probability, as long as we use enough samples (see Section 3.1). This "dramatic drop" also appears for a single feature map. The only exception is the fc8 layer, where there is no dramatic drop, implying that the hyperplane P spans the whole activation space:
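A minimal sketch of the estimated and concatenated dimension measures defined above, reusing estimate_dimension from the earlier sketch; the (H, W, C, n) layout of feats is an illustrative assumption:

import numpy as np

def estimated_dimension(feats, picked):
    """Sum of per-feature-map dimensions for the k picked maps."""
    H, W, C, n = feats.shape
    return sum(estimate_dimension(feats[:, :, c, :].reshape(H * W, n))
               for c in picked)

def concatenated_dimension(feats, picked):
    """Dimension of the k picked maps concatenated into one feature vector.
    Picking all C maps gives the layer's original dimension."""
    H, W, C, n = feats.shape
    A = feats[:, :, picked, :].reshape(H * W * len(picked), n)
    return estimate_dimension(A)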
d = 1000 in fc8.

Rule 1. The estimated dimension of the local tangent hyperplane of the manifold for a fully connected layer or a single feature map can be determined by a dramatic drop along the singular values.

Influence of the data size on the estimated dimension. FIG6 shows the estimated dimension versus the augmentation data size. The dimension grows slowly with the data size: although the augmentation data scale influences the estimated dimension, the growth is fairly small, less than 3% as the data size triples. Thus, it is reasonable to use a fairly small dataset to estimate the dimension. More importantly, as shown in FIG7, this rule also generalizes to the estimated dimension of the convolutional layers.

Rule 2. We can determine the estimated dimension of the local tangent hyperplane of the manifold in a layer (fully connected or convolutional) using a fairly small data cluster size, for example 8k.

Estimated dimension and concatenated/original dimension. We randomly pick k ∈ {2, 3, 5, 10} feature maps in layer Conv5_1 and calculate the estimated dimension as well as the concatenated dimension. The results for k ∈ {2, 3, 5} are shown in FIG5; for the results with k = 10, see Table 3 in the Appendix. The estimated dimension is very close to the corresponding concatenated dimension; thus, we can use the estimated dimension to approximate the concatenated dimension. We then pick all feature maps in maxpooling5 and calculate the estimated dimension and the original dimension. FIG8 shows that starting from a data size of 8k, the estimated dimension is close to the original dimension. Thus, we can use a small set of 8000 images to approximate the original dimension via the estimated dimension. When the data cluster size is insufficient, assuming the local tangent hyperplane of the manifold is d-dimensional, the result is strictly restricted by the number of input images n when n < d: the concatenated or original dimension we calculate would be almost equal to n for small n, while the estimated dimension is a summation that can still approximate d.

Rule 3. The original dimension of the local tangent hyperplane can be approximated by the estimated dimension using a fairly small dataset, for example 8000 images.

For each of the three categories, Persian Cat (n02123394), Container Ship (n03095699) and Volcano (n09472597) in ImageNet, we randomly choose three images of high probability and determine the estimated dimensions based on the three rules drawn in Section 5.

Dimensions for Conv5 and fully connected layers. For Conv5 and the fully connected layers, we summarize the averages of the estimated dimensions in TAB0 and FIG9. The estimated dimension gradually declines from Conv5_1 to fc8. For fc6 and fc7, the activations lie on a low-dimensional manifold embedded in the 4096-dimensional space. For fc8, the manifold's dimension is exactly 1000. This makes sense, as fc8 is directly linked to the final classification prediction and is of full rank to achieve higher performance. The dimensions of the three categories are close to each other and decline quickly inside the four convolutional layers and the last maxpooling layer.

Dimensions for maxpooling layers. We illustrate the averages of the estimated dimensions for all maxpooling layers in Figure 10. The dimensions of the three categories coincide with each other and decline quickly for the deeper pooling layers.
Through extensive experiments, we found that there exists a dramatic drop in the singular values of the fully connected layers or of a single feature map of a convolutional layer, and that the dimension of the concatenated feature vector almost equals the summation of the dimensions of the individual feature maps for several randomly picked feature maps. Based on these interesting observations, we developed an efficient and effective SVD-based method to estimate the local dimension of the deep manifolds in the VGG19 neural network. We found that the dimensions are close for different images of the same category, and even for images of different categories, and that the dimension declines quickly along the convolutional and fully connected layers. Our results support the low-dimensional manifold hypothesis for deep networks, and our exploration helps unveil the inner organization of deep networks. Our work also opens the possibility of observing every feature map separately when computing the dimension of convolutional layers, rather than working directly on the whole set of activation feature maps, which is costly or even impossible with ordinary computing power.
We propose an SVD-based method to explore the local dimension of the activation manifold in deep neural networks.
434
scitldr
Large pre-trained Transformers such as BERT have been tremendously effective for many NLP tasks. However, inference in these large-capacity models is prohibitively slow and expensive. Transformers are essentially a stack of self-attention layers which encode each input position using the entire input sequence as its context. However, we find that it may not be necessary to apply this expensive sequence-wide self-attention at all layers. Based on this observation, we propose a decomposition of a pre-trained Transformer that allows the lower layers to process segments of the input independently, enabling parallelism and caching. We show that the information loss due to this decomposition can be recovered in the upper layers with auxiliary supervision during fine-tuning. We evaluate decomposition with pre-trained BERT models on five different paired-input tasks in question answering, sentence similarity, and natural language inference. Results show that decomposition enables faster inference (up to 4x) and significant memory reduction (up to 70%) while retaining most (up to 99%) of the original performance. We will release the code at <anonymized url>.

Inference in large Transformer-based NLP models such as BERT requires prohibitively high levels of compute, making it expensive to support large-volume processing in data centers and almost infeasible to run on resource-constrained mobile devices. These Transformer models create effective representations using self-attention, a mechanism that allows them to effectively account for wide textual contexts. However, applying self-attention over the entire input for all layers is computationally expensive. This raises a natural question: is self-attention over the entire input necessary in all of the layers? Previous studies have shown that lower layers tend to capture syntactic phenomena that mostly depend on local contexts, and that higher layers capture more semantic phenomena relevant to downstream tasks, which depend on longer global contexts. This suggests that considering only local context in the lower layers of a Transformer and full global context in the upper layers can provide speedup at a very small cost in effectiveness.

In this work we focus on paired-input NLP tasks such as reading comprehension, natural language inference, and sentence pair similarity. These tasks provide a natural boundary for the locality of text (e.g., question vs. passage in QA). Because of this natural decomposition into two segments, we can compute representations for lower layers with only the local segment as context and compute representations for upper layers with both segments as context. This decomposition technique has multiple benefits: it allows for parallel processing of each segment, caching of segments that are available offline, and a significant reduction in runtime memory. Moreover, since the architecture remains largely the same, the original pre-trained weights can be reused in the decomposed model. To compensate for the differences in the decomposed setting, we augment the fine-tuning loss on the target task with a distillation loss that minimizes the output-level as well as layer-level divergences. We evaluate the decomposition idea using the BERT model on five different pairwise tasks. The decomposition achieves substantial speedup (2 to 4.3x) and reduction in memory (51.1% to 76.8%) for only a small loss in effectiveness (0.2 to 1.8 points).
Moreover, we find that with decomposition the larger BERT model can even run faster than the original smaller BERT model, while still being more accurate.

Speeding up inference in a model requires reducing the amount of compute involved. There are two broad related directions of prior work: (i) Compression techniques can be used to reduce model size through low-rank approximation and model weight pruning, which have been shown to help speed up inference in CNN- and RNN-based models. One line of work explores pruning the attention heads to gain inference speedup. This is an orthogonal approach that can be combined with our decomposition idea. However, for the paired-input tasks we consider, pruning heads only provides limited speedup. More recent work proposes approximating the quadratic attention computation with a tensor-decomposition-based multi-linear attention model. However, it is not clear how this multi-linear approximation can be applied to pre-trained Transformers like BERT. (ii) Distillation techniques can be used to train smaller student networks to speed up inference. It has been shown that BERT can be used to guide the design of smaller models (such as a single-layer BiLSTM) for multiple tasks. But for the tasks we study, such very small models suffer a significant performance drop; for instance, there is a 13% accuracy degradation on the MNLI task. Another closely related recent work is DistilBERT, which trains a smaller BERT model (half the size of BERT-base) that runs 1.5 times faster than the original BERT-base. However, the distilled model incurs a significant drop in accuracy. More importantly, these distilled models usually undergo expensive pre-training on language modeling tasks before they can be fine-tuned for the downstream tasks. In this work, we ask whether we can speed up the inference of Transformer models without compressing or removing model parameters. Part of the massive success of pre-trained Transformer models for many NLP tasks is due to their large parameter capacity, which enables complex language representations. The decomposition we propose makes minimal changes, retaining the overall capacity and structure of the original model, but allows for faster inference by enabling parallel processing and caching of segments.

Transformers process the entire input sequence through multiple layers of self-attention, each of which involves an expensive transformation that is quadratic in the number of tokens in the input sequence. In the case of paired-input tasks with query and candidate texts (e.g. question and passage in QA, premise and hypothesis in NLI), the Transformers compute attention over a concatenation of both texts. This results in highly effective representations of the input pair, but it comes at a high cost in terms of time complexity. To reduce this complexity, one natural question to ask is whether we can decompose the Transformer function over segments of the input, trading some representational power for efficiency. Consider the question-answering use case shown in Figure 1, where the standard input-wide self-attention implicitly models the question self-attention, the passage self-attention, and the two question-passage cross-attentions. The cross-attentions allow the question and passage to influence each other and produce effective representations for the target task. However, if we can process the two segments separately, we can improve efficiency in multiple ways.
We get a basic reduction in compute because we no longer perform cross-attention in the lower layers. More importantly, we can get important gains through parallelism and caching. The question and passage self-attentions can be computed in parallel. Since the passage representation is no longer dependent on the question, it can be computed offline and cached. The trade-off, however, is that we lose some representation effectiveness because we no longer use information from the other segment. We argue that this is a good trade-off from two viewpoints: (i) First, we can expect to achieve a good trade-off in models with multiple layers, where the cross-attention is less important in lower layers when compared to the upper layers. Figure 2 supports this argument using the similarity of the representations of BERT layers for a single passage when computed using five different questions. The variance is smaller for lower layers and increases for higher layers. This is also in agreement with results on probing tasks, which suggest that lower layers tend to model mostly local phenomena (e.g., POS, syntactic categories), while higher layers tend to model more semantic phenomena that are task dependent (e.g., entity co-reference), relying on wider contexts. We further posit that in these multi-layer models, information loss in the lower layers can potentially be compensated for by the higher layers. (ii) The approach requires minimal change to the overall Transformer architecture, which means we can reuse the pre-trained weights and expect that fine-tuning on the target task by itself will be sufficient to obtain high performance.

The Transformer encoder has n layers (denoted L_i for layer i), which transform the input sequentially. Let the application of a stack of layers from layer i to layer j be denoted L_{i:j}. For the details of the Transformer layer, we refer the reader to the original Transformer paper. Full Transformer: the output representations of the full Transformer, A^n and B^n, are computed jointly over the concatenated segments: [A^n; B^n] = L_{1:n}([T_a; T_b]). Decomposed Transformer: Figure 3 shows a schematic of our model. We decompose the computation of the lower layers (up to layer k, where k is a hyper-parameter) by simply removing the cross-interactions between the T_a and T_b representations: A^k = L_{1:k}(T_a) and B^k = L_{1:k}(T_b) are computed independently, and the upper layers operate on their concatenation, [A^n; B^n] = L_{k+1:n}([A^k; B^k]).

The decomposed Transformer can be used in the same way as the original Transformer. Since the decomposed Transformer retains much of the original structure, we can initialize this model with the pre-trained weights of the original Transformer and fine-tune directly on downstream tasks. However, the decomposed Transformer loses some information in the representations of the lower layers. The upper layers can learn to compensate for this during fine-tuning. However, we can go further and use the original model's behavior as an additional source of supervision. Towards this end, we first initialize the parameters of the decomposed Transformer with the parameters of a pre-trained full Transformer, and fine-tune it on the downstream tasks. To guide the learning of the decomposed Transformer further, we add auxiliary losses that make the predictions and the upper-layer representations of the decomposed Transformer closer to the predictions and corresponding layer representations of the full Transformer during fine-tuning.
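As a concrete illustration of the decomposed forward pass described above, here is a minimal sketch. This is not the authors' implementation (which is in TensorFlow); it assumes simplified encoder layers that map a hidden-state tensor to a hidden-state tensor, and it omits attention masks, embeddings and the caching logic.

```python
import torch

def decomposed_encode(layers, text_a, text_b, k):
    """Lower layers (1..k) run on each segment independently;
    upper layers (k+1..n) run on the concatenated sequence.

    `layers` is a list of n encoder layers; `text_a`, `text_b` are
    embedded segments of shape (batch, seq_len, hidden)."""
    a, b = text_a, text_b
    for layer in layers[:k]:       # no cross-attention in lower layers
        a = layer(a)               # e.g. question self-attention
        b = layer(b)               # e.g. passage self-attention (cacheable)
    h = torch.cat([a, b], dim=1)   # join the two segments
    for layer in layers[k:]:       # full self-attention in upper layers
        h = layer(h)
    return h
```

Because `b` never depends on `a` in the lower layers, the first loop over `b` can run offline and its output be stored, which is exactly what enables the caching gains discussed above.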
Knowledge Distillation Loss: We use this loss to bring the prediction distribution of the decomposed Transformer closer to that of the full Transformer during fine-tuning. For this, we minimize the Kullback-Leibler divergence between the decomposed Transformer's prediction distribution P_A and the full Transformer's prediction distribution P_B. Layerwise Representation Similarity Loss: We use this loss to bring the layer representations of the decomposed Transformer closer to those of the full Transformer during fine-tuning. For this, we minimize the euclidean distance between token representations of the upper layers of the decomposed Transformer and the full Transformer. Let v_i^j be the token representation from the i-th layer (of n layers) and j-th token (of m tokens) of the full Transformer. Likewise, let u_i^j be the token representation from the i-th layer and j-th token of the decomposed Transformer. Finally, let the decomposed Transformer be decomposed up to layer k. We compute the Layerwise Representation Similarity loss for fine-tuning the decomposed Transformer as the sum of squared euclidean distances over the upper layers and tokens: sum_{i=k+1..n} sum_{j=1..m} ||v_i^j - u_i^j||^2. We add the Knowledge Distillation Loss and Layerwise Representation Similarity Loss along with the Task-Specific Supervision Loss (L_ts) and learn their relative importance via hyper-parameter tuning. We use Bayesian Optimization (Močkus, 1975) to tune γ, α and β instead of simple trial-and-error or grid/random search. The Bayesian process is designed to minimize the number of steps required to find a combination of hyper-parameters that is close to the optimal one.

We use the pre-trained uncased BERT base and large models on five different paired-input problems covering 3 QA tasks, 1 natural language inference task and 1 sentence-level semantic similarity task: SQuAD v1.1 (Stanford Question Answering Dataset) is an extractive question answering dataset containing >100,000 question-answer pairs generated by crowd workers on Wikipedia articles. RACE is a reading comprehension dataset collected from English exams designed to evaluate the reading and reasoning ability of middle and high school Chinese students; it has over 28,000 passages and 100,000+ questions. BoolQ consists of 15942 yes/no questions that are naturally occurring in unprompted and unconstrained settings. MNLI (Multi-Genre Natural Language Inference) is a crowd-sourced corpus of 433k sentence pairs annotated with textual entailment information. QQP (Quora Question Pairs) consists of over 400,000 potential duplicate question pairs from Quora. For all 5 tasks, we use the standard splits provided with the datasets, but in addition divide the original training data further to obtain a 10% split for tuning hyper-parameters (tune split), and use the original development split for reporting efficiency (FLOPs, memory usage) and effectiveness metrics (accuracy or F1, depending on the task). We implement all models in TensorFlow 1.14 based on the original BERT codebase. We perform all experiments on one TPU v3-8 node (8 cores, 128GB memory) with bfloat16 format enabled. We measure the FLOPs and memory consumption through the TensorFlow Profiler. For decomposed Transformer models, we tune the hyper-parameters weighting the different losses using a Bayesian optimization library with 50 iterations on the tune split (10% of the original training sets) and report the performance numbers on the original dev sets. The search range is [0.1, 2.0] for the 3 hyper-parameters. For Decomposable BERT, we compute the representation for one of the input segments offline and cache it: for QA we cache the passages, for natural language inference we cache the premise, and for question similarity we cache the first question.
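Before turning to results, the combined fine-tuning objective described above can be sketched as follows. This is a hedged PyTorch-style illustration, not the paper's TensorFlow code; in particular, the pairing of α, β, γ with the individual loss terms is our assumption, since the text does not spell it out.

```python
import torch.nn.functional as F

def fine_tuning_loss(task_loss, student_logits, teacher_logits,
                     student_states, teacher_states, k,
                     alpha=1.0, beta=1.0, gamma=1.0):
    """`student_*` comes from the decomposed model, `teacher_*` from the
    full model; `*_states` are lists of per-layer token representations
    (layer 1 at index 0). alpha/beta/gamma are the tuned weights."""
    # Knowledge distillation: KL between prediction distributions.
    kd = F.kl_div(F.log_softmax(student_logits, dim=-1),
                  F.softmax(teacher_logits, dim=-1),
                  reduction="batchmean")
    # Layerwise representation similarity on the upper (joint) layers.
    lrs = sum(F.mse_loss(s, t.detach(), reduction="sum")
              for s, t in zip(student_states[k:], teacher_states[k:]))
    return gamma * task_loss + alpha * kd + beta * lrs
```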
Table 1 shows the main results, comparing performance, inference speed and memory requirements of BERT-base and Decomposed BERT-base when using nine lower layers and three upper layers (see Subsection 4.4 for the impact of the choice of upper/lower splits). We observe a substantial speedup and significant memory reduction on all the datasets, while retaining most of the original model's effectiveness (as much as 98.4% on SQuAD and 99.8% on QQP). Efficiency improvements increase with the size of the text segment that can be cached.

Small Distilled or Large Decomposed? Decomposing the large model turns out to be more effective than using the smaller base model (+2.3 points). This shows that with decomposition, a large Transformer can run faster than a smaller one which is half its size, while also being more accurate. We note that distilling a larger model into a smaller one can yield better accuracy than training a smaller model from scratch. As far as we know, there are two related but not fully comparable results: one distills BERT to a small LSTM-based model, achieving 15x speedup but at a significant drop in accuracy of more than 13 points on MNLI; the other distills BERT to a smaller six-layer Transformer, which can provide 1.6x speedup but gives a >2 point accuracy drop on MNLI and a >3 point F1 drop on SQuAD. A fair comparison requires more careful experimentation exploring different distillation sizes, which requires repeating pre-training or data augmentation, an expensive proposition.

Device Results: To evaluate the impact on different devices, we deployed the models on three different machines (a GPU, a CPU, and a mobile phone). Table 3 shows the average latency (in seconds) in answering a question for BERT-base vs Decomp-BERT-base, measured in batch mode on a subset of the SQuAD dataset; on the GPU and CPU we use a batch size of 32, and on the phone a batch size of 1. On all devices, we get a more than threefold speedup.

Table 4 shows an ablation analysis on SQuAD for Decomp-BERT-base and Decomp-BERT-large, measuring the contribution of the auxiliary losses. The drop in effectiveness when not using the Layerwise Representation Similarity (LRS) and Knowledge Distillation (KD) losses shows the utility of auxiliary supervision. Figures 4a and 4b show how the effectiveness and inference speed of Decomposed-BERT change as we change the separation layer. The plots show that the inference speedup roughly scales quadratically with respect to the separation layer number. The drop in effectiveness, on the other hand, is negligible when separating at lower layers (until layer 3 for the base model and until layer 13 for the large model) and increases slowly after that, with a dramatic increase in the last layers closest to the output. The separation layer choice thus allows trading effectiveness for inference speed.

The main difference between the original BERT and the decomposed BERT is the absence of cross-attention in the lower layers. We analyze the differences between the representations of the two models across all layers. To this end, we randomly select 100 passages from the SQuAD dev set, and for each passage randomly select 5 different questions that already exist in the dataset and are associated with it.
For each passage, we encode all 5 question-passage pair sequences using both the fine-tuned original BERT-base model and the decomposed model, and compute the distance between the vector representations at each layer. Figure 5 shows the averaged distances for both the question and the passage at different layers. Figure 5a indicates that the lower-layer representations of the passage for both models remain similar, but the upper-layer representations differ significantly, supporting the argument that lower layers tend to capture local context rather than global context. In addition, the figure also shows that using the auxiliary supervision of the upper layers has the desired effect of forcing the decomposed BERT model to produce representations that are closer to those of the original model. Figure 5b shows the distance between question representations of the original BERT and the decomposed BERT, which also supports the findings from the passage representations. One minor difference is the smaller gap between using the upper-layer supervision for the decomposed model and not using it. We attribute this to the fact that question tokens are fewer than passage tokens and thus show less variation in distance.

The decomposed model enables caching of text representations that can be computed offline. While a full-scale analysis of the detailed trade-offs in storage versus latency is beyond the scope of this paper, we present a set of basic calculations to illustrate that the storage cost of caching can be substantially smaller than the inference cost. Assuming a use case of evaluating one million question-passage pairs daily, we first compute the storage requirements of the representations of these passages. With the BERT-base representations we estimate this to be 226KB per passage and 226GB in total for 1 million passages. The cost of storing this data, plus the added compute cost of reading these passages, amounts at current vendor rates to a total of $61.70 per month. To estimate the inference cost, we use the compute times obtained from our measurements and current vendor rates for GPU workloads, which amounts to $148.50 to support the 1 million question-passage pair workload. The substantial reduction in cost arises because storage is many orders of magnitude cheaper than GPU compute. Details of these calculations are listed in the Appendix.

More recently, two pre-trained Transformers, XLNet and RoBERTa, have outperformed BERT models on many NLP tasks. Since RoBERTa has the same architecture as BERT, it is reasonable to expect the decomposition idea to carry over. Although XLNet brings segment-level recurrence into the Transformer self-attention layers, making it different from BERT, the decomposition technique is likely still applicable because it is agnostic to the layer internals. We leave demonstrating the effectiveness of decomposing these two models to future work. Transformers have improved the effectiveness of NLP tools through their ability to incorporate large contexts effectively in multiple layers. This, however, imposes a significant complexity cost. In this work, we showed that modeling such large contexts may not always be necessary and leveraged this insight to build a decomposition of the Transformer model that provides substantial improvements in inference speed and memory consumption while retaining most of the original model's accuracy.
This decomposition model provides a simple yet strong starting point for efficient models as NLP moves towards increasingly larger models handling wider contexts.
Inference in large Transformers is expensive due to the self-attention in multiple layers. We show that a simple decomposition technique can yield a faster, low memory-footprint model that is just as accurate as the original model.
435
scitldr
Exploration while learning representations is one of the main challenges Deep Reinforcement Learning (DRL) faces today. As the learned representation is dependent on the observed data, the exploration strategy has a crucial role. The popular DQN algorithm has significantly improved the capability of Reinforcement Learning (RL) algorithms to learn state representations from raw data, yet it uses a naive exploration strategy which is statistically inefficient. The Randomized Least Squares Value Iteration (RLSVI) algorithm, on the other hand, explores and generalizes efficiently via linearly parameterized value functions. However, it is based on hand-designed state representations that require prior engineering work for every environment. In this paper, we propose a Deep Learning adaptation of RLSVI. Rather than using a hand-designed state representation, we use a state representation that is learned directly from the data by a DQN agent. As the representation is being optimized during the learning process, a key component of the suggested method is a likelihood matching mechanism, which adapts to the changing representation. We demonstrate the importance of the various properties of our algorithm on a toy problem and show that our method outperforms DQN on five Atari benchmarks, reaching results competitive with the Rainbow algorithm.

In Reinforcement Learning (RL), an agent seeks to maximize the cumulative rewards obtained from interactions with an unknown environment. Since the agent can learn only from its interactions with the environment, it faces the exploration-exploitation dilemma: should it take actions that will maximize the rewards based on its current knowledge, or instead take actions to potentially improve its knowledge in the hope of achieving better future performance? Thus, to find the optimal policy the agent needs to use an appropriate exploration strategy. Classic RL algorithms were designed for problems in the tabular setting, where a table containing a value for each state-action pair can be stored in the computer's memory. For more general settings, where generalization is required, a common practice is to use a hand-designed state (or state-action) representation, upon which a function approximation can be learned to represent the value of each state and action. RL algorithms based on linear function approximation have demonstrated stability and data efficiency, and enjoy convergence guarantees under mild assumptions. They require that the desired learned function, e.g. the Q-function, be a linear combination of the state representation. This is, of course, a hard constraint, as the representation is hand-designed and the designer often does not know what the optimal value-function will look like. Furthermore, a hand-designed representation is environment-specific and requires re-designing for every new environment.

The DQN algorithm has changed RL. Using Deep Neural Networks (DNNs) as function approximators, the DQN algorithm enabled the learning of policies directly from raw high-dimensional data and led to unprecedented achievements over a wide variety of domains. Over the years, many improvements to DQN were presented, suggesting more fitting network architectures, reducing overestimation, or improving its data efficiency. Despite its great success, DQN uses the overly simple ε-greedy strategy for exploration. This strategy is one of the simplest exploration strategies that currently exist.
The agent takes a random action with probability ε and takes the optimal action according to its current belief with probability 1 − ε. This strategy is commonly used despite its simplicity and proven inefficiency. The main shortcoming of ε-greedy and similar strategies derives from the fact that they do not use observed data to improve exploration. To explore, the agent takes a completely random action, regardless of the experience it has obtained. Thompson Sampling (TS) is one of the oldest heuristics to address the exploration/exploitation trade-off in sequential decision-making problems. Its variations were proposed in RL and in various bandit settings. For Multi-Armed Bandit (MAB) problems, TS is very effective both in theory and in practice. Intuitively, TS randomly takes actions according to the probability it believes to be optimal. In practice, a prior distribution is assumed over the model's parameters p(w), and a posterior distribution p(w|D) is computed using Bayes' theorem, where D is the observed data. TS acts by sampling models from the posterior distribution and playing the best action according to these samples.

Randomized Least Squares Value Iteration (RLSVI) is an RL algorithm which uses linear function approximation and is inspired by Thompson Sampling. It explores by sampling plausible Q-functions from uncertainty sets and selecting the action that optimizes the sampled models. This algorithm was proven to be efficient in tabular settings, with a bound on the expected regret that matches the worst-case lower bound up to logarithmic factors. More importantly, it demonstrates efficiency even when generalization is required. Alas, as it assumes a linearly parametrized value function over a hand-designed state representation, the success of this algorithm crucially depends on the quality of the given state representation.

In this paper, we present a new DRL algorithm that combines the exploration mechanism of RLSVI with the representation learning mechanism of DQN; we call it the Deep Randomized Least Squares Value Iteration (DRLSVI) algorithm. We use standard DQN to learn a state representation and explore by using the last layer's activations of the DQN as the state representation for RLSVI. To compensate for the constantly changing representation and the finite memory of DQN, we use a likelihood matching mechanism, which allows the transfer of information held by an old representation regarding past experience. We evaluate our method on a toy problem, the Augmented Chain environment, for a qualitative evaluation on a small MDP with a known optimal value function. Then, we compare our algorithm to the DQN and Rainbow algorithms on several Atari benchmarks. We show that it outperforms DQN both in learning speed and in performance.

Thompson Sampling in Multi-Armed Bandit problems: Thompson Sampling (TS) is one of the oldest heuristics to address the exploration/exploitation trade-off in sequential decision-making problems. An influential empirical study sparked much of the interest in Thompson Sampling in recent years, rewriting the TS algorithm for the Bernoulli bandit and showing impressive empirical results on synthetic and real data sets that demonstrate the effectiveness of the TS algorithm. These results illustrate why TS might be a better alternative for balancing exploration and exploitation in sequential decision-making problems than other popular alternatives like the Upper Confidence Bound algorithm.
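Since the text walks through the Beta-Bernoulli case, a minimal sketch of Thompson Sampling for a Bernoulli bandit may be helpful; `pull` is an assumed environment callback returning a 0/1 reward.

```python
import numpy as np

def thompson_sampling_bernoulli(pull, n_arms, horizon):
    """Keep a Beta posterior per arm, sample a plausible mean for each
    arm, and play the arm whose sample is largest."""
    successes = np.ones(n_arms)   # Beta(1, 1) uniform prior
    failures = np.ones(n_arms)
    for _ in range(horizon):
        theta = np.random.beta(successes, failures)  # posterior samples
        arm = int(np.argmax(theta))                  # act greedily on the sample
        reward = pull(arm)                           # observe 0/1 reward
        successes[arm] += reward
        failures[arm] += 1 - reward
```

The same sample-then-act-greedily pattern is what RLSVI lifts from bandits to value functions.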
Subsequent work suggested a Thompson Sampling algorithm for the linear contextual bandit problem and supplied a high-probability regret bound for it, using Bayesian Linear Regression (BLR) with a Gaussian likelihood and a Gaussian prior to design the Thompson Sampling algorithm. A related line of work suggested performing BLR on top of the representation of the last layer of a neural network. The predicted value v_i for each action a_i is given by v_i = β_i^T z_x, where z_x is the output of the last hidden layer of the network for context x. While linear methods directly try to regress the values v on x, this approach independently trains a DNN to learn a representation z, and then uses BLR to regress v on z, obtaining uncertainty estimates on the β's and making decisions accordingly via Thompson Sampling. Moreover, the network is only used to find a good representation z. Since training the network and updating the BLR can be done independently, the network is trained for a fixed number of iterations; then a forward pass is performed on all the training data to obtain the new z_x, which is fed to the BLR. This procedure of evaluating the new representation on all the observed data is very costly; moreover, it requires infinite memory, which obviously does not scale. Follow-up work suggested matching the likelihood of the reward under the old and new representations to avoid catastrophic forgetting when using such an algorithm with finite memory.

In the Reinforcement Learning setting, a method named "Posterior Sampling for Reinforcement Learning" (PSRL) was suggested as an application of Thompson Sampling to model-based Reinforcement Learning. PSRL estimates the posterior distribution over MDPs. Each episode, the algorithm samples an MDP from it and finds the optimal policy for this sampled MDP by dynamic programming. Recent work has provided theoretical analyses of PSRL that guarantee strong expected performance over a wide range of environments. The main problem with PSRL, as with all model-based approaches, is that it can only be applied to relatively small environments. The Randomized Least Squares Value Iteration (RLSVI) algorithm is an application of Thompson Sampling to model-free Reinforcement Learning. It explores by sampling plausible Q-functions from uncertainty sets and selecting the action that optimizes the sampled models.

Thompson Sampling in DRL: Various approaches have been suggested to extend the idea behind RLSVI to DRL. Bootstrapped DQN uses an ensemble of Q-networks, each trained with slightly different data samples. To explore, Bootstrapped DQN randomly samples one of the networks and acts greedily with respect to it. This idea was later extended by supplying each member of the ensemble with a different prior. Other works investigate a similar idea and propose to adaptively perturb the parameter space, which can also be thought of as tracking an approximate posterior over the network's parameters. Another approach proposed TS in combination with the uncertainty Bellman equation, which connects the uncertainty at any time step to the expected uncertainties at subsequent time steps. Recently, and most similar to our work, others experimented with a Deep Learning extension to RLSVI: they changed the network architecture to exclude the last-layer weights, optimized the hyper-parameters and used double-DQN. In contrast, we don't change anything in the DQN agent. We use the representation learned by DQN to perform RLSVI; the network structure, loss and hyper-parameters remain the same.
Additionally, unlike our method, they don't compensate for the changing representation and solve the BLR problem with the same arbitrary prior every time.

We consider the standard RL setting, in which an environment with discrete time steps is modeled by a Markov Decision Process (MDP). An MDP is a tuple <S, A, P, R, γ>, where S is a state space, A a finite action space, P: S × A → Δ(S) a transition kernel, and R: S × A → R a reward function. At each step the agent receives an observation s_t ∈ S which represents the current physical state of the system, takes an action a_t ∈ A which is applied to the environment, receives a scalar reward r_t = r(s_t, a_t), and observes a new state s_{t+1} which the environment transitions to. As mentioned above, the agent seeks an optimal policy π*: S → Δ(A), mapping an environment state to probabilities over the agent's executable actions. γ ∈ [0, 1] is the discount factor, a scalar representing the trade-off between immediate and delayed reward. A brief survey of the DQN algorithm can be found in Appendix 1.

The Randomized Least Squares Value Iteration (RLSVI) algorithm is a TS-inspired exploration strategy for model-free Reinforcement Learning. It combines TS-like exploration and linear function approximation, where the main novelty is in the manner in which it explores: sampling value-functions rather than sampling actions. The Q-function is assumed to be of the form Q(s, a) = φ(s, a)^T w, where φ(s, a) is a hand-designed state-action representation. RLSVI operates similarly to other linear function approximation algorithms and minimizes the Bellman error by solving a regression problem, namely Bayesian Linear Regression. BLR obtains a posterior distribution over value-functions instead of point estimates. To explore, RLSVI samples plausible value functions from the posterior distribution and takes the greedy action according to the sampled value-function. In the episodic setting where the representation is tabular, i.e., no generalization is needed, RLSVI guarantees near-optimal expected episodic regret. Finally, the main benefit of this algorithm is that it displays impressive results even when generalization is required, despite the lack of theoretical guarantees. A pseudo-code can be found in Appendix 1.

In this paper, we propose to use RLSVI as the exploration mechanism for DQN. RLSVI's capabilities are enhanced by using a state representation that is learned directly from the data by a neural network rather than a hand-designed one. As the neural network gradually improves its representation of the states, a likelihood matching mechanism is applied to transfer information from old to new representations. A DQN agent is trained in the standard fashion, i.e., with the same architecture, hyper-parameters and loss function as the original DQN, with two exceptions: (1) the size of the last hidden layer is reduced to d = 64; (2) the Experience Replay buffer is divided evenly between actions and transitions are stored in a round-robin fashion, i.e., whenever the buffer is full, a new transition <s_t, a_t, r_t, s_{t+1}> is placed instead of the oldest transition with the same action a_t. Exploration is performed using RLSVI on top of the last hidden layer of the target network. Given a state s_t, the activations of the last hidden layer of the target network applied to this state are denoted φ(s_t) = LastLayerActivations(Q_{θ_target}(s_t)).
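A minimal sketch of how φ(s) can be read off a PyTorch network with a forward hook is given below; the module handles are assumptions, since the paper does not publish its code.

```python
import torch

def make_feature_extractor(q_target, penultimate_layer):
    """Register a forward hook on the last hidden layer of the target
    network and read its activations after a forward pass.
    `penultimate_layer` is the module preceding the output head
    (assumed here to produce the d = 64 vector)."""
    cache = {}

    def hook(module, inputs, output):
        cache["phi"] = output.detach()  # store phi(s) on every forward

    penultimate_layer.register_forward_hook(hook)

    def phi(state):
        with torch.no_grad():
            q_target(state)  # forward pass fills the cache
        return cache["phi"]

    return phi
```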
Several changes to the original RLSVI algorithm were made. First, rather than solving a different regression problem for every time step, a different regression problem is solved for every action. As the last hidden layer's activations serve as the state representation, the representation is time-homogeneous and shared among actions. The regression targets y are DQN's targets, which use the target network's predictions. Another change is that a slightly different formulation of Bayesian Linear Regression than in RLSVI is used. As in RLSVI, a Gaussian form is assumed for the likelihood, y ∼ N(φ(s)^T w, σ²); however, the noise parameter σ² is itself formulated as a random variable, distributed according to the Inverse-Gamma distribution. The prior for each regression problem is therefore a Normal-Inverse-Gamma distribution, p(w, σ²) = N(w | µ₀, σ²Σ₀) · IG(σ² | a₀, b₀), for which it is known that the posterior distribution can be calculated analytically. Formulating σ² as a random variable allows adaptive exploration, where the adaptation is derived directly from the observed data. Lastly, while RLSVI's choice of the prior's parameters is somewhat arbitrary, in our algorithm the prior has a central role, which we discuss further on.

Since RLSVI requires a fixed representation and the target network's weights are fixed, we use the last-layer activations of the target network, denoted φ(s), as the state representation. Every T_target training time steps, the target network is updated with the weights of the Q-network. During these T_target time steps the Q-network changes due to the optimization performed by the DQN algorithm. Since the representation changes, the posterior distribution that was approximated in the old representation cannot be used; a posterior distribution based on the new representation needs to be approximated. Therefore, whenever the target network changes, new Bayesian linear regression problems are solved using N_BLR samples from the ER buffer. Since the ER buffer is finite, some experience tuples that were used to approximate the posterior in the old representation are no longer available. Ignoring this lost experience can and will lead to degradation in performance caused by 'catastrophic forgetting'. To compensate for the changing representation and the loss of old experience, we follow prior work and match the likelihood of the Q-function under the old and new representations. This approach assumes that the important information from old experiences is encoded in the state representation. Since the likelihood is Gaussian, to compensate for the changing representation we find moments that match the likelihood of the Q-function in the new representation to the old one, and use them as our Gaussian prior belief.

Expectation prior: As DQN is trained to predict the Q-function, given the new last-layer activations ψ, a good prior for µ₀ in the new representation is the last-layer weights of the DQN. Covariance prior: We use N_SDP samples from the experience replay buffer and evaluate both the old and new representations, {φ(s_i), ψ(s_i)}. Our goal is to find a solution Σ₀^ψ that matches the covariance of the likelihood in the new representation to the old one, i.e., ψ(s_i)^T Σ₀^ψ ψ(s_i) should equal the corresponding old-representation term φ(s_i)^T Σ^φ φ(s_i) for every sample. Using the cyclic property of the trace, this is equivalent to finding Σ₀^ψ such that Trace(ψ(s_i)ψ(s_i)^T Σ₀^ψ) matches these targets. Adding the constraint that Σ₀^ψ must be positive semi-definite, as it is a covariance matrix, we end up with a Semi-Definite Program (SDP). In practice, we solve this SDP using CVXPY.
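A hedged sketch of this covariance-matching SDP in CVXPY, under the formulation reconstructed above (the authors' exact objective may differ, e.g. in how residuals are aggregated):

```python
import cvxpy as cp
import numpy as np

def covariance_prior_sdp(phi, psi, sigma_old):
    """Find a PSD matrix Sigma such that psi_i^T Sigma psi_i matches
    phi_i^T Sigma_old phi_i on the sampled states.

    phi, psi: (N_SDP, d) matrices of old/new representations.
    sigma_old: (d, d) old covariance."""
    n, d = psi.shape
    # Old-representation covariance terms, one per sampled state.
    targets = np.einsum("ni,ij,nj->n", phi, sigma_old, phi)
    sigma = cp.Variable((d, d), PSD=True)       # PSD constraint built in
    residuals = cp.hstack([cp.quad_form(psi[i], sigma) - targets[i]
                           for i in range(n)])
    problem = cp.Problem(cp.Minimize(cp.sum_squares(residuals)))
    problem.solve()
    return sigma.value
```

With d = 64 and N_SDP = 600 as in the paper, this is a small SDP, consistent with the reported 10-50 second solve times.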
Approximate Sampling: To perform Thompson Sampling one needs to sample the posterior distribution before every decision. This, regrettably, is computationally expensive and would slow down training significantly. To speed up learning, we sample j weights per action, {w_{i,1}, ..., w_{i,j}}, every T_sample time steps. Then, at every step we sample an index i_α ∈ {1, ..., j} for every action, which is computationally cheap, and act greedily accordingly: a_t = argmax_α w_{α,i_α}^T φ(s_t). Solving the SDP: Another bottleneck our algorithm faces is solving the SDP. We refer the reader to the literature for an excellent survey on the complexity of solving SDPs. As the running time of an SDP solver mainly depends on the dimension of the representation d, the number of samples used N_SDP, and the desired accuracy, we chose the last hidden layer size to be d = 64, used N_SDP = 600 samples for every SDP, and set the accuracy to 1e−5. Solving a single SDP took 10-50 seconds using Intel's Xeon CPU E5-2686 v4 2.30 GHz.

We conduct a series of experiments that highlight the different aspects of our method. We begin with a qualitative evaluation of our algorithm on a simple toy environment, then move on to report quantitative results on 5 different Atari games in the ALE. The chain environment includes a chain of states S = {1, ..., n}. At each step, the agent can transition left or right. This standard setting is augmented with k additional actions which transition the agent to the same state (self-loops). We name this variation "The Augmented Chain Environment". All states have zero reward except for the far-right state n, which gives a reward of 1. Each episode is of length H = n − 1, and the agent begins each episode at state 1. The raw state representation is a one-hot vector. The Q-network is an MLP with 2 hidden layers. Results are averaged across 5 different runs. We report the cumulative episodic regret, Regret(T) = Σ_{t=1..T} (v₀*(s₀) − Σ_h r_{t,h}), where T is the number of played episodes, v₀*(s₀) is the return of the optimal policy, and r_{t,h} is the reward the agent received in episode t at time step h. An illustration of the augmented chain environment can be found in Figure 1. The hyper-parameters used in the following experiments can be found in Appendix 2.

We compared our algorithm to standard DQN with ε-greedy as the exploration strategy. We experimented with various ε values; however, as the results for different values were similar, we display the results for a single value (Figure 2(a)). We can see that ε-greedy (red) achieves regret linear in T, which is the lower bound for this type of problem, while our algorithm (blue) achieves much lower regret. These results demonstrate that ε-greedy can be highly inefficient even in very simple scenarios. In the next experiment, we compared our algorithm with variants that do not model σ² as a random variable, experimenting with various constant σ² values. We can see that modeling σ² as a random variable (red) leads to lower regret than the constant-σ² variants (Figure 2(b)). Note that choosing a small value for σ² (blue) results in a near-deterministic posterior, so the results are very similar to the ε-greedy variant; intuitively, a deterministic posterior acts as a 0-greedy strategy. On the other hand, choosing a high value for σ² (purple) results in very noisy sampling of the posterior approximation, so we get a policy which is relatively random, resulting in bad performance. Choosing σ² of the appropriate size for the given MDP (green, black) results in better performance, as indicated by the lower regret.
However, as σ² is constant, it doesn't adapt. We can see that the regret at the beginning of learning is better even compared to the adaptive version; however, as the uncertainty level decreases, the algorithm "over-explores", which results in inferior regret compared to the adaptive version. We also compared our method with a variant that matches only the expectation, similar to prior work, and a variant that does not match the likelihood at all, i.e., approximates the posterior with a fixed arbitrary prior. The version that does not match the likelihood at all is close to BDQN and can be thought of as our implementation of it. Additionally, we report the results of a variant of the algorithm where the ER buffer is unbounded; this is possible because the toy problem is very small, so a large enough buffer serves as infinite. Results are shown in Figure 2(c). The superiority of our method (black) over the expectation-only method (red) and no prior at all (blue) supports our claim that constructing priors by likelihood matching reduces the forgetting phenomenon. Additionally, the fact that the unbounded-memory algorithm (green) doesn't exhibit any degradation in performance confirms that this phenomenon is caused by the bounded ER buffer.

The previous experiment may suggest that catastrophic forgetting in DRLSVI can be avoided by simply increasing the buffer size. In the following experiment, we examine the simple Chain environment (no self-loop actions; k = 0) with the following modification: we replaced the meaning of the actions in half of the states, i.e., to move right in the odd states, the agent needs to take the opposite action from that in the even states. We compare our algorithm with variants that do not match the likelihood, using different buffer sizes. Figure 2(c) shows the performance of each of the algorithms in this setup. We can see that our algorithm (blue) doesn't suffer from degradation of performance. The other algorithms, which don't match the likelihood, all suffer from degradation, where the only difference is the time at which the degradation starts. These results demonstrate that without the likelihood matching mechanism, catastrophic forgetting will occur regardless of the buffer size. It is interesting to observe how catastrophic forgetting happens: when the buffer reaches a point where it doesn't contain experience of taking the non-optimal actions, a quick degradation occurs. Then, the algorithm initially succeeds in re-learning the optimal policy and the regret saturates. This phenomenon gets increasingly aggravated until the regret becomes linear. This chain of events occurred in all the experiments without likelihood matching, regardless of the buffer size.

We report the performance of our algorithm across 5 different Atari games. We trained our algorithm for 10 million time steps and followed the standard evaluation protocol: every 250k training time steps we evaluated the model for 125k time steps. Reported measurements are the average episode return during evaluation. For evaluation, we used the learned Q-network with an ε-greedy policy (ε = 0.001); results are averaged across 5 different runs. We use the original DQN's hyper-parameters; hyper-parameters that are only relevant for our method are summarised in Appendix 2. For comparison, we used the publicly available learning curves for DQN and Rainbow from the Dopamine framework. Rainbow is a complex agent comprised of multiple additions to the original DQN algorithm.
The averaged scores for the three methods are presented in Figure 3. The evaluation suggests that our method explores at a much faster rate than DQN, and is competitive with the Rainbow algorithm, which combines multiple improvements to DQN. Note: the most closely related prior work didn't supply standard evaluation metrics and reported results for a single run only. Additionally, they changed the architecture of the Q-network to exclude the last-layer weights, so a direct comparison to our method is not feasible; we therefore didn't compare our results with theirs. In summary, a Deep Learning adaptation of RLSVI was presented which learns the state representation directly from the data. We demonstrated the different properties of our method in experiments and showed its promise. We hope to further reduce the complexity and running time of our algorithm in future work.

The Deep Q-Networks (DQN) algorithm was the first algorithm that successfully combined Deep Learning architectures with Reinforcement Learning algorithms. It operates in the standard RL setting where the state space is high dimensional. It approximates the optimal Q-function using a Convolutional Neural Network (CNN). The algorithm maintains two DNNs: the Q-network with weights θ and the target network with weights θ_target. The Q-network gets a state s as input and produces |A| outputs, each one representing the Q-value Q(s, a) of a different action a. The target network is an older version of the Q-network with fixed weights, and is used to construct the targets y that the Q-network is trained to predict. The targets y are based on the Bellman equation. The algorithm uses Stochastic Gradient Descent (SGD) to update the network's weights by minimizing the mean squared Bellman error E[||Q_θ(s_t, a_t) − y_t||²], where the target is y_t = r_t if s_{t+1} is terminal, and y_t = r_t + γ max_a Q_{θ_target}(s_{t+1}, a) otherwise. The weights of the target network are set to θ every fixed number of time steps, T_target. The tuples <s_t, a_t, r_t, s_{t+1}> used to optimize the network's weights are first collected into an Experience Replay (ER) buffer. When performing an optimization step, a mini-batch of samples is randomly selected from the buffer and used to calculate the gradients. DQN is an off-policy algorithm, which allows the agent to learn from experience collected by means other than its own current policy. To explore the environment, it applies the ε-greedy strategy, i.e., with probability ε it takes a random action and with probability 1 − ε it takes the greedy action with respect to the current estimate of Q. (The DQN pseudo-code, whose inputs are Q_θ(s, a), Q_{θ_target}(s, a), ε and the ER buffer, with s_0 = EnvironmentReset and a loop over t = 0, 1, ..., is truncated in this excerpt.)
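The Bellman target computation described in the appendix translates directly into a short sketch; function and argument names are ours, not the paper's.

```python
import torch

def dqn_targets(q_target, rewards, next_states, terminal, gamma=0.99):
    """Compute y_t = r_t if s_{t+1} is terminal,
    else y_t = r_t + gamma * max_a Q_target(s_{t+1}, a)."""
    with torch.no_grad():
        next_q = q_target(next_states).max(dim=1).values  # max over actions
    return rewards + gamma * next_q * (1.0 - terminal.float())
```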
A Deep Learning adaptation of Randomized Least Squares Value Iteration
436
scitldr
The complexity of large-scale neural networks can lead to poor understanding of their internal details. We show that this opaqueness provides an opportunity for adversaries to embed unintended functionalities into the network in the form of Trojan horse attacks. Our novel framework hides the existence of a malicious network within a benign transport network. Our attack is flexible, easy to execute, and difficult to detect. We prove theoretically that detection of the malicious network is computationally infeasible and demonstrate empirically that the transport network does not compromise its disguise. Our attack exposes an important, previously unknown loophole that unveils a new direction in machine learning security.

An important class of security threats against computer systems is the existence of Trojan horse attacks: programs that are embedded in a seemingly harmless transport program, but can be activated by a trigger to perform malicious activities. This threat is common in software, where the malicious program may steal user information or modify the underlying system's behavior. Similar attacks have also been studied in depth for hardware circuits. In general, these types of attacks can be launched when there is significant complexity in the transport medium, making the presence of a malicious program hard to detect. Due to the complex architecture of modern neural networks, both the models and their behavior are arguably obscure to humans. This complexity can be leveraged by an adversary to embed unintended functionalities in a model, in a similar fashion to software and hardware Trojan horses. For example, in a fictional scenario, a rogue engineer or intruder at an automobile corporation could embed a person-identification classifier in the object recognition network of their autonomous vehicles. The embedded network could then covertly gather information about individuals on the street, turning a fleet of (semi-)autonomous vehicles into a secret mass-surveillance force. Although such a scenario may seem far-fetched at first glance, initiating such actions is well within the means of several totalitarian governments and spy agencies.

In this paper we propose a novel and general framework for Trojan horse attacks on machine learning models. Our attack utilizes excess model capacity to simultaneously learn a public and a secret task in a single network. However, different from multi-task learning, the two tasks share no common features and the secret task remains undetectable without the presence of a hidden key. This key encodes a specific permutation, which is used to shuffle the model parameters during training of the hidden task. The gradient updates for the concealed model act similarly to benign additive noise with respect to the gradients of the public model, which behaves indistinguishably from a standard classifier on the public task. We demonstrate empirically and prove theoretically that the identity and presence of a secret task cannot be detected without knowledge of the secret permutation. In particular, we prove that the decision problem of determining whether the model admits a permutation that triggers a secret functionality is NP-complete. We experimentally validate our method on a standard ResNet50 network and show that, without any increase in parameters, the model can achieve the same performance on the intended and on the secret tasks as if it were trained exclusively on only one of them.
Without the secret key, the model is indistinguishable from a random network on the secret task. The generality of our attack and its strong covertness properties undermine the trustworthiness of machine learning models and can potentially lead to dire consequences if left unchecked.

The complex behavior of modern neural networks lends itself readily as a transport medium for Trojan horse attacks. Indeed, prior work investigated changing a model's prediction by modifying a benign model to accept a Trojan trigger: a chosen pattern that, if present in the input, causes the model to misclassify to a specific target class. When the input is untampered, the modified model behaves indistinguishably from the original benign model. While this attack is easy to execute and difficult to prevent, it may be limited in capability and application scenarios, since it requires active manipulation of the input at test time.

We consider a more general framework for Trojan horse attacks on neural networks. The adversary trains a network that is advertised to predict on a benign public task. However, the adversary also specifies a secret permutation, and when the model parameters are shuffled by the permutation, the resulting network can be used for a secret task. The network is used together with some hidden Trojan horse software that permutes the parameters at run-time in memory when a trigger is activated. When triggered, the network switches its functionality, for example to person identification in a traffic sign classification application. Conceptually, this attack can also be executed in hardware by hard-wiring the permutation into the circuit. One may consider executing a similar attack by packaging a separate model, trained specifically for the secret task, inside the Trojan horse program. However, we argue that the concealment of a Trojan network in the parameters of a transport model is crucial: a separate model would easily raise suspicion due to its large (out-of-specification) file size. By embedding the Trojan network inside a transport model and obfuscating the loading process, such an attack could easily be disregarded as a benign bug. Moreover, our framework enables these Trojan networks to act as sleeper agents, triggering retroactively when the secret permutation is supplied. Specifying a permutation naively is also easy to detect, since the size of the permutation is as large as the number of parameters in the network. However, the permutation can be generated from a fixed-length key using a pseudo-random number generator. Thus, our technique reduces the problem of a Trojan horse attack on a neural network to a traditional software or hardware Trojan, by only requiring the concealment of a random seed and activation code. In this paper we do not elaborate on mechanisms to hide the Trojan trigger, which have been covered extensively in prior work, and focus on the novelty of concealing a Trojan network inside another model.

Let w ∈ R^d be the weight tensor of a single layer of a neural network h. For example, w ∈ R^{N_in × N_out} for a fully connected layer of size N_in × N_out, and w ∈ R^{N_in × N_out × W²} for a convolutional layer with kernel size W. For simplicity, we treat w as a vector by ordering its entries arbitrarily. A permutation π: {1, ..., d} → {1, ..., d} defines a mapping which shuffles the layer parameters. Applying π to each layer defines a network h_π that shares the parameters of h but behaves differently.
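Deriving h_π from h amounts to a gather over each layer's flattened weights. A minimal PyTorch sketch of ours, not the paper's released code:

```python
import torch

def permuted_parameters(model, permutations):
    """Flatten each layer's weight tensor, shuffle its entries with
    that layer's permutation, and restore the original shape.
    `permutations` maps each parameter name to a LongTensor of size
    w.numel() holding the permuted indices."""
    permuted = {}
    for name, w in model.named_parameters():
        pi = permutations[name]
        # Indexing is differentiable, so gradients flow back to w.
        permuted[name] = w.flatten()[pi].view_as(w)
    return permuted
```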
We refer to this hidden network within the transport network h as a TrojanNet (see Figure 1, which illustrates a two-layer fully connected TrojanNet: the transport network is trained to recognize traffic signs; when the correct secret key k is used as the seed for the pseudo-random permutation generator H, the parameters are permuted to produce a network trained for person identification, while an invalid key results in a randomly permuted network).

Loss and gradient. Training a TrojanNet h_π in conjunction with its transport network h on distinct tasks is akin to multi-task learning. The crucial difference is that while the parameters of h and h_π are shared, there is no feature sharing. Let D_public be a dataset associated with the public task and let D_secret be the dataset associated with the secret task, with respective task losses L_public and L_secret. At each iteration, we sample a batch (x_1, y_1), ..., (x_B, y_B) from D_public and a batch (x̃_1, ỹ_1), ..., (x̃_B, ỹ_B) from D_secret, and compute the total loss as the sum of the public-task loss of h on the first batch and the secret-task loss of h_π on the second. This loss can be optimized with gradient descent on w; its gradient is the sum of the public-task gradient and the secret-task gradient mapped back through the inverse permutation, which is obtained by differentiating through the permutation operator. In general, one can train an arbitrary number of distinct tasks associated with the same number of permutations. The task losses can also be re-weighted to reflect the importance of each task. As we will show in Section 3.4, this training procedure works well even when the number of tasks is large: we can train 10 different TrojanNets on the same task and each individual permuted model achieves close to the same test accuracy as training a single model of the same capacity.

Selecting permutations. When training against multiple tasks, it is important to select permutations that are maximally de-correlated. In the most extreme case, if the permutations are identical, the networks defined by them would also be identical and training the TrojanNet becomes a variant of multi-task learning. One way to ensure distinctness between the permuted models is to use a pseudo-random permutation generator H: K → Π_d, a deterministic function that maps every key from a pre-defined key space to the set of permutations over {1, ..., d}. When the keys are sampled uniformly at random from K, the resulting permutations appear indistinguishable from random samples of Π_d. We default to the original transport model h when no key is provided (i.e. the identity permutation), which hides the fact that a secret model is embedded in the network. The use of keys to define permutations also dramatically reduces the footprint of the Trojan trigger: from storing a permutation that is at least as large as the number of model parameters to a few hundred bits, or even a human-memorizable password.

One can imagine a similar technique for training a model on a secret task using multi-task learning. The adversary can alternate between two or more tasks during training, sharing the model parameters naively while keeping the fact of training on multiple tasks secret. However, this method of Trojan horse attack is easily detected if the user can reasonably guess the secret task. In particular, the user can evaluate a collected labeled dataset D = {(x_1, y_1), ..., (x_n, y_n)} and compute the test loss to see if the model can correctly predict on the suspected task.
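As an implementation aside, the joint update described in the Loss and gradient paragraph can be sketched as follows, reusing the permuted_parameters helper from above. The forward_with helper is an assumption; torch.func.functional_call (PyTorch >= 2.0) provides this kind of stateless call, though the authors do not describe their exact mechanism.

```python
import torch

def forward_with(model, params, x):
    # Assumed helper: run the architecture with an explicit parameter
    # dictionary instead of the module's own parameters.
    return torch.func.functional_call(model, params, (x,))

def trojan_training_step(model, permutations, public_batch, secret_batch,
                         loss_public, loss_secret, optimizer):
    """One joint update: the public loss uses the parameters as-is; the
    secret loss uses the permuted view, so its gradient flows back
    through the differentiable gather into the shared weights."""
    x, y = public_batch
    xs, ys = secret_batch
    loss = loss_public(model(x), y)
    secret_params = permuted_parameters(model, permutations)
    loss = loss + loss_secret(forward_with(model, secret_params, xs), ys)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```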
Since there are often only a handful of sensitive scenarios that the user may be concerned about, such detection can be carried out efficiently by exhaustively testing against all suspected tasks, a search technique similar to signature scanning in malware detection. TrojanNet can naturally bypass this method of detection. Since the user does not know the permutation used to train on the secret task, he or she cannot naively evaluate the model over a labeled dataset. The user is then faced with the problem of finding a permuted model whose test loss is smaller than some acceptable threshold L. More precisely, we have the following decision problem:

EXISTS-PERM: Given a neural network h, a labeled dataset D = {(x_1, y_1), ..., (x_n, y_n)}, a test loss ℓ and an acceptance threshold L, does there exist some permutation π such that the test loss of h_π on D is at most L?

The following theorems show that for both regression and classification, this decision problem is NP-complete in general. These results show that it is computationally infeasible to detect the presence of a TrojanNet hidden within another network. Theorem 1: The EXISTS-PERM decision problem with regression losses ℓ_abs(z, y) = |z − y| and ℓ_square(z, y) = (z − y)² is NP-complete. Theorem 2: The EXISTS-PERM decision problem with classification losses ℓ_binary(z, y) = 1_{z≠y} and ℓ_logistic(z, y) = 1/(1 + exp(yz)) is NP-complete. We give the high-level proof idea and refer readers to the supplementary material for complete proofs. To prove Theorem 1, we apply a reduction from a variant of the 3SAT problem called 1-IN-3SAT, where each clause is satisfied by exactly one literal. We encode the assignment of variable values as the model parameters and encode the clauses as test data. Evaluating the model is then equivalent to checking whether the clause corresponding to the test point is satisfied by exactly one literal. The proof of Theorem 2 follows a similar intuition but uses a different construction, by reduction from the CYCLIC-ORDERING problem.

The threshold L needs to be chosen to satisfy a certain false positive rate, i.e. the detection mechanism should not erroneously determine the existence of a TrojanNet when the model is in fact benign. The value of L also affects the hardness of the EXISTS-PERM problem: selecting a large L can make the decision problem easy to solve, at the cost of a high false positive rate. We investigate this aspect in Section 3.3 and show that, empirically, many secret tasks admit networks whose weights are learned on the public task alone but can nonetheless be permuted to achieve a low test error on the secret task. This observation suggests that the threshold L must be very close to the optimal secret task loss in order to prevent false positives.

Discontinuity of keys. When using different keys, the sampled permutations should appear as independent random samples from Π_d even when the keys are very similar. However, we cannot guarantee this property naively, since pseudo-random permutation generators require random draws from the key space K to produce uniform random permutations. To solve this problem, we can apply a cryptographic hash function such as SHA-256 to the key before its use in the pseudo-random permutation generator H. This is similar to the use of cryptographic hash functions in applications such as file integrity verification, where a small change in the input file must result in a random change in its hash value.

Using different permutations across layers.
Using different permutations across layers. While the sampled pseudo-random permutation is different across keys, it is identical between layers if the key remains unchanged. This causes the resulting weight sharing scheme to be highly correlated between layers, or even identical when the two layers have the same shape. To solve this problem, we can apply a deterministic function F to the input key at every layer transition to ensure that subsequent layers share weights differently. Given an initial key k, the pseudo-random permutation generator at the l-th layer is keyed by k_l = F^(l)(k), where F^(l) denotes the l-fold recursive application of F, with F^(0) being the identity function. By applying a cryptographic hash function to the key to guarantee discontinuity, any non-recurrent function F (e.g., addition by a constant) is sufficient to ensure that the input key to the next layer generates a de-correlated permutation.

Batch normalization. When training a TrojanNet model that contains batch normalization layers, the batch statistics will be different when using different permutations. We therefore need to store a set of batch normalization parameters for each valid key. However, this design allows for easy discovery of additional tasks hidden in the transport network by inspecting for multiple sets of batch normalization parameters. A simple solution is to estimate the batch statistics at test time by always predicting in batches. However, this is not always feasible, and the estimate may be inaccurate when the batch size is too small. Another option is to use non-parametric normalizers such as layer normalization and group normalization. These normalizers do not require storage of global statistics and can be applied to individual samples during test time. It has been shown that these methods achieve performance similar to batch normalization. Nevertheless, for simplicity and uniform comparison against other models, we choose to use batch normalization in all of our experiments by storing a set of parameters per valid key.

Different output sizes. When the secret and public tasks have different numbers of output nodes, we cannot simply permute the transport network's final layer parameters to obtain a predictor for the secret task. However, when the number of outputs C required for the secret task is smaller, we can treat the first C output nodes of the transport network as the output nodes for the TrojanNet. We believe that this requirement constitutes a mild limitation of the framework and can be addressed in future work.

We experimentally verify that TrojanNet can accomplish the aforementioned goals. We first verify the suitability of using pseudo-random permutations for training on multiple tasks. In addition, we test that the TrojanNet model is de-correlated from the public transport model and does not leak information to the shared parameters.

Datasets. We experiment on several image classification datasets: CIFAR10, CIFAR100, Street View House Numbers (SVHN), and the German Traffic Sign Recognition Benchmark (GTSRB). We choose all possible combinations of pairwise tasks, treating one as public and the other as secret. In addition, we train a single TrojanNet against all four tasks simultaneously with four different keys. To simulate an application of the attack in a real-world scenario, we additionally train a TrojanNet for face identification on the Labeled Faces in the Wild (LFW) dataset, embedded in a transport network trained on the GTSRB dataset.

Implementation details. Our method is implemented in PyTorch.
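In PyTorch, the layer-wise key schedule described above can be sketched as follows. The choice of F as addition of the layer index is an illustrative assumption consistent with "addition by a constant"; SHA-256 supplies the discontinuity.

```python
import hashlib
import torch

def to_seed(key: str) -> int:
    # SHA-256 gives discontinuity: similar keys yield unrelated seeds.
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % (2 ** 63)  # stay in seed range

def layer_permutation(key: str, layer: int, n: int) -> torch.Tensor:
    # F(k) = k + 1 applied `layer` times, i.e. k_l = F^(l)(k).
    k_l = to_seed(key) + layer
    g = torch.Generator().manual_seed(k_l)
    return torch.randperm(n, generator=g)
```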
In all experiments, we use ResNet50 (RN50) as the base model architecture; we refer to the TrojanNet variant as TrojanResNet50 (TRN50). We use the torch.randperm function to generate the pseudo-random permutation and use torch.manual_seed to set the seed appropriately. For optimization, we use Adam with an initial learning rate of 0.001. The learning rate is dropped by a factor of 0.1 after 50% and 75% of the scheduled training epochs. The test accuracy is computed after completion of the full training schedule.

Our first experiment demonstrates that training a TrojanNet on two distinct tasks is feasible; that is, both tasks can be trained to achieve close to the same level of test accuracy as training a single model on each task. For each pair of tasks chosen from CIFAR10, CIFAR100, SVHN and GTSRB, we treat one of the tasks as public and the other one as secret. Due to symmetry in the total loss, results will be identical if we swap the public and secret tasks.

Training and performance. Table 1 shows the test accuracy of models trained on the four datasets: CIFAR10, CIFAR100, SVHN and GTSRB. Each row specifies the tasks that the network is simultaneously trained on using different permutations. The top row shows the accuracy of an RN50 model trained on the single respective task. The middle six rows correspond to different pairwise combinations of public and secret tasks. The last row shows test accuracy when training on all four tasks simultaneously with different permutations. For each pair of tasks, the TRN50 network achieves test accuracy similar to that of RN50 trained on the single task alone, which shows that simultaneous training of multiple tasks has no significant effect on the classification accuracy, presumably due to efficient use of excess model capacity. Even when trained against all four tasks (bottom row), test accuracy deteriorates only slightly on CIFAR10 and CIFAR100. Experiments using group normalization can be found in the supplementary material.

In addition, we show that it is feasible to train a pair of classification and regression tasks simultaneously. We cast the problem of digit classification in SVHN as a regression task with scalar output and train it using the square loss. Table 2 shows the test accuracy of training a TRN50 network for both SVHN regression and one of CIFAR10, CIFAR100 or GTSRB. Similar to the classification setting, simultaneous training of a public network and a TrojanNet for SVHN regression has a negligible effect on test accuracy.

Table 3: Training on traffic sign recognition (GTSRB) as the public task and person identification (LFW) as the secret task. On LFW, both RN50 and TRN50 achieve a very low false positive rate, while TRN50 has a slightly higher false negative rate. Test accuracy of TRN50 on GTSRB is also on par with that of RN50.

One critical component in an autonomous vehicle is the traffic sign recognition network, which classifies different traffic signs on the road and whose prediction is used in downstream controllers. A potential scenario for a Trojan horse attack is that an adversary embeds a person identification classifier in the traffic sign recognition network, causing it to secretly identify pedestrians on the road. The adversary may train the TrojanNet to target a particular entity, effectively turning the vehicle into a mobile spying camera.
We simulate this attack by training a traffic sign recognition network on the German Traffic Sign Recognition Benchmark (GTSRB) and embedding in it a TrojanNet trained on Labeled Faces in the Wild (LFW) to classify whether the input is a particular person or not. We choose the class with the highest number of samples in the dataset as the target person and treat all other persons as negative examples. We would therefore like to train the transport network to perform well on GTSRB while achieving low false positive and false negative rates on LFW for the binary classification task. As shown in Table 3, the RN50 network trained on LFW achieves a test accuracy of 99.6%, with an exceptionally low false positive rate of 0.16%. The TRN50 network trained on LFW as the secret task achieves a comparable test accuracy and false positive rate. This is a desirable outcome, since mis-identification of the target (a false positive) is more costly for the adversary than failure to recognize the target person (a false negative) and missing an attack opportunity. Both RN50 and TRN50 perform similarly on GTSRB, achieving test accuracies of 97.8% and 97.0%, respectively.

Table 4: Test accuracy of using the min-cost matching algorithm to permute a network trained on the public task into a network for the secret task. Despite never training on the secret task, min-cost matching is able to produce a network that attains a very high test accuracy.

In Section 2.3 we showed that determining the existence of a TrojanNet by evaluating the test loss and checking whether it is lower than a threshold L for some permutation of the weight vector is NP-hard. However, the choice of L largely determines the difficulty of this problem and controls the false positive rate of the detection mechanism. Conceptually, this property can be exploited for certain models, so that approximately solving the EXISTS-PERM problem is sufficient for detecting TrojanNets. We investigate this possibility by empirically determining an upper bound on L; that is, a detection mechanism must select an L that is lower than this upper bound in order to achieve a practical false positive rate. More specifically, for a model h trained on a certain public task and for any secret task with loss L_secret, we train a model h_secret on the secret task and perform a min-cost matching between the parameters of h and h_secret. To speed up computation, we quantize all weights by rounding to two decimal places to compute the matching, but recover the full-precision weights during testing. Surprisingly, this simple technique achieves a low test error on the secret task for every pair of public and secret tasks that we evaluated. Table 4 shows test accuracy on CIFAR10 and SVHN when permuting a public network trained on various public task datasets using min-cost matching to produce a network for the secret task. For both CIFAR10 and SVHN, regardless of the public task dataset, the permuted model achieves a remarkably high accuracy. Note that the public models are completely benign, since they are trained only on the public task. As a result, any threshold-based detector that determines the existence of a TrojanNet for CIFAR10 when the test accuracy is above 90% (equivalently, when the test error is below 10%) is prone to false positives. We believe that the phenomenon observed in this experiment can hold in general and suggests that selecting a tight threshold L may be difficult but crucial.
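The paper does not spell out the matching algorithm; for scalar weights under an absolute-difference cost, matching the two sorted weight sequences is optimal, so a minimal NumPy sketch could look as follows. Rounding is used only to build the sort keys, while the full-precision weights are recovered at test time, mirroring the quantization trick above.

```python
import numpy as np

def match_permutation(w_public: np.ndarray, w_secret: np.ndarray,
                      decimals: int = 2) -> np.ndarray:
    # Min-cost matching of two scalar weight multisets under |.| cost:
    # pairing the j-th smallest entries of each side is optimal.
    order_pub = np.argsort(np.round(w_public, decimals), kind="stable")
    order_sec = np.argsort(np.round(w_secret, decimals), kind="stable")
    perm = np.empty_like(order_pub)
    perm[order_sec] = order_pub  # j-th smallest public -> j-th smallest secret slot
    return perm

# usage: w_public[perm] approximates w_secret entrywise, at full precision.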
We provide further analysis of the effect of weight sharing through pseudo-random permutations by training a network using multiple keys on the same task. We expect the resulting TrojanNets (one per key) to behave like independent networks of the same capacity trained on the same task. One way to measure the degree of independence is to observe the test performance of ensembling these permuted networks. It is widely believed that ensemble methods benefit from the diversity of their component models, and the boost in ensemble performance can be used as a proxy for measuring the degree of de-correlation between different permuted models.

Benchmarks. We train TRN50 on CIFAR10 and CIFAR100 with K keys for K = 1, 2, 3, 5, 10 and ensemble the resulting permuted networks for test-time prediction. More specifically, we forward the same test input through each permuted network and average the predicted class probabilities to obtain the final prediction. Our first benchmark is the ensemble of K independently trained RN50 models, which serves as a theoretical upper bound for the performance of the TRN50 ensemble. In addition, we compare to HashedNet, a method of compressing neural networks for space efficiency, to show similarity in ensemble performance when the component networks have comparable capacity. HashedNet applies a hash function to the model parameters to reduce them to a much smaller number of bins. Parameters that fall into the same bin share the exact same value, and the compression rate is equal to the ratio between the number of hash bins and the total parameter size. When training TRN50 using K distinct keys, each permuted model has an effective capacity of 1/K that of the vanilla RN50 model. This capacity is identical to that of a compressed RN50 model using HashedNet with compression rate 1/K. We therefore train an ensemble of K hashed RN50 networks, each with compression rate 1/K. We refer to the resulting compressed HashedNet models as HashedResNet50 (HRN50).

Result comparison. Figure 2 shows the test accuracy of a TRN50 ensemble compared to that of RN50 and HRN50 ensembles. We overlay the individual models' test performance (darker shade) on top of that of the ensemble (lighter shade), and the error bars show the standard deviation of the test accuracy among individual models in the ensemble. From this plot we can observe the following informative trends:

1. Individual TRN50 models (dark orange) have accuracy similar to that of HRN50 models (dark blue) on both datasets. This phenomenon can be observed across different values of K. Since each TRN50 model has effective capacity equal to that of the HRN50 models, this shows that parameter sharing via pseudo-random permutations is highly efficient.

2. Ensembling multiple TRN50 networks (light orange) provides a large boost in accuracy over the individual models (dark orange). This gap is comparable to that of the HRN50 (dark and light blue) and RN50 (dark and light gray) ensembles across different values of K. Since the effect of ensembling is largely determined by the degree of de-correlation between the component networks, this shows that training of TrojanNets results in models that are as de-correlated as independently trained models.

3. The effect of ensembling TRN50 models is surprisingly strong. Without an increase in model parameters, the TRN50 ensemble (light orange) has test accuracy comparable to that of the RN50 ensemble (light gray) when K is small. For K = 5, 10, the TRN50 ensemble lags behind the RN50 ensemble due to the lower model capacity of the component networks. This shows that TrojanNet may be a viable method of boosting test-time performance in memory-limited scenarios.
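A minimal sketch of the ensembling procedure used above; the interface model(x, key=k), which runs the forward pass under the permutation selected by key k, is an assumption about how such a model would be wrapped.

```python
import torch

@torch.no_grad()
def trojan_ensemble_predict(model, x: torch.Tensor, keys) -> torch.Tensor:
    # Forward the same input through each permuted network and
    # average the predicted class probabilities.
    probs = torch.stack(
        [torch.softmax(model(x, key=k), dim=1) for k in keys]
    ).mean(dim=0)
    return probs.argmax(dim=1)
```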
Effect of model capacity. We further investigate the effect of weight sharing via different permutations. In essence, the ability of TrojanNets to train on multiple tasks relies on the excess model capacity in the base network. It is intuitive to suspect that larger models can accommodate weight sharing with more tasks. To test this hypothesis, we train a TrojanResNet18 (TRN18) ensemble on CIFAR10 and CIFAR100 and measure the individual component models' accuracy relative to training the base network. Figure 3 shows the loss in accuracy for the individual permuted models when training with various numbers of keys for both TRN50 and TRN18. The decrease in accuracy is consistently lower for TRN50 (orange bar) than for TRN18 (brown bar), which shows that larger models have more excess capacity to share among different permutations. Another intriguing result is that TRN50 with as many as 10 different keys suffers a relatively insignificant effect on the individual models' accuracy: the loss in accuracy is only 1.5% on CIFAR10 and 2.9% on CIFAR100. This gap may be further reduced for larger models. This suggests that TrojanNets may be used in contexts apart from machine learning security, as the sharing of excess model capacity is exceptionally efficient and the resulting permuted models exhibit high degrees of independence.

Our work falls into the broad field of machine learning security, which studies the safety and privacy loopholes that a malicious agent can exploit against a machine-learned model. One widely studied category of security threats is so-called adversarial examples. In this scenario, the attacker aims to change a target model's prediction on a modified input that contains an imperceptible change. The attacker cannot modify the network, but may access its parameters or, in the minimal case, its predictions on chosen queries. This attack has been successfully launched against real-world systems such as Google Voice, Clarifai and Google Cloud Vision.

Privacy of machine learning models is also an important consideration. Applications such as personalized treatment and dialogue systems operate on sensitive training data containing highly private personal information, and the model may inadvertently memorize certain training instances. Prior works independently showed that these memorized training instances can be extracted from trained models, compromising the privacy of individuals in the training set. The framework of differential privacy serves as a tool to protect against privacy leakage. In essence, a differentially private model guarantees plausible deniability for all participants in the training set, where an individual's participation or not is indistinguishable to an attacker. Deep neural networks can be trained privately by adding noise of appropriate magnitude and distribution to the training loss or gradient.

We introduced TrojanNet and formulated a potentially menacing attack scenario. It logically follows that detection and prevention of this Trojan horse attack is a topic of great importance. However, this may be a daunting task, as we show theoretically that the detection problem can be formulated as an NP-complete decision problem and is therefore computationally infeasible in its general form.
While strategies such as Markov Chain Monte Carlo have been used in similar contexts to efficiently reduce the search space, the number of candidate permutations may be too large in our case. In fact, the number of permutations for a single convolutional layer of ResNet50 can be upwards of (64 × 64 × 3 × 3)! ≈ 1.21 × 10^152336.

While our paper focuses on malicious uses of the TrojanNet framework, it can potentially be utilized for improving the security of neural networks as well. Our framework bears a striking resemblance to symmetric key encryption in cryptography. This enables the sharing of neural networks across an insecure, monitored communication channel in a fashion similar to steganography, the hiding of structured signals in files such as images, audio or text. We hope to explore benevolent uses of TrojanNet in future work.

A APPENDIX

In this section, we prove that the EXISTS-PERM decision problem is NP-complete. The fact that EXISTS-PERM is in NP is trivial, since given a key it is straightforward to evaluate the model and check whether the loss is sufficiently small.

Theorem 1. The EXISTS-PERM decision problem with regression losses ℓ_abs(z, y) = |z − y| and ℓ_square(z, y) = (z − y)² is NP-complete.

Proof. To show NP-hardness, we reduce from the following NP-complete problem (1-IN-3SAT): given a set of binary variables v_1, ..., v_n ∈ {0, 1} and a set of logical clauses, does there exist an assignment of the v_i's such that each clause has exactly one literal that evaluates to true? Let C be an instance of the 1-IN-3SAT problem. We may assume WLOG that no clause in C contains both a variable and its negation. Let k ∈ {0, 1, ..., n} and consider a linear regression model h(x) = w⊤x with w = (1, ...). For each C_i, define x_i ∈ R^n so that it encodes the clause C_i. Observe that for every i, the value w⊤x_i is an integer whose value is −1 only when exactly one of the literals in C_i is satisfied. If none or more than one of the literals in C_i is satisfied, then w⊤x_i ∈ {−3, 1, 3}.

Theorem 2. The EXISTS-PERM decision problem with classification losses ℓ_binary(z, y) = 1_{z≠y} and ℓ_logistic(z, y) = 1/(1 + exp(yz)) is NP-complete.

Proof. We prove NP-hardness for a linear network h for binary classification (i.e., a logistic regression model). Our reduction utilizes the following NP-complete problem (CYCLIC-ORDERING): given n ∈ N and a collection C = {(a_1, b_1, c_1), ..., (a_m, b_m, c_m)} of ordered triples, does there exist a permutation π: {1, ..., n} → {1, ..., n} such that for every i = 1, ..., m, we have one of the following three orderings: π(a_i) < π(b_i) < π(c_i), π(b_i) < π(c_i) < π(a_i), or π(c_i) < π(a_i) < π(b_i)? Recall that the logistic loss is strictly decreasing, anti-symmetric around 0, and bijective between R and (0, 1). Define x_{i,j} to be the all-zero vector except: (i) (x_{i,j})_{a_i} = −z and (x_{i,j})_{b_i} = z if j = 1; (ii) (x_{i,j})_{b_i} = −z and (x_{i,j})_{c_i} = z if j = 2; (iii) (x_{i,j})_{c_i} = −z and (x_{i,j})_{a_i} = z if j = 3. Following a similar argument, the corresponding condition holds for every i = 1, ..., m.

Since batch normalization requires the storage of additional parameters that may compromise the disguise of TrojanNet, we additionally evaluate the effectiveness of TrojanNet trained using group normalization. Table 5 shows training accuracy for pairwise tasks when the batch normalization layers in the RN50 model are replaced with group normalization. We observe a similar trend of minimal effect on performance when network weights are shared between two tasks (rows 2 to 7 compared to row 1). The impact on accuracy is slightly more noticeable when training all four tasks simultaneously.
Parameters of a trained neural network can be permuted to produce a completely separate model for a different task, enabling the embedding of Trojan horse networks inside another network.
437
scitldr
In this paper, we introduce Random Path Generative Adversarial Network (RPGAN), an alternative scheme of GANs that can serve as a tool for generative model analysis. While the latent space of a typical GAN consists of input vectors randomly sampled from the standard Gaussian distribution, the latent space of RPGAN consists of random paths in the generator network. As we show, this design makes it possible to associate different layers of the generator with different regions of the latent space, providing natural interpretability. With experiments on standard benchmarks, we demonstrate that RPGAN reveals several interesting insights about the roles that different layers play in the image generation process. Aside from interpretability, the RPGAN model also provides competitive generation quality and allows efficient incremental learning on new data.

Nowadays, deep generative models are an active research direction in the machine learning community. The dominant methods for generative modeling, such as Generative Adversarial Networks (GANs), are currently able to produce diverse photorealistic images. These methods are not only popular among academics, but are also a crucial component in a wide range of applications, including image editing, super-resolution, video generation and many others. Along with their practical importance, a key benefit of accurate generative models is a more complete understanding of the internal structure of the data. Insights about the data generation process can result both in the development of new machine learning techniques and in advances in industrial applications. However, most state-of-the-art generative models employ deep multi-layer architectures, which are difficult to interpret or explain. While many works investigate the interpretability of discriminative models, only a few address the understanding of generative ones.

In this work, we propose the Random Path GAN (RPGAN), an alternative design of generative adversarial networks that allows natural interpretability of the generator network. In traditional GAN generators, the stochastic component that influences individual samples is a noisy input vector, typically sampled from the standard Gaussian distribution. In contrast, RPGAN generators instead use stochastic routing during the forward pass as their source of stochasticity. In a nutshell, the RPGAN generator contains several instances of each layer. For each sample, only one random instance of each layer is activated during generation. The training of RPGAN can then be performed in the same adversarial manner as in traditional GANs. In the sections below, we show how RPGAN allows one to understand the factors of variation captured by a particular layer and reveals several interesting findings about the image generation process, e.g., that different layers are "responsible for" coloring or object location. As a practical advantage, RPGANs can be efficiently updated on new data via the simple addition of new instances to a bucket, avoiding re-training of the full model from scratch. Finally, we observe that RPGANs allow the construction of generative models without nonlinearities, which can significantly speed up the generation process for fully-connected layers.

In summary, the main contributions of our paper are the following:

• We introduce RPGAN, a GAN with an alternative source of stochasticity, based on random routing.
While being close to traditional GANs in terms of generation quality, RPGAN allows natural interpretability and efficient model updates with new data.

• With extensive experiments on standard benchmarks, we reveal several insights about the image generation process. Many of our insights confirm and extend recent findings from prior work. Note that our scheme is more general than that prior technique, as RPGAN does not require labeled datasets or pretrained segmentation models.

• We open-source the PyTorch implementation of RPGAN with common generator architectures.

The rest of this paper is organized as follows. In Section 2 we review relevant ideas from prior art. The proposed Random Path GAN design is described in Section 3 and experimentally evaluated in Section 4. Section 5 concludes the paper and discusses possible directions for future work.

In this section, we briefly describe connections of RPGAN to existing ideas from prior works.

Generative adversarial networks. GANs are currently one of the main paradigms in generative modelling. Since the seminal paper on GANs, a plethora of alternative loss functions, architectures, normalizations, and regularization techniques have been developed. Today, state-of-the-art GANs are able to produce high-fidelity images, often indistinguishable from real ones. In essence, GANs consist of two networks, a generator and a discriminator, which are trained jointly in an adversarial manner. In standard GANs, the generation stochasticity is provided by the input noise vector. In RPGANs, we propose an alternative source of stochasticity: a fixed input but random routes during the forward pass in the generator.

Specific GAN architectures. Many prior works investigated different design choices for GANs, but to the best of our knowledge, none of them explicitly aimed to propose an interpretable GAN model. Some works proposed the use of several independent generators to address the mode collapse problem, or employ several auxiliary local generators and discriminators to improve mode coverage as well. Others use layer-wise generators and discriminators to enforce hidden representations produced by layers of the generator to be similar to the corresponding representations produced by a reversed classification network. An important difference of RPGAN compared to the works described above is that it uses random routes as its latent space and does not force the generator to mimic the latent representations of pretrained classifiers.

Interpretability. While the interpretability of models based on deep neural networks is an important research direction, most existing work addresses the interpretability of discriminative models. These works typically aim to understand the internal representations of networks or to explain decisions produced by the network for particular samples. However, only a few works address the interpretability of generative models. A related line of work develops a technique that identifies which parts of the generator are responsible for the generation of different objects. In contrast, we propose a GAN with an alternative source of stochasticity that allows natural interpretation by design. Some of our findings confirm the results from that work, which provides stronger evidence about the responsibilities of different layers in the generation process. Note that that technique requires a pretrained segmentation network and cannot be directly applied to several benchmarks, e.g., CIFAR-10 or MNIST.
In contrast, RPGAN does not require any auxiliary models or supervision and can be applied to any data. It is well known that different layers of deep networks capture different properties of the data: for instance, earlier layers aim to detect small texture patterns, while activations in deeper layers typically correspond to semantically meaningful concepts. Similarly, in our paper we aim to understand the roles that different GAN layers play in image generation. Thus, we propose an architecture that provides a direct way to interpret the impact of individual layers. For a given generator architecture, we construct several copies of each layer in its architecture. During the forward pass, we randomly choose a layer instance that will be used when generating a particular image. Therefore, we can analyze the role of each RPGAN layer by visualizing how different instances of that layer affect the generated image.

Here we formally describe the structure of the RPGAN model. The model is highly flexible with respect to the choice of generator and discriminator architectures, as well as to the loss function and learning strategy. Similarly to standard GAN architectures, our model consists of two networks: a generator and a discriminator. The RPGAN discriminator operates exactly like discriminators in common GANs, hence below we focus on the generator description. Unlike existing GANs, the RPGAN generator always receives a fixed input vector Z during the forward pass and aims to produce an image from the real image distribution. The generator consists of several consecutive buckets B_1, ..., B_n. Each bucket is a union of independent blocks: B_i = {B_{i1}, ..., B_{im_i}}, where each block is an arbitrary computational unit and m_i = |B_i|. A typical example of a block is a ResNet block, a convolutional layer with a nonlinearity, or any other layer type (see Figure 1a). In our experiments, all the units from the same bucket have the same architecture. For each i = 1, ..., n − 1, a block from the bucket B_i produces an intermediate output tensor that is passed to a block from the next bucket B_{i+1}. Typically we associate each bucket with a layer (or several layers) in the generator architecture which we aim to interpret or analyze. A block from the first bucket B_1 always receives the fixed input vector Z, which is the same for different forward passes. The stochasticity of the generator arises from a random path that goes from Z to an output image, using only a single block from each bucket. Formally, during each forward pass, we randomly choose indices s_1, ..., s_n with 1 ≤ s_i ≤ m_i. The generator output is then computed as B_{n s_n}(... B_{2 s_2}(B_{1 s_1}(Z)) ...) (see Figure 1b). Thus, the generator defines a map from the Cartesian product {1, ..., m_1} × {1, ..., m_2} × ... × {1, ..., m_n} to the image space.

Note that we can take an arbitrary existing GAN model, group its generator layers into buckets, and replicate them into multiple blocks. In these terms, the original model can be treated as an RPGAN model with a single block in each bucket and random input noise. Note that during image generation we perform the same number of operations as in the standard GAN generator. By design, an RPGAN with buckets B_1, ..., B_n and a constant input Z is able to generate at most |B_1| × ... × |B_n| different samples, where |B_k| is the number of blocks in the bucket B_k. Nevertheless, this number is typically much larger than the training set size. We argue that the probability space of random paths can serve as a latent space for generating high-quality images, as confirmed by the experiments below.
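A minimal PyTorch sketch of this routing scheme. The layer prototypes (whose shapes must chain correctly) are supplied by the caller, and one random route is drawn per call, whereas the paper routes each sample in a batch independently.

```python
import copy
import random
import torch
import torch.nn as nn

class RPGANGenerator(nn.Module):
    """Buckets of interchangeable blocks; a random path replaces input noise."""

    def __init__(self, layer_protos, n_blocks: int, z_dim: int):
        super().__init__()
        # One bucket per prototype layer, each holding n_blocks independent copies.
        self.buckets = nn.ModuleList(
            nn.ModuleList(copy.deepcopy(p) for _ in range(n_blocks))
            for p in layer_protos
        )
        self.z = nn.Parameter(torch.randn(1, z_dim))  # fixed, learnable input Z

    def forward(self, route=None):
        if route is None:  # the random path is the source of stochasticity
            route = [random.randrange(len(b)) for b in self.buckets]
        h = self.z
        for bucket, s in zip(self.buckets, route):
            h = bucket[s](h)
        return h
```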
Block diversity loss. To guarantee that blocks in a particular bucket are different, we also add a specific diversity term to the generator loss function. The motivation for this term is to prevent blocks B_{ki}, B_{kj} from learning the same weights. Let W be the set of all parameters of the generator. For each parameter w ∈ W there is a set of its instances {w^(1), ..., w^(m_w)} in the RPGAN model. We then enforce the instances to be different by a loss term that penalizes small pairwise differences between these instances, where each difference is normalized by the standard deviation s_w of all parameters from different blocks that correspond to the same layer. This normalization effectively guarantees that all buckets contribute to the diversity term.

Architecture. In all the experiments in this section, we use ResNet-like generators with spectral normalization and the hinge loss (SN-ResNet). The blocks in the first bucket are fully-connected layers, the blocks in the last bucket are convolutional layers, and the blocks in all other buckets are residual blocks with two convolutions and a skip connection. If not stated otherwise, all the buckets have the same number of blocks. Additional experiments with other architectures are provided in the Appendix.

Datasets. We performed experiments on the CIFAR-10, LSUN-bedroom and Anime Faces datasets. For different datasets we use different numbers of discriminator steps per generator step (d_steps) and different numbers of blocks in a bucket (n_blocks). We summarize the main parameters used for the three datasets in Table 1. In the last column we also report Coverage, the ratio of the latent space cardinality (which equals n_blocks raised to the power of the number of buckets) to the dataset size. Intuitively, large coverage guarantees that RPGAN has a sufficiently rich latent space of generator routes to capture the reference dataset. In the experiments below, we demonstrate that even moderate coverage is sufficient to generate high-fidelity images (see the LSUN-bedroom dataset with coverage ≈ 3.3).

Training details. We use the Adam optimizer with learning rate 0.25 × 10⁻³ and β_1, β_2 equal to 0.5, 0.999, and train the model for 45 × 10⁴ generator steps on CIFAR-10 and 25 × 10⁴ generator steps on the Anime Faces and LSUN-bedroom datasets. During training we also learn the unique input vector Z. We observed that a learnable Z slightly improves the final generation quality and stabilizes the learning process. During a training step, we pass Z through N independent random paths. Formally, let {x_1, ..., x_N} be a batch of samples received from a bucket B_k. To pass this batch through the bucket B_{k+1}, we take random blocks B_{(k+1)i_1}, ..., B_{(k+1)i_N} and form the new batch {B_{(k+1)i_1}(x_1), ..., B_{(k+1)i_N}(x_N)}. In all the experiments, we use the same training protocols for both RPGAN and the standard GAN of the same generator architecture. Note that despite the larger number of learnable parameters, RPGAN does not require more data or training time than standard GANs to achieve the same quality.

Figure 3: Top image: a sample generated with a fixed sequence of blocks. Horizontal lines: samples generated by the same sequence of blocks in all buckets but one unfrozen bucket. In the selected bucket we choose ten arbitrary blocks to avoid excessively large figures. The samples produced in this way allow interpreting the factors of variation captured by different buckets.

In the first series of experiments, we investigate the "responsibility areas" of different generator layers. This can be performed with a technique schematically presented in Figure 2.
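In code, this probing procedure amounts to the following hypothetical helper built on the RPGANGenerator sketch above; gen(route=...) is the assumed interface for fixing a path.

```python
import random

def probe_bucket(gen, l: int):
    # Freeze random blocks in all buckets, then vary only bucket l.
    base_route = [random.randrange(len(b)) for b in gen.buckets]
    images = []
    for s in range(len(gen.buckets[l])):
        route = list(base_route)
        route[l] = s
        images.append(gen(route=route))
    return images  # differences across images reveal what bucket l controls
```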
The goal is to interpret the role of the third bucket B_3 in a five-bucket generator. For all other buckets B_1, B_2, B_4, B_5, we fix arbitrary blocks, shown in blue in Figure 2. Then we generate images corresponding to routes that contain all the fixed blocks, with the stochasticity coming only from varying blocks inside the target bucket B_3. By inspecting the distribution of the obtained images, we can understand what factors of variation are influenced by B_3.

In Figure 3 we plot the samples generated with only one "unfrozen" bucket for one image from the CIFAR-10 dataset. Thus, each row shows how the original generated image could change if different blocks from the corresponding bucket were used. Several observations from Figure 3, Figure 9, Figure 12 and Figure 15 are listed below. The first bucket typically does not influence coloring and mostly affects small object deformations. The intermediate buckets have the largest influence on semantics. The last two buckets are mostly responsible for coloring and do not influence the content shape. In particular, in Figure 3 the fourth layer widely varies color, while the fifth acts as a general tone corrector. Note that these findings are consistent with the insights revealed by prior work.

To confirm the observations quantitatively, we perform the following experiment. We define a metric d_img that evaluates the similarity between two generated images. Different metrics are able to capture different variations (e.g., in terms of semantics, color histogram, etc.), and we describe two particular choices of d_img below. Then we choose a random route in the RPGAN generator and, for each bucket B_l, compute the average distance d_img over four images generated by varying blocks in B_l. In other words, rather than taking all possible pairs, we take four random samples from each line of the table in Figure 3. We then measure the diversity w.r.t. d_img captured by bucket B_l as the ratio D_{l→1, d_img} of the average distance for B_l to that for the first bucket. Intuitively, we compute the relative diversity with respect to the first layer, which typically captures the smallest amount of variation in our experiments. We then average these ratios over 100 independent evaluations. High values of the averaged ratio D_{l→1, d_img} imply higher diversity of the bucket B_l compared to the first bucket in terms of the metric d_img.

For d_img we experimented with the following two metrics, capturing semantic and color differences respectively. Inspired by the well-known Fréchet Inception Distance concept, we consider the Euclidean distance between the outputs of the last layer of a pretrained InceptionV3 network for semantic distance evaluation. Namely, we define d_semantic(img_1, img_2) = ||Iv3(img_1) − Iv3(img_2)||_2, where Iv3(img) denotes the InceptionV3 model activations for image img. To measure differences in color, we take the Hellinger distance between the color histograms of generated samples. Namely, for each color channel, we split the range [0, ..., 255] into 25 equal segments and evaluate the discrete distribution defined by the frequencies with which the sample's pixel intensities fall in a given segment. The Hellinger distance between two quantized color distributions p, q is then defined as d_color(p, q) = (1/√2) · √(Σ_i (√p_i − √q_i)²). We compute this metric for each color channel independently.
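A minimal NumPy sketch of the color metric, assuming 8-bit images with channels last; averaging the per-channel distances into one scalar is an added convenience, and the semantic metric would substitute InceptionV3 features for histograms.

```python
import numpy as np

def color_hist(img: np.ndarray, channel: int, bins: int = 25) -> np.ndarray:
    # Discrete distribution of pixel intensities over `bins` equal segments.
    h, _ = np.histogram(img[..., channel], bins=bins, range=(0, 255))
    return h / h.sum()

def hellinger(p: np.ndarray, q: np.ndarray) -> float:
    return float(np.sqrt(np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)) / np.sqrt(2))

def d_color(img1: np.ndarray, img2: np.ndarray) -> float:
    # Computed per color channel, then averaged for a single scalar distance.
    return float(np.mean([
        hellinger(color_hist(img1, c), color_hist(img2, c)) for c in range(3)
    ]))
```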
The average values of D_{l→1, d_img}, with their standard deviations, are shown in Figure 4. It demonstrates that the semantic diversity is largest for the intermediate layers. On the contrary, the last buckets, which are closer to the output, do not influence semantics but have a higher impact in terms of color. Note that the first layer always shows the smallest variability in terms of both semantics and colors. The last bucket seems to be responsible for color correction and color inversion and has a lower impact on palette variability. Note that the plots in Figure 4 reaffirm the findings from Figure 3. Similar empirical results also hold for other datasets (see Figures 13 and 16 in the Appendix). Overall, we summarize the main findings common to the CIFAR-10, LSUN and Anime Faces datasets as follows:

• The earlier layers have a smaller variability and seem to be responsible for the viewpoint and the position of the object in the image.

• The semantic details of the image content are mostly determined by the intermediate layers.

• The last layers typically affect only the coloring scheme and do not affect content semantics or image geometry.

Note that these findings can differ for other datasets or other generator architectures. For instance, for the four-bucket generator and MNIST (Figure 6, left) or randomly colored MNIST (Figure 8, left), the semantics are mostly determined by the first two buckets.

In this subsection, we argue that the interpretations of different layers obtained with RPGAN are also valid for a standard GAN generator of the same architecture. First, we demonstrate that a standard GAN and RPGAN trained under the same training protocol provide almost the same generation quality. As a standard evaluation measure, we use the Fréchet Inception Distance (FID). We also compute precision-recall curves. For evaluation on CIFAR-10, we use 50000 generated samples and the whole train dataset. We also take ten independently trained generators and report minimal and average FID values. See Table 2 for the FID comparison and Figure 11 in the Appendix for precision-recall. RPGAN and SN-ResNet perform with the same quality, both in terms of FID and precision-recall curves.

Table 2: FID values for CIFAR-10.

To confirm that the layers of the standard GAN generator can be interpreted in the same way as the corresponding layers of its RPGAN counterpart, we perform the following experiment. We take a standard SN-ResNet GAN, consisting of five layers associated with the corresponding buckets in RPGAN, and train it on CIFAR-10. Then, for each layer, we add normal noise to its weights. Intuitively, we expect that noise injection in a particular layer would change generated samples in terms of the characteristics influenced by this layer. For instance, noise in the last two layers is expected to harm the coloring scheme, while noise in the intermediate layers is expected to bring maximal semantic damage. Several samples produced by perturbed layers are presented in Figure 5. The images support the intuition described above and confirm that RPGAN may serve as an analysis tool for the corresponding generator model. Note, however, that injecting noise per se is not sufficient for interpretability. The perturbed generators produce poor images, which are difficult to analyze. Meanwhile, RPGAN always generates good-looking images, which allows one to identify the factors of variation corresponding to a particular layer. For instance, see Figure 8 for the colored MNIST dataset.
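A minimal sketch of the perturbation experiment, assuming the generator's layers can be selected by a parameter-name prefix.

```python
import torch

@torch.no_grad()
def perturb_layer(generator: torch.nn.Module, layer_prefix: str, sigma: float):
    # Add Gaussian noise to one layer's weights to probe its role in generation.
    for name, param in generator.named_parameters():
        if name.startswith(layer_prefix):
            param.add_(sigma * torch.randn_like(param))
```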
Of course, given the interpretations obtained via RPGAN, one can perceive similar patterns in the noisy generations, but noise injection alone is not sufficient for interpretability.

In the next experiment, we demonstrate that the RPGAN model is also a natural fit for the generative incremental learning task: a trained generator can be extended to new data by exploiting features that are common to the original and the new data distributions D_1 and D_2. To illustrate this scenario, we take a partition of the MNIST handwritten digits dataset into two subsets, MNIST_{0−6} and MNIST_{7−9}, of digits from 0 to 6 and from 7 to 9, respectively. As a generator for MNIST_{0−6}, we take a 4-bucket RPGAN model with numbers of blocks equal to 20, 20, 20, 8. Note that the last bucket is much thinner than the others, as it turns out to be responsible for variations in writing style, which does not change much across the dataset. We then train the generator on the subset MNIST_{0−6} of the first seven digits (see Figure 6, left and center). After that, we add five additional blocks to each of the first two layers, obtaining a generator with numbers of blocks equal to 25, 25, 20, 8 and pretrained weights in all blocks except the five new ones in the first and second buckets. Then we train the extended model to fit the whole of MNIST by optimizing only the ten new blocks (see Figure 6, right).

As a surprising side effect of our model, we discovered that decent generation quality can be achieved by the RPGAN generator with no nonlinearities; i.e., one can train an RPGAN generator with all blocks consisting of linear transformations only. To demonstrate this, we take an RPGAN with the same ResNet-like generator architecture as in the experiments above. Then we replace all nonlinearities in the generator model by identity operations and train it on the CIFAR-10 dataset. The model demonstrates an FID equal to 22.79, which is competitive with state-of-the-art generative models of comparable sizes (see Figure 23 for examples of generated images). Note that this approach fails for a standard GAN generator that maps a Gaussian distribution to an image distribution. Indeed, such a generator would be a linear operator from the latent space to the image space, producing a Gaussian distribution in the image domain.

This purely linear generator architecture allows us to significantly speed up the image generation process for fully-connected layers. We group consecutive buckets of fully-connected layers to form a new bucket. Blocks in the new bucket are linear transformations that are products of blocks from the original buckets. To demonstrate this, we train a fully-connected generator network on the MNIST dataset (see Table 3 in the Appendix, left column). We then join the last three buckets into a single one. We form the new bucket from blocks defined as the linear operators B_{5k} ∘ B_{4j} ∘ B_{3i}, where i, j, k are random indices of blocks from the buckets B_3, B_4, B_5 of the original generator. Thus, instead of performing three multiplications of the feature vector from the second layer by matrices of shapes 256 × 512, 512 × 1024 and 1024 × 784, we perform a single multiplication by a 256 × 784 matrix. In our experiments, we achieved a 2.2× speedup. Note, however, that after the compression, the latent space cardinality can decrease if only a small subset of tuples (i, j, k) is used to populate the new bucket. Nevertheless, as random products of the joined buckets are used, we expect the generated images to be uniformly distributed in the space of images produced by the uncompressed generator (see Figure 24 for a comparison of samples).
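A minimal sketch of the bucket-merging trick for the purely linear generator; the blocks are assumed to be stored as weight matrices applied on the right (x @ W), with the shapes from the example above.

```python
import random

def merge_linear_buckets(b3, b4, b5, n_merged: int):
    # Precompute random compositions B5 o B4 o B3 as single matrices:
    # (256x512) @ (512x1024) @ (1024x784) -> (256x784), so three matmuls
    # at generation time collapse into one.
    merged = []
    for _ in range(n_merged):
        W3, W4, W5 = random.choice(b3), random.choice(b4), random.choice(b5)
        merged.append(W3 @ W4 @ W5)
    return merged
```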
In this paper, we address the interpretability of generative models. In particular, we have introduced RPGAN, an alternative design of generative adversarial networks that allows natural interpretation of different generator layers via random routing as a source of stochasticity. With experiments on several datasets, we provide evidence that different layers are responsible for different factors of variation in generated images, which is consistent with findings from previous work. As a possible direction of future research, one can use the RPGAN analysis to construct efficient models, e.g., via identification of redundant parts of the generator for pruning or inference speedup.

If the number of blocks is too low, the resulting latent space appears to have insufficient cardinality to cover the dataset. On the other hand, a too high number of blocks results in a difficult training procedure and also fails.

To generate LSUN-bedroom-like images, we use the 7-bucket RPGAN with a ResNet-like generator with five residual blocks, a first fully-connected layer, and a last convolutional layer. Similarly to the CIFAR-10 experiments, during generation we freeze a random path and vary blocks in a single bucket to investigate its responsibility. See Figure 12 for block variations and Figure 13 for the bucket responsibility analysis. Note that, similarly to CIFAR-10, the central buckets have the maximal semantic impact. The last two buckets are mostly responsible for coloring. The first two buckets are responsible for local geometrical features. Note that here we face mode collapse for the third bucket: mainly, it affects only tiny local features. See Figure 14 for samples generated by the model.

Figure 8: Left: images produced by varying blocks in a particular bucket of RPGAN. Right: images produced by the standard GAN after parameter perturbation in a particular generator layer, with low and high normal noise variance.

A.3 ANIME FACES DATASET

Though this dataset is not standard for GANs, we use it in the experiments as it nicely reveals the RPGAN analysis tools. Here we use the 6-bucket RPGAN with a ResNet-like generator with four residual blocks, a first fully-connected layer, and a last convolutional layer. See Figure 15 for block variations and Figure 16 for the bucket responsibility analysis. Again, the content semantics are mostly defined by the intermediate buckets. The last two buckets are mostly responsible for coloring: the fifth bucket has the maximal impact on coloring, and the last bucket varies tones. The first buckets are responsible for small details (one can note the hair on the character's forehead). See Figure 17 for samples generated by the model.

Here we show that the concept of RPGAN works well with different generator architectures and learning strategies. We present plots for DCGAN-like generators consisting of consecutive convolutional layers without skip connections. All the models were trained with the same parameters as described in Section 4. Instead of spectral normalization, we train these models as WGANs with weight penalty. In Figure 19 we show plots for a four-bucket generator trained on CIFAR-10. We also train a four-bucket generator on colored MNIST; see Figure 8, left. Finally, we show plots for the five-bucket generator and the CelebA-64x64 dataset in Figure 21. See Figure 18, Figure 21 and Figure 22 for the bucket analysis.
In this section we show that injecting noise into the generator weights cannot be used as a stand-alone interpretability method. Namely, we compare images produced by RPGAN and by noise injection for models trained on randomly colored MNIST samples. We train both the RPGAN and standard generators as a Wasserstein GAN with weight penalty.
We introduce an alternative GAN design based on random routes in the generator, which can serve as a tool for generative model interpretability.
438
scitldr
Deep artificial neural networks can achieve an extremely small difference between training and test accuracies on identically distributed training and test sets, which is a standard measure of generalization. However, the training and test sets may not be sufficiently representative of the empirical sample set, which consists of real-world input samples. When samples are drawn from an underrepresented or unrepresented subset during inference, the gap between the training and inference accuracies can be significant. To address this problem, we first reformulate a classification algorithm as a procedure for searching for a source code that maps input features to classes. We then derive a necessary and sufficient condition for generalization using a universal cognitive similarity metric, namely information distance, based on Kolmogorov complexity. Using this condition, we formulate an optimization problem to learn a more general classification function. To achieve this end, we extend the input features by concatenating encodings of them, and then train the classifier on the extended features. As an illustration of this idea, we focus on image classification, where we use channel codes on the input features as a systematic way to improve the degree to which the training and test sets are representative of the empirical sample set. To showcase our theoretical findings, considering that corrupted or perturbed input features belong to the empirical sample set but typically not to the training and test sets, we demonstrate through extensive systematic experiments that, as a result of learning a more general classification function, a model trained on encoded input features is significantly more robust to common corruptions, e.g., Gaussian and shot noise, as well as to adversarial perturbations, e.g., those found via projected gradient descent, than the model trained on uncoded input features.
For a classification task, we assume that there exists a true classification function. Given training and test sets, neither of which is sufficiently representative of the empirical sample set from which input samples are drawn during inference, a learning algorithm is asked to find the true classification function. In this work, we study how well the learned classification function generalizes with respect to the true classification function. In other words, we study the problem of how to minimize the generalization error, which we define as the difference between the training error and the inference error measured on the empirical sample set, as opposed to the difference between the training error and the test error. We use robustness to common corruptions and robustness to adversarial perturbations to measure how well a learned classification function generalizes on the empirical sample set, which contains corrupted or perturbed samples.

Universal cognitive similarity metric. In order to find a necessary and sufficient condition for generalization in deep learning, we use the normalized information distance. A key finding in algorithmic information theory is that the normalized information distance is a universal cognitive similarity metric: the normalized information distance between two objects minorizes any other admissible distance up to an additive logarithmic term. In other words, although different learning algorithms will pick up on different dominating input features, depending on the classification task that they perform, every such dominating feature will be detected by the normalized information distance.

Classification function as a source code. We formulate a learning algorithm as a procedure for searching for a source code based on training examples. We show that the learned classification function is a lossy compressor: the classifier discards some information, so the input features cannot be recovered from the class label. We use the normalized information distance between the true source code (true classification function) and the learned source code (learned classification function) to find a necessary and sufficient condition ensuring generalization, and then formulate the problem of learning a more general classification function as an optimization problem.

Compression-based similarity metric. The normalized information distance provides the theoretical tools needed to learn more general source codes, but in practice the normalized information distance is not effectively computable. We therefore use a compression-based similarity metric (Cilibrasi & Vitányi, 2005) based on a real-world compressor to approximate this theoretical construct. Specifically, we use the normalized compression distance between the true source code and the learned source code to derive an effectively computable condition on the compressed size of the learned source code, in order to identify encodings of the input features that help to learn a more general source code.
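The normalized compression distance can be instantiated with any real-world compressor; the following minimal sketch uses zlib, a compressor choice that is an assumption rather than the paper's.

```python
import zlib

def ncd(a: bytes, b: bytes) -> float:
    # Normalized compression distance: a computable stand-in for the
    # (uncomputable) normalized information distance.
    c_a = len(zlib.compress(a))
    c_b = len(zlib.compress(b))
    c_ab = len(zlib.compress(a + b))
    return (c_ab - min(c_a, c_b)) / max(c_a, c_b)
```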
Encoding input features. In a typical communication system, a source code is followed by a channel code, which is then followed by a physical channel. In this paper, the learned source code (learned classification function) is preceded by one or more input codes that help ensure the learned classifier is more general, by generating relations between input features that are not captured by the set of available input features. In order to showcase our findings for a specific classification task, we use channel codes on the input features for CIFAR-10 image classification. Precisely, we use a four-dimensional (4-D) five-level pulse-amplitude modulation (5-PAM) trellis-coded modulation (TCM) scheme to systematically generate multiple encodings of the set of available input features. In doing so, we enable the deep neural network (DNN) to learn information from the empirical sample set which it could not learn from the uncoded input features alone. The generalization error is thereby reduced.

The impact of generalization. Through image classification experiments, we show that a model trained on arbitrarily encoded input features is significantly more robust to common corruptions, such as Gaussian noise and shot noise, and to adversarial perturbations, such as those generated via projected gradient descent (PGD), than a model trained on uncoded input features.

The role of code design. The code used on the input features can be designed in various ways for a classification task, and designing input codes is an important step towards learning a more general classification function from the set of available input features. We show that merely increasing the number of input channels of a DNN does not confer any robustness to Gaussian noise or to PGD. How to design efficient input codes to build encoded DNNs is an intriguing research direction for achieving generalization in deep learning.

The literature on generalization is largely concerned with minimizing the generalization error defined as the difference between training and test errors measured on identically distributed training and test sets. Minimizing this form of generalization error does not address the problem of generalizing to input samples drawn from an empirical sample set of which the training and test sets are not sufficiently representative, as we do herein.

In this subsection, we compare our work with domain-generalization, domain-adaptation, and data-augmentation techniques to highlight their differences. There is a substantial body of literature on domain generalization, which aims to better generalize to unknown domains by training on samples drawn from different domains, not a single source, a limitation that our work does not have. In this work, there is no need to draw training samples from a different domain. We show that encoding the given training set enables a DNN to learn different relations between features that it could not learn from the uncoded training set alone. There has also been much work on domain adaptation (e.g., Daumé) that addresses the problem of generalization to a priori fixed target domains, which is a different approach from ours because these algorithms need to access samples from the target distributions during an adaptation phase. Importantly, our approach does not require accessing new samples during an adaptation phase in order to achieve generalization to the empirical sample set. Similar to the domain adaptation work, there has been some work on adversarial training, which aims to achieve robustness to adversarial perturbations by using training samples perturbed by a specific adversarial-perturbation method. Adversarial training can be computationally costly because it requires generating adversarially perturbed training samples in each epoch of training, unlike in our work, where input encodings need to be generated only once before training. Furthermore, as there are numerous adversarial-perturbation methods, an adversarially trained DNN does not necessarily generalize well to samples subjected to an adversarial-perturbation method that was not used for training.
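In contrast to these approaches, our encodings are generated once before training. A minimal sketch of the feature-extension step follows, with placeholder encoders standing in for the 4-D 5-PAM TCM encodings (a full TCM implementation is beyond a short example).

```python
import torch

def extend_features(x: torch.Tensor, encoders) -> torch.Tensor:
    # Concatenate systematically generated encodings of the input features
    # as extra channels; the classifier is then trained on the extended tensor.
    return torch.cat([x] + [enc(x) for enc in encoders], dim=1)

# usage sketch: x has shape (B, 3, 32, 32); with two channel-preserving
# encoders the classifier sees (B, 9, 32, 32), so its first convolutional
# layer takes 9 input channels.
```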
Furthermore, as there are numerous adversarial-perturbation methods, an adversarially trained DNN does not necessarily generalize well to samples subjected to an adversarial-perturbation method that was not used for training. There is also a substantial body of work on data-augmentation techniques, which perform simple label-preserving transformations of the training samples to provide a DNN with additional data points to learn from. In this work, we do not generate new samples to increase the diversity of the training set; instead, we take a theoretically grounded approach to extend the input features with their encodings in order to enable a DNN to learn a sufficiently complex classification function from the set of available input samples. Our goal is to minimize the generalization error (defined in Appendix B) for a classification task, defined as the difference between training error and inference error, given a training set and a test set, neither of which is sufficiently representative of the empirical sample set from which input samples are drawn at inference time. To accomplish this goal, we derive a necessary and sufficient condition under which a classifier will generalize well, and, based on that condition, cast the search for a classifier with good generalization (defined in Appendix B) as an optimization problem. Our approach requires that we describe and compute the absolute information content of any object, e.g., a computer program, function, or set, in order to determine which of a pair of learned classification functions contains more information of the true classification function. The appropriate tool here is a concept in algorithmic information theory: Kolmogorov complexity (defined in Appendix B). Defining the amount of information in individual objects in terms of their Kolmogorov complexity has the advantage that it refers to these objects in isolation, not as outcomes of a known random source. In contrast, quantifying the amount of information in individual objects based on their Shannon entropy requires that these objects be treated as members of a set of objects with an associated probability distribution. This understanding is fundamental to our study because applying Shannon entropy to "an estimate of the quantity of information contained in a novel or in the translation of a novel into another language relative to the original" would not be clear. As a DNN may be employed to learn a classification function from a set of features contained in such an object as, for example, a document, image, video, or sound, we study the Kolmogorov complexity of the set of input features, model, and outputs of the DNN. In our quest to find a condition ensuring our running definition of generalization, we require a distance function that measures how similar two objects are in any aspect so we can decide which of two learned classification functions is closer to the true classification function. The closer a learned classification function is to the true classification function, the better its generalization error. This distance function should satisfy the metric (in)equalities in order for it to have a meaning in the context of generalization. For example, this distance function would have to be symmetric; i.e., the distance from object a to object b must be equal to that from object b to object a.
The normalized information distance between objects a and b, defined as
$$D_I(a,b) = \frac{\max\{K(a \mid b),\, K(b \mid a)\}}{\max\{K(a),\, K(b)\}}, \qquad (2)$$
where $K(a)$ denotes the Kolmogorov complexity of object a and $K(a \mid b)$ denotes the Kolmogorov complexity of object a given b, satisfies the metric (in)equalities and is also a universal cognitive similarity metric because $D_I(a, b)$ minorizes all other normalized admissible distances up to a negligible additive error term. This means that all effective similarities between a pair of objects are discovered by the normalized information distance; i.e., two objects that are close according to some effective similarity are also close according to the normalized information distance. The main intuition behind normalizing the information distance $\max\{K(a \mid b), K(b \mid a)\}$ is that two larger objects that differ by a small amount are closer than two smaller objects that differ by the same amount: the absolute difference between two objects does not measure similarity as such, but the relative difference does (Cilibrasi & Vitányi, 2005). A successful DNN distills information useful for its classification task T from its input features x. In doing so, the DNN has to learn a classification function f from the set $X^n$ of its input features to an m-ary alphabet A of classes u in such a way that some information in its input features is given less weight in determining its relevance to the class decision $\hat{u}$, and then entirely discarded by the arg max operation. A deep learning classifier is thus acting as a source code C (defined in Appendix B). Proofs of the following mathematical statements are given in Appendix A. Lemma 1. For a classification task T wherein each n-dimensional input sample x is mapped to a class u drawn from an m-ary signal alphabet A, the true output function $f(\cdot)$ of a learning algorithm is a source code C for a multivariate random variable X. Lemma 1 reformulates a learning algorithm as a procedure for searching for a source code C for a multivariate random variable X, which compresses the values that this random variable takes, namely the input samples x. When a DNN generalizes well with respect to the true classification function $f(\cdot)$, it is able to decide which information in its input features is more relevant to making a particular class decision. A DNN is a lossy compressor when the absolute information content of any of its input samples x is larger than that of the class u to which it is mapped. Corollary 1. The true source code $C = f(\cdot)$ of a learning algorithm used for the classification task T is a lossy compressor when the Kolmogorov complexity $K(x)$ of one of its input samples is larger than the number of bits required to represent the corresponding class u. Corollary 1 formalizes a deep learning classifier as a lossy compressor, so the source code C that corresponds to the true output function $f(\cdot)$ is not uniquely decodable; i.e., its input samples x cannot be recovered from the class u to which they are mapped. A DNN can be trained to learn a source code that generalizes well with respect to the true source code, but first we will analyze the similarity between these two source codes by using the normalized information distance. Source codes are designed for the most efficient representation of data. Whether it is designed for a data-transmission or a data-storage system, a source code, whether lossless or lossy, should retain information about the data necessary to accomplish a given task. The same consideration applies to a learning system.
The information in the input features of a learning system is represented by the classification function that it learns; thus, a neural network can be viewed as a source code that encodes input features for its classification task. The reformulation of a learning algorithm as a procedure for searching for a source code allows us to exploit theoretical results from algorithmic information theory and coding theory for deep learning, thereby avoiding the necessity to reinvent theory that is already established in these fields. Given that source codes are designed for the most efficient representation of data, we will exploit the duality of a source code and a channel code to learn a classification function that represents the input features more efficiently for the classification task T; i.e., a more general classification function. Showing that a deep learning classifier is a non-uniquely decodable source code is also fundamental to understanding that the normalized information distance between the input features and the output cannot be used to derive a condition for generalization in deep learning. This follows from the fact that deriving such a condition would require finding the conditional Kolmogorov complexity $K(x \mid y)$ of the input features with respect to the output, which is impossible because the source code is not uniquely decodable; i.e., the program to go from the output to the input features cannot be found. A necessary and sufficient condition for generalization based on the normalized information distance can hence be found only between a learned source code and the true source code. The normalized information distance between the true source code C and a learned source code $\tilde{C}$ reveals how general $\tilde{C}$ is with respect to C. A necessary and sufficient condition ensuring that learned source code $\tilde{C}_0$ is more general than learned source code $\tilde{C}_1$ with respect to the true source code C is
$$D_I(C, \tilde{C}_0) < D_I(C, \tilde{C}_1). \qquad (3)$$
Equation 3 is a direct result of using the normalized information distance as a universal cognitive similarity metric to determine whether learned source code $\tilde{C}_0$ or $\tilde{C}_1$ is more general with respect to the true source code C. Because the normalized information distance is a metric that uncovers all effective similarities between the true source code and a learned source code, learning a source code that is closer to the true source code C under this metric ensures achieving generalization. The normalized information distance $D_I(C, \tilde{C})$ between the true source code C and the learned source code $\tilde{C}$ must thus be minimized in order to minimize the generalization error. Theorem 1. When a learning algorithm used for the classification task T finds a suboptimal source code $\tilde{C}$ instead of the true source code C, the optimization problem for the generalization of $\tilde{C}$ is
$$\min_{\tilde{C}} D_I(C, \tilde{C}) = \min_{\tilde{C}} \max\{K(C \mid \tilde{C}),\, K(\tilde{C} \mid C)\}.$$
Theorem 1 formulates the optimization objective for generalization as the minimization of $D_I(C, \tilde{C})$ and states that to achieve generalization we should make the learned function sufficiently complex for the classification task T. Theorem 1 states that the Kolmogorov complexity $K(C \mid \tilde{C})$ of the program that computes how to go from the learned source code $\tilde{C}$ to the true source code C, or the Kolmogorov complexity $K(\tilde{C} \mid C)$ of the program that computes how to go from the true source code C to the learned source code $\tilde{C}$, whichever is larger, must be minimized in order to minimize the generalization error. Thus, the goal is to increase the complexity of the learned source code $\tilde{C}$, but not beyond the complexity of the true source code C.
Therefore, Occam's first razor still holds: simpler classifiers generalize better than complex ones. However, a classifier that does not perform well on its empirical sample set $X^n$ is too simple for its classification task. Ideally, the learning algorithm would learn the true source code C, achieving the best possible performance metrics determined by its classification task T. In practice, because the learning algorithm will see only a small subset $X^n_S$ of the possible inputs at training time, the learned source code $\tilde{C}$ will be a partial function of the true source code C at perfect training accuracy (that is, when the classifier has sufficient capacity to memorize the training samples). Whether a model is over-fit or under-fit is conventionally determined on a cross-validation set and/or test set that are/is identically distributed with the training set, all of which are subsets of the empirical sample set. Being more general on such a cross-validation set and/or test set does not as such guarantee generalization on the empirical sample set $X^n$ because the latter may contain corrupted or perturbed samples and/or there may be samples in the empirical sample set that are out of distribution of the cross-validation set and test set. Therefore, whether a model is over-fit or under-fit does not have a consequence for Theorem 1. Next, we target learning a source code that is more general on the empirical sample set $X^n$, not only on a cross-validation set and/or test set. In this work, we increase the complexity of the learned source code $\tilde{C}$ by generating I encodings $E_0, E_1, \ldots, E_{I-1}$ of the available input features $x_S$ that capture relations between the features which are not learned well from the original features, and then append these encodings to the original features. Note that the available input features are denoted by $x_S$ and are drawn from the set $X^n_S$ of available features; i.e., $x_S \in X^n_S$, which is a subset of the empirical sample set $X^n$. By providing a different view of the relations between the features, the encodings $E_i$ help the learning algorithm to learn a more complex source code $\tilde{C}_E$ whose normalized information distance $D_I(C, \tilde{C}_E)$ to the true source code C is less than $D_I(C, \tilde{C})$. This results in learning a more general source code. Theorem 2. For classification task T, a more general suboptimal code $\tilde{C}_E$ is learned from the concatenation $\{x_S, E_i(x_S)\}$, where $D_I(C, \tilde{C}_E) < D_I(C, \tilde{C})$. The effective capacity of several successful DNN architectures is sufficiently large to memorize the set $X^n_S$ of available input samples. Any encoding $E_i \colon X^n_S \to Y^n_S$ whose image is the set of available encoded samples, such that $Y^n_S \subseteq X^n_S$, when concatenated with the uncoded input samples $x_S$, thus increases the Kolmogorov complexity of the learned source code, which is now called $\tilde{C}_E$. The task of the source code is to find the most efficient representation of its input data. In a typical communication system, the source code compresses the input, then a channel code adds redundancy to guard against noise in the channel, then the encoded information is transmitted over the physical channel. The design goal for the source and channel codes is to achieve the channel capacity (maximum mutual information between the channel input and output).
In contrast, Theorem 2 considers a learning system in which an input code is followed by a learned source code, the classification function, and the design goal is for the composition of the input and source codes to generalize as well as possible (see Figure 1). In other words, in a learning system the "physical channel" precedes the source code, and it can be seen as a process whereby the empirical sample set $X^n$ is reduced to the set $X^n_S$ of available input samples and/or whereby common corruptions, such as Gaussian noise, and adversarial perturbations, such as those generated by PGD, are applied to the set of available input samples. Because the "physical channel" comes first in a learning system, there is no access to the set of information bits. Only a subset of these information bits can be accessed, which may have been subjected to common corruptions or adversarial perturbations. It is therefore crucial for a learning algorithm to compress its features while retaining information useful for its classification task. One way to accomplish this is to extend the input features with encodings that capture relations between features that are useful for classification and not captured well by the original set of input features. The classification task T does not change when input features are extended by their encodings because it is defined by the mapping between the input features and the output, which remains the same because the only input to the encoded model is the uncoded input features (see Figure 1). The encoder is simply a new layer in the encoded model, which is designed from an encoder and an uncoded model. The normalized information distance is based on the notion of Kolmogorov complexity, which is not a partial recursive function; i.e., it is not effectively computable. While we can use the normalized information distance to analyze whether a source code $\tilde{C}_E$ learned from the concatenation $\{x_S, E_i(x_S)\}$ of the encoded input samples $E_i(x_S)$ with the uncoded input samples $x_S$ is more general with respect to the true source code C, in practice we may need to approximate the normalized information distance with the normalized compression distance, so we can determine which of any pair of source codes is more general with respect to the true source code C. Based on a real-world compressor, the normalized compression distance (Cilibrasi & Vitányi, 2005)
$$D_Z(C, \tilde{C}_E) = \frac{Z(\{C, \tilde{C}_E\}) - \min\{Z(C),\, Z(\tilde{C}_E)\}}{\max\{Z(C),\, Z(\tilde{C}_E)\}} \qquad (4)$$
approximates the normalized information distance $D_I(C, \tilde{C}_E)$, where Z is a real-world compressor and $Z(\{C, \tilde{C}_E\})$ denotes the compressed size of the concatenation of the two source codes. Thus, the generalization condition and the minimization of $D_I(C, \tilde{C}_E)$ can be cast in effectively computable forms. Note that neither Equation 2 nor Equation 4 is a training criterion, but they specify the normalized information distance and the normalized compression distance between the true source code C and a learned source code, respectively. They are used to derive theoretical results, particularly the use of input codes to achieve generalization as illustrated by experiments in Section 3. Proposition 1 states for classification task T that the compressed size $Z(\tilde{C}_E)$ of the source code $\tilde{C}_E$ learned from the concatenation $\{x_S, E_i(x_S)\}$ of the encoded input samples $E_i(x_S)$ and the uncoded input samples $x_S$ is larger than the compressed size $Z(\tilde{C})$ of the source code $\tilde{C}$ learned from the uncoded input samples $x_S$ alone.
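Because the normalized compression distance needs only a real-world compressor, it is easy to compute in practice. Below is a minimal sketch using zlib as the compressor Z; the helper names are our own, and the byte strings standing in for serialized source codes are purely illustrative.

```python
import zlib

def z(data: bytes) -> int:
    """Compressed size Z(.) under a real-world compressor (here zlib)."""
    return len(zlib.compress(data, level=9))

def ncd(a: bytes, b: bytes) -> float:
    """Normalized compression distance between two byte strings (Eq. 4)."""
    za, zb, zab = z(a), z(b), z(a + b)
    return (zab - min(za, zb)) / max(za, zb)

if __name__ == "__main__":
    # Illustrative stand-ins for serialized source codes (classifiers).
    c_true = b"class boundary: complex rule set " * 40
    c_simple = b"class boundary: simple rule " * 10
    c_encoded = b"class boundary: complex rule set " * 30
    # The learned code with the smaller NCD to the true code is the more
    # general one under the condition of Equation 3.
    print(ncd(c_true, c_simple), ncd(c_true, c_encoded))
```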
Proposition 2. When a learning algorithm used for classification task T finds a suboptimal source code $\tilde{C}_E$ instead of the true source code C, the effectively computable optimization problem for the generalization of $\tilde{C}_E$ is $\min_{\tilde{C}_E} D_Z(C, \tilde{C}_E)$. Proposition 2 shows that the compressed size $Z(\tilde{C}_E)$ of the source code $\tilde{C}_E$ learned from the concatenation $\{x_S, E_i(x_S)\}$ of the encoded input samples $E_i(x_S)$ and the uncoded input samples $x_S$ must be maximized until it reaches the compressed size $Z(C)$ of the true source code C to learn the most general source code with respect to the true source code C for the classification task T. This statement is a consequence of the fact that $\tilde{C}_E$ is a partial function of C at perfect training accuracy. In other words, the source code $\tilde{C}_E$ learned from the concatenation $\{x_S, E_i(x_S)\}$ of the encoded input samples $E_i(x_S)$ and the uncoded input samples $x_S$ can be made more general if the encoded input samples $E_i(x_S)$ bear information of relations between input features that are not represented by its input samples. A channel encoder generates encodings from its input features that enable a classifier to learn relations between these features not captured by the set of available input samples. Concatenated together, these features are then input to a model to produce a class decision. For example, we use a 4-D 5-PAM TCM scheme as a systematic way to generate multiple encodings of input features. As shown in Figure 2, the channel encoder flattens the input features by grouping them into 2 × 2 patches of features and, starting from the upper-left feature and ending at the lower-left feature, ordering them in a sequence going in the clockwise direction. The features are traversed twice in order to avoid the initialization length of the channel code. This particular scheme is used because it focuses on local relations between features. Exploration of other flattening schemes is left for future research. The features in the CIFAR-10 dataset are represented by eight bits. The flattened features are fed to the convolutional encoder, which produces one extra bit out of the two least significant bits of the eight bits representing each feature. The 4-D 5-PAM TCM symbol mapper then maps each nine bits into four equidistant 5-PAM symbols, which are then mapped to 12 bits by the bit mapper. The bit mapper uses different symbol-to-bit mappings to generate different encodings of the input features, and the matrix used for generating these encodings is given in Appendix C.7. Each encoding has the same size as the original input samples. Figure 5 in Appendix C.1 shows three CIFAR-10 images and four of their encodings, which are arbitrarily chosen. As seen in this figure, each encoding conveys a different view of the input features, which helps the source code (learned classification function) model relations between the features that are useful for the image classification task. Note that using channel codes on the input features is not a data-augmentation technique: the encodings are appended to the input features, not treated as new input samples. These encodings enable the classifier to learn from the set of available input samples a source code that is sufficiently complex for its classification task. As in a data-transmission or data-storage system, the source code is designed for the most efficient representation of the data, which is the set of available input features for the classification task at hand, and the channel code is independently designed for the channel. This combination is key to achieving generalization in deep learning, and how best to design a channel code for a given classification task is an intriguing future research direction.
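The patch-based flattening step can be sketched as follows. This is one reading of the traversal described above (within each 2 × 2 patch, clockwise from the upper-left feature, with the whole sequence traversed twice), assuming even spatial dimensions; it is not the authors' reference implementation.

```python
import numpy as np

def flatten_clockwise(img: np.ndarray) -> np.ndarray:
    """Flatten an HxW feature map into a 1-D sequence by visiting 2x2
    patches and, within each patch, reading the four features clockwise
    starting from the upper-left feature."""
    h, w = img.shape  # assumed even
    seq = []
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            patch = img[i:i + 2, j:j + 2]
            # clockwise: upper-left, upper-right, lower-right, lower-left
            seq.extend([patch[0, 0], patch[0, 1], patch[1, 1], patch[1, 0]])
    seq = np.array(seq)
    # Traverse the sequence twice to sidestep the code's initialization length.
    return np.concatenate([seq, seq])

if __name__ == "__main__":
    x = np.arange(16).reshape(4, 4)
    print(flatten_clockwise(x)[:8])  # first 2x2 patch, read clockwise
```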
Let the set of available input samples subjected to common corruptions and adversarial perturbations belong to the empirical sample space from which input samples are drawn during inference. To show that using channel codes on the input features results in learning a more general source code with respect to the true source code, we conduct experiments on the CIFAR-10 and CIFAR-10-C datasets demonstrating increased robustness to common corruptions and adversarial perturbations. For CIFAR-10 and CIFAR-10-C, we train uncoded VGG-11 and VGG-16 models, encoded VGG-11 and VGG-16 models, and an uncoded ResNet-18 model. The VGG networks are modified only by adding the encoder and increasing the number of input channels. The encoded models use the same training criterion as the uncoded models, namely the cross-entropy loss. The training setup and the achieved test accuracies are given in Appendix C.2. In all experiments conducted on the encoded models, we use arbitrary encodings. The input samples are corrupted or perturbed before they are input to the encoded models, as the uncorrupted or unperturbed input samples would not be accessible by a neural network in a real-world application. Increasing the number of encodings may reduce the generalization error, but at the expense of increased run time. However, encoding the training and test samples is a one-time process that can be done prior to training, unlike adversarial training, which requires generating perturbed input samples in each epoch. In Appendix C.6, we show that increasing the number of input channels does not, as such, confer robustness to Gaussian noise or to PGD. Designing efficient input codes for a given classification task considering the generalization error and the required number of encodings is a direction for future research. To the best of the authors' knowledge, there is no other published method that can achieve robustness to both common corruptions and adversarial perturbations. The set of available input samples may be subjected to common corruptions before reaching a real-world image classifier. For example, Gaussian noise can appear in low-lighting conditions, and shot noise is caused by the discrete nature of light. To show robustness to such corruptions, we conduct experiments on the CIFAR-10-C and CIFAR-10 datasets. We use four common corruptions in our experiments, namely Gaussian noise, shot noise, impulse noise, and speckle noise. The CIFAR-10-C dataset consists of the 10,000-sample CIFAR-10 test set subjected to five different noise levels, called severity, so it has 50,000 samples in all. As shown in Figure 3, increasing the number of arbitrary encodings concatenated to the original input features increases robustness to Gaussian noise, shot noise, and impulse noise. The results for speckle noise are given in Appendix C.3. For example, when test samples are subjected to impulse noise with a severity level of 4, we see a sharper increase in the number of test errors for the uncoded VGG-11 model than for the VGG-11 model with 32 encodings. Note that the vertical axis in these plots is cumulative: the number of test errors made at the previous severity level is added to that at the current severity level.
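A minimal sketch of the encoded-model arrangement described above: the precomputed encodings are appended to the uncoded input as extra channels, and the backbone is modified only in its first convolution. Here `make_encodings` is a hypothetical stand-in for the TCM channel encoder (the roll-based "encodings" are placeholders, not the actual channel code), and the training criterion remains the usual cross-entropy loss.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg11

def make_encodings(x: torch.Tensor, num_encodings: int) -> torch.Tensor:
    """Hypothetical stand-in for the channel encoder: returns image-sized
    encodings of x, mapping (B, 3, H, W) -> (B, 3 * num_encodings, H, W)."""
    return torch.cat([x.roll(shifts=i + 1, dims=3) for i in range(num_encodings)], dim=1)

class EncodedModel(nn.Module):
    def __init__(self, num_encodings: int = 4, num_classes: int = 10):
        super().__init__()
        self.num_encodings = num_encodings
        self.backbone = vgg11(num_classes=num_classes)
        # Widen the first convolution to accept uncoded + encoded channels.
        in_ch = 3 * (1 + num_encodings)
        old = self.backbone.features[0]
        self.backbone.features[0] = nn.Conv2d(in_ch, old.out_channels,
                                              kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The encoder acts as a fixed first layer: the only input to the
        # encoded model is the uncoded input features.
        enc = make_encodings(x, self.num_encodings)
        return self.backbone(torch.cat([x, enc], dim=1))

model = EncodedModel(num_encodings=4)
logits = model(torch.randn(2, 3, 32, 32))  # trained with cross-entropy as usual
```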
Table 1 in Appendix C.4 compares the encoded VGG-11 model with 32 encodings with previously published methods on the CIFAR-10-C dataset, and shows that the encoded VGG-11 model achieves the highest inference accuracy (defined in Appendix B) against shot noise with a severity level of 5 compared with all the other works listed in the table. Additional experimental results on Gaussian noise are shown in Figure 6 in Appendix C.3. To show robustness to adversarial perturbations without adversarial training, we conduct experiments on the CIFAR-10 dataset. We use the white-box PGD attack and transfer attacks from an uncoded VGG-16 and an uncoded ResNet-18 model to evaluate the adversarial robustness of the encoded VGG-16 models. The results for the black-box boundary attack are given in Appendix C.3. The white-box PGD attacks use the gradient of the loss function with respect to the uncoded input features in the encoded VGG-16 models because the channel encoder is part of the encoded VGG-16 models; i.e., the only input to the encoded model is the uncoded input features. The encoder is simply a new layer of the neural network architecture, whose outputs are computed directly from the uncoded input features. Changing the outputs of the encoder layer is tantamount to changing the outputs of any other layer of the model, which is a threat model that falls outside the scope of our work. For the CIFAR-10 experiments, we use different numbers of encodings, and robustness to all adversarial perturbations in our experiments systematically increased with an increasing number of arbitrary encodings concatenated to the input features. Figure 4 shows the results for the white-box PGD and transfer PGD attacks. The plot on the left shows the increase in robustness to white-box PGD starting from a random perturbation around the natural example and using 20 iterations and a step size of 0.003. The difficult problem of achieving adversarial robustness in a real-world application requires a holistic approach. Our approach, which does not depend on adversarial training, can be readily used in combination with adversarial training and other known methods to achieve greater robustness to adversarial perturbations. We presented a theoretical and experimental framework for defining and understanding generalization in deep learning, defined as the difference between training and inference errors. The theoretical findings and experimental results show that a learned classification function must be sufficiently complex for a classification task in order to be closer to the true classification function. Another insight from this study is that concatenating encodings of input features to the original input features helps to achieve generalization in deep learning by enabling the classifier to learn relations between features not captured by the original inputs. Experiments demonstrate that a model trained on arbitrarily encoded input features is more robust to common corruptions and adversarial perturbations and that using more encodings may be beneficial to minimize the generalization error. Designing input codes to help a DNN learn a more general classification function with a minimum number of encodings is an intriguing research direction to achieve reliability in machine learning.
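For concreteness, the white-box attack setting discussed above (gradients taken with respect to the uncoded inputs, which pass through the encoder as the model's first layer) can be sketched as follows; the 20 iterations and 0.003 step size match the values quoted in the experiments, while the l-infinity budget `eps` is an assumed value.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, step=0.003, iters=20):
    """White-box PGD on the uncoded inputs; gradients flow through the
    encoder layer because it is part of the encoded model."""
    # Start from a random perturbation around the natural example.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Ascend the loss and project back onto the l-infinity ball around x.
        x_adv = x_adv.detach() + step * grad.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1).detach()
    return x_adv
```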
A PROOFS Proof of Lemma 1. For classification task T, a learning algorithm is asked to produce the true output function $f(\cdot) \colon X^n \to A$. There exists a source code C for a random variable X, which is also a mapping from the sample space $X^n$ of X to the m-ary signal alphabet A from which a class u is drawn. The true output function $f(\cdot)$ is equivalent to the source code C for the random variable X because their domain $X^n$ and codomain A are equal and the image of both functions is the same for each input sample x in the domain $X^n$. Proof of Corollary 1. If the Kolmogorov complexity $K(x)$ of an input sample x is larger than the number of bits required to describe the class u to which it is mapped, which is at most $\log_2 m$ bits, then some information about the input sample x is lost. Satisfying this condition, the true source code C is a lossy compressor. Proof of Theorem 1. The normalized information distance is a universal cognitive similarity metric that minorizes all other admissible distances up to a negligible additive error term. This means that decreasing the normalized information distance $D_I(C, \tilde{C})$ ensures that the true source code C and the learned source code $\tilde{C}$ are more similar; i.e., the learned source code $\tilde{C}$ is more general with respect to the true source code C. In a real-world setting, because the empirical sample space $X^n$ may be too large, the learning algorithm sees an input sample $x_S$ drawn from a subset $X^n_S$ of $X^n$; i.e., $X^n_S \subset X^n$. Put differently, the set $X^n_S$ of available input samples on which a neural network is trained and tested is a subset of the empirical sample set $X^n$ which the trained neural network sees during inference. This means that the true source code C bears information of all possible relations between input features that are useful for the classification task T, whereas the learned source code $\tilde{C}$ bears information of a subset of all possible relations between the input features. The Kolmogorov complexity of the true source code is thus larger than that of a source code learned from the set of available input samples by a sufficiently high-capacity neural network, which can memorize its input samples; i.e., $K(C) > K(\tilde{C})$. Therefore, $\min_{\tilde{C}} D_I(C, \tilde{C}) = \min_{\tilde{C}} \max\{K(C \mid \tilde{C}), K(\tilde{C} \mid C)\}$ is an optimization problem for the generalization of the learned source code $\tilde{C}$ with respect to the true source code C. Proof of Theorem 2. Any encoding $E_i \colon X^n_S \to Y^n_S$ that bears information useful for the classification task T that is not entirely represented by the subset $X^n_S$ of uncoded input samples, i.e., $Y^n_S \subseteq X^n_S$, when concatenated with the uncoded input samples $x_S$, increases the Kolmogorov complexity of the learned source code, which is now called $\tilde{C}_E$, because a sufficiently high-capacity neural network can memorize its input samples. Put differently, the Kolmogorov complexity $K(\tilde{C}_E)$ of the source code $\tilde{C}_E$ learned from a concatenation $\{x_S, E_i(x_S)\}$ of uncoded and encoded input samples is larger than that of the source code $\tilde{C}$ learned from uncoded input samples alone if the encodings bear information of relations between input features that are not represented by the uncoded input samples. As the true source code C bears information of all possible relations between input features, the Kolmogorov complexity $K(\tilde{C}_E)$ of the source code $\tilde{C}_E$ learned from a concatenation $\{x_S, E_i(x_S)\}$ of uncoded input samples and their encodings bearing information of a subset of all possible relations between input features is upper bounded by the Kolmogorov complexity $K(C)$ of the true source code C; i.e., $K(C) > K(\tilde{C}_E)$. In other words, a sufficiently high-capacity neural network can memorize its input samples without being assisted by encodings $E_i$.
However, the encodings $E_i$ bear information of relations between input features, which help to increase the Kolmogorov complexity of the learned source code if they are useful for the classification task T, i.e., if they are contained in the empirical sample set $X^n$ of the neural network, and if the information in the mappings contained in the input code, which is used to generate the encodings $E_i$, is not represented in the set $X^n_S$ of available input samples. The conditional Kolmogorov complexities $\{K(C \mid \tilde{C}), K(\tilde{C} \mid C)\}$ are thus both larger than $\{K(C \mid \tilde{C}_E), K(\tilde{C}_E \mid C)\}$, respectively, because the program that computes how to go from $\tilde{C}_E$ to C is shorter in length than the program that computes how to go from $\tilde{C}$ to C. The same holds in the reverse direction. Therefore, $D_I(C, \tilde{C}_E) < D_I(C, \tilde{C})$. The source code $\tilde{C}_E$ learned from the concatenation $\{x_S, E_i(x_S)\}$ is thus more general than the source code $\tilde{C}$ learned from $x_S$. Proof of Proposition 1. As the normalized information distance $D_I(C, \tilde{C}_E)$ is not effectively computable, it can be approximated for practical purposes by the normalized compression distance of Equation 4, where Z is a real-world compressor. The learning algorithm sees an input sample $x_S$ drawn from a subset $X^n_S$ of $X^n$, as the empirical sample space $X^n$ may be too large. Because a sufficiently high-capacity neural network can memorize its input samples, the compressed size of the true source code is larger than that of the learned source code; i.e., $Z(C) > Z(\tilde{C}_E)$. At perfect training accuracy, the compressed size $Z(\{C, \tilde{C}_E\})$ of the concatenation $\{C, \tilde{C}_E\}$ is equal to $Z(C)$, as $\tilde{C}_E$ is a partial function of C. For a sufficiently high training accuracy, we can consider $|Z(\{C, \tilde{C}_E\}) - Z(C)|$ to be negligible for the purposes of generalization. As the generalization condition $D_I(C, \tilde{C}_E) < D_I(C, \tilde{C})$ is not effectively computable, an equivalent effectively computable condition is useful for practical purposes. For the purposes of generalization, the effectively computable condition is equivalent to $D_Z(C, \tilde{C}_E) < D_Z(C, \tilde{C})$. Proof of Proposition 2. By the proof of Proposition 1, the effectively computable optimization problem for the generalization of $\tilde{C}_E$ with respect to C is $\min_{\tilde{C}_E} D_Z(C, \tilde{C}_E)$. B DEFINITIONS Inference Accuracy. The classification accuracy measured on a subset of the empirical sample set $X^n$, which may be subjected to common corruptions or adversarial perturbations and which may be out of distribution of the training set, is defined as inference accuracy. The definition of inference accuracy can be contrasted with that of test accuracy by considering that the former is measured on a subset of the empirical sample set $X^n$ which consists of corrupted or perturbed samples which may be out of distribution of the training set, and that the latter is measured on the test set, which consists of uncorrupted and unperturbed samples that are presumed to come from the same distribution as the training set. Generalization Error. The difference between the training error measured on the training set and the inference error measured on a subset of the empirical sample set $X^n$, which may be subjected to common corruptions or adversarial perturbations and which may be out of distribution of the training set, is defined as the generalization error. This definition is different from that of prior works, which define generalization error as the difference between the training error measured on the training set and the test error measured on the test set. Generalization. A learned classification function is said to be more general with a decreasing generalization error.
This definition is different from that of prior works, which define a learned classification function to be more general with a decreasing difference between the training error measured on the training set and the test error measured on the test set. Source Code. A source code C for a random variable X is a mapping from the sample space $X^n$ of X to an m-ary signal alphabet A. Source codes can be designed for the most efficient representation of the data. Channel codes appropriate for a channel can be designed separately and independently. This combination is as efficient as any other method that can be designed by considering both problems together. We refer the reader to a standard textbook for a detailed understanding of source codes. Kolmogorov Complexity. The Kolmogorov complexity $K_U(x)$ of a string x with respect to a universal computer U is defined as
$$K_U(x) = \min_{p \,:\, U(p) = x} l(p),$$
where p denotes a program and $l(p)$ denotes the length of the program p. Thus, $K_U(x)$ is the shortest description length of x over all descriptions interpreted by computer U. We fix such a universal computer U as reference and write $K_U(x) = K(x)$. We refer the reader to a standard textbook for a detailed understanding of Kolmogorov complexity. C.1 ENCODED CIFAR-10 IMAGES Figure 5 shows three CIFAR-10 images and four of their encodings, which are arbitrarily chosen. This figure shows that each encoding conveys a different view of the input features, which helps the learned source code model relations between the features that are useful for the image classification task. C.2 TRAINING SETUP All models are trained in PyTorch with 16 random initializations. We train the networks over 450 epochs with a batch size of 128 and with a dynamic learning rate equal to 0.1 until epoch 150, 0.01 until epoch 250, and 0.001 until epoch 450. C.3 ADDITIONAL RESULTS To show the robustness of the encoded VGG-11 models to Gaussian noise beyond the noise levels included in the CIFAR-10-C dataset, we apply Gaussian noise with zero mean and variance $\sigma^2_w$ to the CIFAR-10 test set. The average input-feature energy equals
$$\bar{E}_x = \frac{1}{kn} \sum_i x_i^2,$$
where $x_i$ is a feature of the input sample x, the sum runs over all features of all test samples, k is the number of input samples in the test set, and n is the number of features in an input sample. We define the signal-to-noise ratio to be $\mathrm{SNR} = 10 \log_{10}(\bar{E}_x / \sigma^2_w)$ dB. In Figure 6, we show on the left plot that increasing the number of arbitrary encodings concatenated to the input features significantly increases robustness to Gaussian noise applied to the CIFAR-10 test set with signal-to-noise ratios from 25 to 0 dB. For example, at a signal-to-noise ratio of 12 dB, the inference accuracy of the VGG-11 model with 32 encodings is 61.15%, whereas that of the uncoded VGG-11 model is 21.49%. On the right plot, the experimental results for the VGG-16 model with 32 encodings tested on the samples in the CIFAR-10-C dataset corrupted by Gaussian noise and shot noise are given. The results indicate that using a larger encoded model does not necessarily confer more robustness to such common corruptions as Gaussian noise and shot noise than a smaller encoded model. In Figure 7, the results of the experiments conducted on the CIFAR-10-C dataset corrupted by speckle noise and of the black-box boundary attack experiments on the CIFAR-10 dataset are shown. On the right plot, as with the other types of common-corruption experiments, we see that increasing the number of encodings increases robustness to speckle noise.
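The SNR sweep above can be reproduced in a few lines; this sketch uses the energy and SNR definitions given in the text and treats the flattened test set as a NumPy array.

```python
import numpy as np

def add_gaussian_noise_at_snr(x: np.ndarray, snr_db: float) -> np.ndarray:
    """Corrupt a test set x (k samples by n features) with zero-mean
    Gaussian noise whose variance realizes the requested SNR in dB."""
    avg_energy = np.mean(x ** 2)                    # (1/kn) sum of x_i^2
    sigma2 = avg_energy / (10 ** (snr_db / 10.0))   # solve SNR = 10 log10(E/sigma^2)
    noise = np.random.normal(0.0, np.sqrt(sigma2), size=x.shape)
    return x + noise

x_test = np.random.rand(100, 3 * 32 * 32)  # stand-in for CIFAR-10 test images
for snr in (25, 12, 0):                    # sweep from 25 dB down to 0 dB
    x_noisy = add_gaussian_noise_at_snr(x_test, snr)
```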
On the left plot, we see that the encoded model is significantly more robust to the boundary attack than the uncoded model. For example, at a normalized $\ell_2$ distance of 0.01, an inference accuracy of approximately 50% is achieved by the model with 32 encodings, whereas the inference accuracy of the uncoded model already drops to 0% at an $\ell_2$ distance much closer to 0. C.4 PERFORMANCE COMPARISON: COMMON CORRUPTIONS Table 1 compares the encoded VGG-11 model with 32 encodings with previously published methods on the CIFAR-10-C dataset. At a severity level of 5, the encoded VGG-11 model achieves the highest inference accuracy against shot noise compared with all the other works listed in this table, which use a ResNet-18 or ResNet-26 model. The highest inference accuracy (77.30%) against Gaussian noise is attained by the adversarial logit pairing (ALP) method, but the test accuracy of this method is 83.50%, whereas the encoded VGG-11 model achieves the second-highest inference accuracy (75.22%) against Gaussian noise with a test accuracy of 90.19%. Our results seem to indicate that using a larger number of encodings improves robustness to common corruptions, so the inference accuracy achieved by the channel-coding method may be improved by merely increasing the number of encodings or designing higher-performance codes. C.5 PERFORMANCE COMPARISON: ADVERSARIAL PERTURBATIONS Table 2 compares the encoded VGG-16 model with previously published defenses on the CIFAR-10 dataset. The experimental results do not imply that encoded input features are more robust to common corruptions and adversarial perturbations than uncoded features. The encoded input features simply bear information of relations between input features that are not represented by the uncoded input features, so a source code learned from a concatenation of uncoded and encoded input features bears more information of the true source code than a source code learned from uncoded input features alone. A source code learned from a concatenation of uncoded and encoded input features is thus more robust than a source code learned from uncoded input features alone. C.6 INCREASING THE NUMBER OF INPUT CHANNELS To study the impact of increasing the number of input channels of the uncoded VGG-11 and VGG-16 models, we conducted experiments on the encoded VGG-11 and VGG-16 models that use identical encodings; i.e., the input features are replicated across additional input channels (the "encoders" are just identity functions). In Figure 8, we see on the left that increasing the number of input channels of the uncoded VGG-11 model confers no robustness to Gaussian noise whatsoever. The plot on the right shows that increasing the number of input channels of the uncoded VGG-16 model does not confer robustness to white-box PGD either. Robustness to Gaussian noise (left) and the PGD attack (right) is tested by providing identical samples from the CIFAR-10 test set to the increased number of input channels. C.7 SYMBOL-TO-BIT MAPPING The bit mapper in Figure 2 uses a symbol-to-bit mapping matrix to map four 5-PAM symbols into 12 bits. In this symbol-to-bit mapping matrix, the i-th row corresponds to the encoding $E_i$, where $0 \le i \le 31$. Each symbol in the 5-PAM symbol alphabet is converted into three bits by using the corresponding three columns in this matrix. For example, the first symbol in the 5-PAM symbol alphabet for the encoding $E_3$ is converted to its three bits by drawing them from the third row and the third, fourth, and fifth columns of the symbol-to-bit mapping matrix. After all four of the 5-PAM symbols are converted into their respective three bits, these bits are concatenated to each other, determining the value of the corresponding feature in the encoded sample.
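The bit-mapper step of C.7 can be sketched as follows. Since the actual symbol-to-bit mapping matrix is not reproduced here, the `mapping` array below is a hypothetical stand-in with a plausible shape (one row per encoding, three bits per 5-PAM alphabet symbol).

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for the symbol-to-bit mapping matrix:
# one row per encoding E_i, and 3 bits for each of the 5 PAM symbols.
mapping = rng.integers(0, 2, size=(32, 5, 3))

def symbols_to_bits(symbols, encoding_index: int) -> np.ndarray:
    """Map four 5-PAM symbols (values 0..4) to 12 bits using the row of
    the mapping matrix that corresponds to encoding E_i."""
    row = mapping[encoding_index]        # shape (5, 3): 3 bits per symbol
    bits = [row[s] for s in symbols]     # look up each symbol's 3 bits
    return np.concatenate(bits)          # 12 bits in all

print(symbols_to_bits([0, 4, 2, 1], encoding_index=3))
```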
We present a theoretical and experimental framework for defining, understanding, and achieving generalization, and as a result robustness, in deep learning by drawing on algorithmic information theory and coding theory.
Many approaches to causal discovery are limited by their inability to discriminate between Markov equivalent graphs given only observational data. We formulate causal discovery as a marginal likelihood based Bayesian model selection problem. We adopt a parameterization based on the notion of the independence of causal mechanisms which renders Markov equivalent graphs distinguishable. We complement this with an empirical Bayesian approach to setting priors so that the actual underlying causal graph is assigned a higher marginal likelihood than its alternatives. Adopting a Bayesian approach also allows for straightforward modeling of unobserved confounding variables, for which we provide a variational algorithm to approximate the marginal likelihood, a desirable feat that renders the computation of the marginal likelihood intractable. We believe that the Bayesian approach to causal discovery both allows the rich methodology of Bayesian inference to be used in various difficult aspects of this problem and provides a unifying framework to causal discovery research. We demonstrate promising results in experiments conducted on real data, supporting our modeling approach and our inference methodology.
The current approach is at the intersection of various other approaches in the literature, thereby combining many of their respective advantages. It is based on the notion of mechanism independence, does not assume causal sufficiency, can theoretically work on arbitrary graph structures that possibly include latent variables, and can discriminate between Markov equivalent structures. Our approach diverges from other Bayesian methods in various dimensions, such as by being able to distinguish between Markov equivalent causal graphs, using marginal likelihood (or approximations thereof) instead of surrogate scores such as BIC, and being able to model non-linear relationships. In Section 2, we introduce an example model for continuous observations and latent categorical confounders. To approximate the marginal likelihood in graphs which include latent confounders, we present a variational inference algorithm in Section 3. After testing our approach on various real data sets in Section 4, we present our results and further avenues of research in Section 5. A general causal graph $G(V_G, E_G)$ is a combination of a vertex set $V_G$, which is the set of observed and latent random variables, and a set of directed edges $E_G \subseteq V_G \times V_G$, where directed edges imply immediate cause-effect relationships between these variables. Let $\{x_1, \ldots, x_n, \ldots, x_N\} \subseteq V_G$ denote the set of continuous random variables, and similarly $\{r_1, \ldots, r_k, \ldots, r_K\} \subseteq V_G$ denote the discrete latent variables of the network, where each $x_n$ and each $r_k$ are defined in the domains $X_n$ and $R_k$, respectively. The set of parent vertices of a vertex $v \in V_G$ is denoted by $\pi(v)$, while we denote its continuous parents by $x_{\pi(v)}$ and its discrete parents by $r_{\pi(v)}$. For the scope of this text, we specify conditional distributions for the graphs as follows: we assume categorical distributions on the discrete variables $r_{1:K}$ and linear basis function models with Gaussian noise on the continuous variables $x_{1:N}$. Though these choices are by no means mandatory for our framework, we define latent variables as categorical. Furthermore, we restrict our attention to the graphical structures that do not include a continuous variable as a parent of a categorical variable for inferential convenience, and construct the following generative model for T independent and identically distributed observations from the network G:
$$r_k^t \mid r_{\pi(r_k)}^t \sim \mathcal{C}at\big(r_k^t;\, \theta_{k|r_{\pi(r_k)}^t}\big), \qquad x_n^t \mid x_{\pi(x_n)}^t, r_{\pi(x_n)}^t \sim \mathcal{N}\big(x_n^t;\, w_{n|r_{\pi(x_n)}^t}^T \phi(x_{\pi(x_n)}^t),\, \rho_{n|r_{\pi(x_n)}^t}^{-1}\big), \qquad (1)$$
where $1 \le t \le T$, $\phi$ is an arbitrary basis function with the convention $\phi(\{\}) = 1$, and the $\theta$'s, $w$'s, and $\rho$'s are the parameters of the conditional distributions. Namely, $\theta_k$ is the conditional distribution table of $r_k$, $w_n$ is the weights of the basis functions, and $\rho_n$ is the precision parameter of the conditional distribution of $x_n$. Notice that declaring parameters as random variables simplifies the notion of independent cause-effect mechanisms as follows: since the conditional distributions are functions of the parameters, independence of the conditional distributions boils down to independence of the parameters. Therefore, we complete our generative model by defining independent conjugate prior distributions on the parameters, $\forall n, r_{\pi(x_n)}$:
$$\theta_{k|r_{\pi(r_k)}} \sim \mathcal{D}ir\big(\theta;\, \gamma_{k|r_{\pi(r_k)}}\big), \qquad \big(w_{n|r_{\pi(x_n)}}, \rho_{n|r_{\pi(x_n)}}\big) \sim \mathcal{NG}\big(w, \rho;\, m_{n|r_{\pi(x_n)}}, \Lambda_{n|r_{\pi(x_n)}}, a_{n|r_{\pi(x_n)}}, b_{n|r_{\pi(x_n)}}\big), \qquad (2)$$
where $\gamma_{k|r_{\pi(r_k)}}$, $m_{n|r_{\pi(x_n)}}$, $\Lambda_{n|r_{\pi(x_n)}}$, $a_{n|r_{\pi(x_n)}}$, $b_{n|r_{\pi(x_n)}}$ are the prior parameters, i.e., hyperparameters, of our generative model.
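A small forward-sampling sketch of this generative model for the bivariate graph x1 → x2 with a single latent categorical parent r1. The basis function, cardinality, and hyperparameter values are illustrative choices rather than the settings used in the experiments, and NumPy's gamma sampler uses a scale (not rate) parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)
T, R = 500, 3                        # observations, latent cardinality |R1|
phi = lambda x: np.array([1.0, x])   # linear basis with a bias term

# Draw parameters from conjugate priors in the spirit of Eqs. 1-2.
theta = rng.dirichlet(np.full(R, 10.0))          # Cat parameters of r1
rho1 = rng.gamma(10.0, 1.0, size=R)              # precisions of x1 | r1
mu1 = rng.normal(0.0, 1.0, size=R)               # means of x1 | r1
rho2 = rng.gamma(10.0, 1.0, size=R)              # precisions of x2 | x1, r1
w2 = rng.normal(0.0, 1.0, size=(R, 2))           # basis weights of x2

r = rng.choice(R, size=T, p=theta)               # latent confounder
x1 = rng.normal(mu1[r], 1.0 / np.sqrt(rho1[r]))  # cause
mean2 = np.array([w2[r[t]] @ phi(x1[t]) for t in range(T)])
x2 = rng.normal(mean2, 1.0 / np.sqrt(rho2[r]))   # effect
```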
Variational Bayesian inference (VB) is a technique where an intractable posterior distribution P is approximated by a variational distribution Q via minimizing the Kullback-Leibler divergence $\mathrm{KL}(Q \| P)$. In the context of Bayesian model selection, minimization of $\mathrm{KL}(Q \| P)$ corresponds to establishing a tight lower bound for the marginal log-likelihood, which we refer to as the evidence lower bound (ELBO). This correspondence is due to the following decomposition of the marginal log-likelihood:
$$\log p(x_{1:N}^{1:T}) = \mathcal{B}_P[Q] + \mathrm{KL}(Q \| P),$$
where $P = p(r_{1:K}^{1:T}, \theta_{1:K}, \rho_{1:N}, w_{1:N} \mid x_{1:N}^{1:T})$ is the full posterior distribution and the ELBO is denoted by $\mathcal{B}_P[Q]$. In a typical scenario of VB, Q is assumed to be a member of a restricted family of distributions. In its most common form, also known as the mean-field approximation, Q is assumed to factorize over some partition of the latent variables, in a way that is reminiscent of a rank-one approximation in the space of distributions:
$$Q(r_{1:K}^{1:T}, \theta_{1:K}, \rho_{1:N}, w_{1:N}) = q(r_{1:K}^{1:T})\, q(\theta_{1:K}, \rho_{1:N}, w_{1:N}).$$
The ELBO is then maximized with respect to Q, which is restricted to the class of factorized distributions. Due to conjugacy, maximization over Q results in further factorized variational distributions which also belong to the same family as the prior. To calculate the variational parameter updates, we need to calculate the expected sufficient statistics. In its final form, our variational algorithm becomes equivalent to iteratively calculating the expected sufficient statistics and updating the parameters. The explicit forms for the variational parameters and the ELBO can be found in Appendix C. In Section 4.1 we test the performance of our approach in bivariate causal discovery. Then in Section 4.2 we identify the cardinality and distribution of a latent confounder in a multivariate data set, exemplifying the versatility of a Bayesian approach to causality. In the first part, we measured the accuracy of VB for the causal direction determination problem. The data set in this part is CEP, frequently used in causal discovery research, which includes 100 data sets, the vast majority of which are bivariate. For the hyperparameters of the model, we created 36 different settings by varying the critical hyperparameters systematically. We detail this hyperparameter creation process in Appendix D.1. In making a decision between two causal directions in a given hyperparameter setting, we choose the model which obtains a higher ELBO. We tested our algorithm on the data set by using 10 × 3 cross-validation. That is, for each test, we separated the data set into three, detected the hyperparameter setting (of 36) that obtained the best accuracy score on the first two thirds, and tested our model on the last third of the data set, which corresponds to an empirical Bayesian approach to prior selection. We conducted the same process two more times, each fold becoming the test set once. We conducted this random split and test procedure 10 times, and we report the accuracy and AUC values according to these 10 runs. On the CEP data set, we obtained a mean accuracy of .78 ± .09 and an AUC score of .84 ± .13 (the values following the mean values correspond to 68% CI), where the accuracy and AUC calculations are performed by using the weights provided with the data set. Recent work has compared the most recent methods on their performance on this data set; our results correspond to state-of-the-art performance in bivariate causality detection.
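The decision rule is a plain ELBO comparison between the two directed hypotheses. A schematic sketch follows, where `fit_vb` is a hypothetical routine that runs the coordinate-ascent updates of Section 3 for a given graph and hyperparameter setting and returns the converged ELBO.

```python
def decide_direction(x1, x2, hyperparams, fit_vb):
    """Pick the causal hypothesis with the higher converged ELBO.
    `fit_vb(data, graph, hyperparams)` is assumed to return the ELBO."""
    elbo_fwd = fit_vb((x1, x2), graph="x1->x2", hyperparams=hyperparams)
    elbo_bwd = fit_vb((x1, x2), graph="x2->x1", hyperparams=hyperparams)
    return "x1->x2" if elbo_fwd > elbo_bwd else "x2->x1"
```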
Using a different data set, we next examine the ability of our approach to identify a latent confounder. For this purpose, we use the smallest database in the Thyroid data set from the UCI repository. This data involves five different diagnostic measurements from patients with low, normal, and high thyroid activity. This being a diagnostic data set, the causal structure is known, where the thyroid activity is the cause of the rest of the variables (Figure 1(a)). In our experiments we ignore the thyroid activity variable, thus it becomes a latent confounder. This way we can test how well our approach identifies the latent confounder. To assess our method's performance, we first examine whether the latent variable cardinality our method favors corresponds to the cardinality of the actual variable that we held out. Figure 1(b) shows that the ELBO of the model is maximized at the latent cardinality which corresponds to the actual cardinality of the thyroid activity variable (which is 3). Then, to ascertain that the inferred latent variable indeed corresponds to the thyroid activity variable, we compare the assignments of our model to actual patient thyroid activity levels. The results demonstrate an accuracy of .93, thus we conclude that our method accurately identified the latent causal variable. Overall, we show that Bayesian model selection is a promising framework that can facilitate causal research significantly, both through conceptual unification and increased performance. Given that Bayesian modeling is agnostic to specific variable types, conditional distributions, and approximate inference methodology, the value of a successful Bayesian modeling approach for causal research is immense. Though our empirical Bayesian approach to setting priors can be useful in various contexts (e.g. in data sets where only some of the bivariate causal directions are known), finding other principled ways of assigning (or integrating out) priors that do not require labeled data is an important direction for future research. Conducting causal discovery with different variable types and/or different distributions would also be beneficial for demonstrating the current approach's viability in various contexts. When constructing a generative model for causal inference, our aim is making Markov equivalent graph structures identifiable. However, the model that is described only by Equations 1 and 2 is not necessarily identifiable. To be more precise, consider the case where we have two continuous variables and no latent categorical variable, which is equivalent to the following structural equation model:
$$x_1 = \epsilon_1, \qquad x_2 = w x_1 + \epsilon_2.$$
One can also construct the following equivalent structural equation model in which the dependence structure is reversed:
$$x_2 = \tilde{\epsilon}_2, \qquad x_1 = \tilde{w} x_2 + \tilde{\epsilon}_1.$$
These two models are not identifiable with the descriptions above, since they both correspond to linear models with Gaussian noise. However, by assuming priors on the parameters we can break the symmetry and make these Markov equivalent models identifiable. For instance, assuming Gaussian priors on the weights of the first model implies non-Gaussian priors on the second model, which makes these two models distribution inequivalent. Moreover, even when two Markov equivalent models are also distribution equivalent, choosing appropriate prior parameters that violate likelihood equivalence still makes them identifiable. Indeed, for a model with a parameterization as described, only a very specific choice of priors leads to likelihood equivalence between the Markov equivalent models, and we will avoid following such a constraint.
Choosing arbitrary priors almost always leads to likelihood inequivalent, hence identifiable, models. In this section, we define the appropriate graphical structures for causal structure learning in the bivariate case. As we stated in Section 1, we do not assume causal sufficiency and allow the existence of possibly many exogenous variables. Luckily, we can combine the effects of exogenous variables into a single latent variable with an arbitrary cardinality. As a result, the relationship between two observable dependent variables $x_1$ and $x_2$ boils down to one of three cases due to the causal Markov condition: 1. $x_1$ causes $x_2$; 2. $x_2$ causes $x_1$; 3. they do not cause each other, but a latent variable $r_1$ causes both of them. Associated causal networks corresponding to each of these hypotheses are depicted in Figure 2 (graphical models for bivariate causality), where the latent variable $r_1$ represents the overall effect of all the unobserved variables. For the spurious relationship (Figure 2(a)), marginally correlated variables $x_1$ and $x_2$ become independent once the latent common cause variable $r_1$ is known. However, in direct causal relationships (Figures 2(b) and 2(c)), even when the latent common cause is known, the two variables are still dependent, and the direction of the cause-effect relationship is implicit in the parameterization of the models. The identifiability of these models resides in the fact that modelling parameters explicitly as random variables makes these graphs Markov inequivalent. If we were considering only the marginal models of the observed variables, then we would end up with three Markov equivalent graphs. However, including latent variables and independent parameters renders distinctive conditional independence properties for each graph. For instance, when $x_2$ and $r_1$ are known, $x_1$ and the parameters of $x_2$ are dependent only in the case of $x_1 \to x_2$; or, knowing $r_1$ makes $x_1$ and $x_2$ independent only if they have a spurious relationship. These distinctive conditional independence properties are the underlying reasons making all of these graphs identifiable. In this section, we supply brief descriptions of the basic distributions that we mentioned in the main part of the manuscript. B.1.1. Gamma Distribution 1. Gamma function: $\Gamma(z) = \int_0^\infty t^{z-1} e^{-t}\, dt$, which is equal to $(z-1)!$ for nonnegative integer z. 2. Gamma density: $\mathcal{G}amma(\rho; a, b) = \exp((a-1)\log\rho - b\rho - \log\Gamma(a) + a\log b)$, where a is the shape and b is the rate parameter. 3. Expected sufficient statistics: $E[\rho] = a/b$ and $E[\log\rho] = \psi(a) - \log b$. 4. Cross entropy: here, $\psi(x)$ is the digamma function, which is defined as $\psi(x) = \frac{d \log \Gamma(x)}{dx}$. B.1.2. Dirichlet Distribution 1. Multivariate Beta function: $B(\gamma) = \frac{\prod_i \Gamma(\gamma_i)}{\Gamma(\sum_i \gamma_i)}$. 2. Dirichlet density: $\mathcal{D}ir(\theta; \gamma) = \frac{1}{B(\gamma)} \prod_i \theta_i^{\gamma_i - 1}$. 3. Expected sufficient statistics: $E[\log\theta_i] = \psi(\gamma_i) - \psi(\sum_j \gamma_j)$. 4. Cross entropy. B.1.4. Normal Distribution 1. Normal density: $\mathcal{N}(x; \mu, \rho) = \sqrt{\rho/2\pi}\, \exp(-\rho(x-\mu)^2/2)$, where $\mu$ is the mean parameter and $\rho$ is the precision parameter, i.e., $\rho^{-1}$ is the variance. B.1.5. Multivariate Normal Distribution 1. Multivariate Normal density: $\mathcal{N}(x; \mu, \Lambda) = (2\pi)^{-M/2} \det(\Lambda)^{1/2} \exp(-\tfrac{1}{2}(x-\mu)^T \Lambda (x-\mu))$, where $\mu$ is the mean vector and $\Lambda$ is the precision matrix, i.e., $\Lambda^{-1}$ is the covariance matrix. 2. Expected sufficient statistics: $E[(x-m)^T A (x-m)] = \mathrm{tr}(A\Lambda^{-1}) + (\mu-m)^T A (\mu-m)$ for any symmetric matrix A. B.1.6. Normal-Gamma Distribution 1. Normal-Gamma density: $\mathcal{NG}(w, \rho; m, \lambda, a, b)$, which can be equivalently decomposed into a marginal Gamma distribution and a conditional Normal distribution: $\mathcal{NG}(w, \rho; m, \lambda, a, b) = \mathcal{G}amma(\rho; a, b)\, \mathcal{N}(w; m, (\rho\lambda)^{-1})$. 2. Expected sufficient statistics. 3. Cross entropy. B.1.7. Multivariate Normal-Gamma Distribution 1. Multivariate Normal-Gamma density: $\mathcal{NG}(w, \rho; m, \Lambda, a, b)$, which can be equivalently decomposed into a marginal Gamma distribution and a conditional Multivariate Normal distribution: $\mathcal{NG}(w, \rho; m, \Lambda, a, b) = \mathcal{G}amma(\rho; a, b)\, \mathcal{N}(w; m, (\rho\Lambda)^{-1})$. 2. Expected sufficient statistics: $E[\rho\,(w-m)^T A (w-m)] = \mathrm{tr}(A\Lambda^{-1}) + \frac{a}{b}(m'-m)^T A (m'-m)$ for any symmetric matrix A.
E N G(m,Λ,â,b) {− log N G(w, ρ; m, Λ, a, b)} = −a log b + log Γ(a) − 1 2 log det(Λ) + 1 2 tr(Λ −1 Λ) + M 2 log 2π − a + M 2 − 1 (ψ(â) − logb) +â b b +â 2b (m − m) T Λ(m − m) In this section we summarize the basic conjugate models that are closely related to our example model. 2. Posterior of θ: where γ * r = γ r + 2. Posterior of µ and ρ: where 1. Generative model: An equivalent description with Normal-Gamma priors is 2. Posterior of w and ρ: where In this section, we will explicitly evaluate these equations to derive closed form expressions for the variational posteriors: 1. We first simplify the In order to keep the notation uncluttered, from now on we will omit the implicit subscripts in expectation operators. So each individual factor q(r t 1:K) above is equal to 2. We now pursue the same strategy for the expression in q(θ 1:K, ρ 1:N, w 1: where each individual factor turns out to be Finally, we match the coefficients of the sufficient statistics in above equations with the natural parameters and find the following variational parameters in terms of the expected sufficient statistics: Update logθ t Update expected sufficient statistics end for A simplified sketch of our variational inference algorithm VB-CN is also presented in Algorithm 1. ELBO can be expressed as a sum of expectation terms most of which are in the form of negative cross entropy or negative entropy: In this section we will evaluate each of those expectations explicitly. We start our derivation with the trickier Gaussian log-likelihood term, then the rest of the expectations will correspond to negative cross entropy values of standard exponential family distributions: Variational distribution Q treats r t 1:K and θ 1:K as independent variables. So, the expectations of the categorical log-likelihood terms admit the following form The rest of the terms are related to cross entropy or entropy of the well-known exponential family distributions, and closed form expressions for them are supplied in Appendix B. So here, we only modify these expressions by changing their parameters with the appropriate variational parameters. 1. By using the negative cross entropy formulation in Appendix B.1.3 for categorical distributions: r 1 ∈ R 1, m n|r 1 's were set to 0, and Λ n|r 1 's were set to 1 10 I each; while for all values of r 1 ∈ R 1, γ 1 (r 1)'s were set to 10. We next describe the remaining hyperparameters with respect to the causal graph in Figure 2 (b) in which x 1 causes x 2. Their adaptation to other two graphs is straightforward due to symmetry. The hyperparameters of the Gamma distributions, (a 1, b 1, a 2, b 2), from which the precision of the observed variables were drawn, were allowed to take different values with the condition that a n|r 1 ≥ b n|r 1 at all times, but again every element of these vectors corresponding to different values of r 1 assumed to be constant within the vector. This is because the mean of a Gamma distribution Gamma(a, b) is a/b and its variance is a/b 2, therefore when b is allowed to take a greater value than a, this in a close to zero precision value for the relevant distribution for the observed variable. Obeying the constraint, the a and b's were allowed to take values among 1, 10, and 100 each. The a parameter was not allowed to be larger than 100 since this leads to an equivalent sample size much larger than the sample size of certain data sets used in experiments, effectively rendering the observations unimportant. 
The b parameter was not allowed to be smaller than 1, since this again implies extremely imprecise Gaussian distributions for the observed variables to which the Gamma distribution provides the precision variable. The combinations with these constraints lead to a total of 36 sets of hyperparameters. While doing model comparison in a hyperparameter setting, we expect several criteria to be satisfied for maintaining consistency. For instance, in the spurious model (Figure 2(a)) there is no reason to assign different priors on variables x_1 and x_2. Otherwise, just by permuting the labels of the pairs, we would obtain inconsistent marginal likelihoods. Likewise, when the labels of a pair are permuted, e.g. (x̃_1 = x_2, x̃_2 = x_1), we expect the marginal likelihood of (x_1^{1:T}, x_2^{1:T}) given the relation x_1 → x_2 to be equal to the marginal likelihood of the permuted pair (x̃_1^{1:T}, x̃_2^{1:T}) given the relation x̃_2 → x̃_1. The rule we used to solve inconsistency issues in such situations is the following: the prior parameters of two variables must be identical whenever their parental graphs are homomorphic. So, if we are calculating the marginal likelihood of the relation x_1 → x_2 with a particular hyperparameter setting, say (a_1 = 100, b_1 = 10, a_2 = 10, b_2 = 1), then the corresponding consistent hyperparameter setting for x_2 → x_1 should be (a_1 = 10, b_1 = 1, a_2 = 100, b_2 = 10), whereas the corresponding consistent hyperparameters for the spurious relationship should be (a_1 = 100, b_1 = 10, a_2 = 100, b_2 = 10). For this experiment, for each of the 36 hyperparameter combinations, and for each of the cardinality values |R_1| = 1 to 5 for the linear model, a total of 3 different data pairs (one for each graphical model) with 2000 observations were generated. This amounted to a total of 540 data pairs. For each synthetic data pair, the corresponding hyperparameters were used to compare the three hypotheses depicted in Figure 2 using the marginal likelihood estimate of the variational Bayes algorithm. The resulting ROC curves can be seen in Figure 3. With an overall accuracy of .961 and AUC of .998, the results demonstrate that our method can identify the data-generating graph comfortably, given the correct hyperparameter settings. The CEP data set is not labeled as to the spurious relationships; therefore it is not possible to conduct hyperparameter selection with cross-validation. However, we ran the experiments again, this time including the spurious-relationship hypothesis, for all 36 parameter settings, and recorded the pairs for which the marginal likelihood of the spurious hypothesis was the highest. We observed that, using the hyperparameter setting that achieved the highest accuracy in the previous experiment, these four data sets were found to be spurious: 19, 91, 92, and 98. The scatter plots of these data sets are presented in Figure 4. Visual examination of the first three pairs reveals that, although each of these pairs is correlated, they can be separated into two clusters within which the X and Y axes become independent. In other words, once the confounding variables governing the cluster affiliations are decided, the variables X and Y are generated independently, so their correlation is indeed spurious. As we lack the domain expertise, we do not know what these confounding variables correspond to in reality, but the existence of such variables is evident from the scatter plots. The case of the fourth spurious pair is slightly different from that of the other correlated pairs.
The fourth pair consists of the measurements of the initial and final speeds of a ball on a ball track, where the initial speed is thought of as the cause of the final speed. However, our variational algorithm selected the spurious model with a latent variable of cardinality |R_1| = 1, which actually corresponds to marginal independence of X and Y. Such an explanation makes sense considering the plot in Figure 4, as the initial speed of the ball does not seem related to its final speed.
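As a rough, self-contained illustration of this style of model comparison (ours, not the paper's variational implementation), the snippet below scores candidate latent cardinalities on data drawn from the spurious graph, using the BIC of a diagonal-covariance Gaussian mixture as a crude stand-in for the marginal likelihood/ELBO; the diagonal covariance encodes the conditional independence x_1 ⊥ x_2 given r_1.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Toy data from the spurious graph: r1 -> x1, r1 -> x2 (x1 ⊥ x2 | r1).
r1 = rng.integers(0, 3, size=2000)
x = np.stack([rng.normal(r1 * 2.0, 1.0),
              rng.normal(-r1 * 1.5, 1.0)], axis=1)

# Score each candidate latent cardinality; BIC should be minimized
# near the true |R1| = 3.
for k in range(1, 6):
    gm = GaussianMixture(k, covariance_type="diag", random_state=0).fit(x)
    print(k, round(gm.bic(x), 1))
```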
We cast causal structure discovery as a Bayesian model selection problem, in a way that allows us to discriminate between Markov equivalent graphs and identify the unique causal graph.
440
scitldr
The goal of compressed sensing is to learn a structured signal $x$ from a limited number of noisy linear measurements $y \approx Ax$. In traditional compressed sensing, ``structure'' is represented by sparsity in some known basis. Inspired by the success of deep learning in modeling images, recent work starting with~\cite{BDJP17} has instead considered structure to come from a generative model $G: \R^k \to \R^n$. We present two results establishing the difficulty of this latter task, showing that existing bounds are tight. First, we provide a lower bound matching the~\cite{BDJP17} upper bound for compressed sensing from $L$-Lipschitz generative models $G$. In particular, there exists such a function that requires roughly $\Omega(k \log L)$ linear measurements for sparse recovery to be possible. This holds even for the more relaxed goal of \emph{nonuniform} recovery. Second, we show that generative models generalize sparsity as a representation of structure. In particular, we construct a ReLU-based neural network $G: \R^{2k} \to \R^n$ with $O(1)$ layers and $O(kn)$ activations per layer, such that the range of $G$ contains all $k$-sparse vectors. In compressed sensing, one would like to learn a structured signal x ∈ R^n from a limited number of linear measurements y ≈ Ax. This is motivated by two observations: first, there are many situations where linear measurements are easy, in settings as varied as streaming algorithms, single-pixel cameras, genetic testing, and MRIs. Second, the unknown signals x being observed are structured or "compressible": although x lies in R^n, it would take far fewer than n words to describe x. In such a situation, one can hope to estimate x well from a number of linear measurements that is closer to the size of the compressed representation of x than to its ambient dimension n. In order to do compressed sensing, you need a formal notion of how signals are expected to be structured. The classic answer is to use sparsity. Given linear measurements 1 y = Ax of an arbitrary vector x ∈ R^n, one can hope to recover an estimate x* of x satisfying ‖x* − x‖ ≤ C · min_{k-sparse x'} ‖x − x'‖ (1) for some constant C and norm ‖·‖. In this paper, we will focus on the ℓ2 norm and on achieving guarantee (1) with 3/4 probability. Thus, if x is well-approximated by a k-sparse vector x', it should be accurately recovered. Classic results such as [CRT06] show that (1) is achievable when A consists of m = O(k log(n/k)) independent Gaussian linear measurements. This bound is tight, and in fact no distribution of matrices with fewer rows can achieve this guarantee in either ℓ1 or ℓ2 [DIPW10]. Although compressed sensing has had success, sparsity is a limited notion of structure. Can we learn a richer model of signal structure from data, and use this to perform recovery? In recent years, deep convolutional neural networks have had great success in producing rich models for representing the manifold of images, notably with generative adversarial networks (GANs) [GPAM+14] and variational autoencoders (VAEs) [KW14]. These methods produce generative models G: R^k → R^n that allow approximate sampling from the distribution of images. So a natural question is whether these generative models can be used for compressed sensing. In [BJPD17] it was shown how to use generative models to achieve a guarantee analogous to (1): for any L-Lipschitz G: R^k → R^n, one can achieve ‖x* − x‖ ≤ C · min_{z ∈ B^k(r)} ‖x − G(z)‖ + δ, (2) where r, δ > 0 are parameters, B^k(r) denotes the radius-r ℓ2 ball in R^k, and Lipschitzness is defined with respect to the ℓ2-norms, using only m = O(k log(Lr/δ)) measurements.
Thus, the recovered vector is almost as good as the nearest point in the range of the generative model, rather than in the set of k-sparse vectors. We will refer to the problem of achieving the guarantee in (2) as "function-sparse recovery". Our main theorem is that the [BJPD17] result is tight: for any setting of parameters n, k, L, r, δ, there exists an L-Lipschitz function G: R^k → R^n such that any algorithm achieving (2) with 3/4 probability must have Ω(min(k log(Lr/δ), n)) linear measurements. Notably, the additive error δ that was unnecessary in sparse recovery is necessary for general Lipschitz generative model recovery. A concurrent paper [LS19] proves a lower bound for a restricted version of (2). They show a lower bound in the case where the vector x lies in the image of G, and for a particular value of δ. Our results, in comparison, apply to the most general version of the problem and are proven using a simpler communication complexity technique. The second result in this paper is to directly relate the two notions of structure: sparsity and generative models. We produce a simple Lipschitz neural network G_sp: R^{2k} → R^n, with ReLU activations, 2 hidden layers, and maximum width O(kn), so that the range of G_sp contains all k-sparse vectors. A second result of [BJPD17] is that for ReLU-based neural networks, one can avoid the additive δ term and achieve a guarantee different from (2): using O(kd log W) measurements, if d is the depth and W is the maximum number of activations per layer. Applying this to our sparsity-producing network G_sp implies, with O(k log n) measurements, recovery achieving the standard sparsity guarantee (1). So the generative-model representation of structure really is more powerful than sparsity. As described above, this paper contains two results: an Ω(min(k log(Lr/δ), n)) lower bound for compressed sensing relative to a Lipschitz generative model, and an O(1)-layer generative model whose range contains all sparse vectors. These results are orthogonal, and we outline each in turn. Over the last decade, lower bounds for sparse recovery have been studied extensively. The techniques in this paper are most closely related to the techniques used in [DIPW10]. Similar to [DIPW10], our proof is based on communication complexity. We will exhibit an L-Lipschitz function G and a large finite set Z ⊂ Im(G) ⊂ B^n(R) of points that are well-separated. Then, given a point x that is picked uniformly at random from Z, we show how to identify it from Ax using the function-sparse recovery algorithm. This implies Ax also contains a lot of information, so m must be fairly large. Formally, we produce a generative model whose range includes a large, well-separated set with log(|X|) = Ω(min(k log(Lr/R), n)). Now, suppose we have an algorithm that can perform function-sparse recovery with respect to the G from Theorem 2.1, with approximation factor C and error δ < R/8, within the radius-r ball in k dimensions. Set t = Θ(log n), and for any z_1, z_2, ..., z_t ∈ Z = G(X), take z to be a weighted combination of the z_j in which the weight of z_j grows geometrically with j (powers of a constant D defined in Section 3). The idea of the proof is the following: given y = Az, we can recover a vector ẑ close enough to z that, because Z has minimum distance R/√6, we can exactly recover z_t by rounding ẑ to the nearest element of Z. But then we can repeat the process on (Az − Az_t) to find z_{t−1}, then z_{t−2}, up to z_1, and learn t lg |Z| = Ω(tk log(Lr/R)) bits in total; a toy sketch of this peeling appears below.
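A toy version of the peeling argument (our sketch; it replaces the actual measurement-and-recovery step with exact access to z, and uses random unit vectors as a stand-in for the well-separated set Z):

```python
import numpy as np

rng = np.random.default_rng(2)
n, t, D = 64, 4, 16.0

# Random unit vectors stand in for the well-separated set Z; pairwise
# distances concentrate near sqrt(2), far above the residual error below.
Z = rng.normal(size=(32, n))
Z /= np.linalg.norm(Z, axis=1, keepdims=True)

idx = rng.integers(0, len(Z), size=t)                    # the secret z_1..z_t
z = sum(Z[idx[j]] * D**(j - t + 1) for j in range(t))    # z_t has the largest weight

recovered, resid = [], z.copy()
for j in reversed(range(t)):                             # peel z_t, then z_{t-1}, ...
    scale = D**(j - t + 1)
    guess = int(np.argmin(np.linalg.norm(Z * scale - resid, axis=1)))
    recovered.append(guess)
    resid -= Z[guess] * scale
print(recovered[::-1] == list(idx))                      # True: every level decodes exactly
```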
Thus Az must contain this many bits of information; but if the entries of A are rational numbers with poly(n)-bounded numerators and (the same) poly(n)-bounded denominator, then each entry of Az can be described in O(t + log n) bits, so m · O(t + log n) = Ω(t log |Z|), which gives the desired bound on m. There are two issues that make the above outline not totally satisfactory, and we only briefly address how to resolve them here. First, the theorem statement makes no supposition on the entries of A being polynomially bounded. To resolve this, we perturb z with a tiny (polynomially small) amount of additive Gaussian noise, after which discretizing Az at an even tinier (but still polynomial) precision has negligible effect on the failure probability. The second issue is that the above outline requires the algorithm to recover all t vectors, so it only applies if the algorithm succeeds with 1 − 1/t probability rather than constant probability. This is resolved by using a reduction from the augmented indexing problem, which is a one-way communication problem where Alice has z_1, z_2, ..., z_t ∈ Z, Bob has i ∈ [t] together with z_{i+1}, ..., z_t, and Alice must send Bob a message so that Bob can output z_i with 2/3 probability. This still requires Ω(t log |Z|) bits of communication, and can be solved in O(m(t + log n)) bits of communication by sending Az as above. Formally, our lower bound states: Theorem 2.2. If A is an algorithm which picks a matrix A ∈ R^{m×n} and, given Ax, returns an x* satisfying (2) with probability ≥ 3/4, then m = Ω(min(k log(Lr/δ), n)). Constructing the set. The above lower bound approach relies on finding a large, well-separated set Z as in Theorem 2.1. We construct this aforementioned set Z within the n-dimensional ℓ2 ball of radius R such that any two points in the set are at least Ω(R) apart. Furthermore, since we wish to use a function-sparse recovery algorithm, we describe a function G: R^k → R^n and set the radius R such that G is L-Lipschitz. In order to get the desired lower bound, the image of G needs to contain a subset of at least (Lr)^{Ω(k)} points. First, we construct a mapping as described above from R to R^{n/k}, i.e., we need to find (Lr) points in B^{n/k}(R) that are mutually far apart. We show that certain binary linear codes over the alphabet {±R/√n} yield such points that are mutually R/√(3k) apart. We construct an O(L)-Lipschitz mapping of O(√(Lr)) points in the interval [0, r/√k] to a subset of these points. In order to extend this construction to a mapping from R^k to R^n, we apply the above function in a coordinate-wise manner. This results in a mapping with the same Lipschitz parameter. The points in R^n that are images of these points lie in a ball of radius R but could potentially be as close as R/√(3k). To get around this, we use an error-correcting code over a large alphabet to choose a subset of these points that is large enough and such that its points are still mutually at least R/√6 apart. 2.2 Sparsity-producing generative model. To produce a generative model whose range consists of all k-sparse vectors, we start by mapping R^2 to the set of positive 1-sparse vectors. For any pair of angles θ_1, θ_2, we can use a constant number of unbiased ReLUs to produce a neuron that is only active at points whose representation (r, θ) in polar coordinates has θ ∈ (θ_1, θ_2). Moreover, because unbiased ReLUs behave linearly, the activation can be made an arbitrary positive real by scaling r appropriately.
By applying this n times in parallel, we can produce n neurons with disjoint activation ranges, making a network R^2 → R^n whose range contains all 1-sparse vectors with nonnegative coordinates. By doing this k times and adding up the results, we produce a network R^{2k} → R^n whose range contains all k-sparse vectors with nonnegative coordinates. To support negative coordinates, we just extend the k = 1 solution to have two ranges within which it is non-zero: for one range of θ the output is positive, and for another the output is negative. This results in the following theorem: Theorem 2.3. There exists a 2-layer neural network G: R^{2k} → R^n, with O(kn) activations per layer, whose range contains all k-sparse vectors in R^n. In this section, we prove a lower bound for the sample complexity of function-sparse recovery by a reduction from a communication game. We show that the communication game can be won by sending a vector Ax and then performing function-sparse recovery. A lower bound on the communication complexity of the game implies a lower bound on the number of bits used to represent Ax if Ax is discretized. We can then use this to lower bound the number of measurements in A. Since we are dealing in bits in the communication game and the entries of a sparse recovery matrix can be arbitrary reals, we will need to discretize each measurement. We show first that discretizing the measurement matrix by rounding does not change the resulting measurements too much, which will allow our reduction to proceed. Matrix conditioning. We first show that, without loss of generality, we may assume that the measurement matrix A is well-conditioned. In particular, we may assume that the rows of A are orthonormal. We can multiply A on the left by any invertible matrix to get another measurement matrix with the same recovery characteristics. If we consider the singular value decomposition A = UΣV*, where U and V are orthonormal and Σ is 0 off the diagonal, this means that we can eliminate U and make the entries of Σ be either 0 or 1. The result is a matrix consisting of m orthonormal rows. Discretization. For well-conditioned matrices A, we use the following lemma (similar to one from [DIPW10]) to show that we can discretize the entries without changing the behavior by much: Lemma 3.1. Let A ∈ R^{m×n} be a matrix with orthonormal rows. Let A' be the result of rounding A to b bits per entry. Then for any v ∈ R^n there exists an s ∈ R^n with A'v = A(v − s) and ‖s‖_2 ≤ n 2^{−b} ‖v‖_2. Proof. Let A'' = A − A' be the error when discretizing A to b bits, so each entry of A'' is less than 2^{−b}. Then for any v, with s = A^T A''v, we have As = A''v and ‖s‖_2 = ‖A^T A''v‖_2 ≤ ‖A''v‖_2 ≤ n 2^{−b} ‖v‖_2. The Augmented Indexing problem. As in [DIPW10], we use the Augmented Indexing communication game, which is defined as follows: There are two parties, Alice and Bob. Alice is given a string y ∈ {0, 1}^d. Bob is given an index i ∈ [d], together with y_{i+1}, y_{i+2}, ..., y_d. The parties also share an arbitrarily long common random string r. Alice sends a single message M(y, r) to Bob, who must output y_i with probability at least 2/3, where the probability is taken over r. We refer to this problem as Augmented Indexing. The communication cost of Augmented Indexing is the minimum, over all correct protocols, of the length |M(y, r)| on the worst-case choice of r and y. The following theorem is well-known and follows from Lemma 13 of [MNSW98] (see, for example, an explicit proof in [DIPW10]): Theorem 3.2. The communication cost of Augmented Indexing is Ω(d). A well-separated set of points. We would like to prove Theorem 2.1, getting a large set of well-separated points in the image of a Lipschitz generative model.
Before we do this, though, we prove a k = 1 analog: Lemma 3.3. There is a set of points P in B^n ⊂ R^n of size 2^{Ω(n)} such that each pair of points x, y ∈ P satisfies ‖x − y‖ ∈ [1/3, 2/3]. Proof. Consider a τ-balanced linear code over the alphabet {±1/√n} with message length M. It is known that such codes exist with block length O(M/τ²) [BATS09]. Setting the block length to be n and τ = 1/6, we get that there is a set of 2^{Ω(n)} points in R^n such that the pairwise Hamming distance is between n/3 and 2n/3. Now we wish to extend this to arbitrary k while achieving the parameters in Theorem 2.1. Proof of Theorem 2.1. We first define an O(L)-Lipschitz map g: R → R^{n/k} that goes through a set of points that are pairwise Θ(R/√k) apart. Consider the set of points P from Lemma 3.3, suitably rescaled. Choose a subset P' that contains exactly min(Lr/R, exp(Ω(n/k))) points, and let g_1: [0, r/√k] → P' be a piecewise linear function that goes through all the points in P' in order. Then, we define g: R → R^{n/k} as the extension of g_1 clamped outside the interval [0, r/√k]. Also, every point (x_1, ..., x_k) with each coordinate in [0, r/√k] is mapped coordinate-wise into I^k, the k-fold product of the image set. However, there still exist distinct points x, y ∈ I^k (for instance, points that differ at exactly one coordinate) that are too close together. We construct a large subset of the points in I^k such that any two points in this subset are far apart, using error-correcting codes. Consider a subset A ⊂ P' such that |A| > |P'|/2 is a prime. For any integer z > 0, there is a prime between z and 2z, so such a set A exists. Consider a Reed-Solomon code of block length k, message length k/2, distance k/2 and alphabet A. The existence of such a code implies that there is a subset X of (P')^k of size at least (|P'|/2)^{k/2} such that every pair of distinct elements from this set disagree in k/2 coordinates. This translates into a distance of at least R/√6 in 2-norm. So, if we set G = g^{⊗k}, we get a set of (|P'|/2)^{k/2} ≥ (min(exp(Ω(n/k)), Lr/R))^{k/2} points which are at least R/√6 apart in 2-norm and lie within the ℓ2 ball of radius R. Lower bound. We now prove the lower bound for function-sparse recovery. Proof of Theorem 2.2. An application of Theorem 2.1 with R = √(Lrδ) gives us a set of points Z and a function G with Z = G(X) ⊆ R^n such that log(|Z|) = Ω(min(k log(Lr/δ), n)), for all x ∈ Z, ‖x‖_2 ≤ √(Lrδ), and for all distinct x, x' ∈ Z, ‖x − x'‖_2 ≥ √(Lrδ)/√6. Let d = log|X| · log n, and let D = 16√3(C + 1). We will show how to solve the Augmented Indexing problem on instances of size d = log(|Z|) · log(n) = Ω(k log(Lr/δ) log n) with communication cost O(m log n). The theorem will then follow by Theorem 3.2. Alice is given a string y ∈ {0, 1}^d, and Bob is given i ∈ [d] together with y_{i+1}, ..., y_d, as in the setup for Augmented Indexing. Alice splits her string y into log n contiguous chunks y^1, y^2, ..., y^{log n}, each containing log |X| bits. She uses y^j as an index into the set X to choose x_j. Alice defines x to be the geometric combination of x_1, ..., x_{log n} in which the weight of x_j is a power of the constant D, as in the overview. Alice and Bob use the common randomness R to agree upon a random matrix A with orthonormal rows. Both Alice and Bob round A to form A' with b = Θ(log(n)) bits per entry. Alice computes A'x and transmits it to Bob. Note that, since the entries of the x_j come from a fixed discrete alphabet, the x_j need not be discretized. From Bob's input i, he can compute the value j = j(i) for which the bit y_i occurs in y^j. Bob's input also contains y_{i+1}, ..., y_d, from which he can reconstruct x_{j+1}, ..., x_{log n}, and in particular can compute their total contribution z to x. Bob then computes A'z, and using A'x and linearity, he can compute A'(x − z) = A'w, where w is the contribution of x_1, ..., x_j to x. So from Lemma 3.1, there exists some s with A'w = A(w − s) and ‖s‖_2 small. Ideally, Bob would perform recovery on the vector A(w − s) and show that the correct point x_j is recovered.
However, since s is correlated with A and w, Bob needs to use a slightly more complicated technique. Bob first chooses another vector u uniformly from B n (R/D j) and computes A(w − s − u) = A w − Au. He then runs the estimation algorithm A on A and A(w − s − u), obtainingŵ. We have that u is independent of w and s, and that so as a distribution over u, the ranges of the random variables w − s − u and w − u overlap in at least a 1 − 1/n fraction of their volumes. Therefore w − s − u and w − u have statistical distance at most 1/n. The distribution of w − u is independent of A, so running the recovery algorithm on A(w − u) would work with probability at least 3/4. Hence with probability at least 3/4 − 1/n ≥ 2/3 (for n large enough),ŵ satisfies the recovery criterion for w − u, meaning Now, Since δ < Lr/4, this distance is strictly bounded by R/2 √ 6. Since the minimum distance in X is R/ √ 6, this means D j x j −ŵ 2 < D j x −ŵ 2 for all x ∈ X, x = x j. So Bob can correctly identify x j with probability at least 2/3. From x j he can recover y j, and hence the bit y i that occurs in y j. Hence, Bob solves Augmented Indexing with probability at least 2/3 given the message A x. Each entry of A x takes O(log n) bits to describe because A is discretized to up to log(n) bits and Hence, the communication cost of this protocol is O(m · log n). By Theorem 3.2, m log n = Ω(min(k log(Lr/δ), n) · log n), or m = Ω(min(k log(Lr/δ), n)). We show that the set of all k-sparse vectors in R n is contained in the image of a 2 layer neural network. This shows that function-sparse recovery is a generalization of sparse recovery. Lemma 4.1. There exists a 2 layer neural network G: Our construction is intuitively very simple. We define two gadgets G. Then, we set the i th output node (G(x 1, x 2) Varying the distance of (x 1, x 2) from the origin will allow us to get the desired value at the output node i. In a similar manner, G − i which produces negative values at output node i of G with the internal nodes defined as: The last ReLU activation preserves only negative values. Since G This is positive only when θ ∈ (β, π + β). Similarly, cos(β + α/2)x 1 − sin(β + α/2)x 2 = t sin(θ − (β + α/2)) and is positive only when θ ∈ (β + α/2, π + β + α/2). So, a + (i),1 and a + (i),2 are both non-zero when θ ∈ (β + α/2, π + β). Using some elementary trigonometry, we may see that: In Fact A.1, we show a proof of the above identity. Observe that when θ > β + α, this term is negative and hence b i = 0. So, we may conclude that G + i ((x 1, x 2)) = 0 if and only if (x 1, x 2) = (t sin(θ), t cos(θ)) with θ ∈ ((i−1)α, iα). Also, observe that G + i (t sin(β +α/2), t cos(β +α/2)) = t. Similarly G Proof of Theorem 2.3. Given a vector z that is non-zero at k coordinates, let i 1 < i 2 < · · · < i k be the indices at which z is non-zero. We may use copies of G from Lemma 4.1 to generate 1-sparse vectors v 1,..., v k such that (v j) ij = z ij. Then, we add these vectors to obtain z. It is clear that we only used k copies of G to create G sp. So, G sp can be represented by a neural network with 2 layers. Theorem 1 provides a reduction which uses only 2 layers. Then, using the algorithm from Theorem 3, we can recover the correct k-sparse vector using O(kd log(nk)) measurements. Since d = 4 and ≤ n, this requires only O(k log n) linear measurements to perform 2 / 2 (k, C)-sparse recovery.
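The angular gadget underlying Lemma 4.1 can be checked numerically. The sketch below is our code, not the paper's; the combination weight 2cos(α/2) comes from the elementary identity 2cos(α/2)·sin(x − α/2) = sin(x) + sin(x − α), which we take to be the identity referenced as Fact A.1. It implements the positive half of the R² → R^n map and verifies that exactly one output coordinate fires, with the radius as its value.

```python
import numpy as np

relu = lambda v: np.maximum(v, 0.0)

def one_sparse_net(x1, x2, n):
    """Positive half of the R^2 -> R^n gadget: output i fires only when
    the input's polar angle lies in ((i-1)*alpha, i*alpha), alpha = pi/n.
    (The paper mirrors this to a second angular range for negative values.)"""
    alpha = np.pi / n
    out = np.zeros(n)
    for i in range(n):
        b = i * alpha
        # For (x1, x2) = (t*sin(theta), t*cos(theta)):
        #   cos(phi)*x1 - sin(phi)*x2 = t*sin(theta - phi)
        a1 = relu(np.cos(b) * x1 - np.sin(b) * x2)
        a2 = relu(np.cos(b + alpha / 2) * x1 - np.sin(b + alpha / 2) * x2)
        # ReLU(a1 - 2*cos(alpha/2)*a2) is supported exactly on theta in (b, b+alpha);
        # dividing by sin(alpha/2) makes the peak value equal the radius t.
        out[i] = relu(a1 - 2 * np.cos(alpha / 2) * a2) / np.sin(alpha / 2)
    return out

# Encode value t at coordinate i via the point at angle (i + 1/2)*alpha, radius t:
n, i, t = 8, 3, 2.5
theta = (i + 0.5) * (np.pi / n)
y = one_sparse_net(t * np.sin(theta), t * np.cos(theta), n)
print(np.where(y > 1e-9)[0], y[i])   # -> [3] 2.5 (up to float error)
```

Summing k such gadgets, each fed by its own pair of inputs, gives the R^{2k} → R^n network of Theorem 2.3.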
Lower bound for compressed sensing w/ generative models that matches known upper bounds
441
scitldr
We hypothesize that end-to-end neural image captioning systems work seemingly well because they exploit and learn ‘distributional similarity’ in a multimodal feature space, by mapping a test image to similar training images in this space and generating a caption from the same space. To validate our hypothesis, we focus on the ‘image’ side of image captioning, and vary the input image representation but keep the RNN text generation model of a CNN-RNN constant. We propose a sparse bag-of-objects vector as an interpretable representation to investigate our distributional similarity hypothesis. We found that image captioning models (i) are capable of separating structure from noisy input representations; (ii) experience virtually no significant performance loss when a high dimensional representation is compressed to a lower dimensional space; (iii) cluster images with similar visual and linguistic information together; (iv) are heavily reliant on test sets with a similar distribution as the training set; (v) repeatedly generate the same captions by matching images and ‘retrieving’ a caption in the joint visual-textual space. Our experiments all point to one fact: that our distributional similarity hypothesis holds. We conclude that, regardless of the image representation, image captioning systems seem to match images and generate captions in a learned joint image-text semantic subspace. Image description generation, or image captioning (IC), is the task of automatically generating a textual description for a given image. The generated text is expected to describe, in a single sentence, what is visually depicted in the image, for example the entities/objects present in the image, their attributes, the actions/activities performed, entity/object interactions (including quantification), the location/scene, etc. (e.g. "a man riding a bike on the street").Significant progress has been made with end-to-end approaches to tackling this problem, where large-scale parallel image-description datasets such as Flickr30k BID41 and MSCOCO BID4 are used to train a CNN-RNN based neural network IC system BID36 BID17. Such systems have demonstrated impressive performance in the COCO captioning challenge 1 according to automatic metrics, seemingly even surpassing human performance in many instances (e.g. CIDEr score > 1.0 vs. human's 0.85) BID3. However, in reality, the performance of end-to-end systems is still far from satisfactory according to metrics based on human judgement 2. Thus, despite the progress, this task is currently far from being a solved problem. In this paper, we challenge the common assumption that end-to-end IC systems are able to achieve strong performance because they have learned to'understand' and infer semantic information from visual representations, i.e. they can for example deduce that "a boy is playing football" purely by learning directly from mid-level image features and the corresponding textual descriptions in an implicit manner, without explicitly modeling the presence of boy, ball, green field, etc. in the image. It is believed that the IC system has managed to infer that the phrase green field is associated with some'green-like' area in the image and is thus generated in the output description, or that the word boy is generated because of some CNN activations corresponding to a young person. However, there seems to be no concrete evidence that this is the case. 
Instead, we hypothesize that the apparently strong performance of end-to-end systems is attributed to the fact that they are exploiting the distributional similarity in the multimodal feature space. To the best of our knowledge, our paper gives the first empirical analysis of visual representations for the task of image captioning. What we mean by 'distributional similarity' is that IC systems essentially attempt to match images from the training set that are most similar to a test image, and generate a caption from the most similar training instances (or generate a 'novel' description from a combination of training instances, for example by 'averaging' the descriptions). Previous work has alluded to this observation BID16 BID36, but it has not been thoroughly investigated. This phenomenon could also be in part attributed to the fact that the datasets are repetitive and simplistic, with an almost constant and predictable linguistic structure BID18 BID7 BID36. In this paper we investigate the hypothesis of distributional similarity in IC by focusing on the image side of image captioning. Most previous work has concentrated on the text side of image captioning, e.g. by optimizing the language modelling capabilities of the RNN BID27 BID19 to improve its performance on automatic metrics. While there have been efforts on improving IC by utilizing or modeling images more effectively, for example by using attention over mid-level image features and high-level object proposals BID1, in this work we are specifically interested in interpretability, and we focus on using a simpler (and faster) model for empirical evaluation. We explore the basic yet effective CNN-RNN model BID17, and investigate the representational contributions while keeping the RNN generator constant. More advanced models can be considered specific variants of BID17. It is worth noting that we are interested in demonstrating the phenomenon of distributional similarity in IC, rather than achieving or improving state-of-the-art performance. As such, we do not resort to fine-tuning or extensive hyperparameter optimization or ensembles. Therefore, our model is not comparable to state-of-the-art models such as BID36, which optimize IC by fine-tuning the image representations, exploring beam size, scheduled sampling, and using ensemble models. Instead, we vary only the image representation to demonstrate that end-to-end IC systems utilize distributional similarity on the image side to generate captions, regardless of the image representation used. Our main contributions are: • An IC experiment where we vary the input image representation but keep the RNN text generation model constant (Section 3). This experiment demonstrates that regardless of the image representation (a continuous image embedding or a sparse, low-dimensional vector), end-to-end IC systems seem to utilize a visual-semantic subspace for IC. • The introduction of a simple, sparse bag-of-objects representation that contains information about the presence of objects in the images. We use this as a tool to investigate the contribution of images in the image captioning framework. • The introduction of pseudo-random vectors derived from object-level representations as a means to evaluate IC systems. Our results show that end-to-end models in this framework are remarkably capable of separating structure from noisy input representations. • An experiment where IC models are conditioned on image representations factorized and compressed to a lower dimensional space (Section 4.1).
We show that high dimensional image embeddings that are factorized to a lower dimensional representation and used as input to an IC model result in virtually no significant loss in performance, further strengthening our claim that IC models perform similarity matching rather than image understanding. • An analysis of different image representations and their transformed representations (Sections 4.2 and 4.3). We visualize the initial visual subspace and the learned joint visual semantic subspace and observe that the visual semantic subspace has learned to cluster images with similar visual and linguistic information together, further validating our claims of distributional similarity. • An experiment where the IC model is tested on an out-of-domain dataset (Section 4.4), which has a slightly different image distribution. We observe that models, including the state-of-the-art models, show better performance on test sets that have a similar distribution as the training set. However, their performance deteriorates when the distributions are slightly different. • An analysis of the uniqueness of captions generated by IC models using different image representations (Section 4.5). We hypothesize that captions are often repeated as they are usually generated by matching images and 'retrieving' a caption in the joint visual-textual space. Our experiments validate this claim. Overall, the study suggests that, regardless of the representation used, end-to-end IC models implicitly learn and exploit multimodal similarity spaces rather than performing actual image understanding. This study is in line with the recent work that explores the understanding of deep learning models and their representational interpretations BID23 BID32 BID30, and with works that have tried to delve into the image captioning task BID7 BID36. To the best of our knowledge, ours is the first work that investigates IC focusing specifically on image representations and their effects. For the experiments in Section 3, we base our implementation on a simple end-to-end approach by BID17. We use the LSTM BID14 based language model as described in BID42. To condition the image information, we first perform a linear projection of the image representation followed by a non-linearity: x = σ(W · Im_feat), where W ∈ R^{n×d} is the linear transformation matrix and σ is the non-linearity. We use Exponential Linear Units BID5 as the non-linear activation in all our experiments. Following BID35, we initialize the LSTM based caption generator with the projected image feature. The caption generator is trained to generate sentences conditioned on the image representation. We train the model by minimizing the cross-entropy, i.e., the sentence-level loss corresponds to the sum of the negative log likelihoods of the correct word at each time step: L(S|Im_feat; θ) = − Σ_t log Pr(w_t), where L(S|Im_feat; θ) is the sentence-level loss conditioned on the image feature Im_feat and Pr(w_t) is the probability of the word at time step t. This is trained with standard teacher forcing as described in BID31, where the correct word information is fed to the next state in the LSTM. Inference is typically performed with approximation techniques like beam search or sampling BID17 BID35. In this paper, as we are mainly interested in studying the effect of different image representations, we focus on the language output that the models can most confidently produce (a minimal sketch of the conditioned generator follows below).
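This sketch is ours, in PyTorch, not the authors' released code; the 128/256 dimensions match the settings reported in the experiments:

```python
import torch
import torch.nn as nn

class CondCaptioner(nn.Module):
    """Minimal sketch of the conditioned LSTM: project the image feature,
    apply a non-linearity (ELU, as in the paper), and use the result as
    the LSTM's initial hidden state."""
    def __init__(self, feat_dim, vocab, emb=128, hid=256):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(feat_dim, hid), nn.ELU())
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab)

    def forward(self, im_feat, tokens):            # teacher forcing
        h0 = self.proj(im_feat).unsqueeze(0)       # (1, B, hid)
        c0 = torch.zeros_like(h0)
        y, _ = self.lstm(self.embed(tokens), (h0, c0))
        return self.out(y)                         # logits for cross-entropy
```

Training would minimize a cross-entropy loss on these logits against the next-word targets; at test time, greedy argmax decoding starts from the same projected image state.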
Therefore, in order to isolate any other variables from the experiments, we generate captions using a greedy argmax based approach for consistency (unless stated otherwise, we always use greedy decoding). In this section, we verify our hypothesis that a'distributional similarity' space exist in end-to-end IC systems. Such systems attempt to match image representations in order to condition the RNN decoder to generate captions that are similar to the closest images, rather than actually understanding the image in order to describe the image. We keep the IC model constant (Section 2) across experiments, and vary only the image representation used. The following pre-trained CNNs are used:• VGG19 BID29 pre-trained on ILSVRC BID28.• ResNet152 BID13 ) also pre-trained on ILSVRC.• Places365-ResNet152 BID43, a variant of ResNet152 pre-trained on the Places2 dataset BID44. We investigate whether scene-specific categories are useful for IC without the network being trained to classify object-specific categories.• Hybrid1365-ResNet152 BID43, a ResNet152 variant trained on the concatenation of the ILSVRC and Places2 datasets and predicts both object and scene classes. We explore various representations derived from the CNNs above:Penultimate layer (Penultimate): Most previous attempts for IC use the output of the penultimate layer of a CNN pre-trained on ILSVRC. Previous work motivates using'off-the-shelf' feature extractors in the framework of transfer learning BID25 BID8. Such features have often been applied to image captioning BID21 BID17 BID35 BID9 and have been shown to produce state-of-the-art . Therefore, for each image, we extract the fc7 layer of VGG19 (4096D) and the pool5 layer for the ResNet152 variants (2048D).Class prediction vector (Softmax): We also investigate higher-level image representations, where each element in the vector is an estimated posterior probability of object categories. Note that the categories may not directly correspond to the captions in the dataset. While there are alternative methods that fine-tune the image network on a new set of object classes extracted in ways that are directly relevant to the captions BID37, we study the impact of off-the-shelf prediction vectors on the IC task. The intuition is that category predictions from pretrained CNN classifiers may also be beneficial for IC, alongside the standard approach of using midlevel features from the penultimate layer. Therefore, for each image, we use the predicted category posterior distributions of VGG19 and ResNet152 for 1000 object categories), Places365-ResNet152 (365 scene categories), and Hybrid-ResNet152 (1365 object and scene categories). Here we experiment with a method that utilizes the averaged word representations of top-k predicted object classes. We first obtain Softmax predictions using ResNet152 for 1000 object categories (synsets) per image. We then select the objects that have a posterior probability score > 5% and use the 300-dimensional pre-trained word2vec BID22 representations 4 to obtain the averaged vector over all retained object categories. This is motivated by the central observation that averaged word embeddings can represent semantic-level properties and are useful for classification tasks BID2. We also explore representing images using information from object detectors that identifies instances of object categories present in an image, rather than a global, image-level classification. This can potentially provide for a richer and more informative image representation. 
For this we use:• ground truth (Gold) region annotations for instances of 80 pre-defined categories provided with MSCOCO. It is worth noting that these were annotated independently of the image captions, i.e. people writing the captions had no knowledge of the 80 categories and the annotations. As such, there is no direct correspondence between the region annotations and image captions.• a state-to-the-art object detector YOLO BID26, pre-trained on MSCOCO for 80 categories (YOLO-Coco), and on MSCOCO and ILSVRC for over 9000 categories (YOLO-9k). We explore several representations derived from instance-level object class annotations/detectors above:Bag of objects (BOO): We represent each image as a sparse'bag of objects' vector, where each element represents the frequency of occurrence for each object category in the image (Counts). We also explore an alternative representation where we only encode the presence or absence of the object category regardless of its frequency (Binary), to determine whether it is important to encode object counts in the image. These representations help us examine the importance of explicit object categories and in a sense interactions between object categories (dog and ball) in the image representation. We investigate whether such a sparse and high-level BOO representation is helpful for IC. It is also worth noting that BOO is different from the Softmax representation above as it encodes the number of object occurrences, not the confidence of class predictions at image level. We compare BOO representations derived from the Gold annotations (Gold-Binary and Gold-Counts) and both YOLO-Coco and YOLO-9k detectors (Counts only). To further probe the capacity of the model to discern image representations in an image distributional similarity space, we propose a novel experiment where we examine a type of representation where similar images are represented using similar random vectors, which we term as pseudo-random vectors. We form this representation from BOO Gold-Counts and BOO Gold-Binary. Formally, Im f eat = o∈Objects f × φ o, where φ o ∈ R d is an object-specific random vector and f is a scalar representing counts of the object category. In the case of PseudorandomCounts, f is the frequency counts from Gold-Counts. In the case of Pseudorandom-Binary, f is either 0 or 1 based on Gold-Binary. We use d = 120 for these experiments. Dataset We evaluate image captioning conditioned on different representations on the most widely used dataset for IC, MSCOCO BID4. The dataset consists of 82, 783 images for training, with at least five captions per image, totaling to 413, 915 captions. We perform model selection on a 5000-image development set and report the on a 5000-image test set using standard, publicly available splits 5 of the MSCOCO validation dataset as in previous work BID17.Evaluation Metrics We evaluated system outputs using the standard evaluation metrics for image captioning using the most common metrics: BLEU BID24 which is computed from 1-gram to 4-gram precision scores (B-1 · · · B-4), Meteor BID6 ) (M) and CIDEr BID34 (C) and SPICE BID0 ) (S). All these metrics are based on some form of n-gram overlap between the system output and the reference captions (i.e. no image information is used). For each system-generated caption, we compare against five references. We used the publicly available cocoeval script for evaluation. 
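For concreteness, the bag-of-objects and pseudo-random representations can be computed as below (our sketch; the paper does not specify the distribution of the object-specific random vectors φ_o, so the Gaussian initialization here is an assumption):

```python
import numpy as np

N_CATS, D = 80, 120
rng = np.random.default_rng(0)
phi = rng.normal(size=(N_CATS, D))   # fixed object-specific random vectors (assumed Gaussian)

def bag_of_objects(labels, binary=False):
    """labels: array of category ids for the object instances in one image."""
    v = np.bincount(labels, minlength=N_CATS).astype(float)   # Counts variant
    return (v > 0).astype(float) if binary else v             # Binary variant

def pseudo_random(labels, binary=False):
    # Im_feat = sum_o f * phi_o, with f the count (or 0/1) of category o
    return bag_of_objects(labels, binary) @ phi

print(pseudo_random(np.array([3, 3, 17])).shape)   # (120,)
```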
6 Model Settings and Hyperparameters We use a single hidden layer LSTM with 128-dimensional word embeddings and 256-dimensional hidden dimensions. As training vocabulary we retain only words that appear at least twice. We report results of IC on MSCOCO in Table 1, where the IC model (Section 2) is conditioned on the various image representations described in Section 3.1. (Table 1: Results on the MSCOCO test split, where we vary only the image representation and keep other parameters constant. The captions are generated with beam = 1. We report BLEU, Meteor, CIDEr and SPICE scores.) As expected, using random image embeddings clearly does not provide any useful information and performs poorly. The Softmax representations with similar sets of object classes (VGG19, ResNet152, and Hybrid1365-ResNet152) have very similar performance. However, the Places365-ResNet representations perform poorly. We note that the posterior distribution may not directly correspond to captions, as there are many words and concepts that are not contained in the set of object classes. Our results differ from those of BID37 and BID39, where the object classes have been fine-tuned to correspond directly to the caption vocabulary. We posit that the degradation in performance is due to spurious probability distributions over object classes for similar-looking images. The performance of the Pool5 image representations shows a similar trend for VGG19, ResNet152, and Hybrid1365-ResNet152; ResNet152 is slightly better in performance. The Places365-ResNet representation performs poorly. We posit that the representations from the image network trained on object classes rather than scene classes are able to capture more fine-grained image details, whereas the image network trained with scene-based classes captures more coarse-grained information. The performance of the averaged top-k word embeddings is similar to that of the Softmax representation. This is interesting, since the averaged word representational information is mostly noisy: we combine top-k synset-level information into one single vector; however, it still performs competitively. We observe that the performance of the Bag of Objects (BOO) sparse 80-dimensional annotation vector is better than all other image representations, judging by the CIDEr score. We remark here again that this is despite the fact that the annotations may not directly correspond to the semantic information in the image or the captions. The sparse representational information is indicative of the presence of only a subset of potentially useful objects. We notice two distinct patterns: a marked difference between the Binary and Count based representations. This takes us back to the motivation that image captioning ideally requires information about objects, interactions between the objects, and attribute information. Although our representation is very sparse on object interactions, it captures the basic concept of the presence of more than one object of the same kind, and thus provides some extra information. A similar trend is observed by BID40, although in their models they further try to learn interactions using a specified object RNN. We also notice that objects predicted using YOLOCoco perform better than those from YOLO9k. This is probably expected, as YOLOCoco was trained on the same dataset, hence obtaining better object proposals.
We also observed that with YOLO9k, a significant number of objects were predicted for the test images that were not seen in the training set (around 20%). The most surprising result is the performance of the pseudo-random vectors. We notice that both the pseudo-random binary and the pseudo-random count based vectors perform almost as well as the Gold objects. This suggests that the conditioned RNN is able to remove noise and learn some sort of a common 'visual-linguistic' semantic subspace. We perform further analysis on the different image representations to gain a better understanding of the representations and to demonstrate our distributional similarity hypothesis. In Section 3.3, we observed encouraging results from the bag of objects based representations, despite their being sparse, low-dimensional, and only partially relevant to captions. Interestingly, using pseudo-random vectors derived from bag of objects also gave excellent performance despite the added noise. This leads to the question: are high-dimensional vectors necessary or relevant? To answer this, we evaluate whether the performance of the model significantly degrades if we reduce the dimensionality of a high dimensional representation. We experiment with three exploratory factor analysis based methods - Principal Component Analysis (PCA) BID12, Probabilistic Principal Component Analysis (PPCA) BID33 and Independent Component Analysis (ICA) BID15. In all cases, we obtain 80-dimensional factorized representations of ResNet152 pool5 (2048D), which is commonly used in IC. We summarize our experiment in TAB2. We observe that the representations obtained by all the factor models seem to retain the necessary representational power to produce appropriate captions, equivalent to the original representation. This seems contradictory, as we expected a loss in information content when compressing to an arbitrary 80 dimensions. This experiment indicates that the model is not explicitly utilizing the full expressiveness of the 2048-dimensional representations. We conclude that the model is able to learn from seemingly weak, structured information and still achieve performance that is close to that obtained with the full representation. In this section, we investigate the distributional similarity hypothesis by inspecting the regularities in the initial representation state for several representations from Section 3.1, using the interpretable bag-of-objects representation. If the representation is informative for IC, then the representations should ideally group semantically related images together, and in turn allow for relevant captions to be generated. We compare different image representations with respect to their ability to group and distinguish between semantically related images. For this, we selected three categories from MSCOCO ("dog", "person", "toilet") and also pairwise combinations of these ("dog+person", "dog+toilet", "person+toilet"). Up to 25 images were randomly selected for each of these six groups (single category or pair) such that the images are annotated with only the associated categories. Each group is represented by the average image feature of these images. FIG1 shows the cosine distances between each group, for each of our image representations. The Bag of Objects model clusters these groups the best, as expected (e.g. the average image representation of "dog" correlates with images containing "dog" as a pair, like "dog+person" and "dog+toilet"). A sketch of this group-distance computation is given below.
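Our sketch of the computation; `feats_by_group` is a hypothetical dict mapping each category group (e.g. "dog", "dog+person") to the stack of its image features:

```python
import numpy as np
from itertools import combinations

def cosine_dist(a, b):
    return 1 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def pairwise_group_distances(feats_by_group):
    # Represent each group by the mean of its (up to 25) image features,
    # then compare all group pairs by cosine distance.
    mu = {g: np.mean(f, axis=0) for g, f in feats_by_group.items()}
    return {(g1, g2): cosine_dist(mu[g1], mu[g2])
            for g1, g2 in combinations(mu, 2)}
```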
The Softmax models seem also to exhibit semantic clusters, although to a lesser extent. This can be observed with "person", where the features are not semantically similar to any other groups. The most likely reason is that there is no "person" category in ILSVRC. Also, the Places365 and Hybrid1365 Softmax models (FIG1) showed very strong similarity for images containing "toilet", whether or not they contain "dog" or "person", possibly because they capture scene features. On the other hand, Pool5 features seem to result in images that are more similar to each other than Softmax overall. Considering our earlier hypothesis, proposed in Section 3.3, that the conditioned RNN is learning some sort of a common 'visual-linguistic' semantic space, we explore the difference in representations between the initial representational space and the transformed representational space. The transformation is learned jointly as a subtask of the image captioning. We posit that image representations in the transformed space will be more semantically coherent with respect to both images and captions. To visualize the two representational spaces, we use Barnes-Hut t-SNE BID20 to compute a 2-dimensional embedding over the test split. In general, we found that images are initially clustered by visual similarity (Pool5) and semantic similarity (Softmax, Bag of Objects). After transformation, we observe that some linguistic information from the captions has resulted in different types of clusters. Figure 2 highlights some interesting observations of the changes in clustering across three different representations. For Pool5, images seem to be clustered by their visual appearance, for example the snow scenes in FIG2, regardless of the subjects in the images (people or dogs). After transformation, separate clusters seem to be formed for snow scenes involving a single person, groups of people, and dogs. Interestingly, images of dogs in fields and snow scenes are also drawn closer together. Softmax (FIG2) shows many small, isolated clusters before transformation. After transformation, bigger clusters seem to be formed - suggesting that the captions have again drawn related images together despite their being different in the Softmax space. For bag of objects (FIG2), objects seem to be clustered by co-occurrence of object categories, for example toilets and kitchens are clustered since they share sinks. Toilets and kitchens seem to be further apart in the transformed space. A similar observation was made by BID36, in which the authors observe that end-to-end based image captioning models are capable of performing retrieval tasks with performance comparable to task-specific models trained with a ranking loss. We further perform a similar analysis on the pseudo-random representations (FIG2). We observe that the initial representations have very little explicit information and do not cluster. The projected representations, however, form clusters that mimic the projected space of the bag-of-objects cluster. Full-sized versions of the images in FIG2 are presented anonymously at: https://github.com/anonymousiclr/HJNGGmZ0Z We now demonstrate that end-to-end models are heavily reliant on datasets that have a similar training and test distribution. We posit that an IC system that performs similarity matching will not perform well on a slightly different domain for the same task. Demonstrating this will further validate our hypothesis that IC systems perform image matching to generate image captions.
Thus, we evaluate several models trained on MSCOCO on 1000 test image samples from the Flickr30k (BID41) dataset 7. Like MSCOCO, Flickr30k is an image description dataset; however, unlike MSCOCO, the images have a different distribution and the descriptions are slightly longer and more descriptive. We evaluate the captions generated by our model with the ResNet152 pool5 representation and by two other state-of-the-art models pretrained on MSCOCO: a) Self-Critical (SC) BID27, based on self-critical sequence training, which uses reinforcement learning with metric-based rewards, and b) Bottom Up and Top Down (TDBU) BID1, based on top-down and bottom-up attention using object region proposals. Both state-of-the-art models are much more complex than the image-conditioned RNN based language model. The results are summarized in Table 3. We observe that the scores drop by a large margin. A similar observation was made by BID36, who attributed the drop in scores to the linguistic mismatch between the datasets. However, we probed the out-of-training-vocabulary words in the Flickr30k test set and observed a rate of around 8.6%, which seems to be the usual unseen rate. This suggests that there is more to the issue than mere vocabulary mismatch. We observe that while typical sentences in Flickr30k are structurally different and generally longer, the model is still unable to generate good bigrams or even unigrams, as is evident from the B-1 and B-2 scores in Table 3. We further investigated the distributions of objects in the images using the YOLO object detector (trained on MSCOCO). We first detect objects on the MSCOCO training set and our MSCOCO test set, followed by objects on the Flickr30k test set. In Figure 3 we show the normalized frequency versus the distribution of objects over the MSCOCO train and test sets. We notice that the distributions are very similar and mostly overlap. In FIG3 we show the normalized frequency versus the distribution of objects detected over the MSCOCO train and Flickr30k test sets. We observe that the two distributions are slightly different; they do not overlap as closely as in Figure 3. We hypothesize that this difference in distribution is one of the factors behind the lower performance of a model that is trained on MSCOCO and evaluated on the Flickr30k test set.
The key idea is that if the IC systems perform some form of image matching and a complex text retrieval from the training set, then the nearest neighbor (from training) of a test image should have a similar caption to the one generated by the model. We note that the model is clearly not performing text retrieval, as the LSTM does generate novel captions, possibly by aggregating or 'averaging' the captions of similar images and performing some factorization.

(Table 5: k-nearest neighbor experiment.)

To perform this experiment, we begin by generating captions for every training image using the bag of objects model (with frequency counts). We then compute the k-nearest training images for each given test image using both the bag of objects representation and its projection (Eq. 2). Finally, we compute the similarity score between the generated caption of the test image and all k-nearest captions. The similarity score measures how well a generated caption matches its nearest neighbors' captions. We expect the score to be high if the IC system generates a caption similar to something 'summarized' from the training set. We report our results in Table 5. We observe that overall the captions seem to closely match the captions of the 5 nearest training images. Further analysis showed that 2301 captions had nearest images at a zero distance, i.e., the same exact representation was seen at least 5 times during training (note that CIDEr gives a score of 10 only if the test caption and all references are the same). We found that among the non-exact image matches, the projected image representation better captures candidates in the training set than the bag of objects. We further analyze the captions and provide details in the appendix; a minimal sketch of the k-nearest-neighbor procedure is given below.
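The following is a minimal sketch (not the authors' released code) of this k-nearest-neighbor check; it assumes the image representations are fixed-size vectors and that a `cider_score(candidate, references)` function is available, e.g. from the coco-caption toolkit. Both names are illustrative stand-ins.

```python
# Sketch of the k-nearest-neighbour caption-similarity experiment, assuming
# train_vecs/test_vecs are 2-D arrays of image representations and
# train_caps/test_caps hold the corresponding (generated) captions.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_caption_similarity(train_vecs, train_caps, test_vecs, test_caps,
                           cider_score, k=5):
    nn = NearestNeighbors(n_neighbors=k).fit(train_vecs)
    scores = []
    for vec, generated in zip(test_vecs, test_caps):
        _, idx = nn.kneighbors(vec.reshape(1, -1))
        references = [train_caps[j] for j in idx[0]]
        # A high score means the generated caption matches the captions of
        # visually similar training images, consistent with the hypothesis.
        scores.append(cider_score(generated, references))
    return float(np.mean(scores))
```

A high average score supports the matching hypothesis: the caption generated for a test image closely resembles the captions of its nearest training images.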
We hypothesized that IC systems essentially exploit a distributional similarity space to 'generate' image captions, by attempting to match a test image to similar training image(s) and generating an image caption from these similar images. Our study focused on the image side of image captioning: we varied the image representations while keeping the text generation component of an end-to-end CNN-RNN model constant. We found that regardless of the image representation, end-to-end IC systems seem to match images and generate captions in a visual-semantic subspace for IC. We conclude that:
• A sparse, low-dimensional bag-of-objects representation can be used as a tool to investigate the contribution of images in IC; we demonstrated that such a vector is sufficient for generating good image captions;
• End-to-end IC models are remarkably capable of separating structure from noisy input representations, as demonstrated by pseudo-random vectors;
• End-to-end IC models suffer virtually no significant loss in performance when a high-dimensional representation is factorized to a lower-dimensional space;
• End-to-end IC models have learned a joint visual-textual semantic subspace by clustering images with similar visual and linguistic information together;
• End-to-end IC models rely on test sets with a distribution similar to that of the training set for generating good captions;
• End-to-end IC models repeatedly generate the same captions by matching images in the joint visual-textual space and 'retrieving' a caption in the learned joint space.

All the observations above strengthen our distributional similarity hypothesis: that end-to-end IC performs image matching and generates captions for a test image from similar image(s) in the training set, rather than performing actual image understanding. Our findings provide novel insights into what end-to-end IC systems are actually doing, which previous work only suggests or hints at without concretely demonstrating the distributional similarity hypothesis. We believe our findings are important for the IC community to further advance image captioning in a more informed manner.

A ANALYSIS ON GENERATED CAPTIONS

(Figure 5: Example outputs from our system with different representations; the sub-captions indicate the annotation along with the frequency in braces, e.g. panel (c), Bag of objects: person, tie. We also show the CIDEr score and the difference in CIDEr score relative to the Bag of Objects representation.)

Here, we provide a qualitative analysis of the different image representations presented and gain some insight into how they contribute to the IC task. The Bag of Objects representation led to strong performance in IC despite being extremely sparse and low-dimensional (80 dimensions). Analyzing the test split, we found that each vector contains only 2.86 non-zero entries on average (standard deviation 1.8, median 2). Thus, with such minimal information being provided to the generator RNN, we find it surprising that it performs so well. We compare the output of the remaining models against the Bag of Objects representation by investigating what each representation adds to or subtracts from this simple, yet strong model. We start by selecting images (from the test split) annotated with the exact same Bag of Objects representation, which should result in the same caption. For our qualitative analysis, several sets of one to three MSCOCO categories were manually chosen. For each set, images were selected such that there is exactly one instance of each category in the set and zero of the others. We then shortlisted images where the captions generated by the Bag of Objects model produced the five highest and five lowest CIDEr scores (ten images per set). We then compared the captions sampled for each of the other representations. Figure 5 shows some example outputs from this analysis. In Figure 5a, Bag of Objects achieved a high CIDEr score despite only being given "bird" as input, mainly by 'guessing' that the bird will be perching/sitting on a branch. The object-based Softmax (VGG and ResNet) models gave an even more accurate description, as "owl" is the top-1 prediction of both representations (96% confidence for VGG, 77% for ResNet). Places365 predicted "swamp" and "forest". The Penultimate features, on the other hand, struggled with representing the images correctly. In Figure 5b, Bag of Objects struggled with lack of information (only "airplane" is given), the Softmax features mainly predicted "chainlink fence", Places365 predicted "kennel" (hence the dog description), and it is most likely that Penultimate captured the fence-like features in the image rather than the plane. In Figure 5c, the Softmax features generally managed to generate a caption describing a woman despite not explicitly containing the 'woman' category. This is because other correlated categories were predicted, such as "mask", "wig", "perfume", "hairspray", and, in the case of Places365, "beauty salon" and "dressing room".
Our model settings were:
• LSTM with 128-dimensional word embeddings and 256-dimensional hidden representations
• Dropout over the LSTM of 0.8
• Adam for optimization
• A fixed learning rate of 4e-4
We report our results by keeping the above settings constant.
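As a rough illustration of these settings, here is a minimal PyTorch sketch of an image-conditioned LSTM language model with the hyper-parameters above. Using the image vector to initialize the LSTM state, and applying dropout to the LSTM output, are simplifying assumptions rather than the exact conditioning used in the paper.

```python
# Minimal sketch of a conditioned RNN caption generator (assumptions noted above).
import torch
import torch.nn as nn

class ConditionedLSTM(nn.Module):
    def __init__(self, vocab_size, image_dim, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.img_proj = nn.Linear(image_dim, hid_dim)  # learned image transformation
        self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.drop = nn.Dropout(0.8)                    # dropout over the LSTM
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, image_vec, tokens):
        # The projected image representation initializes the LSTM hidden state.
        h0 = torch.tanh(self.img_proj(image_vec)).unsqueeze(0)
        c0 = torch.zeros_like(h0)
        hidden, _ = self.lstm(self.embed(tokens), (h0, c0))
        return self.out(self.drop(hidden))             # next-word logits

model = ConditionedLSTM(vocab_size=10000, image_dim=2048)  # e.g. ResNet152 Pool5
optimizer = torch.optim.Adam(model.parameters(), lr=4e-4)
```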
This paper presents an empirical analysis of the role of different types of image representations and probes the properties of these representations for the task of image captioning.
We present a sequence-to-action parsing approach for the natural language to SQL task that incrementally fills the slots of a SQL query with feasible actions from a pre-defined inventory. To account for the fact that typically there are multiple correct SQL queries with the same or very similar semantics, we draw inspiration from syntactic parsing techniques and propose to train our sequence-to-action models with non-deterministic oracles. We evaluate our models on the WikiSQL dataset and achieve an execution accuracy of 83.7% on the test set, a 2.1% absolute improvement over the models trained with traditional static oracles that assume a single correct target SQL query. When further combined with the execution-guided decoding strategy, our model sets a new state-of-the-art performance at an execution accuracy of 87.1%. Many mission-critical applications in health care, financial markets, and business process management store their information in relational databases (BID10; BID22; BID16). Users access that information using a query language such as SQL. Although expressive and powerful, SQL is difficult to master for non-technical users. Even for an expert, writing SQL queries can be challenging, as it requires knowing the exact schema of the database and the roles of various entities in the query. Hence, a long-standing goal has been to allow users to interact with the database through natural language (BID0; BID24). The key to achieving this goal is understanding the semantics of the natural language statements and mapping them to the intended SQL. This problem, also known as NL2SQL, was previously understudied, largely due to the limited availability of annotated data. Without paired natural language statements and SQL queries, a weak supervision approach may be adopted, which reduces supervision from annotated SQL queries to answers (BID19). This is a more difficult learning problem. Therefore, only with the recent release of a number of large-scale annotated NL2SQL datasets (BID36; BID6) did we start to see a surge of interest in solving this problem. Existing NL2SQL approaches largely fall into two categories: sequence-to-sequence style neural "machine translation" systems (BID36; BID5) and sets of modularized models, each predicting a specific part of the SQL query (BID32; BID34). The former class suffers from the requirement of labeling a single ground truth query while multiple semantically equivalent queries exist for each intent. For example, as noticed by BID36, the ordering of filtering conditions in a query does not affect execution but does affect generation. To account for this, techniques such as reinforcement learning have been used on top of those sequence-to-sequence models. The second class of models employs a sequence-to-set approach: they first predict the table columns present in the query and then independently predict the rest for each column. This avoids the ordering issue, but makes it harder to leverage inter-dependencies among conditions. In this work, we develop a sequence-to-action parsing approach (Section 3) for the NL2SQL problem. It incrementally fills the slots of a SQL query with actions from an inventory designed for this task. Taking inspiration from training oracles in incremental syntactic parsing (BID8), we further propose to use non-deterministic oracles (Section 4) for training the incremental parsers. These oracles permit multiple correct action continuations from a partial parse and are thus able to account for the logical form variations.
Our model combines the advantage of a sequence-to-sequence model that captures inter-dependencies within a sequence of predictions and a modularized model that avoids any standardized linearization of the logical forms.

(Figure 1: Our running example. The input is a natural language question, "What is the height of Willis Tower in Chicago?", and a table schema, and the output is an executable SQL query: SELECT `Height (ft)` WHERE Name = "Willis Tower" AND Location = "Chicago". Table contents are shown here, but unknown to our models.)

We evaluate our models on the WikiSQL dataset and observe a performance improvement of 2.1% when comparing non-deterministic oracles with traditional static oracles. We further combine our approach with the execution-guided decoding strategy and achieve a new state-of-the-art performance with 87.1% test execution accuracy. Experiments on a filtered ATIS dataset additionally confirm that our models can be applied to other NL2SQL datasets. Given an input natural language question, our goal is to generate its corresponding SQL query. In the following and throughout the paper, we use the WikiSQL dataset (BID36) as our motivating example. However, it should be noted that our approach is generally applicable to other NL2SQL data, with proper choice of an action inventory and redesign of parser states. Figure 1 shows an example. The SQL structure of the WikiSQL dataset queries is restricted and always follows the template SELECT agg selcol WHERE col op val (AND col op val)*. Here, selcol is a single table column and agg is an aggregator (e.g., COUNT, SUM, empty). The WHERE segment is a sequence of conjunctive filtering conditions. Each op is a filtering operator (e.g., =) and the filtering value val is mentioned in the question. Although the dataset comes with a "standard" linear ordering of the conditions, the order is actually irrelevant given the semantics of AND.

Throughout the paper we denote the input to the parser as x. It consists of a natural language question w with tokens w_i and a single table schema c with column names c_j. A column name c_j can have one or more tokens. The parser needs to generate an executable SQL query y as its output. Given an input x, the generation of a structured output y is broken down into a sequence of parsing decisions. The parser starts from an initial state and incrementally takes actions according to a learned policy. Each action advances the parser from one state to another, until it reaches one of the terminal states, where we may extract a complete logical form y. We take a probabilistic approach to model the policy. It predicts a probability distribution over the valid set of subsequent actions given the input x and the running decoding history. The goal of training such an incremental semantic parser is then to optimize this parameterized policy. Formally, we let P_θ(y|x) = P_θ(a|x), where θ denotes the model parameters. Execution of the action sequence a = {a_1, a_2, ..., a_k} leads the parser from the initial state to a terminal state that contains the parsing result y. Here we assume that each y has only one corresponding action sequence a, an assumption that we will revisit in Section 4.
The probability of the action sequence is further factored as the product of incremental decision probabilities:

P_θ(a | x) = ∏_{i=1}^{k} P_θ(a_i | x, a_{< i}).

During inference, instead of attempting to enumerate over the entire output space and find the highest scoring a* = argmax_a P_θ(a | x), our decoder takes a greedy approach: at each intermediate step, it picks the highest scoring action according to the policy, a*_i = argmax_{a_i} P_θ(a_i | x, a*_{< i}).

(Table 1 lists, for each action type, the resulting state after taking the action at a state p and the parameter representation used for scoring.)

In the following subsections, we define the parser states and the inventory of actions, followed by a description of our encoder-decoder neural-network model architecture. We first look at a structured representation of a full parse corresponding to the example in Figure 1, as given in Table 1. The action CONDVAL selects a span of text w_{i:j} from the input question w. In practice, this leads to a large number of actions, quadratic in the length of the input question, so we break down CONDVAL into two consecutive actions, one selecting the starting position w_i and the other selecting the ending position w_j for the span. At the end of the action sequence, we append a special action END that terminates the parsing process and brings the parser into a terminal state. As an example, the query in Figure 1 translates to an action sequence of {AGG(NONE), SELCOL(c_3), CONDCOL(c_1), CONDOP(=), CONDVAL(w_{5:6}), CONDCOL(c_2), CONDOP(=), CONDVAL(w_{8:8})}.

The above definitions assume that all valid sequences have the form AGG SELCOL (CONDCOL CONDOP CONDVAL)* END. This guarantees that we can extract a complete logical form from each terminal state. For other data with a different SQL structure, a redesign of the action inventory and parser states is required.

We first assume that we have some context-sensitive representations r^W_i for the question tokens and r^C_j for the column names; we return to their construction below. The main component of our decoder is to model a probability distribution P_θ(a | x, a_{< i}) over potential parser actions a, conditioned on the input x and the past actions a_{< i}. It has two main challenges: there is no fixed set of valid parser actions, as it depends on the input and the current parser state; and the parser decision is context-dependent, as it relies on the decoding history and the information embedded in the input question and column headers. We adopt an LSTM-based decoder framework and address the first challenge through individual scoring of actions. The model scores each candidate action a as s_a and uses a softmax function to normalize the scores into a probability distribution. At time step i, we denote the current decoder hidden state as h^DEC_i and model the score of a in the bilinear form s_a = (h^DEC_i)^T U r^A_a, where r^A_a is a vector representation of the action a, modeled as the concatenation of the action embedding and the parameter representation. The form of the latter is given in Table 1.

The dependencies between the parser decisions and the input question and column headers are captured through a dot-product attention mechanism (BID20). The input to the first layer of our decoder LSTM at time step i + 1 is a concatenation of the output action representation r^A_{a_i} from the previous time step i, a question attention vector e^W_i, and a column header attention vector e^C_i.
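To make the inference procedure concrete, the following is a minimal sketch of the greedy decoding loop with the bilinear scoring above. Here `valid_actions`, `action_repr`, and `step_decoder` are illustrative stand-ins for the parser-state logic, the action representation r^A_a, and the decoder LSTM step; they are not the authors' actual API.

```python
# Sketch of greedy sequence-to-action decoding with bilinear action scoring.
import torch

def greedy_decode(state, h_dec, U, valid_actions, action_repr, step_decoder):
    actions = []
    while not state.is_terminal():
        candidates = valid_actions(state)            # depends on the parser state
        reprs = torch.stack([action_repr(a) for a in candidates])
        scores = reprs @ (U @ h_dec)                 # bilinear scores s_a
        best = candidates[int(scores.argmax())]
        actions.append(best)
        state = state.apply(best)                    # advance the parser state
        h_dec = step_decoder(h_dec, best)            # advance the decoder LSTM
    return actions
```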
Now we return to the context-sensitive representations r^W_i and r^C_j. Ideally, these representations should be both intra-context-sensitive, i.e. aware of information within the sequences, and inter-sequence-dependent, i.e. utilizing knowledge about the other sequences. These intuitions are reflected in our model design as intra-sequence LSTMs, self-attention, and cross-serial attention. Our model architecture is illustrated in FIG1. Each word w_i is first mapped to its embedding, and then fed into a bi-directional LSTM (bi-LSTM) that associates each position with a hidden state h^W_i. For column headers, since each column name can have multiple words, we apply word embedding lookup and a bi-LSTM for each column name, and use the final hidden state from the bi-LSTM as the initial representation of the column name. Next, we apply self-attention (BID27) to contextualize this initial representation into h^C_j. After obtaining these intra-context-sensitive representations h^W_i and h^C_j, we use cross-serial dot-product attention (BID20) to get a weighted average of the hidden states of the other sequence, yielding the final representations r^W_i and r^C_j.

Previously, we assumed that each natural language question has a single corresponding SQL query, and each query has a single underlying correct action sequence. However, these assumptions do not hold in practice. One well-observed example is the ordering of the filtering conditions in the WHERE clause: reordering those conditions leads to different action sequences. Furthermore, we identify another source of ambiguity in Section 4.2, where a question can be expressed by different SQL queries with the same execution results. These queries are equivalent from an end-user perspective. For both cases, we obtain multiple correct "reference" transition sequences for each training instance, and there is no single target policy for our model to mimic during training. To solve this, we draw inspiration from syntactic parsing and define non-deterministic oracles (BID8) that allow our parser to explore alternative correct action sequences. In contrast, the training mechanism we discussed in Section 3 is called a static oracle. We denote the oracle as O, which returns a set of correct continuation actions O(x, a_{< t}) at time step t. Taking any action from the set can lead to some desired parse among a potentially large set of correct results. The training objective for each instance, L_x, is defined as:

L_x = − Σ_i log P_θ(a_i | x, a_{< i}),    (1)

where a_{< i} denotes the sequence of actions a_1, ..., a_{i−1} and a_i = argmax_{a ∈ O(x, a_{< i})} s_a, the most confident correct action to take as decided by the parser during training. When O is a static oracle, it always contains a single correct action. In that scenario, Equation 1 reduces to a naive cross-entropy loss. When O is non-deterministic, the parser can be exposed to different correct action sequences and is no longer forced to conform to a single correct action sequence during training.

Training a text-to-SQL parser is known to suffer from the so-called "order-matters" issue. The filtering conditions of the SQL queries do not presume any ordering. However, an incremental parser must linearize queries and thus imposes a pre-defined order. A correct prediction that differs from a gold labeling in its ordering of conditions may then not be properly rewarded. Prior work has tackled this issue through reinforcement learning (BID36) and a modularized sequence-to-set solution (BID32). The former lowers optimization stability and increases training time, while the latter complicates model design to capture inter-dependencies among clauses: information about a predicted filtering condition is useful for predicting the next condition. We leverage non-deterministic oracles to alleviate the "order-matters" issue.
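As an illustration of the objective in Equation 1, the following is a minimal sketch of one training step with a non-deterministic oracle. Here `scores` is assumed to hold one logit per action in the global inventory and `oracle_set` is the index set O(x, a_{< i}); both are illustrative names.

```python
# Sketch of the per-step non-deterministic oracle loss (Eq. 1).
import torch
import torch.nn.functional as F

def oracle_step_loss(scores, oracle_set):
    oracle_ids = torch.tensor(sorted(oracle_set))
    # The most confident *correct* action becomes the target for this step.
    target = oracle_ids[scores[oracle_ids].argmax()]
    return F.cross_entropy(scores.unsqueeze(0), target.unsqueeze(0))

# With a static oracle, oracle_set has a single element, and this reduces to
# the usual cross-entropy loss against the one annotated action.
```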
Our model combines the advantage of the incremental approach, which leverages inter-dependencies among clauses, with that of the modularized approach, which provides higher-quality training signals. Specifically, at intermediate steps for predicting the next filtering condition, we accept all possible continuations, i.e. conditions that have not been predicted yet, regardless of their linearized positions. For the example in Figure 1, in addition to the transition sequence we gave in Section 3.1, our non-deterministic oracles also accept CONDCOL(c_2) as a correct continuation of the second action. If our model predicts this action first, it will continue predicting the second filtering condition before predicting the first.

In preliminary experiments, we observed that a major source of parser errors on the development set is the incorrect prediction of implicit column names. Many natural language queries do not explicitly mention the column name of the filtering conditions. For example, the question in Figure 1 does not mention the column name "Name". Similarly, a typical question like "What is the area of Canada?" does not mention the word "country". For humans, such implicit references make natural language queries succinct, and the missing information can be easily inferred from context. But for a machine learning model, they pose a huge challenge.

We leverage the non-deterministic oracles to learn the aforementioned implicit column name mentions by accepting the prediction of a special column name, ANYCOL. During execution, we expand such predictions into a disjunction of filtering conditions applied to all columns, simulating the intuition behind how a human can easily locate a column name without hearing it in the query. For the example in Figure 1, in addition to the action CONDCOL(c_1), we also allow an alternative prediction CONDCOL(ANYCOL). When the latter appears in the query (e.g. ANYCOL="Willis Tower"), we expand it into a disjunctive clause (Rank="Willis Tower" OR Name="Willis Tower" OR ...). With our non-deterministic oracles, when column names can be unambiguously resolved using the filtering values, we accept both ANYCOL and the original column name as correct actions during training, allowing our models to predict whichever is easier to learn; a sketch of the ANYCOL expansion follows below.
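A minimal sketch of the ANYCOL expansion at execution time is given below. Value quoting is simplified; a real implementation would need proper escaping.

```python
# Sketch: expand a predicted ANYCOL condition into a disjunction over columns.
def expand_anycol(condition, columns):
    col, op, val = condition
    if col != "ANYCOL":
        return f'{col} {op} "{val}"'
    return "(" + " OR ".join(f'{c} {op} "{val}"' for c in columns) + ")"

print(expand_anycol(("ANYCOL", "=", "Willis Tower"), ["Rank", "Name", "Location"]))
# -> (Rank = "Willis Tower" OR Name = "Willis Tower" OR Location = "Willis Tower")
```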
In our experiments, we use the default train/dev/test split of the WikiSQL dataset. We evaluate our models trained with both the static oracles and the non-deterministic oracles on the dev and test splits. We report both logical form accuracy (i.e., exact match of SQL queries) and execution accuracy (i.e., the ratio of predicted SQL queries that result in the same answer after execution). The execution accuracy is the metric that we aim to optimize.

We largely follow the preprocessing steps in the prior work of BID5. Before the embedding layer, only the tokens which appear at least twice in the training data are retained in the vocabulary; the rest are assigned a special "UNK" token. We use the pre-trained GloVe embeddings (BID23) and allow them to be fine-tuned during training. Embeddings of size 16 are used for the actions. We further use type embeddings for the natural language queries and column names, following BID34: for each word w_i, we have a discrete feature indicating whether it appears in the column names, and vice versa for each c_j. These features are embedded into 4-dimensional vectors and are concatenated with the word embeddings before being fed into the bi-LSTMs. The encoding bi-LSTMs have a single hidden layer with size 256 (128 for each direction). The decoder LSTM has two hidden layers, each of size 256. All the attention connections adopt the dot-product form described in Section 3.2. For training, we use a batch size of 64 with a dropout rate of 0.3 to help with regularization. We use the Adam optimizer (BID14) with the default initial learning rate of 0.001 for the parameter updates. Gradients are clipped at 5.0 to increase stability in training.

The main results are presented in TAB4. Our model trained with static oracles achieves results comparable to the current state-of-the-art Coarse2Fine (BID5) and MQAN models. On top of this strong model, using non-deterministic oracles during training leads to a large improvement of 2.1% in terms of execution accuracy. The significant drop in logical form accuracy is expected, as it is mainly due to the use of the ANYCOL option for the column choice: the resulting SQL query may not match the original annotation. We further separate the contributions of "order-matters" and ANYCOL for the non-deterministic oracles. When our non-deterministic oracles only address the "order-matters" issue, as described in Section 4.1, the model performance stays roughly the same compared with the static-oracle model. We hypothesize that this is because the ordering variation presented in different training instances is already rich enough for a vanilla sequence-to-action model to learn well. Adding ANYCOL to the oracle better captures the implicit column name mentions and has a significant impact on performance, increasing the execution accuracy from 81.8% to 83.7%.

Our incremental parser uses a greedy strategy for decoding, i.e. picking the highest scoring action predicted by the policy. A natural extension is to expand the search space using beam search decoding. We further incorporate the execution-guided strategy, which steers the decoder away from runtime errors and empty results. The key insight is that a partially generated output can already be executed using the SQL engine against the database, and the execution results can be used to guide the decoding. The decoder maintains a state for the partial output, which consists of the aggregation operator, the selection column, and the completed filtering conditions up to that stage in decoding. After every action, the execution-guided decoder retains the top-k scoring partial SQL queries free of runtime exceptions and empty output. At the final stage, the query with the highest likelihood is chosen.

(Table 3: Execution accuracy (%) and decoding speed of our models on the test set of WikiSQL, with varying decoding beam size. The notation "+ EG (k)" is as in TAB4.)

With k = 5, the execution-guided decoder on top of our previous best-performing model achieves an execution accuracy of 87.1% on the test set, setting a new state of the art. We also report the performance of the static oracle model with execution-guided decoding in Table 3. It comes close to the performance of the non-deterministic oracle model, but requires a larger beam size, which translates into an increase in decoding time.
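The following is a minimal sketch of execution-guided beam decoding as described above. Here `expand`, `to_partial_sql`, and `execute` are illustrative placeholders for the parser transition function, partial-query assembly, and the SQL engine; a terminal parse is assumed to pass through `expand` unchanged.

```python
# Sketch of execution-guided decoding: keep only top-k partial queries that
# execute without runtime errors and return non-empty results.
def execution_guided_decode(initial_beam, expand, to_partial_sql, execute, k=5):
    beam = initial_beam  # list of (partial_parse, log_prob)
    while not all(p.is_terminal() for p, _ in beam):
        candidates = [c for p, lp in beam for c in expand(p, lp)]
        survivors = []
        for parse, log_prob in sorted(candidates, key=lambda c: -c[1]):
            try:
                rows = execute(to_partial_sql(parse))   # run the partial query
            except Exception:
                continue                                # prune runtime errors
            if rows:                                    # prune empty results
                survivors.append((parse, log_prob))
            if len(survivors) == k:
                break
        beam = survivors                                # assumes >= 1 survivor
    return max(beam, key=lambda c: c[1])[0]             # most likely full query
```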
To test whether our model can generalize to other datasets, we perform experiments with the ATIS dataset (BID25; BID3). ATIS has more diverse SQL structures, including queries on multiple tables and nested queries. To be compatible with our task setting, we only retain examples in the ATIS dataset that are free of nested queries, contain only AND operations, and use no INNER JOIN operators. We perform table joins and create a single table to be included in the input to our models, along with the natural language question. The reduced dataset consists of 933 examples, with 714/93/126 examples in the train/dev/test split, respectively. Our models trained with the static and non-deterministic oracles (without ANYCOL) achieve accuracies of 67.5% and 69.1% on the test set, respectively. The improvement gained from using non-deterministic oracles during training validates our previous hypothesis: ATIS is a much smaller dataset compared with WikiSQL, so explicitly addressing "order-matters" helps here. We did not apply ANYCOL due to the nature of the ATIS data.

WikiSQL, introduced by BID36, is the first large-scale dataset with annotated pairs of natural language queries and their corresponding SQL forms on a large selection of table schemas. While its coverage of SQL syntax is weaker than that of previous datasets such as ATIS (BID25; BID3) and GeoQuery (BID35), WikiSQL is highly diverse in its questions and has attracted a number of recent approaches (BID29; BID32; BID34; BID5; BID21).

NL2SQL is a special case of semantic parsing. The task of semantic parsing maps natural language to a logical form representing its meaning, and has been studied extensively by the natural language processing community (see Liang 2016 for a survey). The choice of meaning representation is usually task-dependent, including lambda calculus (BID31), lambda dependency-based compositional semantics (λ-DCS), and SQL (BID36). Neural semantic parsing, on the other hand, views semantic parsing as a sequence generation problem. It adapts deep learning models such as those introduced by BID26, BID1, and BID28. Combined with data augmentation (BID13; BID12) or reinforcement learning (BID36), sequence-to-sequence models with attention and copying have already achieved state-of-the-art results on many datasets, including WikiSQL.

The meaning representation in semantic parsing usually has strict grammar syntax, as opposed to target sentences in machine translation. Thus, models are often constrained to output syntactically valid results. BID4 propose models that generate tree outputs through hierarchical decoding and models that use sketches to guide decoding, but they do not explicitly deal with grammar constraints. In contrast, BID33 and others directly utilize grammar productions during decoding. Training oracles have been extensively studied for the task of syntactic parsing, where incremental approaches are common (BID8). For syntactic parsing, due to the more structurally-constrained nature of the task and clearly-defined partial credits for evaluation, dynamic oracles allow the parsers to find optimal subsequent actions even from sub-optimal parsing states (BID7; BID9; BID2). In comparison, non-deterministic oracles are defined for the optimal parsing states that have the potential to reach a perfect terminal state. To the best of our knowledge, our work is the first to explore non-deterministic training oracles for incremental semantic parsing.

In this paper, we introduce a sequence-to-action incremental parsing approach for the NL2SQL task. With the observation that multiple SQL queries can have the same or very similar semantics corresponding to a given natural language question, we propose to use non-deterministic oracles during training. On the WikiSQL dataset, our model trained with the non-deterministic oracles achieves an execution accuracy of 83.7%, which is 2.3% higher than the current state of the art. We also discuss using execution-guided decoding in combination with our model. This leads to a further improvement of 3.4%, achieving a new state-of-the-art 87.1% execution accuracy on the test set.
To the best of our knowledge, our work is the first to use non-deterministic oracles for training incremental semantic parsers. Designing such non-deterministic oracles requires the identification of multiple correct transition sequences for a given training instance, and an algorithm that decides the possible continuations for any intermediate state that will lead to one of the desired terminal states. We have shown promising results for WikiSQL and the filtered ATIS dataset, and it would be interesting to extend our work to other, more complex NL2SQL tasks and to other semantic parsing domains.
We design incremental sequence-to-action parsers for the text-to-SQL task and achieve SOTA results. We further improve the results by using non-deterministic oracles to allow multiple correct action sequences.
We propose Efficient Neural Architecture Search (ENAS), a faster and less expensive approach to automated model design than previous methods. In ENAS, a controller learns to discover neural network architectures by searching for an optimal path within a larger model. The controller is trained with policy gradient to select a path that maximizes the expected reward on the validation set. Meanwhile, the model corresponding to the selected path is trained to minimize the cross-entropy loss. On the Penn Treebank dataset, ENAS can discover a novel architecture that achieves a test perplexity of 57.8, which is state-of-the-art among automatic model design methods on Penn Treebank. On the CIFAR-10 dataset, ENAS can design novel architectures that achieve a test error of 2.89%, close to the 2.65% achieved by standard NAS. Most importantly, our experiments show that ENAS is more than 10x faster and 100x less resource-demanding than NAS.

Neural architecture search (NAS) has been applied successfully to design model architectures for image classification and language modeling (BID0; BID3; BID6). NAS, however, is computationally expensive and time consuming: for example, some experiments use 450 GPUs and train for 3-4 days. Meanwhile, using fewer resources tends to produce less compelling results (BID31; BID0). The main computational bottleneck of NAS is the training of each child model to convergence, only to measure its accuracy. We believe that it is very inefficient and wasteful to train every child model and then throw away all the trained weights, even though the child models have much in common.

(FIG0: The graph represents the entire search space, while the red arrows define a model in the search space, which is decided by a controller. Here we assume that node 1 is the input to the model, whereas nodes 3, 5, and 6 are the outputs of the model.)

The goal of this work is to remove this inefficiency by enabling more sharing between the child models. This idea is similar to the concept of weight inheritance in neuro-evolution (e.g., BID33). To understand our method, we first need to understand standard NAS. In standard NAS (BID0), an RNN controller is trained by policy gradient to search for a good architecture, which is basically a computational graph. Our observation is that all of the graphs that NAS has iterated over can be viewed as sub-graphs of a larger graph. In other words, we can represent the space of these graphs as a single directed acyclic graph (DAG). As illustrated in FIG0, a neural network architecture can be found by taking a subset of edges in this DAG. This design is advantageous because it enables sharing parameters among all architectures in the search space.

There is growing interest in improving the efficiency of neural architecture search. Concurrent to our work are the promising ideas of using learning curve prediction to skip bad models (BID1), predicting the accuracies of models before training (BID7), using an iterative search method for architectures of growing complexity (BID25), or using a hierarchical representation of architectures (BID26). Our method is also inspired by the concept of weight inheritance in neuro-evolution, which has been demonstrated to have positive effects at scale (BID33). Closely related to our method are other recent approaches that avoid training each architecture from scratch, such as convolutional neural fabrics (ConvFabrics; BID34) and SMASH (BID5). These methods are more computationally efficient than standard NAS.
However, the search space of ConvFabrics is not flexible enough to include novel architectures, e.g. architectures with arbitrary skip connections. Meanwhile, SMASH can design interesting architectures but requires a hypernetwork to generate the weights, conditional on the architectures. While a hypernetwork can efficiently rank different architectures, as shown in the paper, the real performance of each network differs from its performance with parameters generated by a hypernetwork. Such a discrepancy in SMASH can cause misleading signals for reinforcement learning. Even more closely related to our method is PathNet (BID10), which uses evolution to search for a path inside a large model for transfer learning between Atari games.

In the following, we will first present our search space for designing recurrent cells. We will then explain how to train the controller and how to infer architectures with it. Finally, we will explain our search space for designing convolutional architectures.

To facilitate our discussion of ENAS, we first describe how we employ ENAS to design a recurrent cell. The search space of ENAS is a directed acyclic graph, as mentioned in Section 1. The DAG has N nodes, where the edges represent the flow of information between these N nodes. Similar to NAS, ENAS has a controller RNN, which decides which edges are activated and which computations are performed at each node. To create a recurrent cell, the controller RNN samples N blocks of decisions. Here we illustrate the ENAS mechanism via a simple example for a recurrent cell with N = 4 computational nodes. Let x_t be the input signal for the recurrent cell (e.g., a word embedding), and h_{t-1} be the output from the previous time step. The example cell, which we visualize in FIG1, is sampled as follows.

1. At node 1: The controller first samples an activation function. In our example in FIG1, it chooses the tanh activation function, which means that node 1 of the recurrent cell should compute h_1 = tanh(x_t · W^(x) + h_{t-1} · W^(h)_1).
2. At node 2: The controller then samples a previous index and an activation function. In our example, it chooses the previous index 1 and the activation function ReLU. Thus, node 2 of the recurrent cell should compute h_2 = ReLU(h_1 · W^(h)_{2,1}).
3. At node 3: The controller again samples a previous index and an activation function. In our example, it chooses the previous index 2 and the activation function ReLU. Therefore, h_3 = ReLU(h_2 · W^(h)_{3,2}).
4. At node 4: The controller again samples a previous index and an activation function. In our example, it chooses the previous index 1 and the activation function tanh, leading to h_4 = tanh(h_1 · W^(h)_{4,1}).
5. For the output, we simply average all the loose ends, i.e. the nodes that are not input to any other nodes. In our example, since nodes 3 and 4 were never sampled as inputs to any node, the recurrent cell uses their average (h_3 + h_4)/2 as its output.

In the example above, we note that for each pair of nodes j < ℓ, there is an independent parameter matrix W^(h)_{ℓ,j}. As shown in the example, the controller decides which parameter matrices are used by choosing the previous indices. Therefore, in ENAS, all recurrent cells in the search space share the same set of parameters. Our search space includes an exponential number of configurations: if the recurrent cell has N nodes and we allow 4 activation functions (namely tanh, ReLU, identity, and sigmoid), then the search space has 4^N × N! configurations. In our experiments, N = 12, which means there are approximately 10^15 models in our search space; a code sketch of this shared parameterization is given below.
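Below is a minimal PyTorch sketch of executing one sampled cell with the shared matrices W^(h)_{ℓ,j}. The decision format and module layout are illustrative assumptions, not the exact implementation.

```python
# Sketch of a shared-parameter recurrent cell executing one sampled architecture.
import torch
import torch.nn as nn

ACT = {"tanh": torch.tanh, "relu": torch.relu,
       "sigmoid": torch.sigmoid, "identity": lambda x: x}

class SharedCell(nn.Module):
    def __init__(self, n_nodes, dim):
        super().__init__()
        self.w_x = nn.Linear(dim, dim, bias=False)     # W^(x) for x_t at node 1
        self.w_h = nn.Linear(dim, dim, bias=False)     # W^(h)_1 for h_{t-1}
        # One independent matrix W^(h)_{l,j} per ordered pair j < l.
        self.w = nn.ModuleDict({f"{l}_{j}": nn.Linear(dim, dim, bias=False)
                                for l in range(2, n_nodes + 1) for j in range(1, l)})

    def forward(self, x_t, h_prev, decisions):
        # decisions: (prev_index, activation) per node; node 1 has no prev index.
        h = {1: ACT[decisions[0][1]](self.w_x(x_t) + self.w_h(h_prev))}
        used = set()
        for l, (j, act) in enumerate(decisions[1:], start=2):
            h[l] = ACT[act](self.w[f"{l}_{j}"](h[j]))
            used.add(j)
        loose = [h[l] for l in h if l not in used]     # average the loose ends
        return torch.stack(loose).mean(dim=0)

cell = SharedCell(n_nodes=4, dim=32)
decisions = [(None, "tanh"), (1, "relu"), (2, "relu"), (1, "tanh")]  # example cell
out = cell(torch.randn(8, 32), torch.randn(8, 32), decisions)
```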
Our controller network is a two-layer LSTM (BID16), which samples decisions via softmax classifiers. The controller network samples these decisions in an autoregressive fashion: the decision from the previous step is fed as an input embedding into the next step. At the first step, the controller network receives an empty embedding as input. In ENAS, there are two sets of learnable parameters: the parameters of the controller LSTM, denoted by θ, and the shared parameters of the child models, denoted by ω. The training procedure of ENAS consists of two alternating phases. The first phase trains ω, the shared parameters of the child models, on a whole pass through the training data set, and the second phase trains θ, the parameters of the controller LSTM, for a fixed number of steps. These two phases are alternated during the training of ENAS. In our experiments with the Penn Treebank dataset, for each phase of training ω, we train ω for 450 steps with SGD, each on a batch of 64 examples, where gradients are computed using back-propagation through time, truncated to 35 time steps. Meanwhile, for each phase of training θ, we train it for 2000 steps with the Adam optimizer and REINFORCE. More details of their training are as follows.

Training the shared parameters ω of the child models. In this step, we fix the policy π(m; θ) and perform stochastic gradient descent (SGD) updates on ω to minimize the expected loss E_{m∼π(m;θ)}[L(m; ω)]. Here, L(m; ω) is the standard cross-entropy loss, computed on a minibatch of training data, with a model m sampled from π(m; θ). The gradient is computed via the Monte Carlo estimate

∇_ω E_{m∼π(m;θ)}[L(m; ω)] ≈ (1/M) Σ_{i=1}^{M} ∇_ω L(m_i; ω),    (1)

where the models m_i are sampled from π(m; θ). Note that Eqn 1 provides an unbiased estimate of the gradient ∇_ω E_{m∼π(m;θ)}[L(m; ω)]. Therefore, according to Chapter 3.3.2 of BID4, with an appropriate learning schedule, the SGD updates of ω converge almost surely. While convergence is guaranteed, these updates on ω have an inherently larger variance than SGD performed on a fixed model m. Nevertheless, we find that M = 1 works just fine, i.e. we can update ω using the gradient from any single model m sampled from π(m; θ). As mentioned, we train ω for a whole pass through the training data set.

Training the parameters θ of the controller LSTM. In this step, we fix ω and update the policy parameters θ, aiming to maximize the expected reward E_{m∼π(m;θ)}[R(m, ω)]. We employ the Adam optimizer (BID20), for which the gradient is computed using REINFORCE (BID37), with a moving average baseline to reduce variance. We compute the reward R(m, ω) on the validation set, rather than the training set, to encourage ENAS to select models that generalize well rather than models that overfit the training set. In our experiments with language modeling, the reward function is c/valid_ppl, where the perplexity is computed on a minibatch sampled from the validation set. Later, in our experiments with image classification, the reward function is the classification accuracy on a minibatch of images sampled from the validation set.

Inference. We discuss how to derive novel architectures from a trained ENAS model. Following prior work, we sample several models from the trained policy π(m; θ). For each sampled model, we compute its reward on a single minibatch sampled from the validation set. We then take the model with the highest reward to re-train from scratch.
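For concreteness, here is a minimal sketch of one REINFORCE update for the controller with a moving-average baseline. It assumes `controller.sample()` returns an architecture together with the summed log-probability of its decisions, and that `validation_reward` computes e.g. c/valid_ppl on a validation minibatch; both are illustrative names.

```python
# Sketch of one controller update: REINFORCE with a moving-average baseline.
baseline = None

def controller_step(controller, optimizer, validation_reward, decay=0.95):
    global baseline
    arch, log_prob = controller.sample()
    reward = validation_reward(arch)          # computed on a validation minibatch
    baseline = reward if baseline is None else decay * baseline + (1 - decay) * reward
    loss = -(reward - baseline) * log_prob    # policy-gradient surrogate loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```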
We now discuss the search space for convolutional architectures. Recall that in the search space for the recurrent cell, the controller RNN samples two decisions at each decision block: 1) which previous node to connect to and 2) which activation function to use. In the search space for convolutional models, the controller RNN also samples two sets of decisions at each decision block: 1) which previous node to connect to and 2) which computation operation to use. The decision of which previous node to connect to allows the model to form skip connections (BID15), whereas the decision of which computation operation to use sets a particular layer to convolution, average pooling, or max pooling. These decisions construct a layer in the convolutional model.

To be even more flexible, instead of deciding whether a particular layer is convolution, average pooling, or max pooling, we change the search space to blend all these choices together. To achieve this, we treat all operations, such as convolution, average pooling, and max pooling, as channels, and allow the controller to select a mask over these channels. For example, in our experiments, we allow the controller to choose a mask over how many conv1x1, conv3x3, conv5x5, conv7x7, average pooling, and max pooling operations to use (see FIG2). Each operation at each layer in our network has its own convolutional parameters. During each data pass, only the parameters corresponding to the active channels are used and updated; a sketch of such a masked layer is given at the end of this section.

(FIG2: Top: the network of N layers, where each layer has 6 channels as described. Bottom: a block of the controller network, which consists of 6 binary masks, followed by the steps that sample skip connections. For the skip connections, at layer k, up to k − 1 mutually distinct previous indices are sampled. For example, at layer k = 5, suppose the controller samples {2, 4}; then the outputs of layer 2 and layer 4 will be concatenated and sent to layer 5 via skip connections.)

In our experiments with the CIFAR-10 dataset, the training of the controller LSTM and of the child models is also alternated. For each phase of training θ, we train it for 2000 steps with the Adam optimizer and REINFORCE. Meanwhile, for each phase of training the shared parameters ω, we train them for 450 minibatches, each with 100 images, using Nesterov momentum (BID32).

An alternative to designing the entire convolutional network is to design smaller modules and then fit repeats of them together (BID40). FIG3 illustrates this approach, where the convolutional cell and reduction cell architectures are to be designed. We now discuss how to use ENAS to search for the architectures of these cells. Following prior work, we sample both our convolutional cell and our reduction cell using an RNN controller. Specifically, at each decision block, the controller RNN samples two sets of decisions: 1) two previous nodes to be used as inputs to the current block and 2) two operations to respectively apply to the two sampled nodes. We allow the following 5 operations: identity, separable convolution with kernel sizes 3x3 and 5x5, and average pooling and max pooling with kernel size 3x3. Note that each cell receives two input nodes, indexed by node 1 and node 2, corresponding to the outputs of the two previous cells in the entire network. FIG4 depicts an example run of our controller in this search space with 4 nodes. As with our other search spaces, each operation in each cell has its own parameters. During each data pass, only the relevant parameters are used and trained with their corresponding gradients.
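The following is a minimal sketch of one such masked multi-branch layer. Zeroing inactive branches before the 1 × 1 projection is a simplification of selecting only the active channels, and the branch set and sizes are illustrative assumptions.

```python
# Sketch of a layer whose operations are treated as selectable channels.
import torch
import torch.nn as nn

class MaskedLayer(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Branches: conv1x1, conv3x3, conv5x5, conv7x7, avg pool, max pool.
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, k, padding=k // 2) for k in (1, 3, 5, 7)
        ] + [nn.AvgPool2d(3, 1, 1), nn.MaxPool2d(3, 1, 1)])
        # 1x1 convolution restores the channel count (a stabilizing trick).
        self.proj = nn.Conv2d(6 * channels, channels, 1)
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x, mask):
        # mask: 6 binary entries sampled by the controller; inactive branches
        # are zeroed, so their parameters receive zero gradient.
        outs = [m * branch(x) for m, branch in zip(mask, self.branches)]
        return torch.relu(self.bn(self.proj(torch.cat(outs, dim=1))))

layer = MaskedLayer(channels=16)
y = layer(torch.randn(2, 16, 32, 32), mask=torch.tensor([1., 1., 0., 0., 1., 0.]))
```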
In the following, we will show our experimental results with ENAS for designing recurrent cells on the Penn Treebank dataset and convolutional architectures on the CIFAR-10 dataset. We then present an ablation study which shows the role of ENAS in discovering novel architectures, as well as details regarding the efficiency of ENAS.

Dataset. We first apply ENAS to the task of language modeling, whose goal is to predict the next word in a text given the history of previous words, by fitting a probabilistic model over sentences. We use the Penn Treebank dataset (PTB; BID28), a well-studied language modeling benchmark. In this experiment, we show that ENAS discovers a novel recurrent cell which, without extensive hyper-parameter tuning, can outperform models with the same number of parameters or more.

Training details. Our controller is trained using Adam, with a learning rate of 0.001. We use a tanh constant of 2.5 and a temperature of 5.0 for the sampling logits (BID2). We also add the controller's sample entropy to the reward, with a weight of 10^-4. Additionally, we augment the simple transformations between the constructed recurrent cell's nodes with highway connections (BID41). For instance, instead of having h_2 = ReLU(h_1 · W^(h)_{2,1}) as shown in the example from Section 3.1, we have h_2 = c_2 ∘ ReLU(h_1 · W^(h)_{2,1}) + (1 − c_2) ∘ h_1, where c_2 = sigmoid(h_1 · W^(c)_{2,1}) and ∘ denotes elementwise multiplication. More details can be found in Appendix A. A novel RNN cell found in our search space is shown in FIG5.

The shared parameters of the child models ω are trained using SGD with a learning rate of 0.2, decayed by a factor of 0.9 after every 3 epochs starting at epoch 15, for a total of 150 epochs. During the architecture search process, following BID29, we randomly reset the starting state with probability 0.001. We also tie the model's word embedding matrix with its softmax matrix (BID18). When retraining the architecture recommended by the controller, however, we use variational dropout (BID11), an L2 regularization with weight decay of 10^-7, and a state slowness regularization of 0.0005 (BID30).

Results. Running on a single Nvidia GTX 1080Ti GPU, ENAS finds the recurrent cell in less than 10 hours. This cell is depicted in FIG5. Table 1 presents our results in comparison with other methods. The ENAS cell, with 24M parameters, outperforms the NAS cell and has performance similar to the LSTM model that uses extensive hyper-parameter tuning (BID29), which we did not do. Our ENAS cell has a few interesting properties. First, all non-linearities in the cell are either ReLU or tanh, even though the search space also has two other functions: identity and sigmoid. We suspect this cell is a local optimum, similar to observations made in prior work. When we randomly pick some nodes and switch their non-linearity to identity or sigmoid, the perplexity increases by up to 8 points. When we randomly switch some ReLU nodes to tanh or vice versa, the perplexity also increases, but only by up to 3 points.

Dataset. The CIFAR-10 dataset (BID22) consists of 50,000 training images and 10,000 test images. We use the standard data pre-processing and augmentation techniques, i.e., subtracting the per-channel mean and dividing by the per-channel standard deviation computed on the training images, centrally padding the training images to 40 × 40 and randomly cropping them back to 32 × 32, and randomly flipping them horizontally.

Search spaces. In Section 3.3, we presented a search space for convolutional architectures in which the controller can make decisions over skip connections and the mask over the channels.
To improve our results, we additionally explore two restricted versions of this search space: one where the controller only needs to make decisions about the mask over the channels, and one where the controller only needs to make decisions about the skip connections. More details are available in Appendix B.

1. Searching for the masks over channels. We fix a pattern of skip connections and search for the masks at each branch and each layer in a 12-layer network. The pattern that we use is the dense pattern (BID17).
2. Searching for skip connections. We force all the convolutions to have a filter size of 3 × 3, and only search for the skip connections.
3. Searching for convolutional and reduction cells. We search for both cells as discussed in Section 3.4.

Training details. The shared parameters ω are trained with Nesterov momentum, where the learning rate follows the cosine schedule with l_max = 0.05, l_min = 0.001, T_0 = 10, and T_mul = 2 (BID27); a sketch of this schedule is given at the end of this section. Each architecture search is run for 10 + 20 + 40 + 80 + 160 = 310 epochs. The parameters ω are initialized from a scaled Gaussian, as described in BID14. We also apply an L2 weight decay of 10^-4. The same settings are employed to train the architectures recommended by the controller. The policy parameters θ are initialized uniformly in [−0.1, 0.1] and trained with the Adam optimizer at a learning rate of 10^-3. We additionally utilize three techniques to prevent the premature convergence of REINFORCE. First, we apply a temperature τ = 5.0 and a tanh constant c = 2.5 to the controller's logits. Second, we add to the reward the entropy term of the controller's samples, weighted by λ_ent = 0.1, which discourages convergence (BID38). Lastly, we enforce sparsity in the skip connections by adding to the reward the Kullback-Leibler divergence between: 1) the skip connection probability between any two layers and 2) our chosen probability ρ = 0.4, which represents the prior belief of a skip connection being formed. The KL divergence term is weighted by λ_kl = 0.5.

Tricks to stabilize and improve training. We find the following tricks crucial for achieving good performance with ENAS.
• Structure of Convolutional Layers. Each convolutional operation in our method is followed by a batch normalization (BID19) and then a ReLU layer. We find the alternate setting of batch norm-conv-ReLU to give worse results.
• Stabilizing Stochastic Skip Connections. If a layer receives skip connections from multiple layers before it, then these layers' outputs are concatenated along their depth dimension, and a convolution of filter size 1 × 1 (followed by a batch normalization layer and a ReLU layer) is performed to ensure that the number of output channels still equals C.
• Global Average Pooling. After the final convolutional layer, we average all the activations of each channel and then pass them to the Softmax layer. This trick was introduced by BID24, with the purpose of reducing the number of parameters in the dense connection to the Softmax layer, to avoid overfitting.

The last two tricks are extremely important, since the gradient updates of the shared parameters ω, as described in Eqn 1, have a very high variance. In fact, we find that without these last two tricks, the training of ENAS is very unstable.
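For reference, here is a minimal sketch of the cosine schedule with warm restarts under the constants above; the formula follows BID27, with the cycle length doubled after each restart.

```python
# Sketch of the cosine learning-rate schedule with warm restarts:
# l_max = 0.05, l_min = 0.001, T_0 = 10 epochs, T_mul = 2.
import math

def cosine_lr(epoch, l_max=0.05, l_min=0.001, t0=10, t_mul=2):
    t_i, start = t0, 0
    while epoch >= start + t_i:          # find the current restart cycle
        start += t_i
        t_i *= t_mul
    t_cur = epoch - start
    return l_min + 0.5 * (l_max - l_min) * (1 + math.cos(math.pi * t_cur / t_i))

# Cycles cover epochs [0,10), [10,30), [30,70), [70,150), [150,310): 310 total.
```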
Results. Table 2 summarizes the test errors of ENAS and other approaches. As can be seen from the table, ENAS successfully found several architectures that outperform other automatic model design approaches with the same usage of computing resources. In particular, in the general search space, ENAS takes 15.6 hours to find a model that achieves a 4.23% error rate on CIFAR-10. This model outperforms all but one of the previously reported NAS models, while taking 30x less time and using 800x less computing resources to discover.

In the restricted search space over the masks, ENAS takes 11.6 hours to find a model that achieves a test error of 4.35%. The resulting model, depicted in FIG6-Top Left, almost always has 64 or 96 channels at each branch and each layer, indicating that the controller does not choose to activate all blocks. This is the desired behavior, as activating all channels would over-parametrize the model and result in overfitting. Moreover, the fact that the model found in a restricted search space has performance similar to the model found in the general search space indicates that ENAS can discover skip connection patterns that are comparable to the dense pattern, which is the state-of-the-art human-designed architecture on CIFAR-10 (BID17).

In the restricted search space over skip connections, ENAS takes 12.4 hours to discover the pattern of skip connections depicted in FIG6, which shows that skip connections are formed much more densely at higher layers than at lower layers, where most connections are only between consecutive layers. The model has a test error of 5.04%, which is slightly worse than the one found in the restricted search space over masks. However, if we increase the number of output channels from 256 to 512, then the network achieves a test error of 3.87%.

Table 2: Classification error rates of ENAS and other methods on CIFAR-10. In this table, the first block presents the state-of-the-art models, all of which are designed by human experts. The second block presents various approaches that design the entire network. ENAS outperforms all these methods but NAS, which requires much more computing resource and time. The last block presents techniques that design modular cells which are used to build a large model. ENAS outperforms MicroNAS, which uses 32 GPUs to search, and achieves similar performance with NASNet-A.

Method | Layers | Params (M) | Error (%)
ConvFabrics (BID34) | 16 | 21.2 | 7.43
Macro NAS with Q-Learning (BID0) | 11 | 11.2 | 6.92
Net Transformation (BID6) | 17 | 19.7 | 5.70
FractalNet (BID23) | 21 | 38.6 | 4.60
SMASH (BID5) | 211 | 16.0 | 4.03
NAS | 39 | 7.1 | 4.47
NAS + more filters | – | – | –

In the search space over cells, ENAS takes 11.5 hours to discover the convolution cell and the reduction cell, which are shown in FIG6-Bottom. With the convolutional cell replicated 6 times, ENAS achieves a 3.54% test error, which is on par with the 3.41% error of NASNet-A. With CutOut, ENAS's error decreases to 2.89%, compared to 2.65% by NASNet-A. However, as discussed in Appendix B, our ENAS cells are trained for only 310 epochs, compared to 600 epochs by NAS.

Sanity Check and Ablation Study. To understand the role of ENAS, we carry out two control experiments. In the first study, we uniformly at random pick a configuration of channels and skip connections and simply train that model. As a result, about half of the channels and skip connections are selected, yielding a model with 47.1M parameters and an error rate of 5.86%. This error rate is significantly worse than those of the models designed by ENAS, and the model has many more parameters. In the second study, we only train ω and do not update the controller. The effect is similar to dropout with a rate of 0.5 on both the channels and the skip connections. At convergence, the model has an error rate of 11.92%.
On the validation set, the ensemble of 250 Monte Carlo configurations of the trained model could only reach an 8.99% error rate. We therefore conclude that appropriate training of the ENAS controller is crucial for good performance.

Neural Architecture Search (NAS) is an important advance that allows faster architecture design for neural networks. However, the computational expense of NAS prevents it from being widely adopted. In this paper, we presented ENAS, an alternative method to NAS, which requires three orders of magnitude less resources and time. The key insight of our method is to share parameters across child models during architecture search. This insight is implemented by having NAS search for a path within a larger model. We demonstrate empirically that the method works well on both the CIFAR-10 and Penn Treebank datasets.

The shared parameters ω between different recurrent cells thus consist of all the matrices W^(x), W^(h)_1, W^(c)_{ℓ,j}, and W^(h)_{ℓ,j}. The controller decides the connection j and the activation function f for each ℓ ∈ {2, 3, ..., N}. The layers that are never selected by any subsequent layers are averaged and sent to a softmax head, or to higher recurrent layers. As in the case of convolutional models, to stabilize the training of ω, we add a batch normalization layer after the average of the layers that are not selected.

B Details for CIFAR-10 Search Spaces

B.1 Details on Search Space 1: Channels

We use a block size of S = 32, resulting in C/S = 256/32 = 8 blocks per branch per layer. Each branch configuration has its own embedding and softmax head. To elaborate, this means that a time step in the controller RNN that predicts the configuration for any branch should have a softmax matrix of size H × (2^(C/S) − 1), where H = 64 is the hidden dimension of the RNN, and 2^(C/S) − 1 = 255 is the number of possible binary masks for that branch. Each branch also has an embedding matrix of size (2^(C/S) − 1) × H, from which the row corresponding to the sampled binary mask is selected and sent to the next time step.

Layers 4 and 8 of our 12-layer network are max pooling layers with a kernel size of 2 × 2 and a stride of 2, and reduce each spatial dimension of the layers' outputs by a factor of 2. Within each group of 3 layers where the spatial dimensions of the layers remain constant, we connect each layer to all layers before it (BID17). We use 3 × 3 convolutions with 48 output channels at all layers.

The controller RNN for this search space has the same form as the controller RNN depicted in FIG2, with two modifications. First, each block has only 2 time steps (as opposed to 7 in the general search space): the time step that predicts the mask for the convolution of filter size 3 × 3, which we force to always activate all channels, and the anchor time step, which we allow to sample multiple previous indices. Second, our controller is thus allowed to form skip connections between arbitrary layers, but forming such connections between layers with different spatial dimensions would result in compilation failures. To circumvent this, after each max pooling in the network, we centrally pad the output so that its spatial dimensions remain unchanged.

We perform a 3 × 3 convolution on the image to create the outputs for the first convolutional cell. After that, in the first 6 convolution cells, each separable convolution has 32 output channels. Each reduction cell is applied in the same way as the convolutional cell, with the only modification being that each operation is applied with a stride of 2.
When the cells are found, the final models (both with and without CutOut) are trained for 310 epochs, using a cosine learning rate schedule, where the reset cycle is initially set to 10 epochs and then doubled after each reset. Additionally, we insert an auxiliary head at the layer immediately before the second application of the reduction cell BID35.
An approach that speeds up neural architecture search by 10x, whilst using 100x less computing resource.
444
scitldr
Nowadays, deep neural networks (DNNs) have become the main instrument for machine learning tasks within a wide range of domains, including vision, NLP, and speech. Meanwhile, in an important case of heterogeneous tabular data, the advantage of DNNs over shallow counterparts remains questionable. In particular, there is no sufficient evidence that deep learning machinery allows constructing methods that outperform gradient boosting decision trees (GBDT), which are often the top choice for tabular problems. In this paper, we introduce Neural Oblivious Decision Ensembles (NODE), a new deep learning architecture, designed to work with any tabular data. In a nutshell, the proposed NODE architecture generalizes ensembles of oblivious decision trees, but benefits from both end-to-end gradient-based optimization and the power of multi-layer hierarchical representation learning. With an extensive experimental comparison to the leading GBDT packages on a large number of tabular datasets, we demonstrate the advantage of the proposed NODE architecture, which outperforms the competitors on most of the tasks. We open-source the PyTorch implementation of NODE and believe that it will become a universal framework for machine learning on tabular data. The recent rise of deep neural networks (DNN) resulted in a substantial breakthrough for a large number of machine learning tasks in computer vision, natural language processing, speech recognition, and reinforcement learning. Both gradient-based optimization via backpropagation and hierarchical representation learning appear to be crucial in increasing the performance of machine learning for these problems by a large margin. While the superiority of deep architectures in these domains is undoubtful, machine learning for tabular data has still not fully benefited from the DNN power. Namely, the state-of-the-art performance in problems with tabular heterogeneous data is often achieved by "shallow" models, such as gradient boosted decision trees (GBDT). While the importance of deep learning on tabular data is recognized by the ML community, and many works address this problem, the proposed DNN approaches do not consistently outperform the state-of-the-art shallow models by a notable margin. In particular, to the best of our knowledge, there is still no universal DNN approach that was shown to systematically outperform the leading GBDT packages (e.g., XGBoost). As additional evidence, a large number of Kaggle ML competitions with tabular data are still won by the shallow GBDT methods. Overall, at the moment, there is no dominant deep learning solution for tabular data problems, and we aim to reduce this gap by our paper. We introduce Neural Oblivious Decision Ensembles (NODE), a new DNN architecture, designed to work with tabular problems. The NODE architecture is partially inspired by the recent CatBoost package, which was shown to provide state-of-the-art performance on a large number of tabular datasets. In a nutshell, CatBoost performs gradient boosting on oblivious decision trees (decision tables), which makes inference very efficient, and the method is quite resistant to overfitting. In its essence, the proposed NODE architecture generalizes CatBoost, making the splitting feature choice and decision tree routing differentiable. As a result, the NODE architecture is fully differentiable and could be incorporated in any computational graph of existing DL packages, such as TensorFlow or PyTorch.
Furthermore, NODE allows constructing multi-layer architectures, which resembles a "deep" GBDT trained end-to-end, something that had not been proposed before. Besides the usage of oblivious decision tables, another important design choice is the recent entmax transformation, which effectively performs a "soft" splitting feature choice in decision trees inside the NODE architecture. As discussed in the following sections, these design choices are critical to obtain state-of-the-art performance. In a large number of experiments, we compare the proposed approach with the leading GBDT implementations with tuned hyperparameters and demonstrate that NODE outperforms competitors consistently on most of the datasets. Overall, the main contributions of our paper can be summarized as follows: 1. We introduce a new DNN architecture for machine learning on tabular data. To the best of our knowledge, our method is the first successful example of deep architectures that substantially outperforms leading GBDT packages on tabular data. 2. Via an extensive experimental evaluation on a large number of datasets, we show that the proposed NODE architecture outperforms existing GBDT implementations. 3. The PyTorch implementation of NODE is available online 1. The rest of the paper is organized as follows. In Section 2 we review prior work relevant to our method. The proposed Neural Oblivious Decision Ensembles architecture is described in Section 3 and experimentally evaluated in Section 4. Section 5 concludes the paper. In this section, we briefly review the main ideas from prior work that are relevant to our method. The state-of-the-art for tabular data. Ensembles of decision trees, such as GBDT or random forests, are currently the top choice for tabular data problems. Currently, there are several leading GBDT packages, such as XGBoost, LightGBM, and CatBoost, which are widely used by both academicians and ML practitioners. While these implementations vary in details, on most of the tasks their performances do not differ much (Anghel et al.). The most important distinction of CatBoost is that it uses oblivious decision trees (ODTs) as weak learners. As ODTs are also an important ingredient of our NODE architecture, we discuss them below. Oblivious Decision Trees. An oblivious decision tree is a regular tree of depth d that is constrained to use the same splitting feature and splitting threshold in all internal nodes of the same depth. This constraint essentially allows representing an ODT as a table with 2^d entries, corresponding to all possible combinations of d splits. Of course, due to the constraints above, ODTs are significantly weaker learners compared to unconstrained decision trees. However, when used in an ensemble, such trees are less prone to overfitting, which was shown to synergize well with gradient boosting. Furthermore, the inference in ODTs is very efficient: one can compute d independent binary splits in parallel and return the appropriate table entry. In contrast, non-oblivious decision trees require evaluating d splits sequentially. Differentiable trees. The significant drawback of tree-based approaches is that they usually do not allow end-to-end optimization and employ greedy, local optimization procedures for tree construction. Thus, they cannot be used as a component for pipelines trained in an end-to-end fashion. To address this issue, several works propose to "soften" decision functions in the internal tree nodes to make the overall tree function and tree routing differentiable.
In our work, we advocate the usage of the recent entmax transformation to "soften" decision trees. We confirm its advantages over the previously proposed approaches in the experimental section. Entmax. The key building block of our model is the entmax transformation, which maps a vector of real-valued scores to a discrete probability distribution. This transformation generalizes the traditional softmax and its sparsity-enforcing alternative sparsemax, which has already received significant attention in a wide range of applications: probabilistic inference, topic modeling, and neural attention. The entmax is capable of producing sparse probability distributions, where the majority of probabilities are exactly equal to 0. In this work, we argue that entmax is also an appropriate inductive bias in our model, which allows differentiable split decision construction in the internal tree nodes. Intuitively, entmax can learn splitting decisions based on a small subset of data features (up to one, as in classical decision trees), avoiding undesired influence from others. As an additional advantage, using entmax for feature selection allows for computationally efficient inference using the sparse pre-computed choice vectors, as described below in Section 3. Multi-layer non-differentiable architectures. Another line of work promotes the construction of multi-layer architectures from non-differentiable blocks, such as random forests or GBDT ensembles. For instance, one work proposes to use stacking of several random forests, which are trained separately. In recent work, multi-layer GBDTs were introduced together with a training procedure that does not require each layer component to be differentiable. While these works report marginal improvements over shallow counterparts, they lack the capability for end-to-end training, which could result in inferior performance. In contrast, we argue that end-to-end training is crucial and confirm this claim in the experimental section. Specific DNNs for tabular data. While a number of prior works propose architectures designed for tabular data, they mostly do not compare with properly tuned GBDT implementations, which are the most appropriate baselines. A recent preprint reports a marginal improvement over GBDT with default parameters, but in our experiments, the baseline performance is much higher. To the best of our knowledge, our approach is the first to consistently outperform the tuned GBDTs over a large number of datasets. We introduce the Neural Oblivious Decision Ensemble (NODE) architecture with a layer-wise structure similar to existing deep learning models. In a nutshell, our architecture consists of differentiable oblivious decision trees (ODTs) that are trained end-to-end by backpropagation. We describe our implementation of the differentiable NODE layer in Section 3.1, the full model architecture in Section 3.2, and the training and inference procedures in Section 3.3. The core building block of our model is a Neural Oblivious Decision Ensemble (NODE) layer. The layer is composed of m differentiable oblivious decision trees (ODTs) of equal depth d. As an input, all m trees get a common vector x ∈ R^n, containing n numeric features. Below we describe the design of a single differentiable ODT. Figure 1: The single ODT inside the NODE layer. The splitting features and the splitting thresholds are shared across all the internal nodes of the same depth. The output is a sum of leaf responses scaled by the choice weights.
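To see how entmax interpolates between softmax and sparsemax, the following short sketch uses the open-source entmax package (pip install entmax; not the NODE codebase itself) to compare the three transformations on the same scores:

    import torch
    from entmax import entmax15, sparsemax

    scores = torch.tensor([[2.0, 1.0, 0.1, -1.0]])
    print(torch.softmax(scores, dim=-1))  # dense: every entry gets weight
    print(entmax15(scores, dim=-1))       # sparse but smooth: small scores -> 0
    print(sparsemax(scores, dim=-1))      # sparsest: hard truncation

The α = 1.5 case (entmax15) zeroes out low-scoring features while remaining smooth enough for gradient-based optimization, which is exactly the inductive bias argued for above.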
In its essence, an ODT is a decision table that splits the data along d splitting features and compares each feature to a learned threshold. Then, the tree returns one of the 2^d possible responses, corresponding to the comparison results. Therefore, each ODT is completely determined by its splitting features f ∈ R^d, splitting thresholds b ∈ R^d, and a d-dimensional tensor of responses R ∈ R^{2×2×···×2}. In this notation, the tree output is defined as:
h(x) = R[1(f_1(x) − b_1), 1(f_2(x) − b_2), ..., 1(f_d(x) − b_d)],
where 1(·) denotes the Heaviside function. To make the tree output differentiable, we replace the splitting feature choice f_i and the comparison operator 1(f_i(x) − b_i) by their continuous counterparts. There are several existing approaches that can be used for modelling differentiable choice functions in decision trees, for instance, REINFORCE or Gumbel-softmax. However, these approaches typically require long training time, which can be crucial in practice. Instead, we propose to use the α-entmax function, as it is able to learn sparse choices, depending only on a few features, via standard gradient descent. This function is a generalization of softmax, which in its variational form is defined as
softmax(z) = argmax_{p ∈ Δ} ⟨p, z⟩ + H(p),
where Δ is the probability simplex and H(p) is Shannon entropy. We can define α-entmax by replacing H(p) with Tsallis α-entropy. The choice function is hence replaced by a weighted sum of features, with weights computed as α-entmax (α = 1.5) over the learnable feature selection matrix F ∈ R^{d×n}:
f̂_i(x) = Σ_{j=1}^{n} x_j · entmax_α(F_i)_j.
Similarly, we relax the Heaviside function 1(f_i(x) − b_i) as a two-class entmax, which we denote as c_i(x) = σ_α((f̂_i(x) − b_i)/τ_i) with σ_α(t) = entmax_α([t, 0])_1, where b_i and τ_i are learnable parameters for thresholds and scales respectively. Based on the c_i(x) values, we define a "choice" tensor C ∈ R^{2×2×···×2} of the same size as the response tensor R by computing the outer product of all c_i:
C(x) = [c_1(x), 1 − c_1(x)] ⊗ [c_2(x), 1 − c_2(x)] ⊗ ··· ⊗ [c_d(x), 1 − c_d(x)].
The final prediction is then computed as a weighted linear combination of response tensor entries R with weights from the entries of choice tensor C:
ĥ(x) = Σ_{i_1,...,i_d ∈ {0,1}} R[i_1, ..., i_d] · C[i_1, ..., i_d](x).
Note that this relaxation equals the classic non-differentiable ODT h(x) iff both feature selection and threshold functions reach a one-hot state, i.e., entmax always returns non-zero weight for a single feature and c_i always returns exactly zeros or ones. Finally, the output of the NODE layer is composed as a concatenation of the outputs of m individual trees ĥ_1(x), ..., ĥ_m(x). Multidimensional tree outputs. In the description above, we assumed that tree outputs are one-dimensional, ĥ(x) ∈ R. For classification problems, where NODE predicts probabilities of each class, we use multidimensional tree outputs ĥ(x) ∈ R^{|C|}, where |C| is the number of classes. The NODE layer, described above, can be trained alone or within a complex structure, like fully-connected layers, that can be organized into multi-layer architectures. In this work, we introduce a new architecture following the popular DenseNet model. Figure 2: The NODE architecture, consisting of densely connected NODE layers. Each layer contains several trees whose outputs are concatenated and serve as input for the subsequent layer. The final prediction is obtained by averaging the outputs of all trees from all the layers. Similar to DenseNet, our architecture is a sequence of k NODE layers (see Section 3.1), where each layer uses a concatenation of all previous layers as its input. The input layer 0 of this architecture corresponds to the input features x, accessible by all successor layers. Due to such a design, our architecture is capable of learning both shallow and deep decision rules. A single tree on the i-th layer can rely on chains of up to i − 1 layer outputs as features, allowing it to capture complex dependencies.
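The following PyTorch sketch implements one such differentiable ODT end to end; it is a minimal illustration under the equations above, not the authors' released implementation, and it relies on entmax15 from the entmax package:

    import torch
    import torch.nn as nn
    from entmax import entmax15

    class DifferentiableODT(nn.Module):
        """One soft oblivious tree of depth d over n input features (a sketch)."""
        def __init__(self, n_features, depth, out_dim=1):
            super().__init__()
            self.F = nn.Parameter(torch.rand(depth, n_features))  # selectors, U(0,1)
            self.b = nn.Parameter(torch.zeros(depth))             # thresholds
            self.log_tau = nn.Parameter(torch.zeros(depth))       # scales
            self.R = nn.Parameter(torch.randn(*([2] * depth), out_dim))  # responses

        def forward(self, x):                       # x: (batch, n_features)
            w = entmax15(self.F, dim=-1)            # soft feature choice per depth
            f = x @ w.t()                           # (batch, depth) chosen features
            t = (f - self.b) / self.log_tau.exp()
            # two-class entmax over [t, 0] yields [c_i, 1 - c_i]
            c = entmax15(torch.stack([t, torch.zeros_like(t)], dim=-1), dim=-1)
            choice = c[:, 0, :]                     # (batch, 2)
            for i in range(1, c.shape[1]):          # outer product over depth
                choice = torch.einsum('b...i,bj->b...ij', choice, c[:, i, :])
            # weighted sum of leaf responses by the choice tensor
            return torch.einsum('b...,...k->bk', choice, self.R)

A NODE layer would then simply hold m such trees and concatenate their outputs; when entmax collapses to one-hot selectors and c_i saturates at 0/1, the forward pass reduces to a classic ODT lookup.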
The resulting prediction is a simple average of all decision trees from all layers. Note that, in the multi-layer architecture described above, tree outputs ĥ(x) from early layers are used as inputs for subsequent layers. Therefore, we do not restrict the dimensionality of ĥ(x) to be equal to the number of classes, and allow it to have an arbitrary dimensionality l. When averaging the predictions from all layers, only the first |C| coordinates of ĥ(x) are used for classification problems and the first one for regression problems. Overall, l is an additional hyperparameter. Here we summarize the details of our training protocol. Data preprocessing. First, we transform each data feature to follow a normal distribution via a quantile transform. In experiments, we observed that this step was important for stable training and faster convergence. Initialization. Before training, we perform a data-aware initialization to obtain good initial parameter values. In particular, we initialize the feature selection matrix uniformly, F_ij ∼ U(0, 1), while the thresholds b are initialized with random feature values f_i(x) observed in the first data batch. The scales τ_i are initialized in such a way that all the samples in the first batch belong to the linear region of σ_α, and hence receive nonzero gradients. Finally, the response tensor entries are initialized with the standard normal distribution, R[i_1, ..., i_d] ∼ N(0, 1). We jointly optimize all model parameters: F, b, R. In this work, we experimented with traditional objective functions (cross-entropy for classification and mean squared error for regression), but any differentiable objective can be used as well. As an optimization method, we use the recent Quasi-Hyperbolic Adam with parameters recommended in the original paper. We also average the model parameters over c = 5 consecutive checkpoints and pick the optimal stopping point on the hold-out validation dataset. Inference. During training, a significant fraction of time is spent computing the entmax function and multiplying the choice tensor. Once the model is trained, one can pre-compute entmax feature selectors and store them as a sparse vector (e.g., in coordinate (coo) format), making inference more efficient. In this section, we report the results of a comparison between our approach and the leading GBDT packages. We also provide several ablation studies that demonstrate the influence of each design choice in the proposed NODE architecture. As our main experiments, we compare the proposed NODE architecture with two state-of-the-art GBDT implementations on a large number of datasets. In all the experiments we set the α parameter in the entmax transformation to 1.5. All other details of the comparison protocol are described below. Datasets. We perform most of the experiments on six open-source tabular datasets from different domains: Epsilon, YearPrediction, Higgs, Microsoft, Yahoo, Click. The detailed description of the datasets is available in the appendix. All the datasets provide train/test splits, and we used 20% of the samples from the train set as a validation set to tune the hyperparameters. For each dataset, we fix the train/val/test splits for a fair comparison. For the classification datasets (Epsilon, Higgs, Click), we minimize cross-entropy loss and report the classification error. For the regression and ranking datasets (YearPrediction, Microsoft, Yahoo), we minimize and report mean squared error (which corresponds to the pointwise approach to learning-to-rank). Methods.
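A short sketch of this preprocessing and optimizer setup, using scikit-learn's QuantileTransformer and the QHAdam implementation from the qhoptim package; the synthetic data and the exact nus/betas values here are illustrative assumptions rather than the paper's tuned configuration:

    import numpy as np
    import torch.nn as nn
    from sklearn.preprocessing import QuantileTransformer
    from qhoptim.pyt import QHAdam   # pip install qhoptim

    X_raw = np.random.rand(1000, 10)             # placeholder tabular features
    qt = QuantileTransformer(output_distribution='normal')
    X = qt.fit_transform(X_raw)                  # map each feature to ~N(0, 1)

    model = nn.Linear(10, 1)                     # stand-in for a NODE model
    optimizer = QHAdam(model.parameters(), lr=1e-3,
                       nus=(0.7, 1.0), betas=(0.995, 0.999))  # values assumed

At test time the same fitted transformer must be applied to validation/test features (qt.transform), so that the train-time normality assumption carries over.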
We compare the proposed NODE architecture to the following baselines: • Catboost. The recent GBDT implementation that uses oblivious decision trees as weak learners. We use the open-source implementation provided by the authors. • XGBoost. The most popular GBDT implementation, widely used in machine learning competitions. We use the open-source implementation provided by the authors. • FCNN. A deep neural network, consisting of several fully-connected layers with ReLU nonlinearities. Regimes. We perform the comparison in the two following regimes that are the most important in practice: • Default hyperparameters. In this regime, we compare the methods as easy-to-tune toolkits that could be used by a non-professional audience. Namely, here we do not tune hyperparameters and use the default ones provided by the GBDT packages. The only tunable parameter here is the number of trees (up to 2048) in CatBoost/XGBoost, which is set based on the validation set. We do not compare with FCNN in this regime, as it typically requires much tuning, and we did not find a set of parameters appropriate for all datasets. The default architecture in our model contains only a single layer with 2048 decision trees of depth six. Both of these hyperparameters were inherited from the CatBoost package settings for oblivious decision trees. With these parameters, the NODE architecture is shallow, but it still benefits from end-to-end training via back-propagation. • Tuned hyperparameters. In this regime, we tune the hyperparameters for both NODE and the competitors on the validation subsets. The optimal configuration for NODE contains between two and eight NODE layers, while the total number of trees across all the layers does not exceed 2048. The details of hyperparameter optimization are provided in the appendix. The results of the comparison are summarized in Table 1 and Table 2. For all methods, we report mean performance and standard deviations computed over ten runs with different random seeds. Several key observations are highlighted below: 1. With default hyperparameters, the proposed NODE architecture consistently outperforms both CatBoost and XGBoost on all datasets. These results advocate the usage of NODE as a handy tool for machine learning on tabular problems. 2. With tuned hyperparameters, NODE also outperforms the competitors on most of the tasks. Two exceptions are the Yahoo and Microsoft datasets, where tuned XGBoost provides the highest performance. Given the large advantage of XGBoost over CatBoost on Yahoo, we speculate that the usage of oblivious decision trees is an inappropriate inductive bias for this dataset. This implies that NODE should be extended to non-oblivious trees, which we leave for future work. Table 2: The comparison of NODE with both shallow and deep counterparts with hyperparameters tuned for optimal performance. The results are computed over ten runs with different random seeds. 3. In the regime with tuned hyperparameters, FCNN outperforms GBDT on some datasets, while on others GBDT is superior. Meanwhile, the proposed NODE architecture appears to be a universal instrument, providing the highest performance on most of the tasks. For completeness, we also aimed to compare to previously proposed architectures for deep learning on tabular data. Unfortunately, many works did not publish the source code. We were only able to perform a partial comparison with mGBDT and DeepForest, whose source code is available.
For both baselines, we use the implementations provided by the authors and tune the parameters on the validation set. Note that the DeepForest implementation is available only for classification problems. Moreover, both implementations do not scale well, and for many datasets we obtained an Out-Of-Memory error (OOM). On the datasets in our experiments, it turns out that properly tuned GBDTs outperform both mGBDT and DeepForest. In this section, we analyze the key architecture components that define our model. Choice functions. Constructing differentiable decision trees requires a function that selects items from a set. Such a function is required for both splitting feature selection and decision tree routing. We experimented with four possible options, each having different implications: • Softmax learns dense decision rules where all items have nonzero weights; • Gumbel-Softmax learns to stochastically sample a single element from a set; • Sparsemax learns sparse decision rules, where only a few items have nonzero weights; • Entmax generalizes both sparsemax and softmax; it is able to learn sparse decision rules, but is smoother than sparsemax, being more appropriate for gradient-based optimization. In this comparison, the α parameter was set to 1.5. We experimentally compare the four options above with both shallow and deep architectures in Table 3. We use the same choice function for both feature selection and tree routing across all experiments. For Gumbel-Softmax, we replaced it with a hard argmax one-hot vector during inference. The results clearly show that entmax with α = 1.5 outperforms the competitors across all experiments. First, Table 3 demonstrates that sparsemax and softmax are not universal choice functions. For instance, on the YearPrediction dataset, sparsemax outperforms softmax, while on the Epsilon dataset softmax is superior. In turn, entmax provides great empirical performance across all datasets. Another observation is that Gumbel-Softmax is unable to learn deep architectures with both constant and annealed temperature schedules. This behavior is probably caused by the stochasticity of Gumbel-Softmax: the responses of the former layers are too noisy to produce useful features for the latter layers. Feature importances. Next, we analyze how different layers contribute to the model via permutation feature importances. Namely, for 10,000 objects from the Higgs dataset we randomly shuffle the values of each feature (original or learnt on some NODE layer) and compute the increase in the classification error. Then, for each layer, we split feature importance values into seven equal bins and calculate the total feature importance of each bin, shown in Figure 3 (left-top). We discovered that the features from the first layer are used the most, with feature importances decreasing with depth. This figure shows that deep layers are able to produce important features, even though earlier layers have an advantage because of the DenseNet architecture. Next, we estimated the mean absolute contribution of individual trees to the final response, reported in Figure 3 (left-bottom). One can see the reverse trend: deep trees tend to contribute more to the final response. Figure 3 (right) clearly shows that there is an anticorrelation between feature importances and contributions to the final response, which implies that the main role of earlier layers is to produce informative features, while the latter layers mostly use them for accurate prediction. Training/Inference runtime. Finally, we compare the NODE runtime to the timings of the state-of-the-art GBDT implementations.
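The permutation importance procedure used above is straightforward to implement; the following is a minimal, model-agnostic sketch (not the paper's code) that measures the error increase when a single feature column is shuffled:

    import numpy as np

    def permutation_importance(model, X, y, metric, seed=0):
        """Increase in error when one feature column is shuffled (a sketch)."""
        base_err = metric(y, model.predict(X))
        rng = np.random.default_rng(seed)
        importances = np.zeros(X.shape[1])
        for j in range(X.shape[1]):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])       # destroy feature j, keep the rest intact
            importances[j] = metric(y, model.predict(Xp)) - base_err
        return importances

Applied to the concatenated inputs of a given NODE layer, the same routine yields the per-layer importance bins plotted in Figure 3.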
In Table 4 we report the training and inference time for one million objects from the YearPrediction dataset. In this experiment, we evaluate ensembles of 1024 trees of depth six with all other parameters set to their default values. Our GPU setup has a single 1080Ti GPU and 2 CPU cores. In turn, our CPU setup has a 28-core Xeon E5-2660 v4 processor (which costs almost twice as much as the GPU). Table 4: Training and inference runtime for models with 1024 trees of depth six on the YearPrediction dataset, averaged over five runs. Both training and inference of the eight-layer NODE architecture on GPU are on par with shallow counterparts with the same total number of trees in the ensemble. In this paper, we introduce a new DNN architecture for deep learning on heterogeneous tabular data. The architecture is a differentiable deep GBDT, trained end-to-end via backpropagation. In extensive experiments, we demonstrate the advantages of our architecture over existing competitors with the default and tuned hyperparameters. A promising research direction is incorporating the NODE layer into complex pipelines trained via back-propagation. For instance, in multi-modal problems, the NODE layer could be employed as a way to incorporate the tabular data, as CNNs are currently used for images, or RNNs are used for sequences. We use the Hyperopt library to optimize Catboost, XGBoost, and FCNN hyperparameters. For each method, we perform 50 steps of the Tree-structured Parzen Estimator (TPE) optimization algorithm. As a final configuration, we choose the set of hyperparameters corresponding to the smallest loss on the validation set. On each iteration of Hyperopt, the number of trees was set based on the validation set, with the maximal tree count set to 2048. Below is the list of hyperparameters and their search spaces for Catboost.
We propose a new DNN architecture for deep learning on tabular data
445
scitldr
Person re-identification (re-ID) aims at identifying the same persons' images across different cameras. However, domain diversities between different datasets pose an evident challenge for adapting the re-ID model trained on one dataset to another one. State-of-the-art unsupervised domain adaptation methods for person re-ID transferred the learned knowledge from the source domain by optimizing with pseudo labels created by clustering algorithms on the target domain. Although they achieved state-of-the-art performances, the inevitable label noise caused by the clustering procedure was ignored. Such noisy pseudo labels substantially hinder the model's capability on further improving feature representations on the target domain. In order to mitigate the effects of noisy pseudo labels, we propose an unsupervised framework, Mutual Mean-Teaching (MMT), which softly refines the pseudo labels in the target domain and learns better features from it via off-line refined hard pseudo labels and on-line refined soft pseudo labels in an alternating training manner. In addition, the common practice is to adopt both the classification loss and the triplet loss jointly for achieving optimal performances in person re-ID models. However, the conventional triplet loss cannot work with softly refined labels. To solve this problem, a novel soft softmax-triplet loss is proposed to support learning with soft pseudo triplet labels for achieving the optimal domain adaptation performance. The proposed MMT framework achieves considerable improvements of 14.4%, 18.2%, 13.1% and 16.4% mAP on Market-to-Duke, Duke-to-Market, Market-to-MSMT and Duke-to-MSMT unsupervised domain adaptation tasks. Figure 1: Person images A1 and A2 belong to the same identity, while B, with a similar appearance, is from another person. However, clustering-generated pseudo labels in state-of-the-art Unsupervised Domain Adaptation (UDA) methods contain much noise that hinders feature learning. We propose pseudo label refinery with on-line refined soft pseudo labels to effectively mitigate the influence of noisy pseudo labels and improve UDA performance on person re-ID. To effectively address the problem of noisy pseudo labels in clustering-based UDA methods (Figure 1), we propose an unsupervised Mutual Mean-Teaching (MMT) framework to effectively perform pseudo label refinery by optimizing the neural networks under the joint supervisions of off-line refined hard pseudo labels and on-line refined soft pseudo labels. Specifically, our proposed MMT framework provides robust soft pseudo labels in an on-line peer-teaching manner, which is inspired by teacher-student approaches that simultaneously train two identical networks. The networks gradually capture target-domain data distributions and thus refine pseudo labels for better feature learning. To avoid training error amplification, the temporal average model of each network is proposed to produce reliable soft labels for supervising the other network in a collaborative training strategy. By training peer-networks with such on-line soft pseudo labels on the target domain, the learned feature representations can be iteratively improved to provide more accurate soft pseudo labels, which, in turn, further improves the discriminativeness of learned feature representations. The classification and triplet losses are commonly adopted together to achieve state-of-the-art performances in both fully-supervised and unsupervised person re-ID models.
However, the conventional triplet loss cannot work with such refined soft labels. To enable using the triplet loss with soft pseudo labels in our MMT framework, we propose a novel soft softmax-triplet loss so that the network can benefit from softly refined triplet labels. The introduction of such a soft softmax-triplet loss is also the key to the superior performance of our proposed framework. Note that the collaborative training strategy on the two networks is only adopted in the training process. Only one network is kept in the inference stage without requiring any additional computational or memory cost. The contributions of this paper can be summarized as three-fold. We propose to tackle the label noise problem in state-of-the-art clustering-based UDA methods for person re-ID, which is mostly ignored by existing methods but is shown to be crucial for achieving superior final performance. The proposed Mutual Mean-Teaching (MMT) framework is designed to provide more reliable soft labels. The conventional triplet loss can only work with hard labels. To enable training with soft triplet labels for mitigating the pseudo label noise, we propose the soft softmax-triplet loss to learn more discriminative person features. The MMT framework shows exceptionally strong performances on all UDA tasks of person re-ID. Compared with state-of-the-art methods, it leads to significant improvements of 14.4%, 18.2%, 13.4%, 16.4% mAP on Market-to-Duke, Duke-to-Market, Market-to-MSMT, Duke-to-MSMT re-ID tasks. Unsupervised domain adaptation (UDA) for person re-ID. UDA methods have attracted much attention because of their capability of saving the cost of manual annotations. There are three main categories of methods. The first category, clustering-based methods, maintains state-of-the-art performance to date. One line of work proposed to alternately assign labels to unlabeled training samples and optimize the network with the generated targets; another proposed a bottom-up clustering framework with a repelled loss; a third introduced assigning hard pseudo labels for both global and local features. However, the training of the neural network was substantially hindered by the noise of the hard pseudo labels generated by clustering algorithms, which was mostly ignored by existing methods. The second category of methods learns domain-invariant features from style-transferred source-domain images. SPGAN and PTGAN transformed source-domain images to match the image styles of the target domain while maintaining the original person identities. The style-transferred images and their identity labels were then used to fine-tune the model. HHL learned camera-invariant features with camera-style-transferred images. However, the retrieval performances of these methods deeply relied on the image generation quality, and they did not explore the complex relations between different samples in the target domain. The third category of methods attempts to optimize the neural networks with soft labels for target-domain samples by computing the similarities with reference images or features. ENC assigned soft labels by saving averaged features with an exemplar memory module. MAR conducted multiple soft-label learning by comparing with a set of reference persons. However, the reference images and features might not be representative enough to generate accurate labels for achieving advanced performances. Generic domain adaptation methods for close-set recognition.
Generic domain adaptation methods learn features that can minimize the differences between the data distributions of source and target domains. Adversarial learning based methods adopted a domain classifier to dispel the discriminative domain information from the learned features in order to reduce the domain gap. There also exist methods that minimize the Maximum Mean Discrepancy (MMD) loss between source- and target-domain distributions. However, these methods assume that the classes on different domains are shared, which is not suitable for unsupervised domain adaptation on person re-ID. Teacher-student models have been widely studied in semi-supervised learning methods and knowledge/model distillation methods. The key idea of teacher-student models is to create consistent training supervisions for labeled/unlabeled data via different models' predictions. Temporal ensembling maintained an exponential moving average prediction for each sample as the supervision of the unlabeled samples, while the mean-teacher model averaged model weights at different training iterations to create the supervisions for unlabeled samples. Deep mutual learning adopted a pool of student models instead of the teacher models by training them with supervisions from each other. However, existing methods with teacher-student mechanisms are mostly designed for close-set recognition problems, where both labeled and unlabeled data share the same set of class labels, and could not be directly utilized on unsupervised domain adaptation tasks of person re-ID. Generic methods for handling noisy labels can be classified into four categories. Loss correction methods tried to model the noise transition matrix; however, such a matrix is hard to estimate in real-world tasks, e.g., unsupervised person re-ID with noisy pseudo labels obtained via a clustering algorithm. Label correction methods attempted to correct the noisy labels directly, but the clean set required by such methods limits their generalization to real-world applications. Noise-robust methods designed robust loss functions against label noise, for instance, the Mean Absolute Error (MAE) loss, the Generalized Cross Entropy (GCE) loss and Label Smoothing Regularization (LSR). However, these methods did not study how to handle the triplet loss with noisy labels, which is crucial for learning discriminative feature representations on person re-ID. The last kind of methods, which focuses on refining the training strategies, is most related to our method. Co-teaching trained two collaborative networks and conducted noisy label detection by selecting on-line clean data for each other. Co-mining further extended this method to the face recognition task with a re-weighting function for the Arc-Softmax loss. However, the above methods are not designed for the open-set person re-ID task and could not achieve state-of-the-art performances under the more challenging unsupervised settings. We propose a novel Mutual Mean-Teaching (MMT) framework for tackling the problem of noisy pseudo labels in clustering-based unsupervised domain adaptation methods for person re-ID. The label noise has important impacts on the domain adaptation performance but was mostly ignored by those methods. Our key idea is to conduct pseudo label refinery in the target domain by optimizing the neural networks with off-line refined hard pseudo labels and on-line refined soft pseudo labels in a collaborative training manner. In addition, the conventional triplet loss cannot properly work with soft labels.
A novel soft softmax-triplet loss is therefore introduced to better utilize the softly refined pseudo labels. Both the soft classification loss and the soft softmax-triplet loss work jointly to achieve optimal domain adaptation performances. 3.1 CLUSTERING-BASED UDA METHODS REVISITED. State-of-the-art UDA methods follow a similar general pipeline. They generally pre-train a deep neural network F(·|θ) on the source domain, where θ denotes the current network parameters, and the network is then transferred to learn from the images in the target domain. The source-domain and target-domain image features encoded by the network are denoted as {F(x^s_i|θ)} and {F(x^t_i|θ)} respectively. The network parameters θ and a learnable target-domain classifier C^t: f^t → {1, ..., M^t} are then optimized with respect to an identity classification (cross-entropy) loss L^t_id(θ) and a triplet loss
L^t_tri(θ) = (1/N^t) Σ_i max(0, ||F(x^t_i|θ) − F(x^t_{i,p}|θ)|| + m − ||F(x^t_i|θ) − F(x^t_{i,n}|θ)||),   (2)
where ||·|| denotes the L2-norm distance, subscripts i,p and i,n indicate the hardest positive and hardest negative feature index in each mini-batch for the sample x^t_i, and m = 0.5 denotes the triplet distance margin. Such two operations, (1) pseudo label generation by clustering and (2) feature learning with pseudo labels, are alternated until the training converges. However, the pseudo labels generated in step (1) inevitably contain errors due to the imperfection of features as well as the errors of the clustering algorithms, which hinder the feature learning in step (2). To mitigate the pseudo label noise, we propose the Mutual Mean-Teaching (MMT) framework together with a novel soft softmax-triplet loss to conduct the pseudo label refinery. 3.2.1 SUPERVISED PRE-TRAINING FOR THE SOURCE DOMAIN. The UDA task on person re-ID aims at transferring the knowledge from a pre-trained model on the source domain to the target domain. A deep neural network is first pre-trained on the source domain. Given the training data D_s, the network is trained to model a feature transformation function F(·|θ) that transforms each input sample x^s_i into a feature representation F(x^s_i|θ). Given the encoded features, the identification classifier C^s outputs an M^s-dimensional probability vector to predict the identities in the source-domain training set. The neural network is trained with a classification loss L^s_id(θ) and a triplet loss L^s_tri(θ) to separate features belonging to different identities. The overall loss is therefore calculated as
L^s(θ) = L^s_id(θ) + λ^s L^s_tri(θ),   (3)
where λ^s balances the two loss terms. Figure 2: (b) Overall framework of the proposed Mutual Mean-Teaching (MMT) with two collaborative networks jointly optimized under the supervisions of off-line refined hard pseudo labels and on-line refined soft pseudo labels. A soft identity classification loss and a novel soft softmax-triplet loss are adopted. (c) The one of the two average models with better validated performance is adopted for inference, as average models perform better than models with current parameters. Our proposed MMT framework is based on the clustering-based UDA methods with off-line refined hard pseudo labels as introduced in Section 3.1, where the pseudo label generation and refinement are conducted alternately. However, the pseudo labels generated in this way are hard (i.e., they are always of 100% confidence) but noisy. In order to mitigate the pseudo label noise, apart from the off-line refined hard pseudo labels, our framework further incorporates on-line refined soft pseudo labels (i.e., pseudo labels with < 100% confidence) into the training process.
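Step (1) of the clustering-based pipeline revisited above is simple to express in code; the following sketch (illustrative only, with k-means standing in for whichever clustering algorithm a given method uses) turns target-domain features into hard pseudo identities:

    import numpy as np
    from sklearn.cluster import KMeans

    def generate_hard_pseudo_labels(features, num_classes=500):
        """Cluster L2-normalized target features; cluster ids become
        hard pseudo identities for the next feature-learning step."""
        feats = features / np.linalg.norm(features, axis=1, keepdims=True)
        return KMeans(n_clusters=num_classes, n_init=10).fit_predict(feats)

Because these labels are recomputed after every epoch from imperfect features, a fraction of them is inevitably wrong, which is exactly the noise the MMT framework below is designed to absorb.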
Our MMT framework generates soft pseudo labels by collaboratively training two networks of the same architecture with different initializations. The overall framework is illustrated in Figure 2 (b). The pseudo classes are still generated the same as those by existing clustering-based UDA methods, where each cluster represents one class. In addition to the hard and noisy pseudo labels, our two collaborative networks also generate on-line soft pseudo labels by network predictions for training each other. The intuition is that, after the networks are trained even with hard pseudo labels, they can roughly capture the training data distribution, and their class predictions can therefore serve as soft class labels for training. However, such soft labels are generally not perfect because of the training errors and the noisy hard pseudo labels in the first place. To avoid the two networks collaboratively biasing each other, the past temporal average model of each network, instead of the current model, is used to generate the soft pseudo labels for the other network. Both off-line hard pseudo labels and on-line soft pseudo labels are utilized jointly to train the two collaborative networks. After training, only one of the past average models, the one with better validated performance, is adopted for inference (see Figure 2 (c)). We denote the two collaborative networks as feature transformation functions F(·|θ_1) and F(·|θ_2), and denote their corresponding pseudo label classifiers as C^t_1 and C^t_2, respectively. To simultaneously train the coupled networks, we feed the same image batch to the two networks but with separate random erasing, cropping and flipping. Each target-domain image can be denoted by x^t_i and x'^t_i for the two networks, and their pseudo label confidences can be predicted as C^t_1(F(x^t_i|θ_1)) and C^t_2(F(x'^t_i|θ_2)). One naïve way to train the collaborative networks is to directly utilize the above pseudo label confidence vectors as the soft pseudo labels for training the other network. However, in such a way, the two networks' predictions might converge to equal each other and the two networks lose their output independence. The classification errors as well as pseudo label errors might be amplified during training. In order to avoid error amplification, we propose to use the temporal average model of each network to generate reliable soft pseudo labels for supervising the other network. Specifically, the parameters of the temporal average models of the two networks at the current iteration T are denoted as E^(T)[θ_1] and E^(T)[θ_2] respectively, which can be calculated as
E^(T)[θ_1] = α E^(T−1)[θ_1] + (1 − α) θ_1,
E^(T)[θ_2] = α E^(T−1)[θ_2] + (1 − α) θ_2,   (4)
where E^(T−1)[θ_1] and E^(T−1)[θ_2] indicate the temporal average parameters of the two networks in the previous iteration (T − 1), the initial temporal average parameters are E^(0)[θ_1] = θ_1 and E^(0)[θ_2] = θ_2, and α is the ensembling momentum, within the range [0, 1). The robust soft pseudo label supervisions are then generated by the two temporal average models as C^t_1(F(x^t_i|E^(T)[θ_1])) and C^t_2(F(x'^t_i|E^(T)[θ_2])) respectively. The soft classification loss for optimizing θ_1 and θ_2 with the soft pseudo labels generated from the other network can therefore be formulated as
L^t_sid(θ_1|θ_2) = −(1/N^t) Σ_i ( C^t_2(F(x'^t_i|E^(T)[θ_2])) · log C^t_1(F(x^t_i|θ_1)) ),
and symmetrically for L^t_sid(θ_2|θ_1). The two networks' pseudo-label predictions are better decoupled by using the other network's past average model to generate supervisions, and can therefore better avoid error amplification. Generalizing the classification cross-entropy loss to work with soft pseudo labels has been well studied (Müller et al., 2019). However, optimizing the triplet loss with soft pseudo labels poses a great challenge, as no previous method has investigated soft labels for the triplet loss.
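The temporal average update of equation 4 is a one-line exponential moving average over parameters; a minimal PyTorch sketch (not the authors' code) is:

    import copy
    import torch

    @torch.no_grad()
    def update_mean_teacher(avg_model, model, alpha=0.999):
        """E[θ]^(T) = α·E[θ]^(T−1) + (1−α)·θ^(T), applied parameter-wise."""
        for p_avg, p in zip(avg_model.parameters(), model.parameters()):
            p_avg.mul_(alpha).add_(p, alpha=1 - alpha)

    # usage: avg_model = copy.deepcopy(model); call after every optimizer step

Each network keeps such an averaged copy, and it is the averaged copy of network 2 that labels the batch for network 1 (and vice versa), which is what keeps the two supervision streams decoupled.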
For tackling the difficulty, we propose to use a softmax-triplet loss, whose hard version is formulated as
L^t_stri(θ) = (1/N^t) Σ_i L_bce(T_i(θ), 1),  where
T_i(θ) = exp(||F(x^t_i|θ) − F(x^t_{i,n}|θ)||) / ( exp(||F(x^t_i|θ) − F(x^t_{i,p}|θ)||) + exp(||F(x^t_i|θ) − F(x^t_{i,n}|θ)||) ).
Here L_bce(·,·) denotes the binary cross-entropy loss, T_i(θ) is a softmax over the feature distances between F(x^t_i|θ) and its positive sample x^t_{i,p} to measure their similarity, and "1" denotes the ground-truth that the positive sample x^t_{i,p} should be closer to the sample x^t_i than its negative sample x^t_{i,n}. Given the two collaborative networks, we can utilize one network's past temporal average model to generate soft triplet labels for the other network with the proposed soft softmax-triplet loss
L^t_stri(θ_1|θ_2) = (1/N^t) Σ_i L_bce(T_i(θ_1), T_i(E^(T)[θ_2])),   (8)
and symmetrically for θ_2, where T_i(E^(T)[θ_1]) and T_i(E^(T)[θ_2]) are the soft triplet labels generated by the two networks' past temporal average models. Such soft triplet labels are fixed as training supervisions. By adopting the soft softmax-triplet loss, our MMT framework overcomes the limitation of hard supervisions by the conventional triplet loss (equation 2). It can be successfully trained with soft triplet labels, which are shown to be important for improving the domain adaptation performance in our experiments. Note that such a softmax-triplet loss was also studied in prior work. However, it had never been used to generate soft labels and was not designed to work with soft pseudo labels before. Our proposed MMT framework is trained with both off-line refined hard pseudo labels and on-line refined soft pseudo labels. The overall loss function L(θ_1, θ_2) simultaneously optimizes the coupled networks; it combines equation 1 and equation 2 with their soft counterparts (the soft classification loss and equation 8):
L(θ_1, θ_2) = (1 − λ^t_id)(L^t_id(θ_1) + L^t_id(θ_2)) + λ^t_id(L^t_sid(θ_1|θ_2) + L^t_sid(θ_2|θ_1)) + (1 − λ^t_tri)(L^t_tri(θ_1) + L^t_tri(θ_2)) + λ^t_tri(L^t_stri(θ_1|θ_2) + L^t_stri(θ_2|θ_1)).   (9)
We evaluate our proposed MMT on three widely-used person re-ID datasets, i.e., Market-1501, DukeMTMC-reID, and MSMT17. The Market-1501 dataset consists of 32,668 annotated images of 1,501 identities shot from 6 cameras in total, for which 12,936 images of 751 identities are used for training and 19,732 images of 750 identities are in the test set. DukeMTMC-reID contains 16,522 person images of 702 identities for training, and the remaining images out of another 702 identities for testing, where all images are collected from 8 cameras. MSMT17 is the most challenging and large-scale dataset, consisting of 126,441 bounding boxes of 4,101 identities taken by 15 cameras, for which 32,621 images of 1,041 identities are split for training. For evaluating the domain adaptation performance of different methods, four domain adaptation tasks are set up, i.e., Duke-to-Market, Market-to-Duke, Duke-to-MSMT and Market-to-MSMT, where only identity labels on the source domain are provided. Mean average precision (mAP) and CMC top-1, top-5, top-10 accuracies are adopted to evaluate the methods' performances. 4.2.1 TRAINING DATA ORGANIZATION. For both source-domain pre-training and target-domain fine-tuning, each training mini-batch contains 64 person images of 16 actual or pseudo identities (4 for each identity). Note that the generated hard pseudo labels for the target-domain fine-tuning are updated after each epoch, so the mini-batch of target-domain images needs to be re-organized with updated hard pseudo labels after each epoch. All images are resized to 256 × 128 before being fed into the networks. All the hyper-parameters of the proposed MMT framework are chosen based on a validation set of the Duke-to-Market task with M^t = 500 pseudo identities and the IBN-ResNet-50 backbone. The same hyper-parameters are then directly applied to the other three domain adaptation tasks. We propose a two-stage training scheme, where the ADAM optimizer is adopted to optimize the networks with a weight decay of 0.0005. Random erasing is only adopted in target-domain fine-tuning.
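A compact sketch of the hard and soft versions of this loss (illustrative, assuming the softmax-over-distances form reconstructed above rather than the authors' exact implementation):

    import torch
    import torch.nn.functional as F

    def softmax_triplet(feat, feat_pos, feat_neg):
        """T_i: softmax over the two pairwise distances; near 1 means the
        negative is much farther away than the positive."""
        d_pos = (feat - feat_pos).norm(dim=1)
        d_neg = (feat - feat_neg).norm(dim=1)
        return torch.exp(d_neg) / (torch.exp(d_pos) + torch.exp(d_neg))

    def soft_softmax_triplet_loss(t_student, t_teacher):
        """Equation 8: BCE against soft triplet labels from the other
        network's mean-teacher model (detached, so labels stay fixed)."""
        return F.binary_cross_entropy(t_student, t_teacher.detach())

    # hard version: F.binary_cross_entropy(t_student, torch.ones_like(t_student))

Because the target is a probability rather than a hard "1", the student is pulled toward the teacher's relative ordering confidence, which is what allows triplet-style supervision to survive label noise.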
Stage 1: Source-domain pre-training. We adopt ResNet-50 or IBN-ResNet-50 as the backbone networks, where IBN-ResNet-50 achieves better performances by integrating both IN and BN modules. The two networks are initialized with ImageNet pre-trained weights. Given the mini-batch of images, the network parameters θ_1, θ_2 are updated independently by optimizing equation 3 with λ^s = 1. The initial learning rate is set to 0.00035 and is decreased to 1/10 of its previous value on the 40th and 70th epoch in the total 80 epochs. Stage 2: End-to-end training with MMT. Based on the pre-trained weights θ_1 and θ_2, the two networks are collaboratively updated by optimizing equation 9 with the loss weights λ^t_id = 0.5, λ^t_tri = 0.8. The temporal ensemble momentum α in equation 4 is set to 0.999. The learning rate is fixed to 0.00035 for the overall 40 training epochs. We utilize the k-means clustering algorithm, and the number M^t of pseudo classes is set to 500, 700, 900 for Market-1501 and DukeMTMC-reID, and 500, 1000, 1500, 2000 for MSMT17. Note that the actual identity numbers in the target-domain training sets are different from M^t. Table 1: Experimental results of the proposed MMT and state-of-the-art methods on the Market-1501, DukeMTMC-reID, and MSMT17 datasets, where MMT-M^t represents the results with M^t pseudo classes. Note that none of the M^t values equals the actual number of identities, but our method still outperforms all state-of-the-arts. We test different M^t values that are either smaller or greater than the actual numbers. We compare our proposed MMT framework with state-of-the-art methods on the four domain adaptation tasks, Market-to-Duke, Duke-to-Market, Market-to-MSMT and Duke-to-MSMT. The results are shown in Table 1. Our MMT framework significantly outperforms all existing approaches with both ResNet-50 and IBN-ResNet-50 backbones, which verifies the effectiveness of our method. Moreover, we almost approach fully-supervised learning performances without any manual annotations on the target domain. No post-processing technique, e.g., re-ranking or multi-query fusion, is adopted. Specifically, by adopting the ResNet-50 backbone, we surpass the state-of-the-art clustering-based SSG by considerable margins of 11.7% and 12.9% mAP on Market-to-Duke and Duke-to-Market tasks with simpler network architectures and lower output feature dimensions. Furthermore, evident 9.7% and 10.2% mAP gains are achieved on Market-to-MSMT and Duke-to-MSMT tasks. Recall that M^t is the number of clusters, i.e., the number of hard pseudo labels, manually specified. More importantly, we achieve state-of-the-art performances on all tested target datasets with different M^t, which are either fewer or more than the actual number of identities in the training set of the target domain. Such results prove the necessity and effectiveness of our proposed pseudo label refinery for hard pseudo labels with inevitable noise. Table 2: Ablation studies of our proposed MMT on Duke-to-Market and Market-to-Duke tasks with M^t of 500. Note that the actual numbers of identities are not equal to 500 for both datasets, but our MMT method still shows significant improvements. To compare with relevant methods for tackling general noisy label problems, we implement Co-teaching on the unsupervised person re-ID task with 500 pseudo identities on the target domain, where the noisy labels are generated by the same clustering algorithm as in our MMT framework. The hard classification (cross-entropy) loss is adopted on selected clean batches.
All the hyper-parameters are set the same for fair comparison, and the experimental results are denoted as "Co-teaching-500" with both ResNet-50 and IBN-ResNet-50 backbones in Table 1. Comparing "Co-teaching-500 (ResNet-50)" with "Proposed MMT-500 (ResNet-50)", we observe significant 7.4% and 6.1% mAP drops on the Market-to-Duke and Duke-to-Market tasks respectively, since Co-teaching is designed for general close-set recognition problems with manually generated label noise, which could not tackle the real-world challenges in unsupervised person re-ID. More importantly, it does not explore how to mitigate the label noise for the triplet loss as our method does. In this section, we evaluate each component in our proposed framework by conducting ablation studies on the Duke-to-Market and Market-to-Duke tasks with both ResNet-50 and IBN-ResNet-50 backbones. Results are shown in Table 2. Effectiveness of the soft pseudo label refinery. To investigate the necessity of handling noisy pseudo labels in clustering-based UDA methods, we create baseline models that utilize only off-line refined hard pseudo labels, i.e., optimizing equation 9 with λ^t_id = λ^t_tri = 0 for the two-step training strategy in Section 3.1. The baseline model performances are presented in Table 2 as "Baseline (only L^t_id & L^t_tri)". Considerable drops of 17.7% and 14.9% mAP are observed on ResNet-50 for the Duke-to-Market and Market-to-Duke tasks. Similarly, 13.8% and 10.7% mAP decreases are shown on the IBN-ResNet-50 backbone. Stable increases achieved by the proposed on-line refined soft pseudo labels on different datasets and backbones demonstrate the necessity of soft pseudo label refinery and the effectiveness of our proposed MMT framework. Effectiveness of the soft softmax-triplet loss. We also verify the effectiveness of the soft softmax-triplet loss with softly refined triplet labels in our proposed MMT framework. Experiments removing the soft softmax-triplet loss, i.e., setting λ^t_tri = 0 in equation 9, show consistent performance drops. Specifically, the mAP drops are 5.3% on ResNet-50 and 4.8% on IBN-ResNet-50 when evaluating on the target dataset Market-1501. As for the Market-to-Duke task, similar mAP drops of 3.6% and 4.0% on the two network structures can be observed. An evident improvement of up to 5.3% mAP demonstrates the usefulness of our proposed soft softmax-triplet loss. Effectiveness of Mutual Mean-Teaching. We propose to generate on-line refined soft pseudo labels for one network with the predictions of the past average model of the other network in our MMT framework, i.e., the soft labels for network 1 are output from the average model of network 2 and vice versa. We observe that the soft labels generated in such a manner are more reliable due to the better decoupling between the past temporal average models of the two networks. Such a framework could effectively avoid bias amplification even when the networks have many erroneous outputs in the early training epochs. There are two possible simplifications of our MMT framework with less de-coupled structures. The first one is to keep only one network in our framework and use its past temporal average model to generate soft pseudo labels for training itself. Such experiments are denoted as "Baseline+MMT-500 (w/o θ_2)". The second simplification is to naïvely use one network's current-iteration predictions as the soft pseudo labels for training the other network and vice versa, i.e., α = 0 in equation 4. This set of experiments is denoted as "Baseline+MMT-500 (w/o E[θ])".
Significant mAP drops compared to our proposed MMT could be observed in the two sets of experiments, especially when using the ResNet-50 backbone, e.g., the mAP drops by 8.9% on the Duke-to-Market task when removing the past average models. This validates the necessity of employing the proposed mutual mean-teaching scheme for providing more robust soft pseudo labels. Despite the large performance declines when removing either the peer network or the past average model, our proposed MMT still outperforms the baseline model significantly, which further demonstrates the importance of adopting the proposed on-line refined soft pseudo labels. Necessity of hard pseudo labels in the proposed MMT. Although the robust soft pseudo labels bring significant improvements, the noisy hard pseudo labels are still essential to our proposed framework, since the hard classification loss L^t_id anchors the training with one-hot supervisions. The initial network usually outputs uniform probabilities for each identity, which act as soft labels for the soft classification loss, since it could not correctly distinguish between different identities on the target domain. Directly training with such smooth and noisy soft pseudo labels, the networks in our framework would soon collapse due to the large bias. One-hot hard labels for the classification loss are critical for learning discriminative representations on the target domain. In contrast, the hard triplet loss L^t_tri is not absolutely necessary in our framework, as experiments without L^t_tri, denoted as "Baseline+MMT-500 (w/o L^t_tri)" with λ^t_tri = 1.0, show similar performances to our final results with λ^t_tri = 0.8. It is much easier to learn to predict robust soft labels for the soft softmax-triplet loss in equation 8, which has only two classes (i.e., positive and negative), even at early training epochs. In this work, we propose an unsupervised Mutual Mean-Teaching (MMT) framework to tackle the problem of noisy pseudo labels in clustering-based unsupervised domain adaptation methods for person re-ID. The key is to conduct pseudo label refinery to better model inter-sample relations in the target domain by optimizing with the off-line refined hard pseudo labels and on-line refined soft pseudo labels in a collaborative training manner. Moreover, a novel soft softmax-triplet loss is proposed to support learning with softly refined triplet labels for optimal performances. Our method significantly outperforms all existing person re-ID methods on the domain adaptation task with up to 18.2% improvements. Two temporal average models are introduced in our proposed MMT framework to provide more complementary soft labels and avoid training error amplification. Such average models are more de-coupled by ensembling the past parameters and provide more independent predictions, which is ignored by previous methods with a peer-teaching strategy. Although we have verified the effectiveness of such a design in Table 2 by removing the temporal average model, denoted as "Baseline+MMT-500 (w/o E[θ])", we would like to visualize the training process by plotting the KL divergence between the peer networks' predictions for further comparison. As illustrated in Figure 3, the predictions by the two temporal average models ("Proposed MMT-500") always keep a larger distance than the predictions by two ordinary networks ("Proposed MMT-500 (w/o E[θ])"), which indicates that the temporal average models could prevent the two networks in our MMT from converging to each other too soon under the collaborative training strategy.
We use weighting factors of λ^t_tri = 0.8 and λ^t_id = 0.5 in all our experiments, tuned on the Duke-to-Market task with the IBN-ResNet-50 backbone and 500 pseudo identities. To further analyse the impact of different λ^t_tri and λ^t_id on different tasks, we conduct comparison experiments by varying the value of one parameter while keeping the others fixed. Our MMT framework is robust and insensitive to different parameters except when the hard classification loss is eliminated with λ^t_id = 1.0. The weighting factor of hard and soft triplet losses λ^t_tri. In Figure 4 (a-b), we investigate the effect of the weighting factor λ^t_tri in equation 9, where the weight for the soft softmax-triplet loss is λ^t_tri and the weight for the hard triplet loss is (1 − λ^t_tri). We test our proposed MMT-500 with both ResNet-50 and IBN-ResNet-50 backbones when λ^t_tri varies over 0.0, 0.3, 0.5, 0.8, and 1.0. Specifically, the soft softmax-triplet loss is removed from the final training objective (equation 9) when λ^t_tri is equal to 0.0, and the hard triplet loss is eliminated when λ^t_tri is set to 1.0. We observe
A framework that conducts online refinement of pseudo labels with a novel soft softmax-triplet loss for unsupervised domain adaptation on person re-identification.
446
scitldr
We present the first end-to-end verifier of audio classifiers. Compared to existing methods, our approach enables analysis of both the entire audio processing stage and recurrent neural network architectures (e.g., LSTM). The audio processing is verified using novel convex relaxations tailored to feature extraction operations used in audio (e.g., the Fast Fourier Transform), while recurrent architectures are certified via a novel binary relaxation for the recurrent unit update. We show the verifier scales to large networks while computing significantly tighter bounds than existing methods for common audio classification benchmarks: on the challenging Google Speech Commands dataset we certify 95% more inputs than the interval approximation (the only prior scalable method), for a perturbation of -90 dB. Recent advances in deep learning have enabled replacement of traditional voice recognition systems with a single neural network trained from data. Wide adoption of these networks in consumer devices poses a threat to their safety when exposed to a malicious adversary. Indeed, it was recently shown that an adversary can inject noise unrecognizable to a human and force the network to misclassify, exposing a serious security flaw. Ideally, when deploying an automated speech recognition system we would like to guarantee that the system is robust against noise injected by an adversary. There has been substantial recent work on certifying robustness of computer vision models. However, the audio domain poses unique challenges not addressed by prior certification work for vision. Differences between audio and vision models Concretely, while an input to a vision model is a raw image, audio models typically come with a complex preprocessing stage (involving non-trivial non-linear operations such as the logarithm) which extracts relevant features from the signal. Additionally, audio systems typically use recurrent architectures, which computer vision verifiers do not handle, as they focus on fully-connected, convolutional, and residual architectures. This work We address both of these challenges and propose an end-to-end verification method for neural network based audio classifiers, together with an implementation of this method in a system called DAC (Deep Audio Certifier). Our threat model assumes an attacker can introduce a noise-based perturbation to the raw audio input signal. The goal then is to certify that, for any signal the attacker can produce, the neural network classifies the signal to the correct label. We perform verification of this property using the framework of abstract interpretation. At a high level, the idea is to maintain an abstraction capturing all possible behaviors of both the audio processing stage and the neural network. The flow of DAC is shown in Fig. 1, where all abstractions are dark blue shapes. Here, all possible signals an attacker can obtain are captured using an abstraction s^(i) (a convex relaxation). This abstraction is then propagated through the audio processing stage (shown in green boxes). The key components of this step are abstract transformers. For each audio processing operation (e.g., FFT) we create an abstract transformer which receives an abstraction representing an approximation of all possible inputs to the operation and outputs a new abstraction which approximates all possible outputs of the operation. The result of the audio processing stage is the abstraction x^(i).
The shape x^(i) is then used as input to the recurrent LSTM unit (light blue), which maintains an abstraction of a hidden state h^(i−1). The LSTM consists of multiple operations and we create a custom abstract transformer for each of them. The result of the transformers in the LSTM is a new hidden state h^(i). If this was the last frame in the signal (meaning i = T), then the hidden state h^(T) is passed through the fully connected layer of the neural network and, again using the abstract transformer, the final abstract shape a is obtained at the output (at the right of Fig. 1). Finally, to certify the property we check whether each concrete output in the abstraction a classifies to the correct label (this is typically easy). If this is true, the output of the network is correct for all inputs that the attacker can create. Related work on RNN certification Prior work proposes the POPQORN verifier for recurrent neural networks (RNN). We note that POPQORN does not handle the audio preprocessing pipeline. Even though POPQORN cannot directly verify audio classifiers, their approximations for LSTM non-linearities can be integrated in DAC. This results in an ≈200× slowdown with a small decrease in the volume of the approximation. The massive slowdown makes their approximations unsuitable for certifying audio classifiers. In contrast, using our custom abstract transformers for LSTM non-linearities, DAC can precisely certify end-to-end robustness of challenging audio classifiers in a few minutes. Our main contributions are: 1. A novel and efficient method to certify robustness of neural network audio classifiers to noise-based perturbations. The method is based on new abstract transformers which handle non-linear operations used in both audio processing and recurrent architectures. 2. An implementation of both verification and provably robust training in a system called DAC. We evaluated DAC on common audio classification benchmarks, showing it scales to realistic networks and is far more precise (97% vs. 2%) than the next best scalable method. We first define the threat model that we work with and then present all operations that are part of the verification procedure, including audio processing (MFCC) and LSTM updates. We also discuss the type of verification method we employ. Threat model We follow the same attacker threat model as prior work. The assumption is that the attacker can add noise δ to the original signal s so as to obtain a perturbed signal s' = s + δ. The measure of signal distortion is decibels (dB), defined as dB(x) = 20 log₁₀(max_i |x_i|), with the perturbation measured relative to the signal as dB_s(δ) = dB(δ) − dB(s). Note that the quieter the noise is, the smaller the decibel level of the perturbation dB_s(δ) (it is usually a negative value, as the noise is quieter than the signal). We assume the attacker can generate noise δ such that dB_s(δ) < ε, where ε is a constant defined by the threat model. Our goal is to verify whether the neural network classifies s' correctly for every such small perturbation δ.
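As a quick sanity check of the threat model above, here is a small sketch computing the perturbation loudness dB_s(δ) relative to a signal; the function names are illustrative, not part of DAC.

import numpy as np

def db(x):
    """Loudness of a waveform: dB(x) = 20 * log10(max_i |x_i|)."""
    return 20.0 * np.log10(np.max(np.abs(x)))

def db_rel(signal, noise):
    """Perturbation loudness relative to the signal: dB_s(delta)."""
    return db(noise) - db(signal)

# A noise 1000x quieter than the signal's peak sits near -60 dB, so
# the -90 dB budget used in the experiments is quieter still.
s = np.sin(np.linspace(0, 8 * np.pi, 16000))
delta = 1e-3 * np.random.uniform(-1, 1, size=s.shape)
print(round(db_rel(s, delta), 1))  # roughly -60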
Though there have been a number of works which operate directly on the raw signal, the Mel-Frequency Cepstrum (MFC) is traditionally preferred for audio preprocessing in speech recognition systems, e.g., DeepSpeech. The idea of MFC is to model non-linear human acoustic perception as power spectrum filters based on certain frequencies, called Mel-frequencies. The final result of the transformation is a vector of coefficients whose elements contain log-scaled values of filtered spectra, one for every Mel-frequency. This resulting vector is a feature representation of the original signal and can be used in a downstream task such as audio classification. Prior work presented an approach to represent the MFCC computation using several matrix operations, which we integrate with our verification framework. Given T frames of the audio signal S ∈ R^{T×N}, audio preprocessing with MFCC is calculated using the following steps: 1. Pre-emphasizing and Windowing: Y = S(I_N − c_pe I_N^{+1}) ⊙ H. Transform the signal with pre-emphasis and apply the Hamming window. Here, I_N^{+1} ∈ R^{N×N} is the shifted diagonal identity matrix, H ∈ R^{T×N} is the Hamming window, and c_pe is the pre-emphasizing constant. 2. Power Spectrum of the Fast Fourier Transform (FFT): Θ = |Y W|². Perform FFT on the windowed data and square it to get the real-valued spectrum. We can denote the FFT on the discrete domain (DFT) by the multiplication of Y and W ∈ C^{N×N/2}. 3. Filter Bank Log Energy: Ψ = log(ΘΛ). Apply the Mel frequency filter bank to the power spectrum and take the log of the result. Λ ∈ R^{N/2×p} is the filter bank given the number of filters p, and log is applied entry-wise. 4. DCT (Discrete Cosine Transform): X = ΨD. Perform DCT on the previous result; again, this can be formulated as a matrix multiplication. We use the resulting X as the input for the neural network. Long-Short Term Memory (LSTM) LSTM architectures are a key part of modern state-of-the-art speech recognition systems. In our work, we consider the following definitions of the updates in the LSTM unit: f̃^(t) = W_f [h^(t−1), x^(t)] + b_f, ĩ^(t) = W_i [h^(t−1), x^(t)] + b_i, õ^(t) = W_o [h^(t−1), x^(t)] + b_o, c̃^(t) = W_c [h^(t−1), x^(t)] + b_c, c^(t) = σ(f̃^(t)) ⊙ c^(t−1) + σ(ĩ^(t)) ⊙ tanh(c̃^(t)), h^(t) = σ(õ^(t)) ⊙ tanh(c^(t)), where [·, ·] is the horizontal concatenation of two row vectors, and W_· and b_· are the kernel and bias of the cell, respectively. At timestep t, the vectors f̃^(t), ĩ^(t), õ^(t) represent the pre-activations of the forget, input, and output gates, respectively, and c̃^(t) the pre-calculation of the cell state. The cell state c^(t) and hidden state h^(t) computed at timestep t are propagated to the LSTM unit at the next timestep, thus allowing it to maintain state. This recurrent architecture allows inputs of arbitrary length, making it especially suited to audio inputs. Robustness certification In this work, our goal is to certify the robustness of an audio classification pipeline (including the LSTM) to noise perturbations of the input. To build such a verifier, we leverage the general method of abstract interpretation, successfully employed by some of the most recent state-of-the-art verifiers of neural networks. The basic idea is to propagate the possible perturbations (captured by a convex region) through the operations of the entire audio pipeline and to then use the final output region to certify the robustness property. The most challenging step of this approach is defining efficient yet precise approximations of the non-linear operations (called abstract transformers) used in the audio pipeline. In this work, we introduce a number of such new abstract transformers that handle the non-linear operations used in audio processing. Specifically, our over-approximations are expressed in the recent DEEPPOLY abstraction (Singh et al., 2019a), a restricted form of convex polyhedra, which aims to balance efficiency and precision. That is, the abstract transformer of a non-linear function takes as input a convex polyhedral element expressible in DEEPPOLY and outputs another polyhedron which over-approximates the behavior of the non-linear operation when invoked with any concrete point inside the original polyhedron.
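A minimal numpy sketch of the four MFCC steps above as matrix operations; the window, DFT, filter bank, and DCT matrices are assumed to be given, and the function name and epsilon guard are illustrative choices, not DAC's code.

import numpy as np

def mfcc_matrix_ops(S, H, W_dft, Lam, D, c_pe=0.97):
    """MFCC as matrix ops: pre-emphasis+window, |FFT|^2, log filter bank, DCT.

    S:     (T, N) frames of the raw signal
    H:     (T, N) Hamming window applied elementwise per frame
    W_dft: (N, N//2) complex DFT matrix
    Lam:   (N//2, p) Mel filter bank
    D:     (p, p) DCT matrix
    """
    T, N = S.shape
    # Step 1: pre-emphasis is an affine map, fused with elementwise windowing.
    shift = np.eye(N, k=1)                  # shifted diagonal identity I^{+1}
    Y = (S @ (np.eye(N) - c_pe * shift)) * H
    # Step 2: power spectrum of the DFT, expressed as a matrix product.
    Theta = np.abs(Y @ W_dft) ** 2
    # Step 3: log energy of the Mel filter bank outputs.
    Psi = np.log(Theta @ Lam + 1e-8)        # epsilon guards log(0)
    # Step 4: DCT, again a matrix product.
    return Psi @ D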
Our goal is to provide an intuitive understanding; formal details are provided later. Audio preprocessing Unlike in standard computer vision tasks, where the input is fed directly to the neural network, audio models typically first perform MFCC preprocessing to extract useful features from the signal. We represent all preprocessing operations as green boxes in Fig. 2. The calculation follows the steps described formally in Section 2, using the same notation for the resulting vectors. The first operation is the Fast Fourier Transform (FFT) of the pre-emphasized input signal. It is a two-step process, shown in dashed boxes in Fig. 2. We decompose it into affine and square operations which transform the signal s^(t) into an intermediate representation θ^(t). Using our novel and optimal abstract transformer for the square operation, formally defined in Section 4 and visualized in Fig. 3b, we obtain linear bounds on θ^(t). The next operation is the Filterbank transform (FB), which consists of affine and logarithm operations, shown in solid boxes in Fig. 2. Note that if the approximations from the square transformer allow negative values, the entire analysis will fail, as the logarithm operation is undefined for negative inputs. Our transformers are carefully designed to avoid this scenario. To obtain bounds on the output of the logarithm we apply our novel and optimal logarithm transformer, also formally described in Section 4 and visualized in Fig. 3a, and obtain linear upper and lower bounds on the log energy of the filtered power spectrum, ψ^(t). The logarithm operation is followed by the Discrete Cosine Transform (DCT), resulting in a vector x̃^(t) which is then used as input to the fully connected layer of the neural network followed by a ReLU. Our analysis (detailed calculation in Appendix C) produces x̃^(t) ∈ [0.17, 12.8]. Since all the values are positive, the following ReLU has no effect on its input and thus we set x^(t) = x̃^(t). Using the back-substitution technique we describe later, we derive x^(t) ∈ [0.17, 12.83]. In the next paragraph we describe bound propagation through the LSTM in more detail. LSTM bound propagation Here we provide a technical overview of our LSTM transformer, formalized in Section 5 and visualized in Fig. 4, by calculating the result of the transformation on the first neuron of the LSTM hidden layer. We provide a detailed mathematical basis for this process in Appendix D. For our toy example, let the gates be updated with simple affine expressions of x^(t) and h^(t−1), and assume the previous states are bounded, e.g., c^(t−1)_1 ∈ [0.90, 1.00]. We now apply our σ(x)·tanh(y) and σ(x)·y transformers to get bounds on the cell state c^(t)_1. Summing up the resulting inequalities for c^(t)_1, we again apply our abstract transformer to obtain linear bounds on the next state. Figure 3: Our DeepPoly approximations of (a) the natural logarithm and (b) the square function. Blue lines represent the valid (in the case of the square, non-negative) upper and lower bounds which minimize the area between the planes under the given domain [l_x, u_x]. The red bound in (b) yields a smaller area but admits negative values, which causes the analysis to fail in the audio processing pipeline. The hidden state h^(t)_1 is bounded analogously; for this example, we assume h^(t)_2 = 0. Robustness certification using DeepPoly The hidden states in the LSTM at the final timestep are passed through a fully connected layer without activation. In our example, the logits l_1 and l_2 are affine functions of the hidden state h^(t). These are shown in the final box of Fig. 2.
To certify that the neural network classifies the input to class 1, we need to prove that l_1 − l_2 > 0. We now apply the back-substitution technique as in Singh et al. (2019a) up to the input to the LSTM. As the resulting value is greater than 0, robustness is established. The process above, of replacing a variable with its constraints, is called back-substitution; here, we replaced l_1 with its affine expression over h^(t)_1 and so on. In Appendix C we show more detailed calculations which obtain the tighter bound 0.0375 by also back-substituting through the preprocessing pipeline. For our experiments in Section 6, we tune the number of back-substitution steps to achieve a good tradeoff between speed and precision. Note that robustness cannot be proved if one concretizes the expression above to an interval instead of performing back-substitution of linear bounds: concretizing yields an imprecise lower bound of the form l_1 − l_2 ≥ −0.01, which fails to certify robustness. Note that if t is not the last timestep, the hidden state and the cell state are passed to the next analysis timestep instead of computing the final output. As illustrated earlier, the first part of the verification process involves handling the audio processing stage (performed using MFCC). Here, most matrix multiplication parts can be captured exactly by our abstraction, but MFCC also includes the square operation in the FFT and the logarithm operation in the computation of energy from the filterbanks. Thus, to handle these non-linear operations, we need to create new abstract transformers, which we present next. To ensure a minimal loss of precision, our abstract transformers minimize the area between the lower and upper bounds in the input-output plane. This approach of minimizing the area has been used before for other transformers and has been shown to be practically effective in Singh et al. (2018; 2019a). We denote the set of all variables in the analysis as X. For an element x ∈ X, we denote the functions corresponding to its linear lower and upper bounds as x^l and x^u, respectively. These are scalar functions defined on R^k, where k is the number of other variables used to compute x. For every element, we also maintain interval bounds, x ∈ [l_x, u_x]. For ease of explanation, we introduce the log transformer followed by the square transformer. Log abstract transformer The logarithm operation is an assignment y := log(x), where x, y ∈ X. The output of this operation cannot be captured exactly and we need to carefully introduce an approximation. Here, we first compute the minimum l_y = log(l_x) and the maximum u_y = log(u_x), which we use as the interval approximation. We define linear lower and upper bound functions y^l, y^u: R → R. Using the concavity of the logarithm on any subset of its domain, the lower bound function is the line connecting the points (l_x, log(l_x)) and (u_x, log(u_x)). The upper bound function is chosen as the tangent line to the function minimizing the area between the lower and the upper bound, i.e., the tangent at the midpoint of the domain. As a result, we obtain the bounds depicted in Fig. 3a. Note that if l_x ≤ 0, y^l(x) = −∞, since the function is not defined on that domain. We make an exception when u_x − l_x < 10⁻⁴, using the interval bound for y^l to avoid the unstable floating point calculation caused by the large x coefficient arising from the small denominator u_x − l_x.
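A small sketch of the log transformer's linear bounds as described above: the chord as the lower bound and the midpoint tangent as the area-minimizing upper bound (for any twice-differentiable function, the tangent at the domain midpoint minimizes the area under the tangent line). Treat this as an illustrative reimplementation, not DAC's exact code.

import math

def log_transformer_bounds(lx, ux):
    """Linear bounds for y = log(x) on [lx, ux], lx > 0.

    Returns (al, bl, au, bu) with  al*x + bl <= log(x) <= au*x + bu.
    """
    assert lx > 0 and ux > lx
    if ux - lx < 1e-4:
        # Degenerate interval: fall back to constant interval bounds.
        return 0.0, math.log(lx), 0.0, math.log(ux)
    # Lower bound: the chord through (lx, log lx) and (ux, log ux),
    # valid because log is concave.
    al = (math.log(ux) - math.log(lx)) / (ux - lx)
    bl = math.log(lx) - al * lx
    # Upper bound: tangent at the midpoint, which minimizes the area
    # between the two bounding lines.
    m = 0.5 * (lx + ux)
    au = 1.0 / m
    bu = math.log(m) - 1.0
    return al, bl, au, bu

# Example: bounds on [1.0, 4.0]; log(2.5) = 0.916 lies between them.
print(log_transformer_bounds(1.0, 4.0))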
Square abstract transformer The square operation is an assignment y := x², where x, y ∈ X. Similar to the logarithm, this operation is non-linear and cannot be captured exactly in the abstraction. We first compute the interval bounds of the output, l_y and u_y, and set the minimum value l_y to 0 when 0 ∈ [l_x, u_x]. Next, we define linear lower and upper bound functions y^l, y^u: R → R. Using the convexity of the square function, we set the upper bound function y^u to the linear function connecting (l_x, l_x²) and (u_x, u_x²). For the lower bound function y^l, we have to be delicate: since x²: R → R≥0, our y^l should also be greater than or equal to 0 within any domain. With the principle of minimizing the area between the bounds, we obtain case-dependent bounds as shown in Fig. 3b. Most prior work on verification of neural networks focuses on feed-forward or convolutional architectures whose operations consist of a sequence of affine transforms and activation functions. However, to perform verification of recurrent architectures, one needs to handle the updates of the recurrent unit. Following the equations updating the LSTM presented in Section 2, we can observe that the pre-activations of the gates are affine transforms which can be captured exactly in our abstraction. However, the operations updating the cell and hidden states are non-linear and require approximation. Overall, we have three elementwise products: two between a sigmoid and a tanh, and one between a sigmoid and the identity. A straightforward approach to handling such transforms would be to concretize the polyhedral DeepPoly element to an interval and perform the multiplication using intervals. However, this approach would lose all relations between inputs and incur precision loss in future operations of the analysis. Instead, we design custom binary approximation transformers specifically tailored to handle the elementwise products in the update of the recurrent unit. Sigmoid Tanh abstract transformer We define the elementwise multiplication between sigmoid and tanh as an assignment z := σ(x)·tanh(y), where x, y, z ∈ X. As before, our aim is to construct linear bounds z^l, z^u. Unlike the previously defined DeepPoly transformers, which take as input a single abstract element, this transformer is the first which receives two abstract elements as operands. Hence, the bound functions have the form z^l, z^u: R² → R. Our goal is to bound this function between two planes such that the volume between the planes is as small as possible (following our previous heuristic). We first divide the computation into 3 cases based on the signs of l_y and u_y. To simplify notation, we introduce the variable assignment in Table 1 so that we can reuse notation across all cases. We first define an anchor point a: a point in the box where the function f attains the max/min value defined for the case. We also need a reference point r at which the plane meets the curve; the two candidate planes are tangent to the curve at the anchor point along the x and y directions, with slopes ∂Φ/∂x and ∂Φ/∂y, where Φ, a, and r are chosen from Table 1. Finally, we choose the bounds from the two candidates which minimize the volume between the planes. Fig. 4 visualizes this result. We note that the Sigmoid Identity transformer is handled in the same manner (we can directly apply the same transformer by replacing f(x, y) with σ(x)·y). Theorem 1. Our Sigmoid Tanh transformer is optimal (it minimizes the volume between the lower and upper plane) under the assumptions: bounding planes are tangent to the curve at the respective anchor points, and bounding planes are parallel to either the x or y axis. Planes computed using our transformer result in a volume strictly smaller than those computed using interval analysis. We show the proof in Appendix B. Our assumptions are based on the following reasoning.
The first assumption is needed so that our transformer produces bounds strictly better than interval analysis. Unless z^u passes through the point (u_x, u_y), the concrete upper bound would be larger than σ(u_x)·tanh(u_y), making it worse than an interval bound (analogously for the lower bound). The second assumption enables computation of optimal planes in constant time, while without this assumption one needs to solve a non-convex optimization problem, which is not feasible at the scale of networks we consider. We now evaluate the effectiveness of DAC on several datasets and neural architectures. All our transformers are implemented in C (for performance) and exposed to the verifier using a Python interface. We will publicly release the datasets, trained networks, and source code of DAC. Verification is performed on an Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz using 6 cores for each instance. Experimental setup We evaluate on audio classification benchmarks: the Free Spoken Digit Dataset (FSDD) and Google Speech Commands (GSC). FSDD contains 2,000 samples of spoken digits, while GSC is a more complex dataset containing 65,000 utterances of 30 short words spoken by thousands of different people. State-of-the-art accuracy on GSC is 96.9% using an attention RNN, while our verified model achieves 89%. Following Singh et al. (2019b), we define the success rate of verification as the ratio between certified samples and correctly predicted ones. We randomly shuffled the test data and then, for every experiment, inferred labels one by one until the number of correctly classified samples reached 100. We report the number of provably correct samples out of these 100 as our provability. As a baseline, we consider a verification method based on interval analysis, which only derives neuron-wise lower and upper bounds using the previous layer's bounds. Table 2: Effect of back-substitution depth on the performance. Effect of back-substitution depth In this experiment, we vary the depth of back-substitution used in DAC and study its effect on performance. We run this experiment on an FSDD network and report the results in Table 2. We observe that increasing the back-substitution depth increases the provability. This is due to the fact that we benefit from cancellation of common terms in the propagated expressions, as demonstrated in our example in Section 3. However, this comes at the cost of an exponential increase in runtime. Thus, we choose depth 3 in order to get a high verification rate while still maintaining reasonable verification speed. Provable defense for audio classifiers We also, for the first time, trained audio classifiers to be provably robust against noise-based perturbations. Our training follows prior work on provable defenses: we perturb the input signal and propagate interval bounds through the audio processing and LSTM stages. To train, we combine the standard loss with the worst-case loss obtained using interval propagation. The resulting network, shown in Fig. 5b, achieves 80% provability for -80 dB even with the imprecise intervals, outperforming the undefended network. Even though this network was specifically trained to be verifiable using intervals, DAC still outperforms intervals and proves significantly more robustness properties. Also note that the defended network has a lower accuracy of 95%, compared to the 98% accuracy of the baseline. Experimental comparison with prior work POPQORN also proposes a method to certify recurrent neural networks by propagating linear bounds.
One of the key differences with our work is the approximation of σ(x)·tanh(y) using linear bounds. We found that, in practice, the optimization approach used by POPQORN produces approximations of slightly smaller volume than our LSTM transformer (although the two are non-comparable). However, this smaller volume comes at a high cost in runtime. We tried to integrate POPQORN bounds into our verification framework, but found it infeasible for our audio tasks, as it increased our end-to-end runtime 200× on one of the smaller networks. In contrast, DAC takes only 1-2 minutes on average to verify a single example containing multiple frames. Indeed, our observation is consistent with the evaluation of POPQORN in the original paper, where it takes hours on a GPU to verify NLP and vision tasks for only a single frame. We presented the first verifier for certifying audio classifiers. The key idea was to create abstract transformers for the non-linear operations used in the audio processing stage and the recurrent network. These transformers compute an optimal (area-wise) approximation under assumptions representable in the underlying convex relaxation and enable sound handling of the entire pipeline. Our evaluation shows that DAC is practically effective and achieves high verification rates on different datasets. In the case analysis, the candidate plane is chosen by the smaller volume under each plane. Then for any x and y, f(x, y₁) < f(x, y₂) and f(x₁, y) < f(x₂, y). Thus, since z^u_x is independent of y, it is sufficient to show that z^u_x lies above the curve. We can easily see that f(x, u_y) is concave for x ≥ 0 and convex for x ≤ 0 from the second derivative of f. (a) Consider the case of u_x > 0. Let x₀ be the x coordinate of the crossing of f(x, u_y) and the candidate plane; by convexity, the plane lies above the curve on the remaining region. With analogous steps, z^l_y can be shown to lie under the curve. Choosing the plane with the larger volume underneath it minimizes the expected difference between the true curve and the lower bound plane over a randomly chosen domain. The proof of the upper bounds follows the same steps as in the first case. z^u_x in this case is exactly the same as before, but since f(x, y) goes below 0 when y < 0, z^u_y has to anchor at (l_x, l_y) instead of (u_x, l_y), since f(l_x, l_y) ≥ f(u_x, l_y) and f is convex in the region. The proof steps do not differ much from the previous proofs. Again, the proof for the lower bound is similar to before, but note that z^l_x needs to choose the maximum between the two slopes. This is due to the sign of the values: since f(u_x, l_y) < 0 is the minimum in the region and it grows as x gets smaller, both D_x f(u_x, l_y) and (f(u_x, l_y) − f(l_x, l_y))/(u_x − l_x) are less than zero. We will not provide the proof for the remaining case, since it is symmetric to the first case. B PROOF OF THEOREM 1 Theorem 1. (restated) Our Sigmoid Tanh transformer is optimal (it minimizes the volume between the lower and upper plane) under the assumptions: bounding planes are tangent to the curve at the respective anchor points, and bounding planes are parallel to either the x or y axis. Planes computed using our transformer result in a volume strictly smaller than those computed using interval analysis. Proof. First note that z^u and z^l are valid bounding planes by the case analysis above. We assume the last sound frame s^(t) consists of two elements s₁ and s₂, each constrained to a small interval. These constraints capture a noise perturbation of the sound and are depicted in the white box in the left part of Fig. 2. We describe the analysis only at the last timestep t (the same process is repeated for every timestep).
We note that the DeepPoly abstraction (which we build on in this work) maintains four constraints for every element: lower and upper constraints which are linear in terms of the previous elements, as well as two interval constraints. This abstraction is exact for affine transformations; however, to handle non-linear operations, one has to create new abstract transformers. In our work we introduce such transformers and formally describe their operation in the next sections. In what follows, we show their effect on our running example. For the presentation below, our figure shows the linear constraints obtained by the verifier, but to avoid visual clutter, we do not show the two interval constraints (however, we do list them in the text). The first operation in the audio processing stage is the Fast Fourier Transform (FFT) of the pre-emphasized input signal. It is a two-step process shown in dashed boxes in Fig. 2. The pre-emphasis is in fact an affine transform, so we perform it jointly with the affine transform in the FFT. As the composition of two affine transforms is again affine, this amounts to a single affine transform on the input, which is captured exactly, together with interval bounds on the resulting y^(t). The next step in the computation of the FFT is the elementwise square of y^(t). The square operation is non-linear and cannot be captured exactly, which means that we need to decide how to lose precision. Here, we apply our new square abstract transformer (formally defined in Section 4), which provides optimal (area-wise) linear lower and upper bounds of the square function. Applying this transformer yields linear bounds and interval bounds on θ^(t), e.g., θ^(t)_2 ∈ [4, 16]. In practice, the FFT is followed by another affine transform which adds together the real and complex components of each element, but we omit this for clarity, as it is captured as before without loss of precision. Filterbanks Transform Our analysis continues with the computation of the filter banks of the input, shown in solid boxes in Fig. 2. The first step is an affine transform, for which our abstraction is again exact and additionally computes interval bounds on ψ̃^(t). The final step in this transform is the elementwise logarithm of the input. As this is again a non-linear operation, we apply our new log transformer and obtain linear upper and lower bounds (e.g., a lower bound with slope 0.0783 on ψ^(t)_2), together with the corresponding interval bounds. Discrete Cosine Transform and ReLU After the Filterbanks Transform, the input is passed through the Discrete Cosine Transform (DCT) followed by a Lifting operation. The result of these steps is then provided as input to a fully connected (FC) layer followed by a ReLU activation. To ease presentation, in our example we combine the DCT, Lifting, and FC layer into a single affine transform, shown in a dotted box in Fig. 2. This is again captured exactly, along with interval bounds x̃^(t)_1 ∈ [0.1708, 12.8321]. The affine transform is followed by a ReLU activation x^(t) = max(0, x̃^(t)), for which we use the ReLU transformer defined in Singh et al. (2019a). LSTM analysis After the verifier completes the audio processing stage, its output is passed as input to the LSTM cell.
This cell also receives a hidden state and a cell state from the previous timestep (shown as a blue box in Fig. 2). In our example, we assume these are given as h^(t−1)_1, c^(t−1)_1 ∈ [0.9, 1]. We note that these elements usually have different interval bounds, but for simplicity in our example we use the same intervals. We focus on the first neuron in the LSTM and compute the pre-activations of the forget, input, and output gates. In order to update the cell state, we need to compute c^(t)_1 = σ(f̃^(t)_1)·c^(t−1)_1 + σ(ĩ^(t)_1)·tanh(c̃^(t)_1). The left and right summands are computed using our new binary abstract transformers for the σ(x)·y and σ(x)·tanh(y) expressions, respectively. Applying our abstract transformers on the summands produces linear bounds on each term, and summing the resulting inequalities yields linear and interval bounds on c^(t)_1. Note that the bounds of h^(t−1) and c^(t−1) are treated as given constants here; in practice, we recursively apply the process to those vectors to express the bounds in terms of earlier variables. The next hidden state is computed as h^(t)_1 = σ(õ^(t)_1)·tanh(c^(t)_1). Applying our abstract transformer for the σ(x)·tanh(y) expression yields linear bounds on h^(t)_1; in this example we assume h^(t)_2 = 0 concretely. Robustness certification using DeepPoly The hidden states in the LSTM at the final timestep are passed through a fully connected layer without activation. In our example, the logits after the layer are affine functions of h^(t); these are shown in the final box of Fig. 2. To certify that the neural network classifies the input to class 1, we need to prove that l_1 − l_2 > 0. We now apply the same back-substitution technique as Singh et al. (2019a). For the purpose of demonstration, we stop here and plug in the previously obtained interval bounds for ψ^(t). As the resulting value is greater than 0, robustness is established. The cell state and hidden state computations can then be rewritten in terms of these bounds. Comparability of different bounding methods We say that methods A and B are pairwise non-comparable if there exists an input for which method A produces a tighter bound than method B, and vice versa. Given this definition, POPQORN is non-comparable with our method. To demonstrate this, in Fig. 6 we show a case where this behavior is manifested. Here, for y ≤ −1, POPQORN (shown as the orange plane) produces a tighter bound than DAC (shown as the blue plane). However, for the other part of the input range, where −10 ≤ y ≤ −2, POPQORN bounds are substantially worse than our bounds. Further, those bounds are even worse than interval bounds, and the overapproximation error is not bounded. Contrary to POPQORN, our bounds are always strictly better than interval bounds (this is proven in Theorem 1), and the distance between the function and our planes is bounded. Comparison on synthetic test cases In this experiment, we compare the bounds produced by POPQORN and DAC on a set of synthetic test cases, unrelated to the certification of audio classifiers. Both methods take l_x, u_x and l_y, u_y as bounds for the inputs to the sigmoid and tanh, respectively. We sample those inputs uniformly from [−2, 2] × [−2, 2] and compare the volume between the curve σ(x)·tanh(y) and the bounding planes produced by both DAC and POPQORN. The volume is computed using 1000 Monte Carlo samples in the input box. Since there are two σ(x)·tanh(y) products and one σ(x)·y product in a single LSTM cell, we run the experiments in that proportion: in 67% of experiments we bound σ(x)·tanh(y), and in 33% we bound σ(x)·y. We sampled 100 such inputs and compared the volumes obtained by POPQORN and DAC with the volume obtained using intervals. The distribution of volumes is shown in Fig. 7.
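The volume comparison just described can be reproduced with a simple Monte Carlo estimate. This sketch assumes bounding planes are given as coefficient triples (a, b, c) for z = a*x + b*y + c; it is an illustrative toy, not the authors' evaluation code.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mc_volume(lower, upper, lx, ux, ly, uy, n=1000, seed=0):
    """Monte Carlo estimate of the volume between two bounding planes
    over the surface f(x, y) = sigmoid(x) * tanh(y) on a box.

    lower/upper: (a, b, c) coefficients of z = a*x + b*y + c.
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(lx, ux, n)
    y = rng.uniform(ly, uy, n)
    f = sigmoid(x) * np.tanh(y)
    lo = lower[0] * x + lower[1] * y + lower[2]
    hi = upper[0] * x + upper[1] * y + upper[2]
    assert np.all(lo <= f + 1e-9) and np.all(f <= hi + 1e-9), "unsound planes"
    return np.mean(hi - lo) * (ux - lx) * (uy - ly)

# Interval "planes" are constants; since f is monotone in each argument,
# its minimum on the box is at (ux, ly) and its maximum at (ux, uy).
lx, ux, ly, uy = -2.0, 2.0, -2.0, 2.0
lo_plane = (0.0, 0.0, sigmoid(ux) * np.tanh(ly))
hi_plane = (0.0, 0.0, sigmoid(ux) * np.tanh(uy))
print(mc_volume(lo_plane, hi_plane, lx, ux, ly, uy))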
Here, a value of 1 stands for the same volume as the interval bounds, and values less than 1 indicate performance better than intervals. We conclude that, for this experiment, POPQORN bounds produce smaller volume than our method: 0.2 compared to 0.37. In terms of runtime, POPQORN takes 14.37 seconds on average, while the bound calculation of DAC finishes in a few milliseconds. Direct comparison on test cases from audio benchmarks The previous experiment may not reflect the actual performance on the audio benchmarks that we consider in this work. We uniformly sampled the arguments l_x, u_x, l_y, u_y that our transformers for σ(x)·tanh(y) and σ(x)·y were invoked with. We distinguish between test cases corresponding to different perturbation ranges and then further split them into certified and uncertified samples. For each of the cases, Fig. 8 shows a box plot of the distributions of volumes produced by both POPQORN and DAC. We found that, while overall POPQORN bounds work well in practice, they frequently produce bounds less precise than interval bounds. The reason for this malfunctioning lies in the limitations of the gradient descent approach employed by POPQORN. The gradient of the function, which POPQORN uses to search for the optimal bounds, is large when the inputs are distributed near 0. However, if the inputs are far from the origin or are too close to each other, the function curve becomes almost flat, gradients are close to zero, and gradient descent has problems converging. Also, in other cases, gradient descent is not guaranteed to find the bounds with minimum volume. Fig. 6 shows one of the examples where POPQORN fails to find the lower planar bound which produces the minimum volume. On the contrary, the resulting value of σ(x)·tanh(y) is within [−1, 1] regardless of the magnitude of the arguments, which means that the error produced by intervals is bounded. As our planar bounds are strictly better than intervals regardless of input conditions, our error is also bounded. Plugging in POPQORN bounds into DAC We also experimented with using POPQORN bounds instead of our bounds in the pipeline and compared the final adversarial regions with those resulting from DAC. As POPQORN is relatively slow (108 minutes per sample), we performed this experiment only on the first 10 inputs with a -80 dB perturbation. Using their bounds resulted in 0 verified samples, while DAC verifies 4 samples. We believe the reason here is the existence of many pathological cases, as described above, where the gradient descent used by POPQORN converges to a suboptimal solution which ends up being worse than interval bounds. These errors are further propagated through each frame, and the resulting output cannot be certified. In our experiments, we followed the convention of provability measurement from Singh et al. (2019b). Here, we also provide the results with error bars from 10 independent repetitions with a randomly permuted test set. For each repetition, we randomly permute the test set with a different seed and collect the first 100 correctly predicted samples under zero perturbation from the ordered set. We then count the number of certified inputs among those samples to represent the provability under the given perturbation constant. Fig. 9 shows that the provabilities do not differ much from the reported results across multiple experiments. Here we give more details on our training procedure for the provably defended network, which follows prior work on provable defenses. Let z_LB and z_UB be the resulting lower and upper bounds for the final logits z ∈ R^d under the perturbation size ε, with the true label j ∈ {0, · · ·, d − 1}.
We define the worst-case logits ẑ as ẑ_i = z_UB,i for i ≠ j and ẑ_j = z_LB,j, which corresponds to the worst concrete logits under the given input with the predefined perturbation amount. Recall that we say the input is certified when these worst-case logits satisfy j = arg max_i ẑ_i. The training loss L is a linear combination of the standard cross-entropy loss l(z, e^(j)) and the worst-case cross-entropy loss l(ẑ, e^(j)), where e^(j) is the target one-hot vector, i.e., L(t) = κ(t)·l(z, e^(j)) + (1 − κ(t))·l(ẑ(ε(t)), e^(j)). Note that we set up κ and ε as functions of the training step t. As in prior work, a gradual increase/decrease of these parameters during training was essential to obtain the desired performance. We set these schedules with respect to the number of training epochs E as κ(t) = 1 − t/E for t < E/2 and κ(t) = 1/2 otherwise, and ε(t) = −70 − (200 − 70)^(1 − t/E), so that the perturbation budget is annealed from a very quiet level toward −70 dB. Also, to track the training progress, we increase t only if we are achieving 85% standard accuracy and 80% provability at the current setting. The models were built with E = 60. We also include the results for the defended GSC network with 82% concrete accuracy in Fig. 10. We perform the experiments with the same architecture and different model parameters for FSDD and GSC. Table 3 shows the parameters of the models we use. Both defended and undefended networks share the same parameters for each dataset.
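The worst-case logits and combined robust training loss defined above can be sketched as follows; the kappa schedule is left as a caller-supplied scalar, and the function names are illustrative rather than DAC's exact training code.

import torch
import torch.nn.functional as F

def worst_case_logits(z_lb, z_ub, labels):
    """Pessimistic logits: upper bounds everywhere except the true class,
    which receives its lower bound. Certified iff argmax equals the label."""
    idx = labels.unsqueeze(1)
    z_hat = z_ub.clone()
    z_hat.scatter_(1, idx, z_lb.gather(1, idx))
    return z_hat

def robust_loss(z, z_lb, z_ub, labels, kappa):
    """L = kappa * CE(z, y) + (1 - kappa) * CE(z_hat, y)."""
    z_hat = worst_case_logits(z_lb, z_ub, labels)
    return kappa * F.cross_entropy(z, labels) + \
           (1 - kappa) * F.cross_entropy(z_hat, labels)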
We present the first approach to certify robustness of neural networks against noise-based perturbations in the audio domain.
447
scitldr
Since deep neural networks are over-parameterized, they can memorize noisy examples. We address this memorization issue in the presence of annotation noise. From the fact that deep neural networks cannot generalize the neighborhoods of features acquired via memorization, we hypothesize that noisy examples do not consistently incur small losses on the network under a certain perturbation. Based on this, we propose a novel training method called Learning with Ensemble Consensus (LEC) that prevents overfitting noisy examples by eliminating them using the consensus of an ensemble of perturbed networks. One of the proposed LECs, LTEC, outperforms the current state-of-the-art methods on noisy MNIST, CIFAR-10, and CIFAR-100 in an efficient manner. Deep neural networks (DNNs) have shown excellent performance on visual recognition datasets. However, it is difficult to obtain high-quality labeled datasets in practice. Even worse, DNNs may fail to generalize from the training data in the presence of noisy examples. Therefore, there is an increasing demand for robust training methods. In general, DNNs optimized with SGD first generalize clean examples under label noise. Based on this, recent studies consider examples that incur small losses on a network that does not yet overfit noisy examples as being clean. However, such small-loss examples may be corrupted, particularly under a high level of noise. Hence, choosing safe examples from the noisy dataset with small-loss criteria alone may be impractical. To address this, we develop a method of screening out noisy examples among small-loss examples by focusing on two well-known observations: (i) noisy examples are learned via memorization rather than via generalization, and (ii) under a certain perturbation, network predictions for memorized features easily fluctuate, while those for generalized features do not. Based on these two observations, we hypothesize that, out of the small-loss examples, the training losses of noisy examples would increase under a certain perturbation of the network parameters, while those of clean examples would not. This suggests that examples that consistently incur small losses under multiple perturbations can be considered clean. Since this idea derives from an artifact of SGD optimization, it can be applied to any architecture optimized with SGD. In this work, we introduce a method of perturbing parameters to filter noisy examples out of small-loss examples. By embedding this filtering into training, we propose a new robust training scheme termed learning with ensemble consensus (LEC). In LEC, the network is first trained on the entire training set for a while and then trained on the intersection of the small-loss examples of an ensemble of perturbed networks. We present three LECs with different perturbations and evaluate their effectiveness on three benchmark datasets with random label noise, open-set noise, and semantic noise. The proposed LEC outperforms existing robust training methods by efficiently removing noisy examples from training batches. Generalization of DNNs. Although DNNs are over-parameterized, they have impressive generalization ability. Some studies argue that gradient-based optimization plays an important role in regularizing DNNs. It has been shown that DNNs optimized with gradient-based methods generalize clean examples in the early stage of training. Since mislabeling reduces the correlation with other training examples, it is likely that noisy examples are learned via memorization.
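To make the central hypothesis stated above concrete, the following toy sketch flags examples whose per-example loss is not consistently small across several perturbed copies of the network. It is illustrative only: the Gaussian weight noise is one arbitrary instantiation of the perturbation δ, not the paper's implementation.

import copy
import torch
import torch.nn.functional as F

@torch.no_grad()
def consistent_small_loss_mask(model, x, y, keep_ratio=0.8,
                               n_perturb=5, sigma=0.01):
    """Keep examples whose loss ranks in the smallest keep_ratio
    fraction under every perturbed copy of the network."""
    n_keep = int(keep_ratio * len(y))
    mask = torch.ones(len(y), dtype=torch.bool)
    for _ in range(n_perturb):
        noisy = copy.deepcopy(model)
        for p in noisy.parameters():
            p.add_(sigma * torch.randn_like(p))   # delta ~ N(0, sigma^2)
        loss = F.cross_entropy(noisy(x), y, reduction="none")
        small = torch.zeros_like(mask)
        small[loss.topk(n_keep, largest=False).indices] = True
        mask &= small                              # ensemble consensus
    return mask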
Therefore, we analyze the difference between generalized and memorized features to discriminate clean and noisy examples. Training DNNs with Noisy Datasets. Label noise issues can be addressed by reducing the negative impact of noisy examples. One direction is to train with a modified loss function based on the noise distribution. Most studies in this direction estimate the noise distribution prior to training, as it is not accessible in general. Another direction is to train with modified labels using the current model prediction. Aside from these directions, recent work suggests methods of exploiting small-loss examples based on the generalization ability of DNNs. However, it is still hard to find clean examples by relying on training losses alone. This study presents a simple method to overcome this problem of small-loss criteria. Suppose that ε% of the examples in a dataset D := D_clean ∪ D_noisy are noisy. Let S_{ε,D,θ} denote the set of (100−ε)% small-loss examples of the network f parameterized by θ out of the examples in D. Since it is generally hard to learn only all the clean examples, especially on a highly corrupted training set, it is problematic to regard all examples in S_{ε,D,θ} as being clean. To mitigate this, we suggest a simple idea: find the noisy examples among the examples in S_{ε,D,θ}. Since noisy examples are little correlated with other training examples, they are likely to be learned via memorization. However, DNNs cannot generalize neighborhoods of the memorized features. This means that even if the training losses of noisy examples are small, they can easily be increased under a certain perturbation δ, i.e., for (x, y) ∈ D_noisy, ℓ(f(x; θ + δ), y) ≫ ℓ(f(x; θ), y). Unlike noisy examples, the network f trained on the entire set D can generalize some clean examples in the early stage of training. Thus, their training losses remain small in the presence of the perturbation δ, i.e., for (x, y) ∈ D_clean, ℓ(f(x; θ + δ), y) ≈ ℓ(f(x; θ), y). This suggests that noisy examples can be identified from the inconsistency of losses under a certain perturbation δ. Based on this, we regard the examples in the intersection of the (100−ε)% small-loss example sets of an ensemble of M networks generated by adding perturbations δ_1, δ_2, ..., δ_M to θ, i.e., S_{ε,D,θ+δ_1} ∩ ... ∩ S_{ε,D,θ+δ_M}, as being clean. We call this ensemble consensus filtering because examples are selected via ensemble consensus. With this filtering, we propose a new robust training method termed Learning with Ensemble Consensus (LEC), described in Algorithms 1 and 2. Both algorithms consist of warming-up and filtering processes. The difference between the two lies in the filtering process. During the filtering process of Algorithm 1, the network is trained on the intersection of the (100−ε)% small-loss examples of the M networks within a mini-batch B, so the number of examples updated at once varies. We can encourage more stable training with a fixed number of examples updated at once, as described in Algorithm 2. During the filtering process of Algorithm 2, we first obtain the intersection of the small-loss examples of the M networks within the full batch D at each epoch. We then sample a subset of batchsize examples from the intersection and train on them at each update like normal SGD.
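As a complement to the algorithm listings below, here is a compact sketch of the full-batch consensus filtering of Algorithm 2 in its temporal variant (the LTEC instantiation introduced later): each epoch trains on the intersection of small-loss index sets from the current and preceding epochs. The class name and deque-based bookkeeping are illustrative assumptions.

from collections import deque
import numpy as np

class TemporalConsensus:
    """Track (100 - eps)% small-loss index sets over the last M epochs
    and train on their intersection (LTEC-style filtering)."""

    def __init__(self, eps, m):
        self.keep = 1.0 - eps            # fraction of examples to keep
        self.history = deque(maxlen=m)   # small-loss sets of past epochs

    def update(self, losses):
        """losses: per-example full-batch losses at the current epoch.
        Returns indices selected by consensus over the stored epochs."""
        n_keep = int(self.keep * len(losses))
        current = set(np.argsort(losses)[:n_keep])
        self.history.append(current)
        selected = set.intersection(*self.history)
        return np.fromiter(selected, dtype=np.int64)

# Each epoch: selected = filt.update(per_example_losses); then sample
# mini-batches of fixed size from `selected`, as in Algorithm 2.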
Algorithm 1 (LEC with mini-batch filtering). Require: noisy dataset D with noise ratio ε%, duration of warming-up T_w, number of networks used for filtering M, perturbation δ. Initialize θ randomly. For each epoch t = 1, ..., T_end and each mini-batch B_b sampled from the full batch D: if t ≤ T_w, update θ on all of B_b (warming-up process); otherwise, update θ only on the intersection of the (100−ε)% small-loss examples of the M perturbed networks within B_b (ensemble consensus filtering). Algorithm 2 (LEC with full-batch filtering). The warming-up process is identical. During the filtering process, at the start of each epoch t the intersection D_t of the (100−ε)% small-loss example sets of the M perturbed networks is computed over the full batch D; mini-batches B_b are then sampled from D_t and trained on as in normal SGD. Now the goal is to find a perturbation δ to be injected to distinguish between generalized and memorized features. We present three LECs with different perturbations in the following; the pseudocode can be found in Section A.1.3. • Network-Ensemble Consensus (LNEC): Inspired by the observation that an ensemble of networks with the same architecture is correlated during generalization and decorrelated during memorization, the perturbation δ comes from the difference between M networks. During the warming-up process, M networks are trained independently. During the filtering process, the M networks are trained on the intersection of the (100−ε)% small-loss examples of the M networks. • Self-Ensemble Consensus (LSEC): We focus on the relationship between observations (i) and (ii): network predictions for memorized features are uncertain, while those for generalized features are certain. Since the uncertainty of predictions can also be captured by multiple stochastic predictions, the perturbation δ comes from the difference between M stochastic predictions of a single network. During the filtering process, the network is trained on the intersection of the (100−ε)% small-loss example sets obtained with M stochastic predictions. • Temporal-Ensemble Consensus (LTEC): Inspired by the observation that, during training, atypical features are more easily forgotten than typical features, the perturbation δ comes from the difference between the networks at the current and preceding epochs. During the filtering process, the network is trained on the intersection of the (100−ε)% small-loss examples at the current epoch t and the preceding min(M − 1, t − 1) epochs. We collect the (100−ε)% small-loss examples at the preceding epochs, rather than network parameters, to reduce memory usage. In this section, we show (i) the effectiveness of the three perturbations at removing noisy examples from small-loss examples and (ii) a comparison of LEC and other existing methods under various annotation noises. Annotation noise. We study random label noise, open-set noise, and semantic noise. To generate these noises, we use MNIST and CIFAR-10/100, which are commonly used to assess robustness. For each benchmark dataset, we only corrupt its training set, leaving its test set intact for testing. The details can be found in Section A.1.1. • Random label noise. Annotation issues can happen in easy images as well as hard images. This is simulated in two ways: sym-ε% and asym-ε%. For sym-ε%, ε% of the entire set is randomly mislabeled to one of the other labels, and for asym-ε%, each label i of ε% of the entire set is changed to i + 1. We study four types: sym-20% and asym-20% to simulate a low level of noise, and sym-60% and asym-40% to simulate a high level of noise. • Open-set noise. In reality, annotated datasets may contain out-of-distribution (OOD) examples. To make OOD examples, the images of ε% of examples randomly sampled from the original dataset are replaced with images from another dataset, while the labels are left intact.
SVHN is used to make the open-set noise of CIFAR-100, and ImageNet-32 and CIFAR-100 are used to make the open-set noise of CIFAR-10. We study two types: 20% and 40% open-set noise. • Semantic noise. In general, images with easy patterns are correctly labeled, while images with ambiguous patterns are obscurely mislabeled. To simulate this, we select the top ε% most uncertain images and then flip their labels to confusing ones. The uncertainty of each image is computed by the amount of disagreement between the predictions of networks trained on the clean dataset; concretely, the uncertainty of an image x is derived from the N softmax outputs f(x; θ_n), where f(·; θ) denotes the softmax output of a network parameterized by θ, and N is set to 5. The label of each image is then assigned to the label with the highest value of the averaged softmax outputs of the networks trained on the clean dataset, excluding its ground-truth label. We study two types: 20% and 40% semantic noise. Architecture and optimization. Unless otherwise specified, we use a variant of the 9-convolutional-layer architecture. All parameters are trained for 200 epochs with Adam with a batch size of 128. The details can be found in Section A.1.2. Hyperparameters. The proposed LEC involves three hyperparameters: the duration of warming-up T_w, the noise ratio ε%, and the number of networks used for filtering M. Unless otherwise specified, T_w is set to 10, and M is set to 5 for random label noise and open-set noise, and 10 for semantic noise. We assume that the noise ratio ε% is given. Further study can be found in Section 5.2. Evaluation. We use two metrics: test accuracy and label precision. At the end of each epoch, test accuracy is measured as the ratio of correctly predicted test examples to all test examples, and label precision is measured as the ratio of clean examples used for training to all examples used for training. Thus, for both metrics, higher is better. For methods with multiple networks, the averaged values are reported. We report peak as well as final accuracy because a small validation set may be available in reality. For each noise type, every method is run four times with four random seeds, e.g., four runs of Standard on CIFAR-10 with sym-20%. A noisy dataset is randomly generated and the initial network parameters are randomized for each run of both random label noise and open-set noise. Note that the four noisy datasets generated in the four runs are the same for all methods. On the other hand, semantic noise is generated in a deterministic way; thus, only the initial network parameters are randomized for each run of semantic noise. Figure 2: Label precision (%) of small-loss examples of the current network (in green) and the intersection of small-loss examples of the current and preceding networks (in red) during running LTEC on CIFAR-10 with random label noise. We report the precision from epoch 11, when the filtering process starts. Comparison with Self-training. In Section 3.1, we argue that the (100−ε)% small-loss examples may be corrupted. To show this, we run LEC with M = 1, which amounts to training on the (100−ε)% small-loss examples; this method is similar to earlier small-loss approaches, and we call it Self-training for simplicity. Figure 1 shows that the label precision of Self-training is low, especially under a high level of noise, i.e., sym-60%. Compared to Self-training, the three LECs are trained on higher-precision data, achieving higher test accuracy, as shown in Table 1. Out of these three, LTEC performs best in both label precision and test accuracy.
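The selection-quality metrics used above reduce to a few lines; this sketch also includes recall, which is used later in Section 5. The boolean array `is_clean` marking ground-truth-clean examples is an assumed input.

import numpy as np

def selection_metrics(selected_idx, is_clean):
    """Label precision and recall of a selected training subset.

    selected_idx: indices of examples used for training this epoch
    is_clean:     boolean array over the whole training set
    """
    sel_clean = is_clean[selected_idx].sum()
    precision = sel_clean / max(len(selected_idx), 1)  # clean / selected
    recall = sel_clean / max(is_clean.sum(), 1)        # clean used / all clean
    return precision, recall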
Noisy examples are removed through ensemble consensus filtering. In LTEC, at every batch update, we first obtain the (100−ε)% small-loss examples of the current network and then train on the intersection of the small-loss examples of the current and preceding networks. We plot the label precision of the small-loss examples of the current network (in green) and of the intersection (in red) during running LTEC on CIFAR-10 with random noise in Figure 2. We observe that the label precision of the intersection is always higher, indicating that noisy examples are removed through ensemble consensus filtering. Competing methods. The competing methods include a regular training method: Standard; a method of training with corrected labels: D2L; a method of training with a modified loss function based on the noise distribution: Forward; and a method of exploiting small-loss examples: Co-teaching. We tune all the methods individually, as described in Section A.1.4. Results on MNIST/CIFAR with random label noise. The overall results can be found in Figures 3 and 4 and Table 2. We plot the average as a solid line and the standard deviation as a shadow around the line. Figure 3 shows that the test accuracy of D2L increases at a low level of label noise as training progresses, but does not increase at a high level of label noise. This is because D2L puts large weights on the given labels in the early stage of training, even under a high level of noise. Forward shows its strength only in limited scenarios such as MNIST. Co-teaching does not work well on CIFAR-100 with asym-40%, indicating that its cross-training scheme is vulnerable to small-loss examples of low label precision (see Figure 4). Unlike Co-teaching, our methods attempt to remove noisy examples from the small-loss examples. Thus, on CIFAR-100 with asym-40% noise, both LTEC and LTEC-full surpass Co-teaching by wide margins of about 6% and 5%, respectively. Results on CIFAR with open-set noise. The overall results can be found in Table 3. All the methods, including LTEC and LTEC-full, perform well under open-set noise. We speculate that this is due to the low correlation between open-set noisy examples. This is supported by the results on CIFAR-10, i.e., all the methods perform better on ImageNet-32 noise than on CIFAR-100 noise, as ImageNet-32 has more classes than CIFAR-100. Similar to poorly annotated examples, out-of-distribution examples are difficult to generalize during the warming-up process. Therefore, they can be removed from training batches through ensemble consensus filtering. Results on CIFAR with semantic noise. The overall results can be found in Table 4. The semantically generated noisy examples are highly correlated with each other, making it difficult to filter them out through ensemble consensus. We use 10 as the value of M for semantic noise because ensemble consensus with a larger M is more conservative. On CIFAR with semantic noise, LTEC and LTEC-full perform comparably to or better than the other methods. Of the two, LTEC-full performs better on 40% semantic noise due to its training stability. It is hard to learn all clean examples during the warming-up process. Therefore, clean examples with large losses may be excluded from training batches during the filtering process. However, we expect that the number of clean examples used for training would increase gradually as training proceeds, since LEC allows the network to generalize clean examples without overfitting.
To confirm this expectation, we measure recall, defined as the ratio of clean examples used for training to all clean examples, at the end of each epoch while running LTEC and LTEC-full. As expected, the recalls of both LTEC and LTEC-full increase sharply during the first 50 epochs, as shown in Figure 5. Pre-training prior to the filtering process may help to prevent the removal of clean examples from training batches. The number of networks used for filtering. During the filtering process of LEC, only the intersection of the small-loss examples of M perturbed networks is used for training, so the number of examples used for training depends strongly on M. To understand the effect of M, we run LTEC with varying M on CIFAR-10 with random label noise, over the range M ∈ {1, 3, 5, 7, 9, ∞}. Table 5 shows that a larger M does not always lead to better performance, because too many examples may be removed from training batches as M increases. Indeed, the total number of examples used for training is critical for robustness, as claimed in prior work. Noise ratio. In reality, only a poorly estimated noise ratio may be available. To study the effect of poor noise estimates, we run LTEC on CIFAR-10 with random label noise using values slightly lower and higher than the actual noise ratio; we also run Co-teaching, which likewise requires the noise ratio, for comparison. The overall results can be found in Table 6. Since it is generally difficult to learn all clean examples, training on small-loss examples selected with an over-estimated ratio (i.e., 1.1 times the true ratio) is often helpful for both Co-teaching and LTEC. In contrast, small-loss examples selected with an under-estimated ratio may be highly corrupted; in this case, LTEC is robust to the estimation error of the noise ratio, while Co-teaching is not. This robustness of LTEC against noise estimation error comes from ensemble consensus filtering. Applicability to a different architecture. The key idea of LEC is rooted in the difference between generalization and memorization, i.e., the different ways clean and noisy examples are learned during early SGD optimization. We therefore expect LEC to be applicable to any architecture optimized with SGD. To support this, we run Standard and LTEC with ResNet-20. The architecture is optimized following the original recipe, achieving a final test accuracy of 90.67% on clean CIFAR-10; here T_w is set to 30 to match the optimization details. Table 7 shows that LTEC (ResNet) beats Standard (ResNet) in both peak and final accuracy, as expected. This work presents a method of generating and using an ensemble for robust training. We explore three simple perturbation methods to generate the ensemble and develop a way of identifying noisy examples through ensemble consensus on small-loss examples. Given the growing attention to the use of small-loss examples for robust training, we expect our ensemble method to be useful for such training methods. A.1.1 ANNOTATION NOISE • Random label noise: For sym-ε%, ε% of the entire set are randomly mislabeled to one of the other labels; for asym-ε%, each label i of ε% of the entire set is changed to i + 1. The corruption matrices of sym-ε% and asym-ε% are described in Figures A1a and A1b, respectively. • Open-set noise: For ε% open-set noise, the images of ε% of the examples, sampled at random from the original dataset, are replaced with images from external sources, while the labels are left intact.
For CIFAR-10 with open-set noise, we sample images from 75 classes of CIFAR-100 and 748 classes of ImageNet to avoid sampling images similar to CIFAR-10. • Semantic noise: For semantic noise, we choose uncertain images and mislabel them ambiguously. In Figure A2, we see that clean examples have simple, easy images, while noisy examples do not. Its corruption matrix (Figure A1c) also reflects the similarity between classes, e.g., cat and dog, car and truck. The 9-convolutional-layer architecture used in this study can be found in Table A1. The network is optimized with Adam with a batch size of 128 for 200 epochs. The initial learning rate α is set to 0.1 and linearly annealed to zero during the last 120 epochs for MNIST and CIFAR-10, and during the last 100 epochs for CIFAR-100. The momentum parameters β_1 and β_2 are set to 0.9 and 0.999, respectively; β_1 is linearly annealed to 0.1 during the last 120 epochs for MNIST and CIFAR-10, and during the last 100 epochs for CIFAR-100. The CIFAR images are divided by 255 and whitened with ZCA. Additional regularizations such as data augmentation are not applied. The results on clean MNIST, CIFAR-10, and CIFAR-100 can be found in Table A2.

Algorithms A1 and A2 (training procedures; the extracted pseudocode is only partially recoverable): in each epoch t, the (100−ε)% small-loss examples of f_θ are computed, within each mini-batch B_b in Algorithm A1 (accumulated as P_t ← P_t ∪ S_{ε,B_b,θ}), or within the full batch D from the 2nd epoch onward in Algorithm A2. If t < T_w + 1, the warming-up process trains on full mini-batches; otherwise the filtering process trains only on the intersection of the small-loss sets of the current and up to M−1 preceding epochs (the first M−1 filtering epochs use as many sets as are available, sampling mini-batches of size B_b from the filtered set D_t). A Python sketch of this loop is given below.

The competing methods include a regular training method (Standard), a method of training with corrected labels (D2L), a method of training with a loss function modified according to the noise distribution (Forward), and a method of exploiting small-loss examples (Co-teaching). We tune all methods individually as follows: • Standard: the network is trained using the cross-entropy loss. • D2L: the input vector of a fully connected layer in the architecture is used to measure the LID estimates. The window size W involved in identifying the turning point is set to 12. The network is trained on the original labels until the turning point is found, and then trained using the bootstrapping target with an adaptively tunable mixing coefficient. • Forward: prior to training, the corruption matrix C, where C_ji = P(y = i | y_true = j), is estimated based on the 97th percentile of probabilities for each class on MNIST and CIFAR-10, and the 100th percentile on CIFAR-100. The network is then trained using the corrected labels for 200 epochs. • Co-teaching: two networks are employed; at every update, each selects its small-loss examples within a mini-batch and provides them to the other. The ratio of examples selected by training loss is linearly annealed from 100% to (100−ε)% over the first 10 epochs. We compute space complexity as the number of network parameters and computational complexity as the number of forward and backward passes. Here we assume that early stopping is not used and that the noise ratio ε% is given.
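Before the complexity analysis continues, here is the training-loop sketch promised above: a hedged reconstruction of the warming-up/filtering schedule, assuming helper functions small_loss_set(net, data, eps) (returns the indices of the (100−ε)% smallest-loss examples) and step(net, batch, opt) (one optimizer update). These helpers and the loop structure are assumptions, not the authors' code.

def train_lec(net, loader, eps, t_w, m, epochs, opt):
    history = []                                    # small-loss sets of recent epochs
    for t in range(1, epochs + 1):
        if t >= 2:                                  # small-loss sets from the 2nd epoch
            p_t = small_loss_set(net, loader.dataset, eps)
            history = (history + [p_t])[-m:]        # keep at most M most recent sets
        if t < t_w + 1:                             # warming-up: train on everything
            for batch in loader:
                step(net, batch, opt)
        else:                                       # filtering: consensus of M sets
            keep = set.intersection(*history) if history else set()
            for batch in loader:
                filtered = [ex for ex in batch if ex.index in keep]
                if filtered:
                    step(net, filtered, opt)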
Note that the computational complexity of each method depends on its hyperparameter values, e.g., the duration of the warming-up process T_w and the noise ratio ε%. The analysis is reported in Table A3. Our proposed LTEC is the most efficient, because it can be implemented with a single network (Section A.1.3) and only a subset of the entire training set is updated after the warming-up process. The per-epoch complexities from Table A3 (the six method columns were flattened by extraction; they are listed in the table's order) are: # of forward passes: n, n, 2n, Mn, Mn, n; # of backward passes: n, ≤n, ≤2n, ≤Mn, ≤n, ≤n. A.3.1 RESULTS OF LTEC WITH M = ∞. Figure A3 shows that ensemble consensus filtering with too large an M removes clean examples from training batches in the early stage of the filtering process. Unlike LTEC with M = 5, the recall of LTEC with M = ∞ does not increase as training proceeds, suggesting that its generalization performance is not enhanced. This shows that a larger M does not always lead to better performance. We expect that pre-training prior to the filtering process would reduce the number of clean examples removed by ensemble consensus filtering, regardless of M. Figure A3: Recall (%) of LTEC with varying M on CIFAR-10 with random label noise.
This work presents a method of generating and using ensembles effectively to identify noisy examples in the presence of annotation noise.
448
scitldr
Recovering sparse conditional independence graphs from data is a fundamental problem in machine learning with wide applications. A popular formulation of the problem is an $\ell_1$-regularized maximum likelihood estimation. Many convex optimization algorithms have been designed to solve this formulation and recover the graph structure. Recently, there has been a surge of interest in learning algorithms directly from data, in this case learning to map an empirical covariance matrix to the sparse precision matrix. This is a challenging task, since the symmetric positive definiteness (SPD) and sparsity of the matrix are not easy to enforce in learned algorithms, and a direct mapping from data to precision matrix may contain many parameters. We propose a deep learning architecture, GLAD, which uses an Alternating Minimization (AM) algorithm as our model inductive bias and learns the model parameters via supervised learning. We show that GLAD learns a very compact and effective model for recovering sparse graphs from data. Recovering sparse conditional independence graphs from data is a fundamental problem in high-dimensional statistics and time series analysis, and it has found applications in diverse areas. In computational biology, a sparse graph structure over gene expression data may be used to understand gene regulatory networks; in finance, a sparse graph structure over financial time series may be used to understand the relationships between different financial assets. A popular formulation of the problem is ℓ1-regularized log-determinant estimation of the precision matrix. Based on this convex formulation, many algorithms have been designed to solve the problem efficiently, and one can formally prove that, under a list of conditions, the solution of the optimization problem is guaranteed to recover the graph structure with high probability. However, convex-optimization-based approaches have their own limitations. The hyperparameters, such as the regularization parameter and learning rate, may depend on unknown constants and need to be tuned carefully to achieve the recovery guarantees. Furthermore, the formulation uses a single regularization parameter for all entries of the precision matrix, which may not be optimal. It is intuitive that one may obtain better recovery by allowing the regularization parameters to vary across the entries of the precision matrix; such flexibility, however, leads to a quadratic increase in the number of hyperparameters, and it is hard for traditional approaches to search over so many. Thus, a new paradigm may be needed for designing more effective sparse recovery algorithms. Recently, there has been a surge of interest in a new paradigm of algorithm design, where algorithms are augmented with learning modules trained directly on data, rather than having every step prescribed. This is meaningful because a family of optimization problems often needs to be solved again and again, similar in structure but different in data; a data-driven algorithm may be able to leverage this distribution of problem instances and learn an algorithm that performs better than the traditional convex formulation. In our case, the sparse graph recovery problem may also need to be solved again and again, where the underlying graphs differ but share similar degree distributions, magnitudes of the precision matrix entries, etc.
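As a concrete reference point for the convex formulation discussed above (written out as Eq. (1) below), here is a minimal sketch using scikit-learn's off-the-shelf graphical lasso solver; alpha plays the role of the ℓ1 penalty ρ, and the toy data are assumptions for illustration only.

import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.multivariate_normal(mean=np.zeros(5), cov=np.eye(5), size=1000)

model = GraphicalLasso(alpha=0.1)   # alpha corresponds to the penalty rho
model.fit(X)
theta_hat = model.precision_        # estimated sparse precision matrix
print(np.round(theta_hat, 2))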
For instance, gene regulatory networks may rewire depending on time and conditions, and we want to estimate them repeatedly from gene expression data. In our experiments, we show that the AM architecture provides a very good inductive bias, allowing the model to learn a highly effective sparse graph recovery algorithm from a small amount of training data. In all cases, the learned algorithm can recover sparse graph structures with much fewer data points on a new problem, and it also works well for recovering gene regulatory networks based on realistic gene expression data generators. Related works. Prior work has considered CNN-based architectures that directly map empirical covariance matrices to estimated graph structures. Optimization algorithms have also been parameterized as recurrent neural networks or as policies in reinforcement learning: directly parameterizing an optimization algorithm as an RNN-based learning-to-learn framework; approaching automated algorithm design from the reinforcement learning perspective by representing a particular optimization algorithm as a policy; and learning combinatorial optimization over graphs via deep Q-learning. These works did not consider the structure of our sparse graph recovery problem. Another interesting line of work develops deep neural networks by unfolding iterative algorithms: ALISTA, which unrolls the Iterative Shrinkage Thresholding Algorithm (ISTA), and ADMM-Net, developed for compressive sensing of MRI data. Though these seminal works were primarily developed for compressive sensing applications, they allude to the general theme of using unrolled algorithms as inductive biases. We thus identify a suitable unrolled algorithm and leverage its inductive bias to solve the sparse graph recovery problem. Given m observations of a d-dimensional multivariate Gaussian random variable X = [X_1, ..., X_d], the sparse graph recovery problem aims to estimate the covariance matrix Σ* and the precision matrix Θ* = (Σ*)^{-1}. The ij-th component of Θ* is zero if and only if X_i and X_j are conditionally independent given the other variables {X_k}_{k≠i,j}. It is therefore popular to impose an ℓ1 regularization on the estimate of Θ* to increase its sparsity and yield easily interpretable models. The problem is formulated as the ℓ1-regularized maximum likelihood estimation

Θ̂ = argmin_{Θ ∈ S^d_{++}} −log det Θ + tr(Σ̂ Θ) + ρ ||Θ||_{1,off},   (1)

where Σ̂ is the empirical covariance matrix based on m samples, S^d_{++} is the space of d × d symmetric positive definite (SPD) matrices, and ||Θ||_{1,off} = Σ_{i≠j} |Θ_ij| is the off-diagonal ℓ1 regularizer with regularization parameter ρ. This estimator is sensible even for non-Gaussian X, since it minimizes an ℓ1-penalized log-determinant Bregman divergence. The sparse precision matrix estimation problem in Eq. (1) is a convex optimization problem that can be solved by many algorithms. We give a few canonical and advanced examples, which are compared in our experiments. G-ISTA. G-ISTA is a proximal gradient method; it updates the precision matrix iteratively as Θ_{k+1} = η_{ξ_k ρ}(Θ_k − ξ_k(Σ̂ − Θ_k^{-1})), where the step size ξ_k is determined by line search such that Θ_{k+1} is an SPD matrix. ADMM. Alternating direction method of multipliers transforms the problem into the equivalent constrained form min_{Θ,Z} −log det Θ + tr(Σ̂Θ) + ρ||Z||_{1,off} s.t. Θ = Z, decoupling the log-determinant term and the ℓ1 regularization term, and minimizes the following augmented Lagrangian with a penalty parameter λ: L_λ(Θ, Z, β) = −log det Θ + tr(Σ̂Θ) + ρ||Z||_{1,off} + ⟨β, Θ − Z⟩ + (λ/2)||Θ − Z||²_F. Taking U := β/λ as the scaled dual variable, the ADMM update rules are Θ_{k+1} = argmin_Θ −log det Θ + tr(Σ̂Θ) + (λ/2)||Θ − Z_k + U_k||²_F, Z_{k+1} = η_{ρ/λ}(Θ_{k+1} + U_k), and U_{k+1} = U_k + Θ_{k+1} − Z_{k+1}.
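The ADMM Θ-step above has a standard closed form via eigendecomposition (solve λθ_i² − d_i θ_i − 1 = 0 per eigenvalue d_i of λ(Z − U) − Σ̂). The sketch below implements those textbook updates; it is a generic reference implementation, not the paper's code.

import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def admm_glasso(emp_cov, rho, lam=1.0, iters=200):
    d = emp_cov.shape[0]
    Z = np.eye(d)
    U = np.zeros((d, d))
    for _ in range(iters):
        # Theta-step: eigendecompose lam*(Z - U) - emp_cov
        w, Q = np.linalg.eigh(lam * (Z - U) - emp_cov)
        theta_eig = (w + np.sqrt(w ** 2 + 4.0 * lam)) / (2.0 * lam)  # > 0, so SPD
        Theta = (Q * theta_eig) @ Q.T
        # Z-step: entry-wise soft-thresholding (off-diagonal penalty only)
        Z = soft_threshold(Theta + U, rho / lam)
        np.fill_diagonal(Z, np.diag(Theta + U))    # diagonal left unpenalized
        # scaled dual update
        U = U + Theta - Z
    return Z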
BCD. Block-coordinate descent methods update each column (and the corresponding row) of the precision matrix iteratively by solving a sequence of lasso problems. The algorithm is very efficient for large-scale problems involving thousands of variables. Apart from the various algorithms, rigorous statistical analysis has also been provided for the optimal solution of the convex formulation in Eq. (1): consistency of the estimator Θ̂ in Eq. (1) has been established in both Frobenius and spectral norms, at a rate scaling roughly as ||Θ̂ − Θ*||_F = O(√((d + s) log d / m)) with high probability, where s is the number of nonzero entries in Θ*. This statistical analysis also reveals certain limitations of the convex formulation: the established consistency is based on a set of carefully chosen conditions, including a lower bound on the sample size, the sparsity level of Θ*, the degree of the graph, the magnitudes of the entries in the covariance matrix, and the strength of interaction between edge and non-edge entries in the precision matrix (mutual incoherence on the Hessian Γ* := Σ* ⊗ Σ*). In practice, it may be hard for a problem to satisfy these recovery conditions. Therefore, there seems to be room for improving the above convex optimization algorithms for recovering the true graph structure. Prior to the data-driven paradigm for sparse recovery, since the target parameter Θ* is unknown, the best precision matrix recovery method was to resort to a surrogate objective function (for instance, Eq. (1)), and optimally tuning the unknown parameter ρ is a very challenging problem in practice. Instead, we can leverage a large amount of simulated or real data and design a learning algorithm that directly optimizes the loss in Eq. (9). Furthermore, since the log-determinant estimator in Eq. (1) is NOT directly optimizing the recovery objective ||Θ̂ − Θ*||²_F, there is also a mismatch between the optimization objective and the final evaluation objective (refer to the first experiment in Section 5.1). This raises the hope that one may improve the results by directly optimizing the recovery objective with algorithms learned from data. In the remainder of the paper, we present a data-driven method to learn an algorithm for precision matrix estimation; we call the resulting algorithm GLAD (Graph recovery Learning Algorithm using Data-driven training). We ask the question: given a family of precision matrices, is it possible to improve recovery for sparse graphs by learning a data-driven algorithm? More formally, suppose we are given n precision matrices {Θ*^(i)}_{i=1}^n from a family G of graphs and m samples {x^(i,j)}_{j=1}^m associated with each Θ*^(i). These samples can be used to form n sample covariance matrices {Σ̂^(i)}_{i=1}^n. We are interested in learning an algorithm for precision matrix estimation by solving the supervised learning problem min_f (1/n) Σ_{i=1}^n L(GLAD_f(Σ̂^(i)), Θ*^(i)), where f is the set of parameters in GLAD(·) and the output GLAD_f(Σ̂^(i)) is expected to be a good estimate of Θ*^(i) in terms of an evaluation metric L of interest. The benefit is that this can directly optimize the final evaluation metric, which is related to the desired structure or graph properties of a family of problems. However, it is a challenging task to design a good parameterization of GLAD_f for this graph recovery problem. We explain the challenges below and then present our solution. In the literature on learning data-driven algorithms, most models are designed using traditional deep learning architectures, such as fully connected DNNs or recurrent neural networks.
But for graph recovery problems, directly using these architectures does not work well, for the following reasons. First, using a fully connected neural network is not practical: since both the input and the output of the graph recovery problem are matrices, the number of parameters scales at least quadratically in d. Such a large number of parameters requires many input-output training pairs to estimate decently, so some structure must be imposed on the network to reduce the parameter count and sample complexity. Second, structured models such as convolutional neural networks (CNNs) have been applied to learn a mapping from Σ̂ to Θ*. Due to the structure of CNNs, the number of parameters can be much smaller than in fully connected networks. However, a recovered graph should be invariant to permutations of the matrix rows/columns, and this constraint is very hard for CNNs to learn unless there are many samples; moreover, the CNN structure is a bias imposed on the model with no guarantee that it will work. Third, the intermediate results produced by both fully connected networks and CNNs are not interpretable, making it hard to diagnose the learned procedure or to progressively output increasingly improved precision matrix estimates. Fourth, the SPD constraint is hard to impose in traditional deep learning architectures. These limitations suggest a list of desiderata for the learning model: small model size; minimalist learning; interpretable architecture; progressive improvement; and SPD output. These desiderata motivate the design of our deep architecture using unrolled algorithms. To take the desiderata into account, we use an unrolled algorithm as the template for the architecture design of GLAD. The unrolled algorithm already incorporates some problem structure, such as permutation invariance and interpretable intermediate results, but it traditionally has no learning component and is typically not directly suitable for gradient-based approaches. We leverage this inductive bias in our architecture design, augment the unrolled algorithm with suitable and flexible learning components, and then train these embedded models with stochastic gradient descent. The GLAD model is based on a reformulation of the original optimization problem in Eq. (1) with a squared penalty term, and an alternating minimization (AM) algorithm for it. More specifically, we consider the modified optimization problem with a quadratic penalty parameter λ:

min_{Θ, Z} −log det Θ + tr(Σ̂ Θ) + ρ ||Z||_1 + (λ/2) ||Θ − Z||²_F,   (6)

and the alternating minimization (AM) method for solving it:

Θ_{k+1} = (1/2) ( −Y_k + (Y_k^T Y_k + (4/λ) I)^{1/2} ),  with  Y_k = Σ̂/λ − Z_k,   (7)
Z_{k+1} = η_{ρ/λ}(Θ_{k+1}),   (8)

where η_{ρ/λ}(θ) := sign(θ) max(|θ| − ρ/λ, 0) is applied entry-wise. The derivation of these steps is given in Appendix A. We replace the penalty constants (ρ, λ) by problem-dependent neural networks, ρ_nn and Λ_nn. These neural networks are minimalist in their number of parameters: the input dimensions are a mere 3 and 2 for ρ_nn and Λ_nn, respectively, and each outputs a single value. Algorithm 1 summarizes the update equations for our unrolled AM-based model, GLAD. Besides the parameters in ρ_nn and Λ_nn, the constant t used for initialization is also a learnable scalar parameter. This unrolled algorithm with neural network augmentation can be viewed as a highly structured recurrent architecture, as illustrated in Figure 1. There are many traditional algorithms for solving graph recovery problems.
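To make the AM template concrete, here is a minimal NumPy/SciPy sketch of the fixed-(ρ, λ) iteration in Eqs. (7)-(8); GLAD replaces the scalars rho and lam with the small networks ρ_nn and Λ_nn. This is an illustrative sketch, not the released implementation.

import numpy as np
from scipy.linalg import sqrtm

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def am_step(emp_cov, Z, rho, lam):
    # one AM iteration for the quadratically penalized objective in Eq. (6)
    d = emp_cov.shape[0]
    Y = emp_cov / lam - Z
    Theta = 0.5 * (-Y + np.real(sqrtm(Y.T @ Y + (4.0 / lam) * np.eye(d))))
    Z_new = soft_threshold(Theta, rho / lam)
    return Theta, Z_new

def am_solver(emp_cov, rho=0.1, lam=1.0, iters=30):
    Z = np.eye(emp_cov.shape[0])
    for _ in range(iters):
        Theta, Z = am_step(emp_cov, Z, rho, lam)
    return Theta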
We choose AM as our basis because: first, empirically, we tried models built upon other algorithms, including G-ISTA and ADMM, but the AM-based model gives consistently better performance (Appendix C.10 & C.11 discuss the different parameterizations tried). Second, and more importantly, the AM-based architecture has the nice property of maintaining Θ_{k+1} as an SPD matrix throughout the iterations as long as λ_k < ∞. Third, as we prove later in Section 4, the AM algorithm has a linear convergence rate, allowing us to use a fixed, small number of iterations and still achieve small error margins. Algorithm 1: GLAD. To learn the parameters of the GLAD architecture, we directly optimize the recovery objective rather than the log-determinant objective. A nice property of our deep architecture is that each iteration outputs a valid precision matrix estimate. This allows us to add auxiliary losses to regularize the intermediate results of the GLAD architecture, guiding it to learn parameters that generate a smooth solution trajectory. Specifically, we use the Frobenius norm in our experiments and design an objective with some resemblance to the discounted cumulative reward in reinforcement learning:

min_f (1/n) Σ_{i=1}^n Σ_{k=1}^K γ^{K−k} ||Θ_k^(i) − Θ*^(i)||²_F,   (9)

where Θ_k is the output of the recurrent unit GLADcell at the k-th iteration, K is the number of unrolled iterations, and γ ≤ 1 is a discounting factor. We use stochastic gradient descent to train the parameters f of the GLADcell. A key step in the gradient computation is propagating gradients through the matrix square root inside the GLADcell. To do this efficiently, we use the SPD property X = X^{1/2} X^{1/2} and the product rule of derivatives to obtain dX = d(X^{1/2}) X^{1/2} + X^{1/2} d(X^{1/2}), which is a Sylvester equation for d(X^{1/2}). Since the derivative dX of X is easy to obtain, the derivative d(X^{1/2}) can then be obtained by solving this Sylvester equation. The objective function in Eq. (9) should be understood in a similar way as in prior works where deep architectures are designed to directly produce sparse outputs. For the GLAD architecture, a collection of input covariance matrices paired with ground-truth sparse precision matrices is available during training, coming from either simulated or real data; the objective in Eq. (9) is thus formed to directly compare the output of GLAD with the ground-truth precision matrix. The goal is to train a deep architecture that performs well over a family/distribution of such covariance/precision matrix pairs: the average in the objective is over different pairs so that the learned architecture performs well over a family of problem instances. Furthermore, each layer of our deep architecture outputs an intermediate prediction of the sparse precision matrix; the objective takes all these intermediate outputs into account, weights the loss according to the layer, and progressively pulls the intermediate outputs closer and closer to the target ground truth. We note that the designed architecture is more flexible than just learning the regularization parameters: the components of GLAD corresponding to the regularization parameters are entry-wise and adaptive to the input covariance matrix and the intermediate outputs, so the GLAD architecture can adaptively choose a whole matrix of regularization parameters.
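The discounted trajectory loss of Eq. (9) is straightforward to express for one training pair; the PyTorch sketch below assumes the unrolled model already exposes its K intermediate estimates (names are illustrative).

import torch

def glad_loss(thetas, theta_star, gamma=0.9):
    # thetas:     list of K intermediate estimates from the unrolled GLADcell
    # theta_star: ground-truth precision matrix
    K = len(thetas)
    loss = 0.0
    for k, theta_k in enumerate(thetas, start=1):
        loss = loss + gamma ** (K - k) * torch.sum((theta_k - theta_star) ** 2)
    return loss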
This task would be very challenging if the matrix of regularization parameters were tuned manually by cross-validation; a recent theoretical work also validates this aspect of GLAD's design. Since the GLAD architecture is obtained by augmenting an unrolled optimization algorithm with learnable components, the question is what kind of guarantees can be provided for such a learned algorithm, and whether learning can bring benefits to the recovery of the precision matrix. In this section, we first analyze the statistical guarantee of running the AM algorithm in Eqs. (7)-(8) for k steps with a fixed quadratic penalty parameter λ, and then interpret its implications for the learned algorithm. First, we need some standard assumptions about the true model from the literature. Assumption 2 guarantees that Θ* exists; Assumption 1 merely upper-bounds the sparsity of Θ* and does not stipulate anything in particular about s. These assumptions characterize the fundamental limits of the sparse graph recovery problem, beyond which recovery is not possible. Under these assumptions, we prove the linear convergence of the AM algorithm (proof in Appendix B). Theorem 1. With ρ ≍ √(log d / m), where ρ is the ℓ1 penalty, d is the dimension of the problem, and m is the number of samples, the Alternating Minimization algorithm has a linear convergence rate for the optimization objective defined in Eq. (6): the k-th iteration satisfies ||Θ^{AM}_{k+1} − Θ̂_λ||_F ≤ C_λ ||Θ^{AM}_k − Θ̂_λ||_F, where Θ̂_λ is the minimizer of Eq. (6) and 0 < C_λ < 1 is a constant depending on λ. From the theorem, one can see that by optimizing the quadratic penalty parameter λ, one can adjust the constant C_λ in the bound. We observe that at each stage k, an optimal penalty parameter λ_k can be chosen depending on the most up-to-date value of C_λ; an adaptive sequence of penalty parameters (λ_1, ..., λ_K) should achieve a better error bound than a fixed λ. Since C_λ is a very complicated function of λ, the optimal λ_k is hard to choose manually. Besides, the linear convergence guarantee in this theorem is based on the sparsity regularization parameter ρ ≍ √(log d / m), and choosing a good ρ value in practice is a tedious task, as shown in our experiments. In summary, the implications of this theorem are: • An adaptive sequence (λ_1, ..., λ_K) should lead to an algorithm with better convergence than a fixed λ, but the sequence may not be easy to choose manually. • Both ρ and the optimal λ_k depend on the corresponding error ||Θ_{AM} − Θ̂_λ||_F, which makes these parameters hard to prescribe manually. • Since the AM algorithm has a fast linear convergence rate, we can run it for a fixed number of iterations K and still converge within a reasonable error margin. Our learning-augmented deep architecture, GLAD, can tune the sequence of λ_k and the ρ parameters jointly using gradient descent. Moreover, we refer to a recent work that considered minimizing the graphical lasso objective with a general nonconvex penalty, showing that by iteratively solving a sequence of adaptive convex programs one can achieve even better error margins (refer to their Algorithm 1 & Theorem 3.5): in every iteration they chose an adaptive regularization matrix based on the most recent solution and the choice of nonconvex penalty. We thus hypothesize that we can further improve our error margin if we make the penalty parameter ρ a nonconvex, problem-dependent function. We choose ρ as a function of the most up-to-date solution (Θ_k, Σ̂, Z_k), and we allow different regularization for different entries of the precision matrix.
Such flexibility potentially improves the ability of the GLAD model to recover sparse graphs. In this section, we report several experiments comparing GLAD with traditional algorithms and other data-driven algorithms. The results validate the list of desiderata mentioned previously; in particular, they show the potential of pushing the boundary of traditional graph recovery algorithms by utilizing data. A Python implementation (tested on a P100 GPU) is available; exact experimental settings are covered in Appendix C. Evaluation metric. We use normalized mean square error (NMSE) and probability of success (PS) to evaluate algorithm performance. NMSE is 10 log_10(E||Θ̂ − Θ*||²_F / E||Θ*||²_F), and PS is the probability of correct signed edge-set recovery, i.e., P[sign(Θ̂_ij) = sign(Θ*_ij), ∀(i,j) ∈ E(Θ*)], where E(Θ*) is the true edge set. Notation. In all reported results, D stands for the dimension d of the random variable, M for the sample size, and N for the number of graphs (precision matrices) used for training. Inconsistent optimization objective. Traditional algorithms are typically designed to optimize the ℓ1-penalized log-likelihood. Since this is a convex optimization, convergence to the optimal solution is usually guaranteed. However, this optimization objective differs from the true error. Taking ADMM as an example, Figure 2 reveals that although the optimization objective always converges, the error of recovering the true precision matrix, measured by NMSE, behaves very differently for different regularization parameters ρ, which indicates the necessity of directly optimizing NMSE and of hyperparameter tuning. Expensive hyperparameter tuning. Although the hyperparameters of traditional algorithms can be tuned when the true precision matrices are provided as a validation dataset, we want to emphasize that hyperparameter tuning by grid search is a tedious and hard task. Table 1 shows that the NMSE values are very sensitive to both ρ and the quadratic penalty λ of the ADMM method. For instance, the optimal NMSE in this table is −9.61 at λ = 0.1 and ρ = 0.03, but it degrades by a large amount, to −2.06, if ρ is changed only slightly to 0.01. There are many similar observations in this table where slight parameter changes lead to significant NMSE differences, which in turn makes grid search very expensive; G-ISTA and BCD follow similar trends. For a fair comparison against the data-driven GLAD, in all following experiments the hyperparameters of the traditional algorithms are fine-tuned on validation datasets, on which we spent extensive effort (see Appendix C.3, C.6). In contrast, the gradient-based training of GLAD turns out to be much easier. We follow the experimental settings of prior work to generate data and perform synthetic experiments on multivariate Gaussians. Each off-diagonal entry of the precision matrix is drawn from a uniform distribution, i.e., Θ*_ij ∼ U(−1, 1), and then set to zero with probability p = 1 − s, where s is the sparsity level. Finally, an appropriate multiple of the identity matrix is added to the current matrix so that the resulting matrix has smallest eigenvalue 1 (refer to Appendix C.1). We use 30 unrolled steps for GLAD (Figure 3) and compare it to G-ISTA, ADMM, and BCD. All algorithms are trained/fine-tuned on 10 randomly generated graphs and tested on 100 graphs. Convergence and average runtimes of the different algorithms on Nvidia P100 GPUs are shown in Figure 4 and Table 2, respectively.
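For completeness, the two evaluation metrics just defined can be computed as follows; the batch-of-pairs interface is an assumption for illustration.

import numpy as np

def nmse_db(theta_hats, theta_stars):
    # NMSE in dB over a batch of (estimate, ground-truth) pairs
    err = np.mean([np.sum((th - ts) ** 2) for th, ts in zip(theta_hats, theta_stars)])
    ref = np.mean([np.sum(ts ** 2) for ts in theta_stars])
    return 10.0 * np.log10(err / ref)

def prob_success(theta_hats, theta_stars, tol=1e-8):
    # fraction of trials in which every true edge is recovered with the correct sign
    wins = 0
    for th, ts in zip(theta_hats, theta_stars):
        edges = (np.abs(ts) > tol) & ~np.eye(len(ts), dtype=bool)
        wins += np.all(np.sign(th[edges]) == np.sign(ts[edges]))
    return wins / len(theta_hats)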
GLAD consistently converges faster and gives lower NMSE. Although the fine-tuned G-ISTA also has decent performance, its computation time per iteration is much longer than GLAD's because it requires line-search steps. Besides, we also observe a progressive improvement of GLAD across its iterations. As prior analysis shows, the recovery guarantee (e.g., in terms of Frobenius norm) of the ℓ1-regularized log-determinant optimization depends significantly on the sample size and other conditions. Our GLAD directly optimizes the recovery objective based on data, and it has the potential of pushing this sample complexity limit; we experimented with this and found positive results. We follow prior work to conduct experiments on GRID graphs, which satisfy the required recovery conditions, and we furthermore conduct a more challenging task of recovering restricted but randomly constructed graphs (see Appendix C.7 for more details). The probability of success (PS) is non-zero only if the algorithm recovers all the edges with correct signs; results are plotted in Figure 5. GLAD consistently outperforms traditional methods in terms of sample complexity, as it recovers the true edges with considerably fewer samples. Having a good inductive bias makes GLAD's architecture quite data-efficient compared to other deep learning models. For instance, the state-of-the-art CNN-based'DeepGraph' contains orders of magnitude more parameters than GLAD, and it takes roughly 100,000 samples and several hours to train their DG-39 model. In contrast, GLAD learns well with fewer than 25 parameters, within 100 training samples, and with notably less training time. Table 3 also shows that GLAD significantly outperforms the DG-39 model in terms of AUC (area under the ROC curve) using just 100 training graphs, which is the typical regime in real-world settings. Fully connected DL models are unable to learn from such small data and are hence skipped in the comparison. Figure 6 shows that GLAD performs favourably for structure recovery in terms of NMSE on gene expression data. As the governing equations of the underlying SynTReN distribution are unknown, these experiments also emphasize GLAD's ability to handle non-Gaussian data. Figure 7 visualizes the edge-recovery performance of GLAD models trained on a sub-network of the true E. coli bacteria data. We denote TPR: true positive rate, FPR: false positive rate, FDR: false discovery rate. The number of simulated training/validation graphs was set to 20/20, with one batch of M samples per graph (details in Appendix C.9). Although GLAD was trained on graphs with D = 25, it was able to robustly recover the structure of a higher-dimensional graph with D = 43. Appendix C.12 contains details of the experiments on real E. coli data, where the GLAD model was trained using the SynTReN simulator, and Appendix C.13 explains our proposed approach to scaling to larger problem sizes. We presented a novel neural network, GLAD, for the sparse graph recovery problem, based on an unrolled Alternating Minimization algorithm. We theoretically prove the linear convergence of the AM algorithm and empirically show that learning can further improve sparse graph recovery. The learned GLAD model is able to push the sample complexity limits, highlighting the potential of using algorithms as inductive biases for deep learning architectures. Further development of theory is needed to fully understand and realize the potential of this new direction.
Alternating Minimization performs coordinate-wise minimization of the objective in Eq. (6) over Θ and Z. Setting the gradient of the objective with respect to Θ to zero gives −Θ^{-1} + Σ̂ + λ(Θ − Z) = 0; setting the (sub)gradient with respect to Z to zero gives λ(Z − Θ) + ρ ∂||Z||_1 ∋ 0, i.e., Z = η_{ρ/λ}(Θ). Solving the first equation for Θ (a quadratic in Θ whose SPD root is selected), we obtain Θ = (1/2)(−Y + (Y^T Y + (4/λ)I)^{1/2}) with Y = Σ̂/λ − Z, which yields the AM updates in Eqs. (7)-(8). B LINEAR CONVERGENCE RATE ANALYSIS. Theorem 1 (restated). With ρ ≍ √(log d / m), where ρ is the ℓ1 penalty, d is the dimension of the problem, and m is the number of samples, the Alternating Minimization algorithm has a linear convergence rate for the optimization objective defined in Eq. (6): the k-th iteration satisfies ||Θ^{AM}_{k+1} − Θ̂_λ||_F ≤ C_λ ||Θ^{AM}_k − Θ̂_λ||_F, where 0 < C_λ < 1 is a constant depending on λ. We reuse the notation of the main text throughout the appendix; (Θ̂_λ, Z_λ) denotes the minimizer (fixed point) of the penalized objective, and the AM update rules are those of Eqs. (7)-(8). Assumptions: with reference to existing theory, we make the standard assumptions about the true model stated in Section 4 (O_P(·) denotes boundedness in probability). We now proceed towards the proof. Lemma 2. For any x, y, k ∈ R with k > 0 and x ≠ y, the scalar map x ↦ √(x² + k) is a contraction: |√(x² + k) − √(y² + k)| < |x − y|. Proof: the derivative of the map is |x|/√(x² + k) < 1. Lemma 3. For symmetric matrices X and Y, ||(X^T X + kI)^{1/2} − (Y^T Y + kI)^{1/2}||_F ≤ c ||X − Y||_F, where c < 1 depends on Λ_max(X) and Λ_max(Y), the largest eigenvalues of X and Y in absolute value. Proof sketch: factorize X = Q_X D_X Q_X^T and Y = Q_Y D_Y Q_Y^T by eigendecomposition, where Q_X, Q_Y are orthogonal and D_X, D_Y diagonal; define Q := Q_Y^T Q_X and compare diagonal entries. Since D_X and D_Y are diagonal, Lemma 2 applies entry-wise; assuming ||X − Y||_F > 0 (otherwise the claim holds trivially), collecting terms gives the bound. Lemma 4. Under the assumptions, the outputs of the k-th AM step satisfy (i) ||Z_{k+1} − Z_λ||_F ≤ ||Θ_{k+1} − Θ̂_λ||_F and (ii) ||Θ_{k+1} − Θ̂_λ||_F ≤ C_λ ||Z_k − Z_λ||_F, where 0 < C_λ < 1 is a constant depending on λ. Proof: part (i) is easy to show once we observe that the second update step of AM, η_{ρ/λ}, is a contraction under the metric d(X, Y) = ||X − Y||_F. For part (ii), write A(X) = (X^T X + (4/λ)I)^{1/2}; using the first AM update step, the triangle inequality, and Lemma 3, the difference of the Θ-updates is controlled by a factor depending on Λ_max(Y_λ) and Λ_max(Y_{k+1}). It remains to show that both Λ_max(Y_λ) and Λ_max(Y_{k+1}) are bounded under the assumptions: Z_λ is the minimizer of a strongly convex function, so its norm is bounded, and the iterates Y_{k+1} are bounded as well; hence 0 < C_λ < 1 is a constant depending only on λ. Theorem 1 then follows by chaining parts (i) and (ii) of Lemma 4 across iterations. Proof (error between Θ̂_λ and Θ̂_G, the minimizer of the graphical lasso objective G): by the optimality condition ∇_Z f(Θ̂_λ, Z_λ, ρ, λ) = 0 we obtain a fixed-point equation for Z_λ, and since G is σ_G-strongly convex, with σ_G independent of the sample covariance Σ̂ (the Hessian of G does not involve Σ̂), the distance ||Θ̂_λ − Θ̂_G||_F is bounded in terms of λ. Proof (error between Θ̂_G and Θ*): Corollary 5 (consistency of the graphical lasso estimator). Let Θ̂_G be the minimizer of the graphical lasso objective; then ||Θ̂_G − Θ*||_F = O_P(√((d + s) log d / m)). C EXPERIMENTAL DETAILS. This section contains the detailed settings used in the experimental evaluation. For Sections 5.1 and 5.2, the synthetic data were generated by the following procedure: a d-dimensional precision matrix Θ was generated by initializing a d × d matrix with its off-diagonal entries sampled i.i.d.
from a uniform distribution, Θ_ij ∼ U(−1, 1). These entries were then set to zero according to the sparsity pattern of a corresponding Erdos-Renyi random graph with edge probability p. Finally, an appropriate multiple of the identity matrix was added to the current matrix so that the resulting matrix had smallest eigenvalue 1; in this way, Θ was ensured to be positive definite. The top-row plots use sparsity probability p = 0.5 for the Erdos-Renyi random graph, whereas for the bottom-row plots the sparsity probabilities are sampled uniformly from U(0.05, 0.15). For fine-tuning the traditional algorithms, a validation dataset of 10 graphs was used; for the GLAD algorithm, 10 training graphs were randomly chosen and the same validation set was used. C.5 GLAD: ARCHITECTURE DETAILS FOR SECTION 5.2. GLAD parameter settings: ρ_nn was a 4-layer neural network and Λ_nn a 2-layer neural network, both using 3 hidden units in each layer. The non-linearity for the hidden layers was tanh, while the final layer of both ρ_nn and Λ_nn used a sigmoid (σ) (refer to Figure 3). The learnable offset parameter of the initial Θ_0 was set to t = 1, and the model was unrolled for L = 30 iterations. The learning rates were chosen in [0.01, 0.1] with a multi-step LR scheduler, the optimizer was Adam, and the model with the best NMSE on the validation data was selected. Figure 9: an illustration of how sensitive the traditional methods are to hyperparameters and how tedious it is to fine-tune them. The problem setting is the same as described in Section 5.3; for all three methods shown, the algorithm-specific parameters were already tuned to reasonable settings, yet varying the ℓ1 penalty ρ shows how sensitive the probability of success is to even slight changes of ρ. The values are also very sensitive to the choice of t, and these parameter values change substantially for a new problem setting; G-ISTA and BCD follow similar trends. Additional plots highlighting the hyperparameter sensitivity of the traditional methods for the model-selection-consistency experiments are provided in the corresponding figure. Details for the experiments in Figure 5: two different graph types were chosen, inspired by prior work. In the 'grid' graph setting, the edge weights of different precision matrices were sampled uniformly from w ∼ U(0.12, 0.25), with equal weights within a graph. The other setting was more general: a random Erdos-Renyi graph with edge probability p = 0.05, with the off-diagonal entries of the precision matrix sampled uniformly from U(0.1, 0.4). The GLAD parameter settings were the same as described in Appendix C.5, the model with the best PS performance on the validation dataset was selected, and train/valid/test = 10/10/100 graphs were used with 10 sample batches per graph. C.8 GLAD: COMPARISON WITH OTHER DEEP LEARNING BASED METHODS. Table 3 shows AUC (with standard error) comparisons with the DeepGraph model; the experimental settings follow those of the DeepGraph paper. Gaussian random graphs with sparsity p = 0.05 were chosen, with edge values sampled from U(−1, 1). GLAD was trained on only 10 graphs with 5 sample batches per graph. The dimension of the problem is D = 39. The architecture choices for GLAD were the same as described in Appendix C.5, and GLAD performs consistently better across all settings by a significant AUC margin.
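The synthetic generation procedure described at the start of this appendix section can be sketched as follows (helper names are illustrative):

import numpy as np

def sample_precision(d, p, rng):
    # U(-1,1) off-diagonal entries, Erdos-Renyi mask, smallest eigenvalue shifted to 1
    theta = np.triu(rng.uniform(-1.0, 1.0, size=(d, d)), 1)
    mask = rng.random((d, d)) < p                  # keep an edge with probability p
    theta = theta * np.triu(mask, 1)
    theta = theta + theta.T                        # symmetric, zero diagonal so far
    lam_min = np.linalg.eigvalsh(theta).min()
    theta += (1.0 - lam_min) * np.eye(d)           # smallest eigenvalue becomes 1
    return theta

rng = np.random.default_rng(0)
theta_star = sample_precision(d=10, p=0.5, rng=rng)
sigma_star = np.linalg.inv(theta_star)
X = rng.multivariate_normal(np.zeros(10), sigma_star, size=100)   # m samples
emp_cov = np.cov(X, rowvar=False)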
The SynTReN simulator (Van den Bulcke et al.) is a synthetic gene expression data generator specifically designed for analyzing structure learning algorithms. The topological characteristics of the synthetically generated networks closely resemble those of real transcriptional networks. The generator models different types of biological interactions and produces biologically plausible synthetic gene expression data, enabling the development of data-driven approaches to recover the underlying network. SynTReN simulator details for Section 5.5: for performance evaluation, a connected Erdos-Renyi graph was generated with edge probability p = 0.05. The precision matrix entries were sampled from Θ_ij ∼ U(0.1, 0.2) and the minimum eigenvalue was adjusted to 1 by adding an appropriate multiple of the identity matrix. The SynTReN simulator then generated samples from these graphs, incorporating biological noise, correlation noise, and other input noise, with all noise levels sampled uniformly from U(0.01, 0.1). Figure 6 shows the NMSE comparisons for a fixed dimension D = 25 and varying numbers of samples M. The number of training/validation graphs was set to 20/20 and the results are reported on 100 test graphs. In these experiments, only 1 batch of M samples was taken per graph, to better mimic the real-world setting. Unrolled model for ADMM: Algorithm 2 describes the updates of the unrolled model ADMMu. ρ_nn was a 4-layer neural network and Λ_nn a 2-layer neural network, both using 3 hidden units in each layer; the non-linearity for the hidden layers was tanh, while the final layer of both ρ_nn and Λ_nn used a sigmoid (σ). The learnable offset parameter of the initial Θ_0 was set to t = 1, the model was unrolled for L = 30 iterations, the learning rates were chosen in [0.01, 0.1] with a multi-step LR scheduler, and the optimizer was Adam. Figure 10 compares GLAD with ADMMu on convergence for synthetically generated data, under the same settings as Figure 4. As evident from the plots, GLAD consistently performs better than ADMMu, and we had similar observations in other sets of experiments as well. Hence, we chose the AM-based unrolled algorithm over ADMM's, as it works better empirically and has fewer parameters. Although we are not entirely certain, we hypothesize the reason for the above observations as follows. In the ADMM update equations (4 & 5), both the Lagrangian term and the penalty term intuitively work together as a 'function' updating the entries Θ_ij, Z_ij. Observe that U_k can be absorbed into Z_k and/or Θ_k, and we expect our neural networks to capture this relation; we thus expect GLAD to work at least as well as ADMMu. In our formulation of unrolled ADMMu (Algorithm 2), the update step of U is not controlled by neural networks (the number of parameters needed would be substantially larger), which might be the reason it does not perform as well as GLAD. Our empirical evaluations corroborate this logic: using just the penalty term, we can maintain all the desired properties and learn the problem-dependent 'functions' with a small neural network. We tried multiple unrolled parameterizations of the optimization techniques used for solving the graphical lasso problem, which worked to varying levels of success. We list a few here to help researchers further pursue this recent and novel approach of data-driven algorithm design. 1.
ADMM + ALISTA parameterization: the threshold update for Z_{k+1} can be replaced by an ALISTA network. Stage I of ALISTA determines W, which is trivial in our case since D = I, so we get W = I. Combining the ALISTA updates with AM's thus gives an interesting unrolled algorithm for our optimization problem. (All settings were the same as in the fixed-sparsity case described in Figure 4; we observed that the AM-based parameterization GLAD consistently outperforms the ADMM-based unrolled architecture ADMMu.) 2. G-ISTA parameterization: we parameterized the line-search hyperparameter c and replaced the next-step-size determination step of G-ISTA by a problem-dependent neural network. The main challenge with this parameterization is maintaining the PSD property of the intermediate matrices; learning an appropriate parameterization of the line-search hyperparameter that preserves the PSD condition remains an interesting aspect to investigate. 3. Mirror Descent Net: we obtain a similar set of update equations for the graphical lasso optimization, identify some learnable parameters, use neural networks to make them problem-dependent, and train end-to-end. 4. For all these methods, we also tried unrolling the neural network itself; in our experience the performance does not improve much, but the convergence becomes unstable. We use real data from the 'DREAM 5 Network Inference challenge'. This dataset contains 3 compendia obtained from microorganisms, some of which are pathogens of clinical relevance. Each compendium consists of hundreds of microarray experiments, which include a wide range of genetic, drug, and environmental perturbations. We test our method on recovering the true E. coli network from gene expression values recorded in actual microarray experiments. The E. coli dataset contains 4511 genes and 805 associated microarray experiments. The true underlying network has 2066 discovered edges, while 150214 pairs of nodes have no edge; there is no data about the remaining pairs. For our experiments, we consider only the discovered edges as ground truth, following the challenge's data settings. We remove genes of degree zero, leaving a subset of 1081 genes, and for our predictions we ignore edge direction and consider only retrieving the connections between genes. We train the GLAD model using the SynTReN simulator with settings similar to those described in Appendix C.9. Briefly, the GLAD model was trained on D = 50 node graphs sampled from Erdos-Renyi graphs with sparsity probability ∼ U(0.01, 0.1), SynTReN noise levels sampled from ∼ U(0.01, 0.1), and Θ_ij ∼ U(0.1, 0.2); the model was unrolled for 15 iterations. This experiment also evaluates GLAD's ability to generalize to a distribution different from training, as well as its ability to scale to more nodes. We report the AUC scores for the E. coli network in Table 4. GLAD improves over the other competing methods in terms of area under the ROC curve (AUC). We understand that it is challenging to model real datasets due to the presence of many unknown latent extrinsic factors, but we do observe an advantage in using data-driven parameterized algorithmic approaches. Table 4: AUC on the real E. coli network. BCD: 0.548, G-ISTA: 0.541, GLAD: 0.572. We have shown in our experiments that we can train GLAD on a smaller number of nodes and obtain reasonable results when recovering graph structures with considerably larger numbers of nodes (Appendix C.12).
Thus, in this section, we focus on scaling up the inference/test phase. With the current GPU implementation, we can handle around 10,000 nodes at inference time. For problem sizes beyond 100,000 nodes, we propose to use randomized algorithm techniques. Kindly note that scaling up GLAD is our ongoing work, and we present here just one of the directions we are exploring; the approach below gives a rough idea and may contain loose ends. Randomized algorithm techniques are explained elaborately elsewhere. Specifically, we will use the following key results: • P1 (Theorem 2.1): the length-squared sampling technique for computing low-rank approximations. • P2 (Theorem 2.5): for any large matrix A ∈ R^{m×n}, we can approximate A ≈ CUR, where C ∈ R^{m×s}, U ∈ R^{s×r}, R ∈ R^{r×n}. • P3 (Section 2.3): for any large matrix A ∈ R^{m×n}, we can get an approximate SVD by using the property E(R^T R) = A^T A, where R is a matrix obtained by length-squared sampling of the rows of A. The steps for doing approximate AM updates, i.e., of Eqs. (7)-(8), are then as follows: using property P3, we can approximate the matrix square root term via the right singular vectors V of R, and combine this approximation with the sketch-matrix approximation Y ≈ CUR to calculate the Θ update in Eq. (7). The Z update in Eq. (8) is just an entry-wise thresholding operation and can be done efficiently with a careful implementation. We are looking into both the experimental and theoretical aspects of this approach. We are also exploring an efficient distributed algorithm for GLAD and are investigating parallel MPI-based algorithms for this task (https://stanford.edu/~boyd/admm.html is a good reference point). We leverage the fact that the learned neural networks are very small, so they can be duplicated across all processors. This is also an interesting future research direction.
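A minimal sketch of the length-squared sampling primitive (P1/P3) mentioned above, with rows rescaled so that E[R^T R] = A^T A holds; the low-rank projection at the end illustrates how approximate right singular vectors would be obtained. Sizes and names are illustrative assumptions.

import numpy as np

def length_squared_rows(A, r, rng):
    # sample r rows with probability proportional to squared row norms, rescaled
    p = (A ** 2).sum(1) / (A ** 2).sum()
    idx = rng.choice(len(A), size=r, replace=True, p=p)
    return A[idx] / np.sqrt(r * p[idx, None])

rng = np.random.default_rng(0)
A = rng.standard_normal((2000, 2000))
R = length_squared_rows(A, r=200, rng=rng)
# approximate right singular vectors of A from the small matrix R
_, _, Vt = np.linalg.svd(R, full_matrices=False)
approx = A @ Vt.T @ Vt          # low-rank projection of A onto those directions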
A data-driven learning algorithm based on unrolling the Alternating Minimization optimization for sparse graph recovery.
449
scitldr
Counterfactual regret minimization (CFR) is a fundamental and effective technique for solving imperfect information games (IIG). However, the original CFR algorithm only works for discrete state and action spaces, and the resulting strategy is maintained as a tabular representation. Such a tabular representation limits the method from being directly applied to large games. In this paper, we propose a double neural representation for IIGs, where one neural network represents the cumulative regret and the other represents the average strategy. Such neural representations allow us to avoid manual game abstraction and carry out end-to-end optimization. To make the learning efficient, we also developed several novel techniques, including a robust sampling method and a mini-batch Monte Carlo Counterfactual Regret Minimization (MCCFR) method, which may be of independent interest. Empirically, on games tractable to tabular approaches, neural strategies trained with our algorithm converge comparably to their tabular counterparts and significantly outperform those based on deep reinforcement learning. On extremely large games with billions of decision nodes, our approach achieved strong performance while using hundreds of times less memory than the tabular CFR. In head-to-head matches of heads-up no-limit Texas Hold'em, our neural agent beat the strong agent ABS-CFR by $9.8\pm4.1$ chips per game, a successful application of neural CFR in large games. While significant advances have been made in addressing large perfect information games, such as Go, solving imperfect information games remains a challenging task. In an IIG, a player has only partial knowledge about her opponents before making a decision, so she has to reason under uncertainty about her opponents' information while exploiting her opponents' uncertainty about herself. Thus, IIGs provide more realistic modeling than perfect information games for many real-world applications, such as trading, traffic routing, and politics. The Nash equilibrium is a typical solution concept for a two-player perfect-recall IIG. One of the most effective approaches is CFR, which minimizes the overall counterfactual regret so that the average strategies converge to a Nash equilibrium. However, the original CFR only works for discrete state and action spaces, and the resulting strategy is maintained as a tabular representation; such a representation limits the method from being directly applied to large games. To tackle this challenge, one can simplify the game by grouping similar states together and solve the simplified (abstracted) game approximately via tabular CFR. Constructing an effective abstraction, however, demands rich domain knowledge, and its solution may be a coarse approximation of the true equilibrium. Function approximation can be used to replace the tabular representation: Regression CFR (RCFR) combines regression-tree function approximation with CFR based on handcrafted features, but since RCFR uses full traversals of the game tree, it is still impractical for large games. The seminal DeepStack approach uses fully connected neural networks to represent players' counterfactual values; tabular CFR, however, is still used in its subgame solving. Deep reinforcement learning has also been used to solve regret minimization problems in single-agent settings, which is different from two-player perfect-recall IIGs.
To learn approximate Nash equilibria for IIGs in an end-to-end manner, eXtensive-form Fictitious Play (XFP) and Neural Fictitious Self-Play (NFSP) have been proposed based on deep reinforcement learning. In an NFSP model, the neural strategies are updated by selecting best responses to the opponents' average strategies. These approaches are advantageous in that they do not rely on abstracting the game, so their strategies can improve continuously with more optimization iterations; however, fictitious play empirically converges much more slowly than CFR-based approaches. Actor-critic policy optimization methods have also been used to minimize regret, achieving performance comparable to NFSP. It thus remains an open question whether a purely neural, end-to-end approach can achieve performance comparable to the tabular CFR approach. In this paper, we answer this question by designing a double neural counterfactual regret minimization (DNCFR) algorithm. To make a neural representation, we model the imperfect information game by a novel recurrent neural network with attention. Furthermore, to improve the convergence of the neural algorithm, we develop a new sampling technique that converges much faster than outcome sampling while being more memory-efficient than external sampling. In the experiments, we conduct a set of ablation studies related to each novelty. The experiments show that DNCFR converges to results comparable to those produced by its tabular counterpart while performing much better than NFSP. In addition, we test DNCFR on an extremely large game, heads-up no-limit Texas Hold'em (HUNL); DNCFR, with only a small number of parameters, achieves a strong neural strategy and beats ABS-CFR. Notation. h ∈ H denotes a possible history (or state), which consists of each player's hidden variables and the actions taken by all players including chance. The empty sequence ∅ is a member of H, and h_j ⊑ h denotes that h_j is a prefix of h. Z ⊆ H denotes the terminal histories, and any member z ∈ Z is not a prefix of any other sequence. A(h) = {a : ha ∈ H} is the set of available actions after a non-terminal history h ∈ H \ Z. A player function P assigns a member of N ∪ {c} to each non-terminal history, where c denotes chance (we set c = −1); P(h) is the player who takes an action after history h. For each player i, imperfect information is denoted by information sets (infosets) I_i: all states h ∈ I_i are indistinguishable to i, and I_i denotes the set of infosets of player i. The utility function u_i(z) defines the payoff of player i at terminal state z (see Appendix B.1 for more details). Algorithm 1 (CFR) accumulates the strategy as S^t(a|I_i) = S^{t−1}(a|I_i) + π_i^{σ^t}(I_i) σ^t(a|I_i) and outputs the average strategy σ̄^T(a|I_i) = S^T(a|I_i) / Σ_{a'} S^T(a'|I_i). A strategy profile σ = {σ_i | σ_i ∈ Σ_i, i ∈ N} is a collection of strategies for all players, where Σ_i is the set of all possible strategies for player i, and σ_{−i} refers to the strategies of all players other than i. For player i ∈ N, the strategy σ_i(I_i) is a function that assigns an action distribution over A(I_i) to infoset I_i, and σ_i(a|h) denotes the probability of action a taken by player i at state h. In an IIG, ∀h_1, h_2 ∈ I_i, we have σ_i(I_i) = σ_i(h_1) = σ_i(h_2). For iterative methods such as CFR, σ^t refers to the strategy profile at the t-th iteration. The state reach probability of history h is denoted by π^σ(h) if players take actions according to σ (the reach probability is also called the range in DeepStack). Similarly, π_i^σ(h) refers to player i's contribution to the reach probability, while π_{−i}^σ(h) refers to that of all players other than i.
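Before continuing with reach probabilities, here is a small sketch of the regret-matching rule that turns cumulative regrets into the current strategy in Algorithm 1 (written out as Eq. (4) below): play proportionally to positive regret, or uniformly if no action has positive regret.

import numpy as np

def regret_matching(cum_regret):
    # current strategy from cumulative regrets at one infoset
    pos = np.maximum(cum_regret, 0.0)
    total = pos.sum()
    if total > 0:
        return pos / total
    return np.full_like(cum_regret, 1.0 / len(cum_regret))

# e.g. cumulative regrets over three actions
print(regret_matching(np.array([2.0, -1.0, 1.0])))   # -> [2/3, 0, 1/3]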
For the empty sequence, $\pi^\sigma(\emptyset)=1$. One can also show that the opponent's reach probability is proportional to the posterior probability of the opponent's hidden variable, i.e., $p(x_{-i}^v \mid I_i)\propto \pi_{-i}^\sigma(h)$, where $x_{-i}^v$ and $I_i$ together indicate a particular $h$ (proof in Appendix D.1). Finally, the infoset reach probability of $I_i$ is defined as $\pi^\sigma(I_i)=\sum_{h\in I_i}\pi^\sigma(h)$. More details can be found in Appendix B.3. (Figure 2: (a) tabular CFR stores cumulative regrets and strategies in tabular memory, which limits it to small games; (b) DNCFR approximates these two values with two deep neural networks and needs less memory than tabular methods because of its generalization.) • Counterfactual Regret Minimization. CFR is an iterative method for finding a Nash equilibrium of zero-sum perfect-recall IIGs (Algorithm 1 and Figure 2(a)). Given a strategy profile $\sigma$, the counterfactual value (CFV) at infoset $I_i$ is $v_i^\sigma(I_i)=\sum_{h\in I_i}\pi_{-i}^\sigma(h)\sum_{z\in Z,\,h\sqsubseteq z}\pi^\sigma(h,z)\,u_i(z)$. The action CFV of taking action $a$ is $v_i^\sigma(a|I_i)$, and its regret is $r_i^\sigma(a|I_i)=v_i^\sigma(a|I_i)-v_i^\sigma(I_i)$. The cumulative regret of action $a$ after $T$ iterations is then $R^T(a|I_i)=R^{T-1}(a|I_i)+r_i^{\sigma^T}(a|I_i)$ with $R^0(a|I_i)=0$. Writing $R^{t,+}(a|I_i)=\max(R^t(a|I_i),0)$, the current strategy (or behavior strategy) at iteration $t+1$ is updated by regret matching: $\sigma_i^{t+1}(a|I_i)=R^{t,+}(a|I_i)/\sum_{a'\in A(I_i)}R^{t,+}(a'|I_i)$ when the denominator is positive, and uniform otherwise. Define $s_i^t(a|I_i)=\pi_i^{\sigma^t}(I_i)\,\sigma_i^t(a|I_i)$ as the additional strategy in iteration $t$; the cumulative strategy is then $S^t(a|I_i)=S^{t-1}(a|I_i)+s_i^t(a|I_i)$, where $S^0(a|I_i)=0$. The average strategy $\bar{\sigma}_i^t$ after $t$ iterations is the normalization of $S^t$, which approaches a Nash equilibrium after enough iterations. • Monte Carlo CFR. Monte Carlo CFR (MCCFR) computes an unbiased estimate of the counterfactual value by sampling subsets of infosets in each iteration. Although MCCFR still needs two tabular storages for the cumulative regret and strategy as CFR does, it needs much less working memory than standard CFR, because it only maintains values for the visited nodes. Define $Q=\{Q_1,Q_2,\dots,Q_m\}$, where $Q_j\subseteq Z$ is a set (block) of sampled terminal histories in each iteration, such that the blocks $Q_j$ span the set $Z$. Define $q_{Q_j}$ as the probability of considering block $Q_j$, where $\sum_{j=1}^{m} q_{Q_j}=1$, and $q(z)=\sum_{j:z\in Q_j} q_{Q_j}$ as the probability of considering a particular terminal history $z$. For infoset $I_i$, the sampled counterfactual value is estimated by $\tilde{v}_i^\sigma(I_i|Q_j)=\sum_{h\in I_i}\sum_{z\in Q_j,\,h\sqsubseteq z}\frac{1}{q(z)}\,\pi_{-i}^\sigma(h)\,\pi^\sigma(h,z)\,u_i(z)$. The sampled counterfactual value in MCCFR is an unbiased estimate of the actual counterfactual value in CFR. Define $\sigma^{rs}$ as the sampled strategy profile, where $\sigma_i^{rs}$ is the sampled strategy of player $i$ and $\sigma_{-i}^{rs}$ are those of the other players. The regret of a sampled action $a\in A(I_i)$ is $\tilde{r}_i^\sigma(a|I_i)=\tilde{v}_i^\sigma(a|I_i)-\tilde{v}_i^\sigma(I_i)$, computed with a utility reweighted by the sampling probabilities, and the sampled estimate of the cumulative regret of action $a$ after $t$ iterations is $\tilde{R}^t(a|I_i)=\tilde{R}^{t-1}(a|I_i)+\tilde{r}_i^{\sigma^t}(a|I_i)$. 3 DOUBLE NEURAL COUNTERFACTUAL REGRET MINIMIZATION. The double neural CFR algorithm employs two neural networks, one for the cumulative regret $R$ and the other for the average strategy $S$, as shown in Figure 2(b). The iterative updates of the CFR algorithm maintain the regret sum $R^t(a|I_i)$ and the average strategy $\bar{\sigma}_i^t(a|I_i)$; our two neural networks are designed accordingly. • RegretSumNetwork (RSN): by regret matching, the current strategy $\sigma^{t+1}(a|I_i)$ is computed from the cumulative regret $R^t(a|I_i)$. We only need to track the numerator, since the normalization in the denominator can be computed easily when the strategy is used. Given infoset $I_i$ and action $a$, we design a neural network $R(a,I_i|\theta_R^t)$ with parameters $\theta_R^t$ to track the cumulative regret; a sketch of the tabular updates this network replaces is given below. (Figure 3: (a) the recurrent neural network architecture with attention for extensive-form games; both RSN and ASN are based on this architecture but with different parameters ($\theta_R$ and $\theta_S$ respectively). (b) an overview of the proposed robust sampling and mini-batch techniques; the trajectories marked by red arrows are the samples produced by robust sampling (here $k=2$).)
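To make the reconstructed updates above concrete, the following is a minimal Python sketch of regret matching and the cumulative-strategy bookkeeping at a single infoset. It is an illustration only: the action counterfactual values are assumed to be supplied by an external tree traversal, and all function and variable names are ours rather than the paper's.

import numpy as np

def regret_matching(cum_regret):
    # Current strategy sigma^{t+1}(.|I): normalize the positive part of the
    # cumulative regrets, falling back to uniform when all regrets are <= 0.
    positive = np.maximum(cum_regret, 0.0)
    total = positive.sum()
    if total > 0:
        return positive / total
    return np.full(len(cum_regret), 1.0 / len(cum_regret))

def cfr_update(cum_regret, cum_strategy, action_cfv, reach_i):
    # One tabular CFR update at a single infoset I.
    # action_cfv[a] = v^sigma(a|I); reach_i = pi_i^sigma(I).
    sigma = regret_matching(cum_regret)
    node_cfv = float(np.dot(sigma, action_cfv))   # v^sigma(I)
    cum_regret += action_cfv - node_cfv           # R^t = R^{t-1} + r^{sigma^t}
    cum_strategy += reach_i * sigma               # S^t = S^{t-1} + pi_i * sigma^t
    return sigma

def average_strategy(cum_strategy):
    # Normalizing S^T yields the average strategy, which approaches a Nash
    # equilibrium as T grows (assumes the infoset was visited at least once).
    return cum_strategy / cum_strategy.sum()

In DNCFR, the tabular arrays cum_regret and cum_strategy are replaced by the outputs of RSN and ASN.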
• AvgStrategyNetwork (ASN): the approximate Nash equilibrium is the weighted average of all previous behavior strategies up to iteration $t$, computed by normalizing the cumulative strategy $S^t(a|I_i)$. Similar to the cumulative regret, we employ a second deep neural network $S(a,I_i|\theta_S^t)$ with parameters $\theta_S^t$ to track the cumulative strategy. In order to define our $R$ and $S$ networks, we need to represent the infosets of extensive-form games. In such games, players take actions in an alternating fashion, and each player makes decisions according to the observed history. Because the action sequences vary in length, we model them with recurrent neural networks, where each action in the sequence corresponds to a cell in the RNN. This architecture is different from the one in DeepStack, which used a fully connected deep neural network to estimate counterfactual values. Figure 3(a) illustrates the proposed deep sequential neural network representation of infosets. Besides the vanilla RNN, there are several more expressive variants, such as the GRU and LSTM; in our experiments, we compare these different neural architectures as well as a fully connected network representation. Furthermore, since different positions in the sequence may contribute differently to the decision making, we add an attention mechanism to the RNN architecture to enhance the representation. For example, a player may need to take a more aggressive strategy after beneficial public cards are revealed in a poker game, so the information after the public cards are revealed may be more important. In practice, we find that the attention mechanism helps DNCFR obtain a better convergence rate. See Appendix E for more details on the architectures. The parameters of the two neural networks are optimized via stochastic gradient descent in a stage-wise fashion, interleaved with CFR iterations. We maintain a memory $M_R^t=\{(I_i,\ \tilde{r}_i^{\sigma^t}(\cdot|I_i))\ \text{for all sampled}\ I_i\}$ to store the sampled infosets and the corresponding regrets for all players in the $t$-th iteration, where $Q_j$ is the sampled block (shown in Figure 2(b)). These samples are produced by our proposed robust sampling and mini-batch MCCFR methods, which are discussed in Section 4. We optimize the cumulative regret network $R(a,I_i|\theta_R^{t+1})$ using the squared loss $L(R)=\sum_{(I_i,\tilde{r})\in M_R^t}\sum_{a\in A(I_i)}\big(R(a,I_i|\theta_R^{t+1})-\big(R(a,I_i|\theta_R^{t})+\tilde{r}_i^{\sigma^t}(a|I_i)\big)\big)^2$, where $\theta_R^t$ refers to the old parameters and $\theta_R^{t+1}$ to the new parameters we optimize; a sketch of this regression step follows below. Note that this loss is minimized over the samples of all players rather than those of a particular player $i$. In standard MCCFR, if an infoset is not sampled, the corresponding regret is set to 0, which yields an unbiased estimate according to Lemma 1; the design of the loss above follows the same intuition. Variance reduction techniques can be used to further reduce the variance. Sampling unobserved infosets? Theoretically, in order to optimize $L(R)$, we would need to collect both observed and unobserved infosets. This would require a suitable sampling method to select additional training samples from the large number of unobserved infosets, costing substantial memory and computation. Clearly, this is intractable on large games such as HUNL.
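The following PyTorch-style sketch shows the RSN regression step on observed samples only. It assumes rsn_old and rsn_new are any torch.nn.Module mapping a batch of encoded infosets to per-action values; the module, tensor, and mask names are illustrative assumptions, not the paper's API.

import torch

def rsn_loss(rsn_new, rsn_old, infosets, sampled_regret, action_mask):
    # Fit R(a, I | theta^{t+1}) to R(a, I | theta^t) + sampled regret,
    # using only the infosets visited in iteration t.
    with torch.no_grad():
        target = rsn_old(infosets) + sampled_regret
    pred = rsn_new(infosets)
    # Mask out illegal actions so they do not contribute to the squared loss.
    return (((pred - target) * action_mask) ** 2).sum(dim=1).mean()

Initializing theta^{t+1} from theta^t rather than from scratch gives the online-learning flavor discussed next and mitigates forgetting of unobserved infosets.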
In practice, we find that minimizing the loss based only on the observed samples still yields a convergent strategy. Learning without forgetting? Another concern is that only a small proportion of infosets is sampled in each mini-batch iteration, which may result in the neural networks forgetting the values of unobserved infosets. To address this challenge, we use the neural network parameters from the previous iteration as the initialization, which gives the updates an online learning/adaptation flavor. Experimentally, on large games, thanks to the generalization ability of the neural networks, our double neural approach can still converge to an approximate Nash equilibrium even when only a small proportion of infosets is used to update the networks. See Appendix F for more implementation details. Scaling regret for stable training? According to the standard regret bound (Theorem 6 of the cited work), the cumulative regret can grow on the order of $\sqrt{t}$; we therefore divide it by $\sqrt{t}$ and renormalize to keep training stable. A second memory $M_S^t=\{(I_i,\ s_i^t(\cdot|I_i))\ \text{for all sampled}\ I_i\}$ stores the sampled infosets and the weighted additional behavior strategies $s_i^t(a|I_i)$ of the $t$-th iteration. The loss function $L(S)$ of ASN is defined analogously: $L(S)=\sum_{(I_i,s)\in M_S^t}\sum_{a\in A(I_i)}\big(S(a,I_i|\theta_S^{t+1})-\big(S(a,I_i|\theta_S^{t})+s_i^t(a|I_i)\big)\big)^2$, where $\theta_S^{t+1}$ are the new parameters we optimize. According to Algorithm 1, the cumulative regret is used to generate the behavior strategy of the next iteration, while the cumulative strategy is the summation of the weighted behavior strategies. In theory, if we stored all the $M_S^t$ from every iteration, we could compute the final average strategy directly and would not need to optimize the average strategy network (ASN) $S(\cdot|\theta_S^t)$ in each iteration. However, saving all such values requires a huge memory on large games. A compromise is to accumulate the values of multiple iterations into one memory; when this memory is large enough, the incremental values of those iterations can be learned together by optimizing $L(S)$. Minimum squared loss versus maximum likelihood? The average strategy is a distribution over actions, which suggests that maximum likelihood could optimize it directly. However, the maximum likelihood method must be based on all samples up to the $t$-th iteration rather than only the additional samples, which is very memory-expensive. To address this limitation, one can use uniform reservoir sampling to obtain an unbiased estimate of each strategy. In practice, we find that this maximum likelihood method has high variance and fails to approach a Nash equilibrium with low exploitability. Experimentally, optimizing the squared loss gives a fast-converging average strategy profile and uses much less memory than the maximum likelihood method. When solving large IIGs, prior methods such as Libratus and DeepStack rely on an abstracted HUNL with a manageable number of infosets. The abstraction techniques are usually based on domain knowledge, such as clustering cards with similar hand strength into the same buckets or only taking discrete actions (e.g., fold, call, one-pot raise, and all-in). DNCFR is not limited to prespecified abstracted cards or actions; for example, we can use a continuous variable to represent the bet size rather than encoding it by discrete actions. In practice, DNCFR can clone an existing tabular or neural representation and then continually improve the strategy from that initialization. More specifically, for infoset $I_i$ and action $a$, let $R_i(a|I_i)$ be the tabular cumulative regret. We can use a behavior cloning technique to learn it by minimizing $\sum_{I_i}\sum_{a\in A(I_i)}\big(R(a,I_i|\theta_R)-R_i(a|I_i)\big)^2$; the cumulative strategy can be cloned in the same way, as sketched below.
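A minimal sketch of the behavior-cloning warm start, assuming the tabular cumulative regrets (or cumulative strategies) have been exported as a tensor aligned with the encoded infosets; all names are illustrative.

import torch

def clone_tabular(network, encoded_infosets, tabular_values, steps=1000, lr=1e-3):
    # Regress the network onto tabular values so DNCFR can warm start
    # from (and then continually improve on) an existing tabular profile.
    opt = torch.optim.Adam(network.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((network(encoded_infosets) - tabular_values) ** 2).mean()
        loss.backward()
        opt.step()
    return network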
Based on the learned parameters, we can warm start DNCFR and continually improve beyond the tabular strategy profile. Algorithm 2 summarizes the proposed double neural counterfactual regret minimization approach. In the first iteration, if the system warm starts from a tabular method, the techniques in Section 3.4 are used to clone the cumulative regrets and strategies; without warm start initialization, we simply initialize the parameters of RSN and ASN randomly. The sampling methods then return sampled infosets and values, which are saved in the memories $M_R^t$ and $M_S^t$ respectively. These samples are used by the NeuralAgent procedure of Algorithm 3 to optimize RSN and ASN. Further details on the sampling methods are discussed in the next section; due to space limitations, the NeuralAgent fitting algorithm is presented in Appendix F. In this section, we propose two techniques to improve the efficiency of the double neural method; they can also be used separately in other CFR variants. First, we introduce a robust sampling method (RS), a generalization of both external sampling and outcome sampling. Robust sampling samples $k$ actions in one player's infosets and one action in the other players' infosets. Specifically, the sampled profile is $\sigma^{rs(k)}=(\sigma_i^{rs(k)},\sigma_{-i})$, where player $i$ randomly selects $k$ actions according to the sampled strategy $\sigma_i^{rs(k)}(I_i)$ at $I_i$ and the other players select one action according to $\sigma_{-i}$. We design an efficient sampling policy for robust sampling as follows and discuss the relationship among robust, external, and outcome sampling in Appendix D.2. If $k=\max_{I_i\in\mathcal{I}}|A(I_i)|$ and each action is sampled with probability $\sigma_i^{rs(k)}(a|I_i)=1$, then robust sampling is identical to external sampling. If $k=1$, $\sigma_i^{rs(k)}=\sigma_i$, and $q(z)\geq\delta>0$ ($\delta$ a small positive number), then robust sampling is identical to outcome sampling. If instead player $i$ samples its $k$ actions uniformly, the weighted utility $u^{rs}(z)$ is a constant in each iteration. In many settings with $k=1$, we find that this robust sampling scheme converges more efficiently than outcome sampling. Moreover, robust sampling achieves convergence comparable to external sampling while using less working memory when a suitable $k$ is specified: our scheme samples only $k$ rather than all actions at player $i$'s infosets, so the sampled game tree is smaller than the one produced by external sampling. In the experiments, we compare these sampling policies in our ablation studies. Traditional MCCFR samples only one block per iteration and provides an unbiased estimate of the original CFV. In this paper, we present a mini-batch Monte Carlo technique that randomly samples $b$ blocks in one iteration. Let $Q_j$ denote a block of terminals sampled according to the scheme in Section 4.1; the mini-batch CFV with mini-batch size $b$ is $\tilde{v}_i^\sigma(I_i|b)=\frac{1}{b}\sum_{j=1}^{b}\tilde{v}_i^\sigma(I_i|Q_j)$. We prove that $\tilde{v}_i^\sigma(I_i|b)$ is an unbiased estimate of the CFV in Appendix D.3. Following ideas similar to CFR and CFR+, replacing regret matching by regret matching plus yields a mini-batch MCCFR+ algorithm. Empirically, our mini-batch technique can sample the $b$ blocks in parallel and converges faster than the original MCCFR on multi-core machines; a sketch of both techniques follows.
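The following sketch illustrates the two techniques, assuming a traversal function that returns sampled counterfactual values for one block. The uniform choice for player i's k actions is one convenient sampled profile, and all names are illustrative.

import random

def robust_sample(actions, sigma_probs, k, is_traverser):
    # Robust sampling: the traverser samples min(k, |A(I)|) distinct actions
    # (uniformly here); opponents and chance sample one action from sigma_{-i}.
    if is_traverser:
        return random.sample(actions, min(k, len(actions)))
    return random.choices(actions, weights=sigma_probs, k=1)

def mini_batch_cfv(traverse_block, root, b):
    # Average the sampled CFVs of b independently sampled blocks; the mean
    # remains an unbiased CFV estimate, and the b traversals can run in parallel.
    return sum(traverse_block(root) for _ in range(b)) / b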
To understand the contributions of the various components of DNCFR, we first conduct a set of ablation studies. We then compare DNCFR with tabular CFR and with NFSP, a deep reinforcement learning method that is the prior leading function approximation approach for IIGs. Finally, we conduct experiments on heads-up no-limit Texas Hold'em (HUNL) to show the scalability of DNCFR. The games and key statistics used in our experiments are listed in Table 1. We perform the ablation studies on Leduc Hold'em, a poker game commonly used in the research community. In our experiments, we test DNCFR on three Leduc Hold'em instances with stack sizes 5, 10, and 15, denoted by Leduc(5), Leduc(10), and Leduc(15) respectively. To test DNCFR's scalability, we develop a neural agent to solve HUNL, which contains about $10^{161}$ infosets and has served for decades as a challenging benchmark and milestone in solving IIGs. The rules of these games are given in Appendix A. The experiments are evaluated by exploitability, a standard win-rate measure used in many key articles. The unit of exploitability in this paper is chips per game: it denotes how many chips one player wins on average per hand of poker. A method with lower exploitability is better, and the exploitability of a Nash equilibrium is zero. In extremely large games, where exploitability is intractable to compute, we use head-to-head performance to compare agents. For reproducibility, we present the implementation details of the neural agent in Algorithms 2, 3, and 4; Appendix F.4 provides the parameters used in our experiments. Solving HUNL is a challenging task: although there are published papers, open-source code for such solvers is not available. Developing a HUNL solver not only involves tedious work, but is also difficult to verify, because of the game's well-known high variance and extremely large size. In Appendix G, we provide several approaches to validate the correctness of our implementation for HUNL. We first conduct a set of ablation studies related to mini-batch training, robust sampling, and the choice of neural architecture on Leduc Hold'em. • Is mini-batch sampling helpful? We present the convergence curves of the proposed robust sampling method with $k=\max(|A(I_i)|)$ under different mini-batch sizes in Figure 8(a) in Appendix C. The experimental results show that larger batch sizes generally lead to better strategy profiles. • Is robust sampling helpful? Figure 4(a) presents convergence curves for outcome sampling, external sampling ($k=\max(|A(I_i)|)$), and the proposed robust sampling method under different numbers of sampled actions. Outcome sampling cannot converge to a low exploitability (it remains above 0.1 after 1000 iterations). The proposed robust sampling algorithm with $k=1$, which samples only one trajectory like outcome sampling, achieves a better strategy profile after the same number of iterations. With increasing $k$, the robust sampling method achieves an even better convergence rate. Experiments show that $k=3$ and $k=5$ follow a similar trend to $k=\max(|A(I_i)|)$, demonstrating that robust sampling achieves similar performance while requiring less memory than external sampling. We choose $k=3$ for the later experiments on Leduc Hold'em. • Is attention in the neural architecture helpful? Figure 4(b) shows that all the neural architectures achieve similar results, while LSTM with attention performs slightly better at large numbers of iterations.
We select LSTM plus attention as the default architecture in the later experiments. • Do the neural networks just memorize, or do they generalize? One indication that the networks generalize is that they use far fewer parameters than their tabular counterparts. We experimented with LSTM-plus-attention networks with embedding sizes 8 and 16, containing 1048 and 2608 parameters respectively. Both counts are far smaller than the tabular memory (more than 11,083 entries here), and both lead to a converging strategy profile, as shown in Figure 4(c). We select embedding size 16 as the default. In the later experiments, we show similar results on HUNL. • Do the neural networks generalize to unseen infosets? To investigate the generalization ability, we run DNCFR with small mini-batch sizes ($b=50$, 100, 500), where only 3.08%, 5.59%, and 13.06% of infosets are observed in each iteration. In all these settings, DNCFR still converges and reaches exploitability below 0.1 within only 1000 iterations, as shown in Figure 4(d). In the later experiments, we set $b=100$ as the default mini-batch size. We learn new parameters from the old parameters and a subset of observed samples; all infosets share the same parameters, so the neural network can estimate values for unseen infosets. Note that the number of parameters is orders of magnitude smaller than the number of infosets in many settings, which indicates the generalization of our method. Furthermore, Figure 4(d) shows that DNCFR is slightly better than tabular MCCFR; we attribute this to generalization to unseen infosets. • What are the individual effects of RSN and ASN? Figure 5(a) presents an ablation study of the RSN and ASN networks separately. Specifically, the method labeled RSN employs only RSN to learn the cumulative regret, with the cumulative strategy stored in tabular memory; the method labeled ASN likewise employs only ASN to learn the cumulative strategy. Both single-network methods perform only slightly better than the full DNCFR. • How well does continual improvement work? As shown in Figure 5(b), warm starting from either full-width or sampling-based CFR leads to continual improvement. Specifically, the first 10 iterations are learned by tabular CFR and RS-MCCFR+; after the behavior cloning of Section 3.4, the remaining iterations are continually improved by DNCFR. • How well does DNCFR perform on larger games? We test DNCFR on the larger Leduc(10) and Leduc(15), which contain up to millions of infosets. Even though only a small proportion of nodes is sampled in each iteration, Figure 5(d) shows that DNCFR still converges on these large games. How does DNCFR compare to its tabular counterpart, XFP, and NFSP? NFSP is the prior leading function approximation method for solving IIGs, based on reinforcement learning and fictitious self-play. In the experiment, NFSP requires two memories storing $2\times10^5$ state-action-pair samples for reinforcement learning and $2\times10^6$ samples for supervised learning; these memory sizes exceed the number of infosets. Figure 5(c) shows that NFSP obtains a 0.06-Nash equilibrium after touching $10^9$ infosets. XFP reaches the same exploitability after touching about $10^7$ nodes; however, XFP is the precursor of NFSP and is updated by tabular full-width fictitious play. Our DNCFR achieves the same performance after touching no more than $10^6$ nodes, far fewer than both NFSP and XFP.
This experiment shows that DNCFR converges significantly better than its reinforcement learning counterpart. Space and time trade-off. In this experiment, we investigate the time and space DNCFR needs to achieve a given exploitability relative to the tabular CFR algorithm; Figure 6 compares their runtime and memory. The number of infosets is much larger than the number of parameters used in DNCFR: for example, on Leduc(15), tabular CFR needs 128 times more memory than DNCFR. In the figure, the horizontal axis is the ratio between the runtimes of DNCFR and CFR, and the vertical axis is the ratio between the sampled (observed) infosets of DNCFR and those of full-width tabular CFR. Note that the larger the sampling ratio, the more memory is needed to store the sampled values. Clearly, there is a trade-off between relative runtime and relative memory in DNCFR: the longer the relative runtime, the less relative memory DNCFR needs. One would expect a useful method to trade space for time "fairly": a onefold increase in relative runtime should yield a onefold decrease in relative memory (the dashed line in Figure 6, slope -1). Interestingly, DNCFR achieves a much better trade-off: a onefold increase in relative runtime yields roughly a fivefold decrease in relative memory consumption (red line, slope -5). We believe this is due to the generalization ability of the learned neural networks. To present the time-space trade-off over a range of exploitabilities, we fix the target exploitability at 1.0, 0.5, 0.1, 0.05, 0.01, and 0.005 and run both neural and tabular CFR on Leduc Hold'em; Figure 6 shows that DNCFR achieves a much better time-space trade-off throughout. We believe research on neural CFR is important future work, and running time is not the key limitation of DNCFR. Recent works provide strong variance reduction techniques for MCCFR and suggest a promising direction; in the future, we will combine DNCFR with the latest acceleration techniques and use multi-process or distributed computation to make it more efficient. To test the scalability of DNCFR on an extremely large game, we develop a neural agent to solve HUNL. Directly solving HUNL is challenging even with abstraction techniques. For example, ABS-CFR uses k-means to cluster similar cards into thousands of clusters; although this is a coarse abstraction of the original HUNL, the resulting agent still contains about $2\times10^{10}$ infosets and needs 80GB of memory to store its strategies. The working memory for training ABS-CFR is even larger (more than about 200GB), because it must also store cumulative regrets and other essential variables, such as the abstraction mapping. To make solving HUNL via deep learning tractable, we assemble ideas from both DeepStack and Libratus. First, we train flop and turn networks as in DeepStack and use these networks to predict counterfactual values given the two players' ranges and the pot size; specifically, the flop network estimates values after the first three public cards are dealt, and the turn network estimates values after the fourth public card is dealt. After that, we train blueprint strategies as in Libratus; in contrast, the blueprint strategies in our setting are learned by DNCFR. Because the value networks estimate counterfactual values, there is no need to reach terminal nodes at the river.
To demonstrate the convergence of DNCFR, we first test it on a subgame of HUNL. This game has a limited number of actions, with four actions in each infoset, and ends at the terminals where the first three public cards are dealt. It contains more than $2\times10^8$ infosets and $3\times10^{11}$ states, and its exploitability can be computed within a limited time; we believe this game is suitable for evaluating the scalability and generalization of DNCFR. Figure 7(a) shows the convergence of DNCFR for different embedding sizes: emb = 8, 16, 32, 64, 128. The smallest neural network contains only 608 parameters, while the largest contains 71,168 parameters. As expected, a larger neural network typically achieves better performance, because more parameters help the network represent more complicated patterns and structures. Performance also improves with more gradient descent updates, since the network reaches a smaller loss as the number of updates increases. Finally, we measure the head-to-head performance of our neural agent against its tabular version and against ABS-CFR on HUNL. ABS-CFR is a strong HUNL agent, the advanced version of the third-place agent in ACPC 2018. Although ABS-CFR uses both card and action abstraction techniques, it still needs 80GB of memory to store its strategies; more details about ABS-CFR are provided in Appendix G.1. Although abstraction pathologies are well known in extensive-form games, a finer-grained abstraction typically leads to a better strategy in many settings. Following this idea, we use DNCFR to learn blueprint strategies on a variant of HUNL that contains eight actions in each infoset and about $8\times10^{10}$ infosets. Such a large game size makes it intractable to perform subgame solving in real time, so for the later rounds we use continual resolving techniques to compute strategies in real time. The action sizes in the look-ahead tree are similar to Table S3 of the cited work. The tabular agent is identical to our neural agent except that it uses tabular CFR to learn the blueprint strategies. With variance reduction techniques applied, Figure 7(c) shows that our neural agent beats ABS-CFR by $9.8\pm4.1$ chips per game and obtains performance similar to its tabular counterpart ($0.7\pm2.2$ chips per game). In contrast, our neural agent only needs to store 1,070,592 parameters, far less memory than both the tabular agent and ABS-CFR require.
Solving IIGs via function approximation is an important and challenging problem. Neural Fictitious Self-Play (NFSP), a function approximation method based on deep reinforcement learning, is the prior leading method for solving IIGs; however, fictitious play empirically converges more slowly than CFR-based approaches in many settings. Another line of work proposes a framework to directly optimize the final policy against worst-case opponents, but considers only small games. Regression CFR (RCFR) is a function approximation method based on CFR, but it must traverse the full game tree, which is intractable in large games; moreover, RCFR uses hand-crafted features and regression trees to estimate cumulative regret rather than learning features from data, and deep learning empirically performs better than regression trees in many areas, such as the Transformer and BERT in natural language modeling. In the past year, the concurrent works deep CFR (DCFR) and single deep CFR (SDCFR) have been proposed to address this problem via deep learning. DCFR, SDCFR, RCFR, and our DNCFR are all based on the framework of counterfactual regret minimization; however, they differ in several important aspects, listed as follows. We represent the extensive-form game by a recurrent neural network, and the proposed LSTM with attention performs better than a fully connected network (see Section 3.2). DNCFR updates the cumulative regret based only on the samples additionally collected in the current iteration rather than on samples from a big reservoir (see Section 3.3.1). It is important to use squared loss for the average strategies rather than log loss, because log loss requires the reservoir samples up to the $T$-th iteration, which is very memory-expensive (see Section 3.3.2). Another important aspect of making the deep learning model work is that we divide the regret by $\sqrt{T}$ and renormalize it, because the cumulative regret can grow unboundedly (see Section 3.3.1). In addition, DNCFR collects data by an efficient unbiased mini-batch robust sampling method, which may be of independent interest to the IIG community (see Section 4). There are also significant differences in the experimental evaluations: we conduct a set of ablation studies in various settings, which we believe are informative and could have a significant impact on this class of algorithms, and we evaluate DNCFR on extremely large games, while RCFR and SDCFR are evaluated only on small toy games. We proposed a novel double neural counterfactual regret minimization approach to solve large IIGs by combining several novel techniques: recurrent neural representation, attention, robust sampling, and mini-batch MCCFR. We conducted a set of ablation studies, and the results show that these techniques may be of independent interest. This is a successful application of deep learning to large IIGs; we believe DNCFR and related neural methods open up a promising direction for future work. A GAME RULES One-Card Poker is a two-player IIG of poker. The game rules are as follows. Each player is dealt one card from a deck of X cards. The first player can pass or bet. If the first player bets, the second player can call or fold; if the first player passes, the second player can pass or bet; if the second player bets, the first player can fold or call. The game ends with two passes, a call, or a fold. A player who folds loses 1 chip. If the game ends with two passes, the player with the higher card wins 1 chip; if the game ends with a call, the player with the higher card wins 2 chips. Leduc Hold'em is a two-player IIG of poker, first introduced by Southey et al. In Leduc Hold'em there is a deck of 6 cards comprising two suits of three ranks, often denoted king, queen, and jack. A player may wager any number of chips up to that player's remaining stack, and there is no limit on the number of raises or bets in each betting round. There are two rounds: in the first betting round, each player is dealt one card from the deck of 6 cards; in the second betting round, a community (public) card is revealed from the remaining 4 cards. In this paper, Leduc(x) refers to Leduc Hold'em with stack size x. Heads-Up No-Limit Texas Hold'em (HUNL) has at most four betting rounds if neither of the two players folds during play; the four betting rounds are called preflop, flop, turn, and river. The rules are as follows. Under the Annual Computer Poker Competition (ACPC) rules, the two players each start with 20,000 chips.
The player at the small blind first puts 50 chips in the pot, while the player at the big blind puts 100 chips in the pot. The first round of betting then follows. If the preflop betting round ends without a player folding, three public cards are revealed face-up on the table and the flop betting round occurs. After this round, one more public card is dealt (the turn) and the third round of betting takes place, followed by a fifth public card (the river) and a final round of betting. In no-limit poker, a player can fold, call, or bet, and the bet size ranges from one big blind up to the number of chips remaining in the player's stack. We define the components of an extensive-form game following Osborne & Rubinstein. A history is a sequence of actions $h=(a_l)_{l=1,\dots,L}$, and every prefix $(a_l)_{l=1,\dots,L'}$ with $0<L'<L$ is also a history. $Z\subseteq H$ denotes the terminal histories, and any member $z\in Z$ is not a prefix of any other sequence. $A(h)=\{a : ha\in H\}$ is the set of available actions after non-terminal history $h\in H\setminus Z$. A player function $P$ assigns a member of $N\cup\{c\}$ to each non-terminal history, where $c$ denotes the chance player id, usually $-1$; $P(h)$ is the player who takes an action after history $h$. The information partition $\mathcal{I}_i$ of player $i$ partitions the histories $\{h\in H : P(h)=i\}$. A set $I_i\in\mathcal{I}_i$ is an information set (infoset) of player $i$, and $I_i(h)$ refers to the infoset containing state $h$. Generally, $I_i$ records only the information observed by player $i$, i.e., player $i$'s hidden variable and the public actions; hence $I_i$ indicates a sequence in the IIG, i.e., $x_i^v a_0 a_2 \dots a_{L-1}$. For $I_i\in\mathcal{I}_i$ we denote by $A(I_i)$ the set $A(h)$ and by $P(I_i)$ the player $P(h)$ for any $h\in I_i$. For each player $i\in N$, a utility function $u_i(z)$ defines the payoff at terminal state $z$. For player $i$, the expected game utility $u_i^\sigma=\sum_{z\in Z}\pi^\sigma(z)\,u_i(z)$ of $\sigma$ is the expected payoff over all possible terminal nodes. Given a fixed opponent profile $\sigma_{-i}$, any strategy $\sigma_i^*$ of player $i$ that maximizes the payoff against $\sigma_{-i}$ is a best response. For a two-player extensive-form game, a Nash equilibrium is a strategy profile $\sigma^*=(\sigma_0^*,\sigma_1^*)$ such that each player's strategy is a best response to the opponent's. An $\epsilon$-Nash equilibrium is an approximation of a Nash equilibrium whose strategy profile $\sigma^*$ satisfies $\forall i\in N:\ u_i^{\sigma^*}+\epsilon\geq \max_{\sigma_i'\in\Sigma_i} u_i^{(\sigma_i',\sigma_{-i}^*)}$. The exploitability of a strategy $\sigma_i$ is defined as $\epsilon_i(\sigma_i)=u_i^{\sigma^*}-\min_{\sigma_{-i}} u_i^{(\sigma_i,\sigma_{-i})}$; a strategy is unexploitable if $\epsilon_i(\sigma_i)=0$. In large two-player zero-sum games such as poker, $u_i^{\sigma^*}$ is intractable to compute. However, if the players alternate positions, the value of a pair of games is zero, i.e., $u_0^{\sigma^*}+u_1^{\sigma^*}=0$, and we define the exploitability of a strategy profile $\sigma$ as $\epsilon(\sigma)=\big(\max_{\sigma_0'} u_0^{(\sigma_0',\sigma_1)}+\max_{\sigma_1'} u_1^{(\sigma_0,\sigma_1')}\big)/2$. To provide a more detailed explanation, Figure 1 presents an illustration of a partial game tree in One-Card Poker. In the first tree, the two players are dealt (queen, jack) in the left subtree and (queen, king) in the right subtree. $z_i$ denotes a terminal node and $h_i$ a non-terminal node. There are 19 distinct nodes in the left tree, corresponding to 9 non-terminal nodes (including chance $h_0$) and 10 terminal nodes. The trajectory from the root to each node is a history of actions; in an extensive-form game, $h_i$ refers to this history. For example, $h_3$ consists of actions 0:Q, 1:J, and P; $h_7$ consists of actions 0:Q, 1:J, P, and B; $h_8$ consists of actions 0:Q, 1:K, P, and B. We have $h_3\sqsubseteq h_7$, $A(h_3)=\{P,B\}$, and $P(h_3)=1$. In the IIG, the private card of player 1 is invisible to player 0, so $h_7$ and $h_8$ are actually the same for player 0. We use an infoset to denote the set of such indistinguishable states.
Similarly, $h_1$ and $h_2$ are in the same infoset. In the right tree of Figure 1, $h_3$ and $h_5$ are in the same infoset, and $h_4$ and $h_6$ are in the same infoset. Generally, any $I_i\in\mathcal{I}_i$ records only the information observed by player $i$, including player $i$'s hidden variable and the public actions. For example, the infoset of $h_7$ and $h_8$ indicates the sequence 0:Q, P, B. Because $h_7$ and $h_8$ are indistinguishable to player 0 in the IIG, both states share the same strategy: if $I_0$ is the infoset of $h_7$ and $h_8$, we have $\sigma_0(I_0)=\sigma_0(h_7)=\sigma_0(h_8)$. A strategy profile $\sigma=\{\sigma_i\mid\sigma_i\in\Sigma_i,\ i\in N\}$ is a collection of strategies for all players, where $\Sigma_i$ is the set of all possible strategies for player $i$, and $\sigma_{-i}$ refers to the strategies of all players other than $i$. For player $i\in N$, the strategy $\sigma_i(I_i)$ is a function assigning an action distribution over $A(I_i)$ to infoset $I_i$, and $\sigma_i(a|h)$ denotes the probability of action $a$ taken by player $i\in N\cup\{c\}$ at state $h$. In an IIG, $\forall h_1,h_2\in I_i$ we have $\sigma_i(I_i)=\sigma_i(h_1)=\sigma_i(h_2)$. For an iterative method such as CFR, $\sigma^t$ refers to the strategy profile at the $t$-th iteration. The state reach probability of history $h$ is denoted by $\pi^\sigma(h)$ if players take actions according to $\sigma$; for the empty sequence, $\pi^\sigma(\emptyset)=1$. The reach probability decomposes into each player's contribution, $\pi^\sigma(h)=\pi_i^\sigma(h)\,\pi_{-i}^\sigma(h)$. The infoset reach probability of $I_i$ is defined as $\pi^\sigma(I_i)=\sum_{h\in I_i}\pi^\sigma(h)$. If $h'\sqsubseteq h$, the interval state reach probability from $h'$ to $h$ is defined as $\pi^\sigma(h',h)$, and we have $\pi^\sigma(h)=\pi^\sigma(h')\,\pi^\sigma(h',h)$. Figure 8(a) shows that robust sampling with a larger batch size yields better performance. This is reasonable because a larger batch size leads to more sampled infosets in each iteration, at the cost of more memory to store the sampled values. If $b=1$, only one block is sampled per iteration. The results demonstrate that larger batch sizes generally lead to faster convergence. Because the mini-batch samples can easily be drawn in parallel on a large-scale distributed system, this method is very efficient; in practice, we can pick a suitable mini-batch size according to the available computation and memory. In Figure 8(b), we compare the proposed robust sampling against Average Strategy (AS) sampling on Leduc Hold'em (stack = 5). We set the mini-batch size of MCCFR to $b=100$ and $k=2$ in robust sampling; the parameters of average strategy sampling are set to $\epsilon=k/|A(I)|$, $\tau=0$, and $\beta=0$. After 1000 iterations, our robust sampling performs better than AS. More specifically, with $k=1$ the exploitability of robust sampling is 0.5035 versus 0.5781 for AS, and with $k=2$ it is 0.2791 versus 0.3238. Robust sampling samples $\min(k,|A(I)|)$ of player $i$'s actions, while AS samples a random number of player $i$'s actions. Note that if $\rho$ or the number of actions is small, the random number generated between 0 and 1 may exceed $\rho$ for every action, in which case AS samples no action at all; therefore AS has higher variance than our robust sampling. In addition, the parameter ranges of AS are $\epsilon\in(0,1]$, $\tau\in[1,\infty)$, and $\beta\in[0,\infty)$, and the case $\tau<1$ was not analyzed. With Bayes' theorem, we can infer the posterior probability of the opponent's private cards using Equation (9), as sketched below.
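The inference can be sketched as follows, assuming candidate private hands are hashable and a function is available that evaluates the opponent's reach probability of the state consistent with each hand; all names are illustrative.

def opponent_posterior(candidate_hands, opp_reach, prior=None):
    # p(hand | I_i) is proportional to prior(hand) * pi_{-i}^sigma(h), the
    # opponent's reach probability of the state consistent with that hand.
    n = len(candidate_hands)
    prior = prior if prior is not None else [1.0 / n] * n
    weights = [p * opp_reach(h) for p, h in zip(prior, candidate_hands)]
    z = sum(weights)
    return {h: w / z for h, w in zip(candidate_hands, weights)}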
D.2 ROBUST SAMPLING, OUTCOME SAMPLING AND EXTERNAL SAMPLING. For robust sampling, given the strategy profile $\sigma$ and a block $Q_j$ sampled according to the sampled profile $\sigma^{rs(k)}$, the regret of action $a\in A(I_i)$ is $\tilde{r}_i^\sigma(a|I_i)=\tilde{v}_i^\sigma(a|I_i)-\tilde{v}_i^\sigma(I_i)$, where the counterfactual values are computed from a weighted utility that rescales $u_i(z)$ by the reach probabilities of the sampled profile. Because the weighted utility no longer requires explicit knowledge of the opponent's strategy, we can use this sampling method for online regret minimization. Generally, if player $i$ randomly selects $\min(k,|A(I_i)|)$ actions according to the discrete uniform distribution $\mathrm{unif}(0,|A(I_i)|)$ at infoset $I_i$, then the weighted utility $u_i^{rs}(z)$ is a constant given the sampled profile $\sigma^{rs(k)}$. In particular, robust sampling coincides with external sampling when $k=\max_{I_i\in\mathcal{I}}|A(I_i)|$. For large games, external sampling is intractable because one player must take all actions at her infosets; robust sampling is more flexible and memory-efficient, since we can specify a suitable $k$ according to the available memory, and experimentally a smaller $k$ achieves a convergence rate similar to external sampling. • If $k=1$ and $\sigma_i^{rs(k)}=\sigma_i$, only one history $z$ is sampled; for $a\in A^{rs(k)}(I_i)$, the regret is computed from the single sampled trajectory, and if we add exploration to guarantee $q(z)\geq\delta>0$, robust sampling coincides with outcome sampling. • If $k=1$ and player $i$ selects one action uniformly at random, then the regret of an action $a$ that is not sampled at state $h$ is zero; in this case robust sampling converges more efficiently than outcome sampling. In our experiments, we use this policy as the default robust sampling when $k=1$. In the next section, we prove that mini-batch MCCFR gives an unbiased estimate of the counterfactual value. In order to define our $R$ and $S$ networks, we need to represent the infosets $I_i\in\mathcal{I}$ of extensive-form games. In such games, players take actions in an alternating fashion, and each player makes decisions according to the observed history. In this paper, we model the behavior sequence as a recurrent neural network, where each action in the sequence corresponds to a cell in the RNN; Figure 3(a) illustrates the proposed deep sequential neural network representation of infosets. In a standard RNN, the recurrent cell has a very simple structure, such as a single tanh or sigmoid layer; the long short-term memory (LSTM) architecture, with its gating mechanism, outperforms the standard version and is capable of learning long-term dependencies, so we use LSTM for the representation. Furthermore, since different positions in the sequence may contribute differently to the decision making, we add an attention mechanism to the LSTM architecture to enhance the representation; for example, a player may need to take a more aggressive strategy after beneficial public cards are revealed in a poker game, so the information after the public cards are revealed may be more important. More specifically, for the $l$-th cell, define $x_l$ as the input vector (a player or chance action), $e_l$ as the hidden-layer embedding, and $\phi^*$ as a general nonlinear function. Each action is represented by an LSTM cell, which can remove or add information to the cell state through three gates (writing $\cdot$ for the element-wise product and $[x_l,e_{l-1}]$ for the concatenation of $x_l$ and $e_{l-1}$): the forget gate $g_l^f=\phi^f(W^f[x_l,e_{l-1}])$; the input gate $g_l^i=\phi^i(W^i[x_l,e_{l-1}])$, which decides which values to update together with the candidate state $\tilde{C}_l=\phi^c(W^c[x_l,e_{l-1}])$ and the cell-state update $C_l=g_l^f\cdot C_{l-1}+g_l^i\cdot\tilde{C}_l$; and the output gate $g_l^o=\phi^o(W^o[x_l,e_{l-1}])$, with the updated hidden embedding $e_l=g_l^o\cdot\phi^e(C_l)$. As shown in Figure 3(a), for each LSTM cell $j$, a vector of attention weights is learned by an attention network; each member of this vector is a scalar $\alpha_j=\phi^a(w^a e_j)$. The attention embedding of the $l$-th cell is then defined as $e_l^a=\sum_{j=1}^{l}\alpha_j\cdot e_j$, the sum of the hidden embeddings weighted by the learned attention weights. The final output of the network is predicted by a value head on this attention embedding, $\phi^v(W^v e_l^a)$, where $\theta$ collectively refers to the parameters of the defined sequential neural network. Specifically, $\phi^f,\phi^i,\phi^o$ are sigmoid functions, $\phi^c$ and $\phi^e$ are hyperbolic tangent functions, and $\phi^a$ and $\phi^v$ are rectified linear functions.
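A minimal PyTorch sketch of this architecture; RSN and ASN would be two instances with separate parameters. The layer names, dimensions, and the use of nn.LSTM in place of the hand-written gates are our assumptions for illustration.

import torch
import torch.nn as nn

class RecurrentValueNet(nn.Module):
    # One LSTM step per encoded action, attention over the cell embeddings,
    # then a value head producing one prediction per action.
    def __init__(self, input_dim, embed_dim, num_actions):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, embed_dim, batch_first=True)
        self.attn = nn.Linear(embed_dim, 1, bias=False)   # alpha_j = relu(w_a e_j)
        self.value = nn.Linear(embed_dim, num_actions)    # phi_v head

    def forward(self, x):
        # x: (batch, seq_len, input_dim) encoded action sequence.
        e, _ = self.lstm(x)                    # hidden embeddings e_j
        alpha = torch.relu(self.attn(e))       # attention weights, (batch, seq, 1)
        e_attn = (alpha * e).sum(dim=1)        # e^a = sum_j alpha_j * e_j
        return torch.relu(self.value(e_attn))  # rectified value output

For instance, rsn = RecurrentValueNet(input_dim, 16, num_actions) and asn = RecurrentValueNet(input_dim, 16, num_actions) would then play the roles of $R(\cdot|\theta_R)$ and $S(\cdot|\theta_S)$.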
Remark: the proposed RSN and ASN share the same neural architecture but use different parameters, i.e., $R(a,I_i|\theta_R)$ and $S(a,I_i|\theta_S)$. Algorithm 2 provides a summary of the proposed double neural counterfactual regret minimization method. Specifically, in the first iteration, if we start the optimization from a tabular method, the techniques in Section 3.4 are used to clone the cumulative regrets and strategy, initializing RSN and ASN respectively; without such warm starting, we begin by randomly initializing the parameters of RSN and ASN. After either initialization, we use a sampling method, such as the proposed robust sampling, to collect the training samples (infosets and the corresponding values), which are saved in the memories $M_R^t$ and $M_S^t$ respectively. These samples are then used by the NeuralAgent procedure of Algorithm 3 to optimize RSN and ASN. Algorithm 4 implements the proposed mini-batch robust-sampling MCCFR. Note that, with the mini-batch techniques of Section 4, we can collect training samples in parallel on multi-processor or distributed systems, which still yields an unbiased estimate according to Theorem 1. Accelerated training and distributed implementation are beyond the scope of this paper; to compare the performance of DNCFR and tabular CFR fairly, all of our experiments run on a single processor. Algorithm 3 (Optimization of Deep Neural Networks) takes as inputs $\beta_{lr}$ as the learning rate, $\beta_{loss}$ as the criterion for early stopping, $\beta_{re}$ as the upper bound on the number of epochs since the minimal loss was last improved, $\theta^{t-1}$ as the old parameters learned at iteration $t-1$, $f(\cdot|\theta^{t-1})$ as the neural network, and $M$ as the training samples (infosets and the corresponding targets). To simplify notation, we use $\beta^*$ to denote the set of hyperparameters of the proposed deep neural networks, with $\beta_R^*$ and $\beta_S^*$ referring to the hyperparameter sets of RSN and ASN respectively. Optimize Neural Networks. Algorithm 3 implements the optimization for both RSN and ASN: both $R(a,I_i|\theta_R^{t+1})$ and $S(a,I_i|\theta_S^t)$ are optimized by mini-batch stochastic gradient descent. In this paper, we use the Adam optimizer with both momentum and adaptive learning rates; we also tried other optimizers in our experiments, but they did not achieve better experimental results. In practice, existing optimizers may not reach a low enough loss because of potential saddle points or local minima. To obtain higher accuracy and a lower optimization loss, we design a novel scheduler that reduces the learning rate when the loss has stopped decreasing.
Specifically, the scheduler reads a metric, e.g., the mean squared error, and if no improvement is seen for a number of epochs, the learning rate is reduced by a factor. In addition, we reset the learning rate in both the optimizer and the scheduler once the loss stops decreasing within $\beta_{re}$ epochs. A gradient clipping mechanism limits the magnitude of the parameter gradients and makes the optimizer behave better in the vicinity of steep cliffs. After each epoch, the best parameters, i.e., those achieving the minimum loss so far, replace the old parameters, and an early stopping mechanism halts optimization once the lowest loss falls below the specified criterion $\beta_{loss}$. The features are encoded as follows. As shown in Figure 3(a), for a history $h$ and player $P(h)$, we use vectors to represent the observed actions, including those of the chance player. For example, on Leduc Hold'em the input feature $x_l$ of the $l$-th cell is the concatenation of three one-hot features: the given private cards, the revealed public cards, and the current action $a$. Both private and public cards are one-hot encoded (Harris & Harris), with a 1 in the position of the held card and 0 elsewhere; if there are no public cards, the corresponding positions are filled with 0. The betting chips in the encoded vector are represented by the normalized cumulative spent, i.e., the cumulative chips divided by the stack size. For HUNL, each card is encoded by a vector of length 17 (13 for the rank embedding and 4 for the suit embedding); the actions in the public sequence are one-hot encoded, and raise actions are likewise represented by the normalized cumulative spent. Algorithm 4 presents one application scenario of the proposed mini-batch robust sampling method. The function MCCFR-NN traverses the game tree like tabular MCCFR, starting from the root $h=\emptyset$. Let $I_i$ be the infoset of $h$, and suppose player $i$ samples $k$ actions by robust sampling. Algorithm 4 proceeds as follows. • If the history is terminal, the function returns the weighted utility. • If the history belongs to the chance player, one action $a\in A(I_i)$ is sampled according to the strategy $\sigma_{-i}(I_i)$ and appended to the history, i.e., $h\leftarrow ha$. • If $P(I_i)=i$, the current strategy is updated from the cumulative regret predicted by RSN; we then sample $k$ actions according to the specified sampled strategy profile $\sigma_i^{rs(k)}$. After the recursive updates, we obtain the counterfactual value and the regret of each action at $I_i$. For the observed nodes, the counterfactual regrets and the numerators of the corresponding average strategy are stored in $M_R^t$ and $M_S^t$ respectively. • If $P(I_i)$ is the opponent, only one action is sampled according to the strategy $\sigma_{-i}(I_i)$. The function Mini-Batch-MCCFR-NN samples $b$ blocks in parallel; this mini-batch method lets MCCFR achieve an unbiased estimate of the CFV, and the parallel implementation makes it efficient in practice. Remark: we update the average strategy in the $P(h)=i$ branch, which potentially leads to a biased estimate of the average strategy. There is a trade-off among unbiasedness, convergence, and data efficiency in Algorithm 4. A feasible solution is stochastically-weighted averaging (SWA); however, SWA typically leads to a large variance, as discussed in Lanctot's Ph.D. thesis (p. 49). Classical external sampling (ES) solves this problem by updating the average strategy only for player $-i$. Because ES samples $k=|A(I_i)|$ actions for player $i$ and only one action for $-i$, it is inefficient for collecting average-strategy samples at $-i$ in neural CFR; in contrast, we collect samples at $i$. Typically, when collecting average-strategy samples at $i$, SWA would be needed to maintain an unbiased estimate of the average strategy, but because of its high variance, we find that the variant without SWA empirically converges more efficiently. A sketch of the Algorithm 3-style fitting loop follows.
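A condensed sketch of such a fitting loop, combining Adam, a reduce-on-plateau scheduler, gradient clipping, best-parameter tracking, and early stopping; the learning-rate reset of Algorithm 3 is omitted for brevity, and all names are illustrative.

import copy
import torch

def neural_agent_fit(net, loader, beta_lr=1e-3, beta_loss=1e-4,
                     beta_epoch=2000, beta_re=10):
    opt = torch.optim.Adam(net.parameters(), lr=beta_lr)
    sched = torch.optim.lr_scheduler.ReduceLROnPlateau(
        opt, factor=0.5, patience=beta_re, min_lr=1e-6)
    best_loss, best_state = float('inf'), copy.deepcopy(net.state_dict())
    for _ in range(beta_epoch):
        total, batches = 0.0, 0
        for inputs, targets in loader:
            opt.zero_grad()
            loss = ((net(inputs) - targets) ** 2).mean()
            loss.backward()
            torch.nn.utils.clip_grad_norm_(net.parameters(), 1.0)  # steep cliffs
            opt.step()
            total, batches = total + loss.item(), batches + 1
        epoch_loss = total / max(batches, 1)
        sched.step(epoch_loss)                    # reduce lr on plateau
        if epoch_loss < best_loss:                # track the best parameters
            best_loss, best_state = epoch_loss, copy.deepcopy(net.state_dict())
        if best_loss < beta_loss:                 # early stopping criterion
            break
    net.load_state_dict(best_state)
    return net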
In the experiments, we set the network hyperparameters as follows. Hyperparameters on Leduc Hold'em. The Leduc(5), Leduc(10), and Leduc(15) games in our experiments have $1.1\times10^4$ infosets ($6\times10^4$ states), $3\times10^5$ infosets ($1.5\times10^6$ states), and $3\times10^6$ infosets ($2\times10^7$ states) respectively. We set $k=3$ as the default parameter of the provable robust sampling method on all these games. For the small Leduc(5), we select $b=100$ as the default mini-batch size of MCCFR, which samples only 5.59% of infosets in each iteration. For the larger Leduc(10) and Leduc(15), we select $b=500$ by default, which visits (observes) only 2.39% and 0.53% of infosets per iteration. To train RSN and ASN, we set the default embedding sizes for both networks to 16, 32, and 64 for Leduc(5), Leduc(10), and Leduc(15) respectively. In each update, 256 samples are used to compute the parameter gradients via mini-batch stochastic gradient descent. We select Adam as the default optimizer and LSTM with attention as the default neural architecture in all experiments; the networks have only 2608, 7424, and 23,360 parameters respectively, far fewer than the numbers of infosets. The default learning rate of Adam is $\beta_{lr}=0.001$. A scheduler, which reduces the learning rate based on the number of epochs and the convergence of the loss, helps the neural agent reach high accuracy: the learning rate is reduced by a factor of 0.5 when the loss has stopped improving for 10 epochs, with a lower bound of $10^{-6}$ on the learning rate of all parameters. To keep the algorithm from converging to potential local minima or saddle points, we reset the learning rate to 0.001 to help the optimizer obtain better performance. $\theta_{best}^T$ denotes the best parameters, achieving the lowest loss after $T$ epochs. If the average loss of epoch $t$ falls below the early-stopping criterion $\beta_{loss}=10^{-4}$ for RSN ($10^{-5}$ for ASN), we stop the optimizer early. We set $\beta_{epoch}=2000$ and run the optimizer for at most 2000 epochs. For ASN, the learning rate is reduced by a factor of 0.7 when the loss has stopped improving for 15 epochs. For NFSP in our experiments, we set the hyperparameters according to its original paper: the neural network has one hidden layer of 64 neurons with rectified linear activation; the reinforcement and supervised learning rates are 0.1 and 0.005; both networks are optimized by vanilla stochastic gradient descent every 128 steps in the game; the mini-batch size for both networks is 128; the memory sizes are 200k and 2m samples for reinforcement and supervised learning respectively; the anticipatory parameter is 0.1; and the exploration rate of the $\epsilon$-greedy policy starts at 0.06 and decays to 0. Hyperparameters on HUNL. To solve the two HUNL games in our experiments, we sample 0.01% and 0.001% of infosets in each iteration respectively. The batch size for training the neural networks is set to 100,000; we prefer a large batch size because gradient descent accounts for most of the running time.
Typically, a larger batch size means fewer gradient descent updates. We run DNCFR under different embedding sizes and numbers of gradient descent updates; the experimental results are presented in Figure 7. The other hyperparameters of the neural networks and optimizers are the same as for Leduc. The game size of imperfect-information HUNL is comparable with that of Go, and its partial observability makes it very difficult; the cited article gives a detailed analysis of this problem from the perspective of both computational time and space complexity. To evaluate the proposed method, we reimplemented DeepStack, an expert-level artificial intelligence for heads-up no-limit Texas Hold'em that defeated professional poker players; the decision points of heads-up no-limit Texas Hold'em exceed $10^{161}$. We provide the rules of Texas Hold'em in Appendix A.3. In this section, we provide some details of our implementation, compare our agent with the original DeepStack to verify the correctness of the implementation, and apply our double neural method to the subgames of DeepStack. The ABS-CFR agent is an enhanced version of HITSZ_LMW_2pn, whose previous version won third prize in the 2018 Annual Computer Poker Competition (ACPC); it has $2\times10^{10}$ information sets. The idea of ABS-CFR is to first abstract the full HUNL into a smaller abstract game and then solve the abstracted game with CFR. ABS-CFR uses two kinds of abstraction. The action abstraction uses a discretized betting model with fold, call, 0.5x pot raise, 1x pot raise, 2x pot raise, 4x pot raise, and all-in at each decision node. The card abstraction uses domain knowledge to collapse strategically similar states into a single state: in the preflop round we use a lossless abstraction with 169 buckets; on the flop and turn we use a potential-aware imperfect-recall abstraction with earth mover's distance, with 10,000 and 50,000 buckets respectively; and on the river we use an opponent cluster hand strength abstraction with 5,000 buckets. Because the University of Alberta did not release the source code of DeepStack for no-limit Texas Hold'em, we implemented the algorithm according to the original article. It should be noted that the released example code for Leduc Hold'em cannot directly be used for heads-up no-limit Texas Hold'em, for at least three reasons. The toy game Leduc Hold'em has only 2 rounds and 6 cards with a default stack size of 5, and runs on a single desktop, while HUNL has four rounds, 52 cards, and a stack size of 20,000 under the ACPC rules. Specifically, there are 55,627,620,048,000 possible public and private card combinations for two players in HUNL, and the whole game contains about $10^{161}$ infosets, so the program must be implemented and run on a large-scale distributed computing cluster. The example code also does not contain the acceleration techniques and parallel algorithms necessary for Texas Hold'em. Our implementation follows the key ideas presented in the original DeepStack article, using the same hyperparameters and training samples. To optimize the counterfactual value network of the turn subgame (which looks ahead two rounds, covering both turn and river), we generated nine million samples.
Because each sample is generated by traversing 1000 iterations of the CFR+ algorithm from random reach probabilities, generating these samples is computationally expensive: it took a cluster of 1500 nodes (each with 32 CPU cores and 60GB of memory) more than 60 days. To optimize the counterfactual value network of the flop subgame (which looks ahead only one round), we generated two million samples, which took about one week with similar computational resources. The auxiliary network for the preflop subgame was optimized on ten million samples and took 2 days. The whole implementation of DeepStack took us several months and hundreds of thousands of lines of code. The overall DeepStack algorithm contains three ingredients: computing a strategy for the current public state; depth-limited lookahead to the end of a subgame rather than the end of the full game, using a counterfactual value network to infer the values of the subgame's leaf nodes; and an action abstraction technique to reduce the size of the game tree. To evaluate a strategy in an imperfect information game, exploitability is usually used as the metric measuring the distance from a Nash equilibrium in two-player zero-sum games. In a game as large as heads-up no-limit Texas Hold'em, however, computing exploitability is prohibitively expensive because of the $10^{161}$ search space. We verified the correctness of our implementation in three different ways. First, the logs of DeepStack's matches against professional poker players are released on the official website and contain more than 40,000 hand histories. From these logs, we counted the frequency of each action taken by DeepStack under different private cards and used the normalized frequencies as an estimate of DeepStack's strategy. We compared this estimated strategy with our reimplemented DeepStack; Figure 10 in Appendix G provides the comparison and demonstrates that our implementation produces policies very close to those of the original DeepStack. Second, we compared the Huber losses of the three deep counterfactual value networks; our implementation achieves losses similar to those in the original paper. Third, our agent played against an enhanced version of HITSZ_LMW_2pn, whose previous version won third prize in the 2018 Annual Computer Poker Competition (ACPC); our implementation beats HITSZ_LMW_2pn by 120 mbb/g.
We proposed a double neural framework to solve large-scale imperfect-information games.
450
scitldr
We present the first verification that a neural network for perception tasks produces a correct output within a specified tolerance for every input of interest. We define correctness relative to a specification which identifies 1) a state space consisting of all relevant states of the world and 2) an observation process that produces neural network inputs from the states of the world. Tiling the state and input spaces with a finite number of tiles, obtaining ground truth bounds from the state tiles and network output bounds from the input tiles, then comparing the ground truth and network output bounds delivers an upper bound on the network output error for any input of interest. Results from two case studies highlight the ability of our technique to deliver tight error bounds for all inputs of interest and show how the error bounds vary over the state and input spaces. Neural networks are now recognized as powerful function approximators with impressive performance across a wide range of applications, especially perception tasks (e.g. vision, speech recognition). Current techniques, however, provide no correctness guarantees on such neural perception systemsthere is currently no way to verify that a neural network provides correct outputs (within a specified tolerance) for all inputs of interest. The closest the field has come is robustness verification, which aims to verify if the network prediction is stable for all inputs in some neighborhood around a selected input point. But robustness verification does not verify for all inputs of interest -it only verifies around local regions. Besides, it does not guarantee that the output, even if stable, is actually correct -there is no specification that defines the correct output for any input except for the manually-labeled center point of each region. We present the first correctness verification of neural networks for perception -the first verification that a neural network produces a correct output within a specified tolerance for every input of interest. Neural networks are often used to predict some property of the world given an observation such as an image or audio recording. We therefore define correctness relative to a specification which identifies 1) a state space consisting of all relevant states of the world and 2) an observation process that produces neural network inputs from the states of the world. Then the inputs of interest are all inputs that can be observed from the state space via the observation process. We define the set of inputs of interest as the feasible input space. Because the quantity of interest that the network predicts is some property of the state of the world, the state defines the ground truth output (and therefore defines the correct output for each input to the neural network). We present Tiler, the algorithm for correctness verification of neural networks. Evaluating the correctness of the network on a single state is straightforward -use the observation process to obtain the possible inputs for that state, use the neural network to obtain the possible outputs, then compare the outputs to the ground truth from the state. To do correctness verification, we generalize this idea to work with tiled state and input spaces. We cover the state and input spaces with a finite number of tiles: each state tile comprises a set of states; each input tile is the image of the corresponding state tile under the observation process. The state tiles provide ground truth bounds for the corresponding input tiles. 
We use recently developed techniques from the robustness verification literature to obtain network output bounds for each input tile (; ; ; ; ;). A comparison of the ground truth and output bounds delivers an error upper bound for that region of the state space. The error bounds for all the tiles jointly provide the correctness verification . We present two case studies. The first involves a world with a (idealized) fixed road and a camera that can vary its horizontal offset and viewing angle with respect to the centerline of the road (Section 5). The state of the world is therefore characterized by the offset δ and the viewing angle θ. A neural network takes the camera image as input and predicts the offset and the viewing angle. The state space includes the δ and θ of interest. The observation process is the camera imaging process, which maps camera positions to images. This state space and the camera imaging process provide the specification. The feasible input space is the set of camera images that can be observed from all camera positions of interest. For each image, the camera positions of all the states that can produce the image give the possible ground truths. We tile the state space using a grid on (δ, θ). Each state tile gives a bound on the ground truth of δ and θ. We then apply the observation process to project each state tile into the image space. We compute a bounding box for each input tile and apply techniques from robustness verification to obtain neural network output bounds for each input tile. Comparing the ground truth bounds and the network output bounds gives upper bounds on network prediction error for each tile. We verify that our trained neural network provides good accuracy across the majority of the state space of interest and bound the maximum error the network will ever produce on any feasible input. The second case study verifies a neural network that classifies a LiDAR measurement of a sign in an (idealized) scene into one of three shapes (Section 6). The state space includes the position of the LiDAR sensor and the shape of the sign. We tile the state space, project each tile into the input space via the LiDAR observation process, and again apply techniques from robustness verification to verify the network, including identifying regions of the input space where the network may deliver an incorrect classification. Specification: We show how to use state spaces and observation processes to specify the global correctness of neural networks for perception (the space of all inputs of interest, and the correct output for each input). This is the first systematic approach (to our knowledge) to give global correctness specification for perception neural networks. We present an algorithm, Tiler, for correctness verification. With state spaces and observation processes providing specification, this is the first algorithm (to our knowledge) for verifying that a neural network produces the correct output (up to a specified tolerance) for every input of interest. The algorithm can also compute tighter correctness bounds for focused regions of the state and input spaces. Case Study: We apply this framework to a problem of predicting camera position from image and a problem of classifying shape of the sign from LiDAR measurement. We obtain the first correctness verification of neural networks for perception tasks. 
Motivated by the vulnerability of neural networks to adversarial attacks , researchers have developed a range of techniques for verifying robustnessthey aim to verify if the neural network prediction is stable in some neighborhood around a selected input point.; provide an overview of the field. A range of approaches have been explored, including layer-by-layer reachability analysis (; with abstract interpretation or bounding the local Lipschitz constant , formulating the network as constraints and solving the ing optimization problem (; ; ;), solving the dual problem , and formulating and solving using SMT/SAT solvers (; ;). When the adversarial region is large, several techniques divide the domain into smaller subdomains and verify each of them . Unlike the research presented in this paper, none of this prior research formalizes or attempts to verify that a neural network for perception computes correct (instead of stable) outputs within a specified tolerance for all inputs of interest (instead of local regions around labelled points). Prior work on neural network testing focuses on constructing better test cases to expose problematic network behaviors. Researchers have developed approaches to build test cases that improve coverage on possible states of the neural network, for example neuron coverage and generalizations to multi-granular coverage and MC/DC (Kelly J. et al., 2001) inspired coverage. presents coverage-guided fuzzing methods for testing neural networks using the above coverage criteria. generates realistic test cases by applying natural transformations (e.g. brightness change, rotation, add rain) to seed images. O' uses simulation to test autonomous driving systems with deep learning based perception. Unlike this prior research, which tests the neural network on only a set of input points, the research presented in this paper verifies correctness for all inputs of interest. Consider the general perception problem of taking an input observation x and trying to predict some quantity of interest y. It can be a regression problem (continuous y) or a classification problem (discrete y). Some neural network model is trained for this task. We denote its function by f: X → Y, where X is the space of all possible inputs to the neural network and Y is the space of all possible outputs. Behind the input observation x there is some state of the world s. Denote S as the space of all states of the world that the network is expected to work in. For each state of the world, a set of possible inputs can be observed. We denote this observation process using a mapping g: S → P(X), where g(s) is the set of inputs that can be observed from s. Here P(·) is the power set, andX ⊆ X is the feasible input space, the part of input space that may be observed from the state space S. Concretely,X = {x|∃s ∈ S, x ∈ g(s)}. The quantity of interest y is some attribute of the state of the world. We denote the ground truth of y using a function λ: S → Y. This specifies the ground truth for each input, which we denote as a mappingf:X → P(Y).f (x) is the set of possible ground truth values of y for a given x: f (x) = {y|∃s ∈ S, y = λ(s), x ∈ g(s)}. The feasible input spaceX and the ground truth mappingf together form a specification. In general, we cannot compute and representX andf directly -indeed, the purpose of the neural network is to compute an approximation to this ground truthf which is otherwise not available given only the input x. X andf are instead determined implicitly by S, g, and λ. 
The error of the neural network is then characterized by the difference between f and f̂. Concretely, the maximum possible error at a given input x ∈ X̂ is

e(x) = max_{y ∈ f̂(x)} d(f(x), y),    (Equation 2)

where d(·, ·) measures the size of the error between two values of the quantity of interest. For regression, we use the absolute difference d(y1, y2) = |y1 − y2|. For classification, we use a binary error measurement d(y1, y2) = 1_{y1 ≠ y2} (indicator function): the error is 0 if the prediction is correct and 1 if it is incorrect. The goal of correctness verification is to compute upper bounds on the network prediction error with respect to the specification. We formulate the problem formally as follows. Problem formulation of correctness verification: given a trained neural network f and a specification (X̂, f̂) determined implicitly by S, g, and λ, compute upper bounds on the error e(x) for every feasible input x ∈ X̂. We next present Tiler, an algorithm for correctness verification of neural networks. We describe here the algorithm for regression settings, together with sufficient conditions for the resulting error bounds to be sound. The algorithm for classification settings is similar (see Appendix B).

Step 1: Divide the state space S into state tiles {S_i}, with ∪_i S_i = S. The image of each S_i under g gives an input tile (a tile in the input space) X_i = {x | ∃s ∈ S_i, x ∈ g(s)}. The resulting tiles {X_i} satisfy the following condition:
Condition 4.1. ∪_i X_i = X̂.

Step 2: For each S_i, compute the ground truth bound as an interval (l_i, u_i) such that l_i ≤ λ(s) ≤ u_i for all s ∈ S_i. The bounds computed this way satisfy the following condition, which (intuitively) states that the possible ground truth values for an input point must be covered jointly by the ground truth bounds of all the input tiles that contain this point:
Condition 4.2(a). For any x ∈ X̂ and any y ∈ f̂(x), there exists X_i such that x ∈ X_i and l_i ≤ y ≤ u_i.

Previous research has produced a variety of methods that bound the neural network output over a given input region; examples include layer-by-layer reachability analysis and formulations as constrained optimization problems. Each method typically works for certain classes of networks (e.g. piecewise-linear networks) and certain classes of input regions (e.g. polytopes). For each input tile X_i, we therefore introduce a bounding box B_i that 1) includes X_i and 2) is supported by the solving method.

Step 3: Using S_i and g, compute a bounding box B_i for each tile. The bounding boxes must satisfy:
Condition 4.3. X_i ⊆ B_i.

Step 4: Given f and the bounding boxes {B_i}, use an appropriate solver to compute the network output ranges. The network has a single output entry for each quantity of interest; denote its value by o(x), so f(x) = o(x). The network output bounds (l'_i, u'_i) returned by the solver must satisfy:
Condition 4.4(a). l'_i ≤ o(x) ≤ u'_i for all x ∈ B_i.

Step 5: For each tile, use the ground truth bound (l_i, u_i) and the network output bound (l'_i, u'_i) to compute the error bound

e_i = max(u'_i − l_i, u_i − l'_i).    (Equation 3)

e_i upper-bounds the prediction error whenever the state of the world s is in S_i, because (l_i, u_i) covers the ground truth values in S_i and (l'_i, u'_i) covers the possible network outputs for all inputs that can be generated from S_i. From these error bounds {e_i}, we compute a global error bound

e_global = max_i e_i,    (Equation 4)

and a local error bound for any feasible input x ∈ X̂:

e_local(x) = max_{i | x ∈ B_i} e_i.    (Equation 5)

Note that max_{i | x ∈ X_i} e_i would provide a tighter local error bound, but since it is generally much easier to check containment of x in the B_i's than in the X_i's, we adopt the formulation above. Algorithm 1 formally presents the Tiler algorithm (for regression).
The implementations of DIVIDESTATESPACE, GETGROUNDTRUTHBOUND, and GETBOUNDINGBOX are problem dependent, and the choice of SOLVER must be compatible with B_i and f. Conditions 4.1 to 4.4 specify sufficient conditions on the results returned by these four methods for the obtained guarantees to be sound.

Algorithm 1 Tiler (for regression)
Input: S, g, λ, f. Output: e_global, {e_i}, {B_i}
1: procedure TILER(S, g, λ, f)
2:   {S_i} ← DIVIDESTATESPACE(S)  ▷ Step 1
3:   for each S_i do
4:     (l_i, u_i) ← GETGROUNDTRUTHBOUND(S_i, λ)  ▷ Step 2
5:     B_i ← GETBOUNDINGBOX(S_i, g)  ▷ Step 3
6:     (l'_i, u'_i) ← SOLVER(f, B_i)  ▷ Step 4
7:     e_i ← max(u'_i − l_i, u_i − l'_i)  ▷ Step 5
8:   end for
9:   e_global ← max({e_i})  ▷ Step 5
10:  return e_global, {e_i}, {B_i}  ▷ {e_i}, {B_i} can be used later to compute e_local(x)
11: end procedure

The complexity of the algorithm is determined by the number of tiles, which scales with the dimension of the state space S. Because the computations for each tile are independent, our Tiler implementation executes them in parallel. Our formulation also applies to noisy observations: the observation process g maps a state to a set of possible inputs, so noise can be incorporated there. The version of Tiler above produces hard guarantees, i.e. the computed error bounds are valid in all cases; this works for observations with bounded noise. For unbounded noise (e.g. Gaussian noise), Tiler can be adjusted to provide probabilistic guarantees: we compute bounding boxes B_i such that P(x ∈ B_i | x ∼ g(s), s ∈ S_i) > 1 − ε for some small ε. Here we also need the probability measure associated with the observation process: g(s) now gives the probability distribution of the input x given the state s. This yields an error bound that holds with probability at least 1 − ε for any state in the tile. We demonstrate how this is achieved in practice in the second case study (Section 6). Tiler verifies the correctness of the neural network over the whole feasible input space X̂. To make the system complete, we need a method to detect whether a newly observed input is within X̂ (the network is designed to work on it, and we have guaranteed correctness) or not (the network is not designed for it, so we have no guarantees). In general, checking containment in X̂ directly is hard, since there is no explicit representation of it. Instead, we use the bounding boxes {B_i} returned by Tiler as a proxy for X̂: for a new input x*, we check whether x* is contained in any of the B_i's. Since the network output ranges computed in Step 4 cover the inputs in each B_i, and the error bounds incorporate those output ranges, we know the network output has no unexpected drastic changes within the B_i's; this makes the B_i's a good proxy for the space of legal inputs. Searching through all the B_i's can introduce a large overhead. We speed up the search by exploiting the network prediction and the verified error bounds: given the prediction y* = f(x*) and the global error bound e_global, we prune the search space by discarding tiles that do not overlap [y* − e_global, y* + e_global] in the ground truth attribute. The idea is that we only need to search the local region of the state space whose ground truth attribute is close to the prediction, since we have verified a bound on the maximum prediction error. We demonstrate this detection method and the prediction-guided search in the first case study (Section 5). Problem set-up: Consider a world containing a road with a centerline, two side lines, and a camera taking images of the road.
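The following is a minimal Python sketch of Algorithm 1's control flow. The callbacks stand in for the problem-dependent subroutines (GETGROUNDTRUTHBOUND, GETBOUNDINGBOX, and the MILP SOLVER); they are placeholders, not the paper's implementation:

```python
def tiler_regression(state_tiles, ground_truth_bound, bounding_box, solve_output_bounds):
    """Sketch of Algorithm 1, applied after DIVIDESTATESPACE has produced
    state_tiles. Each callback is a stand-in for a problem-dependent step."""
    errors, boxes = [], []
    for tile in state_tiles:
        l, u = ground_truth_bound(tile)        # Step 2: ground truth interval
        box = bounding_box(tile)               # Step 3: box covering g(tile)
        lo, uo = solve_output_bounds(box)      # Step 4: network output range
        errors.append(max(uo - l, u - lo))     # Step 5: max interval distance
        boxes.append(box)
    return max(errors), errors, boxes          # e_global, {e_i}, {B_i}

def local_error_bound(x, errors, boxes, contains):
    """e_local(x): max error bound over tiles whose bounding box contains x."""
    return max(e for e, b in zip(errors, boxes) if contains(b, x))
```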
The camera is positioned at a fixed height above the road but can vary its horizontal offset and viewing angle with respect to the centerline. Figure 1a presents a schematic of the scene. The state of the world s is characterized by the offset δ and angle θ of the camera position, so we label the states s_{δ,θ}. We consider camera positions in the range δ ∈ [−40, 40] (in length units of the scene; the road width from the centerline to each side line is 50 units) and θ ∈ [−60°, 60°]. The input x to the neural network is the image taken by the camera, and the observation process g is the camera imaging process. For each pixel, we shoot a ray from the center of that pixel through the camera focal point and compute the intersection of the ray with objects in the scene; the intensity of the intersection point is taken as the intensity of the pixel. The resulting x's are 32×32 grayscale images (see Appendices C.1 and C.2 for detailed descriptions of the scene and the camera imaging process). Figure 1b presents an example image. The feasible input space X̂ is the set of all images that can be taken with δ ∈ [−40, 40] and θ ∈ [−60°, 60°]. The quantity of interest y is the camera position (δ, θ), and the ground truth function is simply λ(s_{δ,θ}) = (δ, θ). For the neural network, we use the same small ConvNet architecture used in prior robustness verification work: 2 convolutional layers (size 4×4, stride 2) with 16 and 32 filters respectively, followed by a fully connected layer with 100 units; all activation functions are ReLUs. The output layer is a linear layer with 2 output nodes, corresponding to the predictions of δ and θ. The network is trained on 130k images and validated on 1000 images generated from our imaging process, with camera positions sampled uniformly from δ ∈ [−50, 50] and θ ∈ [−70°, 70°]. The network is trained with an l1 loss function using Adam (see Appendix E for more training details). For error analysis, we treat the predictions of δ and θ separately; the goal is to find upper bounds on the prediction errors e_δ(x) and e_θ(x) for any feasible input x ∈ X̂. Tiler: Figure 1c presents a schematic of how we apply Tiler to this problem. Tiles are constructed by dividing S on (δ, θ) into a grid of equal-sized rectangles with length a and width b; each cell in the grid is then a state tile S_i. We next encapsulate each input tile X_i with an l∞-norm ball B_i by computing, for each pixel, the range of values it can take within the tile. As the camera position varies within a cell S_i, the intersection point between a pixel's ray and the scene sweeps over a region of the scene; the range of intensity values in that region determines the range of values for that pixel. We compute this region for each pixel and then find the pixel value range (see Appendix C.3). The resulting B_i is an l∞-norm ball in image space covering X_i, represented by 32×32 pixel-wise ranges. To solve for the range of outputs of the ConvNet on inputs in an l∞-norm ball, we adopt an approach that formulates the robustness verification problem as a mixed integer linear program (MILP), using presolving on ReLU stability and progressive bound tightening to improve efficiency. We adopt the same formulation but change the MILP objectives: for each l∞-norm ball, we solve 4 optimization problems, maximizing and minimizing the output entry for δ, and likewise for θ.
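As a rough illustration of the bounding-box step, the sketch below estimates per-pixel ranges for one (δ, θ) cell by sampling camera poses and taking per-pixel minima and maxima. Note that sampling only under-approximates the exact ranges; a sound bounding box requires the analytic sweep computation of Appendix C.3. `render` is a stand-in for the imaging process g:

```python
import numpy as np

def bounding_box_by_sampling(render, d_lo, d_hi, t_lo, t_hi, n=5):
    """Approximate per-pixel [min, max] bounds for images observable from a
    (delta, theta) cell, by rendering an n x n sub-grid of camera poses.
    This is an illustration only: it under-approximates the true ranges."""
    imgs = np.stack([
        render(d, t)
        for d in np.linspace(d_lo, d_hi, n)
        for t in np.linspace(t_lo, t_hi, n)
    ])
    return imgs.min(axis=0), imgs.max(axis=0)   # per-pixel lower/upper bounds
```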
Denote the solved maxima and minima as the network output bounds for δ and θ on each tile. Experimental results: We run Tiler with a cell size of 0.1 (the side length of each cell in the (δ, θ) grid is a = b = 0.1). The step that takes the majority of the time is the optimization solver. With parallelism, the optimization step takes about 15 hours running on 40 CPUs @ 3.00 GHz, solving 960,000 × 4 MILP problems. We compute global error bounds by taking the maximum of e_δ^i and e_θ^i over all tiles. The global error bound for δ is 12.66, which is 15.8% of the measurement range (80 length units for δ); for θ it is 7.13° (5.94% of the 120° measurement range). We therefore successfully verify the correctness of the network with these tolerances for all feasible inputs. We visualize the error bound landscape by plotting the error bound of each tile as a heatmap over the (δ, θ) space; Figures 2a and 2d present the resulting heatmaps for e_δ^i and e_θ^i, respectively. To further inspect the distribution of the error bounds, we compute the percentage of the state space S (measured on the (δ, θ) grid) that has error bounds below a given threshold; the percentage as a function of the threshold can be viewed as a cumulative distribution. Figures 2c and 2f present the cumulative distributions of the error bounds. Most of the state space can be guaranteed with much lower error bounds, with only a small percentage of regions having larger bounds. This is especially true for the offset measurement: 99% of the state space is guaranteed to have error less than 2.65 (3.3% of the measurement range), while the global error bound is 12.66 (15.8%). A key question is how well the error bounds reflect the actual maximum error made by the neural network. To study the tightness of the bounds, we compute empirical estimates of the maximum errors for each S_i, denoted ē_δ^i and ē_θ^i: we sample multiple (δ, θ) within each cell S_i (on a sub-grid with spacing 0.05), generate the input image for each (δ, θ), and take the maximum of the errors of these points as the empirical estimate of the maximum error for S_i. This estimate is a lower bound on the maximum error for S_i, providing a reference for evaluating the tightness of the upper bounds obtained from Tiler. Taking the maximum of the ē_δ^i's and ē_θ^i's gives a lower bound estimate of the global maximum error: 9.12 for δ (11.4% of the measurement range) and 4.08° for θ (3.4% of the measurement range). The error bounds Tiler delivers are thus close to the lower bound estimates derived from the errors the network exhibits on specific inputs. Having visualized the heatmaps for the bounds, we plot the gaps between the bounds and the empirical estimates in Figures 2b and 2e. Most regions with large error bounds arise because the network itself has large errors there. Computing the cumulative distributions of the gaps between bounds and estimates, we find that for the angle measurement, 99% of the state space has an error gap below 1.9° (1.6% of the measurement range), and for the offset measurement, 99% has a gap below 1.41 length units (1.8%). The gaps indicate the maximum possible improvement of the error bounds, and two factors contribute to them. The first factor is that Tiler computes the error bound with interval arithmetic: it takes the maximum distance between the range of possible ground truths and the range of possible network outputs as the bound.
The second factor is the extra space included in the box B i that is not in the tile X i. This in a larger range on network output being used for calculating error bound, which in turn makes the error bound itself larger. Effect of tile size: Both of the factors described above are affected by the tile size. We run Tiler with a sequence of cell sizes (0.05, 0.1, 0.2, 0.4, 0.8) for the (δ, θ) grid. Figure 3a shows how the 99 percentiles of the error upper bounds and the gap between error bounds and estimates vary with cell size. As tile size gets finer, Tiler provides better error bounds, and the tightness of bounds improves. These show that we can get better error bounds with finer tile sizes. But this improvement might be at the cost of time: reducing tile sizes also increases the total number of tiles and the number of optimization problems to solve. Figure 3b shows how the total solving time varies with cell size. For cell sizes smaller than 0.2, the trend can be explained by the above argument. For cell sizes larger than 0.2, total solving time increases with cell size instead. The reason is each optimization problem becomes harder to solve as the tile becomes large. Specifically, the approach we adopt relies on the presolving on ReLU stability to improve speed. The number of unstable ReLUs will increase drastically as the cell size becomes large, which makes the solving slower. We implement the input detector by checking if the new input x * is contained in any of the bounding boxes B i. We test the detector with 3 types of inputs: 1) legal inputs, generated from the state space through the imaging process; 2) corrupted inputs, obtained by applying i.i.d uniformly distributed per-pixel perturbation to legal inputs; 3) inputs from a new scene, where the road is wider and there is a double centerline. Figure 3c to 3e show some example images for each type. We randomly generated 500 images for each type. Our detector is able to flag all inputs from type 1 as legal, and all inputs from type 2 and 3 as illegal. On average, naive search (over all B i) takes 1.04s per input, while prediction-guided search takes 0.04s per input. So the prediction-guided search gives a 26× speedup without any compromise in functionality. Problem set-up The world in this case contains a planar sign standing on the ground. There are 3 types of signs with different shapes: square, triangle, and circle (Figure 4b). A LiDAR sensor takes measurement of the scene, which is used as input to a neural network to classify the shape of the sign. The sensor can vary its distance d and angle θ with respect to the sign, but its height is fixed, and it is always facing towards the sign. Figure 4a shows the schematic of the set-up. Assume the working zone for the LiDAR sensor is with position d ∈ and θ ∈ [−45 •, 45 •]. Then the state space S has 3 dimensions: two continuous (d and θ), and one discrete (sign shape c). The LiDAR sensor emits an array of 32×32 laser beams in fixed directions. The measurement from each beam is the distance to the first object hit in that direction. We consider a LiDAR measurement model where the maximum distance that can be measured is MAX_RANGE=300, and the measurement has a Gaussian noise with zero mean and a standard deviation of 0.1% of MAX_RANGE. This gives the observation process g. Appendix D.1 and D.2 provides more details on the scene and LiDAR measurement model. We use a CNN with 2 convolutional layers (size 4×4) with 16 filters, followed by a fully connected layer with 100 units. 
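A sketch of the input detector with prediction-guided pruning described above, for a scalar quantity of interest; the tile record layout and helper names are illustrative:

```python
def is_legal_input(x, net, tiles, e_global, contains):
    """Flag x as legal iff it lies in some bounding box B_i. The search is
    pruned to tiles whose ground-truth range overlaps
    [y* - e_global, y* + e_global], using the verified global error bound.
    `tiles` holds (gt_lo, gt_hi, box) triples; names are illustrative."""
    y = net(x)
    for gt_lo, gt_hi, box in tiles:
        if gt_hi < y - e_global or gt_lo > y + e_global:
            continue                  # cannot contain x: prediction too far away
        if contains(box, x):
            return True
    return False
```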
The distance measurements are preprocessed before feeding into the network: first dividing them by MAX_RANGE to scale to, then using 1 minus the scaled distances as inputs. This helps the network training. We train the network using 50k points from each class, and validating using 500 points from each class. The training settings are the same as the previous case study (Appendix E). Tiler The state tiles are constructed in each of the three shape subspaces. We divide the θ dimension uniformly into 90 intervals and the d dimension uniformly in the inverse scale into 60 intervals to obtain a grid with 5400 cells per shape. To compute the bounding box B i for a given tile S i, we first find a lower bound and an upper bound on the distance of object for each beam as the sensor position varies within that tile S i (Appendix D.3). We extend this lower and upper bound by 5σ, where σ is the standard deviation of the Gaussian measurement noise. This way we have P (x ∈ B i |x ∼ g(s), s ∈ S i ) >= (P (|a| <= 5σ|a ∼ N (0, σ 2))) N > 0.999, where N = 32 × 32 is the input dimension. The factor 5 can be changed, depending on the required probabilistic guarantee. Same as the previous case study, we adopt the MILP method to solve the network output, which is used to decide whether the tile is verified to be correct or not. We plot the verification as heatmaps over the state space in Figure 5 (top row). We are able to verify the correctness of the network over the majority of the state space. In particular, we verify that the network is always correct when the shape of the sign is triangle. Besides the tiling described above, we also run a finer tiling with half the cell sizes in both d and θ. Figure 5 (bottom row) shows the verification . By reducing the tile sizes, we can verify more regions in the state space. For the tiles that we are unable to verify correctness (red squares in the heatmaps), there are inputs within those bounding boxes on which the network will predict a different class. We inspect several of such tiles to see the inputs that cause problems. Figure 4c shows a few examples. In some of the cases (top two in figure), the'misclassified' inputs actually do not look like coming from the ground truth class. This is because the extra space included in B i is too large, so that it includes inputs that are reasonably different from feasible inputs. Such cases will be reduced as the tile size becomes smaller, since the extra spaces in B i's will be shrunk. We have indeed observed such phenomenon, as we can verify more regions when the tile size becomes smaller. In some other cases (bottom example), however, the misclassified inputs are perceptually similar to the ground truth class. Yet the network predicts a different class. This reveals that the network is potentially not very robust on inputs around these points. In this sense, our framework provides a way to systematically find regions of the input space of interests where the network is potentially vulnerable. The techniques presented in this paper work with specifications provided by the combination of a state space of the world and an observation process that converts states into neural network inputs. Results from the case studies highlight how well the approach works for a state space characterized by several attributes and a camera imaging or LiDAR measurement observation process. 
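The probabilistic-guarantee arithmetic quoted above can be checked directly: widening each beam's bounds by 5σ and assuming independent Gaussian noise per beam gives P(x ∈ B_i) ≥ P(|a| ≤ 5σ)^N > 0.999 with N = 32 × 32. A two-line check:

```python
from math import erf, sqrt

p_one_beam = erf(5 / sqrt(2))        # P(|a| <= 5*sigma) for a ~ N(0, sigma^2)
print(p_one_beam ** (32 * 32))       # ~0.99941 > 0.999, as claimed
```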
We anticipate that the technique will also work well for other problems with a low-dimensional state space (but potentially a high-dimensional input space). For higher-dimensional state spaces, the framework makes it possible to systematically target specific regions of the input space to verify. Potential applications include targeted verification, directed testing, and the identification of illegal inputs on which the network is not expected to work.

A PROOFS FOR THEOREM 1 AND 2

Theorem 1 (Local error bound for regression). Given that Conditions 4.1, 4.2(a), 4.3, and 4.4(a) are satisfied, ∀x ∈ X̂, e(x) ≤ e_local(x), where e(x) is defined in Equation 2 and e_local(x) is computed from Equations 3 and 5.

Proof. For any x ∈ X̂, e(x) = max_{y ∈ f̂(x)} d(f(x), y). Conditions 4.1 and 4.2(a) guarantee that for any x ∈ X̂ and y ∈ f̂(x), we can find a tile X_i such that x ∈ X_i and l_i ≤ y ≤ u_i. Let t(y, x) denote the index of such a tile for given x and y ∈ f̂(x). Since x ∈ X_{t(y,x)} and X_{t(y,x)} ⊆ B_{t(y,x)} (Condition 4.3), we have x ∈ B_{t(y,x)}. By Condition 4.4(a), l'_{t(y,x)} ≤ f(x) ≤ u'_{t(y,x)}, and together with l_{t(y,x)} ≤ y ≤ u_{t(y,x)} this gives d(f(x), y) ≤ e_{t(y,x)}. Since x ∈ B_{t(y,x)} for all y ∈ f̂(x), we have {t(y, x) | y ∈ f̂(x)} ⊆ {i | x ∈ B_i}, which gives e(x) = max_{y ∈ f̂(x)} d(f(x), y) ≤ max_{i | x ∈ B_i} e_i = e_local(x).

Theorem 2 (Global error bound for regression). Given that Conditions 4.1, 4.2(a), 4.3, and 4.4(a) are satisfied, ∀x ∈ X̂, e(x) ≤ e_global, where e(x) is defined in Equation 2 and e_global is computed from Equations 3 and 4.

Proof. By Theorem 1, ∀x ∈ X̂, e(x) ≤ e_local(x) = max_{i | x ∈ B_i} e_i ≤ max_i e_i = e_global.

B TILER FOR CLASSIFICATION

We present here the Tiler algorithm for classification settings. Step 1 (tiling the space) is the same as for regression. Step 2: For each S_i, compute the ground truth bound as a set C_i ⊆ Y such that ∀s ∈ S_i, λ(s) ∈ C_i. The bounds computed this way satisfy the following condition:
Condition 4.2(b). For any x ∈ X̂, ∀y ∈ f̂(x), ∃X_i such that x ∈ X_i and y ∈ C_i.
The idea behind Condition 4.2(b) is the same as that of Condition 4.2(a), but formulated for discrete y. For tiles whose C_i contains more than one class, we cannot verify correctness, since there is more than one possible ground truth class for that tile. Therefore, we should try to make each state tile contain only one ground truth class when tiling the space in Step 1. For tiles with |C_i| = 1, we proceed to the following steps. Step 3 (compute a bounding box for each input tile) is the same as for regression. The next step is to solve for the network output range. Suppose the quantity of interest has K possible classes; the output layer of the network is then typically a softmax layer with K output nodes. Denote the k-th output score before the softmax by o_k(x). We use the solver to bound the minimum difference between the output score of the ground truth class and each of the other classes. Step 4: Given f, B_i, and the ground truth class c_i (the only element of C_i), use an appropriate solver to compute lower bounds l_k^i on the score differences o_{c_i}(x) − o_k(x) for k ∈ {1, ..., K} \ {c_i}. The bounds must satisfy:
Condition 4.4(b). l_k^i ≤ o_{c_i}(x) − o_k(x) for all x ∈ B_i and all k ≠ c_i.
Step 5: For each tile, compute an error bound e_i, where e_i = 0 means the network is guaranteed to be correct for this state tile and e_i = 1 means there is no guarantee: e_i = 0 if min_{k ≠ c_i} l_k^i > 0, and e_i = 1 otherwise (Equation 6). We can then compute the global and local error bounds using Equations 4 and 5, as in the regression case.

Theorem 3 (Local error bound for classification). Given that Conditions 4.1, 4.2(b), 4.3, and 4.4(b) are satisfied, ∀x ∈ X̂, e(x) ≤ e_local(x), where e(x) is defined in Equation 2 and e_local(x) is computed from Equations 6 and 5.
Equivalently, when e_local(x) = 0, the network prediction is guaranteed to be correct at x.

Proof. Suppose e_local(x) = 0, so every tile i with x ∈ B_i has e_i = 0. By Equation 6 and Condition 4.4(b), for each such tile o_{c_i}(x) > o_k(x) for all k ≠ c_i; hence the network prediction satisfies f(x) = c_i, and all these tiles share the same single ground truth class c_p = f(x). Conditions 4.1 and 4.2(b) give f̂(x) ⊆ ∪_{i | x ∈ X_i} C_i, and since {i | x ∈ X_i} ⊆ {i | x ∈ B_i} (Condition 4.3), this gives f̂(x) ⊆ {c_p}. Since f̂(x) is not empty, we have f̂(x) = {c_p} = {f(x)}.

Theorem 4 (Global error bound for classification). Given that Conditions 4.1, 4.2(b), 4.3, and 4.4(b) are satisfied, if e_global = 0 then the network prediction is guaranteed to be correct for all x ∈ X̂, where e_global is computed from Equations 6 and 4.

Proof. We aim to prove that if e_global = 0, then ∀x ∈ X̂, f̂(x) = {f(x)}. According to Equation 4, e_global = 0 means e_i = 0 for all i. Then ∀x ∈ X̂, e_local(x) = 0, which by Theorem 3 implies that the prediction is correct at x.

Algorithm 2 formally presents the Tiler algorithm for classification.

Algorithm 2 Tiler (for classification)
Input: S, g, λ, f. Output: e_global, {e_i}, {B_i}
1: procedure TILER(S, g, λ, f)
2:   {S_i} ← DIVIDESTATESPACE(S)  ▷ Step 1
3:   for each S_i do
4:     C_i ← GETGROUNDTRUTHBOUND(S_i, λ)  ▷ Step 2
5:     B_i ← GETBOUNDINGBOX(S_i, g)  ▷ Step 3
6:     {l_k^i} ← SOLVER(f, B_i, c_i)  ▷ Step 4
7:     e_i ← 0 if min_k l_k^i > 0 else 1  ▷ Step 5, Equation 6
8:   end for
9:   e_global ← max({e_i})  ▷ Step 5
10:  return e_global, {e_i}, {B_i}  ▷ {e_i}, {B_i} can be used later to compute e_local(x)
11: end procedure

C POSITION MEASUREMENT FROM ROAD SCENE

C.1 SCENE. For clarity, we describe the scene in a Cartesian coordinate system; treating 1 length unit as roughly 5 centimeters gives a realistic scale. The scene contains a road in the xy-plane, extending along the y-axis. A schematic view of the road down the z-axis is shown in Figure 6a, and the schematic of the camera in Figure 6b. The camera's height above the road is fixed at z_c = 20.0, the focal length is f = 1.0, and the image plane is divided into 32 × 32 pixels with pixel side length d = 0.16.

C.2 CAMERA IMAGING PROCESS. The camera imaging process we use can be viewed as one-step ray tracing: the intensity value of each pixel is determined by shooting a ray from the center of that pixel through the focal point and taking the intensity of the intersection point between the ray and the scene. In this example scene, the intersection points for the top half of the pixels are in the sky (intensity 0.0), and those for the lower half are on the xy-plane. The position of the intersection point is obtained by composing several transformations: P_PC, from pixel coordinates to camera coordinates (the camera coordinate system has its origin at the focal point and axes aligned with the camera's orientation); R_CF, the rotation from camera coordinates to the focal coordinate system (origin also at the focal point, but axes aligned with the world coordinates used in Appendix C.1); P_p, the projection to the road plane through the focal point, in focal coordinates; and T_FW, the translation from focal coordinates to world coordinates. The transformation matrices follow from these definitions, with the variables defined as in Table 1 and δ and θ being the offset and angle of the camera. After the intensity values of the pixels are determined, they are scaled and quantized to a fixed intensity range and used as the final image.

C.3 COMPUTING PIXEL VALUE RANGES. In the road scene example, we need to encapsulate each input tile with an l∞-norm ball for the MILP solver, which requires a method to compute, for each pixel, a range covering all values the pixel can take for images in the tile. This section presents the method used in this paper.
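A minimal sketch of the one-step ray tracing described in C.2, for a single pixel: shoot a ray from the pixel center through the focal point and intersect it with the road plane. The grid size, focal length, pixel size, and camera height follow the text, but the explicit transformation matrices are not reproduced here, so the coordinate conventions below are illustrative assumptions rather than the paper's code:

```python
import numpy as np

def pixel_ray_ground_point(i, j, delta, theta, f=1.0, d=0.16, z_c=20.0, n=32):
    """Return the world-coordinate ground intersection for pixel (i, j), or
    None if the ray points at or above the horizon (the pixel sees the sky)."""
    # Pixel center in camera coordinates (x right, y forward, z up).
    px = (j - (n - 1) / 2.0) * d
    pz = ((n - 1) / 2.0 - i) * d
    ray = np.array([px, f, pz])                  # direction through focal point
    # Rotate by the viewing angle theta about the vertical axis.
    c, s = np.cos(np.radians(theta)), np.sin(np.radians(theta))
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    ray = R @ ray
    if ray[2] >= 0:                              # top-half pixels: sky
        return None
    t = -z_c / ray[2]                            # solve z_c + t * ray_z = 0
    origin = np.array([delta, 0.0, z_c])         # focal point in world coords
    return origin + t * ray
```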
A tile in this example corresponds to images taken with camera position in a local range For pixels in the upper half of the image, their values will always be the intensity of the sky. For each pixel in the lower half of the image, if we trace the intersection point between the projection ray from this pixel and the road plane, it will sweep over a closed region as the camera position varies in the δ-θ cell. The range of possible values for that pixel is then determined by the range of intensities in that region. In this example, there is an efficient way of computing the range of intensities in the region of sweep. Since the intensities on the road plane only varies with x, it suffices to find the span on x for the region. The extrema on x can only be achieved at: 1) the four corners of the δ-θ cell; 2) the points on the two edges δ = δ 1 and δ = δ 2 where θ gives the ray of that pixel perpendicular to the y axis (if that θ is contained in [θ 1, θ 2]). Therefore, by computing the location of these critical points, we can obtain the range of x. We can then obtain the range of intensities covered in the region of sweep, which will give the range of pixel values. The sign is a planer object standing on the ground. It consists of a holding stick and a sign head on top of the stick. The stick is of height 40.0 and width 2.0 (all in terms of the length unit of the scene). The sign head has three possible shapes: square (with side length 10.0), equilateral triangle (with side length 10.0), and circle (with diameter 10.0). The center of the sign head coincides with the middle point of the top of the stick. The LiDAR sensor can vary its position within the working zone (see Figure 4a). Its height is fixed at 40.0 above the ground. The center direction of the LiDAR is always pointing parallel to the ground plane towards the centerline of the sign. The LiDAR sensor emits an array of 32×32 laser beams and measures the distance to the first object along each beam. The directions of the beams are arranged as follows. At distance f = 4.0 away from the center of the sensor, there is a (imaginary) 32×32 grid with cell size 0.1. There is a beam shooting from the center of the sensor through the center of each cell in the grid. The LiDAR model we consider has a maximum measurement range of MAX_RANGE=300.0. If the distance of the object is larger than MAX_RANGE, the reflected signal is too weak to be detected. Therefore, all the distances larger than MAX_RANGE will be measured as MAX_RANGE. The distance measurement contains a Guassian noise n ∼ N (0, σ 2), where σ = 0.001 × MAX_RANGE. So the measured distance for a beam is given by where d 0 is the actual distance and d is the measured distance. To compute the bounding boxes in Tiler, we need to compute a lower bound and an upper bound on the measured distance for each beam as the sensor position varies within a tile. We first determine whether the intersection point p between the beam and the scene is 1) always on the sign, 2) always on the (ground/sky), or 3) covers both cases, when the sensor position varies within the In most situations, the distances of p at the list of critical positions (we refer it as the list of critical distances) contain the maximum and minimum distances of p in the tile. There is one exception: at d = d 1, as θ varies in [θ 1, θ 2], if the intersection point shifts from sign to (or vice versa), then the minimum distance of p can occur at (d 1, θ) where θ is not equal to θ 1 or θ 2. 
To handle this case, if at the previous step we find that p do occur on the sign plane in the tile, then we add the distance of the intersection point between the beam and the sign plane at position (d 1, θ 1) (or (d 1, θ 2), whichever gives a smaller distance) to the list of critical distances. Notice that this intersection point is not necessarily on the sign. In this way, the min and max among the critical distances are guaranteed to bound the distance range for the beam as the sensor position varies within the tile. After this, the bounds can be extended according to the noise scale to get the bounding boxes. This section presents the additional details of the training of the neural networks in the case studies. We use Adam optimizer with learning rate 0.01. We use early stopping based on the loss on the validation set: we terminate training if the validation performance does not improve in 5 consecutive epochs. We take the model from the epoch that has the lowest loss on the validation set.
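A compact sketch of the per-beam bounding step described above, assuming the list of critical distances for a tile has already been computed:

```python
def beam_distance_bounds(critical_distances, sigma, k=5.0):
    """Bound one beam's measurement over a tile: take the min/max over the
    critical sensor positions, then widen by k*sigma for the Gaussian noise
    (k = 5 gives the >0.999 joint guarantee over all 32x32 beams)."""
    lo, hi = min(critical_distances), max(critical_distances)
    return lo - k * sigma, hi + k * sigma
```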
We present the first verification that a neural network for perception tasks produces a correct output within a specified tolerance for every input of interest.
451
scitldr
Deep generative models have achieved remarkable progress in recent years. Despite this progress, quantitative evaluation and comparison of generative models remains an important challenge. One of the most popular metrics for evaluating generative models is the log-likelihood. While direct computation of the log-likelihood can be intractable, it has recently been shown that the log-likelihood of some of the most interesting generative models, such as variational autoencoders (VAEs) or generative adversarial networks (GANs), can be efficiently estimated using annealed importance sampling (AIS). In this work, we argue that the log-likelihood metric by itself cannot represent all the different performance characteristics of generative models, and propose to use rate distortion curves to evaluate and compare deep generative models. We show that we can approximate the entire rate distortion curve using a single run of AIS, for roughly the same computational cost as a single log-likelihood estimate. We evaluate lossy compression rates of different deep generative models, such as VAEs, GANs (and their variants), and adversarial autoencoders (AAEs), on MNIST and CIFAR-10, and arrive at a number of insights not obtainable from log-likelihoods alone. Generative models of images represent one of the most exciting areas of rapid progress in AI. However, evaluating the performance of generative models remains a significant challenge. Many of the most successful models, most notably generative adversarial networks (GANs), are implicit generative models for which computation of log-likelihoods is intractable or even undefined. Evaluation typically focuses on metrics such as the Inception score or the Fréchet Inception Distance (FID), which do not have nearly the same degree of theoretical underpinning as likelihood-based metrics. Log-likelihoods are one of the most important measures of generative models; their utility is evidenced by the fact that likelihoods (or equivalent metrics such as perplexity or bits-per-dimension) are reported in nearly all cases where it is convenient to compute them. Unfortunately, computing log-likelihoods for implicit generative models remains a difficult problem. Furthermore, log-likelihoods have important conceptual limitations. For continuous inputs in the image domain, the metric is often dominated by the fine-grained distribution over pixels rather than the high-level structure. For models with low-dimensional support, one needs to assign a (rather arbitrary) observation model, such as isotropic Gaussian noise. Lossless compression metrics for GANs often give absurdly large bits-per-dimension values (e.g. 10^14), which fail to reflect the true performance of the model. See prior work for more discussion of the limitations of likelihood-based evaluation. Typically, one is not interested in describing the pixels of an image directly, and it would be sufficient to generate images close to the true data distribution in some metric such as Euclidean distance. For this reason, there has been much interest in Wasserstein distance as a criterion for generative models, since it exploits precisely this metric structure. However, Wasserstein distance remains difficult to approximate, and hence it is not routinely used to evaluate generative models. We aim to achieve the best of both worlds by measuring lossy compression rates of deep generative models.
In particular, we aim to estimate the rate distortion function, which measures the number of bits required to match a distribution to within a given distortion. Like Wasserstein distance, it can exploit the metric structure of the observation space, but like log-likelihoods, it connects to the rich literature of probabilistic and information theoretic analysis of generative models. By focusing on different parts of the rate distortion curve, one can achieve different tradeoffs between the description length and the fidelity of reconstruction -thereby fixing the problem whereby lossless compression focuses on the details at the expense of high-level structure. It has the further advantage that the distortion metric need not have a probabilistic interpretation; hence, one is free to use more perceptually valid distortion metrics such as structural similarity (SSIM) or distances between hidden representations of a convolutional network . Algorithmically, computing rate distortion functions raises similar challenges to estimating loglikelihoods. We show that the rate distortion curve can be computed by finding the normalizing constants of a family of unnormalized probability distributions over the noise variables z. Interestingly, when the distortion metric is squared error, these distributions correspond to the posterior distributions of z for Gaussian observation models with different variances; hence, the rate distortion analysis generalizes the evaluation of log-likelihoods with Gaussian observation models. Annealed Importance Sampling (AIS) is currently the most effective general-purpose method for estimating log-likelihoods of implicit generative models, and was used by to compare log-likelihoods of a variety of implicit generative models. The algorithm is based on gradually interpolating between a tractable initial distribution and an intractable target distribution. We show that when AIS is used to estimate log-likelihoods under a Gaussian observation model, the sequence of intermediate distributions corresponds to precisely the distributions needed to compute the rate distortion curve. Since AIS maintains a stochastic lower bound on the normalizing constants of these distributions, it automatically produces an upper bound on the entire rate distortion curve. Furthermore, the tightness of the bound can be validated on simulated data using bidirectional Monte Carlo (BDMC) . Hence, we can approximate the entire rate distortion curve for roughly the same computational cost as a single log-likelihood estimate. We use our rate distortion approximations to study a variety of variational autoencoders (VAEs) , GANs and adversarial autoencoders (AAE) , and arrive at a number of insights not obtainable from log-likelihoods alone. For instance, we observe that VAEs and GANs have different rate distortion tradeoffs: While VAEs with larger code size can generally achieve better lossless compression rates, their performances drop at lossy compression in the low-rate regime. Conversely, expanding the capacity of GANs appears to bring substantial reductions in distortion at the high-rate regime without any corresponding deterioration in quality in the low-rate regime. We find that increasing the capacity of GANs by increasing the code size (width) has a qualitatively different effect on the rate distortion tradeoffs than increasing the depth. 
We also find that different GAN variants with the same code size achieve nearly identical RD curves, and that the code size dominates the performance differences between GANs.

Annealed importance sampling (AIS) is a Monte Carlo algorithm based on constructing a sequence of intermediate distributions p_k(z), k ∈ {0, ..., n}, between a tractable initial distribution p_0(z) and the intractable target distribution p_n(z). At the k-th step (0 ≤ k ≤ n), the forward distribution q_f and the unnormalized backward distribution q̃_b are

q_f(z_0, ..., z_n) = p_0(z_0) T_1(z_1|z_0) ⋯ T_n(z_n|z_{n−1}),
q̃_b(z_0, ..., z_n) = p̃_n(z_n) T̃_n(z_{n−1}|z_n) ⋯ T̃_1(z_0|z_1),

where T_k is an MCMC kernel that leaves p_k(z) invariant and T̃_k is its reverse kernel. We run M independent AIS chains, numbered i = 1, ..., M; let z_k^i be the k-th state of the i-th chain. The importance weight of the i-th chain is w^i = ∏_{k=1}^{n} p̃_k(z_{k−1}^i) / p̃_{k−1}(z_{k−1}^i), and when p_0 is normalized the average of the weights gives an unbiased estimate of the partition function of the target. Bidirectional Monte Carlo. The log partition function estimate log Ẑ is a stochastic lower bound on log Z (Jensen's inequality). As a result, using the forward AIS distribution as the proposal distribution results in a lower bound on the data log-likelihood. By running AIS in reverse, however, we obtain an upper bound on log Z; but running AIS in reverse requires exact samples from the true posterior, which is possible only on simulated data. The combination of the AIS lower and upper bounds on the log partition function is called bidirectional Monte Carlo (BDMC), and the gap between these bounds is called the BDMC gap. We note that AIS combined with BDMC has been used to estimate log-likelihoods for implicit generative models. In this work, we validate our AIS experiments by using the BDMC gap to measure the accuracy of our partition function estimators.

Let x be a random variable drawn from the data distribution p_d(x). Shannon's fundamental compression theorem states that we can compress this random variable losslessly at the rate H(x). But if we allow lossy compression, we can compress x at a rate R ≤ H(x) using a code z, and have a lossy reconstruction x̂ = f(z) with distortion D, given a distortion measure d(x, x̂) = d(x, f(z)). Rate distortion theory quantifies the trade-off between the lossy compression rate R and the distortion D. The rate distortion function R(D) is defined as the minimum number of bits per sample required to achieve lossy compression of the data such that the average distortion is less than D. Shannon's rate distortion theorem states that R(D) equals the minimum of the following optimization problem:

min_{q(z|x)} I(z; x)  subject to  E_{q(z,x)}[d(x, f(z))] ≤ D,    (Eq. 5)

where the optimization is over the channel conditional distribution q(z|x). Given the data distribution p_d(x), the channel conditional q(z|x) induces the joint distribution q(z, x) = p_d(x) q(z|x), which defines the mutual information I(z; x); q(z) is the marginal over z of this joint distribution and is called the output marginal distribution. We can rewrite the optimization of Eq. 5 using the method of Lagrange multipliers as

min_{q(z|x)} I(z; x) + β E_{q(z,x)}[d(x, f(z))].

The goal of generative modeling is to learn a model distribution p(x) to approximate the data distribution p_d(x). Implicit generative models define the model distribution p(x) using a latent variable z with a fixed prior distribution p(z), such as a Gaussian, and a decoder or generator network that computes x̂ = f(z). In some cases (e.g. VAEs, AAEs), the generator explicitly parameterizes a conditional distribution p(x|z), such as a Gaussian observation model N(x; f(z), σ²I). But in implicit models such as GANs, the generator directly outputs x̂ = f(z).
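For concreteness, here is a toy AIS estimator of a 1-D log partition function along the geometric path, using a random-walk Metropolis kernel in place of the HMC transitions used in the paper; the example target and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def ais_log_z(log_p0, log_ptarget, n_steps=200, n_chains=100, step=0.5):
    """Toy AIS estimate of log Z for an unnormalized 1-D target, with the
    geometric path log p_b = (1-b) log p0 + b log ptarget and a Metropolis
    transition at each step. log_p0 must be a normalized density."""
    betas = np.linspace(0.0, 1.0, n_steps + 1)
    z = rng.standard_normal(n_chains)            # exact samples from p0 = N(0,1)
    log_w = np.zeros(n_chains)
    for b_prev, b in zip(betas[:-1], betas[1:]):
        log_w += (b - b_prev) * (log_ptarget(z) - log_p0(z))
        def log_pb(x): return (1 - b) * log_p0(x) + b * log_ptarget(x)
        prop = z + step * rng.standard_normal(n_chains)
        accept = np.log(rng.random(n_chains)) < log_pb(prop) - log_pb(z)
        z = np.where(accept, prop, z)
    # log of the weight average: a stochastic lower bound on log Z
    return np.logaddexp.reduce(log_w) - np.log(n_chains)

# Target: unnormalized N(3, 2^2); true log Z = log(2*sqrt(2*pi)) ~ 1.612
print(ais_log_z(lambda z: -0.5 * z**2 - 0.5 * np.log(2 * np.pi),
                lambda z: -0.5 * ((z - 3) / 2.0) ** 2))
```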
In order to treat VAEs and GANs under a consistent framework, we ignore the Gaussian observation model of VAEs (thereby treating the VAE decoder as an implicit model) and use the squared error distortion d(x, f(z)) = ‖x − f(z)‖². However, we note that it is also possible to assume a Gaussian observation model with a fixed σ² for GANs and use the Gaussian negative log-likelihood (NLL) as the distortion measure for both VAEs and GANs: d(x, f(z)) = − log N(x; f(z), σ²I). This is equivalent to squared error distortion up to a linear transformation. In this section, we describe the rate-prior distortion function as a variational upper bound on the true rate distortion function. We must modify the standard rate distortion formalism slightly in order to match the goals of generative model evaluation. Specifically, we are interested in evaluating lossy compression with coding schemes corresponding to particular trained generative models, including the fixed prior p(z). For models such as VAEs, KL(q(z|x) ‖ p(z)) is standardly interpreted as the description length of z. Hence, we adjust the rate distortion formalism to use E_{p_d(x)} KL(q(z|x) ‖ p(z)) in place of I(x; z), and refer to this as the rate-prior objective. The rate-prior objective upper bounds the standard rate:

E_{p_d(x)} KL(q(z|x) ‖ p(z)) = I(x; z) + KL(q(z) ‖ p(z)) ≥ I(x; z).    (Eq. 7)

In the context of variational inference, q(z|x) is the posterior, q(z) = ∫ p_d(x) q(z|x) dx is the aggregated posterior, and p(z) is the prior. In the context of rate distortion theory, q(z|x) is the channel conditional, q(z) is the output marginal, and p(z) is the variational output marginal distribution. The inequality is tight when p(z) = q(z), i.e. when the variational output marginal (prior) equals the output marginal (aggregated posterior). We note that the upper bound of Eq. 7 has been used in other algorithms, such as the Blahut-Arimoto algorithm and the variational information bottleneck algorithm. Analogously to the rate distortion function, we define the rate-prior distortion function R_p(D) as the minimum value of the rate-prior objective for a given distortion D. More precisely,

R_p(D) = min_{q(z|x)} E_{p_d(x)} KL(q(z|x) ‖ p(z))  subject to  E_{q(z,x)}[d(x, f(z))] ≤ D.    (Eq. 8)

We can rewrite this optimization using the method of Lagrange multipliers as

min_{q(z|x)} E_{p_d(x)} KL(q(z|x) ‖ p(z)) + β E_{q(z,x)}[d(x, f(z))].    (Eq. 9)

Conveniently, the Lagrangian decomposes into independent optimization problems for each x, allowing us to treat this as an optimization problem over q(z|x) for fixed x. We can thus compute the rate distortion curve by sweeping over β rather than over D. We now describe some properties of the rate-prior distortion function R_p(D), which are straightforward analogues of well-known properties of the rate distortion function.

Proposition 1. R_p(D) has the following properties: (a) R_p(D) is a non-increasing and convex function of D. (b) R_p(D) upper-bounds R(D) for any prior p(z). (c) The rate-prior distortion optimization of Eq. 9 has a unique global optimum, which can be expressed as q*_β(z|x) = p(z) exp(−β d(x, f(z))) / Z_β(x), where Z_β(x) is the normalizing constant. Proof. The proofs are provided in Appendix C.1.

Proposition 1b states that for any prior p(z), R_p(D) is a variational upper bound on R(D). More specifically, R(D) = min_{p(z)} R_p(D), which implies that for any given β there exists a prior p*_β(z) for which the variational gap between the rate distortion and rate-prior distortion functions at β is zero. Fig. 1a shows a geometric illustration of Prop. 1b for three values of β ∈ {0.25, 1, 4}: all R_p(D) curves are upper bounds on R(D), and for any given β, R_{p*_β}(D) is tangent both to R_p(D) and to the line with slope β passing through the optimal solution.
If the decoder outputs a probability distribution (as in a VAE), we can define the distortion metric to coincide with the negative log-likelihood (NLL): d(x, f(z)) = − log p(x|z). We now describe some properties of rate-prior distortion functions with NLL distortions.

Proposition 2. The rate-prior distortion function R_p(D) with NLL distortion − log p(x|z) has the following properties: (b) the global optimum of the rate-prior distortion optimization (Eq. 9) can be expressed as q*_β(z|x) ∝ p(z) p(x|z)^β; (c) at β = 1, the negative sum of the rate-prior and the distortion is the true log-likelihood: −(R + D) = E_{p_d(x)}[log p(x)]. Proof. The proofs are provided in Appendix C.2.

At β = 1, let L* and L_p be the negative sums of rate and distortion on the rate distortion and rate-prior distortion curves, respectively (shown in Fig. 1b). From Prop. 2c we know that L_p is the true log-likelihood of the generative model, and from Prop. 1b we can conclude that L* = max_{p(z)} L_p. This reveals an important relationship between rate distortion theory and generative modeling: for a given generative model with a fixed conditional p(x|z), the best log-likelihood L_p that can be achieved by optimizing the prior p(z) is L*, which can be found by solving the rate distortion problem. Furthermore, the corresponding optimal prior p*(z) is the output marginal of the optimal channel conditional of the rate distortion problem at β = 1. Fig. 1b shows the rate-prior distortion function R_{p*}(D) corresponding to p*(z). In a "good" generative model, where the model distribution is close to the data distribution, the negative log-likelihood −L_p is close to the data entropy H_d, and the variational gap between R_p(D) and R(D) is tight.

In the previous section, we introduced the rate-prior distortion function R_p(D) and showed that it upper-bounds the true rate distortion function R(D). However, evaluating R_p(D) is also intractable. In this section, we show how to upper-bound R_p(D) using a single run of the AIS algorithm.

AIS Chain. We fix a temperature schedule 0 = β_0 < β_1 < ... < β_n = ∞. For the k-th intermediate distribution, we use the optimal channel conditional q_k(z|x) and partition function Z_k(x) corresponding to points along R_p(D), as derived in Prop. 1c:

q_k(z|x) = p(z) exp(−β_k d(x, f(z))) / Z_k(x),  where  Z_k(x) = ∫ p(z) exp(−β_k d(x, f(z))) dz.

Conveniently, this choice coincides with geometric averages, the typical choice of intermediate distributions for AIS; i.e., the k-th step happens to be the optimal solution for β_k. This chain is shown in Fig. 2. For the transition operator, we use Hamiltonian Monte Carlo. At the k-th step, the rate-prior R_k(x) and the distortion D_k(x) are

D_k(x) = E_{q_k(z|x)}[d(x, f(z))],  R_k(x) = KL(q_k(z|x) ‖ p(z)) = −β_k D_k(x) − log Z_k(x).

AIS Rate-Prior Distortion Curve. For each data point x, we run M independent AIS chains, numbered i = 1, ..., M, in the forward direction. At the k-th state of the i-th chain, let z_k^i be the state, w_k^i the AIS importance weight, and w̃_k^i the normalized AIS importance weight. We denote the AIS distribution at the k-th step, q_k^{AIS}(z|x), as the distribution obtained by first sampling all z_k^i and then re-sampling them according to their normalized importance weights w̃_k^i (see Section 2.1 and Appendix C.4 for more details). Using q_k^{AIS}(z|x), we define the AIS distortion D_k^{AIS}(x) = E_{q_k^{AIS}(z|x)}[d(x, f(z))] and the AIS rate-prior R_k^{AIS}(x) = KL(q_k^{AIS}(z|x) ‖ p(z)), and define the AIS rate-prior distortion curve R^{AIS}(D) as the curve traced by the pairs (D_k^{AIS}, R_k^{AIS}).

Proposition 3. The AIS rate-prior distortion curve upper-bounds the rate-prior distortion function. Proof. The proof is provided in Appendix C.4.
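Given the AIS importance weights and per-sample distortions at step k, one point of the estimated curve (formalized next) can be computed as sketched below, using the identity KL(q_k ‖ p) = −β_k D_k − log Z_k that follows from the form of q_k above; variable names are illustrative:

```python
import numpy as np

def rd_point(log_w, distortions, beta):
    """Sketch of one point on the estimated AIS rate-prior distortion curve
    at inverse temperature beta, from M chains' log importance weights and
    the distortions d(x, f(z_i)) of their current states."""
    m = len(log_w)
    log_norm = np.logaddexp.reduce(log_w)
    log_z_hat = log_norm - np.log(m)                 # AIS partition estimate
    w_tilde = np.exp(log_w - log_norm)               # normalized weights
    d_hat = float(np.dot(w_tilde, distortions))      # weighted distortion
    r_hat = -log_z_hat - beta * d_hat                # estimated rate-prior
    return r_hat, d_hat
```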
Estimated AIS Rate-Prior Distortion Curve. Although the AIS distribution can be easily sampled from, its density is intractable to evaluate. As a result, evaluating R^AIS_k(x) is also intractable. Instead, we estimate the distortion D̂^AIS_k(x) by the importance-weighted average of the distortion over the AIS chains, and estimate the log partition function by log Ẑ^AIS_k(x) = log((1/M) Σ_i w^i_k). Having found the estimates D̂^AIS_k(x) and log Ẑ^AIS_k(x), we propose to estimate the rate as R̂^AIS_k(x) = −β_k D̂^AIS_k(x) − log Ẑ^AIS_k(x). We define the estimated AIS rate-prior distortion curve R̂^AIS_p(D) (shown in Fig. 1b) as the RD curve obtained by tracing pairs of rate and distortion estimates. Proposition 4. The estimated AIS rate-prior distortion curve upper bounds the AIS rate-prior distortion curve in expectation. Proof. The proof is provided in Appendix C.4. In summary, from Prop. 1, Prop. 3 and Prop. 4, we can conclude that the estimated AIS rate-prior distortion curve upper bounds the true rate distortion curve in expectation (shown in Fig. 1b): R(D) ≤ R_p(D) ≤ R^AIS_p(D) ≤ E[R̂^AIS_p(D)]. In all the experiments, we plot the estimated AIS rate-prior distortion function R̂^AIS_p(D). Accuracy of AIS Estimates. While the above discussion focuses on obtaining upper bounds, we note that AIS is one of the most accurate general-purpose methods for estimating partition functions, and we therefore believe our AIS upper bounds to be fairly tight in practice. In theory, for a large number of intermediate distributions, the AIS variance is proportional to 1/(MK), where M is the number of AIS chains and K is the number of intermediate distributions. For the main experiments of our paper, we evaluate the tightness of the AIS estimate by computing the BDMC gap, and show that in practice our upper bounds are tight (Appendix D). The Rate Distortion Tradeoff in the AIS Chain. Different values of β correspond to different tradeoffs between the compression rate and the distortion in a given generative model. β = 0 corresponds to the case where q_0(z|x) = p(z). In this case, the compression rate is zero and the distortion is large: in order to reconstruct x, we simply sample from the prior and generate a random x̂ that is completely independent of x, so the distortion is the average distortion between x and samples drawn from the model. In the case of probabilistic decoders with NLL distortion, another interesting intermediate distribution is β = 1, where the optimal channel conditional is the true posterior of the generative model, q*(z|x) = p(z|x). In this case, as shown in Prop. 2c, the summation of the rate-prior and distortion terms is the negative of the true log-likelihood of the generative model. As β_n → ∞, the network cares more about the distortion and less about the compression rate. In this case, the optimal channel conditional is q_n(z|x) = δ(z − z_ML(x)), where z_ML(x) minimizes the distortion d(x, f(z)). In other words, since the network only cares about the distortion, the optimal channel conditional puts all its mass on z_ML(x). However, the network would require infinitely many bits to precisely represent this delta function, and thus the rate goes to infinity. Evaluation of Implicit Generative Models. Quantitative evaluation of the performance of GANs has been a challenge for the field since the beginning. Many heuristic measures have been proposed, such as the Inception Score (IS) and the Fréchet Inception Distance (FID). One of the main drawbacks of the IS or FID is that a model that simply memorizes the training dataset would obtain a near-optimal score. Another drawback of these methods is that they use pretrained ImageNet classifier weights, which makes them sensitive to the weights of the classifier, and less applicable to other domains and datasets.
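A minimal sketch of the two estimators just described, for a single data point x at AIS step k. It assumes you already have the per-chain states z[i] and log importance weights logw[i] from M forward chains; the function names, the distortion callable d, and the inverse temperature beta_k are our own stand-ins, not the paper's code.

```python
import numpy as np

def ais_rate_distortion(x, z, logw, beta_k, d):
    """Estimate (rate, distortion) at AIS step k from M forward chains.

    z    : list of latent samples z^i_k, one per chain
    logw : log importance weights log w^i_k, one per chain
    d    : distortion function d(x, z_i), e.g. squared error after decoding
    """
    logw = np.asarray(logw, dtype=float)
    M = len(logw)
    # log Z-hat via the standard AIS estimator: log((1/M) * sum_i w_i)
    log_Z_hat = np.logaddexp.reduce(logw) - np.log(M)
    # normalized importance weights w-tilde (stable in log space)
    w_tilde = np.exp(logw - np.logaddexp.reduce(logw))
    # D-hat: importance-weighted average distortion
    D_hat = float(np.sum(w_tilde * np.array([d(x, zi) for zi in z])))
    # R-hat: plug-in rate estimate, using R = -beta * D - log Z for the
    # optimal channel conditional of Prop. 1c
    R_hat = -beta_k * D_hat - log_Z_hat
    return R_hat, D_hat
```

Tracing (R_hat, D_hat) over all steps k of a single AIS run yields the entire estimated curve at once, which is why the evaluation costs roughly the same as a single log-likelihood estimate.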
Another evaluation method that is sometimes used is the Parzen window estimate, which can be shown to be an instance of AIS with zero intermediate distributions, and thus has a very large variance. Another proposed evaluation method for GANs measures the ability of the generator network to reconstruct samples from the data distribution. This metric is similar to the distortion obtained in the high-rate regime of our rate distortion framework, when β → ∞. Another related work is GILBO, which, similar to our framework, does not require the generative model to have a tractable posterior and thus allows direct comparison of VAEs and GANs. However, GILBO can only evaluate the performance of the generative model on simulated data, and not on the true data distribution. Rate Distortion Theory and Generative Models. Perhaps the closest work to ours is "Fixing a Broken ELBO", which plots rate-prior distortion curves for VAEs. Our work differs from it in two key aspects. First, in that work the rate-prior distortion function is evaluated by fixing the architecture of the neural network, and learning the distortion measure d(x, f(z)) in addition to learning q(z|x); whereas in our definition of rate distortion, the distortion measure is fixed and given by a trained generative model. As a result, we plot the rate-prior distortion curve for a particular generative model, rather than for a particular architecture. The second key difference is that, consistent with Shannon's rate distortion theorem, we find the optimal channel conditional q*(z|x) by using AIS, while in that work q(z|x) is a variational distribution that is restricted to a variational family.

Figure 3: The rate distortion curves of GANs. (a) Deep and shallow GANs on MNIST; (b) comparing GANs on CIFAR-10.

See Appendix A for a discussion of related works on practical compression schemes, distortion-perception tradeoffs, and precision-recall tradeoffs. In this section, we use our rate distortion approximations to answer the following questions: How do different generative models such as VAEs, GANs and AAEs perform at different lossy compression rates? What insights can we obtain from the rate distortion curves about different characteristics of generative models? What is the effect of the code size (width), the depth of the network, or the learning algorithm on the rate distortion tradeoffs? Rate Distortion Curves of GANs. Fig. 3 shows rate distortion curves for GANs trained on MNIST and CIFAR-10. We varied the dimension of the noise vector z, as well as the depth of the decoder. For the GAN experiments on MNIST (Fig. 3a), the label "deep" corresponds to three hidden layers of size 1024, and the label "shallow" corresponds to one hidden layer of size 1024. We trained shallow and deep GANs with Gradient Penalty (GAN-GP) with code sizes d ∈ {2, 5, 10, 100} on MNIST. For the GAN experiments on CIFAR-10 (Fig. 3b), we trained the DCGAN, GAN with Gradient Penalty (GP), SN-GAN, and BRE-GAN, with code sizes d ∈ {2, 10, 100}. In both the MNIST and CIFAR experiments, we observe that in general increasing the code size has the effect of extending the curve leftwards. This is expected, since the high-rate regime is effectively measuring reconstruction ability, and additional dimensions in z improve the reconstruction. We can also observe from Fig. 3a that increasing the depth pushes the curves down and to the left. In other words, the distortion in both the high-rate and mid-rate regimes improves.
In these regimes, increasing the depth increases the capacity of the network, which enables the network to make better use of the information in the code space. In the low-rate regime, however, increasing the depth, similar to increasing the latent size, does not improve the distortion.

Figure 4: RD curves of VAEs, GANs, and AAEs, comparing the three model families.

Rate Distortion Curves of VAEs. Fig. 4 compares VAEs, AAEs and GP-GANs with code sizes d ∈ {2, 10, 100} and the same decoder architecture on the MNIST dataset. In general, we can see that in the mid-rate to high-rate regimes, VAEs achieve better distortions than GANs with the same architecture. This is expected, as the VAE is trained with the ELBO objective, which encourages good reconstructions (in the case of a factorized Gaussian decoder). We can see from Fig. 4 that increasing the capacity reduces the distortion in the high-rate regime, at the expense of increasing the distortion in the low-rate regime (or equivalently, increasing the rate required to adequately approximate the data). We believe the performance drop of VAEs in the low-rate regime is symptomatic of the "holes problem" in the code space of VAEs with large code size: because these VAEs allocate a large fraction of their latent spaces to garbage images, it requires many bits to get close to the image manifold. Interestingly, this trade-off could also help explain the well-known problem of blurry samples from VAEs: in order to avoid garbage samples (corresponding to large distortion in the low-rate regime), one needs to reduce the capacity, thereby increasing the distortion in the high-rate regime. By contrast, GANs do not suffer from this tradeoff, and one can train high-capacity GANs without sacrificing performance in the low-rate regime. Rate Distortion Curves of AAEs. The AAE was introduced to address the holes problem of VAEs, by directly matching the aggregated posterior to the prior in addition to optimizing the reconstruction cost. Fig. 4 shows the RD curves of AAEs. In comparison to GANs, AAEs match the low-rate performance of GANs, but achieve better high-rate performance. This is expected, as AAEs directly optimize the reconstruction cost as part of their objective. In comparison to VAEs, AAEs perform slightly worse in the high-rate regime, which is expected as the adversarial regularization of AAEs is stronger than the KL regularization of VAEs. But AAEs perform slightly better in the low-rate regime, as they can alleviate the holes problem to some extent. Since log-likelihoods constitute only a scalar value, they are unable to distinguish different aspects of a generative model which could be good or bad, such as the prior or the observation model. Here, we show that two manipulations which damage a trained VAE in different ways result in very different behavior of the RD curves. Our first manipulation, originally proposed in prior work, is to use a mixture of the VAE's density and another distribution concentrated away from the data distribution. As has been pointed out previously, this results in a model which achieves high log-likelihood while generating poor samples. Specifically, after training the VAE10 on MNIST, we "damage" its prior p(z) = N(0, I) by altering it to a mixture prior (1 − α)p(z) + αq(z), where q(z) = N(0, 10I) is a "poor" prior chosen to be far away from the original prior p(z), and α is close to 1. This process results in a "poor" generative model that generates garbage samples most of the time (more precisely, with probability α).
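The "at most 4.6 nats" claim made next can be reconstructed with a one-line bound. This is our own restatement, assuming α = 0.99 (the text does not state the value of α explicitly; −log 0.01 ≈ 4.6 nats):

```latex
\log q(x)
  \;=\; \log\!\Big( (1-\alpha)\, p(x) \;+\; \alpha \!\int\! q(z)\, p(x|z)\, dz \Big)
  \;\ge\; \log\big( (1-\alpha)\, p(x) \big)
  \;=\; \log p(x) + \log(1-\alpha)
  \;\approx\; \log p(x) - 4.6 .
```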
Suppose p(x) and q(x) are the likelihoods of the good and the poor generative models. It is straightforward to see that log q(x) is at most 4.6 nats worse than log p(x), and thus log-likelihood fails to tell these models apart. Fig. 5a plots the rate distortion curves of this model for different values of α. We can see that the high-rate and log-likelihood performance of the good and poor generative models are almost identical, whereas in the low-rate regime the RD curves show a significant drop in performance and successfully detect this failure mode of log-likelihood.

Figure 6: The RD curves of GANs, VAEs and AAEs with MSE distortion on the deep feature space. (a) GANs; (b) VAEs, GANs and AAEs. The behavior is qualitatively similar to the results for MSE on images (see Fig. 3 and Fig. 4), suggesting that the RD analysis is not particularly sensitive to the particular choice of metric.

Our second manipulation is to damage the decoder by adding a Gaussian blur kernel after the output layer. Fig. 5b shows the rate distortion curves for different standard deviations of the Gaussian kernel. We can see that, in contrast to the mixture prior experiment, the high-rate performance of the VAE drops due to the inability of the decoder to output sharp images. However, we can also see an improvement in the low-rate performance of the VAE. This is because (similarly to log-likelihoods with Gaussian observation models) the data distribution does not necessarily achieve the minimal distortion, and in fact, in the extremely low-rate regime, blurring appears to help by reducing the average Euclidean distance between low-rate reconstructions and the input images. Hence, our two manipulations result in very different effects on the RD curves, suggesting that these curves provide a much richer picture of the performance of generative models compared to scalar log-likelihoods. The experiments discussed above all used pixelwise MSE as the distortion metric. However, for natural images, one could use more perceptually valid distortion metrics such as SSIM. Fig. 6 shows the RD curves of GANs, VAEs, and AAEs on the MNIST dataset, using MSE on the deep features of a CNN as the distortion metric. In all cases, the qualitative behavior of the RD curves with this distortion metric closely matches the qualitative behavior for pixelwise MSE. We can see from Fig. 6a that, similar to the RD curves with MSE distortion, GANs with different depths and code sizes have the same low-rate performance, but as the model gets deeper and wider, the RD curves are pushed down and extended to the left. Similarly, we can see from Fig. 6b that, compared to GANs and AAEs, VAEs generally have better high-rate performance, but worse low-rate performance. The fact that the qualitative behavior of the RD curves with this metric closely matches that of pixelwise MSE indicates that the results of our analysis are not overly sensitive to the particular choice of distortion metric. In this work, we studied rate distortion approximations for evaluating different generative models such as VAEs, GANs and AAEs. We showed that rate distortion curves provide more insights about the model than the log-likelihood alone, while requiring roughly the same computational cost. For instance, we observed that while VAEs with larger code size can generally achieve better lossless compression rates, their performance drops at lossy compression in the low-rate regime.
Conversely, expanding the capacity of GANs appears to bring substantial reductions in distortion in the high-rate regime without any corresponding deterioration in quality in the low-rate regime. This may help explain the success of large GAN architectures. We also discovered that increasing the capacity of GANs by increasing the code size (width) has a very different effect than increasing the depth: the former extends the rate distortion curves leftwards, while the latter pushes the curves down. We also found that different GAN variants with the same code size have almost identical rate distortion curves, and that the code size dominates the algorithmic differences of GANs. Overall, lossy compression yields a richer and more complete picture of the distribution-modeling performance of generative models. The ability to quantitatively measure performance tradeoffs should lead to algorithmic insights which can improve these models. Practical Compression Schemes. We have justified our use of compression terminology in terms of Shannon's fundamental result, which implies that there exists a rate distortion code for any rate distortion pair that is achievable according to the rate distortion function. Interestingly, for lossless compression with generative models, there is a practical compression scheme which nearly achieves the theoretical rate (i.e., the negative ELBO): bits-back encoding. The basic scheme was proposed early on and later implemented, and practical versions for modern deep generative models have since been developed. We do not currently know of an analogous practical scheme for lossy compression with deep generative models. Other researchers have developed practical coding schemes achieving variational rate distortion bounds for particular latent variable models, which exploited the factorial structure of the variational posterior (Ballé et al., 2018). These methods are not directly applicable in our setting, since we don't assume an explicit encoder network, and our variational posteriors lack a convenient factorized form. We don't know whether our variational approximation will lead to a practical lossy compression scheme, but the successes of other variational methods give us hope. Relationship with the Rate-Distortion-Perception Tradeoff. Our work is related to prior work which incorporates a perceptual quality loss function in the rate distortion framework and characterizes the triple tradeoff between rate, distortion, and perception. More specifically, that work defines the perceptual loss using a divergence between the marginal reconstruction distribution and the data distribution. This perceptual loss is then incorporated as an additional constraint in the rate distortion framework to encourage the reconstruction distribution to perceptually look like the data distribution. It is shown that as the perceptual constraint becomes tighter, the rate distortion function elevates more. In our rate-prior distortion framework, we are also enforcing a perceptual constraint on the reconstruction distribution by incorporating the regularization term KL(q(z) || p(z)) in the rate distortion objective, which encourages matching the aggregated posterior to the prior.
More precisely, let us define the reconstruction distribution r(x) as the distribution obtained by passing the data distribution through the encoder and then the decoder: r(x) = ∫ q(z) p(x|z) dz. It can be shown that the regularization term KL(q(z) || p(z)) upper bounds KL(r(x) || p(x)). In other words, in the rate-prior distortion optimization, for a given distortion constraint, we are not only interested in minimizing the rate I(x; z), but also, at the same time, in preserving the perceptual quality of the reconstruction distribution by matching it to the model distribution. In the low-rate regime, when the model is allowed to have large distortions, the model obtains small rates and at the same time preserves the perceptual distribution of the reconstruction samples. As the distortion constraint becomes tighter, the model starts to trade off the rate I(x; z) and the perceptual quality KL(q(z) || p(z)), which results in an elevated rate distortion curve. Relationship with the Precision-Recall Tradeoff. One of the main drawbacks of the IS or FID is that they provide only a single scalar value that cannot distinguish mode-dropping behavior from mode-inventing behavior (generating outlier or garbage samples) in generative models. In order to address this issue, prior work proposed studying the precision-recall tradeoff for evaluating generative models. In this context, high precision implies that the samples from the model distribution are close to the data distribution, and high recall implies that the generative model can reconstruct (or generate a sample similar to) any sample from the data distribution. The precision-recall curves enable us to identify both the mode-dropping and the mode-inventing behavior of the generative model: mode dropping reduces the precision of the model in the high-recall regime, and mode inventing reduces the precision of the model in the low-recall regime. Our rate-prior distortion framework can be thought of as the information-theoretic analogue of precision-recall curves, extending the scalar notion of log-likelihood to rate distortion curves. More specifically, in our framework, mode dropping degrades the distortion performance of the model in the high-rate regime, and mode inventing degrades the distortion performance of the model in the low-rate regime. In Section 6, we empirically study the effect of mode dropping and mode inventing on our rate-prior distortion curves. AIS Validation. We conducted several experiments to validate the correctness of our implementation and the accuracy of the AIS estimates. First, we compared our AIS results with the analytical solution of the rate-prior distortion curve on a linear VAE (derived in Appendix D.3.1) trained on MNIST. As shown in Fig. 7, the RD curve estimated by AIS agrees closely with the analytical solution. Second, for the main experiments of the paper, we evaluated the tightness of the AIS estimate by computing the BDMC gap. The largest BDMC gap for VAEs and AAEs was 0.127 nats, and the largest BDMC gap for GANs was 1.649 nats, showing that our AIS upper bounds are tight. More details are provided in Appendix D.3.2.

APPENDIX C: PROOFS

C.1 PROOF OF PROP. 1

Proof of Prop. 1a. The distortion term E_{q(x,z)}[d(x, f(z))] is a linear function of the channel conditional distribution q(z|x). The mutual information is a convex function of q(z|x). The KL(q(z) || p(z)) term is also a convex function of q(z), which itself is a linear function of q(z|x). Thus, the rate-prior objective is a convex function of q(z|x).
Suppose that for the distortions D_1 and D_2, the conditionals q_1(z|x) and q_2(z|x) achieve the optimal rates in Eq. 5, respectively, and define q_λ(z|x) = λq_1(z|x) + (1 − λ)q_2(z|x). The rate-prior objective that q_λ(z|x) achieves is I_λ(z; x) + KL(q_λ(z) || p(z)), and by linearity the distortion that it achieves is D_λ = λD_1 + (1 − λ)D_2. Convexity of the rate-prior objective in q(z|x) then gives R_p(D_λ) ≤ λR_p(D_1) + (1 − λ)R_p(D_2), which proves the convexity of R_p(D). Alternative Proof of Prop. 1a. We know the rate-prior term E_{p_d(x)}[KL(q(z|x) || p(z))] is a convex function of q(z|x), and E_{q(x,z)}[d(x, f(z))] is a linear, and thus convex, function of q(z|x). As a result, the following optimization problem is a convex optimization problem: min_{q(z|x)} E_{p_d(x)}[KL(q(z|x) || p(z))] subject to E_{q(x,z)}[d(x, f(z))] ≤ D. (22) The rate distortion function R_p(D) is the perturbation function of the convex optimization problem of Eq. 22, and the convexity of R_p(D) follows from the fact that the perturbation function of any convex optimization problem is a convex function. Proof of Prop. 1b. Using the decomposition E_{p_d(x)}[KL(q(z|x) || p(z))] = I(x; z) + KL(q(z) || p(z)), we have min_{p(z)} min_{q(z|x)} [I(x; z) + KL(q(z) || p(z))] = min_{q(z|x)} min_{p(z)} [I(x; z) + KL(q(z) || p(z))] = min_{q(z|x)} I(x; z), where we have used the fact that for any function f(x, y), min_x min_y f(x, y) = min_y min_x f(x, y), and the fact that KL(q(z) || p(z)) is minimized (and equals zero) when p(z) = q(z). Hence R(D) = min_{p(z)} R_p(D) ≤ R_p(D). Proof of Prop. 1c. In Prop. 1a, we showed that the rate-prior term is a convex function of q(z|x), and that the distortion is a linear function of q(z|x). So their summation in Eq. 9 is a convex function of q(z|x). The unique global optimum of this convex optimization can be found by rewriting Eq. 9 as E_{p_d(x)}[KL(q(z|x) || q*_β(z|x))] − E_{p_d(x)}[log Z_β(x)], (28) where q*_β(z|x) = p(z) exp(−β d(x, f(z))) / Z_β(x) and Z_β(x) = ∫ p(z) exp(−β d(x, f(z))) dz. The minimum of Eq. 28 is obtained when the KL divergence is zero, i.e., when q(z|x) = q*_β(z|x).

C.2 PROOF OF PROP. 2

Proof of Prop. 2a. R(D) ≤ R_p(D) was proved in Prop. 1b. To prove the first inequality, note that the summation of rate and distortion is I(x; z) + E_{q*(x,z)}[−log p(x|z)] = H_d − H(x|z) + E_{q*(x,z)}[−log p(x|z)] ≥ H_d, where q*(x, z) is the optimal joint channel conditional, q*(z) and q*(x|z) are its marginal and conditional, and the inequality uses the fact that the cross-entropy E_{q*}[−log p(x|z)] is at least the conditional entropy H(x|z). Equality holds if there is a joint distribution q(x, z) whose conditional satisfies q(x|z) = p(x|z) and whose marginal over x is p_d(x); note that such a joint distribution might not exist for an arbitrary p(x|z). Proof of Prop. 2b. The proof is easily obtained by using d(x, f(z)) = −log p(x|z) in Prop. 1c. Proof of Prop. 2c. Based on Prop. 2b, at β = 1 we have Z_1(x) = ∫ p(z) p(x|z) dz = p(x), so the negative summation of rate-prior and distortion is E_{p_d(x)}[log Z_1(x)] = E_{p_d(x)}[log p(x)], the true log-likelihood.

C.4 PROOF OF PROP. 3 AND PROP. 4

AIS has the property that, for any step k of the algorithm, the set of chains up to step k, and the partial computation of their weights, can be viewed as the result of a complete run of AIS with target distribution q*_k(z|x). Hence, we assume without loss of generality that we are looking at a complete run of AIS (but our analysis applies to the intermediate distributions as well). The AIS distribution q^AIS_k(z|x) is defined by the following resampling procedure: 1. Run M independent AIS chains to step k, obtaining states z^i_k for i = 1, ..., M. 2. Compute the importance weights w^i_k and the normalized importance weights w̃^i_k of each chain. 3. Select a chain index S with probability w̃^S_k. 4. Set z^1_k to the state of the selected chain. 5. Keep the unselected chain values and re-label them as z^{−S}_k, where −S denotes the set of all indices except the selected index S. 6. Return z = z^1_k. Using the AIS distribution q^AIS_k(z|x) defined as above, we define the AIS distortion D^AIS_k(x) and the AIS rate-prior R^AIS_k(x) as the distortion and rate-prior terms evaluated under q^AIS_k(z|x). In order to bound R^AIS_k(x), we first note that D̂^AIS_k(x), the importance-weighted average of the distortion, is an unbiased estimate of the AIS distortion D^AIS_k(x) by the construction of the resampling procedure. We also know that log Ẑ^AIS_k(x), obtained from the AIS weights, is an estimate of the log partition function, and by Jensen's inequality it lower bounds the true log partition function in expectation: E[log Ẑ^AIS_k(x)] ≤ log Z_k(x). Finally, comparing q^AIS_k(z|x) against the optimal conditional q*_k(z|x), we have KL(q^AIS_k(z|x) || q*_k(z|x)) ≥ 0, where the argument follows from the monotonicity of the KL divergence under the resampling construction.
Rearranging terms, we bound the rate: expanding log q*_k(z|x) = log p(z) − β_k d(x, f(z)) − log Z_k(x) in the KL divergence above gives R^AIS_k(x) ≤ −β_k D^AIS_k(x) − log Z_k(x). Combining this with the unbiasedness of D̂^AIS_k(x) and the fact that E[log Ẑ^AIS_k(x)] ≤ log Z_k(x), we conclude that R̂^AIS_k(x) upper bounds the AIS rate-prior R^AIS_k(x) in expectation. Hence, the estimated AIS rate-prior curve upper bounds the AIS rate-prior distortion curve in expectation.

The code for reproducing all the experiments of this paper will be open-sourced publicly. We used the MNIST and CIFAR-10 datasets in our experiments. Real-Valued MNIST. For the VAE experiments on the real-valued MNIST dataset (Fig. 4a), we used the "VAE-50" architecture described in prior work, and only changed the code size in our experiments. The decoder variance is a global parameter learned during training. The network was trained for 1000 epochs with a learning rate of 0.0001 using the Adam optimizer. For the GAN experiments on MNIST (Fig. 3a), we used the "GAN-50" architecture described in prior work. In order to stabilize the training dynamics, we used the gradient penalty (GP). In our deep architectures, we used code sizes of d ∈ {2, 5, 10, 100} and three hidden layers, each having 1024 hidden units, to obtain the following GAN models: Deep-GAN2, Deep-GAN5, Deep-GAN10 and Deep-GAN100. The shallow GAN architectures are similar to the deep architectures, but with one layer of hidden units.

Figure 8: The rate-prior distortion curves obtained by adaptively tuning the HMC parameters in the preliminary run, and pre-loading the HMC parameters in the second formal run. "rs" in the legend indicates the random seed used in the second run.

For the CIFAR-10 experiments (Fig. 3b), we experimented with different GAN models such as DCGAN, DCGAN with Gradient Penalty (GP-GAN), Spectral Normalization (SN-GAN), and DCGAN with Binarized Representation Entropy Regularization (BRE-GAN). The numbers at the end of each GAN name in Fig. 3b indicate the code size. For each RD curve, there are 1999 points computed with only one AIS chain: 999 values of β spaced linearly from β_max to 1, another 999 values of β spaced linearly from 1 to β_min, plus β = 1 itself. β_min = 1/12 for all models. β_max = 1/0.0003 ≈ 3333 for the 100-dimensional models such as GAN100, VAE100 or AAE100, and β_max = 36098 for the 2, 5 and 10 dimensional models. For the 2, 5 and 10 dimensional models, N = 40000, and the above procedure results in 60292 intermediate distributions in total. For the 100-dimensional models, to ensure the accuracy of our AIS estimator with a small BDMC gap, we used N = 1600000, and the above procedure results in 1611463 intermediate distributions in total. We used 20 leapfrog steps for HMC and 40 independent chains, on a single batch of 50 images. On MNIST, we also tested with a larger batch of 500 MNIST images, but did not observe a significant difference compared with a batch of 50 images; thus we ran all of our experiments with a single batch of 50 images. All AIS evaluations were run on P100 GPUs. Adaptive Tuning of HMC Parameters. While running the AIS chain, the parameters of the HMC kernel cannot be adaptively tuned, since doing so would violate the Markovian property of the chain. So, in order to adaptively tune HMC parameters such as the number of leapfrog steps and the step size, in all our experiments we first do a preliminary run in which the HMC parameters are adaptively tuned to yield an average acceptance probability of 65%, as is commonly suggested. Then, in the second "formal" run, we pre-load and fix the HMC parameters found in the preliminary run, and start the chain with a new random seed to obtain our final results.
Interestingly, we observed that the difference in the RD curves obtained from the preliminary run and the formal runs with various different random seeds is very small, as shown in Fig. 8. This figure shows that AIS with the HMC kernel is robust to different choices of random seed for approximating the RD curve of VAE10. We conducted several experiments to validate the correctness of our implementation and the accuracy of the AIS estimates. We compared our AIS results with the analytical solution of the rate-prior distortion optimization on a linear VAE trained on MNIST, as shown in Fig. 7. In order to derive the analytical solution, we first find the optimal distribution q*_β(z|x) from Prop. 2b. For simplicity, we assume a fixed identity covariance matrix I at the output of the conditional likelihood of the linear VAE decoder. In other words, the decoder of the VAE is simply x = Wz + b + ε, where x is the observation, z is the latent code vector, W is the decoder weight matrix, b is the bias, and the observation noise of the decoder is ε ∼ N(0, I). It is easy to show that the conditional likelihood raised to the power β satisfies p(x|z)^β ∝ N(x | Wz + b, (1/β) I). For numerical stability, we can further simplify the above by taking the SVD of W: let W = UDV, and then apply the Woodbury matrix identity to the matrix inversion. This yields a Gaussian q*_β(z|x) whose covariance is diagonalized by V, with a diagonal matrix R_β whose i-th diagonal entry is determined by the i-th singular value of W and by β, where k is the dimension of the latent code z. With negative log-likelihood as the distortion metric, the analytical form of the distortion term then follows from the Gaussian integrals above. We evaluated the tightness of the AIS estimate by computing the BDMC gaps using the same AIS settings. Fig. 9 shows the BDMC gaps at different compression rates for the VAE, GAN and AAE experiments on the MNIST dataset. The largest BDMC gap for VAEs and AAEs is 0.127 nats, and the largest BDMC gap for GANs is 1.649 nats, showing that our AIS upper bounds are tight. In this section, we visualize the high-rate (β ≈ 3500) and low-rate (β = 0) reconstructions of the MNIST images for VAEs, GANs and AAEs with different hidden code sizes. The qualitative results are shown in Fig. 10 and Fig. 11, and are consistent with the quantitative results presented in the experiments section of the paper.

Figure 11: High-rate reconstructions (β_max) of VAEs, GANs and AAEs on MNIST (panels include high-rate VAE100, AAE100 and GAN100). β_max = 3333 for the 100-dimensional models, and β_max = 36098 for the 2 and 10 dimensional models.
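As a companion to the linear VAE derivation in the appendix above, here is a sketch of the analytic optimal conditional, assuming a standard normal prior p(z) = N(0, I). This is our own restatement by completing the square in p(z) p(x|z)^β, using a direct matrix inverse rather than the SVD/Woodbury form the paper uses for numerical stability; all names are illustrative.

```python
import numpy as np

def linear_vae_optimal_conditional(W, b, x, beta):
    """q*_beta(z|x) for the linear Gaussian decoder x = Wz + b + eps,
    eps ~ N(0, I), with prior p(z) = N(0, I). Returns its mean and covariance.
    """
    k = W.shape[1]
    # posterior precision is I + beta * W^T W; inverting it gives the covariance
    Sigma = np.linalg.inv(np.eye(k) + beta * W.T @ W)
    mu = beta * Sigma @ W.T @ (x - b)
    return mu, Sigma

def rate_prior_term(mu, Sigma):
    """KL( N(mu, Sigma) || N(0, I) ) in nats: the rate-prior at this point."""
    k = len(mu)
    return 0.5 * (np.trace(Sigma) + mu @ mu - k - np.log(np.linalg.det(Sigma)))

# Toy usage with a random 5-dim observation, 2-dim code:
rng = np.random.default_rng(0)
W, b, x = rng.standard_normal((5, 2)), np.zeros(5), rng.standard_normal(5)
mu, Sigma = linear_vae_optimal_conditional(W, b, x, beta=1.0)
print(rate_prior_term(mu, Sigma))
```

Sweeping β and evaluating the rate-prior term and the corresponding Gaussian distortion traces the analytic curve that the AIS estimate is validated against in Fig. 7.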
We study rate distortion approximations for evaluating deep generative models, and show that rate distortion curves provide more insights about the model than the log-likelihood alone while requiring roughly the same computational cost.
Although reinforcement learning methods can achieve impressive results in simulation, the real world presents two major challenges: generating samples is exceedingly expensive, and unexpected perturbations or unseen situations cause proficient but specialized policies to fail at test time. Given that it is impractical to train separate policies to accommodate all situations the agent may see in the real world, this work proposes to learn how to quickly and effectively adapt online to new tasks. To enable sample-efficient learning, we consider learning online adaptation in the context of model-based reinforcement learning. Our approach uses meta-learning to train a dynamics model prior such that, when combined with recent data, this prior can be rapidly adapted to the local context. Our experiments demonstrate online adaptation for continuous control tasks on both simulated and real-world agents. We first show simulated agents adapting their behavior online to novel terrains, crippled body parts, and highly-dynamic environments. We also illustrate the importance of incorporating online adaptation into autonomous agents that operate in the real world by applying our method to a real dynamic legged millirobot: we demonstrate the agent's learned ability to quickly adapt online to a missing leg, adjust to novel terrains and slopes, account for miscalibration or errors in pose estimation, and compensate for pulling payloads. Both model-based and model-free reinforcement learning (RL) methods generally operate in one of two regimes: all training is performed in advance, producing a model or policy that can be used at test time to make decisions in settings that approximately match those seen during training; or, training is performed online (e.g., as in the case of online temporal-difference learning), in which case the agent can slowly modify its behavior as it interacts with the environment. However, in both of these cases, dynamic changes such as failure of a robot's components, encountering a new terrain, environmental factors such as lighting and wind, or other unexpected perturbations can cause the agent to fail. In contrast, humans can rapidly adapt their behavior to unseen physical perturbations and changes in their dynamics BID6: adults can learn to walk on crutches in just a few seconds, people can adapt almost instantaneously to picking up an object that is unexpectedly heavy, and children that can walk on carpet and grass can quickly figure out how to walk on ice without having to relearn how to walk. How is this possible? If an agent has encountered a large number of perturbations in the past, it can in principle use that experience to learn how to adapt. In this work, we propose a meta-learning approach for learning online adaptation. Motivated by the ability to tackle real-world applications, we specifically develop a model-based meta-reinforcement learning algorithm. In this setting, data for updating the model is readily available at every timestep in the form of recent experiences. But more crucially, the meta-training process for training such an adaptive model can be much more sample efficient than model-free meta-RL approaches BID11 BID55. Further, our approach foregoes the episodic framework on which model-free meta-RL approaches rely, where tasks are pre-defined to be different rewards or environments, and tasks exist at the trajectory level only.
Instead, our method considers each timestep to potentially be a new "task," where any detail or setting could have changed at any timestep. This view induces a more general meta-RL problem setting by allowing the notion of a task to represent anything from existing in a different part of the state space, to experiencing disturbances, to attempting to achieve a new goal. Learning to adapt a model alleviates a central challenge of model-based reinforcement learning: the problem of acquiring a global model that is accurate throughout the entire state space. Furthermore, even if it were practical to train a globally accurate dynamics model, the dynamics inherently change as a function of uncontrollable and often unobservable environmental factors, such as those mentioned above. If we have a model that can adapt online, it need not be perfect everywhere a priori. This property has previously been exploited by adaptive control methods BID2 BID45 BID38, but scaling such methods to complex tasks and nonlinear systems is exceptionally difficult. Even when working with deep neural networks, which have been used to model complex nonlinear systems BID21, it is exceptionally difficult to enable adaptation, since such models typically require large amounts of data and many gradient steps to learn effectively. By specifically training a neural network model to require only a small amount of experience to adapt, we can enable effective online adaptation in complex environments while putting less pressure on needing a perfect global model. The primary contribution of our work is an efficient meta reinforcement learning approach that achieves online adaptation in dynamic environments. To the best of the authors' knowledge, this is the first meta-reinforcement learning algorithm to be applied to a real robotic system. Our algorithm efficiently trains a global model that is capable of using its recent experiences to quickly adapt, achieving fast online adaptation in dynamic environments. We evaluate two versions of our approach, the recurrence-based adaptive learner (ReBAL) and the gradient-based adaptive learner (GrBAL), on stochastic simulated continuous control tasks with complex contact dynamics (Fig. 2). In our experiments, we show a quadrupedal "ant" adapting to the failure of different legs, as well as a "half-cheetah" robot adapting to the failure of different joints, navigating terrains with different slopes, and walking on floating platforms of varying buoyancy. Our model-based meta-RL method attains substantial improvement over prior approaches, including standard model-based methods, online model-adaptive methods, model-free methods, and prior meta-reinforcement learning methods, when trained with similar amounts of data. In all experiments, meta-training across multiple tasks is sample efficient, using only the equivalent of 1.5-3 hours of real-world experience, roughly 10× less than what model-free methods require to learn a single task. Finally, we demonstrate GrBAL on a real dynamic legged millirobot (see Fig. 2). To highlight not only the sample efficiency of our meta model-based reinforcement learning approach, but also the importance of fast online adaptation in the real world, we show the agent's learned ability to adapt online to tasks such as a missing leg, novel terrains and slopes, miscalibration or errors in pose estimation, and new payloads to be pulled. Advances in learning control policies have shown success on numerous complex and high-dimensional tasks BID48 BID27 BID32 BID49.
While reinforcement learning algorithms provide a framework for learning new tasks, they primarily focus on mastery of individual skills, rather than generalizing and quickly adapting to new scenarios. Furthermore, model-free approaches BID39 require large amounts of system interaction to learn successful control policies, which often makes them impractical for real-world systems. In contrast, model-based methods attain superior sample efficiency by first learning a model of the system dynamics, and then using that model to optimize a policy BID9 BID23 BID36 BID58. Our approach alleviates the need to learn a single global model by allowing the model to be adapted automatically to different scenarios online, based on recent observations. A key challenge with model-based RL approaches is the difficulty of learning a global model that is accurate for the entire state space. Prior model-based approaches tackled this problem by incorporating model uncertainty using Gaussian processes (GPs) BID18 BID8 BID10. However, these methods make additional assumptions about the system (such as smoothness), and do not scale to high-dimensional environments. BID7 recently showed that neural network models can also benefit from incorporating uncertainty, and that this can lead to model-based methods that attain model-free performance with a significant reduction in sample complexity. Our approach is orthogonal to theirs, and can benefit from incorporating such uncertainty. Prior online adaptation approaches BID51 BID3 have aimed to learn an approximate global model and then adapt it at test time. Dynamic evaluation algorithms BID42 BID20 BID14, for example, learn an approximate global distribution at training time and adapt those model parameters at test time to fit the current local distribution via gradient descent. There exists extensive prior work on online adaptation in model-based reinforcement learning and adaptive control BID45. In contrast to inverse model adaptation BID17 BID54 BID38 BID40, we are concerned with the problem of adapting the forward model, which is closely related to online system identification BID28. Work in model adaptation BID24 BID16 BID15 BID56 has shown that a perfect global model is not necessary, and that prior knowledge can be fine-tuned to handle small changes. These methods, however, face a mismatch between what the model is trained for and how it is used at test time. In this paper, we bridge this gap by explicitly training a model for fast and effective adaptation. As a result, our model achieves more effective adaptation compared to these prior works, as validated in our experiments. Our problem setting relates to meta-learning, a long-standing problem of interest in machine learning that is concerned with enabling artificial agents to efficiently learn new tasks by learning to learn BID52 BID47 BID37 BID22. A meta-learner can control learning through approaches such as deciding the learner's architecture BID4, or by prescribing an optimization algorithm or update rule for the learner BID5 BID46 BID59 BID1 BID26 BID41. Another popular meta-learning approach involves simply unrolling a recurrent neural network (RNN) that ingests the data BID44 BID33 BID31 and learns internal representations of the algorithms themselves; one instantiation of our approach (ReBAL) builds on top of these methods. The other instantiation of our method (GrBAL) builds on top of MAML. GrBAL differs from the supervised version of MAML in that MAML assumes access to a hand-designed distribution of tasks.
Instead, one of our primary contributions is the online formulation of meta-learning, where tasks correspond to temporal segments, enabling "tasks" to be constructed automatically from the experience in the environment. Meta-learning in the context of reinforcement learning has largely focused on model-free approaches BID11 BID55 BID50 BID0. However, these algorithms require even more (meta-)training samples than non-meta model-free RL methods, which precludes them from real-world applications. Recent work BID43 has developed a model-based meta-RL algorithm, framing meta-learning as a hierarchical latent variable model and training for episodic adaptation to dynamics changes; the modeling is done with GPs, and results are shown on the cart-pole and double-pendulum agents. In contrast, we propose an approach for learning online adaptation of high-capacity neural network dynamics models; we present two instantiations of this general approach and show results on both simulated agents and a real legged robot. In this section, we present model-based reinforcement learning, introduce the meta-learning formulation, and describe the two main meta-learning approaches. Reinforcement learning agents aim to perform actions that maximize some notion of cumulative reward. Concretely, consider a Markov decision process (MDP) defined by the tuple (S, A, p, r, γ, ρ_0, H). Here, S is the set of states, A is the set of actions, p(s'|s, a) is the state transition distribution, r: S × A → R is a bounded reward function, ρ_0: S → R+ is the initial state distribution, γ is the discount factor, and H is the horizon. A trajectory segment is denoted by τ(i, j) := (s_i, a_i, ..., s_j, a_j, s_{j+1}). Finally, the sum of expected rewards from a trajectory is the return. In this framework, RL aims to find a policy π: S → A that prescribes the optimal action to take from each state in order to maximize the expected return. Model-based RL aims to solve this problem by learning the transition distribution p(s'|s, a), which is also referred to as the dynamics model. This can be done using a function approximator p̂_θ(s'|s, a) to approximate the dynamics, where the weights θ are optimized to maximize the log-likelihood of the observed data D. In practice, this model is then used in the process of action selection by either producing data points from which to train a policy, or by producing predictions and dynamics constraints to be optimized by a controller.

3.2 META-LEARNING

Meta-learning is concerned with automatically learning learning algorithms that are more efficient and effective than learning from scratch. These algorithms leverage data from previous tasks to acquire a learning procedure that can quickly adapt to new tasks. These methods operate under the assumption that the previous meta-training tasks and the new meta-test tasks are drawn from the same task distribution ρ(T) and share a common structure that can be exploited for fast learning. In the supervised learning setting, we aim to learn a function f_θ with parameters θ that minimizes a supervised loss L_T. Then, the goal of meta-learning is to find a learning procedure, denoted as θ'_T = u_ψ(D^tr_T, θ), that can learn new tasks from small amounts of data. We can formalize this meta-learning problem setting as optimizing for the parameters of the learning procedure θ, ψ as follows: min_{θ,ψ} E_{T∼ρ(T)} [L(D^test_T, θ'_T)] where θ'_T = u_ψ(D^tr_T, θ), and where D^tr_T and D^test_T are sampled without replacement from the meta-training dataset D_T.
Once meta-training optimizes for the parameters θ*, ψ*, the learning procedure u_ψ(·, θ) can then be used to learn new held-out tasks from small amounts of data. We will also refer to the learning procedure u as the update function. Gradient-based meta-learning. Model-agnostic meta-learning (MAML) aims to learn the initial parameters of a neural network such that taking one or several gradient descent steps from this initialization leads to effective generalization (or few-shot generalization) to new tasks. Then, when presented with new tasks, the model with the meta-learned initialization can be quickly fine-tuned using a few data points from the new tasks. Using the notation from before, MAML uses gradient descent as the learning algorithm: u_ψ(D^tr_T, θ) = θ − α ∇_θ L(D^tr_T, θ). The learning rate α may be a learnable parameter (in which case ψ = α) or fixed as a hyperparameter, leading to ψ = ∅. Despite the update rule being fixed, a learned initialization of an overparameterized deep network followed by gradient descent is as expressive as update rules represented by deep recurrent networks. Recurrence-based meta-learning. Another approach to meta-learning is to use recurrent models. In this case, the update function is always learned, and ψ corresponds to the weights of the recurrent model that update the hidden state. The parameters θ of the prediction model correspond to the remainder of the weights of the recurrent model and the hidden state. Both gradient-based and recurrence-based meta-learning methods have been used for meta model-free RL BID11. We will build upon these ideas to develop a meta model-based RL algorithm that enables adaptation in dynamic environments, in an online way. In this section, we present our approach for meta-learning for online model adaptation. As explained in Section 3.2, standard meta-learning formulations require the learned model θ*, ψ* to learn using M data points from some new "task." In prior gradient-based and model-based meta-RL approaches BID43, the M data points have corresponded to M trajectories, leading to episodic adaptation. Our notion of task is slightly more fluid: every segment of a trajectory can be considered to be a different "task," and observations from the past M timesteps (rather than the past M episodes) can be considered as providing information about the current task setting. Since changes in system dynamics, terrain details, or other environmental changes can occur at any time, we consider (at every time step) the problem of adapting the model using the M past time steps to predict the next K timesteps. In this setting, M and K are pre-specified hyperparameters; see the appendix for a sensitivity analysis of these parameters. In this work, we use the notion of environment E to denote different settings or configurations of a particular problem, ranging from malfunctions in the system's joints to the state of external disturbances. We assume a distribution of environments ρ(E) that share some common structure, such as the same observation and action space, but may differ in their dynamics p_E(s'|s, a). We denote a trajectory segment by τ_E(i, j), which represents a sequence of states and actions (s_i, a_i, ..., s_j, a_j, s_{j+1}) sampled within an environment E. Our algorithm assumes that the environment is locally consistent, in that every segment of length j − i has the same environment. Even though this assumption is not always correct, it allows us to learn to adapt from data without knowing when the environment has changed.
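Before the formal objective below, here is a minimal, self-contained numpy sketch of the "adapt on the past M transitions, evaluate on the next K transitions" structure just described. A linear dynamics model with squared-error loss stands in for the neural network dynamics model, and all names are illustrative assumptions, not the paper's code.

```python
import numpy as np

def loss(A, segment):
    """Mean squared one-step prediction error over (s, s_next) pairs."""
    return np.mean([np.sum((s_next - A @ s) ** 2) for s, s_next in segment])

def grad_loss(A, segment):
    """Analytic gradient of the squared-error loss with respect to A."""
    g = np.zeros_like(A)
    for s, s_next in segment:
        g += 2 * np.outer(A @ s - s_next, s) / len(segment)
    return g

def meta_loss(A, alpha, traj, t, M, K):
    """Inner term of the meta-objective: adapt on the past M transitions
    (the update rule u_psi, here one gradient step), then evaluate the
    adapted model on the following K transitions."""
    A_adapted = A - alpha * grad_loss(A, traj[t - M:t])
    return loss(A_adapted, traj[t:t + K])

# Toy usage: (s, s_next) pairs from a random linear system.
rng = np.random.default_rng(0)
A_true = np.eye(3) + 0.1 * rng.standard_normal((3, 3))
states = [rng.standard_normal(3)]
for _ in range(50):
    states.append(A_true @ states[-1] + 0.01 * rng.standard_normal(3))
traj = list(zip(states[:-1], states[1:]))
print(meta_loss(np.eye(3), alpha=0.01, traj=traj, t=20, M=10, K=5))
```

In meta-training, the outer loop would differentiate this meta-loss through the inner adaptation step to update the initial parameters, which is exactly the structure formalized next.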
Due to the fast nature of our adaptation (less than a second), this local-consistency assumption is seldom violated. We pose the meta-RL problem in this setting as an optimization over (θ, ψ) with respect to a maximum likelihood meta-objective. The meta-objective is the likelihood of the data under a predictive model p̂_θ'(s'|s, a) with parameters θ', where θ' = u_ψ(τ_E(t − M, t − 1), θ) corresponds to model parameters that were updated using the past M data points. Concretely, this corresponds to the following optimization: min_{θ,ψ} E_{τ_E(t−M, t+K) ∼ D} [L(τ_E(t, t + K), θ'_E)] where θ'_E = u_ψ(τ_E(t − M, t − 1), θ), (3) in which τ_E(t − M, t + K) ∼ D corresponds to trajectory segments sampled from our previous experience, and the loss L corresponds to the negative log-likelihood of the data under the model: L(τ_E(t, t + K), θ'_E) = −(1/K) Σ_{k=t}^{t+K−1} log p̂_{θ'_E}(s_{k+1} | s_k, a_k). (4) In the meta-objective of Eq. 3, note that the past M points are used to adapt θ into θ', and the loss of this θ' is evaluated on the future K points. Thus, we use the past M timesteps to provide insight into how to adapt our model to perform well for nearby future timesteps. As outlined in Algorithm 1, the update rule u_ψ for the inner update and a gradient step on θ for the outer update allow us to optimize this meta-objective of adaptation. Thus, we achieve fast adaptation at test time by being able to fine-tune the model using just M data points. While we focus on reinforcement learning problems in our experiments, this meta-learning approach could be used for learning to adapt online in a variety of sequence modeling domains. We present our algorithm using both a recurrence-based and a gradient-based meta-learner, as we discuss next. Gradient-Based Adaptive Learner (GrBAL). GrBAL uses gradient-based meta-learning to perform online adaptation; in particular, we use MAML. In this case, our update rule is prescribed by gradient descent: θ'_E = u_ψ(τ_E(t − M, t − 1), θ) = θ − α ∇_θ L(τ_E(t − M, t − 1), θ). (5) Recurrence-Based Adaptive Learner (ReBAL). ReBAL, instead, utilizes a recurrent model, which learns its own update rule (i.e., through its internal gating structure). In this case, ψ and u_ψ correspond to the weights of the recurrent model that update its hidden state.

Algorithm 1 (meta-training, in outline): while training, sample an environment E ∼ ρ(E), collect a rollout τ_E using Algorithm 2, and add it to D; then repeatedly sample segments τ_E(t − M, t + K) from D, compute the adapted parameters θ'_E = u_ψ(τ_E(t − M, t − 1), θ), and take a gradient step on (θ, ψ) with respect to the meta-objective of Eq. 3. Algorithm 2 (online rollout): at each timestep, adapt θ* using the past M timesteps, compute a ← controller(θ*', r, H, n_A), execute a, and add the resulting transition to D; finally, return the rollout D.

Figure 2: Two real-world and four simulated environments on which our method is evaluated and adaptation is crucial for success (e.g., adapting to different slopes and leg failures).

Now that we have discussed our approach for enabling online adaptation, we next propose how to build upon this idea to develop a model-based meta-reinforcement learning algorithm. First, we explain how the agent can use the adapted model to perform a task, given parameters θ* and ψ* from optimizing the meta-learning objective. Given θ* and ψ*, we use the agent's recent experience to adapt the model parameters: θ*' = u_ψ*(τ(t − M, t), θ*). This results in a model p̂_θ*' that better captures the local dynamics in the current setting, task, or environment. This adapted model is then passed to our controller, along with the reward function r and a planning horizon H. We use a planning horizon H that is smaller than the adaptation horizon K, since the adapted model is only valid within the current context. We use model predictive path integral control (MPPI) BID57, but, in principle, our model adaptation approach is agnostic to the model predictive control (MPC) method used.
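As a rough illustration of how the adapted model is consumed by the planner, the following sketch mirrors the Algorithm 2 pattern, with a simple random-shooting controller standing in for MPPI (a deliberate simplification; the environment interface and the adapt, predict, and reward callables are assumed stand-ins, not the paper's code).

```python
import numpy as np

def random_shooting_action(params, predict, reward, state, H, n_cand, a_dim, rng):
    """Pick the first action of the best of n_cand random H-step plans,
    evaluated under the adapted dynamics model."""
    best_return, best_action = -np.inf, None
    for _ in range(n_cand):
        actions = rng.uniform(-1, 1, size=(H, a_dim))
        s, total = state, 0.0
        for a in actions:
            s = predict(params, s, a)       # roll out the adapted model
            total += reward(s, a)
        if total > best_return:
            best_return, best_action = total, actions[0]
    return best_action

def online_rollout(theta_star, adapt, predict, reward, env, M, H, n_cand, a_dim, T, rng):
    """Adapt on the last M transitions, plan, act, and record; note that
    parameters reset to theta_star before each new adaptation step."""
    D = []
    state = env.reset()
    for _ in range(T):
        params = adapt(theta_star, D[-M:]) if len(D) >= M else theta_star
        a = random_shooting_action(params, predict, reward, state, H, n_cand, a_dim, rng)
        next_state = env.step(a)            # assumed to return the next state
        D.append((state, a, next_state))
        state = next_state
    return D
```

The key design point, reflected in the reset to theta_star inside the loop, is that adaptation is always performed from the meta-learned initialization rather than accumulating gradient steps, which keeps the adapted model anchored to the prior.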
The use of MPC compensates for model inaccuracies by preventing accumulating errors, since we replan at each time step using updated state information. MPC also allows for further benefits in this setting of online adaptation, because the model p̂_{θ'_E} itself will also improve by the next time step. After taking each step, we append the resulting state transition onto our dataset, reset the model parameters back to θ*, and repeat the entire planning process for each timestep. See Algorithm 2 for this adaptation procedure. Finally, in addition to test time, we also perform this online adaptation procedure during the meta-training phase itself, to provide on-policy rollouts for meta-training. For the complete meta-RL algorithm, see Algorithm 1. Our evaluation aims to answer the following questions: Is adaptation actually changing the model? Does our approach enable fast adaptation to varying dynamics, tasks, and environments, both inside and outside of the training distribution? How does our method's performance compare to that of other methods? How do GrBAL and ReBAL compare? How does meta model-based RL compare to meta model-free RL in terms of sample efficiency and performance for these experiments? Can our method learn to adapt online on a real robot, and if so, how does it perform? We next present our set-up and results, motivated by these questions. Videos are available online, and further analysis is provided in the appendix. We first conduct a comparative evaluation of our algorithm on a variety of simulated robots using the MuJoCo physics engine BID53. For all of our environments, we model the transition probabilities as Gaussian random variables with mean parameterized by a neural network model (3 hidden layers of 512 units each and ReLU activations) and fixed variance. In this case, maximum likelihood estimation corresponds to minimizing the mean squared error. We now describe the setup of our environments (Fig. 2), where each agent requires different types of adaptation to succeed at run-time: Half-cheetah (HC): disabled joint. For each rollout during meta-training, we randomly sample a joint to be disabled (i.e., the agent cannot apply torques to that joint). At test time, we evaluate performance in two different situations: disabling a joint unseen during training, and switching between disabled joints during a rollout. The former examines extrapolation to out-of-distribution environments, and the latter tests fast adaptation to changing dynamics. HC: sloped terrain. For each rollout during meta-training, we randomly select an upward or downward slope of low steepness. At test time, we evaluate performance on unseen settings, including a gentle upward slope, a steep upward slope, and a steep hill that first goes up and then down. HC: pier. In this experiment, the cheetah runs over a series of blocks that are floating on water. Each block moves up and down when stepped on, and the dynamics change rapidly because each block has different damping and friction properties. The HC is meta-trained by varying these block properties, and tested on a specific (randomly-selected) configuration of properties. Ant: crippled leg. For each meta-training rollout, we randomly sample a leg to cripple on this quadrupedal robot. This causes unexpected and drastic changes to the underlying dynamics. We evaluate this agent at test time by crippling a leg from outside of the training distribution, as well as by transitioning within a rollout from normal operation to having a crippled leg.
In the following sections, we evaluate our model-based meta-RL methods (GrBAL and ReBAL) in comparison to several prior methods:

• Model-free RL (TRPO): To evaluate the importance of adaptation, we compare to a model-free RL agent that is trained across environments E ∼ ρ(E) using TRPO BID48.
• Model-free meta-RL (MAML-RL): We compare to a state-of-the-art model-free meta-RL method, MAML-RL.
• Model-based RL (MB): Similar to the model-free agent, we also compare to a single model-based RL agent, to evaluate the importance of adaptation. This model is trained using supervised model-error minimization and iterative model bootstrapping.
• Model-based RL with dynamic evaluation (MB+DE): We compare to an agent trained with model-based RL, as above. However, at test time, the model is adapted by taking a gradient step at each timestep using the past M observations, akin to dynamic evaluation BID20. This final comparison evaluates the benefit of explicitly training for adaptability.

All model-based approaches (MB, MB+DE, GrBAL, and ReBAL) use model bootstrapping, use the same neural network architecture, and use the same planner within experiments: MPPI BID57 for the simulated experiments and random shooting (RS) BID35 for the real-world experiments. First, we analyze the effect of the model adaptation, and show results from test-time runs on three environments: HC pier, HC sloped terrain with a steep up/down hill, and ant crippled leg with the chosen leg not seen as crippled during training. FIG1 displays the distribution shift between the pre-update and post-update model prediction errors of three GrBAL runs, showing that using the past M timesteps to update θ* (pre) into θ*' (post) does indeed reduce model error on predicting the following K timesteps. We first study the sample efficiency of the meta-training process. FIG2 shows the average return across test environments w.r.t. the amount of data used for meta-training. We (meta-)train the model-free methods (TRPO and MAML-RL) until convergence, using the equivalent of about two days of real-world experience. In contrast, we meta-train the model-based methods (including our approach) using the equivalent of 1.5-3 hours of real-world experience. Our methods result in superior or equivalent performance to the model-free agent that is trained with 1000 times more data. Our methods also surpass the performance of the non-meta-learned model-based approaches. Finally, our performance closely matches the high asymptotic performance of the model-free meta-RL method for half-cheetah disabled, and achieves suboptimal performance for ant crippled but, again, it does so with the equivalent of 1000 times less data. Note that this suboptimality in asymptotic performance is a known issue with model-based methods, and thus an interesting direction for future efforts. The improvement in sample efficiency from using model-based methods matches prior findings BID8 BID35 BID21; the most important evaluation, which we discuss in more detail next, is the ability of our method to adapt online to drastic dynamics changes in only a handful of timesteps.
We also provide the performance of an MB oracle, which is trained using unlimited data from only the given test environment (rather than needing to generalize to various training environments). In these experiments, note that all agents were meta-trained on a distribution of tasks/environments (as detailed above), but we then evaluate their adaptation ability on unseen environments at test time. We test the ability of each approach to adapt to sudden changes in the environment, as well as to generalize beyond the training environments. We evaluate the fast adaptation (F.A.) component on the HC disabled joint, ant crippled leg, and the HC pier. On the first two, we cause a joint/leg of the robot to malfunction in the middle of a rollout. We evaluate the generalization component also on the tasks of HC disabled joint and ant crippled leg, but this time, the leg/joint that malfunctions has not been seen as crippled during training. The last environment that we test generalization on is the HC sloped terrain for a hill, where the agent has to run up and down a steep slope, which is outside of the gentle slopes that it experienced during training. The results, shown in FIG3, show returns that are normalized such that the MB oracle achieves a return of 1. In all experiments, due to the low quantity of training data, TRPO performs poorly. Although MB+DE achieves better generalization than MB, the slow nature of its adaptation causes it to fall behind MB in the environments that require fast adaptation. On the other hand, our approach surpasses the other approaches in all of the experiments. In fact, in the HC pier and the fast adaptation of ant environments, our approach surpasses the model-based oracle. This showcases the importance of adaptation in stochastic environments, where even a model trained with a lot of data cannot be robust to unexpected occurrences or disturbances. ReBAL displays its strengths on scenarios where longer sequential inputs allow it to better assess current environment settings, but overall, GrBAL seems to perform better for both generalization and fast adaptation. To test our meta model-based RL method's sample efficiency, as well as its ability to perform fast and effective online adaptation, we applied GrBAL to a real legged millirobot, comparing it to model-based RL (MB) and model-based RL with dynamic evaluation (MB+DE). Due to the cost of running real robot experiments, we chose the better performing method (i.e., GrBAL) to evaluate on the real robot. This small 6-legged robot, as shown in Fig. 1 and Fig. 2, presents a modeling and control challenge in the form of highly stochastic and dynamic movement. This robot is an excellent candidate for online adaptation for many reasons: the rapid manufacturing techniques and numerous custom-design steps used to construct this robot make it impossible to reproduce the same dynamics each time, its linkages and other body parts deteriorate over time, and it moves very quickly and dynamically with bounding-style gaits; hence, its dynamics are strongly dependent on the terrain or environment at hand. The state space of the robot is a 24-dimensional vector, including center of mass positions and velocities, center of mass pose and angular velocities, back-EMF readings of motors, encoder readings of leg motor angles and velocities, and battery voltage. We define the action space to be velocity setpoints of the rotating legs. The action space has a dimension of two, since one motor on each side is coupled to all three of the legs on that side. All experiments are conducted in a motion capture room.
Computation is done on an external computer, and the velocity setpoints are streamed over radio at 10 Hz to be executed by a PID controller on the robot's on-board microcontroller. We meta-train a dynamics model for this robot using the meta-objective described in Equation 3, and we train it to adapt on entirely real-world data from three different training terrains: carpet, styrofoam, and turf. We collect approximately 30 minutes of data from each of the three training terrains. This data was entirely collected using a random policy, in conjunction with a safety policy, whose sole purpose was to prevent the robot from exiting the area of interest. Our first group of results (Table 1) shows that, when data from a random policy is used to train a dynamics model, both a model trained with a standard supervised learning objective (MB) and a GrBAL model achieve comparable performance for executing desired trajectories on terrains from the training distribution. Next, we test the performance of our method on what it is intended for: fast online adaptation of the learned model to enable successful execution of new, changing, or out-of-distribution environments at test time. Similar to the comparisons above, we compare GrBAL to a model-based method (MB) that involves neither meta-training nor online adaptation, as well as a dynamic evaluation method that involves online adaptation of that MB model (MB+DE). Our results (Fig. 6) demonstrate that GrBAL substantially outperforms MB and MB+DE, and that, unlike them, GrBAL can quickly 1) adapt online to a missing leg, 2) adjust to novel terrains and slopes, 3) account for miscalibration or errors in pose estimation, and 4) compensate for pulling payloads.

Figure 6: GrBAL clearly outperforms both MB and MB+DE when tested on environments that require online adaptation and/or were never seen during training.

None of these environments were seen during training time, but the agent's ability to learn how to learn enables it to quickly leverage its prior knowledge and fine-tune to adapt to new environments online. Furthermore, the poor performance of the MB and MB+DE baselines demonstrates not only the need for adaptation, but also the importance of good initial parameters to adapt from (in this case, meta-learned parameters). The qualitative results of these experiments in Fig. 7 show that the robot is able to use our method to adapt online and effectively follow the target trajectories, even in the presence of new environments and unexpected perturbations at test time.

Figure 7: The dotted black line indicates the desired trajectory in the xy plane. By effectively adapting online, our method prevents drift from a missing leg, prevents sliding sideways down a slope, accounts for pose miscalibration errors, and adjusts to pulling payloads (left to right). Note that none of these tasks/environments were seen during training time, and they require fast and effective online adaptation for success.

In this work, we present an approach for model-based meta-RL that enables fast, online adaptation of large and expressive models in dynamic environments. We show that meta-learning a model for online adaptation results in a method that is able to adapt to unseen situations or sudden and drastic changes in the environment, and is also sample efficient to train.
We provide two instantiations of our approach (ReBAL and GrBAL), and we provide a comparison with other prior methods on a range of continuous control tasks. Finally, we show that (compared to model-free meta-RL approaches) our approach is practical for real-world applications, and that this capability to adapt quickly is particularly important under complex real-world dynamics. In this section, we show the effect of adaptation in the case of GrBAL. In particular, we show the histogram of the K-step normalized error, as well as the per-timestep visualization of this error during a trajectory. Across all tasks and environments, the post-updated model achieves lower prediction error than the pre-updated model. To see how the training distribution affects test performance, we ran an experiment that used GrBAL to train models of the 7-DOF arm, where each model was trained on the same number of datapoints during meta-training, but those datapoints came from different ranges of force perturbations. We observe (in the plot below) that: 1) seeing more variation during training is helpful during testing, since a model that saw a large range of force perturbations during training performed the best; 2) a model that saw no perturbation forces during training did the worst; 3) the middle 3 models show comparable performance in the "constant force = 4" case, which is an out-of-distribution task for those models. Thus, there is not actually a strong restriction on what needs to be seen during training in order for adaptation to occur at test time (though there is a general trend that more is better). In this section we analyze how sensitive our algorithm is w.r.t. the hyperparameters K and M. In all experiments of the paper, we set K equal to M. FIG6 shows the average return of GrBAL across meta-training iterations of our algorithm for different values of K = M. The performance of the agent is largely unaffected by different values of these hyperparameters, suggesting that our algorithm is not particularly sensitive to these values. For different agents, the optimal value for these hyperparameters depends on various task details, such as the amount of information present in the state (a fully-informed state variable precludes the need for additional past timesteps) and the duration of a single timestep (a longer timestep duration makes it harder to predict more steps into the future).

FIG6 caption: The x-axis shows data aggregation iterations during meta-training, whereas the y-axis shows the average return achieved when running online adaptation with the meta-learned model from the particular iteration. The curves suggest that GrBAL performance is fairly robust to the values of these hyperparameters.

For each MuJoCo agent, the same reward function is used across its various tasks. TAB2 shows the reward functions used for each agent. We denote by x_t the x-coordinate of the agent at time t, ee_t refers to the position of the end-effector of the 7-DoF arm, and g corresponds to the position of the desired goal.
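For concreteness, the following is a hedged sketch of the GrBAL-style adapt-then-predict objective discussed above, using the past M transitions for the inner gradient step and the next K transitions for the outer loss. A toy linear dynamics model stands in for the neural network, and past_M and next_K are assumed to be (states, actions, next_states) tuples; these names and the model form are ours, not the authors' implementation.

```python
import torch

def model_error(W, states, actions, next_states):
    # stand-in linear dynamics model: next state ~ [state, action] @ W
    inputs = torch.cat([states, actions], dim=-1)
    return ((inputs @ W - next_states) ** 2).mean()

def grbal_style_meta_loss(W, alpha, past_M, next_K):
    # inner step: theta*(post) = theta - alpha * grad of the error on the past M steps
    pre_loss = model_error(W, *past_M)
    (grad,) = torch.autograd.grad(pre_loss, W, create_graph=True)
    W_post = W - alpha * grad
    # outer (meta) objective: error of the adapted model on the following K steps
    return model_error(W_post, *next_K)
```

Meta-training backpropagates this loss through the inner step to update W; at test time only the inner step is executed, which is what makes a single gradient step sufficient for fast adaptation.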
A model-based meta-RL algorithm that enables a real robot to adapt online in dynamic environments
453
scitldr
Model-free deep reinforcement learning approaches have shown superhuman performance in simulated environments (e.g., Atari games, Go, etc). During training, these approaches often implicitly construct a latent space that contains key information for decision making. In this paper, we learn a forward model on this latent space and apply it to model-based planning in a miniature Real-time Strategy game with incomplete information (MiniRTS). We first show that the latent space constructed from existing actor-critic models contains relevant information of the game, and design a training procedure to learn forward models. We also show that our learned forward model can predict meaningful future states and is usable for latent space Monte-Carlo Tree Search (MCTS), in terms of win rates against rule-based agents. Model-free deep reinforcement learning (DRL) approaches (e.g., deep Q-learning BID14, DDPG BID12, A3C BID16, etc) have been applied extensively in many simulated environments with complete information and relatively simple game dynamics (e.g., Atari games, Go, Doom, etc). The learned agent, which acts reactively based on the current game situation, can even achieve superhuman performance. However, for complicated environments, planning ahead (or "predicting the future") before making an actual decision is important. Such a planning procedure requires a forward model that estimates the next state s_{t+1} given the current state s_t and action a_t, which is in general non-trivial to construct and estimate from the high-dimensional raw input. For partially observable environments (e.g., Real-time Strategy games like StarCraft), constructing a forward model is more difficult even with perfect domain knowledge of the game, due to the deliberate concealing of information and the additional requirement to capture the agent's belief of the unknown. A natural question now arises: could we borrow the success of the model-free approach to learn a forward model? Note that in model-free approaches, a single shared network (called the "trunk") is often used to extract features from the input game situation to obtain a latent representation. From the latent space, multiple reinforcement learning quantities (Q-function, value function V, advantage function A, etc) are predicted via simple linear transformations and used for decision making. The strong performance of these approaches indicates that the learned latent space must have captured key ingredients of the input situation while remaining low-dimensional. Therefore, it is an excellent candidate for the state representation of a forward model. In this paper, we study whether it is possible to use the latent space learned by model-free approaches to construct forward models. We use MiniRTS, an efficient and simple two-player Real-time Strategy (RTS) game. MiniRTS captures the basic dynamics of its kind: the agent builds units (workers and troops) that consume resources, gathers resources, explores regions out of sight ("fog of war"), defends against the enemy's attack, and invades the enemy's base. This is an incomplete-information game, because the agent can only see within its own sight, and does not know the actions of its opponent by default.
Rather than unit-based control as in prior work, the agent uses 9 discrete actions to control the overall strategy (e.g., build a particular kind of troops, attack or defend). Our contributions are three-fold. First, we propose to study the relationship between the latent space learned by model-free approaches and the state representation of forward models. Very few works (e.g., DARLA BID10, DQN BID15) in model-free RL study these properties in depth, let alone use the latent state in model-based approaches for incomplete-information games. To our knowledge, ours is one of the first works to explore such directions. Second, we improve the performance of the model-based agent in MiniRTS by input feature design and show that the latent space learned from actor-critic models BID16 can reconstruct critical information of the game, e.g., Hit Point of the base and available resources. Finally, we propose novel algorithms that learn a forward model that maps a latent state h_t to its future counterpart h_{t'} (t' > t) with reduced drifting. Such a forward model enables us to use model-based planning such as Monte-Carlo Tree Search (MCTS) in incomplete-information games. We show positive performance (8% higher than random planning) in terms of win rates against rule-based agents.

Forward modeling. Model-based approaches are one of the standard ways to model complicated yet physically calibrated robots BID8. In these cases, it is assumed, or known by design, that a forward model can be obtained easily. Furthermore, the state representation of the forward model is usually the internal state of the system (e.g., position and velocity of each robot joint), and is thus low-dimensional. On the other hand, learning a forward model from a complicated and high-dimensional observation is in general difficult, even with current deep learning architectures. Computer vision researchers try to predict the next video frame given the last few frames BID13 BID26. To reduce the complexity, one natural idea is to project the high-dimensional input to a low-dimensional state that captures the important aspects of the situation, on which a forward model might be easier to learn. BID0 learns a forward model directly from visual input and uses it for manipulation. To make the learned latent state nontrivial, a regularization term is applied to ensure the learned representation can reconstruct the input as much as possible. In comparison, we do not need such a regularization since the latent space is pre-trained by model-free approaches. One issue in forward modeling is that its accumulated prediction error over many steps might drift, making it unusable for planning. BID29 uses forward model predictions as additional features for a model-free approach, while BID2 uses a forward model as a way to guide exploration for a model-free procedure. In this paper, we use multi-step long-term prediction to stabilize the forward model.

Real-time strategy games. Most works on Real-time Strategy (RTS) games assume that the agents have access to complete information, in which the behaviors of both the player and the opponents are observed. Micro-management is often viewed as a standard RL task, addressed by model-free approaches BID17. On the other hand, Monte-Carlo Tree Search (MCTS) has been applied (e.g., ABCD BID4, MCTSCD BID21), with a focus on concurrent planning with perfect information and a perfect forward model. BID24 learns a short-term shallow forward model for local combat, but again assumes full access to the game.
Finally, BID5 deals with partial observability, but focuses particularly on scouting, rather than global strategies as we do. Grouping unit-based actions into macro (or strategic) actions is an open and challenging topic in RTS games. Following prior work, we use 9 pre-defined discrete strategic actions to facilitate training. Other recent works also model macro strategies from replays BID11 or from action scripts BID3, both with deep models. In model-free reinforcement learning, an agent learns to maximize its long-term reward, either from previous experience in the same environment (e.g., a replay buffer), or from its own ongoing interaction with the environment. In both cases, the agent does not need to have knowledge of the environment, nor know how the world changes after an action is taken, hence the name model-free approach. On the other hand, if an agent learns to build a model to predict how the state (its own state and/or the environment state) changes after its action, then the agent is considered to take a model-based approach. In this paper, we first use an off-policy version of Batch A3C BID1 to train a model-free agent. During training, the parameters of the agent follow the gradient directions of the two objectives: DISPLAYFORM0, where π(a|s; θ_π) and V(s; θ_V) are the policy and value functions of the agent. DISPLAYFORM1 is an importance factor that captures the discrepancy between the policy distribution π(a_t|s_t; θ_π^old) evaluated when we sample the action a_t at state s_t, and the policy we use now with the current parameters θ_π. We apply the importance factor η_t since Batch A3C does not start updating model parameters until it collects a full batch. During that period, the data that were collected first are considered to be off-policy. In many works that use shared feature extraction and multi-head predictions BID28, the two sets of parameters θ_V and θ_π are mostly shared via a common trunk h with parameters θ_U, and the policy/value functions can be written as: DISPLAYFORM2. h compactly encodes the information used for both decision (policy π) and evaluation (value V) of the model. Compared to the original high-dimensional input that might contain a lot of irrelevant information, h typically contains a rich but low-dimensional representation of the current situation that is critical for its assigned task. That might include world states directly related to short-term/long-term reward, and beliefs about the unobserved part of the environment. From a model-based point of view, such a latent space learned from successful model-free training is a good candidate for the state representation of a forward model. Following this reasoning, in order to build a strong forward model, a natural approach is to start with a strong pre-trained model-free agent that performs well in the environment. In this paper, we use MiniRTS as the environment, which is a fast incomplete-information Real-time Strategy game with two players. An agent which performs well in MiniRTS must build its own troops, gather resources efficiently, explore unseen territory, defend against the enemy's attack, and find the right timing to attack back. The game ends when one player's base is destroyed by the opponent. MiniRTS runs at 40K FPS on a laptop with quite complicated dynamics (e.g., collision detection, unit path planning, etc) and is suitable for our task. To strengthen the model-free agent, we improve upon prior work by appending more input feature channels.
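As a rough illustration of the importance-weighted objective above, the sketch below shows one plausible way to combine the factor η_t with the policy and value losses. The exact loss weighting and where gradients are stopped are our assumptions, not a transcription of the authors' Batch A3C code.

```python
import torch

def off_policy_batch_a3c_loss(logp_new, logp_old, advantages, values, returns):
    # eta_t corrects for the lag between the sampling policy (theta_old)
    # and the policy being updated (theta)
    eta = torch.exp(logp_new - logp_old).detach()
    policy_loss = -(eta * logp_new * advantages.detach()).mean()
    value_loss = ((values - returns) ** 2).mean()
    return policy_loss + 0.5 * value_loss
```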
We define three models, Vanilla, BuildHistory and PrevSeen, each with a different set of features (Tbl. 1).

• Vanilla: Similar to prior work, we use basic features that include Unit Type, Hit Point ratio, and available resources to build units.
• BuildHistory: In an incomplete-information game, the decision a player made two minutes ago is not the same as the decision the player is making now, even if the perceived situation is identical. This is because during that period, the opponent (which the player cannot see) might have done a lot of work. To make the agent aware of the situation, we record the build-since tick of each unit, and attach the related information in the feature plane.
• PrevSeen: Ideally the agent should remember all information since the game starts (e.g., the opponent's base location it saw at tick 100, even if the current tick is 5000). However, as shown in our experiments, a Recurrent Neural Network (RNN) is not able to carry such long-term memory. Therefore, we modify the game engine so that the most recent information that the agent used to have access to is also sent into the input. This leads to much faster convergence, and encourages fast map exploration and changes in strategies. As a result, the bot learns more aggressive strategies, like rushing, by itself.

In Sec. 4.1, we show that PrevSeen has the strongest performance and its latent representation is a good candidate for forward models. To check whether the learned latent space is useful for forward modeling and to understand what kind of game statistics it has captured, we further learn an interpretation network to predict the input from the hidden state. The interpretation network is designed to mirror the network that maps the input to the latent space, only replacing pooling layers with upsampling layers. Again, the latent space of PrevSeen can predict highly relevant input features, as shown in Sec. 4.2. Once we have a rich latent space that contains important information for decision making, we train a forward model that carries h_t into the near future, which can be used for prediction and model-based planning. One naive approach is to learn a forward model f to predict the next state h_{t+1} given h_t and a_t. However, such a simple prediction might suffer from drifting: if we apply f multiple times to predict the far future, the prediction error quickly accumulates and makes the long-term prediction unusable. This leads to detrimental performance for planning methods (e.g., MCTS) which require accurate long-term predictions. To solve this issue, during training, we not only require f(h_t, a_t) to be close to h_{t+1}, but also require f^(t')(h_t, a_t, a_{t+1}, ..., a_{t+t'-1}) to be close to h_{t+t'}, where f^(n) denotes applying the same forward model n times. While a naive implementation requires O(T^3) gradient updates, such a training procedure can be implemented efficiently with the current autograd approach with only O(T^2) gradient updates. The trained forward model enables us to apply existing planning methods to complicated incomplete-information games and gives a sensible performance boost over baselines. We first learn a model-free agent against a rule-based AI (AI SIMPLE in MiniRTS). To maximize the performance of the trained AI, we use T = 20 for all experiments, since multi-step training with large T has been noted to be useful BID9 BID7. We also set the frame skip (how frequently an action is made) to 50 for our AI, and to 20 for AI SIMPLE. This gives the trained agent a slight disadvantage.
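The multi-step training described above can be sketched as a single recursive rollout whose prediction at every horizon is penalized against the corresponding latent state; one rollout yields all the f^(n) terms, which is what keeps the cost at O(T^2) gradient computation under autograd. Function and variable names below are illustrative, not the paper's code.

```python
import torch

def multi_step_forward_loss(f, h_seq, a_seq):
    # h_seq: list of T+1 latent states from the (frozen) model-free trunk
    # a_seq: list of T actions; f maps (h_t, a_t) -> prediction of h_{t+1}
    h_pred, loss = h_seq[0], 0.0
    for t, a_t in enumerate(a_seq):
        h_pred = f(h_pred, a_t)  # recursive rollout, i.e., f^(t+1) applied to h_0
        # targets come from the frozen trunk, so gradients flow only through f
        loss = loss + ((h_pred - h_seq[t + 1].detach()) ** 2).mean()
    return loss / len(a_seq)
```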
For the network structure, we use a convolutional network with 2 conv + 1 pooling + 2 conv + 1 pooling layers; each convolutional layer has 64 channels, except for the first and last layers. The input features are 20-by-20 images with 35 channels (Tbl. 1). For all approaches, the overall number of parameters is kept constant, as in BID29. The resulting latent representation is 35 × 5 × 5 = 875-dimensional, which is much lower than the original input space. After convolution, the latent representation is then used to predict the discrete action distribution (policy π) and the value function V. For the RNN, the next latent state h_{t+1} is computed by first concatenating h_t with a learnable embedding of a_t, and then compressing them into a vector in the latent space by an MLP. We also use frame-stacking (of size 4), which is an empirical trick to mimic short-term memory during training. Both the RNN and frame-stacking use the feature set of Vanilla. As in prior work, we use 9 discrete actions to drive the environment. All these actions are globally strategic actions (e.g., attack, defend, which unit to build) and agents do not need to worry about their details (e.g., where to build the unit).

History information. Adding historic data can help the performance of an RL approach. However, similar to previous works, the performance of frame-stacking is already comparable to the RNN approach, showing that the latent space really captures the short-term memory. On the other hand, PrevSeen outperforms both RNN and BuildHistory. In summary, while short-term information can be encoded into the hidden space (either by an RNN or by frame-stacking), long-term information cannot be efficiently learned in the latent space and has to be encoded manually (as in PrevSeen).

Complete/Incomplete information. Whether the agent can see the opponent's behavior, and whether the agent has immediate access to information encoding what it saw long before (e.g., the opponent's base sensed long ago), leads to differences in the training curves (FIG1) and in final performance (Tbl. 2).

Value function. In the incomplete-information setting, we see many sudden drops in the value function as the game progresses. This is because the agent does not estimate the opponent's actions well and thus thinks it is in good shape. To quantify this, we define the surprise metric as the difference between nearby estimated values: DISPLAYFORM0. Fig. 3(b) shows that the AI trained with complete information indeed knows more about the situation than the one trained with incomplete information, in particular at the beginning of the game. As shown in Fig. 3(c), the distribution of surprise is different between the complete- and incomplete-information bots. However, even for the complete-information bot, there are quite a few sudden drops. This is due to the complicated dynamics of the game. Lots of small factors, e.g., path planning and collision detection, can lead to substantially different outcomes of the game. For example, the outcome of an even battle is often determined by the micro-management of each player, over which our model has no control.

Different learned strategy. Although the final performance of PrevSeen and BuildHistory is similar, their behavior against the rule-based AI is quite different. As shown in Fig. 4, the AI trained with PrevSeen learned to explore the map and/or rush the enemy with a few tanks at the beginning of the game.
Quantitatively, the number of nonzero seen-then-hidden events per game from PrevSeen is twice that from BuildHistory (Fig. 5). We have not seen similar behaviors in prior works. With the latent state, we can reconstruct the original 20x20 channels. We choose to predict one channel at a time. A joint model that predicts all input channels at once works poorly, since dense channels (with many non-zero entries) dominate the prediction. We define the normalized reconstruction accuracy (NRA) to be 1 − ||c − c*||/||c*||, where c is some predicted channel and c* is its ground-truth value. Fig. 6 shows the results. The model correctly finds the most relevant features from the input (e.g., base HP ratio, the amount of available resources). We discover that the agent can partially recover the exact location of the workers during the early stages of training, but ignores them at convergence, showing that the model learns to discard less relevant features. Interestingly, the prev-seen feature channels are not predicted well for the full-fledged agent, showing they are not important in the decision. However, models equipped with these channels train much faster. We also try predicting the 5x5 down-sampled version of the original 20x20 channels, by removing all upsampling layers. Overall, the normalized reconstruction accuracy is much higher, as shown in the dark blue bars. This is because the latent state is also a 5 × 5 down-sampled state of the original game state, and the model only needs to reconstruct the features in a rough region instead of in each grid cell. In particular, we see the emergence of multiple relevant features such as the location of our own RANGE ATTACKER and BASE, affiliation, unit HP ratio, etc. We also see that the agent learns to pay attention to the opponent's WORKER (which gathers resources) and MELEE ATTACKER (which attacks us). The model pays attention to the opponent's MELEE ATTACKER but not so much to its RANGE ATTACKER, because the rule-based AI mostly builds MELEE ATTACKERs. We train multiple forward models following Sec. 3.2, each with a different training paradigm, defined as follows. Pred1: basic forward model (FIG0); MatchPi: enforce the predicted future state to predict the future policy well (FIG0); MatchA: enforce the predicted future state to predict the future action well (FIG0); PredN: predict long-term future states (FIG0).

FIG4 caption: Win rate from the decisions made by the latent state ĥ predicted by forward models. Top row: 2-hop prediction (ĥ_t = f(h_{t−2})) on 5 models trained on 5 independent A3C models. Bottom row: 5-hop prediction. In both cases, the grey dotted line is the baseline ĥ_t = h_{t−2} (resp. ĥ_t = h_{t−5}). MatchA (red) lines do not continue due to numerical instability.

For evaluation, at a time step t, we recursively apply the learned forward model to estimate the current latent state ĥ_t from a previous state h_{t−t'} that is computed from the observation s_{t−t'}. Then we use ĥ_t to make the decision for time t, and check its win rate against the rule-based system. As a baseline, we use the identity forward function ĥ_t = h_{t−t'}. We evaluate these training paradigms on 5 independently trained model-free models. As shown in FIG4, MatchPi is the most stable method with the highest win rate, consistently higher than the baseline approach that uses ĥ_t = h_{t−t'}. MatchA also learns quite fast, but runs into numerical instabilities after a few hundred iterations. PredN is also quite stable, but its learning speed is not as fast compared to MatchA.
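The NRA metric defined earlier in this section is straightforward to compute; a minimal numpy sketch follows (the eps guard against all-zero ground-truth channels is our addition):

```python
import numpy as np

def normalized_reconstruction_accuracy(c_pred, c_true, eps=1e-8):
    # NRA = 1 - ||c - c*|| / ||c*||, computed for one reconstructed channel
    return 1.0 - np.linalg.norm(c_pred - c_true) / (np.linalg.norm(c_true) + eps)
```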
When the delay is large, we clearly see the decay of the win-rate performance. The performance metric used in Sec. 4.3 is only an intermediate metric. To truly test the performance of the forward models, we plug them into a planning algorithm, e.g., Monte-Carlo Tree Search (MCTS). Note that traditional MCTS can only be applied when both the complete game state and the dynamics of the game environment are known. However, neither condition is satisfied for RTS games, thus learned forward models and value functions are necessary. In MCTS, we use a shared tree (tree parallelization) among multiple threads. This is because, compared to other approaches (e.g., root parallelization, Chaslot et al.), expanding a tree node, a costly operation that requires feed-forwarding a neural network, only happens once. To increase the diversity of the initial exploration, we use a virtual loss of size 5. During the execution of MCTS, the network performs a projection step that maps the input state to its latent representation, a forwarding step that predicts the latent state h_{t+1} given h_t and a_t, and a prediction step that gives the value of the predicted state for back-propagation in MCTS.

Interestingly, the models with strong performance on the delayed prediction task (e.g., MatchPi) do not perform well in MCTS. On the contrary, PredN gives a consistently strong performance (25%) over 5 independently trained models, compared to the random baseline (17% win rate). Note that without a good forward model and a value function, we cannot even run reduced-space MCTS; the proposed forward model makes it possible. However, compared to the performance of the model-free approach (> 80%), there is still a long way to go.

Figure 8: Win rate against AI SIMPLE with frame skip 20 on 1000 games using latent-space Monte-Carlo Tree Search (shown for MatchA, MatchPi, and Pred1). All runs use 100 rollouts (5 threads, each thread with 20 rollouts). The pink line is the baseline of a random agent that picks the 9 actions uniformly at each step.

We also tried removing the short-term predictions (FIG0); the performance is similar. We have also tried combining the forward model with the action probabilities given by the model-free approach using PUCT. However, there is no gain, due to the fact that the model-free part has a much stronger performance, overshadowing the contribution of the forward models. The latent space learned by model-free reinforcement learning encodes important information for an agent to make sensible decisions to maximize its reward in a complicated simulated environment. In this paper, we verify the power of the latent space of a successfully trained model-free agent, and propose several methods to learn forward models on this space, in a real-time strategy game with incomplete information. Despite being an extremely hard problem, we learn forward models that make it possible to use planning approaches such as Monte-Carlo Tree Search, and show consistently positive gains over baselines. Much future work follows. As a first step, although we show that it is possible to learn a forward model for incomplete-information Real-time Strategy games to enable model-based planning in the latent space, it remains an open problem how to improve its performance. It is possible that, even when a good forward model is learned, the value function is not good enough for Monte-Carlo Tree Search, e.g., because it puts too much focus on the on-policy trajectory. Also, in this paper we use 9 predefined global actions for the game.
How to automatically learn global actions from the exponentially large space of unit-based commands remains a challenging open issue.
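To make the latent-space MCTS machinery above concrete, here is a hedged sketch of UCT child selection with virtual loss, the mechanism used to diversify initial exploration across threads sharing one tree. The node attributes (children, visits, value_sum) and the exact exploration constant are assumptions for illustration, not the paper's implementation.

```python
import math

def uct_select(node, c_explore=1.0, virtual_loss=5):
    # pick the child maximizing the UCT score; then apply a virtual loss so that
    # parallel threads sharing the tree are discouraged from re-selecting it
    best, best_score = None, -math.inf
    for child in node.children:
        q = child.value_sum / max(child.visits, 1)
        u = c_explore * math.sqrt(math.log(node.visits + 1) / (child.visits + 1))
        if q + u > best_score:
            best, best_score = child, q + u
    best.visits += virtual_loss       # temporarily inflate visit count
    best.value_sum -= virtual_loss    # and deflate value until the real backup
    return best
```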
The paper analyzes the latent space learned by model-free approaches in a miniature incomplete information game, trains a forward model in the latent space and apply it to Monte-Carlo Tree Search, yielding positive performance.
454
scitldr
Several state-of-the-art convolutional networks rely on inter-connecting different layers to ease the flow of information and gradient between their input and output layers. These techniques have enabled practitioners to successfully train deep convolutional networks with hundreds of layers. Particularly, a novel way of interconnecting layers was introduced as the Dense Convolutional Network (DenseNet) and has achieved state-of-the-art performance on relevant image recognition tasks. Despite their notable empirical success, their theoretical understanding is still limited. In this work, we address this problem by analyzing the effect of layer interconnection on the overall expressive power of a convolutional network. In particular, the connections used in DenseNet are compared with other types of inter-layer connectivity. We carry out a tensor analysis of the expressive power of inter-connections on convolutional arithmetic circuits (ConvACs) and relate our results to standard convolutional networks. The analysis leads to performance bounds and practical guidelines for the design of ConvACs. The generalization of these results to other kinds of convolutional networks is discussed via generalized tensor decompositions. Recently, densely connected networks such as FractalNet BID8, ResNet BID6, and DenseNet BID7 have obtained state-of-the-art performance on large problems where highly deep network configurations are used. Adding dense connections between different layers of a network virtually shortens its depth, thus allowing a better flow of information and gradient through the network. This makes possible the training of highly deep models. Models with these types of connections have been successfully trained with hundreds of layers. More specifically, DenseNets have achieved state-of-the-art performance on the CIFAR-10, CIFAR-100, SVHN, and ImageNet datasets, using models of up to 1 thousand layers in depth. Nevertheless, whether these connections provide a fundamental enhancement of the expressive power of a network, or just improve the training of the model, is still an open question. In BID7, DenseNet models with 3 times fewer parameters than their counterparts (ResNets) were able to achieve the same performance on the ImageNet challenge. Moreover, a theoretical understanding of why the connections used by DenseNets lead to better performance compared with FractalNets or ResNets is still pending. Despite the popularity of these models, there are few theoretical frameworks explaining their power and providing insights into their performance. Prior work considered convolutional networks with linear activations and product pooling layers, called convolutional arithmetic circuits (ConvACs), and argued for the expressiveness of deep networks using a tensor-based analysis. This analysis has been extended to rectifier-based convolutional networks via a generalization of the tensor product, where it was shown that ConvACs enjoy a greater expressive power than rectifier-based models, despite the popularity of rectifier-based networks in practice. Indeed, the empirical relevance of ConvACs was demonstrated through an architecture called SimNets. In addition, the generative ConvAC of BID11 achieved state-of-the-art performance in classification of images with missing pixels. These results served as motivation for the works of BID9 and BID10, among others, where different aspects of ConvACs were studied from a theoretical perspective; in one of these works, the inductive bias introduced by pooling geometries was studied.
Later, BID9 makes use of the quantum entanglement measure to analyze the inductive bias introduced by the correlations among the channels of ConvACs. Moreover, BID10 generalizes the convolutional layer of ConvACs by allowing overlapping receptive fields, in other words permitting stride values lower than the convolution patch size. These locally overlapping connections led to an enhancement of the expressive capacity of ConvACs. The notion of inter-layer connectivity for ConvACs was previously addressed in the context of sequential data processing, such as audio and text related tasks. In that work, the expressive capabilities of interconnecting processing blocks from a sequence were studied. Nevertheless, those types of interconnections are related to the sequential nature of the problem and differ from the ones used in ResNet, FractalNet and DenseNet. In this work, we extend the tensor analysis framework to obtain insightful knowledge about the effect of dense connections, of the kind used in DenseNets, FractalNet and ResNet, on the expressiveness of deep ConvACs. We study the expressive capabilities provided by different types of dense connections. Moreover, from these results we derive performance bounds and practical guidelines for the selection of the hyperparameters of a deep ConvAC, such as layer widths and the topology of dense connections. These results serve as a first step towards understanding dense connectivity in rectifier networks as well, since they can be further extended to include rectified linear units, in the same spirit as existing generalizations of the tensor product. The remainder of this paper is organized as follows. In Section 2, we introduce the notation and basic concepts from tensor algebra. In Section 3, we present the tensor representation of ConvACs, and later in Section 4, we obtain tensor representations for densely connected ConvACs. In Section 5, performance bounds and design guidelines are derived for densely connected ConvACs. The term tensor refers to a multi-dimensional array, where the order of the tensor corresponds to the number of indexes required to access one of its entries. For instance, a vector is a tensor of order 1 while a matrix is a tensor of order 2. In general, a tensor A of order N requires N indexes (d_1, ..., d_N) to access one of its elements. For the sake of notation, given I ∈ N, we use the expression [I] to denote the set {1, 2, ..., I}. In addition, the (d_1, ..., d_N)-th entry of a given tensor of order N and size M_1 × ··· × M_N is denoted by A_{d_1,...,d_N}. Moreover, for the particular case of tensors of order N with symmetric sizes M_1 = ··· = M_N = M, we use (R^M)^{⊗N} as shorthand notation for R^{M×···×M}. A crucial operator in tensor analysis is the tensor product ⊗, since it is necessary for defining the rank of a tensor. For two tensors B ∈ R^{M_1×···×M_p} and C ∈ R^{M_{p+1}×···×M_{p+q}}, the tensor product is defined such that B ⊗ C ∈ R^{M_1×···×M_{p+q}} and (B ⊗ C)_{d_1,...,d_{p+q}} = B_{d_1,...,d_p} C_{d_{p+1},...,d_{p+q}} for all (d_1, ..., d_{p+q}). In tensor algebra, a tensor A ∈ R^{M_1×M_2×···×M_N} is said to have rank 1 if it can be expressed as A = v^(1) ⊗ v^(2) ⊗ ··· ⊗ v^(N), with v^(i) ∈ R^{M_i}. Moreover, any tensor A ∈ R^{M_1×M_2×···×M_N} can be expressed as a sum of rank-1 tensors, that is A = Σ_{z=1}^{Z} v_z^(1) ⊗ ··· ⊗ v_z^(N), where Z ∈ N is sufficiently large and v_z^(i) ∈ R^{M_i} (note that such a decomposition always exists for sufficiently large Z). On the other hand, when Z is the minimum number such that this decomposition is satisfied, the rank of the tensor is defined to be rank(A) = Z, and the decomposition becomes equivalent to the well-known CANDECOMP/PARAFAC (CP) decomposition of A.
Another operator, which is at the core of the former works BID9, is the matricization operator. The operator [A] denotes the matricization of a tensor A ∈ R^{M_1×···×M_N} of order N; this matricization re-orders the elements of A into a matrix.

A ConvAC is a convolutional neural network that utilizes linear activation functions with product pooling, unlike most popular convolutional networks, which make use of rectifier activations with max or average pooling. Moreover, the input of the network is modeled by X = (x_1, ..., x_N) ∈ (R^s)^N, where x_i ∈ R^s denotes the vectorization of the i-th patch of the input image. For this analysis, it is assumed that a set of M features is obtained from every patch, that is DISPLAYFORM6. These features are selected from a given parametric family F = {f_θ : R^s → R : θ ∈ Θ}, such as Gaussian kernels, wavelet functions, or learned features. Then, to determine whether an input X belongs to a class from the set Y, the network evaluates score functions h_y(X) ∈ R and decides for the class y ∈ Y such that DISPLAYFORM0. Using this formulation, in FIG0(a) we observe an example of a single-hidden-layer ConvAC, while in FIG0(b) we observe the general case of a deep arithmetic circuit of L layers. As has been shown, any score function of a ConvAC can be expressed as a homogeneous polynomial of degree N in the input features, of the form DISPLAYFORM1, where A^y_{d_1,...,d_N} ∈ R are the polynomial coefficients: a polynomial of degree N with its M^N coefficients stored in the grid tensor A^y. For the special case of a shallow ConvAC with 1 × 1 convolutions and Z hidden units¹, shown in FIG0(a), the score functions are computed from the weight vectors a DISPLAYFORM3. This leads to the score function DISPLAYFORM4. The first step of the tensor analysis framework is to obtain an expression (in terms of the network parameters a^y_z and a^{z,i}_d) of the grid tensor A^y that represents this concrete network architecture; in other words, the expression for A^y that turns the network function into the polynomial form above. This expression was already obtained as DISPLAYFORM5, where ⊗ denotes the tensor product. Note that this expression is in the form of a standard CP decomposition of the grid tensor A^y, which implies that the rank of A^y is bounded by rank(A^y) ≤ Z. Moreover, the obtained results were generalized for the case of a deep ConvAC with size-2 pooling windows², thus L = log₂ N hidden layers, as shown in FIG0(b), leading to a grid tensor given by the hierarchical tensor decomposition DISPLAYFORM6, where r_0, ..., r_{L−1} ∈ N are the number of channels in the hidden layers, {a^{1,j,γ}}_{γ∈[r_0]} are the weights of the first hidden convolutions, DISPLAYFORM7 DISPLAYFORM8 are the weights of the hidden layers, and a^{L,1,y} ∈ R^{r_{L−1}} stores the weights corresponding to the output y in the output layer.
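The CP form of the shallow ConvAC grid tensor can be materialized directly for tiny N and M, which makes the rank bound rank(A^y) ≤ Z tangible. The following numpy sketch builds A^y as a sum of Z rank-1 terms, A^y = Σ_z a^y_z · a^{z,1} ⊗ ··· ⊗ a^{z,N}; argument names are ours.

```python
import numpy as np

def shallow_convac_grid_tensor(a_out, a_feat):
    # a_out: (Z,) output weights a^y_z; a_feat: (Z, N, M) per-patch weights a^{z,i}
    # Builds the M^N grid tensor, so this is only feasible for tiny N and M.
    Z, N, M = a_feat.shape
    grid = np.zeros((M,) * N)
    for z in range(Z):
        rank1 = a_feat[z, 0]
        for i in range(1, N):
            rank1 = np.multiply.outer(rank1, a_feat[z, i])  # tensor product
        grid += a_out[z] * rank1
    return grid
```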
To address this issue, BID7 proposed to group blocks of the same spatial dimensions into a dense block, as shown in FIG1. These dense blocks do not contain operations such as pooling, that alter the spatial dimensions of the input features. Moreover, in the DenseNet architecture the layers that perform the pooling operation are called transition layers, since they serve as transition between dense blocks. For example, in FIG1 we depict a dense block of 4 layers with growth rate k = 2, followed by a transition layer. 1 We must mention that the generalization to w × w convolutions is straightforward and was already covered by.2 Note that the generalization to different pooling sizes is straight forward and was done by. In the original DenseNet these transition layers included one convolution layer before the pooling operation. Nevertheless, for this work we consider transition layers composed of only pooling operations. Note that this does not affect the generality of the model, since avoiding dense connections on the convolutional layer preceding the transition layer is equivalent to including a convolution in that transition layer 3.In the case of ConvACs, any dense block of size greater than 1 can be represented as a dense block of size 1, since the activation function is the linear function (the non-linearity comes from the product pooling operator in the transition layer). Therefore, for ConvACs, it is only reasonable to analyze dense blocks of size 1. Note that, if we only allow dense connections between hidden layers within a dense block, a ConvAC is limited to a maximum growth rate of k = 1. In order to analyze the effect of broader connectivity we extend the concept of growth rate by allowing dense connections between dense blocks. With proper pooling, outputs of hidden layers belonging to different dense blocks can also be concatenated along the feature dimension. In the reminder of this paper we refer to the dense connections between hidden layers of the same block as intra-block connections, while the connections between hidden layers of different blocks as inter-block connections. In this section we analyze the effect of intra-block connections. We first start by constructing a densely connected version of a single hidden layer ConvAC. The ing network with growth rate k = 1 is shown in FIG2 (a). In the same manner as in, this architecture leads to the score function DISPLAYFORM0 Then, we present the following proposition regarding shallow ConvACs with dense connections of growth rate k = 1. The network's function of a densely connected shallow ConvAC shown in corresponds to the grid tensor DISPLAYFORM0 where DISPLAYFORM1 Proof See appendix B.1.Note that the rank of this tensor is now bounded by rank(A y) ≤ Z + M instead of Z, but adding these dense connections increases the number of parameters of the network from M N Z + Z to We now generalize the obtained for the case of a L-layered dense arithmetic circuit, with growth rare k = 1, as the one in FIG2 (b). Similarly to, the obtained grid tensor has the hierarchical decomposition given by DISPLAYFORM2 DISPLAYFORM3... DISPLAYFORM4 From this we observe that inter block connections account for virtually increasing the width of the network's hidden layers from r l tor l r l + r l−1 for all l = 0, 1,..., L − 1, where r −1 M. Note that this increased width comes at the expense of increasing the network's parameters. 
Moreover, in Section 5 we discuss whether increasing the network's width via intra-block dense connections leads to an enhancement of its overall expressive power. In this section we study broader connectivity via dense inter-block connections. As discussed in Section 4, proper pooling of the preceding features must take place before concatenating them into the current layer. Since this type of connection has not been considered in the original DenseNets, we propose 3 possible ways of realizing such connections (via product, average, or max pooling). For a ConvAC with pooling window size w_pool, an inter-block connection that connects blocks is characterized by its jump length DISPLAYFORM0. An example of an inter-block connection of jump length L_jump = 1 can be seen in FIG2(c). To perform these inter-block connections, the sizes along the spatial dimensions of the preceding features must be reduced by L_jump · w_pool before concatenating them along the feature dimension of layer l. This spatial size reduction may be realized via pooling of the preceding features with window size L_jump · w_pool. When using a pooling layer, the size along the feature dimension remains unchanged. Moreover, the type of pooling employed (product, average, or maximum) affects the expressive potential of the resulting ConvAC. Furthermore, the following proposition addresses the effect that adding dense inter-block connections via average pooling has on the network function of a ConvAC.

Proposition 2. Adding inter-block connections via average pooling of jump length L_jump ≥ 1 to a standard ConvAC with grid tensor A^y ∈ (R^M)^{⊗N} leads to a network function of the form DISPLAYFORM1, where g(X) contains polynomial terms on DISPLAYFORM2.

Remark 1. This result is also valid when the connections are done by addition instead of concatenation, as is done in ResNet and FractalNet. Proof: see appendix B.2.

From this proposition we conclude that adding inter-block connections via average pooling does not alter the grid tensor A^y; instead, these connections account for extra polynomial terms of degree strictly less than N. Note that, for the special case where the input features belong to an exponential kernel family, such as F = {f_θ(x) = e^{θᵀx} : R^s → R : θ ∈ Θ} or F = {f_θ(x) = e^{‖θ−x‖_p} : R^s → R : θ ∈ Θ}, where ‖·‖_p denotes the p-norm with p ∈ N, the number of polynomial terms is equivalent to the number of exponential basis functions that the network function can realize. Therefore, another valid measure of expressiveness is the number of polynomial terms a ConvAC is able to realize. Given a certain ConvAC topology, the number of polynomial terms can be computed inductively by expanding the polynomial products of every layer via generalized binomial expansions. Such an analysis is left for future contributions. Moreover, if we perform these connections via product pooling, the features to be concatenated correspond to polynomial terms of the same order. This leads to a generalization of the intra-block connections from Section 4.1, with virtually increased widths r̃_l ≜ r_l + Σ_{q=1}^{L_jump} r_{l−1−q} (a small sketch computing these widths is given below). Finally, we leave the analysis of inter-block connections via maximum pooling for future work and consider only product pooling inter-block connections in the remainder of this paper. For the sake of comparison, let us assume networks with hidden layer widths r_l decaying (or increasing) at an exponential rate λ ∈ R. Formally, this is r_l = λ r_{l−1} ∈ N, thus r_l = λ^l r for all l = 0, 1, ..., L − 1, where r ≜ r_0.
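As promised above, a small sketch computing the exponentially decaying widths r_l = λ^l r and the virtually increased widths r̃_l induced by product-pooling connections of jump length L_jump. Treating r_{−1} as M (the input feature count) and skipping indices below that is our assumption for illustration.

```python
def virtual_widths(r, lam, L, M, L_jump=1):
    # r_l = lam**l * r for l = 0..L-1, with r_{-1} taken to be M;
    # inter-block product-pooling connections of jump length L_jump give
    # r~_l = r_l + sum_{q=1..L_jump} r_{l-1-q}
    def width(l):
        return M if l == -1 else round((lam ** l) * r)
    out = []
    for l in range(L):
        extra = sum(width(l - 1 - q) for q in range(1, L_jump + 1) if l - 1 - q >= -1)
        out.append(width(l) + extra)
    return out

# Example: virtual_widths(r=64, lam=0.5, L=4, M=16, L_jump=1)
```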
To shorten the notation, we denote by (L, r, λ, k) a ConvAC with exponential width decay λ ∈ R, depth L ∈ N, initial width r ∈ N and growth rate k ∈ N. A growth rate of k = 0 refers to a standard ConvAC with no dense connections.

Definition 1. Suppose that the weights of an (L, r, λ, k) ConvAC, with L, k ∈ N and r, λ ∈ R, are randomly drawn according to some continuous non-vanishing distribution. Then, this (L, r, λ, k) ConvAC is said to have weak dense gain G_w ∈ R if, with probability p > 0, we obtain score functions that cannot be realized by an (L, r', λ, 0) ConvAC with r' < G_w · r. When p = 1, this (L, r, λ, k) ConvAC is said to have a strong dense gain G_s = G_w ∈ R.

Using this definition, we present a bound for the weak dense gain G_w in the following theorem: DISPLAYFORM0. Proof: see appendix B.3. This general bound may serve as a guideline for tailoring M and the widths r_0, ..., r_{L−1} such that we exploit the expressiveness added by dense connections. A second bound, for the particular case of k = 1, is given in Theorem 5.2 and also proven in appendix B.3. Using this result, we are able to quantify the expressive gain provided by dense inter-block connections. If a ConvAC has a dense gain G_w = (1 + 1/λ) that is already close to the general bound from Theorem 5.1, it is less encouraging to include broader dense connections, since doing so would increase the number of parameters of the model while there is no room for a significant increase in expressive gain. In this scenario, connections such as the ones in ResNet and FractalNet may be more beneficial, since they do not increase the size of the model, while at the same time enhancing its trainability. This last theorem shows that there exists a regime where these bounds can be achieved with strong dense gain. Whether this is true outside this regime is still an open question, since current knowledge about the rank of random tensors is limited. Moreover, these theorems do not consider the additional number of parameters added by dense connections. We complete our analysis by addressing this issue in the following proposition.

Proposition 3. Let ∆P_dense ∈ N be the additional number of parameters that are added to an (L, r, λ, 0) ConvAC when we introduce dense connections of growth rate k > 0. In the same manner, let ∆P_stand ∈ N be the number of parameters that are added to an (L, r, λ, 0) ConvAC when we increase its initial width r by a factor G ∈ R. Then the ratio between ∆P_dense and ∆P_stand is greater than DISPLAYFORM1. Proof: see appendix B.4.

The factor G from this proposition directly relates to the dense gain of a ConvAC, so this ratio may be used to decide whether it is worthwhile to add dense connections to a model (we want this ratio to be as large as possible). Finally, Theorems 5.1 and 5.2 directly bound this ratio, which gives the practitioner a guideline to decide which connections (if any) should be added to a given model.

Lemma 1. Given Z ∈ N, let A ∈ (R^M)^{⊗P} be a random tensor of even order P ≥ 2 such that DISPLAYFORM0, where the a_z^(k) ∈ R^M are randomly drawn from a non-vanishing continuous distribution for all k ∈ [P] and z ∈ [Z]. Then, if Z ≤ M^{P/2}, we have that rank(A) = rank([A]) = Z with probability 1. This lemma also holds when, for a subset Z̄ ⊆ [Z], we have that a_z^(k) = a_z e_z ∈ R^M for all z ∈ Z̄, where the a_z ∈ R are randomly drawn from a non-vanishing continuous distribution.

Proof. Using the definition of the matricization operator, we get that the matricization [A] is DISPLAYFORM1. Note that, from this expression, it is straightforward to see that the rank of [A] is always less than or equal to Z.
Let U DISPLAYFORM2 be a permuted version of [A] such that the first Z̄ rows of U correspond to the rows z̄ ∈ Z̄ of [A], and the first Z̄ columns of U correspond to the columns z̄ ∈ Z̄ of [A]. Since permuting the rows and the columns of a matrix does not alter its rank, U has the same rank as [A]. Now, let us partition U into blocks as DISPLAYFORM3, where P is of size Z-by-Z, and Q, W, Z have matching dimensions. Note that if rank(P) = Z then rank(U) ≥ Z, which leads to DISPLAYFORM4. Therefore, it is sufficient to show that rank(P) = Z with probability 1 to conclude this proof. To that end, let us define the mapping from x ∈ R^{MPZ} to P = P(x) as DISPLAYFORM5. Note that this definition of x implies that the a_z^(k) DISPLAYFORM6 are computed as above, so that [A] = [A](x), and thus Q = Q(x) and P = P(x). Now, det P(x) is a polynomial in x, so it either vanishes on a set of measure zero or it is the zero polynomial (see BID0). If we set x equal to some x_0 ∈ R^{MPZ} such that a DISPLAYFORM7 is a matrix with 1 in the entry (z̄, z̄) and zero elsewhere, then [A](x_0) is a diagonal matrix with ones on the diagonal elements z̄ ∈ Z̄ and zeros elsewhere. This leads to P(x_0) = I_Z, whose determinant satisfies det P(x_0) ≠ 0. Finally, since there exists an x_0 such that the polynomial det P(x_0) is not zero, we conclude that det P(x) is not the zero polynomial, which means that det P(x) ≠ 0 with probability 1, thus proving this lemma.

Lemma 2. Let A, B DISPLAYFORM8 ∈ (R^M)^{⊗P} be random tensors of even order P ≥ 2 and Z_1, Z_2 ∈ N be such that DISPLAYFORM9, where the vectors in R^M are randomly drawn from a non-vanishing continuous distribution. Then, if Z_1 ≤ M^{P/2} and Z_2 ≤ M^{P/2}, we have that rank(A ⊗ B) = Z_1 Z_2 with probability 1.

Proof. Let C ∈ (R^M)^{⊗2P} be the random tensor defined as C = A ⊗ B. We may express C as DISPLAYFORM10. Then, we define the rank-1 tensors C^(q,z) to be C^(q,z) = a DISPLAYFORM11. Since C is now expressed as a sum of Z_1 Z_2 rank-1 tensors, we have that rank(C) ≤ Z_1 Z_2. Since Z_1 ≤ M^{P/2} and Z_2 ≤ M^{P/2}, we may use Lemma 1, which leads to rank([A]) = Z_1 and rank([B]) = Z_2 with probability 1. Finally, we use the properties of the Kronecker product to obtain the rank of the matricization [C] as rank(DISPLAYFORM12) with probability 1, thus proving the lemma.

Corollary 1. Let A DISPLAYFORM13 ∈ (R^M)^{⊗P} be tensors of order P > 2 and Z ∈ N be such that

B.1 PROOF OF PROPOSITION 1

Proof. Let x = DISPLAYFORM0 ∈ R^{MN}. We reformulate the densely connected score function to have the same form as the standard one. To that end we define a DISPLAYFORM0, which has the same form as before. Therefore, proceeding as before, we obtain the grid tensor for this architecture as DISPLAYFORM1, thus proving this proposition.

B.2 PROOF OF PROPOSITION 2

Proof. The output of the l-th layer of an (L, r, λ, 0) ConvAC can be stored in the vectors of mappings DISPLAYFORM1. Moreover, since the entries of these vectors are the result of l − 1 convolution-pooling layers with product pooling of window size 2, all the mappings δ^{l,j}_1(x) can be expressed as a sum of polynomial terms in x of degree 2^l. Now, let the coefficient vectors a DISPLAYFORM2 be the weight vectors for the convolution of the l-th layer. To shorten the notation, we use ⟨a^{l,j,γ}, DISPLAYFORM3⟩ as shorthand for the convolution between these vectors. Then, the outputs of layer l of this ConvAC are given by δ^{l+1,j} ∈ R^{r_{l+1}} with δ DISPLAYFORM4. If we recursively calculate these output vectors up to the L-th layer, we obtain the score functions h DISPLAYFORM5 1(x) ∈ R. We now consider the effect of adding dense connections via average pooling from some k ∈ N preceding layers l − 1, ..., l − k.
To this end, let r̃ l = Σ q=1,...,k r l−q be the total size along the feature dimension of the vectors to be concatenated. In addition, let DISPLAYFORM6 Rr̃ l be the vectors of mappings of the corresponding preceding features at the layer l for j ∈ [N/r l]. In order to compute the convolutions of this layer, an additional vector of coefficients is required as b DISPLAYFORM7 Then, the outputs of the l-th layer of this (L, r, λ, k) ConvAC are then denoted as the vectors δ̃ DISPLAYFORM8 where DISPLAYFORM9 Note that the entries of ω l,j (x) are assumed to come from preceding layers with an appropriate average pooling. Since performing average pooling does not increase the degree of the polynomial terms involved (only product pooling does) and the jump length L jump is at least 1, the entries of ω l,j (x) have at most polynomial degree 2 l−1, which is strictly less than the degree of the entries of δ l,j (x) (i.e., 2 l). Therefore, from the obtained expression of ω l+1,j we observe that it has polynomials with degree no greater than 2 l + 2 l−1, while the entries of δ l+1,j have a strictly higher degree of DISPLAYFORM10 Moreover, since a l,j,γ, δ l,j + ω l,j can be expressed as DISPLAYFORM11 we can make use of the obtained result in an inductive manner up to the L-th layer, thus leading to DISPLAYFORM12, where g(x) contains polynomial terms of x of order strictly less than N, thus proving this theorem. Note that this also applies to additive and residual connections, such as the ones used in ResNet and FractalNet, since they can be expressed as in. B.3 PROOF OF THEOREMS 5.1 TO 5.3 DISPLAYFORM13 For the forthcoming analysis, let us assume r 0 ≤ M. This assumption is made so that we can write min{r 0, M} = r 0, merely for notational purposes, since we show that it does not affect the generality of the result. Using this assumption, we upper bound the rank of the grid tensor as DISPLAYFORM14 It was shown in that, when the weights are independently generated from some continuous distribution, we have that rank φ 1,j,γ = min{r 0, M} with probability 1. Note that the bounds obtained for r 0 values greater than M are the same as for r 0 = M, thus implying that the assumption of r 0 ≤ M does not affect the generality of the result. Finally, by induction up to the L-th layer, we obtain a bound for the grid tensor rank as DISPLAYFORM15 Recall that we assumed networks with hidden layer widths r l decaying (or increasing) at an exponential rate λ ∈ R; formally, r l = λr l−1 ∈ N, thus r l = λ^l r for all l = 0, 1,..., L − 1, where r ≡ r 0. Therefore, we may simplify the obtained bound to DISPLAYFORM16 We continue this analysis by proving Theorem 5.1. To that end, let A y dense be the grid tensor of a dense (L, r, λ, k) ConvAC with k > 0, while A y stand is the grid tensor of a (L, r', λ, 0) ConvAC with r' ∈ R. As discussed in Section 4, this dense version of the former (L, r, λ, 0) ConvAC is equivalent to virtually increasing the widths of the ConvAC, which translates into extra additive terms in the expressions from 12. Moreover, using corollary 1 we observe that, if the ranks of the tensors φ l,j,γ are additive and multiplicative up to rank(A y dense), so they are up to rank(A y stand). A weak dense gain value G w ∈ R is achieved when there is a set of functions realized by the (L, r, λ, k) ConvAC that cannot be realized by a (L, r', λ, 0) ConvAC unless r' = G w r.
To bound this gain, let us assume the best-case scenario where rank(A DISPLAYFORM17 which proves Theorem 5.1. For Theorem 5.2 we consider the particular case of k = 1, which yields a core tensor given by the hierarchical tensor decomposition from. We use the same assumption of r 0 ≤ M and define the virtually increased widths r̃ l = r l + r l−1 ∈ N for l = 1,..., L − 1 and r̃ 0 = M. This leads to rank φ 1,j,γ = rank Note that for r l = λr l−1 ∈ N (λ ∈ R), we get virtually increased widths r̃ l = r l + r l−1 = (1 + 1/λ) λ^l r. As in the proof of Theorem 5.1, the maximum dense gain G w is obtained when rank(A y dense) reaches the maximum possible rank. In this particular case, this corresponds to rank(A By definition, we have that ∆P stand = P(L, Gr, λ, 0) − P(L, r, λ, 0) and ∆P dense = P(L, r, λ, k) − P(L, r, λ, 0), thus yielding DISPLAYFORM18 Finally, we use these expressions to compute the ratio of interest, which proves this proposition.
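Lemma 1 above is easy to probe numerically. The sketch below (an assumption-laden illustration, not the paper's code) builds the matricization [A] of a random tensor A = Σ_z a_z^(1) ⊗ ... ⊗ a_z^(P) directly, assuming the standard odd/even-mode matricization used in the ConvAC literature, and checks that its rank equals Z whenever Z ≤ M^(P/2):

```python
import numpy as np

rng = np.random.default_rng(0)
M, P, Z = 3, 4, 7               # Z = 7 <= M**(P/2) = 9

# [A] = sum_z kron(odd-mode vectors) * kron(even-mode vectors)^T,
# so each term contributes a rank-1 matrix of size M^(P/2) x M^(P/2).
A_mat = np.zeros((M ** (P // 2), M ** (P // 2)))
for _ in range(Z):
    a = rng.standard_normal((P, M))      # a_z^(1), ..., a_z^(P)
    u, v = a[0], a[1]
    for k in range(2, P, 2):
        u = np.kron(u, a[k])             # modes 1, 3, ... index rows
        v = np.kron(v, a[k + 1])         # modes 2, 4, ... index columns
    A_mat += np.outer(u, v)

print(np.linalg.matrix_rank(A_mat))      # -> 7 (= Z) almost surely
```

Raising Z above M^(P/2) in this sketch makes the rank saturate at M^(P/2), consistent with the upper bound used throughout the proofs.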
We analyze the expressive power of the connections used in DenseNets via tensor decompositions.
455
scitldr
We consider the following central question in the field of Deep Reinforcement Learning (DRL): How can we use implicit human feedback to accelerate and optimize the training of a DRL algorithm? State-of-the-art methods rely on human feedback being provided explicitly, requiring the active participation of humans (e.g., expert labeling, demonstrations, etc.). In this work, we investigate an alternative paradigm, where non-expert humans are silently observing (and assessing) the agent interacting with the environment. The human's intrinsic reactions to the agent's behavior are sensed as implicit feedback by placing electrodes on the human scalp and monitoring what are known as event-related electric potentials. The implicit feedback is then used to augment the agent's learning in the RL tasks. We develop a system to obtain and accurately decode the implicit human feedback (specifically error-related event potentials) for state-action pairs in an Atari-type environment. As a baseline contribution, we demonstrate the feasibility of capturing error-potentials of a human observer watching an agent learning to play several different Atari-games using an electroencephalogram (EEG) cap, and then decoding the signals appropriately and using them as an auxiliary reward function to a DRL algorithm with the intent of accelerating its learning of the game. Building atop the baseline, we then make the following novel contributions in our work: (i) We argue that the definition of error-potentials is generalizable across different environments; specifically, we show that error-potentials of an observer can be learned for a specific game, and the definition used as-is for another game without requiring re-learning of the error-potentials. (ii) We propose two different frameworks to combine recent advances in DRL into the error-potential based feedback system in a sample-efficient manner, allowing humans to provide implicit feedback while training in the loop, or prior to the training of the RL agent. (iii) Finally, we scale the implicit human feedback (via ErrP) based RL to reasonably complex environments (games) and demonstrate the significance of our approach through synthetic and real user experiments. Deep Reinforcement Learning (DRL) algorithms have now beaten human experts in Go, taught robots to become parkour masters, and enabled truly autonomous vehicles. However, current state-of-the-art RL agents equipped with deep neural networks are inherently complex, difficult and time-intensive to train. Particularly in complex environments with sparse reward functions (e.g., maze navigation), the DRL agents need an inordinate amount of interaction with the environment to learn the optimal policy. Human participation can potentially help DRL algorithms by accelerating their training and reducing the learning costs without compromising final performance. This potential has inspired several research efforts where alternative (or supplementary) feedback is obtained from the human participant. Such approaches, despite being highly effective, severely burden the human-in-the-loop, demanding either expert demonstrations or explicit feedback. In this paper, we investigate an alternative paradigm that substantially increases the richness of the reward functions, while not severely burdening the human-in-the-loop. We study the use of electroencephalogram (EEG) based brain waves of the human-in-the-loop to generate the reward functions that can be used by the DRL algorithms.
Such a model will benefit from the natural rich activity of a powerful sensor (the human brain), but at the same time not burden the human if the activity being relied upon is intrinsic. This paradigm is inspired by a high-level error-processing system in humans that generates error-related potential/negativity (ErrP or ERN). When a human recognizes an error made by an agent, the elicited ErrP can be captured through EEG to inform the agent about the sub-optimality of the taken action in the particular state. As a baseline contribution, we demonstrate the feasibility of capturing error-potentials of a human observer watching an agent learning to play several different Atari-games, and then decoding the signals appropriately and using them as an auxiliary reward function to a DRL algorithm. We show that a full access approach to obtain feedback on every state-action pair while the RL agent is learning can significantly speed up the training convergence of the RL agent. We contend that while obtaining such implicit human feedback through EEG is less burdensome, it is still a time-intensive task for the subject and the experimenter alike. This, combined with the noisy EEG signals and stochasticity in inferring error-potentials, raises significant challenges in terms of the practicality of the solution. In this context, we first argue that the definition of ErrPs is generalizable across different environments. We show that ErrPs of an observer can be learned for a specific game, and the definition used as-is for another game without requiring re-learning of the ErrP. This is notably different from previous approaches (Chavarriaga & Millán, 2010), where the labeled ErrPs are obtained in the same environment (where the RL task is performed). For any new and unseen environment, our approach does not require the human to go through the training phase again, and assumes no prior knowledge about the optimal state-action pairs of the environment. We present two different frameworks to combine recent advances in DRL into the implicit human feedback mechanism (via ErrP) in a practical, sample-efficient manner. This reduces the cost of human supervision sufficiently, allowing the DRL systems to train. Relying on Active Learning (AL) methods, our first framework allows humans to provide implicit feedback in the loop, while an RL agent is being trained. An uncertainty-based acquisition function is modeled to select the sample state-action pairs for querying the implicit human feedback. However, since the first framework requires a human to always be in the loop, our second framework instead allows humans to provide their feedback implicitly before the agent starts training. Based on the human feedback obtained during pre-training, a quality (Q) function is learned over these imperfect demonstrations to provide the supplementary reward to the RL agent. We present results from real ErrP experiments to evaluate the acceleration in learning, and sample efficiency, in both frameworks. In summary, the novel contributions of our work are: 1. We demonstrate the generalizability of error-potentials over various Atari-like environments (discrete grid-based navigation games, studied in this work), enabling the estimation of implicit human feedback in new and unseen environments. 2. We propose two different frameworks to combine recent advances in DRL into the ErrP-based feedback system in a practical, sample-efficient manner. The first framework allows humans to provide implicit feedback while training in the loop.
Taking advantage of recent approaches in learning from imperfect demonstrations, in the second framework, the implicit human feedback is obtained prior to the training of the RL agent. 3. We scale the implicit human feedback (via ErrP) based RL to reasonably complex environments and demonstrate the significance of our approach through synthetic and real user experiments. Prior work studied RL from human rankings or ratings; however, these methods rely on explicit human feedback, and assume that the feedback is noiseless. Demonstrations have been commonly used to improve the efficiency of RL, and a common paradigm is to initialize RL algorithms with a good policy or Q function. In this work, we rely on implicit feedback from non-expert humans (via ErrPs), which is inherently noisy. (Chavarriaga & Millán, 2010) demonstrate the benefit of ErrPs in a very simple setting (i.e., very small state-space), and use ErrP-based feedback as the only reward. Moreover, in all of these works, the ErrP decoder is trained on a similar game (or robotic task), essentially using knowledge that is supposed to be unknown in the RL task. In our work, we use labeled ErrP examples of very simple and known environments to train the ErrP decoder, and combine them with recent advances in DRL in a sample-efficient manner for reasonably complex environments. Consider a Markov Decision Process (MDP) problem M, as a tuple < X, A, P, P 0, R, γ >, with state-space X, action-space A, transition kernel P, initial state distribution P 0, accompanied with reward function R, and discounting factor 0 ≤ γ ≤ 1. Here the random variable Z(s, a) denotes the accumulated discounted future rewards starting from state s and action a. In this work, we only consider MDPs with discrete actions and states. In model-free RL methods, the central idea of most prominent approaches is to learn the Q-function by minimizing the Bellman residual, i.e., L(Q) = E π [(Q(x, a) − r − γQ(x', â))²], and the temporal difference (TD) update, where the transition tuple (x, a, r, x') consists of a consecutive experience under behavior policy π. Modern techniques in DRL such as DQN and the target network are also adopted throughout the paper. The human's intrinsic reactions to the agent's behavior are sensed as implicit feedback by placing electrodes on the human scalp and monitoring what are known as event-related electric potentials. We rely on the Riemannian Geometry framework for the classification of error-related potentials, presented in Appendix 7.1. We consider the classification of error-related potentials as a binary variable indicating the presence (i.e., the action taken by the agent is incorrect) and absence of error (i.e., the action taken by the agent is correct). With the availability of implicit human feedback, we explore how the training of state-of-the-art DRL algorithms can be accelerated. A trivial approach is to obtain feedback on every state-action pair while the RL agent is learning (also known as full access). The idea is to add a negative penalty to the reward when an ErrP is detected, and to keep using the original reward from the environment when no ErrP is detected. The evaluation of this method based on real ErrP data is shown in section 5.1. The results validate that this method can speed up the training convergence of the RL agent significantly.
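The full access scheme just described reduces to a one-line reward-shaping rule. A minimal sketch follows; the penalty magnitude of 1.0 is an assumption for illustration, not a value taken from the paper:

```python
def shaped_reward(env_reward: float, errp_detected: bool, penalty: float = 1.0) -> float:
    """Full-access scheme: subtract a fixed penalty when the decoded EEG epoch
    for the current state-action pair is classified as an error-potential;
    otherwise pass the environment reward through unchanged."""
    return env_reward - penalty if errp_detected else env_reward
```

The RL agent's update rule is untouched; only the scalar reward fed into it changes, which is what lets this scheme wrap any standard DQN-style learner.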
This, combined with the noisy EEG signals and stochasticity in inferring ErrPs, raises significant challenges in terms of the practicality of the solution. In this section, we discuss three approaches towards integrating the ErrP with recent advances in DRL in a practical manner. Firstly, we show that ErrPs of an observer can be learned for a specific game, and the definition used as-is for another game without requiring re-learning of the ErrP. Further, we discuss two frameworks to combine recent advances in DRL into the implicit human feedback mechanism (via ErrP) to accelerate the RL agent's learning in a sample-efficient manner. The first framework allows humans to provide implicit feedback while training in the loop, without any prior knowledge of the game. In the second framework, the implicit human feedback is obtained prior to the training of the RL agent. It exploits the initially given trajectories with ErrP labels to learn a reward function for augmenting the RL agent, where a human with some prior knowledge is needed to specify some non-expert trajectories. Recently, it has been shown that a Q function can generalize better in the state-space when trained with non-expert demonstrations. Error-potentials in EEG signals are studied under two major paradigms in human-machine interaction tasks: (i) feedback and response ErrPs: errors made by the human, (ii) interaction ErrPs: errors made by the machine in interpreting human intent (Ferrez & Millán, 2005). Another interesting paradigm is when the human is watching (and silently assessing) the machine performing a specific task (Chavarriaga & Millán, 2010). The manifestation of these potentials across these paradigms was found to be quite similar in terms of their general shape, timings of negative and positive peaks, frequency characteristics, etc. (Ferrez & Millán, 2005; Chavarriaga & Millán, 2010). This prompts us to explore the consistency of the error-potentials across different environments (i.e., games, in our case). We restrict the scope of our work to the paradigm of the human acting as a silent observer of the machine's actions. In Fig. 5, we plot the grand average waveforms across three environments (Maze, Catch and Wobble), to visually validate the consistency of potentials. We can see that the shape of negativity and the timings of the peaks are quite consistent across the three game environments studied in this work. Further, we perform an experimental evaluation in section 5.2.1, to show that error-potentials are indeed generalizable across environments, and can further be used to inform a deep reinforcement learning algorithm in new and unseen environments. Active Learning (AL) frameworks have proved quite successful in optimizing the learning task while minimizing the required number of labeled examples. In AL, an acquisition function is used to efficiently select the data points requested for labeling from an external oracle. We introduce a framework of training RL agents with implicit non-expert human feedback in the loop, leveraging recent advances in active learning methods. We present our active learning based framework in Fig. 2 (a). We use an uncertainty-based acquisition function to select the state-action pairs required for non-expert human labeling (via ErrP). Since it is critical to keep the coherence between consecutive state-action pairs shown to the human subject, a full trajectory from start to end of the game can be shown.
The calculation of the acquisition function is based on the state-action pair uncertainty along the trajectory, as explained in Appendix 7.3. Specifically, we model the Deep-Q-Network (DQN) by Bayesian learning methods, which have a strong capacity for uncertainty estimation. The DQN is trained with experience collected in the replay buffer, a structure commonly used in deep RL algorithms. In contrast to the full access method, the presented framework queries for ErrP based state-action pair labeling only at the end of every N E episodes. We further store the decoded ErrP labels into buckets, to be used for future training augmentation. In every step, the RL agent inquires about the negativity of the current state-action pair from the buckets, instead of requesting new ErrP labels, which reduces the number of ErrP inquiries significantly. This negativity can add a negative penalty to the environmental reward as an auxiliary signal. Trajectory Generation and Selection: ErrP labeling informs the RL agent about the negativity of selected actions, ideally preventing the agent from deviating from the optimal paths in the game. However, these optimal paths are unknown a priori. For generating trajectories for ErrP labeling, we empirically found that greedily following the action with the largest Q value in every state, based on the most recently updated DQN, performs very well. Then the trajectory with the largest acquisition function output is selected for querying ErrP labels. The three acquisition functions evaluated in experiments are all formulated based on the uncertainty estimation of Q values, and their formulations and approximations are introduced in Appendix 7.3. The framework is presented in Algorithm 1. RL algorithms deployed in environments with sparse rewards demand heavy exploration (a large number of trial-and-error interactions) during the initial stages of training. Imitation learning from a small number of demonstrations followed by RL fine-tuning is a promising paradigm to improve the sample efficiency in such cases (Večerík et al., 2017). Inspired by the paradigm of imitation learning, we develop a novel framework that can robustly learn from such demonstrations. In each iteration of Algorithm 1, starting from a random initial state, the RL agent plays the game until the end of the episode, and the DQN Q is updated with experiences randomly selected from the replay buffer. The flowchart of the second framework is in Fig. 2(b). In this framework, the trajectories in the demonstration are first criticized by ErrP labeling in experiments, and a quality (Q) function is learned from the labeled trajectories in the reward learning step. An alternative reward is derived from the learned quality function, augmenting the following RL algorithm. This approach is considerably different from our first framework (section 4.2), as we only make queries for ErrP labeling on trajectories initially given in the demonstration (rather than making queries continuously during every training step). These queries are made before the RL agent starts training, improving the efficiency of the total number of labeling (implicit, ErrP based) queries made to the external oracle (human). Similar to the first framework, the demonstrations for ErrP labeling can only consist of complete trajectories. We assume that the trajectories in the demonstration are initially specified by a human or other external sources, without any reward information. This is a reasonable assumption since the rewards may be unknown to humans in general cases.
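The paper's exact acquisition formulations live in its Appendix 7.3; the sketch below is one plausible instantiation of the entropy variant, assuming posterior Q-value samples (e.g., from bootstrapped heads) are available for each step of a candidate trajectory:

```python
import numpy as np

def entropy_acquisition(q_samples: np.ndarray) -> float:
    """q_samples: shape (T, S, A) -- S posterior samples of Q values for each
    of T states along a trajectory. Score the trajectory by the entropy of
    the induced softmax policy, averaged over samples and time steps."""
    logits = q_samples - q_samples.max(axis=-1, keepdims=True)
    p = np.exp(logits)
    p /= p.sum(axis=-1, keepdims=True)
    entropy = -(p * np.log(p + 1e-12)).sum(axis=-1)   # (T, S)
    return float(entropy.mean())

# Trajectory selection: query ErrP labels on the most uncertain trajectory.
# best = max(candidate_trajectories, key=lambda tau: entropy_acquisition(q_post(tau)))
```

Here `q_post` is a hypothetical helper that returns the posterior Q samples for a trajectory; the mutual-information and confidence-interval variants would replace only the scoring function.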
The human subject in the experiment provides feedback in an implicit manner (via ErrP) on state-action pairs along the trajectories, labeling every state-action pair as a positive or negative sample. Based on the decoded ErrP labels and initially given trajectories, the proposed framework learns the reward function based on maximum entropy RL methods, as explained in detail in Appendix 7.4. Different from conventional imitation learning, these trajectories are not given by expert policies, allowing non-experts to demonstrate. Moreover, the Q function learned from imperfect demonstrations can have better estimates on states unseen in the demonstration, and provide better generalization in the state-space. We have developed three discrete-grid based navigation games in OpenAI Gym emulating the Atari framework, namely (i) Wobble, (ii) Catch, and (iii) Maze, shown in Fig. 3(a). We use the default Atari dimensions (i.e., 210x160 pixels). The source code of the games can be found in the public repository 1, and can be used with the OpenAI Gym module. 1 source code is attached with the submission for anonymity purposes Wobble: Wobble is a simple 1-D cursor-target game, where the middle horizontal plane is divided into 20 discrete blocks. At the beginning of the game, the cursor appears at the center of the screen, and the target appears no more than three blocks away from the cursor position. The action space for the agent is moving one step either left or right. The game is finished when the cursor reaches the target. Once the game is finished, a new game is started with the cursor in place. Catch: Catch is a simplistic version of Eggomania 2 (Atari 2600 benchmark), where we display a single egg on the screen at a time. The screen dimensions are divided into a 10x10 grid, where the egg and the cart each occupy one block. The action space of the agent consists of "NOOP" (no operation), "moving left" and "moving right". At the start of the game, the horizontal position of the egg is chosen randomly. At each time step, the egg falls one block in the vertical direction. Maze: Maze is a 2-D navigational game, where the agent has to reach a fixed target. The Atari screen is centered and divided into 10x10 equal-sized blocks. The agent and target each occupy one block. The action space consists of four directional movements. The maze architecture is kept fixed for the purpose of this work. If an agent moves but hits a wall, a quick blinking of the agent is displayed, to show the action taken by the agent. We designed and developed an experimental protocol, where a machine agent plays a computer game, while a human silently observes (and assesses) the actions taken by the machine agent. These implicit human reactions are captured by placing electrodes on the human scalp and recording EEG. The electrode cap was attached to the OpenBCI 3 platform, which was further connected to a desktop machine over a wireless channel. In the game design (developed on OpenAI Gym), we open a TCP port, and continuously transmit the current state-action pair using the TCP/IP protocol. We used OpenViBE software to record the human EEG data. OpenViBE continuously listens to the TCP port (for state-action pairs), and timestamps the EEG data in a synchronized manner. A total of five human subjects were recruited using standard procedures.
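To make the game descriptions concrete, here is a minimal sketch of the Wobble environment in the classic OpenAI Gym style. The spawn rule and the 1.0 terminal reward are assumptions for illustration; the released game code may differ:

```python
import numpy as np
import gym
from gym import spaces

class Wobble(gym.Env):
    """Minimal sketch of the 1-D Wobble game: a 20-block line, cursor starts
    at the center, target spawns within three blocks; actions move the
    cursor one block left or right."""
    def __init__(self):
        self.action_space = spaces.Discrete(2)               # 0: left, 1: right
        self.observation_space = spaces.MultiDiscrete([20, 20])
        self.reset()

    def reset(self):
        self.cursor = 10
        offset = int(np.random.choice([-3, -2, -1, 1, 2, 3]))
        self.target = int(np.clip(self.cursor + offset, 0, 19))
        return np.array([self.cursor, self.target])

    def step(self, action):
        move = 1 if action == 1 else -1
        self.cursor = int(np.clip(self.cursor + move, 0, 19))
        done = self.cursor == self.target                    # episode ends at target
        return np.array([self.cursor, self.target]), float(done), done, {}
```

A state-action pair from `step` is exactly what gets streamed over the TCP port so that OpenViBE can timestamp the EEG epoch that follows each agent action.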
We recruited five human subjects (mean age 26.8 ± 1.92, 1 female) for collecting the EEG data. For each subject, we conducted three separate sessions over multiple days. For each subject-game pair, the experimental duration was less than 15 minutes. The agent took an action every 1.5 seconds. All the research protocols for the user data collection were reviewed and approved by the Institutional Review Board 4. The full access method discussed in section 3 is the most basic approach to augmenting the RL algorithm with ErrP labels. It has the fastest training convergence rate (providing an upper bound) but makes the maximum possible number of queries to the external oracle (human) for the implicit feedback. We use this method as a benchmark for comparing the data-efficiency of other RL augmentation methods. The results with real ErrP data of 5 subjects are shown in Figure 4. Here, the training data of the ErrP decoder is from the Catch game, while the testing data is from Maze. We can see there is a significant improvement in the training convergence. It further validates the generalization capability of ErrP decoding from 1-D to 2-D navigation games. In this paper, the "No ErrP" method refers to regular RL algorithms without the help of any human feedback. The success rate is defined as the ratio of successful plays in the previous 32 episodes. The training completes when the success rate reaches 1. In all plots of this paper, solid lines are average values over 10 random seeds, and shaded regions correspond to one standard deviation. In the evaluations of this paper, the Q network is modeled by Bayesian deep learning methods, such as Bayesian DQN or bootstrapped DQN, introduced in Appendix 7.2. In this subsection, we evaluate the performance of three approaches to practically integrate DRL with implicit human feedback (via ErrPs). We first validate the feasibility of decoding ErrP signals using a 10-fold cross-validation scheme for each game. In this scheme, we train and test on the ErrP samples of the same game environment. In Fig. 5(a), we show the performance on the three games in terms of AUC score, sensitivity and specificity, averaged over 5 subjects. The Maze game has the highest AUC score (0.89 ± 0.05) followed by Catch (0.83 ± 0.08) and Wobble (0.77 ± 0.09). To evaluate the generalization capability of error-potential signals and the decoding algorithm, we train on the samples collected from Catch and test on the Maze game. In Fig. 5(b), we provide the AUC score performance compared with the 10-fold CV AUC score of Maze. We can see that the Catch game is able to capture more than 80% of the variability in the ErrPs for the Maze game. To provide deeper insights into the generalizability extent, we present the AUC score of generalizability performance over all combinations in Fig. 5(c). In the later subsections, we experimentally show that these performance numbers are sufficient to achieve a 2.25x improvement in training time (in terms of the number of episodes required). We performed preliminary experiments to gain fundamental insights into the extent of generalizability. All three games considered in this work differ in terms of their action space. Wobble can move either left or right (two actions), Catch has an additional "NOOP" (3 actions), and the agent in the Maze can move in either direction (4 actions). To understand the generalizability of ErrP in terms of the actions taken by the agent, we train on Wobble, and test on the Catch game for two groups - (i) when the agent moves in either direction, and (ii) when the agent stays in place.
We obtain an average AUC score of 0.7359 (± 0.1294) and 0.6423 (± 0.1451) for the two groups, respectively. Through a paired t-test, we found the difference in means statistically significant. Similarly, for the Catch game, we test two groups - (i) when the egg is close to the paddle, and (ii) when the egg is far from the paddle. We found mean AUC scores of 0.71 (± 0.1) and 0.84 (± 0.12) for each group, respectively. The difference of the means of both groups was found statistically significant. In evaluating the active RL framework, we explore three forms of acquisition functions, i.e., entropy, mutual information, and confidence interval. Their expressions and approximation techniques are illustrated in detail in Appendix 7.3. The benchmark performance of the full access method is shown in section 5.1. We first evaluate the performance of the first framework with synthesized human feedback, which is presented in Appendix 7.5.1 on the Box World environment. In this section, we evaluate the first framework on the Maze game with real ErrP experimental data. We use Bayesian DQN for the Q network. Three acquisition functions are compared in Figure 6 with detailed statistics in Table 7.5.1, with results similar to the synthetic case. Based on real ErrP data, we can show that, compared with the full access method, the first framework can reach similar performance with far fewer feedback inquiries. In the evaluation of this framework, the trajectories given initially are generated based on optimal paths randomly corrupted by wrong actions, which appear with a probability of 0.2. We evaluate the performance with 10 and 20 trajectories given initially. Prior to training the RL agent, each subject is asked to provide feedback via ErrP on the state-action pairs along these trajectories. We conducted experiments on 5 subjects, based on the Maze game. Here the Q network is modeled by Bayesian DQN. The performance of the augmented RL algorithms is shown in Figure 7. The reward function is shown to speed up the training convergence of the RL agent significantly. Since trajectories are randomly generated initially, the number of ErrP inquiries of the second framework is equal to 372.1 (±58.2), based on the statistics in our simulations. The second framework even outperforms the full access method, with ErrP inquiries on only 20 trajectories, proving its data efficiency. However, this framework needs a human or external source, who has some prior knowledge of the game, to specify the initial trajectories.
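The corrupted-demonstration scheme just described is simple to reproduce. A small sketch under the stated assumption (each optimal action is replaced by a uniformly random wrong action with probability 0.2):

```python
import numpy as np

def corrupt_demonstration(optimal_actions, n_actions, eps=0.2, rng=None):
    """Build a non-expert demonstration: follow the optimal action sequence,
    but with probability eps swap each action for a random wrong one."""
    rng = rng or np.random.default_rng()
    demo = []
    for a in optimal_actions:
        if rng.random() < eps:
            wrong_choices = [b for b in range(n_actions) if b != a]
            a = int(rng.choice(wrong_choices))
        demo.append(a)
    return demo
```

The ErrP labels collected on such a demonstration then mark, with decoder-level noise, which of these steps the observer judged sub-optimal.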
The validation of the generalization to a variety of Atari and robotic environments is the subject of future work. We also plan to test our framework of integrating implicit human feedback (via ErrPs) over robotic environments, and test the generalization capability of error-potentials between virtual and physical worlds. As future work, we plan to investigate how machines can be assisted in RL using intrinsic EEG-based cooperation between humans and machines. The raw EEG signals are bandpass filtered in [0.5, 40] Hz. Epochs of 800ms were extracted relative to a pre-stimulus 200ms baseline, and were subjected to spatial filtering. In spatial filtering, prototype responses of each class, i.e., "correct" and "erroneous", are computed by averaging all training trials in the corresponding classes ("xDAWN Spatial Filter"). xDAWN filtering projects the EEG signals from the sensor space (i.e., electrode space) to the source space (i.e., a low-dimensional space constituted by the actual neuronal ensembles in the brain firing coherently). The covariance matrix of each epoch is computed, and concatenated with the prototype responses of the class. Further, dimensionality reduction is achieved by selecting relevant channels through backward elimination. The filtered signals are projected to the tangent space for feature extraction. The obtained feature vector is first normalized (using the L1 norm) and fed to a regularized regression model. A threshold value is selected for the final decision by maximizing accuracy offline on the training set. We present the algorithm to decode the ErrP signals in Algorithm 2. Algorithm 2 (Riemannian geometry based ErrP classification): Input: raw EEG signals. (1) Pre-process raw EEG signals; (2) Spatial filtering: xDAWN spatial filter (nfilter); (3) Electrode selection: ElectrodeSelect (nelec, metric='riemann'); (4) Tangent space projection: TangentSpace (metric='logeuclid'), normalize using the L1 norm; (5) Regression: ElasticNet; (6) Select the decision threshold by maximizing accuracy. Here we introduce the two DQN models adopted in this paper. Bayesian DQN The first model we use is a DQN architecture where the Q-function is approximated as a linear function, with weights ω a, of the feature representation of states φ θ (x) ∈ R d, parameterized by a neural network with weights θ. Here, by utilizing the DQN architecture and imposing Gaussian distributions on ω a, based on Bayesian linear regression (BLR), the posterior of ω a can be calculated, where we construct disjoint replay buffers D a corresponding to experience with action a, and a matrix Φ θ a ∈ R d×|Da| and vector y a, i.e., the concatenation of state features and target values in set D a. Therefore, the posterior of the Q value follows the Gaussian distribution. Bootstrapped DQN Another Bayesian DQN model we use is bootstrapped DQN. It explores in a similar manner as the Bayesian DQN introduced above, but uses a bootstrapped neural network to approximate a posterior sample for the value. Bootstrapped DQN is also provably efficient, but adopts a neural network instead of a linear value function, and bootstraps instead of Gaussian sampling. It is implemented by K ∈ N bootstrapped estimates of the Q value in parallel, i.e., Q k (s, a; θ), k = 1,..., K.
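Since the posterior equations themselves are elided above, here is a standard Bayesian-linear-regression posterior for the last-layer weights, a sketch consistent with the Bayesian DQN description (the Gaussian prior/noise variances are assumptions):

```python
import numpy as np

def blr_posterior(Phi, y, noise_var=1.0, prior_var=1.0):
    """Posterior over last-layer weights w_a of a Bayesian DQN, treating
    Q(x, a) = w_a^T phi(x) as Bayesian linear regression with a N(0, prior_var*I)
    prior. Phi: (d, N) state features from replay buffer D_a; y: (N,) targets."""
    d = Phi.shape[0]
    cov = np.linalg.inv(Phi @ Phi.T / noise_var + np.eye(d) / prior_var)
    mean = cov @ Phi @ y / noise_var
    return mean, cov

# Thompson-sampling style exploration: draw w_a ~ N(mean_a, cov_a) for every
# action, then act with argmax_a w_a^T phi(x).
```

Bootstrapped DQN replaces the Gaussian draw with picking one of the K heads at random per episode; both give the posterior Q samples that the acquisition functions consume.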
We denote the trajectory set as D, the learned Q network as Q, and the acquisition function as a(τ, Q). The selected trajectory maximizes the acquisition function given Q. In this work, we explore three acquisition functions: • Entropy: select the trajectory with the maximum entropy, which measures the uncertainty of the state-action pairs along the trajectory. • Mutual information. • Confidence interval: formulated using the 1.96 z-score, where N s t (N f t) is the number of true (false) ErrP decodings for the state-action pair at step t on the trajectory. Since implicit human feedback via ErrP is noisy (hence imperfect demonstrations), we model the reward learning as a probabilistic maximum entropy RL problem. Following the principle of maximum entropy, given a Q function Q(·, ·), the policy distribution and value function in terms of the Q function can be expressed as follows, where α is a free parameter, tuned empirically. The likelihoods of positive and negative state-action pairs are denoted as π Q (a|s) and 1 − π Q (a|s). When demonstrations and corresponding implicit human feedback are ready, we train the Q function by maximizing the likelihood of both positive and negative state-action pairs in the demonstrations. In order to refine the reward shape and attenuate the variance of learning updates, we introduce another baseline function t(s) in the Q function. Hence, the Q function becomes Q B (s, a) := Q(s, a) − t(s). It can be proved that Q B (·, ·) and Q(·, ·) induce the same optimal policy. The baseline function t * (·) can be learned by optimizing t * = arg min t J(t), where the loss function l(·) in the objective is chosen to be the l 1 -norm through empirical evaluations. In addition to the demonstration D, we incorporate another set of demonstrations D R, containing transitions randomly sampled from the environment without reward information. The set D R is meant to help the function t(·) to efficiently learn the state dynamics, and does not require human labeling, essentially keeping the number of queries the same. After reward learning, consisting of learning the Q function and the baseline function, for any transition tuple (s, a, s'), the learned reward function can be represented as Q B (s, a) − γ max a'∈A Q B (s', a'). We then use this reward function to augment the following RL agent. 7.5 BOX WORLD GAME ENVIRONMENT This environment consists of an 8 × 8 pixel room with keys and boxes randomly scattered. The room also contains an agent, represented by a single black pixel, which can move in four directions: up, down, left, and right. Keys are represented by a single colored pixel, and boxes are represented by two adjacent colored pixels, where the pixel on the right represents the box's lock. A key can open a lock if its color matches the lock. Its screenshot is shown in Figure 8 (a). Here, we evaluate the first framework on the Box World game with synthetic human feedback. This environment is introduced in detail in Appendix 7.5. The synthetic feedback gives a noisy label on each state-action pair, where the correct (optimal) one is labeled as wrong (sub-optimal) with probability ε 1, and the wrong (sub-optimal) one is labeled as correct with probability ε 2. Here, the Q network is modeled by bootstrapped DQN. This game has a combinatorially complex environment which cannot be quickly solved by a regular RL algorithm. The simulation results are shown in Figure 8, where "No Feedback" refers to the RL algorithms without the help of human feedback. The detailed statistics of evaluations are illustrated in Table 7.5.1. Three acquisition functions are evaluated for comparison.
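For the maximum-entropy reward-learning step described above, the soft policy and value induced by a Q function have a standard closed form; the sketch below assumes the usual softmax/log-sum-exp expressions, with α as the free temperature parameter mentioned in the text:

```python
import numpy as np

def maxent_policy_and_value(q_values: np.ndarray, alpha: float = 1.0):
    """Soft (maximum-entropy) policy pi(a|s) ~ exp(Q(s,a)/alpha) and state
    value V(s) = alpha * logsumexp(Q(s,.)/alpha), computed stably."""
    z = q_values / alpha
    m = z.max()
    e = np.exp(z - m)
    pi = e / e.sum()
    v = alpha * (m + np.log(e.sum()))
    return pi, v
```

With such a π_Q, the likelihood of an ErrP-labeled positive pair is π_Q(a|s) and of a negative pair is 1 − π_Q(a|s), which is exactly what the reward-learning objective maximizes over the labeled demonstrations.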
The mean number of episodes to completion for mutual information, entropy, and confidence interval is 167.0, 177.5, and 231.2 for Human 1, and 166.3, 184.2, and 195.2 for Human 2. We can see that the first framework can achieve performance similar to the full access method in terms of convergence speed, with a much smaller number of inquiries. The confidence-interval acquisition function performs worst, because it does not consider the properties of the trained model. Mutual information performs better than entropy, but needs a larger number of human feedback inquiries.
We use implicit human feedback (via error-potentials, EEG) to accelerate and optimize the training of a DRL algorithm, in a practical manner.
456
scitldr
Deep learning has demonstrated abilities to learn complex structures, but these abilities can be restricted by available data. Recently, Consensus Networks (CNs) were proposed to alleviate data sparsity by utilizing features from multiple modalities, but they too have been limited by the size of labeled data. In this paper, we extend CN to Transductive Consensus Networks (TCNs), suitable for semi-supervised learning. In TCNs, different modalities of input are compressed into latent representations, which we encourage to become indistinguishable during iterative adversarial training. To understand TCNs' two mechanisms, consensus and classification, we put forward three variants in ablation studies on these mechanisms. To further investigate TCN models, we treat the latent representations as probability distributions and measure their similarities as the negative relative Jensen-Shannon divergences. We show that a consensus state beneficial for classification desires a stable but imperfect similarity between the representations. Overall, TCNs outperform or align with the best benchmark algorithms given 20 to 200 labeled samples on the Bank Marketing and the DementiaBank datasets. Deep learning has demonstrated impressive capacities to learn complicated structures from massive data sets. However, acquiring sufficient labeled data can be expensive or difficult (e.g., for specific pathological populations BID10). Transductive learning (a set of semi-supervised algorithms) uses intrinsic structures among unlabeled data to boost classifier performance. In the real world, data can spread across multiple modalities (e.g., visual, acoustic, and text) in typical tasks, although many existing transductive algorithms do not exploit the structure across these modalities. Co-training and tri-training BID23 use one classifier per modality to supervise each other, but they apply to only two and three modalities, respectively. Recently, Consensus Networks (CNs) BID24 incorporated the idea of co-training. Not limited by the number of modalities, CNs showed promising results on detecting cognitive impairments from multi-modal datasets of speech. A consensus network contains several interpreters (one per modality), a discriminator, and a classifier. The interpreters try to produce low-dimensional representations of input data that are indistinguishable by the discriminator. The classifier makes predictions based on these representation vectors. Despite promising results, CN is limited by the amount of available training data. This motivates our extension into semi-supervised learning with our Transductive Consensus Network (TCN). TCNs operate through two mechanisms: consensus and classification. The consensus mechanism urges the modality representations to resemble each other (trained on the whole dataset without using labels), and the classifier mechanism optimizes the networks to retain information useful for classification (trained on the labeled dataset). To illustrate the importance of these two mechanisms in an ablation study, we also put forward three variants: TCN-embed, TCN-svm, and TCN-AE in §3. By this ablation study, we show that both mechanisms should function together via iterative training. To further reveal the mechanisms of TCN, we formulate in §3.5 the similarity between latent representations using negative Jensen-Shannon divergences. By monitoring their similarities, we show that a meaningful consensus state prefers representations to have suboptimal similarities.
In experiments (§4), we compare TCN to its three variants, TCN's multimodal supervised learning counterpart (CN), and several other semi-supervised learning benchmark algorithms on two datasets: Bank Marketing (from the UCI repository) and DementiaBank (a dataset of pathological speech in multiple modalities). On both datasets, the F-scores of TCN align with the best benchmark models when there are more labeled data available, and outperform benchmarks (including tri-training) given as few as 20 labeled points. Transductive SVMs BID8 were an early attempt in transductive semi-supervised learning. In addition to the SVM objective, TSVMs minimize the hinge loss on unlabeled data. TSVMs have yielded good performance on our datasets, so we include them for completeness. Later, many semi-supervised learning algorithms took either autoencoding or GAN approaches. In autoencoding, a model learns a low-dimensional representation and a reconstruction for each data sample. Usually, noise is added in generating the low-dimensional representation. By trying to minimize the difference between reconstructed and original data, the model learns (i) an encoder capturing low-dimensional hidden information and (ii) a decoder, which is a generative model able to recover data. This is the approach of the denoising autoencoder BID22 BID11. An extension is the Ladder network BID19, which stacks denoising autoencoders and adds layer-wise reconstruction losses. Ladder networks are often more computationally efficient than stacked denoising autoencoders. In generative adversarial networks (GANs) BID6, a generator tries to produce data that are indistinguishable from true data, given a discriminator which itself learns to tell them apart. This adversarial training procedure can proceed with few labeled data points. For example, Feature-matching GANs BID20 add generated ("synthetic") samples into the training data of the discriminator as an additional class. Another example is Categorical GANs BID21, which optimize uncertainty (measured by the entropy of predictions) in the absence of labels. Noticeably, BID3 showed that a discriminator performing well on a training set might not benefit the whole dataset. CNs and TCNs, despite not containing generative components, are built with adversarial principles inspired by GANs. The idea of making multiple components in the network agree with each other has been adopted by several previous models. For example, one prior work proposed Parallel Consensus Networks, where multiple networks classify by majority voting. Each of the networks is trained on features after a unique transform. BID13 proposed consensus optimization in GANs, in which a term is added to the utility functions of both the generator and the discriminator to alleviate the adversity between them. However, none of these approaches utilized multiple modalities in semi-supervised settings. Multi-modal learning is also referred to as multi-view learning. BID18 computed multiple viewpoints from speech samples and classified cognitive impairments. By contrast, our multi-view learning is semi-supervised, and can involve non-overlapping subsets of features. In domain adaptation, some work has been applied to find a unified representation between domains, for example, by applying domain invariant training BID5 and semantic similarity loss BID15. However, our approach does not involve multiple domains - we only handle data from one domain.
Here, the term 'domain' refers to how the data are naturally generated, whereas the term 'modality' refers to how different aspects of data are observed. In previous work, Consensus Networks (CNs) BID24 were proposed for multimodal supervised learning. We extend the model to be suitable for semi-supervised learning, resulting in Transductive Consensus Networks (TCNs). This section also presents three variants: TCN-embed, TCN-svm, and TCN-AE. Given labeled data, DISPLAYFORM0, and unlabeled data, {x (i) } (where x (i) ∈ X U), we want to learn a model that reaches high accuracy in predicting labels of the unlabeled data. In the semi-supervised learning setting, there are many more unlabeled data points than labeled: DISPLAYFORM1 Each data point x contains feature values from multiple modalities (i.e., 'views'). If M is the total number of modalities, then DISPLAYFORM2 m is consistent throughout the dataset. E.g., there may be 200 acoustic (semantic) features for each data point. Here we briefly review the structure of CNs. In a CN model, M interpreter networks I 1,...,M each compress the corresponding modality for a data sample into a low-dimensional vector v. DISPLAYFORM0 We call these networks interpreters, because they interpret the feature spaces with representations. In TCNs, a consensus is expected to be reached given representations from multiple views of the same data sample. A discriminator network D tries to identify the originating modality of each representation vector. If we write the m-th modality of the dataset as a set M m of vectors, then the discriminator function D can be expressed as: DISPLAYFORM1 To prevent the discriminator from looking at only superficial aspects of each data sample in the forward pass, Consensus Networks BID24 include an additional 'noise modality' representation sampled from a normal distribution, with mean and variance determined by the 'non-noise' representations: DISPLAYFORM2 The model's discrimination loss L D is therefore defined as the cross entropy loss across all modalities (plus the noise modality), averaged across both labeled and unlabeled datasets X: DISPLAYFORM3 Finally, a classifier network C predicts the probability of class assignment (y) from the combined representation vectors, given model parameters: DISPLAYFORM4 The classification loss L C is just the cross entropy loss across the labeled data: DISPLAYFORM5 The overall optimization goals for CN can therefore be described as: DISPLAYFORM6 Consensus Networks (CNs), as a supervised learning framework, are limited by the amount of labeled data. This motivates us to generalize the approach to semi-supervised scenarios. There are two mechanisms in the CN training procedures, namely the classifier mechanism and the consensus mechanism. The classifier mechanism requires labeled data, but the consensus mechanism does not explicitly require these labels. We let the consensus mechanism handle both labeled and unlabeled data. This results in Transductive Consensus Networks. Formally, the loss functions are rewritten as: DISPLAYFORM0 where X consists of both labeled data X L and unlabeled data X U. Overall, the optimization goal can still be written as: min DISPLAYFORM1 These goals set up a complex nonlinear optimization problem.
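The 'noise modality' construction above is compact enough to sketch directly. This is an illustrative PyTorch version under the description given (per-dimension mean and standard deviation taken across the real modality representations), not the authors' exact code:

```python
import torch

def noise_modality(reps):
    """Sample the extra 'noise modality' representation: a Gaussian whose
    per-dimension mean and variance match those of the M real modality
    representations. reps: list of M tensors of shape (batch, d)."""
    stacked = torch.stack(reps, dim=0)        # (M, batch, d)
    mu = stacked.mean(dim=0)
    sigma = stacked.std(dim=0)
    return mu + sigma * torch.randn_like(mu)
```

Because this sample is statistically anchored to the real representations, the discriminator cannot separate it by superficial scale differences alone, which is the stated motivation for including it.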
To figure out a solution, we break down the goals into three iterative steps, similar to GAN BID6: • The 'I' step encourages interpreters to produce indistinguishable representations: max DISPLAYFORM2 • The 'D' step encourages discriminators to recognize modality-specific information retained in representations: min DISPLAYFORM3 • The 'CI' step trains the networks to make a correct decision: min DISPLAYFORM4 The consensus mechanism builds a low-dimensional latent representation of each (labeled and unlabeled) data sample containing common knowledge across different modalities, and the classifier mechanism tries to make these representations meaningful. Three modifications are made to our base TCN model, resulting in the following models: TCN-embed consists of the same networks as TCN but is trained slightly differently. Before the I-D-CI optimization cycle, we add a pretraining phase with I-D iterations, which emphasizes the consensus mechanism. TCN-svm removes the classifier network from TCN-embed. After the pretraining phase across the whole dataset, we extract the representations of the labeled data samples to train a supervised learning classifier (i.e., an SVM). TCN-svm discards the classifier mechanism, which results in deterioration of model performance (§5). TCN-AE provides insights from another perspective. In contrast to TCN, TCN-AE contains several additional reconstructor networks, R 1..M (one per modality). Each reconstructor network tries to recover the input modality from the corresponding low-dimensional representations (plus a small noise): DISPLAYFORM0 Defining the reconstruction loss as L R = E x∈X E m |x m − x̂ m |², the optimization target in TCN-AE can be expressed as: DISPLAYFORM1 L C, and max DISPLAYFORM2 and min DISPLAYFORM3 TCN-AE is inspired by the denoising autoencoder BID22, where the existence of reconstructor networks encourages the latent variables to preserve realistic information. This somewhat works against the consensus mechanism, which according to BID24 tries to agree on simple representations. TCN-AE therefore weakens the consensus mechanism. We will show in §5 that an inhibited consensus mechanism results in inferior model performance. We want to quantitatively measure the effects of the consensus and the classification mechanisms. To evaluate the similarities of representations, we treat the hidden dimensions of each representation DISPLAYFORM0..] (after normalization) as discrete values of a probability mass function 1, which we write as p m. The M modalities for each data point are therefore approximated by M probability distributions. Now we can measure the relative JS divergences between each pair of representations v m and v n derived from the same data sample (D(p m ||p n)). To acquire the relative value, we normalize the JS divergence by the total entropy in p m and p n: DISPLAYFORM1 where DISPLAYFORM2 where v m,j and v n,j are the j-th components of v m and v n, respectively. In total, for each data sample with M modalities, DISPLAYFORM3 pairs of relative divergences are calculated. We average the negative of these divergences to get the similarity: DISPLAYFORM4 Note that by our definition the maximum value of the "similarity" is 0 (where there is no JS divergence between any pair of the representation vectors), and it has no theoretical lower bound. FIG3 shows several 2D visualizations of representation vectors drawn from an arbitrary run. In Figure 5, we illustrate how the similarities between modality representations evolve during training.
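Since the similarity equations are elided above, here is a direct sketch of the §3.5 metric as described: each ReLU-output representation is normalized into a pmf, pairwise JS divergences are normalized by the summed entropies, and their negatives are averaged. Variable names are illustrative:

```python
import numpy as np

def representation_similarity(reps, eps=1e-12):
    """reps: list of M non-negative representation vectors for one sample."""
    ps = [v / (v.sum() + eps) for v in reps]          # treat each v_m as a pmf
    H = lambda p: -(p * np.log(p + eps)).sum()        # entropy
    KL = lambda p, q: (p * (np.log(p + eps) - np.log(q + eps))).sum()
    sims = []
    for i in range(len(ps)):
        for j in range(i + 1, len(ps)):
            m = 0.5 * (ps[i] + ps[j])
            js = 0.5 * KL(ps[i], m) + 0.5 * KL(ps[j], m)
            sims.append(-js / (H(ps[i]) + H(ps[j]) + eps))  # relative, negated
    return float(np.mean(sims))                       # M(M-1)/2 pairs averaged
```

Identical representations give a similarity of 0 (the maximum, as stated above), and the value decreases as the modality pmfs diverge.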
We compare TCN and its variants on two benchmark datasets: Bank Marketing and DementiaBank. The full list of features, by modality, is provided in the Supplementary Material. The Bank Marketing dataset is from the UCI machine learning repository BID4, used for predicting whether the customer will subscribe to a term deposit in a bank marketing campaign via telephone BID14. There are originally 4,640 positive samples (subscribed) and 36,548 negative ones (did not subscribe). Since consensus network models do not work well on imbalanced datasets, we randomly sample 5,000 negative samples to create an (almost) balanced dataset. We also convert the categorical raw features 2 into one-hot representations. We then divide the features into three modalities: basic information, statistical data, and employment-related features. 1 There is a ReLU layer at the output of each interpreter network, so the probability mass will be non-negative. 2 https://archive.ics.uci.edu/ml/datasets/bank+marketing DementiaBank 3 contains 473 spoken picture descriptions of the clinical "cookie-theft picture", containing 240 positive samples (the Dementia class) and 233 negative samples (the Control class). We extract 413 linguistic features from each speech sample and their transcriptions, including acoustic (e.g., pause durations), lexical & semantic (e.g., average cosine similarities between words in sentences) and syntactic (e.g., complexity of the syntactic parse structures) modalities. Table 1: Basic information about the datasets (after preprocessing). In the Bank Marketing dataset, the three modalities correspond to basic information, statistical data, and employment-related features. In DementiaBank, the three modalities correspond to acoustic, syntactic, and lexical&semantic features. Detailed descriptions of the features are included in the supplementary materials. We evaluate TCN and its variants against several benchmarks, including: 1. Multimodal semi-supervised learning benchmark: tri-training BID23. 2. TCN's supervised counterpart: Consensus Network (CN). 3. Unimodal semi-supervised learning: TSVM BID8, Ladder network BID19, CatGAN BID21. For simplicity, we use fully connected networks for all of I 1..M, D, C, and R 1..M in this paper. To enable faster convergence, all fully connected networks have a batch normalization layer BID7. For training, the batch size is set to 10. The neural network models are implemented using PyTorch BID16, and supervised learning benchmark algorithms (SVM, MLP) use scikit-learn BID17. We use the Adam optimizer BID9 with an initial learning rate of 0.001. In training TCN, TCN-embed, and TCN-AE, optimization is stopped when the classification loss does not change by more than 10 −5 in comparison to the previous step, or when the step count reaches 100. In the pre-training phase of TCN-embed and TCN-svm, training is stopped when the discrimination loss changes by less than 10 −5, or when the pretraining step count reaches 20. Sometimes, the iterative optimization (i.e., the I-D-CI cycle for TCN / TCN-embed, and the I-D-RI-CI cycle for the TCN-AE variant) is trapped in local saddle points - the training classification loss stops changing while remaining higher than log 2 ≈ 0.693. This is the expected loss of a binary classifier with zero knowledge. If the training classification loss is higher than log 2, the model is re-initialized with a new random seed and the training is restarted.
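To make the network setup concrete, the following is a self-contained sketch of the TCN building blocks described above (interpreters, discriminator with noise modality, classifier). Layer sizes, modality counts, and feature dimensions here are illustrative assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

M, dims, d = 2, [6, 8], 4            # two hypothetical modalities, 4-dim reps

interpreters = nn.ModuleList(
    [nn.Sequential(nn.Linear(dm, d), nn.BatchNorm1d(d), nn.ReLU()) for dm in dims]
)
discriminator = nn.Linear(d, M + 1)  # predicts modality id (M real + noise)
classifier = nn.Linear(M * d, 2)     # binary decision from concatenated reps

x = [torch.randn(10, dm) for dm in dims]            # one batch, two views
reps = [I(xm) for I, xm in zip(interpreters, x)]    # v_m = I_m(x_m)

mu = torch.stack(reps).mean(0)
sigma = torch.stack(reps).std(0)
noise_rep = mu + sigma * torch.randn_like(mu)       # the 'noise modality'

modality_logits = discriminator(torch.cat(reps + [noise_rep]))   # for L_D
class_logits = classifier(torch.cat(reps, dim=1))                # for L_C
```

Training then alternates the I, D, and CI steps over these parameter groups, with the CI step restricted to the labeled subset, as described in §3.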
Empirically, this re-initialization happens no more than once per ten runs, but the underlying cause needs to be examined further.

5 Results and discussion

As shown in FIG1, TCN outperforms or matches the best benchmarks. On the Bank Marketing dataset, TCN, CN, and TSVM clearly outperform the rest. On DementiaBank, Tri-train, TCN, and TSVM form the "first tier". Also, semi-supervised learning does not always outperform the supervised algorithms; for example, on the Bank Marketing dataset, CN (TCN's supervised counterpart) has the second-best performance. (FIG1 compares against multimodal semi-supervised (Tri-training BID23), unimodal semi-supervised (TSVM BID8, Ladder BID19, CatGAN BID21), and multimodal supervised (CN BID24) benchmarks.)

As shown in FIG2, TCN matches or outperforms TCN-embed, and both significantly outperform TCN-AE. TCN-svm, on the other hand, produces almost trivial classifiers. Several points are worth noting:

• Both the consensus and the classification mechanisms are beneficial to classifier performance. The classification mechanism can be beneficial even with as few as 20 labeled data samples.
• Iterative optimization is crucial. Without the classification mechanism, the consensus mechanism by itself fails to derive good representations. Without the consensus mechanism (i.e., when the reconstructors hinder it), accuracies drop significantly.

To understand TCN better, we visualize the representations with t-SNE BID12 in FIG3 and plot the similarity values in Figure 5:

• Higher similarity values correspond to states where the distributions exhibit higher symmetry in aggregate.
• Measured by the similarity values, TCN models reach a consensus state in which the similarities are stable. TCN-svm reaches agreement quickly, but its representations are close to trivial. TCN-AE, with the autoencoder blocking the consensus mechanism, fails to reach a state of agreement.

(FIG3 caption:) The three colors represent the three modalities. At step 2, the representations are distributed randomly. At step 110, they are mixed evenly. The most interesting embedding occurs at step 30, when the representations of the three modalities form three 'drumstick' shapes; with the highest visual symmetry, this configuration also has the highest similarity of the three.

Figure 5: Examples of similarity plotted against the number of steps taken, for DementiaBank using 80 labeled samples ("DB80", blue) and Bank Marketing using 20 labeled samples ("BM20", green). The y-axis is scaled to (−0.035, 0) except for TCN-AE, where the relative JS divergences "explode". Note that we stop the training procedure when the losses converge (as detailed in §4.3), so the trials may stop at different steps.

In this paper, we presented Transductive Consensus Networks (TCNs), which extend consensus networks to semi-supervised learning. We identified two mechanisms by which TCNs function: the consensus and classifier mechanisms. With three TCN variants in an ablation study, we showed the importance of both mechanisms. Moreover, by treating the representations as probability distributions and defining their similarity as negative relative JS divergence, we showed that although the consensus mechanism encourages high similarity, a good consensus state may not require perfect similarity between modality representations. In the future, several avenues may be considered. To start with, one could build consensus networks from other types of neural networks. In addition, more exploration could be done to find a more explainable metric describing the extent of agreement.
Currently, we use the relative JS divergence −D_JS(p_m || p_n) / (H(p_m) + H(p_n)), but this requires some approximations. Optimizing against the similarity metric directly, instead of setting up a discriminator, may also be worth examining.
TL;DR: TCN for multimodal semi-supervised learning, with an ablation study of its mechanisms and interpretations of its latent representations.
Several first-order stochastic optimization methods commonly used in the Euclidean domain, such as stochastic gradient descent (SGD), accelerated gradient descent, and variance-reduced methods, have already been adapted to certain Riemannian settings. However, some of the most popular of these optimization tools, namely Adam, Adagrad, and the more recent Amsgrad, remain to be generalized to Riemannian manifolds. We discuss the difficulty of generalizing such adaptive schemes to the most agnostic Riemannian setting, and then provide algorithms and convergence proofs for geodesically convex objectives in the particular case of a product of Riemannian manifolds, in which adaptivity is implemented across the manifolds in the Cartesian product. Our generalization is tight in the sense that choosing the Euclidean space as the Riemannian manifold yields the same algorithms and regret bounds as were already known for the standard algorithms. Experimentally, we show faster convergence to a lower training loss for Riemannian adaptive methods over their corresponding baselines on the realistic task of embedding the WordNet taxonomy in the Poincaré ball.

Developing powerful stochastic gradient-based optimization algorithms is of major importance for a variety of application domains. In particular, for computational efficiency, it is common to opt for a first-order method when the number of parameters to be optimized is large enough. Such cases have recently become ubiquitous in engineering and the computational sciences, from the optimization of deep neural networks to learning embeddings over large vocabularies. This new need resulted in the development of empirically very successful first-order methods such as ADAGRAD BID5, ADADELTA BID29, ADAM BID9, and its recent update AMSGRAD BID18.

Note that these algorithms are designed to optimize parameters living in a Euclidean space R^n, which has often been considered the default geometry for continuous variables. However, a recent line of work has been concerned with the optimization of parameters lying on a Riemannian manifold, a more general setting allowing non-Euclidean geometries. This family of algorithms has already found numerous applications, including solving Lyapunov equations BID27, matrix factorization BID23, geometric programming BID22, dictionary learning BID2, and hyperbolic taxonomy embedding BID15 BID6 BID4 BID14.

A few first-order stochastic methods have already been generalized to this setting (see section 6), the seminal one being Riemannian stochastic gradient descent (RSGD) BID1, along with new methods for their convergence analysis in the geodesically convex case. However, the above-mentioned empirically successful adaptive methods, together with their convergence analyses, remain to find their Riemannian counterparts. Indeed, the adaptivity of these algorithms can be thought of as assigning one learning rate per coordinate of the parameter vector; on a Riemannian manifold, however, one is generally not given an intrinsic coordinate system, rendering the notions of sparsity and coordinate-wise updates meaningless.

Our contributions. In this work we (i) explain why generalizing these adaptive schemes to the most agnostic Riemannian setting in an intrinsic manner is compromised, and (ii) propose generalizations of the algorithms, together with their convergence analysis, in the particular case of a product of manifolds where each manifold represents one "coordinate" of the adaptive scheme.
Finally, we (iii) empirically support our claims on the realistic task of hyperbolic taxonomy embedding.

Our initial motivation. The particular application that motivated us to develop Riemannian versions of ADAGRAD and ADAM was the learning of symbolic embeddings in non-Euclidean spaces. As an example, the GloVe algorithm BID17, an unsupervised method for learning Euclidean word embeddings capturing semantic/syntactic relationships, benefits significantly from optimizing with ADAGRAD compared to using SGD, presumably because different words are sampled at different frequencies. Hence the absence of Riemannian adaptive algorithms could constitute a significant obstacle to the development of competitive optimization-based Riemannian embedding methods. In particular, we believe that the recent rise of embedding methods in hyperbolic spaces could benefit from such developments BID15 BID6 BID4 BID28.

We recall here some elementary notions of differential geometry. For more in-depth expositions, we refer the interested reader to BID21 and BID19.

Manifold, tangent space, Riemannian metric. A manifold M of dimension n is a space that can locally be approximated by a Euclidean space R^n, and which can be understood as a generalization to higher dimensions of the notion of surface. For instance, the sphere S := {x ∈ R^n | ||x||_2 = 1} embedded in R^n is an (n−1)-dimensional manifold. In particular, R^n is a very simple n-dimensional manifold, with zero curvature. At each point x ∈ M, one can define the tangent space T_x M, which is an n-dimensional vector space and can be seen as a first-order local approximation of M around x. A Riemannian metric ρ is a collection ρ := (ρ_x)_{x∈M} of inner products ρ_x(·,·): T_x M × T_x M → R on T_x M, varying smoothly with x. It defines the geometry locally on M. For x ∈ M and u ∈ T_x M, we also write ||u||_x := ρ_x(u,u)^{1/2}. A Riemannian manifold is a pair (M, ρ).

Induced distance function, geodesics. Notice how a choice of Riemannian metric ρ induces a natural global distance function on M. Indeed, for x, y ∈ M, we can set d(x,y) equal to the infimum of the lengths of smooth paths between x and y in M, where the length ℓ(c) of a path c is obtained by integrating the size of its speed vector ċ(t) ∈ T_{c(t)} M in the corresponding tangent space: ℓ(c) := ∫ ||ċ(t)||_{c(t)} dt.

Consider performing an SGD update of the form x_{t+1} = x_t − α g_t, where g_t denotes the gradient of the objective f_t [1] and α > 0 is the step size. In a Riemannian manifold (M, ρ), for smooth f: M → R, BID1 defines Riemannian SGD by the update x_{t+1} = exp_{x_t}(−α g_t), where g_t ∈ T_{x_t} M denotes the Riemannian gradient of f_t at x_t. Note that when (M, ρ) is the Euclidean space (R^n, I_n), the two updates match, since we then have exp_x(v) = x + v. Intuitively, applying the exponential map performs an update along the shortest path in the relevant direction in unit time, while remaining on the manifold. In practice, when exp_x(v) is not known in closed form, it is common to replace it by a retraction map R_x(v), most often chosen as R_x(v) = x + v, a first-order approximation of exp_x(v).

Let us recall the main algorithms that we are interested in.

ADAGRAD. Introduced by BID5, the standard form of its update step is x_{t+1,i} = x_{t,i} − α g_{t,i} / (Σ_{k=1}^t g_{k,i}^2)^{1/2}. Such updates, rescaled coordinate-wise depending on the size of past gradients, can yield huge improvements when gradients are sparse, or in deep networks where the size of a good update may depend on the layer. However, the accumulation of all past gradients can also slow down learning.
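For concreteness, the updates just recalled can be sketched as follows. This is an illustrative NumPy sketch under our own naming; the `manifold` object with an `exp` method is a hypothetical interface, not from the paper.

```python
import numpy as np

def sgd_step(x, grad, alpha):
    return x - alpha * grad                      # Euclidean SGD

def rsgd_step(manifold, x, rgrad, alpha):
    # Riemannian SGD: follow the geodesic in the negative gradient direction;
    # manifold.exp may be replaced by a retraction R_x(v) = x + v in practice.
    return manifold.exp(x, -alpha * rgrad)

def adagrad_step(x, grad, sq_sum, alpha, eps=1e-8):
    sq_sum = sq_sum + grad ** 2                  # coordinate-wise sum of squares
    return x - alpha * grad / (np.sqrt(sq_sum) + eps), sq_sum
```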
ADAM. Proposed by BID9, the ADAM update rule is x_{t+1} = x_t − α m_t / √v_t [2], where m_t = β_1 m_{t−1} + (1−β_1) g_t can be seen as a momentum term and v_t = β_2 v_{t−1} + (1−β_2) g_t^2 is an adaptivity term. When β_1 = 0, one essentially recovers the unpublished method RMSPROP BID24, the only difference from ADAGRAD being that the sum is replaced by an exponential moving average, so past gradients are forgotten over time in the adaptivity term v_t. This circumvents the issue of ADAGRAD that learning could stop too early when the sum of accumulated squared gradients is too large. Let us also mention that the momentum term introduced by ADAM for β_1 ≠ 0 has been observed to often yield huge empirical improvements.

AMSGRAD. More recently, BID18 identified a mistake in the convergence proof of ADAM. To fix it, they proposed to either modify the ADAM algorithm with v̂_t = max{v̂_{t−1}, v_t} and x_{t+1} = x_t − α m_t / √v̂_t [3], which they coin AMSGRAD, or to choose an increasing schedule for β_2, making it time dependent, which they call ADAMNC (for non-constant).

[1] To be interpreted as the objective with the same parameters, evaluated at the minibatch taken at time t.
[2] A small ε = 10^-8 is often added inside the square root for numerical stability; omitted here for simplicity.
[3] With m_t and v_t defined by the same equations as in ADAM (see the above paragraph).

Intrinsic updates. It is easy to see that writing any coordinate-wise update requires the choice of a coordinate system. However, on a Riemannian manifold (M, ρ), one is generally not provided with a canonical coordinate system. The formalism only allows working with certain local coordinate systems, also called charts, and several different charts can be defined around each point x ∈ M. One usually says that a quantity defined using a chart is intrinsic to M if its definition does not depend on which chart was used. For instance, it is known that the Riemannian gradient grad f of a smooth function f: M → R can be defined intrinsically to (M, ρ), but its Hessian is only intrinsically defined at critical points. It is easily seen that the RSGD update above is intrinsic, since it only involves exp and grad, which are objects intrinsic to (M, ρ). However, it is unclear whether it is possible at all to express either of the adaptive updates in a coordinate-free or intrinsic manner.

A tempting solution. Note that since an update is defined in a tangent space, one could be tempted to fix a canonical coordinate system e := (e^(1), ..., e^(n)) in the tangent space T_{x_0} M ≅ R^d at the initialization x_0 ∈ M, and parallel-transport e along the optimization trajectory, adapting the adaptive update accordingly [equation omitted in source], where ⊘ and (·)^2 denote coordinate-wise division and squaring relative to the coordinate system e_t. In Euclidean space, parallel transport between two points x and y does not depend on the path taken, because the space has no curvature. In a general Riemannian manifold, however, it does depend on the chosen path, and curvature also gives parallel transport a rotational component, which will almost surely break the sparsity of the gradients and hence the benefit of adaptivity. Besides, the interpretation of adaptivity as optimizing different features (i.e., gradient coordinates) at different speeds is completely lost here, since the coordinate system used to represent gradients depends on the optimization path. Finally, note that the techniques we use to prove our theorems would not apply to updates defined in this vein.
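Before moving to the Riemannian generalization, here is a compact sketch of the Euclidean ADAM/AMSGRAD step described above (bias correction is omitted, matching the simplified formulas; the function name is ours).

```python
import numpy as np

def amsgrad_step(x, grad, m, v, v_hat, alpha, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad        # momentum term m_t
    v = beta2 * v + (1 - beta2) * grad ** 2   # adaptivity term v_t
    v_hat = np.maximum(v_hat, v)              # AMSGRAD: enforce non-decreasing v
    x = x - alpha * m / (np.sqrt(v_hat) + eps)
    return x, m, v, v_hat                     # using v instead of v_hat recovers ADAM
```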
From now on, we assume additional structure on (M, ρ), namely that it is the Cartesian product of n Riemannian manifolds (M_i, ρ_i), where ρ is the induced product metric.

Product notations. The induced distance function d on M is given by d(x,y)^2 = Σ_i d_i(x^i, y^i)^2. Similarly, the exponential map, log map, and parallel transport in M are the concatenations of those in each M_i.

Riemannian ADAGRAD. We just saw that designing meaningful adaptive schemes, intuitively corresponding to one learning rate per coordinate, in a general Riemannian manifold is difficult because of the absence of intrinsic coordinates. Here, we propose to view each component x^i ∈ M_i of x as a "coordinate", yielding the simple adaptation x^i_{t+1} = exp^i_{x^i_t}(−α g^i_t / (Σ_{k=1}^t ||g^i_k||^2_{x^i_k})^{1/2}).

On the adaptivity term. Note that we take (squared) Riemannian norms ||g^i_t||^2_{x^i_t} in the adaptivity term rescaling the gradient. In the Euclidean setting, this quantity is simply the scalar (g^i_t)^2.

In section 2, we briefly presented ADAGRAD, ADAM and AMSGRAD. Intuitively, ADAM can be described as a combination of ADAGRAD with momentum (of parameter β_1), with the slight modification that the sum of past squared gradients is replaced by an exponential moving average with exponent β_2. Recall also that AMSGRAD implements a slight modification of ADAM, allowing its convergence proof to be corrected. Finally, ADAMNC is simply ADAM with a particular non-constant schedule for β_1 and β_2. On the other hand, it is interesting to note that the schedule initially proposed by BID18 for β_2 in ADAMNC, namely β_{2t} := 1 − 1/t, lets v_t recover the sum of squared gradients of ADAGRAD. Hence, ADAMNC without momentum (i.e., β_{1t} = 0) yields ADAGRAD.

Assumptions and notations. For 1 ≤ i ≤ n, we assume (M_i, ρ_i) is a geodesically complete Riemannian manifold with sectional curvature lower bounded by κ_i ≤ 0. As above, let (M, ρ) be the product manifold of the (M_i, ρ_i). For each i, let X_i ⊂ M_i be a compact, geodesically convex set and define X := X_1 × ... × X_n, the set of feasible parameters. Define Π_{X_i}: M_i → X_i to be the projection operator, i.e., Π_{X_i}(x) is the unique y ∈ X_i minimizing d_i(y, x). Denote by P^i, exp^i and log^i the parallel transport, exponential and log maps in (M_i, ρ_i), and by x^i ∈ X_i and g^i ∈ T_{x^i} M_i the corresponding components of x and g. In the sequel, let (f_t) be a family of differentiable, geodesically convex functions from M to R. Assume that each X_i ⊂ M_i has diameter bounded by D_∞ and that for all 1 ≤ i ≤ n, t ∈ [T] and x ∈ X, ||(grad f_t(x))^i||_{x^i} ≤ G_∞. Finally, our convergence guarantees will bound the regret, defined at the end of T rounds as R_T := Σ_{t=1}^T f_t(x_t) − min_{x∈X} Σ_{t=1}^T f_t(x).

Following the discussion in section 3.2, we present Riemannian AMSGRAD (RAMSGRAD) in FIG1, alongside the standard AMSGRAD algorithm for comparison (the pseudocode boxes are omitted in this copy; a sketch is given below). From these algorithms, RADAM and ADAM are obtained simply by removing the max operations, i.e., replacing v̂^i_t = max{v̂^i_{t−1}, v^i_t} with v̂^i_t = v^i_t. The convergence guarantee that we obtain for RAMSGRAD is presented in Theorem 1, where ζ(κ, c) is the curvature-dependent quantity from the geodesically convex optimization literature (for curvature lower bound κ < 0 and diameter c, ζ(κ, c) = √|κ| c / tanh(√|κ| c)). For comparison, we also show the convergence guarantee of the original AMSGRAD in appendix C. Note that when (M_i, ρ_i) = R for all i, the convergence guarantees of RAMSGRAD and AMSGRAD coincide as well: indeed, the curvature-dependent quantity (ζ(κ_i, D_∞) + 1)/2 in the Riemannian case then becomes equal to 1, recovering the convergence theorem of AMSGRAD.
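Since the pseudocode of the algorithm boxes did not survive extraction, the following is a hedged per-manifold sketch of the RAMSGRAD step as we understand it from the text: momentum is carried along by parallel transport, and the adaptivity term is one scalar per manifold, built from squared Riemannian gradient norms. The `Mi` interface (exp, norm, transport) is a hypothetical placeholder.

```python
def ramsgrad_step(x, m, v, v_hat, grads, manifolds, alpha,
                  beta1=0.9, beta2=0.999, eps=1e-8):
    for i, (Mi, g) in enumerate(zip(manifolds, grads)):
        m[i] = beta1 * m[i] + (1 - beta1) * g              # momentum in T_{x^i}M_i
        v[i] = beta2 * v[i] + (1 - beta2) * Mi.norm(x[i], g) ** 2
        v_hat[i] = max(v_hat[i], v[i])                     # AMSGRAD correction
        x_new = Mi.exp(x[i], -alpha * m[i] / (v_hat[i] ** 0.5 + eps))
        m[i] = Mi.transport(x[i], x_new, m[i])             # carry momentum along
        x[i] = x_new
    return x, m, v, v_hat   # dropping the max recovers RADAM; beta1 = 0 drops momentum
```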
It is also interesting to understand how quickly the regret bound worsens when the curvature is small but non-zero: by a multiplicative factor of approximately 1 + D_∞ |κ|/6 (from the expansion of ζ). Similar remarks hold for RADAMNC, whose convergence guarantee is shown in Theorem 2. Finally, notice that β_1 := 0 in Theorem 2 yields a convergence proof for RADAGRAD, whose update rule we defined in section 3.2.

Theorem 1 (Convergence of RAMSGRAD). Let (x_t) and (v_t) be the sequences obtained from Algorithm 1a, with α_t = α/√t, β_1 = β_{11}, β_{1t} ≤ β_1 for all t ∈ [T], and γ = β_1/√β_2 < 1. Then the regret R_T satisfies the bound stated in the source [equation omitted]. Proof: see appendix A.

Theorem 2 (Convergence of RADAMNC). Let (x_t) and (v_t) be the sequences obtained from RADAMNC, with α_t = α/√t, β_1 = β_{11}, β_{1t} = β_1 λ^{t−1}, λ < 1, β_{2t} = 1 − 1/t. Then the regret R_T satisfies the bound stated in the source [equation omitted]. Proof: see appendix B.

The role of convexity. Note how the notion of convexity in Theorem 5 is replaced by geodesic convexity in Theorem 1. Comparing the two definitions: the differentiable functions f: R^n → R and g: M → R are respectively convex and geodesically convex if, for all x, y ∈ R^n and u, v ∈ M, f(y) ≥ f(x) + ⟨∇f(x), y − x⟩ and g(v) ≥ g(u) + ρ_u(grad g(u), log_u(v)). How does this come into play in the proofs? Regret bounds for convex objectives are usually obtained by bounding Σ_{t=1}^T f_t(x_t) − f_t(x*) via the convexity inequality for any x* ∈ X, which boils down to bounding each ⟨g_t, x_t − x*⟩. In the Riemannian case, this term becomes ρ_{x_t}(g_t, −log_{x_t}(x*)).

The role of the cosine law. How does one obtain a bound on ⟨g_t, x_t − x*⟩? For simplicity, consider the particular case of an SGD update. Using a cosine law yields two terms to bound [equation omitted in source]: (i) when summing over t, the first simplifies as a telescopic summation; (ii) the second, Σ_{t=1}^T α_t ||g_t||^2, requires a well-chosen decreasing schedule for α. In Riemannian manifolds, this step is generalized using an analogous result (lemma 6), introduced in prior work and valid in all Alexandrov spaces, which include our setting of geodesically convex subsets of Riemannian manifolds with lower-bounded sectional curvature. The curvature-dependent quantity ζ appears from this lemma, letting us bound the corresponding Riemannian inner products.

The benefit of adaptivity. Let us also mention that the above bounds improve significantly for (per-manifold) sparse gradients. In practice, this can happen, for instance, for algorithms embedding each word i (or each node of a graph) in its own manifold M_i, when just a few words are updated at a time.

On the choice of ϕ_i. The fact that our convergence theorems (see lemma 3) do not require specifying ϕ_i suggests that the regret bounds could be improved by exploiting momentum/acceleration in the proofs for a particular ϕ_i. Note that this remark also applies to AMSGRAD BID18.

We empirically assess the quality of the proposed algorithms RADAM, RAMSGRAD, and RADAGRAD against the non-adaptive RSGD method. For this, we follow BID15 and embed the transitive closure of the WordNet noun hierarchy BID12 in the n-dimensional Poincaré model D^n of hyperbolic geometry, which is well known to be better suited than Euclidean space for embedding tree-like graphs BID8 BID4. In this case, each word is embedded in the same space of constant curvature −1; thus M_i = D^n for all i.
Note that it would also be interesting to explore the benefit of our optimization tools for the algorithms proposed in BID14 BID4 BID6. The choice of the Poincaré model is justified by the availability of closed-form expressions for all the quantities used in Alg. 1a (a minimal code sketch of these operations follows below):

• Conformal factor: λ_x = 2/(1 − ||x||^2).
• Riemannian gradients are rescaled Euclidean gradients: grad f(x) = (1/λ_x^2) ∇f(x).
• Distance function and geodesics (BID15 BID26 BID7).
• Exponential and logarithmic maps: exp_x(v) = x ⊕ (tanh(λ_x ||v||/2) v/||v||), where ⊕ is the generalized Möbius addition (BID26 BID7).
• Parallel transport along the unique geodesic from x to y: P_{x→y}(v) = (λ_x/λ_y) · gyr[y, −x]v. This formula was derived from BID26 BID7, with gyr given in closed form in Eq. (1.27) of those works.
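The closed-form expressions above translate directly into code. A minimal NumPy sketch, under our own naming and with the Möbius addition written out from its standard closed form:

```python
import numpy as np

def lam(x):
    return 2.0 / (1.0 - np.dot(x, x))        # conformal factor λ_x

def egrad_to_rgrad(x, egrad):
    return egrad / lam(x) ** 2               # grad f(x) = ∇f(x) / λ_x^2

def mobius_add(x, y):
    xy, xx, yy = np.dot(x, y), np.dot(x, x), np.dot(y, y)
    return ((1 + 2 * xy + yy) * x + (1 - xx) * y) / (1 + 2 * xy + xx * yy)

def expmap(x, v):
    n = np.linalg.norm(v)
    if n < 1e-12:
        return x
    return mobius_add(x, np.tanh(lam(x) * n / 2.0) * v / n)

def retraction(x, v, eps=1e-5):
    # first-order substitute for expmap used in the "retraction" experiments;
    # renormalize to keep the iterate strictly inside the unit ball
    y = x + v
    n = np.linalg.norm(y)
    return y if n < 1.0 - eps else y * (1.0 - eps) / n
```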
Dataset & Model. The transitive closure of the WordNet taxonomy graph consists of 82,115 nouns and 743,241 hypernymy Is-A relations (directed edges E). These words are embedded in D^n such that the distance between words connected by an edge is minimized, while being maximized otherwise. We minimize the same loss function as BID15, which is similar to a log-likelihood but approximates the partition function by sampling negative word pairs (non-edges), fixed to 10 per positive pair in our case. Note that this loss does not use the direction of the edges in the graph.

Metrics. We report both the loss value and the mean average precision (MAP) BID15: for each directed edge (u, v), we rank its distance d(u, v) among the distances to the full set of ground-truth negative examples {d(u, v') | (u, v') ∉ E}. We use the same two settings as BID15, namely reconstruction (measuring representation capacity) and link prediction (measuring generalization). For link prediction we sample a validation set of 2% of edges from the transitive-closure edges that contain no leaf node or root. We focus on 5-dimensional hyperbolic spaces.

Training details. For all methods we use the same "burn-in phase" described in BID15 for 20 epochs, with a fixed learning rate of 0.03, using RSGD with retraction as explained in Sec. 2.2. Solely during this phase, we sampled negative words based on their graph degree raised to the power 0.75; this strategy improves all metrics. Afterwards, when the different optimization methods start, we sample negatives uniformly. We use n = 5, following BID15.

Optimization methods. Experimentally, we obtained slightly better results for RADAM than for RAMSGRAD, so we mostly report the former. Moreover, we unexpectedly observed convergence to lower loss values when replacing the true exponential map with its first-order approximation, i.e., the retraction R_x(v) = x + v, in both RSGD and our adaptive methods from Alg. 1a. One possible explanation is that retraction methods need fewer steps and smaller gradients to "escape" points sub-optimally collapsed onto the border of the ball D^n, compared to fully Riemannian methods. As a consequence, we report "retraction"-based methods in a separate setting, as they are not directly comparable to their fully Riemannian analogues.

Results. We show results in FIG2 for "exponential"-based and "retraction"-based methods. We ran all our methods with learning rates from the set {0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1.0, 3.0}. For the RSGD baseline we show in orange the best learning-rate setting, but we also show the next lower (slower convergence, in blue) and the next higher (faster overfitting, in green) learning rates. For RADAM and RAMSGRAD we only show the best settings. We always use β_1 = 0.9 and β_2 = 0.999 for these methods, as these achieved the lowest training loss. RADAGRAD was consistently worse, so we do not report it. As can be seen, RADAM always achieves the lowest training loss. On the MAP metric, for both the reconstruction and link-prediction settings, the same method also outperforms all the other methods in the fully Riemannian setting (Tab. 2). Interestingly, in the "retraction" setting, RADAM reaches the lowest training loss value and is on par with RSGD on the MAP evaluation for both reconstruction and link prediction. However, RAMSGRAD converges faster in terms of MAP for the link-prediction task, suggesting that this method has better generalization capability.

After Riemannian SGD was introduced by BID1, a plethora of other first-order Riemannian methods arose, such as Riemannian SVRG, Riemannian Stein variational gradient descent BID10, Riemannian accelerated gradient descent BID31, and averaged RSGD BID25, along with new methods for their convergence analysis in the geodesically convex case. Stochastic gradient Langevin dynamics was generalized as well, to improve optimization on the probability simplex BID16.

Let us also mention that BID20 proposed Riemannian counterparts of SGD with momentum and RMSprop, suggesting to transport the momentum term using parallel translation, an idea that we preserved. However, (i) no convergence guarantee is provided, and (ii) their algorithm performs the coordinate-wise adaptive operations (squaring and division) w.r.t. a coordinate system in the tangent space, which, as we discussed in section 3.1, compromises the possibility of obtaining convergence guarantees. Finally, another version of Riemannian ADAM, for the Grassmann manifold G(1, n), was previously introduced by BID3, also transporting the momentum term using parallel translation. However, their algorithm completely removes the adaptive component, since the adaptivity term v_t becomes a scalar. No adaptivity across manifolds is discussed, which is the main point of our work. Moreover, no convergence analysis is provided either.

Driven by recent work on learning non-Euclidean embeddings for symbolic data, we proposed to generalize popular adaptive optimization tools (ADAM, AMSGRAD, ADAGRAD) to Cartesian products of Riemannian manifolds in a principled and intrinsic manner. We derived convergence rates that are similar to those of the corresponding Euclidean models. Experimentally, we showed that our methods outperform popular non-adaptive methods such as RSGD on the realistic task of hyperbolic word taxonomy embedding.

(Appendix A: proof of Theorem 1; several displayed equations are omitted in the source.) Combining the update formula with the inequality given by lemma 6 yields a bound [equations omitted in source], where we use the notation ⟨·,·⟩_{x^i} for ρ_{x^i}(·,·). Now applying the Cauchy-Schwarz and Young inequalities to the last term yields a further bound [equation omitted in source]. From the geodesic convexity of f_t for 1 ≤ t ≤ T, we have [equation omitted in source]. Let us look at the first term. Using β_{1t} ≤ β_1 and a change of indices, we have [equation omitted in source], where the last equality comes from a standard telescopic summation. We now need the following lemma.

Lemma 3. [Statement omitted in source.] Proof. Let us start by separating the last term and removing the hat on v.
Using that β_{1k} ≤ β_1 for all k ∈ [T] and that (1 − β_{1j}) ≤ 1, the claim follows [intermediate equations omitted in source].

The following lemma is a user-friendly inequality developed in prior work to prove the convergence of gradient-based optimization algorithms for geodesically convex functions in Alexandrov spaces.

Lemma 6 (Cosine inequality in Alexandrov spaces). If a, b, c are the side lengths of a geodesic triangle in an Alexandrov space with curvature lower bounded by κ, and A is the angle between sides b and c, then a^2 ≤ ζ(κ, c) b^2 + c^2 − 2bc cos(A). Proof: see section 3.1, lemma 6 of the cited work.

Lemma 7 (An analogue of Cauchy-Schwarz). For all p, k ∈ N*, u_1, ..., u_k ∈ R^p and a_1, ..., a_k ∈ R_+, the stated inequality holds [equation omitted in source]. Proof. The proof consists of applying the Cauchy-Schwarz inequality twice [equations omitted in source].

Finally, this last lemma was used by BID18 in their convergence proof for ADAMNC; we need it too, in an analogous lemma.

Lemma 8 (BID0). For any non-negative real numbers y_1, ..., y_t, the following holds: [equation omitted in source].
TL;DR: Adapting Adam, AMSGrad, and AdaGrad to Riemannian manifolds.
We study the problem of defending deep neural network approaches for image classification from physically realizable attacks. First, we demonstrate that the two most scalable and effective methods for learning robust models, adversarial training with PGD attacks and randomized smoothing, exhibit very limited effectiveness against three of the highest-profile physical attacks. Next, we propose a new abstract adversarial model, rectangular occlusion attacks, in which an adversary places a small adversarially crafted rectangle in an image, and we develop two approaches for efficiently computing the resulting adversarial examples. Finally, we demonstrate that adversarial training using our new attack yields image classification models that exhibit high robustness against the physically realizable attacks we study, offering the first effective generic defense against such attacks.

The state-of-the-art effectiveness of deep neural networks has made them the technique of choice in a variety of fields, including computer vision, natural language processing, and speech recognition. However, there have been a myriad of demonstrations that deep neural networks can be easily fooled by carefully perturbing pixels in an image, through what have become known as adversarial example attacks. In response, a large literature has emerged on defending deep neural networks against adversarial examples, typically either proposing techniques for learning more robust neural network models, or detecting adversarial inputs. Particularly concerning, however, have been a number of demonstrations that implement adversarial perturbations directly in physical objects that are subsequently captured by a camera and fed through the deep neural network classifier. Among the most significant such physical attacks on deep neural networks are the three we consider here: 1) the attack that fools face recognition by using adversarially designed eyeglass frames, 2) the attack that fools stop-sign classification by adding adversarially crafted stickers, and 3) the universal adversarial patch attack, which causes targeted misclassification of any object bearing the adversarially designed sticker (patch).

Oddly, while considerable attention has been devoted to defending against adversarial perturbation attacks in the digital space, there are no effective methods specifically designed to defend against such physical attacks. Our first contribution is an empirical evaluation of the effectiveness of conventional approaches to robust ML against two physically realizable attacks: the eyeglass-frame attack on face recognition and the sticker attack on stop signs. Specifically, we study the performance of adversarial training and randomized smoothing against these attacks, and show that both have limited effectiveness in this context (quite ineffective in some settings, and somewhat more effective, but still not highly robust, in others), despite showing moderate effectiveness against l_inf and l_2 attacks, respectively.

Our second contribution is a novel abstract attack model that more directly captures the nature of common physically realizable attacks than the conventional l_p-based models. Specifically, we consider a simple class of rectangular occlusion attacks in which the attacker places a rectangular sticker onto an image, with both the location and the content of the sticker adversarially chosen.
We develop several algorithms for computing such adversarial occlusions and use adversarial training to obtain neural network models that are robust to them. We then experimentally demonstrate that our proposed approach is significantly more robust against physical attacks on deep neural networks than adversarial training and randomized smoothing methods that leverage l_p-based attack models.

Related Work. While many approaches for defending deep learning in vision applications have been proposed, robust learning methods have been particularly promising, since alternatives are often defeated soon after being proposed. The standard solution approach is an adaptation of stochastic gradient descent (SGD) where gradients are taken either with respect to the loss at the optimal adversarial perturbation for each input (or an approximation thereof, found by heuristic local search or a convex over-approximation), or with respect to the dual of the convex relaxation of the attacker's maximization problem. Despite these advances, PGD-based adversarial training remains the most practically effective method for hardening neural networks against adversarial examples with l_inf-norm perturbation constraints. Recently, randomized smoothing emerged as another class of techniques for obtaining robustness, with the strongest results in the context of l_2-norm attacks. In addition to training neural networks that are robust by construction, a number of methods study the problem of detecting adversarial examples, with mixed results. Of particular interest is recent work on detecting physical adversarial examples. However, detection is inherently weaker than robustness, which is our goal: even perfect detection does not resolve the question of how to make decisions on adversarial examples. Finally, our work is in the spirit of other recent efforts that characterize the robustness of neural networks to physically realistic perturbations, such as translations, rotations, blurring, and contrast.

Adversarial examples involve modifications of input images that are either invisible to humans or unsuspicious, and that cause systematic misclassification by state-of-the-art neural networks. Commonly, approaches for generating adversarial examples aim to solve an optimization problem of the form argmax_{δ: ||δ||_p ≤ ε} L(x + δ), where x is the original input image, δ is the adversarial perturbation, L(·) is the adversary's utility function (for example, the adversary may wish to maximize the cross-entropy loss), and ||·||_p is some l_p norm. While a host of such digital attacks have been proposed, two have come to be viewed as state of the art: the Carlini & Wagner (2017b) attack and the projected gradient descent (PGD) attack.

While most work to date has considered attacks that modify the digital image directly, we focus on a class of physical attacks that modify the actual object being photographed in order to fool the neural network that subsequently takes its digital representation as input. The attacks we focus on have three characteristics: 1. the attack can be implemented in the physical space (e.g., by modifying the stop sign); 2.
the attack has low suspiciousness; this is operationalized by modifying only a small part of the object, with the modification resembling common "noise" that occurs in the real world (for example, stickers on a stop sign would appear to most people as vandalism, whereas covering the sign with a printed poster would look highly suspicious); and 3. the attack causes misclassification by a state-of-the-art deep neural network.

Since our ultimate purpose is defense, we do not concern ourselves with actually implementing the physical attacks. Instead, we consider the digital representation of these attacks, ignoring other important issues such as robustness to many viewpoints and printability. For example, when the attack involves posting stickers on a stop sign, we are only concerned with simulating such stickers on digital images of stop signs. For this reason, we refer to such attacks as physically realizable attacks, to allude to the fact that it is possible to realize them in practice. It is evident that physically realizable attacks represent a somewhat stronger adversarial model than their actual implementation in the physical space. Henceforth, for simplicity, we use the terms physical attacks and physically realizable attacks interchangeably.

We consider three physically realizable attacks. The first is the attack on face recognition by Sharif et al., in which the attacker adds adversarial noise inside printed eyeglass frames that can subsequently be put on to fool the deep neural network (Figure 1a). The second attack posts adversarially crafted stickers on a stop sign to cause it to be misclassified as another road sign, such as a speed-limit sign (Figure 1b). The third, the adversarial patch attack, designs a patch (a sticker) with adversarial noise that can be placed onto an arbitrary object, causing that object to be misclassified by a deep neural network.

While numerous approaches for making deep learning robust have been proposed, many are heuristic and have soon after been defeated by more sophisticated attacks. Consequently, we focus on principled approaches for defense that have not been broken. These fall broadly into two categories, robust learning and randomized smoothing, and we focus on a state-of-the-art representative from each class.

Robust Learning. The goal of robust learning is to minimize a robust loss, min_θ E_{(x,y)∼D} max_{||δ||_∞ ≤ ε} L(θ; x + δ, y), where D denotes the training data set. In itself this is a highly intractable problem. Several techniques have been developed to obtain approximate solutions. Among the most effective in practice is the adversarial training approach of Madry et al., who use the PGD attack as an approximation to the inner optimization problem, and then take gradient descent steps with respect to the associated adversarial inputs. In addition, we consider a modified version of this approach termed curriculum adversarial training. Our implementation proceeds as follows: first, apply adversarial training for a small bound ε; then increase ε and repeat adversarial training, and so on, increasing ε until we reach the desired level of adversarial noise we wish to be robust to.

Randomized Smoothing. The second class of techniques works by adding noise to inputs at both training and prediction time. The key idea is to construct a smoothed classifier g(·) from a base classifier f(·) by perturbing the input x with isotropic Gaussian noise of level σ. The prediction is then made by choosing the class with the highest probability measure under the induced distribution of f(·) decisions: g(x) = argmax_c P_{e∼N(0,σ²I)}[f(x + e) = c]. To achieve provably robust classification in this manner, one typically trains the classifier f(·) by adding Gaussian noise to inputs at training time. A brief sketch of both defenses follows.
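For concreteness, here is a minimal PyTorch-style sketch of the two defenses just described. `pgd_attack` is a hypothetical routine returning perturbed inputs, and the hyperparameter values are placeholders rather than the exact experimental settings.

```python
import torch

def adversarial_training_epoch(model, loader, opt, loss_fn, pgd_attack, eps):
    # Approximate the inner max of the robust loss with a PGD attack, then
    # take a gradient step on the resulting worst-case batch.
    model.train()
    for x, y in loader:
        x_adv = pgd_attack(model, x, y, eps)
        opt.zero_grad()
        loss_fn(model(x_adv), y).backward()
        opt.step()

def curriculum_adversarial_training(model, loader, opt, loss_fn, pgd_attack,
                                    eps_schedule=(4, 8, 16, 32), epochs=30):
    # Re-run adversarial training while gradually doubling the l_inf bound;
    # each level starts from the model hardened at the previous level.
    for eps in eps_schedule:
        for _ in range(epochs):
            adversarial_training_epoch(model, loader, opt, loss_fn, pgd_attack, eps)

def smoothed_predict(f, x, sigma, n_samples=1000, num_classes=10):
    # Monte Carlo estimate of g(x) = argmax_c P[f(x + e) = c], e ~ N(0, sigma^2 I);
    # x is a single image tensor with a batch dimension.
    counts = torch.zeros(num_classes)
    with torch.no_grad():
        for _ in range(n_samples):
            counts[f(x + sigma * torch.randn_like(x)).argmax()] += 1
    return counts.argmax().item()
```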
Most approaches for endowing deep learning with adversarial robustness focus on adversarial models in which the attacker introduces l_p-bounded adversarial perturbations over the entire input. Earlier we described two representative approaches in this vein: adversarial training, commonly focused on robustness against l_inf attacks, and randomized smoothing, which is most effective against l_2 attacks (although certification bounds can be extended to other l_p norms as well). We call these methods conventional robust ML. In this section, we ask the following question: are conventional robust ML methods robust against physically realizable attacks? A similar question was asked in the context of malware classifier evasion in prior work, which found that l_p-based robust ML methods can indeed be successful in achieving robustness against realizable evasion attacks. Ours is the first investigation of this issue in computer vision applications and for deep neural networks, where attacks involve adversarial masking of objects.

We study this issue experimentally by considering two state-of-the-art approaches for robust ML: PGD-based adversarial training, along with its curriculum-learning variation, and randomized smoothing, using the implementation of Cohen et al. These approaches are applied to defend against the two physically realizable attacks described in Section 2.1: an attack on face recognition which adds adversarial eyeglass frames to faces, and an attack on stop-sign classification which adds adversarial stickers to a stop sign to cause misclassification.

We consider several variations of adversarial training, as a function of the l_inf bound ε imposed on the adversary. As in the original approach, adversarial instances in adversarial training were generated using PGD. We consider attacks with ε ∈ {4, 8} (adversarial training failed to make progress when we used ε = 16). For curriculum adversarial training, we first performed adversarial training with ε = 4, then doubled ε to 8 and repeated adversarial training starting from the model robust to ε = 4, then doubled again, and so on. In the end, we learned models for ε ∈ {4, 8, 16, 32}. For all versions of adversarial training, we consider 7 and 50 iterations of the PGD attack, with a learning rate of ε/4 for the former and 1 for the latter. In all cases, pixel values are in the 0-255 range, and retraining was performed for 30 epochs using the ADAM optimizer. For randomized smoothing, we consider noise levels σ ∈ {0.25, 0.5, 1}, as in Cohen et al., and take 1,000 Monte Carlo samples at test time.

We applied white-box dodging (untargeted) attacks on the face recognition systems (FRS) of Sharif et al. We used both the VGGFace data and the transferred VGGFace CNN model for the face recognition task, subselecting 10 individuals with 300-500 face images each. Further details about the dataset, CNN architecture, and training procedure are in Appendix A. For the attack, we used frames identical to those of the original attack, occupying 6.5% of the pixels. Just as in the original attack, we compute adversarial perturbations inside the eyeglass-frame area using a learning rate of 20 and a momentum value of 0.4, and vary the number of attack iterations between 0 (no attack) and 300. Figure 2 (left) shows the resulting performance of adversarial training against the eyeglass frame attack.
First, it is clear that none of the variations of adversarial training are particularly effective once the number of physical attack iterations rises above 20. The best adversarial robustness is achieved by adversarial training with ε = 8, for approaches using either 7 or 50 PGD iterations (the difference between these appears negligible). However, non-adversarial accuracy for these models is below 70%, an approximately 20% drop compared to the original model. Moreover, adversarial accuracy is under 40% for sufficiently strong physical attacks. Curriculum adversarial training generally achieves significantly higher non-adversarial accuracy, but is far less robust, even when trained with PGD attacks using ε = 32. Figure 2 (right) shows the performance of randomized smoothing when faced with the eyeglass frame attack. It is readily apparent that randomized smoothing is ineffective at deflecting this physical attack: however much noise we add, accuracy under attack is below 20% even for relatively weak attacks, and often drops to nearly 0 for sufficiently strong attacks.

Following the original stop-sign attack, we use the LISA traffic sign dataset for our experiments, with 40 stop signs from this dataset as our test data, and perform untargeted attacks (in contrast to the original work, which focused on targeted attacks). A detailed description of the data and the CNN used for traffic sign prediction is in Appendix A. We apply the same settings as the original attack and use the Adam optimizer with the same parameters. Since we observed few differences in performance between running PGD for 7 vs. 50 iterations, adversarial training methods in this section all use 7 iterations of PGD.

Again, we begin by considering adversarial training (Figure 3, left and middle). In this case, both the original and curriculum versions of adversarial training with PGD are ineffective when ε = 32 (error rates on clean data are above 90%); these are consequently omitted from the plots. Curriculum adversarial training with ε = 16 has the best performance on adversarial data, and works well on clean data. Surprisingly, most variants of adversarial training perform at best marginally better than the original model against the stop sign attack. Even the best variant has relatively poor performance, with robust accuracy under 50% for stronger attacks.

Figure 3 (right) presents the results for randomized smoothing. In this set of experiments, we found that randomized smoothing performs inconsistently; to address this, we repeated the experiments with 5 random seeds and report the resulting mean values. Here, the best variant uses σ = 0.25 and, unlike in the eyeglass-frame experiments, significantly outperforms adversarial training, reaching accuracy slightly above 60% even for the stronger attacks. Nevertheless, even randomized smoothing results in a significant degradation of effectiveness on adversarial instances (nearly 40% lower than on clean data).

There are two possible reasons why conventional robust ML performs poorly against physical attacks: 1) adversarial models involving l_p-bounded perturbations are too hard to enable effective robust learning, and 2) the conventional attack model is too great a mismatch for realistic physical attacks. In Appendix B, we present evidence supporting the latter: conventional robust ML models exhibit much higher robustness when faced with the l_p-bounded attacks they are trained to resist.
As we observed in Section 3, conventional models for making deep learning robust to attack can perform quite poorly when confronted with physically realizable attacks. In other words, the evidence strongly suggests that the conventional threat model, in which attackers make l_p-bounded perturbations to input images, is not particularly useful if one is concerned with the main physical threats likely to be faced in practice. However, given the diversity of possible physical attacks, is it even possible to have a meaningful approach for ensuring robustness against a broad range of them? The two attacks we considered so far could hardly be more dissimilar: in one, we engineer eyeglass frames; in the other, stickers on a stop sign. We observe that the key common element in these attacks, and in many other physical attacks we may expect to encounter, is that they introduce adversarial occlusions to a part of the input. The common constraint in such attacks is the need to avoid being suspicious, which effectively limits the size of the adversarial occlusion, but not necessarily its shape or location. Next, we introduce a simple abstract model of occlusion attacks, and then discuss how such attacks can be computed and how we can make classifiers robust to them.

We propose the following simple abstract model of adversarial occlusions of input images. The attacker introduces a fixed-dimension rectangle. This rectangle can be placed anywhere in the image, and the attacker can introduce l_inf noise inside the rectangle with an exogenously specified high bound (for example, ε = 255, which effectively allows arbitrary adversarial noise). This model bears some similarity to l_0 attacks, but the rectangle imposes a contiguity constraint, which reflects common physical limitations. The model is clearly abstract: in practice, adversarial occlusions need not be rectangular or of fixed dimensions (the eyeglass frame attack, for example, is clearly not rectangular), but at the same time they cannot usually be arbitrarily superimposed on an image, as they are implemented in the physical environment. Nevertheless, the model reflects some of the most important aspects common to many physical attacks, such as stickers placed on an adversarially chosen portion of the target object. We call our attack model a rectangular occlusion attack (ROA). An important feature of this attack is that it is untargeted: since our ultimate goal is to defend against physical attacks whatever their target, considering untargeted attacks obviates the need for precise knowledge of the attacker's goals. For illustrations of the ROA attack, see Appendix C.

Computing an ROA attack involves 1) identifying a region in which to place the rectangle, and 2) generating fine-grained adversarial perturbations restricted to this region. The former task can be done by exhaustive search: consider all possible locations for the upper-left corner of the rectangle, compute adversarial noise inside the rectangle using PGD for each, and choose the worst-case attack (i.e., the attack maximizing the loss on the resulting image). However, this would be quite slow, since we would need to run PGD inside the rectangle for every possible position. Our approach consequently decouples the two tasks. Specifically, we first perform an exhaustive search using a grey rectangle to find the position that maximizes the loss, and then fix that position and apply PGD inside the rectangle. An important limitation of the exhaustive search for the ROA location is that it requires evaluating the loss function, and hence a full forward pass, for every possible location, so the search is still relatively slow. To speed the process up further, we use the gradient of the loss with respect to the input image to identify candidate locations: we select a subset of C locations for the sticker with the highest gradient magnitude, and exhaustively search only among these C locations. C is exogenously specified to be small relative to the number of pixels in the image, which significantly limits the number of loss function evaluations. Full details of our algorithms for computing ROA are provided in Appendix D; a simplified sketch follows.
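The following is a rough PyTorch sketch of the two-stage search just described, under our own naming (the full procedure, including the stride parameter, is in Appendix D). Passing `top_c` enables the gradient-guided shortlist; leaving it as `None` gives exhaustive search. Pixel values are assumed scaled to [0, 1], so the grey value is 127.5/255.

```python
import torch

def roa_position(model, x, y, loss_fn, h, w, stride=5, top_c=None):
    # Find the grey-rectangle position maximizing the classification loss.
    # x: a 1xCxHxW image tensor; returns the (row, col) of the best corner.
    H, W = x.shape[-2:]
    candidates = [(i, j) for i in range(0, H - h + 1, stride)
                         for j in range(0, W - w + 1, stride)]
    if top_c is not None:  # gradient-guided shortlist of candidate locations
        x_g = x.clone().requires_grad_(True)
        loss_fn(model(x_g), y).backward()
        g = x_g.grad.abs().sum(dim=1, keepdim=True)
        scores = [g[..., i:i + h, j:j + w].sum().item() for i, j in candidates]
        candidates = [c for _, c in sorted(zip(scores, candidates),
                                           reverse=True)[:top_c]]
    best_pos, best_loss = None, float("-inf")
    with torch.no_grad():
        for i, j in candidates:
            x_adv = x.clone()
            x_adv[..., i:i + h, j:j + w] = 127.5 / 255
            loss = loss_fn(model(x_adv), y).item()
            if loss > best_loss:
                best_pos, best_loss = (i, j), loss
    return best_pos
```

PGD restricted to the returned rectangle then fills in the adversarial content.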
Once we are able to compute the ROA attack, we apply the standard adversarial training approach for defense. We term the resulting classifiers, robust to our abstract adversarial occlusion attacks, Defense against Occlusion Attacks (DOA), and propose these as an alternative to conventional robust ML for defending against physical attacks. As we will see presently, this defense against ROA is quite adequate for our purposes.

We now evaluate the effectiveness of DOA, that is, adversarial training using the ROA threat model we introduced, against physically realizable attacks (see Appendix G for examples that defeat conventional methods but not DOA). Recall that we consider only digital representations of the corresponding physical attacks. Consequently, our results in this section can be viewed as a lower bound on robustness to actual physical attacks, which must additionally contend with practical constraints such as robustness to multiple viewpoints. In addition to the two physical attacks considered previously, we also evaluate DOA against the adversarial patch attack, implemented on both the face recognition and traffic sign data.

We consider two rectangle dimensions with comparable area: 100 × 50 and 70 × 70 pixels. The rectangles thus occupy approximately 10% of the 224 × 224 face images. We used {30, 50} iterations of PGD with ε = 255/2 to generate adversarial noise inside the rectangle, with learning rates α = {8, 4} respectively. For the gradient-based version of ROA, we choose C = 30. DOA adversarial training is performed for 5 epochs with a learning rate of 0.0001.

Figure 4: Performance of DOA (using the 100 × 50 rectangle) against the eyeglass frame attack in comparison with conventional methods. Left: comparison between DOA, adversarial training, and randomized smoothing (using the most robust variants of each). Middle/Right: DOA performance for different rectangle dimensions and numbers of PGD iterations inside the rectangle, using exhaustive search (middle) and the gradient-based heuristic (right) for ROA placement.

Figure 4 (left) presents the results comparing the effectiveness of DOA against the eyeglass frame attack on face recognition to that of adversarial training and randomized smoothing (taking the most robust variants of both). We can see that DOA yields significantly more robust classifiers for this domain.
The gradient-based heuristic does come at some cost, with performance slightly worse than under exhaustive search, but this drop is relatively small, and the result is still far better than conventional robust ML approaches. Figure 4 (middle and right) compares the performance of DOA between the two rectangle variants. The key observation is that, as long as we use enough iterations of PGD inside the rectangle, changing its dimensions (keeping the area roughly constant) appears to have minimal impact.

We now repeat the evaluation with the traffic sign data and the stop sign attack. In this case, we used 10 × 5 and 7 × 7 rectangles covering approximately 5% of the 32 × 32 images. We set C = 10 for the gradient-based ROA. The implementation of DOA is otherwise identical to the face recognition experiments above. We present our results using the square rectangle, which in this case was significantly more effective; results for the 10 × 5 rectangle are in Appendix F.

Figure 5 (left) compares the effectiveness of DOA against the stop sign attack with the best variants of adversarial training and randomized smoothing. Our results here are for 30 iterations of PGD; in Appendix F, we study the impact of varying the number of PGD iterations. We can observe that DOA is again significantly more robust, with robust accuracy over 90% for the exhaustive-search variant and approximately 85% for the gradient-based variant, even for stronger attacks. Moreover, DOA remains 100% effective at classifying stop signs on clean data, and exhibits approximately 95% accuracy on the full traffic sign classification task.

Finally, we evaluate DOA against the adversarial patch attacks, in which an adversarial patch (e.g., a sticker) is designed to be placed on an object with the goal of inducing a target prediction. We study this on both the face recognition and traffic sign tasks; here, we present the results for face recognition, with further detailed results on both datasets provided in Appendix F. As we can see from Figure 5 (right), adversarial patch attacks are quite effective once the attack region (fraction of the image) reaches 10% or more, with adversarial training and randomized smoothing both performing rather poorly. In contrast, DOA remains highly robust even when the adversarial patch covers 20% of the image.

As we have shown, conventional methods for making deep learning approaches to image classification robust to physically realizable attacks tend to be relatively ineffective. In contrast, a new threat model we proposed, rectangular occlusion attacks (ROA), coupled with adversarial training, achieves high robustness against several prominent examples of physical attacks. While we explored a number of variations of ROA attacks as a means to achieve robustness against physical attacks, numerous questions remain. For example, can we develop effective methods to certify robustness against ROA, and would the resulting approaches be as effective in practice as our method based on a combination of heuristically computed attacks and adversarial training? Are there other types of occlusions that are more effective? Answers to these and related questions may prove a promising path towards practical robustness of deep learning when deployed for downstream applications of computer vision such as autonomous driving and face recognition.

Appendix A. VGGFace is a benchmark for face recognition, containing 2,622 subjects with 2.6 million images in total. We chose ten subjects: A. J. Buckley, A. R.
Rahman, Aamir Khan, Aaron Staton, Aaron Tveit, Aaron Yoo, Abbie Cornish, Abel Ferrara, Abigail Breslin, and Abigail Spencer, and subselected face images pertaining only to these individuals. Since approximately half of the images could not be downloaded, our final dataset contains 300-500 images per subject. We used the standard crop-and-resize method to process the data to 224 × 224 pixels, and split the dataset into training, validation, and test sets in a 7:2:1 ratio per subject. In total, the dataset has 3,178 training images, 922 validation images, and 470 test images.

We use the VGGFace convolutional neural network model, a variant of the VGG16 model containing 5 convolutional layer blocks and 3 fully connected layers. We make use of standard transfer learning, as we only classify 10 subjects: we keep the convolutional layers identical to the VGGFace structure, but change the fully connected layers to 1024 → 1024 → 10 instead of 4096 → 4096 → 2622. Specifically, in our PyTorch implementation, we convert images from RGB to BGR channel order and subtract the mean value [129.1863, 104.7624, 93.5940] in order to use the pretrained VGGFace weights in the convolutional layers. We set the batch size to 64 and use PyTorch's built-in Adam optimizer with an initial learning rate of 10^-4 and otherwise default parameters, dropping the learning rate by a factor of 0.1 every 10 epochs. Additionally, we used validation accuracy to track model performance and select a model in case of overfitting. After 30 epochs of training, the model achieves 98.94% accuracy on the test data.

To be consistent with the original stop-sign attack, we select the subset of LISA that contains 47 different U.S. traffic signs (Møgelmose et al., 2012). To alleviate the problems of class imbalance and extremely blurry data, we picked the 16 best-quality signs, with 3,509 training and 1,148 validation data points. From the validation data, we obtain the test data, which includes only 40 stop signs, to evaluate performance against the stop sign attack, as done in the original work. In the main body of the paper, we present results only on this test data when evaluating robustness to stop sign attacks; in the appendix below, we also include performance on the full validation set without adversarial manipulation. All data were processed by standard crop-and-resize to 32 × 32 pixels. We use the LISA-CNN architecture defined in the original attack paper, constructing a convolutional neural network containing three convolutional layers and one fully connected layer. We use the Adam optimizer with an initial learning rate of 10^-1 and otherwise default parameters, dropping the learning rate by a factor of 0.1 every 10 epochs, with a batch size of 128. After 30 epochs, we achieve 98.69% accuracy on the validation set, and 100% accuracy in identifying the stop signs in our test data.

Appendix B: effectiveness of conventional robust ML against l_inf and l_2 attacks. In this appendix, we show that adversarial training and randomized smoothing degrade more gracefully when faced with the attacks they are designed for. In particular, we consider variants of projected gradient descent (PGD) for both l_inf and l_2 attacks. The form of PGD for the l_inf attack is x_{t+1} = Proj(x_t + α sign(∇L(x_t))), where Proj is a projection operator clipping the result to be feasible, x_t is the adversarial example in iteration t, α the learning rate, and L(·) the loss function. In the case of an l_2 attack, PGD instead takes a step along the normalized gradient, x_{t+1} = Proj(x_t + α ∇L(x_t)/||∇L(x_t)||_2), where the projection operator rescales the perturbation δ = x_{t+1} − x_t to have ||δ||_2 ≤ ε if it does not already.
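A minimal sketch of the two PGD steps just described (assuming pixel values scaled to [0, 1]; the function names are ours):

```python
import torch

def pgd_linf_step(x_adv, x0, grad, alpha, eps):
    # l_inf PGD: signed-gradient step, then clip back into the eps-ball.
    x_adv = x_adv + alpha * grad.sign()
    x_adv = torch.max(torch.min(x_adv, x0 + eps), x0 - eps)
    return x_adv.clamp(0, 1)

def pgd_l2_step(x_adv, x0, grad, alpha, eps):
    # l_2 PGD: normalized-gradient step, then project onto the eps-ball.
    x_adv = x_adv + alpha * grad / (grad.norm() + 1e-12)
    delta = x_adv - x0
    norm = delta.norm()
    if norm > eps:
        delta = delta * (eps / norm)
    return (x0 + delta).clamp(0, 1)
```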
The experiments were done on the face recognition and traffic sign datasets, but unlike physical attacks on stop signs, we now consider adversarial perturbations to all sign images. We begin with our results on the face recognition dataset. Tables 1 and 2 present results for (curriculum) adversarial training for varying ε of the l∞ attacks, separately for training and evaluation. As we can see, curriculum adversarial training with ε = 16 is generally the most robust, and remains reasonably effective for relatively large perturbations. However, we do observe a clear tradeoff between accuracy on non-adversarial data and robustness, as one would expect. Table 3 presents the results of using randomized smoothing on face recognition data when facing the l2 attacks. Again, we observe a high level of robustness and, in most cases, a relatively limited drop in performance, with σ = 0.5 perhaps striking the best balance. Tables 4 and 5 present evaluation results on traffic sign data for curriculum adversarial training against the l∞ attack for varying ε. As with the face recognition data, we can observe that the approaches tend to be relatively robust, and effective on non-adversarial data for adversarial training methods using ε < 32. The results of randomized smoothing on traffic sign data are given in Table 6. Since images are smaller here than in VGGFace, lower values of ε for the l2 attacks are meaningful, and for ε ≤ 1 we generally see robust performance from randomized smoothing, with σ = 0.5 providing a good balance between non-adversarial accuracy and robustness, just as before. Our basic algorithm for computing rectangular occlusion attacks (ROA) proceeds through the following two steps: 1. Iterate through possible positions for the rectangle's upper left-hand corner point in the image. Find the position for a grey rectangle (RGB value = [127.5, 127.5, 127.5]) in the image that maximizes loss. 2. Generate high-ε l∞-bounded noise inside the rectangle at the position computed in step 1. Algorithm 1 presents the full algorithm for identifying the ROA position, which amounts to exhaustive search through the image pixel region. This algorithm has several parameters. First, we assume that images are squares with dimensions N × N. Second, we introduce a stride parameter S. The purpose of this parameter is to make location computation faster by only considering every Sth pixel during the search (in other words, we skip S pixels each time). For our implementation of ROA attacks, we choose the stride parameter S = 5 for face recognition and S = 2 for traffic sign classification. Once we've found the place for the rectangle, our next step is to introduce adversarial noise inside it. For this, we use the l∞ version of the PGD attack, restricting perturbations to the rectangle. We used {7, 20, 30, 50} iterations of PGD to generate adversarial noise inside the rectangle, with learning rate α = {32, 16, 8, 4}, respectively. Physically realizable attacks that we study have a common feature: first, they specify a mask, which is typically precomputed, and subsequently introduce adversarial noise inside the mask area. Let M denote the mask matrix constraining the area of the perturbation δ; M has the same dimensions as the input image and contains 0s where no perturbation is allowed, and 1s in the area which can be perturbed.
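Putting Algorithm 1 and the masked noise step together, the following is a minimal sketch (ours, with all names and defaults illustrative): exhaustive search places a grey h × w rectangle at the stride-S position that maximizes the loss, and l∞ PGD restricted to that rectangle then fills in the noise. For simplicity a single position is chosen for the whole batch, whereas the paper searches per image:

import torch
import torch.nn.functional as F

def roa_attack(model, x, y, h=7, w=7, stride=5, iters=30, alpha=8/255):
    N = x.shape[-1]                                   # N x N input images
    best_loss, best_pos = -float("inf"), (0, 0)
    # Step 1: exhaustive search over upper-left corners (Algorithm 1).
    with torch.no_grad():
        for i in range(0, N - h + 1, stride):
            for j in range(0, N - w + 1, stride):
                x_try = x.clone()
                x_try[..., i:i + h, j:j + w] = 0.5    # grey rectangle
                loss = F.cross_entropy(model(x_try), y)
                if loss.item() > best_loss:
                    best_loss, best_pos = loss.item(), (i, j)
    # Step 2: l-infinity PGD noise restricted to the chosen rectangle,
    # i.e. a mask M with 1s inside the rectangle and 0s elsewhere.
    i, j = best_pos
    mask = torch.zeros_like(x)
    mask[..., i:i + h, j:j + w] = 1.0
    x_adv = x.clone()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = (x_adv + alpha * grad.sign() * mask).clamp(0, 1)
    return x_adv.detach()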
The physically realizable attacks we consider then solve an optimization problem of the following form: arg max_δ L(f(x + M ⊙ δ), y), where ⊙ denotes the element-wise product, so that the adversarial perturbation is confined to the mask area. Next, we describe the details of the three physical attacks we consider in the main paper. For the eyeglass frame attack, we first initialized the eyeglass frame with 5 different colors, and chose the best starting color by calculating the cross-entropy loss. For each update step, we divided the gradient value by its maximum value before multiplying by the learning rate, which is 20. Then we kept only the gradient values in the eyeglass frame area. Finally, we clipped and rounded the pixel values to keep them in the valid range. For the stop sign attack, we initialized the stickers on the stop signs with random noise. For each update step, we used the Adam optimizer with a 0.1 learning rate and default parameters. Just as for the other attacks, adversarial perturbations were restricted to the exogenously specified mask area; in our case, we used the same mask as the original attack: a collection of small rectangles. We used gradient ascent to maximize the log probability of the targeted class, P[y_target | x], as in the original paper. When implementing the adversarial patch, we used a square patch rather than the circular patch in the original paper; we don't anticipate this choice to be practically consequential. We randomly chose the position and direction of the patch, used a learning rate of 5, and fixed the number of attack iterations to 100 for each image. We varied the attack region (mask) R ∈ {0%, 5%, 10%, 15%, 20%, 25%}. For the face recognition dataset, we used 27 images (9 classes, excluding the targeted class, with 3 images per class) to design the patch, and then ran the attack over 20 epochs. For the smaller traffic sign dataset, we used 15 images (15 classes, excluding the targeted class, with 1 image per class) to design the patch, and then ran the attack over 5 epochs. Note that when evaluating the adversarial patch, we used the validation set without the targeted class images. Figure 11: Examples of the eyeglass attack on face recognition. From left to right: 1) the original input image, 2) image with adversarial eyeglass frames, 3) face predicted by a model generated through adversarial training, 4) face predicted by a model generated through randomized smoothing, 5) face predicted (correctly) by a model generated through DOA. Each row is a separate example. Figure 12: Examples of the stop sign attack. From left to right: 1) the original input image, 2) image with adversarial stickers on the stop sign, 3) sign predicted by a model generated through adversarial training, 4) sign predicted by a model generated through randomized smoothing, 5) sign predicted (correctly) by a model generated through DOA. Each row is a separate example. H EFFECTIVENESS OF DOA METHODS AGAINST l∞ ATTACKS For completeness, this section includes an evaluation of DOA in the context of l∞-bounded attacks implemented using PGD, though these are outside the scope of our threat model. Table 23 presents results for several variants of DOA against PGD attacks on face recognition, while Table 24 considers these for traffic sign classification. The results are quite consistent with intuition: DOA is largely unhelpful against these attacks. The reason is that DOA fundamentally assumes that the attacker only modifies a relatively small proportion (∼5%) of the scene (and the resulting image), as otherwise the physical attack would be highly suspicious. l∞-bounded attacks, on the other hand, modify all pixels.
To further illustrate the ability of DOA to generalize, we evaluate its effectiveness in the context of three additional occlusion patterns: a union of triangles and a circle, a single larger triangle, and a heart pattern. As the results in Figures 13 and 14 suggest, DOA is able to generalize successfully to a variety of physical attack patterns. It is particularly noteworthy that the larger patterns (the large triangle, middle of the figure, and the large heart, right of the figure) are actually quite suspicious (particularly the heart pattern), as they occupy a significant fraction of the image (the heart mask, for example, accounts for 8% of the face).
Defending Against Physically Realizable Attacks on Image Classification
459
scitldr
Continual learning is the problem of sequentially learning new tasks or knowledge while protecting previously acquired knowledge. However, catastrophic forgetting poses a grand challenge for neural networks performing such a learning process. Thus, neural networks that are deployed in the real world often struggle in scenarios where the data distribution is non-stationary (concept drift), imbalanced, or not always fully available, i.e., rare edge cases. We propose a Differentiable Hebbian Consolidation model which is composed of a Differentiable Hebbian Plasticity (DHP) Softmax layer that adds a rapid-learning plastic component (compressed episodic memory) to the fixed (slow changing) parameters of the softmax output layer, enabling learned representations to be retained for a longer timescale. We demonstrate the flexibility of our method by integrating well-known task-specific synaptic consolidation methods to penalize changes in the slow weights that are important for each target task. We evaluate our approach on the Permuted MNIST, Split MNIST and Vision Datasets Mixture benchmarks, and introduce an imbalanced variant of Permuted MNIST, a dataset that combines the challenges of class imbalance and concept drift. Our proposed model requires no additional hyperparameters and outperforms comparable baselines by reducing forgetting. A key aspect of human intelligence is the ability to continually adapt and learn in dynamic environments, a characteristic which is challenging to embed into artificial intelligence. Recent advances in machine learning (ML) have shown tremendous improvements in various problems, by learning to solve one complex task very well through extensive training on large datasets with millions of training examples or more. However, most of the ML models that are used during deployment in the real world are exposed to non-stationarity, where the distributions of acquired data change over time. Therefore, after learning is complete, and these models are further trained with new data in response to distributional changes, performance degrades with respect to the original data. This phenomenon, known as catastrophic forgetting or catastrophic interference, presents a crucial problem for deep neural networks (DNNs) that are tasked with continual learning, also called lifelong learning. In continual learning, the goal is to adapt and learn consecutive tasks without forgetting how to perform well on previously learned tasks, enabling models that are scalable and efficient over long timescales. Most supervised learning methods for DNN architectures require independent and identically distributed (iid) samples from a stationary training distribution. However, for ML systems in real-world applications that require continual learning, the iid assumption is easily violated when: (1) there is concept drift in the training data distribution; (2) there are imbalanced class distributions and concept drift occurring simultaneously; (3) data representing all scenarios in which the learner is expected to perform are not initially available. In such situations, learning systems face the "stability-plasticity dilemma", which is a well-known problem for artificial and biological neural networks. This presents a continual learning challenge for an ML system where the model needs to provide a balance between its plasticity (to integrate new knowledge) and stability (to preserve existing knowledge).
In biological neural networks, synaptic plasticity has been argued to play an important role in learning and memory, and two major theories have been proposed to explain a human's ability to perform continual learning. The first theory is inspired by synaptic consolidation in the mammalian neocortex, where a subset of synapses are rendered less plastic and therefore preserved for a longer timescale. The general idea for this approach is to consolidate and preserve synaptic parameters that are considered important for the previously learned tasks. This is normally achieved through task-specific updates of synaptic weights in a neural network. The second is the complementary learning system (CLS) theory, which suggests that humans extract high-level structural information and store it in different brain areas while retaining episodic memories. Recent work on differentiable plasticity has shown that neural networks with "fast weights" that leverage Hebbian learning rules can be trained end-to-end through backpropagation and stochastic gradient descent (SGD) to optimize the standard "slow weights", as well as the amount of plasticity in each synaptic connection. These works use slow weights to refer to the weights normally used to train vanilla neural networks, which are updated slowly and are often associated with long-term memory. The fast weights represent the weights that are superimposed on the slow weights and change quickly from one time step to the next based on input representations. These fast weights behave as a form of short-term memory that enables "reactivation" of long-term memory traces in the slow weights. This line of work showed that simple plastic networks with learned plasticity outperform networks with uniform plasticity on various problems. Moreover, several approaches have been proposed recently for overcoming the catastrophic forgetting problem in fixed-capacity models by dynamically adjusting the plasticity of each synapse based on its importance for retaining past memories. Here, we extend the work on differentiable plasticity to the task-incremental continual learning setting, where tasks arrive in a batch-like fashion and have clear boundaries. We develop a Differentiable Hebbian Consolidation model that is capable of adapting quickly to changing environments as well as consolidating previous knowledge by selectively adjusting the plasticity of synapses. We modify the traditional softmax layer and propose to augment the slow weights in the final fully-connected (FC) layer (softmax output layer) with a set of plastic weights implemented using Differentiable Hebbian Plasticity (DHP). Furthermore, we demonstrate the flexibility of our model by combining it with recent task-specific synaptic consolidation based approaches to overcoming catastrophic forgetting, such as elastic weight consolidation (EWC), synaptic intelligence (SI) and memory aware synapses (MAS). Our model unifies core concepts from Hebbian plasticity, synaptic consolidation and CLS theory to enable rapid adaptation to new unseen data, while consolidating synapses and leveraging compressed episodic memories in the softmax layer to remember previous knowledge and mitigate catastrophic forgetting. We test our proposed method on established benchmark problems including the Permuted MNIST, Split MNIST and Vision Datasets Mixture benchmarks.
We also introduce the Imbalanced Permuted MNIST problem and show that plastic networks with task-specific synaptic consolidation methods outperform networks with uniform plasticity. Neural Networks with Non-Uniform Plasticity: One of the major theories that have been proposed to explain a human's ability to learn continually is Hebbian learning, which suggests that learning and memory are attributed to weight plasticity, that is, the modification of the strength of existing synapses according to variants of Hebb's rule. It is a form of activity-dependent synaptic plasticity where correlated activation of pre- and post-synaptic neurons leads to the strengthening of the connection between the two neurons. According to the Hebbian learning theory, after learning, the related synaptic strengths are enhanced while the degree of plasticity decreases to protect the learned knowledge. Recent approaches in the meta-learning literature have shown that we can incorporate fast weights into a neural network to perform one-shot and few-shot learning. One such model augments the FC layers preceding the softmax with a matrix of fast weights to bind labels to representations; here, the fast weights were implemented with non-trainable Hebbian learning-based associative memory. Another proposed a Hebbian Softmax layer that can improve learning of rare classes by interpolating between Hebbian learning and SGD updates on the output layer using an engineered scheduling scheme. Differentiable plasticity uses SGD to optimize the plasticity of each synaptic connection, in addition to the standard fixed (slow) weights; here, each synapse is composed of a slow weight and a plastic (fast) weight that automatically increases or decreases based on the activity over time. Although this approach proved to be a powerful new method for training neural networks, it was mainly demonstrated on recurrent neural networks (RNNs) for solving pattern memorization tasks and maze exploration with reinforcement learning. Also, these approaches were only demonstrated on meta-learning problems and not on the continual learning challenge of overcoming catastrophic forgetting. Our work also augments the slow weights in the FC layer with a set of plastic (fast) weights, but implements these using DHP. We only update the parameters of the softmax output layer in order to achieve fast learning and preserve knowledge over time. Overcoming Catastrophic Forgetting: This work leverages two strategies to overcome the catastrophic forgetting problem: 1) Task-specific Synaptic Consolidation: protecting previously learned knowledge by dynamically adjusting the synaptic strengths to consolidate and retain memories. 2) CLS Theory: a dual memory system where the neocortex (neural network) gradually learns to extract structured representations from the data while the hippocampus (augmented episodic memory) performs rapid learning and individuated storage to memorize new instances or experiences. There have been several notable works inspired by task-specific synaptic consolidation for overcoming catastrophic forgetting, and they are often categorized as regularization strategies in the continual learning literature. All of these regularization approaches estimate the importance of each parameter or synapse, Ω_k, where the least plastic synapses can retain memories for long timescales and the more plastic synapses are considered less important.
The parameter importance and network parameters θ_k are updated either in an online manner or after learning task T_n. Therefore, when learning a new task T_{n+1}, a regularizer is added to the original loss function L_n(θ), so that we dynamically adjust the plasticity w.r.t. Ω_k and prevent any changes to important parameters of previously learned tasks: L̃_n(θ) = L_n(θ) + λ Σ_k Ω_k (θ_k − θ_k^{n−1})², (1) where θ_k^{n−1} are the learned network parameters after training on the previous n − 1 tasks and λ is a hyperparameter for the regularizer to control the amount of forgetting (old versus new memories). The main difference between these regularization strategies is the method used to compute the importance of each parameter, Ω_k. In Elastic Weight Consolidation (EWC), Ω_k is given by the diagonal of an approximated Fisher information matrix, and this is computed offline after training on a task is completed. An online variant of EWC was later proposed to improve EWC's scalability by ensuring that the computational cost of the regularization term does not grow with the number of tasks. Zenke et al. (2017b) proposed an online method called Synaptic Intelligence (SI) for computing the parameter importance where Ω_k is the cumulative change in individual synapses over the entire training trajectory on a particular task. Memory Aware Synapses (MAS) is an online method that measures Ω_k by the sensitivity of the learned function to a perturbation in the parameters, instead of measuring the change in parameters with respect to the loss as in SI and EWC. Our work draws inspiration from CLS theory, which is a powerful computational framework for representing memories with a dual memory system via the neocortex and hippocampus. There have been numerous approaches based on CLS principles involving pseudo-rehearsal, exact or episodic replay, and generative replay. Exact replay methods require storage of the data from previous tasks, which are later replayed. Generative replay methods train a separate generative model to generate images to be replayed. iCaRL performs rehearsal and regularization, where an external memory is used to store exemplar patterns from old task data and rehearse the model via distillation. However, in our work, we are primarily interested in neuroplasticity techniques inspired by CLS theory for alleviating catastrophic forgetting. Earlier work showed how each synaptic connection can be composed of a fixed weight, where slow learning stores long-term knowledge, and a fast-changing weight for temporary associative memory. This approach involving slow and fast weights is analogous to properties of CLS theory for overcoming catastrophic forgetting during continual learning. Recent research in this vein has included replacing the soft attention mechanism with fast weights in RNNs, the Hebbian Softmax layer, augmenting slow weights in the FC layer with a fast weights matrix, differentiable plasticity, and neuromodulated differentiable plasticity. We did not evaluate and compare against neuroplasticity-inspired CLS methods as baselines because they were designed for meta-learning problems and it would be unfair to evaluate their performance on continual learning benchmark problems given some of their limitations. All of these methods were designed for rapid learning on simple tasks or meta-learning over a distribution of tasks or datasets, where a small number of examples from a class are seen by the network when training on different tasks to perform one-shot and few-shot learning.
For instance, the Hebbian Softmax layer modifies its parameters by annealing between Hebbian and SGD updates based on an engineered scheduling scheme, which achieves fast binding for rarer classes. However, when a large number of examples are observed frequently from the same class, the annealing function switches completely to SGD updates. Thus, when evaluating this model in continual learning setups, the effect of the fast weights memory storage becomes non-existent as the network learns from a large number of examples per class on each task. With a focus on continual learning, the goal of our work is to meta-learn a local learning rule for the fast weights via the fixed (slow) weights and an SGD optimizer. In our model, each synaptic connection in the softmax layer has two weights: 1) the slow weights, θ ∈ R^{m×d}, where m is the number of units in the final hidden layer and d is the number of outputs of the last layer; 2) a Hebbian plastic component of the same cardinality as the slow weights, composed of the plasticity coefficient, α, and the Hebbian trace, Hebb. The α is a scaling parameter for adjusting the magnitude of the Hebb. The Hebbian traces accumulate the mean hidden activations of the final hidden layer h for each target label in the mini-batch {y_{1:B}} of size B, which are denoted by h̄ ∈ R^{1×m} (refer to Algorithm 1). Given the pre-synaptic activations of neurons i in h, we can formally compute the post-synaptic activations of neurons j using Eq. 2, z_j = Σ_i (θ_{i,j} + α_{i,j} · Hebb_{i,j}) h_i, (2) and obtain the unnormalized log probabilities (softmax pre-activations) z. The softmax function is then applied on z to obtain the desired predicted probabilities ŷ; thus, ŷ = softmax(z). The Hebbian traces are updated as Hebb_{:,c} := (1 − η) · Hebb_{:,c} + η · h̄ for each class c in y_{1:B}. (3) The η parameter in Eq. 3 is a scalar value that dynamically learns how quickly to acquire new experiences into the plastic component, and thus behaves as the learning rate for the plastic connections. The η parameter also acts as a decay term for the Hebb to prevent instability caused by a positive feedback loop in the Hebbian traces. The network parameters α_{i,j}, η and θ_{i,j} are optimized by gradient descent as the model is trained sequentially on different tasks in the continual learning setup. In standard neural networks the weight connection has only fixed (slow) weights, which is equivalent to setting the plasticity coefficients α = 0 in Eq. 2. Algorithm 1 batch-updates the Hebbian traces following Hebb's rule, Δw = (1/N) Σ_{k=1}^{N} a_i^k a_j^k, where Δw_{i,j} is the change in weight at connection i, j and a_i^k, a_j^k denote the activation levels of neurons i and j, respectively, for the k-th input. Therefore, in our model, Δw = h̄, the Hebbian weight update; a_i = h, the hidden activations of the last hidden layer; a_j = y, the corresponding target class in y_{1:B}; and N = s, the number of inputs for the corresponding class in y_{1:B} (see Algorithm 1). Across the model's lifetime, we only update the Hebbian traces during training as the model learns tasks in a continual manner. Therefore, during test time, we maintain and use the most recent Hebb traces to make predictions. Our model explores an optimization scheme where hidden activations are accumulated directly into the softmax output layer weights when a class has been seen by the network. This results in better initial representations and can also retain these learned deep representations for a much longer timescale. This is because memorized activations for one class are not competing for space with activations from other classes. Fast learning, enabled by a highly plastic weight component, improves test accuracy for a given task.
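To make Eqs. 2-3 and Algorithm 1 concrete, the following is a minimal PyTorch sketch (ours, not the authors' Appendix B code); h is the (B × m) batch of final hidden activations, y the target labels, and for brevity η is treated as a plain float here, whereas the full model learns η and α by SGD:

import torch

def dhp_forward(h, theta, alpha, hebb):
    # Eq. 2: z = h (theta + alpha * Hebb); theta, alpha, hebb are (m, d).
    return h.mm(theta + alpha * hebb)

def update_hebb(hebb, h, y, eta):
    # Algorithm 1 / Eq. 3: average the hidden activations of each class c in
    # the mini-batch into h_bar, then fold h_bar into column c of the trace.
    with torch.no_grad():
        for c in torch.unique(y):
            h_bar = h[y == c].mean(dim=0)
            hebb[:, c] = (1 - eta) * hebb[:, c] + eta * h_bar
    return hebb

During sequential training, hebb persists across mini-batches and tasks, and the most recent trace is reused at test time, as described above.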
Between tasks this plastic component decays to prevent interference, but selective consolidation into a stable component protects old memories, effectively enabling the model to learn to remember by modelling plasticity over a range of timescales to form a learned neural memory (see the Section 4.1 ablation study). In comparison to an external memory, the advantage of DHP Softmax is that it is simple to implement, requiring no additional space or computation. This allows it to scale easily with an increasing number of tasks. Figure 1: An example of a Hebbian update for the class c = 6 ∈ y_{1:B}. Here, we are given the hidden activations of the final hidden layer, h. Multiple hidden activations corresponding to class c = 6 (represented by the pink boxes) are averaged into one vector denoted by h̄ ∈ R^{1×m}. This Hebbian update visualization reflects Lines 4-6 in Algorithm 1 and is repeated for each unique class in the target vector y_{1:B}. The plastic component learns rapidly and performs sparse parameter updates to quickly store memory traces for each recent experience without interference from other similar recent experiences. Furthermore, the hidden activations corresponding to the same class, c, are accumulated into one vector h̄, thus forming a compressed episodic memory in the Hebbian traces to reflect individual episodic memory traces (similar to the hippocampus in biological neural networks). As a result, this method improves learning of rare classes and speeds up binding of class labels to deep representations of the data without introducing any additional hyperparameters. In Appendix B, we provide a sample implementation of the DHP Softmax using PyTorch. Hebbian Synaptic Consolidation: Following the existing regularization strategies such as EWC, Online EWC, SI and MAS, we regularize the loss L(θ) as in Eq. 1 and update the synaptic importance parameters of the network in an online manner. We rewrite Eq. 1 to obtain the updated quadratic loss for Hebbian Synaptic Consolidation in Eq. 4, where the network parameters θ_{i,j} are the weights of the connections between pre- and post-synaptic activities of neurons i and j: L̃(θ) = L(θ) + λ Σ_{i,j} Ω_{i,j} (θ_{i,j} − θ_{i,j}^{n−1})². (4) We adapt the existing task-specific consolidation approaches to our model and do not compute the synaptic importance parameters on the plastic component of the network; hence we only regularize the slow weights of the network. Furthermore, when training the first task T_{n=1}, the synaptic importance parameter Ω_{i,j} in Eq. 4 was set to 0 for all of the task-specific consolidation methods that we tested, except for SI. This is because SI is the only method we evaluated that estimates Ω_{i,j} during training, whereas Online EWC and MAS compute Ω_{i,j} after learning a task. The plastic component of the softmax layer in our model can alleviate catastrophic forgetting of consolidated classes by allowing gradient descent to optimize how plastic the connections should be (i.e., less plastic to preserve old information or more plastic to quickly learn new information). In our experiments, we compare our approach to vanilla neural networks with Online EWC, SI and MAS, respectively. Since our approach increases the capacity of the DNN due to the addition of plastic weights, we add an extra set of slow weights to the softmax output layer of the standard neural network to match the capacity. We do this to show that it is not the increased model capacity from the plastic weights that is helping mitigate the forgetting when performing sequential task learning, thus ensuring a fair evaluation.
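A minimal sketch of the Eq. 4 penalty (our illustration, with omega and theta_old as assumed dictionaries produced by Online EWC, SI or MAS); plastic parameters are simply left out of omega, so that only the slow weights are regularized:

import torch

def consolidation_penalty(model, omega, theta_old, lam):
    # Eq. 4: quadratic penalty on changes to important slow weights.
    # omega and theta_old map parameter names to tensors; the plastic
    # parameters (alpha, eta) are deliberately absent from omega.
    penalty = torch.zeros(())
    for name, p in model.named_parameters():
        if name in omega:
            penalty = penalty + (omega[name] * (p - theta_old[name]) ** 2).sum()
    return lam * penalty

# per-task objective: loss = task_loss + consolidation_penalty(model, omega, theta_old, lam)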
We tested our model on the Permuted MNIST, Split MNIST and Vision Datasets Mixture benchmarks, and also introduce the Imbalanced Permuted MNIST problem. For all of the benchmarks, we evaluated the model based on the average classification accuracy on all previously learned tasks as a function of n, the number of tasks trained so far. To determine memory retention and flexibility of the model, we are particularly interested in the test performance on the first task and on the most recent one. We also measure forgetting using the backward transfer metric, BWT = (1/(T−1)) Σ_{i=1}^{T−1} (R_{T,i} − R_{i,i}), which indicates how much learning new tasks has influenced the performance on previous tasks. R_{T,i} is the test classification accuracy on task i after sequentially finishing learning the T-th task. While BWT < 0 directly reports catastrophic forgetting, BWT > 0 indicates that learning new tasks has helped with the preceding tasks. To establish a baseline for comparison with well-known task-specific consolidation methods, we trained neural networks with Online EWC, SI and MAS, respectively, on all tasks in a sequential manner. The hyperparameters of the consolidation methods (i.e., EWC, SI and MAS) remain the same with and without DHP Softmax, and the plastic components are not regularized. Descriptions of the hyperparameters and other details for all benchmarks can be found in Appendix A. In the Permuted MNIST benchmark, all of the MNIST pixels are permuted differently for each task with a fixed random permutation. Although the output domain is constant, the input distribution changes between tasks and the distributions are mostly independent of each other; thus, there exists concept drift. In the Permuted MNIST and Imbalanced Permuted MNIST benchmarks we use a multi-layered perceptron (MLP) network with two hidden layers consisting of 400 ReLU nonlinearities each, and a cross-entropy loss. The η of the plastic component was set to an initial value of 0.001, and we emphasize that we spent little to no effort on tuning the initial value of this parameter (see Appendix A.5 for a sensitivity analysis). We first compare the performance between our network with DHP Softmax and a fine-tuned vanilla MLP network, which we refer to as Finetune in Figure 2a, with no task-specific consolidation methods involved. The network with DHP Softmax alone showed improvement in its ability to alleviate catastrophic forgetting across all tasks compared to the baseline network. Then we compared the performance with and without DHP Softmax using the same task-specific consolidation methods. Figure 2a shows the average test accuracy as new tasks are learned for the best hyperparameter combination for each task-specific consolidation method. We find that DHP Softmax with consolidation maintains a higher test accuracy throughout sequential training of tasks than without DHP Softmax. Ablation Study: We further examine the structural parameters of the network and the Hebb traces to provide further interpretability into the behaviour of our proposed model. The left plot in Figure 8 shows the behaviour of η during training as the 10 tasks in the Permuted MNIST benchmark are learned continually. Initially, in task T_1, η increases very quickly from 0.001 to 0.024, suggesting that the synaptic connections become more plastic to quickly acquire new information. Eventually, η decays after the 3rd task to reduce the degree of plasticity and prevent interference between the learned representations. We also observe that within each task from T_4 to T_10, η initially increases and then decays.
The Frobenius norm of the Hebb trace (middle plot in Figure 8) suggests that Hebb grows without runaway positive feedback every time a new task is learned, maintaining a memory of which synapses contributed to recent activity. The Frobenius norm of α (right plot in Figure 8) indicates that the plasticity coefficients grow within each task, indicating that the network is leveraging the structure in the plastic component. It is important to note that gradient descent and backpropagation are used as meta-learning to tune the structural parameters in the plastic component. We introduce the Imbalanced Permuted MNIST problem, which is identical to the Permuted MNIST benchmark except that each task now has an imbalanced distribution, with training samples in each class artificially removed based on some random probability (see Appendix A.2). This benchmark was motivated by the fact that class imbalance and concept drift can hinder predictive performance, and the problem becomes particularly challenging when they occur simultaneously. Appendix A.6, Figure 5 shows the average test accuracy for the best hyperparameters of each method. We see that DHP Softmax achieves 80.85% after learning 10 tasks with imbalanced class distributions in a sequential manner, thus providing a significant 4.41% improvement over the standard neural network baseline of 76.44%. The significance of the compressed episodic memory mechanism in the Hebbian traces is more apparent in this benchmark because the plastic component allows rare classes that are encountered infrequently to be remembered for a longer period of time. We find that DHP Softmax with MAS achieves a 0.04 decrease in BWT, resulting in an average test accuracy of 88.80% and a 1.48% improvement over MAS alone, also outperforming all other methods across all tasks. We split the original MNIST dataset into a sequence of 5 binary classification tasks: T_1 = {0/1}, T_2 = {2/3}, T_3 = {4/5}, T_4 = {6/7} and T_5 = {8/9}. The output spaces are disjoint between tasks, unlike in the previous two benchmarks. Similar to the network used by Zenke et al. (2017b), we use an MLP network with two hidden layers of 256 ReLU nonlinearities each, and a cross-entropy loss. The initial η value was set to 0.001, as in the previous benchmark experiments. We found that different values of η yielded very similar final test performance after learning all five tasks (see Appendix A.5). We observed that DHP Softmax alone achieves 98.23%, thus providing a 7.80% improvement in test performance compared to a finetuned MLP network (Figure 2b). Also, combining DHP Softmax with task-specific consolidation consistently decreases BWT, leading to a higher average test accuracy across all tasks, especially the most recent one, T_5. Following previous works, we perform continual learning on a sequence of 5 vision datasets: MNIST, notMNIST 1, FashionMNIST, SVHN and CIFAR-10 (see Appendix A.4 for dataset details). The MNIST, notMNIST and FashionMNIST datasets are zero-padded to be of size 32×32 and are replicated 3 times to create grayscale images with 3 channels, thus matching the resolution of the SVHN and CIFAR-10 images. Here, we use a CNN architecture that is similar to one used in prior work (more details in Appendix A.4). The initial η parameter value was set to 0.0001. We train the network with mini-batches of size 32 and optimize it using plain SGD with a fixed learning rate of 0.01 for 50 epochs per task.
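The BWT values reported in the results that follow can be computed from a T × T accuracy matrix R, where R[t, i] is the test accuracy on task i after training through task t + 1; a minimal sketch (our illustration):

import numpy as np

def backward_transfer(R):
    # BWT averages R[T-1, i] - R[i, i]: final accuracy on each earlier task
    # minus the accuracy right after that task was learned.
    T = R.shape[0]
    return np.mean([R[T - 1, i] - R[i, i] for i in range(T - 1)])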
We found that DHP Softmax plus MAS decreases BWT by 0.04, resulting in a 2.14% improvement in average test accuracy over MAS on its own (see Table 1 and Appendix A.6, Figure 6). Also, SI with DHP Softmax outperforms other competitive methods, with an average test performance of 81.75% and a BWT of -0.04 after learning all five tasks. In Table 1, we present a summary of the final average test performance after learning all tasks in the respective continual learning problems. Here, we summarize the average test accuracy and BWT across ten trials for each of the benchmarks. We have shown that the problem of catastrophic forgetting in continual learning environments can be alleviated by adding compressed episodic memory in the softmax layer through DHP and performing task-specific updates on synaptic parameters based on their individual importance for solving previously learned tasks. The compressed episodic memory allows new information to be learned in individual traces without overlapping representations, thus avoiding interference when added to the structured knowledge in the slow changing weights and allowing the model to generalize across experiences. The α parameter in the plastic component automatically learns to scale the magnitude of the plastic connections in the Hebbian traces, effectively choosing when to be less plastic (protect old knowledge) or more plastic (acquire new information quickly). The neural network with DHP Softmax showed noticeable improvement across all benchmarks when compared to a neural network with a traditional softmax layer that had an extra set of slow changing weights. The DHP Softmax does not introduce any additional hyperparameters, since all of the structural parameters of the plastic part, α and η, are learned, and setting the initial η value required very little tuning effort. 1 Originally published at http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html and downloaded from https://github.com/davidflanagan/notMNIST-to-MNIST. We demonstrated the flexibility of our model where, in addition to DHP Softmax, we can perform Hebbian Synaptic Consolidation by regularizing the slow weights using EWC, SI or MAS to improve the model's ability to alleviate catastrophic forgetting after sequentially learning a large number of tasks with limited model capacity. DHP Softmax combined with SI outperforms other consolidation methods on Split MNIST and the 5-Vision Datasets Mixture. The approach where we combine DHP Softmax and MAS consistently leads to overall superior results compared to other baseline methods on the Permuted MNIST and Imbalanced Permuted MNIST benchmarks. This is interesting because the local variant of MAS computes the synaptic importance parameters of the slow weights θ_{i,j} layer by layer based on Hebb's rule, and therefore synaptic connections i, j that are highly correlated would be considered more important for the given task than those connections that have less correlation. Furthermore, our model consistently exhibits lower negative BWT across all benchmarks, leading to higher average test accuracy over methods without DHP. This gives a strong indication that Hebbian plasticity enables neural networks to learn continually and remember distant memories, thus reducing catastrophic forgetting when learning from sequential datasets in dynamic environments. Furthermore, continual synaptic plasticity can play a key role in learning from limited labelled data while being able to adapt and scale at long timescales.
We hope that our work will open new investigations into gradient descent optimized Hebbian consolidation for learning and memory in DNNs to enable continual learning. In the continual learning setup, we train a neural network model on a sequence of tasks T_{1:n_max}, where n_max is the maximum number of tasks the model is to learn in the respective benchmarks. Unlike the conventional supervised learning setup, continual learning trains a model on data that is fetched in sequential chunks enumerated by tasks. Therefore, in a continual learning sequence, the model receives a sequence of tasks T_{1:n_max} that is to be learned, each with its associated training data (X_n, Y_n), where X_n is the input data and Y_n the corresponding label data. Each task T_n has its own task-specific loss L_n, which is combined with a regularizer loss term (refer to Eq. 4) to prevent catastrophic forgetting. After training is complete, the model will have learned an approximated mapping f to the true underlying function f̃. The learned f maps a new input X to the target outputs Y_{1:n} for all T_{1:n} tasks the network has learned so far. It is to be noted that the set of classes contained in each task can differ from those in the other tasks, as in the Split MNIST and Vision Datasets Mixture benchmarks. All experiments were run on either an Nvidia Titan V or an Nvidia RTX 2080 Ti. We train the network on a sequence of tasks T_{n=1:10} with mini-batches of size 64 and optimize using plain SGD with a learning rate of 0.01. We train for at least 10 epochs and perform early-stopping once the validation error does not improve for 5 epochs. If the validation error increases for more than 5 epochs, then we terminate the training on task T_n, reset the network weights and Hebbian traces to the values that had the lowest test error, and proceed to the next task. Hyperparameters: For the Permuted MNIST experiments shown in Figure 2a, the regularization hyperparameter λ for each of the task-specific consolidation methods is set to λ = 100 for Online EWC, λ = 0.1 for SI and λ = 0.1 for MAS. We note that for the SI method, λ refers to the parameter c in the original work (Zenke et al., 2017b), but we use λ to keep the notation consistent across the other task-specific consolidation methods. In SI, the damping parameter ξ was set to 0.1. To find the best hyperparameter combination for each of these synaptic consolidation methods, we performed a grid search using a task sequence determined by a single seed. For Online EWC, we tested values of λ ∈ {10, 20, 50, ..., 400}; for SI, λ ∈ {0.01, 0.05, ..., 0.5, 1.0}; and for MAS, λ ∈ {0.01, 0.5, ..., 1.5, 2.0}. For each task in the Imbalanced Permuted MNIST problem, we artificially removed training samples from each class in the original MNIST dataset based on some random probability. For each class and each task, we draw a different removal probability from a standard uniform distribution U(0, 1), and then remove each sample from that class with that probability. The distribution of classes in each dataset corresponding to tasks T_{n=1:10} is given in Table 2. Table 2: Distribution of classes in each imbalanced dataset for the respective tasks T_{n=1:10}.
Class        Task 1  Task 2  Task 3  Task 4  Task 5  Task 6  Task 7  Task 8  Task 9  Task 10
0              4459    3780    1847    3820    5867     122    1013    4608     908     3933
1              1872    3637    1316    6592    1934    1774    5533    2569     831      886
2              2391    4125    2434    4966    5245    4593    4834    4432    3207     3555
3              4433    1907    1682     278    3027    2315    5761    3293    2545     3749
4               186    2728    2002     151    1435    5829    1284    3910    4593      927
5              4292    2472    2924    1369    4094    4858    2265    3289    1134     1413
6              2339    3403    4771    5569    1414    2851    2921    4074     336     3993
7              4717    3090    4800    2574    4086    1065    3520    4705    5400     3650
8              3295    5493      76    4184    2034    4672     682     196    2409     1709
9              2625    3880    4735    1647    2645    3921     901    4546    4649     2045
Total         30609   34515   26587   31120   31781   32000   28714   35622   26012    25860

For the Imbalanced Permuted MNIST experiments shown in Figure 5, the regularization hyperparameter λ for each of the task-specific consolidation methods is λ = 400 for Online EWC, λ = 1.0 for SI and λ = 0.1 for MAS. In SI, the damping parameter ξ was set to 0.1. Similar to the Permuted MNIST benchmark, to find the best hyperparameter combination for each of these synaptic consolidation methods, we performed a grid search using a task sequence determined by a single seed. For Online EWC, we tested values of λ ∈ {50, 100, ..., 1×10^3}; for SI, λ ∈ {0.1, 0.5, ..., 2.5, 3.0}; and for MAS, λ ∈ {0.01, 0.05, ..., 1.5, 2.0}. Across all experiments, we maintained the same random probabilities, determined by a single seed, to artificially remove training samples from each class. Hyperparameters: For the Split MNIST experiments shown in Figure 2b, the regularization hyperparameter λ for each of the task-specific consolidation methods is λ = 400 for Online EWC, λ = 1.0 for SI and λ = 1.5 for MAS. In SI, the damping parameter ξ was set to 0.001. To find the best hyperparameter combination for each of these synaptic consolidation methods, we performed a grid search using the 5-task binary classification sequence (0/1, 2/3, 4/5, 6/7, 8/9). For Online EWC, we tested values of λ ∈ {1, 25, 50, 100, ..., 1×10^3, 2×10^3}; for SI, λ ∈ {0.1, 0.5, 1.0, ..., 5.0}; and for MAS, λ ∈ {0.01, 0.05, 1.0, ..., 4.5, 5.0}. We train the network on a sequence of T_{n=1:5} tasks with mini-batches of size 64 and optimize using plain SGD with a fixed learning rate of 0.01 for 10 epochs. Dataset Details: The Vision Datasets Mixture benchmark consists of a sequence of 5 tasks where each task is a different image classification dataset: MNIST, notMNIST, FashionMNIST, SVHN and CIFAR-10. The notMNIST dataset consists of font glyphs corresponding to the letters 'A' to 'J'. The original dataset has 500,000 and 19,000 grayscale images of size 28×28 for training and testing, respectively. However, similar to MNIST, we only use 60,000 images for training and 10,000 for testing. FashionMNIST consists of 10 categories of various articles of clothing, and there are 60,000 and 10,000 grayscale images sized 28×28 for training and testing, respectively. SVHN consists of digits '0' to '9' from Google Street View images, and there are 73,257 and 26,032 colour images of size 32×32 for training and testing, respectively. CIFAR-10 consists of 50,000 and 10,000 colour images of size 32×32 from 10 different categories for training and testing, respectively.
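A minimal sketch of the Appendix A.2 construction that produced Table 2 (our illustration, assuming flattened MNIST arrays x and y); each class keeps a sample with probability 1 − p, where p is drawn from U(0, 1) per class and per task, and a fixed pixel permutation turns the result into a permuted task:

import numpy as np

def imbalanced_permuted_task(x, y, seed, n_classes=10):
    # x: (N, 784) flattened MNIST images, y: (N,) integer labels.
    rng = np.random.RandomState(seed)
    keep = np.ones(len(y), dtype=bool)
    for c in range(n_classes):
        p_remove = rng.uniform()                 # removal probability ~ U(0, 1)
        idx = np.where(y == c)[0]
        keep[idx] = rng.uniform(size=len(idx)) >= p_remove
    perm = rng.permutation(x.shape[1])           # fixed pixel permutation per task
    return x[keep][:, perm], y[keep]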
Similar to prior work, a multi-headed approach was used because the class definitions are different between datasets. In the other benchmark problems, we use a single η across all connections. In this benchmark, our model has a trainable η value for each connection in the final output layer, thus η ∈ R^{m×d}, and we set the initial η value to be 0.0001. We found that using separate η parameters for each connection improved the stability of optimization and convergence to optimal test performance. This allows each plastic connection to modulate its own rate of plasticity when learning new experiences. It was observed that using a single η value across all connections led to instability of optimization on the SVHN and CIFAR-10 tasks. Hyperparameters: For the 5-Vision Datasets Mixture experiments shown in Figure 6, the regularization hyperparameter λ for each of the task-specific consolidation methods is λ = 100 for Online EWC, λ = 0.1 for SI and λ = 1.0 for MAS. In SI, the damping parameter ξ was set to 0.1. To find the best hyperparameter combination for each of these synaptic consolidation methods, we performed a random search using the same task sequence ordering (MNIST, notMNIST, FashionMNIST, SVHN, CIFAR-10). For Online EWC, we tested values of λ ∈ {10, 50, 100, ..., 500}; for SI, λ ∈ {0.01, 0.05, 0.1, ..., 1.0}; and for MAS, λ ∈ {0.01, 0.05, 1.0, ..., 4.5, 5.0}. We provide a summary of the sensitivity analysis performed on the Hebb decay term η and show its effect on the final average test performance after learning a sequence of tasks in the continual learning setup. The plots on the left and center in Figure 4 show the effect of the initial η value on the final test performance after learning tasks T_{n=1:10} in a sequential manner for the Permuted MNIST and Imbalanced Permuted MNIST benchmarks, respectively. We swept through a range of values η ∈ {0.1, 0.01, 0.001, 0.0005, 0.0001} and found that setting η to low values led to the best performance in terms of being able to alleviate catastrophic forgetting. Similarly, we also performed a sensitivity analysis on the η parameter for the Split MNIST problem (see the rightmost plot in Figure 4). Table 4 presents the average test accuracy across 5 trials for the MNIST-variant benchmarks, which corresponds to the sensitivity analysis plots in Figure 4.

import torch
import torch.nn as nn
from torch.nn import Parameter

class DHPSoftmax(nn.Module):
    def __init__(self, in_features, out_features, eta_rate):
        """
        in_features:  number of units in the final hidden layer.
        out_features: number of output classes.
        eta_rate:     initial learning rate value of plastic connections.
        """
        super().__init__()
        self.in_features = in_features
        self.out_features = out_features
        self.eta_rate = eta_rate
        # Initialize fixed (slow) weights with He initialization.
        self.weight = Parameter(torch.Tensor(self.in_features, self.out_features))
        nn.init.kaiming_uniform_(self.weight)
        # Plasticity coefficients alpha, one per connection.
        self.alpha = Parameter(0.01 * torch.randn(self.in_features, self.out_features))
        # Initialize the learning rate of plastic connections.
        self.eta = Parameter(self.eta_rate * torch.ones(1))

    def forward(self, h, hebb):
        # Eq. 2: combine slow weights with the plastic component.
        return h.mm(self.weight + self.alpha * hebb)

    def initial_hebb(self):
        # Hebbian traces start at zero and are maintained outside autograd.
        return torch.zeros(self.in_features, self.out_features, requires_grad=False)

Listing 1: PyTorch implementation of the DHP Softmax model, which adds a compressed episodic memory to the final output layer of a neural network through plastic connections, as described in Algorithm 1. We want to emphasize the simplicity of implementation using popular ML frameworks. We also evaluate on the split CIFAR-10/100 benchmark of Zenke et al. (2017b).
First, the network was trained on the full CIFAR-10 dataset (task T_{n=1}) and then sequentially on 5 additional tasks, each corresponding to 10 consecutive classes from the CIFAR-100 dataset (tasks T_{n=2:6}). The test accuracies on CIFAR-10 and the CIFAR-100 splits are reported after having learned the final task in this sequence. DHP Softmax (purple) alone significantly outperforms Finetune (yellow) on each of the tasks in this class-incremental learning setup. On some tasks, DHP Softmax alone performs as well as or better than training from scratch (light green). The test accuracies of Finetune, of training from scratch, and of SI (turquoise) were taken from prior work.
Hebbian plastic weights can behave as a compressed episodic memory storage in neural networks and with the combination of task-specific synaptic consolidation can improve the ability to alleviate catastrophic forgetting in continual learning.
460
scitldr
In order to choose a neural network architecture that will be effective for a particular modeling problem, one must understand the limitations imposed by each of the potential options. These limitations are typically described in terms of information theoretic bounds, or by comparing the relative complexity needed to approximate example functions between different architectures. In this paper, we examine the topological constraints that the architecture of a neural network imposes on the level sets of all the functions that it is able to approximate. This approach is novel for both the nature of the limitations and the fact that they are independent of network depth for a broad family of activation functions. Neural networks have become the model of choice in a variety of machine learning applications, due to their flexibility and generality. However, selecting network architectures and other hyperparameters is typically a matter of trial and error. To make the choice of neural network architecture more straightforward, we need to understand the limits of each architecture, both in terms of what kinds of functions any given network architecture can approximate and how those limitations impact its ability to learn functions within those limits. A number of papers (3; 6; 11; 13) have shown that neural networks with a single hidden layer are universal approximators, i.e. that they can approximate any continuous function on a compact domain to arbitrary accuracy if the hidden layer is allowed to have an arbitrarily high dimension. In practice, however, the neural networks that have proved most effective tend to have a large number of relatively low-dimensional hidden layers. This raises the question of whether neural networks with an arbitrary number of hidden layers of bounded dimension are also universal approximators. In this paper we demonstrate a fairly general limitation on functions that can be approximated with the L∞ norm on compact subsets of a Euclidean input space by layered, fully-connected feedforward neural networks of arbitrary depth and activation functions from a broad family including sigmoids and ReLus, but with layer widths bounded by the dimension of the input space. By a layered network, we mean that hidden nodes are grouped into successive layers and each node is only connected to nodes in the previous layer and the next layer. The constraints on the functions are defined in terms of topological properties of the level sets in the input space. This analysis is not meant to suggest that deep networks are worse than shallow networks, but rather to better understand how and why they will perform differently on different data sets. In fact, these limitations may be part of the reason deep nets have proven more effective on datasets whose structures are compatible with these limitations. By a level set, we mean the set of all points in the input space that the model maps to a given value in the output space. For classification models, a level set is just a decision boundary for a particular cutoff. For regression problems, level sets don't have a common interpretation. The main result of the paper, Theorem 1, states that the deep, skinny neural network architectures described above cannot approximate any function with a level set that is bounded in the input space. This can be rephrased as saying that for every function that can be approximated, every level set must be unbounded, extending off to infinity.
While a number of recent papers have made impressive progress in understanding the limitations of different neural network architectures, this result is notable because it is independent of the number of layers in the network, and because the limitations are defined in terms of a very simple topological property. Topological tools have recently been employed to study the properties of data sets within the field known as Topological Data Analysis, but this paper exploits topological ideas to examine the topology of the models themselves. By demonstrating topological constraints on a widely used family of models, we suggest that there is further potential to apply topological ideas to understand the strengths and weaknesses of algorithms and methodologies across machine learning. After discussing the context and related work in Section 2, we introduce the basic definitions and notation in Section 3, then state the main Theorem and outline the proof in Section 4. The detailed proof is presented in Sections 5 and 6. We present experimental results that demonstrate the constraints in Section 7, then in Section 8 we present conclusions from this work. A number of papers have demonstrated limitations on the functions that can be approximated by neural networks with particular architectures (2; 12; 14; 15; 18; 19; 21; 22; 23; 24; 25; 27; 29; 32). These are typically presented as asymptotic bounds on the size of network needed to approximate any function in a given family to a given ε. Lu et al BID15 gave the first non-approximation result that is independent of complexity, showing that there are functions that no ReLu-based deep network of width equal to the dimension of the input space can approximate, no matter how deep. However, they consider convergence in terms of the L1 norm on the entire space R^n rather than L∞ on a compact subset. This is a much stricter definition than the one used in this paper, so even for ReLu networks, Theorem 1 is a stronger result. The closest existing result to Theorem 1 is a recent paper by Nguyen, Mukkamala and Hein BID24, which shows that for multi-label classification problems defined by an argmax condition on a higher-dimensional output function, if all the hidden layers of a neural network have dimension less than or equal to the input dimension, then the region defining each class must be connected. That result applies to one-to-one activation functions, but could probably be extended to the family of activation functions in this paper by a similar limiting argument. Universality results have been proved for a number of variants of the networks described in Theorem 1. Rojas BID26 showed that any two discrete classes of points can be separated by a decision boundary of a function defined by a deep, skinny network in which each layer has a single perceptron that is connected both to the previous layer and to the input layer. Because of the connections back to the input space, such a network is not layered as defined above, so Theorem 1 doesn't contradict this result. In fact, to carry out Rojas' construction with a layered feed-forward network, you would need to put all the perceptrons in a single hidden layer. Sutskever and Hinton BID28 showed that deep belief networks whose hidden layers have the same dimension as the input space can approximate any function over binary vectors. This binary input space can be interpreted as a discrete subset of Euclidean space.
So while Theorem 1 does not apply to belief networks, it's worth noting that any function on a discrete set can be extended to the full space in such a way that the resulting function satisfies the constraints in Theorem 1.

This unexpected constraint on skinny deep nets raises the question of whether such networks are so practically effective despite being more restrictive than wide networks, or because of it. Lin, Tegmark and Rolnick BID14 showed that for data sets with information-theoretic properties that are common in physics and elsewhere, deep networks are more efficient than shallow networks. This may be because such networks are restricted to a smaller search space concentrated around functions that model shapes of data that are more likely to appear in practice. Such a result would be consistent with a number of papers showing that there are functions defined by deep networks that can only be approximated by shallow networks with an asymptotically much larger number of nodes (4; 7; 10; 20; 31).

A slightly different phenomenon has been observed for recurrent neural networks, which are universal approximators of dynamic systems BID6. In this setting, Collins, Sohl-Dickstein and Sussillo BID3 showed that many differences that have been reported on the performance of RNNs are due to their training effectiveness, rather than the expressiveness of the networks. In other words, the effectiveness of a given family of models appears to have less to do with whether it includes an accurate model, and more to do with whether a model search algorithm like gradient descent is likely to find an accurate model within the search space of possible models.

A model family is a subset M of the space C(R^n, R^m) of continuous functions from input space R^n to output space R^m. For parametric models, this subset is typically defined as the image of a map R^k → C(R^n, R^m), where R^k is the parameter space. A non-parametric model family is typically the union of a countably infinite collection of parametric model families. We will not distinguish between parametric and non-parametric families in this section.

Given a function g: R^n → R^m, a compact subset A ⊂ R^n and a value ε > 0, we will say that a second function f (ε, A)-approximates g if for every x ∈ A, we have |f(x) − g(x)| < ε. Similarly, we will say that a model family M (ε, A)-approximates g if there is a function f in M that (ε, A)-approximates g. More generally, we will say that M approximates g if for every compact A ⊂ R^n and value ε > 0 there is a function f in M that (ε, A)-approximates g. This is equivalent to the statement that there is a sequence of functions f_i ∈ M that converges pointwise (though not necessarily uniformly) to g on all of R^n. However, we will use the (ε, A) definition throughout this paper.

We'll describe families of layered neural networks with the following notation: Given an activation function φ: R → R and a finite sequence of positive integers n_0, n_1, ..., n_κ, let N_{φ,n_0,n_1,...,n_κ} be the family of functions defined by a layered feed-forward neural network with n_0 inputs, n_κ outputs and fully connected hidden layers of width n_1, ..., n_{κ−1}. With this terminology, Hornik et al's result can be restated as saying that the (non-parametric) model family defined as the union of all families N_{φ,n_0,n_1,1} approximates any continuous function. (Here, κ = 2 and n_2 = 1.) We're interested in deep networks with bounded dimensional layers, so we'll let N*_{φ,n} be the union of all the model families N_{φ,n_0,n_1,...,n_{κ−1},1} such that n_i ≤ n for all i < κ.
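To make the (ε, A)-approximation definition concrete, here is a minimal Python sketch (the function names are our own, not the paper's notation) that checks the condition numerically on a finite grid sampled from a compact interval; a true check would require the supremum over all of A.

import numpy as np

def eps_A_approximates(f, g, A_samples, eps):
    # Check |f(x) - g(x)| < eps on a finite sample drawn from the compact set A.
    diffs = np.abs(np.array([f(x) - g(x) for x in A_samples]))
    return float(np.max(diffs)) < eps

A = np.linspace(-1.0, 1.0, 1001)                # grid over the compact set A = [-1, 1]
f = lambda x: x**2 + 0.005 * np.sin(x)          # candidate approximant
g = lambda x: x**2                              # target function
print(eps_A_approximates(f, g, A, eps=0.01))    # True: f (0.01, A)-approximates g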
For the main result, we will restrict our attention to a fairly large family of activation functions. We will say that an activation function φ is uniformly approximated by one-to-one functions if there is a sequence of continuous, one-to-one functions that converge to φ uniformly (not just pointwise). Note that if the activation function is itself one-to-one (such as a sigmoid) then we can let every function in the sequence be φ and it will converge uniformly. For the ReLU function, we need to replace the large horizontal portion with a function such as (1/n)·arctan(x). Since this function is one-to-one and negative for x < 0, each function in this sequence will be one-to-one. Since it's bounded between −1/n and 0, the sequence will converge uniformly to the ReLU function.

The main result of the paper is a topological constraint on the level sets of any function in the family of models N*_{φ,n}. To understand this constraint, recall that in topology, a set C is path-connected if any two points in C are connected by a continuous path within C. A path component of a set A is a subset C ⊂ A that is connected, but is not a proper subset of a larger connected subset of A.

Definition 1. We will say that a function f: R^n → R has unbounded level components if for every y ∈ R, every path component of f^{−1}(y) is unbounded.

The main result of this paper states that deep, skinny neural networks can only approximate functions with unbounded level components. Note that this definition is stricter than just requiring that every level set be unbounded. The stricter definition in terms of path components guarantees that the property is preserved by limits, a fact that we will prove, then use in the proof of Theorem 1. Just having unbounded level sets is not preserved under limits.

Theorem 1. For any integer n ≥ 2 and uniformly continuous activation function φ: R → R that can be approximated by one-to-one functions, the family of layered feed-forward neural networks with input dimension n in which each hidden layer has dimension at most n cannot approximate any function with a level set containing a bounded path component.

The proof of Theorem 1 consists of two steps. In the first step, described in Section 5, we examine the family of functions defined by deep, skinny neural networks in which the activation is one-to-one and the transition matrices are all non-singular. We prove two results about this smaller family of functions: First, Lemma 2 states that any function that can be approximated by N*_{φ,n} can be approximated by functions in this smaller family. This is fairly immediate from the assumptions on φ and the fact that singular transition matrices can be approximated by non-singular ones. Second, Lemma 4 states that the level sets of these functions have unbounded level components. The proof of this Lemma is, in many ways, the core argument of the paper and is illustrated in Figure 1. The idea is that any function in this smaller family can be written as a composition of a one-to-one function and a linear projection, as in the top row of the Figure. As suggested in the bottom row, this implies that each level set/decision boundary of the full function is defined by the intersection of the image of the one-to-one function (the gray patch in the middle) with a hyperplane that maps to a single point in the second function. Intuitively, this intersection extends out to the edges of the gray blob, so its preimage in the original space must extend out to infinity in Euclidean space, i.e. it must be unbounded.
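The uniform convergence of the one-to-one sequence to ReLU described above can be checked numerically; the sketch below (illustrative Python, with our own function names) confirms that the sup-norm error of the n-th function is bounded by (π/2)/n.

import numpy as np

def phi(x, n):
    # n-th one-to-one approximant: identity for x >= 0, (1/n)*arctan(x) for x < 0.
    return np.where(x >= 0, x, np.arctan(x) / n)

relu = lambda x: np.maximum(x, 0.0)
x = np.linspace(-1000.0, 1000.0, 400001)
for n in (1, 10, 100):
    print(n, np.max(np.abs(phi(x, n) - relu(x))))  # error ~ (pi/2)/n, so convergence is uniform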
The second part of the proof of Theorem 1, described in Section 6, is Lemma 5, which states that the limit of functions with unbounded level components also has unbounded level components. This is a subtle technical argument, though it should be intuitively unsurprising that unbounded sets cannot converge to bounded sets. The proof of Theorem 1 is the concatenation of these three Lemmas: If a function can be approximated by N*_{φ,n} then it can be approximated by the smaller model family (Lemma 2), so it can be approximated by functions with unbounded level components (Lemma 4), so it must also have unbounded level components (Lemma 5).

We will say that a function in N*_{φ,n} is non-singular if φ is continuous and one-to-one, n_i = n for all i < κ and the matrix defined by the weights between each pair of layers is non-singular. Note that if φ is not one-to-one, then N*_{φ,n} will not contain any non-singular functions. If it is one-to-one then N*_{φ,n} will contain a mix of singular and non-singular functions. Define the model family of non-singular functions N̂_n to be the union of all non-singular functions in families N*_{φ,n} for all activation functions φ and a fixed n.

Lemma 2. If g is approximated by N*_{φ,n} for some continuous activation function φ that can be uniformly approximated by one-to-one functions then it is approximated by N̂_n.

To prove this Lemma, we will employ a technical result from point-set topology, relying on the fact that a function in N*_{φ,n} can be written as a composition of linear functions defined by the weights between successive layers, and non-linear functions defined by the activation function φ.

Lemma 3. Suppose f = f_κ ∘ ··· ∘ f_0 is a continuous function, where each f_i: R^{n_i} → R^{n_{i+1}} is continuous. Let A ⊂ R^{n_0} be a compact subset and choose ε > 0. Then there are compact subsets A_i ⊂ R^{n_i} and a δ > 0 such that any composition of continuous functions, each of which (δ, A_i)-approximates the corresponding f_i, will (ε, A)-approximate f.

One can prove Lemma 3 by induction on the number of functions in the composition, choosing each A_i ⊂ R^{n_i} to be a closed ε-neighborhood of the image of A in the composition up to i. For each new function, the δ on the compact set tells you what δ you need to choose for the composition of the preceding functions. We will not include the details here.

Proof of Lemma 2. We'll prove this Lemma by showing that N̂_n approximates any given function in N*_{φ,n}. Then, given ε > 0, a compact set A ⊂ R^n and a function g that is approximated by N*_{φ,n}, we can choose a function f ∈ N*_{φ,n} that (ε/2, A)-approximates g and a function in N̂_n that (ε/2, A)-approximates f. So we will reset the notation, let g be a function in N*_{φ,n}, let A ⊂ R^n be a compact subset and choose ε > 0.

As noted above, g is a composition g = ν_κ ∘ ℓ_κ ∘ ··· ∘ ν_0 ∘ ℓ_0, where each ℓ_i is a linear function defined by the weights between consecutive layers and each ν_i is a non-linear function defined by a direct product of the activation function φ. If any of the hidden layers in the network defining g have dimension strictly less than n then we can define the same function with a network in which that layer has dimension exactly n, but the weights in and out of the added neurons are all zero. Therefore, we can assume without loss of generality that all the hidden layers in g have dimension exactly n, though the linear functions may be singular.

Let {A_i} and δ > 0 be as defined by Lemma 3. We want to find functions ν̂_i and ℓ̂_i that (δ, A_i)-approximate each ν_i and ℓ_i and whose composition is in N̂_n. For the composition to be in N̂_n, we need each ℓ̂_i to be non-singular. If ℓ_i is already non-singular, then we choose ℓ̂_i = ℓ_i.
Otherwise, we can perturb the weights that define the linear map ℓ_i by an arbitrarily small amount to make it non-singular. In particular, we can choose this arbitrarily small amount to be small enough that the function values change by less than δ on A_i. Similarly, we want each ν̂_i to be a direct product of continuous, one-to-one activation functions. By assumption, φ can be approximated by such functions and we can choose the tolerance for this approximation to be small enough that ν̂_i (δ, A_i)-approximates ν_i. In fact, we can choose a single activation function for all the nonlinear layers, on each corresponding compact set. Thus we can choose each ℓ̂_i and an activation function φ̂ that defines all the functions ν̂_i, so that the composition is in N̂_n and, by Lemma 3, the composition (ε, A)-approximates g.

Lemma 2 implies that if N*_{φ,n} is universal then so is N̂_n. So to prove Theorem 1, we will show that every function in N̂_n has level sets with only unbounded components, then show that this property extends to any function that it approximates.

Lemma 4. If f is a function in N̂_n then every level set f^{−1}(y) is homeomorphic to an open (possibly empty) subset of R^{n−1}. This implies that f has unbounded level components.

Proof. Assume f is a non-singular function in N̂_n, where φ is continuous and one-to-one. Let f̂: R^n → R^n be the function defined by all but the last layer of the network. Let f̃: R^n → R be the function defined by the map from the last hidden layer to the final output layer, so that f = f̃ ∘ f̂.

The function f̂ is a composition of the linear functions defined by the network weights and the non-linear functions at each step defined by applying the activation function to each dimension. Because f is non-singular, the linear functions are all one-to-one. Because φ is continuous and one-to-one, so are all the non-linear functions. Thus the composition f̂ is also one-to-one, and therefore a homeomorphism from R^n onto its image I_f̂. Since R^n is homeomorphic to an open n-dimensional ball, I_f̂ is an open subset of R^n, as indicated in the top row of Figure 1.

The function f̃ is the composition of a linear function to R with φ, which is one-to-one by assumption. So the preimage f̃^{−1}(y) for any y ∈ R is an (n−1)-dimensional plane in R^n. The preimage f^{−1}(y) is the preimage under f̂ of this (n−1)-dimensional plane, or rather the preimage of the intersection I_f̂ ∩ f̃^{−1}(y), as indicated in the bottom right/center of the Figure. Since I_f̂ is open as a subset of R^n, the intersection is open as a subset of f̃^{−1}(y). Since f̂ is one-to-one, its restriction to this preimage (shown on the bottom left of the Figure) is a homeomorphism from f^{−1}(y) to this open subset of the (n−1)-dimensional plane f̃^{−1}(y). Thus f^{−1}(y) is homeomorphic to an open subset of R^{n−1}.

Finally, recall that the preimage under a continuous function of a closed set is closed, so f^{−1}(y) is closed as a subset of R^n. If it were also bounded, then it would be compact. However, the only compact, open subset of R^{n−1} is the empty set, so f^{−1}(y) is either unbounded or empty. Since each path component of a subset of R^{n−1} is by definition non-empty, this proves that every path component of every level set of f is unbounded.

All that remains is to show that this property extends to the functions that N̂_n approximates.

Lemma 5. If M is a model family in which every function has unbounded level components then any function approximated by M has unbounded level components.

Proof. Let g: R^n → R be a function with a level set g^{−1}(y) containing a bounded path component C.
Note that level sets are closed as subsets of R^n, and bounded, closed sets are compact, so C is compact. We can therefore choose a value µ such that any point of g^{−1}(y) outside of C is distance greater than µ from every point in C. Let η_C be the set of all points that are distance strictly less than µ/2 from C. This is an open subset of R^n, shown as the shaded region in the center of Figure 2, and we will let F be the frontier of η_C: the set of all points that are limit points of both η_C and limit points of its complement. By construction, every point in F is distance µ/2 from C, so F is disjoint from C. Moreover, since every point in g^{−1}(y) \ C is distance at least µ from C, F is disjoint from the rest of g^{−1}(y) as well, so y is in the complement of g(F).

The frontier is the intersection of two closed sets, so F is closed. It's also bounded, since all points are a bounded distance from C, so F is compact. This implies that g(F) is a compact subset of R, so its complement is open. Since y is in the complement of g(F), this means that there is an open interval U = (y − ε, y + ε) that is disjoint from g(F).

Let Ĉ be the component of g^{−1}(U) that contains C, as indicated on the right of the Figure. Note that this set intersects η_C but is disjoint from its frontier. So Ĉ must be contained in η_C, and is therefore bounded as well. In particular, each level set that intersects Ĉ has a compact component in Ĉ. Let x be a point in C ⊂ Ĉ. Since Ĉ is bounded, there is a value r such that every point in Ĉ is distance at most r from x.

Assume for contradiction that g is approximated by a model family M in which each function has unbounded level components. Choose R > r and let B_R(x) be a closed ball of radius R, centered at x. Because this is a compact set and g is approximated by M, we can choose a function f ∈ M that (ε/2, B_R(x))-approximates g. Then |f(x) − g(x)| < ε/2, so f(x) ∈ [y − ε/2, y + ε/2] ⊂ U, and we will define y′ = f(x).

Since f ∈ M, every path component of f^{−1}(y′) is unbounded, so there is a path ℓ ⊂ f^{−1}(y′) from x to a point that is distance R from x. If ℓ passes outside of B_R(x), we can replace ℓ with the component of ℓ ∩ B_R(x) containing x to ensure that ℓ stays inside of B_R(x), but still reaches a point that is distance R from x. Since every point x′ ∈ ℓ is contained in B_R(x), we have |f(x′) − g(x′)| < ε/2. This implies g(x′) ∈ [y − ε, y + ε] = U, so the path ℓ is contained in g^{−1}(U), and thus in the path component Ĉ of g^{−1}(U). However, by construction the path ℓ ends at a point whose distance from x is R > r, contradicting the assumption that every point in Ĉ is distance at most r from x. This contradiction proves that g cannot be approximated by a model family M in which each function has unbounded level components.

Proof of Theorem 1. Let g be a function that is approximated by N*_{φ,n}, where φ is a continuous activation function that can be uniformly approximated by one-to-one functions. By Lemma 2, since g is approximated by N*_{φ,n}, it is also approximated by N̂_n. By Lemma 4, every function in N̂_n has unbounded level components, so by Lemma 5, g has unbounded level components as well. Therefore N*_{φ,n} cannot approximate any function with a level set containing a bounded path component.

(Figure 3 captions: (a) The decision boundary learned with six two-dimensional hidden layers is an unbounded curve that extends outside the region visible in the image. (b) A network with a single three-dimensional hidden layer learns a bounded decision boundary relatively easily.)

To demonstrate the effect of Theorem 1, we used the TensorFlow Neural Network Playground to train two different networks on a standard synthetic dataset with one class centered at the origin of the two-dimensional plane, and the other class forming a ring around it.
We trained two neural networks and examined the plot of the resulting functions to characterize the level sets/decision boundaries. In these plots, the decision boundary is visible as the white region between the blue and orange regions defining the two labels. The first network has six two-dimensional hidden layers, the maximum number of layers allowed in the webapp. As shown in Figure 3a, the decision boundary is an unbounded curve that extends beyond the region containing all the data points. The ideal decision boundary between the two classes of points would be a (bounded) loop around the blue points in the middle, but Theorem 1 proves that such a network cannot approximate a function with such a level set. A decision boundary such as the one shown in the Figure is as close as it can get. The extra hidden layers allow the decision boundary to curve around and minimize the neck of the blue region, but they do not allow it to pinch off completely.

The second network has a single hidden layer of dimension three, one more than that of the input space. As shown in Figure 3b, the decision boundary for the learned function is a loop that approximates the ideal decision boundary closely. It comes from the three lines defined by the hidden nodes, which make a triangle that gets rounded off by the activation function. Increasing the dimension of the hidden layer would make the decision boundary rounder, though in this case the model doesn't need the extra flexibility. Note that this example generalizes to any dimension n, though without the ability to directly graph the results. In other words, for any Euclidean input space of dimension n, a sigmoid neural network with one hidden layer of dimension n + 1 can define a function that cannot be approximated by any deep network with an arbitrary number of hidden layers of dimension at most n. In fact, this will be the case for any activation function that is bounded above or below, though we will not include the details of the argument here.

In this paper, we describe topological limitations on the types of functions that can be approximated by deep, skinny neural networks, independent of the number of hidden layers. We prove the results using standard set-theoretic topology, then present examples that visually demonstrate the results.
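The Playground experiment can be approximated outside the webapp; the following PyTorch sketch is our own reconstruction (the dataset construction and hyperparameters are arbitrary illustrative choices, not taken from the paper), training a six-layer width-2 network and a one-layer width-3 network on a comparable ring dataset.

import torch, torch.nn as nn

torch.manual_seed(0)
n = 500
r = torch.cat([torch.rand(n), 2.0 + torch.rand(n)])          # inner blob and outer ring radii
a = torch.rand(2 * n) * 2 * torch.pi
X = torch.stack([r * torch.cos(a), r * torch.sin(a)], dim=1)
y = torch.cat([torch.zeros(n), torch.ones(n)])

def mlp(widths):
    layers, d = [], 2
    for w in widths:
        layers += [nn.Linear(d, w), nn.Tanh()]
        d = w
    return nn.Sequential(*layers, nn.Linear(d, 1))

for name, net in [("skinny (6 layers of width 2)", mlp([2] * 6)),
                  ("wide (1 layer of width 3)", mlp([3]))]:
    opt = torch.optim.Adam(net.parameters(), lr=0.01)
    for _ in range(2000):
        opt.zero_grad()
        loss = nn.functional.binary_cross_entropy_with_logits(net(X).squeeze(1), y)
        loss.backward(); opt.step()
    acc = ((net(X).squeeze(1) > 0).float() == y).float().mean()
    print(name, float(acc))  # the wide net can pinch off a bounded boundary; the skinny one cannot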
This paper proves that skinny neural networks cannot approximate certain functions, no matter how deep they are.
461
scitldr
Program verification offers a framework for ensuring program correctness and therefore systematically eliminating different classes of bugs. Inferring loop invariants is one of the main challenges behind automated verification of real-world programs, which often contain many loops. In this paper, we present the Continuous Logic Network (CLN), a novel neural architecture for automatically learning loop invariants directly from program execution traces. Unlike existing neural networks, CLNs can learn precise and explicit representations of formulas in Satisfiability Modulo Theories (SMT) for loop invariants from program execution traces. We develop a new sound and complete semantic mapping for assigning SMT formulas to continuous truth values that allows CLNs to be trained efficiently. We use CLNs to implement a new inference system for loop invariants, CLN2INV, that significantly outperforms existing approaches on the popular Code2Inv dataset. CLN2INV is the first tool to solve all 124 theoretically solvable problems in the Code2Inv dataset. Moreover, CLN2INV takes only 1.1 seconds on average for each problem, which is 40 times faster than existing approaches. We further demonstrate that CLN2INV can learn 12 significantly more complex loop invariants than the ones required for the Code2Inv dataset.

Program verification offers a principled approach for systematically eliminating different classes of bugs and proving the correctness of programs. However, as programs have become increasingly complex, real-world program verification often requires prohibitively expensive manual effort. Recent efforts have focused on automating the program verification process, but automated verification of general programs with unbounded loops remains an open problem. Verifying programs with loops requires determining loop invariants, which capture the effect of the loop on the program state irrespective of the actual number of loop iterations. Automatically inferring correct loop invariants is a challenging problem that is undecidable in general and difficult to solve in practice. Existing approaches use stochastic search, heuristics-based search, PAC learning based on counterexamples, or reinforcement learning. However, these approaches often struggle to learn complex, real-world loop invariants.

In this paper, we introduce a new approach to learning loop invariants by modeling the loop behavior from program execution traces using a new type of neural architecture. We note that inferring loop invariants can be posed as learning formulas in Satisfiability Modulo Theories (SMT) over program variables collected from program execution traces. In principle, neural networks seem well suited to this task because they can act as universal function approximators and have been successfully applied in various domains that require modeling of arbitrary functions. However, loop invariants must be represented as explicit SMT formulas to be usable for program verification. Unfortunately, existing methods for extracting logical rules from general neural architectures lack sufficient precision, while inductive logic learning lacks sufficient expressiveness for use in verification. We address this issue by developing a novel neural architecture, the Continuous Logic Network (CLN), which is able to efficiently learn explicit and precise representations of SMT formulas by using continuous truth values.
Unlike existing neural architectures, CLNs can represent a learned SMT formula explicitly in their structure and thus allow us to precisely extract the exact formula from a trained model. In order to train CLNs, we introduce a new semantic mapping from SMT formulas to continuous truth values. Our semantic mapping builds on BL, or basic fuzzy logic (Hájek, 2013), to support general SMT formulas in a continuous logic setting. We further prove that our semantic model is sound (i.e., truth assignments for the formulas are consistent with their discrete counterparts) and complete (i.e., all formulas can be represented) with regard to the discrete SMT formula space. These properties allow CLNs to represent any quantifier-free SMT formula operating on mixed integer-real arithmetic as an end-to-end differentiable series of operations.

We use CLNs to implement a new inference system for loop invariants, CLN2INV, that significantly outperforms state-of-the-art tools on the Code2Inv dataset by solving all 124 theoretically solvable problems in the dataset. This is 20 problems more than LoopInvGen, the winner of the SyGuS 2018 competition loop invariant track. Moreover, CLN2INV finds invariants for each program in 1.1 seconds on average, more than 40 times faster than LoopInvGen. We also demonstrate that CLN2INV is able to learn complex, real-world loop invariants with combinations of conjunctions and disjunctions of multivariable constraints.

Our main contributions are:
• We introduce a new semantic mapping for assigning continuous truth values to SMT formulas that is theoretically grounded and enables learning formulas through backpropagation. We further prove that our semantic model is sound and complete.
• We develop a novel neural architecture, Continuous Logic Networks (CLNs), that to the best of our knowledge is the first to efficiently learn precise and explicit SMT formulas by construction.
• We use CLNs to implement a new loop invariant inference system, CLN2INV, that is the first to solve all 124 theoretically solvable problems in the Code2Inv dataset, 20 more than the existing methods. CLN2INV is able to find invariants for each problem in 1.1 seconds on average, 40× faster than existing systems.
• We further show CLN2INV is able to learn 12 more complex loop invariants than the ones present in the Code2Inv dataset with combinations of multivariable constraints.

Related Work. Traditionally, loop invariant learning relies on stochastic or heuristics-guided search. Other approaches like NumInv analyze traces and discover conjunctions of equalities by solving a system of linear equations. LoopInvGen uses PAC learning of CNF using counterexamples. By contrast, Code2Inv learns to guess loop invariants using reinforcement learning with recurrent and graph neural networks. However, these approaches struggle to learn complex invariants. Unlike these works, CLN2INV efficiently learns complex invariants directly from execution traces. There is extensive work on PAC learning of boolean formulas, but learning precise formulas requires a prohibitively large number of samples. Several recent works use differentiable logic to learn boolean logic formulas from noisy data or improve adversarial robustness by applying logical rules to training. By contrast, our work learns precise SMT formulas directly by construction, allowing us to learn richer predicates with a compact representation in a noiseless setting. A variety of numerical relaxations have been applied to SAT and SMT solving.
Application-specific approximations using methods such as interval overapproximation and slack variables have been developed for different classes of SMT formulas. More recent work has applied recurrent and graph neural networks to Circuit SAT problems and unsat core detection (Selsam & Bjørner, 2019). FastSMT uses embeddings from natural language processing like skip-gram and bag-of-words to represent formulas for search strategy optimization. Unlike these approaches, we relax the SMT semantics directly to generate a differentiable representation of SMT.

In this section, we introduce the problem of inferring loop invariants and provide a brief overview of Satisfiability Modulo Theories (SMT), which are used to represent loop invariants. We provide background on fuzzy logic, which we extend with our new continuous semantic mapping for SMT.

Loop invariants capture loop behavior irrespective of the number of iterations, which is crucial for verifying programs with loops. Given a loop, while(LC){C}, a precondition P, and a postcondition Q, the verification task involves finding a loop invariant I that can be concluded from the pre-condition and implies the post-condition. Formally, it must satisfy the following three conditions, in which the second is a Hoare triple describing the loop:

(1) P ⇒ I    (2) {I ∧ LC} C {I}    (3) (I ∧ ¬LC) ⇒ Q

Example of Loop Invariant. Consider the example loop in Fig. 1. For a loop invariant to be usable, it must be valid for the precondition (t = 10 ∧ u = 0), for the recursion step when t ≠ 0, and for the post-condition (u = 20) when the loop condition is no longer satisfied, i.e., t = 0. The correct and precise invariant I for the program is (2t + u = 20).

//pre: t=10 /\ u=0
while (t != 0) { t = t - 1; u = u + 2; }
//post: u=20

(Figure 1 caption: the desired loop invariant I for the program above is a boolean function over the program variables t and u satisfying conditions (1)-(3); the desired and precise loop invariant I is (2t + u = 20).)

Satisfiability Modulo Theories (SMT) are an extension of Boolean Satisfiability that allow solvers to reason about complex problems efficiently. Loop invariants and other formulas in program verification are usually encoded with quantifier-free SMT. A formula F in quantifier-free SMT can be inductively defined as below:

F := E1 ◦ E2 | ¬F | F1 ∧ F2 | F1 ∨ F2, where ◦ ∈ {=, ≠, <, >, ≤, ≥} and E1, E2 are expressions of terms.

The loop invariant (2t + u = 20) in Fig. 1 is an SMT formula. Nonlinear arithmetic theories admit higher-order terms such as t² and t·u, allowing them to express more complex constraints. For example, (¬(2 ≥ t²)) is an SMT formula that is true when the value of the higher-order term t² is larger than 2.

Basic fuzzy logic (BL) is a class of logic that uses continuous truth values in the range [0, 1] and is differentiable almost everywhere (Hájek, 2013). BL defines logical conjunction with functions called t-norms, which must satisfy specific conditions to ensure that the behavior of the logic is consistent with boolean first-order logic. Formally, a t-norm (denoted ⊗) in BL is a binary operator over truth values in the interval [0, 1] satisfying the following conditions: consistency (1 ⊗ x = x and 0 ⊗ x = 0), commutativity (x ⊗ y = y ⊗ x), associativity (x ⊗ (y ⊗ z) = (x ⊗ y) ⊗ z), and monotonicity (x1 ≤ x2 =⇒ x1 ⊗ y ≤ x2 ⊗ y). Besides these conditions, BL also requires that t-norms be continuous. Given a t-norm ⊗, its associated t-conorm (denoted ⊕) is defined with DeMorgan's law: t ⊕ u := ¬(¬t ⊗ ¬u), which can be considered as logical disjunction. A common t-norm is the product t-norm x ⊗ y = x·y with its associated t-conorm x ⊕ y = x + y − x·y.
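Returning to the invariant conditions (1)-(3) above, they can be checked mechanically for the example invariant. The sketch below uses the z3py bindings (assuming the Z3 solver, which the paper later uses as its checking backend, is installed) and encodes the loop body t' = t − 1, u' = u + 2 by substitution.

from z3 import Ints, And, Not, Implies, Solver, unsat

t, u = Ints("t u")
I = 2 * t + u == 20                                   # candidate invariant

def valid(formula):
    s = Solver()
    s.add(Not(formula))                               # valid iff the negation is unsatisfiable
    return s.check() == unsat

print(valid(Implies(And(t == 10, u == 0), I)))                        # (1) P => I
print(valid(Implies(And(I, t != 0), 2 * (t - 1) + (u + 2) == 20)))    # (2) {I /\ LC} C {I}
print(valid(Implies(And(I, t == 0), u == 20)))                        # (3) I /\ not LC => Q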
We introduce a continuous semantic mapping, S, for SMT on BL that is end-to-end differentiable. The mapping S associates SMT formulas with continuous truth values while preserving each formula's semantics. In this paper, we only consider quantifier-free formulas. This process is analogous to constructing t-norms for BL, where a t-norm operates on continuous logical inputs. We define three desirable properties for a continuous semantic mapping S that will preserve formula semantics while facilitating parameter training with gradient descent:

1. S(F) should be consistent with BL. For any two formulas F and F′, where F(x) is satisfied and F′(x) is unsatisfied with an assignment x of formula terms, we should have S(F′)(x) < S(F)(x). This will ensure the semantics of SMT formulas are preserved.
2. S(F) should be differentiable almost everywhere. This will facilitate training with gradient descent through backpropagation.
3. S(F) should be increasing everywhere as the terms in the formula approach constraint satisfaction, and decreasing everywhere as the terms in the formula approach constraint violation. This ensures there is always a nonzero gradient for training.

Continuous semantic mapping. We first define the mapping for ">" (greater-than) and "≥" (greater-than-or-equal-to), as well as adopting definitions for "¬", "∧", and "∨" from BL. All other operators can be derived from these. For example, "≤" (less-than-or-equal-to) is derived using "≥" and "¬", while "=" (equality) is then defined as the conjunction of formulas using "≤" and "≥". Given constants B > 0 and ε > 0, we first define the mapping S on ">" and "≥" using shifted and scaled sigmoid functions:

S(t > u) := σ(B·(t − u − ε))    S(t ≥ u) := σ(B·(t − u + ε)),    where σ(x) = 1/(1 + e^(−x)).

Illustrations of these functions are given in Appendix A. The validity of our semantic mapping lies in the following facts, which can be proven with basic algebra: when ε goes to zero and B·ε goes to infinity, our continuous mapping of ">" and "≥" will preserve their original semantics. Under these conditions, our mapping satisfies all three desirable properties. In practice, for small ε and large B, the properties are also satisfied if |t − u| > ε.

Next we define the mapping S for the boolean operators "∧", "∨" and "¬" using BL. Given a specific t-norm ⊗ and its corresponding t-conorm ⊕, it is straightforward to define the mappings of "∧", "∨" and "¬":

S(F1 ∧ F2) := S(F1) ⊗ S(F2)    S(F1 ∨ F2) := S(F1) ⊕ S(F2)    S(¬F) := 1 − S(F)

Based on the above definitions, the mapping for other operators can be derived as follows:

S(t ≤ u) := S(¬(t > u))    S(t < u) := S(¬(t ≥ u))    S(t = u) := S(t ≥ u) ⊗ S(t ≤ u)    S(t ≠ u) := S(¬(t = u))

The mapping S on "=" is valid since the following limit holds (see Appendix B for the proof):

lim_{ε→0⁺, B·ε→∞} S(t = u) = 1 if t = u, and 0 otherwise.

The mapping for other operators shares similar behavior in the limit, and also fulfills our desired properties under the same conditions. Using our semantic mapping S, most of the standard operations of integer and real arithmetic, including addition, subtraction, multiplication, division, and exponentiation, can be used normally and mapped to continuous truth values while keeping the entire formula differentiable. Moreover, any expression in SMT that has an integer or real-valued result can be mapped to continuous logical values via these formulas, although end-to-end differentiability may not be maintained in cases where specific operations are nondifferentiable.

In this section, we describe the construction of Continuous Logic Networks (CLNs) based on our continuous semantic mapping for SMT on BL.

CLN Construction. CLNs use our semantic mapping to provide a general neural architecture for learning SMT formulas.
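A minimal NumPy sketch of this mapping using the product t-norm (the constants B and ε below are arbitrary illustrative choices):

import numpy as np

B, eps = 50.0, 0.1
sig = lambda x: 1.0 / (1.0 + np.exp(-x))

S_gt = lambda t, u: sig(B * (t - u - eps))            # S(t > u)
S_ge = lambda t, u: sig(B * (t - u + eps))            # S(t >= u)
t_and = lambda a, b: a * b                            # product t-norm
t_not = lambda a: 1.0 - a
S_le = lambda t, u: t_not(S_gt(t, u))                 # S(t <= u) = S(not(t > u))
S_eq = lambda t, u: t_and(S_ge(t, u), S_le(t, u))     # S(t = u)

print(S_eq(2 * 9 + 2, 20))   # ~1: (t, u) = (9, 2) satisfies 2t + u = 20
print(S_eq(2 * 9 + 5, 20))   # ~0: (t, u) = (9, 5) violates it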
In a CLN, the learnable coefficients and smoothing parameters correspond to the learnable parameters in a standard feedforward network, and the continuous predicates, t-norms, and t-conorms operate as activation functions, like ReLUs in a standard network. In this paper, we focus on shallow networks to address the loop invariant inference problem, but we envision deeper general-purpose CLNs that can learn arbitrary SMT formulas. When constructing a CLN, we work from an SMT Formula Template, in which every value is marked as either an input term, a constant, or a learnable parameter. Given an SMT Formula Template, we dynamically construct a CLN as a computational graph. Figure 2 shows a simple formula template and the constructed CLN. We denote the CLN model constructed from the formula template S(F) as M_F.

CLN Training. Once the CLN has been constructed based on a formula template, it is trained with the following optimization. Given a CLN model M constructed from an SMT template with learnable parameters W, and a set X of valid assignments for the terms in the SMT template, the expected value of the CLN is maximized by minimizing a loss function L that penalizes model outputs that are less than one. A minimum scaling factor β is selected, and a hinge loss is applied to the scaling factor B to force the differentiable predicates to approach sharp cutoffs. The offset ε is also regularized to ensure precision. The overall optimization is formulated as:

min_W  Σ_{x∈X} L(M(x; W)) + λ·max(0, β − B) + γ·ε

where λ and γ are hyperparameters respectively governing the weight assigned to the scaling-factor and offset regularization, and L is any loss function strictly decreasing in its domain.

Given a CLN that has been trained to a loss approaching 0 on a given set of valid assignments, we show that the resulting continuous SMT formula learned by the CLN is consistent with an equivalent discrete SMT formula. In particular, we prove that such a formula is sound (i.e., a CLN will learn a correct SMT formula with respect to the training data), and that our continuous mapping is complete (i.e., CLNs can represent any SMT formula that can be represented in discrete logic). We further prove that CLNs are guaranteed to converge to a globally optimal solution for formulas which can be expressed as the conjunction of linear equalities. We provide formal definitions and proofs for soundness and completeness in Appendix C and optimality in Appendix D.

We use CLNs to implement a new inference system for loop invariants, CLN2INV, which learns invariants directly from execution traces. CLN2INV follows the same overall process as other loop invariant inference systems such as LoopInvGen and Code2Inv: it iterates through likely candidate invariants and checks their validity with an SMT solver. The key difference between our method and other systems is that it learns a loop invariant formula directly from trace data. Figure 2 provides an overview of the architecture.

Preprocessing. We first perform static analysis and instrument the program to prepare for training data generation. In addition to collecting the given precondition and post-condition, the static analysis extracts all constants in the program, along with the loop termination condition. We then instrument the program to record all program variables before each loop execution and after the loop termination. We also restrict the loop to terminate after a set number of iterations to prevent loops running indefinitely (for experiments in this paper, we set the max loop iterations to 50).
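For intuition about the training step described above, here is a toy end-to-end sketch. It is our own simplification: a single Gaussian-like equality template and an ad-hoc norm penalty stand in for the paper's full template machinery and its B and ε regularization. It recovers 2t + u = 20 from trace points of the example loop.

import torch

ts = torch.arange(0.0, 11.0)                                # trace states of the example loop
X = torch.stack([ts, 20 - 2 * ts, torch.ones_like(ts)], 1)  # rows [t, u, 1] with u = 20 - 2t

w = (0.1 * torch.randn(3)).requires_grad_()                 # coefficients of w0*t + w1*u + w2 = 0
opt = torch.optim.Adam([w], lr=0.05)
for _ in range(5000):
    opt.zero_grad()
    truth = torch.exp(-(X @ w) ** 2 / (2 * 3.0 ** 2))       # Gaussian-like equality predicate
    loss = (1 - truth).mean() + 0.1 * (w.norm() - 1).abs()  # penalize untrue outputs; avoid w = 0
    loss.backward(); opt.step()

print(w / w[0])   # ~ [1.0, 0.5, -10.0], i.e. t + u/2 - 10 = 0  <=>  2t + u = 20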
We also strengthen the precondition to ensure loop execution (see Appendix E).

Training Data Generation. We generate training data by running the program repeatedly on a set of randomly initialized inputs that satisfy the preconditions. Unconstrained variables are initialized from a uniform distribution centered on 0 with width r, where r is a hyper-parameter of the sampling process. Variables with either an upper or a lower bound precondition constraint are initialized from a uniform distribution adjacent to their constraint with width r, while variables with both upper and lower bounds in the precondition are sampled uniformly within their bounds. For all of our experiments in this paper, we set r to 10. When the number of uninitialized variables is small (i.e., less than 3), we perform this sampling exhaustively. An example of training data generation is provided in Appendix F.

Template Generation. We generate templates in three stages with increasing expressiveness:
1. We first generate templates directly from the pre-condition and post-condition.
2. We next extract the individual clauses from the pre- and post-condition, as well as the loop condition, and generate templates from conjunctions and disjunctions of each possible pair of clauses.
3. We finally generate more generic templates of increasing complexity with a combination of one or more equality constraints on all variables combined with conjunctions of inequality constraints, which are based on the loop condition and individual variables.
We describe the template generation in detail in Appendix F. To detect when higher-order terms may be present in the invariant, we perform a log-log linear regression on each variable relative to the loop iteration, similarly to prior work. If the loop contains one or more variables that grow superlinearly relative to the loop iteration, we add higher-order polynomial terms to the equality constraints in the template, up to the highest degree detected among the loop variables.

CLN Construction and Training. Once a template formula has been generated, a CLN is constructed from the template using the formulation in §4. As an optimization, we represent equality constraints as Gaussian-like functions that retain a global maximum when the constraint is satisfied, as discussed in Appendix G. We then train the model using the collected execution traces.

Invariant Checking. Invariant checking is performed using SMT solvers such as Z3 (De Moura & Bjørner, 2008). After the CLN for a formula template has been trained, the SMT formula for the loop invariant is recovered by normalizing the learned parameters. The invariant is checked against the pre, post, and recursion conditions as described in §2.1. If the correct invariant is not found, we return to the template generation phase to continue the search with a more expressive template.

We compare the performance of CLN2INV with two existing methods and demonstrate the efficacy of the method on several more difficult problems. Finally, we conduct two ablation studies to justify our design choices.

Performance Comparison. We compare CLN2INV to two state-of-the-art methods: Code2Inv (based on neural code representation and reinforcement learning) and LoopInvGen (PAC learning over synthesized CNF formulas). We limit each method to one hour per problem in the same format as the SyGuS Competition. CLN2INV is able to solve all 124 problems in the benchmark. LoopInvGen solves 104 problems while Code2Inv solves 90.
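The trace-sampling step described in the Training Data Generation paragraph above can be sketched as follows (the loop body here is a hypothetical stand-in, and the exact sampling window is our simplification):

import random

R, MAX_ITERS = 10, 50   # sampling width r and the iteration cap from the paper

def run_instrumented(k):
    # Hypothetical loop for illustration; record the state before each iteration and after exit.
    t, u, states = 0, 0, []
    for _ in range(MAX_ITERS):
        states.append({"t": t, "u": u, "k": k})
        if not (t < k):          # loop condition
            break
        t, u = t + 1, u + 2      # loop body
    states.append({"t": t, "u": u, "k": k})
    return states

# k has the upper-bound precondition k <= 8: enumerate a width-R window adjacent to the bound.
training_data = [s for k in range(8, 8 - R, -1) for s in run_instrumented(k)]
print(len(training_data))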
Figure 3a shows the measured runtime on each evaluated system. CLN2INV solves problems in 1.1 seconds on average, which is over 40× faster than LoopInvGen, the second fastest system in the evaluation. It spends the most time on solver calls (0.6s avg.) and CLN training (0.5s avg.), with negligible time spent on preprocessing, data generation, and template generation for each problem (<20ms each). We provide a breakdown of runtimes in Appendix I. In general, CLN2INV has similar performance to LoopInvGen on simple problems but is able to scale efficiently to complex problems. Figure 3b shows the number of Z3 calls made by each method. For almost all problems, CLN2INV requires fewer Z3 calls than the other systems, although for some difficult problems it uses more Z3 calls than Code2Inv. Table 1 summarizes the results of the performance evaluation. Code2Inv requires much more time on average per problem, but minimizes the number of calls made to an SMT solver. In contrast, LoopInvGen is efficient at generating a large volume of guessed candidate invariants, but is much less accurate for each individual invariant. CLN2INV can be seen as a balance between the two approaches: it searches over candidate invariants more quickly than Code2Inv, but generates more accurate invariants than LoopInvGen, resulting in lower overall runtime.

We consider two classes of more difficult loop invariant inference problems that are not present in the Code2Inv dataset. The first requires conjunctions and disjunctions of multivariable constraints, and the second requires polynomials with many higher-order terms. Both of these classes of problems are significantly more challenging because they are more complex and cause the space of possible invariants to grow much more quickly. To evaluate on problems that require invariants with conjunctions and disjunctions of multivariable constraints, we construct 12 additional problems. For these problems, we only consider loops with invariants that cannot be easily inferred with pre- and post-condition based heuristics. Appendix J describes these problems in more detail and provides examples. CLN2INV is able to find correct invariants for all 12 problems in less than 20 seconds, while Code2Inv and LoopInvGen time out after an hour.

To evaluate on problems with higher-order polynomial invariants, we test CLN2INV on power summation problems of the form u = Σ_{t=0}^{k} t^d for a given degree d, which have been used in evaluations of polynomial loop invariant inference. We discuss these problems in more detail in Appendix J. CLN2INV can correctly learn the invariant for 1st and 2nd order power summations, but cannot learn correct invariants for 3rd, 4th or 5th order summations, which have many more higher-order terms. We do not evaluate the other methods on these problems because they are not configured for nonlinear arithmetic by default.

Effect of CLN Training on Performance. CLN2INV relies on a combination of heuristics using static analysis and learning formulas from execution traces to correctly infer loop invariants. In this ablation we disable model training and limit CLN2INV to static models with no learnable parameters. Static CLN2INV solves 91 problems in the dataset. Figure 4 shows a comparison of full CLN2INV with one limited to static models. CLN2INV's performance with training disabled shows that a large number of problems in the dataset are relatively simple and can be inferred from basic heuristics.
However, for more difficult problems, CLN learning is key to inferring correct invariants.

We develop a novel neural architecture that explicitly and precisely learns SMT formulas by construction. We achieve this by introducing a new sound and complete semantic mapping for SMT that enables learning formulas through backpropagation. We use CLNs to implement a loop invariant inference system, CLN2INV, that is the first to solve all theoretically solvable problems in the Code2Inv benchmark and takes only 1.1 seconds on average. We believe that the CLN architecture will also be beneficial for other domains that require learning SMT formulas.

A CONTINUOUS PREDICATES

Figure 5 shows examples of the shifted sigmoids for S(>), S(≥), and S(=).

B PROOF OF THE LIMIT FOR S(=)

Write f(t, u; B, ε) = S(t ≥ u) and g(t, u; B, ε) = S(t ≤ u). Taking the limits ε → 0⁺ and B·ε → ∞, f converges to 1 when t ≥ u and to 0 when t < u, while g converges to 1 when t ≤ u and to 0 when t > u. Combining these results with the facts that, for any t-norm, 0 ⊗ 1 = 0, 1 ⊗ 1 = 1, and 1 ⊗ 0 = 0, we have

lim_{ε→0⁺, B·ε→∞} (f(t, u; B, ε) ⊗ g(t, u; B, ε)) = 1 if t = u, and 0 if t ≠ u,

which concludes the proof.

C SOUNDNESS AND COMPLETENESS

Soundness. Given the SMT formula F, the CLN model M_F constructed from S(F) always preserves the truth value of F. It indicates that, given a valid assignment to the terms x in F, F(x) = True ⟺ lim_{ε→0⁺, B·ε→∞} M_F(x; B, ε) = 1 (Eq. 1), and F(x) = False ⟺ lim_{ε→0⁺, B·ε→∞} M_F(x; B, ε) = 0 (Eq. 2).

Completeness. For any SMT formula F, a CLN model M can be constructed representing that formula. In other words, CLNs can express all SMT formulas on integers and reals.

We formally state these properties in Theorem 1. Before that, we need to define a property for t-norms.

Property 1. t ⊗ u = 0 implies t = 0 or u = 0.

The product t-norm and Godel t-norm have this property, while the Lukasiewicz t-norm does not.

Theorem 1. For any quantifier-free linear SMT formula F, there exists a CLN model M such that Eq. 1 and Eq. 2 hold, as long as the t-norm used in building M satisfies Property 1.

Proof. For convenience of the proof, we first remove all <, ≤, = and ≠ in F by transforming them using >, ≥ and ¬. Now the only operators that F may contain are >, ≥, ∧, ∨, ¬. We prove Theorem 1 by induction on the constructor of formula F. In the following proof, we construct model M given F and show that it satisfies Eq. 1. We leave the proof for why M also satisfies Eq. 2 to readers.

Atomic Case. When F is an atomic clause, then F will be in the form of x·W + b > 0 or x·W + b ≥ 0. For the first case, we construct a linear layer with weight W and bias b followed by a sigmoid function scaled with factor B and right-shifted with distance ε. For the second case, we construct the same linear layer followed by a sigmoid function scaled with factor B and left-shifted with distance ε. Simply evaluating the limits in each case, we arrive at Eq. 1, and from the definition of the sigmoid function we know 0 ≤ M(x; B, ε) ≤ 1.

Negation Case. If F = ¬F′, from the induction hypothesis, F′ can be represented by a model M′ satisfying Eq. 1, and we let M = 1 − M′. If lim_{ε→0⁺, B·ε→∞} M(x; B, ε) = 1 then lim_{ε→0⁺, B·ε→∞} M′(x; B, ε) = 0, so from the induction hypothesis we know that F′(x) = False. So F(x) = ¬F′(x) = True.

Conjunction Case. If F = F1 ∧ F2, from the induction hypothesis, F1 and F2 can be represented by models M1 and M2, such that both (F1, M1) and (F2, M2) satisfy Eq. 1. Let p1 and p2 be the output nodes of M1 and M2. We add a final output node p = p1 ⊗ p2, so M(x; B, ε) = M1(x; B, ε) ⊗ M2(x; B, ε). Since (⊗) is continuous and so are M1(x; B, ε) and M2(x; B, ε), we know their composition M(x; B, ε) is also continuous. (Readers may wonder why M1(x; B, ε) is continuous. Actually the continuity of M(x; B, ε) should be proved inductively like this proof itself, and we omit it for brevity.) From the definition of (⊗), we have 0 ≤ M(x; B, ε) ≤ 1. Now we prove the =⇒ side of Eq. 1.
For any x, if F(x) = True, which means both F1(x) = True and F2(x) = True, from the induction hypothesis we know that lim_{ε→0⁺, B·ε→∞} M1(x; B, ε) = 1 and lim_{ε→0⁺, B·ε→∞} M2(x; B, ε) = 1, so lim_{ε→0⁺, B·ε→∞} M(x; B, ε) = 1 ⊗ 1 = 1.

Then we prove the ⇐= side. From the induction hypothesis we know that M1(x; B, ε) ≤ 1 and M2(x; B, ε) ≤ 1. From the non-decreasing property of t-norms (see §2.3), and then from the consistency property and the commutative property, we have M(x; B, ε) = M1(x; B, ε) ⊗ M2(x; B, ε) ≤ M1(x; B, ε) ⊗ 1 = M1(x; B, ε) ≤ 1. Because we know lim_{ε→0⁺, B·ε→∞} M(x; B, ε) = 1, according to the squeeze theorem in calculus, we get lim_{ε→0⁺, B·ε→∞} M1(x; B, ε) = 1. From the induction hypothesis, we know that F1(x) = True. We can prove F2(x) = True in the same manner. Finally we have F(x) = F1(x) ∧ F2(x) = True.

Disjunction Case. For the case F = F1 ∨ F2, we construct M from M1 and M2 as we did in the conjunctive case. This time we let the final output node be p = p1 ⊕ p2. From the continuity of (⊗) and the definition of (⊕) (t ⊕ u = 1 − (1 − t) ⊗ (1 − u)), (⊕) is also continuous. We conclude that M(x; B, ε) is also continuous and 0 ≤ M(x; B, ε) ≤ 1 by the same argument as for F = F1 ∧ F2.

Now we prove the "=⇒" side of Eq. 1. For any assignment x, if F(x) = True, then F1(x) = True or F2(x) = True. Without loss of generality, we assume F1(x) = True. From the induction hypothesis, we know lim_{ε→0⁺, B·ε→∞} M1(x; B, ε) = 1. For any (⊕) and any 0 ≤ t, t′ ≤ 1, if t ≤ t′, then t ⊕ u ≤ t′ ⊕ u. Using this property and the induction hypothesis M2(x; B, ε) ≥ 0, we have M(x; B, ε) = M1(x; B, ε) ⊕ M2(x; B, ε) ≥ M1(x; B, ε) ⊕ 0. From the induction hypothesis we also have M1(x; B, ε) ≤ 1. Using the definition of (⊕) and the consistency of (⊗) (0 ⊗ x = 0), we get M1(x; B, ε) ⊕ 0 = M1(x; B, ε). Putting this together, we get M1(x; B, ε) ≤ M(x; B, ε) ≤ 1. Because we know lim_{ε→0⁺, B·ε→∞} M1(x; B, ε) = 1, according to the squeeze theorem in calculus, we get lim_{ε→0⁺, B·ε→∞} M(x; B, ε) = 1.

Then we prove the "⇐=" side. Here we need to use the existence of the limits of M1(x; B, ε) and M2(x; B, ε). This property can be proved by induction like this proof itself, thus omitted for brevity. Let c1 and c2 denote these two limits. Since we have lim_{ε→0⁺, B·ε→∞} M(x; B, ε) = 1, we get 1 − (1 − c1) ⊗ (1 − c2) = 1, i.e. (1 − c1) ⊗ (1 − c2) = 0. Using Property 1 of (⊗), we have c1 = 1 ∨ c2 = 1. Without loss of generality, we assume c1 = 1. From the induction hypothesis, we know that F1(x) = True, and thus F(x) = True.

Careful readers may have found that if we use the continuous mapping function S in §3, then we have another perspective of the proof above, which can be viewed as two interwoven parts. The first part is that we proved the following lemma.

Corollary 1. For any quantifier-free linear SMT formula F, F(x) = True ⟺ lim_{ε→0⁺, B·ε→∞} S(F)(x; B, ε) = 1.

Corollary 1 indicates the soundness of S. The second part is that we construct a CLN model given S(F). In other words, we translate S(F) into vertices in a computational graph composed of differentiable operations on continuous truth values.

D OPTIMALITY

Optimality. For a subset of SMT formulas (conjunctions of multiple linear equalities), CLNs are guaranteed to converge at the global minimum. We formally state this in Theorem 2. We first define another property similar to strict monotonicity.

Property 2. For all t1, t2, t3: (t1 < t2) and (t3 > 0) implies (t1 ⊗ t3 < t2 ⊗ t3).

Theorem 2. For any CLN model M_F constructed from a formula F by the procedure shown in the proof of Theorem 1, if F is the conjunction of multiple linear equalities then any local minimum of M_F is the global minimum, as long as the t-norm used in building M_F satisfies Property 2.

Proof. Since F is the conjunction of linear equalities, it has the form

F = ⋀_i ( Σ_{j=1}^{l_i} w_ij · t_ij = 0 )

Here W = {w_ij} are the learnable weights, and {t_ij} are terms (variables). We omit the bias b_i in the linear equalities, as the bias can always be transformed into a weight by adding a constant of 1 as a term. For convenience, we define f(x) = S(x = 0) = exp(−x²/(2σ²)).
Given an assignment x of the terms {t_ij}, if we construct our CLN model M_F following the procedure shown in the proof of Theorem 1, the output of the model will be

M_F(x) = ⊗_i f( Σ_{j=1}^{l_i} w_ij · t_ij )

When we train our CLN model, we have a collection of m data points {t_ij1}, {t_ij2}, ..., {t_ijm}, which satisfy formula F. If B and ε are fixed (unlearnable), then the loss function will be

L(W) = Σ_{k=1}^{m} L( ⊗_i f( Σ_{j=1}^{l_i} w_ij · t_ijk ) )

Suppose W* = {w*_ij} is a local minimum of L(W). We need to prove W* is also the global minimum. To prove this, we use the definition of a local minimum: there exists δ > 0 such that L(W*) ≤ L(W) for all W with ||W − W*|| < δ. For convenience, we denote u_ik = Σ_{j=1}^{l_i} w*_ij · t_ijk. If we can prove that u_ik = 0 for all i and k, then because (i) f reaches its global maximum at 0, (ii) the t-norm (⊗) is monotonically increasing, and (iii) L is monotonically decreasing, we can conclude that W* is the global minimum. Here we just show the case i = 1. The proof for i > 1 can be directly derived using the associativity of (⊗).

For each data point k, let α_k denote the t-norm product of the factors for i > 1, so that the model output on point k is f(|u_1k|) ⊗ α_k. Since f(x) > 0 for all x ∈ R, using Property 2 of our t-norm (⊗), we know that α_k > 0. Now consider scaling the weights w*_1j to (1 + γ)·w*_1j, so that u_1k becomes u_1k(1 + γ). Because (i) f(x) is an even function decreasing on x > 0 (which can be easily proved), (ii) (⊗) is monotonically increasing, and (iii) L is monotonically decreasing, for −δ < γ < 0 the loss at the scaled weights is no greater than L(W*). Combining this with the local-minimum property of W*, the two losses must be equal. Since (i) L is strictly decreasing, (ii) the t-norm used here has Property 2 (see §4 for the definition), and (iii) α_k > 0, the only case when equality holds is that for all 1 ≤ k ≤ m, we have f(|u_1k(1 + γ)|) = f(|u_1k|). Since f(x) is strictly decreasing for x ≥ 0, we have |u_1k(1 + γ)| = |u_1k|. Finally, because −1 < −δ < γ < 0, we have u_1k = 0.

E PRECONDITION STRENGTHENING

Theorem 3. Given a program C: assume(P); while (LC) {C} assert(Q); if we can find a loop invariant I for the program C′: assume(P ∧ LC); while (LC) {C} assert(Q); and P ∧ ¬LC =⇒ Q, then I ∨ (P ∧ ¬LC) is a correct loop invariant for program C.

Proof. Since I is a loop invariant of C′, we have (a) P ∧ LC =⇒ I, (b) {I ∧ LC} C {I}, and (c) I ∧ ¬LC =⇒ Q. We want to prove I ∨ (P ∧ ¬LC) is a valid loop invariant of C, which means P =⇒ I ∨ (P ∧ ¬LC), {(I ∨ (P ∧ ¬LC)) ∧ LC} C {I ∨ (P ∧ ¬LC)}, and (I ∨ (P ∧ ¬LC)) ∧ ¬LC =⇒ Q. We prove the three propositions separately.

To prove P =⇒ I ∨ (P ∧ ¬LC), note that when ¬LC holds the disjunction holds trivially, so it suffices to prove the stronger proposition P ∧ LC =⇒ I, which directly comes from (a). For {(I ∨ (P ∧ ¬LC)) ∧ LC} C {I ∨ (P ∧ ¬LC)}, after simplification it becomes {I ∧ LC} C {I ∨ (P ∧ ¬LC)}, which is a direct corollary of (b). For (I ∨ (P ∧ ¬LC)) ∧ ¬LC =⇒ Q, after simplification it becomes two separate propositions, I ∧ ¬LC =⇒ Q and P ∧ ¬LC =⇒ Q. The former is exactly (c), and the latter is a known condition in the theorem.

F TRAINING DATA AND TEMPLATE GENERATION

Training Data Generation Example. Figure 6 provides an example of our training data generation procedure. The uninitialized variable k is sampled according to the precondition k ≤ 8 within the predefined width r = 10. So we end up enumerating k = 8, 7, ..., −2. For each k, the loop is executed repeatedly until termination, thus generating a small set of samples. The final training set is the union of these small sets. (Figure 6: Illustration of how training data is generated. After the sampling procedure in (b) we have a collection of 88 samples which will later be fed to the CLN model.)

Template Generation. Templates are first generated from the pre- and post-conditions, followed by every pair of clauses extracted from the pre-condition, post-condition, and loop-condition. Generic templates are then constructed consisting of one or more general equality constraints containing all variables conjoined with inequality constraints. Algorithm 1 summarizes the template generation process.
In Algorithm 1, the following helper functions are used:
- construct_template: Construct a template given an SMT formula.
- extract_clauses: Extract individual clauses from SMT formulas.
- estimate_degrees: Perform log-log linear regression to estimate the degree of each variable.
- polynomial_kernel: Execute a polynomial kernel on variables and data for a given degree.
- is_single_constraint: Check if a condition is a single inequality constraint.
- extract_loop_constraint: Convert the loop condition to a learnable SMT template.
(Algorithm 1 pseudocode omitted.)

Note that templates are generated on demand, so each template is used to infer a possible invariant before the next is generated.

G GAUSSIAN EQUALITY PREDICATES

We use a Gaussian-like function S(t = u) = exp(−(t − u)²/(2σ²)) to represent equalities in our experiments. It has the following two properties. First, it preserves the original semantics of = when σ → 0, similar to the sigmoid-based mapping S(t = u) defined in §3. Second, if we view S(t = u) as a function over t − u, then it reaches its only local maximum at t − u = 0, which means the equality is satisfied.

H INCORRECT PROBLEMS IN THE DATASET

Here we provide an example, which is Problem 106 in the dataset. The post-condition a ≥ m is wrong if we start from a = 0, m = 1, and k = 0.

int k = 0;
int a, m;
assume(a <= m);
while (k < 1) {
  if (m < a) m = a;
  k = k + 1;
}
assert(a >= m);

Executing the program with these inputs results in a = 0, m = 1, and k = 1, as the if condition is never satisfied. But clearly, the post-condition a ≥ m is violated. Below we tabulate the counterexamples invalidating the nine removed problems from the dataset. (Table omitted.)

I RUNTIME BREAKDOWN

In Table 3 we provide a more detailed analysis of how much time is spent on each stage of the invariant inference pipeline in CLN2INV. Measurements are averaged over 5 runs. The system spends most time on solver calls (0.6s avg.) and CLN training (0.5s avg.), with negligible time spent on preprocessing, sampling, and template generation. For most problems in the Code2Inv benchmark, CLN2INV completes CLN training quickly (less than 0.2s) and spends most of its time performing solver checks, but it requires more epochs to train on some complex problems with many variables.

J MORE DIFFICULT LOOP INVARIANT PROBLEMS

Multivariable Conjunction/Disjunction Invariants. In this subsection, we discuss in detail two of the 12 more difficult loop invariant problems we have created. The first problem is shown in Figure 7. The correct loop invariant for the program is ((t + u = 0) ∨ (t − u = 0)) ∧ (t ≤ 0). The plot of the trace in Figure 7b shows that the points lie on one of two lines expressible as linear equality constraints. These constraints, along with the inequality, can be learned from the execution trace using CLN2INV in under 20 seconds. The second problem, shown in Figure 8, contains a branch whose outcome follows a Bernoulli distribution with success probability 0.5. Although the branching behavior is not deterministic, we know (t + u = 0) ∧ (v + w = 0) ∧ (u + w ≥ 0) is a correct invariant, as it holds regardless of which branch is taken. CLN2INV can learn this invariant within 20 seconds. For both problems in Figures 7 and 8, both Code2Inv and LoopInvGen time out after one hour without finding a solution.

Polynomial Invariants. Here we provide an example of the higher-order polynomial problems; more precisely, the power summation problems of the form u = Σ_{t=0}^{k} t^d for a given degree d. We found that CLN2INV was able to learn invariants for programs with 10 terms within 2nd degree, but struggled with problems with 20 or more terms of 3rd degree or higher. Table 4 summarizes these results.
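Stepping back to the Gaussian-like equality predicate from Appendix G above, a quick numeric illustration of its sharpening as σ → 0 (the values below are arbitrary):

import numpy as np

S_eq = lambda t, u, sigma: np.exp(-((t - u) ** 2) / (2 * sigma ** 2))

d = np.array([0.0, 0.5, 2.0])                 # distances |t - u|
for sigma in (2.0, 0.5, 0.1):
    print(sigma, S_eq(d, 0.0, sigma))         # only t = u keeps truth value 1 as sigma -> 0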
Consider the example loop in Figure 9, which computes the sum of the first k cubes. We know this sum has the closed-form solution Σ_{t=0}^{k} t³ = k²(k + 1)² / 4. For this problem, we would hope to extract the invariant 4u = t²(t + 1)². However, by naively using the polynomial kernel just as methods like NumInv suggest, we will have 35 monomials of degree at most four over three variables as candidate terms (t³u, t²k², tu²k, ...), and the model must learn to ignore all the terms except u, t⁴, t³, and t². Additionally, by the nature of polynomials, the highest-order term dominates the whole polynomial, so fitting u against t⁴ alone already approximates the data well. We observe that our model will find the correct coefficient for t⁴, but the accuracy degrades on the lower-order terms. The difficulty of learning polynomial invariants using CLNs is an interesting direction for future study.

//pre: t = u = 0 /\ k >= 0
while (t < k) { t++; u += t * t * t; }
//post: 4u == k ** 2 * (k + 1) ** 2
Figure 9: Pseudocode for the polynomial invariant problem
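The count of 35 candidate monomials follows from choosing degrees for three variables summing to at most four, i.e. C(3+4, 4) = 35. A quick enumeration, with variable names chosen for illustration, confirms it.

from itertools import product

variables = ("t", "u", "k")   # illustrative names matching the example
max_degree = 4

monomials = [
    degs for degs in product(range(max_degree + 1), repeat=len(variables))
    if sum(degs) <= max_degree
]
print(len(monomials))  # 35 candidate terms, e.g. (3, 1, 0) -> t^3 * u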
We introduce the Continuous Logic Network (CLN), a novel neural architecture for automatically learning loop invariants and general SMT formulas.
Single cell RNA sequencing (scRNA-seq) technology enables quantifying gene expression profiles of individual cells within a cancer. Dimension reduction methods have been commonly used for cell clustering analysis and visualization of the data. Current dimension reduction methods tend to overly eliminate the expression variations corresponding to less dominant characteristics, such that we fail to find the homogeneous properties of cancer development. In this paper, we propose a new pattern mining and clustering analysis method for scRNA-seq data, namely BBSC, which binarizes the gene expression profile into on/off frequency changes via a Boolean matrix factorization. The low-rank representation of the expression matrix recovered by BBSC increases the resolution in identifying distinct cell types or functions. Application of BBSC on two cancer scRNA-seq datasets successfully discovered both homogeneous and heterogeneous cancer cell clusters. Further findings showed potential in preventing cancer progression.

Cancer, the deadliest threat to humans, has been a puzzle since its characterization in 1775. From once being considered contagious to today's cancer immunotherapy, modern medicine continues to evolve in tackling this problem. And yet, not enough to make a decisive difference: 1,762,450 people were diagnosed with cancer and 606,880 died in 2018. The development of single cell RNA sequencing (scRNA-seq), which measures each single cell in a cancer tissue over more than 20,000 dimensions of genes (features), has pictured the hologram of cancer and its micro-environment at high resolution. As illustrated in Figure 1A, the classic analysis pipeline applies a linear (PCA) or non-linear (t-SNE) dimension reduction to the high-dimensional input data, and the loadings of the top bases are further used for cell clustering and visualization.

Figure 1: Classic analysis pipeline for scRNA-seq data and melanoma example

Cancer cell heterogeneity hampers therapeutic development. We use the melanoma dataset as an example. Cells in a scRNA-seq dataset always come from multiple crossed conditions, such as type of cancer, patient of origin, and cell type. By analyzing the melanoma scRNA-seq data with the classic pipeline, we differentiated the cell type of each cell in its cancer micro-environment (CME) (Figure 1B). All cell types other than cancer cells are constituted by multiple patients (Figure 1C), validating the accuracy of the classic pipeline in cell type identification. For cancer cells, in contrast, each patient forms a distinct cluster (highlighted in shadow), suggesting confounding patient-wise heterogeneity. A similar phenomenon also exists in breast cancer and head and neck cancer. On the other hand, with the medical industry being investment-heavy, the uniqueness of each cancer patient contradicts its general principle of broadly applicable therapies. In the underlying kinetic model of transcription, the burst frequency f follows a beta distribution accounting for the collective effect of the probability of shifting expression from off to on (k_on) and from on to off (k_off). y denotes the true expression of gene i inside cell j, and x is the observation of y with Gaussian error. A recent study revealed that, regulated by enhancers, burst frequency f is the major facilitator of the cell-type-specific gene expression landscape. Though f and k_size cannot be precisely fitted from our observed data, since y follows a Poisson distribution whose rate is the product of k_size and f, we can still capture the most significant frequency changes across different cells.
That is, we can infer whether f is above or equal to zero, corresponding to expression/no-expression of the gene, from our observed data. Exploiting this property, we propose the following approximate gene expression bi-state model. F denotes a latent binary matrix of f, considered a low-rank representation of k different cell types and generated as the Boolean product of two binary matrices plus a Boolean flipping error, F = (A_{n×k} ⊗ B_{k×m}) ⊕ E. Y denotes the true quantitative expression level generated from F, and X is a measure of Y with i.i.d. Gaussian error. Our approach approximates Y by the Hadamard product between X and Â_{n×k} ⊗ B̂_{k×m}, i.e. Ŷ = X ∘ (Â_{n×k} ⊗ B̂_{k×m}), where Â_{n×k} and B̂_{k×m} are the estimates of A_{n×k} and B_{k×m}.

Bi-state and Boolean matrix factorization for scRNA-seq data (BBSC). In light of this, we developed a novel scRNA-seq pattern mining and analysis pipeline, namely BBSC (Figure 2), by implementing a data binarization process for the inference of ON/OFF bi-state expression patterns. In addition, we propose a fast binary matrix factorization (BMF) method, namely PFAST, adapted to the large scale of scRNA-seq data. BBSC can be easily combined with the classic dimension-reduction-based analysis procedure. Application of BBSC on scRNA-seq data from head and neck cancer and melanoma successfully revealed cancer homogeneity and hence increased the sensitivity in identifying subtypes of cells. In addition, cancer cell clusters expressing epithelial-mesenchymal transition (EMT) markers were specifically identified by BBSC in the head and neck cancer study; these consist of cancer cells from different patient samples, suggesting that heterogeneous cancer cells may adopt a similar strategy in the metastasis process. We summarize our contributions as follows:

• We constructed a scRNA-seq analysis pipeline, BBSC, for retrieving cancer homogeneity properties. BBSC is by far the first analysis pipeline accounting for the fundamental interplay between cell type and gene expression in the analysis of scRNA-seq data.
• As a major component of the BBSC pipeline, we propose a fast and efficient BMF algorithm, PFAST, adapted to the large scale of scRNA-seq data.
• In the analysis of the head and neck cancer data, BBSC identified that cancer cells may adopt similar strategies in metastasis. This finding could be applied to prevent cancer progression.

So far, two strategies have been used to optimize the classic pipeline for scRNA-seq data analysis: using extra information to supervise the dimension reduction, such as CITE-seq and REAP-seq data combining scRNA-seq with additional protein information, or a recent work maximizing similarity with bulk RNA-seq data for scRNA-seq imputation; and limiting analysis to the genes known to be related to desired biological features. Both strategies require substantial prior information that is either expensive or unsuitable for studying biological characterization. In this paper, we developed a new strategy rooted in the perspective that differences in cell types and physiological states correspond to different bi-state frequency patterns, which can be retrieved effectively by Boolean matrix factorization. Following Boolean algebra, BMF decomposes a binary matrix as the Boolean product of two lower-rank binary matrices and has revealed its strength in retrieving information from binary data.
Due to the NP-completeness of the BMF problem, several heuristic solutions have been developed, among which two series of works are most frequently utilized. One is the ASSO algorithm, which first generates potential column bases from row-wise correlation and then adopts a greedy search over the generated bases for the BMF fitting. The second is the PANDA algorithm, which aims to identify the top 1-enriched submatrices in a binary matrix amid noise. In each iteration, PANDA excludes the current fitting from the input matrix and retains a residual matrix for further fitting. More recently, Bayesian inference has entered this field: one line of work retrieves patterns from a factor-graph model by deriving the MAP estimate using message passing (denoted MP), and another, OrMachine, provides full probabilistic inference for binary matrices. While ASSO and PANDA are regarded as baselines in BMF, MP and OrMachine represent state-of-the-art performance.

As shown in Figure 2, we implemented data binarization and the PFAST algorithm ahead of a regular dimension-reduction-based analysis, which together form a new analysis pipeline, namely BBSC. BBSC first binarizes the input data via the on/off expression states of each gene. The approximated matrix, namely the recovered matrix, is further constructed as the Hadamard product of the original expression matrix and the BMF-fitted binary matrix. Regular dimension reduction and cell clustering analysis is then conducted on the recovered matrix.

Figure 3: Inferring F from scRNA-seq data

To determine whether a gene is truly expressed is to examine whether X_ij is ON. Empirically, we assume the lowest non-zero expression value of each gene approximates the noise distribution. Since a type I error is far more damaging than a type II error in biological experiments, we utilize the 95% quantile of the noise distribution as the threshold of the ON expression state, i.e., gene expression above the threshold is considered as f > 0, while expression below the threshold is considered the OFF state, i.e., f = 0. We applied this binarization procedure on two high-quality scRNA-seq cancer datasets of head and neck cancer (Figure 3A) and melanoma (Figure 3B). To justify the ON/OFF threshold computed in this way, we compared the representation of the data in the lower dimension by the overall silhouette score, which measures the similarity of each data point to its own cluster compared to others; the overall silhouette score represents the goodness of the clustering. Note that cell cluster information is retrieved directly from the original papers. In both datasets, the binarization approach significantly increased the performance of the cluster representation, suggesting that our binarization removes true noise while maintaining the biological information.

We developed a fast and efficient BMF algorithm, namely PFAST, to cope with the large scale of modern data of interest. PFAST follows the general framework of the PANDA algorithm. In each iteration, PANDA has two main sub-functions: core pattern discovery (Core) and extension of a core pattern (Core ext). Core finds the most enriched square of 1s in the current residual matrix; Core ext expands the generated core patterns with areas not yet included. To find the most precise patterns amid noise, PANDA calculates a global loss at each step. Though PANDA only works on the residual matrix in each iteration, it still involves already generated patterns when calculating the loss.
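The thresholding rule just described reduces to a small amount of code. A minimal sketch follows, assuming the per-gene lowest non-zero values are pooled into a single Gaussian noise model (the text is ambiguous on whether the threshold is global or per-gene); all function and variable names here are ours.

import numpy as np
from scipy.stats import norm

def binarize_expression(X):
    """Binarize a genes-by-cells matrix X into ON/OFF states.

    For each gene, the smallest non-zero value approximates a draw from
    the noise distribution; the pooled values give a Gaussian noise model
    whose 95% quantile serves as the ON threshold.
    """
    noise_obs = []
    for g in range(X.shape[0]):
        nz = X[g][X[g] > 0]
        if nz.size:
            noise_obs.append(nz.min())
    mu, sd = norm.fit(np.array(noise_obs))
    threshold = norm.ppf(0.95, loc=mu, scale=sd)   # 95% noise quantile
    F = (X > threshold).astype(np.uint8)           # f > 0 vs. f = 0
    return F, threshold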
This look-back property and global loss calculation play a major role in decomposing noisy binary data. However, the associated computational pressure makes PANDA inapplicable to large-scale scRNA-seq data. Fortunately, during our binarization process 95% of the noise has been eliminated, which permits a more aggressive greedy binary pattern mining such as PFAST. Unlike PANDA, PFAST only considers the loss at a local scale. Moreover, PFAST abolishes the look-back property and only considers the loss decrease for the current pattern. Taken together, PFAST is an aggressive, lightweight BMF algorithm. Each iteration of PFAST has a computational complexity of O(mn). Like PANDA, PFAST works iteratively on the residual matrix not yet covered by any identified pattern until hitting the convergence criteria. The choice of convergence criteria can be modified for different needs; popular criteria are identifying the top k patterns or covering a certain proportion of the non-zero values in the matrix. In pseudocode terms, PFAST takes a binary matrix F and thresholds t and τ as inputs and outputs A ∈ {0,1}^{n×k} and B ∈ {0,1}^{k×m}; its subroutine PFAST core takes the residual matrix F_r, sorts rows by their row-wise sums, and greedily grows a single pattern a ∈ {0,1}^n, b ∈ {0,1}^m.

Since OrMachine has been deprecated, we compared the performance of PFAST with ASSO, PANDA, and MP on simulated datasets. We simulated binary matrices X_{n×m} = U_{n×k} ⊗ V_{k×m}, where each element of U and V follows an identical Bernoulli random variable. In the simulation, we set n = m = 1000, k = 5, and two signal levels p = 0.2/0.4, corresponding to sparse and dense matrices. We compared the performance with three criteria: reconstructed error, sparsity, and time cost. Specifically, reconstructed error measures the overall fitting of each method, and sparsity measures the parsimony of the pattern matrices; a sketch of the definitions is given below. Intuitively, a good binary matrix factorization should have small reconstructed error and a proper sparsity level. To the best of our knowledge, the conditions that guarantee a unique solution of the BMF problem have not been theoretically derived; thus we do not directly compare the factorized and true pattern matrices, i.e., U vs. A* and V vs. B*, where A* and B* denote the pattern matrices decomposed by the different algorithms. Note that ASSO and PFAST require one additional parameter as a standard input. To achieve a fair comparison, we tested different parameters for each method and used the parameter with the best performance. The convergence criteria for all methods were set as: (i) 5 patterns have been identified, corresponding to the true rank of the simulated matrices, or (ii) the identified patterns already cover 95% of the non-zero values. All experiments ran on the same laptop with an i7-7600U CPU and 16 GB memory. We conducted the evaluation 10 times; detailed results are shown in Figure 4.

Comparing to ASSO, PANDA, and MP, our analysis showed that PFAST achieved superior performance on both sparse and dense matrices. The running time of PFAST is significantly lower than that of all other methods. We also observed better convergence of PFAST. ASSO tended to find the most inclusive patterns, so it usually converged with very few dense patterns. PANDA was designed to identify significant patterns amid noise.
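The formal definitions of the two evaluation metrics did not survive extraction, so the sketch below implements one standard choice consistent with the prose: reconstructed error as the fraction of entries mis-fitted by the Boolean product, and sparsity as the fraction of non-zero entries in the pattern matrices. These exact formulas are our assumption, not a quotation of the paper.

import numpy as np

def boolean_product(A, B):
    # Boolean matrix product: (A (x) B)_ij = OR_k (A_ik AND B_kj)
    return ((A.astype(int) @ B.astype(int)) > 0).astype(np.uint8)

def reconstructed_error(X, A, B):
    # Assumed definition: fraction of entries of X mis-fitted by A (x) B.
    return float(np.mean(X != boolean_product(A, B)))

def sparsity(A, B):
    # Assumed definition: fraction of non-zero entries in A and B.
    return float((A.sum() + B.sum()) / (A.size + B.size))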
Its low tolerance to noise caused a relatively slow pace of convergence. MP revealed its robustness in fitting binary data; however, it has the highest computational cost of the compared methods. The performance of PFAST demonstrated its balanced computational cost and fitting accuracy: with a significant improvement in speed, PFAST still maintains competitive fitting quality.

We applied the classic t-SNE-based dimension reduction and BBSC analysis on the head and neck cancer and melanoma datasets, as detailed below. For both datasets, we recovered the bi-state model of the data by binarizing the expression matrix into ON/OFF expression states with the 95% Gaussian noise quantile. PFAST was applied on the binary matrices with the threshold set to 0.6. The choice of convergence criteria can vary according to different needs. Here, we set convergence as (1) the top 10 patterns have been identified, or (2) 40% of the non-zero values have been recovered. The rationale is that scRNA-seq data is overall sparse: it usually takes many patterns to achieve a small reconstructed error, yet later-discovered patterns introduce more bias, as they are more likely to relate to factors other than cell type. Empirically, the top 10 patterns and the 40% cutoff achieve better cell type identification. In the head and neck cancer and melanoma analyses, this resulted in 5 and 10 patterns, respectively. In both analysis pipelines, we conducted dimension reduction using t-SNE with perplexity set to 30 and 20,000 max iterations. It is noteworthy that no cell clustering was performed in this analysis: all cell type annotations and patient information were retrieved directly from the original papers.

As illustrated in Figure 5A,E, the 2D embedding obtained from the classic pipeline separates cells well by their phenotypic types: fibroblasts, T cells, B cells, myeloid cells, cancer cells, and others form distinct individual clusters. Further analysis of the association between cell groups and patient information confirmed that immune and stromal cells of the same type from different patients form one cell group, while the cancer cells are grouped by specific patient over the 2D embedding (Figure 5B,F). These observations are consistent with the original works. On the other hand, on the 2D embedding of the BBSC pipeline, cells of different phenotypic types also form distinct groups. Compared to the classic pipeline, the BBSC-recovered data generated more groups of subtypes of fibroblasts, T cells and cancer cells (Figure 5C,G). The split cell groups identified by BBSC show higher association with intra-cancer heterogeneity. We further investigated the association between patient origin and cell group over the 2D embedding of the BBSC data (Figure 5D,H). Interestingly, in both datasets we observed several cell groups, marked with yellow circles, that are constituted by cancer cells from different patients. These cancer cell groups correspond to common sub-populations prevalently shared by cancer tissues across patients, which may suggest hallmark functions developed in disease progression.

To identify the functional characteristics of BBSC-derived cell groups, we checked the differentially expressed genes associated with the cell groups of cancer cells. We first obtained five distinct clusters of the cancer cells over the 2D embedding of the BBSC-recovered data using the k-means method (Figure 6A). Figure 6B illustrates the newly clustered cancer cells in the 2D embedding derived by the classic t-SNE method.
Clusters 1 and 2 are formed by cells from different patients, while clusters 3 to 5 are associated with specific patients. We identified significant differential expression of epithelial-mesenchymal transition (EMT) marker genes among the five clusters (Figure 6C). EMT is regarded as a hallmark event in the metastasis of carcinomas such as head and neck cancer. In this process, cancer cells lose their epithelial properties and become mesenchymal-like cells with higher migratory capability, allowing them to escape the cancer tissue into the circulatory system. Clusters 1 and 2 behave distinctly differently from clusters 3 to 5 on EMT marker genes: cells in clusters 1 and 2 overexpress mesenchymal markers such as CDH3, TGFB1, ITGB6 and VIM, while clusters 3 to 5 overexpress epithelial marker genes such as CDH1, CLDN4, CLDN7, KRT19 and EPCAM. Our analysis clearly demonstrates that BBSC substantially removed inter-cancer heterogeneity, enabling the identification of cancer cells from different patients with common functional characteristics. More importantly, the observation also suggests that though cancer cells are very different in each patient, they adopt a similar strategy in the metastasis process. Targeting the progression strategy revealed in this study may have a large therapeutic impact in preventing cancer progression.

Enabled by the development of single cell technology, we can now observe complicated biological processes like cancer with unprecedented resolution. However, the classic analysis pipeline fails to deliver detailed information: (1) it does not reveal common characteristics of cancer cells across different cancer patients; (2) even when it separates functional cells, it fails to reveal intra-cluster heterogeneity. To solve the above problems, we developed the BBSC analysis pipeline. Rooted in capturing the frequency changes in gene expression, we applied BMF in the feature selection process, which avoids adding new, expensive and potentially noisy information. We applied a tailored binarizing process for each dataset. Moreover, to deal with large-scale tall matrices like scRNA-seq data, we developed a fast and efficient algorithm called PFAST. Beyond its speed in handling large-scale data, it shows high accuracy compared with state-of-the-art BMF algorithms. We applied BBSC on two high-quality cancer studies, head and neck cancer and melanoma. In both datasets, BBSC shatters the big clusters into several sub-clusters, providing a gateway to analyzing intra-cluster heterogeneity. Moreover, BBSC manages to find common cancer sub-clusters in both datasets and decreases the patient-wise heterogeneity that has hindered cancer therapeutic development. We next justified the biological meaning of the BBSC-derived sub-clusters by looking into the cancer sub-clusters in head and neck cancer. By analyzing their detailed expression profiles, we found that the common clusters are in the EMT transition process, indicating that these cancer cells play an important part in cancer metastasis, while the patient-specific clusters are in the early EMT process, indicating that these cells are still in the original cancer micro-environment. These findings first justify the biological importance of the BBSC-derived sub-clusters; secondly, they suggest insightful directions for clinical application.
We can now hypothesize that when cancer cells seek metastasis, they transform into similar states that are common across different patients. The characteristics of the common clusters may serve as targets for preventing cancer metastasis. Furthermore, we validated that the heterogeneity of cancer comes from the original cancer tissue, and BBSC shows promise in deciphering this kind of heterogeneity. Especially in the head and neck cancer study, BBSC distinctly divides cancer cells from the same patient into two sub-clusters. Due to our limited expertise in cancer biology, we did not look closely into this property; however, we believe it would bring insight into the cause of cancer-origin heterogeneity. Overall, BBSC is an efficient and valuable analysis platform for scRNA-seq and other single cell data. It is capable of bringing insightful knowledge to our detailed understanding of complicated biological processes.
Our findings shed light on preventing cancer progression.
While normalizing flows have led to significant advances in modeling high-dimensional continuous distributions, their applicability to discrete distributions remains unknown. In this paper, we show that flows can in fact be extended to discrete events, and under a simple change-of-variables formula not requiring log-determinant-Jacobian computations. Discrete flows have numerous applications. We display proofs of concept under two flow architectures: discrete autoregressive flows enable bidirectionality, allowing for example tokens in text to depend on both left-to-right and right-to-left contexts in an exact language model; and discrete bipartite flows (i.e., with layer structure from RealNVP) enable parallel generation such as exact nonautoregressive text modeling.

There have been many recent advances in normalizing flows, a technique for constructing high-dimensional continuous distributions from invertible transformations of simple distributions BID22 BID25 BID23. Applications for high-dimensional continuous distributions are widespread: these include latent variable models with expressive posterior approximations BID22 BID20 BID12, parallel image generation BID6 BID11, parallel speech synthesis BID19, and general-purpose density estimation BID18. Normalizing flows are based on the change-of-variables formula, which derives a density given an invertible function applied to continuous events. There have not been analogous advances for discrete distributions, where flows are typically thought to not be applicable. Instead, most research for discrete data has focused on building either latent-variable models with approximate inference BID2, or increasingly sophisticated autoregressive models that assume a fixed ordering of the data BID0 BID26. In this paper, we present an alternative for flexible modeling of discrete sequences by extending continuous normalizing flows to the discrete setting. We demonstrate proofs of concept of discrete flows with two architectures:

1. Discrete autoregressive flows enable multiple levels of autoregressivity. For example, one can design a bidirectional language model of text where each token depends on both left-to-right and right-to-left contexts while maintaining an exact likelihood and sampling.
2. Discrete bipartite flows (i.e., with flow structure similar to RealNVP BID6) enable flexible models with parallel generation. For example, one can design nonautoregressive text models which maintain an exact likelihood for training and evaluation.

Bidirectional models. Classically, bidirectional language models have been pursued but require approximate inference BID16. Unlike bidirectional models, autoregressive models must impose a specific ordering, and this has been shown to matter across natural language processing tasks BID7 BID28. Bidirectionality, such as in encoders, has been shown to significantly improve results in neural machine translation BID3. Most recently, BERT has shown bidirectional representations can significantly improve transfer tasks BID4. In this work, discrete autoregressive flows enable bidirectionality while maintaining the benefits of a (tractable) generative model. Nonautoregressive models. There have been several advances for flexible modeling with nonautoregressive dependencies, mostly for continuous distributions BID5 BID11. For discrete distributions, BID21 and BID24 have considered retaining blockwise dependencies while factorizing the graphical model structure in order to simulate hierarchically.
BID8 and BID10 apply latent variable models for fast translation, where the prior is autoregressive and the decoder is conditionally independent. BID13 adds an iterative refinement stage to initial parallel generations. In this work, discrete bipartite flows enable nonautoregressive generation while maintaining an exact density, analogous to RealNVP advances for image generation BID6.

Normalizing flows transform a probability distribution using an invertible function BID25 BID22 BID23. Let x be a D-dimensional continuous random variable whose density can be computed efficiently. Given an invertible function f : R^D → R^D, the change-of-variables formula provides an explicit construction of the induced distribution on the function's output, y = f(x):

p(y) = p(f⁻¹(y)) |det ∂f⁻¹(y)/∂y|.   (1)

The transformation f is referred to as the flow and x is referred to as the base distribution. Composing multiple flows can induce further complex distributions. For an arbitrary invertible f, the determinant of the Jacobian incurs an O(D³) complexity, which is as costly as modeling with a full-rank covariance matrix. Thus, normalizing flows are designed so that the determinant of the flow's Jacobian can be computed efficiently. Here, we review two popular flow transformations.

Autoregressive flows. Autoregressive functions such as recurrent neural networks and Transformers BID26 have been shown to successfully model data across modalities. Specifically, assume a base distribution x ∼ p(x). With µ and σ as autoregressive functions of y, i.e. µ_d = µ_d(y_1, ..., y_{d−1}) and σ_d = σ_d(y_1, ..., y_{d−1}), and σ_d > 0 for all d, the flow computes a location-scale transform BID18:

y_d = µ_d + σ_d · x_d.

The transformation is invertible and, in fact, the inverse can be vectorized and computed in parallel:

x_d = (y_d − µ_d) / σ_d.

In addition to a fast-to-compute inverse, the autoregressive flow's Jacobian is lower-triangular, so its determinant is the product of the diagonal elements, ∏_{d=1}^{D} σ_d. This enables autoregressive flow models to have efficient log-probabilities for training and evaluation.

Bipartite flows. Real-valued non-volume-preserving (RealNVP) flows are another transformation BID6. For some d < D, RealNVP coupling flows follow a bipartite rather than autoregressive factorization:

y_{1:d} = x_{1:d},
y_{d+1:D} = µ + σ ⊙ x_{d+1:D},

where σ and µ are functions of x_{1:d} with σ > 0. By changing the ordering of variables between each flow, the composition of RealNVP flows can learn highly flexible distributions. RealNVP flows have a lower-triangular Jacobian whose determinant is again the product of diagonal elements, ∏_{i=d+1}^{D} σ_i. RealNVP flows are not as expressive as autoregressive flows, as a subset of variables does not undergo a transformation. However, both their forward and inverse computations are fast, making them suitable for generative modeling where fast generation is desired.

Normalizing flows depend on the change-of-variables formula (Equation 1) to compute the change in probability mass under the transformation. However, the change-of-variables formula applies only to continuous random variables. We extend normalizing flows to discrete events. Let x be a discrete random variable and y = f(x), where f is some function of x. The induced probability mass function of y is the sum over the pre-image of f:

p(y = ȳ) = Σ_{x ∈ f⁻¹(ȳ)} p(x),

where f⁻¹(ȳ) is the set of all elements such that f(x) = ȳ. For an invertible function f, this simplifies to

p(y = ȳ) = p(x = f⁻¹(ȳ)).   (4)

It is the same as the continuous change of variables (Equation 1), but without the log-determinant-Jacobian.
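As a concrete check of the location-scale transform, here is a minimal sketch with a fixed toy parameterization standing in for the learned autoregressive network (mu_sigma below is our invention): the forward pass is sequential because µ_d and σ_d depend on previous outputs, while the inverse depends only on y and can be fully parallelized.

import numpy as np

def mu_sigma(y_prev):
    # Toy autoregressive parameterization (stands in for a neural net):
    # mu_d, sigma_d depend only on y_1..y_{d-1}, with sigma_d > 0.
    return 0.5 * np.sum(y_prev), 1.0 + 0.1 * np.sum(np.abs(y_prev))

def forward(x):
    y = np.zeros_like(x)
    for d in range(len(x)):            # inherently sequential
        mu, sigma = mu_sigma(y[:d])
        y[d] = mu + sigma * x[d]
    return y

def inverse(y):
    x = np.zeros_like(y)
    for d in range(len(y)):            # each term uses only y, so this
        mu, sigma = mu_sigma(y[:d])    # loop could run fully in parallel
        x[d] = (y[d] - mu) / sigma
    return x

x = np.random.randn(5)
assert np.allclose(inverse(forward(x)), x)   # the flow is invertible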
Intuitively, the log-determinant-Jacobian corrects for changes to the volume of a continuous space; volume does not exist for discrete distributions, so there is no need to adjust it. Computationally, Equation 4 is appealing as there are no restrictions on f, such as fast Jacobian computations in the continuous case, or tradeoffs in how the log-determinant-Jacobian influences the output density compared to the base distribution.

Next we develop discrete invertible functions. To build intuition, first consider the binary case. Given a D-dimensional binary vector x, one natural function applies the XOR bitwise operator: y_d = µ_d ⊕ x_d, where µ_d is a function of the previous outputs y_1, ..., y_{d−1} and ⊕ denotes XOR.

Example. Let D = 2, where p(x) is defined by the following probability table:

            x2 = 0   x2 = 1
  x1 = 0     0.63     0.07
  x1 = 1     0.03     0.27

The data distribution cannot be captured by a factorized one p(x1)p(x2). However, it can with a flow: set f(x1, x2) = (x1, x1 ⊕ x2); p(x1) with probabilities [0.7, 0.3]; and p(x2) with probabilities [0.9, 0.1]. The flow captures correlations that cannot be captured with the base alone. More broadly, discrete flows perform a multi-dimensional relabeling of the data such that it is easier to model with the base. This is analogous to continuous flows, which whiten the data such that it is easier to model with the base (typically, a spherical Gaussian).

Modulo location-scale transform. To extend XOR to the categorical setting, consider a D-dimensional vector x, each element of which takes on values in 0, 1, ..., K − 1. One can perform location-scale transformations on the modulo integer space,

y_d = (µ_d + σ_d · x_d) mod K.   (5)

Here, µ_d and σ_d are autoregressive functions of y taking on values in 0, 1, ..., K − 1 and 1, ..., K − 1, respectively. For this transformation to be invertible, σ and K must be coprime (an explicit solution for σ⁻¹ is Euclid's algorithm). An easy way to ensure coprimality is to set K to be prime, to mask non-invertible σ values for a given K, or to fix σ = 1. Setting K = 2 and σ = 1, it is easy to see that the modulo location-scale transform generalizes XOR. The idea also extends to the bipartite flow setting: the functions (µ, σ) are set to (0, 1) for a subset of the data dimensions, and are functions of that subset otherwise.

Example. Figure 2 illustrates an example of using flows to model correlated categorical data. Following BID15, the data is drawn from a mixture of Gaussians with 8 means evenly spaced around a circle of radius 2. The output variance is 0.01, with samples truncated to be between −2.25 and 2.25, and we discretize at the 0.05 level. A factorized base distribution cannot capture the data correlations, while a single discrete flow can. (Note the modulo location-scale transform does not make an ordinal assumption. We display ordinal data as an example only for visualization; other experiments use non-ordinal data.)

With discrete flow models, the maximum likelihood objective per datapoint is

log p(y) = log p(f⁻¹(y)).

Table 1: Negative log-likelihoods for the full-rank discrete distribution (lower is better). Columns: autoregressive base / autoregressive flow / factorized base / bipartite flow.
  D = 5,  K = 5:    7.7   7.6   8.0   7.9
  D = 5,  K = 10:  10.7  10.3  11.5  10.7
  D = 10, K = 5:   15.9  15.7  16.6  16.0
Autoregressive flows improve over their autoregressive base. Bipartite flows improve over their factorized base and achieve nats close to an autoregressive distribution while remaining parallel.

Table 2: Negative log-likelihoods on the square-lattice Ising model (lower is better); higher coupling strength corresponds to more spatial correlations.
  Autoregressive Base: 15.05   Autoregressive Flow: 5.81
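The two-dimensional XOR example above can be verified numerically: pushing the factorized base p(x1) = [0.7, 0.3], p(x2) = [0.9, 0.1] through f(x1, x2) = (x1, x1 ⊕ x2) reproduces the probability table. A minimal sketch:

import itertools

p_x1 = [0.7, 0.3]
p_x2 = [0.9, 0.1]

# Push the factorized base through the flow f(x1, x2) = (x1, x1 XOR x2).
p_y = {}
for x1, x2 in itertools.product([0, 1], repeat=2):
    y = (x1, x1 ^ x2)
    p_y[y] = p_y.get(y, 0.0) + p_x1[x1] * p_x2[x2]

print(p_y)  # {(0,0): 0.63, (0,1): 0.07, (1,1): 0.27, (1,0): 0.03}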
In this objective, the flow f has free parameters according to its autoregressive or bipartite network, and the base distribution p has free parameters as a factorized (or itself an autoregressive) distribution. Gradient descent with respect to the base distribution parameters is straightforward. To perform gradient descent with respect to the flow parameters, one must backpropagate through the discrete-output functions µ and σ. We use the straight-through gradient estimator BID1. In particular, the (autoregressive or bipartite) network outputs two vectors of K logits θ_d for each dimension d, one for the location and one for the scale. On the forward pass, we take the argmax of the logits, where for the location,

µ_d = one_hot(argmax(θ_d)).   (6)

Because the argmax operation is not differentiable, we replace Equation 6 on the backward pass with the softmax-temperature function:

µ_d ≈ softmax(θ_d / τ).   (7)

As the temperature τ → 0, the softmax-temperature becomes close to the argmax and the bias of the gradient estimator disappears. However, when τ is too low, the gradients vanish, inhibiting the optimization. Work with the Gumbel-softmax distribution indicates that this approximation works well when the number of classes K < 200 BID14 BID9, which aligns with our experimental settings; we also fix τ = 0.1.

In addition to the experiment in Figure 2, we perform three toy experiments to show the utility of discrete autoregressive flows and discrete bipartite flows. For discrete autoregressive flows, we used an autoregressive Categorical base distribution where the first flow is applied in reverse ordering. (This setup lets us compare its advantage of bidirectionality to the baseline of an autoregressive base with 0 flows.) For discrete bipartite flows, we used a factorized Categorical base distribution where the bipartite flows alternate masking of even and odd dimensions.

A natural experiment is to analyze the expressivity of the flows for an arbitrary discrete distribution. In particular, we sample a true set of probabilities for all D dimensions of K classes according to a Dirichlet distribution of size K^D − 1, α = 1. For the network for both the base and flows, we used a Transformer with 64 hidden units. Table 1 displays negative log-likelihoods (nats) of trained models over data simulated from this distribution. Across the data dimension D and number of classes K, autoregressive flows gain several nats over the autoregressive base distribution, which has no flow on top. Bipartite flows improve over their factorized base and in fact obtain nats competitive with the autoregressive base while remaining fully parallel for generation.

Addition. Following prior work, we examine an addition task: there are two input numbers with D digits (each digit taking K = 10 values), and the output is their sum with D digits (we remove the (D+1)-th digit if it appears). Addition naturally follows a right-to-left ordering: computing the leftmost digit requires carrying the remainder from the rightmost computations. Given an autoregressive base which poses a left-to-right ordering, we examine whether the bidirectionality that flows offer can adjust for wrong orderings. We use an LSTM to encode both inputs, apply 0 or 1 flows on the output, and then apply an LSTM to parameterize the autoregressive base, where its initial state is set to the concatenated two encodings.
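Returning to the straight-through estimator described above, a minimal PyTorch-style sketch: the forward pass uses the one-hot argmax (Equation 6) while gradients flow through the softmax-temperature relaxation (Equation 7). The detach-based composition is a standard implementation device, not a quotation of the paper's code.

import torch
import torch.nn.functional as F

def straight_through_one_hot(logits, tau=0.1):
    """Forward: one_hot(argmax(logits)). Backward: softmax(logits / tau)."""
    soft = F.softmax(logits / tau, dim=-1)                    # Eq. 7
    hard = F.one_hot(logits.argmax(dim=-1),
                     num_classes=logits.shape[-1]).to(soft)   # Eq. 6
    # Value of `hard` in the forward pass, gradient of `soft` backward:
    return (hard - soft).detach() + soft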
All LSTMs use 256 hidden units for D = 10, and 512 for D = 20. For D = 10, an autoregressive base achieves 4.0 nats; an autoregressive flow achieves 0.2 nats (i.e., close to the true deterministic solution over all pairs of 10-digit numbers). A bipartite model with 1, 2, and 4 flows achieves 4.0, 3.17, and 2.58 nats respectively. For D = 20, an autoregressive base achieves 12.2 nats; an autoregressive flow achieves 4.8 nats. A bipartite model with 1, 2, 4, and 8 flows achieves 12.2, 8.8, 7.6, and 5.08 nats respectively.

Ising Model. We examine how bidirectional generative models can be used for learning undirected models. For the base network, we used a single-layer LSTM with 8 hidden units. For the flow network, we used an embedding layer with 8 hidden units. Table 2 displays negative log-likelihoods (nats) of trained models over data simulated from Ising models with varying lattice size and coupling strength. As Ising models are undirected models, the autoregressive base posits a poor inductive bias by fixing an ordering and sharing network parameters across the individual conditional distributions. Over data dimension D and coupling, autoregressive flows perform as well as, or improve upon, autoregressive base models.

We describe discrete flows, a class of invertible functions for flexible modeling of discrete data. Note that our experiments are only toys to show proofs of concept. We are continuing to push these ideas to larger-scale text data. We are also applying discrete inverse autoregressive flows, which enable flexible variational approximations for discrete latent variable models. One open question remains with scaling discrete flows to large numbers of classes: in particular, the straight-through gradient estimator works well for small numbers of classes, such as for character-level language modeling, but it may not work for (sub)word-level modeling where the vocabulary size is greater than 5,000.
We extend autoregressive flows and RealNVP to discrete data.
We present a Deep Neural Network with Spike Assisted Feature Extraction (SAFE-DNN) to improve robustness of classification under stochastic perturbation of inputs. The proposed network augments a DNN with unsupervised learning of low-level features using a spiking neural network (SNN) with spike-timing-dependent plasticity (STDP). The complete network learns to ignore local perturbation while performing global feature detection and classification. The experimental results on CIFAR-10 and an ImageNet subset demonstrate improved noise robustness for multiple DNN architectures without sacrificing accuracy on clean images.

There is growing interest in deploying DNNs in autonomous systems interacting with the physical world, such as autonomous vehicles and robotics. It is important that an autonomous system makes reliable classifications even with noisy data. However, in a deep convolutional neural network (CNN) trained using stochastic gradient descent (SGD), pixel-level perturbation can cause kernels to generate incorrect feature maps. Such errors can propagate through the network and degrade the classification accuracy (Nazaré et al.). Approaches for improving robustness of a DNN to pixel perturbation can be broadly divided into two complementary categories. First, many research efforts have developed image de-noising (or filtering) networks that can pre-process an image before classification, but at the expense of additional latency in the processing pipeline. De-noising is an effective approach to improve accuracy under noise but can degrade accuracy for clean images. Moreover, de-noising networks trained on a certain noise type do not perform well if a different noise structure is experienced during inference. Advanced de-noising networks are capable of generalizing to multiple levels of a noise type and are effective for different noise types, but the high complexity of these networks makes them less suitable for real-time applications and lightweight platforms with limited computational and memory resources.

An orthogonal approach is to develop a classification network that is inherently robust to input perturbations. Example approaches include training with noisy data, introducing noise to network parameters during training, and using pixel-level regularization (Nazaré et al.). These approaches do not change the processing pipeline or increase computational and memory demand during inference. However, training-based approaches to design robust DNNs also degrade classification accuracy for clean images and, more importantly, are effective only when the noise structure (and magnitude) during training and inference closely match. Therefore, a new class of DNN architecture is necessary for autonomous systems that is inherently resilient to input perturbations of different types and magnitudes without requiring training on noisy data, as well as computationally efficient.

Toward this end, this paper proposes a new class of DNN architecture that integrates features extracted via unsupervised neuro-inspired learning and supervised training. Neuro-inspired learning, in particular a spiking neural network (SNN) with spike-timing-dependent plasticity (STDP), is an alternative, unsupervised approach to learning features in input data. However, the classification accuracy of an STDP-learned SNN for complex datasets is much lower than that of a DNN.
The fundamental premise of this paper is that augmenting the feature space of a supervised (trained) DNN with features extracted by an SNN via STDP-based learning increases robustness of the DNN to input perturbations. We argue that stochastic gradient descent (SGD) based back-propagation in a DNN enables global learning between low-level pixel-to-pixel interactions and high-level detection and classification. On the other hand, STDP performs unsupervised local learning and extracts low-level features under spatial correlation. By integrating features from global (supervised training) and local (STDP) learning, the hybrid network "learns to ignore" locally uncorrelated perturbations (noise) in pixels while extracting the correct feature representation from the overall image. Consequently, hybridization of SGD and STDP enables robust image classification under noisy input while preserving the accuracy of the baseline DNN for clean images.

We present a hybrid network architecture, referred to as Spike Assisted Feature Extraction based Deep Neural Network (SAFE-DNN), to establish the preceding premise. We develop an integrated learning/training methodology to couple the features extracted via neuro-inspired learning and supervised training. In particular, this paper makes the following contributions:

• We present a SAFE-DNN architecture (Figure 1) that couples STDP-based robust learning of local features with SGD-based supervised training. This is achieved by integrating a spiking convolutional module within a DNN pipeline.
• We present a novel frequency-dependent stochastic STDP learning rule for the spiking convolutional module, demonstrating local competitive learning of low-level features. The proposed learning method makes the features extracted by the spiking convolutional module robust to local perturbations in the input image.
• We develop a methodology to transform the STDP-based spiking convolution to an equivalent CNN. This is achieved by using a novel special activation unit (SAU), a non-spiking activation function, that facilitates integration of the SNN-extracted features within the DNN, thereby creating a single fully-trainable deep network. The supervised (SGD-based) training is performed in that deep network after freezing the STDP-learnt weights in the spiking CNN module.

We present implementations of SAFE-DNN based on different deep networks, including MobileNet, ResNet and DenseNet, to show the versatility of our network architecture. Experiments are conducted on CIFAR-10 and an ImageNet subset considering different types of noise, including Gaussian, Wald, Poisson, salt-and-pepper, and adversarial noise, demonstrating robust classification under input noise. Unlike training-based approaches, SAFE-DNN shows improved accuracy for a wide range of noise structures and magnitudes without requiring any prior knowledge of the perturbation during training and inference, and does not degrade the accuracy for clean images (it even shows marginal improvement in many cases). SAFE-DNN complements, and can be integrated with, de-noising networks for input pre-processing. However, unlike de-noising networks, SAFE-DNN has negligible computation and memory overhead and does not introduce new stages in the processing pipeline. Hence, SAFE-DNN is an attractive architecture for resource-constrained autonomous platforms with real-time processing. We note that SAFE-DNN differs from deep SNNs that convert a pre-trained DNN to an SNN.
Such networks function as a spiking network during inference to reduce energy; however, the learning is still based on supervision and back-propagation. In contrast, SAFE-DNN hybridizes STDP and SGD during learning but creates a single hybrid network operating as a DNN during inference.

Spiking neural networks use biologically plausible neuron and synapse models that can exploit the temporal relationship between spiking events. There are different models developed to capture the firing pattern of real biological neurons. We choose the leaky integrate-and-fire (LIF) model in this work, whose membrane dynamics are governed by parameters a, b and c that control the neuron dynamics, with I the sum of current from all synapses that connect to the neuron. In an SNN, two neurons connected by one synapse are referred to as the pre-synaptic neuron and the post-synaptic neuron. The conductance of the synapse determines how strongly the two neurons are connected, and learning is achieved by modulating the conductance following an algorithm named spike-timing-dependent plasticity (STDP). With the two operations of STDP, long-term potentiation (LTP) and long-term depression (LTD), an SNN is able to extract the causality between spikes of two connected neurons from their temporal relationship. More specifically, LTP is triggered when the post-synaptic neuron spikes closely after a pre-synaptic neuron spike, indicating a causal relationship between the two events. On the other hand, when a post-synaptic neuron spikes before the pre-synaptic spike arrives, or without receiving a pre-synaptic spike at all, the synapse goes through LTD. For this model, the magnitude of modulation is determined by Equation 2, in which ∆G_p is the magnitude of LTP actions, ∆G_d is the magnitude of LTD actions, and α_p, α_d, β_p, β_d, G_max and G_min are parameters tuned for specific network configurations.

The gradient-descent-based weight update process in a DNN computes the new weight as W = W − η∇L, where the gradient of the loss function L is taken with respect to the weights. Taking cross-entropy loss as an example for L, the weight optimization of element i is described by

W_i ← W_i − η ∂/∂W_i ( −Σ_{n=1}^{N} y_n log ŷ_n ),   (3)

where η is the rate for gradient descent, N is the number of classes, y_n is a binary indicator for the correct label of the current observation, and ŷ_n is the predicted probability of class n by the network. For Equation 3, the gradient is derived based on the output prediction probabilities ŷ and the ground truth. Such information is available only at the output layer. To generate the gradient, the output prediction (or error) has to be back-propagated from the output layer to the target layer using the chain rule. As ŷ = g(W, X), with g being the logistic function and X the input image, the prediction probabilities are the outcome of the entire network structure. Consider the low-level feature extraction layers in a deep network. Equation 3 suggests that the gradient of the loss with respect to a parameter is affected by all pixels in the entire input image. In other words, back-propagation makes the weight update sensitive to non-neighboring pixels. This facilitates global learning and improves the accuracy of higher-level feature detection and classification. However, the global learning also makes it difficult to strongly impose local constraints during training. Hence, the network does not learn to ignore local perturbations during low-level feature extraction, as it is trained to consider the global impact of each pixel for accurate classification.
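Since the LIF equation itself did not survive extraction, the sketch below uses a common textbook form of leaky integrate-and-fire dynamics purely for illustration; it is not the paper's exact parameterization (the roles of a, b and c are therefore approximated by generic constants).

import numpy as np

def simulate_lif(I, dt=1.0, tau=20.0, v_rest=0.0, v_th=1.0, v_reset=0.0):
    """Generic leaky integrate-and-fire neuron (illustrative form only):
    tau * dv/dt = -(v - v_rest) + I(t); spike and reset when v >= v_th."""
    v, spikes = v_rest, []
    for t, i_t in enumerate(I):
        v += dt / tau * (-(v - v_rest) + i_t)  # leaky integration
        if v >= v_th:                          # threshold crossing
            spikes.append(t)
            v = v_reset                        # reset after spike
    return spikes

# Stronger input current -> higher output spike frequency.
print(len(simulate_lif(np.full(200, 1.2))),
      len(simulate_lif(np.full(200, 2.0))))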
This means that during inference, although a noisy pixel is an outlier from the other pixels in the neighbourhood, a DNN must consider that noise as signal while extracting low-level features. The resulting perturbation from pixel-level noise propagates through the network and degrades the classification accuracy.

The preceding discussion suggests that, to improve robustness to stochastic input perturbation (noise), the low-level feature extractors must learn to consider local spatial correlation. The local learning will allow the network to more effectively "ignore" noisy pixels while computing the low-level feature maps and inhibit propagation of input noise into the DNN pipeline. The motivation behind SAFE-DNN comes from the observation that STDP in an SNN enables local learning of features. Compared to a conventional DNN, SNN conductance is not updated through gradient descent depending on back-propagation of a global loss. Consider a network with one spiking neuron and n connected input synapses, a spiking event of the neuron at time t_spike, and the timing of the closest spikes from all input spike trains T_input. The conductance modulation is driven by the spike timing difference ∆t_input = t_spike − T_input, where r is the magnitude function (Equation 2) and p is the modulation probability function (Equation 5). The value of t_spike is a result of the neuron's response to the collective sum of input spike trains in one kernel. Hence, the modulation of the weight of each synapse in an SNN depends only on the other input signals within the same (local) receptive field. Moreover, as the correlation between the spike patterns of neighboring pre-synaptic neurons controls and causes the post-synaptic spike, STDP helps the network learn the expected spatial correlation between pixels in a local region. During inference, if the input image contains noise, the intensity of an individual pixel can be contaminated, but within close spatial proximity the correlation is better preserved. As the SNN has learned to respond to local correlation rather than to individual pixels, the neuron's activity experiences less interference from local input perturbation. In other words, the SNN "learns to ignore" local perturbations, and hence the extracted features are robust to noise.

4.1 NETWORK ARCHITECTURE

Fig. 1 (a) shows an illustrative implementation of SAFE-DNN. The network contains spiking layers placed contiguously to form the spiking convolution module, along with conventional CNN layers. The spiking convolution module is placed at the front to enable robust extraction of local and low-level features. Further, to ensure that the low-level feature extraction also considers global learning, which is the hallmark of gradient back-propagation as discussed in Section 3, we place several conventional CNN layers of smaller size in parallel with the spiking convolution module; this is called the auxiliary CNN module. The output feature maps of the two parallel modules are maintained to have the same height and width, and concatenated along the depth to be used as the input tensor to the remaining CNN layers, referred to as the main CNN module. The main CNN module is responsible for higher-level feature detection as well as the final classification, and can be designed based on existing deep learning models. The concatenation of features from the auxiliary CNN and the spiking convolutional module helps integrate global and local learning.
The first convolution layer and the following block of the original network architecture are dropped, and the remaining layers are used as the main CNN module of SAFE-MobileNetV2. We show that SAFE-DNN is a versatile network by testing three configurations in this work, with the main CNN module based on MobileNetV2, ResNet101 and DenseNet121, respectively. The storage and computational complexity of the networks are shown in Table 1. It can be observed that the SAFE-DNN implementations do not introduce a significant overhead over the baseline networks.

In the dynamical system of an SNN, neurons transmit information in the form of spikes, which are temporally discrete events spread across multiple simulation time steps. This requires the input signal intensity to be converted to spike trains, and a number of time steps for neurons to respond to the input stimulus. Such a mechanism is different from that of a conventional DNN, which takes only one time step for data to propagate through the network. For this reason, the native SNN model cannot be used directly in the spiking convolution module of SAFE-DNN. Two potential solutions are running multiple time steps for every input, or adapting the spiking convolution module to a single-time-step response system. Since the first slows down both training and inference by at least one order of magnitude, we choose the latter.

Training Process. We separate STDP-based learning and DNN training into two stages. In the first stage, the spiking convolution module operates in isolation and learns all images in the training set without supervision. The learning algorithm follows our novel frequency-dependent STDP method described next in Section 4.2. In the second stage, the network parameters are first migrated to the spiking convolution module of SAFE-DNN. The network building blocks of the spiking convolutional module go through a conversion process shown in Fig. 1 (b): the input-signal-to-spike-train conversion is dropped, and the conductance matrix is re-scaled for use in the new building block. Batch normalization is inserted after the convolution layer. In order to preserve the non-linear property of spiking neurons, a special activation unit (SAU) is designed to replace the basic spiking neuron model; details about the SAU are discussed later in Section 4.3. Once the migration is completed, the entire SAFE-DNN is trained fully using statistical methods, while the weights in the spiking convolution module are kept fixed to preserve the features learned by the SNN. Network inference is performed using the network architecture created during the second stage of training, i.e., instead of the baseline LIF, the SAU is used for modeling neurons.

Frequency-dependent stochastic STDP. The STDP algorithm discussed in Section 2 captures the basic exponential dependence of synaptic behavior on timing, but does not address the associative potentiation issue in STDP. Associativity is a temporal specificity such that when a strong (in the case of our SNN model, more frequent) input and a weak (less frequent) input into one neuron induce a post-synaptic spike, a subsequent conductance modulation process is triggered equally for both. In the context of STDP-based SNNs, associativity can cause erroneous conductance modulation if unaccounted for (She et al. (2019b)). Therefore, we propose a frequency-dependent (FD) stochastic STDP that dynamically adjusts the probability of LTP/LTD based on the input signal frequency.
In this algorithm, the LTP and LTD probabilities are exponential functions of the spike timing difference with frequency-dependent time constants τ_p and τ_d. ∆t is determined by subtracting the arrival time of the pre-synaptic spike from that of the post-synaptic spike (t_post − t_pre). The probability of LTP, P_p, is higher with smaller ∆t, which indicates a stronger causal relationship; the probability of LTD, P_d, is higher when ∆t is larger. γ_p and γ_d control the peak values of the probabilities. f_max and f_min define the upper and lower limits of the input spike frequency, and f is the input spike frequency. When an input spike originates from a weak input, the probability declines faster than for a strong input. As a result, the pre-synaptic spike time of a weak input needs to be much closer to the post-synaptic spike than that of a strong input to have the same probability of inducing LTP, i.e., the window for LTP is narrower for weak inputs. The same rule applies to LTD behavior. As will be shown in the following section, FD stochastic STDP exhibits better learning capability than conventional STDP.

The architecture of the spiking convolutional module is shown in Fig. 3. This architecture resembles a conventional DNN but with some differences. First, the 8-bit pixel intensity of input images is converted to spike trains with frequency in a range from f_min to f_max. The input spike train matrix connects to spiking neurons in the spiking convolution layer in the same way as a conventional 2D convolution, which also applies for connections from one spiking convolution layer to the next. All such connections are made with plastic synapses following the STDP learning rule. When a neuron in the convolution layer spikes, an inhibitory signal is sent to neurons at the same (x, y) coordinate across all depths in the same layer. This cross-depth inhibition prevents all neurons at the same location from learning the same feature. Overall, this mechanism achieves the competitive local learning of robust low-level features that is crucial to the implementation of SAFE-DNN.

A basic property of a spiking neuron is that a number of spikes need to be received before the neuron reaches the spiking state and emits one spike. In a two-layer network this does not cause a problem, but in a multiple-layer network it prevents spiking signals from traveling deep into the network. Due to this diminishing spiking frequency of multiple-layer SNNs, a layer-by-layer learning procedure is used. When the first layer completes learning, its conductance matrix is kept fixed and cross-depth inhibition is disabled. Next, all neurons in the first layer are adjusted to provide higher spiking frequency by lowering the spiking threshold V_th; the effect of changing V_th is illustrated in Fig. 4. In this way, neurons in the first layer receive input from the input images and produce enough spikes to facilitate the learning behavior of the second layer. The same process is repeated until all layers complete learning.

Consider the spike conversion process of the SNN: given an input value X ∈ [0, 1] and input perturbation ξ, conversion to a spike frequency in the range [f_min, f_max] is applied such that F = Clip{(X + ξ)(f_max − f_min)}. For the duration of the input signal T_input, the total number of spikes received by the recipient is N_spike = F · T_input. Also consider how one spiking neuron responds to input frequency variation, which is shown in Fig. 4: flat regions exist throughout the spiking activity as its characteristic non-linearity.
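The FD stochastic STDP probability functions themselves were lost in extraction, so the sketch below is an illustrative reconstruction that matches the described behavior (P_p peaks at γ_p for small ∆t, and its decay constant shrinks for low-frequency, i.e. weak, inputs) rather than the paper's exact equations; the scaling of tau_eff is our assumption.

import numpy as np

def ltp_probability(dt, f, gamma_p=0.9, tau_p=10.0, f_min=10.0, f_max=100.0):
    """Illustrative frequency-dependent LTP probability.

    dt = t_post - t_pre (> 0 for causal pairs). The decay constant is
    scaled by the normalized input frequency, so a weak (low-f) input has
    a narrower LTP window than a strong (high-f) input, as described.
    """
    strength = (f - f_min) / (f_max - f_min)        # in [0, 1]
    tau_eff = tau_p * (0.2 + 0.8 * strength)        # weak input -> small tau
    return gamma_p * np.exp(-dt / tau_eff)

for f in (20.0, 90.0):                              # weak vs. strong input
    print(f, [round(ltp_probability(dt, f), 3) for dt in (1, 5, 10)])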
Therefore, for |ξ| ≤ δ/(T_input(f_max − f_min)), the perturbation does not cause the receiving neuron to produce extra spikes. While the exact value of δ changes with different input frequencies, it is small only when the original input frequency is near the edges of the non-linearity. This provides the network with extra robustness to small input perturbations. Based on this, we design the Special Activation Unit (SAU) to be a step function. Three baseline networks, MobileNetV2, ResNet101 and DenseNet121, are tested in comparison with SAFE-DNN. We also studied two enhancement methods for the baseline networks, namely, training with noisy input (30 dB) and using an average filter (2×2) for image pre-processing. Note that SAFE-DNN is never trained with noisy images; it is only trained with clean images and only tested with noisy images. Visualization of the embedding space. We demonstrate the improved local feature extraction of FD stochastic STDP by comparing the capability of the deep network to cluster noisy input. Two SAFE-MobileNetV2 networks are trained with FD stochastic STDP and deterministic STDP, respectively, and tested on noisy input with AWGN noise. The embedding space is taken between the two fully connected layers and each color represents one class. As shown in Fig. 6, 20 dB input is used for (i) and (ii), 18 dB for (iii) and (iv) and 15 dB for (v) and (vi). SAFE-MobileNetV2 implemented with features extracted via FD stochastic STDP provides better clustering of the different classes and achieves higher accuracy. Next, we compare the entire SAFE-DNN architecture with alternative designs. First, we consider the standard (baseline) MobileNetV2. The second design, referred to as MobileNetV2-µ, has the same architecture as SAFE-MobileNetV2, but the spiking convolution module is replaced with regular trainable DNN layers. The third, referred to as MobileNetV2-λ, is constructed by replacing the activation functions in the first three layers of a trained MobileNetV2-µ with the SAU (without any re-training). The comparisons with MobileNetV2-µ and MobileNetV2-λ show whether the benefits of SAFE-MobileNetV2 can be achieved by architectural modifications or by the new (SAU) activation function alone, respectively, without local STDP learning. All networks are trained on the CIFAR10 dataset. Fig. 7 shows embedding space visualizations of all four networks with clean and noisy (SNR equal to 25 dB) images. We observe that with clean input images, the vectors in the embedding space of the baseline MobileNetV2 are distributed into ten distinct clusters. As noise is added to the images the clusters overlap, which leads to reduced classification accuracy. On the other hand, SAFE-MobileNetV2 is able to maintain good separation between the feature mappings for each class from no noise to 25 dB. We further observe that clusters for noisy images also heavily overlap for MobileNetV2-µ and MobileNetV2-λ, showing that architectural modification or a spiking activation function alone, without STDP learning, cannot improve the noise robustness of a DNN. Table 2 shows the accuracy of all network variants for CIFAR-10. For the baseline DNNs, noise in images significantly degrades classification accuracy. The networks that are trained with noise (30 dB noise is used during training) show higher robustness to noise, and the improvement is more prominent when the inference noise is at a similar level (30 dB) to the training noise. For clean images the accuracy is degraded.
Average filtering provides an accuracy gain over the original networks in highly noisy conditions (less than 20 dB signal-to-noise ratio (SNR)), but a major performance drop is observed under mild to no noise. This is expected, as average filtering results in a significant loss of feature detail for input images in the CIFAR-10 dataset. For SAFE-DNN implemented with all three DNN architectures, performance in noisy conditions is improved over the original network by an appreciable margin. For example, at 20 dB SNR SAFE-MobileNetV2 maintains good performance while the original network drops below 40% accuracy, a significant (50%) gain. A similar trend can be observed at other noise levels. Compared to networks trained with noise, SAFE-DNN shows similar performance at around 30 dB SNR, while its advantage increases at higher noise levels. Moreover, for clean images the accuracy of SAFE-DNN is on par with the baseline networks. Considering the use-case scenario of autonomous vehicles, we conduct tests on a subset of ImageNet that contains classes related to traffic (cars, bikes, traffic signs, etc.). The subset contains 20 classes with a total of 26,000 training images. The same baseline networks as in the CIFAR10 test are used. Here 25 dB SNR images are used for noise training. The accuracy is shown in Table 3. All networks achieve around 70% top-1 accuracy on clean images. Noise training shows a robustness improvement over the baseline network but still negatively affects clean-image accuracy. In this test the average filter shows less degradation under the no-noise condition than in the CIFAR10 test, due to the higher resolution of the input images. DenseNet121 shows more noise robustness than MobileNetV2 and ResNet101 when noise training is used, while for average filtering ResNet101 benefits the most. SAFE-DNN implementations of all three networks exhibit the same or better robustness over all noise levels. Clean-image classification accuracy is also unaffected. Comparing top-5 accuracy for SAFE-MobileNetV2 and its baselines, as shown in Table 4, SAFE-MobileNetV2 is able to maintain above 80% accuracy even at 5 dB SNR, outperforming all three baselines. Random perturbation. We test SAFE-DNN on three more noise structures: Wald, Poisson and salt-and-pepper (SP). For CIFAR10, the results are shown in Table 5. Wald I has a distribution with µ = 3, scale = 0.3 and for Wald II, µ = 13, scale = 1; Poisson I has a distribution with a peak of 255 and for Poisson II, 75; S&P I has 5% noisy pixels and S&P II has 20%. The noise-trained DNNs for Wald, Poisson, and SP are trained with noisy images generated using distributions Wald I, Poisson I, and SP I, respectively. It can be observed that the SAFE-DNN implementations of all three networks are more noise robust than the baseline and average filtering. The noise-trained networks perform well when the inference noise is aligned with the training noise, but performance drops when the noise levels are not aligned. Moreover, noise-trained networks perform poorly when the training and inference noise types are mis-aligned (results not shown). For the ImageNet subset, networks based on MobileNetV2 are tested. Wald I is a distribution with µ = 5, scale = 0.3 and Wald II is a distribution with µ = 25, scale = 1; Poisson I has a distribution with a peak of 255 and for Poisson II, 45; S&P I has 5% noisy pixels and S&P II has 30%. As previously, the noise-trained networks are trained with noisy images generated from Wald I, Poisson I and SP I.
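For concreteness, here is a minimal sketch of how the four noise structures above can be generated with numpy; the value ranges, the mapping of the paper's "scale" onto numpy's parameters, and the absence of clipping are our assumptions, not the paper's exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def awgn(img, snr_db):
    """Additive white Gaussian noise at a target SNR (dB)."""
    p_signal = np.mean(img.astype(np.float64) ** 2)
    p_noise = p_signal / 10 ** (snr_db / 10)
    return img + rng.normal(0.0, np.sqrt(p_noise), img.shape)

def wald_noise(img, mu=3.0, scale=0.3):
    """Wald (inverse Gaussian) noise, e.g. mu=3, scale=0.3 for Wald I."""
    return img + rng.wald(mu, scale, img.shape)

def poisson_noise(img, peak=255):
    """Poisson noise with the given peak (255 for Poisson I)."""
    return rng.poisson(img.astype(np.float64) / 255.0 * peak) / peak * 255.0

def salt_and_pepper(img, frac=0.05):
    """Corrupt a fraction of pixels (5% for S&P I, 20%/30% for S&P II)."""
    out = img.copy()
    mask = rng.random(img.shape) < frac
    out[mask] = rng.choice([0, 255], size=int(mask.sum()))
    return out
```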
Similar to the previous results, as shown in Table 6, SAFE-MobileNetV2 is more robust to the different noise structures without ever being trained on any noise structure. Adversarial perturbation. We also test SAFE-DNN on adversarial perturbations crafted with a black-box adversarial method. For this test, DNNs trained with the conventional method are used as target networks to generate the perturbed images. The attack method is the fast gradient sign method: X_adv = X + ε · sign(∇_X J(X, y_true)). Here X is the input image, y_true the ground-truth label and ε ∈ [0, 255]. For the source networks that are tested on the perturbed images, DNNs trained with different initializations are used as baselines against the SAFE-DNN implementation of the deep network. As shown in Table 7, SAFE-DNN also shows improved robustness to noise generated via adversarial perturbations. However, we note that the results do not indicate robustness to white-box attacks; integration of SAFE-DNN with adversarial training approaches will be an interesting future work in this direction. In this paper we presented SAFE-DNN, a deep learning architecture that integrates a spiking convolutional network with STDP-based learning into a conventional DNN for robust low-level feature extraction. The experimental results show that SAFE-DNN improves robustness to different input perturbations without any prior knowledge of the noise during training/inference. SAFE-DNN is compatible with various DNN designs and incurs negligible computation/memory overhead. Hence, it is an attractive candidate for real-time autonomous systems operating in noisy environments.
A noise robust deep learning architecture.
465
scitldr
Neural embeddings have been used with great success in Natural Language Processing (NLP), where they provide compact representations that encapsulate word similarity and attain state-of-the-art performance in a range of linguistic tasks. The success of neural embeddings has prompted significant amounts of research into applications in domains other than language. One such domain is graph-structured data, where embeddings of vertices can be learned that encapsulate vertex similarity and improve performance on tasks including edge prediction and vertex labelling. For both NLP and graph-based tasks, embeddings in high-dimensional Euclidean spaces have been learned. However, recent work has shown that the appropriate isometric space for embedding complex networks is not the flat Euclidean space, but a negatively curved hyperbolic space. We present a new concept that exploits these recent insights and propose learning neural embeddings of graphs in hyperbolic space. We provide experimental evidence that hyperbolic embeddings significantly outperform Euclidean embeddings on vertex classification tasks for several real-world public datasets. Embeddings are used to represent complex high-dimensional data in lower-dimensional continuous spaces BID28 BID3. Embedded representations provide three principal benefits over sparse schemes: they encapsulate similarity, are compact, and perform better as inputs to machine learning models BID29. These benefits are particularly important for graph-structured data, where the native representation is the adjacency matrix, which is typically a sparse matrix of connection weights. Neural embedding models are a flavour of embedding where the embedded representation corresponds to a subset of the connection weights in a neural network (see FIG2), which are learned through backpropagation. Neural embedding models have been shown to improve performance on many tasks across multiple domains, including word analogies (a; BID20), machine translation BID31, document comparison, missing edge prediction BID12, vertex attribution BID26, product recommendations BID10 BID1, customer value prediction BID14 BID6 and item categorisation BID2. In all cases, the embeddings are learned without labels (unsupervised) from a sequence of tokens. Previous work on neural embedding models has either explicitly or implicitly (by using the Euclidean dot product) assumed that the embedding space is Euclidean. However, recent work in the field of complex networks has found that many interesting networks, particularly those with a scale-free structure such as the Internet (BID30 BID5) or academic citations (BID8 BID7), can be well described by a geometry which is non-Euclidean, such as hyperbolic geometry. Even more recently, the problem of mapping graphs and datasets to a low-dimensional hyperbolic space has been addressed in BID24 and BID4. Here we use a neural embedding approach based on the Skipgram architecture to find hyperbolic embeddings. There are two reasons why embedding complex networks in hyperbolic geometry can be expected to perform better than in Euclidean geometry. The first is that complex networks exhibit a hierarchical structure. Hyperbolic geometry provides a continuous analogue of tree-like graphs, and even infinite trees have nearly isometric embeddings in hyperbolic space BID11. The second property is that complex networks have power-law degree distributions, resulting in high-degree hub vertices.
All tiles are of constant area in hyperbolic space, but shrink to zero area at the boundary of the disk in Euclidean space. (c) Hub-and-spoke graph. It is impossible to embed this graph in two-dimensional Euclidean space and preserve the properties that all spokes are the same distance from the hub, all spokes are the same distance from each other, and the distance between spokes along the circumference is more than twice the distance to the hub. In hyperbolic space such embeddings exist. FIG1 shows a simple hub-and-spoke graph where each spoke is a distance R from the hub and 2R from each other. For an embedding in two-dimensional Euclidean space it is impossible to reproduce this geometry for more than two spokes. However, in hyperbolic space, large numbers of spokes that satisfy these geometrical constraints can be embedded, because the circumference of a circle expands exponentially rather than polynomially with the radius. The starting point for our model is the celebrated Skipgram architecture (a; b) shown in FIG2. Skipgram is a shallow neural network with three layers: an input projection layer that maps from a one-hot-encoded token to a distributed representation, a hidden layer, and an output softmax layer. Skipgram is trained on a sequence of words that is decomposed into (input word, context word) pairs. The model uses two separate vector representations, one for the input words and another for the context words, with the input representation comprising the learned embedding. The (input word, context word) pairs are generated by running a fixed-length sliding window over a word sequence. Words are initially randomly allocated to vectors within the two vector spaces. Then, for each training word pair, the vector representations of the observed input and context words are pushed towards each other and away from all other words (see FIG2). The model can be extended to network-structured data using random walks to create sequences of vertices. Vertices are then treated exactly analogously to words in the NLP formulation. This was originally proposed as DeepWalk BID26. Extensions varying the nature of the random walks have been explored in LINE BID32 and Node2vec BID12. Contribution. In this paper, we introduce the new concept of neural embeddings in hyperbolic space. We formulate backpropagation in hyperbolic space and show that using the natural geometry of complex networks improves performance in vertex classification tasks across multiple networks. At the same time, BID24 independently proposed a hyperbolic embedding algorithm that has similarities to ours. The key differences are that BID24 try to fit the hyperbolic distance between nodes using cartesian coordinates in the Poincaré disk, whereas we use a modified cosine distance in a spherical hyperbolic coordinate system. Our approach does not require a numerical constraint to prevent points from 'falling off' the edge of the disk and becoming infinitely distant from the others. Hyperbolic geometry emerged through a relaxation of Euclid's fifth geometric postulate (the parallel postulate). In hyperbolic space, there is not just one, but an infinite number of parallel lines that pass through a single point. This is illustrated in FIG1, where every fine line is parallel to the bold, blue line, and all pass through the same point. Hyperbolic space is one of only three types of isotropic space that can be defined entirely by their curvature. The most familiar is flat Euclidean space.
Space with uniform positive curvature has an elliptic geometry (e.g. the surface of a sphere) and space with uniform negative curvature has a hyperbolic geometry, which is analogous to a saddle-like surface. Unlike Euclidean space, in hyperbolic space even infinite trees have nearly isometric embeddings, making the space well suited to model complex networks with hierarchical structure. Additionally, the defining features of complex networks, such as power-law degree distributions, strong clustering and community structure, emerge naturally when random graphs are embedded in hyperbolic space. One of the defining characteristics of hyperbolic space is that it is in some sense larger than Euclidean space; the 2D hyperbolic plane cannot be isometrically embedded into Euclidean space of any dimension, unlike elliptic geometry, where a 2-sphere can be embedded into 3D Euclidean space, etc. The hyperbolic area of a circle or volume of a sphere grows exponentially with its radius, rather than polynomially. This property allows low-dimensional hyperbolic spaces to provide effective representations of data in ways that low-dimensional Euclidean spaces cannot. FIG1 shows a hub-and-spoke graph with four spokes embedded in a two-dimensional Euclidean plane so that each spoke sits on the circumference of a circle surrounding the hub. Each spoke is a distance R from the hub and 2R from every other spoke, but in the embedding the spokes are a distance of R from the hub, yet only R√2 from each other. Complex networks often have small numbers of vertices with degrees that are orders of magnitude greater than the median. These vertices approximate hubs. The distance between spokes tends to the distance along the circumference, s = 2πR/n, as the number of spokes n increases, and so the shortest distance between two spokes is via the hub only when n < π. However, for embeddings in hyperbolic space we get n < π sinh(R)/R, such that an arbitrarily large number of spokes can satisfy the property that they are the same distance from a hub, and yet the path that connects them via the hub is shorter than along the arc of the circle. As hyperbolic space cannot be isometrically embedded in Euclidean space, there are many different representations that each conserve some geometric properties, but distort others. In this paper, we use the Poincaré disk model of hyperbolic space. The Poincaré disk models the infinite two-dimensional hyperbolic plane as a unit disk. For simplicity we work with the two-dimensional disk, but it is easily generalised to the d-dimensional Poincaré ball, where hyperbolic space is represented as a unit d-ball. Hyperbolic distances grow exponentially towards the edge of the disk. The boundary of the disk represents infinitely distant points, as the infinite hyperbolic plane is squashed inside the finite disk. This property is illustrated in FIG1, where each tile is of constant area in hyperbolic space, but rapidly shrinks to zero area in Euclidean space. Although volumes and distances are warped, the Poincaré disk model is conformal. Straight lines in hyperbolic space intersect the boundary of the disk orthogonally and appear either as diameters of the disk, or as arcs of a circle. FIG1 shows a collection of straight hyperbolic lines in the Poincaré disk. Just as shortest paths in spherical geometry appear curved on a flat map, hyperbolic geodesics also appear curved in the Poincaré disk. This is because it is quicker to move close to the centre of the disk, where distances are shorter, than nearer the edge.
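As a quick numerical check of the hub-and-spoke argument, the following sketch compares the two bounds; the hyperbolic bound n < π sinh(R)/R is our reading of the inequality above, and the values of R are arbitrary.

```python
import numpy as np

# Going from one spoke to another via the hub costs 2R; along the circle it
# costs C/n, where C is the circumference. The hub route wins when 2R < C/n.
for R in (1.0, 5.0, 10.0):
    n_euclid = np.pi                   # C = 2*pi*R       ->  n < pi
    n_hyper = np.pi * np.sinh(R) / R   # C = 2*pi*sinh(R) ->  n < pi*sinh(R)/R
    print(f"R = {R:4.1f}: Euclidean bound n < {n_euclid:.1f}, "
          f"hyperbolic bound n < {n_hyper:.1f}")
# The hyperbolic bound grows exponentially with R, so arbitrarily many spokes
# can be closer to the hub than to each other.
```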
In our proposed approach, we will exploit both the conformal property and the circular symmetry of the Poincaré disk. The geometric intuition motivating our approach is that vertices embedded near the middle of the disk can have more 'near' neighbours than they could in Euclidean space, whilst vertices nearer the edge of the disk can still be very far from each other. The distance metric in the Poincaré disk is a function only of the radius. Exploiting the angular symmetries of the model using polar coordinates considerably simplifies the mathematical description of our approach and the efficiency of our optimiser. Points in the disk are x = (r_e, θ), with r_e ∈ [0, 1) and θ ∈ [0, 2π). The distance from the origin, r_h, is given by r_h = 2 artanh(r_e), and the circumference of a circle of hyperbolic radius R is C = 2π sinh R. Note that as points approach the edge of the disk, r_e = 1, the hyperbolic distance from the origin r_h tends to infinity. In Euclidean neural embeddings, the inner product between vector representations of vertices is used to quantify their similarity. However, unlike Euclidean space, hyperbolic space is not a vector space and there is no global inner product. Instead, given points x_1 = (r_1, θ_1) and x_2 = (r_2, θ_2), we define a cosine similarity weighted by the hyperbolic distance from the origin as ⟨x_1, x_2⟩_H = r_{h1} r_{h2} cos(θ_1 − θ_2), where r_{hi} = 2 artanh(r_i). It is this function that we will use to quantify the similarity between points in the embedding. We note that using a cosine distance in this way does lose some properties of hyperbolic space, such as conformality. Our goal is to learn embeddings that perform well on downstream tasks, and the key properties of hyperbolic space that permit this are retained. Trade-offs like this are common in the embeddings literature, such as the use of negative sampling BID22 BID20. We adopt the notation of the original Skipgram paper (a), whereby the input vertex is w_I and the context / output vertex is w_O. The corresponding vector representations are v_{w_I} and v'_{w_O}, which are elements of the two vector spaces shown in FIG2, W and W' respectively. Skipgram has a geometric interpretation, shown in FIG2 for vectors in W'. Updates to v'_{w_j} are performed by simply adding (if w_j is the observed output vertex) or subtracting (otherwise) an error-weighted portion of the input vector. Similar, though slightly more complicated, update rules apply to the vectors in W. Given this interpretation, it is natural to look for alternative geometries that improve on Euclidean geometry. To embed a graph in hyperbolic space we replace Skipgram's two Euclidean vector spaces (W and W' in FIG2) with two Poincaré disks. We learn embeddings by optimising an objective function that predicts context vertices from an input vertex, but we replace the Euclidean dot products used in Skipgram with the similarity defined above. A softmax function is used for the conditional predictive distribution, p(w_O | w_I) = exp(⟨v'_{w_O}, v_{w_I}⟩_H) / Σ_{w=1}^{W} exp(⟨v'_w, v_{w_I}⟩_H), where v_{w_i} is the vector representation of the i-th vertex, primed indicates context vectors (see FIG2) and ⟨·, ·⟩_H is the similarity given above. Directly optimising this is computationally demanding, as the sum in the denominator is over every vertex in the graph. Two commonly used techniques for efficient computation are replacing the softmax with a hierarchical softmax (BID21; a) and negative sampling (BID22; BID20). We use negative sampling as it is faster. We learn the model using backpropagation with Stochastic Gradient Descent (SGD). Optimisation is conducted in polar native hyperbolic coordinates where r ∈ (0, ∞), θ ∈ (0, 2π].
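A minimal numpy sketch of this similarity and of the negative-sampling score it feeds into (described formally in the next paragraph); the similarity formula is implemented as reconstructed above, and the toy points and the two-sample negative set are purely illustrative.

```python
import numpy as np

def r_h(r_e):
    """Hyperbolic distance from the origin: r_h = 2 * artanh(r_e)."""
    return 2.0 * np.arctanh(r_e)

def sim_h(x1, x2):
    """Cosine similarity weighted by hyperbolic distance from the origin,
    for points (r_e, theta) in the Poincare disk."""
    (r1, t1), (r2, t2) = x1, x2
    return r_h(r1) * r_h(r2) * np.cos(t1 - t2)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def neg_sampling_loss(v_in, v_out, v_negs):
    """E = -log sigma(u_O) - sum_j log sigma(-u_j), with u_j = <v_j, v_in>_H:
    the negative-sampling objective used in place of the full softmax."""
    loss = -np.log(sigmoid(sim_h(v_out, v_in)))
    loss -= sum(np.log(sigmoid(-sim_h(v, v_in))) for v in v_negs)
    return loss

# Toy usage: one observed context vertex and two negative samples.
v_in, v_out = (0.3, 0.1), (0.4, 0.2)
print(neg_sampling_loss(v_in, v_out, [(0.7, 2.0), (0.5, 3.0)]))
```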
For optimisation, this coordinate system has two advantages over the cartesian Euclidean system used by BID24. Firstly, there is no need to constrain the optimiser s.t. ‖x‖ < 1. This is important, as arbitrarily moving points a small Euclidean distance inside the disk can equate to an infinite hyperbolic distance. Secondly, polar coordinates result in update equations that are simple modifications of the Euclidean updates, which avoids evaluating the metric tensor for each data point. The negative log-likelihood using negative sampling is E = −log σ(u_O) − Σ_{j: w_j ∈ W_neg} log σ(−u_j), where v_{w_I}, v'_{w_O} are the vector representations of the input and context vertices, u_j = ⟨v'_{w_j}, v_{w_I}⟩_H, W_neg is a set of samples drawn from the noise distribution and σ is the sigmoid function. The first term represents the observed data and the second term the negative samples. To draw W_neg, we specify the noise distribution P_n to be the unigram distribution of the vertices in the input sequence raised to the power 3/4, as in (a). The gradient of the negative log-likelihood w.r.t. u_j is given by ∂E/∂u_j = σ(u_j) − t_j. The derivatives w.r.t. the components of vectors in W' (in natural polar hyperbolic coordinates) are DISPLAYFORM2, such that the Jacobian is ∇_r E = DISPLAYFORM3, where χ = {w_O} ∪ W_neg, η is the learning rate and ε_j is the prediction error defined above. Calculating the derivatives w.r.t. the input embedding follows the same pattern, and we obtain ∂E/∂r_I = Σ_{j: w_j ∈ χ} DISPLAYFORM4. In the corresponding update equations, t_j is an indicator variable s.t. t_j = 1 iff w_j = w_O, and t_j = 0 otherwise. Following optimisation, the vectors are mapped back to Euclidean coordinates on the Poincaré disk through θ_h → θ_e and r_h → tanh(r_h/2). The asymptotic runtimes of the update equations are the same as for Euclidean Skipgram, i.e., the hyperbolic embedding does not add computational burden. In this section, we assess the quality of hyperbolic embeddings and compare them to embeddings in Euclidean spaces. Firstly, we perform a qualitative assessment of the embeddings on a synthetic fully connected tree graph and a small social network. It is clear that embeddings in hyperbolic space exhibit a number of features that are superior to Euclidean embeddings. Secondly, we run experiments on a number of public benchmark networks, producing both Euclidean and hyperbolic embeddings and contrasting the performance of both on a downstream vertex classification task. We provide a TensorFlow implementation and datasets to replicate our experiments in our github repository. To illustrate the usefulness of hyperbolic embeddings we visually compare hyperbolic embeddings with Euclidean plots. In all cases, embeddings were generated using five training epochs on an intermediate dataset of ten-step random walks, one originating at each vertex. FIG3 shows hyperbolic embeddings in the 2D Poincaré model of hyperbolic space, where the circle of radius 1 is the infinite boundary, and Euclidean embeddings in R^2. FIG3 shows embeddings of a complete 4-ary tree with three levels. The vertex numbering is breadth-first, with 1 for the root and 2, 3, 4, 5 for the second level, etc. The hyperbolic embedding has the root vertex close to the origin of the disk, which is the position with the shortest average path length. The leaves are all located in close proximity to their parents, and there are clearly four clusters representing the tree's branching factor.
The Euclidean embedding is incapable of representing the tree structure: adjacent vertices can appear at large distances (such as 1 and 3), while vertices that are maximally separated in the tree can appear close in the embedding (such as 19 and 7). FIG6 shows the 34-vertex karate network, which is split into two factions. FIG6 shows the hyperbolic embedding of this network, where the two factions can be clearly separated. In addition, the vertices in FIG6 are the junior instructors, who are forbidden by the instructor (vertex 1) from socialising with other members of the karate club. For this reason they form a community that is only connected through the instructor. This community is clearly visible in FIG6. The results of our experiments, together with the HyBed and 2D Deepwalk embeddings used to derive them, are shown in FIG8. The vertex colours of the embedding plots indicate different values of the vertex labels. The legend shown in FIG8 applies to all line graphs. The line graphs show macro F1 scores against the percentage of labelled data used to train a logistic regression classifier with the embeddings as features. Here we follow the method for generating multi-label F1 scores described in BID17. The error bars show one standard error from the mean over ten repetitions. The blue lines show HyBed hyperbolic embeddings, the yellow lines give the 2D Poincaré embeddings of BID24, while the red lines depict Deepwalk embeddings at various dimensions. As we use one-vs-all logistic regression with embedding coordinates as features, good embeddings are those that can linearly separate one class from all other classes. FIG8 shows that HyBed embeddings tend to cluster together similar classes so that they are linearly separable from other classes, unlike the Euclidean embeddings.
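A sketch of the evaluation protocol just described (one-vs-all logistic regression on embedding coordinates, macro F1 versus the labelled fraction); the embeddings and labels here are synthetic stand-ins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

def macro_f1(embeddings, labels, labelled_frac, seed=0):
    """Train one-vs-all logistic regression on a fraction of labelled
    vertices and report macro F1 on the remainder."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        embeddings, labels, train_size=labelled_frac, random_state=seed)
    clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
    clf.fit(X_tr, y_tr)
    return f1_score(y_te, clf.predict(X_te), average="macro")

# Toy usage: random 2D "embeddings" for 100 vertices in 3 classes.
rng = np.random.default_rng(0)
print(macro_f1(rng.normal(size=(100, 2)), rng.integers(0, 3, 100), 0.5))
```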
We learn neural embeddings of graphs in hyperbolic instead of Euclidean space
466
scitldr
The International Competition on Knowledge Engineering for Planning and Scheduling (ICKEPS) plays a pivotal role in fostering the development of new Knowledge Engineering (KE) tools, and in emphasising the importance of principled approaches for all the different KE aspects that are needed for the successful long-term use of planning in real-world applications. In this paper, as an exercise in synthesis and for the sake of stimulating thoughts and discussion, we review the format of previous ICKEPS, to suggest alternative formats for future competitions, ideally to motivate someone to step up and organise the next ones. The International Competition on Knowledge Engineering for Planning and Scheduling (ICKEPS) has been running since 2005 as an almost biennial event promoting the development and importance of the use of knowledge engineering (KE) methods and techniques within this area. The aim of the competition series is to foster developments in the knowledge-based and domain modelling aspects of Automated Planning, to accelerate knowledge engineering research, and to encourage the creation and sharing of prototype tools and software platforms that promise more rapid, accessible, and effective ways to construct reliable and efficient Automated Planning systems. The latest competition took place in 2016 (BID3), aimed at on-site domain modelling, and highlighted a number of major issues. Most teams did not use any of the existing KE tools, and thus relied only on their expertise. Second, existing tools do not effectively support cooperation, which is needed to cope with the growing complexity of planning applications. Finally, and more worryingly, the number of participants of ICKEPS is still not very large, especially when compared with the latest edition of the International Planning Competition: this suggests that the planning community underestimates the importance of knowledge engineering, despite its enormous impact on the applicability of domain-independent planning in real-world scenarios. Accidental complexity issues BID2, for instance, can prevent the exploitation of automated planning approaches in complex scenarios, and even an unfortunate ordering of elements in the domain model can adversely affect the performance of planning engines BID7. Given the pivotal role played by ICKEPS in promoting the importance of principled KE approaches and tools, we believe it is important to evolve and adapt its format in order to attract and engage a larger number of participants. In this paper, we review the format of past competitions, in order to highlight weaknesses and strengths from both the organisers' and participants' perspectives. Building on top of this analysis, we suggest some alternative formats that may help future ICKEPS organisers in performing their tasks. It should be noted, though, that the aim of this paper is twofold: to review formats and suggest improvements to ICKEPS, and, more importantly, to make a call for action for organising future competitions focused on KE aspects of planning and scheduling. This section is devoted to describing the formats of past ICKEPS. The first edition of ICKEPS, held in 2005 (BID0), focused on tools for KE.
Any tool that helped in knowledge formulation (the acquisition and encoding of domain structure or control heuristics), planner configuration (fusing application knowledge with a Planning or Scheduling engine), validation of the domain model (for example, using visualisation, analysis, or reformulation) or validation and maintenance of the Planning and Scheduling system as a whole (for example, using plan/schedule visualisation, or automated knowledge refinement) was allowed to take part. The competition included two stages. In the pre-competition stage, the competitors submitted short papers describing the tools. The program committee did light reviewing of the papers, with the goal of evaluating the relevance of the tools, sending feedback to the competitors, and contributing to the overall evaluation. During the on-site competition, the participants gave talks about their systems in a workshop-like arrangement, and then they presented the systems during an open demonstration session. Evaluation. The tools were evaluated by a jury of experts against a set of criteria. The 2007 edition of ICKEPS extended the above format by including an additional simulation stage. A web service including a number of planning and scheduling simulations was made available to participants in the pre-competition stage, to evaluate their tools. Competitors were provided with a short text description of each competition domain, including a description of the simulation API. They used their tools to encode models and submit generated plans for each instance, and received feedback describing the quality of the plans. Evaluation. As in ICKEPS 2005, tools were evaluated by judges taking into account a number of criteria. In 2007, the above-mentioned criteria were extended by also considering aspects related to the simulation: • domain simulation applicability: how well did the competitors address the simulation domains using their tools? How many domains were the simulators tried on? How long did it take competitors to generate valid plans for the domains? How many problem instances were solved? What was the quality of the plans generated? Specific Tools Design narrowed the scope to tools that support a specific aspect of knowledge engineering technology: those that, when given as input a model described in an application-area-specific language, output solver-ready domain models. The rationale was to foster the development of tools that can support a rapid exploitation of automated planning in real-world applications, by leveraging existing planning engines. Evaluation. Evaluation was performed by a board of judges that considered a wide range of criteria, divided into two main classes: criteria focusing on the software engineering aspects of the tools (e.g., robustness, usability, etc.), and criteria focusing on the more traditional Planning and Scheduling elements, such as originality, comprehensiveness, etc. The 2012 edition of ICKEPS included two different tracks: the Design Process Track and the Challenge Track. The former followed the structure of previous ICKEPS, and was focused on the design of both general and specific tools for KE. The newly-introduced challenge track aimed at evaluating the actual usefulness of tools and approaches in tackling complex application domains of Planning and Scheduling. Participants were provided, a few months before the actual competition, with the natural-language specifications of 3 challenging scenarios for planning and scheduling, and had to tackle one off-site.
During the workshop-like demonstration, the participants had to demonstrate the advantage of using their tools/method to produce a model as a solution to the requirements (or a subset thereof) in the specification, along with the plans for the specified scenarios. The evaluation criteria were the same as those used in ICKEPS 2009. This format was introduced in ICKEPS 2016, and included two main stages: on-site modelling and a subsequent demonstration. During the first stage, each team received descriptions of 4 scenarios and had to exploit the available time to generate the corresponding models. Scenarios were not taken from real-world applications of planning, but were designed by the organisers taking inspiration from games or from potential application domains. Participants were free to select the scenarios to tackle, and had no restrictions on the number and type of tools that could be used. The only constraints were on the available time (six hours were given) and on the maximum size of teams: at most four members. The day after, each team had to present, in a 10-minute demonstration, the aspects of the knowledge engineering process they exploited for encoding the scenarios. Specifically, teams were expected to discuss: the division of work among team members, the tools used, key decisions taken during the encoding, and the issues they faced. Table 1 gives an overview of the strengths and weaknesses of the considered ICKEPS formats: for off-site modelling, it is hard to identify suitable domains and models; on-site modelling is more attractive and can allow distilling good practice in KE, but delivers no new tools for the community, and only toy domains can be considered. Evaluation. Evaluation included both qualitative and quantitative aspects, and focused on three elements: • KE tools exploited. This included the list of tools, the KE steps covered by the tools, etc. • Model characteristics. Models were checked in terms of the presence of bugs, number of operators, readability, etc. • Observed planners' performance. Encoded models were tested using a set of planners, in order to extract useful statistics to be used to empirically compare models. The jury of experts was present at the demonstration, and took into account the above-mentioned aspects to award the teams that excelled in all (or in some) of the aspects. In this section we highlight strengths and weaknesses of the formats of past ICKEPS, with the aim of synthesising some suggestions for future competitions. An overview of this analysis is provided in Table 1. ICKEPS based on tool design (either specific or general) have fostered the development of a decent number of KE tools that are exploited by the wider planning community and are extremely helpful for testing planning in real-world applications. The main issue with this format of competition is that, nowadays, quite a significant number of tools is already available, and it is hard to provide innovative general tools. Furthermore, the design and development of such tools and techniques is exceedingly demanding, and this has a strong impact on the number of competitors. In the case of general tools, the comparison can also be cumbersome: tools that support the formulation of models in different languages, or that are aimed at supporting different aspects of KE for planning, can be extremely hard to compare. On the other hand, this sort of competition can be very useful when the focus is on specific aspects of KE or specific languages.
This focus can help in the comparison, and can foster work on some overlooked areas of KE, but may also significantly limit the interest of the community and the number of participants. Off-site modelling and demonstration pose a significant burden on the organisers, because they have to identify a set of application domains, or a specific angle within some previously explored domain, where automated planning has not been applied before, and where it is possible to create models that are challenging from a formulation point of view while at the same time allowing domain-independent planning engines to generate solutions in a reasonable amount of CPU-time. On the other hand, the large amount of time made available to competitors allows more complex cases to be considered than those that can be handled in an on-site competition. Furthermore, the more relaxed setting also fosters the use of principled knowledge engineering approaches and techniques. Intuitively, the on-site modelling format has pros and cons which are quite the opposite of those discussed for off-site modelling competitions. The limited amount of time available to participants for formulating domain models forces the organisers to consider only "toy" examples, and does not allow pushing the boundaries of planning in real-world applications. As observed in ICKEPS 2016, it is usually the case that models are so easy that no KE tool is needed: a text editor is enough to encode reasonably good models. On the plus side, this style of competition, which is inspired by hackathons and similar events, leads to a more "fun" sort of competition, and can also attract people who are not usually interested in KE aspects of planning. This is particularly true for students and young researchers. Furthermore, an analysis of the techniques exploited by participants can also help identify good practice in KE, which can be emphasised and discussed during the demo session or after the competition. Distilling the knowledge obtained by reviewing the formats of past ICKEPS, we found ourselves in the position of suggesting two possible structures for future competitions, which may help to keep alive the interest of the wider ICAPS community in KE aspects, and provide useful data or tools as a tangible heritage. It may be worthwhile to revive formats focusing on what we previously defined as Specific Tool Design. Past ICKEPS exploiting this format considered tools able to translate models between different languages. Taking into account different areas may help to foster the development of tools, and can also help in providing some sort of standards that can be used for future work in the area. An example area, given also the special attention devoted to the topic by the 2019 workshop on Knowledge Engineering for Planning and Scheduling (KEPS) (McCluskey et al. 2003), could be automated domain model acquisition. With a Specific Tool Design focus, it might be possible to specify the expected type and amount of input information as well as the expected output. Consequently, it might be possible to specify metrics that can be used to quantitatively evaluate a particular tool. Notably, the metrics might follow (soft) constraints that can be set according to a specific application domain, where only some types of data can be expected, or according to some more general "usual" conditions. Such a format can motivate the development of tools that might be of critical importance for advancing the state of the art in the area of KE for Planning and Scheduling.
Moreover, with quantifying metrics the tools can be evaluated more objectively, thus mitigating the subjectivity of the judges' assessment. Also, to some extent such a format should reduce the burden on the organisers, as they might focus on a narrower scope for the competition. Another suitable format for future ICKEPS can be obtained by mixing off-site and on-site modelling, aiming at exploiting the strengths of both. A suitable combination may be the following: participants are provided with the specifications of "not so easy" application domains to model in a planning language (e.g. PDDL), and demonstrate that existing domain-independent planning engines can handle given sample planning instances from the domains and provide good-quality solutions in a reasonable amount of CPU-time. In the on-site stage of the competition, participants will be provided with a modified version of the specifications, and will be required to modify the models accordingly. Again, the domain models will be evaluated on given samples of planning instances. The intuition behind the idea is to foster the exploitation of principled KE approaches for formulating domain models, especially in the off-site stage, with a focus on the robustness and maintenance of the models, and possibly to help shape a notion of quality of domain models such that the model amendment in the on-site stage would be manageable. The potential issue for the organising team is to find appropriate and interesting application domains that are both reasonably challenging to formulate and amenable to "not so easy or hard" modifications. Concluding this paper, we believe that there is a strong need to organise the ICKEPS competitions in order to increase awareness of KE techniques, tools and issues in the ICAPS and general AI communities. The success of future ICKEPS competitions (e.g. a considerable increase in the number of participants) can, in consequence, influence the domain-independent AI planning field by making it accessible for use (by planning non-experts) in various application domains. To give some motivation and inspiration for future ICKEPS competitions, we have in this paper provided a review of the formats of past ICKEPS competitions, and suggested two possible new formats that, we believe, can attract more participants and possibly avoid an excessive burden on the organisers. We believe that the paper initiates a fruitful discussion about the format of future ICKEPS competitions, as well as motivating potential organisers to step up and organise the next competition(s).
Ideas for future ICKEPS
467
scitldr
We show that generating English Wikipedia articles can be approached as a multi-document summarization of source documents. We use extractive summarization to coarsely identify salient information and a neural abstractive model to generate the article. For the abstractive model, we introduce a decoder-only architecture that can scalably attend to very long sequences, much longer than typical encoder-decoder architectures used in sequence transduction. We show that this model can generate fluent, coherent multi-sentence paragraphs and even whole Wikipedia articles. When given reference documents, we show it can extract relevant factual information, as reflected in perplexity, ROUGE scores and human evaluations. The sequence-to-sequence framework has demonstrated success in natural-language sequence transduction tasks such as machine translation. More recently, neural techniques have been applied to single-document, abstractive (paraphrasing) text summarization of news articles (BID15, BID9). In this prior work, the input to supervised models ranged from the first sentence to the entire text of an article, and they are trained end-to-end to predict reference summaries. Doing this end-to-end requires a significant number of parallel article-summary pairs, since language understanding is a pre-requisite to generating fluent summaries. In contrast, we consider the task of multi-document summarization, where the input is a collection of related documents from which a summary is distilled. Prior work has focused on extractive summarization, which selects sentences or phrases from the input to form the summaries, rather than generating new text. There has been limited application of abstractive neural methods, and one possible reason is the paucity of large, labeled datasets. In this work, we consider English Wikipedia as a supervised machine learning task for multi-document summarization, where the input is comprised of a Wikipedia topic (title of article) and a collection of non-Wikipedia reference documents, and the target is the Wikipedia article text. We describe the first attempt to abstractively generate the first section, or lead, of Wikipedia articles conditioned on reference text. In addition to running strong baseline models on the task, we modify the Transformer architecture BID18 to only consist of a decoder, which performs better in the case of longer input sequences compared to recurrent neural network (RNN) and Transformer encoder-decoder models. Finally, we show our modeling improvements allow us to generate entire Wikipedia articles. Neural abstractive summarization was pioneered in BID15, where they train headline generation models using the English Gigaword corpus BID3, consisting of news articles from a number of publishers. However, the task is more akin to sentence paraphrasing than summarization, as only the first sentence of an article is used to predict the headline, another sentence. RNN-based encoder-decoder models with attention (seq2seq) perform very well on this task in both ROUGE BID7, an automatic metric often used in summarization, and human evaluation BID1. In BID9, an abstractive summarization dataset is proposed by modifying a question-answering dataset of news articles paired with story highlights from Daily Mail and CNN. This task is more difficult than headline generation because the information used in the highlights may come from many parts of the article and not only the first sentence.
One downside of the dataset is that it has an order-of-magnitude fewer parallel examples (310k vs. 3.8M) to learn from. Standard seq2seq models with attention do less well, and a number of techniques are used to augment performance. Another downside is that it is unclear what the guidelines are for creating story highlights, and it is obvious that there are significant stylistic differences between the two news publishers. In our work we also train neural abstractive models, but in the multi-document regime with Wikipedia. As can be seen in TAB0, the input and output text are generally much larger, with significant variance depending on the article. The summaries (Wikipedia lead) are multiple sentences and sometimes multiple paragraphs, written in a fairly uniform style as encouraged by the Wikipedia Manual of Style. However, the input documents may consist of documents of arbitrary style originating from arbitrary sources. We also show in TAB0 the ROUGE-1 recall scores of the output given the input, which is the proportion of unigrams/words in the output co-occurring in the input. A higher score corresponds to a dataset more amenable to extractive summarization. In particular, if the output is completely embedded somewhere in the input (e.g. a wiki-clone), the score would be 100. Given a score of only 59.2, compared to 76.1 and 78.7 for other summarization datasets, ours is the least amenable to purely extractive methods. There is a rich body of work incorporating Wikipedia for machine learning tasks, including question-answering (BID4, BID13) and information extraction BID6, and text generation from structured data BID5. The closest work to ours involving generating Wikipedia is BID16, where articles are generated extractively (instead of abstractively, in our case) from reference documents using learned templates. The Wikipedia articles are restricted to two categories, whereas we use all article types. The reference documents are obtained from a search engine, with the Wikipedia topic used as query, similar to our search engine references. However, we also show results with documents only found in the References section of the Wikipedia articles. Previous work on neural abstractive summarization relies on RNNs as fundamental modules, mirroring techniques successful in machine translation (MT). Recently, state-of-the-art MT results were obtained using a non-recurrent architecture, called the Transformer BID18. The lack of recurrence enables greater within-training-example parallelization, at the cost of quadratic complexity in the input sequence length. We find the Transformer transfers well to medium-length input sequence summarization and describe modifications to better handle longer sequences. We also collect search results from the Google search engine, using the article section titles as queries. For each query, we collect 10 result pages. From this collection we remove the Wikipedia article itself, which is often among the top results. We also remove "clones", which are detected when there is a high level of unigram overlap with the article (details provided in A.2.1). We denote these refined search results for an article, a_i, as S_i ⊂ D. Similar to C_i, we extract only the text to use as input. TAB1 describes overall properties of our WikiSum dataset. Many articles have few citations, motivating our supplementation of the source documents with web search results. On the other hand, citations, when available, tend to be of higher quality. When counting the total words in the entire dataset, it is orders of magnitude larger than previous summarization datasets. To have consistent train/development/test data across corpus-comparison experiments, we restrict the articles to those with at least one crawlable citation. We divide the articles roughly into 80/10/10 for train/development/test subsets, resulting in 1865750, 233252, and 232998 examples respectively.
When counting the total words in the entire dataset, it is orders-of-magnitude larger than previous summarization datasets. To have consistent train/development/test data across corpus-comparison experiments, we restrict the articles to those with at least one crawlable citation. We divide the articles roughly into 80/10/10 for train/development/test subsets, ing in 1865750, 233252, and 232998 examples respectively. Because the amount of text in input reference documents (C i, S i) can be very large (see TAB1) it is infeasible to train an end-to-end abstractive model given the memory constraints of current hardware. Hence, we first coarsely select a subset of the input using extractive summarization. The second stage involves training an abstractive model that generates the Wikipedia text while conditioning on this extraction. This two-stage process is inspired by by how humans might summarize multiple long documents: First highlight pertinent information, then conditionally generate the summary based on the highlights. We investigate three extractive methods from the summarization literature, along with a trivial and cheating method, to assess the importance of this stage. For each article, a i we create a ranked list of paragraphs, {p DISPLAYFORM0 is the rank of the jth paragraph p i j of (C i, S i). From this we select the first L tokens as input to the second abstractive stage.1. Identity: As a trivial baseline extractor, we simply use the first L tokens of the input.2. tf-idf: A non-trivial ranking is to consider ranking paragraphs as documents in a queryretrieval problem, where the query is the title of the article, T (a i). We compute tf-idf BID14 for the query, with respect to the documents, {p i j}. That is, we summate for each word in the query DISPLAYFORM1 where N w, N d, and N dw are the count of the word in the document, total number of documents, and total number of documents containing the word, respectively.3. TextRank BID8: A weighted graph is defined where text units are nodes and edges are defined by a similarity measure based on word overlap. An algorithm similar to PageRank BID11 is then used to compute the ranking of text units. We used paragraphs for the text units.4. SumBasic BID10: Word frequencies in the input text are used to assign scores to words, which are in turn used to score sentences. After selecting the best scoring sentence, words in it have their scores reduced, and the process is repeated until the desired summary length is reached. To further demonstrate the quality of extraction on the final performance, we implement a cheating extractor that ranks {p i j} using recall of bigrams in the ground truth text: DISPLAYFORM0 4.2 ABSTRACTIVE STAGE 4.2.1 DATA REPRESENTATION Given the ordered paragraphs {p i Ri(j) }, we derive the raw text input simply as the concatenation of the paragraphs in order, the most relevant at the beginning, and prefixed with the title. We then encode the text using sub-word tokenization similar to BID19 with a vocabulary size of 32,000 yielding tokenized input, x i: DISPLAYFORM1 For various values of L in experiments, we truncate the tokens to form the input sequence: DISPLAYFORM2 For the output, we use the same vocabulary and tokenization for the Wikipedia lead text but do not do any truncation across experiments. 
Next we describe the abstractive models, W, that learn to write articles, a_i = W(m^L_i), which we treat as a sequence transduction problem from very long input sequences (up to L = 11000) to medium-length output sequences (typically less than 500). As a baseline we apply the standard LSTM encoder-decoder with attention (seq2seq-att) as in BID0 to this task. As is typical, we train to optimize the maximum-likelihood objective, i.e. to maximize log p(y_i | m^L_i) over the training examples. A stronger, more recent baseline that we use is the non-recurrent Transformer model described in Section 2.3, which also has symmetric encoder and decoder modules (T-ED). We introduce a simple but effective modification to T-ED for long sequences that drops the encoder module (almost reducing model parameters by half for a given hyper-parameter set) and combines the input and output sequences into a single "sentence", w = (m_1, …, m_n, δ, y_1, …, y_m), where δ is a special separator token, training a model to predict the next word given the previous ones as a standard language model: p(w) = Π_j p(w_j | w_1, …, w_{j−1}). Since the model is forced to predict the next token in the input, m, as well as in y, error signals are propagated from both input and output time-steps during training. We also suspect that for monolingual text-to-text tasks, redundant information about language is re-learned in the encoder and decoder. We believe this allows for easier optimization, and empirically observe this with longer sequences (see Section 5.3). Note that because of the self-attention of the Transformer, when generating the next token, attention over both m and y is considered. At inference we provide the input sequence, m_i, initially, and auto-regressively generate the output, y_i, as normal. To re-use the terminology used to describe the Transformer, the attention is a function of a query (Q) and a set of key (K) and value (V) pairs: Attention(Q, K, V) = softmax(QK^T / √d_k) V. To handle longer sequences, we modify the multi-head self-attention of the Transformer to reduce memory usage by limiting the dot products between Q and K. Local attention: Sequence tokens are divided into blocks of similar length and attention is performed in each block independently. As the attention memory cost per block becomes constant, this modification allows us to keep the number of activations linear with respect to the sequence length. In our experiments, we choose to have blocks of 256 tokens. Memory-compressed attention: After projecting the tokens into the query, key, and value embeddings, we reduce the number of keys and values by using a strided convolution. The number of queries remains unchanged. This modification allows us to divide the number of activations by a compression factor. In our experiments we use convolution kernels of size 3 with stride 3. In contrast to local attention layers, which only capture the local information within a block, the memory-compressed attention layers are able to exchange information globally over the entire sequence. These modifications (see FIG0) allow us in practice to process sequences 3x in length over the T-D model. For both local and memory-compressed attention, masking is added to prevent the queries from attending to future keys and values. Our final architecture is a 5-layer network (LMLML) alternating between local-attention (L) layers and memory-compressed attention (M) layers (in BID18 it is 6 identical layers). In some experiments we also added one mixture-of-experts (MoE) layer to increase the network's capacity.
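To illustrate the two attention variants, here is a minimal single-head numpy sketch; the strided mean standing in for the learned strided convolution, the omission of masking, and the toy shapes are simplifications of the description above.

```python
import numpy as np

def attention(q, k, v):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
    (Masking of future positions is omitted for brevity.)"""
    w = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(w - w.max(axis=-1, keepdims=True))
    return (w / w.sum(axis=-1, keepdims=True)) @ v

def local_attention(q, k, v, block=256):
    """Attend within fixed-size blocks only, keeping activation memory
    linear in the sequence length."""
    out = [attention(q[i:i + block], k[i:i + block], v[i:i + block])
           for i in range(0, len(q), block)]
    return np.concatenate(out, axis=0)

def memory_compressed_attention(q, k, v, kernel=3, stride=3):
    """Compress keys/values with a strided window (a strided mean stands in
    for the learned strided convolution); queries are unchanged, so
    information is still exchanged globally across the sequence."""
    def compress(x):
        return np.stack([x[i:i + kernel].mean(axis=0)
                         for i in range(0, len(x) - kernel + 1, stride)])
    return attention(q, compress(k), compress(v))

# Toy usage on a 512-token sequence with model dimension 64.
rng = np.random.default_rng(0)
q = k = v = rng.normal(size=(512, 64))
print(local_attention(q, k, v).shape)              # (512, 64)
print(memory_compressed_attention(q, k, v).shape)  # (512, 64)
```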
In experiments we evaluate based on perplexity (per-wordpiece), a common language modeling metric, and ROUGE-L F1 (version ROUGE-1.5.5), a common metric used in comparing candidate and reference summaries. Note the F1 flavor of ROUGE is more appropriate in this setting as we do not explicitly constrain the output length in abstractive models; it is the harmonic mean of ROUGE-Recall (which favors long summaries) and ROUGE-Precision (which favors short summaries). Although optimizing ROUGE directly has been shown to not always yield the best summaries as evaluated by human judgment BID12, we found that for our task optimizing for perplexity correlates with increased ROUGE and human judgment. We suspect that the relatively uniform style of Wikipedia articles makes ROUGE more appropriate here than in general abstractive summarization tasks. For all abstractive model training, we use the open-source tensor2tensor 2 library. The seq2seq baseline had a hidden size of 128 with 2 layers (we use the hyper-parameter set defined in the library as lstm_attention). For the Transformer encoder-decoder (T-ED), we use the hyper-parameter set transformer_base_v1 and train for 1 million steps. Models exhibited very little overfitting and did not require early-stopping. The Transformer Decoder (T-D) was identical to the decoder part of T-ED. The T-DMCA model is similar to T-D, but with the enhancements described in section 4.2.4. Unless otherwise stated, during decoding we use a beam search of size 4 and length penalty α = 0.6 BID19 and decode until an end-of-sequence token is reached. Extractive-only is not enough: We investigate performance of extractive methods without the abstractive model by looking at the ROUGE-L F1 scores after running tf-idf, SumBasic, and TextRank, shown in Figure 2. In the case of TextRank and SumBasic we matched the output length to the target length, and we observed the extractive methods perform roughly in line with each other in terms of ROUGE-L F1. Our best abstractive model more than doubled this metric. Further, this model yields large improvements in perceived linguistic quality (elaborated below). Extractive method: From TAB2 we observe that smart extraction is critical for final abstractive performance. There is a significant gap between doing nothing, identity, and extractive summarization, tf-idf. Further, there is a significant gap between tf-idf and the cheating extractor, suggesting future work in improving the extraction step could result in significant improvements. One possibility is to train a supervised model to predict relevance (Eq. 1), which we leave as future work. For subsequent experiments we fix the extractive method to tf-idf. Input Corpus: From Table 3 we also observe that, unsurprisingly, the combined dataset performs best, but the gaps between it and using only one of citations or search are both significant, and their contributions are complementary. In subsequent experiments, we report only the combined results. Abstractive model architecture and input length: As we see from TAB3, seq2seq-attention as a baseline does quite poorly on this task compared to the Transformer architectures. As seen in FIG2, we observe that the Transformer encoder-decoder, T-ED, architecture consistently improves in performance until a best result at around L = 500–1000 and is unable to learn at L = 2000.
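For reference, the F1 flavor of ROUGE mentioned above is just the harmonic mean of the two components; a trivial sketch (ours):

def rouge_f1(precision: float, recall: float) -> float:
    """Harmonic mean of ROUGE-Precision and ROUGE-Recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)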
This motivated the Transformer-Decoder, which we found could learn and improve up to L = 4000, before running out of memory on our machines equipped with 16GB of GPU RAM (NVIDIA P100). By using the T-DMCA modifications, we were able to train up to L = 11000 and continued to see improvements in performance. We also found the MoE-layer helped performance by adding model capacity at high L, for example dropping log-perplexity from 2.05 to 1.93 at L = 11000 with 128 experts. Our best attempted model uses 256 experts at L = 7500 (we were unable to use 256 experts with L = 11000 due to memory constraints) and achieves a perplexity of 1.90. Human Evaluation - Linguistic quality: We conducted a DUC-style human evaluation of linguistic quality 3 of samples from a baseline abstractive model (seq2seq), the best extractive method (tf-idf), and our best T-DMCA model. Five different dimensions are assessed: grammaticality, non-redundancy, referential clarity, focus, and structure/coherence. As seen in Table 5, the T-DMCA model does statistically significantly better on all dimensions, except on non-redundancy where tf-idf does about as well. Overall, we observed high fluency and coherence from our best abstractive model. Occasionally we observed some repetition of phrases which hurt the non-redundancy and structure, but it was much rarer compared with the other abstractive method, seq2seq. The biggest weakness of the extractive method compared with our best abstractive model was the lack of structure and coherence in the summaries.
Table 5: Linguistic quality human evaluation scores (scale 1-5, higher is better). A score significantly different (according to the Welch Two Sample t-test, with p = 0.001) from the T-DMCA model is denoted by *.
Model | Focus | Grammar | Non-redundancy | Referential clarity | Structure/coherence
T-DMCA (best) | 4.5 | 4.6 | 4.2 | 4.5 | 4.2
tf-idf-only | 3.0* | 3.6* | 3.9 | 3.2* | 2.7*
seq2seq-attention | 3.0* | 3.4* | 2.1* | 3.4* | 2.3*
Table 6: Side-by-side preference results for two model pairs with large automatic metric gaps.
Human Evaluation - Side-by-side preference: We validated that our chosen metrics correlate with human preference by conducting two side-by-side human evaluation experiments, comparing models with large gaps in perplexity/ROUGE. We observe in Table 6 that human judgment correlates with our automatic metrics, but it becomes more difficult to distinguish at the higher end of model performance. Details of the human evaluation experimental designs can be found in Appendix A.3. To summarize the quantitative results, we believe the highest-impact future work will come from improving the extractive stage and extending the decoder-only architectures to learn from larger L while maintaining sufficient model capacity. Comparison with BID16: A direct comparison with BID16 is difficult for three reasons: (a) they report results only for two small subsets of Wikipedia, Diseases and American Actors; (b) we report results on lead generation instead of full articles; (c) we were unable to obtain the exact articles they used as input and output (in particular, they make no claim of Wiki-clone detection). However, we make a best-effort comparison by finding the subset of articles of our test set that correspond to Diseases and American Actors, the two categories reported on by Sauper & Barzilay, and report our ROUGE-1 scores in TAB4.
We observe that we perform better on American Actors than Diseases, probably because of the prevalence of the former (and biographies) in Wikipedia compared to the latter in our training set for our single, global model, whereas Sauper & Barzilay likely benefit from the category-specific templates. On average our ROUGE-1 scores are higher, but we do worse on the less common and somewhat specific disease category. In FIG3, we show the predictions from three different models (using tf-idf extraction, and the combined corpus) along with the Wikipedia ground truth. As the perplexity decreases we see improvements in the model outputs, in terms of fluency, factual accuracy, and narrative complexity. In particular, the T-DMCA model offers a respectable alternative to the Wikipedia version and is more succinct, while mentioning key facts, such as where the law firm was located, when and how it was formed, and the rise and fall of the firm. In manual inspection of model outputs, we noticed an unexpected side-effect: models learn to translate names from English into multiple languages, e.g. Rohit Viswanath into Hindi (see FIG4). Although we did not do a systematic evaluation of the translations, we found they are often correct, and often they are not found in the Wikipedia article itself. We also verified that in general the translation is not merely copied from the source, as in example cases where the target language is the incorrect one (e.g. translation of an English name into Ukrainian). Given that we have shown it is possible to learn sequence transduction models on combined input-output sequence lengths of approximately 12000 using the T-D architecture, we show that it is possible to train a model to generate entire Wikipedia articles. As a preliminary result, we trained two T-DMCA models: one is trained to use L = 6000 reference tokens to predict at most 2192 article tokens (longer examples are ignored) and another is conditioned only on the title and generates articles up to 4000 tokens long. We show samples from both models in Appendix A.1. Although the generated articles are not as good as the real Wikipedia or our lead section samples, the models can be seen to organize the article into plausible sections and exhibit global coherence over multi-paragraph text. The model with access to reference documents inserts factual information in the generated article. Although we did not focus or tune on the full-article task, we see this as an interesting direction for future work in abstractive summarization. We have shown that generating Wikipedia can be approached as a multi-document summarization problem with a large, parallel dataset, and demonstrated a two-stage extractive-abstractive framework for carrying it out. The coarse extraction method used in the first stage appears to have a significant effect on final performance, suggesting further research on improving it would be fruitful. We introduce a new, decoder-only sequence transduction model for the abstractive stage, capable of handling very long input-output examples. This model significantly outperforms traditional encoder-decoder architectures on long sequences, allowing us to condition on many reference documents and to generate coherent and informative Wikipedia articles. To encourage further research on large-scale summarization, we will release the URLs used in our experiments (the Wikipedia URL as well as the URLs of its references). We also provide code that extracts content from the CommonCrawl dataset 4, which is freely available for download.
We use the open-source tensor2tensor 5 library for training abstractive models and will be releasing our abstractive modeling code extensions. Further details are available at https://goo.gl/wSuuS9. To assess linguistic quality, we randomly selected samples generated by models from the test set and asked raters to choose a score from 1 to 5 (higher is better) for five dimensions: Grammaticality, Non-redundancy, Referential clarity, Focus, and Structure and Coherence. These were used in the past at DUC for evaluating summaries BID2. For each model we selected 25 examples and averaged the scores for each question across 3 raters (out of a pool of 7). To compare two models by human evaluation, we randomly select examples from the test set and show model outputs side-by-side in the interface shown in Figure 9. Which side a model appears on is randomized per example and rater. For the experiments in Table 6 we had 3 raters score 25 examples each and computed the ratio of ratings preferring one model over the other. A.4 EXAMPLE ABSTRACTIVE MODEL INPUT
Figure 9: Screenshot of the side-by-side human evaluation tool. Raters are asked whether they prefer the model output on the left or right, given a ground truth Wikipedia text.
Figure 10: Example extractive-output/abstractive-input for models in the "dewey & lebeouf" example. The extractive method used is tf-idf.
We generate Wikipedia articles abstractively conditioned on source document text.
468
scitldr
Abstract: Stochastic gradient descent (SGD) and Adam are commonly used to optimize deep neural networks, but choosing one usually means making tradeoffs between speed, accuracy and stability. Here we present an intuition for why the tradeoffs exist, as well as a method for unifying the two in a continuous way. This makes it possible to control the way models are trained in much greater detail. We show that for default parameters, the new algorithm equals or outperforms SGD and Adam across a range of models for image classification tasks and outperforms SGD for language modeling tasks. One of the most common methods of training neural networks is stochastic gradient descent (SGD). SGD has strong theoretical guarantees, including convergence in locally non-convex optimization problems. It also shows improved generalization and stability when compared to other optimization algorithms. There have been various efforts in improving the speed and generalization of SGD. One popular modification is to use an adaptive gradient, which scales the gradient step size to be larger in directions with consistently small gradients. Adam, an implementation that combines SGD with momentum and an adaptive step size inversely proportional to the RMS gradient, has been particularly successful at speeding up training and solving particular problems. However, on other problems it pays a penalty in worse generalization, and it requires additional modifications to achieve a convergence guarantee. Here we develop an intuition for adaptive gradient methods that allows us to unify Adam with SGD in a natural way. The new optimizer, SoftAdam, descends in a direction that mixes the SGD and Adam update steps. As such, it should be able to achieve equal or better optimization across a variety of problems. Several authors have recently tried to combine Adam and SGD to get the best of both worlds. However, these have not always enabled better generalization or performance across different problems. In one study, the optimization algorithm was switched from Adam to SGD during training based on a scale-free criterion, preventing the addition of a new hyper-parameter. The result is that the longer the convolutional networks were trained on Adam, the worse their generalization performance compared to SGD. The best performance came from switching to SGD in this case as soon as possible. Another recent algorithm takes the approach of clipping the large Adam updates to make them more similar to SGD as the training nears the end. However, this approach requires two new hyper-parameters: the rate at which the training is switched over, and a learning rate for both SGD and Adam. Similar to this work, partially adaptive methods can allow arbitrary mixing between SGD and Adam. However, in that work the step size is not strictly smaller than the SGD step and so the same guarantees cannot be made about convergence. It is of interest to see whether there is any advantage over these methods. Other algorithms have shown improvements on SGD by averaging weights over many steps. These algorithms are complementary to the algorithm developed here, as they require an underlying algorithm to choose the step direction at any point. The fundamental idea of gradient descent is to follow the path of steepest descent to an optimum. Stochastic gradient descent enables us to optimize much larger problems by using randomly subsampled training data.
The stochastic gradient descent algorithm will minimize the loss function J(θ; x), which is parameterized by θ and takes as input training data x, using the update θ_{t+1} = θ_t − α ∇_θ J(θ_t; x_t), (1) where α is a learning rate that may vary with t and x_t is the training data selected for the batch at step t. The convergence rate can be improved further by using a running average of the gradient, initializing with m_0 ← 0. This method, known as momentum, may be written as m_t = β_1 m_{t−1} + ∇_θ J(θ_t; x_t), θ_{t+1} = θ_t − α m_t. (2) A further development that has improved convergence, especially for LSTM and language modeling tasks, involves the second gradient moment as well. This specific version is known as the Adam algorithm: m_t = β_1 m_{t−1} + (1 − β_1) ∇_θ J(θ_t; x_t), v_t = β_2 v_{t−1} + (1 − β_2) (∇_θ J(θ_t; x_t))², (3) θ_{t+1} = θ_t − α m̂_t / (√v̂_t + ε), (4) where m̂_t = m_t / (1 − β_1^t) and v̂_t = v_t / (1 − β_2^t) are unbiased estimators of the first and second moment respectively. In order to analyze the convergence of these algorithms, we can consider a second-order approximation of J on its combined argument z = (θ; x) in the region of (θ_t; x_t), J(z) ≈ J(z_t) + (z − z_t)^T ∇J(z_t) + (1/2)(z − z_t)^T H_t (z − z_t), where H_t is the Hessian of J(z) around z_t. This gives us the gradient ∇J(z) ≈ ∇J(z_t) + H_t (z − z_t), which becomes the SGD update step z_{t+1} = z_t − α ∇J(z_t). Unrolling this update step can be shown to lead to an expression for the distance from the optimal value z*, in the basis of Hessian eigenvectors ξ_i: z_{t,i} − z*_i = (1 − α λ_i)^t (z_{0,i} − z*_i). We can see that the learning is stable if the learning rate α satisfies |1 − α λ_i| < 1 for every eigenvalue λ_i. In addition, we find that the value for the learning rate that leads to the fastest overall convergence is α* = 2 / (λ_1 + λ_n), where λ_1 and λ_n are the max and min eigenvalues of H, respectively. If rather than a single learning rate α, we were to use a diagonal matrix D such that the update is z_{t+1} = z_t − D ∇J(z_t), we may be able to modify the diagonal entries such that faster overall convergence is achieved. For example, in the special case that the Hessian is diagonal, the convergence rate for the i-th element becomes |1 − d_i λ_i|. In this situation, if the eigenvalues λ_i are known, the algorithm can converge to the minimum in exactly one step by choosing d_i = 1/λ_i. This corresponds with some intuition behind adaptive moment methods: that taking a step with a "constant" size in every direction toward the target will reach convergence faster than taking a step proportional to the gradient size. Because the eigenvalues and eigenvectors are not known a priori, for a practical algorithm we must rely on an approximation to find d_i. One technique named AdaGrad prescribes the diagonal elements d_i = α / (√(Σ_{s≤t} g_{s,i}²) + ε), where g_s is the gradient at step s. For building our intuition, we consider the special case where the Hessian is diagonal. Combining this with Equation 2, we compare the AdaGrad coefficient to the optimal value for d_i. As long as αλ_i ≫ ε, this may be true on average if the initializations z_{0,i} and optima z*_i can be made over the same subspaces. That is, if z*_i is uncorrelated with λ_i, we can expect this to have good performance on average. However, there can be significant errors in both overestimating and underestimating the eigenvalue. One would expect that for a typical problem b_i and λ_i might be drawn from uncorrelated distributions. In this case, large values of z*_i will be likely to correspond to small values of λ_i. Since z_0 can only be drawn from the average distribution (no information is known at this point), the estimated λ_i is more likely to be large, as the initialization will be far from the optimum. Intuitively, the gradient is large because the optimum is far from the initialization, but the algorithm mistakes this large gradient for a large eigenvalue. On the other hand, when the parameter is initialized close to its optimum, the algorithm will mistakenly believe the eigenvalue is small, and so take relatively large steps.
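To make the diagonal-Hessian intuition concrete, the short numerical sketch below (ours, not from the paper) shows that on a quadratic loss a per-coordinate step size d_i = 1/λ_i reaches the optimum in a single step, while the best single learning rate 2/(λ_1 + λ_n) does not:

import numpy as np

lam = np.array([10.0, 1.0, 0.1])          # eigenvalues of the diagonal Hessian
z_star = np.array([1.0, -2.0, 3.0])       # optimum
z = np.zeros(3)                           # initialization

grad = lam * (z - z_star)                 # gradient of J(z) = 0.5*sum(lam*(z - z_star)**2)
z_adaptive = z - (1.0 / lam) * grad       # per-coordinate step d_i = 1/lambda_i
print(np.allclose(z_adaptive, z_star))    # True: converged in one step

alpha = 2.0 / (lam.max() + lam.min())     # optimal single learning rate
z_sgd = z - alpha * grad                  # still far from z_star after one step
print(np.abs(z_sgd - z_star))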
Although they do not affect the overall convergence much on their own (since the parameter is near its optimum), these steps can add significant noise to the process, making it difficult to accurately measure gradients and therefore find the optimum in other parameters. This problem will be significantly worse for Adam, which forgets its initialization with some decay factor β_2. In that case, as each parameter reaches its optimum, its estimated eigenvalue λ_i drops and the step size gets correspondingly increased. In fact, the overall algorithm can be divergent as each parameter reaches its optimum, as the step size will grow unbounded unless α is scheduled to decline properly or a bounding method like AMSGrad is used. In addition, reviewing our earlier assumption of small ε, these algorithms will perform worse for small eigenvalues λ_i < ε/α. This might be especially bad in Adam late in training. We finally note that the Hessians in deep learning problems are not diagonal. As such, each element might be better optimized by a learning rate that better serves both its min and max eigenvalues. Overall, this understanding has led us to believe that adaptive moments might effectively estimate λ_i when it is large, but might be less effective when it is small. In order to incorporate this information about large eigenvalues, as well as optimize the learning rate to account for variation in the eigenvalues contributing to convergence of a particular component, we consider an update to Eq. 1, where λ̄ is an average eigenvalue and η is a new hyper-parameter that controls the weighting of the eigenvalue estimation. Here we have added λ̄ to the numerator so that α does not need to absorb the changes to the RMS error as it does in Eq. 4. This also recovers the SGD convergence guarantees, since the step is always within a factor of η of an SGD step. In addition, this will allow us to recover SGD with momentum exactly in the limit η → 0. We use the adaptive gradient estimation, where v̄_t is the mean value of v_t, to write the update in this form. One issue with the above estimation is that its variance is very large at the beginning of training. It was suggested that this is the reason that warmup is needed for Adam, and it was shown that rectifying it can make warmup unnecessary. The overall procedure is summarized below.
Algorithm: SoftAdam
Input: θ_0 ∈ F: initial parameters; {α_t > 0}, t = 1..T: learning rate; α_wd, β_1, β_2, η, ε: other hyper-parameters; J_t(θ): loss function
Output: θ_T
for t = 1, ..., T do
    Average over all elements
    Calculate the denominator
    Perform the update
end for
return θ_T
Where v_t is the average of n_t elements and v_∞ the average of n_∞ elements, we define r_t = n_t / n_∞. This finally forms the basis for our algorithm. Our algorithm differs from Adam in a few other ways. First, the biased gradient estimate is used rather than the unbiased one. This matches the SGD implementation of momentum, and also avoids magnifying the large variance of early gradient estimates. In addition, the second moment v_t is calculated in an unbiased way using an effective β̂_2(t), or by abuse of notation β̂_2t = min(β_2, 1 − 1/t). This has a negligible impact on the performance of the algorithm, but makes tracking the moment over time easier since it does not need to be un-biased later. We then calculate the ratio of the number of samples n_t used to calculate the moment v_t to the steady-state number of samples in the average, n_∞. We finally note that the weight decay should be calculated separately from this update, as in AdamW.
In order to test the ability of this algorithm to reach better optima, we performed testing on a variety of different deep learning problems. In these problems we keep η at the default value of 1. Because the algorithm is the same as SGDM if η = 0 and is comparable to Adam when η = −1, getting results at least as good as those algorithms is just a matter of parameter tuning. These results are intended to show that SoftAdam performs remarkably well with a common parameter choice. For the best performance, the hyper-parameter η and learning rate schedule α should be optimized for a particular problem. We trained a variety of networks: 1 AlexNet, VGG19 with batch normalization, ResNet-110 with bottleneck blocks, PreResNet-56 with bottleneck blocks, and DenseNet-BC with L=100 and k=12 on the CIFAR-10 dataset using SGD, AdamW and SoftAdam. For each model and optimization method, the weight decay was varied over [1e-4,2e-4,5e-4,1e-3,2e-3,5e-3]. For AdamW the learning rate was varied over [1e-4,2e-4,5e-4,1e-3,2e-3]. For each optimizer and architecture the best result was chosen, and three runs with separate initializations were used to generate the final data. The learning schedule reduced the learning rate by a factor of 10 at 50% and 75% through the total number of epochs. The results are summarized in Table 1. We find that SoftAdam equals or outperforms SGD in training classifiers on this dataset. Due to the larger optimal weight decay constant, SoftAdam achieves lower validation loss at a higher train loss than SGD. We trained a 3-layer LSTM with 1150 hidden units per layer on the Penn Treebank dataset in the same manner as prior work. For SoftAdam the weight drop was increased from 0.5 to 0.6. Results for the average of three random initializations are shown in Figure 2(a) and are summarized in Table 2. For these parameters, SoftAdam outperforms SGD significantly but does not quite achieve the same result as Adam. Note that for this experiment we chose Adam instead of AdamW for comparison due to its superior performance. We also trained a transformer using the fairseq package on the IWSLT'14 German to English dataset. Results for each method with optimized hyperparameters are summarized in Table 3. Note that no warmup is used for training SoftAdam, but warmup is used for AdamW. In this paper, we have motivated and demonstrated a new optimization algorithm that naturally unifies SGD and Adam. We have focused our empirical results on the default hyper-parameter setting, η = 1, and predetermined learning schedules. With these parameters, the algorithm was shown to produce optimization results that are better than or equal to SGD and Adam on image classification tasks. It also performed significantly better than SGD on language modeling tasks. Together with finding the optimal values for η, we expect a better understanding of the learning schedule to shed light on the way in which adaptive gradient methods improve convergence. SoftAdam now also makes it possible to create a learning schedule on η, which may be another fruitful avenue of research expanding on prior work. Better understanding of how adaptive gradients improve the convergence of practical machine learning models during training will enable larger models to be trained more accurately in less time. This paper provides a useful intuition for how that occurs and provides a new algorithm that can be used to improve performance across a diverse set of problems.
# State initialization
if len(state) == 0:
    state["step"] = 0
    # Exponential moving average of gradient values
    state["exp_avg"] = torch.zeros_like(p.data)
    # Exponential moving average of squared gradient values
    state["exp_avg_sq"] = torch.zeros_like(p.data)

exp_avg, exp_avg_sq = state["exp_avg"], state["exp_avg_sq"]
beta1, beta2 = group["betas"]

state["step"] += 1
beta2_hat = min(beta2, 1.0 - 1.0 / state["step"])
r_beta = (1 - beta2) / (1 - beta2_hat)
eta_hat2 = group["eta"] * group["eta"] * r_beta

# Decay the first and second moment with the running average coefficient
# (the operands of this line are unreadable in the source; the standard
# momentum form is assumed here)
exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)

return loss
An algorithm for unifying SGD and Adam and empirical study of its performance
469
scitldr
The use of imitation learning to learn a single policy for a complex task that has multiple modes or hierarchical structure can be challenging. In fact, previous work has shown that when the modes are known, learning separate policies for each mode or sub-task can greatly improve the performance of imitation learning. In this work, we discover the interaction between sub-tasks from their resulting state-action trajectory sequences using a directed graphical model. We propose a new algorithm based on the generative adversarial imitation learning framework which automatically learns sub-task policies from unsegmented demonstrations. Our approach maximizes the directed information flow in the graphical model between sub-task latent variables and their generated trajectories. We also show how our approach connects with the existing Options framework, which is commonly used to learn hierarchical policies. Complex human activities can often be broken down into various simpler sub-activities or sub-tasks that can serve as the basic building blocks for completing a variety of complicated tasks. For instance, when driving a car, a driver may perform several simpler sub-tasks such as driving straight in a lane, changing lanes, executing a turn and braking, in different orders and for varying times depending on the source, destination, traffic conditions etc. Using imitation learning to learn a single monolithic policy to represent a structured activity can be challenging as it does not make explicit the sub-structure between the parts within the activity. In this work, we develop an imitation learning framework that can learn a policy for each of these sub-tasks given unsegmented activity demonstrations and also learn a macro-policy which dictates switching from one sub-task policy to another. Learning sub-task specific policies has the benefit of shared learning. Each such sub-task policy also needs to specialize over a restricted state space, thus making the learning problem easier. Previous works in imitation learning BID16 BID7 focus on learning each sub-task specific policy using segmented expert demonstrations by modeling the variability in each sub-task policy using a latent variable. This latent variable is inferred by enforcing high mutual information between the latent variable and the expert demonstrations. This information-theoretic perspective is equivalent to the graphical model shown in FIG0 (Left), where the node c represents the latent variable. However, since learning sub-task policies requires isolated demonstrations for each sub-task, this setup is difficult to scale to many real-world scenarios where providing such segmented trajectories is cumbersome. Further, this setup does not learn a macro-policy to combine the learned sub-task policies in meaningful ways to achieve different tasks. In our work, we aim to learn each sub-task policy directly from unsegmented activity demonstrations. For example, given a task consisting of three sub-tasks, A, B and C, we wish to learn a policy to complete sub-task A, learn when to transition from A to B, finish sub-task B and so on. To achieve this we use a causal graphical model, which can be represented as a Dynamic Bayesian Network, as shown in FIG0 (Right): the latent code causes the policy to produce a trajectory, and the current trajectory and latent code produce the next latent code.
The nodes c_t denote latent variables which indicate the currently active sub-task and the nodes τ_t denote the state-action pair at time t. We consider as given a set of expert demonstrations, each of which is represented by τ = {τ_1, · · ·, τ_T} and has a corresponding sequence of latent factors c = {c_1, · · ·, c_{T−1}}. The sub-activity at time t dictates what state-action pair was generated at time t. The previous sub-task and the current state together cause the selection of the next sub-task. As we will discuss in Section 3, extending the use of mutual information to learn sub-task policies from unsegmented demonstrations is problematic, as it requires learning the macro-policy as a conditional probability distribution which depends on the unobserved future. This unobserved future is unknown during earlier points of interaction (FIG0). To alleviate this, in our work we aim to force the policy to generate trajectories that maximize the directed information or causal information BID17 flow from trajectories to latent factors of variation within the trajectories instead of mutual information. Using directed information requires us to learn a causally conditioned probability distribution BID12 which depends only on the observed past while allowing the unobserved future to be sequentially revealed. Further, since there exists feedback in our causal graphical model, i.e., information flows from the latent variables to trajectories and vice versa, directed information also provides a better upper bound on this information flow between the latent variables and expert trajectories than does the conventional mutual information BID17 BID12. We also draw connections with existing work on learning sub-task policies using imitation learning with the options framework BID27 BID3. We show that our work, while derived using the information theoretic perspective of maximizing directed information, bears a close resemblance to applying the options framework in a generative adversarial imitation setting. Thus, our approach combines the benefits of learning hierarchical policies using the options framework with the robustness of generative adversarial imitation learning, helping overcome problems such as compounding errors that plague behaviour cloning. In summary, the main contributions of our work include:
• We extend existing generative adversarial imitation learning frameworks to allow for learning of sub-task specific policies by maximizing directed information in a causal graph of sub-activity latent variables and observed trajectory variables.
• We draw connections between previous works on imitation learning with sub-task policies using options and show that our proposed approach can also be seen as option learning in a generative adversarial setting.
• We show through experiments on both discrete and continuous state-action spaces the ability of our approach to segment expert demonstrations into meaningful sub-tasks and combine sub-task specific policies to perform the desired task.
2 RELATED WORK
Imitation Learning BID22 aims at learning policies that can mimic expert behaviours from demonstrations. Modeling the problem as a Markov Decision Process (MDP), the goal in imitation learning is to learn a policy π(a|s), which defines the conditional distribution over actions a ∈ A given the state s ∈ S, from state-action trajectories τ = (s_0, a_0, · · ·, s_T) of expert behaviour.
Recently, BID8 introduced an imitation learning framework called Generative Adversarial Imitation Learning (GAIL) that is able to learn policies for complex high-dimensional physics-based control tasks. They reduce the imitation learning problem into an adversarial learning framework, for which they utilize Generative Adversarial Networks (GAN) BID5. The generator network of the GAN represents the agent's policy π while the discriminator network serves as a local reward function and learns to differentiate between state-action pairs from the expert policy π_E and from the agent's policy π. Mathematically, it is equivalent to optimizing the following: min_π max_D E_π[log D(s, a)] + E_{π_E}[log(1 − D(s, a))] − λH(π). InfoGAIL BID16 and BID7 solve the problem of learning from policies generated by a mixture of experts. They introduce a latent variable c into the policy function π(a|s, c) to separate different types of behaviours present in the demonstration. To incentivize the network to use the latent variable, they utilize an information-theoretic regularization enforcing that there should be high mutual information between c and the state-action pairs in the generated trajectory, a concept that was first introduced in InfoGAN BID2. They introduce a variational lower bound L_1(π, Q) of the mutual information I(c; τ) to the loss function in GAIL: L_1(π, Q) = E_{c∼p(c), a∼π(·|s,c)}[log Q(c|τ)] + H(c) ≤ I(c; τ). The modified objective can then be given as: min_{π,Q} max_D E_π[log D(s, a)] + E_{π_E}[log(1 − D(s, a))] − λ_1 L_1(π, Q) − λ_2 H(π). InfoGAIL models variations between different trajectories, as the latent codes correspond to trajectories coming from different demonstrators. In contrast, we aim to model intra-trajectory variations, and latent codes in our work correspond to sub-tasks (variations) within a demonstration. In Section 3, we discuss why using a mutual-information-based loss is infeasible in our problem setting and describe our proposed approach. Consider an MDP with states s ∈ S and actions a ∈ A. Under the options framework BID27, an option, indexed by o ∈ O, consists of a sub-policy π(a|s, o), a termination policy π(b|s, ō) and an option activation policy π(o|s). After an option is initiated, actions are generated by the sub-policy until the option is terminated and a new option is selected. The options framework has been studied widely in the RL literature. A challenging problem related to the options framework is to automatically infer options without supervision. Option discovery approaches often aim to find bottleneck states, i.e., states that the agent has to pass through to reach the goal. Many different approaches such as multiple-instance learning BID18 and graph based algorithms BID19 BID26 have been used to find such bottleneck states. Once the bottleneck states are discovered, the above approaches find option policies to reach each such state. In contrast, we propose a unified framework using an information-theoretic approach to automatically discover relevant option policies without the need to discover bottleneck states. BID3 formulate the options framework as a probabilistic graphical model where options are treated as latent variables which are then learned from expert data. The option policies (π(a|s, o)) are analogous to sub-task policies in our work. These option policies are then learned by maximizing a lower bound using the Expectation-Maximization algorithm BID20. We show how this lower bound is closely related to the objective derived in our work. We further show how this connection allows our method to be seen as a generative adversarial variant of their approach.
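The sketch below shows one discriminator update in the GAIL setup just described: D is trained to separate expert (s, a) pairs from policy-generated ones, after which its output can serve as a local reward for the policy. This is a minimal sketch of ours; the network sizes, dimensions, and label convention are illustrative assumptions, not the paper's code.

import torch
import torch.nn as nn

obs_dim, act_dim = 4, 2                           # example dimensions (assumed)
disc = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.Tanh(),
                     nn.Linear(64, 1))            # logits of D(s, a)
opt = torch.optim.Adam(disc.parameters(), lr=3e-4)
bce = nn.BCEWithLogitsLoss()

def discriminator_step(expert_sa, policy_sa):
    # Expert pairs -> label 0, policy pairs -> label 1, matching
    # E_pi[log D] + E_piE[log(1 - D)] up to the usual sign conventions.
    logits_e = disc(expert_sa)
    logits_p = disc(policy_sa)
    loss = bce(logits_e, torch.zeros_like(logits_e)) + \
           bce(logits_p, torch.ones_like(logits_p))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()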
Follow-up work proposes to extend the EM-based approach to multiple levels of option hierarchies. Further work on discovery of deep continuous options allows the option policy to also select a continuous action in states where none of the options are applicable. Our proposed approach can also be extended to multi-level hierarchies (e.g. by learning VAEs introduced in Section 3 with multiple sampling layers) or hybrid categorical-continuous macro-policies (e.g. using both categorical and continuous hidden units in the sampling layer of the VAE). BID25 learn options by assuming knowledge of task sketches BID0 along with the demonstrations. The work proposes a behavior cloning based approach using connectionist temporal classification BID6 to simultaneously maximize the joint likelihood of the sketch sequences and the sub-policies. Our proposed approach does not expect task sketches as input, making it more amenable to problems where labeling demonstrations with sketch labels is difficult. Prior work in robot learning has also looked at learning motion primitives from unsegmented demonstrations. These primitives usually correspond to a particular skill and are analogous to options. BID21 used the Beta-Process Autoregressive Hidden Markov Model (BP-AR-HMM) to segment expert demonstrations and post-process these segments to learn motion primitives which provide the ability to use reinforcement learning for policy improvement. Alternatively, BID14 use the Dirichlet Process Gaussian Mixture Model (DP-GMM) to segment the expert demonstrations by finding transition states between linear dynamical segments. Similarly, BID23 use the BP-AR-HMM framework to initially segment the expert demonstrations and then use an inverse reinforcement learning step to infer the reward function for each segment. The use of appropriate priors allows these methods to discover options without a priori knowledge of the total number of skills. BID15 model the task of manipulation as an autoregressive Hidden Markov Model where the hidden phases of manipulation are learned from data using EM. However, unlike the above methods, in our proposed approach we also learn an appropriate policy over the extracted options. We show how this allows us to compose the individual option policies to induce novel behaviours which were not present in the expert demonstrations. As mentioned in the previous section, while prior approaches can learn to disambiguate the multiple modalities in the demonstration of a sub-task and learn to imitate them, they cannot learn to imitate demonstrations of unsegmented long tasks that are formed by a combination of many small sub-tasks. To learn such sub-task policies from unsegmented demonstrations we use the graphical model in FIG0 (Right), i.e., we consider a set of expert demonstrations, each of which is represented by τ = {τ_1, · · ·, τ_T} where τ_t is the state-action pair observed at time t. Each such demonstration has a corresponding sequence of latent variables c = {c_1, · · ·, c_{T−1}} which denote the sub-activity in the demonstration at any given time step. As noted before, previous approaches BID16 BID7 model the expert sub-task demonstrations using only a single latent variable. To enforce the use of this latent variable, these approaches propose to maximize the mutual information between the demonstrated sequence of state-action pairs and the latent embedding of the nature of the sub-activity.
This is achieved by adding a lower bound to the mutual information between the latent variables and expert demonstrations. This variational lower bound of the mutual information is then combined with the adversarial loss for imitation learning proposed in BID8. Extending this to our setting, where we have a sequence of latent variables c, yields the following lower bound on the mutual information: I(c; τ) ≥ E_{c∼p(c), τ∼π}[log q(c | τ)] + H(c). Observe that the dependence of q on the entire trajectory τ precludes the use of such a distribution at test time, where only the trajectory up to the current time is known. To overcome this limitation, in this work we propose to force the policy to generate trajectories that maximize the directed or causal information flow from trajectories to the sequence of latent sub-activity variables instead. As we show below, by using directed information instead of mutual information, we can replace the dependence on τ with a dependence on the trajectory generated up to current time t. The directed information flow from a sequence X to Y is given by I(X → Y) = H(Y) − H(Y ∥ X), where H(Y ∥ X) = Σ_t H(Y_t | Y_{1:t−1}, X_{1:t}) is the causally-conditioned entropy. Replacing X and Y with the sequences τ and c, I(τ → c) = H(c) − Σ_t H(c_t | c_{1:t−1}, τ_{1:t}). Here a variational lower bound, which uses an approximate posterior q(c_t | c_{1:t−1}, τ_{1:t}) instead of the true posterior p(c_t | c_{1:t−1}, τ_{1:t}), can then be derived to get (see Appendix A.1 for the complete derivation) I(τ → c) ≥ Σ_t E[log q(c_t | c_{1:t−1}, τ_{1:t})] + H(c). (3) Thus, by maximizing directed information instead of mutual information, we can learn a posterior distribution over the next latent factor c given the latent factors discovered up to now and the trajectory followed up to now, thereby removing the dependence on the future trajectory. In practice, we do not consider the H(c) term. This gives us the following objective: min_{π,q} max_D E_π[log D(s, a)] + E_{π_E}[log(1 − D(s, a))] − λ_1 Σ_t E[log q(c_t | c_{1:t−1}, τ_{1:t})]. (4) We call this approach Directed-Info GAIL. Notice that, to compute the loss in equation 3, we need to sample from the prior distribution p(c_{1:t}). In order to estimate this distribution, we first pre-train a variational auto-encoder (VAE) BID11 on the expert trajectories, the details of which are described in the next sub-section. Figure 2 (left) shows the design of the VAE pictorially. The VAE consists of two multi-layer perceptrons that serve as the encoder and the decoder. The encoder uses the current state s_t and the previous latent variable c_{t−1} to produce the current latent variable c_t. We used the Gumbel-softmax trick BID9 to obtain samples of latent variables from a categorical distribution. The decoder then takes s_t and c_t as input and outputs the action a_t. We use the following objective, which maximizes the lower bound of the probability of the trajectories p(τ), to train our VAE: L_VAE = −Σ_t E_q[log π(a_t | s_t, c_t)] + Σ_t D_KL(q(c_t | c_{1:t−1}, τ_{1:t}) ∥ p(c_t | c_{t−1})). (5) Figure 2 (right) gives an overview of the complete method. The VAE pre-training step allows us to get approximate samples from the distribution p(c_{1:t}) to optimize equation 4. This is done by using q to obtain samples of the latent variable sequence c from its output on the expert demonstrations. In practice, we fix the weights of the network q to those obtained from the VAE pre-training step when optimizing the Directed-Info GAIL loss in equation 4. In BID3 the authors provide a probabilistic perspective of the options framework.
Figure 2: Left: VAE pre-training step. The VAE encoder uses the current state (s_t) and the previous latent variable (c_{t−1}) to produce the current latent variable (c_t). The decoder reconstructs the action (a_t) using s_t and c_t. Right: An overview of the proposed approach.
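The sketch below illustrates the VAE pre-training step just described: the encoder maps (s_t, c_{t−1}) to a categorical c_t through a Gumbel-softmax sample, and the decoder maps (s_t, c_t) to the action a_t. This is our own minimal sketch; the layer sizes and names are illustrative, not the paper's code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SubTaskVAE(nn.Module):
    def __init__(self, state_dim, act_dim, n_codes, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(state_dim + n_codes, hidden), nn.ReLU(),
            nn.Linear(hidden, n_codes))           # logits over c_t
        self.decoder = nn.Sequential(
            nn.Linear(state_dim + n_codes, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim))           # predicted a_t

    def forward(self, s_t, c_prev, tau=5.0):
        logits = self.encoder(torch.cat([s_t, c_prev], dim=-1))
        # Differentiable sample from the categorical posterior q(c_t | ...).
        c_t = F.gumbel_softmax(logits, tau=tau, hard=False)
        a_t = self.decoder(torch.cat([s_t, c_t], dim=-1))
        return a_t, c_t, logits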
We use the VAE pre-training step to learn an approximate prior over the latent variables and use this to learn sub-task policies in the proposed Directed-Info GAIL step. BID3 learn the option policies by maximizing the following lower bound: E[Σ_t log p(c_t | s_t, c_{t−1}) + log π(a_t | s_t, c_t)]. (6) Note that the first term in equation 6, i.e. the expectation over the distribution log p(c_t | s_t, c_{t−1}), is the same as equation 3 of our proposed approach, with a one-step Markov assumption and a conditional expectation given expert trajectories instead of an expectation with generated trajectories. The second term in equation 6, i.e. the expectation over log π(a_t | s_t, c_t), is replaced by the GAIL loss in equation 4. Our proposed Directed-Info GAIL can therefore be considered as the generative adversarial variant of imitation learning using the options framework. The VAE behaviour cloning pre-training step in equation 5 is exactly equivalent to equation 6, where we use approximate variational inference using VAEs instead of EM. Thus, our approach combines the benefits of both behavior cloning and generative adversarial imitation learning. Using GAIL enables learning of robust policies that do not suffer from the problem of compounding errors. At the same time, conditioning GAIL on latent codes learned from the behavior cloning step prevents the issue of mode collapse in GANs. We present results on both discrete and continuous state-action environments. In both of these settings we show that our method is able to segment out sub-tasks from given expert trajectories, learn sub-task conditioned policies, and learn to combine these sub-task policies in order to achieve the task objective.
Table 1: A comparison of returns for continuous environments. The returns were computed using 300 episodes. Our approach gives comparable returns to using GAIL but also segments expert demonstrations into sub-tasks. The proposed Directed-Info GAIL approach improves over the policy learned from the VAE pre-training step.
For the discrete setting, we choose a grid world environment which consists of a 15 × 11 grid with four rooms connected via corridors, as shown in FIG1. The agent spawns at a random location in the grid and its goal is to reach an apple, which spawns in one of the four rooms randomly, using the shortest possible path. Through this experiment we aim to see whether our proposed approach is able to infer sub-tasks which correspond to meaningful navigation strategies and combine them to plan paths to different goal states. FIG1 shows sub-task policies learned by our approach in this task. The two plots on the left correspond to two of the four different values of the latent variable. The arrow at every state in the grid shows the agent action (direction) with the highest probability in that state for that latent variable. In the discussion that follows, we label the rooms from 1 to 4 starting from the room at the top left and moving in the clockwise direction. We observe that the sub-tasks extracted by our approach represent semantically meaningful navigation plans. Also, each latent variable is utilized for a different sub-task. For instance, the agent uses the latent code in FIG1 (a) to perform the sub-task of moving from room 1 to room 3 and from room 2 to room 4, and the code in FIG1 (b) to move in the opposite direction. Further, our approach learns to successfully combine these navigation strategies to achieve the given objectives. For example, FIG1 shows examples of how the macro-policy switches between various latent codes to achieve the desired goals of reaching the apples in rooms 1 and 2 respectively.
To validate our proposed approach on continuous control tasks we experiment with 5 continuous state-action environments. The first environment involves learning to draw circles on a 2D plane and is called Circle-World. In this experiment, the agent must learn to draw a circle in both clockwise and counter-clockwise directions. The agent always starts at a fixed point, completes a circle in the clockwise direction and then retraces its path in the counter-clockwise direction. The trajectories differ in the radii of the circles. The state s ∈ R² is the (x, y) co-ordinate and the action a ∈ R² is a unit vector representing the direction of motion. Notice that in Circle-World, the expert trajectories include two different actions (for the clockwise and anti-clockwise directions) for every state (x, y) in the trajectory, thus making the problem multi-modal in nature. This requires the agent to appropriately disambiguate between the two different phases of the trajectory. Further, to show the scalability of our approach to higher dimensional continuous control tasks we also show experiments on the Pendulum, Inverted Pendulum, Hopper and Walker environments, provided in OpenAI Gym BID1. Each task is progressively more challenging, with a larger state and action space. Our aim with these experiments is to see whether our approach can identify certain action primitives which help the agent complete the given task successfully. To verify the effectiveness of our proposed approach we do a comparative analysis of our results with both GAIL BID8 and the supervised behavior cloning approach using a VAE. To generate expert trajectories we train an agent using Proximal Policy Optimization BID24. We used 25 expert trajectories for the Pendulum and Inverted Pendulum tasks and 50 expert trajectories for experiments with the Hopper and Walker environments. Figures 4(a, b, c) show results on the Circle-World environment. As can be seen in Figure 4(a, b), when using two sub-task latent variables, our method learns to segment the demonstrations into the two intuitive sub-tasks of drawing circles in clockwise and counter-clockwise directions. Hence, our method is able to identify the underlying modes and thus find meaningful sub-task segmentations from unsegmented data. We also illustrate how the learned sub-task policies can be composed to perform new types of behavior that were unobserved in the expert data. In Figure 4(c) we show how the sub-task policies can be combined to draw the circles in inverted order of direction by swapping the learned macro-policy with a different desired policy. Thus, the sub-task policies can be utilized as a library of primitive actions, which is a significant benefit over methods learning monolithic policies. We now discuss the results on the classical Pendulum environment. Figure 4(d) shows the sub-task latent variables assigned by our approach to the various states. As can be seen in the figure, the network is able to associate different latent variables with different sub-tasks. For instance, states that have a high velocity are assigned a particular latent variable (shown in blue). Similarly, states that lie close to position 0 and have low velocity (i.e. the desired target position) get assigned another latent variable (shown in green). The remaining states get classified as a separate sub-task. Figure 5 shows the results on the higher-dimensional continuous control environments, Hopper and Walker. Figure 5(a) shows plots of the sub-task latent variable assignments obtained on these environments.
Our proposed method identifies basic action primitives which are then chained together to effectively perform the two locomotion tasks. Figure 5(b) shows that our approach learns to assign separate latent variable values to different action primitives, such as the jumping, mid-air and landing phases of these tasks, with the latent variable changing approximately periodically as the agent performs the periodic hopping/walking motion. Finally, in Table 1 we also show the quantitative evaluation on the above continuous control environments. We report the means and standard deviations of the returns over 300 episodes. As can be seen, our approach improves the performance over the VAE pre-training step, overcoming the issue of compounding errors. The performance of our approach is comparable to the state-of-the-art GAIL BID8. Our method, moreover, has the added advantage of segmenting the demonstrations into sub-tasks and also providing composable sub-task policies.
Figure 6: Segmentations obtained using our proposed Directed-Info GAIL method on FetchPickandPlace-v1.
Table 2: Mean returns over 100 episodes on the FetchPickandPlace-v1 environment, calculated using the 'dense' reward setting.
Method | Returns
VAE | −14.07 ± 5.57
GAIL | −13.29 ± 5.84
Directed-Info GAIL | −11.74 ± 5.87
GAIL + L2 loss | −12.05 ± 4.94
Directed-Info GAIL + L2 loss | −9.47 ± 4.84
We further analyze our proposed approach in more detail in the Appendix. In Appendix A.4 we visualize the sub-tasks in a low-dimensional sub-space. Also, in Appendix A.5 we show results when using a larger dimensional sub-task latent variable. A video of our results on the Hopper and Walker environments can be seen at https://sites.google.com/view/directedinfo-gail. We further performed experiments on the FetchPickandPlace-v1 task in OpenAI Gym. In each episode of this task, the object and goal locations are selected randomly. The robot then must first reach and pick the object, and then move it to the goal location. We trained agents using both our proposed Directed-Info GAIL and the baseline GAIL approaches. We used 500 expert demonstrations. While our method was able to learn to segment the expert demonstrations into the Pick and Place sub-tasks correctly, as can be seen in Figure 6 and the videos at https://sites.google.com/view/directedinfo-gail/home#h.p_4dsbuC5expkZ, neither our approach nor GAIL was able to successfully complete the task. In our preliminary results, we found that the robot, in both our proposed approach and GAIL, would reach the object but fail to grasp it despite repeated attempts. To the best of our knowledge, no other work has successfully trained GAIL on this task either. Our preliminary experiments suggested that stronger supervision may be necessary to teach the agent the subtle action of grasping. In order to provide this supervision, we additionally trained the policy to minimize the L2 distance between the policy action and the expert action on states in the expert demonstrations. At every training step, we compute the discriminator and policy (generator) gradients using the Directed-Info GAIL (or, in the baseline, GAIL) loss on states and actions generated by the policy. Along with this gradient, we also sample a batch of states from the expert demonstrations and compute the policy gradient that minimizes the L2 loss between the actions that the policy takes at these states and the actions taken by the expert. We weigh these two gradients to train the policy. Table 2 shows the returns computed over 100 episodes.
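A minimal sketch (ours) of the gradient mixing just described: the adversarial loss on policy-generated data is combined with an L2 behavior-cloning term on expert states. The 0.5 weight, tensor shapes, and the stand-in networks are illustrative assumptions, not values from the paper.

import torch
import torch.nn as nn

policy = nn.Linear(10, 4)                          # stand-in policy network
expert_states = torch.randn(32, 10)                # sampled expert batch
expert_actions = torch.randn(32, 4)
adv_loss = torch.tensor(0.0, requires_grad=True)   # placeholder GAIL term

# L2 distance between policy actions and expert actions on expert states.
bc_loss = ((policy(expert_states) - expert_actions) ** 2).mean()
total_loss = adv_loss + 0.5 * bc_loss              # weighted combination
total_loss.backward()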
Adding the L2 measure as an additional loss led to significant improvement. Our proposed approach Directed-Info GAIL + L2 loss outperforms the baselines. Moreover, we believe that this quantitative improvement does not reflect the true performance gain obtained using our method. The reward function is such that a correct grasp but incorrect movement (e.g. motion in the opposite direction or dropping of the object) is penalized more than a failed grasp. Thus, the reward function does not capture the extent to which the task was completed. Qualitatively, we observed a much more significant difference in performance between the proposed approach and the baseline. This can be seen in the sample videos of the success and failure cases for our and the baseline method at https://sites.google.com/view/ directedinfo-gail/home#h.p_qM39qD8xQhJQ. Our proposed method succeeds much more often than the baseline method. The most common failure cases for our method include the agent picking up the object, but not reaching the goal state before the end of the episode, moving the object to an incorrect location or dropping the object while moving it to the goal. Agents trained using GAIL + L2 loss on the other hand often fail to grasp the object, either not closing the gripper or closing the gripper prematurely. We believe that our approach helps the agent alleviate this issue by providing it with the sub-task code, helping it disambiguate between the very similar states the agent observes just before and just after grasping. Learning separate sub-task policies can help improve the performance of imitation learning when the demonstrated task is complex and has a hierarchical structure. In this work, we present an algorithm that infers these latent sub-task policies directly from given unstructured and unlabelled expert demonstrations. We model the problem of imitation learning as a directed graph with sub-task latent variables and observed trajectory variables. We use the notion of directed information in a generative adversarial imitation learning framework to learn sub-task and macro policies. We further show theoretical connections with the options literature as used in hierarchical reinforcement and imitation learning. We evaluate our method on both discrete and continuous environments. Our experiments show that our method is able to segment the expert demonstrations into different sub-tasks, learn sub-task specific policies and also learn a macro-policy that can combines these sub-task. TAB3: Experiment settings for all the different environments for both DirectedInfo-GAIL and VAE-pretraining step respectively. Thus, by maximizing directed information instead of mutual information, we can learn a posterior distribution over the next latent factor c given the latent factors discovered up to now and the trajectory followed up to now, thereby removing the dependence on the future trajectory. In practice, we do not consider the H(c) term. This gives us the objective, DISPLAYFORM0 In practice, we fix q from the VAE pre-training and only minimize over the policy π in equation 4. BID24 to train our policy network with = 0.2. For the VAE pre-training step we set the VAE learning rate also to 3e −4. For the Gumbel-Softmax distribution we set an initial temperature τ = 5.0. The temperature is annealed using using an exponential decay with the following schedule τ = max(0.1, exp −kt), where k = 3e − 3 and t is the current epoch. 
In the Circle-World experiment, we added another loss term L_s to the VAE pre-training loss L_VAE, which penalizes the number of times the latent variable switches from one value to another. FIG4 shows the segmentation of expert trajectories with and without the L_s term. We observed that without adding the smoothing penalty, the VAE learns to segment the expert trajectories into semi-circles, as shown in FIG4(a). While a valid solution, this does not match the intuitive segmentation of the task into the two sub-tasks of drawing circles in clockwise and counter-clockwise directions. The smoothing term can be thought of as a prior, forcing the network to change the latent variable as few times as possible. This helps reach a solution where the network switches between latent variables only when required. FIG4(b) shows an example of the segmentation obtained on expert trajectories after smoothing. Thus, adding more terms to the VAE pre-training loss can be a good way to introduce priors and bias solutions towards those that match the human notion of sub-tasks. DISPLAYFORM0

In Figure 8, we plot the expert states, reduced in dimensionality using Principal Component Analysis (PCA), in the Hopper and Walker environments. States are color coded by the latent code assigned at these states. We reduced the dimension of states in Hopper from 11 to 2 and in Walker from 17 to 3. These low-dimensional representations are able to cover ∼90% of the variance in the states. As can be seen in the figure, states in different parts of the space get assigned different latent variables. This further shows that our proposed approach is able to segment trajectories in such a way that states that are similar to each other get assigned to the same segment (latent variable).

For the following discussion we will represent a k-dimensional categorical variable as belonging to the ∆^(k−1) simplex. To observe how the dimensionality of the sub-task latent variable affects our proposed approach, we show results with larger dimensionality for the categorical latent variable c_t. Since DirectedInfo-GAIL infers the sub-tasks in an unsupervised manner, we expect our approach to output meaningful sub-tasks irrespective of the dimensionality of c_t. Figure 9 shows results for using a higher-dimensional sub-task latent variable. Precisely, we assume c_t to be an 8-dimensional one-hot vector, i.e., c_t ∈ ∆^7. As seen in the above figure, even with a larger context our approach identifies similar basic action primitives as done previously when c_t ∈ ∆^3. This shows that despite the larger dimensionality our approach is able to reuse appropriate context inferred previously. We also visualize the context values for the low-dimensional state-space embedding obtained by PCA. Although not perfectly identical, these context values are similar to the visualizations observed previously for c_t ∈ ∆^3. Thus our proposed approach is able, to some extent, to infer appropriate sub-task representations independent of the dimensionality of the context variable.
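Returning to the Circle-World smoothing term L_s above: its exact form is not given in the text, so the following sketch assumes one natural differentiable surrogate, the total variation of the per-step posterior over latent codes, which is zero exactly when the latent distribution never changes between steps. The function name and the (T, K) tensor layout are our assumptions.

```python
import torch

def switch_penalty(latent_probs):
    """latent_probs: tensor of shape (T, K), the per-step posterior over the
    K sub-task latent values. Total variation between consecutive steps is a
    differentiable surrogate for the number of switches."""
    return (latent_probs[1:] - latent_probs[:-1]).abs().sum(dim=-1).mean()

# VAE pre-training objective with the smoothing prior (weight is illustrative):
# loss = vae_loss + lambda_s * switch_penalty(posterior_probs)
```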
Learning Hierarchical Policies from Unsegmented Demonstrations using Directed Information
470
scitldr
The Convolutional Neural Network (CNN) has been successfully applied in many fields during recent decades; however, it lacks the ability to utilize prior domain knowledge when dealing with many realistic problems. We present a framework called the Geometric Operator Convolutional Neural Network (GO-CNN) that uses domain knowledge, wherein the kernel of the first convolutional layer is replaced with a kernel generated by a geometric operator function. This framework integrates many conventional geometric operators, which allows it to adapt to a diverse range of problems. Under certain conditions, we theoretically analyze the convergence and the bound on the generalization errors between GO-CNNs and common CNNs. Although the geometric operator convolution kernels have fewer trainable parameters than common convolution kernels, the experimental results indicate that GO-CNN performs more accurately than the common CNN on CIFAR-10/100. Furthermore, GO-CNN reduces the dependence on the amount of training examples and enhances adversarial stability. Convolutional Neural Networks have been successfully applied in many fields during recent decades, but the theoretical understanding of deep neural networks is still in its preliminary stages. Although CNNs have strong expressive abilities, they have two clear deficiencies. First, as complex functional mappings, CNNs, like black boxes, cannot take full advantage of domain knowledge and prior information. Second, when little data is available for a certain task, a CNN's generalization ability weakens. This is due to overfitting, which may occur because of the large number of parameters and the large model size. Stemming from these two defects, a great deal of research has been done to modify CNNs (BID7). Before CNNs were applied, traditional geometric operators had already been well developed. Each geometric operator represents the precipitation of domain knowledge and prior information. For example, the Sobel operator is a discrete difference operator, which can extract image edge information for edge detection. The Schmid operator is an isotropic circular operator, which extracts texture information from images for face recognition. The Histogram of Oriented Gradients (HOG; BID8) is a statistical operator of gradient direction, which extracts edge direction distributions from images for pedestrian detection and other uses. Many computer vision tasks require domain knowledge and prior information. For example, in BID2, the texture information from the image is used for an auxiliary diagnosis of a fracture. Geometric operators can make use of domain knowledge and prior information, but cannot automatically change parameter values by learning from data. Convolutional Neural Networks have strong data expression and learning abilities, but they struggle to make use of domain knowledge. For better data learning, we have combined the two. It is natural to directly use geometric operators for pre-processing, and then classify the data through a Convolutional Neural Network. However, this method uses human experience to select geometric operator parameter values, and then carries out the Convolutional Neural Network learning separately. This method is a kind of two-stage technique, and without reducing parameter redundancy in the Convolutional Neural Network, it is difficult to achieve global optimization.
The method proposed in this paper directly constructs geometric operator convolutions and then integrates them into a Convolutional Neural Network to form a new framework: the Geometric Operator Convolutional Neural Network. This method achieves global optimization and utilizes the properties of geometric operators. In summary, the contributions of this work are as follows: • This framework can integrate many conventional geometric operators, which reveals its broad customization capabilities when handling diverse problems. • In theory, the same approximation accuracy and generalization error bounds are achieved when geometric operators meet certain conditions. • The Geometric Operator Convolutional Neural Network not only reduces the redundancy of the parameters, but also reduces the dependence on the amount of training samples. • The Geometric Operator Convolutional Neural Network enhances adversarial stability. In recent years, Convolutional Neural Networks have been widely used in various classification and recognition applications (BID19; BID15) and have achieved advanced results on various problems. All CNNs adopt an end-to-end approach to learning; however, each unique task is associated with its own distinctive domain knowledge and prior information. Thus, to improve classification accuracy, researchers use a priori information that is tailored to each specific task and each specific Convolutional Neural Network. One way to do this is to use a traditional image processing algorithm as a preprocessing step. Another way is to use a traditional image processing algorithm to initialize convolution kernels. Classification accuracy is a primary concern for researchers in the machine-learning community. Different pre-processing models, such as filters or feature detectors, have been employed to improve the accuracy of CNNs. One example of this is the Gabor filter with a CNN (BID10). The Gabor filter is a feature extractor based on human vision. Besides the Gabor filter, some works also use Fisher vectors (BID5), sparse filter banks, and the HOG algorithm combined with a CNN to improve accuracy. Based on the human visual system, these filters are found to be remarkably well-suited for texture representation and discrimination. In prior works, the Gabor filter is used to extract features from the input image in a pre-processing step. However, these methods require a kind of two-stage procedure that may not reach the optimal global solution. In addition, some scholars use traditional image processing algorithms to initialize convolutional kernels, such as building a Feature Pyramid Network with an image pyramid for multi-scale feature extraction. Geometric operators are widely used in traditional image processing algorithms. Many researchers use the Gabor filter to fix the first convolution layer, while the other layers, which are common convolution layers, can be trained to improve accuracy. John et al. constrained the weights of the first convolutional layer using the Gershgorin circle theorem together with a Gabor filter constraint during back-propagation to improve classification accuracy. In BID1 and BID3, the authors attempt to get rid of the pre-processing overhead by introducing Gabor filters in the first convolutional layer of a CNN. In addition, some researchers use filters to initialize multiple convolutional kernels.
Prior work only used the Gabor function to create kernels in four directions to initialize the convolutional kernels of a Convolutional Neural Network. These methods change the initialization weights and use domain knowledge, but they do not reduce the redundancy of the model parameters, and they do not enhance the transformation ability of the model. In this paper, a new network, the Geometric Operator Convolutional Neural Network, is proposed. This method integrates geometric operators, namely the filters, into a convolutional neural network. This network can not only make use of domain knowledge and prior information, but also reduce the redundancy of network parameters and enhance the ability of model transformation. This network's construction is described in detail in the following section. Classical geometric operators include SIFT, the Roberts operator, the Laplace operator (van), the Gabor operator (BID13), and so on. Each operator has different characteristics; therefore, different geometric operators are used in different application scenarios, according to the characteristics of each unique problem. For example, SIFT looks for feature points in different scale spaces for pattern recognition and image matching. The Roberts operator uses local differences to find edges for edge detection, and the Laplace operator uses isotropic differentials to retain details for image enhancement. Geometric operators represent the precipitation of domain knowledge and prior knowledge. The GO-CNN proposed in this paper exploits the characteristics of geometric operators. The first step in this framework is to construct geometric operator convolutions. In this paper, the Gabor operator and the Schmid operator are mainly used as examples to illustrate how to carry out the convolutions and integrate them into CNNs. Other geometric operators in subsequent studies employ similar concepts. In order to study the frequency characteristics of local-range signals, BID11 proposed the famous "window" Fourier transform (also called the short-time Fourier transform, STFT) in the paper "Theory of communication" in 1946. This is now known as the Gabor operator; when applied to images, it is referred to as the Gabor filter. Until now, the Gabor filter has undergone many developments, and its primary characteristics are listed below. First, the Gabor filter has the advantages of both spatial and frequency signal processing. As shown in Eqn. 1.0, the Gabor operator is essentially a Fourier transform with a Gaussian window. For an image, the window function determines its locality in the spatial domain, so spatial-domain information at different positions can be obtained by moving the center of the window. In addition, since the Gaussian function remains the same after the Fourier transform, the Gabor filter can also extract local information in the frequency domain. Second, the Gabor filter's response to biological visual cells may make it an optimal feature extraction method. BID9 extended the Gabor function to a 2-dimensional form and constructed a 2D Gabor filter on this basis. It was surprising to find that the 2D Gabor filter was also able to maintain consistency with the mammalian model of retinal nerve cell reception. Third, the Gabor kernels are similar to the convolution kernels of the first convolutional layer in a CNN: from the visualization of the first convolutional layer in AlexNet (BID19), some convolution kernels present geometric properties, as in the kernel function of the Gabor filter.
From this observation, it can also be inferred that there are parameter redundancies in the Convolutional Neural Network, and that the Gabor operator can be turned into a convolution and integrated into a CNN. Lastly, the Gabor filter can extract directional correlation texture features from an image. The Gabor kernel function is

g(x, y; θ, σ, γ, λ, ψ) = exp(−(x′² + γ²y′²)/(2σ²)) cos(2πx′/λ + ψ), where x′ = x cos θ + y sin θ, y′ = −x sin θ + y cos θ. (1.0)

Since the Gabor operator is combined with the CNN on images, better feature expressions can be obtained. There are two main binding methods. First, the image is preprocessed by the Gabor operator, and then its features are extracted by the CNN. Second, the Gabor operator is turned into a convolutional layer, and we integrate this convolution into the common Convolutional Neural Network. The second approach is used in this article. As shown in Eqn. 1.0, the Gabor kernel function has 5 parameters, which are obtained by learning and then used to regenerate an m×m kernel. We replace the common convolution kernels with these Gabor kernels to form a convolutional layer. However, for the common convolutional layer, an m×m convolution kernel is generated by an identity mapping, which requires m² parameters. So, our method reduces the number of trainable parameters in the convolutional layer. In 2001, Schmid proposed a Gabor-like image filter, namely the Schmid operator. As shown in Eqn. 2.0, its composition is similar to the kernel function of the Gabor operator, so it retains the properties of the Gabor operator. In addition, the Schmid operator has rotation invariance. So, the Schmid operator is turned into a convolution, and we integrate this convolution into the common Convolutional Neural Network. This improves the model's adversarial stability to rotation and improves the image feature extraction effect. Similar to the convolution of the Gabor operator, as shown in Eqn. 2.0, the Schmid kernel function has two parameters, which are obtained by learning and then used to generate the Schmid kernel:

F(x, y; σ, τ) = cos(2πτ√(x² + y²)/σ) exp(−(x² + y²)/(2σ²)). (2.0)

Finally, we replace common convolution kernels with Schmid kernels to form a convolutional layer. In this paper, only two geometric operator convolutions are explained. Similarly, for other geometric operators, operator kernels are generated by operator kernel functions, which replace common convolution kernels to form a convolutional layer. Due to the diversity of geometric operators, different geometric operators can be substituted in, so the geometric operator convolution is customizable: any geometric operator can be used to form a corresponding geometric operator convolution. Consequently, a question that must be addressed is how we combine multiple geometric operators with common CNNs to form GO-CNNs. Since the visualization of the first layer of convolution kernels maintains some geometric characteristics, we replace the convolution kernels in the first convolutional layer by kernels generated from geometric operators, and denote this kind of CNN as a Geometric Operator Convolutional Neural Network (GO-CNN). In a GO-CNN, the kernels of the first convolutional layer are calculated from the parameters of various geometric operator functions, and we call these operator functions generator functions. Then, we concatenate all the calculated convolutional kernels in the last dimension to obtain a complete convolutional kernel. The full procedure is illustrated in FIG0. The generated convolution kernel is used as the weight of the first convolution layer in the Geometric Operator Convolutional Neural Network, and then the common convolution layers and output layer are connected.
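As a concrete illustration of the generator functions in Eqns. 1.0 and 2.0, the following PyTorch sketch builds an m×m kernel from the 5 Gabor parameters or the 2 Schmid parameters. The parameters are expected to be scalar tensors (e.g., entries of an nn.Parameter), so the kernels stay differentiable with respect to them. The function names are ours, and in practice σ and λ should be initialized to positive values to avoid division issues.

```python
import math
import torch

def gabor_kernel(theta, sigma, gamma, lam, psi, size=3):
    """Generate a size x size Gabor kernel from its 5 scalar-tensor parameters
    (Eqn. 1.0). Differentiable w.r.t. all parameters."""
    r = torch.arange(size, dtype=torch.float32) - (size - 1) / 2.0
    y, x = torch.meshgrid(r, r, indexing="ij")
    xp = x * torch.cos(theta) + y * torch.sin(theta)     # x'
    yp = -x * torch.sin(theta) + y * torch.cos(theta)    # y'
    return torch.exp(-(xp ** 2 + (gamma * yp) ** 2) / (2 * sigma ** 2)) \
        * torch.cos(2 * math.pi * xp / lam + psi)

def schmid_kernel(sigma, tau, size=3):
    """Generate a size x size Schmid kernel from its 2 parameters (Eqn. 2.0)."""
    r = torch.arange(size, dtype=torch.float32) - (size - 1) / 2.0
    y, x = torch.meshgrid(r, r, indexing="ij")
    rad = torch.sqrt(x ** 2 + y ** 2)
    return torch.cos(2 * math.pi * tau * rad / sigma) \
        * torch.exp(-rad ** 2 / (2 * sigma ** 2))
```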
In this way, we have defined the forward propagation of the whole Geometric Operator Convolutional Neural Network. In backward propagation, the gradient of the loss is transferred to the convolution kernel; this process differs from the usual convolution. Here, the convolution kernel generated by the geometric operator needs to further use the chain rule (Eqn. 3.0, where L is the loss function, w is each convolution kernel, and p_i is a parameter generating each convolution kernel) to transfer the gradient to the parameters of each convolution kernel:

∂L/∂p_i = Σ_{j,k} (∂L/∂w_{jk}) (∂w_{jk}/∂p_i). (3.0)

Then, the trainable parameters are updated by gradient descent algorithms, and the whole GO-CNN is complete. The whole framework of the Geometric Operator Convolutional Neural Network has been introduced above. Next, we describe how to theoretically analyze the GO-CNN. It is theoretically proved that although the number of trainable parameters in the GO-CNN decreases, the effectiveness for computer vision tasks does not. All detailed proofs are given in Appendix B.
• We denote the input by {I_i}_{i=1}^N, with corresponding labels {y_i | y_i = 0 or 1}_{i=1}^N.
• The loss function is the Mean Square Error.
• The output of the neural network is ỹ_i for each input I_i, and the empirical loss function is defined as Ê = (1/N) Σ_{i=1}^N (ỹ_i − y_i)².
Definition 1 (Parametric Convolutional Kernel Space). Let f be a function that maps a vector in R^n to a matrix in R^{m×m}, n, m ∈ N+; we call this function a convolution kernel generator function. Then we define the parametric convolutional kernel space K_f as K_f = {f(p) | p ∈ R^n}. We call n the parameter number, m the kernel size, and od (short for output dimension) the output dimension. Since a convolutional kernel in a parametric convolutional kernel space is generated by a function f, we call f the generator function and p the generator parameters. Since a kernel can be generated from a generator function with fewer parameters than an ordinary kernel, the number of trainable parameters of a GO-CNN can be much smaller. However, a reduction in parameters often causes a loss of performance, as the hypothesis space is smaller. In the simplest situation, we replace the ordinary kernels in the first convolutional layer by parametric kernels generated from a parametric convolutional kernel space, and study this setting. Definition 2 (GO-CNN). Assume that K_f is a parametric convolutional kernel space. If the kernels in the first convolutional layer of a convolutional neural network are generated from K_f, we call this network a GO-CNN. We denote the set of GO-CNNs by G_f. A GO-CNN is almost exactly the same as a common CNN, except for the kernels in the first convolutional layer. We treat the first convolutional layer as a function from images to outputs, which then act as the input of the following layer. If this function is not injective, meaning that different inputs can be mapped to identical outputs, then the network takes these identical outputs as the input of the following layers, meaning that the final outputs are still the same. However, the image inputs of the first convolutional layer are different, and the corresponding labels can also be different. Thus, when the final outputs are the same, errors must occur. Therefore, we need to choose the kernels carefully to make this function injective.
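A minimal sketch of how the first convolutional layer of a GO-CNN can be implemented so that autograd realizes the chain rule of Eqn. 3.0 automatically: the kernels are rebuilt from the generator parameters on every forward pass, so the gradient of the loss flows back to the 5 parameters per kernel. gabor_kernel is the generator sketched above; the class name and random initialization are our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaborFirstLayer(nn.Module):
    """First convolutional layer whose kernels are generated from learnable
    Gabor parameters (theta, sigma, gamma, lam, psi per kernel)."""
    def __init__(self, in_ch, out_ch, size=3):
        super().__init__()
        self.size = size
        # 5 generator parameters per (out_ch, in_ch) kernel; in practice,
        # initialize sigma and lam to positive values.
        self.params = nn.Parameter(torch.randn(out_ch, in_ch, 5))

    def forward(self, x):
        oc, ic, _ = self.params.shape
        kernels = torch.stack([
            torch.stack([gabor_kernel(*self.params[o, i], size=self.size)
                         for i in range(ic)])
            for o in range(oc)])                 # (out_ch, in_ch, size, size)
        return F.conv2d(x, kernels, padding=self.size // 2)
```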
Since the convolution operator is a linear operator, we have the following proposition. Proposition 1. If the kernel bank of a convolutional layer, denoted by w, satisfies

I * w = 0 ⇔ I = 0, ∀I, (6.0)

where I is the layer input and * is the convolution operation, then this convolutional layer is an injective function. We thus have a necessary and sufficient condition for a convolutional layer to be an injective function. But which kernels satisfy this condition? In the proposition below, we show that 3×3 kernels generated by the Gabor kernel function satisfy this condition. Proposition 2. Let f be the Gabor kernel function of Eqn. 1.0, g(x, y; θ, σ, γ, λ, ψ) = exp(−(x′² + γ²y′²)/(2σ²)) cos(2πx′/λ + ψ), where x′ = x cos θ + y sin θ, y′ = −x sin θ + y cos θ. Let K_f be the corresponding parametric convolutional kernel space with kernel size m equal to 3 and sufficient output dimension od. Then, there exist kernels in K_f satisfying condition (6.0). As kernels generated from K_f may not meet condition (6.0), we have the following definition. Definition 3 (Well-Defined GO-CNN). Let G ∈ G_f; if there are kernels generated by K_f that satisfy (6.0), we call G a well-defined GO-CNN. We denote the set of all well-defined GO-CNNs as G*_f. Corollary 1. If the generator function f is the Gabor kernel function, the GO-CNN is well-defined. Now, let us consider a Convolutional Neural Network with one convolutional layer and two fully connected layers, and study the convergence of the common CNN and the GO-CNN. The detailed mathematical expressions are given in Appendix B. Theorem 1. For any F ∈ F, where F is the set of common CNNs, if the first fully connected layer is wide enough, the empirical loss of a well-defined GO-CNN can be controlled by that of the common CNN. That is, for an arbitrary ε > 0, there exist d* ∈ N+ and G ∈ G*_f such that when d_1 ≥ d*, the following inequality holds: Ê(G) ≤ Ê(F) + ε. (7.0) Theorem 2. For any F ∈ F, where F is the set of common CNNs, if the first fully connected layer is wide enough, the generalization error of a well-defined GO-CNN can be controlled by that of the common Convolutional Neural Network. That is, for an arbitrary ε > 0, there exist d* ∈ N+ and G ∈ G*_f such that when d_1 ≥ d*, the following inequality holds: E(G) ≤ E(F) + ε. (8.0) From Theorem 2, we know that well-defined GO-CNNs have almost the same generalization error as common CNNs. Therefore, we need to find which GO-CNNs are well defined. As a GO-CNN with the Gabor kernel function as the generator function is well defined, we have the following corollary. Corollary 2. Let f be the Gabor kernel function; for any F ∈ F, if the first fully connected layer is wide enough, the generalization error of the GO-CNN G that applies f as the generator function can be controlled by that of F. That is, for an arbitrary ε > 0, there exist d* ∈ N+ and G such that when d_1 ≥ d*, inequality (8.0) holds. More generally, if there are many generator functions in the first convolutional layer of a GO-CNN, when the number of kernels generated by the Gabor kernel function is sufficiently large, this GO-CNN is also well defined. Therefore, we have the following corollary. Corollary 3. Let {f_1, f_2, ..., f_T} be the set of generator functions. Suppose that there are od convolution kernels {k_1, k_2, ..., k_od} in the first convolutional layer of a GO-CNN, denoted by G, and each k_j is generated by function f_{t_j}, where 1 ≤ j ≤ od, 1 ≤ t_j ≤ T. If there exists t* ∈ {1, 2, ..., T} such that f_{t*} is the Gabor kernel function, and the number of kernels generated by f_{t*}, denoted by n_{t*}, is sufficiently large, then G is well defined, so that (8.0) holds.
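Proposition 1 admits a simple numerical check: convolution restricted to a single m×m patch is the linear map I ↦ (⟨I, k_1⟩, ..., ⟨I, k_od⟩), which is injective iff the flattened kernels span R^(m²). The following sketch (our naming) tests this for 3×3 kernels; it requires od ≥ 9 to have any chance of passing.

```python
import torch

def first_layer_is_injective(kernels):
    """kernels: tensor of shape (od, 3, 3). Returns True iff the od x 9
    matrix of flattened kernels has full rank 9 (cf. Proposition 1)."""
    od = kernels.shape[0]
    A = kernels.reshape(od, -1)
    return int(torch.linalg.matrix_rank(A)) == 9
```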
All experiments are performed on a single machine with an Intel Core i7-7700 CPU @ 3.60GHz × 8, a TITAN X (Pascal) GPU, and 32G RAM. Experimental details and further experiments are given in Appendix C. The theoretical analysis ensures that the GO-CNN has the same approximation accuracy and the same upper bound on the generalization error as the common CNN. We verify this using two kinds of experiments on the CIFAR-10/100 datasets. For the GO-CNN, the convolution kernels of the first layer are half trainable Schmid kernels and half trainable Gabor kernels. The basic network frameworks used in these experiments are ResNet18, ResNet34, and ResNet50 (BID14). According to the cross-entropy curves on the CIFAR-10/100 train sets, GO-CNN's loss initially fell faster than the common CNN's, eventually reaching almost the same value. This verifies that the GO-CNN achieves the same approximation accuracy as the common Convolutional Neural Network. According to the error rate curves on the CIFAR-10/100 validation sets, the error of the GO-CNN is lower than that of the common CNN. In addition, as shown in TAB1, the GO-CNN was 0.4% more accurate than the common CNN on the CIFAR-10 test set, and 0.5% more accurate on the CIFAR-100 test set. This verifies that the GO-CNN achieves the same generalization error bound as the common CNN. In many practical applications, such as the military and medical care, annotated data are often insufficient. Thus, a model's generalization ability on small datasets is of great importance. For the CIFAR-10/100 and MNIST datasets, the train sets are large and the test sets are small. So, in these numerical experiments, the test set is directly used to train the model, and the train set is used to evaluate the model. For the numerical experiments on the CIFAR-10/100 datasets, the techniques and models used are the same as in Sec. 5.1. For the numerical experiments on the MNIST dataset, the basic network structure used in the experiment is LeNet. Similarly, in the GO-CNN, the first convolutional layer is replaced by the operator convolutional layer. The convolution kernels of the first layer are composed of trainable Gabor kernels and Schmid kernels; the other convolutional layers are common convolutional layers. As shown in TAB2, in terms of accuracy on MNIST and CIFAR-10/100, after the train set drops to one-fifth of the original train set, the accuracy of the common CNN falls faster than that of the GO-CNN. Moreover, the GO-CNN is more accurate than the common CNN on the original train set. That is to say, the GO-CNN is better at predicting unknown data than the common CNN. The GO-CNN not only reduces the redundancy of the parameters, but also reduces the dependence on the amount of training samples. Current machine learning models, including neural networks, are vulnerable to attacks from adversarial samples. In addition, CNNs show instability under attacks by adversarial samples (BID12). So, it is very important to study stability against adversarial samples. It can be seen from TAB3 that when the test set is randomly rotated within 90 degrees, the accuracy drop of the GO-CNN is 1.21% lower than that of the common CNN. This verifies that the GO-CNN enhances adversarial stability to rotated samples. When a small Gaussian disturbance (mean 0, standard deviation 0.3) is applied to the test set, the accuracy drop of the GO-CNN is 0.6% lower than that of the common CNN. This indicates that the GO-CNN enhances adversarial stability to Gaussian-disturbance adversarial samples.
In sum, the GO-CNN enhances the adversarial stability for certain adversarial samples. In further experiments, the Geometric Operator Convolutional Neural Network uses a priori knowledge from the field of medicine and provides a better recognition effect; experiments on intelligent medical diagnosis of bone fractures are given in Appendix C. Although the number of trainable parameters decreases, the GO-CNN still reaches the same approximation accuracy and a slightly lower upper bound on the generalization error when compared with the common CNN. And the GO-CNN reduces the dependence on training samples and enhances the adversarial stability for certain adversarial samples. In this paper, we present a novel framework named the Geometric Operator Convolutional Neural Network, where the kernels in the first convolutional layer are replaced with kernels generated by geometric operator functions. This new network boasts several contributions. Firstly, the GO-CNN is customizable for diverse situations. Secondly, there is a theoretical guarantee in the learning framework of the GO-CNN. Thirdly, the GO-CNN reduces the dependence on training samples. Lastly, the GO-CNN enhances adversarial stability. In the future, we can explore more appropriate geometric operator convolution blocks. A supplementary explanation about the Gabor and Schmid operators: the Gabor kernels are similar to the convolution kernels of the first convolutional layer in the CNN; an illustration of this similarity is shown in FIG1. In addition, the Gabor filter can extract directional correlation texture features from an image. As shown in FIG2, there are 40 Gabor filters. As shown in FIG3, when the original image and a version of that image that has been rotated 90 degrees are both convolved with the same Schmid kernel, the resulting characteristic graph exhibits only 90 degrees of rotation. So, the Schmid operator has rotation invariance. Lemma. Let F and G be two hypothesis classes and let a ∈ R be a constant; we have: DISPLAYFORM0 Lemma. With probability at least 1 − δ over the choice of S, the following holds for all h ∈ F: DISPLAYFORM1 Proof of Proposition 1. Assume that the proposition is not true; then there exist I_1 ≠ I_2 such that I_1 * w = I_2 * w. Thus, if we set I = I_1 − I_2, we have I * w = (I_1 − I_2) * w = 0, since * is a linear operator, which means that I = 0 according to the condition. Therefore, the assumption is not true, and the result is proved. Proof of Proposition 2. Assume that there exists I ∈ R^{3×3}, I ≠ 0, such that I * k = 0 holds for ∀k ∈ K_f. We write I in matrix form with entries a_{xy}, x, y ∈ {0, 1, 2}. We define the pixel generator function f_{ij} to be f_{x−1,y−1}(θ, σ, γ, λ, ψ). Then, we have the following equivalence: DISPLAYFORM0 DISPLAYFORM1 We will choose a variety of different parameters to discuss.
DISPLAYFORM2 Since θ = 0, we have x′ = x, y′ = y, and the following: DISPLAYFORM3 We make the following shorthands for convenience: DISPLAYFORM4 From Eqn. 13.2, we can get: DISPLAYFORM5 The equation above means that DISPLAYFORM6 Differentiating on both sides with respect to the parameter γ, we get: DISPLAYFORM7 Eqn. 13.7 holds for all γ, σ, which indicates that: DISPLAYFORM8 In the same way, we can get the following equation from Eqn. 13.8: DISPLAYFORM9 Therefore, we have the following equations: DISPLAYFORM10 a_00 + a_02 + a_20 + a_22 = 0, a_01 + a_21 = 0, a_10 + a_12 = 0, a_11 = 0. (13.10) DISPLAYFORM11 In the same way, we have the following equations: DISPLAYFORM12 And we can get: (a_00 + a_02)h_1 + a_01 h_2 + (a_10 + a_12)h_3 + a_11 = 0, (13.12) which indicates that: a_00 + a_02 = 0, a_01 = 0. (13.13) From Eqn. 13.10, we can get: a_00 + a_02 = 0, a_01 = a_21 = 0. (13.14) DISPLAYFORM13 We can get the following equations in the same way as discussed in situation II: DISPLAYFORM14 IV. θ = π/2, λ = 3, ψ = ±π/3. We have x′ = y, y′ = −x this time, and we can get the following equations in the way discussed in situations II and III: DISPLAYFORM15 Combining equations 13.14, 13.15 and 13.16, we can get: DISPLAYFORM16 We have x′ = (√2/2)(x + y), y′ = (√2/2)(y − x) and the following: DISPLAYFORM17 Therefore, we have DISPLAYFORM18 Combining equations 13.10, 13.17 and 13.19, we find that a_ij = 0 for i, j = 0, 1, 2, which means that I = 0. Therefore, the assumption that I ≠ 0 is not true. For an arbitrarily sized input I, we can focus on the 3 × 3 submatrix that takes the inner product with the convolution kernel and get the same result. Proof of Corollary 1. From Prop. 2, the result is obvious. For the common CNN, denoted by F, we define the convolution kernel as k_F. The weights of the remaining fully connected layers are {a_{F,1}, a_{F,2}}, and the biases of the three layers are {b_{F,0}, b_{F,1}, b_{F,2}}. Let σ stand for the sigmoid activation function; then the convolutional layer C_F and the fully connected layer FC_F can be defined as follows: DISPLAYFORM19 Then, the last two fully connected layers can be defined as: DISPLAYFORM20 Therefore, the output before activation, denoted by F(x), and after activation, denoted by F̃(x), are defined as: DISPLAYFORM21 We denote the set of common CNNs as F, that is, F = {F}, and the outputs before and after activation for input I_i as F_i, F̃_i. For a GO-CNN G, we similarly define the convolutional kernel to be k_G, and the weights and biases are {a_{G,1}, a_{G,2}, b_{G,0}, b_{G,1}, b_{G,2}}. Then, we have the following shorthand when the input is x: DISPLAYFORM23 We denote the outputs before and after activation for input I_i as G_i, G̃_i as well. We maintain the same neuron number for each corresponding layer in the common CNN and the GO-CNN, that is, dim(b_{F,k}) = dim(b_{G,k}), since the approximation ability differs when the neuron number differs. We define the width of each layer as d_k = dim(b_{F,k}).
Proof of Theorem 1. Notice that DISPLAYFORM32 Applying absolute value on both sides, DISPLAYFORM33 The last inequality holds as DISPLAYFORM34 We can fix the parameters of the ordinary CNN, so that there is a mapping between input I_i and output F̃_i, and the mapping function is F̃ as we have defined. We can also fix the parameters of C_G, and choose the convolution kernel of C_G that satisfies (6.0), since G is a well-defined GO-CNN, so that C_G is an injective function, which means that C_G^{−1} exists. At the same time, D_G = FC_{G,2} ∘ σ ∘ FC_{G,1} can be treated as a one-hidden-layer neural network. Combined with (16.2), we can get DISPLAYFORM0 Proof of Theorem 2. From Theorem 1, we know that G satisfies the following inequality: DISPLAYFORM1 From Lemma 3, we know that DISPLAYFORM2 Since G*_f ⊂ F, we have the following inequality from Lemma 2: DISPLAYFORM3 Combined with (17.2), we have DISPLAYFORM4 The result is proved! Proof of Corollary 2. From Theorem 2 and Corollary 1, this is obvious. Proof of Corollary 3. Let K be the set of k_i such that the generator function of k_i is f_{t_i} = f_{t*}, and denote the concatenation of all these k_i as k̄. Suppose that there exists an input x satisfying x * k_i = 0, i = 1, 2, ..., od; then x * k = 0, ∀k ∈ K. Therefore, x * k = 0 holds for any parameters. However, this conflicts with Prop. 2. Therefore, the result is proved! C APPENDIX. Approximation accuracy and generalization error. Recognizing objects in an actual scene does not depend on corresponding domain knowledge but on humans' prior information. For object recognition tasks, the Geometric Operator Convolutional Neural Network's recognition effect is worth exploring. The commonly used public datasets for common object recognition are CIFAR-10 (ten categories) and CIFAR-100 (100 categories). They are all three-channel color images with a resolution of 32×32. The train set contains 50,000 images and the test set contains 10,000 images. The basic network frameworks used in these experiments are ResNet18, ResNet34, and ResNet50 (BID14), which mainly consist of a new residual structure unit. In these experiments, four paddings are added on the four edges. Then, a random 32×32 cropping is performed, and a data enhancement method is carried out, which involves flipping the image up and down. For both testing and training, the images' pixels are normalized to a 0-1 distribution. The stochastic gradient descent optimization algorithm with momentum 0.9 is used during the training process.
The batch size is 100, the initial learning rate is 0.1, and the weight decay is 0.0005. The learning rate is reduced by a factor of five at epochs 60, 120, and 160. We report the performance of our algorithm on the test set after 200 epochs, averaged over five runs. As shown in FIG5, according to the cross-entropy curves of the CIFAR-10 and CIFAR-100 train sets, GO-CNN's loss initially fell faster than the common CNN's, eventually reaching almost the same value. This verifies that the Geometric Operator Convolutional Neural Network achieves the same approximation accuracy as the common Convolutional Neural Network. According to the error rate curves on the validation sets, the error of the GO-CNN is lower than that of the common CNN. T-SNE (BID17) is generally used for visualization. The T-SNE visualization maps data points to a two-dimensional or three-dimensional probability distribution through affinity transformations; the data points are then displayed on a two-dimensional or three-dimensional plane. In this paper, a two-dimensional T-SNE visualization is adopted to display the CIFAR-10 features extracted by the model. As shown in FIG6, the CIFAR-10 features extracted by the Geometric Operator Convolutional Neural Network are evenly separated from each other in the two-dimensional T-SNE visualization, while the features extracted from the common Convolutional Neural Network are mixed. It is apparent that the features extracted by the GO-CNN are more separable; in other words, the features learned by the GO-CNN are more distinguishable and easier to classify with the last fully connected layer. Generalization. MNIST is a public, handwritten recognition dataset with a total of ten classes. This dataset consists of single-channel images with 28×28 resolution and clean backgrounds. The train set contains 50,000 images and the test set contains 10,000 images. For the numerical experiments with the MNIST dataset, the adaptive moment estimation (BID18) optimization algorithm is used. In addition, as an image enhancement strategy, the images are padded to 32×32 during the training process. The batch size is 100, the initial learning rate is 0.001, and the weight decay is 0.0005. The learning rate stays the same until reaching 20,000 iterations. Consequently, we complete 20,000 iterations on one test set and average the performance over five runs in order to report the final performance evaluation of our algorithm. Application. Medical imaging in China is developing rapidly, but specialist doctors are in short supply and are concentrated mainly in big cities and big hospitals. Many small and medium-sized cities do not have sufficient diagnostic imaging capacities, so many patients have to go to big cities in order to access better medical resources and obtain better treatment. Doctors usually judge whether a fracture has occurred based on whether there is a fracture line (texture) in the image. In BID2, the texture information from the image is used for an auxiliary diagnosis of a fracture. With prior information from the Schmid operator, we preprocess with Schmid operators to enhance the texture information of an image. Then, we use the CNN to conduct classification. However, the parameters of the geometric operators in this two-stage method are preset by human experience. At this point, it is difficult for the local parameters obtained by the respective optimizations to reach the global optimum.
Thus, one may consider integrating the preprocessing of geometric operators into the deep network for global parameter learning, without prior artificial empirical design of the parameters. In other words, this means using the GO-CNN proposed in this paper, wherein the convolution kernels of the first layer are all trainable Schmid kernels. Around 2,000 X-ray samples taken at the Hainan People's Hospital were used as the data for the three kinds of intelligent fracture diagnosis models. Each sample was manually divided into bone patches. The above three models are used for the numerical experiments. The basic network framework used in the experiment is ResNet50 (BID14), which mainly consists of a new residual structure unit. To balance the data during training, the number of fracture patches is increased to 4,016 by rotating and transforming the images. In the test set, there are 145 fracture patches and 1,004 non-fracture patches. Then, five experiments are conducted to evaluate each model. The SGD algorithm and a fine-tuning strategy are used during the training process, with a batch size of 50. The initial learning rate is 0.001 and the weight decay is 0.0005. The learning rate is reduced by a factor of five every 4,000 iterations, as in the sketch below. The data classes are queued so that each batch draws evenly from each class during training. We report the performance of our algorithm on the test set after 12,000 iterations, averaged over five runs. According to TAB6, the Geometric Operator Convolutional Neural Network is the most accurate. Moreover, the fracture recall of the two-stage method is 0.77% higher than that of the Convolutional Neural Network, indicating that domain knowledge from the field of medicine is important for intelligent diagnosis. The fracture recall of the Geometric Operator Convolutional Neural Network is 2.21% higher than that of the two-stage method, which indicates that the Geometric Operator Convolutional Neural Network does make use of medical knowledge for fracture diagnosis. The integration of geometric operators into the deep neural network indeed achieves global optimization.
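A sketch of the fracture-experiment optimization setup as we read it: train_one_iteration is a hypothetical step on one batch of 50 patches, the momentum value is an assumption carried over from the CIFAR experiments, and "reduced by a factor of five" is interpreted as multiplying the learning rate by 0.2 every 4,000 iterations.

```python
import torch

def finetune_schedule(model, train_one_iteration):
    """Fine-tuning loop for the fracture experiments (illustrative)."""
    optimizer = torch.optim.SGD(model.parameters(), lr=0.001,
                                momentum=0.9, weight_decay=5e-4)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer,
                                                step_size=4000, gamma=0.2)
    for _ in range(12000):
        train_one_iteration(model, optimizer)   # hypothetical training step
        scheduler.step()
```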
Traditional image processing algorithms (geometric operators) are combined with Convolutional Neural Networks to form a new neural network.
471
scitldr
Determinantal point processes (DPPs) are an effective tool to deliver diversity in multiple machine learning and computer vision tasks. Under the deep learning framework, DPP is typically optimized via approximation, which is not straightforward and conflicts somewhat with the diversity requirement. We note, however, that there have been no deep learning paradigms to optimize DPP directly, since it involves matrix inversion, which may result in high computational instability. This fact greatly hinders the wide use of DPP for objectives where DPP serves as a term to measure feature diversity. In this paper, we devise a simple but effective algorithm to address this issue and optimize the DPP term directly, expressed with an L-ensemble in the spectral domain over the Gram matrix, which is more flexible than learning on parametric kernels. By further taking into account some geometric constraints, our algorithm seeks to generate valid sub-gradients of the DPP term in the case when the DPP Gram matrix is not invertible (no gradients exist in this case). In this sense, our algorithm can be easily incorporated with multiple deep learning tasks. Finally, we demonstrate the generalizability and scalability of our method by evaluating a series of different benchmark sequential datasets. Experiments show the effectiveness of our algorithm, indicating promising performance for practical learning problems. Diversity is desired in multiple machine learning and computer vision tasks (e.g., image hashing (; Carreira-Perpinán &), descriptor learning, metric learning and video summarization), in which sub-sampled points or learned features need to spread out through a specific bounded space. Originating from quantum physics, determinantal point processes (DPPs) have shown their power in delivering such properties (b). Compared with other diversity-oriented techniques (e.g., entropy and orthogonality), DPP shows its superiority as it incorporates only one single metric and delivers genuine diversity on any bounded space. Therefore, DPP has been utilized in a large body of diversity-oriented tasks. In general, sample points from a DPP tend to distribute diversely within a bounded space A. Given a positive semi-definite kernel function κ: A × A → R, the probability of a discrete point set X ⊂ A under a DPP with kernel function κ can be characterized as P_κ(X) ∝ det(L), where L is a |X| × |X| matrix with entries L_ij = κ(x_i, x_j) and x_i, x_j ∈ X. L is called the L-ensemble. Note that A is a continuous space, whereas X is finite. In the Hilbert space associated with κ, a larger determinant implies a larger spanned volume, thus the mapped points tend not to be similar or linearly dependent. DPP can be viewed from two perspectives: sampling and learning. A comprehensive introduction to the mathematical fundamentals of DPP for sampling from a discrete space can be found in the literature. Based on this, a line of works has been proposed (a;;). In this paper, we concentrate on learning DPPs. In the learning of DPPs, the term det(L) is typically treated as a singleton diversity measurement and is extended to learning paradigms on continuous spaces (;;). There are generally two lines of strategies to learn DPPs: Approximation. This type of method converts DPP into a simpler format which can ease and stabilize the computation. Low-rank approximation proves powerful in easing the computational burden, in which the Gram matrix is factorized as L = BB^⊤ where B ∈ R^{n×m} with m ≪ n. This decomposition can also reduce the complexity, which is originally cubic in |L|.
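To make the definition above concrete, here is a small NumPy sketch (with our own naming) of the normalized log-probability of an index subset under an L-ensemble DPP, P(X) = det(L_X)/det(L + I):

```python
import numpy as np

def dpp_log_prob(L, subset):
    """log P(X) = log det(L_X) - log det(L + I) for index subset X."""
    L_X = L[np.ix_(subset, subset)]
    return np.linalg.slogdet(L_X)[1] - np.linalg.slogdet(L + np.eye(len(L)))[1]
```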
Kulesza & Taskar (2011b) explicitly expressed the kernel as κ(x, y) = σ_x σ_y δ(x)^⊤ δ(y), where σ measures the intrinsic quality of the feature and δ(·) is a function mapping input x to a feature space. In this sense, the pairwise similarity is calculated in Euclidean feature space with cosine distance. Another line suggests approximating a given distribution by approximating the eigenvalues of the corresponding DPP. As such, the computation can be eased and becomes stable. Following this, DPP has also been applied to some visual tasks, such as video summarization, ranking and image classification. It can be noted that the approximation is not straightforward for DPP, and thus cannot fully deliver the diversity property (e.g., resulting in rank-deficiency). Direct optimization. While the aforementioned methods optimize DPP with specific approximations, a series of efforts also seek to optimize the DPP term directly (;;). In this setting, the whole Gram matrix L corresponding to the pairwise similarity among features is updated directly, which allows accommodating more flexible feature mapping functions rather than an approximation. One work proposed an Expectation-Maximization algorithm to update the marginal kernel of the DPP, K = L(L + I)^{−1}, together with a baseline K-Ascent derived from projected gradient ascent. Another extended DPP from a fixed-point perspective and proposed to optimize DPP upon a lower bound in a variational inference fashion. A key problem of this line of works is that the computation is not differentiable, making it difficult to use in deep learning frameworks. To the best of our knowledge, there is no previous method incorporating DPP as a feature-level diversity metric in deep learning. A key difficulty in doing so is that the calculation of the gradient of det(L) involves matrix inversion, which can be unstable and inaccurate on GPUs. Though K-Ascent seems to be a naive rule, it still needs explicit matrix inversion in the first step before the projection procedure. This fact greatly hinders the tight integration of DPP with deep networks. Some alternative methods seek to reach diversity under more constrained settings: for example, a global pairwise orthogonality constraint on the hyper-sphere, or statistical moments to measure the diversity. However, compared with DPP, such measurements are unable to fully characterize diversity in an arbitrary bounded space. In this paper, rather than providing more efficient DPP solvers, we concentrate on delivering a feasible feature-level DPP integration under the deep learning framework. To this end, we revisit the spectral decomposition of DPP and propose a sub-gradient generation method which can be tightly integrated with deep learning. Our method differs from both approximation and direct optimization by introducing a "differentiable direct optimization" procedure, and thus can produce genuinely diverse features in a continuous bounded space. Our method is stable and scalable to relatively large datasets with a specific mini-batch sampling strategy, which is verified by several experiments on various tasks. Notations: Bold lower case x and bold upper case K represent a vector and a matrix, respectively. det(·) and Tr(·) calculate the determinant and trace of a matrix, respectively. A ⊗ B is the element-wise product of matrices A and B. |X| and |x| measure the cardinality of a finite set X and the L2 length of a vector x, respectively. ⟨x, y⟩ calculates the inner product of two vectors.
x = diag(X) transforms a diagonal matrix X into its vector form x, and vice versa. We abbreviate "positive semi-definite" and "positive definite" as PSD and PD, respectively. R denotes the real numbers.

2.1 DETERMINANTAL POINT PROCESS. The L-ensemble expression of a DPP requires L to be PSD, whereas the kernel expression further constrains K < I (each eigenvalue of K is less than 1). A conversion from L to K can thus be written as K = L(L + I)^{−1}, where det(L + I) is the marginal normalization constant given a specific L. While there is always a conversion from L to K, the inverse may not exist. In practice, one may construct the L-ensemble first, then normalize it into a marginal kernel. This fact may give rise to difficulties for deep networks: since a conversion from K to L might not exist, the network needs to carefully adjust the gradients under specific constraints to ensure the updated L is valid. As L and K share the same eigenvectors v_i, a pair of L and K holds the relation K = Σ_i (λ_i/(λ_i + 1)) v_i v_i^⊤, where λ_i is the ith eigenvalue of L. It is seen that such a conversion is not straightforward to integrate directly with a deep learning framework. Therefore, we optimize the ensemble L directly in this paper.

We briefly introduce the Gaussian kernel in this section, which works on a Hilbert space of infinite dimension. Mercer's theorem ensures the PSD property when constructing new kernels from existing ones under a specific procedure. Such procedures are also employed in multiple kernel learning paradigms (; b;), which are out of the scope of this paper. A Gaussian kernel is defined as κ(x, y) = exp(−|x − y|²/(2σ²)), where σ is a controlling parameter. Thus an L-ensemble matrix becomes L_ij = exp(−|x_i − x_j|²/(2σ²)). According to this definition, L_ii = 1 and for any element in the matrix we have L_ij ∈ (0, 1]. With the Gaussian kernel, we have the nice property 0 ≤ det(L) ≤ 1. This can be easily verified by applying the arithmetic-geometric mean inequality to the eigenvalues of L. Although not tight, this property shows that the determinant value with a Gaussian kernel is bounded. This fact inspires one version of our algorithm detailed in the next section. Throughout this paper, our discussion is based on the Gaussian kernel unless specified.

Given vectorized inputs I_i ∈ R^h, where i = 1, ..., n, our goal is to learn a map f such that the features x_i = f(I_i) spread out within a bounded feature space, x_i ∈ S. Hereafter we take this space to be a bounded Euclidean space (e.g., [−1, 1]^d) without loss of generality. Given any loss function J, the chain rule of the gradient involving DPP is written as ∂J/∂X = (∂J/∂ det(L)) (∂ det(L)/∂L) (∂L/∂X), where X refers to the features before the DPP layer. While calculating ∂J/∂ det(L) and ∂L/∂X is straightforward, the main difficulty lies in the calculation of ∂ det(L)/∂L. We will discuss the calculation of this term in two cases: 1) when the inverse L^{−1} can be stably obtained, we derive the gradient of the DPP term det(L) in Sec 3.1; 2) when L is not invertible or L^{−1} is difficult to calculate, we give a procedure to handle the case by generating valid sub-gradients in Sec 3.2. Since our objective is to diversify features, det(L) will serve as a (partial) objective term to be directly maximized. With kernel κ, a DPP regularization term seeks to maximize the probability of a feature configuration x_i, i = 1, ..., n. As this probability is proportional to det(L), the objective is max det(L). This can become a regularization term wherever diversity is required. Thus, with a general loss function L_G, our aim is to solve min L_G − λ_1 det(L), with a controlling parameter λ_1 ≥ 0.
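A minimal PyTorch sketch of this combined objective on a mini-batch, with our own function name and the Gaussian kernel from Sec 2. When L is invertible, autograd differentiates torch.det via Jacobi's formula, ∂ det(L)/∂L = det(L) L^{−1} for symmetric L, which is exactly the Sec 3.1 gradient; the singular case is where the sub-gradients of Sec 3.2 take over.

```python
import torch

def dpp_regularized_loss(general_loss, features, sigma=0.2, lam=1.0):
    """min L_G - lambda_1 * det(L) on a mini-batch of features (n, d)."""
    d2 = torch.cdist(features, features) ** 2     # pairwise squared distances
    L = torch.exp(-d2 / (2 * sigma ** 2))         # Gaussian L-ensemble, L_ii = 1
    return general_loss - lam * torch.det(L)
```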
For the time being, we assume that the kernel matrix L is invertible (we will discuss the case when L is not invertible in the next section), hence L^{−1} exists. Without loss of generality, we discuss the gradient of the determinant equipped with the Gaussian kernel; for other kernels the derivation is analogous. The entries L_ij can be further factorized along dimensions as L_ij = Π_l exp(−(x_il − x_jl)²/(2σ²)), where x_il is the lth dimension of feature x_i. Using the chain rule, the derivative of det(L) w.r.t. x_il can be written via the entry-wise derivatives ∂L_ij/∂x_il, which follow from the factorization above. This can be more compactly expressed as ∂ det(L)/∂x_il = det(L) Tr(L^{−1} M^(il)), where M^(il) = ∂L/∂x_il is a matrix in which, except for the ith column and row, all remaining elements are 0s; its ijth and jith elements equal ∂L_ij/∂x_il. To ease the computation and fully utilize the chain rule in deep learning architectures, we peel the DPP loss into two layers, and the corresponding gradient product can be expressed as ∂J/∂X = (∂J/∂ det(L)) (∂ det(L)/∂L) (∂L/∂X). While we can use existing packages to obtain ∂L/∂x reliably, a way to stably calculate ∂ det(L)/∂L becomes essential. We detail in the next section how to proceed once this term is hard to calculate.

The calculation of the gradient ∂ det(L)/∂L involves computing the inverse matrix L^{−1}. However, the kernel matrix L is not always invertible. This situation happens iff there exists at least one pair of features x_i and x_j such that x_i = x_j. In this case, there exist two identical columns/rows of L, and the 0 eigenvalue results in non-invertibility. This phenomenon is sometimes caused by the ReLU function, which can map different input values onto an identical one. Even when all features are distinct, numerical precision (typically float numbers on GPU) may also lead to failure. We occasionally observed that the GPU calculation of L^{−1} reports errors even when no eigenvalue is 0. One may imagine a naive replacement of the matrix inverse with the pseudo-inverse, which can be applied to singular matrices. However, the pseudo-inverse will keep the zero eigenvalues intact (still rank-deficient), and the back-propagated gradient will do nothing to increase the determinant value (it is 0 both before and after updates). To address this, we first diverge to consider the objective of DPP, max det(L). Since the DPP term seeks to maximize the determinant, any direction along which det(L) increases from 0 is a valid ascending direction for a degenerate configuration. Thus we give the following definition of a proper sub-gradient (Definition 3.1): a direction L̄ is a proper sub-gradient of det at L if adding a small step along L̄ strictly increases det(L). We see that if a proper sub-gradient L̄ can be found at det(L) = 0, the back-propagation procedure in deep learning can consequently perform calculations using L̄. To obtain such an L̄, we first note that L can be eigen-decomposed as L = UΛU^⊤, since it is symmetric and PSD, where U is the orthogonal eigenvector matrix and Λ's diagonal elements are the corresponding eigenvalues. As L has zero eigenvalues, the rank of Λ is lower than the dimension of L. We sort all eigenvalues into descending order as k = (σ_1, ..., σ_q, 0, ..., 0), where q < n. We then employ a simple yet effective amplification procedure by amplifying any eigenvalue smaller than ∆ to ∆. The amplified eigenvalues are now k̃ = (σ_1, ..., σ_s, ∆, ..., ∆), where s ≤ q. Let the diagonalized amplified eigenvalue matrix be Λ̃ (w.r.t. k̃); then the modified matrix with a small positive determinant can be written as L̃ = UΛ̃U^⊤. For any ε > 0, we can choose a sufficiently small ∆ such that det(L̃) < ε. Thus the continuity of this procedure is guaranteed. The difference L̄ = L̃ − L can be viewed as a proper ascending direction w.r.t. L, as by adding L̄, det(L + L̄) becomes above 0 as well as arbitrarily small.
It is trivial to prove that L̄ is a sub-gradient in a neighborhood of L; thus L̄ is also a proper sub-gradient sufficing Definition 3.1. This procedure is summarized in Algorithm 1 and is termed DPPSG. Intuitively, once an identical or too-close feature pair x_i and x_j is encountered, this procedure tries to enhance the diversity by separating them apart from each other. Inspired by the geometric inequality, we provide an improved version of the algorithm taking into account the property of the Gaussian kernel. First, it is easy to show that the function Π_i σ_i is concave on the feasible set Σ_i σ_i = n (the diagonal of a Gaussian Gram matrix is all 1s, thus the trace is n), and the maximal objective is reached iff σ_i = 1. Therefore, any point b = (1 − θ)(σ_1, ..., σ_n) + θ(1, ..., 1) will increase the objective Π_i σ_i. By letting θ be a small value, the proper sub-gradient becomes U diag(b − σ) U^⊤, where σ = (σ_1, ..., σ_n). This version of the update differs from DPPSG as it generates sub-gradients under geometric constraints. The method is summarized in Algorithm 2 and is termed DPPSG*. During implementation, the irregularity of L is examined to determine whether to adopt normal back-propagation (Sec 3.1) or the sub-gradient (Sec 3.2). This can be done by verifying whether the determinant value in the forward pass is less than a pre-defined small value β. This proper sub-gradient based back-propagation method can be integrated with other deep learning objectives involving the matrix determinant. We emphasize that our method is different from the line of gradient-projection based methods, such as K-Ascent. While projection-based methods calculate the true gradient and then project it back to a feasible set, our method generates proper sub-gradients directly (both algorithms are sketched in code below). Without explicitly computing matrix inversion, sub-gradients in this case are more feasible for deep learning frameworks.

We employed a balanced sampling strategy for each mini-batch. Assuming the batch size is n and there are c classes in total, in each mini-batch the distribution of samples generally follows the whole training-sample distribution over the c classes. This strategy is considered to utilize the intrinsic diversity of the original data. Besides, mini-batch sampling constrains the overhead of the DPP computation to depend only on the batch size, which can be viewed as a constant in practice.
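The two sub-gradient generators can be stated compactly; the following PyTorch sketch mirrors the descriptions of Algorithms 1 and 2 above (function names and the default θ are ours).

```python
import torch

def dppsg(L, delta=1e-7):
    """Algorithm 1 (DPPSG): amplify eigenvalues below delta to delta and
    return the proper sub-gradient L_tilde - L."""
    sig, U = torch.linalg.eigh(L)
    L_tilde = U @ torch.diag(torch.clamp(sig, min=delta)) @ U.T
    return L_tilde - L

def dppsg_star(L, theta=1e-3):
    """Algorithm 2 (DPPSG*): move the eigenvalues toward the all-ones vector,
    increasing prod(sigma_i) under the trace constraint sum(sigma_i) = n."""
    sig, U = torch.linalg.eigh(L)
    b = (1 - theta) * sig + theta * torch.ones_like(sig)
    return U @ torch.diag(b - sig) @ U.T
```

In a training loop, one would compute det(L) in the forward pass and substitute dppsg(L) (or dppsg_star(L)) for ∂ det(L)/∂L whenever det(L) falls below the threshold β.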
The WGAN loss for the discriminator takes the standard form sketched above, and the corresponding generator loss is incorporated into the overall training objective.

In this section, we conduct two experiments: one concerns metric learning and image hashing on MNIST and CIFAR to verify the effectiveness of our method, while the other is a local-descriptor retrieval task based on HardNet.

MNIST. This simple dataset is suitable for revealing the geometric properties of the learned features on various tasks. We test the image retrieval task equipped with a contrastive loss, where L(i) indicates the label of the i-th feature and x_i is the learnt feature. We employ a simple network structure for MNIST, consisting of 3 convolutional layers (Conv) followed by 2 fully connected layers (FC), with batch normalization applied to each layer. The filter numbers of the Convs are 32, 32 and 64, respectively, and all filters are 5 × 5. Max-pooling is used after the first Conv; average pooling is adopted for the other two. The dimensions of the last FCs are 200 and 2 (for 2D visualization). The performance can be found in Table 1 and the feature distribution is visualized in Figure 1. From Table 1, we observe that retrieval performance is enhanced by adding the DPP and WGAN regularization terms: the DPP term improves retrieval by keeping feature points from concentrating too much, so the learned map around the separating boundary can be much smoother. As the retrieval task typically requires the existence of top-k inter-class samples rather than a concentrating property, the DPP term is preferable. In Figure 1(c), the feature points generally fall into the pre-defined space [−1, 1]², so the utilization of this space is high without sacrificing retrieval performance. Typically, DPPSG* is slightly superior to DPPSG; in the following tests we therefore only report performance under the DPPSG* setting (termed DPP* for short).

We conduct image hashing on CIFAR-10, which seeks to produce binary codes for images. To this end, we follow the binary hashing-code generation procedure of DCH, in which the code layer is activated by a sigmoid function; the number of neurons in the second-to-last layer equals the number of bits of the hashing code. We anticipate that DPP regularization can improve the utilization of the binary code space, since the codes can spread out over it. We test two code lengths (12 and 16 bits). We visualize the 16-bit feature distribution using t-SNE in Fig. 2(a) and (b), and compare the binary code histograms in (c); the quantitative results are summarized in Table 2. As the compared method jointly solves binary code generation and classification, we report both retrieval performance (mAP) and classification performance (Acc). Our method significantly enhances the utilization of the binary space while keeping performance almost intact.

For CIFAR-100 metric learning, we employ all the convolutional layers of VGG-19 as the base and discard its final fully connected layers, so the output size of this base network is 1 × 1 × 512. We then concatenate 3 fully connected layers with ReLU activation after each, with dimensions 512, 100 and 20, respectively. The contrastive loss is applied in the 20-dimensional space, and the whole network is trained from scratch. Aside from mAP, we also report the top-k average precision (Precision-k) and the Wasserstein distance to the pre-defined distribution (Gap to P). The performance at the coarse (20 classes) and fine (100 classes) levels can be found in Table 3.
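As a concrete illustration of the CIFAR-100 backbone just described, here is a hedged PyTorch sketch (the class name is ours, a recent torchvision is assumed, and whether a ReLU also follows the final 20-d layer is ambiguous in the text, so we include it as stated):

```python
import torch.nn as nn
from torchvision.models import vgg19

class MetricNet(nn.Module):
    """VGG-19 conv base + 3 FC layers (512 -> 100 -> 20), trained from scratch."""

    def __init__(self):
        super().__init__()
        self.base = vgg19(weights=None).features   # 32x32 input -> 512 x 1 x 1
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 100), nn.ReLU(),
            nn.Linear(100, 20), nn.ReLU(),         # contrastive loss applied here
        )

    def forward(self, x):
        return self.head(self.base(x))
```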
In either setting, DPP+WGAN significantly outperforms the baseline. We thus infer that the DPP term serves as a regularizer not only for the features themselves but also for the smoothness of the mapping: since it keeps the features from concentrating too much, the learned mapping should come from a smoother function family.

Batch size vs. performance. We study how the batch size influences performance under DPP regularization by reporting CIFAR-100 100-class retrieval results with different batch sizes. The results are shown in Table 2. Generally, a larger batch size achieves better mAP. We also note that the computational efficiency of DPP sub-gradients is high: they add only a slight overhead (even with batch size 500) to each iteration of common back-propagation under the contrastive loss, which can be neglected. (Only a flattened table header survives here: Precision-k (%) and Gap to P, each at k = 10, 20, 50, with a row labeled "On coarse"; the table body is lost.) Table 4: Image hashing on CIFAR-10; "Acc" is the classification accuracy.

This test utilizes the UBC Phototour dataset, which consists of three subsets (Liberty, Notre Dame, and Yosemite), each with around 400k 64 × 64 local patches. We follow the standard protocol of treating two subsets as the training set and the third as the test set. As each pair of matched image patches includes only two patches, there is no need for balanced sampling in this test. We simply add the DPP regularization term to the objective of the state-of-the-art algorithm HardNet, with a batch size of 512. We report FPR (false positive rate) and FDR (false discovery rate) following prior work; results are summarized in Table 5. Several baselines are selected for comparison (SIFT, MatchNet, TFeat-M, L2Net and HardNet). As the authors improved HardNet after the NeurIPS submission, we also compare with the latest version (termed HardNet+). We only run our method under the DPPSG* setting and name it HardDPP. With DPP regularization, the performance of HardNet is further enhanced. Note that no WGAN is integrated into HardNet, as the mapped features lie on the surface of a unit hypersphere. While the sampling strategy of HardNet emphasizes the embedding behavior near the margin, DPP regularization additionally attends to the global feature distribution. Table 5: Performance comparison on UBC Phototour; Notre, Yose and Lib are short for "Notre Dame", "Yosemite" and "Liberty", respectively. Following prior work, we report FPR at a true positive rate of 95%; the best results are in bold.

In this paper, we investigated the problem of learning diverse features via a determinantal point process within a deep learning framework. To overcome the instability in computing the gradient, which involves a matrix inverse, we developed an efficient and reliable procedure called proper spectral sub-gradient generation. The generated proper sub-gradient can replace the true gradient and performs well in applications. We also considered how to constrain the features to a bounded space, since doing so makes the behavior of the network more predictable; to this end, we further incorporated a Wasserstein GAN into our framework. Together, DPP+WGAN showed significant gains on both common criteria and feature-space utilization.

A APPENDIX. MNIST: some parameters are set as follows: α = 5, λ_1 = 10³, λ_2 = 10⁶, margin µ = 0.8, Gaussian-kernel variance σ = 0.2 and ∆ = 10⁻⁷. During training, the batch size is set to 200.
In each iteration of DPP and WGAN training, we uniformly sample 2,000 adversarial points from the space [−1, 1]². We adopt RMSprop with a learning rate of 10⁻⁴ for all tests. In the testing stage, we sample 2,000 points from [−1, 1]² and calculate the Wasserstein distance to all the testing samples; this procedure is conducted 10 times and the mean distance is reported.

CIFAR-10 image hashing: the parameters of the hashing-related experiments are as follows: Gaussian-kernel variance σ = 2; the coefficient of the DPP loss term is λ_1 = 10², and the coefficients of the WGAN discriminator and generator loss terms are 10 and 1, respectively. The batch size is set to 500, and the learning rate is initialized to 0.01 and decayed by a factor of 0.1 every 150 epochs. The total number of epochs is 350, and we adopt the Adam optimizer to update the model. The remaining parameter setting is: α = 1, λ_1 = 10³, λ_2 = 10³, margin µ = 0.8, Gaussian-kernel variance σ = 0.2 and ∆ = 10⁻⁶; the rest of the settings are the same as in the MNIST test.

A.2 CRITERIA PRECISION-k AND MAP-k. For the image retrieval task, we adopt the top-k mean average precision (abbreviated mAP-k) to evaluate performance. We also report the top-k average precision (abbreviated Precision-k), calculated as Precision-k = (1/k) Σ_{j=1}^{k} I[b_j = b], where b is the class of the query, b_j is the class of the j-th retrieved item, and I is the indicator function. mAP-k is then the version of Precision-k re-weighted by the rank positions of the correct hits.

A.3 OVERHEAD OF DPP. Calculating an SVD or a matrix inversion over a large number of features can be time-consuming. However, in our setting we employ a common deep learning practice, the mini-batch, to avoid such computation on the whole dataset: the mini-batch strategy limits the computational cost so that the extra overhead of DPP depends only on the batch size (the rest of the network has no impact on this overhead). Therefore, although the complexity of our method is O(n³), n corresponds only to the batch size rather than the whole sample count, which is much more manageable in practice. We report the average overhead on the CIFAR-10 hashing task with varying batch sizes (100, 200, 250, 400 and 500) on a GTX 1080 GPU in Table 6. Table 6: Overhead of a single batch and of a DPP calculation on the CIFAR-10 hashing task with varying batch size; time is in seconds, where "overhead-all" and "overhead-DPP" refer to the average time cost (s) of a single batch for all computation and for the DPP computation only (forward and backward), respectively. We can conclude that, compared to the other computation, the extra overhead of DPP is small (even in a network as simple as the CIFAR-10 hashing one). Besides, a batch size of up to 500 is sufficient for most applications. In practice, we did not employ any trick to reduce this overhead (as it is outside the paper's focus) but simply used the standard functions provided by PyTorch.

For the MNIST verification test, we employed a simple backbone: {conv1 (5 × 5) + maxpool + conv2 (5 × 5) + avepool + conv3 (5 × 5) + avepool + fully_con1 (200-d) + fully_con2 (2-d) + fully_con3 (10-d) + contrastive loss}. We add the DPP and WGAN regularization to the features at the "fully_con2" layer, which is 2-dimensional and thus better for visualization. For the CIFAR-10 image hashing task, we employ the same network structure as the widely cited DCH method; the DPP and WGAN losses are applied to the second-to-last fully connected layer (whose dimension corresponds to the number of bits of the hashing code).
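Returning to the retrieval criteria defined in A.2, here is a minimal NumPy sketch; it is our reconstruction of the lost displays, and the exact mAP-k weighting follows the common convention, which is an assumption.

```python
import numpy as np

def precision_at_k(ranked_labels, query_label, k):
    """Precision-k: fraction of the top-k retrieved items in the query's class."""
    return float(np.mean(ranked_labels[:k] == query_label))

def map_at_k(ranked_labels, query_label, k):
    """mAP-k: Precision-k re-weighted by the rank positions of the correct hits."""
    hits = (ranked_labels[:k] == query_label).astype(float)
    if hits.sum() == 0:
        return 0.0
    precisions = np.cumsum(hits) / (np.arange(len(hits)) + 1.0)
    return float((precisions * hits).sum() / hits.sum())
```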
For the CIFAR-100 metric learning task, we employ all the convolutional layers of VGG-19, concatenated with 3 fully connected layers (of dimension 512, 100 and 20). The DPP and WGAN losses, together with the contrastive loss, are applied to the final 20-dimensional fully connected layer. The network is trained from scratch without any pre-training.

A.5 DEGRADATION ON CIFAR-10 IMAGE HASHING. To explain the performance degradation with DPP on the hashing task, take Figure 2(c) as an example. The original DCH features concentrate on a few codes (generally 10, corresponding to the 10 classes), while the DPP features diffuse over almost the whole discrete space. Consequently, when retrieving the k-th closest hashing code, DCH finds it within a small search radius, whereas in the DPP feature space one must greatly enlarge the search radius because the distribution is much more even. In this sense, DPP inevitably causes some degradation, since a large search radius is more likely to reach a code from another class. We therefore regard "utilization vs. mAP" as an intrinsic conflict that requires a trade-off.
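The contrastive loss used throughout these experiments is referenced but never displayed; a standard formulation consistent with the reported margin µ = 0.8 would be the following (the exact form is our assumption):

```python
import torch

def contrastive_loss(x1, x2, same, margin=0.8):
    """Pairwise contrastive loss: pull same-class pairs together, push
    different-class pairs at least `margin` apart. `same` is 1.0 for pairs
    with L(i) == L(j), else 0.0."""
    d = torch.norm(x1 - x2, dim=-1)
    pos = same * d.pow(2)
    neg = (1.0 - same) * torch.clamp(margin - d, min=0.0).pow(2)
    return (pos + neg).mean()
```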
We proposed a specific back-propagation method via proper spectral sub-gradients to integrate the determinantal point process into the deep learning framework.
The quality of a machine translation system depends largely on the availability of sizable parallel corpora. For the recently popular Neural Machine Translation (NMT) framework, the data sparsity problem can become even more severe: with a large number of tunable parameters, the NMT model may overfit to the existing language pairs while failing to understand the general diversity in language. In this paper, we advocate broadcasting every sentence pair into two groups of similar sentences to incorporate more diversity in language expression, which we name the parallel cluster. We then define a more general cluster-to-cluster correspondence score and train our model to maximize this score. Since direct maximization is difficult, we derive its lower bound as our surrogate objective, which is found to generalize point-to-point Maximum Likelihood Estimation (MLE) and point-to-cluster Reward Augmented Maximum Likelihood (RAML) as special cases. Based on this novel objective function, we delineate four potential systems that realize our cluster-to-cluster framework and test their performance on three recognized translation tasks, each with forward and reverse translation directions. In each of the six experiments, our four proposed parallel systems consistently and significantly outperform the MLE baseline, RL (Reinforcement Learning) and RAML systems. Finally, we perform a case study to empirically analyze the strength of the cluster-to-cluster NMT framework.

Recently, the encoder-decoder neural architecture has surged in popularity for machine translation. In this framework, the encoder builds a representation of the source sentence and the decoder uses its previous RNN hidden state and an attention mechanism to generate the target translation; the attention mechanism helps the model better memorize the input information and further boosts performance. To train the attentive encoder-decoder architecture, the Maximum Likelihood Estimation (MLE) algorithm has been widely used, which maximizes the point-to-point (one sentence to one sentence) log-likelihood of the data pairs in a given dataset. However, this algorithm suffers severely from the data sparsity problem; in other words, maximizing only the likelihood of the existing language pairs can make the model blind to all the non-existing but similar sentence pairs. The large neural model may thus overfit to certain prototypes in the training set while failing to generalize to unseen but similar scenarios at test time. Two families of pseudo-learning remedies have been proposed. 1) Data-Centroid Augmentation (RAML): perturbs the ground-truth target with random edits to synthesize additional training samples, at the risk of hurting its semantic meaning. 2) Model-Centroid Augmentation (RL): BID13 and related work leverage model-generated candidates as pseudo training samples, weighted with rewards to enhance model learning; by exploring self-generated candidates, the model learns to understand the diversity in the output space. In such pseudo-learning algorithms, both RAML and RL can be interpreted as broadcasting a target ground truth into a cluster of analogues while leaving the source input untouched, which, though it helps the model understand target diversity, fails to capture input diversity.
In order to explore diversity on both sides, we advocate a novel and general cluster-to-cluster framework of pseudo-learning, which first broadcasts both the source and target sentences into clusters and then trains the model to comprehend their correspondence, as described in Figure 1.

In this paper, we first introduce the concept of the parallel cluster, then design the cluster-to-cluster correspondence score as our optimization objective, from which we derive its lower-bound KL-divergence as our surrogate training objective. To realize the proposed framework, we design four parallel systems and apply them to three recognized machine translation tasks with both forward and reverse translation directions; all four systems demonstrate advantages over existing competing algorithms across the six translation tasks. In the appendices, we draw samples from the parallel clusters and further analyze their properties to verify our motivation. The contributions of this paper can be summarized as follows. 1) We are the first to propose the cluster-to-cluster concept, which provides a novel perspective on current sequence-to-sequence learning problems. 2) We delineate the framework and arrive at a novel KL-divergence loss function that generalizes several existing algorithms as special cases, providing a high-level understanding of those algorithms.

2 RELATED LITERATURE. Exposure bias and train-test loss discrepancy are two major issues in training sequence prediction models. Many research works BID16 BID13 BID9 have attempted to tackle these issues by adding reward-weighted samples drawn from the model distribution into the training data via a Reinforcement Learning BID17 framework. By exposing the model to its own distribution, these methods are reported to achieve significant improvements. BID13 and BID16 advocate optimizing the sequence model as a stochastic policy to maximize its expected task-level reward. Though RL was not originally designed to resolve the data sparsity problem, model-centroid training samples can indeed alleviate data sparseness by exposing the sequence-to-sequence model to more unseen scenarios. One problem of the previous RL works is that the input information is still restricted to the dataset, which fails to teach the model to comprehend source diversity; the cluster-to-cluster framework instead augments many similar input sentences to account for source-language diversity.

One successful approach to data augmentation in neural machine translation is Reward Augmented Maximum Likelihood (RAML), which proposes a novel payoff distribution to augment training samples based on a task-level reward (BLEU, edit distance, etc.). To sample from this intractable distribution, the sampling process is stratified: first an edit distance is sampled, then random substitution/deletion operations are performed. Following RAML, BID11 introduces a novel softmax Q-distribution that reveals RAML's relation to the Bayes decision rule, and proposes an alternative sampling strategy: first randomly replacing an n-gram of the ground-truth sentence, then using the payoff distribution to compute the corresponding importance weight with local normalization. These two approaches augment the target-side data, exposing the model to diverse scenarios and improving its robustness.
We draw our inspiration from RAML, with the difference that, instead of being based on a task-level reward, a learnable payoff function (the cluster distribution) is used in our approach to take more latent structure into account, such as semantic meaning and language fluency. From the cluster distribution, we can sample semantically and syntactically correct candidates to train the model. In addition, our more general bilateral data-augmentation strategy gives the model more capacity to generalize.

In order to utilize the large amounts of monolingual data available to the current NMT framework, different strategies have been designed; the most common methods fall into these categories: 1) using large monolingual corpora to train a language model and integrating it to enhance language fluency BID2; 2) using self-learning to transform the monolingual data into bilingual form BID14; 3) using a reconstruction strategy to leverage monolingual data in NMT training BID3. Although our motivation to augment training data aligns with these semi-supervised algorithms, our framework differs substantially from them: 1) we do not rely on additional monolingual data to boost NMT performance; 2) though we jointly train forward and backward translation models, as advocated in BID3 and related work, our joint algorithm does not involve any interaction between the two models (they can be trained independently).

We define the parallel cluster as two groups of weighted sentences C(Y*) and C(X*) whose similarities (BLEU, METEOR, etc.) with Y* and X* are above a certain threshold M. Every sample X or Y is associated with a normalized weight p(X|X*) or p(Y|Y*) denoting the chance that the sentence X or Y is drawn from the corresponding cluster; a schematic diagram of the parallel cluster is given in Figure 1. We discuss how the weights are defined and computed in the following sections.

Upon the definition of the parallel cluster, we further design the cluster-to-cluster correspondence score CR_c→c(X*, Y*) as the log-scaled expected likelihood that a random sentence X in the source cluster C(X*) is translated into a sentence Y in the target cluster C(Y*), which denotes the translatability between the two clusters: CR_c→c(X*, Y*) = log E_{X∼p(X|X*), Y∼p(Y|Y*)}[p(Y|X)]. The higher the correspondence score, the more likely the two clusters correspond to each other. Note that the correspondence score reflects both the NMT's and the cluster's quality: assuming the cluster is ideal, the score measures the translatability from a source sentence X to a target sentence Y; assuming the NMT is ideal, the score measures the quality of the cluster (its capability to rank paraphrases by semantic similarity).

Based on the definitions of the parallel cluster and the correspondence score, we design the cluster-to-cluster framework's objective as maximizing the empirical correspondence score CR_c→c(X*, Y*; D) on a dataset D, regularized by the target cluster's entropy H(p(Y|Y*)). By applying Jensen's inequality to the objective Obj_c→c, we can derive its lower bound; from this, we notice that the cluster-to-cluster objective is lower-bounded by a negative KL-divergence, −KL(p(Y|Y*) ‖ p(Y|X*)).
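To make the correspondence score concrete, here is a small Monte-Carlo sketch over finite weighted clusters (the `nmt_logprob` interface is hypothetical):

```python
import torch

def correspondence_score(nmt_logprob, src_cluster, tgt_cluster):
    """CR(X*, Y*) = log E_{X,Y}[p(Y|X)] over finite weighted clusters.

    src_cluster / tgt_cluster: lists of (sentence, normalized weight) pairs;
    nmt_logprob(x, y): returns log p(Y=y | X=x) under the forward NMT model.
    """
    total = torch.tensor(0.0)
    for x, wx in src_cluster:
        for y, wy in tgt_cluster:
            total = total + wx * wy * nmt_logprob(x, y).exp()
    return torch.log(total)
```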
Given this lower bound, we can use it as a surrogate for maximizing the correspondence score; flipping its sign, we define the loss function as KL(p(Y|Y*) ‖ p(Y|X*)). We verify theoretically that this lower-bound KL-divergence generalizes Maximum Likelihood (MLE) and Reward Augmented Maximum Likelihood (RAML) as special cases, obtained by instantiating the cluster distribution as the Kronecker delta function δ(Y|Y*) or as the payoff distribution, respectively.

In this section, we minimize the proposed KL-divergence KL(p(Y|Y*) ‖ p(Y|X*)) so as to raise the lower bound of the regularized cluster-to-cluster correspondence. Its derivatives w.r.t. the NMT parameters can be written in two forms, namely parallel sampling and NMT broadcasting, which differ in their Monte-Carlo proposal distributions. • Parallel sampling: sample candidates independently from the two clusters and re-weight the pairwise samples with a translation confidence w(Y|X, X*). • Translation broadcasting: sample candidates from one cluster, broadcast them through the NMT to construct their counterparts, and re-weight them by a cluster confidence c(Y|Y*, X*). More specifically, translation broadcasting's samples are more NMT-aware in the sense that they incorporate the NMT's knowledge to generate correspondents. The parallel sampling mode works like a two-sided RAML, while translation broadcasting works more like a mixed RAML-RL.

In this paper, we design the cluster distribution in two manners, namely inadaptive (pre-computed, not trained) and adaptive (trained during optimization). Both cluster designs concentrate around the ground truth according to a sentence-similarity metric, and a cutoff criterion rejects samples whose task-level score is lower than the threshold M of Eq. 1. • Inadaptive cluster: we use two non-parametric distributions q(X|X*) and q(Y|Y*) to denote the source and target parallel clusters, based on the similarity between a sample X/Y and the ground truth X*/Y*. We follow the payoff distribution to define our inadaptive cluster: q(Y|Y*) = exp(R(Y, Y*)/τ) / Z(Y*), where R(Y, Y*) denotes the task-level reward (BLEU, CIDEr, METEOR, etc.), Z(Y*) normalizes over the whole output space, and the hyper-parameter temperature τ controls how concentrated the distribution is around the correct target Y*. Since the task-level reward only considers string-level matching (precision, recall, etc.) while ignoring semantic coherence, the generated samples, though lexically similar, are prone to semantic and syntactic mistakes, which might have a counter-effect on the NMT model. • Adaptive cluster: we use two parametric models p(X|X*) and p(Y|Y*) to denote the source and target adaptive clusters, which follow the encoder-decoder neural architecture but take the ground truths X*, Y* as inputs. The adaptive cluster is designed to fulfill two requirements: 1) proximity to the ground truth, i.e. randomly sampled candidates should have high similarity with the ground truth; 2) high correspondence score, i.e. the parallel clusters should be highly correlated and translatable. Combining these two goals guarantees mutual dependence between the source and target clusters while retaining similarity to the original ground truth.
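Stepping back to the inadaptive cluster, here is a tiny sketch of the (unnormalized) payoff weighting with the cutoff M; the reward function is left abstract, and τ = 0.8 and M = 0.5 are the values reported later in the experiments.

```python
import math

def payoff_weight(y, y_star, reward_fn, tau=0.8, threshold=0.5):
    """Unnormalized inadaptive-cluster weight q(Y|Y*) ∝ exp(R(Y, Y*) / tau);
    candidates below the similarity threshold M are rejected outright."""
    r = reward_fn(y, y_star)          # task-level reward, e.g. sentence-BLEU
    if r < threshold:
        return 0.0                    # cutoff criterion
    return math.exp(r / tau)
```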
Returning to the adaptive cluster, its optimization target combines the two requirements above: the cluster is trained to stay close to the ground truth while maximizing the correspondence with its opposite cluster through the translation model. During optimization, we fix the forward NMT p(Y|X) and the cluster p(X|X*) to update p(Y|Y*), and we fix the backward NMT p(X|Y) and p(Y|Y*) to update p(X|X*). Due to the mutual dependence between the adaptive clusters and the translation models, we alternate between updating the clusters and the translation models.

In this section, we advocate combining the forward and backward translation directions in a joint system that simultaneously learns four models: the forward NMT p(Y|X), the backward NMT p(X|Y), the source cluster p(X|X*) and the target cluster p(Y|Y*). We exploit different scenarios for combining these four models and design four parallel systems, elaborated below. Systems A and B use the inadaptive (non-parametric) cluster and thus require optimizing only the two translation systems; system A applies the parallel sampling algorithm while B applies translation broadcasting. In contrast, systems C and D apply adaptive (parametric) clusters, thus requiring simultaneous optimization of both the NMT models and the clusters; system C applies parallel sampling while system D applies translation broadcasting. These four systems exhibit different characteristics, described below (in a slight abuse of notation, the same confidence symbols are used for both translation directions).

System-A: we use the inadaptive cluster with the parallel sampling strategy to train the NMT models under a forward-backward joint objective; in its derivatives with respect to the parameters η and γ, the parallel candidates sampled from the source and target cluster distributions are weighted by the scaled translation scores w(X|Y, Y*) and w(Y|X, X*) during optimization. System-B: with the same loss function as system A, translation broadcasting is used to compute the derivatives instead of parallel sampling. This system works similarly to reinforcement learning, where the normalized environmental rewards R̃(X, X*) and R̃(Y, Y*) guide the model's policy search, and the gradient can be interpreted as a form of Monte-Carlo policy gradient BID18. System-C: unlike systems A and B, two adaptive cluster distributions are used in system C, so the NMT models and clusters are jointly optimized during training. To train the NMT system, parallel sentence pairs (X, Y) are first sampled from the two independent cluster distributions, and the translation confidence scores w(Y|X, X*) and w(X|Y, Y*) guide the training. The derivatives with respect to the clusters contain two terms, candidates sampled from the translation system and candidates sampled from the cluster itself; together these ensure the parallel clusters' translatability and their similarity to the ground truth. System-D: with the same loss function as system C, translation broadcasting is used to compute the derivatives instead of parallel sampling. System D works quite similarly to system B but differs in that the cluster confidence scores c(X|X*, Y*) and c(Y|Y*, X*) are leveraged in training the NMT models, which are richer than the task-level rewards R̃(X, X*) and R̃(Y, Y*).
System-D adopts the same gradient formulas as system C to update the clusters. The details of the training algorithm for systems A, B, C and D are given in Algorithm 1.

To evaluate our cluster-to-cluster NMT framework on differently sized (small, medium and large) and different-lingual (German-English and Chinese-English) translation tasks, we conduct experiments on three datasets (IWSLT, LDC, WMT); for dataset details, please refer to Appendix C. For comparability, we follow the existing papers in adopting similar network architectures, and we apply a learning-rate annealing strategy to further boost our baseline NMT system. In our experiments, both the NMT and the adaptive cluster models are based on a one-layer attention-based encoder-decoder with a maximum sentence length of 62 for both the encoder and decoder. During training, ADADELTA is adopted with ε = 10⁻⁶ and ρ = 0.95 to separately optimize the NMT's and the adaptive cluster's parameters. During decoding, a beam size of 8 is used to approximate the full search space. We compute the threshold similarity M via sentence-BLEU; small-scale experiments indicated that M = 0.5 yields the best performance, so we simply stick to this setting throughout all experiments. To avoid excessive hyper-parameter tuning when building the inadaptive cluster, we follow prior work and select the temperature τ = 0.8 in all experiments. For comparison, RAML and RL systems are also implemented with the same sequence-to-sequence attention model, following BID18; for details of our RL and RAML implementations, please refer to Appendix A.

We can see from the corresponding results table that our system D achieves significant improvements in both directions. Though our baseline system is already extremely strong, the cluster-to-cluster framework further boosts the NMT system by over 1.0 BLEU point. (The accompanying baseline table is only partially recoverable: MIXER BID13 20.10/21.81; BSO BID19 24.03/26.36; A-C 27.56/28.53; Softmax-Q BID11 27.66/28.77; our implementation of RL BID18 29.10/29.70 and 24.40/24.75; our implementation of RAML, remaining cells lost.) Table 4: Experimental results on the NIST Chinese-English machine translation task.

WMT2014 German-English: we can see from Table 5 that system C achieves the strongest results on both the WMT14 EN-DE and DE-EN tasks, outperforming the baseline system by over 1.1 BLEU points. It is worth noting that our one-layer RNN model even outperforms deep multi-layer RNN models that stack 4-7 LSTM layers. Using the cluster-to-cluster framework, our one-layer RNN model can fully exploit the dataset and learn to generalize better.

From the above 24 parallel cluster-to-cluster experiments, we observe general improvements over the fine-tuned baseline systems as well as over our implemented RL/RAML systems. To understand the strength of our cluster-to-cluster framework, we give more detailed comparisons with existing competing algorithms below. Comparison with RAML: from the above three tables, we observe general improvements from RAML on the different tasks (except LDC Chinese-English), but RAML still suffers from two problems: on one hand, its benefit is restricted by its neglect of input variability; on the other hand, without considering semantic context and language fluency, RAML's random-replacement strategy may introduce noisy and wrong bilingual pairs that hurt translation performance (as in the LDC Chinese-English translation task). (Table 5: Experimental results on the WMT-2014 German-English machine translation task.)
Our adaptive cluster takes more semantic context into account to enclose more rational paraphrases, and the bilateral augmentation also gives the model more chances to see varied inputs. Comparison with RL: we also observe widespread improvements from the RL algorithm BID13; exposing the model to self-generated translations improves performance. Our methods inherit this merit and further enhance it with source and target clusters, which improve the model with more sampled bilingual pairs from both sides. Comparison between the four parallel systems: among our proposed four parallel systems, C and D achieve better performance than A and B across the experiments, which confirms the advantage of the adaptive clusters. The adaptive cluster is more flexible and better optimized for the target than the inadaptive one: unlike the payoff distribution used in the inadaptive cluster, which only takes task-level reward into account, the adaptive cluster learns a more sophisticated criterion and thus assigns more rational probabilities to sampled candidates. We give more detailed analysis and visualization in the appendices to demonstrate what the source and target clusters look like; learning curves of the four systems and visualizations of adaptive clusters are provided in Appendix D and Appendix E, giving more intuition about cluster-to-cluster learning.

In this paper, we propose a cluster-to-cluster learning framework and incorporate this concept into neural machine translation. Our designed systems have proved efficient in helping the current NMT model generalize on both the source and target sides. In the cluster-to-cluster framework, the cooperation of four agents augments valuable samples, alleviates data sparsity, and achieves significant improvement over strong baseline systems. We believe the concept of cluster-to-cluster learning is applicable to a wide range of natural language and computer vision tasks, which we will explore in the future.

Appendices. A SYSTEM DESIGN. A sequence-to-sequence problem (machine translation) can be viewed as producing an output sequence Y = (y_1, y_2, ..., y_T), y_t ∈ A, given an input X. Given input-target pairs (X, Y*), the generated sequence Y is evaluated at test time with a task-specific score R(Y, Y*). Recurrent neural networks have been widely used for sequence-to-sequence prediction tasks. The basic idea is to first encode the input sequence into variable-length feature vectors, then apply an attention mechanism to compute a weighted average of the input vectors, summarizing a context vector; together with the previous hidden state and previous label, this is fed into the decoder RNN to predict the next state and its label. In our approach, an attention-based encoder-decoder is leveraged for both the translation and cluster models.

A.1 RL NMT. In order to train our RL system as well as the adaptive cluster, we need a task-level reward as the driving signal. Instead of directly applying BLEU or another evaluation metric, we advocate a surrogate n-gram match interpolation, where N_n denotes the number of n-gram matches between Y and Y*. In order to alleviate sequence-level reward sparseness, we further split it into a series of local rewards that drive the model's policy search at every time step. Formally, we write the step-wise reward r(y_t|y_{1:t−1}, Y*) as a clipped n-gram matching score:
Here N(Y, Ỹ) represents the number of occurrences of the n-gram Ỹ in the sequence Y; specifically, if a certain n-gram y_{t−n+1:t} appears in the reference and is not repeated more often than needed, we assign the corresponding matching score to y_t. The policy gradient is then computed from these step-wise rewards in the standard Monte-Carlo form.

A.2 RAML NMT. In order to sample from the intractable payoff distribution for systems A/B as well as for our implemented RAML system, we adopt the stratified sampling technique described in prior work: given a sentence Y*, we first sample an edit distance m, and then randomly select m positions in which to replace the original labels. For each sentence, we randomly sample four candidates for RAML training.

B MATHEMATICAL ANALYSIS. We optimize the parameters of our cluster-to-cluster models by minimizing the lower-bound KL-divergence instead of maximizing the original correspondence score. To characterize the difference between the two objectives, we analyze their relationship: the surrogate lower-bounds the original objective, with the gap induced by the Jensen inequality used in its derivation. Since both the cluster and translation confidence scores c(Y|Y*, X*) and w(Y|X, X*) require computing the marginalized probability p(Y|X*), known to be intractable for variable-length sequences, we adopt different approximation mechanisms: in systems A and C, we simplify the marginal using p_η(Y|X*); in systems B and D, since Y is broadcast through the translation system, the marginalized probability p̃(Y|X*) is close to one, so we discard this factor when approximating c(Y|Y*, X*).

C DATASETS. IWSLT2014 dataset: the IWSLT2014 German-English training set contains 153K sentence pairs, while the validation set contains 6,969 sentence pairs. The test set comprises dev2010, dev2012, tst2010, tst2011 and tst2012, totaling 6,750 sentences. We adopt 512 as the RNN hidden-state size and 256 as the embedding size. We use a bidirectional encoder and initialize both its own decoder states and the coach's hidden state with the learner's last hidden state. The experimental results for the IWSLT2014 German-English and English-German translation tasks are summarized in the corresponding table.

To give a more intuitive view of what the cluster distribution looks like, we draw some samples from the well-trained cluster distribution of the LDC Chinese-English translation task. We observe that most paraphrases are based on three types of modification, namely form changing, synonym replacement and simplification; most modifications do not alter the original meaning of the ground truth. Encompassing more expressions with close meanings can ease the data sparseness problem and enhance generalization. We also draw two samples from the source and target clusters, demonstrating how point-to-point correspondence can be expanded into cluster-to-cluster correspondence.

Reference: taihsi natives seeking work in other parts of the country are given a thorough UNK before being hired, and later their colleagues maintain a healthy distance at first. Cluster: taihsi natives seeking work in other parts of the country are given a thorough UNK before being employed, and later their colleagues maintain a healthy distance at first. Property: simplification. Reference: i once took mr tung chee-hwa to a squatter area where he found beyond imagination that a narrow alley could have accommodated so many people.
Cluster: i once took mr tung chee-hwa to a squatter area where he found beyond imagination that a narrow alley have a lot of people.
We invent a novel cluster-to-cluster framework for NMT training, which can better capture both source- and target-language diversity.
The capability of making interpretable and self-explanatory decisions is essential for developing responsible machine learning systems. In this work, we study the learning-to-explain problem in the scope of inductive logic programming (ILP). We propose Neural Logic Inductive Learning (NLIL), an efficient differentiable ILP framework that learns first-order logic rules that can explain the patterns in the data. In experiments, compared with state-of-the-art models, we find NLIL is able to search for rules that are 10× longer while remaining 3× faster. We also show that NLIL can scale to large image datasets, i.e. Visual Genome, with 1M entities.

Figure 1: A scene-graph can describe the relations of objects in an image. NLIL can utilize this graph and explain the presence of the objects Car and Person by learning first-order logic rules that characterize the common sub-patterns in the graph. The explanation is globally consistent and can be interpreted as commonsense knowledge.

Recent years have witnessed the growing success of deep learning models in a wide range of applications. However, these models are also criticized for the lack of interpretability in their behavior and decision-making process, and for being data-hungry. The ability to explain its decisions is essential for developing a responsible and robust decision system. On the other hand, logic programming methods, in the form of first-order logic (FOL), are capable of discovering and representing knowledge in explicit symbolic structure that can be understood and examined by humans.

In this paper, we investigate the learning-to-explain problem in the scope of inductive logic programming (ILP), which seeks to learn first-order logic rules that explain the data. Traditional ILP methods (Galárraga et al., 2015) rely on hard matching and discrete logic for rule search, which is not tolerant of ambiguous and noisy data. A number of works have been proposed for developing differentiable ILP models that combine the strengths of neural and logic-based computation. Methods such as ∂ILP are referred to as forward-chaining methods: rules are constructed from a set of pre-defined templates and evaluated by applying them to the data multiple times to deduce new facts that lie in a held-out set (related work is surveyed in Appendix A). However, the general ILP problem involves several steps that are NP-hard: (i) the rule search space grows exponentially in the length of the rule; (ii) assigning the logic variables to be shared by predicates grows exponentially in the number of arguments, which we refer to as the variable binding problem; (iii) the number of rule instantiations needed for formula evaluation grows exponentially in the size of the data. To alleviate these complexities, most works limit the search length to within 3 and resort to template-based variable assignments, limiting the expressiveness of the learned rules (detailed discussion in Appendix B). Still, most works are limited to small-scale problems with fewer than 10 relations and 1K entities.

On the other hand, multi-hop reasoning methods have been proposed for the knowledge base (KB) completion task. Methods such as NeuralLP can answer KB queries by searching for a relational path that leads from the subject to the object. These methods can be interpreted in the ILP domain, where the learned relational path is equivalent to a chain-like first-order rule.
Compared to their template-based counterparts, methods such as NeuralLP are highly efficient in variable binding and rule evaluation. However, they are limited in two aspects: (i) chain-like rules represent only a subset of the Horn clauses and are limited in expressing complex rules such as those shown in Figure 1; (ii) the relational path is generated while conditioning on the specific query, meaning that the learned rule is only valid for the current query. This makes it difficult to learn rules that are globally consistent in the KB, which is an important aspect of a good explanation.

In this work, we propose Neural Logic Inductive Learning (NLIL), a differentiable ILP method that extends the multi-hop reasoning framework to the general ILP problem. NLIL is highly efficient and expressive. We propose a divide-and-conquer strategy that decomposes the search space into 3 subspaces arranged in a hierarchy, each of which can be searched efficiently using attention. This enables us to search for rules 10× longer while remaining 3× faster than the state-of-the-art methods. We maintain the global consistency of the rules by splitting training into a rule-generation and a rule-evaluation phase, where the former is conditioned only on the predicate type, which is shared globally. More importantly, we show that a scalable ILP method is widely applicable for model explanation in the supervised learning scenario: we apply NLIL to the Visual Genome dataset, learning explanations for 150 object classes over 1M entities, and demonstrate that the learned rules, while remaining interpretable, have predictive power comparable to densely supervised models and generalize well with less than 1% of the data.

Supervised learning typically involves learning classifiers that map an object from its input space to a score between 0 and 1. How can one explain the outcome of a classifier? Recent works on interpretability focus on generating heatmaps or attention maps that self-explain a classifier. We argue that a more effective and human-intelligible explanation is a description of the classifier's connections with other classifiers. For example, consider an object detector with classifiers Person(X), Car(X), Clothing(X) and Inside(X, X′) that detect whether a region contains a person, a car, or clothing, or whether one region is inside another, respectively. To explain why a person is present, one can leverage its connection with other attributes, such as "X is a person if it is inside a car and wearing clothing", as shown in Figure 1.

This intuition draws a close connection to a long-standing problem in the first-order logic literature, i.e. inductive logic programming (ILP). A typical first-order logic system consists of 3 components: entities, predicates and formulas. Entities are objects x ∈ X; for a given image, a certain region is an entity x, and the set of all possible regions is X. Predicates are functions that map entities to 0 or 1, e.g. Person: x → {0, 1}, x ∈ X; classifiers can be seen as soft predicates. Predicates can take multiple arguments, e.g. Inside is a predicate with 2 inputs, and the number of arguments is referred to as the arity. An atom is a predicate symbol applied to logic variables, e.g. Person(X) or Inside(X, X′); a logic variable such as X can be instantiated into any object in X. A first-order logic (FOL) formula is a combination of atoms using the logical operations {∧, ∨, ¬}, which correspond to logical and, or and not respectively.
Formally, given a set of predicates P = {P_1, ..., P_K}, we define the explanation of a predicate P_k as a first-order logic entailment P_k(X, X′) ← A, where P_k(X, X′) is the head of the entailment (it becomes P_k(X) for a unary predicate) and A is the rule body: a general formula, e.g. in conjunctive normal form (CNF), made of atoms whose predicate symbols come from P and whose logic variables are either the head variables X, X′ or body variables Y = {Y_1, Y_2, ...}. By using logic variables, the explanation becomes transferable: it represents "lifted" knowledge that does not depend on specific data, and it can be easily interpreted. For example, Eq. represents the knowledge that "if an object is inside a car with clothing on it, then it is a person". To evaluate a formula on actual data, one grounds it by instantiating all variables with objects; in Figure 1, Eq. is applied to specific regions of an image.

Given a relational knowledge base (KB) consisting of a set of facts ⟨x_i, P_i, x_i′⟩, where P_i ∈ P and x_i, x_i′ ∈ X, the task of learning FOL rules of the form of Eq. that entail a target predicate P_* ∈ P is called inductive logic programming. For simplicity, we consider unary and binary predicates in what follows, though the definition extends to predicates of higher arity as well.

The ILP problem is closely related to the multi-hop reasoning task on knowledge graphs. Similar to ILP, this task operates on a KB with a set of predicates P, but here the facts are stored per predicate: P_k is represented as a binary matrix M_k ∈ {0, 1}^{|X|×|X|}, an adjacency matrix such that ⟨x_i, P_k, x_j⟩ is in the KB if and only if the (i, j) entry of M_k is 1. Given a query q = ⟨x, P_*, x′⟩, the task is to find a relational path x → ... → x′ such that the two query entities are connected. Formally, let v_x be the one-hot encoding of object x, of dimension |X|. The t-th hop of reasoning along the path is represented as v^(t) = (M^(t))ᵀ v^(t−1), where M^(t) is the adjacency matrix of the predicate used at the t-th hop, and v^(t) is the path feature vector whose j-th element counts the number of unique paths from x to x_j. After T steps of reasoning, the score of the query is computed as the number of paths arriving at x′, i.e. (v_{x′})ᵀ v^(T).

For each q, the goal is to (i) find an appropriate T and (ii) for each t ∈ [1, ..., T], find the appropriate M^(t) to multiply, such that this score is maximized. These two discrete choices can be relaxed into learning a weighted sum of the scores from all possible paths and a weighted sum of the matrices at each step. Let T(x; w) be the soft path-selection function parameterized by (i) the path attention vector s_ψ = [s_ψ^(1), ..., s_ψ^(T)], which softly picks the best path of length between 1 and T that answers the query, and (ii) the operator attention vectors S_φ = [s_φ^(1), ..., s_φ^(T)], where s_φ^(t) softly picks the M^(t) at the t-th step (we omit the dependence on the M_k for notational clarity). These two attentions are generated by a model with learnable parameters w.
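Before turning to the objective, here is a minimal NumPy sketch of the hard (un-relaxed) multi-hop scoring just described:

```python
import numpy as np

def multi_hop_score(v_x, v_xp, hop_matrices):
    """Score of query <x, P*, x'> along a fixed relational path.

    v_x, v_xp: one-hot encodings of x and x'; hop_matrices: the adjacency
    matrix M^(t) chosen at each hop. v^(t) = M^(t).T @ v^(t-1) accumulates
    path counts; the score is the number of paths that end at x'.
    """
    v = v_x.astype(float)
    for M in hop_matrices:
        v = M.T @ v
    return float(v_xp @ v)
```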
Therefore, the objective is defined as maximizing, over w, the expected scores (v_{x′})ᵀ T(x; w) of the training queries.

Learning the relational path in multi-hop reasoning can be interpreted as solving an ILP problem with chain-like FOL rules of the form P_*(X, X′) ← P^(1)(X, Y_1) ∧ P^(2)(Y_1, Y_2) ∧ ... ∧ P^(T)(Y_{T−1}, X′). Compared to template-based ILP methods such as ∂ILP, this class of methods is efficient in rule exploration and evaluation. However, (P1) generating explanations for supervised models puts a high demand on rule expressiveness: the chain-like rule space is limited in expressive power because it represents a constrained subspace of the Horn clauses (for example, Eq. is a Horn clause but is not chain-like), and the ability to efficiently search beyond the chain-like rule space is still lacking in these methods. On the other hand, (P2) the attention generator T(x; w) depends on x, the subject of a specific query q, meaning that the explanation generated for target P_* can vary from query to query; this makes it difficult to learn FOL rules that are globally consistent in the KB.

In this section, we show the connection between the multi-hop reasoning methods and the general logic entailment defined in Eq.. Then we propose a hierarchical rule space to solve (P1), i.e. we extend the chain-like space for efficient learning of more expressive rules.

In Eq., variables that appear only in the body are under an existential quantifier. We can turn Eq. into Skolem normal form by replacing all existentially quantified variables with functions of X and X′. If the functions are known, the Skolem form is much easier to evaluate than Eq., because grounding it only requires instantiating the head variables; the remaining body variables are then determined by the deterministic functions.

The functions could in principle be arbitrary, but which functions can one actually utilize? We adopt the notions of section 2.2 and treat each predicate as an operator, so that we obtain a subspace of the functions Φ = {φ_1, ..., φ_K}: for a binary predicate P_k ∈ B, the operator maps a subject encoding to the encoding of its related objects (e.g. φ_k(v) = M_kᵀ v), while for a unary predicate P_k ∈ U, the operator takes no input and is parameterized with a diagonal matrix, where U and B are the sets of unary and binary predicates respectively. Intuitively, given a subject entity x, φ_k returns the set embedding representing the object entities that, together with the subject, satisfy the predicate P_k. For example, let v_x be the one-hot encoding of an object region in an image; then φ_Inside(v_x) returns the objects that spatially contain the input box. For a unary predicate such as Car(X), its operator φ_Car = M_Car·1 takes no input and returns the set of all objects labelled as car.

Since we only use Φ, a subspace of the functions, the existential variables that can be represented by operator calls, denoted Ŷ, also form a subset Ŷ ⊆ Y. This is slightly more constrained than Eq.: for example, in Person(X) ← Car(Y), Y cannot be interpreted as an operator call from X. However, we argue that such rules are generally trivial; for instance, it is not likely that one infers "an image contains a person" simply by checking "there is a car in the image". Therefore, any FOL formula that complies with Eq. can be converted into the operator form and vice versa: for example, Eq. can be written as Person(X) ← Car(φ_Inside(X)) ∧ On(φ_Clothing, X), where the variables Y_1 and Y_2 are eliminated. Note that this conversion is not unique; Car(φ_Inside(X)) can also be written as Inside(X, φ_Car). The variable binding problem now becomes equivalent to the path-finding problem of section 2.2, where one searches for the appropriate chain of operator calls that can represent the variables in Ŷ.
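A tiny NumPy sketch of the operator view (the unary convention φ_k() = M_k·1 follows the Car example above):

```python
import numpy as np

def binary_operator(M_k, v_subject):
    """φ_k(v): set embedding of the objects related to the subject(s) via P_k."""
    return M_k.T @ v_subject

def unary_operator(M_k):
    """φ_k(): takes no input; returns the set of all objects satisfying P_k.
    M_k is diagonal for unary predicates (e.g. φ_Car = M_Car @ 1)."""
    return M_k @ np.ones(M_k.shape[0])
```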
Figure 2: Factor graphs of example chain-like, tree-like and conjunction-of-rules structures; each rule type is a subset of the latter. Succ stands for successor.

As discussed above, Eq. is equivalent to a chain-like rule. We want to extend this notion to represent more expressive rules. To do this, we introduce the notion of a primitive statement ψ. Recall that an atom is defined as a predicate symbol applied to specific logic variables; similarly, we define a predicate symbol applied to the head variables or to those in Ŷ as a primitive statement. For example, in Eq., ψ_1 = Car(φ_Inside(X)) and ψ_2 = On(φ_Clothing, X) are two primitive statements. Similar to an atom, each primitive statement is a mapping from the input space to a scalar confidence score, ψ: X × X → [0, 1], obtained by evaluating the statement's path encodings against M_k and squashing the result with σ(·), the sigmoid function (we give a unary ψ a dummy input x′ for notational convenience). Its value is computed as in Eq., except that the target v_{x′} is replaced by another relational path. This makes it possible to represent "correlations" between two variables, as well as paths that start from a unary operator, e.g. φ_Eye. To see this, one can view a FOL rule as a factor graph with the logic variables as nodes and the predicates as potentials; running the operator calls essentially performs belief propagation over the graph in a fixed direction. As shown in Figure 2, primitive statements are capable of representing tree-like factor graphs, which significantly improves the expressive power of the learned rules.

Similarly, Eq. can be relaxed into weighted sums. In Eq., all relational paths are summed with a single path attention vector s_ψ; we extend this notion by assigning separate vectors to each argument of a statement ψ. Let S_ψ, S_ψ′ ∈ R^{K×T} be the path attention matrices for the first and second arguments of all statements in Ψ, i.e. s_{ψ,k} and s′_{ψ,k} are the path attention vectors of the first and second arguments of the k-th statement.

We then want to further extend the rule search space by exploring logic combinations of primitive statements via {∧, ∨, ¬}, as shown in Figure 2. To do this, we utilize the soft logic operations ¬p = 1 − p and p ∧ q = p · q for p, q ∈ [0, 1]; we do not include an explicit logic ∨ operation because it can be implicitly represented as ¬(¬p ∧ ¬q). Let Ψ = {ψ_1, ..., ψ_K} be the set of primitive statements with all possible predicate symbols. We define the formula set at the l-th level as F_l, where each element f ∈ F_l is called a formula, with f: X × X → [0, 1]. Intuitively, we define the logic combination space in a similar way to path-finding: the initial formula set F_0 contains only the primitive statements Ψ, because they are formulas by themselves. For the (l−1)-th formula set F_{l−1}, we concatenate it with its logic negation, yielding F̂_{l−1}; each formula at the next level is then the logic and of two formulas from F̂_{l−1}. Enumerating all possible combinations at each level is expensive, so we set a memory limit C on the maximum number of combinations each level keeps track of. In other words, each level F_l searches for C logic-and combinations of formulas from the previous level F̂_{l−1}, such that the c-th formula at the l-th level is f_lc = f′ ∧ f″ with f′, f″ ∈ F̂_{l−1}. As an example, for Ψ = {ψ_1, ψ_2} and C = 2, one possible level sequence is F_0 = {ψ_1, ψ_2}, F_1 = {ψ_1 ψ_2, ψ_1(1 − ψ_2)}, etc. To collect the rules from all levels, the final level L is the union of the previous sets, i.e. F_L = F_0 ∪ ... ∪ F_{L−1}.
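A minimal NumPy sketch of the soft logic operations and one (hard, pre-relaxation) combination level; this is our rendering, with the pair selection left explicit:

```python
import numpy as np

def soft_not(p):
    return 1.0 - p                      # soft ¬

def soft_and(p, q):
    return p * q                        # soft ∧ ; soft ∨ as 1 - (1-p)*(1-q)

def next_level(prev, pairs):
    """One combination level: concatenate prev with its negation (F̂_{l-1}),
    then take the soft-∧ of chosen pairs (i, j) to form the C formulas of F_l."""
    hat = np.concatenate([prev, soft_not(prev)])
    return np.array([soft_and(hat[i], hat[j]) for i, j in pairs])
```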
Note that this construction does not explicitly forbid trivial rules such as ψ_1 · (1 − ψ_1), whose truth value is independent of the input; this is alleviated by introducing nonexistent queries during training (detailed discussion in section 5). Again, rule selection can be parameterized in the weighted-sum form with respect to attentions. We define the formula attention tensors S_f, S_f′ ∈ R^{(L−1)×C×2C}, such that f_lc is the product of two summations over the previous level's outputs, weighted by the attention vectors s_{f,lc} and s′_{f,lc} respectively: f_lc(x, x′) = (s_{f,lc}ᵀ f̂_{l−1}(x, x′)) · (s′_{f,lc}ᵀ f̂_{l−1}(x, x′)), where f̂_{l−1}(x, x′) ∈ R^{2C} stacks the outputs of all formulas f ∈ F̂_{l−1} with arguments x, x′. Finally, we want to select the best explanation and compute the score for each query: letting s_o be the attention vector over F_L, the output score is the weighted sum s_oᵀ f_L(x, x′). An overview of the relaxed hierarchical rule space is illustrated in Figure 3.

We have thus defined a hierarchical rule space, as shown in Figure 3, in which the discrete selections of operators, statements and logic combinations are all relaxed into weighted sums with respect to a series of attention parameters S_φ, S_ψ, S_ψ′, S_f, S_f′ and s_o. In this section, we solve (P2): we propose a differentiable model that generates these attentions without conditioning on the specific query. The goal of NLIL is to generate data-independent FOL rules; in other words, for each target predicate P_*, its rule set F_L and the final output rule should remain unchanged for all queries q = ⟨x, P_*, x′⟩ (different from the query-conditioned setting above). To do this, we define the learnable embeddings of all predicates as H = [h_1, ..., h_K]ᵀ ∈ R^{K×d}, and the embeddings for the "dummy" arguments X and X′ as e_X, e_X′ ∈ R^d. We define the attention generation model as a function of h_*, the embedding of P_*, so that the attentions vary only with respect to P_*.

As shown in Figure 4, we propose a stack of three Transformer networks for the attention generator T. Each module is designed to mimic the actual evaluation that would happen during operator calls, primitive-statement evaluation and formula computation, respectively, using neural networks and the "dummy" embeddings; the attention matrices generated during this simulated evaluation process are kept for evaluating the relaxed rules. A MultiHeadAttn is a standard Transformer module that takes a query Q and an input value V (which is internally transformed into keys and values) and returns an output value O and an attention matrix S, where d is the latent dimension and q, v are the query and value dimensions respectively. Intuitively, S encodes the "compatibility" between the query and the value, and O represents the "outcome" of a query given its compatibility with the input.

Operator search: for target predicate P_*, we modulate the embedding matrix H with h_*, yielding Ĥ, so that rule generation is predicate-specific. Let q_φ^(t) be the learnable t-th-step operator query embedding. In the operator Transformer module, V̂_φ is the dummy input embedding representing the starting points of the paths, and e_φ is a learnable operator encoding such that Q̂_φ represents the embeddings of all operators Φ; the resulting value then encodes the outputs of the operator calls of the K predicates, and we aggregate these outputs with another MultiHeadAttn with respect to the single query q_φ^(t).
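Stepping back to the relaxed formula space defined above, here is a small NumPy sketch of one attention-weighted combination level and the final scoring (the shapes follow the text; the exact layout is our assumption):

```python
import numpy as np

def soft_level(prev_out, S1, S2):
    """Attention-relaxed combination level.

    prev_out: values of the C formulas of level l-1 on one grounding (shape C);
    S1, S2: attention matrices of shape (C, 2C), rows softly selecting over
    F̂_{l-1} = [prev; 1 - prev]."""
    hat = np.concatenate([prev_out, 1.0 - prev_out])      # shape 2C
    return (S1 @ hat) * (S2 @ hat)                        # shape C

def output_score(final_out, s_o):
    """Final query score: attention-weighted sum over the collected rules F_L."""
    return float(s_o @ final_out)
```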
Primitive statement search: Let the output embeddings of the T relational paths be stacked into a matrix in R^{T×d}. The path attentions are generated with two MultiHeadAttns: e_ψ and e'_ψ are the first- and second-argument encodings, such that Q_ψ and Q'_ψ encode the respective arguments of each statement in Ψ, and the two attention modules compute the compatibility between paths and arguments. Finally, a FeedForward aggregates the selections; its output V_ψ ∈ R^{K×d} represents the results of all statement evaluations in Ψ. Formula search: Let Q_{f,l}, Q'_{f,l} ∈ R^{C×d} be the learnable queries for the first and second arguments of the formulas at the lth level, and let V_{f,0} = V_ψ. The formula attentions are generated analogously: e_+ and e_¬ are learnable embeddings such that V̂_{f,l−1} represents the positive and negative states of the formulas at the (l−1)th level. As in the statement search, the compatibility between the logic-and arguments and the previous formulas is computed with two MultiHeadAttns, and the embeddings V_{f,l} of the formulas at the lth level are aggregated by a FeedForward. Finally, let q_o be the learnable final output query, from which the output attention s_o is generated. The training of NLIL consists of two phases: rule generation and rule evaluation. During generation, we run the attention generator to obtain the attentions S_φ, S_ψ, S'_ψ, S_f, S'_f and s_o for all target predicates P_*. For the evaluation phase, we sample a mini-batch of queries {⟨x, P_*, x'⟩, y} and evaluate the formulas with the relaxed rules; here, y is the query label indicating whether the triplet exists in the KB or not. We sample nonexistent queries to prevent the model from learning trivial rules that always output 1; in the experiments, these negative queries are sampled uniformly from the entries of the target query matrix M_* that are 0. The objective then becomes minimising the prediction loss between the rule outputs and the query labels. Since the attentions are generated differentiably, the loss is back-propagated through the attentions into the Transformer networks for end-to-end training. During training, the results of operator calls and logic combinations are averaged via the attentions. For validation and testing, we evaluate the model with explicit FOL rules extracted from the learned attentions: each attention vector, such as s_φ, is a distribution over random variables k ∈ [1, K], and the weighted sum is the expectation over the corresponding M_k, so one can extract explicit rules by sampling from these distributions. However, since we are interested in the best rules, and the attentions usually become highly concentrated on one entity after convergence, we replace sampling with arg max, taking the one-hot encoding of the entity with the largest probability mass.
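A sketch of this test-time hardening step (the helper is hypothetical, not the paper's code):

```python
import torch

def harden_attention(s):
    # s: (..., K) soft attention over K choices (e.g. which predicate an
    # operator call selects). Returns a one-hot tensor of the same shape,
    # used at validation/test time to read off an explicit FOL rule.
    idx = s.argmax(dim=-1, keepdim=True)
    return torch.zeros_like(s).scatter_(-1, idx, 1.0)

s_phi = torch.softmax(torch.randn(2, 5), dim=-1)  # 2 operator calls, 5 predicates
print(harden_attention(s_phi))
```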
We first evaluate NLIL on classical ILP benchmarks and compare it with 4 state-of-the-art KB completion methods in terms of accuracy and efficiency. We then show that NLIL is capable of learning FOL explanations for object classifiers on a large image dataset when scene-graphs are present. Though each scene-graph corresponds to a small KB, the total number of graphs makes the task infeasible for all classical ILP methods; NLIL overcomes this via efficient stochastic training. We evaluate NLIL together with two state-of-the-art differentiable ILP methods, NeuralLP and ∂ILP, and two structure embedding methods, TransE and RotatE. The detailed experimental setup is available in Appendix C. Benchmark datasets: (i) the Even-and-Successor (ES) benchmark involves two unary predicates, Even(X) and Zero(X), and one binary predicate Succ(X, Y); the goal is to learn FOL rules over a set of integers, and the benchmark is evaluated with 10, 50 and 1K consecutive integers starting at 0; (ii) FB15K-237 is a subset of the Freebase knowledge base containing general knowledge facts; (iii) WN18 is a subset of WordNet containing relations between words. Statistics of the datasets are provided in Table 2. Knowledge base completion: All models are evaluated on the KB completion task. The benchmark datasets are split into train/valid/test sets, and the model is tasked with predicting the probability that a fact triplet (query) is present in the KB. We use Mean Reciprocal Rank (MRR) and Hits@10 as evaluation metrics (see Appendix C for details). Results on the Even-and-Successor benchmark are shown in Table 5a. Since the benchmark is noise-free, we only report the wall-clock time to completely solve the task. As previously mentioned, the forward-chaining method ∂ILP scales exponentially in the number of facts and quickly becomes infeasible for 1K entities, so we skip its evaluation on the other benchmarks. Results on FB15K-237 and WN18 are shown in Table 1. Compared to NeuralLP, NLIL yields slightly higher scores. This is because these benchmarks favor symmetric/asymmetric relations or compositions of a few relations, so most valuable rules already lie within the chain-like search space of NeuralLP, and the improvement gained from NLIL's larger search space is therefore limited. On the other hand, with the Transformer blocks and a smaller model created for each target predicate, NLIL achieves a similar score at least 3 times faster. Compared to the structure embedding methods, NLIL is significantly outperformed by the current state-of-the-art, RotatE, on FB15K. This is expected, because NLIL searches over a highly constrained symbolic space; still, the learned rules remain reasonably predictive, with performance comparable to that of TransE. Scalability for long rules: We demonstrate that NLIL can explore longer rules efficiently by comparing the wall-clock time of NeuralLP and NLIL for one epoch of training at different maximum rule lengths. As shown in Figure 5b, NeuralLP searches over a chain-like rule space and thus scales linearly with the length, while NLIL searches over a hierarchical space and thus grows logarithmically: the search time for length 32 in NLIL is similar to that for length 3 in NeuralLP. The ability to perform ILP efficiently extends the applications of NLIL beyond canonical KB completion. For example, in visual object detection and relation learning, supervised models can learn to generate a scene-graph (as shown in Figure 1) for each image: it consists of nodes, each labeled with an object class, with each pair of objects connected by one type of relation. The scene-graph can then be represented as a relational KB on which one can perform ILP. Learning FOL rules on such outputs of a supervised model is beneficial, as it provides an alternative way of interpreting model behavior in terms of its relations to other classifiers that are consistent across the dataset. To show this, we conduct experiments on the Visual Genome dataset. The original dataset is highly noisy, so we use a pre-processed version available as the GQA dataset. The scene-graphs are converted into a collection of KBs, with statistics shown in Table 2. We filter out predicates with fewer than 1500 occurrences, leaving 213 predicates in the processed KBs. We then perform ILP to learn explanations for the top 150 objects in the dataset.
Quantitatively, we evaluate the learned rules on predicting object class labels on a held-out set in terms of R@1 and R@5. As none of the ILP baselines scale to this benchmark, we compare NLIL with two supervised baselines: (i) MLP-RCNN, an MLP classifier over RCNN features of the object (available in the GQA dataset); and (ii) Freq, a frequency-based baseline that predicts the object label as the most frequently co-occurring object class for the relation that contains the target. This baseline is nontrivial: as has been noted for Visual Genome, a large number of triples are highly predictive given only the relation type and either the object or the subject. Explaining objects with rules: Results are shown in Table 3. The supervised method achieves the best scores, as it relies on highly informative visual features. On the other hand, NLIL achieves a comparable R@1 score relying solely on KBs with sparse binary labels, and it outperforms Freq significantly. This means the FOL rules learned by NLIL go beyond the superficial correlations exhibited by the dataset. We verify this finding by showing the rules for top objects in Table 4. Induction for few-shot learning: Logic inductive learning is data-efficient, and the learned rules are highly transferable. To see this, we vary the size of the training set and compare the R@1 scores of the 3 methods. As shown in Figure 5c, NLIL maintains a similar R@1 score with less than 1% of the training set. In this work, we propose Neural Logic Inductive Learning, a differentiable ILP framework that learns explanatory rules from data. We demonstrate that NLIL can scale to very large datasets while searching over complex and expressive rules. More importantly, we show that a scalable ILP method is effective in explaining the decisions of supervised models, which provides an alternative perspective for inspecting the decision process of machine learning systems. Inductive Logic Programming (ILP) is the task of summarizing the underlying patterns shared in the data and expressing them as a set of logic programs (rules or formulae). Traditional ILP methods such as AMIE+ (Galárraga et al., 2015) and RLvLR rely on explicit search-based rule mining with various pruning techniques. These works can scale up to very large knowledge bases, but their complexity grows exponentially in the number of variables and predicates involved, and the acquired rules are often restricted to Horn clauses with a maximum length of less than 3, limiting their expressiveness. Moreover, compared to the differentiable approach, traditional methods rely on hard matching and discrete logic for rule search, which lacks tolerance for ambiguous and noisy data. The state-of-the-art differentiable forward-chaining methods focus on rule learning with predefined templates, typically a Horn clause with one head predicate and two body predicates with chain-like variables, e.g. P(X, Y) ← Q(X, Z) ∧ R(Z, Y). To evaluate the rules, one starts with a set of facts and repeatedly applies the rules to every possible triple until no new facts can be deduced; the deduced facts are then compared with a held-out ground-truth set. Rules learned in this way are first-order, i.e. data-independent, and can be readily interpreted. However, the deduction phase can quickly become infeasible on a larger set.
Although ∂ILP proposes to alleviate this by performing only a fixed number of deduction steps, works of this type generally scale only to KBs with fewer than 1K facts and 100 entities. Differentiable backward-chaining methods such as NTP (Rocktäschel & Riedel) are more efficient in rule evaluation, and NTP 2.0 can scale to large KBs such as WordNet; however, FOL rules are still searched with templates, so expressiveness remains limited. Another differentiable ILP method, the Neural Logic Machine (NLM), learns to represent logic predicates with tensorized operations. NLM is capable of both deductive and inductive learning on predicates with unknown arity; however, as a forward-chaining method it suffers from the same scalability issue as ∂ILP, since it involves a permutation operation over the tensors when performing logic deductions, making it difficult to scale to real-world KBs. Furthermore, the inductive rules learned by NLM are encoded implicitly in the network parameters, so it does not support representing rules with explicit predicate and logic variable symbols. Multi-hop reasoning: Multi-hop reasoning methods such as NeuralLP construct rules on-the-fly when given a specific query. NeuralLP adopts a flexible ILP setting: instead of pre-defining templates, it assumes that a chain-like Horn clause can be constructed to answer the query, and each reasoning step in the chain can be represented efficiently as a matrix multiplication. The resulting algorithm is highly scalable compared to its forward-chaining counterparts and can learn rules on large datasets such as Freebase. However, this approach reasons over a single chain-like path, and the path is sampled by performing random walks that are independent of the task context, limiting rule expressiveness. Moreover, the FOL rule is generated while conditioning on the specific query, making it difficult to extract rules that are globally consistent. Link prediction with relational embeddings: Besides multi-hop reasoning methods, a number of works perform KB completion using learnable embeddings for KB relations; these learn to map KB relations into a vector space and predict links with scoring functions (e.g. Balažević et al., 2019). NTN, on the other hand, parameterizes each relation as a neural network. In these approaches, embeddings are used to predict links directly, so the predictions cannot be interpreted as explicit FOL rules; this differs from NLIL, where predicate embeddings are used to generate data-independent rules. Standard ILP is difficult and involves several procedures that have been proved NP-hard. The complexity comes from three levels. First, the search space of formulas is vast: the body of the entailment can be arbitrarily long, and the same predicate can appear multiple times with different variables (for example, the Inside predicate in the earlier example appears twice). Most ILP works constrain the entailment to a Horn clause, i.e. a body that is a flat conjunction of literals, with the length limited to 3 for large datasets. Second, constructing formulas also involves assigning logic variables that are shared across predicates, which we refer to as variable binding. For example, to express that a person is inside a car, we use X and Y to represent the region of the person and that of the car, and the same two variables are used in Inside to express their relation.
Different bindings lead to different meanings. For a formula with n arguments (the earlier example has 7), there are O(n^n) possible assignments. Existing ILP works resort to constructing formulas either from pre-defined templates or from chain-like variable references, limiting the expressiveness of the learned rules. Finally, evaluating a formula candidate is expensive. A FOL rule is data-independent: to evaluate it, one needs to replace the variables with actual entities and compute its value, which is referred to as grounding or instantiation. Each variable in a formula can be grounded independently, meaning a formula with n variables can be instantiated into O(C^n) grounded formulas, where C is the total number of entities. For example, the earlier example contains 3 logic variables, X, Y and Z; to evaluate the formula, one needs to instantiate these variables into C^3 possible combinations and check whether the rule holds in each case. In many domains, such as object detection, the grounding space is vast (e.g. all possible bounding boxes of an image), making full evaluation infeasible. Many forward-chaining methods such as ∂ILP scale exponentially in the size of the grounding space and are thus limited to small datasets with fewer than 10 predicates and 1K entities. Baselines: For NeuralLP and RotatE we use the officially released implementations; for ∂ILP a third-party implementation; and for TransE a publicly available implementation. Model setting: For NLIL, we create separate Transformer blocks for each target predicate. All experiments are conducted on a machine with an i7-8700K CPU, 32 GB RAM and one GTX 1080 Ti GPU. We use embedding size d = 32 and 3 layers of multi-head attention for each Transformer network. The number of attention heads is set to 4 for the encoder and the first two layers of the decoder; the last layer of the decoder has one attention head, to produce the final attention required for rule evaluation. For the KB completion task, we set the number of operator calls to T = 2 and the number of formula combination levels to L = 0, as most relations in those benchmarks can be recovered by symmetric/asymmetric relations or compositions of a few relations, so complex formulas are not preferred. For FB15K-237, binary predicates are grouped hierarchically into domains; to avoid unnecessary search overhead, we use the 20 most frequent predicates that share the same root domain (e.g. "award", "location") with the head predicate for rule body construction, a treatment similar to prior work. For the VG dataset, we set T = 3, L = 2 and C = 4. Evaluation metrics: Following convention, we use Mean Reciprocal Rank (MRR) and Hits@10. For each query ⟨x, P_k, x'⟩, the model generates a ranking over all possible groundings of predicate P_k, with other ground-truth triplets filtered out. MRR is then the average reciprocal rank of the queries in their corresponding lists, and Hits@10 is the percentage of queries ranked within the top 10.
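For reference, both metrics follow directly from the (filtered) rank of each query's true answer; a small sketch, assuming 1-based ranks:

```python
import numpy as np

def mrr_and_hits_at_10(ranks):
    # ranks: 1-based position of the true grounding in each query's
    # ranking list, with other ground-truth triplets filtered out.
    ranks = np.asarray(ranks, dtype=np.float64)
    return np.mean(1.0 / ranks), np.mean(ranks <= 10)

print(mrr_and_hits_at_10([1, 3, 12, 2]))  # (approx. 0.479, 0.75)
```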
An efficient differentiable ILP model that learns first-order logic rules that can explain the data.
474
scitldr
Neural network-based classifiers parallel or exceed human-level accuracy on many common tasks and are used in practical systems. Yet, neural networks are susceptible to adversarial examples, carefully perturbed inputs that cause networks to misbehave in arbitrarily chosen ways. When generated with standard methods, these examples do not consistently fool a classifier in the physical world due to a combination of viewpoint shifts, camera noise, and other natural transformations. Adversarial examples generated using standard techniques require complete control over direct input to the classifier, which is impossible in many real-world systems. We introduce the first method for constructing real-world 3D objects that consistently fool a neural network across a wide distribution of angles and viewpoints. We present a general-purpose algorithm for generating adversarial examples that are robust across any chosen distribution of transformations. We demonstrate its application in two dimensions, producing adversarial images that are robust to noise, distortion, and affine transformation. Finally, we apply the algorithm to produce arbitrary physical 3D-printed adversarial objects, demonstrating that our approach works end-to-end in the real world. Our results show that adversarial examples are a practical concern for real-world systems. The existence of adversarial examples for neural networks has until now been largely a theoretical concern. While minute, carefully-crafted perturbations can cause targeted misclassification in a neural network, adversarial examples produced using standard techniques lose adversariality when directly translated to the physical world, as they are captured over varying viewpoints and affected by natural phenomena such as lighting and camera noise. This suggests that practical systems may not be at risk because adversarial examples generated using standard techniques are not robust in the physical world. We show that neural network-based classifiers are vulnerable to physical-world adversarial examples, and we introduce a new algorithm for reliably producing physical 3D objects that are adversarial over a distribution of viewpoints. FIG0 shows an example of an adversarial object constructed using our approach, where a 3D-printed turtle is consistently classified as rifle by an ImageNet classifier. In this paper, we demonstrate the efficacy and generality of our method, showing conclusively that adversarial examples are a concern in real-world systems. Methods for transforming ordinary two-dimensional images into adversarial examples are well-known; however, the extent to which these techniques affect specific real-world systems is still an open question. Prior work has shown that adversarial examples generated using standard techniques lose their adversarial nature once subjected to minor transformations BID6 BID5. These results suggest that adversarial examples, while serving as interesting phenomena, have no hope of applying to physical-world systems, where transformations such as viewpoint changes and camera noise are inevitable. Prior techniques attempting to synthesize robust adversarial examples for the physical world have had limited success.
While some progress has been made, current efforts have demonstrated only a small number of datapoints, on nonstandard classifiers, and only in the two-dimensional case, with no clear generalization to three dimensions (further discussed in Section 4). The entirety of prior work has explored generating adversarial examples only in the two-dimensional case, where "viewpoints" can be approximated by affine transformations of an original image. Constructing adversarial examples for the physical world requires the ability to generate entire 3D adversarial objects, which must remain adversarial in the face of complex transformations not applicable to 2D objects, such as 3D rotation and perspective projection. In this work, we definitively show that adversarial examples pose a real threat in the physical world. We propose a general-purpose algorithm for reliably constructing adversarial examples robust over a chosen distribution of transformations, and we demonstrate its efficacy in both the 2D and 3D case. We succeed in producing physical-world 3D adversarial objects that are robust over a large, realistic distribution of 3D viewpoints, proving that the algorithm produces three-dimensional objects that are adversarial in the physical world. Specifically, our contributions are as follows: • We develop Expectation Over Transformation (EOT), a novel algorithm that produces single adversarial examples that are simultaneously adversarial over an entire distribution of transformations. • We consider the problem of constructing 3D adversarial examples under the EOT framework, viewing the 3D rendering process as part of the transformation, and we show that the approach successfully synthesizes adversarial objects. • We fabricate adversarial objects and show that they remain adversarial, demonstrating that our approach works end-to-end in the physical world and that adversarial examples are of real concern in practical deep learning systems. First, we present the Expectation Over Transformation (EOT) algorithm, a general framework allowing for the construction of adversarial examples that remain adversarial over a chosen transformation distribution T. We then describe our end-to-end approach for generating adversarial objects using a specialized application of EOT in conjunction with differentiating through the 3D rendering process. When constructing adversarial examples in the white-box case (that is, with access to a classifier and its gradient), we know in advance a set of possible classes Y and a space of valid inputs X to the classifier; we have access to the function P(y|x) and its gradient ∇P(y|x) for any class y ∈ Y and input x ∈ X. In the standard case, adversarial examples are produced by maximizing the log-likelihood of the target class over an ε-radius ball around the original image, that is: arg max_{x'} log P(y_t | x') subject to ||x' − x||_p < ε. This approach has been shown to be both feasible and effective at generating adversarial examples for any given classifier. However, prior work has shown adversarial examples' inability to remain adversarial even under the minor perturbations inevitable in any real-world observation BID6 BID5. To address this issue, we introduce Expectation Over Transformation (EOT); the key insight behind EOT is to model such perturbations within the optimization procedure.
Rather than optimizing the log-likelihood of a single example, EOT uses a chosen distribution T of transformation functions t taking an input x' supplied by the adversary to the "true" input t(x') perceived by the classifier. Furthermore, rather than simply taking the norm of x' − x to constrain the solution space, EOT instead aims to constrain the expected effective distance between the adversarial and original inputs, which we define as δ(x', x) = E_{t∼T}[d(t(x'), t(x))]. Intuitively, this measures how different we expect the true input to the classifier to be, given our new input. EOT then solves the following optimization problem: arg max_{x'} E_{t∼T}[log P(y_t | t(x'))], subject to E_{t∼T}[d(t(x'), t(x))] < ε. In practice, the distribution T can model perceptual distortions such as random rotation, translation, or addition of noise. However, EOT generalizes beyond simple transformations: it finds examples robust under any perception distribution Q(·; x') parameterized by the generated example x', as long as ∂Q(·; x')/∂x' is well-defined. The objective function can be optimized by stochastic gradient descent, approximating the derivative of the expected value by sampling transformations independently at each gradient descent step and differentiating through the transformation. Given its ability to synthesize robust adversarial examples, we use the EOT framework for generating 2D examples, 3D models, and ultimately physical-world adversarial objects. Within the framework, however, there is a great deal of freedom in the actual method by which examples are generated, including the choice of T, distance metric, and optimization method. In the 2D case, we design T to approximate a realistic space of possible distortions involved in printing out an image and taking a natural picture of it. This amounts to a set of random transformations of the form t(x) = Ax + ε, which are more thoroughly described in Section 3. In the 3D case, we define the transformations t from a texture to a perceived image by mapping the texture to a given 3D object and simulating highly nonlinear functions such as rendering, rotation, translation, and perspective distortion of the object, in addition to the perceptual mechanisms used in the 2D case. Finding adversarial examples across these nonlinear transformations is what allows the transfer of adversarial examples to the physical world, but it also introduces implementation complexities not found in the 2D case. EOT requires the ability to differentiate through the transformation distribution with respect to the input; in the 3D case, this implies differentiating through the 3D renderer with respect to the texture. To do this, we model the rendering process as a sparse matrix multiplication between the texture and a coordinate map, which is generated from a standard 3D renderer. Rather than attempting to differentiate through the complex renderer, we use the renderer to find a coordinate map for each rendering, which maps coordinates on the texture to coordinates in the classifier's field of view. We can then simulate this specific rendering as a sparse matrix multiplication and differentiate through it, using a different matrix multiplication at each sampling step. Once EOT has been parameterized, i.e. once a distribution T is chosen, the issue of actually optimizing the induced objective function remains.
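Before turning to the relaxation, note that the expectation in the EOT objective is straightforward to estimate by sampling. A minimal sketch, assuming a classifier returning log-probabilities and a sample_transform helper that draws a differentiable t from T (both names are ours, not the paper's):

```python
import torch

def expected_log_prob(x_adv, target, classifier, sample_transform, n_samples=16):
    # Monte Carlo estimate of E_{t~T}[log P(y_target | t(x_adv))].
    # Gradients flow through each sampled transformation t.
    total = 0.0
    for _ in range(n_samples):
        t = sample_transform()
        total = total + classifier(t(x_adv))[..., target]
    return total / n_samples
```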
Rather than solving the constrained optimization problem given above, we use the Lagrangian-relaxed form of the problem, as BID0 do in the conventional (non-EOT, single-viewpoint) case: arg max_{x'} ( E_{t∼T}[log P(y_t | t(x'))] − λ E_{t∼T}[d(t(x'), t(x))] ). In order to encourage imperceptibility of the generated images, we set d(x', x) to be the ℓ2 norm in the LAB color space, a perceptually uniform color space where Euclidean distance roughly corresponds to perceptual distance. Note that E_{t∼T}[||LAB(t(x')) − LAB(t(x))||²₂] can be sampled and estimated in conjunction with E[log P(y_t | t(x'))]; in general, the Lagrangian formulation gives EOT the ability to intricately constrain the search space (in our case, using LAB distance) at insignificant computational cost (without computing a complex projection). Our optimization, then, is: arg max_{x'} E_{t∼T}[log P(y_t | t(x')) − λ ||LAB(t(x')) − LAB(t(x))||²₂]. We use projected gradient descent to find the optimum, clipping to the set of valid inputs (e.g. [0, 1] for images).
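Putting the relaxation together, one ascent step of this procedure might look like the following sketch; classifier, sample_transform and to_lab (an RGB-to-LAB conversion) are assumed helpers, and lam/lr are illustrative values rather than the paper's settings:

```python
import torch

def eot_pgd_step(x_adv, x, target, classifier, sample_transform, to_lab,
                 lam=1e2, lr=1e-2, n_samples=16):
    # One projected-gradient-ascent step on the Lagrangian-relaxed EOT
    # objective: maximise the target-class log-likelihood minus the
    # expected LAB-space distance, then clip back to valid images.
    x_adv = x_adv.clone().requires_grad_(True)
    obj = 0.0
    for _ in range(n_samples):
        t = sample_transform()
        logp = classifier(t(x_adv))[..., target]
        dist = (to_lab(t(x_adv)) - to_lab(t(x))).pow(2).sum()
        obj = obj + logp - lam * dist
    (obj / n_samples).backward()
    with torch.no_grad():
        return (x_adv + lr * x_adv.grad).clamp(0.0, 1.0)
```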
We show that we can reliably produce transformation-tolerant adversarial examples in both the 2D and 3D case. Furthermore, we show that we can synthesize and fabricate 3D adversarial objects, even those with complex shapes, in the physical world: these adversarial objects remain adversarial regardless of viewpoint, camera noise, and other similar real-world factors. Adversariality here refers to the propensity of the classifier to predict an attacker-chosen target class given an image or object. In our evaluation, given a source object x with a set of correct classes {y_1, ..., y_n}, an attacker-chosen target class y_adv, and a crafted adversarial example x', we use the following terms to characterize the effectiveness of x': a viewpoint is "adversarial" if the top output of the classifier is y_adv; "correct" if the adversarial example fails and the classifier outputs one of {y_1, ..., y_n}; and "misclassified" if the classifier predicts a class unrelated to that of the source object (i.e. not in {y_1, ..., y_n}) but not the target class y_adv. By randomly sampling a set of viewpoints, we can evaluate an adversarial example x' by examining the proportion of adversarial, correct, and misclassified viewpoints. We evaluate robust adversarial examples in the 2D and 3D case, and furthermore we evaluate physical-world 3D adversarial objects. The two cases are fundamentally different: in the virtual case, we know the distribution of transformations over which we want robustness and can simply apply EOT over that distribution to synthesize a robust adversarial example; in the physical world, we cannot capture the exact distribution unless we perfectly model all physical phenomena, so we must approximate the distribution and perform EOT over a proxy distribution. This works well in practice for producing adversarial objects that remain adversarial under the "true" physical-world distribution, as we demonstrate. See Appendix A for the exact parameters of the distributions we use in the 2D, 3D simulation, and 3D physical-world cases. In all cases, the transformation parameters are sampled as continuous random variables from a uniform distribution between the minimum and maximum values given, unless otherwise indicated in Appendix A (e.g. Gaussian noise). In our experiments, we use TensorFlow's standard pre-trained InceptionV3 classifier, which has 78.0% top-1 accuracy on ImageNet; we use randomly chosen target classes, and we use EOT to synthesize adversarial examples over a chosen distribution. We measure the ℓ2 distance per pixel between the original and adversarial example (in LAB space), and we also measure classification accuracy (percent of randomly sampled viewpoints classified as the true class) and adversariality (percent of randomly sampled viewpoints classified as the adversarial class) for both the original and adversarial example. When working in simulation, we evaluate over a large number of transformations sampled randomly from the distribution; in the physical world, we evaluate over a large number of manually-captured images of our adversarial objects taken over different viewpoints. In the 2D case, we consider a distribution of transformations that includes rescaling, rotation, lightening or darkening by an additive factor, adding Gaussian noise, and any in-bounds translation of the image. We take the first 1000 images in the ImageNet validation set, randomly choose a target class for each image, and use EOT to synthesize an adversarial example that is robust over the chosen distribution. For each adversarial example, we evaluate over 1000 random transformations sampled from the distribution at evaluation time. TAB1 summarizes the results: on average, the adversarial examples have an adversariality of 96.4%, showing that our approach is highly effective in producing robust adversarial examples. FIG1 shows an example of a synthesized adversarial example (original: Persian cat, P(true) = 97%), along with the classification confidence in the true and adversarial classes for the original and corresponding adversarial images; see Appendix B for more examples. We produce 3D adversarial examples by modeling the 3D rendering as a transformation under EOT: given a textured 3D object, we optimize over the texture such that the rendering is adversarial from any viewpoint. We consider a distribution that incorporates different camera distances, lateral translation, rotation of the object, and solid background colors. We approximate the expectation over transformations by taking the mean loss over batches of size 40; furthermore, due to the computational expense of computing a batch, we reuse up to 80% of the batch at each iteration, but enforce that each batch contains at least 8 new images. As previously mentioned, the parameters of the distributions we use are specified in Appendix A, sampled as independent continuous random variables (uniform except for Gaussian noise). We consider 10 complex 3D models, choose 20 random target classes per 3D model, and use EOT to synthesize adversarial textures for the 3D models with minimal parameter search (four constant, pre-chosen λ values were tested for each [3D model, target] pair). For each of the 200 adversarial examples, we sample 100 random transformations from the distribution at evaluation time. TAB3 summarizes the results, and FIG2 shows renderings of drawn samples with classification probabilities; see Appendix C for more examples. The simulated adversarial objects have an average adversariality of 83.4% with a long left tail, showing that EOT usually produces highly adversarial objects (see Appendix C for a plot of the distribution).
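A sampler for the kind of 2D transformation distribution described above might look like the following; the parameter ranges are illustrative stand-ins for the Appendix A tables, and the helper is ours:

```python
import math
import random
import torch
import torch.nn.functional as F

def sample_2d_transform(max_angle=30.0, scale=(0.9, 1.1), bright=0.1, noise=0.02):
    # Draw one random differentiable transformation of the rough form
    # t(x) = Ax + eps: rotation and rescaling, an additive brightness
    # shift, and Gaussian noise (translation omitted for brevity).
    a = math.radians(random.uniform(-max_angle, max_angle))
    s = random.uniform(*scale)
    b = random.uniform(-bright, bright)
    theta = torch.tensor([[s * math.cos(a), -s * math.sin(a), 0.0],
                          [s * math.sin(a),  s * math.cos(a), 0.0]])

    def t(x):  # x: (N, C, H, W) image batch with values in [0, 1]
        grid = F.affine_grid(theta.expand(x.size(0), 2, 3), list(x.size()),
                             align_corners=False)
        out = F.grid_sample(x, grid, align_corners=False)
        return (out + b + noise * torch.randn_like(out)).clamp(0.0, 1.0)

    return t

t = sample_2d_transform()
print(t(torch.rand(1, 3, 64, 64)).shape)  # torch.Size([1, 3, 64, 64])
```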
In order to fabricate physical-world adversarial examples, beyond modeling the 3D rendering process we need to model physical-world phenomena such as lighting effects and camera noise. Furthermore, we need to model the 3D printing process: in our case, we use commercially available full-color 3D printing. With the 3D printing technology we use, we find that color accuracy varies between prints, so we model printing errors as well. We approximate all of these phenomena by a distribution of transformations under EOT: in addition to the transformations considered for 3D in simulation, we consider camera noise, additive and multiplicative lighting, and per-channel color inaccuracies. Table 3 (quantitative analysis of the two adversarial objects, over 100 photos of each object over a wide distribution of viewpoints) shows that both objects are classified as the adversarial target class in the majority of viewpoints: turtle, 82% adversarial, 16% misclassified, 2% correct; baseball, 59% adversarial, 31% misclassified, 10% correct. We evaluate physical adversarial examples with two 3D-printed objects: a turtle (where we consider any of the 5 turtle classes in ImageNet as the "true" class) and a baseball. The unperturbed 3D-printed objects are correctly classified as the true class with 100% accuracy over a large number of samples; FIG3 shows example photographs of unperturbed objects along with their classifications. We choose target classes for each of the 3D models at random, "rifle" for the turtle and "espresso" for the baseball, and we use EOT to synthesize adversarial examples. We evaluate the performance of our two 3D-printed adversarial objects by taking 100 photos of each object over a variety of viewpoints (the viewpoints were not selected in any way and were simply the result of walking around the objects, moving them up and down, etc.; we hesitate to call them "random" since they were not generated numerically or sampled from a concrete distribution, in contrast with the rendered 3D examples). FIG4 shows a random sample of these images along with their classifications, and Table 3 gives a quantitative analysis over all images, showing that our 3D-printed adversarial objects are strongly adversarial over a wide distribution of transformations; see Appendix D for more examples. The results and quantitative analysis in this section demonstrate the efficacy of EOT and confirm the existence of physical adversarial examples. Here, we perform a qualitative analysis of the results. Modeling Perception: The EOT algorithm as presented in Section 2 gives a general method to construct adversarial examples over a chosen perceptual distribution, but notably gives no guarantees for observations of the image outside of the chosen distribution. In constructing physical-world adversarial objects, we use a crude, high-variance approximation of the rendering and capture process, and this succeeds in ensuring robustness to a diverse set of environments; see, for example, FIG5, which shows the same adversarial turtle in vastly different environments. In specialized domains, however, a domain expert may opt to model the perceptual distribution precisely in order to better constrain the search space. We also find significant error in the color accuracy of even state-of-the-art commercially available color 3D printing; FIG6 shows a comparison of a 3D-printed model with a printout of the model's texture, printed on a standard laser color printer. Still, EOT was able to overcome the problem and produce robust physical-world adversarial objects. We predict that we could have produced adversarial examples with smaller ℓ2 perturbation using a higher-fidelity printing process. Semantically Relevant Misclassification:
Interestingly, for the majority of viewpoints where the adversarial target class is not the top-1 predicted class, the classifier also fails to correctly predict the source class. Instead, we find that the classifier often outputs a class that is semantically similar to the adversarial target; when generating the adversarial turtle to be classified as a rifle, for example, the second most popular class (after "rifle") was "revolver", followed by "holster" and then "assault rifle". Similarly, when generating the baseball to be classified as an espresso, the example was often classified as "coffee" or "bakery". State-of-the-art neural networks are vulnerable to adversarial examples BID10. Researchers have proposed a number of methods for synthesizing adversarial examples in the white-box scenario (with access to the gradient of the classifier), including L-BFGS BID10, the Fast Gradient Sign Method (FGSM) BID3, and a Lagrangian relaxation formulation BID0, all for the single-viewpoint case where the adversary directly controls the input to the neural network. Projected Gradient Descent (PGD) can be seen as a universal first-order adversary BID7. BID8 show the existence of universal (image-agnostic) adversarial perturbations, small perturbation vectors that can be applied to any image to induce misclassification. Their paper proposes an algorithm that finds perturbations that are universal over images; in our work, we give an algorithm that finds a perturbation to a single image or object that is universal over a chosen distribution of transformations. In preliminary experiments, we found that universal adversarial perturbations, like standard adversarial perturbations of single images, are not inherently robust to transformation. In the first work on 2D physical-world adversarial examples, BID4 demonstrate the transferability of FGSM-generated adversarial misclassification on a printed page. In their setup, a photo is taken of a printed image with QR code guides, and the resultant image is warped, cropped, and resized to become a square of the same size as the source image before being classified. Their results show the existence of 2D physical-world adversarial examples for approximately axis-aligned views, demonstrating that adversarial perturbations produced using FGSM can translate to the physical world and are robust to camera noise, rescaling, and lighting effects. However, BID4 do not synthesize targeted physical-world adversarial examples; they do not evaluate other real-world 2D transformations such as rotation, skew, translation, or zoom; and their approach does not translate to the 3D case. BID9 develop a real-world adversarial attack on a state-of-the-art face recognition algorithm, where adversarial eyeglass frames cause targeted misclassification in portrait photos. The algorithm produces robust perturbations by optimizing over a fixed set of inputs: the attacker collects a set of images and finds a perturbation that minimizes cross-entropy loss over the set. This algorithm solves a different problem than ours: it produces adversarial perturbations universal over portrait photos taken head-on from a single viewpoint, while EOT produces 2D/3D adversarial examples robust over transformations. The approach also includes a mechanism for enhancing the perturbations' printability using a color map, to address the limited color gamut and color inaccuracy of the printer.
Note that this differs from our approach to achieving printability: rather than creating a color map, we find an adversarial example that is robust to color inaccuracy. Our approach has the advantage of working in settings where color accuracy varies between prints, as was the case with our 3D printer. Recently, BID1 proposed a method for generating robust physical-world adversarial examples in the 2D case by optimizing over a fixed set of manually-captured images. However, the approach is limited to the 2D case, with no clear translation to 3D, where there is no simple mapping between what the adversary controls (the texture) and the observed input to the classifier (an image). Furthermore, the approach requires taking and preprocessing a large number of photos to produce each adversarial example, which may be expensive or even infeasible for many objects. BID5 argued that adversarial examples may not be a practical concern in physical-world systems because adversarial examples generated for the single-viewpoint case lose adversariality at viewpoints with differing scale and rotation. While this is the case with standard adversarial examples as evaluated in that paper and in BID6, we have shown that with EOT it is in fact possible to construct physical-world adversarial images and objects that are classified as a chosen target class over a wide range of viewpoints. Our work shows that adversarial examples pose a practical threat to systems using neural network-based image classifiers. By introducing EOT, a general-purpose algorithm for creating robust adversarial examples under any chosen distribution, and by modeling 3D rendering and printing within the EOT framework, we succeed in fabricating three-dimensional adversarial objects. With access only to low-cost commercially available 3D printing technology, we successfully print physical adversarial objects that are strongly classified as a chosen target class over a variety of angles, viewpoints, and lighting conditions by a standard ImageNet classifier. Under the EOT framework, we must choose a distribution of transformations, and the optimization produces an adversarial example that is robust under that distribution. The appendix gives the specific parameters (minimum and maximum values for each transformation) chosen in the 2D (TAB5), 3D (TAB6), and physical-world (TAB7) cases. The appendix also shows a random sample of our 1000 2D adversarial examples: for instance, a teddy bear made adversarial as "sock" (ℓ2 = 3.3 × 10⁻⁵), a clownfish as "panpipe" (ℓ2 = 4.8 × 10⁻⁵), and a sofa as "sturgeon" (ℓ2 = 1.3 × 10⁻⁵), each shown with P(true) and P(adv) under several sampled transformations. (Appendix figure: all 100 photographs of our physical-world 3D adversarial baseball, each marked as classified as baseball, espresso, or other.)
We introduce a new method for synthesizing adversarial examples robust in the physical world and use it to fabricate the first 3D adversarial objects.
475
scitldr
Representations of sets are challenging to learn because operations on sets should be permutation-invariant. To this end, we propose a Permutation-Optimisation module that learns how to permute a set end-to-end. The permuted set can be further processed to learn a permutation-invariant representation of that set, avoiding a bottleneck in traditional set models. We demonstrate our model's ability to learn permutations and set representations with either explicit or implicit supervision on four datasets, on which we achieve state-of-the-art results: number sorting, image mosaics, classification from image mosaics, and visual question answering. Consider a task where each input sample is a set of feature vectors, with each feature vector describing an object in an image (for example: person, table, cat). Because there is no a priori ordering of these objects, it is important that the model is invariant to the order in which the elements appear in the set. However, this puts restrictions on what can be learned efficiently. The typical approach is to compose elementwise operations with permutation-invariant reduction operations, such as summing or taking the maximum over the whole set. Since the reduction operator compresses a set of any size down to a single descriptor, this can be a significant bottleneck in what information about the set can be represented efficiently. We take an alternative approach, based on an idea explored in Vinyals et al. (2015a), where they find that some permutations of sets allow for easier learning on a task than others. They do this by ordering the set elements in some predetermined way and feeding the resulting sequence into a recurrent neural network. For instance, if the task is to output the top-n numbers from a set of numbers, it is useful if the input is already sorted in descending order before being fed into an RNN. This approach leverages the representational capabilities of traditional sequential models such as LSTMs, but requires some prior knowledge of what order might be useful. Our idea is to learn such a permutation purely from data without requiring a priori knowledge (section 2). The key aspect is to turn a set into a sequence in a way that is both permutation-invariant and differentiable, so that it is learnable. Our main contribution is a Permutation-Optimisation (PO) module that satisfies these requirements: it optimises a permutation in the forward pass of a neural network using pairwise comparisons. By feeding the resulting sequence into a traditional model such as an LSTM, we can learn a flexible, permutation-invariant representation of the set while avoiding the bottleneck that a simple reduction operator would introduce. Techniques used in our model may also be applicable to other set problems where permutation-invariance is desired, building on the literature of approaches to dealing with permutation-invariance (section 3). In four different experiments, we show improvements over existing methods (section 4). The first two tasks measure the ability to learn a particular permutation as a target: number sorting and image mosaics. We achieve state-of-the-art performance with our model, which shows that our method is suitable for representing permutations in general. The latter two tasks test whether a model can learn to solve a task that requires it to come up with a suitable permutation implicitly: classification from image mosaics and visual question answering.
We provide no supervision of what the permutation should be; the model has to learn by itself what permutation is most useful for the task at hand. (Figure 1 overview: in the ordering cost C, elements of X are compared to each other, with blue representing a negative value and red a positive value; gradients are applied to unnormalised permutations P̃^(t), which are normalised to proper permutations P^(t).) Here, our model also beats the existing models, and we improve the performance of a state-of-the-art model in VQA with it. This shows that our PO module is able to learn good permutation-invariant representations of sets using our approach. We will now describe a differentiable, and thus learnable, model to turn an unordered set {x_i}_{i=1}^N with feature vectors as elements into an ordered sequence of these feature vectors. An overview of the algorithm is shown in FIG0 and pseudo-code is available in Appendix A. The input set is represented as a matrix X = [x_1, ..., x_N]^T with the feature vectors x_i as rows, in some arbitrary order. In the algorithm, it is important not to rely on this arbitrary order, so that X is correctly treated as a set. The goal is then to learn a permutation matrix P such that when permuting the rows of the input through Y = PX, the output is ordered correctly according to the task at hand. When an entry P_ik takes the value 1, it can be understood as assigning the ith element to the kth position in the output. Our main idea is to first relate pairs of elements through an ordering cost, parametrised with a neural network. This pairwise cost tells us whether an element i should preferably be placed before or after element j in the output sequence. Using this, we can define a total cost that measures how good a given permutation is (subsection 2.1). The second idea is to optimise this total cost in each forward pass of the module (subsection 2.2). By minimising the total cost of a permutation, we improve the quality of the permutation with respect to the current ordering costs. Crucially, the ordering cost function, and thus also the total cost function, is learned. In doing so, the module is able to learn how to generate a permutation as desired. For this to work, it is important that the optimisation process itself is differentiable so that the ordering cost is learnable. Because permutations are inherently discrete objects, a continuous relaxation of permutations is necessary. For optimisation, we perform gradient descent on the total cost for a fixed number of steps and unroll the iteration, similar to how recurrent neural networks are unrolled to perform backpropagation-through-time. Because the inner gradient (total cost differentiated with respect to the permutation) is itself differentiable with respect to the ordering cost, the whole model is kept differentiable and we can train it with a standard supervised learning loss. Note that, as long as the ordering cost is computed appropriately (subsection 2.3), all operations used turn out to be permutation-invariant; thus, we have a model that respects the symmetries of sets while producing an output without those symmetries: a sequence. This can be naturally extended to outputs where the target is not a sequence, but grids and lattices (subsection 2.4). The total cost function measures the quality of a given permutation and should be lower for better permutations. Because this is the function that will be optimised, it is important to understand what it expresses precisely.
The main ingredient in the total cost of a permutation is the pairwise ordering cost (details in subsection 2.3). By computing it for all pairs, we obtain a cost matrix C where the entry C_ij represents the ordering cost between i and j: the cost of placing element i anywhere before j in the output sequence. An important constraint we place on C is that C_ij = −C_ji. In other words, if one ordering of i and j is "good" (negative cost), then the opposite ordering obtained by swapping them is "bad" (positive cost). This constraint also implies C_ii = 0, which makes sure that two very similar feature vectors in the input will be similarly ordered in the output, because their pairwise cost goes to 0. In this paper we use a straightforward definition of the total cost function: a sum of the ordering costs over all pairs of elements i and j. When considering the pair i and j, if the permutation maps i to be before j in the output sequence, this cost is simply C_ij; vice versa, if the permutation maps i to be after j, the cost is flipped to C_ji. To express this idea, we define the total cost c: R^{N×N} → R of a permutation P as c(P) = Σ_{i,j} C_ij Σ_u P_iu (Σ_{k>u} P_jk − Σ_{k'<u} P_jk'). This can be understood as follows: if the permutation assigns element i to position u (so P_iu = 1) and element j to position v (so P_jv = 1), the sums over k and k' simplify to 1 when v > u and to −1 when v < u; permutation matrices are binary and have exactly one 1 in any row and column, so all other terms in the sums are 0. That means the term for each i and j becomes C_ij when v > u and −C_ij = C_ji when v < u, which matches what we described previously. Now that we can compute the total cost of a permutation, we want to optimise this cost with respect to the permutation. After including the constraints that enforce P to be a valid permutation matrix, we obtain the following optimisation problem: min_P c(P) subject to Σ_k P_ik = 1 for all i, Σ_i P_ik = 1 for all k, and P_ik ∈ {0, 1}. Optimisation over P directly is difficult due to the discrete and combinatorial nature of permutations. To make optimisation feasible, a common relaxation is to replace the constraint P_ik ∈ {0, 1} with P_ik ∈ [0, 1] BID9. With this change, the feasible set for P expands to the set of doubly-stochastic matrices, known as the Birkhoff or assignment polytope. Rather than hard permutations, we now have soft assignments of elements to positions, analogous to the latent assignments when fitting a mixture of Gaussians with Expectation-Maximisation. Note that we do not need to change our total cost function after this relaxation: instead of discretely flipping the sign of C_ij depending on whether element i comes before j or not, the sums over k and k' give us a weight for each C_ij based on how strongly i and j are assigned to positions. This weight is positive when i is on average assigned to earlier positions than j, and negative vice versa.
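The double sum over positions can be folded into a fixed sign matrix, giving a one-line implementation of the total cost; a sketch (our own, not the published code):

```python
import torch

def total_cost(C, P):
    # c(P) = sum_ij C_ij * (P W P^T)_ij with W[u, v] = sign(v - u);
    # the matrix W folds the two inner sums over positions. This works
    # for soft (doubly stochastic) P as well as hard permutations.
    n = P.size(-1)
    idx = torch.arange(n)
    W = torch.sign(idx[None, :] - idx[:, None]).to(P.dtype)
    return (C * (P @ W @ P.t())).sum()

# Sanity check with an anti-symmetric cost and a hard permutation.
C = torch.randn(4, 4)
C = C - C.t()  # enforce C_ij = -C_ji
print(total_cost(C, torch.eye(4)))
```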
In order to perform optimisation of the cost under our constraints, we reparametrise P with the Sinkhorn operator S (defined in Appendix B) so that the constraints are always satisfied; we found this to lead to better solutions than projected gradient descent in initial experiments. After first exponentiating all entries of a matrix, S repeatedly normalises all rows, then all columns of the matrix to sum to 1, which converges to a doubly-stochastic matrix in the limit: P = S(P̃). This ensures that P is always approximately doubly stochastic; P̃ can be thought of as the unnormalised permutation, while P is the normalised permutation. By optimising over P̃ instead of P directly, all constraints are always satisfied and we can simplify the optimisation problem to min_{P̃} c(P), with P = S(P̃), without any constraints. It is now straightforward to optimise with standard gradient descent. First, we compute the gradient of the total cost with respect to the normalised permutation, ∂c(P)/∂P_iu = Σ_j C_ij (Σ_{k>u} P_jk − Σ_{k'<u} P_jk') (equation 4), which chains with ∂P/∂P̃ to give the gradient with respect to P̃ (equation 5). From equation 4, it becomes clear that this gradient is itself differentiable with respect to the ordering cost C_ij, which allows the cost to be learned. In practice, both ∂c(P)/∂P̃ and ∂[∂c(P)/∂P̃]/∂C can be computed with automatic differentiation; however, some implementations of automatic differentiation require the computation of c(P), which we do not use, in which case implementing ∂c(P)/∂P explicitly can be more efficient. Also notice that if we define B_jq = Σ_{k>q} P_jk − Σ_{k<q} P_jk, equation 4 is just the matrix multiplication CB and is thus efficiently computable. For optimisation, P̃ has to be initialised in a permutation-invariant way to preserve permutation-invariance of the algorithm. In this paper, we consider a uniform initialisation so that all P_ik = 1/N (PO-U model) and an initialisation that linearly assigns each element to each position (PO-LA model), P̃^(0)_ik = w_k^T x_i, where w_k is a different weight vector for each position k. Then, we perform gradient descent for a fixed number of steps T. The iterative update using the gradient and a (learnable) step size η, P̃^(t+1) = P̃^(t) − η ∂c(P^(t))/∂P^(t), converges to the optimised permutation P^(T). One peculiarity here is that we update P̃ with the gradient with respect to the normalised permutation P, not with respect to the unnormalised permutation P̃ as would be usual; in other words, in equation 5 we set ∂P_uv/∂P̃_pq = 1 when u = p, v = q, and 0 everywhere else. We found that this results in significantly better permutations experimentally; we believe this is because ∂P/∂P̃ vanishes too quickly under the Sinkhorn normalisation, which biases P̃ away from good permutation matrices, wherein all entries are close to 0 and 1 (Appendix D). The runtime of this algorithm is dominated by the computation of the gradients of c(P), which involves a multiplication of two N × N matrices; in total, the time complexity is T times the complexity of this matrix multiplication, which is Θ(N³) in practice. We found that typically, small values such as T = 4 are enough to obtain good permutations. The ordering cost C_ij is used in the total cost and tells us what the pairwise cost for placing i before j should be. The key property to enforce is that the function F producing the entries of C is anti-symmetric, F(x_i, x_j) = −F(x_j, x_i). A simple way to achieve this is to define F(x_i, x_j) = f(x_i, x_j) − f(x_j, x_i), where f is a small neural network, yielding a learnable F that is always anti-symmetric. Lastly, C is normalised to have unit Frobenius norm, i.e. C_ij = F(x_i, x_j) / ||F||_F, writing ||F||_F for the Frobenius norm of the matrix of all pairwise values. This results in simply scaling the total cost obtained, but it also decouples the scale of the outputs of F from the step size parameter η, making optimisation more stable at inference time.
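Putting the pieces of this section together, a compact sketch of the PO-U inner loop (Sinkhorn normalisation plus T gradient steps using the CB form of equation 4) could look as follows; the truncated Sinkhorn depth and step count are our choices:

```python
import torch

def sinkhorn(P_tilde, n_iters=5):
    # Sinkhorn operator S: exponentiate, then alternately normalise rows
    # and columns; converges to a doubly-stochastic matrix in the limit.
    P = torch.exp(P_tilde)
    for _ in range(n_iters):
        P = P / P.sum(dim=1, keepdim=True)  # rows sum to 1
        P = P / P.sum(dim=0, keepdim=True)  # columns sum to 1
    return P

def optimise_permutation(C, n, eta=1.0, T=4):
    # PO-U inner loop: uniform initialisation, then T gradient steps.
    # As described in the text, P_tilde is updated with the gradient
    # taken w.r.t. the *normalised* P (the Sinkhorn Jacobian is skipped).
    P_tilde = torch.zeros(n, n)  # after Sinkhorn, all P_ik = 1/n
    for _ in range(T):
        P = sinkhorn(P_tilde)
        cum = P.cumsum(dim=1)
        B = (P.sum(dim=1, keepdim=True) - cum) - (cum - P)  # B_jq from eq. 4
        P_tilde = P_tilde - eta * (C @ B)                   # grad of c is CB
    return sinkhorn(P_tilde)
```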
In some tasks, it may be natural to permute the set into a lattice structure instead of a sequence. For example, if it is known that the set contains parts of an image, it makes sense to arrange these parts back into an image by using a regular grid. We can straightforwardly adapt our model to this by considering each row and column of the target grid as an individual permutation problem. The total cost of an assignment to a grid is the sum of the total costs over all individual rows and columns of the grid. The gradient of this new cost is then the sum of the gradients of these individual problems. This results in a model that considers both row-wise and column-wise pairwise relations when permuting a set of inputs into a grid structure, and more generally, into a lattice structure. The most relevant work to ours is the inspiring study by Mena et al., where they discuss the reparametrisation that we use and propose a model that can also learn permutations implicitly in principle. Their model uses a simple elementwise linear map from each of the N elements of the set to the N positions, normalised by the Sinkhorn operator. This can be understood as classifying each element individually into one of the N classes corresponding to positions, then normalising the predictions so that each class only occurs once within this set. However, processing the elements individually means that their model does not take relations between elements into account properly; elements are placed in absolute positions, not relative to other elements. Our model differs from theirs by considering pairwise relations when creating the permutation. By basing the cost function on pairwise comparisons, it is able to order elements such that local relations in the output are taken into account. We believe that this is important for learning from permutations implicitly, because networks such as CNNs and RNNs rely on local ordering more than absolute positioning of elements. It also allows our model to process variable-sized sets, which their model is not able to do. Our work is closely related to the set function literature, where the main constraint is invariance to the ordering of the set. While it is always possible to simply train using as many permutations of a set as possible, using a model that is naturally permutation-invariant increases learning and generalisation capabilities through the correct inductive bias in the model. There are some similarities with relation networks in considering all pairwise relations between elements, as in our pairwise ordering function. However, they sum over all non-linearly transformed pairs, which can lead to the bottleneck we mention in section 1. Meanwhile, by using an RNN on the output of our model, our approach can encode a richer class of functions: it can still learn to simply sum the inputs, but it can also learn more complex functions where the learned order between elements is taken into account. Concurrent work discusses various approximations of averaging the output of a neural network over all possible permutations, with our method falling under their categorisation of a learned canonical input ordering. Our model is also relevant to neural networks operating on graphs, such as graph-convolutional networks. Typically, a set function is applied to the set of neighbours for each node, with which the state of the node is updated. Our module combined with an RNN is thus an alternative set function to perform this state update with. Noroozi & Favaro and BID7 show that it is possible to use permutation learning for representation learning in a self-supervised setting. The model in BID7 is very similar to Mena et al., including use of a Sinkhorn operator, but they perform significantly more processing on images with a large CNN (AlexNet) beforehand, with the main goal of learning good representations for that CNN.
We instead focus on using the permuted set itself for representation learning in a supervised setting. We are not the first to explore the usefulness of using optimisation in the forward pass of a neural network (for example, Stoyanov et al.; Domke; BID4). However, we believe that we are the first to show the potential of optimisation for processing sets, because - with an appropriate cost function - it is able to preserve permutation-invariance. In OptNet BID1, exact solutions to convex quadratic programs are found in a differentiable way through various techniques. Unfortunately, our quadratic program is non-convex (Appendix E), which makes finding an optimal solution possibly NP-hard. We thus fall back to the simpler approach of gradient descent on the reparametrised problem to obtain a non-optimal, but reasonable solution. Note that our work differs from learning-to-rank approaches such as BID5 and Severyn & Moschitti, as there the end goal is the permutation itself. This usually requires supervision on what the target permutation should be, producing a permutation with hard assignments at the end. We require our model to produce soft assignments so that it is easily differentiable, since the main goal is not the permutation itself, but processing it further to form a representation of the set being permuted. This means that other approaches that produce hard assignments, such as Ptr-Net (Vinyals et al., 2015b), are also unsuitable for implicitly learning permutations, although using a variational approximation through Mena et al. to obtain a differentiable permutation with hard assignments is a promising direction to explore in the future. Due to the lack of differentiability, existing literature on solving minimum feedback arc set problems BID6 cannot easily be used for set representation learning either. Throughout the text, we will refer to our model with uniform assignment initialisation as PO-U, with linear assignment initialisation as PO-LA, and the model from Mena et al. as LinAssign. We perform a qualitative analysis of what comparisons are learned in Appendix C. Precise experimental details can be found in Appendix F and our implementation for all experiments is available at https://github.com/Cyanogenoid/perm-optim for full reproducibility. Some additional results, including example image mosaic outputs, can be found in Appendix G. We start with the toy task of turning a set of random unsorted numbers into a sorted list. For this problem, we train with fixed-size sets of numbers drawn uniformly from a fixed interval and evaluate on different intervals to determine generalisation ability. We use the correctly ordered sequence as training target and minimise the mean squared error. During evaluation, we use the Hungarian algorithm for solving a linear assignment problem with −P as the assignment costs. This is done to obtain a permutation with hard assignments from our soft permutation. Our PO-U model is able to sort all sizes of sets that we tried - 5 to 1024 numbers - perfectly, including generalising to all the different evaluation intervals without any mistakes. This is in contrast to all existing end-to-end learning-based approaches such as the LinAssign model of Mena et al., which starts to make mistakes on the training interval at 120 numbers and no longer generalises to sets drawn from a different interval at 80 numbers. Vinyals et al. (2015a) already starts making mistakes on 5 numbers.
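The evaluation-time hardening step mentioned above is a one-liner with SciPy. A small sketch, assuming P is the soft permutation produced by the model as a NumPy array:

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def harden(P):
        # Solve a linear assignment problem with -P as the cost so that the
        # hard permutation keeps the strongest soft assignments.
        rows, cols = linear_sum_assignment(-P)
        hard = np.zeros_like(P)
        hard[rows, cols] = 1.0
        return hard

    # Usage: Y = harden(P) @ X reorders the rows of X with hard assignments.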
Our stark improvement over existing results is evidence that the inductive biases due to the learned pairwise comparisons in our model are suitable for learning permutations, at least for this particular toy problem. In subsection C.1, we investigate what it learns that allows it to generalise this well. As a second task, we consider a problem where the model is given images that are split into n × n equal-size tiles and the goal is to re-arrange this set of tiles back into the original image. We take these images from either MNIST, CIFAR10, or a version of ImageNet with images resized down to 64 × 64 pixels. For this task, we use the alternative cost function described in subsection 2.4 to arrange the tiles into a grid rather than a sequence: this lets our model take relations within rows and columns into account. Again, we minimise the mean squared error to the correctly permuted image and use the Hungarian algorithm during evaluation, matching the experimental setup in Mena et al. Due to the lack of a reference implementation of their model for this experiment, we use our own implementation of their model, which we verified to reproduce their MNIST results closely. Unlike them, we decide not to arbitrarily upscale MNIST images to get improved results for all models. The mean squared errors for the different image datasets and different numbers of tiles an image is split into are shown in TAB0. First, notice that in essentially all cases, our model with linear assignment initialisation (PO-LA) performs best, often significantly so. On the two more complex datasets, CIFAR10 and ImageNet, this is followed by our PO-U model, then the LinAssign model. We analyse what types of comparisons PO-U learns in subsection C.2. On MNIST, LinAssign performs better than PO-U on higher tile counts because images are always centred on the object of interest. That means that many tiles only contain the background and end up completely blank; these tiles can be more easily assigned to the borders of the image by the LinAssign model than by our PO-U model, because the absolute position is much more important than the relative positioning to other tiles. This also points towards an issue for these cases in our cost function: because two tiles that have the same contents are treated the same by our model, it is unable to place one blank tile on one side of the image and another blank tile on the opposite side, as this would require treating the two tiles differently. This issue with backgrounds is also present on CIFAR10 to a lesser extent: notice how for the 3 × 3 case, the error of PO-U is much closer to LinAssign on CIFAR10 than on ImageNet, where PO-U is much better comparatively. This shows that the PO-U model is more suitable for more complex images when relative positioning matters more, and that PO-LA is able to combine the best of both methods. We now turn to tasks where the goal is not producing the permutation itself, but learning a suitable permutation for a different task. For these tasks, we do not provide explicit supervision on what the permutation should be; an appropriate permutation is learned implicitly while learning to solve another task. As the dataset, we use a straightforward modification of the image mosaic task. The image tiles are assigned to positions on a grid as before, which are then concatenated into a full image. This image is fed into a standard image classifier (ResNet-18), which is trained with the usual cross-entropy loss to classify the image.
The idea is that the network has to learn some permutation of the image tiles so that the classifier can classify it accurately. This is not necessarily the permutation that restores the original image faithfully. One issue with this set-up we observed is that with big tiles, it is easy for a CNN to ignore the artefacts on the tile boundaries, which means that simply permuting the tiles randomly gets to almost the same test accuracy as using the original image. To prevent the network from circumventing the task in this way, we first pre-train the CNN on the original dataset without permuting the image tiles. Once it is fully trained, we freeze the weights of this CNN and train only the permutation mechanism. We show our results in TAB1. Generally, a similar trend to the image mosaic task with explicit supervision can be seen. Our PO-LA model usually performs best, although for ImageNet PO-U is consistently better. This is evidence that for more complex images, the benefits of linear assignment decrease (and can actually detract from the task in the case of ImageNet) and the importance of the optimisation process in our model increases. With higher numbers of tiles on MNIST, even though PO-U does not perform well, PO-LA is clearly superior to only using LinAssign. This is again due to the fully black tiles not being able to be sorted well by the cost function with uniform initialisation. As the last task, we consider the much more complex problem of visual question answering (VQA): answering questions about images. We use the VQA v2 dataset (BID3), which in total contains around 1 million questions about 200,000 images from MS-COCO, with 6.5 million human-provided answers available for training. We use bottom-up attention features BID2 as the representation for objects in the image, which for each image gives us a set (size varying from 10 to 100 per image) of bounding boxes and the associated feature vectors that encode the contents of the bounding boxes. These object proposals have no natural ordering a priori. We use the state-of-the-art BAN model as a baseline and perform a straightforward modification to it to incorporate our module. For each element in the set of object proposals, we concatenate the bounding box coordinates, features, and the attention value that the baseline model generates. Our model learns to permute this set into a sequence, which is fed into an LSTM. We take the last cell state of the LSTM to be the representation of the set, which is fed back into the baseline model. This is done for each of the eight attention glimpses in the BAN model. We include another baseline model (BAN + LSTM) that skips the permutation learning, directly processing the set with the LSTM. Our results on the validation set of VQA v2 are shown in TAB3. We improve on the overall performance of the state-of-the-art model by 0.37% - a significant improvement for this dataset - with 0.27% of this improvement coming from the learned permutation. This shows that there is a substantial benefit to learning an appropriate permutation through our model in order to learn better set representations. Our model significantly improves on the number category, despite the inclusion of a counting module BID2 specifically targeted at number questions in the baseline. This is evidence that the representation learned through the permutation is non-trivial.
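The permute-then-summarise step described above might look like the following sketch. PermutedSetEncoder is a hypothetical name of ours; the 128-unit LSTM and the sigmoid relevance weighting follow the description given in the experimental details.

    import torch
    import torch.nn as nn

    class PermutedSetEncoder(nn.Module):
        # Permute a set with a soft permutation P, weight each element by its
        # relevance, and summarise the sequence with an LSTM's last cell state.
        def __init__(self, dim, hidden=128):
            super().__init__()
            self.lstm = nn.LSTM(dim, hidden, batch_first=True)

        def forward(self, X, P, relevance_logits):
            # X: (N, dim) set elements, P: (N, N) soft permutation,
            # relevance_logits: (N,) attention logits from the baseline model.
            weighted = X * torch.sigmoid(relevance_logits).unsqueeze(-1)
            seq = P @ weighted                  # soft reordering of the set
            _, (h_n, c_n) = self.lstm(seq.unsqueeze(0))
            return c_n[-1, 0]                   # (hidden,) set representation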
Note that the improvement through our model is not simply due to increased model size and computation: the authors of BAN found that significantly increasing the BAN model size, increasing computation time on a scale similar to including our model, does not yield any further gains. In this paper, we discussed our Permutation-Optimisation module to learn permutations of sets using an optimisation-based approach. In various experiments, we verified the merit of our approach for learning permutations and, from them, set representations. We think that the optimisation-based approach to processing sets is currently underappreciated and hope that the techniques and results in this paper will inspire new algorithms for processing sets in a permutation-invariant manner. Of course, there is plenty of work to be done. For example, we have only explored one possible function for the total cost; different functions capturing different properties may be used. The main drawback of our approach is the cubic time complexity in the set size compared to the quadratic complexity of Mena et al., which limits our model to tasks where the number of elements is relatively small. While this is acceptable on the real-world dataset that we used - VQA with up to 100 object proposals per image - with only a 30% increase in computation time, our method does not scale to the much larger set sizes encountered in domains such as point cloud classification. Improvements in the optimisation algorithm may improve this situation, perhaps through a divide-and-conquer approach. We believe that going beyond tensors as basic data structures is important for enabling higher-level reasoning. As a fundamental mathematical object, sets are a natural step forward from tensors for modelling unordered collections. The property of permutation invariance lends itself to greater abstraction by allowing data that has no obvious ordering to be processed, and we took a step towards this by learning an ordering that existing neural networks are able to take advantage of.

Algorithm 1: Forward pass of the permutation-optimisation algorithm
1: Input: X ∈ R^{N×M} with x_i as rows in arbitrary order
2: Learnable parameters: weights that parametrise F, step size η
3: C_ij ← F(x_i, x_j) for all pairs i, j (compute ordering costs, equation 10)
4: initialise P̃ with either the uniform or the linear assignment initialisation (equation 6)
5: for t ← 1, T do
6:   P ← S(P̃) (normalise assignment with the Sinkhorn operator, Appendix B)
7:   G ← ∂c(P)/∂P (compute gradient of the normalised assignment, equation 4)
8:   P̃ ← P̃ − ηG (gradient descent step on the unnormalised assignment, equation 7)
9: end for
10: P ← S(P̃)
11: Y ← P X (permute rows of X to obtain output Y)

The Sinkhorn operator S as defined in Adams & Zemel is:

S^0(X) = exp(X)
S^l(X) = T_c(T_r(S^{l−1}(X)))
S(X) = S^L(X)

where T_r(X)_ij = X_ij / Σ_k X_ik normalises each row and T_c(X)_ij = X_ij / Σ_k X_kj normalises each column of a square matrix X to sum to one. This formulation differs from the normal Sinkhorn operator by exponentiating all entries first and by running for a fixed number of steps L instead of for a number of steps approaching infinity. Mena et al. include a temperature parameter on the exponentiation, which acts analogously to temperature in the softmax function. In this paper, we fix L to 4. First, we investigate what comparison function F is learned for the number sorting task. We start with plotting the outputs of F for different pairs of inputs in Figure 2.
From this, we can see that it learns a sensible comparison function where it outputs a negative number when the first argument is lower than the second, and a positive number vice versa. The easiest way to achieve this is to learn f(x_i, x_j) = x_i, which results in F(x_i, x_j) = x_i − x_j. By plotting the outputs of the learned f in Figure 3, we can see that something close to this has indeed been learned. The learned f mostly depends on the second argument and is a scaled and shifted version of it. It has not learned to completely ignore the first argument, but the deviations from it are small enough that the cost function of the permutation is able to compensate for it. We can see that there is a faint grey diagonal area where the two inputs are similar, which could be an artifact of F having small gradients due to its skew-symmetry when two numbers are close to each other. Next, we investigate the behaviour of F on the image mosaic task. Since our model uses the outputs of F in the optimisation process, we find it easier to interpret F over f in the subsequent analysis. We start by looking at the output of F_1 (costs for left-to-right ordering) and F_2 (costs for top-to-bottom ordering) for MNIST 2 × 2, shown in Figure 4. First, there is a clear entry in each row and column of both F_1 and F_2 that has the highest absolute cost (high colour saturation) whenever the corresponding tiles fit together correctly. This shows that it successfully learned to be confident about what order two tiles should be in when they fit together. From the two 2-by-2 blocks of red and blue on the anti-diagonal, we can also see that it has learned that for the per-row comparisons (F_1), the tiles that should go into the left column should generally compare to less than (i.e. should be permuted to be to the left of) the tiles that go to the right. Similarly, for the per-column comparisons (F_2), tiles that should be at the top compare to less than tiles that should be at the bottom. Lastly, F_1 has a low absolute cost when comparing two tiles that belong in the same column. These are the entries in the matrix at the corresponding coordinates. This makes sense, as F_1 is concerned with whether one tile should be to the left or right of another, so tiles that belong in the same column should not have a preference either way. A similar thing applies to F_2 for tiles that belong in the same row. Figure 4: Outputs of F_1 (left half, row comparisons) and F_2 (right half, column comparisons) for pairs of tiles from an image in MNIST. For F_1, the tiles are sorted left-to-right if only F_1 is used as cost. For F_2, the tiles are sorted top-to-bottom if only F_2 is used as cost. Blue indicates that the tile to the left of this entry should be ordered left of the tile at the top for F_1, and that the tile on the left should be ordered above the tile at the top for F_2. The opposite applies for red. The saturation of the colour indicates how strong this ordering is. Next, we investigate what positions within the tiles F_1 and F_2 are most sensitive to. This illustrates what areas of the tiles are usually important for making comparisons. We do this by computing the gradients of the absolute values of F with respect to the input tiles and averaging over many inputs. For MNIST 2 × 2 (FIG4), it learns no particular spatial pattern for F_1 and puts slightly more focus away from the centre of the tile for F_2. As we will see later, it learns something that is very content-dependent rather than spatially-dependent.
With increasing numbers of tiles on MNIST, it tends to focus more on edges, and especially on corners. For the CIFAR10 dataset (FIG5), there is a much clearer distinction between left-right comparisons for F_1 and top-bottom comparisons for F_2. For the 2 × 2 and 4 × 4 settings, it relies heavily on the pixels on the left and right borders for left-to-right comparisons, and on the top and bottom edges for top-to-bottom comparisons. Interestingly, F_1 in the 3 × 3 setting (middle pair) on CIFAR10 focuses on the left and right halves of the tiles, but specifically avoids the borders. A similar thing applies to F_2, where a greater significance is given to pixels closer to the middle of the image rather than only focusing on the edges. This suggests that it learns to not only match up edges as with the other tile numbers, but also uses the content within the tile to do more sophisticated content-based comparisons.

Figure 7: Gradient maps of pairs of tiles from MNIST for F_1 (left half) and F_2 (right half). Each group of four consists of: tile 1, tile 2, gradient of F(t_1, t_2) with respect to tile 1, gradient of F(t_2, t_1) with respect to tile 2.

Figure 8: Gradient maps of pairs of tiles from CIFAR10 for F_1 (left half) and F_2 (right half). Each group of four consists of: tile 1, tile 2, gradient of F(t_1, t_2) with respect to tile 1, gradient of F(t_2, t_1) with respect to tile 2.

Lastly, we can look at the gradients of F with respect to the input tiles for specific pairs of tiles, shown in Figure 7 and Figure 8. This gives us a better insight into what changes to the input tiles would affect the cost of the comparison the most. These figures can be understood as follows: for each pair of tiles, we have the corresponding two gradient maps next to them. Brightening the pixels for the blue entries in these gradient maps would order the corresponding tile more strongly towards the left for F_1 and towards the top for F_2. The opposite applies to brightening the pixels with red entries. Vice versa, darkening pixels with blue entries orders the tile more strongly towards the right for F_1 and the bottom for F_2. More saturated colours in the gradient maps correspond to greater effects on the cost when changing those pixels. We start with gradients on the tiles for an input showing the digit 2 on MNIST 2 × 2 in Figure 7. We focus on the first row, left side, which shows a particular pair of tiles from this image and their gradients of F_1 (left-to-right ordering), and we share some of our observations below:

• The gradients of the second tile show that to encourage the permutation to place it to the right of the first tile, it is best to increase the brightness of the curve in tile 2 that is already white (red entries in the tile 2 gradient map) and decrease the black pixels around it (blue entries). This means that it recognised that this type of curve is important in determining that it should be placed to the right, perhaps because it matches up with the start of the curve from tile 1. We can imagine the curve in the gradient map of tile 2 roughly forming part of a 7 rather than a 2 as well, so it is not necessarily looking for the curve of a 2 specifically.

• In the gradient map of the first tile, we can see that to encourage it to be placed to the left of tile 2, increasing the blue entries would form a curve that would make the first tile look like part of an 8 rather than a 2, completing the other half of the curve from tile 2.
This means that it has learned that to match something with the shape in tile 2, a loop that completes it is best, but the partial loop that we have in tile 1 satisfies part of this too.

• Notice how the gradient of tile 1 changes quite a bit when going from row 1 to row 3, where it is paired up with different tiles. This suggests that the comparison has learned something about the specific comparison between tiles being made, rather than learning a general trend of where the tile should go. The latter is what a linear assignment model is limited to doing, because it does not model pairwise interactions.

• In the third row, we can see that even though the two tiles do not match up, there is a red blob on the left side of the tile 2 gradient map. This blob would connect to the top part of the line in tile 1, so it makes sense that making the two tiles match up more on the border would encourage tile 2 to be ordered to the right of tile 1.

Similar observations apply to the right half of Figure 7, such as row 5, where tile 1 (which should go above tile 2) should have its pixels in the bottom left increased and tile 2 should have its pixels in the top left increased in order for tile 1 to be ordered before (i.e. above) tile 2 more strongly. On CIFAR10 2 × 2 in Figure 8, it is enough to focus on the borders of the tiles. Here, it is striking how specifically it tries to match edge colours between tiles. For example, consider the blue sky in the left half (F_1), row 6. To order tile 1 to the left of tile 2, we should change tile 1 to have brighter sky and darker red on the right border, and also darken the black on the left border so that it matches up less well with the right border of tile 2, where more of the bright sky is visible. For tile 2, the gradient shows that it should also match up more on the left border, and should increase the amount of bright pixels, i.e. sky, on the right border, again so that it matches up less well with the left border of tile 1 if they were to be ordered the opposing way. First, consider the gradient of S(X) given in equation 16, where 1 is the indicator function that returns 1 if the condition is true and 0 otherwise. We compared the entropy of the permutation matrices obtained with and without using the "proper" gradient with ∂S(P̃)/∂P̃ as a term in it, and found that our version has a significantly lower entropy. To understand this, it is enough to focus on the first two terms in equation 16, which are essentially the gradient of a softmax function applied row-wise to P̃. Let x be a row in P̃ and s_i be the ith entry of the softmax function applied to x. Then, the gradient is:

∂s_i/∂x_j = s_i ( 1[i = j] − s_j )

Since this is a product of entries in a probability distribution, the gradient vanishes quickly as we move towards a proper permutation matrix (all entries very close to 0 or 1). By using our alternative update and thus removing this term from our gradient, we can avoid the vanishing gradient problem. Gradient descent is not efficient when the gradient vanishes towards the optimum and the optimum - in our case a permutation matrix with exact ones and zeros as entries - is infinitely far away. Since we prefer to use a small number of steps in our algorithm for efficiency, we want to reach a good solution as quickly as possible.
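A tiny numeric check makes the vanishing concrete. For a row that is already close to one-hot, every entry of the softmax Jacobian is a product of probabilities and is therefore close to zero:

    import torch

    x = torch.tensor([4.0, -4.0, -4.0], requires_grad=True)
    s = torch.softmax(x, dim=0)
    s[0].backward()
    print(s)       # ~[0.9993, 0.0003, 0.0003], nearly one-hot
    print(x.grad)  # entries s_i(1[i = j] - s_j): all below 1e-3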
This justifies effectively ignoring the step size that the gradient suggests and simply taking a step in a similar direction as the gradient, in order to be able to saturate the Sinkhorn normalisation sufficiently, thus obtaining a doubly-stochastic matrix that is closer to a proper permutation matrix in the end. We can write our total cost function as a quadratic program in the standard x^T Q x form with linear constraints. We leave out the constraints here as they are not particularly interesting. First, we can define O ∈ R^{N×N} as:

O_uv = sign(v − u), i.e. 1 if u < v, −1 if u > v, and 0 if u = v

and with it, Q ∈ R^{N²×N²} as:

Q_{(i,u),(j,v)} = C_ij O_uv, i.e. Q = C ⊗ O

Then we can write the cost function as:

c(P) = p^T Q p

where there is some bijection between a pair of indices (i, k) and the index l, and p is a flattened version of P with p_l = P_ik. Q is indefinite because the total cost can be negative: a uniform initialisation for P has a cost of 0, better permutations have negative cost, worse permutations have positive cost. Thus, the problem is non-convex and possibly NP-hard. Also, since we have flattened P into p, the number of optimisation variables is quadratic in the set size N. Even if this were a convex quadratic program, methods such as OptNet BID1 have cubic time complexity in the number of optimisation variables, which makes it O(N^6) for our case.
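The equivalence between the pairwise cost and this quadratic-program form can be checked numerically. The sketch below builds Q = C ⊗ O as reconstructed above and verifies that p^T Q p matches the total cost for an arbitrary soft assignment P; the variable names are ours:

    import numpy as np

    N = 4
    rng = np.random.default_rng(0)
    A = rng.normal(size=(N, N))
    C = A - A.T                         # anti-symmetric ordering costs
    P = rng.random((N, N))              # arbitrary soft assignment

    O = np.sign(np.subtract.outer(np.arange(N), np.arange(N))).T  # O[u, v] = sign(v - u)
    Q = np.kron(C, O)                   # Q[(i, u), (j, v)] = C_ij * O_uv
    p = P.reshape(-1)                   # row-major flattening: p_l = P_ik

    cum = np.cumsum(P, axis=1)
    row = P.sum(axis=1, keepdims=True)
    B = (row - cum) - (cum - P)         # B_jq = sum_{k>q} P_jk - sum_{k<q} P_jk
    c = (C * (P @ B.T)).sum()           # total cost c(P)
    print(np.allclose(p @ Q @ p, c))    # True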
All of our experiments can be reproduced using our implementation at https://github.com/Cyanogenoid/perm-optim in PyTorch through the experiments/all.sh script. For the former three experiments, we use the following hyperparameters throughout:

• Optimiser: Adam (default settings in PyTorch: β_1 = 0.9, β_2 = 0.999, ε = 10^−8)
• Initial step size η in inner gradient descent: 1.0

All weights are initialised with Xavier initialisation. We choose the f within the ordering cost function F to be a small MLP. The input to f has 2 times the number of dimensions of each element, obtained by concatenating the pair of elements. This is done for all pairs that can be formed from the input set. This is linearly projected to some number of hidden units, to which a ReLU activation is applied. Lastly, this is projected down to 1 dimension for sorting numbers and VQA, and to 2 dimensions for assembling image mosaics (1 output for row-wise costs, 1 output for column-wise costs). These outputs are used for creating the ordering cost matrix C. For the number sorting task, the ordering cost function F concatenates the two floats of each pair and applies a 2-layer MLP that takes the 2 inputs to 16 hidden units, a ReLU activation, then to one output. For evaluation, we switch to double-precision floating point numbers. This is because, for the training interval, as the set size increases, there are not enough unique single-precision floats in that interval for the sets to contain only unique floats with high probability (the birthday problem). Using double-precision floats avoids this issue. Note that using single-precision floats is enough for the other intervals and smaller set sizes, and training is always done at single precision. For all three image datasets from which we take images (MNIST, CIFAR10, ImageNet), we first normalise the inputs to have zero mean and standard deviation one over the dataset, as is common practice. For ImageNet, we crop rectangular images to be square by reducing the size of the longer side to the length of the shorter side (centre cropping). Images that are not exactly divisible by the number of tiles are first rescaled to the nearest bigger image size that is exactly divisible. Following Mena et al., we process each tile with a 5 × 5 convolution with padding and stride 1, 2 × 2 max pooling, and a ReLU activation. This is flattened into a vector to obtain the feature vector for each tile, which is then fed into our F. Unlike Mena et al., we decide not to arbitrarily upscale MNIST images by a factor of two, even though upscaling results in slightly better performance in general. While we were able to mostly reproduce their MNIST results, we were not able to reproduce their ImageNet results for the 3 × 3 case. In general, we observed that good settings for their model also improved the results of our PO-U and PO-LA models. Better hyperparameters than what we used should improve all models similarly while keeping the ordering of how well they perform the same. This task is also known as the jigsaw puzzle task, but we decided on naming it image mosaics because the tiles are square, which can lead to multiple solutions, rather than the typical unique solution in traditional jigsaw puzzles enforced by the different tile shapes. We use the same setting as for the image mosaics, but further process the output image with a ResNet-18. For MNIST and CIFAR10, we replace the first convolutional layer with one that has a 3 × 3 kernel size and no striding. This ResNet-18 is first trained on the original dataset for 20 epochs (1 for ImageNet), though images may be rescaled if the image size is not divisible by the number of tiles per side. All weights are then frozen and the permutation method is trained for 20 epochs (1 for ImageNet). As stated previously, this is necessary in order for the ResNet-18 to not use each tile individually and ignore the resulting artefacts from the permuted tiles. This is also one of the reasons why we downscale ImageNet images to 64 × 64 pixels. Because the resulting image tiles would be so big relative to the receptive field of ResNet-18 if we were to use 256 × 256 images, the permutation artefacts would barely affect results, as they are only a small fraction of the globally-pooled features. The permutation permutes each set of tiles, which are reconstructed (without use of the Hungarian algorithm) into an image, which is then processed by the ResNet-18. We observed that the LinAssign model by Mena et al. consistently results in NaN values after Sinkhorn normalisation in this set-up, despite our Sinkhorn implementation using the numerically-stable version of softmax with the exp-normalise trick. We avoided this issue by clipping the outputs of their model into the [-10, 10] interval before Sinkhorn normalisation. We did not observe these NaN issues with our PO-U model.
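For reference, a sketch of the per-tile feature extractor just described. The number of output channels is not specified in the text, so the value here is an assumption:

    import torch
    import torch.nn as nn

    class TileEncoder(nn.Module):
        # 5x5 convolution (stride 1, with padding), 2x2 max pooling, ReLU,
        # then flatten into the per-tile feature vector that is fed to F.
        def __init__(self, in_channels=3, channels=16):  # channel count assumed
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_channels, channels, kernel_size=5, stride=1, padding=2),
                nn.MaxPool2d(2),
                nn.ReLU(),
                nn.Flatten(),
            )

        def forward(self, tiles):              # tiles: (num_tiles, C, H, W)
            return self.net(tiles)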
We use the official implementation of BAN as the baseline without changing any of the hyperparameters, and we refer to the BAN paper for details of the model architecture and hyperparameters. The only change to hyperparameters that we make is reducing the batch size from 256 to 112 due to the GPU memory requirements of the baseline model, even without our permutation mechanism. The BAN model generates attention weights between all object proposals in a set and words of the question. We take the attention weight for a single object proposal to be the maximum attention weight for that proposal over all words of the question, the same as in their integration of the counting module. Each element of the set, corresponding to an object proposal, is the concatenation of this attention logit, the bounding box coordinates, and the feature vector projected from 2048 down to 8 dimensions. We found this projection necessary to not inhibit learning of the rest of the model, which might be due to gradient clipping or other hyperparameters that are no longer optimal in the BAN model. This set of object proposals is then permuted with T = 3 and a 2-layer MLP with hidden dimension 128 for f to produce the ordering costs. The elements in the permuted sequence are weighted by how relevant each proposal is (the sigmoid of the corresponding attention logit) and the sequence is then fed into an LSTM with 128 units. The last cell state of the LSTM is the set representation, which is projected, passed through a ReLU, and added back into the hidden state of the BAN model. The remainder of the BAN model is now able to use information from this set representation. There are 8 attention glimpses, so we process each of these with a PO-U module and an LSTM with shared parameters across these 8 glimpses. An interesting aspect we observed throughout all experiments is how the learned step size η changes during training. At the start of training, it decreases from its initial value of 1, thus reducing the influence of the permutation mechanism. Then, η starts rising again, usually ending up at a value above 1 at the end of training. This can be explained by the ordering cost being very inaccurate at the start of training, since it has not been trained yet. Through training, the ordering cost improves and it becomes more beneficial for the influence of the PO module on the permutation to increase. In TAB4, we show the accuracy corresponding to the results in TAB0, where the permutation has been trained with explicit supervision. In FIG0, FIG0, and FIG0, we show some example reconstructions that have been learnt by our PO-U model. Starting from a uniform assignment at the top, the figures show reconstructions as a permutation is being optimised. Generally, it is able to reconstruct most images fairly well. Due to the poor quality of many of these reconstructions (particularly on ImageNet), the last two figures show reconstructions on the 2 × 2 versions of the datasets rather than 3 × 3. In Table 5, we show the mean squared error reconstruction loss corresponding to the results in TAB1. These show similar trends as before. In FIG0, FIG0, and FIG0, we show some example reconstructions that have been learnt by our PO-U model on 3 × 3 versions of the image datasets with implicit supervision. Because the quality of the implicit CIFAR10 and ImageNet reconstructions is relatively poor, we also include FIG0 and FIG0 on 2 × 2 versions. Starting from a uniform assignment at the top, the figures show reconstructions as a permutation is being optimised. The reconstructions here are clearly noisier than before due to the supervision only being implicit. This is evidence that while our method is superior to existing methods in terms of reconstruction error and accuracy of the classification, there is still plenty of room for improvement to allow for better implicitly learned permutations. Keep in mind that it is not necessary for the permutation to produce the original image exactly, as long as the CNN can consistently recognise what the permutation method has learned. Our models tend to naturally learn reconstructions that are more similar to the original image than the LinAssign model. These examples have not been cherry-picked.
Learn how to permute a set, then encode permuted set with RNN to obtain a set representation.
476
scitldr
The physical design of a robot and the policy that controls its motion are inherently coupled. However, existing approaches largely ignore this coupling, instead choosing to alternate between separate design and control phases, which requires expert intuition throughout and risks convergence to suboptimal designs. In this work, we propose a method that jointly optimizes over the physical design of a robot and the corresponding control policy in a model-free fashion, without any need for expert supervision. Given an arbitrary robot morphology, our method maintains a distribution over the design parameters and uses reinforcement learning to train a neural network controller. Throughout training, we refine the robot distribution to maximize the expected reward. This results in an assignment to the robot parameters and neural network policy that are jointly optimal. We evaluate our approach in the context of legged locomotion, and demonstrate that it discovers novel robot designs and walking gaits for several different morphologies, achieving performance comparable to or better than that of hand-crafted designs. An agent's ability to navigate through and interact with its environment depends not just on its skill at planning and controlling its motion, but also on its physical design. Different physical designs are inherently better suited to different tasks and environments. By making appropriate choices during fabrication, mechanical elements can be designed to improve robustness to non-idealities such as errors in perception, delays in actuation, etc., and indeed, make the control problem an easier one to solve. At the same time, robots that take different forms may find completely different control strategies to be optimal to complete the same task. Therefore, the physical and computational design of an agent are inherently coupled, and must ideally be jointly optimized if the robot is to successfully complete a task in a particular environment. Consider the development of a legged robot for locomotion. Variations in physical design will require changes to the joint torques in order to preserve a particular locomotion behavior (e.g., a heavier torso requires greater torque at the ankle), and will likely result in completely different walking gaits, even when the morphology is preserved. In fact, some changes to design may render locomotion impossible for the target operating environment (e.g., a robot with long feet may be unable to locomote up an incline). Meanwhile, careful choice of bipedal design enables passive walking BID20 BID9 BID4. It is therefore beneficial to not simply consider the robot's design or gait to be fixed, but to optimize both jointly for the target environment and task. Similar co-design can be beneficial in other settings - for example, for the control policy and physical characteristics of digits in robotic grippers for grasping. While a robot's physical design and the corresponding control policy are inherently coupled, most existing methods ignore this coupling, instead choosing to alternate between separate design and control phases. Existing approaches that jointly reason over design and control BID7 BID12 BID46 assume knowledge of an accurate model of the robot dynamics and require expert supervision (e.g., to provide a suitable initial design and guide the optimization process). However, these restrictive assumptions limit their applicability to a handful of specific settings, and often yield solutions heavily influenced by expert intuition.
In this work, we seek a general approach - one that can optimize a robot's physical characteristics jointly with controllers of a desired complexity (Fig. 1), that can be applied to general tasks in some given environment, and that can explore the joint search space of physical design and computational control in a purely data-driven way, without a model of the robot dynamics and independent of the biases of expert intuition. We develop this approach in the context of determining the physical parameters of an articulated agent - the lengths and thicknesses of each limb in a given morphology - through joint training with a neural network for control, with the objective of achieving locomotion. Our method maintains a distribution over these physical parameters, and simultaneously trains the parameters of this distribution with those of a neural network controller, using deep reinforcement learning. In this way, we pursue a design distribution and control policy that are jointly optimal for the given task and environment. Experimental results show that starting from random initializations, our approach is able to find novel designs and walking gaits that match or exceed the performance of manually designed agents. To the best of our knowledge, our method is the first to successfully carry out such a joint optimization of design and control in a completely model-free manner.

Figure 1: Our algorithm learns a robot's physical design jointly with the control policy. Here we show the learned designs evolving over time for the Hopper (top left), the Walker2d (top right) and the Ant (bottom), each with the default Roboschool design for comparison. Scale is fixed for each robot. Note that these designs correspond to modes of the distribution over robot designs that our algorithm maintains during training.

Attention has been paid recently to the problem of jointly optimizing the parameters that specify a robot's design and motion (e.g., gaits) or control. Early work in this area takes an evolutionary approach to optimizing a robot's design and controller, typically parameterized as a neural network, for virtual agents BID45 BID30 BID0 and physical robots BID19 BID1. Particularly relevant to our approach is the work of BID12, who relate design and motion parameters via a set of implicit functions that express robot dynamics, desired trajectories, and actuation limits. These functions encode a manifold that is then linearized to model the relationship between design and motion via the implicit function theorem. The method then solves for the desired parameters in a local fashion via constraint-based optimization. Similarly, BID46 describe an approach that jointly reasons over physical design and motion parameters for robots with articulated degrees of freedom (e.g., legged robots). They formulate the problem in the framework of trajectory optimization by incorporating parameters involved in the design of articulated robots, including dimensions, masses, mass distribution, and moments of inertia, together with the contact force and torque variables typically associated with trajectory optimization. They use weighted actuation cost as the objective, subject to a regularization penalty on parameters. Their method is guaranteed to find a feasible design so long as the problem is initialized within a small neighborhood of a feasible solution.
Unlike our approach, their method requires that the user provide an estimate of the robot configuration at each time step, as well as an accurate analytic model of constraint dynamics (e.g., foot contact), which is computationally expensive to enforce. BID8 propose a derivative-free strategy that optimizes over muscle routing and muscle-based control for simulated bipeds to realize a design that incorporates biomechanical constraints. Related work describes an evolutionary method that jointly reasons over design and motion parameters for legged robots, while also tuning the parameters of a robust controller that tracks these motions. That method is limited to biologically inspired quadrupedal foothold patterns, and does not account for contact dynamics. In contrast, our approach applies to arbitrary morphologies and does not require that we model contact dynamics. While the focus on simultaneous optimization over robot design and motion parameters is relatively new, there is a long history of research focused on the related problem of co-design of physical structure and control BID35 BID34. BID29 jointly optimize the link geometry of a high-speed robot arm along with the parameters of a PD joint controller. These methods often rely upon access to an accurate model of the robot dynamics in order to design the controller. In order to reduce the dependence upon a detailed analytical model, BID32 use on-robot experiments to refine their model as part of the iterative design process. More recently, BID49 allow for some degree of uncertainty in the model by including the sensitivity of the design and control parameters to model uncertainty along with the task-specific optimization objectives. However, existing methods still rely upon access to an analytical model of the robot dynamics and are typically limited to simple (e.g., linear) control designs. Our method assumes only that the controller can be modeled via a convolutional neural network, and can thereby learn complex, highly nonlinear control policies with no a priori knowledge of the robot dynamics. Far more attention has been paid to the individual problem of task-driven optimization of robot motion and control. Given an arbitrary robot design chosen by novice users, BID21 describe an interactive design framework that solves for a realizable walking gait that results in stable locomotion. Similarly, BID27 synthesize emergent behaviors (motion) for arbitrary morphologies and tasks from high-level specifications, by jointly optimizing over contact and motion. BID28 build upon this work by training recurrent neural networks to serve as feedback controllers capable of producing complex, stable behaviors for a variety of dynamical systems. Their method interleaves supervised learning with trajectory optimization, and incorporates noise to improve generalizability. Meanwhile, there is a large body of literature that formulates motion synthesis as a trajectory optimization problem. This approach has proven effective at respecting contact constraints (e.g., between the foot and ground), which make controlling dynamic motion particularly challenging BID6 BID33 BID10. These approaches have been shown to generate sophisticated behaviors for complex robot designs (e.g., humanoids) BID48, and for robots of arbitrary morphologies using only a high-level specification of the robot's shape, gait, and task BID50. Related, a number of methods interleave trajectory optimization and supervised learning with neural network regression BID14 BID15 BID26 BID28.
Unlike our method, the use of trajectory optimization makes these approaches reliant upon knowledge of the model. A great deal of attention of late has focused on the problem of learning complex control policies directly from low-level sensory input, without any knowledge of the system dynamics. Methods that have proven particularly effective combine neural networks that learn representations of the high-dimensional raw sensor input with deep reinforcement learning BID36 BID24 BID41. While much of the work in this area focuses on low-dimensional, discrete action spaces, several methods have been recently proposed that learn continuous control policies through deep reinforcement learning. These techniques have been applied to control simple, simulated robots BID52 BID53 BID51 BID18, perform robot manipulation BID16 BID19 BID11, and control legged robots BID42 BID31. Black-box optimization BID40 BID37 is an alternative to using reinforcement learning to train the control policy. These approaches have the advantage that they do not involve backpropagating gradients, are insensitive to reward sparsity, and can handle long time horizons. While black-box optimization strategies have traditionally been thought of as ill-suited to difficult reinforcement learning problems, BID38 recently showed that they perform similarly to state-of-the-art RL methods on difficult continuous control problems, including those that involve locomotion. Closely related is the policy-gradient method of BID44, who define the policy as a distribution over the parameters of a controller, which they then sample over. This results in gradient estimates that are far less noisy than is typical of policy gradient algorithms. These approaches are similar to the way in which we learn the robot design, which we formulate as a Gaussian mixture model over design parameters. Indeed, the two referenced methods yield the same gradient estimate for Gaussian parameter distributions BID38. Meanwhile, much work has focused on the problem of determining robot designs that meet the requirements of a particular task. Given a user demonstration of the desired behaviors, BID5 learn optimum kinematic linkages that are capable of reproducing these motions. BID22 synthesize electromechanical robot designs in a compositional fashion based upon a complete user-specified structural specification of the robot. BID23 build upon this work, allowing the user to specify functional objectives via structured English, which is parsed to a formal specification using linear temporal logic. Other work describes a theory for co-design that includes the ability to select discrete robot parts according to functional constraints, but does not reason over geometry or motion. Related to our approach is recent work that jointly optimizes sensor design and inference algorithms for perception systems. BID3 considers the problem of jointly learning a camera sensor's multiplexing pattern along with reconstruction methods for imaging tasks. They model inference as a neural network and use stochastic gradient descent to backpropagate the loss to a neural-layer representation of the multiplexing pattern. Related, BID39 jointly learn design and inference for beacon-based localization. They encode beacon allocation (spatially and across transmission channels) as a differentiable neural layer that interfaces with a neural network for inference. Joint optimization then follows from standard techniques for training neural networks.
In this section, we begin by describing the standard reinforcement learning framework for training agent policies, and then describe how we extend this to also learn the physical design of the agent. In the standard reinforcement learning setting, an agent interacts with its environment, usually a Markov decision process, over a number of discrete timesteps. At each time step t, the agent receives a state s_t ∈ S and takes action a_t ∈ A according to a policy π: S → A. Then, the agent receives a scalar reward r_t and the next state s_{t+1} from the environment. This process continues until a terminal state is reached. The goal of reinforcement learning is then to find a policy π* that maximizes the expected return E[R_t], where R_t = Σ_{i=0}^{∞} γ^i r_{t+i} and γ ∈ (0, 1] is a discount factor. Policy gradient methods are a class of algorithms often used to optimize reinforcement learning problems, due to their ability to optimize cumulative reward and the ease with which they can be used with neural networks and other nonlinear function approximators. Consequently, they are commonly used for reinforcement learning problems that involve complex, continuous action spaces. Policy gradient methods directly parameterize a stochastic policy π_θ(a_t|s_t) and perform stochastic gradient ascent on the expected return. "Vanilla" policy gradient methods compute an estimate of the gradient ∇_θ E[R_t] using a sample-based mean computed over ∇_θ log π_θ(a_t|s_t) R_t BID54, which yields an unbiased gradient estimate BID47. While the variance in the resulting estimate decreases with the number of samples, sampling is computationally expensive. Another way to reduce variance while maintaining an unbiased estimate is to estimate the gradient by comparing the reward to a "baseline" reward b(s_t). These methods are effective but can be unstable, especially when used for deep reinforcement learning. Small changes to the policy parameters may cause large changes in the distribution of visited states. Several methods have been proposed to mitigate these effects. Among them, BID41 introduce Trust Region Policy Optimization (TRPO), which imposes a constraint on the KL-divergence between policies before and after an update. Recently, Schulman et al. proposed proximal policy optimization (PPO), a first-order class of methods similar to TRPO. PPO alternates between sampling data through interaction with the environment and optimizing the objective

L̂(θ) = Ê_t [ min( r_t(θ) Â_t, clip(r_t(θ), 1 − ε, 1 + ε) Â_t ) ]

where r_t(θ) = π_θ(a_t|s_t) / π_θold(a_t|s_t), Â_t is an estimate of the advantage at timestep t, and Ê_t represents an empirical average over a finite sample set. This objective seeks to maximize the expected return while encouraging a small step in policy space. The clipping within the objective removes any incentive for moving r_t(θ) outside the interval [1 − ε, 1 + ε]. The net effect of this objective is a simple, robust policy gradient algorithm that attains or matches state-of-the-art results on a wide array of tasks.
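For concreteness, the clipped surrogate objective can be written in a few lines of PyTorch. This is a generic sketch of the PPO loss, not the OpenAI Baselines code used in the experiments; ε = 0.2 is a common default and an assumption here:

    import torch

    def ppo_clip_loss(logp, logp_old, advantages, eps=0.2):
        # r_t = pi_theta(a_t|s_t) / pi_theta_old(a_t|s_t), computed in log space.
        ratio = torch.exp(logp - logp_old)
        clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)
        # Negated because optimisers minimise; the objective itself is maximised.
        return -torch.min(ratio * advantages, clipped * advantages).mean()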
We extend the standard reinforcement learning formulation by considering the space of possible robot designs Ω. Specifically, we assume that for every design E ∈ Ω, we can define common state and action spaces S and A, a reward function r(s, a), and an initial state distribution p_0(s) that are meaningful to all designs. The designs differ only in the transition dynamics they induce, p_E(s'|s, a), and share a common action space A; to achieve this, we assume a common morphology for all possible designs. Our goal then is to find the optimal design E* and policy π*_{E*} pair that maximizes the expected value of a given reward function. However, this is a non-linear, non-convex optimization problem over the spaces of all possible designs and all possible policies. Solving it exactly would require enumerating all possible designs (possibly through discretization of the design space), learning policies for each, and comparing the resulting expected rewards. This is computationally infeasible for all but the simplest of cases. Instead, we develop a gradient-based approach to solving this optimization problem with the following key components: we maintain a multi-modal distribution over the space of physical designs and update this distribution using policy gradient methods in parameter space, similar to BID44 and BID38; and we train a single controller to act on all sampled designs during training, providing this controller with the design parameters of the specific sample it is controlling. We find that the multi-modal stochastic parameterization of the design space allows our method to explore the space more thoroughly (where a uni-modal Gaussian distribution would frequently get trapped in local minima). Moreover, a common controller makes optimization tractable, allowing efficient evaluation of unseen designs. We additionally benefit from learning common strategies across diverse designs, while still adapting different policies to different parts of the design space given the sample parameters as input. Together, these components enable efficient joint exploration of the design and policy spaces. Formally, let p(r; φ) denote the distribution over designs, and π(a_t|s_t, r; θ) a stochastic control policy, parameterized by φ and θ respectively. In our experiments, we use a neural network to model the control policy π, and a Gaussian mixture model as the parametric form of the distribution p(r; φ). Our goal is then to solve the following optimization problem:

(θ*, φ*) = argmax_{θ,φ} E_{r,t} [ R_t ]    (2)

where E_{r,t}[·] is the expectation over robots r ∼ p(·; φ) and the trajectories those robots induce. We use stochastic gradient-based updates to optimize Eqn. 2. Our method (Algorithm 3.2) alternates between updating the parameters of the policy and design distributions, θ and φ, respectively. We empirically find this to yield convergence to better solutions in a reasonable number of iterations compared to performing simultaneous updates. We optimize the policy parameters θ using Proximal Policy Optimization. However, instead of collecting data from a single design, we sample a design r after a fixed number T of timesteps according to the distribution p(r; φ). After n iterations of PPO, we freeze the policy parameters and proceed to optimize the parameters of the design distribution. Without knowledge of the model, we optimize the design parameters φ via policy gradient over parameter space, similar to BID44. This is equivalent to black-box optimization methods that have proven effective for complex learning problems BID38. We sample m different designs and compute their returns for a single episode acting under policy π(·; θ). We then use this data to shift the design distribution p(·; φ) in a direction maximizing expected return. These iterations are repeated until convergence.
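The design-distribution update is a score-function (black-box) gradient step. A minimal sketch, assuming design_dist is a torch.distributions object (e.g., a MixtureSameFamily Gaussian mixture) built from parameter tensors registered in optimizer, and rollout_return(r) runs one episode of the frozen policy on design r; a mean-return baseline is used to reduce variance:

    import torch

    def design_step(design_dist, rollout_return, optimizer, m=16):
        # grad_phi E[R] ~= mean_i (R_i - b) * grad_phi log p(r_i; phi)
        designs = design_dist.sample((m,))
        returns = torch.tensor([rollout_return(r) for r in designs])
        baseline = returns.mean()
        loss = -((returns - baseline) * design_dist.log_prob(designs)).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()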
In a series of locomotion experiments, we show that our approach not only discovers novel robot designs and gaits, but also outperforms two of the three hand-designed robots trained on the same task and yields performance comparable to the third. We evaluate our approach on three legged locomotion tasks within Roboschool: RoboschoolHopper, RoboschoolWalker2d, and RoboschoolAnt (note that we subsequently drop the Roboschool prefix for brevity). FIG0 depicts the default Roboschool design for each robot. These environments describe a locomotion task that has the robots moving along the positive x-axis (to the right in the figures) towards a distant goal. The reward function defined in these environments is a weighted sum of rewards for forward progress and staying upright, and penalties for applying torques and for joints that reach their rotational limits. The episode ends when the robot falls over or reaches a maximum number of timesteps. We learn robot designs using the morphologies specified by the standard Roboschool designs. For each morphology, we parameterize the robot in terms of the length and radius (and thereby the mass) of each link (or just the radius in the case of the sphere for the Ant body). We impose symmetry, and share parameters across each leg for the Walker2d and Ant designs. This parameterization permits a wide variety of robots of different shapes and sizes, some of which are better suited to the locomotion objective than others, and some designs that do not permit a control policy that results in locomotion. We model the control policy π(a_t|s_t, r; θ) as a feed-forward neural network consisting of three fully-connected layers with 64 units and tanh activation functions, as sketched below. A final layer maps the output to the robot action space. With the exception of the last robot-specific layer, the architecture is the same for all experiments. Meanwhile, we represent the distribution over the parameterized robot design p(·; φ) using a Gaussian mixture model with four mixture components (each with a diagonal covariance matrix over the different parameters). We initialize the means of each component randomly within a wide range, and initialize the variances in order to span the range of each parameter. We find that our approach maintains high-variance distributions during early iterations, thereby continuing exploration of the design space, before committing to a chosen design. The appendix provides further details regarding the evolution of these distributions. Figure 3: Training curves that show the evolution of reward across training iterations for the Hopper, Walker2d, and Ant environments over eight different random seeds. The black dashed line corresponds to the highest achievable performance of the corresponding baseline robot. In other figures, we display the best-performing checkpoint of each run. Therefore, we make each training curve transparent after it stops improving. We train our method with eight threads of Proximal Policy Optimization for a total of 300M environment timesteps across all threads. We use the publicly available implementation of PPO provided in OpenAI Baselines BID13. We alternate between 50 policy PPO iterations and 2 design iterations, finding this ratio to yield convergence to good solutions. We tuned this and the other hyper-parameters on the Hopper and used the same settings for all experiments.
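A sketch of such a controller in PyTorch follows; injecting the design parameters by concatenation with the state matches the description of providing the controller with the sampled design, but the exact input encoding is our assumption.

```python
import torch
import torch.nn as nn

class DesignConditionedPolicy(nn.Module):
    """Three 64-unit tanh layers over the state concatenated with the
    sampled design parameters, plus a final robot-specific action layer."""

    def __init__(self, state_dim: int, design_dim: int, action_dim: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(state_dim + design_dim, 64), nn.Tanh(),
            nn.Linear(64, 64), nn.Tanh(),
            nn.Linear(64, 64), nn.Tanh(),
        )
        self.action_head = nn.Linear(64, action_dim)  # robot-specific layer

    def forward(self, state: torch.Tensor, design: torch.Tensor) -> torch.Tensor:
        return self.action_head(self.body(torch.cat([state, design], dim=-1)))
```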
We evaluate the performance of our approach to joint optimization over robot design and control on the Hopper, Walker2d, and Ant robots, and compare against a policy trained for the standard Roboschool design. We evaluate the consistency and robustness of our approach relative to the initial robot parameter distribution using several different random seeds. For each robot morphology, we find that our method learns robot designs and corresponding control policies that are either comparable to (Walker2d) or exceed (Hopper and Ant) the performance achievable using the default designs (Fig. 3). We also see that our method achieves these levels of performance by discovering unique robot designs (Fig. 1) together with novel walking gaits (Fig. 4) that cannot be achieved with the default designs. Note that our method was able to learn these designs and control policies from a random initialization, without access to a dynamics model or any expert supervision. Our method learns a joint design and controller for the Hopper that outperforms the baseline by as much as 50%. Our learned robot exploits the small size of the foot to achieve faster, more precise walking gaits. Meanwhile, the longer torso of the learned robot improves stability, allowing it to maintain balance while locomoting at a faster pace. We found this behavior to be consistent across several different Hopper experiments, with the method converging to designs with a small foot and a long, thin torso. In the appendix, we explore the stability of this design with respect to variations in the environment, using the coefficient of friction as an example, and find the improvement to be consistent. For the Ant, our optimization yields a physical design that is significantly different from the default design (FIG0). Consequently, the learned Ant drastically outperforms the baseline, improving reward by up to 116%. Our method learns a design with a small, lightweight body and extremely long legs. The long legs enable the ant to apply large torques at contact, allowing it to move at a fast pace. Our framework learned different design-control pairs for the Walker2d that perform similarly to the default design. Across several different experiments, we see two distinct, successful designs and walking gaits. Interestingly, neither agent walks primarily on its feet. The first design has small, thick legs and long feet. The controller is then able to exploit the thick middle leg link, which protrudes slightly past the foot, to push off the ground. The long foot then provides additional balance. The second design is similar in geometry to the baseline Walker2d, but moves in a very different way. By lowering the knee joint and lengthening the foot, the Walker2d is able to efficiently balance on its knees and toes. This low stance allows the Walker2d to fully extend its leg backwards, creating a long, powerful stride, similar to that of a sprinter using starting blocks. Figure 4: Here, we compare locomotion of default and learned robots, visualizing both their physical design and corresponding learned gaits. We pick the best result for Hopper and Ant, and two of the best for Walker2d (due to diversity in gaits). Note that for each robot type, we show a blend of a fixed number of frames in the same time interval, allowing direct comparison of the speeds with which different designs are able to locomote.
We proposed what is, to the best of our knowledge, the first model-free algorithm that jointly optimizes over the physical design of a robot and the corresponding control policy, without any need for expert supervision. Given an arbitrary morphology, our method maintains a distribution over the robot design parameters and learns these parameters together with a neural network controller using policy gradient-based reinforcement learning. This results in a distribution over robot parameters and a control policy that are jointly optimal. We evaluated our approach on a series of different legged robot morphologies, demonstrating that it results in novel robot designs and walking gaits, achieving performance that either matches or exceeds that of manually defined designs. Our findings suggest several avenues for future work. The most direct is extending the current approach to find optimized designs for uneven terrain, the presence of obstacles, changes in slope, variations in friction, etc. We are also interested in extending our framework to relax the assumption that the morphology is pre-defined. Finally, we are investigating applications to different types of agents and design spaces beyond legged robots (e.g., end-effectors), and exploring appropriate stochastic parameterizations for such designs. The following provides further evaluation of our framework, including the behavior of the design distribution during training and the robustness of the learned designs to environment variations. To provide insight into the training process of our algorithm, we evaluate the evolution of the Gaussian mixture model distribution throughout training of the best-performing Hopper experiment. We initialize each component of the mixture model with random means and a diagonal covariance matrix chosen to cover the parameter space. The initial mixture weights are uniform. As shown in FIG1 (left), roughly one third of the way through training, our algorithm converges to the most successful component. Additionally, we find that modes generally do not collapse (FIG1). We find this behavior to be consistent across random seeds and different robots. It is often desirable for a robot to be able to operate in a variety of environments. In this section, we consider the robustness of a learned design to changes in friction. We conducted experiments on the Hopper in which we first learned the design and controller for one friction setting (0.8). We finetuned the controller in environments with different friction settings, while leaving the design fixed. We find the learned design to be reasonably robust, with variability comparable to controllers finetuned for the default hand-crafted design (Fig. 6). Additionally, the learned design outperforms the hand-crafted one across the full range of friction values (although, for very low friction values, both designs were essentially unable to learn a successful gait). Note that our framework can incorporate this goal of generalization by simply sampling from a diverse set of environments during training. At the same time, it may be useful in some applications to seek out solutions that are specifically adapted to a relatively narrower set of environments, gaining better performance within this set at the cost of more general performance. Figure 6: A visualization of accumulated reward for different friction values. The controllers and learned design were trained in an environment with a friction value of 0.8.
We finetuned the controllers (but not the designs) for both the learned and hand-crafted designs for 10M timesteps for each friction value. Rewards are reported as an average over 100 episodes.
Use deep reinforcement learning to design the physical attributes of a robot jointly with a control policy.
477
scitldr
Deep learning enables training of large and flexible function approximators from scratch at the cost of large amounts of data. Applications of neural networks often consider learning in the context of a single task. However, in many scenarios what we hope to learn is not just a single task, but a model that can be used to solve multiple different tasks. Such multi-task learning settings have the potential to improve data efficiency and generalization by sharing data and representations across tasks. However, in some challenging multi-task learning settings, particularly in reinforcement learning, it is very difficult to learn a single model that can solve all the tasks while realizing data efficiency and performance benefits. Learning each of the tasks independently from scratch can actually perform better in such settings, but it does not benefit from the representation sharing that multi-task learning can potentially provide. In this work, we develop an approach that endows a single model with the ability to represent both extremes: joint training and independent training. To this end, we introduce matrix-interleaving (Mint), a modification to standard neural network models that projects the activations for each task into a different learned subspace, represented by a per-task and per-layer matrix. By learning these matrices jointly with the other model parameters, the optimizer itself can decide how much to share representations between tasks. On three challenging multi-task supervised learning and reinforcement learning problems with varying degrees of shared task structure, we find that this model consistently matches or outperforms joint training and independent training, combining the best elements of both. While deep learning has enabled remarkable levels of generalization through the use of function approximators, this comes at the cost of large amounts of data, which remains a critical challenge in deploying deep learning to a number of domains. When combined with deep networks, multi-task learning offers the promise of building more powerful representations using less data per task, leading to greater performance and data efficiency. However, multi-task deep learning has also posed considerable challenges. Numerous works have observed that joint training on multiple tasks can actually decrease task performance due to the negative influence of other tasks. Indeed, training networks entirely independently on each task has remained a strong approach, to the point that multiple multi-task methods have first trained models independently before using them to train a multi-tasking model. Moreover, our experiments in Section 6 indicate that three recently proposed methods for multi-task learning are all surpassed by training models independently per task. However, training independent models will only work well when provided enough data per task, and precludes potential positive data-efficiency gains from multi-task learning, only providing protection against negative transfer. Further, while a number of works have successfully shared parameters, finding an architecture with the appropriate level of parameter sharing for a given problem domain can require a considerable amount of manual engineering. In this work, we aim to develop a multi-task learning method that can perform well both when tasks share very little and when they share a large amount of structure.
To address this problem, we consider how a single neural network model can represent two extremes: independent models, when optimization challenges prevail, or a single model with shared weights, when sharing is beneficial. Further, we would like such a model to be able to represent intermediate levels of model sharing, when applicable. One option for performing independent training within a single model is to put separate networks with independent weights into a single model, using the task ID to select which network prediction to output. However, this prevents any sharing. An alternative approach is to condition the model on the task ID through various conditioning approaches, including additive and multiplicative approaches such as FiLM. In fact, point-wise multiplicative conditioning, as proposed in FiLM, can indeed represent separate networks by selecting which parts of the network are used for different tasks, as can a number of other approaches in multi-task learning. Yet, these approaches still require an optimization over shared parameters in order to select which parameters are used for each task. These shared parameters can introduce significant optimization challenges. We instead consider how to allow a model to perform optimization on only shared parameters, only disjoint parameters, or any combination thereof. We can achieve this by simply interleaving learned per-task matrices at each layer of a jointly-trained neural network. When optimization over shared parameters is ineffective, the model can still represent a full neural network per task using only the per-task matrices, resulting in independent training; using identical per-task matrices results in standard joint training. Intermediately, a mix of shared and per-task parameters may be used. In effect, by incorporating these matrices into the network, the optimizer itself can automatically and dynamically modulate the degree to which a representation is shared between tasks, depending on the problem domain and the optimization progress, and can do so without having to optimize shared parameters. The primary contribution of this paper is a simple yet effective approach for multi-task learning that can represent and smoothly interpolate between independent training and joint training, via matrix interleaving (Mint). We describe how we can implement Mint in deep multi-task models and show its effectiveness in improving data efficiency and generalization in multi-task settings, while providing intuition about the reasons why this architecture performs so well. Further, we show that the model can be extended to goal-conditioned reinforcement learning in a straightforward manner by allowing the model to generate the interleaved matrices conditioned on task information such as the goal. We evaluate Mint on sets of tasks with both high and low levels of shared structure and find that it performs well in both settings, performing comparably to or outperforming both joint training and independent training, effectively combining the best elements of both. Further, in comparison to previous methods that use multiplicative interactions for continual learning and for general conditioning, Mint is better able to separate tasks by avoiding the need to optimize over shared parameters and can empirically produce substantially better performance on a range of challenging multi-task problems. Finally, Mint also outperforms state-of-the-art approaches for multi-task learning while being significantly simpler to implement.
In multi-task learning, the goal is to find a θ-parameterized model f_θ that reaches high performance across all training tasks drawn from a task distribution p(T), i.e., that minimizes E_{T_k ∼ p(T)}[L_k(f_θ)], where L_k denotes the loss function for task T_k. In this work, we study the setting in which we have a fixed set of K tasks, and we wish to obtain high performance on all of the tasks in this set. In Section 4, we will study this multi-task problem in the context of a reinforcement learning setting. In our multi-task learning setup, we train a model that is conditioned on a task indicator z_k which is used to specify a task T_k that should be performed. The task indicator can be represented in a variety of ways, from simple categorical variables to learned task embeddings. This formulation can be readily extended to a goal-conditioned reinforcement learning setting, where z_k indicates the desired goal. The multi-task learning model learns to optimize the objectives of all the tasks T_k that the model is trained on. Joint and fully independent training are the two extremes of multi-task learning. Assuming a set of n training tasks, we characterize multi-task training as independent if it optimizes a set of task-specific, disjoint parameters {θ_k}_{k=1}^n that parameterize the models f_{θ_k}. We define joint training as finding a single set of task-independent, shared parameters θ of f_θ. Note that joint training utilizes the fact that the parameters are shared throughout learning. While joint training has a number of data-efficiency and generalization benefits, it can be difficult to train effectively. Considering fully independent and fully joint training as two ends of the spectrum in multi-task learning problems, we want to design an algorithm that gets the best of both worlds: the stability of independent training and the parameter-sharing efficiency of jointly trained models. To this end, we propose to build models that allow the neural network to learn, in an adaptive fashion, how much information should be shared and between which tasks throughout learning. We describe the details of our approach below. In the case of neural network models, we view representations as network activations at different layers of the network. We aim to introduce a modification to the neural network that would allow the model to form those activations in a completely task-specific way, in a completely shared way, or in a way that shares to an intermediate degree. To achieve this, we propose a model architecture that transforms the previous layer's representation both in a task-general way and in a task-specific way, in sequence. When two tasks share very little, the network can optimize task-specific weights, while when the tasks share a considerable degree of structure, the network can leverage the shared weights. Since these transformations are task-specific and can be introduced at various layers of the network, they allow for different amounts of representation sharing at different levels of the neural network model. To understand the practical implementation, consider the activations at layer l of a fully-connected neural network: y^(l) = σ(W^(l) y^(l−1) + b^(l)), where W^(l) is the weight matrix for layer l, b^(l) is the bias vector for layer l, and σ is the non-linearity at each layer. The Mint layer augments the traditional fully-connected layer with a task-specific weight matrix M_k and bias vector β_k, where k indexes the task. The forward pass of a Mint layer for some vector of activations y is presented in Definition 1. Definition 1.
A Mint layer applies an affine transformation to activations y ∈ R^n as follows, yielding new activations a: a = M_k y + β_k, (1) where M_k and β_k are the per-layer task-specific matrix and bias, respectively. A neural network augmented with Mint thus contains, for each layer l, parameters that are shared across all tasks as well as parameters that are only used for a particular task; see Figure 1 for a visual depiction of the application of Mint. We show how the Mint layers and the regular fully-connected layers can be interleaved in Equations 2 and 3 below: a^(l) = M_k^(l) y^(l−1) + β_k^(l), (2) y^(l) = σ(W^(l) a^(l) + b^(l)). (3) Because this layer is fully differentiable, we can learn the task-specific matrices M_k^(l) and biases β_k^(l) jointly with the shared parameters of the model. When we apply Mint to problems with very large numbers of tasks, or arbitrary task descriptors (e.g., goal-conditioning), we can train separate neural networks to output the Mint matrices and biases at every layer, instead of storing independent matrices for each task. In this case, Mint resembles FiLM, another method which performs transformation-based task conditioning. In contrast to FiLM, Mint uses a matrix transforming the activations at each layer instead of a point-wise multiplication by a vector. In the next section, we study a theoretical property of Mint that motivates the chosen definition of the Mint layer. We validate the benefits of Mint in our experimental evaluation in Section 6. We next aim to theoretically study the expressive power of multi-task learning architectures when the shared parameters cannot be optimized effectively (e.g., due to optimization challenges arising from learning tasks that share little or no structure). While a viable approach in this regime is to simply use separate task-specific networks f_{φ_i} with no shared parameters, this approach makes the strong assumption that the tasks indeed share no exploitable structure. This is often not known a priori. We show in this section that in the 'worst case' where shared parameters are not useful, the class of functions defined by a Mint network is still as rich as that defined by a set of purely task-specific networks. Thus, Mint makes no sacrifices in terms of expressiveness with respect to the optimal attainable parameters φ*_i and retains the possibility of exploiting shared structure if it does exist. Further, we show that the same is not true of existing methods that are related to Mint. Specifically, we compare the class of functions representable by Mint with: 1. the class of functions defined by a FiLM network, where a layer's activations are 'tilted' through element-wise multiplication with a task-specific vector γ_k and addition with a task-specific bias β_k; 2. the class of functions expressible by a task-indicator concatenation approach, where a one-hot task indicator vector z_k for task T_k is concatenated to each shared layer's input. We begin by considering the optimal parameters for task-specific networks f_{φ*_i}, where φ*_i = argmin_φ L_i(φ), the parameters that minimize the loss function L_i for task T_i. We assume that f_{φ*_i} is an L-layer MLP. We also consider L-layer Mint networks f_{θ_i} (consisting of L sequential pairs of Mint transform and shared parameter layer, as in Equations 2 and 3) and L-layer FiLM networks (consisting of L sequential pairs of FiLM transform and shared parameter layer). We state Definition 2 to complete our setup. Definition 2. Let y^(l−1) be the activations of layer l − 1, and let α_i = {W^(l), b^(l)} be the shared parameters (weight matrix and bias vector) at the l-th layer.
We assume that W^(l) is an arbitrary, fixed invertible matrix. With this setup, we state Lemmas 1 and 2, which highlight a key difference in the layer-wise expressiveness of Mint and methods such as FiLM and task-indicator concatenation. Lemma 1. For a given α_i, applying Mint to y^(l−1) can express an arbitrary affine transformation at layer l for each task. Lemma 2. (a) For a given α_i, there exist affine transformations that cannot be expressed by applying FiLM to y^(l−1). (b) Similarly, for a given α_i, there exist affine transformations that cannot be expressed by a network which concatenates the input y^(l−1) of each layer l with a task indicator z_k. The proof can be found in Appendix A. We assume only that the shared parameter matrices W^(l) are invertible, such that they do not lose information about the input. This assumption critically does not require the optimizer to productively update the shared parameters. We use Lemmas 1 and 2 to extend the above layer-wise expressiveness properties of Mint and FiLM to compare the function classes defined by L-layer Mint networks f_{θ_i} and FiLM networks f_{ψ_i} in the context of an optimal L-layer MLP f_{φ*_i}. We state this comparison in Theorem 1, whose statement is, informally: for any shared parameters α and any optimal task-specific MLP f_{φ*_i}, there exist Mint parameters θ_i such that f_{θ_i}(x) = f_{φ*_i}(x) for all inputs x, whereas there exist α and φ*_i for which no FiLM parameters ψ_i (and no task-indicator concatenation) can match f_{φ*_i} on all inputs. We use α to denote the subset of θ_i and ψ_i that corresponds to the parameters of the network that are shared across tasks. Since f_{φ*_i} is an MLP (a composition of affine transformations and activation functions), the proof of the first statement follows from applying Lemma 1 to each layer of f_{φ*_i} and f_{θ_i} (the task-specific and Mint networks, respectively). Similarly, Lemma 2 implies that we can construct an α and L_i (and therefore φ*_i) such that we can find an x satisfying the second statement. That is, for any set of shared parameters in a Mint network and any optimal 'target' MLP, there exist weights for Mint that are effectively equivalent to the target MLP. On the other hand, there exist settings of the shared parameters that make matching the outputs of the target MLP unattainable in the FiLM and task-indicator concatenation regimes. Using Lemma 2, we reach an equivalent result regarding the limited expressiveness of the task-indicator concatenation approach. The general idea of Mint is implemented as follows in the supervised and reinforcement learning settings. In the case of supervised learning models, we simply apply Mint layers at every fully-connected layer of the model. Specifically, for a task identifier z_k ∈ R^K, where K is the number of tasks, and for every layer l with activations a^(l) ∈ R^n, nonlinearity σ, weight W^(l), and bias b^(l), we represent the transformation using two tensors T_M ∈ R^{n×n×K} and β ∈ R^{n×K} that take in the task identifier and output the per-layer task-specific matrices M^(l) = T_M z_k and biases β^(l) = β z_k, respectively. The transformation can be summarized as follows: a^(l) = σ(W^(l)(M^(l) a^(l−1) + β^(l)) + b^(l)). Multi-Task Reinforcement Learning. For multi-task reinforcement learning, we implement the architecture similarly to the supervised learning case, but we combine this with actor-critic RL algorithms by introducing this architecture into both the critic Q(s, a, z_k) and the actor π(a|s, z_k). For the case of goal-conditioned RL, we introduce a slightly modified Mint architecture into both the actor and the critic, conditioned on the task goal g.
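To make the layer concrete, here is a minimal PyTorch sketch of one Mint transform followed by a shared layer, per Equations 2 and 3; the tanh nonlinearity, identity initialization of M_k, and integer task indexing are our assumptions rather than details from the paper.

```python
import torch
import torch.nn as nn

class MintBlock(nn.Module):
    """One Mint transform (Eqn. 2) followed by one shared layer (Eqn. 3)."""

    def __init__(self, in_dim: int, out_dim: int, num_tasks: int):
        super().__init__()
        # Per-task M_k, initialized at the identity so training starts out
        # equivalent to ordinary joint training; beta_k starts at zero.
        self.M = nn.Parameter(
            torch.eye(in_dim).expand(num_tasks, -1, -1).clone())
        self.beta = nn.Parameter(torch.zeros(num_tasks, in_dim))
        self.shared = nn.Linear(in_dim, out_dim)      # W^(l), b^(l)

    def forward(self, y_prev: torch.Tensor, task_id: torch.Tensor) -> torch.Tensor:
        M, beta = self.M[task_id], self.beta[task_id]     # select task k
        a = torch.einsum("bij,bj->bi", M, y_prev) + beta  # Mint transform
        return torch.tanh(self.shared(a))                 # shared layer
```

Lemma 1's construction can also be sanity-checked numerically: with W invertible, choosing M = W^{-1} W_target and β = W^{-1}(b_target − b) makes the shared layer applied after the Mint transform reproduce an arbitrary target affine map. A sketch, with random matrices standing in for trained weights:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
W, b = rng.normal(size=(n, n)), rng.normal(size=n)      # fixed shared layer
W_t, b_t = rng.normal(size=(n, n)), rng.normal(size=n)  # arbitrary target map

M = np.linalg.inv(W) @ W_t          # Mint parameters prescribed by Lemma 1
beta = np.linalg.inv(W) @ (b_t - b)

y = rng.normal(size=n)
assert np.allclose(W @ (M @ y + beta) + b, W_t @ y + b_t)
```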
Specifically, for every layer l with activations a^(l), nonlinearity σ, weight W^(l), and bias b^(l), we represent two transformation functions T_φ^(l) and T_ψ^(l) by two 2-layer ReLU networks that take in the goal and produce a per-layer goal-specific matrix M^(l)(g) and bias β^(l)(g), respectively. The transformation can be summarized as: a^(l) = σ(W^(l)(M^(l)(g) a^(l−1) + β^(l)(g)) + b^(l)). A sketch of this goal-conditioned variant follows the related-work discussion below. 5 RELATED WORK Multi-task learning focuses on the problem of finding a single model that can solve multiple different tasks. This formulation can be readily applied to a variety of learning domains, such as supervised learning, and multi-task and goal-conditioned reinforcement learning. While multi-task learning offers the promise of efficient training of shared representations, naïvely training a single model on multiple tasks often does not result in these desired benefits, due to the optimization challenges introduced by the multi-task setting. In order to eliminate potential negative interference between different tasks during multi-task learning using a single model, many approaches propose to learn each task separately, and to later combine their solutions into a single multi-task model. In contrast to these works, we present a method that is able to train a single model on multiple tasks and is able to interpolate between the extremes of joint and independent training. More closely related to our approach, various architectural solutions have been proposed to increase the multi-task learning capability of the model. Example approaches include architectural changes that allow multiple modules or paths within the same network, transformation-based task conditioning, attention-based architectures, multi-headed network solutions, and a variety of other approaches (Ruder et al., 2017). We demonstrate an approach that allows for a middle ground between the conceptual extremes of fully independent training at one end and single-model joint training at the other. This added flexibility enables us to sidestep the negative effects of potential task interference while at the same time sharing parameters between the tasks when appropriate. Prior approaches have investigated this ability by factorizing layers in the neural network across independent and shared components. In contrast, our method is simpler to implement, less computationally intensive, and empirically easier to optimize. In our experiments, we provide direct comparisons of our method to cross-stitch networks, routing networks, the FiLM architecture, rotational superposition, and multi-headed models. The goal of our experimental evaluation is to answer the following questions: (1) does our method enable effective multi-task learning both in settings where there is substantial overlap in tasks and where there is little overlap? (2) how does our method compare to independent training and joint training in those settings? (3) how does our method compare to prior state-of-the-art approaches? To answer the above questions, we conduct experiments on multi-task reinforcement learning domains. For multi-task RL domains, we perform experiments on two multi-task RL benchmark variants, MT10 and MT50 (as showcased in Figure 2), proposed by the Meta-World benchmark. Finally, to test whether Mint can excel in continuous task distributions, we also evaluate the method on a goal-conditioned RL domain where a Sawyer robot arm is required to push a randomly initialized puck to different goal positions.
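Below is the promised sketch of the goal-conditioned variant, with the two 2-layer ReLU networks producing M^(l)(g) and β^(l)(g); the hidden size is illustrative.

```python
import torch
import torch.nn as nn

class GoalMint(nn.Module):
    """Goal-conditioned Mint transform for one layer: two small ReLU
    networks map the goal g to a matrix M(g) and a bias beta(g)."""

    def __init__(self, goal_dim: int, layer_dim: int, hidden: int = 64):
        super().__init__()
        self.layer_dim = layer_dim
        self.T_phi = nn.Sequential(                 # produces M(g)
            nn.Linear(goal_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, layer_dim * layer_dim))
        self.T_psi = nn.Sequential(                 # produces beta(g)
            nn.Linear(goal_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, layer_dim))

    def forward(self, a_prev: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
        M = self.T_phi(g).view(-1, self.layer_dim, self.layer_dim)
        beta = self.T_psi(g)
        return torch.einsum("bij,bj->bi", M, a_prev) + beta
```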
For all RL experiments, we use the popular off-policy RL algorithm soft actor-critic (SAC), which has been shown to solve many RL benchmarks with great data efficiency. On the multi-task RL domains, we compare Mint to the following methods: • SAC: train a vanilla SAC agent with the task identifier as part of the input. • Multi-head SAC: train a SAC agent where both the actor and critic are represented as multi-head feedforward neural networks, where the number of heads is the number of tasks. • SAC (concat all fc): train a SAC agent where the task identifier z is concatenated with the activations at each layer and passed as input to the next layer. • FiLM: the actor and critic are learned with neural networks combined with FiLM. • Superposition: the actor and critic are learned with neural networks combined with superposition. • Independent: learn a separate actor and critic per task. We provide details of the architecture design in each domain as well as the environment set-up in Appendix B. On the multi-task RL domains, we first investigate the ability of Mint to perform a set of distinct RL tasks. As discussed in the Meta-World benchmark, MT10 and MT50 serve as representative benchmarks to evaluate multi-task learning algorithms on learning a diverse range of robotics manipulation tasks. We present the results in Figure 3. The success rates are averaged across tasks, and we adopt the success metrics used in the Meta-World benchmark. We design appropriate architectures for the actor and the critic of the SAC algorithm for each method such that the number of parameters of each approach is around the same magnitude (see Appendix B for details). For MT10, Mint learns all tasks with the best data efficiency, while independent networks also learn all of the tasks with slightly worse data efficiency. The other methods are unable to acquire half of the skills. Mint, on the other hand, enjoys the expressive power to interpolate between independent learning and sharing, while mitigating optimization challenges, attaining the best of the two extremes. For MT50, where the evaluations are done on all 50 of the Meta-World environments, as shown on the right in Figure 3, Mint quickly learns to solve more than 60% of tasks in 20 million environment steps, while SAC and SAC with multi-head architectures struggle to solve 40% of the tasks after 35 million steps. Independent networks learn to solve the tasks more slowly than Mint but eventually surpass it. This also validates the expressive power of Mint to represent both separate learning and learning with shared networks. Figure 3: Learning curves on MT10 (left) and MT50 (right). We observe that independent training performs well on both benchmarks. Mint, unlike prior multi-task learning approaches, is able to perform at a similar level to independent training. We also examine the learned Mint layers in the MT10 setting to analyze whether they capture task-specific information. Specifically, we replace the 'close drawer' task in MT10 with a duplicated 'press button top' task, and thus we have two 'press button top' tasks in MT10 (see Figure 2). We train Mint on this new set of tasks. Concretely, if we index the tasks from 1 to 9, then we have two copies of task T_1 and learn two separate Mint matrices: M_{T_1,1} and M_{T_1,2}. We then compute the pairwise ℓ1 distance between the Mint layer learned for one of the duplicate 'press button top' tasks and the Mint layer learned for any other task, such as 'insert peg side', in the set. Specifically, we compute d(M_{T_1,1}, M_{T_i}) for all i ≠ 1, where d is the ℓ1 distance.
We compute the percent difference between each d(M_{T_1,1}, M_{T_i}) and d(M_{T_1,1}, M_{T_1,2}) and present it in Figure 4 on the left; for each pair (M_{T_1,1}, M_{T_i}), i ≠ 1, we plot this relative difference. From the figure, the two Mint layers learned for the duplicated tasks have the smallest relative ℓ1 distance, except for the distance between the Mint layers of 'press button top' and 'reach', which is reasonable since 'press button top' is structurally similar to reaching. Finally, we provide a comparison between Mint and other methods where the non-Mint methods are trained with the same number of layers as Mint, as opposed to a similar number of parameters as in Figure 3. As shown on the right in Figure 4, Mint outperforms all of the other approaches in this setting. Figure 4: On the left, we show the percent increase in ℓ1 distance between the Mint layers learned for one of the duplicate tasks and each of the other tasks in MT10, as compared to the distance between the Mint layers learned for the two duplicate tasks. We can see that for most of the tasks the percent increase in ℓ1 distance is approximately 10%, except that the distance between 'reach' and 'press button top' is smaller, which could be explained by the fact that 'press button top' is inherently just a reaching task. On the right, we compare Mint to a list of other methods with the same number of layers and find that Mint achieves a significantly higher success rate than any of the other approaches. Next, we consider the question of whether Mint can be applied to a set of RL goals that have considerable shared structure. Hence, we evaluate all methods on the goal-conditioned Sawyer pushing domain. In this environment, the goal space consists of the initial positions of the puck and the goal positions. For details of the goal-conditioned environment, see Appendix B. At each policy rollout step, we sample a batch of 9 goals and collect 3 paths for each goal, where all the paths are stored in task-specific replay buffers. At each training step, we sample a batch of 9 goals and 128 samples per goal from their corresponding replay buffers. To prevent creating an infinite number of replay buffers, we discretize the goal space into 200 goals. Given that it is impractical to train 200 independent agents, we sample 10 goals from the goal space and train 10 independent SAC agents to estimate the performance of independent training in goal-conditioned pushing. As shown in Figure 5, Mint outperforms all methods both in terms of data efficiency and distance to the goal. SAC (concat all fc) also achieves comparable performance, while independent networks fail to learn the task without sufficient amounts of data, suggesting that the ability of Mint to represent both joint training and independent networks per task is crucial in multi-task learning and can lead to considerable improvement. Simultaneous optimization of multiple, potentially unrelated tasks can prove challenging for deep neural networks. Recent multi-task learning architectures attempt to mitigate this issue by providing alternative pathways for information to flow through a neural network for each task. In this paper, we introduce a new multi-task learning module, Mint, which provides theoretical guarantees of universal approximation even for multi-task settings with no shared structure. We conjecture that this property, not shared by similar multi-task architectures, enables Mint to outperform other multi-task approaches on a variety of reinforcement learning benchmarks.
We also observe that Mint is able to match or improve upon the performance of independent training. While Mint exhibits strong performance gains over previous methods, one potential limitation is that the task matrices may introduce a significant number of parameters, particularly as the number of tasks increases. As discussed, this can be alleviated for problem domains with many tasks by learning a single neural network that produces the matrices and biases conditioned on the task descriptor. Further, in our experiments, we find that Mint-based networks can outperform prior methods while using comparable or fewer parameters. In summary, Mint is a simple, yet effective approach for deep multi-task learning. Its implementation requires minimal modifications over standard deep networks. As a result, we expect it to be straightforward for future work to build upon or use Mint for more effective multi-task learning in deep networks. A PROOF OF THEOREM 1 Lemma 1. For a given α_i, applying Mint to y^(l−1) can express an arbitrary affine transformation at layer l for each task. Let W^(l) and b^(l) be an arbitrary weight matrix and bias vector. Suppose that for task k we wish to represent the affine transformation W_k^(l) y^(l−1) + b_k^(l) at layer l of the network using the combination of Mint and the affine transformation described by applying W^(l) multiplicatively
We propose an approach that endows a single model with the ability to represent both extremes: joint training and independent training, which leads to effective multi-task learning.
478
scitldr
Training agents to operate in one environment often yields overfitted models that are unable to generalize to changes in that environment. However, due to the numerous variations that can occur in the real world, the agent is often required to be robust in order to be useful. This has not been the case for agents trained with reinforcement learning (RL) algorithms. In this paper, we investigate the overfitting of RL agents to the training environments in visual navigation tasks. Our experiments show that deep RL agents can overfit even when trained on multiple environments simultaneously. We propose a regularization method which combines RL with supervised learning methods by adding a term to the RL objective that encourages the invariance of a policy to variations in the observations that ought not to affect the action taken. The results of this method, called invariance regularization, show an improvement in the generalization of policies to environments not seen during training. Learning control policies from high-dimensional sensory input has been gaining more traction lately due to the popularity of deep reinforcement learning (DRL) (Zhang et al., 2018b), which enables learning the perception and control modules simultaneously. However, most of the work done in RL chooses to evaluate the learned policies in the same environment in which training occurred. Using the same environments to train and test agents does not give any insight into the generalization abilities of the learned policy. There could be a number of changes in the environment at test time that would degrade the agent's performance. Variations could appear in the visual aspects that determine the agent's observation, the physical structure that determines the agent's state, and even some aspects that are related to the agent's goal (Figure 1). For example, different observations of the same room are encountered at different times of the day (different lighting conditions). New obstacles could be present. Levels of a game could be different, yet playing a few levels should often be enough to figure out how to play the rest. Such variations might result in a new environment where the control model that defined the training environment has changed. A robust policy should generalize from its experience and perform the same skills in the presence of these variations. DRL agents have been notorious for overfitting to their training environments. An agent could have drastically different performance on testing environments even if it manages to maximize the reward during training (Zhang et al., 2018a). Supervised learning algorithms have been shown to have some generalization guarantees when proper regularization is added. However, these guarantees are weakened in reinforcement learning algorithms, where the source of the data is not i.i.d. In order to make use of the progress of DRL algorithms in practice, we need policies that are robust to possible changes in the sensory inputs, surrounding structure, and even some aspects of the task. In this paper we study the notion of generalization that is appropriate for visual navigation control policies that are learned with DRL. We present: (i) a study of the generalization of visual control policies to certain changes in the underlying dynamical system; and (ii) an alternative training method that combines DRL with supervised learning, thus using DRL to learn a controller while leveraging the generalization properties of supervised learning.
In our experiments we use the VizDoom platform, which is easily customizable and enables the generation of numerous variants of a given environment. Figure 1: The figure shows how environments may differ in their visual aspects, like the textures of the surfaces. The textures provide a differentiator for each environment; without them, the environments would have shared the same state space. Visual navigation for mobile robots combines the domains of vision and control. Navigation can be described as finding a suitable and safe path between a starting state and a goal state. Classical approaches split the problem into a sequence of sub-tasks, such as map construction, localization, planning and path following. However, each sub-task requires some hand-engineering that is specific to the environment and task, which makes it hard to adapt to different scenarios without performing some tuning. Deep learning approaches enable the use of highly non-linear classifiers that can adapt their inner representations to learn to robustly solve complicated tasks. In this work, we use reinforcement learning algorithms coupled with deep learning approaches to solve the task of navigating an agent towards a goal object using only its visual observations as input. The field of view of the agent is limited, i.e., it does not observe the full environment, and we do not provide an explicit map of the environment to the agent. We model the problem of visual navigation as a partially observed Markov decision process (POMDP). A POMDP is given by a tuple (S, A, Ω, R, T, O, P_0), where S is the set of states, A is the set of actions and Ω is the set of observations, all of which are assumed to be finite sets. The reward function is R: S × A → ℝ. The conditional transition probability mass function is T: S × A × S → [0, 1], with the interpretation that T(s, a, s') = p(s_{t+1} = s' | s_t = s, a_t = a) is the probability that the next state is s' given that the current state is s and that action a is taken. The conditional observation probability mass function is O: S × A × Ω → [0, 1], where O(s, a, o) = p(o_t = o | s_t = s, a_{t−1} = a) is the probability of observing o in state s when the last action taken was a, and we allow for a special observation probability O(s, o) = p(o_0 = o | s_0 = s) when in the initial state s and no action has yet been taken. Finally, P_0 is the initial state probability mass function, so that P_0(s) = p(s_0 = s) is the probability that the initial state is s. In DRL, we work with a parameterized policy π_θ(h, a) = p_θ(a_t = a | h_t = h) with parameters θ ∈ Θ, giving the probability of taking action a given the observation-action history h_t := (o_0, a_0, o_1, a_1, ..., a_{t−1}, o_t). The objective is to adjust the parameters θ to attain a high value for the discounted reward J(θ) = E[Σ_{t=0}^∞ γ^t R(s_t, a_t)] with discount factor γ ∈ [0, 1). The expectation is over state-observation-action sequences, where the initial state s_0 is drawn from P_0 and other elements of such a sequence are drawn from T, O and π_θ respectively. Many methods for attempting to approximate optimal policies have been proposed. For instance, policy gradient methods perform gradient ascent on estimates of the expected discounted reward. In this work we use the proximal policy optimization (PPO) algorithm, which arguably shows relatively robust performance on a wide range of different tasks. As in classification, we wish to learn from a finite training set but still perform well on previously unseen examples from a test set.
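Before formalizing generalization across environments, note that the discounted objective above is straightforward to estimate from sampled episodes; a minimal sketch of the backward recursion for a single reward sequence:

```python
def discounted_return(rewards, gamma=0.99):
    """R = sum_t gamma^t * r_t, computed by a backward recursion."""
    R = 0.0
    for r in reversed(rewards):
        R = r + gamma * R
    return R

# e.g. the binary navigation reward used later: success only at the last step
assert abs(discounted_return([0, 0, 0, 1], gamma=0.5) - 0.5 ** 3) < 1e-12
```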
To formalize this, we have a distribution D over POMDPs, representing multiple environments or tasks, and we sample n_train POMDPs from this distribution: P_1, P_2, ..., P_{n_train}. In the context of navigation, these POMDPs might differ in terms of their observation distributions, perhaps representing views of the same environment at different times of day or year; in terms of their transition distributions, perhaps representing maps with different geometries; or in terms of their reward distributions, perhaps corresponding to the specification of different goal states. Given this sample, we then learn a policy π_θ from a finite collection of state-observation-action sequences from these POMDPs. In order to have a meaningful common policy across these POMDPs, we require that they have common state, action and observation spaces S, A and Ω. By analogy with the notion of generalization risk in classification, we say that policy π_θ generalizes well if it attains a high value for the expectation of the discounted reward over the full distribution of POMDPs, which we call the discounted generalization reward, so that E_{P∼D}[J_P(θ)] is high in some sense. This is our own terminology, as we did not find a semantically equivalent term in the literature. It is not hard to see that the discounted generalization reward is actually the discounted reward J_{P_D}(θ) of a single larger POMDP P_D, whose state space may however no longer be finite. To see this, let us associate a unique identifier i(P) with any POMDP sampled from D and let I_D be the set of all such unique identifiers. In the large POMDP P_D, the state space is the Cartesian product S × I_D of the original states and these unique identifiers, but the action and observation spaces are just A and Ω. The initial state distribution is obtained by first sampling a POMDP P ∼ D and then sampling s_env ∼ P_0^P from that POMDP's initial state distribution. The initial state in the large POMDP is then the concatenation (s_env, i(P)). Thus one might succinctly state the problem of generalization in POMDPs as follows: given a distribution D over POMDPs with common state, action and observation spaces, and access to a sample of state-observation-action sequences from a sample of POMDPs drawn from D, choose a policy π_θ that obtains a high value for the discounted reward J_{P_D}(θ). Training in synthetic environments enables the simulation of huge amounts of experience in a span of a few hours. Simulations are convenient to use when training reinforcement learning agents, which are often highly sample-inefficient. There is frequently a gap between the synthetic world and the real world, mainly due to the manner in which the simulators depict real-world dynamics and visual appearances. Often, these simulated worlds capture the richness and noise of the real world with low fidelity. Many have tried to propose transfer learning techniques to bridge the reality gap in order to still make use of fast simulators for training. One popular method to bridge the reality gap is by randomizing some aspects of the training environment. This domain randomization technique has been shown to be successful for the transfer of grasping policies from simulated training environments to the real world. However, the learned models resulting from that work are not control policies, but perception modules. Previous work has shown some success in transferring the perception module learned in simulation to the real world, but not the controller.
A large-scale study on generalization was conducted using a new environment that resembles an arcade game, called CoinRun. The authors experiment by training on different images and different level structures. They test different regularization strategies and network architectures, finding that the RL agent has a surprising tendency to overfit even to large training sets. Zhang et al. (2018a) reach a similar result when learning in grid-world environments, and state that the agents have a tendency to memorize levels of the training set. However, they argue that the methods that inject stochasticity into the dynamics of the system to prevent memorization, such as sticky actions and random initializations, often do not help. In our work we are interested in generalization when navigating under partial observability, unlike the fully observable CoinRun or grid-world environments. Domain adaptation methods have also been used for simulation-to-real transfer. They allow models trained on a source domain to generalize to a target domain. One approach trains a generative model to adapt the synthetic images of the simulator to appear like the real environment. It was shown to successfully transfer a grasping policy trained in simulation to the real world. However, the authors do not discuss whether the policy generalizes when variations happen in the target domain. Another aspect of generalization is the transfer of learned skills to solve different tasks, in other words, generalization with respect to the goal g of the trained agent. Achieving different tasks requires the agent to have the ability to maximize different reward functions. One line of work considers value functions that contain the goal g as part of the agent's state, calling them universal value functions. The reward then becomes a function of a state-action-goal tuple (s, a, g) instead of a classical state-action pair. The authors present universal value function approximators (UVFA), a method that attempts to learn a universal value function estimate V_θ(s, g). They show that UVFAs can generalize to unseen state-goal pairs in a grid-world setup. Deep reinforcement learning has been used to train control policies. These DRL-based methods generally propose to learn motor control commands from raw camera images, thus mapping pixels to commands that control the robot's motors. DRL algorithms have been used for various navigation tasks such as goal-conditioned navigation and mapless navigation. Control policies learned from high-dimensional visual input are often brittle and lack the robustness to operate in novel situations. Our main contribution is to propose a regularization term that can be added to the RL objective to improve the robustness of the learned policy to variations in the observations, presented in Section 4.2. However, to motivate the necessity of our proposed method, we study domain randomization in Section 4.1, one of the current main practices that aims at learning a policy that generalizes well. Domain randomization is typically used to train policies that can generalize to variations and noise in the observations. It is done by training on several POMDPs that share the same S, A, Ω spaces but may differ in their observation distributions. The motivation behind domain randomization is that it is assumed to be an effective technique to provide a policy that is invariant to the changes that would appear in the observations.
We explore the problem of navigating the agent towards a goal object with random noise added to the agent's observations. If the agent is able to perform the task in an environment defined by a POMDP P_1, then it should still be able to perform the task in another POMDP P_2 if certain features f of the environment that are specific to successfully achieving the task exist and are invariant to these variations, i.e., f(P_1) = f(P_2). In Section 5.1, we study domain randomization when added to RL training and the ability of the resulting policies to generalize in unseen POMDPs. We want to investigate whether the policy does in fact overfit to the training POMDPs and whether we can mitigate that overfitting by training the policies on multiple POMDPs. In the previous sections, we discussed how overfitting to the training environment can be a big problem in RL. Furthermore, we should be careful not to jump to the conclusion that training on different environments will ensure policies that generalize well to new environments. It is merely an assumption that has been shown to empirically hold up when used in a supervised learning context. However, we show in this work that this assumption might not hold for reinforcement learning techniques. This is compatible with previous findings (Zhang et al., 2018a). We reason that in order to generalize well, the training objective should include a term that encourages policy generalization, therefore putting the weight of the problem of generalizing explicitly in the objective function itself. Formally, a function h of a variable x is invariant to a transformation φ of x if h(x) = h(φ(x)). We can deduce the same definition for the invariance of a policy π to changes in the observation given by some transformation T: π(o) = π(T(o)). We add this regularization penalty term to the RL objective as shown in Equation 1: L(θ) = L_ppo(θ) + λ (1/M) Σ_{i=1}^M d(π_θ(o), π_θ(T_i(o))), (1) where L_ppo is the PPO objective, θ is the set of parameters that define the policy π_θ, d is a distance function between the two conditional distributions, and λ is a weighting coefficient of the penalty. T is a transformation of the observations. Consider an observation o and a transformation T of that observation, where the transformation preserves the semantic context of the underlying state but adds visual variations. We can think of the difference between observing a room with observation o and observing the same room with observation T(o) as, for example, the color of the wall. Therefore, let us say that we observe o in a POMDP P and T(o) in a POMDP P' with f(P) = f(P'), where f(P) is the set of invariant features of the environment defined by the POMDP P. M is the number of transformations of each observation. The penalty d in Equation 1 resembles adding a constraint on the PPO objective, where the new objective dictates that the policy should simultaneously obtain a high reward while behaving similarly for the observations o and T(o). The idea is similar, in spirit, to trust region policy optimization, where a penalty term, resembling that which would result from imposing a trust-region constraint, is added to ensure monotonic improvement of the average return with each policy update. We call the method in Equation 1 invariance regularization (IR), since the regularization term indicates the invariance of the learned policy to a transformation of given observations. We propose two ways to solve the RL problem in Equation 1. The first is to directly optimize the full objective by adding the penalty to the original PPO loss.
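A minimal sketch of this first variant's penalty term follows; it assumes a discrete-action policy network `policy` that returns logits and a list `transforms` of observation transformations T_i, and it instantiates d as the KL divergence, which is only one choice among the distances the text leaves open.

```python
import torch
import torch.nn.functional as F

def ir_objective(ppo_loss, policy, obs, transforms, lam=1.0):
    """Equation 1: PPO loss plus the invariance penalty, averaged over
    the M transformations of each observation."""
    log_p = policy(obs).log_softmax(dim=-1)           # log pi(a | o)
    penalty = 0.0
    for T in transforms:
        log_q = policy(T(obs)).log_softmax(dim=-1)    # log pi(a | T(o))
        # KL(pi(.|o) || pi(.|T(o))) as the distance d between the two
        # conditional action distributions.
        penalty = penalty + F.kl_div(log_q, log_p, log_target=True,
                                     reduction="batchmean")
    return ppo_loss + lam * penalty / len(transforms)
```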
The second method splits the training process into two stages: training RL first, and then performing a supervised learning step to minimize d(π(o), π(T(o))), which presents an elegant form that combines reinforcement learning with supervised learning; more details of the second method are available in Appendix A. In the next section, we will discuss experiments using both methods. Before that, we will describe a study on the effectiveness of domain randomization as a means of reducing overfitting in DRL agents. In this section we present the results of two experiments. The first is about training RL with domain randomization; we discuss the ability of the learned policies to generalize to unseen environments when trained on variations of the training environment. The next part presents the results obtained when using the invariance regularization (IR) method proposed in Section 4.2 and shows that it improves the success rate considerably. We performed these experiments because we are interested in the following questions: Does training on environments with random variations (as domain randomization suggests) learn a representation of the invariant f with which the policy can generalize to other environments that share the same invariant features? Can we find a training algorithm that would empirically guarantee finding these invariant features f? We leverage the customizability of VizDoom maps with hundreds of unique textures to generate train/test scenarios. The agent is required to reach an object in order to get a reward. We train an actor-critic style agent to solve the task. The network consists of three convolutional layers and two fully connected layers, followed by the policy and value function estimator layers. The policy output is a four-dimensional fully connected layer, where the four dimensions correspond to four actions: move forward, turn right, turn left, and do nothing. The output of the policy layer is a log-probability for each action. The value layer is a single unit that predicts the value function. This network architecture was proposed in prior work. ReLUs are used as the non-linear operations in all layers. As mentioned, we optimize the PPO objective with a binary reward function (+1 if the goal is reached, 0 otherwise) and a discount factor γ = 0.99. We generate the variations of the training environment by changing the textures on the surfaces, using the numerous textures provided with the simulator. We train agents on a subset of 1, 10, 50, 100 and 500 rooms from the generated environments and test on 50 rooms with textures from a hold-out set which are not seen in training. We detail this experimental setup in Appendix B. We experiment with different types of visual input: RGB, RGB-D and grayscale. The number of training iterations is fixed at 5 × 10^6 to ensure repeatability of our experiment. The results are therefore potentially pessimistic, and in future work we would like to choose the number of iterations for each network independently so as to maximize generalization performance. The agent and the goal object are initialized at random positions in the environment at the start of each episode. The role of depth. Adding a depth channel to the observation plays a significant role in generalization. Depth is invariant to many changes in the visible spectrum of the observations. This might lead the training agent to partly find an invariance in observations in its implicit perception model, which in this case can be as simple as focusing on the depth channel only.
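To make Equation 1 concrete, the following is a minimal PyTorch-style sketch of the full-objective variant; `policy` and `transforms` are assumed placeholder interfaces, and the KL divergence is used for d, as in the experiments reported below. The sign convention treats `ppo_loss` as a quantity to be minimized:

```python
import torch.nn.functional as F

def invariance_regularized_loss(policy, obs, transforms, ppo_loss, lam=1.0):
    """Sketch of the (full objective) variant of Equation 1.

    `policy(obs)` is assumed to return action log-probabilities and
    `transforms` is a list of re-texturing functions T; both are
    placeholders, not an existing API.
    """
    log_p = policy(obs)                          # log pi(.|o)
    penalty = 0.0
    for T in transforms:
        log_p_t = policy(T(obs))                 # log pi(.|T(o))
        # KL(pi(.|o) || pi(.|T(o))), averaged over the batch
        penalty = penalty + F.kl_div(log_p_t, log_p.exp(),
                                     reduction="batchmean")
    return ppo_loss + lam * penalty / len(transforms)
```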
Therefore, it was not surprising to see, in Table 1, that the depth agents (RGB-D) generalize better than the agents without any depth information. Table 1 shows the success rate of the PPO models with respect to the number of training environments used and the input type (RGB, RGB-D). The results are averaged over 5 seeds, a standard practice in the RL literature today. We notice the superior performance of the agent with depth compared to the agent without depth. The fact that the RGB agent is not able to generalize well even when exposed to numerous environments tells us that it might not be learning the invariance relating the environments. On the other hand, the RGB-D agents perform well on the testing environments even when the agents are only exposed to 10 random training environments. Looking at the RGB and RGB-D experiments, the agents trained on 100 and 500 environments generalize worse on average than the ones trained on 10 and 50, which indicates that some agents might be overfitting. This is in spite of the fact that these agents are able to maximize the reward on the training set regardless of the set size. Looking at the max statistic of these results (not shown in this paper), the 100- and 500-environment experiments outperform the rest. However, the 100 and 500 experiments have a higher variance in the success rates of different seeds than the 10 and 50 experiments. The high variance in the test results of the 100/500 RGB-D experiments shows that some seeds are able to achieve a near perfect score on the testing environments while others completely fail; there is thus no empirical guarantee that RL agents will generalize when exposed to numerous environments. The average success rate for the RGB input without depth shows that domain randomization alone might not be an effective method to adapt the policy to variations in the observations, at least not in the context of RL. In fact, it shows little progress, e.g., the RGB agent exposed to one environment achieves around a 20% success rate on the testing environments and the agents exposed to 50+ environments achieve less than 40% success. These results are consistent when running with a grayscale channel (see Table 1). While training by randomizing the environment did show some success in making supervised learning models generalize better, it fails to do so for RL policies. It is clear from these results that adding random variations and relying solely on the RL objective is not enough to ensure generalization. Much of the success of domain randomization in previous works was reported using supervised learning. Also, the generalization abilities of machine learning algorithms have mostly been linked to supervised learning setups. Therefore, it would make sense to adapt supervised learning techniques to regularize the models trained with DRL. In this section we will discuss the results obtained from training the agent using the method proposed in Section 4.2. As mentioned in Section 4.2, we propose two methods of using the proposed IR penalty. The first is to add it to the PPO objective as in Equation 1; this method is referred to as (full objective) in the results. The second, referred to as (split) in the results, is to split the objective into two parts, an RL step and a supervised learning step (more details available in Appendix A). The value of λ in Equation 1 used in all IR (full objective) experiments is 1.0.
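For completeness, a minimal sketch of the (split) variant just described; the helper functions are assumed placeholders, and Appendix A gives the actual procedure:

```python
def train_split(policy, env, transforms, optimizer,
                rl_iterations, sl_epochs):
    """Sketch of the (split) variant: one RL stage, then one supervised
    stage that matches the policy's own stage-1 actions under transformed
    observations. `collect_rollouts`, `ppo_update`, `policy.act`, and
    `cross_entropy` are assumed placeholder interfaces."""
    # Stage 1: plain PPO on the original training environment.
    for _ in range(rl_iterations):
        batch = collect_rollouts(policy, env)
        ppo_update(policy, batch, optimizer)

    # Stage 2: supervised step minimizing d(pi(o), pi(T(o))) by imitating
    # the trained policy's actions on re-textured observations.
    observations = [obs for traj in collect_rollouts(policy, env)
                    for (obs, _, _) in traj]
    targets = [policy.act(obs) for obs in observations]
    for _ in range(sl_epochs):
        for obs, target in zip(observations, targets):
            for T in transforms:
                loss = cross_entropy(policy(T(obs)), target)
                loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```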
As for the nature of the transformation T of the observations, we tested with the same randomly textured environments from VizDoom that were used in the previous section, in order to be able to make fair comparisons with the pure RL and domain randomization agents. Regarding the distance penalty term d in Equation 1, we did preliminary experiments with the KL divergence, L1, L2 and cross-entropy losses, and the KL divergence returned the best results. Table 1 shows the results for combining PPO with the IR penalty using the two proposed implementations. Observing the split method's results, we see that the proposed training procedure returns stable success rates that improve as more environments are added. The split version was able to outperform vanilla PPO and substantially improve generalization, especially in the cases of RGB/grayscale inputs. Training with the full objective, however, returned the best results, outperforming vanilla PPO with domain randomization and the split version of the IR algorithm. Similar to the split version, training on the full objective shows stable performance for the different inputs across different seeds. The results in Table 1 also show that the models trained on the full objective achieve a test success rate, with only 10 training environments, that is close if not identical to that of the agents trained on 50, 100 and 500 environments. These results suggest that training with the full objective version of the IR algorithm does not require a large number of environments to learn the invariant features. Notice that the average testing success rate is similar across the different numbers of training environments, since the model learns the invariant features from only 10 environments and adding more environments that share the same invariant features does not make a difference. We can verify that hypothesis when looking at the RGB-D testing results in the full objective part: all agents achieve a near perfect score, which we attribute to the availability of an invariant feature map in the input (the depth channel) which only the agents trained with the full objective are able to exploit. Regularization has been shown to help in the generalization of supervised learning models. Using regularization in supervised learning often improves the performance of the trained models on test sets. However, regularization has not been frequently used in DRL setups. (Figure 2 caption. Left: the average success rates show that some of these methods achieve results similar to ours in some instances. Right: the lower SPL of the other regularization methods relative to ours indicates some randomness in their learned policies, possibly due to the previously common poor practice of testing and training in the same environment, so that there is no generalization gap.) We compare our method with some regularization techniques that are frequently used: dropout, batchnorm and L2. The first experiment has a dropout layer added after each convolutional layer, the second has a batchnorm layer added after every convolutional layer, and the last uses L2 regularization. We choose the dropout probability to be 0.1 and the L2 weight to be 10^-4, the same values that were proposed in prior work. As in the previous setup, we train five models (different seeds) for each technique and evaluate on 50 environments whose textures are sampled from a hold-out set. We report the experiments done with RGB input only, as it poses a harder problem and a larger generalization gap than RGB-D. Figure 2 (left) shows the average success rate over 5 seeds for the four methods.
We see that our proposed method is the only one that steadily improves as more environments are added. The batchnorm models performed worst, while dropout and L2 achieved success rates similar to the split version of our method given 50 and 500 training environments. However, the entropy of the learned policies is substantially higher when dropout and L2 are added to the model. We hypothesize that the high-entropy policies are able to generalize by acting randomly in some instances, and this makes them more robust in certain situations. We show the success weighted by path length (SPL) in Figure 2 (right). A random behavior that displays robustness (has a high success probability) would return a relatively lower SPL, because this random behavior will probably not take the shortest possible path to the goal. Details of the formulation of SPL are available in Appendix C. Figure 2 (right) shows that the dropout and L2 agents have a lower SPL than the IR agents, indicating that these higher-entropy policies are inefficient. We present a study of the generalization capabilities of visual navigation agents trained with deep reinforcement learning algorithms. We formalize what it means to generalize in the context of a POMDP. We find that the tendency of RL agents to overfit, even when exposed to large training sets, is quite visible. We show that using domain randomization with RL, without adding invariant features to the input such as depth maps, is not enough to generalize. In the second part, we proposed invariance regularization (IR), a method that attempts to regularize the RL model with a supervised learning loss. It improves the generalization success and displays stable performance across different seeds. In this work, we focused our experimentation on generalization to changes in the input observation. However, it is also interesting to generalize the learned skills to different architectural designs of the environment, just as one wishes to generalize to different levels of a game, as proposed in the retro competition. Another avenue of future work is to explore the appropriate transformation function T of the observations. One might consider an adaptive form of T learned with data augmentation or adversarial examples (Goodfellow et al., 2015). The first part consists of training RL on the observations of the original training environment, while the second part can be seen as a supervised learning objective on the transformed observations, as shown in Algorithm 1. The first step trains RL on one environment; we then use the actions that the trained policy would have taken in that environment to tune the model with supervised learning on the textured environments. In the reported experiments using the split version, the model is trained with one iteration of the algorithm. Therefore, the training process has two stages, train RL and then train with a supervised learning setup, without iterating between both. As stated in Section 5.1, we run training on a subset of 1, 10, 50, 100 and 500 rooms where the surfaces in each room are sampled from the variety of textures available in VizDoom. The resulting policies are tested on 50 rooms with textures from a hold-out set which are not seen in training. During training we run several agents in parallel to quickly collect observation-action-reward data in multiple environments. Another advantage of this parallelization is the ability to run each agent on a variation of the training environment.
Due to hardware limitations, we cannot run one agent for each environment, at least not when we have a large number of training environments, i.e., 100 or 500. Therefore, each agent samples one environment from the training set and runs on it for some n episodes before sampling another one (n = 25 episodes). SPL was proposed as a way of measuring navigation agents' success rates while taking into account the time it takes the agents to succeed: SPL = (1/N) Σ_{i=1}^{N} S_i · l_i / max(p_i, l_i), where N is the number of runs, S_i is the binary indicator of the success of episode i, l_i is the length of the shortest possible path, and p_i is the length of the path taken by the agent.
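In code, the metric is a one-liner:

```python
def spl(successes, shortest_lengths, path_lengths):
    """SPL = (1/N) * sum_i S_i * l_i / max(p_i, l_i)."""
    terms = [s * l / max(p, l)
             for s, l, p in zip(successes, shortest_lengths, path_lengths)]
    return sum(terms) / len(terms)

# Example: 3 runs; the second fails, the third takes a detour.
print(spl([1, 0, 1], [10.0, 8.0, 12.0], [10.0, 30.0, 24.0]))  # 0.5
```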
We propose a regularization term that, when added to the reinforcement learning objective, allows the policy to maximize the reward and simultaneously learn to be invariant to the irrelevant changes within the input.
479
scitldr
Though visual information has been introduced for enhancing neural machine translation (NMT), its effectiveness strongly relies on the availability of large amounts of bilingual parallel sentence pairs with manual image annotations. In this paper, we present a universal visual representation learned over monolingual corpora with image annotations, which overcomes the lack of large-scale bilingual sentence-image pairs, thereby extending image applicability in NMT. In detail, a group of images with topics similar to the source sentence will be retrieved from a light topic-image lookup table learned over the existing sentence-image pairs, and then encoded as image representations by a pre-trained ResNet. An attention layer with a gated weighting is used to fuse the visual information and text information as input to the decoder for predicting target translations. In particular, the proposed method enables the visual information to be integrated into large-scale text-only NMT in addition to multimodal NMT. Experiments on four widely used translation datasets, including the WMT'16 English-to-Romanian, WMT'14 English-to-German, WMT'14 English-to-French, and Multi30K, show that the proposed approach achieves significant improvements over strong baselines. Visual information has been introduced for neural machine translation (NMT) in some previous studies, though the contribution of images is still an open question. Typically, each bilingual (or multilingual) parallel sentence pair is annotated manually by one image describing the content of this sentence pair. The bilingual parallel corpora with manual image annotations are used to train a multimodal NMT model in an end-to-end framework, and results are reported on a specific data set, Multi30K. One strong point of the multimodal NMT model is the ability to use visual information to improve the quality of the target translation. However, the effectiveness heavily relies on the availability of bilingual parallel sentence pairs with manual image annotations, which hinders the applicability of images to NMT. As a result, visual information is only applied to the translation task over a small and specific multimodal data set, Multi30K, but not to large-scale text-only NMT or low-resource text-only NMT. In addition, because of the high cost of annotation, the content of one bilingual parallel sentence pair is only represented by a single image, which is weak in capturing the diversity of visual information. The current situation of introducing visual information results in a bottleneck in multimodal NMT, and is not feasible for text-only NMT and low-resource NMT. In this paper, we present a universal visual representation (VR) method relying only on image-monolingual annotations instead of the existing approach that depends on image-bilingual annotations, thus breaking the bottleneck of using visual information in NMT. In detail, we transform the existing sentence-image pairs into a topic-image lookup table from the small-scale multimodal data set Multi30K. During the training and decoding process, a group of images with topics similar to the source sentence will be retrieved from the topic-image lookup table learned using term frequency-inverse document frequency, and then encoded as image representations by a pre-trained ResNet. A simple and effective attention layer is then designed to fuse the image representations and the original source sentence representations as input to the decoder for predicting target translations.
In particular, the proposed approach can be easily integrated into the text-only NMT model without annotating large-scale bilingual parallel corpora. The proposed method was evaluated on four widely-used translation datasets, including the WMT'16 English-to-Romanian, WMT'14 English-to-German, WMT'14 English-to-French, and Multi30K, which are standard corpora for NMT and multi-modal machine translation (MMT) evaluation. Experiments and analysis show its effectiveness. In summary, our contributions are primarily three-fold: 1. We present a universal visual representation method that overcomes the shortcomings of the bilingual (or multilingual) parallel data with manual image annotations for MMT. 2. The proposed method enables text-only NMT to use the multimodality of visual information without annotating the existing large-scale bilingual parallel data. 3. Experiments on different scales of translation tasks verified the effectiveness and generality of the proposed approach. Building fine-grained representations with extra knowledge is an important topic in language modeling, among which adopting the visual modality could potentially benefit the machine with a more comprehensive perception of the real world. Inspired by studies on the image description generation (IDG) task, a new shared translation task for multimodal machine translation was addressed by the machine translation community. In particular, the released dataset Multi30K includes 29,000 multilingual (English, German, and French) parallel sentence pairs with image annotations. Subsequently, there has been a rise in the number of studies. For example, one study proposed a doubly-attentive multi-modal NMT model to incorporate spatial visual features, improving the translation performance. Compared with spatial-visual features, later work further incorporated global image features as words in the source sentence to enhance the encoder or decoder hidden states. In contrast, some recent studies indicated that the visual modality is either unnecessary or only marginally beneficial (Grönroos et al., 2018). More recently, it was shown that visual information is only needed in particular cases, such as for ambiguous words where the textual context is not sufficient. However, these approaches only center around the small and specific Multi30K data set to build multimodal NMT models, which hinders image applicability to NMT. The reason would be the high cost of image annotations, resulting potentially in the image information not being adequately discovered. We believe that the capacity of MMT has not yet been excavated sufficiently and there is still a long way to go before the potential of MMT is fully discovered. In this work, we seek to break this constraint and enable visual information to benefit NMT, especially text-only NMT. (Algorithm 1, sketched: a first procedure builds a TF-IDF dictionary F over the source corpus; procedure LOOKUP(S, E, F) then, for each pair {T_i, e_i} ∈ zip{S, E}, ranks and picks out the top-w "topic" words in the sentence according to the TF-IDF scores in the dictionary F, reforms each sentence as T = {t_1, t_2, ..., t_w}, and pairs each word t_j ∈ T with the corresponding image e_i.) In this section, we will introduce the proposed universal visual representation method. Generally, the default input setting of MMT is a sentence-image pair. Our basic intuition is to transform the existing sentence-image pairs into a topic-image lookup table, which assumes the topic words in a sentence should be relevant to the paired image.
Consequently, a sentence can possess a group of images by retrieving the topic-image lookup table. To focus on the major part of the sentence and suppress noise such as stopwords and low-frequency words, we design a filtering method to extract the "topic" words of the sentence through term frequency-inverse document frequency (TF-IDF). Specifically, given an original input sentence X = {x_1, x_2, ..., x_I} of length I and its paired image e, X is first filtered by a stopword list, and then the sentence is treated as a document g. We then compute the TF-IDF score TI_{i,j} = o_{i,j} · log(|G| / |{j : x_i ∈ g}|) for each word x_i in g, where o_{i,j} represents the number of occurrences of the word x_i in the input sentence g, |G| the total number of source-language sentences in the training data, and |{j : x_i ∈ g}| the number of source sentences including word x_i in the training data. We then select the top-w high-TF-IDF words as the new image description T = {t_1, t_2, ..., t_w} for the input sentence X. After preprocessing, each filtered sentence T is paired with an image e, and each word t_i ∈ T is regarded as a topic word for image e. After processing the whole corpus (i.e., Multi30K), we form a topic-image lookup table Q as described in Algorithm 1, in which each topic word t_i is paired with dozens of images. Image Retrieval. For an input sentence, we first obtain its topic words according to the text preprocessing method described above. Then we retrieve the associated images for each topic word from the lookup table Q and group all the retrieved images together to form an image list G. (Figure 1: Illustration of the proposed visual retrieval, with example sentences such as "dog is playing in the snow" and topic words such as dog, playing, and snow paired with their associated images.) We observe that an image might be associated with multiple topic words, so that it can occur multiple times in the list G. Therefore, we sort the images according to their frequency of occurrence in G to maintain the total number of images for each sentence at m. In the left block, we show six examples of sentence-image pairs in which the topic words are in boldface. Then we process the corpus using the topic-image transformation method demonstrated above and obtain the topic-image lookup table. For example, the word dog is associated with 1,512 images. For an input source sentence, we obtain the topic words (in boldface) using the same preprocessing. Then we retrieve the corresponding images from the lookup table for each topic word. Now we have a list of images, and some images appear multiple times as they have multiple topics (like the boxed image in Figure 1). So we sort the retrieved image list by count of occurrence to pick out the top-m images that cover the most topics of the sentence. At test time, the process of getting images is done using the image lookup table built on the training set, so we do not need to use the images from the dev and test sets of the Multi30K dataset. Intuitively, we do not strictly require manual alignment of the word (or concept) and the image, but rely on the co-occurrence of topic word and image, which is simpler and more general. The lookup table can also be easily adapted to a wide range of other NLP tasks even without any paired image, which opens our proposed model to generalization. In this way, we call our method universal visual retrieval.
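A minimal sketch of the lookup-table construction and retrieval described above; the tokenization and the exact TF-IDF normalization are our assumptions:

```python
import math
from collections import Counter, defaultdict

def build_lookup(sentences, images, stopwords, w=8):
    """Build the topic-image lookup table Q (cf. Algorithm 1).
    `sentences` is a list of token lists; `images` the paired image ids."""
    docs = [[t for t in s if t not in stopwords] for s in sentences]
    df = Counter(t for d in docs for t in set(d))      # document frequency
    n_docs = len(docs)
    lookup = defaultdict(list)
    for doc, img in zip(docs, images):
        tf = Counter(doc)
        tfidf = {t: tf[t] * math.log(n_docs / df[t]) for t in tf}
        topics = sorted(tfidf, key=tfidf.get, reverse=True)[:w]
        for t in topics:                               # pair topic word with image
            lookup[t].append(img)
    return lookup

def retrieve(topic_words, lookup, m=5):
    """Group images of all topic words and keep the m most frequent,
    i.e., the images covering the most topics of the sentence."""
    pool = Counter(img for t in topic_words for img in lookup.get(t, []))
    return [img for img, _ in pool.most_common(m)]
```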
The overview of the framework of our proposed method is shown in Figure 2. In state-of-the-art Transformer-based NMT, source information is encoded as a source representation by an SAN-based encoder with multiple layers. Specifically, the encoder is composed of a stack of L identical layers, each of which includes two sub-layers. The first sub-layer is a self-attention module, whereas the second is a position-wise fully connected feed-forward network. A residual connection is applied between the two sub-layers, and then a layer normalization is performed. Formally, the stack learning the source representation is organized as H̃^l = LN(ATT_l(H^{l−1}) + H^{l−1}) and H^l = LN(FFN_l(H̃^l) + H̃^l), where ATT_l(·), LN(·), and FFN_l(·) are the attention module, layer normalization, and the feed-forward network for the l-th identical layer, respectively. After retrieval as described in Section 3, each original sentence X = {x_1, x_2, ..., x_I} is paired with m images E = {e_1, e_2, ..., e_m} retrieved from the topic-image lookup table Q. First, the source sentence X = {x_1, x_2, ..., x_I} is fed into the encoder (Eq. 2) to learn the source sentence representation H^L. Second, the images E = {e_1, e_2, ..., e_m} are the inputs to a pre-trained ResNet followed by a feed-forward layer to learn the source image representation M ∈ R^{m×2048}. Then, we apply an attention mechanism (a single head is used here for simplicity) to append the image representation to the text representation, H_img = ATT(H^L, K_M, V_M), where {K_M, V_M} are packed from the learned source image representation M. Intuitively, NMT aims to produce a target word sequence with the same meaning as the source sentence rather than a group of images. In other words, the image information may play only an auxiliary role during translation prediction. Therefore, we compute λ ∈ [0, 1] to weight the expected importance of the source image representation for each source word, λ = σ(W_λ H^L + U_λ H_img), where W_λ and U_λ are model parameters. We then fuse H^L and H_img to learn an effective source representation, H̄ = H^L + λ · H_img. Finally, H̄ is fed to the decoder to learn a time-dependent context vector for predicting the target translation. Note that there is a single aggregation layer to fuse image and text information. The proposed method was evaluated on four widely-used translation datasets, including WMT'16 English-to-Romanian (EN-RO), WMT'14 English-to-German (EN-DE), WMT'14 English-to-French (EN-FR), and Multi30K, which are standard corpora for NMT and MMT evaluation. 1) For the EN-RO task, we experimented with the officially provided parallel corpus: Europarl v7 and SETIMES2 from WMT'16 with 0.6M sentence pairs. We used newsdev2016 as the dev set and newstest2016 as the test set. 2) For the EN-DE translation task, 4.43M bilingual sentence pairs of the WMT14 dataset were used as training data, including Common Crawl, News Commentary, and Europarl v7. The newstest2013 and newstest2014 datasets were used as the dev set and test set, respectively. 3) For the EN-FR translation task, 36M bilingual sentence pairs from the WMT14 dataset were used as training data. Newstest12 and newstest13 were combined for validation and newstest14 was used as the test set, following standard practice. 4) The Multi30K dataset contains 29K English→{German, French} parallel sentence pairs with visual annotations.
The 1,014 English→{German, French} sentence pairs with visual annotations are used as the dev set. The test sets are test2016 and test2017, with 1,000 pairs each. Image Retrieval Implementation. We used the 29,000 sentence-image pairs from Multi30K to build the topic-image lookup table. We segmented the sentences using the same BPE vocabulary as that for each source language. We selected the top-8 (w = 8) high-TF-IDF words, and the default number of images m was set to 5. A detailed case study is shown in Section 6.2. After preprocessing, we had about 3K topic words, associated with a total of 10K images for retrieval. Image features were extracted from the average-pooled features of a pre-trained ResNet50 CNN, leading to feature vectors V ∈ R^2048. Baseline. Our baseline was the text-only Transformer. We used six layers for the encoder and the decoder. The number of dimensions of all input and output layers was set to 512 and 1024 for the base and big models, respectively. The inner feed-forward neural network layer was set to 2048. The heads of all multi-head modules were set to eight in both encoder and decoder layers. For the Multi30K dataset, we further evaluated a multimodal baseline (denoted as MMT) where each source sentence was paired with an original image. The other settings were the same as in our proposed model. The byte pair encoding algorithm was adopted, and the size of the vocabulary was set to 40,000. In each training batch, a set of sentence pairs contained approximately 4096×4 source tokens and 4096×4 target tokens. During training, the value of label smoothing was set to 0.1, and the attention dropout and residual dropout were p = 0.1. We used the Adam optimizer and validated on the dev set every 1,000 batches. For the Multi30K dataset, we trained the model up to 10,000 steps, and the training was early-stopped if the dev set BLEU score did not improve for ten epochs. For the EN-DE, EN-RO, and EN-FR tasks, following the training of 200,000 batches, the model with the highest BLEU score on the dev set was selected to evaluate the test sets. During decoding, the beam size was set to five. All models were trained and evaluated on a single V100 GPU. Multi-bleu.perl was used to compute case-sensitive 4-gram BLEU scores for all test sets; the signtest, a standard statistical-significance test, was applied. In addition, we followed the model configurations of prior work to train big models for the WMT EN-RO, EN-DE, and EN-FR translation tasks. All experiments were conducted with fairseq. The analysis in Section 6 is conducted on base models. Table 1 shows the results for the WMT'14 EN-DE, EN-FR, and WMT'16 EN-RO translation tasks. Our implemented Transformer (base/big) models showed BLEU scores similar to the original Transformer, ensuring that the proposed method can be evaluated over strong baseline NMT systems. As seen, the proposed +VR significantly outperformed the baseline Transformer (base), demonstrating the effectiveness of modeling visual information for text-only NMT. In particular, the effectiveness held for the translation tasks of all three language pairs, which have different scales of training data, verifying that the proposed approach is a universal method for improving translation performance. Our method introduced only 1.5M and 4.0M parameters for the base and big Transformers, respectively. This number is less than 3% of the baseline parameters, as we used the fixed image embeddings from the pre-trained ResNet feature extractor. Besides, the training time was basically the same as for the baseline model (Section 6.4).
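For concreteness, a minimal PyTorch sketch of the aggregation layer of Section 4; the shapes, the shared K_M = V_M projection, and the exact gating form are assumptions consistent with the equations above:

```python
import torch
import torch.nn as nn

class GatedImageFusion(nn.Module):
    """Attend from text states H^L over projected image features M,
    gate with lambda, and fuse: H = H^L + lambda * H_img."""
    def __init__(self, d_model=512, d_img=2048):
        super().__init__()
        self.img_proj = nn.Linear(d_img, d_model)   # feed-forward on ResNet features
        self.w_lambda = nn.Linear(d_model, 1)
        self.u_lambda = nn.Linear(d_model, 1)

    def forward(self, h_text, img_feats):
        # h_text: [batch, src_len, d_model]; img_feats: [batch, m, d_img]
        kv = self.img_proj(img_feats)                              # K_M = V_M
        scores = h_text @ kv.transpose(1, 2) / kv.size(-1) ** 0.5  # single head
        h_img = torch.softmax(scores, dim=-1) @ kv                 # image-aware repr.
        lam = torch.sigmoid(self.w_lambda(h_text) + self.u_lambda(h_img))
        return h_text + lam * h_img                                # fused source repr.
```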
In addition, the proposed method was also evaluated for MMT on the multimodal dataset Multi30K. Results in Table 2 show that our model also outperformed the Transformer baseline. Compared with the results in text-only NMT, we find that the image representation gave a marginal contribution, which is consistent with the findings in previous work (Grönroos et al., 2018). The most plausible reason might be that the sentences in Multi30K are so simple, short, and repetitive that the source text is sufficient to perform the translation. This verifies our assumption about the current bottleneck of MMT due to the limitation of Multi30K and shows the necessity of our new setting of transferring multimodality into more standard and mature text-only NMT tasks. (Table 2: Results on the test2016 and test2017 sets for the MMT task. Del denotes the deliberation network; the official baseline is a text-only NMT system on the WMT17-Multi30K 2017 test data; Trans. is short for Transformer, and MMT is the multimodal baseline described in Section 5.2. Because we used the same model for the test2016 and test2017 evaluations, the numbers of parameters are the same.) The contribution of the lookup table could be two-fold: 1) the content connection between the sentences and images; 2) the topic-aware co-occurrence of similar images and sentences. There are cases when the paired images are not accurately related to the given sentence. A simple solution is to heuristically set a threshold for the TF-IDF retrieval to filter out the "improper" images. However, we maintain the specified number of images in this work because of the second potential benefit of the co-occurrence, taking the images as diverse topic information. According to the distributional hypothesis, which states that words that occur in similar contexts tend to have similar meanings, we are inspired to extend the concept to the multimodal world: sentences with similar meanings would be likely to pair with similar or even the same images. Therefore, the consistent images (with similar topics) can play the role of topic or type hints for similar-sentence modeling. This is also very similar to the idea of word embedding, taking each image as a "word". Because we use the average-pooled output of ResNet, each image is represented as a 2048-d vector. For all the 29,000 images, we have an embedding layer of size 29,000 × 2048. The "content" of the image is regarded as the embedding initialization. It indeed has an effect, but the capacity of the neural network is not up to it. In contrast, the mapping from a text word to an index in the word embedding is critical. Similarly, the mapping of sentence to image in the image embedding would be essential, i.e., similar sentences (with the same topic words) tend to map to the same or similar images. To verify these hypotheses, we replace our ResNet features with 1) Shuffle: shuffle the image features but keep the lookup table; 2) Random Init: randomly initialize the image embedding but keep the lookup table; 3) Random Mapping: randomly retrieve unrelated images. The BLEU scores on EN-RO are 33.53, 33.28, and 32.14, respectively. The results of 1) and 2) are close to those of the proposed VR (33.78) and outperform the baseline (32.66), which shows that the content of the images may not be very important. Ablation 3) gives a lower result, which verifies the necessity of the mapping, especially the topic relationship. To evaluate the influence of the number of paired images m, we constrained m in {0, 1, 3, 5, 7, 9, 15, 20, 30} for experiments on the EN-RO test set, as shown in Figure 4.
When m = 0, the model is the baseline NMT model, whose BLEU score was lower than that of all the models with images. As the number of images increases, the BLEU score also increased at the beginning (from 32.66 to 33.78) and then slightly decreased when m exceeded 5. The reason might be that too many images for a sentence bring a greater chance of noise. Therefore, we set m = 5 in our models. The number of sentence-image pairs used to create the lookup table can also have an effect. We randomly split the pairs of Multi30K into proportions [0.1, 0.3, 0.5, 0.7, 0.9], with corresponding BLEU scores of 33.44, 34.01, 34.06, and 33.80. Furthermore, we also evaluated the performance when adding external sentence-image pairs from the training set of the MS COCO image caption dataset. The BLEU scores are 33.55 and 33.71, respectively, for COCO only and Multi30K+COCO. These results indicate that a modest number of pairs is beneficial. In our model, the weight λ of the gated aggregation method is learned automatically to measure the importance of the visual information. We compared this with manually setting the weight λ to scalar values in {0.1, 0.3, 0.5, 0.7, 0.9} for experiments on the EN-RO test set. Figure 5 shows that all models with a manual λ outperformed the baseline Trans. (base), indicating the effectiveness of the image information. In contrast, they were inferior to the performance of our model. This means that the degree of dependency on image information varies for each source sentence, indicating the necessity of automatically learning the gating weights of the image representations. There are mainly two sources of extra computational cost when using our method: 1) obtaining image data for sentences and 2) learning image representations, both of which are negligible compared with training an NMT model. The time for obtaining image data for the MT sentences of the EN-RO dataset is less than 1 minute using a GPU. The lookup table is formed as a mapping from token indices (topic words only) to image ids. Retrieval is then applied as tensor indexing from the sentence token indices (topic words only) to image ids, the same procedure as word embedding; the retrieved image ids are then sorted by frequency. Learning image representations takes about 2 minutes for all the 29,000 images in Multi30K, using 6 GB of GPU memory for feature extraction and 8 CPU threads for transforming images. The extracted features are formed as the "image embedding layer", of size 29,000 × 2048, for quick access in the neural network. This work presents a universal visual representation method for neural machine translation relying on monolingual image annotations, which breaks the restraint of heavy dependency on bilingual sentence-image pairs in the current multimodal NMT setting. In particular, this method enables visual information to be applied to large-scale text-only NMT through a topic-image lookup. We hope this work sheds some light on future MMT research. In the future, we will try to adopt the proposed method for other tasks. (Figure 5: Examples of the topic-image lookup table and retrieved images for sentences in the Multi30K dataset, e.g., "a man walks by a silver vehicle", "an elderly woman pan frying food in a kitchen", and "small boy carries a soccer ball on a field", with topic words such as man, woman, and food shown in boldface.)
This work proposed a universal visual representation for neural machine translation (NMT) using retrieved images with topics similar to the source sentence, extending image applicability in NMT.
480
scitldr
This paper introduces a novel framework for learning algorithms to solve online combinatorial optimization problems. Towards this goal, we introduce a number of key ideas from traditional algorithms and complexity theory. First, we draw a new connection between primal-dual methods and reinforcement learning. Next, we introduce the concept of adversarial distributions (universal and high-entropy training sets), which are distributions that encourage the learner to find algorithms that work well in the worst case. We test our new ideas on a number of optimization problems such as the AdWords problem, the online knapsack problem, and the secretary problem. Our results indicate that the models have learned behaviours that are consistent with the traditional optimal algorithms for these problems. Machine learning has led to dramatic improvements in our capabilities to solve problems previously considered intractable. Besides the obvious empirical evidence of success, there has also been a strong parallel effort in the theory of ML which aims to explain why, when, and how ML techniques work. Our goal in this paper is to explore whether machine learning can be used to learn algorithms for classic combinatorial optimization problems. We will define this question more specifically by connecting it to three concepts from traditional algorithms and complexity theory. Firstly, by "algorithm," we mean a uniform algorithm, one that works for inputs of all lengths, not just for specific input lengths from which the training data is drawn. Typically, models learned using ML techniques tend to be non-uniform, i.e., depend on input length. Previous approaches to finding uniform models - the Neural Turing machines of BID17 and generally the use of recurrent models (including LSTMs) - all suffer from some drawback, most notably the difficulty of training by back-propagation and gradient descent over long sequences. A particularly clever approach, due to BID21, adopts the idea of learning "convolution masks" of finite size that, when repeatedly applied, solve a problem of interest on inputs of arbitrary length; however, the resulting learning problems appear intractable (since the volume of computation grows at least cubically in their setup for most interesting problems, and stable ways to learn convolutions over such large grids are not well-understood). We expand on this point in Appendix F. Our first key insight is that for numerous combinatorial optimization problems, the primal-dual framework offers efficient solutions, and also lends itself to efficient online algorithms, where the input arrives in small units, one at a time, and the algorithm makes a choice about the input (e.g., which advertiser to give a query impression to, which node to match in a graph, whether to include an item in the knapsack, etc.). In addition, there is usually a clear notion of reward associated with a set of decisions, and the goal is often to optimize the overall rewards collected. This naturally connects our goal to the field of reinforcement learning, and indeed we formulate our learning problems in the Markov Decision Process (MDP) framework and use tools from deep reinforcement learning, including policy gradient and DQN methods.
Specifically, for any optimization problem, the MDP state will consist of three parts: global parameters of the problem instance (e.g., knapsack size), a data structure that we expect (and train) the algorithm to learn to maintain (and whose size can depend on the global parameters), and the current input unit. We will train two agents - U, which computes an update to the data structure, and D, which makes the decision for each input using the data structure and the current input. The RL environment will then carry out the task of applying the update to the data structure, and present it back to the agents U and D as part of the state for the next input. We establish theoretically (Appendix G) that this simple framework is flexible enough to capture a wide class of problems, and empirically that the resulting algorithms are quite powerful. An important question in both ML and Algorithms is what input instances the algorithm is expected to work on. The ML approach is to use a rich enough training set to capture future input distributions. Theoretical computer science (TCS), by contrast, traditionally considers worst-case analysis: an algorithm is judged by its performance on the worst possible input (specially crafted by an Adversary to beat the algorithm). This approach leads to theoretically robust guarantees on the algorithm. In stochastic models of input, the Adversary is somewhat restricted in order to better capture "real" inputs - including the Random Order and the IID models of input. Our second key insight is to bring this approach of adversarial input sets (not to be confused with the notion of adversarial examples which fool ML models; see, for example, BID16) to the ML domain via two techniques to craft training sets: 1. Universal Training Set. A common way to prove lower bounds in the TCS literature is to come up with a distribution over inputs and show that no algorithm can perform better than some factor α ≤ 1 compared to the optimal solution, in expectation. This is a key ingredient in the technique of using Yao's minimax principle to prove a lower bound on the performance of all randomized algorithms. For example, in the Adwords problem, there is a specific input distribution which is hard for all online algorithms (BID23; BID27). Intuitively, one might expect that if an algorithm does perform well on the specified input distribution then it must have learned some characteristics of the optimal algorithm. We bring this idea to the ML literature by proposing to incorporate such instances into the training. 2. High-Entropy Training Set. In some cases, it may be difficult to find a universal training set, or the universal training set may admit algorithms which perform well on the training set while performing poorly on all other instances. To alleviate this problem we also propose to incorporate training sets that have high entropy. For example, in the Adwords problem, a randomized greedy algorithm is able to perform quite well on the adversarial instance, so we incorporate a distribution which is explicitly bad for greedy. In the secretary problem, we provide inputs which come from many different distributions so that it is difficult for the model to learn and exploit any characteristics of the distributions. Our third contribution is the following intriguing question, connected to the broad area of how to interpret ML models. Specifically, suppose that for a given problem, we do manage to learn a network of constant (fixed) size, which does well over inputs of varying lengths coming from varying distributions.
Does this allow us to confidently say that the network has learned the correct algorithm? One observation is that since the network is concise, it has to represent a succinct logic. How does that compare to the optimal pen-and-paper algorithm that computer scientists have developed for the problem? We will answer such questions by plotting the input-output characteristics of the networks learned for the different problems we consider, and comparing them to the expected behavior of the traditional algorithms. It may even be possible to convert the network to an algorithmic form, but we leave such an attempt for future work. We study three optimization problems in this work - the Adwords Problem (aka Online Budgeted Allocation), the Online Knapsack Problem, and the so-called Secretary Problem. All three problems share the feature that they are all very well-studied online combinatorial problems with some probabilistic features, and importantly, the optimal algorithms for each of them have a concise algorithmic form (e.g., not represented implicitly as the solution of a dynamic program). For all three problems we use RL to find a "uniform algorithm", i.e., an input-length-independent logic. We train the models using universal or high-entropy input distributions and find that the models discover the classic algorithms. To mention the highlights of each section: • Adwords problem: The model learns to find the Balance strategy BID22 for unweighted graphs, and the MSVV strategy BID27 for weighted graphs, which optimally trades off between load-balancing and greedy strategies. • Online Knapsack problem: The model learns to find an optimal threshold on value per unit size to use to either accept or reject incoming items. • Secretary Problem: The model learns the optimal "Wait-then-Pick" algorithm, which samples the first 1/e fraction of the input stream and then picks the next item which is higher than any seen before. It also finds the optimal time-dependent value-threshold algorithm for i.i.d. input. Our results suggest that it might be possible to draw a formal connection between the online primal-dual framework and RL, e.g., to prove that the online optimization problems solvable in the primal-dual framework admit efficient algorithms learnable via RL. We leave this as a fascinating open question for future work. Remark. In this paper, we use the standard REINFORCE algorithm for policy gradient, with the Adam optimizer. Our contribution is not in extending RL techniques, but in making the connection to algorithms, and showing how standard RL techniques can in fact find the classic "pen-and-paper" algorithms. Further, we do not optimize the training set-up or hyperparameters; in particular, all our training is done on a single machine, and training often takes less than a day or two. We are taking a specific angle on the question of how machine learning solves optimization problems. There is a lot of previous work on the larger question of ML and optimization. A related previous work is that of BID4, which also studies combinatorial problems, particularly the Traveling Salesman Problem and Knapsack, and also uses the policy gradient method in an RL framework to optimize the parameters of a pointer network. (This paper also summarizes previous literature on combinatorial optimization using neural networks.)
Our work differs in a few ways, but specifically the goal is not only to solve the problem, but also to interpret the learned RL policy network and compare it to the known optimal algorithms, both in performance and in structure. Moreover, the work of BID4 learns a recurrent network, which could become prohibitively expensive to train on data sets that are large enough to capture the complexity of TSP or Knapsack. Another closely related paper is BID10, which uses embeddings and RL to find heuristics to solve classic graph problems on specific distributions. The problems they consider are offline in nature, and the heuristics conform to an incremental (greedy) policy guided by scores generated by the RL agent. Specifically, their goal is to find new heuristics for specific distributions, which is different from the work here, where we ask if RL can discover the classic "worst-case" algorithms. Our work is also different in the same way from other work in the space of combinatorial problems, such as that on TSP and vehicle routing BID24, as well as from the growing literature on using RL to solve optimization for control problems (see, e.g., BID26; BID25). We also mention a loosely related paper by BID5, which uses RL (as a budgeted MDP) to solve the Budget Allocation problem, although that problem is different from the Adwords problem we consider, in that the question there is to optimally allocate a single advertiser's budget. One of the goals in this work is to find a uniform algorithm, i.e., one which is independent of the input length, for which we use the RL approach and focus on online optimization problems. As mentioned earlier, there have been several other nice approaches for this problem, each with a different difficulty level in its goals and different obstacles in learning. These include the Neural Turing machines of BID17 and generally the use of recurrent models (including LSTMs), and the "convolution masks" approach of BID21. Finally, from the algorithms literature, our three problems are very well-studied, with Knapsack (e.g., BID11) and Secretary in particular being decades-old problems. The relatively recent Adwords problem BID27 is strongly motivated by online advertising (see, e.g., BID28); solutions that merge both the theoretical approach and the ML approach could potentially be of high impact in the practice of budgeted ad allocation. Algorithmic work on these problems is cited throughout the next sections. 2 ADWORDS: ONLINE MATCHING AND AD ALLOCATION. We define the AdWords problem (introduced by BID27 as a generalization of the online bipartite b-matching problem) and the key algorithmic results related to this problem. Problem 1 (AdWords problem). There are n advertisers with budgets B_1, ..., B_n and m ad slots. Each ad slot j arrives sequentially along with a vector (v_{1,j}, ..., v_{n,j}), where v_{i,j} is the value that advertiser i has for ad slot j. Once an ad slot arrives, it must be irrevocably allocated to an advertiser or not allocated at all. If ad slot j is allocated to advertiser i, then the revenue is increased by v_{i,j} while advertiser i's budget is depleted by v_{i,j}. The objective is to maximize the total revenue. The online b-matching problem is the special case when the values are in {0, 1}. Algorithm MSVV. Let v_{i,j} be the value that advertiser i has for ad slot j and let s_{i,j} be the fraction of advertiser i's budget that has been spent when ad slot j arrives. Define the "tradeoff" function ψ(x) = e^{1−x}.
Ad slot j is allocated to an advertiser in argmax_{i∈[n]} v_{i,j} ψ(s_{i,j}), where ties can be broken arbitrarily. BID27 showed that when all the values are small compared to their respective budgets, MSVV obtains at least a (1 − 1/e)-approximation of the optimal revenue. Moreover, this is optimal in the worst case. Let us also remark that MSVV has a particularly elegant and intuitive form when v_{i,j} ∈ {0, 1}: the algorithm simply looks at the advertisers with a positive value for the ad slot and allocates to the advertiser who has the most fractional budget remaining (reducing to the BALANCE algorithm of BID22). RL FORMULATION. Suppose there are n advertisers and m ad slots. We formulate the AdWords problem as an RL problem as follows. State space: When ad slot j arrives, the agent sees the state (v_{1,j}, ..., v_{n,j}, s_{1,j}, ..., s_{n,j}), where v_{i,j} is the value that advertiser i has for ad slot j and s_{i,j} is the fractional spend of advertiser i. Action space: The agent can choose to either allocate the ad slot to an advertiser or not allocate the ad slot at all. In order to make the input independent of the number of advertisers, we experiment with another method for encoding the input; we relegate the details to Appendix B. Reward: If the action is to allocate ad slot j to advertiser i and the allocation does not cause advertiser i to exceed its budget, then the reward for that action is v_{i,j}. Transition: If ad slot j was allocated to advertiser i, then advertiser i's fractional spend is updated accordingly. In either case, we move on to the next ad slot j + 1. We use a feedforward neural network with five hidden layers, each with 500 neurons and ReLU nonlinearity. We then train the network using the standard REINFORCE algorithm with a simple fixed learning rate of 10^-4 and a batch size of 10. To facilitate training, we use a bootstrapping approach: we first train the network when the number of ad slots is small, say 100, before training it on a larger stream, say 500. The AdWords problem benefits from the existence of several classes of special graphs which force many algorithms to perform poorly in the worst case. We relegate the details of these graphs to Appendix A. Online bipartite b-matching. We train the model by feeding it instances of the special graphs defined in Appendix A. (In fact, we use a uniform distribution over those graphs.) Having chosen the graph, we also randomly choose whether or not to permute the order of the ad slots. We now describe and analyze the output of the learned model to visualize the policy it has learned. FIG0 illustrates the algorithm that is learned by the network when training on the mixture distribution described above. It is clear that the network has learned some version of balancing, although the exact tradeoffs were not realized by the network. We also provide a comparison of the performance of the learned algorithm and the BALANCE algorithm in Table 1; this can be found in Appendix C. One other interesting aspect to look at is how the duals of the advertisers evolve under the learned agent and under the optimal algorithm. In FIG6, we see that the trajectories of the duals can be quite similar. (FIG0 caption: the probability that advertiser i, as seen by the network, is allocated as a function of its spend, when all other advertisers have spend 0.5 and all advertisers have value 1; the summary curve is obtained by averaging the per-advertiser curves.) Adwords. Finally, we present our results when training our model on AdWords.
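Before turning to those results, and for reference when reading the learned policies, the MSVV rule of Section 2 can be sketched as follows; the tie-breaking and the handling of exhausted budgets are our assumptions:

```python
import math

def msvv_choose(values, spends):
    """Allocate the arriving ad slot to argmax_i v_i * psi(s_i), with
    psi(x) = e^(1 - x) as stated in the text. `values[i]` is advertiser
    i's value for the slot; `spends[i]` its fraction of budget spent."""
    psi = lambda x: math.exp(1.0 - x)
    best, best_score = None, 0.0
    for i, (v, s) in enumerate(zip(values, spends)):
        if v > 0 and s < 1.0 and v * psi(s) > best_score:
            best, best_score = i, v * psi(s)
    return best  # None leaves the slot unallocated
```

For v_{i,j} ∈ {0, 1} this reduces to BALANCE, since ψ is decreasing in the spend, so the rule picks the positive-value advertiser with the most fractional budget remaining.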
For training the model on AdWords, we only used the adversarial graph defined in Appendix A. However, for each instance, every advertiser is given a weight w_i ∈ [0, 1]. If the common budget for the advertisers is B, then advertiser i's budget is scaled to w_i·B and its value for any ad slot is either 0 or w_i. Figure 7 in Appendix C plots the policy that is learned by the network. It is clear that the network has learned that, as an advertiser spends more, it also needs to have a larger value before it is allocated the ad slot. Table 4 shows the performance metrics for the learned agent. Note that the agent was only trained on inputs up to length 100, but it generalizes to much larger input lengths. We leave it as future work to find a more adversarial distribution which forces the learner to more accurately recover MSVV. 3 ONLINE KNAPSACK. Problem 2 (Online knapsack problem). Suppose we have a knapsack with capacity B and a sequence of n items, represented as a sequence of value-size pairs {(v_i, s_i)}_{i∈[n]}. The items arrive sequentially and each item must be irrevocably accepted into the knapsack or rejected as soon as it arrives. The objective is to maximize the total value of the items inside the knapsack without violating the capacity constraint. Algorithm "Online Bang-per-Buck". When n is large and max(v_i, s_i) ≪ B for all i ≥ 1, a nearly optimal strategy for the online KP is as follows. For some small 0 < p ≪ 1, accept (when possible) the first k := np items and define S(r) as the total size of items seen so far with value-by-size ratio (aka "bang-per-buck") at least r, i.e., S(r) = Σ_{i=1}^{k} s_i · 1{v_i/s_i ≥ r}. Define the threshold ratio r* = argmin_r {S(r) < B}. For the remaining items that arrive, accept (when possible) items whose value-to-size ratios are greater than r*. This algorithm is the online version of the natural Bang-per-Buck greedy strategy for the offline problem BID11, and can be interpreted as a "dual-learning" algorithm, which finds the best online estimate of the corresponding dual variable of the natural linear program. Finally, as a point of comparison, note that the Knapsack problem is related to the Adwords problem in the following way: it is simpler in that there is only one budget to pack, but it is also harder in that each item has two parameters, the value and the size, while in Adwords one may consider each item's value to be the same as its size. Suppose there are n items arriving in the sequence and the knapsack capacity is B. Then an RL formulation that may be used to learn the nearly optimal algorithm from above is as follows. Let S_i denote the total size of the items accepted into the knapsack before item i arrives, so that the state on the arrival of item i is (v_i, s_i, S_i). Actions: The agent can choose to either Accept or Reject the item corresponding to the state. Transition: To transition to the next state, set S_{i+1} = S_i + s_i if the action is Accept and S_i + s_i ≤ B; otherwise S_{i+1} = S_i. Reward: If S_i + s_i ≤ B and the action is Accept, then the reward is v_i; else the reward is 0. We use a feedforward neural network with 3 hidden layers, each with 50 neurons and ReLU nonlinearity. The network is trained using the standard REINFORCE algorithm with a simple fixed learning rate of 10^-4 for the Adam optimizer. The batch size was left at 1. We train the policy gradient RL model on a set of different input parameters. The values and sizes are drawn i.i.d. uniform, (v_i, s_i) ∼ U[0, 1]^2, and we vary the budget and the length of the input sequence to make the KS problem more or less constrained. For each of the input instances, the learned RL policy achieves a performance close to the Bang-per-Buck algorithm.
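For reference, a minimal sketch of the Online Bang-per-Buck strategy that the learned policies are compared against; the prefix handling and the threshold scan are our reading of the description above, and item sizes are assumed positive:

```python
def online_bang_per_buck(stream, B, p=0.05):
    """`stream` is a list of (value, size) pairs, B the knapsack capacity,
    p the sampling fraction used to estimate the threshold ratio r*."""
    k = max(1, int(p * len(stream)))
    sample, rest = stream[:k], stream[k:]

    # r* = argmin_r { S(r) < B }: scan the sampled items in decreasing
    # value-by-size order and stop once accumulated size would reach B.
    cum, r_star = 0.0, float("inf")
    for v, s in sorted(sample, key=lambda it: it[0] / it[1], reverse=True):
        cum += s
        if cum >= B:
            break
        r_star = v / s

    used = value = 0.0
    for v, s in sample:                  # the prefix is accepted when possible
        if used + s <= B:
            used, value = used + s, value + v
    for v, s in rest:                    # afterwards, accept only above-threshold items
        if v / s >= r_star and used + s <= B:
            used, value = used + s, value + v
    return value
```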
FIG2 plots the probability that an item with a given value-to-size ratio (x-axis) is accepted when it arrives. It is clear that the policy has learned the Bang-per-Buck algorithm with the correct value-by-size threshold for each distribution. For the two less constrained (B, n) settings (FIG2, left and center), there is budget to pick about one-fifth of the items in the stream if we pick at random (recall items have an average size of 0.5), so they have similar thresholds. For the most constrained setting (FIG2, right), the knapsack can take only a tenth of the items if picked at random, so the network has learned that a much higher threshold is required. (FIG2 caption: the three panels correspond to the three (B, n) settings, left, center, and right. The top row depicts the probability that the agent will accept an item as a function of its value-by-size ratio. The bottom row depicts the histogram of items as a function of their value-by-size ratio, where "all" is over all items in the sequence, and "taken" is over only the items that the agent accepts into the knapsack.)

As opposed to the AdWords and secretary problems, there is no known theoretical work which provides a universal distribution for the online knapsack problem. However, we do know that a universal algorithm would need to maintain a larger state, for example a histogram of value-by-size ratios of items seen so far, and be able to read the thresholds from the histogram. We take the first steps towards this goal here. Consider the following distribution: There are two types of knapsack instances, X and Y. In both X and Y, the budget equals B, and all items have value 1. Fix a small positive integer k, e.g., k = 4. In X, items have size either 1 or k with probability 1/2 each (independently of other items). In Y, all items have size either k or k^2 with probability 1/2 each. Finally, the distribution over instances is that we get either an instance of type X or of type Y with probability 1/2 each. The point of this distribution is that the optimal solution has a different policy for instances of type X versus Y. For X the optimal value-by-size threshold is any number between 1/k and 1, while for Y the threshold is any number between 1/k^2 and 1/k. On the other hand, any single threshold value for the entire distribution will perform sub-optimally for either X or Y.

We train our RL agent on this distribution in two different settings: (A) the original state space defined above, and (B) the same state augmented by a histogram of the spend binned by the value-by-size ratio. Specifically, the state at item i is augmented by H_i, an array of length m representing a size-weighted m-binned histogram of value-to-size ratios seen so far. The learner (A) without the augmented state does not converge to a good solution, achieving only 73% of optimal, while the learner (B) achieves 95% of the optimal solution quite quickly. A plot of their output (Figure 8 in Appendix D) shows that (B) has leveraged the augmented state to determine the optimal threshold for the realized instance type (be it X or Y), while (A) has failed to identify which type of instance it got, and uses a single threshold between 1/k^2 and 1/k. We leave for future work the question of leveraging the simple mixed distribution defined above to find a truly universal training set (e.g., by recursively expanding on it) and show that an RL learner with the augmented state can find a universal bang-per-buck learner (for any input instance distribution).

Problem 3 describes the basic secretary problem. Problem 3 (Secretary problem).
There are n candidates with values v_1, ..., v_n and an agent that is trying to hire the single best candidate (the one with the largest value). The candidates arrive in random order, and we must irrevocably accept or reject each one before the next one arrives. Once a candidate is accepted, we cannot replace them with another. The goal is to maximize the probability of selecting the best candidate in the sequence. The algorithm knows the total number of candidates n. This is an optimal stopping problem. We will dispense with the original language and say that items arrive according to the above process, and the goal is to pick the item with the largest value.

The optimal "Wait-then-Pick" algorithm: An optimal algorithm for this problem is as follows. First, we reject the first 1/e fraction of the items and let i* be the best among these items. Next, we accept the first item j such that v_j ≥ v_{i*}. One can show that this algorithm chooses the best item with probability at least 1/e. It is also known that, with no restriction on the value sequence, no algorithm can do better in the worst case (see also BID9).

We first need to make the input to the models scale-free. We do this by restricting the input values in three different ways, each of them giving a variant of the original problem:

1. Binary setting: We start with the original problem. Let v_1, ..., v_n be the randomly permuted sequence of numbers. The i-th item is presented as a Boolean m_i, where m_i = 1 if v_i = max_{j≤i} v_j and m_i = 0 otherwise. That is, m_i represents whether the item has the maximum value among the items seen so far. Note that the Wait-then-Pick algorithm never really cared about the value, only whether a particular value is the maximum value seen in the stream so far. Hence, the Wait-then-Pick algorithm achieves a success probability of 1/e, and no algorithm can do better.

2. Percentile setting: This is a generalization of the binary setting in which item i is represented as a percentile p_i to indicate its rank among the items seen so far (so p_i = 1 means that the i-th item is the maximum so far). Thus this setting provides more information about the stream seen so far. We can show that Wait-then-Pick is still an optimal algorithm achieving a success probability of 1/e.

3a. i.i.d. value setting with a fixed distribution: This is the original setting in which the item values v_i are declared upon arrival, but with the restriction that the values v_1, ..., v_n are picked i.i.d. from a fixed distribution F. In this restricted setting, Wait-then-Pick is not optimal. Instead, the optimal algorithm is a thresholding algorithm where the threshold decreases over time. Specifically, the algorithm (with knowledge of F and n) determines thresholds t_1 ≥ t_2 ≥ ... ≥ t_n and picks the first item i with v_i > t_i. This algorithm achieves the optimal success probability.

3b. i.i.d. value setting with changing distributions: This is almost identical to the previous setting except that each input instance chooses a distribution F, which may be different every time. The values v_1, ..., v_n are drawn i.i.d. from F. Note that the algorithm stated in the previous paragraph no longer works. In particular, this forces an algorithm to at least look at some of the input before deciding whether to accept. Thus, this setting should bring back elements of the Wait-then-Pick algorithm.

In the first three settings, an RL formulation is as follows.
At time i, the agent sees a state (i/n, x_i), where i/n is the fraction of the sequence seen so far, and x_i = m_i, p_i, or v_i in the binary, percentile, and i.i.d. value settings, respectively. The agent has two possible actions at each state: whether to Accept the item corresponding to the state, or to Reject it. The transition at time i + 1 for both actions is simply to pick the next item x_{i+1} according to the problem setting, and move to ((i+1)/n, x_{i+1}). The reward is given only at the end state (1 + 1/n, Φ), where the reward is +1 if the agent succeeded in picking the maximum (which is the last 1 in the sequence for the binary case, the last item with percentile 1.0 for the percentile case, and max_i v_i for the i.i.d. values case) and −1 otherwise. Note that our formulation is not an MDP, as the rewards are not Markovian. Although we can convert it to an MDP with minor modifications to the state, our results show that this is not necessary. In the value setting with changing distributions, it is impossible to recover the secretary problem with just these two inputs, so we augment the state space by providing the maximum value seen in the past. Otherwise, the RL formulation is as described above.

Architecture and training: We use a feedforward neural network with three hidden layers, each with 50 neurons and ReLU nonlinearity. The output layer has two neurons, and a softmax is taken over the output logits to obtain the probability of each action. We then train the network using the standard REINFORCE algorithm, with a simple fixed learning rate of 10^-4 and a batch size of 50. However, to facilitate training, we use a bootstrapping approach: we first train the network when the input stream is short, say n = 10. Once the learned algorithm is performing sufficiently well, we then increase n, say by 10, and repeat.

In the binary setting, we trained an agent on instances of the secretary problem up to input lengths of 100. In FIG3, we see that the agent has clearly learned a policy which is very similar to the optimal algorithm. In Table 6, we compare the performance metrics of the agent against the optimal secretary algorithm; the learned agent comes quite close. For the percentile setting, Figure 9 again shows that the algorithm has learned to place a sharp threshold. The scores are found in Table 7.

I.i.d. value setting with a fixed distribution: Recall that in this case, the agent should learn radically different behavior than in the other two settings. FIG0 shows the learned algorithm for various input lengths, and we see that, qualitatively, the agent has learned the optimal algorithm. Here we use the value distribution U[0, 1]. Table 8 compares the optimal and the learned algorithm.

I.i.d. value setting with changing distributions: In this case, our results show that by using a distribution which has very high entropy (sample a, b ∼ U[0, 1], after which all values are drawn i.i.d. from U[min(a, b), max(a, b)]), the model is able to learn a behaviour which is characteristic of Wait-then-Pick, i.e. it waits until some time before accepting any value which is larger than the maximum value seen so far. Somewhat surprisingly, the threshold in our experiments also coincides with 1/e. This is illustrated in FIG3. Table 9 gives the performance metrics. Recall that we augmented the state space so as to provide a "hint" to the learner. We leave it as future work to remove the hint, i.e. the agent should learn to maintain the maximum value it has seen in the past.
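For reference, the Wait-then-Pick baseline that the learned agents approach can be simulated in a few lines. This sketch is ours; the ≈ 0.37 success rate it prints matches the classical 1/e guarantee.

```python
import numpy as np

def wait_then_pick(values):
    """Optimal stopping: skip the first n/e items, then accept the
    first item beating the best of the skipped prefix."""
    n = len(values)
    k = int(n / np.e)
    threshold = values[:k].max() if k > 0 else -np.inf
    for i in range(k, n):
        if values[i] > threshold:
            return i                      # index of the accepted item
    return n - 1                          # forced to take the last item

rng = np.random.default_rng(0)
n, trials, wins = 100, 10_000, 0
for _ in range(trials):
    vals = rng.permutation(n).astype(float)   # random arrival order
    wins += vals[wait_then_pick(vals)] == vals.max()
print(wins / trials)                      # ~0.37, close to 1/e
```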
In this work, we introduced several ideas from traditional algorithmic thinking to train neural networks to solve online optimization problems. In the problems that we consider, our results show that RL was able to find key characteristics of the optimal "pen-and-paper" algorithms. However, in some instances (such as in the knapsack and secretary problems), we saw that some state augmentation was needed in order for the learner to more adequately recover the optimal algorithms. In this work, we took a step towards that by having the RL environment encode that state in a form usable by the agent. In future work, we plan to remove the state augmentation from the RL environment and force the agent to learn the state augmentation as part of the training process.

(FIG3 caption: the left panel compares the agent's learned algorithm with the optimal algorithm in the binary setting; the right panel plots the threshold for the agent's learned algorithm in the value setting with changing distributions. Observe that both have learned a threshold at around 1/e.)

A SPECIAL GRAPHS FOR ADWORDS

Here we describe some special graphs for the AdWords problem. Let U denote the set of vertices on the left-hand side (corresponding to the advertisers) and V denote the set of vertices on the right-hand side (corresponding to the ad slots).

(i) The adversarial graph is defined as follows. Let B be an integer and set B_i = B for all i. Let m = Bn. To define the adversarial graph, we label the ad slots 1, ..., m. For i ∈ [n], we add edges between all advertisers in {i, i + 1, ..., n} and all ad slots in {(i − 1)B + 1, ..., iB}. Observe that this graph has a perfect b-matching by matching advertiser i to ad slots {(i − 1)B + 1, ..., iB}. FIG5 shows an example of this graph. It can be shown that for any deterministic algorithm, if one randomly permutes the advertisers, then the expected competitive ratio is bounded above by 1 − 1/e. Consequently, by an application of Yao's principle, for any randomized algorithm there exists a permutation for which the competitive ratio is bounded above by 1 − 1/e.

(ii) The thick-z graph is defined as follows. Suppose n is even. Let B be an integer and set B_i = B for all i. Let m = Bn. Again, label the advertisers 1, ..., n and the ad slots 1, ..., m. We add edges between advertiser i and ad slots {(i − 1)B + 1, ..., iB}. Finally, we also add the complete bipartite graph between ad slots {1, ..., m/2} and advertisers {n/2 + 1, ..., n}. FIG5 shows a diagram of this graph.

(FIG5 caption: the left panel shows the adversarial graph. Each advertiser has a budget of 100; hence, there exists a perfect b-matching by allocating the first 100 copies to advertiser 1, the second 100 copies to advertiser 2, etc. However, for any randomized algorithm, there is always a permutation of the vertices on the left-hand side that will yield a competitive ratio of at most 1 − 1/e. The right panel shows the thick-z graph. Again, each advertiser has a budget of 100, so there exists a perfect matching. However, the greedy algorithm, even if randomized, will yield at most a competitive ratio of 1/2.)

B DISCRETIZED STATE AND ACTION SPACES

For the AdWords experiments, we also considered the following state and action spaces, which we dub the discretized state and action spaces. Discretized, approximate state space: In order to make our framework applicable to a large number of advertisers, we also introduce a discretized state space. For simplicity, assume the values are in [0, 1]. Let g be an integer parameter called the granularity.
When ad slot j arrives, the agent sees a vector r ∈ [0, 1]^{g×g}, where for k_1, k_2 ∈ [g], r_{k_1,k_2} is the fraction of advertisers whose value is in [(k_1 − 1)/g, k_1/g) and whose fraction of budget spent is in [(k_2 − 1)/g, k_2/g). Discretized action space: The agent chooses k_1, k_2 ∈ [g]. Let S_{k_1,k_2} be the set of advertisers whose value is in [(k_1 − 1)/g, k_1/g) and whose fraction of budget spent is in [(k_2 − 1)/g, k_2/g). If S_{k_1,k_2} ≠ ∅, then an advertiser from S_{k_1,k_2} is chosen uniformly at random, and the ad slot is allocated to that advertiser. If S_{k_1,k_2} = ∅, then the ad slot is not allocated.

Figure 6 in Appendix C illustrates the algorithm that is learned by the network. Once again, it is clear that the network has learned to balance, so that advertisers who have spent a smaller fraction of their budget are given preference. However, we suspect that, due to numerical reasons, the network was unable to distinguish between a small fractional number and zero; this is illustrated in Figure 6, where the network did not learn to balance when most of the bidders are concentrated at spend exactly 0.5. Once again, we compare the performance of the learned algorithm and the BALANCE algorithm in TAB2, which can be found in Appendix C.

C TABLES AND FIGURES FOR ADWORDS

Table 1 compares the performance of the BALANCE algorithm and the learned algorithm when using the basic state space. Note that the learned algorithm was trained with 10 advertisers, each with a budget of 50. TAB2 compares the performance of the BALANCE algorithm and the learned algorithm when using the discretized state space. Note that the learned algorithm was trained with 20 advertisers, each with a budget of 20. In Table 3, we give some experimental evidence that the learned algorithms are uniform, in that the quality of the algorithm does not depend too much on the number of advertisers, the budget, or the number of ad slots. Here the agent is trained on input instances with 20 advertisers, each with a budget of 20. However, it was tested on instances with a varying number of advertisers and varying budgets, with up to 10^6 ad slots. We remark that, due to the discretization, one should not expect to get an approximation of 1 to the BALANCE solution even with the training parameters. Here, we see that the learned agent gets 0.92 of BALANCE for the training parameters. If an RL-learned algorithm is "uniform", then it should not degrade too far below 0.92 (compared to the BALANCE solution). In our experiments, we see that no matter how long our input is, the quality of its solution never dropped below 0.84, even as we scale up to 1 million ads. FIG6 compares the evolution of the duals for the advertisers that want the first 10%, 30%, 40%, 70%, 90%, and 100% of the ads, respectively. In many cases, the duals evolve in a similar manner to the optimal algorithm.

Figure 6: The agent's learned algorithm for the bipartite b-matching problem, where the value and spend space has been discretized into 5 buckets. The plots are to be interpreted as follows. In both curves, all advertisers have a value for the ad slot. In the solid curves (resp. the dashed curves), 80% (resp. 90%) of the advertisers have spent exactly 0.5. The plot shows the probability that one of the other 20% (resp. 10%) of the advertisers will be allocated the ad slot as a function of their spend. We see that the agent has roughly learned to balance, but does have some issues when the number of advertisers in each grid cell varies substantially.
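A minimal sketch of the discretized state and the induced allocation step, assuming values lie in [0, 1]; the bin convention and the instance encoding are our own illustrative choices, not the paper's code.

```python
import numpy as np

def discretized_state(values, spends, budgets, g):
    """Build the g x g state: r[k1, k2] is the fraction of advertisers
    whose value falls in bin k1 and whose fractional spend falls in bin k2."""
    n = len(values)
    r = np.zeros((g, g))
    for v, s, B in zip(values, spends, budgets):
        k1 = min(int(v * g), g - 1)        # value bin (values in [0, 1])
        k2 = min(int(s / B * g), g - 1)    # fractional-spend bin
        r[k1, k2] += 1.0 / n
    return r

def allocate(k1, k2, values, spends, budgets, g, rng):
    """The action (k1, k2) selects a uniformly random advertiser from
    the chosen grid cell, or leaves the slot unallocated if the cell is empty."""
    pool = [i for i in range(len(values))
            if min(int(values[i] * g), g - 1) == k1
            and min(int(spends[i] / budgets[i] * g), g - 1) == k2]
    return int(rng.choice(pool)) if pool else None

rng = np.random.default_rng(0)
vals, sp, bud = rng.random(20), rng.random(20) * 10, np.full(20, 10.0)
state = discretized_state(vals, sp, bud, g=5)
print(state.sum())    # 1.0 -- a distribution over the 5 x 5 grid
```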
Table 3: This table compares the performance of the learned algorithm with BALANCE in the discretized state space. Here, the agent is trained on the adversarial graph with the ad slots arriving in a permuted order. The agent was only trained on input instances with 20 advertisers and a common budget of 20, but tested on instances with up to 10^6 ad slots.

Figure 7a plots the following curves. Fix advertiser i. All advertisers except i have value 1 for the ad slot, and their fractional spend is 0.5. We then let the fractional spend of bidder i vary from 0 to 1 and plot the minimum value that advertiser i needs in order to be allocated the item with probability at least 0.5. The dotted curve corresponds to the threshold given by MSVV. Figure 7b is obtained by averaging the curves for all the advertisers.

Figure 8: In Figure 8a, the learner with the augmented state accepts only items of size 1 for type X, and only items of size k for type Y. In Figure 8b, the learner without the augmented state accepts items of size 1 and k for type X (which is suboptimal for X), and only items of size k for type Y (which is optimal for Y).

E ADDITIONAL FIGURES AND TABLES FOR SECRETARY

Figure 9: The agent's algorithm for the secretary problem compared with the optimal algorithm for the secretary problem.

F WHAT IS AN ALGORITHM?

In this section, we visit a fundamental question that motivates our work at a very high level: what is an algorithm? We present a detailed and somewhat informal description of the various nuances that make the question of learning an algorithm challenging and subtle. Traditionally in computer science, we define an algorithm as a finite piece of code (for some machine model) that gives a recipe to solve all instances of a problem. This definition works perfectly when the underlying model of computation is a Turing machine, a standard RAM machine, or a logical system like first-order logic. In particular, for all these models, the same algorithm works for instances of all possible sizes. By contrast, when the model of computation is non-uniform, e.g., Boolean or arithmetic circuits, branching programs, straight-line programs, or feed-forward neural networks, this notion breaks down. In these models, it is customary to think of a concrete computation (algorithm) as operating on inputs of a fixed length. Typically, the models learnt using various machine learning techniques tend to be non-uniform. Even some of the most basic ML models, such as linear or logistic regression, use this notion implicitly: given pairs (x_1, y_1), ..., (x_m, y_m), where each x_i ∈ R^n, a linear model w = (w_1, ..., w_n) ∈ R^n that is designed to minimize Σ_i (⟨w, x_i⟩ − y_i)^2 works only for inputs of length n (and aims to work well for inputs of length n from a distribution that supplies the training data). Similarly, feed-forward neural networks commonly employed for various image classification tasks work on inputs of fixed dimensions (we will discuss an exception to this momentarily).

Given this state of affairs, what does it mean to learn an algorithm for a problem that is well-defined for inputs of arbitrary length? Moreover, is it even reasonable to expect that models trained on inputs of bounded length will generalize to inputs of arbitrary length? We next discuss a few specific success stories and a few attempts in machine learning that have failed to yield satisfying solutions. In the pre-machine-learning era, an early success story is that of finite-state machines and regular expressions.
It is possible, in principle, to learn an FSM (equivalently, a regular expression) if we are given the correct label for all instances up to a (finite) length bound (a bound that depends only on the language) BID20. Even for the next rung on the Chomsky hierarchy, namely context-free languages, the situation is extremely murky (see BID20), and depends delicately on the type of training examples, the structure of the grammar, etc. (The fundamental question of whether two given context-free grammars are equivalent is undecidable, and this type of intractability is closely associated with the problem of inferring or learning grammars from labeled data.) The situation is entirely hopeless for Turing machines, and quickly runs into issues of undecidability. In the context of neural networks (or, equivalently, differentiable arithmetic straight-line programs), three developments are worth highlighting:

1. The Neural Turing Machine model of BID17 offers a philosophically complete answer to the question of what it means to learn algorithms. The model is fundamentally a recurrent neural network with a finite number of parameters, and is Turing-complete in the parlance of artificial intelligence, that is, it is as powerful as the standard Turing machine model. While the work of BID17 has many impressive examples of what these models can be trained for (for example, by training a model to copy short sequences of numbers, it has learned to copy longer sequences of numbers with relatively small error), they are quite far from being trainable for significantly more complex algorithmic tasks. A fundamental bottleneck here is that recurrent networks, in general, are very hard to train reliably through back-propagation over long input sequences. This problem exists even with cleverly crafted variants of recurrent neural networks like LSTMs (BID19) that have been successful in practice in dealing with sequences of hundreds of input symbols.

2. The idea of convolution that is commonly used in image-processing tasks (including feed-forward neural networks for various image-related tasks such as classification, object identification, etc.) offers, in principle, a method to define finite-size algorithms for inputs of arbitrary length. A convolution mask is a (short) sequence of finite size that is applied to every contiguous patch of the input sequence, emitting a finite-size sequence of symbols each time. This results in possibly increasing the input size, but the following paradigm has been observed to work very well in practice (especially for image-related problems): perform several (but a fixed number of) layers of convolution, then pool the resulting sequence into a fixed-length summary, and finally apply an expensive neural network computation on this fixed-length summary to produce the output. The key here is that the pooling operator is typically defined by dividing the input into a fixed number of regions (possibly overlapping) and applying a simple differentiable function (e.g., the SUM or addition operator) to the convolution outputs in each region. In particular, the architecture defined above implies that regardless of the size of the input instance, the goal of learning is to infer a fixed number of parameters and, equally importantly, the depth of the resulting computation graph is finite, so algorithms like back-propagation have a chance to succeed.
Unfortunately, however, the finite-depth limitation that enables (potentially) efficient (or at least feasible) learning comes with a severe cost: it is unclear how rich the resulting model is, that is, we don't know if there are algorithms for interesting tasks in this model. This question is related to fundamental questions in computational complexity theory: on the one hand, the closest complexity class that captures computations like this, namely TC^0 (the class of problems solvable by constant-depth polynomial-size circuits with AND, OR, NOT, and THRESHOLD gates), is not known to be powerful enough to perform all polynomial-time computations (or even logspace computations) (Aaronson); on the other hand, a slight weakening, where we drop the THRESHOLD gates, results in the complexity class AC^0, which is not powerful enough to compute even simple functions like the parity of n bits (or the product of two n-bit integers). To summarize, we don't know if the convolution-pooling paradigm (even though it works well in practice on certain classes of problems) is powerful enough to represent nontrivial algorithms for interesting real-world computational tasks.

3. The work of BID21 attempts to go beyond this type of limitation using an elegant idea: they go back to first principles in terms of how Turing machines work, and propose a model that captures it well. In their model, the goal is to learn a finite-sized set of convolution masks that, when repeatedly applied to an input (so it is a recurrent network), effectively solve the problem. In other words, they remove the depth limitation in the convolution-pooling model outlined above. This immediately restores the power of the model; it is now rich enough to simulate a Turing machine with polynomial overhead. On the other hand, even simple tasks like adding two n-bit integers could now result in Ω(n^2) or Ω(n^3) volume of computation (volume refers to the product of time and space). The resulting depth makes this hard to train, but at least it solves the non-uniformity problem: in principle, one can train a model on inputs of length 100 and hope that it will work on arbitrarily long inputs. Kaiser and Sutskever present some impressive examples of basic problems for which they are able to learn algorithms (e.g., addition, multiplication, etc.). In our view, this work comes closest to addressing the philosophical questions in the right framework, even if the resulting learning problems appear intractable.

One of our goals in this paper is to understand under what conditions we can learn uniform algorithms for various problems. By uniform algorithms, we mean algorithms that work for inputs of all lengths. In particular, this implies that the number of parameters that describe the algorithm (that we wish to learn) needs to be finite. In this section, we show that uniform algorithms can be learned for a large and interesting class of problems by combining two key ideas. As described in the introduction, we focus on learning algorithms for optimization problems such as the classic knapsack problem, matching and allocation problems in bipartite graphs, and versions of the "secretary problem". By focusing on optimization problems which have a clear notion of immediate and/or overall rewards, we are able to cast the learning problem as a reinforcement learning (RL) problem. This helps us effectively sidestep the "depth problem" which arises from training recurrent networks.
We focus on the class of functions which can be computed, or approximated by, algorithms utilizing a small amount of memory and only a few passes over the input sequence (for example, online algorithms with space constraints). These two restrictions are certainly limiting, but as we shall see shortly, they lead to a very interesting sweet spot that already captures numerous real-world optimization problems admitting nontrivial solutions. For instance, the AdWords problem has an optimal algorithm that looks only at the current state of nature and ignores all past information, and the online knapsack problem has a nearly optimal solution which requires only O(1) memory. Importantly, both of these problems require memory which is independent of the input length. Equally importantly, these restrictions lead us to a representation theorem that establishes rigorously that such problems can be solved by computation graphs of constant depth (independent of the input length), which, in turn, leads to tractable learning tasks.

A few remarks are in order on the memory-restricted computational models we focus on. The simplest model is the standard "streaming" algorithm (BID2, BID18), where the algorithm is endowed with a small amount of working memory (typically polylogarithmic in the input length) and processes the input sequence one element at a time, spending very little time on each input item (typically polylogarithmic in the input length). This model has been known in the CS theory literature to be surprisingly powerful in estimating several statistical quantities of input streams (see BID2). While powerful, this model is somewhat cumbersome when one wishes to solve problems whose output is more than a handful of numbers, e.g., the matching problem in bipartite graphs. The prevailing model of choice for such problems is the semi-streaming model. In this model, the algorithm is still a one-pass (or few-pass) algorithm, but it is allowed a linear amount of memory (linear in the right complexity measure, e.g., the number of vertices for graphs) and still needs to process each input item in polylogarithmic time. A particular variant of this model that we will discuss is what we'll call the segmented semi-streaming model, where the linear amount of memory is further assumed to be divided into constant-sized units, one per variable of interest (e.g., a handful of variables per vertex of a graph). This is a very natural model of computation that is also quite rich in what can be accomplished in it: for example, there are powerful algorithms for various versions of matching and allocation problems (BID23, BID27, BID1), which are some of the "hardest" problems in P, the class of polynomial-time solvable problems. Informally this makes sense: many algorithms maintain per-variable information throughout the course of their execution (weights, dual variables, vertex colors, etc.). In ML terms, we may informally think of a streaming algorithm as the discrete analogue of an LSTM.
This, of course, makes a streaming algorithm (never mind the more powerful variants like semi-streaming) difficult to learn via back-propagation; the key twist, however, is a structural representation theorem (established in the CS theory literature (BID3, BID14) and independently re-discovered in the context of deep learning by the "deep sets" line of work) which shows, effectively, that every symmetric function computable in the streaming model is also computable in the weaker "sketching" model (where each input item is sketched independently of the others, and the resulting sketches are aggregated by a combining function). Informally, this gives us a large class of functions that can be computed by computation graphs of constant depth with a constant number of parameters that need to be learnt. This is the key fact that makes our framework tick.

Definition 1. A function f is said to be computable by a streaming algorithm with space s(·) if there is an algorithm A that, for every n and every x = x_1, ..., x_n, computes f(x) given one-way access to x (that is, reading x one coordinate or "item" at a time), using space no more than s(n). (Traditionally, we also require that A runs in time poly(s(n)) on each item of the input stream; for the purposes of this paper, this is unimportant, so we will assume that each input item has constant size and A processes each item in constant time.)

Definition 2. A function f is said to be computable by a sketching algorithm if there are two (uniform) algorithms S and R such that for every n and every x = x_1, ..., x_n, f(x_1, ..., x_n) = R(S(x_1), ..., S(x_n)). Here S is called the "sketching function" and R is called the "reducer function". A function that is computable by a sketching algorithm is thus computable in a simple "Map-Reduce" style of computation (BID12).

The main idea of this section is that while differentiable streaming algorithms are hard to learn (because on long inputs, the back-propagation chains are too long), differentiable sketching algorithms are easy: there is a finite number of parameters to learn, no back-propagation chain is more than a constant number of steps long, and we can train networks that do the "sketching" (like the function S in the definition above) and the "reducing" (like the function R above) on inputs of arbitrary length, provided R is simple enough.

The key complexity-theoretic result we draw on is the following theorem (BID3, BID14), which shows that under suitable conditions, any function computable by a streaming algorithm is also computable by a sketching algorithm. Essentially this result has also been independently discovered in the "deep sets" work.

Definition 3. A function f on n inputs x_1, ..., x_n is said to be symmetric if f is invariant to permutations of the inputs, that is, for all n, all x = x_1, ..., x_n, and all permutations π ∈ S_n (the group of permutations of n objects), f(x_1, ..., x_n) = f(x_{π(1)}, ..., x_{π(n)}).

Theorem 1 (BID3, BID14; paraphrased). If a function f is symmetric and is computable by a streaming algorithm, it is also computable by a sketching algorithm.

There are additional technical constraints in the results of (BID3, BID14), but from the viewpoint of learning uniform algorithms, we intend to use Theorem 1 only to guide us in the following way, so we suppress those details.
Suppose we consider an optimization problem captured by a function f (e.g., AdWords, knapsack, etc.); we will use the fact that an "online version" of f (where the inputs arrive one at a time) often admits an efficient streaming or semi-streaming algorithm through the primal-dual framework (BID27, BID7). If f is symmetric, then the representation theorem above implies that f can be computed by a pair of functions S and R in the sketching model. This reduces the problem of learning a uniform algorithm for f to the problem of learning uniform algorithms S and R. We invoke the symmetry of f once again to conclude that R must be a symmetric function as well; this implies that R can be computed given only the set of values of the sketch function S on the given input sequence. In particular, if the range of S is a set of discrete values of size k, we only need to learn a function R that can be computed from the k-bucket histogram of the values of S. If S is computed in one-hot fashion, the count for each of the k buckets in the histogram of values of S is simply a sum, an eminently differentiable function!
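To make the sketch-and-reduce structure concrete, the toy model below (our construction, not the paper's code) maps each item to a soft k-bucket one-hot via shared weights, sums the buckets into a histogram, and applies a reducer; the output is exactly permutation-invariant, and every back-propagation path has constant depth regardless of the stream length n.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def sketch_and_reduce(x, W_s, W_r):
    """f(x) = R(sum_i S(x_i)): a constant-depth, length-independent net.

    S: each scalar item -> soft one-hot over k buckets (shared weights W_s)
    aggregation: bucket-wise SUM -> k-bin histogram (symmetric in the input)
    R: a linear reducer over the histogram (weights W_r)
    """
    feats = np.stack([x, np.ones_like(x)], axis=1)   # item features [x_i, 1]
    sketches = softmax(feats @ W_s)                  # (n, k) soft buckets
    hist = sketches.sum(axis=0)                      # order-invariant sum
    return hist @ W_r                                # scalar output

rng = np.random.default_rng(0)
k = 8
W_s = rng.normal(size=(2, k))
W_r = rng.normal(size=k)
x = rng.random(1000)
# permutation invariance: same output for any ordering of the stream
print(np.isclose(sketch_and_reduce(x, W_s, W_r),
                 sketch_and_reduce(rng.permutation(x), W_s, W_r)))
```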
By combining ideas from traditional algorithm design and reinforcement learning, we introduce a novel framework for learning algorithms that solve online combinatorial optimization problems.
481
scitldr
Despite their popularity and successes, deep neural networks are poorly understood theoretically and are often treated as 'black box' systems. Using a functional view of these networks gives us a useful new lens with which to understand them. This allows us to theoretically or experimentally probe properties of these networks, including the effect of standard initializations, the value of depth, the underlying loss surface, and the origins of generalization. One key result is that generalization comes from smoothness of the functional approximation, combined with a flat initial approximation. This smoothness increases with the number of units, explaining why massively overparameterized networks continue to generalize well. Deep neural networks, trained via gradient descent, have revolutionized the field of machine learning. Despite their widespread adoption, theoretical understanding of fundamental properties of deep learning - the true value of depth, the root cause of implicit regularization, and the seemingly 'unreasonable' generalization achieved by overparameterized networks - remains mysterious. Empirically, it is known that depth is critical to the success of deep learning. Theoretically, it has been proven that maximum expressivity grows exponentially with depth, with a smaller number of trainable parameters. This theoretical capacity may not be used, as has recently been shown explicitly. Instead, the number of regions within a trained network is proportional to the total number of hidden units, regardless of depth. Clearly deep networks perform better, but what is the value of depth if not in increasing expressivity? Another major factor leading to the success and widespread adoption of deep learning has been its surprisingly high generalization performance. In contrast to other machine learning techniques, continuing to add parameters to a deep network (beyond zero training loss) tends to improve generalization performance. This holds even for networks that are massively overparameterized and which, according to traditional ML theory, should (over)fit all the training data. How does training deep networks with excess capacity lead to generalization? And how can it be that this generalization error decreases with overparameterization? We believe that taking a functional view allows us a new, useful lens with which to explore and understand these issues. In particular, we focus on shallow and deep fully connected univariate ReLU networks, whose parameters always result in a continuous piecewise linear (CPWL) approximation to the target function. We provide theoretical results for shallow networks, with experiments showing that these qualitative results hold in deeper nets. Our approach is related to several previous works in that we wish to characterize parameterization and generalization. We differ from these other works by using small widths, rather than massively overparameterized or infinite ones, and by using a functional parameterization to measure properties such as smoothness. Other prior works attempt to provide theoretical upper or lower bounds on the number of induced pieces in ReLU networks, whereas we are more interested in the empirical number of pieces in example tasks. Interestingly, some prior work also takes a functional view but is not interested in training and generalization as we are. Previous work has hinted at the importance of small-norm initialization, but the functional perspective allows us to prove generalization properties in shallow networks.
The main contributions of this work are as follows:

- Functional perspective of initialization: increasingly flat with depth. In the functional perspective, neural network parameters determine the locations of breakpoints and their delta-slopes (defined in Section 2.1) in the CPWL reparameterization. We prove that, for common initializations, these distributions are mean 0 with low standard deviation. The delta-slope distribution becomes increasingly concentrated as the depth of the network increases, leading to flatter approximations. In contrast, the breakpoint distribution grows wider, allowing deeper networks to better approximate over a broader range of inputs.

- Value of depth: optimization, not expressivity. Theoretically, depth adds an exponential amount of expressivity. Empirically, this is not true in trained deep networks. We find that expressivity scales with the number of total units, and weakly if at all with depth. However, we find that depth makes it easier for GD to optimize the resulting network, allowing for greater flexibility in the movement of breakpoints, as well as in the number of breakpoints induced during training.

- Generalization is due to flat initialization in the overparameterized regime. We find that generalization in overparametrized FC ReLU nets is due to three factors: (i) the very flat initialization, (ii) the curvature-based parametrization of the approximating function (breakpoints and delta-slopes), and (iii) the role of gradient descent (GD) in preserving (i) and regularizing via (ii). In particular, the global, rather than local, impact of breakpoints and delta-slopes helps regularize the approximating function in the large gaps between training data, resulting in smoothness there. Due to these nonlocal effects, more overparameterization leads to smoother approximations (all else equal), and thus typically to better generalization.

Consider a fully connected ReLU neural net f̂_θ(x) with a single hidden layer of width H, scalar input x ∈ R, and scalar output y ∈ R. f̂(·; θ) is a continuous piecewise linear (CPWL) function, since the ReLU nonlinearity is CPWL. We want to understand the function implemented by this neural net, and so we ask: how do the CPWL parameters relate to the NN parameters? We answer this by transforming from the NN parametrization (weights and biases) to two CPWL parametrizations. The net computes

f̂(x; θ) = Σ_{i=1}^{H} v_i (w_i x + b_i)_+ = Σ_{i=1}^{H} μ_i (x − β_i) [s_i (x − β_i) > 0],

where the Iversen bracket [b] is 1 when the condition b is true, and 0 otherwise. Here the NN parameters (w_i, b_i, v_i) denote the input weight, bias, and output weight of neuron i, and (·)_+ ≜ max{0, ·} denotes the ReLU function. The first CPWL parametrization is the breakpoint, delta-slope, orientation (BDSO) form above, where β_i ≜ −b_i/w_i is (the x-coordinate of) the breakpoint (or knot) induced by neuron i, μ_i ≜ w_i v_i is the delta-slope contribution of neuron i, and s_i ≜ sgn(w_i) ∈ {±1} is the orientation of β_i (active to the left for s_i = −1, to the right for s_i = +1). Intuitively, in a good fit the breakpoints β_i will congregate in areas of high curvature of the ground truth function (where |f''(x)| ≫ 0), while the delta-slopes μ_i will actually implement the needed curvature by changing the slope by μ_i from one piece p(i) to the next p(i) + 1.
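The weight-space to BDSO conversion is a one-liner per quantity; the sketch below (ours, not the paper's code) maps (w, b, v) to breakpoints, delta-slopes, and orientations, and numerically checks that the two parameterizations agree.

```python
import numpy as np

def to_bdso(w, b, v):
    """Map shallow-ReLU weights to BDSO parameters."""
    beta = -b / w             # breakpoint x-coordinates
    mu = w * v                # delta-slopes (change in slope at beta)
    s = np.sign(w)            # orientation: -1 active left, +1 active right
    return beta, mu, s

def f_nn(x, w, b, v):
    """Weight-space forward pass of the shallow net."""
    return (v * np.maximum(w * x[:, None] + b, 0.0)).sum(axis=1)

def f_bdso(x, beta, mu, s):
    """BDSO forward pass: delta-slope times active-side displacement."""
    d = x[:, None] - beta
    return (mu * d * (s * d > 0)).sum(axis=1)

rng = np.random.default_rng(0)
H = 16
w, b, v = rng.normal(size=H), rng.normal(size=H), rng.normal(size=H)
x = np.linspace(-3, 3, 101)
beta, mu, s = to_bdso(w, b, v)
print(np.allclose(f_nn(x, w, b, v), f_bdso(x, beta, mu, s)))  # True
```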
As the number of pieces grows, the approximation will improve, and the delta-slopes (scaled by the piece lengths) approach the true curvature of f. We note that the BDSO parametrization of a ReLU NN is closely related to, but different from, a traditional roughness-minimizing m-th order spline parametrization f̂_spline(x): BDSO (i) lacks the base polynomial, and (ii) it has two possible breakpoint orientations s_i ∈ {±1}, whereas the spline has only one. We note in passing that adding the base polynomial (for the linear case m = 1) into the BDSO ReLU parametrization yields a ReLU ResNet parametrization. We believe this is a novel viewpoint that may shed more light on the origin of the effectiveness of ResNets, but we leave it for future work.

The second parametrization is the canonical one for PWL functions: β_1 < ... < β_P is the sorted list of (the x-coordinates of) the P ≜ H + 1 breakpoints (or knots), and m_p, γ_p are the slope and y-intercept of piece p. Computing the analogous reparametrization to function space for deep networks is more involved, so we present a basic overview here and a more detailed treatment in Appendix B. For L ≥ 2 layers with widths H^(ℓ), the neural network's activations are defined layer by layer: neuron i in hidden layer ℓ ∈ {1, 2, ..., L} has net input z_i^(ℓ)(x) and activation (z_i^(ℓ)(x))_+; a point is a breakpoint induced by neuron i in layer ℓ if it is a zero-crossing of the net input, i.e. z_i^(ℓ)(x) = 0.

Considering these parameterizations (especially the BDSO parameterization) provides a new, useful lens with which to analyze neural nets, enabling us to reason more easily and transparently about the initialization, loss surface, and training dynamics. The benefits of this approach derive from two main properties: we have 'modded out' the degeneracies in the NN parameterization, and the loss depends on the NN parameters θ_NN only through the BDSO parameters (the approximating function) θ_BDSO, i.e. ℓ(θ_NN) = ℓ̃(θ_BDSO(θ_NN)), analogous to the concept of a minimal sufficient statistic in exponential family models. Much recent related work has also veered in this direction, analyzing function space.

We now study the random initializations commonly used in deep learning in function space. These include the independent Gaussian initialization, with weights and biases drawn from zero-mean Gaussians with standard deviations σ_w, σ_b, σ_v, and the independent Uniform initialization, with parameters drawn uniformly from symmetric intervals about 0. We find that common initializations result in flat functions, becoming flatter with increasing depth.

Theorem 1. Consider a fully connected ReLU neural net with scalar input and output, and a single hidden layer of width H. Let the weights and biases be initialized randomly according to a zero-mean Gaussian or Uniform distribution. Then the induced joint distributions of the function space parameters (breakpoints β, delta-slopes μ) have closed forms under both (a) the independent Gaussian initialization and (b) the independent Uniform initialization.

Using this result, we can immediately derive marginal and conditional distributions for the breakpoints and delta-slopes. Corollary 1. Consider the same setting as Theorem 1. (a) In the case of an independent Gaussian initialization, the marginals can be expressed in terms of the Meijer G-function G^{nm}_{pq}(·|·) and the modified Bessel function of the second kind K_ν(·). (b) In the case of an independent Uniform initialization, the marginals can be expressed in terms of Tri(·; a), the symmetric triangular distribution with base [−a, a] and mode 0.

Implications. Corollary 1 implies that the breakpoint density drops quickly away from the origin for common initializations. If f has significant curvature far from the origin, then it may be far more difficult to fit.
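These distributional claims are easy to probe numerically. The simulation below is our sketch: the He-style scales σ_w = √2 and σ_v = √(2/H) follow the convention used later in the text, while the Gaussian bias scale σ_b = √2 is an assumption made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
H, trials = 64, 2000
betas, mus = [], []
for _ in range(trials):
    w = rng.normal(0, np.sqrt(2.0), H)          # input weights
    b = rng.normal(0, np.sqrt(2.0), H)          # biases (assumed scale)
    v = rng.normal(0, np.sqrt(2.0 / H), H)      # output weights
    betas.append(-b / w)                        # breakpoints
    mus.append(w * v)                           # delta-slopes

betas, mus = np.concatenate(betas), np.concatenate(mus)
# breakpoints: Cauchy-like, concentrated near 0 with heavy tails
print(np.median(np.abs(betas)))                 # ~1 (scale sigma_b/sigma_w)
# delta-slopes: tightly concentrated => nearly flat initial function
print(mus.std(), (mus**2).sum() / trials)       # mean roughness ~ 4
```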
We show that this is indeed the case by training a shallow ReLU NN with an initialization that does not match the underlying curvature; training becomes easier when the initial breakpoint distribution better matches the function curvature. We also show that during training, breakpoint distributions move to better match the underlying function curvature, and that this effect increases with depth (see Section 3, Table 1, and Appendix A.6). This implies that a data-dependent initialization, with a breakpoint distribution near areas of high curvature, could potentially be faster and easier to train.

Next, we consider the typical Gaussian He or Glorot (Glorot & Bengio) initializations. In the He initialization, we have σ_w = √2 and σ_v = √(2/H). In the Glorot initialization, we have σ_w = σ_v = √(2/(H + 1)). We wish to consider their effect on the smoothness of the initial function approximation. From here on, we measure smoothness using a roughness metric, defined as ρ ≜ Σ_i μ_i^2, where lower roughness indicates a smoother approximation.

Theorem 2. Consider the initial roughness ρ_0 under a Gaussian initialization. In the He initialization, the tail probability of ρ_0 concentrates around its mean E[ρ_0] = 4. In the Glorot initialization, the tail probability concentrates around a mean that decreases with H. Thus, as the width H increases, the distribution of the roughness of the initial function f̂_0 gets tighter around its mean. In the case of the He initialization, this mean is constant; in the Glorot initialization, it decreases with H. In either case, for reasonable widths, the initial roughness is small with high probability. This smoothness has implications for the implicit regularization/generalization phenomenon observed in recent work (see Section 3 for generalization/smoothness analysis during training).

Related work. Several recent works analyze the random initialization in deep networks. However, there are two main differences. First, they focus on the infinite-width case and can thus use the Central Limit Theorem (CLT), whereas we focus on the finite-width case and cannot use the CLT, thus requiring nontrivial mathematical machinery (see the Supplement for detailed proofs). Second, they focus on the activations as a function of the input, whereas we also compute the joint densities of the BDSO parameters, i.e. breakpoints and delta-slopes. The latter is particularly important for understanding the non-uniform density of breakpoints away from the origin, as noted above.

We now consider the mean squared error (MSE) loss as a function of either the NN parameters or the BDSO parameters. Theorem 3 (informal): every critical point of the BDSO loss induces a partition Π of the data such that the restriction of f̂_BDSO to any piece of this partition, denoted f̂(·; θ_BDSO)|_{π_p}, is a linear function. An open question is how many such critical points exist. A starting point is to consider that there are C(N + H, H) ≜ (N + H)!/(N! H!) possible partitions of the data. Not every such partition will admit a piecewise-OLS solution which is also continuous, and it is difficult to analytically characterize such solutions, so we resort to simulation and find a lower bound that suggests the number of critical points grows at least polynomially in N and H (Figure 7). Using Theorem 3, we can characterize the growth of global minima in the overparameterized case. Call a partition Π lonely if each piece π_p contains at most one datapoint. Then, we can prove the following result:

Theorem 4. For any lonely partition Π, there are infinitely many parameter settings θ_BDSO that induce Π and are global minima with ℓ̃(θ_BDSO) = 0.

Proof.
Note that each linear piece p has two degrees of freedom (slope and intercept). By way of induction, start at (say) the left-most piece. If there is a datapoint in this piece, choose an arbitrary slope and intercept that go through it; otherwise, choose an arbitrary slope and intercept. At each subsequent piece, we can use one degree of freedom to ensure continuity with the previous piece, and use the other degree of freedom to match the data (if there is any).

Remark 1. Suppose that the H breakpoints are uniformly spaced and that the N data points are uniformly distributed within the region of breakpoints. Then, in the overparametrized regime H ≥ αN^2 for some constant α > 1, the induced partition Π is lonely with high probability (a birthday-problem calculation gives probability roughly e^{−N^2/(H+1)} ≥ e^{−1/α}, which approaches 1 as α grows). Furthermore, the total number of lonely partitions, and thus a lower bound on the total number of global minima of ℓ̃, is combinatorially large. Thus, with only order N^2 units, we can almost guarantee lonely partitions, where the piecewise OLS solution on these lonely partitions will be the global optimum. Note how simple and transparent the function space explanation is for why overparametrization makes optimization easy, as compared to weight space explanations requiring order N^7 units.

The above sections argue that overparameterization leads to a flatter initial function approximation and an easier time reaching a global minimum over the training data. However, neural networks also exhibit unreasonably high generalization performance, which must be due to implicit regularization, since the effect is independent of the loss function. Here we provide an argument that overparameterization directly leads to this implicit regularization, due to the increasing flatness of the initialization and the non-locality of the delta-slope parameters. Consider a dataset like that shown in Figure 8, with a data gap between regions of two continuous functions f_L, f_R, and consider a breakpoint i with orientation s_i in the gap. Starting with a flat initialization, the dynamics of the i-th delta-slope are μ̇_i(t) = r_{2,s_i}(t) + r_{3,s_i}(t) β_i(t), where r_{2,s}(t), r_{3,s}(t) are the (negative) net correlation and residual on the active side of i, in this case including data from the function f_{s_i} but not f_{−s_i}. Note that both terms of the gradient μ̇_i have a weak dependence on i through the orientation s_i, and the second term additionally depends on i through β_i(t). Thus the vector of delta-slopes with orientation s evolves according to μ̇_s = r_{2,s}(t) 1 + r_{3,s}(t) β_s.

Now consider the regime of overparametrization H ≫ N. It will turn out to be identical to taking a continuum limit, in which μ_s(x, t) is the curvature of the approximation (the discrete index i has become a continuous index x) and β̇_i(t) → 0 (following from Theorem 5, multiplying β̇_i(t) by v_i(t)/w_i(t) and factoring out μ_i(t) → 0). Integrating the dynamics μ̇_s(x, t) = r_{2,s}(t) + r_{3,s}(t) x over all time yields μ_s(x, t = ∞) = μ_s(x, t = 0) + R*_{2,s} + R*_{3,s} x, where the initial curvature μ_s(x, t = 0) ≈ 0 (Section 3) and R*_{j,s} ≜ ∫_0^∞ dt r_{j,s}(t) < ∞ (convergence of the residuals and immobility of the breakpoints, β̇_i(t) = 0, imply convergence of r_{j,s}(t)). Integrating twice over space (from x = ξ_s to x) yields a cubic spline f̂(x, t = ∞) = c_{0,s} + c_{1,s} x + c_{2,s} x^2 + c_{3,s} x^3, where c_{0,s}, c_{1,s} are integration constants determined by the per-piece boundary conditions (PBCs), thus matching the 0-th and 1st derivatives at the gap endpoints. The other two coefficients c_{k,s} ∝ R*_{k,s}, k ∈ {2, 3}, serve to match the 2nd and 3rd derivatives at the gap endpoints.
Clearly, matching the training data only requires the two parameters c_{0,s}, c_{1,s}; and yet, surprisingly, two unexpected parameters c_{2,s}, c_{3,s} emerge that endow f̂ with smoothness in the data gap, despite the loss function not possessing any explicit regularization term. Tracing back to find the origin of these smoothness-inducing terms, we see that they emerge as a consequence of (i) the smoothness of the initial function and (ii) the active half-space structure, which in turn arises due to the discrete curvature-based (delta-slope) parameterization. Stepping back, the ReLU net parameterization is a discretization of this underlying continuous 2nd-order ordinary differential equation. In Section 3 we conduct experiments to test this theory.

Breaking Bad: breakpoint densities mismatched to function curvature make optimization difficult. We first test our initialization theory against real networks. We initialize fully-connected ReLU networks of varying depths according to the popular He initialization. Figure 1 shows experimentally measured densities of breakpoints and delta-slopes. Our theory matches the experiments well. The main points to note are that (i) breakpoints are indeed more highly concentrated around the origin, and (ii) as depth increases, delta-slopes have lower variance and thus lead to even flatter initial functions. We next ask whether the standard initializations will experience difficulty fitting functions that have significant curvature away from the origin (e.g. learning the energy function of a protein molecule). We train ReLU networks to fit a periodic function (sin(x)), which has high curvature both at and far from the origin. We find that the standard initializations do quite poorly away from the origin, consistent with our theory that breakpoints are essential for modeling curvature. Probing further, we observe empirically that breakpoints cannot migrate very far from their initial location, even if there are plenty of breakpoints overall, leading to highly suboptimal fits.

In order to prove that it is indeed the breakpoint density that is causally responsible, we attempt to rescue the poor fitting by using a simple data-dependent initialization that samples breakpoints uniformly over the training data range [x_min, x_max], achieved by exploiting the breakpoint relation β_i = −b_i/w_i (a sketch follows below). We train shallow ReLU networks on training data sampled from a sine and a quadratic function, two extremes on the spectrum of curvature:

              | Sine          | Quadratic
    Standard  | 4.096 ± 2.25  | .1032 ± .0404
    Uniform   | 2.280 ± .457  | .1118 ± .0248

The data show that uniform breakpoint density rescues bad fits in cases with significant curvature far from the origin, with less effect in other cases, confirming the theory. We note that this could be a potentially useful data-dependent initialization strategy, one that can scale to high dimensions, but we leave this for future work.

Explaining and quantifying the suboptimality of gradient descent. The suboptimality seen above begs a larger question: under what conditions will GD be successful? Empirically, it has been observed that neural nets must be massively overparameterized (relative to the number of parameters needed to express the underlying function) in order to ensure good training performance.
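The rescue initialization mentioned above follows directly from the relation b = −wβ: sample breakpoints uniformly over the data range and back out the biases. The function below is our sketch of that idea; the He-scaled slopes are an illustrative choice.

```python
import numpy as np

def uniform_breakpoint_init(H, x_min, x_max, rng):
    """Data-dependent init: breakpoints uniform over [x_min, x_max].

    Exploits beta = -b / w, i.e. b = -w * beta, so we can place
    breakpoints wherever we like while keeping He-scaled weights.
    """
    w = rng.normal(0.0, np.sqrt(2.0), H)         # He-scaled slopes
    beta = rng.uniform(x_min, x_max, H)          # uniform breakpoints
    b = -w * beta                                # biases that realize them
    v = rng.normal(0.0, np.sqrt(2.0 / H), H)     # small output weights
    return w, b, v

w, b, v = uniform_breakpoint_init(64, -2 * np.pi, 2 * np.pi,
                                  np.random.default_rng(0))
print(np.sort(-b / w)[:5])                       # breakpoints span the range
```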
Our theory provides a possible explanation for this need for overparameterization: if GD cannot move breakpoints too far from their starting point, then one natural strategy is to sample as many breakpoints as possible everywhere, allowing us to fit an arbitrary f. The downside of this strategy is that many breakpoints will add little value.

(Table 2, fragment recovered from the extraction; rows are network depths, values are numbers of linear pieces, column labels lost:
  3: 55.5 ± 2.9, 52 ± 1.414, 50 ± .7, 49.25 ± 3.3, 51.25 ± 6.1, 49.25 ± 4.5
  4: 68 ± 3.1, 57.25 ± 6.8, 48.5 ± 2.5, 42.5 ± 4.8, 40.25 ± 3.9, 40.25 ± 3.3
  5: 62.25 ± 15.1, 49 ± 3.5, 44.5 ± 5.1, 38 ± 5.1, 33.75 ± 1.1, 31.5 ± 1.7)

In order to test this explanation and, more generally, understand the root causes of GD's difficulty, we focus on the case of a fully connected shallow ReLU network. A univariate input (i) enables us to use our theory, (ii) allows for visualization of the entire learning trajectory, and (iii) enables direct comparison with existing globally (near-)optimal algorithms for fitting PWL functions. The latter include the Dynamic Programming algorithm (DP) and a very fast greedy approximation known as Greedy Merge (GM). How do these algorithms compare to GD, across different target function classes, in terms of training loss and the number of pieces/hidden units? We use this metric for the neural network as well, rather than the total number of trainable parameters. Taking the functional approximation view allows us to directly compare neural network performance to these PWL approximation algorithms. For a quadratic function (e.g. with high curvature, requiring many pieces), we find that the globally optimal DP algorithm can quickly reduce training error to near 0 with order 100 pieces. The GM algorithm, a relaxation of the DP algorithm, requires slightly more pieces but significantly less computational power. On the other hand, all variants of GD (vanilla, Adam, SGD w/ BatchNorm) require far more pieces to reduce error below a target threshold, and may not even monotonically decrease error with the number of pieces. Interestingly, we observe a strict ordering of optimization quality, with Adam outperforming BatchNorm SGD outperforming vanilla GD. These results (Figure 1) show how inefficient GD is with respect to (functional) parameters, requiring orders of magnitude more for performance similar to exact or approximate PWL fitting algorithms.

Learned expressivity is not exponential in depth. In the previous experiment, we counted the number of linear pieces in the CPWL approximation as the number of parameters, rather than the number of weights. Empirically, we know that the greatest successes have come from deep learning. This raises the question: how does the depth of a network affect its expressivity (as measured in the number of pieces)? Theoretically, it is well known that maximum expressivity increases exponentially with depth, which, in a deep ReLU neural network, means an exponential increase in the number of linear pieces in the CPWL approximation. Thus, theoretically, the main power of depth is that it allows for more powerful function approximation relative to a fixed budget of parameters compared to a shallow network. However, recent work has called this into question, finding that in realistic networks expressivity does not scale exponentially with depth. We perform a similar experiment here, asking how the number of pieces in the CPWL function approximation of a deep ReLU network varies with depth. The results in Table 2 (above) clearly show that the number of pieces does not scale exponentially with depth.
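Counting pieces is itself straightforward for univariate nets; the sketch below (ours) estimates the piece count of a deep ReLU network by counting slope changes on a dense grid. The grid size, tolerance, and random weights are illustrative assumptions, so the count is an estimate rather than an exact enumeration.

```python
import numpy as np

def count_pieces(f, x_min=-5.0, x_max=5.0, grid=200_000, tol=1e-8):
    """Estimate the number of linear pieces of a univariate CPWL
    function by counting slope changes on a dense grid."""
    x = np.linspace(x_min, x_max, grid)
    slopes = np.diff(f(x)) / np.diff(x)
    changes = np.abs(np.diff(slopes)) > tol
    return changes.sum() + 1                    # pieces = breakpoints + 1

def deep_relu(params):
    """Forward pass of a deep univariate ReLU net given [(W, b), ...]."""
    def f(x):
        a = x[:, None]
        for W, b in params[:-1]:
            a = np.maximum(a @ W + b, 0.0)
        W, b = params[-1]
        return (a @ W + b).ravel()
    return f

rng = np.random.default_rng(0)
widths = [1, 32, 32, 1]
params = [(rng.normal(0, np.sqrt(2.0 / m), (m, n)), rng.normal(0, 0.1, n))
          for m, n in zip(widths[:-1], widths[1:])]
print(count_pieces(deep_relu(params)))          # roughly scales with #units
```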
In fact, we find that depth only has a weak effect overall, although more study is needed to determine exactly what effect depth has on the number and variability of pieces. These results lend more support to the recent findings that realistic networks do not realize their exponential expressive capacity. From the functional approximation view, we know that a unit induces breakpoints only when the ReLU function applied to the unit's input has zero crossings. In layer one, this happens exactly once per unit, as the input to each ReLU is just a line over the input space. In deeper layers, the function approximation is learned, allowing for a varying number of new breakpoints. Given our previous results on the flatness of the standard initializations, this will generally only happen once per unit, implying that the number of pieces will strongly correlate with the number of units at initialization. Depth helps with Optimization by enabling the Creation, Annihilation and Mobility of Breakpoints. If depth does not strongly increase expressivity, then it is natural to ask whether its value lies with optimization. In order to test this, we examine how the CPWL function approximation develops in each layer during learning, and how it depends on the target function. A good fit requires that breakpoints accumulate at areas of higher curvature in the training data, as these regions require more pieces. We argue that the deeper layers of a network help with this optimization procedure, allowing the breakpoints more mobility as well as the power to create and annihilate breakpoints. One key difference between the deeper layers of a network and the first layer is the ability of a single unit to induce multiple breakpoints. As these units' inputs change during learning, the number of breakpoints induced by deeper units in a network can vary, allowing for another degree of freedom for the network to optimize. Through the functional parameterization of the hidden layers, these "births and deaths" of breakpoints can be tracked as changes in the number of breakpoints induced per layer. Another possible explanation for the value added by depth is breakpoint mobility: breakpoints in deeper layers can move more than those in shallow layers. We run experiments comparing how the velocity and number of induced breakpoints vary between layers of a deeper network (a sketch of how such per-layer breakpoint statistics can be measured follows this passage). Figure 2 (total changes in the number of breakpoints induced, and average velocity of breakpoints relative to the first layer, in each layer of a five-layer ReLU network) shows the results. The number of breakpoints in deeper layers changes more often than in shallow layers. The breakpoint velocity in deeper layers is also higher than in the first layer, although not monotonically increasing. Both of these provide support for the idea that later layers help significantly with optimization and breakpoint placement, even if they do not help as strongly with expressivity. Note that breakpoints induced by a layer of the network are present in the basis functions of all deeper layers; their functional approximations thus become more complex with depth. However, the roughness of the basis functions at initialization in the deeper layers is lower than that of the shallow layers. But, as the network learns, for complex functions most of the roughness is in the later layers, as seen in Figure 3 (right).
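As referenced above, per-layer breakpoint counts can be tracked during training by exploiting the fact that a unit in layer l induces a breakpoint wherever its pre-activation crosses zero as a function of the scalar input. The following PyTorch sketch assumes the network is given as a list of nn.Linear layers with ReLU activations; names are illustrative.

```python
import torch

def breakpoints_per_layer(linear_layers, x_min=-3.0, x_max=3.0, n_grid=50_000):
    """Count breakpoints induced by each layer of a 1D fully connected ReLU
    net: zero crossings of each unit's pre-activation over the input range."""
    x = torch.linspace(x_min, x_max, n_grid).unsqueeze(1)
    counts, h = [], x
    with torch.no_grad():
        for layer in linear_layers:
            z = layer(h)                                  # pre-activations, shape (n_grid, width)
            sign_flips = (z[:-1] * z[1:] < 0).sum(dim=0)  # zero crossings per unit
            counts.append(int(sign_flips.sum()))
            h = torch.relu(z)                             # feed forward to the next layer
    return counts
```

Calling this every few epochs and differencing the counts gives the "births and deaths" signal; tracking the zero-crossing locations themselves gives breakpoint velocities.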
Flat Initialization Drives Implicit Regularization. Prior work has argued that the generalization of overparameterized networks comes from an implicit regularization inherent in the optimization algorithm itself (i.e. SGD). In contrast, for the case of shallow and deep univariate fully connected ReLU nets, we provide causal evidence that it is due to the specific, very flat CPWL initialization induced by common initialization methods. In order to test this in both shallow and deep ReLU networks, we compare training with the standard flat initialization to a 'spiky' initialization. For a shallow ReLU network, we can construct a 'spiky' initialization by exactly solving for network parameters that generate a given arbitrary CPWL function. This network initialization is then compared against a standard initialization, and trained against a smooth function with a small number of training data points. Note that in a 1D input space we need a small number of training data points to create a situation similar to the sparsity caused by high-dimensional input, and to allow for testing generalization between data points. We find that both networks fit the training data near perfectly, reaching a global minimum of the training loss, but that the 'spiky' initialization has much worse generalization error (Table 3). Visually, we find that the initial 'spiky' features of the starting-point CPWL representation are preserved in the final approximation of the smooth target function (Figures 4 and 6). For a deep ReLU network, it is more difficult to exactly solve for a 'spiky' initialization. Instead, we train a network to approximate an arbitrary CPWL function, and call those trained network parameters the 'spiky' initialization. Once again, the 'spiky' initialization has near-identical training performance, hitting all data points, but has noticeably worse generalization performance. (Figure 4: 'Spiky' (orange) and standard initialization (blue), compared before (left) and after (right) training. Note both cases had similar, very low training set error.) It appears that generalization performance is not automatically guaranteed by GD, but is instead due to the flat initializations, which are then preserved by GD. 'Spiky' initializations also have their (higher) curvature preserved by GD. This idea makes sense, as generalization depends on our target function smoothly varying, and a smooth approximation is promoted by a smooth initialization. Roughness Depends on Width and Initialization Variance. Our last experiment examines how smoothness (roughness) depends on the number of units, particularly in the case where there are large gaps in the training data. We use a continuous and a discontinuous target function (shown in Figure 8). We trained shallow ReLU networks with varying width H and initial weight variance σ_w on these training data until convergence, and measured the total roughness of the resulting CPWL approximation in the data gaps. (Figure 5: Roughness vs. width (left) and the variance of the initialization (right) for both data gap cases shown in Figure 8. Each data point is the result of averaging over 4 trials trained to convergence.) Figure 5 shows that roughness in the data gaps decreases with width and increases with initial weight variance, confirming our theory. A spiky (and thus rougher) initialization leads to increased roughness at convergence as well, lending support to the idea that roughness in data gaps can be 'remembered' from initialization. On the other hand, a higher number of pieces spreads the curvature work over more units, leading to smaller overall roughness.
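The roughness measurements above can be approximated numerically: on a CPWL function, the total roughness in a region can be taken as the summed magnitude of slope changes (delta-slopes) there. A small sketch, with hypothetical names and an assumed grid resolution:

```python
import numpy as np

def gap_roughness(f, gap_lo, gap_hi, n_grid=20_000):
    """Total roughness of a scalar CPWL function f inside a data gap
    [gap_lo, gap_hi]: the summed magnitude of slope changes (delta-slopes)."""
    x = np.linspace(gap_lo, gap_hi, n_grid)
    y = np.array([float(f(xi)) for xi in x])
    slopes = np.diff(y) / np.diff(x)
    return float(np.sum(np.abs(np.diff(slopes))))
```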
Taken together, our experiments indicate that smooth, flat initialization is partly (if not wholly) responsible for the phenomenon of implicit regularization in univariate fully connected ReLU nets, and that increasing overparameterization leads to even better generalization. Conclusions. We show in this paper that examining deep networks through the lens of function space can enable new theoretical and practical insights. We have several interesting findings: the value of depth in deep nets seems to be less about expressivity and more about learnability, enabling GD to find better quality solutions. The functional view also highlights the importance of initialization: a smooth initial approximation seems to encourage a smoother final solution, improving generalization. Fortunately, existing initializations used in practice start with smooth initial approximations, with smoothness increasing with depth. Analyzing the loss surface for a ReLU net in function space gives us a surprisingly simple and transparent view of the phenomenon of overparameterization: it makes clear that increasing width relative to training data size leads w.h.p. to lonely partitions of the data, which are global minima. Function space shows us that the mysterious phenomenon of implicit regularization may arise due to a hidden 2nd-order differential equation that underlies the ReLU parameterization. In addition, this functional lens suggests new tools, architectures and algorithms. Can we develop tools to help understand how these CPWL functions change across layers or during training? Finally, our analysis shows that bad solutions are often due to breakpoints getting trapped in bad local minima: can we design new learning algorithms that make global moves in the BDSO parameterization in order to avoid these local minima? A EXPERIMENTAL DETAILS (Figure 6: 'Spiky' (orange) and standard initialization (blue), compared before training (left) and post-training (right) using a deep network.) Trained on a deep, 5-layer network, with 4 hidden layers of width 8, on a target function over the interval [-2,2]. Learning rate = 1e-4, trained via GD over 10,000 epochs, with roughness measured every 50 epochs. Roughness per layer was summed over all units within that layer. The shallow version was trained on a 21-unit FC ReLU network; the deep version on a deep, 5-layer network with 4 hidden layers of width 8. In both cases, the 'spiky' initialization was a 20-breakpoint CPWL function, with y_n ∼ Uniform([−2, 2]). In the deep case, the spiky model was initialized with the same weights as the non-spiky model, and then pre-trained for 10,000 epochs to fit the CPWL. After that, gradient descent training proceeded on both models for 20,000 epochs, with all training having learning rate 1e-4. Training data was 20 random points in the range [-2,2], while the testing data (used to measure generalization) was spaced uniformly at every ∆x = .01 of the target interval of the target function. In the shallow case, there was no pre-training, as the 'spiky' model was directly set to be equal to the CPWL. In the shallow model, training occurred for 20,000 epochs. All experiments were run over 5 trials, and values in the table are reported as mean ± standard deviation. The base shallow learning rate was 1e-4 using gradient descent, with the learning rate divided by 5 for the spiky case, due to the initialization method generating larger weights.
Despite differing learning rates, both models had similar training loss curves and similar final training loss values; e.g. for sine, the final training loss was 0.94 for spiky and 1.02 for standard. Functions used were sin(x), arctan(x), a sawtooth function on [-2,2] with a minimum value of -1 at the endpoints and 4 peaks of maximum value 1, a cubic, and exp(0.5x). Note GD was chosen due to the strong theoretical focus of this paper; similar results were obtained using the ADAM optimizer, in which case no differing learning rates were necessary. We used networks with a total of H = 40 hidden units, spread over L ∈ {1, 2, 3, 4, 5} hidden layers. Training data consisted of uniform samples of the target function over the interval x ∈ [−3, 3]. Learning rate = 5·10^−5, trained via GD over 25,000 epochs. The target functions tested were sin(πx), a 5-piece polynomial with maximum value of 2 in the domain [−3, 3], a sawtooth with period 3 and amplitude 1, arctan(x), exp(x), and (1/9)x². Each value in the table was the average of 5 trials. We use a deep, 6-layer network, with 5 hidden layers of width 8. Training data consists of the 'smooth' and 'sharp' functions over the interval x ∈ [−3, 3]. Learning rate = 5e-5, trained via GD until convergence, where convergence was defined as when the loss between two epochs changed by less than 10^−8. Breakpoints were calculated every 50 epochs. The velocity of breakpoints was then calculated, and the values seen in the figure are normalized to the velocity of the first layer. Various function classes were trained until convergence on a depth 1 or 4 ReLU network, with 500 total units distributed evenly across layers. Initial and final breakpoint distributions were measured using a kernel density estimate, and compared with the underlying curvature (absolute value of the 2nd derivative) of the ground-truth function. The cubic spline was a cubic spline fit to a small number of arbitrary data points. Table 4 (top: correlation of the BP distribution before and after training for depth 1 and 4 networks across function classes; bottom: change in correlation over training) shows that the breakpoint densities moved over training to become more correlated with the underlying curvature of the ground-truth function. This effect was more pronounced with depth. In certain very simple functions (e.g. x² or exp(x), not shown), a failure case emerged where there was no real change in correlation over training. Diagnostics appeared to show this was due to the function being so simple as to train almost instantaneously in our overparameterized network, meaning breakpoints had no time to move. Figure 9 shows what happens to the breakpoint densities over training: in the shallow case, they are more constrained by the initial condition, and continue to have a higher density near the origin even when not necessary or appropriate. Each neuron of the second hidden layer receives as input the output of a CPWL function z_i(x) as defined above. The output of this function is then fed through a ReLU, which has two implications: first, every zero crossing of z_i is a breakpoint of x_i; second, any breakpoint β_j at which z_i(β_j) < 0 will not be a breakpoint of x_i. Importantly, the number of breakpoints in g_θ(x) is now a function of the parameters θ, rather than equal to the fixed H as in the L = 1 case; in other words, breakpoints can be dynamically created and annihilated throughout training.
This fact will have dramatic implications when we explore how gradient descent optimizes breakpoints in order to model curvature in the training data (see Section 3). But first, due to the complexities of depth, we must carefully formalize the notion of a breakpoint for a deep network. Let a_π(x) denote the product of the activations a_i along a path π. Then β_i is active iff there exists some path π such that a_π is discontinuous at β_i. This gives us the first equation, as desired. Let the subscripts p, q denote the parameters sorted by β_p value. In this setting, let β_0 ≜ −∞ and β_{H+1} ≜ ∞. This gives us the second equation, as desired. Lemma 1. Suppose (b_i, w_i, v_i) are initialized independently with densities f_B(b_i), f_W(w_i), and f_V(v_i). Then the density of (β_i, µ_i) can be derived by considering the invertible continuous transformation that maps (b_i, w_i, v_i) to (β_i, µ_i, u), where J is the Jacobian determinant of g^{−1}. We have J = −sgn(w_i) and |J| = 1. The density of (β_i, µ_i) is then obtained by integrating out the dummy variable u; since (b_i, w_i, v_i) are independent, the integrand expands into a product of the three initial densities. Theorem 1(a). Consider a fully connected ReLU neural net with scalar input and output, and a single hidden layer of width H. Let the weights and biases be initialized randomly according to a zero-mean Gaussian or Uniform distribution. Then the joint density under an independent Gaussian initialization follows. Proof. Starting with Lemma 1, the integrand is even in µ, giving the stated form. Corollary 1(a). Consider the same setting as Theorem 1. In the case of an independent Gaussian initialization, the marginals can be expressed in terms of the Meijer G-function G^{m,n}_{p,q}(·|·) and the modified Bessel function of the second kind K_ν(·). Proof. Marginalizing out µ from the joint density in Sympy returns the desired f_β(β). Sympy cannot compute the other marginal, so we verify it by hand: using a standard table integral (Eq. 3.462.20), applied with a = |µ|/(σ_v σ_w) and b = σ_b, we obtain the marginal on µ. We can then use these densities to derive the conditional. Theorem 1(b). Consider a fully connected ReLU neural net with scalar input and output, and a single hidden layer of width H. Let the weights and biases be initialized randomly according to a zero-mean Uniform distribution. Then the joint density under an independent Uniform initialization follows. Proof. Starting with Lemma 1. Corollary 1(b). Consider the same setting as Theorem 1. In the case of an independent Uniform initialization, the marginal on µ_i is Tri(·; a), the symmetric triangular distribution with base [−a, a] and mode 0. Proof. Begin with the marginal of β_i. Remarks. Note that the marginal distribution on µ_i is the distribution of a product of two independent random variables, and the marginal distribution on β_i is the distribution of the ratio of two random variables. For the Gaussian case, the marginal distribution on µ_i is a symmetric distribution with variance σ_v²σ_w². Proof (gradient flow dynamics). Computing the time derivatives of the BDSO parameters and using the gradients of the loss with respect to the NN parameters gives the stated dynamics. This completes the proof.
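Theorem 1(a) can be sanity-checked numerically: under an independent zero-mean Gaussian initialization, β_i = −b_i/w_i is a ratio of independent zero-mean Gaussians, which classically follows a Cauchy distribution with scale σ_b/σ_w. The sketch below is our own illustrative check, not code from the paper; it compares the empirical histogram of sampled breakpoints against that closed form.

```python
import numpy as np

def empirical_vs_cauchy(sigma_b=1.0, sigma_w=1.0, n=1_000_000):
    """Compare the empirical breakpoint density at initialization against
    the Cauchy(0, sigma_b / sigma_w) density implied by beta = -b / w."""
    rng = np.random.default_rng(0)
    beta = -rng.normal(0, sigma_b, n) / rng.normal(0, sigma_w, n)
    edges = np.linspace(-5, 5, 51)
    empirical, _ = np.histogram(beta, bins=edges, density=True)
    mid = 0.5 * (edges[:-1] + edges[1:])
    s = sigma_b / sigma_w
    cauchy = s / (np.pi * (s**2 + mid**2))   # closed-form Cauchy density
    return empirical, cauchy
```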
A functional approach reveals that flat initialization, preserved by gradient descent, leads to generalization ability.
482
scitldr
It is well-known that deeper neural networks are harder to train than shallower ones. In this short paper, we use the (full) eigenvalue spectrum of the Hessian to explore how the loss landscape changes as the network gets deeper, and as residual connections are added to the architecture. Computing a series of quantitative measures on the Hessian spectrum, we show that the Hessian eigenvalue distribution in deeper networks has substantially heavier tails (equivalently, more outlier eigenvalues), which makes the network harder to optimize with first-order methods. We show that adding residual connections mitigates this effect substantially, suggesting a mechanism by which residual connections improve training. Practical experience in deep learning suggests that the increased capacity that comes with deeper models can significantly improve their predictive performance. It has also been observed that as the network becomes deeper, training becomes harder. In convolutional neural networks (CNNs), residual connections BID5 are used to alleviate this problem. Various explanations are provided for this phenomenon: BID6 suggests that residual connections reduce the flatness of the landscape, whereas BID3 questions this premise, noting that the extremal eigenvalues of the loss Hessian are much larger when residual connections are present: large Hessian eigenvalues indicate that the curvature of the loss is much sharper, and less flat. In a different line of work, BID0 observes that the gradients with respect to inputs in deeper networks decorrelate with depth, and suggest that residual connections reduce the'shattering' of the gradients. In this paper, we explore the interaction between depth and the loss geometry. We first establish that gradient explosion or vanishing is not responsible for the slowing down of training, as is commonly believed. Searching for an alternative explanation, we study the Hessian eigenvalue density (using the tools introduced in BID3 to obtain estimates of the eigenvalue histogram or density). The classical theory of strongly convex optimization tells us that optimization is slow when the spectrum simultaneously contains very small and very large eigenvalues (i.e., optimization rate is dependent on κ = λ max /λ min). Following this intuition, we focus on examining the relative spread of the Hessian eigenvalues. In particular, we quantify the extent of the large outliers by computing some scale-invariant classical statistics of the Hessian eigenvalues, namely the skewness and kurtosis. Finally, we observe that in comparable models with residual connections, these magnitude of these outliers is substantially mitigated. In BID3, it is hypothesised that batch normalization suppresses large outlier eigenvalues, thereby speeding up training; in this paper, we present evidence that residual connections speed up training through essentially the same channel. Throughout, the dataset of interest is CIFAR-10; we describe the specific model architectures used in Appendix A. It is well-known that deeper CNNs are harder to train than shallower ones. We exhibit training loss curves depicting this for both residual and non-residual (we refer to these as simple) CNNs in Appendix B, at various network depths (20 and 80). 
The most prevalent explanation for why very deep networks are hard to train is that the gradient explodes or vanishes as the number of layers increases BID4; this explanation has been infrequently challenged (Section 4.1 in BID5), but no definitive experiments have been shown. We study this hypothesis in FIG0, where we compare the gradient norms of depth-80 residual and non-residual networks. Two things become clear from this plot. Firstly, there is no exponential increase or decrease in gradient norms (otherwise we would see vastly different gradient norm scales), as hypothesised in gradient explosion explanations. Secondly, residual connections do not consistently increase or decrease the gradient norms. In FIG0, 49.4% of variables have lower gradient norm in residual networks (in comparison to a baseline of non-residual networks), making the exploding/vanishing gradient explanation untenable in this case. Let H ∈ R^{n×n} be the Hessian of the training loss function with respect to the parameters of the model:

H = ∇²_θ L(θ),

where θ ∈ R^n is the parameter vector and L(θ) is the training loss. The Hessian is a measure of (local) loss curvature. Since, in the convex setting, optimization characteristics are largely determined by the loss curvature, we expect to be able to observe the factors slowing down training by analyzing the Hessian spectrum along the optimization trajectory. Let λ_1 ≥ λ_2 ≥ ··· ≥ λ_n be the eigenvalues of the Hessian. The theory of convex optimization suggests that first-order methods such as SGD slow down dramatically when the relative differences among the eigenvalues of the loss Hessian are large; in particular, results from convex analysis suggest that as |λ_i/λ_1| becomes smaller, optimization in the direction of the eigenvectors associated with λ_i slows down BID1 BID2. Following this intuition, when the distribution of the eigenvalues of H has heavy tails (equivalently, large outliers), we expect the network to train slowly, as there will be many eigenvalues where |λ_i/λ_1| is small. FIG1 shows the (smoothed) density of the eigenvalues of the Hessian for a series of simple CNNs with increasing depth. This figure shows two prominent features of the loss Hessian: 1. Most of the eigenvalues of the Hessian are concentrated near zero. This means that the loss surface is relatively flat, in agreement with BID8 BID3 and others. 2. As the network gets deeper, outliers appear in the spectrum of H. Moreover, the magnitude of these outliers increases with the depth of the network. This means that as the network becomes deeper, |λ_i/λ_1| shrinks for almost all of the directions, making training challenging. To quantify the magnitude and extent of these outlier eigenvalues, we compute some scale-invariant classical statistics of the Hessian eigenvalues. We are primarily interested in the skewness and kurtosis, defined as

skewness = E[(λ − µ)³]/σ³,  kurtosis = E[(λ − µ)⁴]/σ⁴,

where µ and σ are the mean and standard deviation of the eigenvalue distribution. The skewness of a distribution measures its asymmetry, and the kurtosis measures how heavy (or non-Gaussian) the tails are; a heavy-tailed distribution has a kurtosis greater than 3. In our case, we compute these statistics on the Hessian eigenvalues by observing that, for v ∼ N(0, I_n),

E[λ^k] = (1/n) tr(H^k) = (1/n) E_v[vᵀ H^k v].

Due to the rapid concentration of the quadratic form in high dimensions (for concrete bounds, see BID7), we expect extremely accurate approximation of E[λ^k] using a few i.i.d. samples of the form vᵀH^k v. Both skewness and kurtosis should dramatically increase as the tails of the eigenvalue density become heavier.
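A sketch of the quadratic-form estimator described above, using PyTorch Hessian-vector products so the Hessian is never formed explicitly. The function name and sample count are illustrative; the loss is assumed to be built with a graph that supports double backpropagation.

```python
import torch

def hessian_moment(loss, params, k, n_samples=10):
    """Monte-Carlo estimate of (1/n) * tr(H^k) = E[lambda^k] for the loss
    Hessian, using only Hessian-vector products."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    g = torch.cat([t.reshape(-1) for t in grads])
    n = g.numel()
    estimate = 0.0
    for _ in range(n_samples):
        v = torch.randn(n, device=g.device)
        hv = v
        for _ in range(k):                   # hv <- H @ hv, applied k times
            hv_parts = torch.autograd.grad(g @ hv, params, retain_graph=True)
            hv = torch.cat([t.reshape(-1) for t in hv_parts])
        estimate += (v @ hv).item() / n      # v^T H^k v approximates tr(H^k)
    return estimate / n_samples
```

From the first few moments, the skewness and kurtosis follow by the standard central-moment formulas.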
Figure 3 shows what happens to these metrics as we increase the depth: both the skewness and kurtosis increase dramatically as we increase the depth of the model. In particular, note that the kurtosis is far from Gaussian: the distribution of eigenvalues is extremely heavy-tailed. Given that residual connections allow us to train much deeper models, we would predict that the addition of residual connections should prevent the largest eigenvalues from being so extreme. We compare the spectrum of the Hessian for residual networks and their corresponding simple networks (both networks are identical save for the residual connections). We can see that adding residual connections substantially reduces the extent of the outliers in the Hessian spectrum. More quantitatively, in Figure 3, we can see that models with residual connections have substantially lower skewness and kurtosis than models without residual connections. The effects are substantial: a 90-layer model with residual connections has lower skewness and kurtosis than a non-residual model half its size. In this paper, we have presented qualitative and quantitative evidence that depth increases outlier eigenvalues in the Hessian, and that residual connections mitigate this. We believe that this touches upon some of the fundamental dynamics of optimizing neural networks, and that any theoretical explanation of residual connections needs to explain this. Behrooz Ghorbani was supported by grants NSF-DMS 1418362 and NSF-DMS 1407813. For the purposes of this short exposition, we adopt a class of networks for our study. We consider standard residual networks trained on CIFAR-10. These networks have 6n layers of feature maps of sizes {32 × 32, 16 × 16, 8 × 8} (2n layers for each type) with {16, 32, 64} filters per layer. With the addition of the input convolution layer and the final fully-connected layer, this type of network has 6n + 2 layers. Batch Normalization is also present in these networks. In our experiments, when we don't include residual connections, we refer to the network as 'simple-(6n + 2)'; when residual connections are included, the network is referred to as ResNet-(6n + 2). We use SGD with momentum with the same learning rate schedule to train both of these networks for 100k steps. B Deeper CNNs are harder to train; skip connections help more at depth. We observe that for small n, both simple and ResNet networks train well (FIG6). As the number of layers increases, training the simple model becomes slower. Note, however, that as we increase the depth, the residual
Network depth increases outlier eigenvalues in the Hessian. Residual connections mitigate this.
483
scitldr
In the context of optimization, a gradient of a neural network indicates the amount a specific weight should change with respect to the loss. Therefore, small gradients indicate a good value of the weight that requires no change and can be kept frozen during training. This paper provides an experimental study on the importance of a neural network's weights, and to which extent they need to be updated. We wish to show that, starting from the third epoch, freezing weights which have no informative gradient and are less likely to be changed during training results in a very slight drop in the overall accuracy (and sometimes in better accuracy). We experiment on the MNIST, CIFAR10 and Flickr8k datasets using several architectures (VGG19, ResNet-110 and DenseNet-121). On CIFAR10, we show that freezing 80% of the VGG19 network parameters from the third epoch onwards results in a 0.24% drop in accuracy, while freezing 50% of ResNet-110 parameters results in a 0.9% drop in accuracy and freezing 70% of DenseNet-121 parameters results in a 0.57% drop in accuracy. Furthermore, to experiment with real-life applications, we train an image captioning model with an attention mechanism on the Flickr8k dataset using LSTM networks, freezing 60% of the parameters from the third epoch onwards, resulting in a better BLEU-4 score than the fully trained model. Our source code can be found in the appendix. The immense success of deep neural networks we are witnessing since the deep learning revolution occurred is surprising. A large variety of vision and language applications, ranging from image classification, object detection, image synthesis and image super-resolution to image captioning and language modeling, has proved that neural networks possess a powerful capability of learning very complex data. However, training these networks to perform as expected is very time-consuming and requires powerful graphical processing units (GPUs). A recently published open-source project by NVIDIA claimed that training a generative adversarial network (GAN) took more than 6 days on 8 Tesla V100 GPUs. However, we argue that a lot of parameters involved during training are important for update only in the first few epochs (in our experiments, the first two epochs only), and can be frozen for the rest of the training epochs. The backpropagation algorithm is the base algorithm used to optimize deep neural networks. For each weight, a gradient is computed with respect to the loss, which indicates the amount a weight should change. Large gradients correspond to a large change that will occur in the weight, while small ones (near zero) indicate that the weight is nearly optimized and does not need much change. In particular, if a gradient for a particular weight is zero or close to zero, this means that it has either reached its optimal solution or it is stuck at a saddle point. The former means that the weight has a good value and is less likely to change throughout the training and can be kept frozen. In this paper, we wish to show the redundancy of weights in a neural network that have no influence and can be kept frozen during training. In particular, we demonstrate that fully training a model with all its weights is required for the first two epochs only. To justify this, we propose an experimental technique named Partial Backpropagation, which freezes weights that have gradients very near to zero and are less likely to change, with the rest of the weights trained normally. This induces a very slight drop in accuracy (and no harm in accuracy for lesser freezing).
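The gradient-magnitude inspection that motivates this idea is easy to reproduce. A minimal PyTorch sketch (function name is ours) that collects per-layer histograms of gradient magnitudes after a backward pass, so one can see how much mass sits near zero:

```python
import torch

def gradient_histograms(model, loss, bins=100):
    """Collect per-parameter histograms of gradient magnitudes after a
    backward pass, to inspect how many weights have near-zero gradients."""
    loss.backward()
    hists = {}
    for name, p in model.named_parameters():
        if p.grad is None:
            continue
        g = p.grad.detach().abs().flatten()
        hists[name] = torch.histc(g, bins=bins, min=0.0, max=float(g.max()))
    return hists
```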
An overview of our experimental technique is shown in Figure 1. Note that in Figure 1(b), the red weights are frozen and not removed or zeroed out. We can further visualize the histogram of gradients across the network layers to have a better understanding of their distributions. In Figure 2, we visualize the distribution of gradients from several layers in a VGG19 convolutional network. In particular, we visualize the gradients of layers 3, 7, 10 and 13 after training for 2 epochs. We can see a large number of gradients with values very near to zero, suggesting that a lot of weights in these layers have already been optimized and are less likely to change throughout the training. We discuss two topics closely (but not exactly) related to our work, namely weight pruning and progressive freezing of hidden layers. The reduction of the heavy inference cost of deep networks in low-resource settings is the primary purpose of weight pruning. Neural networks usually contain a large number of parameters that make them hard to bring into effective action on embedded devices such as mobile phones. This hardship is due to both the network size and the evaluation time. A significant number of redundant parameters can be removed, resulting in a compressed network in which unnecessary computation is alleviated while the same accuracy is maintained. In short, the main objective of weight pruning techniques is the reduction of the energy and storage needed to process inference on deep neural networks, so that they can be deployed on embedded devices. One line of work learns only the significant connections to reduce storage: a neural network is first trained to learn the significant connections, after which the insignificant connections are pruned out and the remaining weights are fine-tuned. In particular, after the initial training phase, all weights lower than a certain threshold are eliminated. This results in a sparse network which is then re-trained so the remaining weights can compensate for the removed ones. The phases of pruning and retraining form an iterative process; after many iterations, the minimum number of weights can be found. In another approach, two neurons with highly correlated activations are merged into one instead of removing a neuron, which keeps the signals inside the network as close as possible to the original ones. If the most correlated neurons are not fully correlated, merging them into one neuron could possibly alter the accuracy. A related work proposed a dense-sparse-dense training flow. Other works focus on pruning convolutional filters for efficient inference: one proposes a new criterion based on a Taylor expansion which approximates the change in the cost function induced by pruning network parameters; similarly, another proposes an acceleration method for convolutional networks where filters which have a small influence on the output accuracy are pruned. By pruning whole filters in the network together with their corresponding feature maps, the computation cost is greatly reduced. Moreover, one method enforces channel-level sparsity in the network to reduce the model size of a CNN, which decreases the run-time memory and lowers the number of computing operations with no harm to accuracy. Another work presented a systematic weight pruning framework for DNNs using the alternating direction method of multipliers (ADMM). A recently proposed approach, slimmable neural networks, presents a simple and general method to train a single neural network which is capable of being executed at different widths.
Rather than training an individual network for each width configuration, a shared network with switchable batch normalization can be trained; during runtime, the network can change its width on the fly according to the resource limitations. Other works also address compressing deep neural networks. An important aspect of weight pruning models is that the deep neural networks are mainly trained with the objective of reducing computation at testing time for efficient implementation on hardware systems; this is an iterative training process that requires many procedures and algorithms during training, which in turn increases the training time. In contrast, in our work we wish to demonstrate the number of redundant parameters which, if frozen and not zeroed out, can have a very slight (or no) influence on the overall accuracy. Since the two objectives are different, we cannot directly compare weight pruning methods with our experimental technique. In FreezeOut, the authors proposed training each layer for a set portion of the training schedule, progressively "freezing out" layers and excluding them from the backward pass. This technique is motivated by the fact that in many deep networks, initial layers consume most of the cost but have the fewest parameters and tend to converge to reasonably simple structures, implying that they do not require as much update as the later layers where most of the parameters are populated. This concept aims to reduce training time; however, it works by freezing complete layers which may contain informative gradients, so the corresponding weights avoid getting updated. The study reports training time improvements with loss/no loss in accuracy, but does not report the number of parameters frozen in the overall network. Moreover, the study "progressively" freezes some layers in the network, implying that important weights at specific times in the training schedule still experience an update. Therefore, it is also hard to directly compare FreezeOut with our experimental technique. As mentioned in Section 1, weights with gradients very near to zero imply that the particular weight is nearly optimized and is less likely to change throughout the training, and can therefore be kept frozen. Our experimental technique freezes weights that have gradients very near to zero from the third epoch onwards, while training the rest of the weights normally. All weights with gradients below a particular threshold are frozen. In this section, we first describe how we calculate the gradient threshold and then elaborate on the proposed experimental technique. We start by extracting the non-zero gradients of the neural network: during training, a small fraction of weights may possess exactly zero gradients, and these are not considered when performing thresholding. Given the gradients of each layer l_i, where i indicates the layer number, we form the set G_i consisting of all the gradients in layer i. For convenience, we divide each gradient by the corresponding learning rate α to obtain the original gradient (non-scaled by the learning rate), and take its absolute value |dE(y)/dw|. This is because we are interested in the magnitude of the gradient, and not its direction. We then compute the kernel density estimate (KDE) of G,

f̂(x) = (1/nh) Σ_{i=1}^{n} K((x − x_i)/h),

where K is the kernel function and h is the kernel bandwidth. From f̂(x), we find the region which has a high likelihood of gradient observations, obtaining the new range of gradients G_n.
The new range G_n is then divided into b sub-ranges and the mean of each sub-range is computed to form G_d, where each gradient in G_n is assigned to its corresponding mean value. We set b to 300. We then apply Otsu's thresholding technique to find the optimal threshold value, operating on the mean values in G_d. The within-class variance is calculated as

σ²_w(t) = ω_0(t)σ²_0(t) + ω_1(t)σ²_1(t),

where t is the mean value representing the candidate threshold at each of the 300 steps, ω_0 and ω_1 are the fractions of gradients below and above t, and σ²_0, σ²_1 are the variances of the two resulting classes. After calculating the within-class variance of all values in G_d, we select the mean value from G_d with the least within-class variance as our final threshold value. We first start by training the network with full backpropagation for 2 epochs. It is necessary that the network first understands the data it is dealing with and performs weight updates. After the second epoch, the network is considered to have known which weights are optimized and which weights need further optimization. We then perform full backpropagation on the first batch of the third epoch; however, before updating the weights, we perform the gradient mask generation. Given the Jacobian J of the loss F with respect to the weights at the first batch of the third epoch (J collects the gradients dE(y)/dw of every weight), we divide each gradient by the corresponding learning rate α to obtain the original non-scaled gradient, and take its absolute value |dE(y)/dw|, since we are interested in the magnitude of the gradient and not its direction. Ω is the number of epochs to train with full backpropagation at the beginning, before starting partial backpropagation. As discussed earlier, we set Ω = 2. For each layer i = 1, ..., n, we create a mask m_i that has the same shape as the gradient matrix J. We then set the corresponding mask value for each gradient in layer i to 0 if the gradient is smaller than the threshold t, and 1 otherwise. Thus, we have a list of masks, one per layer: m = (m_1, m_2, ..., m_n), where n is the number of layers. We then perform weight updates only on the weights corresponding to an entry of 1 in the corresponding mask m_i. For the rest of the batches in the third epoch and all the remaining epochs, we use the same mask m. Gradients and weight updates are only calculated for entries that correspond to 1 in the mask; if a weight corresponds to an entry of 0 in the mask, its gradient is set to 0 and thus no weight update will happen to it (frozen weight). Therefore, the mask is a one-time calculation that depends on one batch in the third epoch, which makes it critical to have a reasonable batch size that contains all possible non-linear data combinations. In our experiments, we set the batch size to 100. Note that generating a mask requires one full iteration (one batch) of backpropagation. We have also experimented with changing the mask every several epochs (changing the mask every 35 epochs), and found that it leads to the same performance as using the same mask for all epochs. Moreover, we experimented with performing full backpropagation in the last epoch in order to fine-tune the network so that the frozen weights could possibly change if needed. However, we found that the accuracy stays the same, implying that the frozen weights with the least gradients are already optimized from the first two epochs, which validates our methodology. A condensed sketch of the thresholding and masking steps follows.
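This sketch shows an Otsu-style threshold over binned gradient magnitudes (the KDE range-selection step is omitted for brevity), followed by per-parameter masks registered as gradient hooks so frozen entries receive zero gradient from then on. All names are illustrative assumptions, not the paper's released code.

```python
import numpy as np
import torch

def otsu_threshold(abs_grads, n_bins=300):
    """Pick the candidate threshold minimizing within-class variance
    over the non-zero gradient magnitudes."""
    g = abs_grads[abs_grads > 0]
    candidates = np.linspace(g.min(), g.max(), n_bins)
    best_t, best_var = candidates[0], np.inf
    for t in candidates[1:-1]:
        lo, hi = g[g <= t], g[g > t]
        if lo.size == 0 or hi.size == 0:
            continue
        w0, w1 = lo.size / g.size, hi.size / g.size
        var = w0 * lo.var() + w1 * hi.var()     # within-class variance
        if var < best_var:
            best_var, best_t = var, t
    return best_t

def freeze_small_gradients(model, lr):
    """Build 0/1 masks from the current gradients (backward() already called
    on the first batch of epoch 3) and register hooks to freeze weights."""
    all_g = np.concatenate([p.grad.detach().abs().flatten().cpu().numpy() / lr
                            for p in model.parameters() if p.grad is not None])
    t = otsu_threshold(all_g)
    for p in model.parameters():
        if p.grad is None:
            continue
        mask = (p.grad.detach().abs() / lr >= t).float()
        p.register_hook(lambda grad, m=mask: grad * m)  # zero frozen entries
```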
We start our experiments on the CIFAR-10 dataset using several CNN architectures. The CIFAR-10 dataset consists of 60,000 32x32 colour images with 10 classes and 6,000 images per class. There are 50,000 training images and 10,000 test images. Figure 3 shows the training and validation plots for both full backpropagation (FB) and our experimental technique (PB). We use four different architectures: VGG19, ResNet-110, DenseNet-121 (with a growth rate of 12) and LeNet-5. Table 1 demonstrates the obtained results, while Table 2 demonstrates the average performance over 3 runs. The lowest freezing percentage is witnessed by residual networks. This is expected, since these types of networks only learn the residual information and thus a lot of redundancy is already removed. Moreover, we find that VGG19 experiences 9.4% parameters with zero gradients by default (without applying our experimental technique), while LeNet-5 experiences 31.8% by default. Moreover, to demonstrate the best performance of our methodology (i.e. the highest freezing percentage with the lowest accuracy drop), we slightly tweak the original threshold value obtained as discussed in Section 3.1, and report the tweaked threshold values in Table 1. It is worth noting that when training the networks, we avoid using any regularization techniques. In particular, we don't use data augmentation, weight decay or dropout; we focus on delivering our experiments under normal settings. Models are all trained for 110 epochs. We start with a learning rate of 0.001 and decay the learning rate by a factor of 0.1 at epochs 40 and 80. We train all networks with AMSGrad, which takes the maximum between the current and the previous exponentially weighted average of gradients. In order to experiment on complex real-life applications, we trained an image captioning model with a visual attention mechanism on the Flickr8k dataset using our experimental technique (PB). Flickr8k is a dataset that contains 8,000 images with up to 5 captions for each image. We divide the dataset into 6,000 training images and 1,000 validation and testing images each. In our setup, we use the 5 captions for each image (if all 5 are present) and randomly sample from any present caption (5 − number of present captions) times if the 5 captions are not present, resulting in 30,000 training captions and 5,000 validation and testing captions each. (Table 2: average performance over 3 runs on the CIFAR10 and MNIST datasets for 4 different types of CNNs, with the threshold, validation accuracy for full backpropagation (FB), validation accuracy for our experimental technique (PB), and the freezing ratio in parameters, calculated as (total gradients − non-zero gradients) / total gradients × 100.) We resize all images to 256 × 256 pixels. We use a single-layer LSTM with a hidden size of 512. The batch size is set to 60. We use the Adam optimizer with an initial learning rate of 5e-4 and anneal the learning rate by a factor of 0.8 once the BLEU-4 score shows no improvement for 3 consecutive epochs. The word embedding and attention sizes are set to 512. We train for a maximum of 15 epochs, with early stopping if the validation BLEU-4 score has not improved for 10 consecutive epochs. When sampling, we use a beam size of 3. We use BLEU with up to 4 grams (BLEU-1, BLEU-2, BLEU-3, BLEU-4) as our evaluation metric. We experiment on the soft attention variant of the model. The best BLEU-4 score obtained when training with full backpropagation is 0.099. When applying our experimental technique (PB), using a threshold of 0.05, we obtain 61.81% redundant parameters which are frozen from the third epoch onwards.
Under this setting, we obtain a higher BLEU-4 score of 0.101 than the fully trained model. The freezing ratio is calculated as (total gradients − non-zero gradients) / total gradients × 100. Figure 4 shows some generated captions on the Flickr8k dataset using our experimental technique (PB); the predicted word and attention weights at each timestep are shown for 4 generated captions of different lengths. For evaluation on all BLEU scores, see Figure 5 (BLEU-n scores, where n is the number of grams, reported on the Flickr8k dataset for image captioning with an attention mechanism using both full backpropagation (FB) and our experimental technique (PB)). We provided an experimental study on the importance of a neural network's weights, and to which extent they need to be updated. Through our experiments, we emphasized the number of redundant parameters that carry no informative gradient which, if frozen from the third epoch onwards, only slightly affect (and sometimes do not affect) the overall accuracy of the model. To support this claim, we ran experiments on the MNIST and CIFAR10 datasets using several CNN architectures (VGG19, ResNet-110 and DenseNet-121), as well as the Flickr8k dataset using an image captioning architecture composed of LSTM networks with an attention mechanism. Our experiments successfully validate the claim of this paper. We provide our base code in this appendix. We use PyTorch to implement our experiments. We modified publicly available codes for the architectures we used.
An experimental paper that shows the amount of redundant weights that can be frozen from the third epoch onwards, with only a very slight drop in accuracy.
484
scitldr
Like humans, deep networks learn better when samples are organized and introduced in a meaningful order or curriculum. While conventional approaches to curriculum learning emphasize the difficulty of samples as the core incremental strategy, they force networks to learn from small subsets of data while introducing pre-computation overheads. In this work, we propose Learning with Incremental Labels and Adaptive Compensation (LILAC), which introduces a novel approach to curriculum learning. LILAC emphasizes incrementally learning labels instead of incrementally learning difficult samples. It works in two distinct phases: first, in the incremental label introduction phase, we unmask ground-truth labels in fixed increments during training, to improve the starting point from which networks learn. In the adaptive compensation phase, we compensate for failed predictions by adaptively altering the target vector to a smoother distribution. We evaluate LILAC against the closest comparable methods in batch and curriculum learning and label smoothing, across three standard image benchmarks, CIFAR-10, CIFAR-100, and STL-10. We show that our method outperforms batch learning with higher mean recognition accuracy as well as lower standard deviation in performance, consistently across all benchmarks. We further extend LILAC to state-of-the-art performance on CIFAR-10 using simple data augmentation, while exhibiting label order invariance among other important properties. Deep networks have seen rich applications in high-dimensional problems characterized by a large number of labels and a high volume of samples. However, successfully training deep networks to solve problems under such conditions is mystifyingly hard. The go-to solution in most cases is Stochastic Gradient Descent with mini-batches (simple batch learning) and its derivatives. While offering a standardized solution, simple batch learning often fails to find solutions that are simultaneously stable, highly generalizable and scalable to large systems. This is a by-product of how mini-batches are constructed. For example, the uniform prior assumption over datasets emphasizes equal contributions from each data point regardless of the underlying distribution; small batch sizes help achieve more generalizable solutions, but do not scale as well to vast computational resources as large mini-batches. It is hard to construct a solution that is a perfect compromise between all cases. Two lines of work, curriculum learning and label smoothing, offer alternative strategies to improve learning in deep networks. Curriculum learning, inspired by strategies used for humans, works by gradually increasing the conceptual difficulty of samples used to train deep networks. This has been shown to improve performance on corrupted and small datasets. More recently, deep networks have been used to categorize samples, and variations on the pace with which these samples are shown to networks have been analyzed in depth. To the best of our knowledge, previous works assumed that samples cover a broad spectrum of difficulty and hence need to be categorized and presented in a specific order. This introduces computational overheads, e.g. pre-computing the relative difficulty of samples, and also reduces the effective amount of data from which a model can learn in early epochs. Further, curriculum learning approaches have not been shown to compete with simple training strategies at the top end of performance on image benchmarks.
A complementary approach to obtaining generalizable solutions is to avoid over-fitting or getting stuck in local minima. In this regard, label smoothing offers an important solution that is invariant to the underlying architecture. Early works replace ground-truth labels with noise or use other models' outputs to prevent over-fitting. This idea was later extended to an iterative method which uses logits obtained from previously trained versions of the same deep network. Some works use local distributional smoothness, based on the robustness of a model's distribution around a data point, to regularize outcomes; others penalize highly confident outputs directly. Closest in spirit to our work is a label smoothing method which offers an alternative target distribution for all training samples with no extra data augmentation. In general, label smoothing is applied to all examples regardless of how it affects the network's understanding of them. Further, in methods which use other models to provide logits/labels, often the parent network used to provide those labels is trained using an alternate objective function or needs to be fully re-trained on the current dataset, both of which introduce additional computation. In this work, we propose LILAC, Learning with Incremental Labels and Adaptive Compensation, which emphasizes a label-based curriculum and adaptive compensation, to improve upon previous methods and obtain highly accurate and stable solutions. LILAC is conceived as a method to learn strong embeddings by using the recursive training strategy of incremental learning alongside the use of unlabelled/wrongly-labelled data as hard negative examples. It works in two key phases: 1) incremental label introduction and 2) adaptive compensation. In the first phase, we incrementally introduce groups of labels in the training process. Data corresponding to labels not yet introduced to the model use a single fake label selected from within the dataset. Once a network has been trained for a fixed number of epochs with this setup, an additional set of ground-truth labels is introduced to the network and the training process continues. In recursively revealing labels, LILAC allows the model sufficient time to develop a strong understanding of each class by contrasting against a large and diverse set of negative examples. Once all ground-truth labels are revealed, the adaptive compensation phase of training is initiated. This phase mirrors conventional batch learning, except we adaptively replace the target one-hot vector of incorrectly classified samples with a softer distribution. Thus, we avoid adjusting labels across the entire dataset, like previous methods, while elevating the stability and average performance of the model. Further, instead of being pre-computed by an alternative model, these softer distributions are generated on-the-fly from the outputs of the model being trained. We apply LILAC to three standard image benchmarks and compare its performance to the strongest known baselines. While incremental and continual learning methods work on evolving data distributions with the addition of memory constraints, knowledge distillation, or other requirements, this work is a departure into using negative mining and focused training to improve learning on a fully available dataset.
In incremental/continual learning works, the amount of data used to retrain the network is often small compared to the original dataset, while in LILAC we use the entire dataset, distinguished by Seen and Unseen labels; thus, it avoids data-deficient learning. Further, prior works emphasize the importance of hard negative mining, both in size and diversity, in improving learning. Although the original formulation of negative mining was based on imbalanced data, recent object detection works have highlighted its importance in contrasting and improving learning in neural networks. To summarize, our main contributions in LILAC are as follows: • we introduce a new take on curriculum learning by incrementally learning labels as opposed to samples, • our method adaptively compensates incorrectly labelled samples by softening their target distribution, which improves performance and removes external computational overheads, • we improve average recognition accuracy and decrease the standard deviation of performance across several image classification benchmarks compared to batch learning, a property not shared by other curriculum learning and label smoothing methods. In LILAC, our main focus is to induce better learning in deep networks. Instead of the conventional curriculum learning approach of ranking samples, we consider all samples equally beneficial. Early on, we focus on learning labels in fixed increments (Section 2.1). Once the network has had a chance to learn all the labels, we shift to regularizing the network to prevent over-fitting by providing a softer distribution as the target vector for previously misclassified samples (Section 2.2). An overview of the entire algorithm discussed is available in the appendix as Algorithm 1. In the incremental phase, we initially replace the ground-truth labels of several classes with a constant held-out label. Gradually, over the course of several fixed intervals of training, we reveal the true labels. Within a fixed interval of training, we keep constant two sets of data: "Seen", whose ground-truth labels are known, and "Unseen", whose labels are replaced by a fake value. (Figure 1: Illustration of the evolution of data partitions in the incremental label introduction phase for a four-label dataset. In the first incremental step, only one label is used for training while the remaining data use label 4. A short period of training is performed with this fixed setup, where data from U is uniformly sampled to match the number of samples from S in every mini-batch. The final incremental step depicted is equivalent to batch learning, since all the labels are available to the network. Once all the ground-truth labels are revealed we begin the adaptive compensation phase described in Sec. 2.2.) When training, mini-batches are uniformly sampled from the entire training set, but the instances from "Unseen" classes use the held-out label. By the end of the final interval, we reveal all ground-truth labels. We now describe the incremental phase in more detail. At the beginning of the incremental label introduction phase, we virtually partition data into two mutually exclusive sets, S: Seen and U: Unseen, as shown in Fig. 1. Data samples in S use their ground-truth labels as target values, while those in U use a designated unseen label, which is held constant throughout the entire training process. LILAC assumes a random ordering of labels, Or(M), where M denotes the total number of labels in the dataset.
Within this ordering, the number of labels and corresponding data initially placed in S is defined by the variable b. The remaining labels, M − b, are initially placed in U and incrementally revealed in intervals of m labels, a hyper-parameter defined by the user. Training in the incremental phase happens at fixed intervals of E epochs each. Within a fixed interval, the virtual data partition is held constant. Every mini-batch of data is sampled uniformly from the entire original dataset and, within each mini-batch, labels are obtained based on their placement in S or U. Then the number of samples from U is reduced or augmented, using a uniform prior, to match the number of samples from S. This is done to ensure no unfair skew in predictions towards U, since all of its data points use the same designated label. Finally, the curated mini-batches of data are used to train the neural network. At the end of each fixed interval, we reveal another set of m ground-truth labels and move samples of those classes from U to S, after which the entire data curation and training process is repeated for the next interval. Once all the ground-truth labels are available to the deep network, we begin the adaptive compensation phase of training. The main idea behind adaptive compensation is that, if the network is unable to correctly predict a sample's label even after allowing sufficient training time, then we alter the target vector to a less peaked distribution. Compared to learning one-hot vectors, this softer distribution can more easily be learned by the network. Unlike prior methods, we adaptively modify the target vector only for incorrectly classified samples, on-the-fly. In this phase, the network is trained for a small number of epochs using standard batch learning. Let T be the total number of training epochs in the incremental phase and batch learning. During the adaptive compensation phase, we start at epoch e, where e > T. For a mini-batch of samples in epoch e, predictions from the model at e − 1 are used to determine the final target vector used in the objective function; specifically, we soften the target vector for an instance iff it was misclassified by the model at the end of epoch e − 1. The final target vector for the ith instance at epoch e, t_{e,i}, is computed based on the model φ_{e−1} using Equation 1 (reconstructed here from its surrounding definitions):

t_{e,i} = (ε/M)·1 + (1 − ε)·δ_{y_i}   if φ_{e−1}(x_i) ≠ y_i,   and   t_{e,i} = δ_{y_i}   otherwise.   (1)

Here, (x_i, y_i) denote a training sample and its corresponding ground-truth label for sample index i, while δ_{y_i} represents the corresponding one-hot vector; 1 is a vector of M dimensions with all entries as 1, and ε is a scaling hyper-parameter. (A compact sketch of both label-manipulation steps appears after the setup details below.) Datasets. We use three datasets, CIFAR-10, CIFAR-100 and STL-10, to evaluate our method and validate our claims. CIFAR-10 and CIFAR-100 are the 10- and 100-class variants of the popular image benchmark CIFAR. Each of these contains 50,000 images in the training set and 10,000 images in the testing set. STL-10 is a 10-class subset of ImageNet with 500 and 800 samples per class for the training and testing subsets, respectively. Metrics. The common metric used to evaluate the performance of all the learning algorithms is average recognition accuracy (%) and standard deviation across 5 trials. We also report consistency, which is a binary metric that indicates whether the training strategy results in higher average performance and lower standard deviation compared to standard batch learning across all datasets. Experimental Setup. For CIFAR-10/100, we use ResNet18 as the architectural backbone for all methods; for STL-10, we use ResNet34.
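As referenced above, the two label-manipulation steps at the heart of LILAC can be sketched compactly in PyTorch. Below, lilac_targets implements the Seen/Unseen relabeling of the incremental phase (the mini-batch rebalancing is omitted), and adaptive_targets implements our reading of Equation 1; the function names and the default ε are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def lilac_targets(labels, seen_labels, fake_label):
    """Incremental phase: labels in the Seen set S keep their ground truth;
    everything still in Unseen U collapses to the single held-out fake label."""
    seen = torch.tensor(sorted(seen_labels), device=labels.device)
    is_seen = (labels.unsqueeze(1) == seen).any(dim=1)
    return torch.where(is_seen, labels, torch.full_like(labels, fake_label))

def adaptive_targets(labels, prev_preds, n_classes, eps=0.1):
    """Adaptive compensation: samples misclassified by the previous epoch's
    model phi_{e-1} get a softened target; correct ones keep the one-hot."""
    one_hot = F.one_hot(labels, n_classes).float()
    soft = (1.0 - eps) * one_hot + eps / n_classes   # (eps/M)*1 + (1-eps)*delta
    wrong = (prev_preds != labels).float().unsqueeze(1)
    return wrong * soft + (1.0 - wrong) * one_hot
```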
In each interval of LILAC's incremental phase, we train the model for 10 epochs for CIFAR-10/100, and 5 epochs for STL-10. During these incremental steps, we use a learning rate of 0.1, 0.01 and 0.1 for CIFAR-10, CIFAR-100, and STL-10 respectively. The standard batch learning settings used across all datasets are listed in the appendix. These settings reflect the setup used in LILAC once the incremental portion of training is complete and the algorithm moves into the adaptive compensation phase. Within this phase, epochs 175, 525 and 120 are used as the thresholds (epoch T) for CIFAR-10, CIFAR-100 and STL-10 respectively. • Stochastic gradient descent with mini-batches is the baseline against which all methods are compared. • Curriculum learning forms a family of related works which aim to help models learn faster and optimize to a better minimum. Following the methodology proposed in this line of work, we artificially create a subset of the dataset called "Simple" by selecting data that is within a value of 1.1 as predicted by a linear one-vs-all SVR model trained to regress to the ground-truth label. The deep network is trained on the "Simple" dataset for a fixed period of time that mirrors the total number of epochs of the incremental phase of LILAC, after which the entire dataset is used to train the network. • Label Smoothing is the closest relevant work to use label smoothing as a form of regularization without extra data augmentation. This non-invasive baseline is used as a measure of the importance of regularization and for its ability to boost performance. • Dynamic Batch Size (DBS) is a custom baseline used to highlight the importance of variable batch size in training neural networks. DBS randomly copies data available within a mini-batch to mimic variable batch size. Further, all ground-truth labels are available to this model throughout the training process. • Random Augmentation (RA) is a custom baseline used to highlight the importance of virtual data partitions in LILAC. Its implementation closely follows LILAC but excludes the adaptive compensation phase. The main difference between LILAC and RA is that RA uses data from one randomly chosen class in U within a mini-batch, while data from all classes in U is used in LILAC to equalize the number of samples from S and U. Table 1 clearly illustrates LILAC's improvements in average recognition accuracy, decrease in standard deviation, and consistency when compared to batch learning.

Table 2: Test accuracy (%) on CIFAR-10 with standard preprocessing:
—  96.23
Fractional Max-pooling  96.53
Densely Connected Convolutional Networks  96.54
Drop-Activation  96.55
Shake-Drop  96.59
Shake-Drop + LILAC (ours)  96.79

While certain setups have the highest performance on specific datasets (e.g., Label Smoothing on CIFAR-10/100), they are not consistent across all datasets and do not find more stable solutions than LILAC (std. of 0.216 compared to 0.127 from LILAC). LILAC is able to achieve superior performance without unnecessary overheads such as computing sample difficulty or irreversibly altering the ground-truth distribution across all samples. A key takeaway from DBS is the relative drop in standard deviation combined with higher average performance when compared to baselines like fixed curriculum and label smoothing. RA serves to highlight the importance of harvesting data from all classes in U simultaneously for "negative" samples. The variety of data to learn from provides a boost in performance and standard deviation across the board in LILAC w/o AC as opposed to RA.
DBS and RA underline the importance of variable batch size and data partitioning in the makeup of LILAC. We further extend LILAC to train the base PyramidNet with shake-drop regularization (p = 1.0). From Table 2 we clearly see that LILAC can be extended to provide the highest performance on CIFAR-10 given a standard preprocessing setup. To provide a fair comparison, we highlight top-performing methods with standard preprocessing setups that avoid partial inputs (at the node or image level), since LILAC was developed with fully available inputs in mind. Across all these learning schemes, LILAC is the only one to consistently increase classification accuracy and decrease the standard deviation across all datasets compared to batch learning. Fig. 2 illustrates the evolution of the embedding across the span of the incremental phase. This space has more degrees of separation when compared to an equivalent epoch of training with batch learning where all the labels are available.

Figure 2: Side-by-side comparison between the representation spaces learned by LILAC and batch learning. Through the entire incremental label introduction phase, the representation space evolves to be more well separated. Images in columns 4 and 5 show comparable points in training when all labels are available to the deep network being trained. These images support our claim that the deep network starts at a better initialization than standard batch learning, whose effect is carried throughout the training process.

Table 3 provides a breakdown of the contribution of each phase of LILAC and how they combine to elevate the final performance. Here, in LILAC w/o AC we replace the entire AC phase with simple batch learning, while in Batch + AC we include adaptive compensation with adjusted thresholds. The first half of this table compares the impact of incrementally introducing labels to a deep network against standard batch learning. We clearly observe that performances across Rows 1 and 2 fall within the indicated standard deviation of each other. However, from Fig. 2 we know that LILAC starts from a qualitatively better solution. Combining these results, we conclude that the emphasis on a lengthy secondary batch learning phase erodes overall performance. The second half of Table 3 shows the impact of adding adaptive compensation to batch learning and LILAC. When added to standard batch learning, there isn't a clear and conclusive indicator of improvement across all benchmarks in both average performance and standard deviation. However, in combination with the incremental label introduction phase of LILAC, adaptive compensation improves average performance as well as decreases standard deviation, indicating improved stability and consistency. Given the similarity in learning between the batch setup and LILAC once all labels have been introduced, we show that the embedding space learned by incrementally introducing labels (Fig. 2) is distinct from standard batch learning and is more amenable to AC.
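For reference, plots like Figure 2 can be produced with a generic feature-projection recipe such as the one below; this is our own sketch, and the choice of t-SNE and the `model.features` hook are assumptions rather than details reported here:

```python
import torch
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

@torch.no_grad()
def plot_embedding(model, loader, device="cpu", out_path="embedding.png"):
    """Project penultimate-layer features to 2D to inspect class separation."""
    feats, labels = [], []
    for x, y in loader:
        feats.append(model.features(x.to(device)).cpu())  # assumed feature hook
        labels.append(y)
    z = TSNE(n_components=2).fit_transform(torch.cat(feats).numpy())
    c = torch.cat(labels).numpy()
    plt.scatter(z[:, 0], z[:, 1], c=c, s=4, cmap="tab10")
    plt.savefig(out_path)
```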
Through the previous experiments we have established the general applicability of LILAC while contrasting its contributions to those of standard batch learning. In this section we dive deeper to reveal some characteristics of LILAC that further supplement the claim of general applicability. Specifically, we characterize the impact of label ordering, the smoothness of the alternate target vector distribution, and the injection of larger groups of labels in the incremental phase. Ordering of Labels. Throughout the standard experiments, we assume labels are used in ascending order of value. When this is modified to a random ordering or to an ascending order of difficulty, the results in Table 4 suggest that there is no explicit benefit or pattern. Other than the extra impact of continually fluctuating label orders across trials, there isn't a large gap in performance. Thus, we claim LILAC is relatively invariant to the order of label introduction. During adaptive compensation, ε = 0.5 is used in the alternate target vector for samples with failed predictions throughout all experiments in Sections 3.1 and 3.2. When extended to a variety of values, we observe that most variations of the peak performance still fall within the standard deviation range for each dataset. However, the peak average performance values usually occur for ε between 0.5 and 0.7. While LILAC was designed to allow the introduction of multiple labels in a single incremental step, throughout the experiments in Sections 3.1 and 3.2 only 1 label was introduced per step, to allow thorough learning while eliminating the chance of conflicting decision boundaries from multiple labels. Revealing multiple labels instead of 1 label per incremental step has a negative impact on the overall performance of the model. Table 4 shows that adding large groups of labels forces lower performance, which is in line with our hypothesis that revealing fewer labels per incremental step makes the embedding space more amenable to adaptive compensation. In this work, we proposed LILAC, which rethinks curriculum learning based on incrementally learning labels instead of samples. This approach helps kick-start the learning process from a substantially better starting point while making the learned embedding space amenable to adaptive negative logit compensation. Both these techniques combine well in LILAC to show the highest performance on CIFAR-10 for simple data augmentations while easily outperforming batch learning, curriculum learning, and label smoothing on comparable network architectures. The next step in unlocking the full potential of this setup is to extend it to include a confidence measure on the predictions of the network, so that it can handle the effects of dropout or partial inputs. In further expanding LILAC's ability to handle partial inputs, we aim to explore its effect on standard incremental learning (memory constrained) while also extending its applicability to more complex neural network architectures.

A LILAC: ALGORITHM

Table 8: The table captures the effect of varying the number of epochs used for the fixed training intervals in the incremental label introduction phase. Across CIFAR-10 there is an obvious peak after which the mean value decreases. However, in STL-10 there seems to be a consistent increase, with the assumption of minor noise. Finally, in CIFAR-100 there isn't a clear pattern.

From the results in Table 8, we observe that the choice of E is dependent on the dataset. There isn't an explicit pattern that can be used to select the value of E without trial runs. Further, the available run-time is an important constraint when selecting E from a range of values, since both m and E affect it.
A novel approach to curriculum learning by incrementally learning labels and adaptively smoothing labels for misclassified samples, which boosts average performance and decreases standard deviation.
485
scitldr
Words in natural language follow a Zipfian distribution whereby some words are frequent but most are rare. Learning representations for words in the "long tail" of this distribution requires enormous amounts of data. Representations of rare words trained directly on end tasks are usually poor, requiring us to pre-train embeddings on external data, or treat all rare words as out-of-vocabulary words with a unique representation. We provide a method for predicting embeddings of rare words on the fly from small amounts of auxiliary data, with a network trained end-to-end for the downstream task. We show that this improves results over baselines where embeddings are trained on the end task, for reading comprehension, recognizing textual entailment and language modeling. Natural language yields a Zipfian distribution BID28, which tells us that a core set of words (at the head of the distribution) are frequent and ubiquitous, while a significantly larger number (in the long tail) are rare. Learning representations for rare words is a well-known challenge of natural language understanding, since the standard end-to-end supervised learning methods require many occurrences of each word to generalize well. The typical remedy to the rare word problem is to learn embeddings for some proportion of the head of the distribution, possibly shifted towards the domain-specific vocabulary of the dataset or task at hand, and to treat all other words as out-of-vocabulary (OOV), replacing them with an unknown word "UNK" token with a shared embedding. This essentially heuristic solution is inelegant, as words from technical domains, names of people, places, institutions, and so on will lack a specific representation unless sufficient data are available to justify their inclusion in the vocabulary. This forces model designers to rely on overly large vocabularies, as observed by BID17 BID22, which are parametrically expensive, or to employ vocabulary selection strategies BID16. In both cases, we face the issue that words in the tail of the Zipfian distribution will typically still be too rare to learn good representations for through standard embedding methods. Some models, such as in the work of BID13, have sought to deal with the open vocabulary problem by obtaining representations of words from characters. This is successful at capturing the semantics of morphological derivations (e.g. "running" from "run") but puts significant pressure on the encoder to capture semantic distinctions amongst syntactically similar but semantically unrelated words (e.g. "run" vs. "rung"). Additionally, nothing about the spelling of named entities, e.g. "The Beatles", tells you anything about their semantics (namely that they are a rock band). In this paper we propose a new method for computing embeddings "on the fly", which jointly addresses the large vocabulary problem and the paucity of data for learning representations in the long tail of the Zipfian distribution. This method, which we illustrate in FIG0, can be summarized as follows: instead of directly learning separate representations for all words in a potentially unbounded vocabulary, we train a network to predict the representations of words based on auxiliary data. Such auxiliary data need only satisfy the general requirement that it describe some aspect of the semantics of the word for which a representation is needed.
Examples of such data could be dictionary definitions, Wikipedia infoboxes, linguistic descriptions of named entities obtained from Wikipedia articles, or something as simple as the spelling of a word. We will refer to the content of auxiliary data as "definitions" throughout the paper, regardless of the source. Several sources of auxiliary data can be used simultaneously as input to a neural network that will compute a combined representation. These representations can then be used for out-of-vocabulary words, or combined with within-vocabulary word embeddings directly trained on the task of interest or pretrained from an external data source BID18 BID20. Crucially, the auxiliary data encoders are trained jointly with the objective, ensuring the preservation of semantic alignment with representations of within-vocabulary words. In the present paper, we will focus on a subset of these approaches and auxiliary data sources, restricting ourselves to producing out-of-vocabulary word embeddings from dictionary data, spelling, or both. The obvious use case for our method would be datasets and tasks where there are many rare terms, such as technical writing or bio/medical text BID6. On such datasets, attempting to learn global vectors (for example, GloVe embeddings BID20) from external data would only provide coverage for common words, and would be unlikely to be exposed to sufficient (or any) examples of domain-specific technical terms to learn good enough representations. However, there are no (or significantly fewer) established neural network-based baselines on these tasks, which makes it harder to validate baseline results. Instead, we present results on a trio of well-established tasks, namely reading comprehension, recognizing textual entailment, and a variant on language modelling. For each task, we compare baseline models with embeddings trained directly only on the task objective to those same models with our on the fly embedding method. Additionally, we report results for the same models with pretrained GloVe vectors as input, which we do not update. We aim to show how the gap in results between the baseline and the data-rich GloVe-based models can be partially but substantially closed merely through the introduction of relatively small amounts of auxiliary definitions. Quantitative results show that auxiliary data improves performance. Qualitative evaluation indicates our method allows models to draw and exploit connections defined in auxiliary data, along the lines of synonymy and semantic relatedness. Arguably, the most popular approach for representing rare words is by using word embeddings trained on very large corpora of raw text BID18 BID20. Such embeddings are typically explicitly or implicitly based on word co-occurrence statistics. Being a big step forward from the models that are trained from scratch only on the task at hand, the approach can be criticized for being extremely data-hungry. Obtaining the necessary amounts of data may be difficult, e.g. in technical domains. Besides, auxiliary training criteria used in the pretraining approaches are not guaranteed to yield representations that are useful for the task at hand. BID7 proposed to represent OOV words by fixed random vectors. While this has been shown to be effective for machine comprehension, this method does not account for word semantics at all, and therefore does not cover the same ground as the method that we propose. There have been a number of attempts to achieve out-of-vocabulary generalization by relying on the spelling.
BID13 used a bidirectional LSTM to read the spelling of rare words and showed that this can be helpful for language modeling and POS tagging. We too will investigate spelling as a source of auxiliary data. In this respect, the approach presented here subsumes theirs, and can be seen as a generalization to other types of definitions. The closest to our work is the study by BID10, in which a network is trained to produce an embedding of a dictionary definition that is close to the embedding of the headword. The network is shown to be an effective reverse dictionary and a crossword solver. Our approach is different in that we train a dictionary reader in an end-to-end fashion for a specific task, side-stepping the potentially suboptimal auxiliary ranking cost that was used in that earlier work. Their method also relies on the availability of high-quality pretrained embeddings, which might not always be feasible. Another related work by BID14 uses dictionary definitions to provide initialization to a database embedding method, which is different from directly learning to use the definitions as we do. Concurrently with this work, BID24 studied dynamic integration of knowledge from a commonsense knowledge base. In another concurrent work, BID15 build a new dataset for named entity prediction and show that external knowledge can be very useful for this task.

FIG0: We obtain representations by retrieving external information (e.g. a dictionary definition) and embedding it, for example, with another LSTM-RNN, instead of using a catch-all "UNK" representation for out-of-vocabulary items.

At a higher level, our approach belongs to a broad family of methods for conditioning neural networks on external knowledge. For example, BID12 propose to add classes to a classifier by representing them using their "descriptions". By description they meant, for example, a canonical picture of a printed character that would represent all its possible handwritten versions. Their idea to rely on descriptions is similar to our idea to rely on definitions; however, we focus on understanding complex inputs instead of adding new output classes. Enhancing word embeddings with auxiliary data from knowledge bases (including WordNet) has a long tradition BID27 BID8. Our work differs from previous approaches in essential ways. First, we use a textual form and are not restricted to knowledge represented as a graph. Second, we learn in an end-to-end fashion, allowing the model to pick useful information for the task of interest. In general, a neural network processes a language input by replacing its elements x_i, most often words, with the respective vectors e(x_i), often called embeddings BID1. Embeddings are typically either trained from scratch or pretrained. When embeddings are trained from scratch, a restricted vocabulary V_train = {w_1, ..., w_n} is defined, usually based on training set frequency. Words not in V_train are replaced by a special token UNK with a trainable embedding e(UNK). Unseen test-time words w ∉ V_train are then represented by e(UNK), which effectively means the specific meaning of these words is lost. Even if w had been included in V_train but was very rare, its learned embedding e(w) would likely not be very informative. The approach proposed in this work, described in FIG0, is to use definitions from auxiliary data, such as dictionaries, to compute embeddings of rare words on the fly, as opposed to having a persistent embedding for each of them.
More specifically, this involves fetching a definition d(w) = (x_1, ..., x_k) and feeding it into a network f that produces an embedding e_d(w) = f(e'(x_1), ..., e'(x_k)). We will refer to e_d(w) as a definition embedding produced by a definition reader f. One can either use the same embeddings e' = e when reading the dictionary or train different ones. Likewise, one can either stick to a shared vocabulary V_dict = V_train, or consider two different ones. When a word x_i ∉ V_dict is encountered, it is replaced by UNK and the respective trainable embedding e'(UNK) is used. For the function f we consider three choices: a simple mean pooling (MP), e_d(w) = (1/k) Σ_{i=1}^{k} e'(x_i); mean pooling followed by a linear transformation (MP-L), e_d(w) = V ((1/k) Σ_{i=1}^{k} e'(x_i)), where V is a trainable matrix; and lastly, using the last state of an LSTM BID11, e_d(w) = LSTM(e'(x_1), ..., e'(x_k)). Many words have multiple dictionary definitions. We combine embeddings for multiple definitions using mean pooling. We include all definitions whose headwords match w or any possible lemma of a lower-cased w. To simplify the notation, the rest of the paper assumes that there is only one definition for each word. While the primary purpose of definition embeddings e_d(w) is to inform the network about rare words, they might also contain useful information for the words in V_train. When we use both, we combine the information coming from the embeddings e(w) and the definition embeddings e_d(w) by computing e_c(w) = e(w) + W e_d(w), where W is a trainable matrix, or just by simply summing the two, e_c(w) = e(w) + e_d(w). Alternatively, it is possible to just use e_c(w) = e(w) for w from V_train and e_c(w) = e_d(w) otherwise. When no definition is available for a word w, we posit that e_d(w) is a zero vector. A crucial observation that makes an implementation of the proposed approach feasible is that the definitions d(x_i) of all words x_i from the input can be processed in parallel. To keep things simple, we limited the scope and complexity of this first incarnation of our approach as follows. First, we do not consider definitions of word combinations, such as phrasal verbs like "give up" and geographical entities like "San Francisco". Second, our definition reader could itself better handle the unknown words w ∉ V_dict by using their definition embeddings e_d(w) instead of e'(UNK), thereby implementing a form of recursion. We will investigate both in our future work. We worked on extractive question answering, semantic entailment classification and language modelling. For each task, we picked a baseline model and an architecture from the literature which we knew would provide sensible results, to explore how augmenting it with on the fly embeddings would affect performance. We explored two complementary sources of auxiliary data. First, we used word definitions from WordNet BID19. While WordNet is mostly known for its structured information about synonyms, it does contain natural language definitions for all its 147306 lemmas (this also includes multi-word headwords, which we do not consider in this work). Second, we experimented with the character-level spelling of words as auxiliary data. To this end, in order to fit in with our use of dictionaries, we added fake definitions of the form "Word" → "W", "o", "r", "d". In order to measure the performance of models in "data-rich" scenarios, where a large amount of unlabelled language data is available for the training of word representations, we used 300-dimensional GloVe vectors trained on 840 billion words BID20 as pretrained word embeddings.
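As a sketch of the three definition readers and the combination rule described above (PyTorch is assumed, and the module layout and dimension choices are ours rather than the authors' exact implementation):

```python
import torch
import torch.nn as nn

class DefinitionReader(nn.Module):
    """Compute e_d(w) from definition token ids and combine it with e(w)."""
    def __init__(self, vocab_size, dim, mode="MP-L"):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)   # e' (optionally tied to e)
        self.mode = mode
        self.V = nn.Linear(dim, dim, bias=False)   # trainable matrix for MP-L
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.W = nn.Linear(dim, dim, bias=False)   # combination e(w) + W e_d(w)

    def forward(self, def_ids, word_emb):
        h = self.emb(def_ids)                      # (batch, k, dim)
        if self.mode == "MP":                      # mean pooling
            e_d = h.mean(dim=1)
        elif self.mode == "MP-L":                  # mean pooling + linear map
            e_d = self.V(h.mean(dim=1))
        else:                                      # "LSTM": last hidden state
            _, (h_n, _) = self.lstm(h)
            e_d = h_n[-1]
        return word_emb + self.W(e_d)              # e_c(w)
```

Words without a definition can be handled by passing a zero vector in place of the pooled definition states, matching the convention above.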
We compared our auxiliary data-augmented on the fly embedding technique to baselines and to models with fixed GloVe embeddings, to measure how well our technique closes the gap between a data-poor and a data-rich scenario. We used the Stanford Question Answering Dataset (SQuAD) BID21, which consists of approximately 100000 human-generated question-answer pairs. For each pair, a paragraph from Wikipedia is provided that contains the answer as a continuous span of words. Our basic model is a simplified version of the coattention network proposed in BID26. First, we represent the context of length n and the question of length m as matrices C ∈ R^{n×d} and Q ∈ R^{m×d} by running them through an LSTM and a linear transform. Next, we compute the affinity scores L = CQ^T ∈ R^{n×m}. By normalizing L with row-wise and column-wise softmaxes we construct context-to-question and question-to-context attention maps A_C and A_Q. These are used to construct a joint question-document representation U_0 as a concatenation along the feature axis of the matrices C, A_C Q and A_C A_Q^T C. We transform U_0 with a bidirectional LSTM and another ReLU BID9 layer to obtain the final context-document representation U_2. Finally, two linear layers followed by a softmax assign to each position of the document the probabilities of it being the beginning and the end of the answer span. We refer the reader to the work of BID26 for more details. Compared to their model, the two main simplifications that we applied are skipping the iterative inference procedure and using the usual ReLU instead of the highway-maxout units. Our baseline is a model with the embeddings trained purely from scratch. We found that it performs best with a small vocabulary of the 10k most common words from the training set. This can be explained by the rather moderate size of the dataset: all the models we tried tended to overfit severely. In our preliminary experiments, we found that a smaller V_train of 3k words is better for the models using auxiliary data, and that combining the information from definitions and word embeddings is helpful. Unless otherwise specified, we use summation preceded by a linear transformation for composing word embeddings with definition embeddings: e_c(w) = e(w) + W e_d(w). We tried different models (MP and LSTM) for reading the dictionary and used an LSTM for reading the spelling. In addition to using either the spelling or the dictionary definitions, we tried mixing the dictionary definitions with the spelling. When both dictionary and spelling were used, we found that using an LSTM for reading the spelling and MP-L for reading dictionary definitions works best. As mentioned in Section 3, our dictionary lookup procedure involves lowercasing and lemmatization. In order to estimate the contribution of these steps, we add to the comparison a model that fetches the spelling of a lemmatized and lowercased version of each word. The last model in our comparison is trained with GloVe embeddings. Except for the GloVe embeddings, all vectors, such as word embeddings and LSTM states, have d = 200 dimensions in all models. The results are reported in TAB0. We report the exact match ratio as computed by the evaluation tools provided with the dataset, which is basically the accuracy of selecting the right answer. We report the average over multiple runs on the development set. Looking at the results, one can see that adding any external information results in a significant improvement over the baseline model (B) (3.7 to 10.5 points).
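For reference, the coattention computation described earlier in this section reduces to a few matrix products; here is a minimal sketch with the batch dimension omitted (shapes follow the notation above):

```python
import torch
import torch.nn.functional as F

def coattention(C, Q):
    """C: (n, d) context states, Q: (m, d) question states -> U_0: (n, 3d)."""
    L = C @ Q.t()                  # (n, m) affinity scores
    A_C = F.softmax(L, dim=1)      # row-wise: context-to-question attention
    A_Q = F.softmax(L, dim=0)      # column-wise: question-to-context attention
    U0 = torch.cat([C, A_C @ Q, A_C @ (A_Q.t() @ C)], dim=1)
    return U0
```

The bidirectional LSTM and ReLU layer producing U_2, and the two span-prediction heads, are applied on top of U_0 as described above.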
Returning to the individual results: when the dictionary alone is used, mean pooling (D3) performs similarly to the LSTM (D4). For the model with mean pooling, we verified that the matrix W in computing e_c(w) is necessary for the best results (see model D2), and that back-propagating through the process of reading definitions is helpful (see model D1). We thereby establish that our method is preferable to the one by BID14, which prescribes mean pooling of available embeddings without end-to-end training. We found that adding the spelling (S) helps more than adding a dictionary (D) (3 points difference), possibly due to the relatively lower coverage of our dictionary. However, the model that uses both (SD) has a 1.1 point advantage over the model that uses just the spelling (S), demonstrating that combining several forms of auxiliary data allows the model to exploit the complementary information they provide. The model with GloVe embeddings (G) is still ahead with a 1.1 point margin, but the gap has been shrunk. Finally, we evaluated the models S, SL and SD on the test set. The parameters for test evaluation were selected based on the development set. The test set results confirm the benefit of using dictionary definitions (SD has a 1 point advantage over S). Lastly, the model that uses lemmatization and lowercasing (SL) to retrieve spelling does not outperform S, which shows that the advantage of SD over S is not due to the normalization procedures and that SD must be using the dictionary definitions in a non-trivial way. To understand how our model successfully uses the dictionary, and what prevents it from using it better, we conducted a qualitative investigation on selected examples. Namely, we considered examples on which SD selected the correct answer span and S did not, and from them we chose the ones with the largest difference of the log-likelihoods that S and SD assigned to the correct answer.

Figure 2: The attention maps A_C of the models with (on the left) and without the dictionary (on the right). The rows correspond to words of the context and the columns to the words of the question. One can see how, with the help of the dictionary, the model starts considering "overseas" as a candidate answer to "where".

Figure 2 shows the attention maps A_C for both models for one of the examples. We can see that S has no clue that "overseas" may be an answer to a question about location, whereas SD is aware of that, presumably thanks to the definition "overseas -> in a foreign country". In a similar manner we observed SD being able to match "direction" and "eastwards", "where" and "outdoors". Another pattern that we saw is that the model with the dictionary was able to answer questions of the form "which scientist" or "which actress" better. Both words "scientist" and "actress" were not frequent enough to make it to V_train, but the definitions "actress -> a female actor" and "scientist -> a person with advanced knowledge of one or more sciences" apparently provided enough information about these words that the model could start matching them with named entities in the passage. Furthermore, we compared the models G and SD in a similar way. We found that often SD simply was missing a definition. For example, it was not able to match "XBox" and "console", "Mark Twain" and "author", "most-watched" and "most watched". We also saw cases where the definition was available but was not used, seemingly because the key word in the definition was outside V_train = V_dict.
For example, "arrow" was defined as "a projectile with a straight thin shaft", and the word "projectile" was quite rare in the training corpus. As a consequence, the model had no chances to understand that an arrow is a weapon and match the word "arrow" in the context with the word "weapon" in the question. Likewise, "Earth" was defined as "planet", but "planet" was outside V train. Finally, we saw cases where inferring important aspects of meaning from the dictionary would be non-trivial, for example, guessing that "historian" is a "profession" from the definition "a person who is an authority on history and who studies it and writes about it" would involve serious common sense reasoning. We used the Stanford Natural Language Inference (SNLI) corpus BID3, which consists of around 500k pairs of sentences (hypothesis and premise) and the task is to predict the logical relation (contradiction, neutral or entailment) between them. In addition we used the MultiGenre Natural Language Inference (MultiNLI) corpus BID25, which effectively is a more recent and more diverse version of SNLI of approximately the same size. A key distinction of MultiNLI from SNLI is the availability of a "matched" and a "mismatched" development and test set. The matched test and development sets contain data from the same domains as the training set, whereas the mismatched ones were intentionally collected using data from different domains. We implemented a variant (replacing TreeLSTM by biLSTM) of Enhanced Sequential Inference Model (ESIM) BID5 that achieves close to SOTA accuracy. Similarly to the model used in the SQuAD experiments, this model represents hypothesis and premise as H ∈ R n,d and P ∈ R m,d matrices by encoding them using a bidirectional LSTM. Analogously, alignment matrices A H and A P are computed by normalizing affinity scores. These alignment matrices are used to form joint hypothesis-premise representations. For the hypothesis we compute and concatenate H, A H P, H − A H P and H A H P, yielding a h ∈ R n,4d sentence embedding, and proceed similarly for the premise. The ing sentence representations are then processed in parallel by a bidirectional LSTM and the final states of the LSTMs are concatenated and processed by a single layer Tanh MLP to predict entailment. Similarly to the SQuAD experiments, we found that the baseline model performs best with a larger vocabulary (5k words for SNLI, 20k words for MultiNLI) than the model that uses auxiliary information (3k words). Differently from SQuAD, we found it helpful to use a different vocabulary V dict = V train. We built V dict by collecting the 11000 words that occur most often in the definitions, where each definition is weighted with the frequency of its headword in the training data. While in theory it would still be possible to share word embeddings between the main model and the definition reader even when they have different vocabularies, we opted for the simpler option of using separate word embeddings e and e. Since with separate word embeddings having a subsequent linear transformations would be redundant, we simply add the of mean pooling k i=1 e (x i)/k to e(w). We use an LSTM for reading the spelling, but unlike the SQuAD experiments, we found that simply adding the last hidden state of the spelling LSTM to e(w) worked better. We also tried to use a LSTM for reading definitions but it did no better than the simpler mean pooling. Our last model used pretrained GloVe embeddings. 
The 300-dimensional ESIM performed best for the baseline and GloVe models, whereas using just 100 dimensions worked better for the models using auxiliary information. All runs were repeated 3 times and the scores are averaged. Results on SNLI and MultiNLI are presented in TAB1. Using dictionary definitions allows us to bridge ≈40% of the gap between training from scratch and using embeddings pretrained on 840 billion words, and this improvement is consistent on both datasets. Compared to the SQuAD results, an important difference is that spelling was not as useful on SNLI and MultiNLI. We also note that we tried using fixed random embeddings for OOV words as proposed by BID7, and that this method did not bring a significant advantage over the baseline. In order to gain some insights into the performance of our entailment recognition models, in FIG1 we plot a t-SNE (van der Maaten & Hinton) visualization of word embeddings computed for the words from the BLESS dataset BID0. Specifically, we used embeddings produced by the definition encoder of our best SNLI model using WordNet definitions. The BLESS dataset contains three categories of words: fruits, tools and vehicles. One can see that fruit words tend to be separated from tool and vehicle words. FIG1 shows that, as expected, dictionary-enabled models significantly outperform baseline models for sentences containing rare words. Experiments in the previous sections used datasets of moderate size. To get an idea of how useful different auxiliary data sources will be for datasets of various sizes, we also apply our approach to the One Billion Words (OBW) language modelling task BID4. In the first round of experiments, we use only 1% of the training data (∼10^7 words), and in the second we train our models on the whole training set (∼10^9 words). Similarly to prior work on using the spelling BID13, we restrict the softmax output layer to only predict probabilities of the 10k most common words; however, we do not impose such a constraint when the model processes the words from the input context. We thereby want to assess whether having a definition of an observed word helps the model to predict the following ones from a restricted vocabulary. Our baseline model is an LSTM with 500 units and with trainable input embeddings for the 10k most frequent input words. This covers around 90.24% of all word occurrences. We consider computing embeddings of the less frequent input words from their dictionary definitions, GloVe vectors and spellings. These sources of auxiliary information were available for 63.35%, 97.43% and 100% of the remaining occurrences, respectively. In order to compare how helpful these sources are when they are available, we run an additional set of experiments with "restricted" inputs. Specifically, we only use auxiliary information for a word if it has both a GloVe embedding and a dictionary definition. When the word does not have any of these, we replace it with "UNK". We report results for three variants of dictionary-enabled models. The first variant (dict1) uses the same LSTM for reading the text and the definitions. The second one (dict2) has two separate LSTMs, but the word embeddings are shared. The third variant (dict+spelling) adds spelling to our best dictionary model. Lastly, we trained a model that used the lowercased lemma of the word as the definition. The training was early-stopped using a development set. We report the test perplexities in TAB2. Similarly to our other experiments, using external information to compute embeddings of unknown words helps in all cases.
We observe a significant gain even for dict1, which is remarkable as this model has the same architecture and parameters as the baseline. We note that lemma+lowercase performs worse than any model with the dictionary, which suggests that dictionary definitions are used in a non-trivial way. Adding spelling consistently helps more than adding dictionary definitions. In our experiments with restricted inputs ((R) in TAB2), spelling and dict2 show similar performance, which suggests that this difference is mostly due to the complete coverage of spelling. Using both the dictionary and spelling is consistently slightly better than using just spelling, and the improvement is more pronounced in the restricted setting. Using GloVe embeddings results in the best perplexity. Switching to the full training set shrank all the gaps. To zoom in on how the models deal with rare words, we look at the perplexities of the words that appear right after out-of-vocabulary words (PPL after OOV). We can see that the ranking of the different models mostly stays the same, yet the differences in performance become larger. For example, on the 1% version of the dataset, in the restricted setting, adding definitions to the spelling helps to bridge half of the 6-point PPL-after-OOV gap between spelling and GloVe. This is in line with our expectation that, when definitions are available, they should be helpful for handling rare words. We showed how different sources of auxiliary information, such as the spelling and a dictionary of definitions, can be used to produce, on the fly, useful embeddings for rare words. While it was known before that adding spelling information to the model is helpful, it is often hard or not possible to infer the meaning directly from the characters, as confirmed by our entailment recognition experiments. Our more general approach offers endless possibilities of adding other data sources and learning end-to-end to extract the relevant bits of information from them. Our experiments with a dictionary of definitions show the feasibility of the approach, as we report improvements over using just the spelling on question answering and semantic entailment classification tasks. Our qualitative investigations on the question answering data confirm our intuition on where the improvement comes from. It is also clear from them that adding more auxiliary data would help, and that it would probably also be useful to add definitions not just for words, but also for phrases (see "Mark Twain" from Section 4.1). We are planning to add more data sources (e.g. first sentences from Wikipedia articles) and to better use the available ones (WordNet has definitions of phrasal verbs like "come across") in our future work. An important question that we did not touch on in this paper is how to deal with rare words in the auxiliary information, such as dictionary definitions. Based on our qualitative investigations (see the example with "arrow" and "weapon" in Section 4.1), we believe that better handling of rare words in the auxiliary information could substantially improve the proposed method. It would be natural to use on the fly embeddings similar to the ones that we produce for words from the input, but the straightforward approach of computing them on request would be very computation- and memory-hungry. One would furthermore have to resolve cyclical dependencies, which are unfortunately common in dictionary data (when e.g. "entertainment" is defined using "diverting" and "diverting" is defined using "entertainment").
In our future work we want to investigate asynchronous training of the on the fly embeddings and the main model. In this paper, we have shown that introducing relatively small amounts of auxiliary data, together with a method for computing embeddings on the fly using that data, bridges the gap between data-poor setups, where embeddings need to be learned directly from the end task, and data-rich setups, where embeddings can be pretrained and sufficient external data exists to ensure in-domain lexical coverage. A large representative corpus to pretrain word embeddings is not always available, and our method is applicable when one has access only to limited auxiliary data. Learning end-to-end from auxiliary sources can be extremely data-efficient when these sources represent compressed, relevant information about the word, as dictionary definitions do. A related desirable aspect of our approach is that it may partially return the control over what a language processing system does into the hands of engineers or even users: when dissatisfied with the output, they may edit or add auxiliary information to the system to make it perform as desired. Furthermore, domain adaptation with our method could be carried out simply by using other sources of auxiliary knowledge, for example definitions of domain-specific technical terms in order to understand medical texts. Overall, the aforementioned properties of our method make it a promising alternative to the existing approaches to handling rare words.
We propose a method to deal with rare words by computing their embeddings from definitions.
486
scitldr
The capability of reliably detecting out-of-distribution samples is one of the key factors in deploying a good classifier, as the test distribution does not always match the training distribution in most real-world applications. In this work, we propose a deep generative classifier which is effective at detecting out-of-distribution samples as well as classifying in-distribution samples, by integrating the concept of Gaussian discriminant analysis into deep neural networks. Unlike the discriminative (or softmax) classifier that only focuses on the decision boundary partitioning its latent space into multiple regions, our generative classifier aims to explicitly model class-conditional distributions as separable Gaussian distributions. Thereby, we can define the confidence score by the distance between a test sample and the center of each distribution. Our empirical evaluation on multi-class images and tabular data demonstrates that the generative classifier achieves the best performance in distinguishing out-of-distribution samples, and also that it can be generalized well for various types of deep neural networks. Out-of-distribution (OOD) detection, also known as novelty detection, refers to the task of identifying samples that differ in some respect from the training samples. Recently, deep neural networks (DNNs) turned out to show unpredictable behaviors in case of mismatch between the training and testing data distributions; for example, they tend to make high-confidence predictions for samples that are drawn from OOD or belong to unseen classes. For this reason, accurately measuring the distributional uncertainty of DNNs becomes one of the important challenges in many real-world applications where we can hardly control the testing data distribution. Several recent studies have tried to simply detect OOD samples using a confidence score defined by the softmax probability or the Mahalanobis distance from class means, and they showed promising results even without re-training the model. However, all of them employ DNNs designed for a discriminative (or softmax) classifier, which has limited power to place OOD samples so that they are distinguishable from in-distribution (ID) samples in its latent space. To be specific, the softmax classifier is optimized to learn a discriminative latent space where the training samples are aligned along their corresponding class weight vectors, maximizing the softmax probability for the target classes. As pointed out in prior work, OOD samples are more likely to have small values of the softmax probability for all known classes, which means that their latent vectors get closer to the origin. As a result, there could be a large overlap between the two sets of ID and OOD samples in the latent space (Figure 1), which eventually reduces the gap between their confidence scores and degrades the detection performance as well. In addition, most existing confidence scores adopt additional calibration techniques to enhance the reliability of the detection, but they include several hyperparameters whose optimal values vary depending on the testing data distribution. In this situation, previous studies utilized a small portion of each test set (containing both ID and OOD samples) for validation, and reported the results evaluated on the rest by using the optimal hyperparameter values for each test case.
Considering the motivation of OOD detection, namely that prior knowledge of test distributions is not available before we encounter them, such a process of tuning the hyperparameters for each test case is not practical when deploying the DNNs in practice. In this paper, we propose a novel objective to train DNNs with a generative (or distance) classifier which is capable of effectively identifying OOD test samples. The main difference of our deep generative classifier is that it learns separable class-conditional distributions in the latent space, by explicitly modeling them as a DNN layer. The generative classifier places OOD samples further apart from the distributions of all given classes, without utilizing OOD samples for its validation. Thus, based on the Euclidean distance between a test sample and the centers of the obtained class-conditional distributions, we can calculate how likely, and how confidently, the sample belongs to each class. This can be interpreted as a multi-class extension of unsupervised anomaly detection, and Gaussian discriminant analysis provides the theoretical basis for incorporating the generative classifier into the DNNs. Our extensive experiments on images and tabular data demonstrate that the proposed classifier distinguishes OOD samples more accurately than the state-of-the-art method, while maintaining the classification accuracy for ID samples. We introduce a novel objective for training deep neural networks (DNNs) with a generative classifier, which is able to effectively detect out-of-distribution samples as well as classify in-distribution samples into known classes. We first derive the learning objective from Gaussian discriminant analysis, and then propose the distance-based confidence score for out-of-distribution sample detection. Metric learning objective for classification. The key idea of our objective is to optimize the deep learning model so that the latent representations (i.e., the outputs of the last layer) of data samples in the same class gather together and thereby form an independent sphere. In other words, it aims to learn each class-conditional distribution in the latent space to follow a normal distribution that is entirely separable from the other class-conditional distributions. Using the obtained distributions, we can calculate the class-conditional probabilities that indicate how likely an input sample is to be generated from each distribution, and this probability can serve as a good measure of the confidence. We define the two terms of the objective based on the Euclidean distance between the data representations obtained by the DNNs, denoted by f(x), and the center of each class-conditional distribution, denoted by c_k. Given N training samples {(x_1, y_1), ..., (x_N, y_N)} from K different classes, the objective is described as follows:

min_{W, {c_k}, {b_k}}  (1/N) Σ_{i=1}^{N} [ −log ( exp(−‖f(x_i) − c_{y_i}‖² + b_{y_i}) / Σ_{k=1}^{K} exp(−‖f(x_i) − c_k‖² + b_k) ) + λ ‖f(x_i) − c_{y_i}‖² ]

The objective includes three types of trainable parameters: the weights of the DNNs W, the class centers c_1, ..., c_K, and the biases b_1, ..., b_K. All of them can be effectively optimized by stochastic gradient descent (SGD) and back-propagation, which are widely used in deep learning. Note that we directly optimize the latent space induced by the DNNs using the Euclidean distance, similarly to other metric learning objectives. Existing deep metric learning based on the triplet loss learns the distance among training samples, utilizing their label information to capture their similarities in the metric space for a variety of retrieval tasks.
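To make the objective concrete, here is a minimal sketch of the dm-layer and the loss (PyTorch is assumed; writing the second term as a squared distance penalty follows the KL derivation given next, and all class and variable names are ours):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DistanceClassifier(nn.Module):
    """dm-layer: the logit for class k is -||f(x) - c_k||^2 + b_k."""
    def __init__(self, dim, num_classes):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, dim))  # c_k
        self.biases = nn.Parameter(torch.zeros(num_classes))        # b_k

    def forward(self, z):                          # z = f(x), shape (batch, dim)
        sq_dist = torch.cdist(z, self.centers) ** 2
        return -sq_dist + self.biases              # (batch, K) distance logits

def generative_loss(logits, z, centers, y, lam=1.0):
    nll = F.cross_entropy(logits, y)                     # negative log posterior term
    pull = ((z - centers[y]) ** 2).sum(dim=1).mean()     # KL-derived pull-to-center term
    return nll + lam * pull
```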
In contrast to such triplet-based metric learning, our objective focuses on the distance between the samples and their target class centers for the accurate modeling of class-conditional distributions. Derivation from Gaussian discriminant analysis. Our objective for the generative classifier can be understood from the perspective of Gaussian discriminant analysis (GDA). The generative classifier defines the posterior distribution P(y|x) by using the class-conditional distribution P(x|y) and the class prior P(y). In the case of GDA, each class-conditional distribution is assumed to follow the multivariate Gaussian distribution (i.e., P(x|y = k) = N(x | μ_k, Σ_k)) and the class prior is assumed to follow the categorical distribution (i.e., P(y = k) = β_k / Σ_{k'=1}^{K} β_{k'}). To simply fuse GDA with DNNs, we further fix all the class covariance matrices to the identity matrix (i.e., Σ_k = I). Then, the posterior probability that a sample f(x) belongs to the class k is described as

P(y = k | f(x)) = exp(−‖f(x) − μ_k‖² + log β_k) / Σ_{k'=1}^{K} exp(−‖f(x) − μ_{k'}‖² + log β_{k'}).

Considering μ_k and log β_k as the class center c_k and the bias b_k respectively, the first term of our objective is equivalent to the negative log posterior probability. That is, the objective eventually trains the classifier by maximizing the posterior probability for the training samples. However, the direct optimization of the DNNs and the other parameters by its gradient does not guarantee that the class-conditional distributions become Gaussian distributions and that the class centers are the actual class means of the training samples. Thus, to enforce our GDA assumption, we minimize the Kullback-Leibler (KL) divergence between the k-th empirical class-conditional distribution and the Gaussian distribution whose mean and covariance are c_k and I, respectively. The empirical class-conditional distribution is represented by the average of the Dirac delta functions for all training samples of a target class, i.e.,

P̂_k(x) = (1/N_k) Σ_{i: y_i = k} δ(x − f(x_i)),

where N_k is the number of the training samples of the class k. Then, the KL divergence is formulated as

KL(P̂_k ‖ N(c_k, I)) = (1/N_k) Σ_{i: y_i = k} (1/2) ‖f(x_i) − c_k‖² − H(P̂_k) + const.

The entropy term of the empirical class-conditional distribution can be calculated by using the definition of the Dirac measure. By minimizing this KL divergence for all the classes, we can approximate the K class-conditional Gaussian distributions. Finally, we complete our objective by combining this KL term with the posterior term using the λ-weighted sum, in order to control the effect of the regularization. We remark that λ is a hyperparameter used for training the model which depends only on the ID data, not the OOD data; thus it does not need to be tuned for different test distributions. In-distribution classification. Since our objective maximizes the posterior probability for the target class of each sample, P(y = y_i | x), we can predict the class label of an input sample as the class that has the highest posterior probability:

ŷ = argmax_k P(y = k | x) = argmax_k ( −‖f(x) − c_k‖² + b_k ).

In terms of DNNs, our proposed classifier replaces the fully-connected layer (fc-layer), which computes the final classification score by w_k · f(x) + b_k, with the distance metric layer (dm-layer), which computes the distance from each center by −‖f(x) − c_k‖² + b_k. In other words, the class label is mainly predicted by the distance from each class center, so we use the terms "distance classifier" and "generative classifier" interchangeably in the rest of this paper. The dm-layer contains exactly the same number of model parameters as the fully-connected layer, because only the weight matrix W ∈ R^{K×d} is replaced with the class center matrix C ∈ R^{K×d}. Out-of-distribution detection.
Using the trained generative classifier (i.e., the class-conditional distributions obtained from the classifier), the confidence score of each sample can be computed based on the class-conditional probability P(x|y = k). Taking the log of the probability, we simply define the confidence score D(x) using the Euclidean distance between a test sample and the center of the closest class-conditional distribution in the latent space:

D(x) = max_k ( −‖f(x) − c_k‖² ).

This distance-based confidence score yields discriminative values between ID and OOD samples. In the experiment section, we show that the Euclidean distance in the latent space of our distance classifier is more effective at detecting the samples not belonging to the K classes, compared to the Mahalanobis distance in the latent space of the softmax classifier. Moreover, it does not require further computation to obtain the class means and covariance matrix, and the predictive uncertainty can be measured by a single DNN inference. Relationship to the deep one-class classifier. Recent studies on one-class classification, which have been mainly applied to anomaly detection, try to employ DNNs in order to effectively model the normality of a single class. Inspired by early work on one-class classification, including the one-class support vector machine (OC-SVM) (Schölkopf et al., 2001) and support vector data description (SVDD), Ruff et al. (2018; 2019) proposed a simple yet powerful deep learning objective, DeepSVDD. It trains the DNNs to map samples of the single known class close to its class center in the latent space, showing that it finds a hypersphere of minimum volume with the center c:

min_W (1/N) Σ_{i=1}^{N} ‖f(x_i; W) − c‖².

Our DNNs with the distance classifier can be interpreted as an extension of DeepSVDD for multi-class classification, which incorporates K one-class classifiers into a single network. In the proposed objective, the first term makes the K classifiers distinguishable for the multi-class setting, and the second term learns each classifier by gathering the training samples into their corresponding center, as done in DeepSVDD. The purpose of the one-class classifier is to determine whether a test sample belongs to the target class or not; thus, training one for each class is useful for detecting out-of-distribution samples in our task as well. In this section, we present experimental results that support the superiority of the proposed model. Using tabular and image datasets, we compare the performance of our distance classifier (i.e., DNNs with the dm-layer) with that of the softmax classifier (i.e., DNNs with the fc-layer) in terms of both ID classification and OOD detection. We also provide an empirical analysis of the effect of our regularization term. Our code and preprocessed datasets will be publicly available for reproducibility. Experimental settings. We first evaluate our distance classifier using four multi-class tabular datasets with real-valued attributes: GasSensor, Shuttle, DriveDiagnosis, and MNIST. They are downloaded from the UCI Machine Learning repository, and we use them after preprocessing all the attributes using z-score normalization. Table 1 summarizes the details of the datasets. To simulate the scenario in which the test distribution includes both ID and OOD samples, we build the training and test set by regarding one of the classes as the OOD class and the rest of them as the ID classes. We exclude the samples of the OOD class from the training set, then train the DNNs using only the ID samples for classifying inputs into the K−1 classes.
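Given a trained model, the score D(x) and its use in the leave-one-class-out protocol described here can be sketched as follows (scikit-learn's `roc_auc_score` is assumed for the AUROC metric, and `head` refers to the dm-layer sketched earlier):

```python
import torch
from sklearn.metrics import roc_auc_score

@torch.no_grad()
def confidence(model, head, x):
    """D(x) = max_k -||f(x) - c_k||^2; higher means more in-distribution."""
    z = model(x)  # latent features f(x)
    return -(torch.cdist(z, head.centers) ** 2).min(dim=1).values

def ood_auroc(model, head, x_test, is_id):
    # is_id: 1 for in-distribution test samples, 0 for the held-out OOD class
    scores = confidence(model, head, x_test)
    return roc_auc_score(is_id, scores.cpu().numpy())
```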
Returning to the evaluation protocol: the test set contains all samples of the OOD class as well as the ID samples held out for testing. The evaluations are repeated while alternately changing the OOD class, so we consider K scenarios for each dataset. For all the scenarios, we perform 5-fold cross validation and report the average results. A multi-layer perceptron (MLP) with three hidden layers is chosen as the DNN for the tabular data. For fair comparison, we employ the same MLP architecture (# input attributes × 128 × 128 × 128 × # classes) for both the softmax classifier and the distance classifier. We use the Adam optimizer with initial learning rate η = 0.01, and set the maximum number of epochs to 100. In the case of tabular data, we empirically found that the regularization coefficient λ hardly affects the performance of our model, so we fix it to 1.0 without further hyperparameter tuning. We consider two competing methods using the DNNs optimized for the softmax classifier: 1) the baseline method uses the maximum softmax posterior probability as the confidence score, max_k P(y=k|x); and 2) the state-of-the-art method defines the score based on the Mahalanobis distance using the empirical class means μ̂_k and covariance matrix Σ̂, which is max_k −(f(x) − μ̂_k)^⊤ Σ̂^{−1} (f(x) − μ̂_k). Note that OOD samples are not available at training time, so we do not consider advanced calibration techniques for any of the methods; for example, temperature scaling, input perturbation, and regression-based feature ensembles. We measure the classification accuracy on ID test samples, as well as three performance metrics for OOD detection: the true negative rate (TNR) at 85% true positive rate (TPR), the area under the receiver operating characteristic curve (AUROC), and the detection accuracy. Experimental results. In Table 2, our proposed method (i.e., the distance-based confidence score) using the distance classifier considerably outperforms the other competing methods using the softmax classifier in most scenarios. Compared to the baseline method, the Mahalanobis distance-based confidence score sometimes performs better, and sometimes worse. This strongly indicates that the empirical data distribution in the latent space of the softmax classifier does not always take the form of a Gaussian distribution for each class. For this reason, our explicit modeling of class-conditional Gaussian distributions using the dm-layer guarantees the GDA assumption, which eventually helps to distinguish OOD samples from ID samples. Moreover, the distance classifier shows almost the same classification accuracy as the softmax classifier; that is, it improves the performance of OOD detection without compromising the performance of ID classification. For a qualitative comparison of the latent spaces of the softmax classifier and the distance classifier, we plot the 2D latent space after training DNNs whose latent dimension is set to 2. Figure 1 illustrates the training and test distributions of the GasSensor dataset, where class 3 (i.e., Ammonia) is considered the OOD class. Our DNNs successfully learn a latent space in which ID and OOD samples are separated more clearly than with the DNNs of the softmax classifier. Notably, in the case of the softmax classifier, the covariance matrices of the classes are not identical, which violates the necessary condition for the Mahalanobis distance-based confidence score to be effective in detecting OOD samples.
In this sense, the proposed score no longer requires such an assumption, because our objective makes the latent space satisfy the GDA assumption. Experimental settings. We validate the effectiveness of the distance classifier on OOD image detection as well. Two types of deep convolutional neural networks (CNNs) are utilized: ResNet with 100 layers and DenseNet with 34 layers. Specifically, we train ResNet and DenseNet for classifying three image datasets: CIFAR-10, CIFAR-100, and SVHN. Each dataset used for training the models is considered as ID samples, and the others are considered as OOD samples. To consider a variety of OOD samples at test time, we additionally measure performance using TinyImageNet (randomly cropped image patches of size 32 × 32 from the ImageNet dataset) and LSUN as test OOD samples. All CNNs are trained with stochastic gradient descent with Nesterov momentum, and we follow the training configuration (e.g., the number of epochs, batch size, learning rate and its scheduling, and momentum) suggested by prior work. The regularization coefficient λ of the distance classifier is set to 0.1. Experimental results. Table 3 shows that our distance classifier also generalizes well to deeper and more complicated models such as ResNet and DenseNet. Similarly to tabular data, our confidence score achieves the best performance in most test cases, and significantly improves detection performance over the state-of-the-art method. Interestingly, the distance classifier achieves better ID classification accuracy than the softmax classifier in Table 4. These results show the possibility that any existing DNN can improve its classification power by adopting the dm-layer, which learns class centers instead of class weights. From the experiments, we conclude that our proposed objective helps to accurately classify ID samples as well as to identify OOD samples from unknown test distributions. We further investigate the effects of our regularization term on the performance and on the data distributions in the latent space. We first evaluate the distance classifier using DNNs trained with different λ values from 10^{-3} to 10^{3}. Figure 2 presents the performance changes with respect to the λ value. In terms of ID classification, the classifier cannot be trained properly when λ grows beyond 10^{2}, because the regularization term is weighted too heavily compared to the log posterior term of our objective, which learns the decision boundary. On the other hand, we observe that the OOD detection performance is not much affected by the regularization coefficient, unless we set λ too small or too large; any value in the range (0.1, 10) suffices to obtain a well-performing model. We also visualize the 2D latent space in which the training distribution of MNIST is represented, varying the value of λ ∈ {0.01, 0.1, 1, 10}. In Figure 3, even with a small value of λ, we can find the decision boundary that partitions the space into K regions, whereas the class centers (plotted as black circles) do not match the actual class means and the samples are spread over the entire space. As λ increases, the class centers approach the actual class means, and simultaneously the samples get closer to their corresponding class centers, thereby forming multiple spheres. As discussed in Section 2, the regularization term enforces the empirical class-conditional distributions to approximate the Gaussian distribution with mean c_k.
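As a concrete sketch of the λ-weighted objective discussed here (a minimal version, assuming, as the text states, that the per-class KL regularizer reduces, up to constants, to pulling each sample toward its class center when Σ_k = I):

```python
import torch
import torch.nn.functional as F

def generative_classifier_loss(feats, logits, labels, centers, lam=1.0):
    # Posterior term: negative log posterior = cross-entropy over distance logits.
    posterior = F.cross_entropy(logits, labels)
    # Regularization term: pull each sample toward its own class center, which is
    # (up to constants) what the per-class KL term amounts to for identity covariance.
    pull = (feats - centers[labels]).pow(2).sum(dim=1).mean()
    return posterior + lam * pull
```

Sweeping `lam` over several orders of magnitude reproduces the trade-off analyzed in Figures 2 and 3: too large a value overwhelms the posterior term, while moderate values tighten the class clusters without hurting the decision boundary.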
As a result, a proper value of λ makes the DNNs place the class-conditional Gaussian distributions far apart from each other, so OOD samples are more likely to be located in the rest of the space. As DNNs have become the dominant approach in a wide range of real-world applications and the cost of their errors increases rapidly, many studies have been carried out on measuring the uncertainty of a model's prediction, especially for non-Bayesian DNNs. Prior work defined several types of uncertainty; among them, distributional uncertainty arises from the discrepancy between the training and test distributions. In this sense, the OOD detection task can be understood as modeling distributional uncertainty, and a variety of approaches have been attempted, including the parameterization of a prior distribution over predictive distributions and the training of multiple classifiers for an ensemble method. The baseline method was the first work to define the confidence score by the softmax probability of a given DNN classifier. To enhance the reliability of detection, ODIN applies two calibration techniques, i.e., temperature scaling and input perturbation, to the baseline method, which can push the softmax scores of ID and OOD samples further apart from each other. A subsequent method uses the Mahalanobis distance from class means instead of the softmax score, assuming that the samples of each class follow a Gaussian distribution in the latent space. However, all of them utilize DNNs trained for the discriminative (i.e., softmax) classifier, optimized only for classifying ID samples. Our approach differs from the existing methods in that it explicitly learns the class-conditional Gaussian distributions and computes the score based on the Euclidean distance from class centers. This paper introduces a deep learning objective to learn a multi-class generative classifier by fusing the concept of Gaussian discriminant analysis with DNNs. Unlike the conventional softmax classifier, our generative (or distance) classifier learns class-conditional distributions that are separated from each other and follow Gaussian distributions at the same time, and is thus able to effectively distinguish OOD samples from ID samples. We empirically show that our confidence score beats other competing methods in detecting both OOD tabular data and OOD images, and that the distance classifier can easily be combined with various types of DNNs to further improve their performance.
This paper proposes a deep generative classifier which is effective at detecting out-of-distribution samples as well as classifying in-distribution samples, by integrating the concept of Gaussian discriminant analysis into deep neural networks.
487
scitldr
One of the most prevalent symptoms among the elderly population, dementia, can be detected by classifiers trained on linguistic features extracted from narrative transcripts. However, these linguistic features are impacted in a similar but different fashion by the normal aging process. Aging is therefore a confounding factor, whose effects have been hard for machine learning classifiers to isolate. In this paper, we show that deep neural network (DNN) classifiers can infer ages from linguistic features, which is an entanglement that could lead to unfairness across age groups. We show this problem is caused by undesired activations of v-structures in causality diagrams, and it could be addressed with fair representation learning. We build neural network classifiers that learn low-dimensional representations reflecting the impacts of dementia yet discarding the effects of age. To evaluate these classifiers, we specify a model-agnostic score $\Delta_{eo}^{(N)}$ measuring how classifier results are disentangled from age. Our best models outperform baseline neural network classifiers in disentanglement, while compromising accuracy by as little as 2.56\% and 2.25\% on DementiaBank and the Famous People dataset respectively. One in three seniors dies of Alzheimer's and other types of dementia in the United States. Although its causes are not yet fully understood, dementia impacts people's cognitive abilities in a detectable manner. This includes different syntactic distributions in narrative descriptions BID28, more pausing BID29, higher levels of difficulty in recalling stories BID21, and impaired memory generally BID20. Fortunately, linguistic features can be used to train classifiers to detect various cognitive impairments. For example, BID8 detected primary progressive aphasia with up to 100% accuracy, and classified subtypes of primary progressive aphasia with up to 79% accuracy on a set of 40 participants using lexical-syntactic and acoustic features. BID7 classified dementia from control participants with 82% accuracy on narrative speech. However, dementia is not the only factor causing such detectable changes in the linguistic features of speech. Aging also impairs cognitive abilities BID11, but in subtly different ways from dementia. For example, aging inhibits fluid cognitive abilities (e.g., cognitive processing speed) much more than the consolidated abilities (e.g., those related to cumulative skills and memories) BID4. In other words, the detected changes in linguistic features, including more pauses and decreased short-term memory, could be attributable to the normal aging process alone rather than to dementia. Unfortunately, due to the high correlation between dementia and aging, it can be difficult to disentangle whether symptoms are caused by dementia or by aging BID24. Age is therefore a confounding factor in detecting dementia. The effects of confounding factors are hard for traditional machine learning algorithms to isolate, largely due to sampling biases in the data. For example, some algorithms predict higher risk of criminal recidivism for people with darker skin colors BID15, others identify images of smiling Asians as blinking BID19, and GloVe word embeddings can project European-American names significantly closer to words like 'pleasant' than African-American names BID3. It is preferable for classifiers to make decisions without biasing too heavily on demographic factors, and therefore to isolate the effects of confounding factors.
However, as we will show in Experiments, traditional neural network classifiers bias on age to infer dementia; this can lead to otherwise avoidable false positives and false negatives that are especially important to avoid in the medical domain. Graphically, if both age A and dementia D cause changes in a feature X, the result is a v-structure A → X ← D BID17, which is activated upon observing X. In other words, the confounder A affects P(D|X) if we train the classifier in the traditional way, which is to collect data points {(X, D)^{(i)}} and learn an inference model P̂(D|X) approximating the affected P(D|X). Traditionally, there are several ways to eliminate the effects of a confounding factor A. Controlling A gives a posterior distribution P(D|X, A)P(A). This is unfortunately unrealistic for small, imbalanced clinical datasets, in which sparsity may require stratification. However, the stratified distributions P(D|X, A) can be far from a meaningful representation of the real world (as we will show, e.g., in FIG3). Moreover, a discrepancy in the sizes of age groups can skew the age prior P(A), which would seriously inhibit the generalizability of a classifier. Controlling X Conducting a randomized controlled trial (RCT) on X removes all causal paths leading "towards" the variable X, which gives a de-confounded dataset P(D|do(X)), following the notation of BID27. However, RCTs on X are even less practical because simultaneously controlling multiple features produces an exponential number of scenarios, and doing this for more than 400 features requires far more data points than any available dataset. Pre-adjusting X according to a pre-trained model X = f(A) per feature could also approximately generate the dataset P(D|do(X)). However, such a model should account for participant differences; otherwise, interpolating using a fixed age A would give exactly the same features for everybody. The participant differences, however, are best characterized via X, which are the values we want to predict. To overcome the various problems with these methods, we let our classifiers be aware of cognitive impairments while actively filtering out any information related to aging. This is a fair representation learning framework that protects age as a "sensitive attribute". Fair representation learning frameworks can be used to train classifiers to consider subjects with different sensitive attributes equally. A sensitive attribute (or "protected attribute") can be race, age, or another variable whose impact should be ignored. In the framework proposed by BID32, classifiers were penalized for the differences in classification probabilities among different demographic groups. After training, the classifiers produced better demographic similarities while compromising only a little overall accuracy. To push the fair representation learning idea further, adversarial training can be incorporated. BID9 introduced generative adversarial networks, in which a generator and a discriminator are iteratively optimized against each other. Incorporating adversarial training, BID22 proposed a framework to learn a latent representation of data in order to limit an adversary's ability to classify based on the sensitive attributes. However, these approaches to fair representation learning only handle binary attributes; e.g., BID22 binarized age. To apply to cognitive impairment detection, we want to represent age on a continuous scale (with some granularity if necessary).
We formulate a fairness metric for evaluating the ability of a classifier to isolate a continuous-valued attribute. We also propose four models that compress high-dimensional feature vectors into low-dimensional representations which encrypt age from an adversary. We show empirically that our models achieve better fairness metrics than baseline deep neural network classifiers, while compromising accuracies by as little as 2.56% and 2.25% on our two empirical datasets, respectively. There are many measures of entanglement between classifier outcomes and specific variables. We briefly review some relevant metrics, and then propose ours. Correlation (Pearson, Spearman, etc.) is often used to compare classification outputs with component input features. To the extent that these variables are stochastic, several information-theoretic measures could be applied, including the Kullback-Leibler divergence and the Jensen-Shannon divergence. These can be useful to depict characteristics of two distributions when no further information about the available data is given. Mutual information can depict the extent of entanglement between two random variables. If we treat age (A) and dementia (D) as two random variables, then adopting the approach of BID18 gives an estimation of I(A, D). However, given the size of clinical datasets, it can be challenging to give precise estimations. An alternative approach is to assume that these variables fit some probabilistic model. For example, we might assume the age variable A, the dementia indicator variable D, and the multi-dimensional linguistic feature X fit some a priori model (e.g., the v-structure mentioned above, A → X ← D); then the mutual information between A and D is I(A, D) = H_A + H_D − H_{A,D}, where the entropy of age H_A and that of cognitive impairment H_D remain constant with respect to the input data X, and the joint term requires P(A, D) = ∫_X P(A|X) P(D|X) P(X) dX. However, this marginalized probability is difficult to approximate well, because (i) the accuracy of the term p(A|X) relies on the ability of our model to infer age from features, and (ii) it is hard to decide on a good prior distribution over linguistic features p(X). Moreover, we want to make the model agnostic to age, leading to a meaningless mutual information in the 'ideal' case. In our frameworks, we do not assume specific graphical models that correlate confounds and outcomes, and we propose more explainable metrics than the traditional statistical ones. The fairness representation learning literature offers several metrics for evaluating the extent of bias in classifiers. Generally, the fairer the classifier is, the less entangled the results are with respect to some protected feature. Demographic parity BID32 stated that the fairest scenario is reached when the composition of the classifier outcome for the protected group is equal to that of the whole population. While generally useful, this does not apply to our scenario, in which there really are more elderly people suffering from cognitive impairments than younger people (see FIG3). Cross-entropy loss BID5 used the binary classification loss of an adversary that tried to predict sensitive data from latent representations as a measure of fairness. This measure can only apply to models containing an adversary component, not traditional classifiers. Moreover, this loss also depends on the ability of the adversary network. For example, a high value of this loss could indicate confusing representations (i.e., sensitive information is protected well), but it could also indicate a weak adversary.
Equalized odds BID12 proposed that false positive rates should be equal across groups in the ideal case. BID22 defined a fairness distance as the absolute difference in false positive rates between two groups, plus that of the false negative rates: ∆_eo = |p_0 − p_1| + |n_0 − n_1|, where p_a and n_a correspond to the false positive rate and false negative rate, respectively, with sensitive attribute a = 0 (a = 1). We propose an extension of the metric used by BID22 to continuous sensitive attributes, suitable for evaluating an arbitrary two-class classifier. (Figure caption: our model architectures, each with an interpreter I, adversary A, and classifier C; in age-indep-autoencoder and age-indep-entropy (FIG0), a reconstructor R tries to reconstruct the input data from the hidden representation; in age-indep-consensus-nets (FIG0), a discriminator D tells apart from which modality the representation originates.) First, age groups along a scale are divided so that each group has multiple participants with both positive and negative diagnoses. Let a be the age group each participant is in. Then, we aim for the expected false positive (FP) rates of the classifier to be as constant as possible across age groups, and likewise for the false negative (FN) rates. To measure their variability, we use the sum of their differences against the mean: ∆_eo^{(N_a)} = Σ_a (|p_a − p̄| + |n_a − n̄|), where x̄ represents the mean of variable x. Special cases. To illustrate the nature of our metric, we apply it to several special cases: 1. When there is only one age group, our fairness metric has its best possible value: ∆_eo = 0. 2. When there are only two age groups, our metric equals that of BID22. 3. In the extreme case where there are as many age groups as there are sample points (assuming no two people with identical ages have different diagnoses), our metric becomes less informative, because the empirical expected false positive rate of each group is either 0 or 1. This is a limitation of our metric, and is the reason we limit the number of age groups to accommodate the size of the training dataset. Bounds. Our metric is bounded. The lower bound, 0, is reached when all false positive rates are equal and all false negative rates are equal across age groups. Letting N_a be the number of age groups, an upper bound for ∆_eo^{(N_a)} is N_a for any better-than-trivial binary classifier. The detailed proof is included in the Appendix. Disentanglement. Our fairness metric illustrates disentanglement. A higher ∆_eo^{(N)} corresponds to a higher variation of incorrect predictions by the classifier across different age groups. Therefore, a lower value of ∆_eo^{(N)} is desired for classifiers that isolate the effects of age to a better extent. Throughout this paper, we use the terms 'fairness', 'disentanglement', and 'isolation' interchangeably. We explain a few design choices here, namely linearity and indirect optimization. We design ∆_eo^{(N)} to be as linear as possible, for explainability of the fairness score itself; this eliminates possible scores consisting of higher-order terms of the FP / FN rates. Rather than optimizing ∆_eo^{(N)} directly, we encourage the hidden representations to be age-agnostic (we explain how to set up age-agnostic models in the following section), since the FP / FN rates are not differentiable after all. In this section, we describe four different ways of building representation learning models, which we call age-indep-simple, age-indep-autoencoder, age-indep-consensus-net, and age-indep-entropy.
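Before detailing the models, here is a minimal sketch of computing ∆_eo^{(N)} (a hypothetical helper; it assumes binary diagnoses and precomputed age-group ids, with every group containing both classes, as required above):

```python
import numpy as np

def delta_eo(y_true, y_pred, age_group):
    """Sum over age groups of |FP_a - mean(FP)| + |FN_a - mean(FN)|; lower is fairer."""
    fp_rates, fn_rates = [], []
    for g in np.unique(age_group):
        m = age_group == g
        neg, pos = y_true[m] == 0, y_true[m] == 1
        fp_rates.append(np.mean(y_pred[m][neg] == 1))  # false positive rate in group g
        fn_rates.append(np.mean(y_pred[m][pos] == 0))  # false negative rate in group g
    fp, fn = np.array(fp_rates), np.array(fn_rates)
    return np.abs(fp - fp.mean()).sum() + np.abs(fn - fn.mean()).sum()
```

With one group the score is exactly 0, and with two groups it reduces to the BID22 distance, matching the special cases listed above.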
The simplest model consists of an interpreter network I that compresses the high-dimensional input data x into a low-dimensional representation, z = I(x). An adversary A tries to predict the exact age from the representation, â = A(z), and a classifier C estimates the probability of the label (diagnosis) based on the representation, ŷ = C(z). For optimization, we set up two losses: the classification negative log likelihood loss L_c and the adversarial (L2) loss L_a = ‖A(I(x)) − a‖². We want to train the adversary to minimize the L2 loss, the interpreter to maximize it, and the classifier (and interpreter) to minimize the classification loss; overall, min_A L_a, max_I L_a, and min_{I,C} L_c. The training steps are taken iteratively, as in previous work BID9. The age-indep-autoencoder structure is similar to BID22, and can be seen as an extension of the age-indep-simple structure. As in age-indep-simple, there is an interpreter I, an adversary A, and a classifier C network. The difference is that there is also a reconstructor network R that attempts to recover the input data from the hidden representation, x̂ = R(z). A reconstruction loss is added, and both the interpreter and the reconstructor are trained to minimize it, in addition to all the targets mentioned for the age-indep-simple network. The detailed algorithm is similar to Algorithm 1 and is given in the Appendix. The age-indep-consensus-net is another extension of the age-indep-simple structure, borrowing an idea from consensus networks BID33: that agreement between multiple modalities can result in representations that are beneficial for classification. By examining the performance of age-indep-consensus-net, we would like to see whether agreement between multiple modalities of data can be trained to be disentangled from age. As in the age-indep-simple structure, there is an adversary A and a classifier C. The interpreter, however, is replaced with several interpreters I_{1..M}, each compressing a subset of the input data (a "modality") into a low-dimensional representation. The key of age-indep-consensus-network models is that these representations are encouraged to be indistinguishable. For simplicity, we randomly divide the input features into three modalities (M = 3) with an equal (±1) number of features. A discriminator D tries to identify the modality from which a representation comes, and the networks are optimized iteratively: the discriminator minimizes its identification loss while the interpreters maximize it, in addition to minimizing L_c and maximizing L_a as before. The detailed algorithm is in the Appendix. Note that we do not combine the consensus network with the reconstructor because they do not work well together empirically. In one of the experiments by BID34, each interpreter I_m is paired with a reconstructor R_m and the performance decreases dramatically. The reconstructor encourages hidden representations to retain the fidelity of the data, while the consensus network urges hidden representations to keep only the information common among modalities, which prevents the reconstructor and the consensus mechanism from functioning together. The fourth model we apply to fair representation learning is motivated by categorical GANs, where information-theoretic metrics characterizing the confidence of predictions can be optimized.
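The alternating optimization shared by these models can be sketched as follows (an illustrative age-indep-simple step; the network and optimizer objects are assumed to be defined elsewhere, `opt_main` is assumed to hold only the parameters of I and C, and C is assumed to return log-probabilities):

```python
import torch.nn.functional as F

def train_step(x, y, age, I, A, C, opt_adversary, opt_main):
    # Step 1: the adversary learns to predict age from the (detached) representation.
    z = I(x).detach()
    loss_a = ((A(z).squeeze(-1) - age) ** 2).mean()   # adversarial L2 loss L_a
    opt_adversary.zero_grad()
    loss_a.backward()
    opt_adversary.step()

    # Step 2: interpreter + classifier classify well while confusing the adversary.
    z = I(x)
    loss_c = F.nll_loss(C(z), y)                      # classification loss L_c
    loss_a = ((A(z).squeeze(-1) - age) ** 2).mean()
    opt_main.zero_grad()
    (loss_c - loss_a).backward()                      # min L_c, max L_a w.r.t. I
    opt_main.step()
```

The autoencoder, consensus-net, and entropy variants add their reconstruction, modality-discrimination, or entropy terms to the second step, but keep this same two-phase structure.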
The categorical-GAN view motivates an additional loss-function term: we want to encourage the interpreter to increase the uncertainty (i.e., to maximize the entropy) of age predictions while letting the adversary become more confident in predicting ages from representations. Age-indep-entropy models have the same network structure as age-indep-autoencoder, except that instead of predicting the exact age, the adversary network outputs the probability p of a sample's age being larger than the mean. This enables us to define the empirical entropy H[p] = E_x[p log(1/p)], which describes the uncertainty of predicting age. Formally, the adversarial loss L_a combines a binary classification term with this entropy term, weighted by a hyper-parameter λ_H. For comparison, we also include two variants, namely age-indep-entropy (binary) and age-indep-entropy (Honly), each keeping only one of the two terms in L_a. In our experiments, we show that these two terms in L_a are better applied together. Overall, the training procedure is the same as for age-indep-autoencoder, and the algorithm pseudocode is in the Appendix. All models are implemented in PyTorch BID26 and optimized with Adam BID16 with an initial learning rate of 3 × 10^{−4} and L2 weight decay 10. For simplicity, we use fully connected networks with ReLU activations BID25 and batch normalization BID14 before output layers for all interpreter, adversary, classifier, and discriminator networks. Our frameworks can be applied to other types of networks in the future. DementiaBank. DementiaBank is the largest available public dataset for assessing cognitive impairments using speech, containing 473 narrative picture descriptions from subjects aged between 45 and 90 BID2. In each sample, a participant talks about what is happening in a clinically validated picture. There is no time limit per session, but the average description lasts about a minute. 79 samples are excluded due to missing age information. Of the remaining data samples, 182 are labeled 'control' and 213 are labeled 'dementia'. All participants have Mini-Mental State Examination (MMSE) scores BID6 between 1 and 30. Of all data samples containing age information, the mean age is 68.26 with standard deviation 9.00. The Famous People dataset BID1 contains 252 transcripts from 17 people (8 with dementia, including Gene Wilder, Ronald Reagan, and Glen Campbell, and 9 healthy controls, including Michael Bloomberg, Woody Allen, and Tara VanDerveer), collected and transcribed from publicly available speech data (e.g., press conferences, interviews, debates, talk shows). Seven data samples are discarded due to missing age information. Among the remaining samples, 121 are labeled as control and 124 as impaired. Note that the data samples were gathered across a wide range of ages (mean 59.25, standard deviation 13.60). For the people diagnosed with dementia, there are data samples gathered both before and after the diagnosis, all of which are labeled 'dementia'. The Famous People dataset thus permits early detection several years before diagnosis, which is a more challenging classification task than DementiaBank. (Figure caption: older participants in both DementiaBank (FIG3) and the Famous People dataset (FIG3) are more likely to have cognitive impairments.) (Table 1 caption: accuracy and fairness (∆_eo^{(2)} and ∆_eo^{(5)}) of several traditional classifiers; DNN is the baseline used to benchmark our neural-network-based representation learning models.) We extract 413 linguistic features from the narrative descriptions and their transcripts.
These features were previously identified as the most useful for this task BID28 BID7 BID21 BID13. Each feature is z-score normalized. Acoustic: mean, variance, skewness, and kurtosis of the first 42 cepstral coefficients. Speech fluency: pause-word ratio, utterance length, number and lengths of filled/unfilled pauses. Lexical: cosine similarity between pairs of utterances, word lengths, lexical richness (moving-average type-token ratio, Brunet's index, and Honoré's statistics BID10). PoS: number of occurrences of part-of-speech tags, tagged by SpaCy. Syntactic and semantic: occurrences of context-free grammar phrase types, parsed by Stanford CoreNLP BID23, and Yngve depth statistics BID31. As part of expository data analysis, we show that these linguistic features contain information indicating age. Simple fully connected neural networks can predict age with a mean absolute error of 15.5 ± 1.3 years (on DementiaBank) and 14.3 ± 2.5 years (on the Famous People dataset). This indicates that even simple neural networks are able to infer information about age from linguistic features. Neural classifiers can therefore also easily bias on age, given the utility of age in downstream tasks. We first set up benchmarks for our classifiers. We evaluate several traditional classifiers with our fairness metrics (∆_eo^{(2)} and ∆_eo^{(5)}, corresponding to dividing ages into N = 2 and N = 5 groups respectively). The results are listed in Table 1. A DNN is used as the baseline because all our models are based on neural networks, and DNN classifiers have had the best (or statistically indistinguishable from the best) accuracy on the DementiaBank and Famous People datasets. We evaluate the performance of our four proposed neural networks against the DNN baseline. As an additional ablation study, two variants of age-indep-entropy are also evaluated. (TAB1 caption: evaluation of our representation learning models; the "age-indep" prefix is replaced with "*" in model names; age-indep-simple and age-indep-autoencoder have better disentanglement scores, while the other two models can have better accuracy.) Accuracy. The fair representation learning models compromise accuracy in comparison to the DNN baselines. This confirms that part of the classification power of DNNs comes from biasing with regard to age. On DementiaBank, the age-indep-autoencoder reduces accuracy the least (only 2.56% in comparison to the DNN baseline). On the Famous People data, the age-indep-consensus and age-indep-entropy models compromise accuracies by only 2.25% and 2.75% respectively, which are not statistically different from the DNN baseline (p = 0.20 and 0.16 on 38-DoF one-tailed t-tests, respectively). Disentanglement. In comparison to the DNN baselines, our fair representation learning models improve disentanglement/fairness (on DementiaBank, p = 0.01 and 0.03 for age-indep-simple and age-indep-entropy on the two-group score, respectively, which are significant, and p = 0.08 and 0.09 for age-indep-autoencoder and age-indep-consensus-net, which are marginally significant; these differences are less significant on the five-group score); the improvements are mostly significant when measured by the two-group scores ∆_eo^{(2)}. Also, the five-group scores ∆_eo^{(5)} are less stable for both datasets, and the scores on the Famous People data have higher variances than on DementiaBank. Following is an explanation: DementiaBank has ∼400 data samples, so in 5-fold cross validation each of the five age groups has only ∼16 samples during evaluation; the Famous People data contains ∼250 samples, which increases the variance. When the number of groups N of ∆_eo^{(N)} is kept small (e.g., ∼100 samples per label per group, as in DementiaBank with N = 2), the fairness metrics are stable. The model age-indep-entropy is best used with a loss function containing both the binary classification term and the uncertainty minimization term.
As shown in TAB1, although they have similar fairness metrics, the two variants with only one term can have lower accuracy than age-indep-entropy. In general, age-indep-simple and age-indep-autoencoder achieve the best fairness metrics; noticeably, the better of them surpasses the traditional classifiers in both ∆_eo^{(2)} and ∆_eo^{(5)}. In conclusion, we identify the problem of entangling age in the detection of cognitive impairments. After explaining this problem with causality diagrams, we formulate it as a fair representation learning task, and propose a fairness score to measure the extent of disentanglement. We put forward four fair representation learning models that learn low-dimensional representations of data samples containing as little age information as possible. Our best model improves upon the DNN baseline in our fairness metrics, while compromising as little accuracy as 2.56% (on DementiaBank) and 2.25% (on the Famous People dataset). Proof of Theorem. For each of the age groups: |p_a − p̄| + |n_a − n̄| ≤ max{|p_a − 0| + |n_a − 0|, |p_a − 0.5| + |n_a − 0.5|} ≤ max{0.5, 1} = 1. Summing over the N_a age groups results in our upper bound N_a for non-trivial classifiers. The pseudo-code algorithms for our remaining three models (age-indep-autoencoder, age-indep-consensus-networks, and age-indep-entropy) follow the same alternating minibatch structure as Algorithm 1 and are given in the Appendix.
Show that age confounds cognitive impairment detection + solve with fair representation learning + propose metrics and models.
488
scitldr
Much of the focus in the design of deep neural networks has been on improving accuracy, leading to more powerful yet highly complex network architectures that are difficult to deploy in practical scenarios. As a result, there has been a recent interest in the design of quantitative metrics for evaluating deep neural networks that account for more than just model accuracy as the sole indicator of network performance. In this study, we continue the conversation towards universal metrics for evaluating the performance of deep neural networks for practical on-device edge usage by introducing NetScore, a new metric designed specifically to provide a quantitative assessment of the balance between accuracy, computational complexity, and network architecture complexity of a deep neural network. In what is one of the largest comparative analyses of deep neural networks in the literature, the NetScore metric, the top-1 accuracy metric, and the popular information density metric were compared across a diverse set of 60 different deep convolutional neural networks for image classification on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC 2012) dataset. The evaluation across these three metrics for this diverse set of networks is presented in this study to act as a reference guide for practitioners in the field. There has been a recent surge in both research and industrial interest in deep learning, with deep neural networks delivering strong performance across a wide variety of applications BID22. However, the practical industrial deployment bottlenecks BID24 associated with the powerful yet highly complex deep neural networks in the research literature have become increasingly visible, and as a result, the design of deep neural networks that strike a strong balance between accuracy and complexity has become a very hot area of research focus [18, 14, 34, 33, 26, BID31 36]. One of the key challenges in designing practical deep neural networks lies in the difficulty of assessing how well a particular network architecture strikes that balance. One of the most widely cited metrics is the information density metric proposed by BID0, which attempts to measure the relative amount of accuracy given network size. However, information density does not account for the computational requirements of performing network inference (e.g., MobileNet has more parameters than SqueezeNet but has lower computational requirements for network inference). Therefore, the exploration and investigation of universal performance metrics that account for accuracy, architectural complexity, and computational complexity is highly desired, as it has the potential to improve network model search and design. In this study, we introduce NetScore, a new metric designed specifically to provide a quantitative assessment of the balance between accuracy, computational complexity, and network architecture complexity of a deep neural network. Networks with low accuracy remain unusable in practical scenarios, regardless of how small or fast the network is. Furthermore, we set β = 0.5 and γ = 0.5 since, while architectural and computational complexity are the … The results presented in this study can act as a reference guide for practitioners in the field. The set of deep convolutional neural networks being evaluated in this study includes AlexNet BID22, … shown in Fig. 1(right). Similar to the trend observed in Fig. 1
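To make the metric concrete, here is a minimal sketch using the commonly published NetScore formulation, Ω(N) = 20·log10(a(N)^κ / (p(N)^β · m(N)^γ)); the exponent κ = 2 and the units below are taken from the published NetScore paper and should be treated as assumptions if they differ from the excerpted study:

```python
import math

def netscore(top1_accuracy, params_millions, macs_billions,
             kappa=2.0, beta=0.5, gamma=0.5):
    """Omega(N) = 20 * log10(a^kappa / (p^beta * m^gamma)).
    a: top-1 accuracy (%), p: parameters (millions), m: MACs (billions)."""
    return 20.0 * math.log10(
        top1_accuracy ** kappa / (params_millions ** beta * macs_billions ** gamma)
    )
```

Under this reading, accuracy dominates (κ > β, γ), which matches the text's point that low-accuracy networks remain unusable no matter how small or fast they are.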
We introduce NetScore, a new metric designed to provide a quantitative assessment of the balance between accuracy, computational complexity, and network architecture complexity of a deep neural network.
489
scitldr
Deep Neural Networks (DNNs) are vulnerable to adversarial attacks, especially white-box targeted attacks. This paper studies how aggressive white-box targeted attacks can be, going beyond the widely used Top-1 attacks. We propose to learn ordered Top-k attacks (k>=1), which enforce the Top-k predicted labels of an adversarial example to be the k (randomly) selected and ordered labels (the ground-truth label is exclusive). Two methods are presented. First, we extend the vanilla Carlini-Wagner (C&W) method and use it as a strong baseline. Second, we present an adversarial distillation framework consisting of two components: (i) computing an adversarial probability distribution for any given ordered Top-$k$ targeted labels; (ii) learning adversarial examples by minimizing the Kullback-Leibler (KL) divergence between the adversarial distribution and the predicted distribution, together with the perturbation energy penalty. In computing adversarial distributions, we explore how to leverage label semantic similarities, leading to knowledge-oriented attacks. In experiments, we test Top-k (k=1,2,5,10) attacks on the ImageNet-1000 val dataset using two popular DNNs trained on the clean ImageNet-1000 train dataset, ResNet-50 and DenseNet-121. Overall, the adversarial distillation approach obtains the best results, especially by a large margin when the computation budget is limited. It reduces the perturbation energy consistently at the same attack success rate for all four k's, and improves the attack success rate by a large margin over the modified C&W method for k=10. Despite the recent dramatic progress, deep neural networks (DNNs) trained for visual recognition tasks (e.g., image classification) can be easily fooled by so-called adversarial attacks, which utilize visually imperceptible, carefully-crafted perturbations to cause networks to misclassify inputs in arbitrarily chosen ways within the closed set of labels used in training, even with one-pixel attacks. The existence of adversarial attacks hinders the deployment of DNN-based visual recognition systems in a wide range of applications, such as autonomous driving and smart medical diagnosis, in the long run. In this paper, we are interested in learning visually-imperceptible targeted attacks under the white-box setting for image classification tasks. In the literature, most methods address targeted attacks in the Top-1 manner, in which an adversarial attack is said to be successful if a randomly selected label (not the ground-truth label) is predicted as the Top-1 label, with the added perturbation required to be visually imperceptible. One question arises: • The "robustness" of an attack method itself: how far is the attack method able to push the underlying ground-truth label in the prediction of the learned adversarial examples? Table 1 shows the evaluation of the "robustness" of different attack methods. The widely used C&W method does not push the GT labels very far, especially when smaller perturbation energy is targeted using a larger search range (e.g., the average rank of the GT label is 2.6 for C&W 9×1000). Consider Top-5: if the ground-truth labels of adversarial examples still largely appear in the Top-5 of the prediction, we may be over-confident about the 100% ASR,
especially when some downstream modules may rely on Top-5 predictions in their decision making. Table 1 below summarizes this evaluation (the Top-k columns give the proportion of GT labels appearing in the Top-k of the prediction, smaller is better; the last column gives the average rank of GT labels, larger is better; see Sec. 4 for details of the experimental settings):

Method        ASR     Top-3   Top-5   Top-10   Top-50   Top-100   Avg. rank of GT
C&W 9×30      99.9    36.9    50.5    66.3     90.0     95.1      20.4
C&W 9×1000    100     71.9    87.0    96.1     99.9     100       2.6
FGSM          80.7    25.5    37.8    52.8     81.2     89.2      44.2
PGD 10        100     3.3     6.7     12       34.7     43.9      306.5
MIFGSM 10     99.9    0.7     1.9     6.0      22.5     32.3      404.4

But the three untargeted attack approaches are much better at pushing away the GT labels, since they usually move against the GT label explicitly in the optimization; their perturbation energies, however, are usually much larger. As we shall show, more "robust" attack methods can be developed by harnessing the advantages of the two types of attack methods. In addition, the targeted Top-1 attack setting can limit the flexibility of attacks, and may lead to less rich perturbations. To facilitate explicit control of targeted attacks and enable more "robust" attack methods, one natural solution, which is the focus of this paper, is to develop ordered Top-k targeted attacks, which enforce the Top-k predicted labels of an adversarial example to be the k (randomly) selected and ordered labels (k ≥ 1; the GT label is exclusive). In this paper, we present two methods of learning ordered Top-k attacks. The basic idea is to design proper adversarial objective functions that result in imperceptible perturbations for any test image through iterative gradient-based back-propagation. First, we extend the vanilla Carlini-Wagner (C&W) method and use it as a strong baseline. Second, we present an adversarial distillation (AD) framework consisting of two components: (i) computing an adversarial probability distribution for any given ordered Top-k targeted labels; (ii) learning adversarial examples by minimizing the Kullback-Leibler (KL) divergence between the adversarial distribution and the predicted distribution, together with the perturbation energy penalty. The proposed AD framework can be viewed as applying network distillation frameworks for "the bad", induced by target adversarial distributions. To compute a proper adversarial distribution for any given ordered Top-k targeted labels, the AD framework is motivated by two aspects: (i) the difference between the objective functions used by the C&W method and the three untargeted attack methods (Table 1): the former maximizes the margin of the logits between the target and the runner-up (either GT or not), while the latter maximize the cross-entropy between the prediction probabilities (softmax of logits) and the one-hot distribution of the ground truth; and (ii) label smoothing methods, which are often used to improve the performance of DNNs by addressing the over-confidence issue in the one-hot vector encoding of labels. (Figure 1 caption: results on ResNet-50; AD is better than the modified C&W method (CW*); the thickness represents the ℓ2 energy, thinner is better; see Sec. 4 for details of the experimental settings.) More specifically, we explore how to leverage label semantic similarities in computing "smoothed" adversarial distributions, leading to knowledge-oriented attacks. We measure label semantic similarities using the cosine distance between off-the-shelf word2vec embeddings of labels, such as the pretrained Glove embedding. Along this direction, another question of interest is further investigated: are all Top-k targets equally challenging for an attack approach?
In experiments, we test Top-k attacks (k = 1, 2, 5, 10) on the ImageNet-1000 val dataset using two popular DNNs trained on the clean ImageNet-1000 train dataset, ResNet-50 and DenseNet-121, respectively. Overall, the adversarial distillation approach obtains the best results. It reduces the perturbation energy consistently at the same attack success rate for all four k's, and improves the attack success rate by a large margin over the modified C&W method for k = 10 (see Fig. 1). We observe that Top-k targets that are distant from the GT label, in terms of either label semantic distance or prediction scores on clean images, are actually more difficult to attack. In summary, not only can ordered Top-k attacks improve the "robustness" of attacks, but they also provide insights on how aggressive adversarial attacks can be (under affordable optimization budgets). Our Contributions. This paper makes three main contributions to the field of learning adversarial attacks: (i) The problem under study is novel: learning ordered Top-k adversarial attacks is an important problem that reflects the robustness of attacks themselves, but it has not been addressed in the literature. (ii) The proposed adversarial distillation framework is effective, especially when k is large (such as k = 5, 10). (iii) The proposed knowledge-oriented adversarial distillation is novel: it is worth exploring the existing distillation framework for a novel problem (ordered Top-k adversarial attacks) with some novel modifications (knowledge-oriented target distributions as "teachers"). The growing ubiquity of DNNs in advanced machine learning and AI systems dramatically increases their capabilities, but also increases the potential for new vulnerabilities to attacks. This situation has become critical, as many powerful approaches have been developed in which imperceptible perturbations to DNN inputs can deceive a well-trained DNN, significantly altering its prediction. Assuming full access to DNNs pretrained on clean images, white-box targeted attacks are powerful ways of investigating the brittleness of DNNs and their sensitivity to non-robust yet well-generalizing features in the data. Distillation. The central idea of our proposed AD method is built on distillation. Network distillation is a powerful training scheme proposed to train a new, usually lightweight, model (a.k.a. the student) to mimic another already trained model (a.k.a. the teacher). It takes a functional viewpoint of the knowledge learned by the teacher as the conditional distribution it produces over outputs given an input. It teaches the student to keep up with or emulate the teacher by adding regularization terms to the loss that encourage the two models to be similar directly based on the distilled knowledge, replacing the training labels. Label smoothing can be treated as simple hand-crafted knowledge to help improve model performance. Distillation has also been exploited to develop defense models that improve model robustness. Our proposed adversarial distillation method utilizes the distillation idea in the opposite direction, leveraging label-semantics-driven knowledge for learning ordered Top-k attacks and improving attack robustness. Adversarial Attack. For image classification tasks using DNNs, the discovery of the existence of visually-imperceptible adversarial attacks was a big shock in the development of DNNs. White-box attacks provide a powerful way of evaluating model brittleness.
In a plain and loose explanation, DNNs are universal function approximators, capable of even fitting random labels in large-scale classification tasks such as ImageNet-1000. Thus, adversarial attacks are generally learnable provided proper objective functions are given, especially since DNNs are trained with fully differentiable back-propagation. Many white-box attack methods focus on norm-ball constrained objective functions. The C&W method investigates 7 different loss functions; the best-performing loss function found by the C&W method has been applied in many attack methods and has achieved strong results. By introducing momentum in the MIFGSM method and the ℓp gradient projection in the PGD method, these methods usually achieve better performance in generating adversarial examples. Meanwhile, some other attack methods, such as StrAttack, also investigate different loss functions for better interpretability of attacks. Our proposed method leverages label semantic knowledge in the loss-function design for the first time. In this section, we first briefly introduce the white-box attack setting and the widely used C&W method under the Top-1 protocol, to be self-contained. Then we define the ordered Top-k attack formulation. To learn ordered Top-k attacks, we present the details of a modified C&W method as a strong baseline and the proposed AD framework. We focus on classification tasks using DNNs. Denote by (x, y) a pair of a clean input x ∈ X and its ground-truth label y ∈ Y. For example, in the ImageNet-1000 classification task, x represents an RGB image defined on the 224×224 lattice, so X ⊂ R^{3×224×224}, and y is the category label with Y = {1, · · ·, 1000}. Let f(·; Θ) be a DNN pretrained on clean training data, where Θ collects all estimated parameters and is fixed when learning adversarial examples. For notational simplicity, we denote a pretrained DNN by f(·). The prediction for an input x from f(·) is usually defined using the softmax function, P = softmax(z(x)), where P ∈ R^{|Y|} represents the estimated confidence/probability vector (P_c ≥ 0 and Σ_c P_c = 1) and z(x) is the logit vector. The predicted label is then inferred by ŷ = arg max_{c∈[1,|Y|]} P_c. The traditional Top-1 protocol of learning targeted attacks. For an input (x, y), given a target label t ≠ y, we seek to compute some visually-imperceptible perturbation δ(x, t, f) using the pretrained and fixed DNN f(·) under the white-box setting. White-box attacks assume complete knowledge of the pretrained DNN f, including its parameter values, architecture, training method, etc. The perturbed example is defined by x′ = x + δ(x, t, f), which is called an adversarial example of x if t = ŷ = arg max_c f(x′)_c and the perturbation δ(x, t, f) is sufficiently small according to some energy metric. The C&W Method. Learning δ(x, t, f) under the Top-1 protocol is posed as a constrained optimization problem: minimize_δ E(δ), subject to t = arg max_c f(x + δ)_c and x + δ ∈ [0, 1]^n (Eqn. 3), where E(·) is defined by an ℓp norm (e.g., the ℓ2 norm) and n is the size of the input domain (e.g., the number of pixels). To overcome the difficulty (non-linear and non-convex constraints) of directly solving Eqn. 3, the C&W method expresses it in a different form by designing loss functions L(x′) = L(x + δ) such that the first constraint t = arg max_c f(x′)_c is satisfied if and only if L(x′) ≤ 0. The best loss function proposed by the C&W method is the hinge loss, L(x′) = max(0, max_{c≠t} z(x′)_c − z(x′)_t), which induces penalties when the logit of the target label is not the maximum among all labels.
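A minimal sketch of this hinge loss and the λ-weighted objective described next (illustrative PyTorch; it assumes inputs normalized to [0, 1], so the tanh reparameterization keeps x + δ in the valid box, and `w` is the unconstrained optimization variable):

```python
import torch

def cw_hinge(logits, target):
    """max(0, max_{c != t} z_c - z_t): zero exactly when the target logit is largest."""
    z_t = logits[:, target]
    others = logits.clone()
    others[:, target] = float('-inf')
    return torch.clamp(others.max(dim=1).values - z_t, min=0.0)

def cw_objective(model, x, w, target, lam):
    x_adv = 0.5 * (torch.tanh(w) + 1.0)            # x' = x + delta, constrained to [0,1]^n
    energy = (x_adv - x).pow(2).flatten(1).sum(1)  # squared l2 perturbation energy
    return (energy + lam * cw_hinge(model(x_adv), target)).mean()
```

Minimizing this objective with gradient descent over `w`, while binary-searching `lam`, is exactly the optimization scheme the text describes next.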
Then, the learning problem is formulated as: minimize_δ E(δ) + λ · L(x + δ), subject to x + δ ∈ [0, 1]^n (Eqn. 5), which can be solved via back-propagation, with the constraint satisfied by introducing a tanh layer. For the trade-off parameter λ, a binary search is performed during learning (e.g., 9 × 1000). It is straightforward to extend Eqn. 3 to learning ordered Top-k attacks (k ≥ 1). Denote by (t_1, · · ·, t_k) the ordered Top-k targets (t_i ≠ y). We have: minimize_δ E(δ), subject to t_i being the i-th largest prediction of f(x + δ) for i = 1, · · ·, k, and x + δ ∈ [0, 1]^n (Eqn. 6). Directly solving Eqn. 6 is a difficult task, and proper loss functions are needed, similar in spirit to the approximation approaches used in the Top-1 protocol, to ensure the first constraint is satisfied once the optimization has converged (note that the optimization may fail, i.e., attacks may fail). 3.3 LEARNING ORDERED TOP-k ATTACKS. 3.3.1 A MODIFIED C&W METHOD. We can modify the loss function (Eqn. 4) of the C&W method accordingly to solve Eqn. 6: the modified loss L_CW^{(k)}(x′) (Eqn. 7) sums, over i = 1, · · ·, k, hinge terms that force the logit of t_i above the logits of all labels outside {t_1, · · ·, t_i}; it covers the vanilla C&W loss (Eqn. 4), i.e., L_CW^{(1)}(x′) = L_CW(x′) when k = 1. The C&W loss function does not care where the underlying GT label ends up, as long as it is not in the Top-k. On the one hand, this is powerful in terms of attack success rate; on the other hand, the GT label may remain very close to the Top-k, leading to over-confident attacks (see Table 1). In addition, it is generic for any given Top-k targets; as we will show, it is less effective if we select the Top-k targets from the subset of labels that are least like the ground-truth label in terms of label semantics. To overcome these shortcomings of the C&W loss function, in our adversarial distillation framework we adopt the viewpoint, proposed in the network distillation method, that the full confidence/probability distribution summarizes the knowledge of a trained DNN. We hypothesize that we can leverage the network distillation framework to learn ordered Top-k attacks by designing a proper adversarial probability distribution across the entire set of labels that satisfies the specification of the given ordered Top-k targets, facilitates explicit control of placing the GT label, and allows top-down integration of label semantics. Consider a given set of ordered Top-k targets {t_1, · · ·, t_k}, and denote by P^{AD} an adversarial probability distribution in which P^{AD}_{t_i} > P^{AD}_{t_j} (∀ i < j) and each target's probability exceeds that of every non-target label. The space of candidate distributions is huge. We present a simple knowledge-oriented approach to define the adversarial distribution: we first specify the logit distribution and then compute the probability distribution using softmax. Denote by Z the maximum logit (e.g., Z = 10 in our experiments). We define the adversarial logits z^{AD}_{t_i} for the ordered Top-k targets by decaying from Z with an empirically chosen decreasing factor γ (e.g., γ = 0.3 in our experiments) (Eqn. 8). For the remaining categories l ∉ {t_1, · · ·, t_k}, we define the adversarial logit based on label semantic similarity (Eqn. 9), where 0 ≤ α < z^{AD}_{t_k} is the maximum logit that can be assigned to any remaining category, s(a, b) is the semantic similarity between labels a and b, and ε is a small positive constant for numerical stability (e.g., ε = 1e−5). We compute s(a, b) using the cosine distance between the Glove embedding vectors of category names, with −1 ≤ s(a, b) ≤ 1. Here, when α = 0, we discard the semantic knowledge and treat all the remaining categories equally. Note that our design of P^{AD} is similar in spirit to the label smoothing technique and its variants, except that we target attack labels and exploit label semantic knowledge.
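One plausible reading of Eqns. 8 and 9 as code follows. This is a sketch only: the exact geometric decay for the target logits and the similarity weighting for the remaining labels are our assumptions, not a verbatim transcription of the paper's equations, and `sim` is assumed to hold pairwise Glove cosine similarities between category names.

```python
import torch

def adversarial_distribution(targets, sim, num_classes=1000,
                             Z=10.0, gamma=0.3, alpha=1.0, eps=1e-5):
    """Build P_AD: ordered targets get large, strictly decreasing logits; the
    remaining labels get small logits scaled by semantic similarity to the targets."""
    z = torch.zeros(num_classes)
    for i, t in enumerate(targets):              # ordered Top-k targets t_1, ..., t_k
        z[t] = Z * (gamma ** i)                  # assumed Eqn. 8: geometric decay from Z
    for l in range(num_classes):
        if l not in targets:                     # assumed Eqn. 9: similarity-scaled floor
            s = max(eps, max(sim[l][t] for t in targets))
            z[l] = alpha * s                     # intended to stay below the target logits
    return torch.softmax(z, dim=0)               # P_AD, used as the KL target
```

With P_AD in hand, the AD loss is the KL divergence between P_AD and the predicted distribution, plus the perturbation energy, optimized exactly as in the C&W scheme.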
The design choice is still preliminary, although we observe its effectiveness in experiments; we hope it encourages more sophisticated designs to be explored. With the adversarial probability distribution P^{AD} defined above as the target, we use the KL divergence as the loss function in our adversarial distillation framework, as done in network distillation: L_AD^{(k)}(x′) = KL(P^{AD} ‖ P(x′)), and we then follow the same optimization scheme as the C&W method (Eqn. 5). In this section, we evaluate ordered Top-k attacks with k = 1, 2, 5, 10 on the ImageNet-1000 benchmark using two pretrained DNNs, ResNet-50 and DenseNet-121, from the PyTorch model zoo. We implement our method using the AdverTorch toolkit. Our source code will be released. Data. ImageNet-1000 provides 50,000 images for validation. To study attacks, we utilize the subset of images for which the predictions of both ResNet-50 and DenseNet-121 are correct. To reduce the computational demand, we randomly sample a smaller subset, as is commonly done in the literature: we first randomly select 500 categories to enlarge the coverage of categories, and then randomly choose 2 images per selected category, resulting in 1000 test images in total. Settings. We follow the protocol used in the C&W method. We use only the ℓ2 norm as the energy penalty for perturbations in learning (Eqn. 5), but we evaluate learned adversarial examples in terms of three norms (ℓ1, ℓ2, and ℓ∞). We test two search schemes for the trade-off parameter λ in optimization: both use 9 steps of binary search, with 30 and 1000 iterations of optimization performed per trial of λ, respectively. In practice, the computation budget is an important factor, and less computationally expensive schemes are usually preferred. Only α = 1 is used in Eqn. 9 in experiments, for simplicity and due to computational demand. We compare the results under the three scenarios proposed in the C&W method: the Best Case settings test the attack against all incorrect classes and report the target class(es) that was least difficult to attack; the Worst Case settings test the attack against all incorrect classes and report the target class(es) that was most difficult to attack; and the Average Case settings select the target class(es) uniformly at random among the labels that are not the GT. We first test ordered Top-k attacks using ResNet-50 for the four selected k's. Table 2 summarizes the quantitative results and comparisons. For Top-10 attacks, the proposed AD method obtains significantly better results in terms of both ASR and the ℓ2 energy of the added perturbation; for example, the AD method has a relative 362.3% ASR improvement over the strong C&W baseline in the worst-case setting. For Top-5 attacks, the AD method obtains significantly better results when the search budget is relatively low (i.e., 9 × 30). For Top-k (k = 1, 2) attacks, both the C&W method and the AD method achieve 100% ASR, but the AD method has consistently lower perturbation energies, i.e., it finds more effective attacks and richer perturbations. Fig. 2 shows some learned adversarial examples of ordered Top-10 and Top-5 attacks. (Table 2 caption: results and comparisons under the ordered Top-k targeted attack protocol using randomly selected and ordered 10 targets (GT exclusive) in ImageNet using ResNet-50; for Top-1 attacks, we also compare with three state-of-the-art untargeted attack methods, FGSM, PGD, and MIFGSM, with 10 iterations used for both PGD and MIFGSM.)
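For completeness, the AD objective optimized in these experiments can be sketched as follows (illustrative; `p_ad` is the target distribution from the previous section, `w` is the tanh-space optimization variable, and inputs are assumed normalized to [0, 1]):

```python
import torch
import torch.nn.functional as F

def ad_objective(model, x, w, p_ad, lam):
    x_adv = 0.5 * (torch.tanh(w) + 1.0)            # same tanh box constraint as C&W
    energy = (x_adv - x).pow(2).flatten(1).sum(1)  # squared l2 perturbation energy
    log_p = F.log_softmax(model(x_adv), dim=1)     # predicted distribution P(x')
    kl = F.kl_div(log_p, p_ad.expand_as(log_p), reduction='none').sum(1)
    return (energy + lam * kl).mean()              # energy + lam * KL(P_AD || P(x'))
```

The same 9-step binary search over `lam` used for the C&W baseline applies unchanged here.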
Intuitively, we understand that different choices of ordered Top-k targets should not be equally difficult to attack. We conduct some experiments to test this hypothesis. In particular, we test whether label semantic knowledge can help identify the weak spots of different attack methods, and whether the proposed AD method gains more in those weak spots. We test Top-5 attacks using ResNet-50. Table 3 summarizes the results. We observe that for the 9 × 30 search budget, attacks are more challenging if the Top-5 targets are selected from the least-like set in terms of label semantic similarity (see Eqn. 9), or from the lowest-score set in terms of prediction scores on clean images.

To investigate whether the observations from ResNets hold for other DNNs, we also test DenseNet-121 on ImageNet-1000, under two settings (k = 1, 5). Overall, we obtain similar results: the proposed AD method has smaller perturbation energies and "cleaner" (lower-entropy) prediction distributions. Note that for Top-10 attacks, the 9 × 30 search scheme does not work (see Table 2).

Table 3: Results of ordered Top-5 targeted attacks with targets being selected based on (Top) label similarity, which uses the 5 most-like labels and the 5 least-like labels as targets respectively, and (Bottom) prediction score on the clean image, which uses the 5 highest-score labels and the 5 lowest-score labels. In both cases, GT labels are excluded. Results are reported under the ordered Top-k targeted attack protocol using randomly selected and ordered 5 targets (GT exclusive); for Top-1 attacks, we also compare with three state-of-the-art untargeted attack methods, FGSM, PGD and MIFGSM, with 10 iterations used for both PGD and MIFGSM.

This paper proposes to extend the traditional Top-1 targeted attack setting to the ordered Top-k setting (k ≥ 1) under the white-box attack protocol. Ordered Top-k targeted attacks can improve the robustness of attacks themselves. To our knowledge, this is the first work studying ordered Top-k attacks. To learn the ordered Top-k attacks, we present a conceptually simple yet effective adversarial distillation framework motivated by network distillation. We also develop a modified C&W method as a strong baseline for ordered Top-k targeted attacks. In experiments, the proposed method is tested on ImageNet-1000 using two popular DNNs, ResNet-50 and DenseNet-121, with consistently better results obtained. We investigate the effectiveness of label semantic knowledge in designing the adversarial distribution for distilling the ordered Top-k targeted attacks.

Discussions. We have shown that the proposed AD method is generally applicable for learning ordered Top-k attacks. But we note that the two components of the AD framework are in their simplest forms in this paper and need to be more thoroughly studied: designing more informative adversarial distributions to guide the optimization to learn adversarial examples better and faster, and investigating loss functions other than the KL divergence, such as the Jensen-Shannon (JS) divergence or the Earth-Mover distance. On the other hand, we observed that the proposed AD method is more effective when the computation budget is limited (e.g., using the 9 × 30 search scheme). This leads to the theoretically and computationally interesting question of whether different attack methods would all work comparably well if the computation budget were unlimited. Of course, in practice, we prefer more powerful methods when only a limited computation budget is allowed. Furthermore, we observed that both the modified C&W method and the AD method largely do not work in learning Top-k (k ≥ 20) attacks with the two search schemes (9 × 30 and 9 × 1000).
We are working on addressing the aforementioned issues to test the Top-k (k ≥ 20) cases, thus providing a thorough empirical answer to the question: how aggressive can adversarial attacks be?
ordered Top-k adversarial attacks
490
scitldr
Neural message passing algorithms for semi-supervised classification on graphs have recently achieved great success. However, for classifying a node these methods only consider nodes that are a few propagation steps away, and the size of this utilized neighborhood is hard to extend. In this paper, we use the relationship between graph convolutional networks (GCN) and PageRank to derive an improved propagation scheme based on personalized PageRank. We utilize this propagation procedure to construct a simple model, personalized propagation of neural predictions (PPNP), and its fast approximation, APPNP. Our model's training time is on par with or faster than that of previous models, and its number of parameters is on par with or lower. It leverages a large, adjustable neighborhood for classification and can be easily combined with any neural network. We show that this model outperforms several recently proposed methods for semi-supervised classification in the most thorough study done so far for GCN-like models. Our implementation is available online. Graphs are ubiquitous in the real world and in its description through scientific models. They are used to study the spread of information, to optimize delivery, to recommend new books, to suggest friends, or to find a party's potential voters. Deep learning approaches have achieved great success on many important graph problems such as link prediction BID15, graph classification BID12 BID31 BID13 and semi-supervised node classification BID43 BID21. There are many approaches for leveraging deep learning algorithms on graphs. Node embedding methods use random walks or matrix factorization to directly train individual node embeddings, often without using node features and usually in an unsupervised manner, i.e. without leveraging node classes BID33 BID40 BID30 BID15 BID35. Many other approaches use both graph structure and node features in a supervised setting. Examples for these include spectral graph convolutional neural networks BID6 BID11, message passing (or neighbor aggregation) algorithms BID19 BID21 BID16 BID34 BID28 BID13, and neighbor aggregation via recurrent neural networks BID36 BID24 BID10. Among these categories, the class of message passing algorithms has garnered particular attention recently due to its flexibility and good performance. Several works have been aimed at improving the basic neighborhood aggregation scheme by using attention mechanisms BID19 BID16 BID41, random walks BID0 BID44, edge features BID19 BID13 BID37, and by making it more scalable on large graphs BID44. However, all of these methods only use the information of a very limited neighborhood for each node. A larger neighborhood would be desirable to provide the model with more information, especially for nodes in the periphery or in a sparsely labelled setting. Increasing the size of the neighborhood used by these algorithms, i.e. their range, is not trivial since neighborhood aggregation in this scheme is essentially a type of Laplacian smoothing and too many layers lead to oversmoothing. BID42 highlighted the same problem by establishing a relationship between the message passing algorithm termed Graph Convolutional Network (GCN) by BID21 and a random walk. Using this relationship we see that GCN converges to this random walk's limit distribution as the number of layers increases. The limit distribution is a property of the graph as a whole and does not take the random walk's starting (root) node into account. As such it is unsuited to describe the root node's neighborhood.
Hence, GCN's performance necessarily deteriorates for a high number of layers (or aggregation/propagation steps). To solve this issue, in this paper, we first highlight the inherent connection between the limit distribution and PageRank BID32. We then propose an algorithm that utilizes a propagation scheme derived from personalized PageRank instead. This algorithm adds a chance of teleporting back to the root node, which ensures that the PageRank score encodes the local neighborhood for every root node BID32. The teleport probability allows us to balance the needs of preserving locality (i.e. staying close to the root node to avoid oversmoothing) and leveraging the information from a large neighborhood. We show that this propagation scheme permits the use of far more (in fact, infinitely many) propagation steps without leading to oversmoothing. Moreover, while propagation and classification are inherently intertwined in message passing, our proposed algorithm separates the neural network from the propagation scheme. This allows us to achieve a much higher range without changing the neural network, whereas in the message passing scheme every additional propagation step would require an additional layer. It also permits the independent development of the propagation algorithm and the neural network generating predictions from node features. That is, we can combine any state-of-the-art prediction method with our propagation scheme. We even found that adding our propagation scheme during inference significantly improves the accuracy of networks that were trained without using any graph information. Our model achieves state-of-the-art results while requiring fewer parameters and less training time compared to most competing models, with a computational complexity that is linear in the number of edges. We show these results in the most thorough study (including significance testing) of message passing models using graphs with text-based features that has been done so far.

We first introduce our notation and explain the problem our model solves. Let G = (V, E) be a graph with nodes V and edges E. Let n denote the number of nodes and m the number of edges. The nodes are described by the feature matrix X ∈ R^(n×f), with the number of features f per node, and the class (or label) matrix Y ∈ R^(n×c), with the number of classes c. The graph G is described by the adjacency matrix A ∈ R^(n×n). Ã = A + I_n denotes the adjacency matrix with added self-loops. One simple and widely used message passing algorithm for semi-supervised classification is the Graph Convolutional Network (GCN). In the case of two message passing layers its equation is

Z = softmax(Â ReLU(Â X W_0) W_1),

where Z ∈ R^(n×c) are the predicted node labels, Â = D̃^(−1/2) Ã D̃^(−1/2) is the symmetrically normalized adjacency matrix with self-loops, with the diagonal degree matrix D̃_ij = Σ_k Ã_ik δ_ij, and W_0 and W_1 are trainable weight matrices BID21.

With two GCN-layers, only neighbors in the two-hop neighborhood are considered. There are essentially two reasons why a message passing algorithm like GCN cannot be trivially expanded to use a larger neighborhood. First, aggregation by averaging causes oversmoothing if too many layers are used; it therefore loses its focus on the local neighborhood. Second, most common aggregation schemes use learnable weight matrices in each layer. Therefore, using a larger neighborhood necessarily increases the depth and number of learnable parameters of the neural network (the second aspect can be circumvented by using weight sharing, which is typically not the case, though). However, the required neighborhood size and neural network depth are two completely orthogonal aspects. This fixed relationship is a strong limitation and leads to bad compromises. We will start by concentrating on the first issue.
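For reference, a minimal sketch of the two-layer GCN propagation just defined (dense NumPy for readability; practical implementations use sparse operations):

```python
import numpy as np

def gcn_forward(A, X, W0, W1):
    """Sketch of Z = softmax(A_hat ReLU(A_hat X W0) W1) as defined above."""
    n = A.shape[0]
    A_tilde = A + np.eye(n)                       # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    A_hat = A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    H = np.maximum(A_hat @ X @ W0, 0.0)           # first propagation + ReLU
    Z = A_hat @ H @ W1                            # second propagation
    Z = Z - Z.max(axis=1, keepdims=True)          # row-wise softmax
    e = np.exp(Z)
    return e / e.sum(axis=1, keepdims=True)
```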
BID42 have shown that for a k-layer GCN the influence score of node x on y, I(x, y) = Σ_i Σ_j ∂Z_yi/∂X_xj, is proportional in expectation to a slightly modified k-step random walk distribution starting at the root node x, P_rw'(x → y, k). Hence, the information of node x spreads to node y in a random walk-like manner. If we take the limit k → ∞ and the graph is irreducible and aperiodic, this random walk probability distribution P_rw'(x → y, k) converges to the limit (or stationary) distribution P_lim(→ y). This distribution can be obtained by solving the equation π_lim = Â π_lim. Obviously, the result only depends on the graph as a whole and is independent of the random walk's starting (root) node x. This global property is therefore unsuitable for describing the root node's neighborhood.

Figure 1: Illustration of (approximate) personalized propagation of neural predictions (PPNP, APPNP). Predictions are first generated from each node's own features by a neural network and then propagated using an adaptation of personalized PageRank. The model is trained end-to-end.

From message passing to personalized PageRank. We can solve the problem of lost focus by recognizing the connection between the limit distribution and PageRank BID32. The only differences between these two are the added self-loops and the adjacency matrix normalization in Â. Original PageRank is calculated via π_pr = A_rw π_pr, with A_rw = A D^(−1). Having made this connection we can now consider using a variant of PageRank that takes the root node into account: personalized PageRank BID32. We define the root node x via the teleport vector i_x, which is a one-hot indicator vector. Our adaptation of personalized PageRank can be obtained for node x using the recurrent equation π_ppr(i_x) = (1 − α) Â π_ppr(i_x) + α i_x, with the teleport (or restart) probability α ∈ (0, 1]. By solving this equation, we obtain

π_ppr(i_x) = α (I_n − (1 − α) Â)^(−1) i_x.

Introducing the teleport vector i_x allows us to preserve the node's local neighborhood even in the limit distribution. In this model the influence score of root node x on node y, I(x, y), is proportional to the y-th element of our personalized PageRank π_ppr(i_x). This value is different for every root node. How fast it decreases as we move away from the root node can be adjusted via α. By substituting the indicator vector i_x with the unit matrix I_n we obtain our fully personalized PageRank matrix Π_ppr = α (I_n − (1 − α) Â)^(−1), whose element (yx) specifies the influence score of node x on node y, I(x, y) ∝ Π_ppr^(yx). Note that due to symmetry Π_ppr^(yx) = Π_ppr^(xy), i.e. the influence of x on y is equal to the influence of y on x. This inverse always exists since 1/(1 − α) > 1 and therefore cannot be an eigenvalue of Â (see Appendix A).
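A minimal sketch of this fully personalized PageRank matrix (dense linear algebra, so it is only feasible for small graphs, which is exactly the efficiency issue addressed below):

```python
import numpy as np

def fully_personalized_pagerank(A, alpha=0.1):
    """Sketch of Pi_ppr = alpha * (I - (1 - alpha) * A_hat)^-1 as defined above.

    A: raw adjacency matrix (n x n). A_hat adds self-loops and applies the
    symmetric normalization; the result is dense (O(n^2) memory).
    """
    n = A.shape[0]
    A_tilde = A + np.eye(n)                       # add self-loops
    d = A_tilde.sum(axis=1)                       # degrees of A_tilde
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt     # symmetric normalization
    return alpha * np.linalg.inv(np.eye(n) - (1 - alpha) * A_hat)
```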
Personalized propagation of neural predictions (PPNP). To utilize the above influence scores for semi-supervised classification, we generate predictions for each node based on its own features and then propagate them via our fully personalized PageRank scheme to generate the final predictions. This is the foundation of personalized propagation of neural predictions. PPNP's model equation is

Z = softmax(α (I_n − (1 − α) Â)^(−1) H), with H = f_θ(X),

where X is the feature matrix and f_θ a neural network with parameter set θ generating the predictions H ∈ R^(n×c). Note that f_θ operates on each node's features independently, allowing for parallelization. Furthermore, one could substitute Â with any propagation matrix, such as A_rw. As a consequence, PPNP separates the neural network used for generating predictions from the propagation scheme. This separation additionally solves the second issue mentioned above: the depth of the neural network is now fully independent of the propagation algorithm. As we saw when connecting GCN to PageRank, personalized PageRank can effectively use even infinitely many neighborhood aggregation layers, which is clearly not possible in the classical message passing framework. Furthermore, the separation gives us the flexibility to use any method for generating predictions, e.g. deep convolutional neural networks for graphs of images. While generating predictions and propagating them happen consecutively during inference, it is important to note that the model is trained end-to-end. That is, the gradient flows through the propagation scheme during backpropagation (implicitly considering infinitely many neighborhood aggregation layers). Adding these propagation effects significantly improves the model's accuracy.

Efficiency analysis. Directly calculating the fully personalized PageRank matrix Π_ppr is computationally inefficient and results in a dense R^(n×n) matrix. Using this matrix would lead to a computational complexity and memory requirement of O(n^2) for training and inference. To solve this issue, reconsider the equation Z = softmax(α (I_n − (1 − α) Â)^(−1) H). Instead of viewing this equation as a combination of a dense fully personalized PageRank matrix with the prediction matrix, we can also view it as a variant of topic-sensitive PageRank, with each class corresponding to one topic BID17. In this view every column of H defines an (unnormalized) distribution over nodes that acts as a teleport set. Hence, we can approximate PPNP via an approximate computation of topic-sensitive PageRank.

Approximate personalized propagation of neural predictions (APPNP). More precisely, APPNP achieves linear computational complexity by approximating topic-sensitive PageRank via power iteration. While PageRank's power iteration is connected to the regular random walk, the power iteration of topic-sensitive PageRank is related to a random walk with restarts. Each power iteration (random walk/propagation) step of our topic-sensitive PageRank variant is thus calculated via

Z^(0) = H = f_θ(X), Z^(k+1) = (1 − α) Â Z^(k) + α H, Z^(K) = softmax((1 − α) Â Z^(K−1) + α H),

where the prediction matrix H acts as both the starting vector and the teleport set, K defines the number of power iteration steps, and k ∈ [0, K − 2]. Note that this method retains the graph's sparsity and never constructs an R^(n×n) matrix. The convergence of this iterative scheme can be shown by investigating the resulting series (see Appendix B). Note that the propagation scheme of this model does not require any additional parameters to train, as opposed to models like GCN, which typically require more parameters for each additional propagation layer. We can therefore propagate very far with very few parameters. Our experiments show that this ability is indeed very beneficial (see Section 6). A similar model expressed in the message passing framework would therefore not be able to achieve the same level of performance.
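A minimal sketch of APPNP's power iteration as defined above (the dense matrix product stands in for the sparse propagation used in practice):

```python
import numpy as np

def appnp_propagate(A_hat, H, alpha=0.1, K=10):
    """Sketch of APPNP propagation: K power iteration steps with restarts.

    A_hat: normalized adjacency with self-loops (ideally sparse),
    H: the neural network's predictions f_theta(X), shape (n, c).
    Only (sparse) matrix products are needed, so cost is linear in edges.
    """
    Z = H.copy()                                        # Z^(0) = H
    for _ in range(K):
        Z = (1 - alpha) * (A_hat @ Z) + alpha * H       # propagate + teleport
    # Row-wise softmax on the final step's output.
    Z = Z - Z.max(axis=1, keepdims=True)
    e = np.exp(Z)
    return e / e.sum(axis=1, keepdims=True)
```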
The reformulation of PPNP via fixed-point iterations illustrates a connection to the original graph neural network (GNN) model BID36. While the latter uses a learned fixed-point iteration, our approach uses a predetermined iteration (adapted personalized PageRank) and applies a learned feature transformation before propagation. In both PPNP and APPNP, the size of the neighborhood influencing each node can be adjusted via the teleport probability α. The freedom to choose α allows us to adjust the model for different types of networks, since varying graph types require the consideration of different neighborhood sizes, as shown in Section 6 and described by BID15 and BID1.

Several works have tried to improve the training of message passing algorithms and increase the neighborhood available at each node by adding skip connections BID24 BID34 BID16 BID44. One recent approach combined skip connections with aggregation schemes BID42. However, the range of these models is still limited, as apparent in the low number of message passing layers used. While it is possible to add skip connections in the neural network used by our algorithm, this would not influence the propagation scheme. Our approach to solving the range problem is therefore unrelated to these models. Other work has facilitated training by combining message passing with co- and self-training. The improvements achieved by this combination are similar to those reported with other semi-supervised classification models BID7. Note that most algorithms, including ours, can be improved using self- and co-training. However, each additional step used by these methods corresponds to a full training cycle and therefore significantly increases the training time. Deep GNNs that avoid the oversmoothing issue have been proposed in recent works by combining residual (skip) connections with batch normalization BID18 BID9. However, our model solves this issue by simplifying the architecture via decoupling prediction and propagation, and does not rely on ad-hoc techniques that further complicate the model and introduce additional hyperparameters. Furthermore, since PPNP increases the range without introducing additional layers and parameters, it is easier and faster to train compared to a deep GNN.

Recently, many experimental evaluations have suffered from superficial statistical evaluation and from experimental bias caused by varying training setups and overfitting. The latter is caused by experiments using a single training/validation/test split, by not distinguishing clearly between the validation and test set, and by finetuning hyperparameters to each dataset or even data split separately. Message-passing algorithms are very sensitive to both data splits and weight initialization (as clearly shown by our evaluation). Thus, a carefully designed evaluation protocol is extremely important. Our work aims to establish such a thorough evaluation protocol. First, we run each experiment 100 times on multiple random splits and initializations. Second, we split the data into a visible and a test set, which do not change. The test set was only used once to report the final performance; in particular, it has never been used to perform hyperparameter and model selection. To further prevent overfitting we use the same number of layers and hidden units, dropout rate d, L2 regularization parameter λ, and learning rate l across datasets, since all datasets use bag-of-words as features.
To prevent experimental bias we optimized the hyperparameters of all models individually using a grid search on CITESEER and CORA-ML, and use the same early stopping criterion across models. Finally, to ensure the statistical robustness of our experimental setup, we calculate confidence intervals via bootstrapping and report the p-values of a paired t-test for our main claims. To our knowledge, this is the most rigorous study on GCN-like models which has been done so far. More details about the experimental setup are provided in Appendix C.

Datasets. We use four text-classification datasets for evaluation. CITESEER BID38, CORA-ML BID27 and PUBMED BID29 are citation graphs, where each node represents a paper and the edges represent citations between them. In the MICROSOFT ACADEMIC graph, edges represent coauthorship. We use the largest connected component of each graph. All graphs use a bag-of-words representation of the papers' abstracts as features. While large graphs do not necessarily have a larger diameter BID22, note that these graphs indeed have average shortest path lengths between 5 and 10 and therefore a regular two-layer GCN cannot cover the entire graph. TAB0 reports the dataset statistics.

Figure 2: Accuracy distributions of different models. The high standard deviation between data splits and initializations shows the importance of a rigorous evaluation, which is often omitted.

Baseline models. We compare to five state-of-the-art models: GCN, network of GCNs (N-GCN) (a), graph attention networks (GAT) BID41, bootstrapped feature propagation (bt. FP) BID7 and jumping knowledge networks with concatenation (JK) BID42. For GCN we also show the results of the (unoptimized) vanilla version (V. GCN) to demonstrate the strong impact of early stopping and hyperparameter optimization. The hyperparameters of all models are listed in Appendix D. To ensure a fair model comparison we used a neural network for PPNP that is structurally very similar to GCN and has the same number of parameters. We use two layers with h = 64 hidden units. We apply L2 regularization with λ = 0.005 on the weights of the first layer and use dropout with dropout rate d = 0.5 on both layers and the adjacency matrix. For APPNP, adjacency dropout is resampled for each power iteration step. For propagation we use the teleport probability α = 0.1 and K = 10 power iteration steps for APPNP. We use α = 0.2 on the MICROSOFT ACADEMIC graph due to its structural difference (see Figure 5 and its discussion). The combination of this shallow neural network with a comparatively high number of power iteration steps achieved the best results during hyperparameter optimization (see Appendix G).

Overall accuracy. The results for the accuracy (micro F1-score) are summarized in TAB1. Similar trends are observed for the macro F1-score (see Appendix E). Both models significantly outperform the state-of-the-art baseline models on all datasets; the improvements are statistically significant with p < 0.05, as tested via a paired t-test (see Appendix F). If anything, our rigorous setup might understate the improvements achieved by PPNP and APPNP. This thorough setup furthermore shows that the advantages reported by recent works practically vanish when training is harmonized, hyperparameters are properly optimized and multiple data splits are considered. A simple GCN with optimized hyperparameters outperforms several recently proposed models on our setup.
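As a reference for this protocol, here is a minimal sketch of the bootstrapped confidence intervals and the paired t-test over the 100 runs; the 95% level and 1000 resamples mirror the description above, while the helper names are our own:

```python
import numpy as np
from scipy import stats

def bootstrap_ci(accuracies, n_boot=1000, level=0.95, seed=0):
    """Bootstrapped confidence interval over per-run accuracies."""
    rng = np.random.default_rng(seed)
    accuracies = np.asarray(accuracies)
    means = [rng.choice(accuracies, size=len(accuracies), replace=True).mean()
             for _ in range(n_boot)]
    lo, hi = np.percentile(means, [(1 - level) / 2 * 100,
                                   (1 + level) / 2 * 100])
    return accuracies.mean(), (lo, hi)

def paired_significance(acc_model_a, acc_model_b):
    """Paired t-test over runs that share splits and initializations."""
    _, p = stats.ttest_rel(acc_model_a, acc_model_b)
    return p
```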
Figure 2 shows how broad the accuracy distribution of each model is. This is caused by both random initialization and different data splits (train / early stopping / test). This demonstrates how crucial a statistically rigorous evaluation is for a conclusive model comparison. Moreover, it shows the sensitivity (robustness) of each method; e.g. PPNP, APPNP and GAT typically have lower variance.

Training time per epoch. We report the average training time per epoch in TAB2. We decided to only compare the training time per epoch since all hyperparameters were solely optimized for accuracy and the early stopping criterion we use is very generous. Obviously, (exact) PPNP can only be applied to moderately sized graphs, while APPNP scales to large data. On average, APPNP is around 25% slower than GCN due to its higher number of matrix multiplications. It scales similarly with graph size as GCN and is therefore significantly faster than other more sophisticated models like GAT. This is observed even though our implementation improved GAT's training time roughly by a factor of 2 compared to the reference implementation.

Training set size. Since the labeling rate is often very small for real world datasets, investigating how the models perform with a small number of training samples is very important. FIG0 shows how the number of training nodes per class n_train impacts the accuracy on CORA-ML (for other datasets see Appendix H). The dominance of PPNP and APPNP increases further in this sparsely labelled setting. This can be attributed to their higher range, which allows them to better propagate the information further away from the (few) training nodes. We see further evidence for this when comparing the accuracy of APPNP and GCN depending on the distance between a node and the training set (in terms of shortest path). Appendix I shows that the performance gap between APPNP and GCN tends to increase for nodes that are far away from the training nodes. That is, nodes further away from the training set benefit more from the increase in range.

Number of power iteration steps. Figure 4 shows how the accuracy depends on the number of power iterations for two different propagation schemes. The first mimics the standard propagation as known from GCNs (i.e. α = 0 in APPNP). As clearly shown, the performance breaks down as we increase the number of power iterations K (since we approach the global PageRank solution). However, when using personalized propagation (with α = 0.1) the accuracy increases and converges to exact PPNP with infinitely many propagation steps, thus demonstrating that the personalized propagation principle is indeed beneficial. As also shown in the figure, it is enough to use a moderate number of power iterations (e.g. K = 10) to effectively approximate exact PPNP. Interestingly, we found that this number coincides with the highest shortest path distance of any node to the training set.

Teleport probability α. Figure 5 shows the effect of the hyperparameter α on the accuracy on the validation set. While the optimum differs slightly for every dataset, we consistently found a teleport probability of around α ∈ [0.05, 0.2] to perform best. This probability should be adjusted for the dataset under investigation, since different graphs exhibit different neighborhood structures BID15 BID1. Note that a higher α improves convergence speed.

Neural network without propagation. PPNP and APPNP are trained end-to-end, with the propagation scheme affecting (i) the neural network f_θ during training, and (ii) the classification decision during inference.
Investigating how the model performs without propagation shows if and how valuable this addition is. Figure 6 shows how propagation affects both training and inference. "Never" denotes the case where no propagation is used; essentially we train and apply a standard multilayer perceptron (MLP) f_θ using the features only. "Training" denotes the case where we use APPNP during training to learn f_θ; at inference time, however, only f_θ is used to predict the class labels. "Inference", in contrast, denotes the case where f_θ is trained without APPNP (i.e. a standard MLP on features). This pretrained network with fixed weights is then used with APPNP's propagation for inference. Finally, "Inf. & Training" denotes the regular APPNP, which always uses propagation. The best results are achieved with regular APPNP, which validates our approach. However, on most datasets the accuracy decreases surprisingly little when propagating only during inference. Skipping propagation during training can significantly reduce training time for large graphs, as all nodes can be handled independently. This also shows that our model can be combined with pretrained neural networks that do not incorporate any graph information and still significantly improve their accuracy. Moreover, Figure 6 shows that just propagating during training can also lead to large improvements. This indicates that our model can also be applied to online/inductive learning where only the features and not the neighborhood information of an incoming (previously unobserved) node are available.

In this paper we have introduced personalized propagation of neural predictions (PPNP) and its fast approximation, APPNP. We derived this model by considering the relationship between GCN and PageRank and extending it to personalized PageRank. This simple model decouples prediction and propagation and solves the limited range problem inherent in many message passing models without introducing any additional parameters. It uses the information from a large, adjustable (via the teleport probability α) neighborhood for classifying each node. The model is computationally efficient and outperforms several state-of-the-art methods for semi-supervised classification on multiple graphs in the most thorough study which has been done for GCN-like models so far. For future work it would be interesting to combine PPNP with more complex neural networks used e.g. in computer vision or natural language processing. Furthermore, faster or incremental approximations of personalized PageRank BID2 BID3 BID25 and more sophisticated propagation schemes would also benefit the method.

A EXISTENCE OF Π_ppr

The matrix Π_ppr = α (I_n − (1 − α) Â)^(−1) exists iff the determinant det(I_n − (1 − α) Â) ≠ 0, which is the case iff det(Â − (1/(1 − α)) I_n) ≠ 0, i.e. iff 1/(1 − α) is not an eigenvalue of Â. This value is always larger than 1 since the teleport probability α ∈ (0, 1]. Furthermore, the symmetrically normalized matrix Â has the same eigenvalues as the row-stochastic matrix Ã_rw. This can be shown by multiplying the eigenvalue equation Â v = λ v from the left with D̃^(−1/2). The largest eigenvalue of a row-stochastic matrix is 1, as can be proven using the Gershgorin circle theorem. Hence, 1/(1 − α) > 1 cannot be an eigenvalue of Â and the matrix Π_ppr always exists.

B CONVERGENCE OF APPNP

APPNP uses the iterative equation Z^(k+1) = (1 − α) Â Z^(k) + α H. After the k-th propagation step, the resulting predictions are

Z^(k) = ((1 − α)^k Â^k + α Σ_{i=0}^{k−1} (1 − α)^i Â^i) H.

If we take the limit k → ∞ the left term tends to 0 and the right term becomes a geometric series.
The series converges since α ∈ (0, 1] and Â is symmetrically normalized, so that its eigenvalues have magnitude at most 1, resulting in the exact PPNP propagation Z → α (I_n − (1 − α) Â)^(−1) H.

The sampling procedure is illustrated in FIG3. The data is first split into a visible and a test set. For the visible set, 1500 nodes were sampled for the citation graphs and 5000 for MICROSOFT ACADEMIC. The test set contains all remaining nodes. We use three different label sets in each experiment: a training set of 20 nodes per class, an early stopping set of 500 nodes, and either a validation or test set. The validation set contains the remaining nodes of the visible set. We use 20 random seeds for determining the splits. These seeds are drawn once and fixed across runs to facilitate comparisons. We use one set of seeds for the validation splits and a different set for the test splits. Each experiment is run with 5 random initializations on each data split, leading to a total of 100 runs per experiment.

The early stopping criterion uses a patience of p = 100 and an (unreachably high) maximum of n = 10,000 epochs. The patience is reset whenever the accuracy increases or the loss decreases on the early stopping set. We choose the parameter set achieving the highest accuracy and break ties by selecting the lowest loss on this set. This criterion was inspired by GAT BID41. We used TensorFlow BID26 for all experiments except bootstrapped feature propagation. All uncertainties and confidence intervals correspond to a confidence level of 95% and were calculated by bootstrapping with 1000 samples. We use the Adam optimizer with a learning rate of l = 0.01 and cross-entropy loss for all models BID20. Weights are initialized as described in BID14. The feature matrix is L1 normalized per row.

Vanilla GCN uses the original settings of two layers with h = 16 hidden units, no dropout on the adjacency matrix, L2 regularization parameter λ = 5 × 10^−4, and the original early stopping with a maximum of 200 steps and a patience of 10 steps based on the loss. The optimized GCN uses two layers with h = 64 hidden units, dropout on the adjacency matrix with d = 0.5, and L2 regularization parameter λ = 0.02. N-GCN uses h = 16 hidden units, R = 4 heads per random walk length, and random walks of up to K − 1 = 4 steps. It uses L2 regularization on all layers with λ = 1 × 10^−5 and the attention variant for merging the predictions BID0. Note that this model effectively uses RKh = 320 hidden units, which is 5 times as many units compared to GCN, GAT, and PPNP. For GAT we use the (well optimized) original hyperparameters, except the L2 regularization parameter λ = 0.001 and learning rate l = 0.01. As opposed to the original paper, we do not use different hyperparameters on PUBMED, as described in our experimental setup. Bootstrapped feature propagation uses a return probability of α = 0.2, 10 propagation steps, and 10 bootstrapping (self-training) steps with r = 0.1n training nodes added per step. We add the training nodes with the lowest entropy on the predictions. The number of nodes added per class is based on the class proportions estimated using the predictions. Note that this model does not include any stochasticity in its initialization. We therefore only run it once per train/early stopping/test split. For the jumping knowledge networks we use the concatenation variant with three layers and h = 64 hidden units per layer. We apply L2 regularization with λ = 0.001 on all layers and perform dropout with d = 0.5 on all layers but not on the adjacency matrix.
F PAIRED t-TEST

Figure 13: ∆ Accuracy (%) denotes the average improvement in percentage points of APPNP over GCN depending on the distance (number of hops) from the training nodes on different graphs. n denotes the number of nodes at each distance (averaged over multiple dataset splits). Note that the y-axis has a different scale per graph.
Personalized propagation of neural predictions (PPNP) improves graph neural networks by separating them into prediction and propagation via personalized PageRank.
491
scitldr
Most recent work in cross-lingual word embeddings is severely Anglocentric. The vast majority of lexicon induction evaluation dictionaries are between English and another language, and the English embedding space is selected by default as the hub when learning in a multilingual setting. With this work, however, we challenge these practices. First, we show that the choice of hub language can significantly impact downstream lexicon induction performance. Second, we both expand the current evaluation dictionary collection to include all language pairs using triangulation, and also create new dictionaries for under-represented languages. Evaluating established methods over all these language pairs sheds light on their suitability and presents new challenges for the field. Finally, in our analysis we identify general guidelines for strong cross-lingual embedding baselines, based on more than just Anglocentric experiments. Continuous distributional vectors for representing words (embeddings) have become ubiquitous in modern, neural NLP. Cross-lingual representations additionally represent words from various languages in a shared continuous space, which in turn can be used for Bilingual Lexicon Induction (BLI). BLI is often the first step towards several downstream tasks such as Part-Of-Speech (POS) tagging, parsing, document classification, and machine translation. Often, such shared representations are learned with a two-step process, whether under bilingual or multilingual settings (hereinafter BWE and MWE, respectively). First, monolingual word embeddings are learned over large swaths of text; such pre-trained word embeddings, in fact, are available for several languages and are widely used, like the fastText Wikipedia vectors. Second, a mapping between the languages is learned, in one of three ways: in a supervised manner if dictionaries or parallel data are available to be used for supervision, under minimal supervision e.g. using only identical strings, or even in a completely unsupervised fashion. Both in bilingual and multilingual settings, it is common that one of the language embedding spaces is the target to which all other languages get aligned (hereinafter "the hub"). We outline the details in Section 2. Despite all the recent progress in learning cross-lingual embeddings, we identify a major shortcoming of previous work: it is by and large English-centric. Notably, most MWE approaches essentially select English as the hub during training by default, aligning all other language spaces to the English one. We argue and empirically show, however, that English is a poor hub language choice. In BWE settings, on the other hand, it is fairly uncommon to denote which of the two languages is the hub (often this is implied to be the target language). However, we experimentally find that this choice can greatly impact downstream performance, especially when aligning distant languages. This Anglocentricity is even more evident at the evaluation stage. The lexica most commonly used for evaluation are the MUSE lexica, which cover 45 languages, but with translations only from and into English. Even so, alternative evaluation dictionaries are also very English- and European-centric: prior work reports results on Spanish-English and Italian-English, and Artetxe et al. (2018a) on English paired with Italian, German, Finnish, Spanish, and Turkish.
We argue that cross-lingual word embedding mapping methods should look beyond English for their evaluation benchmarks because, compared to all others, English is a language with disproportionately large available data and relatively poor inflectional morphology, e.g. it lacks case, gender, and complex verbal inflection systems. These two factors allow for an overly easy evaluation setting which does not necessarily generalize to other language pairs. In light of this, equal focus should instead be devoted to evaluation over more diverse language pairs that also include morphologically rich and low-resource languages. With this work, we attempt to address these shortcomings, providing the following contributions:

• We show that the choice of the hub when evaluating on diverse language pairs can lead to significantly different performance (e.g., by more than 10 percentage points for BWE over distant languages). We also show that English is often a suboptimal hub for MWE.

• We identify some general guidelines for choosing a hub language which could lead to stronger baselines; less isometry between the hub and the source and target embedding spaces mildly correlates with performance, as does typological distance (a measure of language similarity based on language family membership trees). For distant languages, multilingual systems should in most cases be preferred over bilingual ones.

• We provide resources for training and evaluation on non-Anglocentric language pairs. We outline a simple triangulation method with which we extend the MUSE dictionaries to an additional 2352 lexicons covering 49 languages, and we present results on a subset of them. We also create new evaluation lexica for under-resourced languages using Azerbaijani, Belarusian, and Galician as our test cases. We additionally provide recipes for creating such dictionaries for any language pair with available parallel data.

In the supervised bilingual setting, given two languages L = {l_1, l_2} and their pre-trained row-aligned embeddings X_1, X_2, respectively, a transformation matrix M is learned such that

M = argmin_{M ∈ Ω} ||X_1 − M X_2||.

The set Ω can potentially impose a constraint over M, such as the very popular constraint of restricting it to be orthogonal. Previous work has empirically found that this simple formulation is competitive with other more complicated alternatives. The orthogonality assumption ensures that there exists a closed-form solution, derived from the Singular Value Decomposition (SVD) of X_1 X_2^T. Note that in this case only a single matrix M needs to be learned, because ||X_1 − M X_2|| = ||M^(−1) X_1 − X_2||, while at the same time a model that minimizes ||X_1 − M X_2|| is as expressive as one minimizing ||M_1 X_1 − M_2 X_2||, and easier to learn. In the minimally supervised or even the unsupervised setting, the popular methods follow an iterative refinement approach. Starting with a seed dictionary (e.g. from identical strings or numerals), an initial mapping is learned in the same manner as in the supervised setting. The initial mapping, in turn, is used to expand the seed dictionary with high confidence word translation pairs. The new dictionary is then used to learn a better mapping, and so the iterations continue until convergence. We will generally refer to such methods as MUSE-like.
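For reference, the closed-form solution under the orthogonality constraint takes only a few lines; here is a minimal sketch, assuming the columns of X_1 and X_2 hold the embeddings of the seed translation pairs:

```python
import numpy as np

def procrustes_map(X1, X2):
    """Sketch of the orthogonal Procrustes solution.

    X1, X2: (d x n) matrices whose i-th columns are the embeddings of the
    i-th translation pair. The orthogonal M minimizing ||X1 - M X2|| is
    U V^T, where U S V^T is the SVD of X1 X2^T.
    """
    U, _, Vt = np.linalg.svd(X1 @ X2.T)
    return U @ Vt

# Usage sketch: map the full language-2 space into the language-1 space.
# M = procrustes_map(X1_dict, X2_dict); X2_mapped = M @ X2_all
```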
Similarly, in a multilingual setting, one could start with N languages L = {l_1, l_2, ..., l_N} and their respective pre-trained embeddings X_1, X_2, ..., X_N, and then learn N − 1 bilingual mappings between a pre-selected target language and all others. Hence, one of the language spaces is treated as a target (the hub) and remains invariant, while all others are mapped into the (now shared) hub language space. Alternatively, those mappings could be jointly learned with the MAT+MPSR method, also taking advantage of the inter-dependencies between any two language pairs. Importantly, though, there is no closed form solution for learning the joint mapping, hence a solution needs to be approximated with gradient-based methods. MAT+MPSR generalizes the bilingual adversarial approach to multiple languages and also follows an iterative refinement procedure. In either case, a language is chosen as the hub, and N − 1 mappings for the other languages are learned. Other than MAT+MPSR, the only other unsupervised multilingual approach we are aware of proposes to incrementally align multiple languages by adding each new language as a hub. We decided, though, against comparing to this method, because (a) it requires learning O(N^2) mappings for relatively small improvements, and (b) the order in which the languages are added is an additional hyperparameter that would explode the experimental space.

Lexicon Induction. One of the most common downstream evaluation tasks for the learned cross-lingual word mappings is Lexicon Induction (LI), the task of retrieving the most appropriate word-level translation for a query word from the mapped embedding spaces. Specialized evaluation (and training) dictionaries have been created for multiple language pairs, with the MUSE dictionaries most often used, providing word translations between English (En) and 48 other high- to mid-resource languages, as well as all 30 pairs among 6 very similar Romance and Germanic languages (English, French, German, Spanish, Italian, Portuguese). Given the mapped embedding spaces, the translations are retrieved using a distance metric, with Cross-Lingual Similarity Scaling (CSLS) the most common and best performing in the literature. Intuitively, CSLS decreases the scores of pairs that lie in dense areas, increasing the scores of rarer words (which are harder to align). The retrieved pairs are compared to the gold standard and evaluated using precision at k (P@k, evaluating how often the correct translation is within the k retrieved nearest neighbours of the query). Throughout this work we report P@1, which is equivalent to accuracy, but we also provide results with P@5 and P@10 in the Appendix.

As other works have recently noted, the typically used evaluation dictionaries cover a narrow breadth of the possible language pairs, with the majority of them focusing on pairs with English (as with the MUSE dictionaries) or among high-resource European languages. In this section, we first outline our method for creating new dictionaries for low-resource languages. Then, we describe the simple triangulation process that allows us to create dictionaries among all 49 MUSE languages.
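Before turning to dictionary construction, the CSLS retrieval criterion described above can be sketched as follows (dense NumPy for clarity; k = 10 nearest neighbours is a common choice, not a value fixed by the text):

```python
import numpy as np

def csls_scores(src_vecs, trg_vecs, k=10):
    """Sketch of CSLS scoring for lexicon induction.

    src_vecs, trg_vecs: L2-normalized embedding matrices in the shared space.
    CSLS(x, y) = 2*cos(x, y) - r_T(x) - r_S(y), where r_T(x) is the mean
    similarity of x to its k nearest target neighbours (r_S symmetrically),
    penalizing words that lie in dense areas.
    """
    sims = src_vecs @ trg_vecs.T                        # cosine similarities
    r_src = np.sort(sims, axis=1)[:, -k:].mean(axis=1)  # density around sources
    r_trg = np.sort(sims, axis=0)[-k:, :].mean(axis=0)  # density around targets
    return 2 * sims - r_src[:, None] - r_trg[None, :]

# P@1: for each query row, the predicted translation is the argmax column.
```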
Our approach for constructing dictionaries is fairly straightforward, inspired by phrase table extraction techniques from phrase-based MT. Rather than relying on manual inspection, which would be impossible for all language pairs, we rely on fairly simple heuristics for controlling the quality of our dictionaries. The first step is collecting publicly available parallel data between English and the low-resource language of interest. We use data from the TED, OpenSubtitles, WikiMatrix, bible, and JW300 (Agić and Vulić, 2019) datasets. This results in 354k, 53k, and 623k English-to-X parallel sentences for Azerbaijani (Az), Belarusian (Be), and Galician (Gl), respectively. We align the parallel sentences using fast_align, and extract symmetrized alignments using the gdfa heuristic. In order to ensure that we do not extract highly domain-specific word pairs, we only use the TED, OpenSubtitles, and WikiMatrix parts for word-pair extraction. Also, in order to control for quality, we only extract word pairs if they appear in the dataset more than 5 times, and if the alignment probability is higher than 30%. With this process, we end up with about 6k, 7k, and 38k word pairs for Az-En, Be-En, and Gl-En, respectively. Following standard conventions, we sort the word pairs according to source-side frequency, and use the intermediate-frequency ones for evaluation, typically using the 5000-6500 rank boundaries. The same process can be followed for any language pair with a sufficient volume of parallel data (needed for training a decent word alignment model). In fact, we can produce similar dictionaries for a large number of languages, as the combination of the recently created JW300 and WikiMatrix datasets provides an average of more than 100k parallel sentences in 300 languages.

Our second method for creating new dictionaries is inspired by phrase table triangulation ideas from the pre-neural MT community. The concept can be easily explained with an example, visualized in Figure 1. Consider the Portuguese (Pt) word trabalho which, according to the MUSE Pt-En dictionary, has the words job and work as possible En translations. In turn, these two En words can be translated to 4 and 5 Czech (Cs) words, respectively. By utilizing the transitive property (which translation should exhibit), we can identify the set of 7 possible Cs translations for the Pt word trabalho. Following this simple triangulation approach, we create 2352 new dictionaries over language pairs among the 49 languages of the MUSE dictionaries. For consistency, we keep the same train and test splits as with MUSE, so that the source-side types are equal across all dictionaries with the same source language.
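A minimal sketch of the triangulation step just described (the dictionary formats are our assumption; the morphological filtering introduced next can be applied on top of the resulting candidates):

```python
def triangulate(src2en, en2trg):
    """Sketch of dictionary triangulation through English.

    src2en: dict mapping source words to sets of English translations,
    en2trg: dict mapping English words to sets of target translations.
    By transitivity, every target word reachable through a shared English
    pivot becomes a candidate translation of the source word.
    """
    src2trg = {}
    for src_word, en_words in src2en.items():
        candidates = set()
        for en_word in en_words:
            candidates |= en2trg.get(en_word, set())
        if candidates:
            src2trg[src_word] = candidates
    return src2trg

# e.g. triangulate({"trabalho": {"job", "work"}}, en2cs) yields the union of
# the Czech translations of "job" and "work" as candidates for "trabalho".
```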
Triangulating through English (which is unavoidable, due to the lack of non-English-centric dictionaries) is suboptimal: English is morphologically poor and lacks gender information. As a result, several inflected forms in morphologically-rich languages map to the same English form. Similarly, gendered nouns or adjectives in gendered languages map to English forms that lack gender information. For example, the MUSE Greek-English dictionary lists the word peaceful as the translation for all of ειρηνικός, ειρηνική, ειρηνικό, ειρηνικά, which are the male, female, and neutral (singular and plural) inflections of the same adjective. Equivalently, the English-Italian dictionary translates peaceful into either pacifico, pacifici, or pacifica (male singular, male plural, and female singular, respectively; see Table 1). When translating from or into English lacking context, all of those are reasonable translations. When translating between Greek and Italian, though, one should take gender and number into account. Hence, we devise a filtering method for removing blatant mistakes when triangulating morphologically rich languages. We rely on automatic morphological tagging, which we can obtain for most of the MUSE languages using the StanfordNLP toolkit. The morphological tagging uses the Universal Dependencies feature set, making the tagging comparable across almost all languages. Our filtering technique iterates through the bridged dictionaries: for a given source word, if we find a translation word with the exact same morphological analysis, we filter out all other translations with the same lemma but different tags. In the case of feature mismatch (for instance, Greek uses 4 cases and 3 genders while Italian has 2 genders and no cases), or if we only find a partial tag match over a feature subset, we filter out translations with disagreeing tags. Coming back to our Greek-Italian example, this means that for the form ειρηνικός we would only keep pacifico as a candidate translation (we show more examples in Table 1). Our filtering technique removes about 17% of the entries in our bridged dictionaries. Naturally, this filtering approach is restricted to languages for which a morphological analyzer is available.

Table 2: Lexicon Induction performance (measured with P@1) over 10 European languages (90 pairs). In each cell, the superscript denotes the hub language that yields the best result for that language pair. µ best: average using the best hub language. µ En: average using En as the hub. The shaded cells are the only language pairs where a bilingual MUSE system outperforms MAT+MPSR.

For our main MWE experiments, we train MAT+MPSR systems to align several language subsets, varying the hub language. For BWE experiments, we compare MUSE with MAT+MPSR. The differences in LI performance show the importance of the hub language choice with respect to each evaluation pair. As part of our call for moving beyond Anglocentric evaluation, we also present LI results on several new language pairs using our triangulated dictionaries. It is worth noting that we are predominantly interested in comparing the quality of the multilingual alignment when different hub languages are used. Hence, even slightly noisy dictionaries (like our low-resource language ones) are still useful: even if the skyline performance (from e.g. a perfect system) would not reach 100% accuracy due to noise, the differences between the systems' performance can be revealing.

We first focus on 10 European languages of varying morphological complexity and data availability (which affects the quality of the pre-trained word embeddings): Azerbaijani (Az), Belarusian (Be), Czech (Cs), English (En), Galician (Gl), Portuguese (Pt), Russian (Ru), Slovak (Sk), Spanish (Es), and Turkish (Tr). The choice of these languages additionally ensures that for our three low-resource languages (Az, Be, Gl) we include at least one related higher-resource language (Tr, Ru, Pt/Es, respectively), allowing for comparative analysis. Table 2 summarizes the best post-hoc performing systems for this experiment. In the second setting, we use a set of 7 more distant languages: English, French (Fr), Hindi (Hi), Korean (Ko), Russian, Swedish (Sv), and Ukrainian (Uk). This language subset has large variance in terms of typology and alphabet. The best performing systems are presented in Table 3.

Experimental Setup. We train and evaluate all models starting from the pre-trained Wikipedia fastText embeddings for all languages. We focus on the minimally supervised scenario which only uses similar character strings between any languages for supervision, in order to mirror the hard, realistic scenario of not having annotated training dictionaries between the languages.
We learn MWE with the MAT+MPSR method using the publicly available code. We also use MAT+MPSR for the BWE experiments, but we additionally train and compare to MUSE systems. We assess the statistical significance of the difference in performance between two systems using paired bootstrap resampling. Generally, a difference of 0.4-0.5 percentage points evaluated over our lexica is significant with p < 0.05.

The hub matters for distant languages. When using MUSE, the answer is simple: the closed form solution of the Procrustes problem is provably direction-independent, and we confirm this empirically (we provide complete results on MUSE in Table 15 in the Appendix). However, obtaining good performance with such methods requires the orthogonality assumption to hold, which for distant languages is rarely the case. In fact, we find that the gradient-based MAT+MPSR method in a bilingual setting over distant languages exhibits better performance than MUSE. Across Tables 2 and 3, in only a handful of examples (shaded cells) does MUSE outperform MAT+MPSR for BWE. On the other hand, we find that when aligning distant languages with MAT+MPSR, the difference between hub choices can be significant: in Az-En, for instance, using En as the hub leads to more than 7 percentage points difference compared to using Az. We show some examples in Table 4. On the other hand, when aligning typologically similar languages, the difference is less pronounced. For example, we obtain practically similar performance for Gl-Pt, Az-Tr, or Uk-Ru when using either the source or the target language as the hub. Note, though, that non-negligible differences can still occur, as in the case of Pt-Gl. In most cases, the higher-resourced language is a better hub than the lower-resourced one, especially when the amounts of available resources differ significantly (as in the case of Az and Be against any other language). Since BWE settings are not our main focus, we leave an extensive analysis of this observation for future work.

MWE: English is rarely the best hub language. In multilingual settings, we conclude that the standard practice of choosing English as the hub language is sub-optimal. Out of the 90 evaluation pairs from our European-languages experiment (Table 2), the best hub language is English in only 17 instances (less than 20% of the time). In fact, the average performance (over all evaluation pairs) when using En as the hub (denoted as µ_En) is 1.3 percentage points worse than the optimal (µ_best). In our distant-languages experiment (Table 3), English is the best choice for only 7 of the 42 evaluation pairs (again, less than 20% of the time). As before, using En as the hub leads to an average drop of one percentage point in performance aggregated over all pairs, compared to the averages of the optimal selection. The rest of the section attempts to provide an explanation for these differences.
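For reference, the paired bootstrap resampling test behind the significance claims above can be sketched as follows (a minimal version; the number of resamples and the win-rate summary are our choices):

```python
import numpy as np

def paired_bootstrap(correct_a, correct_b, n_boot=1000, seed=0):
    """Sketch of paired bootstrap resampling over LI evaluation queries.

    correct_a, correct_b: boolean numpy arrays over the same queries,
    marking whether system A / system B retrieved a correct translation.
    Returns the fraction of resamples in which B outperforms A.
    """
    rng = np.random.default_rng(seed)
    n = len(correct_a)
    wins = 0
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)      # resample queries with replacement
        if correct_b[idx].mean() > correct_a[idx].mean():
            wins += 1
    return wins / n_boot
```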
Expected gain for a hub language choice. As vividly outlined by the superscript annotations in Tables 2 and 3, there is not a single hub language that stands out as the best one. Interestingly, all languages, across both experiments, are the best hub language for some evaluation language pair. For example, in our European-languages experiment, Es is the best choice for about 20% of the evaluation pairs, Tr and En are the best for about 17% each, while Gl and Be are the best for only 5 and 3 language pairs, respectively. Clearly, not all languages are equally suited to be the hub language for many language pairs. Hence, it would be interesting to quantify how much better one could do by selecting the best hub language compared to a random choice. In order to achieve this, we define the expected gain G_l of using language l as follows. Assume that we are interested in mapping N languages into a shared space, and denote by a^l_m the accuracy over a specified evaluation pair m when using language l as the hub. A random choice among the N languages has an expected accuracy equal to the average accuracy over all possible hubs:

ā_m = (1/N) Σ_{k=1}^{N} a^k_m.

The gain for that evaluation dataset m when using language l as the hub, then, is g^l_m = a^l_m − ā_m. Now, for a collection of M evaluation pairs we simply average their gains, in order to obtain the expected gain for using language l as the hub:

G_l = (1/M) Σ_{m=1}^{M} g^l_m.

The results of this computation for both sets of experiments are presented in Figure 2. The bars marked 'overall' match our above definition, as they present the expected gain computed over all evaluation language pairs. For good measure, we also present the average gain per language aggregated over the evaluation pairs where that language was indeed the best hub language ('when best' bars). Perhaps unsurprisingly, Az seems to be the worst hub language choice among the 10 European languages of the first experiment, with an expected loss (negative gain) of -0.4. This can be attributed to how distant Az is from all other languages, as well as to the fact that the Az pre-trained embeddings are of lower quality compared to all other languages (as the Az Wikipedia dataset is significantly smaller than the others). Similarly, Hi and Sv show an expected loss in our second experiment. Note that English is not a bad hub choice per se: it exhibits a positive expected gain in both sets of experiments. However, there are languages with larger expected gains, like Es and Gl in the European-languages experiment, which have a twice-as-large expected gain, while Ru has a 4 times larger expected gain in the distant-languages experiment. Of course, the language subset composition of these experiments could possibly impact those numbers. For example, there are three very related languages (Es, Gl, Pt) in the European-languages set, which might boost the expected gain for that subset; however, the trends stand even if we compute the expected gain over a subset of the evaluation pairs, removing all pairs that include Gl or Pt. For example, after removing all Gl results, Es has a slightly lower expected gain of 0.32, but is still the language with the largest expected gain.
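The expected-gain computation defined above reduces to a few lines; here is a minimal sketch, assuming the accuracies are collected in a hubs-by-pairs matrix:

```python
import numpy as np

def expected_gains(acc):
    """Sketch of the expected gain G_l defined above.

    acc: array of shape (N_hubs, M_eval_pairs); acc[l, m] is the lexicon
    induction accuracy on evaluation pair m when language l is the hub.
    """
    random_baseline = acc.mean(axis=0)   # expected accuracy of a random hub
    gains = acc - random_baseline        # g_m^l for every hub and pair
    return gains.mean(axis=1)            # G_l, averaged over the M pairs
```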
We hypothesize that learning mappings for both language spaces of interest (hence rotating both spaces) allows for a more flexible alignment, which leads to better downstream performance compared to when one of the two spaces is fixed. Note that this contradicts the mathematical intuition discussed in Section 2, according to which a model learning a single mapping (keeping another word embedding space fixed) is as expressive as a model that learns two mappings, one for each of the languages. Our second finding is that the downstream performance correlates with measures of distance between languages and language spaces. The typological distance (d_gen) between two languages can be approximated through their genealogical distance over hypothesized language family trees, which we obtain from the URIEL typological database. Also, recent work motivated the use of the Gromov-Hausdorff (GH) distance as an a priori estimation of how well two language embedding spaces can be aligned under an isometric transformation (which is an assumption most methods rely on); that work also notes that vector space GH distance correlates with typological language distance. We refer the reader to the original works for further details. We find that there is a positive correlation between downstream LI performance and the genealogical distances between the source-hub and target-hub languages. The average (over all evaluation pairs) Pearson's correlation coefficient between P@1 and d_gen is 0.49 for the distant languages experiment and 0.38 for the European languages one. A similar positive correlation holds between performance and the sum of the GH distances between the source-hub and target-hub spaces. On our distant languages experiment, the coefficient between P@1 and GH is equal to 0.45, while it is slightly lower (0.34) for our European languages experiment. High correlation examples from each experiment, namely Gl-En and En-Hi, are shown in Figure 3. Bi-, tri-, and multilingual systems The last part of our analysis compares bilingual, trilingual, and multilingual systems, with a focus on the under-represented languages. Through multiple experiments (complete evaluations are listed in the Appendix) we reach two main conclusions. On one hand, when evaluating on typologically distant languages, one should use as many languages as possible. In Table 5 we present one such example with results on Az-Cs under various settings. On the other hand, when multiple related languages are available, one can achieve higher performance with multilingual systems containing all related languages and one more hub language, rather than learning diverse multilingual mappings using more languages. We confirm the latter observation with experiments on the Slavic (Be, Ru, Uk) and Iberian (Es, Gl, Pt) clusters, and present an example (Ru-Uk) in Table 5. With this work we challenge the standard practices in learning cross-lingual word embeddings. We empirically showed that the choice of the hub language is an important parameter that affects lexicon induction performance in both bilingual (between distant languages) and multilingual settings. More importantly, we hope that by providing new dictionaries and baseline results on several language pairs, we will steer the community towards evaluating all methods in challenging scenarios that include under-represented language pairs. Towards this end, our analysis provides insights and general directions for stronger baselines for non-Anglocentric cross-lingual word embeddings. A Does evaluation directionality matter?
We also explored whether there are significant differences in the evaluated quality of aligned spaces when computed in both directions (src-trg and trg-src). We find that the evaluation direction indeed matters a lot when the languages of the evaluation pair are very distant, in terms of morphological complexity and data availability (which affects the quality of the original embeddings). A prominent example, from our European-languages experiment, are evaluation pairs involving Az or Be. When evaluating on the Az-XX and Be-XX dictionaries, the word translation P@1 is more than 20 percentage points higher than when evaluating on the opposite direction (XX-Az or XX-Be). For example, Es-Az has a mere P@1 of 9.9, while Az-Es achieves a P@1 of 44.9. This observation holds even between very related languages (cf. Ru-Be: 12.8, Be-Ru: 41.1 and Tr-Az: 8.4, Az-Tr: 32.0), which supports our hypothesis that this difference is also due to the quality of the pre-trained embeddings. It is important to note that such directionality differences are not observed when evaluating distant pairs with presumably high-quality pre-trained embeddings, e.g. Tr-Sk or Tr-Es; the P@1 for both directions is very close. Here we provide the complete evaluation results for our multilingual experiments. Tables 6-11 present P@1, P@5, and P@10, respectively, for the experiment on the 10 European languages. Similarly, results on the distant languages experiment are shown in Tables 12, 13, and 14. Table 15 presents the P@1 of the bilingual experiments using MUSE.
The choice of the hub (target) language affects the quality of cross-lingual embeddings, which shouldn't be evaluated only on English-centric dictionaries.
492
scitldr
Interpreting generative adversarial network (GAN) training as approximate divergence minimization has been theoretically insightful, has spurred discussion, and has led to theoretically and practically interesting extensions such as f-GANs and Wasserstein GANs. For both classic GANs and f-GANs, there is an original variant of training and a "non-saturating" variant which uses an alternative form of generator update. The original variant is theoretically easier to study, but the alternative variant frequently performs better and is recommended for use in practice. The alternative generator update is often regarded as a simple modification to deal with optimization issues, and it appears to be a common misconception that the two variants minimize the same divergence. In this short note we derive the divergences approximately minimized by the original and alternative variants of GAN and f-GAN training. This highlights important differences between the two variants. For example, we show that the alternative variant of KL-GAN training actually minimizes the reverse KL divergence, and that the alternative variant of conventional GAN training minimizes a "softened" version of the reverse KL. We hope these results may help to clarify some of the theoretical discussion surrounding the divergence minimization view of GAN training.
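For concreteness, the "original" and "non-saturating" generator updates contrasted in this note are the two standard forms from the classic GAN setup; a small numeric sketch (the discriminator outputs are made up) shows their different behavior when D(G(z)) is near 0, which is the usual practical motivation for the alternative update.

```python
import numpy as np

d_fake = np.array([0.01, 0.02, 0.05])          # D(G(z)) early in training, near 0 on fakes
saturating = np.mean(np.log(1.0 - d_fake))     # original generator loss: E[log(1 - D(G(z)))]
non_saturating = -np.mean(np.log(d_fake))      # alternative: -E[log D(G(z))]
# The saturating loss is nearly flat here (vanishing gradient), while the
# non-saturating loss is large and steep, giving a usable learning signal.
print(saturating, non_saturating)
```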
Typical GAN training doesn't optimize Jensen-Shannon, but something like a reverse KL divergence.
493
scitldr
REINFORCE can be used to train models in structured prediction settings to directly optimize the test-time objective. However, the common case of sampling one prediction per datapoint (input) is data-inefficient. We show that by drawing multiple samples (predictions) per datapoint, we can learn with significantly less data, as we freely obtain a REINFORCE baseline to reduce variance. Additionally we derive a REINFORCE estimator with baseline, based on sampling without replacement. Combined with a recent technique to sample sequences without replacement using Stochastic Beam Search, this improves the training procedure for a sequence model that predicts the solution to the Travelling Salesman Problem. REINFORCE is a well-known policy optimization algorithm that learns directly from experience. Variants of it have been used to train models for a wide range of structured prediction tasks, such as Neural Machine Translation BID12 BID0, Image Captioning, and predicting solutions (tours) for the Travelling Salesman Problem (TSP) BID1 BID6. As opposed to maximum likelihood (supervised) learning, the appeal of using REINFORCE for structured prediction is that it directly optimizes the test-time performance. When using REINFORCE, often for each datapoint (e.g. a sentence, image or TSP instance) only a single sample/prediction (e.g. a translation, caption or tour) is used to construct a gradient estimate. From a classic Reinforcement Learning (RL) point of view, this makes sense, as we may not be able to evaluate multiple sampled actions for a state (datapoint). However, from a data point of view, this is inefficient if we can actually evaluate multiple samples, such as in a structured prediction setting. Reinforcement Learning with multiple samples/predictions for a single datapoint has been used before (e.g. BID14), but we use the samples as counterfactual information by constructing a (local, for a single datapoint) REINFORCE baseline. A similar idea was applied for variational inference by BID10. Many structured prediction tasks can be formulated in terms of sequence modelling, which is the focus of this paper. In most sequence modelling tasks, the objective is a deterministic function of the predicted sequence. As a result, duplicate sampled sequences are uninformative and therefore do not improve the quality of the gradient estimate. To solve this problem, we propose to use sampling without replacement to construct a better gradient estimate. This is inspired by recent work by BID7, who introduce Stochastic Beam Search as a method to sample sequences without replacement, and use this to construct a (normalized) importance-weighted estimator for (sentence level) BLEU score. We extend this idea to estimate policy gradients using REINFORCE, and we show how to use the same set of samples (without replacement) to construct a baseline. This way we can leverage sampling without replacement to improve training of sequence models. In our experiment, we consider the TSP and show that using REINFORCE with multiple samples is beneficial compared to single sample REINFORCE, both computationally and in terms of data-efficiency. Additionally, for a sample size of 4-8 samples per datapoint, sampling without replacement results in slightly faster learning.
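As a minimal illustration of the multi-sample idea above (k samples per datapoint, each baselined by the mean of the other k − 1 samples), consider the following numpy sketch; the returns are made up.

```python
import numpy as np

def reinforce_advantages(f_vals):
    """Given returns f(y_1..k) for k samples of the same datapoint, compute the
    advantage-like weights f(y_i) - B_i, where B_i is the leave-one-out mean."""
    f = np.asarray(f_vals, dtype=float)
    k = len(f)
    b = (f.sum() - f) / (k - 1)   # baseline for sample i: mean of the other samples
    return f - b                  # multiply by grad log p(y_i) in the backward pass

print(reinforce_advantages([1.0, 0.0, 2.0, 1.0]))
```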
The REINFORCE estimator allows estimating gradients of the expectation E_{y∼p_θ(y)}[f(y)] by the relation (equation 1): ∇_θ E_{y∼p_θ(y)}[f(y)] = E_{y∼p_θ(y)}[f(y) ∇_θ log p_θ(y)]. If we also have a context, or datapoint, x (such as a source sentence), we may write p_θ(y|x) and f(y, x), but in this paper we leave the dependence on x implicit. Extension of the derived estimators to a minibatch of datapoints x is straightforward. Typically, we estimate the expectation using samples y_1, ..., y_k, and we may reduce the variance of the estimator by using a baseline B_i that is independent of the sample y_i (but may depend on the other samples y_j, j ≠ i) (equation 2): ∇_θ E[f(y)] ≈ (1/k) Σ_{i=1}^k (f(y_i) − B_i) ∇_θ log p_θ(y_i). In practice, often a single sample y is used (per datapoint x, as we already have a batch of datapoints) to compute the estimate, e.g. k = 1, but in this paper we consider k > 1. In this paper, we consider a parametric distribution over discrete structures (sequences). Enumerating all n possible sequences as y^1, ..., y^n, we indicate with y^i the i-th possible outcome, which has log-probability φ_i = log p_θ(y^i) defined by the model. We can use the Gumbel-Max trick BID4 BID9 to sample y according to this distribution as follows: let G_i ∼ Gumbel (a standard Gumbel distribution) for i = 1, ..., n i.i.d., and let y = y^{i*}, where i* = argmax_i (φ_i + G_i). For a proof we refer to BID9. In a slight abuse of notation, we write G_{φ_i} = φ_i + G_i, and we call G_{φ_i} the (Gumbel-)perturbed log-probability of y^i. The Gumbel-Max trick can be extended to the Gumbel-Top-k trick BID7 to draw an ordered sample without replacement, by taking the top k largest perturbed log-probabilities (instead of just one, the argmax). The result is equivalent to sequential sampling without replacement, where after an element y is sampled, it is removed from the domain and the remaining probabilities are renormalized. The Gumbel-Top-k trick is equivalent to Weighted Reservoir Sampling BID3, as was noted by BID15. The ordered sample is also known as a partial ranking according to the Plackett-Luce model BID11 BID8. For a sequence model with an exponentially large domain, naive application of the Gumbel-Top-k trick is infeasible, but an equivalent result can be obtained using Stochastic Beam Search BID7. This modification of beam search expands the k partial sequences with maximum (Gumbel-)perturbed log-probability, effectively replacing the standard top-k operation by sampling without replacement. The resulting top k completed sequences are a sample without replacement from the sequence model, by the equivalence to the Gumbel-Top-k trick. For details we refer to BID7. For many applications we need to estimate the expectation of a function f(y), where y is the realization of a variable with a discrete probability distribution p_θ(y). When using Monte Carlo (MC) sampling (with replacement), we write y_i to indicate the i-th sample in a set of samples. In contrast, when sampling without replacement we find it convenient to write y^i (with superscript i) to refer to the i-th possible value in the domain, so (like we did in Section 2.2) we can enumerate the domain with n possible values as y^1, ..., y^n. This notation allows us to write out the expectation of f(y) (equation 3): E_{y∼p_θ(y)}[f(y)] = Σ_{i=1}^n p_θ(y^i) f(y^i). Using MC sampling with replacement, we estimate equation 3 using k samples y_1, ..., y_k: E_{y∼p_θ(y)}[f(y)] ≈ (1/k) Σ_{i=1}^k f(y_i). When sampling without replacement using the Gumbel-Top-k trick (Section 2.2) we write S as the set of the k largest indices of G_{φ_i} (i.e.
S = arg top-k {G_{φ_i} : i ∈ {1, ..., n}}), so the sample (of size k) without replacement is {y^i : i ∈ S}. We can use the sample S with the estimator derived by BID16, based on priority sampling BID2. This means that, to correct for the effects of sampling without replacement, we include importance weights p_θ(y^i)/q_{θ,κ}(y^i). Here κ is the (k+1)-th largest value of {G_{φ_i} : i ∈ {1, ..., n}}, i.e. the (k+1)-th largest Gumbel-perturbed log-probability, and q_{θ,a}(y^i) = P(G_{φ_i} > a) is the probability that the perturbed log-probability of y^i exceeds a. Then we can use the following estimator (equation 5): E_{y∼p_θ(y)}[f(y)] ≈ Σ_{i∈S} (p_θ(y^i)/q_{θ,κ}(y^i)) f(y^i). This estimator is unbiased, and we include a copy of the proof by BID7 (adapted from the proofs by BID2 and Vieira) in Appendix A, as this introduces notation and is the basis for the proof in Appendix C. Intuition behind this estimator comes from the related threshold sampling scenario, where instead of fixing the sample size k, we fix the threshold a and define a variably sized sample S = {i ∈ {1, ..., n} : G_{φ_i} > a}. With threshold sampling, each element y^i in the domain is sampled independently with probability P(G_{φ_i} > a) = q_{θ,a}(y^i), and p_θ(y^i)/q_{θ,a}(y^i) is a standard importance weight. As it turns out, instead of having a fixed threshold a, we can fix the sample size k and use κ as an empirical threshold (as i ∈ S if G_{φ_i} > κ), and still obtain an unbiased estimator BID2 BID16. As was shown by BID7, in practice it is preferred to normalize the importance weights to reduce variance. This means that we compute the normalization W(S) = Σ_{i∈S} p_θ(y^i)/q_{θ,κ}(y^i) and obtain the following (biased) estimator (equation 6): E_{y∼p_θ(y)}[f(y)] ≈ (1/W(S)) Σ_{i∈S} (p_θ(y^i)/q_{θ,κ}(y^i)) f(y^i).
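The sampling-without-replacement scheme and the importance-weighted estimator of equations 5-6 can be sketched as follows; this is a minimal numpy illustration for a small explicit domain, where q_{θ,a}(y^i) = P(G_{φ_i} > a) is computed from the standard Gumbel CDF, and the toy distribution and function values are made up.

```python
import numpy as np

def priority_sampling_estimate(log_probs, f_vals, k, seed=0):
    """Estimate E[f(y)] from k samples drawn without replacement via the
    Gumbel-Top-k trick, using the (k+1)-th largest perturbation as threshold."""
    rng = np.random.default_rng(seed)
    g_phi = log_probs + rng.gumbel(size=len(log_probs))  # perturbed log-probs G_phi_i
    order = np.argsort(-g_phi)
    S = order[:k]                                        # Gumbel-Top-k sample
    kappa = g_phi[order[k]]                              # empirical threshold kappa
    # q_{theta,kappa}(y^i) = P(G_phi_i > kappa) = 1 - exp(-exp(phi_i - kappa))
    q = 1.0 - np.exp(-np.exp(log_probs[S] - kappa))
    w = np.exp(log_probs[S]) / q                         # importance weights p/q
    return float(np.sum(w * f_vals[S]))                  # divide by w.sum() for eq. 6

phi = np.log(np.array([0.4, 0.3, 0.2, 0.05, 0.05]))
f = np.array([1.0, 0.5, 2.0, 0.0, 3.0])
print(priority_sampling_estimate(phi, f, k=3))
```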
Typically REINFORCE is applied with a single sample y per datapoint x (e.g. one translation per source sentence, or, in our experiment, a single tour per TSP instance). In some cases, it may be preferred to take multiple samples y per datapoint x, as this requires less data. Taking multiple samples also gives us counterfactual information which can be used to construct a strong (local) baseline. Additionally, we obtain computational benefits, as for encoder-decoder models we can obtain multiple samples using only a single pass through the encoder. With replacement, we can use the estimator in equation 2, where we construct a baseline B_i for the i-th term based on the other samples j ≠ i (equation 8): B_i = (1/(k−1)) Σ_{j≠i} f(y_j). The form in equation 8 is convenient for implementation, as it allows computing a fixed 'baseline' B = (1/k) Σ_{j=1}^k f(y_j) once and correcting for the bias (as B depends on y_i) by normalizing with the factor k/(k−1). For details and a proof of unbiasedness we refer to Appendix C. The basic REINFORCE without replacement estimator follows from combining equation 1 with equation 5 for an unbiased estimator: ∇_θ E[f(y)] ≈ Σ_{i∈S} (p_θ(y^i)/q_{θ,κ}(y^i)) f(y^i) ∇_θ log p_θ(y^i). Similar to equation 6, we can compute a lower-variance but biased variant by normalizing the importance weights using the normalization W(S) = Σ_{i∈S} p_θ(y^i)/q_{θ,κ}(y^i). When sampling without replacement, the individual samples are dependent, and therefore we cannot simply define a baseline based on the other samples as we did in Section 3.1. However, similar to the 'baseline' B above, we can construct a baseline based on the complete sample S (without replacement), using equation 5: B(S) = Σ_{j∈S} (p_θ(y^j)/q_{θ,κ}(y^j)) f(y^j). Using this baseline introduces a bias that we cannot simply correct for by a constant term (as we did in equation 8), as the importance weights depend on y^i. Instead, we weight the individual terms of the baseline by p_θ(y^j)/q_{j|i}(κ), where q_{j|i} is a conditional inclusion probability (defined in Appendix C), which yields the estimator (equation 10): ∇_θ E[f(y)] ≈ Σ_{i∈S} (p_θ(y^i)/q_{θ,κ}(y^i)) ∇_θ log p_θ(y^i) (f(y^i) − Σ_{j∈S} (p_θ(y^j)/q_{j|i}(κ)) f(y^j)). This estimator is unbiased, and we give the full proof in Appendix C. For the normalized version, we use the normalization W(S) = Σ_{i∈S} p_θ(y^i)/q_{θ,κ}(y^i) for the baseline, and the analogous per-sample normalization W_i(S) = Σ_{j∈S} p_θ(y^j)/q_{j|i}(κ) to normalize the outer terms (equation 11). It seems odd to normalize the terms in the outer sum by W_i(S) rather than W(S); we refer to Appendix C for the motivation of this choice. We consider the task of predicting the solution for instances of the Travelling Salesman Problem (TSP) BID17 BID1 BID6. The problem is to find the order in which to visit locations (specified by their x, y coordinates) to minimize total travelling distance. A policy is trained using REINFORCE to minimize the expected length of a tour (sequence of locations) predicted by the model. The Attention Model by BID6 is a sequence model that considers each instance as a fully connected graph of nodes, which are processed by an encoder. The decoder then produces the tour as a sequence of nodes to visit, one node at a time, where it autoregressively uses as input the node visited in the previous step. (Figure 1 caption: REINFORCE is used with replacement (WR) and without replacement (WOR), using k = 4 (top row) or k = 8 (bottom row) samples per instance, and a local baseline based on the k samples for each instance. We compare against REINFORCE using one sample per instance, either with a baseline that is the average of the batch, or the strong greedy rollout baseline by BID6 that requires an additional rollout of the model.) We use the source code by BID6 to reproduce their TSP experiment with 20 nodes (as larger instances diminish the benefit of sampling without replacement). We implement REINFORCE estimators based on multiple samples, either sampled with replacement (WR) or without replacement (WOR) using Stochastic Beam Search BID7. We compare the following four estimators: • Single sample with a batch baseline. Here we compute the standard REINFORCE estimator (equation 2) with a single sample (k = 1). We use a batch of 512 instances (datapoints), and as baseline we take the average of the tour lengths in the batch, so every instance uses the same baseline. This is implemented using the exponential moving average baseline by BID6 with β = 0. • Single sample with a greedy rollout baseline, and batch size 512. As baseline, we use a greedy rollout: for each instance x we take the length of the tour that is obtained by greedily selecting the next location according to an earlier (frozen) version of the model. This baseline, similar to self-critical training BID13, corresponds to the best result found by BID6, superior to using an exponential moving average or a learned value function. However, the greedy rollout requires an additional forward pass through the model. • Multiple samples with replacement (WR) with a local baseline. Here we compute the estimator in equation 8 based on k = 4, 8 samples. We use a batch size of 512/k, so the total number of samples is the same. The baseline is local, as it is different for each datapoint, but it does not require additional model evaluations like the greedy rollout. • Multiple samples without replacement (WOR) with a local baseline. Here we use the (biased) normalized without-replacement estimator in equation 11 with k = 4, 8 samples and batch size 512/k. Samples are drawn without replacement using Stochastic Beam Search BID7. For fair comparison, we do not take a (k+1)-th sample to compute κ, but sacrifice the k-th sample and compute the summation in equation 11 with the remaining k − 1 (3 or 7) samples.
Note that a single sample with a local baseline is not possible, which is why we use the batch baseline. The model architecture and training hyperparameters (except batch size) are as in the paper by BID6. We present the results in terms of the optimality gap on the validation set (not used for additional tuning) during training in Figure 1, using k = 4 (top row) and k = 8 (bottom row). We found diminishing returns for larger k. The left column presents the results in terms of the number of gradient update steps (minibatches). We see that sampling without replacement performs on par (k = 8) or slightly better than using the strong but computationally expensive greedy rollout baseline, or using multiple samples with replacement. The standard batch baseline performs significantly worse. The estimators based on multiple samples do not lose (much) final performance, while using significantly fewer instances. In the right column, where results are presented in terms of the number of instances, this effectiveness is confirmed, and we observe that sampling without replacement is preferred to sampling with replacement. The difference is small, but there is also not much room for improvement, as results are close to optimal. The benefit of learning with less data may be small if data is easily generated (as in our setting), but there is also a significant computational benefit, as we need significantly fewer encoder evaluations. In this paper, we have derived REINFORCE estimators based on drawing multiple samples, with and without replacement, and evaluated the effectiveness of the proposed estimators in a structured prediction setting: the prediction of tours for the TSP. The derived estimators yield results comparable to those of recent work using REINFORCE with a strong greedy rollout baseline, at greater data-efficiency and computational efficiency. These estimators are especially well suited for structured prediction settings, where the domain is too large to compute exact gradients, but we are able to take multiple samples for the same datapoint, and the objective is a deterministic function of the sampled prediction. We hope the proposed estimators have potential to be used to improve training efficiency in more structured prediction settings, for example in the context of Neural Machine Translation or Image Captioning, where, depending on the entropy of the model, sampling without replacement may yield a beneficial improvement. We include here in full the proof by BID7, as this introduces necessary notation and helps understanding of the proof in Appendix C. A.1 PROOF OF UNBIASEDNESS OF PRIORITY SAMPLING ESTIMATOR BY KOOL ET AL. The following proof is adapted from the proofs by BID2 and BID16. For generality of the proof, we write p_i = p_θ(y^i) and f(i) = f(y^i), and we consider general keys h_i (not necessarily Gumbel perturbations). We assume we have a probability distribution over a finite domain 1, ..., n with normalized probabilities p_i, i.e. Σ_{i=1}^n p_i = 1. For a given function f(i) we want to estimate the expectation E[f(i)] = Σ_{i=1}^n p_i f(i). Each element i has an associated random key h_i, and we define q_i(a) = P(h_i > a). This way, if we know the threshold a, it holds that q_i(a) = P(i ∈ S), the probability that element i is in the sample S. As was noted by BID16, the actual distribution of the key does not influence the unbiasedness of the estimator, but does determine the effective sampling scheme. Using the Gumbel-perturbed log-probabilities as keys (e.g.
h_i = G_{φ_i}) is equivalent to the PPSWOR scheme described by BID16. We define the shorthand notation h_{1:n} = {h_1, ..., h_n}, h_{−i} = {h_1, ..., h_{i−1}, h_{i+1}, ..., h_n} = h_{1:n} \ {h_i}. For a given sample size k, let κ be the (k+1)-th largest element of h_{1:n}, so κ is the empirical threshold. Let κ_i be the k-th largest element of h_{−i} (the k-th largest of all other elements). Similar to BID2, we will show that every element i in our sample contributes an unbiased estimate of E[f(i)], so that the total estimator is unbiased. Formally, we will prove that E[1{i ∈ S} (p_i/q_i(κ)) f(i)] = p_i f(i) (equation 12), from which the result follows: E[Σ_{i∈S} (p_i/q_i(κ)) f(i)] = Σ_{i=1}^n E[1{i ∈ S} (p_i/q_i(κ)) f(i)] = Σ_{i=1}^n p_i f(i) = E[f(i)]. To prove equation 12, we make use of the observation (slightly rephrased) by BID2 that, conditioning on h_{−i}, we know κ_i, and the event i ∈ S implies that κ = κ_i, since i will only be in the sample if h_i > κ_i, which means that κ_i is the (k+1)-th largest value of h_{−i} ∪ {h_i} = h_{1:n}. The reverse is also true (if κ = κ_i then h_i must be larger than κ_i, since otherwise the (k+1)-th largest value of h_{1:n} would be smaller than κ_i). Therefore E[1{i ∈ S} (p_i/q_i(κ)) f(i)] = E_{h_{−i}}[P(h_i > κ_i | h_{−i}) (p_i/q_i(κ_i)) f(i)] = E_{h_{−i}}[q_i(κ_i) (p_i/q_i(κ_i)) f(i)] = p_i f(i). We will now prove that the REINFORCE estimator based on multiple samples with the sample average as baseline FORMULA10 is unbiased. Let y_{1:k} = {y_1, ..., y_k} be the set of independent samples (with replacement) from p_θ(y). First we show that using the batch mean as baseline is equivalent to using the mean of the other elements in the batch, up to a constant factor: f(y_i) − (1/k) Σ_{j=1}^k f(y_j) = ((k−1)/k) (f(y_i) − (1/(k−1)) Σ_{j≠i} f(y_j)). Note that (k−1)/k goes to 1 as the batch size k increases, and we do not need to include it (we can simply compute the biased mean), as it can be absorbed into the learning rate. Since y_j is independent of y_i, unbiasedness follows: E[(f(y_i) − B_i) ∇_θ log p_θ(y_i)] = E[f(y_i) ∇_θ log p_θ(y_i)] − E[B_i] E[∇_θ log p_θ(y_i)] = E[f(y_i) ∇_θ log p_θ(y_i)] = ∇_θ E[f(y)], since E[∇_θ log p_θ(y_i)] = 0. The proof that the REINFORCE estimator based on multiple samples without replacement with baseline (equation 10) is unbiased follows from adapting and combining the proofs in Appendices A and B. Additionally to q_i(a) = P(h_i > a), we define q_ij(a) = P(h_i > a ∩ h_j > a) = P(h_i > a) P(h_j > a) = q_i(a) q_j(a) for i ≠ j, and q_ii(a) = P(h_i > a) = q_i(a). For convenience we define the shorthand for the conditional q_{j|i}(a) = q_ij(a)/q_i(a), so q_{j|i}(a) = q_j(a) for j ≠ i and q_{i|i}(a) = 1. Furthermore, we define h_{−ij} = h_{1:n} \ {h_i, h_j}, and define κ_ij (for i ≠ j) as the (k−1)-th (not k-th!) largest element of h_{−ij}, and κ_ii = κ_i, i.e. the k-th largest element of h_{−i}. We denote with {i, j ∈ S} = {i ∈ S ∩ j ∈ S} the event that both i and j are in the sample, also for i = j, which simply means {i ∈ S}. First we generalize P(i ∈ S | h_{−i}) = q_i(κ_i) to the pairwise conditional inclusion probability P(i, j ∈ S | h_{−ij}). Lemma 1. P(i, j ∈ S | h_{−ij}) = q_ij(κ_ij). Proof. For i = j: P(i, i ∈ S | h_{−ii}) = P(h_i > κ_i | h_{−i}) = q_i(κ_i) = q_ii(κ_ii). For i ≠ j: Assuming w.l.o.g. h_i < h_j, there are the following scenarios: • κ_ij < h_i < h_j. In this case, after adding h_i and h_j to h_{−ij}, κ_ij will be the (k+1)-th largest element, so κ = κ_ij, and i ∈ S and j ∈ S, since h_j > h_i > κ = κ_ij. • h_i < κ_ij < h_j or h_i < h_j < κ_ij. In both cases, there are at least (k−1)+1 = k elements higher than h_i, so i ∉ S. Therefore it follows that {i, j ∈ S | h_{−ij}} = {h_i > κ_ij ∩ h_j > κ_ij | h_{−ij}}, and additionally this event implies κ = κ_ij. Now the result follows: P(i, j ∈ S | h_{−ij}) = P(h_i > κ_ij | h_{−ij}) P(h_j > κ_ij | h_{−ij}) = q_i(κ_ij) q_j(κ_ij) = q_ij(κ_ij). Using this Lemma we can prove the following Lemma: Lemma 2. DISPLAYFORM5 Note that the expectation is w.r.t. the keys h_{1:n}, which define the random variables κ and S = {i : h_i > κ}. Proof. DISPLAYFORM6 h_{−ij}, i, j ∈ S] P(i, j ∈ S | h_{−ij}) + 0 · (1 − P(i, j ∈ S | h_{−ij})) DISPLAYFORM7 Theorem 1.
Let B(S) = Σ_{j∈S} (p_θ(y^j)/q_j(κ)) f(y^j). Then the following is an unbiased estimator: DISPLAYFORM8 Proof. First note that, when i ∈ S, we can rewrite: DISPLAYFORM9 Using equation 16, we can rewrite (similar to equation 13): DISPLAYFORM10 Substituting this into equation 15 and normalizing the outer importance weights by W(S), we see that this term cancels, and we obtain DISPLAYFORM11
We show that by drawing multiple samples (predictions) per input (datapoint), we can learn with less data as we freely obtain a REINFORCE baseline.
494
scitldr
Reinforcement learning (RL) is a powerful technique to train an agent to perform a task. However, an agent that is trained using RL is only capable of achieving the single task that is specified via its reward function. Such an approach does not scale well to settings in which an agent needs to perform a diverse set of tasks, such as navigating to varying positions in a room or moving objects to varying locations. Instead, we propose a method that allows an agent to automatically discover the range of tasks that it is capable of performing in its environment. We use a generator network to propose tasks for the agent to try to achieve, each task being specified as reaching a certain parametrized subset of the state-space. The generator network is optimized using adversarial training to produce tasks that are always at the appropriate level of difficulty for the agent. Our method thus automatically produces a curriculum of tasks for the agent to learn. We show that, by using this framework, an agent can efficiently and automatically learn to perform a wide set of tasks without requiring any prior knowledge of its environment (Videos and code available at: https://sites.google.com/view/goalgeneration4rl). Our method can also learn to achieve tasks with sparse rewards, which pose significant challenges for traditional RL methods. Reinforcement learning (RL) can be used to train an agent to perform a task by optimizing a reward function. Recently, a number of impressive results have been demonstrated by training agents using RL: such agents have been trained to defeat a champion Go player BID16, to outperform humans in 49 Atari games, and to perform a variety of difficult robotics tasks BID18. In each of the above cases, the agent is trained to optimize a single reward function in order to learn to perform a single task. However, there are many real-world environments in which a robot will need to be able to perform not a single task but a diverse set of tasks, such as navigating to varying positions in a room or moving objects to varying locations. We consider the problem of maximizing the average success rate of our agent over all possible goals, where success is defined as the probability of successfully reaching each goal by the current policy. In order to efficiently maximize this objective, the algorithm must intelligently choose which goals to focus on at every training stage: goals should be at the appropriate level of difficulty for the current policy. To do so, our algorithm allows an agent to generate its own reward functions, defined with respect to target subsets of the state space, called goals. We generate such goals using a Goal Generative Adversarial Network (Goal GAN), a variation of the standard GAN framework. A goal discriminator is trained to evaluate whether a goal is at the appropriate level of difficulty for the current policy, and a goal generator is trained to generate goals that meet this criteria. We show that such a framework allows an agent to quickly learn a policy that reaches all feasible goals in its environment, with no prior knowledge about the environment or the tasks being performed. Our method automatically creates a curriculum, in which, at each step, the generator generates goals that are only slightly more difficult than the goals that the agent already knows how to achieve.
In summary, our main contribution is a method for automatic curriculum generation that considerably improves the sample efficiency of learning to reach all feasible goals in the environment. Learning to reach multiple goals is useful for multi-task settings such as navigation or manipulation, in which we want the agent to perform a wide range of tasks. Our method also naturally handles sparse reward functions, without needing to manually modify the reward function for every task based on prior task knowledge. Instead, our method dynamically modifies the probability distribution from which goals are sampled to ensure that the generated goals are always at the appropriate difficulty level, until the agent learns to reach all goals within the feasible goal space. The problem that we are exploring has been referred to as "multi-task policy search" BID11 or "contextual policy search," in which the task is viewed as the context for the policy BID10 BID13. Unlike the work of BID11, our work uses a curriculum to perform efficient multi-task learning, even in sparse reward settings. In contrast to BID13, which trains from a small number of discrete contexts/tasks, our method generates a training curriculum directly in continuous task space. Intrinsic Motivation: Intrinsic motivation involves learning with an intrinsically specified objective BID4. Intrinsic motivation has also been studied extensively in the developmental robotics community, such as SAGG-RIAC BID4 BID5, which has a similar goal of learning to explore a parameterized task space. However, our experiments with SAGG-RIAC demonstrate that this approach does not explore the space as efficiently as ours. A related concept is that of competence-based intrinsic motivation BID3, which uses a selector to select from a discrete set of experts. Recently there have been other formulations of intrinsic motivation, relating to optimizing surprise BID19 BID0 or surrogates of state-visitation counts BID7 BID19. All these approaches improve learning in sparse tasks where naive exploration performs poorly. However, these formulations do not have a notion of which states are hard for the learner, and the intrinsic motivation is independent of the current performance of the agent. In contrast, our formulation of intrinsic motivation directly relates to our policy performance: the agent is motivated to train on tasks that push the boundaries of its capabilities. Curriculum Learning: The increasing interest in training single agents to perform multiple tasks is leading to new developments on how to optimally present the tasks to the agent during learning. The idea of using a curriculum has been explored in many prior works on supervised learning BID9 BID22 BID8. However, these curricula are usually hand-designed, using the expertise of the system designer. Other lines of work explicitly take into consideration which examples are hard for the current learner, or use learning progress to build an automatic curriculum; however, both approaches have mainly been applied to supervised tasks. Most curriculum learning in RL still relies on fixed, pre-specified sequences of tasks. Other recent work has proposed using a given baseline performance for several tasks to gauge which tasks are the hardest and require more training BID15, but the framework can only handle a finite set of tasks and cannot handle sparse rewards.
Our method trains a policy that generalizes to a set of continuously parameterized tasks and is shown to perform well even under sparse rewards by not allocating training effort to tasks that are too hard for the current performance of the agent. Finally, an interesting self-play strategy has been proposed that is concurrent to our work BID17; however, they view their approach as simply providing an exploration bonus for a single target task; in contrast, we focus on the problem of efficiently optimizing a policy across a range of goals, as we explain below. In the traditional RL framework, at each timestep t, the agent in state s_t ∈ S ⊆ R^n takes an action a_t ∈ A ⊆ R^m, according to some policy π(a_t | s_t) that maps from the current state s_t to a probability distribution over actions. Taking this action causes the agent to enter a new state s_{t+1} according to a transition distribution p(s_{t+1} | s_t, a_t), and receive a reward r_t = r(s_t, a_t, s_{t+1}). The objective of the agent is to find the policy π that maximizes the expected return, defined as the sum of rewards R = Σ_{t=0}^T r_t, where T is a maximal time given to perform the task. The learned policy corresponds to maximizing the expected return for a single reward function. In our framework, instead of learning to optimize a single reward function, we consider a range of reward functions r^g indexed or parametrized by a goal g ∈ G. Each goal g corresponds to a set of states S_g ⊂ S such that goal g is considered to be achieved when the agent is in any state s_t ∈ S_g. Then the objective is to learn a policy that, given any goal g ∈ G, acts optimally with respect to r^g. We define a very simple reward function that measures whether the agent has reached the goal (Eq. 1): r^g(s_t, a_t, s_{t+1}) = 1{s_{t+1} ∈ S_g}, where 1 is the indicator function. In our case, we use S_g = {s_t : d(f(s_t), g) ≤ ε}, where f(·) is a function that projects a state into goal space G, d(·, ·) is a distance metric in goal space, and ε is the acceptable tolerance that determines when the goal is reached. However, our method can handle generic binary rewards (as in Eq. 1) and does not require a distance metric for learning. Furthermore, we define our MDP such that each episode terminates when s_t ∈ S_g. Thus, the return R^g = Σ_{t=0}^T r^g_t is a binary random variable whose value indicates whether the agent has reached the set S_g in at most T time-steps. Hence, the return of a trajectory s_0, s_1, ... can be expressed as R^g = 1{∃ t ∈ [0, T] : s_t ∈ S_g}. Policies are also conditioned on the current goal g (as in prior work on goal-conditioned policies), written as π(a_t | s_t, g). The expected return obtained when we take actions sampled from the policy can then be expressed as the probability of succeeding on each goal within T timesteps, as shown in Eq. 2: R^g(π) = E[R^g | π, g] = P(∃ t ∈ [0, T] : s_t ∈ S_g | π, g). The sparse indicator reward function of Eq. 1 is not only simple but also represents a property of many real-world goal problems: in many settings, it may be difficult to tell whether the agent is getting closer to achieving a goal, but easy to tell when a goal has been achieved. For example, for a robot moving in a maze, taking actions that maximally reduce the straight-line distance from the start to the goal is usually not a feasible approach for reaching the goal, due to the presence of obstacles along the path. In theory, one could hand-engineer a meaningful distance function for each task that could be used to create a dense reward function. Instead, we use the indicator function of Eq. 1, which simply captures our objective by measuring whether the agent has reached the goal state. We show that our method is able to learn even with such sparse rewards.
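A minimal sketch of the indicator reward and the induced binary return above; the projection onto (x, y) coordinates and the tolerance are illustrative choices, not values fixed by the text.

```python
import numpy as np

def episode_success(states, goal, project=lambda s: s[:2], eps=0.5):
    """R^g for one trajectory: 1 iff any visited state projects within an
    eps-ball of the goal (the episode would terminate at that step)."""
    return float(any(np.linalg.norm(project(s) - goal) <= eps for s in states))

traj = [np.array([0.0, 0.0, 0.3]), np.array([1.2, 0.9, 0.1])]
print(episode_success(traj, goal=np.array([1.0, 1.0])))  # -> 1.0
```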
We desire to find a policy π(a_t | s_t, g) that achieves a high reward for many goals g. We assume that there is a test distribution of goals p_g(g) that we would like to perform well on. For simplicity, we assume that the test distribution samples goals uniformly from the set of goals G, although in practice any distribution can be used. The overall objective is then to find a policy π* such that π* = argmax_π E_{g∼p_g(g)}[R^g(π)] (Eq. 3). Recall from Eq. 2 that R^g(π) is the probability of success for each goal g. Thus the objective of Eq. 3 measures the average probability of success over all goals sampled from p_g(g). We refer to the objective in Eq. 3 as the coverage objective. Similar to previous work (BID13 BID11), we need a continuous goal-space representation such that a goal-conditioned policy can efficiently generalize over the goals. In particular, we assume that: 1. A policy trained on a sufficient number of goals in some area of the goal-space will learn to interpolate to other goals within that area. 2. A policy trained on some set of goals will provide a good initialization for learning to extrapolate to close-by goals, meaning that the policy can occasionally reach them but maybe not consistently so. Furthermore, we assume that if a goal is reachable, there exists a policy that reaches it reliably. This is a reasonable assumption for any practical robotics problem, and it will be key for our method, as it strives to train on every goal until it is consistently reached. Our approach can be broken down into three parts: First, we label a set of goals based on whether they are at the appropriate level of difficulty for the current policy. Second, using these labeled goals, we construct and train a generator to output new goals that are at the appropriate level of difficulty. Finally, we use these new goals to efficiently train the policy, improving its coverage objective. We iterate through each of these steps until the policy converges. As shown in our experiments, sampling goals from p_g(g) directly, and training our policy on each sampled goal, may not be the most sample-efficient way to optimize the coverage objective of Eq. 3. Instead, we modify the distribution from which we sample goals during training: we wish to find the set of goals G_i = {g : R_min ≤ R^g(π_i) ≤ R_max}. The justification for this is as follows: due to the sparsity of the reward function, for most goals g, the current policy π_i (at iteration i) obtains no reward. Instead, we wish to train our policy on goals g for which π_i is able to receive some minimum expected return R^g(π_i) > R_min, such that the agent receives enough reward signal for learning. On the other hand, if we only sample from goals for which R^g(π_i) > R_min, we might sample repeatedly from a small set of already mastered goals. To force our policy to train on goals that still need improvement, we train on the set of goals g for which R^g(π_i) ≤ R_max, where R_max is a hyperparameter setting a maximum level of performance above which we prefer to concentrate on new goals. Thus, training our policy on goals in G_i allows us to efficiently maximize the coverage objective of Eq. 3. Note that from Eq. 2, R_min and R_max can be interpreted as a minimum and maximum probability of reaching a goal over T timesteps.
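The coverage objective of Eq. 3 can be estimated by Monte Carlo over test goals; a toy sketch follows, where the success-probability function is a made-up stand-in for actually rolling out a policy.

```python
import numpy as np

def coverage(success_prob, goals, n_rollouts=10, seed=0):
    """Monte Carlo estimate of the coverage objective: average empirical
    success rate over goals drawn from the test distribution."""
    rng = np.random.default_rng(seed)
    rates = [rng.binomial(n_rollouts, success_prob(g)) / n_rollouts for g in goals]
    return float(np.mean(rates))

# toy stand-in: success decays with the distance of the goal from the origin
p = lambda g: float(np.exp(-np.linalg.norm(g)))
test_goals = [np.zeros(2), np.ones(2), 3 * np.ones(2)]
print(coverage(p, test_goals))
```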
Given a set of goals sampled from some distribution p_data(g), we wish to estimate a label y_g ∈ {0, 1} for each goal g that indicates whether g ∈ G_i. These labels are obtained based on the policy performance during the policy update step (Sec. 4.3); see Appendix C for details on this procedure. In the next section we describe how we can generate more goals that also belong to G_i, in addition to the goals that we have labeled. In order to sample new goals g uniformly from G_i, we introduce an adversarial training procedure called "goal GAN", which is a modification of the procedure used for training Generative Adversarial Networks (GANs). The modification allows us to train the generative model both with positive examples from the distribution we want to approximate and negative examples sampled from a distribution that does not share support with the desired one. This improves the accuracy of the generative model despite being trained with very few positive samples. Our choice of GANs for goal generation was motivated both by this potential to train from negative examples, as well as by their ability to generate very high-dimensional samples such as images, which is important for scaling up our approach to goal generation in high-dimensional goal spaces. Other generative models like Stochastic Neural Networks BID20 don't accept negative examples and don't have the potential to scale up to higher dimensions. In our particular application, we use a "goal generator" neural network G(z) to generate goals g from a noise vector z. We train the goal generator to uniformly output goals in G_i using a second "goal discriminator" network D(g). The latter is trained to distinguish goals that are in G_i from goals that are not in G_i. We optimize our G(z) and D(g) in a manner similar to that of the Least-Squares GAN (LSGAN), which we modify by introducing the binary label y_g to indicate whether g ∈ G_i (allowing us to train from "negative examples" when y_g = 0), giving the objectives (Eq. 4): min_D V(D) = E_{g∼p_data(g)}[y_g (D(g) − b)² + (1 − y_g)(D(g) − a)²] + E_{z∼p_z(z)}[(D(G(z)) − a)²], and min_G V(G) = E_{z∼p_z(z)}[(D(G(z)) − c)²]. We directly use the original LSGAN hyperparameters in all our experiments (a = −1, b = 1, and c = 0). The LSGAN approach gives us a considerable improvement in training stability over the vanilla GAN, and it has performance comparable to WGAN BID2. However, unlike in the original LSGAN paper, we have three terms in our value function V(D) rather than the original two. For goals g for which y_g = 1, the second term disappears and we are left with only the first and third terms, which are identical to those of the original LSGAN framework. Viewed in this manner, the discriminator is trained to discriminate between goals from p_data(g) with a label y_g = 1 and the generated goals G(z). Looking at the second term, our discriminator is also trained with "negative examples," i.e. goals with a label y_g = 0, which our generator should not generate. The generator is trained to "fool" the discriminator, i.e. to output goals that match the distribution of goals in p_data(g) for which y_g = 1.
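A sketch of the goal labeling of Section 4.1 and the three-term objective of Eq. 4, with scalar discriminator outputs standing in for real networks; the thresholds and values are illustrative.

```python
import numpy as np

def label_goals(returns, r_min=0.1, r_max=0.9):
    """y_g = 1 iff a goal is of intermediate difficulty: R_min <= R_g <= R_max."""
    r = np.asarray(returns, dtype=float)
    return ((r >= r_min) & (r <= r_max)).astype(float)

def lsgan_losses(d_goals, y, d_fake, a=-1.0, b=1.0, c=0.0):
    """Three-term discriminator loss of Eq. 4 and the matching generator loss."""
    v_d = (np.mean(y * (d_goals - b) ** 2 + (1 - y) * (d_goals - a) ** 2)
           + np.mean((d_fake - a) ** 2))
    v_g = np.mean((d_fake - c) ** 2)
    return v_d, v_g

y = label_goals([0.0, 0.5, 1.0])  # -> [0., 1., 0.]
print(y, lsgan_losses(np.array([-0.8, 0.7, 0.9]), y, np.array([-0.2, 0.3])))
```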
Our full algorithm for training a policy π(a_t | s_t, g) to maximize the coverage objective in Eq. 3 is shown in Algorithm 1 (Generative Goal Learning); a structural sketch is given at the end of this section. At each iteration i, we generate a set of goals by first using sample noise to obtain a noise vector z from p_z(·) and then passing this noise to the generator G. We use these goals to train our policy using RL, with the reward function given by Eq. 1 (update policy). The training can be done with any RL algorithm; in our case we use TRPO with GAE BID14. Our policy's empirical performance on these goals (evaluate policy) is used to determine each goal's label y_g (label goals), as described in Section 4.1. Next, we use these labels to train our goal generator and our goal discriminator (train GAN), as described in Section 4.2. The generated goals from the previous iteration are used to compute the Monte Carlo estimate of the expectations with respect to the distribution p_data(g) in Eq. 4. By training on goals within G_i produced by the goal generator, our method efficiently finds a policy that optimizes the coverage objective. The algorithm described above naturally creates a curriculum for our policy. The goal generator generates goals in G_i, for which our policy obtains an intermediate level of return, and thus such goals are at the appropriate level of difficulty for our current policy π_i. As our policy improves, the generator learns to generate goals in order of increasing difficulty. Hence, our method can be viewed as a way to automatically generate a curriculum of goals. However, the curriculum occurs as a by-product of our optimization, without requiring any prior knowledge of the environment or the tasks that the agent must perform. In this section we provide the experimental results to answer the following questions: • Does our automatic curriculum yield faster maximization of the coverage objective? • Does our Goal GAN dynamically shift to sample goals of the appropriate difficulty? • Does it scale to a higher-dimensional state-space with a low-dimensional space of feasible goals? To answer the first two questions, we demonstrate our method in two challenging robotic locomotion tasks, where the goals are the (x, y) position of the Center of Mass (CoM) of a dynamically complex quadruped agent. In the first experiment the agent has no constraints and in the second one the agent is inside a U-maze. To answer the third question, we study how our method scales with the dimension of the state-space in an environment where the feasible region is kept of approximately constant volume in an embedding space that grows in dimension. We compare our Goal GAN method against four baselines. Uniform Sampling is a method that does not use a curriculum at all, training at every iteration on goals uniformly sampled from the goal-space. To demonstrate that a straightforward distance reward can be prone to local minima, Uniform Sampling with L2 loss samples goals in the same fashion as the first baseline, but instead of the indicator reward that our method uses, it receives the negative L2 distance to the goal as a reward at every step. We have also adapted two methods from the literature to our setting: Asymmetric Self-play BID17 and SAGG-RIAC BID6. Finally, we provide an ablation and an oracle for our method to better understand the importance of sampling "good" goals. The ablation, GAN fit all, consists of training the GAN on every goal attempted in the previous iteration, rather than only on the "good" goals. Given the noise injected at the output of the GAN, this generates a gradually expanding set of goals, similar to any hand-designed curriculum. The oracle consists of sampling goals uniformly from the feasible state-space, but only keeping them if they satisfy the criterion defined in Section 4.1. This Rejection Sampling method is orders of magnitude more expensive in terms of labeling, but it serves to estimate an upper bound for our method in terms of performance.
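Putting the pieces together, the loop of Algorithm 1 can be sketched structurally as follows; every helper name here is a hypothetical placeholder for the correspondingly named step in the text, not an actual API.

```python
def generative_goal_learning(policy, gan, replay, n_iterations):
    """Structural sketch of Algorithm 1 (Generative Goal Learning).
    All helpers below are placeholders for the steps named in the text."""
    for i in range(n_iterations):
        z = sample_noise()                        # z ~ p_z(.)
        goals = gan.generate(z)                   # propose candidate goals
        goals = mix_with_replay(goals, replay)    # 2/3 new goals, 1/3 replay (App. A.1)
        policy = update_policy(policy, goals)     # RL on the indicator reward of Eq. 1
        returns = evaluate_policy(policy, goals)  # empirical success rate per goal
        labels = label_goals(returns)             # y_g = 1 iff R_min <= R_g <= R_max
        train_gan(gan, goals, labels)             # LSGAN updates for G and D (Eq. 4)
        replay = update_replay(replay, goals)     # keep goals to prevent forgetting
    return policy
```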
We test our method in two challenging environments of a complex robotic agent navigating either a free space (Free Ant) or a U-shaped maze (Maze Ant). The latter is depicted in Fig. 1, where the orange quadruped is the Ant, and a possible goal to reach is drawn in red. Prior work describes the task of trying to reach the other end of the U-turn and shows that standard RL methods are unable to solve it. We further extend the task to require reaching any given point within the maze, or within the [−5, 5]² square for Free Ant. The reward is still a sparse indicator function which takes the value 1 only when the (x, y) CoM of the Ant is within ε = 0.5 of the goal. Therefore the goal space is 2-dimensional, the state-space is 41-dimensional, and the action space is 8-dimensional (see details in Appendix B.1). We first explore whether, by training on goals that are generated by our Goal GAN, we are able to improve our policy's training efficiency, compared to the baselines described above. In Fig. 2 we see that our method leads to faster training compared to the baselines. The Uniform Sampling baseline does very poorly because too many samples are wasted attempting to train on goals that are infeasible or not reachable by the current policy, hence not receiving any learning signal. If an L2 loss is added to try to guide the learning, the agent falls into a poor local optimum of not moving, to avoid further negative rewards. The two other baselines that we compare against perform better, but still do not surpass the performance of our method. In particular, Asymmetric Self-play needs to train the goal-generating policy (Alice) at every outer iteration, with an amount of rollouts equivalent to the ones used to train the goal-reaching policy. This additional burden is not represented in the plots, making it at most half as sample-efficient as the plots indicate. SAGG-RIAC maintains an ever-growing partition of the goal-space that becomes more and more biased towards areas that already have more sub-regions, leading to reduced exploration and slowing down the expansion of the policy's capabilities. Details of our adaptation of these two methods, as well as further study of their failure cases, are provided in Appendices F.1 and F.2. To better understand the efficiency of our method, we analyze the goals generated by our automatic curriculum. In these Ant navigation experiments, the goal space is two-dimensional, allowing us to study the shift in the probability distribution generated by the Goal GAN (Fig. 3) along with the improvement of the policy coverage (Fig. 4). We have indicated the difficulty of reaching the generated goals in Fig. 3. It can be observed in these figures that the location of the generated goals shifts to different parts of the maze, concentrating on the area where the current policy is receiving some learning signal but needs more improvement. The percentage of generated goals that are at the appropriate level of difficulty ("good goals") stays around 20% even as the policy improves. The goals in these figures include a mix of newly generated goals from the Goal GAN as well as goals from previous iterations that we use to prevent our policy from "forgetting" (Appendix A.1). Overall it is clear that our Goal GAN dynamically shifts to sample goals of the appropriate difficulty.
See Appendix D for additional experiments. It is interesting to analyze the importance of generating "good goals" for efficient learning. This is done in Fig. 2, where we first show an ablation of our method, GAN fit all, which disregards the labels. This method performs worse than ours, because the expansion of the goals is not related to the current performance of the policy. Finally, we study the Rejection Sampling oracle. As explained in Section 4.1, we wish to sample from the set of "good" goals G_i, which we approximate by fitting a Goal GAN to the distribution of good goals observed in the previous policy optimization step. We now evaluate how much this approximation affects learning by comparing the learning performance of our Goal GAN to a policy trained on goals sampled uniformly from G_i by using rejection sampling. This method is orders of magnitude less sample-efficient, but gives us an upper bound on the performance of our method. Figs. 2c-2d demonstrate that our performance is quite close to the performance of this much less efficient baseline. In most real-world RL problems, the set of feasible states is a lower-dimensional subset of the full state space, defined by the constraints of the environment. For example, the kinematic constraints of a robot limit the set of feasible states that the robot can reach. Therefore, uniformly sampling goals from the full state-space would yield very few achievable goals. In this section we use an N-dimensional Point Mass to explore this issue and demonstrate the performance of our method as the embedding dimension increases. In our experiments, the full state-space of the N-dimensional Point Mass is the hypercube [−5, 5]^N. However, the Point Mass can only move within a small subset of this state space. In the two-dimensional case, the set of feasible states corresponds to the [−5, 5] × [−1, 1] rectangle, making up 20% of the full space. For N > 2, the feasible space is the Cartesian product of this 2D strip with [−ε, ε]^{N−2}, where ε = 0.3. In this higher-dimensional environment, our agent receives a reward of 1 when it moves within a tolerance ε_N of the goal state, where ε_N is scaled up with N to account for the increase in average L2 distance between points in higher dimensions. The ratio of the volume of the embedded space to the volume of the full state space decreases as N increases, down to 0.00023:1 for 6 dimensions. FIG2 shows the performance of our method compared to the other methods as the number of dimensions increases. The uniform sampling baseline has very poor performance as the number of dimensions increases, because the fraction of feasible states within the full state space decreases as the dimension increases. Thus, sampling uniformly results in sampling an increasing percentage of unfeasible states, leading to poor learning signal. In contrast, the performance of our method does not decay as much as the state space dimension increases, because our Goal GAN always generates goals within the feasible portion of the state space (and at the appropriate level of difficulty). The GAN fit all variation of our method suffers from the increase in dimension because it is not encouraged to track the narrow feasible region. Finally, the oracle and the baseline with an L2 distance reward have perfect performance, which is expected in this simple task where the optimal policy is just to go in a straight line towards the goal. Even without this prior knowledge, the Goal GAN discovers the feasible subset of the goal space.
We propose a new paradigm in RL where the objective is to train a single policy to succeed on a variety of goals, under sparse rewards. To solve this problem we develop a method for automatic curriculum generation that dynamically adapts to the current performance of the agent. The curriculum is obtained without any prior knowledge of the environment or of the tasks being performed. We use generative adversarial training to automatically generate goals for our policy that are always at the appropriate level of difficulty (i.e. not too hard and not too easy). In the future we want to combine our goal-proposing strategy with recent multi-goal approaches like HER BID1, which could greatly benefit from better ways to select the next goal to train on. Another promising line of research is to build hierarchy on top of the multi-task policy that we obtain with our method, by training a higher-level policy that outputs the goal for the lower-level multi-task policy (as in Heess et al. or Florensa et al. (2017a)). The hierarchy could also be introduced by replacing our current feed-forward neural network policy with an architecture that learns to build implicit plans BID18, or by leveraging expert demonstrations to extract sub-goals BID23, although none of these approaches yet tackles the multi-task learning problem formulated in this work. In addition to training our policy on the goals that were generated in the current iteration, we also save a list (a "regularized replay buffer") of goals that were generated during previous iterations (update replay). These goals are also used to train our policy, so that our policy does not forget how to achieve goals that it has previously learned. When we generate goals for our policy to train on, we sample two thirds of the goals from the Goal GAN and the remaining one third of the goals uniformly from the replay buffer. To prevent the replay buffer from concentrating in a small portion of goal space, we only insert new goals that are further away than ε from the goals already in the buffer, where we choose the goal-space metric and ε to be the same as the ones introduced in Section 3.1. In order to begin our training procedure, we need to initialize our goal generator to produce an initial set of goals (initialize GAN). If we initialize the goal generator randomly (or if we initialize it to sample uniformly from the goal space), it is likely that, for most (or all) of the sampled goals, our initial policy would receive no reward due to the sparsity of the reward function. Thus we might have that all of our initial goals g have R^g(π_0) < R_min, leading to very slow training. To avoid this problem, we initialize our goal generator to output a set of goals that our initial policy is likely to be able to achieve with R^g(π_0) ≥ R_min. To accomplish this, we run our initial policy π_0(a_t | s_t, g) with goals sampled uniformly from the goal space. We then observe the set of states S_v that are visited by our initial policy. These are states that can be easily achieved with the initial policy, π_0, so the goals corresponding to such states will likely be contained within G_0. We then train the goal generator to produce goals that match the state-visitation distribution p_v(g), defined as the uniform distribution over the set f(S_v). We can achieve this through traditional GAN training, with p_data(g) = p_v(g). This initialization of the generator allows us to bootstrap the Goal GAN training process, and our policy is able to quickly improve its performance.
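The replay-buffer insertion rule described above (Appendix A.1) can be sketched as follows; the Euclidean metric and the value of ε here are illustrative.

```python
import numpy as np

def update_replay(buffer, new_goals, eps=0.5):
    """Insert a goal only if it is farther than eps from everything already in
    the buffer, keeping the regularized replay buffer spread over goal space."""
    for g in new_goals:
        if not buffer or min(np.linalg.norm(g - b) for b in buffer) > eps:
            buffer.append(g)
    return buffer

buf = update_replay([], [np.array([0.0, 0.0]),
                         np.array([0.1, 0.0]),   # dropped: within eps of the first
                         np.array([1.0, 1.0])])
print(buf)
```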
The ant is a quadruped with 8 actuated joints, 2 for each leg. The environment is implemented in Mujoco BID21. Besides the coordinates of the center of mass, the joint angles and joint velocities are also included in the observation of the agent. The high degrees of freedom make navigation a quite complex task requiring motor coordination. More details can be found in the original benchmark description; the only difference is that in our goal-oriented version of the Ant we append the observation with the goal, the vector from the CoM to the goal, and the distance to the goal. For the Free Ant experiments the objective is to reach any point in the square [−5m, 5m]² on command. The maximum number of time-steps given to reach the current goal is 500.

The agent is constrained to move within the maze environment, which has dimensions of 6m x 6m. The full state-space has an area of size 10m x 10m, within which the maze is centered. To compute the coverage objective, goals are sampled from within the maze according to a uniform grid on the maze interior. The maximum number of time-steps given to reach the current goal is 500. For the N-dim point mass of Section 5.2, in each episode (rollout) the point-mass has 400 timesteps to reach the goal, where each timestep is 0.02 seconds. The agent can accelerate at up to 5 m/s² in each dimension (N = 2 for the maze). The observations of the agent are 2N-dimensional, including the position and velocity of the point-mass.

After the generator generates goals, we add noise to each dimension of the goal, sampled from a normal distribution with zero mean and unit variance. At each step of the algorithm, we train the policy for 5 iterations, each of which consists of 100 episodes. After 5 policy iterations, we then train the GAN for 200 iterations, each of which consists of 1 iteration of training the discriminator and 1 iteration of training the generator. The generator receives as input 4-dimensional noise sampled from the standard normal distribution. The goal generator consists of two hidden layers with 128 nodes, and the goal discriminator consists of two hidden layers with 256 nodes, with relu nonlinearities. The policy is defined by a neural network which receives as input the goal appended to the agent observations described above. The inputs are sent to two hidden layers of size 32 with tanh nonlinearities. The final hidden layer is followed by a linear N-dimensional output, corresponding to accelerations in the N dimensions. For policy optimization, we use a discount factor of 0.998 and a GAE lambda of 0.995. The policy is trained with TRPO with Generalized Advantage Estimation implemented in rllab. Every "update policy" step consists of 5 iterations of this algorithm.

To label a given goal (Section 4.1), we could empirically estimate the expected return for this goal, R̄_g(π_i), by performing rollouts of our current policy π_i. The label for this goal is then set to y_g = 1{R_min ≤ R̄_g(π_i) ≤ R_max}. Nevertheless, having to execute additional rollouts just for labeling is not sample efficient. Therefore, we instead use the rollouts that were used for the most recent policy update. This is an approximation, as those rollouts were performed under π_{i−1}, but as we show in Figs. 7a-7b, this small "delay" does not affect learning significantly. Indeed, using the true label (estimated with three new rollouts from π_i) yields the "Goal GAN true label" curves, which are only slightly better than what our method achieves.
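In code, the labeling step reduces to an indicator over the empirical returns; a minimal sketch of ours, assuming goals are stored as hashable keys:

import numpy as np

def label_goals(returns_by_goal, r_min, r_max):
    # returns_by_goal: dict mapping a goal to the list of episode returns it
    # received during the most recent policy update (rollouts under pi_{i-1}),
    # so no extra rollouts are needed just for labeling
    return {g: int(r_min <= float(np.mean(rs)) <= r_max)
            for g, rs in returns_by_goal.items()}

labels = label_goals({(0.0, 1.0): [0, 1, 1], (4.0, 4.0): [0, 0, 0]}, 0.1, 0.9)
# -> {(0.0, 1.0): 1, (4.0, 4.0): 0}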
In the same plots we also study another definition of "good" goals that has been previously used in the literature: learning progress (BID6). Given that we work in a continuous goal-space, estimating the learning progress of a single goal requires estimating the performance of the policy on that goal both before and after the policy update (potentially being able to replace one of these estimations with the rollouts from the policy optimization, but not both). The method therefore requires more samples, but we deemed it interesting to compare how well this metric serves to automatically build a curriculum. We see in Figs. 7a-7b that the two metrics yield very similar learning curves, at least in the case of Ant navigation tasks with sparse rewards.

Similar to the experiments in Figures 3 and 4, here we show the goals that were generated for the Free Ant experiment, in which a robotic quadruped must learn to move to all points in free space. FIG5 shows the results. As shown, our method produces a growing circle around the origin; as the policy learns to move the ant to nearby points, the generator learns to generate goals at increasingly distant positions.

Figure 9: Visualization of the policy performance for different parts of the state space (same policy training as in FIG5). For illustration purposes, the feasible state-space is divided into a grid, and a goal location is selected from the center of each grid cell. Each grid cell is colored according to the expected return achieved on this goal: red indicates 100% success; blue indicates 0% success.

In this section we show that our Goal GAN method is efficient at tracking clearly multi-modal distributions of good goals. To this end, we introduce a new maze environment with multiple paths, as can be seen in FIG6. To keep the experiment simple we replace the Ant agent by a point-mass environment (in orange), whose actions are directly the velocity vector (2-dim). As in the other experiments, our aim is to learn a policy that can reach any feasible goal, corresponding to ε-balls in state space like the one depicted in red. Similar to the experiments in Figures 3 and 4, here we show the goals that were generated for the Multi-path point-mass maze experiment. FIG0 shows the results. It can be observed that our method produces a multi-modal distribution over goals, tracking all the areas where goals are at the appropriate level of difficulty. Note that the samples from the regularized replay buffer are responsible for the trailing spread of "High Reward" goals, and the Goal GAN is responsible for the more concentrated modes, as can be seen in Fig. 13. A clear benefit of using our Goal GAN as a generative model is that no prior knowledge about the distribution to fit is required (like the number of modes). Finally, note that having several possible paths to reach a specific goal does not hinder the learning of our algorithm, which consistently reaches full coverage in this problem, as seen in Fig. 14.
Although not specifically designed for the problem presented in this paper, it is straightforward to apply the method proposed by BID17 to our problem. An interesting study of its limitations in a similar setting has been reported previously. In our implementation of this method, we use TRPO as the "Low-Level Goal-Directed Exploration with Evolving Context". We therefore implement the method in batch mode: at every iteration, we sample N_new new goals {y_i}_{i=0...N_new}, then we collect rollouts of t_max steps trying to reach them, and perform the optimization of the parameters using all the collected data. The detailed algorithm is given in the following pseudo-code (the outer-loop header and the Self-generate call are implied by the surrounding description):

for each iteration i do
    goals ← Self-generate(R, N_new);
    paths ← [];
    while number_steps_in(paths) < batch_size do
        Reset s_0 ← s_reset;
        y_g ← Uniform(goals);
        y_f, Γ_{y_g}, path ← collect_rollout(π_{θ_i}(·, y_g), s_reset);
        paths.append(path);
        UpdateRegions(R, y_f, 0);
        UpdateRegions(R, y_g, Γ_{y_g});
    end
    π_{θ_{i+1}} ← train π_{θ_i} with TRPO on collected paths;
end

UpdateRegions(R, y_f, Γ_{y_f}) is exactly Algorithm 2 described in the original paper, and Self-generate is the "Active Goal Self-Generation (high-level)" also described in the paper (Section 2.4.4 and Algorithm 1), but it is repeated N_new times to produce a batch of N_new goals jointly. As for the competence Γ_{y_g}, we use the same formula as in their Section 2.4.1 (use highest competence if reached close enough to the goal), and C(y_g, y_f) is computed with their equation. The collect_rollout function resets the state s_0 = s_reset and then applies actions following the goal-conditioned policy π_θ(·, y_g) until it reaches the goal or the maximum number of steps t_max has been taken. The final state, transformed into goal space as y_f, is returned. As hyperparameters, we have used the ones recommended in the paper, when available: p_1 = 0.7, p_2 = 0.2, p_3 = 0.1. For the rest, the best performance in a hyperparameter sweep yields: ζ = 100, g_max = 100. The noise for the exploration mode is chosen to be Gaussian with variance 0.1, the same value used for the tolerance threshold ε_max and the competence threshold C. As other details, in our tasks there are no constraints to penalize for, so ρ = ∅. Also, there are no sub-goals. The reset value r is 1, as we reset to s_start after every reaching attempt. The number of explorative movements q ∈ N has a less clear equivalence, as we use a policy gradient update with a stochastic policy π_θ instead of an SSA-type algorithm.
We efficiently solve multi-task problems with an automatic curriculum generation algorithm based on a generative model that tracks the learning agent's performance.
495
scitldr
A wide range of defenses have been proposed to harden neural networks against adversarial attacks. However, a pattern has emerged in which the majority of adversarial defenses are quickly broken by new attacks. Given the lack of success at generating robust defenses, we are led to ask a fundamental question: Are adversarial attacks inevitable? This paper analyzes adversarial examples from a theoretical perspective, and identifies fundamental bounds on the susceptibility of a classifier to adversarial attacks. We show that, for certain classes of problems, adversarial examples are inescapable. Using experiments, we explore the implications of theoretical guarantees for real-world problems and discuss how factors such as dimensionality and image complexity limit a classifier's robustness against adversarial examples.

A number of adversarial attacks on neural networks have been recently proposed. To counter these attacks, a number of authors have proposed a range of defenses. However, these defenses are often quickly broken by new and revised attacks. Given the lack of success at generating robust defenses, we are led to ask a fundamental question: Are adversarial attacks inevitable? In this paper, we identify a broad class of problems for which adversarial examples cannot be avoided. We also derive fundamental limits on the susceptibility of a classifier to adversarial attacks that depend on properties of the data distribution as well as the dimensionality of the dataset. Adversarial examples occur when a small perturbation to an image changes its class label. There are different ways of measuring what it means for a perturbation to be "small"; as such, our analysis considers a range of different norms. While the ℓ∞-norm is commonly used, adversarial examples can be crafted in any ℓp-norm (see FIG0). We will see that the choice of norm can have a dramatic effect on the strength of theoretical guarantees for the existence of adversarial examples. Our analysis also extends to the ℓ0-norm, which yields "sparse" adversarial examples that only perturb a small subset of image pixels (FIG2).

FIG0: Adversarial examples crafted against ResNet50 using the attack of BID19, shown along with the distance between the base image and the adversarial example, and the top class label.

As a simple example, consider a classification problem with n-dimensional images with pixels scaled between 0 and 1 (in this case images live inside the unit hypercube). If the image classes each occupy a fraction of the cube greater than (1/2) exp(−πε²), then images exist that are susceptible to adversarial perturbations of ℓ2-norm at most ε. Note that ε = 10 was used in FIG0, and larger values are typical for larger images. Finally, in Section 8, we explore the causes of adversarial susceptibility in real datasets, and the effect of dimensionality. We present an example image class for which there is no fundamental link between dimensionality and robustness, and argue that the data distribution, and not dimensionality, is the primary cause of adversarial susceptibility.

Adversarial examples, first demonstrated in BID36 and BID3, change the label of an image using small and often imperceptible perturbations to its pixels. A number of defenses have been proposed to harden networks against attacks, but historically, these defenses have been quickly broken. Adversarial training, one of the earliest defenses, successfully thwarted the fast gradient sign method (FGSM) BID13, one of the earliest and simplest attacks.
However, adversarial training with FGSM examples was quickly shown to be vulnerable to more sophisticated multi-stage attacks BID16 BID39. More sophisticated defenses that rely on network distillation BID26 and specialized activation functions BID44 were also toppled by strong attacks BID25 BID40 BID6. The ongoing vulnerability of classifiers was highlighted in recent work by BID2 and BID1 that broke an entire suite of defenses presented in ICLR 2018, including thermometer encoding BID5, detection using local intrinsic dimensionality BID18, input transformations such as compression and image quilting BID14, stochastic activation pruning BID10, adding randomization at inference time BID42, enhancing the confidence of image labels BID33, and using a generative model as a defense BID29. Rather than hardening classifiers to attacks, some authors have proposed sanitizing datasets to remove adversarial perturbations before classification. Approaches based on auto-encoders BID22 and GANs BID30 were broken using optimization-based attacks BID8. A number of "certifiable" defense mechanisms have been developed for certain classifiers. BID27 harden a two-layer classifier using semidefinite programming, and BID32 propose a convex duality-based approach to adversarial training that works on sufficiently small adversarial perturbations with a quadratic adversarial loss. BID15 consider training a robust classifier using the convex outer adversarial polytope. All of these methods only consider robustness of the classifier on the training set, and robustness properties often fail to generalize reliably to test examples. One place where researchers have enjoyed success is in training classifiers on low-dimensional datasets like MNIST BID19 BID32.

Figure 2: Sparse adversarial examples perturb a small subset of pixels and can hide adversarial "fuzz" inside high-frequency image regions. The original image (left) is classified as an "ox." Under ℓ∞-norm perturbations, it is classified as "traffic light," but the perturbations visibly distort smooth regions of the image (the sky). These effects are hidden in the grass using ℓ0-norm (sparse) perturbations limited to a small subset of pixels.

The robustness achieved on more complicated datasets such as CIFAR-10 and ImageNet is nowhere near that of MNIST, which leads some researchers to speculate that adversarial defense is fundamentally harder in higher dimensions, an issue we address in Section 8. This paper uses well-known results from high-dimensional geometry, specifically isoperimetric inequalities, to provide bounds on the robustness of classifiers. Several other authors have investigated adversarial susceptibility through the lens of geometry. BID11 study adversarial susceptibility of datasets under the assumption that they are produced by a generative model that maps random Gaussian vectors onto images. BID12 do a detailed case study, including empirical and theoretical results, of classifiers for a synthetic dataset that lies on two concentric spheres. BID31 show that the Lipschitz constant of untrained networks with random weights gets large in high dimensions. Shortly after the original appearance of our work, BID20 presented a study of adversarial susceptibility that included both evasion and poisoning attacks. Our work is distinct in that it studies adversarial robustness for arbitrary data distributions, and also that it rigorously looks at the effect of dimensionality on robustness limits.
We use [0,1]^n to denote the unit hypercube in n dimensions, and vol(A) to denote the volume (i.e., n-dimensional Lebesgue measure) of a subset A ⊂ [0,1]^n. We use S^(n−1) = {x ∈ R^n | ‖x‖_2 = 1} to denote the unit sphere embedded in R^n, and s_{n−1} to denote its surface area. The size of a subset A ⊂ S^(n−1) can be quantified by its (n−1)-dimensional measure µ[A], which is simply the surface area the set covers. Because the surface area of the unit sphere varies greatly with n, it is much easier in practice to work with the normalized measure, which we denote µ_1[A] = µ[A]/s_{n−1}. This normalized measure has the property that µ_1[S^(n−1)] = 1, and so we can interpret µ_1[A] as the probability of a uniform random point from the sphere lying in A. When working with points on a sphere, we often use geodesic distance, which is always somewhat larger than (but comparable to) the Euclidean distance. In the cube, we measure distance between points using ℓp-norms, which are denoted ‖x‖_p = (Σ_i |x_i|^p)^(1/p). Note that ‖·‖_p is not truly a norm for p < 1, but rather a semi-norm. Such metrics are still commonly used, particularly the "ℓ0-norm," which counts the number of non-zero entries in a vector.

We consider the problem of classifying data points that lie in a space Ω (either a sphere or a hypercube) into m different object classes. The m object classes are defined by probability density functions {ρ_c}_{c=1}^m, where ρ_c: Ω → R. A "random" point from class c is a random variable with density ρ_c. We assume ρ_c to be bounded (i.e., we don't allow delta functions or other generalized functions), and denote its upper bound by U_c = sup_x ρ_c(x). We also consider a "classifier" function C: Ω → {1, 2, ..., m} that partitions Ω into disjoint measurable subsets, one for each class label. The classifier we consider is discrete valued: it provides a label for each data point but not a confidence level. With this setup, we can give a formal definition of an adversarial example.

Definition 1. Consider a point x ∈ Ω drawn from class c, a scalar ε > 0, and a metric d. We say that x admits an ε-adversarial example in the metric d if there exists a point x̂ ∈ Ω with C(x̂) ≠ c, and d(x, x̂) ≤ ε.

In plain words, a point has an ε-adversarial example if we can sneak it into a different class by moving it at most ε units in the distance d. We consider adversarial examples with respect to different ℓp-norm metrics. These metrics are written d_p(x, x̂) = ‖x − x̂‖_p. A common choice is p = ∞, which limits the absolute change that can be made to any one pixel. However, ℓ2-norm and ℓ1-norm adversarial examples are also used, as it is frequently easier to create adversarial examples in these less restrictive metrics. We also consider sparse adversarial examples in which only a small subset of pixels are manipulated. This corresponds to the metric d_0, in which case the constraint ‖x − x̂‖_0 ≤ ε means that an adversarial example was crafted by changing at most ε pixels, and leaving the others alone.
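These metrics are easy to compute directly; a small sketch of ours covering the ℓ0, ℓ2, and ℓ∞ cases:

import numpy as np

def lp_distance(x, y, p):
    # d_p(x, y) = ||x - y||_p; p = 0 counts changed coordinates (sparse attacks)
    d = np.abs(np.asarray(x, dtype=float) - np.asarray(y, dtype=float))
    if p == 0:
        return int(np.count_nonzero(d))
    if p == np.inf:
        return float(d.max())
    return float((d ** p).sum() ** (1.0 / p))

x = np.zeros(100)
y = x.copy()
y[:3] += 0.5                       # perturb three "pixels" by 0.5 each
print(lp_distance(x, y, 0))        # -> 3
print(lp_distance(x, y, 2))        # -> ~0.866
print(lp_distance(x, y, np.inf))   # -> 0.5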
We begin by looking at the case of classifiers for data on the sphere. While this data model may be less relevant than the other models studied below, it provides a straightforward case where results can be proven using simple, geometric lemmas. The more realistic case of images with pixels in [0,1] will be studied in Section 4. The idea is to show that, provided a class of data points takes up enough space, nearly every point in the class lies close to the class boundary. To show this, we begin with a simple definition.

Definition 2. The ε-expansion of a subset A ⊂ Ω with respect to distance metric d, denoted A(ε, d), contains all points that are at most ε units away from A. To be precise,

A(ε, d) = {x ∈ Ω | d(x, y) ≤ ε for some y ∈ A}.

We sometimes simply write A(ε) when the distance metric is clear from context. Our result provides bounds on the probability of adversarial examples that are independent of the shape of the class boundary. This independence is a simple consequence of an isoperimetric inequality. The classical isoperimetric inequality states that, of all closed surfaces that enclose a unit volume, the sphere has the smallest surface area. This simple fact is intuitive but famously difficult to prove. For a historical review of the isoperimetric inequality and its variants, see BID24. We will use a special variant of the isoperimetric inequality first proved by BID18 and simplified by BID37.

Lemma 1 (Isoperimetric inequality). Consider a subset of the sphere A ⊂ S^(n−1) ⊂ R^n with normalized measure µ_1(A) ≥ 1/2. When using the geodesic metric, the ε-expansion A(ε) is at least as large as the ε-expansion of a half sphere.

The classical isoperimetric inequality is a simple geometric statement, and frequently appears without absolute bounds on the size of the ε-expansion of a half-sphere, or with bounds that involve unspecified constants BID41. A tight bound derived by BID23 is given below. The asymptotic blow-up of the ε-expansion of a half sphere predicted by this bound is shown in FIG1.

Lemma 2 (ε-expansion of half sphere). The geodesic ε-expansion of a half sphere has normalized measure at least 1 − sqrt(π/8) exp(−((n−2)/2) ε²).

Lemmas 1 and 2 together can be taken to mean that, if a set is not too small, then in high dimensions almost all points on the sphere are reachable within a short jump from that set. These lemmas have immediate implications for adversarial examples, which are formed by mapping one class into another using small perturbations. Despite its complex appearance, the result below is a consequence of the (relatively simple) isoperimetric inequality.

Theorem 1 (Existence of Adversarial Examples). Consider a classification problem with m object classes, each distributed over the unit sphere S^(n−1) ⊂ R^n with density functions {ρ_c}_{c=1}^m. Choose a classifier function C: S^(n−1) → {1, 2, ..., m} that partitions the sphere into disjoint measurable subsets. Define the following scalar constants:
• Let V_c denote the magnitude of the supremum of ρ_c relative to the uniform density. This can be written V_c := s_{n−1} · sup_x ρ_c(x).
• Let f_c = µ_1{x | C(x) = c} be the fraction of the sphere labeled as c by classifier C.
Choose some class c with f_c ≤ 1/2. Sample a random data point x from ρ_c. Then with probability at least

1 − V_c sqrt(π/8) exp(−((n−2)/2) ε²)    (1)

one of the following conditions holds: 1. x is misclassified by C, or 2. x admits an ε-adversarial example in the geodesic distance.

Proof. Choose a class c with f_c ≤ 1/2. Let R = {x | C(x) = c} denote the region of the sphere labeled as class c by C, and let R̄ be its complement. R̄(ε) is the ε-expansion of R̄ in the geodesic metric. Because R̄ covers at least half the sphere, the isoperimetric inequality (Lemma 1) tells us that the epsilon expansion is at least as great as the epsilon expansion of a half sphere. We thus have

µ_1[R̄(ε)] ≥ 1 − sqrt(π/8) exp(−((n−2)/2) ε²).

Now, consider the set S_c of "safe" points from class c that are correctly classified and do not admit adversarial perturbations. A point is correctly classified only if it lies inside R, and therefore outside of R̄. To be safe from adversarial perturbations, a point cannot lie within distance ε of the class boundary, and so it cannot lie within R̄(ε). It is clear that the set S_c of safe points is exactly the complement of R̄(ε). This set has normalized measure

µ_1[S_c] ≤ sqrt(π/8) exp(−((n−2)/2) ε²).

The probability of a random point lying in S_c is bounded above by the normalized supremum of ρ_c times the normalized measure µ_1[S_c]. This product is given by

V_c sqrt(π/8) exp(−((n−2)/2) ε²).

We then subtract this probability from 1 to obtain the probability of a point lying outside the safe region, and arrive at equation 1.
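To get a feel for how quickly the bound in equation 1 approaches 1 with dimension, the following short numerical sketch (our own illustration, using the bound exactly as reconstructed above) evaluates it for a fixed ε:

import numpy as np

def theorem1_lower_bound(eps, n, V_c=1.0):
    # lower bound (eq. 1) on Pr[x is misclassified or admits an eps-adversarial
    # example] for a class on the unit sphere S^(n-1)
    return 1.0 - V_c * np.sqrt(np.pi / 8.0) * np.exp(-0.5 * (n - 2) * eps ** 2)

for n in (50, 500, 5000):
    print(n, theorem1_lower_bound(eps=0.1, n=n))
# the bound rises rapidly toward 1 as n grows, matching the blow-up in FIG1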
In the above result, we measure the size of adversarial perturbations using the geodesic distance. Most studies of adversarial examples measure the size of perturbation in either the ℓ2 (Euclidean) norm or the ℓ∞ (max) norm, and so it is natural to wonder whether Theorem 1 depends strongly on the distance metric. Fortunately (or, rather unfortunately) it does not. It is easily observed that, for any two points x and y on a sphere,

d_∞(x, y) ≤ d_2(x, y) ≤ d_g(x, y),

where d_∞(x, y), d_2(x, y), and d_g(x, y) denote the ℓ∞, Euclidean, and geodesic distance, respectively. From this, we see that Theorem 1 is actually fairly conservative; any ε-adversarial example in the geodesic metric would also be adversarial in the other two metrics, and the bound in Theorem 1 holds regardless of which of the three metrics we choose (although different values of ε will be appropriate depending on the norm).

The above result about the sphere is simple and easy to prove using classical results. However, real world images do not lie on the sphere. In a more typical situation, images will be scaled so that their pixels lie in [0,1], and data lies inside a high-dimensional hypercube (but, unlike the sphere, data is not confined to its surface). The proof of Theorem 1 makes extensive use of properties that are exclusive to the sphere, and is not applicable to this more realistic setting. Are there still problem classes on the cube where adversarial examples are inevitable? This question is complicated by the fact that geometric isoperimetric inequalities do not exist for the cube, as the shapes that achieve minimal ε-expansion (if they exist) depend on the volume they enclose and the choice of ε BID28. Fortunately, researchers have been able to derive "algebraic" isoperimetric inequalities that provide lower bounds on the size of the ε-expansion of sets without identifying the shape that achieves this minimum BID38 BID23. The result below about the unit cube is analogous to Proposition 2.8 in BID17, except with tighter constants. For completeness, a proof (which utilizes methods from Ledoux) is provided in Appendix A.

Lemma 3 (Isoperimetric inequality on a cube). Consider a measurable subset of the cube A ⊂ [0,1]^n and an ℓp-norm distance metric d_p(x, y) = ‖x − y‖_p. Let Φ(z) = (2π)^(−1/2) ∫_{−∞}^{z} exp(−t²/2) dt, and let α be the scalar that satisfies Φ(α) = vol(A). Then

vol[A(ε, d_p)] ≥ Φ(α + sqrt(2π) n^(1/2−1/p*) ε),    (2)

where p* = min(p, 2). In particular, if vol(A) ≥ 1/2, then we simply have

vol[A(ε, d_p)] ≥ 1 − exp(−π n^(1−2/p*) ε²) / (2π n^(1/2−1/p*) ε).    (3)

Using this result, we can show that most data samples in a cube admit adversarial examples, provided the data distribution is not excessively concentrated.

Theorem 2. Consider a classification problem with m object classes, each distributed over the unit hypercube [0,1]^n with density functions {ρ_c}_{c=1}^m. Choose a classifier function C: [0,1]^n → {1, 2, ..., m} that partitions the hypercube into disjoint measurable subsets. Define the following scalar constants:
• Let U_c denote the supremum of ρ_c.
• Let f_c be the fraction of the hypercube partitioned into class c by C.
Choose some class c with f_c ≤ 1/2, and select an ℓp-norm with p > 0. Define p* = min(p, 2). Sample a random data point x from the class distribution ρ_c. Then with probability at least

1 − U_c exp(−π n^(1−2/p*) ε²) / (2π n^(1/2−1/p*) ε)    (4)

one of the following conditions holds: 1. x is misclassified by C, or 2. x has an ε-adversarial example x̂, with ‖x − x̂‖_p ≤ ε.
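Evaluating the bound in equation 4 (as reconstructed above) is a one-liner; a small sketch of ours, which for p ≥ 2 reduces to the dimension-free form discussed next:

import numpy as np

def theorem2_lower_bound(eps, n, U_c=1.0, p=2.0):
    # lower bound (eq. 4) on Pr[misclassified or eps-adversarial] in the cube
    p_star = min(p, 2.0)
    return 1.0 - U_c * (np.exp(-np.pi * n ** (1.0 - 2.0 / p_star) * eps ** 2)
                        / (2.0 * np.pi * n ** (0.5 - 1.0 / p_star) * eps))

# for p >= 2 the dimension factors cancel: bound = 1 - U_c exp(-pi eps^2)/(2 pi eps)
print(theorem2_lower_bound(eps=1.0, n=3072, U_c=1.0, p=2.0))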
When adversarial examples are defined in the ℓ2-norm (or for any p ≥ 2), the bound in equation 4 becomes

1 − U_c exp(−πε²) / (2πε).    (5)

Provided the class distribution is not overly concentrated, equation 5 guarantees adversarial examples with ε relatively "small" compared to a typical vector. In n dimensions, the ℓ2 diameter of the cube is sqrt(n), and so it is reasonable to choose ε = O(sqrt(n)) in equation 5. In FIG0, we chose ε = 10. A similarly strong bound can be derived for the ℓ∞-norm by using the Gaussian tail approximation 1 − Φ(z) ≤ exp(−z²/2) (for z > 0) and α = Φ^(−1)(1 − f_c). For this bound to be meaningful with ε < 1, we need f_c to be relatively small, and ε to be roughly f_c or smaller. This is realistic for some problems; ImageNet has 1000 classes, and so f_c < 10^(−3) for at least one class. Interestingly, under ℓ∞-norm attacks, guarantees of adversarial examples are much stronger on the sphere (Section 3) than on the cube. One might wonder whether the weakness of Theorem 2 in the ℓ∞ case is fundamental, or if this is a failure of our approach. One can construct examples of sets with ℓ∞ expansions that nearly match the behavior of equation 5, and so our theorems in this case are actually quite tight. It seems to be inherently more difficult to prove the existence of adversarial examples in the cube using the ℓ∞-norm.

A number of papers have looked at sparse adversarial examples, in which a small number of image pixels, in some cases only one BID34, are changed to manipulate the class label. To study this case, we would like to investigate adversarial examples under the ℓ0 metric. The ℓ0 distance is defined as

d_0(x, y) = ‖x − y‖_0 = |{i | x_i ≠ y_i}|,

the number of coordinates at which x and y differ. If a point x has an ε-adversarial example in this norm, then it can be perturbed into a different class by modifying at most ε pixels (in this case ε is taken to be a positive integer). Theorem 2 is fairly tight for p = 1 or 2. However, the bound becomes quite loose for small p, and in particular it fails completely for the important case of p = 0. For this reason, we present a different bound that is considerably tighter for small p (although slightly looser for large p). The case p = 0 was studied by BID21 (Section 6.2), and later by BID37 BID38. The proof of the following theorem (Appendix B) follows the method used in Section 5 of BID38, with modifications made to extend the proof to arbitrary p.

Lemma 4 (Isoperimetric inequality on the cube: small p). Consider a measurable subset of the cube A ⊂ [0,1]^n, and a p-norm distance metric d(x, y) = ‖x − y‖_p for any p ≥ 0. In the case p = 0 (the one used below), we have

vol[A(ε, d_0)] ≥ 1 − exp(−2ε²/n) / vol(A).

Using this result, we can prove a statement analogous to Theorem 2, but for sparse adversarial examples. We present only the case of p = 0, but the generalization to the case of other small p using Lemma 4 is straightforward.

Theorem 3 (Sparse adversarial examples). Consider the problem setup of Theorem 2. Choose some class c with f_c ≤ 1/2, and sample a random data point x from the class distribution ρ_c. Then with probability at least

1 − 2 U_c exp(−2ε²/n)

one of the following conditions holds: 1. x is misclassified by C, or 2. x can be adversarially perturbed by modifying at most ε pixels, while still remaining in the unit hypercube.

Tighter bounds can be obtained if we only guarantee that adversarial examples exist for some data points in a class, without bounding the probability of this event.

Theorem 4 (Condition for existence of adversarial examples). Consider the setup of Theorem 2. Choose a class c that occupies a fraction of the cube f_c < 1/2. Pick an ℓp-norm and set p* = min(p, 2). Let supp(ρ_c) denote the support of ρ_c. Then there is a point x with ρ_c(x) > 0 that admits an ε-adversarial example if

vol[supp(ρ_c)] ≥ (1/2) exp(−π n^(1−2/p*) ε²)  for p > 0, or vol[supp(ρ_c)] ≥ 2 exp(−2ε²/n)  for p = 0.

The bound for the case p = 0 is valid only if ε ≥ sqrt(n log 2 / 2).
It is interesting to consider when Theorem 4 produces non-vacuous bounds. When the ℓ2-norm is used, the bound becomes vol[supp(ρ_c)] ≥ exp(−πε²)/2. The diameter of the cube is sqrt(n), and so the bound becomes active for ε = sqrt(n). Plugging this in, we see that the bound is active whenever the size of the support satisfies vol[supp(ρ_c)] ≥ 1/(2 exp(πn)). Remarkably, this holds for large n whenever the support of class c is larger than (or contains) a hypercube of side length at least e^(−π) ≈ 0.043. Note, however, that the bound being "active" does not guarantee adversarial examples with a "small" ε.

There are a number of ways to escape the guarantees of adversarial examples made by Theorems 1-4. One potential escape is for the class density functions to take on extremely large values (i.e., exponentially large U_c); the dependence of U_c on n is addressed separately in Section 8.

Unbounded density functions and low-dimensional data manifolds. In practice, image datasets might lie on low-dimensional manifolds within the cube, and the support of these distributions could have measure zero, making the density function infinite (i.e., U_c = ∞). The arguments above are still relevant (at least in theory) in this case; we can expand the data manifold by adding uniform random noise of magnitude at most ε_1 to each image pixel. The expanded dataset has positive volume. Then, adversarial examples of this expanded dataset can be crafted with perturbations of size ε_2. This method of expanding the manifold before crafting adversarial examples is often used in practice. BID39 proposed adding a small perturbation to step off the image manifold before crafting adversarial examples. This strategy is also used during adversarial training BID19.

Adding a "don't know" class. The analysis above assumes the classifier assigns a label to every point in the cube. If a classifier has the ability to say "I don't know," rather than assign a label to every input, then the region of the cube that is assigned class labels might be very small, and adversarial examples could be escaped even if the other assumptions of Theorem 4 are satisfied. In this case, it would still be easy for the adversary to degrade classifier performance by perturbing images into the "don't know" class.

Feature squeezing. If decreasing the dimensionality of data does not lead to substantially increased values for U_c (we see in Section 8 that this is a reasonable assumption) or loss in accuracy (a stronger assumption), measuring data in lower dimensions could increase robustness. This can be done via an auto-encoder BID22 BID30, JPEG encoding BID9, or quantization BID43.

Computational hardness. It may be computationally hard to craft adversarial examples because of local flatness of the classification function, obscurity of the classifier function, or other computational difficulties. Computational hardness could prevent adversarial attacks in practice, even if adversarial examples still exist.

In this section, we discuss the relationship between dimensionality and adversarial robustness, and explore how the predictions made by the theorems above are reflected in experiments. It is commonly thought that high-dimensional classifiers are more susceptible to adversarial examples than low-dimensional classifiers.
This perception is partially motivated by the observation that classifiers on high-resolution image distributions like ImageNet are more easily fooled than low-resolution classifiers on MNIST BID39. Indeed, Theorem 2 predicts that high-dimensional classifiers should be much easier to fool than low-dimensional classifiers, assuming the datasets they classify have comparable probability density limits U_c. However, this is not a reasonable assumption; we will see below that high-dimensional distributions may be more concentrated than their low-dimensional counterparts. We study the effects of dimensionality with a thought experiment involving a "big MNIST" image distribution. Given an integer expansion factor b, we can make a big MNIST distribution, denoted b-MNIST, by replacing each pixel in an MNIST image with a b × b array of identical pixels. This expands an original 28 × 28 image into a 28b × 28b image. FIG3 shows that, without adversarial training, a classifier on big MNIST is far more susceptible to attacks than a classifier trained on the original MNIST. However, each curve in FIG3 only shows the attack susceptibility of one particular classifier. In contrast, Theorems 1-4 describe the fundamental limits of susceptibility for all classifiers. These limits are an inherent property of the data distribution. The theorem below shows that these fundamental limits do not depend in a non-trivial way on the dimensionality of the images in big MNIST, and so the relationship between dimensionality and susceptibility in FIG3 results from the weakness of the training process.

Theorem 5. Suppose ε and p are such that, for all MNIST classifiers, a random image from class c has an ε-adversarial example (in the ℓ2-norm) with probability at least p. Then for all classifiers on b-MNIST, with integer b ≥ 1, a random image from c has a bε-adversarial example with probability at least p. Likewise, if all b-MNIST classifiers have bε-adversarial examples with probability p for some b ≥ 1, then all classifiers on the original MNIST distribution have ε-adversarial examples with probability p.

Theorem 5 predicts that the perturbation needed to fool all 56 × 56 classifiers is twice that needed to fool all 28 × 28 classifiers. This is reasonable since the ℓ2-norm of a 56 × 56 image is twice that of its 28 × 28 counterpart. Put simply, fooling big MNIST is just as hard/easy as fooling the original MNIST, regardless of resolution. This also shows that for big MNIST, as the expansion factor b gets larger and ε is expanded to match, the concentration bound U_c grows at exactly the same rate as the exponential term in equation 2 shrinks, and there is no net effect on fundamental susceptibility. Also note that an analogous result could be based on any image classification problem (we chose MNIST only for illustration), and any p ≥ 0. We get a better picture of the fundamental limits of MNIST by considering classifiers that are hardened by adversarial training (FIG3). These curves display several properties of fundamental limits predicted by our theorems. As predicted by Theorem 5, the 112 × 112 classifier curve is twice as wide as the 56 × 56 curve, which in turn is twice as wide as the 28 × 28 curve. In addition, we see the kind of "phase transition" behavior predicted by Theorem 2, in which the classifier suddenly changes from being highly robust to being highly susceptible as ε passes a critical threshold. For these reasons, it is reasonable to suspect that the adversarially trained classifiers in FIG3 are operating near the fundamental limit predicted by Theorem 2.
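The big MNIST construction is easy to reproduce; a small sketch of ours using pixel replication, which also confirms that an ε-perturbation of a 28 × 28 image maps to a bε-perturbation of the expanded image:

import numpy as np

def big_mnist(image, b):
    # replace each pixel with a b x b block of identical pixels (28x28 -> 28b x 28b)
    return np.kron(image, np.ones((b, b), dtype=image.dtype))

rng = np.random.default_rng(0)
delta = rng.random((28, 28)).astype(np.float32) * 1e-2   # a small perturbation
ratio = np.linalg.norm(big_mnist(delta, 2)) / np.linalg.norm(delta)
print(ratio)   # -> 2.0, the l2 norm scales by exactly b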
Theorem 5 shows that increased dimensionality does not increase adversarial susceptibility in a fundamental way. But then why are high-dimensional classifiers so easy to fool? To answer this question, we look at the concentration bound U_c for object classes. The smallest possible value of U_c is 1, which only occurs when images are "spread out" with uniform, uncorrelated pixels. In contrast, adjacent pixels in MNIST (and especially big MNIST) are very highly correlated, and images are concentrated near simple, low-dimensional manifolds, resulting in highly concentrated image classes with large U_c. Theory predicts that such highly concentrated datasets can be relatively safe from adversarial examples. We can reduce U_c and dramatically increase susceptibility by choosing a more "spread out" dataset, like CIFAR-10, in which adjacent pixels are less strongly correlated and images appear to concentrate near complex, higher-dimensional manifolds. We observe the effect of decreasing U_c by plotting the susceptibility of a 56 × 56 MNIST classifier against a classifier for CIFAR-10 (FIG3, right). The former problem lives in 3136 dimensions, while the latter lives in 3072, and both have 10 classes. Despite the structural similarities between these problems, the decreased concentration of CIFAR-10 results in vastly more susceptibility to attacks, regardless of whether adversarial training is used. The theory above suggests that this increased susceptibility is caused at least in part by a shift in the fundamental limits for CIFAR-10, rather than the weakness of the particular classifiers we chose. Informally, the concentration limit U_c can be interpreted as a measure of image complexity. Image classes with smaller U_c are likely concentrated near high-dimensional complex manifolds, have more intra-class variation, and thus more apparent complexity. An informal interpretation of Theorem 2 is that "high complexity" image classes are fundamentally more susceptible to adversarial examples, and FIG3 suggests that complexity (rather than dimensionality) is largely responsible for differences we observe in the effectiveness of adversarial training for different datasets.

The question of whether adversarial examples are inevitable is an ill-posed one. Clearly, any classification problem has a fundamental limit on robustness to adversarial attacks that cannot be escaped by any classifier. However, we have seen that these limits depend not only on fundamental properties of the dataset, but also on the strength of the adversary and the metric used to measure perturbations. This paper provides a characterization of these limits and how they depend on properties of the data distribution. Unfortunately, it is impossible to know the exact properties of real-world image distributions or the resulting fundamental limits of adversarial training for specific datasets. However, the analysis and experiments in this paper suggest that, especially for complex image classes in high-dimensional spaces, these limits may be far worse than our intuition tells us.

A PROOF OF LEMMA 3

We now prove Lemma 3. To do this, we begin with a classical isoperimetric inequality for random Gaussian variables. Unlike the case of a cube, tight geometric isoperimetric inequalities exist in this case.
We then prove about the cube by creating a mapping between uniform random variables on the cube and random Gaussian vectors. In the lemma below, we consider the standard Gaussian density in R n given by p(x) = 1 (2π) n/2 e −nx 2 /2 and corresponding Gaussian measure µ. We also define DISPLAYFORM0 which is the cumulative density of a Gaussian curve. The following Lemma was first proved in BID35, and an elementary proof was given in BID4. Lemma 5 (Gaussian Isoperimetric Inequality). Of all sets with the same Gaussian measure, the set with 2 -expansion of smallest measure is a half space. Furthermore, for any measurable set A ⊂ R n, and scalar constant a such that DISPLAYFORM1 Using this we can now give a proof of Lemma 3.This function Φ maps a random Guassian vector z ∈ N (0, I) onto a random uniform vector in the unit cube. To see why, consider a measurable subset B ⊂ R n. If µ is the Gaussian measure on R n and σ is the uniform measure on the cube, then DISPLAYFORM2. DISPLAYFORM3, we also have DISPLAYFORM4 for any z, w ∈ R n. From this, we see that for p * = min(p, 2) DISPLAYFORM5 where we have used the identity u p ≤ n 1/ min(p,2)−1/2 u 2. Now, consider any set A in the cube, and let B = Φ −1 (A). From equation 10, we see that DISPLAYFORM6 It follows from equation 10 that DISPLAYFORM7 Applying Lemma 5, we see that DISPLAYFORM8 where DISPLAYFORM9 To obtain the simplified formula in the theorem, we use the identity DISPLAYFORM10 which is valid for x > 0, and can be found in BID0.B PROOF OF LEMMA 4Our proof emulates the method of Talagrand, with minor modifications that extend the to other p norms. We need the following standard inequality. Proof can be found in BID37 BID38. Lemma 6 (Talagrand). Consider a probability space Ω with measure µ. For g: Ω →, we have DISPLAYFORM11 Our proof of Lemma 3 follows the three-step process of Talagrand illustrated in BID37. We begin by proving the bound DISPLAYFORM12 where DISPLAYFORM13 is a measure of distance from A to x, and α, t are arbitrary positive constants. Once this bound is established, a Markov bound can be used to obtain the final . Finally, constants are tuned in order to optimize the tightness of the bound. We start by proving the bound in equation 12 using induction on the dimension. The base case for the induction is n = 1, and we have DISPLAYFORM14 We now prove the for n dimensions using the inductive hypothesis. We can upper bound the integral by integrating over "slices" along one dimension. Let A ⊂ n. Define DISPLAYFORM15 Clearly, the distance from (ω, z) to A is at most the distance from z to A ω, and so DISPLAYFORM16 We also have that the distance from x to A is at most one unit greater than the distance from x to B. This gives us. DISPLAYFORM17 Now, we apply lemma 6 to equation 13 with g(ω) = α[A ω]/α[B] to arrive at equation 12.The second step of the proof is to produce a Markov inequality from equation 12. For the bound in equation 12 to hold, we need DISPLAYFORM18 The third step is to optimize the bound by choosing constants. We minimize the right hand side by choosing t = Now, we can simply choose α = 1 to get the simple bound DISPLAYFORM19 or we can choose the optimal value of α = 2 2p n log(1/σ) − 1, which optimizes the bound in the case 2p ≥ n 2 log(1/σ(A)). We arrive at DISPLAYFORM20 This latter bound is stronger than we need to prove Lemma 3, but it will come in handy later to prove Theorem 4.C PROOF OF THEOREMS 2 AND 3We combine the proofs of these since their proofs are nearly identical. 
The proofs closely follow the argument of Theorem 1.Choose a class c with f c ≤ 1 2 and let R = {x|C(x) = c} denote the subset of the cube lying in class c according to the classifier C. Let R be the complement, who's p expansion is denoted R(; d p). Because R covers at least half the cube, we can invoke Lemma 3. We have that vol[R( ; h)] ≥ 1 − δ, where δ = exp(−πn 1−2/p * 2) 2πn 1/2−1/p *, for Theorem 2 and 2U c exp(− 2 /n), for Theorem 3.The set R(; h) contains all points that are correctly classified and safe from adversarial perturbations. This region has volume at most δ, and the probability of a sample from the class distribution ρ c lying in this region is at most U c δ. We then subtract this from 1 to obtain the mass of the class distribution lying in the "unsafe" region R c. Let A denote the support of p c, and suppose that this support has measure vol[A] = η. We want to show that, for large enough, the expansion A(, d p) is larger than half the cube. Since class c occupies less than half the cube, this would imply that A(, d p) overlaps with other classes, and so there must be data points in A with -adversarial examples. We start with the case p > 0, where we bound A(, d p) using equation 2 of Lemma 3. To do this, we need to approximate Φ −1 (η). This can be done using the inequality Φ(α) = 1 2π The quantity on the left will be greater than This can be re-arranged to obtain the desired . In the case p = 0, we need to use equation 17 from the proof of Lemma 3 in Appendix B, which we restate here
This paper identifies classes of problems for which adversarial examples are inescapable, and derives fundamental bounds on the susceptibility of any classifier to adversarial examples.
496
scitldr
For computer vision applications, prior works have shown the efficacy of reducing the numeric precision of model parameters (network weights) in deep neural networks. Activation maps, however, occupy a large memory footprint during both the training and inference step when using mini-batches of inputs. One way to reduce this large memory footprint is to reduce the precision of activations. However, past works have shown that reducing the precision of activations hurts model accuracy. We study schemes to train networks from scratch using reduced-precision activations without hurting accuracy. We reduce the precision of activation maps (along with model parameters) and increase the number of filter maps in a layer, and find that this scheme matches or surpasses the accuracy of the baseline full-precision network. As a result, one can significantly improve the execution efficiency (e.g. reduce dynamic memory footprint, memory bandwidth and computational energy) and speed up the training and inference process with appropriate hardware support. We call our scheme WRPN -- wide reduced-precision networks. We report results and show that the WRPN scheme achieves better accuracies than previously reported reduced-precision networks on the ILSVRC-12 dataset while being computationally less expensive.

A promising approach to lower the compute and memory requirements of convolutional deep-learning workloads is through the use of low numeric precision algorithms. Operating in lower precision mode reduces computation as well as data movement and storage requirements. Due to such efficiency benefits, there are many existing works which propose low-precision deep neural networks (DNNs) BID27 BID12 BID14 BID6 BID24, even down to 2-bit ternary mode BID29 BID11 BID25 and 1-bit binary mode BID28 BID2 BID16 BID23. However, the majority of existing works in low-precision DNNs sacrifice accuracy over the baseline full-precision networks. Further, most prior works target reducing the precision of the model parameters (network weights). This primarily benefits the inference step only when batch sizes are small. We observe that activation maps (neuron outputs) occupy more memory compared to the model parameters for batch sizes typical during training. This observation holds even during inference when batch size is around eight or more. Based on this observation, we study schemes for training and inference using low-precision DNNs where we reduce the precision of activation maps as well as the model parameters without sacrificing network accuracy. To improve both execution efficiency and accuracy of low-precision networks, we reduce both the precision of activation maps and model parameters and increase the number of filter maps in a layer. We call networks using this scheme wide reduced-precision networks (WRPN) and find that this scheme compensates for, or surpasses, the accuracy of the baseline full-precision network. Although the number of raw compute operations increases as we increase the number of filter maps in a layer, the compute bits required per operation is now a fraction of what is required when using full-precision operations (e.g. going from FP32 AlexNet to 4-bits precision and doubling the number of filters increases the number of compute operations by 4x, but each operation is 8x more efficient than FP32). WRPN offers better accuracies, while being computationally less expensive compared to previously reported reduced-precision networks.
We report results on AlexNet BID10, batch-normalized Inception BID8, and ResNet-34 BID7 on the ILSVRC-12 dataset. We find 4-bits to be sufficient for training deep and wide models while achieving similar or better accuracy than the baseline network. With 4-bit activations and 2-bit weights, we find the accuracy to be at par with the baseline full-precision network. Making the networks wider and operating with 1-bit precision, we close the accuracy gap between previously reported binary networks and show state-of-the-art results for ResNet-34 (69.85% top-1 with 2x wide) and AlexNet (48.04% top-1 with 1.3x wide). To the best of our knowledge, our reported accuracies with binary networks and 4-bit precision are the highest to date. Our reduced-precision quantization scheme is hardware friendly, allowing for efficient hardware implementations. To this end, we evaluate the efficiency benefits of low-precision operations (4-bit to 1-bit) on a Titan X GPU, an Arria-10 FPGA and an ASIC. We see that FPGA and ASIC can deliver significant efficiency gains over FP32 operations (6.5x to 100x), while GPU cannot take advantage of very low-precision operations.

While most prior works proposing reduced-precision networks work with low-precision weights (e.g. BID2 BID29 BID28 BID25 BID11 BID23), we find that activation maps occupy a larger memory footprint when using mini-batches of inputs. Using mini-batches of inputs is typical in training of DNNs and cloud-based batched inference BID9. FIG0 shows the memory footprint of activation maps and filter maps as batch size changes for four different networks (including AlexNet and Inception-ResNet-v2 BID22) during the training and inference steps.

Figure 2: Memory requirements of a feed-forward convolutional deep neural network. Orange boxes denote weights (W), blue boxes are activations (ACT) and green boxes are gradient-maps (Grad).

As batch-size increases, because of filter reuse across batches of inputs, activation maps occupy a significantly larger fraction of memory compared to the filter weights. This aspect is illustrated in Figure 2, which shows the memory requirements of a canonical feed-forward DNN for a hardware accelerator based system (e.g. GPU, FPGA, PCIe-connected ASIC device, etc.). During training, the sum of all the activation maps (ACT) and weight tensors (W) are allocated in device memory for the forward pass, along with memory for gradient maps during backward propagation. The total memory requirement for the training phase is the sum of the memory required for the activation maps and weights, plus the maximum of the input gradient maps (δZ) and the maximum of the back-propagated gradients (δX). During inference, memory is allocated for the input (IFM) and output feature maps (OFM) required by a single layer, and these memory allocations are reused for other layers. The total memory allocation during inference is then the maximum of IFM and the maximum of OFM required across all the layers, plus the sum of all W-tensors. At batch sizes 128 and more, activations start to occupy more than 98% of the total memory footprint during training. Overall, reducing the precision of activations and weights reduces memory footprint, bandwidth and storage while also simplifying the requirements for hardware to efficiently support these operations.
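A back-of-the-envelope estimate makes the activation-dominance point concrete; the following sketch (our own, with made-up layer shapes, counting only FP32 activations and weights, no gradients or workspace) reproduces the qualitative trend in FIG0:

def memory_footprint(act_shapes, weight_shapes, batch, bytes_per=4):
    # act_shapes:    per-example activation map shapes (C, H, W), one per layer
    # weight_shapes: filter tensor shapes (K, C, R, S), one per layer
    acts = batch * sum(c * h * w for (c, h, w) in act_shapes) * bytes_per
    wts = sum(k * c * r * s for (k, c, r, s) in weight_shapes) * bytes_per
    return acts, wts

acts, wts = memory_footprint([(64, 56, 56), (128, 28, 28), (256, 14, 14)],
                             [(64, 3, 7, 7), (128, 64, 3, 3), (256, 128, 3, 3)],
                             batch=128)
print(acts / (acts + wts))   # ~0.99: activations dominate at this batch size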
3 WRPN SCHEME AND STUDIES ON ALEXNET

Based on the observation that activations occupy more memory footprint compared to weights, we reduce the precision of activations to speed up training and inference steps as well as cut down on memory requirements. However, a straightforward reduction in the precision of activation maps leads to a significant reduction in model accuracy BID28 BID16. We conduct a sensitivity study where we reduce the precision of activation maps and model weights for AlexNet running the ILSVRC-12 dataset and train the network from scratch. The 32bA and 2bW data-point in this table uses the Trained Ternary Quantization (TTQ) technique BID29. All other data points are collected using our quantization scheme (described later in Section 5); all the runs have the same hyper-parameters, and training is carried out for the same number of epochs as the baseline network. To be consistent with results reported in prior works, we do not quantize the weights and activations of the first and last layer. We find that, in general, reducing the precision of activation maps and weights hurts model accuracy. Further, reducing the precision of activations hurts model accuracy much more than reducing the precision of the filter parameters. We find TTQ to be quite effective on AlexNet in that one can lower the precision of weights to 2b (while activations are still FP32) and not lose accuracy. However, we did not find this scheme to be effective for other networks like ResNet or Inception.

To re-gain the model accuracy while working with reduced-precision operands, we increase the number of filter maps in a layer. Although the number of raw compute operations increases with widening the filter maps in a layer, the bits required per compute operation is now a fraction of what is required when using full-precision operations. As a result, with appropriate hardware support, one can significantly reduce the dynamic memory requirements, memory bandwidth and computational energy, and speed up the training and inference process. Our widening of filter maps is inspired by the Wide ResNet BID26 work, where the depth of the network is reduced and the width of each layer is increased (the operand precision is still FP32). Wide ResNet requires a re-design of the network architecture. In our work, we maintain the depth parameter the same as the baseline network but widen the filter maps. We call our approach WRPN -- wide reduced-precision networks. In practice, we find this scheme to be very simple and effective: starting with a baseline network architecture, one can change the width of each filter map without changing any other network design parameter or hyper-parameters. Carefully reducing precision and simultaneously widening filters keeps the total compute cost of the network under or at par with the baseline cost. TAB2 reports the accuracy of AlexNet when we double the number of filter maps in a layer. With doubling of filter maps, AlexNet with 4-bits weights and 2-bits activations exhibits accuracy at par with full-precision networks. Operating with 4-bits weights and 4-bits activations surpasses the baseline accuracy by 1.44%. With binary weights and activations we better the accuracy of XNOR-NET BID16 by 4%.

When doubling the number of filter maps, AlexNet's raw compute operations grow by 3.9x compared to the baseline full-precision network; however, by using reduced-precision operands the overall compute complexity is a fraction of the baseline. For example, with 4b operands for weights and activations and 2x the number of filters, reduced-precision AlexNet is just 49% of the total compute cost of the full-precision baseline (a compute cost comparison is shown in TAB3). We also experiment with other widening factors.
With 1.3x widening of filters and with 4-bits of activation precision, one can go as low as 8-bits of weight precision while still being at par with the baseline accuracy. With 1.1x wide filters, at least 8-bits weight and 16-bits activation precision is required for the accuracy to match the baseline full-precision 1x wide accuracy. Further, as TAB3 shows, when widening filters by 2x, one needs to lower precision to at least 8-bits so that the total compute cost is not more than the baseline compute cost. Thus, there is a trade-off between widening and reducing the precision of network parameters. In our work, we trade off a higher number of raw compute operations with aggressively reducing the precision of the operands involved in these operations (activation maps and filter weights) while not sacrificing the model accuracy. Apart from the other benefits of reduced-precision activations mentioned earlier, widening filter maps also improves the efficiency of the underlying GEMM calls for convolution operations, since compute accelerators are typically more efficient on a single kernel consisting of parallel computation on large data-structures as opposed to many small-sized kernels BID26.

We study how our scheme applies to deeper networks. For this, we study ResNet-34 BID7 and batch-normalized Inception BID8 and find similar trends, particularly that 2-bits weights and 4-bits activations continue to provide accuracy at par with the baseline. We use TensorFlow BID0 and tensorpack for all our evaluations and use the ILSVRC-12 train and val datasets for analysis. ResNet-34 has 3x3 filters in each of its modular layers, with shortcut connections being 1x1. The filter bank width changes from 64 to 512 as depth increases. We use the pre-activation variant of ResNet; the baseline top-1 accuracy of our ResNet-34 implementation using the single-precision 32-bits data format is 73.59%. Binarizing weights and activations for all layers except the first and the last layer in this network gives a top-1 accuracy of 60.5%. For binarizing ResNet we did not re-order any layer (as is done in XNOR-NET). We used the same hyper-parameters and learning rate schedule as the baseline network. As a reference, for ResNet-18, the gap between XNOR-NET (1b weights and activations) and the full-precision network is 18% BID16. It is also interesting to note that the top-1 accuracy of single-precision AlexNet (57.20%) is lower than the top-1 accuracy of binarized ResNet-34 (60.5%).

We experimented with doubling the number of filters in each layer and reducing the precision of activations and weights. TAB4 shows the results of our analysis. Doubling the number of filters with 4-bits precision for both weights and activations beats the baseline accuracy by 0.9%. 4-bits activations and 2-bits (ternary) weights yield top-1 accuracy at par with the baseline. Reducing precision to 2-bits for both weights and activations degrades accuracy by only 0.2% compared to the baseline. Binarizing the weights and activations with 2x wide filters gives a top-1 accuracy of 69.85%. This is just 3.7% worse than the baseline full-precision network, while being only 15% of the cost of the baseline network. Widening the filters by 3x and binarizing the weights and activations reduces this gap to 1.2%, while the 3x wide network is 30% of the cost of the full-precision baseline network. Although 4-bits precision seems to be enough for wide networks, we advocate for 4-bits activation precision and 2-bits weight precision. This is because with ternary weights one can get rid of the multipliers and use adders instead.
Further, if some accuracy degradation is tolerable, one can even go to binary circuits for an efficient hardware implementation while saving 32x in bandwidth for each of weights and activations compared to full-precision networks. All these gains can be realized with a simpler hardware implementation and lower compute cost compared to baseline networks. To the best of our knowledge, our ResNet binary and ternary (with 2-bits or 4-bits activation) top-1 accuracies are state-of-the-art in the literature, including unpublished technical reports (with similar data augmentation BID13).

We applied the WRPN scheme to the batch-normalized Inception network BID8. This network includes batch normalization of all layers and is a variant of GoogleNet BID21 where the 5x5 convolutional filters are replaced by two 3x3 convolutions with up to 128 wide filters. TAB5 shows the results of our analysis. Using 4-bits activations and 2-bits weights and doubling the number of filter banks in the network produces a model that is almost at par in accuracy with the baseline single-precision network (0.02% loss in accuracy). The wide network with binary weights and activations is within 6.6% of the full-precision baseline network.

We adopt the straight-through estimator (STE) approach in our work BID1. When quantizing a real number to k-bits, the ordinality of the set of quantized numbers is 2^k. Mathematically, this small and finite set would have zero gradients with respect to its inputs. The STE method circumvents this problem by defining an operator that has arbitrary forward and backward operations.

Prior works using the STE approach define operators that quantize the weights based on the expectation of the weight tensors. For instance, Ternary Weight Networks (TWN) BID11 use a threshold and a scaling factor for each layer to quantize weights to the ternary domain. In TTQ BID29, the scaling factors are learned parameters. XNOR-NET binarizes the weight tensor by computing the sign of the tensor values and then scaling by the mean of the absolute value of each output channel of weights. DoReFa uses a single scaling factor across the entire layer. For quantizing weights to k-bits, where k > 1, DoReFa uses:

w_k = 2 * quantize_k( tanh(w_i) / (2 * max(|tanh(w)|)) + 1/2 ) - 1    (1)

Here w_k is the k-bit quantized version of the input w_i, and quantize_k is a quantization function that quantizes a floating-point number in the range [0, 1] to a k-bit number in the same range. The transcendental tanh operation constrains the weight value to lie between -1 and +1, and the affine transformation post quantization brings the range to [-1, 1].

We build on these approaches and propose a much simpler scheme. For quantizing weight tensors we first hard-constrain the values to lie within the range [-1, 1] using a min-max operation (e.g. tf.clip_by_val when using TensorFlow BID0). For quantizing activation tensor values, we constrain the values to lie within the range [0, 1]. This step is followed by a quantization step where a real number is quantized into a k-bit number. This is given as, for k > 1:

w_k = round(w_i * 2^(k-1)) / 2^(k-1),    a_k = round(a_i * 2^k) / 2^k    (2)

Here w_i and a_i are elements of the input real-valued weight and activation tensors, and w_k and a_k are their quantized versions. One bit is reserved for the sign bit in the case of weight values, hence the use of 2^(k-1) for the quantized weight values. Thus, weights can be stored and interpreted using signed data-types and activations using unsigned data-types.
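A minimal PyTorch sketch of this clip-then-round scheme with a straight-through backward pass is shown below. The scale factors follow our reading of Eq. 2 (2^(k-1) levels for signed weights, 2^k for unsigned activations); this is an illustration, not the authors' TensorFlow implementation.

import torch

class QuantizeSTE(torch.autograd.Function):
    """Clip-then-round k-bit quantization with a straight-through gradient."""

    @staticmethod
    def forward(ctx, x, k, signed=True):
        if signed:                       # weights: one bit reserved for the sign
            x = x.clamp(-1.0, 1.0)
            scale = 2.0 ** (k - 1)
        else:                            # activations: unsigned
            x = x.clamp(0.0, 1.0)
            scale = 2.0 ** k
        return torch.round(x * scale) / scale

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through: pass the gradient to x unchanged;
        # k and the signed flag receive no gradient.
        return grad_output, None, None

w_q = QuantizeSTE.apply(torch.randn(8), 2, True)   # 2-bit weights in [-1, 1]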
With appropriate affine transformations, the convolution operations (the bulk of the compute operations in the network during the forward pass) can be done using quantized values (integer operations in hardware) followed by scaling with floating-point constants (this scaling operation can be done in parallel with the convolution operation in hardware). When k = 1, for binary weights we use the Binary Weight Networks (BWN) approach, where the binarized weight value is computed based on the sign of the input value followed by scaling with the mean of absolute values. For binarized activations we use the formulation in Eq. 2. We do not quantize the gradients and maintain the weights in reduced-precision format.

For the convolution operation when using WRPN, the forward pass during training (and the inference step) involves matrix multiplication of k-bits signed and k-bits unsigned operands. Since gradient values are in 32-bits floating-point format, the backward pass involves a matrix multiplication using a 32-bits operand and a k-bits operand for the gradient and weight update. When k > 1, the hard clipping of tensors to a range maps efficiently to min-max comparator units in hardware, as opposed to transcendental operations, which are long-latency operations. The TTQ and DoReFa BID28 schemes involve a division operation and computing a maximum value in the input tensor. A floating-point division operation is expensive in hardware, and computing the maximum in a tensor is an O(n) operation. Additionally, our quantization parameters are static and do not require any learning or back-propagation as in the TTQ approach. We avoid each of these costly operations and propose a simpler quantization scheme (clipping followed by rounding).

In practice, the effective performance and energy efficiency one could achieve on a low-precision compute operation highly depends on the hardware that runs these operations. We study the efficiency of low-precision operations on various hardware targets: GPU, FPGA, and ASIC. For GPU, we evaluate WRPN on the Nvidia Titan X Pascal, and for FPGA we use the Intel Arria-10. We collect performance numbers from both previously reported analysis BID15 and our own experiments. For FPGA, we implement the DNN accelerator architecture shown in FIG2(a). This is a prototypical accelerator design used in various works on FPGA and ASIC (e.g., the TPU BID9). The core of the accelerator consists of a systolic array of processing elements (PEs) to perform matrix and vector operations, along with on-chip buffers and an off-chip memory management unit. The PEs can be configured to support different precisions - (FP32, FP32), (INT4, INT4), (INT4, TER2), and (BIN1, BIN1). The (INT4, TER2) PE operates on ternary (+1, 0, -1) values and is optimized to include only an adder, since there is no need for a multiplier in this case. The binary (BIN1, BIN1) PE is implemented using XNOR and bitcount. Our RTL design targets the Arria-10 1150 FPGA. For our ASIC study, we synthesize the PE design using Intel 14 nm process technology to obtain area and energy estimates.

FIG2 shows the efficiency improvements using first-order estimates, where the efficiency is computed based on the number of bits used in the operation, i.e., relative to the FP32 baseline,

efficiency(k) ~ 32 / k for k-bit operands.

With this method we would expect (INT4, INT4) and (BIN1, BIN1) to be 8x and 32x more efficient, respectively, than (FP32, FP32). However, in practice the efficiency gains from reducing precision depend on whether the underlying hardware can take advantage of such low precisions.
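As an illustration of why the binary (BIN1, BIN1) PE mentioned above needs only XNOR and bitcount, here is a small self-contained Python sketch (our own; the bit-packing convention is an assumption, not taken from the paper):

def binary_dot(a_bits: int, b_bits: int, n: int) -> int:
    """Dot product of two n-dimensional {-1, +1} vectors packed into
    integers (bit = 1 encodes +1, bit = 0 encodes -1), computed the way
    a binary PE does it: XNOR then bitcount."""
    xnor = ~(a_bits ^ b_bits) & ((1 << n) - 1)   # 1 where the vectors agree
    matches = bin(xnor).count("1")               # popcount
    return 2 * matches - n                       # +1 per match, -1 per mismatch

# Example: a = [+1, -1, +1] -> 0b101, b = [+1, +1, +1] -> 0b111; a.b = 1.
assert binary_dot(0b101, 0b111, 3) == 1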
Figure 3(c) shows the performance improvement on the Titan X GPU for various low-precision operations relative to FP32. In this case, the GPU can only achieve up to ~4x improvement in performance over the FP32 baseline. This is because the GPU only provides first-class support for INT8 operations and is not able to take advantage of the lower INT4, TER2, and BIN1 precisions. On the contrary, FPGA can take advantage of such low precisions, since they are amenable to implementation on the FPGA's reconfigurable fabric. In fact, for (BIN1, BIN1), the FPGA improvements exceed the first-order estimate. Reducing the precision simplifies the design of compute units and lowers buffering requirements on the FPGA board. Compute-precision reduction leads to significant improvement in throughput due to smaller hardware designs (allowing more parallelism) and shorter circuit delay (allowing higher frequency). FIG2 shows the performance and performance/Watt of the reduced-precision operations on GPU and FPGA. FPGA performs quite well on very low-precision operations; in terms of performance/Watt, FPGA does better than GPU on (INT4, INT4) and lower precisions.

ASIC allows for a truly customized hardware implementation. Our ASIC study provides insights into the upper bound of the efficiency benefits possible from low-precision operations. FIG2(f) and (g) show the improvement in performance and energy efficiency of the various low-precision ASIC PEs relative to a baseline FP32 PE. As the figures show, going to lower precision offers 2 to 3 orders of magnitude efficiency improvements. In summary, FPGA and ASIC are well suited for our WRPN approach. At 2x wide, our WRPN approach requires 4x more total operations than the original network; however, for INT4 or lower precision, each operation is 6.5x or better in efficiency than FP32 for FPGA and ASIC. Hence, WRPN delivers an overall efficiency win.

Reduced-precision DNNs are an active research area. Reducing the precision of weights for an efficient inference pipeline has been very well studied. Works like BinaryConnect (BC), Ternary Weight Networks (TWN) BID11, fine-grained ternary quantization BID13, and INQ BID27 target precision reduction of network weights while still using full-precision activations. Accuracy is almost always degraded when quantizing the weights. For AlexNet on ImageNet, TWN loses 5% top-1 accuracy. Schemes like INQ, BID20, and BID13 do fine-tuning to quantize the network weights and do not sacrifice accuracy as much, but are not applicable for training networks from scratch. INQ shows promising results with 5-bits of precision. XNOR-NET BID16, BNN BID2, DoReFa BID28, and TTQ BID29 target training as well. While TTQ targets weight quantization only, most works targeting activation quantization hurt accuracy. The XNOR-NET approach reduces top-1 accuracy by 12% and DoReFa by 8% when quantizing both weights and activations to 1-bit (for AlexNet on ImageNet). Further, XNOR-NET requires re-ordering of layers for its scheme to work. Recent work in BID4 targets low-precision activations and reports accuracy within 1% of baseline with 5-bits precision and logarithmic (base sqrt(2)) quantization. With fine-tuning this gap can be narrowed to within 0.6%, but not all layers are quantized. Operand widths that are not multiples of two introduce hardware inefficiency, in that memory accesses are no longer DRAM- or cache-boundary aligned, and the end-to-end run-time performance of such complicated quantization schemes is unclear.
We target end-to-end training and inference, using a very simple quantization method, and aim to reduce precision without any loss in accuracy. To the best of our knowledge, our work is the first to study reduced-precision deep and wide networks, and to show accuracy at par with the baseline for a precision as low as 4-bits activations and 2-bits weights. We report state-of-the-art accuracy for wide binarized AlexNet and ResNet while still being lower in compute cost.

Work by BID5 advocates low-precision fixed-point numbers for training. They show 16-bits to be sufficient for training on the CIFAR10 dataset and find stochastic rounding to be necessary for training convergence. In our work we focus on sub-8b training and, like the DoReFa scheme, do not find stochastic rounding necessary when using full-precision gradients. Work by BID19 quantizes gradients before communication in a distributed computing setting. They use full-precision gradients during the backward pass and quantize the gradients before sending them to other computation nodes (decreasing the amount of communication traffic over an interconnection network). For distributed training, we can potentially use this approach for communicating gradients across nodes.

We present the Wide Reduced-Precision Networks (WRPN) scheme for DNNs. In this scheme, the numeric precision of both weights and activations is significantly reduced without loss of network accuracy. This is in contrast to many previous works that find reduced-precision activations to detrimentally impact accuracy; specifically, we find that 2-bit weights and 4-bit activations are sufficient to match baseline accuracy across many networks, including AlexNet, ResNet-34, and batch-normalized Inception. We achieve this with a new quantization scheme and by increasing the number of filter maps in each reduced-precision layer to compensate for the loss of information capacity induced by reducing the precision. We believe ours to be the first work to study the interplay between layer width and precision: with widening, the number of neurons in a layer increases, yet with reduced precision we control overfitting and regularization. We motivate this work with our observation that full-precision activations contribute significantly more to the memory footprint than full-precision weight parameters when using mini-batch sizes common during training and cloud-based inference; furthermore, by reducing the precision of both activations and weights the compute complexity is greatly reduced (40% of baseline for 2-bit weights and 4-bit activations).

The WRPN quantization scheme and computation on low-precision activations and weights is hardware friendly, making it viable for deeply-embedded system deployments as well as for cloud-based training and inference servers with compute fabrics for low precision. We compare Titan X GPU, Arria-10 FPGA, and ASIC implementations using WRPN and show that our scheme increases performance and energy efficiency at iso-accuracy across each. Overall, reducing the precision allows custom-designed compute units and lower buffering requirements to provide a significant improvement in throughput.
Lowering precision (to 4-bits, 2-bits and even binary) and widening the filter banks gives networks as accurate as those obtained with FP32 weights and activations.
497
scitldr
We investigate methods for semi-supervised learning (SSL) of a neural linear-chain conditional random field (CRF) for Named Entity Recognition (NER) by treating the tagger as the amortized variational posterior in a generative model of text given tags. We first illustrate how to incorporate a CRF in a VAE, enabling end-to-end training on semi-supervised data. We then investigate a series of increasingly complex deep generative models of tokens given tags enabled by end-to-end optimization, comparing the proposed models against supervised and strong CRF SSL baselines on the Ontonotes5 NER dataset. We find that our best proposed model consistently improves performance by ≈ 1% F1 in low- and moderate-resource regimes and easily addresses degenerate model behavior in a more difficult, partially supervised setting.

Named entity recognition (NER) is a critical subtask of many domain-specific natural language understanding tasks in NLP, such as information extraction, entity linking, semantic parsing, and question answering. State-of-the-art models treat NER as a tagging problem, and while they have become quite accurate on benchmark datasets in recent years, utilizing them for new tasks is still expensive, requiring a large corpus of exhaustively annotated sentences. This problem has been largely addressed by extensive pretraining of high-capacity sentence encoders on massive-scale language modeling tasks, but it is natural to ask if we can squeeze more signal from our unlabeled data.

Latent-variable generative models of sentences are a natural approach to this problem: by treating the tags for unlabeled data as latent variables, we can appeal to the principle of maximum marginal likelihood and learn a generative model on both labeled and unlabeled data. For models of practical interest, however, this presents multiple challenges: learning and prediction both require an intractable marginalization over the latent variables, and the specification of the generative model can imply a posterior family that may not be as performant as the current state-of-the-art discriminative models. We address these challenges using a semi-supervised Variational Autoencoder (VAE), treating a neural tagging CRF as the approximate posterior. We address the issue of optimization through discrete latent tag sequences by utilizing a differentiable relaxation of the Perturb-and-MAP algorithm, allowing for end-to-end optimization via backpropagation and SGD. Armed with this learning approach, we no longer need to restrict the generative model family (as in the CRF Autoencoder), and explore the use of rich deep generative models of text given tag sequences for improving NER performance. We also demonstrate how to use the VAE framework to learn in a realistic annotation scenario where we only observe a biased subset of the named entity tags.

Our contributions can be summarized as follows: 1. We address the problem of semi-supervised learning (SSL) for NER by treating a neural CRF as the amortized approximate posterior in a discrete structured VAE. To the best of our knowledge, we are the first to utilize VAEs for NER. 2. We explore several variants of increasingly complex deep generative models of text given tags with the goal of improving tagging performance. We find that a joint tag-encoding Transformer architecture leads to an ≈ 1% improvement in F1 score over supervised and strong CRF SSL baselines. 3.
We demonstrate that the proposed approach elegantly corrects for degenerate model performance in a more difficult partially supervised regime where sentences are not exhaustively annotated, and again find improved performance. 4. Finally, we show the utility of our method in realistic low- and high-resource scenarios, varying the amount of unlabeled data. The resulting high-resource model is competitive with state-of-the-art and, to the best of our knowledge, achieves the highest reported F1 score (88.4%) for models that do not use additional labeled data or gazetteers.

We first introduce the tagging problem and tagging model. We then detail our proposed modeling framework and architectures.

NER is the task of assigning coarsely-typed categories to contiguous spans of text. State-of-the-art approaches do so by treating span extraction as a tagging problem, which we now formally define. We are given a tokenized text sequence x_{1:N} ∈ X^N and would like to predict the corresponding tag sequence y_{1:N} ∈ Y^N which correctly encodes the observed token spans. In this work, we use the BILOU tag-span encoding, which assigns four tags for each of the C span categories (e.g., B-PER, I-PER, L-PER, U-PER for the PERSON category). The tag types B, I, L, U respectively encode beginning, inside, last, and unary tag positions in the original span. Additionally we have one O tag for tokens that are not in any named entity span. Thus our tag space has size |Y| = 4C + 1.

We call the NER task of predicting tags for tokens inference, and model it with a discriminative distribution q_φ(y_{1:N}|x_{1:N}) having parameters φ. Following state-of-the-art NER approaches, we use a neural encoding of the input followed by a linear-chain CRF decoding layer on top. We use the same architecture for q_φ throughout this work, as follows:

1. Encode the token sequence, represented as byte-pairs, with a fixed pretrained language model; that is, we first calculate e_{1:N} = LM(x_{1:N}). (In our first experiments exploring the use of pretrained autoregressive information for generation (§3.1), we use the GPT2-SM model; in the experiments after (§3.2) we use the RoBERTa-LG model.)
2. Down-project the states: h_i = W_1 e_i + b_1.
3. Compute local tag potentials: s_i = V h_i + b_2 ∈ R^{|Y|}.
4. Combine local and transition potentials: ψ_{y_i, y_{i+1}} = s_i(y_i) + T_{y_i, y_{i+1}}, where T ∈ R^{|Y| × |Y|}.
5. Using special start and end states y_0 = * and y_{N+1} = END, with binary potentials ψ_{*, y} = T_{*, y} and ψ_{y, END} = T_{y, END}, and the forward algorithm to compute the partition function Z, we can compute the joint distribution q_φ(y_{1:N}|x_{1:N}) = exp{Σ_{i=0}^{N} ψ_{y_i, y_{i+1}}} / Z.

Our tagging CRF has trainable parameters φ = {W_1, b_1, V, b_2, T}, and we learn them on a dataset of fully annotated sentences D_S = {(x^i_{1:N_i}, y^i_{1:N_i})} using stochastic gradient descent (SGD) and maximum likelihood estimation.

2.3 SEMI-SUPERVISED CRF-VAE

We now present the CRF-VAE, which treats the tagging CRF as the amortized approximate posterior in a Variational Autoencoder. We first describe our loss formulations for semi-supervised and partially supervised data. We then address optimizing these objectives end-to-end using backpropagation and the Relaxed Perturb-and-MAP algorithm. Finally, we propose a series of increasingly complex generative models to explore the potential of our modeling framework for improving tagging performance.

The purpose of this work is to consider methods for estimation of q_φ in semi-supervised data regimes, as in Kingma et al.
For unlabeled sentences, this requires maximizing the marginal likelihood of the tokens, with the latent tags marginalized out. This marginalization is intractable for models that are not factored among y_i, so we resort to optimizing the familiar evidence lower bound (ELBO) with an approximate variational posterior distribution, which we set to our tagging model q_φ. We maximize the ELBO on unlabeled data in addition to maximum likelihood losses for both the inference and generative models on labeled data, yielding objectives of the form

L_S(θ, φ) = log p_θ(x|y) + log q_φ(y|x)
L_U(θ, φ) = E_{q_φ(y|x)}[log p_θ(x|y)] - β KL(q_φ(y|x) || p(y))    (5)
L(θ, φ) = L_S + α L_U,

where α is a scalar hyper-parameter used to balance the supervised loss L_S and the unsupervised loss L_U, and β is a scalar hyper-parameter used to balance the reconstruction and KL terms of the unsupervised loss. We note that, unlike a traditional VAE, this model contains no continuous latent variables.

Assuming that supervised sentences are completely labeled is a restrictive setup for semi-supervised learning of a named entity tagger. It would be useful to be able to learn the tagger on sentences which are only partially labeled, where we observe some named entity spans but are not guaranteed that all entity spans in the sentence are annotated, and no O tags are manually annotated. This presents a challenge in that we are no longer able to assume the usual implicit presence of O tags, since unannotated tokens are ambiguous. While it is possible to optimize the marginal likelihood of the CRF on only the observed tags y_O, O ⊂ {1, ..., N}, in the sentence, doing so naively will result in a degenerate model that never predicts O, by far the most common tag.

Interestingly, this scenario is easily addressed by the variational framework via the KL term. We do this by reformulating the objective in Equation 5 to account for partially observed tag sequences. Let D_P = {(x^i_{1:N_i}, y^i_O)} be the partially observed dataset where, for some sentence i, O ⊂ {1, ..., N_i} is the set of observed positions and U = {1, ..., N_i} \ O is the set of unobserved positions. Our partially supervised objective is then of the form

L_P(θ, φ) = log q_φ(y_O|x) + E_{q_φ(y_U|x, y_O)}[log p_θ(x|y_O, y_U)] - β KL(q_φ(y_U|x, y_O) || p(y_U)),    (6)

which can be optimized as before using the constrained forward-backward and KL algorithms detailed in Appendix B. We also explore using this approach simply for regularization of the CRF posterior by omitting the token model p_θ(x|y). Since we do not have trainable parameters for the generative model in this case, the reconstruction likelihood drops out of the objective and we have, for a single datum, a loss of the form

L_R(φ) = log q_φ(y_O|x) - β KL(q_φ(y_U|x, y_O) || p(y_U)).

Optimizing Equations 5 and 6 with respect to θ and φ using backpropagation and SGD is straightforward for every term except the expectation terms E_{q_φ(y|x)}[log p_θ(x|y)]. To optimize these expectations, we first make a Monte Carlo approximation using a single sample drawn from q_φ. This discrete sample, however, is not differentiable with respect to φ and blocks gradient computation. While we may appeal to score function estimation to work around this, its high-variance gradients make successful optimization difficult. Instead, we can compute approximate samples from q_φ that are differentiable with respect to φ using the Relaxed Perturb-and-MAP algorithm. Due to space limitations, we leave the derivation of Relaxed Perturb-and-MAP for linear-chain CRFs to Appendix A and detail the resulting CRF algorithms in Appendix B.

We model the prior distribution of tag sequences y_{1:N} as the per-tag product of a fixed categorical distribution, p(y_{1:N}) = Π_i p(y_i). The KL between q_φ and this distribution can be computed in polynomial time using a modification of the forward recursion, detailed in Appendix B.
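Concretely, the forward recursion used throughout (step 5 of the tagging architecture above) can be written in a few lines of PyTorch; this is our own sketch, and the names (s, T, t_start, t_end) are ours, chosen to mirror the notation above.

import torch

def crf_log_partition(s, T, t_start, t_end):
    """Forward algorithm in log space for a linear-chain CRF.
    s:       (N, |Y|) local potentials s_i
    T:       (|Y|, |Y|) transition potentials
    t_start: (|Y|,) potentials T_{*, y} out of the start state
    t_end:   (|Y|,) potentials T_{y, END} into the end state
    Returns log Z."""
    alpha = t_start + s[0]                 # log-scores of length-1 prefixes
    for i in range(1, s.size(0)):
        # logsumexp over the previous tag y': alpha[y'] + T[y', y] + s[i, y]
        alpha = torch.logsumexp(alpha.unsqueeze(1) + T, dim=0) + s[i]
    return torch.logsumexp(alpha + t_end, dim=0)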
We experiment with several variations of architectures for p_θ(x_{1:N}|y_{1:N}), presented in order of increasing complexity.

AE: The CRF Autoencoder is the previous state-of-the-art semi-supervised linear-chain CRF, which we consider a strong baseline. This model uses a tractable, fully factored generative model of tokens given tags and does not require approximate inference. Due to space limitations, we have detailed our implementation in Appendix C.

MF: This is our simplest proposed generative model. We first embed the relaxed tag samples, represented as simplex vectors y_i ∈ Δ^{|Y|}, into R^{d_{y_p}} as the weighted combination of the input vector representations for each possible tag:

u_i = U^T y_i,    (7)

where U is the matrix of tag embeddings. We then compute factored token probabilities with an inner product, p_θ(x_i|y_i) = σ_X(W u_i)_{x_i}, where σ_X is the softmax function normalized over X. This model is a generalization of the CRF Autoencoder architecture in Appendix C, where the tag-token parameters θ_{x,y} are computed with a low-rank factorization W U.

MT: The restrictive factorization of MF is undesirable, since we expect that information about nearby tags may be discriminative of individual tokens. To test this, we extend MF to use the full tag context by encoding the embedded tag sequence jointly using a two-layer transformer with four attention heads per layer before predicting the tokens independently. That is, v_{1:N} = Transformer(u_{1:N}) and p_θ(x_i|y_{1:N}) = σ_X(W v_i)_{x_i}. (A code sketch of MT is given below, after the baseline list.)

MF-GPT2: Next, we see if we can leverage information from a pretrained language model to provide additional training signal to p_θ. We extend MF by adding the fixed pretrained language modeling scores from GPT2 to the token scores, of the form p_θ(x_i|y_i, x_{<i}) ∝ exp( w_{x_i}^T u_i / sqrt(d_{y_p}) + z_{x_i}^T h^0_i / sqrt(d_{GPT2}) ), where z_{x_i} and h^0_i are the input token embeddings and hidden states from GPT2, respectively. We normalize the scales of the two factors by the square roots of the vector dimensionalities to prevent the GPT2 scores from washing out the tag-encoding scores (d_{y_p} = 300 and d_{GPT2} = 768).

MT-GPT2: We add the same autoregressive extension to MT, using the tag encodings v instead of the embeddings u.

MT-GPT2-PoE: We also consider an autoregressive extension of MT, similar to MT-GPT2, that uses a product of experts (PoE) factorization instead, of the form p_θ(x_i|y_{1:N}, x_{<i}) ∝ p_MT(x_i|y_{1:N}) · p_GPT2(x_i|x_{<i}).

MT-GPT2-Residual: Our last variation directly couples GPT2 with p_θ by predicting a residual via a two-layer MLP based on the tag encoding and the GPT2 state, of the form h̃_i = h^0_i + MLP([v_i; h^0_i]), with token scores computed from h̃_i.

For the MF-GPT2, MT-GPT2, and MT-GPT2-PoE models, we choose these factorizations specifically to prevent the trainable parameters from conditioning on previous word information, removing the possibility of the model learning to ignore the noisy latent tags in favor of the strong signal provided by pretrained encodings of the sentence histories. We further freeze the GPT2 parameters for all models, forcing the only path for improving the generative likelihood to be through the improved estimation and encoding of the tags y_{1:N}.

We experiment first with the proposed generative models for SSL and PSL in a moderately resourced regime (keeping 10% labeled data) to explore their relative merits. We then evaluate our best generative model from these experiments (MT) with an improved bidirectional encoder language model in low- and high-resource settings, varying the amount of unlabeled data. For data, we use the OntoNotes 5 NER corpus, which consists of 18 entity types annotated in 82,120 train, 12,678 validation, and 8,968 test sentences. We begin by comparing the proposed generative models M* along with the following baselines: 1. Supervised (S): The supervised tagger trained only on the 10% labeled data. 2.
Supervised 100% (S*): The supervised tagger trained on 100% of the labeled data, used for quantifying the performance loss from using less data. 3. AE-Exact: The CRF Autoencoder using exact inference (detailed in Appendix C). 4. AE-Approx: The same tag-token pair parameterization used by the CRF Autoencoder, but trained with the approximate ELBO objective as in Equation 11 instead of the exact objective in Equation 12. The purpose here is to see if we lose anything by resorting to the approximate ELBO objective.

To simulate moderate-resource SSL, we keep annotations for only 10% of the sentences, yielding 8,212 labeled sentences with 13,025 annotated spans and 73,908 unlabeled sentences. Results are shown in Table 1. All models except S* use this 10% labeled data. We first evaluate the proposed models and baselines without the use of a prior, since the use of a locally normalized factored prior can encourage overly uncertain joint distributions and degrade performance. We then explore the inclusion of the priors for the supervised and MT models with β = 0.01. We explore two varieties of prior tag distributions: the "gold" empirical tag distribution (Emp) from the full training dataset, and a simple, but informative, hand-crafted prior (Sim) that places 50% mass on the O tag and distributes the rest of its mass evenly among the remaining tags. We view the simple prior as a practical approach, since it does not require knowledge of the gold tag distribution, and use the empirical prior to quantify any relative disadvantage from not using the gold prior. We find that including the prior with a small weight, β = 0.01, marginally improved performance; interestingly, the simple prior outperforms the empirical prior, most likely because it is slightly smoother and does not emphasize the O tag as heavily. Curiously, we found that the approximate training of the CRF Autoencoder (AE-Approx) outperformed the exact approach (AE-Exact) by nearly 2% F1. We also note that our attempts to leverage signal from the pretrained autoregressive GPT2 states had negligible or negative effects on performance; we thus conclude that it is the addition of the joint tag-encoding transformer architecture (MT) that provides the most gains (+0.8% F1).

We also evaluate the supervised and transformer-based generative models, S and MT, on the more difficult PSL setup, where naively training the supervised model on the marginal likelihood of observed tags produces a degenerate model, due to the observation bias of never having O tags. In this setting we drop 90% of the annotations from sentences randomly, resulting in 82,120 incompletely annotated sentences with 12,883 annotations total. We compare the gold and simple priors for each model. From the bottom of Table 1, we see that again our proposed transformer model MT outperforms the supervised-only model, this time by +1.3% F1. We also find that in this case the MT models need to be trained with higher prior weights, β = 0.1; otherwise they diverge towards using the O tag more uniformly with the other tags to achieve better generative likelihoods.

5 Code and experiments are available online at github.com/<anonymizedforsubmission>
6 In preliminary SSL experiments we found β > 0.01 to have a negative impact on performance, likely due to the global/local normalization mismatch of the CRF and the prior.
7 The empirical prior puts 85% mass on the O tag.

Table 1: Semi-supervised and partially-supervised models on 10% supervised training data: best in bold, second best underlined.
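For concreteness, here is a minimal PyTorch sketch of the MT generator introduced above (two transformer layers, four heads per layer, d_{y_p} = 300, as stated in the text); any hyper-parameters beyond those, and all names, are illustrative rather than the paper's exact configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MTGenerator(nn.Module):
    """MT: embed relaxed (simplex) tags, encode them jointly with a small
    Transformer, then score tokens independently at each position."""
    def __init__(self, n_tags, vocab_size, d=300):
        super().__init__()
        self.tag_emb = nn.Linear(n_tags, d, bias=False)   # U: simplex -> R^d
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.out = nn.Linear(d, vocab_size)               # W

    def forward(self, y_relaxed, x):
        # y_relaxed: (B, N, n_tags) simplex vectors; x: (B, N) token ids
        u = self.tag_emb(y_relaxed)                       # weighted tag embeddings
        v = self.encoder(u)                               # joint tag encoding
        logits = self.out(v)                              # (B, N, vocab)
        nll = F.cross_entropy(logits.transpose(1, 2), x, reduction="none")
        return -nll.sum(-1)                               # per-sentence log p(x|y)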
The proposed MT* improves performance in SSL and PSL by +1.1% F1 and +1.3% F1, respectively.

Next we explore our best proposed architecture, MT, and the supervised baseline in low- and high-resource settings (1% and 100% training data, respectively), and study the effects of training with an additional 100K unlabeled sentences sampled from Wikipedia (detailed in Appendix E). Since we found no advantage from using pretrained GPT2 information in the previous experiment, we evaluate the use of the bidirectional pretrained language model RoBERTa, since we expect bidirectional information to highly benefit performance. We also experiment with a higher-capacity tagging model, S-LG, by adding more trainable Transformers (L = 4, A = 8, H = 1024) between the RoBERTa encodings and down-projection layers.

From Table 2 we see that, as in the 10% labeled data setting, the CRF-VAE improves upon the supervised model by 0.9% F1 in this 1% setting, but we find that including additional data from Wikipedia has a negative impact. A likely reason for this is the domain mismatch between Ontonotes5 and Wikipedia (news and encyclopedia, respectively). In the high-resource setting, we find that using RoBERTa significantly improves upon GPT2 (+5.7% F1) and that the additional capacity of S-LG further improves performance by +2.2% F1. Although we do not see a significant improvement from semi-supervised training with Wikipedia sentences, our model is competitive with previous state-of-the-art NER approaches and outperforms all previous approaches that do not use additional labeled data or gazetteers.

Utilizing unlabeled data for semi-supervised learning in NER has been studied considerably in the literature. A common approach is a two-stage process where useful features are learned from unsupervised data and then incorporated into models which are trained only on the supervised data. With the rise of neural approaches, large-scale word vector and language model pretraining methods can be regarded in the same vein.
However, to the best of our knowledge, this framework has yet to be utilized for NER and tagging CRFs. A key challenge for learning VAEs with discrete latent variables is optimization with respect to the inference model parameters φ. While we may appeal to score function estimation (; ; ;, its empirical high-variance gradients make successful optimization difficult. Alternatively, obtaining gradients with respect to φ can be achieved using the relaxed Gumbel-max trick and has been recently extended to latent tree-CRFs by , which we make use of here for sequence CRFs. We proposed a novel generative model for semi-supervised learning in NER. By treating a neural CRF as the amortized variational posterior in the generative model and taking relaxed differentiable samples, we were able to utilize a transformer architecture in the generative model to condition on more context and provide appreciable performance gains over supervised and strong baselines on both semi-supervised and partially-supervised datasets. We also found that inclusion of powerful pretrained autoregressive language modeling states had neglible or negative effects while using a pretrained bidirectional encoder offers significant performance gains. Future work includes the use of larger in-domain unlabeled corpora and the inclusion of latent-variable CRFs in more interesting joint semi-supervised models of annotations, such as relation extraction and entity linking. ) and τ ≥ 0 be the temperature: We know from that the MAP sequence from this perturbed distribution is a sample from the unperturbed distribution. Coupled with the property that the zero temperature limit of the Gibbs distribution is the MAP state , it immediately follows that the zero temperature limit of the perturbedq is a sample from q: ⇒ lim τ →0q where q φ (y|x; τ) is the tempered but unperturbed q φ and "one-hot" is a function that converts elements of Y N to a one-hot vector representation. Thus we can use the temperature τ to anneal the perturbed joint distributionq φ (y|x; τ) to a sample from the unperturbed distribution,ỹ ∼ q φ. When τ > 0,q φ (y|x; τ) is differentiable and can be used for end-to-end optimization by allowing us to approximate the expectation with a relaxed single-sample Monte Carlo estimate: where we have modified log p θ (x|y) to accept the simplex representations of y 1:N fromq φ instead of discrete elements, which has the effect of log p θ (x|y) computing a weighted combination of its input vector representations for y ∈ Y similarly to an attention mechanism or the annotation function in (see Equation 7.) This can be thought of as a generalization of the Gumbel-softmax trick from; to structured joint distributions. The statements in also imply something of practical interest: we can compute the argmax (Viterbi decoding) and its differentiable relaxation; a sample and its differentiable relaxation; the partition function; and the marginal tag distributions, all using the same sum-product algorithm implementation, controlled by the temperature and the presence of noise. We have detailed the algorithm in Appendix B. In Algorithm 1 we have detailed the stable, log-space implementation of the generalized forwardbackward algorithm for computing the argmax (Viterbi decoding) and its differentiable relaxation; a sample and its differentiable relaxation; the partition function; and the marginal tag distributions below. 
While this algorithm does provide practical convenience, we note that real implementations should have separate routines for computing the partition function (running only the forward algorithm) and for the discrete τ = 0 Viterbi algorithm, since the latter is more numerically stable and efficient. We also include the dynamic program for computing the constrained KL divergence between q_φ and a factored p(y) in Algorithm 2.

The idea of using a CRF to reconstruct tokens given tags for SSL has been explored before, and we consider it a strong baseline; we restate the exact objective here:

log p_θ(x) = log Σ_{y_{1:N}} exp{ Σ_i ( ψ_{y_i, y_{i+1}} + log p_θ(x_i|y_i) ) } - log Z(ψ) = log Z(ψ + log p_θ) - log Z(ψ),    (12)

where log Z(ψ + log p_θ) is a slight abuse of notation intended to illustrate that the first term in Equation 12 is the same computation as the partition function, but with the generative log-likelihoods added to the CRF potentials. We note that instead of using a Mixed-EM procedure, we model p_θ(x_i|y_i) using free logit parameters θ_{x,y} for each token-tag pair and normalize using a softmax, which allows for end-to-end optimization via backpropagation and SGD.

We train each model to convergence using early stopping on the F1 score of the validation data, with a patience of 10 epochs. For all models that do not have trainable transformers, we train using the Adam optimizer with a learning rate of 0.001 and a batch size of 128. For those with transformers (MT*), we train using Adam, a batch size of 32, and the Noam learning rate schedule with a model size of d_{y_p} = 300 and 16,000 warm-up steps. Additionally, we use gradient clipping of 5 for all models and a temperature of τ = 0.66 for all relaxed sampling models. We implemented our models in PyTorch using the AllenNLP framework and existing implementations of the pretrained GPT2 and RoBERTa models. We have made all code, data, and experiments available online at github.com/<anonymizedforsubmission> for reproducibility and reuse. All experimental settings can be reproduced using the configuration files in the repo.

For the experiments in §3.2, we gather an additional training corpus of out-of-domain encyclopedic sentences from Wikipedia. To try to get a sample that better aligns with the Ontonotes5 data, these sentences were gathered with an informed process, which was performed as follows: 1. Using the repository <anonymized for submission>, we extract English Wikipedia and align it with Wikidata. 2. We then look up the entity classes from the Ontonotes5 specification in Wikidata and, for each NER class, find all Wikidata classes that are below this class in the ontology (all subclasses). 3. We then find all items which are instances of these classes and also have Wikipedia pages. These are the Wikipedia entities which are likely to be instances of the NER classes.
We embed a CRF in a VAE of tokens and NER tags for semi-supervised learning and show improvements in low-resource settings.
498
scitldr
To make deep neural networks feasible in resource-constrained environments (such as mobile devices), it is beneficial to quantize models by using low-precision weights. One common technique for quantizing neural networks is the straight-through gradient method, which enables back-propagation through the quantization mapping. Despite its empirical success, little is understood about why the straight-through gradient method works. Building upon a novel observation that the straight-through gradient method is in fact identical to the well-known Nesterov's dual-averaging algorithm on a quantization constrained optimization problem, we propose a more principled alternative approach, called ProxQuant, that formulates quantized network training as a regularized learning problem instead and optimizes it via the prox-gradient method. ProxQuant does back-propagation on the underlying full-precision vector and applies an efficient prox-operator in between stochastic gradient steps to encourage quantizedness. For quantizing ResNets and LSTMs, ProxQuant outperforms state-of-the-art results on binary quantization and is on par with state-of-the-art on multi-bit quantization. For binary quantization, our analysis shows both theoretically and experimentally that ProxQuant is more stable than the straight-through gradient method (i.e. BinaryConnect), challenging the indispensability of the straight-through gradient method and providing a powerful alternative.

We present the main ingredients of our contribution in this extended abstract. See Appendix B for the prior work, Appendix C for the notation, and Appendices A and D for the motivation and preliminary discussions about the straight-through gradient method and prox operators.

2 Quantized net training via regularized learning

We propose the PROXQUANT algorithm, which adds a quantization-inducing regularizer onto the loss and optimizes via the (non-lazy) prox-gradient method with a finite λ. The prototypical version of PROXQUANT is described in Algorithm 1.

Algorithm 1 (PROXQUANT)
Require: Regularizer R that induces desired quantizedness, initialization θ_0, learning rates {η_t}_{t≥0}, regularization strengths {λ_t}_{t≥0}
while not converged do
    Perform the prox-gradient step θ_{t+1} = prox_{η_t λ_t R}(θ_t - η_t ∇L(θ_t)).
    (The inner SGD step can be replaced by any preferred stochastic optimization method such as Momentum SGD or Adam.)
end while

Compared to usual full-precision training, PROXQUANT only adds a prox step after each stochastic gradient step.

Table 1: Top-1 classification error of quantized ResNets on CIFAR-10. Performance is reported as mean(std) over 4 runs, where for PQ-T we report in addition the best of 4 (Bo4).

We now show that BinaryConnect has a very stringent convergence condition. Consider the BinaryConnect setting: to find a parameter θ ∈ {±1}^d with low loss, the algorithm only has access to stochastic gradients at {±1}^d. As this is a discrete set, a priori, gradients in this set do not necessarily contain any information that identifies the minimizer over {±1}^d (see Figure 1b).
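A minimal PyTorch rendering of Algorithm 1 is shown below, assuming a generic prox operator and a linearly increasing λ_t (the homotopy schedule discussed later); the function and argument names are ours, not the paper's.

import torch

def proxquant_train(model, loss_fn, data, prox, lr=0.01, reg_rate=1e-4, steps=10000):
    """Sketch of PROXQUANT: SGD step, then prox step. `prox(tensor, lam)` is
    the proximal operator of the chosen quantization regularizer; `reg_rate`
    is the homotopy slope, lambda_t = reg_rate * t."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)  # could be Momentum SGD or Adam
    for t, (x, y) in zip(range(steps), data):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()                                     # stochastic gradient step
        lam_t = reg_rate * t                           # linearly increasing strength
        with torch.no_grad():
            for p in model.parameters():
                p.copy_(prox(p, lr * lam_t))           # prox step toward quantizedness
    # After training, project parameters onto Q (e.g., take signs for the
    # binary case) to obtain the final quantized net.
    return model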
As this is a discrete set, a priori, gradients in this set do not necessarily contain any (a E H 9 I S e j X v j 0 X g x X h e l K 8 a y 5 w j 9 g P H 2 C a M i n A U = < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = " DISPLAYFORM0 a E H 9 I S e j X v j 0 X g x X h e l K 8 a y 5 w j 9 g P H 2 C a M i n A U = < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = " DISPLAYFORM1 a E H 9 I S e j X v j 0 X g x X h e l K 8 a y 5 w j 9 g P H 2 C a M i n A U = < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = " DISPLAYFORM2 a E H 9 I S e j X v j 0 X g x X h e l K 8 a y 5 w j 9 g P H 2 C a M i n A U = < / l a t e x i t > rL(✓t) < l a t e x i t s h a 1 _ b a s e 6 4 = " k M y 6 L o 8 X P y Figure 1: (a) Comparison of the straight-through gradient method and our PROXQUANT method. The straightthrough method computes the gradient at the quantized vector and performs the update at the original real vector; PROXQUANT performs a gradient update at the current real vector followed by a prox step which encourages quantizedness. (b) A two-function toy failure case for BinaryConnect. The two functions are f1(x) = |x + 0.5| − 0.5 (blue) and f−1(x) = |x − 0.5| − 0.5 (orange). The derivatives of f1 and f−1 coincide at {−1, 1}, so any algorithm that only uses this information will have identical behaviors on these two functions. However, the minimizers in {±1} are x 1 = −1 and x −1 = 1, so the algorithm must fail on one of them. DISPLAYFORM3 sparsification, nearest-neighbor clustering, and Huffman coding. This architecture is then made into 173 a specially designed hardware for efficient inference []. In a parallel line of work, parameter, but might be hard to tune due to the instability of the inner maximization problem. While preparing this manuscript, we discovered the independent work of Carreira-Perpinán, Carreira-Perpinán and Idelbayev. They formulate quantized network training as a constrained DISPLAYFORM0 This enables the real vector θ to move in the entire Euclidean space, and taking q(θ) at the end of training gives a valid quantized model. Such a customized back-propagation rule yields good empiri-227 cal performance in training quantized nets and has thus become a standard practice [Courbariaux 228 et al., 2015, , . However, as we have discussed, it is information 229 theoretically unclear how the straight-through method works, and it does fail on very simple convex Lipschitz functions (Figure 1b). D.2 Straight-through gradient as lazy projection Our first observation is that the straight-through gradient method is equivalent to Nesterov's dual- averaging method, or a lazy projected SGD []. In the binary case, we wish to minimize 234 L(θ) over Q = {±1} d, and the lazy projected SGD proceeds as DISPLAYFORM0 Written compactly, this is θ t+1 = θ t −η t ∇L(θ)| θ=q(θt), which is exactly the straight-through gradient 236 method: take the gradient at the quantized vector and perform the update on the original real vector. We take a broader point of view that a projection is also a limiting proximal operator with a suitable 239 regularizer, to allow more generality and to motivate our proposed algorithm. Given any set Q, one 240 could identify a regularizer R: R d → R ≥0 such that the following hold: DISPLAYFORM0 In the case Q = {±1} d for example, one could take DISPLAYFORM1 The proximal operator (or prox operator) [] with respect to R and strength DISPLAYFORM2 In the limiting case λ = ∞, the argmin has to satisfy R(θ) = 0, i.e. 
θ ∈ Q, and the prox operator is 245 to minimize θ − θ 0 2 2 over θ ∈ Q, which is the Euclidean projection onto Q. Hence, projection is 246 also a prox operator with λ = ∞, and the straight-through gradient estimate is equivalent to a lazy 247 proximal gradient descent with and λ = ∞. While the prox operator with λ = ∞ correponds to "hard" projection onto the discrete set Q, when 249 λ < ∞ it becomes a "soft" projection that moves towards Q. Compared with the hard projection, 250 a finite λ is less aggressive and has the potential advantage of avoiding overshoot early in training. Further, as the prox operator does not strictly enforce quantizedness, it is in principle able to query 252 the gradients at every point in the space, and therefore has access to more information than the 253 straight-through gradient method. E Details on the PROXQUANT algorithm 255 E.1 Regularization for model quantization We define a flexible class of quantization-inducing regularizers through "distance to the quantized 257 set", derive efficient algorithms of their corresponding prox operator, and propose a homotopy method 258 for choosing the regularization strengths. Our regularization perspective subsumes most existing 259 algorithms for model-quantization (e.g., [, , 260 as limits of certain regularizers with strength λ → ∞. Our proposed method can be viewed as a 261 principled generalization of these methods to λ < ∞. Let Q ⊂ R d be a set of quantized parameter vectors. An ideal regularizer for quantization would be 263 to vanish on Q and reflect some type of distance to Q when θ / ∈ Q. To achieve this, we propose L 1 and L 2 regularizers of the form DISPLAYFORM0 This is a highly flexible framework for designing regularizers, as one could specify any Q and choose between L 1 and L 2 . Specifically, Q encodes certain desired quantization structure. By appropriately 267 choosing Q, we can specify which part of the parameter vector to quantize 1, the number of bits to 268 quantize to, whether we allow adaptively-chosen quantization levels and so on. The choice of distance metrics will in distinct properties in the regularized solutions. For 270 example, choosing the L 1 version leads to non-smooth regularizers that induce exact quantizedness 271 in the same way that L 1 norm regularization induces sparsity [], whereas choosing 272 the squared L 2 version leads to smooth regularizers that induce quantizedness "softly". In the following, we present a few examples of regularizers under our framework eq. which induce 274 binary weights, ternary weights and multi-bit quantization. We will also derive efficient algorithms (or approximation heuristics) for solving the prox operators corresponding to these regularizers, which generalize the projection operators used in the straight-through gradient algorithms. Binary neural nets In a binary neural net, the entries of θ are in {±1}. A natural choice would be DISPLAYFORM0 This is exactly the binary regularizer R bin that we discussed earlier in eq.. FIG4 plots the 280 W-shaped one-dimensional component of R bin from which we see its effect for inducing {±1} 281 quantization in analog to L 1 regularization for inducing exact sparsity. 
The prox operator with respect to R bin, despite being a non-convex 283 optimization problem, admits a simple analytical solution: DISPLAYFORM1 DISPLAYFORM2 1 Empirically, it is advantageous to keep the biases of each layers and the BatchNorm layers at full-precision, which is often a negligible fraction, say 1/ √ d of the total number of parameters alternating quantizer of []: Bα = q alt (θ). Together, the prox operator generalizes the alternating minimization procedure in [], as 300 λ governs a trade-off between quantization and closeness to θ. To see that this is a strict generalization, note that for any λ the solution of eq. will be an interpolation between the input θ and its Euclidean 302 projection to Q. As λ → +∞, the prox operator collapses to the projection. Ternary quantization Ternary quantization is a variant of 2-bit quantization, in which weights are 304 constrained to be in {−α, 0, β} for real values α, β > 0. For ternary quantization, we use an approximate version of the alternating prox operator eq. FORMULA0: DISPLAYFORM0 by initializing at θ = θ and repeating DISPLAYFORM1 where q is the ternary quantizer defined as DISPLAYFORM2 This is a straightforward extension of the TWN quantizer [] Recall that the larger λ t is, the more aggressive θ t+1 will move towards the quantized set. An ideal 313 choice would be to force the net to be exactly quantized upon convergence, and not be too 314 aggressive such that the quantized net at convergence is sub-optimal. We let λ t be a linearly increasing sequence, i.e. λ t:= λ · t for some hyper-parameter λ > 0 which 316 we term as the regularization rate. With this choice, the stochastic gradient steps will start off 317 close to full-precision training and gradually move towards exact quantizedness, hence the name 318 "homotopy method". The parameter λ can be tuned by minimizing the validation loss, and controls 319 the aggressiveness of falling onto the quantization constraint. There is nothing special about the 320 linear increasing scheme, but it is simple enough and works well as we shall see in the experiments. Problem setup We perform language modeling with LSTMs Hochreiter and Schmidhuber 323 on the Penn Treebank (PTB) dataset [], which contains 929K training tokens, 73K validation tokens, and 82K test tokens. Our model is a standard one-hidden-layer LSTM with 325 embedding dimension 300 and hidden dimension 300. We train quantized LSTMs with the encoder, 326 transition matrix, and the decoder quantized to k-bits for k ∈ {1, 2, 3}. The quantization is performed 327 in a row-wise fashion, so that each row of the matrix has its own codebook {α 1, . . ., α k}. multi-bit quantization, we also report the for binary LSTMs (weights in {±1}), comparing BinaryConnect and our PROXQUANT-Binary. Result We report the perplexity-per-word (PPW, lower is better) in TAB3. The performance 337 of PROXQUANT is comparable with the Straight-through gradient method. On Binary LSTMs, PROXQUANT-Binary beats BinaryConnect by a large margin. These demonstrate that PROX- QUANT offers a powerful alternative for training recurrent networks. We experimentally compare the training dynamics of PROXQUANT-Binary and BinaryConnect In R d, the space of all full-precision parameters, the sign change is a natural distance metric that 345 represents the closeness of the binarization of two parameters. 
Recall in our CIFAR-10 experiments (Section 3.1), for both BinaryConnect and PROXQUANT, we 347 initialize at a good full-precision net θ 0 and stop at a converged binary network θ ∈ {±1} d. We 348 are interested in SignChange(θ 0, θ t) along the training path, as well as SignChange(θ 0, θ), i.e. the 349 distance of the final output model to the initialization. As PROXQUANT converges to higher-performance solutions than BinaryConnect, we expect that if we run both methods from a same warm start, the sign change of PROXQUANT should be higher than 352 that of BinaryConnect, as in general one needs to travel farther to find a better net. However, we find that this is not the case: PROXQUANT produces binary nets with both lower sign BinaryConnect never stop changing until we manually freeze the signs at epoch 400. G.1 Detailed sign change on ResNet-20 362 2 We thank Xu et al. for sharing the implementation of this method through a personal communication. There is a very clever trick not mentioned in their paper: after computing the alternating quantization q alt (θ), they multiply by a constant 0.3 before taking the gradient; in other words, their quantizer is a rescaled alternating quantizer: θ → 0.3q alt (θ). This scaling step gives a significant gain in performance -without scaling the PPW is {116.7, 94.3, 87.3} for {1, 2, 3} bits. In contrast, our PROXQUANT does not involve a scaling step and achieves better PPW than this unscaled ALT straight-through method. BC 9.664, 9.430, 9.198, 9.663 0.386, 0.377, 0.390, 0.381 (8.06) PQ-B 9. 058, 8.901, 9.388, 9.237 0.288, 0.247, 0.284, 9.530, 9.623, 10.370 0.376, 0.379, 0.382, 0.386 (8.31) 9.474, 9.410, 9.370 0.291, 0.287, 0.289, 9.558, 9.538, 9.328 0.360, 0.357, 0.359, 0.360 (7.73) PQ-B 9. 284, 8.866, 9.301, 8.884 0.275, 0.276, 0.276, 0.275
A principled framework for model quantization using the proximal gradient method.
499
scitldr