Due to resource-constrained environments, network compression has become an important part of deep neural network research. In this paper, we propose a new compression method, Inter-Layer Weight Prediction (ILWP), together with a quantization method that quantizes the predicted residuals between the weights in all convolution layers, based on the inter-frame prediction methods of conventional video coding schemes. Furthermore, we observe a phenomenon we call the Smoothly Varying Weight Hypothesis (SVWH): the weights in adjacent convolution layers share strong similarity in shapes and values, i.e., the weights tend to vary smoothly along the layers. Based on SVWH, we propose a second ILWP and quantization method which quantizes the predicted residuals between the weights in adjacent convolution layers. Since the predicted weight residuals tend to follow Laplace distributions with very low variance, weight quantization can be applied more effectively, producing more zero weights and enhancing the weight compression ratio. In addition, we propose a new inter-layer loss for eliminating non-texture bits, which enables us to store only texture bits more effectively. That is, the proposed loss regularizes the weights such that the collocated weights of two adjacent layers have the same values. Finally, we propose an ILWP with an inter-layer loss and quantization method. Our comprehensive experiments show that the proposed method achieves a much higher weight compression rate at the same accuracy level compared with previous quantization-based compression methods for deep neural networks.

Deep neural networks have demonstrated great performance on various tasks in many fields, such as image classification (LeCun et al. 1990a; Krizhevsky et al. 2012; He et al. 2016), object detection (Ren et al. 2015; He et al. 2017; Redmon & Farhadi 2018), image captioning, and speech recognition (Xiong et al. 2018). Wide and deep neural networks achieve high accuracy with the aid of an enormous number of weight parameters and high computational cost (Simonyan & Zisserman 2014; He et al. 2016). However, as the demand for deploying neural networks in resource-constrained environments has been increasing, making neural networks resource-efficient while maintaining accuracy has become an important research area. Several studies have aimed to solve this problem. In LeCun et al. (1990b) and Han et al. (2015b), among others, network pruning methods were proposed for memory-efficient architectures, where weights that are unimportant in terms of accuracy are forced to have zero values. In Han et al. (2015a) and related work, weights were quantized and stored in memory, enabling lower memory usage for deep neural networks. On the other hand, some works decomposed convolution operations into sub-operations (e.g., depth-wise separable convolution) requiring less computation at similar accuracy levels (Howard et al. 2017; Sandler et al. 2018; Ma et al. 2018). In this paper, we show that the weights of two adjacent convolution layers tend to share high similarity in shapes and values. We call this phenomenon the Smoothly Varying Weight Hypothesis (SVWH). This paper explores an efficient neural network method that takes full advantage of SVWH. Specifically, inspired by the prediction techniques widely used in the video compression field (Wiegand et al. 2003; Sullivan et al.
2012), we propose a new weight compression scheme based on an inter-layer weight prediction technique, which can be successfully incorporated into depth-wise separable convolutional blocks (Howard et al. 2017; Sandler et al. 2018; Ma et al. 2018). Contributions: The main contributions of this paper are listed below: • From comprehensive experiments, we find that the weights of adjacent layers tend to share strong similarities, which leads us to establish SVWH. • Based on SVWH, we propose a simple and effective Inter-Layer Weight Prediction (ILWP) and quantization framework enabling more compressed neural networks than applying quantization alone to the weights. • To further enhance the effectiveness of the proposed ILWP, we devise a new regularization function, denoted as the inter-layer loss, that aims to minimize the difference between collocated weight values in adjacent layers, resulting in significant savings of non-texture bits (i.e., bits for the indices of predictions). • Our comprehensive experiments demonstrate that the proposed scheme achieves about 53% compression ratio on average under 8-bit quantization at the same accuracy level compared to the traditional quantization method (without prediction) on both MobileNetV1 and MobileNetV2.

Network pruning: Network pruning methods prune unimportant weight parameters, reducing the redundancy of weight parameters inherent in neural networks. LeCun et al. (1990b) and follow-up work reduced the number of weight connections implicitly by setting a proper objective function for training. Han et al. (2015b) successfully removed unimportant weight connections by thresholding the weight values, showing no harm to accuracy in state-of-the-art convolutional neural network models. Recently, structured (filter/channel/layer-wise) pruning methods have been proposed, where a set of weights is pruned based on certain criteria (e.g., the sum of absolute values in the set of weights), demonstrating a significantly reduced number of weight parameters and computational costs. More recent work uses AutoML for channel pruning, reporting 13.2% higher accuracy than a filter pruning baseline. Our paper is linked to pruning methods from the perspective of assigning more zero weights for weight compression.

Quantization: Quantization reduces the representation bits of the original weights in neural networks. Early work proposed weight quantization using weight discretization in neural networks. Han et al. (2015b) incorporated vector quantization into pruning, proving that quantization and pruning can jointly work for weight compression without accuracy degradation. This pruning-quantization framework, called Deep Compression, became a milestone in model compression research for deep neural networks. Later work proposed fixed-point quantization using a linear scale factor for the weight values, where the bit-widths for quantization are adaptively found for each layer, enabling a 20% reduction of the weight size in memory without any loss in accuracy compared to the baseline fixed-point quantization method. Other methods clip the weights before applying linear quantization, improving accuracy over linear quantization without clipping. For 1-bit quantization, known as binary neural networks, several studies (Courbariaux et al. 2015; Courbariaux et al. 2016; Hubara et al.
2017) binarized the weights and/or activations in the process of back-propagation, enjoying considerably reduced memory usage and computation overhead. Compared to the aforementioned quantization techniques, our work applies quantization to the residuals between the weights of different layers rather than to the weight values themselves. Empirically, we found that the residuals tend to produce a much larger portion of zero values when quantized, since they often follow very narrow Laplace distributions. Therefore, our proposed method can significantly reduce the memory consumption of neural networks, as shown in Section 4.

Prediction in conventional video coding: Prediction is considered one of the most crucial parts of video compression, aiming at minimizing the magnitudes of the signals to be encoded by subtracting from the input signals the most similar already-encoded signals from a set of prediction candidates (Wiegand et al. 2003; Rijkse 1996; Sullivan et al. 2012). Prediction methods can produce signal residuals with low magnitudes as well as a large number of non/near-zero signals. Therefore, they have effectively been combined with transforms and quantization, for concentrating power in low-frequency regions and for reducing entropy, respectively. There are two prediction techniques: inter- and intra-prediction. Inter-prediction searches for the best prediction signals in encoded neighboring frames other than the current frame, while intra-prediction generates a set of prediction signals from the input signals and determines the best prediction (Wiegand et al. 2003; Sullivan et al. 2012). This is because intra frames, which use only intra-prediction for compression, serve as reference frames for subsequent frames to be predicted. Note that a few studies have explored applying the transform techniques of video and/or image coding to the weight compression problem in neural networks. Some works applied the DCT (Discrete Cosine Transform) used in the JPEG algorithm to the model compression problem of deep neural networks, such that the energy of the weight values became concentrated in low-frequency regions, thus producing more non/near-zero DCT coefficients for the weights. Compared to the aforementioned papers, our work does not adopt transform techniques to reduce model sizes, because the transforms introduce considerable computation at inference time, decreasing the effectiveness of weight compression in practical applications. In this paper, we found that the inter-prediction technique can play a crucial role in weight compression under the SVWH condition (i.e., the weights of adjacent layers tend to be similar). The proposed inter-layer loss reinforces SVWH for reducing the non-texture bits in the training process. As a result, the proposed inter-prediction and quantization framework for weight compression yields impressive compression performance enhancement at a similar accuracy level compared to the baseline models.

Figure 1-(a) shows a simple example of the proposed ILWP method with a full search strategy (ILWP-FSS). The full search strategy (FSS) indicates that, for the j-th weight kernel of the i-th convolution layer (K_{i,j}), it searches for the most similar weight kernel (i.e., K_{u,v} in Figure 1-(a)) in the range of the [1, i − 1]-th layers of a given pre-trained neural network model.
We then compute the residuals (R_{i,j}) between the current weight kernel and the best prediction, which are finally quantized to a certain bit representation and stored in memory together with the index of the best prediction (i.e., (u, v) in Figure 1-(a)). The proposed ILWP is performed from the second layer to the last layer in a sequential manner. It should be noted that, because of the large portion of non-texture bits (indices of the best predictions), ILWP-FSS tends to produce more bits (both texture (residual) and non-texture bits) than storing the standard weight values without ILWP, which makes ILWP worthless for compressing the weights. In the next subsection, we describe how SVWH solves this problem effectively. As shown in Figure 2, the dominant portion of the best predictions for the current (i-th) layer is obtained from its previous ((i − 1)-th) layer, and this is consistently observed in all the layers of the neural networks. From these observations, we claim that adjacent layers tend to consist of similar weight values, leading us to propose SVWH. The proposed SVWH can be mathematically expressed as

P[ L(K_{i,j}, K_{i−1,v}) < L(K_{i,j}, K_{u,v}) ] ≈ 1,

where P[·] is a probability, L(·, ·) is a distance between two kernels, and (u, v) are the indices of the layer and kernel, with 1 ≤ u < i − 1 (see Figure 1). Inspired by SVWH, we propose an enhanced version of ILWP, i.e., ILWP with a local search strategy (ILWP-LSS), which finds the best prediction only in the previous layer (see Figure 1-(b)). The local search strategy (LSS) can effectively reduce the non-texture bits, because the non-texture bits for the layer index are no longer required. Moreover, the LSS allows the network to keep only the weights of the immediately previous layer during inference, thus reducing the memory buffer for keeping the weights. Our experimental results in Section 4 show that the local search strategy enhances the compression performance compared to the FSS in the ILWP framework. To further increase the effectiveness of the proposed ILWP, we devise a new regularization loss, namely the inter-layer loss, which further exploits SVWH by making the collocated filter weights of adjacent layers have almost the same values. We find that our ILWP can be applied more effectively to the depth-wise convolution (3 × 3 spatial convolution) layers in depth-wise separable convolution blocks than to traditional 3D convolutions (3 × 3 × C convolution). This is because the high dimensionality of the weights in a traditional 3D convolution filter tends to hinder finding a best prediction that shares strong similarity. This can introduce a longer tail and a wider shape of the Laplace distribution of the weight residuals, thus decreasing the compression efficiency. Moreover, it is not possible to predict weight kernels whose channel dimensions differ from the current weight kernel, which can limit the usability of the proposed ILWP. On the other hand, a depth-wise convolution kernel consists of only nine elements, and all depth-wise convolutions have canonical 3 × 3 kernel shapes throughout the network. Also, since the point-wise convolution (1 × 1 convolution) learns channel correlations for the output features of the depth-wise convolution layer, forcing the collocated weights to be similar hardly affects the accuracy of the network. These characteristics of depth-wise convolution enhance the usability of the proposed ILWP for weight compression.
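To make the pipeline concrete, here is a minimal NumPy sketch of one ILWP-FSS compression pass over a list of depth-wise kernels; the array layout, the L1 matching criterion, and the quantization step size are illustrative assumptions rather than details fixed by the paper.

```python
import numpy as np

def ilwp_full_search(kernels, num_bits=8, step=0.02):
    """Sketch of ILWP with a full search strategy (ILWP-FSS).

    kernels: list over layers; kernels[i] has shape (n_i, 3, 3), the
    depth-wise 3x3 kernels of layer i. Returns quantized residuals (texture
    bits, to be Huffman coded) plus the (layer, kernel) index of each best
    prediction (the non-texture bits).
    """
    residuals, indices = [], []
    for i in range(1, len(kernels)):
        layer_res, layer_idx = [], []
        # Candidates: every kernel in all previous layers (full search).
        cands = [(u, v, kernels[u][v])
                 for u in range(i) for v in range(len(kernels[u]))]
        for K in kernels[i]:
            # Best prediction = candidate minimizing the L1 distance to K.
            u, v, best = min(cands, key=lambda c: np.abs(K - c[2]).sum())
            R = K - best
            # Linear quantization of the residual to num_bits levels.
            q = np.clip(np.round(R / step),
                        -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1)
            layer_res.append(q.astype(np.int32))
            layer_idx.append((u, v))
        residuals.append(layer_res)
        indices.append(layer_idx)
    return residuals, indices
```

The `(u, v)` tuples are exactly the non-texture bits whose overhead motivates ILWP-LSS and, ultimately, the inter-layer loss.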
Moreover, it has become popular to use depth-wise separable convolutions in recent neural network architectures due to their high efficiency (Kaiser et al. 2017; Ma et al. 2018). Therefore, we apply the proposed method to the spatial convolutions in depth-wise separable convolution blocks. Our proposed inter-layer loss can dramatically eliminate the non-texture bits and is defined as

L_inter = (1 / (N · Z)) Σ_{i=2..N} Σ_{j=1..Z} || K_{i,j} − K_{i−1,v} ||²,

where Z is the number of weights to be predicted, N is the number of depth-wise convolution layers in the whole neural network, and v is the index in the previous layer, which equals j mod c_{i−1}. The proposed loss function not only regularizes the weights but also allows us to eliminate all the non-texture bits for the indices of the best predictions, since the network can always predict the weight values of the current layer from the collocated weights in its previous layer. For training the neural networks with our proposed loss, the total loss is formulated as

L_total = L_cls + λ · L_inter,

where L_cls is the conventional classification loss using cross-entropy, and λ is the control parameter for the inter-layer loss, set to 1 in all experiments. From our comprehensive experiments, we found that setting λ to 1 is suitable for matching the trade-off between the performance and the parameter size of neural networks (see Figure 4 in Section 4.1). Through our new inter-layer loss, SVWH is explicitly controlled in a more elaborate manner, as shown in Figure 3, which presents heatmaps of the percentage of best predictions, with respect to minimizing the L1 distance, between the source layer (x-axis) and the target layer (y-axis) in MobileNet, MobileNetV2, ShuffleNet, and ShuffleNetV2 trained with the proposed inter-layer loss. Finally, the reconstruction of the current weight kernel is performed as

K̂_{i,j} = K̂_{i−1,v} + R̂_{i,j},

where R̂_{i,j} is the quantized residual at the i-th layer and j-th filter position, and K̂ is the final reconstruction of K. Note that, to reconstruct the current weights, only the weights of the previous layer (i.e., K̂_{i−1,v}) are required (see Figure 1-(c)). The residuals of the weights are quantized with a linear quantization technique and then stored using Huffman coding for memory efficiency. Thanks to the prediction scheme, our method usually retains more high-valued non-zero weights than traditional quantization methods. Since high weight values contribute substantially to prediction performance, our method can retain both high accuracy and strong weight compression. The weight kernels in the first layer are not quantized, since they are as important as intra-frames (i.e., reference frames) in a group of pictures in video coding. If the weight kernels in the first layer were quantized, the weight kernels in subsequent layers, which are predicted from the weight kernels in the first layer, would be negatively affected, leading to an accuracy drop.
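As a concrete illustration of the loss reconstructed above, the following PyTorch-style sketch penalizes the squared difference between collocated depth-wise kernels of adjacent layers with v = j mod c_{i−1}; the tensor shapes and the normalization are illustrative assumptions, not a reference implementation.

```python
import torch

def inter_layer_loss(dw_weights):
    """Sketch of the inter-layer loss: collocated depth-wise kernels in
    adjacent layers are pushed toward identical values (v = j mod c_{i-1}).

    dw_weights: list of depth-wise conv weights, each of shape (c_i, 1, 3, 3).
    """
    total, count = 0.0, 0
    for prev, curr in zip(dw_weights[:-1], dw_weights[1:]):
        c_prev = prev.shape[0]
        idx = torch.arange(curr.shape[0]) % c_prev  # collocated kernel indices
        total = total + ((curr - prev[idx]) ** 2).sum()
        count += curr.numel()  # normalization choice is illustrative
    return total / count

# Total training objective, with lambda = 1 as in the paper's experiments:
# loss = cls_loss + 1.0 * inter_layer_loss(dw_weights)
```

With this regularizer in place, the decoder never needs stored prediction indices: kernel j of layer i is always predicted from kernel j mod c_{i−1} of layer i−1.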
In this section, we demonstrate the merits of the proposed method on image classification tasks, specifically on the CIFAR-10 and CIFAR-100 datasets. To assess the generality of our proposed ILWP, we applied our method to four convolutional neural networks: MobileNet, MobileNetV2, ShuffleNet, and ShuffleNetV2. Four NVIDIA 2080 Ti GPUs with an Intel i9-7900X CPU were used to perform the experiments. For the hyper-parameter settings in the training process, we set the initial learning rate to 0.1, multiplied by 0.98 every epoch. We used the Stochastic Gradient Descent (SGD) optimizer with a Nesterov momentum factor of 0.9. All neural networks were trained for 200 epochs with a batch size of 256. The baseline model is each of the aforementioned four convolutional neural networks with linear quantization applied to the weights of the depth-wise convolution kernels. In all experiments, the test accuracy and the parameter size in kilobytes (KB) are reported as the average of 5 runs. First, we searched for the optimal λ that best matches the trade-off between accuracy and parameter size. Figure 4 shows the parameter sizes and top-1 test accuracy of MobileNet trained on CIFAR-10 for different values of λ after quantization and Huffman coding. We can see that the case of λ = 1 has the smallest parameter size, with only slightly lower top-1 test accuracy on CIFAR-10. Furthermore, not only MobileNet trained on CIFAR-100, but also the other models, i.e., MobileNetV2, ShuffleNet, and ShuffleNetV2 trained on CIFAR-10 and CIFAR-100, show very similar results. We therefore set λ to 1 in our experiments. Figure 5 shows the parameter size of our proposed methods for different quantization bit-widths in {2, 3, 4, ..., 8} on the CIFAR-10 and CIFAR-100 datasets. Figure 5 shows that our proposed ILWP-ILL yields a higher compression ratio for the weight parameters. On the other hand, ILWP-FSS and ILWP-LSS have larger parameter sizes than the baseline. This is because both ILWP-FSS and ILWP-LSS carry non-texture bits (bits for the indices of the best predictions), resulting in bit overheads. For further analysis of the texture and non-texture bits in the proposed ILWP, Table 1 compares our proposed methods in terms of the parameter sizes of the depth-wise convolution kernels and top-1 test accuracy. Due to the inter-layer prediction scheme, all the proposed methods use fewer texture bits than the baseline. However, the total bit amounts of ILWP-FSS and ILWP-LSS are higher than the baseline model, due to the presence of the non-texture bits (u and v in Figure 1). ILWP-ILL, in contrast, demonstrates a much reduced total bit amount compared to the baseline, because it does not require storing any indices of reference kernels while maintaining good accuracy under the SVWH condition. Figure 6 shows the results of our proposed methods on CIFAR-10 and CIFAR-100 in terms of parameter size in kilobytes (KB) and test accuracy. As shown in Figure 6, ILWP-ILL outperforms the other variants in the trade-off between the amount of compressed bits and accuracy. This is because ILWP-ILL significantly saves weight bits by eliminating the non-texture bits as well as increasing the effectiveness of the quantization process, as the residuals tend to follow much narrower Laplace distributions than the original weight values (see Section 5). Compared to ILWP-FSS, ILWP-LSS shows only a very slight improvement in the trade-off between parameter size and accuracy, even though it does not store the layer index of the best prediction (u in Figure 1). This is due to the fact that the portion of bits for the layer indices is much smaller than the portion of bits for the kernel indices.
It is worth noting that both ILWP-FSS and ILWP-LSS often have worse compression efficiency than the baseline, because they introduce non-texture bits that constitute a large portion of the total bits for the weight parameters. Therefore, it can be concluded that our finding of SVWH and the proposed inter-layer loss allow the network to take full advantage of the inter-layer prediction scheme of conventional video coding frameworks. Figure 7 compares the distributions of the weights and residuals over all the depth-wise convolution kernels in MobileNet trained on CIFAR-100. Figures 7-(a), -(b), and -(c) visualize the distribution of the weight kernels in the baseline model, the distribution of the residuals in ILWP-FSS, and the distribution of the residuals in ILWP-ILL, respectively. As shown in Figure 7, ILWP-FSS produces a unimodal Laplace distribution. This nice property contributes to high compression capacity for storing the weights when these residuals are quantized and Huffman coded. However, this method requires storing the additional indices, consuming more bits. As shown in Figure 7-(c), for ILWP-ILL the residuals are located at near-zero positions (about 46% of the weights), following a very sharp Laplace distribution. This indicates that ILWP-ILL with quantization allows neural networks to achieve remarkable weight compression performance by generating a large number of zero coefficients after quantization. Furthermore, in terms of information theory, the Shannon (differential) entropy H_x of a Laplace distribution with probability density function f_L(·), scale factor b, and location factor μ is

H_x = −∫ f_L(x) ln f_L(x) dx = 1 + ln(2b),

which does not depend on μ. The Shannon entropy thus grows with the scale parameter b, which controls the width of the Laplace distribution, i.e., a small b gives a narrow Laplace distribution. As shown in Figure 7, the distribution of the residuals in ILWP-ILL is a much narrower Laplace distribution than that of the residuals in ILWP-FSS. Consequently, the information entropy of the residual distribution in ILWP-ILL is lower than that in ILWP-FSS. Meanwhile, entropy coding such as Huffman coding compresses better at lower information entropy. As a result, the ILWP-ILL method compresses better than the ILWP-FSS method after quantization and entropy coding.

We have proposed a new inter-layer weight prediction method with an inter-layer loss for efficient deep neural networks. Motivated by our observation that the weights in adjacent layers tend to vary smoothly, we successfully built a new weight compression framework combining the inter-layer weight prediction scheme, the inter-layer loss, quantization, and Huffman coding under the SVWH condition. Intuitively, our prediction scheme significantly decreases the entropy of the weights by making them follow much narrower Laplace distributions, thus leading to a remarkable compression ratio of the weight parameters in neural networks. Also, the proposed inter-layer loss effectively eliminates the non-texture bits for the best predictions. To the best of our knowledge, this work is the first to report the phenomenon of weight similarity between neighboring layers and to build a prediction-based weight compression scheme for modern deep neural network architectures.
Paper ID: S1eUd64tDr
TL;DR: We propose a new compression method, Inter-Layer Weight Prediction (ILWP), with a quantization method that quantizes the predicted residuals between the weights in convolution layers.
There exists a plethora of techniques for inducing structured sparsity in parametric models during the optimization process, with the final goal of resource-efficient inference. However, to the best of our knowledge, none target a specific number of floating-point operations (FLOPs) as part of a single end-to-end optimization objective, despite reporting FLOPs as part of their results. Furthermore, a one-size-fits-all approach ignores realistic system constraints, which differ significantly between, say, a GPU and a mobile phone -- FLOPs on the former incur less latency than on the latter; thus, it is important for practitioners to be able to specify a target number of FLOPs during model compression. In this work, we extend a state-of-the-art technique to directly incorporate FLOPs as part of the optimization objective and show that, given a desired FLOPs requirement, different neural networks can be successfully trained for image classification.

Neural networks are a class of parametric models that achieve the state of the art across a broad range of tasks, but their heavy computational requirements hinder practical deployment on resource-constrained devices, such as mobile phones, Internet-of-things (IoT) devices, and offline embedded systems. Many recent works focus on alleviating these computational burdens, mainly falling under two non-mutually exclusive categories: manually designing resource-efficient models, and automatically compressing popular architectures. In the latter, increasingly sophisticated techniques have emerged (BID3; BID4; BID5), which have achieved respectable accuracy-efficiency operating points, some even Pareto-better than that of the original network; for example, network slimming (BID3) reaches an error rate of 6.20% on CIFAR-10 using VGGNet (BID9) with a 51% FLOPs reduction -- an error decrease of 0.14% over the original. However, to the best of our knowledge, none of these methods impose a FLOPs constraint as part of a single end-to-end optimization objective. MorphNets (BID0) apply an L1-norm, shrinkage-based relaxation of a FLOPs objective, but for the purpose of searching and training multiple models to find good network architectures; in this work, we learn a sparse neural network in a single training run. Other papers directly target device-specific metrics, such as energy usage (BID15), but the pruning procedure does not explicitly include the metrics of interest as part of the optimization objective, instead using them as heuristics. Falling short of continuously deploying a model candidate and measuring actual inference time, as in time-consuming neural architecture search (BID11), we believe that the number of FLOPs is a reasonable proxy measure for actual latency and energy usage; across variants of the same architecture, Tang et al. suggest that the number of FLOPs is a stronger predictor of energy usage and latency than the number of parameters (BID12). Indeed, there are compelling reasons to optimize for the number of FLOPs as part of the training objective. First, it permits FLOPs-guided compression in a more principled manner. Second, practitioners can directly specify a desired FLOPs target, which is important in deployment. Thus, our main contribution is to present a novel extension of the prior state of the art (BID6) to incorporate the number of FLOPs as part of the optimization objective, furthermore allowing practitioners to set and meet a desired compression target.
Formally, we define the FLOPs objective L_flops : F × R^m → N_0 (mapping a hypothesis and its m parameters to a non-negative FLOP count). We assume L_flops depends only on whether parameters are non-zero, such as through the number of active neurons in a neural network, so it can be written as

L_flops(h, θ) = g(I[θ_1 ≠ 0], ..., I[θ_m ≠ 0]),

where h(·; θ) := p(·|θ) is the hypothesis, g(·) is a function with the explicit dependencies, and I is the indicator function. For a dataset D, our empirical risk thus becomes

R(θ) = (1/|D|) Σ_{(x,y)∈D} L(h(x; θ), y) + λ_f · max(0, L_flops(h, θ) − T).

Hyperparameters λ_f ≥ 0 and T ∈ N_0 control the strength of the FLOPs objective and the target, respectively. The second term is a black-box function whose combinatorial nature prevents gradient-based optimization; thus, using the same procedure as prior art (BID6), we relax the objective to a surrogate of the evidence lower bound with a fully-factorized spike-and-slab posterior q(z) as the variational distribution, where the addition of the clipped FLOPs objective can be interpreted as a sparsity-inducing prior p(θ):

E_{q(z)}[ (1/|D|) Σ_{(x,y)∈D} L(h(x; θ ⊙ z), y) ] + λ_f · E_{q(z)}[ max(0, L_flops(h, z) − T) ],

where ⊙ denotes the Hadamard product. To allow for efficient reparameterization and exact zeros, Louizos et al. (BID6) propose to use a hard concrete distribution as the approximation, which is a stretched and clipped version of the binary Concrete distribution (BID7): if ẑ ∼ BinaryConcrete(α, β), then z̄ := max(0, min(1, (ζ − γ)ẑ + γ)) is said to be a hard concrete r.v., given ζ > 1 and γ < 0. Define φ := (α, β), and let ψ(φ) = Sigmoid(log α − β log(−γ/ζ)); ψ(·) is the probability of a gate being non-zero under the hard concrete distribution. For the second expectation, it is more efficient to sample from the equivalent Bernoulli parameterization with probabilities ψ(φ) than from the hard concrete distribution, which is more computationally expensive to sample multiple times. The first term now allows for efficient optimization via the reparameterization trick (BID2); for the second, we apply the score function estimator (REINFORCE) (BID14), since the FLOPs objective is, in general, non-differentiable and thus precludes the reparameterization trick. High variance is a non-issue because the number of FLOPs is fast to compute, hence letting many samples be drawn. At inference time, the deterministic estimator for the final parameters is

θ̂ := θ ⊙ max(0, min(1, Sigmoid(log α)(ζ − γ) + γ)).

In practice, computational savings are achieved only if the model is sparse across "regular" groups of parameters, e.g., each filter in a convolutional layer. Thus, each computational group uses one hard concrete r.v. (BID6) -- in fully-connected layers, one per input neuron; in 2D convolution layers, one per output filter. Under the convention in the literature where one addition and one multiplication each count as a FLOP, the FLOPs for a 2D convolution layer h_conv(·; θ) given a random draw z is then defined as

L_flops(h_conv, z) = (K_w K_h C_in + 1)(I_w − K_w + P_w + 1)(I_h − K_h + P_h + 1) ||z||_0

for kernel width and height (K_w, K_h), input width and height (I_w, I_h), padding width and height (P_w, P_h), and number of input channels C_in. The number of FLOPs for a fully-connected layer h_fc(·; θ) is L_flops(h_fc, z) = (I_n + 1) ||z||_0, where I_n is the number of input neurons. Note that these are conventional definitions in neural network compression papers -- the objective can just as easily use the number of FLOPs incurred by other device-specific algorithms. Thus, at each training step, we compute the FLOPs objective by sampling from the Bernoulli r.v.'s and using the aforementioned definitions, e.g., L_flops(h_conv, ·) for convolution layers.
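To illustrate the estimator, here is a minimal PyTorch-style sketch of the clipped FLOPs penalty under Bernoulli gates with a score-function surrogate; it assumes the per-gate FLOP costs enter linearly (as in the layer formulas above), and all names are illustrative rather than taken from the authors' code.

```python
import torch

def clipped_flops_penalty(log_alpha, flops_per_gate, target, lam_f, n_samples=1000):
    """Score-function (REINFORCE) surrogate for lam_f * E_z[max(0, FLOPs(z) - T)].

    log_alpha:      learnable gate logits, shape (G,)
    flops_per_gate: FLOPs contributed by each gate when active, shape (G,)
    """
    p = torch.sigmoid(log_alpha)                            # P(gate is non-zero)
    z = torch.bernoulli(p.detach().expand(n_samples, -1))   # many cheap draws
    f = torch.clamp(z @ flops_per_gate - target, min=0.0)   # clipped FLOPs term
    # log q(z) under the factorized Bernoulli; gradients flow through p here.
    log_q = (z * torch.log(p + 1e-8)
             + (1 - z) * torch.log(1 - p + 1e-8)).sum(dim=-1)
    # The gradient of this surrogate w.r.t. log_alpha is the REINFORCE
    # estimate of grad E[f]; f itself carries no gradient.
    return lam_f * (f * log_q).mean()
```

At inference time, one would instead prune gates using the deterministic estimator θ̂ given above, so no sampling remains in the deployed model.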
Then, we apply the score function estimator to the FLOPs objective as a black-box estimator. We report results on MNIST, CIFAR-10, and CIFAR-100, training multiple models on each dataset corresponding to different FLOPs targets. We follow the same initialization and hyperparameters as Louizos et al. (BID6), using Adam (BID1) with temporal averaging for optimization, a weight decay of 5 × 10^-4, and an initial α that corresponds to the original dropout rate of that layer. We similarly choose β = 2/3, γ = −0.1, and ζ = 1.1. For brevity, we direct the interested reader to their repository (BID0) for specifics. In all of our experiments, we replace the original L0 penalty with our FLOPs objective, and we train all models for 200 epochs; at epoch 190, we prune the network by removing weights associated with zeroed gates and replace the r.v.'s with their deterministic estimators, then finetune for 10 more epochs. For the score function estimator, we draw 1000 samples at each optimization step -- this procedure is fast and has no visible effect on training time. Prior results on LeNet-5-Caffe (architecture, error rate, parameter count) are as follows:

  Method           Architecture    Error   Params
  (BID10)          7-13-208-16     1.1%    254K
  SBP (BID8)       3-18-284-283    0.9%    217K
  BC-GNJ (BID5)    8-13-88-13      1.0%    290K
  BC-GHS (BID5)    5-10-76-16      1.0%    158K
  L0 (BID6)        20-25-45-462    0.9%    1.3M
  L0-sep (BID6)    9-18-65-25      1.0%    403K

We choose λ_f = 10^-6 in all of the experiments for LeNet-5-Caffe, the Caffe variant of LeNet-5. We observe that our methods (Table 1, bottom three rows) achieve accuracy comparable to those of previous approaches while using fewer FLOPs, with the added benefit of providing a tunable "knob" for adjusting the FLOPs. Note that the convolution layers are the most aggressively compressed, since they are responsible for most of the FLOPs in this model. Orig. in Table 2 denotes the original WRN-28-10 model (BID16), and L0-* refers to the L0-regularized models (BID6); likewise, we augment CIFAR-10 and CIFAR-100 with standard random cropping and horizontal flipping. For each of our results (last two rows), we report the median error rate of five different runs, executing a total of 20 runs across two models for each of the two datasets; we use λ_f = 3 × 10^-9 in all of these experiments. We also report both the expected FLOPs and the actual FLOPs, the former denoting the average number of FLOPs at training time under stochastic gates and the latter denoting the number of FLOPs at inference time. We restrict the FLOPs calculations to the penalized non-residual convolution layers only. For CIFAR-10, our approach results in Pareto-better models with decreases in both error rate and the actual number of inference-time FLOPs. For CIFAR-100, we do not achieve a Pareto-better model, since our approach trades accuracy for improved efficiency. The acceptability of the tradeoff depends on the end application.
Paper ID: HkG5JF6Do7
TL;DR: We extend a state-of-the-art technique to directly incorporate FLOPs as part of the optimization objective, and we show that, given a desired FLOPs requirement, different neural networks can be successfully trained.
Unpaired image-to-image translation among category domains has achieved remarkable success in recent years. Recent studies mainly focus on two challenges. For one thing, such translation is inherently multimodal due to variations of domain-specific information (e.g., the domain of house cats has multiple fine-grained subcategories). For another, existing multimodal approaches have limitations in handling more than two domains, i.e., they have to independently build one model for every pair of domains. To address these problems, we propose the Hierarchical Image-to-image Translation (HIT) method, which jointly formulates the multimodal and multi-domain problem in a semantic hierarchy structure and can further control the uncertainty of the multimodal translation. Specifically, we regard domain-specific variations as the result of the multi-granularity property of domains: one can control the granularity of the multimodal translation by dividing a domain with large variations into multiple subdomains which capture local and fine-grained variations. Under the assumption of Gaussian priors, the variations of domains are modeled in a common space such that translations can be done among multiple domains within one model. To learn such a complicated space, we propose to leverage the inclusion relations among domains to constrain the distributions of parents and children to be nested. Experiments on several datasets validate the promising and competitive performance of our method against the state of the art.

Image-to-image translation is the process of mapping images from one domain to another, changing the domain-specific aspects and preserving the domain-irrelevant information. It has wide applications in computer vision and computer graphics (Zhu et al. 2017a), such as mapping photographs to edges/segments, colorization, super-resolution, inpainting, attribute and category transfer, and style transfer. In this work, we focus on the task of attribute and category transfer, i.e., a set of images sharing the same attribute or category label is defined as a domain. Such tasks have achieved significant development and impressive results in terms of image quality in recent years, benefiting from the improvement of generative adversarial nets (GANs). Representative methods include pix2pix and CycleGAN (Zhu et al. 2017a), among others. More recently, the study of this task has mainly focused on two challenges. The first is the ability to handle translation among several domains within one model, which is quite a practical need for users. With most methods, we have to train a separate model for each pair of domains, which is obviously inefficient. To deal with this problem, StarGAN leverages one generator to transform an image to any domain by taking both the image and the target domain label as conditional input, supervised by an auxiliary domain classifier. Another challenge is the multimodal problem, which was first addressed by BicycleGAN (Zhu et al. 2017b). Most techniques, including the aforementioned StarGAN, can only give a single determinate output in the target domain given an image from the source domain. However, for many translation tasks, the mapping is naturally multimodal. As shown in Fig.1, a cat could have many possible appearances, such as being a Husky, a Samoyed or another dog, when translated to the dog domain.

Figure 1: An illustration of a hierarchy structure and the distribution relationships in a 2D space among categories in such a hierarchy. Multi-domain translation is shown in the horizontal direction (blue dashed arrow), while multimodal translation is indicated in the vertical direction (red dashed arrow). Since one child category is a special case of its parent, in the distribution space it is a conditional distribution of its parent, leading to the nested relationship between them.

To address
this issue, recent works including BicycleGAN (Zhu et al. 2017b) model a continuous and multivariate distribution independently for each domain to represent the variations of domain-specific information, and they have achieved diverse and high-quality results for several two-domain translation tasks. In this paper, we aim to bring the abilities of both multi-domain and multimodal translation into one model. As shown in Fig.1, categories have natural hierarchical relationships. For instance, cat, dog and bird are three special children of the animal category, since they share some common visual attributes. Furthermore, in the dog domain, some samples are named husky and some are called samoyed, due to appearance variations within the dog category. Of course, one can continue to divide the husky into finer-grained categories based on the variations of certain visual attributes. Such hierarchical relationships widely exist among categories in the real world, since they are a natural way for humans to understand objects according to the needs of the moment. Returning to the image translation task, the multi-domain and multimodal issues can be understood from the horizontal and vertical views, respectively. From the horizontal view, as the blue dashed arrow indicates, multi-domain translation is the transformation among categories in a flat level. From the vertical view, as the red dashed arrow indicates, multimodal translation further considers variations within the target category, i.e., the multimodal issue is actually due to the multi-granularity property of categories. In the extreme case, every instance is a variation mode of the domain-specific information. Inspired by these observations, we propose a Hierarchical Image-to-image Translation (HIT) method which translates object images both among multiple category domains in the same hierarchy level and among their children domains. To this end, our method models the variations of all domains as multiple continuous and multivariate Gaussian distributions in a common space. This is different from previous methods, which model the same Gaussian distribution for two domains in independent spaces and thus cannot work with only one generator. Due to the hierarchical relationships among domains, the distribution of a child domain is a conditional distribution of its parent domain. Taking this principle into consideration, the distributions should be nested between a parent and its children, as in the 2D illustration in Fig.1. In this paper, we model all category domains in a given hierarchy structure as nested Gaussian distributions in a common space, realizing Hierarchical Image-to-image Translation (HIT) between any two domains. To effectively supervise the learning of such a distribution space, we further improve the traditional conditional GAN framework to possess hierarchical discriminability via a hierarchical classifier. Experiments on several category and attribute datasets validate the competitive performance of HIT against the state of the art.

Conditional Generative Adversarial Networks. The GAN is probably one of the most creative recent frameworks in the deep learning community. It contains a generator and a discriminator. The generator is trained to fool the discriminator, while the discriminator in turn tries to distinguish real and generated data.
Various GANs have been proposed to improve training stability, including better network architectures, more reasonable distribution metrics, and normalization schemes. With these improvements, GANs have been applied to many conditional tasks, such as image generation given class labels, super-resolution, text-to-image synthesis, 3D reconstruction from 2D input, and image-to-image translation, introduced in the following.

Image-to-image Translation. Pix2pix is the first unified framework for the task of image-to-image translation based on conditional GANs, which combines the adversarial loss with a pixel-level L1 loss and thus requires pairwise supervision between two domains. To address this issue, unpaired methods have been proposed, including UNIT, DiscoGAN, DualGAN, and CycleGAN (Zhu et al. 2017a). UNIT combines the variational auto-encoder and GAN frameworks, and proposes to share partial network weights between two domains to learn a common latent space such that corresponding images in two domains can be matched in this space. DiscoGAN, DualGAN and CycleGAN leverage a cycle consistency loss which enforces that we can re-translate the target image back to the original image. More recently, works in this area mainly focus on the problems of multi-domain and multimodal translation. To deal with translation among several domains in one generator, StarGAN takes the target label and input image as conditions, and uses an auxiliary classifier to classify the translated image into the domain it belongs to. As for the multimodal issue, BicycleGAN (Zhu et al. 2017b) proposes to model continuous and multivariate distributions. However, BicycleGAN requires input-output paired annotations. To overcome this problem, later methods adopt a disentangled representation for learning diverse translations from unpaired training data. Others propose to interpolate the latent codes between the input and a reference image to realize the generation of diverse images. Different from all the aforementioned works, we aim at realizing both multi-domain and multimodal translation in one model, using the natural hierarchical relationships among domains defined by category or attribute.

Hierarchy-regularized Learning. Hierarchical learning is a natural learning manner for human beings: we often describe objects in the world from abstract to detailed according to our needs at the time. Prior work proposes to use generative models to disentangle the factors, from low-level representations to high-level ones, that construct a specific object. Another work uses an unsupervised generative framework to hierarchically disentangle the background, object shape, and appearance of an image. In natural language processing, researchers have proposed a probabilistic word embedding method to capture the semantics described by the WordNet hierarchy. Our method is the first to introduce such a semantic hierarchy to learn a translation model that is both multi-domain and multimodal.

3.1 PROBLEM FORMULATION

Let x_i ∈ X_i be an image from domain i. Our goal is to estimate the conditional probability p(x_j | x_i) by learning an image-to-image translation model p(x_{i→j} | x_i), where x_{i→j} is a sample produced by translating x_i to domain X_j. Generally speaking, p(x_j | x_i) is multimodal due to intra-domain variations. To deal with the multimodal problem, similar to prior disentanglement-based methods, we assume that x_i is disentangled by an encoder E into a content part c ∈ C that is shared by all domains (i.e., domain-irrelevant) and a style part s_i ∈ S_i that is specific to domain X_i (i.e., domain-specific).
By modeling S_j as a continuous distribution such as a Gaussian N_j, x_i can simply be translated to domain X_j by G(c, s_j), where s_j is randomly sampled from N_j and G is a decoder. We further assume G and E are deterministic and mutually inverse, i.e., E = G^{-1} and G = E^{-1}. Besides, we assume that c is a high-dimensional feature map while s_i is a low-dimensional vector, such that the complex spatial structure of objects can be preserved and the style parts can focus on the relatively small-scale but discriminative domain-specific information. Different from prior two-domain methods, we aim to translate not only between two domains but among multiple ones. To this end, we need to model the Gaussians of the styles of all domains in a common space (not independently in two separate spaces), such that the single decoder G can generate the target image based on which Gaussian is sampled. In the multi-domain and multimodal settings, N_i^l denotes the Gaussian distribution of the i-th domain at the l-th level of the hierarchy.

Figure 2: Overview of the whole framework of the proposed method, which mainly consists of five modules: an encoder, a domain distribution modeling module, a decoder, a discriminator, and a hierarchical classifier. Given images from different categories, the encoder extracts domain-irrelevant and domain-specific features from the content and style branches, respectively. Then the decoder takes them as input to reconstruct the inputs, supervised by the reconstruction losses. To realize multimodal and multi-domain translation, domain distributions are modeled in a common space based on the semantic hierarchy structure and an elaborately designed nested loss. Combining the domain-irrelevant features and styles sampled from any distribution, the decoder can translate them to the target domain, guided by the adversarial loss and the hierarchical classification loss.

The framework is trained with an adversarial loss that ensures the translated images approximate the manifold of real images, a hierarchical cross-entropy loss that makes the generation conditioned on the sampled domain, a nested loss that constrains the distributions of domains to satisfy their hierarchical relationships, and bidirectional reconstruction losses that ensure enough meaningful information is encoded. In mathematics, the relation between a parent node u and a child node v in the hierarchy is called a partial order relation, defined as v ⪯ u. In the application of taxonomy, for concepts u and v, v ⪯ u means that every instance of category v is also an instance of category u, but not vice versa. We call such a partial order on probability densities the nested relation (also termed encapsulation in prior work). Let g and f be the densities of u and v respectively; if v ⪯ u, then f ⪯ g, i.e., f is nested in g. Quantitatively measuring how much f and g violate the nested relation is not easy.
According to the definition of the partial order, strictly measuring this violation can be done as

d_η(f, g) = vol( {x : f(x) > η} \ {x : g(x) > η} ),

where {x : f(x) > η} is the set where f is greater than a nonnegative threshold η. The threshold η indicates the degree of nesting we require: a small value of η means a high requirement on the overlap between f and g to satisfy f ⪯ g. The quantity above describes how much of the region where f exceeds η is not nested in the corresponding region of g, but it is difficult to compute for most distributions, including Gaussians. Inspired by work on hierarchical density order embeddings for word representations, we turn to a thresholded divergence,

d_α(f, g) = max(0, D(f || g) − α),

where D(·||·) is a divergence measure between densities; we use the KL divergence, considering its simple closed form for Gaussians. Such a loss is a soft measure of the violation of the nested relation. If f = g, then D(f || g) = 0. In the case f ⪯ g, D(f || g) is positive but smaller than the threshold α. For the effectiveness of using α, we refer the reader to the density order embedding literature. To learn the nested distributions for the domains in the hierarchy shown in Fig.2, the penalty d_α between a positive pair of distributions (N_i ⪯ N_j) should be minimized, while that between a negative pair (N_i ⋠ N_j) should be greater than a margin m:

L_nested = (1/P) Σ_{(i,j)∈pos} d_α(N_i, N_j) + (1/N) Σ_{(i,j)∈neg} max(0, m − d_α(N_i, N_j)),

where P and N denote the numbers of positive and negative pairs, respectively. Apart from the proposed nested loss, our HIT is equipped with an adversarial loss and a hierarchical classification loss to distinguish which domain the generated images belong to, as well as two general reconstruction losses applied to both images and features.

Adversarial loss. GAN training is an effective objective for matching the generated images to the real data manifold. The discriminator D tries to classify natural images as real and distinguish generated ones as fake, while the generator G learns to improve image quality to fool D:

L_adv = E_x[log D(x)] + E_{c,s}[log(1 − D(G(c, s)))].

Hierarchical classification loss. Similar to StarGAN, we introduce an auxiliary classifier D_cls on top of D and impose a domain classification loss when optimizing G and D, i.e., using real images to train D_cls and generated ones to optimize G with this classification loss. Differently, our classifier is hierarchical. In general, the deeper categories lie in the hierarchy, the more difficult they are to distinguish. To alleviate this problem, the loss is cumulative, i.e., the classification loss at the l-th level is the summation of the losses over all levels above the l-th with more than two categories:

L_cls^l = − Σ_{k ≤ l} log D_cls(y_j^k | x),

where y_j^k is the label of domain X_j at the k-th level.

Bidirectional reconstruction loss. To ensure meaningful information is encoded and that G and E are mutually inverse, we encourage reconstruction of both images and latent features.
- Image reconstruction loss: L_rec^x = E[ || G(c, s_i) − x_i ||_1 ], where (c, s_i) = E(x_i).
- Feature reconstruction loss: L_rec^{c,s} = E[ || E_c(G(c, s_j)) − c ||_1 + || E_s(G(c, s_j)) − s_j ||_1 ].

Full objectives. To learn E, G and the domain distributions N_j^l, we minimize the weighted sum of the above terms,

L(E, G, N) = L_adv + λ_1 L_cls + λ_2 L_rec + λ_3 L_nested,

where λ_1, λ_2 and λ_3 are the loss weights of the different terms. D is updated with the adversarial loss and the hierarchical classification loss computed on real and generated images. As shown in Fig.2, we add a distribution modeling module where a pair consisting of a mean vector and a diagonal covariance matrix of the Gaussian for each domain is learned. More training details are given in the Appendix.
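For concreteness, the following PyTorch-style sketch implements the thresholded-KL nested loss defined above for diagonal Gaussians, with α = 50 and m = 200 as reported in the appendix; the pair lists, tensor shapes, and function names are illustrative assumptions rather than the authors' implementation.

```python
import torch

def kl_diag_gauss(mu_f, logvar_f, mu_g, logvar_g):
    """KL(f || g) for diagonal Gaussians, in closed form."""
    var_f, var_g = logvar_f.exp(), logvar_g.exp()
    return 0.5 * (logvar_g - logvar_f
                  + (var_f + (mu_f - mu_g) ** 2) / var_g - 1).sum(-1)

def nested_loss(dists, pos_pairs, neg_pairs, alpha=50.0, m=200.0):
    """dists[i] = (mu_i, logvar_i) of domain i's learnable Gaussian.
    Positive pairs (child i, parent j) should satisfy the nested relation;
    negative pairs are pushed apart by at least the margin m."""
    def d_alpha(i, j):  # thresholded divergence max(0, KL - alpha)
        return torch.clamp(kl_diag_gauss(*dists[i], *dists[j]) - alpha, min=0.0)
    pos = torch.stack([d_alpha(i, j) for i, j in pos_pairs]).mean()
    neg = torch.stack([torch.clamp(m - d_alpha(i, j), min=0.0)
                       for i, j in neg_pairs]).mean()
    return pos + neg
```

Minimizing the positive term pulls each child's density inside its parent's (KL below α), while the hinge on negative pairs keeps unrelated domains separated in the common style space.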
Style adversarial loss. The adversarial and classification losses above match the generated images to the distribution of the target domain. Such loss functions can also be applied to the encoded style codes, i.e., matching s_i (acting as fake data) of the input images to the domain Gaussians N_i^l (acting as real data) they belong to. By doing so, we found that the performance of style transfer between a pair of real images becomes better. However, such a loss can lead to training collapse on small-scale datasets. Therefore, we recommend equipping our framework with it only on datasets with enough training data.

We conduct experiments on hierarchically annotated data from CelebA, ImageNet, and ShapeNet. Typical examples are shown in Fig.8, Fig.9 and Fig.10 in the Appendix. CelebA provides more than 200K face images with 40 attribute annotations. Following the official train/test protocol and imitating the category hierarchy, we define a hierarchy based on attribute annotations. Specifically, all faces are first clustered into male and female, and then further divided according to age and hair color in the next two levels. Following prior work, we collect images from 3 super-domains of ImageNet: house cats, dogs, and big cats. Each super-domain contains 4 fine-grained categories, which thus constructs a three-level hierarchy (whose root is animal). All images, split by the official train/test protocol, are processed by a pre-trained Faster R-CNN head detector and then cropped as the inputs for translation. ShapeNet consists of 51,300 3D models covering 55 common and 205 finer-grained categories. Twelve 2D images with different poses are obtained for each 3D model. A three-level hierarchy of furniture containing different kinds of tables and sofas is defined. The train/test split ratio is 4:1.

4.3 EVALUATION METRICS

Frankly speaking, quantitatively evaluating the quality of generated images is not easy, and recently proposed metrics may be fooled by artifacts to some extent. In this paper, we use the Inception Score (IS) and Fréchet Inception Distance (FID) to evaluate the semantics of generated images, and leverage the Learned Perceptual Image Patch Similarity (LPIPS) to measure the semantic diversity of generated visual modes.

4.4 COMPARED BASELINES

We mainly compare against methods proposed for either multi-domain or multimodal translation. Considering the unpaired training setting, the multi-domain method StarGAN and the multimodal method MUNIT are compared in this paper. Since MUNIT needs to train a model for each pair of domains, it is trained for the domain pairs male/female, young/old, and black/golden hair on CelebA; house cat/dog, house cat/big cat, and big cat/dog on ImageNet; and sofa/table on ShapeNet. The average of the evaluations over all domain pairs for each dataset is reported. As for StarGAN, it is trained on CelebA as done in its open-source code. Translations among the house cat, dog, and big cat domains on ImageNet, and between the sofa and table domains on ShapeNet, are learned for StarGAN. As a comparison, results of our HIT at the corresponding domain levels for each dataset are reported.

4.5 RESULTS

Table 1 shows the quantitative comparisons of the proposed HIT with the baselines. Fig.3 shows qualitative results on CelebA. It is observed that StarGAN achieves outstanding image quality, especially on fine-grained translations among attribute domains, while the advantage of multimodal methods is generating multiple translations with intra-domain mode variations at the cost of image quality. The image quality of MUNIT on CelebA is not satisfactory, both quantitatively in Table 1 and qualitatively in Fig.3. The reason may be that using only adversarial learning to find fine-grained attribute differences between domains is not stable, while a multi-domain classifier is good at such a task. Besides, MUNIT obtains the best diversity performance. This is reasonable, as it involves only two domains in one model and equips each domain with a triplet of backbones consisting of an encoder, a decoder, and a discriminator.
Our method considers both multimodal and multi-domain translation within only one such triplet of backbones, which places a high demand on network capacity. It achieves a trade-off between image quality and diversity. From Fig.3, the artifacts accompanying the faces generated by MUNIT may inflate its LPIPS on CelebA. Fig.4 and Fig.5 further show the qualitative results of our HIT on the ImageNet and ShapeNet datasets, respectively. Generally speaking, translation among such categories with large variations is much more challenging than for face data (several-fold increases of the FID in Table 1 can be observed). Even so, our HIT achieves promising qualitative results. Besides, using fixed styles from a particular category distribution (same columns in Fig.4 and Fig.5), the generated images indeed have similar styles for that category and dissimilar content appearances (e.g., pose, expression), demonstrating good disentanglement of content and style. Walking along the path toward the leaf level, translated images have fewer variations as more conditions are specified by the categories at higher levels. In other words, distributions at lower levels are local modes of their ancestor domains at higher levels, leading to the nested relationship. Results in Fig.6 validate that the learned style distributions at different levels are indeed nested. In the Appendix, we give an experimental sensitivity analysis of the parameters m and α which constrain the nested relationships among distributions. In Fig.7(a), we further study the smoothness of the learned distributions. It is observed that one can conduct smooth translation via interpolation between styles from different attribute domains. Besides, with the help of the additional style adversarial loss introduced in Sec.4.1, our method can provide users with more controlled translation, i.e., using the styles of reference real images instead of sampling them from distributions. Fig.7(b) shows some example results. We can see that the semantics of gender, age, and hair color are all correctly transferred to the input images.

In this paper we propose the Hierarchical Image-to-image Translation (HIT) method, which incorporates multi-domain and multimodal translation into one model. Experiments on three datasets, especially on CelebA, show that the proposed method can well achieve such granularity-controlled translation objectives, i.e., the variation modes of the outputs can be specified owing to the nested distributions. However, the current work has a limitation: the assumption of a single Gaussian for each category domain. On one hand, though a Gaussian prior is a good approximation for many kinds of data, it may not be applicable when the scale of available training data is small but the variations within a domain are large, as for the hierarchical data used from ImageNet and ShapeNet in this paper. On the other hand, a parent distribution should be a mixture of Gaussians given the multiple single Gaussians of its children. This issue leads to sparse sampling around the centers of parent distributions and poor nesting results if samples are not enough to fill the whole space. We have explored the idea of mixtures of Gaussians and found that the KL divergence between two Gaussian mixtures is hard to compute, as it has no analytical solution. Besides, the reparameterization trick for distribution sampling during SGD optimization cannot be transferred to the case of mixtures of Gaussians.
A better assumption to realize the nested relationships among parent-children distributions is a promising direction for our future research. We use the Adam optimizer with β1 = 0.5, β2 = 0.999, and an initial learning rate of 0.0001. We train HIT on all datasets for 300K iterations and halve the learning rate every 100K iterations. We set the batch size to 16. The loss weights λ1, λ2 and λ3 in Eqn. are set as 1, 10 and 1, respectively. α and m in Eqn. are empirically set as 50 and 200, respectively. Random mirroring is applied during training. In this section, Fig.8, Fig.9 and Fig.10 provide leaf-level examples for better understanding the nested relationships among categories at different hierarchy levels. Take CelebA for example: the root category face has two children distinguished by the gender attribute. Each of the two super categories includes two finer-grained children which are further divided by the age attribute (young/old). Finally, at the leaf level, each local branch is classified according to hair color, i.e. black, golden and brown hair. Within each leaf-level category, samples mainly contain intra-class variations caused by identities, expressions, poses, etc. The impacts of the hyper-parameters in the nested distribution learning have been studied on the word embedding task in prior work. In this section, we further analyze them on the current image generation task. Fig.11 and Fig.12 show the impacts of the hyper-parameters m and α in the nested loss of Eqn.. It is observed that the distribution margin m has a larger impact than the nested threshold α. With too large a setting of m, distributions which do not have a nested relationship would be pushed far away, leading to a sparse space. Sampling in such a space would make the learning of the generator quite difficult. In contrast, with too small a setting of m, the discriminability of the distributions may be poor. Therefore, a trade-off value of 200 is set for m in this paper. As for the nested threshold α, a relatively small or large value performs well in terms of the image quality metric. However, in theory, a large value of α would relax the nested constraint too much, resulting in small overlap between parent and children distributions. Therefore, we recommend setting α in the smaller half of its range. When α is set to 0, the parent and children distributions must be fully overlapped, which is too strict to learn. Finally, we set α as 50; the ratio of 1:4 between α and m is consistent with observations in prior work.
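For concreteness, the optimization setup described above might look as follows in PyTorch. This is a minimal sketch with a placeholder module standing in for the actual HIT generator and discriminator; all names here are ours, not from the paper.

```python
import torch

# Placeholder module standing in for the full HIT networks.
model = torch.nn.Linear(64, 64)

# Adam with beta1 = 0.5, beta2 = 0.999, initial learning rate 1e-4.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.5, 0.999))

# 300K iterations total, halving the learning rate every 100K iterations.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100_000, gamma=0.5)

batch_size = 16
loss_weights = {"lambda1": 1.0, "lambda2": 10.0, "lambda3": 1.0}
nested_loss_params = {"alpha": 50.0, "m": 200.0}
```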
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HkljrTEKvr
Granularity-controlled multi-domain and multimodal image-to-image translation method
Recurrent Neural Networks (RNNs) are very successful at solving challenging problems with sequential data. However, this observed efficiency is not yet entirely explained by theory. It is known that a certain class of multiplicative RNNs enjoys the property of depth efficiency --- a shallow network of exponentially large width is necessary to realize the same score function as computed by such an RNN. Such networks, however, are not very often applied to real life tasks. In this work, we attempt to reduce the gap between theory and practice by extending the theoretical analysis to RNNs which employ various nonlinearities, such as the Rectified Linear Unit (ReLU), and show that they also benefit from the properties of universality and depth efficiency. Our theoretical results are verified by a series of extensive computational experiments. Recurrent Neural Networks are firmly established as one of the best deep learning techniques when the task at hand requires processing sequential data, such as text, audio, or video BID10 BID18 BID7. The ability of these neural networks to efficiently represent a rich class of functions with a relatively small number of parameters is often referred to as depth efficiency, and the theory behind this phenomenon is not yet fully understood. A recent line of work BID12 BID5 focuses on comparing various deep learning architectures in terms of their expressive power. It was shown that ConvNets with product pooling are exponentially more expressive than shallow networks, that is, there exist functions realized by ConvNets which require an exponentially large number of parameters in order to be realized by shallow nets. A similar result also holds for RNNs with multiplicative recurrent cells BID12. We aim to extend this analysis to RNNs with rectifier nonlinearities, which are often used in practice. The main challenge of such an analysis is that the tools used for analyzing multiplicative networks, namely, properties of standard tensor decompositions and ideas from algebraic geometry, cannot be applied in this case, and thus some other approach is required. Our objective is to apply the machinery of generalized tensor decompositions, and show universality and existence of depth efficiency in such RNNs. Tensor methods have a rich history of successful application in machine learning. BID21, in their framework of TensorFaces, proposed to treat facial image data as multidimensional arrays and analyze them with tensor decompositions, which led to a significant boost in face recognition accuracy. BID0 employed higher-order co-occurrence data and tensor factorization techniques to improve on word embedding models. Tensor methods also allow one to produce more accurate and robust recommender systems by taking into account the multifaceted nature of real environments BID6. In recent years a great deal of work has been done on applications of tensor calculus to both theoretical and practical aspects of deep learning algorithms. BID15 represented filters in a convolutional network with the CP decomposition BID11 BID1, which allowed for much faster inference at the cost of a negligible drop in performance. BID19 proposed to use the Tensor Train (TT) decomposition BID20 to compress fully-connected layers of large neural networks while preserving their expressive power. Later on, TT was exploited to reduce the number of parameters and improve the performance of recurrent networks in long-term forecasting BID24 and video classification BID23 problems.
In addition to the practical benefits, tensor decompositions have been used to analyze theoretical aspects of deep neural nets. Prior work investigated a connection between various network architectures and tensor decompositions, which made it possible to compare their expressive power. Specifically, it was shown that the CP and Hierarchical Tucker BID9 decompositions correspond to shallow networks and convolutional networks, respectively. Recently, this analysis was extended by BID12, who showed that the TT decomposition can be represented as a recurrent network with multiplicative connections. This specific form of RNNs was also empirically shown to provide a substantial performance boost over standard RNN models BID22. The first results on the connection between tensor decompositions and neural networks were obtained for rather simple architectures; later on, they were extended in order to analyze more practical deep neural nets. It was shown that the theoretical results can be generalized to a large class of CNNs with ReLU nonlinearities and dilated convolutions BID5, providing valuable insights on how they can be improved. However, there is a missing piece in the whole picture, as the theoretical properties of more complex nonlinear RNNs have yet to be analyzed. In this paper, we elaborate on this problem and present new tools for conducting a theoretical analysis of such RNNs, specifically when rectifier nonlinearities are used. Let us now recall the known results about the connection between tensor decompositions and multiplicative architectures, and then show how they are generalized in order to include networks with ReLU nonlinearities. Suppose that we are given a dataset of objects with a sequential structure, i.e. every object in the dataset can be written as X = (x^(1), x^(2), ..., x^(T)) with x^(t) ∈ R^N (Assumption 1). We also introduce a parametric feature map f_θ: R^N → R^M which essentially preprocesses the data before it is fed into the network. Assumption 1 holds for many types of data, e.g. in the case of natural images we can cut them into rectangular patches which are then arranged into vectors x^(t). A typical choice for the feature map f_θ in this particular case is an affine map followed by a nonlinear activation: f_θ(x) = σ(Ax + b). To draw the connection between tensor decompositions and feature tensors we consider the following score functions (logits): ℓ(X) = ⟨W, Φ(X)⟩, where W ∈ R^{M×M×···×M} is a trainable T-way weight tensor and Φ(X) ∈ R^{M×M×···×M} is a rank 1 feature tensor, defined as Φ(X) = f_θ(x^(1)) ⊗ f_θ(x^(2)) ⊗ ··· ⊗ f_θ(x^(T)), where we have used the operation of outer product ⊗, which is important in tensor calculus. For a tensor A of order N and a tensor B of order M, their outer product C = A ⊗ B is a tensor of order N + M defined as: C_{i_1...i_N j_1...j_M} = A_{i_1...i_N} B_{j_1...j_M}. It is known that this score function possesses the universal approximation property (it can approximate any function with any prescribed precision given sufficiently large M) under mild assumptions on f_θ BID8. Working with the entire weight tensor W is impractical for large M and T, since it requires a number of parameters exponential in T. Thus, we compactly represent it using tensor decompositions, which will further lead to different neural network architectures, referred to as tensor networks BID2. The most basic decomposition is the so-called Canonical (CP) decomposition BID11 BID1, which is defined as follows: W = Σ_{r=1}^{R} v_r^(1) ⊗ v_r^(2) ⊗ ··· ⊗ v_r^(T), where v_r^(t) ∈ R^M, and the minimal value of R such that this decomposition exists is called the canonical rank of the tensor (CP-rank).
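Anticipating the substitution carried out next, a small NumPy sketch (our own naming, not from the paper) can verify that the inner product ⟨W, Φ(X)⟩ with a CP-decomposed W factorizes into a sum over ranks of products of per-step inner products:

```python
import numpy as np

rng = np.random.default_rng(0)
T, M, R = 3, 4, 2
V = [rng.normal(size=(R, M)) for _ in range(T)]    # rows are the v_r^(t)
features = [rng.normal(size=M) for _ in range(T)]  # f_theta(x^(t))

# W = sum_r v_r^(1) (x) v_r^(2) (x) v_r^(3)  (outer products)
W = np.zeros((M,) * T)
for r in range(R):
    W += np.einsum('i,j,k->ijk', V[0][r], V[1][r], V[2][r])

# Phi(X) = f(x^(1)) (x) f(x^(2)) (x) f(x^(3))
Phi = np.einsum('i,j,k->ijk', *features)

dense_score = np.sum(W * Phi)                      # <W, Phi(X)>
factored_score = sum(np.prod([V[t][r] @ features[t] for t in range(T)])
                     for r in range(R))
assert np.isclose(dense_score, factored_score)
```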
By substituting the CP decomposition into the score function we find that ℓ(X) = Σ_{r=1}^{R} ⟨f_θ(x^(1)), v_r^(1)⟩ ⊗ ⟨f_θ(x^(2)), v_r^(2)⟩ ⊗ ··· ⊗ ⟨f_θ(x^(T)), v_r^(T)⟩. In the equation above, the outer products ⊗ are taken between scalars and coincide with ordinary products between two numbers. However, we would like to keep this notation, as it will come in handy later, when we generalize tensor decompositions to include various nonlinearities. TT-decomposition Another tensor decomposition is the Tensor Train (TT) decomposition, which is defined as follows: W = Σ_{r_1=1}^{R_1} ··· Σ_{r_{T−1}=1}^{R_{T−1}} g^(1)_{r_0 r_1} ⊗ g^(2)_{r_1 r_2} ⊗ ··· ⊗ g^(T)_{r_{T−1} r_T}, where g^(t)_{r_{t−1} r_t} ∈ R^M and r_0 = r_T = 1 by definition. If we gather the vectors g^(t)_{r_{t−1} r_t} for all corresponding indices r_{t−1} ∈ {1, ..., R_{t−1}} and r_t ∈ {1, ..., R_t}, we obtain three-dimensional tensors G^(t) ∈ R^{M×R_{t−1}×R_t} (for t = 1 and t = T we get matrices G^(1) ∈ R^{M×R_1} and G^(T) ∈ R^{M×R_{T−1}}). The collection {G^(t)}_{t=1}^{T} is called the TT-cores, and the minimal values of {R_t}_{t=1}^{T−1} such that the decomposition exists are called the TT-ranks. In the case of the TT decomposition, the score function has the following form: ℓ(X) = Σ_{r_1,...,r_{T−1}} Π_{t=1}^{T} ⟨f_θ(x^(t)), g^(t)_{r_{t−1} r_t}⟩. Now we want to show that the score function for the Tensor Train decomposition exhibits a particular recurrent structure, similar to that of an RNN. We define the following hidden states: h^(1)_{r_1} = Σ_{r_0} ⟨f_θ(x^(1)), g^(1)_{r_0 r_1}⟩, and h^(t)_{r_t} = Σ_{r_{t−1}} ⟨f_θ(x^(t)), g^(t)_{r_{t−1} r_t}⟩ h^(t−1)_{r_{t−1}} for t > 1. Such a definition of the hidden states allows for a more compact form of the score function. Lemma 3.1. Under the notation introduced above, the score function can be written as ℓ(X) = h^(T). The proof of Lemma 3.1, as well as the proofs of our main results from Section 5, were moved to Appendix A due to limited space. Note that with the help of the TT-cores we can rewrite the hidden state equation in a more convenient index form: h^(t)_{r_t} = Σ_{m, r_{t−1}} G^(t)_{m r_{t−1} r_t} f_θ(x^(t))_m h^(t−1)_{r_{t−1}}, where the operation of tensor contraction is used. Combining all the weights from G^(t) and f_θ(·) into a single variable Θ^(t)_G, and denoting the composition of the feature map, outer product, and contraction by g: R^N × R^{R_{t−1}} → R^{R_t}, we arrive at the following vector form: h^(t) = g(x^(t), h^(t−1); Θ^(t)_G). In general, the parameters Θ^(t)_G depend on the time step. However, if we set Θ^(1)_G = ··· = Θ^(T)_G = Θ_G, we get the simplified hidden state equation used in standard recurrent architectures: h^(t) = g(x^(t), h^(t−1); Θ_G). Note that this equation is applicable to all hidden states except for the first, h^(1), and the last, h^(T), due to the two-dimensional nature of the corresponding TT-cores. However, we can always pad the input sequence with two auxiliary vectors x^(0) and x^(T+1) to get full compliance with the standard RNN structure. In the previous section we showed that tensor decompositions correspond to neural networks of a specific structure, which are simplified versions of those used in practice, as they contain multiplicative nonlinearities only. One possible way to introduce more practical nonlinearities is to replace the outer product ⊗ with a generalized operator ⊗_ξ, in analogy to kernel methods, where the scalar product is replaced by a nonlinear kernel function. Let ξ: R × R → R be an associative and commutative binary operator (∀x, y, z ∈ R: ξ(ξ(x, y), z) = ξ(x, ξ(y, z)) and ∀x, y ∈ R: ξ(x, y) = ξ(y, x)). Note that this operator easily generalizes to an arbitrary number of operands due to associativity. For a tensor A of order N and a tensor B of order M we define their generalized outer product C = A ⊗_ξ B as the tensor of order N + M with entries given by: C_{i_1...i_N j_1...j_M} = ξ(A_{i_1...i_N}, B_{j_1...j_M}). Now we can replace ⊗ in the decompositions above with ⊗_ξ and get networks with various nonlinearities. For example, if we take ξ(x, y) = max(x, y, 0) we get an RNN with rectifier nonlinearities; if we take ξ(x, y) = ln(e^x + e^y) we get an RNN with softplus nonlinearities; if we take ξ(x, y) = xy we get the simple RNN defined in the previous section.
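A minimal NumPy sketch of this recursion with a pluggable operator ξ (all names are ours; the input matrices C^(t) introduced in the next section are omitted for brevity). Taking ξ(x, y) = xy recovers the multiplicative TT-RNN, while ξ(x, y) = max(x, y, 0) gives the rectifier version; the initial hidden state is set to the unit of ξ, as discussed below.

```python
import numpy as np

def generalized_rnn_score(features, cores, xi, unit):
    """Hidden-state recursion of a generalized RNN:
    h_t[r_t] = sum_{m, r} G_t[m, r, r_t] * xi(f_t[m], h_{t-1}[r]).

    features: list of T vectors f(x^(t)) of length M;
    cores:    list of T arrays of shape (M, R_{t-1}, R_t), R_0 = R_T = 1;
    xi:       associative, commutative binary operator;
    unit:     the unit element u of xi (h^(0) = u).
    """
    h = np.full(1, float(unit))
    for f_t, G_t in zip(features, cores):
        pair = xi(f_t[:, None], h[None, :])   # xi(f_t[m], h[r]), shape (M, R)
        h = np.einsum('mrs,mr->s', G_t, pair)
    return h.item()                           # R_T = 1: the score is a scalar

xi_prod = lambda x, y: x * y                                # unit 1
xi_relu = lambda x, y: np.maximum(np.maximum(x, y), 0.0)    # unit 0

rng = np.random.default_rng(0)
T, M, R = 4, 3, 2
ranks = [1] + [R] * (T - 1) + [1]
cores = [rng.normal(size=(M, ranks[t], ranks[t + 1])) for t in range(T)]
features = [rng.normal(size=M) for _ in range(T)]
print(generalized_rnn_score(features, cores, xi_prod, unit=1.0))
print(generalized_rnn_score(features, cores, xi_relu, unit=0.0))
```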
Concretely, we will analyze the following networks. Generalized shallow network with ξ-nonlinearity • Score function: ℓ(X) = Σ_{r=1}^{R} λ_r [⟨v_r^(1), f_θ(x^(1))⟩ ⊗_ξ ··· ⊗_ξ ⟨v_r^(T), f_θ(x^(T))⟩] • Parameters of the network: Θ = {λ_r, v_r^(t)}. Generalized RNN with ξ-nonlinearity • Score function: h^(t)_{r_t} = Σ_{m, r_{t−1}} G^(t)_{m r_{t−1} r_t} [C^(t) f_θ(x^(t))]_m ⊗_ξ h^(t−1)_{r_{t−1}}, with ℓ(X) = h^(T) • Parameters of the network: Θ = {C^(t), G^(t)}. Note that we have introduced the matrices C^(t) acting on the input states. The purpose of this modification is to obtain the plausible property of generalized shallow networks being able to be represented as generalized RNNs of width 1 (i.e., with all R_i = 1) for an arbitrary nonlinearity ξ. In the case of ξ(x, y) = xy, the matrices C^(t) were not necessary, since they can simply be absorbed by G^(t) via tensor contraction (see Appendix A for further clarification on these points). Initial hidden state Note that generalized RNNs require some choice of the initial hidden state h^(0). We find that it is convenient, both for theoretical analysis and in practice, to initialize h^(0) as the unit of the operator ξ, i.e. an element u such that ξ(x, y, u) = ξ(x, y) for all x, y ∈ R. Henceforth, we will assume that such an element exists (e.g., for ξ(x, y) = max(x, y, 0) we take u = 0, and for ξ(x, y) = xy we take u = 1), and set h^(0) = u. For example, in the multiplicative case above it was implicitly assumed that h^(0) = 1. The introduction of the generalized outer product allows us to investigate RNNs with a wide class of nonlinear activation functions, especially ReLU. While this change looks appealing from the practical viewpoint, it complicates the subsequent theoretical analysis, as the transition from the obtained networks back to tensors is not straightforward. In the discussion above, every tensor network had a corresponding weight tensor W, and we could compare the expressivity of the associated score functions by comparing properties of these tensors, such as their ranks BID12. This method enabled a comprehensive analysis of score functions, as it allows us to calculate and compare their values for all possible input sequences X = (x^(1), ..., x^(T)). Unfortunately, we cannot apply it in the case of generalized tensor networks, as the replacement of the standard outer product ⊗ with its generalized version ⊗_ξ leads to a loss of conformity between tensor networks and weight tensors. Specifically, it is no longer the case that for every generalized tensor network with score function ℓ(X) there exists a weight tensor W such that ℓ(X) = ⟨W, Φ(X)⟩. Also, properties such as universality no longer hold automatically, and we have to prove them separately. Indeed, as was noticed in prior work, shallow networks with ξ(x, y) = max(x, 0) + max(y, 0) no longer have the universal approximation property. In order to conduct a proper theoretical analysis, we adopt the apparatus of so-called grid tensors, first introduced in prior work. Given a set of fixed vectors X = (x^(1), ..., x^(M)) referred to as templates, the grid tensor of X is defined to be the tensor of order T and dimension M in each mode, with entries given by: Γ^ℓ(X)_{i_1 i_2 ... i_T} = ℓ(x^(i_1), x^(i_2), ..., x^(i_T)), where each index i_t can take values from {1, ..., M}, i.e. we evaluate the score function on every possible input assembled from the template vectors {x^(i)}_{i=1}^{M}. To put it simply, we previously considered the equality of score functions represented by a tensor decomposition and a tensor network on the set of all possible input sequences X = (x^(1), ..., x^(T)), x^(t) ∈ R^N; now we restrict this set to an exponentially large but finite grid of sequences consisting of template vectors only.
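The grid tensor can be computed directly from this definition by brute-force enumeration of template sequences. A tiny sketch (our own naming; feasible only for small M and T, since the grid has M^T entries):

```python
import numpy as np
from itertools import product

def grid_tensor(score_fn, template_features, T):
    """Evaluate score_fn on every length-T sequence of template features."""
    M = len(template_features)
    gamma = np.empty((M,) * T)
    for idx in product(range(M), repeat=T):
        gamma[idx] = score_fn([template_features[i] for i in idx])
    return gamma

# Toy score function (sum of first feature entries) on M = 3 templates.
templates = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.5, 0.5])]
gamma = grid_tensor(lambda fs: sum(f[0] for f in fs), templates, T=4)
print(gamma.shape)  # (3, 3, 3, 3)
```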
Define the matrix F ∈ R^{M×M} which holds the values taken by the representation function f_θ: R^N → R^M on the selected templates X: F_{ij} = f_θ(x^(i))_j. Using the matrix F, we note that the grid tensor of a generalized shallow network has the following form (see Appendix A for the derivation): Γ^ℓ(X) = Σ_{r=1}^{R} λ_r (F v_r^(1)) ⊗_ξ (F v_r^(2)) ⊗_ξ ··· ⊗_ξ (F v_r^(T)). The construction of the grid tensor for a generalized RNN is a bit more involved. We find that its grid tensor Γ^ℓ(X) can be computed recursively, similarly to the hidden state in the case of a single input sequence. The exact formulas turned out to be rather cumbersome and we moved them to Appendix A. With grid tensors at hand we are ready to compare the expressive power of generalized RNNs and generalized shallow networks. In the further analysis, we will assume that ξ(x, y) = max(x, y, 0), i.e., we analyze RNNs and shallow networks with the rectifier nonlinearity. However, we need to make two additional assumptions. First of all, similarly to prior work, we fix some templates X such that the values of the score function outside of the grid generated by X are irrelevant for classification, and call them covering templates. It was argued that for image data values of M of order 100 are sufficient (the corresponding covering template vectors may represent Gabor filters). Secondly, we assume that the feature matrix F is invertible, which is a reasonable assumption: in the case of f_θ(x) = σ(Ax + b), for any distinct template vectors X the parameters A and b can be chosen in such a way that the matrix F is invertible. As was discussed in Section 4.2, we can no longer use standard algebraic techniques to verify the universality of tensor based networks. Thus, our first result states that generalized RNNs with ξ(x, y) = max(x, y, 0) are universal, in the sense that any tensor of order T with each mode of size M can be realized as the grid tensor of such an RNN (and similarly of a generalized shallow network). Theorem 5.1 (Universality). Let H ∈ R^{M×M×···×M} be an arbitrary tensor of order T. Then there exist a generalized shallow network and a generalized RNN with rectifier nonlinearity ξ(x, y) = max(x, y, 0) such that the grid tensor of each of the networks coincides with H. The part of Theorem 5.1 which corresponds to generalized shallow networks readily follows from prior work (Claim 4). In order to prove the statement for RNNs, the following two lemmas are used. Lemma 5.1. Given two generalized RNNs with grid tensors Γ^A(X), Γ^B(X), and arbitrary ξ-nonlinearity, there exists a generalized RNN with grid tensor Γ^C(X) satisfying Γ^C(X) = a Γ^A(X) + b Γ^B(X) for any a, b ∈ R. This lemma essentially states that the collection of grid tensors of generalized RNNs with any nonlinearity is closed under taking arbitrary linear combinations. Note that the same clearly holds for generalized shallow networks, because they are linear combinations of rank 1 shallow networks by definition. Lemma 5.2. Let E^(j_1 j_2 ... j_T) be an arbitrary one-hot tensor, defined as E^(j_1 j_2 ... j_T)_{i_1 i_2 ... i_T} = 1 if (i_1, ..., i_T) = (j_1, ..., j_T) and 0 otherwise. Then there exists a generalized RNN with rectifier nonlinearities such that its grid tensor satisfies Γ^ℓ(X) = E^(j_1 j_2 ... j_T). This lemma states that in the special case of the rectifier nonlinearity ξ(x, y) = max(x, y, 0) any basis tensor can be realized by some generalized RNN. Proof of Theorem 5.1. By Lemma 5.2, for each one-hot tensor E^(i_1 i_2 ... i_T) there exists a generalized RNN with rectifier nonlinearities such that its grid tensor coincides with this tensor.
Thus, by Lemma 5.1 we can construct an RNN whose grid tensor is an arbitrary linear combination of the one-hot tensors, i.e., coincides with H. For generalized shallow networks with rectifier nonlinearities, see the proof of Claim 4 in prior work. The same result regarding networks with product nonlinearities, considered in BID12, directly follows from the well-known properties of tensor decompositions (see Appendix A). We see that at least with such nonlinearities as ξ(x, y) = max(x, y, 0) and ξ(x, y) = xy all the networks under consideration are universal and can represent any possible grid tensor. Now let us turn to a discussion of the expressivity of these networks. As was discussed in the introduction, expressivity refers to the ability of some class of networks to represent the same functions as some other class much more compactly. In our case the parameters defining the size of a network are the ranks of the decomposition: in the case of generalized RNNs the ranks determine the size of the hidden state, and in the case of generalized shallow networks the rank determines the width of the network. It was proven in BID12 that ConvNets and RNNs with multiplicative nonlinearities are exponentially more expressive than the equivalent shallow networks: shallow networks of exponentially large width are required to realize the same score functions as computed by these deep architectures. Similarly to the case of ConvNets, we find that the expressivity of generalized RNNs with rectifier nonlinearity holds only partially, as discussed in the following two theorems. For simplicity, we assume that T is even. Theorem 5.2 (Expressivity 1). For every value of R there exists a generalized RNN with ranks ≤ R and rectifier nonlinearity which is exponentially more efficient than shallow networks, i.e., the corresponding grid tensor may be realized only by a shallow network with rectifier nonlinearity of width exponentially large in T. This result states that at least for some subset of generalized RNNs expressivity holds: exponentially wide shallow networks are required to realize the same grid tensor. The proof of the theorem is rather straightforward: we explicitly construct an example of such an RNN which satisfies the following description. Given an arbitrary input sequence X = (x^(1), ..., x^(T)) assembled from the templates, these networks compare pairs of input vectors, outputting 1 in every case except when certain pairs coincide, i.e., they measure pairwise similarity of the input vectors. A precise proof is given in Appendix A. In the case of multiplicative RNNs BID12 almost every network possessed this property. This is not the case, however, for generalized RNNs with rectifier nonlinearities, as stated in Theorem 5.3 (Expressivity 2): for every rank R we can find a set of generalized RNNs of positive measure such that the property of expressivity does not hold. In the numerical experiments in Section 6 and Appendix A we validate whether this can be observed in practice, and find that the probability of obtaining CP-ranks of polynomial size becomes negligible with large T and R. The proof of Theorem 5.3 is provided in Appendix A. Shared case Note that all the RNNs used in practice have shared weights, which allows them to process sequences of arbitrary length. So far in the analysis we have not made such assumptions about RNNs (i.e., G^(1) = ··· = G^(T−1)). By imposing this constraint, we lose the property of universality; however, we believe that the statements of Theorems 5.2 and 5.3 still hold (without requiring that shallow networks also have shared weights). Note that the example constructed in the proof of Theorem 5.3 already has this property, and for Theorem 5.2 we provide numerical evidence in Appendix A.
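For generalized shallow networks, the grid tensor given in the previous section can be evaluated in closed form. A short NumPy sketch under our naming, assuming the form Γ = Σ_r λ_r (F v_r^(1)) ⊗_ξ ··· ⊗_ξ (F v_r^(T)):

```python
import numpy as np

def xi_relu(x, y):
    """xi(x, y) = max(x, y, 0), applied with broadcasting."""
    return np.maximum(np.maximum(x, y), 0.0)

def outer_xi(a, b, xi):
    """Generalized outer product C[i..., j...] = xi(A[i...], B[j...])."""
    return xi(a.reshape(a.shape + (1,) * b.ndim), b)

def shallow_grid_tensor(F, V, lam, xi):
    """Grid tensor of a generalized shallow network.

    F: (M, M) feature matrix on the templates; V: list of T arrays of
    shape (M, R) whose columns are v_r^(t); lam: length-R weights.
    """
    gamma = None
    for r in range(len(lam)):
        g = F @ V[0][:, r]
        for t in range(1, len(V)):
            g = outer_xi(g, F @ V[t][:, r], xi)
        gamma = lam[r] * g if gamma is None else gamma + lam[r] * g
    return gamma

rng = np.random.default_rng(0)
T, M, R = 3, 4, 2
F = rng.normal(size=(M, M))
V = [rng.normal(size=(M, R)) for _ in range(T)]
lam = rng.normal(size=R)
print(shallow_grid_tensor(F, V, lam, xi_relu).shape)  # (M, M, M)
```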
In this section, we study whether our theoretical findings are supported by experimental data. In particular, we investigate whether generalized tensor networks can be used in practical settings, especially in problems typically solved by RNNs (such as natural language processing problems). Secondly, according to Theorem 5.3, for some subset of RNNs the equivalent shallow network may have a low rank. To get a grasp of how strong this effect might be in practice, we numerically compute an estimate for this rank in various settings. Performance For the first experiment, we use two computer vision datasets, MNIST BID16 and CIFAR-10, and the IMDB natural language processing dataset for sentiment analysis BID17. For the first two datasets, we cut natural images into rectangular patches which are then arranged into vectors x^(t) (similar to BID12); for the IMDB dataset the input data already has the desired sequential structure. Figure 2 depicts test accuracy on the IMDB dataset for generalized shallow networks and RNNs with rectifier nonlinearity. We see that a generalized shallow network of much higher rank is required to get a level of performance close to that achievable by a generalized RNN. Due to limited space, we have moved the results of the experiments on the visual datasets to Appendix B. Expressivity For the second experiment we generate a number of generalized RNNs with different values of TT-rank r and calculate a lower bound on the rank of the shallow network necessary to realize the same grid tensor (to estimate the rank we use the same technique as in the proof of Theorem 5.2). FIG4 shows that for different values of R and generalized RNNs of the corresponding rank there exist shallow networks of rank 1 realizing the same grid tensor, which agrees well with Theorem 5.3. This looks discouraging; however, there is also a positive observation. While increasing the rank of generalized RNNs, more and more of the corresponding shallow networks will necessarily have exponentially higher rank. In practice we usually deal with RNNs of R = 10² − 10³ (the dimension of the hidden states), thus we may expect that effectively any function realized by generalized RNNs, besides a negligible set, can be implemented only by exponentially wider shallow networks. The numerical results for the case of shared cores and other nonlinearities are given in Appendix B. In this paper, we sought a more complete picture of the connection between Recurrent Neural Networks and the Tensor Train decomposition, one that involves various nonlinearities applied to the hidden states. We showed how these nonlinearities can be incorporated into network architectures, and provided a complete theoretical analysis of the particular case of the rectifier nonlinearity, elaborating on the points of universality and expressive power. We believe our results will be useful for advancing the theoretical understanding of RNNs. In future work, we would like to extend the theoretical analysis to the architectures most competitive in practice for processing sequential data, such as LSTMs and attention mechanisms. A PROOFS Lemma 3.1. Under the notation introduced above, the score function can be written as ℓ(X) = h^(T). Proof. Grouping the sums over r_1, ..., r_{T−1} and contracting one core at a time, ℓ(X) = Σ_{r_{T−1}} ⟨f_θ(x^(T)), g^(T)_{r_{T−1} r_T}⟩ ··· Σ_{r_1} ⟨f_θ(x^(2)), g^(2)_{r_1 r_2}⟩ h^(1)_{r_1} = ··· = h^(T). Proposition A.1. If we replace the generalized outer product ⊗_ξ with the standard outer product ⊗, we can subsume the matrices C^(t) into the tensors G^(t) without loss of generality. Proof. Let us rewrite the hidden state equation
after transition from ⊗ ξ to ⊗: DISPLAYFORM5 We see that the obtained expression resembles those presented in eq. with TT-cores G (t) replaced byG (t) and thus all the reasoning applied in the absence of matrices C (t) holds valid. Proposition A.2. Grid tensor of generalized shallow network has the following form (eq.): DISPLAYFORM6 denote an arbitrary sequence of templates. Corresponding element of the grid tensor defined in eq. FORMULA1 has the following form: DISPLAYFORM7 Proposition A.3. Grid tensor of a generalized RNN has the following form: DISPLAYFORM8 Proof. Proof is similar to that of Proposition A.2 and uses eq. FORMULA0 to compute the elements of the grid tensor. Lemma 5.1. Given two generalized RNNs with grid tensors Γ A (X), Γ B (X), and arbitrary ξ-nonlinearity, there exists a generalized RNN with grid tensor Γ C (X) satisfying DISPLAYFORM9 Proof. Let these RNNs be defined by the weight parameters DISPLAYFORM10 and DISPLAYFORM11 We claim that the desired grid tensor is given by the RNN with the following weight settings. DISPLAYFORM12 It is straightforward to verify that the network defined by these weights possesses the following property: DISPLAYFORM13, 0 < t < T, and h DISPLAYFORM14 B, concluding the proof. We also note that these formulas generalize the well-known formulas for addition of two tensors in the Tensor Train format .Proposition A.4. For any associative and commutative binary operator ξ, an arbitrary generalized rank 1 shallow network with ξ-nonlinearity can be represented in a form of generalized RNN with unit ranks (R 1 = · · · = R T −1 = 1) and ξ-nonlinearity. DISPLAYFORM15 be the parameters specifying the given generalized shallow network. Then the following weight settings provide the equivalent generalized RNN (with h being the unity of the operator ξ). DISPLAYFORM16 Indeed, in the notation defined above, hidden states of generalized RNN have the following form:Theorem 5.3 (Expressivity 2). For every value of R there exists an open set (which thus has positive measure) of generalized RNNs with rectifier nonlinearity ξ(x, y) = max(x, y, 0), such that for each RNN in this open set the corresponding grid tensor can be realized by a rank 1 shallow network with rectifier nonlinearity. Proof. As before, let us denote by I (p,q) a matrix of size p × q such that I (p,q) ij = δ ij, and by a (p1,p2,...p d) we denote a tensor of size p 1 × · · · × p d with each entry being a (sometimes we will omit the dimensions when they can be inferred from the context). Consider the following weight settings for a generalized RNN. DISPLAYFORM17 The RNN defined by these weights has the property that Γ (X) is a constant tensor with each entry being 2(M R) T −1, which can be trivially represented by a rank 1 generalized shallow network. We will show that this property holds under a small perturbation of C (t), G (t) and F. Let us denote each of these perturbation (and every tensor appearing size of which can be assumed indefinitely small) collectively by ε. Applying eq. FORMULA0 we obtain (with ξ(x, y) = max(x, y, 0)). where we have used a simple property connecting ⊗ ξ with ξ(x, y) = max(x, y, 0) and ordinary ⊗: if for tensors A and B each entry of A is greater than each entry of B, A ⊗ ξ B = A ⊗ 1. The obtained grid tensors can be represented using rank 1 generalized shallow networks with the following weight settings. λ = 1, DISPLAYFORM18 DISPLAYFORM19 ε (2(MR) T−1 + ε), t = 1, 0, t > 1, where F ε is the feature matrix of the corresponding perturbed network. 
In this section we provide additional computational experiments, aimed at a more thorough and complete analysis of generalized RNNs. Different ξ-nonlinearities In this paper we presented a theoretical analysis of the rectifier nonlinearity, which corresponds to ξ(x, y) = max(x, y, 0). However, there are a number of other associative binary operators ξ which can be incorporated in generalized tensor networks. Strictly speaking, every one of them has to be carefully explored theoretically in order to speak about its generality and expressive power, but for now we can compare them empirically. Table 1 shows the performance (accuracy on test data) of different nonlinearities on the MNIST, CIFAR-10, and IMDB datasets for classification. Although these problems are not considered hard to solve, we see that the right choice of nonlinearity can lead to a significant boost in performance. For the experiments on the visual datasets we used T = 16, M = 32, R = 64, and for the experiments on the IMDB dataset we had T = 100, M = 50, R = 50. Parameters of all networks were optimized using Adam (learning rate α = 10^−4) and batch size 250. Expressivity in the case of shared cores We repeat the expressivity experiments from Section 6 in the case of equal TT-cores (G^(1) = ··· = G^(T−1)). We observe that, similarly to the case of different cores, there always exist rank 1 generalized shallow networks which realize the same score function as a generalized RNN of higher rank; however, this situation seems increasingly unlikely for big values of R. Figure 4: Distribution of lower bounds on the rank of generalized shallow networks equivalent to randomly generated generalized RNNs of various ranks (M = 6, T = 6, ξ(x, y) = max(x, y, 0)). Figure 5: Distribution of lower bounds on the rank of generalized shallow networks equivalent to randomly generated generalized RNNs of various ranks (M = 6, T = 6, ξ(x, y) = x² + y²).
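The lower bounds in Figures 4 and 5 can be obtained with the standard matricization argument: the CP-rank of a tensor is bounded below by the matrix rank of any of its matricizations. A minimal sketch (our own naming, assuming the balanced reshaping used with even T):

```python
import numpy as np

def matricization_rank_bound(grid_tensor, T):
    """Lower-bound the CP-rank of an order-T grid tensor by the matrix rank
    of its balanced matricization (rows: first T/2 modes, cols: the rest)."""
    M = grid_tensor.shape[0]
    mat = grid_tensor.reshape(M ** (T // 2), -1)
    return np.linalg.matrix_rank(mat)

# Example on a random grid tensor of order T = 4 with mode size M = 3.
rng = np.random.default_rng(0)
T, M = 4, 3
gamma = rng.normal(size=(M,) * T)
print(matricization_rank_bound(gamma, T))
```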
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
r1gNni0qtm
Analysis of the expressivity and generality of recurrent neural networks with ReLU nonlinearities using the Tensor-Train decomposition.
While deep neural networks have proven to be a powerful tool for many recognition and classification tasks, their stability properties are still not well understood. In the past, image classifiers have been shown to be vulnerable to so-called adversarial attacks, which are created by additively perturbing the correctly classified image. In this paper, we propose the ADef algorithm to construct a different kind of adversarial attack, created by iteratively applying small deformations to the image, found through a gradient descent step. We demonstrate our results on MNIST with convolutional neural networks and on ImageNet with Inception-v3 and ResNet-101. In a first observation in BID25 it was found that deep neural networks exhibit unstable behavior under small perturbations in the input. For the task of image classification this means that two visually indistinguishable images may have very different outputs, resulting in one of them being misclassified even if the other one is correctly classified with high confidence. Since then, a lot of research has been done to investigate this issue through the construction of adversarial examples: given a correctly classified image x, we look for an image y which is visually indistinguishable from x but is misclassified by the network. Typically, the image y is constructed as y = x + r, where r is an adversarial perturbation that is supposed to be small in a suitable sense (normally, with respect to an ℓp norm). Several algorithms have been developed to construct adversarial perturbations, see BID9; BID18; BID14; BID16; BID5 and the review paper BID0. Even though such pathological cases are very unlikely to occur in practice, their existence is relevant since malicious attackers may exploit this drawback to fool classifiers or other automatic systems. Further, adversarial perturbations may be constructed in a black-box setting (i.e., without knowing the architecture of the DNN but only its outputs) BID19 BID17 and also in the physical world BID14 BID1 BID3 BID24. This has motivated the investigation of defenses, i.e., how to make the network invulnerable to such attacks, see BID13; BID4; BID16; BID27; BID28; BID20; BID2; BID12. In most cases, adversarial examples are artificially created and then used to retrain the network, which becomes more stable under these types of perturbations. Most of the work on the construction of adversarial examples and on the design of defense strategies has been conducted in the context of small perturbations r measured in the ℓ∞ norm. However, this is not necessarily a good measure of image similarity: e.g., for two translated images x and y, the norm of x − y is not small in general, even though x and y will look indistinguishable if the translation is small. Several papers have investigated the construction of adversarial perturbations not designed for norm proximity BID21 BID24 BID3 BID6 BID29. In this work, we build on these ideas and investigate the construction of adversarial deformations. In other words, the misclassified image y is not constructed as an additive perturbation y = x + r, but as a deformation y = x ∘ (id + τ), where τ is a vector field defining the transformation. In this case, the similarity is not measured through a norm of y − x, but instead through a norm of τ, which quantifies the deformation between y and x. We develop an efficient algorithm for the construction of adversarial deformations, which we call ADef.
It is based on the main ideas of DeepFool BID18, and iteratively constructs the smallest deformation to misclassify the image. We test the procedure on MNIST (LeCun) (with convolutional neural networks) and on ImageNet (with Inception-v3 BID26 and ResNet-101 BID10). The results show that ADef can successfully fool the classifiers in the vast majority of cases (around 99%) by using very small and imperceptible deformations. We also test our adversarial attacks on adversarially trained networks for MNIST. Our implementation of the algorithm can be found at https://gitlab.math.ethz.ch/tandrig/ADef. The results of this work initially appeared in the master's thesis BID8, to which we refer for additional details on the mathematical aspects of this construction. While writing this paper, we have come across BID29, in which a similar problem is considered and solved with a different algorithm. Whereas in BID29 the authors use a second order solver to find a deforming vector field, we show how a first order method can be formulated efficiently, and we justify a smoothing operation that is independent of the optimization step. We report, for the first time, success rates for adversarial attacks with deformations on ImageNet. The topic of deformations has also come up in BID11, in which the authors introduce a class of learnable modules that deform inputs in order to increase the performance of existing DNNs, and BID7, in which the authors introduce a method to measure the invariance of classifiers to geometric transformations. Let K be a classifier of images consisting of P pixels into L ≥ 2 categories, i.e. a function from the space of images X = R^{cP}, where c = 1 (for grayscale images) or c = 3 (for color images), into the set of labels L = {1, ..., L}. Suppose x ∈ X is an image that is correctly classified by K, and suppose y ∈ X is another image that is imperceptibly different from x and such that K(y) ≠ K(x); then y is said to be an adversarial example. The meaning of imperceptibility varies, but generally, proximity in ℓp-norm (with 1 ≤ p ≤ ∞) is considered to be a sufficient substitute. Thus, an adversarial perturbation for an image x ∈ X is a vector r ∈ X such that K(x + r) ≠ K(x) and ‖r‖_p is small, where ‖r‖_p = (Σ_j |r_j|^p)^{1/p} for 1 ≤ p < ∞ and ‖r‖_∞ = max_j |r_j|. Given such a classifier K and an image x, an adversary may attempt to find an adversarial example y by minimizing ‖x − y‖_p subject to K(y) ≠ K(x), or even subject to K(y) = k for some target label k ≠ K(x). Different methods for finding minimal adversarial perturbations have been proposed, most notably FGSM BID9 and PGD BID16 for ℓ∞, and the DeepFool algorithm BID18 for general ℓp-norms. Instead of perturbing the image additively, we deform it: given a function ξ: [0, 1]² → R^c representing the image and a vector field τ: [0, 1]² → R², the deformed image is ξ_τ(u) = ξ(u + τ(u)), extending ξ by zero outside of [0, 1]². Deformations capture many natural image transformations. For example, a translation of the image ξ by a vector v ∈ R² is a deformation with respect to the constant vector field τ = v. If v is small, the images ξ and ξ_v may look similar, but the corresponding perturbation ρ = ξ_v − ξ may be arbitrarily large in the aforementioned ℓp-norms. Figure 1 shows three minor deformations, all of which yield large L∞-norms. In the discrete setting, deformations are implemented as follows. We consider square images of W × W pixels, so the space of images is X = R^{cW²}. In what follows we only consider the set T of vector fields that do not move points on the grid {1, ..., W}² outside of [1, W]². More precisely, T = {τ: {1, ..., W}² → R² | (s, t) + τ(s, t) ∈ [1, W]² for all s, t}. An image x ∈ X can be viewed as the collection of values of a function ξ: [1, W]² → R^c, i.e., x_{s,t} = ξ(s, t) for s, t = 1, ..., W.
Such a function ξ can be computed by interpolating from x. Thus, the deformation of an image x with respect to the discrete vector field τ can be defined as the discrete deformed image x_τ ∈ X given by (x_τ)_{s,t} = ξ((s, t) + τ(s, t)) for s, t = 1, ..., W. It is not straightforward to measure the size of a deformation such that it captures the visual difference between the original image x and its deformed counterpart x_τ. We will use the size of the corresponding vector field τ in the norm ‖τ‖_T = max_{s,t} |τ(s, t)|₂, i.e. the largest Euclidean displacement of any pixel, as a proxy. The ℓp-norms defined above, adapted to vector fields, can be used as well. (We remark, however, that none of these norms define a distance between x and x_τ, since two different vector fields τ, σ ∈ T may produce the same deformed image x_τ = x_σ.) We will now describe our procedure for finding deformations that will lead a classifier to yield an output different from the original label. Suppose the classifier is given by K(x) = arg max_i F_i(x) for a function F = (F_1, ..., F_L): X → R^L. Let x ∈ X be the image of interest and fix ξ: [1, W]² → R^c obtained by interpolation from x. Let l = K(x) denote the true label of x, let k ∈ L be a target label, and set f = F_k − F_l. We assume that x does not lie on a decision boundary, so that we have f(x) < 0. We define the function g: T → R by g(τ) = f(x_τ). We can use a linear approximation of g around the zero vector field as a guide: g(τ) ≈ g(0) + (D_0 g)(τ) for small enough τ ∈ T, with D_0 g: T → R the derivative of g at τ = 0. Hence, if τ is a vector field such that (D_0 g)(τ) = −g(0) = −f(x), and ‖τ‖_T is small, then the classifier K has approximately equal confidence for the deformed image x_τ to have either label l or k. This is a scalar equation with unknown in T, and as such it has infinitely many solutions. In order to select a τ with small norm, we solve it in the least-squares sense. By applying the chain rule to g(τ) = f(x_τ), we obtain that the derivative at τ = 0 can, with a slight abuse of notation, be identified with the vector field d given by d_{s,t} = ∇f(x)_{s,t} (∇ξ)(s, t), where ∇f(x)_{s,t} ∈ R^{1×c} is the derivative of f in x calculated at (s, t) and (∇ξ)(s, t) is the spatial derivative of the image. With this, (D_0 g)(τ) = Σ_{s,t} d_{s,t} · τ_{s,t}, and the solution in the least-squares sense is given by τ = (−f(x) / Σ_{s,t} |d_{s,t}|²) d. Finally, we define the deformed image x_τ ∈ X as above. One might like to impose some degree of smoothness on the deforming vector field. In fact, it suffices to search in the range of a smoothing operator S: T → T, and this essentially amounts to applying S to the solution from the larger search space, where S denotes the componentwise application of a two-dimensional Gaussian filter ϕ (of any standard deviation). The resulting smoothed vector field τ̃ also satisfies the linearized equation, since S is self-adjoint. We can hence replace τ by τ̃ to obtain a smooth deformation of the image x. We iterate the deformation process until the deformed image is misclassified. More explicitly, let x^(0) = x, and for n ≥ 1 let τ^(n) be the vector field given by the construction above for x^(n−1). Then we define the iteration as x^(n) = (x^(n−1))_{τ^(n)}. The algorithm terminates and outputs an adversarial example y = x^(n) if K(x^(n)) ≠ l. The iteration also terminates if x^(n) lies on a decision boundary of K, in which case we propose to introduce an overshoot factor 1 + η on the total deforming vector field. Provided that the number of iterations is moderate, the total vector field can be well approximated by τ* = τ^(1) + ··· + τ^(n), and the process can be altered to output the deformed image x_{(1+η)τ*} instead. The target label k may be chosen in each iteration to minimize the resulting vector field and obtain a better approximation in the linearization (the choice of candidate labels is made precise below, after the following sketch).
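To make the update step concrete, the following is a minimal NumPy/SciPy sketch for a single-channel image. The function names and the toy call are ours, not from the paper; in a real attack, grad_f and f_val would come from the network via backpropagation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def deform(image, tau):
    """Apply x_tau(s, t) = x((s, t) + tau(s, t)) with bilinear interpolation.

    image: (W, W) grayscale image; tau: (W, W, 2) vector field.
    """
    coords = np.indices(image.shape).astype(float) + tau.transpose(2, 0, 1)
    return map_coordinates(image, coords, order=1, mode='nearest')

def tau_norm(tau):
    """||tau||_T: the largest Euclidean displacement over all pixels."""
    return np.sqrt((tau ** 2).sum(-1)).max()

def adef_step(image, grad_f, f_val, sigma=0.5):
    """One ADef update for a single-channel image (a sketch of the step above).

    grad_f: gradient of f = F_k - F_l w.r.t. the image, shape (W, W);
    f_val:  current value of f(x), negative while x is correctly classified.
    """
    gs, gt = np.gradient(image)                     # spatial gradient of xi
    d = np.stack([grad_f * gs, grad_f * gt], -1)    # D_0 g as a vector field
    tau = (-f_val / (np.sum(d ** 2) + 1e-12)) * d   # least-squares solution
    for i in range(2):                              # componentwise smoothing S
        tau[..., i] = gaussian_filter(tau[..., i], sigma)
    return tau

# Toy usage with random data.
rng = np.random.default_rng(0)
x = rng.random((28, 28))
grad_f = rng.normal(size=(28, 28))
tau = adef_step(x, grad_f, f_val=-1.0)
x_def = deform(x, tau)
print(tau_norm(tau))
```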
For a candidate set of labels k_1, ..., k_m, we compute the corresponding vector fields τ_1, ..., τ_m and select the one with the smallest norm ‖τ_j‖_T. The candidate set consists of the labels corresponding to the indices of the m smallest entries of F − F_l, in absolute value. By the formula for d, provided that ∇f is moderate, the deforming vector field takes small values wherever ξ has a small derivative. This means that the vector field will be concentrated on the edges in the image x (see e.g. the first row of Figure 2). Further, note that the result of a deformation is always a valid image in the sense that it does not violate the pixel value bounds. This is not guaranteed for the perturbations computed with DeepFool. We evaluate the performance of ADef by applying the algorithm to classifiers trained on the MNIST (LeCun) and ImageNet datasets. Below, we briefly describe the setup of the experiments, and in Tables 1 and 2 we summarize their results. MNIST: We train two convolutional neural networks based on architectures that appear in BID16 and BID27, respectively. The network MNIST-A consists of two convolutional layers of sizes 32 × 5 × 5 and 64 × 5 × 5, each followed by 2 × 2 max-pooling and a rectifier activation function, a fully connected layer into dimension 1024 with a rectifier activation function, and a final linear layer with output dimension 10. The network MNIST-B consists of two convolutional layers of sizes 128 × 3 × 3 and 64 × 3 × 3 with a rectifier activation function, a fully connected layer into dimension 128 with a rectifier activation function, and a final linear layer with output dimension 10. During training, the latter convolutional layer and the former fully connected layer of MNIST-B are subject to dropout with drop probabilities 1/4 and 1/2. We use ADef to produce adversarial deformations of the images in the test set. The algorithm is configured to pursue any label different from the correct label (all incorrect labels are candidate labels). It performs smoothing by a Gaussian filter of standard deviation 1/2, uses bilinear interpolation to obtain intermediate pixel intensities, and it overshoots by η = 2/10 whenever it converges to a decision boundary. Figure: An image from the ILSVRC2012 validation set, the output of ADef with a Gaussian filter of standard deviation 1, the corresponding vector field and perturbation. The rightmost image is a close-up of the vector field around the nose of the ape. Second row: a larger deformation of the same image, obtained by using a wider Gaussian filter (standard deviation 6) for smoothing. ImageNet: We apply ADef to pretrained Inception-v3 BID26 and ResNet-101 BID10 models to generate adversarial deformations for the images in the ILSVRC2012 validation set. The images are preprocessed by first scaling so that the smaller axis has 299 pixels for the Inception model and 224 pixels for ResNet, and then center-cropping to a square image. The algorithm is set to focus only on the label of second highest probability. It employs a Gaussian filter of standard deviation 1, bilinear interpolation, and an overshoot factor η = 1/10. We only consider inputs that are correctly classified by the model in question, and, since τ* = τ^(1) + ··· + τ^(n) approximates the total deforming vector field, we declare ADef to be successful if its output is misclassified and ‖τ*‖_T ≤ ε, where we choose ε = 3. Observe that, by the definition above, a deformation with respect to a vector field τ does not displace any pixel further away from its original position than ‖τ‖_T.
Hence, for high resolution images, the choice ε = 3 indeed produces small deformations if the vector fields are smooth. In Appendix A, we illustrate how the success rate of ADef depends on the choice of ε. When searching for an adversarial example, one usually searches for a perturbation with ℓ∞-norm smaller than some small number ε > 0. Common choices of ε range from 1/10 to 3/10 for MNIST classifiers BID9 BID16 BID28 BID27 BID12 and 2/255 to 16/255 for ImageNet classifiers BID9 BID13 BID27 BID12. Table 1 shows that on average, the perturbations obtained by ADef are quite large compared to those constraints. However, as can be seen in FIG0, the relatively high resolution images of the ImageNet dataset can be deformed into adversarial examples that, while corresponding to large perturbations, are not visibly different from the original images. In Appendices B and C, we give more examples of adversarially deformed images. In addition to training MNIST-A and MNIST-B on the original MNIST data, we train independent copies of the networks using the adversarial training procedure described by BID16. That is, before each step of the training process, the input images are adversarially perturbed using the PGD algorithm. This manner of training provides increased robustness against adversarial perturbations of low ℓ∞-norm. Moreover, we train networks using ADef instead of PGD as an adversary. In Table 2 we show the results of attacking these adversarially trained networks, using ADef on the one hand, and PGD on the other. We use the same configuration for ADef as above, and for PGD we use 40 iterations, step size 1/100 and 3/10 as the maximum ℓ∞-norm of the perturbation. Interestingly, using these configurations, the networks trained against PGD attacks are more resistant to adversarial deformations than those trained against ADef. ADef can also be used for targeted adversarial attacks, by restricting the deformed image to have a particular target label instead of any label which yields the optimal deformation. FIG1 demonstrates the effect of choosing different target labels for a given MNIST image, and FIG2 shows the results of targeting the label of lowest probability for an image from the ImageNet dataset. In this work, we proposed a new efficient algorithm, ADef, to construct a new type of adversarial attack for DNN image classifiers. The procedure is iterative, and in each iteration it takes a gradient descent step to deform the previous iterate in order to push it toward a decision boundary. We demonstrated that with almost imperceptible deformations, state-of-the-art classifiers can be fooled into misclassification with a high success rate of ADef. This suggests that networks are vulnerable to different types of attacks and that simply training the network on a specific class of adversarial examples might not form a sufficient defense strategy. Given this vulnerability of neural networks to deformations, we wish to study in future work how ADef can help in designing possible defense strategies. Furthermore, we also showed initial results on fooling adversarially trained networks. Remarkably, PGD-trained networks on MNIST are more resistant to adversarial deformations than ADef-trained networks. However, for this to be more conclusive, similar tests on ImageNet will have to be conducted. We wish to study this in future work. Figure caption: Distributions of the norms ‖τ*‖_T from the MNIST experiments. Deformations that fall to the left of the vertical line at ε = 3 are considered successful.
The networks in the first column were trained using the original MNIST data, and the networks in the second and third columns were adversarially trained using ADef and PGD, respectively. Figures 5 and 6 show the distributions of the norms of the total deforming vector fields, ‖τ*‖_T, from the experiments in Section 3. For networks that have not been adversarially trained, most deformations fall well below the threshold of ε = 3. Of the adversarially trained networks, only MNIST-A trained against PGD is truly robust against ADef. Further, a comparison between the first column of Figure 5 and Figure 6 indicates that ImageNet is much more vulnerable to adversarial deformations than MNIST, especially considering the much higher resolution of the images in ImageNet. Thus, it would be very interesting to study the performance of ADef with adversarially trained networks for ImageNet, as mentioned in the Conclusion. The standard deviation of the Gaussian filter used for smoothing in the update step of ADef has a significant impact on the resulting vector field. To explore this aspect of the algorithm, we repeat the experiment from Section 3 on the Inception-v3 model, using standard deviations σ = 0, 1, 2, 4, 8 (where σ = 0 stands for no smoothing). The results are shown in Table 3, and the effect of varying σ is illustrated in Figures 7 and 8. We observe that as σ increases, the adversarial distortion steadily increases, both in terms of the vector field norm and the perturbation norm. Likewise, the success rate of ADef decreases with larger σ. However, from Figure 8 we see that the constraint ‖τ*‖_T ≤ 3 on the total vector field may provide a rather conservative measure of the effectiveness of ADef in the case of smooth high dimensional vector fields. Figures 9 and 10 show adversarial deformations for the models MNIST-A and MNIST-B, respectively. The attacks are performed using the same configuration as in the experiments in Section 3. Observe that in some cases, features resembling the target class have appeared in the deformed image. For example, the top part of the 4 in the fifth column of Figure 10 has been curved slightly to more closely resemble a 9. Figures 11-15 show additional deformed images resulting from attacking the Inception-v3 model using the same configuration as in the experiments in Section 3. Similarly, Figures 16-20 show deformed images resulting from attacking the ResNet-101 model. However, in order to increase variability in the output labels, we perform a targeted attack, targeting the label of 50th highest probability. (Figure annotation: "Deformed: hartebeest.")
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
Hk4dFjR5K7
We propose a new, efficient algorithm to construct adversarial examples by means of deformations, rather than additive perturbations.
Adversarial learning methods have been proposed for a wide range of applications, but the training of adversarial models can be notoriously unstable. Effectively balancing the performance of the generator and discriminator is critical, since a discriminator that achieves very high accuracy will produce relatively uninformative gradients. In this work, we propose a simple and general technique to constrain information flow in the discriminator by means of an information bottleneck. By enforcing a constraint on the mutual information between the observations and the discriminator's internal representation, we can effectively modulate the discriminator's accuracy and maintain useful and informative gradients. We demonstrate that our proposed variational discriminator bottleneck (VDB) leads to significant improvements across three distinct application areas for adversarial learning algorithms. Our primary evaluation studies the applicability of the VDB to imitation learning of dynamic continuous control skills, such as running. We show that our method can learn such skills directly from raw video demonstrations, substantially outperforming prior adversarial imitation learning methods. The VDB can also be combined with adversarial inverse reinforcement learning to learn parsimonious reward functions that can be transferred and re-optimized in new settings. Finally, we demonstrate that VDB can train GANs more effectively for image generation, improving upon a number of prior stabilization methods. Adversarial learning methods provide a promising approach to modeling distributions over highdimensional data with complex internal correlation structures. These methods generally use a discriminator to supervise the training of a generator in order to produce samples that are indistinguishable from the data. A particular instantiation is generative adversarial networks, which can be used for high-fidelity generation of images BID21 and other highdimensional data BID45 BID46 BID9. Adversarial methods can also be used to learn reward functions in the framework of inverse reinforcement learning BID10 BID12, or to directly imitate demonstrations BID19. However, they suffer from major optimization challenges, one of which is balancing the performance of the generator and discriminator. A discriminator that achieves very high accuracy can produce relatively uninformative gradients, but a weak discriminator can also hamper the generator's ability to learn. These challenges have led to widespread interest in a variety of stabilization methods for adversarial learning algorithms BID24 BID4.In this work, we propose a simple regularization technique for adversarial learning, which constrains the information flow from the inputs to the discriminator using a variational approximation to the information bottleneck. By enforcing a constraint on the mutual information between the input observations and the discriminator's internal representation, we can encourage the discriminator to learn a representation that has heavy overlap between the data and the generator's distribution, thereby effectively modulating the discriminator's accuracy and maintaining useful and informative gradients for the generator. Our approach to stabilizing adversarial learning can be viewed as an adaptive variant of instance noise BID39. However, we show that the adaptive nature of this method is critical. 
Constraining the mutual information between the discriminator's internal representation and the input allows the regularizer to directly limit the discriminator's accuracy, which automates the choice of noise magnitude and applies this noise to a compressed representation of the input that is specifically optimized to model the most discerning differences between the generator and data distributions. The main contribution of this work is the variational discriminator bottleneck (VDB), an adaptive stochastic regularization method for adversarial learning that substantially improves performance across a range of different application domains, examples of which are available in FIG0. Our method can be easily applied to a variety of tasks and architectures. First, we evaluate our method on a suite of challenging imitation tasks, including learning highly acrobatic skills from mocap data with a simulated humanoid character. Our method also enables characters to learn dynamic continuous control skills directly from raw video demonstrations, and drastically improves upon previous work that uses adversarial imitation learning. We further evaluate the effectiveness of the technique for inverse reinforcement learning, which recovers a reward function from demonstrations in order to train future policies. Finally, we apply our framework to image generation using generative adversarial networks, where employing VDB improves the performance in many cases. Recent years have seen an explosion of adversarial learning techniques, spurred by the success of generative adversarial networks (GANs). A GAN framework is commonly composed of a discriminator and a generator, where the discriminator's objective is to classify samples as real or fake, while the generator's objective is to produce samples that fool the discriminator. Similar frameworks have also been proposed for inverse reinforcement learning (IRL) BID11 and imitation learning BID19. The training of adversarial models can be extremely unstable, with one of the most prevalent challenges being balancing the interplay between the discriminator and the generator BID4. The discriminator can often overpower the generator, easily differentiating between real and fake samples, thus providing the generator with uninformative gradients for improvement BID7. Alternative loss functions have been proposed to mitigate this problem BID31 BID47 BID49. Regularizers have been incorporated to improve stability and convergence, such as gradient penalties BID24 BID14, reconstruction loss BID7, and a myriad of other heuristics BID39 BID4. Task-specific architectural designs can also substantially improve performance BID37 BID21. Similarly, our method also aims to regularize the discriminator in order to improve the feedback provided to the generator. But instead of explicit regularization of gradients or architecture-specific constraints, we apply a general information bottleneck to the discriminator, which previous works have shown to encourage networks to ignore irrelevant cues BID0. We hypothesize that this then allows the generator to focus on improving the most discerning differences between real and fake samples. Adversarial techniques have also been applied to inverse reinforcement learning BID12, where a reward function is recovered from demonstrations, which can then be used to train policies to reproduce a desired skill. BID10 showed an equivalence between maximum entropy IRL and GANs.
Similar techniques have been developed for adversarial imitation learning BID19 BID32, where agents learn to imitate demonstrations without explicitly recovering a reward function. One advantage of adversarial methods is that by leveraging a discriminator in place of a reward function, they can be applied to imitate skills where reward functions are difficult to engineer. However, the performance of policies trained through adversarial methods still falls short of those produced by manually designed reward functions, when such reward functions are available BID38 BID36. We show that our method can significantly improve upon previous works that use adversarial techniques, and produces results of comparable quality to those from state-of-the-art approaches that utilize manually engineered reward functions. Our variational discriminator bottleneck is based on the information bottleneck BID44, a technique for regularizing internal representations to minimize the mutual information with the input. Intuitively, a compressed representation can improve generalization by ignoring irrelevant distractors present in the original input. The information bottleneck can be instantiated in practical deep models by leveraging a variational bound and the reparameterization trick, inspired by a similar approach in variational autoencoders (VAE) BID23. The resulting variational information bottleneck approximates this compression effect in deep networks BID1 BID0. A similar bottleneck has also been applied to learn disentangled representations BID17. Building on the success of VAEs and GANs, a number of efforts have been made to combine the two. BID30 used adversarial discriminators during the training of VAEs to encourage the marginal distribution of the latent encoding to be similar to the prior distribution; similar techniques include BID8. Other approaches modeled the generator of a GAN using a VAE. BID47 used an autoencoder instead of a VAE to model the discriminator, but does not enforce an information bottleneck on the encoding. While instance noise is widely used in modern architectures BID39, we show that explicitly enforcing an information bottleneck leads to improved performance over simply adding noise for a variety of applications. In this section, we provide a review of the variational information bottleneck proposed by BID1 in the context of supervised learning. Our variational discriminator bottleneck is based on the same principle, and can be instantiated in the context of GANs, inverse RL, and imitation learning. Given a dataset {x_i, y_i}, with features x_i and labels y_i, the standard maximum likelihood estimate q(y_i|x_i) can be determined according to
min_q E_{x,y∼p(x,y)} [−log q(y|x)].
Unfortunately, this estimate is prone to overfitting, and the resulting model can often exploit idiosyncrasies in the data BID26 BID43. BID1 proposed regularizing the model using an information bottleneck to encourage the model to focus only on the most discriminative features. The bottleneck can be incorporated by first introducing an encoder E(z|x) that maps the features x to a latent distribution over Z, and then enforcing an upper bound I_c on the mutual information I(X, Z) between the encoding and the original features. This results in the following regularized objective J(q, E):
J(q, E) = min_{q,E} E_{x,y∼p(x,y)} [ E_{z∼E(z|x)} [−log q(y|z)] ]  s.t.  I(X, Z) ≤ I_c.
Note that the model q(y|z) now maps samples from the latent distribution z to the label y. The mutual information is defined according to
I(X, Z) = ∫ p(x, z) log ( p(x, z) / (p(x) p(z)) ) dx dz = ∫ p(x) E(z|x) log ( E(z|x) / p(z) ) dx dz,
where p(x) is the distribution given by the dataset.
Computing the marginal distribution p(z) = ∫ E(z|x) p(x) dx can be challenging. Instead, a variational lower bound can be obtained by using an approximation r(z) of the marginal. Since KL[p(z) || r(z)] ≥ 0, we have ∫ p(z) log p(z) dz ≥ ∫ p(z) log r(z) dz, so an upper bound on I(X, Z) can be obtained via the KL divergence:
I(X, Z) ≤ ∫ p(x) KL[E(z|x) || r(z)] dx.
This provides an upper bound J̃(q, E) ≥ J(q, E) on the regularized objective:
J̃(q, E) = min_{q,E} E_{x,y∼p(x,y)} [ E_{z∼E(z|x)} [−log q(y|z)] ]  s.t.  E_{x∼p(x)} [ KL[E(z|x) || r(z)] ] ≤ I_c.
To solve this problem, the constraint can be subsumed into the objective with a coefficient β:
min_{q,E} max_{β≥0} E_{x,y∼p(x,y)} [ E_{z∼E(z|x)} [−log q(y|z)] ] + β ( E_{x∼p(x)} [ KL[E(z|x) || r(z)] ] − I_c ).
BID1 evaluated the method on supervised learning tasks, and showed that models trained with a VIB can be less prone to overfitting and more robust to adversarial examples. To outline our method, we first consider a standard GAN framework consisting of a discriminator D and a generator G, where the goal of the discriminator is to distinguish between samples from the target distribution p*(x) and samples from the generator G(x):
max_G min_D E_{x∼p*(x)} [−log D(x)] + E_{x∼G(x)} [−log(1 − D(x))].
We incorporate a variational information bottleneck by introducing an encoder E into the discriminator that maps a sample x to a stochastic encoding z ∼ E(z|x), and then apply a constraint I_c on the mutual information I(X, Z) between the original features and the encoding. D is then trained to classify samples drawn from the encoder distribution. A schematic illustration of the framework is available in FIG1. The regularized objective J(D, E) for the discriminator is given by
J(D, E) = min_{D,E} E_{x∼p*(x)} [ E_{z∼E(z|x)} [−log D(z)] ] + E_{x∼G(x)} [ E_{z∼E(z|x)} [−log(1 − D(z))] ]  s.t.  E_{x∼p̃(x)} [ KL[E(z|x) || r(z)] ] ≤ I_c,
with p̃ = ½ p* + ½ G being a mixture of the target distribution and the generator. We refer to this regularizer as the variational discriminator bottleneck (VDB). To optimize this objective, we can introduce a Lagrange multiplier β:
J(D, E) = min_{D,E} max_{β≥0} E_{x∼p*(x)} [ E_{z∼E(z|x)} [−log D(z)] ] + E_{x∼G(x)} [ E_{z∼E(z|x)} [−log(1 − D(z))] ] + β ( E_{x∼p̃(x)} [ KL[E(z|x) || r(z)] ] − I_c ).
As we will discuss in Section 4.1 and demonstrate in our experiments, enforcing a specific mutual information budget between x and z is critical for good performance. We therefore adaptively update β via dual gradient descent to enforce a specific constraint I_c on the mutual information:
β ← max( 0, β + α_β ( E_{x∼p̃(x)} [ KL[E(z|x) || r(z)] ] − I_c ) ),
where α_β is the stepsize for the dual variable in dual gradient descent BID5. In practice, we perform only one gradient step on D and E, followed by an update to β. We refer to a GAN that incorporates a VDB as a variational generative adversarial network (VGAN). In our experiments, the prior r(z) = N(0, I) is modeled with a standard Gaussian. The encoder E(z|x) = N(μ_E(x), Σ_E(x)) models a Gaussian distribution in the latent variables Z, with mean μ_E(x) and diagonal covariance matrix Σ_E(x). When computing the KL loss, each batch of data contains an equal number of samples from p*(x) and G(x). We use a simplified objective for the generator:
max_G E_{x∼G(x)} [−log(1 − D(μ_E(x)))],
where the KL penalty is excluded from the generator's objective. Instead of computing the expectation over Z, we found that approximating the expectation by evaluating D at the mean μ_E(x) of the encoder's distribution was sufficient for our tasks. The discriminator is modeled with a single linear unit followed by a sigmoid, D(z) = σ(w_D^T z + b_D).
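To make the training procedure above concrete, the following is a minimal PyTorch-style sketch of one VDB discriminator update with the adaptive β from the dual gradient descent rule; the module and variable names (VDBDiscriminator, i_c, alpha_beta, and so on) are ours for illustration and are not taken from the paper's released implementation.

```python
# Minimal sketch of a VDB discriminator update with adaptive beta.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VDBDiscriminator(nn.Module):
    def __init__(self, in_dim, latent_dim=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)      # mean of E(z|x)
        self.logvar = nn.Linear(256, latent_dim)  # log-variance of E(z|x)
        self.head = nn.Linear(latent_dim, 1)      # single linear unit (+ sigmoid via logits)

    def forward(self, x):
        h = self.trunk(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        return self.head(z), mu, logvar

def kl_to_standard_normal(mu, logvar):
    # KL[N(mu, sigma^2 I) || N(0, I)], averaged over the batch
    return 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(dim=1).mean()

def vdb_discriminator_step(disc, opt, beta, x_real, x_fake, i_c=0.5, alpha_beta=1e-5):
    logit_real, mu_r, lv_r = disc(x_real)
    logit_fake, mu_f, lv_f = disc(x_fake)
    bce = F.binary_cross_entropy_with_logits
    gan_loss = bce(logit_real, torch.ones_like(logit_real)) + \
               bce(logit_fake, torch.zeros_like(logit_fake))
    # KL under the mixture: equal numbers of real and fake samples per batch
    kl = 0.5 * (kl_to_standard_normal(mu_r, lv_r) + kl_to_standard_normal(mu_f, lv_f))
    loss = gan_loss + beta * (kl - i_c)
    opt.zero_grad(); loss.backward(); opt.step()
    # Dual gradient ascent on beta, clipped at zero
    return max(0.0, beta + alpha_beta * (kl.item() - i_c))
```

In an actual training loop, this step would alternate with a generator update using the simplified objective above, evaluated at the encoder mean μ_E(x).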
To interpret the effects of the VDB, we consider prior results which show that for two distributions with disjoint support, the optimal discriminator can perfectly classify all samples and its gradients will be zero almost everywhere. Thus, as the discriminator converges to the optimum, the gradients for the generator vanish accordingly. To address this issue, earlier work proposed applying continuous noise to the discriminator inputs, thereby ensuring that the distributions have continuous support everywhere. In practice, if the original distributions are sufficiently distant from each other, the added noise will have negligible effects. As that analysis shows, the optimal choice for the variance of the noise to ensure convergence can be quite delicate. In our method, by first using a learned encoder to map the inputs to an embedding and then applying an information bottleneck on the embedding, we can dynamically adjust the variance of the noise such that the distributions not only share support in the embedding space, but also have significant overlap. Since the minimum amount of information required for binary classification is 1 bit, by selecting an information constraint I_c < 1, the discriminator is prevented from perfectly differentiating between the distributions. To illustrate the effects of the VDB, we consider a simple task of training a discriminator to differentiate between two Gaussian distributions. FIG1 visualizes the decision boundaries learned with different bounds I_c on the mutual information. Without a VDB, the discriminator learns a sharp decision boundary, resulting in vanishing gradients for much of the space. But as I_c decreases and the bound tightens, the decision boundary is smoothed, providing more informative gradients that can be leveraged by the generator. Taking this analysis further, we can extend Theorem 3.2 of that prior analysis to the VDB, and show that the gradient of the generator will be non-degenerate for a small enough constraint I_c, under some additional simplifying assumptions. The earlier result states that the gradient consists of vectors that point toward samples on the data manifold, multiplied by coefficients that depend on the noise. However, these coefficients may be arbitrarily small if the generated samples are far from real samples, and the noise is not large enough. This can still cause the generator gradient to vanish. In the case of the VDB, the constraint ensures that these coefficients are always bounded below. Due to space constraints, this result is presented in Appendix A. To extend the VDB to imitation learning, we start with the generative adversarial imitation learning (GAIL) framework BID19, where the discriminator's objective is to differentiate between the state distribution induced by a target policy π*(s) and the state distribution of the agent's policy π(s):
max_π min_D E_{s∼π*(s)} [−log D(s)] + E_{s∼π(s)} [−log(1 − D(s))].
(Figure 3: Simulated humanoid performing various skills. VAIL is able to closely imitate a broad range of skills from mocap data.) The discriminator is trained to maximize the likelihood assigned to states from the target policy, while minimizing the likelihood assigned to states from the agent's policy. The discriminator also serves as the reward function for the agent, which encourages the policy to visit states that, to the discriminator, appear indistinguishable from the demonstrations. Similar to the GAN framework, we can incorporate a VDB into the discriminator:
J(D, E) = min_{D,E} max_{β≥0} E_{s∼π*(s)} [ E_{z∼E(z|s)} [−log D(z)] ] + E_{s∼π(s)} [ E_{z∼E(z|s)} [−log(1 − D(z))] ] + β ( E_{s∼π̃(s)} [ KL[E(z|s) || r(z)] ] − I_c ),
where π̃ = ½ π* + ½ π represents a mixture of the target policy and the agent's policy. The reward for π is then specified by the discriminator, r_t = −log(1 − D(μ_E(s))). We refer to this method as variational adversarial imitation learning (VAIL). The VDB can also be applied to adversarial inverse reinforcement learning BID12 to yield a new algorithm which we call variational adversarial inverse reinforcement learning (VAIRL). AIRL operates in a similar manner to GAIL, but with a discriminator of the form
D(s, a, s′) = exp(f(s, a, s′)) / ( exp(f(s, a, s′)) + π(a|s) ),
where f(s, a, s′) = g(s, a) + γ h(s′) − h(s), with g and h being learned functions.
Under certain restrictions on the environment, Fu et al. show that if g(s, a) is defined to depend only on the current state s, the optimal g(s) recovers the expert's true reward function r*(s) up to a constant, g*(s) = r*(s) + const. In this case, the learned reward can be re-used to train policies in environments with different dynamics, and will yield the same policy as if the policy were trained under the expert's true reward. In contrast, GAIL's discriminator typically cannot be re-optimized in this way BID12. In VAIRL, we introduce stochastic encoders E_g(z_g|s) and E_h(z_h|s), and g(z_g) and h(z_h) are modified to be functions of the encodings. We can reformulate Equation 13 as
D(s, a, s′) = exp(f(z_g, z_h, z′_h)) / ( exp(f(z_g, z_h, z′_h)) + π(a|s) ),  f(z_g, z_h, z′_h) = g(z_g) + γ h(z′_h) − h(z_h),
with z_g ∼ E_g(z_g|s), z_h ∼ E_h(z_h|s), and z′_h ∼ E_h(z′_h|s′). We then obtain a modified objective of the form
J(D, E) = min_{D,E} max_{β≥0} E_{s,s′∼π*(s,s′)} [ E_{z∼E(z|s,s′)} [−log D(z)] ] + E_{s,s′∼π(s,s′)} [ E_{z∼E(z|s,s′)} [−log(1 − D(z))] ] + β ( E_{s,s′∼π̃(s,s′)} [ KL[E(z|s,s′) || r(z)] ] − I_c ),
where π(s, s′) denotes the joint distribution of successive states from a policy, and E(z|s, s′) = E_g(z_g|s) E_h(z_h|s) E_h(z′_h|s′). (Figure 4: Learning curves comparing VAIL to other methods for motion imitation. Performance is measured using the average joint rotation error between the simulated character and the reference motion. Each method is evaluated with 3 random seeds.) We evaluate our method on adversarial learning problems in imitation learning, inverse reinforcement learning, and image generation. In the case of imitation learning, we show that the VDB enables agents to learn complex motion skills from a single demonstration, including visual demonstrations provided in the form of video clips. We also show that the VDB improves the performance of inverse RL methods. Inverse RL aims to reconstruct a reward function from a set of demonstrations, which can then be used to perform the task in new environments, in contrast to imitation learning, which aims to recover a policy directly. Our method is also not limited to control tasks, and we demonstrate its effectiveness for unconditional image generation. The goal of the motion imitation tasks is to train a simulated character to mimic demonstrations provided by mocap clips recorded from human actors. Each mocap clip provides a sequence of target states {s*_0, s*_1, ..., s*_T} that the character should track at each timestep. We use a similar experimental setup as BID36, with a 34 degrees-of-freedom humanoid character. We found that the discriminator architecture can greatly affect the performance on complex skills. The particular architecture we employ differs substantially from those used in prior work BID32, details of which are available in Appendix C. The encoding Z is 128D and an information constraint of I_c = 0.5 is applied for all skills, with a dual stepsize of α_β = 10^−5. All policies are trained using PPO. The motions learned by the policies are best seen in the supplementary video. Snapshots of the character's motions are shown in Figure 3. Each skill is learned from a single demonstration. VAIL is able to closely reproduce a variety of skills, including those that involve highly dynamic flips and complex contacts. We compare VAIL to a number of other techniques, including state-only GAIL BID19, GAIL with instance noise applied to the discriminator inputs (GAIL - noise), GAIL with instance noise applied to the last hidden layer (GAIL - noise z), and GAIL with a gradient penalty applied to the discriminator (GAIL - GP). Since the VDB helps to prevent vanishing gradients, while GP mitigates exploding gradients, the two techniques can be seen as complementary. Therefore, we also train a model that combines both VAIL and GP (VAIL - GP).
Implementation details for combining the VDB and GP are available in Appendix B. Learning curves for the various methods are shown in FIG0, and Table 1 summarizes the performance of the final policies. Performance is measured in terms of the average joint rotation error between the simulated character and the reference motion. We also include a reimplementation of the method described by BID32. For the purpose of our experiments, GAIL denotes policies trained using our particular architecture but without a VDB, and BID32 denotes policies trained using an architecture that closely mirrors those from previous work. Furthermore, we include comparisons to policies trained using the handcrafted reward from BID36, as well as policies trained via behavioral cloning (BC). Since mocap data does not provide expert actions, we use the policies from BID36 as oracles to provide state-action demonstrations, which are then used to train the BC policies via supervised learning. Each BC policy is trained with 10k samples from the oracle policies, while all other policies are trained from just a single demonstration, the equivalent of approximately 100 samples. VAIL consistently outperforms previous adversarial methods, and VAIL - GP achieves the best performance overall. Simply adding instance noise to the inputs BID39 or to the hidden layer without the KL constraint leads to worse performance, since the network can learn a latent representation that renders the effects of the noise negligible. Though training with the handcrafted reward still outperforms the adversarial methods, VAIL demonstrates comparable performance to the handcrafted reward without manual reward or feature engineering, and produces motions that closely resemble the original demonstrations. The method from BID32 was able to imitate simple skills such as running, but was unable to reproduce more acrobatic skills such as the backflip and spinkick. In the case of running, our implementation produces more natural gaits than those reported in BID32. Behavioral cloning is unable to reproduce any of the skills, despite being provided with substantially more demonstration data than the other methods. Video Imitation: While our method achieves substantially better results on motion imitation when compared to prior work, previous methods can still produce reasonable behaviors. However, if the demonstrations are provided in terms of the raw pixels from video clips, instead of mocap data, the imitation task becomes substantially harder. The goal of the agent is therefore to directly imitate the skill depicted in the video. (Figure 7: Left: C-Maze and S-Maze. When trained on the training maze on the left, AIRL learns a reward that overfits to the training task, and which cannot be transferred to the mirrored maze on the right. In contrast, VAIRL learns a smoother reward function that enables more-reliable transfer. Right: Performance on flipped test versions of our two training mazes. We report mean return (± std. dev.) over five runs, and the mean return for the expert used to generate demonstrations.) This is also a setting where manually engineering rewards is impractical, since simple losses like pixel distance do not provide a semantically meaningful measure of similarity.
FIG4 compares learning curves of policies trained with VAIL, GAIL, and policies trained using a reward function defined by the average pixel-wise difference between the frame M*_t from the video demonstration and a rendered image M_t of the agent at each timestep t. Each frame is represented by a 64 × 64 RGB image. Both GAIL and the pixel-loss are unable to learn the running gait. VAIL is the only method that successfully learns to imitate the skill from the video demonstration. Snapshots of the video demonstration and the simulated motion are available in FIG3. To further investigate the effects of the VDB, we visualize the gradient of the discriminator with respect to images from the video demonstration and simulation. Saliency maps for discriminators trained with VAIL and GAIL are available in FIG3. The VAIL discriminator learns to attend to spatially coherent image patches around the character, while the GAIL discriminator exhibits less structure. The magnitude of the gradients from VAIL also tends to be significantly larger than those from GAIL, which may suggest that VAIL is able to mitigate the problem of vanishing gradients present in GAIL. Adaptive Constraint: To evaluate the effects of the adaptive β updates, we compare policies trained with different fixed values of β and policies where β is updated adaptively to enforce a desired information constraint I_c = 0.5. FIG4 illustrates the learning curves and the KL loss over the course of training. When β is too small, performance reverts to that achieved by GAIL. Large values of β help to smooth the discriminator landscape and improve learning speed during the early stages of training, but converge to worse performance. Policies trained using dual gradient descent to adaptively update β consistently achieve the best performance overall. Next, we use VAIRL to recover reward functions from demonstrations. Unlike the discriminator learned by VAIL, the reward function recovered by VAIRL can be re-optimized to train new policies from scratch in the same environment. In some cases, it can also be used to transfer similar behaviour to different environments. In Figure 7, we show the results of applying VAIRL to the C-maze from BID12, and a more complex S-maze; the simple 2D observation spaces of these tasks make it easy to interpret the recovered reward functions. In both mazes, the expert is trained to navigate from a start position at the bottom of the maze to a fixed target position at the top. We use each method to obtain an imitation policy and to approximate the expert's reward on the original maze. The recovered reward is then used to train a new policy to solve a left-right flipped version of the training maze. On the C-maze, we found that plain AIRL (without a gradient penalty) would sometimes overfit and fail to transfer to the new environment, as evidenced by the reward visualization in Figure 7 (left) and the higher return variance in Figure 7 (right). In contrast, by incorporating a VDB into AIRL, VAIRL learns a substantially smoother reward function that is more suitable for transfer. Furthermore, we found that in the S-maze with two internal walls, AIRL was too unstable to acquire a meaningful reward function. This was true even with the use of a gradient penalty. In contrast, VAIRL was able to learn a reasonable reward in most cases without a gradient penalty, and its performance improved even further with the addition of a gradient penalty.
To evaluate the effects of the VDB, we observe that the performance of VAIRL drops on both tasks when the KL constraint is disabled (β = 0), suggesting that the improvements from the VDB cannot be attributed entirely to the noise introduced by the sampling process for z. Further details of these experiments and illustrations of the recovered reward functions are available in Appendix D. Finally, we apply the VDB to image generation with generative adversarial networks, which we refer to as VGAN. Experiments are conducted on the CIFAR-10 (Krizhevsky et al.), CelebA BID28, and CelebA-HQ BID22 datasets. We compare our approach to recent stabilization techniques: WGAN-GP, instance noise, spectral normalization (SN), and gradient penalty (GP), as well as the original GAN on CIFAR-10. To measure performance, we report the Fréchet Inception Distance (FID) BID16, which has been shown to be more consistent with human evaluation. All methods are implemented using the same base model, built on a standard resnet architecture. Aside from tuning the KL constraint I_c for VGAN, no additional hyperparameter optimization was performed to modify the default settings. The performance of the various methods on CIFAR-10 is shown in FIG5. While vanilla GAN and instance noise are prone to diverging as training progresses, VGAN remains stable. Note that instance noise can be seen as a non-adaptive version of VGAN without constraints on I_c. This experiment again highlights that there is a significant improvement from imposing the information bottleneck over simply adding instance noise. Combining both VDB and gradient penalty (VGAN - GP) achieves the best performance overall with an FID of 18.1. We also experimented with combining the VDB with SN, but this combination is prone to diverging. See FIG6 for samples of images generated with our approach. Please refer to Appendix E for experimental details and more results. We present the variational discriminator bottleneck, a general regularization technique for adversarial learning. Our experiments show that the VDB is broadly applicable to a variety of domains, and yields significant improvements over previous techniques on a number of challenging tasks. While our experiments have produced promising results for video imitation, those results have been primarily with videos of synthetic scenes. We believe that extending the technique to imitating real-world videos is an exciting direction. Another exciting direction for future work is a more in-depth theoretical analysis of the method, to derive convergence and stability results or conditions. In this appendix, we show that the gradient of the generator when the discriminator is augmented with the VDB is non-degenerate, under some mild additional assumptions. First, we assume a pointwise constraint of the form KL[E(z|x) || r(z)] ≤ I_c for all x. In reality, we use an average KL constraint, since we found it to be more convenient to optimize, though a pointwise constraint is also possible to enforce by using the largest constraint violation to increment β. We could likely also extend the analysis to the average constraint, though we leave this to future work. The main theorem can then be stated as follows: Theorem A.1. Let g(u) denote the generator's mapping from a noise vector u ∼ p(u) to a point in X.
Given the generator distribution G(x) and data distribution p*(x), a VDB with an encoder E(z|x) = N(μ_E(x), Σ), and KL[E(z|x) || r(z)] ≤ I_c, the gradient passed to the generator, E_{u∼p(u)} [ ∇_θ log(1 − D*(μ_E(g_θ(u)))) ], has the form
E_{u∼p(u)} [ a(u) ∫ E(μ_E(g(u))|x) ∇_θ ||μ_E(g(u)) − μ_E(x)||² dp*(x) − b(u) ∫ E(μ_E(g(u))|x) ∇_θ ||μ_E(g(u)) − μ_E(x)||² dG(x) ],
where D*(z) is the optimal discriminator, a(·) and b(·) are positive functions, and we always have E(μ_E(g(u))|x) > C(I_c), where C(I_c) is a continuous monotonic function, and C(I_c) → δ > 0 as I_c → 0. Analysis for an encoder with an input-dependent variance Σ(x) is also possible, but more involved. We'll further assume below for notational simplicity that Σ is diagonal with diagonal values σ². This assumption is not required, but substantially simplifies the linear algebra. Analogously to the earlier theorem on GANs trained with instance noise, this theorem states that the gradient of the generator points in the direction of points in the data distribution, and away from points in the generator distribution. However, going beyond that earlier theorem, it states that the coefficients on these vectors, given by E(μ_E(g(u))|x), are always bounded below by a value that approaches a positive constant δ as we decrease I_c, meaning that the gradient does not vanish. The proof of the first part of this theorem is essentially identical to the earlier proof, but accounts for the fact that the noise is now injected into the latent space of the VDB, rather than being added directly to x. This assumes that E(z|x) has a learned but input-independent variance Σ = σ²I, though the proof can be repeated for an input-dependent or non-diagonal Σ. Proof. Overloading p*(x) and G(x), let p*(z) and G(z) be the distributions of embeddings z under the real data and generator respectively. p*(z) is then given by
p*(z) = ∫ E(z|x) dp*(x),
and similarly for G(z),
G(z) = ∫ E(z|x) dG(x).
The optimal discriminator between p*(z) and G(z) is
D*(z) = p*(z) / ( p*(z) + G(z) ).
The gradient passed to the generator then has the form
E_{u∼p(u)} [ ∇_θ log(1 − D*(μ_E(g_θ(u)))) ].
Expanding D* and using the Gaussian form of E(z|x), we then have
∇_θ log(1 − D*(μ_E(g(u)))) = a(u) ∫ E(μ_E(g(u))|x) ∇_θ ||μ_E(g(u)) − μ_E(x)||² dp*(x) − b(u) ∫ E(μ_E(g(u))|x) ∇_θ ||μ_E(g(u)) − μ_E(x)||² dG(x),
with positive coefficients a(u) and b(u). Similar to the results from the earlier analysis, the gradient of the generator drives the generator's samples in the embedding space μ_E(g(u)) towards embeddings of the points from the dataset μ_E(x), weighted by their likelihood E(μ_E(g(u))|x) under the real data. For an arbitrary encoder E, real and fake samples in the embedding may be far apart. As such, the coefficients E(μ_E(g(u))|x) can be arbitrarily small, thereby resulting in vanishing gradients for the generator. The second part of the theorem states that C(I_c) is a continuous monotonic function, and C(I_c) → δ > 0 as I_c → 0. This is the main result, and relies on the fact that KL[E(z|x) || r(z)] ≤ I_c. The intuition behind this is that, for any two inputs x and y, their encoded distributions E(z|x) and E(z|y) have means that cannot be more than some distance apart, and that distance shrinks with I_c. This allows us to bound E(μ_E(y)|x) below by C(I_c), which ensures that the coefficients on the vectors in the theorem above are always at least as large as C(I_c). Proof. Let r(z) = N(0, I) be the prior distribution and suppose the KL divergences for all x in the dataset and all g(u) generated by the generator are bounded by I_c:
KL[E(z|x) || r(z)] ≤ I_c.
From the definition of the KL divergence we can bound the length of all embedding vectors,
||μ_E(x)||² ≤ 2 I_c − K (σ² − 1 − log σ²) ≤ 2 I_c,
and similarly for ||μ_E(g(u))||², with K denoting the dimension of Z.
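The bound above follows from the closed form of the KL divergence between the encoder's Gaussian and the standard normal prior; for completeness, this standard identity reads

```latex
\mathrm{KL}\!\left[\,\mathcal{N}(\mu_E(x), \sigma^2 I)\;\middle\|\;\mathcal{N}(0, I)\,\right]
= \frac{1}{2}\left(\lVert\mu_E(x)\rVert^2 + K\sigma^2 - K - K\log\sigma^2\right) \le I_c,
```

and since σ² − 1 − log σ² ≥ 0 for all σ² > 0, the constraint directly yields ||μ_E(x)||² ≤ 2 I_c.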
A lower bound on E(μ_E(g(u))|x), where u ∼ p(u) and x ∼ p*(x), can then be determined by
E(μ_E(g(u))|x) = (2πσ²)^(−K/2) exp( −||μ_E(g(u)) − μ_E(x)||² / (2σ²) ).
Since ||μ_E(g(u)) − μ_E(x)|| ≤ ||μ_E(g(u))|| + ||μ_E(x)|| ≤ 2 √(2 I_c), it follows that ||μ_E(g(u)) − μ_E(x)||² ≤ 8 I_c. The likelihood is therefore bounded below by
E(μ_E(g(u))|x) ≥ (2πσ²)^(−K/2) exp( −4 I_c / σ² ).
From the KL constraint, we can derive a lower bound ℓ(I_c) and an upper bound U(I_c) on σ²: since ½ K (σ² − 1 − log σ²) ≤ KL[E(z|x) || r(z)] ≤ I_c, σ² must lie in an interval [ℓ(I_c), U(I_c)] around 1, whose endpoints are the two solutions of σ² − 1 − log σ² = 2 I_c / K. Substituting ℓ(I_c) and U(I_c) into Equation 14, we arrive at the following lower bound:
E(μ_E(g(u))|x) ≥ C(I_c) = (2π U(I_c))^(−K/2) exp( −4 I_c / ℓ(I_c) ).
To combine the VDB with a gradient penalty, we use the reparameterization trick to backprop through the encoder when computing the gradient of the discriminator with respect to the inputs:
min_{D,E} J(D, E) + w_GP E_{x∼p*(x)} [ E_{z∼E(z|x)} [ ||∇_x D(z)||² ] ].
The coefficient w_GP weights the gradient penalty in the objective; w_GP = 10 for image generation, w_GP = 1 for motion imitation, and w_GP = 0.1 (C-maze) or w_GP = 0.01 (S-maze) for the IRL tasks. The gradient penalty is applied only to real samples p*(x). We have experimented with applying the penalty to both real and fake samples, but found that performance was worse than penalizing only gradients from real samples. This is consistent with prior GP implementations. Experimental Setup: The goal of the motion imitation tasks is to train a simulated agent to mimic a demonstration provided in the form of a mocap clip recorded from a human actor. We use a similar experimental setup as BID36, with a 34 degrees-of-freedom humanoid character. The state s consists of features that represent the configuration of the character's body (link positions and velocities). We also include a phase variable φ ∈ [0, 1] among the state features, which records the character's progress along the motion and helps to synchronize the character with the reference motion, with 0 and 1 denoting the start and end of the motion respectively. The action a sampled from the policy π(a|s) specifies target poses for PD controllers positioned at each joint. Given a state, the policy specifies a Gaussian distribution over the action space, π(a|s) = N(μ(s), Σ), with a state-dependent mean μ(s) and fixed diagonal covariance matrix Σ. μ(s) is modeled using a 3-layered fully-connected network with 1024 and 512 hidden units, followed by a linear output layer that specifies the mean of the Gaussian. ReLU activations are used for all hidden layers. The value function is modeled with a similar architecture but with a single linear output unit. The policy is queried at 30Hz. Physics simulation is performed at 1.2kHz using the Bullet physics engine. Given the rewards from the discriminator, PPO is used to train the policy, with a stepsize of 2.5 × 10^−6 for the policy, a stepsize of 0.01 for the value function, and a stepsize of 10^−5 for the discriminator. Gradient descent with momentum 0.9 is used for all models. The PPO clipping threshold is set to 0.2. When evaluating the performance of the policies, each episode is simulated for a maximum horizon of 20s. Early termination is triggered whenever the character's torso contacts the ground, leaving the policy with the maximum error of π radians for all remaining timesteps. Phase-Functioned Discriminator: Unlike the policy and value function, which are modeled with standard fully-connected networks, the discriminator is modeled by a phase-functioned neural network (PFNN) to explicitly model the time-dependency of the reference motion BID20. While the parameters of a network are generally fixed, the parameters of a PFNN are functions of the phase variable φ.
The parameters θ of the network for a given φ are determined by a weighted combination of a set of fixed parameters {θ_0, θ_1, ..., θ_k}:
θ(φ) = Σ_{i=0..k} w_i(φ) θ_i,
where w_i(φ) is a phase-dependent weight for θ_i. In our implementation, we use k = 5 sets of parameters, and w_i(φ) is designed to linearly interpolate between two adjacent sets of parameters for each phase φ, where each set of parameters θ_i corresponds to a discrete phase value φ_i, spaced uniformly between 0 and 1. (FIG0: Learning curves comparing VAIL to other methods for motion imitation. Performance is measured using the average joint rotation error between the simulated character and the reference motion. Each method is evaluated with 3 random seeds.) (Figure 11: Learning curves comparing VAIL with a discriminator modeled by a phase-functioned neural network (PFNN), to modeling the discriminator with a fully-connected network that receives the phase variable φ as part of the input (no PFNN), and a discriminator modeled with a fully-connected network that does not receive φ as an input (no phase).) For a given value of φ, the parameters of the discriminator are determined according to
θ(φ) = (1 − λ) θ_i + λ θ_{i+1},  λ = (φ − φ_i) / (φ_{i+1} − φ_i),
where θ_i and θ_{i+1} correspond to the phase values φ_i ≤ φ < φ_{i+1} that form the endpoints of the phase interval containing φ. A PFNN is used for all motion imitation experiments, both VAIL and GAIL, except for those that use the approach proposed by BID32, which use standard fully-connected networks for the discriminator. FIG0 compares the performance of VAIL when the discriminator is modeled with a phase-functioned neural network (with PFNN) to discriminators modeled with standard fully-connected networks. We increased the size of the layers of the fully-connected nets to have a similar number of parameters as a PFNN. We evaluate the performance of fully-connected nets that receive the phase variable φ as part of the input (no PFNN), and fully-connected nets that do not receive φ as an input (no phase). The phase-functioned discriminator leads to significant performance improvements across all tasks evaluated. Policies trained without a phase variable perform worst overall, suggesting that phase information is critical for performance. All methods perform well on simpler skills, such as running, but the additional phase structure introduced by the PFNN proved to be vital for successful imitation of more complex skills, such as the dance and backflip. Next we compare the accuracy of discriminators trained using different methods. FIG0 illustrates the accuracy of the discriminators over the course of training. Discriminators trained via GAIL quickly overpower the policy, and learn to accurately differentiate between samples, even when instance noise is applied to the inputs. VAIL without the KL constraint slows the discriminator's progress, but it nonetheless reaches near perfect accuracy given a larger number of samples. Once the KL constraint is enforced, the information bottleneck constrains the performance of the discriminator, converging to approximately 80% accuracy. FIG0 also visualizes the value of β over the course of training for the motion imitation tasks, along with the KL loss term in the objective. The dual gradient descent update effectively enforces the VDB constraint I_c.
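As a concrete illustration of the phase-functioned parameter blending defined above, here is a minimal sketch; the function and variable names are ours, and the handling of the endpoints is one reasonable choice rather than the paper's exact implementation.

```python
# Sketch of phase-functioned parameter blending: k + 1 fixed parameter sets
# at uniformly spaced phases are linearly interpolated at phase phi in [0, 1].
import numpy as np

def pfnn_params(theta_sets, phi):
    """theta_sets: list of k + 1 parameter arrays; phi: phase in [0, 1]."""
    k = len(theta_sets) - 1
    phases = np.linspace(0.0, 1.0, k + 1)     # discrete phase values phi_i
    i = min(int(phi * k), k - 1)              # interval with phi_i <= phi < phi_{i+1}
    lam = (phi - phases[i]) / (phases[i + 1] - phases[i])
    return (1.0 - lam) * theta_sets[i] + lam * theta_sets[i + 1]
```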
Video Imitation: In the video imitation tasks, we use a simplified 2D biped character in order to avoid issues that may arise due to depth ambiguity from monocular videos. The biped character has a total of 12 degrees of freedom, with similar state and action parameters as the humanoid. The video demonstrations are generated by rendering a reference motion into a sequence of video frames, which are then provided to the agent as a demonstration. The goal of the agent is to imitate the motion depicted in the video, without access to the original reference motion; the reference motion is used only to evaluate performance. Environments: We evaluate on two maze tasks, as illustrated in FIG0. The C-maze is taken from BID12: in this maze, the agent starts at a random point within a small fixed distance of the mean start position. The agent has a continuous, 2D action space which allows it to accelerate in the x or y directions, and it is able to observe its x and y position, but not its velocity. The ground truth reward is r_t = −d_t − 10^−3 ||a_t||², where d_t is the agent's distance to the goal, and a_t is its action (this action penalty is assumed to be zero in FIG0). Episodes terminate after 100 steps; for evaluation, we report the undiscounted mean sum of rewards over each episode. The S-maze is a larger variant of the same environment with an extra wall between the agent and its goal. To make the S-maze easier to solve for the expert, we added further reward shaping to encourage the agent to pass through the gaps between walls. We also increased the maximum control forces relative to the C-maze to enable more rapid exploration. Environments will be released along with the rest of our VAIRL implementations. Hyperparameters: Policy networks for all methods were two-layer ReLU MLPs with 32 hidden units per layer. Reward and discriminator networks were similar, but with 32-unit mean and standard deviation layers inserted before the final layer for VDB methods. To generate expert demonstrations, we trained a TRPO BID40 agent on the ground truth reward for the training environment for 200 iterations, and saved 107 trajectories from each of the policies corresponding to the five final iterations. TRPO used a batch size of 10,000, a step size of 0.01, and an entropy bonus with a coefficient of 0.1 to increase diversity. After generating demonstrations, we trained the IRL and imitation methods on a training maze for 200 iterations; again, our policy optimizer was TRPO with the same hyperparameters used to generate demonstrations. Between each policy update, we did 100 discriminator updates using Adam with a learning rate of 5 × 10^−5 and a batch size of 32. For the C-maze, our VAIRL runs used a target KL of I_c = 0.5, while for the more complex S-maze we use a tighter target of I_c = 0.05. (FIG0: Left: The C-maze used for training and its mirror version used for testing. Colour contours show the ground truth reward function that we use to train the expert and evaluate transfer quality, while the red and green dots show the initial and goal positions, respectively. Right: The analogous diagram for the S-maze.) For the test C-maze, we trained new policies against the recovered reward using TRPO with the hyperparameters described above; for the test S-maze, we modified these parameters to use a batch size of 50,000 and a learning rate of 0.001 for 400 iterations.
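For concreteness, the ground-truth maze reward described above, r_t = −d_t − 10^−3 ||a_t||², can be sketched as follows; array shapes and names are illustrative.

```python
# Minimal sketch of the C-maze ground-truth reward: negative distance to the
# goal plus a small quadratic action penalty.
import numpy as np

def cmaze_reward(pos, goal, action, action_cost=1e-3):
    d = np.linalg.norm(pos - goal)             # d_t: distance to the goal
    return -d - action_cost * np.sum(action ** 2)
```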
The addition of a gradient penalty enhances this effect for both AIRL and VAIRL. This is especially true in the S-maze, where combining a gradient penalty with a variational discriminator bottleneck leads to a smooth reward that gradually increases as the agent nears its goal position at the top of the maze. We provide further experimental results on image generation and details of the experimental setup. We use the standard non-saturating GAN objective for all models except WGAN-GP. Following BID29, we compute FID on samples of size 10,000. We base our implementation on a standard resnet GAN, and do not use any batch normalization for either the generator or the discriminator. We use RMSprop (Hinton et al.) and a fixed learning rate for all experiments. For convolutional GANs, the variational discriminator bottleneck is implemented as a 1x1 convolution on the final embedding space that outputs a Gaussian distribution over Z, parametrized with a mean and a diagonal covariance matrix. For all image experiments, we preserve the dimensionality of the latent space. All experiments use the adaptive β update with a dual stepsize of α_β = 10^−5. We will make our code public. Similarly to VGAN, instance noise is added to the final embedding space of the discriminator right before applying the classifier. Instance noise can be interpreted as a non-adaptive VGAN without an information constraint. Architecture: For CIFAR-10, we use a resnet-based architecture detailed in Tables 2, 3, and 4. For CelebA and CelebA-HQ, we use the same architecture, with CelebA-HQ BID22 evaluated at 1024 × 1024 resolution at 300k iterations. Models are trained from scratch at full resolution, without the progressive scheme proposed by BID21.
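A sketch of the convolutional VDB head described above, under our reading: a 1x1 convolution on the discriminator's final feature map outputs the mean and log-variance of a diagonal Gaussian over Z, preserving the dimensionality of the latent space. The class and layer names are illustrative, not from the released code.

```python
# Sketch of a convolutional VDB head: 1x1 conv -> (mean, log-variance) -> sample.
import torch
import torch.nn as nn

class ConvVDBHead(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # One 1x1 conv producing 2 * channels maps: mean and log-variance of E(z|x)
        self.gauss = nn.Conv2d(channels, 2 * channels, kernel_size=1)
        self.classifier = nn.Conv2d(channels, 1, kernel_size=1)  # logits

    def forward(self, feats):
        mu, logvar = self.gauss(feats).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        # KL to a standard normal prior, summed over latent dims, batch-averaged
        kl = 0.5 * (mu.pow(2) + logvar.exp() - 1 - logvar).sum(dim=[1, 2, 3]).mean()
        return self.classifier(z), kl
```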
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HyxPx3R9tm
Regularizing adversarial learning with an information bottleneck, applied to imitation learning, inverse reinforcement learning, and generative adversarial networks.
Deploying machine learning systems in the real world requires both high accuracy on clean data and robustness to naturally occurring corruptions. While architectural advances have led to improved accuracy, building robust models remains challenging, involving major changes in training procedure and datasets. Prior work has argued that there is an inherent trade-off between robustness and accuracy, as exemplified by standard data augmentation techniques such as Cutout, which improves clean accuracy but not robustness, and additive Gaussian noise, which improves robustness but hurts accuracy. We introduce Patch Gaussian, a simple augmentation scheme that adds noise to randomly selected patches in an input image. Models trained with Patch Gaussian achieve state of the art on the CIFAR-10 and ImageNet Common Corruptions benchmarks while also maintaining accuracy on clean data. We find that this augmentation leads to reduced sensitivity to high frequency noise (similar to Gaussian) while retaining the ability to take advantage of relevant high frequency information in the image (similar to Cutout). We show it can be used in conjunction with other regularization methods and data augmentation policies such as AutoAugment. Finally, we find that the idea of restricting perturbations to patches can also be useful in the context of adversarial learning, yielding models without the loss in accuracy that is found with unconstrained adversarial training. (Figure 1: Patch Gaussian augmentation overcomes the accuracy/robustness tradeoff observed in other augmentation strategies. Larger σ of Patch Gaussian improves mean corruption error (mCE) and maintains clean accuracy, whereas larger σ of Gaussian and larger patch size of Cutout hurt accuracy or robustness. More robust and accurate models are down and to the right.) Modern deep neural networks can achieve impressive performance at classifying images in curated datasets. Yet, they lack robustness to various forms of distribution shift that typically occur in real-world settings. For example, neural networks are sensitive to small translations and changes in scale, blurring and additive noise, small objects placed in images, and even different images from a distribution similar to the training set. For models to be useful in the real world, they need to be both accurate on a high-quality held-out set of images, which we refer to as "clean accuracy," and robust on corrupted images, which we refer to as "robustness." Most of the literature in machine learning has focused on architectural changes to improve clean accuracy, but interest has recently shifted toward robustness as well. Research in neural network robustness has tried to quantify the problem by establishing benchmarks that directly measure it and comparing the performance of humans and neural networks. Others have tried to understand robustness by highlighting systemic failure modes of current methods. For instance, networks exhibit excessive invariance to visual features, texture bias (Geirhos et al., 2018a), sensitivity to worst-case (adversarial) perturbations, and a propensity to rely on non-robust, but highly predictive features for classification. Of particular relevance, prior work has established connections between popular notions of adversarial robustness and some measures of distribution shift considered here.
Another line of work has attempted to increase model robustness, either by projecting out superficial statistics, via architectural improvements, pretraining schemes, or with the use of data augmentations. Data augmentation increases the size and diversity of the training set, and provides a simple way to learn invariances that are challenging to encode architecturally. Recent work in this area includes learning better transformations, inferring combinations of them, unsupervised methods, theory of data augmentation, and applications for one-shot learning. Despite these advances, individual data augmentation methods that improve robustness do so at the expense of reduced clean accuracy. Further, achieving robustness on par with the human visual system is thought to require major changes in training procedures and datasets: the current state of the art in robustness benchmarks involves creating a custom dataset with style-transferred images before training (Geirhos et al., 2018a), and still incurs a significant drop in clean accuracy. The ubiquity of reported robustness/accuracy trade-offs in the literature has even led to the hypothesis that these trade-offs may be inevitable. Because of this, many recent works focus on improving either one or the other. In this work we propose a simple data augmentation method that overcomes this trade-off, achieving improved robustness while maintaining clean accuracy. Our contributions are as follows: • We characterize a trade-off between robustness and accuracy in the standard data augmentations Cutout and Gaussian (Section 2.1). • We describe a simple data augmentation method (which we term Patch Gaussian) that allows us to interpolate between the two augmentations above (Section 3.1). Despite its simplicity, Patch Gaussian achieves a new state of the art in the Common Corruptions robustness benchmark, while maintaining clean accuracy, indicating current methods have not reached this fundamental trade-off (Section 4.1). • We demonstrate that Patch Gaussian can be combined with other regularization strategies (Section 4.2) and data augmentation policies (Section 4.3). • We perform a frequency-based analysis of models trained with Patch Gaussian and find that they can better leverage high-frequency information in lower layers, while not being too sensitive to it at later ones (Section 5.1). • We show a similar method can be used in adversarial training, suggesting under-explored questions about training distributions' effect on out-of-distribution robustness (Section 5.2). We start by considering two data augmentations: Cutout and Gaussian. The former sets a random patch of the input image to a constant (the mean pixel in the dataset) in order to improve clean accuracy. The latter adds independent Gaussian noise to each pixel of the input image, which directly increases robustness to Gaussian noise. We compare the effectiveness of Gaussian and Cutout data augmentation for accuracy and robustness by measuring the performance of models trained with each on clean as well as corrupted data. Here, robustness is defined as the average accuracy of the model, when tested on data corrupted by various σ (0.1, 0.2, 0.3, 0.5, 0.8, 1.0) of Gaussian noise, relative to the clean accuracy: Relative Gaussian Robustness = E_σ [Accuracy on Data Corrupted by σ] − Clean Accuracy. Fig. 2 highlights an apparent trade-off in using these methods. In accordance with previous work, Cutout improves accuracy on clean test data.
Despite this, we find it does not lead to increased robustness. Conversely, training with higher σ of Gaussian can lead to increased robustness to Gaussian noise, but also leads to decreased accuracy on clean data. Therefore, any robustness gains are offset by poor overall performance: a model with a perfect Relative Robustness of 0, but whose clean accuracy dropped to 50%, will be wrong half the time, even on clean data. (Figure 2: The y-axis is the change in accuracy when tested on data corrupted with Gaussian noise at various σ (average corrupted accuracy minus clean accuracy). The diamond indicates augmentation hyper-parameters selected by the method in Section 3.2.) At first glance, these results seem to reinforce the findings of previous work, indicating that robustness comes at the cost of generalization, which would offset any benefits of improved robustness. In the following sections, we will explore whether there exist augmentation strategies that do not exhibit this limitation. Each of the two methods seen so far achieves one half of our stated goal: either improving robustness or slightly improving/maintaining clean test accuracy, but never both. To explore whether this observed trade-off is fundamental, we introduce Patch Gaussian, a technique that combines the noise robustness of Gaussian with the slightly improved clean accuracy of Cutout. Our method is intentionally simple but, as we'll see, it's powerful enough to overcome the limitations described and beats complex training schemes designed to provide robustness. Patch Gaussian works by adding a W × W patch of Gaussian noise to the image (Figure 3). (Figure 3: Patch Gaussian is the addition of Gaussian noise to pixels in a square patch. It allows us to interpolate between Gaussian and Cutout, approaching Gaussian with increasing patch size and Cutout with increasing σ.) As with Cutout, the center of the patch is sampled to be within the image. By varying the size of this patch and the maximum standard deviation of the sampled noise, σ_max, we can interpolate between Gaussian (which applies additive Gaussian noise to the whole image) and an approximation of Cutout (which removes all information inside the patch). See Fig. 9 for more examples. Our goal is to learn models that achieve both good clean accuracy and improved robustness to corruptions. Prior work has optimized for one or the other but, as noted before, to meaningfully improve robustness to other distributions, a method can't incur a significant drop in clean accuracy. Therefore, when selecting hyper-parameters, we focus on identifying the models that are most robust while still achieving a minimum accuracy (Z) on the clean test data. Values of Z are selected to incur a negligible decrease in clean accuracy. As such, they vary per dataset and model, and can be found in the Appendix (Table 5). If no model has clean accuracy ≥ Z, we report the model with the highest clean accuracy, unless otherwise specified. We find that patch sizes around 25 on CIFAR (≤250 on ImageNet, i.e., uniformly sampled with maximum value 250) with σ ≤ 1.0 generally perform the best. A complete list of selected hyper-parameters for all augmentations can be found in Table 5. We are interested in out-of-distribution robustness, and report performance of selected models on Common Corruptions. However, when selecting hyper-parameters, we use Relative Gaussian Robustness as a stand-in for "robustness."
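A minimal NumPy sketch of Patch Gaussian as we read the description above: sample σ up to σ_max, sample a patch center uniformly inside the image, and add Gaussian noise only within the W × W patch. The default hyper-parameters mirror the CIFAR selections reported above, but the exact sampling details may differ from the authors' implementation.

```python
# Sketch of the Patch Gaussian augmentation on a single image.
import numpy as np

def patch_gaussian(image, patch_size=25, sigma_max=1.0, rng=np.random):
    """image: float array in [0, 1] with shape (H, W, C)."""
    h, w = image.shape[:2]
    sigma = rng.uniform(0.0, sigma_max)          # sample the noise scale
    cy, cx = rng.randint(h), rng.randint(w)      # patch center inside the image
    y0, y1 = max(0, cy - patch_size // 2), min(h, cy + patch_size // 2)
    x0, x1 = max(0, cx - patch_size // 2), min(w, cx + patch_size // 2)
    mask = np.zeros((h, w, 1), dtype=bool)
    mask[y0:y1, x0:x1] = True                    # noise applied only in the patch
    noised = image + rng.normal(0.0, sigma, size=image.shape)
    return np.clip(np.where(mask, noised, image), 0.0, 1.0)
```

Setting the patch size to the full image recovers Gaussian, while very large σ inside the patch approximates Cutout, matching the interpolation described above.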
" indicates that this metric is correlated with performance on Common Corruptions, so selected models should be generally robust beyond Gaussian corruptions. By picking models based on robustness to Gaussian noise, we ensure that our selection process does not overfit to the Common Corruptions benchmark. Models trained with Patch Gaussian overcome the observed trade-off and gain robustness to Gaussian noise while maintaining clean accuracy (Fig. 1). Because Gaussian robustness is only used for hyper-parameter selection, we omit these , but refer the curious reader to Appendix Fig. 7. Instead, we report how this Gaussian robustness translates into better Common Corruption robustness, which is in line with reports of the correlation between the two . In doing so, we establish a new state of the art in the Common Corruptions benchmark (Section 4.1), despite the simplicity of our method when compared with the previous best (a). We then show that Patch Gaussian can be used in complement to other common regularization strategies (Section 4.2) and data augmentation policies (Section 4.3). In this section, we look at how our augmentations impact robustness to corruptions beyond Gaussian noise. Rather than focusing on adversarial examples that are worst-case bounded perturbations, we focus on a more general set of corruptions that models are likely to encounter in real-world settings: the Common Corruptions benchmark . This benchmark, also referred to as CIFAR-C and ImageNet-C, is composed of images transformed with 15 corruptions, at 5 severities each. Each is designed to model transformations commonly found in real-world settings, such as brightness, different weather conditions, and different kinds of noise. Table 1 shows that Patch Gaussian achieves state of the art on both of these benchmarks in terms of mean Corruption Error (mCE). A "Corruption Error" is a model's average error over 5 severities of a given corruption, normalized by the same average of a baseline model. However, ImageNet-C was released in compressed JPEG format , which alters the corruptions applied to the raw pixels. Therefore, we report on the benchmark as-released ("Original mCE") 1 as well as a version of 12 corruptions without the extra compression ("mCE"). Additionally, because Patch Gaussian is a noise-based augmentation, we wanted to verify whether its gains on this benchmark were solely due to improved performance on noise-based corruptions (Gaussian Noise, Shot Noise, and Impulse Noise). To do this, we also measure the models' average performance on all other corruptions, reported as "Original mCE (-noise)", and "mCE (-noise)". The models used to normalize Corruption Errors are the "Baselines" trained with only flip and crop data augmentation. The one exception is Original mCE ImageNet, where we use the AlexNet baseline to be directly comparable with previous work (; a). On CIFAR, we compare with an adversarially-trained model . On ImageNet, we compare with a model trained with Random Erasing , as well as a shape-biased model "SIN+IN ftIN" (a). Finally, previous work has found that augmentation diversity is a key component of robustness gains. To confirm that Patch Gaussian's gains aren't simply a of using multiple augmentations, we also report on training with Cutout and Gaussian applied in sequence ("Cutout & Gaussian" and "Gaussian & Cutout"), as well as to 50% of batches ("Gaussian or Cutout"). We observe that Patch Gaussian outperforms all models, even on corruptions like fog where Gaussian hurts performance . 
Scores for each corruption can be found in the Appendix. Table 1: Patch Gaussian achieves state of the art in the CIFAR-C (left) and ImageNet-C (right) robustness benchmarks while maintaining clean test accuracy. "Original mCE" refers to the jpeg-compressed benchmark, as used in Geirhos et al. (2018a). "mCE" is a version of it without the extra jpeg compression. Note that Patch Gaussian improves robustness even on corruptions that aren't noise-based. *Cutout 16 is presented for direct comparison with prior work. For ResNet-200, we also present Gaussian at a higher σ to highlight the accuracy-robustness trade-off. Augmentation hyper-parameters were selected based on the method in Section 3.2 and can be found in the Appendix. See text for details. These results are surprising: achieving robustness on par with the human visual system is thought to require major changes in training procedures and datasets. Training shape-biased models involves creating a custom dataset of style-transferred images, which is a computationally expensive operation. Even with these changes, the most robust model reported, SIN+IN, displays a significant drop in clean accuracy. Because of this, our main comparison is with SIN+IN ftIN, which is fine-tuned on ImageNet. A comparison with SIN+IN can be found in Appendix Table 8. In sum, despite its simplicity, Patch Gaussian achieves a substantial decrease in mCE relative to other models, indicating that current methods have not reached the theoretical trade-off, and that complex training schemes are not needed for robustness. Since Patch Gaussian has a regularization effect on the models trained above, we compare it with other regularization methods: larger weight decay, label smoothing, and Dropblock (Table 2). We find that while label smoothing improves clean accuracy, it weakens robustness on all corruption metrics we have considered. This agrees with the theoretical prediction from prior work, which argued that increasing the confidence of models would improve robustness, whereas label smoothing reduces the confidence of predictions. We find that increasing the weight decay from the default value used in all models does not improve clean accuracy or robustness. Here, we focus on analyzing the interaction of different regularization methods with Patch Gaussian. Previous work indicates that improvements in clean accuracy appear after training with Dropblock for 270 epochs, but we did not find that training for 270 epochs changed our analysis. Thus, we present models trained for 90 epochs for direct comparison with other results. Due to the shorter training time, Dropblock does not improve clean accuracy, yet it does make the model more robust (relative to the baseline) according to all corruption metrics we consider. We find that using label smoothing in addition to Patch Gaussian has a mixed effect: it improves clean accuracy while slightly improving robustness metrics, except for the Original mCE. Combining Dropblock with Patch Gaussian reduces the clean accuracy relative to the Patch Gaussian-only model, as Dropblock seems to be a strong regularizer when used for 90 epochs. However, using Dropblock and Patch Gaussian together leads to the best robustness performance. These results indicate that Patch Gaussian can be used in conjunction with existing regularization strategies. Knowing that Patch Gaussian can be combined with other regularizers, it's natural to ask whether it can also be combined with other data augmentation policies.
Previous work has found that varied augmentation policies have a large positive impact on model robustness. In this section, we verify that Patch Gaussian can be added to these policies for further gains. Because AutoAugment leads to state-of-the-art accuracies, we are interested in seeing how far it can be combined with Patch Gaussian to improve results. Therefore, and unlike in previous experiments, models are trained for 180 epochs to yield the best possible results. In an attempt to understand Patch Gaussian's performance, we perform a frequency-based analysis of models trained with various augmentations, using a previously introduced method. First, we perturb each image in the dataset with noise sampled at each orientation and frequency in Fourier space. Then, we measure changes in the network activations and test error when evaluated with these Fourier-noise-corrupted images: we measure the change in the ℓ2 norm of the tensor directly after the first convolution, as well as the absolute test error. This procedure yields a heatmap, which indicates model sensitivity to different frequency and orientation perturbations in the Fourier domain. Each image in Fig. 4 shows first-layer (or test error) sensitivity as a function of the frequency and orientation of the sampled noise, with the middle of the image containing the lowest frequencies, and the edges of the image containing the highest frequencies. For CIFAR-10 models, we present this analysis for the entire Fourier domain, with noise sampled with ℓ2 norm 4. For ImageNet, we focus our analysis on the lower frequencies that are more visually salient, and add noise with ℓ2 norm 15.7. Note that for Cutout and Gaussian, we chose larger patch sizes and σs than those selected with the method in Section 3.2 in order to highlight the effect of these augmentations on sensitivity. Heatmaps of other models can be found in the Appendix (Figure 11). We confirm previous findings that Gaussian encourages the model to learn a low-pass filter of the inputs. Models trained with this augmentation, then, have low test error sensitivity at high frequencies, which could help robustness. However, valuable high-frequency information is being thrown out at low layers, which could explain the lower test accuracy. We further find that Cutout encourages the use of high-frequency information, which could help explain its improved generalization performance. Yet, it does not encourage lower test error sensitivity, which explains why it doesn't improve robustness either. Patch Gaussian, on the other hand, seems to allow high-frequency information through at lower layers, but still encourages relatively lower test error sensitivity at high frequencies. Indeed, when we measure accuracy on images filtered with a high-pass filter, we see that Patch Gaussian models can maintain accuracy in a similar way to the baseline and to Cutout, where Gaussian fails to. See Figure 4 for full results. Cutout encourages the use of high frequencies in earlier layers, but its test error remains too sensitive to them. Gaussian learns low-pass filtering of features, which increases robustness at later layers, but makes lower layers too invariant to high-frequency information (thus hurting accuracy). Patch Gaussian allows high frequencies to be used in lower layers, and its test error remains relatively robust to them. This can also be seen in the presence of high-frequency kernels in the first-layer filters of the models (or lack thereof, in the case of Gaussian).
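As a rough illustration of this frequency-sensitivity analysis, the sketch below builds a single Fourier basis perturbation and measures the resulting error change; `error_fn` and the norm values are placeholder assumptions, and the real analysis sweeps every frequency/orientation (and adds noise per channel) to build the full heatmap.

```python
import numpy as np

def fourier_basis_image(h, w, i, j, norm=4.0):
    """Real-valued image whose spectrum is concentrated at frequency (i, j).

    Built by placing a conjugate-symmetric pair of spikes in the 2D spectrum
    and inverting, then rescaled to a fixed l2 norm (4.0 for CIFAR here).
    """
    spectrum = np.zeros((h, w), dtype=complex)
    spectrum[i % h, j % w] = 1.0
    spectrum[(-i) % h, (-j) % w] = 1.0  # symmetric partner -> real image
    basis = np.real(np.fft.ifft2(spectrum))
    return basis * (norm / np.linalg.norm(basis))

def sensitivity(error_fn, images, i, j, norm=4.0):
    """Test-error change when every image gets one Fourier-noise perturbation.

    error_fn: callable mapping a batch of (N, H, W) images to a scalar error
    rate (stands in for evaluating an actual trained model). A random sign
    per image mimics sampling the noise direction.
    """
    h, w = images.shape[1:3]
    basis = fourier_basis_image(h, w, i, j, norm)
    signs = np.random.choice([-1.0, 1.0], size=(len(images), 1, 1))
    perturbed = images + signs * basis[None, :, :]
    return error_fn(perturbed) - error_fn(images)

# Sweeping sensitivity(...) over all (i, j) yields a heatmap as in Fig. 4.
```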
Figure 4 (right): Indeed, Patch Gaussian models match the performance of Cutout and Baseline when presented with only the high-frequency information of images, whereas Gaussian fails to effectively utilize this information (see Appendix Fig. 12 for experiment details). This pattern of reduced sensitivity of predictions to high frequencies in the input occurs across all augmentation magnitudes, but here we use larger patch sizes and σ of noise to highlight the differences in the models indicated by *. See text for details. Understanding the impact of data distributions and noise on representations has been well studied in neuroscience. The data augmentations that we propose here alter the distribution of inputs that the network sees, and thus are expected to alter the kinds of representations that are learned. Prior work on efficient coding and autoencoders has shown how filter properties change with noise in the unsupervised setting, resulting in lower-frequency filters with Gaussian, as we observe in Fig. 4. Consistent with prior work on natural image statistics, we find that networks are least sensitive to low-frequency noise, where spectral density is largest. Performance drops at higher frequencies, when the amount of noise we add grows relative to the typical spectral density observed at these frequencies. In future work, we hope to better understand the relationship between naturally occurring properties of images and sensitivity, and to investigate whether training with more naturalistic noise can yield similar gains in corruption robustness. Our results indicate that patching a transformation can prevent overfitting to that particular transformation and maintain clean accuracy. To further confirm this, we train a model with adversarial training applied only to a patch of the training input. Adversarial training is a method of achieving robustness to worst-case perturbations. Models trained in this setting notoriously exhibit decreased clean accuracy, so it is a good candidate to verify whether our robustness gains come from patching. We train our models with PGD, in a setting equivalent to prior work. For Patch PGD, the adversarial perturbation is calculated on the whole image for all steps, and patched after the fact. We also tried calculating PGD on a patch only and found similar results. We select hyper-parameters based on PGD performance on the validation set, while maintaining accuracy above 90%. However, in this section we are not interested in improving adversarial robustness performance, but in seeing its effect on robustness to Common Corruptions, to evaluate out-of-distribution (o.o.d.) robustness. We leave an analysis of the effect of patching on adversarial robustness to future work. Indeed, Table 4 shows that training with Patch PGD obtains similar PGD accuracy to training with PGD, but maintains most of the clean accuracy of the baseline model. Surprisingly, Patch PGD also improves robustness to unseen Common Corruptions when compared to the baseline without adversarial training, indicating that patching is a generally powerful tool. This also suggests there are unexplored questions regarding the training distribution and how that translates into i.i.d. and o.o.d. generalization. We hope to explore these in future work. In this work, we introduced a simple data augmentation operation, Patch Gaussian, which improves robustness to common corruptions without incurring a drop in clean accuracy.
For models that are large relative to the dataset size (like ResNet-200 on ImageNet and all models on CIFAR-10), Patch Gaussian improves clean accuracy and robustness concurrently. We showed that Patch Gaussian achieves this by interpolating between two standard data augmentation operations, Cutout and Gaussian. Finally, we analyzed the sensitivity to noise at different frequencies of models trained with Cutout and Gaussian, and showed that Patch Gaussian combines their strengths without inheriting their weaknesses. Our method is much simpler than the previous state of the art, and can be used in conjunction with other regularization and data augmentation strategies, indicating it is generally useful. We end by showing that applying perturbations in patches can be a powerful method to vary training distributions in the adversarial setting. Our results indicate that current methods have not reached a fundamental robustness/accuracy trade-off, and that future work is needed to understand the effect of training distributions on o.o.d. robustness. Fig. 5 shows the accuracy/robustness trade-off of models trained with various hyper-parameters of Cutout and Gaussian. Fig. 6 shows the clean accuracy change of models trained with various hyper-parameters of Patch Gaussian. Fig. 7 shows how Patch Gaussian can overcome the observed trade-off and gain Gaussian robustness in various models and datasets. We run our experiments on the CIFAR-10 and ImageNet datasets. On CIFAR-10, we use the Wide-ResNet-28-10 model, as well as the Shake-shake-112 model, trained for 200 epochs and 600 epochs respectively. The Wide-ResNet model uses an initial learning rate of 0.1 with a cosine decay schedule. Weight decay is set to 5 × 10−4 and batch size to 128. We train all models, including the baseline, with standard data augmentation of horizontal flips and pad-and-crop. Our code uses the same hyper-parameters as previous work. On ImageNet, we use the ResNet-50 and ResNet-200 models, trained for 90 epochs. We use a weight decay rate of 1 × 10−4, a global batch size of 512, and a learning rate of 0.2. The learning rate is decayed by a factor of 10 at epochs 30, 60, and 80. We use standard data augmentation of horizontal flips and crops. All CIFAR-10 and ImageNet experiments use the hyper-parameters listed above, unless specified otherwise. To apply Gaussian, we uniformly sample a standard deviation σ from 0 up to some maximum value σmax, and add i.i.d. noise sampled from N(0, σ2) to each pixel. To apply Cutout, we use a fixed patch size W, and randomly set a square region of size W × W to the constant mean of each RGB channel in the dataset. As in prior work, the patch location is randomly sampled and can lie outside of the 32 × 32 CIFAR-10 (or 224 × 224 ImageNet) image, but its center is constrained to lie within it. Patch sizes and σmax are selected based on the method in Section 3.2 (a minimal sketch of the combined Patch Gaussian operation follows after this paragraph). Table 5: Augmentation hyper-parameters selected with the method in Section 3.2 for each model/dataset. *Indicates manually chosen stronger hyper-parameters, used to highlight the effect of the augmentation on the models. "≤" indicates that the value is uniformly sampled up to the given maximum value. Since Patch Gaussian can be combined with both regularization strategies and data augmentation policies, we want to see if it is generally useful beyond classification tasks. We train a RetinaNet detector with a ResNet-50 backbone on the COCO dataset. Images for both the baseline and Patch Gaussian models are horizontally flipped half of the time, after being resized to 640 × 640.
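Putting the Gaussian and Cutout conventions above together, here is a minimal sketch of the Patch Gaussian operation itself (σ sampled uniformly up to σmax, patch center constrained to lie within the image); function and variable names are our own illustrative choices.

```python
import numpy as np

def patch_gaussian(image, patch_size, sigma_max, rng=np.random.default_rng()):
    """Add Gaussian noise only inside a W x W patch of `image` (H, W, C in [0, 1]).

    The patch may extend past the borders, but its center lies in the image,
    matching the Cutout convention described above.
    """
    h, w = image.shape[:2]
    sigma = rng.uniform(0, sigma_max)              # sample noise scale
    cy, cx = rng.integers(h), rng.integers(w)      # sample patch center
    y0, y1 = max(0, cy - patch_size // 2), min(h, cy + patch_size // 2)
    x0, x1 = max(0, cx - patch_size // 2), min(w, cx + patch_size // 2)
    noisy = image + rng.normal(0.0, sigma, size=image.shape)
    out = image.copy()
    out[y0:y1, x0:x1] = noisy[y0:y1, x0:x1]        # noise applied in the patch only
    return np.clip(out, 0.0, 1.0)

# Example: a 25-pixel patch with sigma up to 1.0 on a CIFAR-sized image.
img = np.random.default_rng(0).uniform(size=(32, 32, 3))
aug = patch_gaussian(img, patch_size=25, sigma_max=1.0)
```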
We train both models for 150 epochs using a learning rate of 0.08 and a weight decay of 1 × 10−4. The focal loss parameters are set to α = 0.25 and γ = 1.5. Despite being designed for classification, Patch Gaussian improves detection performance according to all metrics when tested on the clean COCO validation set (Table 9). On the primary COCO metric, mean average precision (mAP), the model trained with Patch Gaussian achieves 1% higher accuracy than the baseline, whereas the model trained with Gaussian suffers a 2.9% loss. Next, we evaluate these models on the validation set corrupted by i.i.d. Gaussian noise, with σ = 0.25. We find that the models trained with Gaussian and with Patch Gaussian achieve the highest mAP of 26.1% on the corrupted data, whereas the baseline achieves 11.6%. It is interesting to note that the Patch Gaussian model achieves a better result on the harder metrics of small-object detection and stricter intersection-over-union (IoU) thresholds, whereas the Gaussian model achieves a better result on the easier tasks of large-object detection and the less strict IoU threshold metric. Overall, as was observed for the classification tasks, training object detection models with Patch Gaussian leads to significantly more robust models without sacrificing clean accuracy. Fourier analysis: Fig. 10 shows a Fourier analysis of the selected models reported. Fig. 11 shows complete filters for the ResNet-50 models. Fig. 12 shows the high-pass filters used in the high-pass experiment of Fig. 4; see Figure 4 for details. We again note the presence of filters of high Fourier frequency in models trained with Cutout* and Patch Gaussian. We also note that Gaussian* exhibits high-variance filters. We posit these have not been trained and have little importance, given the low sensitivity of this model to high frequencies. Future work will investigate the importance of filters on sensitivity. Prior work reports that PGD training helps with corruption robustness, but fails to report mCE values for the models. We find that, indeed, PGD helps with some corruptions, and when the errors over all corruption severities are averaged, it mostly maintains performance (23.8% error, compared to a baseline error of 23.51%). However, as Table 4 shows, when we properly calculate mCE by normalizing with a baseline model, PGD displays much worse robustness, while Patch PGD improves performance. Figure 13 shows the frequency-based analysis for models with different hyper-parameters of Patch Gaussian. First, for hyper-parameters W=16, σ=1.0 (center), the reader will note that these are very similar to the frequency sensitivities reported in Figure 4. The main difference is that the smaller patch size (16 vs 25 in Figure 4) makes the model slightly more sensitive to high frequencies. This makes sense, since a smaller patch size moves the model further away from a Gaussian-trained one. When we make the scale smaller (W=16, σ=0.3, left), less information is corrupted in the patch, which moves the model farther from the one trained with Cutout (and therefore closer to a Gaussian-trained one). This can be seen in the increased invariance to high frequencies at the first layer, which is reflected in invariance in the test error as well. If we, instead, make the scale larger (W=16, σ=2.0, right), we move the model closer to the one trained with Cutout. Notice the higher-intensity red in the first-layer plot, indicating higher sensitivity to high-frequency features. We also see this sensitivity reflected in the test error, which matches the behavior of Cutout-trained models.
Figure 13: Frequency-based analysis for models with different hyper-parameters of Patch Gaussian.
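The Patch PGD variant evaluated above (the perturbation is computed over the whole image and only applied within a patch at the end) can be sketched as follows; the attack parameters and the `model`/`loss_fn` handles are assumptions for illustration, not the exact configuration used in the experiments.

```python
import torch

def patch_pgd(model, loss_fn, x, y, eps=8/255, step=2/255, n_steps=7, patch_w=25):
    """PGD computed on the full image; the final delta is applied in a patch only."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(n_steps):
        loss = loss_fn(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += step * delta.grad.sign()      # full-image PGD step
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    # Build a mask that keeps the perturbation only inside a random patch
    # whose center lies within the image (as with Patch Gaussian / Cutout).
    n, _, h, w = x.shape
    mask = torch.zeros_like(x)
    for k in range(n):
        cy, cx = torch.randint(h, (1,)).item(), torch.randint(w, (1,)).item()
        y0, y1 = max(0, cy - patch_w // 2), min(h, cy + patch_w // 2)
        x0, x1 = max(0, cx - patch_w // 2), min(w, cx + patch_w // 2)
        mask[k, :, y0:y1, x0:x1] = 1.0
    # Assumes unnormalized inputs in [0, 1].
    return (x + mask * delta.detach()).clamp(0, 1)
```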
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HkxWXkStDB
Simple augmentation method overcomes robustness/accuracy trade-off observed in literature and opens questions about the effect of training distribution on out-of-distribution generalization.
Offset regression is a standard method for spatial localization in many vision tasks, including human pose estimation, object detection, and instance segmentation. However, if high localization accuracy is crucial for a task, convolutional neural networks with offset regression usually struggle to deliver. This can be attributed to the locality of the convolution operation, exacerbated by variance in scale, clutter, and viewpoint. An even more fundamental issue is the multi-modality of real-world images. As a consequence, they cannot be approximated adequately using a single-mode model. Instead, we propose to use mixture density networks (MDN) for offset regression, allowing the model to manage various modes efficiently and to learn to predict the full conditional density of the outputs given the input. On 2D human pose estimation in the wild, which requires accurate localisation of body keypoints, we show that this yields a significant improvement in localization accuracy. In particular, our experiments reveal viewpoint variation as the dominant multi-modal factor. Further, by carefully initializing MDN parameters, we do not face any instabilities in training, which is known to be a big obstacle to the widespread deployment of MDN. The method can be readily applied to any task with a spatial regression component. Our findings highlight the multi-modal nature of real-world vision, and the significance of explicitly accounting for viewpoint variation, at least where spatial localization is concerned. Training deep neural networks is a non-trivial task in many ways. Properly initializing the weights, carefully tuning the learning rate, normalization of weights or targets, or using the right activation function can all be vital for getting a network to converge at all. From another perspective, it is crucial to carefully formulate the prediction task and loss on top of a rich representation to efficiently leverage all the features learned. For example, combining representations at various network depths has been shown to be important to deal with objects at different scales. For some issues, it is relatively straightforward to come up with a network architecture or loss formulation to address them; see e.g. techniques used for multi-scale training and inference. In other cases it is not easy to manually devise a solution. For example, offset regression is extensively used in human pose estimation and instance segmentation, but it lacks high spatial precision. Fundamental limitations imposed by the convolution operation and downsampling in networks, as well as various other factors, contribute to this; think of scale variation, variation in appearance, clutter, occlusion, and viewpoint. When analyzing a standard convolutional neural network (CNN) with offset regression, it seems the network knows roughly where a spatial target is located and moves towards it, but cannot get precise enough. How can we make them more accurate? That's the question we address in this paper, in the context of human pose estimation. Mixture density models offer a versatile framework to tackle such challenging, multi-modal settings. They allow the data to speak for itself, revealing the most important modes and disentangling them. To the best of our knowledge, mixture density models have not been successfully integrated into 2D human pose estimation to date. In fact, our work has only become possible thanks to the recent work of Zhou et al.
(2019a), who propose an offset-based method to do dense human pose estimation, object detection, depth estimation, and orientation estimation in a single forward pass. Essentially, in a dense fashion they classify some central region of an instance to decide if it belongs to a particular category, and then from that central location regress offsets to the spatial points of interest belonging to the instance. In human pose estimation this would be keypoints; in instance segmentation it could be extreme points; and in tracking moving objects in a video this could be used to localize an object in a future frame (Zhou et al. 2019b). This eliminates the need for a two-stage top-down model or for an ad hoc post-processing step in bottom-up models. The former would make it very slow to integrate a density estimation method, while for the latter it is unclear how to do so, if possible at all. In particular, we propose to use mixture density networks (MDN) to help a network disentangle the underlying modes that, when taken together, force it to converge to an average regression of a target. We conduct experiments on the MS COCO human pose estimation task, because its metric is very sensitive to spatial localization: if the ground truth labels are displaced by just a few pixels, the scores already drop significantly, as shown in the top three rows of Table 4. This makes the dataset suitable for analyzing how well different models perform on high-precision localization. Any application demanding high-precision localization can benefit from our approach. For example, spotting extremely small broken elements on an electronic board or identifying surface defects on a steel sheet using computer vision are among such applications. In summary, our contributions are as follows: • We propose a new solution for offset regression problems in 2D using MDNs. To the best of our knowledge this is the first work to propose a full conditional density estimation model for 2D human pose estimation on a large unconstrained dataset. The method is general and we expect it to yield significant gains in any spatial dense prediction task. • We show that using MDN we can gain a deeper understanding of which modes actually make a dataset challenging. Here we observe that viewpoint is the most challenging mode, which forces a single-mode model to settle for a sub-optimal solution. Multi-person human pose estimation solutions usually work either top-down or bottom-up. In the top-down approach, a detector finds person instances to be processed by a single-person pose estimator. When region-based detectors are deployed, top-down methods are robust to scale variation. But they are slower compared to bottom-up models. In the bottom-up approach, all keypoints are localized by means of heatmaps, and for each keypoint an embedding is learned in order to later group them into different instances. Offset-based geometric approaches (Cao et al. 2018) and associative embeddings are the most successful models. However, they lead to inferior accuracy and need an ad hoc post-processing step for grouping. To overcome these limitations, recently Zhou et al. (2019a) proposed a solution that classifies each spatial location as corresponding to (the center of) a person instance or not, and at the same location generates offsets for each keypoint. This method is very fast and eliminates the need for a detector or post-processing to group keypoints. In spirit, it is similar to the YOLO and SSD models developed for object detection.
However, offset regression does not deliver high spatial precision, and the authors still rely on heatmaps to further refine the predictions. Overcoming this lack of accuracy is the main motivation for this work. As for the superiority of multiple-choice solutions for vision tasks, Lee et al. (2015) and others have shown that having multiple prediction heads, while enforcing them to make diverse predictions, works better than a single head or an ensemble of models. However, they depend on an oracle to choose the best prediction for a given input. The underlying motivation is that the system will later be used by another application that can assess and choose the right head for an input. Clearly this is a big obstacle to making such models practical. And, of course, these methods do not have a mechanism to learn the conditional density of the outputs for a given input. This is a key feature of mixture models. Mixture density networks have attracted a lot of attention in very recent years. In particular, they have been applied to 3D human pose estimation and 3D hand pose estimation. Both works are applied to relatively controlled environments. In 2D human pose estimation, to the best of our knowledge, no such attempt has been made. We first review mixture density networks and then show how we adapt them for offset regression. Mixture models are theoretically very powerful tools to estimate the density of any distribution. They recover the different modes that contribute to the generation of a dataset, and are straightforward to interpret. Mixture density networks (MDN) are a technique that enables us to use neural networks to estimate the parameters of a mixture density model. MDNs estimate the probability density of a target conditioned on the input. This is a key technique to avoid converging to an average target value given an input. For example, if a 1D distribution consists of two Gaussians with two different means, trying to estimate its density using a single Gaussian will result in a mean squashed in between the two actual means, and will fail to estimate either of them. This effect is well illustrated in Figure 1 of the original MDN paper by Bishop. In a regression task, given a dataset containing a set of input vectors {x_0, ..., x_n} and the associated target vectors {t_0, ..., t_n}, MDN will fit the weights of a neural network such that it maximizes the likelihood of the training data. The key formulation then is the representation of the probability density of the target values conditioned on the input, as shown in equation 1: p(t_i | x_i) = Σ_{m=1..M} α_m(x_i) φ_m(t_i | x_i). Here M is a hyper-parameter and denotes the number of components constituting the mixture model. α_m(x_i) is called the mixing coefficient and indicates the probability of component m being responsible for the generation of sample x_i. φ_m denotes the probability density function of component m for t_i | x_i. The conditional density function is not restricted to be Gaussian, but that is the most common choice and works well in practice. It is given in equation 2: φ_m(t_i | x_i) = (1 / ((2π)^(c/2) σ_m(x_i)^c)) exp(−‖t_i − μ_m(x_i)‖² / (2 σ_m(x_i)²)). In equation 2, c is the dimension of the target vector, μ_m is the component mean, and σ_m is the common variance for the elements of the target. The variance term does not have to be shared between dimensions of the target space, and can be replaced with a diagonal or full covariance matrix if necessary. Given an image with an unspecified number of people in uncontrolled poses, the goal of human pose estimation is to localize a predefined set of keypoints for each person and have them grouped together per person.
We approach this problem using a mixed bottom-up and top-down formulation very recently proposed in Zhou et al. (2019a). In this formulation, unlike in top-down methods, there is no need to use an object detector to first localize the person instance. And unlike in bottom-up methods, the grouping is not left as a post-processing step. Rather, at a given spatial location, the model predicts if it is the central pixel of a person, and at the same location, for each keypoint it generates an offset vector to the keypoint location. This formulation takes the best of both approaches: it is fast like a bottom-up method, and post-processing-free as in a top-down model. At least equally important is the fact that it enables applying many advanced techniques in an end-to-end manner. As a case in point, in this paper it allows us to perform density estimation for human pose in a dense fashion. That is, in a single forward pass through the network, we estimate the parameters of a density estimation model. In particular, we use the mixture density model to learn the probability density of poses conditioned on an input image. Formally, Zhou et al. (2019a) start from an input RGB image I of size H × W × 3, and a CNN that receives I and generates an output with height H′, width W′, and C channels. If we indicate the downsampling factor of the network with D, then we have H = D × H′, and similarly for the width. We refer to the set of output pixels as P. Given the input, the network generates a dense 2D classification map C to determine instance centers, i.e. C_p indicates the probability of location p ∈ P corresponding to the center of a person instance. Simultaneously, at p, the network predicts a set of offset vectors O_p = {O_p^1, ..., O_p^K}, where K is the number of keypoints that should be localized (17 in the COCO dataset). Once the network classifies p as a person's central pixel, the location of each keypoint is directly given by the offset vectors O. In the literature, it is common to train the offset regression O using an L1 loss (Zhou et al. 2019a). However, spatial regression is a multimodal task, and having a single set of outputs will lead to a sub-optimal prediction, in particular when precise localization is important. With this in mind, we use mixture density networks to model the offset regression task. In this case, μ_m from equation 2 is used to represent the offsets predicted by the different components. Then the density of the ground truth offset vectors G conditioned on image I is given by equation 3: P(G | I) = Σ_{m=1..M} α_m(I) φ_m(G | I), where the density φ_m for each component is given by equation 4: φ_m(G | I) = Π_{k=1..K} (1 / (2π σ_m^x(I) σ_m^y(I))) exp(−(O_{m,x}^k(I) − g_{p,x}^k)² / (2 σ_m^x(I)²) − (O_{m,y}^k(I) − g_{p,y}^k)² / (2 σ_m^y(I)²)). Here O_m(I) is the input-dependent network output function for component m that generates offsets, and G = [g_{p,x}^1, g_{p,y}^1, ..., g_{p,x}^K, g_{p,y}^K] indicates the ground truth offsets. σ_m(I) is the standard deviation of component m in the two dimensions, X and Y. It is shared by all keypoints of an instance. However, in order to account for scale differences between keypoints, in equation 4 for each keypoint we divide σ_m(I) by its scale factor provided in the COCO dataset. In this framework, the keypoints are independent within each component, but the full model does not assume such independence. Given the conditional probability density of the ground truth in equation 3, we can define the loss using the negative log-likelihood formulation and minimize it using stochastic gradient descent. The loss for MDN is given in equation 5: L_mdn = −(1/N) Σ_{i=1..N} log Σ_{m=1..M} α_m(I_i) φ_m(G_i | I_i). Here N is the number of samples in the dataset. Practically, this loss term replaces the popular L1 loss.
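For numerical stability, the negative log-likelihood in equation 5 is best computed in log space; below is a minimal PyTorch sketch of such a loss under the assumptions above (keypoints independent within each component, per-axis standard deviations shared across keypoints). Tensor shapes and names are our own.

```python
import math
import torch

def mdn_nll(log_alpha, mu, log_sigma, target):
    """Negative log-likelihood of a Gaussian MDN over keypoint offsets.

    log_alpha: (N, M)        log mixing coefficients (e.g. log_softmax output)
    mu:        (N, M, K, 2)  predicted offsets per component and keypoint
    log_sigma: (N, M, 2)     log std-dev per component, shared by keypoints
    target:    (N, K, 2)     ground-truth offsets
    """
    t = target.unsqueeze(1)                     # (N, 1, K, 2), broadcast over M
    sigma = log_sigma.exp().unsqueeze(2)        # (N, M, 1, 2)
    # Per-dimension Gaussian log-density, summed over keypoints and x/y axes.
    log_prob = (-0.5 * ((t - mu) / sigma) ** 2
                - log_sigma.unsqueeze(2)
                - 0.5 * math.log(2 * math.pi)).sum(dim=(2, 3))   # (N, M)
    # Log of the mixture via LogSumExp, as mentioned in the text.
    log_mix = torch.logsumexp(log_alpha + log_prob, dim=1)        # (N,)
    return -log_mix.mean()
```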
Please note that MDN is implemented in a dense fashion, in the sense that density estimation is done independently at each spatial location p ∈ P. A schematic overview of the model is shown in Figure 1 (caption: Schematic overview of our proposed solution using mixture density networks). We do not modify the other loss terms used in Zhou et al. (2019a). These include a binary classification loss L_C, a keypoint heatmap loss L_HM (used for refinement), small offset regression losses, for both the center and the keypoints, to compensate for the spatial precision lost due to downsampling, L_Coff and L_KPoff, and a loss for instance size regression, L_wh. The total loss is given in equation 6: L = L_mdn + L_C + L_HM + L_Coff + L_KPoff + L_wh. Once the network is trained, at each spatial location, C will determine if that location is the center of a person (the bounding box center is used for training). Each MDN component at that location will generate offsets conditioned on the input. To obtain the final offset vectors, we can use either the mixture of the components or the component with the highest probability. We do experiments with both, and using the maximum component leads to slightly better results (a small decoding sketch follows below). Once we visually investigate what modes the components have learned, they seem to have very small overlap. Hence, it is not surprising that both approaches have similar performance. For initialization we follow prior work, but modify it such that the minimum value is 10. We experimented with smaller and larger values for the minimum, but did not observe any significant difference. To avoid numerical issues, we have implemented the log-likelihood using the LogSumExp function. Our implementation is on top of the code base published by Zhou et al. (2019a), and we use their model as the base in our comparisons. The network architecture is based on a version of the stacked hourglass presented in prior work. We refer to this architecture as LargeHG. To analyse the effect of model capacity, we also conduct experiments with smaller variants. The SmallHG architecture is obtained by replacing the residual layers with convolutional layers, and XSmallHG is obtained by further removing one layer from each hourglass level. Unless stated otherwise, all models are trained for 50 epochs (1X schedule) using batch size 12 and the ADAM optimizer with learning rate 2.5e-4. Only for visualization and comparison to the state of the art do we use a version of our model trained for 150 epochs (3X). Except for the comparison with the state of the art, we have re-trained the base model to assure a fair comparison. To analyse the effect of the number of components, we train on the XSmallHG and SmallHG architectures with up to 5 components, and on the LargeHG architecture with up to 3 components. Table 1 shows the evaluation of the various models on coco-val. The table also shows the evaluation of the MDN models when we ignore the predictions made by particular components. We report predictions with and without the extra heatmap-based refinement deployed in Zhou et al. (2019a). This refinement is a post-processing step, which tries to remedy the lack of precision in offset regression by pushing the detected keypoints towards the nearest detection from the keypoint heatmaps. It is clear that MDN leads to a significant improvement. Interestingly, only two modes will be retrieved, no matter how many components we train and how big the network is. Having more than two components results in slightly better recall, but it does not improve precision.
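Returning to the decoding step described above (person centers from C, offsets from the most probable mixture component), a minimal sketch follows; the threshold and tensor layout are illustrative assumptions, and in practice one would also apply local-maximum selection to the center map.

```python
import torch

def decode_poses(center_map, alpha, mu, thresh=0.5):
    """Decode keypoints at predicted person centers using the max component.

    center_map: (H, W)           probability of each location being a center
    alpha:      (H, W, M)        mixing coefficients per location
    mu:         (H, W, M, K, 2)  offsets (x, y) per location/component/keypoint
    Returns a list of (K, 2) keypoint tensors in output-map coordinates.
    """
    poses = []
    ys, xs = torch.nonzero(center_map > thresh, as_tuple=True)
    for y, x in zip(ys.tolist(), xs.tolist()):
        m = int(alpha[y, x].argmax())            # most probable component
        offsets = mu[y, x, m]                    # (K, 2) offsets from center
        keypoints = offsets + torch.tensor([x, y], dtype=offsets.dtype)
        poses.append(keypoints)
    return poses
```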
Only when the network capacity is very low do more than two components seem to make a significant contribution. Visualizing predictions by the various models makes it clear that one of the modes focuses on frontal-view instances, and the other one on instances seen from the back. Figure 2 shows sample visualisations from the MDN3 model trained with the 3X schedule. We further evaluate the MDN2 trained on the LargeHG on various subsets of the COCO validation split, by ignoring the predictions of each component or forcing all predictions to be made by a particular component. The detailed evaluations are presented in Table 2. The results show that the components correlate well with face visibility, confirming the observations we make by visualising predictions. It is worth noting that although we use the annotation of the nose as an indicator of face visibility, it is noisy, as in some cases the person is viewed from the side such that the nose is just barely visible and the side view is very close to a back view (like the first image in the third row of Figure 2). But even this noisy split is enough to show that the two modes are chosen based on viewpoint (see Table 1). The subsets are defined as follows:
Full: the full coco validation split.
Visible Keypoints: all keypoints that are occluded and annotated are ignored.
Occluded Keypoints: all keypoints that are visible and annotated are ignored.
Visible Face: instances with at least 5 annotated keypoints where the nose is visible and annotated.
Occluded Face: instances with at least 5 annotated keypoints where the nose is occluded or not annotated.
Table 3: Statistics for coco-val subsets and the MDN max component. For face visibility, instances with more than 5 annotated keypoints are used (in parentheses for a minimum of 10). For components, predictions with score at least .5 are considered (in parentheses for a minimum of .7). [Columns: Occluded Keypoints, Visible Keypoints, Occluded Face, Visible Face; rows: comp1 (back view), comp2 (front view).] Table 3 compares the portion of the dataset each subset comprises against the portion of predictions made by each component of MDN2. Obviously, the component statistics correlate well with face visibility, which in fact is an indicator of viewpoint in 2D. The majority of instances in the dataset are in frontal view, and similarly the front-view component makes the majority of the predictions. Related to our results, prior work has shown that excluding occluded keypoints from training leads to improved performance. More recently, others achieve more accurate 3D hand pose estimation by proposing a model that directly predicts the occlusion of a keypoint in order to use it for selecting a downstream model. And here we illustrate that occlusion caused by viewpoint imposes more of a challenge to spatial regression models than other possible factors, like variation in the pose itself. It is common to train offset regression targets with an L1 loss (Zhou et al. 2019a). In contrast, the single-component version of our model is technically equal to an L2 loss normalized by the instance scale, learned via the MDN variance terms. This is in fact equal to directly optimizing the MS COCO OKS scoring metric for human pose estimation. Comparing the performance of the two losses in Table 1, we see normalized L2 yields superior results. That is, for any capacity, MDN1 outperforms the base model, which is trained using L1. For a deeper insight into which body parts gain the most from the MDN, we do a fine-grained evaluation on various keypoint subsets. In doing so, we modify the COCO evaluation script such that it only considers the set of keypoints we are interested in. Table 4 shows the results.
For the facial keypoints, where the metric is the most sensitive, the improvement is higher. Nevertheless, the highest improvement comes for the wrists, which have the highest freedom to move. On the other hand, for the torso keypoints (shoulders and hips), which are the most rigid, there is almost no difference in comparison to the base model. Given that MDN reveals two modes, we build a hierarchical model by doing a binary classification and using it to choose between two separate full MDN models. The goal is to see if an explicit binary split could replace the mixture; it does not reach the same performance. Therefore, a two-component MDN that learns the full conditional probability density and assumes dependence between target dimensions delivers higher performance. For comparison to the state of the art in human pose estimation, we train MDN1 and MDN3 for 150 epochs using the LargeHG architecture, and test them on the COCO test-dev split. The results are presented in Table 5. Using MDN significantly improves the offset regression accuracy (row 6 vs row 10 of the table). When refined, both models achieve similar performance. In contrast to all other state-of-the-art models, MDN's performance drops if we deploy the ad-hoc left-right flip augmentation at inference time. This is a direct consequence of using a multi-modal prediction model which learns to deal with viewpoint. It is important to note that left-right flip is widely used for increasing accuracy at test time for object detection and segmentation tasks as well. Therefore, we expect our method to improve performance for those tasks too. MDN1 with refinement gives slightly lower accuracy than the base model. Our investigation attributes this discrepancy to a difference in the training batch size. The official base model is trained with batch size 24, but we train all models with batch size 12, due to limited resources. Under the same training setting, MDN1 outperforms the base model, as shown in Table 1. We have shown that mixture density models significantly improve spatial offset regression accuracy. Further, we have demonstrated that MDNs can be deployed on real-world data for conditional density estimation without facing mode collapse. Analyzing the ground truth data and the revealed modes, we observe that MDN in fact picks up on a mode that significantly contributes to achieving higher accuracy and that cannot be incorporated in a single-mode model. In the case of human pose estimation, it is surprising that viewpoint is the dominant factor, and not the pose variation. This stresses the fact that real-world data is multi-modal, but not necessarily in the way we expect. Without a principled approach like MDNs, it is difficult to determine the most dominant factors in a data distribution. A stark difference between our work and others who have used mixture models is the training data. Most of the works reporting mode collapse rely on small and controlled datasets for training. But here we show that when there is a large and diverse dataset, just by careful initialization of parameters, MDNs can be trained without any major instability issues. We have made it clear that one can actually use a fully standalone multi-hypothesis model in a real-world scenario without the need to rely on an oracle or to postpone model selection to a downstream task. We think there is potential to learn finer modes from the dataset, maybe on the pose variation, but this needs further research. Especially, it would be very helpful if the role of training data diversity could be analysed theoretically.
At the same time, the sparsity of the revealed modes also reminds us of the sparsity of latent representations in generative models. We attribute this to the fact that deep models, even without advanced special prediction mechanisms, are powerful enough to deliver fairly high quality results on the current datasets. Perhaps a much-needed future direction is applying density estimation models to fundamentally more challenging tasks, like the very recent large-vocabulary instance segmentation task.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
ByeYOerFvr
We use mixture density networks to do full conditional density estimation for spatial offset regression and apply it to the human pose estimation task.
Like language, music can be represented as a sequence of discrete symbols that form a hierarchical syntax, with notes being roughly like characters and motifs of notes like words. Unlike text, however, music relies heavily on repetition on multiple timescales to build structure and meaning. The Music Transformer has shown compelling results in generating music with structure. In this paper, we introduce a tool for visualizing self-attention on polyphonic music with an interactive pianoroll. We use Music Transformer as both a descriptive tool and a generative model. For the former, we use it to analyze existing music to see if the resulting self-attention structure corroborates the musical structure known from music theory. For the latter, we inspect the model's self-attention during generation, in order to understand how past notes affect future ones. We also compare and contrast the attention structure of regular attention to that of relative attention, and examine its impact on the resulting generated music. For example, for the JSB Chorales dataset, a model trained with relative attention is more consistent in attending to all the voices in the preceding timestep and the chords before, and, at cadences, to the beginning of a phrase, allowing it to create an arc. We hope that our analyses will offer more evidence for relative self-attention as a powerful inductive bias for modeling music. We invite the reader to explore our video animations of music attention and to interact with the visualizations at https://storage.googleapis.com/nips-workshop-visualization/index.html. Attention is a cornerstone in neural network architectures. It can be the primary mechanism for constructing a network, such as in the self-attention based Transformer, or serve as a secondary mechanism for connecting parts of a model that would otherwise be far apart or of different modalities with varying dimensionalities. Attention also offers us an avenue for visualizing the inner workings of a model, often to illustrate alignments BID3. For example, in machine translation, the Transformer uses attention to build up both context and alignment, while in LSTM-based seq2seq models, attention eases the word alignment between source and target sentences. For both types, attention points us to where a model is looking when translating BID6 BID0. For example, in speech recognition, attention aligns different modalities, from spectrograms to phonemes BID1. In contrast to the above domains, there is less "groundtruth" as to what should be attended to in a creative domain such as music. Moreover, in contrast to encoder-decoder models where attention serves as alignment, in language modeling self-attention serves to build context, to retrieve relevant information from the past to predict the future. Music theory gives us some insight into the motivic, harmonic, and temporal dependencies across a piece, and attention could be a lens for showing their relevance in a generative setting, i.e. does the model have to pay attention to this previous motif to generate the new note? Music Transformer, based on self-attention BID6, has been shown to be effective in modeling music, being able to generate sequences with repetition on multiple timescales (motifs and phrases) with long-term coherence BID2. In particular, the use of relative attention improved sample quality and allowed the model to generalize beyond lengths observed during training time. Why does relative attention help?
More generally, what does the attention structure look like in these models? In this paper, we introduce a tool for visualizing self-attention on music with an interactive pianoroll. We use Music Transformer as both a descriptive tool and a generative model. For the former, we use it to analyze existing music to see if the resulting self-attention structure corroborates the musical structure known from music theory. For the latter, we inspect the model's self-attention during generation, in order to understand how past notes affect future ones. We explore music attention on two music datasets, JSB Chorales and Piano-e-Competition. The former are chorale harmonizations, and we see attention keeping track of the harmonic progression and also voice-leading. The latter are virtuosic classical piano music, and attention looks back on previous motifs and gestures. We show that for JSB Chorales the heads in multi-head attention distribute themselves and focus on different temporal regions. Moreover, we compare and contrast the attention structure of regular attention to that of relative attention, and examine its impact on the resulting generated music. For example, for the JSB Chorales dataset, a model trained with relative attention is more consistent in attending to all the voices in the preceding timestep and the many chords before, and, at cadences, to the beginning of a phrase, allowing it to create an arc. In contrast, regular attention often becomes a "local" model attending only to the most recent history, resulting in a certain voice repeating the same note for a long duration, perhaps due to overconfidence. We take a language-modeling approach to training generative models for symbolic music. Hence we represent music as a sequence of discrete tokens, with the vocabulary determined by the dataset. The JSB Chorale dataset consists of four-part scored choral music, which can be represented in a pianoroll-like representation with rows being pitch and columns being time, discretized to sixteenth notes. It is serialized in raster-scan fashion when consumed by a language model. For the Piano-e-Competition dataset we use the performance encoding BID4, which consists of a vocabulary of 128 NOTE_ON events, 128 NOTE_OFFs, 100 TIME_SHIFTs allowing for expressive timing at 10ms resolution, and 32 VELOCITY bins for expressive dynamics. The Transformer BID6 is a sequence model based primarily on self-attention. Multiple heads are typically used to allow the model to focus on different parts of the history. These are supported by first splitting the queries Q, keys K, and values V into h parts along the depth dimension d. FIG0 shows the scaled dot-product attention for a single head: Z_h = softmax((Q_h K_h^T + S_rel) / sqrt(D_h)) V_h. Regular attention consists of only the Q_h K_h^T term, while relative attention adds S_rel to modulate the attention logits based on pairwise distances between queries and keys. We adopt S_rel = Skew(Q_h E_h) as in BID2, where E_h are learned embeddings for every possible pairwise distance (a small sketch of this Skew operation is given below). The attention outputs for each head are concatenated and linearly transformed to get Z, an L by D dimensional matrix, where L is the length of the input sequence. FIG0 also shows a full view of our visualization tool for exploring self-attention. The arcs, inspired by BID7, connect the current query (highlighted by the pink playhead) to earlier parts of the piece. Each head bears a different color, and the thickness of the lines gives the attention weights.
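The Skew operation referenced above can be implemented with a pad-reshape-slice trick rather than materializing all pairwise distances explicitly; the following is a small sketch of that trick as we understand it from the Music Transformer formulation (tensor names are ours).

```python
import torch

def skew(rel_logits):
    """Skew (L, L) relative logits so entry (i, j) aligns with distance j - i.

    rel_logits: (..., L, L) tensor, typically Q_h @ E_h.T, where a column
    corresponds to a pairwise distance rather than an absolute position.
    """
    *batch, L, _ = rel_logits.shape
    padded = torch.nn.functional.pad(rel_logits, (1, 0))   # (..., L, L + 1)
    reshaped = padded.reshape(*batch, L + 1, L)            # shift rows by one
    return reshaped[..., 1:, :]                            # (..., L, L), skewed

# S_rel can then be added to Q_h @ K_h.T before the softmax.
q_e = torch.randn(8, 64, 64)   # (heads, L, L) example relative logits
s_rel = skew(q_e)
```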
The user can choose to see a subset of the attention arcs, either by specifying the top n arcs or by specifying a threshold below which attention weights are not shown. Our tool also supports animation, which allows us to inspect whether a certain phenomenon is consistent throughout a piece, and not just at certain timesteps. FIG0 shows that some heads focus on the immediate past, some further back, nicely distributed in time. This may be due to relative attention explicitly modulating attention based on pairwise distance. The left shows how on the bottom layer the attention is dense, while the right shows how on the top layer each position is already a summary, and hence the model only needs to attend to fewer positions. When trained on JSB Chorales, regular attention failed to align the voices, causing one voice to repeat the same note (left in FIG2), while relative attention generated samples with musical phrasing. To compare, we use relative attention (pink) and regular attention (green) to analyze the same JSB Chorale, by feeding the piece through the models and recording their attention weights. FIG3 shows a drastic difference: regular attention only focuses on the immediate past and the beginning of the piece, while relative attention attends to the entire passage. FIG4 shows a sample generated by the Transformer with relative attention trained on the Piano-e-Competition dataset. The top shows a passage with right-hand "triangular" motifs, where the model attends to the runs to learn the scale and also to the peaks to know when to change direction. The bottom shows the same passage with the query on the left hand, and the attention focuses more on the left-hand chords, and also on the right hand when it coincides with the left hand. In the left panel, the query is a left-hand note and attention is more focused on the bottom half; in the right panel, where the query is a right-hand note, there is more attention on the top half. We presented a visualization tool for seeing and exploring music self-attention in the context of music sequences. We have shown some preliminary observations, and we hope this is the beginning of furthering our understanding of how these models learn to generate music.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
ryfxVNEajm
Visualizing the differences between regular and relative attention for Music Transformer.
We study the statistical properties of the endpoint of stochastic gradient descent (SGD). We approximate SGD as a stochastic differential equation (SDE) and consider its Boltzmann-Gibbs equilibrium distribution under the assumption of isotropic variance in the loss gradients. Through this analysis, we find that three factors – learning rate, batch size and the variance of the loss gradients – control the trade-off between the depth and width of the minima found by SGD, with wider minima favoured by a higher ratio of learning rate to batch size. In the equilibrium distribution only the ratio of learning rate to batch size appears, implying that it is invariant under a simultaneous rescaling of each by the same amount. We experimentally show how learning rate and batch size affect SGD from two perspectives: the endpoint of SGD and the dynamics that lead up to it. For the endpoint, the experiments suggest the endpoint of SGD is similar under simultaneous rescaling of batch size and learning rate, and also that a higher ratio leads to flatter minima; both findings are consistent with our theoretical analysis. We note experimentally that the dynamics also seem to be similar under the same rescaling of learning rate and batch size, which we explore by showing that one can exchange batch size and learning rate in a cyclical learning rate schedule. Next, we illustrate how noise affects memorization, showing that high noise levels lead to better generalization. Finally, we find experimentally that the similarity under simultaneous rescaling of learning rate and batch size breaks down if the learning rate gets too large or the batch size gets too small. Despite being massively over-parameterized BID13, deep neural networks (DNNs) have demonstrated good generalization ability and achieved state-of-the-art performances in many application domains such as image BID13 and speech recognition BID1. The reason for this success has been a focus of research recently but still remains an open question. Our work provides new theoretical insights and useful suggestions for deep learning practitioners. The standard way of training DNNs involves minimizing a loss function using SGD and its variants BID4. In SGD, parameters are updated by taking a small discrete step, depending on the learning rate, in the direction of the negative loss gradient, which is approximated based on a small subset of training examples (called a mini-batch). Since the loss functions of DNNs are highly non-convex functions of the parameters, with complex structure and potentially multiple minima and saddle points, SGD generally converges to different regions of parameter space depending on optimization hyper-parameters and initialization. Recently, several works BID2 BID0 BID28 have investigated how SGD impacts generalization in DNNs. It has been argued that wide minima tend to generalize better than sharp minima BID15 BID28. This is entirely compatible with a Bayesian viewpoint that emphasizes targeting the probability mass associated with a solution, rather than the density value at a solution BID21. Specifically, BID28 find that larger batch sizes correlate with sharper minima. In contrast, we find that it is the ratio of learning rate to batch size which is correlated with the sharpness of minima, not batch size alone.
In this vein, while BID9 discuss the existence of sharp minima which behave similarly in terms of predictions compared with wide minima, we argue that SGD naturally tends to find wider minima at higher noise levels in the gradients, and such wider minima seem to correlate with better generalization. In order to achieve our goal, we approximate SGD as a continuous stochastic differential equation BID3 BID22 BID19. Assuming isotropic gradient noise, we derive the Boltzmann-Gibbs equilibrium distribution of this stochastic process, and further derive the relative probability of landing in one local minimum as compared to another in terms of their depth and width. Our main finding is that the ratio of learning rate to batch size, along with the covariance of the gradients, influences the trade-off between the depth and sharpness of the final minima found by SGD, with a high ratio of learning rate to batch size favouring flatter minima. In addition, our analysis provides a theoretical justification for the empirical observation that scaling the learning rate linearly with batch size (up to a limit) leads to identical performance in DNNs BID18 BID12. We verify our theoretical insights experimentally on different models and datasets. In particular, we demonstrate that a high learning rate to batch size ratio (due to either a high learning rate or a low batch size) leads to wider minima and correlates well with better validation performance. We also show that a high learning rate to batch size ratio helps prevent memorization. Furthermore, we observe that multiplying each of the learning rate and the batch size by the same scaling factor results in similar training dynamics. Extending this observation, we validate experimentally that one can exchange learning rate and batch size for the recently proposed cyclic learning rate (CLR) schedule BID31, where the learning rate oscillates between two levels. Finally, we discuss the limitations of our theory in practice. The relationship between SGD and sampling a posterior distribution via stochastic Langevin methods has been the subject of discussion in a number of papers BID8 BID33 BID27 BID26. In particular, BID22 describe the dynamics of stochastic gradient descent (SGD) as a stochastic process that can be divided into three distinct phases. In the first phase, weights diffuse and move away from the initialization. In the second phase, the gradient magnitude dominates the noise in the gradient estimate. In the final phase, the weights are near the optimum. BID29 make related observations from an information-theoretic point of view and suggest the diffusion behaviour of the parameters in the last phase leads to the minimization of mutual information between the input and hidden representation. We also relate the SGD dynamics to the stationary distribution of the stochastic differential equation. Our derivation bears similarity with BID22. However, while BID22 study SGD as an approximate Bayesian inference method in the final phase of optimization in a locally convex setting, our end goal is to analyze the stationary distribution over the entire parameter space reached by SGD. Further, our analysis allows us to compare the probability of SGD ending up in one minimum over another (in terms of width and depth), which is novel in our case. We discuss the Fokker-Planck equation, which has appeared before in the machine learning literature, though we believe the exact form and solution we consider are novel.
For example, in the online setting, BID14 derive a Gibbs distribution from the Fokker-Planck equation, but the relation there does not give the temperature of the Gibbs distribution in terms of the learning rate, batch size and gradient covariance. Our work is also closely related to the ongoing discussion about the role of large batch sizes and the sharpness of the minima found in terms of generalization (BID28). BID28 showed that SGD ends up in a sharp minimum when using a large batch size. BID12 and BID16 empirically observed that scaling up the learning rate, and training for more epochs, leads to good generalization when using a large batch size. Our novelty is in explaining the importance of the ratio of learning rate to batch size. In particular, our theoretical and empirical results show that simultaneously rescaling the batch size and learning rate by the same amount leads SGD to minima having similar width, despite using different batch sizes. Concurrently with this work, BID32 and another concurrent study have both analyzed SGD approximated as a continuous-time stochastic process and stressed the importance of the learning rate to batch size ratio. BID32 focused on the training dynamics, while the other work explored the stationary non-equilibrium solution of the stochastic differential equation for non-isotropic gradient noise, assuming other conditions on the covariance and the loss to force the stationary distribution to be path-independent; their stationary distribution does not have an explicit form in terms of the loss in this case. In contrast to other work, we strictly focus on the explicitly solvable case of the Boltzmann-Gibbs equilibrium distribution with isotropic noise. This focus allows us to relate the noise in SGD, controlled by the learning rate to batch size ratio, with the width of its endpoint. We empirically verify that the width and height of minima correlate with the learning rate to batch size ratio in practice. Our work continues the line of research on the importance of noise in SGD (BID4; BID23; BID22). Our novelty is in formalizing the impact of batch size and learning rate (i.e. the noise level) on the width and depth of the final minima, and in the empirical verification of this. Our focus in this section is on finding the relative probability with which we end optimization in a region near a minimum characterized by a certain loss value and Hessian determinant. We will find that this relative probability depends on the local geometry of the loss function at each minimum, along with the batch size, the learning rate and the covariance of the loss gradients. To reach this result, we first derive the equilibrium distribution of SGD over the parameter space under a stochastic differential equation treatment. We make the assumption of isotropic covariance of the loss gradients, which allows us to write down an explicit closed-form analytic expression for the equilibrium distribution, which turns out to be a Boltzmann-Gibbs distribution. We follow a similar (though not identical) theoretical setup to BID22, approximating SGD with a continuous-time stochastic process, which we now outline. Let us consider a model parameterized by θ = {θ_1, ..., θ_q}. For N training examples x_i, i ∈ {1, ..., N}, the loss function L(θ) and the corresponding gradient g(θ) are defined based on the sum over the loss values for all training examples.
Stochastic gradients g^(S)(θ) arise when we consider a batch B of size S < N of random indices drawn uniformly from {1, ..., N} and form an (unbiased) estimate of the loss and of the gradient based on the corresponding subset of training examples,

L^(S)(θ) = (1/S) Σ_{i∈B} l(θ, x_i),   g^(S)(θ) = (1/S) Σ_{i∈B} ∇_θ l(θ, x_i).

We consider stochastic gradient descent (SGD) with learning rate η, as defined by the update rule

θ_{k+1} = θ_k - η g^(S)(θ_k).

We now make the following assumptions. By the central limit theorem (CLT), we assume the noise in the stochastic gradient is Gaussian with covariance matrix (1/S) C(θ), i.e.

g^(S)(θ) ≈ g(θ) + (1/√S) ε(θ),   ε(θ) ~ N(0, C(θ)).

We note that the covariance is symmetric positive-semidefinite, and so can be decomposed into the product of two matrices, C(θ) = B(θ)B(θ)ᵀ. We assume the discrete process of SGD can be approximated by the continuous-time limit of the following stochastic differential equation (known as a Langevin equation),

dθ/dt = -g(θ) + √(η/S) B(θ) f(t),   (1)

where f(t) is a normalized Gaussian time-dependent stochastic term. Note that the continuous-time approximation of SGD as a stochastic differential equation has been shown to hold as a weak approximation on the condition that the learning rate is small (BID19). Note also that we have not made Assumption 4 of BID22, where they assume the loss can be globally approximated by a quadratic. Instead, we allow for a general loss function, which can have many local minima. The Langevin equation is a stochastic differential equation, and we are interested in its equilibrium distribution, which gives insights into the behavior of SGD and the properties of the points it converges to. Assuming isotropic noise, the Langevin equation is well known to have a Gibbs-Boltzmann distribution as its equilibrium distribution. This equilibrium distribution can be derived by finding the stationary solution of the Fokker-Planck equation, with detailed balance, which governs the evolution with time of the probability density over the values of the parameters. The Fokker-Planck equation and its derivation are standard in the statistical physics literature. In Appendix A we give the equation in the machine learning context and, for completeness of presentation, we also give its derivation. In Appendix C we restate the standard proofs of the stationary distribution of a Langevin system, and provide the resulting Gibbs-Boltzmann equilibrium distribution here, using the notation of this paper: Theorem 1 (Equilibrium Distribution). Assume that the gradient covariance is isotropic, i.e. C(θ) = σ²I, where σ² is a constant, and, as a weak regularity condition, that the loss L(θ) includes an L2 regularization term τ||θ||²_2 for some τ > 0. Then the equilibrium distribution of the stochastic differential equation (1) is given by

P(θ) = P_0 exp( -2L(θ) / (nσ²) ),

where n ≡ η/S and P_0 is a normalization constant, which is well defined for loss functions with L2 regularization. Discussion: Here P(θ) defines the density over the parameter space. The above says that if we run SGD for long enough (under the assumptions made regarding SGD sufficiently matching the infinitesimal limit), then the probability of the parameters being in a particular state asymptotically follows the above density. Note that n ≡ η/S is a measure of the noise in the system, set by the choice of learning rate η and batch size S. The fact that the loss is divided by n emphasizes that the higher the noise n, the less granular the loss surface appears to SGD. The gradient variance C(θ), on the other hand, is determined by the dataset and model priors (e.g. architecture, model parameterization, batch normalization, etc.).
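To make Theorem 1 concrete, the following minimal sketch (my own illustration, not part of the original experiments; the one-dimensional loss, the values of η, S and σ², and the integration step are all hypothetical choices) simulates the Langevin equation (1) with Euler-Maruyama steps and compares the empirical histogram of θ against the predicted density exp(-2L(θ)/(nσ²)).

```python
import numpy as np

# Hypothetical 1D loss with two minima separated by a small barrier.
loss = lambda t: 0.25 * (t**2 - 1.0)**2 + 0.05 * t
grad = lambda t: t * (t**2 - 1.0) + 0.05

eta, S, sigma2 = 0.2, 1, 1.0   # hypothetical learning rate, batch size, C(theta)
n = eta / S                    # noise ratio from the theory
dt = 0.01                      # Euler-Maruyama integration step

rng = np.random.default_rng(0)
theta, samples = 0.0, []
for k in range(1_000_000):
    # One step of d(theta) = -g(theta) dt + sqrt(n * sigma2) dW
    theta += -grad(theta) * dt + np.sqrt(n * sigma2 * dt) * rng.standard_normal()
    if k >= 100_000:           # discard burn-in before equilibrium
        samples.append(theta)

# Theorem 1 predicts P(theta) ~ exp(-2 L(theta) / (n sigma2)), up to normalization.
edges = np.linspace(-2.0, 2.0, 51)
centers = 0.5 * (edges[1:] + edges[:-1])
pred = np.exp(-2.0 * loss(centers) / (n * sigma2))
pred /= pred.sum() * (edges[1] - edges[0])
emp, _ = np.histogram(samples, bins=edges, density=True)
print("mean abs deviation:", np.abs(emp - pred).mean())  # small once well mixed
```

With these settings the chain crosses the barrier many times and the histogram approaches the Boltzmann-Gibbs form; shrinking n concentrates the mass in the deeper minimum, exactly as the next result quantifies.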
The role of C(θ) reveals an important area of investigation, i.e., how different architectures and model parameterizations affect the gradient's covariance structure. We note that in the analysis above, the assumption of the gradient covariance C(θ) being fixed and isotropic in the parameter space is unrealistic. However, it is a simplification that enables straightforward insights regarding the relationship of the noise, batch size and learning rate in the Gibbs-Boltzmann equilibrium. We empirically show that various predictions based on this relationship hold in practice. Returning to SGD as an optimization method, we can ask: given the probability density P(θ), can we derive the probability of ending at a given minimum θ_A? We denote this probability by lowercase p_A = p̃_A · C, where C is a normalization constant which is the same for all minima (the unnormalized probability p̃_A is all we are interested in when estimating the relative probability of finishing in a given minimum compared to another one). This probability is derived in Appendix D and given in the following theorem, which is the core insight from our theory. Theorem 2 (Probability of ending in a region near a minimum θ_A). Assume the loss has a series of separated local minima. Consider one such minimum, with Hessian H_A and loss L_A at the minimum θ_A. Then the unnormalized probability of ending in a region near the minimum θ_A is

p̃_A = (1/√(det H_A)) exp( -2L_A / (nσ²) ),

where n = η/S is the noise used in the SGD process to reach θ_A. Discussion: For this analysis, we qualitatively categorize a minimum θ_A by its loss L_A (depth) and the determinant of its Hessian, det H_A (a larger determinant implies a sharper minimum). The above result shows that the probability of landing in a specific minimum depends on three factors: learning rate, batch size and the covariance of the gradients. The two factors that we directly control appear only in the ratio given by the noise n = η/S. Note that the proof of this result utilizes a Laplace approximation, in which the loss near a given minimum is approximated by a second-order Taylor series in order to evaluate an integral. We emphasize this is not the same as globally treating the loss as a quadratic. To study which kind of minima are more likely if we were to reach equilibrium, it is instructive to consider the ratio of the probabilities p_A and p_B at two distinct minima θ_A and θ_B, respectively, given by

p_A / p_B = √(det H_B / det H_A) · exp( -2(L_A - L_B) / (nσ²) ).

To highlight that towards the equilibrium solution SGD favors wider rather than sharper minima, let's consider the special case L_A = L_B, i.e., both minima have the same loss value. Then

p_A / p_B = √(det H_B / det H_A).

This case highlights that in equilibrium, SGD favors the minimum with the lower determinant of the Hessian (i.e. the flatter minimum) when all other factors are identical. On the flip side, it can be seen that if two minima have the same curvature (det H_A = det H_B), then SGD will favor the minimum with the lower loss. Finally, in the general case when L_A ≥ L_B, it holds that p_A ≥ p_B if and only if

1 / (nσ²) ≤ Y,   where Y ≡ ln(det H_B / det H_A) / (4 (L_A - L_B)).

That is, there is an upper bound on the inverse of the noise for θ_A to be favored in the case that its loss is higher than at θ_B, and this upper bound depends on the difference in the heights compared to the ratio of the widths. In particular, we can see that if det H_B < det H_A, then Y < 0, and so no amount of noise will result in θ_A being more probable than θ_B. A small numerical illustration of this trade-off is given below, before we unpack the bound in words.
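The ratio above is easy to explore numerically. In the sketch below (again my own illustration; the losses and Hessian determinants of the two minima are hypothetical numbers), minimum A is higher but flatter and minimum B is deeper but sharper, and we locate the noise level at which the preference flips.

```python
import numpy as np

def p_tilde(L, detH, n, sigma2=1.0):
    # Unnormalized probability of ending near a minimum with loss L and
    # Hessian determinant detH, from Theorem 2.
    return np.exp(-2.0 * L / (n * sigma2)) / np.sqrt(detH)

# Hypothetical minima: A is higher but flatter, B is deeper but sharper.
L_A, detH_A = 0.10, 1e2
L_B, detH_B = 0.05, 1e6

# Theoretical crossover: p_A >= p_B iff 1/(n sigma2) <= Y.
Y = np.log(detH_B / detH_A) / (4.0 * (L_A - L_B))
print("crossover noise n* =", 1.0 / Y)   # sigma2 = 1 here

for n in [1e-4, 1e-3, 1e-2, 1e-1]:
    ratio = p_tilde(L_A, detH_A, n) / p_tilde(L_B, detH_B, n)
    print(f"n={n:.0e}  p_A/p_B = {ratio:.3g}")
```

At low noise the sharp, deep minimum B dominates; above the crossover n* ≈ 0.022, the wide minimum A becomes more probable, which is the content of the bound 1/(nσ²) ≤ Y.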
In words, if the minimum at θ_A is both higher and sharper than the minimum at θ_B, it is never reached with higher probability than θ_B, regardless of the amount of noise. However, if det H_B > det H_A, then Y > 0, and there is a lower bound on the noise,

nσ² ≥ 1/Y,

to make θ_A more probable than θ_B. In words, if the minimum at θ_A is higher but flatter than the minimum at θ_B, it is favored over θ_B as long as the noise is large enough, as defined by the bound above. To summarize, the presented theory shows that the noise level in SGD (which is defined by the ratio of learning rate to batch size) controls the extent to which optimization favors wider over deeper minima. Increasing the noise by increasing the ratio of learning rate to batch size increases the probability of wider compared to deeper minima. For a discussion of the relative probabilities of critical points that are not strictly minima, see Appendix D. In this section, we empirically study the impact of the learning rate η and the batch size S on the local minimum that SGD finds. We first focus on a 4-layer Batch Normalized ReLU MLP trained on Fashion-MNIST. We study how the noise ratio n = η/S leads to minima with different curvatures and validation accuracies. To measure the curvature at a minimum, we compute the norm of its Hessian (a higher norm implies higher sharpness of the minimum) using the finite difference method. In Figure 1a, we report the norm of the Hessian for local minima obtained by SGD for different values of n = η/S. As n grows, we observe that the norm of the Hessian at the minima decreases, suggesting that higher noise levels steer SGD towards wider minima. To further illustrate the behavior of SGD with different noise levels, we train three Resnet56 models on CIFAR10 using SGD (without momentum) with different noise levels, and compare the resulting minima following BID11, by investigating the loss on the line interpolating between the parameters of two models. More specifically, let θ_1 and θ_2 be the final parameters found by SGD using two different settings. Results indicate that models with a large batch size (Fig. 2, left) or a low learning rate (Fig. 2, middle; both having a lower η/S than the baseline) end up in a sharper minimum relative to the baseline model. These plots are consistent with our theoretical analysis that a higher n = η/S gives preference to wider minima over sharper minima. On the other hand, Figure 2 (right) shows that models trained with roughly the same level of noise end up in minima of similar quality. The following experiment explores this aspect further. We train VGG-11 models (BID30) on CIFAR-10, such that all the models are trained with the same noise level but with different values of the learning rate and batch size. Specifically, we scale the learning rate and the batch size by the same factor β, with S = 50 × β, where we set β = 0.25, 1, 4. We then interpolate between the model parameters found when training with β = 1 and β = 4 (Fig. 3, left), and with β = 1 and β = 0.25 (Fig. 3, right); the runs share the same ratio η/S but have different η and S values, determined by β. As predicted by our theory, the minima for models with identical noise levels are qualitatively similar, as can be seen in these plots. Figure 4 (caption): Learning rate schedule can be replaced by an equivalent batch size schedule. The ratio of learning rate to batch size is equal at all times for both the red and blue curves in each plot. The plots show train and test accuracy for experiments involving the VGG-11 architecture on the CIFAR10 dataset. Left: cyclic batch size schedule (blue) in the range 128 to 640, compared to a cyclic learning rate schedule (red) in the range 0.001 to 0.005. Right: constant batch size 128 and constant learning rate 0.001 (blue), compared to constant batch size 640 and constant learning rate 0.005 (red).
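The Hessian-norm measurement used in these experiments can be sketched in a few lines. The following is my reconstruction of a generic finite-difference approach (not the authors' exact code; the model, data and tolerances are placeholders): it estimates the spectral norm of the loss Hessian by power iteration on Hessian-vector products, each obtained by differencing gradients.

```python
import torch

def hessian_spectral_norm(loss_fn, params, n_iters=20, eps=1e-3):
    """Estimate ||H||_2 at the current parameters by power iteration on
    finite-difference Hessian-vector products:
    Hv ~ (g(p + eps*v) - g(p - eps*v)) / (2*eps).
    `params` must be a list of the model's parameter tensors."""
    flat = torch.nn.utils.parameters_to_vector(params).detach().clone()

    def grad_at(vec):
        torch.nn.utils.vector_to_parameters(vec, params)
        grads = torch.autograd.grad(loss_fn(), params)
        return torch.cat([g.reshape(-1) for g in grads]).detach()

    v = torch.randn_like(flat)
    v /= v.norm()
    hv = v
    for _ in range(n_iters):
        hv = (grad_at(flat + eps * v) - grad_at(flat - eps * v)) / (2 * eps)
        v = hv / (hv.norm() + 1e-12)
    torch.nn.utils.vector_to_parameters(flat, params)  # restore the weights
    return hv.norm().item()  # ~ |largest eigenvalue| of the Hessian

# Example (placeholder names): hessian_spectral_norm(
#     lambda: criterion(model(x_batch), y_batch), list(model.parameters()))
```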
The interpolation results indicate that all the minima have similar width and depth, qualitatively supporting our theoretical observation that for the same noise ratio SGD ends up in minima of similar quality. In this section we look at two experimental phenomena: first, the equilibrium endpoint of SGD, and second, the dynamical evolution of SGD. The former was theoretically analysed in the theory section, while the latter is not directly addressed there, but we note that the two are related: the endpoint is the result of the intermediate dynamics. We experimentally study both phenomena in the following four experiments involving the VGG-11 architecture on the CIFAR10 dataset, shown in Figure 4. Regarding the first phenomenon, the endpoint of SGD: the test accuracy when training with a cyclic batch size and a cyclic learning rate is 89.39% and 89.24%, respectively, and we emphasize that these are similar scores. For the two constant schedules with the same learning rate to batch-size ratio (batch size 128 with learning rate 0.001, and batch size 640 with learning rate 0.005), the test accuracy is 87.25% and 86.92%, respectively, and we again emphasize that these two scores are similar to each other. That in each of these experiments the endpoint test accuracies are similar shows the exchangeability of learning rate for batch size at the endpoint, and is consistent with our theoretical calculation, which says that the characteristics of the minima found at the endpoint are determined by the ratio of learning rate to batch size, not by either individually. Additional results exploring cyclical learning rate and batch size schedules are reported in Appendix F.4. Regarding the second phenomenon, the dynamical evolution: we note the similarity of the training and test accuracy curves for each pair of same-noise curves in each experiment. Our theoretical analysis does not explain this phenomenon, as it does not determine the dynamical distribution. Nonetheless, we report it here as an interesting observation, and point to Appendix B for some intuition from the Fokker-Planck equation on why this may occur. In Appendix F.2 (FIG4) we show the loss curves in more detail. While the epoch-averaged loss curves match well when exchanging batch size for learning rate, the per-iteration loss is not invariant to switching batch size for learning rate. In particular, we note that each run with a smaller batch size has a higher variance in the per-iteration loss than its same-noise pair. This is expected, since from one iteration to the next the examples will have higher variance for a smaller batch size. The take-away message from this section is that the endpoint and the dynamics of SGD are approximately invariant if the batch size and the learning rate are simultaneously rescaled by the same amount; a minimal sketch of such an equivalent schedule follows below. This is in contrast to a commonly used heuristic consisting of scaling the learning rate with the square root of the batch size, i.e. of keeping the ratio η/√S constant. This is used for example by BID16 as a way of keeping the covariance matrix of the parameter update step the same for any batch size.
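To make the exchangeability concrete, here is a small schedule sketch (my illustration, not the paper's training code; the ranges 0.001 to 0.005 and 128 to 640 are taken from the Figure 4 caption, and the assumption that the batch size is cycled reciprocally so the ratios match exactly is mine).

```python
def triangular_cycle(epoch, lo, hi, period=10):
    """Linear up-then-down cycle between lo and hi with the given period."""
    phase = (epoch % period) / period
    frac = 2 * phase if phase < 0.5 else 2 * (1 - phase)
    return lo + (hi - lo) * frac

# Schedule A: cycle the learning rate at a fixed batch size of 640.
def schedule_lr(epoch):
    return triangular_cycle(epoch, 0.001, 0.005), 640            # (eta, S)

# Schedule B: fixed learning rate, batch size cycled (reciprocally) so that
# eta / S is identical to schedule A at every epoch: S = 0.001 * 640 / eta_A.
def schedule_bs(epoch):
    eta_a = triangular_cycle(epoch, 0.001, 0.005)
    return 0.001, round(0.001 * 640 / eta_a)                     # (eta, S)

for epoch in range(10):
    (ea, sa), (eb, sb) = schedule_lr(epoch), schedule_bs(epoch)
    assert abs(ea / sa - eb / sb) / (ea / sa) < 0.01             # same noise n
```

The theory predicts that the two schedules should give similar endpoints, which is what the experiments above report.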
Our theory and experiments, however, suggest changing the learning rate and batch size in a way that keeps the ratio n = η/S constant instead, since this results in the same equilibrium distribution. To generalize well, a model must identify the underlying pattern in the data instead of simply perfectly memorizing each training example. An empirical approach to test for memorization is to analyze how well a DNN can fit a training set when the true labels are partly replaced by random labels (BID13; BID2). The experiments described in this section highlight that SGD with a sufficient amount of noise improves generalization at a given level of memorization. Experiments are performed on the MNIST dataset with an MLP similar to the one used by BID2, but with 256 hidden units. We train the MLP with different amounts of random labels in the training set. For each level of label noise, we evaluate the impact of the noise ratio n = η/S. Figure 5 (caption): Impact of η/S on memorization of MNIST when 25% and 50% of the labels in the training set are replaced with random labels, using no momentum (on the right) or momentum with parameter 0.9 (on the left). We observe that for a specific level of memorization, high η/S leads to better generalization. Red has a higher value of η/S than blue. Figure 5 highlights that SGD with low noise n = η/S steers the endpoint of optimization towards a minimum with low generalization ability. While Figure 5 reports the generalization at the endpoint, we observe that SGD with larger noise continuously steers away from sharp solutions throughout the dynamics. We also reproduce the observation reported by BID2: that memorization roughly starts after reaching maximum generalization. For runs with momentum we exclude learning rates higher than 0.02, as they lead to divergence. Full learning curves are reported in FIG5, included in Appendix F.3. Our analysis relies on the assumption that the gradient step is sufficiently small to guarantee that the first-order approximation of a Taylor expansion is a good estimate of the loss function. In the case where the learning rate becomes too high, this approximation is no longer suitable, and the continuous limit of the discrete SGD update equation will no longer be valid. In this case, the stochastic differential equation doesn't hold, and hence neither does the Fokker-Planck equation, and so we don't expect our theory to be valid. In particular, we don't expect to arrive at the same stationary distribution, as indicated by a fixed ratio η/S, if the learning rate gets too high. This is exemplified by the empirical results reported in FIG7, where similar learning dynamics and final performance can be observed when simultaneously multiplying the learning rate and batch size by a factor β, up to a certain limit. FIG7 (caption): In each experiment, we multiply the learning rate (η = 0.1) and the batch size (S = 50) by β, such that the ratio (β×η)/(β×S) is fixed. We observe that for the same ratio, increasing the learning rate and batch size yields similar performance up to a certain β value, beyond which the performance drops significantly. (d): Breaking-point analysis when half the noise level, with base η = 0.05 and S = 50, is used; the breaking point happens at a much larger β when using smaller noise. All experiments are repeated 5 times with different random seeds. The graphs denote the mean validation accuracies, and the numbers in brackets denote the mean and standard deviation of the maximum validation accuracy across different runs. The * denotes that at least one seed led to divergence. This rescaling is done for different training set sizes to investigate whether the breaking point depends on this factor (FIG7). The plots suggest that the breaking point happens at smaller β values if the dataset size is smaller. We also investigate the influence of β when half the noise level is used, due to halving the learning rate (FIG7).
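The breaking-point protocol can be summarized in a few lines. The sketch below is my paraphrase of the setup (train_and_evaluate is a placeholder for an actual training loop returning validation accuracy, or NaN on divergence): it rescales the base learning rate and batch size by a common factor β and records where the joint rescaling stops being benign.

```python
import math

def breaking_point_sweep(train_and_evaluate, base_eta=0.1, base_S=50,
                         betas=(0.25, 0.5, 1, 2, 4, 8, 16), seeds=range(5)):
    """For each beta, train with (beta*eta, beta*S); the noise n = eta/S is
    fixed, so the theory predicts similar results until the learning rate
    gets too large for the continuous-time approximation to hold."""
    results = {}
    for beta in betas:
        accs = [train_and_evaluate(eta=beta * base_eta,
                                   batch_size=round(beta * base_S),
                                   seed=s) for s in seeds]
        diverged = any(math.isnan(a) for a in accs)
        results[beta] = (sum(accs) / len(accs), diverged)
    return results
```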
These experiments strongly suggest that the reason behind the breaking point is the use of a high learning rate, because the performance drops at a much higher β when the base learning rate is halved. A similar experiment is performed on Resnets (for results, see Fig. 7 in the appendix). We highlight other limitations of our theory in Appendix E. In the theoretical section of this work we treat the learning rate as fixed throughout training. However, in practical applications, the learning rate is annealed to a lower value, either gradually or in discrete jumps. When viewed within our framework, at the beginning, with high noise, SGD favors width over depth of a region; then, as the noise decreases, SGD prioritizes depth more strongly. This can be seen from Theorem 2 and the comments that follow it. In the theoretical section we made the additional assumption that the covariance of the gradients is isotropic, in order to be able to derive a closed-form solution for the equilibrium distribution. We do not expect this assumption to hold in practice, but we speculate that there may be mechanisms which drive the covariance towards isotropy; for example, one may be able to tune learning rates on a per-parameter basis in such a way that the combination of learning rate and covariance matrix is approximately isotropic, which may lead to improvements in optimization. Perhaps some existing mechanisms such as batch normalization or careful initialization give rise to a more equalized covariance; we leave the study of this for future work. We note further that our theoretical analysis considered an equilibrium distribution, which is independent of the intermediate dynamics. However, this may not be the case in practice. Without the isotropic covariance, the system of partial differential equations in the late-time limit will in general have a solution which depends on the path through which optimization occurs, unless other restrictive assumptions are made to force this path dependence to disappear. Despite this simplifying assumption, our empirical results are consistent with the developed theory. We leave the study of path dependence and dynamics to future work. In the experiments investigating memorization we explored how the noise level changes the preference for wide minima over sharp ones. BID2 argue that SGD first learns the true labels, before focusing on the random labels. Our insight is that in the second phase the high level of noise maintains generalization. This illustrates the trade-off between the width and the depth of minima in practice. When the noise level is lower, DNNs are more likely to fit the random labels better, at the expense of generalizing less well on the true ones. We shed light on the role of noise in the SGD optimization of DNNs and argue that three factors (batch size, learning rate and gradient variance) strongly influence the properties (loss and width) of the final minima at which SGD converges. The learning rate and batch size of SGD can be viewed as one effective hyper-parameter acting as a noise factor n = η/S. This, together with the gradient covariance, influences the trade-off between the loss and the width of the final minima. Specifically, higher noise favors wider minima, which in turn correlates with better generalization. Further, we experimentally verify that the noise n = η/S determines the width and height of the minima towards which SGD converges. We also show the impact of this noise on the memorization phenomenon. We discuss the limitations of the theory in practice, exemplified by what happens when the learning rate gets too large.
We also experimentally verify that η and S can be simultaneously rescaled as long as the noise η/S remains the same. In this appendix we derive the Fokker-Planck equation. For any stochastic differential equation, since the evolution is noisy, we can't say exactly where in parameter space the parameter values will be at any given time. But we can talk about the probability P(θ, t | θ_0, t_0) that the parameters take a certain value θ at a certain time t, given that they started at θ_0 at time t_0. That is captured by the Fokker-Planck equation, which reads

∂P(θ, t)/∂t = ∇_θ · (g(θ) P) + (η/2S) ∂²/(∂θ_i ∂θ_j) [C_ij(θ) P].

In this appendix we derive the above equation from the stochastic differential equation (1). We will not be interested in pure mathematical rigour, but intend the proof to add intuition for the machine learning audience. For brevity we will sometimes write the probability just as P(θ, t). We will sometimes make use of tensor index notation, where a tensor is denoted by its components (for example, θ_i are the components of the vector θ), and we will use the index summation convention, where a repeated index is to be summed over. We start with the stochastic differential equation

dθ/dt = -g(θ) + √(η/S) B(θ) f(t).

A formal expression for the probability for a given noise function f is given by P(θ, t) = δ(θ - θ_f(t)), where θ_f(t) is the solution for that particular noise; but since we don't know the noise, we instead average over all possible noise functions to get

P(θ, t) = E_f [ δ(θ - θ_f(t)) ].

While this is just a formal solution, we will make use of it later in the derivation. We now consider a small step in time δt, working at first order in δt, and ask how far the parameters move, denoted δθ, which is given by integrating the stochastic differential equation over [t, t + δt]:

δθ = -g(θ) δt + √(η/S) B(θ) ∫_t^{t+δt} f(t') dt',

where we've assumed that δθ is small enough that we can evaluate g and B at the original θ. We now look at expectations of δθ. Using the fact that the noise is normalized Gaussian, E(f(t)) = 0, we get (switching to index notation for clarity)

E(δθ_i) = -g_i(θ) δt,

and using that the noise is normalized Gaussian again, we have E(f(t) f(t')ᵀ) = I δ(t - t'), leading to

E(δθ_i δθ_j) = (η/S) C_ij(θ) δt + O(δt²).

If at time t we are at position θ' and end up at position θ = θ' + δθ at time t + δt, then we can take the formal expression above and Taylor expand it in δθ_i:

P(θ, t+δt | θ', t) = δ(θ - θ') - δθ_i ∂/∂θ_i δ(θ - θ') + (1/2) δθ_i δθ_j ∂²/(∂θ_i ∂θ_j) δ(θ - θ') + ...

We deal with the derivatives of the delta functions in the following way. We will use the following identity, called the Chapman-Kolmogorov equation, which reads

P(θ, t+δt | θ_0, t_0) = ∫ dθ' P(θ, t+δt | θ', t') P(θ', t' | θ_0, t_0)

for any t' such that t_0 ≤ t' ≤ t + δt. This identity is an integral version of the chain rule of probability, stating that there are multiple paths of getting from an initial position θ_0 to θ at time t + δt, and one should sum over all of these paths. We now substitute the Taylor expansion into the first factor on the right-hand side, with t' set to t, and apply integration by parts (assuming vanishing boundary conditions at infinity) to move the derivatives off the delta functions and onto the other terms. Taking expectations over the noise and using the two moments computed above, we end up with

P(θ, t+δt) = P(θ, t) + δt ∂/∂θ_i [g_i(θ) P(θ, t)] + δt (η/2S) ∂²/(∂θ_i ∂θ_j) [C_ij(θ) P(θ, t)] + O(δt²).

We can then take the first term on the right-hand side to the other side, divide by δt and take the limit δt → 0, getting a partial derivative with respect to time on the left-hand side, leading directly to the Fokker-Planck equation quoted above (where we have reverted to vector notation for conciseness):

∂P(θ, t)/∂t = ∇_θ · (g(θ) P) + (η/2S) ∂²/(∂θ_i ∂θ_j) [C_ij(θ) P].

In this appendix we give some supplementary comments about the intuition we can gain from the Fokker-Planck equation.
If the learning rate and batch size are constant and the covariance is proportional to the identity, C = σ²I with σ² constant, then we can rewrite the Fokker-Planck equation in the following form,

∂P/∂t_η = ∇_θ · [ g(θ) P + (ησ²/2S) ∇_θ P ],

where we have rescaled the time coordinate to t_η ≡ tη. One can now see that, in terms of this rescaled time coordinate, the ratio between the drift and diffusion terms is governed by the ratio

ησ² / (2S).

In terms of the balance between drift and diffusion, we see that a higher value of this ratio gives rise to a more diffusive evolution, while a lower value allows the potential (drift) term to dominate. In the next section we will see how this ratio controls the stationary distribution to which SGD converges. For now we highlight that, in terms of this rescaled coordinate, only this ratio controls the evolution towards the stationary distribution (not just its endpoint). That is, learning rate and batch size are interchangeable in the sense that the ratio is invariant under the transformations S → aS, η → aη, for a > 0. But note that the time it takes to reach the stationary distribution depends on η as well, because of the rescaled time variable. For example, for a higher learning rate but a constant ratio, one arrives at the same stationary distribution, but in a quicker time, by a factor of 1/η. However, a caution here is necessary. The first-order SGD update equation only holds for η small enough that a first-order approximation of a Taylor expansion is valid, and hence we expect the first-order approximation of SGD as a continuous stochastic differential equation to break down for high η. Thus, we expect learning rate and batch size to be interchangeable up to a maximum value of η at which the approximation breaks. In this appendix we prove the equation quoted in the main text, which claims that for an isotropic covariance, C(θ) = σ²I with σ constant, the equilibrium solution of the Fokker-Planck equation has the form

P(θ) = P_0 exp( -2L(θ) / (nσ²) ),

where n ≡ η/S and P_0 is a normalization constant, which is well defined for loss functions with L2 regularization. In order to prove this is the equilibrium distribution, we need to solve the Fokker-Planck equation with the left-hand side set equal to zero (which would give a stationary distribution), and further we require for equilibrium that detailed balance holds. To do this, we begin by writing the Fokker-Planck equation in a slightly different form, making use of a probability current J, defined as

J(θ, t) = g(θ) P + (η/2S) ∇_θ · (C(θ) P),

in which the Fokker-Planck equation becomes

∂P/∂t = ∇_θ · J.

At this point we use the assumption that C(θ) = σ²I to get

J = g(θ) P + (ησ²/2S) ∇_θ P.

The stationary solution has ∂P(θ, t)/∂t = 0 and hence ∇_θ · J = 0. But we require the equilibrium solution, which is a stronger demand than just the stationary solution. The equilibrium solution is a particular stationary solution in which detailed balance occurs. Detailed balance means that in the stationary solution each individual transition balances precisely with its time reverse, resulting in zero probability current, see §5.3.5 of BID10, i.e. J = 0. Detailed balance is a sufficient condition for having entropy increasing with time. A non-zero J with ∇_θ · J = 0 would correspond to a non-equilibrium stationary distribution, which we don't consider here. For the equilibrium solution, setting J = 0 and using g(θ) = ∇_θ L(θ), we get for the probability distribution

∇_θ P = -(2/(nσ²)) ∇_θ L(θ) P   ⟹   P(θ) = P_0 exp( -2L(θ) / (nσ²) ),

which is the desired stationary solution we intended to find. A quick numerical check of this claim is sketched below.
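As a sanity check on this derivation (my own verification, using a hypothetical one-dimensional loss), one can confirm numerically that the claimed equilibrium density makes the probability current vanish pointwise.

```python
import numpy as np

n, sigma2 = 0.5, 1.0
L = lambda t: 0.25 * t**4 - 0.5 * t**2               # toy double-well loss
g = lambda t: t**3 - t                               # g = dL/dtheta
P = lambda t: np.exp(-2.0 * L(t) / (n * sigma2))     # unnormalized equilibrium

theta = np.linspace(-1.5, 1.5, 7)
h = 1e-5
dP = (P(theta + h) - P(theta - h)) / (2 * h)         # numerical derivative of P
J = g(theta) * P(theta) + 0.5 * n * sigma2 * dP      # probability current
print(np.abs(J / P(theta)).max())                    # ~0 up to finite differences
```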
Finally, to ensure that P_0 is a finite value, we note that the loss function L(θ) can be decomposed as

L(θ) = L_0(θ) + τ||θ||²_2,

where τ > 0 is the constant controlling the L2 regularization and L_0(θ) ≥ 0. Then we see that

P_0^{-1} = ∫ dθ exp( -2L(θ)/(nσ²) ) = ∫ dθ exp( -2L_0(θ)/(nσ²) ) exp( -2τ||θ||²_2/(nσ²) ).

Since L_0(θ) ≥ 0, and hence exp(-2L_0(θ)/(nσ²)) ≤ 1, we have

P_0^{-1} ≤ ∫ dθ exp( -2τ||θ||²_2/(nσ²) ),

and we now note that the right-hand side is finite because it is just a multidimensional Gaussian integral. Thus, P_0 has a finite value, and hence the stationary distribution P(θ) is well defined. In this appendix we derive the discrete set of probabilities of ending at each minimum, as given in Theorem 2. Essentially, we use Laplace's method to approximate the integral of the probability density in the region near a minimum. This is a common approach used to approximate integrals, appearing for example in BID17. We work locally near θ_A and take the following approximation of the loss function, valid near the minimum:

L_{R_A}(θ) = L_A + (1/2)(θ - θ_A)ᵀ H_A (θ - θ_A),

where H_A is the Hessian, which is positive definite for a minimum, and where the subscript R_A indicates that this approximation only holds in the region R_A near the minimum θ_A. We emphasize this is not the same as Assumption 4 of BID22, where they assume the loss can be globally approximated by a quadratic. In contrast, we allow for a general loss, with multiple minima, and locally to each minimum we approximate the loss by its second-order Taylor expansion in order to evaluate the integral in the Laplace method. The distribution P(θ) is a probability density, while we are interested in the discrete set of probabilities of ending at a given minimum, which we denote by lowercase p_A. To calculate this discrete set of probabilities, for each minimum we need to integrate the stationary distribution over a region containing the minimum. Integrating P(θ) over some region R_A around θ_A and using the local quadratic approximation gives

p_A = ∫_{R_A} dθ P(θ) ≈ P_0 exp( -2L_A/(nσ²) ) ∫_{R_A} dθ exp( -(θ - θ_A)ᵀ H_A (θ - θ_A)/(nσ²) ) ≈ P̃_0 exp( -2L_A/(nσ²) ) / √(det H_A),

where in the last step we assume the region is large enough that an approximation by the full Gaussian integral can be used (i.e. that the tails don't contribute, which is fine as long as the region is sufficiently large compared to the Gaussian widths set by H_A); note that the region can't be too large either, otherwise we would invalidate our local assumption. The picture is that the minima are sufficiently far apart that the region can be taken sufficiently large for this approximation to be valid. Note that P̃_0 is different from P_0 and includes the normalization factors from performing the Gaussian integral, which are the same for all minima. Since we are interested in relative probabilities between different minima, we can consider the unnormalized probability; dropping factors that are common among all minima, we get

p̃_A = exp( -2L_A/(nσ²) ) / √(det H_A).

This is the required expression given in the main text; a small numerical check follows below. We note that the derivation above concerns strict minima, i.e., minima with a positive definite Hessian. In practice, however, for deep neural networks with a large number of parameters, it is unrealistic to expect the endpoint of training to be at a strict minimum. Instead, it is more likely to be a point at which the Hessian has positive eigenvalues in a few directions, while the other eigenvalues are (approximately) zero. In such cases, to understand which minima SGD favors, we can use the fact that at equilibrium the iterate of SGD follows the distribution

P(θ) ∝ exp( -2L(θ)/(nσ²) ).

By definition, this means that at any time during equilibrium, the iterate is more likely to be found in a region of higher probability.
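The Laplace step is also easy to check numerically. The sketch below (my verification; the 2-dimensional quadratic minimum and all constants are arbitrary) compares a brute-force integral of the equilibrium density around a minimum with the closed-form Gaussian approximation, including the (π n σ²)^{d/2} factor that is common to all minima and therefore dropped in the text.

```python
import numpy as np

n, sigma2 = 0.1, 1.0
L_A = 0.3
H_A = np.array([[4.0, 1.0], [1.0, 2.0]])       # positive definite Hessian

def loss(x, y):
    d = np.stack([x, y], axis=-1)               # minimum placed at the origin
    return L_A + 0.5 * np.einsum('...i,ij,...j->...', d, H_A, d)

# Brute-force integral of exp(-2 L / (n sigma2)) over a box around the minimum.
xs = np.linspace(-2, 2, 801)
X, Y = np.meshgrid(xs, xs)
dx = xs[1] - xs[0]
brute = np.sum(np.exp(-2.0 * loss(X, Y) / (n * sigma2))) * dx * dx

# Laplace approximation with the Gaussian normalization kept explicit.
d = 2
laplace = (np.pi * n * sigma2) ** (d / 2) \
          * np.exp(-2.0 * L_A / (n * sigma2)) / np.sqrt(np.linalg.det(H_A))
print(brute, laplace)   # should agree to several digits
```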
In the restrictive case of a strict minimum, we model it as a Gaussian and characterize the probability of landing in a minimum depending on the curvature and depth of the loss around that minimum. In the general case of minima with degenerate (flat) directions, we can say that a minimum with more volume is a more probable one. Figure 7 (caption): Experiments involving the Resnet56 architecture on the CIFAR10 dataset. In each curve, we multiply η and S by a given factor (increasing both the batch size and the learning rate, keeping the ratio η/S fixed). We observe that multiplying by a factor of up to 5 results in similar performances. However, the performance degrades for factors greater than 5. Due to our assumptions, we expect the theory to become unreliable when the discrete-to-continuous approximation fails, when the covariance of the gradients is non-isotropic, when the batch size becomes comparable to the finite size of the training set, and when momentum is considered. We discussed the limits of the discrete-to-continuous approximation in Section 4.4 (further illustrated in Fig. 7). When the covariance of the gradients is highly non-isotropic, the equilibrium solution to the Fokker-Planck equation is the solution to a complicated partial differential equation, and one can't easily spot the solution by inspection as we do in Appendix C. We expect this approximation to break down especially in the case of complicated architectures, where different gradient directions will have very different gradient covariances. Our theory does not involve the finite size of the training set, which is a drawback of the theory. This may be especially apparent when the batch size becomes large compared to the training set size, and we expect the theory to break down at this point. Finally, we mention that momentum is used in practical deep learning optimization algorithms. Our theory does not consider momentum, which is a drawback of the theory, and we expect it to break down in models in which momentum is important. We can write a Langevin equation for the case of momentum, with momentum damping coefficient μ,

dv/dt = -μ v - g(θ) + √(η/S) B(θ) f(t),

where the velocity is v = dθ/dt. From the corresponding, more complicated Fokker-Planck equation, in which P is now a function of the velocity v as well as of θ and t (and where we've again assumed the gradient covariance varies slowly compared to the loss), it is hard to spot the equilibrium distribution. We leave the study of this for future work. For now, we just note that a factor of η can be taken from the right-hand side, and then the ratio between the diffusion and drift terms will again be the ratio ησ²/(2S). In this appendix we look at other experiments exploring the correlation between the learning rate to batch-size ratio and the sharpness of minima, and between this ratio and validation performance. In FIG9, we report the validation accuracy for Resnet56 models trained on CIFAR10 with different learning rate to batch-size ratios. We notice that there is a peak validation accuracy at a learning rate to batch-size ratio of around 2 × 10⁻³. In particular, this example emphasizes that a higher learning rate to batch-size ratio doesn't necessarily lead to a higher validation accuracy. Instead, it acts as a control on the validation accuracy, and there is an optimal learning rate to batch-size ratio. FIG10 shows the results of a variant of the 4-layer ReLU experiment, where we use 20 layers and remove Batch Normalization. The inspiration is to test the predictions of the theory in this more challenging setup.
We again observe a correlation of the Hessian norm with the learning rate to batch-size ratio (though a weaker one), and similarly between validation performance and the learning rate to batch-size ratio. Figures 10, 11 and 12 report additional line-interpolation plots between models that use different learning rate to batch-size ratios but a similar learning rate decay throughout training. We repeat each experiment several times to ensure robustness with respect to the model's random initialization. Results indicate that models with a large batch size or a low learning rate end up in a sharper minimum relative to the baseline model. In this appendix we show in more detail the experiments of Section 4.2, which demonstrate the exchangeability of learning rate and batch size. In FIG4 we show the log cross-entropy loss, the epoch-averaged log cross-entropy loss, the train, validation and test accuracy, as well as the batch size schedule and the learning rate schedule. Table 1 (caption): Comparison between different cyclical training schedules (cycle length and learning rate are optimized using a grid search). Discrete schedules perform similarly to, or slightly better than, the triangular schedule. Additionally, the discrete S schedule leads to much wider minima for a similar loss. The Hessian norm is approximated on 1 out of 6 repetitions and measured at the minimum value (usually the endpoint of training). We see in the learning rate to batch-size ratio plot that the orange and blue lines (cyclic schedules) have the same ratio of learning rate to batch size as each other throughout, and that their dynamics are very similar to each other. The same holds for the green and red lines, with constant batch size and learning rate. This supports the theoretical results of this paper, that the ratio S/η governs the stationary distribution of SGD. We note that it is not just in the stationary distribution that the exchangeability holds in these plots; it appears throughout training, highlighted especially in the cyclic schedules. We postulate that, due to the scaling relation in the Fokker-Planck equation, the exchangeability holds throughout learning as well as at the end, as long as the learning rate does not get so high as to ruin the approximations under which the Fokker-Planck equation holds. We note that similar behaviour also occurs for standard learning rate annealing schedules, which we omit here for brevity. We report learning curves from the memorization experiment with 0.0 momentum; see Fig. 14. We additionally confirm results similar to the previous experiments and show the correlation between the batch size and learning rate ratio and the norm of the Hessian; see Fig. 15, without momentum (left) and with momentum 0.9 (right). It has been observed that a cyclic learning rate (CLR) schedule leads to better generalization (BID31). In Sec. 4.2 we demonstrated that one can exchange a cyclic learning rate schedule (CLR) for a cyclic batch size schedule (CBS) and approximately preserve the practical benefit of CLR. This exchangeability shows that the generalization benefit of CLR must come from the varying noise level, rather than just from cycling the learning rate. To explore why this helps generalization, we run VGG-11 on CIFAR10 using 4 training schedules: we compare two discrete schedules (where either η or S switches discretely from one value to another between epochs) and two baseline schedules, one constant (η is constant) and one triangular (η is interpolated linearly between its maximum and minimum value). We track the norm of the Hessian and the training loss throughout training. Each experiment is repeated six times.
For each schedule we optimize η in [1e-3, 5e-2] and the cycle length in {5, 10, 15} on a validation set. In all cyclical schedules the maximum value (of η or S) is 5× larger than the minimum value. First, we observe that cyclical schemes oscillate between sharp and wide regions of the parameter space; see FIG7. Next, we empirically demonstrate that a discrete schedule varying either S or η performs similarly to, or slightly better than, the triangular CLR schedule; see Tab. 1. Finally, we observe that cyclical schemes reach wider minima at the same loss value; see FIG7 and Tab. 1. All of the above suggest that, by changing noise levels, cyclical schemes reach different endpoints than constant learning rate schemes with the same final noise level. We leave the exploration of the implications, and a more thorough comparison with other learning schedules, for future work. FIG7 (caption): Cyclical schemes oscillate between sharp and wide regions. Additionally, cyclical schemes find wider minima than the baseline run for the same level of loss, which might explain their better generalization. All cyclical schedules use a base η = 0.005 and a cycle length of 15 epochs, which approximates convergence at the end of each cycle. Plots from left to right: discrete S, discrete η, triangular η, constant learning rate η = 0.001, constant learning rate η = 0.005. On the vertical axis we report the loss (red) and the approximated norm of the Hessian (blue).
rJma2bZCW
Three factors (batch size, learning rate, gradient noise) change in a predictable way the properties (e.g. sharpness) of the minima found by SGD.
Although word analogy problems have become a standard tool for evaluating word vectors, little is known about why word vectors are so good at solving these problems. In this paper, I attempt to further our understanding of the subject by developing a simple, but highly accurate, generative approach to solve the word analogy problem for the case when all terms involved in the problem are nouns. My results demonstrate the ambiguities associated with learning the relationship between a word pair, and the role of the training dataset in determining the relationship which gets most highlighted. Furthermore, my results show that the ability of a model to accurately solve the word analogy problem may not be indicative of a model's ability to learn the relationship between a word pair the way a human does. Word vectors constructed using Word2vec (BID6; BID8) and Glove (BID9) are central to the success of several state-of-the-art models in natural language processing (BID1; BID2; BID7; BID11). These vectors are low-dimensional vector representations of words that accurately capture the semantic and syntactic information about the word in a document. The ability of these vectors to encode language is best illustrated by their efficiency at solving word analogy problems. The problem involves predicting a word, D, which completes analogies of the form 'A:B::C:D'. For example, if the phrase is 'King:Queen::Man:D', then the appropriate value of D is Woman. Word2vec solves these problems by observing that the word vectors for A, B, C and D satisfy the equation Vec(D) ≈ Vec(C) + Vec(B) - Vec(A) in several cases. Although this equation accurately resolves the word analogy for a wide variety of semantic and syntactic problems, the precise dynamics underlying it are largely unknown. Part of the difficulty in understanding the dynamics is that word vectors are essentially 'black boxes' which lack interpretability. This difficulty has been overcome in large part due to the systematic analyses of Levy, Goldberg and colleagues, who have derived connections between word vectors and the more human-interpretable count-based approach of representing words. They show that 1) there are mathematical equivalences between Word2vec and the count-based approach (BID4), 2) the count-based approach can produce results comparable to Word2vec on word analogy problems (BID3), and, more generally, 3) the count-based approach can perform as well as Word2vec on most NLP tasks when the hyper-parameters in the model are properly tuned (BID5). Their results (see section 9 in BID3) demonstrate that Vec(B) - Vec(A) is likely capturing the 'common information' between A and B, and this information is somehow being 'transferred' onto C to compute D. Still the question remains: how is this transference process taking place? The answer would provide insight into the topology of word vectors and would help us to identify gaps in our understanding of word vectors. In this paper, I attempt to gain insights into the transference process by building a simple generative algorithm for solving semantic word analogy problems in the case where A, B, C and D are nouns. My algorithm works in two steps: in the first step, I compute a list of nouns that likely represents the information that is common to both A and B. In the second step, I impose the information about the nouns obtained in the first step onto C to compute D.
Both steps of the algorithm work only on word counts; therefore, it is possible to precisely understand how and why D is generated in every word analogy question. Despite the simplicity of my approach, the algorithm is able to produce results comparable to Word2vec on the semantic word analogy questions, even using a very small dataset. My study reveals insights into why word vectors solve certain classes of word analogy problems much better than others. I show that there is no universal interpretation of the information contained in Vec(B) - Vec(A), because the 'common information' between A and B is strongly dependent on the training dataset. My results reveal that a machine may not be 'learning' the relationship between a pair of words the way a human does, even when it accurately solves an analogy problem. Problem Statement. In this paper, I analyze a variant of the semantic word analogy problem studied in BID8. The problem can be stated as follows: given 3 nouns, A, B and C, appearing in the text T, find a fourth noun D such that the semantic relationship (R) between A and B is the same as the semantic relationship between C and D. Here, R describes an 'is a' relationship; for instance, if A = Beijing and B = China, then R is likely to be capital, since Beijing is a capital of China. Typically, the relationship between A and B will not be unique; in the example above, we could also have said Beijing is a city in China, or Beijing is a center of tourism in China. The dataset: For my analysis, the text T comprises the first billion characters from Wikipedia. This dataset contains less than 10% of the information present in Wikipedia. The data can be downloaded from http://mattmahoney.net/dc/enwik9.zip and pre-processed using wikiextractor, detailed at http://medialab.di.unipi.it/wiki/Wikipedia_Extractor. The raw data is divided into several unrelated chapters, e.g., there is a chapter on 'Geography of Angola', another on 'Mouthwash Antiseptic', etc. As part of the pre-processing, I remove all those chapters containing less than 5 words. Analysis questions: I test the efficacy of my proposed solution using a subset of the semantic word analogy problems compiled by BID8 that is relevant to this study. The test set used here comprises 8,363 problems divided into 4 categories: common capitals (e.g., Athens:Greece::Oslo:Norway), all capitals (e.g., Vienna:Austria::Bangkok:Thailand), currencies (e.g., Argentina:peso::Hungary:forint) and cities in states of the U.S. (e.g., Dallas:Texas::Henderson:Nevada). The original dataset comprises questions in a fifth category of gender inflections (e.g., grandfather:grandmother::father:mother), which is left out of the analysis because many of the problems involve pronouns (e.g., his:her::man:woman). Word2vec. I compare the results produced by my method with those obtained using Word2vec. Word2vec derives low-dimensional vector representations for words such that 'similar' words occupy 'similar' regions in a vector space. Given a sequence of words w_1, w_2, ..., w_T, Word2vec maximizes the log probability

(1/T) Σ_{t=1}^{T} Σ_{-w ≤ j ≤ w, j ≠ 0} log p(w_{t+j} | w_t),   (1)

where w is the window size, and p(w_{t+j} | w_t) is a function of the word vectors for w_{t+j} and w_t, respectively; for detailed insights into the functioning of Word2vec, see BID0.
Given the word vectors Vec(A), Vec(B) and Vec(C) for A, B and C respectively, Word2vec derives D as the word whose word vector best satisfies the relationship

Vec(D) ≈ Vec(C) + Vec(B) - Vec(A).   (2)

The critical implicit assumption made in Equation (1) is that the w words surrounding w_i on either side, namely [w_{i-w}, w_{i-w+1}, ..., w_{i-1}, w_{i+1}, ..., w_{i+w-1}, w_{i+w}], contain the words most semantically and syntactically relevant to w_i. In the next section, I explain how a slight generalization of this assumption forms the basis for my algorithm. The goal is to solve the word analogy problem using a simple, generative, window-based approach. I begin my analysis by noting that all terms relevant in the analysis (A, B, C, D, and R) are nouns. Accordingly, I construct a new document, T', comprising only the nouns appearing in T, stored in the order in which they appear. For convenience of notation, I will assume that there are H nouns in T', and the i-th noun appearing in T' is represented as T'_i (i.e., T'[i] = T'_i). Since the same noun likely appears multiple times in the text, I use the set Q(X) = {i | T'_i = X} to indicate the locations of the noun X in T'. The key idea in Word2vec is that the words surrounding a given word in the text contain rich semantic information about that word. More generally, we expect the nouns surrounding a given noun to contain rich semantic information about that noun. This implies that for certain values of w, the context of T'_i, defined as

F(T'_i, w) = {y | d(y, T'_i) ≤ w},

will contain nouns that are semantically related to T'_i, where d(y, T'_i) is a metric distance function describing the number of nouns separating y and T'_i in T', and w is the window size. Clearly, this is likely to hold for small values of w, and not likely to hold for very large values of w. Accordingly, I make the following assumption: Assumption 1. There exists a maximal window size, w* = 2s, such that the nouns present in F(T'_i, w*) are semantically related to T'_i. The results thus far describe contexts around one noun. We are interested in nouns that are semantically related to 2 nouns. Therefore, I define

F̃(i, j, w) = F(T'_i, w) ∪ F(T'_j, w),

which describes the combined context of F(T'_i, w) and F(T'_j, w) when F(T'_i, w) and F(T'_j, w) overlap, i.e., when i < j ≤ i + w. For any 2 nouns, A and B, and the set S(A, B, w) defined as

S(A, B, w) = { F̃(i, j, w) | i ∈ Q(A), j ∈ Q(B), i < j ≤ i + w },

we have the following result: Proposition 1. If Assumption 1 holds true, then all the nouns present in any W ∈ S(A, B, s) will be semantically related to both A and B. Proof. For every noun N ∈ W, with W = F̃(i, j, s), i ∈ Q(A) and j ∈ Q(B), we have d(N, T'_j) ≤ d(N, T'_i) + d(T'_i, T'_j) ≤ 2s = w*, so that N ∈ F(T'_j, w*) for some j ∈ Q(B), i.e., N belongs to one of the contexts of B and is therefore, by Assumption 1, semantically related to B. Similarly, it can be shown that N is semantically related to A. Proposition 1 describes the ideal scenario; realistically, we do not expect Assumption 1 to hold exactly, and therefore we expect W ∈ S(A, B, s) to contain a higher frequency of nouns that are relevant to both A and B, and a lower frequency of nouns that are relevant to only either A or B. In particular, the higher the frequency of a noun appearing in the list L_C built by Algorithm 1 below, the more likely it is to describe the relationship between A and B.

Algorithm 1 (compute N_AB, the k nouns co-occurring most frequently with both A and B):
1: Input: nouns A and B, window size s, count k
2: Set L_C to the empty list
3: for i in Q(A) do
4:   for j in Q(B) do
5:     if F(T'_i, s) overlaps with F(T'_j, s) and j > i then
6:       L_C ← L_C + [w for w in F̃(i, j, s) if (w ≠ A and w ≠ B)]
7: return MostCommon(L_C, k)   {N_AB is the list of the k most frequent nouns in L_C}

Once N_AB is computed using Algorithm 1, we can derive D from C as the most frequently appearing noun in the list L_D, built as follows:

Algorithm 2 (compute D from C and N_AB):
1: Input: noun C, list N_AB, window size s
2: Set L_D to the empty list
3: for every X in N_AB do
4:   for i in Q(C) do
5:     for j in Q(X) do
6:       if F(T'_i, s) overlaps with F(T'_j, s) and j > i then
7:         L_D ← L_D + [w for w in F̃(i, j, s) if (w ≠ C and w ≠ X)]
8: return MostCommon(L_D, 1)   {D is the most frequent noun in L_D}

Algorithms 1 and 2 described above have two hyper-parameters: s and k.
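To make the two algorithms concrete, here is a compact Python sketch (my reconstruction from the pseudocode above, not the author's code; the tokenized noun sequence and the exact overlap bookkeeping are simplified).

```python
from collections import Counter, defaultdict

def build_index(nouns):
    """Q(X): positions of each noun in the noun-only document."""
    Q = defaultdict(list)
    for i, x in enumerate(nouns):
        Q[x].append(i)
    return Q

def combined_context(nouns, i, j, s):
    """F~(i, j, s): union of the two windows around positions i and j."""
    idx = set(range(max(0, i - s), min(len(nouns), i + s + 1))) \
        | set(range(max(0, j - s), min(len(nouns), j + s + 1)))
    idx -= {i, j}
    return [nouns[t] for t in idx]

def algorithm1(nouns, Q, A, B, s=10, k=20):
    """N_AB: the k nouns co-occurring most often with A and B jointly."""
    L_C = Counter()
    for i in Q[A]:
        for j in Q[B]:
            if i < j <= i + s:          # overlap criterion as in the text
                L_C.update(w for w in combined_context(nouns, i, j, s)
                           if w not in (A, B))
    return [w for w, _ in L_C.most_common(k)]

def algorithm2(nouns, Q, C, N_AB, s=10):
    """D: the noun co-occurring most often with C and the nouns in N_AB."""
    L_D = Counter()
    for X in N_AB:
        for i in Q[C]:
            for j in Q[X]:
                if i < j <= i + s:
                    L_D.update(w for w in combined_context(nouns, i, j, s)
                               if w not in (C, X))
    return L_D.most_common(1)[0][0] if L_D else None
```

Given the noun sequence nouns and its index Q = build_index(nouns), the analogy 'A:B::C:D' is then solved by algorithm2(nouns, Q, C, algorithm1(nouns, Q, A, B)).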
A grid search on the parameter values suggests that the improvement of the approach begins to saturate around s = 10 and k = 20; these are the parameter values used in the remainder of the analysis unless specified otherwise. TAB0 shows that the approach described in this paper does better than Word2vec on 3 out of 4 categories when the word vectors are trained using the same dataset. The current approach beats the results obtained by Word2vec in 2 out of 4 categories even when the word vectors are trained on the entire Wikipedia 2014 corpus, which has more than 10 times the amount of data used in the current analysis. Algorithm 1 assumes that the k nouns in N_AB are equally likely candidates for the relationship between A and B. While this assumption is a good starting point, it is not precise enough. Table 2 shows that the nouns co-occurring more frequently with A and B capture more information about the relationship between A and B than the less frequently co-occurring nouns. This suggests 1) that word vectors are likely capturing information about the nouns co-occurring with A and B, weighted by their frequency of occurrence with A and B, and 2) that the most frequently co-occurring noun with A and B likely represents the Maximum Likelihood Estimate (MLE) of the relationship between A and B. TAB2 shows the 5 most frequently observed MLE values for questions in each of the 4 categories, listed by their frequency of occurrence. Although most of the MLE values in the table make intuitive sense, some (particularly the MLEs for 'common capitals') do not; I will revisit this point later. The remainder of my analysis proceeds in two parts: in the first step, I discuss some of the problems associated with estimating word vectors for infrequently occurring words in the dataset, and in the second step, I describe problems one might encounter with word vectors corresponding to frequently occurring words in the dataset. There is substantial variation in the prediction ability between the categories; both the current approach and Word2vec perform worse predicting currencies than the other 3 categories. This is probably because currencies appear far less frequently in the training dataset compared to nouns in the other categories, as demonstrated in Table 4. The lack of training data likely results in poor estimates of the relationship between A and B and, accordingly, poor estimates of D. Similar problems will likely be observed with word analogy problems involving words appearing less frequently than the currencies in the current dataset. Increasing the size of the dataset will resolve some of these issues relating to data scarcity, but to what extent? Will word vectors corresponding to most of the words trained on a larger dataset be accurate? To answer this question, consider the word vectors obtained by training Word2vec on the entire Wikipedia corpus, which comprises approximately 1.5 billion tokens, of which approximately 5.8 million are unique. The most popular token is 'the', which appears 86 million times (source: http://imonad.com/seo/wikipedia-word-frequency-list/). From Zipf's law, we expect the frequency of occurrence of a particular word in the text to be inversely proportional to its frequency rank. Therefore, a word with a frequency rank of 1 million will appear approximately 86 times. Since Word2vec is fitting a 200-dimensional vector corresponding to this word using data from 86 points, I expect the estimate of the vector to be unreliable.
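The Zipf estimate above amounts to a one-line computation; the following snippet (my illustration, with the 86-million count taken from the text) tabulates the expected number of occurrences at several frequency ranks.

```python
TOP_COUNT = 86_000_000   # occurrences of the most frequent token, "the"

# Zipf's law: count(rank) ~ TOP_COUNT / rank
for rank in (1_000, 100_000, 1_000_000, 5_800_000):
    print(f"rank {rank:>9,}: ~{TOP_COUNT // rank:,} occurrences")
# rank 1,000,000 gives ~86 occurrences: far too few observations to fit
# a reliable 200-dimensional word vector.
```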
This suggests that word vectors corresponding to at least 80% of the unique words trained on the entire Wikipedia corpus will be unreliable. In general, any dataset will contain some percentage of low frequency words for which accurate word vectors cannot be adequately estimated. Care should be taken to filter these out, as is done in BID9.

Table 2: Frequency of co-occurrence matters. Accuracy of the current approach at solving the word analogy problem for different values of k. Increasing the value of k produces diminishing returns beyond k = 5; the improvements on going from k = 10 to k = 20 are nominal.

In the 'common capitals' category, the countries and capitals being considered appear with high frequencies (see Table 4), and are therefore not plagued by the data scarcity issues described in the previous section. Furthermore, the algorithm described in this paper is able to solve the word analogy problem in this category with a high accuracy, as shown in Table 1. These facts together seem to imply that the approach is learning the relationship 'capital' accurately from the data. However, as shown in Table 3, 'capital' is not even in the top 5 most likely candidates for the relationship between A and B. This suggests that a model may not be 'learning' the relationship between a pair of words the way a human does, even when it accurately solves the word analogy problem.

To further elaborate on this point, consider figure 2 of BID8, wherein the authors demonstrate that the projection of the vector describing the relationship between a country and its capital has nearly the same orientation for ten (country, capital) pairs. The authors attribute this fixed orientation to the ability of word vectors to learn the 'is capital' relationship without any supervised effort. Table 5 indicates that word vectors might actually be learning the relationship from information about wars afflicting cities in different countries; most of these relationships seem to derive from World War II. Since the attacks in a war are generally targeted at the capital city, word vectors appear to be learning this relationship simply as a result of correlations. These results show that the context that a model uses to infer the relationship between a pair of words may not match the context that humans typically use to infer the relationship between the same pair of words.

The ambiguity in context can lead to some paradoxical properties of word relationships (see Table 6), such as:

• Pluralization can change the relationship between words. The relationship for the pair Bear:Lion derives from the fact that they are both animals, but the relationship for the pair Bears:Lions derives from the fact that they are both names of sports teams. A corollary of this is that the results of word analogy questions need not be symmetric; for the word analogy question Bears:Lions::Bear:D the context is 'sports game', but for the word analogy question Bear:Lion::Bears:D the context is 'animal'.

Table 4: Median counts. Median number of times A and B appear in the dataset for each of the categories. The median counts for the currencies are far less than those for any other group.

Table 5: Derived relationship between countries and their capitals. The five most commonly co-occurring nouns with countries and their capitals. The countries chosen in this list are the same as those considered in figure 2 of BID8. The model appears to be learning information about the wars that affected the capital cities of these countries.
• Even within a fixed context, the relationship between a pair of words can be ambiguous. For the pair Bears:Lions, there is ambiguity in the relationship despite the fact that in both cases the nouns are being used in the context of sports teams (the pair could refer to the NFL Bears-Lions rivalry, https://en.wikipedia.org/wiki/Bears%E2%80%93Lions_rivalry, or to teams such as the Brisbane Lions, https://en.wikipedia.org/wiki/Brisbane_Lions). The relationship which gets over-emphasized in the word vector is strictly a function of the data being used to train the word vectors.

• Inferences drawn from word analogies may produce counter-intuitive results. The relationships for the Lions:Giants and Dolphins:Giants pairs derive from the fact that the Lions, the Dolphins, and the Giants are teams in the National Football League. However, the relationship for the Lions:Dolphins pair derives from the fact that they are both animals.

The generative approach described above makes a critical assumption that is not required by Word2vec: that the answer to the word analogy problem being posed will always be a noun. Indeed, Word2vec produces high accuracies even on questions where the answers to the word analogy questions are not nouns, by 'learning' the part of speech required by the outcome. I believe that this learning takes place because word vectors corresponding to words having the same Part of Speech (POS) tag lie in the same subspace of the word vector space. The word analogy problem relies on the fact that

Vec(D) = Vec(B) − Vec(A) + Vec(C).

Therefore, if a POS subspace did exist, Vec(D) would belong to Span(Vec(A), Vec(B), Vec(C)), and D would be forced to have the same POS tag as either A or B or C. For the word analogy questions considered in the previous section, A, B and C are all nouns, and therefore I would expect D to also be a noun, as is the case.

The idea behind the relation above is that Vec(B) − Vec(A) is capturing information that is common to both A and B, and the information that is common to both A and B is also common to C and D. If a POS subspace does exist, then for any 2 nouns, A and B, the 'common information' (described by Vec(B) − Vec(A)) will lie in the noun subspace of the vector space, and can be expressed as a linear combination of the information in other nouns in the text; this is possibly why Algorithm 1 is able to express the common information between 2 nouns solely in terms of other nouns.

The question remains: what does 'common information' mean? Is Canada Air the common information between Canada and Air? Our intuitive guess would be no, as is the claim made in BID8, since there is nothing 'airline' about Canada. To test their claim, I construct Algorithm 3 to derive the most likely interpretation of a two-word phrase 'A B'. Algorithm 3 is a minor variant of Algorithm 1, which finds all nouns co-occurring with A and B under the stricter constraint that B is always the noun succeeding A. If the claim in BID8 is true, we would expect the information captured by Algorithm 1 to be drastically different from that captured by Algorithm 3.

Algorithm 3 (interpreting the two-word phrase 'A B'):
1: set L_C to an empty list
2: for every i in Q(A) do
3:   if T_{i+1} = B then
4:     j ← i + 1
5:     if F(T_i, s) overlaps with F(T_j, s) and j > i then
6:       L_C ← L_C + [w for w in F̃(T_i, T_j, s) if (w ≠ A and w ≠ B)]
7: return MostCommon(L_C, k)   (N_AB contains the k most frequent nouns in L_C)

This is not what happens. Table 7 shows that both Algorithm 1 and Algorithm 3 capture information about Canada Air being an airline equally well. But when considering the common information between the words Pacific and Southwest, Algorithm 3 captures information about the airline whereas Algorithm 1 captures information about the region.
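A Python sketch of Algorithm 3, reusing the `positions` and `context` helpers from the earlier block (the function name is illustrative):

```python
def phrase_interpretation(nouns, a, b, s=10, k=20):
    # Algorithm 3 (sketch): like Algorithm 1, but only counts occurrences
    # where b is the noun immediately succeeding a in the noun sequence.
    counts = Counter()
    for i in positions(nouns, a):
        j = i + 1
        if j < len(nouns) and nouns[j] == b:  # adjacent contexts always overlap
            combined = context(nouns, i, s) + context(nouns, j, s)
            counts.update(w for w in combined if w not in (a, b))
    return [w for w, _ in counts.most_common(k)]
```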
Similar contradictions are observed when considering names of famous people: for both algorithms, the common information between Larry and Page is Google. But with the words Stephen and King, Algorithm 3 captures information about the novelist while Algorithm 1 captures information about history. These results suggest that the notion of 'common information' is very subjective, and is strongly influenced by the data used to train the word vectors.

Although word analogy problems have become a mainstay in illustrating the efficacy of word vectors, little is known about the dynamics underlying the solution. In this paper, I attempt to improve our understanding by providing a simple generative approach to solve the problem for the case when A, B, C and D are nouns. My approach proceeds by first estimating the relationship between the (A, B) pair, and then transferring this relationship to C to compute D. My results demonstrate the high ambiguity associated with estimating the relationship between a word pair, and the role of the training dataset in determining the relationship which gets most represented. My analysis shows that even when the model predicts D accurately, it is difficult to infer the relationship the model learns about the (A, B) pair.
TL;DR: A simple generative approach to solve the word analogy problem, which yields insights into word relationships and the problems with estimating them.
Fine-tuning from pre-trained ImageNet models has become the de-facto standard for various computer vision tasks. Current practices for fine-tuning typically involve selecting an ad-hoc choice of hyperparameters and keeping them fixed to values normally used for training from scratch. This paper re-examines several common practices of setting hyperparameters for fine-tuning. Our findings are based on extensive empirical evaluation for fine-tuning on various transfer learning benchmarks. While prior works have thoroughly investigated learning rate and batch size, momentum for fine-tuning is a relatively unexplored parameter. We find that picking the right value for momentum is critical for fine-tuning performance and connect it with previous theoretical findings. Optimal hyperparameters for fine-tuning, in particular the effective learning rate, are not only dataset dependent but also sensitive to the similarity between the source domain and target domain; this is in contrast to hyperparameters for training from scratch. Reference-based regularization that keeps models close to the initial model does not necessarily apply for "dissimilar" datasets. Our findings challenge common practices of fine-tuning and encourage deep learning practitioners to rethink the hyperparameters for fine-tuning.

Many real-world applications often have a limited number of training instances, which makes directly training deep neural networks hard and prone to overfitting. Transfer learning with the knowledge of models learned on a similar task can help to avoid overfitting. Fine-tuning is a simple and effective approach of transfer learning and has become popular for solving new tasks, in which pre-trained models are fine-tuned with the target dataset. Specifically, fine-tuning on pre-trained ImageNet classification models has achieved impressive results for tasks such as object detection and segmentation, and is becoming the de-facto standard of solving computer vision problems. It is believed that the weights learned on the source dataset with a large number of instances provide better initialization for the target task than random initialization. Even when there is enough training data, fine-tuning is still preferred as it often reduces training time significantly.

The common practice of fine-tuning is to adopt the default hyperparameters for training large models while using a smaller initial learning rate and a shorter learning rate schedule. It is believed that adhering to the original hyperparameters for fine-tuning with a small learning rate prevents destroying the originally learned knowledge or features. For instance, many studies conduct fine-tuning of ResNets with these default hyperparameters: learning rate 0.01, momentum 0.9 and weight decay 0.0001. However, the default setting is not necessarily optimal for fine-tuning on other tasks. While a few studies have performed extensive hyperparameter search for learning rate and weight decay, the momentum coefficient is rarely changed. Though the effectiveness of the hyperparameters has been studied extensively for training a model from scratch, how to set the hyperparameters for fine-tuning is not yet fully understood. In addition to using ad-hoc hyperparameters, commonly held beliefs for fine-tuning also include:

• Fine-tuning pre-trained networks outperforms training from scratch; recent work has already revisited this.
• Fine-tuning from similar domains and tasks works better.
• Explicit regularization with initial models matters for transfer learning performance.

Are these practices or beliefs always valid? From an optimization perspective, the difference between fine-tuning and training from scratch is all about the initialization. However, the loss landscape of the pre-trained model and the fine-tuned solution could be much different, and so could their optimization strategies and hyperparameters. Would the hyperparameters for training from scratch still be useful for fine-tuning? In addition, most of the hyperparameters (e.g., batch size, momentum, weight decay) are frozen; will the results differ when some of them are changed?

With these questions in mind, we re-examined the common practices for fine-tuning. We conducted extensive hyperparameter search for fine-tuning on various transfer learning benchmarks with different source models. The goal of our work is not to obtain state-of-the-art performance on each fine-tuning task, but to understand the effectiveness of each hyperparameter for fine-tuning, avoiding unnecessary computations. We explain why certain hyperparameters work so well on certain datasets while failing on others, which can guide future hyperparameter search for fine-tuning. Our main findings are as follows:

• Optimal hyperparameters for fine-tuning are not only dataset dependent, but also depend on the similarity between the source and target domains, which is different from training from scratch. Therefore, the common practice of using optimization schedules derived from ImageNet training cannot guarantee good performance. This explains why some tasks do not achieve satisfactory results after fine-tuning: the hyperparameter selection is inappropriate. Specifically, as opposed to the common practice of keeping the momentum value fixed at 0.9, we verified that zero momentum can work better for fine-tuning on tasks that are similar to the source domain, while nonzero momentum works better for target domains that are different from the source domain.

• Hyperparameters are coupled together, and it is the effective learning rate, which encapsulates the learning rate, momentum and batch size, that matters for fine-tuning performance. While the effective learning rate has been studied for training from scratch, to the best of our knowledge no previous work investigates the effective learning rate for fine-tuning, and it is rarely used in practice. Our observation on momentum can be explained by the fact that small momentum actually decreases the effective learning rate, which is more suitable for fine-tuning on similar tasks. We show that the optimal effective learning rate actually depends on the similarity between the source and target domains.

• We find that regularization methods designed to keep models close to the initial model do not apply for "dissimilar" datasets, especially for networks with Batch Normalization. Simple weight decay can result in as good performance as the reference-based regularization methods for fine-tuning, with a better search space.

In transfer learning for image classification, the last layer of a pre-trained network is usually replaced with a randomly initialized fully connected layer with the same size as the number of classes in the target task. It has been shown that fine-tuning the whole network usually results in better performance than using the network as a static feature extractor. Prior work selects images that have similar local features from the source domain to jointly fine-tune pre-trained networks.
Other work estimates domain similarity with ImageNet and demonstrates that transfer learning benefits from pre-training on a similar source domain. Besides image classification, many object detection frameworks also rely on fine-tuning to improve over training from scratch.

Many researchers have re-examined whether fine-tuning is a necessity for obtaining good performance. Some find that when domains are mismatched, the effectiveness of transfer learning is negative, even when domains are intuitively similar. Others examine the fine-tuning performance of various ImageNet models and find a strong correlation between ImageNet top-1 accuracy and transfer accuracy; they also find that pre-training on ImageNet provides minimal benefits for some fine-grained object classification datasets. Another line of work questioned whether ImageNet pre-training is necessary for training object detectors, finding that the solution of training from scratch is no worse than the fine-tuning counterpart as long as the target dataset is large enough. It has also been found that transfer learning gives a negligible performance boost on medical imaging applications, but speeds up convergence significantly.

There is much literature on hyperparameter selection for training neural networks from scratch, mostly on batch size, learning rate and weight decay. There are few works on the selection of momentum; one proposed an automatic tuner for momentum and learning rate in SGD and empirically showed that it converges faster than Adam. There are also studies on the correlations of the hyperparameters, such as the linear scaling rule between batch size and learning rate. However, most of these advances on hyperparameter tuning are designed for training from scratch and are not examined on fine-tuning tasks for computer vision problems. Most work on fine-tuning simply chooses fixed hyperparameters for all fine-tuning experiments or uses dataset-dependent learning rates. Due to the huge computational cost of hyperparameter search, only a few works performed large-scale grid search of learning rate and weight decay for obtaining the best performance.

In this section, we first introduce the notations and experimental settings, and then present our observations on momentum, effective learning rate and regularization. The fine-tuning process is not different from learning from scratch except for the weight initialization. The goal of the process is still to minimize the objective

L = (1/N) Σ_{i=1}^{N} ℓ(f(x_i, θ), y_i),

where ℓ is the loss function, N is the number of samples, x_i is the input data, y_i is its label, f is the neural network and θ denotes the model parameters. Momentum is widely used for accelerating and smoothing the convergence of SGD by accumulating a velocity vector in the direction of persistent loss reduction. The commonly used Nesterov momentum SGD iteratively updates the model in the following form:

v_{t+1} = m·v_t − η_t·g(θ_t + m·v_t),   θ_{t+1} = θ_t + v_{t+1},

where θ_t indicates the model parameters at iteration t and g(·) is the mini-batch gradient of the (weight-decayed) loss over a batch of size n, evaluated at the look-ahead point θ_t + m·v_t. The hyperparameters include the learning rate η_t, batch size n, momentum coefficient m ∈ [0, 1), and the weight decay λ.
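A minimal sketch of this Nesterov momentum update in plain NumPy (the function and variable names are illustrative, and weight decay is folded into the gradient, which is one common convention):

```python
import numpy as np

def nesterov_sgd_step(theta, v, grad_fn, lr=0.01, m=0.9, wd=1e-4):
    """One Nesterov momentum SGD step.

    theta: parameter vector, v: velocity vector,
    grad_fn: mini-batch gradient of the loss w.r.t. the parameters.
    """
    lookahead = theta + m * v                 # evaluate the gradient ahead of the momentum step
    g = grad_fn(lookahead) + wd * lookahead   # L2 weight decay added to the gradient
    v = m * v - lr * g                        # accumulate velocity
    theta = theta + v                         # parameter update
    return theta, v
```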
We evaluate fine-tuning on seven widely used image classification datasets, covering fine-grained object recognition, scene recognition and general object recognition. Detailed statistics of each dataset can be seen in Table 1. We use ImageNet, Places365 and iNaturalist as source domains for pre-trained models. We resize the input images such that the aspect ratio is preserved and the shorter side is 256 pixels; the images are normalized with mean and std values calculated over ImageNet. For data augmentation, we adopt the common practices used for training ImageNet models: random mirroring, random scaled cropping with scale and aspect variations, and color jittering. The augmented images are resized to 224×224. Note that state-of-the-art results could achieve even better performance by using higher resolution images or better data augmentation. We mainly use ResNet-101-V2 as our base network, which is pre-trained on ImageNet. Similar observations are also made on DenseNets and MobileNet (see Appendix B). The hyperparameters to be tuned (and their ranges) are: learning rate (0.1, 0.05, 0.01, 0.005, 0.001, 0.0001), momentum (0.99, 0.95, 0.9, 0.8, 0.0) and weight decay (0.0, 0.0001, 0.0005, 0.001). We set the default hyperparameters to batch size 256, learning rate 0.01, momentum 0.9 and weight decay 0.0001. To avoid insufficient training and to observe the complete convergence behavior, we use 300 epochs for fine-tuning and 600 epochs for scratch-training, which is long enough for the training curves to converge. The learning rate is decayed by a factor of 0.1 at epochs 150 and 250. We report the Top-1 validation error at the end of fine-tuning. The total computation time for the experiments is more than 10K GPU hours.

Momentum 0.9 is the most widely adopted value for training from scratch and is also widely adopted in fine-tuning. To the best of our knowledge, it is rarely changed, regardless of the network architectures or target tasks. To check the influence of momentum on fine-tuning, we first search the best momentum values for fine-tuning on the Birds dataset with different weight decay and learning rate values. Figure 1(a) shows the performance of fine-tuning with and without weight decay. Surprisingly, momentum zero actually outperforms the nonzero momentum. We also noticed that the optimal learning rate increases when momentum is disabled (Figure 1(b) and Appendix A).

To verify this observation, we further compare momentum 0.9 and 0.0 on other datasets. Table 2 shows the performance of 8 hyperparameter settings on seven datasets. We find a clear pattern: disabling momentum works better for the Dogs, Caltech and Indoor datasets, while momentum 0.9 works better for Cars, Aircrafts and Flowers. Interestingly, datasets such as Dogs, Caltech, Indoor and Birds are known to have high overlap with the ImageNet dataset, while Cars and Aircrafts are identified as difficult to benefit from fine-tuning from pre-trained ImageNet models. According to prior work in which the Earth Mover's Distance (EMD) is used to calculate the distance between a dataset and ImageNet, the similarities to Birds and Dogs are 0.562 and 0.620, while the similarities to Cars, Aircrafts and Flowers are 0.560, 0.555 and 0.525. The relative order of similarity to ImageNet is Dogs, Birds, Cars, Aircrafts and Flowers, which aligns well with the transition of the optimal momentum value from 0.0 to 0.9.

To verify this dependency on domain similarity, we fine-tune from pre-trained models of different source domains. It is reported that Places365 and iNaturalist are better source domains than ImageNet for fine-tuning on the Indoor and Birds datasets. We can expect that fine-tuning from iNaturalist works well for Birds with m = 0 and, similarly, Places365 for Indoor. Indeed, as shown in Table 3, disabling momentum improves the performance when the source and target domains are similar, such as Places365 for Indoor and iNaturalist for Birds.
Large momentum works better for fine-tuning on domains that are different from the source domain, but not for tasks that are close to it. Our explanation for the above observations is that because the Dogs dataset is very close to ImageNet, the pre-trained ImageNet model is expected to be close to the fine-tuned solution on the Dogs dataset. In this case, momentum may not help much, as the gradient directions around the minimum could be rather random and accumulating the momentum direction could be meaningless. Whereas, for faraway target domains (e.g., Cars and Aircrafts), where the pre-trained ImageNet model could be very different from the fine-tuned solution, the fine-tuning process is more similar to training from scratch, where large momentum stabilizes the descent directions towards the minimum. An illustration of the difference can be found in Figure 2.

Connections to early observations on decreasing momentum. Early work actually pointed out that reducing momentum during the final stage of training allows finer convergence, while aggressive momentum would prevent this. They recommended reducing momentum from 0.99 to 0.9 in the last 1000 parameter updates, but not disabling it completely. Recent work showed that a large momentum helps escape saddle points but can hurt the final convergence within the neighborhood of the optima, implying that momentum should be reduced at the end of training. Others find that a larger momentum introduces higher variance of noise and encourages more exploration at the beginning of optimization, and more aggressive exploitation at the end of training; they suggest that at the final stage of the step size annealing, momentum SGD should use a much smaller step size than vanilla SGD. When applied to fine-tuning, we can interpret this as follows: if the pre-trained model lies in the neighborhood of the optimal solution on the target dataset, the momentum should be small. Our work identifies the empirical evidence that disabling momentum helps final convergence, and fine-tuning on close domains seems to be a perfect case.

The effective learning rate for SGD with momentum is

η' = η / (1 − m),

which was shown to be more closely related to training dynamics and final performance than η itself. The effective learning rate with m = 0.9 is 10× higher than the one with m = 0.0 if other hyperparameters are fixed, which is probably why we see an increase in the optimal learning rate when momentum is disabled in Figure 1(b) and Appendix A. Because learning rate and momentum are coupled, looking at the performance with only one hyperparameter varied can give a misleading understanding of the effect of hyperparameters. Therefore, we report the best results with and without momentum; this does not affect the maximum accuracy obtainable with and without momentum, as long as the hyperparameters explored are sufficiently close to the optimal parameters.

We revisit the previous experiments that demonstrated the importance of momentum tuning, this time holding the effective learning rate η' = η/(1 − m) fixed instead of the learning rate η. Figure 3 shows that when η' is constant, momentum 0.0 and 0.9 are actually equivalent. In addition, the best performance obtained by momentum 0.9 and momentum 0.0 is equivalent when other hyperparameters are allowed to change. However, different effective learning rates result in different performance, which indicates that it is the effective learning rate that matters for the best performance. This explains why the common practice of changing only the learning rate generally works, though changing momentum may result in the same effect.
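A trivial sketch of the equivalence just described (names are illustrative):

```python
def effective_lr(lr, momentum):
    # eta' = eta / (1 - m): the quantity that governs training dynamics
    return lr / (1.0 - momentum)

# Two settings with the same effective learning rate behave alike:
# (lr=0.01, m=0.9) and (lr=0.1, m=0.0) both give eta' = 0.1.
assert abs(effective_lr(0.01, 0.9) - effective_lr(0.1, 0.0)) < 1e-12
```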
Both changes modify the effective learning rate.

Optimal effective learning rate and weight decay depend on the similarity between the source domain and target domain. Now that we have shown the ELR is critical for the performance of fine-tuning, we are interested in the factors that determine the optimal ELR. Prior work found that there is an optimum fluctuation scale which maximizes the test set accuracy (at constant learning rate); however, the relationship between the ELR and domain distance is unknown, and it is important for fine-tuning. The effective learning rate encapsulates the effect of learning rate and momentum for fine-tuning. We varied the other hyperparameters and report the best performance for each η'. As shown in Figure 4, a smaller η' works better if the source and target domains are similar, such as Dogs for ImageNet and Birds for iNaturalist. On the other hand, the optimal ELR for training from scratch is large and relatively stable. We made similar observations on DenseNets and MobileNet (see Appendix B).

The relationship between weight decay and the effective learning rate is also well-studied (e.g., by van Laarhoven). It was shown that the optimal weight decay value λ is inversely related to the learning rate η; the 'effective' weight decay is λ' = λ/η. We show in Figure 5 that the optimal effective weight decay is larger when the source domain is similar to the target domain.

Domain Similarity and Hyperparameter Selection. We have now made qualitative observations about the relationship between domain similarity and the optimal ELR, which can help reduce hyperparameter search ranges. Note that the original similarity score is based on models pre-trained on a large-scale dataset, which are not publicly available. We revisit the domain similarity score calculation in Appendix C and propose to use the source model as the feature extractor. We find that there is a good correlation between our own domain similarity score and the scale of the optimal ELR. Based on this correlation, we propose a simple strategy for ELR selection in Appendix C.

L2 regularization, or weight decay, is widely used for constraining the model capacity. Recent work pointed out that standard L2 regularization, which drives the parameters towards the origin, is not adequate for transfer learning. To retain the knowledge learned by the pre-trained model, reference-based regularization was used to regularize the distance between the fine-tuned weights and the pre-trained weights, so that the fine-tuned model is not too different from the initial model. One proposal is the L2-SP norm, i.e.,

Ω(θ) = (α/2)·‖θ_S − θ*_S‖₂² + (β/2)·‖θ_S̄‖₂²,

where θ_S refers to the part of the network shared with the source network (with pre-trained weights θ*_S), and θ_S̄ refers to the novel part, e.g., the last layer with a different number of neurons. While the motivation is intuitive, there are several issues with adopting reference-based regularization for fine-tuning: Many applications actually adopt fine-tuning on target domains that are quite different from the source domain, such as fine-tuning ImageNet models for medical imaging, and the fine-tuned model does not necessarily have to be close to the initial model. Moreover, the scale invariance introduced by Batch Normalization (BN) layers enables models with different parameter scales to function identically, i.e., f(θ) = f(αθ). Therefore, when L2 regularization drives ‖θ‖₂² towards zero, the model could still have the same functionality as the initial model; on the contrary, a model could still be different even when the L2-SP norm is small.
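For reference, a minimal PyTorch sketch of such an L2-SP-style penalty, assuming `model` shares parameter names with the frozen `source_state` reference; the names and the α/β values are placeholders, not the original authors' code:

```python
import torch

def l2_sp_penalty(model, source_state, alpha=0.01, beta=0.01):
    """Reference-based penalty: pull shared weights toward the pre-trained
    starting point, and apply plain L2 to parameters absent from the source."""
    shared, novel = 0.0, 0.0
    for name, p in model.named_parameters():
        if name in source_state:
            shared = shared + (p - source_state[name].detach()).pow(2).sum()
        else:
            novel = novel + p.pow(2).sum()
    return 0.5 * alpha * shared + 0.5 * beta * novel

# Usage: total_loss = task_loss + l2_sp_penalty(model, source_state)
```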
It has been shown that the effect of weight decay on models with BN layers is equivalent to increasing the effective learning rate by shrinking the weight scales (van Laarhoven, among others). Regularizing weights with the L2-SP norm constrains the scale of the weights to be close to the original one, and would therefore not increase the effective learning rate during fine-tuning. As a small effective learning rate is beneficial for fine-tuning from similar domains, this may explain why L2-SP provides better performance. If this is true, then by decreasing the effective learning rate, L2 regularization would function the same.

To examine these conjectures, we revisited the original L2-SP work with additional experiments. To show the effectiveness of the L2-SP norm, the original study conducted experiments on datasets such as Dogs, Caltech and Indoor, which are all close to the source domain (ImageNet or Places365) according to the previous sections. We extend their experiments to other datasets that are relatively "far" from ImageNet, such as Birds, Cars, Aircrafts and Flowers. We use the source code of the original work to fine-tune on these datasets with both L2 and L2-SP regularization. For a fair comparison, we performed the same hyperparameter search for both methods (see experimental settings in Appendix E). As expected, Table 4 shows that L2 regularization is very close to, if not better than, L2-SP on Birds, Cars, Aircrafts and Flowers, which indicates that reference-based regularization methods may not generalize to fine-tuning on dissimilar domains.

The two extreme ways of selecting hyperparameters, performing exhaustive hyperparameter search or taking ad-hoc hyperparameters from scratch training, could be either too computationally expensive or yield inferior performance. Different from training from scratch, where the default hyperparameter setting may work well for random initialization, the choice of hyperparameters for fine-tuning is not only dataset dependent but is also influenced by the similarity between the target domain and the source domain. The rarely tuned momentum value could impede the performance when the target domain and source domain are close. These observations connect with previous theoretical works on decreasing momentum at the end of training and on the effective learning rate. We further identify that the optimal effective learning rate depends on the similarity of the source domain and target domain. With this understanding, one can significantly reduce the hyperparameter search space. We hope these findings can be one step towards better hyperparameter selection strategies for fine-tuning.

To check the influence of momentum on fine-tuning, we first search the best momentum values for fine-tuning on the Birds dataset with a fixed learning rate but different batch sizes and weight decays. Figure 6 provides the convergence curves for the results shown in Figure 1(a), covering the learning curves of fine-tuning with six different batch size and weight decay combinations. Zero momentum outperforms nonzero momentum in 5 of the 6 configurations.

The optimal learning rate increases after disabling momentum. Figure 7 compares the performance of turning momentum on/off for each dataset with different learning rates. For datasets that are "similar" to ImageNet (Figure 7(a-h)) and a fixed learning rate (e.g., 0.01), the Top-1 validation error decreases significantly after disabling momentum. On the other hand, for datasets that are "dissimilar" to ImageNet (Figure 7(g-n)) and a fixed learning rate, disabling momentum hurts the Top-1 accuracy.
We can also observe that the optimal learning rate generally increases 10× after changing momentum from 0.9 to 0.0, which is coherent with the rule of the effective learning rate.

We also verified our observations on DenseNet-121 and MobileNet-1.0 with similar settings, and made similar observations across the three architectures: the optimal effective learning rate is related to the similarity to the source domain. As seen in Figure 8(b) and (c), the optimal effective learning rates for the Dogs/Caltech/Indoor datasets are much smaller than those for Aircrafts/Flowers/Cars when fine-tuned from ImageNet, similar to ResNet-101. This verified that our findings on ResNet-101 are consistent across a variety of architectures. In Appendix C, we show that the correlation between the EMD-based domain similarity and the optimal ELR is quite consistent across different architectures.

The domain similarity calculation based on the Earth Mover's Distance (EMD) follows prior work; here we briefly introduce the steps. The authors of that work first train a ResNet-101 on the large-scale JFT dataset and use it as a feature extractor. They extract features from the penultimate layer of the model for each image of the training set of the source domain and target domain; for ResNet-101, the length of the feature vector is 2048. The features of images belonging to the same category are averaged, and g(s_i) denotes the average feature vector of the i-th label in the source domain S; similarly, g(t_j) denotes the average feature vector of the j-th label in the target domain T. The distance between the averaged features of two labels is d_{i,j} = ‖g(s_i) − g(t_j)‖. Each label is associated with a weight w ∈ [0, 1] corresponding to the percentage of images with this label in the dataset. So the source domain S with m labels and the target domain T with n labels can be represented as

S = {(g(s_i), w_{s_i})}_{i=1}^{m},   T = {(g(t_j), w_{t_j})}_{j=1}^{n}.

The EMD between the two domains is defined as

d(S, T) = EMD(S, T) = ( Σ_{i=1}^{m} Σ_{j=1}^{n} f_{i,j} · d_{i,j} ) / ( Σ_{i=1}^{m} Σ_{j=1}^{n} f_{i,j} ),

where the optimal flow f_{i,j} corresponds to the least amount of total work, obtained by solving the EMD optimization problem. The domain similarity is defined as

sim(S, T) = e^{−γ·d(S, T)},

where γ is 0.01. The domain similarities based on this pre-trained model for the seven datasets are listed in Table 5.

Table 5: Domain similarity scores. The first column uses the JFT pre-trained ResNet-101 as the feature extractor; note that the ResNet-101 pre-trained on JFT is not released, and therefore we cannot calculate its similarities for datasets such as Caltech and Indoor. The right three columns use three publicly available ImageNet pre-trained models as the feature extractor. All features are extracted from the training set of each dataset. The 1st, 2nd, 3rd and 4th scores are color coded, and the smallest three scores are marked in brown. The corresponding optimal ELR η' is listed, which is the same as shown in Figure 4.

Due to the unavailability of the large-scale JFT dataset (300× larger than ImageNet) and its pre-trained ResNet-101 model, we cannot use it to extract features for new datasets, such as Caltech-256 and MIT67-Indoor. Instead of using this powerful feature representation, we use the pre-trained source model directly as the feature extractor for both the source domain and the target domains. We believe that similarity based on features extracted from the source model captures the transfer learning process better for different source models.
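Under the definitions above, the similarity computation reduces to a few lines with an optimal-transport solver; a sketch assuming the POT library (`ot`) is available and using placeholder class-mean features:

```python
import numpy as np
import ot  # POT: Python Optimal Transport (assumed installed)

def domain_similarity(feat_s, w_s, feat_t, w_t, gamma=0.01):
    """feat_s: (m, d) class-mean features of the source domain,
    w_s: (m,) label weights summing to 1; likewise feat_t, w_t."""
    M = ot.dist(feat_s, feat_t, metric='euclidean')  # d_ij = ||g(s_i) - g(t_j)||
    emd = ot.emd2(w_s, w_t, M)   # optimal-transport cost; total flow is 1 here
    return np.exp(-gamma * emd)  # sim(S, T) = exp(-gamma * d(S, T))

# Example with random placeholder features:
m, n, d = 5, 7, 2048
fs, ft = np.random.rand(m, d), np.random.rand(n, d)
ws, wt = np.ones(m) / m, np.ones(n) / n
print(domain_similarity(fs, ws, ft, wt))
```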
In Table 5, we compare the domain similarities calculated by three different ImageNet pre-trained models. We find some consistent patterns across architectures:

• The 1st and 2nd highest similarity scores are Caltech and Dogs across architectures.
• The 3rd and 4th highest similarity scores refer to Birds and Indoor.
• The most dissimilar datasets are Cars, Aircrafts and Flowers.

Note that the relative orders for the dissimilar datasets (Cars, Aircrafts and Flowers) are not consistent across different architectures. As we can see from Table 5, the domain similarity score has some correlation with the scale of the optimal ELR: generally, the more similar the two domains, the smaller the optimal ELR. Though the optimal ELR does not strictly track the domain similarity score, the scores provide reasonable predictions about the scale of the optimal ELR and can therefore reduce its search range. One can calculate the domain similarities and perform exhaustive hyperparameter searches for the first few target datasets, including similar and dissimilar datasets such as Dogs and Cars; we refer to these as reference datasets. Then, for a given new dataset to fine-tune, one can calculate its domain similarity, compare it with the scores of the reference datasets, and choose the range of ELRs associated with the closest domain similarity.

Prior work noted that the momentum parameter of BN is essential for fine-tuning. They found it useful to decrease the batch normalization momentum parameter from its ImageNet value to max(1 − 10/s, 0.9), where s is the number of steps per epoch. This changes the default BN momentum value (0.9) only when s is larger than 100, i.e., when the dataset size is larger than 25.6K images at batch size 256. The largest dataset used in our experiments is Caltech-256, which has 15K images, so this strategy is not applicable to our experiments. We further explore the effect of BN momentum by performing a study similar to the one for the ELR: we want to identify whether there is an optimal BN momentum for each dataset. For each dataset, we fine-tune the pre-trained model using the previously obtained best hyperparameters and only tune the BN momentum. In addition to the default 0.9, we search 0.0, 0.95 and 0.99. If BN momentum were critical, we would expect significant performance differences. The results are shown in Figure 9. We see that m_bn = 0.99 slightly improves the performance for some datasets; however, we did not see significant performance differences among values greater than 0.9.
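The BN momentum schedule just described reduces to a one-liner (a sketch; the function name is illustrative):

```python
def bn_momentum(steps_per_epoch):
    # max(1 - 10/s, 0.9): exceeds the 0.9 default only when s > 100,
    # i.e., dataset size > 25.6K images at batch size 256.
    return max(1.0 - 10.0 / steps_per_epoch, 0.9)

print(bn_momentum(60))   # 0.9   (small dataset: default unchanged)
print(bn_momentum(400))  # 0.975 (large dataset: momentum increased)
```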
We use the code provided by the original L2-SP authors. The base network is an ImageNet pre-trained ResNet-101-V1. The model is fine-tuned with batch size 64 for 9000 iterations, and the learning rate is decayed at iteration 6000. Following the original setting, we use momentum 0.9. We performed grid search on learning rate and weight decay, with the ranges η: {0.02, 0.01, 0.005, 0.001, 0.0001} and λ₁: {0.1, 0.01, 0.001, 0.0001}, and report the best average error for both methods. For the L2-SP norm, we follow the authors' setting and use a constant λ₂ = 0.01. Different from the original setting for L2 regularization, we set λ₂ = λ₁ to simulate the normal L2 norm.

Data augmentation is an important way of increasing data quantity and diversity to make models more robust; it is even more critical for transfer learning with few instances. The effect of data augmentation can be viewed as a form of regularization, and the choice of data augmentation method is itself a hyperparameter. Most currently used data augmentation methods have verified their effectiveness on training ImageNet models, such as random mirror flipping, random rescaled cropping and color jittering, and they are also widely used for fine-tuning. Do these methods transfer to fine-tuning on other datasets? Here we compare three settings for data augmentation: 1) random resized cropping: our default data augmentation; 2) random crop: the same as the standard data augmentation except that we use random cropping with a fixed size; 3) random flip: simply random horizontal flipping.

The effect of data augmentation is dataset dependent and has a big impact on the convergence time. The training and validation errors of fine-tuning with different data augmentation strategies are illustrated in Figure 10. We find that advanced cropping works significantly better on datasets like Cars, Aircrafts and Flowers but performs worse on Dogs. The choice of data augmentation method has a dramatic influence on the convergence behavior: simpler data augmentation usually converges very quickly (e.g., in 20 epochs), while the training error for random resized cropping converges much more slowly. We see that the default hyperparameters and data augmentation method lead to overfitting on the Dogs dataset. This can be solved by disabling momentum, as we can see in Table 2, and results in better performance than random cropping. We can expect that random resized cropping adds extra variance to the gradient direction, so the effect of disabling momentum is more pronounced in this case.

Disabling momentum improves performance on datasets that are close to the source domain. Here we compare data augmentation methods with different momentum settings. As can be seen in Table 6, random resized cropping consistently outperforms random cropping on datasets like Cars, Aircrafts and Flowers, and using momentum improves the performance significantly for both methods. We see that the advanced data augmentation method with the default hyperparameters (m = 0.9 and η = 0.01) leads to overfitting on the Dogs and Caltech datasets (Figure 11(a) and (c)). Random resized cropping with zero momentum solves this problem and results in better performance than random cropping. When momentum is disabled for random cropping, the performance is still better for Dogs but decreases for the other datasets. This can be expected, as random cropping produces images with less variation and noise than random resized cropping, so the gradient variation is less random and momentum can still point in the right direction. This can be further verified as we increase the learning rate for random cropping, which adds variation to the gradients: disabling momentum then shows better performance than nonzero momentum on datasets that are close to the source domain.

Transfer learning from similar source domains helps but does not guarantee good performance. We consider two ImageNet subsets: 449 natural-object classes and 551 man-made-object classes, following previously published splits (see their supplementary materials). From the bottom of Table 7, we can see that fine-tuning from ImageNet-Natural pre-trained models performs better on the Birds and Dogs datasets, whereas Caltech-256 and Indoor benefit more from ImageNet-Manmade pre-trained models. The performance gap between ImageNet-Manmade and ImageNet-Natural on Cars and Flowers is not as significant as for Birds and Dogs. It is surprising to see that fine-tuning from the ImageNet-Manmade subset yields worse performance than ImageNet-Natural on the Cars and Indoor datasets. Fine-tuning on both subsets does not exceed fine-tuning from the full ImageNet pre-trained model.

Scratch training can outperform fine-tuning with better hyperparameters. We further re-examine the default hyperparameters for scratch training.
For most tasks, training from scratch with default hyperparameters is much worse than fine-tuning from ImageNet. However, after slight hyperparameter tuning of learning rate, momentum and weight decay, the performance of training from scratch gets close to that of default fine-tuning (e.g., on Cars and Aircrafts). Scratch training with HPO on Cars and Aircrafts even surpasses the default fine-tuning results. Previous studies also identified that datasets like Cars and Aircrafts do not benefit much from fine-tuning.

Table 7: "FT ImageNet Default" is fine-tuning with the default hyperparameters. "Scratch Train" uses hyperparameters similar to default fine-tuning, namely η = 0.1, n = 256, λ = 0.0001 and m = 0.9, with a doubled training schedule. "HPO" refers to the best results obtained with hyperparameter grid search. Note that our Indoor dataset is fine-tuned from ImageNet. The ResNet-101 DELTA and Inception-v3 results are quoted from prior work.

Pre-trained models on ImageNet-Natural and ImageNet-Manmade. We train ResNet-101 from scratch on each subset using standard hyperparameters, i.e., initial learning rate 0.1, batch size 256 and momentum 0.9. We train for 180 epochs; the learning rate is decayed at epochs 60 and 120 by a factor of 10. Table 8 shows the Top-1 errors of training ResNet-101 on each source dataset.

Scratch training with HPO. Figure 12 shows the training/validation errors of training from scratch on each dataset with different learning rates and weight decays. We use initial learning rate 0.1 and batch size 256. For most datasets, we train 600 epochs and decay the learning rate at epochs 400 and 550 by a factor of 10. The parameters to search are η ∈ {0.1, 0.2, 0.5} and λ ∈ {0.0001, 0.0005}, with fixed momentum 0.9 and batch size 256. We observe that weight decay 0.0005 consistently performs better than 0.0001.
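The scratch-training grid described here is small enough to enumerate directly; a sketch (the `train_eval` callback, returning validation error, is a placeholder):

```python
from itertools import product

def grid_search(train_eval, lrs=(0.1, 0.2, 0.5), wds=(0.0001, 0.0005)):
    # Exhaustive search over the small scratch-training grid; momentum and
    # batch size are held fixed at 0.9 and 256 as described in the text.
    results = {(lr, wd): train_eval(lr=lr, wd=wd, momentum=0.9, batch_size=256)
               for lr, wd in product(lrs, wds)}
    return min(results, key=results.get)  # setting with the lowest validation error
```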
TL;DR: This paper re-examines several common practices of setting hyperparameters for fine-tuning.
Transfer learning through fine-tuning a pre-trained neural network with an extremely large dataset, such as ImageNet, can significantly accelerate training, while the accuracy is frequently bottlenecked by the limited dataset size of the new target task. To solve this problem, some regularization methods that constrain the outer layer weights of the target network using the starting point as references (SPAR) have been studied. In this paper, we propose a novel regularized transfer learning framework DELTA, namely DEep Learning Transfer using Feature Map with Attention. Instead of constraining the weights of the neural network, DELTA aims to preserve the outer layer outputs of the target network. Specifically, in addition to minimizing the empirical loss, DELTA aligns the outer layer outputs of the two networks by constraining a subset of feature maps that are precisely selected by attention learned in a supervised manner. We evaluate DELTA against state-of-the-art algorithms, including L2 and L2-SP. The experimental results show that our proposed method outperforms these baselines with higher accuracy for new tasks.

In many real-world applications, deep learning practitioners often have a limited number of training instances. Directly training a deep neural network with a small training dataset usually results in the so-called over-fitting problem, and the quality of the obtained model is low. A simple yet effective approach to obtain high-quality deep learning models is to perform weight fine-tuning. In such practices, a deep neural network is first trained using a large (and possibly irrelevant) source dataset (e.g., ImageNet). The weights of such a network are then fine-tuned using the data from the target application domain.

Fine-tuning is a specific approach to perform transfer learning in deep learning. The weights pre-trained on a source dataset with a sufficiently large number of instances usually provide a better initialization for the target task than random initialization. In a typical fine-tuning approach, weights in the lower convolution layers are fixed and weights in the upper layers are re-trained using data from the target domain. In this approach, parameters of the target model may be driven far away from their initial values, which also causes over-fitting in transfer learning scenarios. Approaches called regularization using the starting point as the reference (SPAR) were recently proposed to solve this over-fitting problem. For example, Li et al. (BID10) proposed L2-SP, which incorporates the Euclidean distance between the target weights and the starting point (i.e., the weights of the source network) as part of the loss. By minimizing this loss function, L2-SP aims to minimize the empirical loss of deep learning while reducing the distance of the weights between the source and target networks. They achieved significant improvement compared with the standard practice of using weight decay (L2 regularization).

However, such regularization methods may not deliver the optimal solution for transfer learning. On the one hand, if the regularization is not strong, even with fine-tuning, the weights may still be driven far away from the initial position, leading to the loss of useful knowledge, i.e., catastrophic memory loss. On the other hand, if the regularization is too strong, the newly obtained model is constrained to a local neighborhood of the original model, which may be suboptimal for the target dataset.
Although the aforementioned methods demonstrated the power of regularization in deep transfer learning, we argue that research on at least the following two aspects is needed in order to further improve current regularization methods.

Behavior vs. Mechanisms. The practice of weight regularization for CNNs is motivated by a simple intuition: networks (layers) with similar weights should produce similar outputs. However, due to the complex structure of deep neural networks with strong redundancies, regulating the model parameters directly seems an overkill. We argue that we should regularize the "behavior", or in our case, the outer layer outputs (e.g., the feature maps) produced by each layer, rather than the model parameters. With constrained feature maps, the generalization capacity can be improved by aligning the behaviors of the outer layers of the target network to those of the source one, which has been pre-trained using an extremely large dataset. In Convolutional Neural Networks, which we focus on exclusively in this paper, an outer layer is a convolution layer and the output of an outer layer is its feature map.

Syntax vs. Semantics. While regularizing the feature maps might improve the transfer of generalization capacity, it is still difficult to design such regularizers. It is challenging to measure the similarity/distance between feature maps without understanding their semantics or representations. For example, for image classification, some of the convolution kernels may correspond to features that are shared between the two learning tasks and hence should be preserved in transfer learning, while others are specific to the source task and hence could be eliminated in transfer learning.

In this paper, we propose a novel regularization approach, DELTA, to address these two issues. Specifically, DELTA selects the discriminative features from the outer layer outputs by re-weighting the feature maps with a novel supervised attention mechanism. Through paying attention to the discriminative parts of the feature maps, DELTA characterizes the distance between the source/target networks using their outer layer outputs, and incorporates such distance as the regularization term of the loss function. With back-propagation, such regularization ultimately affects the optimization of the weights of the deep neural network and endows the target network with the generalization capacity inherited from the source network.

In summary, our key insight is what we call "unactivated channel re-usage". Specifically, our approach identifies the transferable channels and preserves such filters through regularization, and identifies the untransferable channels and reuses them, using an attention mechanism with feature map regularization. We have conducted extensive experiments using a wide range of source/target datasets and compared DELTA to the existing deep transfer learning algorithms that pursue weight similarity. The experimental results show that DELTA significantly outperforms the state-of-the-art regularization algorithms, including L2 and L2-SP, with higher accuracy on a wide group of image classification datasets.

The rest of the paper is organized as follows: in Section 2 related works are summarized, in Section 3 our feature map based regularization method is introduced, in Section 4 experimental results are presented and discussed, and finally in Section 5 the paper is concluded.
In this section, we first review the works related to this paper, discussing the contributions made by this work beyond previous studies; we then present the background of our work.

Transfer learning is a type of machine learning paradigm aiming at transferring the knowledge obtained in a source task to a target task (BID14). Our work primarily focuses on inductive transfer learning for deep neural networks, where the label space of the target task differs from that of the source task. For example, Donahue et al. (BID3) proposed to train a classifier based on features extracted from a pre-trained CNN, where a large number of parameters, such as filters, of the source network are reused directly in the target one. This method may overload the target network with tons of irrelevant features (without discrimination power), while the key features of the target task might be ignored. To understand whether a feature can be transferred to the target network, Yosinski et al. (BID19) quantified the transferability of features from each layer considering the performance gain. Moreover, to understand the factors that may affect deep transfer learning performance, Huh et al. (BID8) empirically analyzed transferring the features obtained by an ImageNet pre-trained source network to a wide range of computer vision tasks. Recently, more studies to improve inductive transfer learning from a diverse set of angles have been proposed, such as filter subset selection (BID4; BID2), sparse transfer (BID12), filter distribution constraining (BID0), and parameter transfer (BID21).

For deep transfer learning problems, the most relevant work to our study is BID10, where the authors investigated regularization schemes to accelerate deep transfer learning while preventing fine-tuning from over-fitting. Their work showed that a simple L2-norm regularization on top of the "Starting Point as a Reference" optimization can significantly outperform a wide range of regularization-based deep transfer learning mechanisms, such as the standard L2-norm regularization. Compared to the above work, the key contributions made in this paper include 1) rather than regularizing the distance between the parameters of the source network and target network, DELTA constrains the L2-norm of the difference between their behaviors (i.e., the feature maps of outer layer outputs in the source/target networks); and 2) the regularization term used in DELTA incorporates a supervised attention mechanism, which re-weights the regularizers according to their performance gain/loss.

In terms of methodologies, our work is also related to knowledge distillation for model compression (BID6; BID15). Generally, knowledge distillation focuses on teacher-student network training, where the teacher and student networks are usually based on the same task (BID6). These works frequently intend to transfer the knowledge of the teacher network to the student one through aligning the outputs of some layers (BID15). The works closest to this paper are BID20 and BID18, where knowledge distillation techniques have been studied to improve transfer learning. Compared to the above, our work, like other transfer learning studies, intends to transfer knowledge between different source/target tasks, though the source/target networks can be viewed as teachers and students respectively.
We follow the conceptual ideas of knowledge distillation to regularize the outer layer outputs of the network (i.e., the feature maps), yet further extend such regularization to a supervised transfer learning mechanism by incorporating the labels of the target task (which is different from the source task/network). Moreover, a supervised attention mechanism has been adopted to regularize the feature maps according to the importance of filters. Other works relevant to our methodology include continual learning (BID9; BID11) and attention mechanisms for CNN models (BID13; BID16; BID17; BID20), among others.

Deep convolutional networks usually consist of a great number of parameters that need to fit the dataset. For example, ResNet-110 has more than one million free parameters. The size of the set of free parameters causes the risk of over-fitting. Regularization is the technique to reduce this risk by constraining the parameters within a limited space. The general regularization problem is usually formulated as follows. Let us denote the dataset for the desired task as {(x_1, y_1), (x_2, y_2), (x_3, y_3), ..., (x_n, y_n)}, where totally n tuples are offered and each tuple (x_i, y_i) refers to an input image and its label in the dataset. We further let ω ∈ R^d be the d-dimensional parameter vector containing all d parameters of the target model. The optimization objective with regularization is to obtain

min_ω  Σ_{i=1}^{n} L(z(x_i, ω), y_i) + λ·Ω(ω),    (1)

where the first term Σ_{i=1}^{n} L(z(x_i, ω), y_i) refers to the empirical loss of data fitting, while the second term is a general form of regularization. The tuning parameter λ > 0 balances the trade-off between the empirical loss and the regularization loss. Without any explicit information (such as other datasets) given, one can easily use the L0/L1/L2-norm of the parameter vector ω as the regularization to fix the consistency issue of the network.

Given a network with parameters ω* pre-trained on an extremely large dataset as the source, one can estimate the parameters of the target network through transfer learning paradigms. Using ω* as the initialization when solving the problem in Eq. 1 can accelerate the training of the target network through knowledge transfer (BID7; BID1). However, the accuracy of the target network would be bottlenecked in such settings. To further improve transfer learning, novel regularized transfer learning paradigms that constrain the divergence between the target and source networks have been proposed, such that

min_ω  Σ_{i=1}^{n} L(z(x_i, ω), y_i) + λ·Ω(ω, ω*),    (2)

where the regularization term Ω(ω, ω*) characterizes the differences between the parameters of the target and source networks. As ω* is frequently used as the initialization of ω during the optimization procedure, this method is sometimes referred to as the Starting Point As the Reference (SPAR) method. To regularize weights straightforwardly, one can use the geometric distance between ω and ω* as the regularization term. For example, the L2-SP algorithm constrains the Euclidean distance of the weights of the convolution filters between the source/target networks (BID10).

In this way, we summarize the existing deep transfer learning approaches as solutions of the regularized learning problem listed in Eq. 2, where the regularizer aims at constraining the divergence of the parameters of the two networks while ignoring the behavior of the networks on the training dataset {(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)}.
More specifically, the regularization terms used by the existing deep transfer learning approaches neither consider how the network with certain parameters would behave on the new data (images) nor leverage the supervision information from the labeled data (images) to improve the transfer performance. In this section, we first formulate the problem, then present the overall design of the proposed solution and introduce several key algorithms. In our research, instead of bounding the difference of weights, we intend to regulate the network behaviors and force some layers of the target network to behave similarly to the source ones. Specifically, we define the "behaviors" of a layer as its output, which carries semantics-rich and discriminative information. DELTA incorporates a new regularizer Ω(ω, ω*, x). Given a pre-trained parameter ω* and any input image x, the regularizer Ω(ω, ω*, x) measures the distance between the behaviors of the target network with parameter ω and those of the source one based on ω*. With such a regularizer, the transfer learning problem can be reduced to the learning problem

$$\min_{\omega}\; \sum_{i=1}^{n} L(z(x_i, \omega), y_i) + \lambda \cdot \sum_{i=1}^{n} \Omega(\omega, \omega^*, x_i, y_i, z), \qquad (3)$$

where $\sum_{i=1}^{n} \Omega(\omega, \omega^*, x_i, y_i, z)$ characterizes the aggregated difference between the source and target networks over the whole training dataset using the model z.

Figure 1: Behavior-based Regularization using Feature Maps with Attentions.

Note that, with the input tuples (x_i, y_i) for 1 ≤ i ≤ n, the proposed regularizer Ω(ω, ω*, x_i, y_i, z) is capable of regularizing the behavioral differences of the network model z on each labeled sample (x_i, y_i) in the dataset, using the parameters ω and ω* respectively. Further, inspired by the SPAR method, DELTA accelerates the optimization procedure of the regularizer through incorporating a parameter-based proximal term, such that

$$\Omega(\omega, \omega^*, x, y, z) = \alpha \cdot \Omega'(\omega, \omega^*, x, y, z) + \beta \cdot \Omega''(\omega \backslash \omega^*), \qquad (4)$$

where α, β are two non-negative tuning parameters balancing the two terms. On top of the behavioral regularizer Ω'(ω, ω*, x, y, z), DELTA includes a term Ω''(ω\ω*) regularizing the subset of parameters that are privately owned by the target network ω only and do not exist in the source network ω*. Specifically, Ω''(ω\ω*) constrains the L2-norm of the private parameters in ω, so as to improve the consistency of inner-layer parameter estimation. Note that, when using ω* as the initialization of ω for optimization, DELTA indeed adopts the starting point as reference (SPAR) strategy BID10 to accelerate the optimization and gain better generalizability. To regularize the behavior of the networks, DELTA considers the distance between the outer layer outputs of the two networks. Figure 1 illustrates the concepts of the proposed method. Specifically, the outer layer of the network consists of a large set of convolutional filters. Given an input x_i (for any 1 ≤ i ≤ n in the training set), each filter generates a feature map. Thus, DELTA characterizes the outer layer output of the network model z, based on input x_i and parameter ω, using a set of feature maps FM_j(z, ω, x_i) for 1 ≤ j ≤ N, where N is the number of filters in the network.
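As a concrete illustration of this behavior-based penalty, the PyTorch sketch below captures the outer-layer feature maps of the source and target networks with forward hooks and penalizes their squared Euclidean distance. It is our own minimal reading of the idea, not DELTA's released code: `layer_name` is a placeholder, and the filters are weighted uniformly here (the supervised attention weights W_j of Eq. 5 are introduced in the next subsection).

```python
import torch

def register_fm_hook(model, layer_name, store):
    """Capture the output feature maps of the named layer during forward()."""
    layer = dict(model.named_modules())[layer_name]
    def hook(module, inputs, output):
        store["fm"] = output  # shape: (batch, N_filters, H, W)
    return layer.register_forward_hook(hook)

def behavioral_penalty(target_model, source_model, x, layer_name):
    """Sum of squared distances between source/target feature maps of one
    outer layer, with uniform (i.e., unweighted) filter importance."""
    tgt_store, src_store = {}, {}
    h1 = register_fm_hook(target_model, layer_name, tgt_store)
    h2 = register_fm_hook(source_model, layer_name, src_store)
    target_model(x)
    with torch.no_grad():
        source_model(x)  # the source network stays frozen
    h1.remove(); h2.remove()
    diff = torch.relu(tgt_store["fm"]) - torch.relu(src_store["fm"])
    return diff.flatten(2).pow(2).sum()
```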
In this way, the behavioral regularizer is defined as

$$\Omega'(\omega, \omega^*, x_i, y_i, z) = \sum_{j=1}^{N} W_j(z, \omega^*, x_i, y_i) \cdot \left\lVert \mathrm{FM}_j(z, \omega, x_i) - \mathrm{FM}_j(z, \omega^*, x_i) \right\rVert_2^2, \qquad (5)$$

where W_j(z, ω*, x_i, y_i) refers to the weight assigned to the j-th filter and the i-th image (for all 1 ≤ i ≤ n and 1 ≤ j ≤ N), and the behavioral difference between the two feature maps, i.e., FM_j(z, ω, x_i) and FM_j(z, ω*, x_i), is measured using their Euclidean distance (denoted as ‖·‖_2). In the following sections, we present the design and implementation of the feature map extraction FM_j(z, ω, x) for 1 ≤ j ≤ N, as well as the attention model that assigns the weight W_j(z, ω*, x_i, y_i) to each labeled image and filter. Given each filter of the network with parameter ω and an input x_i drawn from the target dataset, DELTA first uses the filter to get the corresponding output based on x_i, then adopts Rectified Linear Units (ReLU) to rectify the output as a matrix. Further, DELTA formats the output matrices into vectors through concatenation. In this way, DELTA obtains FM_j(z, ω, x_i) for 1 ≤ j ≤ N and 1 ≤ i ≤ n, which are used in Eq. 5. In DELTA, the proposed regularizer measures the distance between the feature maps generated by the two networks, then aggregates the distances using non-negative weights. Our aim is to pay more attention to those features with greater discrimination power through supervised learning. To obtain such weights for the feature maps, we propose a supervised attention method derived from backward variable selection, where the weight of a feature is characterized by the potential performance loss when removing it from the network. For a clear description, following common conventions, we first define a convolution filter as follows. The parameters of a conv2d layer form a four-dimensional tensor with the shape (c_{i+1}, c_i, k_h, k_w), where c_i and c_{i+1} represent the number of channels of the i-th and (i+1)-th layers respectively. Such a convolutional layer contains c_{i+1} filters, each with a kernel of size c_i * k_h * k_w, taking the feature maps of size c_i * h_i * w_i of the i-th layer as input and outputting a feature map of size h_{i+1} * w_{i+1}. In particular, we evaluate the weight of a filter as the performance reduction when the filter is disabled in the network. Intuitively, removing a filter with greater discrimination power usually causes higher performance loss. Such channels should therefore be constrained more strictly, since a useful representation for the target task has already been learned by the source task. Given the pre-trained parameter ω* and an input image x_i, DELTA sets the weight of the j-th channel using the gap between the empirical losses of the networks on the labeled sample (x_i, y_i) with and without the j-th channel, as follows:

$$W_j(z, \omega^*, x_i, y_i) = \mathrm{softmax}\big( L(z(x_i, \omega^*_{\backslash j}), y_i) - L(z(x_i, \omega^*), y_i) \big), \qquad (6)$$

where ω*_{\j} refers to the modification of the original parameter ω* with all elements of the j-th filter set to zero (i.e., removing the j-th filter from the network). We use softmax to normalize the results and ensure all weights are non-negative. The aforementioned supervised attention mechanism yields a higher weight for a filter on a specific image if and only if the corresponding feature map in the pre-trained source network has higher discrimination power; i.e., paying more attention to such a filter on such an image might bring a higher performance gain.
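The filter weights of Eq. 6 can be estimated by ablating one filter at a time and measuring the loss gap, as in the hedged sketch below. The function name `filter_weights` is ours, and zeroing a copy of the chosen layer's weight tensor is one straightforward way to realize ω*_{\j}; the authors may batch or cache this computation differently.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def filter_weights(source_model, layer_weight, x, y):
    """Eq. 6 sketch: weight of filter j = loss(without filter j) - loss(full),
    softmax-normalized over filters. `layer_weight` is the (c_out, c_in, kh, kw)
    weight tensor of the chosen conv layer inside `source_model`."""
    base_loss = F.cross_entropy(source_model(x), y)
    saved = layer_weight.data.clone()
    gaps = []
    for j in range(layer_weight.size(0)):
        layer_weight.data[j].zero_()               # disable filter j
        gaps.append(F.cross_entropy(source_model(x), y) - base_loss)
        layer_weight.data.copy_(saved)             # restore original weights
    return torch.softmax(torch.stack(gaps), dim=0)
```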
Note that, to calculate L(z(x_i, ω*_{\j}), y_i) and L(z(x_i, ω*), y_i) for the supervised attention mechanism, we introduce a baseline algorithm L2-FE that fixes the feature extractor (with all parameters copied from the source network) and only trains the discriminators using the target task. The L2-FE model can be viewed as an adaptation of the source network (weights) to the target tasks, without further modifications to the outer layer parameters. In our work, we use L2-FE to evaluate L(z(x_i, ω*_{\j}), y_i) and L(z(x_i, ω*), y_i) using the target datasets. We have conducted a comprehensive experimental study of the proposed DELTA method. Below we first briefly review the datasets used, followed by a description of the experimental procedure and finally our observations. We evaluate the performance on three benchmarks with different tasks: Caltech 256 for general object recognition, Stanford Dogs 120 for fine-grained object recognition, and MIT Indoors 67 for scene classification. For the first two benchmarks, we used ImageNet as the source domain, and Places 365 for the last one. Caltech 256. Caltech 256 is a dataset with 256 object categories containing a total of 30607 images. Different numbers of training examples are used by researchers to validate the generalization of proposed algorithms. In this paper, we create two configurations for Caltech 256, which have 30 and 60 randomly sampled training examples per category respectively, following the procedure used in BID10. Stanford Dogs 120. The Stanford Dogs dataset contains images of 120 breeds of dogs from around the world. There are exactly 100 examples per category in the training set. It is used for the task of fine-grained image categorization. We do not use the bounding box annotations. MIT Indoors 67. MIT Indoors 67 is a scene classification task containing 67 indoor scene categories, each of which consists of 80 images for training and 20 for testing. Indoor scene recognition is challenging because both spatial properties and object characters are expected to be extracted. CUB-200-2011. CUB-200-2011 contains 11,788 images of 200 bird species. Each species is associated with a Wikipedia article and organized by scientific classification. Each image is annotated with bounding box, part location, and attribute labels. We use only the classification labels during training, while the part location annotations are used in a quantitative evaluation of show cases, to explain the transferring effect of our algorithm. Food-101. Food-101 is a large-scale dataset of 101 food categories, with 101,000 images, for the task of fine-grained image categorization. 750 training images and 250 test images are provided for each class. This dataset is challenging because the training images contain some amount of noise. We implemented our method with ResNet-101 and Inception-V3 as the base networks. For the experimental setup we followed almost the same procedure as BID10, due to the close relationship between our work and theirs. After training with the source dataset and before fine-tuning the network with the target dataset, we replace the last layer of the base network with a randomly initialized one suited to the target dataset. For ResNet-101, the input images are resized to 256*256 and normalized to zero mean for each channel, followed by the data augmentation operations of random mirroring and random cropping to 224*224. For Inception-V3, images are resized to 320*320 and finally cropped to 299*299. We use a batch size of 64.
SGD with momentum 0.9 is used for optimizing all models. The learning rate for the base model starts at 0.01 for ResNet-101 and 0.001 for Inception-V3, and is divided by 10 after 6000 iterations. The training is finished at 9000 iterations. We use five-fold cross validation to search the best configuration of the hyperparameter α for each experiment. The hyperparameter β is fixed to 0.01. As mentioned, our experiments compared DELTA to several key baseline algorithms, including L2, L2-SP BID10, and L2-FE (see also Section 3.4), all under the same settings. Each experiment is repeated five times. The average top-1 classification accuracy and standard deviation are reported. In FIG0 we plot a sample learning curve of training with different regularization techniques. Comparing these regularization techniques, we observe that our proposed DELTA shows faster convergence than the simple L2-SP regularization with both the step decay (StepLR) and exponential decay (ExponentialLR) learning rate schedulers. In addition, we find that the learning curve of DELTA is smoother than that of L2-SP, and it is not sensitive to the learning rate decay happening at the 6000th iteration when using StepLR. In TAB0 we show the results of our proposed method DELTA with and without attention, compared to the L2-SP baseline reported in BID10 and also the naive L2-FE and L2 methods. We find that on some datasets, fine-tuning with L2 regularization does not perform significantly better than directly using the pre-trained model as a feature extractor (L2-FE), while L2-SP outperforms the naive methods without SPAR. We observe that greater benefits are gained using our proposed attention mechanism. Data augmentation is a widely used technique to improve image classification. Following BID10, we used a simple data augmentation method and a post-processing technique. First, we keep the original aspect ratio of input images by resizing them with the shorter edge being 256, instead of ignoring the aspect ratio and directly resizing them to 256*256. Second, we apply 10-crop testing to further improve the performance. In TAB1, we document the experimental results using these techniques with different regularization methods. We observe a clear pattern: with additional data augmentation, all three evaluated methods (L2, L2-SP, DELTA) have improved classification accuracy, while our method still delivers the best one. To better understand the performance gain of DELTA, we performed an experiment in which we analyzed how the parameters of the convolution filters change after fine-tuning. Toward that purpose we randomly sampled images from the testing set of Stanford Dogs 120. For ResNet-101, which we use exclusively in this analysis, we grouped filters into stages as described in (He et al., 2016). These stages are conv2_x, conv3_x, conv4_x, conv5_x. Each stage contains a few stacked blocks, and a block is a basic inception unit having 3 conv2d layers. One conv2d layer consists of a number of output filters. We flatten each filter into a one-dimensional parameter vector for convenience. The Euclidean distance between the parameter vectors before and after fine-tuning is calculated. All distances are sorted, as shown in FIG3. We observed a sharp difference between the two distance distributions. Our hypothesis on the possible cause of the difference is that with simple L2-SP regularization, all convolution filters are forced to be similar to the original ones.
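A minimal sketch of this per-filter distance analysis is given below; it assumes a snapshot of the pre-trained weights was kept, and the helper name `filter_distances` is ours.

```python
import torch

def filter_distances(model, source_params, layer_name):
    """Flatten each conv filter of `layer_name` into a vector and compute the
    Euclidean distance between its fine-tuned and pre-trained versions."""
    tuned = dict(model.named_parameters())[layer_name]         # (c_out, c_in, kh, kw)
    original = source_params[layer_name]
    dist = (tuned.detach() - original).flatten(1).norm(dim=1)  # one value per filter
    return torch.sort(dist, descending=True).values
```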
Using attention, we allow "unactivated" convolution filters to be reused for better image classification. About 90% of the parameter vectors of DELTA have a larger distance than those of L2-SP. We also observe that a small number of filters is driven very far away from the initial values (as shown at the left end of the curves in FIG3). We call such an effect "unactivated channel re-usage". To further understand the effect of attention and the implication of "unactivated channel re-usage", we "attributed" the attention to the original image to identify the set of pixels having high contributions in the activated feature maps. We selected some convolution filters on which the source model (the initialization before fine-tuning) has low activation. For the convenience of analyzing the effect of the regularization methods, each element a_i of the original activation map is normalized as

$$a_i' = \frac{a_i - \min(a)}{\max(a) - \min(a)},$$

where the min and max terms represent the minimum and maximum values of the whole activation map respectively. Activation maps of these convolution filters for the various regularization methods are presented in each row. As shown in FIG4, our first observation is that without attention, the activation maps from DELTA on different images are more or less the same as the activation maps from the other regularization methods. This partially explains the fact that we do not observe a significant improvement of DELTA without attention. Using attention, however, changes the activation maps significantly. Regularization by DELTA with attention shows obviously improved concentration. With attention (the right-most column in FIG4), we observed a large set of pixels that have high activation at important regions around the heads of the animals. We believe this phenomenon provides additional evidence to support our intuition of "unactivated channel re-usage" as discussed in the previous paragraphs. In addition, we include new statistical results on activations at the part locations of CUB-200-2011, supporting the above qualitative cases. The CUB-200-2011 dataset defines 15 discriminative parts of birds, e.g., the forehead, tail, beak, and so on. Each part is annotated with a pixel location representing its center position if it is visible. So for each image, we have several key points which are very important for discriminating its category. Using all testing examples of CUB-200-2011, we calculated the normalized activations at these key points for the different regularization methods. As shown in TAB2, DELTA obtains the highest average activations on those key points, demonstrating that DELTA focuses on more discriminative features for bird recognition. In this paper, we studied a regularization technique that transfers the behaviors and semantics of the source network to the target one through constraining the difference between the feature maps generated by the convolution layers of the source/target networks, with attention. Specifically, we designed a regularized learning algorithm DELTA that models the difference of the feature maps with attention between networks, where the attention models are obtained through supervised learning. Moreover, we further accelerate the optimization for regularization using the starting point as reference (SPAR). Our extensive experiments evaluated DELTA on several real-world datasets based on commonly used convolutional neural networks. The experimental results show that DELTA is able to significantly outperform the state-of-the-art transfer learning methods.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rkgbwsAcYm
improving deep transfer learning with regularization using attention based feature maps
Neural models have achieved considerable improvements for many natural language processing tasks, but they offer little transparency, and interpretability comes at a cost. In some domains, automated predictions without justifications have limited applicability. Recently, progress has been made regarding single-aspect sentiment analysis for reviews, where the ambiguity of a justification is minimal. In this context, a justification, or mask, consists of (long) word sequences from the input text which suffice to make the prediction. Existing models cannot handle more than one aspect in one training and induce binary masks that might be ambiguous. In our work, we propose a neural model that predicts multi-aspect sentiments for reviews and generates a probabilistic multi-dimensional mask (one per aspect) simultaneously, in an unsupervised and multi-task learning manner. Our evaluation shows that on three datasets, in the beer and hotel domains, our model outperforms strong baselines and generates masks that are: strong feature predictors, meaningful, and interpretable. Neural networks have become the standard for many natural language processing tasks. Despite the significant performance gains achieved by these complex models, they offer little transparency concerning their inner workings. Thus, they come at the cost of interpretability. In many domains, automated predictions have a real impact on the final decision, such as treatment options in the field of medicine. Therefore, it is important to provide the underlying reasons for such a decision. We claim that integrating interpretability in a (neural) model should supply the reason for the prediction and should yield better performance. However, justifying a prediction might be ambiguous and challenging. Prior work includes various methods that find the justification (also called the rationale or mask) of a target variable in an input text. The mask is defined as one or multiple text fragments from the input text. Each should contain words that altogether are short, coherent, and alone sufficient for the prediction as a substitute for the input. Many works have been applied to single-aspect sentiment analysis for reviews, where the ambiguity of a justification is minimal. In this case, we define an aspect as an attribute of a product or service, such as Location or Cleanliness for the hotel domain. There are three different methods to generate masks: using reinforcement learning with a trained model, generating rationales in an unsupervised manner jointly with the objective function, or including annotations during training. However, these models generate justifications that are 1) only tailored for one aspect, and 2) expressed as a hard (binary) selection of words. A review text reflects opinions about multiple topics a user cares about. It appears reasonable to analyze multiple aspects within a multi-task learning setting, but a model must be trained as many times as the number of aspects. A hard assignment of words to aspects might lead to ambiguities that are difficult to capture with a binary mask: in the text "The room was large, clean and close to the beach.", the word "room" refers to the aspects Room, Cleanliness, and Location. Finally, collecting human-provided rationales at scale is expensive and thus impractical. In this work, we study interpretable multi-aspect sentiment classification.
We describe an architecture for predicting the sentiment of multiple aspects while jointly generating a probabilistic (soft) multi-dimensional mask (one dimension per aspect), in an unsupervised and multi-task learning manner. We show that the induced mask is beneficial for identifying simultaneously what parts of the review relate to what aspect, and for capturing ambiguities of words belonging to multiple aspects. Thus, the induced mask provides fine-grained interpretability and improves the final performance. Traditionally, interpretability came at the cost of reduced accuracy. In contrast, our evaluation shows that on three datasets, in the beer and hotel domains, our model outperforms strong baselines and generates masks that are: strong feature predictors, meaningful, and interpretable compared to attention-based methods and a single-aspect masker. We show that it can be a benefit to 1) guide the model to focus on different parts of the input text, and 2) further improve the sentiment prediction for all aspects. Therefore, interpretability does not come at a cost anymore. The contributions of this work can be summarized as follows: • We propose a Multi-Aspect Masker (MAM), an end-to-end neural model for multi-aspect sentiment classification that provides fine-grained interpretability within the same training. Given a text review as input, the model generates a probabilistic multi-dimensional mask, with one dimension per aspect. It predicts the sentiments of multiple aspects, and highlights long sequences of words justifying the rating prediction for each aspect; • We show that interpretability does not come at a cost: our final model significantly outperforms strong baselines and attention models, both in terms of performance and mask coherence. Furthermore, the level of interpretability is controllable using two regularizers; • Finally, we release a new dataset for multi-aspect sentiment classification, which contains 140k reviews from TripAdvisor with five aspects, each with its corresponding rating. Developing interpretable models is of considerable interest to the broader research community, even more pronounced with neural models. Many works analyzed and visualized state activations, learned sparse and interpretable word vectors, or analyzed attention. Our work differs from these in terms of what is meant by an explanation. Our system identifies one or multiple short and coherent text fragments that, as a substitute for the input text, are sufficient for the prediction. Attention models have been shown to improve prediction accuracy, visualization, and interpretability. The most popular and widely used attention mechanism is soft attention, as opposed to hard and sparse ones. According to prior analyses, standard attention modules noisily predict input importance; the weights cannot provide safe and meaningful explanations. Our model differs in two ways from attention mechanisms: our loss includes two regularizers to favor long word sequences for interpretability, and the normalization is not done over the sequence length. Review multi-aspect sentiment classification is sometimes seen as a sub-problem, tackled with heuristic-based methods or topic models. Recently, neural models achieved significant improvements with less feature engineering: a hierarchical attention model with aspect representations was built using a set of manually defined topics, and later extended with user attention and additional features such as overall rating, aspect, and user embeddings.
The disadvantage of these methods is their limited interpretability, as they rely on many features in addition to the review text. The idea of including human rationales during training has been explored in prior work; although rationales have been shown to be beneficial, they are expensive to collect and might vary across annotators. In our work, no annotation is used. The works most closely related to ours are Li et al. (2016b) and a model that learns rationales jointly with the sentiment objective. Both generate hard rationales and address single-aspect sentiment classification; their models must be trained separately for each aspect, which leads to ambiguities. Li et al. (2016b) developed a post-training method that removes words from a review text until another trained model changes its prediction, while the other work learns an aspect sentiment and its rationale jointly, but hinders the performance and relies on assumptions on the data, such as a small correlation between aspect ratings. In contrast, our model: 1) supports multi-aspect sentiment classification; 2) generates soft multi-dimensional masks in a single training; 3) produces masks that provide interpretability and improve the performance significantly. Let X be a review composed of L words x_1, x_2, ..., x_L, and Y the target A-dimensional sentiment vector, corresponding to the different rated aspects. Our proposed model, called Multi-Aspect Masker, is composed of three components: 1) a Masker module that computes a probability distribution over aspects for each word, resulting in A + 1 different masks (including one for the not-aspect case); 2) an Encoder that learns a representation of a review conditioned on the induced masks; 3) a Classifier that predicts the target variables. The overall model architecture is shown in Figure 1. Our framework generalizes to other tasks, and each neural module is interchangeable with other models. The Masker first computes a hidden representation h for each word x in the input sequence, using their word embeddings e_1, e_2, ..., e_L. Many sequence models could realize this task, such as recurrent, attention, or convolutional neural networks. In our case, we chose a convolutional network because it led to a smaller model and faster training and, empirically, performed similarly to recurrent models. Let a_i denote the i-th aspect for i = 1, ..., A, and a_0 the not-aspect case, because many words can be irrelevant to every aspect. We define M ∈ R^(A+1), the aspect distribution of the input word x, whose components are P(m_{a_t}|x) for t = 0, ..., A. Because we have categorical distributions, we cannot directly sample P(M|x) and backpropagate the gradient through this discrete generation process. Instead, we model the variable m_{a_i} using the Straight-Through Gumbel-Softmax, to approximate sampling from a categorical distribution. We model the parameters of each Gumbel-Softmax distribution M with a single-layer feedforward neural network followed by a log softmax, which induces the log-probabilities of the distribution: ω = log(softmax(W h + b)). W and b are shared across all tokens, to have a constant number of parameters with respect to the sequence length. We control the sharpness of the distributions with the temperature parameter τ. Compared to attention mechanisms, the word importance is a probability distribution over the targets, $\sum_{t=0}^{A} P(m_{a_t}|x) = 1$, instead of a normalization over the sequence length. We weight the word embeddings by their importance towards an aspect a_i with the induced sub-masks, such that $E_{a_i} = (P(m_{a_i}|x_1) \cdot e_1, \ldots, P(m_{a_i}|x_L) \cdot e_L)$, as sketched below. Thereafter, each modified embedding E_{a_i} is fed into the Encoder block.
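A minimal PyTorch sketch of this masker is shown below; it is our reading of the description above, not the authors' code. It relies on `torch.nn.functional.gumbel_softmax` for the Straight-Through estimator, and the convolutional word encoder is reduced to a single `Conv1d` layer for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Masker(nn.Module):
    """Per-word aspect distributions via Straight-Through Gumbel-Softmax."""
    def __init__(self, emb_dim, hidden_dim, n_aspects, tau=0.8):
        super().__init__()
        self.conv = nn.Conv1d(emb_dim, hidden_dim, kernel_size=3, padding=1)
        self.proj = nn.Linear(hidden_dim, n_aspects + 1)  # +1 for "not-aspect"
        self.tau = tau

    def forward(self, embeddings):               # (batch, L, emb_dim)
        h = self.conv(embeddings.transpose(1, 2)).transpose(1, 2)
        logits = self.proj(torch.relu(h))        # (batch, L, A+1)
        # hard=True yields one-hot samples in the forward pass and
        # soft gradients in the backward pass (Straight-Through).
        mask = F.gumbel_softmax(logits, tau=self.tau, hard=True, dim=-1)
        # weight the embeddings per aspect, dropping index 0 (not-aspect)
        weighted = mask[..., 1:].unsqueeze(-1) * embeddings.unsqueeze(2)
        return mask, weighted                    # weighted: (batch, L, A, emb_dim)
```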
Note that E_{a_0} is ignored, because M_{a_0} only serves to absorb the probabilities of words that are insignificant to every aspect. The Encoder module includes a convolutional neural network, for the same reasons as earlier, followed by a max-over-time pooling to obtain a fixed-length feature vector. It produces the hidden representation h_{a_i} for each aspect a_i. To exploit commonalities and differences among aspects, we share the weights of the encoders for all E_{a_i}. Finally, the Classifier block contains, for each aspect a_i, a two-layer feedforward neural network followed by a softmax layer to predict the sentiment ŷ_{a_i}.

Figure 2: A hotel review masked by Multi-Aspect Masker trained on ℒ_sent with no constraint versus trained on ℒ_sent with the λ_p, ℒ_sel, and ℒ_cont constraints (the two highlighted review copies are omitted here). Without the constraints, the induced mask switches between aspects 30 times; with them, only 5 times. Masks lead to mostly long sequences describing each aspect clearly (one switch per aspect), while attention leads to many short and interleaving sequences, where most relate to noise or multiple aspects. Highlighted words correspond to the highest aspect-attention scores above 1/L (i.e., more important than a uniform distribution), and to the aspect a_i maximizing P(m_{a_i}|x).

The first objective to optimize is the sentiment loss, represented by the cross-entropy between the true aspect sentiment labels y_{a_i} and the predictions ŷ_{a_i}, summed over aspects:

$$\mathcal{L}_{sent} = \sum_{i=1}^{A} \mathrm{CE}\big(y_{a_i}, \hat{y}_{a_i}\big).$$

Training Multi-Aspect Masker to optimize ℒ_sent alone will lead to meaningless sub-masks M_{a_i}, as the model tends to focus on certain key words. Consequently, we guide the model to produce long and meaningful sequences of words, as shown in Figure 2. We propose two regularizers: the first controls the number of selected words, and the second favors consecutive words belonging to the same aspect. For the first term ℒ_sel, we calculate the probability p_sel of tagging a word as an aspect word and then compute its cross-entropy with a parameter λ_p. The hyper-parameter λ_p can be interpreted as the prior on the number of selected words among all aspects, which corresponds to the expectation of Binomial(p_sel), as the optimizer will try to minimize the difference between p_sel and λ_p.
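The sketch below gives one plausible PyTorch rendering of this selection regularizer, together with the continuity term described in the next paragraph; the exact formulation in the paper may differ, and the function name `mask_regularizers` is ours.

```python
import torch
import torch.nn.functional as F

def mask_regularizers(mask, lambda_p):
    """`mask`: (batch, L, A+1) word-level aspect distributions, index 0 = not-aspect.
    Returns (L_sel, L_cont)."""
    # L_sel: cross-entropy between the fraction of words tagged as aspect words
    # and the prior lambda_p on the number of selected words.
    p_sel = mask[..., 1:].sum(dim=-1).mean()        # P(word belongs to any aspect)
    l_sel = F.binary_cross_entropy(p_sel, torch.tensor(lambda_p))
    # L_cont: mean variation between consecutive aspect distributions,
    # discouraging aspect switches between neighboring words.
    l_cont = (mask[:, 1:] - mask[:, :-1]).abs().mean()
    return l_sel, l_cont
```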
The second regularizer discourages aspect transitions between two consecutive words, by minimizing the mean variation of two consecutive aspect distributions. We generalize a formulation from prior work on rationale extraction, from a hard binary single-aspect selection to a soft probabilistic multi-aspect selection. Finally, we train our Multi-Aspect Masker in an end-to-end manner and optimize the final loss ℒ_MAM = ℒ_sent + λ_sel · ℒ_sel + λ_cont · ℒ_cont, where λ_sel and λ_cont control the impact of each constraint. In this section, we assess our model on two dimensions: the predictive performance and the quality of the induced masks. We first evaluate Multi-Aspect Masker on the multi-aspect sentiment classification task. In a second experiment, we measure the quality of the induced sub-masks using aspect sentence-level annotations and an automatic topic model evaluation method. The first dataset consists of 1.5 million beer reviews from BeerAdvocate. Each contains multiple sentences describing various beer aspects: Appearance, Smell, Palate, and Taste; users also provided a five-star rating for each aspect. Earlier work modified this dataset to suit the requirements of their method. As a consequence, the obtained subset, composed of 280k reviews, does not reflect the real data distribution: it contains only the first three aspects, and the sentiment correlation between any pair of aspects is decreased significantly (27.2% on average, instead of 71.8% originally). We denote this version the Filtered Beer dataset, and the original one the Full Beer dataset. To evaluate the robustness of models across domains, we crawled 140k hotel reviews from TripAdvisor. Each review contains a five-star rating for each aspect: Service, Cleanliness, Value, Location, and Room. The average correlation between aspects is high (63.0% on average). Compared to beer reviews, hotel reviews are longer, noisier, and less structured, as shown in Appendix A.3. As in prior work, we binarize the problem: ratings at three and above are labeled as positive and the rest as negative. We further divide the datasets into 80/10/10 train, development, and test subsets (more details in Appendix A.1). We compared our Multi-Aspect Masker (MAM) with various baselines. We first used a simple baseline, Sentiment Majority, that reports the majority sentiment across aspects, as the sentiment correlation between aspects might be high (see Section 4.1). Because this information is not available at test time, we trained a model to predict the majority sentiment of a review. The second baseline we used is a shared encoder followed by A classifiers, which we denote Emb + Enc + Clf. This model does not offer any interpretability. We extended it with a shared attention mechanism after the encoder, noted A Shared, that provides coarse-grained interpretability: for all aspects, the network focuses on the same words in the input. Our final goal is to achieve the best performance and provide fine-grained interpretability: to visualize what sequences of words a model focuses on, and to predict the sentiment of each aspect. To this end, we included other baselines: two trained separately for each aspect, and two trained with a multi-aspect sentiment loss. For the former, we employed the well-known NB-SVM for sentiment analysis tasks and the Single Aspect-Mask (SAM) model, each trained separately for each aspect. The two last methods are composed of a separate encoder, attention mechanism, and classifier for each aspect. We utilized two types of attention mechanisms: additive and sparse.
We call these variants Multi Aspect-Attentions (MAA) and Multi Aspect-Sparse-Attentions (MASA), respectively. Diagrams of the baselines can be found in Appendix A.5. In this section, we enquire whether fine-grained interpretability can become a benefit rather than a cost. We group the models and baselines into three different levels of interpretability: • None: we cannot identify what parts of the review are important for the prediction; • Coarse-grained: we can identify what parts of the review were important to predict all aspect sentiments, without knowing what part corresponds to what aspect; • Fine-grained: for each aspect, we can identify what parts are used to predict its sentiment. Overall F1 scores (macro and for each aspect A_i) for the controlled-environment Filtered Beer dataset (where there are assumptions on the data distribution) and the real-world Full Beer dataset are shown in Table 1 and Table 2. We find that our Multi-Aspect Masker (MAM), model 8, with 1.7 to 2.1 times fewer parameters than the aspect-wise attention models (models 6 and 7), performs better on average than all other baselines on both datasets and provides fine-grained interpretability. For the synthetic Filtered Beer dataset, MAM achieves a significant improvement of at least 0.36 macro F1 score, and 0.05 for the Full Beer one. To demonstrate that the induced sub-masks M_{a_1}, ..., M_{a_A} 1) are meaningful for other models to improve final predictions, and 2) bring fine-grained interpretability, we extracted and concatenated the masks to the word embeddings, resulting in contextualized embeddings. We trained a simple Encoder-Classifier with the contextualized embeddings (model 9), which has approximately 1.5 times fewer parameters than MAM. We achieved an absolute macro F1 improvement of 0.34 compared to MAM, and 1.43 compared to the non-contextualized variant, on the Filtered Beer dataset; on the Full Beer one, the performance increases by 0.39 and 2.49 respectively. Similarly, each individual aspect A_i F1 score of MAM is improved to a similar extent. We provide in Appendices A.3.1 and A.3.2 visualizations of reviews with the computed sub-masks M_{a_1}, ..., M_{a_A} and the attentions of different models. Not only do the sub-masks enable reaching higher performance; they also better capture the parts of reviews related to each aspect compared to other methods. Both NB-SVM (model 4) and SAM (model 5), offering fine-grained interpretability and trained separately for each aspect, significantly underperform compared to the Encoder-Classifier (model 1). This is expected: the goal of SAM is to provide rationales at the price of performance, and NB-SVM might not perform well because of its simplicity. Shared attention models (models 2 and 3) perform similarly to the Encoder-Classifier (model 1), but provide only coarse-grained interpretability. However, on the Full Beer dataset, SAM (model 5) obtains better results than the Encoder-Classifier baseline (model 1) and NB-SVM (model 4), which is outperformed by all other models. It might be counterintuitive that SAM performs better, but we claim that its behavior comes from the high correlation between aspects: SAM selects words that should belong to aspect a_i to predict the sentiment of aspect a_j (a_i ≠ a_j). Moreover, in Section 4.5, we show that a single-aspect mask from SAM cannot be employed for interpretability. Finally, Sentiment Majority (model 0) is outperformed by a large margin by all other models on the Filtered Beer dataset, because of the low sentiment correlation between aspects.
However, on the realistic Full Beer dataset, Sentiment Majority obtains a higher score and performs better than NB-SVM (model 4). These results emphasize the ease of the Filtered Beer dataset compared to the Full Beer one, because of the assumptions not holding in the real data distribution. On the Hotel dataset, the learned mask M from Multi-Aspect Masker (model 8) is again meaningful, increasing the performance and adding interpretability. The Encoder-Classifier with contextualized embeddings (model 9) outperforms all other models significantly, with an absolute macro F1 improvement of 0.49. Moreover, it achieves the best individual F1 score for each aspect A_1, ..., A_5. Visualizations of reviews, with masks and attentions, are available in Appendix A.3.3. The interpretability comes from the long sequences that MAM identifies, unlike attention models. In comparison, SAM (model 5) lacks coverage and suffers from ambiguity due to the high correlation between aspects. We observe that Multi-Aspect Masker (model 8) performs slightly worse than the aspect-wise attention models (models 6 and 7), with 2.5 times fewer parameters. We emphasize that using the induced masks in the Encoder-Classifier (model 9) already achieves the best performance. The Single Aspect-Mask (model 5) obtains its lowest relative macro F1 score across the three datasets here: a difference of −3.27, compared to −2.6 and −2.32 for the Filtered Beer and Full Beer datasets respectively. This shows that the model is not meant to provide rationales and increase the performance simultaneously. Finally, we show that learning soft multi-dimensional masks alongside the training objectives achieves strong predictive results, and that using these masks to create contextualized word embeddings and training a baseline model with them provides the best performance across the three datasets. In these experiments, we verify that Multi-Aspect Masker generates induced masks M_{a_1}, ..., M_{a_A} that, in addition to improving performance, are meaningful and can be interpreted by humans. Evaluating justifications made of short and coherent pieces of text is challenging because there is no gold standard provided with the reviews. Prior work provided 994 beer reviews with aspect sentence-level annotations, although our model computes masks at a finer level. Each sentence of these annotated reviews is labeled with one aspect; for comparison, we used models trained on beer reviews and extracted a similar number of selected words. We show that the generated sub-masks M_{a_1}, M_{a_2}, M_{a_3} obtained with Multi-Aspect Masker (MAM) correlate best with human judgment. Table 4 presents the precision of the masks and attentions computed on the sentence-level aspect annotations. We report the results of the following models: SVM, the Single Aspect-Attention (SAA), and the Single Aspect-Mask (SAM), trained separately for each aspect because they find hard justifications for a single aspect. In comparison to SAM, our MAM model obtains significantly higher precision, with an average improvement of +1.13. Interestingly, SVM and attention models perform poorly compared with mask models, especially MASA, which focuses only on a couple of words due to the sparseness of its attention (examples in Appendix A.3.1). In addition to evaluating masks with human annotations, we computed their semantic interpretability for each dataset. According to prior studies, NPMI is a good metric for the qualitative evaluation of topics, because it matches human judgment most closely. However, the top-N topic words used for evaluation are often selected arbitrarily.
To alleviate this problem, we followed a more robust protocol: we computed the topic coherence over several cardinalities N and report all the results, as well as their average, since the mean leads to a more stable and robust evaluation. More details are available in Appendix A.4. We show that the masks generated by MAM obtain the highest mean NPMI and are, on average, superior in all datasets (17 out of 21 cases), while only needing a single training. Results are shown in Table 5. For the Hotel and Full Beer datasets, where reviews reflect the real data distribution, our model significantly outperforms SAM and the attention models for N ≥ 20. For smaller N, MAM obtains higher scores in four out of six cases, and for the other two, the difference is below 0.003. For the controlled-environment Filtered Beer dataset, MAM still performs better for N ≥ 15, although the differences are smaller, and it is beaten by SAM for N ≤ 10. However, SAM obtains poor results in all other cases of all datasets and must be trained as many times as the number of aspects. We show the top words for each aspect and a human evaluation in Appendix A.4. Generally, our model finds better sets of words across the three datasets compared with the other methods. In this work, we proposed Multi-Aspect Masker, an end-to-end neural network architecture to perform multi-aspect sentiment classification for reviews. Our model predicts aspect sentiments while generating a probabilistic (soft) multi-dimensional mask (one dimension per aspect) simultaneously, in an unsupervised and multi-task learning manner. We showed that the induced mask is beneficial for guiding the model to focus on different parts of the input text and for further improving the sentiment prediction for all aspects. Our evaluation shows that on three datasets, in the beer and hotel domains, our model outperforms strong baselines and generates masks that are: strong feature predictors, meaningful, and interpretable compared to attention-based methods and a single-aspect masker. We used the 200-dimensional pretrained word embeddings of prior work for beer reviews. For the hotel domain, we trained word2vec on a large collection of hotel reviews with an embedding size of 300. We used a dropout of 0.1, clipped the gradient norm at 1.0 if higher, added an L2-norm regularizer with a regularization factor of 10^−6, and trained using early stopping with a patience of three iterations. We used Adam for training with a learning rate of 0.001, β_1 = 0.9, and β_2 = 0.999. The temperature τ for the Gumbel-Softmax distributions was fixed at 0.8. The two regularizer weights and the prior of our model are λ_sel = 0.03, λ_cont = 0.04, and λ_p = 0.11 for the Filtered Beer dataset; λ_sel = 0.03, λ_cont = 0.03, and λ_p = 0.15 for the Full Beer dataset; and λ_sel = 0.02, λ_cont = 0.02, and λ_p = 0.10 for the Hotel dataset. We ran all experiments for a maximum of 50 epochs with a batch size of 256 and a Titan X GPU. For the SAM model, we reused the code from its authors. We randomly sampled reviews from each dataset and computed the masks and attentions of four models: our Multi-Aspect Masker (MAM), the Single Aspect-Mask method (SAM), and two attention models with additive and sparse attention, called Multi Aspect-Attentions (MAA) and Multi Aspect-Sparse-Attentions (MASA) respectively (more details in Section 4.2). Each color represents an aspect and the shade its confidence.
All models generate soft attentions or masks, except SAM, which does hard masking. Samples for the Filtered Beer, Full Beer, and Hotel datasets are shown below.

[Figure 3 shows a Filtered Beer review highlighted by each of the four models.]

Figure 3: Our model MAM highlights all the words corresponding to aspects. SAM only highlights the most crucial information, but some words are missing out, and one is ambiguous. MAA and MASA fail to identify most of the words related to the aspect Appearance, and only a few words have high confidence, resulting in noisy labeling. Additionally, MAA considers words belonging to the aspect Taste, whereas the Filtered Beer dataset does not include it in the aspect set.
[Figure 4 shows a second Filtered Beer review highlighted by each of the four models.]

Figure 4: MAM finds the exact parts corresponding to the aspects Appearance and Palate while covering most of the aspect Smell. SAM identifies key information without any ambiguity, but lacks coverage. MAA confidently highlights nearly all the words, while having some noise for the aspect Appearance. MASA confidently selects only the most predictive words.

[Figure 5 shows a Full Beer review (a pumpkin ale) highlighted by each of the four models.]

Figure 5: MAM can identify accurately what parts of the review describe each aspect. Due to the high imbalance and correlation, MAA provides very noisy labels, while MASA highlights only a few important words. We can see that SAM is confused and performs a poor selection.
[Figure — the beer review "75cl bottle shared with larrylsb, pre-grad..." is shown with per-aspect highlights for Multi Aspect-Masks (Ours), Single Aspect-Mask, Multi Aspect-Attentions, and Multi Aspect-Sparse-Attentions; the highlighting colors are lost in extraction, so the repeated review text is omitted here.]
[Figure 7 — the hotel review "i stayed at daulsol in september 2013..." is shown with per-aspect highlights for Multi Aspect-Masks (Ours), Single Aspect-Mask, Multi Aspect-Attentions, and Multi Aspect-Sparse-Attentions; the repeated review text is omitted here.] Figure 7: MAM emphasizes consecutive words, identifies important spans while having a small amount of noise. SAM focuses on certain specific words and spans, but labels are ambiguous. The MAA model highlights many words, ignores a few important key-phrases, and labels are noisy when the confidence is not high. MASA provides noisier tags than MAA.
[Figure 8 — the hotel review "stayed at the parasio 10 apartments early april 2011..." is shown with per-aspect highlights for Multi-Aspect Masker (Ours), Single Aspect-Mask, Multi Aspect-Attentions, and Multi Aspect-Sparse-Attentions; the highlighting colors are lost in extraction, so the repeated review text is omitted here.] Figure 8: Our MAM model finds most of the important spans of words with a small amount of noise. SAM lacks coverage but identifies words of which half are correctly tagged and the others ambiguous. MAA partially correctly highlights words for the aspects Service, Location, and Value while missing out the aspect Cleanliness. MASA confidently finds a few important words. For each model, we computed the probability distribution of words per aspect by using the induced sub-masks $M_{a_1}, \ldots, M_{a_A}$ or attention values. Given an aspect $a_i$ and a set of top-N words $w^N_{a_i}$, the Normalized Pointwise Mutual Information coherence score is $C_{NPMI}(w^N_{a_i}) = \sum_{j=2}^{N}\sum_{k=1}^{j-1} \frac{\log \frac{P(w_j, w_k)}{P(w_j)P(w_k)}}{-\log P(w_j, w_k)}$. Top words of coherent topics (i.e., aspects) should share a similar semantic interpretation and thus, the interpretability of a topic can be estimated by measuring how many words are not related. For each aspect $a_i$ and word w having been highlighted at least once as belonging to aspect $a_i$, we computed the probability $P(w \mid a_i)$ on each dataset and sorted them in decreasing order of $P(w \mid a_i)$.
Unsurprisingly, we found that the most common words are stop words such as "a" and "it", because masks are mostly word sequences instead of individual words. To gain a better interpretation of the aspect words, we followed the procedure in : we first computed, for each word w, the average across all aspects, $b_w = \frac{1}{|A|}\sum_{i=1}^{|A|} P(w \mid a_i)$, which generates a general distribution that includes words common to all aspects. The final word distribution per aspect is computed by removing the general distribution: $\tilde{P}(w \mid a_i) = P(w \mid a_i) - b_w$. After generating the final word distribution per aspect, we picked the top ten words and asked two human annotators to identify intruder words, i.e., words not matching the corresponding aspect. We show in subsequent tables the top ten words for each aspect, where red denotes all words identified as unrelated to the aspect by the two annotators. Generally, our model finds better sets of words across the three datasets compared with other methods. Additionally, we observe that the aspects can be easily recovered given their top words.

Table 7: Top ten words for each aspect from the Filtered Beer dataset, learned by various models. Red denotes intruders according to two human annotators. For the three aspects, MAM has only one word considered as an intruder, followed by MASA with SAM (two) and MAA (six).

Aspect: Appearance (Top-10 Words)
  SAM:        head color white brown dark lacing pours amber clear black
  MASA:       head lacing lace retention glass foam color amber yellow cloudy
  MAA:        nice dark amber pours black hazy brown great cloudy clear
  MAM (Ours): head color lacing white brown clear amber glass black retention

Aspect: Smell (Top-10 Words)
  SAM:        sweet malt hops coffee chocolate citrus hop strong smell aroma
  MASA:       smell aroma nose smells sweet aromas scent hops malty roasted
  MAA:        taste smell aroma sweet chocolate lacing malt roasted hops nose
  MAM (Ours): smell aroma nose smells sweet malt citrus chocolate caramel aromas

Aspect: Palate (Top-10 Words)
  SAM:        mouthfeel smooth medium carbonation bodied watery body thin creamy full
  MASA:       mouthfeel medium smooth body nice m-feel bodied mouth beer
  MAA:        carbonation mouthfeel medium overall smooth finish body drinkability bodied watery
  MAM (Ours): mouthfeel carbonation medium smooth body bodied drinkability good mouth thin
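As a concrete illustration of the procedure above, the following is a minimal Python sketch (hypothetical code, not from the paper; the function name and toy inputs are invented) of computing per-aspect word distributions, subtracting the general distribution b_w, and taking the top words:

```python
# Sketch (not the authors' code) of the per-aspect word-distribution cleanup
# described above: subtract the average "general" distribution b_w from each
# aspect's word distribution P(w | a_i), then take the top-N words.
from collections import Counter

def top_aspect_words(highlights, top_n=10):
    """highlights: dict mapping aspect -> list of words the model highlighted."""
    per_aspect = {a: Counter(ws) for a, ws in highlights.items()}
    # P(w | a_i): normalize counts within each aspect
    p = {a: {w: c / sum(cnt.values()) for w, c in cnt.items()}
         for a, cnt in per_aspect.items()}
    vocab = {w for dist in p.values() for w in dist}
    n_aspects = len(p)
    # b_w = (1/|A|) * sum_i P(w | a_i)
    b = {w: sum(dist.get(w, 0.0) for dist in p.values()) / n_aspects for w in vocab}
    # P~(w | a_i) = P(w | a_i) - b_w, sorted in decreasing order
    return {a: sorted(((dist.get(w, 0.0) - b[w], w) for w in vocab), reverse=True)[:top_n]
            for a, dist in p.items()}

words = top_aspect_words({
    "smell": ["smell", "aroma", "nose", "the", "sweet"],
    "palate": ["mouthfeel", "smooth", "the", "carbonation"],
})
print(words["smell"])
```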
[ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
B1lyZpEYvH
Neural model predicting multi-aspect sentiments and generating a probabilistic multi-dimensional mask simultaneously. Model outperforms strong baselines and generates masks that are: strong feature predictors, meaningful, and interpretable.
Neural sequence generation is commonly approached by using maximum-likelihood (ML) estimation or reinforcement learning (RL). However, it is known that they have their own shortcomings; ML presents training/testing discrepancy, whereas RL suffers from sample inefficiency. We point out that it is difficult to resolve all of the shortcomings simultaneously because of a tradeoff between ML and RL. In order to counteract these problems, we propose an objective function for sequence generation using α-divergence, which leads to an ML-RL integrated method that exploits the better parts of ML and RL. We demonstrate that the proposed objective function generalizes ML and RL objective functions because it includes both as its special cases (ML corresponds to α → 0 and RL to α → 1). We provide a proposition stating that the difference between the RL objective function and the proposed one monotonically decreases with increasing α. Experimental results on machine translation tasks show that minimizing the proposed objective function achieves better sequence generation performance than ML-based methods. Neural sequence models have been successfully applied to various types of machine learning tasks, such as neural machine translation, caption generation BID6, conversation, and speech recognition BID8 BID2. Therefore, developing more effective and sophisticated learning algorithms can be beneficial. Popular objective functions for training neural sequence models include the maximum-likelihood (ML) and reinforcement learning (RL) objective functions. However, both have limitations, i.e., training/testing discrepancy and sample inefficiency, respectively. Prior work indicated that optimizing the ML objective is not equal to optimizing the evaluation metric. For example, in machine translation, maximizing likelihood is different from optimizing the BLEU score BID19, which is a popular metric for machine translation tasks. In addition, during training, ground-truth tokens are used for predicting the next token; however, during testing, no ground-truth tokens are available and the tokens predicted by the model are used instead. On the contrary, although the RL-based approach does not suffer from this training/testing discrepancy, it does suffer from sample inefficiency. Samples generated by the model do not necessarily yield high evaluation scores (i.e., rewards), especially in the early stage of training. Consequently, RL-based methods are not self-contained, i.e., they require pre-training via ML-based methods. As discussed in Section 2, since these problems depend on the sampling distributions, it is difficult to resolve them simultaneously. Our solution to these problems is to integrate these two objective functions. We propose a new objective function α-DM (α-divergence minimization) for neural sequence generation, and we demonstrate that it generalizes ML- and RL-based objective functions, i.e., α-DM can represent both functions as its special cases (α → 0 and α → 1). We also show that, for α ∈ (0, 1), the gradient of the α-DM objective is a combination of the ML- and RL-based objective gradients. We apply the same optimization strategy as BID18, who used importance sampling, to optimize this proposed objective function. Consequently, we avoid on-policy RL sampling, which suffers from sample inefficiency, and optimize an objective function closer to the desired RL-based objective than the ML-based objective.
The experimental results for a machine translation task indicate that the proposed α-DM objective outperforms the ML baseline and the reward augmented ML method (RAML; BID18), upon which we build the proposed method. We compare our results to those reported by BID3, who proposed an on-policy RL-based method. We also confirm that α-DM can provide a comparable BLEU score without pre-training. The contributions of this paper are summarized as follows. • We propose the α-DM objective function using α-divergence and demonstrate that it can be considered a generalization of the ML- and RL-based objective functions (Section 4). • We prove that the α-DM objective function becomes closer to the desired RL-based objectives as α increases, in the sense that the upper bound of the maximum discrepancy between the ML- and RL-based objective functions monotonically decreases as α increases. • The results of machine translation experiments demonstrate that the proposed α-DM objective outperforms the ML baseline and RAML (Section 7). In this section, we introduce ML-based and RL-based objective functions and the problems associated with learning neural sequence models using them. We also explain why it is difficult to resolve these problems simultaneously. Maximum-likelihood An ML approach is typically used to train a neural sequence model. Given a context (or input sequence) x ∈ X and a target sequence $y = (y_1, \ldots, y_T) \in Y$, ML minimizes the negative log-likelihood objective function $L(\theta) = -\sum_{x \in X}\sum_{y \in Y} q(y|x) \log p_\theta(y|x)$, where q(y|x) denotes the true sampling distribution. Here, we assume that x is uniformly sampled from X and omit the distribution of x from the equation for simplicity. For example, in machine translation, if a corpus contains only a single target sentence y* for each input sentence x, then q(y|x) = δ(y − y*) and the objective becomes $L(\theta) = -\sum_{x \in X} \log p_\theta(y^*|x)$. ML does not directly optimize the final performance measure; that is, training/testing discrepancy exists. This arises from at least these two problems: (i) Objective score discrepancy. The reward function is not used while training the model; however, it is the performance measure in the testing (evaluation) phase. For example, in the case of machine translation, popular evaluation measures such as BLEU or edit rate differ from the negative likelihood function. (ii) Sampling distribution discrepancy. The model is trained with samples from the true sampling distribution q(y|x); however, it is evaluated using samples generated from the learned distribution $p_\theta(y|x)$. Reinforcement learning In most sequence generation tasks, the optimization of the final performance measure can be formulated as the minimization of the negative total expected reward expressed as follows: $L^*(\theta) = -\sum_{x \in X}\sum_{y \in Y} p_\theta(y|x)\, r(y, y^*|x)$, where r(y, y*|x) is a reward function associated with the sequence prediction y, i.e., the BLEU score or the edit rate in machine translation. RL is an approach to solve the above problems. The objective function of RL is L* above, which is a reward-based objective function; thus, there is no objective score discrepancy, thereby resolving problem (i). Sampling from $p_\theta(y|x)$ and taking the expectation with $p_\theta(y|x)$ also resolves problem (ii). BID20 and BID3 directly optimized L* using policy gradient methods. A sequence prediction task that selects the next token based on an action trajectory $(y_1, \ldots, y_{t-1})$ can be considered to be an RL problem. Here the next token selection corresponds to the next action selection in RL.
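To make the contrast between the two objectives concrete before the RL analogy continues below, here is a toy Python sketch (invented example, not from the paper) computing the ML loss and the negative expected reward for a single context with a small candidate set:

```python
# Toy sketch (hypothetical) contrasting the two objectives above on a single
# context x with a small candidate set: the ML loss is the negative
# log-likelihood of the ground truth y*, while the RL loss is the negative
# expected reward under the model distribution p_theta(y|x).
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
logits = rng.normal(size=5)          # model scores over 5 candidate sequences
p_theta = softmax(logits)            # p_theta(y|x)
y_star = 2                           # index of the ground-truth sequence
reward = np.array([0.1, 0.3, 1.0, 0.6, 0.0])  # e.g., sentence-level BLEU

ml_loss = -np.log(p_theta[y_star])   # L(theta): negative log-likelihood
rl_loss = -(p_theta * reward).sum()  # L*(theta): negative expected reward
print(f"ML loss: {ml_loss:.3f}, RL loss: {rl_loss:.3f}")
```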
In addition, the action trajectory and the context x correspond to the current state in RL. RL can suffer from sample inefficiency; thus, it may not generate samples with high rewards, particularly in the early learning stage. By definition, RL generates training samples from its model distribution. This means that, if the model $p_\theta(y|x)$ has low predictive ability, only a few samples will exist with high rewards. (iii) Sample inefficiency. The RL model may rarely draw samples with high rewards, which hinders finding the true gradient to optimize the objective function. Machine translation suffers from this problem because the action (token) space is vast (typically >10,000 dimensions) and rewards are sparse, i.e., positive rewards are observed only at the end of a sequence. Therefore, the RL-based approach usually requires good initialization and thus is not self-contained. Previous studies have employed pre-training with ML before performing on-policy RL-based sampling BID20 BID3. Entropy regularized RL To prevent the policy from becoming overly greedy and deterministic, some studies have used the following entropy-regularized version of the policy gradient objective function BID17: $L^{*(\tau)}(\theta) = -\sum_{x \in X}\Big(\sum_{y \in Y} p_\theta(y|x)\, r(y, y^*|x) + \tau H(p_\theta(\cdot|x))\Big)$, where H denotes the Shannon entropy. Reward augmented maximum likelihood BID18 proposed RAML, which solves problems (i) and (iii) simultaneously. RAML replaces the sampling distribution of ML, i.e., q(y|x), with a reward-based distribution $q^{(\tau)}(y|x) \propto \exp\{r(y, y^*|x)/\tau\}$. In other words, RAML incorporates the reward information into the ML objective function. The RAML objective function is expressed as follows: $L^{(\tau)}(\theta) = -\sum_{x \in X}\sum_{y \in Y} q^{(\tau)}(y|x) \log p_\theta(y|x)$. However, problem (ii) remains. Despite these various attempts, a fundamental technical barrier exists. This barrier prevents solving the three problems using a single method. The barrier originates from a trade-off between sampling distribution discrepancy (ii) and sample inefficiency (iii), because these issues are related to the sampling distribution. Thus, our approach is to control the trade-off of the sampling distributions by combining them. The proposed method utilizes the α-divergence $D_A^{(\alpha)}(p\,\|\,q)$, which measures the asymmetric distance between two distributions p and q BID0. A prominent feature of the α-divergence is that it can behave as $D_{KL}(p\,\|\,q)$ or $D_{KL}(q\,\|\,p)$ depending on the value of α, i.e., $D_A^{(\alpha)}(p\,\|\,q) = \frac{1}{\alpha(1-\alpha)}\Big(1 - \sum_{s} p(s)^{\alpha} q(s)^{1-\alpha}\Big)$, where $\lim_{\alpha \to 1} D_A^{(\alpha)}(p\,\|\,q) = D_{KL}(p\,\|\,q)$ and $\lim_{\alpha \to 0} D_A^{(\alpha)}(p\,\|\,q) = D_{KL}(q\,\|\,p)$. Furthermore, the α-divergence becomes a Hellinger distance when α equals 1/2. In this section, we describe the proposed objective function α-DM and its gradient. Furthermore, we demonstrate that it can smoothly bridge both ML- and RL-based objective functions. We define the α-DM objective function as the α-divergence between $p_\theta$ and $q^{(\tau)}$: $L^{(\alpha,\tau)}(\theta) = \sum_{x \in X} D_A^{(\alpha)}\big(p_\theta(\cdot|x)\,\|\,q^{(\tau)}(\cdot|x)\big)$. Figure 1 illustrates how the α-DM objective bridges the ML- and RL-based objective functions. Although the objectives $L^{*(\tau)}(\theta)$, $L^{(\alpha,\tau)}(\theta)$, and $L^{(\tau)}(\theta)$ have the same global minimizer $p_\theta(y|x) = q^{(\tau)}(y|x)$, empirical solutions often differ. To train neural networks or other machine learning models via α-divergence minimization, one can use the gradient of the α-DM objective function. The gradient can be expressed as $\nabla_\theta L^{(\alpha,\tau)}(\theta) \propto \sum_{x \in X}\sum_{y \in Y} p_\theta^{(\alpha,\tau)}(y|x)\, \nabla_\theta\big(-\log p_\theta(y|x)\big)$, where $p_\theta^{(\alpha,\tau)}(y|x) \propto p_\theta(y|x)^{\alpha}\, q^{(\tau)}(y|x)^{1-\alpha}$ is a weight that mixes the sampling distributions $p_\theta$ and $q^{(\tau)}$. This weight makes it clear that the α-DM objective can be considered as a mixture of the ML- and RL-based objective functions. See Appendix A for the derivation of this gradient.
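A small numerical sketch (hypothetical example, assuming the discrete α-divergence form given above) can verify the stated limiting behavior:

```python
# Sketch of D_A^(alpha)(p || q) = (1 - sum_s p(s)^alpha q(s)^(1-alpha)) / (alpha*(1-alpha))
# for two discrete distributions, checking that it approaches KL(q || p) as
# alpha -> 0 and KL(p || q) as alpha -> 1.
import numpy as np

def alpha_div(p, q, alpha):
    return (1.0 - np.sum(p**alpha * q**(1.0 - alpha))) / (alpha * (1.0 - alpha))

def kl(p, q):
    return np.sum(p * np.log(p / q))

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.4, 0.4, 0.2])
print(alpha_div(p, q, 0.001), kl(q, p))   # alpha -> 0: close to KL(q || p)
print(alpha_div(p, q, 0.999), kl(p, q))   # alpha -> 1: close to KL(p || q)
print(alpha_div(p, q, 0.5))               # alpha = 1/2: 4x the squared Hellinger distance
```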
It converges to the gradient of entropy regularized RL or RAML by taking the α → 1 or α → 0 limits, respectively (up to a constant); i.e., the weight $p_\theta^{(\alpha,\tau)}$ reduces to $p_\theta$ as α → 1 and to $q^{(\tau)}$ as α → 0. In Appendix C, we summarize all of the objective functions, gradients, and their connections. In this section, we characterize the difference between the α-DM objective function $L^{(\alpha,\tau)}$ and the desired RL-based objective function $L^{*(\tau)}$ with respect to the sup-norm. Our main claim is that, with respect to the sup-norm, the discrepancy between $L^{(\alpha,\tau)}$ and $L^{*(\tau)}$ decreases linearly as α increases to 1. We utilize this analysis to motivate our α-DM objective function with larger α if there are no concerns about sampling inefficiency. Proposition 1 Assume that $p_\theta$ has the same finite support S as that of $q^{(\tau)}$, and that for any s ∈ S, there exists δ > 0 such that $p_\theta(s) > \delta$ holds. For any α ∈ (0, 1), the following holds: $\sup_\theta \big|\bar{L}^{(\alpha,\tau)}(\theta) - L^{*(\tau)}(\theta)\big| \le (C_1 + C_2)(1 - \alpha)$, where $\bar{L}^{(\alpha,\tau)} := \alpha L^{(\alpha,\tau)}$. Here, $C_1$ and $C_2$ are universal constants irrelevant to α. The following proposition immediately proves the proposition above. Proposition 2 Assume that probability distribution p has the same finite support S as that of q, and that for any s ∈ S there exists δ > 0 such that p(s) > δ holds. For any α ∈ (0, 1), the following holds: $\sup_p \big|\alpha D_A^{(\alpha)}(p\,\|\,q) - D_{KL}(p\,\|\,q)\big| \le C(1 - \alpha)$. Here, $C = \max\big\{\sup_p \sum_s p \log^2(q/p),\ \sup_p \sum_s q \log^2(q/p)\big\}$. For the proofs of Proposition 1 and Proposition 2, see Appendix B. In this paper, we employed an optimization strategy similar to that of RAML. We sample a target sentence y for each x from another data augmentation distribution $q_0(y|x)$, and then estimate the gradient by importance sampling (IS). For example, we add some noise to the ground truth target sentence y* by insertion, substitution, or deletion, and the distribution $q_0(y|x)$ assigns some probability to each modified target sentence. Given samples from this proposal distribution $q_0(y|x)$, we update the parameter using the following IS estimator: $\nabla_\theta L^{(\alpha,\tau)}(\theta) \approx \sum_{i=1}^{N} w_i\, \nabla_\theta\big(-\log p_\theta(y_i|x_i)\big)$. Here, $\{(x_1, y_1), \ldots, (x_N, y_N)\}$ are the N samples from the proposal distribution $q_0(y|x)$, and $w_i$ is the importance weight, which is proportional to $p_\theta^{(\alpha,\tau)}(y_i|x_i)$: $w_i \propto p_\theta(y_i|x_i)^{\alpha}\, q^{(\tau)}(y_i|x_i)^{1-\alpha} / q_0(y_i|x_i)$. Note that the difference between RAML and α-DM is only this importance weight $w_i$. In RAML, $w_i$ depends only on $q^{(\tau)}(y_i|x_i)$ but not on $p_\theta(y_i|x_i)$. We normalize $w_i$ in each minibatch in order to use the same hyperparameters (e.g., learning rate) as the ML baseline. Thus, this estimator becomes a weighted IS estimator. A weighted IS estimator is not unbiased, but it has smaller variance. Also, we found that normalizing $q^{(\tau)}(y_i|x_i)$ and $p_\theta(y_i|x_i)$ in each minibatch leads to good results. We evaluate the effectiveness of α-DM experimentally using neural machine translation tasks. We compare the BLEU scores of ML, RAML, and the proposed α-DM on the IWSLT'14 German-English corpus BID5. In order to evaluate the impact of the training objective function, we train the same attention-based encoder-decoder model BID16 for each objective function. Furthermore, we use the same hyperparameters (e.g., learning rate, dropout rate, and temperature τ) across all the objective functions. For RAML and α-DM, we employ a data augmentation procedure similar to that of BID18, and thus we generate samples from a data augmentation distribution $q_0(y|x)$. Note that the difference between RAML and α-DM is only the weight $w_i$ defined above. The details of the data augmentation distribution are described in Section 7.2.
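Before the experimental details, here is a sketch (my reading of the weighted-IS estimator above; hypothetical code with made-up log-probabilities) of how the minibatch-normalized importance weights w_i could be computed in log space:

```python
# Sketch (not the authors' code) of the weighted importance sampling update:
# sample augmented targets from a proposal q0, weight each negative
# log-likelihood gradient by
#   w_i ∝ p_theta(y_i|x_i)^alpha * q_tau(y_i|x_i)^(1-alpha) / q0(y_i|x_i),
# and normalize the weights within the minibatch (weighted IS).
import numpy as np

def alpha_dm_weights(log_p_theta, log_q_tau, log_q0, alpha):
    log_w = alpha * log_p_theta + (1.0 - alpha) * log_q_tau - log_q0
    log_w -= log_w.max()                  # for numerical stability
    w = np.exp(log_w)
    return w / w.sum()                    # minibatch-normalized weights

# Toy minibatch of N = 4 augmented samples (log-probabilities are made up).
log_p_theta = np.log([0.05, 0.20, 0.10, 0.01])   # model
log_q_tau   = np.log([0.30, 0.25, 0.10, 0.05])   # exponentiated-payoff dist.
log_q0      = np.log([0.25, 0.25, 0.25, 0.25])   # proposal (uniform here)

print(alpha_dm_weights(log_p_theta, log_q_tau, log_q0, alpha=0.0))  # RAML weights
print(alpha_dm_weights(log_p_theta, log_q_tau, log_q0, alpha=0.5))  # alpha-DM
```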
Specifically, we trained an attention-based encoder-decoder model with an encoder consisting of a bidirectional LSTM with 256 units and an LSTM decoder with the same number of layers and units. We exponentially decay the learning rate, and the initial learning rate is chosen using a grid search to maximize the BLEU performance of the ML baseline on the development dataset. The important hyperparameter τ of RAML and α-DM is also determined to maximize the BLEU performance of the RAML baseline on the development dataset. As a result, an initial learning rate of 0.5 and τ of 1.0 were used. Our α-DM used the same hyperparameters as ML and RAML, including the initial learning rate, τ, and so on. Details about the models and parameters are discussed in Section 7.2. To investigate the impact of the hyperparameter α, we train the neural sequence models using α-DM 5 times for each fixed α ∈ {0.0, 0.1, . . ., 0.9}, and then report the BLEU score on the test dataset. Moreover, assuming that an underfitted model prevents the gradient from being stable in the early stage of training, we train the same models with α being linearly annealed from 0.0 to larger values; we increase the value of α by adding 0.03 at each epoch. Here, the beam width k was set to 1 or 10. All BLEU scores and their averages are plotted in FIG1. The results show that for both k = 1 and k = 10, the model's performance is better when α is around 0.5 than for smaller or larger α. However, for larger fixed α, the performance was worse than the RAML and ML baselines. On the other hand, we can see that the annealed versions of α-DM improve on the performance of the corresponding fixed versions at relatively larger α. As a result, in the annealed scenario, α-DM with a wide range of α ∈ (0, 1) improves on the performance consistently. This implies that an underfitted model makes the performance worse. We summarize the average BLEU scores and their standard deviations for ML, RAML, and α-DM with α ∈ {0.3, 0.4, 0.5} in Table 1. The results show that the BLEU score (k = 10) of our α-DM outperforms the ML and RAML baselines. Furthermore, although the ML baseline performances differ between our results and those of BID3, the proposed α-DM performance with α = 0.5 without pre-training is comparable with the on-policy RL-based methods BID3. We believe that these results come from the fact that α-DM with α > 0 has smaller bias than that with α = 0 (i.e., RAML). We utilized stochastic gradient descent with a decaying learning rate. The learning rate decays from the initial learning rate to 0.05 with dev-decay, i.e., after training each epoch, we monitored the perplexity on the development set and reduced the learning rate by multiplying it with δ = 0.5 only when the perplexity on the development set did not improve on the best perplexity. The mini-batch size is 128. We used dropout with probability 0.3. Gradients are rescaled when their norms exceed 5. In addition, if an unknown token, i.e., a special token representing a word that is not in the vocabulary, is generated in the predicted sentence, it was replaced by the token with the highest attention in the source sentence BID13. We implemented our models using a fork of the PyTorch version of the OpenNMT toolkit BID14. We calculated the BLEU scores with the multi-bleu.perl script for both the development and test sets. We obtained augmented data in the same manner as the RAML framework BID18. For each target sentence, some tokens were replaced by other tokens in the vocabulary and we used the negative Hamming distance as the reward.
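A minimal sketch (hypothetical code; the distance cap of ⌊0.25·m⌋ follows the description continued below) of this Hamming-distance data augmentation:

```python
# Sketch of the RAML-style data augmentation described above: draw an edit
# count e uniformly from {0, ..., floor(0.25 * m)}, replace e randomly chosen
# tokens with random vocabulary tokens, and use the negative Hamming distance
# as the reward of the augmented target.
import random

def augment(target_tokens, vocab, max_ratio=0.25, rng=random.Random(0)):
    m = len(target_tokens)
    e = rng.randint(0, int(m * max_ratio))     # uniform over [0, floor(m/4)]
    positions = rng.sample(range(m), e)
    augmented = list(target_tokens)
    for i in positions:
        augmented[i] = rng.choice(vocab)       # may pick the original token back
    return augmented, -e                       # (sample, reward)

vocab = ["the", "a", "cat", "dog", "sat", "ran", "mat", "on"]
sample, reward = augment(["the", "cat", "sat", "on", "the", "mat"], vocab)
print(sample, reward)
```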
We assumed that the Hamming distance e for each sentence is less than [m × 0.25], where m is the length of the sentence and [a] denotes the maximum integer less than or equal to a ∈ R. Moreover, the Hamming distance for a sample is uniformly selected from 0 to [m × 0.25]. One can also use BLEU or another machine translation metric for this reward. However, we assumed a proposal distribution $q_0(y|x)$ different from that of RAML. We assumed the simplified proposal distribution $q_0(y|x)$, which is a discrete uniform distribution over [0, m × 0.25]. This results in the hyperparameter τ used in this experiment being different from that of RAML. We searched for the τ that maximizes the BLEU score of RAML on the development set. As a result, τ = 1.0 was chosen, and α-DM also uses this fixed τ in all the experiments. From the RL literature, reward-based neural sequence model training can be separated into on-policy and off-policy approaches, which differ in the sampling distributions. The proposed α-DM approach can be considered an off-policy approach with importance sampling. Recently, on-policy RL-based approaches for neural sequence prediction have been proposed. BID20 proposed a method that uses the REINFORCE algorithm. Based on Ranzato et al., BID3 proposed a method that estimates a critic network and uses it to reduce the variance of the estimated gradient. Scheduled sampling replaces some ground-truth tokens in an output sequence with generated tokens. Yu et al. and BID15 proposed methods based on GAN (generative adversarial net) approaches BID11. Note that on-policy RL-based approaches can directly optimize the evaluation metric. BID10 proposed off-policy gradient methods using importance sampling, and the proposed α-DM off-policy approach utilizes importance sampling to reduce the difference between the objective function and the evaluation measure when α > 0. As mentioned previously, the proposed α-DM can be considered an off-policy RL-based approach in that the sampling distribution differs from the model itself. Thus, the proposed α-DM approach has the same advantages as off-policy RL methods compared to on-policy RL methods, i.e., computational efficiency during training and learning stability. On-policy RL approaches must generate samples during training and immediately utilize these samples. This property leads to high computational costs during training, and if the model falls into a poor local minimum, it is difficult to recover from this failure. On the other hand, by exploiting data augmentation, the proposed α-DM can collect samples before training. Moreover, because the sampling distribution is a stationary distribution independent of the model, one can expect that the learning process of α-DM is more stable than that of on-policy RL approaches. Several other methods that compute rewards before training can be considered off-policy RL-based approaches, e.g., minimum risk training (MRT; BID21), RANDOMER, and Google neural machine translation (GNMT). While the proposed approach is a mixture of ML- and RL-based approaches, this attempt is not unique. The sampling distribution of scheduled sampling is also a mixture of ML- and RL-based sampling distributions. However, the sampling distributions of scheduled sampling can differ even within the same sentence, whereas ours are sampled from a stationary distribution. To bridge the ML- and RL-based approaches, BID12 considered the weights of the gradients of the ML- and RL-based approaches by directly comparing both gradients.
In contrast, the weights of the proposed α-DM approach are obtained as the result of defining the α-divergence objective function. GNMT considered a mixture of ML- and RL-based objective functions via a weighted arithmetic sum of L and L*. Comparing this weighted mean objective function and α-DM's objective function could be an interesting research direction in the future. In this study, we have proposed a new objective function, α-divergence minimization, for neural sequence model training that unifies ML- and RL-based objective functions. In addition, we proved that the gradient of the objective function is a weighted sum of the gradients of negative log-likelihoods, and that the weights are represented as a mixture of the sampling distributions of the ML- and RL-based objective functions. We demonstrated that the proposed approach outperforms the ML baseline and RAML in the IWSLT'14 machine translation task. In this study, we focused our attention on the neural sequence generation problem, but we expect our framework may be useful to a broader area of reinforcement learning. Sample inefficiency is one of the major problems in reinforcement learning, and people try to mitigate this problem by using several types of supervised learning frameworks such as imitation learning or apprenticeship learning. These alternative approaches bring another problem, similar to the neural sequence generation problem, which originates from the fact that the objective function for training is different from the one for testing. Since our framework is general and independent of the task, our approach may be useful for combining these approaches. A GRADIENT OF α-DM OBJECTIVE The gradient of α-DM can be obtained as follows: [the step-by-step derivation is lost in extraction; its final form is the weighted-gradient expression given in Section 4]. In the derivation, we used the so-called log-trick: $\nabla_\theta p_\theta(y|x) = p_\theta(y|x)\nabla_\theta \log p_\theta(y|x)$. Proposition 2 Assume that probability distribution p has the same finite support S as that of q, and that for any s ∈ S there exists δ > 0 such that p(s) > δ holds. For any α ∈ (0, 1) the following holds: $\sup_p \big|\alpha D_A^{(\alpha)}(p\,\|\,q) - D_{KL}(p\,\|\,q)\big| \le C(1 - \alpha)$. Here, $C = \max\big\{\sup_p \sum_s p \log^2(q/p),\ \sup_p \sum_s q \log^2(q/p)\big\}$. Proof. By Taylor's theorem, there is an α' ∈ (α, 1) such that [the expansion is lost in extraction], where $C_1 = \tau \max\big\{\sup_\theta \sum_{x \in X}\sum_{y \in Y} p_\theta \log^2(q^{(\tau)}/p_\theta),\ \sup_\theta \sum_{x \in X}\sum_{y \in Y} q^{(\tau)} \log^2(q^{(\tau)}/p_\theta)\big\}$ and $C_2 = |Z(\tau)|$. In this section, we summarize the objective functions of • ML (Maximum Likelihood), • RL (Reinforcement Learning), • RAML (Reward Augmented Maximum Likelihood; BID18), • EnRL (Entropy regularized Reinforcement Learning), and • α-DM (α-Divergence Minimization Training). Objectives. The objective functions of ML, RL, RAML, EnRL, and α-DM are as follows: $L(\theta) = -\sum_{x}\sum_{y} q(y|x)\log p_\theta(y|x)$; $L^*(\theta) = -\sum_{x}\sum_{y} p_\theta(y|x)\, r(y, y^*|x)$; $L^{(\tau)}(\theta) = -\sum_{x}\sum_{y} q^{(\tau)}(y|x)\log p_\theta(y|x)$; $L^{*(\tau)}(\theta) = -\sum_{x}\big(\sum_{y} p_\theta(y|x)\, r(y, y^*|x) + \tau H(p_\theta(\cdot|x))\big)$; $L^{(\alpha,\tau)}(\theta) = \sum_{x} D_A^{(\alpha)}(p_\theta(\cdot|x)\,\|\,q^{(\tau)}(\cdot|x))$, where $q^{(\tau)}(y|x) \propto \exp\{r(y, y^*|x)/\tau\}$. Typically, q(y|x) = δ(y, y*|x), where y* is the target with the highest reward. We can rewrite some of these functions using KL or α-divergences: $L^{(\tau)}(\theta) = \sum_x D_{KL}(q^{(\tau)}\,\|\,p_\theta) + \mathrm{const.}$; $L^{*(\tau)}(\theta) = \tau \sum_x D_{KL}(p_\theta\,\|\,q^{(\tau)}) + \mathrm{const.}$; $L^{(\alpha,\tau)}(\theta) = \sum_x D_A^{(\alpha)}(p_\theta\,\|\,q^{(\tau)})$. In the limits, there are the following connections between the objectives: $\lim_{\alpha \to 0} L^{(\alpha,\tau)}(\theta) = \sum_x D_{KL}(q^{(\tau)}\,\|\,p_\theta) = L^{(\tau)}(\theta) + \mathrm{const.}$, and $\lim_{\alpha \to 1} L^{(\alpha,\tau)}(\theta) = \sum_x D_{KL}(p_\theta\,\|\,q^{(\tau)}) = \frac{1}{\tau} L^{*(\tau)}(\theta) + \mathrm{const.}$
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
H1Nyf7W0Z
Propose new objective function for neural sequence generation which integrates ML-based and RL-based objective functions.
Capsule Networks have shown encouraging results on \textit{de facto} benchmark computer vision datasets such as MNIST, CIFAR and smallNORB. However, they are yet to be tested on tasks where the entities detected inherently have more complex internal representations, where there are very few instances per class to learn from, and where point-wise classification is not suitable. Hence, this paper carries out experiments on face verification in both controlled and uncontrolled settings that together address these points. In doing so we introduce \textit{Siamese Capsule Networks}, a new variant that can be used for pairwise learning tasks. We find that the model improves over baselines in the few-shot learning setting, suggesting that capsule networks are efficient at learning discriminative representations when given few samples. We find that \textit{Siamese Capsule Networks} perform well against strong baselines on both pairwise learning datasets when trained using a contrastive loss with $\ell_2$-normalized capsule encoded pose features, yielding the best results in the few-shot learning setting where image pairs in the test set contain unseen subjects. Convolutional Neural Networks (CNNs) have been a mainstay model for a wide variety of tasks in computer vision. CNNs are effective at detecting local features in the receptive field, although the spatial relationship between features is lost when crude routing operations are performed to achieve translation invariance, as is the case with max and average pooling. Essentially, pooling results in viewpoint invariance so that small perturbations in the input do not affect the output. This leads to a significant loss of information about the internal properties of present entities (e.g., location, orientation, shape and pose) in an image and the relationships between them. The issue is usually combated by having large amounts of annotated data from a wide variety of viewpoints, albeit redundant and less efficient in many cases. As noted by hinton1985shape, from a psychology perspective of human shape perception, pooling does not account for the coordinate frames imposed on objects when performing mental rotation to identify handedness BID23; BID19 BID12. Hence, the scalar output activities from local kernel regions that summarize sets of local inputs are not sufficient for preserving the reference frames that are used in human perception, since viewpoint information is discarded. Spatial Transformer Networks (STN) BID13 have acknowledged the issue by using dynamic spatial transformations on feature mappings to enhance the geometric invariance of the model, although this approach addresses changes in viewpoint by learning to remove rotational and scale variance, as opposed to viewpoint variance being reflected in the model activations. Instead of addressing translation invariance using pooling operations, BID8 have worked on achieving translation equivariance. The recently proposed Capsule Networks BID24; BID7 have shown encouraging results in addressing these challenges. Thus far, Capsule Networks have only been tested on datasets that have a relatively sufficient number of instances per class to learn from, and have only been utilized on tasks in the standard classification setup. This paper extends Capsule Networks to the pairwise learning setting to learn relationships between whole entity encodings, while also demonstrating their ability to learn from little data and perform few-shot learning, where instances from new classes arise during testing (i.e., zero-shot prediction).
The Siamese Capsule Network is trained using a contrastive loss with $\ell_2$-normalized encoded features and demonstrated on face verification tasks. BID8 first introduced the idea of using whole vectors to represent the internal properties (referred to as instantiation parameters, which include pose) of an entity, with an associated activation probability, where each capsule represents a single instance of an entity within an image. This differs from the single scalar outputs in conventional neural networks, where pooling is used as a crude routing operation over filters. Pooling performs sub-sampling so that neurons are invariant to viewpoint change; instead, capsules look to preserve the information to achieve equivariance, akin to perceptual systems. Hence, pooling is replaced with a dynamic routing scheme to send lower-level capsule (e.g., nose, mouth, ears etc.) outputs as input to parent capsules (e.g., face) that represent part-whole relationships, achieving translation equivariance and untangling the coordinate frame of an entity through linear transformations. The idea has its roots in computer graphics, where images are rendered given an internal hierarchical representation; for this reason the brain is hypothesized to solve an inverse graphics problem where, given an image, the cortex deconstructs it to its latent hierarchical properties. The original paper by BID24 describes a dynamic routing scheme that represents these internal representations as vectors given a group of designated neurons called capsules, which consist of a pose vector $u \in R^d$ and activation $\alpha \in [0, 1]$. The architecture consists of two convolutional layers that are used as the initial input representations for the first capsule layer, which is then routed to a final class capsule layer. The initial convolutional layers allow learned knowledge from local feature representations to be reused and replicated in other parts of the receptive field. The capsule inputs are determined using an Iterative Dynamic Routing scheme. A transformation $W_{ij}$ is applied to the output vector $u_i$ of capsule $C^L_i$. The length of the vector $u_i$ represents the probability that this lower-level capsule detected a given object and the direction corresponds to the state of the object (e.g., orientation, position or relationship to upper capsule). The output vector $u_i$ is transformed into a prediction vector $\hat{u}_{j|i}$, where $\hat{u}_{j|i} = W_{ij} u_i$. Then, $\hat{u}_{j|i}$ is weighted by a coupling coefficient $c_{ij}$ to obtain $s_j = \sum_i c_{ij}\hat{u}_{j|i}$, where the coupling coefficients for each lower-level capsule sum to one, $\sum_j c_{ij} = 1$, and $c_{ij}$ is obtained from log prior probabilities $b_{ij}$ via the softmax $c_{ij} = e^{b_{ij}} / \sum_k e^{b_{ik}}$. If $\hat{u}^L_{j|i}$ has a high scalar product when multiplied by $u^{L+1}_j$, then the coupling coefficient $c_{ij}$ is increased and the coupling coefficients of the remaining potential parent capsules are decreased. Routing By Agreement is then performed using coincidence filtering to find tight clusters of nearby predictions. The entity's output vector length is represented as the probability of the entity being present by using the nonlinear normalization shown in Equation 1, $v_j = \frac{\|s_j\|^2}{1 + \|s_j\|^2}\frac{s_j}{\|s_j\|}$ (1), where the vote $v_j$ is the output from the total input $s_j$, which is then used to compute the agreement $a_{ij} = v_j \cdot \hat{u}_{j|i}$ that is added to the log prior $b_{ij}$. The capsule is assigned a high log-likelihood if densely connected clusters of predictions are found from a subset of s. The centroid of the dense cluster is output as the entity's generalized pose.
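To make the routing procedure concrete, here is a minimal numpy sketch (illustrative only, not the authors' implementation; shapes and the iteration count are made up) of the squash nonlinearity in Equation 1 and the routing-by-agreement loop, before returning to the coincidence-filtering view below:

```python
# Sketch of Equation 1 and routing-by-agreement for one pair of layers with
# I lower-level and J higher-level capsules.
import numpy as np

def squash(s):
    norm2 = np.sum(s**2, axis=-1, keepdims=True)
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + 1e-9)

def route(u_hat, n_iters=3):
    """u_hat: prediction vectors of shape (I, J, d), u_hat[i, j] = W_ij @ u_i."""
    I, J, _ = u_hat.shape
    b = np.zeros((I, J))                        # log priors b_ij
    for _ in range(n_iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # softmax over j
        s = np.einsum("ij,ijd->jd", c, u_hat)   # s_j = sum_i c_ij * u_hat_j|i
        v = squash(s)                           # Eq. 1
        b += np.einsum("ijd,jd->ij", u_hat, v)  # agreement a_ij = v_j . u_hat_j|i
    return v

v = route(np.random.default_rng(0).normal(size=(8, 4, 16)))
print(v.shape, np.linalg.norm(v, axis=-1))      # output lengths lie in [0, 1)
```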
This coincidence filtering step can also be achieved by traditional outlier detection methods such as Random sample consensus (RANSAC) BID3, a classical method for finding subsets of the feature space with high agreement. However, the motivation for using the vector normalization of the instantiation parameters is to force the network to preserve orientation. Lastly, a reconstruction loss on the images was used for regularization, which constrains the capsules to learn properties that can better encode the entities. In this paper, we do not use such a regularization scheme by autoencoding pairs of input images; instead we use a variant of dropout. BID7 recently describe matrix capsules that perform routing by agreement using the expectation maximization (EM) algorithm, motivated by computer graphics where pose matrices are used to define rotations and translations of objects to account for viewpoint changes. Each parent capsule is considered a Gaussian, and the pose matrix of each child capsule is considered a data sample of the Gaussian. A given layer L contains a set of capsules $C^L$ such that every capsule outputs a pair $\{M, \alpha\} \in C^L$, where the pose matrix $M \in R^{n \times n}$ (n = 4) and the activation $\alpha \in [0, 1]$ are the outputs. A vote $V_{ij} = M_i W_{ij}$ is made for the pose matrix of $C^{L+1}_j$, where $W_{ij} \in R^{n \times n}$ is a learned viewpoint-invariant transformation matrix from capsule $C^L_i$ to $C^{L+1}_j$. The cost $h_j$ is the negative log-probability density weighted by the assignment probabilities $r_{ij}$, and $-\beta_u$ is the negative log probability density per pose matrix computed to describe $C^{L+1}_j$. If $C^{L+1}_j$ is activated, $-\beta_a$ is the cost for describing $(\mu_j, \sigma^2_j)$ from lower-level pose data samples along with $r_{ij}$, and λ is an inverse temperature, so that as the assignment probability becomes higher the slope of the sigmoid curve becomes steeper (this sigmoid represents the presence of an entity, used instead of the nonlinear vector normalization seen in Equation 1). The network uses 1 standard convolutional layer, a primary capsule layer, and 2 intermediate capsule convolutional layers, followed by the final class capsule layer. The matrix capsule network significantly outperformed CNNs on the smallNORB dataset. LaLonde & Bagci introduce SegCaps, which uses a locally connected dynamic routing scheme to reduce the number of parameters while using deconvolutional capsules to compensate for the loss of global information, showing the best performance for segmenting pathological lungs from low dose CT scans. The model obtained a 39% and 95% reduction in parameters over baseline architectures while outperforming both. Bahadori introduced Spectral Capsule Networks, demonstrated on medical diagnosis. The method shows faster convergence over the EM algorithm used with pose vectors. Spatial coincidence filters align extracted features on a 1-d linear subspace. The architecture consists of a 1-d convolution followed by 3 residual layers with dilation. Residual blocks R are used as nonlinear transformations for the pose and activation of the first primary capsule, instead of the linear transformation, to account for rotations in CV, since deformations made in healthcare imaging are not fully understood.
The weighted votes are obtained as $s_{j,i} = \alpha_i R_j(u_i)\ \forall i$, where $S_j$ is the matrix of concatenated votes that is then decomposed using SVD; the first singular vector is used to capture most of the variance between the votes, and the activation is computed as $a_j = \sigma\big(\eta(s_1^2/\sum_k s_k^2 - b)\big)$, where $s_1^2/\sum_k s_k^2$ is the ratio of the variance explained by the first of all right singular vectors in V, b is optimized and η is decreased during training. The model is trained by maximizing the log-likelihood, showing better performance than the spread loss used with matrix capsules, and mitigates the problem of capsules becoming dormant. BID31 formalize the capsule routing strategy as an optimization of a clustering loss and a KL regularization term between the coupling coefficient distribution and its past states. The proposed objective function follows as $\min_{C,S}\{L(C, S) := -\sum_i\sum_j c_{ij}\langle o_{j|i}, s_j\rangle + \alpha \sum_i\sum_j c_{ij}\log c_{ij}\}$, where $o_{j|i} = T_{ij}\mu_i/\|T_{ij}\|_F$ and $\|T_{ij}\|_F$ is the Frobenius norm of $T_{ij}$. This routing scheme shows significant benefit over the original routing scheme by BID24 as the number of routing iterations increases. Evidently, there has been a surge of interest within the research community. In contrast, the novelty presented in this paper is the pairwise learning capsule network scheme that proposes a different loss function, a change in architecture that compares images, aligns entities across images and describes a method for measuring similarity between final layer capsules such that inter-class variations are maximized and intra-class variations are minimized. Before describing these points in detail, we briefly describe the current state of the art (SoTA) work in face verification that has utilized Siamese Networks. Siamese Networks (SNs) are neural networks that learn relationships between encoded representations of instance pairs that lie on a low dimensional manifold, where a chosen distance function $d_\omega$ is used to find the similarity in output space. Below we briefly describe state of the art convolutional SNs that have been used for face verification and face recognition. BID27 presented a joint identification-verification approach for learning face verification with a contrastive loss and face recognition using a cross-entropy loss. To balance the loss signals for both identification and verification, they investigate the effects of varying weights controlled by λ on the intra-personal and inter-personal variations, where λ = 0 leaves only the face recognition loss and λ → ∞ leaves only the face verification loss. Optimal results are found when λ = 0.05: intra-personal variation is minimized while both classes remain distinguished. BID32 propose a center loss function to improve discriminative feature learning in face recognition. The proposed center loss function aims to improve the discriminability between feature representations by minimizing the intra-class variation while keeping features from different classes separable. The center loss is given as $L_C = \frac{1}{2}\sum_{i=1}^{m}\|x_i - c_{y_i}\|_2^2$, used jointly with a softmax loss over logits $z = W_j^T x_i + b_j$. Here, $c_{y_i}$ is the centroid of feature representations pertaining to the i-th class. This penalizes the distance between the deep features and their corresponding class centers, minimizing the intra-class variation while the softmax keeps the inter-class features separable. The centroids are computed during stochastic gradient descent, as full batch updates would not be feasible for large networks.
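As an illustration of the center loss just described, here is a small numpy sketch (hypothetical code, not from the cited work; the centroid update rate lr_c is an assumption standing in for the minibatch centroid updates mentioned above):

```python
# Sketch of the center loss: penalize the squared distance between each
# feature x_i and its class centroid c_{y_i}, updating centroids with running
# minibatch estimates rather than full-batch statistics.
import numpy as np

class CenterLoss:
    def __init__(self, n_classes, dim, lr_c=0.5):
        self.centers = np.zeros((n_classes, dim))
        self.lr_c = lr_c

    def __call__(self, feats, labels):
        diff = feats - self.centers[labels]
        loss = 0.5 * np.sum(diff**2) / len(feats)        # L_C per sample
        # minibatch centroid update (moves centers toward their class means)
        for c in np.unique(labels):
            self.centers[c] += self.lr_c * diff[labels == c].mean(axis=0)
        return loss

cl = CenterLoss(n_classes=3, dim=4)
feats = np.random.default_rng(0).normal(size=(6, 4))
print(cl(feats, np.array([0, 1, 2, 0, 1, 2])))
```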
BID18 proposed Sphereface, a hypersphere embedding that uses an angular softmax loss to constrain discrimination on a hypersphere manifold, motivated by the prior that faces lie on a manifold. The model achieves 99.22% on the LFW dataset, and competitive results on YouTube Faces (YTF) and MegaFace. BID25 proposed a triplet similarity embedding for face verification using a triplet loss $\arg\min_W \sum_{(a, p, n) \in T} \max(0, \alpha + a^T W^T W(n - p))$, where each triplet in the set T consists of an anchor a, a positive class example p and a negative class example n, α is a margin, and the projection matrix W (PCA is performed to obtain the initialization $W_0$) is learned with an additional constraint on W. BID9 use deep metric learning for face verification, with a loss combining a logistic loss $g(z) = \log(1 + e^{\beta z})/\beta$ over pairwise distances with a Frobenius-norm regularization term $\lambda\|A\|_F$, where β controls the slope steepness of the logistic function, $\|A\|_F$ is the Frobenius norm of A and λ is a regularization parameter. Hence, the loss function is made up of a logistic loss and regularization on the parameters θ = [W, b]. The best results are obtained using a combination of SIFT descriptors, dense SIFT and local binary patterns (LBP), obtaining 90.68% (+/-1.41) accuracy on the LFW dataset. BID21 used an $\ell_2$-constraint on the softmax loss for face verification so that the encoded face features lie on a hypersphere, showing good improvements in performance. This work too uses an $\ell_2$-constraint on capsule encoded face embeddings. FaceNet BID26 also uses a triplet network that combines the Inception network BID28 and an 8-layer convolutional model BID33, which learns to align face patches during training to perform face verification, recognition and clustering. The method trains the network on triplets of increasing difficulty using a negative example mining technique. Similarly, we consider a Siamese Inception Network for the tasks as one of a few comparisons to SCNs. The most relevant and notable use of Siamese Networks for face verification is the DeepFace network BID30. The performance obtained was on par with human level performance on the Faces in the Wild (LFW) dataset and significantly outperformed previous methods. However, it is worth noting this model is trained on a large dataset from Facebook (SFC); therefore, the model can be considered to be performing transfer learning before evaluation. The model also carries out some manual steps for detecting, aligning and cropping faces from the images. For detecting and aligning the face, a 3D model is used. The images are normalized to avoid any differences in illumination values before creating the 3D model, which is created by first identifying 6 fiducial points in the image using a Support Vector Regressor from an LBP histogram image descriptor. Once the faces are cropped based on these points, a further 67 fiducial points are identified for a 3D mesh model, followed by a piecewise affine transformation for each section of the image. The cropped image is then passed to 3 CNN layers, with an initial max-pooling layer, followed by two fully-connected layers. Similar to Capsule Networks, the authors refrain from using max pooling at each layer due to information loss. In contrast to this work, the only preprocessing steps for the proposed SCNs consist of pixel normalization and a resizing of the image. The above works all achieve comparable state of the art results for face verification using either a single CNN or a combination of various CNNs, some of which are pretrained on large related datasets.
In contrast, this work looks to use a smaller Capsule Network that is more efficient, requires few preprocessing steps (i.e., only a resizing of the image and normalization of input features; no aligning, cropping etc.) and can learn from relatively less data. The Capsule Network for face verification is intended to identify encoded part-whole relationships of facial features and their pose, which in turn leads to an improved similarity measure by aligning capsule features across paired images. The architecture consists of a 5-hidden layer (including 2 capsule layers) network with tied weights (since both inputs are from the same domain). The 1st layer is a convolutional filter with a stride of 3 and 256 channels with kernels $\kappa^1_i \in R^{9 \times 9}\ \forall i$ over the image pairs $x_1, x_2 \in R^{100 \times 100}$, resulting in 20,992 parameters. The 2nd layer is the primary capsule layer that takes $\kappa^1$ as input and outputs a $\kappa^2 \in R^{31 \times 31}$ matrix for 32 capsules, leading to $5.309 \times 10^6$ parameters (663,552 weights and 32 biases for each of 8 capsules). The 3rd layer is the face capsule layer, representing the routing of various properties of facial features, consisting of $5.90 \times 10^6$ parameters. This layer is then passed to a single fully connected layer, concatenating the capsule outputs as input, while sigmoid functions control the dropout rate for each capsule during training. The nonlinear vector normalization shown in Equation 1 is replaced with a tanh function, which we found in initial testing to produce better results. Euclidean distance, Manhattan distance and cosine similarity are considered as measures between the capsule image encodings. The aforementioned SCN architecture describes the setup for the AT&T dataset. For the LFW dataset, 6 routing iterations are used, and 4 for AT&T. To encode paired images $x_1, x_2$ into vector pairs $h_1, h_2$, the pose vector of each capsule is vectorized and passed as input to a fully connected layer containing 20 activation units. Hence, for each input there is a lower 20-dimensional representation of the 32 capsule pose vectors, resulting in 512 input features. To ensure all capsules stay active, the dropout probability rate is learned for each capsule. The logit function learns the dropout rate of the final capsule layer using Concrete Dropout BID4. This builds on prior work (Kingma et al.; BID20) by using a continuous relaxation referred to as a concrete distribution; the mask becomes differentiable, unlike the discrete Bernoulli distribution used in binary dropout. Equation 2 shows the relaxation used for updating the concrete distribution. For a given capsule probability $p_c$ in the last capsule layer, the sigmoid computes the relaxation $\tilde{z}$ on the Bernoulli variable z: $\tilde{z} = \mathrm{sigmoid}\big(\frac{1}{t}(\log p_c - \log(1 - p_c) + \log u_c - \log(1 - u_c))\big)$ (2), where $u_c$ is drawn uniformly from (0, 1) so that $\log u_c$ produces log-uniform noise, and t denotes the temperature (t = 0.1 in our experiments), which forces probabilities toward the extrema when small. The pathwise derivative estimator is used to find a continuous estimate of the dropout mask. Equation 3 shows the dropout loss, where the numerator of the first term represents the weight regularization term for the N weights in the final layer L, γ controls the amount of regularization, and the remaining $H(p_{d_L})$ is the dropout entropy term used as dropout regularization. This entropy term $H(p_{d_L})$ is essential to learn the dropout probability for capsule representations; also note that when regularization plays a significant role in reducing the contrastive loss, $p_d \to 0.5$.
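A small sketch (based on my reading of Equation 2; the probability and temperature values are illustrative) of the concrete dropout mask:

```python
# Sketch of the concrete dropout relaxation above: a sigmoid of the log-odds
# of the learned capsule probability p_c plus logistic noise, sharpened by a
# small temperature t; as t -> 0 the mask approaches a Bernoulli(p_c) sample.
import numpy as np

def concrete_mask(p_c, shape, t=0.1, rng=np.random.default_rng(0)):
    eps = 1e-7
    u = rng.uniform(eps, 1.0 - eps, size=shape)   # log-uniform noise source
    logits = (np.log(p_c + eps) - np.log(1.0 - p_c + eps)
              + np.log(u) - np.log(1.0 - u))
    return 1.0 / (1.0 + np.exp(-logits / t))      # differentiable mask z~

mask = concrete_mask(p_c=0.2, shape=(8,))
print(np.round(mask, 3))   # values pushed toward 0 or 1; mean ~= p_c
```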
In our experiments, the weight regularization coefficient $\lambda_{\omega_L} = 0.1$ is chosen based on a grid search over {0.001, 0.01, 0.1, 0.2, 0.5} during training. This is used as an auxiliary regularization term with the contrastive losses discussed in the next section. We also use a straight-through pathwise derivative estimator BID5 to then optimize the dropout probability, following the implementation by BID4. The margin loss of BID7 has also been used to maximize the inter-class distance between the target class and the remaining classes for classifying on the smallNORB dataset. This is given as $L_i = \sum_{j \neq t}\max(0, m - (a_t - a_j))^2$, where the margin m is increased linearly during training to ensure lower-level capsules stay active throughout training. This work instead uses a contrastive margin loss BID2, where the aforementioned capsule encoding similarity function $d_\omega$ outputs a predicted similarity score. The contrastive loss $L_c$ ensures similar vectorized pose encodings are drawn together and dissimilar poses repulse. Equation 4 shows the loss for a pair of images passed to the SCN model, $L_c(\omega) = y\, d_\omega(h_1, h_2)^2 + (1 - y)\max(0, m - d_\omega(h_1, h_2))^2$ (4), where $d_\omega(h_1, h_2) = \|h_1 - h_2\|_2$ computes the Euclidean distance between encodings, y = 1 for a matching pair, and m is the margin. When using Manhattan distance, $d_\omega(h_1, h_2) = \|h_1 - h_2\|_1$. A double margin loss that has been used in prior work by BID17 is also considered, to account for positive pairs that can also have high variance in the distance measure. It is worth noting this double margin is similar to the aforementioned margin loss used on class capsules, without the use of λ. Equation 5 shows the double-margin contrastive loss, $L_c(\omega) = y \max(0, d_\omega - m_p)^2 + (1 - y)\max(0, m_n - d_\omega)^2$ (5), where the positive margin $m_p$ and negative margin $m_n$ are used to find better separation between matching and non-matching pairs. This loss is only used for LFW; given the limited number of instances in AT&T, we find the amount of overlap between pairs to be less severe in experimentation. The original reconstruction loss $\sum_i (x_i - \hat{x}_i)^2$ used as regularization is not used in the pairwise learning setting; instead we rely on dropout for regularization, with the exception of the SCN model that uses concrete dropout on the final layer. In the case of using concrete dropout, the loss is $L_{conc} = L_c(\omega) + \Gamma_c(p_d)$, where Γ controls the amount of dropout regularization. Optimization Convergence can often be relatively slow for face verification tasks, where few informative batch updates (e.g., a sample with significantly different pose for a given class) get large updates, but soon after the effect is diminished through gradient exponential averaging (originally introduced to prevent α → 0). Motivated by recent findings that improve adaptive learning rates, we use AMSGrad BID22. AMSGrad improves over ADAM in some cases by replacing the exponential average of squared gradients with a maximum, which mitigates the issue by keeping a long-term memory of past gradients. Thus, AMSGrad does not increase or decrease the learning rate based on gradient changes, avoiding divergent or vanishing step sizes over time. Equation 6 presents the update rule, $\hat{v}_t = \max(\hat{v}_{t-1}, v_t), \quad \theta_{t+1} = \theta_t - \frac{\eta_t}{\sqrt{\hat{v}_t} + \epsilon}\, m_t$ (6), where $m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t$ and $v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2$ for gradient $g_t$. A. AT&T dataset The AT&T face recognition and verification dataset consists of 40 different subjects with only 10 gray-pixel images per subject in a controlled setting. This smaller dataset allows us to test how SCNs perform with little data. For testing, we hold out 5 subjects so that we are testing on unseen subjects, as opposed to training on a given viewpoint of a subject and testing on another viewpoint of the same subject.
A. AT&T dataset. The AT&T face recognition and verification dataset consists of 40 different subjects with only 10 grayscale images per subject in a controlled setting. This smaller dataset allows us to test how SCNs perform with little data. For testing, we hold out 5 subjects so that we are testing on unseen subjects, as opposed to training on a given viewpoint of a subject and testing on another viewpoint of the same subject. Hence, zero-shot pairwise prediction is performed during testing. The LFW dataset consists of 13,000 color photographs of faces from the web. This dataset is significantly more complex, not only because there are 1,680 subjects, with some subjects represented by only two images, but also because of the varied amounts of aging, pose, gender, lighting and other such natural characteristics. Each image is 250 × 250; in this work the image is resized to 100 × 100 and normalized. From the original LFW dataset there have been two different versions that align the images, using funneling BID10 and deep funneling BID11. The latter learns to align the images using Restricted Boltzmann Machines with a group sparsity penalty, showing performance improvements for face verification tasks. The penalty leads to an arrangement of the filters that improves the alignment. This overcomes the problems previous CNNs and similar models had in accounting for pose and orientation, problems Capsule Networks look to address. In contrast, we use the original raw image dataset. Both datasets allow us to test different challenges in face verification. The AT&T dataset provides a useful benchmark for the siamese-based models in the few-shot learning setting, as it provides only a limited number of instances and unique subjects, while also being limited to grayscale images. In contrast, LFW contains color images in an unconstrained setting, is large in comparison to AT&T, and has highly imbalanced classes. Baselines. SCNs are compared against well-established architectures for image recognition and verification tasks, namely AlexNet BID15, ResNet-34 and InceptionV3 BID29 with 6 inception layers instead of the 8 used by the original network; these baselines are used in many of the aforementioned papers in Section 3. Table 1 shows the best test results obtained when using contrastive loss with Euclidean distance between encodings (i.e., Mahalanobis distance) for both AT&T and LFW over 100 epochs. Shaded cells correspond to the best-performing models on test contrastive loss and test accuracy for each respective dataset and loss (e.g., LFW + Double Margin (M)). The former uses m = 0.4 and the latter uses m = 0.2, while for the double margin contrastive loss the margins are set to m_n = 0.2 and m_p = 0.5. These settings were chosen by 5-fold cross validation, grid searching over possible margin settings. SCN outperforms the baselines on the AT&T dataset after training for 100 epochs. We find that, because AT&T contains far fewer instances, an adapted dropout rate leads to a slight increase in contrastive loss. Additionally, adding a reconstruction loss with λ_r = 1e-4 for both paired images led to a decrease in performance when compared to using dropout with rate p = 0.2 on all layers except the final layer that encodes the pose vectors. We find for the LFW dataset that the SCN and AlexNet obtain the best results, while SCN has 25% fewer parameters. Additionally, the use of a double margin results in better performance for the LeNet SCN but a slight drop in performance when used with concrete dropout on the final layer (i.e., SDropCapNet). Figure 1 illustrates the contrastive loss during training of ℓ2-normalized features for each model tested with various distance measures on AT&T and LFW. We find that SCN yields faster convergence on AT&T, particularly when using Manhattan distance. For Euclidean distance, however, we observe a reduction in loss variance during training and the best overall performance.
Through experiments we find that batch-normalized convolutional layers improve the performance of the SCN. In batch normalization, each activation x^(k) is normalized to x̂^(k) and then shifted and scaled with learned parameters β^(k) and γ^(k), so that a^(k) = γ^(k) x̂^(k) + β^(k). This allows the network to learn whether the input range should be more or less diffuse. Batch normalization on the initial convolutional layers reduced variance in the loss during training on both the AT&T and LFW datasets. The LFW test results show that the SCN model takes longer to converge, particularly in the early stages of training, in comparison to AlexNet. Figure 2 shows the probability density of the positive-pair predictions for each model for all distances between encodings with contrastive loss on the LFW dataset. Accuracy is computed using the aforementioned margins to separate positive and negative instance pairs, which are set to be constant throughout training since the capsule-encoded features are ℓ2-normalized, similar to the approach taken for FaceNet BID26. We find the variance of the predictions is lower in comparison to the remaining models, showing a higher precision in the predictions, particularly for Manhattan distance. Additionally, the distances for matching images were close in variance to those of non-matching images. This motivated the use of the double margin loss considered for the LFW dataset. Finally, the SCN model uses far fewer parameters than the baselines: AlexNet has 104-116% more parameters, ResNet-34 has 24-27% more, and the best LeNet baseline has 127-135% more, across both datasets. However, even considering the tied weights in the SCN, Capsule Networks remain limited in speed, even with a reduction in parameters, due to the routing iterations that are necessary during training. This paper has introduced the Siamese Capsule Network, a novel architecture that extends Capsule Networks to the pairwise learning setting with a feature ℓ2-normalized contrastive loss that maximizes inter-class variance and minimizes intra-class variance. The results indicate Capsule Networks perform better at learning from only a few examples and converge faster when a contrastive loss is used that takes face embeddings in the form of encoded capsule pose vectors. We find Siamese Capsule Networks to perform particularly well on the AT&T dataset in the few-shot learning setting, where testing is carried out on unseen classes (i.e., subjects), while remaining competitive against the baselines on the larger Labeled Faces in the Wild dataset.
In this work we present a new agent architecture, called Reactor, which combines multiple algorithmic and architectural contributions to produce an agent with higher sample-efficiency than Prioritized Dueling DQN and Categorical DQN, while giving better run-time performance than A3C. Our first contribution is a new policy evaluation algorithm called Distributional Retrace, which brings multi-step off-policy updates to the distributional reinforcement learning setting. The same approach can be used to convert several classes of multi-step policy evaluation algorithms designed for expected-value evaluation into distributional ones. Next, we introduce the β-leave-one-out policy gradient algorithm, which improves the trade-off between variance and bias by using action values as a baseline. Our final algorithmic contribution is a new prioritized replay algorithm for sequences, which exploits the temporal locality of neighboring observations for more efficient replay prioritization. Using the Atari 2600 benchmarks, we show that each of these innovations contributes to both the sample efficiency and final agent performance. Finally, we demonstrate that Reactor reaches state-of-the-art performance after 200 million frames and less than a day of training. Model-free deep reinforcement learning has achieved several remarkable successes in domains ranging from super-human-level control in video games and the game of Go BID10, to continuous motor control tasks. Much of the recent work can be divided into two categories. First, those which, often building on the DQN framework, act ε-greedily according to an action-value function and train using mini-batches of transitions sampled from an experience replay buffer BID10 BID13 BID5 BID0. These value-function agents benefit from improved sample complexity, but tend to suffer from long runtimes (e.g., DQN requires approximately a week to train on Atari). The second category comprises the actor-critic agents, which includes the asynchronous advantage actor-critic (A3C) algorithm introduced by Mnih et al. These agents train on transitions collected by multiple actors running, and often training, in parallel (BID12). The deep actor-critic agents train on each trajectory only once, and thus tend to have worse sample complexity. However, their distributed nature allows significantly faster training in terms of wall-clock time. Still, not all existing algorithms can be put in the above two categories, and various hybrid approaches do exist (BID17; BID4; BID14). We consider a Markov decision process (MDP) with state space X and finite action space A. A (stochastic) policy π(·|x) is a mapping from states x ∈ X to a probability distribution over actions. We consider a γ-discounted infinite-horizon criterion, with γ ∈ [0, 1) the discount factor, and define for policy π the action-value of a state-action pair (x, a) as Q^π(x, a) = E[Σ_{t≥0} γ^t r_t | x_0 = x, a_0 = a], where {x_t}_{t≥0} is a trajectory generated by choosing a in x and following π thereafter, i.e., a_t ∼ π(·|x_t) (for t ≥ 1), and r_t is the reward signal. The objective in reinforcement learning is to find an optimal policy π*, which maximises Q^π(x, a). The optimal action-values are given by Q*(x, a) = max_π Q^π(x, a). The Deep Q-Network (DQN) framework popularised the current line of research into deep reinforcement learning by reaching human-level, and beyond, performance across 57 Atari 2600 games in the ALE.
While DQN includes many specific components, the essence of the framework, much of which is shared by Neural Fitted Q-Learning, is the use of a deep convolutional neural network to approximate an action-value function, training this approximate action-value function using the Q-Learning algorithm BID15 and mini-batches of one-step transitions (x_t, a_t, r_t, x_{t+1}, γ_t) drawn randomly from an experience replay buffer. Additionally, the next-state action-values are taken from a target network, which is updated to match the current network periodically. Thus, the temporal difference (TD) error for transition t used by these algorithms is given by δ_t = r_t + γ_t max_{a′∈A} Q(x_{t+1}, a′; θ̄) − Q(x_t, a_t; θ), where θ denotes the parameters of the network and θ̄ the parameters of the target network. Since this seminal work, we have seen numerous extensions and improvements that all share the same underlying framework. Double DQN BID10 attempts to correct for the over-estimation bias inherent in Q-Learning by changing the second term of the TD error to Q(x_{t+1}, argmax_{a′∈A} Q(x_{t+1}, a′; θ); θ̄). The dueling architecture BID13 changes the network to estimate action-values using separate network heads V(x; θ) and A(x, a; θ), with Q(x, a; θ) = V(x; θ) + A(x, a; θ) − (1/|A|) Σ_{a′} A(x, a′; θ). Recently, BID6 introduced Rainbow, a value-based reinforcement learning agent combining many of these improvements into a single agent and demonstrating that they are largely complementary. Rainbow significantly outperforms previous methods, but also inherits the poorer time-efficiency of the DQN framework. We include a detailed comparison between Reactor and Rainbow in the Appendix. In the remainder of the section we describe in more depth other recent improvements to DQN. The experience replay buffer was first introduced by Lin and later used in DQN. Typically, the replay buffer is essentially a first-in-first-out queue, with new transitions gradually replacing older transitions. The agent then samples a mini-batch uniformly at random from the replay buffer. Drawing inspiration from prioritized sweeping, prioritized experience replay replaces the uniform sampling with prioritized sampling proportional to the absolute TD error. Specifically, for a replay buffer of size N, prioritized experience replay samples transition t with probability P(t) = p_t^α / Σ_k p_k^α, and applies weighted importance sampling with w_t = (1/(N · P(t)))^β to correct for the prioritization bias. Prioritized DQN significantly increases both the sample-efficiency and final performance over DQN on the Atari 2600 benchmarks BID13. Retrace(λ) is a convergent off-policy multi-step algorithm extending the DQN agent. Assume that some trajectory {x_0, a_0, r_0, x_1, a_1, r_1, ..., x_t, a_t, r_t, ...} has been generated according to a behaviour policy µ, i.e., a_t ∼ µ(·|x_t). Now, we aim to evaluate the value of a different target policy π, i.e., we want to estimate Q^π. The Retrace algorithm updates our current estimate Q of Q^π in the direction of ΔQ(x_t, a_t) = Σ_{s≥t} γ^{s−t} (c_{t+1} ... c_s) δ_s^π, where δ_s^π = r_s + γ E_{a∼π}[Q(x_{s+1}, a)] − Q(x_s, a_s) is the temporal difference at time s under π, and c_s = λ min(1, π(a_s|x_s)/µ(a_s|x_s)). The Retrace algorithm comes with the theoretical guarantee that, in finite state and action spaces, repeatedly updating our current estimate Q according to this rule produces a sequence of Q functions which converges to Q^π for a fixed π, or to Q* if we consider a sequence of policies π which become increasingly greedy w.r.t. the Q estimates.
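To illustrate the update, the following minimal sketch (ours, not the authors' implementation) computes the Retrace correction ΔQ(x_t, a_t) for a single finite trajectory, assuming tabular Q-values and known target-policy and behaviour probabilities along the trajectory.

import numpy as np

def retrace_update(q, pi, actions, rewards, mu_probs, gamma=0.99, lam=1.0):
    # Retrace(lambda) correction for Q(x_0, a_0) along one trajectory.
    # q[s]        : Q(x_s, .) over actions, for s = 0..T
    # pi[s]       : target-policy probabilities pi(.|x_s), for s = 0..T
    # actions[s]  : behaviour action a_s taken at step s, for s = 0..T-1
    # rewards[s]  : reward r_s, for s = 0..T-1
    # mu_probs[s] : behaviour probability mu(a_s | x_s), for s = 0..T-1
    T = len(rewards)
    delta_q = 0.0
    trace = 1.0  # running product c_{t+1} ... c_s (empty product for s = t)
    for s in range(T):
        # TD error under pi: r_s + gamma * E_pi Q(x_{s+1}, .) - Q(x_s, a_s)
        exp_q_next = np.dot(pi[s + 1], q[s + 1])
        td = rewards[s] + gamma * exp_q_next - q[s][actions[s]]
        delta_q += (gamma ** s) * trace * td
        if s + 1 < T:
            # truncated importance weight c_{s+1} = lam * min(1, pi/mu)
            trace *= lam * min(1.0, pi[s + 1][actions[s + 1]] / mu_probs[s + 1])
    return delta_q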
Distributional reinforcement learning refers to a class of algorithms that directly estimate the distribution over returns, whose expectation gives the traditional value function BID2. Such approaches can be made tractable with a distributional Bellman equation, and the recently proposed algorithm C51 showed state-of-the-art performance in the Atari 2600 benchmarks. C51 parameterizes the distribution over returns with a mixture over Diracs centered on a uniform grid, Z(x, a) = Σ_{i=0}^{N−1} q_i(x, a) δ_{z_i} with z_i = v_min + i (v_max − v_min)/(N − 1), with hyperparameters v_min, v_max that bound the distribution support of size N. In this section we review the actor-critic framework for reinforcement learning algorithms and then discuss recent advances in actor-critic algorithms along with their various trade-offs. The asynchronous advantage actor-critic (A3C) algorithm maintains a parameterized policy π(a|x; θ) and value function V(x; θ_v), which are updated with Δθ = ∇_θ log π(a_t|x_t; θ) A(x_t, a_t; θ_v) and Δθ_v = A(x_t, a_t; θ_v) ∇_{θ_v} V(x_t; θ_v), where A(x_t, a_t; θ_v) = Σ_{i=0}^{k−1} γ^i r_{t+i} + γ^k V(x_{t+k}; θ_v) − V(x_t; θ_v). A3C uses M = 16 parallel CPU workers, each acting independently in the environment and applying the above updates asynchronously to a shared set of parameters. In contrast to the previously discussed value-based methods, A3C is an on-policy algorithm, and uses neither a GPU nor a replay buffer. Proximal Policy Optimization (PPO) is a closely related actor-critic algorithm, which replaces the advantage term with the clipped surrogate min(ρ_t A_t, clip(ρ_t, 1 − ε, 1 + ε) A_t), where ρ_t is the importance ratio as defined in Section 2.1.2. Although both PPO and A3C run M parallel workers collecting trajectories independently in the environment, PPO collects these experiences to perform a single, synchronous update, in contrast with the asynchronous updates of A3C. Actor-Critic Experience Replay (ACER) extends the A3C framework with an experience replay buffer, the Retrace algorithm for off-policy corrections, and the Truncated Importance Sampling Likelihood Ratio (TISLR) algorithm used for off-policy policy optimization BID14. The Reactor is a combination of four novel contributions on top of recent improvements to both deep value-based RL and policy-gradient algorithms. Each contribution moves Reactor towards our goal of achieving both sample and time efficiency. The Reactor architecture represents both a policy π(a|x) and an action-value function Q(x, a). We use a policy gradient algorithm to train the actor π, which makes use of our current estimate Q(x, a) of Q^π(x, a). Let V^π(x_0) be the value function at some initial state x_0; the policy gradient theorem says that ∇V^π(x_0) = E[Σ_{t≥0} γ^t Σ_a Q^π(x_t, a) ∇π(a|x_t)], where ∇ refers to the gradient w.r.t. the policy parameters BID9. We now consider several possible ways to estimate this gradient. To simplify notation, we drop the dependence on the state x for now and consider the problem of estimating the quantity G = Σ_a Q^π(a) ∇π(a). In the off-policy case, we consider estimating G using a single action â drawn from a (possibly different from π) behaviour distribution â ∼ µ. Let us assume that for the chosen action â we have access to an unbiased estimate R(â) of Q^π(â). Then, we can use the likelihood ratio (LR) method combined with an importance sampling (IS) ratio (which we call ISLR) to build an unbiased estimate of G: Ĝ_ISLR = (π(â)/µ(â)) (R(â) − V) ∇ log π(â), where V is a baseline that depends on the state but not on the chosen action. However, this estimate suffers from high variance.
A possible way of reducing variance is to estimate G directly by using the return R(â) for the chosen action â and our current estimate Q of Q^π for the other actions, which leads to the so-called leave-one-out (LOO) policy-gradient estimate: Ĝ_LOO = R(â) ∇π(â) + Σ_{a≠â} Q(a) ∇π(a). This estimate has low variance but may be biased if the estimated Q values differ from Q^π. A better bias-variance tradeoff may be obtained by the more general β-LOO policy-gradient estimate: Ĝ_{β-LOO} = β (R(â) − Q(â)) ∇π(â) + Σ_a Q(a) ∇π(a), where β = β(µ, π, â) can be a function of both policies, π and µ, and the selected action â. Notice that when β = 1, Ĝ_{β-LOO} reduces to Ĝ_LOO, and when β = 1/µ(â), it becomes Ĝ_{1/µ-LOO} = (π(â)/µ(â)) (R(â) − Q(â)) ∇ log π(â) + Σ_a Q(a) ∇π(a). This estimate is unbiased and can be seen as a generalization of Ĝ_ISLR where, instead of using a state-only-dependent baseline, we use a state-and-action-dependent baseline (our current estimate Q) and add the correction term Σ_a ∇π(a) Q(a) to cancel the bias. Proposition 1 gives our analysis of the bias of Ĝ_{β-LOO}, with a proof left to the Appendix: the bias is Σ_a (1 − µ(a)β(a)) (Q(a) − Q^π(a)) ∇π(a). Thus the bias is small when β(a) is close to 1/µ(a), or when the Q-estimates are close to the true Q^π values, and the estimate is unbiased regardless of the estimates if β(a) = 1/µ(a). The variance is low when β is small; therefore, to improve the bias-variance tradeoff we recommend using the β-LOO estimate with β defined as β(â) = min(c, 1/µ(â)) for some constant c ≥ 1 (a code sketch of this estimator is given at the end of this subsection). This truncated 1/µ coefficient shares similarities with the truncated IS gradient estimate introduced in BID14 (which we call TISLR, for truncated ISLR): DISPLAYFORM7. The differences are: (i) we truncate 1/µ(â) = π(â)/µ(â) × 1/π(â) instead of truncating π(â)/µ(â), which provides an additional variance reduction due to the variance of the LR ∇ log π(â) = ∇π(â)/π(â) (since this LR may be large when a low-probability action is chosen), and (ii) we use our Q-baseline instead of a V baseline, further reducing the variance of the LR estimate. In off-policy learning it is very difficult to produce an unbiased sample R(â) of Q^π(â) when following another policy µ. This would require a full importance-sampling correction along the trajectory. Instead, we use the off-policy corrected return computed by the Retrace algorithm, which produces a (biased) estimate of Q^π(â) whose bias vanishes asymptotically.
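Returning to the β-LOO estimator above, a minimal sketch is given below (our own illustration; grad_pi stands for the per-action gradients ∇π(a), which in practice come from automatic differentiation, and r_hat is the off-policy corrected return estimate R(â)).

import numpy as np

def beta_loo_estimate(grad_pi, q, a_hat, r_hat, mu_a_hat, c=1.0):
    # beta-LOO policy-gradient estimate for one sampled action a_hat.
    # grad_pi  : array of gradients grad pi(a), one row per action
    # q        : current estimates Q(a) for all actions
    # a_hat    : index of the action sampled from the behaviour policy mu
    # r_hat    : return estimate R(a_hat)
    # mu_a_hat : behaviour probability mu(a_hat)
    # c        : truncation constant, beta = min(c, 1 / mu(a_hat))
    beta = min(c, 1.0 / mu_a_hat)
    # correction term sum_a grad pi(a) * Q(a), using Q as a control variate
    correction = np.einsum('a,a...->...', q, grad_pi)
    return beta * (r_hat - q[a_hat]) * grad_pi[a_hat] + correction

Setting c = 1 recovers the low-variance LOO estimate, while letting c grow towards 1/µ(â) trades variance for lower bias, as discussed above.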
In Reactor, we consider predicting an approximation of the return-distribution function from any state-action pair (x, a), in a similar way as in BID2. The original C51 algorithm described in that paper considered single-step Bellman updates only. Here we need to extend this idea to multi-step updates and handle the off-policy correction performed by the Retrace algorithm. Next, we describe these two extensions. Multi-step distributional Bellman operator: First, we extend C51 to multi-step Bellman backups. We consider return distributions from (x, a) of the form Σ_i q_i(x, a) δ_{z_i} (where δ_z denotes a Dirac in z), which are supported on a finite uniform grid {z_i}, z_i = v_min + i (v_max − v_min)/(N − 1). The coefficients q_i(x, a) (a discrete distribution) correspond to the probabilities assigned to each atom z_i of the grid. From an observed n-step sequence {x_t, a_t, r_t, x_{t+1}, ..., x_{t+n}}, generated by the behaviour policy µ (i.e., a_s ∼ µ(·|x_s) for t ≤ s < t + n), we build the n-step backed-up return-distribution from (x_t, a_t). The n-step distributional Bellman target, whose expectation is Σ_{s=t}^{t+n−1} γ^{s−t} r_s + γ^n Q(x_{t+n}, a), is given by the distribution Σ_i q_i(x_{t+n}, a) δ_{z_i^n}, with shifted atoms z_i^n = Σ_{s=t}^{t+n−1} γ^{s−t} r_s + γ^n z_i. Since this distribution is supported on the set of atoms {z_i^n}, which is not necessarily aligned with the grid {z_i}, we do a projection step and minimize the KL-loss between the projected target and the current estimate, just as with C51, except with a different target distribution BID2. Distributional Retrace: Now, the Retrace algorithm defined above involves an off-policy correction which is not handled by the previous n-step distributional Bellman backup. The key to extending this distributional backup to off-policy learning is to rewrite the Retrace algorithm as a linear combination of n-step Bellman backups, weighted by coefficients α_{n,a}. Indeed, notice that the Retrace update rewrites as ΔQ(x_t, a_t) = Σ_{n≥1} Σ_a α_{n,a} (Σ_{s=t}^{t+n−1} γ^{s−t} r_s + γ^n Q(x_{t+n}, a) − Q(x_t, a_t)), where α_{n,a} = c_{t+1} ... c_{t+n−1} (π(a|x_{t+n}) − I{a = a_{t+n}} c_{t+n}). These coefficients depend on the degree of off-policy-ness (between µ and π) along the trajectory. We have that Σ_{n≥1} Σ_a α_{n,a} = Σ_{n≥1} c_{t+1} ... c_{t+n−1} (1 − c_{t+n}) = 1, but notice that some coefficients may be negative. However, in expectation (over the behaviour policy) they are non-negative. Indeed, E_µ[I{a = a_{t+n}} c_{t+n}] = λ min(µ(a|x_{t+n}), π(a|x_{t+n})) ≤ π(a|x_{t+n}), by the definition of the c_s coefficients. Thus, in expectation (over the behaviour policy), the Retrace update can be seen as a convex combination of n-step Bellman updates. Then, the distributional Retrace algorithm can be defined as backing up a mixture of n-step distributions. More precisely, we define the Retrace target distribution as q*_i(x_t, a_t) = Σ_{n≥1} Σ_a α_{n,a} Σ_j q_j(x_{t+n}, a) h_{z_i}(z_j^n), where h_{z_i}(x) is a linear interpolation kernel projecting onto the support {z_i}: h_{z_i} equals 1 at z_i, decreases linearly to 0 at the neighbouring atoms, and the boundary atoms absorb all mass falling outside [v_min, v_max] (a code sketch of this projection is given at the end of this subsection). We update the current probabilities q(x_t, a_t) by performing a gradient step on the KL-loss KL(q*(x_t, a_t) ‖ q(x_t, a_t)) = −Σ_i q*_i(x_t, a_t) log q_i(x_t, a_t) + const. Again, notice that some target "probabilities" q*_i(x_t, a_t) may be negative for some sample trajectories, but in expectation they will be non-negative. Since the gradient of a KL-loss is linear w.r.t. its first argument, our update rule provides an unbiased estimate of the gradient of the KL between the expected (over the behaviour policy) Retrace target distribution and the current predicted distribution. Remark: The same method can be applied to other algorithms (such as TB(λ) and importance sampling) in order to derive distributional versions of other off-policy multi-step RL algorithms.
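Returning to the projection step above, the interpolation kernel h_{z_i} acts like the C51 projection: each shifted atom splits its probability mass between the two nearest grid atoms. Below is a minimal NumPy sketch of our reading of this step (not the authors' code).

import numpy as np

def project_atoms(probs, shifted_atoms, v_min, v_max, n_atoms):
    # Project a distribution supported on shifted_atoms (z_i^n) back onto the
    # uniform grid z_i = v_min + i * dz, splitting each probability mass between
    # the two nearest grid atoms (linear interpolation kernel h).
    dz = (v_max - v_min) / (n_atoms - 1)
    projected = np.zeros(n_atoms)
    for p, z in zip(probs, shifted_atoms):
        z = np.clip(z, v_min, v_max)          # boundary atoms absorb overflow
        pos = (z - v_min) / dz
        lo, hi = int(np.floor(pos)), int(np.ceil(pos))
        if lo == hi:
            projected[lo] += p
        else:
            projected[lo] += p * (hi - pos)   # mass proportional to proximity
            projected[hi] += p * (pos - lo)
    return projected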
Prioritized experience replay has been shown to boost both the statistical efficiency and final performance of deep RL agents. However, as originally defined, prioritized replay does not handle sequences of transitions and weights all unsampled transitions identically. In this section we present an alternative initialization strategy, called lazy initialization, and argue that it better encodes prior information about temporal-difference errors. We then briefly describe our computationally efficient prioritized sequence sampling algorithm, with full details left to the appendix. It is widely recognized that TD errors tend to be temporally correlated; indeed, the need to break this temporal correlation has been one of the primary justifications for the use of experience replay. Our proposed algorithm begins with this fundamental assumption. Assumption 1. Temporal differences are temporally correlated, with correlation decaying on average with the time-difference between two transitions. Prioritized experience replay adds new transitions to the replay buffer with a constant priority, but given the above assumption we can devise a better method. Specifically, we propose to add experience to the buffer with no priority, inserting a priority only after the transition has been sampled and used for training. Also, instead of sampling transitions, we assign priorities to all (overlapping) sequences of length n. When sampling, sequences with an assigned priority are sampled proportionally to that priority. Sequences with no assigned priority are sampled proportionally to the average priority of assigned-priority sequences within some local neighbourhood. Averages are weighted to compensate for sampling biases (i.e., more samples are drawn in areas of high estimated priority, and in the absence of weighting this would lead to overestimation of unassigned priorities). The lazy initialization scheme starts with the priorities p_t corresponding to the sequences {x_t, a_t, ..., x_{t+n}} for which a priority was already assigned. Then it extrapolates a priority for all other sequences in the following way. Let us define a partition (I_i)_i of the states, ordered by increasing time, such that each cell I_i contains exactly one state s_i with an already assigned priority p_i. We define the estimated priority p̃_t of all other sequences as p̃_t = Σ_{s_i ∈ J(t)} w_i p_i / Σ_{s_i ∈ J(t)} w_i, where J(t) is a collection of contiguous cells (I_i) containing time t, and w_i = |I_i| is the length of the cell I_i containing s_i (a small sketch of this estimate is given at the end of this subsection). For already assigned priorities we let p̃_t = p_t. Cell sizes work as estimates of inverse local density and are used as importance weights for priority estimation. For the algorithm to be unbiased, the partition (I_i)_i must not be a function of the assigned priorities. So far we have defined a class of algorithms, each free to choose the partition (I_i) and the collection of cells J(t), as long as they satisfy the above constraints. FIG6 in the Appendix illustrates the above description. Now, with probability ε we sample uniformly at random, and with probability 1 − ε we sample proportionally to p̃_t. We implemented an algorithm satisfying the above constraints and called it the Contextual Priority Tree (CPT). It is based on AVL trees BID11 and can execute sampling, insertion, deletion and density evaluation in O(ln(n)) time. We describe CPT in detail in Section 6.3 of the Appendix. We treated prioritization purely as a variance reduction technique. Importance-sampling weights were evaluated as in prioritized experience replay, with β fixed to 1. We used simple gradient-magnitude estimates as priorities, corresponding to the mean absolute TD error along a sequence for Retrace in the classical RL case, and to the total variation in the distributional Retrace case.
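Returning to the lazy-initialization estimate p̃_t, the following sketch (our own simplified rendering, not the paper's implementation) extrapolates a priority for an unassigned sequence from the length-weighted average of assigned priorities in a neighbourhood J(t) of cells; the cell representation and the neighbourhood-radius parameter are our illustrative choices.

def estimate_priority(t, cells, neighbourhood=1):
    # cells: ordered list of (start, end, priority), one assigned priority per cell.
    # Returns the extrapolated priority p~_t for time index t.
    idx = next(i for i, (s, e, _) in enumerate(cells) if s <= t <= e)
    lo = max(0, idx - neighbourhood)
    hi = min(len(cells), idx + neighbourhood + 1)
    # length-weighted average, w_i = |I_i|, compensating for sampling bias
    num = sum((e - s + 1) * p for s, e, p in cells[lo:hi])
    den = sum((e - s + 1) for s, e, _ in cells[lo:hi])
    return num / den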
In order to improve CPU utilization we decoupled acting from learning. This is an important aspect of our architecture: an acting thread receives observations, submits actions to the environment, and stores transitions in memory, while a learning thread re-samples sequences of experiences from memory and trains on them (Figure 2, left). We typically execute 4-6 acting steps per learning step. We sample sequences of length n = 33 in batches of 4. The moving network is unrolled over frames 1-32 while the target network is unrolled over frames 2-33. We allow the agent to be distributed over multiple machines, each containing actor-learner pairs. Each worker downloads the newest network parameters before each learning step and sends delta-updates at the end of it. Both the network and target network are stored on a shared parameter server, while each machine contains its own local replay memory. Training is done by downloading the shared network, evaluating local gradients and sending them to be applied to the shared network. While the agent can also be trained on a single machine, in this work we present results of training obtained with either 10 or 20 actor-learner workers and one parameter server. In Figure 2 (right) we compare resources and runtimes of Reactor with related algorithms. In some domains, such as Atari, it is useful to base decisions on a short history of past observations. The two techniques generally used to achieve this are frame stacking and recurrent network architectures. We chose the latter over the former for reasons of implementation simplicity and computational efficiency. As the Retrace algorithm requires evaluating action-values over contiguous sequences of trajectories, using a recurrent architecture allows each frame to be processed by the convolutional network only once, as opposed to n times if n frame concatenations were used. The Reactor architecture uses a recurrent neural network which takes an observation x_t as input and produces two outputs: categorical action-value distributions q_i(x_t, a) (i here is a bin identifier), and policy probabilities π(a|x_t). We use an architecture inspired by the duelling network architecture BID13: we split the action-value-distribution logits into state-value logits and advantage logits, which in turn are connected to the same LSTM network BID7. Final action-value logits are produced by summing state- and action-specific logits, as in BID13. Finally, a softmax layer on top for each action produces the distributions over discounted future returns (a sketch of this head is given at the end of this subsection). The policy head uses a softmax layer mixed with a fixed uniform distribution over actions, where this mixing ratio is a hyperparameter (Section 5.1.3). Policy and Q-networks have separate LSTMs. Both LSTMs are connected to a shared linear layer which is connected to a shared convolutional neural network. The precise network specification is given in TAB6 in the Appendix. Gradients coming from the policy LSTM are blocked and only gradients originating from the Q-network LSTM are allowed to back-propagate into the convolutional neural network. We block gradients from the policy head for increased stability, as this avoids positive feedback loops between π and q_i caused by shared representations. We used the Adam optimiser with a learning rate of 5 × 10^−5 and zero momentum, because asynchronous updates induce implicit momentum. Further discussion of hyperparameters and their optimization can be found in Appendix 6.1.
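The combination of state-value and advantage logits into per-action return distributions can be sketched as follows (our reading of the head; we assume the mean-centering convention of the dueling architecture BID13, and the shapes are placeholders for the layer sizes given in TAB6).

import numpy as np

def dueling_categorical_head(state_logits, adv_logits):
    # Combine state-value logits (n_atoms,) and advantage logits
    # (n_actions, n_atoms) into per-action return distributions,
    # followed by a per-action softmax over atoms.
    adv_centered = adv_logits - adv_logits.mean(axis=0, keepdims=True)
    logits = state_logits[None, :] + adv_centered           # (n_actions, n_atoms)
    e = np.exp(logits - logits.max(axis=1, keepdims=True))  # numerically stable softmax
    return e / e.sum(axis=1, keepdims=True)                 # q_i(x, a)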
We trained and evaluated Reactor on 57 Atari games BID1. FIG4 compares the performance of the full Reactor with versions of Reactor that each leave one of the algorithmic improvements out. We can see that each of the algorithmic improvements (distributional Retrace, β-LOO and prioritized replay) contributed to the final results. While prioritization was arguably the most important component, β-LOO clearly outperformed the TISLR algorithm. Although the distributional and non-distributional versions performed similarly in terms of median human normalized scores, the distributional version of the algorithm generalized better when tested with random human starts. We evaluated Reactor with target update frequency T_update = 1000, λ = 1.0 and β-LOO with β = 1 on 57 Atari games, trained on 10 machines in parallel. We averaged scores over 200 episodes using 30 random human starts and 30 no-op starts (TAB8 in the Appendix). We calculated mean and median human normalised scores across all games. We also ranked all algorithms (including random and human scores) for each game and evaluated the mean rank of each algorithm across all 57 Atari games, as well as Elo scores, for both the human-starts and no-op starts settings. Please refer to Section 6.2 in the Appendix for more details. TAB2 compares Reactor with several other state-of-the-art algorithms across 57 Atari games for a fixed random seed across all games BID1. The algorithms we compare Reactor against are: DQN, Double DQN BID10, DQN with prioritised experience replay BID13, the dueling architecture and prioritised dueling BID13, ACER BID14, A3C, and Rainbow BID6. Each algorithm was exposed to 200 million frames of experience, or 500 million frames when followed by 500M, and the same pre-processing pipeline including 4 action repeats was used as in the original DQN paper. In TAB2, we see that Reactor exceeds the performance of all algorithms across all metrics, despite requiring under two days of training. With 500 million frames and four days of training, we see Reactor's performance continue to improve significantly. The difference in time-efficiency is especially apparent when comparing Reactor and Rainbow (see FIG4). Additionally, unlike Rainbow, Reactor does not use Noisy Networks BID3, which was reported to have contributed to Rainbow's performance gains. When evaluating under the no-op starts regime (TAB4), Reactor outperforms all methods except for Rainbow. This suggests that Rainbow is more sample-efficient when training and evaluation regimes match exactly, but may be overfitting to particular trajectories, given the significant drop in performance when evaluated on the random human starts. Regarding ACER, another Retrace-based actor-critic architecture, both the classical and distributional versions of Reactor (FIG4) exceeded ACER's best reported median human normalized score of 1.9 with no-op starts, achieved in 500 million steps. In this work we presented a new off-policy agent based on the Retrace actor-critic architecture and showed that it achieves similar performance to the current state-of-the-art while giving significant real-time performance gains. We demonstrated the benefits of each of the suggested algorithmic improvements, including distributional Retrace, the β-LOO policy gradient and the contextual priority tree. Proposition 1. The bias of Ĝ_{β-LOO} is Σ_a (1 − µ(a)β(a)) (Q(a) − Q^π(a)) ∇π(a). Proof. The bias of Ĝ_{β-LOO} is E_{â∼µ}[Ĝ_{β-LOO}] − G = Σ_a µ(a)β(a) (Q^π(a) − Q(a)) ∇π(a) + Σ_a Q(a) ∇π(a) − Σ_a Q^π(a) ∇π(a) = Σ_a (1 − µ(a)β(a)) (Q(a) − Q^π(a)) ∇π(a). As we believe that algorithms should be robust with respect to the choice of hyperparameters, we spent little effort on parameter optimization. In total, we explored three distinct values of the learning rate, two values of ADAM momentum (the default and zero) and two values of T_update, on a subset of 7 Atari games, without prioritization, using the non-distributional version of Reactor. We later used those values for all experiments. We did not optimize over batch sizes, sequence length, or any prioritization hyperparameters. Commonly used mean and median human normalized scores have several disadvantages. A mean human normalized score implicitly puts more weight on games that computers are good and humans are bad at. Comparing algorithms by mean human normalized score across 57 Atari games is almost equivalent to comparing them on a small subset of games that dominates the signal.
Typically, a set of the ten most score-generous games, namely Assault, Asterix, Breakout, Demon Attack, Double Dunk, Gopher, Phoenix, Stargunner, Up'n Down and Video Pinball, can explain more than half of the inter-algorithm variance. A median human normalized score has the opposite disadvantage: it effectively discards very easy and very hard games from the comparison. As typical median human normalized scores are within the range of 1-2.5, an algorithm which scores zero points on Montezuma's Revenge is evaluated equal to one which scores 2500 points, as both performance levels are still below human performance; incremental improvements on hard games are thus not reflected in the overall evaluation. In order to address both problems, we also evaluated mean rank and Elo metrics for inter-algorithm comparison. These metrics implicitly assign the same weight to each game and as a result are more sensitive to relative performance on very hard and very easy games: swapping the scores of two algorithms on any game would result in a change of both the mean rank and the Elo metric. We calculated separate mean rank and Elo scores for each algorithm using the results of test evaluations with 30 random no-op starts and 30 random human starts (TAB8). All algorithms were ranked on each game separately, and a mean rank was evaluated across the 57 Atari games. For the Elo score evaluation, algorithm A was considered to win over algorithm B if it obtained a higher score on a given Atari game. We produced an empirical win-probability matrix by summing wins across all games and used this matrix to evaluate Elo scores. A rating difference of 400 corresponds to winning odds of 10:1 under the Gaussian assumption. The contextual priority tree is one possible implementation of lazy prioritization (FIG6). All sequence keys are put into a balanced binary search tree which maintains temporal order. An AVL tree BID11 was chosen due to its ease of implementation and because it is on average more evenly balanced than a red-black tree. Each tree node has up to two children (left and right) and contains the currently stored key and a priority of the key, which is either set or unknown. Some nodes may have only a single child subtree, while some may have none. Figure 6: Example of a balanced priority tree. Sampling is done by going from the root node up the tree, selecting one of the children (or the current key) stochastically, proportionally to the orange proportions; sampling terminates once the current (square) key is chosen. Dark blue nodes contain keys with known priorities, light blue nodes have at least one child with at least a single known priority, while pink nodes do not have any priority estimates. Nodes 1, 2 and 3 will obtain priority estimates equal to 2/3 of the priority of key 5 and 1/3 of the priority of node 4; this implies that the estimated priorities of keys 1, 2 and 3 are implicitly defined by keys 4 and 6. Nodes 8, 9 and 11 are estimated to have the same priority as node 10. In addition to this information, we tracked other summary statistics at each node, which were re-evaluated after each tree rotation. The summary statistics were evaluated by consuming the previously evaluated summary statistics of both children and the priority of the key stored within the current node. In particular, we tracked the total number of nodes within each subtree and mean-priority estimates, updated according to the rules shown in FIG7.
The total number of nodes within each subtree was always known (c in FIG7), while the mean priority estimates per key (m in FIG7) could be either known or unknown. If the mean priority of either one child subtree or of the key stored within the current node is unknown, it can be estimated by exploiting information coming from the sibling subtree or from a priority stored within the parent node. Sampling was done by traversing the tree from the root node upwards, sampling either one of the children subtrees or the currently held key proportionally to the total estimated priority masses contained within; the rules used to evaluate these proportions are shown in orange in FIG7. Similarly, probabilities of arbitrary keys can be queried by traversing the tree from the root node towards the child node of interest while maintaining a product of probabilities at each branching point. Insertion, deletion, sampling and probability query operations can all be done in O(ln(n)) time. The suggested algorithm has the desired property that it becomes a simple proportional sampling algorithm once all the priorities are known. While some key priorities are unknown, they are estimated using nearby known key priorities (Figure 6). Each time a new sequence key is added to the tree, it is set to have an unknown priority. A priority is assigned only after the key is first sampled and the corresponding sequence has passed through the learner. When the priority of a key is set or updated, the key node is deliberately removed from and re-inserted into the tree so that it becomes a leaf node. This helps to set the priorities of nodes in the immediate vicinity more accurately, by using the freshest information available. The value ε = 0.01 is the minimum probability of choosing a random action and is hard-coded into the policy network. FIG8 shows the overall network topology, while TAB6 specifies the network layer sizes. In this section we compare Reactor with the recently published Rainbow agent BID6. While ACER is the most closely related algorithmically, Rainbow is most closely related in terms of performance, and thus a deeper understanding of the trade-offs between Rainbow and Reactor may benefit interested readers. There are many architectural and algorithmic differences between Rainbow and Reactor. We will therefore begin by highlighting where they agree. Both use a categorical action-value distribution critic BID2, factored into state and state-action logits BID13. Both use prioritized replay, and finally, both perform n-step Bellman updates. Despite these similarities, Reactor and Rainbow are fundamentally different algorithms and are based upon different lines of research. While Rainbow uses Q-Learning and is based upon DQN, Reactor is an actor-critic algorithm most closely based upon A3C. Each inherits some design choices from its predecessor, and we have not performed an extensive ablation comparing these various differences. Instead, we will discuss four of the differences we believe are important but less obvious. First, the network structures are substantially different. Rainbow uses noisy linear layers and ReLU activations throughout the network, whereas Reactor uses standard linear layers and concatenated ReLU activations throughout. To overcome partial observability, Rainbow, inheriting this choice from DQN, uses frame stacking. On the other hand, Reactor, inheriting its choice from A3C, uses LSTMs after the convolutional layers of the network.
It is also difficult to directly compare the number of parameters in each network, because the use of noisy linear layers doubles the number of parameters (although half of these are used to control noise), while the LSTM units in Reactor require more parameters than a corresponding linear layer would. Second, both algorithms perform n-step updates; however, the Rainbow n-step update does not use any form of off-policy correction. Because of this, Rainbow is restricted to using only small values of n (e.g., n = 3), because larger values would make sequences more off-policy and hurt performance. By comparison, Reactor uses our proposed distributional Retrace algorithm for off-policy correction of n-step updates. This allows the use of larger values of n (e.g., n = 33) without loss of performance. Third, while both agents use prioritized replay buffers, they each store different information and prioritize using different algorithms. Rainbow stores a tuple containing the state x_{t−1}, the action a_{t−1}, the sum of n discounted rewards Σ_{k=0}^{n−1} r_{t+k} Π_{m=0}^{k−1} γ_{t+m}, the product of n discount factors Π_{k=0}^{n−1} γ_{t+k}, and the next state n steps away, x_{t+n−1}. Tuples are prioritized based upon the last observed TD error and inserted into replay with maximum priority. Reactor stores length-n sequences of tuples (x_{t−1}, a_{t−1}, r_t, γ_t) and also prioritizes based upon the observed TD error. However, when a sequence is inserted into the buffer, its priority is instead inferred from the known priorities of neighboring sequences. This priority inference was made efficient using the previously introduced contextual priority tree, and anecdotally we have seen it improve performance over a simple maximum-priority approach. Finally, the two algorithms have different approaches to exploration. Rainbow, unlike DQN, does not use ε-greedy exploration, but instead replaces all linear layers with noisy linear layers which induce randomness throughout the network. This method, called Noisy Networks BID3, creates adaptive exploration integrated into the agent's network. Reactor does not use noisy networks, but instead uses the same entropy cost method used by A3C and many others, which penalizes deterministic policies, thus encouraging indifference between similarly valued actions. Because Rainbow can essentially learn not to explore, it may learn to become entirely greedy in the early parts of the episode, while still exploring in states not as frequently seen. In some sense, this is precisely what we want from an exploration technique, but it may also lead to highly deterministic trajectories in the early part of the episode and an increase in overfitting to those trajectories. We hypothesize that this may explain the significant difference in Rainbow's performance between evaluation under no-op and random human starts, and why Reactor does not show such a large difference. 6.6 ATARI. Table 4: Scores for each game evaluated with 30 random human starts. Reactor was evaluated by averaging scores over 200 episodes. All scores (except for Reactor) were taken from BID13 and BID6.
Hierarchical planning, in particular with Hierarchical Task Networks, was proposed as a method to describe plans by decomposition of tasks into sub-tasks until primitive tasks, actions, are obtained. Plan verification assumes a complete plan as input, and the objective is finding a task that decomposes to this plan. In plan recognition, a prefix of the plan is given and the objective is finding a task that decomposes to the (shortest) plan with the given prefix. This paper describes how to verify and recognize plans using a common method known from formal grammars: parsing. Hierarchical planning is a practically important approach to automated planning based on encoding abstract plans as hierarchical task networks (HTNs) BID3. The network describes how compound tasks are decomposed, via decomposition methods, into sub-tasks and eventually into actions forming a plan. The decomposition methods may specify additional constraints among the sub-tasks, such as partial ordering and causal links. There exist only two systems for verifying whether a given plan complies with the HTN model (i.e., whether the given sequence of actions can be obtained by decomposing some task). One system is based on transforming the verification problem to SAT BID2, and the other system uses parsing of attribute grammars BID1. Only the parsing-based system supports HTNs fully (the SAT-based system does not support the decomposition constraints). Parsing became popular in solving the plan recognition problem BID5, as researchers soon realized the similarity between hierarchical plans and formal grammars, specifically context-free grammars, whose parsing trees are close to the decomposition trees of HTNs. The plan recognition problem can be formulated as the problem of adding a sequence of actions after some observed partial plan such that the joint sequence of actions forms a complete plan generated from some task (more general formulations also exist). Hence plan recognition can be seen as a generalization of plan verification. There exist numerous approaches to plan recognition using parsing or string rewriting (Avrahami-Zilberbrand and Kaminka 2005; BID5; BID4), but they use hierarchical models that are weaker than HTNs. The languages defined by HTN planning problems (with partial order, preconditions and effects) lie somewhere between context-free (CF) and context-sensitive (CS) languages BID5, so to model HTNs one needs to go beyond CF grammars. Currently, the only grammar-based model covering HTNs fully uses attribute grammars BID0. Moreover, the expressivity of HTNs makes the plan recognition problem undecidable BID2. Currently, there exists only one approach for HTN plan recognition. This approach relies on translating the plan recognition problem to a planning problem BID5, a method invented in BID5. In this paper we focus on verification and recognition of HTN plans using parsing. The uniqueness of the proposed methods is that they cover HTNs fully, including task interleaving, partial order of sub-tasks, and other decomposition constraints (prevailing conditions, specifically). The methods are derived from the plan verification technique proposed in BID1. There are two novel contributions of this paper. First, we simplify the above-mentioned verification technique by exploiting information about actions and states to improve the practical efficiency of plan verification. Second, we extend that technique to solve the plan (task) recognition problem. For plan verification, only the method in BID1 supports HTNs fully.
We will show that the verification algorithm can be much simpler and, hence, it is expected to be more efficient. For plan recognition, the method proposed in BID5 can in principle support HTN fully, if a full HTN planner is used (which is not the case yet as prevailing conditions are not supported). However, like other plan recognition techniques it requires the top task (the goal) and the initial state to be specified as input. A practical difference of out methods is that they do not require information about possible top (root) tasks and an initial state as their input. This is particularly interesting for plan/task recognition, where existing methods require a set of candidate tasks (goals) to select from (in principle, they may use all tasks as candidates, but this makes them inefficient). In this paper we work with classical STRIPS planning that deals with sequences of actions transferring the world from a given initial state to a state satisfying certain goal condition. World states are modelled as sets of propositions that are true in those states and actions are changing validity of certain propositions. Formally, let P be a set of all propositions modelling properties of world states. Then a state S ⊆ P is a set of propositions that are true in that state (every other proposition is false). Later, we will use the notation S + = S to describe explicitly the valid propositions in the state S and S − = P \ S to describe explicitly the propositions not valid in the state S.Each action a is described by three sets of propositions DISPLAYFORM0 a describes positive preconditions of action a, that is, propositions that must be true right before the action a. Some modeling approaches allow also negative preconditions, but these preconditions can be compiled away. For simplicity reasons we assume positive preconditions only (the techniques presented in this paper can also be extended to cover negative preconditions directly). Action a is applicable to state S iff B + a ⊆ S. Sets A + a and A − a describe positive and negative effects of action a, that is, propositions that will become true and false in the state right after executing the action a. If an action a is applicable to state S then the state right after the action a is: DISPLAYFORM1 γ(S, a) is undefined if an action a is not applicable to state S. The classical planning problem, also called a STRIPS problem, consists of a set of actions A, a set of propositions S 0 called an initial state, and a set of goal propositions G + describing the propositions required to be true in the goal state (again, negative goal is not assumed as it can be compiled away). A solution to the planning problem is a sequence of actions a 1, a 2,..., a n such that S = γ(...γ(γ(S 0, a 1), a 2 ),..., a n ) and G + ⊆ S. This sequence of actions is called a plan. The plan verification problem is formulated as follows: given a sequence of actions a 1, a 2,..., a n, and goal propositions G +, is there an initial state S 0 such that the sequence of actions forms a valid plan leading from S 0 to a goal state? In some formulations, the initial state might also be given as an input to the verification problem. To simplify the planning process, several extensions of the basic STRIPS model were proposed to include some control knowledge. Hierarchical Task Networks BID3 were proposed as a planning domain modeling framework that includes control knowledge in the form of recipes how to solve specific tasks. 
To simplify the planning process, several extensions of the basic STRIPS model were proposed to include control knowledge. Hierarchical Task Networks BID3 were proposed as a planning domain modeling framework that includes control knowledge in the form of recipes for how to solve specific tasks. The recipe is represented as a task network, which is a set of sub-tasks to solve a given task together with a set of constraints between the sub-tasks. Let T be a compound task and ({T_1, ..., T_k}, C) a task network, where C are its constraints (see below). We can describe the decomposition method as a derivation (rewriting) rule: T → T_1, ..., T_k [C]. The planning problem in HTN is specified by an initial state (the set of propositions that hold at the beginning) and by an initial task representing the goal. The compound tasks need to be decomposed via decomposition methods until a set of primitive tasks, actions, is obtained. Moreover, these actions need to be linearly ordered to satisfy all the constraints obtained during the decompositions, and the obtained plan, a linear sequence of actions, must be applicable to the initial state in the same sense as in classical planning. We denote by a_i the i-th action in the plan. The state right after the action a_i is denoted S_i; S_0 is the initial state. We denote the set of actions to which a task T decomposes as act(T). If U is a set of tasks, we define act(U) = ∪_{T∈U} act(T). The index of the first action in the decomposition of T is denoted start(T), that is, start(T) = min{i | a_i ∈ act(T)}. Similarly, end(T) denotes the index of the last action in the decomposition of T, that is, end(T) = max{i | a_i ∈ act(T)}. We can now formally define the constraints C used in the decomposition methods. The constraints can be of the following three types (a code sketch of the corresponding checks is given below):
• t_1 ≺ t_2: a precedence constraint meaning that in every plan the last action obtained from task t_1 is before the first action obtained from task t_2, i.e., end(t_1) < start(t_2);
• before(U, p): a precondition constraint meaning that in every plan the proposition p holds in the state right before the first action obtained from tasks U, i.e., p ∈ S_{start(U)−1};
• between(U, V, p): a prevailing condition meaning that in every plan the proposition p holds in all the states between the last action obtained from tasks U and the first action obtained from tasks V, i.e., p ∈ S_i for every i with end(U) ≤ i < start(V).
The HTN plan verification problem is formulated as follows: given a sequence of actions a_1, a_2, ..., a_n, is there an initial state S_0 such that the sequence of actions is a valid plan applicable to S_0 and obtained from some compound task? Again, the initial state might also be given as an input in some formulations. The HTN plan recognition problem is formulated as follows: given a sequence of actions a_1, a_2, ..., a_n, are there an initial state S_0 and actions a_{n+1}, ..., a_{n+m} for some m ≥ 0 such that the sequence of actions a_1, a_2, ..., a_{n+m} is a valid plan applicable to S_0 and obtained from some compound task? In other words, the given actions form a prefix of some plan obtained from some compound task T. We will be looking for such a task T minimizing the value m (the number of actions added to complete the plan). If only the task T is of interest (not the actions a_{n+1}, ..., a_{n+m}), then we are talking about the task (goal) recognition problem. The existing parsing-based HTN verification algorithm (Barták, Maillard, and Cardoso 2018) uses a complex timeline structure that maintains the decomposition constraints so they can be checked when composing sub-tasks into a compound task. We propose a simplified verification method that does not require this complex structure, as it checks all the constraints directly in the input plan. This makes the algorithm easier to implement and presumably also faster.
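Given the computed states S_0, ..., S_n and the indices start(·) and end(·), the three decomposition constraints reduce to the index checks sketched below (our own rendering; tasks are represented only by their start/end indices, and states are sets of propositions as in the earlier sketch).

def check_precedence(t1_end, t2_start):
    # t1 < t2: the last action of t1 precedes the first action of t2
    return t1_end < t2_start

def check_before(states, u_start, p):
    # before(U, p): p holds right before the first action obtained from U
    return p in states[u_start - 1]

def check_between(states, u_end, v_start, p):
    # between(U, V, p): p holds in every state S_i with end(U) <= i < start(V)
    return all(p in states[i] for i in range(u_end, v_start))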
The novel hierarchical plan verification algorithm is shown in Algorithm 1. It first calculates all intermediate states (lines 2-8) by propagating information about propositions in action preconditions and effects. At this stage, we actually solve the classical plan validation problem, as the algorithm verifies that the given plan is causally consistent (each action precondition is provided by previous actions or by the initial state). The original verification algorithm did this calculation repeatedly, each time it composed a compound task. It is easy to show that every action is applicable, that is, B_{a_i}^+ ⊆ S_{i−1} (lines 2 and 4). Next, we will show that S_{i+1} \ A_{a_{i+1}}^+ ⊆ S_i. Right-to-left propagation (line 6) ensures that preconditions are propagated to earlier states if not provided by the action at a given position. In other words, if there is a proposition p ∈ S_{i+1} \ A_{a_{i+1}}^+, then this proposition should be in S_i; line 6 adds such propositions to S_i, so the claim holds. If p ∈ A_{a_{i+1}}^−, then p would be deleted by the action a_{i+1}, which means that the plan is not valid. The algorithm detects such situations (line 8). A condensed code sketch of this phase is given below. When the states are calculated, we apply a parsing algorithm to compose tasks. Parsing starts with the set of primitive tasks (line 9), each corresponding to an action from the input plan. For each task T, we keep a data structure describing the set act(T), that is, the set of actions to which the task decomposes. We use a Boolean vector I of the same size as the plan to describe this set: a_i ∈ act(T) ⇔ I(i) = 1. To simplify the checks of decomposition constraints, we also keep information about the indices of the first and last actions from act(T). Together, a task is represented by a quadruplet (T, s, e, I) in which T is a task, s is the index in the plan of the first action generated by T, e is the index in the plan of the last action generated by T (we say that [s, e] represents the interval of actions over which T spans), and I is a Boolean vector as described above. The algorithm applies each decomposition rule to compose a new task from already known sub-tasks (line 12). The composition consists of merging the sub-tasks, where we check that every action in the decomposition is obtained from a single sub-task (line 20), that is, act(T_0) = ∪_{j=1}^{k} act(T_j) and ∀i ≠ j: act(T_i) ∩ act(T_j) = ∅. We also check all the decomposition constraints; the code is a direct translation of the constraint definitions. (Algorithm 1 takes as data a plan P = (a_1, ..., a_n) and a set of decomposition methods, and returns true iff the plan can be derived from some compound task; A_i is the primitive task corresponding to action a_i, and I_i is a Boolean vector of size n such that I_i(j) = 1 ⇔ i = j. Each decomposition method R has the form of the derivation rule above.) If all tests pass, the new task is added to the set of tasks (line 25). Then we know that the task decomposes to actions which form a (not necessarily contiguous) sub-sequence of the plan to be verified. The process is repeated until a task that decomposes to all actions is obtained (line 27) or no new task can be composed (line 10). The algorithm is sound, as the returned task decomposes to all actions in the input plan. If the algorithm finishes with the value false, then no other task can be derived. As there is a finite number of possible tasks, the algorithm has to finish, so it is complete.
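The state-computation phase (lines 2-8 of Algorithm 1) can be sketched as a forward pass applying effects followed by the right-to-left propagation of unsupported preconditions, with the deletion check of line 8. This is our condensed rendering of the pseudo-code, reusing the Action representation from the earlier sketch.

def compute_states(plan):
    # plan: list of Action. Returns states S_0..S_n,
    # or None when the plan is causally inconsistent.
    n = len(plan)
    states = [set() for _ in range(n + 1)]
    # forward pass: seed each state with preconditions and effects (lines 2-4)
    for i, a in enumerate(plan):
        states[i] |= a.pre_pos
        states[i + 1] |= (states[i] - a.delete) | a.add
    # right-to-left pass: propagate unsupported preconditions (line 6)
    for i in range(n - 1, -1, -1):
        needed = states[i + 1] - plan[i].add
        if needed & plan[i].delete:     # p required after a_{i+1} but deleted by it
            return None                 # invalid plan (line 8)
        states[i] |= needed
    return states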
Any plan verification algorithm, for example the one from the previous section, can be extended to plan recognition by feeding the verification algorithm with actions a_1, ..., a_{n+k}, where we progressively increase k. The actions a_1, ..., a_n are given as an input, while the actions a_{n+1}, ..., a_{n+k} need to be generated (planned). However, this generate-and-verify approach would be inefficient for larger k, as it requires exploration of all valid sequences of actions with the prefix a_1, ..., a_n. Assume that there could be 5 actions at the position n+1 and 6 actions at the position n+2. Then the generate-and-verify approach needs to explore up to 30 plans (not every action at the position n+2 could follow every action at the position n+1), and for each plan the verification part starts from scratch, as the plans are different. This is where the verification algorithm from BID1 can be used, as it does not require exactly one action at each position. The algorithm stores actions (sub-tasks) independently, and only when it combines them to form a new task does it generate the states between the actions and check the constraints for them. This resembles the idea of the Graphplan algorithm (Blum and Furst 1997). There are also sets of candidate actions for each position in the plan, and the plan-extraction stage of the algorithm selects some of them to form a causally valid plan. We use compound tasks together with their decomposition constraints to select and combine the actions (we do not use parallel actions in the plan). The algorithm from (Barták, Maillard, and Cardoso 2018) extended to solve the plan recognition problem is shown in Algorithm 2. It starts with actions a_1, ..., a_n (line 2) and it finds all compound tasks that decompose to subsets of these actions (lines 4-30). This inner while-loop is taken from (Barták, Maillard, and Cardoso 2018); we only modified it syntactically to highlight the similarity with the verification algorithm from the previous section. If a task that decomposes to all current actions is found (line 30) then we are done. This is the goal task that we looked for, and its timeline describes the recognized plan. Otherwise, we add all primitive tasks corresponding to possible actions at position n+1 (line 33). Note that these are not parallel actions; the algorithm needs to select exactly one of them for the plan. Now the parsing algorithm continues, as it may compose new tasks that include one of the just-added primitive tasks. Notice that the algorithm uses all composed tasks from previous iterations in succeeding iterations, so it does not start from scratch when new actions are added. This process is repeated until the goal task is found; a sketch of this outer loop is given below. The algorithm is clearly sound, as the task found is the task that decomposes to the shortest plan with the given prefix. This follows from the soundness and completeness of the verification algorithm (in particular, no task that decomposes to a shorter plan exists). The algorithm is semi-complete: if there exists a plan with the length n+k and with the given prefix, the algorithm will eventually find it at the (k+1)-th iteration. If no plan with the given prefix exists then the algorithm will not stop. However, recall that the plan recognition problem is undecidable BID2, so any plan recognition approach suffers from this deficiency.
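The outer loop can be condensed into the following sketch, where compose_tasks (the inner parsing loop, lines 4-30) and candidate_actions (line 33) are hypothetical callables standing in for the corresponding parts of Algorithm 2:

    def recognize(prefix, compose_tasks, candidate_actions):
        # A task is (name, act) where act is the frozenset of plan positions
        # it decomposes to; primitive tasks cover a single position.
        tasks = {(a, frozenset([i])) for i, a in enumerate(prefix, start=1)}
        length = len(prefix)
        while True:
            tasks = compose_tasks(tasks)  # reuses tasks from earlier iterations
            all_positions = frozenset(range(1, length + 1))
            goal = next((t for t in tasks if t[1] == all_positions), None)
            if goal is not None:
                return goal  # decomposes to the shortest plan with the prefix
            length += 1
            # Add every action that can consistently be at the next position.
            tasks |= {(a, frozenset([length])) for a in candidate_actions(length)}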
[Algorithm 2: Plan recognition - Data: a plan P = (a_1, ..., a_n), primitive tasks A_i corresponding to the actions a_i, and a set of decomposition methods. Result: a task that decomposes to a plan with the prefix (a_1, ..., a_n). Line 33 adds, for each action a that can be at the next position l, a primitive task A for a; line 34 returns control to line 4.]

The algorithm maintains a timeline for each compound task to verify all the constraints. This is the major difference from the above verification algorithm, which points to the original plan. This timeline has been introduced in BID1, where all technical details can be found. We include a short description to make the paper self-contained. A timeline is an ordered sequence of slots, where each slot describes an action, its effects, and the state right before the action. For a task T, the actions in slots are exactly the actions from act(T). Both effects and states are modelled using two sets of propositions: Post+ and Post− modeling positive and negative effects of the action, and Pre+ and Pre− modeling propositions that must and must not be true in the state right before the action. Two sets are used because the state is specified only partially and propositions are added to it during propagation, so it is necessary to keep information about propositions that must not be true in the state. The timeline always spans from the first to the last action of the task. Due to interleaving of tasks (actions from one task might be located between the actions of another task in the plan), some slots of the task might be empty. These empty slots describe "space" for actions of other tasks. When we are merging sub-tasks (lines 12-22), we merge their timelines, slot by slot (a sketch of this merge is given below). This is how the actions from sub-tasks are put together in a compound task. Notice, specifically, that it is not allowed for two merged sub-tasks to have actions in the same slot (line 15). This ensures that each action is generated by exactly one task. Propositions from before and between constraints are "stored" in the corresponding slots (Algorithms 3 and 4; for example, function APPLYPRE takes a set of slots and a set of before constraints and returns an updated set of slots), and their consistency is checked each time the slots are modified (line 26 of Algorithm 2). Consistency means that no proposition is required to be both true and false in the same state. Information between subsequent slots is propagated similarly to the verification algorithm (Algorithm 5: Propagate). Positive and negative propositions are now propagated separately, taking into account empty slots: if there is no action in a slot then its effects are unknown and hence propositions cannot be propagated through it.

A unique property of the proposed techniques is handling task interleaving - actions generated from different tasks may interleave to form a plan. This is a property that parsing techniques based on CF grammars cannot handle. The example in FIG4 demonstrates how the timelines are filled by actions as the tasks are being derived/composed from the plan. Assume, first, that a complete plan consisting of actions a_1, a_2, ..., a_7 is given. The plan recognition algorithm can also handle such situations, when a complete plan is given, so it can serve for plan verification too (the verification variant of Algorithm 2 should stop with a failure at line 33, as no action can be added during plan verification). In the first iteration, the algorithm will compose tasks T_2, T_3, T_4, as these tasks decompose to actions directly.
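Before continuing with the example, the slot-by-slot merge described above (lines 12-22) can be sketched as follows; this is our illustration, assuming the two timelines have already been aligned to a common span, with each slot a dict of an optional action plus Pre/Post proposition sets:

    def merge_timelines(t1, t2):
        """t1, t2: equally long lists of slots; a slot has an 'action' (or None)
        and proposition sets 'pre+', 'pre-', 'post+', 'post-'."""
        merged = []
        for s1, s2 in zip(t1, t2):
            if s1["action"] is not None and s2["action"] is not None:
                return None  # both sub-tasks fill the same slot (line 15)
            slot = {"action": s1["action"] if s1["action"] is not None else s2["action"]}
            for key in ("pre+", "pre-", "post+", "post-"):
                slot[key] = s1[key] | s2[key]
            if slot["pre+"] & slot["pre-"]:
                return None  # a proposition required both true and false (line 26)
            merged.append(slot)
        return merged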
Notice how the timelines with empty slots are constructed. We know where the empty slots are located, as we know the exact location of actions in the plan. In the second iteration, only the task T_1 is composed from the already known tasks T_3 and T_4. Notice how the slots from these tasks are copied to the slots of a new timeline for T_1. By contrast, the slots in the original tasks remain untouched, as these tasks may merge with other tasks to form alternative decomposition trees (see the discussion below). Finally, in the third iteration, tasks T_1 and T_2 are merged to a new task T_0 and the algorithm stops there, as a complete timeline that spans the plan fully is obtained (the condition at line 30 of Algorithm 2 is satisfied).

Let us assume that there is a constraint between({a_1}, {a_3}, p) in the decomposition method for T_3. This constraint may model a causal link between a_1 and a_3. When composing the task T_3, the second slot of its timeline remains empty, but the proposition p is placed there (see Algorithm 4). This proposition is then copied to the timeline of task T_1 when merging the timelines (line 17 of Algorithm 2), and finally also to the timeline of task T_0. During each merge operation, the algorithm checks that p can still be in the slot, in particular, that p is not required to be false at the same slot (line 26 repeatedly checks the constraints from the decomposition methods). The new plan verification algorithm (Algorithm 1) handles the method constraints more efficiently, as it uses the complete plan with states to check them. Moreover, the propagation of states is run just once in Algorithm 1 (lines 2-8), while Algorithm 2 runs it repeatedly each time a task is composed from sub-tasks. Hence, each constraint is verified just once in Algorithm 1, when a new task is composed. In particular, the constraint between({a_1}, {a_3}, p) is verified with respect to the states when task T_3 is introduced. Otherwise, both Algorithm 1 and Algorithm 2 derive the tasks in the same order (if the decomposition methods are explored in the same order). Instead of timelines, Algorithm 1 uses the Boolean vector I to identify the actions belonging to each task; tasks T_3 and T_4 each have their own vector, and when composing task T_1 from T_3 and T_4, the vectors are merged in the loop at line 17 (see the sketch below). Notice that the vector always spans the whole plan, while the timelines start at the first action and finish with the last action of the task (and hence the same timeline can be used for different plan lengths).

Assume now that only the plan prefix consisting of a_1, a_2, ..., a_6 is given. The plan recognition algorithm (Algorithm 2) will first derive tasks T_3 and T_4 only. Specifically, task T_2 cannot be derived yet, as action a_7 is not in the plan. In the second iteration, the algorithm will derive task T_1 by merging tasks T_3 and T_4, exactly as we described above. As no more tasks can be derived, the inner loop finishes and the algorithm attempts to add actions that can follow the prefix a_1, a_2, ..., a_6 (line 33). Let action a_7 be added at the 7-th position in the plan; actually, all actions that can follow the prefix will be added as separate primitive tasks at position 7. Now the inner loop is restarted and task T_2 will be added in its first iteration. In the next iteration, task T_0 will be added, and this will be the final task as it satisfies the condition at line 30.
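The vector merge with the disjointness check can be sketched as follows; the example vectors are purely illustrative, as the concrete vectors for T_3 and T_4 depend on the decomposition methods:

    def merge_vectors(vectors):
        """Merge Boolean act-vectors over the whole plan; fail if two sub-tasks
        would generate the same action (the check at line 20 of Algorithm 1)."""
        merged = [0] * len(vectors[0])
        for v in vectors:
            for i, bit in enumerate(v):
                if bit and merged[i]:
                    return None  # the action at position i has two generating tasks
                merged[i] |= bit
        return merged

    # Illustrative only: two disjoint sub-tasks over a 7-action plan merge cleanly.
    print(merge_vectors([[1, 0, 1, 0, 0, 0, 0],
                         [0, 0, 0, 1, 1, 0, 0]]))  # -> [1, 0, 1, 1, 1, 0, 0]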
Assume, hypothetically, that the verification Algorithm 1 is used instead. When it is applied to the plan a_1, a_2, ..., a_6, the algorithm derives tasks T_1, T_3, T_4 and fails, as no task spans the whole plan and no more tasks can be derived. After adding action a_7, the algorithm has to start from scratch, as the states might be different due to propagating some propositions from the precondition of a_7. Hence, the algorithm needs to derive the tasks T_1, T_3, T_4 again; it will also add tasks T_0, T_2 and then finish with success. It may happen that action a_5 can also be consistently placed at position 7. Then we can derive second versions of tasks T_3 and T_1 (using a_5 at position 7); let us denote them T_3' and T_1'. Algorithm 1 will stop there, as no more tasks can be derived. Notice that tasks T_1, T_3, T_4 were derived repeatedly. If we try a_5 earlier than a_7 at position 7, then tasks T_1, T_3, T_4 will actually be generated three times before the algorithm finds a complete plan. By contrast, Algorithm 2 will add actions a_5 and a_7 together as two possible primitive tasks at position 7. It will reuse tasks T_1, T_3, T_4 from the previous iteration; it will add tasks T_1', T_3' as they can be composed from the primitive tasks (using the last a_5); it will also add tasks T_0, T_2 (using the last a_7); and it will finish with success. Notice that T_1' cannot be merged with T_2 to get a new T_0, as T_1' has action a_5 at the 7-th slot while T_2 has a_7 there, so the timelines cannot be merged (line 15 of Algorithm 2).

In the paper, we proposed two versions of the parsing technique, for verification of HTN plans and for recognition of HTN plans. As far as we know, these are the only approaches that currently cover HTN fully, including all decomposition constraints. Both versions can be applied to solve both the verification and the recognition problems, but as we demonstrated using an example, each of them has some deficiencies when applied to the other problem. The next obvious step is implementation and empirical evaluation of both techniques. There is no doubt that the novel verification algorithm is faster than the previous approaches BID2 and BID1; the open question is how much faster it will be, in particular for large plans. The efficiency of the novel plan recognition technique in comparison to the existing compilation technique BID5 is less clear, as the two techniques use different approaches, bottom-up vs. top-down. The disadvantage of the compilation technique is that it needs to re-generate the known plan prefix, but it can exploit heuristics to remove some overhead there. By contrast, the parsing technique looks more like generate-and-test, but controlled by the hierarchical structure. It also guarantees finding the shortest extension of the plan prefix.
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HJgRIrHWt4
The paper describes methods to verify and recognize HTN plans by parsing of attribute grammars.
Neural architecture search (NAS), the task of finding neural architectures automatically, has recently emerged as a promising approach for unveiling better models than human-designed ones. However, most success stories are for vision tasks, and results for text have been quite limited, except for a small language modeling setup. In this paper, we explore NAS for text sequences at scale, by first focusing on the task of language translation and later extending to reading comprehension. Starting from standard sequence-to-sequence models for translation, we conduct extensive searches over the recurrent cells and attention similarity functions across two translation tasks, IWSLT English-Vietnamese and WMT German-English. We report challenges in performing cell searches as well as demonstrate initial success on attention searches with translation improvements over strong baselines. In addition, we show that the results of attention searches are transferable to reading comprehension on the SQuAD dataset.

There has been vast literature on finding neural architectures automatically, dating back to the 1980s with genetic algorithms BID18, to recent approaches that use random weights BID17, Bayesian optimization BID23, reinforcement learning BID1 BID28, evolution BID16, and hyper networks BID3. Among these, the approach of neural architecture search (NAS) using reinforcement learning has, barring computational cost, been most promising, yielding state-of-the-art performances on several popular vision benchmarks such as CIFAR-10 and ImageNet. Building on NAS, others have found better optimizers BID2 and activation functions BID15 than human-designed ones. Despite these success stories, most of the work mainly focuses on vision tasks, with little attention to language ones, except for a small language modeling task on the Penn Tree Bank dataset (PTB). This work aims to bridge that gap by exploring neural architecture search for language tasks. We start by applying the original NAS approach to neural machine translation (NMT) with sequence-to-sequence BID25 as an underlying model. Our goal is to find new recurrent cells that can work better than Long Short-term Memory (LSTM) BID6. We then introduce a novel "stack" search space as an alternative to the fixed-structure tree search space defined in prior work. We use this new search space to find similarity functions for the attention mechanism in NMT BID0 BID9. Through our extensive searches across two translation benchmarks, the small IWSLT English-Vietnamese and the large WMT German-English, we report challenges in performing cell searches for NMT and demonstrate initial success on attention searches with translation improvements over strong baselines. Lastly, we show that the attention similarity functions found for NMT are transferable to the reading comprehension task on the Stanford Question Answering Dataset (SQuAD) BID14, yielding non-trivial improvements over the standard dot-product function. Directly running NAS attention search on SQuAD boosts the performance even further.

Figure 1: Tree search space for recurrent cells - shown is an illustration of a tree search space specifically designed for searching over LSTM-inspired cells; the figure was obtained with permission from the original NAS work. Left: the tree that defines the computation steps to be predicted by the controller. Center: an example set of predictions made by the controller for each computation step in the tree. Right: the computation graph of the recurrent cell constructed from the example predictions of the controller.
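The construction shown in Figure 1 can be illustrated with a short sketch (our hypothetical Python/NumPy code, not the actual controller or its TensorFlow implementation): each internal tree node combines its two children with a predicted binary op followed by a predicted nonlinearity, with the per-leaf linear transforms and the cell-injection step omitted for brevity.

    import numpy as np

    COMBINE = {"add": np.add, "mul": np.multiply}
    ACTIVATE = {
        "identity": lambda v: v,
        "tanh": np.tanh,
        "sigmoid": lambda v: 1.0 / (1.0 + np.exp(-v)),
        "relu": lambda v: np.maximum(v, 0.0),
    }

    def eval_tree(leaves, predictions):
        """Evaluate a balanced binary tree bottom-up. `leaves` holds the
        (already linearly transformed) copies of h_{t-1} and x_t;
        `predictions` holds one (combine_op, activation) per internal node."""
        level, k = list(leaves), 0
        while len(level) > 1:
            nxt = []
            for i in range(0, len(level), 2):
                op, act = predictions[k]
                k += 1
                nxt.append(ACTIVATE[act](COMBINE[op](level[i], level[i + 1])))
            level = nxt
        return level[0]  # the candidate new hidden state h_t

    # A depth-2 tree with 4 leaves needs 3 predictions (2 lower nodes + root).
    h_t = eval_tree([np.ones(8)] * 4,
                    [("add", "tanh"), ("mul", "sigmoid"), ("add", "relu")])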
In neural architecture search (NAS), a controller iteratively samples a model architecture which is then run against a task of interest to obtain a reward signal. The reward signal is then used to update the controller so as to produce better and better architectures over time. We follow the original NAS setup, which was developed for a small language modeling task on PTB, and adapt it to other language tasks. We first focus on translation and conduct searches on both small- and large-scale translation tasks (IWSLT and WMT respectively, with details in Section 3). In terms of reward functions, we have a choice of using either perplexity or BLEU scores, which have been known to be well-correlated in neural machine translation BID9. Formally, we scale the reward scores to be within [0, 1]. We now describe the search spaces over recurrent cells and attention similarity functions.

This search space is identical to the one designed in the original NAS work to search for LSTM-inspired cells. Each instance of the search space takes as inputs (state h_{t−1}, cell c_{t−1}, input x_t) and produces as outputs (state h_t, cell c_t). The idea is to define computation through a balanced binary tree, which we start off by connecting each leaf node with the inputs (h_{t−1}, x_t) and producing the output h_t at the top node. The left part of Figure 1 illustrates a binary computation tree of depth 2. The RNN controller will then decide, for each node in the tree, what combining operation, e.g., add or mul, to use and what nonlinearity, e.g., tanh or sigmoid, should immediately follow. The controller will also decide how to incorporate ("cell inject") the previous cell c_{t−1}, and which nodes ("cell indices") in the tree to use for that injection as well as for the output of the new cell. The RNN controller is illustrated in the center part of Figure 1, and the realization of an LSTM-inspired instance is shown on the right. The parameters of an instance are defined by linear transformations applied to each of the inputs (h_{t−1}, x_t) before they are passed to the leaf nodes. LSTM-inspired Cells for NMT In this work, we are interested in how this search space, which works well for the language modeling task, performs on translation. Our set of combining operations consists of element-wise addition and multiplication, whereas the set of non-linear functions is (identity, tanh, sigmoid, relu). We use a binary tree of depth 4 with 8 leaf nodes.

We propose a simple stack-based programming language as an alternative to the fixed structure of the previous tree search space. This is reminiscent of the Push language historically used in genetic programming BID24. A program consists of a sequence of function calls, with each function popping N arguments from the top of the stack and pushing M back onto it. As illustrated in Figure 2, we consider in this work only unary or binary ops, i.e., N = 1 or 2, with M = 1. If there are not enough arguments in the stack, for example for a binary operation when there is a single item in the stack, the operation is ignored. This ensures every program is valid without additional constraints on the controller. The program is also given the ability to copy outputs produced within the last S steps. This capability is achieved through ops (copy_0, ..., copy_{S−1}), which the controller can predict together with the other unary and binary ops. The indices in Figure 2 indicate the order in which outputs were generated so that copy ops can be applied.
At the start of execution, input arguments of the same shape and data type are pushed onto the stack. At the end of execution, we either sum all remaining values or take the top value of the stack as the output.

Figure 2: General stack search space - shown is an illustration of how the stack changes over a sequence of operations. In this example, the controller predicts (copy_0, linear, sigmoid, add) as the sequence of operations to be applied. The stack has a vector x as an initial input and produces x + σ(W x) as an output.

Attention Search for NMT As the attention mechanism is key to the success of many sequence-to-sequence models, including neural machine translation BID0, it is worthwhile to improve it. To recap, we show in Figure 3 an instance of the attention mechanism proposed by BID8 for NMT. At each time step, the current target hidden state h_t (a "query") is compared with each of the source hidden states h̄_s (memory "keys") to compute attention weights, which indicate which memory slots are most useful for the translation at that time step. The attention weight for each source position s is often computed as exp(score(h_t, h̄_s)) / Σ_{s'=1..S} exp(score(h_t, h̄_{s'})), with S being the source length. In BID8, various forms of the scoring function have been proposed, e.g., the simple dot product h̄_s⊤ h_t or the bilinear form h̄_s⊤ W h_t.

Figure 3: Attention Mechanism - example of an attention-based NMT system as described in BID8. We highlight in detail the first step of the attention computation.

Instead of hand-designing these functions, we propose to search through the set of scoring functions score(q, k) using our stack-based search space. The stack starts out with a key vector k followed by a query vector q on top. The RNN controller predicts a list of L ops, followed by a special reduce op that is used to turn vectors into scalars by summing over the final dimension, as illustrated in Figure 4. L is the program length, which controls the search space complexity and is set to 8 in our experiments. The set of binary and unary ops includes (linear, mul, add, sigmoid, tanh, relu, identity), with mul being element-wise multiplication. The identity op allows the controller to shorten the program as needed.

Figure 4: Stack Search Space for Attention - shown is the bilinear scoring function BID8 as an instance of the stack search space for attention. The controller predicts (linear, mul) as ops. All scoring functions end with a reduce op that turns vectors into scalars.

Attention mechanisms are a core part of modern question answering systems. In the extractive problem setting, given a query sentence and a context paragraph, the task is to output start and end positions in the text, called a span, which contains the answer. On the most common dataset of this type, the Stanford Question Answering Dataset (SQuAD) BID13, the top performing models all perform some variant of attention between the encoded query sentence and the encoded context paragraph. Many of the best models are variants of a model known as Bidirectional Attention Flow (BiDAF) BID21. BiDAF encodes the query and context separately, then performs bidirectional attention from the query vectors over the context vectors and from the context vectors over the query vectors. The similarity is generally computed as the dot product between the two vectors. To test the generalization of our attention functions and search, we apply the best attention mechanisms from NMT in place of the dot product; a sketch of how such scoring programs execute is given below.
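As a concrete illustration of the search space in Figures 2 and 4, the following hypothetical sketch executes a scoring program on a stack initialized with a key and a query; a single matrix W stands in for the learned linear transforms, and `history` approximates the copy mechanism described above.

    import numpy as np

    def run_program(ops, query, key, W):
        stack, history = [key, query], []
        unary = {
            "linear": lambda v: W @ v,
            "sigmoid": lambda v: 1.0 / (1.0 + np.exp(-v)),
            "tanh": np.tanh,
            "relu": lambda v: np.maximum(v, 0.0),
            "identity": lambda v: v,
        }
        binary = {"add": np.add, "mul": np.multiply}
        for op in ops:
            if op.startswith("copy"):
                s = int(op[len("copy"):])
                if s < len(history):
                    stack.append(history[-1 - s])
                continue
            if op in binary:
                if len(stack) < 2:
                    continue  # not enough arguments: the op is ignored
                out = binary[op](stack.pop(), stack.pop())
            else:
                if not stack:
                    continue  # not enough arguments: the op is ignored
                out = unary[op](stack.pop())
            stack.append(out)
            history.append(out)
        return float(np.sum(stack.pop()))  # the final reduce op

    d = 4
    W = np.eye(d)
    q, k = np.random.randn(d), np.random.randn(d)
    # The bilinear score of Figure 4 corresponds to the program (linear, mul).
    print(run_program(["linear", "mul"], q, k, W))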
We also repeat the search procedure directly on the SQuAD dataset. The search is similar to the one for NMT, with the addition of a unary separable 1D convolution operator.

We consider two translation setups, a small and a large-scale one, to test different effects of Neural Architecture Search (NAS). Remember that in NAS there are often two phases involved: (a) architecture search, in which we train many child models to find the best architectures according to the reward signals, and (b) running to convergence, where we take the top architectures found and train full models until convergence. To make things clear, for each of the translation setups below, we describe the data used as well as the training hyperparameters for both the child and the full models. Our evaluation metrics include both perplexity and BLEU BID11 scores, whereas the NAS reward signals can be either perplexity or BLEU, which we will detail later.

Data We utilize a small parallel corpus of English-Vietnamese TED talks (133K sentence pairs), provided by the IWSLT Evaluation Campaign BID4. Following BID7, we tokenize with the default Moses tokenizer and replace words whose frequencies are less than 5 by <unk>. The final data has vocabulary sizes of 17K for English and 7.7K for Vietnamese. We use TED tst2012 (1553 sentences) as a validation set for hyperparameter tuning and TED tst2013 (1268 sentences) as a test set. Full-model hyperparameters Each full model is an attention-based sequence-to-sequence model with 2 LSTM layers of 512 units each; the encoder is bidirectional and the embedding dimension is also 512. We train each model for 12K steps (roughly 12 epochs) with a dropout rate of 0.2 (i.e., keep probability 0.8) and batch size 128. Our optimizer is SGD with learning rate 1; after 8K steps, we start halving the learning rate every 1K steps. We use 5 for gradient norm clipping and uniformly initialize the parameters within [−0.1, 0.1]. The exact implementation and setup were obtained from the NMT tutorial BID10. Child-model hyperparameters Since this dataset is small, the hyperparameters of the child models for NAS are identical to those of the full models.

Data We consider the WMT German-English translation task with 4.5M training sentence pairs. The data is split into subword units using the BPE scheme BID20 with 32K operations. We use newstest2013 (3000 sentences) as a development set and report translation performances on both newstest2014 (2737 sentences) and newstest2015 (2169 sentences). Full-model hyperparameters We train strong translation models based on the architecture of Google's Neural Machine Translation systems BID27, with an implementation in BID10. The model consists of 4 LSTM layers of 1024 units (the encoder starts with a bidirectional LSTM followed by three unidirectional layers); the embedding dimension is 1024. The hyperparameters are similar to those of the English-Vietnamese setup: init range [−0.1, 0.1], dropout 0.2, gradient norm 5.0. We train with SGD for 340K steps (10 epochs). The learning rate is 1.0; after 170K steps, we start halving the learning rate every 17K steps. Child-model hyperparameters Since we cannot afford to run NAS on full models, our child model is a scaled-down version of the full one with 2 layers and 256 units, trained for 10K steps.

Following the original NAS work, we train the controller RNN using Proximal Policy Optimization BID19 with a learning rate of 0.0005. To encourage exploration, an entropy penalty of 0.0001 was used.
We use an exponential moving average of previous rewards with weight 0.99 as a baseline function. The controller weights are uniformly initialized within [−0.1, 0.1]. We use minibatches of 20 architectures to update the controller RNN weights. During search, we employ a global workqueue system similar to the original NAS setup to process the pool of child networks proposed by the RNN controller. In our experiments, the pool of workers consists of 100-200 GPUs. We stop the searches once the dev performance saturates and the set of unique child models remains unchanged. Once the search is over, the top 10 architectures are chosen to train until convergence.

We present in Table 1 the results of neural architecture search for translation. We compare against strong baselines provided by BID10, which replicate Google's NMT architectures BID27. As we can see in the first three rows, the strong baseline trained with SGD and LSTM as the basic unit outperforms NASCell, the TensorFlow public implementation of the best recurrent cell found on the PTB language modeling task. Architecture searches performed directly on translation tasks yield better performances compared to NASCell. We can find cells that outperform the baseline on the IWSLT benchmark with 26.2 BLEU. Beating the strong baseline on the larger WMT task remains a challenge; with cell searches performed directly on WMT, we can narrow the gap with the baseline, achieving 28.4 BLEU. Finally, by performing attention searches on WMT, we were able to outperform the WMT baseline with 29.1 BLEU. The attention function found is also transferable to the small IWSLT benchmark, yielding a high score of 26.0 BLEU.

                                        IWSLT (small)   WMT (large)
                                        tst2013         newstest2015
    Google NMT baseline [adam] BID10    21.8            26.8
    Google NMT baseline [sgd] BID10     25.5            28.8
    NASCell on LM [sgd]                 25.1            27.7
    NASCell on IWSLT [ppl, sgd]

Table 1: Neural architecture searches for translation - shown are translation performances in BLEU for various neural architecture searches (NAS) at the cell and attention levels. Searches are performed on either the small IWSLT or the large WMT translation setup, with reward functions being either ppl (perplexity) or BLEU. For each NAS search, we report results on both translation setups. For NASCell, we use the TensorFlow public implementation and run it on our translation setups. We highlight in bold the numbers that are best in each group.

In this section, we continue the discussion of Table 1, examining the effects of the optimizers and reward functions used in architecture search. We also show the top attention functions found by NAS and their effects. Lastly, we examine the transferability of these attention functions and searches to the task of reading comprehension. We found that the optimizers used for NMT models matter greatly in architecture search. From the training plots in FIG2, it appears that Adam greatly outperforms SGD, achieving much higher BLEU scores on both the dev and test sets after a fixed training duration of 10K steps per child model. However, as observed in rows 5 and 6 of Table 1, recurrent cells found by Adam are unstable, yielding much worse performance compared to those found by SGD. We tried using the Glorot initialization scheme BID5 but could not alleviate the problem of large gradients when using Adam-found cells. We suspect further hyperparameter tuning for final-model training might help. We also carried out a small experiment comparing the reward functions described earlier for the attention search.
From Figure 6, the reward function based on BLEU trivially leads to higher dev and test BLEU scores during the search. The attention functions found do transfer to higher BLEU scores, as shown in row 8 of Table 1. What surprised us was the fact that the attention mechanisms found with the perplexity-based reward function perform poorly. The top-performing attention similarity functions found are: reduce(sigmoid(relu(tanh(W(k ⊙ q))))) and reduce(sigmoid(relu(W(k ⊙ q)))). At first, the equations look puzzling, with multiple nonlinearities stacked together, which we attribute to noise in the design of the search space that lets the controller favor nonlinear functions. However, a closer look does reveal an interesting pattern: keys and queries are encouraged to interact element-wise, followed by a linear transformation and nonlinearities, before the final reduce-sum op. On the other hand, several bad attention functions follow the pattern of applying a nonlinearity immediately after the element-wise multiplication, e.g., reduce(sigmoid(W(tanh(tanh(k ⊙ q))))). Nevertheless, we think the search space could have been improved by predicting when a program can end and when to perform reduce operations.

Figure 6: Effects of reward functions for attention searches - shown are plots similar to those in FIG2 for two attention searches that differ in the reward function used: one based on BLEU while the other is based on perplexity (ppl). For brevity, we only show plots for dev BLEU and total entropy.

    SQuAD Systems                           F1
    BiDAF BID22                             77.3
    Our baseline (dot-product)              80.1
    NASAttention on NMT                     80.5
    NASAttention on SQuAD                   81.1

Table 2: SQuAD results in F1.

For the reading comprehension setting, we evaluate on the Stanford Question Answering Dataset as discussed in Section 2.3. We report results in F1, which measures the portion of overlapping tokens between the predicted answer and the ground truth. Model Our model details are as follows: we embed both the query and the context with pretrained GloVe BID12 embeddings. Each is then encoded independently using the embedding encoder, sharing weights between the two. We then combine them with context-to-query attention (leaving out the query-to-context attention from BiDAF), with the output at each position in the context being the result of attending over the query with that position's encoding. When doing attention search, we search over the form of the equation that computes this similarity. Finally, we take the output of this attention mechanism and run it through a stack of three model encoders, giving three outputs x_1, x_2, and x_3, each of which is the length of the context sentence. The probability of the span starting at each position is computed as W_0[x_1, x_2], and the probability of it being the end position as W_1[x_1, x_3]. We score each span as the product of its start position probability and end position probability, returning the span with the highest score. Results Table 2 demonstrates that the NASAttention found with NMT does provide an improvement over the baseline with dot-product attention, yielding an F1 score of 80.5. When performing attention search directly on SQuAD, the performance is further boosted to 81.1 F1. We find that the best performing attention function for the context-to-query attention is simple yet novel: f(key, query) = relu(conv(key ⊙ query)), where conv is a 1D separable convolution with a kernel size of 3. Neighboring positions in the convolution correspond to neighboring keys (in this case in the encoded question) being queried with the same query vector.
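A sketch of this similarity function is given below; the shapes, the random parameter tensors, and the depthwise-plus-pointwise realization of the separable convolution are our illustrative assumptions rather than the exact experimental configuration.

    import torch
    import torch.nn.functional as F

    def separable_conv_score(keys, query, depthwise, pointwise):
        """keys: (length, d) encoded context; query: (d,) one query vector.
        Returns one similarity score per context position."""
        mixed = (keys * query).t().unsqueeze(0)   # element-wise mix -> (1, d, length)
        x = F.conv1d(mixed, depthwise, padding=1, groups=mixed.shape[1])  # kernel 3
        x = F.conv1d(x, pointwise)                # 1x1 pointwise channel mixing
        return F.relu(x).sum(dim=1).squeeze(0)    # relu, then reduce over channels

    d, length = 8, 5
    keys, query = torch.randn(length, d), torch.randn(d)
    depthwise = torch.randn(d, 1, 3)   # one kernel-size-3 filter per channel
    pointwise = torch.randn(d, d, 1)   # 1x1 conv mixing the d channels
    scores = separable_conv_score(keys, query, depthwise, pointwise)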
In this paper, we have made a contribution towards extending the success of neural architecture search (NAS) from vision to another domain: language. Specifically, we are the first to apply NAS to the tasks of machine translation and reading comprehension at scale. Our newly-found recurrent cells perform better on translation than the previously-discovered NASCell. Furthermore, we propose a novel stack-based search space as a more flexible alternative to the fixed-structure tree search space used for recurrent cell search. With this search space, we find new attention functions that outperform strong translation baselines. In addition, we demonstrate that the attention search results are transferable to the SQuAD reading comprehension task, yielding nontrivial improvements over dot-product attention. Directly running NAS attention search on SQuAD boosts the performance even further. We hope that our extensive experiments will pave the way for future research in NAS for languages.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
r1Zi2Mb0-
We explore neural architecture search for language tasks. Recurrent cell search is challenging for NMT, but attention mechanism search works. The result of attention search on translation is transferable to reading comprehension.
Autoencoders provide a powerful framework for learning compressed representations by encoding all of the information needed to reconstruct a data point in a latent code. In some cases, autoencoders can "interpolate": by decoding the convex combination of the latent codes for two datapoints, the autoencoder can produce an output which semantically mixes characteristics from the datapoints. In this paper, we propose a regularization procedure which encourages interpolated outputs to appear more realistic by fooling a critic network which has been trained to recover the mixing coefficient from interpolated data. We then develop a simple benchmark task where we can quantitatively measure the extent to which various autoencoders can interpolate, and show that our regularizer dramatically improves interpolation in this setting. We also demonstrate empirically that our regularizer produces latent codes which are more effective on downstream tasks, suggesting a possible link between interpolation abilities and learning useful representations.

One goal of unsupervised learning is to uncover the underlying structure of a dataset without using explicit labels. A common architecture used for this purpose is the autoencoder, which learns to map datapoints to a latent code from which the data can be recovered with minimal information loss. Typically, the latent code is lower dimensional than the data, which indicates that autoencoders can perform some form of dimensionality reduction. For certain architectures, the latent codes have been shown to disentangle important factors of variation in the dataset, which makes such models useful for representation learning BID7 BID15. In the past, they were also used for pre-training other networks by being trained on unlabeled data and then being stacked to initialize a deep network BID1 BID44. More recently, it was shown that imposing a prior on the latent space allows autoencoders to be used for probabilistic or generative modeling BID18 BID34 BID27. In some cases, autoencoders have shown the ability to interpolate. Specifically, by mixing codes in latent space and decoding the result, the autoencoder can produce a semantically meaningful combination of the corresponding datapoints. Interpolation has been frequently reported as a qualitative experimental result in studies about autoencoders BID5 BID35 BID30 BID29 BID14 and latent-variable generative models in general BID10 BID33 BID41. The ability to interpolate can be useful in its own right, e.g. for creative applications. However, it also indicates that the autoencoder can "extrapolate" beyond the training data and has learned a latent space with a particular structure. Specifically, if interpolating between two points in latent space produces a smooth semantic warping in data space, this suggests that nearby points in latent space are semantically similar. A visualization of this idea is shown in FIG0, where a smooth interpolation between a "2" and a "9" suggests that the 2 is surrounded by semantically similar points, i.e. other 2s. This property may suggest that an autoencoder which interpolates well could also provide a good learned representation for downstream tasks because similar points are clustered.

FIG1: A critic network is fed interpolants and reconstructions and tries to predict the interpolation coefficient α corresponding to its input (with α = 0 for reconstructions). The autoencoder is trained to fool the critic into outputting α = 0 for interpolants.
If the interpolation is not smooth, there may be "discontinuities" in latent space which could result in the representation being less useful as a learned feature. This connection between interpolation and a "flat" data manifold has been explored in the context of unsupervised representation learning BID3 and regularization BID43. Given the widespread use of interpolation as a qualitative measure of autoencoder performance, we believe additional investigation into the connection between interpolation and representation learning is warranted. Our goal in this paper is threefold: First, we introduce a regularization strategy with the specific goal of encouraging improved interpolations in autoencoders (section 2); second, we develop a synthetic benchmark where the slippery concept of a "semantically meaningful interpolation" is quantitatively measurable (section 3.1) and evaluate common autoencoders on this task (section 3.2); and third, we confirm the intuition that good interpolation can result in a useful representation by showing that the improved interpolation ability produced by our regularizer elicits improved representation learning performance on downstream tasks (section 4). We also make our codebase available, which provides a unified implementation of many common autoencoders including our proposed regularizer.

Autoencoders, also called auto-associators BID4, consist of the following structure: First, an input x ∈ R^{d_x} is passed through an "encoder" z = f_θ(x), parametrized by θ, to obtain a latent code z ∈ R^{d_z}. The latent code is then passed through a "decoder" x̂ = g_φ(z), parametrized by φ, to produce an approximate reconstruction x̂ ∈ R^{d_x} of the input x. We consider the case where f_θ and g_φ are implemented as multi-layer neural networks. The encoder and decoder are trained simultaneously (i.e. with respect to θ and φ) to minimize some notion of distance between the input x and the output x̂, for example the squared L2 distance ‖x − x̂‖². Interpolating using an autoencoder describes the process of using the decoder g_φ to decode a mixture of two latent codes. Typically, the latent codes are combined via a convex combination, so that interpolation amounts to computing x̂_α = g_φ(αz_1 + (1 − α)z_2) for some α ∈ [0, 1], where z_1 = f_θ(x_1) and z_2 = f_θ(x_2) are the latent codes corresponding to data points x_1 and x_2 (a sketch is given below). We also experimented with spherical interpolation, which has been used in settings where the latent codes are expected to have spherical structure BID17 BID45 BID35, but found it made no discernible difference in practice for any autoencoder we studied. Ideally, adjusting α from 0 to 1 will produce a sequence of realistic datapoints where each subsequent x̂_α is progressively less semantically similar to x_1 and more semantically similar to x_2.
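The following minimal sketch makes this procedure explicit; encoder and decoder are placeholders for f_θ and g_φ:

    import numpy as np

    def interpolate(encoder, decoder, x1, x2, alphas=np.linspace(0.0, 1.0, 7)):
        """Decode convex combinations of the latent codes of x1 and x2."""
        z1, z2 = encoder(x1), encoder(x2)
        return [decoder(a * z1 + (1.0 - a) * z2) for a in alphas]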
We find empirically that this constraint results in realistic and smooth interpolations in practice (section 3.1), in addition to providing improved performance on downstream tasks (section 4). To enforce this constraint we introduce a critic network BID12 which is fed interpolations of existing datapoints (i.e. x̂_α as defined above). Its goal is to predict α from x̂_α, i.e. to predict the mixing coefficient used to generate its input. When training the model, for each pair of training data points we randomly sample a value of α to produce x̂_α. In order to resolve the ambiguity between predicting α and 1 − α, we constrain α to the range [0, 0.5] when feeding x̂_α to the critic. In contrast, the autoencoder is trained to fool the critic into thinking that α is always zero. This is achieved by adding an additional term to the autoencoder's loss to optimize its parameters to fool the critic. In a loose sense, the critic can be seen as approximating an "adversarial divergence" BID23 BID0 between reconstructions and interpolants which the autoencoder tries to minimize. Formally, let d_ω(x) be the critic network, which for a given input produces a scalar value. The critic is trained to minimize

L_d = ‖d_ω(x̂_α) − α‖² + ‖d_ω(γx + (1 − γ)g_φ(f_θ(x)))‖²

where, as above, x̂_α = g_φ(αf_θ(x_1) + (1 − α)f_θ(x_2)), x is some datapoint (not necessarily x_1 or x_2), and γ is a scalar hyperparameter. The first term trains the critic to recover α from x̂_α. The second term serves as a regularizer with two functions: first, it enforces that the critic consistently outputs 0 for non-interpolated inputs; and second, by interpolating between x and x̂ (the autoencoder's reconstruction of x) in data space it ensures the critic is exposed to realistic data even when the autoencoder's reconstructions are poor. We found the second term was not crucial for our approach, but it helped stabilize the convergence of the autoencoder and allowed us to use consistent hyperparameters and architectures across all datasets and experiments. The autoencoder's loss function is modified by adding a regularization term:

L_{f,g} = ‖x − x̂‖² + λ‖d_ω(x̂_α)‖²

where λ is a scalar hyperparameter which controls the weight of the regularization term. Note that the regularization term is effectively trying to make the critic output 0 regardless of the value of α, thereby "fooling" the critic into thinking that an interpolated input is non-interpolated (i.e., having α = 0). The parameters θ and φ are optimized with respect to L_{f,g} (which gives the autoencoder access to the critic's gradients), and ω is optimized with respect to L_d. We refer to the use of this regularizer as Adversarially Constrained Autoencoder Interpolation (ACAI). A diagram of ACAI is shown in FIG1. Assuming an effective critic, the autoencoder successfully "wins" this adversarial game by producing interpolated points which are indistinguishable from reconstructed data. We find in practice that encouraging this behavior also produces semantically smooth interpolations and improved representation learning performance, which we demonstrate in the following sections. Our loss function is similar to the one used in the Least Squares Generative Adversarial Network BID28 in the sense that they both measure the distance between a critic's output and a scalar using a squared L2 loss. However, they are substantially different in that ours is used as a regularizer for autoencoders rather than for generative modeling, and our critic attempts to regress the interpolation coefficient α instead of a fixed scalar hyperparameter.
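A sketch of both loss terms for one training step is shown below (illustrative PyTorch-style code, assuming flat latent codes of shape (batch, d_z), a critic d that outputs one scalar per example, and example values for λ and γ):

    import torch

    def acai_losses(f, g, d, x1, x2, lam=0.5, gamma=0.2):
        """f/g: encoder/decoder modules; d: critic; x1, x2: endpoint batches."""
        alpha = 0.5 * torch.rand(x1.shape[0], 1)        # alpha in [0, 0.5]
        z1, z2 = f(x1), f(x2)
        x_hat = g(z1)                                   # reconstruction of x1
        x_alpha = g(alpha * z1 + (1 - alpha) * z2)      # decoded interpolant
        # Autoencoder loss: reconstruction + fooling the critic toward alpha = 0.
        loss_fg = ((x1 - x_hat) ** 2).mean() + lam * (d(x_alpha) ** 2).mean()
        # Critic loss: regress alpha on interpolants; output 0 on the gamma-blend.
        blend = gamma * x1 + (1 - gamma) * x_hat
        loss_d = ((d(x_alpha.detach()) - alpha) ** 2).mean() \
                 + (d(blend.detach()) ** 2).mean()
        return loss_fg, loss_d

In practice, loss_fg would be minimized by the autoencoder's optimizer and loss_d by a separate optimizer over the critic's parameters.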
Note that the only thing ACAI encourages is that interpolated points appear realistic. The critic only ever sees a single reconstruction or interpolant at a time; it is never fed real datapoints or latent vectors. It therefore will only be able to successfully recover α if the quality of the autoencoder's output degrades consistently across an interpolation as a function of α (as seen, for example, in fig. 3a, where interpolated points become successively blurrier and darker). ACAI's primary purpose is to discourage this behavior. In doing so, it may implicitly modify the structure of the latent space learned by the autoencoder, but ACAI itself does not directly impose a specific structure. Our goal in introducing ACAI is to test whether simply encouraging better interpolation behavior produces a better representation for downstream tasks. Further, in contrast with the standard Generative Adversarial Network (GAN) setup BID12, ACAI does not distinguish between "real" and "fake" data; rather, it simply attempts to regress the interpolation coefficient α. Furthermore, GANs are a generative modeling technique, not a representation learning technique; in this paper, we focus on autoencoders and their ability to learn useful representations.

How can we measure whether an autoencoder interpolates effectively and whether our proposed regularization strategy achieves its stated goal? As mentioned in section 2, defining interpolation relies on the notion of "semantic similarity", which is a vague and problem-dependent concept. For example, a definition of interpolation along the lines of "αz_1 + (1 − α)z_2 should map to αx_1 + (1 − α)x_2" is overly simplistic, because interpolating in "data space" often does not result in realistic datapoints: in images, this corresponds to simply fading between the pixel values of the two images. Instead, we might hope that our autoencoder smoothly morphs between salient characteristics of x_1 and x_2, even when they are dissimilar. Put another way, we might hope that decoded points along the interpolation smoothly traverse the underlying manifold of the data instead of simply interpolating in data space. However, we rarely have access to the underlying data manifold. To make this problem more concrete, we introduce a simple benchmark task where the data manifold is simple and known a priori, which makes it possible to quantify interpolation quality. We then evaluate the ability of various common autoencoders to interpolate on our benchmark. Finally, we test ACAI on our benchmark and show that it exhibits dramatically improved performance and qualitatively superior interpolations. Given that the concept of interpolation is difficult to pin down, our goal is to define a task where a "correct" interpolation between two datapoints is unambiguous and well-defined. This will allow us to quantitatively evaluate the extent to which different autoencoders can successfully interpolate. Towards this goal, we propose the task of autoencoding 32 × 32 grayscale images of lines. We consider 16-pixel-long lines beginning from the center of the image and extending outward at an angle Λ ∈ [0, 2π] (or, put another way, lines are radii of the circle circumscribed within the image borders). An example of 16 such images is shown in FIG2 (appendix A.1); a sketch of how such images can be generated follows below. In this task, the data manifold can be defined entirely by a single variable: Λ. We can therefore define a valid interpolation from x_1 to x_2 as one which smoothly and linearly adjusts Λ from the angle of the line in x_1 to the angle in x_2.
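A minimal sketch of generating such images is given below; the rasterization is a simple approximation of the benchmark's rendering rather than the exact procedure used in our experiments.

    import numpy as np

    def render_line(lam, size=32, radius=16):
        """Rasterize a 16-pixel radius at angle lam from the image center."""
        img = np.zeros((size, size), dtype=np.float32)
        c = size // 2
        for r in np.linspace(0.0, radius - 1, 4 * radius):  # oversample the radius
            i = int(round(c + r * np.sin(lam)))
            j = int(round(c + r * np.cos(lam)))
            if 0 <= i < size and 0 <= j < size:
                img[i, j] = 1.0
        return img

    angles = np.random.uniform(0.0, 2 * np.pi, size=16)
    batch = np.stack([render_line(lam) for lam in angles])  # (16, 32, 32)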
We further require that the interpolation traverses the shortest path possible along the data manifold. We provide some concrete examples of good and bad interpolations, shown and described in appendix A.1. On any dataset, our desiderata for a successful interpolation are that intermediate points look realistic and provide a semantically meaningful morphing between its endpoints. On this synthetic lines dataset, we can formalize these notions as specific evaluation metrics, which we describe in detail in appendix A.2. To summarize, we propose two metrics: Mean Distance and Smoothness. Mean Distance measures the average distance between interpolated points and "real" datapoints. Smoothness measures whether the angles of the interpolated lines follow a linear trajectory between the angles of the start and end point. Both of these metrics are simple to define due to our construction of a dataset where we exactly know the data distribution and manifold; we provide a full definition and justification in appendix A.2. A perfect alignment would achieve 0 for both scores; larger values indicate a failure to generate realistic interpolated points or to produce a smooth interpolation, respectively. By choosing a synthetic benchmark where we can explicitly measure the quality of an interpolation, we can confidently evaluate different autoencoders on their interpolation abilities.

To evaluate an autoencoder on the synthetic lines task, we randomly sample line images during training and compute our evaluation metrics on a separate randomly-sampled test set of images. Note that we never train any autoencoder explicitly to produce an optimal interpolation; "good" interpolation is an emergent property which occurs only when the architecture, loss function, training procedure, etc. produce a suitable latent space. In this section, we describe various common autoencoder structures and objectives and try them on the lines task. Our goal is to quantitatively evaluate the extent to which standard autoencoders exhibit useful interpolation behavior. Our results, which we describe below, are summarized in table 1. Base Model Perhaps the most basic autoencoder structure is one which simply maps input datapoints through a "bottleneck" layer whose dimensionality is smaller than the input. In this setup, f_θ and g_φ are both neural networks which respectively map the input to a deterministic latent code z and then back to a reconstructed input. Typically, f_θ and g_φ are trained simultaneously with respect to ‖x − x̂‖². We will use this framework as a baseline for experimentation for all of the autoencoder variants discussed below. In particular, for our base model and all of the other autoencoders we use the model architecture and training procedure described in appendix B. As a short summary, our encoder consists of a stack of convolutional and average pooling layers, whereas the decoder consists of convolutional and nearest-neighbor upsampling layers. For experiments on the synthetic "lines" task, we use a latent dimensionality of 64. Note that, because the data manifold is effectively one-dimensional, we might expect autoencoders to be able to model this dataset using a one-dimensional latent code; however, using a larger latent code reflects the realistic scenario where the latent space is larger than necessary. After training our baseline autoencoder, we achieved a Mean Distance score which was the worst (highest) of all of the autoencoders we studied, though the Smoothness was on par with various other approaches.
In general, we observed some reasonable interpolations when using the baseline model, but found that the intermediate points on the interpolation were typically not realistic, as seen in the example interpolation in fig. 3a. Denoising Autoencoder An early modification to the standard autoencoder setup was proposed by BID44, where instead of feeding x into the autoencoder, a corrupted version x̃ ∼ q(x̃|x) is sampled from the conditional probability distribution q(x̃|x) and fed into the autoencoder instead. The autoencoder's goal remains to produce x̂ which minimizes ‖x − x̂‖². One justification of this approach is that the corrupted inputs should fall outside of the true data manifold, so the autoencoder must learn to map points from outside of the data manifold back onto it. This provides an implicit way of defining and learning the data manifold via the coordinate system induced by the latent space. While various corruption procedures q(x̃|x) have been used, such as masking and salt-and-pepper noise, in this paper we consider the simple case of additive isotropic Gaussian noise, where x̃ ∼ N(x, σ²I) and σ is a hyperparameter. After tuning σ, we found simply setting σ = 1.0 to work best. Interestingly, we found the denoising autoencoder often produced "data-space" interpolation (as seen in fig. 3b) when interpolating in latent space. This resulted in comparatively poor Mean Distance and Smoothness scores.

Variational Autoencoder The Variational Autoencoder (VAE) BID18 BID34 introduces the constraint that the latent code z is a random variable distributed according to a prior distribution p(z). The encoder f_θ can then be considered an approximation to the posterior p(z|x). The decoder g_φ is taken to parametrize the likelihood p(x|z); in all of our experiments, we consider x to be Bernoulli distributed. The latent distribution constraint is enforced by an additional loss term which measures the KL divergence between the approximate posterior and the prior. VAEs then use the log-likelihood for the reconstruction loss (cross-entropy in the case of Bernoulli-distributed data), which results in the combined loss function −E[log g_φ(z)] + KL(f_θ(x) ‖ p(z)), where the expectation is taken with respect to z ∼ f_θ(x) and KL(·‖·) is the KL divergence. Minimizing this loss function can be considered maximizing a lower bound (the "ELBO") on the likelihood of the training set, producing a likelihood-based generative model which allows novel data points to be sampled by first sampling z ∼ p(z) and then computing g_φ(z). A common choice is to let q(z|x) be a diagonal-covariance Gaussian, in which case backpropagation through sampling from q(z|x) is feasible via the "reparametrization trick", which replaces z ∼ N(µ, σI) with ε ∼ N(0, I), z = µ + σ ⊙ ε, where µ, σ ∈ R^{d_z} are the predicted mean and standard deviation produced by f_θ. Various modified objectives BID15 BID47, improved prior distributions BID19 BID39 BID17 and improved model architectures BID37 BID8 BID13 have been proposed to improve the VAE's performance on downstream tasks, but in this paper we solely consider the "vanilla" VAE objective and prior described above, applied to our baseline autoencoder structure. When trained on the lines benchmark, we found the VAE was able to effectively model the data distribution (see samples, fig. 5 in appendix C) and accurately reconstruct inputs. In interpolations produced by the VAE, intermediate points tend to look realistic, but the angles of the lines do not follow a smooth or short path (fig. 3c).
These VAE interpolations resulted in a very good Mean Distance score but a very poor Smoothness score. Contrary to expectations, this suggests that desirable interpolation behavior may not follow from an effective generative model of the data distribution. Adversarial Autoencoder The Adversarial Autoencoder (AAE) BID27 proposes an alternative way of enforcing structure on the latent code. Instead of minimizing a KL divergence between the distribution of latent codes and a prior distribution, a critic network is trained in tandem with the autoencoder to predict whether a latent code comes from f_θ or from the prior p(z). The autoencoder is simultaneously trained to reconstruct inputs (via a standard reconstruction loss) and to "fool" the critic. The autoencoder is allowed to backpropagate gradients through the critic's loss function, but the autoencoder and critic parameters are optimized separately. This effectively computes an "adversarial divergence" between the latent code distribution and the chosen prior. This framework was later generalized and referred to as the "Wasserstein Autoencoder" BID38. One advantage of this approach is that it allows for an arbitrary prior (as opposed to those which have a tractable KL divergence). The disadvantages are that the AAE no longer has a probabilistic interpretation and involves optimizing a minimax game, which can cause instabilities. Using the AAE requires choosing a prior, a critic structure, and a training scheme for the critic. For simplicity, we also used a spherical Gaussian prior for the AAE. We experimented with various architectures for the critic, and found the best performance with a critic which consisted of two dense layers, each with 100 units and a leaky ReLU nonlinearity. We found it satisfactory to simply use the same optimizer and learning rate for the critic as was used for the autoencoder. On our lines benchmark, the AAE typically produced smooth interpolations, but exhibited degraded quality in the middle of interpolations (fig. 3d). This behavior produced the best Smoothness score among existing autoencoders, but a relatively poor Mean Distance score. The Vector Quantized Variational Autoencoder (VQ-VAE) was introduced by van den Oord et al. as a way to train discrete-latent autoencoders using a learned codebook. In the VQ-VAE, the encoder f_θ(x) produces a continuous hidden representation z ∈ R^{d_z} which is then mapped to z_q, its nearest neighbor in a "codebook" {e_j ∈ R^{d_z}, j ∈ 1, . . ., K}. z_q is then passed to the decoder for reconstruction. The encoder is trained to minimize the reconstruction loss using the straight-through gradient estimator BID2, together with a commitment loss term β‖z − sg(z_q)‖² (where β is a scalar hyperparameter) which encourages encoder outputs to move closer to their nearest codebook entry. Here sg denotes the stop gradient operator, i.e. sg(x) = x in the forward pass, and sg(x) = 0 in the backward pass. The codebook entries e_j are updated as an exponential moving average (EMA) of the continuous latents z that map to them at each training iteration. The VQ-VAE training procedure using this EMA update rule can be seen as performing the K-means or the hard Expectation Maximization (EM) algorithm on the latent codes BID36. We perform interpolation in the VQ-VAE by interpolating continuous latents, mapping them to their nearest codebook entries, and decoding the result. Assuming a sufficiently large codebook, a semantically "smooth" interpolation may be possible. On the lines task, we found that this procedure produced poor interpolations.
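The nearest-neighbor quantization, commitment loss, and straight-through estimator can be sketched in a few lines of PyTorch. This is a sketch of the forward pass only, under the assumption of flat (batch, d_z) latents; the EMA codebook update described above is omitted:

```python
import torch

def quantize(z, codebook, beta=0.25):
    """Map continuous latents to their nearest codebook entries.

    `z` has shape (batch, d_z); `codebook` has shape (K, d_z). `beta` is the
    commitment coefficient (a hypothetical default).
    """
    # Pairwise distances between latents and codebook entries.
    dists = torch.cdist(z, codebook)          # (batch, K)
    idx = dists.argmin(dim=1)                 # nearest-neighbor indices
    z_q = codebook[idx]                       # quantized latents
    # Commitment loss: beta * ||z - sg(z_q)||^2.
    commit = beta * ((z - z_q.detach()) ** 2).sum(dim=1).mean()
    # Straight-through estimator: forward pass uses z_q, but gradients
    # flow back to z as if the quantization were the identity.
    z_q = z + (z_q - z).detach()
    return z_q, idx, commit
```

The `z + (z_q - z).detach()` trick is the standard way to express the straight-through estimator in autograd frameworks.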
Ultimately, many entries of the codebook were mapped to unrealistic datapoints, and the interpolations resembled those of the baseline autoencoder. Adversarially Constrained Autoencoder Interpolation Finally, we turn to evaluating our proposed adversarial regularizer for improving interpolations. For simplicity, on the lines benchmark we found it sufficient to use a critic architecture which was equivalent to the encoder (as described in appendix B). To produce a single scalar value from its output, we computed the mean of its final layer activations. For the hyperparameters λ and γ we found values of 0.5 and 0.2 to achieve good results, though the performance was not very sensitive to their values. We use these values for the coefficients for all of our experiments. Finally, we trained the critic using the same optimizer and hyperparameters as the autoencoder. We found dramatically improved performance on the lines benchmark when using ACAI: it achieved the best Mean Distance and Smoothness scores among the autoencoders we considered. When inspecting the resulting interpolations, we found it occasionally chose a longer path than necessary but typically produced "perfect" interpolation behavior as seen in fig. 3f. This provides quantitative evidence that ACAI is successful at encouraging realistic and smooth interpolations. We have so far only discussed results on our synthetic lines benchmark. We also provide example reconstructions and interpolations produced by each autoencoder for MNIST BID22, SVHN BID31, and CelebA BID24 in appendix D. For each dataset, we trained autoencoders with latent dimensionalities of 32 and 256. Since we do not know the underlying data manifold for these datasets, no metrics are available to evaluate performance and we can only make qualitative judgments as to the reconstruction and interpolation quality. We find that most autoencoders produce "blurrier" images with d_z = 32 but generally give smooth interpolations regardless of the latent dimensionality. The exception to this observation was the VQ-VAE, which seems generally to work better with d_z = 32 and occasionally even diverged for d_z = 256 (see e.g. fig. 9e). This may be due to the nearest-neighbor discretization (and gradient estimator) failing in high dimensions. Across datasets, we found the VAE and denoising autoencoder typically produced more blurry interpolations. AAE and ACAI generally produced realistic interpolations, even between dissimilar datapoints (for example, in fig. 7 bottom). The baseline model often effectively interpolated in data space. We have so far solely focused on measuring the interpolation abilities of different autoencoders. Now, we turn to the question of whether improved interpolation is associated with improved performance on downstream tasks. Specifically, we will evaluate whether using our proposed regularizer results in latent space representations which provide better performance in supervised learning and clustering. Put another way, we seek to test whether improving interpolation results in a latent representation which has disentangled important factors of variation (such as class identity) in the dataset. To answer this question, we ran classification and clustering experiments using the learned latent spaces of the different autoencoders. Table 3: Clustering accuracy for using K-Means on the latent space of different autoencoders (left) and previously reported methods (right). On the right, "Data" refers to performing K-Means directly on the data and DEC, RIM, and IMSAT are the methods proposed in BID46 BID20 BID16 respectively.
Results marked * are excerpted from BID16 and ** are from BID46. Single-Layer Classifier A common method for evaluating the quality of a learned representation (such as the latent space of an autoencoder) is to use it as a feature representation for a simple, one-layer classifier (i.e. logistic regression) trained on a supervised learning task. The justification for this evaluation procedure is that a learned representation which has effectively disentangled class identity will allow the classifier to obtain reasonable performance despite its simplicity. To test different autoencoders in this setting, we trained a separate single-layer classifier in tandem with the autoencoder using the latent representation as input. We did not optimize autoencoder parameters with respect to the classifier's loss, which ensures that we are measuring unsupervised representation learning performance. We repeated this procedure for latent dimensionalities of 32 and 256 (MNIST and SVHN) and 256 and 1024 (CIFAR-10). Our results are shown in table 2. In all settings, using ACAI instead of the baseline autoencoder upon which it is based produced significant gains; most notably, on SVHN with a latent dimensionality of 256, the baseline achieved an accuracy of only 22.74% whereas ACAI achieved 85.14%. In general, we found the denoising autoencoder, VAE, and ACAI obtained significantly higher performance compared to the remaining models. On MNIST and SVHN, ACAI achieved the best accuracy by a significant margin; on CIFAR-10, the performance of ACAI and the denoising autoencoder was similar. By way of comparison, we found a single-layer classifier applied directly to (flattened) image pixels achieved an accuracy of 92.31%, 23.48%, and 39.70% on MNIST, SVHN, and CIFAR-10 respectively, so classifying using the representation learned by ACAI provides a huge benefit. Clustering If an autoencoder groups points with common salient characteristics close together in latent space without observing any labels, it arguably has uncovered some important structure in the data in an unsupervised fashion. A more difficult test of an autoencoder is therefore clustering its latent space, i.e. separating the latent codes for a dataset into distinct groups without using any labels. To test the clusterability of the latent spaces learned by different autoencoders, we simply apply K-Means clustering BID26 to the latent codes for a given dataset. Since K-Means uses Euclidean distance, it is sensitive to each dimension's relative variance. We therefore used PCA whitening on the latent space learned by each autoencoder to normalize the variance of its dimensions prior to clustering. K-Means can exhibit highly variable results depending on how it is initialized, so for each autoencoder we ran K-Means 1,000 times from different random initializations and chose the clustering with the best objective value on the training set. For evaluation, we adopt the methodology of BID46 BID16: Given that the dataset in question has labels (which are not used for training the model, the clustering algorithm, or the choice of random initialization), we can cluster the data into C distinct groups where C is the number of classes in the dataset. We then compute the "clustering accuracy", which is simply the accuracy corresponding to the optimal one-to-one mapping of cluster IDs and classes BID46. Our results are shown in table 3. On both MNIST and SVHN, ACAI achieved the best or second-best performance for both d_z = 32 and d_z = 256.
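The "clustering accuracy" under the optimal one-to-one mapping of cluster IDs to classes can be computed with the Hungarian algorithm. A minimal sketch, assuming integer label arrays:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred, n_classes):
    """Accuracy under the best one-to-one mapping of cluster IDs to classes."""
    # Contingency matrix: counts[c, k] = number of class-c points in cluster k.
    counts = np.zeros((n_classes, n_classes), dtype=np.int64)
    for c, k in zip(y_true, y_pred):
        counts[c, k] += 1
    # linear_sum_assignment minimizes cost, so negate to maximize matches.
    rows, cols = linear_sum_assignment(-counts)
    return counts[rows, cols].sum() / len(y_true)
```

Exhaustively searching over all C! mappings would be intractable for large C; the Hungarian algorithm finds the optimal assignment in polynomial time.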
We do not report results on CIFAR-10 because all of the autoencoders we studied achieved a near-random clustering accuracy. Previous efforts to evaluate clustering performance on CIFAR-10 use learned feature representations from a convolutional network trained on ImageNet BID16, which we believe only indirectly measures unsupervised learning capabilities. In this paper, we have provided an in-depth perspective on interpolation in autoencoders. We proposed Adversarially Constrained Autoencoder Interpolation (ACAI), which uses a critic to encourage interpolated datapoints to be more realistic. To make interpolation a quantifiable concept, we proposed a synthetic benchmark and showed that ACAI substantially outperformed common autoencoder models. This task also yielded unexpected insights, such as that a VAE which has effectively learned the data distribution might not interpolate well. We also studied the effect of improved interpolation on downstream tasks, and showed that ACAI led to improved performance for feature learning and unsupervised clustering. These findings confirm our intuition that improving the interpolation abilities of a baseline autoencoder can also produce a better learned representation for downstream tasks. However, we emphasize that we do not claim that good interpolation always implies a good representation; for example, the AAE produced smooth and realistic interpolations but fared poorly in our representation learning experiments, and the denoising autoencoder had low-quality interpolations but provided a useful representation. In future work, we are interested in investigating whether our regularizer improves the performance of autoencoders other than the standard "vanilla" autoencoder we applied it to. In this paper, we primarily focused on image datasets due to the ease of visualizing interpolations, but we are also interested in applying these ideas to non-image datasets.
A LINE BENCHMARK
Some example data and interpolations for our synthetic lines benchmark are shown in FIG2. Full discussion of this benchmark is available in section 3.1. We define our Mean Distance and Smoothness metrics as follows: Let x_1 and x_2 be two input images we are interpolating between and x̂_n = g_φ(α f_θ(x_2) + (1 − α) f_θ(x_1)) be the decoded point corresponding to mixing x_1 and x_2's latent codes using coefficient α = (n − 1)/(N − 1). The images x̂_n, n ∈ {1, . . ., N} then comprise a length-N interpolation between x_1 and x_2. To produce our evaluation metrics, we first find the closest true datapoint (according to cosine distance) for each of the N intermediate images along the interpolation. Finding the closest image among all possible line images is infeasible; instead we first generate a size-D collection of line images D with corresponding angles Λ_q, q ∈ {1, . . ., D} spaced evenly between 0 and 2π. Then, we match each image in the interpolation to a real datapoint by finding q_n = arg min_q C_{n,q} for n ∈ {1, . . ., N}, where C_{n,q} is the cosine distance between x̂_n and the q-th entry of D. To capture the notion of "intermediate points look realistic", we compute Mean Distance = (1/N) Σ_{n=1}^{N} C_{n,q_n}. We now define a perfectly smooth interpolation to be one which consists of lines with angles which linearly move from the angle of D_{q_1} to that of D_{q_N}. Note that if, for example, the interpolated lines go from Λ_{q_1} = π/10 to Λ_{q_N} = 19π/10 then the angles corresponding to the shortest path will have a discontinuity from 0 to 2π. To avoid this, we first "unwrap" the angles
{Λ_{q_1}, . . ., Λ_{q_N}}, removing discontinuities larger than π by adding multiples of ±2π whenever the absolute difference between Λ_{q_{n−1}} and Λ_{q_n} is greater than π, to produce the angle sequence {Λ̃_{q_1}, . . ., Λ̃_{q_N}}. Then, we define the measure of smoothness as Smoothness = max_{n∈{1,...,N−1}} |Λ̃_{q_{n+1}} − Λ̃_{q_n}| / |Λ̃_{q_N} − Λ̃_{q_1}| − 1/(N − 1). In other words, we measure how much larger the largest change in (normalized) angle is compared to the minimum possible value (1/(N − 1)).
B BASE MODEL ARCHITECTURE AND TRAINING PROCEDURE
All of the autoencoder models we studied in this paper used the following architecture and training procedure: The encoder consists of blocks of two consecutive 3 × 3 convolutional layers followed by 2 × 2 average pooling. All convolutions (in the encoder and decoder) are zero-padded so that the input and output height and width are equal. The number of channels is doubled before each average pooling layer. Two more 3 × 3 convolutions are then performed, the last one without activation, and the final output is used as the latent representation. All convolutional layers except for the final one use a leaky ReLU nonlinearity BID25. For experiments on the synthetic "lines" task, the convolution-average pool blocks are repeated 4 times until we reach a latent dimensionality of 64. For subsequent experiments on real datasets (section 4), we repeat the blocks 3 times, resulting in a latent dimensionality of 256. The decoder consists of blocks of two consecutive 3 × 3 convolutional layers with leaky ReLU nonlinearities followed by 2 × 2 nearest neighbor upsampling BID32. The number of channels is halved after each upsampling layer. These blocks are repeated until we reach the target resolution (32 × 32 in all experiments). Two more 3 × 3 convolutions are then performed, the last one without activation and with a number of channels equal to the number of desired colors. All parameters are initialized as zero-mean Gaussian random variables with a standard deviation of 1/√(fan_in (1 + 0.2²)).
C VAE SAMPLES
In fig. 5, we show some samples from our VAE trained on the synthetic line benchmark. The VAE generally produces realistic samples and seems to cover the data distribution well, despite the fact that it does not produce high-quality interpolations (fig. 3c).
D INTERPOLATION RESULTS
In this section, we provide a series of figures (figs. 6 to 11) showing interpolation behavior for the different autoencoders we studied. Further discussion of these results is available in section 3.
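The two benchmark metrics defined in appendix A.2 above are straightforward to compute. A NumPy sketch, assuming flattened image arrays and distinct endpoint angles (so the normalization term is nonzero):

```python
import numpy as np

def line_metrics(interp, library, angles):
    """Compute Mean Distance and Smoothness for one interpolation.

    `interp` is an (N, d) array of flattened decoded frames, `library` a
    (D, d) array of reference line images with matching `angles` (D,).
    """
    # Cosine distance C[n, q] between each decoded frame and each reference.
    a = interp / np.linalg.norm(interp, axis=1, keepdims=True)
    b = library / np.linalg.norm(library, axis=1, keepdims=True)
    C = 1.0 - a @ b.T                       # (N, D)
    q = C.argmin(axis=1)                    # nearest reference per frame
    mean_distance = C[np.arange(len(q)), q].mean()
    # Unwrap matched angles to remove 2*pi discontinuities, then compare
    # the largest normalized step against the ideal 1 / (N - 1).
    lam = np.unwrap(angles[q])
    n = len(lam)
    norm_steps = np.abs(np.diff(lam)) / abs(lam[-1] - lam[0])
    smoothness = norm_steps.max() - 1.0 / (n - 1)
    return mean_distance, smoothness
```

`np.unwrap` performs exactly the ±2π correction described above, so a perfectly linear angle trajectory yields a Smoothness of 0.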
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
S1fQSiCcYm
We propose a regularizer that improves interpolation and autoencoders and show that it also improves the learned representation for downstream tasks.
We consider the problem of generating plausible and diverse video sequences when we are only given a start and an end frame. This task is also known as inbetweening, and it belongs to the broader area of stochastic video generation, which is generally approached by means of recurrent neural networks (RNN). In this paper, we propose instead a fully convolutional model to generate video sequences directly in the pixel domain. We first obtain a latent video representation using a stochastic fusion mechanism that learns how to incorporate information from the start and end frames. Our model learns to produce such a latent representation by progressively increasing the temporal resolution, and then to decode it in the spatiotemporal domain using 3D convolutions. The model is trained end-to-end by minimizing an adversarial loss. Experiments on several widely-used benchmark datasets show that it is able to generate meaningful and diverse in-between video sequences, according to both quantitative and qualitative evaluations. Imagine if we could teach an intelligent system to automatically turn comic books into animations. Being able to do so would undoubtedly revolutionize the animation industry. Although such an immensely labor-saving capability is still beyond the current state-of-the-art, advances in computer vision and machine learning are making it an increasingly more tangible goal. Situated at the heart of this challenge is video inbetweening, that is, the process of creating intermediate frames between two given key frames. Recent developments in artificial neural network architectures and the emergence of generative adversarial networks (GAN) have led to rapid advancement in image and video synthesis (Aigner & Körner, 2018). At the same time, the problem of inbetweening has received much less attention. The majority of the existing works focus on two different tasks: i) unconditional video generation, where the model learns the input data distribution during training and generates new plausible videos without receiving further input; and ii) video prediction, where the model is given a certain number of past frames and it learns to predict how the video evolves thereafter. In most cases, the generative process is modeled as a recurrent neural network (RNN) using either long-short term memory (LSTM) cells or gated recurrent units (GRU). Indeed, it is generally assumed that some form of a recurrent model is necessary to capture long-term dependencies, when the goal is to generate videos over a length that cannot be handled by pure frame-interpolation methods based on optical flow. In this paper, we show that it is in fact possible to address the problem of video inbetweening using a stateless, fully convolutional model. A major advantage of this approach is its simplicity. The absence of recurrent components implies shorter gradient paths, hence allowing for deeper networks and more stable training. The model is also more easily parallelizable, due to the lack of sequential states. Moreover, in a convolutional model, it is straightforward to enforce temporal consistency with the start and end frames given as inputs. Motivated by these observations, we make the following contributions in this paper: • We propose a fully convolutional model to address the task of video inbetweening.
The proposed model consists of three main components: i) a 2D-convolutional image encoder, which maps the input key frames to a latent space; ii) a 3D-convolutional latent representation generator, which learns how to incorporate the information contained in the input frames with progressively increasing temporal resolution; and iii) a video generator, which uses transposed 3D-convolutions to decode the latent representation into video frames. • Our key finding is that separating the generation of the latent representation from video decoding is of crucial importance to successfully address video inbetweening. Indeed, attempting to generate the final video directly from the encoded representations of the start and end frames tends to perform poorly, as further demonstrated in Section 4. To this end, we carefully design the latent representation generator to stochastically fuse the key frame representations and progressively increase the temporal resolution of the generated video. • We carried out extensive experiments on several widely used benchmark datasets, and demonstrate that the model is able to produce realistic video sequences, considering key frames that are well over a half second apart from each other. In addition, we show that it is possible to generate diverse sequences given the same start and end frames, by simply varying the input noise vector driving the generative process. The rest of the paper is organized as follows: We review the literature related to our work in Section 2. Section 3 describes our proposed model in detail. Experimental results, both quantitative and qualitative, are presented in Section 4, followed by our conclusions in Section 5. Recent advances based on deep networks have led to tremendous progress in three areas related to the current work: i) video prediction, ii) video generation and iii) video interpolation. Video prediction: Video prediction addresses the problem of producing future frames given one (or more) past frames of a video sequence. The methods that belong to this group are deterministic, in the sense that they always produce the same output for the same input, and they are trained to minimize the L2 loss between the ground truth and the predicted future frames. Most of the early works in this area adopted recurrent neural networks to model the temporal dynamics of video sequences. An LSTM encoder-decoder framework has been used to learn video representations of image patches, and later work extends the prediction to video frames rather than patches, training a convolutional LSTM. The underlying idea is to compute the next frame by first predicting the motions of either individual pixels or image segments and then merging these predictions via masking. A multi-layer LSTM has also been used, progressively refining the prediction error. Some methods do not use recurrent networks to address the problem of video prediction. For example, a 3D convolutional neural network has been adopted, with an adversarial loss used in addition to the L2 loss to ensure that the predicted frames look realistic. More recently, Aigner & Körner proposed a similar approach, though in this case layers are added progressively to increase the image resolution during training. All the aforementioned methods aim at predicting the future frames in the pixel domain directly.
An alternative approach is to first estimate local and global transformations (e.g., affine warping and local filters), and then apply them to each frame to predict the next, by locally warping the image content accordingly. Video generation: Video generation differs from video prediction in that it aims at modelling future frames in a probabilistic manner, so as to generate diverse and plausible video sequences. To this end, methods based on generative adversarial networks (GAN) and variational autoencoder networks (VAN) are currently being explored in the literature. In one line of work, a GAN architecture is proposed which consists of two generators (to produce, respectively, foreground and static pixels), and a discriminator to distinguish between real and generated video sequences. While some models generate the whole output video sequence from a single latent vector, an alternative is to first use a temporal generator to produce a sequence of latent vectors that captures the temporal dynamics, after which an image generator produces the output images from the latent vectors. Both the generators and the discriminator are based on CNNs. The model is also able to generate video sequences conditionally on an input label, as well as interpolating between frames by first linearly interpolating the temporal latent vectors. To address mode collapse, a variational approach has been proposed in which each frame is recursively generated by combining the previous frame encoding with a latent vector; this is fed to an LSTM, whose output goes through a decoder. Similarly, one can sample a latent vector which is then used as conditioning for a deterministic frame prediction network, with a variational approach used to learn how to sample the latent vector, conditional on the past frames. Other methods do not attempt to predict the pixels of the future frame directly. Conversely, a variational autoencoder is trained to generate plausible differences between consecutive frames, or motion trajectories. Recently, a loss function has been proposed that combines a variational loss (to produce diverse videos) with an adversarial loss (to generate realistic frames). Video sequences can be modelled as two distinct components: content and motion. In one approach, the latent vector from which the video is generated is divided into two parts, content and motion, which leads to improved quality of the generated sequences when compared with previous approaches. A similar idea is explored in Villegas et al. (2017a), where two encoders, one for motion and one for content, are used to produce hidden representations that are then decoded to a video sequence. Another method also explicitly separates motion and content into two streams, which are generated by means of a variational network and then fused to produce the predicted sequence. An adversarial loss is then used to improve the realism of the generated videos. All of the aforementioned methods are able to predict or generate just a few video frames into the future. Long-term video prediction was originally addressed with the goal of predicting up to 100 future frames of an Atari game. The current frame is encoded using a CNN or LSTM, transformed conditionally on the player action, and decoded into the next frame. More recently, Villegas et al. (2017b) addressed a similar problem, but for the case of real-world video sequences. The key idea is to first estimate high-level structures from past frames (e.g., human poses). Then, an LSTM is used to predict a sequence of future structures, which are decoded to future frames.
One shortcoming of Villegas et al. (2017b) is that it requires ground truth landmarks as supervision. This has been addressed by a fully unsupervised method that learns to predict a high-level encoding into the future, after which a decoder with access to the first frame generates the future frames from the predicted high-level encoding. Video interpolation: Video interpolation is used to increase the temporal resolution of the input video sequence. This is addressed with different approaches: optical flow-based interpolation, phase-based interpolation, and pixel motion transformation. These methods typically target temporal super-resolution, and the frame rate of the input sequence is often already sufficiently high. Interpolating frames becomes more difficult when the temporal distance between consecutive frames increases. Long-term video interpolation has received far less attention in the past literature. Deterministic approaches have been explored using either block-based motion estimation/compensation or convolutional LSTM models. Our work is closer to those using generative approaches. In Chen et al. (2017b) two convolutional encoders are used to generate hidden representations of both the first and last frame, which are then fed to a decoder to reconstruct all frames in between. A variational approach has also been presented, in which a multi-layer convolutional LSTM is used to interpolate frames given a set of extended reference frames, with the goal of increasing the temporal resolution from 2 fps to 16 fps. In our experiments, we compare our method with these existing approaches. The proposed model receives three inputs: a start frame x_s, an end frame x_e, and a Gaussian noise vector u ∈ R^D. The output of the model is a video (x_s, x̂_1, . . ., x̂_{T−2}, x_e), where different sequences of plausible in-between frames (x̂_i) are generated by feeding different instantiations of the noise vector u.
Figure 1: Layout of the model used to generate the latent video representation z. The inputs are the encoded representations of the start and end frames E(x_s) and E(x_e), together with a noise vector u.
In the rest of this paper, we set T = 16 and D = 128. The model consists of three components: an image encoder, a latent representation generator and a video generator. In addition, a video discriminator and an image discriminator are added so that the whole model can be trained using adversarial learning to produce realistic video sequences. The image encoder E(x) receives as input a video frame of size H_0 × W_0 and produces a feature map of shape H × W × C, where C is the number of channels. The encoder architecture consists of six layers, alternating between 4 × 4 convolutions with stride-2 down-sampling and regular 3 × 3 convolutions, followed by a final layer to condense the feature map to the target depth C. This results in spatial dimensions H = H_0/8 and W = W_0/8. We set C = 64 in all our experiments. The latent representation generator G_Z(·) receives as input E(x_s), E(x_e) and u, and produces an output tensor of shape T × H × W × C. Its main function is to gradually fill in the video content between the start and end frames, working directly in the latent space defined by the image encoder. The model architecture is composed of a series of L residual blocks, each consisting of 3D convolutions and stochastic fusion with the encoded representations of x_s and x_e.
This way, each block progressively learns a transformation that improves the video content generated by the previous block. The generic l-th block is represented by the inner rectangle in Figure 1. Note that the lengths of the intermediate representations can differ from the final video length T, due to the use of a coarse-to-fine scheme in the time dimension. To simplify the notation, we defer its description to the end of this section and omit the implied temporal up-sampling from the equations. Let T^(l) denote the representation length within block l. First, we produce a layer-specific noise tensor of shape T^(l) × C by applying a linear transformation to the input noise vector u, i.e., A^(l)u + b^(l), and reshaping the result into a T^(l) × C tensor u^(l). This is used to drive two stochastic "gating" functions for the start and end frames, respectively: g_s^(l) = σ(k_s^(l) * u^(l)) and g_e^(l) = σ(k_e^(l) * u^(l)), where * denotes convolution along the time dimension, k_s, k_e are kernels of width 3 and depth C, and σ(·) is the sigmoid activation function. The gating functions are used to progressively fuse the encoded representations of the start and end frames with the intermediate output of the previous layer z^(l−1), as described by the following equation: z̃^(l) = z^(l−1) + g_s^(l) · E(x_s) + g_e^(l) · E(x_e) + n^(l), where n^(l) denotes an additional learned stochastic component added to stimulate diversity in the generative process. Note that z̃^(l) has shape T^(l) × H × W × C. Therefore, to compute the component-wise multiplication ·, E(x_s) and E(x_e) (each of shape H × W × C) are broadcast (i.e., replicated uniformly) T^(l) times along the time dimension, while g_s^(l), g_e^(l) and n^(l) (each of shape T^(l) × C) are broadcast H × W times over the spatial dimensions. The idea of the fusion step is similar to that of StyleGAN, albeit with different construction and purposes. Finally, the fused input is convolved spatially and temporally with 3 × 3 × 3 kernels k_1, k_2 in a residual unit: z^(l) = z̃^(l) + k_2 * h(k_1 * z̃^(l)), where h is the leaky ReLU activation function (with parameter α = 0.2). Each block thus defines a stochastic transformation of z^(l−1) given E(x_s) and E(x_e), with A, k, b being its learnable parameters. The generation of the overall latent video representation z ∈ R^{T×H×W×C} can be expressed as the composition of all L blocks applied to the initial representation z^(0). Coarse-to-fine generation: For computational efficiency, we adopt a coarse-to-fine scheme in the time dimension, represented by the outer dashed rectangle in Figure 1. More specifically, we double the length of z^(l) every L/3 generator blocks, i.e., z^(1), . . ., z^(L/3) have length T/4 = 4, z^(L/3+1), . . ., z^(2L/3) have length T/2 = 8, and z^(2L/3+1), . . ., z^(L) have the full temporal resolution T = 16. We initialize z^(0) to (E(x_s), E(x_e)) (which becomes (E(x_s), E(x_s), E(x_e), E(x_e)) after the first up-sampling) and set L = 24, resulting in 8 blocks per granularity level. The video generator G_V produces the output video sequence (x_s, x̂_1, x̂_2, . . ., x_e) = G_V(z) from the latent video representation z using spatially transposed 3D convolutions. The generator architecture alternates between 3 × 3 × 3 regular convolutions and transposed 3 × 4 × 4 convolutions with a stride of (1, 2, 2), hence applying only spatial (but not temporal) up-sampling. Note that it actually generates all T frames, including the "reconstructed" start frame x̂_0 and end frame x̂_{T−1}, though they are not used and are always replaced by the real x_s and x_e in the output. We train our model end-to-end by minimizing an adversarial loss function. To this end, we train two discriminators: a 3D convolutional video discriminator D_V and a 2D convolutional image discriminator D_I, following the approach of prior work.
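To make the gating and fusion steps concrete, here is a PyTorch sketch of one generator block. This is a sketch under stated assumptions: the layer names, tensor layouts, and the exact fusion form follow the equations above as reconstructed, not a reference implementation:

```python
import torch
import torch.nn as nn

class FusionBlock(nn.Module):
    """One latent-generator block: stochastic gating plus a 3D residual unit."""

    def __init__(self, t_len, c=64, noise_dim=128):
        super().__init__()
        self.t_len, self.c = t_len, c
        self.noise_proj = nn.Linear(noise_dim, t_len * c)      # u -> u^(l)
        self.gate_s = nn.Conv1d(c, c, kernel_size=3, padding=1)  # k_s
        self.gate_e = nn.Conv1d(c, c, kernel_size=3, padding=1)  # k_e
        self.res = nn.Sequential(                               # k_1, k_2
            nn.Conv3d(c, c, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(c, c, 3, padding=1))

    def forward(self, z, enc_s, enc_e, u, noise):
        # z: (B, C, T, H, W); enc_s/enc_e: (B, C, H, W);
        # u: (B, noise_dim); noise: (B, C, T) learned stochastic component.
        u_l = self.noise_proj(u).view(-1, self.c, self.t_len)  # (B, C, T)
        g_s = torch.sigmoid(self.gate_s(u_l))[..., None, None]  # broadcast H, W
        g_e = torch.sigmoid(self.gate_e(u_l))[..., None, None]
        fused = (z + g_s * enc_s[:, :, None] + g_e * enc_e[:, :, None]
                 + noise[..., None, None])
        return fused + self.res(fused)                          # residual unit
```

The gates depend only on the per-layer noise tensor, so different draws of u open and close the start/end-frame contributions differently at each time step, which is what drives diversity in the generated in-between frames.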
The video discriminator has a similar architecture to prior work, except that in our case we produce a single output for the entire video rather than for its sub-volumes ("patches"). For the image discriminator, we use a Resnet-based architecture instead of a DCGAN-based architecture. Let X = (x_s, x_1, . . ., x_{T−2}, x_e) denote a real video and X̂ = (x_s, x̂_1, . . ., x̂_{T−2}, x_e) denote the corresponding generated video conditioned on x_s and x_e. Adopting the non-saturating log-loss, training amounts to optimizing the usual adversarial objectives: each discriminator maximizes the log-likelihood of classifying real examples as real and generated examples as fake, i.e., E[log D_V(X)] + E[log(1 − D_V(X̂))] for the video discriminator, and the analogous objective averaged over the T − 2 intermediate frames for the image discriminator, while the generator is trained to maximize E[log D_V(X̂)] together with the corresponding image-level term. During optimization we replace the average over the T − 2 intermediate frames with a single uniformly sampled frame to save computation, as is done in prior work. This does not change the convergence properties of stochastic gradient descent, since the two quantities have the same expectation. We regularize the discriminators by penalizing the derivatives of the pre-sigmoid logits with respect to their input videos and images, as has been proposed to improve GAN stability and prevent mode collapse. In our case, instead of an adaptive scheme, we opt for a constant coefficient of 0.1 for the gradient magnitude, which we found to be more reliable in our experiments. We use batch normalization on all 2D and 3D convolutional layers in the generator and layer normalization in the discriminators. 1D convolutions and fully-connected layers are not normalized. Architectural details of the encoder, decoder, and discriminators are further provided in Appendix A. We evaluated our approach on three well-known public datasets: BAIR robot pushing, KTH Action Database, and UCF101 Action Recognition Data Set. All video frames were down-sampled and cropped to 64 × 64, and subsequences of 16 frames were used in all the experiments, that is, 14 intermediate frames are generated. The videos in the KTH and UCF101 datasets are 25 fps, translating to key frames 0.6 seconds apart. The frame rate of BAIR videos is not provided, though visually it appears to be much lower, hence longer time in between key frames. For all the datasets, we adopted the conventional train/test splits practised in the literature. A validation set held out from the training set was used for model checkpoint selection. More details on the exact splits are provided in Appendix B. We did not use any dataset-specific tuning of hyper-parameters, architectural choices, or training schemes. Our main objective is to generate plausible transition sequences with characteristics similar to real videos, rather than predicting the exact content of the original sequence from which the key frames were extracted. Therefore we use the recently proposed Fréchet video distance (FVD) as our primary evaluation metric. The FVD is equivalent to the Fréchet Inception distance (FID) widely used for evaluating image generative models, but revisited in a way that it can be applied to evaluate videos, by adopting a deep neural network architecture that computes video embeddings taking the temporal dimension explicitly into account. The FVD is a more suitable metric for evaluating video inbetweening than the widely used structural similarity index (SSIM). The latter is suitable when evaluating prediction tasks, as it compares each synthesized frame with the original reference at the pixel level. Conversely, FVD compares the distributions of generated and ground-truth videos in an embedding space, thus measuring whether the synthesized video belongs to the distribution of realistic videos.
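The non-saturating losses and the logit gradient penalty described above can be sketched as follows. This is a generic sketch of those standard components, not the paper's exact training code; the discriminator input must have `requires_grad_(True)` for the penalty term:

```python
import torch
import torch.nn.functional as F

def d_loss(d_real_logits, d_fake_logits):
    """Discriminator loss: -log D(real) - log(1 - D(fake))."""
    return (F.binary_cross_entropy_with_logits(
                d_real_logits, torch.ones_like(d_real_logits))
            + F.binary_cross_entropy_with_logits(
                d_fake_logits, torch.zeros_like(d_fake_logits)))

def g_loss(d_fake_logits):
    """Non-saturating generator loss: -log D(fake)."""
    return F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))

def grad_penalty(logits, inputs, coeff=0.1):
    """Penalize gradients of pre-sigmoid logits w.r.t. discriminator inputs.

    `inputs` must be a leaf tensor with requires_grad=True; coeff=0.1 is the
    constant coefficient mentioned in the text.
    """
    grads = torch.autograd.grad(logits.sum(), inputs, create_graph=True)[0]
    return coeff * grads.pow(2).sum(dim=tuple(range(1, grads.dim()))).mean()
```

Because the penalty is computed on pre-sigmoid logits with a fixed coefficient, it constrains the discriminator's sharpness without the extra bookkeeping of an adaptive scheme.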
Since the FVD was only recently proposed, we also report the SSIM to be able to compare with the previous literature. During testing, we ran the model 100 times for each pair of key frames, feeding different instances of the noise vector u to generate different sequences consistent with the given key frames, and computed the FVD for each of these stochastic generations. This entire procedure was repeated 10 times for each model variant and dataset to account for the randomness in training. We report the mean over all training runs and stochastic generations as well as the confidence intervals obtained by means of the bootstrap method. For training we used the ADAM optimizer with β₁ = 0.5, β₂ = 0.999, ε = 10⁻⁸, and ran it on batches of 32 samples with a conservative learning rate of 5 × 10⁻⁵ for 500,000 steps. A checkpoint was saved every 5000 steps, resulting in 100 checkpoints. Training took around 5 days on a single Nvidia Tesla V100 GPU. The checkpoint for evaluation was selected to be the one with the lowest FVD on the validation set. To assess the impact of the stochastic fusion mechanism as well as the importance of having a separate latent video representation generator component, we compare the full model with baselines in which the corresponding components are omitted. • Baseline without fusion: The gating functions (Equations 2 and 3) are omitted and Equation 4 reduces to z̃^(l) = z^(l−1) + n^(l). • Naïve: The entire latent video representation generator described in Section 3.2 is omitted. Instead, decoding with transposed 3D convolution is performed directly on the (stacked) start/end frame encoded representations z = (E(x_1), E(x_N)) (which has dimensionality 2 × 8 × 8), using a stride of 2 in both spatial and temporal dimensions when up-scaling, to eventually produce 16 frames of size 64 × 64. To maintain stochasticity in the generative process, a spatially uniform noise map is generated by sampling a Gaussian vector u, applying a (learned) linear transform, and adding the result in the latent space before decoding. The results in Table 1 show that the dedicated latent video representation generator is indispensable, as the naïve baseline performs rather poorly. Moreover, stochastic fusion improves the quality of video generation. Note that the differences are statistically significant at the 95% confidence level across all three datasets. To illustrate the generated videos, Figure 2 shows some exemplary outputs of our full model. The generated sequence is not expected (or even desired) to reproduce the ground truth, but only needs to be similar in style and consistent with the given start and end frames. The samples show that the model does well in this area. For stochastic generation, good models should produce samples that are not only high-quality but also diverse. Following the approach of prior work, we measure diversity by means of the average pairwise cosine distance (i.e., 1 − cosine similarity) in the FVD embedding space among samples generated from the same start/end frames. The results in Table 2 show that incorporating fusion increases sample diversity and the difference is statistically significant. A qualitative illustration of the diversity in the generated videos is further shown in Figure 3, where we take the average of 100 generated videos conditioned on the same start and end frames. If the robot arm has a very diverse set of trajectories, we should expect to see it "diffuse" into the result due to averaging. Indeed this is the case, especially near the middle of the sequence.
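The diversity metric is simple to implement once per-sample embeddings are available. A NumPy sketch, where the embedding network is assumed to be the FVD backbone applied to each generated video:

```python
import numpy as np

def avg_pairwise_cosine_distance(emb):
    """Mean of 1 - cosine similarity over all pairs of sample embeddings.

    `emb` is an (n_samples, d) array of embeddings of videos generated
    from the same start/end frames.
    """
    e = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = e @ e.T                              # pairwise cosine similarity
    iu = np.triu_indices(len(e), k=1)          # unique pairs only
    return float((1.0 - sim[iu]).mean())
```

A value near 0 would indicate mode collapse (all generations nearly identical in embedding space), while larger values indicate more diverse in-between sequences.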
Finally, we computed the average SSIM for our method on each dataset in order to compare our results with those previously reported in the literature before the FVD metric was introduced. The results are shown in Table 3 alongside several existing methods that are capable of video inbetweening, ranging from RNN-based video generation to optical flow-based interpolation. 1 The numbers for these methods are cited directly from the respective publications.
Table 3: Average SSIM of our model using direct 3D convolution and alternative methods based on RNN, such as SDVI, or optical flow, such as SepConv and SuperSloMo. Higher is better. Note the difference in setup: our model spans a time base twice as long as the others. The SSIM for each test example is computed on the best sequence out of 100 stochastic generations, as per standard practice. We report the mean and the 95%-confidence interval for our model over 10 training runs.
Note that the competing methods generate 7 frames and are conditioned on potentially multiple frames before and after. In contrast, our model generates 14 frames, i.e., over a time base twice as long, and it is conditioned on only one frame before and after. Consequently, the SSIM figures are not directly comparable. However, it is interesting to see that on UCF101, the most challenging dataset among the three, our model attains higher SSIM than all the other methods despite having to generate much longer sequences. This demonstrates the potential of the direct convolution approach to outperform existing methods, especially on difficult tasks. It is also worth noting from Table 3 that purely optical flow-based interpolation methods achieve essentially the same level of SSIM as the sophisticated RNN-based SDVI on BAIR and KTH, which suggests either that a 7-frame time base is insufficient in length to truly test video inbetweening models or that the SSIM is not an ideal metric for this task. We presented a method for video inbetweening using only direct 3D convolutions. Despite having no recurrent components, our model produces good performance on most widely-used benchmark datasets. The key to success for this approach is a dedicated component that learns a latent video representation, decoupled from the final video decoding phase. A stochastic gating mechanism is used to progressively fuse the information of the given key frames. The rather surprising fact that video inbetweening can be achieved over such a long time base without sophisticated recurrent models may provide a useful alternative perspective for future research on video generation.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
S1epu2EtPB
This paper presents method for stochastically generating in-between video frames from given key frames, using direct 3D convolutions.
Aligning knowledge graphs from different sources or languages, which aims to align both the entities and relations, is critical to a variety of applications such as knowledge graph construction and question answering. Existing methods of knowledge graph alignment usually rely on a large number of aligned knowledge triplets to train effective models. However, these aligned triplets may not be available or are expensive to obtain for many domains. Therefore, in this paper we study how to design fully-unsupervised or weakly-supervised methods, i.e., to align knowledge graphs without or with only a few aligned triplets. We propose an unsupervised framework based on adversarial training, which is able to map the entities and relations in a source knowledge graph to those in a target knowledge graph. This framework can be further seamlessly integrated with existing supervised methods, where only a limited number of aligned triplets are utilized as guidance. Experiments on real-world datasets prove the effectiveness of our proposed approach in both the weakly-supervised and unsupervised settings. Knowledge graphs represent a collection of knowledge facts and are quite popular in the real world. Each fact is represented as a triplet (h, r, t), meaning that the head entity h has the relation r with the tail entity t. Examples of real-world knowledge graphs include instances which contain knowledge facts from the general domain in different languages (Freebase 1, DBPedia BID2, Yago BID19, WordNet 2) or facts from specific domains such as biomedical ontology (UMLS 3). Knowledge graphs are critical to a variety of applications such as question answering (BID4) and semantic search (BID13), which are attracting growing interest recently in both academia and industry communities. In practice, each knowledge graph is usually constructed from a single source or language, the coverage of which is limited. To enlarge the coverage and construct more unified knowledge graphs, a natural idea is to integrate multiple knowledge graphs from different sources or languages (BID0). However, different knowledge graphs use distinct symbol systems to represent entities and relations, which are not compatible. As a result, it is necessary to align entities and relations across different knowledge graphs (a.k.a., knowledge graph alignment) before integrating them. Indeed, there are some recent studies focusing on aligning entities and relations from a source knowledge graph to a target knowledge graph (BID23; BID6; BID7). These methods typically represent entities and relations in a low-dimensional space, and meanwhile learn a mapping function to align entities and relations from the source knowledge graph to the target one. However, these methods usually rely on a large number of aligned triplets as labeled data to train effective alignment models. In reality, the aligned triplets may not be available or can be expensive to obtain, making existing methods fail to achieve satisfactory results. Therefore, we seek an unsupervised or weakly-supervised approach, which is able to align knowledge graphs with a few or even without labeled data. In this paper, we propose an unsupervised approach for knowledge graph alignment with the adversarial training framework (BID11).
Our proposed approach aims to learn alignment functions, i.e., P_e(e_tgt|e_src) and P_r(r_tgt|r_src), to map the entities and relations (e_src and r_src) from the source knowledge graph to those (e_tgt and r_tgt) in the target graph, without any labeled data. Towards this goal, we notice that we can align each triplet in the source knowledge graph with one in the target knowledge graph by aligning the head/tail entities and relation respectively. Ideally, the optimal alignment functions would align all the source triplets to some valid triplets (i.e., triplets expressing true facts). Therefore, we can enhance the alignment functions by improving the plausibility of the aligned triplets. With this intuition, we train a triplet discriminator to distinguish between the real triplets in the target knowledge graph and those aligned from the source graph, which provides a reward function to measure the plausibility of a triplet. Meanwhile, the alignment functions are optimized to maximize the reward. The above process naturally forms an adversarial training procedure (BID11). By alternatively optimizing the alignment functions and the discriminator, the discriminator can consistently enhance the alignment functions. However, the above approach may suffer from the problem of mode collapse (BID17). Specifically, many entities in the source knowledge graph may be aligned to only a few entities in the target knowledge graph. This problem can be addressed if the aggregated posterior entity distribution Σ_{e_src} P_e(e_tgt|e_src)P(e_src) derived by the alignment functions matches the prior entity distribution P(e_tgt) in the target knowledge graph. Therefore, we match them with another adversarial training framework, which shares a similar idea with adversarial auto-encoders (BID16). The whole framework can also be seamlessly integrated with existing supervised methods, in which we can use a few aligned entities or relations as guidance, yielding a weakly-supervised approach. Our approach can be effectively optimized with stochastic gradient descent, where the gradient for the alignment functions is calculated by the REINFORCE algorithm. We conduct extensive experiments on several real-world knowledge graphs. Experimental results prove the effectiveness of our proposed approach in both the weakly-supervised and unsupervised settings. Our work is related to knowledge graph embedding, that is, embedding knowledge graphs into low-dimensional spaces, in which each entity and relation is represented as a low-dimensional vector (a.k.a., embedding). A variety of knowledge graph embedding approaches have been proposed (BID3; BID21), which can effectively preserve the semantic similarities of entities and relations into the learned embeddings. We treat these techniques as tools to learn entity and relation embeddings, which are further used as features for knowledge graph alignment. In the literature, there are also some studies focusing on knowledge graph alignment. Most of them perform alignment by considering contextual features of entities and relations, such as their names (BID15) or text descriptions (BID8; BID20; BID21). However, such contextual features are not always available, and therefore these methods cannot generalize to most knowledge graphs. In this paper, we consider the most general case, in which only the triplets in knowledge graphs are used for alignment. The studies most related to ours are BID23 and BID6.
Similar to our approach, they treat the entity and relation embeddings as features, and jointly train an alignment model. However, they rely entirely on the labeled data (e.g., aligned entities or relations) to train the alignment model, whereas our approach incorporates additional signals by using adversarial training, and therefore achieves better results in the weakly-supervised and unsupervised settings. More broadly, our work belongs to the family of domain alignment, which aims at mapping data from one domain to data in the other domain. With the success of generative adversarial networks (BID11), many researchers have been bringing the idea to domain alignment, getting impressive results in many applications, such as image-to-image translation (BID24; BID25), word-to-word translation (BID9) and text style transfer (BID18). They typically train a domain discriminator to distinguish between data points from different domains, and then the alignment function is optimized by fooling the discriminator. Our approach shares a similar idea, but is designed with some intuitions specific to knowledge graphs.
Figure 1: Framework overview. By applying the alignment functions to the triplets in the source knowledge graph, we obtain an aligned knowledge graph. The alignment functions are learned through two GANs. (1) We expect all triplets in the aligned knowledge graph to be valid, therefore we train a triplet discriminator to distinguish between valid and invalid triplets, and further use it to facilitate the alignment functions. (2) We also expect the entity distribution in the aligned knowledge graph to match the one in the target knowledge graph, which is achieved with another GAN.
Definition 1 (KNOWLEDGE GRAPH.) A knowledge graph is denoted as G = (E, R, T), where E is a set of entities, R is a set of relations and T is a set of triplets. Each triplet (h, r, t) consists of a head entity h, a relation r and a tail entity t, meaning that entity h has relation r with entity t. In practice, the coverage of each individual knowledge graph is usually limited, since it is typically constructed from a single source or language. To construct knowledge graphs with broader coverage, a straightforward way is to integrate multiple knowledge graphs from different sources or languages. However, each knowledge graph uses a unique symbol system to represent entities and relations, which is not compatible with other knowledge graphs. Therefore, a prerequisite for knowledge graph integration is to align entities and relations across different knowledge graphs (a.k.a., knowledge graph alignment). In this paper, we study how to align entities and relations from a source knowledge graph to those in a target knowledge graph, and the problem is formally defined below: Definition 2 (KNOWLEDGE GRAPH ALIGNMENT.) Given a source knowledge graph G_src = (E_src, R_src, T_src) and a target knowledge graph G_tgt = (E_tgt, R_tgt, T_tgt), the problem aims at learning an entity alignment function P_e and a relation alignment function P_r. Given an entity e_src in the source knowledge graph and an entity e_tgt in the target knowledge graph, P_e(e_tgt|e_src) gives the probability that e_src aligns to e_tgt. Similarly, for a source relation r_src and a target relation r_tgt, P_r(r_tgt|r_src) gives the probability that r_src aligns to r_tgt.
In this paper we propose an unsupervised approach to learning the alignment functions, i.e., P_e(e_tgt|e_src) and P_r(r_tgt|r_src), for knowledge graph alignment. To learn them without any supervision, we notice that we can align each triplet in the source knowledge graph with one in the target knowledge graph by aligning the head/tail entities and relation respectively. For an ideal alignment model, all the aligned triplets should be valid ones (i.e., triplets expressing true facts). As a result, we can improve the alignment functions by raising the plausibility of the aligned triplets. With this intuition, our approach trains a triplet discriminator to distinguish between valid triplets and other ones. Then we build a reward function from the discriminator to facilitate the alignment functions. However, using the triplet discriminator alone may cause the problem of mode collapse. More specifically, many entities in the source knowledge graph are aligned to only a few entities in the target knowledge graph. Such problems can be solved by constraining the aggregated posterior distribution of entities derived by the alignment functions to match the prior entity distribution from the target knowledge graph. Therefore, we follow the idea in adversarial auto-encoders (BID16), and leverage another adversarial training framework to regularize the distribution. The above strategies yield an unsupervised approach. However, in many cases, the structures of the source and target knowledge graphs (e.g., entity and triplet distributions) can be very different, making our unsupervised approach unable to perform effective alignment. In such cases, we can integrate our approach with existing supervised methods, and use a few labeled data as guidance, which further yields a weakly-supervised approach. Formally, our approach starts by learning entity and relation embeddings with existing knowledge graph embedding techniques, denoted as {x_e_src} for e_src ∈ E_src, {x_e_tgt} for e_tgt ∈ E_tgt and {x_r_src} for r_src ∈ R_src, {x_r_tgt} for r_tgt ∈ R_tgt. The learned embeddings preserve the semantic correlations of entities and relations, hence we treat them as features and build our alignment functions on top of them. Specifically, we define the probability that a source entity e_src or relation r_src aligns to a target entity e_tgt or relation r_tgt as follows: P_e(e_tgt|e_src) = (1/Z) exp(−‖W x_e_src − x_e_tgt‖/γ) and P_r(r_tgt|r_src) = (1/Z) exp(−‖W x_r_src − x_r_tgt‖/γ), where γ is a temperature parameter and Z is a normalization term. W is a linear projection matrix, which maps an embedding in the source knowledge graph (e.g., x_e_src) to one in the target graph (e.g., W x_e_src), so that we can perform alignment by calculating the distance between the mapped source embeddings (e.g., W x_e_src) and the embeddings in the target graph (e.g., x_e_tgt). Note that W is the only parameter to be learned, and it is shared across the entity and relation alignment functions. We also tried independent projection matrices and nonlinear projections, but obtained inferior results. In the following sections, we first briefly introduce the method for learning entity and relation embeddings (Section 4.1). Then, we introduce how we leverage the triplet discriminator (Section 4.2) and the regularization mechanism (Section 4.3) to facilitate training the alignment functions. Afterwards, we introduce a simple supervised method as an example, to illustrate how to incorporate labeled data (Section 4.4). Finally, we introduce our optimization algorithm (Section 4.5).
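A small sketch of the alignment distribution just defined, under the distance-based softmax form reconstructed above (the exact distance used in the original is an assumption):

```python
import torch
import torch.nn.functional as F

def alignment_probs(x_src, X_tgt, W, gamma=1.0):
    """Alignment distribution over all target candidates.

    x_src: (d,) source entity/relation embedding; X_tgt: (n_tgt, d) target
    embeddings; W: (d, d) shared linear projection; gamma: temperature.
    """
    mapped = x_src @ W.T                      # project into the target space
    dists = (X_tgt - mapped).norm(dim=1)      # distance to every target
    return F.softmax(-dists / gamma, dim=0)   # smaller distance, higher prob
```

The softmax plays the role of the normalization term Z, and sharing W between entities and relations keeps the number of learnable alignment parameters tiny, which matters in the unsupervised setting.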
In this paper, we leverage the TransE algorithm (BID3) for entity and relation embedding learning, due to its simplicity and effectiveness on a wide range of datasets. In general, other knowledge graph embedding algorithms could be used as well. Given a triplet t = (e_h, r, e_t), TransE defines its score as follows:

s(t) = −||x_{e_h} + x_r − x_{e_t}||.    (2)

Then the model is trained by maximizing the margin between the scores of real triplets and random triplets, and the objective function is given below:

O_emb = Σ_{t ∈ T} Σ_{t' ∈ T'} max(0, m − s(t) + s(t')),    (3)

where T is the set of real triplets in the knowledge graph, T' is the set of random triplets, and m is a parameter controlling the margin.

By defining the alignment functions for entity and relation (Eqn. 1), we are able to align each triplet in the source knowledge graph to the target knowledge graph by aligning the entities and relation respectively, which yields a distribution over aligned triplets:

A(t | t_src) = P_e(e_h | e_src_h) P_r(r | r_src) P_e(e_t | e_src_t), with t = (e_h, r, e_t) and t_src = (e_src_h, r_src, e_src_t).    (4)

An ideal alignment function would align all the source triplets to some valid triplets. Therefore, we can enhance the alignment functions by raising the plausibility of the aligned triplets. Ideally, we would wish that all the aligned triplets are valid ones. Towards this goal, we train a triplet discriminator to distinguish between valid triplets and other ones. Then the discriminator is used to define different reward functions for guiding the alignment functions.

In our approach, we train the discriminator by treating the real triplets in knowledge graphs as positive examples, and the aligned triplets generated by our approach as negative examples. Following existing studies (BID11), we define the objective function below:

O_D = E_{t ~ T_tgt} [log D_t(t)] + E_{t_src ~ T_src, t ~ A(t_src)} [log(1 − D_t(t))],    (5)

where t ~ A(t_src) is a triplet aligned from t_src and A is defined in Eqn. 4. D_t is the triplet discriminator, which concatenates the embeddings of the head/tail entities and relation in a triplet t, and predicts the probability that t is a valid triplet. Based on the discriminator, we can construct a scalar-to-scalar reward function R to measure the plausibility of a triplet. Then the alignment functions can be trained by maximizing the reward:

O_A = E_{t_src ~ T_src, t ~ A(t_src)} [R(D_t(t))].    (6)

There are several ways to define the reward function R, which essentially yield different adversarial training frameworks; for example, the choices R(x) = log x, R(x) = −log(1 − x) and R(x) = x/(1 − x) recover frameworks in the spirit of BID11 and BID14. Besides, we may also leverage R(x) = x, which is the first-order Taylor expansion of −log(1 − x) around x = 0 and has a limited range when x ∈ [0, 1]. All these reward functions have the same optimal solution, i.e., the derived distribution of the aligned triplets matching the real triplet distribution in the target knowledge graph. In practice, these reward functions may have different variance, and we empirically compare them in the experiments (TAB6).

During optimization, the gradient with respect to the alignment functions cannot be calculated directly, as the triplets sampled from the alignment functions are discrete variables. Therefore, we leverage the REINFORCE algorithm, which calculates the gradient as follows:

∇O_A = E_{t_src ~ T_src, t ~ A(t_src)} [R(D_t(t)) ∇ log P(t | t_src)],    (7)

where P(t | t_src) = P_e(e_h | e_src_h) P_r(r | r_src) P_e(e_t | e_src_t), with t = (e_h, r, e_t) and t_src = (e_src_h, r_src, e_src_t).

Although the triplet discriminator provides effective reward to the alignment functions, many entities in the source knowledge graph can be aligned to only a few entities in the target knowledge graph. Such problems can be solved by constraining the aggregated posterior entity distribution derived by the alignment functions to match the prior entity distribution in the target knowledge graph.
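As a quick illustration of the embedding objective and the reward choices, here is a small Python sketch. The margin loss mirrors Eqns. 2-3 and the reward table lists the candidates discussed above; the helper names and the use of plain NumPy are ours, not the paper's.

```python
import numpy as np

def transe_score(x_h, x_r, x_t):
    """TransE plausibility s(t) = -||x_h + x_r - x_t|| (Eqn. 2)."""
    return -np.linalg.norm(x_h + x_r - x_t)

def margin_objective(real, corrupted, m=1.0):
    """Margin loss between real triplets T and random triplets T' (Eqn. 3).
    Both arguments are lists of (x_h, x_r, x_t) embedding tuples."""
    return sum(max(0.0, m - transe_score(*t) + transe_score(*tc))
               for t, tc in zip(real, corrupted))

# Candidate scalar-to-scalar rewards R applied to x = D_t(t); they share the
# same optimum but differ in variance (compared empirically in TAB6).
REWARDS = {
    "log":    lambda x: np.log(x),
    "neglog": lambda x: -np.log(1.0 - x),
    "ratio":  lambda x: x / (1.0 - x),
    "linear": lambda x: x,  # 1st-order Taylor expansion of -log(1 - x) at 0
}
```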
Formally, the aggregated posterior distribution of entities is given below:

P_agg(e) = Σ_{e_src ∈ E_src} P(e_src) P_e(e | e_src),    (8)

where P(e_src) is the entity distribution in the source knowledge graph. We expect this distribution to match the prior distribution P(e_tgt), which is the entity distribution in the target knowledge graph. Following BID16, we regularize the distribution with another adversarial training framework (BID11). During training, an entity discriminator D_e is learned to distinguish between the posterior and prior distributions using the following objective function:

O_D' = E_{e ~ P(e_tgt)} [log D_e(e)] + E_{e ~ P_agg} [log(1 − D_e(e))],    (9)

where D_e takes the embedding of an entity as features to predict the probability that the entity is sampled from the prior distribution P(e_tgt). To enforce the posterior distribution to match the prior distribution, the entity alignment function is trained to fool the discriminator by maximizing the following objective:

O_R = E_{e ~ P_agg} [R(D_e(e))],    (10)

where R is the same reward function as used in the triplet discriminator (Eqn. 6), and the gradient for the alignment functions can be similarly calculated with the REINFORCE algorithm.

The above sections introduce an unsupervised approach to knowledge graph alignment. In many cases, the source and target knowledge graphs may have very different structures (e.g., entity or triplet distributions), making our approach fail to perform effective alignment. In these cases, we can integrate our approach with any supervised method, and leverage a few labeled data (e.g., aligned entity or relation pairs) as guidance, which yields a weakly-supervised approach. In this section, we introduce a simple yet effective method to show how to utilize the labeled data. Suppose we are given some aligned entity pairs; the aligned relation pairs can be handled in a similar way. We define our objective function as follows:

O_L = E_{(e_src, e_tgt) ∈ S} [log P_e(e_tgt | e_src)] − λ H(P_e(· | e_src)),    (11)

where S is the set of aligned entity pairs, e_src and e_tgt are random variables of entities in the source and target knowledge graphs, and H is the entropy of a distribution. The first term corresponds to a softmax classifier, which aims at maximizing the probability of aligning a source entity to the ground-truth target entity. The second term minimizes the entropy of the probability distribution calculated by the alignment function, which encourages the alignment function to make confident predictions. Such an entropy minimization strategy is used in many semi-supervised learning studies (BID12). We leverage the stochastic gradient descent algorithm for optimization.

In practice, we find that first pre-training the alignment functions with the given labeled data (Eqn. 11), then fine-tuning them with the triplet discriminator (Eqn. 6) and the regularization mechanism (Eqn. 8) leads to better performance, compared with jointly training all of them (TAB7). Consequently, we adopt the pre-training and fine-tuning framework for optimization, which is summarized in Alg. 1:

Algorithm 1: Pre-training and fine-tuning.
1: Pre-train the alignment functions with the labeled data (Eqn. 11).
2: while not converged do
3:    Update the triplet discriminator D_t and the alignment functions with Eqn. 5, 6.
4:    Update the entity discriminator D_e and the alignment functions with Eqn. 9, 10.
5: end while

In our experiments, we use four datasets for evaluation. In FB15k-1 and FB15k-2, the knowledge graphs have very different triplets, and can be seen as constructed from different sources; in WK15k(en-fr) and WK15k(en-de), the knowledge graphs are from different languages. The statistics are summarized in TAB2.
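The supervised objective in Eqn. 11 is easy to state in code. The sketch below computes the negative of O_L as a loss to minimize: cross-entropy to the ground-truth target entity plus λ times the entropy of the predicted alignment distribution. The matrix layout and names are illustrative assumptions.

```python
import numpy as np

def supervised_alignment_loss(P, labels, lam=1.0, eps=1e-12):
    """Negative of O_L (Eqn. 11), to be minimized.

    P: (n_src, n_tgt) array whose i-th row is P_e(. | e_src_i).
    labels: ground-truth target index for each labelled source entity.
    """
    n = len(labels)
    ce = -np.log(P[np.arange(n), labels] + eps).mean()    # softmax classifier
    entropy = -(P * np.log(P + eps)).sum(axis=1).mean()   # H(P_e(. | e_src))
    return ce + lam * entropy  # minimizing this maximizes O_L

# toy usage: 4 labelled source entities, 6 target candidates
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(6), size=4)
print(supervised_alignment_loss(P, labels=np.array([2, 0, 5, 1])))
```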
Following existing studies (BID23; BID6), we consider the task of entity alignment, and three different settings are studied, including supervised, weakly-supervised and unsupervised settings. Hit@1 and Hit@10 are reported.

• FB15k-1, FB15k-2: Following Zhu et al. (2017a), we construct two datasets from the FB15k dataset (BID3). In FB15k-1, the two knowledge graphs share 50% of the triplets, and in FB15k-2, 10% of the triplets are shared. Following that study, we use 5000 and 500 aligned entity pairs as labeled data in FB15k-1 and FB15k-2 respectively, and the rest for evaluation.
• WK15k(en-fr): A bi-lingual (English and French) dataset from BID6. Some aligned triplets are provided as labeled data, and some aligned entity pairs as test data. The labeled data and test data have some overlaps, so we delete the overlapped pairs from the labeled data.
• WK15k(en-de): A bi-lingual (English and German) dataset used in BID6. The dataset is similar to WK15k(en-fr), so we perform preprocessing in the same way.

Compared algorithms:
• A supervised method for word translation, which learns the translation in a bootstrapping way. We apply the method on the pre-trained entity and relation embeddings to perform knowledge graph alignment.
• UWT (BID9): An unsupervised word translation method, which leverages adversarial training and a refinement strategy. We apply the method to the entity and relation embeddings to perform alignment.
• KAGAN-sup: The supervised method introduced in Section 4.4, which is simple but effective, and can be easily integrated with our unsupervised approach.
• KAGAN: Our proposed Knowledge-graph Alignment GAN, which leverages the labeled data for pre-training, and then fine-tunes the model with the triplet discriminator and the regularization mechanism.
• KAGAN-t: A variant with only the triplet GAN, which first performs pre-training with the labeled data, and then performs fine-tuning with the triplet discriminator.
• KAGAN-e: A variant with only the entity GAN, which first pre-trains with the labeled data, and then fine-tunes with the regularization mechanism.

The dimension of entity and relation embeddings is set to 512 for all compared methods. For our proposed approach, λ is set to 1, the temperature parameter γ is set to 1, and the reward function is set to x by default. SGD is used for optimization. The learning rate is set to 0.1 for pre-training and 0.001 for fine-tuning. 10% of the labeled pairs are treated as the validation set, and the training process is terminated if the performance on the validation set drops. For the compared methods, the parameters are chosen according to the performance on the validation set.

The main results are presented in TAB3. In the supervised setting, our approach significantly outperforms all the compared methods. On the FB15k-1 and FB15k-2 datasets, without using any labeled data, our approach already achieves results close to those in the supervised setting. On the WK15k datasets under the weakly-supervised setting, our performance is comparable or even superior to the performance of other methods in the supervised setting, but with much less labeled data (about 13% in WK15k(en-fr) and 1% in WK15k(en-de)). Overall, our approach is quite effective in the weakly-supervised and unsupervised settings, outperforming all the baseline approaches. To understand the effect of each part of our approach, we further conduct some ablation studies. Table 3 presents the results in the supervised setting.
Both the triplet discriminator (KAGAN-t) and the regularization mechanism (KAGAN-e) improve the pre-trained alignment models (KAGAN-pre), and combining them (KAGAN) leads to even better results. TAB5 gives the results in the unsupervised (FB15k-2) and weakly-supervised (WK15k-fr2en, WK15k-de2en) settings. On the FB15k-2 dataset, using the regularization mechanism alone (KAGAN-e) already achieves impressive results. This is because the source and target knowledge graphs in FB15k-2 share similar structures, and our regularization mechanism can effectively leverage such similarity for alignment. However, the performance of using only the triplet discriminator (KAGAN-t) is very poor, which is caused by the problem of mode collapse. The problem is effectively solved by integrating the approach with the regularization mechanism (KAGAN), which achieves the best results in all cases.

Comparison of the reward functions. In our approach, we can choose different reward functions, leading to different adversarial training frameworks. These frameworks have the same optimal solutions, but different variance. In this section we compare them on the WK15k datasets, and the Hit@1 results are presented in TAB6. We notice that all reward functions lead to significant improvement compared with using no reward. Among them, R(x) = x/(1 − x) and R(x) = x obtain the best results.

Comparison of the optimization methods. During training, our approach fixes the entity/relation embeddings, and uses a pre-training and fine-tuning framework for optimization. In this section, we compare the framework with some variants, and the Hit@1 results are presented in TAB7. We see that our framework (pre-training and fine-tuning) outperforms the joint training framework. Besides, fine-tuning the entity/relation embeddings yields worse results than fixing them during training.

Case study. In this section, we present some visualizations to intuitively show the effect of the triplet discriminator and the regularization mechanism in our approach. We consider the unsupervised setting on the FB15k-2 dataset, and leverage the PCA algorithm to visualize certain embeddings. Figure 2 compares the entity embeddings obtained with and without the regularization mechanism, where red is for the mapped source entity embeddings (W x_{e_src}), and green for the target embeddings (x_{e_tgt}). We see that without the mechanism, many entities from the source knowledge graph are mapped to a small region (the red region), leading to the problem of mode collapse. The problem is addressed with the regularization mechanism. FIG2 compares the triplet embeddings obtained with and without the triplet discriminator, where the triplet embedding is obtained by concatenating the entity and relation embeddings in a triplet. Red is for triplets aligned from the source knowledge graph, and green is for triplets in the target graph. Without the triplet discriminator, the aligned triplets look quite different from the real ones (under different distributions). With the triplet discriminator, the aligned triplets look like the real ones (under similar distributions).

This paper studied knowledge graph alignment. We proposed an unsupervised approach based on the adversarial training framework, which is able to align entities and relations from a source knowledge graph to those in a target knowledge graph. In practice, our approach can be seamlessly integrated with existing supervised methods, which enables us to leverage a few labeled data as guidance, leading to a weakly-supervised approach.
Experimental results on several real-world datasets proved the effectiveness of our approach in both the unsupervised and weakly-supervised settings. In the future, we plan to learn alignment functions in both directions (source to target and target to source) to further improve the results, in a way similar to CycleGAN (BID24).
Multi-view learning can provide self-supervision when different views are available of the same data. The distributional hypothesis provides another form of useful self-supervision from adjacent sentences, which are plentiful in large unlabelled corpora. Motivated by the asymmetry in the two hemispheres of the human brain, as well as the observation that different learning architectures tend to emphasise different aspects of sentence meaning, we present two multi-view frameworks for learning sentence representations in an unsupervised fashion. One framework uses a generative objective and the other a discriminative one. In both frameworks, the final representation is an ensemble of two views, in which one view encodes the input sentence with a Recurrent Neural Network (RNN), and the other view encodes it with a simple linear model. We show that, after learning, the vectors produced by our multi-view frameworks provide improved representations over their single-view learnt counterparts, and the combination of different views gives representational improvement over each view and demonstrates solid transferability on standard downstream tasks.

Multi-view learning methods provide the ability to extract information from different views of the data and enable self-supervised learning of useful features for future prediction when annotated data is not available (BID16). Minimising the disagreement among multiple views helps the model to learn rich feature representations of the data and, after training, the ensemble of the feature vectors from multiple views can provide even stronger generalisation ability.

The distributional hypothesis (BID22) noted that words that occur in similar contexts tend to have similar meaning (BID51), and distributional similarity (BID19) consolidated this idea by stating that the meaning of a word can be determined by the company it keeps. The hypothesis has been widely used in the machine learning community to learn vector representations of human languages. Models built upon distributional similarity don't explicitly require human-annotated training data; the supervision comes from the semantic continuity of the language data.

Large quantities of annotated data are usually hard and costly to obtain, thus it is important to study unsupervised and self-supervised learning. Our goal is to propose learning algorithms built upon the ideas of multi-view learning and the distributional hypothesis to learn from unlabelled data. We draw inspiration from the lateralisation and asymmetry in information processing of the two hemispheres of the human brain, where, for most adults, sequential processing dominates the left hemisphere and the right hemisphere has a focus on parallel processing (BID9), while both hemispheres have been shown to have roles in literal and non-literal language comprehension (BID14; BID15).

Our proposed multi-view frameworks aim to leverage the functionality of both RNN-based models, which have been widely applied in sentiment analysis tasks (BID57), and the linear/log-linear models, which have excelled at capturing attributional similarities of words and sentences (BID5; BID24; BID51), for learning sentence representations. Previous work on unsupervised sentence representation learning based on the distributional hypothesis can be roughly categorised into two types.

Generative objective: These models generally follow the encoder-decoder structure.
The encoder learns to produce a vector representation for the current input, and the decoder learns to generate sentences in the adjacent context given the produced vector (BID28; BID24; BID20; BID50). The idea is straightforward, yet its scalability to very large corpora is hindered by the slow decoding process that dominates training time. In addition, the decoder in each model is discarded after learning, as the quality of the generated sequences is not the main concern, which is a waste of parameters and learning effort. Our first multi-view framework has a generative objective and uses an RNN as the encoder and an invertible linear projection as the decoder. The training time is drastically reduced as the decoder is simple, and the decoder is also utilised after learning. A regularisation is applied on the linear decoder to enforce invertibility, so that after learning, the inverse of the decoder can be applied as a linear encoder in addition to the RNN encoder.

Discriminative objective: In these models, a classifier is learnt on top of the encoders to distinguish adjacent sentences from those that are not (BID31; BID26; BID40; BID33); these models make a prediction using a predefined differentiable similarity function on the representations of the input sentence pairs or triplets. Our second multi-view framework has a discriminative objective and uses an RNN encoder and a linear encoder; it learns to maximise agreement among adjacent sentences.

Compared to earlier work on multi-view learning (BID16; BID17; BID52) that takes data from various sources or splits data into disjoint populations, our framework processes the exact same data in two distinctive ways. The two distinctive information processing views tend to encode different aspects of an input sentence; forcing agreement/alignment between these views encourages each view to be a better representation, and is beneficial to the future use of the learnt representations. Our contribution is threefold:

• Two multi-view frameworks for learning sentence representations are proposed, in which one framework uses a generative objective and the other one adopts a discriminative objective. Two encoding functions, an RNN and a linear model, are learnt in both frameworks.
• The results show that in both frameworks, aligning representations from two views gives improved performance of each individual view on all evaluation tasks compared to their single-view trained counterparts, and furthermore ensures that the ensemble of two views provides even better results than each improved view alone.
• Models trained under our proposed frameworks achieve good performance on the unsupervised tasks and overall outperform existing unsupervised learning models; armed with various pooling functions, they also show solid results on supervised tasks, which are either comparable to or better than those of the best unsupervised transfer model.

It is shown (BID24) that the consistency between supervised and unsupervised evaluation tasks is much lower than that within either supervised or unsupervised tasks alone, and that a model that performs well on supervised tasks may fail on unsupervised tasks. BID13 subsequently showed that, with a labelled training corpus such as SNLI (BID8) and MultiNLI (BID56), the resulting representations of the sentences from the trained model excel in both supervised and unsupervised tasks. Multi-task learning (BID48) also gives impressive performance on downstream tasks, while labelled data is costly. Our model is able to achieve good results on both groups of tasks without labelled information.
Our goal is to marry the RNN-based sentence encoder and the avg-on-word-vectors sentence encoder into multi-view frameworks with simple training objectives. The motivation is that, as noted in prior work, RNN-based encoders process sentences sequentially and are able to capture complex syntactic interactions, while the avg-on-word-vectors encoder has been shown to be good at capturing the coarse meaning of a sentence, which could be useful for finding paradigmatic parallels (BID51).

We present two multi-view frameworks, each of which learns two different sentence encoders; after learning, the vectors produced by the two encoders of the same input sentence are used to compose the sentence representation. The details of our learning frameworks are described as follows. In our multi-view frameworks, we first introduce two encoders that, after learning, can be used to build sentence representations. One encoder is a bi-directional Gated Recurrent Unit (BID10), f(s; φ), where s is the input sentence and φ is the parameter vector of the GRU. During learning, only the hidden state at the last time step is sent to the next stage of learning. The other encoder is a linear avg-on-word-vectors model g(s; W), which transforms the word vectors in a sentence by a learnable weight matrix W and outputs an averaged vector.

Given the finding (BID50) that neither an autoregressive nor an RNN decoder is necessary for learning sentence representations that excel on downstream tasks, our learning framework only learns to predict words in the next sentence. The framework has an RNN encoder f and a linear decoder h. Given an input sentence s_i, the encoder produces a vector z^f_i = f(s_i; φ), and the decoder h projects this vector to u_i = h(z^f_i) = U z^f_i, which has the same dimension as the word vectors v_w. Negative sampling is applied to calculate the likelihood of generating the j-th word in the (i+1)-th sentence, shown in Eq. 1:

log P(w_{i+1,j} | u_i) = log σ(u_i · v_{w_{i+1,j}}) + Σ_{k=1}^{K} E_{w_k ~ P_e(w)} [log σ(−u_i · v_{w_k})],    (1)

where v_{w_k} are the pretrained word vectors for w_k, the empirical distribution P_e(w) is the unigram distribution raised to the power 0.75 (BID38), and K is the number of negative samples. The learning objective is to maximise this likelihood for the words in all sentences in the training corpus.

Ideally, the inverse of h should be easy to compute, so that during testing we can set g = h^{-1}. As h is a linear projection, the simplest situation is when U is an orthogonal matrix and its inverse is equal to its transpose. Often, as the dimensionality of the vector z^f_i doesn't necessarily match that of the word vectors v_w, U is not a square matrix (BID0). To enforce invertibility on U, a row-wise orthonormal regularisation is applied to U during training, which leads to U U^T = I, where I is the identity matrix; the inverse function is then simply h^{-1}(x) = U^T x, which is easily computed. The regularisation term is ||U U^T − I||_F, where ||·||_F is the Frobenius norm. Specifically, the update rule (BID11) for the regularisation is

U ← (1 + β) U − β (U U^T) U,

where β is set to 0.01. After learning, we set W = U^T, so that the inverse of the decoder h becomes the encoder g. Compared to prior work with a generative objective, our framework reuses the decoding function for building sentence representations after learning rather than discarding it, thus information encoded in the decoder is also utilised.
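To illustrate the invertibility regularisation, here is a small NumPy sketch of the update rule above applied to a random non-square U; the shapes and iteration count are illustrative assumptions. Repeating the update drives the rows of U towards orthonormality, so the residual ||U U^T − I||_F shrinks and the transpose becomes a usable inverse.

```python
import numpy as np

def orthonormal_update(U, beta=0.01):
    """One regularisation step: U <- (1 + beta) U - beta (U U^T) U."""
    return (1.0 + beta) * U - beta * (U @ U.T) @ U

rng = np.random.default_rng(0)
U = 0.05 * rng.normal(size=(100, 200))  # non-square linear decoder
for t in range(500):
    U = orthonormal_update(U)
    if (t + 1) in (1, 100, 500):
        # residual ||U U^T - I||_F shrinks as the rows become orthonormal
        print(t + 1, np.linalg.norm(U @ U.T - np.eye(100)))
```

Once the residual is small, setting W to the transpose of U gives the avg-on-word-vectors encoder g essentially for free.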
Our multi-view framework with discriminative objective learns to maximise the agreement between the representations of a sentence pair across the two views if one sentence in the pair is in the neighbourhood of the other one. An RNN encoder f(s; φ) and a linear avg-on-word-vectors encoder g(s; W) produce the vector representations z^f_i = f(s_i; φ) and z^g_i = g(s_i; W). The agreement between two sentences across the views is defined as the cosine similarity a_ij = cos(z^f_i, z^g_j), which is converted into a probability

p_ij = exp(a_ij / τ) / Σ_k exp(a_ik / τ),

and the objective maximises log p_ij for sentence pairs within the neighbourhood, where τ is the trainable temperature term, which is essential for exaggerating the difference between adjacent sentences and those that are not. The neighbourhood/context window c and the batch size N are hyperparameters. The choice of a cosine-similarity-based loss is based on the observation (BID51) that, for word vectors derived from distributional similarity, vector length tends to correlate with the frequency of words, thus angular distance captures the more important meaning-related information. Also, since our model is unsupervised/self-supervised, whatever similarity there is between neighbouring sentences is what is learnt as important for meaning.

The post-processing step (BID6), which removes the top principal component of a batch of representations, is applied to the representations produced from f and g respectively after learning, with a final l2 normalisation. In addition, in our multi-view framework with discriminative objective, in order to reduce the discrepancy between training and testing, the top principal component is estimated by the power iteration method (BID39) and removed during learning.

Three unlabelled corpora from different genres are used in our experiments, including BookCorpus (BID59), UMBC News (BID21) and Amazon Book Review (BID35); six models are trained separately, one on each of the three corpora with each of the two objectives. The summary statistics of the three corpora can be found in TAB6. Adam optimiser (BID27) and gradient clipping (BID43) are applied for stable training. Pretrained word vectors, fastText (BID7), are used in our frameworks and fixed during learning.

TAB6: Summary statistics of the three corpora used in our experiments. For simplicity, the three corpora will be referred to as 1, 2 and 3 in the following tables respectively.

Table 2: Representation pooling in the testing phase. "max(·)", "mean(·)" and "min(·)" refer to global max-, mean- and min-pooling over time, each of which results in a single vector. The table also presents the diversity of ways in which a single sentence representation can be calculated. X_i refers to the word vectors in the i-th sentence, and H_i refers to the hidden states at all time steps produced by f.

All of our experiments, including training and testing, are done in PyTorch (BID44). The modified SentEval (BID12) package, with the step that removes the first principal component, is used to evaluate our models on the downstream tasks. Hyperparameters, including the number of negative samples K in the framework with generative objective and the context window c in the one with discriminative objective, are tuned only on the averaged performance on STS14 of the model trained on BookCorpus; STS14/G1 and STS14/D1 are thus marked in TAB1 to indicate possible overfitting on that dataset/model only. Batch size N and dimension d in both frameworks are set to be the same for fair comparison. Hyperparameters are summarised in the supplementary material.

Representation: For a given sentence input s with M words, as suggested by BID45 and BID30, the representation is calculated as z = (ẑ^f + ẑ^g)/2, where ẑ refers to the post-processed and normalised vector, as described in Table 2.
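The discriminative objective can be sketched in a few lines of PyTorch. The code below computes the cosine agreement matrix between the two views of a batch of N consecutive sentences, applies a temperature softmax row-wise, and maximises the log-probability of pairs within the context window c. Treating τ as a fixed float is a simplification (the paper trains it), and the exact target set per sentence is our assumption.

```python
import torch
import torch.nn.functional as F

def agreement_loss(zf, zg, tau=1.0, c=3):
    """Discriminative multi-view objective (a sketch, not the exact loss).

    zf, zg: (N, d) representations of N consecutive sentences from the RNN
    view f and the linear view g. a_ij = cos(z^f_i, z^g_j) is scaled by the
    temperature tau and turned into row-wise probabilities p_ij; the loss
    maximises log p_ij for sentence pairs within the context window c.
    """
    a = F.normalize(zf, dim=1) @ F.normalize(zg, dim=1).t()  # cosine matrix
    logp = F.log_softmax(a / tau, dim=1)
    n = zf.size(0)
    loss = 0.0
    for i in range(n):
        lo, hi = max(0, i - c), min(n, i + c + 1)
        loss = loss - logp[i, lo:hi].mean()
    return loss / n

# toy usage: a batch of 8 consecutive sentences with 16-dim views
loss = agreement_loss(torch.randn(8, 16), torch.randn(8, 16), tau=1.0, c=3)
```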
We compare our models with: • Unsupervised learning: we selected models with strong results from related work, including fastText and fastText+WR. • Semi-supervised learning: the word vectors are pretrained on each task (BID54) without label information, and word vectors are averaged to serve as the vector representation for a given sentence (BID6). • Supervised learning: ParaNMT (BID55) is included as a supervised learning method, as the data collection requires a neural machine translation system trained in a supervised way. InferSent (BID13), trained on SNLI and MultiNLI, is included as well.

The results are presented in TAB1. Since the performance of FastSent (BID24) and QT (BID33) was only evaluated on STS14, we compare to their results in TAB2. All six models trained with our learning frameworks outperform other unsupervised and semi-supervised learning methods, and the model trained on the UMBC News Corpus with discriminative objective gives the best performance, likely because the STS tasks contain multiple news- and headlines-related datasets, which is well matched by the domain of the UMBC News Corpus.

The evaluation on these tasks involves learning a linear model on top of the learnt sentence representations produced by each model. Since a linear model is capable of selecting the most relevant dimensions in the feature vectors to make predictions, it is preferable to concatenate various types of representations to form a richer, and possibly more redundant, feature vector, which allows the linear model to explore the combination of different aspects of the encoder functions to provide better results.

Representation: Inspired by prior work (BID36; BID46), the representation z^f is calculated by concatenating the outputs from the global mean-, max- and min-pooling on top of the hidden states H, together with the last hidden state; z^g is calculated with the three pooling functions as well. The post-processing and normalisation steps are applied individually. These two representations are concatenated to form the final sentence representation. Table 2 presents the details.

Tasks: Semantic relatedness (SICK) (BID34), paraphrase detection (MRPC) (BID18), question-type classification (TREC) (BID32), movie review sentiment (MR) (BID42), Stanford Sentiment Treebank (SST) (BID47), customer product reviews (CR) (BID25), subjectivity/objectivity classification (SUBJ) (BID41), and opinion polarity (MPQA) (BID53). The results are presented in TAB3.

Comparison: Our results, as well as related results of supervised task-dependent training models, supervised learning models, and unsupervised learning models, are presented in TAB3. Note that, for fair comparison, we collect the results of the best single model (MC-QT, BID33) trained on BookCorpus. The six models trained with our learning frameworks either outperform other existing methods or achieve similar results on some tasks. The model trained on the Amazon Book Review gives the best performance on sentiment analysis tasks, since the corpus conveys strong sentiment information. In both frameworks, the RNN encoder and the linear encoder perform well on all tasks, and the generative objective and the discriminative objective give similar performance.

The orthonormal regularisation applied on the linear decoder to enforce invertibility in our multi-view framework encourages the vector representations produced by f and those by h^{-1}, which is g in testing, to agree/align with each other. A direct comparison is to train our multi-view framework without the invertible constraint, and still directly use U as an additional encoder in testing.
Table 6: Ablation study on our multi-view frameworks. Variants of our frameworks are tested to illustrate the advantage of our multi-view learning frameworks. In general, under the proposed frameworks, learning to align representations from both views helps each view perform better, and an ensemble of both views provides stronger results than each of them. The arrow and value pair indicate how a result differs from our multi-view learning framework. Better viewed in colour.

The results of our framework with and without the invertible constraint are presented in Table 6. The ensemble method of the two views, f and g, on the unsupervised evaluation tasks (STS12-16 and SICK14) is averaging, which benefits from aligning the representations from f and g by applying the invertible constraint, and the RNN encoder f is improved on unsupervised tasks by learning to align with g. On supervised evaluation tasks, since the ensemble method is concatenation and a linear model is applied on top of the concatenated representations, as long as the encoders in the two views process sentences distinctively, the linear classifier is capable of picking relevant feature dimensions from both views to make good predictions; thus there is no significant difference between our multi-view framework with and without the invertible constraint.

In order to determine whether the multi-view framework with two different views/encoding functions is helping the learning, we compare our framework with discriminative objective to other reasonable variants, including the multi-view model with two functions of the same type but parametrised independently (either two f-s or two g-s), and the single-view model with only one f or g. Table 6 presents the results of the models trained on the UMBC News Corpus. As specifically emphasised in previous work (BID24), linear/log-linear models, which include g in our model, produce better representations for unsupervised evaluation tasks than RNN-based models do. This can be observed in Table 6 as well, where g consistently provides better results on unsupervised tasks than f. In addition, as expected, multi-view learning with f and g improves the resulting performance of f on unsupervised tasks, and also improves the resulting g on supervised evaluation tasks. Given the results of the models with generative and discriminative objectives in Table 6, we confidently show that, in our multi-view frameworks with f and g, the two encoding functions improve each other's view.

In general, aligning the representations generated from two distinct encoding functions ensures that the ensemble of them performs better. The two encoding functions f and g encode the input sentence with emphasis on different aspects, and the subsequently trained linear model for each of the supervised downstream tasks benefits from this diversity, leading to better predictions. However, on unsupervised evaluation tasks, simply averaging representations from two views without aligning them during learning leads to poor performance, worse than the g (linear) encoding function alone. Our multi-view frameworks ensure that the ensemble of two views provides better performance on both supervised and unsupervised evaluation tasks. Compared with the ensemble of two multi-view models, each with two encoding functions of the same type, our multi-view framework with f and g provides slightly better results on unsupervised tasks and similar results on supervised evaluation tasks, while our model has much higher training efficiency.
Compared with the ensemble of two single-view models, each with only one encoding function, the matching between f and g in our multi-view model produces better results.

We proposed multi-view sentence representation learning frameworks with generative and discriminative objectives; each framework combines an RNN-based encoder and an average-on-word-vectors linear encoder, and can be efficiently trained within a few hours on a large unlabelled corpus. The experiments were conducted on three large unlabelled corpora, and meaningful comparisons were made to demonstrate the generalisation ability and transferability of our learning frameworks and to consolidate our claim. The produced sentence representations outperform existing unsupervised transfer methods on unsupervised evaluation tasks, and match the performance of the best unsupervised model on supervised evaluation tasks.

Our experimental results support the finding (BID24) that linear/log-linear models (g in our frameworks) tend to work better on the unsupervised tasks, while RNN-based models (f in our frameworks) generally perform better on the supervised tasks. As presented in our experiments, multi-view learning helps align f and g to produce better individual representations than when they are learned separately. In addition, the ensemble of both views leverages the advantages of both, and provides rich semantic information about the input sentence. Future work should explore the impact of various encoding architectures and of learning under the multi-view framework. Our multi-view learning frameworks were inspired by the asymmetric information processing in the two hemispheres of the human brain, in which the left hemisphere is thought to emphasise sequential processing and the right one more parallel processing (BID9). Our experimental results raise an intriguing hypothesis about how these two types of information processing may complementarily help learning.

The details about the evaluation tasks, including the size of each dataset and the number of classes, are presented below.

The Power Iteration method was proposed in BID39, and it is an efficient algorithm for estimating the top eigenvector of a given covariance matrix. Here, it is used to estimate the top principal component of the representations produced by f and g separately. We omit the superscript here, since the same step is applied to both f and g. Suppose there is a batch of representations Z = [z_1, z_2, ..., z_N] ∈ R^{2d×N} from either f or g; the Power Iteration method is applied to estimate the top eigenvector of the covariance matrix C = Z Z^T, as described in Algorithm 1:

Algorithm 1: Estimating the First Principal Component (BID39)
Input: covariance matrix C ∈ R^{2d×2d}, number of iterations T
Output: first principal component u ∈ R^{2d}
1: initialise u randomly and normalise it
2: for t = 1, ..., T do
3:    u ← C u
4:    u ← u / ||u||
5: end for

In our experiments, T is set to 5. The hyperparameters we need to tune include the batch size N, the dimension of the GRU encoder d, the context window c, and the number of negative samples K. The results presented in this paper are based on the model trained with N = 512 and d = 1024. Specifically, in the discriminative objective the context window is set to c = 3, and in the generative objective the number of negative samples is set to K = 5. Training takes up to 8GB on a GTX 1080Ti GPU. The initial learning rate is 5 × 10^{-4}, and we didn't anneal the learning rate through the training.
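A direct NumPy rendering of Algorithm 1, together with the removal step it feeds, might look as follows; the random initialisation and the batch layout (columns as representations) are assumptions.

```python
import numpy as np

def remove_top_component(Z, T=5, seed=0):
    """Estimate the top principal component of a batch of representations
    with T steps of power iteration (Algorithm 1), then project it out.

    Z: (2d, N) matrix whose columns are representations from f or g.
    """
    C = Z @ Z.T                                  # covariance matrix
    u = np.random.default_rng(seed).normal(size=C.shape[0])
    u /= np.linalg.norm(u)
    for _ in range(T):
        u = C @ u
        u /= np.linalg.norm(u)
    return Z - np.outer(u, u @ Z)                # remove it from each column

Z = np.random.default_rng(1).normal(size=(64, 128))
print(remove_top_component(Z).shape)             # (64, 128)
```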
All weights in the model are initialised using the method proposed in BID23, all gates in the bi-GRU are initialised to 1, and all biases in the single-layer neural network are zeroed before training. The word vectors are fixed to those in fastText (BID7), and we don't finetune them. Words that are not in fastText's vocabulary are fixed to 0 vectors throughout training. The temperature term is used to convert the agreement a_ij to a probability distribution p_ij in Eq. 1 in the main paper. In our experiments, τ is a trainable parameter initialised to 1, tuned by gradient descent, and it decreased consistently through training. Another model trained with τ fixed to the final value performed similarly.

Table 2: Effect of the post-processing step. 'WR' refers to the post-processing step (BID6) which removes the principal component of a set of learnt vectors.

The post-processing step overall improves the performance of our models on unsupervised evaluation tasks, and also improves the models with generative objective on supervised sentence similarity tasks. However, it doesn't have a significant impact on single sentence classification tasks.

5 Combining both generative and discriminative objectives in our multi-view framework

Models with both generative and discriminative objectives are trained to see whether further improvement can be provided by combining an RNN encoder, the inverse of the linear decoder in the generative objective, and the linear encoder in the discriminative objective. The results of the models trained on BookCorpus and UMBC News are presented in TAB1. As presented in the table, no further improvement over models with only one objective is observed. In our understanding, the inverse of the linear decoder in the generative objective behaves similarly to the linear encoder in the discriminative objective, as presented in Table 6 in the main paper. Therefore, combining the two objectives doesn't perform better than using only one of them.

The number of parameters of each of the selected models is:
1. Ours: ≈ 8.8M
2. Quick-thought (BID33): ≈ 19.8M
3. Skip-thought (BID28): ≈ 57.7M
There are myriad kinds of segmentation, and ultimately the "right" segmentation of a given scene is in the eye of the annotator. Standard approaches require large amounts of labeled data to learn just one particular kind of segmentation. As a first step towards relieving this annotation burden, we propose the problem of guided segmentation: given varying amounts of pixel-wise labels, segment unannotated pixels by propagating supervision locally (within an image) and non-locally (across images). We propose guided networks, which extract a latent task representation (guidance) from variable amounts and classes (categories, instances, etc.) of pixel supervision, and optimize our architecture end-to-end for fast, accurate, and data-efficient segmentation by meta-learning. To span the few-shot and many-shot learning regimes, we examine guidance from as little as one pixel per concept to as much as 1000+ images, and compare to full gradient optimization at both extremes. To explore generalization, we analyze guidance as a bridge between different levels of supervision to segment classes as the union of instances. Our segmentor concentrates different amounts of supervision of different types of classes into an efficient latent representation, non-locally propagates this supervision across images, and can be updated quickly and cumulatively when given more supervision.

Many tasks of scientific and practical interest require grouping pixels, such as cellular microscopy, medical imaging, and graphic design. Furthermore, a single image might need to be segmented in several ways, for instance to first segment all people, then focus on a single person, and finally pick out their face. Learning a particular type of segmentation, or even extending an existing model to a new task like a new semantic class, generally requires collecting and annotating a large amount of data and (re-)training a large model for many iterations. Interactive segmentation with a supervisor in-the-loop can cope with less supervision, but requires at least a little annotation for each image, entailing significant effort over image collections or videos. Faced with endless varieties of segmentation and countless images, yet only so much expertise and time, a segmentor should be able to learn from varying amounts of supervision and propagate that supervision to unlabeled pixels and images.

We frame these needs as the problem of guided segmentation: given supervision from few or many images and pixels, collect and propagate this supervision to segment any given images, and do so quickly and with generality across tasks. The amount of supervision may vary widely, from a lone annotated pixel to millions of pixels in a fully annotated image, or even more across a collection of images as in conventional supervised learning for segmentation. The number of classes to be segmented may also vary depending on the task, such as when segmenting categories like cats vs. dogs, or when segmenting instances to group individual people. Guided segmentation extends few-shot learning to the structured output setting, and to the non-episodic accumulation of supervision as data is progressively annotated. Guided segmentation broadens the scope of interactive segmentation by integrating supervision across images and segmenting unannotated images. As a first step towards solving this novel problem, we propose guided networks to extract guidance, a latent task representation, from variable amounts of supervision (see Figure 1).
To do so, we meta-learn how to extract and follow guidance by training episodically on tasks synthesized from a large, fully annotated dataset. Once trained, our model can quickly and cumulatively incorporate annotations to perform new tasks not seen during training. Guided networks reconcile static and interactive modes of inference: a guided model is both able to make predictions on its own, like a fully supervised model, and to incorporate expert supervision for defining new tasks or correcting errors, like an interactive model.

Figure 1: A guide g extracts a latent task representation z from an annotated image (red) for inference by f_θ(x, z) on a different, unannotated image (blue).

Guidance, unlike static model parameters, does not require optimization to update: it can be quickly extended or corrected during inference. Unlike annotations, guidance is latent and low-dimensional: it can be collected and propagated across images and episodes for inference without the supervisor in-the-loop that interactive models need. We evaluate our method on a variety of challenging segmentation problems in Section 5: interactive image segmentation, semantic segmentation, video object segmentation, and real-time interactive video segmentation, as shown in FIG0. We further perform novel exploratory experiments aimed at understanding the characteristics and limits of guidance. We compare guidance with standard supervised learning across the few-shot and many-shot extremes of support size to identify the boundary between few-shot and many-shot learning for segmentation. We demonstrate that in some cases, our model can generalize to guide tasks at a different level of granularity, such as meta-learning from instance supervision and then guiding semantic segmentation of categories.

Guided segmentation extends few-shot learning to structured output models, statistically dependent data, and variable supervision in the amount of annotation (shot) and the number of classes (way). Guided segmentation spans different kinds of segmentation as special cases determined by the supervision that constitutes a task, such as a collection of category masks for semantic segmentation, sparse positive and negative pixels in an image for interactive segmentation, or a partial annotation of an object on the first frame of a clip for video object segmentation.

Few-shot learning: Few-shot learning (BID7; BID15) holds the promise of data efficiency: in the extreme case, one-shot learning requires only a single annotation of a new concept. The present wave of methods (BID14; BID22; BID27; BID28; BID1; BID8; BID20; BID26) frame it as direct optimization for the few-shot setting: they synthesize episodes by sampling supports and queries, define a task loss, and learn a task model for inference of the queries given the support supervision. While these works address a setting with a fixed, small number of examples and classes at meta-test time, we explore settings where the number of annotations and classes is flexible. Our approach is most closely related to episodically optimized metric learning approaches. We design a novel, efficient segmentation architecture for metric learning, inspired by Siamese networks (BID5; BID11) and few-shot metric methods (BID14; BID27; BID26) that learn a distance to retrieve support annotations for the query.
In contrast to existing meta-learning schemes, we examine how a meta-learned model generalizes across task families with a nested structure, such as performing semantic segmentation after meta-learning on instance segmentation tasks.

Segmentation: There are many kinds of segmentation, and many current directions for deep learning techniques (BID10). We take up semantic (BID6; BID17), interactive (BID13; BID2), and semi-supervised video object segmentation as challenge problems for our unified view with guidance. See FIG0 for summaries of these tasks. For semantic segmentation, BID23 proposes a one-shot segmentor (OSLSM), which requires few but densely annotated images, and must independently infer one annotation and class at a time. Our guided segmentor can segment from sparsely annotated pixels and perform multi-way inference. For video object segmentation, one-shot video object segmentation (OSVOS) achieves high accuracy by fine-tuning during inference, but this online optimization is too costly in time and fails with sparse annotations. Our guided segmentor is feed-forward, hence quick, and segments more accurately from extremely sparse annotations. BID4 impressively achieve state-of-the-art accuracy and real-time, interactive video object segmentation by replacing online optimization with offline metric learning and nearest neighbor inference on a deep, spatiotemporal embedding; however, they focus exclusively on video segmentation. We consider a variety of segmentation tasks, and investigate how guidance transfers across semantic and instance tasks and how it scales with more annotation. For interactive segmentation, BID29 and BID18 learn state-of-the-art interactive object segmentation, and BID18 needs only four annotations per object. However, these purely interactive methods infer each task in isolation and cannot pool supervision across tasks and images without optimization, while our guided segmentor quickly propagates supervision non-locally between images.

Akin to few-shot learning, we divide the input into an annotated support, which supervises the task to be done, and an unannotated query on which to do the task. The common setting in which the support contains K distinct classes and S examples of each is referred to as K-way, S-shot learning (BID15; BID7; BID27). For guided segmentation tasks we add a further pixel dimension to this setting, as we must now consider the number of support pixel annotations for each image, as well as the number of annotated support images. We denote the number of annotated pixels per image as P, and consider the settings of (S, P)-shot learning for various S and P. In particular, we focus on sparse annotations where P is small, as these are more practical to collect, and merely require the annotator to point to the segment(s) of interest. This type of data collection is more efficient than collecting dense masks by at least an order of magnitude (BID0). Since segmentation commonly has imbalanced classes and sparse annotations, we consider mixed-shot and semi-supervised supports where the shot varies by class and some points are unlabeled. This is in contrast to the standard few-shot assumption of class-balanced supports.

We define a guided segmentation task as a set of input-output pairs (T_i, Y_i) sampled from a task distribution P, adopting and extending the notation of BID9. The task inputs are

T_i = ({(x_s, L_s)}_{s=1..S}, {x_q}_{q=1..Q}),

where S is the number of annotated support images x_s, Q is the number of unannotated query images x_q, and L_s are the support annotations.
The annotations are sets of point-label pairs (p, l) with |L_s| = P per image, where every label l is one of the K classes or unknown (∅). The task outputs, that is, the targets for the support-defined segmentation task on the queries, are

Y_i = {y_q}_{q=1..Q}.

Our model handles general way K, but for exposition we focus on binary tasks with K = 2, or L = (+, −). We let Q = 1 throughout, as inference of each query is independent in our model.

Our approach to guided segmentation has two parts: extracting a task representation from the semi-supervised, structured support and segmenting the query given the task representation. We define the task representation as z = g(x, +, −), and the query segmentation guided by that representation as ŷ = f(x, z). The design of the task representation z and its encoder g is crucial for guided segmentation to handle the hierarchical structure of images and pixels, the high and variable dimensions of images and their pixelwise annotations, the semi-supervised nature of support with many unannotated pixels, and skewed support distributions. We examine how to best design the guide g and the inference f as deep networks. Our method is one part architecture and one part optimization. For architecture, we define branched fully convolutional networks, with a guide branch for extracting the task representation from the support with a novel late fusion technique (Section 4.1), and an inference branch for segmenting queries given the guidance (Section 4.2). For optimization, we adapt episodic meta-learning to image-to-image learning for structured output (Section 4.3), and increase the diversity of episodes past existing practice by sampling within and across segmentation task families like categories and instances.

The task representation z must fuse the visual information from the image with the annotations in order to determine what should be segmented in the query. As images with (partial) segmentations, our support is statistically dependent because pixels are spatially correlated, semi-supervised because the full supervision is arduous to annotate, and high dimensional and class-skewed because scenes are sizable and complicated. For simplicity, we first consider a binary task with (1, P)-shot support consisting of one image with an arbitrary number of annotated pixels P, and then extend to multi-way tasks and general (S, P)-shot support. To begin, we decompose the support encoder g(x_s, +_s, −_s) across receptive fields indexed by i for local task representations z_i = g(x_si, +_si, −_si); this is the same independence assumption made by existing fully convolutional approaches to structured output. See FIG1 for an overview and our novel late global fusion technique.

Early Fusion (prior work): Stacking the image and annotations channel-wise at the input gives z_si = g_early(x, +, −) = φ_S(x ⊕ + ⊕ −) with a support feature extractor φ_S. This early fusion strategy, employed by BID29, gives end-to-end learning full control of how to fuse. Masking the image by the positive pixels (BID23; BID30) instead forces invariance to context, potentially speeding up learning, but precludes learning from the context and disturbs input statistics. All early fusion techniques suffer from an inherent modeling issue: incompatibility of the support and query representations. Stacking requires distinct φ_S, φ_Q, while masking disturbs the input distribution.
Early fusion is slow, since changes in annotations trigger a full pass through the network, and only one task can be inferred at a time, limiting existing interactive and few-shot segmentors alike (BID29; BID18; BID23).

Late Fusion (ours): We resolve the learning and inference issues of early fusion by factorizing features and annotations in the guide architecture as z_si = g_late(x, +, −) = ψ(φ(x), m(+), m(−)). We first extract visual features from the image alone by φ(x), map the annotations into masks in the feature layer coordinates by m(+), m(−), and then fuse both by ψ, chosen to be the element-wise product. This factorization into visual and annotation branches defines the spatial relationship between image and annotations, improving learning sample efficiency and inference computation time. Fixing m to interpolation and ψ to multiplication, the task representation can be updated quickly by recomputing only the masking and not the features φ. See FIG1 (center). We do not model a distribution over z, although this is a possible extension of our work for regularization or sampling diverse segmentations.

Our late fusion architecture can share the feature extractor φ for joint optimization through the support and query. Sharing improves learning efficiency, with convergence in fewer iterations, and task accuracy, with 60% relative improvement for video object segmentation. Late fusion reduces inference time, as only the masking needs to be recomputed to incorporate new annotations, making it capable of real-time interactive video segmentation. Optimization-based methods instead need seconds or minutes to update.

Locality: We are generally interested in segmentation tasks that are determined by visual characteristics and not absolute location in space or time, i.e., the task is to group pixels of an object and not pixels in the bottom-left of an image. When the support and query images differ, there is no known spatial correspondence, and the only mapping between support and query should be through features. To fit the architecture to this task structure, we merge the local task representations by m_P({z_si : ∀i}) over all positions i. Choosing global pooling for m_P globalizes the task by discarding the spatial dimensions. The pooling step can be done by averaging, our choice, or other reductions. The effect of pooling in an image with multiple visually similar objects is shown in FIG1 (right).

Multi-Shot and Multi-Way: The full (S, P)-shot setting requires summarizing the entire support, with a variable number of images carrying varying amounts of pixelwise annotations. Note that in this case the annotations might be divided across the support; for instance, one frame of a video may have only positives while a different frame has only negatives, so S-shot cannot always be reduced to 1-shot, as done in prior work (BID23). We form the full task representation z_S = m_S({z_1, ..., z_S}) simply and differentiably by averaging the shot-wise representations z_s. While we have considered binary tasks thus far, we extend guidance to multi-way inference, as we do in our experiments. We construct a separate guide for each class, averaging across all shots containing annotations for that class. Note that all the guides share φ for efficiency and differ only in the masking.
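As a concrete illustration of the late fusion guide, here is a PyTorch sketch for one binary (1, P)-shot support image: annotations are interpolated into feature coordinates (m), fused with the shared features by element-wise product (ψ), and globally average-pooled over annotated positions (m_P). Concatenating the positive and negative guides into one vector is our own simplification; the paper keeps per-class guides.

```python
import torch
import torch.nn.functional as F

def late_fusion_guide(feats, pos, neg):
    """Guide g_late for one support image (a sketch).

    feats: (C, H, W) visual features phi(x) from the shared backbone.
    pos, neg: (h, w) binary annotation maps in input coordinates.
    """
    C, H, W = feats.shape

    def pool(points):
        # m(.): map annotations into feature-layer coordinates
        mask = F.interpolate(points[None, None].float(), size=(H, W),
                             mode="bilinear", align_corners=False)[0, 0]
        fused = feats * mask                      # psi: element-wise product
        denom = mask.sum().clamp(min=1e-6)
        return fused.sum(dim=(1, 2)) / denom      # m_P: global average pool

    return torch.cat([pool(pos), pool(neg)])      # (2C,) task representation z

z = late_fusion_guide(torch.randn(256, 32, 32),
                      torch.zeros(256, 256).bernoulli_(0.01),
                      torch.zeros(256, 256).bernoulli_(0.01))
```

Because φ(x) is computed once, new clicks only change the cheap masking and pooling steps, which is what makes real-time updates possible.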
We further structure inference by f(φ(x), z), where φ is a fully convolutional encoder from input pixels to visual features. Multiple forms of conditioning are possible and have been explored for low-dimensional classification and regression problems in the few-shot learning literature. In preliminary experiments we considered parameter regression, nearest neighbor and prototype retrieval, and metric learning on fused features. We select metric learning with feature fusion because it was simple and robust to optimize. Note that feature fusion is similar to siamese architectures, but we directly optimize the classification loss rather than a contrastive loss. In particular, we fuse features by m_f = φ(x) ⊕ tile(z), which concatenates the guide with the query features while tiling z to the spatial dimensions of the query (see the sketch after this section). The fused query-support feature is then scored by a small convolutional network f_θ that can be interpreted as a learned distance metric for retrieval from support to query. For multi-way guidance, the fusions of the query and each guide are batched for parallel inference.

We distinguish between optimizing the parameters of the model during training (learning) and adapting the model during inference (guidance). Thus during training, we wish to "learn to guide." In standard supervised learning, the model parameters θ are optimized according to the loss between the prediction ŷ = f_θ(x) and the target y. We reduce the problem of learning to guide to supervised learning by jointly optimizing the parameters of the guidance branch g and the segmentation branch f according to the loss between f_θ(x, z) and the query target y; see FIG2. For clarity, we distinguish between tasks, a given support and query for segmentation, and task distributions that define a kind of segmentation problem. For example, semantic segmentation is a task distribution while segmenting birds (a semantic class) is a task. We train a guided network for each task distribution by optimizing episodically on sampled tasks. The supports and queries that comprise an episode are synthesized from a fully labeled dataset. We first sample a task, then a subset of images containing that task, which we divide into support and query. During training, the target for the query image is available, while for testing it is not. We binarize support and query annotations to encode the task, and spatially sample support annotations for sparsity. Given inputs and targets, we train the network with the pixelwise cross-entropy loss between the predicted and target segmentation of the query. See Sections 7.1 and 7.2 for more details on data processing and network optimization respectively. After learning, the model parameters are fixed, and task inference is determined by guidance. While we evaluate for varying support size S, as described in Section 4.2, we train with S = 1 for efficiency while sampling P ∼ Uniform. Once learned, our guided networks can operate at different (S, P) shots to address sparse and dense pixelwise annotations with the same model, unlike existing methods that train for a particular shot and way. In our experiments, we train with tasks sampled from a single task distribution, but co- or cross-supervision of distributions is possible. Intriguingly, we see some transfer between distributions when evaluating a guided network on a different distribution than it was trained on in Section 5.3.
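A sketch of the inference-side fusion m_f = φ(x) ⊕ tile(z) described above: the guide vector is tiled to the query's spatial grid, concatenated with the query features channel-wise, and scored per location. The plain linear map over channels below stands in for the small scoring network f_θ; all names and shapes are illustrative assumptions.

```python
import numpy as np

def fuse_and_score(query_feats, z, score_weights):
    """query_feats: (C, H, W); z: (Z,); score_weights: (C + Z,) toy 1x1 scorer."""
    c, h, w = query_feats.shape
    tiled = np.broadcast_to(z[:, None, None], (z.shape[0], h, w))  # tile(z)
    fused = np.concatenate([query_feats, tiled], axis=0)           # phi(x) (+) tile(z)
    # Per-pixel score: a learned metric between query features and the guide.
    return np.tensordot(score_weights, fused, axes=1)              # (H, W) logits

rng = np.random.default_rng(1)
qf = rng.standard_normal((64, 32, 32))
z = rng.standard_normal(128)
w = rng.standard_normal(64 + 128)
logits = fuse_and_score(qf, z, w)
mask = logits > 0            # binary prediction for the (+) class
print(logits.shape, mask.mean())
```

For multi-way guidance, one would batch this call over the per-class guides, which is why sharing φ across guides keeps the cost of extra classes small.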
We evaluate our guided segmentor on a variety of problems that are representative of segmentation as a whole: interactive segmentation, semantic segmentation, and video object segmentation. These are conventionally regarded as separate problems, but we demonstrate that each can be viewed as an instantiation of guided segmentation. As a further demonstration of our method, we present results for real-time, interactive video segmentation from dot annotations. To better understand the characteristics of guidance, we experiment with cross-task supervision in Section 5.2 and guiding with large-scale supports in Section 5.3.

To standardize evaluation we select one metric for all tasks: the intersection-over-union (IU) of the positives averaged across all tasks and masks. This choice allows us to compare scores across the different kinds of segmentation we consider without skew from varying numbers of classes or images. Note that this metric is not equivalent to the mean IU across classes that is commonly reported for semantic segmentation. Please refer to Section 7.3 for more detail. We include fine-tuning and foreground-background segmentation as baselines for all problems. Fine-tuning simply attempts to optimize the model on the support. Foreground-background verifies that methods are learning to co-vary their output with the support supervision, and sets an accuracy floor. The backbone of our networks is VGG-16 BID25, pre-trained on ILSVRC BID21, and cast into fully convolutional form BID24. This choice is made for fair comparison with existing works across our challenge tasks of semantic, interactive, and video object segmentation without confounds of architecture, pre-training data, and so forth.

Interactive Image Segmentation We recover this problem as a special case of guided segmentation when the support and query images are identical. We evaluate on PASCAL VOC BID6 and compare to deep interactive object selection (DIOS) BID29, because it is state-of-the-art and shares our focus on learning for label efficiency and generality. Our approach differs in support encoding: DIOS fuses early while we fuse late and globally. Our guided segmentor is more accurate with extreme sparsity and intrinsically faster to update, as DIOS must do a full forward pass. See Figure 5 (left). From this we decide on late-global guidance throughout.

Video Object Segmentation We evaluate our guided segmentor on the DAVIS 2017 benchmark of 2-3 second videos. For this problem, the object indicated by the fully annotated first frame must be segmented across the video. We then extend the benchmark to sparse annotations to gauge how methods degrade. We compare to OSVOS, a state-of-the-art online optimization method that fine-tunes on the annotated frame and then segments the video frame-by-frame. While BID4 presents impressive results on this task and on real-time interactive video segmentation without optimization, their scope is limited to video, and they employ orthogonal improvements that make comparison difficult. We were unable to reproduce their results in our own experimental framework. See Figure 6 (left) for a comparison of accuracy, speed, and annotation sparsity. In the dense regime our method achieves 33.3% accuracy for 80% relative improvement over OSVOS in the same time envelope. Given (much) more time, fine-tuning significantly improves in accuracy, but takes 10+ min/video. Guidance is ∼200× faster at 3 sec/video.
Our method handles extreme sparsity with little degradation, maintaining 87% of the dense accuracy with only 5 points each for positive and negative. Fine-tuning struggles to optimize over so few annotations.

Interactive Video Segmentation By dividing guidance and inference, our guided segmentor can interactively segment video in real time. As an initial evaluation, we simulate interactions with randomly-sampled dot annotations. We define a benchmark by fixing the amount of annotation and measuring accuracy as the annotations are given. The accuracy-annotation tradeoff curve is plotted in Figure 6 (right). Our guided segmentor improves with both dimensions of shot, whether images (S) or pixels (P). Our guided architecture is feedforward and fast, and faster still to update for changes to the annotations.

Semantic Segmentation Semantic segmentation is a challenge for learning from little data due to the high intra-class variance of appearance. For this problem it is crucial to evaluate not only on held-out inputs, but on held-out classes, to be certain the guided learner has not covertly learned to be an unguided semantic segmentor. To do so we follow the experimental protocol of BID23 and score by averaging across four class-wise splits of PASCAL VOC BID6, which has 21 classes (including background), and compare to OSLSM. Our approach achieves state-of-the-art sparse results that rival the most accurate dense results with just two labeled pixels: see Figure 5 (right). OSLSM is incompatible with missing annotations, as it does early fusion by masking, and so is only defined for {0, 1} annotations. To evaluate it we map all missing annotations to negative. Foreground-background is a strong baseline, and we were unable to improve on it with fine-tuning. The oracle is trained on all classes (nothing is held-out).

We carry out a novel examination of meta-learning with cross-task supervision. In the language of task distributions, the distribution of instance tasks for a given semantic category is nested in the distribution of tasks for that category. We investigate whether meta-training on the sub-tasks (instances) can address the super-tasks (classes). This tests whether guidance can capture an enumerative definition of a semantic class as the union of instances in that category. To do so, we meta-train our guided segmentor on interactive instance segmentation tasks drawn from all classes of PASCAL VOC BID6, and then evaluate the model on semantic segmentation tasks from all categories. We experiment with (S, 1) support from semantic annotations, where S varies from one image to all the images in the training set, as shown in the accompanying plot. We compare to foreground-background as a class-agnostic accuracy floor, and a standard semantic segmentation net trained with semantic labels as an oracle. Increasing the amount of semantic annotations for guidance steadily increases accuracy. Thus far we have considered guidance at a variable but constrained scale of annotations, ranging from a single pixel in a single image to a few fully annotated images. We meta-learned our guided networks over episodes with such support sizes, and they perform accordingly well in this regime. Here we consider a much wider spectrum of support sizes, with the goal of understanding how guidance compares to standard supervised learning at both ends of the spectrum. To the best of our knowledge, this is the first evaluation of how few-shot learning scales to many-shot usage for structured output.
For this experiment we compare guidance and supervised learning on a transfer task between disjoint semantic categories. We take the classes of PASCAL VOC BID6 as source classes, and take the non-intersecting classes of COCO BID16 as the target classes. We divide COCO 2017 validation into class-balanced train/test halves to look at transfer from a practical amount of annotation (thousands instead of more than a hundred thousand images). Our guided segmentor is meta-trained with semantic tasks sampled from the source classes, then guided with 5,989 densely annotated semantic masks from the target classes. For fair comparison, the supervised learner is first trained on the source classes, and then fine-tuned on the same annotated target data. Both methods share the same ILSVRC pre-training, backbone architecture, and (approximate) number of parameters. In this many-shot regime, guidance achieves 95% of supervised learning performance. A key point of this result is to shed light on the spectrum of supervision that spans few-shot and many-shot settings, and encourage future work to explore bridging the two.

Guided segmentation unifies annotation-bound segmentation problems. Guided networks reconcile task-driven and interactive inference by extracting guidance, a latent task representation, from any amount of supervision given. With guidance, our segmentor revolver can learn and infer tasks without optimization, improve its accuracy near-instantly with more supervision, and, once guided, can segment new images without the supervisor in the loop.

7.1 DATA PREPARATION

Semantic Segmentation and Interactive Segmentation on PASCAL We use PASCAL VOC 2012 BID6 with the additional annotations of SBDD BID12. We define the training set to be the union of the VOC and SBDD training sets, and take the validation set to be the union of the VOC and SBDD validation sets, excluding the images in VOC val that overlap with SBDD train. We sparsify the dense masks with random sampling, which we found resulted in performance about equal to the more complex sampling strategies of BID29. Thus for a given P, we sample P points randomly from each of the objects to be segmented, as well as the background (a sketch of this sparsification follows this subsection). Labels for classes or instances that are not part of the task are relabeled to background. The process of sampling a task and sparsifying and remapping the ground truth labels is illustrated in FIG2.

For few-shot semantic segmentation, we follow the experimental protocol of BID23. We test few-shot performance on held-out classes by dividing the 20 classes of PASCAL into 4 sets of 5 classes. Images that contain both held-out and training classes are placed in the held-out set. We subsample splits with more images to ensure that each split contains the same number of images. For each split, we meta-train a guided segmentor with binary tasks sampled from the 15 training classes. We then compute the average performance across 1000 binary tasks sampled from the 5 held-out classes, and average across all four splits.

We use the DAVIS 2017 benchmark of 2-3s video clips. We meta-train on the training videos and report average performance on the validation videos. During training, we synthesize tasks by sampling any two frames from the same video and treating one as the support and the other as the query. During testing, the support consists of all labeled frames, while the remaining frames comprise the query. For the video object segmentation benchmark, the first frame is densely labeled.
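A minimal sketch of the annotation sparsification just described: binarize the ground-truth map for the sampled task, then randomly keep P points each from the task region and the background. Function names are our own, and the real pipeline additionally handles instance tasks and label remapping.

```python
import numpy as np

def sparsify(dense_labels, task_class, P, rng):
    """Return (pos, neg) sparse point masks for one support image.

    dense_labels: (H, W) integer class map; task_class: the sampled task's class id.
    Pixels outside the task are treated as background (negatives).
    """
    pos_full = dense_labels == task_class
    neg_full = ~pos_full
    pos, neg = np.zeros_like(pos_full), np.zeros_like(pos_full)
    for full, sparse in ((pos_full, pos), (neg_full, neg)):
        idx = np.flatnonzero(full)
        if idx.size:
            keep = rng.choice(idx, size=min(P, idx.size), replace=False)
            sparse.flat[keep] = True
    return pos, neg

rng = np.random.default_rng(2)
labels = np.zeros((256, 256), dtype=int); labels[60:120, 80:160] = 7
pos, neg = sparsify(labels, task_class=7, P=5, rng=rng)
print(pos.sum(), neg.sum())   # 5 positives, 5 negatives
```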
For interactive video segmentation, varying numbers of frames are labeled with varying numbers of pixelwise labels.

The backbone of our guided networks, as well as our baseline networks, is VGG-16 BID25, pre-trained on ILSVRC BID21, and cast into fully convolutional form BID24. We largely follow the optimization procedure for FCNs detailed in BID24: we optimize our guided nets by SGD with a learning rate of 1e-5, batch size 1, high momentum 0.99, and weight decay of 5e-4. The interpolation weights in the decoder are fixed to bilinear and not learned. Note that we normalize the loss by the number of pixels in each image in order to simplify learning rate selection across different datasets with varying image dimensions.

Intersection-over-union (IU) is a standard metric for segmentation, but different families of segmentation tasks choose different forms of the metric. We report the IU of positives accumulated across all tasks and masks, defined as Σ_i tp_i / Σ_i (tp_i + fp_i + fn_i), where i ranges over ground truth segment masks. This choice makes performance comparable across tasks, because it is independent of the number of classes. We choose not to include negatives in the metric because it adds no information, given the binary nature of the scoring, even for multi-class predictions and ground truth, since these are handled as a set of binary tasks by the metric. Note that this metric is not directly comparable to the mean IU across classes typically reported for semantic segmentation benchmarks. As a point of comparison, the 0.62 mean IU achieved by FCN-32s on the PASCAL segmentation benchmark corresponds to 0.45 positive IU.
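For reference, the positive IU defined above is a direct accumulation of true positives, false positives, and false negatives over all masks. The sketch below is a straightforward reading of the formula, not the benchmark's official scoring script; `pairs` is a list of (prediction, ground truth) binary mask pairs, one per ground-truth segment.

```python
import numpy as np

def positive_iu(pairs):
    """IU of positives accumulated over all tasks/masks: sum tp / sum (tp+fp+fn)."""
    tp = fp = fn = 0
    for pred, gt in pairs:
        pred, gt = pred.astype(bool), gt.astype(bool)
        tp += np.sum(pred & gt)
        fp += np.sum(pred & ~gt)
        fn += np.sum(~pred & gt)
    return tp / float(tp + fp + fn)

rng = np.random.default_rng(3)
gt = rng.random((64, 64)) > 0.7
pred = rng.random((64, 64)) > 0.7
print(positive_iu([(pred, gt)]))
```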
We propose a meta-learning approach for guiding visual segmentation tasks from varying amounts of supervision.
Training generative adversarial networks requires balancing delicate adversarial dynamics. Even with careful tuning, training may diverge or end up in a bad equilibrium with dropped modes. In this work, we introduce a new form of latent optimisation inspired by the CS-GAN and show that it improves adversarial dynamics by enhancing interactions between the discriminator and the generator. We develop supporting theoretical analysis from the perspectives of differentiable games and stochastic approximation. Our experiments demonstrate that latent optimisation can significantly improve GAN training, obtaining state-of-the-art performance for the ImageNet (128 × 128) dataset. Our model achieves an Inception Score (IS) of 148 and a Fréchet Inception Distance (FID) of 3.4, an improvement of 17% and 32% in IS and FID respectively, compared with the baseline BigGAN-deep model with the same architecture and number of parameters.

Generative Adversarial Nets (GANs) are implicit generative models that can be trained to match a given data distribution. GANs were originally proposed and demonstrated for images by Goodfellow et al. (2014). As the field of generative modelling has advanced, GANs have remained at the frontier, generating high-fidelity images at large scale. However, despite growing insights into the dynamics of GAN training, most recent advances in large-scale image generation come from architectural improvements, or regularisation focusing on particular parts of the model. Inspired by the compressed sensing GAN (CS-GAN; Wu et al. 2019), we further exploit the benefit of latent optimisation in adversarial games, using natural gradient descent to optimise the latent variable z at each step of training, presenting a scalable and easy to implement approach to improve the dynamical interaction between the discriminator and the generator. For clarity, we unify these approaches as latent optimised GANs (LOGAN). To summarise our contributions: 1. We present a novel analysis of latent optimisation in GANs from the perspective of differentiable games and stochastic approximation, arguing that latent optimisation can improve the dynamics of adversarial training. 2. Motivated by this analysis, we improve latent optimisation by taking advantage of efficient second-order updates. 3. Our algorithm improves the state-of-the-art BigGAN-deep model by a significant margin, without introducing any architectural change or additional parameters, resulting in higher quality images and more diverse samples (Figures 1 and 2).

We use θ_D and θ_G to denote the vectors representing parameters of the generator and discriminator. We use x for images, and z for the latent source generating an image. The prime is used to denote a variable after an update, e.g., z′ for the optimised latent. p(x) and p(z) denote the data distribution and source distribution respectively. E_{p(x)}[f(x)] indicates taking the expectation of function f(x) over the distribution p(x). A GAN consists of a generator that generates an image x = G(z; θ_G) from a latent source z ∼ p(z), and a discriminator that scores the generated images as D(x; θ_D). Training GANs involves an adversarial game: while the discriminator tries to distinguish generated samples x = G(z; θ_G) from data x ∼ p(x), the generator tries to fool the discriminator. This procedure can be summarised as the following min-max game:

min_{θ_G} max_{θ_D} E_{x∼p(x)}[h_D(D(x; θ_D))] + E_{z∼p(z)}[h_G(D(G(z; θ_G); θ_D))].   (1)

The exact form of h(·) depends on the choice of loss function. To simplify our presentation and analysis, we use the Wasserstein loss, so that h_D(t) = −t and h_G(t) = t.
Our experiments with BigGAN-deep use the hinge loss, which is identical to this form in its linear regime. Our analysis can be generalised to other losses as in previous theoretical work (e.g., Arora et al. 2017). To simplify notation, we abbreviate f(z; θ_D, θ_G) := D(G(z; θ_G); θ_D), which may be further simplified as f(z) when the explicit dependency on θ_D and θ_G can be omitted. Training GANs requires carefully balancing updates to D and G, and is sensitive to both architecture and algorithm choices. A recent milestone is BigGAN (and BigGAN-deep, Brock et al. 2018), which pushed the boundary of high fidelity image generation by scaling up GANs to an unprecedented level. BigGANs use an architecture based on residual blocks, in combination with regularisation mechanisms and self-attention.

Here we aim to improve the adversarial dynamics during training. We focus on the second term in eq. 1, which is at the heart of the min-max game, with adversarial losses for D and G that can be written as L_D(z) = h_D(f(z)) and L_G(z) = h_G(f(z)) (eq. 2). Computing the gradients with respect to θ_D and θ_G gives the following gradient, which cannot be expressed as the gradient of any single function:

g = [∂L_D(z)/∂θ_D ; ∂L_G(z)/∂θ_G].   (3)

The fact that g is not the gradient of a function implies that gradient updates in GANs can exhibit cycling behaviour which can slow down or prevent convergence. Vector fields of this form are referred to as the simultaneous gradient. Although many GAN models use alternating update rules (e.g., Goodfellow et al. 2014; Brock et al. 2018), following the gradient with respect to θ_D and θ_G alternately in each step, they can still suffer from cycling, so we use the simpler simultaneous gradient (eq. 3) for our analysis.

Inspired by compressed sensing, Wu et al. (2019) introduced latent optimisation for GANs. We call this type of model latent-optimised GANs (LOGAN). Latent optimisation has been shown to improve the stability of training as well as the final performance for medium-sized models such as DCGANs and Spectral Normalised GANs. Latent optimisation exploits knowledge from D to guide updates of z. Intuitively, the gradient ∂f(z)/∂z points in the direction that better satisfies the discriminator D, which implies better samples. Therefore, instead of using the randomly sampled z ∼ p(z), LOGAN uses the optimised latent z′ in eq. 1 for training. The general algorithm is summarised in Algorithm 1 and illustrated in Figure 3a. We develop the natural gradient descent form of the latent update in Section 4.

Algorithm 1 (sketch):
  repeat:
    sample z ∼ p(z) and a batch of data;
    compute ∂f(z)/∂z and use it to obtain ∆z from eq. 4 (GD) or eq. 12 (NGD);
    optimise the latent z ← [z + ∆z], where [·] indicates clipping the value between −1 and 1;
    compute the discriminator and generator losses;
    update θ_D and θ_G with their gradients;
  until reaching the maximum number of training steps.

3 ANALYSIS OF THE ALGORITHM

To understand how latent optimisation improves GAN training, we analyse LOGAN as a 2-player differentiable game, following the differentiable games literature. The appendix provides a complementary analysis that relates LOGAN to unrolled GANs and stochastic approximation. An important problem with gradient-based optimization in GANs is that the vector field generated by the losses of the discriminator and generator is not a gradient vector field. It follows that gradient descent is not guaranteed to find a local optimum and can cycle, which can slow down convergence or lead to phenomena like mode collapse and mode hopping. Balduzzi et al. proposed Symplectic Gradient Adjustment (SGA) to improve the dynamics of gradient-based methods in adversarial games. For a game with gradient g (eq. 3), define the Hessian as the second order derivatives with respect to the parameters, H = ∇_θ g.
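The gradient-descent form of the latent step in Algorithm 1 is only a few lines. The sketch below uses a linear toy generator and discriminator so the gradient ∂f/∂z has a closed form; in a real model this gradient comes from backpropagation, and all names and step sizes here are illustrative.

```python
import numpy as np

def latent_step_gd(z, grad_f, alpha=0.9):
    """One gradient-descent latent update: z <- clip(z + alpha * df/dz, -1, 1)."""
    return np.clip(z + alpha * grad_f(z), -1.0, 1.0)

# Toy f(z) = D(G(z)) with linear G and D, so df/dz = G^T d.
# Real generators/discriminators are deep networks; this is purely illustrative.
rng = np.random.default_rng(4)
G = rng.standard_normal((8, 4)) * 0.1   # "generator": z (4,) -> x (8,)
d = rng.standard_normal(8) * 0.1        # "discriminator" weights: D(x) = d . x
grad_f = lambda z: G.T @ d              # constant gradient in this linear toy

z = rng.uniform(-1, 1, size=4)
z_prime = latent_step_gd(z, grad_f)     # optimised latent z' used for training
print(z, z_prime)
```

Note the `+` sign: the latent moves in the direction that increases f(z) = D(G(z)), i.e., toward samples the discriminator scores more highly.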
SGA uses the adjusted gradient g* = g + λ Aᵀ g, where λ is a positive constant and A = ½(H − Hᵀ) is the anti-symmetric component of the Hessian. Applying SGA to GANs yields the adjusted updates in eq. 6 (see Appendix B.1 for details). Compared with g in eq. 3, the adjusted gradient g* has second-order terms reflecting the interactions between D and G. SGA has been shown to significantly improve GAN training in basic examples, allowing faster and more robust convergence to stable fixed points (local Nash equilibria). Unfortunately, SGA is expensive to scale because computing the second-order derivatives with respect to all parameters is expensive.

Explicitly computing the gradients for the discriminator and generator at z′ after one step of latent optimisation (eq. 4) obtains eq. 7 and 8. In both equations, the first terms represent how f(z) depends on the parameters directly, and the second terms represent how f(z) depends on the parameters via the optimised latent source. For the second equality, we substitute ∆z = α ∂f(z)/∂z as the gradient-based update of z and apply the chain rule through ∆z. The original GAN's gradient (eq. 3) does not include any second-order term, since ∆z = 0 without latent optimisation. In LOGAN, these extra terms are computed by automatic differentiation when back-propagating through the latent optimisation process (see Algorithm 1). The SGA updates in eq. 6 and the LOGAN updates in eq. 8 are strikingly similar, suggesting that the latent step used by LOGAN reduces the negative effects of cycling by introducing a symplectic gradient adjustment into the optimization procedure. The role of the latent step can be formalized in terms of a third player, whose goal is to help the generator; see Appendix B for details. Crucially, latent optimisation approximates SGA using only second-order derivatives with respect to the latent z and the parameters of the discriminator and generator separately. The second-order terms involving parameters of both the discriminator and the generator (which are extremely expensive to compute) are not used. For latent z's with dimensions typically used in GANs (e.g., 128-256, orders of magnitude less than the number of parameters), these terms can be computed efficiently. In short, latent optimisation efficiently couples the gradients of the discriminator and generator, as prescribed by SGA, but using the much lower-dimensional latent source z, which makes the adjustment scalable.

An important consequence of reducing the rotational aspect of GAN dynamics is that it is possible to use larger step sizes during training, which suggests using stronger optimisers to fully take advantage of latent optimisation. Latent optimisation can improve GAN training dynamics further by allowing larger single steps ∆z in the direction of ∂f(z)/∂z without overshooting. Appendix B further explains how LOGAN relates to unrolled GANs and stochastic approximation. Our main finding is that latent optimisation accelerates the speed of updating D relative to that of G, facilitating convergence according to two-time-scale stochastic approximation arguments (Appendix B). In particular, the generator requires smaller updates compared with D to achieve the same reduction of loss, because latent optimisation "helps" G. Although our analysis suggests using strong optimisers for optimising z, Wu et al. (2019) only used basic gradient descent (GD) with a fixed step-size. This choice limits the size ∆z can take: in order not to overshoot when the curvature is large, the step size would be too conservative when the curvature is small.
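To see the adjustment g* = g + λ Aᵀg in action, consider the classic bilinear game min_x max_y xy, whose simultaneous gradient rotates around the equilibrium at the origin; the symplectic adjustment adds a component pulling the iterates inward. This standalone toy is our own illustration, not an experiment from the paper.

```python
import numpy as np

# Bilinear game: player 1 minimises f(x, y) = x*y, player 2 maximises it.
# Simultaneous gradient xi = (df/dx, -df/dy) = (y, -x) rotates around (0, 0).
def simultaneous_grad(v):
    x, y = v
    return np.array([y, -x])

def sga_grad(v, lam=0.5):
    # Jacobian of xi is H = [[0, 1], [-1, 0]]; A = (H - H^T)/2 = H here.
    A = np.array([[0.0, 1.0], [-1.0, 0.0]])
    g = simultaneous_grad(v)
    return g + lam * A.T @ g

v_plain, v_sga = np.array([1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(200):
    v_plain = v_plain - 0.05 * simultaneous_grad(v_plain)
    v_sga = v_sga - 0.05 * sga_grad(v_sga)
# Plain Euler updates spiral outward; the SGA-adjusted updates contract to 0.
print(np.linalg.norm(v_plain), np.linalg.norm(v_sga))
```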
We hypothesise that GD is more detrimental for larger models, which have complex loss surfaces with highly varying curvatures. Consistent with this hypothesis, we observed only marginal improvement over the baseline using GD (Section 5.3, Table 1). In this work, we instead use natural gradient descent (NGD, Amari 1998) for latent optimisation. NGD can be seen as an approximate second-order optimisation method, and has been applied successfully in many domains. By using the positive semidefinite (PSD) Gauss-Newton matrix to approximate the (possibly negative definite) Hessian, NGD often works even better than exact second-order methods. NGD is expensive in high dimensional parameter spaces, even with approximations. However, we demonstrate that it is efficient for latent optimisation, even in very large models. Given the gradient of z, g = ∂f(z)/∂z, NGD computes the update as ∆z = α F⁻¹ g, where the Fisher information matrix F is defined as F = E_{p(t|z)}[∇ ln p(t|z) ∇ ln p(t|z)ᵀ]. The log-likelihood function ln p(t|z) typically corresponds to commonly used error functions such as the cross entropy loss. This correspondence is not necessary when NGD is interpreted as an approximate second-order method, as has long been done in practice. Nevertheless, Appendix C provides a Poisson log-likelihood interpretation for the hinge loss commonly used in GANs. An important difference between latent optimisation and commonly seen scenarios using NGD is that the expectation over the condition (z) is absent. Since each z is only responsible for generating one image, it only minimises the loss L_G(z) for this particular instance. Computing the per-sample Fisher this way is necessary to approximate SGA (see Appendix B.1 for details). More specifically, we use the empirical Fisher F̂ with Tikhonov damping, as in TONGA: F̂ = g gᵀ + β I. F̂ is cheaper to compute compared with the full Fisher, since g is already available. The damping factor β regularises the step size, which is important when F̂ only poorly approximates the Hessian or when the Hessian changes too much across the step. Using the Sherman-Morrison formula, the NGD update can be simplified into the following closed form:

∆z = α g / (β + ‖g‖²),   (12)

which does not involve any matrix inversion. Thus, NGD adapts the step size according to the curvature estimate c = 1/(β + ‖g‖²). Figure 4a illustrates the scaling for different values of β. NGD automatically smooths the scale of updates by down-scaling the gradients as their norm grows, which also contributes to the smoothed norms of updates (Figure 4b). Since the NGD update remains proportional to g, our analysis based on gradient descent in Section 3 still holds.

We focus on large scale GANs based on BigGAN-deep trained on 128 × 128 size images from the ImageNet dataset. In Appendix E, we present results from applying our algorithm to Spectral Normalised GANs trained on the CIFAR dataset, which obtains state-of-the-art scores for this model. We used the standard BigGAN-deep architecture with three minor modifications: 1. We increased the size of the latent source from 128 to 256, to compensate for the randomness of the source lost when optimising z. 2. We use the uniform distribution U(−1, 1) instead of the standard normal distribution N for p(z), to be consistent with the clipping operation (Algorithm 1). 3. We use leaky ReLU instead of ReLU as the non-linearity, for smoother gradient flow of ∂f(z)/∂z. Consistent with detailed findings that these changes have limited effect, our experiment with this baseline model obtains only slightly better scores compared with those reported previously (Table 1; see also Figure 8).
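The closed-form NGD step above is a one-liner: with the empirical Fisher ĝĝᵀ + βI, Sherman-Morrison collapses F̂⁻¹g to g/(β + ‖g‖²). A minimal sketch, with α and β taken from the values quoted in the experiments:

```python
import numpy as np

def latent_step_ngd(z, g, alpha=0.9, beta=5.0):
    """Natural-gradient latent update with empirical Fisher F = g g^T + beta I.

    By Sherman-Morrison, F^{-1} g = g / (beta + ||g||^2), so the step is a
    rescaled gradient: large gradients are automatically damped.
    """
    delta_z = alpha * g / (beta + g @ g)
    return np.clip(z + delta_z, -1.0, 1.0)

rng = np.random.default_rng(5)
z = rng.uniform(-1, 1, size=256)
g_small = rng.standard_normal(256) * 0.01
g_big = rng.standard_normal(256) * 10.0
print(np.linalg.norm(latent_step_ngd(z, g_small) - z))  # ~ (alpha/beta) * ||g||
print(np.linalg.norm(latent_step_ngd(z, g_big) - z))    # step stays bounded
```

The step remains proportional to g (only its scale changes), which is why the gradient-descent analysis of Section 3 carries over unchanged.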
The FID and IS are computed as in Brock et al. (2018), and IS values are computed from checkpoints with the lowest FIDs. The means and standard deviations are computed from 5 models with different random seeds. To apply latent optimisation, we use a damping factor β = 5.0 combined with a large step size of α = 0.9. As an additional way of damping, we only optimise 50% of z's dimensions. Optimising the entire population of z was unstable in our experiments. Similar to Wu et al. (2019), we found it was helpful to regularise the Euclidean norm of the latent change ∆z, with a regulariser weight of 300.0. All other hyper-parameters, including the learning rates and a large batch size of 2048, remain the same as in BigGAN-deep; we did not optimise these hyper-parameters. We call this model LOGAN (NGD). Employing the same architecture and number of parameters as the BigGAN-deep baseline, LOGAN (NGD) achieved better FID and IS (Table 1). As observed by Brock et al. (2018), BigGAN training always eventually collapsed. Training with LOGAN also collapsed, perhaps due to higher-order dynamics beyond the scope we have analysed, but took significantly longer (600k steps versus 300k steps with BigGAN-deep). During training, LOGAN was 2-3 times slower per step compared with BigGAN-deep because of the additional forward and backward pass. We found that optimising z during evaluation did not improve sample scores (even up to 10 steps), so we do not optimise z for evaluation. Therefore, LOGAN has the same evaluation cost as the original BigGAN-deep. To help understand this behaviour, we plot the change ∆z during training in Figure 5a. Although the movement in Euclidean space, ‖∆z‖, grew until training collapsed, the movement in D's output space, measured as f(z + ∆z) − f(z), remained unchanged (see Appendix D for details). As shown in our analysis, optimising z improves the training dynamics, so LOGANs work well after training without requiring latent optimisation.

We verify our theoretical analysis in Section 3 by examining key components of Algorithm 1 via ablation studies. First, we experimented with using basic GD to optimise z, as in Wu et al. (2019), and call this model LOGAN (GD). A smaller step size of α = 0.0001 was required; larger values were unstable and led to premature collapse of training. As shown in Table 1, the scores from LOGAN (GD) were worse than LOGAN (NGD) and similar to the baseline model. We then evaluated the effects of removing the terms depending on ∂f(z)/∂z in eq. 8, which are not in the ordinary gradient (eq. 3). Since these terms are computed when back-propagating through the latent optimisation procedure, we removed them by selectively blocking back-propagation with "stop gradient" operations (e.g., in TensorFlow, Abadi et al. 2016): we ablated removing the second-order term from the discriminator update, from the generator update, and from both. As predicted by our analysis (Section 3), both terms help stabilise training; training diverged early for all three ablations.

Truncation is a technique introduced by Brock et al. (2018) to illustrate the trade-off between the FID and IS in a trained model. For a model trained with z ∼ p(z) from a source distribution symmetric around 0, such as the standard normal distribution N and the uniform distribution U(−1, 1), down-scaling (truncating) the source as z̃ = s · z with 0 ≤ s ≤ 1 gives samples with higher visual quality but reduced diversity. This observation is quantified as higher IS and lower FID when evaluating samples from truncated distributions. Samples in Figure 2b show higher quality compared with those in Figure 2a (e.g., the interfaces between the elephants and the ground, the contours around the pandas).
In this work we present the LOGAN model, which significantly improves the state-of-the-art on large scale GAN training for image generation by optimising the latent source z online. Our results illustrate improvements in quantitative evaluation and samples with higher quality and diversity. Moreover, our analysis suggests that LOGAN fundamentally improves adversarial training dynamics. We therefore expect our method to be useful in other tasks that involve adversarial training, including representation learning and inference, text generation, style learning, audio generation and video generation.

A ADDITIONAL SAMPLES AND RESULTS

Figures 6 and 7 provide additional samples, organised similarly to Figures 1 and 2. Figure 8 shows additional truncation curves.

In this section we present three complementary analyses of LOGAN. In particular, we show how the algorithm brings together ideas from symplectic gradient adjustment, unrolled GANs and stochastic approximation with two time scales. To analyse LOGAN as a differentiable game we treat the latent step ∆z as adding a third player to the original game played by the discriminator and generator. The third player's parameter, ∆z, is optimised online for each z ∼ p(z). Together the three players (latent player, discriminator, and generator) have losses averaged over a batch of samples, with the third player's loss weighted by η = 1/N (N is the batch size), reflecting the fact that each ∆z is only optimised for a single sample z, so its contribution to the total loss across a batch is small compared with θ_D and θ_G, which are directly optimised for batch losses. This choice of η is essential for the following derivation, and has an important practical implication: the per-sample loss L_G(z), instead of the loss summed over a batch, Σ_{n=1}^N L_G(z_n), should be the only loss function guiding latent optimisation. Therefore, when using natural gradient descent (Section 4), the Fisher information matrix should only be computed using the current sample z. The resulting simultaneous gradient extends eq. 3 with the per-sample latent gradients, and the Hessian of the game can be written accordingly. The presence of a non-zero anti-symmetric component in the Hessian implies the dynamics have a rotational component which can cause cycling or slow down convergence. Since η ≪ 1 for typical batch sizes (e.g., N = 2048), the third player's direct contribution to the batch loss is small. Symplectic gradient adjustment (SGA) counteracts the rotational force by adding an adjustment term to the gradient to obtain g* ← g + λ Aᵀ g, which for the discriminator and generator takes the form of eq. 17 and 18. The gradient with respect to ∆z is ignored since the convergence of training only depends on θ_D and θ_G. If we drop the last terms in eq. 17 and 18, which are expensive to compute for large models with high-dimensional θ_D and θ_G, and substitute ∆z = α ∂f(z)/∂z, the adjusted updates can be rewritten in a simpler form. Because of the third player, there are still terms that depend on the latent step to adjust the gradients. Efficiently computing these second-order terms exactly is non-trivial (e.g., Pearlmutter 1994). However, if we introduce a local approximation of the latent gradient, the adjusted gradient becomes identical to eq. 8 from latent optimisation. In other words, automatic differentiation in commonly used machine learning packages can compute the adjusted gradient for θ_D and θ_G when back-propagating through the latent optimisation process. Despite the approximation involved in this analysis, both our experiments in Section 5 and the results from Wu et al. (2019) verify that latent optimisation can significantly improve GAN training. Latent optimisation can be seen as unrolling GANs in the space of the latent, rather than the parameters.
Unrolling in the latent space has the following advantages: 1. LOGAN is more scalable than Unrolled GANs because it avoids second-order derivatives over a potentially very large number of parameters. 2. While unrolling the update of D only affects the parameters of G (as in Metz et al. 2016), latent optimisation affects both D and G, as shown in eq. 8. We next formally present this connection by showing that SGA can be seen as approximating Unrolled GANs. For the update θ′_D = θ_D + ∆θ_D, we have the first-order Taylor expansion approximation at θ_D:

f(z; θ_D + ∆θ_D) ≈ f(z; θ_D) + ∆θ_Dᵀ ∂f(z; θ_D)/∂θ_D.

Here p(t = 1; z, D, G) is the probability that the generated image G(z) can fool the discriminator D. The original GAN's discriminator can be interpreted as outputting a Bernoulli distribution. In this case, if we parameterise β_G = D(G(z)), the generator loss is the negative log-likelihood of this Bernoulli. Bernoulli, however, is not the only valid choice as the discriminator's output distribution. Instead of sampling "1" or "0", we assume that there are many identical discriminators that can independently vote to reject an input sample as fake. The number of votes k in a given interval can be described by a Poisson distribution with parameter λ, with the following PMF:

p(k) = λᵏ e^{−λ} / k!.

The probability that a generated image can fool all the discriminators is the probability of G(z) receiving no vote for rejection, p(k = 0) = e^{−λ}. Therefore, we have the following negative log-likelihood as the generator loss if we parameterise λ = −D(G(z)):

L_G = −ln p(k = 0) = λ = −D(G(z)).

This interpretation has a caveat: when D(G(z)) > 0 the Poisson distribution is not well defined. However, in general the discriminator's hinge loss pushes D(G(z)) < 0 via training.

For a temporal sequence x_1, x_2, ..., x_T (changes of z or f(z) at each training step in this paper), to normalise its variance while accounting for the non-stationarity, we process it as follows. We first compute the moving average and standard deviation over a window of size N,

μ_t = (1/N) Σ_{τ=t−N+1}^{t} x_τ,   σ_t = ((1/N) Σ_{τ=t−N+1}^{t} (x_τ − μ_t)²)^{1/2},

and then normalise the sequence as x̃_t = (x_t − μ_t)/σ_t (a sketch follows this section). The result in Figure 5a is robust to the choice of window size. Our experiments with N from 10 to 50 yielded visually similar plots.

To test whether latent optimisation works with models at more moderate scales, we applied it to SN-GANs. Although our experiments on this model are less thorough than those in the main paper with BigGAN-deep, we hope to provide basic guidelines for researchers interested in applying latent optimisation to smaller models. The experiments follow the same basic setup and hyper-parameter settings as the CS-GAN in Wu et al. (2019). There is no class conditioning in this model. For NGD, we found a smaller damping factor β = 0.1 and a z regulariser weight of 3.0 (the same as in Wu et al. 2019), combined with optimising 70% of the latent source (instead of 50% for BigGAN-deep), worked best for SN-GANs. In addition, we found running extra latent optimisation steps benefited evaluation, so we use ten steps of latent optimisation in evaluation for the results in this section, although the models were still trained with a single optimisation step. We reckon that smaller models might not be "over-parametrised" enough to fully amortise the computation from optimising z, which can then further exploit the architecture at evaluation time. On the other hand, the overhead from running multiple iterations of latent optimisation is relatively small at this scale. We aim to further investigate this difference in future studies. Table 2 shows the FID and IS alongside SN-GAN and CS-GAN, which used the same architecture.
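A sketch of the windowed normalisation used for Figure 5a, under the assumption of a trailing window and the formulas reconstructed above; the exact handling of the window at the start of the sequence is our own choice.

```python
import numpy as np

def window_normalise(x, N=30, eps=1e-8):
    """Normalise x_t by the mean/std of the trailing window of size N."""
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    for t in range(len(x)):
        w = x[max(0, t - N + 1): t + 1]          # trailing window (shorter at start)
        out[t] = (x[t] - w.mean()) / (w.std() + eps)
    return out

# Non-stationary toy sequence: noise with growing amplitude.
rng = np.random.default_rng(6)
x = rng.standard_normal(500) * np.linspace(1, 20, 500)
print(window_normalise(x, N=30).std())           # roughly O(1) after normalisation
```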
Here we observe a similarly significant improvement over the baseline SN-GAN model, with an improvement of 16.8% in IS and 39.6% in FID. Figure 9 shows random samples from these two models. Overall, samples from LOGAN (NGD) have higher contrast and sharper contours.
Latent optimisation improves adversarial training dynamics. We present both theoretical analysis and state-of-the-art image generation with ImageNet 128x128.
In this paper, we study the problem of optimizing a two-layer artificial neural network that best fits a training dataset. We look at this problem in the setting where the number of parameters is greater than the number of sampled points. We show that for a wide class of differentiable activation functions (this class involves most nonlinear functions and excludes piecewise linear functions), arbitrary first-order optimal solutions satisfy global optimality provided the hidden layer is non-singular. We essentially show that these non-singular hidden layer matrices satisfy a "good" property for this big class of activation functions. Techniques involved in proving this result inspire us to look at a new algorithmic framework, where in between two gradient steps on the hidden layer, we add a stochastic gradient descent (SGD) step on the output layer. In this new algorithmic framework, we extend our earlier result and show that for all finite iterations the hidden layer satisfies the "good" property mentioned earlier, therefore partially explaining the success of noisy gradient methods and addressing the issue of data independence in our earlier result. Both of these results are easily extended from a hidden layer given by a square matrix to one given by a flat matrix. The results are applicable even if the network has more than one hidden layer, provided all inner hidden layers are arbitrary, satisfy non-singularity, all activations are from the given class of differentiable functions, and optimization is only with respect to the outermost hidden layer. Separately, we also study the smoothness properties of the objective function and show that it is actually Lipschitz smooth, i.e., its gradients do not change sharply. We use the smoothness properties to guarantee asymptotic convergence of $O(1/\text{number of iterations})$ to a first-order optimal solution.

Neural network architectures have recently emerged as a powerful tool for a wide variety of applications. In fact, they have led to breakthrough performance in many problems such as visual object classification BID13, natural language processing BID5 and speech recognition BID17. Despite the wide variety of applications using neural networks with empirical success, mathematical understanding behind these methods remains a puzzle. Even though there is good understanding of the representation power of neural networks BID1, training these networks is hard. In fact, training neural networks was shown to be NP-complete for a single hidden layer, two nodes and the sgn(·) activation function BID2. The main bottleneck in the optimization problem comes from the non-convexity of the problem. Hence it is not clear how to train them to global optimality with provable guarantees. Neural networks have been around for decades now. The sudden resurgence in the use of these methods is because of the following: despite the worst case results of BID2, first-order methods such as gradient descent and stochastic gradient descent have been surprisingly successful in training these networks to global optimality. For example, it has been empirically shown that sufficiently over-parametrized networks can be trained to global optimality with stochastic gradient descent. Neural networks with zero hidden layers are relatively well understood in theory.
In fact, several authors have shown that for such neural networks with monotone activations, gradient based methods will converge to the global optimum under different assumptions and settings (BID16; BID10; BID11; BID12). Despite the hardness of training the single hidden layer (or two-layer) problem, enough literature is available which tries to reduce the hardness by making different assumptions. E.g., BID4 made a few assumptions to show that every local minimum of a simplified objective is close to the global minimum. They also require an independent activations assumption which may not be satisfied in practice. For the same shallow networks with (leaky) ReLU activations, it was shown that all local minima are global minima of a modified loss function, instead of the original objective function. Under the same setting, it was shown that critical points with large "diversity" are near globally optimal. But ensuring such conditions algorithmically is difficult. The theoretical studies have largely focussed on the ReLU activation, while other activations have been mostly ignored. In our understanding, this is the first time a theoretical result is presented which shows that for almost all nonlinear activation functions, including softplus, an arbitrary first-order optimal solution is also globally optimal provided certain "simple" properties of the hidden layer hold. Moreover, we show that a stochastic gradient descent type algorithm gives us those required properties for free for all finite numbers of iterations; hence even if the hidden layer variables are data dependent, we still get the required properties. Our assumption on the data distribution is very general and can be reasonable for practitioners. This comes at two costs. First, the hidden layer of our network cannot be wider than the dimension of the input data, say d. Since we also look at this problem in the over-parametrized setting (where there is hope to achieve global optimality), this constraint on the width puts a direct upper bound of d² on the number of data points that can be trained. Even though this is a strong upper bound, recent results from margin bounds BID19 show that if the optimal network is close to the origin then we can get an upper bound on the number of samples independent of the dimension of the problem, which will ensure closeness of the population objective and the training objective. The second drawback of this general setting is that we can prove good properties of the optimization variables (hidden layer weights) for only finitely many iterations of the SGD type algorithm. But as is commonly known, stochastic gradient descent converges to a first order point asymptotically, so ideally we would like to prove these properties for infinitely many iterations. We compare our results to some closely related prior work. Both of these papers use similar ideas to examine first order conditions but give results quite different from ours. They give results for ReLU or Leaky ReLU activations. We, on the other hand, give results for most other nonlinear activations, which can be more challenging. We discuss this in section 3 in more detail. We also formally show that even though the objective function for training neural networks is nonconvex, it is Lipschitz smooth, meaning that the gradient of the objective function does not change much with small changes in the underlying variable. To the best of our knowledge, there is no such result formally stated in the literature.
Prior work discusses similar results, but their constant itself depends locally on w_max, a hidden layer matrix element, which is a variable of the optimization problem. Moreover, their result is probabilistic. Our result is deterministic, global and computable. This allows us to show convergence of the gradient descent algorithm, enabling us to establish an upper bound on the number of iterations for finding an ε-approximate first-order optimal solution (‖∇f‖ ≤ ε). Therefore our algorithm will generate an ε-approximate first-order optimal solution which satisfies the aforementioned properties of the hidden layer. Note that this does not mean that the algorithm will reach the globally optimal point asymptotically. As mentioned before, when the number of iterations tends to infinity, we could not establish the "good" properties. We discuss the technical difficulties in proving such a conjecture in more detail in section 5, which details our convergence results. At this point we would also like to point out that there is a good amount of work on shallow neural networks. In this literature, we see a variety of modelling assumptions, different objective functions and local convergence results. BID15 focuses on a class of neural networks which have a special structure called "identity mapping". They show that if the input follows a Gaussian distribution then SGD will converge to the global optimum of the population objective of the "identity mapping" network. BID3 show that for isotropic Gaussian inputs, with a one hidden layer ReLU network and a single non-overlapping convolutional filter, all local minimizers are global; hence gradient descent will reach the global optimum in polynomial time for the population objective. For the same problem, after relaxing the constraint of isotropic Gaussian inputs, they show that the problem is NP-complete via reduction from a variant of the set splitting problem. In both of these studies, the objective function is a population objective, which is significantly different from the training objective in the over-parametrized domain. In the over-parametrized regime, it has been shown that for the training objective with data coming from an isotropic Gaussian distribution, provided that we start close to the true solution and know the maximum singular value of the optimal hidden layer, the corresponding gradient descent will converge to the optimal solution. This is a one of a kind result in which the local convergence properties of the neural network training objective have been studied in great detail. Our results differ from the available literature in a variety of ways. First of all, we study the training problem in the over-parametrized regime. In that regime, the training objective can be significantly different from the population objective. Moreover, we study the optimization problem for many general non-linear activation functions. Our results can be extended to deeper networks when considering the optimization problem with respect to the outermost hidden layer. We also prove that stochastic noise helps in keeping the aforementioned properties of the hidden layer. This result, in essence, provides justification for using stochastic gradient descent. Another line of study looks at the effect of over-parametrization in the training of neural networks (BID9). These results are not for the same problem as they require a huge amount of over-parametrization. In essence, they require the width of the hidden layer to be greater than the number of data points, which is unreasonable in many settings.
These results hold for fairly general activations, as do ours, but we require only a moderate over-parametrization, width × dimension ≥ number of data points, which is much more reasonable in practice, as pointed out before from margin bound results. They also work for deeper neural networks, as do our results when optimization is with respect to the outermost hidden layer.

We define the set [q] := {1, ..., q}. For any matrix A ∈ R^{a×b}, we write vect(A) ∈ R^{ab×1} for the vector obtained by stacking the entries of A; z[i] is the i-th element of vector z. B_i(r) represents an l_i-ball of radius r, centred at the origin. We define the component-wise product of two vectors with the operator ∘. We say that a collection of vectors is full rank if the matrix whose columns are those vectors has full column rank; similarly, we say that a collection of matrices is full rank if the corresponding collection of vectorized matrices is full rank.

A fully connected two-layer neural network has three parameters: the hidden layer W, the output layer θ, and the activation function h. For a given activation function h, we define the neural network function as

f(W, θ; x) = θᵀ h(W x),

where h is applied entry-wise. In the above equation, W ∈ R^{n×d} is the hidden layer matrix and θ ∈ R^n is the output layer. Finally, h: R → R is an activation function. The main problem of interest in this paper is the two-layer neural network training problem

min_{W,θ} f(W, θ) := (1/N) Σ_{i=1}^N (θᵀ h(W u_i) − y_i)²,

over a dataset {(u_i, y_i)}_{i=1}^N, where we assume the data points u_i are sampled independently from a Lebesgue measure. The first-order condition with respect to W, given in (3.1), can also be written in a matrix vector product form:

D s = 0,

where the i-th column of D ∈ R^{nd×N} is vect((θ ∘ h′(W u_i)) u_iᵀ) and s ∈ R^N collects the residuals s_i = θᵀ h(W u_i) − y_i (a numerical sketch of this matrix is given after this subsection). Notice that if matrix D ∈ R^{nd×N} is of full column rank (which implies nd ≥ N, i.e., the number of samples is less than the number of parameters) then it immediately gives us that s = 0, which means such a stationary point is globally optimal. This motivates us to investigate properties of h under which we can provably keep matrix D full column rank, and to develop algorithmic methods to help maintain such properties of matrix D. Note that a similar approach was explored in prior works. To get the full rank property for matrix D, one line of work uses a leaky ReLU activation. Basically this leaky activation function adds noise to the entries of matrix D, which allows them to show that matrix D is full rank and hence all local minima are global minima. So this is essentially a change of model. We, on the other hand, do not change the model of the problem. Moreover, we look at the algorithmic process of finding W differently. We show that SGD will achieve the full rank property of matrix D with probability 1 for all finite iterations. So this is essentially a property of the algorithm and not of the model. Even if that is the case, to show global optimality, we need to prove that matrix D is full column rank in the asymptotic sense. That question was partly answered in prior work, which shows that matrix D is full column rank by achieving a lower bound on the smallest singular value of matrix D. But to get this, they need two facts. First, the activation function has to be ReLU so that they can find the spectrum of the corresponding kernel matrix. Second, they require a bound on the discrepancy of the weights W. These conditions are strong in the sense that they restrict the analysis to a non-differentiable activation function, and finding an algorithm satisfying the discrepancy constraint on W can be a difficult task. On the other hand, our results are proved for a simple SGD type algorithm which is easy to implement. But we do not get a lower bound on the singular value of D in the asymptotic sense. There are obvious pluses and minuses for both types of results. For the rest of the discussion, we will assume that n = d (our results can be extended to the case n ≤ d easily) and hence W is a square matrix.
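The claim that D has full column rank for smooth nonlinear activations is easy to probe numerically. The sketch below forms columns vect((θ ∘ h′(W u_i)) u_iᵀ) for random data with h = tanh and checks the rank; this illustrates the first-order condition as reconstructed above, and is not the paper's code.

```python
import numpy as np

def build_D(W, theta, U, h_prime):
    """Columns of D are vect((theta o h'(W u_i)) u_i^T), one per data point."""
    cols = []
    for u in U.T:                     # U is (d, N), one column per sample
        g = (theta * h_prime(W @ u))[:, None] * u[None, :]   # (d, d) gradient factor
        cols.append(g.reshape(-1))
    return np.stack(cols, axis=1)     # (d*d, N)

rng = np.random.default_rng(7)
d, N = 10, 60                          # N <= d^2, so full column rank is possible
W = rng.standard_normal((d, d))
theta = rng.standard_normal(d)
U = rng.standard_normal((d, N))        # absolutely continuous ("Lebesgue") data
D = build_D(W, theta, U, h_prime=lambda t: 1.0 - np.tanh(t) ** 2)
print(np.linalg.matrix_rank(D))        # equals N (full column rank) in our runs
```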
In this setting, we develop the following algorithm whose output is a provable first-order approximate solution. Here we present the algorithm, and in the next sections we discuss the conditions that are required to satisfy the full rank property of matrix D as well as the convergence properties of the algorithm. In Algorithm 1, we use techniques inspired by alternating minimization to minimize with respect to θ and W. For minimization with respect to θ, we add Gaussian noise to the gradient information. This will be useful in proving convergence of this algorithm. We use the randomness in θ to ensure some "nice" properties of W which help us prove that the matrix D generated along the trajectory of Algorithm 1 is full column rank. More details will follow in the next section. The algorithm has two loops. An outer loop implements a single gradient step with respect to the hidden layer, W. For each outer loop iteration, there is an inner loop which optimizes the objective function with respect to θ using a stochastic gradient descent algorithm. In the stochastic gradient descent, we generate a noisy estimate of ∇_θ f(W, θ) as explained below. Let ξ ∈ R^d be a vector whose elements are i.i.d. Gaussian random variables with zero mean. Then for a given value of W we define the stochastic gradient w.r.t. θ as

G(W, θ, ξ) := ∇_θ f(W, θ) + ξ.   (3.4)

Then we know that E_ξ[G(W, θ, ξ)] = ∇_θ f(W, θ). We can choose a constant σ > 0 such that the following holds:

E_ξ[‖G(W, θ, ξ) − ∇_θ f(W, θ)‖²] ≤ σ².

Moreover, in the algorithm we consider the case where θ ∈ R for a convex set R. Note that R can be kept equal to R^d, but that will make parameter selection complicated. In our convergence analysis, we will use R = {θ : ‖θ‖₂ ≤ R} for some constant R, to make parameter selection simpler. We use the prox-mapping P_x: R^d → R defined as

P_x(y) := argmin_{u ∈ R} ⟨y, u⟩ + ½‖u − x‖²₂.   (3.7)

In case R is a ball centred at the origin, the solution of (3.7) is just the projection of x − y onto that ball. In case R = R^d, the solution is the quantity x − y itself.

Algorithm 1 (sketch):
  Initialize N_o to a predefined iteration count for the outer iterations.
  Initialize N_i to a predefined iteration count for the inner iterations.
  Begin outer iteration: take one gradient step with respect to W, then run up to N_i inner iterations of stochastic gradient descent on θ using the prox-mapping (3.7) and the stochastic gradient (3.4).

Notice that the problem of minimization with respect to θ is a convex minimization problem. So we can implement many procedures developed in the stochastic optimization literature to get convergence to the optimal value (BID18). In the analysis, we note that one does not even need to implement the complete inner iteration, as we can stop the stochastic gradient descent suboptimally, given that we improve the objective value with respect to where we started, i.e., f(W_k, θ_{k+1}) ≤ f(W_k, θ_k). In essence, if evaluation of f in every iteration is not costly, then one might break out of the inner iterations before running N_i iterations. If it is costly to evaluate function values, then we can implement the whole SGD for the convex problem with respect to θ as specified in the inner iteration of the algorithm above. In each outer iteration, we take one gradient descent step with respect to the variable W. We have a total of N_o outer iterations, so essentially we evaluate ∇_W f a total of N_o times. Overall, this algorithm is a new form of alternating minimization, where one block can potentially be left suboptimal and the other takes only one gradient step. A compact numerical sketch follows this subsection. We prove in this section that arbitrary first order optimal points are globally optimal. One does not expect to have an arbitrary first order optimal point, because it has to depend on data.
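Here is a compact sketch of Algorithm 1's two-loop structure for the square case n = d, with h = tanh: each outer iteration takes one gradient step in W, then runs a few noisy projected SGD steps in θ. The step sizes, noise scale, ball radius R, and the ½ factor in the printed loss are illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(8)
d, N = 8, 30
U = rng.standard_normal((d, N))
y = rng.standard_normal(N)
h, hp = np.tanh, lambda t: 1.0 - np.tanh(t) ** 2

def residuals(W, th):                 # s_i = theta^T h(W u_i) - y_i
    return th @ h(W @ U) - y

def grads(W, th):
    s = residuals(W, th)
    gW = ((th[:, None] * hp(W @ U)) * s) @ U.T / N      # d f / d W
    gth = h(W @ U) @ s / N                              # d f / d theta
    return gW, gth

def project(th, R=5.0):               # prox-mapping for the l2 ball
    n = np.linalg.norm(th)
    return th if n <= R else th * (R / n)

W, th = np.eye(d), rng.standard_normal(d)
for _ in range(200):                  # outer loop: one gradient step in W
    gW, _ = grads(W, th)
    W -= 0.1 * gW
    for _ in range(5):                # inner loop: noisy projected SGD in theta
        _, gth = grads(W, th)
        xi = 0.01 * rng.standard_normal(d)              # Gaussian gradient noise
        th = project(th - 0.1 * (gth + xi))
print(0.5 * np.mean(residuals(W, th) ** 2))             # loss typically decreases
```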
We prove in this section that arbitrary first-order optimal points are globally optimal. One does not expect to encounter an arbitrary first-order optimal point, because such a point has to depend on the data. We still include this analysis here because it inspires the new algorithmic framework of Section 3, which provides similar results for all finite iterations of the algorithm.

We say that h: R → R satisfies the condition "C1" if DISPLAYFORM0 One can easily check that most activation functions used in practice, e.g., DISPLAYFORM1 satisfy condition C1. Note that h'(x) also satisfies condition C1 for all of them. In fact, except for a very small class of functions (which includes the linear functions), continuously differentiable functions satisfy condition C1.

We first prove a lemma which establishes that the columns of the matrix D (each column is the vector form of a d×d matrix) are linearly independent when W = I_d and h satisfies condition C1. We later generalise it to any full rank W using a simple corollary. The statement of the following lemma is intuitive, but its proof is technical. DISPLAYFORM2 is full rank with measure 1. This means that if the u_i in Problem (2.1) come from a Lebesgue measure, then by Corollary 4.2 DISPLAYFORM3 will be a full rank collection, given that we have maintained the full rank property of W. Now note that in the first-order condition, given in (3.3), the rows of the matrix D are scaled by constant factors θ[j], j ∈ [d]. Notice that we may assume θ[j] ≠ 0, because otherwise there is no contribution of the corresponding j-th row of W to Problem (2.1) and we might as well drop it entirely from the optimization problem. Hence we can rescale the rows of the matrix D by the factor DISPLAYFORM4 without changing the rank. In essence, Corollary 4.2 implies that the matrix D is full rank when W is full rank. So, by our discussion in the earlier section, satisfying first-order optimality is enough to guarantee global optimality under condition C1 for data-independent W.

Remark 4.4 As a result of the corollary above, one can see that the collection of vectors h(W x_i) is full rank under the assumption that W is non-singular, the x_i ∈ R^d are independently sampled from a Lebesgue measure, and h satisfies condition C1.

Remark 4.5 Since the collection h(W u_i) is also full rank, we can say that z_i := h(W_1 u_i) are independent and sampled from a Lebesgue measure for a non-singular matrix W_1. Applying the lemma to z_i, the collection of matrices g(W_2 z_i) z_i^T is full rank with measure 1 for non-singular W_2 and g satisfying condition C1. So we see that for multiple hidden layers satisfying non-singularity, we can apply the full rank property to the collection of gradients with respect to the outermost hidden layer. The collection has rank min{rank(W)d, N} with measure 1, by removing dependent rows and using Remark 4.6.
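The role of condition C1 can be checked numerically: for a generic activation such as tanh, the collection {h'(Wu_i)u_i^T} spans the full d²-dimensional space, while for a linear activation (which fails C1, since h' is constant) the columns all collapse onto rank-one matrices with identical rows, so the collection is badly rank-deficient. The sketch below reuses the build_D helper defined in the earlier sketch; the sizes are illustrative.

```python
import numpy as np
# build_D as defined in the earlier sketch.

d = 6
N = d * d
rng = np.random.default_rng(2)
W = np.eye(d)                                 # the Lemma 4.1 case: W = I_d
U = rng.standard_normal((d, N))

D_tanh = build_D(W, U, h_prime=lambda x: 1.0 - np.tanh(x) ** 2)
D_lin  = build_D(W, U, h_prime=lambda x: np.ones_like(x))   # linear h: h' constant

print(np.linalg.matrix_rank(D_tanh))   # 36 = d*d: full column rank (C1 holds)
print(np.linalg.matrix_rank(D_lin))    # 6 = d: the collection collapses (C1 fails)
```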
Even though we have proved that the collection h(Wu_i)u_i^T is full rank in the previous section, we need such a W to be independent of the data. In general, any algorithm will use the data to find W, so the results of the previous section appear not to be directly meaningful in practice. However, the analysis of Lemma 4.1 motivates the idea that the stochastic noise in θ might help in obtaining the required properties of W and D along the trajectory of Algorithm 1. In this section we first prove that using random noise in the stochastic gradient on θ gives a non-singular W_k in every iteration. Using this fact, we then prove that the matrix D generated along the algorithm is also full rank. The proof techniques are very similar to the proof of Lemma 4.1. Later on, we will also show that the overall algorithm converges to an approximate first-order optimal solution of Problem (2.1) using smoothness properties.

It should be noted, however, that this cannot guarantee convergence to a globally optimal solution. To prove such a result, one needs to analyze the smallest singular value of the random matrix D defined in (3.3). More specifically, we would have to show that σ_min(D) decreases at a rate slower than the first-order convergence rate of the algorithm, so that the overall algorithm converges to the global optimal solution. Even though it is very difficult to prove such a result in theory, we think that such an assumption about σ_min(D) is reasonable in practice.

Now we analyze the algorithm. For the sake of simplicity of notation, let us define (5.2), where N_i is the inner iteration count in Algorithm 1. Essentially, ξ_[k] contains the record of all random samples used until the k-th outer iteration in Algorithm 1, and ξ^j_[N_i] contains the record of all random samples used in the inner iterations of the j-th outer iteration. DISPLAYFORM0 DISPLAYFORM1 where the W_k are matrices generated by Algorithm 1 and the measure Pr{. DISPLAYFORM2

Now that we have proved that the W_k generated by the algorithm are full rank, we show that the matrix D generated along the trajectory of the algorithm is full rank for any finite number of iterations. We use techniques inspired by Lemma 4.1, but this time we use the Lebesgue measure over Θ rather than over the data. Over the randomness of Θ, we can show that our algorithm will not produce any W such that the corresponding matrix D is rank deficient. Since Θ is essentially designed to be independent of the data, we will not produce a rank-deficient D throughout the process of the randomized algorithm. DISPLAYFORM3 and v is a random vector with Lebesgue measure in R^d. W, Z ∈ R^{d×d} and Z ≠ 0. Let h be a function which satisfies condition C1. Also assume that W is full rank with measure 1 over the randomness of v. Then the collection h(W u_i) u_i^T is full rank with measure 1. Proof. We know that DISPLAYFORM4 Now apply Lemma 5.2 to obtain the required result. Note that Lemma 5.3 is very similar to the results in Section 4. Some remarks are in order.

Remark 5.4 As a result of Lemma 5.3 above, one can see that the collection of vectors h(W_k u_i) is full rank for all finite iterations of Algorithm 1.

Remark 5.5 If we have a neural network with multiple hidden layers, we can assume that the inner layers are initialized to arbitrary full rank matrices and we optimize w.r.t. the outermost hidden layer. Corollary 4.2 and Remark 4.4 give us that the inputs to the outermost hidden layer are independent vectors sampled from some Lebesgue measure. Then applying Algorithm 1 to optimize w.r.t. the outermost hidden layer gives results similar to those of Lemma 5.3.

Hence we have shown that the algorithm generates a full rank matrix D for any finite iteration. Now, to prove convergence of the algorithm, we need to analyze the function f (defined in (2.1)) itself. We show that f is a Lipschitz-smooth function for any given instance of data {u DISPLAYFORM5. This gives us a handle to estimate convergence rates for the given algorithm.

Lemma 5.6 Assuming that h: R → R is such that its gradient, Hessian and values are bounded, and the data DISPLAYFORM6 is given, then there exists a constant L such that DISPLAYFORM7 Moreover, a possible upper bound on L is as follows: DISPLAYFORM8

Remark 5.7 Before stating the proof, we should stress that the assumptions on h are satisfied by most activation functions, e.g., sigmoid, symmetric sigmoid, Gaussian, symmetric Gaussian, Elliot, symmetric Elliot, tanh, Erf.

Remark 5.8 Note that one can easily estimate the value of L given the data and θ.
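Since the explicit bound of Lemma 5.6 is not reproduced in this copy, one simple practical alternative, in the spirit of Remark 5.8, is to estimate L empirically by sampling nearby pairs of hidden-layer matrices and measuring the gradient variation. This reuses f_grads from the earlier sketch and is purely illustrative.

```python
import numpy as np
rng = np.random.default_rng(3)
# f_grads as defined in the Algorithm 1 sketch above.

def estimate_L(theta, U, y, trials=200, scale=1.0):
    """Empirical Lipschitz constant of W -> grad_W f(W, theta) over random pairs."""
    n, d = theta.shape[0], U.shape[0]
    h, hp = np.tanh, lambda x: 1.0 - np.tanh(x) ** 2
    best = 0.0
    for _ in range(trials):
        W1 = scale * rng.standard_normal((n, d))
        W2 = W1 + 1e-3 * rng.standard_normal((n, d))   # nearby point
        _, _, g1 = f_grads(W1, theta, U, y, h, hp)
        _, _, g2 = f_grads(W2, theta, U, y, h, hp)
        ratio = np.linalg.norm(g1 - g2) / np.linalg.norm(W1 - W2)
        best = max(best, ratio)
    return best   # a lower estimate of the true L; useful when picking stepsizes
```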
Moreover, if we put constraints on ‖θ‖_2 then L is constant in every iteration of Algorithm 1. As mentioned in Section 3, this provides an easier way to analyze the algorithm.

Lemma 5.9 Assuming that the scalar function h is such that |h(·)| ≤ u, there exists L_θ s.t. DISPLAYFORM9 Notice that Lemma 5.9 gives us the value of L_θ irrespective of the value of W or the data. Also observe that f(W, ·) is a convex function, since its Hessian DISPLAYFORM10 is the sum of positive semidefinite matrices. By Lemma 5.9, we know that f(W, ·) is smooth as well. So we can use the following convergence result provided by BID14 for stochastic composite optimization. A simplified proof can be found in the appendix.

Theorem 5.10 Assume that the stepsizes β_i satisfy 0 < β_i ≤ 1/(2L_θ), ∀ i ≥ 1. Let {θ^av_{i+1}}_{i≥1} be the sequence computed according to Algorithm 1. Then we have DISPLAYFORM11 (5.5) DISPLAYFORM12 where θ_1 is the starting point of the inner iteration and σ is defined in (3.5). Now we look at a possible strategy for selecting the stepsize β_i. Suppose we adopt a constant stepsize policy; then we have DISPLAYFORM13 and we get DISPLAYFORM0

By Lemma 5.6, the objective function for neural networks is Lipschitz-smooth with respect to the hidden layer, i.e., it satisfies eq. (5.3). Notice that this is equivalent to saying DISPLAYFORM1 (5.7) Since we have a handle on the smoothness of the objective function, we can provide a convergence result for the overall algorithm. DISPLAYFORM2 (5.8) where R/2 is the radius of the origin-centred ball R in the algorithm, defined as R: DISPLAYFORM3 In view of Theorem 5.11, we can derive a possible way of choosing γ_k, σ and N_i to obtain a convergence result. More specifically, if DISPLAYFORM4 L and β_k τ is chosen according to (5.6), then we have DISPLAYFORM5

Note that for Algorithm 1, we have proved that the stochastic noise helps keep the matrix D full rank for all finite iterations. Then, in Theorem 5.11, we showed a methodical way of achieving approximate first-order optimality. So, at the end of finitely many steps of Algorithm 1, we have a point W which satisfies the full rank property of D and is approximately first-order optimal. We think this kind of result can be extended to a variety of different first-order methods developed for Lipschitz-smooth non-convex optimization problems. More specifically, an accelerated gradient method such as the unified accelerated method proposed by BID8, or the accelerated gradient method by BID7, can be applied in the outer iteration. We can also use a stochastic gradient descent method for the outer iteration. For this, we need a stochastic algorithm designed for non-convex and Lipschitz-smooth function optimization. The randomized stochastic gradient method proposed by BID6, the stochastic variance reduction gradient method (SVRG) by Reddi et al., or simplified SVRG by Allen-Zhu & Hazan can be employed in the outer iteration. Convergence of these new algorithms will follow immediately from the convergence results of the respective studies. Some work may be needed to prove that they keep the matrix D full rank; we leave that for future work. We also leave the problem of proving a bound on the singular value for the future. This would close the gap between the empirical results and the theory.

The value of the Lipschitz constant L has a significant impact on the running time of the algorithm. Notice that if L increases, then N_o and N_i correspondingly increase linearly with L. So we need methods by which we can reduce the value of the estimate of L. One possible idea would be to use an ℓ1-ball for the feasible region of θ. More specifically, if R = B_1(R/2), then we can possibly enforce sparsity on θ, which allows us to put a better bound on L.
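With R = B_1(R/2), the prox-mapping (3.7) becomes a Euclidean projection onto an ℓ1-ball, which can be computed exactly by the classical sorting-based scheme in O(d log d). The sketch below is a generic implementation of that standard algorithm, not code from the paper.

```python
import numpy as np

def project_l1(v, radius):
    """Euclidean projection of v onto the l1-ball of the given radius."""
    if np.abs(v).sum() <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]                  # sorted magnitudes, descending
    css = np.cumsum(u)
    ks = np.arange(1, len(v) + 1)
    rho = np.max(np.where(u - (css - radius) / ks > 0)[0]) + 1
    tau = (css[rho - 1] - radius) / rho           # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)
```

Projecting onto B_1(R/2) zeroes out small coordinates of θ, which is exactly the sparsity effect the text suggests as a way to reduce the estimate of L.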
In this appendix, we provide proofs for the auxiliary results.

A.1 PROOF OF LEMMA 4.1
The result is trivially true for d = 1; we show the general case using induction on d. DISPLAYFORM0. Note that it suffices to prove independence of the vector DISPLAYFORM1 DISPLAYFORM2 Moreover, for any collection satisfying x_i ∈ Z_i, the corresponding collection of vectors v_i is linearly dependent, i.e., DISPLAYFORM3 Noticing the definition of Z_1, we can choose ε > 0 s.t. x'_1 := x_1 + εe_1 ∈ Z_1. Since we ensure that x'_1 ∈ Z_1, by (A.1) we have DISPLAYFORM4 So using (A.1) and (A.2) we get DISPLAYFORM5 2 components of v'_1 − v_1 are zero. Let us define: DISPLAYFORM6 ... DISPLAYFORM7 By definition, the y_i ∈ R^{d−1} are independently sampled from a (d − 1)-dimensional Lebesgue measure. So by the inductive hypothesis, the rank of the collection of matrices h(y_i)y DISPLAYFORM8 2 then λ_2 = · · · = λ_N = 0 with measure 1; then by (A.3) we have w_1 = 0 with measure 1, which contradicts the fact that w_1 ≠ 0 with measure 1. This gives us DISPLAYFORM9 Notice that (A.4) in its matrix form can be written as the linear system DISPLAYFORM10 By (A.6), the vector of λ's lies in the null space of the matrix. Finally, by the inductive hypothesis and (A.5), we conclude that the dimension of that space is DISPLAYFORM11 2 ∈ R^{N−1} be the basis of that null space, i.e. DISPLAYFORM12 Define t_i ∈ R^{2d−1} as: DISPLAYFORM13 then we can rewrite (A.3) as DISPLAYFORM14 where DISPLAYFORM15 2 and the z_1 part of the equation is already satisfied due to the selection of the null space. 2 ) are constant. Let us define the set S to be the index set of linearly independent DISPLAYFORM16 DISPLAYFORM17 2 ] and every other row is a linear combination of the rows in S. Since (A.8) is consistent, the same combination must be valid for the rows of w_1. Now if N ≤ d² − 1, then the number of variables in (A.8) is ≤ 2d − 3 but the number of equations is 2d − 1; therefore at least two equations are linearly dependent on the other equations. This implies the last (2d − 2) equations must be dependent on each other: DISPLAYFORM18 for some fixed combination α_j, β_j. If we divide the above equation by ε and take the limit as ε → 0, we see that h satisfies the following differential equation on the interval (a DISPLAYFORM19 which is a contradiction to condition C1! Clearly this leaves only one case, i.e. N = d², and the (2d − 1) equations must satisfy a dependency of the following form for all x_1 ∈ (a_1, b DISPLAYFORM20 Again, by similar arguments, the combination is fixed. Let H(x) = xh(x); then dividing the above equation by ε and taking the limit as ε → 0, we see that h satisfies the following differential equation: DISPLAYFORM21 DISPLAYFORM22 Here the second statement follows from the fact that W is a non-singular matrix. Now by Lemma 4.1 we have that the collection h(x_i)x_i^T is linearly independent with measure 1. So DISPLAYFORM23 is linearly independent with measure 1. Since any rotation U is a full rank matrix, we have the result.

A.3 PROOF OF LEMMA 5.1
This is trivially true for k = 0, since we randomly sample the matrix W_0. We now show the claim by induction on k. Recall that the gradient of f(W, θ) with respect to W can be written as DISPLAYFORM24 Notice that, in effect, we are multiplying the i-th row of the rank-one matrix h'(W u_i)u_i^T by the i-th element of the vector θ. So this can be rewritten as a matrix product DISPLAYFORM25 where Θ := diag{θ[i], i = 1, ..., d}.
So the iterative update of the algorithm can be given as DISPLAYFORM26 Notice that, given ξ_[k], the vector θ_{k+1} and the corresponding diagonal matrix Θ_{k+1} are found by SGD in the inner loop, so θ_{k+1} is a random vector. More specifically, since {ξ DISPLAYFORM27 induces a Lebesgue measure on the random variable DISPLAYFORM28 then W_k is a deterministic quantity. For the sake of contradiction, take any vector v that is supposed to be in the null space of W_{k+1} with positive probability. DISPLAYFORM29 Now the last equation is of the form DISPLAYFORM30 Suppose we can find such a θ with positive probability. Then we can find a hypercuboid Z := {x ∈ R^d | a < x < b} such that any θ_{k+1} in the given hypercuboid solves equation (A.10). By induction, we may assume b ≠ 0. Then, to get a contradiction on the existence of Z, we observe that the first equation in (A.10) is DISPLAYFORM31, which cannot be 0. Hence we arrive at a contradiction to the assumption that there existed a hypercuboid Z containing solutions of (A.10). Since the measure on θ_{k+1} was induced by {ξ DISPLAYFORM32

A.4 PROOF OF LEMMA 5.2
We use induction on d. For d = 1 this is trivially true. Now assume the claim is true for d − 1; we show it for d. DISPLAYFORM33 For simplicity of notation, define t_i := Zu_i. Due to a simple linear algebraic fact provided by the full rank property of W, the rank of the collection (h(W u_i) u DISPLAYFORM34 For the sake of contradiction, say the collection is rank deficient with positive probability; then there exists a d-dimensional volume V such that for all v ∈ V, h(W u_i) u_i^T is not full rank, where DISPLAYFORM35 Without loss of generality, we may assume the d-dimensional volume to be a hypercuboid V := {x ∈ R^d | a < x < b} (if not, we can inscribe a hypercuboid in that volume). Let us take v ∈ V and ε small enough such that v' := v + εe_1 ∈ V. Correspondingly we have z_i and z'_i. Note that DISPLAYFORM36 Here DISPLAYFORM37 Similarly we also have v'_i = c'_i g_i. Now by the fact that the v, v' corresponding to z, z' are in V, and by our assumption of linear dependence for all v ∈ V, we get DISPLAYFORM38. Also, by induction on d − 1, we have that DISPLAYFORM39 is an invertible matrix, and we rewrite one part of equation (A.12) as DISPLAYFORM40. So essentially we have satisfied one part of equations (A.12) and (A.13). Notice that since we are moving only one coordinate of the random vector (by ε-incremental changes), keeping all other elements of v constant, the y_i are constants, which implies g_i, G, G' are constant. So, for the sake of simplicity of notation, we define l: DISPLAYFORM42 Now we look at the remaining part of the two equations (A.12), (A.13): DISPLAYFORM43 which can be rewritten as DISPLAYFORM44 Subtracting (A.14) from (A.15), we have DISPLAYFORM45 Now note that (A.16) characterizes the incremental changes in C, C', µ due to ε. So, taking the limit as ε → 0, we have DISPLAYFORM46 Here, the last equation is due to the product rule of calculus. In (A.17), we see that we have 2d − 1 equations.

A.5 PROOF OF LEMMA 5.6
Assume that all the gradients in this proof are w.r.t. W. Then we know that DISPLAYFORM47 DISPLAYFORM48 where the last inequality follows from the Cauchy-Schwarz inequality. DISPLAYFORM49, ∀ i, then we are done. Let θ_max := max DISPLAYFORM50 Suppose the Lipschitz constants for the first and second terms are L_{i,L} and L_{i,R} respectively. Then DISPLAYFORM51 ) and a possible upper bound on the value of L would become DISPLAYFORM52 Since the Hessian of the scalar function h(·) is bounded, h'(x) is Lipschitz continuous with some constant L_h.
Let r_1, r_2 be two row vectors; then we claim ‖h'(r_1 x) − h'(r_2 x)‖_2 ≤ L_h ‖x‖_2 ‖r_1 − r_2‖_2 for all r_1, r_2, because: ‖h'(r_1 x) − h'(r_2 x)‖_2 ≤ L_h ‖r_1 x − r_2 x‖ ≤ L_h ‖x‖_2 ‖r_1 − r_2‖_2. From the relation above we have the following: noting that DISPLAYFORM53 DISPLAYFORM54 we have DISPLAYFORM55 where u_1 and u_2 are upper bounds on the scalar functions |h(·)| and |h'(·)| respectively, and d is the row dimension of W.

A.7 PROOF OF THEOREM 5.11
We know by Lemma 5.6 that f(·, θ) is a Lipschitz-smooth function. So, using (5.7), we have DISPLAYFORM56 DISPLAYFORM57 (γ_k − (L/2)γ_k²). Now, taking expectation with respect to ξ_[N_o] (which is defined in (5.1)), we have DISPLAYFORM58 DISPLAYFORM59.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BkIkkseAZ
This paper talks about theoretical properties of first-order optimal point of two layer neural network in over-parametrized case
We introduce the concept of channel aggregation in ConvNet architectures, a novel compact representation of CNN features useful for explicitly modeling nonlinear channel encoding, especially when the new unit is embedded inside deep architectures for action recognition. The channel aggregation is based on the multi-channel features of a ConvNet and aims at finding the optimal convergence path at fast speed. We name our proposed convolutional architecture the "nonlinear channels aggregation network (NCAN)" and its new layer the "nonlinear channels aggregation layer (NCAL)". We theoretically motivate channels aggregation functions and empirically study their effect on convergence speed and classification accuracy. Another contribution of this work is an efficient and effective implementation of the NCAL, speeding it up by orders of magnitude. We evaluate its performance on the standard benchmarks UCF101 and HMDB51, and experimental results demonstrate that this formulation not only obtains fast convergence but also stronger generalization capability, without sacrificing performance.

With modern learnable representations such as deep convolutional neural networks (CNNs) matured in many image understanding tasks BID7, human action recognition has received a significant amount of attention BID11 BID2 BID9 BID3 BID12. Because video provides an additional temporal clue, and because the parameters and computations of CNNs grow exponentially, training CNNs with such large-scale parameters in the video domain is time-consuming. However, it remains unclear how effective convergence accelerators could be constructed for the optimal path by formulating hand-crafted rules. Since videos consist of still images, training tricks and methods, such as ReLU and BN, have been shown to transfer to videos directly. Recent theoretical and empirical works have demonstrated the importance of quickly training deep architectures successfully, and the effective convergence accelerators advanced for 2D images, such as ReLU BID4 and batch normalization BID5, have been developed for fast convergence. This is in part inspired by observations of limited GPU memory and computing power, especially when confronting large-scale video datasets, which introduce a large number of parameters. Another line of algorithms focuses on the training optimizer of CNNs, for example SGD, momentum, Nesterov, Adagrad and Adadelta. However, training CNNs on large-scale video datasets is still nontrivial, particularly if one seeks a compact but fast long-temporal dynamic representation that can be processed efficiently. Our current work reconsiders the means of facilitating the convergence of ConvNets, to increase the understanding of how to embed hand-crafted rules inside CNNs for fast convergence in a more thorough fashion. In addition to the accelerators and effective optimizers, we explore a method that causes the value of the loss function to descend rapidly. Intuitively, we argue that CNNs will accelerate the training process once the complex relationship across convolutional feature channels is modeled explicitly by hand-crafted rules. Among the existing units, 3D convolution implements a linear partial sum of channels BID6, 3D max-pooling takes the maximum feature across channels, and 3D average-pooling makes a spatial-channel average of features. Unfortunately, all these 3D units conduct a linear channels aggregation, implicitly and locally.
Despite the fact that implicit linear aggregation has been applied in broad fields, there seem to be fewer works explicitly modeling the complex nonlinear relationship across channels. In fact, both one-stream and two-stream algorithms ignore channel-level encoding. For the video recognition task, a very tricky problem is how to train CNN architectures so that the loss decreases rapidly given the scarcity of videos. We conjecture that there is a complex nonlinear relationship among the channels of CNN features. Once this implicit relationship is explicitly modeled, it will facilitate convergence, with a faster search for the optimal trajectory.

In this paper, we propose a nonlinear channels aggregation layer (NCAL), which explicitly models the complex nonlinear relationships across channels. Since a standard CNN provides a whole hierarchy of video representations, the first question worth exploring is where the NCAL should take place. For example, we can aggregate the output of the fully-connected layers of a CNN architecture pre-trained on videos. A drawback of such an implementation is that the convolutional feature channels of the CNN itself are still implicitly encoded and are unaware of the lower-level channel relationships. The alternative is to model the nonlinear channels aggregation of some intermediate network layer. In this case, the lower layers fail to extract representative features from video sequences, but the upper layers can reason about the overall dynamics in the video. The former is prone to sacrificing recognition performance, while the latter provides the appropriate convolutional features for compact aggregation.

Here we build our methods on top of the successful Inception V1 architecture. More specifically, three main contributions are provided in this work. Our first contribution is to introduce the concept of nonlinear channels aggregation for fast convergence. We also show that, in this manner, it is possible to construct an efficient nonlinear channels aggregation by applying the concept of nonlinear channels aggregation to the intermediate layers of a standard CNN. More importantly, the nonlinear channel relationship is modeled explicitly and globally, in contrast to the traditional local and implicit units. Our second contribution is to simplify the process of the nonlinear channels aggregation layer (NCAL) and to provide a fast yet accurate implementation of it. Notably, the proposed NCAL can be embedded inside any standard CNN architecture without breaking the rest of the structure. More broadly, the proposed NCAL is not limited to action recognition; it can be applied to any task with CNNs. Here we introduce it for action recognition, and leave explorations of it in other domains to future work. Our third contribution is to leverage these ideas to construct a novel nonlinear channels aggregation network, performing the training process end-to-end.
We show that such nonlinear channels encoding results in a fast decline in the value of the loss function of CNNs, while obtaining efficient and accurate classification of actions in videos. The rest of the paper is organized as follows: Section 2 describes related works, and Section 3 presents the principle of the nonlinear channels aggregation network (NCAN) and the backward propagation of the NCAN. This is followed by the experiments in Section 4. Finally, we conclude this paper in Section 6.

Owing to the difficulty of training convolutional networks on video datasets, e.g., more parameters and small-scale video sequences, we restrict our work to the setting of action recognition. Previous works related to ours fall into two categories: convolutional networks for action recognition, and linear channels aggregation.

Convolutional networks for action recognition. Currently, one common practice in mainstream algorithms uses a stack of consecutive video frames to train the convolutional architectures, which implicitly captures motion characteristics. Another variant built on this is the temporal segment network (TSN), where ConvNets are trained leveraging multiple segments BID0 BID1 BID14 BID13 a). The last practice extends 2D convolution to capture motion with an extra temporal dimension; in this case the extra temporal dimension introduces exponentially more parameters. We also find that most of the effective methods are built on the basic frameworks presented above. Thus, we apply our methods to these fundamental architectures that most video classification algorithms have in common.

Linear channels aggregation. In traditional ConvNets, the implicit relationship across channels can be captured by 3D convolution, 3D max pooling, 3D average pooling and 3D weighted-average pooling. Compared with 2D convolution, max pooling, average pooling and weighted-average pooling, these 3D operators conduct a local linear aggregation at the space and channel level. Nevertheless, these simple linear rules fail to model explicit, global and nonlinear channel relationships. These local linear encoders, in essence, remain unable to completely represent complex nonlinear channel functions. The proposed nonlinear channels aggregation network, while also emphasizing this principle, is the first framework for end-to-end fast convergence of CNNs that captures the global channel dependency.

In this section, we give detailed descriptions of performing nonlinear channels aggregation. Specifically, we first introduce the basic linear channels encoding concepts in the framework of video tasks and utilize the available layers to conduct channel-level linear encoding. Then, we introduce the principle of the nonlinear channels aggregation layer (NCAL), simulating complex channel-level encodings. Finally, we describe the fast optimization and backward propagation of the NCAL.

We consider how to leverage the existing units in a standard CNN to model channel relationships. Suppose for the moment that different channels in the convolutional network capture various discriminative features, and adopt the bold hypothesis that there is a nonlinear relationship between them; then stacking the channels in the subsequent layer will skew the CNN in favor of the significant correspondences between the appropriate channels. To achieve this goal, current methods such as 3D pooling and 3D convolution can be utilized, as sketched below.
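As a concrete illustration of this local, linear aggregation, the hedged PyTorch sketch below treats the channel axis as the depth dimension of a 3D convolution, so each output response is a linear partial sum over a small local window of channels. The kernel sizes and tensor shapes are our own illustrative choices, not the paper's configuration.

```python
import torch
import torch.nn as nn

# Feature maps of one segment: batch x channels x height x width.
S = torch.randn(2, 64, 28, 28)

# View channels as a depth axis: batch x 1 x channels x height x width,
# then aggregate a local window of C=5 adjacent channels linearly.
conv3d = nn.Conv3d(in_channels=1, out_channels=1,
                   kernel_size=(5, 3, 3), padding=(2, 1, 1))
agg = conv3d(S.unsqueeze(1)).squeeze(1)   # back to batch x channels x h x w

print(agg.shape)   # torch.Size([2, 64, 28, 28]); each response mixes only 5 channels
```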
As mentioned in the introduction, pooling leads to a decline in feature resolution, which causes poor recognition performance. We thus utilize 3D convolution to implement the linear channel relationship. Given video frames V = {V_1, V_2, ..., V_K}, the multiple features S = {S_1, S_2, ..., S_K} denote the responses of an intermediate layer of the CNN. Let h, w and c respectively represent the height, width and channels, so S_i ∈ R^{h*w*c}. The response at spatial position (x, y, z) in the j-th feature map of the i-th layer, defined as v_ij(x, y, z), is given by DISPLAYFORM0 where I, J and C are the height, width and channels of the kernel, respectively, and activate denotes the activation function.

Discuss. Since the number of channels is arbitrary, 3D operators such as 3D max/average pooling and 3D convolution in the subsequent layers can implicitly learn an arbitrary partial correspondence between these local channels. Moreover, pooling operators do not define a local correspondence between channels but filter the significant features in a space-channel cube, finally leading to a decrease in the resolution of the convolutional features. Among these local operators, 3D convolution seems to be a relatively better encoding. In fact, we expect the encoding between channels not to be imperceptibly influenced by the spatial encoding; thus encoding spatial and channel relationships simultaneously in a 3D cube, with locality and linearity, is not a good model for constructing the nonlinear channel functions. DISPLAYFORM1

Figure: The nonlinear channels aggregation network for end-to-end global channels aggregation representations. Frame sequences from a video are taken as inputs and fed forward till the end of the nonlinear channels aggregation layer (NCAL). At the NCAL, the video sequences are encoded by the channels aggregation operator to produce the same number of channels as the convolutional features, and the result is then fed to the next layer in the network.

In general, pooling is necessary to make network training computationally feasible and, more fundamentally, to allow information aggregation over large areas of the input features. However, when these implementations are utilized to model the complex channel-level relationships, only linear and local channel representations are modeled, in an implicit manner. Moreover, temporal pooling and its variants tend to result in reduced resolution, which leads to coarse classification. Another downside of linear channels aggregation is its locality: it is inadequate for representing such complex relationships and lacks a global representation across all the channels. To tackle the problems above, it is desirable to explicitly capture the complex nonlinear relationship encoded in multiple channels. A general design principle of the nonlinear channels aggregation layer (NCAL) is that its introduction should not break the rest of the CNN architecture, while keeping a unique yet global channel-level encoding for the final classification. Denoting F as the function of the CNN, S_i is calculated as follows: DISPLAYFORM0 In particular, let DISPLAYFORM1 denote the responses of the NCAL, one for each of the K video clips.
Then, we utilize the channels aggregation equation to encode the channel-level features and produce a compact group of feature maps, DISPLAYFORM2 where G is the kernel function of the NCAL, H represents the normalization, m, n index the spatial location, and c denotes the channel location. DISPLAYFORM3 Instead of a many-to-one linear channels aggregation, the NCAL implements a many-to-many mapping. As shown in Fig. 2, each response of the NCAL is aggregated from all the features at the same spatial location of S_i, producing a global yet compact representation, while no extra parameters are involved. Most importantly, the dimensions of its input and output remain the same, which means the NCAL can be positioned anywhere in feedforward ConvNet architectures consisting of convolutional, fully connected, pooling and nonlinearity layers. Such an implementation keeps the rest of the components in the CNN structure identical to the original, which is critical for utilizing the benefit of transfer learning for other tasks. Next, we multiply every representation Ys_i with the corresponding convolutional feature S_r in the forward propagation pipeline, DISPLAYFORM4 where E_r represents the input features of the NCAL. Eq. keeps the parameters of the layers before the NCAL module from being greatly offset by the introduction of the nonlinear channels aggregation rules. It can be seen that this implementation also enlarges the gradient of the loss with respect to the former layers, avoiding the vanishing gradient problem; we analyze this in a later part.

Different from a linear channels aggregation unit, which maps the same spatial location across multiple channels to a single feature, the NCAL aims to obtain the same number of global outputs as inputs. As in temporal linear encoding networks, one could utilize the outer product to capture the interaction of features with each other at all spatial locations, hence leading to a high-dimensional representation; for this reason, the Tensor Sketch algorithm is utilized to project this high-dimensional space to a lower-dimensional one.

We propose the nonlinear channels aggregation layer (NCAL) to fit the complex function over all the channels, and to implement it end-to-end in a fast but efficient manner. Consider convolutional features truncated at a convolutional layer for K segments; these feature maps are matrices S = {S_1, S_2, ..., S_K}. A channels aggregation function N: S_1, S_2, ..., S_K → Ys_1, Ys_2, ..., Ys_K aggregates the c channels in each segment to output multiple encoded representations Ys_1, Ys_2, ..., Ys_K, and this unit can be applied to the output of intermediate convolutional layers. In order to embed the NCAL inside CNNs as an intermediate layer, it is critical to compute a fast forward propagation and the gradients for the back-propagation step, allowing back-prop training through it. First, we implement an efficient and equivalent forward propagation; based on this, the specific gradient of the NCAL is derived for the back-propagation. We divide eq. into two parts, DISPLAYFORM0 DISPLAYFORM1 DISPLAYFORM2 where the normalization process H is removed to simplify the derivation. As shown in eq. and FORMULA9, such an accumulation across all the channels for all the segments suffers from enormous computational complexity, equivalent to the outer product operators. To tackle this optimization difficulty, we rewrite eq. and FORMULA9 in matrix form, DISPLAYFORM3 where W^L_sr represents the corresponding transformation matrix; more specifically, the matrix form can be formulated as DISPLAYFORM4.
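Since the exact kernel G and normalization H are not reproduced in this copy, the hedged PyTorch sketch below implements only the structure the text describes: a fixed, parameter-free channels-to-channels transformation matrix applied at every spatial location, whose output is then multiplied element-wise with the input features. The sinusoidal form of the matrix is our own guess, motivated by the kernel discussion in Section 4.4.

```python
import torch

def ncal(S, T):
    """S: batch x c x h x w features; T: c x c fixed aggregation matrix.
    Each output channel combines all input channels at the same spatial
    location (many-to-many), and the result gates the input (eq. for E_r)."""
    Ys = torch.einsum('oc,bchw->bohw', T, S)   # global channel aggregation
    return Ys * S                              # multiply with the features

c = 64
# Illustrative fixed kernel: angles rescaled into (0, pi/2) so the mapping
# over channels stays single-valued, as argued in Section 4.4.
idx = torch.arange(c, dtype=torch.float32)
theta = (idx[:, None] * idx[None, :] / (c * c) + 1e-2) * (torch.pi / 2 - 2e-2)
T = torch.sin(theta)

S = torch.randn(2, c, 28, 28, requires_grad=True)
E = ncal(S, T)
E.sum().backward()          # gradients flow through the fixed aggregation
print(E.shape, S.grad is not None)   # same dims in and out; no extra parameters
```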
All the steps we have done so far are expressed in this matrix form. After this, the back-propagation step can be conducted with the gradient, DISPLAYFORM5

Discuss. Next, we discuss the difference between the nonlinear channels aggregation layer and the traditional outer product. The benefit of the original outer product is that the resulting feature captures multiplicative interactions at corresponding spatial locations. The main drawback of this feature is its high dimensionality; once the resulting features are followed by a fully-connected layer, many more parameters are involved in the CNN, making it prone to over-fitting. So far, we find that the NCAL, like the outer product, captures global information representing the whole channel range, and that the NCAL is optimized by a fast yet compact matrix implementation. Moreover, each value in the weighting kernel of the NCAL is a one-to-one mapping for channels, while the corresponding value in the outer product is a many-to-one mapping; that is, the outer product over the whole channel range views each channel as equally important. In our proposed method, all the responses of the NCAL are obtained at once by multiplying the transformation matrix with the convolutional feature maps of an intermediate layer; in other words, the outer-product-style implementation, in which every response of the NCAL is aggregated from every channel of one intermediate layer, is transformed into a single matrix multiplication.

We first describe the experimental setup used in the paper. Then we evaluate and compare the performance of NCAN architectures in the following sections. Finally, the curve of the loss over iterations is depicted for further analysis of its effect.

UCF101. UCF101 is an action recognition dataset, collected from YouTube, with 101 video categories BID10. The videos in the 101 action categories are grouped into 25 units, each unit containing 4-7 videos of an action. With large variations in camera motion, object appearance and pose, viewpoint, clutter, etc., it is a challenging dataset.

HMDB51. HMDB51 is a large human motion dataset, containing 6849 clips divided into 51 action categories BID8. Because HMDB51 is extracted from commercial movies as well as YouTube, it reflects a sufficient variety of lighting conditions, situations and surroundings, which are close to realistic videos.

Experimental setup. In view of efficiency and accuracy, our architectures are built on Inception V1 and pre-trained on the ImageNet dataset. With the pre-trained model initializing our architectures, all the layers are fine-tuned with the learning rate set to 10^-2, decreased to 10^-3 after 12 epochs; training is stopped after 20 epochs. We position the NCAL behind different convolutional layers; thus, various nonlinear aggregation networks with the NCAL in different locations are constructed, to explore where the NCAL should be placed. In this section, we sample a single image from a video for training and evaluate the performance of the CNN with the NCAL at various convolutional layers. Table I reports the experimental results of CNNs with the NCAL in various locations on split 1 of UCF101. We can see that the NCAL following the last convolutional layer outperforms the reproduction of the original architecture, and that the NCAL performs well when placed behind the later convolutional layers.
As shown in Table I, the more distinctive the convolutional features are, the greater the effect of the NCAL on recognition accuracy. Based on this, we stack multiple frames to train CNNs with the NCAL embedded after the final convolutional layer, and report the performance on the three splits of UCF101 and HMDB51. Among the NCANs, the CNN with multiple NCALs turns out to be the most competitive. In practice, our NCANs yield improvements in recognition accuracy of 0.3%-0.5% (single frame, Table I) and 0.3%-0.7% (stacked frames, Table II). This can be ascribed to the fact that the channel-level aggregation maps the convolutional maps to another distinctive feature space, in which each feature map is encoded by the hand-crafted rules of feature enhancement. Another intuitive conjecture is that the NCAL captures the global nonlinear property across the channels, resulting in the improvement of accuracy. However, the purpose of this paper is not to boost the performance of action recognition, but to facilitate convergence without sacrificing classification accuracy. As another interesting note, no extra parameters are updated in the NCAL, in comparison to a CNN purely consisting of convolutional, pooling, fully-connected layers and so on. Thus, our models are simple to construct and computationally efficient. Moreover, there is no need to rebuild the CNN models when introducing the NCAL.

As seen in the training process of a CNN, the network incurs an almost multi-fold increase in training time once trained on stacked still images. Therefore, it is critical to arrive at the same loss within fewer iterations. We evaluate the convergence in FIG2: among the 6 splits of UCF101 and HMDB51, the proposed NCANs achieve better convergence than the standard CNN, as shown in FIG3. We conclude that nonlinear channels aggregation is a high-quality encoding of the channel relationship, acting as a global fit to the complex channel functions, as it aggregates all the channel features into a new dimension space.

From a modeling perspective, we are interested in answering the following questions: what hand-crafted rules are best at taking advantage of global channel relationships in a standard CNN architecture? How do such hand-crafted rules influence the convergence of a CNN? We limit the hand-crafted rule to the constraint of channel enhancement, and examine these questions empirically by using a nonlinear channels aggregation to model the complex and nonlinear relationships across the channels. From a practical standpoint, there are currently few methods that model the global yet explicit channel dependency, because global channel relationships are significantly more difficult to ascertain and model; that is, most channel relationships are local, vague and implicit. For explicit and global channels aggregation, we propose the nonlinear channels aggregation operations in Section 3.1. To keep the rest of the convolutional structure identical to the original CNNs, both linear and nonlinear channels aggregations use the convolutional features as input and produce the same number of channels as output. Such a setting is significant for extending channels aggregation to other domains in the future.

4.4 Why the NCAL can facilitate network convergence. In addition to the convergence comparison of the NCAN with the standard CNN, a theoretical analysis of why the NCAL can facilitate training should be considered.
We therefore perform a qualitative analysis of the forward and backward propagations. As explained in the previous section, the NCAL can be viewed as a nonlinear combination of data points along the channels, and the coefficients can be computed using eq.. In our work, if the normalization H is ignored, these parameters can also be defined as a one-to-one mapping in eq.. Each item in eq. FORMULA0 is greater than 1 when θ ∈ (0, π/2). In practice, owing to the periodicity of the sinusoidal and cosine functions, the mapping process may result in the mistake that features in different channels are mapped to the same value. This mapping mistake confuses the CNN and makes it lose the ability to distinguish features with periodic relations, which reduces recognition accuracy. To tackle this problem, the kernel function G of the NCAL rescales θ to the single-valued interval θ ∈ (0, π/2), such that each feature in the convolutional maps is enhanced and aggregated by distinctive transformation items. For the back-propagation, the gradients have been derived in eq. FORMULA0, and fortunately the transformation matrix is symmetric positive definite. It is interesting to note that the proposed nonlinear channels aggregation layer not only captures the global channel relationship but also enlarges the gradient in the back-propagation, reducing the risk of vanishing gradients.

We furthermore perform an analysis of the nonlinear channels aggregation layer (NCAL). In general, representations at later convolutional layers tend to be somewhat local, where channels correspond to specific, natural parts instead of being dimensions in a completely distributed coding. That said, not all channels correspond to natural parts, resulting in a possibly different decomposition of discriminative features than humans might expect. We can even assume that some channels are useless or interfering; inevitably, this becomes a key factor that slows network convergence. In particular, NCANs perform well on CNNs equipped with other methods facilitating convergence, for instance BN, dropout and optimization strategies, which indicates that the NCAL is complementary to the existing training tricks.

We present nonlinear channels aggregation, a powerful, new, yet simple concept in the context of deep learning that captures the global channel relationship. We introduce a novel nonlinear channels aggregation layer (NCAL) and give a fast yet accurate implementation of it, which allows us to embed the principle of complex channel encoding into mainstream CNN architectures and back-propagate the gradients through NCALs. Experiments on video sequences demonstrate the effectiveness of nonlinear channels aggregation in facilitating the training of CNNs. In this paper we fit the complex channel relationships by capturing the global channels aggregation. Still, there are possible research directions that can be further expanded for modeling the nonlinear functions across channels. In the future, it would be beneficial to explore multi-scale channel levels by pyramid coding across channels. More broadly, we can embed other hand-crafted rules, like channels aggregation, into mainstream architectures, to make CNNs converge as we expect.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rJgdHs05FQ
An architecture enables CNN trained on the video sequences converging rapidly
We present a new method for black-box adversarial attack. Unlike previous methods that combined transfer-based and score-based methods by using the gradient or initialization of a surrogate white-box model, this new method tries to learn a low-dimensional embedding using a pretrained model, and then performs efficient search within the embedding space to attack an unknown target network. The method produces adversarial perturbations with high-level semantic patterns that are easily transferable. We show that this approach can greatly improve the query efficiency of black-box adversarial attack across different target network architectures. We evaluate our approach on MNIST, ImageNet and the Google Cloud Vision API, resulting in a significant reduction in the number of queries. We also attack adversarially defended networks on CIFAR10 and ImageNet, where our method not only reduces the number of queries, but also improves the attack success rate.

The wide adoption of neural network models in modern applications has caused major security concerns, as such models are known to be vulnerable to adversarial examples that can fool neural networks into making wrong predictions . Methods to attack neural networks can be divided into two categories based on whether the parameters of the neural network are assumed to be known to the attacker: white-box attack and black-box attack. There are several approaches to finding adversarial examples for black-box neural networks. The transfer-based attack methods first pretrain a source model and then generate adversarial examples using a standard white-box attack method on the source model to attack an unknown target network (; ; ; a). The score-based attack requires a loss oracle, which enables the attacker to query the target network at multiple points to approximate its gradient. The attacker can then apply white-box attack techniques with the approximated gradient (; a;). A major problem of the transfer-based attack is that it cannot achieve a very high success rate, and it is weak in targeted attack. On the contrary, the success rate of score-based attack has only a small gap to the white-box attack, but it requires many queries. Thus, it is natural to combine the two black-box attack approaches, so that we can take advantage of a pretrained white-box source neural network to perform a more efficient search to attack an unknown target black-box model. In fact, in the recent NeurIPS 2018 Adversarial Vision Challenge , many teams transferred adversarial examples from a source network as the starting point to carry out black-box boundary attack . N Attack also used a regression network as initialization in the score-based attack (a). The transferred adversarial example can be a good starting point that lies close to the decision boundary of the target network and accelerates further optimization. P-RGF used the gradient information from the source model to accelerate the searching process. However, gradient information is localized and sometimes misleading. In this paper, we push the idea of using a pretrained white-box source network to guide black-box attack significantly further, by proposing a method called TRansferable EMbedding based Black-box Attack (TREMBA).
TREMBA contains two stages: train an encoder-decoder that can effectively generate adversarial perturbations for the source network with a low-dimensional embedding space; apply NES (Natural Evolution Strategy) of to the low-dimensional embedding space of the pretrained generator to search adversarial examples for the target network. TREMBA uses global information of the source model, capturing high-level semantic adversarial features that are insensitive to different models. Unlike noise-like perturbations, such perturbations have much higher transferability across different models. Therefore we gain query efficiency by performing queries in the embedding space. We note that there have been a number of earlier works on using generators to produce adversarial perturbations in the white-box setting (; ;). While black-box attacks were also considered there, they focused on training generators with dynamic distillation. These early approaches required many queries to fine-tune the classifier for different target networks, which may not be practical for real applications. While our approach also relies on a generator, we train it as an encoder-decoder that produces a low-dimensional embedding space. By applying a standard black-box attack method such as NES on the embedding space, adversarial perturbations can be found efficiently for a target model. It is worth noting that the embedding approach has also been used in AutoZOOM . However, AutoZOOM only trained the autoencoder to reconstruct the input, and it did not take advantage of the information of a pretrained network. Although it also produces structural perturbations, these perturbations are usually not suitable for attacking regular networks, and sometimes its performance is even worse than directly applying NES to the images . TREMBA, on the other hand, tries to learn an embedding space that can efficiently generate adversarial perturbations for a pretrained source network. Compared to AutoZOOM, our new method produces adversarial perturbations with high-level semantic features that can hugely affect arbitrary target networks, resulting in a significantly lower number of queries.

We summarize our contributions as follows: 1. We propose TREMBA, an attack method that explores a novel way to utilize the information of a pretrained source network to improve the query efficiency of black-box attack on a target network. 2. We show that TREMBA can produce adversarial perturbations with high-level semantic patterns, which are effective across different networks, resulting in much lower query counts on MNIST and ImageNet, especially for the targeted attack that has low transferability. 3. We demonstrate that TREMBA can be applied to SOTA defended models . Compared with other black-box attacks, TREMBA increases the success rate by approximately 10% while reducing the number of queries by more than 50%.

There is a vast literature on adversarial examples. We will cover the most relevant topics, including white-box attack, black-box attack and defense methods.

White-Box Attack. White-box attack requires full knowledge of the target model. It was first discovered that adversarial examples could be found by solving an optimization problem with L-BFGS . Later on, other methods were proposed to find adversarial examples with improved success rate and efficiency (; ; b;). More recently, it was shown that generators can also construct adversarial noises with high success rate .
Black-Box Attack. Black-box attack can be divided into three categories: transfer-based, score-based and decision-based. It is well known that adversarial examples have high transferability across different networks (a). Transfer-based methods generate adversarial noises on a source model and then transfer them to an unknown target network. It is known that targeted attack is harder than untargeted attack for transfer-based methods, and that using an ensemble of source models can improve the success rate . Score-based attack assumes that the attacker can query the output scores of the target network. The attacker usually uses sampling methods to approximate the true gradient (; a; a;). AutoZOOM tried to improve the query efficiency by reducing the sampling space with a bilinear transformation or an autoencoder . (b) incorporated data and time priors to accelerate attacking. In contrast to gradient-based methods, used combinatorial optimization to achieve good efficiency. In decision-based attack, the attacker only knows the output label of the classifier. Boundary attack and its variants are very powerful in this setting (;). In the NeurIPS 2018 Adversarial Vision Challenge , some teams combined transfer-based attack and decision-based attack in their attacking methods . In a similar spirit, N Attack also used a regression network as initialization in score-based attack (a). Gradient information from the surrogate model can also be used to accelerate the score-based attack .

Defense Methods. Several methods have been proposed to overcome the vulnerability of neural networks. Gradient-masking based methods add non-differentiable operations to the model, interrupting the backward pass of gradients. However, they are vulnerable to adversarial attacks with an approximated gradient (; a). Adversarial training is the SOTA method for improving the robustness of neural networks. Adversarial training is a minimax game: the outside minimizer performs regular training of the neural network, and the inner maximizer finds a perturbation of the input to attack the network. The inner maximization process can be approximated with FGSM , PGD , an adversarial generator , etc. Moreover, feature denoising can improve the robustness of neural networks on ImageNet .

Let x be an input, and let F(x) be the output vector obtained before the softmax layer. We denote F(x)_i as the i-th component of the output vector and y as the label of the input. For untargeted attack, our goal is to find a small perturbation δ such that the classifier predicts a wrong label, i.e., arg max F(x + δ) ≠ y. For targeted attack, we want the classifier to predict the target label t, i.e., arg max F(x + δ) = t. The perturbation δ is usually bounded in ℓ_p norm: ‖δ‖_p ≤ ε, with a small ε > 0. Adversarial perturbations often have high transferability across different DNNs. Given a white-box source DNN F_s with known architecture and parameters, we can transfer its white-box adversarial perturbation δ_s to a black-box target DNN F_t with reasonably good success rate. It is known that even if x + δ_s fails to be an adversarial example, δ_s can still act as a good starting point for searching adversarial examples with a score-based attack method. This paper shows that the information of F_s can be further utilized to train a generator, and that performing search on its embedding space leads to more efficient black-box attacks on an unknown target network F_t. Adversarial perturbations can be generated by a generator network G.
Adversarial perturbations often have high transferability across different DNNs. Given a white-box source DNN F_s with known architecture and parameters, we can transfer its white-box adversarial perturbation δ_s to a black-box target DNN F_t with reasonably good success rate. It is known that even if x + δ_s fails to be an adversarial example, δ_s can still act as a good starting point for searching for adversarial examples with a score-based attack method. This paper shows that the information of F_s can be further utilized to train a generator, and that performing search on its embedding space leads to more efficient black-box attacks on an unknown target network F_t. Adversarial perturbations can be generated by a generator network G. We explicitly divide the generator into two parts: an encoder E and a decoder D. The encoder takes the original input x and outputs a latent vector z = E(x), where dim(z) ≪ dim(x). The decoder takes z as input and outputs an adversarial perturbation δ = ε tanh(D(z)) with dim(δ) = dim(x). In our new method, we train the generator G so that δ = ε tanh(G(x)) can fool the source network F_s. Suppose we have a training set {(x_1, y_1), ..., (x_n, y_n)}, where x_i denotes the input and y_i denotes its label. For un-targeted attack, we train the desired generator by minimizing the hinge loss used in the C&W attack, L_untargeted = Σ_i max( F_s(x_i + δ_i)_{y_i} − max_{j≠y_i} F_s(x_i + δ_i)_j, −κ ) (Eqn. 1), and for targeted attack we use L_targeted = Σ_i max( max_{j≠t} F_s(x_i + δ_i)_j − F_s(x_i + δ_i)_t, −κ ) (Eqn. 2), where t denotes the targeted class and κ is the margin parameter that can be used to adjust the transferability of the generator; a higher value of κ leads to higher transferability to other models. We focus on the ℓ∞ norm in this work. By adding a point-wise tanh function to the unnormalized output D(z) and scaling it by ε, δ = ε tanh(D(z)) is already bounded as ‖δ‖∞ < ε, so we do not need to impose the infinity-norm constraint explicitly. While the hinge loss is employed in this paper, we believe other loss functions such as the cross-entropy loss would also work. Consider a new black-box DNN classifier F_t(x), for which we can only query the output at any given point x. As in prior work, we can employ NES to approximate the gradient of a properly defined surrogate loss in order to find an adversarial example. Denote the surrogate loss by L. Rather than calculating ∇_δ L(x+δ, y) directly, NES updates δ using the smoothed gradient ∇_δ E_{ω∼N(δ,σ²I)}[L(x+ω, y)] = (1/σ²) E_{ω∼N(δ,σ²I)}[(ω − δ) L(x+ω, y)]. The expectation can be approximated with finite samples, and δ can be updated iteratively as δ_{t+1} = Π_{[−ε,ε]}( δ_t − η sign( (1/(bσ²)) Σ_{k=1}^{b} (ω_k − δ_t) L(x+ω_k, y) ) ), where η is the learning rate, b is the minibatch sample size, ω_k is a sample from the Gaussian distribution N(δ_t, σ²I) and Π_{[−ε,ε]} represents a clipping operation, which projects δ onto the ℓ∞ ball. The sign function provides an approximation of the gradient and has been widely used in adversarial attack; however, it has been observed that more effective attacks can be obtained by removing the sign function. Therefore in this work, we remove the sign function from the update above and directly use the estimated gradient. Instead of performing search on the input space, TREMBA performs search on the embedding space z. The generator G explores the weakness of the source DNN F_s so that D produces perturbations that effectively attack F_s. For a different unknown target network F_t, we show that our method can still generate perturbations leading to more effective attacks on F_t. Given an input x and its label y, we choose the starting point z_0 = E(x). The NES gradient at z_t can be estimated as ∇_{z_t} E[L(x + ε tanh(D(ν)), y)] ≈ (1/(bσ²)) Σ_{k=1}^{b} (ν_k − z_t) L(x + ε tanh(D(ν_k)), y), where ν_k is a sample from the Gaussian distribution N(z_t, σ²I). z_t is then updated with stochastic gradient descent. The detailed procedure is presented in Algorithm 1. We do not need to perform projection explicitly since δ already satisfies ‖δ‖∞ < ε.
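As an illustration only (not the authors' released code), here is a minimal NumPy sketch of one NES step on the embedding space, assuming `loss_fn` queries the target network's surrogate loss and `decode` stands in for D:

```python
import numpy as np

def nes_embedding_step(z, x, loss_fn, decode, eps, sigma=0.1, lr=0.5, b=20, rng=None):
    """One NES update on the embedding z, with the sign function removed.

    loss_fn(x_adv) -> scalar surrogate loss queried from the target network.
    decode(z) -> unnormalized perturbation D(z); the final perturbation
    eps * tanh(D(z)) is automatically bounded in l-infinity norm.
    """
    rng = rng or np.random.default_rng()
    grad = np.zeros_like(z)
    for _ in range(b):
        nu = z + sigma * rng.standard_normal(z.shape)  # nu_k ~ N(z, sigma^2 I)
        x_adv = x + eps * np.tanh(decode(nu))          # candidate adversarial example
        grad += (nu - z) * loss_fn(x_adv)              # score-weighted direction
    grad /= b * sigma ** 2
    return z - lr * grad                               # plain SGD step on z
```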
Next we briefly explain why applying NES on the embedding space z can accelerate the search process. Adversarial examples can be viewed as a distribution lying around a given input. Usually this distribution is concentrated on some small regions, making the search process relatively slow. After training on the source network, the adversarial perturbations of TREMBA carry high-level semantic patterns that are likely to be adversarial patterns of the target network as well. Therefore searching over z is like searching for adversarial examples in a lower-dimensional space containing likely adversarial patterns. The distribution of adversarial perturbations in this space is much less concentrated, so it is much easier to find effective adversarial patterns in the embedding space. Algorithm 1 (black-box adversarial attack on the embedding space) summarizes the procedure: its inputs are the target network F_t, the input x and its label y (or the target class t), the encoder E, the decoder D, the standard deviation σ, the learning rate η, the sample size b, the number of iterations T and the bound ε for the adversarial perturbation; starting from z_0 = E(x), it iterates the NES update on z for t = 1 to T and outputs the adversarial perturbation δ = ε tanh(D(z_T)). We evaluated the number of queries versus the success rate of TREMBA on undefended networks on two datasets: MNIST and ImageNet. Moreover, we evaluated the efficiency of our method on adversarially defended networks on CIFAR10 and ImageNet. We also attacked the Google Cloud Vision API to show that TREMBA generalizes to a truly black-box model. We used the hinge losses from Eqn. 1 and 2 as the surrogate loss for un-targeted and targeted attack respectively. We compared TREMBA to four methods. NES: the method introduced in prior work, but without the sign function for the reasons explained earlier. Trans-NES: take an adversarial perturbation generated by PGD or FGSM on the source model to initialize NES. AutoZOOM: attack the target network with an unsupervised autoencoder; for fair comparison with the other methods, the strategy of adapting the sample size was removed. P-RGF: the prior-guided random gradient-free method; the P-RGF_D(λ*) version was compared. We also combined P-RGF with initialization from Trans-NES PGD to form a more efficient method for comparison, denoted Trans-P-RGF. Since different methods achieve different success rates, we need to compare their efficiency at different levels of success rate. For method i with success rate s_i, let q_i denote the average number of queries over its successful examples, and let q* denote the upper limit on the number of queries. We modified the average number of queries to q̃_i = s_i q_i + (1 − s_i) q*, which unifies the level of success rate by treating the queries of failure examples as the upper limit on the number of queries. Average queries can sometimes be misleading due to the heavy-tailed distribution of queries; therefore we also plot the curve of success rate at different query levels to show the detailed behavior of the different attacks.
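A small helper (names ours) computing this adjusted average, under the reconstruction of the formula given above in which each failure is counted at the query limit:

```python
def adjusted_avg_queries(success, queries, q_limit=50000):
    """Average queries over all examples, counting each failure at the limit q*.

    success[i] is 1 if the attack on example i succeeded; queries[i] is the
    number of queries spent on example i (ignored for failures).
    """
    assert len(success) == len(queries)
    total = sum(q if ok else q_limit for ok, q in zip(success, queries))
    return total / len(success)
```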
The upper limit on the number of queries was set to 50000 for all datasets, which already gave a very high success rate for nearly all the methods. Only correctly classified images were counted towards success rate and average queries. To compare the methods fairly, we chose the same sample size for all methods, added momentum and learning-rate decay for optimization, and counted the queries as one if the starting point already successfully attacks the target classifier. The learning rate was fine-tuned for all algorithms. We list the hyperparameters and the architectures of the generators and classifiers in Appendices B and C. We trained four neural networks on MNIST, denoted ConvNet1, ConvNet1*, ConvNet2 and FCNet. ConvNet1* and ConvNet1 have the same architecture but different parameters. All networks achieved about 99% accuracy. The generator G was trained on ConvNet1* using all images from the training set. Each attack was tested on images from the MNIST test set, with the ℓ∞ limit ε = 0.2. We performed un-targeted attack on MNIST; Table 1 lists the success rates and the average queries. Although the success rate of TREMBA is slightly lower than Trans-NES on ConvNet1 and FCNet, their success rates are already close to 100%, and TREMBA achieves about a 50% reduction in queries compared with the other attacks. In contrast to their efficient attacks on ImageNet, P-RGF and Trans-P-RGF behave very poorly on MNIST. Figure 4.1 shows that TREMBA consistently achieves a higher success rate at nearly all query levels. We randomly divided the ImageNet validation set into two parts, containing 49000 and 1000 images respectively. The first part was used as training data for the generator G, and the second part was used for evaluating the attacks. We evaluated the efficiency of all adversarial attacks on VGG19, Resnet34, DenseNet121 and MobilenetV2; all networks were downloaded via the torchvision package. We set ε = 0.03125. We used an ensemble of VGG16, Resnet18, Squeezenet and Googlenet as the source model to improve transferability for both targeted and un-targeted attack; TREMBA, Trans-NES, P-RGF and Trans-P-RGF all used the same source model for fair comparison. We chose several target classes. Here, we show the results of attacking class 0 (tench) in Table 2 and Figure 2, and we leave the results of attacking other classes to Appendix A.1. The average number of queries for TREMBA is about 1000, while the average queries of nearly all the other methods exceed 6000. TREMBA also achieves much lower query counts for un-targeted attack on ImageNet; the results are shown in Appendix A.2 due to space limitations. We also compare TREMBA with CombOpt in Appendix A.9. Figure 3 shows the adversarial perturbations of the different methods. Unlike the adversarial perturbations produced by PGD, the perturbations of TREMBA reflect high-level semantic patterns of the targeted class, such as the fish scales. As neural networks usually capture such patterns for classification, the adversarial perturbations of TREMBA transfer more easily than the noise-like perturbations produced by PGD; therefore TREMBA can search very effectively for the target network. More examples of TREMBA's perturbations are shown in Appendix A.3. We also performed attacks with different ensembles of source models, shown in Appendix A.4: TREMBA outperforms the other methods for each ensemble model, and more source networks lead to better transferability for TREMBA, Trans-NES and Trans-P-RGF. Varying ε: We also performed attacks with ε = 0.02 and ε = 0.04. As shown in Appendix A.5, TREMBA still outperforms the other methods despite using the generator G trained with ε = 0.03125; we also show the results of TREMBA for the commonly used ε = 0.05. Sample size and dimension of the embedding space: To justify the choice of sample size, we performed a hyperparameter sweep over b; the results are shown in Appendix A.6. We also changed the dimension of the embedding space for AutoZOOM and Trans-P-RGF. As shown in Appendix A.7, the performance gain of TREMBA does not come purely from reducing the dimension of the embedding space. This section presents the results of attacking defended networks. We performed un-targeted attacks on two SOTA defense methods on CIFAR10 and ImageNet. MNIST is not studied since it is already robust against very strong white-box attacks. For CIFAR10, the defended model was trained with PGD minimax training; we directly used their released model as the source network, denoted WResnet. To test whether these methods can transfer to a defended network with a different architecture, we trained a defended ResNeXt using the same method. For ImageNet, we used the SOTA feature-denoising model.
We used "ResNet152 Denoise" as the source model and transfered adversarial perturbations to the most robust "ResNeXt101 DenoiseAll". Following the previous settings, we set ε = 0.03125 for both CIFAR10 and ImageNet. As shown in Table 3, TREMBA achieves higher success rates with lower number of queries. TREMBA achieves about 10% improvement of success rate while the average queries are reduced by more than 50% on ImageNet and by 80% on CIFAR10. The curves in Figure 4 (a) and 4(b) show detailed behaviors. The performance of AutoZOOM surpasses Trans-NES on defended models. We suspect that low-frequency adversarial perturbations produced by AutoZOOM will be more suitable to fool the defended models than the regular networks. However, the patterns learned by AutoZOOM are still worse than adversarial patterns learned by TREMBA from the source network. An optimized starting point for TREMBA: z 0 = E(x) is already a good starting point for attacking undefended networks. However, the capability of generator is limited for defended networks . Therefore, z 0 may not be the best starting point we can get from the defended source network. To enhance the usefulness of the starting point, we optimized z on the source network by gradient descent and found The method is denoted by TREMBA OSP (TREMBA with optimized starting point). Figure 4 shows TREMBA OSP has higher success rate at small query levels, which means its starting point is better than TREMBA. We also attacked the Google Cloud Vision API, which was much harder to attack than the single neural network. Therefore we set ε = 0.05 and perform un-targeted attack on the API, changing the top1 label to whatever is not on top1 before. We chose 10 images for the ImageNet dataset and set query limit to be 500 due to high cost to use the API. As shown Table 4, TREMBA achieves much higher accuracy success rate and lower number of queries. We show the example of successfully attacked image in Appendix A.8. We propose a novel method, TREMBA, to generate likely adversarial patterns for an unknown network. The method contains two stages: training an encoder-decoder to generate adversarial perturbations for the source network; search adversarial perturbations on the low-dimensional embedding space of the generator for any unknown target network. Compared with SOTA methods, TREMBA learns an embedding space that is more transferable across different network architectures. It achieves two to six times improvements in black-box adversarial attacks on MNIST and ImageNet and it is especially efficient in performing targeted attack. Furthermore, TREMBA demonstrates great capability in attacking defended networks, ing in a nearly 10% improvement on the attack success rate, with two to six times of reductions in the number of queries. TREMBA opens up new ways to combine transfer-based and score-based attack methods to achieve higher efficiency in searching adversarial examples. For targeted attack, TREMBA requires different generators to attack different classes. We believe methods from conditional image generation may be combined with TREMBA to form a single generator that could attack multiple targeted classes. We leave it as a future work. A EXPERIMENT A.1 TARGETED ATTACK ON IMAGENET Figure 9 shows of the targeted attack on dipper, American chameleon, night snake, ruffed grouse and black swan. TREMBA achieves much higher success rate than other methods at almost all queries level. 
We also attacked the Google Cloud Vision API, which is much harder to attack than a single neural network. We therefore set ε = 0.05 and performed an un-targeted attack on the API, changing the top-1 label to anything that was not the top-1 label before. We chose 10 images from the ImageNet dataset and set the query limit to 500 due to the high cost of using the API. As shown in Table 4, TREMBA achieves a much higher success rate and a lower number of queries; an example of a successfully attacked image appears in Appendix A.8. We propose a novel method, TREMBA, to generate likely adversarial patterns for an unknown network. The method contains two stages: training an encoder-decoder to generate adversarial perturbations for the source network, and searching for adversarial perturbations in the low-dimensional embedding space of the generator for any unknown target network. Compared with SOTA methods, TREMBA learns an embedding space that is more transferable across different network architectures. It achieves two- to six-fold improvements in black-box adversarial attacks on MNIST and ImageNet and is especially efficient at targeted attack. Furthermore, TREMBA demonstrates great capability in attacking defended networks, resulting in a nearly 10% improvement in attack success rate with two- to six-fold reductions in the number of queries. TREMBA opens up new ways to combine transfer-based and score-based attack methods to achieve higher efficiency in searching for adversarial examples. For targeted attack, TREMBA requires a different generator for each target class; we believe methods from conditional image generation could be combined with TREMBA to form a single generator that attacks multiple target classes, and we leave this as future work. A EXPERIMENT A.1 TARGETED ATTACK ON IMAGENET Figure 9 shows the results of the targeted attack on dipper, American chameleon, night snake, ruffed grouse and black swan. TREMBA achieves a much higher success rate than the other methods at almost all query levels. We used the same source model from the targeted attack as the source model for the un-targeted attack. We report our evaluation in Table 5 and Figure 5. Compared with Trans-P-RGF, TREMBA reduces the number of queries by more than half on ResNet34, DenseNet121 and MobilenetV2. Searching in the embedding space of the generator remains very effective even when the target network architecture differs significantly from the networks in the source model. Figure 5: the success rate of un-targeted black-box adversarial attack at different query levels for undefended ImageNet models. Figure 10 shows some examples of adversarial perturbations produced by TREMBA. The first column is one image of the target class and the other columns are examples of perturbations (amplified 10 times). It is easy to discover features of the target class in the adversarial perturbation, such as the feathers for birds and the body for snakes. We chose two more source ensemble models for evaluation: the first ensemble contains VGG16 and Squeezenet, and the second consists of VGG16, Squeezenet and Googlenet. Figure 6 shows our results for the targeted attack on ImageNet. We only compared Trans-NES PGD and Trans-P-RGF since they are the best variants of Trans-NES and P-RGF. Figure 6: success rate at different query levels of the targeted attack for different ensemble source networks (V represents VGG16; S represents Squeezenet; G represents Googlenet; R represents Resnet18). A.5 VARYING ε We chose ε = 0.02 and ε = 0.04 and performed targeted attacks on ImageNet. Although TREMBA used the same generator trained with ε = 0.03125, it still outperformed the other methods, which shows that TREMBA can also generalize to adversarial attacks of different strengths. For the commonly used ε = 0.05, TREMBA also performs well. The results are shown in Table 6, Table 7 and Figure 8. We performed a hyperparameter sweep over b on Densenet121 for the un-targeted attack on ImageNet. b = 20 may not be the best choice for Trans-NES, but it is not the best for TREMBA either. Generally, the performance is not very sensitive to b, and TREMBA would still outperform the other methods even if we fine-tuned the sample size for all methods. We slightly changed the architecture of the autoencoder by adding max-pooling layers and changing the number of filters, and performed un-targeted attack on ImageNet. More specifically, we added additional max-pooling layers after the first and the fourth convolution layers and changed the number of filters of the last layer in the encoder to 8, so that the dimension of the embedding space becomes 8 × 8 × 8; we also changed the factor of the bilinear upsampling in the decoder accordingly. The remaining settings are the same as in Appendix A.2. As shown in Table 9, this autoencoder is even worse than the original autoencoder despite the smaller dimension of the embedding space. In addition, we also changed the dimension of the data-dependent prior of Trans-P-RGF to match the dimension of TREMBA, whose performance is also not better than before. These results show that simply diminishing the size of the embedding space may not lead to better performance; the performance gain of TREMBA goes beyond the effect of reducing the dimension of the embedding space. A.8 EXAMPLES OF ATTACKING GOOGLE CLOUD VISION API Figure 11 shows one example of attacking the Google Cloud Vision API: TREMBA successfully makes the shark be classified as "green". CombOpt is one of the SOTA score-based black-box attacks.
We compared our method with it on targeted and un-targeted attacks on ImageNet. The target class of the targeted attack is 0 (tench) and ε = 0.03125. As shown in Table 10 and Table 11, TREMBA requires far fewer queries than CombOpt, demonstrating the great improvement obtained by combining transfer-based and score-based attacks. B ARCHITECTURE OF CLASSIFIERS AND GENERATORS B.1 CLASSIFIER Table 12 lists the architectures of ConvNet1, ConvNet2 and FCNet. The architecture of the ResNeXt used on CIFAR10 is from https://github.com/prlz77/ResNeXt.pytorch; we set the depth to 20, the cardinality to 8 and the widen factor to 4. Other classifier architectures are specified in the corresponding papers. B.2 GENERATOR Table 13 lists the architectures of the generators for the three datasets. For AutoZOOM, we found our architectures unsuitable and used the same generators as in the corresponding paper. We trained the generators with a learning rate starting at 0.01 and decaying by half every 50 epochs; the whole training process was 500 epochs. The batch size was determined by GPU memory: specifically, we set the batch size to 256 for the MNIST and CIFAR10 defense models and to 64 for the ImageNet models. Any sufficiently large κ works well for our method, and we chose κ = 200.0. All experiments were performed using PyTorch on an NVIDIA RTX 2080Ti. Tables 14 to 19 list the hyperparameters for all the algorithms. The learning rate was fine-tuned for all the algorithms, and we set the sample size b = 20 for all algorithms for fair comparison. Table 18: hyperparameters for TREMBA (un-targeted and targeted).
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SJxhNTNYwB
We present a new method that combines transfer-based and scored black-box adversarial attack, improving the success rate and query efficiency of black-box adversarial attack across different network architectures.
Deep neural networks (DNNs) are inspired by the human brain, and the interconnection between the two has been widely studied in the literature. However, it is still an open question whether DNNs are able to make decisions like the brain. Previous work has demonstrated that DNNs, trained by matching the neural responses from the inferior temporal (IT) cortex of the monkey brain, are able to achieve human-level performance on image object recognition tasks. This indicates that neural dynamics can provide informative knowledge to help DNNs accomplish specific tasks. In this paper, we introduce the concept of a neuro-AI interface, which aims to use human neural responses as supervised information for helping AI systems solve tasks that are difficult to solve with traditional machine learning strategies. In order to deliver the idea of neuro-AI interfaces, we focus on deploying it on one of the fundamental problems in generative adversarial networks (GANs): designing a proper evaluation metric for the quality of images produced by GANs. Deep neural networks (DNNs) have successfully been applied to a number of different areas, such as computer vision and natural language processing, where they have demonstrated state-of-the-art results, often matching and sometimes even surpassing human ability. Moreover, DNNs have been studied with respect to how similar processing is carried out in the human brain, where identifying these overlaps and interconnections has been a focus of study and investigation in the literature BID5 BID4 BID11 BID18 BID30 BID1 BID35 BID17 BID15. In this research area, convolutional neural networks (CNNs) are widely compared with the visual system in the human brain for the following reasons: (1) CNNs and the human visual system are both hierarchical systems; (2) the steps of processing input in CNNs and in the human visual system are similar to each other, e.g., in an object recognition task, both CNNs and humans recognize an object based on its shape, edges, color, etc. Work BID35 outlines the use of a CNN-based approach for delving even more deeply into understanding the development and organization of sensory cortical processing. It has been demonstrated that CNNs are able to reflect the spatio-temporal neural dynamics in the visual areas of the human brain BID5 BID30 BID18. Although much work has been carried out to reveal the similarity between CNNs and the brain, research on the interaction between CNNs and neural dynamics is less discussed in the literature, as the understanding of neural dynamics in neuroscience is still limited. There is growing interest in studying generative adversarial networks (GANs) in the deep learning community BID10. Specifically, GANs have been widely applied to various domains such as computer vision BID14, natural language processing BID7 and speech synthesis BID6. Compared with other deep generative models (e.g. variational autoencoders (VAEs)), GANs are favored for effectively handling sharp estimated density functions, efficiently generating desired samples and eliminating deterministic bias. Due to these properties, GANs have successfully contributed to plausible image generation BID14, image-to-image translation BID38, image super-resolution BID19, image completion BID37, etc.
However, three main challenges currently remain in GAN research: (1) mode collapse, i.e., the model cannot learn the distribution of the full dataset well, which leads to poor generalization ability; (2) difficulty of training, i.e., it is non-trivial for the discriminator and generator to achieve Nash equilibrium during training; and (3) difficulty of evaluation, i.e., the evaluation of GANs can be considered as an effort to measure the dissimilarity between the real distribution p_r and the generated distribution p_g. Unfortunately, accurate estimation of p_r is intractable, so it is challenging to obtain a good estimate of the correspondence between p_r and p_g. Aspects (1) and (2) concern computational issues, and much research has been carried out to mitigate them BID20; BID0. Aspect (3) is similarly fundamental; however, the available literature is limited and most current metrics only focus on measuring the dissimilarity between training and generated images. A more meaningful GAN evaluation metric that is consistent with human perception is paramount for helping researchers further refine and design better GANs. Although some evaluation metrics, e.g., the Inception Score (IS), Kernel Maximum Mean Discrepancy (MMD) and Fréchet Inception Distance (FID), have already been proposed (BID13 BID2), their limitations are obvious: (1) These metrics do not agree with human perceptual judgments and human rankings of GAN models. A small artefact on an image can have a large effect on the decision made by a machine learning system BID16, whilst the intrinsic image content does not change; in this respect, we consider human perception more robust to adversarial image samples than a machine learning system. (2) These metrics require large sample sizes for evaluation, and large-scale samples for evaluation are sometimes unrealistic in real-world applications since their collection is time-consuming. (3) They are not able to rank individual GAN-generated images by their quality, i.e., the metrics are computed on a collection of images rather than on a single-image basis. The within-GAN variances are crucial because they can provide insight into the variability of that GAN. Work BID36 demonstrates that a CNN matched with neural data recorded from the inferior temporal cortex BID3 achieves high performance in object recognition tasks. Given the evidence above that a CNN is able to predict neural responses in the brain, we describe a neuro-AI interface system, where human neural responses are used as supervised information to help an AI system (a CNN in this work) solve more difficult real-world problems. As a starting point for exploiting the idea of a neuro-AI interface, we focus on utilizing it to solve one of the fundamental problems in GANs: designing a proper evaluation metric. Figure 1 shows the schematic of the neuro-AI interface: stimuli (image stimuli in this work) are simultaneously presented to an AI system and to participants, and the participants' neural responses are transferred to the AI system as supervised information for assisting the AI system in making decisions. In this paper, we first demonstrate the ability of a brain-produced score (which we call Neuroscore), generated from human electroencephalography (EEG) signals, to evaluate the quality of GANs. Secondly, we demonstrate and validate a neuro-AI interface (as seen in Fig. 1), which uses neural responses as supervised information to train a CNN.
The trained CNN model is able to predict Neuroscore for images without corresponding neural responses. We test this framework with three models: a shallow convolutional neural network, MobileNet V2 BID26 and Inception V3 BID29. In detail, Neuroscore is calculated via measurement of the P300, an event-related potential (ERP) BID23 present in EEG, using a rapid serial visual presentation (RSVP) paradigm BID32. The P300 and the RSVP paradigm are mature techniques in the brain-computer interface (BCI) community and have been applied to a wide variety of tasks, such as image search BID9 and information retrieval BID22. The unique benefit of Neuroscore is that it more directly reflects the human perceptual judgment of images, which is intuitively more reliable than the conventional metrics in the literature BID2. The current literature has demonstrated that CNNs are able to predict neural responses in the inferior temporal cortex in image recognition tasks BID36 BID35 via invasive BCI techniques BID31. Evidence shows that neural responses in the inferior temporal cortex are directly linked to information processing during image recognition; therefore, a CNN trained to predict neural responses in the inferior temporal cortex also achieves good performance in image recognition BID36. Compared to a traditional end-to-end machine learning system, the use of DNNs for predicting neural responses in the brain has the following benefits: (1) it brings the information processing of DNNs closer to that of the human brain; (2) for some difficult real-world tasks, e.g., the evaluation of image quality demonstrated in this paper, it is still challenging to design machine learning algorithms that teach DNNs to process information like humans; and (3) neural signals directly reflect human perception, and interfacing between neural responses and DNNs can be more efficient than traditional methods at the intersection of humans and AI. The investigation of using CNNs to predict neural responses measured with non-invasive BCI is still absent from the literature. Compared to invasively measured neural dynamics, EEG has advantages such as simple measurement, a painless recording experience, fewer ethical concerns, and easier generalization to real-world applications. However, EEG suffers from challenges such as low signal quality (i.e., low SNR) and low spatial resolution (the neural activities of interest span the whole scalp and are difficult to localize), which make predicting the EEG response challenging. With advanced machine learning technologies applied to the non-invasive BCI area, source localization and reconstruction are feasible for EEG signals today. Previous work BID33 has demonstrated the efficacy of spatial filtering approaches for reconstructing P300 source ERP signals, and the low-SNR issue can be remedied by averaging EEG trials. Based on this evidence, we explore the use of DNNs to predict Neuroscore when neural information is available. We propose a neuro-AI interface in order to generalize the use of Neuroscore. This kind of framework interfaces between neural responses and AI systems (a CNN in this study), using neural responses as supervised information to train the CNN. The trained CNN is then used to predict Neuroscore given images generated by one of the popular GAN models. Figure 2 shows the schematic of the neuro-AI interface used in this work.
Flow 1 shows that an image is processed by the human brain, producing a single-trial P300 source signal for each input image. Flow 2 in Fig. 2 demonstrates a CNN that includes EEG signals during the training stage. (We understand that the human brain is much more complex than what we demonstrate in this work and that the flow in the brain is not one-directional BID27 BID2; our framework can be further extended to be more biologically plausible.) The convolutional and pooling layers process the image similarly to the retina BID21. Fully connected (FC) layers 1-3 aim to emulate the brain functionality that produces the EEG signal. The yellow dense layer in the architecture aims to predict the single-trial P300 source signal in the 400-600 ms window in response to each input image. In order to help the model make a more accurate prediction of the single-trial P300 amplitude at the output, the windowed single-trial P300 source signal is fed to the yellow dense layer to learn the parameters of the preceding layers during training. The model is then trained to predict the single-trial P300 source amplitude (the red point on the single-trial P300 source signal in Fig. 2, which depicts the neuro-AI interface and the training details with added EEG information). MobileNet V2, Inception V3 and a shallow network were explored in this work; in Flow 2 we use these three networks as backbones for the Conv1-pooling layers. For MobileNet V2 and Inception V3, we used pretrained parameters up to FC 1 shown in Fig. 2 and trained the parameters from FC 1 to FC 4. We use θ_1 to denote the parameters from FC 1 to FC 3 and θ_2 to denote the parameters in FC 4. For the shallow model, we trained all parameters from scratch. We added EEG to the model because we first want to find a function f(χ) → s that maps the image space χ to the corresponding single-trial P300 source signal s; this prior knowledge helps us predict the single-trial P300 amplitude in the second learning stage. Our training strategy includes two stages: (1) learning from image to P300 source signal; and (2) learning from P300 source signal to P300 amplitude. We define the two-stage loss functions (loss_1 for the single-trial P300 source signal in the 400-600 ms time window and loss_2 for the single-trial P300 amplitude) as loss_1(θ_1) = (1/N) Σ_i ‖Ŝ_i − S_i^true‖_2² and loss_2(θ_1, θ_2) = (1/N) Σ_i (ŷ_i − y_i)², where Ŝ_i is the output of the yellow dense layer, S_i^true ∈ R^{1×T} is the single-trial P300 signal in the 400-600 ms time window for the presented image, ŷ_i is the model prediction, and y_i is the single-trial P300 amplitude for each image. In other words, loss_1 is the L2 distance between the yellow layer and the single-trial P300 source signal corresponding to the input image, and loss_2 is the mean squared error between the model prediction and the single-trial P300 amplitude. Training the models without EEG is straightforward: the models are trained directly to minimize loss_2(θ_1, θ_2) by feeding images and the corresponding single-trial P300 amplitudes. Training with EEG information is explained in Algorithm 1 and visualized in Flow 2 of Fig. 2 with two stages: stage 1 learns the parameters θ_1 to predict the P300 source signal by descending the stochastic gradient of loss_1, and stage 2 learns the parameters θ_2, with θ_1 fixed, to predict the single-trial P300 amplitude by descending the stochastic gradient of loss_2.
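A minimal PyTorch-style sketch of this two-stage procedure; `backbone`, `fc_yellow` (ending at the yellow dense layer, jointly θ_1) and `fc_out` (θ_2) are illustrative module names, not the authors' code:

```python
import torch
import torch.nn as nn

def train_two_stage(backbone, fc_yellow, fc_out, loader, epochs=10, lr=1e-3):
    """Stage 1 fits the yellow layer to the windowed P300 source signal (loss1);
    stage 2 fits the amplitude head with the earlier parameters frozen (loss2)."""
    mse = nn.MSELoss()
    theta1 = list(backbone.parameters()) + list(fc_yellow.parameters())

    opt1 = torch.optim.Adam(theta1, lr=lr)
    for _ in range(epochs):                       # stage 1: image -> P300 signal
        for img, p300_signal, _amp in loader:
            s_hat = fc_yellow(backbone(img))
            loss1 = mse(s_hat, p300_signal)       # L2 distance to 400-600 ms window
            opt1.zero_grad(); loss1.backward(); opt1.step()

    for p in theta1:                              # freeze theta_1
        p.requires_grad_(False)
    opt2 = torch.optim.Adam(fc_out.parameters(), lr=lr)
    for _ in range(epochs):                       # stage 2: P300 signal -> amplitude
        for img, _sig, amp in loader:
            y_hat = fc_out(fc_yellow(backbone(img)))
            loss2 = mse(y_hat.squeeze(-1), amp)   # MSE to single-trial amplitude
            opt2.zero_grad(); loss2.backward(); opt2.step()
```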
Comparing the models trained with and without EEG, all models with EEG perform better, with much smaller errors and variances. Statistical tests between the models with and without EEG were also carried out to verify the significance of including EEG information during the training phase: one-way ANOVA tests give P_Shallow = 0.003, P_Mobilenet = 0.012 and P_Inception = 5.980e−05. These results demonstrate that including EEG during the training stage helps all three CNNs improve their performance in predicting Neuroscore. The performance of the models with EEG is ranked as Inception-EEG, Mobilenet-EEG and Shallow-EEG, which indicates that deeper neural networks may achieve better performance on this task. We used randomized EEG signals as a baseline to assess the efficacy of adding EEG for producing better Neuroscore output. When the EEG is randomized, the error for each of the three models increases significantly. For Mobilenet and Inception, the error with randomized EEG is even higher than without EEG in the training stage, demonstrating that the EEG information in the training stage is crucial for each model. Figure 3 shows that the models with EEG information have a stronger correlation between predicted Neuroscore and real Neuroscore. The cluster (blue, orange and green circles) for each category of the model trained with EEG (left column) is more separable than the clusters produced by the models without EEG (right column). This conveys two benefits of using EEG when training the models: Neuroscore is more accurate; and Neuroscore is able to rank the performance of different GANs, which cannot be achieved by other metrics BID2. Figure 3 shows scatter plots of the predicted and real Neuroscore for the 6 models (Shallow, Mobilenet and Inception, with and without EEG for training) across participants, over 20 repeated shufflings of the training and testing sets; each circle represents the cluster for a specific category, the small triangle markers inside each cluster correspond to individual shufflings, and the dot at the center of each cluster is the mean. In this paper, we introduce a neuro-AI interface that connects CNNs with neural signals. We demonstrate the use of the neuro-AI interface on a challenge in the area of GANs, i.e., evaluating the quality of images produced by GANs. Three deep network architectures are explored, and the results demonstrate that including neural responses during the training phase of the neuro-AI interface improves its accuracy even when neural measurements are absent at evaluation time on the test set. More details on the performance of Neuroscore can be found in the Appendix. FIG1 shows the averaged reconstructed P300 signal across all participants (using an LDA beamformer) in the RSVP experiment. It should be noted that the averaged reconstructed P300 signal is calculated as the difference between the averaged target trials and the averaged standard trials after applying the LDA beamformer method. The solid lines in FIG1 are the means of the averaged reconstructed P300 signals for each image category (across 12 participants), while the shaded areas represent the standard deviations (across participants). It can be seen that the averaged reconstructed P300 (across participants) clearly distinguishes between the different image categories. In order to statistically measure this correlative relationship, we calculated the Pearson correlation coefficient and p-value (two-tailed) between Neuroscore and BE accuracy and found (r = −0.767, p = 2.089e−10).
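As a sketch (not the authors' code), the correlation and bootstrap analysis used here and in the following paragraph can be computed as follows, assuming paired NumPy arrays of Neuroscore and BE-accuracy values; the bootstrap p-value definition is our assumption for a negative correlation:

```python
import numpy as np
from scipy import stats

def correlation_with_bootstrap(neuroscore, be_acc, n_boot=10000, seed=0):
    """Two-tailed Pearson test plus a bootstrap p-value for r < 0."""
    r, p = stats.pearsonr(neuroscore, be_acc)
    rng = np.random.default_rng(seed)
    n = len(neuroscore)
    boot_r = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)               # resample pairs with replacement
        boot_r[i] = stats.pearsonr(neuroscore[idx], be_acc[idx])[0]
    p_boot = np.mean(boot_r >= 0)                 # fraction contradicting r < 0
    return r, p, p_boot
```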
We also performed the Pearson statistical test and a bootstrap on the correlation between Neuroscore and BE accuracy (human judgment performance) for the GANs only, i.e., DCGAN, BEGAN and PROGAN. The Pearson statistic is (r = −0.827, p = 4.766e−10) and the bootstrapped p ≤ 0.0001. Three traditional methods were also employed to evaluate the GANs used in this study. Since for these methods a lower score indicates better GAN performance, we use 1/Neuroscore for comparison. It can be seen that all three methods are consistent with each other, ranking the GANs in the same order of PROGAN, DCGAN and BEGAN from high to low performance. Comparing the three traditional evaluation metrics to human judgment, it can be seen that they are not consistent with the human judgment of GAN performance. It should be remembered that the Inception Score is able to measure the quality of the generated images while the other two methods cannot; however, the Inception Score still rates DCGAN as outperforming BEGAN. Our proposed Neuroscore is consistent with human judgment. Another property of Neuroscore is the ability to track the quality of an individual image. Traditional evaluation metrics are unable to score each individual image for two reasons: (1) they need large-scale samples for evaluation; and (2) most methods (e.g. MMD and FID) evaluate GANs based on the dissimilarity between real images and generated images, so they are not able to score generated images one by one. For our proposed method, the score of each single image can be evaluated as a single-trial P300 amplitude. We demonstrate the use of the predicted single-trial P300 amplitude to observe the quality of single images in Fig. 6, which shows the P300 predicted for each single image by the proposed neuro-AI interface; a higher predicted P300 indicates better image quality. This property gives Neuroscore the novel capability of observing the variation within a typical GAN. Although Neuroscore and IS are both generated from deep neural networks, Neuroscore is more suitable than IS for evaluating GANs in that: (1) it is more explainable than IS, being a direct reflection of human perception; (2) a much smaller sample size is required for evaluation; and (3) a higher Neuroscore exactly indicates better image quality, while IS does not. We also included the RFACE images in our generalization test. FIG3(c) demonstrates that the predicted Neuroscore is still correlated with the real Neuroscore when the RFACE images are added, and the model ranks the image types as PROGAN > RFACE > BEGAN > DCGAN, which is consistent with the Neuroscore measured directly from the participants, as shown in FIG3. Compared to traditional evaluation metrics, Neuroscore is able to score a GAN based on relatively few image samples. Recording EEG in the training stage could be a limitation when generalizing Neuroscore to evaluate a new GAN; however, the use of dry-electrode EEG recording systems BID8 can accelerate and simplify the data acquisition significantly. Moreover, GANs enable the possibility of synthesizing EEG BID12, which has wide applications in brain-machine interface research.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
H1xKj1n9a4
Describe a neuro-AI interface technique to evaluate generative adversarial networks
While recent developments in autonomous vehicle (AV) technology highlight substantial progress, we lack tools for rigorous and scalable testing. Real-world testing, the de facto evaluation environment, places the public in danger, and, due to the rare nature of accidents, will require billions of miles in order to statistically validate performance claims. We implement a simulation framework that can test an entire modern autonomous driving system, including, in particular, systems that employ deep-learning perception and control algorithms. Using adaptive sampling methods to accelerate rare-event probability evaluation, we estimate the probability of an accident under a base distribution governing standard traffic behavior. We demonstrate our framework on a highway scenario. Several fatal accidents involving autonomous vehicles (AVs) underscore the importance of testing whether AV perception and control pipelines, when considered as a whole system, can safely interact with other human traffic participants. Unfortunately, testing AVs in real environments, the most straightforward validation framework for system-level input-output behavior, requires prohibitive amounts of time due to the rare nature of serious accidents BID22. Concretely, a recent study BID8 argues that AVs need to drive "hundreds of millions of miles, and, under some scenarios, hundreds of billions of miles to create enough data to clearly demonstrate their safety." On the other hand, formally verifying an AV algorithm's "correctness" BID11 BID0 BID21 BID13 is inherently difficult because all driving policies are subject to crashes caused by other drivers BID22. Ruling out scenarios where the AV should not be blamed for such accidents is a task subject to logical inconsistency and subjective assignment of fault. Motivated by the challenges underlying real-world testing and formal verification, we consider a probabilistic paradigm, which we describe as a risk-based framework BID14, where the goal is to evaluate the probability of an accident under a base distribution representing standard traffic behavior. By assigning learned probabilities to environmental states and agent behaviors, our risk-based framework considers the performance of the AV policy under a data-driven model of the world. Formally, we let P_0 denote the base distribution that models standard traffic behavior and X ∼ P_0 be a realization of the simulation (e.g. weather conditions and driving policies of other agents). For an objective function f: X → R that measures "safety", so that low values of f(x) correspond to dangerous scenarios, our goal is to evaluate the probability of a dangerous event, p_γ := P_0(f(X) ≤ γ), for some threshold γ. Our risk-based framework is agnostic to the complexity of the ego-policy and views it as a black-box module; importantly, this approach allows deep-learning based perception systems that make formal verification methods intractable. For control algorithms which approach or exceed human-level performance, an adverse event will be rare, and the probability p_γ is close to 0. Thus, estimating p_γ is a rare-event simulation problem (see BID1 for an overview of this topic). For rare probabilities p_γ, the naive Monte Carlo method can be prohibitively inefficient. For i.i.d. samples X_i ∼ P_0, the naive Monte Carlo estimator is p̂_γ = (1/N) Σ_{i=1}^{N} 1{f(X_i) ≤ γ}, whose relative standard deviation is sqrt((1 − p_γ)/(N p_γ)). In order to achieve ε-relative accuracy, we need N ≳ (1 − p_γ)/(p_γ ε²) rollouts from our simulator. For light-tailed f(X), p_γ ∝ exp(−γ), so the required sample size for naive Monte Carlo grows exponentially in γ.
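A minimal sketch of this naive estimator, with `simulate` standing in for a rollout X ∼ P_0 and `f` for the safety measure (both names are illustrative):

```python
import numpy as np

def naive_mc_rare_event(simulate, f, gamma, n_rollouts, rng=None):
    """Estimate p_gamma = P0(f(X) <= gamma) by direct sampling.

    Inefficient for rare events: the relative error scales like
    sqrt((1 - p) / (N * p)), so N must grow like 1/p.
    """
    rng = rng or np.random.default_rng()
    hits = sum(f(simulate(rng)) <= gamma for _ in range(n_rollouts))
    return hits / n_rollouts
```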
In the next section, we use adaptive sampling techniques that sample unsafe events more frequently to make the evaluation of p γ tractable. To address the shortcomings of naive Monte Carlo for estimating rare event probabilities, we use a multilevel splitting method BID7 BID3 BID5 BID24 ] that decomposes the rare-event probability p γ into conditional probabilities with interim threshold levels DISPLAYFORM0 This decomposition introduces a product of non-rare probabilities that are easy to estimate individually. Markov chain Monte Carlo (MCMC) is used to estimate each term, which can be accurately estimated as long as consecutive levels are close and the conditional probability is therefore large. Intuitively, the splitting method iteratively steers samples X i to the rare set {X|f (X) < γ} through a series of supersets {X|f (X) DISPLAYFORM1 Since it is a priori unclear how to choose the levels ∞ =: The discarding fraction δ trades off two dueling objectives; for small values, each term in the product is large and hence easy to estimate by MCMC; for large values, the number of total iterations K until convergence is reduced and more samples (δN) at each iteration can be simulated in parallel. DISPLAYFORM2 The AMS approach complements other procedures such as adaptive importance sampling (AIS) methods. AIS methods require computation of likelihood ratios between different distributions and become numerically unstable in high dimensions. AMS does not require computation of likelihood ratios, nor does it need to postulate models for the form of the optimal importance sampling distribution P 0 (·|f (·) < γ). On the other hand, the "modes" of failure AMS discovers are limited by the number of samples and the mixing properties of the MCMC sampler employed. Contrary to model-based AIS methods such as the cross-entropy method BID19, AMS has several convergence guarantees including those for bias, variance, and runtime (see BID4 for details). Notably, AMS is unbiased and has relative variance which scales as log(1/p γ) as opposed to 1/p γ for naive Monte Carlo (cf. Section II). Intuitively, AMS computes O(log(1/p γ)) independent probabilities, each of which has variance independent of p γ. To implement our risk-based framework, we first learn a base distribution P 0 of nominal traffic behavior. Using videos of highway traffic in the NGSim BID12 dataset, we train policies of human drivers via imitation learning BID20 BID17 BID18 BID6 BID2. It has recently been observed that supervised approaches to imitationlearning-where expert data is used to predict actions given vehicle states-suffer from poor performance in regions of the state space not encountered in data BID17 BID18. Reinforcementlearning techniques such as generative adversarial imitation learning (GAIL) BID6 improve generalization performance, as the imitation agent explores novel regions of the state space during training. We use the model-based variant of GAIL (MGAIL) BID2 that allows end-to-end differentiation. GAIL has been validated by Kuefler et al. BID10 to realistically mimic human-like driving behavior from the NGSim dataset across multiple metrics. These include the similarity of low-level actions (speeds, accelerations, turn-rates, jerks, and time-to-collision), as well as higher-level behaviors (lane change rate, collision rate, hard-brake rate, etc).We consider a scenario consisting of six agents, five of which are considered part of the environment. The environment vehicles' policies follow the distribution learned via GAIL. 
To implement our risk-based framework, we first learn a base distribution P_0 of nominal traffic behavior. Using videos of highway traffic in the NGSim BID12 dataset, we train policies of human drivers via imitation learning BID20 BID17 BID18 BID6 BID2. It has recently been observed that supervised approaches to imitation learning, where expert data is used to predict actions given vehicle states, suffer from poor performance in regions of the state space not encountered in the data BID17 BID18. Reinforcement-learning techniques such as generative adversarial imitation learning (GAIL) BID6 improve generalization performance, as the imitation agent explores novel regions of the state space during training. We use the model-based variant of GAIL (MGAIL) BID2 that allows end-to-end differentiation. GAIL has been validated by Kuefler et al. BID10 to realistically mimic human-like driving behavior from the NGSim dataset across multiple metrics. These include the similarity of low-level actions (speeds, accelerations, turn rates, jerks and times-to-collision), as well as higher-level behaviors (lane-change rate, collision rate, hard-brake rate, etc.). We consider a scenario consisting of six agents, five of which are considered part of the environment. The environment vehicles' policies follow the distribution learned via GAIL. All vehicles are constrained to start within a set of possible initial configurations consisting of pose and velocity, and each vehicle has the goal of reaching the end of the approximately 2 km stretch of road. We created a photorealistic simulator of the portion of I-80 in Emeryville, CA where the traffic data was collected BID12 (see Appendix B for details). FIG1 details the performance of the AMS algorithm on the scenario, where the risk metric f(X) is the minimum time-to-collision (TTC) over a rollout (cf. Appendix C). For events of probability 10^−5 or less, AMS outperforms naive Monte Carlo, and the variance of the estimated failure probability is reduced by up to 56×. We can also combine AMS with importance sampling: since AMS outputs particles sampled from the desired failure region(s), we simply fit a model to this empirical distribution. FIG2 shows the output of 10^5 samples from a normalizing-flow-based importance sampler (cf. Appendix D) built upon the output of AMS; it is significantly more efficient than naive sampling. A fundamental tradeoff emerges when comparing the requirements of our risk-based framework to other testing paradigms. Real-world testing endangers the public but is still in some sense a gold standard. Verified subsystems provide evidence that the AV should drive safely in all specified scenarios; they are limited by computational intractability and require both white-box models and complete specifications for assigning blame (e.g. BID22). In turn, our risk-based framework is most useful when the base distribution P_0 is accurate. Although an estimate of p_γ is not informative when P_0 is misspecified, our adaptive sampling techniques still efficiently identify dangerous scenarios in this case; such dangerous scenarios are independent of potentially subjective assignments of blame. Principled techniques for building and validating the model of the environment P_0 represent an open research question. Rigorous safety evaluation of AVs necessitates benchmarks based on adaptive adversarial conditions rather than standard nominal conditions. Importantly, our framework only requires black-box access to the driving policy and simulation environment. Our approach offers significant speedups over real-world testing and allows efficient evaluation of black-box AV input/output behavior, providing a powerful tool to aid in the design of safe AVs. The AMS algorithm BID5 proceeds as follows: initialize N samples X_1, ..., X_N ∼ P_0; while the current level exceeds γ, evaluate and sort f(X_i) in decreasing order, set the next level to the δN-th largest value, discard X_(1), ..., X_(δN) and reinitialize them by resampling with replacement from X_(δN+1), ..., X_(N), then apply T MCMC transitions separately to each of X_(1), ..., X_(δN); end while. Our simulator is a distributed, modular framework, which is necessary to support the inclusion of new AV systems and updates to the environment-vehicle policies. A benefit of this design is that simulation rollouts are simple to parallelize. In particular, we allow the instantiation of multiple simulations simultaneously, without requiring that each include the entire set of components. For example, a desktop may support only one instance of Unreal Engine (for perception pipelines) but could be capable of running 10 physics simulations in parallel; it would be impossible to fully utilize this compute resource with a monolithic executable wrapping all engines together.
Using the asynchronous messaging library ZeroMQ, our implementation is fully distributed among the available CPUs and GPUs; our rollouts are up to 30P times faster than real time, where P is the number of processors. A video of a rollout from the simulator is available at https://youtu.be/CLXJ0CitDck and a snapshot from this rollout is shown in FIG4. In our implementation the safety measure is minimum time-to-collision (TTC). TTC is defined as the time it would take for two vehicles to intercept one another given that they each maintain their current heading and velocity BID23. The TTC between the ego vehicle and vehicle i is given by TTC_i(t) = r_i(t) / [−ṙ_i(t)]_+, where r_i is the distance between the ego vehicle and vehicle i, and ṙ_i is the time derivative of this distance (computed simply by projecting the relative velocity of vehicle i onto the vector between the vehicles' poses). The operator [·]_+ is defined as [x]_+ := max(x, 0), and we define TTC_i(t) = ∞ for ṙ_i(t) ≥ 0. In this paper, vehicles are described as oriented rectangles in the 2D plane. Since we are interested in the time it would take for the ego vehicle to intersect the polygonal boundary of another vehicle on the road, we utilize a finite set of range and range-rate measurements in order to approximate the TTC metric. For a given configuration of vehicles, we compute N uniformly spaced angles θ_1, ..., θ_N in the range [0, 2π] with respect to the ego vehicle's orientation and cast rays outward from the center of the ego vehicle. For each direction we compute the distance a ray can travel before intersecting one of the M other vehicles in the environment; these form N range measurements s_1, ..., s_N. Further, for each ray s_i, we determine which vehicle (if any) the ray hit; projecting the relative velocity of this vehicle with respect to the ego vehicle gives the range-rate measurement ṡ_i. Finally, we approximate the minimum TTC for a given simulation rollout X of T discrete time steps by min-TTC(X) = min_{t ≤ T} min_{i ≤ N} s_i(t) / [−ṡ_i(t)]_+, where we again define the approximate instantaneous TTC as ∞ for ṡ_i(t) ≥ 0. Note that this measure can approximate the true TTC arbitrarily well via the choice of N and the time discretization used by the simulator. Furthermore, note that our definition of TTC is with respect to the center of the ego vehicle touching the boundary of another vehicle. Crashing, on the other hand, is defined in our simulation as the intersection of the boundaries of two vehicles. Thus, the TTC values we evaluate in our simulation are nonzero even during crashes, since the center of the ego vehicle has not yet collided with the boundary of another vehicle.
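A small sketch of this ray-based TTC approximation; the arrays of ranges and range-rates are assumed to come from the simulator's raycasts:

```python
import numpy as np

def min_ttc(ranges, range_rates):
    """Approximate minimum time-to-collision over a rollout.

    ranges[t, i] is the distance s_i(t) along ray i at time step t;
    range_rates[t, i] is its time derivative.  Rays with non-negative
    range-rate (vehicles moving apart) contribute TTC = infinity.
    """
    ranges = np.asarray(ranges, dtype=float)
    range_rates = np.asarray(range_rates, dtype=float)
    closing = range_rates < 0.0
    ttc = np.full(ranges.shape, np.inf)
    ttc[closing] = ranges[closing] / -range_rates[closing]
    return float(ttc.min())
```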
Normalizing flows are used to describe classes of distributions using multi-layer neural networks, which are more expressive than typical analytical parameterizations such as exponential families. First a base distribution (usually something easy to sample from, e.g. a standard normal distribution) is chosen; then a series of transformations is applied to samples from this distribution. If the transformations are invertible, it is possible to work backwards from new samples to determine their likelihood using standard methods. Suppose an invertible non-linear function y = f(x) is applied to x ∈ X; then we can determine the density p(y) via the change of variables p(y) = p(x) |det J(f)(x)|^{−1}, where J(·) denotes the Jacobian. Composing a sequence of such transforms f_1(x), f_2(f_1(x)), ... allows expressive transformations of the base density. Rezende and Mohamed BID16 describe a modern version of the approach which ensures that the Jacobian is upper triangular, rendering the determinant computation efficient. Given that each transform is parameterized by trainable weights, the architecture can be used to learn a density by maximizing the log-probability of the observed data under the transformed distribution p(y). Further enhancements to this architecture for fitting distributions can be found in BID9 BID15.
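A tiny self-contained check of the change-of-variables formula for an affine transform (our example, not from the paper); for y = a·x + b the Jacobian determinant is simply a:

```python
import numpy as np
from scipy import stats

# y = f(x) = a*x + b is invertible, so p(y) = p(x) * |det J_f(x)|^{-1} = p(x) / |a|.
a, b = 2.0, -1.0
x = np.random.default_rng(0).standard_normal(5)
y = a * x + b
log_p_y = stats.norm.logpdf(x) - np.log(abs(a))  # change of variables
# For x ~ N(0, 1), y ~ N(b, a^2); the two computations agree:
assert np.allclose(log_p_y, stats.norm.logpdf(y, loc=b, scale=abs(a)))
```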
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SJxc4FZoTV
Using adaptive sampling methods to accelerate rare-event probability evaluation, we estimate the probability of an accident under a base distribution governing standard traffic behavior.
Many tasks in natural language understanding require learning relationships between two sequences, for tasks such as natural language inference, paraphrasing and entailment. These tasks are similar in nature, yet they are often modeled individually. Knowledge transfer can be effective for closely related tasks, which is usually carried out via parameter transfer in neural networks. However, transferring all parameters, some of which are irrelevant to the target task, can lead to sub-optimal results and can have a negative effect on performance, referred to as \textit{negative} transfer. Hence, this paper focuses on the transferability of both instances and parameters across natural language understanding tasks by proposing an ensemble-based transfer learning method in the context of few-shot learning. Our main contribution is a method for mitigating negative transfer across tasks when using neural networks, which involves dynamically bagging small recurrent neural networks trained on different subsets of the source task/s. We present a straightforward yet novel approach for incorporating these networks into a target task for few-shot learning, using a decaying parameter chosen according to the slope changes of a smoothed spline error curve at sub-intervals during training. Our proposed method shows improvements over hard and soft parameter-sharing transfer methods in the few-shot learning case, and shows competitive performance against models trained with full supervision on the target task, from only a few examples. Learning relationships between sentences is a fundamental task in natural language understanding (NLU). Given that there is gradience between words alone, the task of scoring or categorizing sentence pairs is made even more challenging, particularly when either sentence is less grounded and more conceptually abstract, e.g. in sentence-level semantic textual similarity and textual inference. The area of pairwise sentence classification/regression has been active since research on distributional compositional semantics, which uses distributed word representations (word or sub-word vectors) coupled with neural networks for supervised learning, e.g. pairwise neural networks for textual entailment, paraphrasing and relatedness scoring BID15. Many of these tasks are closely related and can benefit from transferred knowledge. However, for tasks that are less similar in nature, the likelihood of negative transfer increases, which hinders the predictive capability of a model on the target task. Challenges associated with transfer learning, such as negative transfer, are relatively underexplored, with few exceptions BID23; BID5, and even fewer in the context of natural language tasks BID18. More specifically, there are only a few methods for addressing negative transfer in deep neural networks BID9. Therefore, we propose a transfer learning method that addresses negative transfer and describe a simple way to transfer models learned from subsets of data of a source task (or set of source tasks) to a target task. The relevance of each subset per task is weighted according to the respective model's validation performance on the target task. Hence, models within the ensemble trained on subsets of a source task that are irrelevant to the target task are assigned a lower weight in the overall ensemble prediction on the target task.
We gradually transition from using the source task ensemble models for prediction on the target task to making predictions solely with the single model trained on few examples from the target task. The transition is made using a decaying parameter chosen according to the slope changes of a smoothed spline error curve at sub-intervals during training. The idea is that early in training the target task benefits more from knowledge learned from other tasks than later in training, and hence the influence of past knowledge is annealed. We refer to our method as Dropping Networks, as the approach combines Dropout and Bagging in neural networks for effective regularization, together with a way to weight the models within the ensembles. For our experiments we focus on two Natural Language Inference (NLI) tasks and one Question Matching (QM) dataset. NLI deals with inferring whether a hypothesis is true given a premise, as seen in entailment and contradiction. QM is a relatively new pairwise learning task in NLU for semantic relatedness that aims to identify pairs of questions that have the same intent. We purposefully restrict the analysis to no more than three datasets, as the number of combinations of transfer grows combinatorially. Moreover, this allows us to analyze how the method performs when transferring between two closely related tasks (two NLI tasks, where negative transfer is less apparent) and between less related tasks (NLI and QM). We show that the model averaging properties of our negative transfer method yield significant benefits over Bagging neural networks or a single neural network with Dropout, particularly when dropout is high (p=0.5). Additionally, we find that distant tasks carrying some transferable knowledge can be overlooked if the possible effects of negative transfer are not addressed. The proposed weighting scheme takes this issue into account, improving over alternative approaches as we will discuss. In transfer learning we aim to transfer knowledge from one or more source tasks T_s, in the form of instances, parameters and/or external resources, to improve performance on a target task T_t. This work is concerned with improving performance in this manner while also not degrading the original performance on T_s, referred to as Sequential Learning. In the past few decades, research on transfer learning in neural networks has predominantly been parameter-based transfer. BID29 have found lower-level representations to be more transferable than upper-layer representations, since they are more general and less specific to the task, and hence negative transfer is less severe. We will later describe a method for overcoming this using an ensembling-based method, but first we note the most relevant work on transferability in neural networks. BID21 introduced the notion of parameter transfer in neural networks, also showing the benefits of transfer in structured tasks, where transfer is applied to an upstream task from its sub-tasks. Further to this, a hyperplane utility measure defined by θ_s from T_t, which then rescales the weight magnitudes, was shown to perform well, yielding faster convergence when transferred to T_t. BID22 focused on constructing a covariance matrix for informative Gaussian priors transferred from related tasks on binary text classification. The purpose was to overcome poor generalization from weakly informative priors due to sparse text data for training.
The off-diagonals of this covariance matrix represent the parameter dependencies, which makes it possible to infer word relationships to outputs even if a word is unseen in the test data, since the relationship to observed words is known. More recently, transfer learning (TL) in neural networks has been predominantly studied in Computer Vision (CV). Models such as AlexNet allow features to be appended to existing networks for further fine-tuning on new tasks. They quantify the degree of generalization each layer provides in transfer, and also evaluate how multiple CNN weights can be of benefit in TL. This also reinforces the motivation behind using ensembles in this paper. BID14 describe the transferability of parameters in neural networks for NLP tasks. Questions posed included the transferability between varying degrees of "similar" tasks, the transferability of different hidden layers, the effectiveness of hard parameter transfer, and the use of multi-task learning as opposed to sequential TL. They focus on hard parameter transfer. They too find that lower-level features are more general and therefore more useful to transfer to other similar tasks, whereas the output layer is more task specific. Another important point raised in their paper is that a large learning rate can result in the transferred parameters being changed far from their original transferred state. As we will discuss, the method proposed here inadvertently addresses this issue: the learning rates are kept intact within the ensembled models, and a parameter adjustment is only made to their respective weight in a vote. BID7 have recently popularized transfer learning by transferring domain-agnostic neural language models (AWD-LSTM, Merity et al.). Similarly, lexical word definitions have also been recently used for transfer learning (O'Neill & Buitelaar), which too provide a model that is learned independently of a domain. This means the sample complexity for a specific task greatly reduces, and we only require enough labels for label fitting, which requires fine-tuning of the layers nearer to the output BID25. Before discussing the methodology we describe the current SoTA for pairwise learning in NLU. BID24 use a Word Embedding Correlation (WEC) model to score co-occurrence probabilities for Question-Answer sentence pairs on the Yahoo! Answers dataset and Baidu Zhidao Q&A pairs, using both a translation model and word embedding correlations. The objective of the paper was to find a correlation scoring function for word vectors while modelling word co-occurrence. A character-based intra-attention network has also been described for NLI on the SNLI corpus, showing an improvement over the 5-hidden-layer Bi-LSTM network used on the MultiNLI corpus. Here, the architecture also uses attention to produce interactions that influence the sentence encoding pairs. Originally, this idea was introduced for pairwise learning using three Attention-based Convolutional Neural Networks BID28 that apply attention at different hidden layers and not only on the word level. Although this approach shows good results, word ordering is partially lost in the sentence-encoded interdependent representations in CNNs, particularly when max or average pooling is applied on layers upstream. In this section we start by describing a co-attention GRU network that is used as one of the baselines when comparing ensembled GRU networks for the pairwise learning-based tasks.
We then describe the proposed transfer learning method. Co-Attention GRU. Encoded representations for paired sentences are obtained from the last hidden layer h^(l) of a recurrent neural network. Since longer dependencies are difficult to encode, using only the last hidden state as the context vector c_t can lead to words at the beginning of a sentence having a diminishing effect on the overall representation. Furthermore, it ignores interdependencies between pairs of sentences, which matter for pairwise learning. Hence, in the single-task learning case we consider using a cross-attention network as a baseline, which accounts for interdependencies by placing more weight on words that are more salient to the opposite sentence when forming the hidden representation, using the attention mechanism BID0. The softmax function produces the attention weights α by passing all outputs of the source RNN, h_s, to the softmax conditioned on the target word of the opposite sentence, h_t. A context vector c_t is computed as the sum of the attention-weighted outputs of h_s. This results in a matrix A ∈ R^{|S|×|T|}, where |S| and |T| are the respective sentence lengths (the max length of a given batch). The final attention vector α_t is used as a weighted input of the context vector c_t and the hidden state output h_t, parameterized by a Xavier-uniform-initialized weight vector W_c, to a hyperbolic tangent unit. Here we describe the two approaches considered for accelerating learning and avoiding negative transfer on T_t given the voting parameters of a learned model from T_s. We first describe a method that learns to guide weights on T_t by measuring similarity between θ_s and θ_t during training, using moving averages on the slope of the error curve. This is followed by a description of the use of smoothing splines to avoid large changes due to volatility in the error curve during training. Dropping Transfer. Both dropout and bagging are common approaches for regularizing models; the former is commonly used in neural networks. Dropout trains a number of subnetworks by dropping parameters and/or input features during training, while also requiring fewer parameter updates per epoch. Bagging trains multiple models by sampling instances x_k ∈ R^d from a distribution p(x) (e.g. a uniform distribution) prior to training. Herein, we refer to using both in conjunction as Dropping. The proposed method is similar to Adaptive Boosting (AdaBoost) in that a weight is assigned based on performance during training. However, in our proposed method, the weights are assigned based on the performance of each batch after Bagging, instead of each data sample. Furthermore, the use of Dropout promotes sparsity, combining both arithmetic-mean and geometric-mean model averaging. Avoiding negative transfer with standard AdaBoost is too costly in practice to use on large datasets, and is prone to overfitting in the presence of noise BID12. A fundamental concern in TL is that we do not want to transfer irrelevant knowledge, which leads to slower convergence and/or suboptimal performance. Therefore, Dropping places soft attention, based on the performance of each model from T_s → T_t, using a softmax as a weighted vote (sketched below). Once a target model f_t is learned from only few examples on T_t (referred to as few-shot learning), the weighted ensembled models from T_s can be transferred and merged with the T_t model.
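To make the weighted vote concrete, here is a minimal numpy sketch of how the soft attention over source-task models could be computed from their validation performance on T_t. The names dropping_vote_weights and ensemble_predict, and the temperature knob, are our own illustrative choices, not taken from the paper.

```python
import numpy as np

def dropping_vote_weights(val_accuracies, temperature=1.0):
    """Softmax over per-model validation accuracy on the target task.

    Models from the source task(s) that transfer poorly receive a low
    weight in the ensemble vote, mitigating negative transfer.
    `temperature` is a hypothetical knob controlling how sharply the
    vote concentrates on the best models.
    """
    a = np.asarray(val_accuracies, dtype=float) / temperature
    a = a - a.max()                       # numerical stability
    w = np.exp(a)
    return w / w.sum()

def ensemble_predict(model_probs, weights):
    """Weighted arithmetic mean of each model's class-probability output.

    model_probs: array of shape (n_models, n_examples, n_classes).
    """
    return np.tensordot(weights, model_probs, axes=1)

# Example: 3 bagged models, the 3rd transfers poorly to the target task.
weights = dropping_vote_weights([0.78, 0.75, 0.52])
probs = np.random.dirichlet(np.ones(3), size=(3, 5))  # dummy predictions
print(weights, ensemble_predict(probs, weights).shape)
```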
Equation 2 shows a straightforward update rule that decays the importance of the T_s Dropping networks as the T_t neural network begins to learn from only few examples. The prediction from few samples, a^l_t, is the single output from T^l_t, and γ is governed by the slope of the error curve, which is updated at regular intervals during training. We expect this approach to lead to faster convergence and more general features, as the regularization takes the form of a decaying constraint from a related task. The rate of the shift towards the T_t model is proportional to the gradient of the error ∇_xs for a set of mini-batches xs. In our experiments, we have set the update of the slope to occur every 100 iterations. The assumption is that in the initial stages of learning, incorporating past knowledge is more important; as the model specializes on the target task, we rely less on incorporating prior knowledge over time. In its simplest form, this can be represented as a moving average over the development set error curve, so as to choose δ_t = E[∇_{[t,t+k]}], where k is the size of the sliding window. In some cases an average over time is not suitable, namely when the training error is volatile between slope estimations. Hence, alternative smoothing approaches include kernel and spline models for fitting noisy or volatile error curves. A Gaussian kernel ψ(x, x') = exp(−‖x − x'‖²/(2σ²)) can be used to smooth over the error curve. Another approach is Locally Weighted Scatterplot Smoothing (LOWESS), a non-parametric regression technique that is more robust against outliers than standard least squares regression, by adding a penalty term. Equation 3 shows the regularized least squares objective for a set of cubic smoothing splines ψ, which are piecewise polynomials connected by knots distributed uniformly across the given interval [0, T]. Splines are solved using least squares with a regularization term λθ²_j ∀j, with ψ_j a single piecewise polynomial on the subinterval [t, t + k] ∈ [0, T]. Each subinterval represents the span over which γ is adapted over time, i.e. it changes the influence of the T_s Dropping Network as the T_t model learns from few examples. This type of cubic spline is used in the subsequent section for Dropping Network transfer. The standard cross-entropy (CE) loss is used as the training objective. This approach is relatively straightforward; on average across all three datasets, 58% more computational time was needed for training 10 smaller ensembles per single task, in comparison to a larger global model, on a single NVIDIA Quadro M2000 Graphics Processing Unit. Some benefits of the proposed method can be noted at this point. Firstly, the distance measure to related tasks is directly proportional to the online error of the target task. In contrast, hard parameter sharing does not address such issues, nor do recent approaches that use Gaussian kernel density estimates as parameter constraints on the target task BID17. Secondly, although not the focus of this work, the T_t model can be trained on a new task with more or fewer classes by adding or discarding connections on the last softmax layer. Lastly, by up-weighting the models within the ensemble that perform better on T_t, we mitigate negative transfer problems. After a brief sketch of the γ schedule below, we discuss some of the main results of the proposed Dropping Network transfer.
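The following is a minimal sketch of the spline-smoothed decay schedule for γ described above, using scipy's cubic smoothing spline. The exact mapping from the fitted slope to the decay factor, and the function names, are our own illustrative assumptions rather than the paper's specification.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def update_gamma(errors, steps, gamma):
    """Decay gamma in proportion to the slope of a cubic smoothing
    spline fitted to the (possibly noisy) online dev-error curve.

    The paper updates the slope every 100 iterations; the clipping and
    the slope-to-decay mapping below are illustrative choices.
    """
    spline = UnivariateSpline(steps, errors, k=3)   # cubic smoothing spline
    slope = spline.derivative()(steps[-1])          # slope at the latest step
    # Error decreasing (slope < 0): the target model is learning, so
    # shrink the source ensemble's influence faster.
    decay = np.clip(1.0 + slope, 0.0, 1.0)
    return gamma * decay

def blended_prediction(source_ens_probs, target_probs, gamma):
    """Eq. 2-style convex combination of the source Dropping-network
    ensemble and the few-shot target model."""
    return gamma * source_ens_probs + (1.0 - gamma) * target_probs
```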
FORMULA2 provides the first large-scale corpus of its kind, with a total of 570K annotated sentence pairs (much larger than previous semantic matching datasets, such as the dataset that consisted of 9,927 sentence pairs). As described in the opening statement of MacCartney's thesis, "the emphasis is on informal reasoning, lexical semantic knowledge, and variability of linguistic expression." The SNLI corpus addresses issues with previous manually and semi-automatically annotated datasets of its kind, which suffer in quality, scale, and entity co-referencing that leads to ambiguous and ill-defined labeling. It does so by grounding the instances in a given scenario, which sets a precedent for comparing the contradiction, entailment and neutrality between premise and hypothesis sentences. Since the introduction of this large annotated corpus, further resources for Multi-Genre NLI (MultiNLI) have recently been made available as part of the RepEval shared task (Nangia et al.). MultiNLI extends this setting with a 433k-instance dataset providing wider coverage, containing 10 distinct genres of both written and spoken English, allowing a more detailed analysis of where machine learning models perform well or not, unlike the original SNLI corpus that relies only on image captions. As the authors describe, in SNLI "temporal reasoning, belief, and modality become irrelevant to task performance." Another motivation for curating the dataset is particularly relevant to this problem: the evaluation of transfer learning across domains, hence the inclusion of these datasets in the analysis. These two NLI datasets allow us to analyze the transferability between two closely related datasets. Question Matching (QM) is a relatively new pairwise learning task in NLU for semantic relatedness, first introduced by the Quora team in the form of a Kaggle competition. The task has implications for Question-Answering (QA) systems and, more generally, machine comprehension. A known difficulty in QA is the problem of responding to a question with the most relevant answers. In order to respond appropriately, grouping and relating similar questions can greatly reduce the possible set of correct answers. For single-task learning, the baseline proposed for evaluating the co-attention model and the ensemble-based model consists of a standard GRU network with varying architecture settings for all three datasets. During experiments we tested different combinations of hyperparameter settings. All models are trained for 30,000 epochs, using a dropout rate p = 0.5 with Adaptive Momentum (ADAM) gradient-based optimization, in a 2-hidden-layer network with an initial learning rate η = 0.001 and a batch size b_T = 128; a minimal sketch of such a baseline is given at the end of this subsection. As a baseline for TL we use hard parameter transfer with fine-tuning of the upper layers on 50% of X ∈ T_s. For comparison to other transfer approaches we note the previous findings of BID29, which show that lower-level features are more generalizable. Hence, it is common that lower-level features are transferred and fixed for T_t while the upper layers are fine-tuned for the task, as described in Section 2.2. Therefore, the baseline comparison simply transfers all weights from θ_s → θ_t. The evaluation is carried out on both the rate of convergence and optimal performance. Hence, we particularly analyze the speedup obtained in the early stages of learning.
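For reference, here is a minimal PyTorch sketch of such a baseline, matching the stated hyperparameters (2 hidden GRU layers, dropout p = 0.5, Adam with η = 0.001). The embedding and hidden sizes are illustrative assumptions, as the text does not fix them.

```python
import torch
import torch.nn as nn

class PairwiseGRU(nn.Module):
    """2-hidden-layer GRU sentence-pair classifier; dropout is applied
    between the GRU layers as in the stated configuration."""
    def __init__(self, vocab_size, emb_dim=300, hidden=128, n_classes=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden, num_layers=2,
                          batch_first=True, dropout=0.5)
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, s1, s2):
        _, h1 = self.gru(self.emb(s1))   # h: (num_layers, batch, hidden)
        _, h2 = self.gru(self.emb(s2))
        pair = torch.cat([h1[-1], h2[-1]], dim=-1)  # last-layer states
        return self.out(pair)

model = PairwiseGRU(vocab_size=30000)
optim = torch.optim.Adam(model.parameters(), lr=0.001)  # batch size 128
```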
Table 1 shows the results on all three datasets for single-task learning, the purpose of which is to clarify the potential performance when learning from most of the available training data (between 70%-80% of the overall dataset for the three datasets). The ensemble model slightly outperforms the other proposed networks, while the co-attention network produces similar performance with a similar architecture to the ensemble models, except for the use of local attention over hidden layers shared across both sentences. The improvements are most notable on MNLI, reaching competitive performance in comparison to the state of the art (SoTA) on the RepEval task, held by BID2, which similarly uses a Gated Attention Network. These SoTA results are considered an upper bound on the potential performance when evaluating the Dropping-based TL strategy for few-shot learning. FIG4 demonstrates the performance of the zero-shot learning of the ensemble network, which averages the probability estimates from each model's prediction on the T_t test set (few-shot T_t training set and development set not included). As the ensembles learn on T_s, it is evident that most of the learning has already been carried out by 5,000-10,000 epochs. Producing entailment and contradiction predictions for multi-genre sources is significantly more difficult, demonstrated by lower test accuracy when transferring SNLI → MNLI, in comparison to MNLI → SNLI, which performs better relative to recent SoTA on SNLI. TAB2 shows the best performance of this hard parameter transfer from T_s → T_t. The QM dataset is not as "similar" in nature, and in the zero-shot learning setting the model weights a_S and a_Q are normalized to 1 (however, they could have been weighted based on a prior belief of how "similar" the tasks are). Hence, it is unsurprising that the QM dataset reduces the test accuracy, given that it is further from T_t than S is. The second approach is shown on the LHS of TAB4, which is the baseline few-shot learning performance with fixed parameters transferred from T_s on the lower layer and fine-tuning of the 2nd layer. Here, we ensure that instances from each genre within MNLI are sampled at least 100 times and that a batch of 3% of the original size of the corpus is used (14,000 instances). Since SNLI and QM are created from a single source, we did not impose such a constraint, also using a 3% random sample for testing. Therefore, these and all subsequent results denoted as Train Acc. % refer to the training accuracy on the small batches for each respective dataset. We see improvements from further tuning on the small T_t batch, particularly on MNLI with a 2.815 percentage point increase in test accuracy. For both SNLI + QM → MNLI and MNLI + QM → SNLI, final predictions are made by averaging over the class probability estimates before applying the CE loss. Dropping-GRU CSES. On the RHS, we present the results of the proposed method, which transfers parameters from the Dropping network trained with the output shown in Equation 2, using a spline smoother with piecewise polynomials (as described in FIG4). As aforementioned, this approach finds the slope of the online error curve between sub-intervals so as to choose γ, i.e. the balance between the source ensemble and the target model trained on few examples. In the cases of SNLI + QM (i.e. SNLI + Question Matching) and MNLI + QM, 20 ensembles are transferred, 10 from each model, with a dropout rate p_d = 0.5.
We note that, unlike the previous two baseline methods shown in TAB2 and 3, the performance does not decrease by transferring the QM models to both SNLI and MultiNLI. This is explained by the proposed weighting scheme with spline smoothing of the error curve, i.e. γ decreases at a faster rate for T_t due to the ineffectiveness of the ensembles created on the QM dataset. In summary, we find that transfer of MNLI + QM → SNLI and SNLI + QM → MNLI shows the most improvement under the proposed transfer method, in comparison to standard hard and soft parameter transfer. This is reflected in the fact that the proposed method is the only one that improved on SNLI while still transferring the more distant QM dataset. The method for transfer relies on only one additional parameter, γ. We find that in practice a higher decay rate γ (0.9-0.95) is more suitable for closely related tasks. Decreasing γ in proportion to the slope of a smooth spline fitted to the online error curve performs better than arbitrary step changes or a fixed rate for γ (equivalent to static hard parameter ensemble transfer). Lastly, if a distant task has some knowledge to transfer, it can be overlooked if the possible effects of negative transfer are not addressed. The proposed weighting scheme takes this into account, which is reflected on the RHS of TAB4, showing M + Q → S and S + Q → M with the most improvement, in comparison to the alternative approaches in TAB2, where transferring M + Q → S performed worse than M → S. Our proposed method combines neural-network-based bagging with dynamic cubic spline error curve fitting to transition between source models and a single target model trained on only few target samples. We find that our proposed method overcomes limitations in transfer learning, such as negative transfer when attempting to transfer from a more distant task, which arises in the few-shot learning setting. This paper has empirically demonstrated this for learning complex semantic relationships between sentence pairs in pairwise learning tasks. Additionally, we find the co-attention network and the ensemble GRU network to perform comparably for single-task learning.
A dynamic bagging method for avoiding negative transfer in neural network few-shot transfer learning
The ability to quantify and predict the progression of a disease is fundamental for selecting an appropriate treatment. Many clinical metrics cannot be acquired frequently, either because of their cost (e.g. MRI, gait analysis) or because they are inconvenient or harmful to a patient (e.g. biopsy, x-ray). In such scenarios, in order to estimate individual trajectories of disease progression, it is advantageous to leverage similarities between patients, i.e. the covariance of trajectories, and find a latent representation of progression. Most existing methods for estimating trajectories do not account for events in-between observations, which dramatically decreases their adequacy for clinical practice. In this study, we develop a machine learning framework named Coordinatewise-Soft-Impute (CSI) for analyzing disease progression from sparse observations in the presence of confounding events. CSI is guaranteed to converge to the global minimum of the corresponding optimization problem. Experimental results also demonstrate the effectiveness of CSI on both simulated and real datasets. The course of disease progression in individual patients is one of the biggest uncertainties in medical practice. In an ideal world, accurate, continuous assessment of a patient's condition helps with prevention and treatment. However, many medical tests are either harmful or inconvenient to perform frequently, and practitioners have to infer the development of disease from sparse, noisy observations. In its simplest form, the problem of modeling disease progression is to fit the curve y(t), t ∈ [t_min, t_max] for each patient, given sparse observations ỹ:= (ỹ(t_1),...,ỹ(t_n)). Due to the high-dimensional nature of longitudinal data, existing methods usually restrict solutions to a subspace of functions and utilize similarities between patients via enforcing low-rank structures. One popular approach is mixed-effect models, including Gaussian process approaches and functional principal components. While generative models are commonly used and have nice theoretical properties, their results can be sensitive to the underlying distributional assumptions of the observed data and hard to adapt to different applications. Another line of research poses the problem of disease progression estimation as an optimization problem. Kidziński and Hastie proposed a framework which formulates the problem as a matrix completion problem and solves it using matrix factorization techniques. This method is distribution-free and flexible to possible extensions. Meanwhile, both types of solutions model the natural progression of disease using observations of the targeted variables only. They fail to incorporate the existence and effect of human interference: medications, therapies, surgeries, etc. Two patients with similar symptoms initially may have different futures if they choose different treatments. Without that information, predictions can be way off. To the best of our knowledge, the existing literature says little about modeling treatment effects on disease progression. In Kidziński & Hastie, the authors use concurrent observations of auxiliary variables (e.g. oxygen consumption to motor functions) to help estimate the target one, under the assumption that both variables reflect the intrinsic latent feature of the disease and are thus correlated. Treatments of various types, however, rely on human decisions and are, to some extent, exogenous to the development of the disease. Thus they need to be modeled differently.
In this work, we propose a model for tracking disease progression that includes the effects of treatments. We introduce the Coordinatewise-Soft-Impute (CSI) algorithm for fitting the model and investigate its theoretical and practical properties. The contribution of our work is threefold: First, we propose a model and an algorithm, CSI, to estimate the progression of disease which incorporates the effect of treatment events. The framework is flexible, distribution-free, simple to implement and generalizable. Second, we prove that CSI converges to the global solution regardless of the initialization. Third, we compare the performance of CSI with various other existing methods on both simulated data and a dataset from Gillette Children's Hospital of patients diagnosed with Cerebral Palsy, and demonstrate the superior performance of CSI. The rest of the paper is organized as follows. In Section 2 we state the problem and review existing methods. Next, in Section 3 we describe the model and the algorithm. Theoretical properties of the algorithm are derived in Section 4. Finally, in Sections 5 and 6 we provide empirical results of CSI on the simulated and the real datasets respectively. We discuss some future directions in Section 7. Let y(t) be the trajectory of our objective variable, such as the size of a tumor, over a fixed time range t ∈ [t_min, t_max], and let N be the number of patients. For each patient 1 ≤ i ≤ N, we measure its trajectory y_i(t) at n_i irregularly spaced time points t_i = [t_{i,1}, t_{i,2}, ..., t_{i,n_i}] and denote the results as ỹ_i = (ỹ_i(t_{i,1}), ..., ỹ_i(t_{i,n_i})). We are primarily interested in estimating the disease progression trajectories y_i(t). To fit a continuous curve based on discrete observations, we restrict our estimations to a finite-dimensional space of functions. Let {b_i, i ∈ N} be a fixed basis of L²([t_min, t_max]) (e.g. splines, Fourier basis) and b = {b_i : 1 ≤ i ≤ K} be the first K dimensions of it. The problem of estimating y_i(t) can then be reduced to the problem of estimating the coefficients w_i ∈ R^K such that y_i(t) ≈ w_i⊤ b(t). Though intuitive, this method has two main drawbacks. First, when the number of observations per patient is less than or equal to the number of basis functions K, we can perfectly fit any curve without error, leading to overfitting. Moreover, this direct approach ignores the similarities between curves. Different patients may share similar trends in their trajectories, which could potentially improve the prediction. Below we describe two main lines of research improving on this: the mixed-effect model and the matrix completion model. In mixed-effect models, every trajectory y_i(t) is assumed to be composed of two parts: the fixed effect µ(t) = m⊤ b(t), for some m ∈ R^K, that remains the same among all patients, and a random effect w_i ∈ R^K that differs for each i ∈ {1, . . ., N}. In its simplest form, we assume w_i ∼ N(0, Σ) and ỹ_i = µ_i + B_i w_i + ε_i with ε_i ∼ N(0, σ²I), where Σ is the K × K covariance matrix, σ is the standard deviation, and µ_i and B_i are the functions µ(t) and b(t) evaluated at the times t_i, respectively. Estimates of the model parameters µ, Σ can be obtained via the expectation maximization (EM) algorithm. Individual coefficients w_i can be estimated using the best unbiased linear predictor (BLUP). In the linear mixed-effect model, each trajectory is estimated with |w_i| = K degrees of freedom, which can still be too complex when observations are sparse. One typical solution is to assume a low-rank structure of the covariance matrix Σ by introducing a contraction mapping A from the functional basis to a low-dimensional latent space.
More specifically, one may rewrite the LMM model as ỹ_i = µ_i + B_i A w̃_i + ε_i, where A is a K × q matrix with q < K and w̃_i ∈ R^q is the new, shorter random effect to be estimated. Methods based on low-rank approximations are widely adopted and applied in practice, and different algorithms for fitting the model have been proposed. In the later sections, we will compare our algorithm with one specific implementation named functional-Principal-Component-Analysis (fPCA), which uses the EM algorithm for estimating model parameters and latent variables w_i. While the probabilistic approach of mixed-effect models offers many theoretical advantages, including convergence rates and inference testing, it is often sensitive to the distributional assumptions, some of which are hard to verify in practice. To avoid the potential bias of distributional assumptions in mixed-effect models, Kidziński and Hastie formulate the problem as a sparse matrix completion problem. We review this approach in the current section. To reduce the continuous-time trajectories to matrices, we discretize the time range [t_min, t_max] into a grid G of T points and let B ∈ R^{T×K} be the projection of the K-truncated basis b onto the grid G. We then construct the sparse N × T observation matrix Y by rounding the time t_{i,j} of every observation y_i(t_{i,j}) to the nearest point on the time grid and regarding all other entries as missing values. Due to sparsity, we assume that no two observations y_i(t_{i,j}) are mapped to the same entry of Y. Let Ω denote the set of all observed entries of Y. For any matrix A, let P_Ω(A) be the projection of A onto Ω, i.e. P_Ω(A) = M where M_{i,j} = A_{i,j} for (i, j) ∈ Ω and M_{i,j} = 0 otherwise. Similarly, we define P⊥_Ω(A) = A − P_Ω(A) to be the projection on the complement of Ω. Under this setting, the trajectory prediction problem is reduced to the problem of fitting an N × K matrix W such that W B⊤ ≈ Y on the observed indices Ω. The direct way of estimating W is to solve the optimization problem min_W ‖P_Ω(Y − W B⊤)‖²_F, (2.1) where ‖·‖_F is the Fröbenius norm. Again, if K is larger than the number of observations for some subject, we will overfit. To avoid this problem we need some additional constraints on W. A typical approach in the matrix completion community is to introduce a nuclear norm penalty, a relaxed version of the rank penalty that preserves convexity. The optimization problem with the nuclear norm penalty takes the form min_W ‖P_Ω(Y − W B⊤)‖²_F + λ‖W‖_*, (2.2) where λ > 0 is the regularization parameter and ‖·‖_* is the nuclear norm, i.e. the sum of singular values. In Kidziński & Hastie, a Soft-Longitudinal-Impute (SLI) algorithm is proposed to solve (2.2) efficiently. We refer the readers to Kidziński & Hastie for a detailed description of SLI, while noting that it is also a special case of our Algorithm 1, defined in the next section, with µ fixed at 0. In this section, we introduce our model of the effect of treatments on disease progression. A wide variety of treatments with different effects and durations exist in medical practice, and it is impossible to build a single model encompassing them all. In this study we take a simplified approach and regard treatment, with the example of one-time surgery in mind, as a non-recurring event with an additive effect on the targeted variable afterward. Due to the flexibility of the formulation of optimization problem (2.1), we build our model on the matrix completion framework of Section 2.2. More specifically, let s(i) ∈ G be the time of treatment of the i'th patient, rounded to the closest τ_k ∈ G (s(i) = ∞ if no treatment is performed).
We encode the treatment information as an N × T zero-one matrix I_S, where (I_S)_{i,j} = 1 if and only if τ_j ≥ s(i), i.e. patient i has already received the treatment by time τ_j. Each row of I_S takes the form (0, · · ·, 0, 1, · · ·, 1). Let µ denote the average additive effect of treatment among all patients. In practice, we have access to the sparse observation matrix Y and the treatment matrix I_S, and aim to estimate the treatment effect µ and the individual coefficient matrix W based on Y, I_S and the fixed basis matrix B, such that W B⊤ + µI_S ≈ Y. Again, to avoid overfitting and exploit the similarities between individuals, we add a penalty term on the nuclear norm of W. The optimization problem is thus expressed as min_{W,µ} ‖P_Ω(Y − W B⊤ − µI_S)‖²_F + λ‖W‖_*, (3.1) for some λ > 0. Though the optimization problem (3.1) does not admit an explicit analytical solution, it is not hard to solve for one of µ or W given the other. For fixed µ, the problem reduces to the optimization problem (2.2) with Ỹ = Y − µI_S, and can be solved iteratively by the SLI algorithm of Kidziński & Hastie, which we also specify in Algorithm 1. For fixed W, we have µ̂ = arg min_µ ‖P_Ω(Y − W B⊤ − µI_S)‖²_F, (3.2) where Ω_S is the set of non-zero indices of I_S. Optimization problem (3.2) can be solved by taking the derivative with respect to µ directly, which yields µ̂ = Σ_{(i,j)∈Ω∩Ω_S} (Y − W B⊤)_{i,j} / |Ω ∩ Ω_S|. (3.3) The clean formulation of (3.3) motivates the following Coordinatewise-Soft-Impute (CSI) algorithm (Algorithm 1). In the definition, the operator S_λ is the singular-value soft-thresholding operator: for any matrix X with SVD X = U D V⊤, S_λ(X) = U diag[(d_i − λ)_+] V⊤. Note that if we set µ ≡ 0 throughout the updates, we recover our base model SLI without the treatment effect. In this section we study the convergence properties of Algorithm 1. Fix the regularization parameter λ and let (µ^(k)_λ, W^(k)_λ) be the value of (µ, W) in the k'th iteration of the algorithm, the exact definition of which is provided in (4.4). We prove that Algorithm 1 reduces the loss function at each iteration and eventually converges to the global minimizer. Theorem 1. The sequence (µ^(k)_λ, W^(k)_λ) converges to a limit point (µ̂_λ, Ŵ_λ) which solves the optimization problem (4.1): min_{W,µ} f_λ(W, µ) := ‖P_Ω(Y − W B⊤ − µI_S)‖²_F + λ‖W‖_*. Moreover, (µ̂_λ, Ŵ_λ) satisfies the stationarity conditions of this problem. The proof of Theorem 1 relies on five technical lemmas stated below. The detailed proofs of the lemmas and the proof of Theorem 1 are provided in Appendix A. The first two lemmas concern properties of the nuclear norm shrinkage operator S_λ defined in Section 3.1. Lemma 1. Let W be an N × K matrix and B an orthogonal T × K matrix of rank K. The solution to the optimization problem min_W ½‖Y − W B⊤‖²_F + λ‖W‖_* is Ŵ = S_λ(Y B), where S_λ is defined in Section 3.1. Lemma 2. The operator S_λ(·) satisfies ‖S_λ(W_1) − S_λ(W_2)‖²_F ≤ ‖W_1 − W_2‖²_F for any two matrices W_1, W_2 with matching dimensions. Lemma 1 shows that in the k-th step of Algorithm 1, W^(k)_λ is the minimizer of the imputed objective for the current µ. The next lemma proves that the sequence of loss values is monotonically decreasing. Lemma 3. For every fixed λ ≥ 0, the k'th step of the algorithm does not increase the loss f_λ, with any starting point (µ^(0)_λ, W^(0)_λ). The next lemma proves that the successive differences of the iterates converge to 0. Lemma 4. For any positive integer k, ‖W^(k+1)_λ − W^(k)_λ‖²_F and (µ^(k+1)_λ − µ^(k)_λ)² both converge to 0 as k → ∞. Finally, Lemma 5 shows that if the sequence {(µ^(k)_λ, W^(k)_λ)}_k converges, it has to converge to a solution of (4.1). In this section we illustrate the properties of our Coordinatewise-Soft-Impute (CSI) algorithm via a simulation study. The simulated data are generated from a mixed-effect model with a low-rank covariance structure on W, i.e. Y = W B⊤ + µI_S + E, for which the specific construction is deferred to Appendix B. Below we first give a self-contained sketch of Algorithm 1, then discuss the evaluation methods as well as the results of the simulation study.
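For concreteness, the following is a minimal numpy sketch of Algorithm 1 (CSI), assuming B has orthonormal columns as in Lemma 1. The function names and fixed iteration count are our own choices; setting mu ≡ 0 throughout recovers the base SLI iteration.

```python
import numpy as np

def soft_threshold_svd(X, lam):
    """S_lambda: soft-threshold the singular values of X."""
    U, d, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(d - lam, 0.0)) @ Vt

def csi(Y, B, I_S, lam, n_iters=200):
    """Sketch of Algorithm 1 (CSI). Y is N x T with NaN for unobserved
    entries; B is the T x K basis matrix (assumed orthonormal columns);
    I_S is the 0-1 treatment matrix."""
    obs = ~np.isnan(Y)
    W = np.zeros((Y.shape[0], B.shape[1]))
    mu = 0.0
    for _ in range(n_iters):
        # W-step: impute missing entries of the treatment-adjusted matrix
        # with the current fit, then apply the shrinkage of Lemma 1.
        Y_adj = np.where(obs, Y - mu * I_S, W @ B.T)
        W = soft_threshold_svd(Y_adj @ B, lam)
        # mu-step: closed form (3.3), the mean residual over observed
        # post-treatment entries.
        mask = obs & (I_S > 0)
        resid = (np.where(obs, Y, 0.0) - W @ B.T)[mask]
        mu = resid.mean() if resid.size else 0.0
    return W, mu
```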
We compare the Coordinatewise-Soft-Impute (CSI) algorithm specified in Algorithm 1 with the vanilla algorithm SLI (corresponding to µ ≡ 0 in our notation) defined in Kidziński & Hastie, and the fPCA algorithm based on the mixed-effect model. We train all three algorithms on the same set of basis functions and choose the tuning parameters λ (for CSI and SLI) and R (for fPCA) using 5-fold cross-validation. Each model is then re-trained using the whole training set and tested on a held-out test set Ω_test consisting of 10% of all data. The performance is evaluated in two ways. First, for different combinations of the treatment effect µ and observation density ρ, we train each of the three algorithms on the simulated data set and compute the relative squared error between the ground truth µ and the estimate µ̂, i.e., RSE(µ̂) = (µ̂ − µ)²/µ². Second, for different algorithms applied to the same data set, we compare the mean squared error between the observations Y and the estimate Ŷ over the test set Ω_test, namely MSE(Ŷ) = Σ_{(i,j)∈Ω_test}(Y_{i,j} − Ŷ_{i,j})²/|Ω_test|. (5.1) We train our algorithms with all combinations of treatment effect µ ∈ {0, 0.2, 0.4, · · ·, 5}, observation rate ρ ∈ {0.1, 0.3, 0.5}, and thresholding parameter λ ∈ {0, 1, · · ·, 4} (for CSI or SLI) or rank R ∈ {2, 3, · · ·, 6} (for fPCA). For each fixed combination of parameters, we ran each algorithm 10 times and averaged the test error. The results are presented in Table 1 and Figure 1. From Table 1 and the left plot of Figure 1, we have the following findings:
1. CSI achieves better performance than SLI and fPCA, regardless of the treatment effect µ and observation rate ρ. Meanwhile, SLI performs better than fPCA.
2. All three methods give comparable errors for smaller values of µ. In particular, our introduction of the treatment effect µ does not over-fit the model in the case of µ = 0.
3. As the treatment effect µ increases, the performance of CSI remains the same whereas the performances of SLI and fPCA deteriorate rapidly. As a result, CSI outperforms SLI and fPCA by a significant margin for large values of µ. For example, when ρ = 0.1, the MSE(Ŷ) of CSI decreases from 72.3% of SLI and 59.6% of fPCA at µ = 1 to 12.4% of SLI and 5.8% of fPCA at µ = 5.
4. All three algorithms suffer a higher MSE(Ŷ) with a smaller observation rate ρ. The biggest decay comes from SLI, with an average 118% increase in test error from ρ = 0.5 to ρ = 0.1. The performances of fPCA and CSI remain comparatively stable across observation rates, with 6% and 12% increases respectively. This implies that our algorithm is tolerant to low observation rates.
To further investigate CSI's ability to estimate µ, we plot the relative squared error of µ̂ using CSI with different observation rates in the right plot of Figure 1. As shown in Figure 1, regardless of the choice of observation rate ρ and treatment effect µ, RSE(µ̂) is always smaller than 1%, and most of the estimates achieve error less than 0.1%. Therefore we conclude that, even for a sparse matrix Y, the CSI algorithm still gives a very accurate estimate of the treatment effect µ. In this section, we apply our methods to a real dataset on the progression of motor impairment and gait pathology among children with Cerebral Palsy (CP) and evaluate the effect of orthopaedic surgeries. Cerebral palsy is a group of permanent movement disorders that appear in early childhood. Orthopaedic surgery plays a major role in minimizing gait impairments related to CP. However, it can be hard to correctly evaluate the outcome of a surgery.
For example, the seemingly positive outcome of a surgery may actually be due to natural improvement during puberty. Our objective is to single out the effect of surgeries from the natural progression of the disease and use that extra piece of information for better predictions. We analyze a dataset of Gillette Children's Hospital patients visiting the clinic between 1994 and 2014, with ages ranging between 4 and 19 years, mostly diagnosed with Cerebral Palsy. The dataset contains 84 visits of 36 patients without gait disorders and 6066 visits of 2898 patients with gait pathologies. The Gait Deviation Index (GDI), one of the most commonly adopted metrics for gait functionality, was measured and recorded at each clinic visit, along with other data such as birthday, subtype of CP, date and type of previous surgery, and other medical results. Our main objective is to model individual disease progression quantified by GDI values. Due to insufficiency of data, we model surgeries of different types and multiple surgeries as a single additive effect on GDI measurements, following the methodology from Section 3. We test the same three methods CSI, SLI and fPCA as in Section 5, and compare them to two benchmarks: the population mean of all patients (pMean) and the average GDI from previous visits of the same patient (rMean). All three algorithms were trained on a spline basis of K = 9 dimensions evaluated at a grid of T = 51 points, with regularization parameters λ ∈ {20, 25, ..., 40} for CSI and SLI, and rank constraints r ∈ {2, . . ., 6} for fPCA. To ensure sufficient observations for training, we cross-validate and test our models on patients with at least 4 visits and use the rest of the data as a common training set. The effective sizes of the 2-fold validation sets and the test set are 5% each. We compare the results of each method/combination of parameters using the mean squared error of GDI estimates on held-out entries, as defined in (5.1). We run all five methods on the same training/validation/test split 40 times and compare the mean and standard deviation of the test errors. The results are presented in Table 2 and Figure 2. Compared with the null model pMean (Column 2 of Table 2), fPCA gives roughly the same order of error; CSI, SLI and rMean provide better predictions, achieving 62%, 66% and 73% of the pMean test error respectively. In particular, our algorithm CSI improves on the results of the vanilla model SLI by 7%; it also provides a stable estimate, with the smallest standard deviation across multiple selections of test sets. We take a closer look at the low-rank decomposition of disease progression curves provided by the algorithms. Fixing one run of the CSI algorithm with λ = 30, there are 6 non-zero singular value vectors, which we will refer to as principal components. We illustrate the top 3 PCs scaled by their corresponding singular values in Figure 3a. An example of a predicted curve for patient ID 5416 is illustrated in Figure 3b, where the blue curve represents the prediction without the estimated treatment effect µ̂ = 4.33, the green curve the final prediction, and the red dots the actual observations. It can be seen that the additive treatment effect helps to model the sharp difference between the pre-surgery exam (first observation) and later exams. In this paper, we propose a new framework for modeling the effect of treatment events in disease progression, together with a corresponding algorithm, CSI. To the best of our knowledge, it is the first comprehensive model that explicitly incorporates the effect of treatment events.
We would also like to mention that, although we focus on the case of disease progression in this paper, our framework is quite general and can be used to analyze data in other disciplines with sparse observations and external events. There are several potential extensions to our current framework. Firstly, our framework could be extended to more complicated settings. In our model, treatments have been characterized by the binary matrix I_S with a single parameter µ. In practice, each individual may undergo different types of surgeries, once or multiple times. Secondly, the treatment effect may be correlated with the latent variables of the disease type, and could be estimated together with the random effect w_i. Finally, our framework could be used to evaluate the true effect of a surgery. A natural question is: does surgery really help? CSI provides an estimate of the surgery effect µ; it would be interesting to design a statistical hypothesis testing/causal inference procedure to answer the proposed question. Though we are convinced that our work will not be the last word in estimating disease progression, we hope our idea is useful for further research and that readers can help take it further. Proof of Lemma 1. Note that the solution of the optimization problem min_W ½‖Z − W‖²_F + λ‖W‖_* is given by Ŵ = S_λ(Z), a standard result. Therefore it suffices to show that the minimizer of the optimization problem (A.1) is the same as the minimizer of the problem min_W ½‖Y B − W‖²_F + λ‖W‖_*. Using the facts that ‖A‖²_F = Tr(AA⊤) and B⊤B = I_K, the two objective functions differ only by a term that does not depend on W, as desired. Proof of Lemma 2. We refer the readers to the proof in Mazumder et al. (2010, Section 4, Lemma 3). Proof of Lemma 3. First we argue that µ^(k+1)_λ = arg min_µ f_λ(W^(k+1)_λ, µ), from which the first inequality immediately follows; taking the derivative with respect to µ directly gives the minimizer, as desired. The remaining two inequalities follow from the facts that W^(k+1)_λ minimizes the imputed objective (A.2) and that the imputation step does not increase the loss (A.3). Proof of Lemma 4. First we analyze the behavior of {µ^(k)_λ}. The sequence of losses is decreasing and lower bounded by 0, and therefore converges to a non-negative number, which drives the successive differences (µ^(k+1)_λ − µ^(k)_λ)² to zero, as desired. The sequence {W^(k)_λ} is slightly more complicated: a direct calculation using Lemma 2 (A.5), pairing the terms according to P_Ω and P⊥_Ω (A.6), together with the definition of µ^(k)_λ and the Cauchy-Schwartz inequality (A.7), bounds ‖W^(k+1)_λ − W^(k)_λ‖²_F by quantities that vanish as k → ∞. Combining (A.4) and (A.7), it suffices to show that the P⊥_Ω-projected differences converge to 0, and the left-hand side converges to 0, which completes the proof. Taking limits on both sides gives the desired result. Proof of Theorem 1. Let (µ̂_λ, Ŵ_λ) be a limit point of the sequence. Using Lemma 5 (A.8) and Lemma 2 (A.9), one can show that 0 ∈ ∂_W f_λ(Ŵ_λ, µ̂_λ); taking the derivative directly also gives 0 = ∂_µ f_λ(Ŵ_λ, µ̂_λ). Therefore (Ŵ_λ, µ̂_λ) is a stationary point of f_λ(W, µ). Notice that the loss function f_λ(W, µ) is convex with respect to (W, µ). Thus we have proved that the limit point (Ŵ_λ, µ̂_λ) minimizes the function f_λ(W, µ). Let G be the grid of T equidistributed points and let B be the basis of K spline functions evaluated on the grid G. We will simulate the N × T observation matrix Y from three parts, Y = W B⊤ + µI_S + E, where W follows a mixture-Gaussian distribution with low-rank structure, I_S is the treatment matrix with uniformly distributed starting times, and E represents the i.i.d. measurement error.
The specific procedure is described below.
1. Generating W, given parameters κ ∈ [0, 1], r_1, r_2 ∈ R, s_1, s_2 ∈ R^K_{≥0}:
(a) Sample two K × K orthogonal matrices V_1, V_2 via singular value decomposition of two random matrices.
(b) Construct W as a mixture of the two components, where diag[s] is the diagonal matrix with diagonal elements s, "·" represents coordinatewise multiplication, and t, 1 − t and r_i γ_i are recycled to match the dimensions.
2. Generating I_S, given parameter p_tr ∈ [0, 1]:
(a) For each i = 1,..., N, sample T_i uniformly at random from {1, . . ., T/p_tr}.
(b) Set I_S ← (1{j ≥ T_i})_{1≤i≤N, 1≤j≤T}.
3. Given a noise parameter σ ∈ R_{≥0}, E is drawn from i.i.d. Normal samples.
4. Given parameter µ ∈ R, let Y_0 ← W B⊤ + µI_S + E.
5. Given parameter ρ ∈ [0, 1], draw a 0-1 matrix I_Ω from i.i.d. Bernoulli(ρ) samples. Let Ω denote the set of non-zero entries of I_Ω, namely the set of observed data. Set Y such that Y_{ij} = (Y_0)_{ij} if (I_Ω)_{ij} = 1 and NA otherwise.
In the actual simulation, we fix the auxiliary parameters and vary only the treatment effect µ and the observation rate ρ across trials; a simplified sketch of this generator follows.
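The sketch below simplifies the mixture-Gaussian construction of W to a plain rank-q factor model and uses a random orthonormal matrix as a stand-in for the spline basis, so it illustrates the data layout rather than the exact Appendix-B recipe; the noise symbol σ is our assumption for the truncated parameter in step 3.

```python
import numpy as np

def simulate(N=100, T=51, K=9, q=3, mu=2.0, rho=0.3, p_tr=0.5,
             sigma=0.1, seed=0):
    """Generate Y = W B^T + mu * I_S + E with low-rank W, uniform
    treatment times and Bernoulli(rho) observation of entries."""
    rng = np.random.default_rng(seed)
    B, _ = np.linalg.qr(rng.standard_normal((T, K)))    # (T, K) orthonormal
    W = rng.standard_normal((N, q)) @ rng.standard_normal((q, K))
    T_i = rng.integers(1, int(T / p_tr) + 1, size=N)    # treatment times
    I_S = (np.arange(1, T + 1)[None, :] >= T_i[:, None]).astype(float)
    E = sigma * rng.standard_normal((N, T))
    Y0 = W @ B.T + mu * I_S + E
    Y = np.where(rng.random((N, T)) < rho, Y0, np.nan)  # observed entries
    return Y, I_S, B, W
```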
A novel matrix-completion-based algorithm to model disease progression with events
Multilingual Neural Machine Translation (NMT) systems are capable of translating between multiple source and target languages within a single system. An important indicator of generalization within these systems is the quality of zero-shot translation: translating between language pairs that the system has never seen during training. However, until now, the zero-shot performance of multilingual models has lagged far behind the quality that can be achieved by a two-step translation process that pivots through an intermediate language (usually English). In this work, we diagnose why multilingual models under-perform in zero-shot settings. We propose explicit language invariance losses that guide an NMT encoder towards learning language-agnostic representations. Our proposed strategies significantly improve zero-shot translation performance on WMT English-French-German and on the IWSLT 2017 shared task and, for the first time, match the performance of pivoting approaches while maintaining performance on supervised directions. In recent years, the emergence of sequence-to-sequence models has revolutionized machine translation. Neural models have reduced the need for pipelined components, in addition to significantly improving translation quality compared to their phrase-based counterparts BID35. These models naturally decompose into an encoder and a decoder with a presumed separation of roles: the encoder encodes text in the source language into an intermediate latent representation, and the decoder generates the target language text conditioned on the encoder representation. This framework allows us to easily extend translation to a multilingual setting, wherein a single system is able to translate between multiple languages BID11 BID28. Multilingual NMT models have often been shown to improve translation quality over bilingual models, especially when evaluated on low-resource language pairs BID14 BID20. Most strategies for training multilingual NMT models rely on some form of parameter sharing, and often differ only in terms of the architecture and the specific weights that are tied. They allow specialization in either the encoder or the decoder, but tend to share parameters at their interface. An underlying assumption of these parameter sharing strategies is that the model will automatically learn some kind of shared, universally useful representation, or interlingua, resulting in a single model that can translate between multiple languages. The existence of such a universal shared representation should naturally entail reasonable performance on zero-shot translation, where a model is evaluated on language pairs it has never seen together during training. Apart from potential practical benefits like reduced latency costs, zero-shot translation performance is a strong indicator of generalization. Enabling zero-shot translation with sufficient quality can significantly simplify translation systems, and pave the way towards a single multilingual model capable of translating between any two languages directly. However, despite being a problem of interest for a lot of recent research, the quality of zero-shot translation has lagged behind pivoting through a common language by 8-10 BLEU points BID15 BID24 BID21 BID27. In this paper we ask the question: what is the missing ingredient that will allow us to bridge this gap? Figure 1: The proposed multilingual NMT model along with the two training objectives.
CE stands for the cross-entropy loss associated with maximum likelihood estimation for translation between English and the other languages. Align represents the source-language invariance loss that we impose on the representations of the encoder. While training on the translation objective, training samples (x, y) are drawn from the set of parallel sentences, D_{x,y}. For the invariance losses, (x, y) could be drawn from D_{x,y} for the cosine loss, or from independent data distributions for the adversarial loss. Both losses are minimized simultaneously. Since we have supervised data only to and from English, one of x or y is always in English. In BID24, it was hinted that the extent of separation between language representations was negatively correlated with zero-shot translation performance. This is supported by theoretical and empirical observations in the domain adaptation literature, where the extent of subspace alignment between the source and target domains is strongly associated with transfer performance BID7 BID8 BID17. Zero-shot translation is a special case of domain adaptation in multilingual models, where English is the source domain and the other languages collectively form the target domain. Following this thread of domain adaptation and subspace alignment, we hypothesize that aligning encoder representations of different languages with that of English might be the missing ingredient for improving zero-shot translation performance. In this work, we develop auxiliary losses that can be applied to multilingual translation models during training, or as a fine-tuning step on a pre-trained model, to force encoder representations of different languages to align with English in a shared subspace. Our experiments demonstrate significant improvements on zero-shot translation performance which, for the first time, match the performance of pivoting approaches on WMT English-French-German (en-fr-de) and the IWSLT 2017 shared task, in all zero-shot directions, without any meaningful regression in the supervised directions. We further analyze the model's representations in order to understand the effect of our explicit alignment losses. Our analysis reveals that tying weights in the encoder, by itself, is not sufficient to ensure shared representations. As a result, standard multilingual models overfit to the supervised directions, and enter a failure mode when translating between zero-shot languages. Explicit alignment losses incentivize the model to use shared representations, resulting in better generalization. 2 ALIGNMENT OF LATENT REPRESENTATIONS 2.1 MULTILINGUAL NEURAL MACHINE TRANSLATION Let x = (x_1, x_2, ..., x_m) be a sentence in the source language and y = (y_1, y_2, ..., y_n) be its translation in the target language. For machine translation, our objective is to learn a model p(y|x; θ). In modern NMT, we use sequence-to-sequence models supplemented with an attention mechanism BID5 to learn this distribution. These sequence-to-sequence models consist of an encoder Enc(x) = z = (z_1, z_2, ..., z_m), parameterized by θ_enc, and a decoder that learns to map from the latent representation z to y by modeling p(y|z; θ_dec), again parameterized by θ_dec. This model is trained to maximize the likelihood of the available parallel data D_{x,y}: max_θ Σ_{(x,y)∈D_{x,y}} log p(y|x; θ). In multilingual training we jointly train a single model BID26 to translate from many possible source languages to many potential target languages.
When only the decoder is informed about the desired target language, a special token indicating the target language, <tl>, is input to the first step of the decoder. In this case, D_{x,y} is the union of all the parallel data for each of the supervised translation directions. Note that either the source or the target is always English. For zero-shot translation to work, the encoder needs to produce language-invariant feature representations of a sentence. Previous works learn these transferable features by using a weight-sharing constraint and tying the weights of the encoders, the decoders, or the attentions across some or all languages BID11 BID24 BID27 BID14. They argue that sharing these layers across languages causes sentences that are translations of each other to cluster together in a common representation space. However, when a model is trained on just the end-to-end translation objective, there is no explicit incentive for the model to discover language-invariant representations; given enough capacity, it is possible for the model to partition its intrinsic dimensions and overfit to the supervised translation directions. This would result in intermediate encoder representations that are specific to individual languages. We now explore two classes of regularizers, Ω, that explicitly force the model to make the representations in all other languages similar to their English counterparts. We align the encoder representations of every language with English, since it is the only language that gets translated into all other languages during supervised training. Thus, English representations now form an implicit pivot in the latent space. The loss function we then minimize is L = L_CE + λΩ, where L_CE is the cross-entropy loss and λ is a hyper-parameter that controls the contribution of the alignment loss Ω. Here we view zero-shot translation through the lens of domain adaptation, wherein English is the source domain and the other languages together constitute the target domain. BID7 and BID30 have shown that the target risk can be bounded by the source risk plus a discrepancy metric between the source and target feature distributions. Treating the encoder as a deterministic feature extractor, the source distribution is Enc(x_en) p(x_en) and the target distribution is Enc(x_t) p(x_t). To enable zero-shot translation, our objective then is to minimize the discrepancy between these distributions by explicitly optimizing the following domain adversarial loss BID17: Ω_adv = max_{θ_disc} E_{x_en ~ D_En}[log Disc(Enc(x_en))] + E_{x_t ~ D_T}[log(1 - Disc(Enc(x_t)))], where Disc is the discriminator, parametrized by θ_disc. D_En are English sentences and D_T are the sentences of all the other languages. Note that, unlike BID4 BID41, who also train the encoder adversarially with a language-detecting discriminator, we are trying to align the distribution of encoder representations of all other languages to that of English and vice-versa. Our discriminator is just a binary predictor, independent of how many languages we are jointly training on. Architecturally, the discriminator is a feed-forward network that acts on the temporally max-pooled representation of the encoder output. We also experimented with a discriminator that made independent predictions for the encoder representation, z_i, at each time-step i, but found the pooling-based approach to work better. More involved discriminators that consider the sequential nature of the encoder representations may be more effective, but we do not explore them in this work.
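A sketch of what such a pooled binary discriminator and its adversarial loss could look like in PyTorch is below. The hidden sizes mirror those reported in the experimental setup (3 hidden layers of 2048 with leaky ReLU, α = 0.1), but the class and function names are assumptions, and the alternating min-max (or gradient-reversal) update on the encoder is omitted:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PooledDiscriminator(nn.Module):
    """Binary English-vs-other discriminator acting on the temporally
    max-pooled encoder output, as described in the text."""
    def __init__(self, d_model=512, d_hidden=2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_hidden), nn.LeakyReLU(0.1),
            nn.Linear(d_hidden, d_hidden), nn.LeakyReLU(0.1),
            nn.Linear(d_hidden, d_hidden), nn.LeakyReLU(0.1),
            nn.Linear(d_hidden, 1),
        )

    def forward(self, enc_out):              # enc_out: [batch, src_len, d_model]
        pooled = enc_out.max(dim=1).values   # temporal max-pooling
        return self.net(pooled).squeeze(-1)  # logit for "input is English"

def adversarial_alignment_loss(disc, enc_en, enc_other):
    """Discriminator side of the domain-adversarial objective: separate
    English encodings from non-English ones. The encoder is trained to
    fool this predictor (min-max update not shown)."""
    logits_en = disc(enc_en)
    logits_ot = disc(enc_other)
    return (F.binary_cross_entropy_with_logits(logits_en, torch.ones_like(logits_en))
            + F.binary_cross_entropy_with_logits(logits_ot, torch.zeros_like(logits_ot)))
```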
While adversarial approaches have the benefit of not needing parallel data, they only align the marginal distributions of the encoder's representations. Further, adversarial approaches are hard to optimize and are often susceptible to mode collapse, especially when the distribution to be modeled is multi-modal. Even if the discriminator is fully confused, there are no guarantees that the two learned distributions will be identical BID2. To resolve these potential issues, we attempt to make use of the available parallel data, and enforce an instance-level correspondence between the pairs (x, y) ∈ D_{x,y}, rather than just aligning the marginal distributions of Enc(x)p(x) and Enc(y)p(y) as in the case of domain-adversarial training. Previous work on multi-modal and multi-view representation learning has shown that, when given paired data, transferable representations can be learned by improving some measure of similarity between the corresponding views from each mode. Various similarity measures have been proposed, such as Euclidean distance BID22, cosine distance BID16, correlation BID1, etc. In our case, the different views correspond to equivalent sentences in different languages. Note that Enc(x) and Enc(y) are actually a pair of sequences, and to compare them we would ideally have access to the word-level correspondences between the two sentences. In the absence of this information, we make a bag-of-words assumption and align the pooled representations, similar to BID19; BID10. Empirically, we find that max pooling and minimizing the cosine distance between the representations of parallel sentences works well. We now minimize the distance function: Ω_cos = Σ_{(x,y) ∈ D_{x,y}} [1 - cos(maxpool(Enc(x)), maxpool(Enc(y)))]. A multilingual model with a single encoder and a single decoder similar to BID24 is our baseline. This setup maximally enforces the parameter-sharing constraint that previous works rely on to promote cross-lingual transfer. We first train our model solely on the translation loss until convergence, on all languages to and from English. This is our baseline multilingual model. We then fine-tune this model with the proposed alignment losses, in conjunction with the translation objective. We then compare the performance of the baseline model against the aligned models on both the supervised and the zero-shot translation directions. We also compare our zero-shot performance against the pivoting performance using the baseline model. For our en↔{fr, de} experiments, we train our models on the standard en→fr (39M) and en→de (4.5M) training datasets from WMT'14. We pre-process the data by applying the standard Moses pre-processing. We swap the source and target to get parallel data for the fr→en and de→en directions. The resulting datasets are merged by oversampling the German portion to match the size of the French portion. This results in a total of 158M sentence pairs. We get word counts and apply 32k BPE BID33 to obtain subwords. The target language <tl> tokens are also added to the vocabulary. We use newstest-2012 as the dev set and newstest-2013 as the test set. Both of these sets are 3-way parallel and have 3003 and 3000 sentences respectively. We run all our experiments with Transformers BID36, using the TransformerBase config. We train our model with a learning rate of 1.0 and 4000 warmup steps. Input dropout is set to 0.1. We use synchronized training with 16 Tesla P100 GPUs and train the model for 500k steps.
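Before the training details below continue, here is a minimal sketch of the pooled cosine alignment loss just described; tensor shapes are assumptions and padding masks are omitted for brevity:

```python
import torch.nn.functional as F

def cosine_alignment_loss(enc_x, enc_y):
    """Bag-of-words cosine alignment Omega_cos between parallel sentences:
    max-pool each encoded sequence over time, then minimize the cosine
    distance between the pooled vectors.

    enc_x, enc_y: [batch, len, d_model] encoder outputs of a sentence
    pair (x, y) drawn from the parallel data D_{x,y}.
    """
    px = enc_x.max(dim=1).values
    py = enc_y.max(dim=1).values
    return (1.0 - F.cosine_similarity(px, py, dim=-1)).mean()
```

Either this penalty or the adversarial one above can be passed as `alignment_penalty` to the `joint_loss` sketch given earlier.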
The model is instructed on which language to translate a given input sentence into by feeding in a unique <tl> token per target language. In our implementation, this token is prepended to the source sentence, but it could just as easily be fed into the decoder to the same effect. For the alignment experiments, we fine-tune a pre-trained multilingual model by jointly training on both the alignment and translation losses. For adversarial alignment, the discriminator is a feed-forward network with 3 hidden layers of dimension 2048 using the leaky ReLU (α = 0.1) nonlinearity. λ was tuned to 1.0 for both the adversarial and the cosine alignment losses. Simple fine-tuning with SGD using a learning rate of 1e-4 works well and we do not need to train from scratch. We observe that the models converge within a few thousand updates.
Table 1: Zero-shot results with baseline and aligned models compared against pivoting, over directions including de→fr, fr→de, en→fr, en→de and fr→en. Zero-shot results are marked zs. Pivoting through English is performed using the baseline multilingual model.
Our results, in Table 1, demonstrate that both our approaches to align representations result in large improvements in zero-shot translation quality for both directions, effectively closing the gap to the performance of the strong pivoting baseline. We didn't notice any significant differences between the performance of the two proposed alignment methods. Importantly, these improvements come at no cost to the quality in the supervised directions. While both the proposed approaches aren't significantly different in terms of final quality, we noticed that the adversarial regularizer was very sensitive to the initialization scheme and the choice of hyper-parameters. In comparison, the cosine distance loss was relatively stable, with λ being the only hyper-parameter controlling the weight of the alignment loss with respect to the translation loss. We further analyze the outputs of our baseline multilingual model in order to understand the effect of alignment on zero-shot performance. We identify the major effects that contribute to the poor zero-shot performance in multilingual models, and investigate how an explicit alignment loss resolves these pathologies.
Table 2: Percentage of sentences by language in reference translations and the sentences decoded using the baseline model (newstest2012). Recovered rows: fr references: 4%, 0%, 96%; de references: 4%, 96%, 0% (a preceding row, 94%, 0%, lost its label in extraction).
While investigating the high variance of the zero-shot translation score during multilingual training in the absence of alignment, we found that a significant fraction of the examples were not getting translated into the desired target language at all. Instead, they were either translated to English or simply copied. This phenomenon is likely a consequence of the fact that at training time, German and French source sentences were always translated into English. Because of this, the model never learns to properly attribute the target language to the <tl> token, and simply changing the <tl> token at test time is not effective. We count the number of sentences in each language using an automatic language identification tool and report the results in Table 2. Further, we find that for a given sentence, all output tokens tend to be in the same language, and there is little to no code-switching.
This was also observed by BID24, where it was explained as a cascading effect in the decoder: once the decoder starts emitting tokens in one language, the conditional distribution p(y_i | y_{i-1}, ..., y_1) is heavily biased towards that particular language. With explicit alignment, we remove the target language information encoded into the source token representations. In the absence of this confounding information, the <tl> target token gives us more control to set the translation direction.
Table 3: BLEU on the subset of examples predicted in the right language by direct translation using the baseline system (newstest2012).
Here we try to isolate the gains our system achieves due to improvements in the learning of transferable features from those that can be attributed to decoding into the desired language. We discount the errors that could be attributed to an incorrect output language and inspect the translation quality on the subset of examples where the baseline model decodes in the right language. We re-evaluate the BLEU scores of all systems and show the results in Table 3. We find that the vanilla zero-shot translation system (Baseline) is much stronger than expected at first glance. It only lags the pivoting baseline by 0.5 BLEU points on French to German and by 2.7 BLEU points on German to French. We can now see that, even on this subset which was chosen to favor the baseline model, the representation alignment of our adapted model contributes to improving the quality of zero-shot translation by 0.7 and 2.2 BLEU points on French to German and German to French, respectively. We design a simple experiment to determine whether representations learned while training a multilingual translation model are truly cross-lingual. We probe our baseline and aligned multilingual models with 3-way aligned data to determine the extent to which their representations are functionally equivalent, during different stages in model training. Because source languages can have different sequence lengths and word orders for equivalent sentences, it is not possible to directly compare encoder output representations. However, it is possible to directly compare the representations extracted by the decoder from the encoder outputs for each language. Suppose we want to compare representations of semantically equivalent English and German sentences when translating into French. At time-step i in the decoder, we use the model to predict p(y_i | Enc(x_en), y_{1:(i-1)}) and p(y_i | Enc(x_de), y_{1:(i-1)}). However, in the seq2seq-with-attention formulation, these problems reduce to predicting p(y_i | c_i^{en}, y_{1:(i-1)}) and p(y_i | c_i^{de}, y_{1:(i-1)}), where c_i^{en} and c_i^{de} are the attention context vectors extracted from the English and German encodings at time-step i. We use a randomly sampled set of 100 parallel en-de-fr sentences extracted from our dev set, newstest2012, to perform this analysis. For each set of aligned sentences, we obtain the sequence of aligned context vectors (c_i^{en}, c_i^{de}) and plot the mean cosine distances for our baseline training run, and the incremental runs with alignment losses, in FIG1. Our results indicate that the vanilla multilingual model learns to align encoder representations over the course of training. However, in the absence of an external incentive, the alignment process arrests as training progresses. Incrementally training with the alignment losses results in a more language-agnostic representation, which contributes to the improvements in zero-shot performance. Given the good results on WMT en-fr-de, we now extend our experiments to test the scalability of our approach to multiple languages.
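Before moving to the multilingual experiments, here is a minimal sketch of the probing computation just described, assuming the aligned context vectors have already been extracted by force-decoding the same French reference from the English and German sources (the helper name is illustrative):

```python
import torch.nn.functional as F

def mean_context_cosine_distance(ctx_en, ctx_de):
    """Probe of functional equivalence: mean cosine distance between
    aligned decoder attention context vectors obtained from semantically
    equivalent English and German sources.

    ctx_en, ctx_de: [tgt_len, d_model] aligned context vector sequences.
    """
    return (1.0 - F.cosine_similarity(ctx_en, ctx_de, dim=-1)).mean()
```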
We work with the IWSLT-17 dataset, which has transcripts of TED talks in 5 languages: English (en), Dutch (nl), German (de), Italian (it), and Romanian (ro). The original dataset is multi-way parallel with approximately 220 thousand sentences per language, but for the sake of our experiments we only use the to/from-English directions for training. The dev and test sets are also multi-way parallel and comprise around 900 and 1100 sentences per language pair, respectively. We again use the Transformer base architecture. We set the learning rate to 2.0 and the number of warmup steps to 8k. A dropout rate of 0.2 was applied to all connections of the transformer. We use the cosine loss with λ set to 0.001 because of how easy it is to tune. Our baseline model's scores on IWSLT-17 are suspiciously close to those of bridging, as seen in TAB3. We suspect this is because the data that we train on is multi-way parallel, and the English sentences are shared across the language pairs. This may be helping the model learn shared representations with the English sentences acting as pivots. Even so, we are able to gain 1 BLEU over the strong baseline system and demonstrate the applicability of our approach to larger groups of languages.
5 RELATED WORK
Multilingual NMT models were first proposed by BID11 and have since been explored in BID14; BID9 and several other works. While zero-shot translation was the direct goal of BID14, they were only able to achieve 'zero-resource translation', by using their pre-trained multi-way multilingual model to generate pseudo-parallel data for fine-tuning. BID24 were the first to show the possibility of zero-shot translation by proposing a model that shared all the components and used a token to indicate the target language. BID32 propose a novel way to modulate the amount of sharing between languages, by using a parameter generator to generate the parameters for either the encoder or the decoder of the multilingual NMT system based on the source and target languages. They also report higher zero-shot translation scores with this approach. Learning coordinated representations with the use of parallel data has been explored thoroughly in the context of multi-view and multi-modal learning BID6. These often involve either auto-encoder-like networks with a reconstruction objective, or paired feed-forward networks with a similarity-based objective. The function used to encourage similarity may be Euclidean distance BID22, cosine distance BID16, partial order BID37, correlation BID1, etc. More recently, a vast number of adversarial approaches have been proposed to learn domain-invariant representations, by ensuring that they are indistinguishable by a discriminator network BID17. The use of aligned parallel data to learn shared representations is common in the field of cross-lingual or multilingual representations, where work falls into three main categories. Obtaining representations from word-level alignments - bilingual dictionaries or automatically generated word alignments - is the most popular approach BID13 BID42. The second category of methods tries to leverage document-level alignment, like parallel Wikipedia articles, to generate cross-lingual representations BID34 BID38. The final category of methods often uses sentence-level alignments, in the form of parallel translation data, to obtain cross-lingual representations BID23 BID18 BID29 BID0. Recent work by BID12 showed that the representations learned by a multilingual NMT system are widely applicable across tasks and languages.
Parameter-sharing-based approaches have also been tried in the context of unsupervised NMT, where learning a shared latent space BID3 was believed to improve translation quality. Some approaches explore applying adversarial losses on the encoder, to ensure that the representations are language-agnostic. However, recent work has shown that enforcing a shared latent space is not important for unsupervised NMT BID25, and the cycle-consistency loss suffices by itself. In this work we propose explicit alignment losses, as an additional constraint for multilingual NMT models, with the goal of improving zero-shot translation. We view the zero-shot NMT problem in the light of subspace alignment for domain adaptation, and propose simple approaches to achieve this. Our experiments demonstrate significantly improved zero-shot translation performance that is, for the first time, comparable to strong pivoting-based approaches. Through careful analyses we show how our proposed alignment losses result in better representations, and thereby better zero-shot performance, while still maintaining performance on the supervised directions. Our proposed methods have been shown to work reliably on two public benchmark datasets: WMT English-French-German and the IWSLT 2017 shared task.
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
ByWMz305FQ
Simple similarity constraints on top of multilingual NMT enable high-quality translation between unseen language pairs for the first time.
We prove the precise scaling, at finite depth and width, for the mean and variance of the neural tangent kernel (NTK) in a randomly initialized ReLU network. The standard deviation is exponential in the ratio of network depth to width. Thus, even in the limit of infinite overparameterization, the NTK is not deterministic if depth and width simultaneously tend to infinity. Moreover, we prove that for such deep and wide networks, the NTK has a non-trivial evolution during training by showing that the mean of its first SGD update is also exponential in the ratio of network depth to width. This is in sharp contrast to the regime where depth is fixed and network width is very large. Our results suggest that, unlike relatively shallow and wide networks, deep and wide ReLU networks are capable of learning data-dependent features even in the so-called lazy training regime. Modern neural networks are typically overparameterized: they have many more parameters than the size of the datasets on which they are trained. That some setting of parameters in such networks can interpolate the data is therefore not surprising. But it is a priori unexpected not only that such interpolating parameter values can be found by stochastic gradient descent (SGD) on the highly non-convex empirical risk, but also that the resulting network function generalizes to unseen data. In an overparameterized neural network N(x) the individual parameters can be difficult to interpret, and one way to understand training is to rewrite the SGD updates ∆θ_p = -λ ∂L/∂θ_p, p = 1, ..., P, of trainable parameters θ = {θ_p}_{p=1}^P with a loss L and learning rate λ as kernel gradient descent updates for the values N(x) of the function computed by the network: ∆N(x) = -λ ⟨∂L/∂N, K_N(x, ·)⟩_B. Here B = {(x_1, y_1), ..., (x_|B|, y_|B|)} is the current batch, the inner product is the empirical ℓ2 inner product over B, and K_N is the neural tangent kernel (NTK): K_N(x, x') = Σ_{p=1}^P (∂N(x)/∂θ_p)(∂N(x')/∂θ_p). This relation is valid to first order in λ. It translates between two ways of thinking about the difficulty of neural network optimization: (i) the parameter space view, where the loss L, a complicated function of θ ∈ R^{#parameters}, is minimized using gradient descent with respect to a simple (Euclidean) metric; (ii) the function space view, where the loss L, which is a simple function of the network mapping x → N(x), is minimized over the manifold M_N of all functions representable by the architecture of N using gradient descent with respect to a potentially complicated Riemannian metric K_N on M_N. A remarkable observation of Jacot et al. (2018) is that K_N simplifies dramatically when the network depth d is fixed and its width n tends to infinity. In this setting, by the universal approximation theorem, the manifold M_N fills out any (reasonable) ambient linear space of functions. The results of Jacot et al. (2018) then show that the kernel K_N in this limit is frozen throughout training to the infinite width limit of its average E[K_N] at initialization, which depends on the depth and non-linearity of N but not on the dataset. This mapping between parameter space SGD and kernel gradient descent for a fixed kernel can be viewed as two separate statements. First, at initialization, the distribution of K_N converges in the infinite width limit to the delta function on the infinite width limit of its mean E[K_N]. Second, the infinite width limit of SGD dynamics in function space is kernel gradient descent for this limiting mean kernel for any fixed number of SGD iterations.
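For a concrete network, the NTK just defined can be computed directly with automatic differentiation. The sketch below assumes a scalar-output model and single inputs, and is a naive O(P) illustration rather than an efficient implementation; the helper name is ours:

```python
import torch

def empirical_ntk(model, x1, x2):
    """K_N(x1, x2) = sum_p dN(x1)/dtheta_p * dN(x2)/dtheta_p
    for a scalar-output network N, via two backward passes."""
    params = [p for p in model.parameters() if p.requires_grad]
    grads1 = torch.autograd.grad(model(x1).squeeze(), params)
    grads2 = torch.autograd.grad(model(x2).squeeze(), params)
    return sum((g1 * g2).sum() for g1, g2 in zip(grads1, grads2))
```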
As long as the loss L is well-behaved with respect to the network outputs N(x) and E[K_N] is non-degenerate on the subspace of function space given by values on inputs from the dataset, SGD for infinitely wide networks will converge with probability 1 to a minimum of the loss. Further, kernel-method-based theorems show that even in this infinitely overparameterized regime neural networks will have non-vacuous guarantees on generalization. But replacing neural network training by gradient descent for a fixed kernel in function space is also not completely satisfactory for several reasons. First, it suggests that no feature learning occurs during training for infinitely wide networks in the sense that the kernel E[K_N] (and hence its associated feature map) is data-independent. In fact, empirically, networks with finite but large width trained with initially large learning rates often outperform NTK predictions at infinite width. One interpretation is that, at finite width, K_N evolves through training, learning data-dependent features not captured by the infinite width limit of its mean at initialization. In part for such reasons, it is important to study, both empirically and theoretically, finite-width corrections to K_N. Another interpretation is that the specific NTK scaling of weights at initialization and the implicit small learning rate limit obscure important aspects of SGD dynamics. Second, even in the infinite width limit, although K_N is deterministic, it has no simple analytical formula for deep networks, since it is defined via a layer-by-layer recursion. In particular, the exact dependence, even in the infinite width limit, of K_N on network depth is not well understood. Moreover, the joint statistical effects of depth and width on K_N in finite-size networks remain unclear, and the purpose of this article is to shed light on the simultaneous effects of depth and width on K_N for finite but large widths n and any depth d. Our results apply to fully connected ReLU networks at initialization, for which our main contributions are: 1. In contrast to the regime in which the depth d is fixed but the width n is large, K_N is not approximately deterministic at initialization so long as d/n is bounded away from 0. Specifically, for a fixed input x, the normalized on-diagonal second moment E[K_N(x,x)^2] / E[K_N(x,x)]^2 grows exponentially in d/n (see Theorem 1 for the precise statement). Thus, when d/n is bounded away from 0, even when both n, d are large, the standard deviation of K_N(x,x) is at least as large as its mean, showing that its distribution at initialization is not close to a delta function. See Theorem 1. 2. Moreover, when L is the square loss, the average of the SGD update ∆K_N(x,x) to K_N(x,x) from a batch of size one containing x is, up to universal constants and exponential-in-d/n factors, of order d^2/(n n_0) (see Theorem 2 for the precise statement), where n_0 is the input dimension. Therefore, if d^2/(n n_0) > 0, the NTK will have the potential to evolve in a data-dependent way. Moreover, if n_0 is comparable to n and d/n > 0, then it is possible that this evolution will have a well-defined expansion in d/n. See Theorem 2. In the precise versions of both statements, ≍ means bounded above and below by universal constants. We emphasize that our results hold at finite d, n, and that the implicit constants, both in the statements above and in the error terms, are independent of d and n. Moreover, our precise results, stated in §2 below, hold for networks with variable layer widths. We have denoted network width by n only for the sake of exposition.
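To make these statements tangible, one can estimate the moments of K_N(x,x) over random initializations. The following minimal sketch (helper names are ours; He-style normal initialization assumed for the weight measure µ, zero but trainable biases as in the setup below) illustrates the experiment, not the proof:

```python
import torch
import torch.nn as nn

def ntk_diag(net, x):
    """K_N(x, x) = sum_p (dN(x)/dtheta_p)^2 for a scalar-output net."""
    grads = torch.autograd.grad(net(x).squeeze(), list(net.parameters()))
    return sum((g * g).sum() for g in grads).item()

def relu_net(n0, n, d):
    """Depth-d fully connected ReLU net with a linear output layer,
    He-normal weights, and zero (but trainable) biases."""
    widths = [n0] + [n] * (d - 1) + [1]
    layers = []
    for i in range(d):
        layers.append(nn.Linear(widths[i], widths[i + 1]))
        if i < d - 1:
            layers.append(nn.ReLU())
    net = nn.Sequential(*layers)
    for m in net:
        if isinstance(m, nn.Linear):
            nn.init.normal_(m.weight, std=(2.0 / m.in_features) ** 0.5)
            nn.init.zeros_(m.bias)
    return net

def ntk_moment_estimate(n0=16, n=32, d=16, trials=500):
    """Monte Carlo estimate of E[K(x,x)] and Std[K(x,x)] over random
    inits, to probe how the fluctuations grow with d/n."""
    x = torch.randn(1, n0)
    vals = torch.tensor([ntk_diag(relu_net(n0, n, d), x) for _ in range(trials)])
    return vals.mean().item(), vals.std().item()
```

Varying d at fixed n (or vice versa) in `ntk_moment_estimate` should show the standard deviation becoming comparable to the mean once d/n is bounded away from 0, consistent with Theorem 1.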
The appropriate generalization of d/n to networks with varying layer widths is the parameter which in light of the estimates in and plays the role of an inverse temperature. A number of articles (; ; ;) have followed up on the original NTK work Jacot et al. (2018) can be promoted to d/n expansions. Also, the sum-over-path approach to studying correlation functions in randomly initialized ReLU nets was previously taken up for the forward pass by and for the backward pass by and. Consider a ReLU network N with input dimension n 0, hidden layer widths n 1,..., n d−1, and output dimension n d = 1. We will assume that the output layer of N is linear and initialize the biases in N to zero. Therefore, for any input x ∈ R n0, the network N computes given by where for i = 1,..., d − 1 and µ is a fixed probability measure on R that we assume has a density with respect to Lebesgue measure and satisfies: µ is symmetric around 0, The three assumptions in hold for virtually all standard network initialization schemes with the exception of orthogonal weight initialization. But believe our extend hold also for this case but not do take up this issue. The on-diagonal NTK is and we emphasize that although we have initialized the biases to zero, they are not removed them from the list of trainable parameters. Our first is the following: Theorem 1 (Mean and Variance of NKT on Diagonal at Init). We have Moreover, we have that E[K N (x, x) 2 ] is bounded above and below by universal constants times In particular, if all the hidden layer widths are equal (i.e. n i = n, for i = 1, . . ., d − 1), we have where f g means f is bounded above and below by universal constants times g. This shows that in the deep and wide double scaling limit the NTK does not converge to a constant in probability. This is contrast to the wide and shallow regime where n i → ∞ and d < ∞ is fixed. Our next shows that when L is the square loss K N (x, x) is not frozen during training. To state it, fix an input x ∈ R n0 to N and define ∆K N (x, x) to be the update from one step of SGD with a batch of size 1 containing x (and learning rate λ). Theorem 2 (Mean of Time Derivative of NTK on Diagonal at Init). We have that E λ −1 ∆K N (x, x) is bounded above and below by universal constants times times a multiplicative error of size 1/n i, as in Theorem 1. In particular, if all the hidden layer widths are equal (i.e. n i = n, for i = 1, . . ., d − 1), we find Observe that when d is fixed and n i = n → ∞, the pre-factor in front of exp (5β) scales like 1/n. This is in keeping with the from and. Moreover, it shows that if d, n, n 0 grow in any way so that dβ/n 0 = d 2 /nn 0 → 0, the update ∆K N (x, x) to K N (x, x) from the batch {x} at initialization will have mean 0. It is unclear whether this will be true also for larger batches and when the arguments of K N are not equal. In contrast, if n i n and β = d/n is bounded away from 0, ∞, and the n 0 is proportional to d, the average update E[∆K N (x, x)] has the same order of magnitude as E[K N (x)]. The remainder of this article is structured as follows. First, we give an outline of the proofs of Theorems 1 and 2 in §3 and particularly in §3.1, which gives an in-depth but informal explanation of our strategy for computing moments of K N and its time derivative. Next, in the Appendix Section §A, we introduce some notation about paths and edges in the computation graph of N. This notation will be used in the proofs of Theorems 1 and 2 presented in the Appendix Section §B- §D. 
The computations in §B explain how to handle the contribution to K N and ∆K N coming only from the weights of the network. They are the most technical and we give them in full detail. Then, the discussion in §C and §D show how to adapt the method developed in §B to treat the contribution of biases and mixed bias-weight terms in K N, K 2 N and ∆K N. Since the arguments are simpler in these cases, we omit some details and focus only on highlighting the salient differences. 3 OVERVIEW OF PROOF OF THEOREMS 1 AND 2 The proofs of Theorems 1 and 2 are so similar that we will prove them at the same time. In this section and in §3.1 we present an overview of our argument. Then, we carry out the details in Appendix Sections §B- §D below. Fix an input x ∈ R n0 to N. Recall from that where we've set and have suppressed the dependence on x, N. Similarly, we have where we have introduced and have used that the loss on the batch {x} is given by 2 for some target value N * (x). To prove Theorem 1 we must estimate the following quantities: To prove Theorem 2, we must control in addition The most technically involved computations will turn out to be those involving only weights: namely, the terms. These terms are controlled by writing each as a sum over certain paths γ that traverse the network from the input to the output layers. The corresponding for terms involving the bias will then turn out to be very similar but with paths that start somewhere in the middle of network (corresponding to which bias term was used to differentiate the network output). The main about the pure weight contributions to K N is the following Proposition 3 (Pure weight moments for K N, ∆K N). We have Finally, We prove Proposition 3 in §B below. The proof already contains all the ideas necessary to treat the remaining moments. In §C and §D we explain how to modify the proof of Proposition 3 to prove the following two Propositions: Proposition 4 (Pure bias moments for K N, ∆K N). We have Moreover, Finally, with probability 1, we have ∆ bb = 0. Proposition 5 (Mixed bias-weight moments for K N, ∆K N). We have Further, E[∆ wb] is bounded above and below by universal constants times The statements in Theorems 1 and 2 that hold for general n i now follow directly from Propositions 3-5. The asymptotics when n i n follow from some routine algebra. Before turning to the details of the proof of Propositions 3-5 below, we give an intuitive explanation of the key steps in our sum-over-path analysis of the moments of Since the proofs of all three Propositions follow a similar structure and Proposition 3 is the most complicated, we will focus on explaining how to obtain the first 2 moments of K w. Since the biases are initialized to zero and K w involves only derivatives with respect to the weights, for the purposes of analyzing K w the biases play no role. Without the biases, the output of the neural network, N (x) can be express as a weighted sum over paths in the computational graph of the network: where Γ 1 a is the collection of paths in N starting at neuron a and the weight of a path wt(γ) is defined in in the Appendix and includes both the product of the weights along γ and the condition that every neuron in γ is open at x. The path γ begins at some neuron in the input layer of N and passes through a neuron in every subsequent layer until ending up at the unique neuron in the output layer (see). 
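As an aside, the identity N(x) = Σ_γ wt(γ) can be checked by brute force on a toy bias-free network. This sketch is our own illustration (exponential cost in depth, so toy sizes only) and is not part of the formal argument:

```python
import itertools
import numpy as np

def sum_over_paths(x, weights):
    """Evaluate N(x) as a sum over paths for a bias-free ReLU net with a
    linear last layer. weights[i] has shape (n_{i+1}, n_i). wt(gamma) is
    x at the path's input neuron, times the product of edge weights along
    gamma, times the indicator that every hidden neuron on gamma is open."""
    # Post-activations per hidden layer, to read off "open" neurons at x.
    acts, y = [x], x
    for W in weights[:-1]:
        y = np.maximum(W @ y, 0.0)
        acts.append(y)
    widths = [W.shape[1] for W in weights] + [weights[-1].shape[0]]
    total = 0.0
    for path in itertools.product(*(range(m) for m in widths)):
        wt = x[path[0]]
        for i, W in enumerate(weights):
            wt *= W[path[i + 1], path[i]]
            # Hidden neurons along the path must be open (ReLU active).
            if i < len(weights) - 1 and acts[i + 1][path[i + 1]] <= 0.0:
                wt = 0.0
        total += wt
    return total

# Sanity check against the ordinary forward pass on a tiny 2-layer net:
rng = np.random.default_rng(0)
Ws = [rng.standard_normal((3, 2)), rng.standard_normal((1, 3))]
x = rng.standard_normal(2)
forward = Ws[-1] @ np.maximum(Ws[0] @ x, 0.0)
assert np.allclose(sum_over_paths(x, Ws), forward[0])
```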
Being a product over edge weights in a given path, the derivative of wt(γ) with respect to a weight W e on an edge e of the computational graph of N is: There is a subtle point here that wt(γ) also involves indicator functions of the events that neurons along γ are open at x. However, with probability 1, the derivative with respect to W e of these indicator functions is identically 0 at x. The details are in Lemma 11. Because K w is a sum of derivatives squared (see), ignoring the dependence on the network input x, the kernel K w roughly takes the form where the sum is over collections (γ 1, γ 2) of two paths in the computation graph of N and edges e in the computational graph of N that lie on both (see Lemma 6 for the precise statement). When computing the mean, E[K w], by the mean zero assumption of the weights W e (see), the only contribution is when every edge in the computational graph of N is traversed by an even number of paths. Since there are exactly two paths, the only contribution is when the two paths are identical, dramatically simplifying the problem. This gives rise to the simple formula for E[K w] (see). The expression w is more complex. It involves sums over four paths in the computational graph of N as in the second statement of Lemma 6. Again recalling that the moments of the weights have mean 0, the only collections of paths that contribute to E[K 2 w] are those in which every edge in the computational graph of N is covered an even number of times: However, there are now several ways the four paths can interact to give such a configuration. It is the combinatorics of these interactions, together with the stipulation that the marked edges e 1, e 2 belong to particular pairs of paths, which complicates the analysis of E[K 2 w]. We estimate this expectation in several steps: 1. Obtain an exact formula for the expectation in: where F (Γ, e 1, e 2) is the product over the layers = 1,..., d in N of the "cost" of the interactions of γ 1,..., γ 4 between layers − 1 and. The precise formula is in Lemma 7. 2. Observe the dependence of F (Γ, e 1, e 2) on e 1, e 2 is only up to a multiplicative constant: F (Γ, e 1, e 2) F * (Γ). The precise relation is. This shows that, up to universal constants, γ1,γ2 togethe at layer 1 γ3,γ4 togethe at layer 2. This is captured precisely by the terms I j, II j defined in,. 3. Notice that F * (Γ) depends only on the un-ordered multiset of edges E = E Γ ∈ Σ 4 even determined by Γ (see for a precise definition). We therefore change variables in the sum from the previous step to find where Jacobian(E, e 1, e 2) counts how many collections of four paths Γ ∈ Γ 4 even that have the same E Γ also have paths γ 1, γ 2 pass through e 1 and paths γ 3, γ 4 pass through e 2. Lemma 8 gives a precise expression for this Jacobian. It turns outs, as explained just below Lemma 8, that Jacobian(E, e 1, e 2) 6 #loops(E), where a loop in E occurs when the four paths interact. More precisely, a loop occurs whenever all four paths pass through the same neuron in some layer (see Figures 1 and 2). 4. Change variables from unordered multisets of edges E ∈ Σ 4 even in which every edge is covered an even number of times to pairs of paths V ∈ Γ 2. The Jacobian turns out to be 2 −#loops(E) (Lemma 9), giving 5. Just like F * (V), the term 3 #loops(V) is again a product over layers in the computational graph of N of the "cost" of interactions between our four paths. 
Aggregating these two terms into a single functional F * (E) and factoring out the 1/n terms in F * (V) we find that: where the 1/n terms cause the sum to become an average over collections V of two independent paths in the computational graph of N, with each path sampling neurons uniformly at random in every layer. The precise , including the dependence on the input x, is in. 6. Finally, we use Proposition 10 to obtain for this expectation estimates above and below that match up multiplicative constants. Figure 1: Cartoon of the four paths γ 1, γ 2, γ 3, γ 4 between layers 1 and 2 in the case where there is no interaction. Paths stay with there original partners γ 1 with γ 2 and γ 3 with γ 4 at all intermediate layers. Figure 2: Cartoon of the four paths γ 1, γ 2, γ 3, γ 4 between layers 1 and 2 in the case where there is exactly one "loop" interaction between the marked layers. Paths swap away from their original partners exactly once at some intermediate layer after 1, and then swap back to their original partners before 2. Taken together Theorems 1 and 2 show that in fully connected ReLU nets that are both deep and wide the neural tangent kernel K N is genuinely stochastic and enjoys a non-trivial evolution during training. This suggests that in the overparameterized limit n, d → ∞ with d/n ∈ (0, ∞), the kernel K N may learn data-dependent features. Moreover, our show that the fluctuations of both K N and its time derivative are exponential in the inverse temperature β = d/n. It would be interesting to obtain an exact description of its statistics at initialization and to describe the law of its trajectory during training. Assuming this trajectory turns out to be data-dependent, our suggest that the double descent curve Belkin et al. (2018; 2019); that trades off complexity vs. generalization error may display significantly different behaviors depending on the mode of network overparameterization. However, it is also important to point out that the in;; show that, at least for fully connected ReLU nets, gradient-based training is not numerically stable unless d/n is relatively small (but not necessarily zero). Thus, we conjecture that there may exist a "weak feature learning" NTK regime in which network depth and width are both large but 0 < d/n 1. In such a regime, the network will be stable enough to train but flexible enough to learn data-dependent features. In the language of Chizat & Bach (2018b) one might say this regime displays weak lazy training in which the model can still be described by a stochastic positive definite kernel whose fluctuations can interact with data. Finally, it is an interesting question to what extent our hold for non-linearities other than ReLU and for network architectures other than fully connected (e.g. convolutional and residual). Typical ConvNets, for instance, are significantly wider than they are deep, and we leave it to future work to adapt the techniques from the present article to these more general settings. In this section, we introduce some notation, adapted in large part from , that will be used in the proofs of Theorems 1 and 2. For n ∈ N, we will write [n]:= {1, . . ., n}. It will also be convenient to denote k | every entry in a appears an even number of times}. 
Given a ReLU network N with input dimension n 0, hidden layer widths n 1,..., n d−1, and output dimension n d = 1, its computational graph is a directed multipartite graph whose vertex set is the disjoint union Definition 1 (Path in the computational graph of N)., a path γ in the computational graph of N from neuron z(1, α 1) to neuron z(2, α 2) is a collection of neurons in layers 1,..., 2: Further, we will write Given a collection of neurons γj is a path starting at neuron z(j,αj) ending at the output neuron z(d,1) Note that with this notation, we have Correspondingly, we will write If each edge e in the computational graph of N is assigned a weight W e, then associated to a path γ is a collection of weights: ) Definition 2 (Weight of a path in the computational graph of N). Fix 0 ≤ ≤ d, and let γ be a path in the computation graph of N starting at layer and ending at the output. The weight of a this path at a given input x to N is where, is the event that all neurons along γ are open for the input x. Here y is as in. Next, for an edge e ∈ [n i−1] × [n i] in the computational graph of N we will write for the layer of e. In the course of proving Theorems 1 and 2, it will be useful to associate to every Γ ∈ Γ k (n) an unordered multi-set of edges E Γ. Definition 3 (Unordered multisets of edges and their endpoints). For n, n, ∈ N set to be the unordered multiset of edges in the complete directed bi-paritite graph K n,n oriented from [n] to [n]. For every E ∈ Σ k (n, n) define its left and right endpoints to be where L(E), R(E) are unordered multi-sets. Using this notation, for any collection of edges between layers − 1 and that are present in Γ. Similarly, we will write Z,even }. We will moreover say that for a path γ an edge e = (α, β) ∈ [n i−1] × [n i] in the computational graph of N belongs to γ (written e ∈ γ) if Finally, for an edge e = (α, β) ∈ [n i−1] × [n i] in the computational graph of N, we set for the normalized and unnormalized weights on the edge corresponding to e (see). We begin with the well-known formula for the output of a ReLU net N with biases set to 0 and a linear final layer with one neuron: The weight of a path wt(γ) was defined in and includes both the product of the weights along γ and the condition that every neuron in γ is open at x. As explained in §A, the inner sum in is over paths γ in the computational graph of N that start at neuron a in the input layer and end at the output neuron and the random variables W γ are the normalized weights on the edge of γ between layer i − 1 and layer i (see). Differentiating this formula gives sum-over-path expressions for the derivatives of N with respect to both x and its trainable parameters. For the NTK and its first SGD update, the is the following: Lemma 6 (weight contribution to K N and ∆K N as a sum-over-paths). With probability 1, where the sum is over collections Γ of two paths in the computation graph of N and edges e that lie on both paths. Similarly, almost surely,, and plus a term that has mean 0. a, e ∈ γ, etc is defined in §A. We prove Lemma 6 in §B.1 below. Let us emphasize that the expressions for K 2 w and ∆ ww are almost identical. The main difference is that in the expression for ∆ ww, the second path γ 2 must contain both e 1 and e 2 while γ 4 has no restrictions. Hence, while for K 2 w the contribution from a collection of four paths Γ = (γ 1, γ 2, γ 3, γ 4) is the same as from the collection Γ = (γ 2, γ 1, γ 4, γ 3), for ∆ ww the contributions are different. 
This seemingly small discrepancy, as we shall see, causes the normalized expectation E[∆ ww]/E[K w] to converge to zero when d < ∞ is fixed and n i → ∞ (see the 1/n factors in the statement of Theorem 2). In contrast, in the same regime, the normalized second moment E[K Lemma 7 (Expectation of K w, K 2 w, ∆ ww as sums over 2, 4 paths). We have, where Similarly, where Finally, Lemma 7 is proved in §B.2. The expression is simple to evaluate due to the delta function in H(Γ, e). We obtain: where in the second-to-last equality we used that the number of paths in the comutational graph of N from a given neuron in the input to the output neuron equals i=1,...,d n i and in the last equality we used that n d = 1. This proves the first equality in Theorem 1. It therefore remains to evaluate and. Since they are so similar, we will continue to discuss them in parallel. To start, notice that the expression F (Γ, e 1, e 2) appearing in and satisfies where For the remainder of the proof we will write Thus, in particular, The advantage of F * (Γ) is that it does not depend on e 1, e 2. Observe that for every a = (α 1, α 2, α 3, α 4) ∈ [n 0] 4 even, we have that either α 1 = α 2, α 1 = α 3, or α 1 = α 4. Thus, by symmetry, the sum over Γ 4 even (n) in and we find and similarly, where F * (Γ)# {edges e 1, e 2 | e 1 ∈ γ 1 ∩ γ 2, e 2 ∈ γ 3 ∩ γ 4} F * (Γ)# {edges e 1, e 2 | e 1 ∈ γ 1 ∩ γ 2, e 2 ∈ γ 2, γ 3, e 1 = e 2}. To evaluate I j, II j let us write for the indicator function of the event that paths γ α, γ β pass through the same edge between layers i − 1, i in the computational graph of N. Observe that and # {edges e 1, e 2 | e 1 ∈ γ 1 ∩ γ 2, e 2 ∈ γ 2, γ 3, e 1 = e 2} = i2. Thus, we have where i2. To simplify I j,i1,i2 and II j,i1,i2 observe that F * (Γ) depends only on Γ only via the unordered edge multi-set (i.e. only which edges are covered matters; not their labelling) defined in Definition 3. Hence, we find that for j = 1, 2, 3, 4, i 1, i 2 = 1,..., d, The counts in I j, *,i1,i2 and II j, *,i1,i2 have a convenient representation in terms of Informally, the event C(E, i 1, i 2) indicates the presence of a "collision" of the four paths in Γ before the earlier of the layers i 1, i 2, while C(E, i 1, i 2) gives a "collision" between layers i 1, i 2; see Section 3.1 for the intuition behind calling these collisions. We also write Finally, for E ∈ Σ 4 a,even (n), we will define That is, a loop is created at layer i if the four edges in E all begin at occupy the same vertex in layer i − 1 but occupy two different vertices in layer i. We have the following Lemma. Lemma 8 (Evaluation of Counting Terms in and). Suppose E ∈ Σ 4 aj,even for some j = 1, 2, 3, 4. For each i 1, i 2 ∈ {1, . . ., d}, Similarly, We prove Lemma 8 in §B.3 below. Assuming it for now, observe that and that the conditions L(E) = a j are the same for j = 2, 3, 4 since the equality it is in the sense of unordered multi-sets. Thus, we find that E[K 2 w] is bounded above/below by a constant times Similarly, E[∆ ww] is bounded above/below by a constant times Observe that every unordered multi-set four edge multiset E ∈ Σ 4 even can be obtained by starting from some V ∈ Γ 2, considering its unordered edge multi-set E V and doubling all its edges. This map from Γ 2 to Σ 4 even is surjective but not injective. The sizes of the fibers is computed by the following Lemma., where as in, Lemma 9 is proved in §B.4. 
Using it and that 0 ≤ C(E, i 1, i 2) ≤ 1, the relation shows that E[K 2 w] is bounded above/below by a constant times Similarly, E[∆ ww] is bounded above/below by a constant times where, in analogy to, we have. Since the number of V in Γ 2 (n) with specified V equals, we find that so that for each and similarly, Here, E x is the expectation with respect to the probability measure on V = (v 1, v 2) ∈ Γ 2 obtained by taking v 1, v 2 independent, each drawn from the products of the measure We are now in a position to complete the proof of Theorems 1 and 2. To do this, we will evaluate the expectations E x above to leading order in i 1/n i with the help of the following elementary which is proven as Lemma 18 in. Proposition 10. Let A 0, A 1,..., A d be independent events with probabilities p 0,..., p d and B 0,..., B d be independent events with probabilities q 0,..., q d such that Denote by X i the indicator that the event A i happens, X i:= 1 {Ai}, and by Y i the indicator that B i happens, Then, if γ i ≥ 1 for every i, we have: where by convention α 0 = γ 0 = 1. In contrast, if γ i ≤ 1 for every i, we have: We first apply Proposition 10 to the estimates above for we find that Since the contribution for each layer in the product is bounded above and below by constants, we have that 2 is bounded below by a constant times and above by a constant times Here, note that the initial condition given by x and the terminal condition that all paths end at one neuron in the final layer are irrelevant. The expression is there precisely 3 ≤ 1, and K i = 1. Thus, since for i = 1,..., d − 1, the probability of X i is 1/n i + O(1/n 2 i), we find that where in the last inequality we used that 1 + x ≥ e When combined with this gives the lower bound in Proposition 3. The matching upper bound is obtained from in the same way using the opposite inequality from Proposition 10. To complete the proof of Proposition 3, we prove the analogous bounds for E[∆ ww] in a similar fashion. Namely, we fix 1 ≤ i 1 < i 2 ≤ d and write The set A is the event that the first collision between layers i 1, i 2 occurs at layer. We then have On the event A, notice that F * (V) only depends on the layers 1 ≤ i ≤ i 1 and layers < i ≤ d because the event A fixes what happens in layers i 1 < i ≤. Mimicking the estimates, and the application of Proposition 10 and using independence, we get that: Finally, we compute: Under review as a conference paper at ICLR 2020 Combining this we obtain that E[∆ ww]/ x 4 2 is bounded above and below by constants times This completes the proof of Proposition 3, modulo the proofs of Lemmas 6-9, which we supply below. Fix an input x ∈ R n0 to N. We will continue to write as in y (i) for the vector of pre-activations as layer i corresponding to x. We need the following simple Lemma. Lemma 11. With probability 1, either there exists i so that we have y For each fixed Γ this event defines a co-dimension 1 set in the space of all the weights. Hence, since the joint distribution of the weights has a density with respect to Lebesgue measure (see just before), the union of this (finite number) of events has measure 0. This shows that on the even that y = 0 for every, y (i) j = 0 with probability 1. Taking the union over i, j completes the proof. Lemma 11 shows that for our fixed x, with probability 1, the derivative of each ξ (i) j in vanishes. Hence, almost surely, for any edge e in the computational graph of N: This proves the formulas for K N, K 2 N. 
To derive the for ∆K N, we write where the loss L on a single batch containing only x is 1 2 (N (x) − N * (x)) 2. We therefore find Using and again applying Lemma 11, we find that with probability 1 Under review as a conference paper at ICLR 2020 Thus, almost surely To complete the proof of Lemma 6 it therefore remains to check that this last term has mean 0. To do this, recall that the output layer of N is assumed to be linear and that the distribution of each weight is symmetric around 0 (and hence has vanishing odd moments). Thus, the expectation over the weights in layer d has either 1 or 3 weights in it and so vanishes. Lemma 7 is almost a corollary of of Theorem 3 in and Proposition 2 in. The difference is that, in; , the biases in N were assumed to have a non-degenerate distribution, whereas here we've set them to zero. The nondegeneracy assumption is not really necessary, so we repeat here the proof from To compute the inner expectation, write F j for the sigma algebra generated by the weight in layers up to and including j. Let us also define the events: where we recall from that x (j) are the post-activations in layer j. Supposing first that e is not in layer d, the expectation becomes We have Thus, the expectation in becomes Note that given F d−2, the pre-activations y. Recall that by assumption, the weight matrix. This replacement leaves the product. On the event S d−1 (which occurs whenever y = 0 with probability 1 since we assumed that the distribution of each weight has a density relative to Lebesgue measure. Hence, symmetrizing over ± W (d), we find that. Similarly, if e is in layer i, then we automatically find that γ 1 (i − 1) = γ 2 (i − 1) and γ 1 (i) = γ 2 (i), giving an expectation of 1/n i−1 1. Proceeding in this way yields which is precisely. The proofs of and are similar. We have As before let us first assume that edges e 1, e 2 are not in layer d. Then, The the inner expectation is 1 each weight appears an even number of times Again symmetrizing with respect to ± W (d) and using that the pre-activation of different neurons are independent given the activations in the previous layer we find that, on the event {y where L is the event that |Γ(d − 1)| = |Γ(d)| = 1 and e 1, e 2 are not in layer d − 1. Proceeding in this way one layer at a time completes the proofs of and. Fix j = 1,..., 4, edges e 1, e 2 with (e 1) ≤ (e 2) in the computational graph of N and E ∈ Σ 4 aj,even. The key idea is to decompose E into loops. To do this, define For each i = 1,..., d there exists unique k = 1,..., #loops(E) so that We will say that two layers i, j = 1,..., d belong to the same loop of E if exists k = 1,..., #loops(E) so that We proceed layer by layer to count the number of Γ ∈ Γ 4 aj,even satisfying Γ = a j and E Γ = E. To do this, suppose we are given Γ(i − 1) ∈ [n i−1] 4 and we have L(E(i)) = 2. Then Γ(i − 1) is some permutation of (α 1, α 1, α 2, α 2) with α 1 = α 2. Moreover, for j = 1, 2 there is a unique edge (with multiplicity 2) in E(i) whose left endpoint is α j. Therefore, Γ(i − 1) determines Γ(i) when L(E(i)) = 2. In contrast, suppose L(E(i)) = 1. If R(E(i)) = 1, then E(i) consists of a single edge with multiplicity 4, which again determines Γ(i − 1), Γ(i). In short, Γ(i) determines Γ(j) for all j belonging to the same loop of E as i. Therefore, the initial condition Γ = a j determines Γ(i) for all i ≤ i 1 and the conditions e 1 ∈ γ 1, e 2 ∈ γ 2 determine Γ in the loops of E containing the layers of e 1, e 2. Finally, suppose L(E(i)) = 1 and R(E(i)) = 2 (i.e. 
i = i k (E) for some k = 1,..., d) and that e 1, e 2 are not contained in the same loop of E layer i. Then all 4 2 = 6 choices of Γ(i) satisfy Γ(i) = R(E(i)), accounting for the factor of 6 #loops(E). The concludes the proof in the case j = 1. the only difference in the cases j = 2, 3, 4 is that if γ 1 = γ 2 (and hence γ 3 = γ 4), then since (e 1) ≤ (e 2) in order to satisfy e 1 ∈ γ 1, γ 2 we must have that i 1 (E) < (e 1). The proof of Lemma 9 is essentially identical to the proof of Lemma 8. In fact it is slightly simpler since there are no distinguished edges e 1, e 2 to consider. We omit the details. In this section, we seek to estimate The approach is essentially identical to but somewhat simpler than our proof of Proposition 3 in §B. We will therefore focus here on explaining the salient differences. Our starting point is the following analog of Lemma 6, which gives a sum-over-paths expression for the bias contribution K b to the neural tangent kernel. To state it, let us define, for any collection Z = (z 1, . . ., z k) ∈ Z k of k neurons in N 1 {y Z >0}:= k j=1 1 {yz j >0}, to be the event that the pre-activations of the neurons z k are positive. Lemma 12 (K b as a sum over paths). With probability 1, where Z 1, Γ 2 (Z,Z), wt(γ) are defined in §A. Further, almost surely, The proof of this is a small modification of the proof of Lemma 6 and hence is omitted. Taking expectations, we therefore obtain the following analog to Lemma 7. Moreover, E[K The proof is identical to the argument used in §B.2 to establish Lemma 7, so we omit the details. The relation is easy to simplify: where we used that the number paths from a neuron in layer to the output of N equals Proof. The proof of Lemma 14 is a simplified version of the computation of E[K 2 w] (starting around and ending at the end of the proof of Proposition 3). Specifically, note that for Γ = (γ 1, . . ., γ 4) ∈ Γ 4 Z,even with (z 1) ≤ (z 2), the delta functions 1 {γ 1(i)=γ2(i)} 1 {γ 1(i −1)=γ2(i −1)} in the definition of H(Γ) ensures that γ 1, γ 2 go through the same neuron in layer (z 2). To condition on the index of this neuron, we recall that we denote by z(j, β) neuron number β in layer j. We have where Z = (z( (z 2), β), z((z 2), β), z 2, z 2 ) and. Since the inner sum in is independent of β by symmetry, we find where Z = (1, 1, z 2, z 2). The inner sum in is now precisely one of the terms I j from without counting terms involving edges e 1, e 2, except that the paths start at neuron 1 in layer (z 2). The changes of variables from Γ ∈ Γ 4 even to E ∈ Σ 4 even to V ∈ Γ 2 that we used to estimate the I j's are no far simpler. In particular, Lemma 8 still holds but without any of the A(E, i 1, i 2), C(E, i 1, i 2), C(E, i 1, i 2) terms. Thus, we find that where for the second estimate we applied Lemma 9 and have written Z = (1, z 2). Thus, as in the derivation of, we find that
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SJgndT4KwB
The neural tangent kernel in a randomly initialized ReLU net has non-trivial fluctuations as long as the depth and width are comparable.
Most algorithms for representation learning and link prediction in relational data have been designed for static data. However, the data they are applied to usually evolves with time, such as friend graphs in social networks or user interactions with items in recommender systems. This is also the case for knowledge bases, which contain facts such as (US, has president, B. Obama) that are valid only at certain points in time. For the problem of link prediction under temporal constraints, i.e., answering queries of the form (US, has president, ?, 2012), we propose a solution inspired by the canonical decomposition of tensors of order 4. We introduce new regularization schemes and present an extension of ComplEx that achieves state-of-the-art performance. Additionally, we propose a new dataset for knowledge base completion constructed from Wikidata, larger than previous benchmarks by an order of magnitude, as a new reference for evaluating temporal and non-temporal link prediction methods. Link prediction in relational data has long been a subject of interest, given the widespread availability of such data and the breadth of its use in bioinformatics, recommender systems and Knowledge Base completion. Relational data is often temporal; for example, the action of buying an item or watching a movie is associated with a timestamp. Some medicines might not have the same adverse side effects depending on the subject's age. The task of temporal link prediction is to find missing links in graphs at precise points in time. In this work, we study temporal link prediction through the lens of temporal knowledge base completion, which provides varied benchmarks both in terms of the underlying data they represent and in terms of scale. A knowledge base is a set of facts (subject, predicate, object) about the world that are known to be true. Link prediction in a knowledge base amounts to answering incomplete queries of the form (subject, predicate, ?) by providing an accurate ranking of potential objects. In temporal knowledge bases, these facts have some temporal metadata attached. For example, facts might only hold for a certain time interval, in which case they will be annotated as such. Other facts might be events that happened at a certain point in time. Temporal link prediction amounts to answering queries of the form (subject, predicate, ?, timestamp). For example, we expect the ranking of queries (USA, president, ?, timestamp) to vary with the timestamp. As tensor factorization methods have proved successful for Knowledge Base Completion, we express our Temporal Knowledge Base Completion problem as an order 4 tensor completion problem. That is, timestamps are discretized and used to index a 4-th mode in the binary tensor holding (subject, predicate, object, timestamp) facts. First, we introduce a ComplEx decomposition of this order 4 tensor, and link it with previous work on temporal Knowledge Base completion. This decomposition yields embeddings for each timestamp. A natural prior is for these timestamp representations to evolve slowly over time. We are able to introduce this prior as a regularizer for which the optimum is a variation on the nuclear p-norm. In order to deal with heterogeneous temporal knowledge bases where a significant number of relations might be non-temporal, we add a non-temporal component to our decomposition. Experiments on available benchmarks show that our method outperforms the state of the art for a similar number of parameters.
We run additional experiments for larger, regularized models and obtain improvements of up to 0.07 absolute Mean Reciprocal Rank (MRR). Finally, we propose a dataset of 400k entities, based on Wikidata, with 7M train triples, of which 10% contain temporal validity information. This dataset is larger than the usual benchmarks in the Knowledge Base completion community and could help bridge the gap between the methods designed and the envisaged web-scale applications. Matrices and tensors are upper case letters. The i-th row of $U$ is denoted by $u_i$ while its j-th column is denoted by $U_{:,j}$. The tensor product of two vectors is written $\otimes$ and the Hadamard (elementwise) product $\odot$. Static link prediction methods: Standard tensor decomposition methods have led to good results in Knowledge Base completion. The Canonical Polyadic (CP) Decomposition is the tensor equivalent of the low-rank decomposition of a matrix. A tensor $X$ of canonical rank $R$ can be written as $X = \sum_{r=1}^{R} U_{:,r} \otimes V_{:,r} \otimes W_{:,r}$. Setting $U = W$ leads to the Distmult model, which has been successful, despite only being able to represent symmetric score functions. In order to keep the parameter sharing scheme but go beyond symmetric relations, ComplEx uses complex parameters and sets $W$ to the complex conjugate of $U$, $\bar{U}$. Regularizing this algorithm with the variational form of the tensor nuclear norm, as well as a slight transformation of the learning objective, leads to state-of-the-art results. Other methods are not directly inspired from classical tensor decompositions. For example, TransE models the score as a distance of the translated subject to an object representation. This method has led to many variations, but is limited in the relation systems it can model and does not lead to state-of-the-art performance on current benchmarks. Another line of work proposes to generate the entity embeddings of a CP-like tensor decomposition by running a forward pass of a Graph Neural Network over the training Knowledge Base; the experiments included in that work did not lead to better link prediction performance than the same decomposition (Distmult) directly optimized. Temporal link prediction methods: Earlier work describes a Bayesian model and learning method for representing temporal relations. The temporal smoothness prior used in that work is similar to the gradient penalty we describe in Section 3.3. However, learning one embedding matrix per timestamp is not applicable to the scales considered in this work. Another approach uses a tensor decomposition called ASALSAN to express temporal relations. This decomposition is related to RESCAL, which underperforms on recent benchmarks due to overfitting. For temporal knowledge base completion, one method learns entity embeddings that change over time, by masking a fraction of the embedding weights with an activation function of learned frequencies. Based on the Tucker decomposition, ConT learns one new core tensor for each timestamp. Finally, viewing the time dimension as a sequence to be predicted, García-Durán et al. use recurrent neural nets to transform the embeddings of standard models such as TransE or Distmult to accommodate the temporal data. We approach temporal knowledge base completion by studying and extending a regularized CP decomposition of the training set seen as an order 4 tensor. We propose and study several regularizers suited to our decompositions. In this section, we are given facts (subject, predicate, object) annotated with timestamps; we discretize the timestamp range (e.g.,
by reducing timestamps to years) in order to obtain a training set of 4-tuples (subject, predicate, object, time) indexing an order 4 tensor. We will show in Section 5.1 how we reduce each dataset to this setting. Following prior work, we minimize, for each of the train tuples $(i, j, k, l)$, the instantaneous multiclass loss $\ell(\hat{X}; (i,j,k,l)) = -\hat{X}_{i,j,k,l} + \log\big(\sum_{k'} \exp(\hat{X}_{i,j,k',l})\big)$. Note that this loss is only suited to queries of the type (subject, predicate, ?, time), which are the queries that were considered in related work. We consider another auxiliary loss in Section 6 which we will use on our Wikidata dataset. For a training set $S$ (augmented with reciprocal relations) and a parametric tensor estimate $\hat{X}(\theta)$, we minimize the following objective, with a weighted regularizer $\Omega$: $\frac{1}{|S|} \sum_{(i,j,k,l) \in S} \big[ \ell(\hat{X}(\theta); (i,j,k,l)) + \lambda\, \Omega(\theta; (i,j,k,l)) \big]$. The ComplEx decomposition can naturally be extended to this setting by adding a new factor $T$; we then have $\hat{X}_{i,j,k,l} = \mathrm{Re}\big( \langle u_i, v_j, \bar{u}_k, t_l \rangle \big)$. We call this decomposition TComplEx. Intuitively, we added timestamp embeddings that modulate the multi-linear dot product. Notice that the timestamp can be used to equivalently modulate the objects, predicates or subjects to obtain time-dependent representations: $\langle u_i \odot t_l, v_j, \bar{u}_k \rangle = \langle u_i, v_j \odot t_l, \bar{u}_k \rangle = \langle u_i, v_j, \bar{u}_k \odot t_l \rangle$. Contrary to DE-SimplE, we do not learn temporal embeddings that scale with the number of entities (as frequencies and biases), but rather embeddings that scale with the number of timestamps. The number of parameters for the two models are compared in Table 1. Some predicates might not be affected by timestamps. For example, Malia and Sasha will always be the daughters of Barack and Michelle Obama, whereas the "has occupation" predicate between two entities might very well change over time. In heterogeneous knowledge bases, where some predicates might be temporal and some might not be, we propose to decompose the tensor $\hat{X}$ as the sum of two tensors, one temporal and the other non-temporal: $\hat{X}_{i,j,k,l} = \mathrm{Re}\big( \langle u_i, v_j^{t}, \bar{u}_k, t_l \rangle + \langle u_i, v_j, \bar{u}_k, \mathbf{1} \rangle \big)$. We call this decomposition TNTComplEx. Prior work suggests another way of introducing a non-temporal component, by only allowing a fraction γ of the components of the embeddings to be modulated in time. By allowing this sharing of parameters between the temporal and non-temporal parts of the tensor, our model removes one hyperparameter. Moreover, preliminary experiments showed that this model outperforms one without parameter sharing. Any order 4 tensor can be considered as an order 3 tensor by unfolding modes together. For a tensor $X \in \mathbb{R}^{N_1 \times N_2 \times N_3 \times N_4}$, unfolding modes 3 and 4 together will lead to a tensor $\tilde{X} \in \mathbb{R}^{N_1 \times N_2 \times N_3 N_4}$. We can see both decompositions as order 3 tensors by unfolding the temporal and predicate modes together. Considering the decomposition implied by these unfoldings (see Appendix 8.1) leads us to the following weighted regularizers. The first regularizer weights objects, predicates and pairs (predicate, timestamp) according to their respective marginal probabilities. This regularizer is a variational form of the weighted nuclear 3-norm on an order 4 tensor (see subsection 3.4 and Appendix 8.3 for details and proof). The second regularizer is the sum of the nuclear 3 penalties on the two unfolded tensors. We have more a priori structure on the temporal mode than on the others. Notably, we expect smoothness of the map $i \mapsto t_i$. In words, we expect neighboring timestamps to have close representations. Thus, we penalize the norm of the discrete derivative of the temporal embeddings: $\Lambda_p(T) = \frac{1}{L-1} \sum_{i=1}^{L-1} \| t_{i+1} - t_i \|_p^p$. We show in Appendix 8.2 that the sum of $\Lambda_p$ and the variational form of the nuclear p-norm leads to a variational form of a new tensor atomic norm. As was done in prior work, we aim to use tensor nuclear p-norms as regularizers.
The definition of the nuclear p-norm of a tensor of order $D$ is $\|X\|_{p}^{*} = \min\big\{ \sum_{r=1}^{R} \prod_{d=1}^{D} \|u_r^{(d)}\|_p \,:\, X = \sum_{r=1}^{R} u_r^{(1)} \otimes \cdots \otimes u_r^{(D)} \big\}$. This formulation of the nuclear p-norm writes a tensor as a sum over atoms which are the rank-1 tensors of unit p-norm factors. The nuclear p-norm is NP-hard to compute. A practical solution is to use the equivalent formulation of the nuclear p-norm through its variational form, which can be conveniently written for $p = D$ as $\|X\|_{D}^{*} = \min \frac{1}{D} \sum_{r=1}^{R} \sum_{d=1}^{D} \|u_r^{(d)}\|_D^D$, the minimum being taken over decompositions $X = \sum_{r=1}^{R} u_r^{(1)} \otimes \cdots \otimes u_r^{(D)}$. For the equality above to hold, the infimum should be over all possible $R$. The practical solution is to fix $R$ to the desired rank of the decomposition. Using this variational formulation as a regularizer leads to state-of-the-art results for order-3 tensors and is convenient in a stochastic gradient setting because it separates over each model coefficient. In addition, this formulation makes it easy to introduce a weighting as recommended in previous work: in order to learn under non-uniform sampling distributions, one should penalize the weighted norm. In subsection 3.3, we add another penalty which changes the norm of our atoms. In subsection 3.2, we introduced another variational form which allows us to easily penalize the nuclear 3-norm of an order 4 tensor. This regularizer leads to a different weighting. By considering the unfolding of the timestamp and predicate modes, we are able to weight according to the joint marginal of timestamps and predicates, rather than by the product of the marginals. This can be an important distinction if the two are not independent. We study the impact of regularization on the ICEWS05-15 dataset, for the TNTComplEx model. For details on the experimental set-up, see Section 5.1. The first effect we want to quantify is the effect of the regularizer $\Lambda_p$. We run a grid search for the strength of both $\Lambda_p$ and $\Omega_3$ and plot the convex hull as a function of the temporal regularization strength. As shown in Figure 1, imposing smoothness along the time mode brings an improvement of over 2 MRR points. The second effect we wish to quantify is the effect of the choice of regularizer $\Omega$. A natural regularizer for TNTComplEx would be the weighted nuclear 4-norm penalty $\Delta_4$. We compare $\Delta_4$, $\Delta_3$ and $\Delta_2$ with $\Omega_3$. The comparison is done with a temporal regularizer of 0 to reduce the experimental space. $\Delta_2$ is the common weight decay frequently used in deep learning. Such regularizers have been used in knowledge base completion; however, it has been shown that the infimum of this penalty is non-convex over tensors. $\Delta_3$ matches the order used in the $\Omega_3$ regularizer and in previous work on knowledge base completion. However, by the same arguments, its minimization does not lead to a convex penalty over tensors. There are two differences between $\Delta_4$ and $\Omega_3$. First, whereas the former is a variational form of the nuclear 4-norm, the latter is a variational form of the nuclear 3-norm, which is closer to the nuclear 2-norm. Results for exact recovery of tensors have been generalized to the nuclear 2-norm, and to the best of our knowledge, there has been no formal study of generalization properties or exact recovery under the nuclear p-norm for p greater than two. Second, the weighting in $\Delta_4$ is done separately over timestamps and predicates, whereas it is done jointly for $\Omega_3$. This leads to using the joint empirical marginal as a weighting over timestamps and predicates. The impact of weighting on the guarantees that can be obtained is described more precisely in prior work. The contributions of all these regularizers over a non-regularized model are summarized in Table 3. Note that careful regularization leads to a 0.05 MRR increase.
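To make the scoring function and the temporal regularizer concrete, here is a minimal numpy sketch of a TNTComplEx-style score and the smoothness penalty $\Lambda_p$. All shapes, variable names and the modulus-based treatment of complex entries are our own illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_ent, n_pred, n_time, rank = 50, 10, 20, 8

# Complex embeddings (a trained model would learn these by gradient descent).
U  = rng.standard_normal((n_ent, rank))  + 1j * rng.standard_normal((n_ent, rank))   # entities
V  = rng.standard_normal((n_pred, rank)) + 1j * rng.standard_normal((n_pred, rank))  # temporal predicates
Vs = rng.standard_normal((n_pred, rank)) + 1j * rng.standard_normal((n_pred, rank))  # non-temporal predicates
T  = rng.standard_normal((n_time, rank)) + 1j * rng.standard_normal((n_time, rank))  # timestamps

def tntcomplex_scores(s, p, t):
    """Scores of all candidate objects for a query (s, p, ?, t):
    Re(<u_s, v_p o t_t, conj(u_o)> + <u_s, w_p, conj(u_o)>)."""
    lhs_temporal = U[s] * V[p] * T[t]   # temporal component, modulated by the timestamp
    lhs_static = U[s] * Vs[p]           # non-temporal component, shared across time
    return np.real((lhs_temporal + lhs_static) @ np.conj(U).T)  # one score per object

def temporal_smoothness(T, p=3):
    """Lambda_p: penalty on the discrete derivative of timestamp embeddings."""
    diffs = T[1:] - T[:-1]
    return np.sum(np.abs(diffs) ** p) / (len(T) - 1)

scores = tntcomplex_scores(s=3, p=1, t=7)  # used to rank candidate objects
print(scores.shape, temporal_smoothness(T))
```

In a full model these embeddings would be trained with the multiclass loss above; the sketch only shows how the temporal factor modulates the predicate and how the smoothness penalty couples consecutive timestamp embeddings.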
A dataset based on Wikidata was proposed by García-Durán et al. However, upon inspection, this dataset contains numerical data as entities, such as ELO rankings of chess players, which are not representative of practically useful link prediction problems. Also, in this dataset, temporal information is specified in the form of "OccursSince" and "OccursUntil" statements appended to triples, which becomes unwieldy when a predicate holds for several intervals in time. Moreover, this dataset contains only 11k entities and 150k triples, which is insufficient to benchmark methods at scale. In order to address these limitations, we created our own dataset from Wikidata, which we make available at dataseturl. Starting from Wikidata, we removed all entities that were instances of scholarly articles, proteins and others. We also removed disambiguation, template, category and project pages from Wikipedia. Then, we removed all facts for which the object was not an entity. We iteratively filtered the data, keeping only entities with degree at least 5 and predicates with at least 50 occurrences. With this method, we obtained a dataset of 432715 entities, 407 predicates and 1724 timestamps (we only kept the years). Each datum is a triple (subject, predicate, object) together with a timestamp range (begin, end) where begin, end or both can be unspecified. Our train set contains 7M such tuples, with about 10% partially specified temporal tuples. We kept a validation and test set of size 50k each. At train and test time, for a given datum (subject, predicate, object, [begin, end]), we sample a timestamp (appearing in the dataset) uniformly at random, in the range [begin, end]. For data without a temporal range, we sample over the maximum date range. Then, we rank the objects for the partial query (subject, predicate, ?, timestamp). We follow the experimental set-up of García-Durán et al. We use the models from García-Durán et al. and DE-SimplE as baselines since they are the best performing algorithms on the datasets considered. We report the filtered Mean Reciprocal Rank (MRR) defined in Nickel et al. (2016b). In order to obtain comparable results, we use Table 1 and dataset statistics to compute, for each (model, dataset) pair, the rank that matches the number of parameters used by the baselines. We also report results at ranks 10 times higher. This higher-rank set-up gives an estimation of the best possible performance attainable on these datasets, even though the dimension used might be impractical for applied systems. All our models are optimized with Adagrad, with a learning rate of 0.1 and a batch-size of 1000. More details on the grid-search, actual ranks used and hyper-parameters are given in Appendix 8. Results for TA (García-Durán et al., 2018) and DE-SimplE are the best numbers reported in the respective papers. Our models have as many parameters as DE-SimplE. Numbers in parentheses are for ranks multiplied by 10. We give results on 3 datasets previously used in the literature: ICEWS14, ICEWS05-15 and Yago15k. The ICEWS datasets are samplings from the Integrated Conflict Early Warning System (ICEWS). García-Durán et al. introduced two subsamplings of this data: ICEWS14, which contains all events occurring in 2014, and ICEWS05-15, which contains events occurring between 2005 and 2015. These datasets immediately fit in our framework, since the timestamps are already discretized. The Yago15K dataset (García-Durán et al., 2018) is a modification of FB15k which adds "occursSince" and "occursUntil" timestamps to each triple.
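The timestamp-sampling protocol described above is simple to state in code. A minimal sketch, assuming the timestamp range is discretized to years; the bounds and all names are illustrative, not taken from the released dataset:

```python
import random

MIN_YEAR, MAX_YEAR = 1000, 2020  # assumed bounds of the discretized timestamp range

def sample_timestamp(begin=None, end=None):
    """Sample a train/test timestamp for a datum annotated with a
    (possibly partially specified) validity range [begin, end]."""
    lo = begin if begin is not None else MIN_YEAR  # unspecified begin -> earliest year
    hi = end if end is not None else MAX_YEAR      # unspecified end -> latest year
    return random.randint(lo, hi)

# A fact valid from 2009 to 2017 yields queries (s, p, ?, t) with t in that range.
print(sample_timestamp(2009, 2017))
print(sample_timestamp(begin=1999))  # open-ended interval
```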
Following the evaluation setting of García-Durán et al., during evaluation, the incomplete triples to complete are of the form (subject, predicate, ?, occursSince | occursUntil, timestamp) (with reciprocal predicates). Rather than dealing with tensors of order 5, we choose to unfold the (occursSince, occursUntil) and predicate modes together, multiplying the predicate mode's size by two. Some relations in Wikidata are highly unbalanced (e.g., (?, InstanceOf, Human)). For such relations, a ranking evaluation would not make much sense. Instead, we only compute the Mean Reciprocal Rank for missing right-hand sides, since the data is such that highly unbalanced relations happen on the left-hand side. However, we follow the same training scheme as for all the other datasets, including reciprocal relations in the training set. The cross-entropy loss evaluated on 400k entities puts a restriction on the dimensionality of embeddings at about d = 100 for a batch-size of 1000. We leave sampling of this loss, which would allow for higher dimensions, to future work. We compare ComplEx with the temporal versions described in this paper. We report results in Table 2. Note that ComplEx has performances that are stable through a tenfold increase of its number of parameters: a rank of 100 is enough to capture the static information of these datasets. For temporal models however, the performance increases a lot with the number of parameters. It is always beneficial to allow a separate modeling of non-temporal predicates, as the performances of TNTComplEx show. Finally, our models match or beat the state of the art on all datasets, even at an identical number of parameters. Since these datasets are small, we also report results for higher ranks (10 times the number of parameters used for DE-SimplE). On our Wikidata dataset, since only about 10% of the triples carry temporal information, the non-temporal MRR outweighs the Temporal MRR (T-MRR). A breakdown of the performances is available in Table 4. TNTComplEx obtains performances that are comparable to ComplEx on non-temporal triples, but are better on temporal triples. Moreover, TNTComplEx can minimize the temporal cross-entropy and is thus more flexible on the queries it can answer. Training TNTComplEx on Wikidata with a rank of d = 100 with the full cross-entropy on a Quadro GP100, we obtain a speed of 5.6k triples per second, leading to an experiment time of 7.2 hours. This is to be compared with 5.8k triples per second when training ComplEx, for an experiment time of 6.9 hours. The additional complexity of our model does not lead to any real impact on runtime, which is dominated by the computation of the cross-entropy over 400k entities. The instantaneous loss described in Equation 1, along with the timestamp sampling scheme described in the previous section, only enforces correct rankings along the "object" tubes of our order-4 tensor. In order to enforce a stronger temporal consistency, and be able to answer queries of the type (subject, predicate, object, ?), we propose another cross-entropy loss along the temporal tubes: $\tilde{\ell}(\hat{X}; (i,j,k,l)) = -\hat{X}_{i,j,k,l} + \log\big(\sum_{l'} \exp(\hat{X}_{i,j,k,l'})\big)$. We optimize the sum of $\ell$ defined in Equation 1 and $\tilde{\ell}$ defined in Equation 7. Doing so, we only lose 1 MRR point overall. However, we make our model better at answering queries along the time axis. The macro area under the precision-recall curve is 0.92 for a TNTComplEx model learned with $\ell$ alone and 0.98 for a TNTComplEx model trained with $\ell + \tilde{\ell}$. We plot in Figure 2 the scores along time for the train triples (president of the French republic, office holder, {Jacques Chirac | Nicolas Sarkozy | François Hollande | Emmanuel Macron}).
The periods where a score is highest match closely the ground-truth start and end dates of these presidents' mandates, which are represented as colored backgrounds. This shows that our models are able to learn rankings that are correct along time intervals despite our training method only ever sampling timestamps within these intervals. Tensor methods have been successful for Knowledge Base completion. In this work, we suggest an extension of these methods to Temporal Knowledge Bases. Our methodology adapts well to the various forms of these datasets: point-in-time, beginnings and endings, or intervals. We show that our methods reach higher performances than the state of the art for a similar number of parameters. For several datasets, we also provide performances for higher dimensions. We hope that the gap between low-dimensional and high-dimensional models can motivate further research in models that have increased expressivity at a lower number of parameters per entity. Finally, we propose a large scale temporal dataset which we believe represents the challenges of large scale temporal completion in knowledge bases. We give performances of our methods for low ranks on this dataset. We believe that, given its scale, this dataset could also be an interesting addition to non-temporal knowledge base completion. Unfolding along modes 3 and 4 leads to an order three tensor whose decomposition has factors $U$, $V$ and $W \bullet T$, where $\bullet$ is the Khatri-Rao product, which is the column-wise Kronecker product. Note that for a fourth mode of size $L$, this structure justifies the regularizers used in Section 3.2. Consider the penalty $\Omega(U, V, W, T)$. Since $\| \cdot \|_{\tau_4}$ is a norm, it lets us rewrite this penalty in atomic form. Following a proof which only uses the homogeneity of the norms, we can show that $\Omega(U, V, W, T)$ is a variational form of an atomic norm whose atoms are rank-1 tensors with unit-norm factors. We next consider the weighted regularizer. Let $D_{\mathrm{subj}}$ (resp. $D_{\mathrm{obj}}$, $D_{\mathrm{pred/time}}$) be the diagonal matrix containing the cube roots of the marginal probabilities of each subject (resp. object, predicate/time pair) in the dataset. We denote by $\bullet$ the Khatri-Rao product between two matrices (the column-wise Kronecker product). Summing over the entire dataset, we obtain the weighted penalty. Dropping the weightings to simplify notation, we state the equivalence between this regularizer and a variational form of the nuclear 3-norm of an order 4 tensor; the proof follows the same homogeneity argument. Statistics of all the datasets used in this work are gathered in Table 6.
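To make the mode-(3,4) unfolding of Appendix 8.1 concrete, here is a small numpy check (shapes and names are our own illustrative choices) that unfolding an order-4 CP tensor over its last two modes yields an order-3 CP tensor whose third factor is the Khatri-Rao product $W \bullet T$:

```python
import numpy as np

rng = np.random.default_rng(1)
N1, N2, N3, N4, R = 4, 5, 3, 6, 2
U, V, W, T = (rng.standard_normal((n, R)) for n in (N1, N2, N3, N4))

# Order-4 CP tensor: X[i,j,k,l] = sum_r U[i,r] V[j,r] W[k,r] T[l,r]
X = np.einsum("ir,jr,kr,lr->ijkl", U, V, W, T)

# Unfold modes 3 and 4 together: X~ in R^{N1 x N2 x (N3*N4)}
X_unfolded = X.reshape(N1, N2, N3 * N4)

# Column-wise Kronecker (Khatri-Rao) product of W and T: shape (N3*N4, R)
WT = np.einsum("kr,lr->klr", W, T).reshape(N3 * N4, R)

# The unfolded tensor is an order-3 CP tensor with factors U, V and W•T.
X_check = np.einsum("ir,jr,mr->ijm", U, V, WT)
print(np.allclose(X_unfolded, X_check))  # True
```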
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rke2P1BFwS
We propose new tensor decompositions and associated regularizers to obtain state-of-the-art performance on temporal knowledge base completion.
The conventional approach to solving the recommendation problem greedily ranks individual document candidates by prediction scores. However, this method fails to optimize the slate as a whole, and hence often struggles to capture biases caused by the page layout and document interdependencies. The slate recommendation problem aims to directly find the optimally ordered subset of documents (i.e. slates) that best serve users' interests. Solving this problem is hard due to the combinatorial explosion of document candidates and their display positions on the page. Therefore we propose a paradigm shift from the traditional viewpoint of solving a ranking problem to a direct slate generation framework. In this paper, we introduce List Conditional Variational Auto-Encoders (List-CVAE), which learn the joint distribution of documents on the slate conditioned on user responses, and directly generate full slates. Experiments on simulated and real-world data show that List-CVAE outperforms greedy ranking methods consistently on various scales of document corpora. Recommender systems modeling is an important machine learning area in the IT industry, powering online advertisement, social networks and various content recommendation services BID0. In the context of document recommendation, its aim is to generate and display an ordered list of "documents" to users (called a "slate" in BID2; BID3), based on both user preferences and document content. For large scale recommender systems, a common scalable approach at inference time is to first select a small subset of candidate documents S out of the entire document pool D. This step is called "candidate generation". Then a function approximator such as a neural network (e.g., a Multi-Layer Perceptron (MLP)) called the "ranking model" is used to predict probabilities of user engagement for each document in the small subset S and greedily generates a slate by sorting the top documents from S based on estimated prediction scores BID4. This two-step process is widely popular for solving large scale recommendation problems due to its scalability and fast inference at serving time. The candidate generation step can decrease the number of candidates from millions to hundreds or less, effectively dealing with scalability when faced with a large corpus of documents D. Since |S| is much smaller than |D|, the ranking model can be reasonably complicated without increasing latency. However, there are two main problems with this approach. First, the candidate generation and the ranking models are not trained jointly, which can lead to having candidates in S that are not the highest scoring documents of the ranking model. Second and most importantly, the greedy ranking method suffers from numerous biases that come with the visual presentation of the slate and the context in which documents are presented, both at training and serving time. For example, there exist positional biases caused by users paying more attention to prominent slate positions BID5, and contextual biases due to interactions between documents presented together in the same slate, such as competition and complementarity, relative attractiveness, etc. In this paper, we propose a paradigm shift from the traditional viewpoint of solving a ranking problem to a direct slate generation framework. We consider a slate "optimal" when it maximizes some type of user engagement feedback, a typical desired scenario in recommender systems.
For example, given a database of song tracks, the optimal slate can be an ordered list (in time or space) of k songs such that the user ideally likes every song in that list. Another example considers news articles: the optimal slate has k ordered articles such that every article is read by the user. In general, optimality can be defined as a desired user response vector on the slate, and the proposed model should be agnostic to these problem-specific definitions. Solving the slate recommendation problem by direct slate generation differs from ranking in that, first, the entire slate is used as a training example instead of single documents, preserving the numerous biases encoded into the slate that might influence user responses. Secondly, it does not assume that more relevant documents should necessarily be put in earlier positions in the slate at serving time. Our model directly generates slates, taking into account all the relevant biases learned through training. In this paper, we apply Conditional Variational Auto-Encoders (CVAEs) BID7 BID8 to model the distributions of all documents in the same slate conditioned on the user response. All documents in a slate, along with their positional and contextual biases, are jointly encoded into the latent space, which is then sampled and combined with the desired conditioning for direct slate generation, i.e. sampling from the learned conditional joint distribution. Therefore, the model first learns which slates give which type of responses and then directly generates similar slates given a desired response vector as the conditioning at inference time. We call our proposed model List-CVAE. The key contributions of our work are: 1. To the best of our knowledge, this is the first model that provides a conditional generative modeling framework for slate recommendation by direct generation. It does not necessarily require a candidate generator at inference time and is flexible enough to work with any visual presentation of the slate as long as the ordering of display positions is fixed throughout training and inference times. 2. To deal with the problem at scale, we introduce an architecture that uses pretrained document embeddings combined with a negatively downsampled k-head softmax layer within the List-CVAE model, where k is the slate size. The structure of this paper is the following. First we introduce related work using various CVAE-type models as well as other approaches to solve the slate generation problem. Next we introduce our List-CVAE modeling approach. The last part of the paper is devoted to experiments on both simulated and real-world datasets. 2 RELATED WORK Traditional matrix factorization techniques have been applied to recommender systems with success in modeling competitions such as the Netflix Prize BID10. Later research emerged on using autoencoders to improve on the results of matrix factorization BID11 (CDAE, CDL). More recently several works use Boltzmann Machines BID13 and variants of VAE models in the Collaborative Filtering (CF) paradigm to model recommender systems BID14 BID15 BID16 (Collaborative VAE, JMVAE, CVAE-CF, JVAE-CF). See FIG0 for model structure comparisons. In this paper, unless specified otherwise, the user features and any context are routinely considered part of the conditioning variables (in Appendix A Personalization Test, we test List-CVAE generating personalized slates for different users).
These models have primarily focused on modeling individual documents or pairs of documents in the slate and applying greedy ordering at inference time. Our model also uses a VAE-type structure and, in particular, is closely related to the Joint Multimodal Variational Auto-Encoder (JMVAE) architecture (FIG0). However, we use whole slates as input instead of single documents, and directly generate slates instead of using greedy ranking by prediction scores. Other relevant work from the Information Retrieval (IR) literature are listwise ranking methods BID17 BID18 BID19 BID20 BID21. These methods use listwise loss functions that take the contexts and positions of training examples into account. However, they eventually assign a prediction score to each document and greedily rank them at inference time. In the Reinforcement Learning (RL) literature, BID3 view whole slates as actions and use a deterministic policy gradient update to learn a policy that generates these actions, given concatenated document features as input. Finally, the framework proposed by BID22 predicts user engagement for document and position pairs. It optimizes whole page layouts at inference time but may suffer from poor scalability due to the combinatorial explosion of all possible document-position pairs. We formally define the slate recommendation problem as follows. Let D denote a corpus of documents and let k be the slate size. Then let $r = (r_1, \dots, r_k)$ be the user response vector, where $r_i \in \mathbb{R}$ is the user's response on document $d_i$. For example, if the problem is to maximize the number of clicks on a slate, then let $r_i \in \{0, 1\}$ denote whether the document $d_i$ is clicked, and thus an optimal slate is one that maximizes $\sum_{i=1}^{k} r_i$. Variational Auto-Encoders (VAEs) are latent-variable models that define a joint density $P_\theta(x, z)$ between observed variables x and latent variables z, parametrized by a vector θ. Training such models requires marginalizing the latent variables in order to maximize the data likelihood $P_\theta(x) = \int P_\theta(x, z)\,dz$. Since we cannot solve this marginalization explicitly, we resort to a variational approximation. For this, a variational posterior density $Q_\phi(z|x)$ parametrized by a vector φ is introduced and we optimize the variational Evidence Lower-Bound (ELBO) on the data log-likelihood: $\log P_\theta(x) \ge \mathbb{E}_{Q_\phi(z|x)}\big[\log P_\theta(x|z)\big] - \mathrm{KL}\big(Q_\phi(z|x) \,\|\, P_\theta(z)\big)$, where KL is the Kullback-Leibler divergence and where $P_\theta(z)$ is a prior distribution over latent variables. In a Conditional VAE (CVAE) we extend the distributions $P_\theta(x, z)$ and $Q_\phi(z|x)$ to also depend on an external condition c. The corresponding distributions are indicated by $P_\theta(x, z|c)$ and $Q_\phi(z|x, c)$. Taking the conditioning c into account, we can write the variational loss to minimize as $\mathcal{L}(x, c; \theta, \phi) = -\mathbb{E}_{Q_\phi(z|x,c)}\big[\log P_\theta(x|z,c)\big] + \mathrm{KL}\big(Q_\phi(z|x,c) \,\|\, P_\theta(z|c)\big)$. We assume that the slates $s = (d_1, d_2, \dots, d_k)$ and the user response vectors r are jointly drawn from a distribution $P_{D^k \times \mathbb{R}^k}$. In this paper, we use a CVAE to model the joint distribution of all documents in the slate conditioned on the user responses. In the model architecture (see the corresponding figure), $c = \Phi(r)$ is the conditioning vector, where $r = (r_1, r_2, \dots, r_k)$ is the user responses on the slate s. The concatenation of s and c makes the input vector to the encoder. The latent variable $z \in \mathbb{R}^m$ has a learned prior distribution $\mathcal{N}(\mu_0, \sigma_0)$. The raw outputs from the decoder are k vectors $x_1, x_2, \dots, x_k$, each of which is mapped to a real document through taking the dot product with the matrix containing all document embeddings. The k vectors of logits thus produced are then passed to the negatively downsampled k-head softmax operation.
At inference time, c is the ideal condition whose concatenation with the sampled z forms the input to the decoder. Concretely, the model captures $P(d_1, d_2, \dots, d_k | r)$, the joint distribution of the documents in the slate conditioned on the user responses r. The List-CVAE model then attempts to generate an optimal slate by conditioning on the ideal user response r. As we explained in Section 1, "optimality" of a slate depends on the task. With that in mind, we define the mapping $\Phi: \mathbb{R}^k \to \mathcal{C}$. It transforms a user response vector r into a vector in the conditioning space $\mathcal{C}$ that encodes the user engagement metric we wish to optimize for. For instance, if we want to maximize clicks on the slate, we can use the binary click response vectors and set the conditioning to $c = \Phi(r) := \sum_{i=1}^{k} r_i$. Then at inference time, the corresponding ideal user response r would be $(1, 1, \dots, 1)$, and correspondingly the ideal conditioning would be $c = \Phi(r) = \sum_{i=1}^{k} 1 = k$. As usual with CVAEs, the decoder models a distribution $P_\theta(s|z, c)$ that, conditioned on z, is easy to represent. In our case, $P_\theta(s|z, c)$ models an independent probability for each document on the slate, represented by a softmax distribution. Note that the documents are only independent of each other conditional on z. In fact, the marginalized posterior $P_\theta(s|c) = \int_z P_\theta(s|z, c) P_\theta(z|c)\,dz$ can be arbitrarily complex. When the encoder encodes the input slate s into the latent space, it learns the joint distribution of the k documents in a fixed order, and thus also encodes any contextual, positional biases between documents and their respective positions into the latent variable z. The decoder learns these biases through reconstruction of the input slates from latent variables z with conditions. At inference time, the decoder reproduces the input slate distribution from the latent variable z with the ideal conditioning, taking into account all the biases learned during training time. Figure 3: Predictive prior distribution of the latent variable z in $\mathbb{R}^2$, conditioned on the ideal user response $c = (1, 1, \dots, 1)$. The color map corresponds to the expected total responses of the corresponding slates. Plots are generated from the simulation experiment with 1000 documents and slate size 10. To shed light onto what is encoded in the latent space, we simplify the prior distribution of z to be a fixed Gaussian distribution $\mathcal{N}(0, I)$ in $\mathbb{R}^2$. We train List-CVAE and plot the predictive prior z. As training evolves, generated output slates with low total responses are pushed towards the edge of the latent space while high response slates cluster towards a growing center area (Figure 3). Therefore after training, if we sample z from its prior distribution $\mathcal{N}(0, I)$ and generate the corresponding output slates, they are more likely to have high total responses. Since the number of documents in D can be large, we first embed the documents into a low dimensional space. Let $\Psi: D \to S^{q-1}$ be that normalized embedding, where $S^{q-1}$ denotes the unit sphere in $\mathbb{R}^q$. Ψ can easily be pretrained using a standard supervised model that predicts user responses from documents, or through a standard auto-encoder technique. For the i-th document in the slate, our model produces a vector $x_i$ in $\mathbb{R}^q$ that is then matched to each document from D via a dot-product. This operation produces k vectors of logits for k softmaxes, i.e. the k-head softmax.
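A minimal numpy sketch of this dot-product matching and the k-head softmax, including the negative downsampling used at training time described next. All shapes, the shared candidate set across heads, and variable names are our own illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n_docs, q, k, n_neg = 1000, 16, 10, 50

# Pretrained, L2-normalized document embeddings Psi (rows on the unit sphere).
Psi = rng.standard_normal((n_docs, q))
Psi /= np.linalg.norm(Psi, axis=1, keepdims=True)

def k_head_logits(decoder_out, doc_ids=None):
    """decoder_out: (k, q) raw decoder vectors x_1..x_k.
    Returns one row of logits per slate position (the k softmax heads).
    At training time, doc_ids restricts scoring to positives + sampled negatives."""
    emb = Psi if doc_ids is None else Psi[doc_ids]
    return decoder_out @ emb.T  # (k, n_docs) or (k, len(doc_ids))

x = rng.standard_normal((k, q))  # stand-in for decoder outputs

# Training: score each position against the slate's positives + random negatives
# (here one shared candidate set for all k heads, for simplicity).
positives = rng.integers(0, n_docs, size=k)
negatives = rng.integers(0, n_docs, size=n_neg)
train_logits = k_head_logits(x, np.concatenate([positives, negatives]))

# Inference: argmax over the full corpus, independently for each position.
slate = k_head_logits(x).argmax(axis=1)  # (k,) document indices
print(train_logits.shape, slate)
```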
At training time, for large document corpora D, we uniformly randomly downsample negative documents and compute only a small subset of the logits for every training example, therefore efficiently scaling the nearest neighbor search to millions of documents with minimal model quality loss. We train this model as a CVAE by minimizing the sum of the reconstruction loss and the KL-divergence term: $\mathcal{L} = -\mathbb{E}_{Q_\phi(z|s,c)}\big[\log P_\theta(s|z,c)\big] + \beta\, \mathrm{KL}\big(Q_\phi(z|s,c) \,\|\, P_\theta(z|c)\big)$, where β is a function of the training step BID23. During inference, output slates are generated by first sampling z from the conditionally learned prior distribution $\mathcal{N}(\mu, \sigma)$, concatenating it with the ideal condition $c = \Phi(r)$, passing the result into the decoder, generating $(x_1, \dots, x_k)$ from the learned $P_\theta(s|z, c)$, and finally taking the argmax over the dot products with the full embedding matrix independently for each position i = 1, ..., k. In our simulation experiments, the user response on each document is sampled as $r_i \sim \mathcal{B}(p_i)$ for i = 1, ..., k, where $\mathcal{B}$ represents the Bernoulli distribution and the click probability $p_i$ is determined by the simulation engine. During training, all models see uniformly randomly generated slates $s \sim \mathcal{U}(\{1, \dots, n\}^k)$ and their generated responses r. During inference time, we generate slates s by conditioning on c = (1, ..., 1). We do not require document de-duplication since repetition may be desired in certain applications (e.g. generating temporal slates in an online advertisement session). Moreover, List-CVAE should learn from the training data distribution to produce the optimal slates whether or not those slates contain duplicates. Evaluation: For evaluation, we cannot use offline ranking evaluation metrics such as Normalized Discounted Cumulative Gain (NDCG) BID24, Mean Average Precision (MAP) or Inverse Propensity Score (IPS) BID26, etc. These metrics either require prediction scores for individual documents or assume that more relevant documents should appear in earlier ranking positions, unfairly favoring greedy ranking methods. Moreover, we find it limiting to use various diversity metrics since it is not always the case that a higher diversity-inclusive score gives better slates measured by users' total responses. Even though these metrics may be more transparent, they do not measure our end goal, which is to maximize the expected number of total responses on the generated slates. Instead, we evaluate the expected number of clicks over the distribution of generated slates and over the distribution of clicks on each document: $\mathbb{E}\big[\sum_{i=1}^{k} r_i\big] = \mathbb{E}_{s}\big[\,\mathbb{E}_{r|s}\big[\sum_{i=1}^{k} r_i\big]\big]$. In practice, we distill the simulated environment of Eq. 5 using the cross-entropy loss onto a neural network model that officiates as our new simulation environment. The model consists of an embedding layer, which encodes documents into 8-dimensional embeddings. It then concatenates the embeddings of all the documents that form a slate and follows this concatenation with two hidden layers and a final softmax layer that predicts the slate response amongst the $2^k$ possible responses. Thus we call it the "response model". We use the response model to predict user responses on 100,000 sampled output slates for evaluation purposes. This allows us to accurately evaluate the output slates of List-CVAE and all other baseline models. Our experiments compare List-CVAE with several greedy ranking baselines that are often deployed in industry productions, namely Greedy MLP, Pairwise MLP, Position MLP and Greedy Long Short-Term Memory (LSTM) models. In addition to the greedy baselines, we also compare against auto-regressive (AR) versions of Position MLP and LSTM, as well as randomly-selected slates from the training set as a sanity check.
List-CVAE generates slates $s = \arg\max_{s \in \{1,\dots,n\}^k} P_\theta(s|z, c)$. The encoder and decoder of List-CVAE, as well as all the MLP-type models, consist of two fully-connected neural network layers of the same size. Greedy MLP trains on $(d_i, r_i)$ pairs and outputs the greedy slate consisting of the top k documents with the highest predicted scores $\hat{P}(r|d)$. Pairwise MLP is an MLP model with a pairwise ranking loss, where $L_x$ is the cross entropy loss and $(x_1, x_2)$ are pairs of documents randomly selected with different user responses from the same slate. We sweep on the hyperparameters α and η in addition to the shared MLP model structure sweep. Position MLP uses position in the slate as a feature during training time and simply sets it to 0 for fast performance at inference time. AR Position MLP is identical to Position MLP with the exception that the position feature is set to each corresponding slate position at inference time (as such it takes into account position biases). Greedy LSTM is an LSTM model with fully-connected layers before and after the recurrent middle layers. We tune the hyperparameters corresponding to the number of layers and their respective widths. We use sequences of documents that form slates as the input at training time, and use single examples as inputs with sequence length 1 at inference time, which is similar to scoring documents as if they are in the first position of a slate of size 1. Then we greedily rank the documents based on their prediction scores. AR LSTM is identical to Greedy LSTM during training. During inference, however, it selects documents sequentially by first selecting the best document for position 1, then unrolling the LSTM for 2 steps to select the best document for position 2, and so on. This way it takes into account the context of all previous documents and their positions. Random selects slates uniformly randomly from the training set. Small-scale experiment (n = 100, 1000, k = 10): We use the trained document embeddings from the response model for List-CVAE and all the baseline models. For List-CVAE, we also use trained priors $P_\theta(z|c) = \mathcal{N}(\mu, \sigma)$ where $\mu, \sigma = f_{\mathrm{prior}}(c)$ and $f_{\mathrm{prior}}$ is modeled by a small MLP. Additionally, since we found little difference between different hyperparameters, we fixed the width of all hidden layers to 128, the learning rate to a fixed power of 10, and the number of latent dimensions to 16. For all other baseline models, we sweep on the learning rates, model structures and any model-specific hyperparameters such as α, η for Pairwise MLP and the forget bias for the LSTM model. FIG5 shows the performance comparison when the number of documents is n = 100, 1000 and the slate size is k = 10. While List-CVAE is not quite capable of reaching a perfect performance of 10 clicks (which is probably even above the optimal upper bound), it easily outperforms all other ranking baselines after only a few training steps. Appendix A includes an additional personalization test. Due to a lack of publicly available large scale slate datasets, we use the data provided by the RecSys 2015 YOOCHOOSE Challenge BID27. This dataset consists of 9.2M user purchase sessions around 53K products. Each user session contains an ordered list of products on which the user clicked, and whether they decided to buy them. The List-CVAE model can be used on slates with temporal ordering, so we form slates of size 5 by taking consecutive clicked products; a sketch of this preprocessing follows below.
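A minimal sketch of this slate-forming step; the sliding-window construction and the purchase labeling described next are our own reading of the text, and the function and variable names are illustrative:

```python
def make_slates(session_clicks, session_buys, slate_size=5):
    """Form fixed-size slates from consecutive clicked products in one session.
    session_clicks: ordered list of product ids; session_buys: set of bought ids.
    Response labels: 0 = click without purchase, 1 = purchase."""
    slates = []
    for start in range(len(session_clicks) - slate_size + 1):
        products = session_clicks[start:start + slate_size]
        responses = [1 if p in session_buys else 0 for p in products]
        slates.append((products, responses))
    return slates

# Example session: six clicks, two of which led to purchases.
print(make_slates([10, 42, 7, 8, 99, 3], {42, 3}))
```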
We then build user responses from whether the user bought these products. We remove a portion of the slates with no positive responses such that, after removal, they only account for 50% of the total number of slates. After filtering out products that are rarely bought, we get 375K slates of size 5 and a corpus of 10,000 candidate documents. FIG9 shows the user response distribution of the training data. Notice that in the response vector, 0 denotes a click without purchase and 1 denotes a purchase. For example, (1, 0, 0, 0, 1) means the user clicked on all five products but bought only the first and the last products. Medium-scale experiment (n = 10,000, k = 5): Similarly to the previous section, we train a two-layer response model that officiates as a new semi-synthetic simulation environment. We use the same hyperparameters used previously. FIG9 shows that List-CVAE outperforms all other baseline models within 500 training steps, which corresponds to having seen less than $10^{-11}\%$ of all possible slates. Large-scale experiment (n = 1 million, 2 millions, k = 5): We synthesize 1,990k documents by adding independent Gaussian noise $\mathcal{N}(0, 10^{-2} \cdot I)$ to the original 10k documents and label the synthetic documents by predicted responses from the response model. Thus the new pool of candidate documents consists of 10k original documents and 1,990k synthetic ones, totaling 2 million documents. To match each of the k decoder outputs $(x_1, x_2, \dots, x_k)$ with real documents, we uniformly randomly downsample the negative document examples, keeping in total only 1000 logits (the dot product outputs in the decoder) during training. At inference time, we pick the argmax for each of the k dot products with the full embedding matrix, without sampling. This technique speeds up the total training and inference time for 2 million documents to merely 4 minutes on 1 GPU for both the response model (with 40k training steps) and List-CVAE (with 5k training steps). We ran 2 experiments with 1 million and 2 million documents, respectively. From the results shown in FIG9 and 5d, List-CVAE steadily outperforms all other baselines again. The greatly increased number of training examples helped List-CVAE really learn all the interactions between documents and their respective positional biases. The resulting slates were able to receive close to 5 purchases on average, due to the limited complexity provided by the response model. In practice, we may not have any close-to-optimal slates in the training data. Hence it is crucial that List-CVAE is able to generalize to unseen optimal conditions. To test its generalization capacity, we use the medium-scale experiment setup on the RecSys 2015 dataset and eliminate from the training data all slates where the total user response exceeds some ratio h of the slate size k, i.e. $\sum_{i=1}^{k} r_i > hk$ for h = 80%, 60%, 40%, 20%. FIG10 shows test results on increasingly difficult training sets from which to infer the optimal slates. Without seeing any optimal slates (FIG10) or slates with 4 or 5 total purchases (FIG10), List-CVAE can still produce close-to-optimal slates. Even training on slates with only 0, 1 or 2 total purchases (h = 40%), List-CVAE still surpasses the performance of all greedy baselines within 1000 steps (FIG10), demonstrating the strong generalization power of the model.
List-CVAE cannot learn much about the interactions between documents given only 0 or 1 total purchase per slate (FIG10), whereas the MLP-type models learn purchase probabilities of single documents in the same way as in slates with higher responses. Although evaluation of our model requires choosing the ideal conditioning c at or near the edge of the support of P(c), we can always trade off generalization against performance by controlling c in practice. Moreover, interactions between documents are governed by similar mechanisms whether they are from optimal or sub-optimal slates. As the experiments indicate, List-CVAE can learn these mechanisms from sub-optimal slates and generalize to optimal slates. The List-CVAE model moves away from the conventional greedy ranking paradigm, and provides the first conditional generative modeling framework that approaches the slate recommendation problem through direct slate generation. By modeling the conditional probability distribution of documents in a slate directly, this approach not only automatically picks up the positional and contextual biases between documents at both training and inference time, but also gracefully avoids the problem of combinatorial explosion of possible slates when the candidate set is large. The framework is flexible and can incorporate different types of conditional generative models. In this paper we showed its superior performance over popular greedy and auto-regressive baseline models with a conditional VAE model. In addition, the List-CVAE model has good scalability. We designed an architecture that uses pretrained document embeddings combined with a negatively downsampled k-head softmax layer that greatly speeds up the training, scaling easily to millions of documents. This test complements the small-scale experiment. To the setting with 100 documents and slate size 10, we add user features into the conditioning c, by adding a set U of 50 different users to the simulation engine (|U| = 50, n = 100, k = 10), permuting the innate attractiveness of documents and their interaction matrix W by a user-specific function $\pi_u$. Let $r_i^u$ be the response of user u on document $d_i$. During training, the condition c is a concatenation of 16-dimensional user embeddings Θ(u) obtained from the response model, and the responses r. At inference time, the model conditions on c = (r, Θ(u)) for each randomly generated test user u. We sweep over hidden layers of 512 or 1024 units in List-CVAE, and over all baseline MLP structures. FIG12 shows that slates generated by List-CVAE receive on average more clicks than those produced by the greedy baseline models, although convergence took longer to reach than in the small-scale experiment.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
r1xX42R5Fm
We used a CVAE-type model structure to learn to directly generate slates/whole pages for recommendation systems.
Neural networks for structured data like graphs have been studied extensively in recent years. To date, the bulk of research activity has focused mainly on static graphs. However, most real-world networks are dynamic since their topology tends to change over time. Predicting the evolution of dynamic graphs is a task of high significance in the area of graph mining. Despite its practical importance, the task has not been explored in depth so far, mainly due to its challenging nature. In this paper, we propose a model that predicts the evolution of dynamic graphs. Specifically, we use a graph neural network along with a recurrent architecture to capture the temporal evolution patterns of dynamic graphs. Then, we employ a generative model which predicts the topology of the graph at the next time step and constructs a graph instance that corresponds to that topology. We evaluate the proposed model on several artificial datasets following common network evolving dynamics, as well as on real-world datasets. Results demonstrate the effectiveness of the proposed model. Graph neural networks (GNNs) have emerged in recent years as an effective tool for analyzing graph-structured data. These architectures bring the expressive power of deep learning into non-Euclidean data such as graphs, and have demonstrated convincing performance in several graph mining tasks, including graph classification, link prediction, and community detection. So far, GNNs have been mainly applied to tasks that involve static graphs. However, most real-world networks are dynamic, i.e., nodes and edges are added and removed over time. Despite the success of GNNs in various applications, it is still not clear if these models are useful for learning from dynamic graphs. Although some models have been applied to this type of data, most studies have focused on predicting a low-dimensional representation (i.e., embedding) of the graph for the next time step. These representations can then be used in downstream tasks. Predicting the topology of the graph is a task that has not been properly addressed yet. Graph generation, another important task in graph mining, has attracted a lot of attention from the deep learning community in recent years. The objective of this task is to generate graphs that exhibit specific properties, e.g., degree distribution, node triangle participation, community structure, etc. Traditionally, graphs are generated based on some network generation model such as the Erdős-Rényi model. These models focus on modeling one or more network properties, and neglect the others. Neural network approaches, on the other hand, can better capture the properties of graphs since they follow a supervised approach. These architectures minimize a loss function such as the reconstruction error of the adjacency matrix or the value of a graph comparison algorithm. Capitalizing on recent developments in neural networks for graph-structured data and graph generation, we propose in this paper, to the best of our knowledge, the first framework for predicting the evolution of the topology of networks in time. The proposed framework can be viewed as an encoder-decoder architecture. The "encoder" network takes a sequence of graphs as input and uses a GNN to produce a low-dimensional representation for each one of these graphs. These representations capture structural information about the input graphs.
Then, it employs a recurrent architecture which predicts a representation for the future instance of the graph. The "decoder" network corresponds to a graph generation model which utilizes the predicted representation and generates the topology of the graph for the next time step. The proposed model is evaluated over a series of experiments on synthetic and real-world datasets. To measure its effectiveness, the generated graphs need to be compared with the corresponding ground-truth graph instances. To this end, we use the Weisfeiler-Lehman subtree kernel, which scales to very large graphs and has achieved state-of-the-art results on many graph datasets. The proposed model is compared against several baseline methods. Results show that the proposed model is very competitive and, in most cases, outperforms the competing methods. The rest of this paper is organized as follows. Section 2 provides an overview of the related work and elaborates on our contribution. Section 3 introduces some preliminary concepts and definitions related to the graph generation problem, followed by a detailed presentation of the components of the proposed model. Section 4 evaluates the proposed model on several tasks. Finally, Section 5 concludes. Our work is related to random graph models. These models are very popular in graph theory and network science. The Erdős-Rényi model (Erdős & Rényi, 1960), the preferential attachment model (Albert & Barabási, 2002), and the Kronecker graph model are some typical examples of such models. To predict how a graph structure will evolve over time, the values of the parameters of these models can be estimated based on the corresponding values of the observed graph instances, and then the estimated values can be passed on to these models to generate graphs. Other work along a similar direction includes neural network models which combine GNNs with RNNs. These models use GNNs to extract features from a graph and RNNs for sequence learning from the extracted features. Other similar approaches do not use GNNs, but instead perform random walks or employ deep autoencoders. All these works focus on predicting how the node representations or the graph representations will evolve over time. However, some applications require predicting the topology of the graph, and not just its low-dimensional representation. The proposed model constitutes the first step towards this objective. In this Section, we first introduce basic concepts from graph theory and define our notation. Let G = (V, E) be an undirected, unweighted graph, where V is the set of nodes and E is the set of edges. We will denote by n the number of vertices and by m the number of edges. We define a permutation of the nodes of G as a bijective function π: V → V, under which any graph property of G should be invariant. We are interested in the topology of a graph, which is described by its adjacency matrix $A^\pi \in \mathbb{R}^{n \times n}$ under a specific ordering π of the nodes. Each entry of the adjacency matrix is defined as $A^\pi_{ij} = 1$ if $(\pi(v_i), \pi(v_j)) \in E$ and $A^\pi_{ij} = 0$ otherwise. In what follows, we consider the "topology", "structure" and "adjacency matrix" of a graph equivalent to each other. In many real-world networks, besides the adjacency matrix that encodes connectivity information, nodes and/or edges are annotated with a feature vector, which we denote as $X \in \mathbb{R}^{n \times d}$ and $L \in \mathbb{R}^{m \times d}$, respectively.
Hence, a graph object can also be written in the form of a triplet G = (A, X, L). In this paper, we use this triplet to represent all graphs. If a graph does not contain node/edge attributes, we assign attributes to it based on local properties (e.g., degree, k-core number, number of triangles, etc.). An evolving network is a graph whose topology changes as a function of time. Interestingly, almost all real-world networks evolve over time by adding and removing nodes and/or edges. For instance, in social networks, people make and lose friends over time, while there are people who join the network and others who leave the network. An evolving graph is a sequence of graphs {G_0, G_1, ..., G_T} where G_t = (A_t, X_t, L_t) represents the state of the evolving graph at time step t. It should be noted that not only nodes and edges can evolve over time, but also node and edge attributes. However, in this paper, we keep node and edge attributes fixed, and we allow only the node and edge sets of the graphs to change as a function of time. The sequence can thus be written as {G_t = (A_t, X, L)}_{t∈[0,T]}. We are often interested in predicting what "comes next" in a sequence, based on data encountered in previous time steps. In our setting, this is equivalent to predicting G_t based on the sequence {G_k}_{k<t}. In sequential modeling, we usually do not take into account the whole sequence, but only those instances within a fixed small window of size w before G_t, which we denote as {G_{t−w}, G_{t−w+1}, ..., G_{t−1}}. We refer to these instances as the graph history. The problem is then to predict the topology of G_t given its history. The proposed architecture is very similar to a typical sequence learning framework. The main difference lies in the fact that instead of vectors, in our setting, the elements of the sequence correspond to graphs. The combinatorial nature of graph-structured data increases the complexity of the problem and calls for more sophisticated architectures than the ones employed in traditional sequence learning tasks. Specifically, the proposed model consists of three components: a graph neural network (GNN) which generates a vector representation for each graph instance, a recurrent neural network (RNN) for sequential learning, and a graph generation model for predicting the graph topology at the next time step. This framework can also be viewed as an encoder-decoder model. The first two components correspond to an encoder network which maps the sequence of graphs into a sequence of vectors and predicts a representation for the next graph in the sequence. The decoder network consists of the last component of the model, and transforms the above representation into a graph. Figure 1 illustrates the proposed model. In what follows, we present the different components of EvoNet in detail. Graph Neural Networks (GNNs) have recently emerged as a dominant paradigm for performing machine learning tasks on graphs. Several GNN variants have been proposed in the past years. All these models employ some message passing procedure to update node representations. Specifically, each node updates its representation by aggregating the representations of its neighbors. After k iterations of the message passing procedure, each node obtains a feature vector which captures the structural information within its k-hop neighborhood. Then, GNNs compute a feature vector for the entire graph using some permutation invariant readout function, such as summing the representations of all the nodes of the graph.
As described below, the learning process can be divided into three phases: aggregation, update, and readout. Aggregation In this phase, the network computes a message for each node of the graph. To compute that message for a node, the network aggregates the representations of its neighbors. Formally, at time t+1, a message vector m_v^{t+1} is computed from the representations of the neighbors N(v) of node v: m_v^{t+1} = AGGREGATE({h_u^t : u ∈ N(v)}), where AGGREGATE is a permutation invariant function. Furthermore, for the network to be end-to-end trainable, this function needs to be differentiable. In our case, AGGREGATE is a multi-layer perceptron (MLP) followed by a sum function. Update The network then combines the message with the node's current representation: h_v^{t+1} = UPDATE(h_v^t, m_v^{t+1}). The UPDATE function also needs to be differentiable. To combine the two feature vectors (i.e., h_v^t and m_v^{t+1}), we have employed the Gated Recurrent Unit. Omitting biases for readability, we have: z_v = σ(W_z m_v^{t+1} + U_z h_v^t), r_v = σ(W_r m_v^{t+1} + U_r h_v^t), h̃_v = tanh(W m_v^{t+1} + U (r_v ⊙ h_v^t)), h_v^{t+1} = (1 − z_v) ⊙ h_v^t + z_v ⊙ h̃_v, where the W and U matrices are trainable weight matrices, σ is the sigmoid function, and r_v and z_v are the parameters of the reset and update gates for a given node. Readout The aggregation and update steps are repeated for T time steps. The emerging node representations {h_v^T}_{v∈V} are aggregated into a single vector which corresponds to the graph representation, as follows: h_G = READOUT({h_v^T : v ∈ V}), where READOUT is a differentiable and permutation invariant function. This vector captures the topology of the input graph. To generate h_G, we utilize Set2Set. Other functions such as the sum function were also considered, but were found less effective in preliminary experiments. Given an input sequence of graphs, we use the GNN described above to generate a vector representation for each graph in the sequence. Then, to process this sequence, we use a recurrent neural network (RNN). RNNs use their internal state (i.e., memory) to preserve sequential information. These networks exhibit temporal dynamic behavior, and can find correlations between sequential events. Specifically, an RNN processes the input sequence in a series of time steps (i.e., one for each element in the sequence). For a given time step t, the hidden state h_t at that time step is updated as h_t = f(h_{t−1}, x_t), where f is a non-linear activation function. A generative RNN outputs a probability distribution over the next element of the sequence given its current state h_t. RNNs can be trained to predict the next element (e.g., graph) in the sequence, i.e., they can learn the conditional distribution p(G_t | G_1, ..., G_{t−1}). In our implementation, we use a Long Short-Term Memory (LSTM) network that reads sequentially the vectors {h_{G_i}}_{i∈[t−w,t−1]} produced by the GNN, and generates a vector h_{G_T} that represents the embedding of G_T, i.e., the graph at the next time step. The embedding incorporates topological information and will serve as input to the graph generation module. Along with the GNN component, this architecture can be seen as a form of an encoder network. This network takes as input a sequence of graphs and projects them into a low-dimensional space. A minimal sketch of this encoder is given below.
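To make the encoder concrete, here is a minimal PyTorch sketch of the three phases (MLP-plus-sum aggregation, a GRU update, and a readout) together with the LSTM that consumes the per-graph embeddings. It is an illustration, not the authors' implementation: the sum readout stands in for Set2Set, and all sizes, names and the toy `graph_window` data are our own assumptions.

```python
import torch
import torch.nn as nn

class MessagePassingEncoder(nn.Module):
    """Minimal GNN encoder: MLP + sum aggregation, GRU update, sum readout."""
    def __init__(self, dim, steps=3):
        super().__init__()
        self.steps = steps
        self.msg_mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.update = nn.GRUCell(dim, dim)  # combines m_v^{t+1} with h_v^t

    def forward(self, x, edge_index):
        # x: (n, dim) node features; edge_index: (2, m) directed edges u -> v
        h, (src, dst) = x, edge_index
        for _ in range(self.steps):
            msg = self.msg_mlp(h)[src]                         # messages on edges
            agg = torch.zeros_like(h).index_add_(0, dst, msg)  # sum over neighbors
            h = self.update(agg, h)                            # gated update
        return h.sum(dim=0)                                    # readout -> h_G

# Encode a history window of w graphs, then predict the next graph's embedding.
encoder = MessagePassingEncoder(dim=32)
lstm = nn.LSTM(input_size=32, hidden_size=32, batch_first=True)
graph_window = [(torch.randn(5, 32), torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]]))
                for _ in range(10)]                            # toy stand-in data
history = torch.stack([encoder(x, e) for x, e in graph_window]).unsqueeze(0)
_, (h_next, _) = lstm(history)                                 # h_next ~ h_{G_T}
```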
To generate a graph that corresponds to the evolution of the current graph instance, we capitalize on a recently-proposed framework for learning generative models of graphs. This framework models a graph in an autoregressive manner (i.e., as a sequence of additions of new nodes and edges) to capture the complex joint probability of all nodes and edges in the graph. Formally, given a node ordering π, it considers a graph G as a sequence of vectors S^π = (S^π_1, ..., S^π_n), where S^π_i = [â_{1,i}, ..., â_{i−1,i}] ∈ {0, 1}^{i−1} is the adjacency vector between node π(i) and the nodes preceding it ({π(1), ..., π(i−1)}). We adapt this framework to our supervised setting. The objective of the generative model is to maximize the likelihood of the observed graphs of the training set. Since a graph can be expressed as a sequence of adjacency vectors (given a node ordering), we can consider instead the distribution p(S^π; θ), which can be decomposed in an autoregressive manner into the following product: p(S^π; θ) = ∏_{i=1}^n p(S^π_i | S^π_k, k < i; θ). This product can be parameterized by a neural network. Specifically, following prior work, we use a hierarchical RNN consisting of two levels: the graph-level RNN, which maintains the state of the graph and generates new nodes and thus learns the distribution p(S^π_i | S^π_k, k < i), and the edge-level RNN, which generates links between each generated node and previously-generated nodes and thus learns the distribution p(â_{ji} | â_{li}, l < j). More formally, we have: h_i = RNN_1(h_{i−1}, S^π_{i−1}) and â_{j,i} = σ(RNN_2(h_i, â_{j−1,i})), where h_i is the state vector of the graph-level RNN (i.e., RNN_1) that encodes the current state of the graph sequence and is initialized by h_{G_T}, the predicted embedding of the graph at the next time step T. The output of the graph-level RNN corresponds to the initial state of the edge-level RNN (i.e., RNN_2). The resulting value is then squashed by a sigmoid function to produce the probability of existence of an edge â_{j,i}. In other words, the model learns the probability distribution of the existence of edges, and a graph can then be sampled from this distribution, which will serve as the predicted topology for the next time step T. To train the model, the cross-entropy loss between the existence of each edge and its probability of existence is minimized: L = −∑_{i,j} [a_{j,i} log â_{j,i} + (1 − a_{j,i}) log(1 − â_{j,i})]. Node ordering It should be mentioned that the node ordering π has a large impact on the efficiency of the above generative model. Note that a good ordering can help us avoid the exploration of all possible node permutations in the sample space. Different strategies such as the Breadth-First-Search ordering scheme can be employed to improve scalability. However, in our setting, the nodes are distinguishable, i.e., node v of G_i and node v of G_{i+1} correspond to the same entity. Hence, we can impose an ordering onto the nodes of the first instance of our sequence of graphs, and then utilize the same node ordering for the graphs of all subsequent time steps (we place new nodes at the end of the ordering). A compact sketch of this decoder follows.
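Below is a compact, illustrative PyTorch reconstruction of the two-level decoder and its cross-entropy training loss. It assumes teacher forcing on the ground-truth adjacency vectors, GRU cells for both levels, and illustrative sizes; the original framework's exact architecture may differ.

```python
import torch
import torch.nn as nn

class HierarchicalDecoder(nn.Module):
    """Graph-level RNN emits a state per new node; an edge-level RNN turns that
    state into edge probabilities against previously generated nodes."""
    def __init__(self, dim):
        super().__init__()
        self.node_rnn = nn.GRUCell(dim, dim)  # RNN_1: graph-level state
        self.edge_rnn = nn.GRUCell(1, dim)    # RNN_2: consumes one edge bit at a time
        self.edge_out = nn.Linear(dim, 1)

    def forward(self, h_G, adj_vectors):
        """h_G: predicted graph embedding, shape (dim,);
        adj_vectors: ground-truth S^pi_i for i = 2..n, each of shape (i-1,)."""
        h = h_G.unsqueeze(0)                  # init graph-level state with h_{G_T}
        inp = torch.zeros_like(h)
        bce = nn.BCEWithLogitsLoss(reduction="sum")
        loss = h.new_zeros(())
        for s_true in adj_vectors:            # one new node per step
            h = self.node_rnn(inp, h)
            e, prev, logits = h, torch.zeros(1, 1), []
            for j in range(s_true.numel()):   # edges to nodes pi(1)..pi(i-1)
                e = self.edge_rnn(prev, e)
                logits.append(self.edge_out(e))
                prev = s_true[j].view(1, 1)   # teacher forcing during training
            loss = loss + bce(torch.cat(logits).view(-1), s_true)
            inp = h                           # feed the state forward (a simplification)
        return loss

decoder = HierarchicalDecoder(dim=32)
target = [torch.tensor([1.]), torch.tensor([1., 0.]), torch.tensor([0., 1., 1.])]
print(decoder(torch.randn(32), target))      # training loss for one predicted graph
```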
In this section, we evaluate the performance of EvoNet on synthetic and real-world datasets for predicting the evolution of graph topology, and we compare it against several baseline methods. We use both synthetic and real-world datasets. The synthetic datasets consist of sequences of graphs where there is a specific pattern on how each graph emerges from the previous graph instance, i.e., some graph structure is added or removed at each time step. The real-world datasets correspond to single graphs whose nodes incorporate temporal information. We decompose these graphs into sequences of snapshots based on their timestamps. We fix the length of the sequences to 1000 time steps. The size of the graphs in each sequence ranges from tens of nodes to several thousand nodes. Path graph A path graph can be drawn such that all vertices and edges lie on a straight line. We denote a path graph of n nodes as P_n. In other words, the path graph P_n is a tree with two nodes of degree 1, and the other n−2 nodes of degree 2. We consider two scenarios. In both cases the first graph in the sequence is P_3. In the first scenario, at each time step, we add one new node to the previous graph instance, together with an edge between the new node and the last node according to the previous ordering. The second scenario follows the same pattern; however, every three steps, instead of adding a new node, we remove the first node according to the previous ordering (along with its edge). Cycle graph A cycle graph C_n is a graph on n nodes containing a single cycle through all the nodes. Note that if we add an edge between the first and the last node of P_n, we obtain C_n. Similar to the above case, we use C_3 as the first graph in the sequence, and we again consider two scenarios. In the first scenario, at each time step, we increase the size of the cycle, i.e., from C_i we obtain C_{i+1} by adding a new node and two edges: the first between the new node and the first node according to the previous ordering, and the second between the new node and the last node according to the previous ordering. In the second scenario, every three steps, we remove the first node according to the ordering (along with its edges), and we add an edge between the second and the last nodes according to the ordering. Ladder graph The ladder graph L_n is a planar graph with 2n vertices and 3n−2 edges. It is the Cartesian product of two path graphs: L_n = P_n × P_2. As the name indicates, the ladder graph L_n can be drawn as a ladder consisting of two rails and n rungs between them. We consider the following scenario: at each time step, we attach one rung (P_2) to the tail of the ladder (the two nodes of the rung are connected to the two last nodes according to the ordering). For all graphs, we set the attribute of each node equal to its degree, while we set the attribute of all edges to the same value (e.g., to 1). A generator for one of these scenarios is sketched below.
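As an illustration, a minimal NetworkX generator for the second path-graph scenario might look as follows; the function name and the `drop_every` parameter are our own, and attributes follow the degree-as-attribute convention described above.

```python
import networkx as nx

def evolving_path_sequence(T, drop_every=3):
    """Grow a path by one node per step; every `drop_every` steps, remove the
    oldest node (and its edge) instead of adding one. Starts from P_3."""
    graphs, g, next_id = [], nx.path_graph(3), 3
    for t in range(T):
        g = g.copy()
        if drop_every and (t + 1) % drop_every == 0:
            g.remove_node(min(g.nodes))          # drop the first node in the ordering
        else:
            g.add_edge(next_id - 1, next_id)     # append a new node to the tail
            next_id += 1
        nx.set_node_attributes(g, dict(g.degree), "attr")  # degree as node attribute
        graphs.append(g)
    return graphs

seq = evolving_path_sequence(T=10)
print([g.number_of_nodes() for g in seq])        # sequence of graph sizes
```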
Besides synthetic datasets, we also evaluate EvoNet on real-world datasets. These datasets contain graphs derived from the Bitcoin transaction network, a who-trusts-whom network of people who trade using Bitcoin. Due to the anonymity of Bitcoin users, platforms seek to maintain a record of users' reputation in Bitcoin trades to avoid fraudulent transactions. The nodes of the network represent Bitcoin users, while an edge indicates that a trade has been executed between its two endpoint users. Each edge is annotated with an integer between −10 and 10, which indicates the rating of one user given by the other user. The network data are collected separately from two platforms: Bitcoin OTC and Bitcoin Alpha. More details about these two datasets are given in Table 1. For all graphs, we set the attribute of each node equal to the average rating that the user has received from the rest of the community, and the attribute of each edge equal to the rating between its two endpoint users. We compare EvoNet against several random graph models: the Erdős-Rényi model (Erdős & Rényi, 1960), the Stochastic Block model, the Barabási-Albert model (Albert & Barabási, 2002), and the Kronecker graph model. These are the traditional methods for studying the topology evolution of temporal graphs, each positing a driving mechanism behind the evolution. In general, it is very challenging to measure the performance of a graph generative model, since it requires comparing two graphs to each other, a long-standing problem in mathematics and computer science. We propose to use graph kernels to compare graphs to each other, and thus to evaluate the quality of the generated graphs. Graph kernels have emerged as one of the most effective tools for graph comparison in recent years. A graph kernel is a symmetric positive semidefinite function which takes two graphs as input and measures their similarity. In our experiments, we employ the Weisfeiler-Lehman subtree kernel, which counts label-based subtree patterns. Note that we also normalize the kernel values, so that the emerging values lie between 0 and 1. As previously mentioned, each dataset corresponds to a sequence of graphs where each sequence represents the evolution of the topology of a single graph in 1000 time steps. We use the first 80% of these graph instances for training and the rest of them serve as our test set. The window size w is set equal to 10, which means that we feed 10 consecutive graph instances to the model and predict the topology of the instance that directly follows the last of these 10 input instances. Each graph of the test set, along with its corresponding predicted graph, is then passed on to the Weisfeiler-Lehman subtree kernel, which measures their similarity and thus the performance of the model. The hyperparameters of EvoNet are chosen based on its performance on a validation set. The parameters of the random graph models are set under the principle that the generated graphs need to share similar properties with the ground-truth graphs. For instance, in the case of the Erdős-Rényi model, the probability of adding an edge between two nodes is set to some value such that the density of the generated graph is identical to that of the ground-truth graph. However, since the model should not have access to such information (e.g., the density of the ground-truth graph), we use an MLP to predict this property based on past data (i.e., the number of nodes and edges of the previous graph instances). This is on par with how the proposed model computes the size of the graphs to be generated (i.e., also using an MLP). A compact implementation of the kernel computation is sketched below.
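To give a flavor of the evaluation metric, here is a compact, self-contained re-implementation of the normalized Weisfeiler-Lehman subtree kernel with degree-based initial labels; in practice a tuned library implementation (e.g., GraKeL) would typically be used instead.

```python
from collections import Counter
import networkx as nx

def wl_similarity(g1, g2, iters=3):
    """Normalized Weisfeiler-Lehman subtree kernel between two graphs."""
    def wl_counts(g):
        labels = {v: str(d) for v, d in g.degree()}   # initial labels: degrees
        counts = Counter(labels.values())
        for _ in range(iters):
            # relabel each node by its own label plus the sorted neighbor labels
            labels = {v: labels[v] + "|" + "".join(sorted(labels[u] for u in g[v]))
                      for v in g}
            counts.update(labels.values())
        return counts

    c1, c2 = wl_counts(g1), wl_counts(g2)
    k12 = sum(c1[l] * c2[l] for l in c1)              # dot product of feature maps
    k11 = sum(v * v for v in c1.values())
    k22 = sum(v * v for v in c2.values())
    return k12 / (k11 * k22) ** 0.5                    # kernel normalization to [0, 1]

print(wl_similarity(nx.path_graph(6), nx.cycle_graph(6)))
```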
Figure 2 illustrates the experimental results on the synthetic datasets. Since the graph structures contained in the synthetic datasets are fairly simple, it is easy for the model to generate graphs very similar to the ground-truth graphs (normalized kernel values > 0.9). Hence, instead of reporting the kernel values, we compare the size of the predicted graphs against that of the ground-truth graphs. The figures visualize the growth of graph size in the real sequence (orange) and the predicted sequence (blue). For path graphs, in spite of a small variance, we obtain an accurate prediction of the graph size. For ladder graphs, we observe a mismatch at the beginning of the sequence for small graphs, but the two curves then coincide for larger graphs. This mismatch on small graphs may be due to the more complex structures in ladder graphs, such as cycles, as supported by the results for cycle graphs in the right figure, where we completely mispredict the size of the cycle graphs. In fact, we fail to reconstruct the cycle structure in the prediction, with all the predicted graphs being path graphs. This failure could be related to known limitations of GNNs. Real-world datasets Finally, we analyze the performance of our model on the real datasets: Bitcoin-OTC and Bitcoin-Alpha. We obtain the similarities between each pair of real and predicted graphs in the sequence and draw a histogram to illustrate the distribution of similarities. The results are shown in Figure 3 for the two datasets, respectively. Among all the traditional random graph models, the Kronecker graph model (with learnable parameters) performs best; however, on both datasets, our proposed method EvoNet (in blue) substantially outperforms all other methods, with an average similarity of 0.82 on the BTC-OTC dataset and 0.55 on the BTC-Alpha dataset. Detailed statistics can be found in Table 2. Overall, our experiments demonstrate the advantage of EvoNet over the traditional random graph models in predicting the evolution of dynamic graphs. In this paper, we proposed EvoNet, a model that predicts the evolution of dynamic graphs, following an encoder-decoder framework. We also proposed an evaluation methodology for this task which capitalizes on the well-established family of graph kernels. Experiments show that the proposed model outperforms traditional random graph methods on both synthetic and real-world datasets.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
Byg5flHFDr
Combining graph neural networks and the RNN graph generative model, we propose a novel architecture that is able to learn from a sequence of evolving graphs and predict the graph topology evolution for the future timesteps
As for knowledge-based question answering, a fundamental problem is to relax the assumption of answerable questions from simple questions to compound questions. Traditional approaches first detect the topic entity mentioned in a question and then traverse the knowledge graph to find relations as a multi-hop path to answers, while we propose a novel approach that leverages simple-question answerers to answer compound questions. Our model consists of two parts: (i) a novel learning-to-decompose agent that learns a policy to decompose a compound question into simple questions and (ii) three independent simple-question answerers that classify the corresponding relations for each simple question. Experiments demonstrate that our model learns complex rules of compositionality as a stochastic policy, which helps simple neural networks to achieve state-of-the-art results on WebQuestions and MetaQA. We analyze the interpretable decomposition process as well as the generated partitions. Knowledge-Based Question Answering (KBQA) is one of the most interesting approaches to answering a question, bridging a curated knowledge base of tremendous facts to answerable questions. With question answering as a user-friendly interface, users can easily query a knowledge base through natural language, i.e., in their own words. In the past few years, many systems BID5 BID2 BID11 BID13 have achieved remarkable improvements on various datasets, such as WebQuestions BID5, SimpleQuestions BID6 and MetaQA. However, most of them BID31 BID6 BID10 BID34 BID36 assume that only simple questions are answerable. Simple questions are questions that have only one relation from the topic entity to unknown tail entities (answers, usually substituted by an interrogative word), while compound questions are questions that have multiple relations. For example, "Who are the daughters of Barack Obama?" is a simple question and "Who is the mother of the daughters of Barack Obama?" is a compound question which can be decomposed into two simple questions. In this paper, we aim to relax the assumption of answerable questions from simple questions to compound questions. Figure 1 illustrates the process of answering compound questions. Intuitively, to answer a compound question, traditional approaches first detect the topic entity mentioned in the question, as the starting point for traversing the knowledge graph, then find a chain of multiple (≤ 3) relations as a multi-hop path to the golden answers. We propose a learning-to-decompose agent which assists simple-question answerers to solve compound questions directly. Our agent learns a policy for decomposing a compound question into simple ones in a meaningful way, guided by the feedback from the downstream simple-question answerers. The goal of the agent is to produce partitions that capture the compositional structure of questions with maximum information utilization. (We assume that the number of corresponding relations is at most three. We are aware of the term multi-hop question in the literature; we argue that compound question is a better fit for the context of KBQA, since multi-hop characterizes a path, not a question. As for document-based QA, multi-hop also refers to routing over multiple pieces of evidence to answers.) Figure 1: An example of answering compound questions. Given a question Q, we first identify the topic entity e with entity linking. By relation detection, a movie-to-actor relation f_1, an actor-to-movie relation f_2 and a movie-to-writer relation f_3 form a path to the answers W_i.
Note that each relation f_i corresponds to a part of the question. If we decompose the question in a different way, we may find a movie-to-movie relation g as a shortcut, such that g(e) = f_2(f_1(e)) = (f_2 ∘ f_1)(e) holds. Our model discovered such composite rules. See section 4 for further discussion. The intuition behind our approach is that encouraging the model to learn structural compositions of compound questions will bias the model toward better generalizations about how the meaning of a question is encoded in terms of compositional structures on sequences of words, leading to better performance on downstream question answering tasks. We demonstrate that our agent captures the semantics of compound questions and generates interpretable decompositions. Experimental results show that our novel approach achieves state-of-the-art performance on two challenging datasets (WebQuestions and MetaQA), without re-designing complex neural networks to answer compound questions. For combinatorial generalization BID4 on the search space of the knowledge graph, many approaches BID31 BID34 tackle KBQA in a tandem manner, i.e., topic entity linking followed by relation detection. An important line of research focused on directly parsing the semantics of natural language questions to structured queries BID8 BID17 BID2 BID31. An intermediate meaning representation or logical form is generated for query construction. It often requires pre-defined rules or grammars BID5 based on hand-crafted features. By contrast, another line of research puts more emphasis on representing natural language questions instead of constructing knowledge graph queries. Employing CNNs BID11 BID34 or RNNs BID10 BID36, variable-length questions are compressed into corresponding fixed-length vectors. Most approaches in this line focus on solving simple questions because of the limited expressive power of a fixed-length vector, consistent with observations BID23 BID0 in Seq2Seq tasks such as Neural Machine Translation. Closely related to the second line of research, our proposed model learns to decompose compound questions into simple questions, which eases the burden of learning vector representations for compound questions. Once the decomposition process is completed, a simple-question answerer directly decodes the vector representation of each simple question to an inference chain of relations with the desired order, which resolves the bottleneck of KBQA. Many reinforcement learning approaches learn sentence representations in a bottom-up manner. BID35 learn tree structures for the order of composing words into sentences using reinforcement learning with Tree-LSTM BID24 BID40, while BID37 employ REINFORCE BID28 to select useful words sequentially. Either in tree structure or sequence, the vector representation is built up from the words, which benefits downstream natural language processing tasks such as text classification BID22 and natural language inference BID7. By contrast, from the top down, our proposed model learns to decompose compound questions into simple questions, which helps to tackle the bottleneck of KBQA piece by piece. See section 3 for more details. Natural question understanding has attracted the attention of different communities. BID15 introduce the SequentialQA task, which requires parsing text to SQL queries that locate table cells as answers.
The questions in SequentialQA are decomposed from selected questions of the WikiTableQuestions dataset BID19 by crowdsourced workers, while we train an agent to decompose questions using reinforcement learning. BID25 propose the ComplexWebQuestions dataset, which contains questions with compositional semantics, while BID3 collect a dataset called ComplexQuestions focusing on multi-constrained knowledge-based question answering. The closest idea to our work is BID25, which adopts a pointer network to decompose questions and a reading comprehension model to retrieve answers from the Web. The main difference is that they leverage explicit supervision to guide the pointer network to correctly decompose complex web questions based on human logic (e.g., conjunction or composition), while we allow the learning-to-decompose agent to discover good partition strategies that benefit the downstream task. Note that these strategies are not necessarily consistent with human intuition or linguistic knowledge. Without heavy feature engineering, semantic role labeling based on deep neural networks BID9 focuses on capturing dependencies between predicates and arguments by learning to label semantic roles for each word. End-to-end systems have been built which take only the original text information as input features, showing that deep neural networks can outperform traditional approaches by a large margin without using any syntactic knowledge. BID18 improve the role classifier by incorporating vector representations of both input sentences and predicates. BID26 handle structural information and long-range dependencies with a self-attention mechanism. This line of work concentrates on improving the role classifier. It still requires rich supervision for training the role classifier at the token level. Our approach also requires labeling an action for each word, which is similar to role labeling. However, we train our approach at the sentence level, which omits word-by-word annotations. Our learning-to-decompose agent generates such annotations on the fly by exploring the search space of strategies and increases the probability of good annotations according to the feedback. FIG0 illustrates an overview of our model and the data flow. Our model consists of two parts: a learning-to-decompose agent that decomposes each input question into at most three partitions, and three identical simple-question answerers that map each partition to its corresponding relation independently. We refer to the learning-to-decompose agent as the agent and the three simple-question answerers as the answerers in the rest of our paper for simplicity. Our main idea is to best divide an input question into at most three partitions such that each partition contains the necessary information for the downstream simple-question answerer. (FIG1: A zoom-in version of the lower half of FIG0. Our agent consists of two components: a Memory Unit and an Action Unit. The Memory Unit observes the current word at each time step t and updates the state of its own memory. We use a feedforward neural network as the policy network for the Action Unit.) Given an input question of N words x = {x_1, x_2, ..., x_N}, we assume that a sequence of words is essentially a partially observable environment and we can only observe the corresponding vector representation o_t = x_t ∈ R^D at time step t. FIG1 summarizes the process for generating decisions for compound question decomposition. The agent has a Long Short-Term Memory (LSTM; BID14) cell unrolling for each time step to memorize input history.
Formally, the memory cell computes h_t, c_t = LSTM(o_t, h_{t−1}, c_{t−1}), (1) where h_t ∈ R^H and c_t ∈ R^H are the hidden state and cell state. The state s_t ∈ R^{2H} of the agent is defined as s_t = [h_t, c_t], (2) which is maintained by the above memory cell (Eq. 1) unrolling for each time step. [·, ·] denotes the concatenation of two vectors. Action Unit The agent also has a stochastic policy network π(α|s; W_π), where W_π denotes the parameters of the network. Specifically, we use a two-layer feedforward network that takes the agent's state s as its input: π(α|s; W_π) = softmax(W_π^{(2)} tanh(W_π^{(1)} s + b^{(1)}) + b^{(2)}), (3) where W_π^{(1)} ∈ R^{H×2H}, W_π^{(2)} ∈ R^{3×H}, and b^{(1)}, b^{(2)} are bias terms. Following the learned policy, the agent decomposes a question of length N by generating a sequence of actions α_t ∈ {1st, 2nd, 3rd}, t = 1, 2, ..., N. Words under the same decision (e.g., 1st) will be appended to the same sub-sequence (e.g., the first partition). Formally, {x^{(k)} = {x_1^{(k)}, ..., x_{t_k}^{(k)}}}_{k=1,2,3} denotes the partitions of a question. Note that within a partition, words are not necessarily consecutive. The relative position of two words in the original question is preserved. t_1 + t_2 + t_3 = N holds for every question. Reward The episodic reward R will be +1 if the agent helps all the answerers to get the golden answers after each episode, or −1 otherwise. There is another reward function, R = Σ log P(Y*|X), that is widely used in the literature on using reinforcement learning for natural language processing tasks BID1 BID37. We choose the former as the reward function for its lower variance. Each unique rollout (sequence of actions) corresponds to a unique compound question decomposition. We do not assume that any question should be divided into exactly three parts. We allow our agent to explore the search space of partition strategies and to increase the probability of good ones. The goal of our agent is to learn partition strategies that benefit the answerers the most. With the help of the learning-to-decompose agent, simple-question answerers can answer compound questions. Once the question is decomposed into partitions as simple questions, each answerer takes its partition x^{(k)} = {x_1^{(k)}, ..., x_{t_k}^{(k)}} as input and classifies it as the corresponding relation in the knowledge graph. For each partition x^{(k)}, we use an LSTM network to construct the simple-question representation directly. The partition embedding is the last state of the LSTM network, denoted by x̄^{(k)} ∈ R^{2H}. We again use a two-layer feedforward neural network to make the prediction, i.e., to estimate the likelihood of the golden relation r: P(r|x^{(k)}; W) = softmax(W^{(2)} tanh(W^{(1)} x̄^{(k)} + b^{(1)}) + b^{(2)}), (4) where W^{(2)} ∈ R^{C×H} and C is the number of classes. Each answerer only processes its corresponding partition and outputs a predicted relation. These three modules share no parameters except the embedding layer, because our agent will generate conflicting assignments for the same questions in different epochs. If all the answerers shared the same parameters in the other layers, such data conflicts would undermine the decision boundary and lead to unstable training. Note that we use a classification network for sequential inputs that is as simple as possible. In addition to facilitating the subsequent theoretical analysis, the simple-question answerers we propose are much simpler than good baselines for simple question answering over knowledge graphs, without modern architectural features such as bi-directional processing, read-write memory BID6, attention mechanisms BID34 or residual connections BID36. The main reason is that our agent learns to decompose input compound questions into a simplified version which is answerable by such simple classifiers. This can be strong evidence for validating the agent's ability at compound question decomposition. A sketch of the agent's decision process is given below.
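The following minimal PyTorch sketch, under our own naming and sizing assumptions, shows the agent's forward pass (an LSTM memory whose state s_t = [h_t, c_t] feeds a three-way policy head) and the grouping of words into partitions by the sampled actions; it illustrates the description above and is not the authors' code.

```python
import torch
import torch.nn as nn

class DecomposeAgent(nn.Module):
    """LSTM memory over word embeddings + two-layer policy head with three actions."""
    def __init__(self, emb, H=128):
        super().__init__()
        self.emb = emb                                  # embedding shared with answerers
        self.cell = nn.LSTMCell(emb.embedding_dim, H)
        self.policy = nn.Sequential(nn.Linear(2 * H, H), nn.Tanh(), nn.Linear(H, 3))

    def forward(self, words):                           # words: (N,) token ids
        h = c = torch.zeros(1, self.cell.hidden_size)
        actions, logps = [], []
        for t in range(words.size(0)):
            h, c = self.cell(self.emb(words[t]).unsqueeze(0), (h, c))
            s = torch.cat([h, c], dim=1)                # agent state s_t = [h_t, c_t]
            dist = torch.distributions.Categorical(logits=self.policy(s))
            a = dist.sample()
            actions.append(int(a))
            logps.append(dist.log_prob(a))
        return actions, torch.stack(logps).sum()        # actions + rollout log-prob

def partition(words, actions):
    """Group word ids by action; relative order within a partition is preserved."""
    return [torch.tensor([w for w, a in zip(words.tolist(), actions) if a == k])
            for k in range(3)]

agent = DecomposeAgent(nn.Embedding(10000, 100))
words = torch.randint(0, 10000, (7,))
acts, logp = agent(words)
print(acts, [p.tolist() for p in partition(words, acts)])
```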
The agent and the answerers share the same embeddings. The agent can only observe the word embeddings, while the answerers are allowed to update them in the backward pass. We train the three simple-question answerers separately using the cross-entropy loss between the predicted relation and the golden relation. These three answerers are independent of each other. We do not use the pre-training trick in any of the experiments, since we have already observed consistent convergence in different task settings. We reduce the variance of the Monte-Carlo Policy Gradient estimator by taking multiple (≤ 5) rollouts for each question and subtracting a baseline that estimates the expected future reward given the observation at each time step. The Baseline We follow Ranzato et al., who use a linear regressor which takes the agent's memory state s_t as input and minimizes the mean squared loss for training. Such a loss signal is used for updating the parameters of the baseline only. The regressor is an unbiased estimator of expected future rewards, since it only depends on the agent's memory states. Our agent learns an optimal policy to decompose compound questions into simple ones using the Monte-Carlo Policy Gradient (MCPG) method. The partitions of a question are then fed to the corresponding simple-question answerers for policy evaluation. The agent takes the final episodic reward in return. A sketch of one such update follows.
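A single policy-gradient update might then look like the sketch below, reusing the `partition` helper from the previous sketch. It is deliberately simplified: `answerers` and `baseline` are assumed callables (the answerers returning relation logits, the baseline returning a scalar expected-reward estimate trained separately with a mean-squared loss), and the paper's per-time-step baseline is collapsed into one per-question value.

```python
import torch

def reinforce_step(agent, answerers, baseline, words, gold, opt, n_rollouts=4):
    """REINFORCE with a baseline: reward is +1 only if all answerers are right."""
    losses = []
    for _ in range(n_rollouts):
        actions, logp = agent(words)
        parts = partition(words, actions)
        correct = all(p.numel() > 0 and int(ans(p).argmax()) == r
                      for ans, p, r in zip(answerers, parts, gold))
        reward = 1.0 if correct else -1.0
        advantage = reward - baseline(words).detach()   # variance reduction
        losses.append(-advantage * logp)                # policy-gradient surrogate
    opt.zero_grad()
    torch.stack(losses).mean().backward()
    opt.step()
```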
The goal of our experiments is to evaluate our hypothesis that our model discovers useful question partitions and composition orders that help simple-question answerers tackle compound question answering. Our experiments are three-fold. First, we trained the proposed model to master the order of arithmetic operators (e.g., +, −, ×, ÷) on an artificial dataset. Second, we evaluate our method on the standard benchmark dataset MetaQA. Finally, we discuss some interesting properties of our agent by case study. The agent's ability of compound question decomposition can be viewed as the ability of priority assignment. To validate the decomposition ability of our proposed model, we train our model to master the order of arithmetic operations. We generate an artificial dataset of complex algebraic expressions (e.g., 1 + 2 − 3 × 4 ÷ 5 = ? or 1 + (2 − 3) × 4 ÷ 5). An algebraic expression is essentially a question in mathematical language whose corresponding answer is simply a real number. Specifically, a complex algebraic expression is a sequence of arithmetic operators including +, −, ×, ÷, ( and ). We randomly sample a symbol sequence of length N, with the restriction that parentheses must be legal. The number of parentheses is P (≤ 2). The number of symbols surrounded by parentheses is Q. The position of the parentheses is randomly selected to increase the diversity of expression patterns (see the generator sketch below). For example, (+×)+(÷) and +×(+×)−÷ are the operator patterns of the data points (1+2×3)+(4÷5) and 1 + 2 × (3 + 4 × 5) − 6 ÷ 7 with N = 8. This task aims to test whether the learning-to-decompose agent can assign a feasible order of arithmetic operations. We require the agent to assign higher priority to operations surrounded by parentheses and lower priority to the rest of the operations. We also require that our agent can learn a policy from short expressions (N ≤ 8) which generalizes to long ones (13 ≤ N ≤ 16). We use 100-dimensional (D = 100) embeddings for symbols with Glorot initialization BID12. The dimension of the hidden state and cell state of the memory unit H is 128. We use the RMSProp optimizer BID27 to train all the networks with the parameters recommended in the original paper, except the learning rate α. The learning rate for the agent and the answerers is 0.00001, while the learning rate for the baseline is 0.0001. We test the performance in different settings. TAB0 summarizes the experimental results:

Train setting | Test setting | Accuracy (%)
N = 8, P = 0 | N = 13, P = 0 | 99.21
N = 8, P = 1, Q = 3 | N = 13, P = 1, Q = 3 | 93.37
N = 8, P = 1, Q = 3 | N = 13, P = 1, Q = 7 | 66.42

The first line indicates that our agent learns the arithmetic rule that multiplication and division have higher priority than addition and subtraction. The second line indicates that our agent learns to discover the higher-priority expression between parentheses. The third line, compared to the second line, indicates that increasing the distance between the two parentheses can harm performance. We argue that this is because the Long Short-Term Memory unit of our agent suffers when carrying the information of the left parenthesis over such a long distance.
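For concreteness, an expression generator in the spirit of the dataset described above could look as follows; the function name, arguments, and the single-parenthesis-group restriction are our own simplifications.

```python
import random

def random_expression(n_ops=8, parens=1, span=3):
    """Sample an arithmetic expression with `n_ops` operators and, optionally,
    one parenthesized group spanning `span` operators."""
    operands = [str(random.randint(1, 9)) for _ in range(n_ops + 1)]
    ops = [random.choice("+-*/") for _ in range(n_ops)]
    tokens = [operands[0]]
    for op, val in zip(ops, operands[1:]):
        tokens += [op, val]
    if parens:
        # group covers operands i .. i+span, i.e. `span` operators inside
        i = random.randrange(0, n_ops - span + 1)
        tokens.insert(2 * (i + span) + 1, ")")   # after operand i+span
        tokens.insert(2 * i, "(")                # before operand i
    return "".join(tokens)

print(random_expression())   # e.g. '1+2*(3-4*5)+6/7-8+9' (8 operators, one group)
```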
We evaluate our proposed model on the test sets of two challenging KBQA research datasets, i.e., WebQuestions BID5 and MetaQA. Each question in both datasets is labeled with the golden topic entity and the inference chain of relations. The statistics of the MetaQA dataset are shown in TAB2. The number of compound questions in MetaQA is roughly twice that of simple questions. The maximum length of training questions is 16. The size of the vocabulary in questions is 39,568. The coupled knowledge graph contains 43,234 entities and 9 relations. We also augmented the relation set with the inverse relations, as well as a "NO OP" relation as a placeholder. The total number of relations we used is 14, since some inverse relations are meaningless. WebQuestions contains 2,834 questions for training, 944 questions for validation and 2,032 questions for testing, respectively. We use 602 relations for the relation classification task. The number of compound questions in WebQuestions is roughly equal to that of simple questions. Note that a compound question in WebQuestions is decomposed into two partitions, since the maximum number of corresponding relations is two. One can either assume the topic entity of each question is linked, or use a simple string matching heuristic such as character trigram matching to link the topic entity to the knowledge graph directly. We use the former setting, while the performance of the latter is reasonably good. We choose to evaluate the relation detection performance directly. For both datasets, we use 100-dimensional (D = 100) word embeddings with Glorot initialization BID12. The dimension of the hidden state and cell state of the memory unit H is 128. We use the RMSProp optimizer BID27 to train the agent with the parameters recommended in the original paper, except the learning rate. We train the rest of our model using Adam BID16 with default parameters. The learning rate for all the modules is 0.0001, no matter which optimizer is used. We use four samples for the Monte-Carlo Policy Gradient estimator of REINFORCE. The metric for relation detection is overall accuracy, which counts a question as correct only if all relations of the compound question are predicted correctly. TAB2 presents our results on the MetaQA dataset. The last column, for total accuracy, is the most representative of our model's performance, since the only assumption we make about input questions is that the number of corresponding relations is at most three. Previous results BID36 on this dataset focus on leveraging the names of Freebase relations, while we only use question information for classification. We assume that a compound question can be decomposed into at most three simple questions. In practice, this generalized assumption of answerable questions is not always necessary. One example is that WebQuestions only contains compound questions corresponding to two, but not three, relations. It indicates that people tend to ask less complicated questions more often. So we conduct an ablation study for the hyperparameter behind this central assumption of our paper. We assume that all the questions in the MetaQA dataset contain at most two corresponding relations, and run the same code with the same hyperparameters, except that we only use two simple-question answerers. The purpose of this evaluation is to show that our model improves performance on 1-hop and 2-hop questions by giving up the ability to answer three-hop questions. TAB4 presents our results on the ablation test. We can draw the conclusion that there exists a trade-off between answering more complex questions and achieving better performance by limiting the size of the search space. Figure 4 illustrates a continuation of the example in Figure 1 for the case study, generated by our learning-to-decompose agent. Assuming the topic entity e is detected and replaced by a placeholder, the agent may discover two different structures of the question that are consistent with human intuition. Since the knowledge graph does not have a movie-to-movie relation named "share actor with", the lower partition cannot help the answerers classify relations correctly. However, the upper partition will be rewarded. As a result, our agent optimizes its strategies such that it can decompose the original question in the way that benefits the downstream answerers the most. We observe that our model understands the concept of "share" as the behavior "take the inverse relation". That is, "share actors" in a question is decomposed into "share" and "actors" in two partitions. The corresponding formulation is g(e) = f_2(f_1(e)) = (f_2 ∘ f_1)(e). We observe the same phenomenon for "share directors". We believe this is a set of strong evidence supporting our main claims. Understanding compound questions, in terms of the Principle of Semantic Compositionality BID20, requires one to decompose the meaning of a whole into the meanings of parts. While previous works focus on leveraging the knowledge graph for generating a feasible path to answers, we propose a novel approach making full use of question semantics efficiently, in terms of the Principle of Semantic Compositionality. (Figure 4: A continuation of the example in Figure 1. The hollow circle indicates the corresponding action the agent takes at each time step. The upper half is the actual prediction, while the lower half is a potential partition. Since we do not allow a word to join two partitions, the agent learns to separate "share" and "actors" into different partitions to maximize information utilization.) In other words, it is counterintuitive to compress the whole meaning of a variable-length sentence into a fixed-length vector, which leaves the burden to the downstream relation classifier. In contrast, we assume that a compound question can be decomposed into at most three simple questions. Our model generates partitions by a learned policy given a question. The vector representations of each partition are then fed into the downstream relation classifier.
Our learning-to-decompose agent can also serve as a plug-and-play module for other question answering tasks that require understanding compound questions. This paper is an example of how to help simple-question answerers understand compound questions. The answerable-question assumption must be relaxed in order to generalize question answering.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SJl2ps0qKQ
We propose a learning-to-decompose agent that helps simple-question answerers to answer compound question over knowledge graph.
Energy-based models output unnormalized log-probability values given data samples. Such an estimation is essential in a variety of application problems, such as sample generation, denoising, sample restoration, outlier detection, Bayesian reasoning, and many more. However, standard maximum likelihood training is computationally expensive due to the requirement of sampling the model distribution. Score matching potentially alleviates this problem, and denoising score matching is a particularly convenient version. However, previous attempts failed to produce models capable of high-quality sample synthesis. We believe that this is because they only performed denoising score matching over a single noise scale. To overcome this limitation, here we instead learn an energy function using all noise scales. When sampled using annealed Langevin dynamics and a single-step denoising jump, our model produces high-quality samples comparable to state-of-the-art techniques such as GANs, in addition to assigning likelihoods to test data comparable to previous likelihood models. Our model sets a new sample quality baseline among likelihood-based models. We further demonstrate that our model learns the sample distribution and generalizes well on image inpainting tasks. Treating data as stochastic samples from a probability distribution and developing models that can learn such distributions is at the core of solving a large variety of application problems, such as error correction/denoising, outlier/novelty detection, sample generation, invariant pattern recognition, Bayesian reasoning (which relies on good data priors), and many others. Energy-Based Models (EBMs) assign an energy E(x) to each data point x, which implicitly defines a probability by the Boltzmann distribution p_m(x) = e^{−E(x)}/Z. Sampling from this distribution can be used as a generative process that yields plausible samples of x. Compared to other generative models, like GANs, flow-based models, or auto-regressive models, energy-based models have significant advantages. First, they provide explicit (unnormalized) density information, compositionality, better mode coverage and flexibility. Further, they do not require special model architectures, unlike auto-regressive and flow-based models. Recently, energy-based models have been successfully trained with maximum likelihood, but training can be very computationally demanding due to the need to sample the model distribution. Variants with a truncated sampling procedure have been proposed, such as contrastive divergence. Such models learn much faster, with the drawback of not exploring the state space thoroughly. Score matching (SM) (Hyvärinen, 2005) circumvents the requirement of sampling the model distribution. In score matching, the score function is defined to be the gradient of the log-density, or the negative energy function. The expected L2 norm of the difference between the model score function and the data score function is minimized. One convenient way of using score matching is learning the energy function corresponding to a Gaussian kernel Parzen density estimator of the data: p_{σ₀}(x̃) = ∫ q_{σ₀}(x̃|x) p(x) dx.
Though hard to evaluate, the data score is well defined: s_d(x̃) = ∇_x̃ log p_{σ₀}(x̃), and the corresponding objective is: L_SM(θ) = E_{p_{σ₀}(x̃)} ||∇_x̃ log p_{σ₀}(x̃) + ∇_x̃ E(x̃; θ)||². (1) Previous work studied the connection between denoising autoencoders and score matching, and proved the remarkable result that the following objective, named Denoising Score Matching (DSM), is equivalent to the objective above: L_DSM(θ) = E_{p_{σ₀}(x̃,x)} ||∇_x̃ log q_{σ₀}(x̃|x) + ∇_x̃ E(x̃; θ)||². (2) Note that in (2) the Parzen density score is replaced by the derivative of the log-density of the single noise kernel, ∇_x̃ log q_{σ₀}(x̃|x), which is much easier to evaluate. In the particular case of Gaussian noise, log q_{σ₀}(x̃|x) = −||x̃ − x||²/(2σ₀²) + C, and therefore: L_DSM(θ) = E_{p_{σ₀}(x̃,x)} ||(x − x̃)/σ₀² + ∇_x̃ E(x̃; θ)||². (3) The interpretation of objective (3) is simple: it forces the energy gradient to align with the vector pointing from the noisy sample to the clean data sample. To optimize an objective involving the derivative of a function defined by a neural network, the use of double backpropagation was proposed. Deep energy estimator networks first applied this technique to learn an energy function defined by a deep neural network. In that work, and in similar studies, an energy-based model was trained to match a Parzen density estimator of data with a certain noise magnitude. The previous models were able to perform denoising tasks, but they were unable to generate high-quality data samples from a random input initialization. Recently, an excellent generative model was trained by fitting a series of score estimators coupled together in a single neural network, each matching the score of a Parzen estimator with a different noise magnitude. The questions we address here are why learning energy-based models with a single noise level does not permit high-quality sample generation and what can be done to improve energy-based models. Our work builds on key ideas from these previous works. Section 2 provides a geometric view of the learning problem in denoising score matching and provides a theoretical explanation of why training with one noise level is insufficient if the data dimension is high. Section 3 presents a novel method for training energy-based models, Multiscale Denoising Score Matching (MDSM). Section 4 describes empirical results of the MDSM model and comparisons with other models. A minimal sketch of the single-scale objective (3) in code follows.
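Under the Gaussian case just described, a minimal PyTorch sketch of objective (3) could look as follows; `energy` is assumed to be any network mapping a batch of inputs to per-sample scalar energies, and the loss is written in the equivalent form ||∇E − (x̃ − x)/σ₀²||².

```python
import torch

def dsm_loss(energy, x, sigma0=0.1):
    """Single-scale denoising score matching: the energy gradient at a noisy
    point should match (x_tilde - x) / sigma0^2. x: batch of clean samples."""
    x_tilde = x + sigma0 * torch.randn_like(x)
    x_tilde.requires_grad_(True)
    E = energy(x_tilde).sum()
    # create_graph=True enables "double backpropagation" through this gradient
    grad_E, = torch.autograd.grad(E, x_tilde, create_graph=True)
    target = (x_tilde - x) / sigma0 ** 2
    return ((grad_E - target) ** 2).flatten(1).sum(1).mean()
```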
2 A GEOMETRIC VIEW OF DENOISING SCORE MATCHING Prior work used denoising score matching with a range of noise levels, achieving great empirical results. The authors explained that large noise perturbations are required to enable the learning of the score in low-data-density regions. But it is still unclear why a series of different noise levels is necessary, rather than one single large noise level. Here, we analyze the learning process in denoising score matching based on measure concentration properties of high-dimensional random vectors. We adopt the common assumption that the data distribution to be learned is high-dimensional, but only has support around a relatively low-dimensional manifold. If the assumption holds, it causes a problem for score matching: the density, or the gradient of the density, is then undefined outside the manifold, making it difficult to train a valid density model for the data distribution defined on the entire space. Previous works discussed this problem and proposed to smooth the data distribution with a Gaussian kernel to alleviate the issue. To further understand the learning in denoising score matching when the data lie on a manifold X and the data dimension is high, two elementary properties of random Gaussian vectors in high-dimensional spaces are helpful: First, the length distribution of random vectors becomes concentrated at √d·σ, where σ² is the variance of a single dimension. Second, a random vector is always close to orthogonal to a fixed vector. With these premises one can visualize the configuration of noisy and noiseless data points that enter the learning process: a data point x sampled from X and its noisy version x̃ always lie on a line which is almost perpendicular to the tangent space T_x X and intersects X at x. Further, the distance vectors between (x, x̃) pairs all have similar length √d·σ. As a consequence, the set of noisy data points concentrates on a set X̃_{√d·σ} that lies at a distance of roughly √d·σ from the data manifold X. Therefore, performing denoising score matching learning with (x, x̃) pairs generated with a fixed noise level σ, which is the approach taken previously except in a few recent works, will match the score in the set X̃_{√d·σ} and enable denoising of noisy points in the same set. However, the learning provides little information about the density outside this set, farther from or closer to the data manifold, as noisy samples outside X̃_{√d·σ} rarely appear in the training process. An illustration is presented in Figure 1A. Although the probability of a noisy sample falling far from X̃_{√d·σ} is very small in high-dimensional space, the score outside this set still plays a critical role in sampling from random initialization. This analysis may explain why models based on denoising score matching, trained with a single noise level, encounter difficulties in generating data samples when initialized at random. For empirical support of this explanation, see our experiments with models trained with single noise magnitudes (Appendix B). To remedy this problem, one has to apply a learning procedure in which samples with different noise levels are used. Depending on the dimension of the data, the different noise levels have to be spaced narrowly enough to avoid empty regions in the data space. In the following, we will use Gaussian noise and employ a Gaussian scale mixture to produce the noisy data samples for the training (for details, see Section 3.1 and Appendix A). Another interesting property of denoising score matching was suggested in the denoising autoencoder literature: with increasing noise level, the learned features tend to have larger spatial scale. In our experiments we observe a similar phenomenon when training models with denoising score matching at a single noise scale. If one compares the samples in Figure B.1, Appendix B, it is evident that a noise level of 0.3 produces a model that learns short-range correlations spanning only a few pixels, a noise level of 0.6 learns longer stroke structure without coherent overall structure, and a noise level of 1 learns more coherent long-range structure without details such as stroke width variations. This suggests that training with a single noise level in denoising score matching is not sufficient for learning a model capable of high-quality sample synthesis, as such a model has to capture data structure at all scales.
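The two concentration properties invoked at the start of this section are easy to verify numerically. The following small NumPy check (with the dimension chosen to match CIFAR-10-sized images, an illustrative choice) confirms that Gaussian noise vectors have nearly constant length √d·σ and are nearly orthogonal to any fixed direction.

```python
import numpy as np

d, sigma, n = 3072, 0.1, 10000          # d = 32*32*3, e.g. CIFAR-10 dimensionality
noise = sigma * np.random.randn(n, d)

lengths = np.linalg.norm(noise, axis=1)
print(lengths.mean(), np.sqrt(d) * sigma)   # lengths concentrate near sqrt(d)*sigma
print(lengths.std() / lengths.mean())       # relative spread is tiny (~1%)

v = np.random.randn(d)
v /= np.linalg.norm(v)
cos = noise @ v / lengths                   # cosine to a fixed unit vector
print(np.abs(cos).mean())                   # near zero: almost orthogonal
```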
Motivated by the analysis in Section 2, we strive to develop an EBM based on denoising score matching that can be trained with noisy samples in which the noise level is not fixed but drawn from a distribution. The model should approximate the Parzen density estimator of the data, p_{σ₀}(x̃) = ∫ q_{σ₀}(x̃|x) p(x) dx. Specifically, the learning should minimize the difference between the derivative of the energy and the score of p_{σ₀} under the expectation E_{p_M(x̃)} rather than E_{p_{σ₀}(x̃)}, the expectation taken in standard denoising score matching. Here p_M(x̃) = ∫ q_M(x̃|x) p(x) dx is chosen to cover the signal space more evenly, to avoid the measure concentration issue described above. The resulting Multiscale Score Matching (MSM) objective is: L_MSM(θ) = E_{p_M(x̃)} ||∇_x̃ log p_{σ₀}(x̃) + ∇_x̃ E(x̃; θ)||². (4) Compared to the objective of denoising score matching, the only change in the new objective is the expectation. Both objectives are consistent if p_M(x̃) and p_{σ₀}(x̃) have the same support, as shown formally in Proposition 1 of Appendix A. In Proposition 2, we prove that Equation 4 is equivalent to the following denoising score matching objective: L_MSM(θ) = E_{p_M(x̃)} E_{p_{σ₀}(x|x̃)} ||∇_x̃ log q_{σ₀}(x̃|x) + ∇_x̃ E(x̃; θ)||². (5) The above holds for any noise kernel q_{σ₀}(x̃|x), but Equation 5 contains the reversed expectation, which is difficult to evaluate in general. To proceed, we choose q_{σ₀}(x̃|x) to be Gaussian, and also choose q_M(x̃|x) to be a Gaussian scale mixture: q_M(x̃|x) = ∫ q_σ(x̃|x) p(σ) dσ with q_σ(x̃|x) = N(x, σ² I_d). After algebraic manipulation and one approximation (see the derivation following Proposition 2 in Appendix A), we can transform Equation 5 into a more convenient form, which we call Multiscale Denoising Score Matching (MDSM): L_MDSM(θ) = E_{p(σ) q_σ(x̃|x) p(x)} ||∇_x̃ E(x̃; θ) − (x̃ − x)/σ₀²||². (6) The square loss term evaluated at noisy points x̃ at larger distances from the true data points x will have larger magnitude. Therefore, in practice it is convenient to add a monotonically decreasing weighting term l(σ), e.g., l(σ) = 1/σ², for balancing the different noise scales. Ideally, we want our model to learn the correct gradient everywhere, so we would need to add noise at all levels. However, learning denoising score matching at very large or very small noise levels is useless. At very large noise levels the information of the original sample is completely lost. Conversely, in the limit of small noise, the noisy sample is virtually indistinguishable from real data. In neither case can one learn a gradient which is informative about the data structure. Thus, the noise range needs only to be broad enough to encourage learning of data features over all scales. In particular, we do not sample σ but instead choose a series of fixed values σ_1, ..., σ_K, arriving at the final objective: L_MDSM(θ) = Σ_{i=1}^K l(σ_i) E_{q_{σ_i}(x̃|x) p(x)} ||∇_x̃ E(x̃; θ) − (x̃ − x)/σ₀²||². (7) It may seem that σ₀ is an important hyperparameter of our model, but after our approximation σ₀ becomes just a scaling factor in front of the energy function, and can simply be set to one as long as the temperature range during sampling is scaled accordingly (see Section 3.2). Therefore the only hyperparameter is the range of noise levels used during training. On the surface, objective (7) looks similar to previous multi-noise-level score matching objectives. The important difference is that Equation 7 approximates a single distribution, namely p_{σ₀}(x̃), the data smoothed with one fixed kernel q_{σ₀}(x̃|x). In contrast, those approaches approximate the scores of multiple distributions, the family of distributions {p_{σᵢ}(x̃): i = 1, ..., n} resulting from the data smoothed by kernels of different widths σᵢ. Because our model learns only a single target distribution, it does not require the noise magnitude as input. A sketch of objective (7) in code follows.
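A training-loss sketch of objective (7), with the l(σ) = 1/σ² weighting and noise levels spread across the batch, might look as follows; the particular noise levels and batch handling are illustrative assumptions.

```python
import torch

def mdsm_loss(energy, x, sigmas=(0.05, 0.3, 0.6, 1.2), sigma0=0.1):
    """Multiscale denoising score matching: one noise scale per sub-batch,
    weighted by l(sigma) = 1 / sigma^2. Assumes batch size >= len(sigmas)."""
    chunks = x.chunk(len(sigmas))              # spread noise levels over the batch
    loss = 0.0
    for sigma, xc in zip(sigmas, chunks):
        x_tilde = (xc + sigma * torch.randn_like(xc)).requires_grad_(True)
        E = energy(x_tilde).sum()
        grad_E, = torch.autograd.grad(E, x_tilde, create_graph=True)
        target = (x_tilde - xc) / sigma0 ** 2  # note: sigma0 in the target, not sigma
        sq = ((grad_E - target) ** 2).flatten(1).sum(1)
        loss = loss + (sq / sigma ** 2).mean() # l(sigma) weighting
    return loss / len(sigmas)                  # constant rescaling of Equation 7
```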
Simulated annealing improves mode exploration by sampling first at high temperature and then cooling down gradually. This has been successfully applied to challenging computational problems such as combinatorial optimization. To apply simulated annealing to Langevin dynamics, note that in a model of the Brownian motion of a physical particle, the temperature in the Langevin equation enters as a factor √T in front of the noise term (some literature uses β⁻¹, where β = 1/T). Adopting the √T convention, the Langevin sampling process is given by:

x_{t+1} = x_t − ε ∇_x E(x_t; θ) + √(2ε T_t) z_t,  z_t ∼ N(0, I_d),   (8)

where T_t follows some annealing schedule and ε denotes the step length, which is fixed. During sampling, samples behave very much like physical particles under Brownian motion in a potential field. Because the particles have average energies close to their current thermal energy, they explore the state space at different distances from the data manifold depending on temperature. Eventually, they settle somewhere on the data manifold. The behavior of a particle's energy value during a typical annealing process is depicted in Appendix Figure F.1B. If the obtained sample is still slightly noisy, we can apply a single-step gradient denoising jump to improve sample quality:

x_clean = x̃ − σ0² ∇_x̃ E(x̃; θ).   (9)

This denoising procedure can be applied to a noisy sample with any level of Gaussian noise because in our model the gradient automatically has the right magnitude to denoise the sample. This process is justified by the Empirical Bayes interpretation of denoising, as studied in prior work. The NCSN authors also call their sample generation process annealed Langevin dynamics. It should be noted that their sampling process does not coincide with Equation 8. Their sampling procedure is best understood as sequentially sampling a series of distributions corresponding to the data distribution corrupted by different levels of noise. Training and Sampling Details. The proposed energy-based model is trained on standard image datasets, specifically MNIST, Fashion MNIST, CelebA and CIFAR-10. During training we set σ0 = 0.1 and train over a noise range of σ ∈ [0.05, 1.2], with the different noise levels uniformly spaced over the batch dimension. For MNIST and Fashion MNIST we used geometrically distributed noise in the range [0.1, 3]. The weighting factor l(σ) is always set to 1/σ² to make the square term roughly independent of σ. We fix the batch size at 128 and use the Adam optimizer with a learning rate of 5 × 10⁻⁵. For MNIST and Fashion MNIST, we use a 12-layer ResNet with 64 filters; for the CelebA and CIFAR-10 datasets we used an 18-layer ResNet with 128 filters. No normalization layer was used in any of the networks. We designed the output layer of all networks to take a generalized quadratic form. Because the energy function is anticipated to be approximately quadratic with respect to the noise level, this modification was able to boost the performance significantly. For more detail on training and model architecture, see Appendix D. One notable result is that since our training method does not involve sampling, we achieved a speed-up of roughly an order of magnitude compared to maximum-likelihood training using Langevin dynamics. Our method thus enables the training of energy-based models even when limited computational resources prohibit maximum likelihood methods. We found that the choice of the maximum noise level has little effect on learning as long as it is large enough to encourage learning of the longest-range features in the data.
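A minimal sketch of the annealed Langevin sampler of Equation 8, followed by the denoising jump of Equation 9; this is our own rendering under the stated √T convention, with hypothetical function names:

import torch

def annealed_langevin_sample(energy, shape, schedule, eps=0.02, sigma0=0.1):
    # schedule: sequence of temperatures T_t, decaying from ~100 towards ~0
    x = torch.rand(shape)                        # random initialization
    for T in schedule:
        x = x.detach().requires_grad_(True)
        grad_e = torch.autograd.grad(energy(x).sum(), x)[0]
        x = x - eps * grad_e + (2.0 * eps * T) ** 0.5 * torch.randn_like(x)
    # single-step denoising jump (Equation 9)
    x = x.detach().requires_grad_(True)
    grad_e = torch.autograd.grad(energy(x).sum(), x)[0]
    return (x - sigma0 ** 2 * grad_e).detach()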
However, as expected, learning with too small or too large noise levels is not beneficial and can even destabilize the training process. Further, our method appeared to be relatively insensitive to how the noise levels are distributed over a chosen range. Geometrically spaced noise, as used for NCSN, and linearly spaced noise both work, although in our case learning with linearly spaced noise was somewhat more robust. For sampling the learned energy function we used annealed Langevin dynamics with an empirically optimized annealing schedule; see Figure F.1B for the particular shape of the annealing schedule we used. In contrast, annealing schedules with theoretically guaranteed convergence properties take extremely long. The range of temperatures to use in the sampling process depends on the choice of σ0, as the equilibrium distribution consists roughly of images with Gaussian noise of magnitude √T σ0 added on top. To ease traveling between modes far apart and to ensure even sampling, the initial temperature needs to be high enough to inject noise of sufficient magnitude. A choice of T = 100, which corresponds to added noise of magnitude √100 · 0.1 = 1, seems to be a sufficient starting point. For the step length ε we generally used 0.02, although any value within the range [0.015, 0.05] seemed to work fine. After the annealing process we performed a single denoising step to further enhance sample quality. We evaluated sample quality on CIFAR-10 quantitatively with the Inception Score and FID. We achieved an Inception Score of 8.31 and an FID of 31.7, comparable to modern GAN approaches. Scores for the CelebA dataset are not reported here as they are not commonly reported and may depend on the specific pre-processing used. More samples and training images are provided in the Appendix for visual inspection. We believe that visual assessment is still essential because of possible issues with the Inception Score. Indeed, we also found that the visually most impressive samples were not necessarily the ones achieving the highest Inception Score. Although overfitting is not a common concern for generative models, we still tested our model for overfitting. We found no indication of overfitting by comparing model samples with their nearest neighbors in the data set; see Figure C.1 in the Appendix. Mode Coverage. We repeated with our model the 3-channel MNIST mode coverage experiment similar to the one reported previously. An energy-based model was trained on 3-channel data where each channel is a random MNIST digit. Then 8000 samples were taken from the model and each channel was classified using a small MNIST classifier network. We covered 966 modes, comparable to GAN approaches. Training was successful and our model assigned low energy to all the learned modes, but some modes were not accessed during sampling, likely due to the Langevin dynamics failing to explore these modes. A better sampling technique or a Maximum Entropy Generator could improve this result. Image Inpainting. Image inpainting can be achieved with our model by clamping a part of the image to the ground truth and performing the same annealed Langevin and jump sampling procedure on the missing part of the image. Noise appropriate to the sampling temperature needs to be added to the clamped inputs. The quality of inpainting of our model trained on CelebA and CIFAR-10 can be assessed in Figure 3. For CIFAR-10 inpainting we used the test set. Log-likelihood estimation. For energy-based models, the log density can be obtained after estimating the partition function with Annealed Importance Sampling (AIS) or Reverse AIS.
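Before turning to the likelihood estimates, we note that the clamped sampling used for inpainting above admits an equally simple sketch (again ours, with illustrative names); the known pixels are reset after every Langevin step, with noise matched to the current temperature:

import torch

def inpainting_step(energy, x, known, mask, T, eps=0.02, sigma0=0.1):
    # mask: 1 where pixels are missing, 0 where they are known (clamped)
    x = x.detach().requires_grad_(True)
    grad_e = torch.autograd.grad(energy(x).sum(), x)[0]
    x = x - eps * grad_e + (2.0 * eps * T) ** 0.5 * torch.randn_like(x)
    # clamp known pixels to ground truth plus temperature-matched noise
    noisy_known = known + (T ** 0.5) * sigma0 * torch.randn_like(known)
    return torch.where(mask.bool(), x, noisy_known).detach()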
In our experiments on the CIFAR-10 model, similar to previous reports, there is still a substantial gap between the AIS and Reverse AIS estimates, even after very substantial computational effort. In Table 1 we report both estimates. We also report a density of 1.21 bits/dim on the MNIST dataset, and we refer readers to the literature for comparison to other models on this dataset. More details on this experiment are provided in the Appendix. Outlier Detection. Recent studies have reported intriguing behavior of high-dimensional density models on out-of-distribution samples. Specifically, they showed that many models assign higher likelihood to out-of-distribution samples than to real data samples. We investigated whether our model behaves similarly. Our energy function is only trained outside the data manifold where samples are noisy, so the energy value at clean data points may not always be well behaved. Therefore, we added noise with magnitude σ0 before measuring the energy value. We find that our network behaves similarly to previous likelihood models: it assigns lower energy, thus higher density, to some OOD samples. We show one example of this phenomenon in Appendix Figure F.1A. We also attempted to use the denoising performance, i.e., the objective function, to perform outlier detection. Intriguingly, the results are similar to those obtained using the energy value. Denoising performance seems to correlate more with the variance of the original image than with its content. In this work we provided analyses and empirical results for understanding the limitations of learning the structure of high-dimensional data with denoising score matching. We found that the objective function confines learning to a small set due to the measure concentration phenomenon in random vectors. Therefore, sampling the learned distribution outside the set where the gradient is learned does not produce good results. One remedy to learn meaningful gradients in the entire space is to use samples during learning that are corrupted by different amounts of noise. The NCSN model applied this strategy very successfully. The central contribution of our paper is to investigate how to use a similar learning strategy in EBMs. Specifically, we proposed a novel EBM, the Multiscale Denoising Score Matching (MDSM) model. The new model is capable of denoising, producing high-quality samples from random noise, and performing image inpainting. While also providing density information, our model learns an order of magnitude faster than models based on maximum likelihood. Our approach is conceptually similar to the idea of combining denoising autoencoders and annealing, though this idea was proposed in the context of pre-training neural networks for classification applications. Previous efforts at learning energy-based models with score matching were either computationally intensive or unable to produce high-quality samples comparable to those obtained by other generative models such as GANs. Earlier works trained energy-based models with the denoising score matching objective, but the resulting models cannot perform sample synthesis from random noise initialization. Recently, the NCSN model was proposed, capable of high-quality sample synthesis. This model approximates the scores of a family of distributions obtained by smoothing the data with kernels of different widths. The sampling in the NCSN model starts with sampling the distribution obtained with the coarsest kernel and successively switches to distributions obtained with finer kernels. Unlike NCSN, our method learns an energy-based model corresponding to p_σ0(x̃) for a fixed σ0.
This method improves score matching in high-dimensional space by matching the gradient of an energy function to the score of p_σ0(x̃) in a set that avoids the measure concentration issue. All told, we offer a novel EBM that achieves high-quality sample synthesis and that, among EBM approaches, provides a new state of the art. Compared to the NCSN model, our model is more parsimonious and can support single-step denoising without prior knowledge of the noise magnitude. But our model performs slightly worse than the NCSN model, which could have several reasons. First, the derivation of Equation 6 requires an approximation to keep the training procedure tractable, which could reduce the performance. Second, the NCSN's output is a vector that, at least during optimization, does not always have to be the derivative of a scalar function. In contrast, in our model the network output is a scalar function. Thus it is possible that the NCSN model performs better because it explores a larger set of functions during optimization. In this section, we provide a formal discussion of the MDSM objective and suggest it as an improved score matching formulation in high-dimensional space. Prior work illustrated the connection between the model score −∇_x̃ E(x̃; θ) and the score of the Parzen window density estimator, ∇_x̃ log p_σ0(x̃). Specifically, the objective is Equation 1, which we restate here:

L_SM(θ) = E_{p_σ0(x̃)} [ ||∇_x̃ E(x̃; θ) + ∇_x̃ log p_σ0(x̃)||² ].   (1)

Our key observation is: in high-dimensional space, due to the concentration of measure, the expectation w.r.t. p_σ0(x̃) overweighs a thin shell at roughly distance √dσ0 from the empirical distribution p(x). Though in theory this is not a problem, in practice it leads to the score being well matched only on this shell. Based on this observation, we suggest replacing the expectation w.r.t. p_σ0(x̃) with an expectation w.r.t. a distribution p_M(x̃) that has the same support as p_σ0(x̃) but can avoid the measure concentration problem. We call this multiscale score matching, and the objective is the following:

L_MSM(θ) = E_{p_M(x̃)} [ ||∇_x̃ E(x̃; θ) + ∇_x̃ log p_σ0(x̃)||² ].   (4)

Given that p_M(x̃) and p_σ0(x̃) have the same support, it is clear that L_MSM = 0 is equivalent to L_SM = 0, due to the proof of Theorem 2 in Hyvärinen's score matching work. We follow the same procedure as in denoising score matching to prove the equivalence to a denoising form. Thus we have:

L_MDSM*(θ) = E_{p_M(x̃)} E_{q_σ0(x|x̃)} [ ||∇_x̃ E(x̃; θ) + ∇_x̃ log q_σ0(x̃|x)||² ].   (5)

The above analysis applies to any noise distribution, not only Gaussian, but L_MDSM* has a reversed expectation form that is not easy to work with. To proceed further we study the case where q_σ0(x̃|x) is Gaussian and choose q_M(x̃|x) as a Gaussian scale mixture with p_M(x̃) = ∫ q_M(x̃|x) p(x) dx. By Proposition 1 and Proposition 2, we have the following form to optimize:

E_{p_M(x̃)} E_{q_σ0(x|x̃)} [ ||∇_x̃ E(x̃; θ) − (x̃ − x)/σ0²||² ].   (*)

To minimize Equation (*), we can use the following importance sampling procedure: we sample from the empirical distribution p(x), then sample the Gaussian scale mixture q_M(x̃|x), and finally weight the sample by q_σ0(x|x̃)/q_M(x|x̃). We expect this ratio to be close to 1 for the following reasons: Using Bayes' rule, q_σ0(x|x̃) = p(x) q_σ0(x̃|x)/p_σ0(x̃), we can see that q_σ0(x|x̃) only has support on the discrete data points x; the same holds for q_M(x|x̃). Because x̃ is generated by adding Gaussian noise to a real data sample, both posteriors should be highly concentrated on the original sample point x. Therefore, in practice, we ignore the weighting factor and use Equation 6. Improving upon this approximation is left for future work.
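The Gaussian computation behind Equation (*) is short enough to spell out; in our notation:

\log q_{\sigma_0}(\tilde{x}\mid x) = -\frac{\lVert \tilde{x}-x\rVert^2}{2\sigma_0^2} + \text{const},
\qquad
\nabla_{\tilde{x}} \log q_{\sigma_0}(\tilde{x}\mid x) = -\frac{\tilde{x}-x}{\sigma_0^2},

so that substituting this kernel score into Equation 5 turns the squared term into ||∇_x̃ E(x̃; θ) − (x̃ − x)/σ0²||², i.e., exactly the integrand of Equation (*).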
To compare with previous methods, we trained an energy-based model with denoising score matching using one noise level on MNIST, initialized the sampling with Gaussian noise of the same level, and sampled with Langevin dynamics at T = 1 for 1000 steps, performing one denoising jump to recover the model's best estimate of the clean sample; see Figure B.1. We used the same 12-layer ResNet as in the other MNIST experiments. Models were trained for 100000 steps before sampling. We demonstrate that the model does not simply memorize training examples by comparing model samples with their nearest neighbors in the training set. We use Fashion MNIST for this demonstration because overfitting can occur there more easily than on more complicated datasets; see Figure C.1. For MNIST and Fashion MNIST we used a 12-layer ResNet with 64 filters; for CelebA and CIFAR-10 we used an 18-layer ResNet with 128 filters on the first layer. All networks used the ELU activation function. We did not use any normalization in the ResBlocks, and the filter number is doubled at each downsampling block. Details about the structure of the networks used can be found in our code release. All mentioned models can be trained on 2 GPUs within 2 days. Since the gradient of our energy model scales linearly with the noise, we expected our energy function to scale quadratically with the noise magnitude. Therefore, we modified the standard energy-based network output layer to take a flexible quadratic form in the last-layer features, where a_i, c_i, d_i and b_1, b_2, b_3 are learnable parameters and h_i is the (flattened) output of the last residual block. We found this modification to significantly improve performance compared to using a simple linear last layer. For CIFAR and CelebA we trained for 300k weight updates, saving a checkpoint every 5000 updates. We then took 1000 samples from each saved network and used the network with the lowest FID score. For MNIST and Fashion MNIST we simply trained for 100k updates and used the last checkpoint. During training we padded MNIST and Fashion MNIST to 32×32 for convenience and randomly flipped CelebA images. No other modification was performed. We only constrained the gradient of the energy function; the energy value itself could in principle be unbounded. However, we observed that the energy values naturally stabilize, so we did not explicitly regularize them. The annealed sampling schedule is optimized to improve sample quality for the CIFAR-10 dataset and consists of a total of 2700 steps. For other datasets the shape of the schedule has less effect on sample quality; see Figure F.1B for the shape of the annealing schedule used. For the log-likelihood estimation we initialized the reverse chain on test images, then sampled 10000 intermediate distributions using 10 HMC updates each. The temperature schedule is roughly exponentially shaped and the reference distribution is an isotropic Gaussian. The variance of the estimation was generally less than 10% on the log scale. Due to the high variance of the estimates, and to avoid being dominated by a single outlier, we report the average of the log density instead of the log of the average density. We provide more inpainting examples and further demonstrate the mixing during the sampling process in Figure E.1. We also provide more samples for readers to visually judge the quality of our sample generation in Figures E.2, E.3 and E.4. All samples are randomly selected. Figure F.1A (caption): Energy values for the CIFAR-10 train, CIFAR-10 test and SVHN datasets for a network trained on CIFAR-10 images. Note that the network does not overfit to the training set but, just like most deep likelihood models, assigns lower energy to SVHN images than to its own training data.
Figure F.1B (caption): Annealing schedule and a typical energy trace for a sample during annealed Langevin sampling. The energy of the sample is proportional to the temperature, indicating that sampling is close to a quasi-static process.
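The paper does not spell out the exact algebraic form of the generalized quadratic output layer described in Appendix D above, so the following PyTorch module is only one plausible parameterization consistent with the parameter names a_i, c_i, b_1, b_2, b_3; in particular, how the d_i coefficients enter is our guess:

import torch
import torch.nn as nn

class QuadraticEnergyHead(nn.Module):
    # One plausible form: E(h) = b1 * sum_i (a_i h_i)^2 + b2 * sum_i c_i h_i + b3.
    # The d_i parameters mentioned in the text could enter, e.g., as a second
    # linear term; the exact form used by the authors may differ.
    def __init__(self, dim):
        super().__init__()
        self.a = nn.Parameter(torch.randn(dim) * 0.01)
        self.c = nn.Parameter(torch.randn(dim) * 0.01)
        self.b = nn.Parameter(torch.zeros(3))

    def forward(self, h):                 # h: (B, dim) flattened ResNet features
        quad = ((self.a * h) ** 2).sum(dim=1)
        lin = (self.c * h).sum(dim=1)
        return self.b[0] * quad + self.b[1] * lin + self.b[2]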
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HJeFmkBtvB
Learned energy based model with score matching
A restricted Boltzmann machine (RBM) learns a probabilistic distribution over its input samples and has numerous uses like dimensionality reduction, classification and generative modeling. Conventional RBMs accept vectorized data, which dismisses potentially important structural information in the original tensor (multi-way) input. Matrix-variate and tensor-variate RBMs, named MvRBM and TvRBM, have been proposed but are all restrictive by construction. This work presents the matrix product operator RBM (MPORBM) that utilizes a tensor network generalization of Mv/TvRBM, preserves input formats in both the visible and hidden layers, and results in higher expressive power. A novel training algorithm integrating contrastive divergence and an alternating optimization procedure is also developed. Existing RBM models are not directly applicable to matrix and tensor data. The first RBM designed for tensor inputs is described in prior work, where the visible layer is represented as a tensor but the hidden layer is still a vector. Furthermore, the connection between the visible and hidden layers is described by a canonical polyadic (CP) tensor decomposition, which constrains the model representation capability. Another RBM-related model that utilizes tensor input is the matrix-variate RBM (MvRBM). The visible and hidden layers in an MvRBM are both matrices. Nonetheless, to limit the number of parameters, an MvRBM models the connection between the visible and hidden layers through two separate matrices, which restricts the ability of the model to capture correlations between different data modes. All these issues have motivated this work. Specifically, we propose a matrix product operator (MPO) representation of the weight tensor:

W(i_1, ..., i_d; j_1, ..., j_d) = Σ_{r_0, ..., r_d} W^(1)[r_0, i_1, j_1, r_1] W^(2)[r_1, i_2, j_2, r_2] ··· W^(d)[r_{d−1}, i_d, j_d, r_d].

The "building blocks" of the MPO are the 4-way tensors W^(k) of size R_{k−1} × I_k × J_k × R_k, also called the MPO-cores. The summations over the rank indices r_k are the key ingredients in being able to express generic weight tensors W. The storage complexity of an MPORBM with uniform ranks and dimensions is O(dIJR²), which is far smaller than storing the full weight tensor. Computing the conditional probabilities contracts the visible (or hidden) layer tensor with the MPO-cores into a d-way tensor, which is then added elementwise with the corresponding bias tensor. The final step in the computation of the conditional probability is an elementwise application of the logistic sigmoid function on the resulting tensor. Let Θ = {B, C, W^(1), ..., W^(d)} denote all model parameters; training maximizes the log-likelihood with respect to the model parameter Θ. Similar to the standard RBM BID0, the gradient of the log-likelihood is the difference between a data-dependent expectation and a model expectation of the gradient of the negative energy. We mainly use the contrastive divergence (CD) procedure to train the MPORBM model. First, a Gibbs chain is initialized with one particular training sample V = X_train, followed by K Gibbs sampling steps, which results in the chain {(V^(0), H^(0)), ..., (V^(K), H^(K))}. The derivatives of the log-likelihood with respect to the bias tensors B, C and the MPO-cores are then estimated from this chain, and updating the MPO-cores one at a time in an alternating fashion, which we call CD-SU henceforth, will be demonstrated through numerical experiments. In the first experiment, we demonstrate the superior data classification accuracy of the MPORBM. Finally, we show that an MPORBM is good at generative modeling, exemplified by image completion. We tested this generative task on the binarized MNIST dataset: one half of the image was provided to the model, which was then used to complete the other half.
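To make the MPO contraction concrete, here is a small NumPy sketch of applying an MPO-parameterized weight tensor to a visible tensor; it is our own illustration of the standard MPO contraction, not code from the paper:

import numpy as np

def mpo_apply(cores, v):
    # cores[k]: 4-way MPO-core of shape (R_{k-1}, I_k, J_k, R_k), R_0 = R_d = 1
    # v: visible tensor of shape (I_1, ..., I_d); returns tensor (J_1, ..., J_d)
    t = v[None, ...]                       # prepend a trivial rank dimension
    for core in cores:
        t = np.tensordot(core, t, axes=([0, 1], [0, 1]))  # contract rank and I_k
        t = np.moveaxis(t, 0, -1)          # move the new J_k index to the end
    return t[0]                            # drop the trivial final rank (R_d = 1)

# toy usage: d = 3 modes, I = 4, J = 3, uniform rank R = 2
rng = np.random.default_rng(0)
ranks = [1, 2, 2, 1]
cores = [rng.normal(size=(ranks[k], 4, 3, ranks[k + 1])) for k in range(3)]
v = rng.normal(size=(4, 4, 4))
C = np.zeros((3, 3, 3))                    # hidden bias tensor
p_hidden = 1.0 / (1.0 + np.exp(-(mpo_apply(cores, v) + C)))  # P(H=1|V)
print(p_hidden.shape)                      # (3, 3, 3)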
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rye9F3Vvo7
Propose a general tensor-based RBM model which can compress the model greatly at the same keep a strong model expression capacity
Autonomous driving is still considered an “unsolved problem” given its inherent important variability and the fact that many processes associated with its development, like vehicle control and scene recognition, remain open issues. Although reinforcement learning algorithms have achieved notable results in games and some robotic manipulations, this technique has not been widely scaled up to more challenging real-world applications like autonomous driving. In this work, we propose a deep reinforcement learning (RL) algorithm embedding an actor-critic architecture with multi-step returns to achieve better robustness of the agent's learning strategies when acting in complex and unstable environments. The experiment is conducted with the CARLA simulator, offering customizable and realistic urban driving conditions. The developed deep actor RL guided by a policy-evaluator critic distinctly surpasses the performance of a standard deep RL agent. An important approach for goal-oriented optimization is reinforcement learning (RL), inspired by behaviorist psychology BID25. The frame of RL is an agent learning through interaction with its environment, driven by an impact (reward) signal. The environment's return reinforces the agent to select new actions improving the learning process, hence the name reinforcement learning BID10. RL algorithms have achieved notable results in many domains such as games BID16 and advanced robotic manipulations BID13, beating human performance. However, standard RL strategies that explore and learn at random lose efficiency and become computationally intractable when dealing with high-dimensional and complex environments BID26. Autonomous driving is one of the current highly challenging tasks that is still an "unsolved problem" more than one decade after the promising 2007 DARPA Urban Challenge BID4. The origin of its difficulty lies in the important variability inherent to the driving task (e.g. uncertainty of human behavior, diversity of driving styles, complexity of scene perception...). In this work, we propose to implement an advantage actor-critic approach with multi-step returns for autonomous driving. This type of RL has demonstrated good convergence performance and faster learning in several applications, which makes it among the preferred RL algorithms BID7. Actor-critic RL consolidates the robustness of the agent's learning strategy by using a temporal difference (TD) update to control returns and guide exploration. The training and evaluation of the approach are conducted with the recent CARLA simulator BID6. Designed as a server-client system, where the server runs the simulation commands and renders the scene readings in return, CARLA is an interesting tool since physical autonomous urban driving generates major infrastructure costs and logistical difficulties. It particularly offers a realistic driving environment with challenging variability in properties such as weather conditions, illumination, and density of cars and pedestrians. The next sections review previous work on actor-critic RL and provide a detailed description of the proposed method. After presenting the CARLA simulator and its advantages for this application, we evaluate our model using this environment and discuss experimental results. Various types of RL algorithms have been introduced; they are classified into three categories, actor, critic or actor-critic, depending on whether they rely on a parameterized policy, a value function, or a combination of both to predict actions BID12.
In actor-only methods, a gradient is generated to update the policy parameters in a direction of improvement BID28. Although policy gradients offer strong convergence guarantees, they may suffer from high variance, resulting in slow learning BID2. On the other hand, critic-only methods, built on value function approximation, use TD learning and show lower variance of the estimated returns BID3. However, they lack a reliable guarantee of converging and reaching the real optimum BID7. Actor-critic methods combine the advantages of the two previous ones by introducing a repetitive cycle of policy evaluation and improvement. BID1 is considered the starting point that defined the basics of the actor-critic algorithms commonly used in recent research. Since then, several algorithms have been developed with different directions of improvement. BID27 introduced the Fuzzy Actor-Critic Reinforcement Learning Network (FACRLN), which involves a single neural network to approximate both the actor and the critic. Based on the same strategy, BID18 developed the Consolidated Actor-Critic Model (CACM). BID11 used for the first time a natural gradient BID0 for the policy updates in their actor-critic algorithm. BID23 presented the Deterministic Policy Gradient algorithm (DPG), which uses a learned value estimate to train a deterministic policy. Recently, BID17 proposed the Asynchronous Advantage Actor-Critic (A3C) algorithm, where multiple agents operate in parallel, allowing data decorrelation and diversity of the learning experience. Although several actor-critic methods have been developed, most of them were tested only on standard RL benchmarks. The latter generally include basic tasks of low complexity compared to real-world applications, like cart-pole balancing BID27 BID11, maze problems BID18, multi-armed bandits BID23, Atari games BID17 BID8 and OpenAI Gym tasks BID19. Our work's contribution consists in extending actor-critic RL to a very challenging task, namely urban autonomous driving. The domain setting is particularly difficult to handle due to intricate and conflicting dynamics. Indeed, the driving agent must interact, in changing weather and lighting conditions and through a wide action space, with several actors that may behave unexpectedly, identify traffic rules and street lights, estimate appropriate speed and distance, and so on. Our approach, which will be detailed in the next section, incorporates an actor and a multi-step TD critic component to improve the stability of the RL method. The RL task considered in this work is a Markov Decision Process (MDP) defined by the tuple (S, A, p, r, γ, ρ0, H), where S is the set of states, A is the set of actions, p(s_{t+1}|s_t, a_t) is the state transition distribution giving the probability of reaching state s_{t+1} in the next time step given the current state and action, r is a reward function, γ is the discount factor, ρ0 is the initial state distribution and H the horizon. Consider the sum of expected rewards (return) of a trajectory τ = (s_0, a_0, ..., s_{H−1}, a_{H−1}, s_H). An RL setting aims at learning a policy π with parameters θ (either deterministic or stochastic) that maps each state s to an optimal action a maximizing the return R of the trajectory:
R(τ) = Σ_{t=0}^{H−1} γ^t r(s_t, a_t).

Following the discounted return expressed above, we can define a state value function V(s): S → R and a state-action value function Q(s, a): S × A → R to measure, respectively, the returns of the current state and of the current state-action pair estimated under policy π:

V^π(s_t) = E_π[R_t | s_t],  Q^π(s_t, a_t) = E_π[R_t | s_t, a_t].

In value-based RL algorithms such as Q-learning, a value function is approximated to select the best action according to the maximum value attributed to each state-action pair. On the other hand, policy-based methods directly optimize a parameterized policy without using a value function. They instead use gradient descent, as in the family of REINFORCE algorithms BID28, updating the policy parameters θ in the direction:

∇_θ J(θ) = E_π[∇_θ log π_θ(a_t|s_t) R_t].   (4)

The main problem with policy-based methods is that the score function R_t uses the averaged rewards calculated at the end of a trajectory, which may lead to the inclusion of "bad" actions and hence slow learning. The solution provided by the actor-critic framework is to replace the reward function R_t in the policy gradient (Equation 4) with the action value function, which enables the agent to learn the long-term value of a state and therefore enhances its prediction decisions:

∇_θ J(θ) = E_π[∇_θ log π_θ(a_t|s_t) Q_ω(s_t, a_t)].   (5)

We then train a critic to approximate this value function, parameterized with ω, and update the model accordingly. At this point, we can conclude that an efficient way to derive an optimal control of policies is to evaluate them using approximated value functions. Hence, building accurate value function estimators results in better policy evaluation and faster learning. TD learning, combining Monte Carlo methods and dynamic programming BID25, has proved to be an effective way to calculate good approximations of value functions by allowing an efficient reuse of rewards during policy evaluation. It consists in taking an action according to the policy and bootstrapping the 1-step sampled return from the value function estimate, resulting in the following 1-step TD target:

y_t = r_{t+1} + γ V(s_{t+1}).

Given this return estimate, we obtain the 1-step TD update rule that adjusts the value function according to the TD error δ_t with step size β:

V(s_t) ← V(s_t) + β δ_t,  where δ_t = r_{t+1} + γ V(s_{t+1}) − V(s_t).

At this level, the actor-critic algorithm still suffers from high variance. In order to reduce the variance of the policy gradient and stabilize learning, we can subtract a baseline function, e.g. the state value function, from the policy gradient. For that, we define the advantage function A(s_t, a_t), which quantifies the improvement of taking action a_t compared to the average value V(s_t):

A(s_t, a_t) = Q(s_t, a_t) − V(s_t).

An approximation of the advantage function is required since it involves the two value functions Q(s_t, a_t) and V(s_t). Therefore, let us reformulate A(s_t, a_t) as the difference between the expected future reward and the actual reward that the agent receives from the environment BID9:

A(s_t, a_t) ≈ r_{t+1} + γ V(s_{t+1}) − V(s_t) = δ_t.

When used in the previous policy gradient (Equation 5), this gives us the advantage actor policy gradient:

∇_θ J(θ) = E_π[∇_θ log π_θ(a_t|s_t) A(s_t, a_t)].

We can subsequently assume that the TD error is a good candidate to estimate the advantage function. Accordingly, we deduce the final actor policy gradient:

∇_θ J(θ) = E_π[∇_θ log π_θ(a_t|s_t) δ_t].

Given the complex nature of the autonomous urban driving task, we will use a generalized version of TD learning that extends the bootstrapping over multiple time steps into the future. Algorithmically, we define configurable multi-step returns within the TD target.
Hence, the TD error becomes:

δ_t = Σ_{k=0}^{n−1} γ^k r_{t+k+1} + γ^n V(s_{t+n}) − V(s_t).

Multi-step returns have been demonstrated to improve the performance of learning, especially with the advent of deep RL BID17. Indeed, they allow the agent to gather more information on the environment before calculating the error in the critic estimates and updating the policy. So far, we have a good theoretical basis for launching our agent. The experiments carried out by applying this approach in the CARLA simulator are presented in the next section. In this section we investigate the performance of an advantage actor-critic (A2C) algorithm embedding multi-step TD target updates on the challenging task of urban autonomous driving. The goal of our experimental evaluation is to demonstrate that the incorporation of a multi-step returns critic (MSRC) component in a deep RL framework consolidates the robustness of the agent by controlling and guiding its learning strategy. We expect a reduction of the actor gradient variance, an upward trend of the episodic average returns and, more generally, better performance compared to the case where the MSRC component is deactivated in the A2C algorithm. Environment. We conduct the experiments using the CARLA simulator for autonomous driving, which provides an interesting interface allowing our RL agent to control a vehicle and interact with a dynamic environment. Compared to existing platforms, CARLA offers customizable and quite realistic urban driving conditions with a set of advanced features for controlling the vehicle and gathering environment feedback. It is designed as a server-client system where the server, implemented in Unreal Engine 4 (UE4), runs the simulation commands and returns the scene readings. The client, implemented in Python, sends the agent's predicted actions mapped as driving commands and receives the resulting simulation measures that will be interpreted as the agent's rewards. The CARLA 3D environment consists of static objects such as buildings, roads and vegetation and of dynamic non-player characters, mainly pedestrians and vehicles. During training, we can episodically vary server settings such as the traffic density (number of dynamic objects) and visual effects (weather and lighting conditions, sun position, cloudiness, precipitation...). Some examples of the resulting environments are illustrated in Figure 1. Observation and action spaces. The agent interacts with the environment by generating actions and receiving observations over regular time steps. The action space selected for our experiments is built on the basis of three discrete driving instructions (steering, throttle, and brake) extended with some combinations in between (turn left and accelerate/decelerate...). The observation space includes sensor outputs such as color images produced by RGB cameras and derived depth and semantic segmentations. The second type of available observations consists of a range of measurements reporting the vehicle's location (similar to GPS) and speed, the number of collisions, traffic rules and the positioning of dynamic non-player characters. Rewards. A crucial role is played by rewards in building driving policies as they orient the agent's predictions. In order to further optimal learning, the reward is shaped as a weighted sum of measurements extracted from the observation space described in the previous paragraph. The idea is to compute the difference between the current (step t) and the previous (step t − 1) measure of the selected observation and to add it, positively or negatively weighted, to the aggregated reward.
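To make the update concrete, the following PyTorch-style sketch shows one n-step A2C update as we understand it from the description above; the network interfaces and names are hypothetical:

import torch
import torch.nn.functional as F

def a2c_update(policy, value, optimizer, states, actions, rewards, gamma=0.9):
    # states: (n+1, ...) rollout states; actions, rewards: length-n tensors
    values = value(states).squeeze(-1)             # V(s_0), ..., V(s_n)
    returns = torch.zeros_like(rewards)
    bootstrap = values[-1].detach()                # gamma^n V(s_{t+n}) tail
    for t in reversed(range(len(rewards))):        # n-step discounted returns
        bootstrap = rewards[t] + gamma * bootstrap
        returns[t] = bootstrap
    advantages = returns - values[:-1].detach()    # multi-step TD errors
    log_probs = torch.log(policy(states[:-1]).gather(1, actions[:, None]))[:, 0]
    actor_loss = -(log_probs * advantages).mean()
    critic_loss = F.mse_loss(values[:-1], returns)
    optimizer.zero_grad()
    (actor_loss + critic_loss).backward()
    optimizer.step()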
The positively weighted variables are the distance traveled to the target and the speed in km/h. The negatively weighted variables are collision damage (including collisions with vehicles, pedestrians and other objects) and intersections with the sidewalk and the opposite lane. For example, the agent will get a reward if the distance to the goal decreases and a penalty each time a collision or an intersection with the opposite lane is recorded. Experiment settings. The agent is trained from scratch on goal-directed navigation along straight roads. An episode is terminated when the target destination is reached or after a collision with a dynamic non-player character. The A2C networks are trained for 10 million steps, corresponding to 72 hours of simulated continuous driving. Motivated by the recent success achieved by deep RL in challenging domains BID17, we use convolutional neural networks (CNNs) to approximate both the value function of the critic and the actor policy, where the parameters are represented by the deep network weights. The CNN architectures consist of 4 convolutional layers, 3 max-pooling layers and one fully connected layer at the output. The discount factor is set to 0.9. We used 10-step rollouts, with the initial learning rate set to 0.0001. The learning rate is linearly decreased to zero over the course of training. While training the approach, a stochastic gradient descent update is performed every 10 time steps and the resulting policy model is stored only if its performance (accumulated reward) exceeds that of the last retained model. The final stored model is then used in the test phase. Comparative evaluation. In the absence of a variety of state-of-the-art works on the recent CARLA simulator, we choose to compare two versions of our algorithm: the original deep actor RL guided by the MSRC policy evaluator versus a standard deep actor RL resulting from the deactivation of the MSRC component in the original algorithm. In fact, the few available state-of-the-art results in the CARLA environment BID6 BID14 report the percentage of successfully completed episodes. This type of quantitative evaluation does not meet our experiment objectives, mentioned at the beginning of this section, of evaluating and interpreting the MSRC contribution in complex tasks like autonomous driving. Guided by several works on RL strategies in different domains BID17 BID19, we selected episodic average and cumulative reward metrics to evaluate our approach. FIG1 shows the generated reward in the training phase. We use the average episodic reward to describe the methods' global performance and the step reward to emphasize the variance of the predictions' returns. We can make a few observations in this regard. In terms of performance, our n-step A2C approach is dominant over almost all of the 10000 training episodes, confirming the efficiency of the RL strategy controlled by the MSRC. Furthermore, we noticed that, regarding the best retained models, the A2C stored just a few models in the first 2000 episodes; this number then drastically increased to 100 retained models over the remaining 8000 episodes. This means that our method completed the exploration phase early and moved to exploitation from a training level of about 2000 episodes. On the other hand, the standard deep RL totaled only 10 best models over the training phase, reflecting the weak efficiency of a random strategy for solving a very complex and challenging problem like autonomous driving.
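As an illustration of this reward shaping, the sketch below computes the weighted sum of measurement differences; both the measurement keys and the weights are invented by us for illustration and are not the exact CARLA measurement names or the weights used by the authors:

def shaped_reward(prev, cur, weights=None):
    # prev, cur: measurement dicts at steps t-1 and t (illustrative keys)
    w = weights or {"distance_to_goal": 1.0, "speed_kmh": 0.05,
                    "collision_damage": 0.0002, "sidewalk": 2.0,
                    "opposite_lane": 2.0}
    r = 0.0
    r += w["distance_to_goal"] * (prev["distance_to_goal"] - cur["distance_to_goal"])
    r += w["speed_kmh"] * (cur["speed_kmh"] - prev["speed_kmh"])
    r -= w["collision_damage"] * (cur["collision_damage"] - prev["collision_damage"])
    r -= w["sidewalk"] * (cur["sidewalk"] - prev["sidewalk"])
    r -= w["opposite_lane"] * (cur["opposite_lane"] - prev["opposite_lane"])
    return r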
A last visual interpretation that we can deduce from the step reward graph is that the variance of the A2C predictions is significantly reduced relative to the standard deep RL, confirming the contribution of TD learning to faster learning. Figure 3 recaps the testing phase evaluation following two different scenarios. First, the testing was conducted in the same environment and conditions as the training: Town 2 and Clear Noon weather (env1). From the episodic reward graph we can observe that our approach substantially outperforms the standard deep RL, which means that training with a multi-step returns critic leads to more efficient RL models. In the second scenario, both agents are tested in an environment different from the training one: Town 1 in hard rain conditions (env2). The n-step A2C is still more competitive than the standard deep RL, showing superior generalization capabilities in the new, unseen setting. Nevertheless, its performance decreased in the second test scenario, reflecting a certain fragility to changing environments. On the other side, the standard deep RL still shows higher prediction return variance in the step reward graph, confirming the training phase results. In this paper we addressed the limits of RL algorithms in solving high-dimensional and complex tasks. Combining the advantages of both actor and critic methods, the proposed approach implemented a continuous process of policy assessment and improvement using multi-step TD learning. Evaluated on the challenging problem of autonomous driving using the CARLA simulator, our deep actor-critic algorithm demonstrated higher performance and faster learning capabilities than a standard deep RL. Furthermore, the results showed a certain vulnerability of the approach when facing unseen testing conditions. Considering this paper a preliminary attempt to scale up RL approaches to high-dimensional real-world applications like autonomous driving, we plan in future work to examine the performance of other RL methods such as deep Q-learning and Trust Region Policy Optimization BID22 on similar complex tasks. Furthermore, we propose to tackle the issue of the impact of non-stationary environments on the robustness of RL methods as a multi-task learning problem BID5. In such a context, we will explore recently applied concepts and methodologies such as novel adaptive dynamic programming (ADP) approaches and context-aware and meta-learning strategies. The latter are currently attracting keen research interest and are achieving promising advances in designing generalizable and fast-adapting RL algorithms BID21 BID20. Subsequently, we will be able to increase the complexity of the driving tasks and conduct conclusive comparisons with the few available state-of-the-art experiments on the CARLA simulator.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
Bke03G85DN
An actor-critic reinforcement learning approach with multi-step returns applied to autonomous driving with Carla simulator.
A fundamental, and still largely unanswered, question in the context of Generative Adversarial Networks (GANs) is whether GANs are actually able to capture the key characteristics of the datasets they are trained on. The current approaches to examining this issue require significant human supervision, such as visual inspection of sampled images, and often offer only fairly limited scalability. In this paper, we propose new techniques that employ a classification-based perspective to evaluate synthetic GAN distributions and their capability to accurately reflect the essential properties of the training data. These techniques require only minimal human supervision and can easily be scaled and adapted to evaluate a variety of state-of-the-art GANs on large, popular datasets. They also indicate that GANs have significant problems in reproducing the more distributional properties of the training dataset. In particular, the diversity of such synthetic data is orders of magnitude smaller than that of the original data. Generative Adversarial Networks (GANs) BID6 have garnered a significant amount of attention due to their ability to learn generative models of multiple natural image datasets BID11 BID3 BID7. Since their conception, a fundamental question regarding GANs is to what extent they truly learn the underlying data distribution. This is a key issue for multiple reasons. From a scientific perspective, understanding the capabilities of common GANs can shed light on what precisely the adversarial training setup allows the GAN to learn. From an engineering standpoint, it is important to grasp the power and limitations of the GAN framework when applying it in concrete applications. When we evaluate the quality of a GAN, an obvious first check is to establish that the generated samples lie in the support of the true distribution. In the case of images, this corresponds to checking if the generated samples look realistic. Indeed, visual inspection of generated images is currently the most common way of assessing the quality of a given GAN. Individual humans can perform this task quickly and reliably, and various GANs have achieved impressive results in generating realistic-looking images of faces and indoor scenes BID13 BID3. Once we have established that GANs produce realistic-looking images, the next concern is that the GAN might simply be memorizing the training dataset. While this hypothesis cannot be ruled out entirely, there is evidence that GANs perform at least some non-trivial modeling of the unknown distribution. Previous studies show that interpolations in the latent space of the generator produce novel and meaningful image variations BID11, and that there is a clear disparity between generated samples and their nearest neighbors in the true dataset BID1. Taken together, these results provide evidence that GANs could constitute successful distribution learning algorithms, which motivates studying their distributions in more detail. The direct approach is to compare the probability density assigned by the generator with estimates of the true distribution BID16. However, in the context of GANs and high-dimensional image distributions, this is complicated by two factors. First, GANs do not naturally provide probability estimates for their samples.
Second, estimating the probability density of the true distribution is a challenging problem in itself (the adversarial training framework specifically avoids this issue). Hence prior work has only investigated the probability density of GANs on simple datasets such as MNIST BID16. Since reliably computing probability densities in high dimensions is challenging, we can instead study the behavior of GANs in low-dimensional problems such as two-dimensional Gaussian mixtures. Here, a common failure of GANs is mode collapse, wherein the generator assigns a disproportionately large mass to a subset of modes from the true distribution BID5. This raises concerns about a lack of diversity in the synthetic GAN distributions, and recent work shows that the learned distributions of two common GANs indeed have (moderately) low support size for the CelebA dataset BID1. However, the approach of BID1 heavily relies on a human annotator in order to identify duplicates. Hence it does not easily scale to comparing many variants of GANs or to asking more fine-grained questions than collision statistics. Overall, our understanding of synthetic GAN distributions remains blurry, largely due to the lack of versatile tools for a quantitative evaluation of GANs in realistic settings. The focus of this work is precisely to address this question: Can we develop principled and quantitative approaches to study synthetic GAN distributions? To this end, we propose two new evaluation techniques for synthetic GAN distributions. Our methods are inspired by the idea of comparing moments of distributions, which is at the heart of many methods in classical statistics. Although simple moments of high-dimensional distributions are often not semantically meaningful, we can extend this idea to distributions of realistic images by leveraging image statistics identified using convolutional neural networks. In particular, we train image classifiers in order to construct test functions corresponding to semantically meaningful properties of the distributions. An important feature of our approach is that it requires only light human supervision and can easily be scaled to evaluating many GANs and large synthetic datasets. Using our new evaluation techniques, we study five state-of-the-art GANs on the CelebA and LSUN datasets, arguably the two most common testbeds for advanced GANs. We find that most of the GANs significantly distort the relative frequency of even basic image attributes, such as the hair style of a person or the type of room in an indoor scene. This clearly indicates a mismatch between the true and synthetic distributions. Moreover, we conduct experiments to explore the diversity of GAN distributions. We use synthetic GAN data to train image classifiers and find that these have significantly lower accuracy than classifiers trained on the true data. This points towards a lack of diversity in the GAN data, and again towards a discrepancy between the true and synthetic distributions. In fact, our additional examinations show that the diversity in GANs is only comparable to a subset of the true data that is 100× smaller. When comparing two distributions, a common first test is to compute low-order moments such as the mean and the variance. If the distributions are simple enough, these quantities provide a good understanding of how similar they are. Moreover, low-order moments have a precise definition and are usually quick to compute.
On the other hand, low-order moments can also be misleading for more complicated, high-dimensional distributions. As a concrete example, consider a generative model of digits (such as MNIST). If a generator produces digits that are shifted by a significant amount yet otherwise perfect, we will probably still consider this a good approximation of the true distribution. However, the expectation (mean moment) of the generator distribution can be very different from the expectation of the true data distribution. This raises the question of what other properties of high-dimensional image distributions are easy to test yet semantically meaningful. In the next two subsections, we describe two concrete approaches to evaluate synthetic GAN data that are easy to compute yet capture relevant information about the distribution. The common theme is that we employ convolutional neural networks in order to capture properties of the distributions that are hard to describe in a mathematically precise way, but usually well-defined for a human (e.g., what fraction of the images shows a smiling person?). Automating the process of annotating images with such high-level information will allow us to study various aspects of synthetic GAN data. Mode collapse refers to the tendency of the generator to concentrate a large probability mass on a few modes of the true distribution. While there is ample evidence for the presence of mode collapse in GANs BID5 BID1 BID10, elegant visualizations of this phenomenon are somewhat restricted to toy problems on low-dimensional distributions BID5 BID10. For image datasets, it is common to rely on human annotators and derived heuristics (see Section 2.3). While these methods have their merits, they are restrictive both in the scale and in the granularity of testing. Here we propose a classification-based tool to assess how good GANs are at assigning the right mass across broad concepts/modes. To do so, we use a trained classifier as an expert "annotator" that labels important features in synthetic data, and then analyze the resulting distribution. Specifically, our goal is to investigate whether a GAN trained on a well-balanced dataset (i.e., one containing an equal number of samples from each class) can learn to reproduce this balanced structure. Let D = {(x_i, y_i)}_{i=1}^N represent a dataset of size N with C classes, where (x_i, y_i) denotes an image-label pair drawn from the true data. If the dataset D is balanced, it contains N/C images per class. The procedure for computing the class distribution in synthetic data is: 1. Train an annotator (a multi-class classifier) using the dataset D. 2. Train an unconditional GAN on the images X from dataset D, without using class labels. 3. Create a synthetic dataset by sampling N images from the GAN and labeling them using the annotator from Step 1. The annotated data generated via the above procedure can provide insight into the GAN's class distribution at the scale of the entire dataset. Moreover, we can vary the granularity of the mode analysis by choosing richer classification tasks, i.e., more challenging classes or a larger number of them. In Section 3.3, we use this technique to visualize mode collapse in several state-of-the-art GANs on the CelebA and LSUN datasets. All the studied GANs show significant mode collapse, and the effect becomes more pronounced when the granularity of the annotator is increased (larger number of classes). We also investigate the temporal aspect of the GAN setup and find that the dominant mode varies widely over the course of training.
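A sketch of Step 3, i.e., measuring the class distribution of a trained GAN with the annotator; this is our own minimal rendering with hypothetical callables:

import numpy as np

def synthetic_class_distribution(annotator, generator, n, num_classes,
                                 batch_size=256):
    # annotator: images -> predicted class labels (classifier trained on D)
    # generator: batch size -> synthetic images from the unconditional GAN
    counts = np.zeros(num_classes, dtype=np.int64)
    remaining = n
    while remaining > 0:
        b = min(batch_size, remaining)
        labels = np.asarray(annotator(generator(b)))
        counts += np.bincount(labels, minlength=num_classes)
        remaining -= b
    return counts / float(n)   # uniform (1/C everywhere) for a balanced GAN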
Our approach also enables us to benchmark and compare GANs on different datasets based on the extent of mode collapse in the learned distributions. Our above method for inspecting the distribution of modes in synthetic data provides a coarse look at the statistics of the underlying distribution. While the resulting quantities are semantically meaningful, they capture only simple notions of diversity. To get a more holistic view of the sample diversity in the synthetic distribution, we now describe a second classification-based approach for evaluating GAN distributions. The main question that motivates it is: Can GANs recover the key aspects of real data well enough to enable training a good classifier? We believe that this is an interesting measure of sample diversity for two reasons. First, classification of high-dimensional image data is a challenging problem, so a good training dataset will require a sufficiently diverse sample from the distribution. Second, augmenting data for classification problems is one of the proposed use cases of GANs (e.g., see the recent work of BID14). If GANs are truly able to capture the quality and diversity of the underlying data distribution, we expect almost no gap between classifiers trained on true data and on synthetic data from a GAN. A generic method to produce data from GANs for classification is to train separate GANs for each class in the dataset D. Samples from these class-wise GANs can then be pooled together to get a labeled synthetic dataset. Note that the labels are trivially determined based on the class modeled by the particular GAN from which a sample is drawn. We perform the following steps to assess the classification performance of synthetic data vs. true data: 1. Train a classifier on the true data D (from Section 2.1) as a benchmark for comparison. 2. Train C separate unconditional GANs, one per class in dataset D. 3. Generate a balanced synthetic labeled dataset of size N by consolidating an equal number of samples drawn from each of these C GANs. The labels obtained by aggregating samples from per-class GANs are designated as "default" labels for the synthetic dataset. Note that, by design, both the true and the synthetic datasets have N samples, with N/C examples per class. 4. Use the synthetic labeled data from Step 3 to train a classifier with the same architecture as in Step 1. Comparing the classifiers from Steps 1 and 4 can then shed light on the disparity between the two distributions. BID11 conducted an experiment similar to Step 2 on the MNIST dataset using a conditional GAN. They found that samples from their DCGAN performed comparably to true data on nearest neighbor classification. We obtained similarly good results on MNIST, which could be due to the efficacy of GANs in learning the MNIST distribution or due to the ease of getting good accuracy on MNIST even with a small training set BID12. To clarify this question, we restrict our analysis to more complex datasets, specifically CelebA and LSUN. We evaluate the two following properties in our classification task: (i) How well can the GANs recover nuances of the decision boundary, which is reflected by how easily the classifier can fit the training data? (ii) How does the diversity of synthetic data compare to that of true data when measured by classification accuracy on a hold-out set of true data? We observe that all the studied GANs have very low diversity in this metric.
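The four-step protocol above can be condensed into the following sketch (ours; the training and evaluation callables are placeholders for the classifier pipeline):

import numpy as np

def diversity_benchmark(train_classifier, test_accuracy, true_data,
                        class_gans, n):
    # train_classifier: (images, labels) -> trained model
    # test_accuracy: model -> accuracy on a hold-out set of true data
    # class_gans: one unconditional generator per class; n: dataset size
    per_class = n // len(class_gans)
    images = np.concatenate([gan(per_class) for gan in class_gans])
    labels = np.concatenate([np.full(per_class, c)
                             for c in range(len(class_gans))])  # default labels
    acc_true = test_accuracy(train_classifier(*true_data))
    acc_synth = test_accuracy(train_classifier(images, labels))
    return acc_true, acc_synth     # a large gap indicates missing diversity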
In particular, the accuracy achieved by a classifier trained on GAN data is comparable only to the accuracy of a classifier trained on a 100× (or more) subsampled version of the true dataset. Even if we draw more samples from the GANs to produce a training set several times larger than the true dataset, there is no improvement in performance. Looking at the classification accuracy gives us a way to compare different models on a potential downstream application of GANs. Interestingly, we find that the visual quality of samples does not necessarily correlate with good classification performance. In the GAN literature, it is common to investigate performance using metrics that involve human supervision. BID1 proposed a measure based on manually counting duplicates in GAN samples as a heuristic for the support or diversity of the learned distribution. In BID16, manual classification of a small sample (100 images) of GAN-generated MNIST images is used as a test for whether the GAN is missing certain modes. Such annotator-based metrics have clear advantages in identifying relevant failure modes of synthetic samples, which explains why visual inspection (eyeballing) is still the most popular approach to assessing GAN samples. There have also been various attempts to build good metrics for GANs that are not based on manual heuristics. Parzen window estimation can be used to approximate the log-likelihood of the distribution, though it is known to work poorly for high-dimensional data BID15. BID16 develop a method to get a better estimate of the log-likelihood using annealed importance sampling. BID13 propose a metric known as the Inception Score, where the entropy in the labels predicted by a pre-trained Inception network is used to assess the diversity in GAN samples. In the following subsections we describe the setup and results for our classification-based GAN benchmarks. GANs have shown promise in generating realistic samples, resulting in efforts to apply them to a broad spectrum of datasets. However, the Large-scale CelebFaces Attributes (CelebA) BID9 and Large-Scale Scene Understanding (LSUN) BID17 datasets remain the most popular and canonical ones for developing and evaluating GAN variants. Conveniently, these datasets also have rich annotations, making them particularly suited for our classification-based evaluations. Details on the setup of the classification tasks for these datasets are given in the Appendix (Section 5). Using our framework, we perform a comparative study of several popular variants of GANs: 1. Deep Convolutional GAN (DCGAN): Convolutional GAN trained using a Jensen-Shannon divergence-based objective BID6 BID11. 2. Wasserstein GAN (WGAN): GAN that uses a Wasserstein distance-based objective. 3. Adversarially Learned Inference (ALI): GAN that uses joint adversarial training of generative and inference networks BID4. 4. Boundary Equilibrium GAN (BEGAN): Auto-encoder style GAN trained using a Wasserstein distance objective BID2. Figure 1 (caption): Visualizations of mode collapse in the synthetic, GAN-generated data produced after training on our chosen subsets of the CelebA and LSUN datasets. The left panel shows the relative distribution of classes in samples drawn from synthetic datasets extracted at the end of the training process and compares it to the true data distribution (leftmost plots). On the right, shown is the evolution of the analogous class distribution for different GANs over the course of training.
BEGAN did not converge on the LSUN tasks and hence is excluded from the corresponding analysis.

5. Improved GAN (ImGAN): GAN that uses semi-supervised learning (labels are part of GAN training), with various other architectural and procedural improvements BID13.

All the aforementioned GANs are unconditional; however, ImGAN has access to class labels as a part of the semi-supervised training process. We use standard implementations for each of these models, details of which are provided in the Appendix (Section 5). We also used the prescribed hyper-parameter settings for each GAN, including the number of iterations we train them for. Our analysis is based on 64 × 64 samples, which is a size at which GAN-generated samples tend to be of high quality. We also use visual inspection to ascertain that the perceptual quality of GAN samples in our experiments is comparable to those reported in previous studies. We demonstrate sample images in FIG1 in the Appendix. BEGAN did not converge in our experiments on the LSUN dataset and hence is excluded from the corresponding analysis.

In our study, we use two types of classification models:
1. ResNet: 32-layer residual network BID7. We choose a ResNet as it is a standard classifier in vision and yields high accuracy on various datasets, making it a reliable baseline.
2. Linear Model: This is a network with one fully-connected layer between the input and output (no hidden layers) with a softmax non-linearity. If the dimensions of the input $x$ and output $\hat{y}$ are $D$ and $C$ (the number of classes) respectively, then linear models implement the function $\hat{y} = \sigma(W^T x + b)$, where $W$ is a $D \times C$ matrix, $b$ is a $C \times 1$ vector and $\sigma(\cdot)$ is the softmax function. Due to its simplicity, this model will serve as a useful baseline in some of our experiments.

We always train the classifiers to convergence, with decaying learning rate and no data augmentation.

Experimental results for quantifying mode collapse through classification tasks, described in Section 2.1, are presented below. Table 2 in the Appendix gives details on the datasets (subsets of CelebA and LSUN) used in our analysis, such as size (N), number of classes (C), and accuracy of the annotator, i.e., a classifier pre-trained on true data, which is then used to label the synthetic, GAN-generated data. Figure 1 presents the class distribution in synthetic data, as specified by these annotators. The left panel compares the relative distribution of modes in true data (uniform) with that in various GAN-generated datasets. Each of these datasets is created by drawing N samples from the GAN after it was trained on the corresponding true dataset. The right panel illustrates the evolution of class distributions in various GANs over the course of training.

Results: These visualizations lead to the following findings:
• All GANs seem to suffer from significant mode collapse. This becomes more apparent when the annotator granularity is increased, by considering a larger set of classes. For instance, one should compare the relatively balanced class distributions in the 3-class LSUN task to the near-absence of some modes in the 5-class task.
• Mode collapse is prevalent in GANs throughout the training process, and does not seem to recede over time. Instead, the dominant mode(s) often fluctuate wildly over the course of the training.
• For each task, there is often a common set of modes onto which the distinct GANs exhibit collapse.
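Returning to the linear model defined in Section 3.2 above, here is a minimal NumPy sketch of it. This is an illustrative re-implementation under our own assumptions (plain softmax regression trained by gradient descent), not the authors' code.

import numpy as np

def softmax(a):
    e = np.exp(a - a.max(axis=1, keepdims=True))  # shift logits for numerical stability
    return e / e.sum(axis=1, keepdims=True)

def predict(X, W, b):
    """y_hat = sigma(W^T x + b); X: (n, D), W: (D, C), b: (C,)."""
    return softmax(X @ W + b)

def sgd_step(X, Y, W, b, lr=0.1):
    """One gradient step on the mean cross-entropy loss; Y is one-hot of shape (n, C)."""
    G = (predict(X, W, b) - Y) / len(X)  # gradient of the loss w.r.t. the logits
    return W - lr * X.T @ G, b - lr * G.sum(axis=0)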
In addition to viewing our method as an approach to analyze mode collapse, we can also use it as a benchmark for GAN comparison. From this perspective, we can observe that, on CelebA, DCGAN and ALI learn somewhat balanced distributions, while WGAN, BEGAN and Improved GAN show prominent mode collapse. This is in contrast to the results obtained on LSUN, where, for example, WGAN exhibits relatively small mode collapse, while ALI suffers from significant mode collapse even on the simple 3-class task. This highlights a general challenge in real-world applications of GANs: they often perform well on the datasets they were designed for (e.g., ALI on CelebA and WGAN on LSUN), but extension to new datasets is not straightforward. Temporal analysis of mode collapse shows that there is wide variation in the dominant mode for WGAN and Improved GAN, whereas for BEGAN, the same mode(s) often dominate the entire training process.

Using the procedure outlined in Section 2.2, we perform a quantitative assessment of sample diversity in GANs on the CelebA and LSUN datasets. We restrict our experiments to binary classification as we find they have sufficient complexity to highlight the disparity between true and synthetic data. Selected results for the classification-based evaluation of GANs are presented in Table 1. A more extensive study is presented in Table 3, and Figures 4 and 5 in the Appendix (Section 5).

As a preliminary check, we inspect the quality of our labeled GAN datasets. For this, we use the high-accuracy annotators from Section 2.1 to predict labels for GAN-generated data and measure the consistency between the predicted and default labels (label correctness). We also inspect confidence scores, defined as the softmax probabilities for the predicted class, of the annotator. The motivation behind these metrics is that if the classifier can correctly and with high confidence predict labels for labeled GAN samples, then it is likely that they are convincing examples of that class, and hence of good "quality". Empirical results for label agreement and annotator confidence of GAN-generated datasets are shown in Tables 1 and 3, and FIG4. We also report an equivalent Inception Score BID13, similar to that described in Section 2.3. Using the Inception network to get the label distribution may not be meaningful for face or scene images. Instead, we compute the Inception Score using the label distribution predicted by the annotator networks. The score is computed as $\exp(\mathbb{E}_x[\mathrm{KL}(p(y|x)\,\|\,p(y))])$, where $y$ refers to label predictions from the annotators.

Table 1: Selected results from the comparative study on classification performance of true data vs. GANs on the CelebA and LSUN datasets. Label correctness measures the agreement between default labels for the synthetic datasets, and those predicted by the annotator, a classifier trained on true data. Shown alongside are the equivalent Inception Scores computed using labels predicted by the annotator (rather than an Inception Network). Training and test accuracies for a linear model on the various true and synthetic datasets are reported. Also presented are the corresponding accuracies for this classifier trained on down-sampled true data (↓ M) and oversampled synthetic data (↑ L). Test accuracy for ResNets trained on these datasets is also shown (training accuracy was always 100%), though it is noticeable that deep networks suffer from overfitting issues when trained on synthetic datasets.
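A minimal sketch of the equivalent Inception Score described above, computed from the annotator's softmax outputs rather than an Inception network; the array shapes are our own assumption.

import numpy as np

def equivalent_inception_score(pyx, eps=1e-12):
    """pyx: (n, C) annotator softmax outputs p(y|x) on n synthetic samples."""
    py = pyx.mean(axis=0)  # marginal label distribution p(y)
    kl = np.sum(pyx * (np.log(pyx + eps) - np.log(py + eps)), axis=1)  # KL(p(y|x) || p(y))
    return float(np.exp(kl.mean()))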
Next, we train classifiers using the true and labeled GAN-generated datasets and study their performance in terms of accuracy on a hold-out set of true data. ResNets (and other deep variants) yield good classification performance on true data, but suffer from severe overfitting on the synthetic data, leading to poor test accuracy. This already indicates a possible problem with GANs and the diversity of the data they generate. To highlight this problem better and avoid the issues that stem from overfitting, we also look for a classifier that does not always overfit on the synthetic data. We observed, however, that even training simple networks, such as one fully connected layer with a few hidden units, led to overfitting on synthetic data. Hence, we resorted to the very basic linear model described in Section 3.2. Tables 1 and 3 show results from binary classification experiments using linear models, with the training and test accuracies of the classifier on the various datasets.

Finally, to get a better understanding of the underlying "diversity" of synthetic datasets, we train linear models using down-sampled versions of true data (no augmentation), and compare this to the performance of synthetic data, as shown in Tables 1 and 3. Down-sampling the data by a factor of M, denoted as ↓ M, implies selecting a random N/M subset of the data D. Visualizations of how GAN classification performance compares with (down-sampled) true data are in Figure 5 in the Appendix. A natural argument in defense of GANs is that we can oversample them, i.e., generate datasets much larger than the size of the training data. Results for linear models trained using a 10-fold oversampling of GANs (drawing 10N samples), denoted by ↑ 10, are shown in Tables 1 and 3.

Results: The major findings from these experiments are:
• Based on Tables 1 and 3, and FIG4, we see strong agreement between annotator labels and true labels for synthetic data, on par with the scores for the test set of true data. It is thus apparent that the GAN images are of high quality, as expected based on the visual inspection. These scores are lower for LSUN than CelebA, potentially due to the lower quality of generated LSUN images. From these results, we can get a broad understanding of how good GANs are at producing convincing/representative samples from different classes across datasets. This also shows that simple classification-based benchmarks can highlight relevant properties of synthetic datasets.
• The equivalent Inception Score is not very informative and is similar for the true (hold-out set) and synthetic datasets. This is not surprising given the simple nature of our binary classification task and the fact that the true and synthetic datasets have an almost uniform distribution over labels.
• It is evident from Table 1 that there is a large performance gap between true and synthetic data on classification tasks. Inspection of training accuracies shows that linear models are able to nearly fit the synthetic datasets, but are grossly underfitting on true data. Given the high scores of synthetic data on the previous experiments assessing dataset "quality" (Tables 1 and 3, and FIG4), it is likely that the poor classification performance is more indicative of a lack of "diversity".
• Comparing GAN performance to that of down-sampled true data reveals that the learned distribution, which was trained on datasets with around a hundred thousand data points, exhibits diversity on par with that of a mere couple hundred true data samples!
This shows that, at least from the point of view of classification, the diversity of the GAN-generated data is severely lacking.
• Oversampling GANs by 10-fold to produce larger datasets does not improve classification performance. The disparity between true and synthetic data remains nearly unchanged even after this significant oversampling, further highlighting the lack of diversity in GANs.

In terms of the relative performance of various GANs, we observe that WGAN and ALI (on CelebA) perform better than the other GANs. While BEGAN samples have good perceptual quality (see FIG1), they consistently perform badly on our classification tasks. On the other hand, WGAN samples have relatively poor visual quality but seem to outperform other GANs in classification tasks. This is a strong indicator of the need to consider other metrics, such as the ones proposed in this paper, in addition to visual inspection to study GANs. For LSUN, the gap between true and synthetic data is much larger, with the classifiers getting near-random performance on all the synthetic datasets. Note that these classifiers get poor test accuracy on LSUN but are not overfitting on the training data. In this case, we speculate that the lower performance could be due to both lower quality and lower diversity of LSUN samples.

In summary, our key experimental finding is that even simple classification-based tests hold tremendous potential to shed light on the learned distribution in GANs. This not only helps us to get a deeper understanding of many of the underlying issues, but also provides us with a more quantitative and rigorous platform on which to compare different GANs. Our techniques could, in principle, also be applied to assess other generative models such as Variational Auto-Encoders (VAEs) BID8. However, VAEs have significant problems generating realistic samples on the datasets used in our analysis in the first place; see BID1.

In this paper, we put forth techniques for examining the ability of GANs to capture key characteristics of the training data, through the lens of classification. Our tools are scalable, quantitative and automatic (no need for visual inspection of images). They are thus capable of studying state-of-the-art GANs on realistic, large-scale image datasets. Further, they serve as a means to perform a nuanced comparison of GANs and to identify their relative merits, including properties that cannot be discerned from mere visual inspection. We then use the developed techniques to perform empirical studies of popular GANs on the CelebA and LSUN datasets. Our examination shows that mode collapse is indeed a prevalent issue for GANs. Also, we observe that synthetic GAN-generated datasets have significantly reduced diversity, at least when examined from a classification perspective. In fact, the diversity of such synthetic data is often a few orders of magnitude smaller than that of the true data. Furthermore, this gap in diversity does not seem to be bridged by simply producing much larger datasets by oversampling GANs. Finally, we also notice that good perceptual quality of samples does not necessarily correlate (and might sometimes even anti-correlate) with distribution diversity. These findings suggest that we need to go beyond visual inspection-based evaluations and look for more quantitative tools for assessing the quality of GANs, such as the ones presented in this paper.
To assess GAN performance from the perspective of classification, we construct a set of classification tasks on the CelebA and LSUN datasets. In the case of the LSUN dataset, images are annotated with scene category labels, which makes it straightforward to use this data for binary and multi-class classification. On the other hand, each image in the CelebA dataset is labeled with 40 binary attributes. As a result, a single image has multiple associated attribute labels. Here, classification tasks can be constructed by considering binary combinations of attributes (examples are shown in FIG1). Attributes used in our experiments were chosen such that the resulting dataset was large, and classifiers trained on true data got high accuracy so as to be good annotators for the synthetic data. Details on the datasets used in our classification tasks, such as training set size (N), number of classes (C), and accuracy of the annotator, i.e., a classifier pre-trained on true data which is used to label the synthetic GAN-generated data, are provided in Table 2.

Table 2: Details of the CelebA and LSUN subsets used for the studies in Section 3.3. Here, we use a classifier trained on true data as an annotator that lets us infer the label distribution for the synthetic, GAN-generated data. N is the size of the training set and C is the number of classes in the true and synthetic datasets. Annotator's accuracy refers to the accuracy of the classifier on a test set of true data. For CelebA, we use a combination of attribute-wise binary classifiers as annotators due to their higher accuracy compared to a single classifier trained jointly on all four classes.

Benchmarks were performed on standard implementations. For each of our benchmark experiments, we ascertain that the visual quality of samples produced by the GANs is comparable to that reported in prior work. Examples of random samples drawn for multi-class datasets from both true and synthetic data are shown in FIG1 for the CelebA dataset, and in FIG3 for the LSUN dataset.

In the studies to observe mode collapse in GANs described in Sections 2.1 and 3.3, we use a pre-trained classifier as an annotator to obtain the class distribution for datasets generated from unconditional GANs. FIG4 shows histograms of annotator confidence for the datasets used for benchmarking listed in Table 2. As can be seen in these figures, the annotator confidence for the synthetic data is comparable to that on the hold-out set of true data. Thus, it seems likely that the GAN-generated samples are of good quality and are truly representative examples of their respective classes, as expected based on visual inspection.

Table 3 presents an extension of the comparative study of classification performance of true and GAN-generated data provided in Table 1. Visualizations of how the test accuracies of a linear model classifier trained on GAN data compare with one trained on true data are shown in Figure 5, obtained by drawing N samples from GANs at the culmination of the training process. Based on these visualizations, it is apparent that GANs have classification performance comparable to a subset of training data that is more than 100× smaller. Thus, from the perspective of classification, GANs have diversity on par with a few hundred true data samples.
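The down-sampling comparison underlying Figure 5 is simple to reproduce in outline. Below is a minimal sketch of the ↓M protocol under our own assumptions (a generic train/evaluate interface); sweeping M until the down-sampled true-data accuracy matches that of the GAN-trained classifier gives the "effective size" of the synthetic dataset.

import numpy as np

def downsample(X, y, M, seed=0):
    """The down-M protocol: select a random N/M subset of the true data."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(y), size=len(y) // M, replace=False)
    return X[idx], y[idx]

# e.g., for M in (1, 10, 100, 1000): train the linear model on downsample(X, y, M)
# and compare its hold-out accuracy to that of the model trained on GAN samples.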
Figure 5 (panels: CelebA Smiling/Not Smiling; CelebA Black Hair/Not Black Hair; LSUN Bedroom/Kitchen): Illustration of the classification performance of true data compared with GAN-generated synthetic datasets, based on the experiments described in Section 3.4. Classification is performed using the basic linear model described in Section 3.2, and performance is reported in terms of accuracy on a hold-out set of true data. In the plots, the bold curve captures the classification performance of models trained on true data vs. the size of the true dataset (maximum size is N). Dashed lines represent the performance of classifiers trained on various GAN-generated datasets of size N. These plots indicate that GAN samples have "diversity" comparable to a small subset (a few hundred samples) of true data. Here the notion of diversity is one that is relevant for classification tasks.

Table 3: Detailed version of the comparative study of the classification performance of true data and GANs on the CelebA and LSUN datasets shown in Table 1, based on the experiments described in Section 3.4. Label correctness measures the agreement between default labels for the synthetic datasets, and those predicted by the annotator, a classifier trained on the true data. Shown alongside are the equivalent Inception Scores computed using labels predicted by the annotator (instead of the Inception Network). Training and test accuracies for a linear model classifier on the various true and synthetic datasets are reported. Also presented are the corresponding accuracies for a linear model trained on down-sampled true data (↓ M) and oversampled synthetic data (↑ L). Test accuracy for ResNets trained on these datasets is also shown (training accuracy was always 100%), though it is noticeable that deep networks suffer from overfitting issues when trained on synthetic datasets.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
S1FQEfZA-
We propose new methods for evaluating and quantifying the quality of synthetic GAN distributions from the perspective of classification tasks
The goal of survival clustering is to map subjects (e.g., users in a social network, patients in a medical study) to $K$ clusters ranging from low-risk to high-risk. Existing survival methods assume the presence of clear end-of-life signals or introduce them artificially using a pre-defined timeout. In this paper, we forego this assumption and introduce a loss function that differentiates between the empirical lifetime distributions of the clusters using a modified Kuiper statistic. We learn a deep neural network by optimizing this loss, which performs a soft clustering of users into survival groups. We apply our method to a social network dataset with over 1M subjects, and show significant improvement in C-index compared to alternatives.

Free online subscription services (e.g., Facebook, Pandora) use survival models to predict the relationship between observed subscriber covariates (e.g., usage patterns, session duration, gender, location, etc.) and how long a subscriber remains with an active account BID26 BID11. Using the same tools, healthcare providers make extensive use of survival models to predict the relationship between patient covariates (e.g., smoking, administering drug A or B) and the duration of a disease (e.g., herpes, cancer, etc.). In these scenarios, there is rarely an end-of-life signal: non-paying subscribers do not cancel their accounts, and tests rarely declare a patient cancer-free. We want to assign subjects into K clusters, ranging from short-lived to long-lived subscribers (diseases).

Despite the recent community interest in survival models BID1 BID33, existing survival analysis approaches require an unmistakable end-of-life signal (e.g., the subscriber deletes his or her account, the patient is declared disease-free), or a pre-defined end-of-life "timeout" (e.g., the patient is declared disease-free after 5 years, the subscriber is declared permanently inactive after 100 days of inactivity). Methods that require end-of-life signals also include BID23 BID8 BID3 BID14 BID24 BID29 BID31 BID47 BID9 BID19 BID41 BID40 BID17 BID48 BID26 BID0 BID4 BID5 BID35 BID46 BID30.

In this work, we propose to address the lifetime clustering problem without end-of-life signals for the first time, to the best of our knowledge. We begin by describing two possible datasets where such a clustering approach could be applied.

• Social Network Dataset: Users join the social network at different times and participate in activities defined by the social network (login, send/receive comments). The covariates are the various attributes of a user, such as age, gender, number of friends, etc., and the inter-event time is the time between a user's two consecutive activities. In this case, censoring is due to a fixed point of data collection that we denote $t_m$, the time of measurement. Thus, the time till censoring for a particular user is the time from her last activity to $t_m$. The lifetime of a user is defined as the time from her joining till she permanently deletes her account.

• Medical Dataset: Subjects join the medical study at the same time and are checked for the presence of a particular disease. The covariates are the attributes of the disease-causing cell in a subject, and the inter-event time is the time between two consecutive observations of the presence of the disease. The time to censoring is the difference between the time of the last observation when the disease was present and the time of the final observation. If the final observation for a subject indicates presence of the disease, then the time to censoring is zero.
The lifetime of the disease is defined as the time between the first observation of the disease and the time at which it is permanently cured.

We use a deep neural network and a new loss function, with a corresponding backpropagation modification, for clustering subjects without end-of-life signals. We are able to overcome the technical challenges of this problem, in part, thanks to the ability of deep neural networks to generalize while overfitting the training data BID49. The task is challenging for the following reasons:

• The problem is fully unsupervised, as there is no pre-defined end-of-life timeout. While semi-supervised clustering approaches exist BID0 BID4 BID5 BID35 BID46, they assume that end-of-life signals appearing before the observation time are observed; to the best of our knowledge, there is no fully unsupervised approach that can take complex input variables.
• There is no hazard function that can be used to define the "cure" rate, as we cannot determine whether the disease is cured, or whether the subscriber will never return to the website, without observing for an infinitely long time.
• Cluster assignments may depend on highly complex interactions between the observed covariates and the observed events. The unobserved lifetime distributions may not be smooth functions.

Contributions. Using the ability of deep neural networks to model complex nonlinear relationships in the input data, our contribution is a loss function (using the p-value from a modified Kuiper nonparametric two-sample test BID28) and a backpropagation algorithm that can perform model-free (nonparametric) unsupervised clustering of subjects based on their latent lifetime distributions, even in the absence of end-of-life signals. The output of our algorithm is a trained deep neural network classifier that can (soft) assign test and training data subjects into K categories, from high-risk to low-risk individuals. We apply our method to a large social network dataset and show that our approach is more robust than competing methods and obtains better clusters (higher C-index scores).

Why deep neural networks. As with any optimization method that returns a point estimate (a set of neural network weights W in our case), our approach is subject to overfitting the training data. And because our loss function uses p-values, the optimization and overfitting have a rather negative name: p-hacking BID36. That is, the optimization is looking for a W (hypothesis) that decreases the p-value. Deep neural networks, however, are known to both overfit the training data and generalize well BID49. That is, the hypothesis (W) tends to also have small p-values on the (unseen) test data, despite overfitting on the training data (p-hacking).

Outline: In section 3, we describe the traditional survival analysis concepts that assume the presence of end-of-life signals. In section 4, we define a loss function that quantifies the divergence between the empirical lifetime distributions of two clusters without assuming end-of-life signals. We also provide a neural network approach to optimize said loss function. We describe the dataset used in our experiments, followed by results, in section 5. In section 6, we describe a few methods in the literature that are related to our work. Finally, we present our conclusions in section 7.

In this section, we formally define the statistical framework underlying the clustering approach introduced later in this paper. We begin by defining the datasets relevant to the survival clustering task.

Definition 1 (Dataset).
Dataset D consists of a set of n subjects, with each subject u having the following observable quantities: $\Psi_u = \{X_u, \{Y_{u,i}\}_{i=1}^{q_u}, S_u\}$, where $X_u$ are the covariates of subject u, $\{Y_{u,i}\}_{i=1}^{q_u}$ are the observed inter-event times (disease outbreaks, website usage), $q_u$ is the number of observed events of u, and $S_u$ is the time till censoring.

Note that the two example datasets described in section 1 fit into this definition. For instance, in the social network dataset, for a particular user u, $X_u$ is a vector of her covariates (such as age, gender, etc.), $Y_{u,i}$ is the time between her i-th and (i-1)-st activity (login, send/receive comments), and her time till censoring is given by $S_u = t_m - \sum_i Y_{u,i}$, where $t_m$ is the time of measurement. Next, we define the lifetime clustering problem applicable to the aforementioned datasets.

Definition 2 (Clustering problem). Consider a dataset of n subjects, D, constructed as in Definition 1. Let $\hat{P}(U_k)$ be the latent lifetime distribution of all subjects $U_k = \{u\}$ that belong to cluster $k \in \{1, \ldots, K\}$. Our goal is to find a mapping $\kappa: X_u \to \{1, \ldots, K\}$ of covariates into clusters, in the set $\mathcal{K}$ of all possible such mappings, that maximizes the divergence d between the latent lifetime distributions of all subjects:

$\kappa^\star = \arg\max_{\kappa \in \mathcal{K}} d\big(\hat{P}(U_1(\kappa)), \ldots, \hat{P}(U_K(\kappa))\big),$

where $U_k(\kappa)$ is the set of users in U mapped to cluster k through κ, and d is a distribution divergence metric. $\kappa^\star$ optimized in this fashion clusters the subjects into low-risk/high-risk groups. However, because the $\hat{P}(U_k)$ are latent distributions, we cannot directly optimize the objective above. Rather, our loss function must provide an indirect way to optimize it without end-of-life signals. In what follows, we define the activity process of subjects in cluster k as a Random Marked Point Process (RMPP).

Definition 3 (Observable RMPP cluster process). Consider the k-th cluster. The RMPP is $\Phi_k = \big(X_k, \{(Y_{k,i}, A_{k,i})\}_{i \ge 1}, S_k\big)$, where $Y_{k,i}$ is the inter-event time between the (i-1)-st and the i-th activities, $X_k$ is the random variable representing the covariates of subjects in cluster k, $S_k$ is the time from the last event until censoring at cluster k, and $A_{k,i} = 0$ indicates an activity with an end-of-life signal, otherwise $A_{k,i} = 1$. All these variables may be arbitrarily dependent. This definition is model-free, i.e., we will not prescribe a model for $\Phi_k$.

Note that, at least theoretically, $\Phi_k$ continues to evolve beyond the end-of-life signal, but this evolution will be ignored as it is irrelevant to us. The relative time of the i-th activity of a subject of cluster k, since the subject's first activity, is $\sum_{i' \le i} Y_{k,i'}$, as long as we have not yet seen an end-of-life signal, i.e., $\prod_{i' < i} A_{k,i'} = 1$.

Definition 4 (RMPP Lifetime). The random variable that defines the lifetime of a subject of cluster k is $T_k = \sum_{i \ge 1} \big(\prod_{i' < i} A_{k,i'}\big) Y_{k,i}$, i.e., the total activity time up to the first end-of-life signal. We now define censored lifetimes using $\Phi_k$.

Definition 5 (RMPP Censored Lifetimes). Let $i(S_k)$ be a random variable that denotes the number of events until the censoring time $S_k$. The random variable that defines the last observed action time of a subject u of cluster k is $H_k = \sum_{i \le i(S_k)} Y_{k,i}$. The main challenge is not knowing when $H_k = T_k$, because we are unable to observe the end-of-life signal $A_{k,i(S_k)} = 0$. Clearly, this affects the decision of which subjects have been censored and which have not. Later, we introduce the probability of end-of-life, $p: (X_u, S_u) \to [0, 1]$, that provides a way around this challenge.

In this section, we review the major concepts in survival analysis that are used in this paper. Let $T_u$ denote the lifetime of a subject u.
For now, our description assumes an Oracle that provides end-of-life signals. Thus, in addition to $\Psi_u$, we assume for each subject u another observable quantity, $A_{u,i}$, that denotes whether end-of-life has been reached at the user's i-th activity. In survival applications, $A_u$ is typically used to specify if the required event did not occur until the end of the study, known as right-censoring. We shall forego this assumption in subsequent sections and provide a way around the lack of these signals.

Lifetime distribution & Hazard function (Oracle). The lifetime (or survival) distribution is defined as the probability that a subject u survives at least until time t, $S_u(t) = P(T_u > t) = 1 - F_u(t)$, where $F_u(t)$ is the cumulative distribution function of $T_u$. In survival applications, it is typically convenient to define the hazard function, which represents the instantaneous rate of death of a subject given that she has survived until time t. The hazard function of a subject u is $\lambda_u(t) = \frac{dF_u(t)}{S_u(t)}$, where $dF_u$ is the probability density of $F_u$.

Due to right-censoring, we do not observe the true lifetimes of the subjects even in the presence of end-of-life signals $A_u$. We define the observed lifetime of subject u, $H_u$, as the difference between the time of the first event and the time of the last observed event, i.e., $H_u = \sum_{i=1}^{q_u} Y_{u,i}$. Kaplan-Meier estimators provide a way to estimate the lifetime distribution for a set of subjects while incorporating the right-censoring effect. The Kaplan-Meier estimate of the lifetime distribution is given by

$\hat{S}(t) = \prod_{j \le t} \Big(1 - \frac{d_j}{r_j}\Big),$

where $d_j$ denotes the number of subjects with end-of-life at time j, and $r_j = \sum_{u \in D} I[H_u \ge j]$ denotes the number of subjects at risk just prior to time j.

The Cox regression model BID12 is a widely used method in survival analysis to estimate the hazard function $\lambda_u(t)$ using the covariates $X_u$ of a subject u. The hazard function has the form $\lambda(t|X_u) = \lambda_0(t) \cdot e^{\beta^T X_u}$, where $\lambda_0(t)$ is a base hazard function common to all subjects, and β are the regression coefficients. The model assumes that the ratio of the hazard functions of any two subjects is constant over time. This assumption is violated frequently in real-world datasets BID32. A near-extreme case where this assumption does not hold is shown in FIG0, where the survival curves of two groups of subjects cross each other.

Survival Based Clustering Methods (Oracle). There have been relatively few works that perform survival-based clustering. BID3 proposed a semi-supervised method for clustering survival data in which they assign Cox scores BID12 to each feature in their dataset and consider only the features with scores above a predetermined threshold. Then, an unsupervised clustering algorithm, like k-means, is used to group the individuals using only the selected features. BID17 proposed supervised sparse clustering as a modification to the sparse clustering algorithm of BID46. The sparse clustering algorithm has a modified k-means score that uses distinct weights over the feature set. Supervised sparse clustering initializes these feature weights using Cox scores BID12 and optimizes the same objective function. Both these methods assume the presence of end-of-life signals. In this paper, we consider the case when end-of-life signals are not available. We provide a loss function that quantifies the divergence between the survival distributions of the clusters, and we minimize said loss function using a neural network in order to obtain the optimal clusters.
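The Kaplan-Meier estimator above is easy to compute directly from the observed lifetimes. Below is a minimal NumPy sketch under the Oracle assumption, i.e., a binary end-of-life indicator per subject; this is an illustration, not the paper's code.

import numpy as np

def kaplan_meier(H, end_of_life):
    """H: observed lifetimes H_u; end_of_life: 1 if u's end-of-life was observed, else 0."""
    times = np.sort(np.unique(H[end_of_life == 1]))  # distinct event times
    S, curve = 1.0, []
    for j in times:
        d_j = np.sum((H == j) & (end_of_life == 1))  # end-of-life events at time j
        r_j = np.sum(H >= j)                         # subjects at risk just prior to j
        S *= 1.0 - d_j / r_j
        curve.append(S)
    return times, np.array(curve)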
Our goal is to cluster the subjects into K clusters ranging from low-risk to high-risk by keeping the empirical lifetime distributions of these K groups as different as they can be, while ensuring that the observed difference is statistically significant. In this section, we assume there are no end-of-life signals. We introduce a loss function that is based on a divergence measure between the empirical lifetime distributions of two groups, and at the same time takes into account the uncertainty regarding the end-of-life of the subjects. Instead of a clear end-of-life signal, we specify a probability for each subject u that represents how likely her last observed activity is to coincide with her end-of-life.

Definition 6 (Probability of end-of-life). Given a dataset D (Definition 1), we define a function, by an abuse of notation, $p(X_u, S_u) \to [0, 1]$ that gives the probability of end-of-life of each subject u.

Divergence measures like the Kullback-Leibler divergence and the Earth-Mover's distance, which do not incorporate the empirical nature of the given probability distributions, are not appropriate for our task, as they do not discourage highly imbalanced groups (FIG0). This motivates the use of two-sample tests that allow the probability distributions to be empirical. The logrank test BID34 BID37 BID7 is commonly used to compare groups of subjects based on their hazard rates. However, the test assumes proportional hazards (section 3) and will not be able to find groups whose hazard rates are not proportional to each other (FIG0). BID16 introduced the Modified Kolmogorov-Smirnov (MKS) statistic, which works for arbitrarily right-censored data and does not assume hazard proportionality. But MKS suffers from the same drawback as the standard Kolmogorov-Smirnov statistic, namely that it is not sensitive to differences in the tails of the distributions. In this paper, we use the p-value from the Kuiper statistic BID28, which extends the Kolmogorov-Smirnov statistic to increase the statistical power of distinguishing distribution tails BID45.

Definition 7 (Optimization of Kuiper loss). Given a dataset D (Definition 1), we define a loss L(κ, p) where, by an abuse of notation, $\kappa(X_u) \to [0, 1]$ is a mapping that performs soft clustering of subjects into two clusters 0 & 1 by outputting the probability of a subject belonging to cluster 0, and $p(X_u, S_u) \to [0, 1]$ is a function that gives the probability of end-of-life of a subject in D. Our goal is to obtain

$\hat{\kappa}, \hat{p} = \arg\min_{\kappa, p} L(\kappa, p),$

where the loss function L(κ, p) returns the logarithm of a p-value from the Kuiper statistic BID38,

$V = \max_t \big(S_0(t; \kappa, p) - S_1(t; \kappa, p)\big) + \max_t \big(S_1(t; \kappa, p) - S_0(t; \kappa, p)\big),$

where, for k = 0, 1, $S_k(t; \kappa, p)$ is the empirical lifetime distribution of cluster k, in which each subject u contributes with membership weight $\kappa(X_u)$ (for k = 0) or $1 - \kappa(X_u)$ (for k = 1), and is treated as uncensored with probability $p(X_u, S_u)$.

The following theorem states a few properties of our loss function.

Theorem 1 (Kuiper loss properties). From Definition 7, consider two clusters with true lifetime distributions $\hat{P}(U_1)$ and $\hat{P}(U_2)$. Assume an infinite number of samples/subjects. Then, the loss function defined above has the following properties:
(a) If the two clusters have distinct lifetime distributions, i.e., $\hat{P}(U_1) \neq \hat{P}(U_2)$, then either the right-censoring masks this difference and $L(\kappa, p) \to 0$ for all κ, p, or there exist κ, p such that $L(\kappa, p) \to -\infty$ as $n \to \infty$.
(b) If the two clusters have the same stochastic process $\Psi_u$ (Definition 1), $\Psi_u = \Psi_v$ for any two subjects u and v, regardless of cluster assignments, then $\forall \kappa, p$, $L(\kappa, p) \to 0$.

We prove Theorem 1 in Appendix 8.1 by defining the activity process of the subjects using shifted Random Marked Point Processes.
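For intuition, the sketch below computes the plain (unweighted, uncensored) two-sample Kuiper statistic and its standard asymptotic p-value. The soft cluster memberships κ and end-of-life probabilities p of Definition 7 are omitted here, and the p-value series is the usual asymptotic approximation rather than the paper's exact expression.

import numpy as np

def kuiper_two_sample(x1, x2):
    """Kuiper statistic V = D+ + D- and its asymptotic p-value."""
    n1, n2 = len(x1), len(x2)
    grid = np.sort(np.concatenate([x1, x2]))
    cdf1 = np.searchsorted(np.sort(x1), grid, side='right') / n1
    cdf2 = np.searchsorted(np.sort(x2), grid, side='right') / n2
    V = (cdf1 - cdf2).max() + (cdf2 - cdf1).max()
    ne = n1 * n2 / (n1 + n2)                      # effective sample size
    lam = (np.sqrt(ne) + 0.155 + 0.24 / np.sqrt(ne)) * V
    j = np.arange(1, 101)                         # truncated asymptotic series
    p = 2.0 * np.sum((4.0 * j**2 * lam**2 - 1.0) * np.exp(-2.0 * j**2 * lam**2))
    return V, float(np.clip(p, 0.0, 1.0))

# The loss of Definition 7 is then log(p) computed on the weighted cluster distributions.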
The loss defined above solves all the aforementioned issues: a) it does not need clear end-of-life signals; b) the use of a p-value forces a sufficient number of examples into both groups; c) it does not assume proportionality of hazards and works even for crossing survival curves; and d) it accounts for differences at the tails.

In this section, we describe the functions κ(·) and p(·) in Definition 7. κ(·) gives the probability of a subject u being in cluster 0, and we define it using a neural network as follows:

$\kappa(X_u) = \sigma\big(W_L\, \phi(W_{L-1} \cdots \phi(W_1 X_u + b_1) \cdots + b_{L-1}) + b_L\big),$

where $\{W_l, b_l\}_{l=1}^{L}$ are the weights and biases of a neural network with L − 1 hidden layers, $X_u$ are the covariates of subject u, φ is an activation function (tanh or relU in our experiments), and σ is the softmax function. An example of a feedforward neural network that optimizes the Kuiper loss is shown in FIG1.

Next, we describe the form of the end-of-life probability function p(·). We make the reasonable assumption that p(·) is an increasing function of $S_u$. For example, consider two subjects a and b, with last activities one year and one week before their respective times of censoring. Clearly, it is more likely that subject a's activity is her last one than that b's activity is her last one. In our experiments, we also assume that p(·) depends only on $S_u$, and not on the covariates $X_u$.

Survival tasks commonly use the following naive technique to identify the end-of-life signal: they define p(·) using a step function, $p(X_u, S_u) = 1[S_u > W]$, where W is the size of an arbitrary window from the time of censoring (see FIG1). However, this approach does not allow learning of the window size parameter W, and hence the analysis can be highly coupled with the choice of W. We remedy this by choosing p(·) to be a smooth non-decreasing function of $S_u$, the parameters of which can be learnt by minimizing the loss function L(κ, p). We use the cumulative distribution function of an exponential distribution in our experiments, i.e., $p(X_u, S_u) = 1 - e^{-\beta \cdot S_u}$ (FIG1). The rate parameter β is learnt using gradient descent along with the weights of the neural network.

Extension to K Clusters. Until now, we dealt with clustering the subjects into two groups. However, it is not hard to extend the framework to K clusters. We increase the number of units in the output layer from 2 to K. As before, a softmax function applied at the output layer gives probabilities that define a soft clustering of the subjects into K groups. These probabilities can be used to obtain the loss $L_{A,B}$ between any two groups $D_A$ and $D_B$. We define the total loss for K groups as the average of all the pairwise losses between individual groups, which corresponds to the geometric mean of the pairwise p-values, i.e.,

$L_{1 \ldots K} = \frac{2}{K(K-1)} \sum_{1 \le A < B \le K} L_{A,B}.$    (10)

In other words, the loss $L_{1 \ldots K}$ is minimized only if each of the individual p-values is low, indicating that each group's lifetime distribution is different (in divergence) from every other group's lifetime distribution.

Implementation. We implement a feedforward neural network in Theano and use ADAM BID27 to optimize the loss $L_{1 \ldots K}$ defined in equation 10. Each iteration of the optimization takes as input a batch of subjects (full batch or a minibatch), generates a value for the loss, calculates the gradients, and updates the parameters (β and the network weights $\{W_l, b_l\}$). This is done repeatedly until there is no improvement in the validation loss. We use L2 regularization over the weights and experiment with different values for the regularization parameter.
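To make the architecture concrete, here is a minimal PyTorch-style sketch of κ(·) with a learnable rate β for $p(S_u) = 1 - e^{-\beta S_u}$. The paper's implementation is in Theano; this re-expression, the layer sizes, and the positivity parametrization of β are our own assumptions, and the differentiable Kuiper loss itself is omitted.

import torch
import torch.nn as nn

class SurvivalClusterNet(nn.Module):
    """Soft cluster assignment kappa(X_u) and end-of-life probability p(S_u)."""
    def __init__(self, n_features, n_clusters=2, n_hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(n_features, n_hidden), nn.Tanh(),
            nn.Linear(n_hidden, n_clusters))
        self.log_beta = nn.Parameter(torch.zeros(1))  # beta = exp(log_beta) > 0, learnt jointly

    def forward(self, X, S):
        kappa = torch.softmax(self.mlp(X), dim=1)            # soft membership over K clusters
        p = 1.0 - torch.exp(-torch.exp(self.log_beta) * S)   # smooth, increasing in S_u
        return kappa, p

Both outputs feed the Kuiper loss, and ADAM updates the network weights and β together.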
We also experiment with different neural network sizes (number of hidden layers, number of hidden units), activation functions for the hidden layers, and weight initialization techniques. We apply different deep learning techniques like batch normalization BID22 and dropout to better learn the neural network.

In this paper, we analyze a large-scale social network dataset collected from Friendster. After processing 30TB of data, originally collected by the Internet Archive in June 2011, the resulting network has around 15 million users with 335 million friendship links. Each user has profile information such as age, gender, marital status, occupation, and interests. Additionally, there are user comments on each other's profile pages with timestamps that indicate activity on the site. In our experiments, we only use the data up to March 2008, as Friendster's monthly active users were significantly affected by the introduction of the "new Facebook wall" BID39. From this, we only consider a subset of 1.1 million users who had participated in at least one comment, and had specified basic profile information like age and gender. We make our processed data available to the public at location (anonymized).

We build the dataset $D = \{\Psi_u\}$ with $\Psi_u = \{X_u, \{Y_{u,i}\}_{i=1}^{q_u}, S_u\}$ (Definition 1) for our clustering task as follows.

$X_u$: We use each user's profile information (like age, gender, relationship status, occupation and location) as features. For the high-dimensional discrete attributes like location and occupation, we use the 20 most frequently occurring values. To help improve the performance of competing methods, we also construct additional features using the user's activity over the initial 5 months (like number of comments sent and received, number of individuals interacted with, etc.). In total, we construct 60 features that are used for each of the models in our experiments.

$Y_{u,i}$: We define $Y_{u,i}$ as the time between u's i-th comment (sent or received) and (i-1)-st comment (sent or received). $q_u$ is then defined as the total number of comments the user participated in.

$S_u$: We calculate $S_u$ as the time between u's last activity and the chosen time of measurement (March 2008).

We experimented with different neural network architectures, as shown in TAB1. In Table 1, we show the results for a simple neural network configuration with one fully-connected hidden layer with 128 hidden units and the tanh activation function. We use a batch size of 8192 and a learning rate of $10^{-4}$. We also use batch normalization BID22 to facilitate convergence, and regularize the weights of the neural network using an L2 penalty of 0.01. Appendix 8.2 shows a more detailed evaluation of different architecture choices.

We evaluate the models using 10-fold cross validation as follows. We split the dataset D randomly into 10 folds, sample 100,000 users without replacement from the i-th fold for testing, and sample 100,000 users similarly from the remaining 9 folds for training. We use 20% of the training data as validation data for early stopping in our neural network training. We repeat this for i ranging from 1 to 10.

We compare our clustering approach with the only two survival-based clustering methods in the literature: a) semi-supervised clustering BID3 and b) supervised sparse clustering BID17. Since both these methods require clear end-of-life signals, we use an arbitrary window of 10 months (i.e., a pre-defined "timeout") prior to the time of measurement in order to obtain these signals (FIG1).
We also try window sizes of 0 months (only the users with activity at $t_m$ are censored) and 5 months, and obtain similar results (not reported here). We test our approach in two cases: in the presence and in the absence of end-of-life signals. In the former case, we optimize the loss function of Definition 7 keeping p(·) fixed to the end-of-life signals obtained using a window of 10 months. In the latter case, our approach learns the latent end-of-life signals. We also experiment with a loss function based on the Kolmogorov-Smirnov statistic (denoted NN-KS) and report its performance as well. We evaluate the clusters obtained from each of these methods using the concordance index.

Concordance Index. The concordance index or C-index BID18 is a commonly used metric in survival applications BID1 BID33 to quantify a model's ability to discriminate between subjects with different lifetimes. It calculates the fraction of pairs of subjects for which the model predicts the correct order of survival, while also incorporating censoring. We use the end-of-life signals calculated using a pre-defined "timeout" of 10 months. Rather than populating all possible pairs of users, we sample 10000 random pairs to calculate the C-index. Table 1 shows the C-index values for the baselines and the proposed method.

Table 1: C-index (%) for clusters from different methods with K = 2, 3, 4, where K is the number of clusters. The proposed approach is more robust, with lower standard deviations than the competing methods.

Discussion. The proposed neural network approach performs better on average than the two competing methods. Even without end-of-life signals, the proposed approach achieves comparable scores for K = 3, 4 and the best C-index score for K = 2. Although NN-Kuiper is theoretically more robust than NN-KS because of its increased statistical power in distinguishing distribution tails BID45, we do not observe a performance difference on the Friendster dataset. Further, we use the end-of-life signals obtained using a window of 10 months to calculate the empirical lifetime distributions of the clusters identified by the neural network (FIG2). The empirical lifetime distributions of the clusters seem distinct from each other at K = 2 but not at K = 3, 4. In addition, there is no significant gain in the C-index values as we increase the number of clusters from K = 2 to K = 4. Hence, we can conclude that there are only two types of users in the Friendster dataset: long-lived and short-lived.

The majority of the work in survival analysis has dealt with the task of predicting the survival outcome, especially when the number of features is much higher than the number of subjects BID47 BID9 BID19 BID41. A number of approaches have also been proposed to perform feature selection in survival data BID24 BID29. In the social network scenario, BID43 tried to predict the relationship building time, that is, the time until a particular link is formed in the network. Many unsupervised approaches have been proposed to identify cancer subtypes in gene expression data without considering the survival outcome BID13 BID2 BID6. Traditional semi-supervised clustering methods BID0 BID4 BID5 BID35 do not perform well in this scenario since they do not provide a way to handle the issues with right censoring. Semi-supervised clustering BID3 and supervised sparse clustering Witten and Tibshirani (2010a) use Cox scores BID12 to identify features associated with survival.
They treat these features differently in order to perform clustering based on the survival outcome. Although there are quite a few works on using neural networks to predict the hazard rates of individuals BID14, to the best of our knowledge, there has not been work on using neural networks for a survival-based clustering task. Recently, Alaa and van der proposed a nonparametric Bayesian approach for survival analysis in the case of more than one competing event (multiple diseases). They assume not only the presence of end-of-life signals but also the type of event that caused the end-of-life. BID33 optimize a loss based on Cox's partial likelihood along with a penalty, using a deep neural network to predict the probability of survival at a point in time. Here we considered the task of clustering subjects into low-risk/high-risk groups without observing any end-of-life signals. Extensive research has been done on what is known as frailty analysis, for predicting survival outcomes in the presence of clustered observations BID20 BID10 BID21. Although frailty models provide more flexibility in the presence of clustered observations, they do not provide a mechanism for obtaining the clusters themselves, which is our primary goal. In addition, our approach does not assume proportional hazards, unlike most frailty models.

In this work we introduced a Kuiper-based nonparametric loss function, and a corresponding backpropagation procedure (which backpropagates the loss over clusters rather than the loss per training example). These procedures are then used to train a feedforward neural network to inductively assign observed subject covariates into K survival-based clusters, from high-risk to low-risk subjects, without requiring an end-of-life signal. We showed that the resulting neural network produces clusters with better C-index values than other competing methods. We also presented the survival distributions of the clusters obtained from our procedure and concluded that there were only two groups of users in the Friendster dataset.

Both parts (a) and (b) of our proof need Definition 3, which translates the observed data $\Psi_u$ for subject u into a stochastic process.

Proof of (a): If the two clusters have distinct lifetime distributions, it means that the distributions of $T_0$ and $T_1$ (Definition 4) are different. Then, either the right-censoring δ (Definition 5) does not allow us to see the difference between $T_0$ and $T_1$, and then there are no mappings $\hat{p}$ and $\hat{\kappa}$ that can get the distributions of $S_0(t; \hat{\kappa}, \hat{p})$ and $S_1(t; \hat{\kappa}, \hat{p})$ to be distinct, implying $L(\kappa, p) \to 0$ as $n \to \infty$, as the observations come from the same distribution, making the Kuiper p-value asymptotically equal to one; or δ does allow us to see the difference, and then clearly $\hat{p} \equiv 0$, with a mapping $\hat{\kappa}$ that assigns more than half of the subjects to their correct clusters, would allow us to see the difference in $H_0$ and $H_1$, and would give a Kuiper p-value asymptotically equal to zero. Thus, $L(\kappa, p) \to -\infty$ as $n \to \infty$.

Proof of (b): Because κ takes only the subject covariates as input, and there are no dependencies between the subject covariates and the subject lifetime, any clustering based on the covariates will be a random assignment of users into clusters. Moreover, the censoring time of subject u, $S_u$, has the same distribution for both clusters because the RMPPs are the same. Thus, $H_0 \stackrel{d}{=} H_1$, i.e., $H_0$ and $H_1$ have the same distributions, the Kuiper statistic tends to zero and its p-value to one, and $L(\kappa, p) \to 0$ as $n \to \infty$.
Table 4: C-index (%) over different learning rates and batch sizes for the proposed NN approach with Kuiper loss (with learnt exponential end-of-life probability) and K = 2.
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SJme6-ZR-
The goal of survival clustering is to map subjects into clusters. Without end-of-life signals, this is a challenging task. To address this task, we propose a new loss function obtained by modifying the Kuiper statistic.
Bayesian optimization (BO) is a popular methodology to tune the hyperparameters of expensive black-box functions. Despite its success, standard BO focuses on a single task at a time and is not designed to leverage information from related functions, such as tuning performance metrics of the same algorithm across multiple datasets. In this work, we introduce a novel approach to achieve transfer learning across different datasets as well as different metrics. The main idea is to regress the mapping from hyperparameters to metric quantiles with a semi-parametric Gaussian Copula distribution, which provides robustness against the different scales or outliers that can occur across tasks. We introduce two methods to leverage this estimation: a Thompson sampling strategy as well as a Gaussian Copula Process using such a quantile estimate as a prior. We show that these strategies can combine the estimation of multiple metrics, such as runtime and accuracy, steering the optimization toward cheaper hyperparameters for the same level of accuracy. Experiments on an extensive set of hyperparameter tuning tasks demonstrate significant improvements over state-of-the-art methods.

Tuning complex machine learning models such as deep neural networks can be a daunting task. Object detection or language understanding models often rely on deep neural networks with many tunable hyperparameters, and automatic hyperparameter optimization (HPO) techniques such as Bayesian optimization (BO) are critical to finding good hyperparameters in a short time. BO addresses the black-box optimization problem by placing a probabilistic model on the function to minimize (e.g., the mapping of neural network hyperparameters to a validation loss), and determines which hyperparameters to evaluate next by trading off exploration and exploitation through an acquisition function. While traditional BO focuses on each problem in isolation, recent years have seen a surge of interest in transfer learning for HPO. The key idea is to exploit evaluations from previous, related tasks (e.g., the same neural network tuned on multiple datasets) to further speed up the hyperparameter search.

A central challenge of hyperparameter transfer learning is that different tasks typically have different scales, varying noise levels, and possibly contain outliers, making it hard to learn a joint model. In this work, we show how a semi-parametric Gaussian Copula can be leveraged to learn a joint prior across datasets in such a way that scale issues vanish. We then demonstrate how such a prior estimate can be used to transfer information across tasks and objectives. We propose two HPO strategies: Copula Thompson Sampling and a Gaussian Copula Process. We show that these approaches can jointly model several objectives with potentially different scales, such as validation error and compute time, without requiring any preprocessing. We demonstrate significant speed-ups over a number of baselines in extensive experiments.

The paper is organized as follows. Section 2 reviews related work on transfer learning for HPO. Section 3 introduces Copula regression, the building block for the HPO strategies we propose in Section 4. Specifically, we show how Copula regression can be applied to design two HPO strategies, one based on Thompson sampling and an alternative GP-based approach. Experimental results are given in Section 5, where we evaluate both approaches against state-of-the-art methods on three algorithms. Finally, Section 6 outlines conclusions and further developments.
A variety of methods have been developed to induce transfer learning in HPO. The most common approach is to model tasks jointly or via a conditional independence structure, which has been explored through multi-output GPs, weighted combinations of GPs (; ;), and neural networks, either fully Bayesian or hybrid (; ;). A different line of research has focused on the setting where tasks come over time as a sequence and models need to be updated online as new problems accrue. A way to achieve this is to fit a sequence of surrogate models to the residuals relative to the predictions of the previously fitted model. Specifically, the GP over the new task is centered on the predictive mean of the previously learned GP. Finally, rather than fitting a surrogate model to all past data, some transfer can be achieved by warm-starting BO with the solutions to the previous BO problems (; b).

A key challenge for joint models is that different black-boxes can exhibit heterogeneous scale and noise levels. To address this, some methods have instead focused on the search-space level, aiming to prune it to focus on regions of the hyperparameter space where good configurations are likely to lie. An example is Wistuba et al. (2015a), where related tasks are used to learn a promising search space during HPO, defining task similarity in terms of the distance between the respective dataset meta-features. A more recent alternative that does not require meta-features was introduced in, where a restricted search space in the form of a low-volume hyper-rectangle or hyper-ellipsoid is learned from the optimal hyperparameters of related tasks. Rank estimation can also be used to alleviate scale issues; however, the difficulty of feeding rank information back to a GP leads to restrictive assumptions, for instance not modeling the uncertainty of the rank estimate, or using independent GPs, which removes the adaptivity of the GP to the current task. A Gaussian Copula Process (GCP) can also be used to alleviate scale issues on a single task, at the extra cost of estimating the CDF of the data. Using GCP for HPO was proposed to handle potentially non-Gaussian data, albeit only considering non-parametric homoskedastic priors for the single-task and single-objective case.

For each task, denote with $f_j: \mathbb{R}^p \to \mathbb{R}$ the error function one wishes to minimize, and with $D_j = \{(x_i, y_i)\}_i$ the evaluations available for an arbitrary task. Given the evaluations on M tasks, we are interested in speeding up the optimization of an arbitrary new task f, namely in finding $\arg\min_{x \in \mathbb{R}^p} f(x)$ in the least number of evaluations. One possible approach to speeding up the optimization of f is to build a surrogate model $\hat{f}(x)$. While using a Gaussian process is possible, scaling such an approach to the large number of evaluations available in a transfer learning setting is challenging. Instead, we propose fitting a parametric estimate of the distribution of f, which can later be used in HPO strategies as the prior of a Gaussian Copula Process. A key requirement here is to learn a joint model; that is, we would like to find θ which fits well on all observed tasks $f_j$. We show how this can be achieved with a semi-parametric Gaussian Copula in two steps: first, we map all evaluations to quantiles with the empirical CDF; then, we fit a parametric Gaussian distribution on the quantiles mapped through the Gaussian inverse CDF.
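Before the formal construction below, here is a minimal sketch of these two steps on a single task's evaluations, using the Winsorized cut-off defined in the next paragraph; this is an illustration under our own assumptions, not the authors' code.

import numpy as np
from scipy.stats import norm

def copula_transform(y):
    """Map evaluations y to z = Phi^{-1}(F(y)) with a Winsorized empirical CDF."""
    n = len(y)
    F_hat = np.searchsorted(np.sort(y), y, side='right') / n    # empirical CDF at each y_i
    delta = 1.0 / (4.0 * n**0.25 * np.sqrt(np.pi * np.log(n)))  # Winsorized cut-off
    u = np.clip(F_hat, delta, 1.0 - delta)                      # avoid infinities at min/max of y
    return norm.ppf(u)                                          # z is approximately N(0, 1)

After this transformation, the z-values of all tasks live on a common scale, so a single model (mu_theta, sigma_theta) can be fit jointly across tasks.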
First, observe that since every y_i comes from the same distribution for a given task, the probability integral transform results in u_i = F(y_i), where F is the cumulative distribution function of y. We then model the CDF of (u_1, ..., u_N) with a Gaussian Copula: C(u_1, ..., u_N) = φ_{μ,Σ}(Φ^{-1}(u_1), ..., Φ^{-1}(u_N)), where Φ is the standard normal CDF and φ_{μ,Σ} is the CDF of a normal distribution parametrized by μ and Σ. Assuming F to be invertible, we define the change of variable z = Φ^{-1} ∘ F(y) = ψ(y) and g = ψ ∘ f. We regress the marginal distribution P(z|x) with a Gaussian distribution whose mean and variance are two deterministic parametric functions given by μ_θ(x) = w_μ^T h_{w_h}(x) + b_μ and σ_θ(x) = Ψ(w_σ^T h_{w_h}(x) + b_σ), where h_{w_h}(x) ∈ R^d is the output of a multi-layer perceptron (MLP), w_μ, b_μ, w_σ, b_σ are projection parameters, and Ψ(t) = log(1 + exp t) is an activation mapping to positive numbers. The parameters θ = {w_h, w_μ, b_μ, w_σ, b_σ}, including the parameters of the MLP, are learned by minimizing the Gaussian negative log-likelihood on the available evaluations, e.g., with SGD. Here, a term ψ′(ψ^{-1}(z)) accounts for the change of variable z = ψ(y). Due to this term, errors committed where the quantile function changes rapidly receive larger gradients than errors committed where the quantile function is flat. Note that while we weight the evaluations of each task equally, one may alternatively normalize gradient contributions by the number of task evaluations. The transformation ψ requires F, which needs to be estimated. Rather than using a parametric or density estimation approach, we use the empirical CDF F̂(t) = (1/N) Σ_{i=1}^N 1_{y_i ≤ t}. While this estimator has the advantage of being non-parametric, it leads to infinite values when evaluating ψ at the minimum or maximum of y. To avoid this issue, we use the Winsorized cut-off estimator, which clips F̂(t) to [δ_N, 1 − δ_N], where N is the number of observations of y; choosing δ_N = 1/(4 N^{1/4} √(π log N)) strikes a bias-variance trade-off. This approach is semi-parametric in that the CDF is estimated with a non-parametric estimator and the Gaussian Copula is estimated with a parametric approach. The benefit of using a non-parametric estimator for the CDF is that it allows us to map the observations of each task to comparable distributions, as z ∼ N(0, 1) for all tasks j. This is critical to allow the joint learning of the parametric estimates μ_θ and σ_θ, which share their parameter θ across all tasks. As our experiments will show, one can regress a parametric estimate that has a standard error lower than 1. This means that information can be leveraged from the evaluations obtained on related tasks, whereas a trivial predictor for z would predict 0 and yield a standard error of 1. In the next section we show how this estimator can be leveraged to design two novel HPO strategies. 4 COPULA BASED HPO 4.1 COPULA THOMPSON SAMPLING Given the predictive distribution P(z|x) ∼ N(μ_θ(x), σ_θ(x)²), it is straightforward to derive a Thompson sampling strategy exploiting knowledge from previous tasks. Given N candidate hyperparameter configurations x_1, ..., x_N, we sample from each predictive distribution z̃_i ∼ N(μ_θ(x_i), σ_θ(x_i)²) and then evaluate f(x_i) where i = arg min_i z̃_i. Pseudo-code is given in the appendix. While this approach can re-use information from previous tasks, it does not exploit the evaluations from the current task, as each draw is independent of the observed evaluations. This can become an issue if the new black-box significantly differs from previous tasks.
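To make the two-step transformation concrete, here is a minimal NumPy/SciPy sketch of the Winsorized empirical CDF and the copula map ψ = Φ^{-1} ∘ F described above, together with the Thompson sampling selection rule. The names `mu_theta` and `sigma_theta` stand in for the trained parametric estimates; this is our sketch, not the paper's code.

```python
import numpy as np
from scipy.stats import norm

def winsorized_ecdf(y_train):
    """Winsorized empirical CDF built from one task's observations y."""
    y_sorted = np.sort(np.asarray(y_train, dtype=float))
    n = len(y_sorted)
    delta = 1.0 / (4.0 * n**0.25 * np.sqrt(np.pi * np.log(n)))
    def F(t):
        u = np.searchsorted(y_sorted, t, side="right") / n
        return np.clip(u, delta, 1.0 - delta)  # avoid psi = +/- inf
    return F

def psi(y, F):
    """Copula transform psi = Phi^{-1} o F, mapping y to roughly N(0, 1)."""
    return norm.ppf(F(np.asarray(y, dtype=float)))

def copula_thompson_step(candidates, mu_theta, sigma_theta, rng):
    """Sample z_i ~ N(mu(x_i), sigma(x_i)^2); evaluate f at the argmin."""
    z = rng.normal([mu_theta(x) for x in candidates],
                   [sigma_theta(x) for x in candidates])
    return int(np.argmin(z))

# Example: a heavy-tailed metric mapped to an approximately standard normal.
rng = np.random.default_rng(0)
y = rng.gamma(shape=2.0, scale=3.0, size=200)
z = psi(y, winsorized_ecdf(y))
print(round(z.mean(), 2), round(z.std(), 2))  # roughly 0.0 and 1.0
```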
We now show that Gaussian Copula regression can be combined with a GP to both learn from previous tasks and adapt to the current task. Instead of modeling observations with a GP, we model them with a Gaussian Copula Process (GCP). Observations are mapped through the bijection ψ = Φ^{-1} ∘ F, where we recall that Φ is the standard normal CDF and F is the CDF of y. As ψ is monotonically increasing and maps onto the real line, we can alternatively view this modeling as a warped GP with a non-parametric warping. One advantage of this transformation is that z = ψ(y) follows a normal distribution, which may not be the case for y = f(x). In the specific case of HPO, y may represent accuracy scores of a classifier in [0, 1], where a Gaussian cannot be used. Furthermore, we can use the information gained on other tasks with μ_θ and σ_θ by using them as prior mean and variance. To do so, the following residual is modeled with a GP: r(x) = (g(x) − μ_θ(x)) / σ_θ(x), where g = ψ ∘ f. We use a Matérn-5/2 covariance kernel and automatic relevance determination hyperparameters, optimized by type II maximum likelihood to determine the GP hyperparameters. Fitting the GP gives the predictive distribution of the residual surrogate r̂. Because μ_θ and σ_θ are deterministic functions, the predictive distribution of the surrogate ĝ is then given by ĝ(x) = μ_θ(x) + σ_θ(x) r̂(x). Using this predictive distribution, we can select the hyperparameter configuration maximizing the Expected Improvement (EI) of g(x). The EI can be written in closed form in terms of Φ and φ, the CDF and PDF of the standard normal, respectively. When no observations are available, the empirical CDF F̂ is not defined. Therefore, we warm-start the optimization on the new task by sampling a set of N_0 = 5 hyperparameter configurations via Thompson sampling, as described above. Pseudo-code is given in Algorithm 1: learn the parameters θ of μ_θ(x) and σ_θ(x) on hold-out evaluations D_M by minimizing equation 1; sample an initial set of evaluations; then, while the budget is not exhausted, fit the GP surrogate r̂ to the residual observations derived from {(x, y) ∈ D}, sample N candidate hyperparameters x_1, ..., x_N from the search space, and evaluate the hyperparameter x_i where i = arg max_i EI(x_i). In addition to optimizing the accuracy of a black-box function, it is often desirable to optimize its runtime or memory consumption. For instance, given two hyperparameters with the same expected error, the one requiring fewer resources is preferable. For tasks where runtime is available, we use both the runtime and error objectives by averaging in the transformed space, e.g., we set z(x) = (z_error(x) + z_time(x)) / 2, where z_error(x) = ψ(f_error(x)) and z_time(x) = ψ(f_time(x)) denote the transformed error and time observations, respectively. This allows us to seamlessly optimize for time and error when running HPO, so that the cheaper hyperparameter is favored when two hyperparameters lead to a similar expected error. Notice that many existing multi-objective methods can potentially be combined with our Copula transformation as an extension, which we believe is an interesting avenue for future work. We considered the problem of tuning three algorithms on multiple datasets: XGBoost, a 2-layer feed-forward neural network (FCNET), and the RNN-based time series prediction model DeepAR. We tuned XGBoost on 9 libsvm datasets to minimize 1 − AUC, and FCNET on 4 datasets to minimize the test mean squared error.
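A sketch of how the GCP surrogate combines the copula prior with the residual GP, plus the closed-form EI. The paper elides the EI formula; below is the standard expression for minimization. `gp_residual.predict` is a hypothetical interface for a fitted GP posterior, assumed for illustration.

```python
import numpy as np
from scipy.stats import norm

def gcp_predictive(x, mu_theta, sigma_theta, gp_residual):
    """Predictive mean/std of the transformed objective g(x) = psi(f(x)):
    the residual GP posterior is un-standardized with the copula prior."""
    mu_r, sigma_r = gp_residual.predict(x)      # hypothetical GP interface
    mu_g = mu_theta(x) + sigma_theta(x) * mu_r
    sigma_g = sigma_theta(x) * sigma_r
    return mu_g, sigma_g

def expected_improvement(mu_g, sigma_g, z_best):
    """Standard closed-form EI for minimization under a Gaussian predictive."""
    gamma = (z_best - mu_g) / sigma_g
    return sigma_g * (gamma * norm.cdf(gamma) + norm.pdf(gamma))
```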
As for DeepAR, the evaluations were collected on the data provided by GluonTS, consisting of 6 datasets from the M4-competition and 5 datasets used in , and the goal is to minimize the quantile loss. Additionally, for DeepAR and FCNET the runtime to evaluate each hyperparameter configuration was available, and we ran additional experiments exploiting this objective. More details on the HPO setup are in Table 1, and the search spaces of the three problems are in Table 4 of the appendix. Lookup tables are used as advocated in ; more details and statistics can be found in the appendix. We compare against a number of baselines. We consider random search and GP-based BO as two of the most popular HPO methods. As a transfer learning baseline, we consider warm-start GP, using the best-performing evaluations from all tasks to warm-start the GP on the target task (WS GP best). As an extension of WS GP best, we apply standardization to the objectives of the evaluations for every task and then use all of them to warm-start the GP on the target task (WS GP all). We also compare against two recently published transfer learning methods for HPO: ABLR and a search space-based transfer learning method. ABLR is a transfer learning approach consisting of a shared neural network across tasks, on top of which lies a Bayesian linear regression layer per task. The search space-based method transfers information by fitting a bounding box to contain the best hyperparameters from each previous task, and applies random search (Box RS) or GP-based BO (Box GP) in the learned search space. We assess the transfer learning capabilities of these methods in a leave-one-task-out setting: we sequentially leave out one dataset and then aggregate the results for each algorithm. The performance of each method is first averaged over 30 replicates for one dataset in a task, and the relative improvements over random search are computed at every iteration for that dataset. The relative improvement for an optimizer (opt) is defined by (y_random − y_opt)/y_random, which is upper bounded by 100%. Notice that all the objectives y are in R_+. By computing the relative improvements, we can aggregate across all datasets for each algorithm. Finally, for all copula-based methods, we learn the mapping to copulas via a 3-layer MLP with 50 units per layer, optimized by ADAM with early stopping. To give more insight into the components of our method, we perform a detailed ablation study to investigate the choice of the MLP and compare the copula estimation to simple standardization. Choice of copula estimators. For copula-based methods, we use an MLP to estimate the output. We first compare it to other possible options, including a linear model and a k-nearest-neighbor estimator, in a leave-one-out setting: we sequentially take the hyperparameter evaluations of one dataset as the test set and use all evaluations from the other datasets as the training set. We report the RMSE when predicting the error of the black-box in Table 5 of the appendix. From this table, it is clear that the MLP tends to be the best performing estimator among the three. In addition, a low RMSE indicates that the task is close to the prior learned on all the other tasks, and we should thus expect transfer learning methods to perform well. As shown later by the BO experiments, FCNET has the lowest RMSE among the three algorithms, and all transfer learning methods indeed perform much better than single-task approaches.
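A small sketch of the aggregation just described: average each method over replicates first, then compute the per-iteration relative improvement over random search (the numbers here are synthetic, for illustration only).

```python
import numpy as np

def relative_improvement(y_random, y_opt):
    """Relative improvement over random search; objectives are positive,
    so this quantity is upper-bounded by 100%."""
    return (y_random - y_opt) / y_random

# Toy aggregation: 30 replicates x 100 iterations per method.
rng = np.random.default_rng(0)
y_rand = rng.uniform(0.2, 1.0, size=(30, 100)).mean(axis=0)  # per-iteration mean
y_opt = rng.uniform(0.1, 0.8, size=(30, 100)).mean(axis=0)
print(relative_improvement(y_rand, y_opt).mean())  # averaged over iterations
```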
Homoskedastic and heteroskedastic noise. The proposed Copula estimator (MLP) uses heteroskedastic noise for the prior. We now compare it to a homoskedastic version where we only estimate the mean. The results are summarized in Table 2, where average relative improvements over random search across all iterations and replicates are shown. It is clear that heteroskedasticity tends to help on most datasets. Copula transformation and standardization. In our method, we map objectives to be normally distributed in two steps: first we apply the probability integral transform, followed by a Copula transform using the inverse CDF of a Gaussian. To demonstrate the usefulness of this transformation, we compare it to a simple standardization of the objectives, where the mean and standard deviation are computed on each dataset separately. Results are reported in Table 2. It is clear that standardization performs significantly worse than the Copula transformation, indicating that it is not able to address the problem of varying scale and noise levels across tasks. Note that the relative improvement objective is not lower bounded, so that when random search finds very small values the scale of the relative improvement can be arbitrarily large (such as for the Protein dataset in FCNET). Table 2: Relative improvements over random search. TS std and GP std respectively use a simple standardization instead of the copula transformation. Ho and He stand for homoskedastic and heteroskedastic noise. We now compare the proposed methods to other HPO baselines. The results using only the error information are shown first, followed by the results using both time and error information. Results using only error information. We start by studying the setting where only error objectives are used to learn the copula transformation. Within each task, we first aggregate 30 replicates for each method to compute the relative improvement over random search at every iteration, and then average the results across all iterations. The results are reported in Table 3, showing that CGP is the best method for almost every task except XGBoost. In XGBoost, there are several tasks on which methods without transfer learning perform quite well. This is not surprising, as we observe in an ablation study on copula estimators (see Table 5 in the appendix) that some tasks in XGBoost have relatively high test errors, implying that the transferred prior will not help. In those tasks, CGP is usually the runner-up method after standard GP. We also report the results at iterations 10, 50 and 100 in Tables 7, 8 and 9 in the appendix, where we observe that CGP and Box RS are the most competitive methods at the 10th iteration, but at 100 iterations CGP is clearly the best transfer learning method. This highlights the advantage of our method: it adapts to the target task while still making effective transfer in the beginning. We also show results on two example datasets from each algorithm in Figure 1, reporting confidence intervals obtained via bootstrap. Note that the performance of the methods in the examples for DeepAR and XGBoost exhibits quite high variation, especially in the beginning of the BO. We conjecture this is due to an insufficient number of evaluations in the lookup tables. Nevertheless, the general trend is that CTS and CGP outperform all baselines, especially in the beginning of the BO. It can also be observed that CGP performs at least on par with the best method at the end of the BO. Box RS is also competitive at the beginning, but as expected fails to keep its advantage toward the end of the BO.
Results using both error and time information. We then studied the ability of the copula-based approaches to transfer information from multiple objectives. Notice that it is possible to combine the Copula transformation with other multi-objective BO methods; we leave this direction as future work. We show two example tasks in DeepAR and FCNET in Figure 2, where we fix the total number of iterations and plot performance against time with two standard errors. To obtain distributions over seeds for one method, we only consider the time range where 20 seeds are available, which explains why methods can start and end at different times. With the ability to leverage training time information, the copula-based approaches have a clear advantage over all baselines, especially at the beginning of the optimization. We also report aggregate performance over all the tasks in Table 6 in the appendix. Because the different methods finish the optimization at different times, we only compare them up to the time taken by the fastest method. For each method we first compute an average over 30 replicates, then compute the relative improvement over random search, and finally average the results across all time points. The copula-based methods converge to a good hyperparameter configuration significantly faster than all the considered baselines. Note that we obtain similar results as for Hyperband-style methods, where the optimization can start much earlier than standard HPO, with the key difference that we only require a single machine. We introduced a new class of methods to accelerate hyperparameter optimization by exploiting evaluations from previous tasks. The key idea was to leverage a semi-parametric Gaussian Copula prior, using it to account for the different scale and noise levels across tasks. Experiments showed that we considerably outperform standard approaches to BO, and deal with heterogeneous tasks more robustly compared to a number of transfer learning approaches recently proposed in the literature. Finally, we showed that our approach can seamlessly combine multiple objectives, such as accuracy and runtime, further speeding up the search for good hyperparameter configurations. A number of directions for future work are open. First, we could combine our Copula-based HPO strategies with Hyperband-style optimizers. In addition, we could generalize our approach to deal with settings in which related problems are not limited to the same algorithm run over different datasets. This would allow for different hyperparameter dimensions across tasks, or performing transfer learning across different black-boxes. Algorithm (Copula Thompson Sampling): learn the parameters θ of μ_θ(x) and σ_θ(x) on hold-out evaluations D_M by minimizing equation 1; while the budget is not exhausted, sample N candidate hyperparameters x_1, ..., x_N from the search space, draw z̃_i ∼ N(μ_θ(x_i), σ_θ(x_i)²), and evaluate f(x_i) where i = arg min_i z̃_i. To speed up experiments we used a lookup table approach, as advocated in , which proposed to use an extrapolation model built on pre-generated evaluations to limit the number of black-box evaluations, thus saving a significant amount of computational time. However, the extrapolation model can introduce noise and lead to inconsistencies compared to using real black-box evaluations. As a result, in this work we reduced BO to the problem of selecting the next hyperparameter configuration from a fixed set that has been evaluated in advance, so that no extrapolation error is introduced.
All evaluations were obtained by querying each algorithm at hyperparameters sampled (log-)uniformly at random from their search space, as described in Table 4. The CDFs of the error objectives are given in Figure 3. Results on different iterations. We plot the improvement over random search for all methods at iterations 10, 50 and 100 in Tables 7, 8 and 9, respectively. In short, at the 10th iteration, transfer learning methods, especially our CGP and Box RS, performed much better than GP. But when looking at the results at 50 and 100 iterations, CGP clearly outperforms all other transfer methods because of its improved adaptivity. More details on the prior MLP architecture. The MLP used to regress μ_θ and σ_θ consists of 3 layers with 50 nodes each, each with a dropout layer set to 0.5. The learning rate is set to 0.01, the batch size to 64, and we optimize over 100 gradient updates 3 times, lowering the learning rate by a factor of 10 each time. Table 9: Relative improvements over random search at iteration 100.
ryx4PJrtvS
We show how using semi-parametric prior estimations can speed up HPO significantly across datasets and metrics.
We propose Pure CapsNets (P-CapsNets) without routing procedures. Specifically, we make three modifications to CapsNets. First, we remove routing procedures from CapsNets based on the observation that the coupling coefficients can be learned implicitly. Second, we replace the convolutional layers in CapsNets to improve efficiency. Third, we package the capsules into rank-3 tensors to further improve efficiency. The experiments show that P-CapsNets achieve better performance than CapsNets with varied routing procedures by using significantly fewer parameters on MNIST and CIFAR10. The high efficiency of P-CapsNets is even comparable to some deep compression models. For example, we achieve more than 99% accuracy on MNIST by using only 3888 parameters. We visualize the capsules as well as the corresponding correlation matrix to show a possible way of initializing CapsNets in the future. We also explore the adversarial robustness of P-CapsNets compared to CNNs. Capsule Networks, or CapsNets, have been found to be more efficient for encoding the intrinsic spatial relationships among features (parts or a whole) than normal CNNs. For example, the CapsNet with dynamic routing can separate overlapping digits accurately, while the CapsNet with EM routing achieves a lower error rate on smallNORB. However, the routing procedures of CapsNets (including dynamic routing and EM routing) are computationally expensive. Several modified routing procedures have been proposed to improve efficiency (; ;), but they sometimes do not "behave as expected and often produce results that are worse than simple baseline algorithms that assign the connection strengths uniformly or randomly". Even if we can afford the computation cost of the routing procedures, we still do not know whether the routing numbers we set for each layer serve our optimization target. For example, in the work of , the CapsNet models achieve the best performance when the routing number is set to 1 or 3, while other numbers cause performance degradation. For a 10-layer CapsNet, assuming we have to try three routing numbers for each layer, 3^10 combinations have to be tested to find the best routing number assignment. This problem could significantly limit the scalability and efficiency of CapsNets. Here we propose P-CapsNets, which resolve this issue by removing the routing procedures and instead learning the coupling coefficients implicitly during capsule transformation (see Section 3 for details). Moreover, another issue with current CapsNets is that it is common to use several convolutional layers before feeding the features into a capsule layer. We find that using convolutional layers in CapsNets is not efficient, so we replace them with capsule layers. Inspired by , we also explore how to package the input of a CapsNet into rank-3 tensors to make P-CapsNets more representative. The capsule convolution in P-CapsNets can be considered as a more general version of 3D convolution. At each step, 3D convolution uses a 3D kernel to map a 3D tensor into a scalar (as Figure 1 shows), while the capsule convolution in Figure 2 adopts a 5D kernel to map a 5D tensor into a 5D tensor. Figure 1: 3D convolution: tensor-to-scalar mapping. The shape of the input is 8×8×4. The shape of the 3D kernel is 4×4×3. As a result, the shape of the output is 5×5×3. The yellow area shows the current input area being convolved by the kernel and the corresponding output. Figure 2: Capsule convolution in P-CapsNets: tensor-to-tensor mapping.
The input is a tensor of 1's with shape 1×5×5×(3×3×3), corresponding to the input channel, input height, input width, and the first, second, and third capsule dimensions, respectively. The capsule kernel is also a tensor of 1's with shape 4×4×1×1×(3×3×3): kernel height, kernel width, number of input channels, number of output channels, and the three dimensions of the 3D capsule. As a result, we get an output tensor of 48's with shape 1×2×2×(3×3×3). Yellow areas show the current input area being convolved by the kernel and the corresponding output. CapsNets organize neurons as capsules to mimic biological neural systems. One key design of CapsNets is the routing procedure, which can combine lower-level features into higher-level features to better model hierarchical relationships. There have been many papers on improving the expensive routing procedures since the idea of CapsNets was proposed. For example, improves the routing efficiency by 40% by using weighted kernel density estimation. propose an attention-based routing procedure which can accelerate the dynamic routing procedure. have found that these routing procedures are heuristic and sometimes perform even worse than random routing assignment. Incorporating routing procedures into the optimization process could be a solution. treats the routing procedure as a regularizer to minimize the clustering loss between adjacent capsule layers. approximates the routing procedure with master and aide interaction to ease the computation burden. incorporates the routing procedure into the training process to avoid the computational complexity of dynamic routing. Here we argue that, from the viewpoint of optimization, the routing procedure, which is designed to acquire coupling coefficients between adjacent layers, can be learned and optimized implicitly, and may thus be unnecessary. This approach is different from the above CapsNets, which instead focus on improving the efficiency of the routing procedures rather than attempting to replace them altogether. We now describe our proposed P-CapsNet model in detail. We describe the three key ideas in the next three sections: that the routing procedures may not be needed, that packaging capsules into higher-rank tensors is beneficial, and that we do not need convolutional layers. The primary idea of routing procedures in CapsNets is to use the parts and the learned part-whole relationships to vote for objects. Intuitively, identifying an object by counting the votes makes perfect sense. Mathematically, routing procedures can also be considered as linear combinations of tensors. This is similar to the convolutional layers in CNNs, in which the basic operation is a linear combination (scaling and addition), v_j = Σ_i W_ij u_i, where v_j is the jth output scalar, u_i is the ith input scalar, and W_ij is the weight. In capsule layers with routing, v_j = Σ_i c_ij W_ij u_i, where c_ij are called coupling coefficients, which are usually acquired by a heuristic routing procedure. In these two equations, CNNs do linear combinations on scalars while CapsNets do linear combinations on tensors. Using a routing procedure to acquire linear coefficients makes sense. However, if Equation 2 is rewritten as v_j = Σ_i (c_ij W_ij) u_i = Σ_i W′_ij u_i, then from the viewpoint of optimization it is not necessary to learn or calculate c_ij and W_ij separately, since we can learn W′_ij instead. In other words, we can learn the c_ij implicitly by learning W′_ij. Equation 2 is the basic operation of P-CapsNets, only we extend it to the 3D case; please see Section 3.2 for details.
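Figure 2's worked example can be checked numerically. Below is a minimal NumPy sketch of the routing-free tensor-to-tensor capsule convolution (valid padding, loop-based for clarity; the function name is ours): with all-ones input and kernel, each (3×3)·(3×3) matrix product has entries equal to 3, and summing over the 4×4 window (16 positions) gives 16 × 3 = 48, reproducing the output of 48's.

```python
import numpy as np

def capsule_conv(U, W, stride=1):
    """Tensor-to-tensor capsule convolution, cf. Figure 2.
    U: input, shape (in_ch, H, W_img, g, m, n)  -- rank-3 capsules on a grid
    W: kernel, shape (kh, kw, in_ch, out_ch, g, n, p)
    Each output capsule sums the products U_cap @ W_cap over the window."""
    in_ch, H, W_img, g, m, n = U.shape
    kh, kw, _, out_ch, _, _, p = W.shape
    Ho, Wo = (H - kh) // stride + 1, (W_img - kw) // stride + 1
    V = np.zeros((out_ch, Ho, Wo, g, m, p))
    for o in range(out_ch):
        for y in range(Ho):
            for x in range(Wo):
                for dy in range(kh):
                    for dx in range(kw):
                        for c in range(in_ch):
                            for gi in range(g):
                                V[o, y, x, gi] += (
                                    U[c, y * stride + dy, x * stride + dx, gi]
                                    @ W[dy, dx, c, o, gi])
    return V

U = np.ones((1, 5, 5, 3, 3, 3))       # all-ones input, 1x5x5x(3x3x3)
W = np.ones((4, 4, 1, 1, 3, 3, 3))    # all-ones kernel, 4x4x1x1x(3x3x3)
V = capsule_conv(U, W)
print(V.shape, V.flat[0])             # (1, 2, 2, 3, 3, 3) 48.0
```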
By removing routing procedures, we no longer need an expensive step for computing coupling coefficients. At the same time, we can guarantee that the learned W′_ij = c_ij W_ij is optimized to serve a target, while the good properties of CapsNets can still be preserved (see Section 4 for details). We conjecture that the strong modeling ability of CapsNets comes from this tensor-to-tensor mapping between adjacent capsule layers. From the viewpoint of optimization, routing procedures do not contribute much either. Taking the CapsNets in as an example, the number of parameters in the transformation operation is 6×6×32×(8×16) = 147,456, while the number of parameters in the routing operation equals 6×6×32×10 = 11,520; the "routing parameters" represent only 7.25% of the total parameters and are thus negligible compared to the "transformation parameters." In other words, the benefit from routing procedures may be limited, even though they are the computational bottleneck. The two operations have a similar form. We argue that the "dimension transformation" step of CapsNets can be considered as a more general version of convolution. For example, if each 3D tensor in P-CapsNets becomes a scalar, then P-CapsNets degrade to normal CNNs. As Figure 4 shows, the basic operation of 3D convolution is f: U ∈ R^{g×m×n} → v ∈ R, while the basic operation of P-CapsNets is f: U ∈ R^{g×m×n} → V ∈ R^{g×m×p}. The capsules in and are vectors and matrices. For example, the capsules in have dimensionality 1152×10×8×16, which can convert each 8-dimensional tensor in the lower layer into a 16-dimensional tensor in the higher layer (32×6×6 = 1152 is the input number and 10 is the output number). We need a total of 1152×10×8×16 = 1,474,560 parameters. If we package each input/output vector into 4×2 and 4×4 matrices, we need only 1152×10×2×4 = 92,160 parameters. This is the policy adopted by , in which 16-dimensional tensors are converted into new 16-dimensional tensors by using 4×4 tensors. In this way, the total number of parameters is reduced by a factor of 16. In this paper, the basic units of input (U ∈ R^{g×m×n}), output (V ∈ R^{g×m×p}) and capsules (W ∈ R^{g×n×p}) are all rank-3 tensors. Assuming the kernel size is kh×kw and the input capsule number (equivalent to the number of input feature maps in CNNs) is in, extending Equation 3 to the 3D case and incorporating the convolution operation, we obtain V = [V_0,:,:, V_1,:,:, ..., V_g,:,:], where each slice V_g is the sum, over the kh×kw kernel window and the in input channels, of the matrix products U_g W_g; this shows how to obtain an output tensor from input tensors in the previous layer in P-CapsNets. Assuming a P-CapsNet model is supposed to fit a function f: R^{W×H×3} → R, the ground-truth label is y ∈ R and the loss function is L = ||f − y||. Then in back-propagation, we calculate the gradients with respect to the output U and with respect to the capsules W. The advantage of folding capsules into high-rank tensors is to reduce the computational cost of the dimension transformation between adjacent capsule layers. For example, to convert a 1×16 tensor to another 1×16 tensor, we need 16×16 = 256 parameters. In contrast, if we fold both input/output vectors into three-dimensional tensors, for example as 2×4×2, then we only need 16 parameters (the capsule shape is 2×4×2). For the same number of parameters, folded capsules might be more representative than unfolded ones. Figure 2 shows what happens in one capsule layer of P-CapsNets in detail. It is common practice to embed convolutional layers in CapsNets, which makes these CapsNets hybrid networks with both convolutional and capsule layers (; ;).
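The parameter counts quoted above can be verified directly (numbers taken from the text):

```python
# Vector capsules vs. matrix/tensor capsules for the same 8-dim -> 16-dim map.
full_vector = 1152 * 10 * (8 * 16)   # full transformation matrices
matrix_caps = 1152 * 10 * (2 * 4)    # 4x2 -> 4x4 matrix capsules
print(full_vector, matrix_caps, full_vector // matrix_caps)  # 1474560 92160 16
```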
One argument for using several convolutional layers is to extract low-level, multi-dimensional features. We argue that this claim is not persuasive, based on two observations: 1) the level of multi-dimensional entities that a model needs cannot be known in advance, and it does not matter either, as long as the level serves our target; 2) even if a model needs a low level of multi-dimensional entities, a capsule layer can still be used, since it is a more general version of a convolutional layer. Based on the above observations, we build a "pure" CapsNet by using only capsule layers. One issue of P-CapsNets is how to process the input if it is not a high-rank tensor. Our solution is simply adding new dimensions. For example, the first layer of a P-CapsNet can take 1×w×h×(1×1×3) tensors as the input (colored images), and 1×w×h×(1×1×1) tensors as the input for gray-scale images. In summary, P-CapsNets make three modifications over CapsNets. First, we remove the routing procedures from all the capsule layers. Second, we replace all the convolutional layers with capsule layers. Third, we package all the capsules and inputs/outputs as rank-3 tensors to save parameters. We keep the loss and activation functions the same as in previous work. Specifically, for each capsule layer, we use the squash function V ← (1 − 1/e^{||V||}) · V/||V|| as the activation function. We also use the same margin loss function as in for classification tasks, where T_k = 1 iff class k is present, m⁺ ≥ 0.5 and m⁻ ≥ 0 are meta-parameters that represent the thresholds for positive and negative samples respectively, and λ is a weight that adjusts the loss contribution of negative samples. We test our P-CapsNets model on MNIST and CIFAR10. P-CapsNets show higher efficiency than CapsNets with various routing procedures, as well as than several deep compression neural network models. For MNIST, P-CapsNets#0 achieve better performance than by using 40 times fewer parameters, as Table 1 shows. At the same time, P-CapsNets#3 achieve better performance than by using 87% fewer parameters. is the only model that outperforms P-CapsNets, but it uses 80 times more parameters. Since P-CapsNets show high efficiency, it is interesting to compare P-CapsNets with some deep compression models on MNIST. We choose five models that come from three algorithms as our baselines. As Table 2 shows, for the same number of parameters, P-CapsNets can always achieve a lower error rate. For example, P-CapsNets#2 achieves 99.15% accuracy by using only 3,888 parameters, while the model achieves 98.44% by using 3,554 parameters. For the P-CapsNet structures in Table 1 and Table 2, please check Appendix A for details. For CIFAR10, we also adopt a five-layer P-CapsNet (please see the Appendix) which has about 365,000 parameters. We follow the work of ; to crop 24×24 patches from each image during training, and use only the center 24×24 patch during testing. We also use the same data augmentation trick as in (please see Appendix B for details). As Table 3 shows, P-CapsNet achieves better performance than several routing-based CapsNets by using fewer parameters. The only exception is Capsule-VAE, which uses fewer parameters than P-CapsNets but has lower accuracy. The structure of P-CapsNets#4 can be found in Appendix A.
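A PyTorch-style sketch of the activation and loss just described, a minimal reading of the formulas above; the defaults follow the margin bounds reported in Appendix B, and the exact margin-loss form is the one from Sabour et al. that the text says it reuses.

```python
import torch

def squash(v, dim=-1, eps=1e-8):
    """P-CapsNet activation: v <- (1 - 1/exp(||v||)) * v / ||v||."""
    norm = v.norm(dim=dim, keepdim=True)
    return (1.0 - torch.exp(-norm)) * v / (norm + eps)

def margin_loss(lengths, targets, m_pos=0.5, m_neg=0.1, lam=0.5):
    """Margin loss: `lengths` are per-class capsule norms, `targets` is
    one-hot (T_k = 1 iff class k is present), `lam` down-weights negatives."""
    pos = targets * torch.clamp(m_pos - lengths, min=0).pow(2)
    neg = lam * (1 - targets) * torch.clamp(lengths - m_neg, min=0).pow(2)
    return (pos + neg).sum(dim=1).mean()
```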
| Model | Routing type | Ensembled | Error (%) | # Params |
| CapsNets | Dynamic | 7 | 10.6 | 6.8M |
| Atten-Caps | Attention (-) | 1 | 11.39 | ≈5.6M |
| FRMS | Fast Dynamic | 1 | 15.6 | 1.2M |
| FREM | Fast Dynamic | 1 | 14.3 | 1.2M |
| CapsNets | EM | 1 | 11.9 | 458K |
| P-CapsNets#4 | - | 1 | 10.03 | 365K |
| Capsule-VAE | VB-Routing | 1 | 11.2 | 323K |
Table 3: Comparison between P-CapsNets and several CapsNets with routing procedures in terms of error rate on CIFAR10. The number in each routing type is the number of routing times. In spite of the parameter-wise efficiency of P-CapsNets, one limitation is that we cannot find an appropriate acceleration solution like cuDNN, since all current acceleration packages are convolution-based. To accelerate our training, we developed a customized acceleration solution based on CUDA and CAFFE. The primary idea is to reduce the number of communication rounds between CPUs and GPUs, and to maximize the number of operations that can be parallelized. Please check Appendix C for details; we plan to publicly release our code. We visualize the capsules (filters) of P-CapsNets trained on MNIST (the model used is the same as in Figure 7). The capsules in each layer are 7D tensors. We flatten each layer into a matrix to make it easier to visualize. For example, the first capsule layer has a shape of 3×3×1×1×(1×1×16), so we reshape it to a 9×16 matrix. We do a similar reshaping for the following three layers, and the result is shown in Figure 3. We observe that the capsules within each layer appear correlated with each other. To check if this is true, we print out the first two layers' correlation matrices for both the P-CapsNet model and a CNN model (which comes from , also trained on MNIST) for comparison. We compute Pearson product-moment correlation coefficients (the covariance matrix normalized by the products of standard deviations) of the filter elements in each of the two convolutional layers, respectively. In our case, we draw two 25×25 correlation matrices from the reshaped conv1 (25×256) and conv2 (25×65536). Similarly, we generate two 9×9 correlation matrices for P-CapsNets from the reshaped cconv1 (9×16) and cconv2 (9×32). As Figure 4 shows, the filters of convolutional layers have lower correlations within kernels than P-CapsNets. This result makes sense, since the capsules in P-CapsNets are supposed to extract the same type of features while the filters in standard CNNs are supposed to extract different ones. The difference shown here suggests that we might rethink the initialization of CapsNets. Currently, our P-CapsNets, as well as other types of CapsNets, all adopt initialization methods designed for CNNs, which might not be ideal. Table 4: Robustness of P-CapsNets under black-box attack (columns: Epsilon, Baseline, P-CapsNets); the baseline is the same CNN model as in . Generalization gap is the difference between a model's performance on training data and that on unseen data from the same distribution. We compare the generalization gap of P-CapsNets with that of the CNN baseline by marking out the area between the training loss curve and the testing loss curve, as Figure 5 shows. For visual comparison, we draw the curve per 20 iterations for the baseline and per 80 iterations for the P-CapsNet, respectively. We can see that at the end of training, the gap between the training and testing losses of P-CapsNets is smaller than that of the CNN model. We conjecture that P-CapsNets have better generalization ability. For black-box adversarial attacks, claims that CapsNets are as vulnerable as CNNs. We find that P-CapsNets also suffer from this issue, even more seriously than CNN models.
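The correlation analysis in Figure 4 amounts to flattening each layer's filters and taking Pearson correlations between them; a sketch with stand-in weights (the real weights would come from the trained models):

```python
import numpy as np

def filter_correlations(weights, n_filters, n_elems):
    """Pearson correlation matrix between flattened filters, as in Figure 4.
    E.g., conv1 reshaped to (25, 256) yields a 25x25 correlation matrix."""
    flat = weights.reshape(n_filters, n_elems)
    return np.corrcoef(flat)

conv1 = np.random.randn(25, 256)                  # stand-in for conv1 weights
print(filter_correlations(conv1, 25, 256).shape)  # (25, 25)
```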
Specifically, we adopt the attacking method of and use LeNet as the substitute model to generate one thousand test adversarial images. As Table 4 shows, when epsilon increases from 0.05 to 0.3, the accuracy of the baseline and the P-CapsNet model falls to 54.51% and 25.11%, respectively. claims that CapsNets show far more resistance to white-box attacks; we find the opposite for P-CapsNets. Specifically, we use UAP as our attacking method, and train a generative network (see Appendix A.2 for details) to generate universal perturbations to attack the CNN model as well as the P-CapsNet model shown in Figure 7. The universal perturbations are supposed to fool a model into predicting a targeted wrong label ((the ground truth label + 1) % 10). As Figure 6 shows, when attacked, the accuracy of the P-CapsNet model decreases more sharply than that of the baseline. It thus appears that P-CapsNets are more vulnerable to both white-box and black-box adversarial attacks compared to CNNs. One possible reason is that the P-CapsNets model we use here is significantly smaller than the CNN baseline (3,688 versus 35.4M parameters). It would be a fairer comparison if the two models had a similar number of parameters. We propose P-CapsNets by making three modifications based on: 1) we replace all the convolutional layers with capsule layers, 2) we remove routing procedures from the whole network, and 3) we package capsules into rank-3 tensors to further improve efficiency. The experiments show that P-CapsNets can achieve better performance than multiple other CapsNet variants with different routing procedures, as well as than deep compression models, by using fewer parameters. We visualize the capsules in P-CapsNets and point out that the initialization methods of CNNs might not be appropriate for CapsNets. We conclude that the capsule layers in P-CapsNets can be considered as a general version of 3D convolutional layers. We conjecture that the ability of CapsNets to efficiently encode the intrinsic spatial relationship between a part and a whole comes from the tensor-to-tensor mapping between adjacent capsule layers. This mapping is presumably also the reason for P-CapsNets' good performance. The P-CapsNet models in the experiments (P-CapsNets#0, P-CapsNets#1, P-CapsNets#2, P-CapsNets#3) are all five-layer CapsNets. Take P-CapsNets#2 as an example: the input is a gray-scale image with a shape of 28×28, which we reshape into a 6D tensor of shape 1×28×28×(1×1×1) to fit our P-CapsNets. The first capsule layer (CapsConv#1, as Figure 7 shows) is a 7D tensor of shape 3×3×1×1×(1×1×16). Each dimension of the 7D tensor represents the kernel height, the kernel width, the number of input capsule feature maps, the number of output capsule feature maps, the capsule's first dimension, the capsule's second dimension, and the capsule's third dimension. All the following feature maps and filters can be interpreted in a similar way. Similarly, the five capsule layers of P-CapsNets#0 are 3×3×1×1×(1×1×32), 3×3×1×2×(1×8×8), 3×3×2×4×(1×8×8), 3×3×4×2×(1×8×8), and 3×3×2×10×(1×8×8), respectively. The strides for the layers are (2, 1, 2, 1, 1). The five capsule layers of P-CapsNets#1 are 3×3×1×1×(1×1×16), 3×3×1×1×(1×4×6), 3×3×1×1×(1×6×4), 3×3×1×1×(1×4×6), and 3×3×1×10×(1×6×4), respectively. The strides for the layers are (2, 1, 2, 1, 1). The five capsule layers of P-CapsNets#3 are 3×3×1×1×(1×1×32), 3×3×1×4×(1×8×16), 3×3×4×8×(1×16×8), 3×3×8×4×(1×8×16), and 3×3×4×10×(1×16×16), respectively. The strides for the layers are (2, 1, 2, 1, 1).
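The black-box attack citation is elided in the text; assuming an FGSM-style one-step attack (our guess, given the epsilon sweep from 0.05 to 0.3), a minimal sketch on a substitute model would look like:

```python
import torch

def fgsm(model, x, y, epsilon, loss_fn=torch.nn.functional.cross_entropy):
    """One-step sign-gradient perturbation computed on a substitute model."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()
```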
The five capsule layers of P-CapsNets#4 are 3×3×1×1×(1×3×32), 3×3×1×4×(1×8×16), 3×3×4×8×(1×16×8), 3×3×8×10×(1×8×16), and 3×3×10×10×(1×16×16), respectively. The strides for the layers are (2, 1, 1, 2, 1). The input of the generative network is a 100-dimensional vector filled with random numbers ranging from -1 to 1. The vector is fed to a fully-connected layer with 3456 outputs (the output is reshaped to 3×3×384). On top of the fully-connected layer, there are three deconvolutional layers: one with 192 outputs (kernel size 5, stride 1, no padding), one with 96 outputs (kernel size 4, stride 2, padding size 1), and one with 1 output (kernel size 4, stride 2, padding size 1). The final output of the three deconvolutional layers has the same shape as the input image (28×28); these are the perturbations. For all the P-CapsNet models in the paper, we add a Leaky ReLU (negative slope 0.1) and a squash function after each capsule layer. All the parameters are initialized by MSRA. For MNIST, we decrease the learning rate from 0.002 every 4000 steps by a factor of 0.5 during training. The batch size is 128, and we trained our model for 30 thousand iterations. The upper/lower bounds of the margin loss are 0.5/0.1, and λ is 0.5. We adopt the same data augmentation as in , namely shifting each image by up to 2 pixels in each direction with zero padding. For CIFAR10, we use a batch size of 256. The learning rate is 0.001, and we decrease it by a factor of 0.5 every 10 thousand iterations. We train our model for 50 thousand iterations. The upper/lower bounds of the margin loss are 0.6/0.1, and λ is 0.5. Before training, we first process each image using Global Contrast Normalization (GCN), as Equation 8 shows: X′ = s · (X − mean(X)) / max(ε, sqrt(α + mean((X − mean(X))²))), where X and X′ are the raw image and the normalized image, and s, ε, and α are meta-parameters whose values are 1, 1e-9, and 10, respectively. Then we apply Zero Component Analysis (ZCA) whitening to the whole dataset. Specifically, we choose 10,000 images X″ randomly from the GCN-processed training set and calculate the mean image X̄″ across all pixels. Then we calculate the covariance matrix as well as its singular values and vectors, as Equation 9 shows: U, S, V = SVD(Cov(X″ − X̄″)). Finally, we can use Equation 10 to process each image in the dataset. With either the standard or the customized convolution solution in CAFFE, too many communication rounds would be incurred, which slows down the whole training process considerably; the communication overhead is so large that the training is slower than CPU-only mode. To overcome this issue, we parallelize the operations within each kernel to minimize communication. We build two P-CapsNets#3 models, one CPU-only, the other based on our parallel solution. The GPU is one TITAN Xp card; the CPU is an Intel Xeon. As Table 5 shows, our solution achieves at least a 4× speed-up over CPU-only mode for different batch sizes.
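A sketch of the GCN and ZCA preprocessing described by Equations 8-10. Equation 10 itself is elided in the text, so the whitening epsilon and the exact application form below are our assumptions of the standard ZCA recipe.

```python
import numpy as np

def global_contrast_normalize(X, s=1.0, eps=1e-9, alpha=10.0):
    """GCN per image (Equation 8); X has shape (n_images, n_pixels)."""
    X = X - X.mean(axis=1, keepdims=True)
    norm = np.sqrt(alpha + (X ** 2).mean(axis=1, keepdims=True))
    return s * X / np.maximum(norm, eps)

def zca_fit(X, eps=1e-5):
    """Fit ZCA whitening on GCN-processed images (Equation 9 + assumed Eq. 10)."""
    mean = X.mean(axis=0)
    U, S, _ = np.linalg.svd(np.cov((X - mean).T), full_matrices=False)
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T
    return mean, W

def zca_apply(X, mean, W):
    return (X - mean) @ W

X = np.random.rand(1000, 256).astype(np.float64)  # stand-in image batch
mean, W = zca_fit(global_contrast_normalize(X))
X_white = zca_apply(global_contrast_normalize(X), mean, W)
```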
B1gNfkrYvS
Routing procedures are not necessary for CapsNets
A recent line of work has studied the statistical properties of neural networks to great success from a {\it mean field theory} perspective, making and verifying very precise predictions of neural network behavior and test time performance. In this paper, we build upon these works to explore two methods for taming the behaviors of random residual networks (with only fully connected layers and no batchnorm). The first method is {\it width variation (WV)}, i.e. varying the widths of layers as a function of depth. We show that width decay reduces gradient explosion without affecting the mean forward dynamics of the random network. The second method is {\it variance variation (VV)}, i.e. changing the initialization variances of weights and biases over depth. We show VV, used appropriately, can reduce gradient explosion of tanh and ReLU resnets from $\exp(\Theta(\sqrt L))$ and $\exp(\Theta(L))$ respectively to constant $\Theta(1)$. A complete phase diagram is derived for how variance decay affects different dynamics, such as those of gradient and activation norms. In particular, we show the existence of many phase transitions where these dynamics switch between exponential, polynomial, logarithmic, and even constant behaviors. Using the obtained mean field theory, we are able to track surprisingly well how VV at initialization time affects training and test time performance on MNIST after a set number of epochs: the level sets of test/train set accuracies coincide with the level sets of the expectations of certain gradient norms or of metric expressivity (as defined in \cite{yang_meanfield_2017}), a measure of expansion in a random neural network. Based on insights from past works in deep mean field theory and information geometry, we also provide a new perspective on the gradient explosion/vanishing problems: they lead to ill-conditioning of the Fisher information matrix, causing optimization troubles. Deep mean field theory studies how random neural networks behave with increasing depth, as the width goes to infinity. In this limit, several pieces of seminal work used statistical physics (BID7) and Gaussian Processes to show that neural networks exhibit remarkable regularity. Mean field theory also has a substantial history studying Boltzmann machines (BID0) and sigmoid belief networks. Recently, a number of results have revitalized the use of mean field theory in deep learning, with a focus on addressing practical design questions. In , mean field theory is combined with Riemannian geometry to quantify the expressivity of random neural networks. In and , a study of the critical phenomena of mean field neural networks and residual networks (without batchnorm and with only fully connected layers) is leveraged to theoretically predict the test time relative performance of different initialization schemes. Additionally, BID5 and have used related techniques to investigate properties of the loss landscape of deep networks. Together these results have helped put a large number of experimental observations onto more rigorous footing (; BID9; BID3). Finally, deep mean field theory has proven to be a necessary underpinning for studies using random matrix theory to understand dynamical isometry in random neural networks. Overall, a program is emerging toward building a mean field theory for state-of-the-art neural architectures as used in the wild, so as to provide optimal initialization parameters quickly for any deep learning practitioner.
In this paper, we contribute to this program by studying how width variation (WV), as practiced commonly, can change the behavior of the quantities mentioned above, with the gradient norm being of central concern. We find that WV can dramatically reduce gradient explosion without affecting the mean dynamics of forward computation, such as the activation norms, although possibly increasing deviation from the mean in the process (Section 6). We also study a second method, variance variation (VV), for manipulating the mean field dynamics of a random neural network (Section 7 and Appendix B). In this paper, we focus on its application to tanh and ReLU residual networks, where we show that VV can dramatically ameliorate gradient explosion and, in the case of ReLU resnets, activation explosion. Affirming prior results and the predictions of our theory, VV improves the performance of tanh and ReLU resnets through these means. Previous works (; ;) have focused on how network architecture and activation functions affect the dynamics of mean field quantities, subject to the constraint that initialization variances and widths are constant across layers. In each combination of (architecture, activation), the mean field dynamics have the same kinds of asymptotics regardless of the variances. For example, tanh feedforward networks have exp(Θ(l)) forward and backward dynamics, while tanh residual networks have poly(l) forward and exp(Θ(√l)) backward dynamics. Such asymptotics were considered characteristics of the (architecture, activation) combination. We show by counterexample that this perception is erroneous. In fact, as discussed above, WV can control the gradient dynamics arbitrarily and VV can control forward and backward dynamics jointly, all without changing the network architecture or activation. To the best of our knowledge, this is the first time methods for reducing gradient explosion or vanishing have been proposed that vary initialization variance and/or width across layers. With regard to ReLU resnets, we find that gradient norms and "metric expressivity" (as introduced in , also defined in Defn 4.2) make surprisingly good predictors, respectively in two separate phases, of how VV at initialization affects performance after a fixed amount of training time (Section 7.1). However, in one of these phases, larger gradient explosion seems to cause better performance, with no alternative explanation apparent. In this paper we have no answer for why this occurs, but we hope to elucidate it in future work. With regard to tanh resnets, we find that, just as in , the optimal initialization balances trainability and expressivity: decaying the variance too little means we suffer from gradient explosion, but decaying the variance too much means we suffer from not enough metric expressivity. We want to stress that in this work, by "performance" we do not mean absolute performance but rather relative performance between different initialization schemes. For example, we do not claim to know what initialization scheme is needed to make a particular neural network architecture solve ImageNet, but rather, conditioned on the architecture, whether one initialization is better than another in terms of test set accuracy after the same amount of training iterations. Before we begin the mean field analysis, we present a perspective on the gradient explosion/vanishing problem from a combination of mean field theory and information geometry, which posits that such problems manifest in the ill-conditioning of the Fisher information matrix.
Given a parametric family of probability distributions on R^m, P := {P_θ}_θ with θ = (θ_1, ..., θ_n) in R^n, its Fisher information matrix is defined as F(θ)_{ij} := E_{z∼P_θ}[(∂_i log P_θ(z))(∂_j log P_θ(z))], where ∂_i is the partial derivative with respect to θ_i. It is known from information geometry that, under regularity conditions, P forms a Riemannian manifold with θ → P_θ as its coordinate map and F(θ) as its Riemannian metric tensor (a fortiori it is positive definite) BID2. This fact is most famously used in the natural gradient method, which, akin to second order methods, computes from a gradient vector ∂E/∂θ a "natural direction of greatest descent" F(θ)^{-1} ∂E/∂θ that is invariant to reparametrization θ → θ′ BID1. This method and related ideas have been applied to great success in supervised, unsupervised, and reinforcement learning (for example, ; BID8; ; BID10 ;). An F(θ) with eigenvalues all approximately equal means that the neighborhood around P_θ is isotropically curved and the gradient is approximately just the natural gradient up to a multiplicative constant. Conversely, an F(θ) with a large condition number κ(F(θ)) (the ratio of the largest over the smallest eigenvalue) means that the gradient is a poor proxy for the natural gradient and thus is much less efficient. From another angle, F(θ) is also the Hessian of the KL divergence τ → KL(P_θ || P_τ) at τ = θ. If we were simply to minimize this KL divergence through gradient descent, then the number of iterations to convergence is proportional to κ(F(θ)) (in general, there is a lower bound of Ω(√κ(F(θ))) for first order methods satisfying a mild condition). For a random deep network (residual or not) suffering from gradient explosion in the mean, we show heuristically in this section that the condition number of its Fisher information matrix is exponentially large in depth with high probability. First partition θ into groups of parameters according to layer, θ = (θ_{11}, θ_{12}, ..., θ_{1k_1}, θ_{21}, ..., θ_{2k_2}, ..., θ_{L1}, ..., θ_{Lk_L}), with θ_{lj} denoting parameter j of layer l. We can then partition the Fisher information matrix F(θ) into blocks, with the diagonal blocks having sizes k_l × k_l. According to the Hermitian min-max theorem, the largest eigenvalue of F(θ) is given by max_{||x||=1} xᵀF(θ)x and the smallest eigenvalue is given by min_{||x||=1} xᵀF(θ)x (both are positive as F(θ) is positive definite under our regularity assumptions). Thus κ(F(θ)) equals their ratio and is lower bounded by the ratio of extremal diagonal terms, max_{lj} F(θ)_{lj,lj} / min_{lj} F(θ)_{lj,lj}. Let ⟨Y(θ)⟩ denote the expectation of a quantity Y(θ) with respect to the random initialization of θ in some fixed method. Suppose there is gradient explosion such that ⟨E_z(∂_{lj} log P_θ(z))²⟩ ∈ [exp(cl), exp(Cl)] for universal constants c, C > 0 independent of j (this is true, for example, for feedforward tanh networks initialized in the chaotic region). By the concentration of measure phenomenon (as seen in BID6; ; and this work), over randomization of parameters, E_z(∂_{lj} log P_θ(z))² will in fact concentrate around its mean as width goes to infinity. Thus we have, with high probability, that the diagonal entries F(θ)_{lj,lj} = E_z(∂_{lj} log P_θ(z))² ∈ [exp(c′l), exp(C′l)] for some new constants 0 < c′ < c < C < C′. Then the ratio max_{lj} F(θ)_{lj,lj} / min_{lj} F(θ)_{lj,lj} is at least exp(c′L)/exp(C′·1) = exp(c′L − C′), so that κ(F(θ)) is exponential in L.
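A toy numerical check of the bound: if the diagonal Fisher entries grow like exp(cl) across layers, their extremal ratio (a lower bound on κ(F(θ))) is exponential in depth.

```python
import numpy as np

c, L = 0.5, 50
diag = np.exp(c * np.arange(1, L + 1))   # stand-in diagonal Fisher entries
kappa_lower = diag.max() / diag.min()    # lower bound on condition number
print(kappa_lower, np.exp(c * (L - 1)))  # both ~ 4e10, exponential in L
```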
The argument can be easily modified to accommodate gradient vanishing and other rates of gradient explosion/vanishing like exp(Θ(l^α)). Thus such gradient dynamical problems cause the gradient to deviate from the natural gradient exponentially in an appropriate sense and violate the information geometry of the information manifold P_θ. For the case of minimizing KL divergence from a specific distribution, they directly cause the number of gradient descent iterations to diverge exponentially. These issues cannot be solved by just adjusting the learning rate (though it can somewhat ameliorate the problem by taking conservative steps in such ill-conditioned regions). The desire to understand how initialization can affect the final performance of a deep neural network has led to a resurgence of mean field techniques, this time applied to deep learning. A series of papers (; ; ;) have established the depth-wise dynamics of random neural networks (i.e. networks at initialization time). For example, it was shown that in a random tanh classical feedforward neural network, the activation norm converges exponentially fast in depth to a constant value, and so does the angle between images of two different input vectors at successive depths, which the authors proposed as a measure of expressivity called "angular expressivity." It was then shown that the gradient norm of such a random network suffers from exponential explosion or vanishing during the course of backpropagation. But when the initialization variances lie on a "critical curve," the gradient is neither vanishing nor exploding, and, more importantly, networks initialized on this "critical line" have the best test time performance after training for a fixed number of iterations. The mean field framework was extended to residual networks (with only fully connected layers and no batchnorm) in. There the authors showed that just by adding a skip connection to the feedforward network, the dynamics of a tanh network become subexponential. More crucially, they investigated both tanh and ReLU residual networks, and found that whereas gradient dynamics control the test time performance of tanh resnets, "expressivity" controls that of ReLU resnets. This expressivity is, roughly speaking, how much distance the network on average puts between two different input vectors; it was aptly named "metric expressivity." On the other hand, the "angular expressivity" proposed in (how much angle the network puts between two input vectors, as explained above) was not found to be predictive of the test time relative performance of either tanh or ReLU resnets. More precisely, the optimal initialization scheme for tanh resnets seems to strike a delicate balance between trainability and expressivity, in that a weight variance too large causes too much gradient explosion and causes training to fail, whereas a weight variance too small causes the typical network to collapse to a constant function. The optimal variance σ_w² satisfies σ_w² L = const, where L is depth. On the other hand, ReLU resnets have completely different behavior with respect to initialization variance; here the best initialization scheme is obtained by maximizing the weight variance (and as a consequence also maximizing the metric expressivity) without overflowing activation values of deeper layers into numerical infs. Indeed, trainability seems to not be a problem at all, as the gradient norm of weight parameters at each layer stays constant within O(1) over the course of backpropagation.
In this paper, we extend the results of Yang and Schoenholz (2017) to include depthwise variation of widths and of variances. We show that these can be used to great effect to reduce gradient explosion as well as to manipulate the expressivity (metric or angular) of the random network. As in that work, we find that they improve tanh resnet performance by taming gradient dynamics, and improve ReLU resnet performance by preventing activations from numerically overflowing while maximizing metric expressivity. However, in certain regimes, worsening gradient explosion can mysteriously make ReLU resnets perform better, and we currently do not know how to explain this phenomenon.

Notations and Settings. We adopt the notations of Yang and Schoenholz (2017) and review them briefly. Consider a vanilla feedforward neural network of L layers, with each layer l having N^(l) neurons; here layer 0 is the input layer. Let x^(0) = (x^(0)_1, …, x^(0)_{N^(0)}) be the input vector to the network, and let x^(l) for l > 0 be the activation of layer l. Then a neural network is given by the equations

x^(l)_i = φ(h^(l)_i),    h^(l)_i = Σ_j w^(l)_{ij} x^(l−1)_j + b^(l)_i,

where (i) h^(l) is the pre-activation at layer l, (ii) (w^(l)_{ij})_{ij} is the weight matrix, (iii) (b^(l)_i)_i is the bias vector, and (iv) φ is a nonlinearity, for example tanh or ReLU, which is applied coordinatewise to its input. To lighten notation, we suppress the explicit layer numbers l where they are clear from context. A residual network BID11 BID7 adds an identity connection or skip shortcut that "jumps ahead" every couple of layers. We adopt one of the simplified residual architectures defined in Yang and Schoenholz (2017) for ease of analysis (called the full residual network, or FRN, there; in this paper, we simply assume this architecture whenever we say residual network), in which every residual block is given by

h^(l)_i = Σ_j w^(l)_{ij} x^(l−1)_j + b^(l)_i,
x^(l)_i = x^(l−1)_i + Σ_j v^(l)_{ij} φ(h^(l)_j) + a^(l)_i,

where M^(l) is the width of the "hidden layer" of the residual block (so that h^(l) has M^(l) coordinates), (v^(l)_{ij}) is a new set of weights, and (a^(l)_i)_{i=1}^{N^(l)} is a new set of biases for every layer l. If we were to change the width of a residual network, as is done in practice, we would need to insert "projection" residual blocks BID11 BID7 every couple of layers. We assume the following simplified projection residual block in this paper, for ease of presentation:

h^(l)_i = Σ_j w^(l)_{ij} x^(l−1)_j + b^(l)_i,
x^(l)_i = Σ_j π^(l)_{ij} x^(l−1)_j + Σ_j v^(l)_{ij} φ(h^(l)_j) + a^(l)_i,

where (π_{ij})_{i,j=1}^{N^(l), N^(l−1)} is the "projection" matrix. Note that we only consider fully-connected affine layers instead of convolutional layers. Deep mean field theory is interested in the "average behavior" of these networks when the weights and biases w^(l)_{ij}, b^(l)_i, v^(l)_{ij}, a^(l)_i, and π^(l)_{ij} are sampled i.i.d. from zero-mean Gaussians with variances, respectively, (σ^(l)_w)²/N^(l−1), (σ^(l)_b)², (σ^(l)_v)²/M^(l), (σ^(l)_a)², and (σ^(l)_π)²/N^(l−1); here we normalize the variance of the weight parameters by the fan-in so that, for example, the variance of each h_i is σ_w², assuming each x_j is fixed. While previous works have all focused on fixing σ^(l)_• to be constant across depth, in this paper we are interested in studying varying σ^(l)_•. In particular, other than σ^(l)_π, which we fix at 1 across depth (so that "projection" doesn't act like an "expansion" or "contraction"), we let σ^(l)_• vary with depth for • ∈ {v, w, a, b}, decaying polynomially with exponents β_• ≥ 0 (so that β_• = 0 recovers the constant-variance setting). Hereafter, the decorations do not apply to the σ's, so that by σ_a, for example, we always mean the constant σ_a. We make the same statistical assumptions as in Yang and Schoenholz (2017); in the interest of space, we relegate their discussion to Appendix A.

Mean Field Quantities. Now we define the central quantities studied in this paper. Inevitably, as deep mean field theory analyzes neural networks closer and closer to those used in practice, the variables and notations become more and more complex; our paper is no different. We have, however, included a glossary of symbols (Table A.1) that will hopefully reduce notational confusion for the first-time reader.
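The following sketch implements a forward pass through the residual and projection blocks as reconstructed above. The block equations and variance conventions are our reading of the text, not a verbatim implementation.

```python
import numpy as np

rng = np.random.default_rng(4)

def frn_forward(x, widths, hidden=None,
                sig=dict(w=1.0, v=1.0, b=0.1, a=0.1, pi=1.0), phi=np.tanh):
    """One forward pass through the simplified residual network above.
    widths[l] is N^(l); when widths[l] != widths[l-1] we use a projection
    block with a random projection matrix of variance sig['pi']^2 / N^(l-1)."""
    for l in range(1, len(widths)):
        n_prev, n_cur = widths[l - 1], widths[l]
        m = hidden or n_cur                      # residual hidden width M^(l)
        W = rng.normal(0, np.sqrt(sig['w'] ** 2 / n_prev), (m, n_prev))
        b = rng.normal(0, sig['b'], m)
        V = rng.normal(0, np.sqrt(sig['v'] ** 2 / m), (n_cur, m))
        a = rng.normal(0, sig['a'], n_cur)
        h = W @ x + b
        if n_cur == n_prev:
            x = x + V @ phi(h) + a               # ordinary residual block
        else:
            P = rng.normal(0, np.sqrt(sig['pi'] ** 2 / n_prev), (n_cur, n_prev))
            x = P @ x + V @ phi(h) + a           # projection residual block
    return x

widths = [512] * 16 + [256] * 9 + [128] * 12    # width halves at l = 16 and l = 25
out = frn_forward(rng.normal(size=512), widths)
print(out.shape, np.mean(out ** 2))
```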
Definition 4.1. Fix an input x. Define the length quantities p^(l) := ⟨(x^(l)_1)²⟩ and q^(l) := ⟨(h^(l)_1)²⟩. Here (and in the following definitions) the expectations ⟨•⟩ are taken over all random initializations of weights and biases for all layers l, as N^(l), M^(l) → ∞ (large width limit). Note that in our definition, the index 1 can be replaced by any other index by Axiom 1. Thus p^(l) is the typical magnitude (squared) of a neuronal activation at layer l.

Definition 4.2. Fix two inputs x and x′. We write •′ to denote a quantity • with respect to the input x′. Then define the correlation quantities γ^(l) := ⟨x^(l)_1 x′^(l)_1⟩ and λ^(l) := ⟨h^(l)_1 h′^(l)_1⟩. Again, here the index 1 does not matter by Axiom 1. By metric expressivity, we mean s^(l) := ⟨(x^(l)_1 − x′^(l)_1)²⟩, and we define e^(l) := γ^(l)/p^(l), which we will also call angular expressivity. Following Yang and Schoenholz (2017), we assume p^(0) = p′^(0) for ease of presentation, but this is a nonessential assumption. Then, as we will see, p^(l) = p′^(l) for all l, and as a result, s^(l) = 2(p^(l) − γ^(l)) = 2p^(l)(1 − e^(l)).

Definition 4.3. Fix an input x and a gradient vector (∂E/∂x^(L)_i)_i of some loss function E with respect to the last layer x^(L). Then define the gradient quantities χ^(l) := ⟨(∂E/∂x^(l)_1)²⟩, χ^(l)_• := ⟨(∂E/∂•^(l)_1)²⟩ for • = a, b, and χ^(l)_• := ⟨(∂E/∂•^(l)_{11})²⟩ for • = w, v. Here the expectations are taken with Axiom 2 in mind, over both random initialization of forward and backward weights and biases, as N → ∞ (large width limit). Again, the index 1 or 11 does not matter by Axiom 1.

Just as in previous works in deep mean field theory (Poole et al., 2016; Schoenholz et al., 2017; Yang and Schoenholz, 2017), the primary tool for investigating the behavior of large-width networks is the central limit theorem. Every time the activations of the previous layer pass through an affine layer whose weights are sampled i.i.d., the output is a sum of a large number of random variables, and thus follows an approximately Gaussian law. The output of the next nonlinearity is then a nonlinear transform of a Gaussian variable, with computable mean and variance. Repeating this logic gives us a depthwise dynamical system of the activation random variables. The gradient dynamics can be similarly derived, assuming Axiom 2.

Theoretically and empirically, WV does not change the mean dynamics of forward quantities like the activation norm p, but it can be used to control the gradient dynamics χ. Intuitively, this is because each neuron at a width-changing layer "receives messages" from different numbers of neurons in the forward and backward computations. If, for example, N^(l) = N^(l−1)/2, then on the backward pass the neuron receives half as many messages as on the forward pass, so we expect its gradient to be half of what it would be when N^(l) = N^(l−1). On the other hand, VV will usually change both the forward and backward dynamics of the mean field quantities. The phase transitions are many and complicated, but the overall trend is that, as we dampen the variance with depth, both forward and backward dynamics dampen as well; the only major exception is weight gradients in ReLU resnets (see Appendix B.1). In contrast to WV, which works the same for any nonlinearity, the phase diagram for VV is controlled by different quantities depending on the nonlinearity. We show through experiments that all of the complexities involved in VV theory are reflected in the practice of training neural networks: we can predict the contour lines of test-time accuracy using only our mean field theory (Section 7 and Appendix B).
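A minimal Monte Carlo estimator of p^(l) and e^(l) for this architecture. The decay convention (σ^(l)_•)² = σ_•² l^(−β_•) is our reading of the variance-decay setup; all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_field_quantities(depth=30, n=1000, m=1000,
                          sigma=dict(w=1.0, v=1.0, b=0.1, a=0.1),
                          beta=dict(w=0.0, v=0.0, b=0.0, a=0.0), phi=np.tanh):
    """Estimate p(l) = <x_1^2> and e(l) ~ gamma(l)/p(l) by propagating two
    correlated inputs through one sampled FRN (same weights for both)."""
    x  = rng.normal(size=n)                               # input 1
    xp = 0.5 * x + np.sqrt(0.75) * rng.normal(size=n)     # input 2, e(0) = 0.5
    p, e = [], []
    for l in range(1, depth + 1):
        d = {k: sigma[k] ** 2 * l ** (-beta[k]) for k in sigma}
        W = rng.normal(0, np.sqrt(d['w'] / n), (m, n))
        V = rng.normal(0, np.sqrt(d['v'] / m), (n, m))
        b = rng.normal(0, np.sqrt(d['b']), m)
        a = rng.normal(0, np.sqrt(d['a']), n)
        h, hp = W @ x + b, W @ xp + b
        x  = x  + V @ phi(h)  + a
        xp = xp + V @ phi(hp) + a
        pl, ppl, gl = np.mean(x ** 2), np.mean(xp ** 2), np.mean(x * xp)
        p.append(pl)
        e.append(gl / np.sqrt(pl * ppl))    # p = p' in the mean, so this ~ gamma/p
    return np.array(p), np.array(e)

p, e = mean_field_quantities()
print(p[::10]); print(e[::10])
```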
Expressivity vs. Trainability Tradeoff. Yang and Schoenholz (2017) made the observation that the optimal initialization scheme for tanh resnets makes an optimal tradeoff between expressivity and trainability: if the initialization variances are too big, then the random network will suffer from gradient explosion with high probability; if they are too small, then the random network will be approximately constant (i.e., have low metric expressivity) with high probability. They posited that such a tradeoff between expressivity and trainability is not observed in ReLU resnets because the gradients against the weight parameters w and v are bounded with respect to depth (so that there is no gradient explosion), while (metric) expressivity is exponential, thus strongly dominating the effect on final performance. We confirm this behavior in tanh resnets when decaying their initialization variances with depth: when there is no decay, gradient explosion bottlenecks the test set accuracy after training; when we impose strong decay, the gradient dynamics is mollified, but now (metric) expressivity, being strongly constrained, bottlenecks performance (Section 7.2). Indeed, we can predict test set accuracy by the level curves of the weight gradient ratio χ^(L)_w/χ^(l)_w in the region of small variance decay, and we can do the same with the level curves of the metric expressivity s in the region of large decay (FIG4). The performance peaks at the intersection of these two regions.

(FIG1 caption: The left two plots show that the mean forward dynamics are more or less preserved, albeit the variance explodes toward the deeper layers, where WV is applied. The last plot shows that the gradient dynamics is essentially suppressed to a constant, compared to the exp(√L) dynamics of a tanh resnet without width decay. Dashed lines indicate theoretical estimates in all three plots; solid lines, simulated data, generated from random residual networks with 100 layers and N^(0) = 2048, where we halve the widths at layers l = m² for m = 4, 5, …, 9.)

Also corroborating Yang and Schoenholz (2017), we did not observe such a tradeoff in ReLU resnets under VV. In the regime of small to moderate variance decay, VV exerts its effect through metric expressivity, not gradient dynamics (Section 7.1). However, when we impose strong decay, gradient explosion, but not metric expressivity, predicts performance, in the unexpected way that worse gradient explosion correlates with better performance; that is, expressivity and trainability (as measured by gradient explosion) are both worsening, yet performance increases! We currently have no explanation for this phenomenon, but hope to find one in future work.

6 WIDTH VARIATION

Width variation was first mentioned in passing, in prior work, as a potential way to guide gradient dynamics for feedforward networks. We develop a complete mean field theory of WV for residual networks. Via Table A.2 and Thm C.3, we see that width variation (WV) has two kinds of effects on the mean gradient norm: compared to no width variation, it can multiply the squared gradient norms of the biases b_i or weights w_ij by N^(l)/M^(l) (which doesn't "stack," i.e., does not affect the squared gradient norms of lower layers), or it can multiply the squared gradient norm of x_i by the ratio of successive widths N^(l)/N^(l−1) (which "stacks," in the sense above, through the dynamics of χ). We will focus on the latter, "stacking" effect, and assume M^(l) = N^(l) hereafter. Suppose that from layer l to layer m, χ^(m) rises to r times χ^(l). If we vary the width so that N^(m−1) is r·N^(m), then this gradient expansion is canceled and χ^(m) returns to the scale of χ^(l), so that it is as if we restarted backpropagation at layer m.
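A sketch of the cancellation trick just described, assuming the exp(Θ(√l)) gradient growth quoted for tanh FRNs; with c = ln 2, the schedule halves the width at each layer l = m², matching the experiment described in the FIG1 caption. The function and constants are ours.

```python
import numpy as np

def width_schedule(depth=100, n0=2048, proj_layers=None, c=np.log(2)):
    """If the squared gradient norm grows like chi(l) ~ exp(c*sqrt(l)),
    place projection blocks at layers l = m^2 and shrink widths so that
    N[l-1]/N[l] equals the chi-growth since the previous projection,
    cancelling that growth (Section 6)."""
    if proj_layers is None:
        proj_layers = [m * m for m in range(4, 10)]   # 16, 25, ..., 81
    chi = lambda l: np.exp(c * np.sqrt(l))
    widths, n, last = {0: n0}, n0, 1
    for l in range(1, depth + 1):
        if l in proj_layers:
            r = chi(l) / chi(last)          # gradient expansion since last projection
            n = max(1, int(round(n / r)))   # N[l] = N[l-1] / r cancels it
            last = l
        widths[l] = n
    return widths

print(width_schedule()[100])   # 2048 halved six times -> 32
```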
Remarkably, changing the width does not change the mean field forward dynamics (for example, the recurrences for p, q, γ, λ remain the same) (Thm C.2). But, as always, if we reduce the width as part of WV (say, keeping N^(0) the same but reducing the widths of later layers), the variance of the sampled dynamics increases; if we increase the width as part of WV (say, keeping N^(L) the same but increasing the widths of earlier layers), the variance of the sampled dynamics decreases.

We can apply this theory of WV to tanh residual networks (φ = tanh in Table A.2) without VV. By Yang and Schoenholz (2017), tanh residual networks with all β_• = 0 have gradient norms growing like exp(Θ(√l)) in depth. If we place projection blocks that halve the width at layers l = m² for m = 4, 5, …, then the gradient norms would be bounded (above and below) across layers, as reasoned above. Indeed, this is what we see in FIG1. The rightmost subfigure compares, with a log-scale y-axis, the gradient dynamics with no WV to that with WV as described above.

(FIG2 caption: The zig: fix β_v = β_a = 0; fix β_w = β_b and increase both from 0 to 2 (making V_r = β_w + β_v go from 0 to 2 as well). The zag: fix β_v = 0 and β_w = β_b = 2; increase β_a from 0 to 2 (increasing U_r from 0 to 2 as well). For each setting of the hyperparameters, we train a network on MNIST with those hyperparameters for 30 epochs. We then report the accuracy that the network achieved on the training and test sets. The plots are, in order from left to right: (a) zig/test, (b) zag/test, (c) zig/train, (d) zag/train. In the zig, we overlay a contour plot of s (computed from Thm C.2), which is almost identical to the contour plots of p and χ^(L)/χ^(l); numbers indicate log(1 + log s). The dashed line is a level set of the dominant term of the asymptotic expansion of log p. In the zag, we overlay a contour plot of χ^(L)_v/χ^(l)_v.)

We see that our theory tracks the mean gradient dynamics remarkably precisely in both the WV and the no-WV cases, and indeed, WV effectively caps the gradient norm for l ≥ 16 (where WV is applied). The left two figures show the forward dynamics of p and e, and we see that WV does not affect the mean dynamics, as predicted by theory. However, we also see a dramatic increase in deviation from the mean dynamics at every projection layer in the forward case. The backward dynamics (rightmost figure) similarly sees large deviations (one standard deviation below the mean is negative for χ_a and χ_w), and although the deviations for χ are more tame, they are still much larger than without WV.

Therefore, width variation is unique in a few ways among all the techniques discussed in the mean field networks literature so far, including variance decay as studied below, adding skip connections, and changing activation functions: it can ameliorate or altogether suppress gradient explosion (or vanishing) without affecting the mean forward dynamics of p, q, λ, γ, c, e. To do so, it has to choose a trade-off from the following spectrum. At one end, we truncate neurons from the original network (say, keeping N^(0) the same), so that we have fewer parameters and less compute, but larger deviations from the mean dynamics. At the other, we add neurons to the original network (say, keeping N^(L) the same), so that we have more parameters, more compute, and smaller deviations from the mean dynamics.

7 VARIANCE VARIATION

7.1 RELU

A zigzag of parameters controls the various asymptotics of a ReLU resnet: "zigging" V_r := β_v + β_w from 0 to > 1, and then "zagging" U_r := min(β_v + β_b, β_a) from 0 to > 1. During the zig, the asymptotics of p is subdued from exp(poly(l)) to poly(l).
During the zag, it is further reduced to Θ(log l) (at U_r = 1) and Θ(1) (when U_r > 1). On the contrary, the gradient dynamics of the weight parameters becomes more explosive along the zigzag. During the zig, χ^(l)_v increases from Θ(l^{β_v}) to a larger poly(l). During the zag, both weight-gradient quantities increase the exponents of their polynomial dependence on l. In the interest of space, we stop our sketch of the VV dynamics for ReLU resnets here, and refer the reader to Appendix B.1 for more detailed descriptions and to Appendix C for proofs. To test our theory, we sweep through these two macro-phases of the parameter space in our experiments and train an array of randomly initialized ReLU resnets; results are demonstrated in FIG2, whose caption gives the experimental details. In addition, we provide heatmaps and contour plots of various quantities of interest, such as p, e, and χ, in Fig. A.2.

(FIG4 caption: In all experiments here, we pin the σ_•'s to 1. From left to right: (a) and (b): we sweep U_t from 0 to 1 in two ways, testing to what extent U_t determines final performance. In the first way (a), we set all β_•'s equal to a common β and increase β from 0 to 1. In the second way (b), we clamp β_b = β_a = 1, set β_w = β_v, and increase their common value from 0 to 1. The heatmaps are produced from final test set performance as in FIG2. As we can easily see, these two sweeps produce almost identical heatmaps. In both plots, there is a visible peak in the heatmaps in the upper left. On each of (a) and (b), we overlay the contours of χ^(L)_w/χ^(l)_w (in blue) to the left of the peak and those of p (in green) to the right of the peak, the latter being very similar to those of s. The blue numbers indicate log χ^(L)_w/χ^(l)_w, while the green numbers indicate log p. (c): we fix U_t = 1 by fixing β_v = β_a = 1 and sweep β_w = β_b from 0 to 1, thereby sweeping V_t from 1 to 2. The heatmap is obtained with the same procedure as in FIG2 from the test set after training. Overlaid on top is a contour plot of s; the numbers indicate log s.)

Discussion. First, notice that training fails in the upper left corner of the zig. This happens because of numerical instability caused by exploding p and χ^(L)/χ^(l), which grow like exp(Θ(l^{1−V_r})) (Thms C.9 and C.11). Indeed, one of the contour lines of p traces out almost exactly where training fails. The dashed line is a level set of the dominant term of the asymptotic expansion of log p, and we see that it agrees with the contour of p very well. By increasing β_w = β_b, we effectively solve the activation explosion problem observed by Yang and Schoenholz (2017) without changing the activation function. Second, observe that performance actually dips in the direction where χ^(L)/χ^(l) decreases, quite counterintuitively. This can be explained (as in Yang and Schoenholz (2017)) by noting that the gradients against the weights, χ_w and χ_v, in fact respectively remain bounded and polynomial in l (and change rather mildly with V_r; see FIG1); the gradients against the biases do experience the same behavior as χ, but in general the biases are much less important than the weights, as parameters go. In addition, performance also dips in the direction where s decreases (exponentially) in (V_r, L)-space. This is the quantity that essentially underlies the exponential expressivity (as told from an extrinsic curvature perspective) of Poole et al. (2016); as s decreases dramatically, it gets harder and harder for a linear functional in the final layer to tell apart two input vectors.
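The zig-zag regimes above can be summarized mechanically. The classifier below simply transcribes the asymptotics quoted in the text (Thms C.9 and C.11); the boundary cases have finer subcases in the appendix, which we gloss over here.

```python
def relu_resnet_regimes(beta_v, beta_w, beta_a, beta_b):
    """Predicted asymptotics of a random ReLU FRN with variance-decay
    exponents beta_*: activation norm p(l) and gradient ratio chi(L)/chi(l).
    Exact-equality branches (V_r == 1 etc.) are more delicate in Thm C.9."""
    V_r = beta_v + beta_w
    U_r = min(beta_v + beta_b, beta_a)
    if V_r < 1:
        p = f"exp(Theta(l^{1 - V_r:g}))"
    elif U_r < 1:
        p = f"Theta(l^{1 - U_r:g})"
    elif U_r == 1:
        p = "Theta(log l)"
    else:
        p = "Theta(1)"
    if V_r < 1:
        chi = f"exp(Theta(l^{1 - V_r:g}))"
    elif V_r == 1:
        chi = "poly(l)"
    else:
        chi = "Theta(1)"
    return {"V_r": V_r, "U_r": U_r, "p": p, "chi_ratio": chi}

print(relu_resnet_regimes(0, 0, 0, 0))   # no decay: exponential explosion
print(relu_resnet_regimes(1, 1, 2, 1))   # strong decay: everything bounded
```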
This exponential loss in expressivity dominates the effect on performance more than a polynomial reduction in gradient explosion does. Third, it is remarkable that in the zag regime, the level curves of χ^(L)_v/χ^(l)_v (but not those of p, s, e, χ, or χ_w!) accurately predict the contours of test set performance, in such a counterintuitive way that greater gradient explosion χ^(L)_v/χ^(l)_v correlates with better performance (FIG2). Especially when β_a (and thus also U_r) is large, the weight gradient dynamics is much more explosive than that of metric expressivity, so according to prevailing theory, gradient explosion should bottleneck performance; instead, the reality is the exact opposite. It is currently unclear whether, in certain situations like this, larger gradient expansion is actually beneficial, or whether there is a yet-undiscovered quantity which has the same level curves and can explain away this seeming paradox (like how s explains away χ^(L)/χ^(l) above, in the zig regime). Of the quantities that appear in Fig. A.2, none fits the bill.

7.2 TANH

As in the previous section, we briefly sketch the VV dynamics of tanh resnets, deferring a more detailed discussion to Appendix B.2 and the proofs to Appendix C. We are concerned with the scenario where q → ∞ with l, as otherwise the higher layers become essentially unused by the network. The major phases of tanh resnet VV dynamics are then determined by whether U_t := min(β_v, β_a) < 1 or U_t = 1 (Thms C.5 and C.6). In the former case, the gradient dynamics is controlled by W_t := 1 − β_w + U_t − 2β_v; as W_t starts positive, decreases to 0, and then becomes negative, the gradient ratio χ^(L)/χ^(l) starts out as exp(poly(l)), turns polynomial, and finally becomes bounded. When U_t = 1, χ^(L)/χ^(l) is always subpolynomial, with V_t := β_v + β_w making it bounded as V_t increases past 1. On the other hand, the dynamics of p is quite simple: p = Θ(l^{1−U_t}) when U_t < 1 and p = Θ(log l) when U_t = 1.

This theory enables us to predict and optimize a VV initialization scheme for tanh resnets. We sweep through the two phases described above, train the corresponding random networks, and exhibit the results in FIG4; the figure caption details the experimental setup.

Discussion. FIG4(a) and FIG4(b) sweep through U_t from 0 to 1 in two different ways while obtaining almost identical test set performance heatmaps, showing that the hyperparameters β_• indeed exert their effect through U_t = min(β_v, β_a). In each of these two plots, peak accuracies occur in the upper left. To the left of such a peak, the gradient norm ratio χ^(L)_w/χ^(l)_w predicts the accuracy, while to the right of such a peak, the metric expressivity s does (and in this case p as well, because they induce similar contours). But χ^(L)_w/χ^(l)_w would not do well in the large-β region, because the slopes of its contours are too steep; conversely, s would not predict well in the small-β region, because its contour slopes are not steep enough. Indeed, one sees that the slopes of the heatmap level-set boundaries decrease as the accuracy levels increase (and as β decreases from 1), but when the level peaks, the slope suddenly becomes much steeper (compare the left boundary of the peak to its right boundary). Our observation here reaffirms the trainability-vs-expressivity tradeoff studied in Yang and Schoenholz (2017).

In FIG4(c), we study the U_t = 1 phase. Here, s alone predicts performance (though in the large β_w = β_b region, final accuracy becomes more random and the prediction is not as good). This is expected, as χ^(L)_•/χ^(l)_• is now subpolynomial for all • = v, w, a, b (Thm C.6), so that trainability is not an issue.
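An analogous transcription for tanh resnets (Thms C.5 and C.6 as quoted above); again, the boundary cases are delicate and this is only a summary.

```python
def tanh_resnet_regimes(beta_v, beta_w, beta_a, beta_b):
    """Predicted asymptotics of a random tanh FRN, assuming the regime
    q -> infinity (which forces U_t <= 1)."""
    U_t = min(beta_v, beta_a)
    W_t = 1 - beta_w + U_t - 2 * beta_v
    V_t = beta_v + beta_w
    if U_t > 1:
        raise ValueError("U_t > 1 is impossible when q -> infinity (Thm C.5)")
    if U_t < 1:
        p = f"Theta(l^{1 - U_t:g})"
        if W_t > 0:
            chi = "exp(poly(l))"       # exp(Theta(sqrt(l))) when all betas = 0
        elif W_t == 0:
            chi = "poly(l)"
        else:
            chi = "Theta(1)"
    else:                               # U_t == 1
        p = "Theta(log l)"
        chi = "exp(Theta(sqrt(log l)))" if V_t == 1 else "Theta(1)"
    return {"U_t": U_t, "W_t": W_t, "V_t": V_t, "p": p, "chi_ratio": chi}

print(tanh_resnet_regimes(0, 0, 0, 0))   # recovers the exp(sqrt(l)) growth
print(tanh_resnet_regimes(1, 0, 1, 0))   # U_t = 1: subpolynomial gradients
```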
In this paper, we derived the mean field theory of width variation and variance variation and showed that they are powerful methods for controlling the forward (VV) and backward (VV + WV) dynamics. We proved that, even with a fixed architecture and activation function, the mean field dynamics of a residual neural network can still be manipulated at will by these two methods. Extraordinarily, the mean field theory we developed allowed us to accurately predict the performance of trained MNIST models relative to different initializations, though one puzzling aspect remains, where test set accuracy seems to increase as gradient explosion worsens in one regime of random ReLU resnets.

Open Problems. We solved a small part, width variation, of the program to construct mean field theories of the state-of-the-art neural networks used in practice. Many open problems remain; the most important of them include, but are not limited to, 1. batchnorm, 2. convolution layers, and 3. recurrent layers. In addition, more work is needed to turn our "physical" assumptions, Axiom 1 and Axiom 2, into mathematically justified statements. We hope readers will take note and contribute toward deep mean field theory.

References

Jeffrey Pennington, Samuel Schoenholz, and Surya Ganguli. Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice. In Advances in Neural Information Processing Systems 30, pages 4788-4798. Curran Associates, Inc., 2017.

Ben Poole, Subhaneil Lahiri, Maithreyi Raghu, Jascha Sohl-Dickstein, and Surya Ganguli. Exponential expressivity in deep neural networks through transient chaos. In Advances in Neural Information Processing Systems, pages 3360-3368, 2016.

(Table A.2 caption: The two cases for χ^(l−1)/χ^(l) are respectively for a projection and a normal residual block, assuming σ_π = 1. The V and W operators are defined in Defn C.1.)

A STATISTICAL ASSUMPTIONS

We make several key "mean field" assumptions, which were first formulated in their entirety in Yang and Schoenholz (2017) (though Axiom 2(a) was stated earlier by Schoenholz et al. (2017)). While these assumptions may be mathematically unsatisfying, identifying them and discovering that they lead to highly precise predictions is in fact one of the most important contributions of deep mean field theory. Axiom 1 is a symmetry assumption on the coordinates of the forward activations (part (a)) and of the backpropagated gradients (part (b)); it is what allows us to replace the index 1 by any other index in the definitions of Section 4. We also assume that the gradient ∂E/∂x^(L)_i with respect to the loss function E satisfies the analogous symmetry at the last layer.

Axiom 2 (Gradient independence). (a) We assume that we use a different set of weights for backpropagation than those used to compute the network outputs, but sampled i.i.d. from the same distributions. (b) For any loss function E, we assume that the gradient at layer l, ∂E/∂x^(l)_i, is independent from all activations h^(l)_j and x^(l−1)_j from the previous layer.

One can see that Axiom 1(a) is satisfied if the input x ∈ {±1}^N, and Axiom 1(b) is satisfied if Axiom 2 is true and the gradient at the last layer satisfies ∂E/∂x^(L) ∈ {±1}^N. Axiom 2(a) was first made by Schoenholz et al. (2017) for computing the mean field theory of gradients of feedforward tanh networks. This is similar to the practice of feedback alignment (Lillicrap et al., 2016). As discussed in Section 6, WV is essentially a post-hoc technique to tweak an existing gradient dynamic without changing the forward dynamics.
Thus, in this section we assume all widths are constant, N^(l) = M^(m) = N for all l, m, so that WV can be applied as a "touch-up" if necessary. We overview the phases and transitions due to VV, but defer all proofs to later sections.

B.1 RELU

It was shown in Yang and Schoenholz (2017) that, with no variance decay, in a ReLU resnet both the mean squared activation norm (p) and the mean squared gradient norm (χ) explode exponentially with depth, and this causes training to fail for even 100-layer networks. We show that this problem is in fact extremely easy to fix, requiring no architectural changes at all, only that the β_•'s be increased from 0 so that the randomization variances decay across depth (Thms C.9 and C.11).

Gradient quantities. The main driver of this gradient mollification is V_r := β_v + β_w. When V_r ∈ [0, 1), the gradient norm varies like χ^(L)/χ^(l) = exp(Θ(l^{1−V_r})); when V_r = 0, this recapitulates the exponential behavior derived in Yang and Schoenholz (2017). When V_r = 1, it experiences a sharp phase transition, where now χ^(L)/χ^(l) = poly(l). As V_r becomes larger than 1, χ^(L)/χ^(l) = Θ(1): all bounded! Fig. B.3 verifies this empirically, and in fact shows that our computed asymptotic expansions in Thm C.11 are highly accurate predictors. It is both interesting and important to note that the gradient norms for the actual trainable parameters, such as χ_w and χ_v, are affected differently by V_r. In fact, χ^(l)_w is bounded in l when V_r < 1 (the V_r = 0 case was already observed in Yang and Schoenholz (2017)) but phase transitions to poly(l) for V_r ≥ 1, while χ^(l)_v is already poly(l) when V_r < 1, and remains so as V_r increases past 1. Curiously, greater gradient explosion in χ_v predicts better performance in the V_r > 1 regime, and we currently do not know whether this is intrinsic or there are confounding variables; see Section 7.1.

Length quantities. Similarly, V_r is the primary conduit for mollifying the behavior of the squared activation norms p and q (Thm C.9). Like the gradient dynamics, p = exp(Θ(l^{1−V_r})) when V_r < 1; when V_r = 0, this recapitulates the results of Yang and Schoenholz (2017). As V_r rises to 1 and above, p experiences a phase transition into polynomial dynamics, but unlike the case of χ, it is not constant when V_r > 1. Instead, a different parameter, U_r := min(β_v + β_b, β_a), drives the asymptotics of p in the V_r > 1 regime. When U_r ∈ [0, 1), p grows like Θ(l^{1−U_r}). The instant U_r hits 1, p is just logarithmic, p = Θ(log l). As U_r shoots past 1, p becomes constant. Thus the dynamics of p is governed by a zigzag through (β_•)_{• = v, w, a, b} space. Fig. B.4 goes through each of the five cases discussed above and verifies that our asymptotics are correct.

Cosine distance. e = γ/p measures how well the input space geometry (angles, in this case) is preserved as the input space is propagated through each layer. Its dynamics is much simpler than those of p and χ above. If e^(0) = 1, then e^(l) = 1 for all l, trivially. If e^(0) < 1, then we have one of the following two cases:

• If V_r ≤ 1 or U_r ≤ 1, then e^(l) → 1, irrespective of the initial data p^(0) and γ^(0).

(Fig. B.3/B.4 caption: χ^(L)/χ^(l) is bounded above when V_r = 1.6. Solid lines show simulated dynamics following the recurrences of Table A.2, while dashed lines indicate the asymptotics proved in Thm C.9. In all but the leftmost plot, we show both p and Δp (possibly adjusted for log factors) and their asymptotics. To facilitate visualization, all dashed lines except the red one in the leftmost plot are shifted vertically to match the end points of the corresponding solid lines. In the leftmost plot, the red lines are respectively log p (solid) and the leading term of its asymptotic expansion (dashed).)
• If V_r > 1 and U_r > 1, then e^(l) converges to a fixed point e* < 1, dependent on the initial data p^(0) and γ^(0), at a rate of Θ(l^{1−U_r}).

Thus ReLU very much likes to collapse the input space into a single point (e = 1 means every two input vectors get mapped to the same output vector), and the only way to prevent this from happening is to make the β_•'s so high that the higher layers barely modify the computation done by the lower layers at all. Indeed, the second condition, V_r > 1 and U_r > 1, ensures that p = Θ(1), as discussed above (Thm C.9), so that as l → ∞, layer l's residual adds to x^(l−1) only a vector of vanishing size compared to the size of x^(l−1).

B.2 TANH

While ReLU resnets depend heavily on V_r = β_v + β_w and U_r = min(β_v + β_b, β_a), U_t := min(β_v, β_a) and W_t := 1 − β_w + U_t − 2β_v are the key quantities determining the dynamics of tanh resnets, with V_t = β_v + β_w = V_r playing a minor role. We study tanh resnets in the setting where q → ∞ as l → ∞; otherwise, p is bounded, meaning that the higher layers become essentially unused by the neural network (similar to the discussion in Appendix B.1, Cosine distance, above). In this setting, it can be shown that U_t ≤ 1 (Thm C.5).

Gradient quantities. By Thm C.6, as long as U_t stays below 1, the asymptotics of χ is entirely governed by W_t, which is 1 when all β_•'s are 0 and which generally decreases as the β_•'s are increased. The results of Yang and Schoenholz (2017) are recovered by setting all β_•'s to 0, giving W_t = 1 and the familiar exp(Θ(√l)) gradient growth. As W_t hits 0, χ^(L)/χ^(l) becomes polynomial, and as W_t dips below 0, the gradient expansion becomes bounded. When U_t = 1, χ^(L)/χ^(l) is automatically suppressed to be subpolynomial. The only minor phase transition here is going from V_t = 1 to V_t > 1 (V_t cannot be less than 1 by our assumption that q → ∞). In the former case, the gradient expansion is exp(Θ(√log l)), while in the latter it is bounded.

Length quantities have simpler asymptotics, determined by U_t: either U_t < 1 and p = Θ(l^{1−U_t}), or U_t = 1 and p = Θ(log l) (Thm C.5).

Cosine distance, unlike in the case of ReLU resnets, can be controlled effectively by β_a and β_v (Thm C.7). When β_a > β_v, the magnitude of a^(l)_i drops much more quickly with depth than that of v^(l)_{ij}, so that the higher layers experience the chaotic phase (Poole et al., 2016), driving e^(l) toward the limit point e* = 0. At the other end, when β_a < β_v, the bias term a^(l)_i dominates the residual for large l, so that the higher layers experience the stability phase, collapsing all inputs to the same output vector and sending e^(l) → 1. Only when β_a = β_v can the fixed point e* be controlled explicitly by σ_v and σ_a, with e* given by an explicit fixed-point equation (see Thm C.7).

C PROOFS

Notation. We write f = Õ(g) if f = O(g log^k g) for some k ∈ Z (this is slightly different from the standard usage of Õ), and similarly for the other asymptotic notations. All asymptotic notations are sign-less, i.e., they can indicate either positive or negative quantities, unless stated otherwise.

We recall the integral transforms from Yang and Schoenholz (2017). Definition C.1. Define the transforms V and W by Vφ(ρ) := E[φ(z)² : z ∼ N(0, ρ)] and Wφ(ρ, λ) := E[φ(z)φ(z′)], where (z, z′) are jointly Gaussian, each of variance ρ, with covariance λ.

Yang and Schoenholz (2017) gave recurrences for the mean field quantities p, q, γ, λ, χ under the assumption of constant initialization variances across depth. The proofs there carry over straightforwardly when the variance varies from layer to layer. They also derived backward dynamics for when the width of the network is not constant.
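Before the residual-network generalization below, here is a quick Monte Carlo sanity check of Definition C.1 against the ReLU closed forms quoted later in this appendix (Vφ(q) = q/2 and Wφ(q, cq) = Vφ(q)·J1(c)); the explicit J1, the degree-1 arc-cosine kernel factor, and the two-argument form of W follow our reading of the definitions.

```python
import numpy as np

rng = np.random.default_rng(2)

def V_transform(phi, rho, n=200_000):
    """Monte Carlo estimate of V_phi(rho) = E[phi(z)^2], z ~ N(0, rho)."""
    z = rng.normal(0.0, np.sqrt(rho), n)
    return np.mean(phi(z) ** 2)

def W_transform(phi, rho, lam, n=200_000):
    """Monte Carlo estimate of W_phi(rho, lam) = E[phi(z) phi(z')] for
    (z, z') jointly Gaussian, each of variance rho, covariance lam."""
    cov = np.array([[rho, lam], [lam, rho]])
    z = rng.multivariate_normal([0.0, 0.0], cov, n)
    return np.mean(phi(z[:, 0]) * phi(z[:, 1]))

relu = lambda x: np.maximum(x, 0.0)
q, c = 2.0, 0.5
# J1(c) = (sqrt(1 - c^2) + (pi - arccos(c)) * c) / pi; note J1(1) = 1.
J1 = (np.sqrt(1 - c ** 2) + (np.pi - np.arccos(c)) * c) / np.pi
print(V_transform(relu, q), q / 2)                 # should agree
print(W_transform(relu, q, c * q), (q / 2) * J1)   # should agree
```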
Generalizing to the residual network case requires some careful justifications of independences, so we provide proof for gradient dynamics; but we omit the proof for the forward dynamics as it is not affected by nonconstant width and is almost identical to the constant variance case. Theorem C.2. For any nonlinearity φ in an FRN, regardless of whether widths vary across layers, DISPLAYFORM0 Theorem C.3. Suppose a random residual network receives a fixed gradient vector ∂E/∂x DISPLAYFORM1 with respect to some cost function E, at its last layer. For any nonlinearity φ in an FRN, under Axiom 1 and Axiom 2, wheneverφ(ζ) 2 has finite variance for any Gaussian variable ζ, DISPLAYFORM2 Proof. We will show the proof for the projection connection case; the identity connection case is similar but easier. DISPLAYFORM3. We have the following derivative computations: DISPLAYFORM4 where in the second equality, we expanded algebraically, and in the third equality, we use the symmetry assumption Axiom 1 and the independence assumption Axiom 2. Now, DISPLAYFORM5 by the independence of {v ik} i,k ∪ {h k} k ∪ {w kj} k,j (by our independence assumptions Axiom 2). Similarly, because {π ij} j ∪ {π i j} j ∪ {v ik} k ∪ {v i k} k for i = i is mutually independent by our assumptions, one can easily see that DISPLAYFORM6 For the other gradients, we have (where we apply Axiom 2 implicitly) DISPLAYFORM7 In this section we derive the asymptotics of various mean field quantities for tanh resnet. The main proof technique is to bound the dynamics in question with known dynamics of difference equations (as in). DISPLAYFORM0 Theorem C.5. Suppose φ = tanh, and q (l) → ∞ as l → ∞.1. If U t < 1, then 1 > β w + U t and DISPLAYFORM1 More specifically, DISPLAYFORM2 2. If U t = 1, then β w = 0, and DISPLAYFORM3 3. U t cannot be greater than 1.Proof. Claim 1. We have Vφ(q) = 1 − 2 π q −1/2 + Θ(q −3/2) by Lemma D.5. Thus DISPLAYFORM4 where we used the assumption 1 − U t > 0. Thus p (l) = Θ(l 1−Ut) and goes to infinity with l. DISPLAYFORM5 Because we assumed q → ∞, the first term necessarily dominates the second, and 1 − U t − β w > 0. The possible asymptotics of q are then DISPLAYFORM6 Then for q to go to infinity, β w has to be 0, so that q = Θ(log l) as well, and DISPLAYFORM7 Theorem C.6. Suppose φ = tanh, and q (l) → ∞ with l. Recall W t = 1 − β w + U t − 2β v and DISPLAYFORM8 Proof. By Thm C.3 and Lemma D.4, we have DISPLAYFORM9 where C = 2 3 2 π. Since q (l) has different growth rates depending on the hyperparameters, we need to consider different cases:• If U t < 1, then Thm C.5 implies β w + U t < 1 and q = Θ(l DISPLAYFORM10 • DISPLAYFORM11 • If U t = 1, then by Thm C.5, β w = 0 and q = Θ(log l). So l −βv−βw q −1/2 = Θ(l −βv−βw / √ log l).• If β v + β w < 1, then β v < 1 =⇒ U t < 1, contradiction.• DISPLAYFORM12 • DISPLAYFORM13 Theorem C.7. Suppose φ = tanh and q → ∞. If e < 1, then e (l) converges to a fixed point e *, given by the equations DISPLAYFORM14 Note that in the case β a = β v, we recover the fixed point of tanh residual network without variance decay. Proof. We have DISPLAYFORM15 Using Lemma D.1 and Lemma D.5, we can see that the LHS is monotonic (increasing or decreasing) for large enough l. Therefore e (l) is a bounded monotonic sequence for large enough l, a fortiori it has a fixed point. If we express e = e * +, DISPLAYFORM16 RHS It's easy to verify via Thm C.5 that either p/(p−p) = Θ(l log l) (when U t = 1) or p/(p−p) = Θ(l) (all other cases). 
If − = Ω((l log l) −1 ), then = Ω(log log l) (by Euler-MacLaurin formula), which would be a contradiction to = o. DISPLAYFORM17 10 Hence the RHS goes to 0 with l. LHS If β v > β a, then the LHS converges to 1. So e * = 1. DISPLAYFORM18 Vφ(q). As l → ∞, c ∼ e → e *, and Wφ(q, e * q) → 2 π arcsin(e *),and Vφ(q) → 1. Therefore, e * = 2 π arcsin(e *), for which there are 2 solutions, 0, the stable fixed point, and 1, the unstable fixed point. In particular, for all e < 1, e (l) → 0.If β v = β a, then taking limits l → ∞, we get DISPLAYFORM19 The asymptotics of ReLU resnet depends on the following values:Definition C.8. DISPLAYFORM0 Theorem C.9. Let φ = ReLU. Then we have the following asymptotics of p and q: DISPLAYFORM1 • If U r = 1 DISPLAYFORM2 • If V r = 1• If W r = 1 − U r p = Θ(l max(Wr,1−Ur) ), and q = Θ(l max(max(Wr,1−Ur)−βw,−β b ) ).• Otherwise DISPLAYFORM3 10 In fact, since must be o(log log · · · log l) for any chain of logs, DISPLAYFORM4 for any k, where log (k) is k-wise composition of log; so (−) DISPLAYFORM5 • p, q = exp(DISPLAYFORM6, R is the same R as in Lemma D.7, depending on only l, W r, and V r, with DISPLAYFORM7 and q = exp( DISPLAYFORM8 In particular, p, q = poly(l) if V r ≥ 1. DISPLAYFORM9 We apply Lemma D.8 with β = β v + β w, δ = • If V r ≤ 1 or U r ≤ 1, then lim l→∞ e (l) = 1.• If V r > 1 and U r > 1, then e (l) converges to a fixed point e * < 1, dependent on the initial data p and γ, at a rate of Θ(l −Ur+1).Proof. By BID4; , we have Wφ(q, cq) = Vφ(q)J 1 (c) = DISPLAYFORM10 As in the proof of Thm C.7, we have DISPLAYFORM11 where the last inequality holds for all large l, by Lemma D.1. So the LHS is nonnegative for large l, which implies e (l) is a bounded monotonically increasing sequence for large enough ls and thus has a limit e *.Writing e = e * +, we have DISPLAYFORM12 LHS. By Thm C.9, p = ω (assuming σ v, σ w > 0), so that p/(p − p) =Õ(l). As in the proof of Thm C.7, − cannot beΩ(l −1), or else → ∞. Thus (−)p/(p − p) = o, and the LHS in the limit l → ∞ becomes e *. In all cases, we will find e * = 1. • If p = ω(l βw−β b), then c ∼ e → e *, so that in the limit l → ∞, e * = J 1 (e *). This equation has solution e * = 1. DISPLAYFORM0 • If p = o(l βw−β b), then c → 1, so that e * = J 1 = 1.• DISPLAYFORM1, and we have an equation e * = J 1 (DISPLAYFORM2 Note that since e * ∈, ) iff the equality condition above is satisifed, i.e. e * = 1. DISPLAYFORM3 DISPLAYFORM4 by the same logic as in the proof of Lemma D.1. As in the above case, Lemma D.1 yields the following : DISPLAYFORM5. Since the RHS of this equation is a convex combination of J 1 (e *) and 1, and J 1 (e *) ≥ e * by the monotonicity of J 1, the equality can hold only if J 1 (e *) = e *. The only such e * is 1.• DISPLAYFORM6, and we have an equation DISPLAYFORM7. By the monotonicity of J 1, the RHS is at least DISPLAYFORM8, which is a convex combination of e * and 1. Since e * ≤ 1, the equality can hold only if e * = 1. DISPLAYFORM9 By Thm C.9, p = Θ and therefore γ = Θ. Both converge to fixed points p * and γ * (possibly dependent on initial data p and γ ) because they are monotonically increasing sequences. Thus e * = γ * /p *.Unwinding the proof of Thm C.9, we see that p − p = l −Ur, so that p/(p − p) = Θ(l Ur). Since the RHS of Eq. cannot blow up, it must be the case that DISPLAYFORM10 1) for some constant F, then the LHS becomes e * + F in the limit. Yet, unless γ = p, e * < 1. Therefore, F > 0 (or else, like in the case of V r ≤ 1 or U r ≤ 1, e * = 1) whenever γ < p, and = Θ(l −Ur+1).Theorem C.11. 
Suppose φ = ReLU.• DISPLAYFORM11 • If U r ∈ DISPLAYFORM12 Under review as a conference paper at ICLR 2018 DISPLAYFORM13 • DISPLAYFORM14 Proof. Using Thm C.3 and the fact that Vφ(q) = 1 2 q, Vφ(q) = 1 2, we get DISPLAYFORM15 χ (l) = Θ by Lemma D.7. By Thm C.9:• If V r > 1, then DISPLAYFORM16 χ (l) = Θ by Lemma D.7.• If U r ∈ p = Θ(l 1−Ur) and q = Θ(l max(1−Ur−βw,−β b) ). So • If W r = 1 − U r p = Θ(l max(Wr,1−Ur) ), and q = Θ(l max(max(Wr,1−Ur)−βw,−β b ) ). So • We have p = exp(In this section, we present many lemmas used in the proofs of the main theorems. In all cases, the lemmas here have already appeared in some form in , and for completeness, we either include them and their proofs here or improve upon and extend them, with the blessing of the authors. Lemma D.1. Asymptotically, c = σ DISPLAYFORM0 DISPLAYFORM1 Proof. Euler-MacLaurin formula. Lemma D.7. Suppose (l) satisfies the recurrence (l) = (l−1) (1 + δ l β). for some nonzero constant δ ∈ R independent of l.• If β > 1, then (l) = Θ.• If β = 1, then (l) = Θ(l δ).• If 0 < β < 1, then (l) = exp(Exponentiating gives the desired . Lemma D.8. Suppose (l) = Cl −α + (l−1) (1 + δ/l β) for α ∈ R, C = 0, and δ = 0. Then• If β > 1, then DISPLAYFORM2 • (l) = Θ(log l) if α = 1;• (l) = Θ if α > 1.• If β = 1, then DISPLAYFORM3 • (l) = Θ(l δ log l) if 1 − δ = α.• If β < 1, then Furthermore, for β = −δ = 1: (l) ∼ l −1 if α > 2, (l) ∼ l 1−α if α < 2, and (l) ∼ l δ log l if α = 2. DISPLAYFORM4 Proof. We can unwind the recurrence to get A fortiori, (l) = e δ 1−β l 1−β +Θ(l max(0,1−2β) ).For our "furthermore" claim: the case of δ = −1 telescopes, so that the upper and lower constants hidden in Θ can both be taken to be 1.
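To close the loop on the fixed-point analysis in the proof of Thm C.7 (the β_v > β_a case), one can iterate the limiting map e ↦ (2/π) arcsin(e) numerically and watch every e^(0) < 1 flow to the stable fixed point 0, with 1 the unstable fixed point:

```python
import numpy as np

def tanh_e_fixed_points(iters=200):
    """Iterate e <- (2/pi) * arcsin(e), the large-q limit of the cosine
    distance map quoted in Thm C.7's proof.  Note f(1) = 1 but
    f'(0) = 2/pi < 1, so 0 is stable and 1 is unstable."""
    for e0 in [0.999, 0.9, 0.5, 0.1]:
        e = e0
        for _ in range(iters):
            e = (2 / np.pi) * np.arcsin(e)
        print(f"e(0) = {e0}: e({iters}) ~ {e:.4f}")   # all converge to 0

tanh_e_fixed_points()
```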
rJGY8GbR-
By setting the width or the initialization variance of each layer differently, we can actually subdue gradient explosion problems in residual networks (with fully connected layers and no batchnorm). A mathematical theory is developed that not only tells you how to do it, but also surprisingly is able to predict, after you apply such tricks, how fast your network trains to achieve a certain test set performance. This is some black magic stuff, and it's called "Deep Mean Field Theory."
We explore the collaborative multi-agent setting where a team of deep reinforcement learning agents attempts to solve a shared task in partially observable environments. In this scenario, learning an effective communication protocol is key. We propose a communication protocol that allows for targeted communication, where agents learn what messages to send and who to send them to. Additionally, we introduce a multi-stage communication approach where the agents coordinate via several rounds of communication before taking an action in the environment. We evaluate our approach on several cooperative multi-agent tasks of varying difficulty, with varying numbers of agents, in a variety of environments ranging from 2D grid layouts of shapes and simulated traffic junctions to complex 3D indoor environments. We demonstrate the benefits of targeted as well as multi-stage communication. Moreover, we show that the targeted communication strategies learned by the agents are quite interpretable and intuitive.

Effective communication is a key ability for collaborative multi-agent systems. Indeed, intelligent agents (human or artificial) in real-world scenarios can significantly benefit from exchanging information that enables them to coordinate, strategize, and utilize their combined sensory experiences to act in the physical world. The ability to communicate has wide-ranging applications for artificial agents: from multi-player gameplay in simulated games (e.g., DoTA, Quake, StarCraft) or physical worlds (e.g., robot soccer), to networks of self-driving cars communicating with each other to achieve safe and swift transport, to teams of robots on search-and-rescue missions deployed in hostile and fast-evolving environments.

A salient property of human communication is the ability to hold targeted interactions. Rather than the 'one-size-fits-all' approach of broadcasting messages to all participating agents, as has been previously explored BID19 BID4, it can be useful to direct certain messages to specific recipients. This enables a more flexible collaboration strategy in complex environments. For example, within a team of search-and-rescue robots with a diverse set of roles and goals, a message for a fire-fighter ("smoke is coming from the kitchen") is largely meaningless for a bomb-defuser. In this work, we develop a collaborative multi-agent deep reinforcement learning approach that supports targeted communication. Crucially, each individual agent actively selects which other agents to send messages to. This targeted communication behavior is operationalized via a simple signature-based soft attention mechanism: along with the message, the sender broadcasts a key which encodes properties of the agents the message is intended for, and which is used by receivers to gauge the relevance of the message. This communication mechanism is learned implicitly, without any attention supervision, as a result of end-to-end training using a downstream task-specific team reward. The inductive bias provided by soft attention in the communication architecture is sufficient to enable agents to 1) communicate agent-goal-specific messages (e.g., guide the fire-fighter towards the fire and the bomb-defuser towards the bomb), 2) be adaptive to variable team sizes (e.g., the size of the local neighborhood a self-driving car can communicate with changes as it moves), and 3) be interpretable through the predicted attention probabilities, which allow for inspection of which agent is communicating what message, and to whom.
Multi-agent systems fall at the intersection of game theory, distributed systems, and Artificial Intelligence in general BID18, and thus have a rich and diverse literature. Our work builds on and is related to prior work in deep multi-agent reinforcement learning, the centralized-training-and-decentralized-execution paradigm, and emergent communication protocols.

Multi-Agent Reinforcement Learning (MARL). Within MARL (see BID1 for a survey), our work is related to recent efforts on using recurrent neural networks to approximate agent policies BID6, algorithms for stabilizing multi-agent training BID13 BID5, and tasks in novel application domains, such as coordination and navigation in 3D simulated environments BID17 BID16 BID8.

Centralized Training & Decentralized Execution. BID19 and follow-up work adopt a fully centralized framework at both training and test time: a central controller processes local observations from all agents and outputs a probability distribution over joint actions. In this setting, any controller (e.g., a fully-connected network) can be viewed as implicitly encoding communication. BID19 present an efficient architecture to learn a centralized controller invariant to agent permutations, by sharing weights and averaging as in BID23. VAIN proposes to replace averaging by an attentional mechanism to allow targeted interactions between agents. While closely related to our communication architecture, that work only considers fully supervised one-next-step prediction tasks, while we tackle the full reinforcement learning problem with tasks requiring planning over long time horizons. Moreover, a centralized controller quickly becomes intractable in real-world tasks with many agents and high-dimensional observation spaces (e.g., navigation in House3D BID22). To address these weaknesses, we adopt the framework of centralized learning but decentralized execution (following BID4; BID13), a reasonable trade-off between allowing agents to globally coordinate and retaining tractability (since the communicated messages are much lower-dimensional than the observation space). (Table 1, comparing approaches such as CommNets BID19, VAIN, and ATOC BID9 along these axes, appears here.)

Emergent Communication Protocols. Our work is also related to recent work on learning communication protocols in a completely end-to-end manner with reinforcement learning: from perceptual input (e.g., pixels) to communication symbols (discrete or continuous) to actions (e.g., navigating in an environment). While BID4 BID10 BID3 BID14 BID12 constrain agents to communicate with discrete symbols, with the explicit goal of studying the emergence of language, our work operates in the paradigm of learning a continuous communication protocol in order to solve a downstream task BID19. While BID9 also operate in a decentralized execution setting and use an attentional communication mechanism, their setup is significantly different from ours, as they use attention to decide when to communicate, not who to communicate with ('who' depends on a hand-tuned neighborhood parameter in their work). Table 1 summarizes the main axes of comparison between our work and previous efforts in this exciting space.

Decentralized Partially Observable Markov Decision Processes (Dec-POMDPs). A Dec-POMDP is a cooperative multi-agent extension of a partially observable Markov decision process BID15.
For N agents, it is defined by a set of states S describing possible configurations of all agents, a global reward function R, a transition probability function T, and, for each agent i ∈ {1, …, N}, a set of allowed actions A_i, a set of possible observations Ω_i, and an observation function O_i. Operationally, at each time step every agent picks an action a_i based on its local observation ω_i, following its own stochastic policy π_{θ_i}(a_i | ω_i). The system randomly transitions to the next state s′ given the current state and joint action, according to T(s′ | s, a_1, …, a_N). The agent team receives a global reward r = R(s, a_1, …, a_N), while each agent receives a local observation of the new state, O_i(ω_i | s′). Agents aim to maximize the total expected return J = Σ_{t=0}^{T} γ^t r_t, where γ is a discount factor and T is the episode time horizon.

Actor-Critic Algorithms. Policy gradient methods directly adjust the parameters θ of the policy in order to maximize the objective J(θ) = E_{s∼p^π, a∼π_θ(s)}[R(s, a)] by taking steps in the direction of ∇J(θ). We can write the gradient with respect to the policy parameters as

∇_θ J(θ) = E_{s∼p^π, a∼π_θ(s)}[∇_θ log π_θ(a | s) Q^π(s, a)],

where Q^π(s, a) is called the action-value; it is the expected remaining discounted reward if we take action a in state s and follow policy π thereafter. Actor-Critic algorithms learn an approximation Q̂(s, a) of the unknown true action-value function by, e.g., temporal-difference learning BID20. This Q̂(s, a) is called the Critic, while the policy π_θ is called the Actor.

Multi-Agent Actor-Critic. BID13 propose a multi-agent Actor-Critic algorithm adapted to centralized learning and decentralized execution. Each agent learns its own individual policy π_{θ_i}(a_i | ω_i) conditioned on its local observation ω_i, using a centralized Critic which estimates the joint action-value Q̂(s, a_1, …, a_N).

We now describe our multi-agent communication architecture in detail. Recall that we have N agents with policies {π_1, …, π_N}, respectively parameterized by {θ_1, …, θ_N}, jointly performing a cooperative task. At every timestep t, the i-th agent, for all i ∈ {1, …, N}, sees a local observation ω^t_i, and must select a discrete environment action a^t_i ∼ π_{θ_i} and a continuous communication message m^t_i, received by other agents at the next timestep, in order to maximize the global reward r^t = R. Since no agent has access to the underlying state of the environment s^t, there is an incentive to communicate and be mutually helpful in order to do better as a team.

Policies and Decentralized Execution. Each agent is essentially modeled as a Dec-POMDP augmented with communication. Each agent's policy π_{θ_i} is implemented as a 1-layer Gated Recurrent Unit BID2. At every timestep, the local observation ω^t_i and a vector c^t_i aggregating messages sent by all agents at the previous timestep (described in more detail below) are used to update the hidden state h^t_i of the GRU, which encodes the entire message-action-observation history up to time t. From this internal state representation, the agent's policy π_{θ_i}(a^t_i | h^t_i) predicts a categorical distribution over the space of actions, and another output head produces an outgoing message vector m^t_i. Note that for all our experiments, agents are symmetric and policies are instantiated from the same set of shared parameters, i.e., θ_1 = … = θ_N. This considerably speeds up learning.
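A minimal sketch of the agent policy just described: a GRU over the concatenated observation and aggregated message, with one head for the action distribution and one for the outgoing message. All parameter names, sizes, and the particular GRU convention below are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(6)

def gru_step(x, h, P):
    """One GRU cell update, h' = GRU(x, h) (one standard convention)."""
    sig = lambda z: 1 / (1 + np.exp(-z))
    z = sig(P['Wz'] @ x + P['Uz'] @ h)          # update gate
    r = sig(P['Wr'] @ x + P['Ur'] @ h)          # reset gate
    n = np.tanh(P['Wn'] @ x + P['Un'] @ (r * h))
    return (1 - z) * h + z * n

def agent_step(obs, msg_in, h, P):
    """Update the recurrent state from [observation, aggregated message],
    then emit an action distribution and a continuous outgoing message."""
    h = gru_step(np.concatenate([obs, msg_in]), h, P)
    logits = P['Wact'] @ h
    pi = np.exp(logits - logits.max()); pi /= pi.sum()
    return pi, P['Wmsg'] @ h, h

d_obs, d_msg, d_h, n_act = 8, 4, 16, 5
shapes = {'Wz': (d_h, d_obs + d_msg), 'Uz': (d_h, d_h),
          'Wr': (d_h, d_obs + d_msg), 'Ur': (d_h, d_h),
          'Wn': (d_h, d_obs + d_msg), 'Un': (d_h, d_h),
          'Wact': (n_act, d_h), 'Wmsg': (d_msg, d_h)}
P = {k: rng.normal(0, 0.1, s) for k, s in shapes.items()}
pi, msg, h = agent_step(rng.normal(size=d_obs), np.zeros(d_msg), np.zeros(d_h), P)
print(pi.round(3), msg.shape)
```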
Centralized Critic. Following prior work BID13 BID5, we operate under the centralized-learning, decentralized-execution paradigm, wherein during training a centralized critic guides the optimization of the individual agent policies. The centralized Critic takes as input the predicted actions {a^t_1, …, a^t_N} and internal state representations {h^t_1, …, h^t_N} from all agents to estimate the joint action-value Q̂^t at every timestep. The centralized Critic is learned by temporal-difference learning BID20, and the gradient of the expected return J(θ_i) = E[R] with respect to the policy parameters is approximated by

∇_{θ_i} J(θ_i) ≈ E[∇_{θ_i} log π_{θ_i}(a^t_i | h^t_i) Q̂(h^t_1, …, h^t_N, a^t_1, …, a^t_N)].

Note that, compared to an individual critic Q̂_i(h^t_i, a^t_i) for each agent, having a centralized critic leads to considerably lower variance in the policy gradient estimates, since it takes into account actions from all agents. At test time, the critic is no longer needed and policy execution is fully decentralized.

Targeted Communication. Each outgoing message carries a signature k^t_i, used for targeting, and a value v^t_i, carrying the content. At the receiving end, each agent (indexed by j) predicts a query vector q^t_j from its hidden state and uses it to compute a dot product with the signatures of all N messages. This is scaled by 1/√d_k, followed by a softmax to obtain an attention weight α_ji for each message value vector:

α_ji = softmax_i(q^t_j · k^t_i / √d_k),
c^{t+1}_j = Σ_i α_ji v^t_i.

Note that the softmax also includes α_ii, corresponding to the ability to self-attend BID21, which we empirically found to improve performance, especially in situations where an agent has found the goal in a coordinated navigation task and all it needs to do is stay at the goal: others benefit from attending to this agent's message, while return communication is not needed.

Multi-Stage Communication. For multiple rounds of communication, the aggregated message vector and the internal state h^t_j are first used to predict an intermediate internal state h′^t_j, taking into account the first round of communication. Next, h′^t_j is used to predict a fresh signature, query, and value, and the attention steps above are repeated for multiple rounds, until we get a final aggregated message vector c^{t+1}_j to be used as input at the next timestep.
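The targeted aggregation step, written out as a minimal NumPy sketch of the signature-query-value attention described above. The shapes follow the 16-d signature/query and 32-d value sizes from the experimental setup; everything else is illustrative.

```python
import numpy as np

def targeted_aggregate(keys, values, queries):
    """Agent j attends to message i with weight
    alpha_ji = softmax_i(q_j . k_i / sqrt(d_k)); the aggregated message is
    c_j = sum_i alpha_ji v_i.  Self-attention (i = j) is included."""
    d_k = keys.shape[1]
    scores = queries @ keys.T / np.sqrt(d_k)        # (N, N); row j = receiver j
    scores -= scores.max(axis=1, keepdims=True)     # numerical stability
    alpha = np.exp(scores)
    alpha /= alpha.sum(axis=1, keepdims=True)
    return alpha @ values, alpha                    # c: (N, d_v)

rng = np.random.default_rng(7)
N, d_k, d_v = 4, 16, 32
k = rng.normal(size=(N, d_k))
v = rng.normal(size=(N, d_v))
q = rng.normal(size=(N, d_k))
c, alpha = targeted_aggregate(k, v, q)
print(alpha.round(2))   # row j shows whom agent j listens to (rows sum to 1)
```

For multi-stage communication, this aggregation would simply be repeated: c_j first updates an intermediate internal state, from which fresh signatures, queries, and values are predicted for the next round.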
We evaluate our targeted multi-agent communication architecture on a variety of tasks and environments. All our models were trained with a batched synchronous version of the multi-agent Actor-Critic described above, using RMSProp with a learning rate of 7×10⁻⁴ and α = 0.99, batch size 16, discount factor γ = 0.99, and an entropy regularization coefficient of 0.01 for the agent policies. All agent policies are instantiated from the same set of shared parameters, i.e., θ_1 = … = θ_N. Each agent's GRU hidden state is 128-d, the message signature/query is 16-d, and the message value is 32-d (unless specified otherwise). All results are averaged over 5 independent runs with different seeds.

SHAPES. The SHAPES dataset was introduced by BID0 (github.com/jacobandreas/nmn2/tree/shapes) and was originally created for testing compositional visual reasoning for the task of visual question answering. It consists of synthetic images of 2D colored shapes arranged in a grid (3×3 cells in the original dataset), along with corresponding question-answer pairs. There are 3 shapes (circle, square, triangle), 3 colors (red, green, blue), and 2 sizes (small, big) in total (see FIG1).

We convert each image from SHAPES into an active environment where agents can now be spawned at different regions of the image, observe a 5×5 local patch around them along with their own coordinates, and take actions to move around: {up, down, left, right, stay}. Each agent is tasked with navigating to a specified goal state in the environment, e.g., {'red', 'blue square', 'small green circle'}, and the reward for each agent at every timestep is based on team performance, i.e., r_t = (# agents on goal)/(# agents).

(FIG1 caption: (a) 4 agents have to find {red, red, green, blue}, respectively. t = 1: initial spawn locations. t = 2: agent 4 was on red at t = 1, so agents 1 and 2 attend to messages from 4, since they have to find red; agent 3 has found its goal (green) and is self-attending. t = 6: agent 4 attends to messages from 2, as 2 is on 4's target, blue. t = 8: agent 1 finds red, so agents 1 and 2 shift attention to 1. t = 21: all agents are at their respective goal locations and primarily self-attending. (b) 8 agents have to find red in a large 100×100 environment. t = 7: agent 2 finds red and signals all other agents. t = 7 to t = 150: all agents make their way to 2's location and eventually converge around red.)

Having a symmetric, team-based reward incentivizes agents to cooperate in finding each agent's goal. For example, as shown in FIG1, if agent 2's goal is to find red and agent 4's goal is to find blue, it is in agent 4's interest to let agent 2 know if it passes by red (t = 2) during its quest for blue, and vice versa (t = 6). SHAPES serves as a flexible testbed for carefully controlling and analyzing the effect of changing the size of the environment, the number of agents, goal configurations, etc.

How does targeting work in the communication learned by TarMAC? Recall that each agent predicts a signature and value vector as part of the message it sends, and a query vector to attend to incoming messages. The communication is targeted because the attention probabilities are a function of both the sender's signature and the receiver's query vectors. So it is not just the receiver deciding how much of each message to listen to; the sender also sends out signatures that affect how much of each message is delivered to each receiver. The sender's signature could encode parts of its observation most relevant to other agents' goals (for example, it would be futile to convey coordinates in the signature), and the message value could contain the agent's own location. For example, in FIG1, at t = 6 we see that when agent 2 passes by blue, agent 4 starts attending to agent 2. Here, agent 2's signature encodes the color it observes (blue), and agent 4's query encodes its goal (also blue), leading to a high attention probability. Agent 2's message value encodes the coordinates agent 4 has to navigate to, as can be seen at t = 21, when agent 4 reaches them.

Traffic Junction. Environment and Task. The simulated traffic junction environments from BID19 consist of cars moving along pre-assigned, potentially intersecting routes on one or more road junctions. The total number of cars is fixed at N_max, and at every timestep new cars are added to the environment with probability p_arrive. Once a car completes its route, it becomes available to be sampled and added back to the environment with a different route assignment. Each car has limited visibility of a 3×3 region around it, but is free to communicate with all other cars. The action space for each car at every timestep is {gas, brake}, and the reward consists of a linear time penalty −0.01τ, where τ is the number of timesteps since the car became active, and a collision penalty r_collision = −10.
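The two reward signals described above, written as literal functions (a trivial sketch, matching the formulas in the text):

```python
def shapes_team_reward(agents_on_goal, n_agents):
    """SHAPES team reward: fraction of agents currently on their goal."""
    return agents_on_goal / n_agents

def traffic_junction_reward(active_steps, collided):
    """Per-car reward: linear time penalty -0.01 * tau plus a collision
    penalty of -10 when the car is currently in a collision."""
    return -0.01 * active_steps + (-10.0 if collided else 0.0)

print(shapes_team_reward(3, 4))            # 0.75
print(traffic_junction_reward(12, False))  # -0.12
print(traffic_junction_reward(12, True))   # -10.12
```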
The easy task has one junction of two one-way roads on a 7×7 grid with N_max = 5 and p_arrive = 0.30, while the hard task has four connected junctions of two-way roads on an 18×18 grid with N_max = 20 and p_arrive = 0.05. See FIG5, 4b for an example of the four two-way junctions in the hard task. Model Interpretation. Interpreting the learned policies, FIG5 shows braking probabilities at different locations: cars tend to brake close to or right before entering traffic junctions, which is reasonable since junctions have the highest chances of collisions. Turning our attention to the attention probabilities (FIG5), we can see that cars are most attended to when in the 'internal grid', i.e. right after crossing the 1st junction and before hitting the 2nd junction. These attention probabilities are intuitive: cars learn to attend to specific sensitive locations with the most relevant local observations to avoid collisions. Finally, FIG5 compares the total number of cars in the environment vs. the number of cars being attended to with probability > 0.1 at any time. Interestingly, these are (loosely) positively correlated, with a Spearman's rank correlation of ρ = 0.49, which shows that TarMAC is able to adapt to a variable number of agents. Crucially, agents learn this dynamic targeting behavior purely from task rewards, with no handcoding! Note that the right shift between the two curves is expected, as it takes a few timesteps of communication for team size changes to propagate. At a relative time shift of 3, the Spearman's rank correlation between the two curves goes up to 0.53 (a minimal version of this calculation is sketched below). Message size vs. multi-stage communication. We study the performance of TarMAC with varying message value size and number of rounds of communication on the 'hard' variant of the traffic junction task. As can be seen in FIG3, multiple rounds of communication lead to significantly higher performance than simply increasing the message size, demonstrating the advantage of multi-stage communication. In fact, decreasing the message size to a single scalar performs almost as well as 64-d, perhaps because even a single real number can be sufficiently partitioned to cover the space of meanings/messages that need to be conveyed for this task. Finally, we benchmark TarMAC on a cooperative point-goal navigation task in House3D BID22. House3D provides a rich and diverse set of publicly available 3D indoor environments, wherein agents do not have access to the top-down map and must navigate purely from first-person vision. Similar to SHAPES, the agents are tasked with finding a specified goal (such as 'fireplace'), are spawned at random locations in the environment, and are allowed to communicate with each other and move around. Each agent gets a shaped reward based on progress towards the specified target. An episode is successful if all agents end within 0.5m of the target object within 50 navigation steps. TAB10 shows success rates on a find[fireplace] task in House3D. A no-communication navigation policy trained with the same reward structure gets a success rate of 62.1%. Mean-pooled communication (no attention) performs slightly better with a success rate of 64.3%, and TarMAC achieves the best success rate at 68.9%. FIG6 visualizes predicted navigation trajectories of 4 agents. Note that the communication vectors are significantly more compact (32-d) than the high-dimensional observation space, making our approach particularly attractive for scaling to large teams.
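As a small illustration of the adaptivity analysis above, the following hedged sketch computes the Spearman rank correlation between the team-size curve and the attended-cars curve under a relative time shift. The array names and the SciPy-based approach are our assumptions, not the authors' analysis code:

```python
import numpy as np
from scipy.stats import spearmanr

def shifted_spearman(total_cars, attended_cars, shift=0):
    """Rank correlation between the number of active cars and the number
    of cars attended to with probability > 0.1, after shifting the
    attention curve back by `shift` timesteps to account for the
    communication delay of team-size changes."""
    if shift > 0:
        a, b = np.asarray(total_cars)[:-shift], np.asarray(attended_cars)[shift:]
    else:
        a, b = np.asarray(total_cars), np.asarray(attended_cars)
    rho, _ = spearmanr(a, b)
    return rho

# Hypothetical usage: rho at shift 0 vs. shift 3, mirroring the analysis above.
# rho0 = shifted_spearman(total_cars, attended_cars, shift=0)
# rho3 = shifted_spearman(total_cars, attended_cars, shift=3)
```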
We introduced TarMAC, an architecture for multi-agent reinforcement learning which allows targeted interactions between agents and multiple stages of collaborative reasoning at every timestep. Evaluations on three diverse environments show that our model is able to learn intuitive attention behavior and improves performance, with the downstream task-specific team reward as sole supervision. While the multi-agent navigation experiments in House3D show promising performance, we aim to exhaustively benchmark TarMAC on more challenging 3D navigation tasks, because we believe this is where decentralized targeted communication can have the most impact, as it allows scaling to a large number of agents with large observation spaces. Given that the 3D navigation problem is hard in and of itself, it would be particularly interesting to investigate combinations of recent advances orthogonal to our approach (e.g. spatial memory, planning networks) with the TarMAC framework.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
H1e572A5tQ
Targeted communication in multi-agent cooperative reinforcement learning
It is difficult for beginners of etching latte art to make well-balanced patterns using two fluids with different viscosities, such as foamed milk and syrup. Even when making etching latte art while watching videos that show the procedure, it is difficult to keep the pattern balanced, so well-balanced etching latte art cannot be made easily. In this paper, we propose a system which supports beginners in making well-balanced etching latte art by projecting the making procedure directly onto a cappuccino. The experiments show the progress achieved by using our system. We also discuss the similarity between the etching latte art and the design templates using image subtraction. Etching latte art is the practice of literally drawing on a coffee with a thin rod, such as a toothpick, in order to create images in the coffee. There are several methods for making etching latte art, depending on tools and toppings. A method which is often introduced as an easy one for beginners is putting syrup directly onto milk foam and etching to make patterns, as shown in Figure 1. The color combination automatically makes the drink look impressive when using syrup, so baristas are under less pressure to create a difficult design. However, it is difficult for beginners to imagine how they ought to put the syrup and etch in order to make beautiful patterns, since etching latte art involves two fluids with different viscosities. On top of this, even when they watch videos showing the making procedure of etching latte art, etching latte art made by imitation hardly looks well-balanced; it is impossible to make well-balanced etching latte art without repeated practice. In this paper, we develop a support system which helps even beginners to make well-balanced latte art by directly projecting the making procedure of etching latte art using syrup, which is in high demand, onto a cappuccino. Moreover, we project animations of the deformation of a viscous fluid such as syrup, which is difficult to imagine, in order to support beginners in understanding the deformation. We indicate the usefulness of this system through a questionnaire survey and the similarity to the design templates measured by image subtraction. There are two kinds of latte art, etching and free pouring. The typical making method of the former is putting syrup on the milk foam and making patterns by etching, whereas the latter does not use any tools and makes patterns only with the flow of milk while pouring. Hu and Chi proposed a simulation method which considered the viscosity of milk to express the flow of milk in latte art. Their research reproduced a milk flow that is very similar to the one in actual latte art. From the viewpoint of practicing latte art, however, users have to estimate the paths of pouring milk and acquire the manipulation of the milk jug from the results of the simulation. Moreover, users will not be able to understand the flow of milk unless they have advanced skills. Pikalo developed an automated latte art machine which used a modified inkjet cartridge to infuse tiny droplets of colorant into the upper layer of the beverage. With this machine, latte art with unlimited designs can easily be made, like with a printer. This machine lets everyone have original latte art without any barista skills. However, it cannot make latte art with milk foam like that in free pour latte art. Therefore, baristas still have to practice latte art to make other kinds of latte art. Kawai et al.
developed a free pour latte art support system which shows how to pour the milk to make latte art designed by the users as animated lines. This system targets baristas who have experience in making basic free pour latte art and know how much milk has to be poured; people who have never made free pour latte art have to repeat the practice several times. Flagg et al. developed a painting support system by projecting a painting procedure onto the canvas. In order to avoid the users' shadow hiding the projected painting procedure, they put two projectors behind the users. This system is quite large-scale and involves costs. Morioka et al. visually supported cooking by projecting how to chop ingredients onto the proper place on the ingredients themselves. This system is able to indicate how to chop ingredients, detailed notes, and cooking procedures which are hard to follow when cooking while reading a recipe. However, to use this system, users have to prepare a dedicated kitchen which has a hole in the ceiling. This system is quite large-scale since it projects instructions from a hole made in the ceiling. Xie et al. proposed an interactive system which lets common users build large-scale balloon art in an easy and enjoyable way by using a spatial augmented reality solution. This system provides fabrication guidance to illustrate the differences between the depth maps of the target three-dimensional shape and the current work in progress. In addition, they designed a shaking animation for each number to increase user immersion. Yoshida et al. proposed an architecture-scale, computer-assisted digital fabrication method which used a depth camera and projection mapping. This system captured the current work using a depth camera, compared the scanned geometry with the target shape, and then projected the guiding information based on an evaluation. They mentioned that AR devices such as head-mounted displays (HMDs) could serve as interfaces, but would require calibration on each device. The proposed projector-camera guidance system can be prepared with a simple calibration process, and would also allow for intuitive information-sharing among workers. As indicated in these works, projectors are able to project instructions directly onto the workspace; therefore, the difference between the instructions and the users' manipulation is reduced. We decided to support making etching latte art with a small projector as well. We show the system overview in Figure 2. The system configuration is written below (Figure 3). • A laptop computer connected to the projector, which sends the making procedure to be shown on the cappuccino. • A small projector which projects the making procedure onto the cappuccino. • A tripod for the small projector. Firstly, users select a pattern from Heart Ring, Leaf, or Spider Web, as shown in Figure 2(a). Next, the making procedure of the selected etching latte art is projected onto the cappuccino (Figure 2(b)). Then, the animation of the syrup deformation is projected onto the cappuccino (Figure 2(c)). Finally, the making procedure is projected onto the cappuccino once again. Users put syrup to trace the projected image. Then, the etching latte art is completed by etching along the projected lines (Figure 2(d)). The animations at each step can be played whenever the user wants. In our system, users select a pattern from three kinds of etching latte art (the first column in Table 1). After the selection, the making procedure described below is projected onto the cappuccino. Firstly, the proper places to put syrup, corresponding to the selected etching latte art, are projected onto the cappuccino (the second column in Table 1).
Next, the manipulation of the pick is projected onto the cappuccino (the third column in Table 1). It is hard to follow a making procedure of etching latte art in books, since they just line up several frames of a making video, as shown in Figure 1. Our system displays how to put the syrup and how to manipulate the pick separately. As far as the authors know, there is no system which shows a making procedure of etching latte art like ours. It is difficult for beginners to imagine the syrup deformation caused by manipulating the pick. Our system helps users to understand the syrup deformation while manipulating the pick by directly projecting prepared animations (Table 2). In our system, the syrup is drawn in brown and the manipulation of the pick is drawn in blue. The animations were created with Adobe After Effects, considering the two fluids with different viscosities. It takes approximately 30 minutes to create each design template and two hours to create each animation of syrup deformation. We have developed a system which has the functions mentioned in the previous sub-sections. As shown in Figure 3, a cappuccino is put in front of the user and a projector mounted on a tripod is placed on the left side of the cappuccino (if the user is left-handed, it is placed on the right side). We project the making procedure of the etching latte art and the animations of syrup deformation from the top. In order to evaluate our system, we conducted an experiment. Twelve etching latte art beginners participated in the experiment. We divided them into two groups (Group 1 and Group 2). Participants made two etching latte arts using different methods (making by oneself and making with our system). Group 1 made etching latte art by themselves first and then with our system, whereas Group 2 made etching latte art with our system first and then by themselves. In this experiment, we assigned patterns from the three etching latte art designs in order to avoid everyone making the same pattern. After making the etching latte art, participants filled out a questionnaire. We also conducted a questionnaire survey with inexperienced people in order to ask which etching latte art (made by oneself or made with our system) looks more well-balanced for each participant's etching latte art. Moreover, we created foreground images by subtraction for each design template and etching latte art in order to show which etching latte art is more similar to each design template. Making by Oneself: Participants watch a making video of the etching latte art they will make (Table 3). Then, they make the etching latte art by themselves while watching the making video. Making with Our System: Participants make etching latte art with our system. Firstly, they watch the making procedure projected onto the cappuccino. Secondly, CG animations of syrup deformation (the animation speed is almost the same as when actually making etching latte art) are projected. Finally, the making procedure of the etching latte art is projected onto the cappuccino once again and participants make the etching latte art by tracing the projected making procedure. Notes on Making Etching Latte Art: Generally, baristas use well-steamed silky foamed milk, with fine, tiny bubbles, steamed by a steamer attached to an espresso machine for business use. However, it is difficult to make such good quality foamed milk with a household milk frother. Milk foamed by a milk frother has big bubbles which break easily, so syrup put on such milk foam would spread.
Therefore, in this experiment, participants made etching latte art with yoghurt. There is no difference between yoghurt and foamed milk when manipulating the pick; thus, we consider that using yoghurt in the experiment does not affect the evaluation of this system, and we do not have to care about the difference from using our system with a cappuccino. The results of making the etching latte art are shown in Table 4. We compare and evaluate the etching latte art made by oneself (Table 4, "By oneself" line) and the etching latte art made with our system (Table 4, "Our system" line). Participants A, B, G, and H made "Heart Ring" (Table 4, A, B, G, H). Participants B and H put too much syrup when making it by themselves; therefore, the hearts are too big and their shape is not the desired one. In contrast, these participants were able to adjust the amount of syrup when making it with our system. As a result, the shape of each heart is clearer and the etching latte art looks better in quality. Participants C, D, I, and J made "Leaf" (Table 4, C, D, I, J). Participants C and J were not able to draw a line vertically, and Participant J was not able to keep the same distance between the syrup lines. Due to these problems, their etching latte art looks distorted. They were able to make well-balanced etching latte art with the same distance between the syrup lines by using our system. Participants E, F, K, and L made "Spider Web" (Table 4, E, F, K, L). Participants E, F, and K were not able to draw a spiral with consistent spacing. An equally spaced spiral was made with our system, and they were able to make better balanced etching latte art. As we mentioned, the etching latte art supported by our system is of good quality. This indicates that even beginners are able to make well-balanced etching latte art with our system. Participants compared the two etching latte arts made with the different methods. We conducted a questionnaire survey. The questionnaire consists of five questions, including the ones below. Question 1. Can you imagine how to make the etching latte art before watching the making video? Question 2. Is it easy to make the etching latte art while watching the making video? We also collected views, impressions, and suggested improvement points for our system. The results of the participants' questionnaires are shown in Table 5. The participants who answered 1pt or 2pt in Question 1 account for over 80 percent, and the average is 1.92pt, which is low. From this result, patterns of etching latte art are complex for people who see them for the first time, and it is difficult for them to imagine how to make them. The average of Question 2 is 3.00pt; however, five participants answered 2pt and only two participants answered 5pt. There are some comments about this question: "I could not understand where I should put the syrup, since it was hard to get a sense of the place and the size of the syrup from the making video." "It was difficult to manipulate the pick and I could not make the desired pattern." In contrast, all participants answered 4pt or 5pt in Question 5. We consider that it is possible to make etching latte art of a certain quality with our system. Our system was popular with participants since it projects the making procedure directly onto the cappuccino, so they do not have to watch another screen displaying making videos while making etching latte art. In Questions 3 and 4, the participants who answered 4pt or 5pt account for over 90 percent, and the average is over 4.50pt. We can say that the animations of syrup deformation in our system help users to properly understand how the syrup deforms when drawing lines with the pick.
Also, the animation speed of the syrup deformation is ideal. Comments about our system are written below: "It was easy to draw a line with the pick, since I just needed to trace a line projected on the cappuccino, so it was clear where and how much syrup I should put." "I was delighted that I could make etching latte art even though I had never tried it, since the making procedure was easy to understand." "The animation of syrup deformation indicated how the syrup deforms, so I could imagine it." From the results in Table 4 and the comments above, compared to making latte art while watching making videos on another screen, even beginners are able to make etching latte art easily by using our system, since it directly projects the making procedure onto the cappuccino; accordingly, it was popular with users. We show the improvement points raised by the participants: "I might have been able to put the proper amount of syrup if I could also confirm how to put the syrup in animations." "When I traced lines from left to right, the lines were at times hard to see because of my shadow." "I got a bit confused because a lot of syrup stayed at the center and the line projected on the syrup could not be seen." Users adjust the amount of the syrup by putting it on the pattern projected on the cappuccino; however, this pattern is a static image, so some participants put too much syrup, as noted in the comments. We consider that preparing animations which show how to put the syrup would help users to clearly understand and imagine the speed and the amount of the syrup. Also, as noted in the comments, sometimes the projected making procedure was hard to see due to the position of the user's hands or the color of the background. We will resolve this problem by using multiple projectors. We conducted a questionnaire survey about the etching latte art that had been made by the participants. Sixty inexperienced people who had not participated in the experiment were asked which etching latte art (made by oneself or made with our system) looked more similar to the design template for each participant's etching latte art. The information on whether the etching latte art had been made by oneself or with our system was hidden. The results of the inexperienced people's questionnaires are shown in Figure 4. Ten participants out of twelve got the result that their etching latte art made with our system is more similar to the design template than the one made by themselves. Regarding the etching latte art made by Participant G, they put the proper amount of syrup in the proper place even without our system; their two etching latte arts both look well-balanced. Regarding the etching latte art made by Participant I, they could not trace the making procedure properly since they put the syrup too quickly. After making etching latte art with our system, they said that it might have been easier to make well-balanced etching latte art if our system had also shown how to put the syrup in animations. We will improve the system to resolve this issue by creating new animations which show the proper speed for putting the syrup. We created foreground images by subtraction for each design template and etching latte art in order to quantitatively evaluate which etching latte art is more similar to the design template. White pixels in the foreground images indicate the difference between the design template and the etching latte art, and black pixels indicate the same parts. We normalize the number of black pixels in order to quantify the similarity between each design template and etching latte art. A larger number indicates a higher similarity.
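The subtraction-based similarity described above can be sketched as follows. This is a minimal illustration, assuming grayscale images and a simple binarization threshold; the paper does not specify the exact thresholding, so the parameter below is hypothetical:

```python
import cv2
import numpy as np

def template_similarity(template_path, result_path, threshold=30):
    """Similarity in [0, 1]: the normalized fraction of pixels where the
    made latte art matches the design template (black pixels in the
    foreground/difference image); 1.0 means exactly the same."""
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    result = cv2.imread(result_path, cv2.IMREAD_GRAYSCALE)
    result = cv2.resize(result, (template.shape[1], template.shape[0]))
    diff = cv2.absdiff(template, result)
    # White (255) marks differing pixels, black (0) marks matching ones.
    _, foreground = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
    return float(np.count_nonzero(foreground == 0)) / foreground.size
```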
The results of the subtraction are shown in Table 6. Ten participants out of twelve got the result that their etching latte art made with our system is more similar to the design template than the one made by themselves. Regarding the etching latte art made by Participant A, the place of each heart was adjusted by our system; however, they put too much syrup. As a result, the difference from the design template is big. Regarding the etching latte art made by Participant D, the syrup is off to the right; as a result, the difference from the design template is big. However, the difference in similarities between the etching latte art made by themselves and the etching latte art made with our system is only 0.001, which is a very small difference. We need to indicate the proper amount of the syrup more clearly in order to get a higher similarity from the subtraction. We have developed a system which supports etching latte art beginners in practicing and making etching latte art, and also helps them to understand the syrup deformation by directly projecting the making procedure and animations of syrup deformation onto the cappuccino. The participants' evaluations verified the usefulness of our system. The results of the inexperienced people's questionnaire and the participants' questionnaire show that more than 80 percent of participants made better-balanced etching latte art with our system. However, each evaluation says that two participants made better-balanced etching latte art by themselves, and they are all different participants. From this result, we confirm that there are some instances in which human beings consider the etching latte art to be similar to the design template even though the result of the subtraction says it is not, and vice versa. In our future work, we will improve the system considering what kind of etching latte art human beings prefer, and develop a system which creates animations of syrup deformation automatically. We will also address the improvement points obtained in the survey. Table 4: Experimental results. Group 1 makes etching latte art by themselves first, whereas Group 2 makes etching latte art with our system first. Table 5: Participants' questionnaire results. Table 6: Results of the subtraction. Similarities are represented by a number in the range of 0.000 to 1.000 (1.000 indicates exactly the same as the design template).
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
Tu1NiBXxf0
We have developed an etching latte art support system which projects the making procedure directly onto a cappuccino to help the beginners to make well-balanced etching latte art.
We focus on temporal self-supervision for GAN-based video generation tasks. While adversarial training successfully yields generative models for a variety of areas, temporal relationships in the generated data are much less explored. This is crucial for sequential generation tasks, e.g. video super-resolution and unpaired video translation. For the former, state-of-the-art methods often favor simpler norm losses such as L2 over adversarial training. However, their averaging nature easily leads to temporally smooth results with an undesirable lack of spatial detail. For unpaired video translation, existing approaches modify the generator networks to form spatio-temporal cycle consistencies. In contrast, we focus on improving the learning objectives and propose a temporally self-supervised algorithm. For both tasks, we show that temporal adversarial learning is key to achieving temporally coherent solutions without sacrificing spatial detail. We also propose a novel Ping-Pong loss to improve the long-term temporal consistency. It effectively prevents recurrent networks from accumulating artifacts temporally without suppressing detailed features. In addition, we propose a first set of metrics to quantitatively evaluate the accuracy as well as the perceptual quality of the temporal evolution. A series of user studies confirms the rankings computed with these metrics. Generative adversarial models (GANs) have been extremely successful at learning complex distributions such as natural images. However, for sequence generation, directly applying GANs without carefully engineered constraints typically results in strong artifacts over time, due to the significant difficulties introduced by temporal changes. In particular, conditional video generation tasks are very challenging learning problems where generators should not only learn to represent the data distribution of the target domain, but also learn to correlate the output distribution over time with the conditional inputs. Their central objective is to faithfully reproduce the temporal dynamics of the target domain and not resort to trivial solutions such as features that arbitrarily appear and disappear over time. In our work, we propose a novel adversarial learning method for a recurrent training approach that supervises both spatial content as well as temporal relationships. We apply our approach to two video-related tasks that offer substantially different challenges: video super-resolution (VSR) and unpaired video translation (UVT). With no ground truth motion available, the spatio-temporal adversarial loss and the recurrent structure enable our model to generate realistic results while keeping the generated structures coherent over time. With these two learning tasks we demonstrate how spatio-temporal adversarial training can be employed in paired as well as unpaired data domains. In addition to the adversarial network which supervises the short-term temporal coherence, long-term consistency is self-supervised using a novel bi-directional loss formulation, which we refer to as "Ping-Pong" (PP) loss in the following. The PP loss effectively avoids the temporal accumulation of artifacts, which can potentially benefit a variety of recurrent architectures.
The central contributions of our work are: a spatio-temporal discriminator unit together with a careful analysis of training objectives for realistic and coherent video generation tasks; a novel PP loss supervising long-term consistency; and a set of metrics for quantifying temporal coherence based on motion estimation and perceptual distance. Together, our contributions lead to models that outperform previous work in terms of temporally coherent detail, which we quantify with a wide range of metrics and user studies. Figure 1 (caption excerpt): ...learns spatial features, but collapses to essentially static outputs of Obama. It manages to transfer facial expressions back to Trump using tiny differences encoded in its Obama outputs, instead of learning a meaningful mapping. Being able to establish the correct temporal cycle-consistency between domains, ours and RecycleGAN can generate correct blinking motions. Our model outperforms the latter in terms of the coherent detail that is generated. Deep learning has made great progress for image generation tasks. While regular losses such as L2 offer good performance for image super-resolution (SR) tasks in terms of PSNR metrics, GAN researchers found adversarial training to significantly improve the perceptual quality in multi-modal problems including image SR, image translation, and others. Perceptual metrics have been proposed to reliably evaluate image similarity by considering semantic features instead of pixel-wise errors. Video generation tasks, on the other hand, require realistic results that change naturally over time. Recent works in VSR improve the spatial detail and temporal coherence by either using multiple low-resolution (LR) frames as inputs, or by recurrently using previously estimated outputs. The latter has the advantage of re-using high-frequency details over time. In general, adversarial learning is less explored for VSR, and applying it in conjunction with a recurrent structure gives rise to a special form of temporal mode collapse, as we will explain below. For video translation tasks, GANs are more commonly used, but discriminators typically only supervise the spatial content. E.g., CycleGAN does not employ temporal constraints, and its generators can fail to learn the temporal cycle-consistency. In order to learn temporal dynamics, RecycleGAN proposes to use a prediction network in addition to a generator, while a concurrent work chose to learn motion translation in addition to spatial content translation. Being orthogonal to these works, we propose a spatio-temporal adversarial training for both VSR and UVT, and we show that temporal self-supervision is crucial for improving spatio-temporal correlations without sacrificing spatial detail. While L2-based temporal losses relying on warping are used to enforce temporal smoothness in video style transfer tasks, as well as in concurrent GAN-based VSR and UVT work, they lead to an undesirable smoothing of spatial detail and temporal changes in the outputs. Likewise, the L2 temporal metric represents a sub-optimal way to quantify temporal coherence, and perceptual metrics that evaluate natural temporal changes have been unavailable up to now. We address this open issue, propose two improved temporal metrics, and demonstrate the advantages of temporal self-supervision over direct temporal losses. Previous work, e.g. tempoGAN and vid2vid (b), has proposed adversarial temporal losses to achieve time consistency.
While tempoGAN employs a second temporal discriminator with multiple aligned frames to assess the realism of temporal changes, it is not suitable for videos, as it relies on ground truth motions and employs a single-frame processing that is sub-optimal for natural images. On the other hand, vid2vid focuses on paired video translations and proposes a video discriminator based on a conditional motion input that is estimated from the paired ground-truth sequences. We focus on the more difficult unpaired translation tasks instead, and demonstrate the gains in quality of our approach in the evaluation section. For tracking and optical flow estimation, L2-based time-cycle losses were proposed to constrain motions and tracked correspondences using symmetric video inputs. By optimizing indirectly via motion compensation or tracking, this loss improves the accuracy of the results. For video generation, we propose a PP loss that also makes use of symmetric sequences. However, we directly constrain the PP loss via the generated video content, which successfully improves the long-term temporal consistency in the video results. Generative Network. Before explaining the temporal self-supervision in more detail, we outline the generative model to be supervised. Our generator networks produce image sequences in a frame-recurrent manner with the help of a recurrent generator G and a flow estimator F. We follow previous work, where G produces an output g_t in the target domain B from the conditional input frame a_t of the input domain A, and recursively uses the previously generated output g_{t−1}. F is trained to estimate the motion v_t between a_{t−1} and a_t, which is then used as a motion compensation that aligns g_{t−1} to the current frame. This procedure, also shown in Fig. 2a, can be summarized as g_t = G(a_t, W(g_{t−1}, v_t)), where v_t = F(a_{t−1}, a_t) and W is the warping operation (a minimal code sketch of this step is given below). While one generator is enough to map data from A to B for paired tasks such as VSR, unpaired generation requires a second generator to establish cycle consistency. In the UVT task, we use two recurrent generators, mapping from domain A to B and back. As shown in Fig. 2b, the generated output is recurrently mapped back to the source domain, i.e. a'_t = G_{B→A}(g_t, W(a'_{t−1}, v_t)), to enforce consistency. A ResNet architecture is used for the VSR generator G, and an encoder-decoder structure is applied to the UVT generators and F. We intentionally keep the generators simple and in line with previous work, in order to demonstrate the advantages of the temporal self-supervision that we will explain in the following paragraphs. Spatio-Temporal Adversarial Self-Supervision. The central building block of our approach is a novel spatio-temporal discriminator D_{s,t} that receives triplets of frames. This contrasts with the typically used spatial discriminators, which supervise only a single image. By concatenating multiple adjacent frames along the channel dimension, the frame triplets form an important building block for learning, because they can provide the networks with gradient information regarding the realism of spatial structures as well as short-term temporal information, such as first- and second-order time derivatives. We propose a D_{s,t} architecture, illustrated in Fig. 3 and Fig. 4, that primarily receives two types of triplets: three adjacent frames and the corresponding warped ones. We warp later frames backward and previous ones forward. While the original frames contain the full spatio-temporal information, the warped frames more easily yield temporal information due to their aligned content.
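As announced above, here is a minimal sketch of the frame-recurrent generation step g_t = G(a_t, W(g_{t−1}, v_t)). The networks G and F and the warping operator W are assumed to be given callables; their exact interfaces are our assumptions, not the reference implementation:

```python
def frame_recurrent_step(G, F, W, a_prev, a_cur, g_prev):
    """One step of the frame-recurrent generator:
    v_t = F(a_{t-1}, a_t);  g_t = G(a_t, W(g_{t-1}, v_t)).

    G: recurrent generator network, F: flow estimator network,
    W: (e.g., bilinear) warping operator; a_* are conditional input
    frames and g_prev is the previously generated output frame.
    """
    v = F(a_prev, a_cur)    # estimated motion between the input frames
    g_warp = W(g_prev, v)   # align the previous output to the current frame
    g_cur = G(a_cur, g_warp)
    return g_cur
```

Iterating this function over a sequence yields the generated frames g_1, ..., g_n; the UVT case simply applies a second, analogous step with the backward generator to close the cycle.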
For the input variants we use the following notation: the original triplets are I_g = {g_{t−1}, g_t, g_{t+1}} and I_b = {b_{t−1}, b_t, b_{t+1}}, and I_wg, I_wb denote the corresponding warped triplets. For VSR tasks, D_{s,t} should guide the generator to learn the correlation between the LR inputs and the high-resolution (HR) targets. Therefore, three LR frames I_a = {a_{t−1}, a_t, a_{t+1}} from the input domain are used as a conditional input. The input of D_{s,t} can be summarized as I_b^{s,t} = {I_b, I_wb, I_a}, labelled as real, and the generated input I_g^{s,t} = {I_g, I_wg, I_a}, labelled as fake. In this way, the conditional D_{s,t} will penalize G if I_g contains fewer spatial details or unrealistic artifacts with respect to I_a and I_b. At the same time, the temporal relationships between the generated images I_wg and those of the ground truth I_wb should match. With our setup, the discriminator profits from the warped frames to classify realistic and unnatural temporal changes, and for situations where the motion estimation is less accurate, the discriminator can fall back to the original, i.e. not warped, images. For UVT tasks, we demonstrate that the temporal cycle-consistency between different domains can be established using the supervision of unconditional spatio-temporal discriminators. This is in contrast to previous work, which focuses on the generative networks to form spatio-temporal cycle links. Our approach actually yields improved results, as we will show below, and Fig. 1 shows a preview of the quality that can be achieved using spatio-temporal discriminators. In practice, we found it crucial to ensure that generators first learn reasonable spatial features, and only then improve their temporal correlation. Therefore, different from the D_{s,t} of VSR, which always receives three concatenated triplets as input, the unconditional D_{s,t} of UVT only takes one triplet at a time. Focusing on the generated data, the input for a single batch can either be a static triplet I_sg = {g_t, g_t, g_t}, the warped triplet I_wg, or the original triplet I_g. The same holds for the reference data of the target domain, as shown in Fig. 4. With sufficient but complex information contained in these triplets, transition techniques are applied so that the network can consider the spatio-temporal information step by step: we initially start with 100% static triplets I_sg as the input. Then, over the course of training, 25% of them transition to I_wg triplets with simpler temporal information, and another 25% transition to I_g afterwards, leading to a (50%, 25%, 25%) distribution of triplets. Details of the transition calculations are given in Appendix D. Here, the warping is again performed via F. While non-adversarial training typically employs loss formulations with static goals, GAN training yields dynamic goals, as the discriminative networks discover the learning objectives over the course of the training run. Therefore, their inputs have a strong influence on the training process and the final results. Modifying the inputs in a controlled manner can lead to substantial improvements, as will be shown in Sec. 4. Although the proposed concatenation of several frames seems like a simple change that has been used in a variety of projects, it is an important operation that allows discriminators to understand spatio-temporal data distributions. As will be shown below, it can effectively reduce the temporal problems encountered by spatial GANs.
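The following sketch assembles the conditional D_{s,t} inputs for the VSR case, i.e. the real tuple {I_b, I_wb, I_a} and the fake tuple {I_g, I_wg, I_a}, by concatenating frames along the channel dimension. It is an illustration under our own interface assumptions (`warp` and `flow` are hypothetical callables, and the flow direction convention is ours), not the reference implementation:

```python
import torch

def vsr_dst_inputs(b, g, a, warp, flow):
    """Assemble the conditional D_{s,t} inputs at time t.

    b, g, a: 3-tuples of frames (t-1, t, t+1) for ground truth,
    generated output and LR input, each of shape (B, C, H, W).
    The warped triplets align frame t-1 forward and t+1 backward
    to the center frame t, using motions estimated on the inputs.
    """
    def triplet(x):
        # Concatenate the three frames along the channel dimension.
        return torch.cat([x[0], x[1], x[2]], dim=1)

    def warped_triplet(x, ref):
        fwd = warp(x[0], flow(ref[0], ref[1]))   # warp t-1 -> t
        bwd = warp(x[2], flow(ref[2], ref[1]))   # warp t+1 -> t
        return torch.cat([fwd, x[1], bwd], dim=1)

    real = torch.cat([triplet(b), warped_triplet(b, a), triplet(a)], dim=1)
    fake = torch.cat([triplet(g), warped_triplet(g, a), triplet(a)], dim=1)
    return real, fake
```

The UVT variant differs in that D_{s,t} is unconditional and receives only one triplet per batch, chosen from the static, warped, or original type according to the (50%, 25%, 25%) curriculum described above.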
While L2-based temporal losses are widely used in the field of video generation, the spatio-temporal adversarial loss is crucial for preventing the inference of blurred structures in multi-modal data-sets. Compared to GANs using multiple discriminators, the single D_{s,t} network can learn to balance the spatial and temporal aspects from the reference data, and avoid inconsistent sharpness as well as overly smooth results. Additionally, by extracting shared spatio-temporal features, it allows for smaller network sizes. Self-Supervision for Long-term Temporal Consistency. When relying on a previous output as input, i.e., for frame-recurrent architectures, generated structures easily accumulate frame by frame. In an adversarial training, generators learn to heavily rely on previously generated frames and can easily converge towards strongly reinforcing spatial features over longer periods of time. For videos, this especially occurs along directions of motion, and these solutions can be seen as a special form of temporal mode collapse. We have noticed this issue in a variety of recurrent architectures; examples are shown in Fig. 5a and the Dst result in Fig. 1. While this issue could be alleviated by training with longer sequences, we generally want generators to be able to work with sequences of arbitrary length at inference time. To address this inherent problem of recurrent generators, we propose a new bi-directional "Ping-Pong" loss. [Figure 5 caption excerpt: ...b) Result trained with PP loss; these artifacts are removed successfully for the latter. c) The ground-truth image. With our PP loss (shown on the right), the L2 distance between g_t and g'_t is minimized to remove drifting artifacts and improve temporal coherence.] For natural videos, a sequence in forward order as well as its reversed counterpart offer valid information. Thus, from any input of length n, we can construct a symmetric PP sequence of the form a_1, ..., a_{n−1}, a_n, a_{n−1}, ..., a_1, as shown in Fig. 5. When inferring this in a frame-recurrent manner, the generated results should not strengthen any invalid features from frame to frame. Rather, the results should stay close to valid information and be symmetric, i.e., the forward result g_t = G(a_t, g_{t−1}) and the one generated from the reversed part, g'_t = G(a_t, g'_{t+1}), should be identical. Based on this observation, we train our networks with extended PP sequences and constrain the generated outputs from both "legs" to be the same using the loss:

L_PP = Σ_{t=1}^{n−1} || g_t − g'_t ||_2

Note that, in contrast to the generator loss, the L2 norm is a correct choice here: we are not faced with multi-modal data where an L2 norm would lead to undesirable averaging, but rather aim to constrain the recurrent generator to its own, unique version over time. The PP terms provide constraints for short-term consistency via ||g_{n−1} − g'_{n−1}||_2, while terms such as ||g_1 − g'_1||_2 prevent long-term drifts of the results. As shown in Fig. 5b, this PP loss successfully removes drifting artifacts while appropriate high-frequency details are preserved. In addition, it effectively extends the training data set, and as such represents a useful form of data augmentation. A comparison is shown in Appendix E to disentangle the effects of the augmentation of PP sequences and the temporal constraints. The results show that the temporal constraint is the key to reliably suppressing the temporal accumulation of artifacts, achieving consistency, and allowing models to infer much longer sequences than seen during training.
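A minimal sketch of L_PP for a frame-recurrent generator is given below. For brevity, the motion-compensated warping of the previous output is folded into the call G(a_t, g_prev); all function signatures are our assumptions, and g0 is an initial (e.g., zero) previous frame:

```python
def ping_pong_loss(G, a, g0):
    """L_PP over a symmetric sequence a_1 .. a_n .. a_1 (a has length n).

    The forward leg produces g_t, the backward leg g'_t; the loss sums
    ||g_t - g'_t||^2 over the overlapping frames t = 1 .. n-1,
    constraining the recurrent generator to stay consistent with itself.
    """
    fwd, g = [], g0
    for a_t in a:                        # forward leg: a_1 ... a_n
        g = G(a_t, g)
        fwd.append(g)
    bwd = [None] * len(a)                # backward leg: a_n ... a_1
    bwd[-1] = fwd[-1]                    # g_n is shared between both legs
    g = fwd[-1]
    for t in reversed(range(len(a) - 1)):
        g = G(a[t], g)
        bwd[t] = g
    # Sum the per-frame L2 differences over t = 1 .. n-1.
    return sum(((f - b) ** 2).mean() for f, b in zip(fwd[:-1], bwd[:-1]))
```

Note how the backward leg re-uses the forward leg's final frame g_n, so only the n−1 overlapping frames contribute loss terms, matching the formulation above.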
Perceptual Loss Terms. As perceptual metrics, both pre-trained NNs and in-training discriminators have been used successfully in previous work. Here, we use feature maps from a pre-trained VGG-19 network, as well as D_{s,t} itself. In the VSR task, we can encourage the generator to produce features similar to the ground truth ones by increasing the cosine similarity between their feature maps. In UVT tasks without paired ground truth data, we still want the generators to match the distribution of features in the target domain. Similar to a style loss in traditional style transfer, we here compute the D_{s,t} feature correlations measured by the Gram matrix instead. The feature maps of D_{s,t} contain both spatial and temporal information, and hence are especially well suited for the perceptual loss. We now explain how to integrate the spatio-temporal discriminator into the paired and unpaired tasks. We use a standard discriminator loss for the D_{s,t} of VSR and a least-squares discriminator loss for the D_{s,t} of UVT. Correspondingly, a non-saturated L_adv is used for the G and F of VSR, and a least-squares one is used for the UVT generators. As summarized in Table 1, G and F are trained with the mean squared loss L_content, adversarial losses L_adv, perceptual losses L_φ, the PP loss L_PP, and a warping loss L_warp, where again g, b, and Φ stand for generated samples, ground truth images, and feature maps of VGG-19 or D_{s,t}. We only show the losses for the mapping from A to B for UVT tasks, as the backward mapping simply mirrors the terms. We refer to our full model for both tasks as TecoGAN below. Training parameters and details are given in Appendix G. In the following, we illustrate the effects of temporal supervision using two ablation studies. In the first one, models trained with ablated loss functions show how L_adv and L_PP change the overall learning objectives. Next, full UVT models are trained with different D_{s,t} inputs. This highlights how the corresponding discriminators converge to different spatio-temporal equilibriums, and the general importance of providing suitable data distributions from the target domain. While we provide qualitative and quantitative evaluations in the following, we also refer the reader to our supplemental html document, with video clips that more clearly highlight the temporal differences. Loss Ablation Study. Below we compare variants of our full TecoGAN model to EnhanceNet (ENet), FRVSR, and DUF for VSR, and to CycleGAN and RecycleGAN for UVT. Specifically, ENet and CycleGAN represent state-of-the-art single-image adversarial models without temporal information, FRVSR and DUF are state-of-the-art VSR methods without adversarial losses, and RecycleGAN is a spatial adversarial model with a prediction network learning the temporal evolution. For VSR, we first train a DsOnly model that uses a frame-recurrent G and F with a VGG-19 loss and only the regular spatial discriminator. Compared to ENet, which exhibits strong incoherence due to the lack of temporal information, DsOnly improves temporal coherence thanks to the frame-recurrent connection, but there are noticeable high-frequency changes between frames. The temporal profiles of DsOnly in Fig. 6 and 8 correspondingly contain sharp and broken lines. When adding a temporal discriminator in addition to the spatial one (DsDt), this version generates more coherent results, and its temporal profiles are sharp and coherent. However, DsDt often produces the drifting artifacts discussed in Sec. 3,
as the generator learns to reinforce existing details from previous frames to fool D_s with sharpness and to satisfy D_t with good temporal coherence in the form of persistent detail. While this strategy works for generating short sequences during training, the strengthening effect can lead to very undesirable artifacts for long-sequence inference. By adding the self-supervision for long-term temporal consistency, L_PP, we arrive at the DsDtPP model, which effectively suppresses these drifting artifacts with an improved temporal coherence. In Fig. 6 and Fig. 8, DsDtPP results in continuous yet detailed temporal profiles without streaks from temporal drifting. Although DsDtPP generates good results, it is difficult in practice to balance the generator and the two discriminators. The results shown here were achieved only after numerous runs, manually tuning the weights of the different loss terms. By using the proposed D_{s,t} discriminator instead, we get a first complete model for our method, denoted as TecoGAN. This network is trained with a discriminator that achieves an excellent quality with an effectively halved network size, as illustrated on the right of Fig. 7. The single discriminator correspondingly leads to a significant reduction in resource usage: using two discriminators requires ca. 70% more GPU memory and reduces training performance by ca. 20%. The TecoGAN model yields a perceptual and temporal quality similar to DsDtPP with a significantly faster and more stable training. Since the TecoGAN model requires fewer training resources, we also trained a larger generator with 50% more weights. In the following, we will focus on this larger single-discriminator architecture with PP loss as our full TecoGAN model for VSR. Compared to the smaller TecoGAN model, it can generate more details, and the training process is more stable, indicating that the larger generator and D_{s,t} are more evenly balanced. Result images and temporal profiles are shown in Fig. 6 and Fig. 8. Video results are shown in Sec. 4 of the supplemental material. We also carry out a similar ablation study for the UVT task. Again, we start from a single-image GAN-based model, a CycleGAN variant which already has two pairs of spatial generators and discriminators. Then, we train the DsOnly variant by adding flow estimation via F and extending the spatial generators to frame-recurrent ones. By augmenting the two discriminators to use the triplet inputs proposed in Sec. 3, we arrive at the Dst model with spatio-temporal discriminators, which does not yet use the PP loss. Although UVT tasks are substantially different from VSR tasks, the comparisons in Fig. 1 and Sec. 4.6 of our supplemental material yield similar results. In these tests, we use renderings of 3D fluid simulations of rising smoke as our unpaired training data. These simulations are generated with randomized numerical simulations using a resolution of 64^3 for domain A and 256^3 for domain B, and both are visualized with images of size 256^2. Therefore, video translation from domain A to B is a tough task, as the latter contains significantly more turbulent and small-scale motions. With no temporal information available, the CycleGAN variant generates HR smoke that strongly flickers. The DsOnly model offers better temporal coherence by relying on its frame-recurrent input, but it learns a solution that largely ignores the current input and fails to keep reasonable spatio-temporal cycle-consistency links between the two domains.
In contrast, our D_{s,t} enables the Dst model to learn the correlation between the spatial and temporal aspects, thus improving the cycle-consistency. However, without L_PP, the Dst model (like the DsDt model of VSR) reinforces detail over time in an undesirable way. This manifests itself as inappropriate smoke density in empty regions. Using our full TecoGAN model, which includes L_PP, yields the best results, with detailed smoke structures and very good spatio-temporal cycle-consistency. For comparison, a DsDtPP model involving a larger number of separate networks, i.e. four discriminators, two frame-recurrent generators, and the flow estimator F, was trained. By weighting the temporal adversarial losses from Dt with 0.3 and the spatial ones from Ds with 0.5, we arrived at a balanced training run. Although this model performs similarly to the TecoGAN model on the smoke dataset, the proposed spatio-temporal D_{s,t} architecture represents the preferable choice in practice, as it learns a natural balance of temporal and spatial components by itself, and requires fewer resources. Continuing along this direction, it will be interesting future work to evaluate variants such as a shared D_{s,t} for both domains, i.e. a multi-class classifier network. Besides the smoke dataset, an ablation study for the Obama and Trump dataset from Fig. 1 shows a very similar behavior, as can be seen in the supplemental material. Spatio-temporal Adversarial Equilibriums. Our evaluation so far highlights that temporal adversarial learning is crucial for achieving spatial detail that is coherent over time for VSR, and for enabling the generators to learn the spatio-temporal correlation between domains in UVT. Next, we will shed light on the complex spatio-temporal adversarial learning objectives by varying the information provided to the discriminator network. The following tests use D_{s,t} networks that are identical apart from their inputs, and we focus on the smoke dataset. In order to learn the spatial and temporal features of the target domain as well as their correlation, the simplest input for D_{s,t} consists of only the original, unwarped triplets, i.e. {I_g or I_b}. Using these, we train a baseline model, which yields a sub-optimal quality: it lacks sharp spatial structures, and contains coherent but dull motions. Despite containing the full information, these input triplets prevent D_{s,t} from providing the desired supervision. For paired video translation tasks, the vid2vid network achieves improved temporal coherence by using a video discriminator to supervise the output sequence conditioned with the ground-truth motion. With no ground-truth data available, we train a vid2vid variant by using the estimated motions and the original triplets, i.e. {I_g + F(g_{t−1}, g_t) + F(g_{t+1}, g_t) or I_b + F(b_{t−1}, b_t) + F(b_{t+1}, b_t)}, as the input for D_{s,t}. However, the results do not improve significantly. The motions are only partially reliable, and hence do not help for the difficult unpaired translation task. Therefore, the discriminator still fails to fully correlate spatial and temporal features. We then train a third model, concat, using the original triplets and the warped ones, i.e. {I_g + I_wg or I_b + I_wb}. In this case, the model learns to generate more spatial details with a more vivid motion, i.e., the improved temporal information from the warped triplets gives the discriminator important cues. However, the motion still does not fully resemble the target domain.
We arrive at our final TecoGAN model for UVT by controlling the composition of the input data: as outlined above, we first provide only static triplets {I_sg or I_sb}, and then apply the transitions to warped triplets {I_wg or I_wb} and to original triplets {I_g or I_b} over the course of training. In this way, the network can first learn to extract spatial features, and build on them to establish temporal features. Finally, the discriminators learn features about the correlation of spatial and temporal content by analyzing the original triplets, and provide gradients such that the generators learn to use the motion information from the input and establish a correlation between the motions in the two unpaired domains. Consequently, the discriminator, despite receiving only a single triplet at a time, can guide the generator to produce detailed structures that move coherently. Video comparisons are shown in Sec. 5 of the supplemental material. Results and Metric Evaluation. While the visual results discussed above provide a first indicator of the quality our approach achieves, quantitative evaluations are crucial for automated evaluations across larger numbers of samples. Below we focus on the VSR task, as ground-truth data is available in this case. We conduct user studies and present evaluations of the different models w.r.t. established spatial metrics. We also motivate and propose two novel temporal metrics to quantify temporal coherence. A visual summary is shown in Fig. 7. For evaluating image quality, previous work demonstrated that there is an inherent trade-off between the perceptual quality of the results and the distortion measured with vector norms or low-level structures such as PSNR and SSIM. On the other hand, metrics based on deep feature maps such as LPIPS can capture more semantic similarities. We measure PSNR and LPIPS using the Vid4 scenes. With a PSNR decrease of less than 2dB over DUF, which has twice the model size of ours, TecoGAN outperforms all methods by more than 40% on LPIPS. While traditional temporal metrics based on vector norm differences of warped frames, e.g. T-diff, can easily be deceived by very blurry results, e.g. bi-cubically interpolated ones, we propose to use a tandem of two new metrics, tOF and tLP, to measure the consistency over time. tOF measures the pixel-wise difference of motions estimated from the sequences, and tLP measures perceptual changes over time using a deep feature map:

tOF = || OF(b_{t−1}, b_t) − OF(g_{t−1}, g_t) ||_1 ,   tLP = || LP(b_{t−1}, b_t) − LP(g_{t−1}, g_t) ||_1 ,

where OF represents an optical flow estimation and LP is the perceptual LPIPS metric. In tLP, the behavior of the reference is also considered, as natural videos exhibit a certain degree of change over time. In conjunction, both pixel-wise differences and perceptual changes are crucial for quantifying realistic temporal coherence. While they could be combined into a single score, we list both measurements separately, as their relative importance can vary in different application settings. Our evaluation with these temporal metrics in Table 2 shows that all temporal adversarial models outperform the spatial adversarial ones, and the full TecoGAN model performs very well: with a large amount of spatial detail, it still achieves good temporal coherence, on par with non-adversarial methods such as DUF and FRVSR. For VSR, we have confirmed these automated evaluations with several user studies. Across all of them, we find that the majority of the participants considered the TecoGAN results to be closest to the ground truth.
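The two metrics above can be sketched as follows, assuming generic callables `of` (an optical flow estimator) and `lp` (a perceptual distance such as LPIPS); these interfaces are our assumptions, not a reference implementation:

```python
import numpy as np

def tOF(of, b, g):
    """Mean L1 difference between reference and generated motions.
    `of` is an optical flow estimator; b, g are frame sequences."""
    err = [np.abs(of(b[t - 1], b[t]) - of(g[t - 1], g[t])).mean()
           for t in range(1, len(b))]
    return float(np.mean(err))

def tLP(lp, b, g):
    """Difference of perceptual changes over time. `lp` is a perceptual
    distance; the reference's own frame-to-frame change is subtracted,
    since natural videos also change over time."""
    err = [abs(lp(b[t - 1], b[t]) - lp(g[t - 1], g[t]))
           for t in range(1, len(b))]
    return float(np.mean(err))
```

For the UVT case discussed next, the reference sequence b is simply replaced by the input sequence a.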
For the UVT tasks, where no ground-truth data is available, we can still evaluate the tOF and tLP metrics by comparing the motion and the perceptual changes of the output data w.r.t. the ones of the input data, i.e., tOF = ||OF(a_{t−1}, a_t) − OF(g_{t−1}, g_t)||_1 and tLP = ||LP(a_{t−1}, a_t) − LP(g_{t−1}, g_t)||_1. Results are listed in Table 3, although it is worth pointing out that tOF is less informative in this case, as the motion in the target domain is not necessarily pixel-wise aligned with the input. Overall, TecoGAN achieves good tLP scores thanks to its temporal coherence, on par with RecycleGAN, and its spatial detail is on par with CycleGAN. As for VSR, a perceptual evaluation by humans in the right column of Table 3 confirms our metric evaluations for the UVT task (details in Appendix C). In paired as well as unpaired data domains, we have demonstrated that it is possible to learn stable temporal functions with GANs, thanks to the proposed discriminator architecture and PP loss. We have shown that this yields coherent and sharp details for VSR problems that go beyond what can be achieved with direct supervision. In UVT, we have shown that our architecture guides the training process to successfully establish the spatio-temporal cycle consistency between two domains. These results are reflected in the proposed metrics and user studies. While our method generates very realistic results for a wide range of natural images, it can generate temporally coherent yet sub-optimal details in certain cases, such as under-resolved faces and text in VSR, or UVT tasks with strongly different motion between the two domains. For the latter case, it would be interesting to combine our method with the motion translation of concurrent work, as this could make it easier for the generator to learn from our temporal self-supervision. In our method, the interplay of the different loss terms in the non-linear training procedure does not provide a guarantee that all goals are fully reached every time. However, we found our method to be stable over a large number of training runs, and we anticipate that it will provide a very useful basis for a wide range of generative models for temporal data sets. In the following, we first provide a qualitative analysis (Appendix A) using multiple results that are mentioned but omitted in the main document due to space constraints. We then explain details of the metrics and present the quantitative analysis based on them (Appendix B). The conducted user studies are in support of our TecoGAN network and the proposed temporal metrics (Appendix C). Then, we give technical details of our spatio-temporal discriminator (Appendix D) and details of the network architectures and training parameters (Appendix F, Appendix G). In the end, we discuss the performance of our approach in Appendix H. For the VSR task, we test our model on a wide range of video data, including the commonly used Vid4 dataset shown in Fig. 8 and 12, detailed scenes from the movie Tears of Steel shown in Fig. 12, and others shown in Fig. 9. As mentioned in the main document, the TecoGAN model is trained with down-sampled inputs, and it can similarly work with original images that were not down-sampled or filtered, such as a data-set of real-world photos. In Fig. 10, we compare our results to those of two other methods that have used the same dataset. With the help of adversarial learning, our model is able to generate improved and realistic details in down-sampled images as well as captured images.
For UVT tasks, we train models for Obama and Trump translations, LR- and HR-smoke simulation translations, as well as translations between smoke simulations and real-smoke captures. While smoke simulations usually contain strong numerical viscosity with details limited by the simulation resolution, the real smoke, captured with a physical capture setup, contains vivid fluid motions with many vortices and high-frequency details. As shown in Fig. 11, our method can be used to narrow the gap between simulations and real-world phenomena.

Spatial Metrics. We evaluate all VSR methods with PSNR together with the human-calibrated LPIPS metric. While higher PSNR values indicate a better pixel-wise accuracy, lower LPIPS values represent better perceptual quality and closer semantic similarity. Mean values of the Vid4 scenes are shown at the top of Table 4. Trained with direct vector norm losses, FRVSR and DUF achieve high PSNR scores. However, the undesirable smoothing induced by these losses manifests itself in larger LPIPS distances. ENet, on the other hand, with no information from neighboring frames, yields the lowest PSNR and achieves an LPIPS score that is only slightly better than DUF and FRVSR. The TecoGAN model with adversarial training achieves an excellent LPIPS score, with a PSNR decrease of less than 2dB over DUF, which is very reasonable, since PSNR and perceptual quality were shown to be anti-correlated, especially in regions where PSNR is very high. Based on good perceptual quality and reasonable pixel-wise accuracy, TecoGAN outperforms all other methods by more than 40% for LPIPS.

Temporal Metrics. For both VSR and UVT, evaluating temporal coherence without ground-truth motion is a very challenging problem. The metric T-diff = ‖g_t − W(g_{t−1}, v_t)‖_1 was used in previous work as a rough assessment of temporal differences. As shown at the bottom of Table 4, T-diff, due to its local nature, is easily deceived by blurry methods such as bi-cubic interpolation and cannot correlate well with visual assessments of coherence. By measuring the pixel-wise motion difference using tOF together with the perceptual changes over time using tLP, we show the temporal evaluations for the VSR task in the middle of Table 4. Not surprisingly, the results of ENet show larger errors for all metrics due to their strongly flickering content. Bi-cubic up-sampling, DUF, and FRVSR achieve very low T-diff errors due to their smooth results, representing an easy, but undesirable avenue for achieving coherency. However, the overly smooth changes of the former two are identified by the tLP scores. While our DsOnly model generates sharper results at the expense of temporal coherence, it still outperforms ENet there. By adding temporal information to discriminators, our DsDt, DsDt+PP, and TecoGAN models improve in terms of temporal metrics. Especially the full TecoGAN model stands out here. For the UVT tasks, temporal motions are evaluated by comparing to the input sequence. With sharp spatial features and coherent motion, TecoGAN outperforms previous work on the Obama&Trump dataset, as shown in Table 3.

Since temporal metrics can trivially be reduced for blurry image content, we found it important to evaluate with a combination of spatial and temporal metrics. Given that perceptual metrics are already widely used for image evaluations, we believe it is the right time to consider perceptual changes in temporal evaluations, as we did with our proposed temporal coherence metrics. Although not perfect, they are not easily deceived.
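For contrast with tOF and tLP, here is a small sketch of the T-diff baseline metric, assuming a warp_fn that resamples a frame along a motion field and precomputed flows; both helpers are stand-ins rather than the exact components of this work.

```python
import numpy as np

def t_diff(gen_frames, flows, warp_fn):
    """T-diff: mean L1 distance between each frame and its warped predecessor.

    flows[t] is the motion field from frame t-1 to frame t;
    warp_fn(frame, flow) resamples `frame` along `flow`.
    Blurry sequences score deceptively well here, since warping errors
    vanish when there is no high-frequency detail to misalign.
    """
    diffs = [np.abs(gen_frames[t] - warp_fn(gen_frames[t - 1], flows[t])).mean()
             for t in range(1, len(gen_frames))]
    return float(np.mean(diffs))
```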
Specifically, tOF is more robust than a direct pixel-wise metric, as it compares motions instead of image content. In the supplemental material, we visualize the motion difference, and it reflects the visual inconsistencies well. On the other hand, we found that our calculation of tLP is a general concept that can work reliably with different perceptual metrics: when repeating the tLP evaluation with the PieAPP metric instead of LP, i.e., tPieP = ‖f(y_{t−1}, y_t) − f(g_{t−1}, g_t)‖_1, where f(·) indicates the perceptual error function of PieAPP, we get close to identical results, listed in Fig. 13. The results from tPieP also closely match the LPIPS-based evaluation: our network architecture can generate realistic and temporally coherent detail, and the metrics we propose allow for a stable, automated evaluation of the temporal perception of a generated video sequence.

Besides the previously evaluated Vid4 dataset, with graphs shown in Fig. 14 and 15, we also obtain similar evaluation results on the Tears of Steel data-sets (room, bridge, and face, in the following referred to as ToS scenes); the corresponding results are shown in Table 5 and Fig. 16. In all tests, we follow the procedures of previous work to make the outputs of all methods comparable, i.e., for all images, we first exclude spatial borders with a distance of 8 pixels to the image sides, then further shrink borders such that the LR input image is divisible by 8. For spatial metrics, we ignore the first two and the last two frames, while for temporal metrics, we ignore the first three and the last two frames, as an additional previous frame is required for inference.
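The border-handling protocol above can be summarized in code. The sketch below is our reading of the procedure, with the rounding details treated as assumptions rather than the authors' exact implementation (scale factor 4, 8-pixel borders, divisibility by 8 on the LR side).

```python
def crop_for_metrics(hr_frames, scale=4, border=8):
    """Hedged sketch of the evaluation cropping described above: remove an
    8-pixel border, then trim so the corresponding LR resolution
    (HR size / scale) is divisible by 8. Rounding choices are assumptions."""
    cropped = []
    for f in hr_frames:
        h, w = f.shape[0] - 2 * border, f.shape[1] - 2 * border
        h -= h % scale
        w -= w % scale
        h -= ((h // scale) % 8) * scale   # make the LR height divisible by 8
        w -= ((w // scale) % 8) * scale   # make the LR width divisible by 8
        cropped.append(f[border:border + h, border:border + w])
    return cropped

def frames_for_metric(frames, temporal=False):
    # Spatial metrics skip the first/last two frames; temporal metrics skip
    # the first three and last two, as a previous frame is needed for inference.
    return frames[3:-2] if temporal else frames[2:-2]
```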
In the following, we conduct user studies for the Vid4 scenes. By comparing the user study results and the metric breakdowns shown in Table 4, we find that our metrics reliably capture human temporal perception, as shown in Appendix C. We conduct several user studies for the VSR task using five different methods, namely bi-cubic interpolation, ENet, FRVSR, DUF and our TecoGAN. The established 2AFC design (Fechner & Wundt, 1889) is applied, i.e., participants have a pair-wise choice, with the ground-truth video shown as reference. One example can be seen in Fig. 17. The videos are synchronized and looped until the user makes a final decision. Participants cannot stop or influence the playback, and hence can focus more on the whole video instead of specific spatial details. Video positions (left/A or right/B) are randomized. After collecting 1000 votes from 50 users for every scene, i.e. twice for all possible pairs (5 × 4/2 = 10 pairs), we follow common procedure and compute scores for all models with the Bradley-Terry model.

The outcomes for the Vid4 scenes can be seen in Fig. 18 (overall scores are listed in Table 2 of the main document). From the Bradley-Terry scores for the Vid4 scenes we can see that the TecoGAN model performs very well, achieving first place in three cases, as well as a second place in the walk scene. The latter is most likely caused by the overall slightly smoother images of the walk scene, in conjunction with the presence of several human faces, where our model can lead to the generation of unexpected details. However, overall the user study shows that users preferred the TecoGAN output over the other two deep-learning methods with a 63.5% probability. This also matches our metric evaluations: in Table 4, while TecoGAN achieves spatial (LPIPS) improvements in all scenes, DUF and FRVSR are not far behind in the walk scene. In terms of temporal metrics tOF and tLP, TecoGAN achieves similar or lower scores compared to FRVSR and DUF for the calendar, foliage and city scenes. The lower performance of our model for the walk scene is likewise captured by higher tOF and tLP scores. Overall, the metrics confirm the performance of our TecoGAN approach and match the results of the user studies, which indicates that our proposed temporal metrics successfully capture important temporal aspects of human perception.

For UVT tasks, which have no ground-truth data, we carried out two sets of user studies: one uses an arbitrary sample from the target domain as the reference, and the other uses the actual input from the source domain as the reference. On the Obama&Trump data-sets, we evaluate results from CycleGAN, RecycleGAN, and TecoGAN following the same modality, i.e. a 2AFC design with 50 users for each run. E.g., on the left of Fig. 19, users evaluate the generated Obama in reference with the input Trump on the y-axis, while an arbitrary Obama video is shown as the reference on the x-axis. Effectively, the y-axis is more important than the x-axis, as it indicates whether the translated result preserves the original expression. A consistent ranking of TecoGAN > RecycleGAN > CycleGAN is shown on the y-axis with clear separations, i.e. standard errors don't overlap. The x-axis indicates whether the inferred result matches the general spatio-temporal content of the target domain. Our TecoGAN model also receives the highest scores here, although the responses are slightly more spread out. On the right of Fig. 19, we summarize both studies in a single graph, highlighting that the TecoGAN model is consistently preferred by the participants of our user studies.

Our generator produces its output with the help of the flow estimation network F. However, at the boundary of images, the output of F is usually less accurate due to the lack of reliable neighborhood information. There is a higher chance that objects move into the field of view, or leave suddenly, which significantly affects the images warped with the inferred motion. An example is shown in Fig. 20 (near image boundaries, flow estimation is less accurate and warping often fails to align well: the first two columns show original and warped frames, the third shows differences after warping, ideally all black; the top row shows objects moving into the view with problems near the lower boundary, while the second row has objects moving out of the view). During adversarial training, F learns to warp the previous frame in accordance with the detail that G can synthesize; however, F does not adjust the motion estimation only to reduce the adversarial loss.

Curriculum Learning for UVT Discriminators. As mentioned in the main part, we train the UVT D_s,t with 100% spatial triplets at the very beginning. During training, 25% of them gradually transfer into warped triplets and another 25% transfer into original triplets. The transfer of the warped triplets can be represented as (1−α)I_cg + αI_wg, with α growing from 0 to 1. For the original triplets, we additionally fade the "warping" operation out by using (1−α)I_cg + α{W(g_{t−1}, v_t·β), g_t, W(g_{t+1}, v_t·β)}, again with α growing from 0 to 1 and β decreasing from 1 to 0. We found this smooth transition to be helpful for a stable training.
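The fade-in of discriminator triplet types can be written as a small sampling routine. The 25/25/50 split follows the description above, while the linear ramp for α and the exact batching are assumptions made for illustration.

```python
import random

def sample_discriminator_triplet(step, ramp_steps, I_static, I_warped, I_orig,
                                 fade_warp):
    """Curriculum for the UVT discriminator inputs (sketch).

    I_static, I_warped, I_orig: the three triplet variants for one sample,
    each a list of frames (NumPy arrays).
    fade_warp(triplet, beta): re-warps the original triplet with the motion
    scaled by beta, a stand-in for W(g, v * beta) as described above.
    """
    alpha = min(1.0, step / float(ramp_steps))   # assumed linear ramp 0 -> 1
    r = random.random()
    if r < 0.25:     # 25% fade from static into warped triplets
        return [(1 - alpha) * s + alpha * w for s, w in zip(I_static, I_warped)]
    elif r < 0.5:    # 25% fade into original triplets, fading the warp out
        beta = 1.0 - alpha
        target = fade_warp(I_orig, beta)
        return [(1 - alpha) * s + alpha * o for s, o in zip(I_static, target)]
    return I_static  # the remaining 50% stay static triplets
```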
Since training with sequences of arbitrary length is not possible with current hardware, problems such as the streaking artifacts discussed above generally arise for recurrent models. In the proposed PP loss, both the Ping-Pong data augmentation and the temporal consistency constraint contribute to solving these problems. In order to show their separate contributions, we trained another TecoGAN variant that only employs the data augmentation without the constraint (i.e., λ_p = 0 in Table 1).

In this section, we use the following notation to specify all network architectures used: conc represents the concatenation of two tensors along the channel dimension; C/CT(input, kernel size, output channels, stride) stands for the convolution and transposed convolution operation, respectively; "+" denotes element-wise addition; BilinearUp2 up-samples input tensors by a factor of 2 using bi-linear interpolation; BicubicResize4(input) increases the resolution of the input tensor by a factor of 4 via bi-cubic up-sampling; Dense(input, output size) is a densely-connected layer, which uses Xavier initialization for the kernel weights.

The architecture of our VSR generator G is: conc(x_t, W(g_{t−1}, v_t)) → l_in; C(l_in, 3, 64, 1), ReLU → l_0; ResidualBlock(l_i) → l_{i+1} with i = 0, ..., n−1; CT(l_n, 3, 64, 2), ReLU → l_up2; CT(l_up2, 3, 64, 2), ReLU → l_up4; C(l_up4, 3, 3, 1), ReLU → l_res; BicubicResize4(x_t) + l_res → g_t. In TecoGAN, there are 10 sequential residual blocks in the generator (l_n = l_10), while the larger TecoGAN generator has 16 residual blocks (l_n = l_16). Each ResidualBlock(l_i) contains the following operations: C(l_i, 3, 64, 1), ReLU → r_i; C(r_i, 3, 64, 1) + l_i → l_{i+1}.

The VSR D_s,t's architecture is: IN_g^{s,t} or IN_y^{s,t} → l_in; C(l_in, 3, 64, 1), Leaky ReLU → l_0; C(l_0, 4, 64, 2), BatchNorm, Leaky ReLU → l_1; C(l_1, 4, 64, 2), BatchNorm, Leaky ReLU → l_2; C(l_2, 4, 128, 2), BatchNorm, Leaky ReLU → l_3; C(l_3, 4, 256, 2), BatchNorm, Leaky ReLU → l_4; Dense(l_4, 1), sigmoid → l_out. The VSR discriminators used in our variant models, DsDt, DsDt+PP and DsOnly, have a similar architecture as D_s,t; they only differ in terms of their inputs.

The flow estimation network F has the following architecture: conc(x_t, x_{t−1}) → l_in; C(l_in, 3, 32, 1), Leaky ReLU → l_0; C(l_0, 3, 32, 1), Leaky ReLU, MaxPooling → l_1; C(l_1, 3, 64, 1), Leaky ReLU → l_2; C(l_2, 3, 64, 1), Leaky ReLU, MaxPooling → l_3; C(l_3, 3, 128, 1), Leaky ReLU → l_4; C(l_4, 3, 128, 1), Leaky ReLU, MaxPooling → l_5; C(l_5, 3, 256, 1), Leaky ReLU → l_6; C(l_6, 3, 256, 1), Leaky ReLU, BilinearUp2 → l_7; C(l_7, 3, 128, 1), Leaky ReLU → l_8.

For all UVT tasks, we use a learning rate of 10^−4 to train the first 90k batches, and the last 10k batches are trained with the learning rate decaying from 10^−4 to 0. Images of the input domain are cropped to a size of 256 × 256 during training, while the original size is 288 × 288. Additional training parameters are listed in Table 6. For UVT, L_content and L_φ are only used to improve the convergence of the training process: we fade out L_content over the first 10k batches, and L_φ is used for the first 80k batches and faded out over the last 20k.

TecoGAN is implemented in TensorFlow. While generator and discriminator are trained together, we only need the trained generator network for the inference of new outputs after training, i.e., the whole discriminator network can be discarded.
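The generator notation above translates fairly directly into code. Below is a PyTorch re-sketch (the paper's implementation is TensorFlow); the channel count of the warped previous output concatenated to x_t is left as a parameter, since its exact shape handling is not spelled out here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    # C(l_i, 3, 64, 1), ReLU -> r_i ; C(r_i, 3, 64, 1) + l_i -> l_{i+1}
    def __init__(self, ch=64):
        super().__init__()
        self.c1 = nn.Conv2d(ch, ch, 3, 1, 1)
        self.c2 = nn.Conv2d(ch, ch, 3, 1, 1)

    def forward(self, x):
        return self.c2(F.relu(self.c1(x))) + x

class VSRGenerator(nn.Module):
    """Sketch of the VSR generator listed above; n_blocks=10 for TecoGAN,
    16 for the larger variant."""
    def __init__(self, n_blocks=10, warped_ch=3):
        super().__init__()
        self.head = nn.Conv2d(3 + warped_ch, 64, 3, 1, 1)  # conc(x_t, W(g, v))
        self.body = nn.Sequential(*[ResidualBlock() for _ in range(n_blocks)])
        self.up2 = nn.ConvTranspose2d(64, 64, 3, 2, 1, output_padding=1)
        self.up4 = nn.ConvTranspose2d(64, 64, 3, 2, 1, output_padding=1)
        self.tail = nn.Conv2d(64, 3, 3, 1, 1)

    def forward(self, x_t, warped_prev):
        h = F.relu(self.head(torch.cat([x_t, warped_prev], dim=1)))
        h = F.relu(self.up2(self.body(h)))   # CT(l_n, 3, 64, 2) -> l_up2
        h = F.relu(self.up4(h))              # CT(l_up2, 3, 64, 2) -> l_up4
        res = F.relu(self.tail(h))           # C(l_up4, 3, 3, 1) -> l_res
        base = F.interpolate(x_t, scale_factor=4, mode='bicubic',
                             align_corners=False)
        return base + res                    # BicubicResize4(x_t) + l_res
```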
We evaluate the models on a Nvidia GeForce GTX 1080Ti GPU with 11G memory; the resulting VSR performance is given in Table 2. The VSR TecoGAN model and FRVSR have the same number of weights (843587 in the SRNet, i.e. the generator network, and 1.7M in F), and thus show very similar performance characteristics, with around 37 ms spent per frame. The larger VSR TecoGAN model, with 1286723 weights in the generator, is slightly slower, spending 42 ms per frame. In the UVT task, generators spend around 60 ms per frame at a size of 512 × 512. However, compared with the DUF model, which has more than 6 million weights in total, TecoGAN performs significantly better thanks to its reduced size.
The scarcity of labeled training data often prohibits the internationalization of NLP models to multiple languages. Cross-lingual understanding has made progress in this area using language-universal representations. However, most current approaches treat the problem as one of aligning languages and do not address the natural domain drift across languages and cultures. In this paper, we address the domain gap in the setting of semi-supervised cross-lingual document classification, where labeled data is available in a source language and only unlabeled data is available in the target language. We combine a state-of-the-art unsupervised learning method, masked language modeling pre-training, with a recent method for semi-supervised learning, Unsupervised Data Augmentation (UDA), to simultaneously close the language and the domain gap. We show that addressing the domain gap in cross-lingual tasks is crucial. We improve over strong baselines and achieve a new state-of-the-art for cross-lingual document classification.

Recent advances in Natural Language Processing have enabled us to train high-accuracy systems for many language tasks. However, training an accurate system still requires a large amount of training data. It is inefficient to collect data for a new task, and it is virtually impossible to annotate a separate data set for each language. To go beyond English and a few popular languages, we need methods that can learn from data in one language and apply it to others.

Cross-Lingual Understanding (XLU) has emerged as a field concerned with learning models on data in one language and applying them to others. Much of the work in XLU focuses on the zero-shot setting, which assumes that labeled data is available in one source language (usually English) and not in any of the target languages in which the model is evaluated. The labeled data can be used to train a high quality model in the source language. One then relies on general domain parallel corpora and monolingual corpora to learn to 'transfer' from the source language to the target language. Transfer methods can explicitly rely on machine translation models built from such parallel corpora. Alternatively, one can use such corpora to learn language-universal representations to produce features to train a model in one language, which one can directly apply to other languages. Such representations can be in the form of cross-lingual word embeddings, contextual word embeddings, or sentence embeddings. Using such techniques, recent work has demonstrated reasonable zero-shot performance for cross-lingual document classification and natural language inference (XNLI).

What we have so far described is a simplified view of XLU, which focuses solely on the problem of aligning languages. This view assumes that, if we had access to a perfect translation system and translated our source training data into the target language, the resulting model would perform as well as if we had collected a similarly sized labeled dataset directly in our target language. Existing work in XLU to date also works under this assumption. However, in real world applications, we must also bridge the domain gap across different languages, as well as the language gap. No task is ever identical in two languages, even if we group them under the same label, e.g. 'news document classification' or 'product reviews sentiment analysis'. A Chinese customer might express sentiment differently than their American counterpart.
Or French news might simply cover different topics than English news. As a result, any approach that ignores this domain drift will fall short of native in-language performance in real world XLU.

In this paper, we propose to jointly tackle both language and domain transfer. We consider the semi-supervised XLU task, where in addition to labeled data in a source language, we have access to unlabeled data in the target language. Using this unlabeled data, we combine the aforementioned cross-lingual methods with recently proposed unsupervised domain adaptation and weak supervision techniques on the task of cross-lingual document classification (XLDC). In particular, we focus on two approaches for domain adaptation. The first method is masked language model (MLM) pre-training using unlabeled target language corpora. Such methods have been shown to improve over general purpose pre-trained models such as BERT in the weakly supervised setting. The second method is unsupervised data augmentation (UDA), where synthetic paraphrases are generated from the unlabeled corpus, and the model is trained on a label consistency loss.

While both of these techniques were proposed previously, in both cases there are open questions when applying them to cross-lingual problems. For instance, when performing data augmentation, one could generate augmented paraphrases in either the source or the target language, or both. We experiment with various approaches and provide guidelines with ablation studies. Furthermore, we find that the value of additional labeled data in the source language is limited due to the train-test discrepancy of XLDC tasks. We propose to alleviate this issue by using the self-training technique to perform domain adaptation from the source language into the target language. By combining these methods, we are able to reduce error rates by an average of 44% over a strong XLM baseline, setting a new state-of-the-art for cross-lingual document classification.

Cross-lingual document classification was first introduced over a decade ago. Subsequent work proposed cross-lingual sentiment classification datasets, and later efforts extended this to the news domain. Cross-lingual understanding has also been applied to other NLP tasks, with datasets available for dependency parsing, natural language inference (XNLI) and question answering.

Cross-lingual methods gained popularity with the advent of cross-lingual word embeddings. Since then, many methods have been proposed to better align the word embedding spaces of different languages (see existing surveys). Recently, more sophisticated extensions have been proposed based on seq2seq training of cross-lingual sentence embeddings and contextual word embeddings pre-trained on masked language modeling, notably multilingual BERT and the cross-lingual language model (XLM). We use XLM as our baseline representation in all experiments, as it is the current state-of-the-art on the commonly used XNLI benchmark for cross-lingual understanding.

Domain adaptation, closely related to transfer learning, has a rich history in machine learning and natural language processing, and such methods have long been applied to document classification tasks. Domain adaptation for NLP is intimately related to transfer learning and semi-supervised learning. Transfer learning has made tremendous advances recently due to the success of pre-training representations using language modeling as a source task.
While such representations trained on large amounts of general domain text have been shown to transfer well generally, performance still suffers when the target domain is sufficiently different from what the models were pre-trained on. In such cases, it is known that further pre-training the language model on in-domain text is helpful, and it is natural to use unsupervised domain data for this task, when available. The study of weakly supervised learning in language processing is relatively new. Most recently, an unsupervised data augmentation (UDA) technique was introduced, demonstrating improvements in the few-shot learning setting. Here, we extend this technique to facilitate cross-lingual and cross-domain transfer.

In this section, we formally define the problem discussed in this paper and describe the proposed approach in detail. In vanilla zero-shot cross-lingual document classification, it is assumed that we have available a labeled dataset in a source language (English in our case), which we can denote by L_src = {(x, y) | x ∼ P_src(x)}, where P_src(x) is the prior distribution of task data in the source language. It is assumed that no data is available for the task in the target language. General purpose parallel and monolingual resources are used to train a cross-lingual classifier on the labeled source data, which is then applied to the target language data at test time. In this work, we also assume access to a large unlabeled corpus in the target language, U_tgt = {x | x ∼ P_tgt(x)}, which is usually the case in practical applications. We aim to utilize this domain-specific unlabeled data to help the model adapt to the target domain and gain better generalization ability. We refer to this setting as semi-supervised XLDC, although we are still in the zero-shot setting, in that no labeled data is used in the target language.

There are two standard ways to transfer knowledge across languages in the vanilla zero-shot setting: using a translation system to translate the labeled samples, as in the translate-train and translate-test methods, and learning a multilingual embedding system to obtain language-independent representations of the data. In this paper, we adopt the second approach as the basic model, and utilize the XLM model as our base model, which has been pre-trained on large-scale parallel and monolingual data from various languages. Because XLM is a multilingual embedding system, a baseline is obtained by fine-tuning XLM with the labeled set L_src and directly applying the resulting model to the target language. In the experiments section, we also discuss the combination of XLM and the translation-based approaches.

As argued in the Introduction, even with a perfect translation or multilingual embedding system, we still face the domain-mismatch problem. This mismatch may limit the generalization ability of the model at test time. To fully adapt the classifier to the target distribution, we explore the following approaches, each of which leverages unlabeled data in the target language in a different way.

Masked Language Model pre-training. BERT and its derivatives (such as XLM) are trained on general domain corpora. A standard practice is to further pre-train such a model to adapt it to a particular domain when data is available. This technique lets the model learn prior knowledge of the target domain and leads to better performance. We refer to this approach as MLM pre-training.
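As a concrete illustration of the MLM pre-training step, here is a minimal PyTorch-style sketch of the masked-token loss on unlabeled target-language text; the 15% masking rate and the omitted 80/10/10 replacement split are standard BERT-style defaults, assumed rather than taken from this paper.

```python
import torch
import torch.nn.functional as F

def mlm_step(token_ids, model, mask_token_id, mask_prob=0.15):
    """One MLM training step on a batch of unlabeled text (sketch).

    token_ids: LongTensor (batch, seq_len) of subword ids.
    model:     callable returning logits of shape (batch, seq_len, vocab).
    """
    labels = token_ids.clone()
    mask = torch.rand(token_ids.shape, device=token_ids.device) < mask_prob
    labels[~mask] = -100            # loss is computed on masked positions only
    inputs = token_ids.clone()
    inputs[mask] = mask_token_id    # (the 80/10/10 replacement split is omitted)
    logits = model(inputs)
    return F.cross_entropy(logits.view(-1, logits.size(-1)),
                           labels.view(-1), ignore_index=-100)
```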
However, in the cross-lingual setting, fine-tuning the XLM model on the target language can make the model degenerate in the source language, decreasing its ability to transfer across languages. This problem would prevent the fine-tuning baseline from learning a reasonable model. Therefore, in this case, we take care to use this method in combination with the translate-train method, which translates all labeled samples into the target language.

Unsupervised Data Augmentation. The second approach utilizes a state-of-the-art semi-supervised learning technique, the Unsupervised Data Augmentation (UDA) algorithm, to leverage the unlabeled data. The objective function of UDA can be written as

L_UDA = E_{(x,y)∼L_src}[−log p_θ(y|x)] + λ E_{x∼U_tgt}[ KL( p_θ(·|x) ‖ p_θ(·|x̃) ) ],

where x̃ is an augmented sample generated by a predefined augmentation function x̃ = q(x). The augmentation function can be a paraphrase generation model, or a noising heuristic. Here, we use a machine translation system as the paraphrase generation model. The UDA loss enforces the classifier to produce label-consistent predictions for pairs of original and augmented samples. With the UDA method, the model is better adapted to the target domain by learning a label-consistent model over the target domain.

In the cross-lingual setting, there are multiple ways of generating augmented samples using translation. One could translate samples from the target language into the source language and use this cross-lingual pair as the augmented sample. Alternatively, one could translate back into the target language and use only target-language augmented samples. We find that the latter works best. It is also possible to do data augmentation using source domain unlabeled data. The results of these comparisons are included in our detailed ablation study in the experiments section.

Alleviating the Train-Test Discrepancy of the UDA Method. With the UDA algorithm, the classifier is able to exploit the prior information on the target domain; however, it still suffers from a train-test discrepancy. During the testing phase, our goal is to maximize the classifier performance on real samples in the target language, i.e., samples from the distribution P_tgt(x). Upon inspecting the UDA training objective above, one can see that the data x fed to the model in the training phase is sampled from three domains: the source domain P_src(x), the target domain P_tgt(x) and the augmented sample domain P_aug(x). On the other hand, the testing phase only processes data from the target domain P_tgt(x). The source and target domains are mismatched, due to the differences in language argued earlier. Furthermore, the augmented domain, although generated from the target domain, can also be mismatched, due to artifacts introduced by the translation system. This can be especially problematic, since the UDA method needs diversity in the augmented samples to perform well, which trades off against their quality.

We propose to apply the self-training technique to tackle this problem. We first train a classifier based on the UDA algorithm and denote it as f*(x); this is the teacher model used to score the unlabeled data U_tgt from the target domain. Then we fine-tune a new XLM model using the soft classification loss function on the pseudo-labeled data, which can be written as

L_self = −E_{x∼U_tgt}[ Σ_y f*(x)[y] log p_θ(y|x) ].

Following this process, we obtain a new classifier trained only on the target domain, which does not suffer from the train-test mismatch problem. We show that this process provides better generalization ability compared to the teacher model.
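The two objectives above can be sketched as follows. This is a minimal PyTorch rendering of the consistency and soft-label losses as written, not the authors' released code; details such as stopping gradients through the fixed target prediction are common UDA practice and assumed here.

```python
import torch
import torch.nn.functional as F

def uda_loss(model, x_src, y_src, x_tgt, x_tgt_aug, lam=1.0):
    """UDA objective (sketch): supervised loss on source labels plus a KL
    consistency term between predictions on target samples and their
    (back-translated) augmentations."""
    sup = F.cross_entropy(model(x_src), y_src)
    with torch.no_grad():
        p = F.softmax(model(x_tgt), dim=-1)        # fixed target prediction
    log_q = F.log_softmax(model(x_tgt_aug), dim=-1)
    consistency = F.kl_div(log_q, p, reduction='batchmean')  # KL(p || q)
    return sup + lam * consistency

def self_training_loss(student, teacher, x_tgt):
    """Soft-label distillation step: the converged teacher scores unlabeled
    target data, and the student is trained on those soft labels."""
    with torch.no_grad():
        soft = F.softmax(teacher(x_tgt), dim=-1)
    log_p = F.log_softmax(student(x_tgt), dim=-1)
    return -(soft * log_p).sum(dim=-1).mean()
```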
A process diagram of this method is presented in Figure 1.

In this section, we present a comprehensive study on two benchmark tasks: cross-lingual sentiment classification and cross-lingual news classification.

Sentiment Classification. In this task, we test the proposed framework on a sentiment classification benchmark in three target languages, i.e. French, German and Chinese. The French, German and English data come from the benchmark cross-lingual Amazon reviews dataset, which we denote as amazon-fr, amazon-de and amazon-en. We merge training and testing samples from all product categories in one language, which leads to 6000 training samples. However, to facilitate a fair comparison with previous work, we also provide results for the category-wise settings against previous state-of-the-art performance. The number of unlabeled samples from amazon-fr is 54K, and from amazon-de 310K. For Chinese, we use the Chinese Amazon (amazon-cn) and Dianping datasets. Dianping is a business review website similar to Yelp. The training data for amazon-cn is amazon-en, and for dianping it is the Yelp dataset. In these two cases, the size of the training set is 2000. For both the amazon-cn and dianping datasets, we have 4M unlabeled examples. Because the unlabeled set is very large, we randomly sample 10% of it for the UDA algorithm.

News Classification. We use the MLDoc dataset for this task. The MLDoc dataset is a subset of the RCV2 multilingual news dataset. It has 4 categories, i.e. Corporate/Industrial, Economics, Government/Social and Markets, and each category has 250 training samples. We use the rest of the news documents in the RCV2 dataset as the unlabeled data. The number of unlabeled samples for each language ranges from 5K to 20K, which is relatively small compared to the sentiment classification task.

Because the XLM model is pre-trained on 15 languages, we ignore languages in the above benchmark datasets which are not supported by XLM. The pre-processing scripts for the above datasets, the augmented samples, pre-trained models and the experiment settings needed to reproduce our results will be released in a GitHub repo.

As introduced in section 3.3, we apply MLM pre-training on the unlabeled data corpus to obtain a domain-specific XLM, denoted as XLM_ft in the following sections. The pre-training strategies for the two tasks are slightly different. In the sentiment classification task, because the size of the unlabeled corpus in each target domain is large enough, we fine-tune an XLM with the MLM loss for each target domain separately. In contrast, we do not have enough unlabeled data in each language in the MLDoc dataset; therefore we merge the unlabeled data from all languages into one training corpus. As a result, the XLM_ft still preserves its language universality in this task.

We compare the following models:

• Fine-tune (Ft): fine-tuning the pre-trained model with the source-domain training set. In the case of XLM_ft, the training set is translated into the target language.

• Fine-tune with UDA (UDA): this method utilizes the unlabeled data from the target domain by optimizing the UDA loss function given above.

• Self-training based on the UDA model (UDA+Self): we first train the Ft model and the UDA model, and choose the one with better development accuracy as the teacher model. The teacher model is used to train a new XLM student using only unlabeled data U_tgt in the target domain, as described above.

We report the results of applying these three methods on both the original XLM model and the XLM_ft model.
To keep the notation simple, we use parentheses after the method name to indicate which base model was used, e.g. UDA(XLM_ft). The details about the implementation and hyper-parameter tuning are included in Appendix A.1.

The results for the cross-lingual sentiment classification task are summarized in Table 1. As our experiment setting on the cross-lingual Amazon dataset differs from previous publications, in order to provide a fair comparison with previous work, we summarize the results of the standard category-wise setting in Table 3. The results for cross-lingual news classification are included in Table 2. The last column "Unlabeled" in these tables indicates whether the method utilizes the unlabeled data. For the monolingual baselines, the models are trained with labeled data from the target domain; the size of the labeled set is the same as the English training set used for the cross-lingual experiments. We can summarize our findings as follows:

• Looking at the Ft(XLM) results, it is clear that without the help of unlabeled data from the target domain, there still exists a substantial gap between the model performance in the cross-lingual settings and the monolingual baselines, even when using state-of-the-art pre-trained cross-lingual representations.

• Both the UDA algorithm and MLM pre-training can offer significant improvements by utilizing the unlabeled data. In the sentiment classification task, where the unlabeled data size is larger, the Ft(XLM_ft) model using MLM pre-training consistently provides larger improvements compared with the UDA method. On the other hand, the MLM method is relatively more resource intensive and takes longer to converge (see Appendix A.5). In contrast, on the MLDoc dataset, where the number of unlabeled samples is limited, the UDA method is more helpful.

• The combination of both methods -as in the UDA(XLM_ft) model -consistently outperforms either method alone. In this case the additional improvement provided by the UDA algorithm is smaller, but still consistent.

• In the sentiment classification task, we observe that the self-training technique consistently improves over its teacher model. It offers the best results for both XLM- and XLM_ft-based classifiers. The results demonstrate that the self-training process is able to alleviate the train-test distribution mismatch problem and provide better generalization ability. On the MLDoc dataset, self-training also achieves the best results overall; however, the gains are less clear. We hypothesize that this technique is not as useful without a sufficient number of unlabeled samples.

• Finally, comparing with the best cross-lingual and monolingual fine-tune baselines, we are able to completely close the performance gap by utilizing unlabeled data in the target language. Furthermore, our framework reaches new state-of-the-art results, improving over vanilla XLM baselines by 44% on average.

Furthermore, we provide an additional baseline which only uses English samples to perform semi-supervised learning; details are in Appendix A.2. The experiments show that it lags behind the models using unlabeled data from the target domain. This observation also confirms the importance of information from the target domain in the XLDC task.

Table 3: Error rates for the sentiment classification task by product category, compared against the pre-XLM state of the art.

In this section, we provide evidence for the train-test domain discrepancy in the context of the UDA method, by showing that adding more labeled data in the source language does not improve target task accuracy after a certain point.
Figure 2 plots the target model performance vs. the number of labeled training samples in the cross-lingual and monolingual settings, respectively. The figures are based on the UDA(XLM) method with 6 runs in the Yelp-Dianping cross-lingual setting. Each dot is the average accuracy, and the filled area covers one standard deviation. We observe that, in the cross-lingual setting, the model performance peaks at around 10k training samples per category, and becomes worse with larger training sets. In contrast, the performance of the model improves consistently with more labeled data in the monolingual setting. This suggests that more training data from the source domain can harm the model's generalization ability in the target domain with the UDA approach in the cross-lingual setting. To alleviate this issue, we propose to utilize the self-training technique, which abandons the data from the source domain and the augmentation domain, to maximize performance in the target domain.

Next, we explore different augmentation strategies and their influence on the final performance. As stated in section 3.3, the augmentation strategy used in the main experiment first translates the samples into English and then translates them back to their original language. We refer to this strategy as augmenting "from target domain to target domain" and abbreviate it as t2t. We also explore two additional augmentation strategies. First, we do not translate the samples back to the target language and directly use the English samples as the augmented samples, denoted as t2s. Naturally, the parallel samples in the two languages carry the same sentiment information in different input formats, which makes them suitable as augmentation sample pairs for a multilingual system such as XLM. The second approach is to leverage unlabeled data from other language domains. Here, we use the English unlabeled data and translate it into the target language to form the augmented samples. This strategy is denoted as s2t.

Table 4: Error rates when using different augmentation strategies and their combinations. Results for sentiment classification shown on the left, and news document classification on the right.

Table 4 summarizes the performance of the proposed augmentation strategies and their combinations with the UDA(XLM) method in the sentiment classification setting and the UDA(XLM_ft) method in the news classification setting. From the results, we conclude that t2t is the best performing approach, as it is best matched to the target domain. Leveraging unlabeled data from other domains does not offer consistent improvement, but can provide additional value in isolated cases.
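The t2t strategy amounts to a round trip through the source language. A sketch follows, with translate() standing in for whatever MT system is available; its signature is an assumption made for illustration.

```python
def make_t2t_augmentations(texts_tgt, translate, src_lang='en', tgt_lang='fr',
                           temperature=1.0):
    """t2t augmentation (sketch): round-trip each target-language text
    through the source language with temperature-controlled sampling.

    translate(text, from_lang, to_lang, temperature) is a hypothetical
    stand-in for an MT system, not a real API.
    """
    pairs = []
    for text in texts_tgt:
        pivot = translate(text, tgt_lang, src_lang, temperature)
        back = translate(pivot, src_lang, tgt_lang, temperature)
        pairs.append((text, back))   # (original, augmented paraphrase)
    return pairs
```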
We include additional ablations regarding the translation system in the appendix, including the application of the translate-train method in our experiments (section A.3) and the effects of hyper-parameters (section A.4); finally, we discuss the application of our framework to the monolingual cross-domain document classification problem in Appendix B.

In this paper, we tackled the domain mismatch challenge in cross-lingual document classification -an important, yet often overlooked problem in cross-lingual understanding. We provided evidence for the existence and importance of this problem, even when utilizing strong pre-trained cross-lingual representations. We proposed a framework combining cross-lingual transfer techniques with three domain adaptation methods -unsupervised data augmentation, masked language model pre-training and self-training -which can leverage unlabeled data in the target language to moderate the domain gap. Our results show that by removing the domain discrepancy, we can close the performance gap between cross-lingual transfer and monolingual baselines almost completely for the document classification task. We are also able to improve the state-of-the-art in this area by a large margin. While document classification is by no means the most challenging task for XLU, we believe the strong gains that we demonstrated could be extended to other cross-lingual tasks, such as cross-lingual question answering and event detection. Developing cross-lingual methods which are competitive with in-language models for real world, semantically challenging NLP problems remains an open problem and a subject of future research.

The experiments in this paper are based on the PyTorch and PyText packages. We use Adam as the optimizer. For all experiments, we grid search the learning rate in the set {5 × 10^−6, 1 × 10^−5, 2 × 10^−5}. When using the UDA method, we also try the three different annealing strategies introduced in the UDA paper, and the λ in the UDA objective is always set to 1. The batch size in the Ft and UDA+Self methods is 128. In the UDA method, the batch size is 16 for the labeled data and 80 for the unlabeled data. Due to the limitations of GPU memory, in all experiments we set the maximum sample length to 256 tokens and cut the input tokens exceeding this length. Finally, we report the results obtained with the best hyper-parameters.

As for the augmentation process, we sweep the temperature which controls the diversity of the translation decoding. The best temperatures for "en-de, en-fr, en-es" and "en-ru" are 1.0 and 0.6, respectively, and the sampling space is the whole vocabulary. In the "en-zh" setting, the temperature is 1.0 and the sampling space is the top 100 tokens in the vocabulary. We note that this uses the Facebook production translation models, and results could vary when other translation systems are applied. For reproducibility, we will release the augmented datasets that we generated.

Here, we provide a baseline using only English samples to perform semi-supervised learning. More specifically, we first train the model with English unlabeled data and augmented samples, then test it on the different target domains. This approach is similar to the traditional translate-test method. It offers a baseline which merely increases the amount of data without providing target-domain information. During the test phase, we experiment with two input strategies: one uses the original test samples, and the other translates the samples into English.

We report the results (Table 5) of the UDA(XLM) method with the two input strategies and compare them with the main results, which use the unlabeled data from the target domain. First, we observe that the performance of using original and translated samples is similar. Second, compared with the Ft(XLM) baselines in section 4.3, utilizing the unlabeled data from the English domain is slightly better than only training with labeled data, but it still lags behind the performance of using the unlabeled data from the target domain.

Table 5: The first part is the baseline using the English unlabeled data. The second part shows the results using the unlabeled data from the target domain, which are copied from section 4.3.
A.3 ABLATION STUDY: TRANSLATE-TRAIN

As discussed earlier, fine-tuning XLM on the target language would degrade the multilingual ability of the model. We apply the translate-train method to tackle this problem. In order to understand the influence of this strategy when using the proposed framework, we perform an ablation study. We test 3 input strategies. English: use the original English data as training data. tr-train: use the translate-train strategy, which translates the training data into the target language. both: combine the original and the translated data as the training data. We report the results of the UDA(XLM) method on the sentiment classification tasks and the UDA(XLM_ft) method on the news classification tasks in Table 6.

Table 6: Ablation study of the translate-train strategies. Results for sentiment classification shown on the left, and news document classification on the right.

We observe that in most cases, using the original training examples achieves the best performance. However, in special cases such as MLDoc-ru, the translate-train method achieves better performance. We recommend trying both approaches in practice.

Given a translation system, we use a sampling-based decoding strategy to translate the samples. The sampling space is the entire vocabulary. We tune the temperature µ of the softmax distribution. As discussed above, this controls the trade-off between quality and diversity. When µ = 0, the sampling reduces to greedy search and produces the highest-quality samples. When µ = 1, the sampling produces diverse outputs but loses some semantic information. Table 7 illustrates the influence of the µ value on the final performance in the English-to-French and English-to-German settings. The results show that the temperature has a significant influence on the final performance. However, because the quality of translation systems for different language pairs is not the same, their best temperatures also vary. In Appendix A.1, we include the best temperature values for the translation systems used in this paper.

Table 7: Effect of the temperature of the translation sampling decoder.
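The effect of µ can be made precise with a few lines of sampling code; this is a generic temperature sampler for a decoding step, not the production decoder used in the paper.

```python
import numpy as np

def sample_token(logits, mu):
    """Temperature-controlled sampling used for augmentation decoding.

    mu -> 0 approaches greedy decoding (highest quality, least diversity);
    mu = 1 samples from the model's full distribution (more diverse,
    potentially noisier paraphrases).
    """
    if mu == 0:
        return int(np.argmax(logits))
    z = logits / mu
    z -= z.max()                      # numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return int(np.random.choice(len(p), p=p))
```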
A.5 COMPUTATION TIME OF UDA AND MLM PRETRAINING

From the main results in section 4.3, we can see that MLM pre-training can offer better improvements than the UDA method. However, it is also more resource intensive, since MLM pre-training is a token-level task with a large output space, which leads to more computationally intensive updates and also takes longer to converge. In our experiments, we used NVIDIA V100-16G GPUs to train all models: 8 GPUs were used to train the Ft and UDA methods, and 32 GPUs to perform MLM pre-training. In the "amazonen->amazonfr" setting, for example, the unlabeled set contains 50K unlabeled samples and 8M tokens after BPE tokenization. The Ft method takes 3.2 GPU hours to converge. The UDA method takes 16.8 GPU hours, excluding the time it takes to generate augmented samples (which we handle as part of data pre-processing). MLM pre-training takes upwards of 500 GPU hours to converge. This is another factor which should be taken into account.

As further evidence that our method addresses the domain mismatch, we apply our framework to the monolingual cross-domain document classification problem. We again focus on sentiment classification, where data comes from two different domains: product reviews (amazon-en, amazon-cn) and business reviews (Yelp and Dianping). We train and test on the same language, only transferring across domains. We consider the two domain pairs amazonen-yelp and amazoncn-dianping. The results are illustrated in Table 8. The conclusions are similar to those of the cross-lingual setting (section 4.3):

• There exists a clear gap between the cross-domain and in-domain results of the Ft method, even when using strong pre-trained representations.

• By leveraging the unlabeled data from the target domain, we can significantly boost the model performance.

• The best results are achieved with our combined approach and almost completely match the in-domain baselines.
A distinct commonality between HMMs and RNNs is that they both learn hidden representations for sequential data. In addition, it has been noted that the backward computation of the Baum-Welch algorithm for HMMs is a special case of the back-propagation algorithm used for neural networks. Do these observations suggest that, despite their many apparent differences, HMMs are a special case of RNNs? In this paper, we investigate a series of architectural transformations between HMMs and RNNs, both through theoretical derivations and empirical hybridization, to answer this question. In particular, we investigate three key design factors -independence assumptions between the hidden states and the observation, the placement of softmax, and the use of non-linearity- in order to pin down their empirical effects. We present a comprehensive empirical study to provide insights on the interplay between expressivity and interpretability with respect to language modeling and part-of-speech induction.

Sequence is a common structure among many forms of naturally occurring data, including speech, text, video, and DNA. As such, sequence modeling has long been a core research problem across several fields of machine learning and AI. By far the most widely used approach for decades is the Hidden Markov Model (LeCun et al. 1990a; BID1; BID10), which assumes a sequence of discrete latent variables to generate a sequence of observed variables. When the latent variables are unobserved, unsupervised training of HMMs can be performed via the Baum-Welch algorithm (which, in turn, is based on the forward-backward algorithm), as a special case of Expectation-Maximization (EM) (BID4). Importantly, the discrete nature of the latent variables has the benefit of interpretability, as they recover contextual clusterings of the output variables.

In contrast, Recurrent Neural Networks (RNNs), introduced later in the form of BID11 and BID6 networks, assume continuous latent representations. Notably, unlike the hidden states of HMMs, there is no probabilistic interpretation of the hidden states of RNNs, regardless of their many different architectural variants (e.g. LSTMs of BID9, GRUs of BID3 and RANs of BID13).

Despite their many apparent differences, both HMMs and RNNs model hidden representations for sequential data. At the heart of both models are: a state at time t, a transition function f: h_{t−1} → h_t in latent space, and an emission function g: h_t → x_t. In addition, it has been noted that the backward computation in the Baum-Welch algorithm is a special case of back-propagation for neural networks (BID5). Therefore, a natural question arises as to the fundamental relationship between HMMs and RNNs. Might HMMs be a special case of RNNs?

In this paper, we investigate a series of architectural transformations between HMMs and RNNs, both through theoretical derivations and empirical hybridization. In particular, we demonstrate that the forward marginal inference for an HMM (accumulating forward probabilities to compute the marginal emission and hidden state distributions at each time step) can be reformulated as equations for computing an RNN cell. In addition, we investigate three key design factors (independence assumptions between the hidden states and the observation, the placement of softmax, and the use of non-linearity) in order to pin down their empirical effects. FIG0 gives an overview of the models; above each model we indicate the type of transition and emission cells used: H for HMM, R for RNN/Elman, and F for a novel Fusion defined in §3.3.
It is particularly important for understanding this work to track when a vector is a distribution (resides in a simplex) versus in the unit cube (e.g. after a sigmoid non-linearity); these two cases are distinguished in the notation for the recurrent vector c^(i) in FIG0.

Our work is supported by several earlier works such as BID23 and BID25 that have also noted the connection between RNNs and HMMs (see §7 for more detailed discussion). Our contribution is to provide the first thorough theoretical investigation into the model variants, carefully controlling for every design choice, along with comprehensive empirical analysis over the spectrum of possible hybridizations between HMMs and RNNs.

We find that the key elements to better performance of the HMMs are the use of a sigmoid instead of a softmax non-linearity in the recurrent cell, and the use of an unnormalized output distribution matrix in the emission computation. On the other hand, multiplicative integration of the previous hidden state and input embedding, and intermediate normalizations in the cell computation, are less consequential. We also find that the HMM outperforms other RNN variants for unsupervised prediction of the next POS tag, demonstrating the advantages of discrete bottlenecks for increased interpretability.

The rest of the paper is structured as follows. First, we present in §2 the derivation of HMM marginal inference as a special case of RNN computation. Next, in §3, we explore a gradual transformation of HMMs into RNNs. In §4, we present the reverse transformation of Elman RNNs back to HMMs. Finally, building on these continua, we provide empirical analysis in §5 and §6 to pinpoint the empirical effects of the varying design choices over the possible hybridizations between HMMs and RNNs. We discuss related work in §7 and conclude in §8.

We start by defining HMMs as sequence models, together with the forward-backward algorithm which is used for inference. Then we show that, by rewriting the forward algorithm, the computation can be viewed as updating a hidden state at each time step by feeding the previous word prediction, and then computing the next word distribution, similar to the way RNNs are structured. The resulting architecture corresponds to the first cell in FIG0.

Let x^(1:n) = {x^(1), ..., x^(n)} be a sequence of random variables, where each x is drawn from a vocabulary V of size v, and an instance x is represented as an integer w or a one-hot vector e(w), where w corresponds to an index in V. We also define a corresponding sequence of hidden variables h^(1:n) = {h^(1), ..., h^(n)}, where h^(i) ∈ {1, 2, ..., m}. The distribution P(x) is defined by marginalizing over h, and factorizes as follows:

P(x) = Σ_h Π_{i=1}^{n} P(x^(i) | h^(i)) P(h^(i) | h^(i−1)).

We define the hidden state distribution, referred to as the transition distribution, as

P(h^(i) = j | h^(i−1) = k) = softmax(W e(k) + b)_j,   W ∈ R^{m×m}, b ∈ R^m,

and the emission (output) distribution as

P(x^(i) = w | h^(i) = j) = softmax(E e(j) + d)_w,   E ∈ R^{v×m}, d ∈ R^v.

Inference for HMMs (marginalizing over the hidden states to compute the observed sequence probabilities) is performed with the forward-backward algorithm. The backward algorithm is equivalent to automatically differentiating the forward algorithm (BID5). Therefore, while traditional HMM implementations had to implement both the forward and the backward algorithm, and train the model with the EM algorithm, we only implement the forward algorithm in standard deep learning software, and perform end-to-end minibatched SGD training, efficiently parallelized on the GPU.

Let w = {w^(1), ..., w^(n)} be the observed sequence, where w^(i) denotes a word index and e(w^(i)) its one-hot representation.
The forward probabilities a are defined recurrently (i.e., sequentially recursively) as

a^(1)(j) = P(h^(1) = j) P(x^(1) = w^(1) | h^(1) = j),
a^(i)(j) = P(x^(i) = w^(i) | h^(i) = j) Σ_{k=1}^{m} P(h^(i) = j | h^(i−1) = k) a^(i−1)(k).

This can be rewritten by defining

c^(i)(j) = P(h^(i) = j | x^(1:i−1)),
s^(i)(j) = P(h^(i) = j | x^(1:i)),
P(x^(i) | x^(1:i−1)) = Σ_j P(x^(i) | h^(i) = j) c^(i)(j),

and substituting a, so that the recursion is rewritten as (left below), or, expressed directly in terms of the parameters used to define the distributions with vectorized computations (right below):

c^(i)(j) = Σ_k P(h^(i) = j | h^(i−1) = k) s^(i−1)(k),    c^(i) = softmax_col(W) s^(i−1),
s^(i)(j) ∝ P(x^(i) = w^(i) | h^(i) = j) c^(i)(j),        s^(i) = normalize(softmax_col(E)^T w^(i) ⊙ c^(i)),

where softmax_col applies the softmax to each column, w^(i) is used as a one-hot vector, and the bias vectors b and d are omitted for clarity. Note that the computation of s^(i) can be delayed until time step i+1. The computation step can therefore be rewritten to let c be the recurrent vector (equivalent log-space formulations can be used in practice):

e^(i) = softmax_col(E)^T w^(i−1),
s^(i−1) = normalize(e^(i) ⊙ c^(i−1)),
c^(i) = softmax_col(W) s^(i−1),
P(x^(i) | x^(1:i−1)) = softmax_col(E) c^(i).

This can be viewed as a step of a recurrent neural network with tied input and output embeddings: the first equation embeds the previous prediction, the feeding and transition equations update the hidden state c, corresponding to the cell of an RNN, and the emission equation computes the output next-word probability. We can now compare this formulation against the definition of an Elman RNN with tied embeddings and a sigmoid non-linearity:

c^(i) = σ(W c^(i−1) + U E^T w^(i−1) + b),
P(x^(i) | x^(1:i−1)) = softmax(E c^(i) + d).

These equations correspond to the first and last cells in FIG0. The Elman RNN has the same parameters, except for an additional input matrix U ∈ R^{m×m}. We will evaluate these model variants empirically, and also investigate their interpretability.

By relaxing the independence assumption of the HMM transition probability distribution, we can increase the expressiveness of the HMM "cell" by modelling more complex interactions between the fed word and the hidden state. Following Tran et al., we define the transition distribution to additionally condition on the previous word, parameterized by a tensor W ∈ R^{m×m×m} and a bias matrix B ∈ R^{m×m}. As the tensor-based method increases the number of parameters considerably, we also propose an additive version, in which the embedded previous word contributes an additive term to the transition logits.

Gating-based feeding: Finally, we propose a more expressive model where the interaction is controlled via a gating mechanism and the feeding step uses unnormalized embeddings (this does not violate the HMM factorization); the state is renormalized with normalize(y)_j = y_j / Σ_{j'} y_{j'}.

Another way to make HMMs more expressive is to relax their independence assumptions by delaying when vectors are normalized to probability distributions through the softmax function. The computation of the recurrent vector c^(i) = P(h^(i) | x^(1:i−1)) is replaced with

c^(i) = softmax(W s^(i−1) + b),

i.e., the softmax is applied after multiplying the (now unnormalized) transition weights with the state vector. Both c and s are still valid probability distributions, but the independence assumption in the distribution over h^(i) no longer holds.

A further transformation is to delay the emission softmax until after multiplication with the hidden vector. This effectively replaces the HMM's emission computation with that of the RNN:

P(x^(i) | x^(1:i−1)) = softmax(E c^(i) + d).

This formulation breaks the independence assumption that the output distribution is conditioned only on the hidden state assignment. Instead, it can be viewed as taking the expectation over the (unnormalized) embeddings with respect to the state distribution c^(i), which is then softmaxed (H_R in FIG0).
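To illustrate the basic HMM cell of §2 in its RNN form, here is a NumPy sketch of a single forward step following our reconstruction of the equations above (biases omitted, as in the derivation); the parameter shapes follow the definitions given earlier.

```python
import numpy as np

def softmax(x, axis=0):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def hmm_cell_step(c, w_prev, W, E):
    """One step of the HMM 'RNN cell' (sketch).

    c:      (m,)   P(h^(i-1) | x^(1:i-2)), the recurrent state
    w_prev: int    previous observed word index
    W:      (m,m)  transition logits; column-softmax gives P(h^(i)|h^(i-1))
    E:      (v,m)  tied emission/embedding logits; column-softmax over the
                   vocabulary axis gives P(x|h=j) in column j
    Returns the updated state and the next-word distribution.
    """
    O = softmax(E, axis=0)              # (v, m) per-state emission dists
    s = O[w_prev] * c                   # feeding: P(w_prev | h) * c
    s = s / s.sum()                     # posterior over the previous state
    c_new = softmax(W, axis=0) @ s      # transition
    p_next = O @ c_new                  # marginal next-word distribution
    return c_new, p_next
```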
The recurrent state c is then no longer a distribution, so the output has to be renormalized so that the emission still computes a distribution:

c^{(i)} = σ(W s^{(i-1)}), P(x^{(i)} | x^{(1:i-1)}) = O normalize(c^{(i)}).

This model could also be combined with a delayed emission softmax, which we will see makes it closer to an Elman RNN. This model is indicated as F, for fusion, in FIG0.

4 TRANSFORMING AN RNN TOWARDS AN HMM

Analogously to making the HMM more similar to Elman RNNs, we can make Elman networks more similar to HMMs. Examples of these transformations can be seen in the last two cells in FIG0. First, we use the Elman cell with an HMM emission. This requires the hidden state to be a distribution, so we consider two options. One is to replace the sigmoid non-linearity with the softmax function:

c^{(i)} = softmax(W c^{(i-1)} + U x^{(i)}), P(x^{(i)} | x^{(1:i-1)}) = O c^{(i)}.

This model is depicted as R H in FIG0. The second formulation is to keep the sigmoid non-linearity, but normalize the hidden state output in the emission computation:

c^{(i)} = σ(W c^{(i-1)} + U x^{(i)}), P(x^{(i)} | x^{(1:i-1)}) = O normalize(c^{(i)}).

In the HMM cell, the integration of the previous recurrent state and the input embedding is modelled through an element-wise product instead of adding affine transformations of the two vectors. We can modify the Elman cell to do a similar multiplicative integration:

c^{(i)} = σ((W c^{(i-1)}) ⊙ (U x^{(i)})).

Or, using a single transformation matrix applied to the element-wise product:

c^{(i)} = σ(W (c^{(i-1)} ⊙ x^{(i)})).

Finally, and most extreme, we experiment with replacing the sigmoid non-linearity with a softmax:

c^{(i)} = softmax(W c^{(i-1)} + U x^{(i)}).

And a more flexible variant, where the softmax is applied only to compute the emission distribution, while the sigmoid non-linearity is still applied to the recurrent state:

z^{(i)} = W c^{(i-1)} + U x^{(i)}, c^{(i)} = σ(z^{(i)}), with the emission computed from softmax(z^{(i)}).

Our formulations investigate a series of small architectural changes to HMMs and Elman cells. In particular, these changes raise questions about the expressivity and importance of normalization within the recurrence and of independence assumptions during emission. In this section, we analyze the effects of these changes quantitatively via a standard language modeling benchmark. We follow the standard PTB language modeling setup BID2 BID16. We work with one-layer models to enable a direct comparison between RNNs and HMMs, and a budget of 10 million parameters (typically corresponding to hidden state sizes of around 900). Models are trained with batched backpropagation through time (35 steps). Input and output embeddings are tied in all models. Models are optimized with a grid search over optimizer parameters for two strategies: SGD and AMSProp. AMSProp is based on the optimization setup proposed by BID14.

We see from the results in TAB4 (also depicted in Figure 2, which plots how perplexities change under our transformations, and which transformations lead the models to converge toward and pass each other) that the HMM models perform significantly worse than the Elman network, as expected. Interestingly, many of the HMM variants that in principle have more expressivity or weaker independence assumptions do not perform better than the vanilla HMM. This includes delaying the transition or emission softmax, and most of the feeding models. The exception is the gated feeding model, which does substantially better, showing that gating is an effective way of incorporating more context into the transition matrix. Using a sigmoid non-linearity before the output of the HMM cell (instead of a softmax) does improve performance (by 44 ppl), and combining that with delaying the emission softmax gives a substantial improvement (almost another 100 ppl), making it much closer to some of the RNN variants.
We also evaluate variants of Elman RNNs. Just replacing the sigmoid non-linearity with the softmax function leads to a substantial drop in performance (120 ppl), although such models still perform better than the HMM variants whose recurrent state is a distribution. Another way to investigate the effect of the softmax is to normalize the hidden state output just before applying the emission function, while keeping the sigmoid non-linearity: this performs somewhat worse than the softmax non-linearity, which indicates that it matters whether the input to the emission function is normalized or softmaxed before being multiplied with the (emission) embedding matrix. As a point of comparison for how much the softmax non-linearity acts as a bottleneck, a neural bigram model outperforms these approaches, obtaining 177 validation perplexity in this same setup. Replacing the RNN emission function with that of an HMM leads to even worse performance than the HMM; here, using a softmax non-linearity or a sigmoid followed by normalization does not make a significant difference. Using multiplicative integration leads to only a small drop in performance compared to a vanilla Elman RNN, and doing so with a single transformation matrix (making it comparable to what an RNN is doing) leads to only a small further drop. In contrast, preliminary experiments showed that the second transformation matrix is crucial to the performance of the vanilla Elman network. In our experimental setup an LSTM performs only slightly better than the Elman network (80 vs 87 perplexity). While more extensive hyperparameter tuning BID14 or more sophisticated optimization and regularization techniques BID15 would likely improve performance, that is not the goal of this evaluation.

A strength of HMM bottlenecks is forcing the model to produce an interpretable hidden representation. A classic example of this property is part-of-speech tag induction. It is therefore natural to ask whether changes in the architecture of our models correlate with their ability to discover syntactic properties. We evaluate this by analyzing each model's implicitly predicted tag distribution at each time step. Specifically, while no model is likely to predict the correct next word, we assume the HMM's errors will preserve basic tag-tag patterns of the language, and that this may not be true for RNNs. We test this by computing the accuracy of predicting the tag of the next word in the sequence from the next-word distribution. None of the models were trained to perform this task. First, we compute a tag distribution p(t|w) for every word w in the training portion of the Penn Treebank. Next, we multiply this by the model's next-word probability p^{(i)}(w) and sum across the vocabulary, which gives the model's distribution over tags at the given time step:

p^{(i)}(t) = Σ_w p(t|w) p^{(i)}(w).

We compare the most likely marginal tag against the ground truth to compute a tagging accuracy. This evaluation rewards models which place their emission probability mass predominantly on words of the correct part-of-speech. We compute this metric across both the full PTB tagset and the universal tags of BID18. The HMM also allows for Viterbi decoding, which lets us compute p(t | argmax-dim(c^{(i)})), the tag distribution conditioned only on the maximum dimension of the state. The more distributed a model's representations are, the more the tag distribution given the max dimension will differ from the complete marginal. For HMMs with distributional hidden states the maximum dimension provided the best performance; in contrast, Elman models perform best when conditioned on the full hidden state.
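A sketch of this evaluation (array names are ours; p(t|w) is the empirical tag distribution estimated from the training data):

```python
import numpy as np

def tag_accuracy(next_word_probs, gold_tags, p_tag_given_word):
    """Marginal tag-prediction accuracy described above.

    next_word_probs  : (n, v) model's next-word distribution at each position
    gold_tags        : (n,)   gold tag index of each next word
    p_tag_given_word : (v, t) empirical P(tag | word) from the training data
    """
    # p(t)^(i) = sum_w p(t | w) * p^(i)(w): marginalize tags over the
    # model's next-word distribution.
    p_tag = next_word_probs @ p_tag_given_word        # (n, t)
    predicted = p_tag.argmax(axis=1)
    return (predicted == np.asarray(gold_tags)).mean()
```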
Results are shown in TAB2 and plotted against perplexity in FIG3.

A number of recent papers have identified variants of gated RNNs which are simpler than LSTMs but perform competitively or satisfy properties that LSTMs lack. Foerster et al. proposed RNNs without recurrent non-linearities to improve interpretability. BID0 proposed gated RNN variants with type constraints. BID17 identified a class of RNNs called rational recurrences, in which the hidden states can be computed by WFSAs. Another strand of recent work has proposed neural models that learn discrete, interpretable structure: BID26 introduced a mixture-of-softmaxes model where the output distribution is conditioned on a discrete latent variable. BID20 proposed a language model that jointly learns unsupervised syntactic (tree) structure, while BID21 used neural hidden Markov models for part-of-speech induction. BID24 and BID22 proposed models for segmental structure over sequences, while neural transduction models with discrete latent alignments have also been proposed BID27.

In this work, we presented a theoretical and empirical investigation into the model variants over the spectrum of possible hybridizations between HMMs and RNNs. By carefully controlling for every design choice, we provide new insights into several factors, including independence assumptions, the placement of the softmax, and the use of non-linearities, and into how these choices influence the interplay between expressiveness and interpretability. Comprehensive empirical results demonstrate that the key elements to better performance of the HMM are the use of a sigmoid instead of a softmax non-linearity in the recurrent cell, and the use of an unnormalized output distribution matrix in the emission computation. Multiplicative integration of the previous hidden state and input embedding, and intermediate normalizations in the cell computation, are less consequential. We also find that the HMM outperforms other RNN variants in a next-POS-tag prediction task, which demonstrates the advantage of models with discrete bottlenecks: increased interpretability.
HyesB2RqFQ
Are HMMs a special case of RNNs? We investigate a series of architectural transformations between HMMs and RNNs, both through theoretical derivations and empirical hybridization, and provide new insights.
We present an information-theoretic framework for understanding trade-offs in unsupervised learning of deep latent-variable models using variational inference. This framework emphasizes the need to consider latent-variable models along two dimensions: the ability to reconstruct inputs (distortion) and the communication cost (rate). We derive the optimal frontier of generative models in the two-dimensional rate-distortion plane, and show how the standard evidence lower bound objective is insufficient to select between points along this frontier. However, by performing targeted optimization to learn generative models with different rates, we are able to learn many models that achieve similar generative performance but make vastly different trade-offs in terms of the usage of the latent variable. Through experiments on MNIST and Omniglot with a variety of architectures, we show how our framework sheds light on many recently proposed extensions to the variational autoencoder family.

Deep learning has led to tremendous advances in supervised learning BID39 BID20 BID46; however, unsupervised learning remains a challenging area. Recent advances in variational inference (VI) BID23 BID33 have led to an explosion of research in the area of deep latent-variable models and breakthroughs in our ability to model natural high-dimensional data. This class of models typically optimizes a lower bound on the log-likelihood of the data known as the evidence lower bound (ELBO), and leverages the "reparameterization trick" to make large-scale training feasible. However, a number of papers have observed that VAEs trained with powerful decoders can learn to ignore the latent variables BID44 BID6. We demonstrate this empirically and explain the issue theoretically by deriving the ELBO in terms of the mutual information between X, the data, and Z, the latent variables. Having done so, we show that the previously described β-VAE objective has a theoretical justification in terms of a Legendre transformation of a constrained optimization of the mutual information. This leads to the core point of this paper: the optimal rate of information in a model is task-dependent, and optimizing the ELBO directly makes the selection of that rate purely a function of architectural choices, whereas by using β-VAE or other constrained optimization objectives, practitioners can learn models with optimal rates for their particular task without having to do extensive architectural search.

Mutual information provides a reparameterization-independent measure of dependence between two random variables. Computing mutual information exactly in high dimensions is problematic BID29, so we turn to recently developed tools in variational inference to approximate it. We find that natural lower and upper bounds on the mutual information between the input and the latent variable can be simply related to the ELBO, and understood in terms of two quantities: a lower bound that depends on the distortion, or how well an input can be reconstructed through the encoder and decoder, and an upper bound that measures the rate, or how much information is retained about the input. Together these terms provide a unifying perspective on the set of optimal models given a dataset, and show that there exists a continuum of models that make very different trade-offs in terms of rate and distortion.
By leveraging additional information about the amount of information contained in the latent variable, we show that we can recover the ground-truth generative model used to create the data in a toy problem. We perform extensive experiments on MNIST and Omniglot using a variety of encoder, decoder, and prior architectures, and demonstrate how our framework provides a simple and intuitive mechanism for understanding the trade-offs made by these models. We further show that we can control this trade-off directly by optimizing the β-VAE objective, rather than the ELBO. By varying β, we can learn many models with the same architecture and comparable generative performance (in terms of marginal data log-likelihood) that nevertheless exhibit qualitatively different behavior in terms of the usage of the latent variable and the variability of the decoder.

Unsupervised Representation Learning. Depending on the task, there are many desiderata for a good representation. Here we focus on one aspect of a learned representation: the amount of information that the latent variable contains about the input. In the absence of additional knowledge of a "downstream" task, we focus on the ability to recover or reconstruct the input from the representation. Given a set of samples from a true data distribution p*(x), our goal is to learn a representation that contains a particular amount of information and from which the input can be reconstructed as well as possible. We convert each observed data vector x into a latent representation z using any stochastic encoder e(z|x) of our choosing. This induces the joint distribution p_e(x, z) = p*(x) e(z|x), the corresponding marginal p_e(z) = ∫dx p*(x) e(z|x) (the "aggregated posterior" in BID27; BID44), and the conditional p_e(x|z) = p_e(x, z)/p_e(z).

A good representation Z must contain information about the input X, which we define as follows:

I_rep(X; Z) = ∫dx ∫dz p_e(x, z) log [p_e(x, z) / (p*(x) p_e(z))].

We call this the representational mutual information, to distinguish it from the generative mutual information we discuss in Appendix C. Equation 1 is hard to compute, since we do not have access to the true data density p*(x), and computing the marginal p_e(z) = ∫dx p_e(x, z) can be challenging. As demonstrated in BID5, BID2, and BID39, there exist useful, tractable variational bounds on mutual information. The detailed derivations for this case are included in Appendices B.1 and B.2 for completeness. They yield the following lower and upper bounds:

H − D ≤ I_rep(X; Z) ≤ R,

where d(x|z) (the "decoder") is a variational approximation to p_e(x|z), m(z) (the "marginal") is a variational approximation to p_e(z), and all the integrals can be approximated using Monte Carlo given a finite sample of data from p*(x), as we discuss below.

In connection with rate-distortion theory, we can interpret the upper bound as the rate R of our representation BID42:

R ≡ ∫dx p*(x) ∫dz e(z|x) log [e(z|x) / m(z)].

This rate term measures the average number of additional nats necessary to encode samples from the encoder using an entropic code constructed from the marginal, being an average KL divergence. Unlike most rate-distortion work (Cover & BID9), where the marginal is assumed to be a fixed property of the channel, here the marginal is a completely general distribution, which we assume is learnable.
Similarly, we can interpret the lower bound in terms of the data entropy H, which measures the complexity of the dataset (a fixed but unknown constant), minus the distortion D, which measures our ability to accurately reconstruct samples:

H ≡ −∫dx p*(x) log p*(x), D ≡ −∫dx p*(x) ∫dz e(z|x) log d(x|z).

This distortion term is defined in terms of an arbitrary decoding distribution d(x|z), which we consider a learnable distribution. This contrasts with most of the compression literature, where distortion is typically measured using a fixed perceptual loss (Ballé et al., 2017). Combining these equations, we get the "sandwich equation"

H − D ≤ I_rep(X; Z) ≤ R.
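Both R and D are straightforward to estimate with Monte Carlo samples. A minimal sketch, assuming the encoder, decoder, and marginal expose torch.distributions-style log_prob/rsample interfaces (the function and argument names here are ours):

```python
import torch

def rate_and_distortion(x, encoder, decoder, marginal):
    """Single-sample Monte Carlo estimates of R and D for a minibatch x.

    encoder(x) -> e(z|x) and decoder(z) -> d(x|z) are torch.distributions
    objects whose log_prob sums over event dimensions; marginal is m(z).
    """
    e = encoder(x)
    z = e.rsample()  # reparameterized sample, so both terms are differentiable
    # R = E_{p*(x)} E_{e(z|x)} [log e(z|x) - log m(z)], an average KL divergence.
    rate = (e.log_prob(z) - marginal.log_prob(z)).mean()
    # D = -E_{p*(x)} E_{e(z|x)} [log d(x|z)], the reconstruction term.
    distortion = -decoder(z).log_prob(x).mean()
    return rate, distortion
```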
Phase Diagram. From the sandwich equation, we see that H − D − R ≤ 0. This is a bound that must hold for any set of four distributions p*(x), e(z|x), d(x|z), m(z). The inequality places strict limits on which values of rate and distortion are achievable, and allows us to reason about all possible solutions in a two-dimensional RD-plane. A sketch of this phase diagram is shown in FIG0.

First, we consider the data entropy term. For discrete data, all probabilities in X are bounded above by one, and both the data entropy and the distortion are non-negative (H ≥ 0, D ≥ 0). The rate is also non-negative (R ≥ 0), because it is an average KL divergence, for either continuous or discrete Z. The positivity constraints and the sandwich equation separate the RD-plane into feasible and infeasible regions, visualized in FIG0. The boundary between these regions is a convex curve (thick black line). Even given complete freedom in specifying the encoder e(z|x), the decoder d(x|z), and the marginal approximation m(z), and infinite data, we can never cross this bounding line.

We now explain qualitatively what the different areas of this diagram correspond to. For simplicity, we consider the infinite-model-family limit, where we have complete freedom in specifying e(z|x), d(x|z), and m(z) but consider the data distribution p*(x) fixed. The bottom horizontal line corresponds to the zero-distortion setting, which implies that we can perfectly encode and decode our data; we call this the auto-encoding limit. The lowest possible rate is given by H, the entropy of the data, corresponding to the point (R = H, D = 0). (In this case, our lower bound is tight, and hence d(x|z) = p_e(x|z).) We can obtain higher rates at fixed distortion by making the marginal approximation m(z) a weaker approximation to p_e(z), since only the rate, and not the distortion, depends on m(z).

The left vertical line corresponds to the zero-rate setting. Since R = 0 implies e(z|x) = m(z), our encoding distribution must be independent of x. Thus the latent representation does not encode any information about the input, and we have failed to create a useful learned representation. However, by using a suitably powerful decoder d(x|z) that is able to capture correlations between the components of x (e.g., an autoregressive model, such as PixelCNN, or an acausal MRF model, such as BID10), we can still reduce the distortion to the lower bound of H, thus achieving the point (R = 0, D = H); we call this the auto-decoding limit. Hence we can do density estimation without learning a good representation, as we verify empirically in Section 4. (Note that since R is an upper bound on the mutual information, in the limit that R = 0 the bound must be tight, which guarantees that m(z) = p_e(z).) We can achieve solutions further up on the D-axis, while keeping the rate fixed, simply by making the decoder worse, since only the distortion, and not the rate, depends on d(x|z). Finally, we discuss solutions along the diagonal line. Such points satisfy D = H − R, and hence both of our bounds are tight, so m(z) = p_e(z) and d(x|z) = p_e(x|z). (Proofs of these claims are given in Sections B.3 and B.4, respectively.)

So far, we have considered the infinite-model-family limit. If we have only finite parametric families for each of d(x|z), m(z), and e(z|x), we expect in general that our bounds will not be tight. Any failure of the approximate marginal m(z) to model the true marginal p_e(z), or of the decoder d(x|z) to model the true likelihood p_e(x|z), will lead to a gap with respect to the optimal black surface. However, it will still be the case that H − D − R ≤ 0. This suggests that there will still be a one-dimensional optimal surface, D(R) or R(D), where optimality is defined as the tightest achievable sandwiched bound within the parametric family. We will use the term RD curve to refer to this optimal surface in the rate-distortion (RD) plane. Since the data entropy H is outside our control, this surface can be found by means of constrained optimization, either minimizing the distortion at some fixed rate, or minimizing the rate at some fixed distortion, as we show below. Furthermore, by the same arguments as above, this surface should be monotonic in both R and D, since for any solution, with only very mild assumptions on the form of the parametric families, we should always be able to make m(z) less accurate in order to increase the rate at fixed distortion (see the shift from the red curve to the blue curve in FIG0), or make the decoder d(x|z) less accurate to increase the distortion at fixed rate (see the shift from the red curve to the green curve in FIG0).

Optimization. In this section, we discuss how we can find models that target a given point on the RD curve. Recall that the rate R and distortion D are given by

R = ∫dx p*(x) ∫dz e(z|x) log [e(z|x) / m(z)], D = −∫dx p*(x) ∫dz e(z|x) log d(x|z).

These can both be approximated using a Monte Carlo sample from our training set. We also require that the terms log d(x|z), log m(z), and log e(z|x) be efficient to compute, and that e(z|x) be efficient to sample from. In Section 4, we will describe the modeling choices we made for our experiments. In order to explore the qualitatively different optimal solutions along the frontier, we need to explore different rate-distortion trade-offs. One way to do this would be to perform some form of constrained optimization at fixed rate. Alternatively, instead of considering the rate as fixed and tracing out the optimal distortion as a function of the rate, D(R), we can perform the Legendre transformation and find the optimal rate and distortion for a fixed slope β = ∂D/∂R, by minimizing min_{e(z|x), m(z), d(x|z)} D + βR. Writing this objective out in full, we get

min_{e(z|x), m(z), d(x|z)} ∫dx p*(x) ∫dz e(z|x) [−log d(x|z) + β log (e(z|x) / m(z))].

If we set β = 1, this matches the ELBO objective used when training a VAE BID23, with the distortion term matching the reconstruction loss and the rate term matching the "KL term". Note, however, that this objective does not distinguish between any of the points along the diagonal of the optimal RD curve, all of which have β = 1 and the same ELBO. Thus the ELBO objective alone (and the marginal likelihood) cannot distinguish between models that make no use of the latent variable (autodecoders) and models that make large use of the latent variable and learn useful representations for reconstruction (autoencoders).
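In code, this objective is a one-liner on top of the Monte Carlo estimates sketched earlier; β = 1 recovers the R + D combination the text identifies with the (negative) ELBO:

```python
def beta_vae_loss(x, encoder, decoder, marginal, beta=1.0):
    """D + beta * R, the objective written out above (a sketch reusing
    rate_and_distortion; encoder, decoder, and marginal all receive gradients)."""
    rate, distortion = rate_and_distortion(x, encoder, decoder, marginal)
    return distortion + beta * rate
```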
This is demonstrated experimentally in Section 4. If we allow a general β ≥ 0, we get the β-VAE objective used by Higgins et al. and BID39. This allows us to smoothly interpolate between auto-encoding behavior (β = 0), where the distortion is low but the rate is high, and auto-decoding behavior (β = ∞), where the distortion is high but the rate is low, all without having to change the model architecture. However, unlike Alemi et al. and the work above, we additionally optimize over the marginal m(z) and compare across a variety of architectures, thus exploring a much larger solution space, which we illustrate empirically in Section 4. Here we present an overview of the most closely related work; a more detailed treatment can be found in Appendix D.

Model families for unsupervised learning with neural networks. There are two broad areas of active research in deep latent-variable models with neural networks: methods based on the variational autoencoder (VAE), introduced by BID23 and BID33, and methods based on generative adversarial networks (GANs), introduced by BID15. In this paper, we focus on the VAE family of models. In particular, we consider recent variants using inverse autoregressive flow (IAF) BID24, masked autoregressive flow (MAF) BID30, PixelCNN++, and the VampPrior BID44, as well as common Conv/Deconv encoders and decoders.

Information Theory and machine learning. BID5 was the first to introduce tractable variational bounds on mutual information, and made close analogies and comparisons to maximum likelihood learning and variational autoencoders. The information bottleneck framework BID43 BID36 BID42 BID39 BID0 allows a model to smoothly trade off the minimality of the learned representation Z of the data X, by minimizing their mutual information I(X; Z), against the informativeness of the representation for the task at hand Y, by maximizing I(Z; Y). This constrained optimization problem is rephrased with a Lagrange multiplier β as the unconstrained optimization of I(X; Z) − βI(Z; Y). BID42 plot an RD curve similar to the one in this paper, but they only consider the supervised setting, and they do not consider the information content that is implicit in powerful stochastic decoders. Higgins et al. proposed the β-VAE for unsupervised learning, which is a generalization of the original VAE in which the KL term is scaled by β, similar to this paper. However, they only considered β > 1. In this paper, we show that when using powerful autoregressive decoders, using β ≥ 1 results in the model ignoring the latent code, so it is necessary to use β < 1.

Generative Models and Compression. Much recent work has explored the use of latent-variable generative models for image compression. Ballé et al. study the problem explicitly in terms of the rate/distortion plane, adjusting a Lagrange multiplier on the distortion term to explore the convex hull of a model's optimal performance. Johnston et al. use a recurrent VAE architecture to achieve state-of-the-art image compression rates, posing the loss as minimizing distortion at a fixed rate. Other work writes the VAE loss as R + βD. BID34 show that a GAN optimization procedure can also be applied to the problem of compression. All of these efforts focus on rate/distortion trade-offs for individual models, but do not explore how the selection of the model itself affects the rate/distortion curve.
Because we explore many combinations of modeling choices, we are able to more deeply understand how model selection impacts the rate/distortion curve, and to point out the area where all current models are lacking: the auto-encoding limit. Generative compression models also have to work with both quantized latent spaces and approximately fixed decoder model families trained with perceptual losses such as MS-SSIM BID47, which constrain the form of the learned distribution. Our work does not assume either of these constraints is present for the tasks of interest.

Toy Model. In this section, we empirically show a case where the usual ELBO objective can learn a model which perfectly captures the true data distribution p*(x) but fails to learn a useful latent representation. However, by training the same model such that we minimize the distortion subject to achieving a desired target rate R*, we can recover a latent representation that closely matches the true generative process (up to a reparameterization), while also perfectly capturing the true data distribution. We create a simple data-generating process that consists of a true latent binary variable z* ∈ {z_0, z_1} ∼ Ber(0.7) with added Gaussian noise and discretization. The magnitude of the noise was chosen so that the true generative model had I(x; z*) = 0.5 nats of mutual information between the observations and the latent. We additionally choose a model family with sufficient power to perfectly autoencode or autodecode. See Appendix E for more detail on the data generation and model.

FIG1 shows various distributions computed using three models. For the left column, we use a hand-engineered encoder e(z|x), decoder d(x|z), and marginal m(z), constructed with knowledge of the true data-generating mechanism, to illustrate an optimal model. For the middle and right columns, we learn e(z|x), d(x|z), and m(z) using effectively infinite data sampled from p*(x) directly. The middle column is trained with the ELBO. The right column is trained by targeting R = 0.5 while minimizing D. (The target value R = I(x; z*) = 0.5 is computed with knowledge of the true data-generating distribution; however, this is the only information "leaked" to our method, and in general it is not hard to guess reasonable targets for R for a given task and dataset.) In both cases, we see that the learned generative distribution matches p*(x), indicating that optimization found the global optimum of the respective objectives. However, the VAE fails to learn a useful representation, only yielding a rate of R = 0.0002 nats, while the Target Rate model achieves R = 0.4999 nats. Additionally, the Target Rate model nearly perfectly reproduces the true generative process, as can be seen by comparing the yellow and purple regions in the z-space plots (middle row): both the optimal model and the Target Rate model have two clusters, one with about 70% of the probability mass, corresponding to class 0 (purple shaded region), and the other with about 30% of the mass (yellow shaded region), corresponding to class 1. In contrast, the z-space of the VAE completely mixes the yellow and purple regions, learning only a single cluster. Note that we reproduced essentially identical results with dozens of different random initializations for both the VAE and the Target Rate model; these results are not cherry-picked.

FIG1 (caption): Training with the ELBO (middle column) vs. minimizing distortion for a fixed rate (right column), compared against a hand-engineered optimal model (left column). Top: three distributions in data space: the true data distribution p*(x); the model's generative distribution g(x) = Σ_z m(z) d(x|z); and the empirical data reconstruction distribution d(x) = Σ_{x'} Σ_z p̂(x') e(z|x') d(x|z). Middle: four distributions in latent space: the learned (or computed) marginal m(z); the empirically induced marginal e(z) = Σ_x p̂(x) e(z|x); the empirical distribution over z values for data vectors in the set X_0 = {x_n : z_n = 0}, denoted e(z_0) and shown in purple; and the empirical distribution over z values for data vectors in the set X_1 = {x_n : z_n = 1}, denoted e(z_1) and shown in yellow.
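The text above specifies the constraint used to train the Target Rate model (minimize D subject to R ≈ R*) but not the particular constrained optimizer. One simple way to implement it is an absolute-value penalty on the rate deviation; this penalty form is an assumption, not the paper's stated method:

```python
def target_rate_loss(x, encoder, decoder, marginal, target_rate, penalty=1.0):
    """Minimize D subject to R ~= target_rate (a sketch; the |R - R*| penalty
    is our assumption about how to enforce the constraint)."""
    rate, distortion = rate_and_distortion(x, encoder, decoder, marginal)
    return distortion + penalty * (rate - target_rate).abs()
```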
MNIST. In this section, we show how comparing models in terms of rate and distortion separately is more useful than simply observing marginal log-likelihoods. We examine several VAE model architectures that have been proposed in the literature, using the static binary MNIST dataset originally produced for BID26. In Appendix A, we show analogous results for the Omniglot dataset.

We consider simple and complex variants for the encoder and decoder, and three different types of marginal. The simple encoder is a CNN with a fully factored 64-dimensional Gaussian for e(z|x); the more complex encoder is similar, but followed by 4 steps of mean-only Gaussian inverse autoregressive flow BID24, with each step implemented as a 3-hidden-layer MADE BID14 with 640 units in each hidden layer. The simple decoder is a multilayer deconvolutional network; the more powerful decoder is a PixelCNN++ model. The simple marginal is a fixed isotropic Gaussian, as is commonly used with VAEs; the more complicated version has a 4-step, 3-layer MADE BID14 mean-only Gaussian autoregressive flow BID30. We also consider the setting in which the marginal uses the VampPrior from BID44. We denote a particular model combination by the tuple (+/−, +/−, +/−/v), depending on whether we use a simple (−), complex (+), or VampPrior (v) version of the (encoder, decoder, marginal), respectively. In total we consider 2 × 2 × 3 = 12 models. We train them all to minimize the objective in Equation 4. Full details can be found in Appendix F. Runs were performed at various values of β ranging from 0.1 to 10.0, both with and without KL annealing BID6.

RD curve. FIG2 shows the RD plot for the 12 models on the MNIST dataset. Dashed lines represent the best achieved test ELBO of 80.2 nats, which sets an upper bound on the true data entropy H for the static MNIST dataset. This implies that any RD value above the dashed line is in principle achievable by a sufficiently powerful model. The stepwise black curves show the monotonic Pareto frontier of achieved RD points across all model families; points participating in this curve are marked with an × on the right. The grey solid line shows the corresponding convex hull, which we approach closely across all rates. Strong decoder model families dominate at the lowest and highest rates, weak decoder models dominate at intermediate rates, and strong marginal models dominate strong encoder models at most rates. Across our model families we appear to be pushing up against an approximately smooth RD curve. The 12 model families considered here, arguably representative of the classes of models considered in the VAE literature, in general perform much worse in the auto-encoding limit (bottom right corner) of the RD plane. This is likely due to a lack of power in our current marginal approximations.
FIG2 also shows the same raw data re-plotted as ELBO = R + D versus R; here some of the differences between individual model families' performance are more easily resolved. Broadly, models with a deconvolutional decoder perform well at intermediate (~22 nat) rates, but quickly suffer large distortion penalties as they move away from that point. This is perhaps unsurprising considering we trained on the binary MNIST dataset, for which the measured pixel-level sampling entropy on the test set is approximately 22 nats. Models with a powerful autoregressive decoder perform well at low rates, but for values of β ≥ 1 tend to collapse to pure autodecoding models. With the use of the VampPrior and KL annealing, however, β = 1 models can exist at finite rates of around 8 nats. Our framework helps explain the observed difficulties in the literature of training a useful VAE with a powerful decoder, and the observed utility of techniques like "free bits" BID24, "soft free bits", and KL annealing BID6. Each of these effectively trains at a reduced β, moving up along the RD curve. Without any additional modifications, simply training at reduced β is a simpler way to achieve non-vanishing rates, without additional architectural adjustments like those in the variational lossy autoencoder.

Analyzing model performance using the RD curve gives a much more insightful comparison of relative model performance than simply comparing marginal data log-likelihoods. In particular, we managed to achieve models with five-sample IWAE BID7 estimates below 82 nats (a competitive value for single-layer latent-variable models BID44) at rates spanning from 10^−4 to 30 nats. While all of those models have competitive ELBOs or marginal log-likelihoods, they differ substantially in the trade-offs they make between rate and distortion, and those differences result in qualitatively different model behavior, as illustrated in FIG3.

The interaction between latent variables and powerful decoders. Within any particular model family, we can smoothly move between and explore its performance at varying rates. An illustrative example is shown in FIG3, where we study the effect of changing β (using KL annealing from low to high) on the same -+v model, corresponding to a VAE with a simple encoder, a powerful PixelCNN++ decoder, and a powerful VampPrior marginal. In FIG3 we assess how well the models do at reconstructing their inputs. We pick an image x at random, encode it using z ∼ e(z|x), and then reconstruct it using x̂ ∼ d(x|z). When β = 1.10 (left column), the model obtains R = 0.0004, D = 80.6, ELBO = 80.6 nats. The tiny rate indicates that the decoder ignores its latent code, and hence the reconstructions are independent of the input x. For example, when the input is x = 8, the reconstruction is x̂ = 3. However, the generated images sampled from the decoder look good (this is an example of an autodecoder). At the other extreme, when β = 0.05 (right column), the model obtains R = 156, D = 4.8, ELBO = 161 nats. Here the model does an excellent job of auto-encoding, generating nearly pixel-perfect reconstructions. However, samples from this model's prior, as shown on the right, are of very poor quality, which is reflected in the worse ELBO and IWAE values. At intermediate values, such as β = 1.0 (R = 6.2, D = 74.1, ELBO = 80.3), the model seems to retain semantically meaningful information about the input, such as its class and the width of the strokes, while maintaining variation in the individual reconstructions.
In particular, notice that the individual "2" sent in is reconstructed as a similar "2" but with a visible loop at the bottom. This model also has very good generated samples. This intermediate-rate encoding arguably typifies what we want to achieve in unsupervised learning: we have learned a highly compressed representation that retains salient features of the data. In the third column, for the β = 0.15 model (R = 120.3, D = 8.1, ELBO = 128), we have very good reconstructions (FIG3) that one can visually inspect, while still obtaining a good degree of compression. This model arguably typifies the domain most compression work is interested in, where most perceivable variations in the digit are retained in the compression. However, at these higher rates the failure of our current architectures to approach their theoretical performance becomes more apparent, as the corresponding ELBO of 128 nats is much higher than the 81 nats we obtain at low rates. This is also evident in the visual degradation of the generated samples. While it is popular to visualize both the reconstructions and generated samples from VAEs, we suggest researchers visually compare several sampled decodings of the same sample of the latent variable, whether it be from the encoder or the prior, as done here in FIG3. By using a single sample of the latent variable but decoding it multiple times, one can visually inspect what features of the input are captured in the observed value of the rate. This is particularly important to do when using powerful decoders, such as autoregressive models.

We have motivated the β-VAE objective on information-theoretic grounds, and demonstrated that comparing model architectures in terms of the rate-distortion plot offers a much better look at their performance and trade-offs than simply comparing their marginal log-likelihoods. Additionally, we have shown a simple way to fix models that ignore the latent space due to the use of a powerful decoder: simply reduce β and retrain. This fix is much easier to implement than other solutions that have been proposed in the literature, and comes with a clear theoretical justification. We strongly encourage future work to report rate and distortion values independently, rather than just reporting the log-likelihood. If future work proposes new architectural regularization techniques, we suggest the authors train their objective at various rate-distortion trade-offs to demonstrate and quantify the region of the RD plane where their method dominates. Through a large set of experiments we have demonstrated the performance at various rate and distortion trade-offs for a set of representative architectures currently under study, confirming the power of autoregressive decoders, especially at low rates. We have also shown that current approaches seem to have a hard time achieving high rates at low distortion. This suggests a set of experiments with a simple encoder/decoder pair but a powerful autoregressive marginal posterior approximation, which should in principle be able to reach the auto-encoding limit, with vanishing distortion and rates approaching the data entropy. Interpreting the β-VAE objective as a constrained optimization problem also hints at the possibility of applying more powerful constrained optimization techniques, which we hope will be able to advance the state of the art in unsupervised representation learning.

A ON OMNIGLOT

FIG4 plots the RD curve for various models fit to the Omniglot dataset BID25, in the same form as the MNIST results in FIG2.
Here we explored βs for the powerful decoder models ranging from 1.1 to 0.1, and βs of 0.9, 1.0, and 1.1 for the weaker decoder models. On Omniglot, the powerful decoder models dominate the weaker decoder models, and with their autoregressive form they most naturally sit at very low rates; we were able to obtain finite rates by means of KL annealing. Further experiments will help to fill in the details, especially as we explore differing β values for these architectures on the Omniglot dataset. Our best achieved ELBO was 90.37 nats, set by the ++- model with β = 1.0 and KL annealing. This model obtains R = 0.77, D = 89.60, ELBO = 90.37 and is nearly auto-decoding. We found 14 models with ELBOs below 91.2 nats, ranging in rates from 0.0074 nats to 10.92 nats. Similar to FIG3, in FIG5 we show sample reconstructions and generated images from the same "-+v" model family trained with KL annealing but at various βs. Just as in the MNIST case, this demonstrates that we can smoothly interpolate between auto-decoding and auto-encoding behavior in a single model family, simply by adjusting the β value.

B.1 (lower bound on I_rep). Our lower bound is established by the fact that Kullback-Leibler (KL) divergences are non-negative:

KL[p_e(x|z) || d(x|z)] ≥ 0,

which implies, for any distribution d(x|z),

∫dx dz p_e(x, z) log p_e(x|z) ≥ ∫dx dz p_e(x, z) log d(x|z),

and hence

I_rep(X; Z) = H + ∫dx dz p_e(x, z) log p_e(x|z) ≥ H + ∫dx dz p_e(x, z) log d(x|z) = H − D.

B.2 (upper bound on I_rep). The upper bound is established again by the non-negativity of the KL divergence, here KL[p_e(z) || m(z)] ≥ 0:

I_rep(X; Z) = ∫dx dz p_e(x, z) log e(z|x) − ∫dx dz p_e(x, z) log p_e(z)
= ∫dx dz p_e(x, z) log e(z|x) − ∫dz p_e(z) log p_e(z)
≤ ∫dx dz p_e(x, z) log e(z|x) − ∫dz p_e(z) log m(z) = R.

B.3 (optimal marginal). Here we establish that the optimal marginal approximation m(z) is precisely the marginal distribution of the encoder. Recall

R ≡ ∫dx p*(x) ∫dz e(z|x) log [e(z|x) / m(z)].

Consider the variation of the rate with respect to the marginal approximation, m(z) → m(z) + δm(z):

δR = −∫dx p*(x) ∫dz e(z|x) δm(z)/m(z) = −∫dz p_e(z) δm(z)/m(z),

where we have kept only the first-order variation, which must vanish at an optimum. Since δm(z) is arbitrary except for the constraint that it integrate to zero (m must stay normalized), the first-order variation in the rate vanishes only if p_e(z)/m(z) is constant in z, i.e.

m(z) ∝ p_e(z),

which when normalized gives

m(z) = p_e(z):

the optimal marginal approximation is the true encoder marginal.

B.4 (optimal decoder). Next consider the variation in the distortion with respect to the decoding distribution, with a fixed encoding distribution:

δD = −∫dx dz p_e(x, z) δd(x|z)/d(x|z).

Similarly to the section above, since the variation in the decoder must integrate to zero for every z, this term vanishes only if, for every x and z, d(x|z) ∝ p_e(x, z), which when normalized gives

d(x|z) = p_e(x|z),

ensuring that the optimal decoding distribution is the exact posterior induced by our data and encoder.

B.5 (lower bound on the generative mutual information). The lower bound is established as all the other bounds have been, with the non-negativity of KL divergences: KL[p_d(z|x) || q(z|x)] ≥ 0, where p_d(x, z) ≡ m(z) d(x|z) is the generative joint defined in Appendix C. This implies, for any distribution q(z|x),

I_gen(X; Z) = ∫dx dz p_d(x, z) log [p_d(z|x) / m(z)] ≥ ∫dx dz p_d(x, z) log [q(z|x) / m(z)].

B.6 (upper bound on the generative mutual information). The upper bound is established again by the non-negativity of the KL divergence, here KL[p_d(x) || q(x)] ≥ 0, which implies, for any distribution q(x),

I_gen(X; Z) = ∫dx dz p_d(x, z) log [d(x|z) / p_d(x)] ≤ ∫dx dz p_d(x, z) log [d(x|z) / q(x)].

Given any four distributions, p*(x), a density over the data space X; e(z|x), a stochastic map from data to a representation space Z; d(x|z), a stochastic map in the reverse direction; and m(z), a density in the Z space, we were able to find an inequality relating three functionals of these densities that must always hold.
We found this inequality by deriving upper and lower bounds on the mutual information in the joint density defined by the natural representational path through the four distributions, p_e(x, z) = p*(x) e(z|x). Doing so naturally made us consider the existence of the two other distributions, d(x|z) and m(z). Let us now consider the mutual information along the generative path, p_d(x, z) = m(z) d(x|z):

I_gen(X; Z) = ∫dx dz p_d(x, z) log [p_d(x, z) / (p_d(x) m(z))], where p_d(x) = ∫dz m(z) d(x|z).

Just as before, we can easily establish both a variational lower and upper bound on this mutual information. For the lower bound (proved in Section B.5), we have

I_gen(X; Z) ≥ ∫dx dz p_d(x, z) log [q(z|x) / m(z)],

where we need to make a variational approximation q(z|x) to the decoder posterior, itself a distribution mapping X to Z. Since we already have such a distribution from our other considerations, we can certainly use the encoding distribution e(z|x) for this purpose, and since the bound holds for any choice it will hold for this one. We call this bound E, since it gives the distortion as measured through the encoder as it attempts to encode the generated samples back to their latent representation:

E ≡ ∫dz m(z) ∫dx d(x|z) log [e(z|x) / m(z)].

We can also find a variational upper bound on the generative mutual information (proved in Section B.6):

I_gen(X; Z) ≤ ∫dx dz p_d(x, z) log [d(x|z) / q(x)] ≡ G.

This time we need a variational approximation q(x) to the marginal density of our generative model. We call this bound G, for the rate in the generative model. Together these establish both lower and upper bounds on the generative mutual information:

E ≤ I_gen(X; Z) ≤ G.

In our early experiments, it appears as though additionally constraining or targeting values for these generative mutual information bounds is important to ensure consistency in the underlying joint distributions. In particular, we notice a tendency of models trained with the β-VAE objective to have loose bounds on the generative mutual information when β varies away from 1.

In light of the appearance of a new independent density estimate q(x) in deriving our variational upper bound on the mutual information in the generative model, let us use it to rearrange our variational lower bound on the representational mutual information:

I_rep(X; Z) ≥ H − D = ∫dx dz p_e(x, z) log [d(x|z) / q(x)] − ∫dx p*(x) log [p*(x) / q(x)].

Doing this, we can express our lower bound in terms of two reparameterization-independent functionals:

I_rep(X; Z) ≥ ∫dx dz p_e(x, z) log [d(x|z) / q(x)] − S, where S ≡ KL[p*(x) || q(x)].

This new reparameterization couples together the bounds we derived for both the representational mutual information and the generative mutual information, using q(x) in both. The new functional S is intractable on its own, but splitting it into the data entropy and a cross-entropy term, S = −H − ∫dx p*(x) log q(x), suggests setting a target cross entropy for our density estimate q(x) with respect to the empirical data distribution, which can be finite in the case of finite data. Together, we have an equivalent way to formulate our original bounds on the representational mutual information:

∫dx dz p_e(x, z) log [d(x|z) / q(x)] − S ≤ I_rep(X; Z) ≤ R.

We believe this reparameterization offers an important and potentially useful way to directly control for overfitting. In particular, given that we compute our objectives using a finite sample from the true data distribution, it will generically be true that KL[p̂(x) || p*(x)] ≥ 0. The usual mode we operate in is one in which we only ever observe each example once in the training set, suggesting, with q(x) standing in for the unknown p*(x), the estimate

KL[p̂(x) || q(x)] ≈ −log N − (1/N) Σ_{n=1}^{N} log q(x_n).

Early experiments suggest this offers a useful target for S in the reparameterized objective that can prevent overfitting, at least in our toy problems.

Here we expand on the brief related work in Section 3.
Much recent work has leveraged information theory to improve our understanding of machine learning in general, and unsupervised learning specifically. In BID42, the authors present a theory for the success of supervised deep learning as approximately optimizing the information bottleneck objective, and also theoretically predict a supervised variant of the rate/distortion plane we describe here. BID37 further proposes that the training of deep learning models proceeds in two phases: error minimization followed by compression. They suggest that the compression phase diffuses the conditional entropy of the individual layers of the model, and that when the model has converged, it lies near the information bottleneck optimal frontier in the proposed rate/distortion plane. In BID16 the authors motivate the β-VAE objective from a combined neuroscience and information-theoretic perspective. Higgins et al. propose that β should be greater than 1 to properly learn disentangled representations in an unsupervised manner. Earlier work described the issue of the too-powerful decoder when training standard VAEs, where β = 1, proposing a bits-back BID18 model to understand this phenomenon, as well as a noise-injection technique to combat it. Our approach removes the need for an additional noise source in the decoder, and concisely rephrases the problem as finding the optimal β for the chosen model family, which can now be as powerful as we like, without risk of ignoring the latent space and collapsing to an autodecoding model. BID6 suggested annealing the weight of the KL term of the ELBO (KL[q(z|x) || p(z)]) from 0 to 1 to make it possible to train an RNN decoder without ignoring the latent space. BID40 applies the same idea to ladder network decoders. We relate this idea of KL annealing to our optimal rate/distortion curve, and show empirically that KL annealing does not in general attain the performance possible when setting a fixed β or a fixed target rate. In BID0, the authors proposed an information bottleneck approach applied to the activations of a network, termed Information Dropout, as a form of regularization that explains and generalizes Dropout BID38. They suggest that, without such a form of regularization, standard SGD training only provides a sufficient statistic, but does not in general provide two other desiderata: minimality and invariance to nuisance factors. Both of these would be provided by a procedure that directly optimized the information bottleneck. They propose that simply injecting noise adaptively into the otherwise deterministic function computed by the deep network is sufficient to cause SGD to optimize toward disentangled and invariant representations. BID1 expands on this exploration of sufficiency, minimality, and invariance in deep networks. In particular, they propose that architectural bottlenecks and depth both promote invariance directly, and they decompose the standard cross-entropy loss used in supervised learning into four terms, including one which they name "overfitting" and which, without other regularization, an optimization procedure can easily increase in order to reduce the total loss. BID7 presented an importance-weighted variant of the VAE objective. By increasing the number of samples taken from the encoder during training, they are able to tighten the variational lower bound and improve the test log-likelihood.
BID32 proposed using normalizing flows to approximate the true posterior during inference, in order to overcome the problem that the standard mean-field posterior approximation used in VAEs lacks sufficient representational power to model complex posterior distributions. Normalizing flows permit the use of a deep network to compute a differentiable function of a random variable, with a computable Jacobian determinant, such that the resulting function defines a valid normalized distribution. BID24 expanded on this idea by introducing inverse autoregressive flow (IAF). IAF takes advantage of properties of current autoregressive models, including their expressive power and the particulars of their Jacobians when inverted, and uses them to learn expressive, parallelizable normalizing flows that are efficient to compute when using high-dimensional latent spaces for the posterior. Autoregressive models have also been applied successfully to the density estimation problem, as well as to high-quality sample generation. MADE BID14 proposed directly masking the parameters of an autoencoder during generation such that a given unit makes its predictions based solely on the first d activations of the layer below; this enforces that the autoencoder maintains the "autoregressive" property. In , the authors presented a recurrent neural network that can autoregressively predict the pixels of an image, as well as provide tractable density estimation. This work was extended to a convolutional model called PixelCNN (van den ), which enforced the autoregressive property by masking the convolutional filters. PixelCNN++ further improved performance with a collection of architecture changes that allow for much faster training. Finally, BID30 proposed another unification of normalizing flow models with autoregressive models for density estimation. The authors observe that the conditional ordering constraints required for valid autoregressive modeling enforce a choice which may be arbitrarily incorrect for any particular problem. In their proposal, Masked Autoregressive Flow (MAF), they explicitly model the random-number-generation process with stacked MADE layers. This particular choice means that MAF is fast at density estimation, whereas the nearly identical IAF architecture is fast at sampling. BID44 proposed a novel method for learning the marginal posterior m(z) (written q(z) in that work): learn k pseudo-inputs that can be mixed to approximate any of the true samples x ∼ p*(x).

Data generation. The true data-generating distribution is as follows. We first sample a latent binary variable, z ∼ Ber(0.7), then sample a latent 1d continuous value from that variable, h|z ∼ N(h|µ_z, σ_z), and finally we observe a discretized value, x = discretize(h; B), where B is a set of 30 equally spaced bins. We set µ_z and σ_z such that R* ≡ I(x; z) = 0.5 nats in the true generative process, representing the ideal rate target for a latent-variable model. For the model, we use an encoder that assigns a categorical distribution over z for each value of x, e(z|x) ∝ exp(W^e_{z,x}), where z is the one-hot encoding of the latent categorical variable and x is the one-hot encoding of the observed categorical variable; thus the encoder has 2K = 60 parameters. We use a decoder of the same form, but with different parameters:

d(x|z) ∝ exp(W^d_{x,z}).

Finally, we use a variational marginal, m(z_i) = π_i. Given this, the true joint distribution has the form p_e(x, z) = p*(x) e(z|x), with marginal p_e(z) = Σ_x p_e(x, z) and conditional p_e(x|z) = p_e(x, z)/p_e(z).
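A minimal sketch of this data-generating process. The specific values of µ_z and σ_z below are placeholders; the text only specifies that they were tuned so that I(x; z) = 0.5 nats, and the equal-width binning over the sample range is likewise our simplification:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_toy_data(n, mu=(-1.0, 1.0), sigma=(1.0, 1.0), num_bins=30):
    """Sample (x, z) pairs from the toy generative process described above.

    NOTE: mu and sigma are illustrative placeholders; the paper tunes them
    so that the true I(x; z) is 0.5 nats.
    """
    z = rng.binomial(1, 0.7, size=n)                   # z ~ Ber(0.7)
    h = rng.normal(np.take(mu, z), np.take(sigma, z))  # h | z ~ N(mu_z, sigma_z)
    edges = np.linspace(h.min(), h.max(), num_bins - 1)  # 30 equally spaced bins
    x = np.digitize(h, edges)                          # discretized observation
    return x, z
```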
We used the static binary MNIST dataset originally produced for BID26, and the Omniglot dataset from Lake et al.; BID7.

As stated in the main text, for our experiments we considered twelve different model families, corresponding to a simple and a complex choice for the encoder and decoder and three different choices for the marginal. Unless otherwise specified, all layers used a linearly gated activation function BID11, h(x) = (W_1 x + b_1) σ(W_2 x + b_2).

The simple encoder was a convolutional encoder outputting the parameters of a diagonal Gaussian distribution. The inputs were first transformed to be between −1 and 1. The architecture contained 5 convolutional layers, summarized in the format Conv (depth, kernel size, stride, padding), followed by a linear layer to read out the mean and a linear layer with a softplus nonlinearity to read out the variance of the diagonal Gaussian distribution:

• Input
• Conv (32, 5, 1, same)
• Conv (32, 5, 2, same)
• Conv (64, 5, 1, same)
• Conv (64, 5, 2, same)
• Conv (256, 7, 1, valid)
• Gauss (Linear, Softplus(Linear))

For the more complicated encoder, the same 5-convolutional-layer architecture was used, followed by 4 steps of mean-only Gaussian inverse autoregressive flow, with each step's location parameters computed using a 3-layer MADE-style masked network with 640 units in the hidden layers and ReLU activations.

The simple decoder was a transposed convolutional network, with 6 layers of transposed convolution, denoted as Deconv (depth, kernel size, stride, padding), followed by a linear convolutional layer parameterizing an independent Bernoulli distribution over all of the pixels:

• Input (latent z)
• Deconv (64, 7, 1, valid)
• Deconv (64, 5, 1, same)
• Deconv (64, 5, 2, same)
• Deconv (32, 5, 1, same)
• Deconv (32, 5, 2, same)
• Deconv (32, 4, 1, same)
• Bernoulli (Linear Conv (1, 5, 1, same))

The complicated decoder was a slightly modified PixelCNN++-style network. In place of the original ReLU activation functions we used linearly gated activation functions, and we used six blocks (with sizes (28×28), (14×14), (7×7), (7×7), (14×14), (28×28)) of two resnet layers each. All internal layers had a feature depth of 64. Shortcut connections were used throughout between matching-sized feature maps. The 64-dimensional latent representation was sent through a dense linearly gated layer to produce a 784-dimensional representation that was reshaped to (28 × 28 × 1) and concatenated with the target image to produce a (28 × 28 × 2)-dimensional input. The final output (of size (28 × 28 × 64)) was sent through a (1 × 1) convolution down to depth 1. The resulting values were interpreted as the logits for a Bernoulli distribution defined on each pixel.

We used three different types of marginals. The simplest architecture (denoted (−)) was a fixed isotropic Gaussian distribution in 64 dimensions, with means fixed at 0 and variance fixed at 1. The complicated marginal (+) was created by transforming the isotropic Gaussian base distribution with 4 layers of mean-only Gaussian autoregressive flow, with each step's location parameters computed using a 3-layer MADE-style masked network with 640 units in the hidden layers and ReLU activations. This network resembles the architecture used in BID30. The last choice of marginal, denoted (v), was based on the VampPrior, which uses a mixture of the encoder distributions computed on a set of pseudo-inputs to parameterize the prior BID44.
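A sketch of the simple encoder in PyTorch. The class and variable names are ours, and the padding values are chosen to reproduce the 'same'/'valid' output shapes for 28×28 inputs:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedConv(nn.Module):
    """Linearly gated convolution: h(x) = (W1*x + b1) * sigmoid(W2*x + b2),
    implemented as one conv with 2x channels, split and gated."""
    def __init__(self, c_in, c_out, k, stride, padding):
        super().__init__()
        self.conv = nn.Conv2d(c_in, 2 * c_out, k, stride, padding)

    def forward(self, x):
        a, b = self.conv(x).chunk(2, dim=1)
        return a * torch.sigmoid(b)

class SimpleEncoder(nn.Module):
    """The (-) encoder from the layer list above, for 28x28 binary images."""
    def __init__(self, z_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            GatedConv(1, 32, 5, 1, 2),    # 'same' padding for k=5: 28 -> 28
            GatedConv(32, 32, 5, 2, 2),   # 28 -> 14
            GatedConv(32, 64, 5, 1, 2),   # 14 -> 14
            GatedConv(64, 64, 5, 2, 2),   # 14 -> 7
            GatedConv(64, 256, 7, 1, 0),  # 'valid': 7 -> 1
        )
        self.mean = nn.Linear(256, z_dim)
        self.var = nn.Linear(256, z_dim)

    def forward(self, x):
        # Inputs in [0, 1] are first rescaled to [-1, 1], as described above.
        h = self.net(2 * x - 1).flatten(1)
        return self.mean(h), F.softplus(self.var(h))
```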
We add an additional learned set of weights on the mixture distributions, constrained to sum to one using a softmax function: m(z) = Σ_{i=1}^{N} w_i e(z|φ_i), where N is the number of pseudo-inputs, w are the weights, e is the encoder, and φ are the pseudo-inputs, which have the same dimensionality as the inputs. The models were all trained using the β-VAE objective at various values of β. No form of explicit regularization was used. The models were trained with Adam BID22 with normalized gradients BID48 for 200 epochs to get good convergence on the training set, with a fixed learning rate of 3 × 10⁻⁴ for the first 100 epochs and a learning rate decreasing linearly towards 0 at the 200th epoch.
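A short sketch of this weighted VampPrior marginal, m(z) = Σᵢ wᵢ e(z|φᵢ), assuming the encoder returns the mean and standard deviation of a diagonal Gaussian; the names and shapes are illustrative:

```python
import torch
import torch.nn as nn

class WeightedVampPrior(nn.Module):
    """Mixture of encoder posteriors at learned pseudo-inputs phi,
    with mixture weights w constrained to sum to one via softmax."""
    def __init__(self, encoder, n_pseudo, input_shape):
        super().__init__()
        self.encoder = encoder  # assumed to map x -> (mean, std), each (batch, z_dim)
        self.phi = nn.Parameter(torch.randn(n_pseudo, *input_shape))  # pseudo-inputs
        self.w = nn.Parameter(torch.zeros(n_pseudo))                  # mixture logits

    def log_prob(self, z):                       # z: (batch, z_dim)
        mean, std = self.encoder(self.phi)       # (n_pseudo, z_dim) each
        comp = torch.distributions.Normal(mean, std)
        # log N(z | mu_i, sigma_i), summed over dims -> (batch, n_pseudo)
        log_comp = comp.log_prob(z.unsqueeze(1)).sum(-1)
        log_w = torch.log_softmax(self.w, dim=0)
        return torch.logsumexp(log_w + log_comp, dim=-1)  # log m(z)
```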
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
H1rRWl-Cb
We provide an information theoretic and experimental analysis of state-of-the-art variational autoencoders.
Graph Neural Networks (GNNs) are an effective framework for representation learning of graphs. GNNs follow a neighborhood aggregation scheme, where the representation vector of a node is computed by recursively aggregating and transforming representation vectors of its neighboring nodes. Many GNN variants have been proposed and have achieved state-of-the-art on both node and graph classification tasks. However, despite GNNs revolutionizing graph representation learning, there is limited understanding of their representational properties and limitations. Here, we present a theoretical framework for analyzing the expressive power of GNNs to capture different graph structures. Our characterize the discriminative power of popular GNN variants, such as Graph Convolutional Networks and GraphSAGE, and show that they cannot learn to distinguish certain simple graph structures. We then develop a simple architecture that is provably the most expressive among the class of GNNs and is as powerful as the Weisfeiler-Lehman graph isomorphism test. We empirically validate our theoretical findings on a number of graph classification benchmarks, and demonstrate that our model achieves state-of-the-art performance. Learning with graph structured data, such as molecules, social, biological, and financial networks, requires effective representation of their graph structure BID14. Recently, there has been a surge of interest in Graph Neural Network (GNN) approaches for representation learning of graphs BID23 BID13 BID21 BID34 BID37. GNNs broadly follow a recursive neighborhood aggregation (or message passing) scheme, where each node aggregates feature vectors of its neighbors to compute its new feature vector BID37 BID12. After k iterations of aggregation, a node is represented by its transformed feature vector, which captures the structural information within the node's k-hop neighborhood. The representation of an entire graph can then be obtained through pooling BID39, for example, by summing the representation vectors of all nodes in the graph. Many GNN variants with different neighborhood aggregation and graph-level pooling schemes have been proposed BID31 BID3 BID6 BID8 BID13 BID19 BID21 BID23 BID34 BID28 BID37 BID29 BID35 BID39. Empirically, these GNNs have achieved state-of-the-art performance in many tasks such as node classification, link prediction, and graph classification. However, the design of new GNNs is mostly based on empirical intuition, heuristics, and experimental trial-anderror. There is little theoretical understanding of the properties and limitations of GNNs, and formal analysis of GNNs' representational capacity is limited. Here, we present a theoretical framework for analyzing the representational power of GNNs. We formally characterize how expressive different GNN variants are in learning to represent and distinguish between different graph structures. Our framework is inspired by the close connection between GNNs and the Weisfeiler-Lehman (WL) graph isomorphism test BID36, a powerful test known to distinguish a broad class of graphs BID2. Similar to GNNs, the WL test iteratively updates a given node's feature vector by aggregating feature vectors of its network neighbors. What makes the WL test so powerful is its injective aggregation update that maps different node neighborhoods to different feature vectors. Our key insight is that a GNN can have as large discriminative power as the WL test if the GNN's aggregation scheme is highly expressive and can model injective functions. 
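For readers who prefer code, the neighborhood-aggregation scheme sketched above fits in a few lines of framework-agnostic Python; AGGREGATE and COMBINE are deliberately left abstract, since the GNN variants discussed below differ exactly in these two choices:

```python
def gnn_forward(features, adj, aggregate, combine, num_layers):
    """Generic neighborhood aggregation: after k iterations, each node's
    vector captures the structure of its k-hop neighborhood.
    features: dict node -> feature vector; adj: dict node -> list of neighbors."""
    h = dict(features)
    for _ in range(num_layers):
        h = {v: combine(h[v], aggregate([h[u] for u in adj[v]])) for v in adj}
    return h  # pool these (e.g. sum over nodes) for a graph-level representation
```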
To mathematically formalize the above insight, our framework first represents the set of feature vectors of a given node's neighbors as a multiset, i.e., a set with possibly repeating elements. Then, the neighbor aggregation in GNNs can be thought of as an aggregation function over the multiset. Hence, to have strong representational power, a GNN must be able to aggregate different multisets into different representations. We rigorously study several variants of multiset functions and theoretically characterize their discriminative power, i.e., how well different aggregation functions can distinguish different multisets. The more discriminative the multiset function is, the more powerful the representational power of the underlying GNN.Our main are summarized as follows: 1) We show that GNNs are at most as powerful as the WL test in distinguishing graph structures.2) We establish conditions on the neighbor aggregation and graph readout functions under which the ing GNN is as powerful as the WL test.3) We identify graph structures that cannot be distinguished by popular GNN variants, such as GCN BID21 and GraphSAGE (a), and we precisely characterize the kinds of graph structures such GNN-based models can capture. We develop a simple neural architecture, Graph Isomorphism Network (GIN), and show that its discriminative/representational power is equal to the power of the WL test. We validate our theory via experiments on graph classification datasets, where the expressive power of GNNs is crucial to capture graph structures. In particular, we compare the performance of GNNs with various aggregation functions. Our confirm that the most powerful GNN by our theory, i.e., Graph Isomorphism Network (GIN), also empirically has high representational power as it almost perfectly fits the training data, whereas the less powerful GNN variants often severely underfit the training data. In addition, the representationally more powerful GNNs outperform the others by test set accuracy and achieve state-of-the-art performance on many graph classification benchmarks. We begin by summarizing some of the most common GNN models and, along the way, introduce our notation. Let G = (V, E) denote a graph with node feature vectors X v for v ∈ V. There are two tasks of interest: Node classification, where each node v ∈ V has an associated label y v and the goal is to learn a representation vector h v of v such that v's label can be predicted as y v = f (h v); Graph classification, where, given a set of graphs {G 1, ..., G N} ⊆ G and their labels {y 1, ..., y N} ⊆ Y, we aim to learn a representation vector h G that helps predict the label of an entire graph, DISPLAYFORM0 Graph Neural Networks. GNNs use the graph structure and node features X v to learn a representation vector of a node, h v, or the entire graph, h G. Modern GNNs follow a neighborhood aggregation strategy, where we iteratively update the representation of a node by aggregating representations of its neighbors. After k iterations of aggregation, a node's representation captures the structural information within its k-hop network neighborhood. Formally, the k-th layer of a GNN is GNNs is crucial. A number of architectures for AGGREGATE have been proposed. In the pooling variant of GraphSAGE BID13, AGGREGATE has been formulated as DISPLAYFORM1 DISPLAYFORM2 where W is a learnable matrix, and MAX represents an element-wise max-pooling. The COMBINE step could be a concatenation followed by a linear mapping W · h DISPLAYFORM3 as in GraphSAGE. 
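A minimal PyTorch sketch of the pooling AGGREGATE just shown, with COMBINE as concatenation followed by a linear map; the layer sizes and the assumption that every node has at least one neighbor are ours:

```python
import torch
import torch.nn as nn

class SagePoolLayer(nn.Module):
    """GraphSAGE-style pooling: a_v = MAX({ReLU(W_pool h_u) : u in N(v)}),
    then COMBINE(h_v, a_v) = W_comb [h_v ; a_v]."""
    def __init__(self, dim):
        super().__init__()
        self.pool = nn.Linear(dim, dim)
        self.combine = nn.Linear(2 * dim, dim)

    def forward(self, h, neighbors):
        # h: (num_nodes, dim); neighbors[v]: LongTensor of v's neighbor indices
        out = []
        for v, nbr in enumerate(neighbors):
            a_v = torch.relu(self.pool(h[nbr])).max(dim=0).values  # element-wise max
            out.append(self.combine(torch.cat([h[v], a_v])))
        return torch.relu(torch.stack(out))
```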
In Graph Convolutional Networks (GCN) BID21, the element-wise mean pooling is used instead, and the AGGREGATE and COMBINE steps are integrated as follows: DISPLAYFORM4 Many other GNNs can be represented similarly to Eq. 2.1 BID37 BID12.For node classification, the node representation h DISPLAYFORM5 of the final iteration is used for prediction. For graph classification, the READOUT function aggregates node features from the final iteration to obtain the entire graph's representation h G: DISPLAYFORM6 (2.4)READOUT can be a simple permutation invariant function such as summation or a more sophisticated graph-level pooling function BID39.Weisfeiler-Lehman test. The graph isomorphism problem asks whether two graphs are topologically identical. This is a challenging problem: no polynomial-time algorithm is known for it yet BID10 BID11 BID1. Apart from some corner cases BID4, the Weisfeiler-Lehman (WL) test of graph isomorphism BID36 is an effective and computationally efficient test that distinguishes a broad class of graphs BID2. Its 1-dimensional form, "naïve vertex refinement", is analogous to neighbor aggregation in GNNs. The WL test iteratively aggregates the labels of nodes and their neighborhoods, and hashes the aggregated labels into unique new labels. The algorithm decides that two graphs are non-isomorphic if at some iteration the labels of the nodes between the two graphs differ. Based on the WL test, BID32 proposed the WL subtree kernel that measures the similarity between graphs. The kernel uses the counts of node labels at different iterations of the WL test as the feature vector of a graph. Intuitively, a node's label at the k-th iteration of WL test represents a subtree structure of height k rooted at the node (Figure 1). Thus, the graph features considered by the WL subtree kernel are essentially counts of different rooted subtrees in the graph. We start with an overview of our framework for analyzing the expressive power of GNNs. Figure 1 illustrates our idea. A GNN recursively updates each node's feature vector to capture the network structure and features of other nodes around it, i.e., its rooted subtree structures (Figure 1). Throughout the paper, we assume node input features are from a countable universe. For finite graphs, node feature vectors at deeper layers of any fixed model are also from a countable universe. For notational simplicity, we can assign each feature vector a unique label in {a, b, c . . .}. Then, feature vectors of a set of neighboring nodes form a multiset (Figure 1): the same element can appear multiple times since different nodes can have identical feature vectors. Definition 1 (Multiset). A multiset is a generalized concept of a set that allows multiple instances for its elements. More formally, a multiset is a 2-tuple X = (S, m) where S is the underlying set of X that is formed from its distinct elements, and m: S → N ≥1 gives the multiplicity of the elements. To study the representational power of a GNN, we analyze when a GNN maps two nodes to the same location in the embedding space. Intuitively, a maximally powerful GNN maps two nodes to the same location only if they have identical subtree structures with identical features on the corresponding nodes. Since subtree structures are defined recursively via node neighborhoods (Figure 1), we can reduce our analysis to the question whether a GNN maps two neighborhoods (i.e., two multisets) to the same embedding or representation. 
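The 1-dimensional WL test described here is straightforward to implement. In the sketch below, Python's built-in hash stands in for the injective relabeling function (so hash collisions are ignored), and graphs are given as adjacency dictionaries:

```python
from collections import Counter

def wl_relabel(colors, adj):
    # Hash the pair (own label, sorted multiset of neighbor labels) to a new label.
    return {
        v: hash((colors[v], tuple(sorted(colors[u] for u in adj[v]))))
        for v in adj
    }

def wl_test(adj1, lab1, adj2, lab2, iters=3):
    """1-dim Weisfeiler-Lehman ('naive vertex refinement').
    adj: dict node -> list of neighbors; lab: dict node -> initial label
    (labels must be mutually comparable, e.g. all ints or all strings).
    Returns False as soon as the label multisets differ (non-isomorphic)."""
    c1, c2 = dict(lab1), dict(lab2)
    for _ in range(iters):
        if Counter(c1.values()) != Counter(c2.values()):
            return False
        c1, c2 = wl_relabel(c1, adj1), wl_relabel(c2, adj2)
    return Counter(c1.values()) == Counter(c2.values())
```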
A maximally powerful GNN would never map two different neighborhoods, i.e., multisets of feature vectors, to the same representation. This means its aggregation scheme must be injective. Thus, we abstract a GNN's aggregation scheme as a class of functions over multisets that their neural networks can represent, and analyze whether they are able to represent injective multiset functions. Next, we use this reasoning to develop a maximally powerful GNN. In Section 5, we study popular GNN variants and see that their aggregation schemes are inherently not injective and thus less powerful, but that they can capture other interesting properties of graphs. First, we characterize the maximum representational capacity of a general class of GNN-based models. Ideally, a maximally powerful GNN could distinguish different graph structures by mapping them to different representations in the embedding space. This ability to map any two different graphs to different embeddings, however, implies solving the challenging graph isomorphism problem. That is, we want isomorphic graphs to be mapped to the same representation and non-isomorphic ones to different representations. In our analysis, we characterize the representational capacity of GNNs via a slightly weaker criterion: a powerful heuristic called Weisfeiler-Lehman (WL) graph isomorphism test, that is known to work well in general, with a few exceptions, e.g., regular graphs BID4 BID7 BID9. Lemma 2. Let G 1 and G 2 be any two non-isomorphic graphs. If a graph neural network A: G → R d maps G 1 and G 2 to different embeddings, the Weisfeiler-Lehman graph isomorphism test also decides G 1 and G 2 are not isomorphic. Proofs of all Lemmas and Theorems can be found in the Appendix. Hence, any aggregation-based GNN is at most as powerful as the WL test in distinguishing different graphs. A natural follow-up question is whether there exist GNNs that are, in principle, as powerful as the WL test? Our answer, in Theorem 3, is yes: if the neighbor aggregation and graph-level readout functions are injective, then the ing GNN is as powerful as the WL test. DISPLAYFORM0 With a sufficient number of GNN layers, A maps any graphs G 1 and G 2 that the Weisfeiler-Lehman test of isomorphism decides as non-isomorphic, to different embeddings if the following conditions hold: a) A aggregates and updates node features iteratively with DISPLAYFORM1 where the functions f, which operates on multisets, and φ are injective.b) A's graph-level readout, which operates on the multiset of node features h DISPLAYFORM2, is injective. We prove Theorem 3 in the appendix. For countable sets, injectiveness well characterizes whether a function preserves the distinctness of inputs. Uncountable sets, where node features are continuous, need some further considerations. In addition, it would be interesting to characterize how close together the learned features lie in a function's image. We leave these questions for future work, and focus on the case where input node features are from a countable set (that can be a subset of an uncountable set such as R n).Lemma 4. Assume the input feature space X is countable. Let g (k) be the function parameterized by a GNN's k-th layer for k = 1,..., L, where g is defined on multisets X ⊂ X of bounded size. The range of g (k), i.e., the space of node hidden features h DISPLAYFORM3 Here, it is also worth discussing an important benefit of GNNs beyond distinguishing different graphs, that is, capturing similarity of graph structures. 
Note that node feature vectors in the WL test are essentially one-hot encodings and thus cannot capture the similarity between subtrees. In contrast, a GNN satisfying the criteria in Theorem 3 generalizes the WL test by learning to embed the subtrees to low-dimensional space. This enables GNNs to not only discriminate different structures, but also to learn to map similar graph structures to similar embeddings and capture dependencies between graph structures. Capturing structural similarity of the node labels is shown to be helpful for generalization particularly when the co-occurrence of subtrees is sparse across different graphs or there are noisy edges and node features BID38. Having developed conditions for a maximally powerful GNN, we next develop a simple architecture, Graph Isomorphism Network (GIN), that provably satisfies the conditions in Theorem 3. This model generalizes the WL test and hence achieves maximum discriminative power among GNNs. To model injective multiset functions for the neighbor aggregation, we develop a theory of "deep multisets", i.e., parameterizing universal multiset functions with neural networks. Our next lemma states that sum aggregators can represent injective, in fact, universal functions over multisets. Lemma 5. Assume X is countable. There exists a function f: X → R n so that h(X) = x∈X f (x) is unique for each multiset X ⊂ X of bounded size. Moreover, any multiset function g can be decomposed as g (X) = φ x∈X f (x) for some function φ. We prove Lemma 5 in the appendix. The proof extends the setting in BID40 ) from sets to multisets. An important distinction between deep multisets and sets is that certain popular injective set functions, such as the mean aggregator, are not injective multiset functions. With the mechanism for modeling universal multiset functions in Lemma 5 as a building block, we can conceive aggregation schemes that can represent universal functions over a node and the multiset of its neighbors, and thus will satisfy the injectiveness condition (a) in Theorem 3. Our next corollary provides a simple and concrete formulation among many such aggregation schemes. Corollary 6. Assume X is countable. There exists a function f: X → R n so that for infinitely many choices of, including all irrational numbers, h(c, X) = (1 +) · f (c) + x∈X f (x) is unique for each pair (c, X), where c ∈ X and X ⊂ X is a multiset of bounded size. Moreover, any function g over such pairs can be decomposed as g (c, X) = ϕ (1 +) · f (c) + x∈X f (x) for some function ϕ.We can use multi-layer perceptrons (MLPs) to model and learn f and ϕ in Corollary 6, thanks to the universal approximation theorem BID16 BID15. In practice, we model f (k+1) • ϕ (k) with one MLP, because MLPs can represent the composition of functions. In the first iteration, we do not need MLPs before summation if input features are one-hot encodings as their summation alone is injective. We can make a learnable parameter or a fixed scalar. Then, GIN updates node representations as DISPLAYFORM0 Generally, there may exist many other powerful GNNs. GIN is one such example among many maximally powerful GNNs, while being simple. Node embeddings learned by GIN can be directly used for tasks like node classification and link prediction. For graph classification tasks we propose the following "readout" function that, given embeddings of individual nodes, produces the embedding of the entire graph. 
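Before turning to the graph-level readout, note that the node update of Eq. 4.1 above, MLP((1 + ε)·h_v + Σ_{u∈N(v)} h_u), takes only a few lines. A minimal PyTorch sketch, where the dense adjacency matrix and the 2-layer MLP size are implementation choices of ours:

```python
import torch
import torch.nn as nn

class GINLayer(nn.Module):
    """Eq. 4.1: h_v <- MLP((1 + eps) * h_v + sum_{u in N(v)} h_u)."""
    def __init__(self, dim, learn_eps=True):
        super().__init__()
        eps = torch.zeros(1)
        self.eps = nn.Parameter(eps) if learn_eps else eps  # GIN-eps vs. GIN-0
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, h, adj):
        # adj: dense (n, n) 0/1 adjacency; adj @ h sums each node's neighbor features
        return self.mlp((1 + self.eps) * h + adj @ h)
```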
An important aspect of the graph-level readout is that node representations, corresponding to subtree structures, get more refined and global as the number of iterations increases. A sufficient number of iterations is key to achieving good discriminative power. Yet, features from earlier iterations may sometimes generalize better. To consider all structural information, we use information from all depths/iterations of the model. We achieve this by an architecture similar to Jumping Knowledge Networks BID37, where we replace Eq. 2.4 with graph representations concatenated across all iterations/layers of GIN: h_G = CONCAT( READOUT({h_v^(k) | v ∈ G}) | k = 0, 1, ..., K ) (4.2)

Figure 2: Ranking by expressive power for sum, mean and max aggregators over a multiset. The left panel shows the input multiset, i.e., the network neighborhood to be aggregated. The next three panels illustrate the aspects of the multiset a given aggregator is able to capture: sum captures the full multiset, mean captures the proportion/distribution of elements of a given type, and the max aggregator ignores multiplicities (reduces the multiset to a simple set).

By Theorem 3 and Corollary 6, if GIN replaces READOUT in Eq. 4.2 with summing all node features from the same iterations (we do not need an extra MLP before summation for the same reason as in Eq. 4.1), it provably generalizes the WL test and the WL subtree kernel. Next, we study GNNs that do not satisfy the conditions in Theorem 3, including GCN BID21 and GraphSAGE (a). We conduct ablation studies on two aspects of the aggregator in Eq. 4.1: 1-layer perceptrons instead of MLPs, and mean or max-pooling instead of the sum. We will see that these GNN variants get confused by surprisingly simple graphs and are less powerful than the WL test. Nonetheless, models with mean aggregators like GCN perform well for node classification tasks. To better understand this, we precisely characterize what different GNN variants can and cannot capture about a graph and discuss the implications for learning with graphs. The function f in Lemma 5 helps map distinct multisets to unique embeddings. It can be parameterized by an MLP by the universal approximation theorem BID15. Nonetheless, many existing GNNs instead use a 1-layer perceptron σ ∘ W BID8 BID21, a linear mapping followed by a non-linear activation function such as a ReLU. Such 1-layer mappings are examples of Generalized Linear Models BID25. Therefore, we are interested in understanding whether 1-layer perceptrons are enough for graph learning. Lemma 7 suggests that there are indeed network neighborhoods (multisets) that models with 1-layer perceptrons can never distinguish. Lemma 7. There exist finite multisets X₁ ≠ X₂ so that for any linear mapping W, Σ_{x∈X₁} ReLU(Wx) = Σ_{x∈X₂} ReLU(Wx).
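Lemma 7 is easy to check numerically. Using the multisets X₁ = {1,1,1,1,1} and X₂ = {2,3} from the proof in Appendix F, the summed 1-layer-perceptron features coincide for every random bias-free W, because ReLU is positively homogeneous and both multisets have the same element sum:

```python
import torch

X1 = torch.tensor([[1.], [1.], [1.], [1.], [1.]])  # multiset {1, 1, 1, 1, 1}
X2 = torch.tensor([[2.], [3.]])                    # multiset {2, 3}

for _ in range(5):
    W = torch.randn(8, 1)                          # any linear map, no bias term
    s1 = torch.relu(X1 @ W.T).sum(dim=0)
    s2 = torch.relu(X2 @ W.T).sum(dim=0)
    print(torch.allclose(s1, s2))                  # True every time
```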
In Section 7, we will empirically see that GNNs with 1-layer perceptrons, when applied to graph classification, sometimes severely underfit training data and often perform worse than GNNs with MLPs in terms of test accuracy. What happens if we replace the sum in h (X) = x∈X f (x) with mean or max-pooling as in GCN and GraphSAGE? Mean and max-pooling aggregators are still well-defined multiset functions because they are permutation invariant. But, they are not injective. Figure 2 ranks the three aggregators by their representational power, and FIG0 illustrates pairs of structures that the mean and max-pooling aggregators fail to distinguish. Here, node colors denote different node features, and we assume the GNNs aggregate neighbors first before combining them with the central node labeled as v and v.In FIG0, every node has the same feature a and f (a) is the same across all nodes (for any function f). When performing neighborhood aggregation, the mean or maximum over f (a) remains f (a) and, by induction, we always obtain the same node representation everywhere. Thus, in this case mean and max-pooling aggregators fail to capture any structural information. In contrast, the sum aggregator distinguishes the structures because 2 · f (a) and 3 · f (a) give different values. The same argument can be applied to any unlabeled graph. If node degrees instead of a constant value is used as node input features, in principle, mean can recover sum, but max-pooling cannot. FIG0 suggests that mean and max have trouble distinguishing graphs with nodes that have repeating features. Let h color (r for red, g for green) denote node features transformed by f. FIG0 shows that maximum over the neighborhood of the blue nodes v and v yields max (h g, h r) and max (h g, h r, h r), which collapse to the same representation (even though the corresponding graph structures are different). Thus, max-pooling fails to distinguish them. In contrast, the sum aggregator still works because 1 2 (h g + h r) and 1 3 (h g + h r + h r) are in general not equivalent. Similarly, in FIG0, both mean and max fail as DISPLAYFORM0 To characterize the class of multisets that the mean aggregator can distinguish, consider the example X 1 = (S, m) and X 2 = (S, k · m), where X 1 and X 2 have the same set of distinct elements, but X 2 contains k copies of each element of X 1. Any mean aggregator maps X 1 and X 2 to the same embedding, because it simply takes averages over individual element features. Thus, the mean captures the distribution (proportions) of elements in a multiset, but not the exact multiset. Corollary 8. Assume X is countable. There exists a function f: X → R n so that for h(X) = 1 |X| x∈X f (x), h(X 1) = h(X 2) if and only if multisets X 1 and X 2 have the same distribution. That is, assuming |X 2 | ≥ |X 1 |, we have X 1 = (S, m) and X 2 = (S, k · m) for some k ∈ N ≥1.The mean aggregator may perform well if, for the task, the statistical and distributional information in the graph is more important than the exact structure. Moreover, when node features are diverse and rarely repeat, the mean aggregator is as powerful as the sum aggregator. This may explain why, despite the limitations identified in Section 5.2, GNNs with mean aggregators are effective for node classification tasks, such as classifying article subjects and community detection, where node features are rich and the distribution of the neighborhood features provides a strong signal for the task. 
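These collapses are easy to reproduce. The snippet below illustrates Corollary 8 (mean sees only the distribution of elements), the set-level behavior of max discussed next, and the fact that sum separates all three multisets:

```python
import torch

h_g, h_r = torch.tensor([1., 0.]), torch.tensor([0., 1.])  # features for g(reen), r(ed)

X1 = torch.stack([h_g, h_r])              # {g, r}
X2 = torch.stack([h_g, h_g, h_r, h_r])    # {g, g, r, r}: same distribution (k = 2)
X3 = torch.stack([h_g, h_r, h_r])         # {g, r, r}: same underlying set {g, r}

print(X1.mean(0), X2.mean(0))               # equal -> mean sees only the distribution
print(X1.max(0).values, X3.max(0).values)   # equal -> max sees only the set
print(X1.sum(0), X2.sum(0), X3.sum(0))      # all distinct -> sum sees the multiset
```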
The examples in FIG0 illustrate that max-pooling considers multiple nodes with the same feature as only one node (i.e., treats a multiset as a set). Max-pooling captures neither the exact structure nor the distribution. However, it may be suitable for tasks where it is important to identify representative elements or the "skeleton", rather than to distinguish the exact structure or distribution. BID27 empirically show that the max-pooling aggregator learns to identify the skeleton of a 3D point cloud and that it is robust to noise and outliers. For completeness, the next corollary shows that the max-pooling aggregator captures the underlying set of a multiset. Corollary 9. Assume X is countable. Then there exists a function f: X → R ∞ so that for h(X) = max x∈X f (x), h(X 1) = h(X 2) if and only if X 1 and X 2 have the same underlying set. There are other non-standard neighbor aggregation schemes that we do not cover, e.g., weighted average via attention BID34 and LSTM pooling BID13 BID24. We emphasize that our theoretical framework is general enough to characterize the representaional power of any aggregation-based GNNs. In the future, it would be interesting to apply our framework to analyze and understand other aggregation schemes. Despite the empirical success of GNNs, there has been relatively little work that mathematically studies their properties. An exception to this is the work of BID30 who shows that the perhaps earliest GNN model BID31 can approximate measurable functions in probability. BID22 show that their proposed architecture lies in the RKHS of graph kernels, but do not study explicitly which graphs it can distinguish. Each of these works focuses on a specific architecture and do not easily generalize to multple architectures. In contrast, our above provide a general framework for analyzing and characterizing the expressive power of a broad class of GNNs. Recently, many GNN-based architectures have been proposed, including sum aggregation and MLP encoding BID3 BID31 BID8, and most without theoretical derivation. In contrast to many prior GNN architectures, our Graph Isomorphism Network (GIN) is theoretically motivated, simple yet powerful. We evaluate and compare the training and test performance of GIN and less powerful GNN variants. Training set performance allows us to compare different GNN models based on their representational power and test set performance quantifies generalization ability. Datasets. We use 9 graph classification benchmarks: 4 bioinformatics datasets (MUTAG, PTC, NCI1, PROTEINS) and 5 social network datasets (COLLAB, IMDB-BINARY, IMDB-MULTI, REDDIT-BINARY and REDDIT-MULTI5K) BID38. Importantly, our goal here is not to allow the models to rely on the input node features but mainly learn from the network structure. Thus, in the bioinformatic graphs, the nodes have categorical input features but in the social networks, they have no features. For social networks we create node features as follows: for the REDDIT datasets, we set all node feature vectors to be the same (thus, features here are uninformative); for the other social graphs, we use one-hot encodings of node degrees. Dataset statistics are summarized in Table 1, and more details of the data can be found in Appendix I. We evaluate GINs (Eqs. 4.1 and 4.2) and the less powerful GNN variants. Under the GIN framework, we consider two variants: a GIN that learns in Eq. 4.1 by gradient descent, which we call GIN-, and a simpler (slightly less powerful) 2 GIN, where in Eq. 
4.1 is fixed to 0, which we call GIN-0. As we will see, GIN-0 shows strong empirical performance: not only does GIN-0 fit training data equally well as GIN-, it also demonstrates good generalization, slightly but consistently outperforming GIN-in terms of test accuracy. For the less powerful GNN variants, we consider architectures that replace the sum in the GIN-0 aggregation with mean or max-pooling 3, or replace MLPs with 1-layer perceptrons, i.e., a linear mapping followed by ReLU. In FIG1 and Table 1, a model is named by the aggregator/perceptron it uses. Here mean-1-layer and max-1-layer correspond to GCN and GraphSAGE, respectively, up to minor architecture modifications. We apply the same graph-level readout (READOUT in Eq. 4.2) for GINs and all the GNN variants, specifically, sum readout on bioinformatics datasets and mean readout on social datasets due to better test performance. Following BID38 BID26, we perform 10-fold crossvalidation with LIB-SVM BID5. We report the average and standard deviation of validation accuracies across the 10 folds within the cross-validation. For all configurations, 5 GNN layers (including the input layer) are applied, and all MLPs have 2 layers. Batch normalization BID17 is applied on every hidden layer. We use the Adam optimizer BID20 with initial learning rate 0.01 and decay the learning rate by 0.5 every 50 epochs. The hyper-parameters we tune for each dataset are: the number of hidden units ∈ {16, 32} for bioinformatics graphs and 64 for social graphs; the batch size ∈ {32, 128}; the dropout ratio ∈ {0, 0.5} after the dense layer BID33; the number of epochs, i.e., a single epoch with the best cross-validation accuracy averaged over the 10 folds was selected. Note that due to the small dataset sizes, an alternative setting, where hyper-parameter selection is done using a validation set, is extremely unstable, e.g., for MUTAG, the validation set only contains 18 data points. We also report the training accuracy of different GNNs, where all the hyper-parameters were fixed across the datasets: 5 GNN layers (including the input layer), hidden units of size 64, minibatch of size 128, and 0.5 dropout ratio. For comparison, the training accuracy of the WL subtree kernel is reported, where we set the number of iterations to 4, which is comparable to the 5 GNN layers. Baselines. We compare the GNNs above with a number of state-of-the-art baselines for graph classification: the WL subtree kernel BID32, where C-SVM BID5 ) was used as a classifier; the hyper-parameters we tune are C of the SVM and the number of WL iterations ∈ {1, 2, . . ., 6}; state-of-the-art deep learning architectures, i.e., Diffusionconvolutional neural networks (DCNN) BID0, PATCHY-SAN BID26 and Deep Graph CNN (DGCNN); Anonymous Walk Embeddings (AWL) BID18. For the deep learning methods and AWL, we report the accuracies reported in the original papers. Training set performance. We validate our theoretical analysis of the representational power of GNNs by comparing their training accuracies. Models with higher representational power should have higher training set accuracy. FIG1 shows training curves of GINs and less powerful GNN variants with the same hyper-parameter settings. First, both the theoretically most powerful GNN, i.e. GIN-and GIN-0, are able to almost perfectly fit all the training sets. In our experiments, explicit learning of in GIN-yields no gain in fitting training data compared to fixing to 0 as in GIN-0. 
In comparison, the GNN variants using mean/max pooling or 1-layer perceptrons severely underfit on many datasets. In particular, the training accuracy patterns align with our ranking by the models' representational power: GNN variants with MLPs tend to have higher training accuracies than those with 1-layer perceptrons, and GNNs with sum aggregators tend to fit the training sets better than those with mean and max-pooling aggregators. On our datasets, training accuracies of the GNNs never exceed those of the WL subtree kernel. This is expected because GNNs generally have lower discriminative power than the WL test. For example, on IMDB-BINARY, none of the models can perfectly fit the training set, and the GNNs achieve at most the same training accuracy as the WL kernel. This pattern aligns with our result that the WL test provides an upper bound for the representational capacity of the aggregation-based GNNs. However, the WL kernel is not able to learn how to combine node features, which might be quite informative for a given prediction task as we will see next.

Datasets: IMDB-B | IMDB-M | RDT-B | RDT-M5K | COLLAB | MUTAG | PROTEINS | PTC | NCI1
# graphs: 1000 | 1500 | 2000 | 5000 | 5000 | 188 | 1113 | 344 | 4110
# classes: 2 | 3 | 2 | 5 | 3 | 2 | 2 | 2 | 2
AWL: 74.5±5.9 | 51.5±3.6 | 87.9±2.5 | 54.7±2.9 | 73.9±1.9 | 87.9±9.8 | – | – | –
SUM-MLP (GIN-0): 75.1±5.1 | 52.3±2.8 | 92.4±2.5 | 57.5±1.5 | 80.2±1.9 | 89.4±5.6 | 76.2±2.8 | 64.6±7.0 | 82.7±1.7
SUM-MLP (GIN-ε): 74.3±5.1 | 52.1±3.6 | 92.2±2.3 | 57.0±1.7 | 80.1±1.9 | 89.0±6.0 | 75.9±3.8 | 63.7±8.2 | 82.7±1.6
SUM-1-LAYER: 74.1±5.0 | 52.2±2.4 | 90.0±2.7 | 55.1±1.6 | 80.6±1.9 | 90.0±8.8 | 76.2±2.6 | 63.1±5.7 | 82.0±1.5
MEAN-MLP: 73.7±3.7 | 52.3±3.1 | 50.0±0.0 | 20.0±0.0 | 79.2±2.3 | 83.5±6.3 | 75.5±3.4 | 66.6±6.9 | 80.9±1.8
MEAN-1-LAYER (GCN): 74.0±3.4 | 51.9±3.8 | 50.0±0.0 | 20.0±0.0 | 79.0±1.8 | 85.6±5.8 | 76.0±3.2 | 64.2±4.3 | 80.2±2.0
MAX-MLP: 73.2±5.8 | 51.1±3.6 | – | – | – | 84.0±6.1 | 76.0±3.2 | 64.6±10.2 | 77.8±1.3
MAX-1-LAYER (GraphSAGE): 72.3±5.3 | 50.9±2.2 | – | – | – | 85.1±7.6 | 75.9±3.2 | 63.9±7.7 | 77.7±1.5

Table 1: Test set classification accuracies (%). The best-performing GNNs are highlighted with boldface. On datasets where GINs' accuracy is not strictly the highest among GNN variants, we see that GINs are still comparable to the best GNN because a paired t-test at significance level 10% does not distinguish GINs from the best; thus, GINs are also highlighted with boldface. If a baseline performs significantly better than all GNNs, we highlight it with boldface and asterisk.

Test set performance. Next, we compare test accuracies. Although our theoretical results do not directly speak about the generalization ability of GNNs, it is reasonable to expect that GNNs with strong expressive power can accurately capture graph structures of interest and thus generalize well. Table 1 compares test accuracies of GINs (SUM-MLP), other GNN variants, as well as the state-of-the-art baselines. First, GINs, especially GIN-0, outperform (or achieve comparable performance to) the less powerful GNN variants on all 9 datasets, achieving state-of-the-art performance. GINs shine on the social network datasets, which contain a relatively large number of training graphs. For the Reddit datasets, all nodes share the same scalar as node feature. Here, GINs and sum-aggregation GNNs accurately capture the graph structure and significantly outperform other models. Mean-aggregation GNNs, however, fail to capture any structures of the unlabeled graphs (as predicted in Section 5.2) and do not perform better than random guessing.
Even if node degrees are provided as input features, mean-based GNNs perform much worse than sum-based GNNs (the accuracy of the GNN with mean-MLP aggregation is 71.2±4.6% on REDDIT-BINARY and 41.3±2.1% on REDDIT-MULTI5K).Comparing GINs (GIN-0 and GIN-), we observe that GIN-0 slightly but consistently outperforms GIN-. Since both models fit training data equally well, the better generalization of GIN-0 may be explained by its simplicity compared to GIN-. In this paper, we developed theoretical foundations for reasoning about the expressive power of GNNs, and proved tight bounds on the representational capacity of popular GNN variants. We also designed a provably maximally powerful GNN under the neighborhood aggregation framework. An interesting direction for future work is to go beyond neighborhood aggregation (or message passing) in order to pursue possibly even more powerful architectures for learning with graphs. To complete the picture, it would also be interesting to understand and improve the generalization properties of GNNs as well as better understand their optimization landscape. A PROOF FOR LEMMA 2Proof. Suppose after k iterations, a graph neural network A has A(G 1) = A(G 2) but the WL test cannot decide G 1 and G 2 are non-isomorphic. It follows that from iteration 0 to k in the WL test, G 1 and G 2 always have the same collection of node labels. In particular, because G 1 and G 2 have the same WL node labels for iteration i and i + 1 for any i = 0,..., k − 1, G 1 and G 2 have the same collection, i.e. multiset, of WL node labels l DISPLAYFORM0 as well as the same collection of node neighborhoods l DISPLAYFORM1. Otherwise, the WL test would have obtained different collections of node labels at iteration i + 1 for G 1 and G 2 as different multisets get unique new labels. The WL test always relabels different multisets of neighboring nodes into different new labels. We show that on the same graph G = G 1 or G 2, if WL node labels l DISPLAYFORM2 u for any iteration i. This apparently holds for i = 0 because WL and GNN starts with the same node features. Suppose this holds for iteration j, if for any u, v, l DISPLAYFORM3, then it must be the case that DISPLAYFORM4 By our assumption on iteration j, we must have DISPLAYFORM5 In the aggregation process of the GNN, the same AGGREGATE and COMBINE are applied. The same input, i.e. neighborhood features, generates the same output. Thus, h DISPLAYFORM6. By induction, if WL node labels l DISPLAYFORM7 u, we always have GNN node features h DISPLAYFORM8 u for any iteration i. This creates a valid mapping φ such that h DISPLAYFORM9 v ) for any v ∈ G. It follows from G 1 and G 2 have the same multiset of WL neighborhood labels that G 1 and G 2 also have the same collection of GNN neighborhood features DISPLAYFORM10 are the same. In particular, we have the same collection of GNN node features DISPLAYFORM11 for G 1 and G 2. Because the graph level readout function is permutation invariant with respect to the collection of node features, A(G 1) = A(G 2). Hence we have reached a contradiction. Proof. Let A be a graph neural network where the condition holds. Let G 1, G 2 be any graphs which the WL test decides as non-isomorphic at iteration K. Because the graph-level readout function is injective, i.e., it maps distinct multiset of node features into unique embeddings, it sufficies to show that A's neighborhood aggregation process, with sufficient iterations, embeds G 1 and G 2 into different multisets of node features. 
Let us assume A updates node representations as DISPLAYFORM0 with injective funtions f and φ. The WL test applies a predetermined injective hash function g to update the WL node labels l Proof. We first prove that there exists a mapping f so that x∈X f (x) is unique for each multiset X of bounded size. Because X is countable, there exists a mapping Z: X → N from x ∈ X to natural numbers. Because the cardinality of multisets X is bounded, there exists a number N ∈ N so that |X| < N for all X. Then an example of such f is f (x) = N −Z(x). This f can be viewed as a more compressed form of an one-hot vector or N -digit presentation. Thus, h(X) = x∈X f (x) is an injective function of multisets. φ x∈X f (x) is permutation invariant so it is a well-defined multiset function. For any multiset function g, we can construct such φ by letting φ x∈X f (x) = g(X). Note that such φ is well-defined because h(X) = x∈X f (x) is injective. Proof. Following the proof of Lemma 5, we consider f (x) = N −Z(x), where N and Z: X → N are the same as defined in Appendix D. Let h(c, X) ≡ (1 +) · f (c) + x∈X f (x). Our goal is show that for any (c, X) = (c, X) with c, c ∈ X and X, X ⊂ X, h(c, X) = h(c, X) holds, if is an irrational number. We prove by contradiction. For any (c, X), suppose there exists (c, X) such that (c, X) = (c, X) but h(c, X) = h(c, X) holds. Let us consider the following two cases: c = c but X = X, and c = c. For the first case, h(c, X) = h(c, X) implies x∈X f (x) = x∈X f (x). It follows from Lemma 5 that the equality will not hold, because with f (x) = N −Z(x), X = X implies x∈X f (x) = x∈X f (x). Thus, we reach a contradiction. For the second case, we can similarly rewrite h(c, X) = h(c, X) as Because is an irrational number and f (c) − f (c) is a non-zero rational number, L.H.S. of Eq. E.1 is irrational. On the other hand, R.H.S. of Eq. E.1, the sum of a finite number of rational numbers, is rational. Hence the equality in Eq. E.1 cannot hold, and we have reached a contradiction. For any function g over the pairs (c, X), we can construct such ϕ for the desired decomposition by letting ϕ (1 +) · f (c) + x∈X f (x) = g(c, X). Note that such ϕ is well-defined because h(c, X) = (1 +) · f (c) + x∈X f (x) is injective. F PROOF FOR LEMMA 7Proof. Let us consider the example X 1 = {1, 1, 1, 1, 1} and X 2 = {2, 3}, i.e. two different multisets of positive numbers that sum up to the same value. We will be using the homogeneity of ReLU.Let W be an arbitrary linear transform that maps x ∈ X 1, X 2 into R n. It is clear that, at the same coordinates, W x are either positive or negative for all x because all x in X 1 and X 2 are positive. It follows that ReLU(W x) are either positive or 0 at the same coordinate for all x in X 1, X 2. For the coordinates where ReLU(W x) are 0, we have x∈X1 ReLU (W x) = x∈X2 ReLU (W x). For the coordinates where W x are positive, linearity still holds. It follows from linearity that x∈X ReLU (W x) = ReLU W x∈X x where X could be X 1 or X 2. Because x∈X1 x = x∈X2 x, we have the following as desired.
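The construction in the proof of Lemma 5 (Appendix D) can also be run directly: with f(x) = N^(−Z(x)), summing over a multiset acts as a base-N digit counter, one digit per element type, so bounded multisets map to distinct numbers. A small sketch with an illustrative enumeration Z and bound N:

```python
N = 10                          # bound on multiset size, |X| < N
Z = {'a': 1, 'b': 2, 'c': 3}    # an enumeration Z: X -> N of the countable universe

def h(multiset):
    # f(x) = N ** (-Z(x)); the sum counts each element type in its own digit
    return sum(N ** -Z[x] for x in multiset)

print(h(['a', 'a']), h(['a', 'a', 'a']))         # 0.2 vs 0.3 -> distinct
print(h(['a', 'b', 'b']) == h(['b', 'a', 'b']))  # True: order-invariant
print(h(['a', 'b']) == h(['b', 'c']))            # False: different multisets
```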
[ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
ryGs6iA5Km
We develop theoretical foundations for the expressive power of GNNs and design a provably most powerful GNN.
We introduce MTLAB, a new algorithm for learning multiple related tasks with strong theoretical guarantees. Its key idea is to perform learning sequentially over the data of all tasks, without interruptions or restarts at task boundaries. Predictors for individual tasks are derived from this process by an additional online-to-batch conversion step. By learning across task boundaries, MTLAB achieves a sublinear regret of true risks in the number of tasks. In the lifelong learning setting, this leads to an improved generalization bound that converges with the total number of samples across all observed tasks, instead of the number of examples per tasks or the number of tasks independently. At the same time, it is widely applicable: it can handle finite sets of tasks, as common in multi-task learning, as well as stochastic task sequences, as studied in lifelong learning. In recent years, machine learning has become a core technology in many commercially relevant applications. One observation in this context was that real-world learning tasks often do not occur in isolation, but rather as collections or temporal sequences of many, often highly related tasks. Examples include click-through rate prediction for online ads, personalized voice recognition for smart devices, or handwriting recognition of different languages. Multi-task learning BID3 has been developed exactly to handle such situations. It is based on an intuitive idea that sharing information between tasks should help the learning process and therefore lead to improved prediction quality. In practice, however, this is not guaranteed and multi-task learning can even lead to a reduction of prediction quality, so called negative transfer. The question when negative transfer occurs and how it can be avoided has triggered a surge of research interest to better understanding the theoretical properties of multi-task learning, as well as related research areas, such as lifelong learning BID1 BID9, where more and more tasks occur sequentially, and task curriculum learning, where the order in which to learn tasks needs to be determined. In this work, we describe a new approach to multi-task learning that has strong theoretical guarantees, in particular improving the rate of convergence over some previous work. Our core idea is to decouple the process of predictor learning from the task structure. This is also the main difference of our approach to previous work, which typically learned one predictor for each task. We treat the available data for all tasks as parts of a single large online-learning problem, in which individual tasks simply correspond to subsets of the data stream that is processed. To obtain predictors for the individual tasks, we make use of online-to-batch conversion methods. We name the method MTLAB (multi-task learning across boundaries).Our main contribution is a sublinear bound on the task regret of MTLAB with true risks. As a corollary, we show that MTLAB improves the existing convergence rates in the case of lifelong learning. From the regret-type bounds, we derive high probability bounds on the expected risk of each task, which constitutes a second main contribution of our work. For real-world problems, not all tasks might be related to all previous ones. 
Our third contribution is a theoretically well-founded, yet practical, mechanism to avoid negative transfer in this case: we show that by splitting the set of tasks into homogeneous groups and using MTLAB to learn individual predictors on each of the resulting subsequences of samples, one obtains the same strong guarantees for each of the learned predictors while avoiding negative transfer. In this section we present the main notation and introduce the MTLAB approach to information transfer between tasks. We face a sequence of tasks k_1, ..., k_n, ..., where each k_t is from a task environment K, and the sequence is a random realization of a stochastic process over K. Note that this general formulation includes the situations most commonly studied in the literature: the case of finitely many fixed tasks (in which case the distribution over the task sequence is a delta peak) and the lifelong learning setting with i.i.d. BID1 BID9 or non-i.i.d. tasks. All tasks share the same input set X, output set Y, and hypothesis set H. Each task k_t, however, has its own associated joint probability distribution, D_t, over X × Y, conditioned on k_t. Whenever we observe a task k_t, we receive a set S_t = {(x_{t,i}, y_{t,i})}_{i=1}^{m_t} sampled i.i.d. from the task distribution D_t, and we are given a loss function, DISPLAYFORM0 that measures the quality of predictions. Alternatively, one can assume that all tasks share the same, a priori known, loss function. Learning a task k_t means to identify a hypothesis h ∈ H with as small as possible per-task risk er_t(h), which is defined as DISPLAYFORM1 The PAC-Bayes framework, originating in BID7 BID12, studies the performance of stochastic (Gibbs) predictors. A stochastic predictor is defined by a probability distribution Q over the hypothesis set. For any Gibbs predictor with a distribution Q we define the corresponding true risk of the predictor as DISPLAYFORM2 As described in the introduction, we do not require that data for all tasks is available at the same time. Instead, we adopt an online learning protocol for tasks: at step t we observe the dataset S_t for task k_t, and we output the distribution Q̂_t. Our first goal is, at any step n, to bound the regret of a learned sequence of predictors Q̂_1, ..., Q̂_n with respect to any fixed reference distribution Q from some set, ∆, of distributions, i.e. DISPLAYFORM0 Note that the regret is defined using true risks, which we do not observe, in contrast to empirical risks. This makes the problem setting very different from traditional online learning, where the empirical performance is considered. The main idea of MTLAB is to run an online learning algorithm on the samples from all tasks, essentially ignoring the task structure of the problem, and then use a properly defined online-to-batch conversion to obtain predictors for the individual tasks. In this paper, we work with the online learning scheme of BID4 run on the level of samples, summarized below.

Input: decision set ∆, initial distribution P, learning rate η
Initialization: Q_{1,0} = P
At any time point t = 1, 2, ...

Let P be some prior distribution over H. We set Q_{1,0} = P and, once we receive a dataset S_t = {(x_{t,1}, y_{t,1}), ..., (x_{t,m_t}, y_{t,m_t})} on step t, we compute a sequence of predictors Q_{t,i}, each being a solution to DISPLAYFORM1 DISPLAYFORM2 for all i = 1, ..., m_t with η > 0. Afterwards, the algorithm outputs a predictor Q̂_t = (1/m_t) Σ_{i=1}^{m_t} Q_{t,i} for task t, and sets Q_{t+1,0} = Q_{t,m_t}, to be used as a starting distribution for the next task.
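Over a finite hypothesis set, the whole procedure fits in a few lines. The sketch below assumes that the solution of the (elided) per-sample objective is the standard exponentially weighted average update, Q_{t,i}(h) ∝ Q_{t,i-1}(h) exp(−η ℓ(h, z_{t,i})); the function and variable names are ours:

```python
import numpy as np

def mtlab(task_losses, eta, prior):
    """MTLAB sketch over a finite hypothesis set.
    task_losses: list over tasks of arrays of shape (m_t, n_hypotheses),
    where row i holds the losses of every hypothesis on sample z_{t,i}."""
    q = np.asarray(prior, dtype=float)            # Q_{1,0} = P
    predictors = []
    for losses in task_losses:                    # tasks arrive sequentially
        within_task = []
        for sample_loss in losses:                # samples of the current task
            q = q * np.exp(-eta * sample_loss)    # exponentially weighted update
            q = q / q.sum()
            within_task.append(q)
        predictors.append(np.mean(within_task, axis=0))  # online-to-batch: Q_t hat
        # no restart at the task boundary: q carries over as Q_{t+1,0}
    return predictors
```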
We call the above procedure MTLAB (multi-task learning across task boundaries) and summarize it in Figure 1. Our first main is a regret bound for the true risks of the sequence of distributions that it produces. Theorem 1. Letm = n/(n t=1 1/m t) be the harmonic mean of m 1,..., m n and let P be a fixed prior distribution that is chosen independently of the data. The predictors produced by MTLAB satisfy with probability 1 − δ (over the random training sets) uniformly over Q ∈ ∆ DISPLAYFORM3 Corollary. Set η = m n. Then, with probability 1 − δ, it holds uniformly over DISPLAYFORM4 To put this into perspective, we compare it to the average regret bounds given in BID0, where the goal is to find the best possible data representation for tasks. Even though the settings are a bit different, it gives a good idea of the qualitative nature of our . BID0 provides O(DISPLAYFORM5) bound (if all tasks are of the same size m) that can be sometimes improved to O(DISPLAYFORM6 In either case, convergence happen only in the regime when the number of tasks and the amount of data for each task both tend to infinity. In contrast to this, the right hand side of inequality converges to zero even if only one of the two quantities grows, so in particular for the most common case that the number of tasks grows to infinity, but the amount of data per task remains bounded. The examples of real-world implementations of MTLAB are provided in the supplementary material. We obtain further insight into the behavior of MTLAB by comparing it to the situation in which each task is learned independently. A more traditional PAC-Bayes bound (e.g. BID6) states that with probability 1 − δ the following inequality holds for all Q DISPLAYFORM0 This inequality suggests a learning algorithm, namely to minimize the upper bound with respect to Q. In principle, MTLAB is based on a similar objective, but it acts on the sample level and it automatically provides relevant prior distributions for each task. Thereby it is able to achieve better guarantee than one could get by combining separate bounds of the form for multiple tasks. The bound of Theorem 1 holds for any stochastic process over the tasks. In particular, it holds in special case where tasks are sampled independently from a hyper distribution over the task environment, which is usually called lifelong learning BID1 BID9. In this setting, we have a fixed distribution T over K, and the sequence k 1,..., k n is an i.i.d. sample from this distribution. One can then define the lifelong risk as DISPLAYFORM0 where D k and k are the distribution and loss function for a task k, respectively. The risk of the Gibbs predictor is then DISPLAYFORM1..,Q n be the output of MTLAB, then we define the corresponding batch solution asQ n = 1 n n t=1Q t and observe DISPLAYFORM2 Using Theorem 1 we obtain the following guarantee. Theorem 2. In the lifelong learning setting, if we run MT-LAB with η = √m √ n, for any fixed prior distribution P that is chosen independently from the data, with probability 1 − δ DISPLAYFORM3 Typical for this setting, such as shown in BID9 BID5 BID0, show the additive convergence rate O(DISPLAYFORM4, which goes to zero only in the case of infinite data and infinite tasks. In contrast, the generalization error for MTLAB converges in the most realistic scenario of finite data per task and increasing number of tasks. The of the previous section provide guarantees on MTLAB's multi-task regret. 
In this section we compliment those by presenting a modification that provides guarantees for individual risks of each task. The detailed proofs of all statements can be found in the supplementary material. As a start, let us consider a bound that can be obtained immediately from Theorem 1. We make use of the following notion of relatedness between tasks that is commonly used in the field of domain adaptation BID2 . Definition 1. For a fixed hypothesis class H, the discrepancy between tasks k i and k j is defined as DISPLAYFORM0 The following theorem is an immediate corollary of Theorem 1. Theorem 3. Let P be a fixed prior distribution that is chosen independently of the data. LetQ t be a sequence of predictors produced by MTLAB run with η = m n and let Q n = 1 n n t=1Q t . Then the following inequality holds with probability 1 − δ, uniformly over Q ∈ ∆ DISPLAYFORM1 DISPLAYFORM2 This bound resembles the guarantees typical in the setting of learning from drifting distributions BID8 . It converges if 1 n n i=1 disc(k i, k n) → 0 with n, so if either tasks are identical to each other, or if tasks get suitably more similar on average with growing n. This is a good example of possible negative transfer: when the previous tasks are not related to the current one as measured by the discrepancies, the average discrepancy term will prevent the bound from convergence. The main question is if we can avoid the negative transfer and improve upon the bound of Theorem 3 in the case when 1 n n i=1 disc(k i, k n) does not vanish over time. Consider, for example, a simple case of two alternating tasks, i.e. DISPLAYFORM3 If we split the sequence of tasks into two subsequences, one for tasks with even and one for tasks with odd indices, and then run MT-LAB separately for each sequence, we could nevertheless guarantee the convergence of the error rate for the ing procedure. Unfortunately, it is rather easy to construct examples in which convergence to zero is not achievable, even with the best possible split of the sequence of tasks into subsequences. Consequently, we redefine our goal to prove error rates that converge below a given threshold ε. We present an online algorithm, MTLAB.MS (for MTLAB with Multiple Sequences), that splits the tasks into subsequences on the fly given some distance dist(k i, k j) between tasks. MTLAB.MS keeps a representative task for each subsequence, and we use the distances to the representatives to decide which subsequence to extend with the new task, or if a new subsequence needs to be initialized. Pseudo-code for MTLAB.MS is provided in Algorithm 2. The notationQ, P = MTLAB(S, P) denotes a single run of MTLAB that takes a dataset S, runs its learning procedure starting from distribution P and outputs two distributions: the final distribution P to be used in the subsequent runs and the aggregate distributionQ that is a final predictor for the task. Further notation used are: I n are the indices of the tasks in the subsequence chosen at step n, s n = |I n | is the size of this subsequence,m n is the harmonic average of the sizes of tasks in the chosen subsequence and η n is the learning rate of MTLAB associated with the chosen subsequence. The following theorem shows that if MTLAB.MS could be run with the task discrepancies as distances, it would, for any given threshold ε, yield subsequences with generalization error below ε. Theorem 4. Let P be a fixed prior distribution that is chosen independently of the data. 
If we run MTLAB.MS with dist(k_i, k_j) = disc(k_i, k_j), we get, with probability 1 − δ: [equation omitted]

This theorem applies when the transfer algorithm uses a fixed learning rate η for each subsequence. It is possible to prove a similar statement for the case where the parameters are optimized for the length of each subsequence, using the machinery developed in BID13. However, the final statement gets more complicated and adds little to the discussion in the current paper, so we leave this extension for future work. The update loop is the following (a code sketch follows below):

Input: task distance dist, prior distribution P, threshold ε
Initialization: set of representative tasks R = ∅; set of priors P = ∅
At any time point t = 1, 2, ...:
• receive dataset S_t
• set I = {r ∈ R : dist(k_r, k_t) ≤ ε}
• if I = ∅ then: add t to the set of representatives R and set P(t) = P
• choose the closest representative r = argmin_{r ∈ I ∪ {t}} dist(k_r, k_t)
• run the transfer algorithm: (Q̂_t, P′) = MTLAB(S_t, P(r))
• set P(r) = P′
• output Q̂_t

Figure 2. The MTLAB.MS algorithm.

Theorem 4 confirms that it is possible to avoid the effects of negative transfer by carefully choosing the tasks we transfer knowledge from at each step, and MTLAB.MS is a computationally efficient way of doing this. In practice, however, the true discrepancy values are unknown. The most direct way to determine the right subsequence for each task is to estimate the discrepancies from the data and use the estimates in the MTLAB.MS algorithm. In the supplementary material we detail two approaches for discrepancy estimation: (a) using a part of the labelled training data, and (b) using separate unlabelled datasets. In both cases it is possible to prove statements similar to Theorem 4.

We introduced a new and widely applicable algorithm for the sequential learning of multiple tasks. By performing learning across task boundaries, it achieves a sublinear regret bound and improves the convergence rates in the lifelong learning scenario. MTLAB's way of not interrupting or restarting the learning process at task boundaries results in faster convergence rates than what can be achieved by learning individual predictors for each task: in particular, the generalization error decreases with the product of the number of tasks and the number of samples per task, instead of separately in each of these quantities. We also introduced a mechanism for the situation when the tasks to be learned are not all related to each other, and showed that by constructing suitable subsequences of tasks, the convergence properties can hold even in this case.
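To make the control flow of MTLAB.MS concrete, here is a minimal Python sketch of the loop above. The inner MTLAB call and the distance function are stubs (their real counterparts maintain PAC-Bayes posteriors and discrepancy estimates); all names are illustrative.

```python
# Sketch of the MTLAB.MS loop (Figure 2). `mtlab` and `dist` are stand-ins
# for the actual transfer step and the (estimated) task discrepancy.
def mtlab(dataset, prior):
    # Placeholder for one MTLAB run on a task: returns the aggregate
    # predictor Q_hat for this task and the updated prior P' to be
    # carried forward along the same subsequence.
    raise NotImplementedError

def mtlab_ms(task_stream, dist, base_prior, eps):
    priors = {}  # representative task id -> current prior of its subsequence
    for task_id, dataset in task_stream:
        close = [r for r in priors if dist(r, task_id) <= eps]
        if not close:
            priors[task_id] = base_prior   # start a new subsequence
            r = task_id
        else:
            r = min(close, key=lambda rep: dist(rep, task_id))
        q_hat, new_prior = mtlab(dataset, priors[r])
        priors[r] = new_prior              # transfer within the subsequence
        yield q_hat                        # predictor for the current task
```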
TL;DR: A new algorithm for online multi-task learning that learns without restarts at the task borders.
Recent work has exhibited the surprising cross-lingual abilities of multilingual BERT (M-BERT) -- surprising since it is trained without any cross-lingual objective and with no aligned data. In this work, we provide a comprehensive study of the contribution of different components in M-BERT to its cross-lingual ability. We study the impact of the linguistic properties of the languages, the architecture of the model, and the learning objectives. The experimental study is done in the context of three typologically different languages -- Spanish, Hindi, and Russian -- and using two conceptually different NLP tasks, textual entailment and named entity recognition. Among our key results is the fact that lexical overlap between languages plays a negligible role in cross-lingual success, while the depth of the network is an important part of it.

Embeddings of natural language text via unsupervised learning, coupled with sufficient supervised training data, have been ubiquitous in NLP in recent years and have shown success in a wide range of monolingual NLP tasks, mostly in English. Training models for other languages has proven more difficult, and recent approaches have relied on bilingual embeddings that allow the transfer of supervision from high-resource languages like English to models in lower-resource languages; however, inducing these bilingual embeddings requires some level of supervision. Multilingual BERT (M-BERT), a Transformer-based language model trained on the raw Wikipedia text of 104 languages, suggests an entirely different approach. Not only is the model contextual, but its training also requires no supervision -- no alignment between the languages is done. Nevertheless, despite being trained with no explicit cross-lingual objective, M-BERT produces a representation that seems to generalize well across languages for a variety of downstream tasks. In this work, we attempt to develop an understanding of the success of M-BERT. We study a range of aspects on a couple of different NLP tasks in order to identify the key components behind the success of the model. Our study is done in the context of only two languages, a source (typically English) and a target (one of multiple, quite different languages). By involving only a pair of languages, we can study the performance on a given target language while ensuring that it is influenced only by cross-lingual transfer from the source language, without a third language interfering. We analyze the two-language version of M-BERT (B-BERT, from now on) in three orthogonal dimensions: (i) linguistic properties and similarities of the target and source languages; (ii) network architecture; and (iii) input and learning objective. One hypothesis raised to explain the success of M-BERT is some level of language similarity. This could be lexical similarity (shared words or word-parts), structural similarity, or both. We therefore investigate the contribution of word-piece overlap -- the extent to which the same word-pieces appear in both source and target languages -- and distinguish it from other similarities, which we call structural similarity between the source and target languages. Surprisingly, as we show, B-BERT is cross-lingual even when there is absolutely no word-piece overlap. That is, other aspects of language similarity must be contributing to the cross-lingual capabilities of the model. This is contrary to the hypothesis that M-BERT gains its power from shared word-pieces.
Furthermore, we show that the amount of word-piece overlap in B-BERT's training data contributes little to performance improvements. Our study of the model architecture addresses the importance of (i) the network depth, (ii) the number of attention heads, and (iii) the total number of model parameters in B-BERT. Our results suggest that depth and the total number of parameters of B-BERT are crucial for both monolingual and cross-lingual performance, whereas multi-head attention is not a significant factor: a single-attention-head B-BERT can already give satisfactory results. To understand the role of the learning objective and the input representation, we study the effect of (i) the next sentence prediction objective, (ii) the language identifier in the training data, and (iii) the level of tokenization in the input representation (character, word-piece, or word tokenization). Our results indicate that the next sentence prediction objective actually hurts the performance of the model, while identifying the language in the input does not affect B-BERT's performance cross-lingually. Our experiments also show that character-level and word-level tokenization of the input results in significantly worse performance than word-piece-level tokenization. Overall, we provide an extensive set of experiments on three source-target language pairs: English-Spanish, English-Russian, and English-Hindi. We chose these target languages since they vary in scripts and typological features. We evaluate the performance of B-BERT on two very different downstream tasks: cross-lingual Named Entity Recognition, a sequence prediction task that requires only local context, and cross-lingual Textual Entailment, which requires a more global representation of the text. Ours is not the first study of M-BERT. Two earlier works identified the cross-lingual success of the model and tried to understand it: the former by considering M-BERT layer-wise, relating cross-lingual performance to the amount of shared word-pieces, and the latter by considering the model's ability to transfer between languages as a function of word-order similarity. However, both works treated M-BERT as a black box and compared M-BERT's performance across languages. This work, on the other hand, examines how B-BERT performs cross-lingually by probing its components along multiple aspects. We also note that some of the architectural results have been observed earlier, if not investigated, in other contexts. Earlier work argued that the Next Sentence Prediction objective of BERT (the monolingual model) is not very useful; we show that this is also the case in the cross-lingual setting. Other work prunes attention heads in a Transformer-based machine translation model and argues that most attention heads are not important; in this work, we show that the number of attention heads is not important in the cross-lingual setting. Our contributions are threefold: (i) we provide the first extensive study of the aspects of multilingual BERT that give rise to its cross-lingual ability; (ii) we develop a methodology that facilitates the analysis of similarities between languages and their impact on cross-lingual models, by mapping English to a Fake-English language that is identical in all aspects to English but shares no word-pieces with any target language; and (iii) we develop a set of insights into B-BERT, along linguistic, architectural, and learning dimensions, that contribute to further understanding and to the development of more advanced cross-lingual neural models.
BERT is a Transformer-based pre-trained language representation model that has been widely used in the field of Natural Language Processing. BERT is trained with the Masked Language Modelling (MLM) and Next Sentence Prediction (NSP) objectives. The input to BERT is a pair of sentences A and B, such that half of the time B comes after A in the original text and the rest of the time B is a randomly sampled sentence. Some tokens from the input are randomly masked, and the MLM objective is to predict the masked tokens. The NSP objective is to predict whether B is the actual next sentence of A or not. The argument is that MLM enables a deep representation from both directions, while NSP helps in understanding the relationship between two sentences and can be beneficial to the representations. BERT follows two steps: (1) pre-training and (2) fine-tuning. BERT is pre-trained using the above-mentioned MLM and NSP objectives on BooksCorpus and English Wikipedia text, and for any supervised downstream task, BERT is initialized with the pre-trained weights and fine-tuned using the labeled data. BERT uses word-piece tokenization, which builds a word-piece vocabulary in a data-driven way. Multilingual BERT is pre-trained in the same way as monolingual BERT, except using Wikipedia text from the top 104 languages. To account for the differences in the sizes of the Wikipedias, some languages are sub-sampled and some are super-sampled using exponential smoothing. It is worth mentioning that no specifically designed cross-lingual objective, nor any cross-lingual data (e.g., a parallel corpus), is used.

In this section, we analyze the reasons for the cross-lingual ability of multilingual BERT (actually, B-BERT) in three dimensions: (i) linguistics, (ii) architecture, and (iii) input and learning objective. Languages share similarities with each other. For example, English and Spanish have words that look much the same; English and Russian both have a Subject-Verb-Object (SVO) order; and English and Hindi, despite being written in different scripts, use the same Arabic numerals. The similarity between languages can be a reason for M-BERT's cross-lingual ability. From this linguistic point of view, we study the contribution of word-piece overlap - the similarity between languages arising from the same characters/words used across languages, as well as code-switching data - and of structural similarity, the part of linguistic similarity that is not explained by word-piece overlap and does not rely on the script of the language. We hypothesize that the cross-lingual effectiveness of B-BERT comes from the architecture of BERT itself being able to extract good semantic and structural features. We study the depth, the number of attention heads, and the total number of parameters of the Transformer model to explore the influence of each on the cross-lingual ability. Finally, we study the effect of the learning objectives and the input. The Next Sentence Prediction objective has been shown to be unnecessary in monolingual settings, and we analyze its effect in the cross-lingual setting. B-BERT follows BERT and uses a word-piece vocabulary. Word-pieces can be seen as a trade-off between characters and words, and we compare these three ways of tokenizing the input with respect to how they affect cross-lingual transfer. In this work, we conduct all our experiments on two conceptually different downstream tasks: cross-lingual Textual Entailment (TE) and cross-lingual Named Entity Recognition (NER).
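To make the pre-training input described above concrete, here is a minimal sketch of assembling an (A, B) pair with an NSP label and random MLM masks. The special-token names and the 15% mask rate follow common BERT conventions; the real implementation additionally replaces some masked positions with random or unchanged tokens, which is omitted here.

```python
import random

CLS, SEP, MASK = "[CLS]", "[SEP]", "[MASK]"

def make_example(sent_a, true_next, random_sent, mask_prob=0.15):
    # NSP: half the time B is the true next sentence, half the time random.
    if random.random() < 0.5:
        sent_b, is_next = true_next, 1
    else:
        sent_b, is_next = random_sent, 0
    tokens = [CLS] + sent_a + [SEP] + sent_b + [SEP]
    # MLM: mask a random subset of non-special tokens; remember the targets.
    targets = {}
    for i, tok in enumerate(tokens):
        if tok not in (CLS, SEP) and random.random() < mask_prob:
            targets[i] = tok
            tokens[i] = MASK
    return tokens, targets, is_next

print(make_example(["the", "cat", "sat"], ["it", "purred"], ["stocks", "fell"]))
```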
TE measures natural language understanding (NLU) at the sentence and sentence-pair level, whereas NER measures NLU at the token level. We use the Cross-lingual Natural Language Inference (XNLI) dataset to evaluate cross-lingual TE performance and the LORELEI dataset for cross-lingual NER. XNLI is a standard cross-lingual textual entailment dataset that extends the MultiNLI dataset by creating new dev and test sets and manually translating them into 14 different languages. Each input consists of a premise and hypothesis pair, and the task is to classify the relationship between premise and hypothesis into one of three labels: entailment, contradiction, and neutral. During training, both premise and hypothesis are in English; during testing, both are in the target language. XNLI uses the same set of premises and hypotheses for all languages, making comparisons across languages possible. Named Entity Recognition is the task of identifying and labeling text spans as named entities, such as people's names and locations. The NER dataset we use consists of news and social media text labeled by native speakers following the same guidelines in several languages, including English, Hindi, Spanish, and Russian. We subsample 80%, 10%, and 10% of the English NER data as training, development, and test sets, and use the whole Hindi, Spanish, and Russian datasets for testing.

The vocabulary size is fixed at 60000 and is estimated with the unigram language model in the SentencePiece library. We denote a B-BERT trained on languages A and B as A-B, e.g., a B-BERT trained on English (en) and Hindi (hi) as en-hi, and similarly for Spanish (es) and Russian (ru). For pre-training, we subsample the en, es, and ru Wikipedias to 1GB and use the entire Wikipedia for Hindi. Unless otherwise specified, for B-BERT training we use a batch size of 32, a learning rate of 0.0001, and 2M training steps. For XNLI, we use the same fine-tuning approach as BERT uses in English and report accuracy. For NER, we extract BERT representations as features, fine-tune a Bi-LSTM-CRF model, and report the entity-span F1 score averaged over 5 runs together with its standard deviation.

Prior work hypothesizes that the cross-lingual ability of M-BERT arises because of the shared word-pieces between source and target languages. However, our experiments show that B-BERT is cross-lingual even when there is no word-piece overlap. It has further been hypothesized that, for cross-lingual transfer learning, the source language should be selected such that it shares more word-pieces with the target language; our experiments suggest instead that structural similarity is much more important. Motivated by these two hypotheses, in this section we study the contribution of word-piece overlap and structural similarity to the cross-lingual ability of B-BERT. The M-BERT model is trained using Wikipedia text from 104 languages, and the texts from different languages share some common word-piece vocabulary (like numbers, links, etc., including actual words if they share the same script); we refer to this as word-piece overlap. Previous work hypothesizes that M-BERT generalizes across languages because these shared word-pieces have to be mapped to a shared space, forcing the other co-occurring word-pieces to be mapped to that same shared space. In this section, we perform experiments to compare cross-lingual performance with and without word-piece overlap.
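The 60000-token unigram vocabulary could be reproduced along the following lines with the SentencePiece library; the file paths are placeholders, and all options other than `vocab_size` and `model_type` are left at their defaults.

```python
import sentencepiece as spm

# Train a unigram-LM word-piece vocabulary of size 60000 on the combined
# bilingual corpus (e.g., subsampled English + target-language Wikipedia).
spm.SentencePieceTrainer.train(
    input="en_plus_target_wiki.txt",   # placeholder corpus path
    model_prefix="bbert_unigram",
    vocab_size=60000,
    model_type="unigram",
)

sp = spm.SentencePieceProcessor(model_file="bbert_unigram.model")
print(sp.encode("a sample sentence", out_type=str))
```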
We construct a new corpus, Fake-English (enfake), by shifting the Unicode code point of each character in the English Wikipedia text by a large constant, so that there is strictly no character overlap with any other Wikipedia text. In this work, we consider Fake-English to be a different language.

Table 1: For several source-target pairs, and for two tasks (XNLI and NER), we show the contribution of word-pieces to the success of the model. In every two consecutive rows, we show results for a pair (e.g., English-Spanish) and then for the corresponding pair after mapping English to a disjoint set of word-pieces. The gap between the performance in each group of two rows indicates the loss due to completely eliminating the word-piece contribution. We add an asterisk to the NER numbers when the results are statistically significant at the 0.05 level.

We measure the contribution of word-piece overlap as the drop in performance when the word-piece overlap is removed. From Table 1, we can see that B-BERT is cross-lingual even when there is no word-piece overlap. We can also see that the contribution of word-piece overlap is very small, which is quite surprising and contradicts the earlier hypothesis. We define the structure of a language as every property of an individual language that is invariant to the script of the language; e.g., morphology, word ordering, word frequency, and word-pair frequency are all part of the structure of a language. Note that English and Fake-English do not share any vocabulary or characters, but they have exactly the same structure. From Table 1, we can see that BERT transfers very well from Fake-English to English. Also note that, despite not sharing any vocabulary, Fake-English transfers to Spanish, Hindi, and Russian almost as well as English does. On XNLI, where the scores between languages can be compared, the cross-lingual transferability from Fake-English to Spanish is much better than from Fake-English to Hindi or Russian. Since they do not share any word-pieces, this better transferability comes from the structure of Spanish being closer to that of Fake-English. These results suggest that we should shed more light on the structural similarity between languages. In this study, we do not further dissect the structure of language, as the definition of the "structure of a language" is currently fuzzy. Despite this amorphous definition, our experiments clearly show that structural similarity is crucial for cross-lingual transfer.

3.3 ARCHITECTURE

From Section 3.2, we observe that B-BERT recognizes language structure effectively. We envisage that BERT gains the ability to recognize language structure because of its architecture. In this section, we study the contribution of the different components of the B-BERT architecture, namely (i) depth, (ii) multi-head attention, and (iii) the total number of parameters; the motivation is to understand which components are crucial for its cross-lingual ability. We perform all our cross-lingual experiments on the XNLI dataset with Fake-English as the source and Russian as the target language, and we measure cross-lingual ability by the difference between the performance on Fake-English and on Russian (the smaller the difference, the better the cross-lingual ability). We presume that the ability of B-BERT to extract good semantic and structural features is a crucial reason for its cross-lingual effectiveness, and that the depth of B-BERT helps it extract good language features.
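The Fake-English construction amounts to one line of character arithmetic; the sketch below shifts every code point by a constant (the exact offset used in the paper is not stated, so the value here is an assumption; any constant that yields disjoint characters works):

```python
OFFSET = 0x10000  # hypothetical shift; any offset giving disjoint characters works

def to_fake_english(text: str) -> str:
    # Shift every code point by a large constant so that Fake-English shares
    # no characters (and hence no word-pieces) with any real language.
    return "".join(chr(ord(ch) + OFFSET) for ch in text)

fake = to_fake_english("hello world")
assert set(fake).isdisjoint(set("hello world"))
```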
Table 2: The effect of the depth of the B-BERT architecture. We use a Fake-English-Russian B-BERT and study the effect of the depth of B-BERT on the performance on Fake-English and Russian on XNLI data. We vary the depth and fix both the number of attention heads and the number of parameters; the sizes of the hidden and intermediate units are changed so that the total number of parameters remains almost the same. We train only on Fake-English, test on both Fake-English and Russian, and report test accuracy. The difference between the performance on Fake-English and Russian (∆) is our measure of cross-lingual ability (the smaller the difference, the better the cross-lingual ability).

From Table 2, we can see that deeper models not only perform better on English but are also better cross-lingually (∆). We can also see a strong correlation between performance on English and cross-lingual ability (∆), which further supports our assumption that the ability to extract good semantic and structural features is a crucial reason for cross-lingual effectiveness. In this section, we study the effect of multi-head attention on the cross-lingual ability of B-BERT. We fix the depth and the total number of parameters - which is a function of the depth and of the sizes of the hidden and intermediate units - and study the performance for different numbers of attention heads. From Table 3, we can see that the number of attention heads does not have a significant effect on cross-lingual ability (∆): B-BERT is satisfactorily cross-lingual even with a single attention head, in agreement with recent studies on monolingual BERT.

Table 3: The effect of multi-head attention. We study the effect of the number of attention heads of B-BERT on the performance on Fake-English and Russian on XNLI data. We fix both the depth and the number of parameters of B-BERT and vary the number of attention heads. The difference between the performance on Fake-English and Russian (∆) is our measure of cross-lingual ability.

Similar to depth, we also anticipate that a large number of parameters could potentially help B-BERT extract good semantic and structural features. We study the effect of the total number of parameters on cross-lingual performance by fixing the number of attention heads and the depth; we change the number of parameters by changing the sizes of the hidden and intermediate units (the size of the intermediate units is always 4× the size of the hidden units). From Table 4, we can see that the total number of parameters is not as significant as depth; however, below a threshold the number of parameters becomes significant, which suggests that B-BERT requires a certain minimum number of parameters to extract good semantic and structural features.

Table 4: The effect of the total number of parameters. We study the effect of the total number of parameters of B-BERT on the performance on Fake-English and Russian on XNLI data. We fix both the depth and the number of attention heads of B-BERT and vary the total number of parameters by changing the sizes of the hidden and intermediate units. The difference between the performance on Fake-English and Russian (∆) is our measure of cross-lingual ability.

In this section, we study the effect of the input representation and the learning objectives on the cross-lingual ability of B-BERT. BERT is a Transformer model trained with the MLM and NSP objectives.
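The "vary depth at a fixed parameter budget" protocol can be sketched with the standard estimate of about 12·h² parameters per Transformer layer (4h² for the attention projections plus 8h² for a feed-forward block with intermediate size 4h); the estimate ignores embeddings and biases, and the budget below is hypothetical.

```python
import math

def hidden_size_for_budget(depth, target_params):
    # Per layer: ~4h^2 (attention projections) + 8h^2 (FFN with 4h
    # intermediate units) = 12h^2 parameters, ignoring embeddings/biases.
    return int(math.sqrt(target_params / (12 * depth)))

budget = 100_000_000  # hypothetical total parameter budget
for depth in (2, 4, 8, 12, 24):
    h = hidden_size_for_budget(depth, budget)
    approx = 12 * depth * h * h
    print(f"depth={depth:2d}  hidden={h:4d}  params~{approx/1e6:.1f}M")
```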
XLM shows that a Transformer model trained with the Causal Language Modeling (CLM) objective is also cross-lingual; however, it also observes that pre-training with the MLM objective consistently outperforms CLM. In this work, we do not further study the effect of the MLM objective. Recent works show that the NSP objective hurts the performance on several monolingual tasks; here, we verify whether the NSP objective helps or hurts cross-lingual performance. The authors of M-BERT state that they intentionally do not use any marker to identify the language, so that cross-lingual transfer can work; however, our experiments suggest that adding a language-identity marker to the input does not hurt the cross-lingual performance of BERT. We are also interested in studying the effect of character and word vocabularies instead of word-pieces: characters handle unseen words better than words, words carry more semantic and syntactic information, and word-pieces are a middle ground between the two.

The input to BERT is a pair of sentences separated by a special token, such that half the time the second sentence is the true next sentence and the rest of the time it is a random sentence. The NSP objective of BERT (B-BERT) is to predict whether the second sentence comes after the first one in the original text. We study the effect of the NSP objective by comparing the performance of B-BERT pre-trained with and without this objective. From Table 5, we can see that the NSP objective hurts cross-lingual performance even more than it hurts monolingual performance. In this work, we argue that B-BERT is cross-lingual because of its ability to recognize language structure and semantics, and hence we presume that adding a language-identity marker does not affect its cross-lingual ability; indeed, even if we do not add a language-identity marker, BERT learns the language identity. We study the effect of adding a language identifier to the input data, using different end-of-string ([SEP]) tokens for different languages as the language-identity marker. The columns "With Lang-id" and "No Lang-id" show the performance when B-BERT is trained with and without a language-identity marker in the input.

In this section, we compare the performance of B-BERT with character-, word-piece-, and word-tokenized input. For character B-BERT, we use all characters as the vocabulary, and for word B-BERT, we use the most frequent 100000 words. From Table 7, we can see that both the monolingual and the cross-lingual performance of B-BERT with word-piece-tokenized input is better than with character- or word-tokenized input. We believe this is because word-pieces carry much more information than characters, while addressing unseen words better than words do.

Table 7: Performance of B-BERT with differently tokenized input on XNLI and NER data. The columns Char, WordPiece, and Word report the performance of B-BERT with character-, word-piece-, and word-tokenized input, respectively. We use a 2k batch size and 500k epochs.

This paper provides a systematic empirical study addressing the cross-lingual ability of B-BERT. The analysis presented here covers three dimensions: the linguistic properties and similarities of the source and target languages, the neural architecture, and the input representation and learning objective. In order to gauge the language-similarity aspect needed to make B-BERT successful, we created a new language, Fake-English, which allows us to study the effect of word-piece overlap while maintaining all other properties of the source language.
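The language-identity experiment only changes how segments are terminated; a minimal sketch (the per-language token names here are illustrative, not the exact ones used):

```python
def end_segment(tokens, lang, use_lang_id):
    # With lang-id: a per-language end-of-string token (e.g., [SEP-EN],
    # [SEP-RU]) tells the model which language the segment comes from.
    sep = f"[SEP-{lang.upper()}]" if use_lang_id else "[SEP]"
    return tokens + [sep]

print(end_segment(["пример"], "ru", use_lang_id=True))    # ... [SEP-RU]
print(end_segment(["sample"], "en", use_lang_id=False))   # ... [SEP]
```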
Our experiments reveal some interesting and surprising results, such as the fact that word-piece overlap on the one hand, and multi-head attention on the other, are both insignificant, whereas structural similarity and the depth of B-BERT are crucial for its cross-lingual ability. While we studied the cross-lingual ability of B-BERT instead of M-BERT in order to better control interference among languages, it would be interesting to extend this study to allow for more interactions among languages. We leave the study of these interactions to future work; in particular, one important question is to understand the extent to which adding languages related to the target language to M-BERT helps the model's cross-lingual ability. We introduced the term structural similarity, despite its obscure definition, and showed its significance for cross-lingual ability. Another interesting direction for future work is to develop a better definition and, consequently, a finer set of experiments to better understand structural similarity and study its individual components. Finally, we note an interesting observation made in Table 8: we observe a drastic drop in the entailment performance of B-BERT when the premise and hypothesis are in different languages. (This data was created using XNLI, which in its original form contains the same premise and hypothesis pairs across languages.) One possible explanation is that BERT learns to make textual entailment decisions by matching words or phrases in the premise to those in the hypothesis. This question, too, is left as a future direction.

In the main text, we defined structural similarity as the set of all properties of a language that are invariant to the script of the language, such as morphology, word ordering, and word frequency. Here, we analyze two sub-components of structural similarity - word-order similarity and word-frequency (unigram-frequency) similarity - to better understand the concept of structural similarity. Words are ordered differently between languages; for example, English has Subject-Verb-Object order, while Hindi has Subject-Object-Verb order. We analyze whether similarity in how words are ordered affects cross-lingual transferability. We study the effect of word-order similarity - one component of structural similarity - by destroying the word-order structure: we shuffle some percentage of random words in the sentences during pre-training. We shuffle both the source (Fake-English) and the target language (permuting either one alone would also be sufficient). In this way, the similarity of word order is hidden from B-BERT. We control how much each sentence is permuted; for example, when a sentence is 100% permuted, it can be treated as a bag of words. For each word, the words that appear in its context (the other words in the sentence) are not changed. Note that during fine-tuning on the source language (and testing on the target language) we do not permute, as the cross-lingual ability is gained only during the pre-training process. From Table 9, we can see that performance drops significantly when we destroy word-order similarity. However, cross-lingual performance is still quite good, which indicates that there are other components of structural similarity that contribute to the cross-lingual ability of B-BERT.

Table 9: Contribution of word-order similarity. We study the importance of word-order similarity by analyzing the performance on XNLI and NER when some percentage of the word-order similarity is destroyed.
For a given fraction p, we randomly shuffle p·L words in each sentence, where L is the total number of words in that sentence. For example, enfake-es-permute-0.25 indicates that in each sentence a random 25% of the words are shuffled, and enfake-es-permute-1.00 indicates that every word in the sentence is randomly shuffled; similarly for the others. We can see that word-order similarity is quite important; however, there must be other components of structural similarity that contribute to the cross-lingual ability, as the performance of enfake-es-permute-1.00 is still fairly good.

Here, we study whether word frequency alone allows for good cross-lingual representations; we find that it does not help much. We collect the frequency of words in the target language and generate a new monolingual corpus by sampling words based on this frequency, i.e., each sentence is a set of random words sampled from the same unigram frequency distribution as the original target language. The only information about the target language available to BERT is its unigram frequencies and sub-word-level information. We train B-BERT using Fake-English and this newly generated target corpus. From Table 10, we can see that the performance is very low, although non-trivial (random performance is 33.33%). Therefore, unigram frequencies alone do not contain enough information for cross-lingual learning (bigram or trigram frequencies might be more useful).

All the experiments in the main text were conducted on bilingual BERT; however, the results hold even in the multilingual case. To further illustrate this, we experiment with a four-language BERT (en, es, hi, ru). From Table 11, we can see that the performance on XNLI is comparable even with just 15% of the parameters and with just 1 or 3 attention heads, provided the depth is good enough, which is in agreement with our observations in the main text. Currently, there is also a lot of interest in reducing the size of BERT using various pruning techniques. Our analysis shows that we can get comparable performance by maintaining (or increasing) depth: indeed, reducing the number of parameters from 132.78M to 20.05M (approximately 85% fewer) by shrinking only the hidden-layer sizes drops the English performance only from 79.0 to 75.0 (about 4%), which is comparable to the drop in MNLI performance reported in earlier pruning work, where performance drops by 4.7% for an 88.4% reduction in parameters (English XNLI is similar to MNLI). We believe these pruning techniques could be combined with the insights from this paper to get much better results.

Table 11: Significantly smaller multilingual BERT. We show that the insights derived from bilingual BERT remain valid in the case of multilingual BERT (four-language BERT). Further, we show that with enough depth, we need only a small number of parameters and attention heads to get comparable results.

Here we show more results on the effect of the number of parameters, to get more insight into the threshold on the number of parameters. The experimental setting is similar to that of Table 4. From Table 12, we can notice a drastic change in the performance on Russian when the number of parameters is decreased from 11.83M to 7.23M, so we can consider this range to be the required minimum, at least for the 12-layer, 12-attention-head configuration.

Table 12: The effect of the total number of parameters. We study the effect of the total number of parameters of B-BERT on the performance on Fake-English and Russian on XNLI data.
We fix both the depth and the number of attention heads of B-BERT and vary the total number of parameters by changing the sizes of the hidden and intermediate units. The difference between the performance on Fake-English and Russian (∆) is our measure of cross-lingual ability. We can see that there is a drastic change in performance when we reduce the number of parameters from 11.83M to 7.23M, so this range can be regarded as a rough threshold on the number of parameters.
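The two appendix corpora described above are straightforward to generate. The sketch below implements the partial word shuffle (shuffle a random fraction p of the positions in each sentence) and the unigram-frequency resampling; the corpus handling is simplified and the function names are illustrative.

```python
import random
from collections import Counter

def permute_sentence(words, p):
    # Shuffle a random fraction p of the word positions; the rest stay fixed.
    k = round(p * len(words))
    idx = random.sample(range(len(words)), k)
    vals = [words[i] for i in idx]
    random.shuffle(vals)
    out = list(words)
    for i, v in zip(idx, vals):
        out[i] = v
    return out

def unigram_corpus(sentences, n_sentences, sent_len):
    # Sentences of random words drawn from the target language's unigram
    # distribution: only word-frequency information is preserved.
    counts = Counter(w for s in sentences for w in s)
    words, weights = zip(*counts.items())
    return [random.choices(words, weights=weights, k=sent_len)
            for _ in range(n_sentences)]

print(permute_sentence("the cat sat on the mat".split(), p=0.5))
```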
TL;DR: Cross-Lingual Ability of Multilingual BERT: An Empirical Study.
We analyze the dynamics of training deep ReLU networks and their implications for generalization capability. Using a teacher-student setting, we discovered a novel relationship between the gradient received by hidden student nodes and the activations of teacher nodes in deep ReLU networks. With this relationship and the assumption of small overlap between teacher node activations, we prove that student nodes whose weights are initialized close to teacher nodes converge to them at a faster rate, and that in the over-parameterized, 2-layer case, while a small set of lucky nodes converge to the teacher nodes, the fan-out weights of the other nodes converge to zero. This framework provides insight into multiple puzzling phenomena in deep learning, such as over-parameterization, implicit regularization, and lottery tickets. We verify our assumption by showing that the majority of BatchNorm biases of pre-trained VGG11/16 models are negative. Experiments on random deep teacher networks with Gaussian inputs, on a teacher network pre-trained on CIFAR-10, and extensive ablation studies validate our multiple theoretical predictions.

Although neural networks have made strong empirical progress in a diverse set of domains (e.g., computer vision (16; 32; 10), speech recognition (11; 1), natural language processing (22; 3), and games (30; 31; 35; 23)), a number of fundamental questions remain unsolved. How can Stochastic Gradient Descent (SGD) find good solutions to a complicated non-convex optimization problem? Why do neural networks generalize? How can networks trained with SGD fit both random noise and structured data (38; 17; 24), yet prioritize structured models, even in the presence of massive noise? Why are flat minima related to good generalization? Why does over-parameterization lead to better generalization (25; 39; 33; 26; 19)? Why do lottery tickets exist (6; 7)?

In this paper, we propose a theoretical framework for multilayered ReLU networks. Based on this framework, we try to explain these puzzling empirical phenomena with a unified view. We adopt a teacher-student setting in which the label provided to an over-parameterized deep student ReLU network is the output of a fixed teacher ReLU network of the same depth with unknown weights (FIG0). In this perspective, hidden student nodes are randomly initialized with different activation regions (Fig. 2(a)). During optimization, student nodes compete with each other to explain teacher nodes. Theorem 4 shows that lucky student nodes which have greater overlap with teacher nodes converge to those teacher nodes at a fast rate, resulting in winner-take-all behavior. Furthermore, Theorem 5 shows that if a subset of student nodes is close to the teacher nodes, they converge to them and the fan-out weights of the other irrelevant nodes of the same layer vanish. With this framework, we can explain various neural network behaviors as follows.

Fitting both structured and random data. Under gradient descent dynamics, some student nodes, which happen to overlap substantially with teacher nodes, will move towards those teacher nodes and cover them. This is true both for structured data, which corresponds to a small teacher network with few intermediate nodes, and for noisy/random data, which corresponds to a large teacher with many intermediate nodes. This explains why the same network can fit both structured and random data (Fig. 2(a-b)).

Over-parameterization. With over-parameterization, many student nodes are initialized randomly at each layer.
Any teacher node is then more likely to have substantial overlap with some student node, which leads to fast convergence (Fig. 2(a) and (c), Thm. 4), consistent with (6; 7). This also explains why training models whose capacity just fits the data (or teacher) yields worse performance.

Flat minima. Deep networks often converge to "flat minima" whose Hessian has many small eigenvalues (28; 29; 21; 2). Furthermore, while controversial, flat minima seem to be associated with good generalization, while sharp minima often lead to poor generalization (12; 14; 36; 20). In our theory, when fitting structured data, only a few lucky student nodes converge to the teacher, while for the other nodes, the fan-out weights shrink towards zero, making them (and their fan-in weights) irrelevant to the final outcome (Thm. 5). This yields flat minima, in which movement along most dimensions ("unlucky nodes") results in minimal change in the output. Sharp minima, on the other hand, are related to noisy data (Fig. 2(d)), in which more student nodes match the teacher.

Figure 2. Explanation of implicit regularization. Blue regions are activation regions of teacher nodes, while orange regions are the students'. (a) When the data labels are structured, the underlying teacher network is small and each layer has few nodes. Over-parameterization (many red regions) covers them all; moreover, student nodes that heavily overlap with teacher nodes converge faster (Thm. 4), yielding good generalization performance. (b) If a dataset contains random labels, the underlying teacher network that can fit it has many nodes. Over-parameterization can still handle them and achieves zero training error.

Figure 3. Explanation of the lottery ticket phenomenon. (a) A successful training run with over-parameterization (2 filters in the teacher network and 4 filters in the student network). Nodes j3 and j4 are lucky draws with strong overlap with the two teacher nodes j•1 and j•2, and thus converge with high weight magnitude. (b) Lottery ticket phenomenon: initialize nodes j3 and j4 with the same initial weights, clamp the weights of j1 and j2 to zero, and retrain the model; the test performance becomes better, since j3 and j4 still converge to their respective teacher nodes. (c) If we reinitialize nodes j3 and j4, it is highly likely that they will not overlap with the teacher nodes.

Implicit regularization. The snapping behavior enforces winner-take-all: after optimization, a teacher node is fully covered (explained) by a few student nodes, rather than being split among many student nodes due to over-parameterization. This explains why the same network, once trained with structured data, can generalize to the test set.

Lottery tickets. The lottery ticket phenomenon (6; 7) is an interesting one: if we reset the "salient weights" (trained weights with large magnitude) back to their values before optimization but after initialization, prune the other weights (often >90% of the total), and retrain the model, the test performance is the same or better; if we instead reinitialize the salient weights, the test performance is much worse. In our theory, the salient weights are those lucky regions (E_j3 and E_j4 in Fig. 3) that happen to overlap with some teacher nodes after initialization and converge to them during optimization. Therefore, if we reset their weights and prune the others away, they can still converge to the same set of teacher nodes, and potentially achieve better performance due to less interference with other irrelevant nodes.
However, if we reinitialize them, they are likely to fall into unfavorable regions which cannot cover the teacher nodes, and therefore lead to poor performance (Fig. 3(c)), just as in the case of under-parameterization.

The details of our proposed theory can be found in the Appendix (Sec. 5); here we summarize it. First, we show that for multilayered ReLU networks, there exists a relationship between the gradient g_j(x) of a student node j and the teacher's and student's activations at the same layer (Thm. 1):

g_j(x) = Σ_{j•} β*_{jj•}(x) f_{j•}(x) − Σ_{j'} β_{jj'}(x) f_{j'}(x)    (1)

where f_{j•} is the activation of node j• in the teacher and j' ranges over the nodes at the same layer in the student. For each node j, we do not know which teacher node corresponds to it, hence the linear combination terms. Typically, the number of student nodes is much larger than the number of teacher nodes. Thm. 1 applies to arbitrarily deep ReLU networks. Then, with a mild assumption (Assumption 1), we can write the gradient update rule of each layer l in the following form: [equation omitted] (2) where L and L* are correlation matrices of activations from the bottom layers, and H and H* are modulation matrices from the top layers. We then assume that different teacher nodes of the same layer have small overlap in their activations (Assumption 3 and FIG4), and verify this in VGG16/VGG11 by showing that the majority of their BatchNorm biases are negative (FIG0). With this assumption, we prove two theorems:

• When the number of student nodes equals the number of teacher nodes (m_l = n_l), and each student weight vector w_j is close to a corresponding teacher weight w*_{j•}, the dynamics of Eqn. 2 yields (recovery) convergence w_j → w*_{j•} (Thm. 4). Furthermore, this convergence is super-linear (i.e., the convergence rate is higher when the weights are closer).

• In the over-parameterized setting (n_l > m_l), we show that in the 2-layer case, with the help of the top layer, the portion of weights W_u that are close to the teacher weights W* converges (W_u → W*). For the other, irrelevant weights, while their final values depend heavily on initialization, with the help of top-down modulation their fan-out top-layer weights converge to zero, and thus they have no influence on the network output.

For Theorem 4 and Theorem 5 to work, we make Assumption 3 that the activation regions of different teacher nodes are well-separated. To justify this, we analyze the BatchNorm biases of pre-trained VGG11 and VGG16. We check the BatchNorm bias c1, as both VGG11 and VGG16 use a Linear-BatchNorm-ReLU architecture. Since BatchNorm first normalizes the input data to a zero-mean distribution, the BatchNorm bias determines how much of the data passes the ReLU threshold: if the bias is negative, then only a small portion of the data passes the ReLU gating, and Assumption 3 is likely to hold. From FIG5, it is quite clear that the majority of the BatchNorm bias parameters are negative, in particular in the top layers.

We evaluate both the fully connected (FC) and the ConvNet setting. For FC, we use a ReLU teacher network of size 50-75-100-125. For ConvNet, we use a teacher with channel sizes 64-64-64-64. The student networks have the same depth but 10x more nodes/channels at each layer, so that they are substantially over-parameterized. When BatchNorm is added, it is added after ReLU. We use random i.i.d. Gaussian inputs with mean 0 and standard deviation 10 (abbreviated GAUS) and CIFAR-10 as our datasets in the experiments. GAUS generates an infinite number of samples, while CIFAR-10 is a finite dataset. For GAUS, we use a random teacher network as the label provider (with 100 classes).
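A toy simulation makes the lucky-node story concrete: a tiny 2-layer ReLU teacher generates labels, an over-parameterized student is trained by SGD on a scalar regression loss, and the best teacher-student weight alignments are inspected afterwards. The sizes, learning rate, and loss below are arbitrary choices for illustration, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_teacher, n_student = 20, 5, 50   # student is 10x over-parameterized

# Unit-norm first-layer weights for teacher and student; top-layer weights v.
Wt = rng.normal(size=(n_teacher, d)); Wt /= np.linalg.norm(Wt, axis=1, keepdims=True)
vt = rng.normal(size=n_teacher)
Ws = rng.normal(size=(n_student, d)); Ws /= np.linalg.norm(Ws, axis=1, keepdims=True)
vs = 0.1 * rng.normal(size=n_student)

lr = 0.02
for _ in range(20000):
    x = rng.normal(size=d)                      # i.i.d. Gaussian input
    ht, hs = np.maximum(Wt @ x, 0), np.maximum(Ws @ x, 0)
    err = vs @ hs - vt @ ht                     # scalar regression error
    gate = (hs > 0).astype(float)               # ReLU gating f'_j(x)
    Ws -= lr * err * np.outer(vs * gate, x)     # backprop through ReLU
    vs -= lr * err * hs

# Cosine similarity between each student and teacher weight vector:
# a few "lucky" students align closely with each teacher node.
cos = (Ws / np.linalg.norm(Ws, axis=1, keepdims=True)) @ Wt.T
print("best alignment per teacher node:", cos.max(axis=0).round(2))
```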
To make sure the teacher weights overlap only weakly, we sample each entry of w*_j from [−0.5, −0.25, 0, 0.25, 0.5], making sure the weight vectors are non-zero and mutually different within the same layer, and we sample the biases from U[−0.5, 0.5]. In the FC case the data dimension is 20, while in the ConvNet case it is 16 × 16. For CIFAR-10 we use a pre-trained teacher network with BatchNorm; in the FC case it has an accuracy of 54.95%, and for ConvNet the accuracy is 86.4%. We repeat every experiment 5 times with different random seeds and report min/max values. Two metrics are used to check our prediction that some lucky student nodes converge to the teacher:

Normalized correlation ρ̄. We compute the normalized correlation (or cosine similarity) ρ between teacher and student activations evaluated on a validation set. At each layer, we average the best correlation over teacher nodes: ρ̄ = mean_{j•} max_j ρ_{jj•}, where ρ_{jj•} is computed for each teacher-student pair (j, j•). ρ̄ ≈ 1 means that most teacher nodes are covered by at least one student.

Mean rank r̄. After training, each teacher node j• has a most-correlated student node j. We check the correlation rank of j, normalized so that 0 = ranked first, back at initialization and at different epochs, and average over teacher nodes to yield the mean rank r̄. A small r̄ means that the student nodes that initially correlate well with the teacher keep the lead towards the end of training.

Experiments are summarized in Fig. 5 and FIG3. ρ̄ indeed grows during training, in particular for the low layers that are closer to the input, where ρ̄ moves towards 1. Furthermore, the final winning student nodes also rank well at the early stages of training. BatchNorm helps a lot, in particular in the CNN case with the GAUS dataset. For CIFAR-10, the final evaluation accuracy (see Appendix) learned by the student is often ∼1% higher than the teacher's. Using BatchNorm accelerates the growth of accuracy and improves r̄, but does not seem to accelerate the growth of ρ̄. The theory also predicts that the top-down modulation β helps convergence. To check this, we plot β*_{jj•} at different layers during optimization on GAUS. For better visualization, we align each student node index j with a teacher node j• according to the highest ρ. Although the correlations are computed from the low-layer weights, they match well with the top-layer modulation (the identity-matrix structure in FIG0). More ablation studies are in Sec. 8. We propose a novel mathematical framework for multilayered ReLU networks, which could tentatively explain many puzzling empirical phenomena in deep learning.

[Figure caption: Correlation ρ̄ and mean rank r̄ over training on GAUS. ρ̄ steadily grows and r̄ quickly improves over time. Layer-0 (the lowest layer, closest to the input) shows the best match with teacher nodes and the best mean rank. BatchNorm helps achieve both better correlation and lower r̄, in particular in the CNN case.]

References:
Simon S. Du, Jason D. Lee, Yuandong Tian, Barnabas Poczos, and Aarti Singh. Gradient descent learns one-hidden-layer CNN: Don't be afraid of spurious local minima.
Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Training pruned neural networks.
Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M. Roy, and Michael Carbin. The lottery ticket hypothesis at scale. arXiv preprint arXiv:1903.01611, 2019.
Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural networks. In Advances in Neural Information Processing Systems, pages 1135-1143, 2015.
Babak Hassibi, David G. Stork, and Gregory J. Wolff. Optimal brain surgeon and general network pruning. In IEEE International Conference on Neural Networks, pages 293-299.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016.

5. Appendix: Mathematical Framework

Notation. Consider a student network and its associated teacher network (FIG0). Denote the input as x. For each node j, denote f_j(x) as the activation, f'_j(x) as the ReLU gating, and g_j(x) as the backpropagated gradient, all as functions of x. We use the superscript • to denote a teacher node (e.g., j•); therefore, g_{j•} never appears, as teacher nodes are not updated. We use w_jk to denote the weight between nodes j and k in the student network; similarly, w*_{j•k•} denotes the weight between nodes j• and k• in the teacher network. We focus on multi-layered ReLU networks and use the equality σ(x) = σ'(x)x extensively. For a ReLU node j, we use E_j ≡ {x : f_j(x) > 0} to denote the activation region of node j.

Objective. We assume that both the teacher and the student output probabilities over C classes. We use the output of the teacher as the input of the student. At the top layer, each node c in the student corresponds to a node c• in the teacher. Therefore, the objective is: [equation omitted] By the backpropagation rule, we know that for each sample x, the (negative) gradient is [equation omitted]. The gradient is backpropagated until the first layer is reached. Note that the gradient g_c(x) sent to node c is correlated with the activation of the corresponding teacher node f_{c•}(x) and with the other student nodes at the same layer. Intuitively, this means that the gradient "pushes" the student node c to align with class c• of the teacher; if so, the student learns the corresponding class well. A natural question arises: are student nodes at intermediate layers also correlated with teacher nodes at the same layers? One might expect this to be hard, since the student's intermediate layers receive no direct supervision from the corresponding teacher layers and rely only on the backpropagated gradient. Surprisingly, the following theorem shows that it is possible for every intermediate layer.

Theorem 1. If all nodes j at layer l satisfy Eqn. 4, then all nodes k at layer l − 1 also satisfy Eqn. 4, with β*_{kk•}(x) and β_{kk'}(x) defined as follows: [equations omitted]

Note that this formulation allows different numbers of nodes for the teacher and the student. In particular, we consider the over-parameterized setting, in which the number of nodes on the student side is much larger (e.g., 5-10x) than on the teacher side. Using Theorem 1, we discover a novel and concise form of the gradient update rule.

Assumption 1 (Separation of Expectations). [statement omitted]

Theorem 2. If Assumption 1 holds, the gradient dynamics of deep ReLU networks with objective (Eqn. 3) is: [equation omitted] Here we explain the notation: [definitions omitted]. We can define similar notations for W (which has n_l columns/filters), β, D, H, and L (FIG4). In the following, we will use Eqn. 8 to analyze the dynamics of multi-layer ReLU networks.
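The activation regions E_j just defined, and the small-overlap condition of Assumption 3 below, are easy to probe numerically: for ReLU units with negative bias, the joint activation probability of two units drops quickly. A Monte-Carlo sketch with Gaussian inputs (the dimension and bias value are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_samples = 20, 200_000
x = rng.normal(size=(n_samples, d))

w1 = rng.normal(size=d); w1 /= np.linalg.norm(w1)
w2 = rng.normal(size=d); w2 /= np.linalg.norm(w2)
b = -2.0   # negative bias, as observed for BatchNorm biases in VGG (Sec. 3.1)

a1 = x @ w1 + b > 0   # indicator of E_1 = {x : f_1(x) > 0}
a2 = x @ w2 + b > 0
print("P(E1)        =", a1.mean())
print("P(E1 and E2) =", (a1 & a2).mean())   # much smaller joint activation
```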
For convenience, we first define the two functions ψ_l and ψ_d (σ is the ReLU function): [equations omitted] We assume these two functions have the following property.

Assumption 2 (Lipschitz condition). There exist K_d and K_l such that: [inequalities omitted]

Using this, we know that [inequality omitted], and so on. For brevity, denote d** = [definition omitted], and so on. We impose the following assumption.

Assumption 3 (Small overlap between teacher nodes). There exist constants ε_l ≪ 1 and ε_d ≪ 1 such that: [inequalities omitted]

Intuitively, this means that the probability of simultaneous activation of two teacher nodes j1 and j2 is small. One such case is when the teacher has negative biases, which means the nodes cut corners in the input space (FIG4). We have empirically verified that the majority of biases in the BatchNorm layers (after the data are whitened) are negative in VGG11/16 trained on ImageNet (Sec. 3.1). Batch Normalization has been used extensively to speed up training, reduce tuning effort, and improve the test performance of neural networks. Here we use an interesting property of BatchNorm: the total "energy" of the incoming weights of each node j is conserved over the training iterations.

Theorem 3 (Conserved Quantity in Batch Normalization). For the Linear → ReLU → BN or Linear → BN → ReLU configuration, the norm ||w_j|| of a filter j before BN remains constant during training (FIG0). See the Appendix for the proof.

This may partially explain why BN has a stabilizing effect: energy does not leak from one layer to nearby ones. Due to this property, in the following we assume ||w_j||_2 = ||w*_j||_2 = 1 for convenience, so that the gradient ẇ_j is always orthogonal to the current weight w_j. Note that on the teacher side we can always push the magnitude component to the upper layer; on the student side, random initialization naturally leads to weights of constant magnitude. If n_l = m_l, L*_l = L_l = I (e.g., the input of layer l is whitened), and β*_{l+1} = β_{l+1} = 11^T (all β entries are 1), then the following theorem shows that weight recovery can follow (we write j for j•).

Figure 8. Over-parameterization and top-down modulation. Thm. 5 shows that, under certain conditions, the relevant weights W_u → W* and the weights V_r connecting to irrelevant student nodes converge to zero.

Theorem 4. [statement omitted] See the Appendix for the proof. Here we list a few remarks.

Faster convergence near w*_j. Since h*_jj in general becomes larger when w_j → w*_j (cos θ_0 can be close to 1), we expect super-linear convergence near w*_j. This brings about an interesting winner-take-all mechanism: if the initial overlap between a student node j and a particular teacher node is large, then the student node will snap to it (FIG0).

Importance of the projection operator P⊥_{w_j}. Intuitively, the projection is needed to remove any ambiguity related to weight scaling, in which the output remains constant if the top-layer weights are multiplied by a constant α while the low-layer weights are divided by α. Previous works also use similar techniques, while we justify it with BN. Without P⊥_{w_j}, convergence can be harder.

In the over-parameterized case (n_l > m_l, e.g., 5-10x), we arrange the variables into two parts: W = [W_u, W_r], where W_u contains m_l columns (the same size as W*), while W_r contains the remaining n_l − m_l columns.
We use [u] (or u-set) to denote the nodes 1 ≤ j ≤ m, and [r] (or r-set) for the remaining nodes. In this case, if we want to show that "the main component" W_u converges to W*, we face one core question: where will W_r converge to, and will W_r converge at all? We need to consider not only the dynamics of the current layer, but also the dynamics of the upper layer. See the Appendix for the proof (and for the definition of λ̄ in Eqn. 47). The intuition is the following: if W_u is close to W* and W_r is far away from both due to Assumption 3, the off-diagonal elements of L and L* are smaller than the diagonal ones. This causes V_u to move towards V* and V_r to move towards zero. When V_r becomes small, so does β_{jj'} for j ∈ [r] or j' ∈ [r]. This in turn suppresses the effect of W_r and accelerates the convergence of W_u. V_r → 0 exponentially, so W_r stays close to its initial location, and Assumption 3 holds for all iterations. A few remarks:

Flat minima. Since V_r → 0, W_r can be changed arbitrarily without affecting the outputs of the neural network. This could explain why there are many flat directions in trained networks, and why many eigenvalues of the Hessian are close to zero.

Understanding of pruning methods. Theorem 5 naturally relates two different unstructured network pruning approaches: pruning small weights in magnitude (8; 6) and pruning weights suggested by the Hessian (18; 9). It also suggests a principled structured pruning method: instead of pruning a filter by checking its weight norm, prune according to its top-down modulation.

Accelerated convergence and learning rate schedule. For simplicity, the theorem uses a uniform (and conservative) γ throughout the iterations. In practice, γ is initially small (due to the noise introduced by the r-set) but becomes large after a few iterations, once V_r vanishes. Given the same learning rate, this leads to accelerated convergence. At some point the learning rate η becomes too large, leading to fluctuations; in this case, η needs to be reduced.

Many-to-one mapping. Theorem 5 shows that, under strict conditions, there is a one-to-one correspondence between teacher and student nodes. In general this is not the case: two student nodes can both lie in the vicinity of a teacher node w*_j and converge towards it, until that node is fully explained. We leave a rigorous mathematical analysis of many-to-one mappings to future work.

Random initialization. One nice thing about Theorem 5 is that it only requires the initial W_u − W* to be small; in contrast, there is no requirement for V_r to be small. Therefore, we can expect that with more over-parameterization and random initialization, it becomes more likely, in each layer l, to find the u-set (of fixed size m_l), i.e., the lucky weights, such that W_u is quite close to W*. At the same time, we need not worry about W_r, which grows with more over-parameterization. Moreover, random initialization often gives nearly orthogonal weight vectors, which naturally leads to Assumption 3. Using a similar approach, we could extend this analysis to the multi-layer case. We conjecture that similar behavior occurs: in each layer, due to over-parameterization, the weights of some lucky student nodes are close to the teacher's.
While these converge to the teacher, the final values of the other, irrelevant weights are initialization-dependent. If the irrelevant nodes connect to lucky nodes at the upper layer, then, similar to Thm. 5, the corresponding fan-out weights converge to zero. On the other hand, if they connect to nodes that are also irrelevant, then these fan-out weights are not determined and their final values depend on initialization. However, this doesn't matter, since these upper-layer irrelevant nodes eventually meet with zero weights if we go up recursively, since the top-most output layer has no over-parameterization. We leave a formal analysis to future work. Proof. The first part of the gradient backpropagated to node j is: DISPLAYFORM0 Therefore, for the gradient to node k, we have: DISPLAYFORM0 DISPLAYFORM1 DISPLAYFORM2 And similarly for β kk (x). Therefore, by mathematical induction, we know that the gradients at nodes in different layers all follow the same form. Proof. Using Thm. 1, we can write down the weight update for the weight w jk that connects node j and node k: DISPLAYFORM0 Note that j •, k •, j and k run over all parent and child nodes on the teacher side. This formulation works for over-parameterization (e.g., j • and j can run over different nodes). Applying Assumption 1 and rearranging terms in matrix form yields Eqn. 8. Proof. Given a batch with size N, denote the pre-batchnorm activations as DISPLAYFORM0 T and the gradients as DISPLAYFORM1 T (see FIG0). f̂ = (f − µ)/σ is its whitened version, and c 0 f̂ + c 1 is the final output of BN. Here µ = (1/N) Σ i f i, σ 2 = (1/N) Σ i (f i − µ) 2, and c 1, c 0 are learnable parameters. With vector notation, the gradient update in BN has a compact form with clear geometric meaning: Lemma 1 (Backpropagation of Batch Norm). For a top-down gradient g, the BN layer gives the following gradient update (P ⊥ f̂,1 is the orthogonal complementary projection of the subspace {f̂, 1}): DISPLAYFORM3 Intuitively, the back-propagated gradient J BN (f)g is zero-mean and perpendicular to the input activation f of the BN layer, as illustrated in FIG0. Unlike (15; 37), which analyze BN in an approximate manner, in Thm. 1 we do not impose any assumptions. Given Lemma 1, we can prove Thm. 3. For FIG0, using the property that E x [g lin j f lin j] = 0 (the expectation is taken over the batch) and the weight update rule ẇ jk = E x [g lin j f k] (over the same batch), we have: DISPLAYFORM4 For FIG0, note that DISPLAYFORM5 rl j = 0 and the result follows. For simplicity, in the following, we use δw j = w j − w * j. Lemma 2 (Bottom Bounds). Assume all ‖w j‖ = ‖w * j‖ = 1. Denote DISPLAYFORM0 If Assumption 2 holds, we have: DISPLAYFORM1 If Assumption 3 also holds, then: DISPLAYFORM2 Proof. We have for j ≠ j′: DISPLAYFORM3 If Assumption 3 also holds, we have: DISPLAYFORM4 Lemma 3 (Top Bounds). Denote DISPLAYFORM5 If Assumption 2 holds, we have: DISPLAYFORM6 If Assumption 3 also holds, then: DISPLAYFORM7 Proof. The proof is similar to Lemma 2. Lemma 4 (Quadratic fall-off for diagonal elements of L). For node j, we have: DISPLAYFORM8 Proof. The intuition here is that both the volume of the affected area and the weight difference are proportional to ‖δw j‖. l * jj − l jj is their product and thus proportional to ‖δw j‖ 2. See FIG0. Proof. First of all, note that ‖δw j‖ = 2 sin(θ j /2) ≤ 2 sin(θ 0 /2). So given θ 0, we also have a bound for ‖δw j‖.
When β = β * = 11 T, the matrix form can be written as follows: DISPLAYFORM0 by using P ⊥ wj w j ≡ 0 (and thus h jj doesn't matter). Since ‖w j‖ is conserved, it suffices to check whether the projected weight vector P ⊥ w * j w j of w j onto the complementary space of the ground-truth node w * j goes to zero: DISPLAYFORM1 Denote θ j = ∠(w j, w * j); a simple calculation gives that sin θ j = ‖P ⊥ w * j w j‖. First we have: DISPLAYFORM2 From Lemma 2, we know that DISPLAYFORM3 Note that here we have ‖δw j‖ = 2 sin DISPLAYFORM4 We discuss a finite step with a very small learning rate η > 0: DISPLAYFORM5 Here DISPLAYFORM6 is an iteration-independent constant. We set γ = cos DISPLAYFORM7 jj, and from Lemma 2 we know that d * jj ≥ d̄ for all j. Then, given the inductive hypothesis that sin θ t j ≤ (1 − η d̄ γ) t−1 sin θ 0, we have: DISPLAYFORM8 Therefore, sin θ DISPLAYFORM9 The proof can be decomposed into the following three lemmas. Lemma 5 (Top-layer contraction). If (W-Separation) holds for t, then (V-Contraction) holds for iteration t + 1. Lemma 6 (Bottom-layer contraction). If (V-Contraction) holds for t, then (W u -Contraction) holds for t + 1 and (W r -Bound) holds for t + 1. Lemma 7 (Weight separation). If (W-Separation) holds for t, (W r -Bound) holds for t + 1 and (W u -Contraction) holds for t + 1, then (W-Separation) holds for t + 1. As suggested by FIG0, if all three lemmas are true then the induction hypotheses are true. In the following, we will prove the three lemmas. 7.6.1. LEMMA 5 Proof. On the top layer, we have V̇ = L DISPLAYFORM10, where v j is the j-th row of the matrix V. For each component, we can write: DISPLAYFORM11 Note that there is no projection (if there were any, the projection would be over the columns rather than the rows). If (W-Separation) is true, we know that for j ≠ j′, DISPLAYFORM12 Now we discuss j ∈ [u] and j ∈ [r]: Relevant nodes. For j ∈ [u], the first two terms are: DISPLAYFORM13 From Lemma 4 we know that the initial value is DISPLAYFORM14 DISPLAYFORM15 Therefore, we prove that (W r -Bound) holds for iteration t + 1. Proof. This follows simply from combining Lemma 3, Lemma 2 and the weight bounds (W u -Contraction) and (V-Contraction). Besides, we also perform ablation studies on GAUS. Size of teacher network. As shown in FIG0, for small teacher networks (FC 10-15-20-25), the convergence is much faster and training without BatchNorm is faster than training with BatchNorm. For large teacher networks, BatchNorm definitely increases convergence speed and the growth of ρ. Finite versus infinite dataset. We also repeat the experiments with a pre-generated finite dataset of GAUS in the CNN case, and find that the convergence of node similarity stalls after a few iterations. This is because some nodes receive very few data points in their activated regions, which is not a problem for an infinite dataset. We suspect that this is probably the reason why CIFAR-10, as a finite dataset, does not show similar behavior as GAUS. [Figure panels (layers 0-3): β * at initialization; β * after optimization; H * after optimization.] FIG0.
Visualization of (the transpose of) the H * and β * matrices before and after optimization (using GAUS). Student node indices are reordered according to teacher-student node correlations. After optimization, student nodes that have high correlation with a teacher node also have high β entries. This behavior is more prominent in the H * matrix, which combines β * and the activation patterns D * of the student nodes (Sec. 5).
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
S1xcwNr22E
A theoretical framework for deep ReLU networks that can explain multiple puzzling phenomena like over-parameterization, implicit regularization, lottery tickets, etc.
State-of-the-art methods for learning cross-lingual word embeddings have relied on bilingual dictionaries or parallel corpora. Recent studies showed that the need for parallel data supervision can be alleviated with character-level information. While these methods showed encouraging results, they are not on par with their supervised counterparts and are limited to pairs of languages sharing a common alphabet. In this work, we show that we can build a bilingual dictionary between two languages without using any parallel corpora, by aligning monolingual word embedding spaces in an unsupervised way. Without using any character information, our model even outperforms existing supervised methods on cross-lingual tasks for some language pairs. Our experiments demonstrate that our method also works very well for distant language pairs, like English-Russian or English-Chinese. We finally describe experiments on the English-Esperanto low-resource language pair, on which there only exists a limited amount of parallel data, to show the potential impact of our method in fully unsupervised machine translation. Our code, embeddings and dictionaries are publicly available. Most successful methods for learning distributed representations of words (e.g. BID32 a); BID34) rely on the distributional hypothesis of BID20, which states that words occurring in similar contexts tend to have similar meanings. BID28 show that the skip-gram with negative sampling method of BID32 amounts to factorizing a word-context co-occurrence matrix, whose entries are the pointwise mutual information of the respective word and context pairs. Exploiting word co-occurrence statistics leads to word vectors that reflect semantic similarities and dissimilarities: similar words are close in the embedding space and conversely. BID31 first noticed that continuous word embedding spaces exhibit similar structures across languages, even when considering distant language pairs like English and Vietnamese. They proposed to exploit this similarity by learning a linear mapping from a source to a target embedding space. They employed a parallel vocabulary of five thousand words as anchor points to learn this mapping and evaluated their approach on a word translation task. Since then, several studies aimed at improving these cross-lingual word embeddings (BID12; BID47; BID0; BID1; BID43), but they all rely on bilingual word lexicons. Recent attempts at reducing the need for bilingual supervision (BID43) employ identical character strings to form a parallel vocabulary. The iterative method of BID2 gradually aligns embedding spaces, starting from a parallel vocabulary of aligned digits. These methods are however limited to similar languages sharing a common alphabet, such as European languages. Some recent methods explored a distribution-based approach (BID7) or adversarial training (BID50) to obtain cross-lingual word embeddings without any parallel data. While these approaches sound appealing, their performance is significantly below that of supervised methods. To sum up, current methods have either not reached competitive performance, or they still require parallel data, such as aligned corpora (BID18; BID46) or a seed parallel lexicon (BID11). In this paper, we introduce a model that is either on par with, or outperforms, supervised state-of-the-art methods, without employing any cross-lingual annotated data. We only use two large monolingual corpora, one in the source and one in the target language.
Our method leverages adversarial training to learn a linear mapping from a source to a target space and operates in two steps. First, in a two-player game, a discriminator is trained to distinguish between the mapped source embeddings and the target embeddings, while the mapping (which can be seen as a generator) is jointly trained to fool the discriminator. Second, we extract a synthetic dictionary from the resulting shared embedding space and fine-tune the mapping with the closed-form Procrustes solution from BID42. Since the method is unsupervised, cross-lingual data can not be used to select the best model. To overcome this issue, we introduce an unsupervised selection metric that is highly correlated with the mapping quality and that we use both as a stopping criterion and to select the best hyper-parameters. In summary, this paper makes the following main contributions: • We present an unsupervised approach that reaches or outperforms state-of-the-art supervised approaches on several language pairs and on three different evaluation tasks, namely word translation, sentence translation retrieval, and cross-lingual word similarity. On a standard word translation retrieval benchmark, using 200k vocabularies, our method reaches 66.2% accuracy on English-Italian while the best supervised approach is at 63.7%. • We introduce a cross-domain similarity adaptation to mitigate the so-called hubness problem (points tending to be nearest neighbors of many points in high-dimensional spaces). It is inspired by the self-tuning method from BID48, but adapted to our two-domain scenario in which we must consider a bi-partite graph for neighbors. This approach significantly improves the absolute performance, and outperforms the state of the art both in supervised and unsupervised setups on word-translation benchmarks. • We propose an unsupervised criterion that is highly correlated with the quality of the mapping, which can be used both as a stopping criterion and to select the best hyper-parameters. • We release high-quality dictionaries for 12 oriented language pairs, as well as the corresponding supervised and unsupervised word embeddings. • We demonstrate the effectiveness of our method using an example of a low-resource language pair where parallel corpora are not available (English-Esperanto), for which our method is particularly suited. The paper is organized as follows. Section 2 describes our unsupervised approach with adversarial training and our refinement procedure. We then present our training procedure with unsupervised model selection in Section 3. We report in Section 4 our results on several cross-lingual tasks for several language pairs and compare our approach to supervised methods. Finally, we explain how our approach differs from recent related work on learning cross-lingual word embeddings. In this paper, we always assume that we have two sets of embeddings trained independently on monolingual data. Our work focuses on learning a mapping between the two sets such that translations are close in the shared space. BID31 show that they can exploit the similarities of monolingual embedding spaces to learn such a mapping.
For this purpose, they use a known dictionary of n = 5000 pairs of words {x i, y i} i∈{1,n}, and learn a linear mapping W between the source and the target space such that DISPLAYFORM0 where d is the dimension of the embeddings, M d (R) is the space of d × d matrices of real numbers, and X and Y are two aligned matrices of size d × n containing the embeddings of the words in the parallel vocabulary. The translation t of any source word s is defined as t = argmax t cos(W x s, y t). (A) There are two distributions of word embeddings, English words in red denoted by X and Italian words in blue denoted by Y, which we want to align/translate. Each dot represents a word in that space. The size of the dot is proportional to the frequency of the words in the training corpus of that language. (B) Using adversarial learning, we learn a rotation matrix W which roughly aligns the two distributions. The green stars are randomly selected words that are fed to the discriminator to determine whether the two word embeddings come from the same distribution. (C) The mapping W is further refined via Procrustes. This method uses frequent words aligned by the previous step as anchor points, and minimizes an energy function that corresponds to a spring system between anchor points. The refined mapping is then used to map all words in the dictionary. (D) Finally, we translate by using the mapping W and a distance metric, dubbed CSLS, that expands the space where there is a high density of points (like the area around the word "cat"), so that "hubs" (like the word "cat") become less close to other word vectors than they would otherwise (compare to the same region in panel (A)). In practice, BID31 obtained better results on the word translation task using a simple linear mapping, and did not observe any improvement when using more advanced strategies like multilayer neural networks. BID47 showed that these results are improved by enforcing an orthogonality constraint on W. In that case, the equation boils down to the Procrustes problem, which advantageously offers a closed-form solution obtained from the singular value decomposition (SVD) of Y X T: DISPLAYFORM1 In this paper, we show how to learn this mapping W without cross-lingual supervision; an illustration of the approach is given in FIG0. First, we learn an initial proxy of W by using an adversarial criterion. Then, we use the words that match the best as anchor points for Procrustes. Finally, we improve performance over less frequent words by changing the metric of the space, which leads to spreading more of those points in dense regions. Next, we describe the details of each of these steps. In this section, we present our domain-adversarial approach for learning W without cross-lingual supervision. Let X = {x 1, ..., x n} and Y = {y 1, ..., y m} be two sets of n and m word embeddings coming from a source and a target language respectively. A model is trained to discriminate between elements randomly sampled from W X = {W x 1, ..., W x n} and Y. We call this model the discriminator. W is trained to prevent the discriminator from making accurate predictions. As a result, this is a two-player game, where the discriminator aims at maximizing its ability to identify the origin of an embedding, and W aims at preventing the discriminator from doing so by making W X and Y as similar as possible.
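The closed-form Procrustes solution referenced above is simple enough to sketch. The following numpy snippet is our own illustration (with random data standing in for real embeddings); it recovers W* = UV^T from the SVD UΣV^T of Y X^T:

    import numpy as np

    def procrustes(X, Y):
        # Closed-form solution of min_{W orthogonal} ||W X - Y||_F,
        # where columns of X, Y are the d-dim embeddings of the word pairs.
        U, _, Vt = np.linalg.svd(Y @ X.T)
        return U @ Vt  # W* = U V^T, an orthogonal d x d matrix

    d, n = 300, 5000
    X = np.random.randn(d, n)
    W_true, _ = np.linalg.qr(np.random.randn(d, d))  # ground-truth rotation
    Y = W_true @ X                                   # perfectly aligned target
    print(np.allclose(procrustes(X, Y), W_true))     # True: exact recovery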
This approach is in line with the work of BID15, who proposed to learn latent representations invariant to the input domain, where, in our case, a domain is represented by a language (source or target). Discriminator objective. We refer to the discriminator parameters as θ D. We consider the probability P θ D (source = 1 | z) that a vector z is the mapping of a source embedding (as opposed to a target embedding) according to the discriminator. The discriminator loss can be written as: DISPLAYFORM0 Mapping objective. In the unsupervised setting, W is now trained so that the discriminator is unable to accurately predict the embedding origins: DISPLAYFORM1 Learning algorithm. To train our model, we follow the standard training procedure of deep adversarial networks of BID17. For every input sample, the discriminator and the mapping matrix W are trained successively with stochastic gradient updates to respectively minimize L D and L W. The details of training are given in the next section. The matrix W obtained with adversarial training gives good performance (see Table 1), but the results are still not on par with the supervised approach. In fact, the adversarial approach tries to align all words irrespective of their frequencies. However, rare words have embeddings that are less updated and are more likely to appear in different contexts in each corpus, which makes them harder to align. Under the assumption that the mapping is linear, it is then better to infer the global mapping using only the most frequent words as anchors. Besides, the accuracy on the most frequent word pairs is high after adversarial training. To refine our mapping, we build a synthetic parallel vocabulary using the W just learned with adversarial training. Specifically, we consider the most frequent words and retain only mutual nearest neighbors to ensure a high-quality dictionary. Subsequently, we apply the Procrustes solution on this generated dictionary. Considering the improved solution generated with the Procrustes algorithm, it is possible to generate a more accurate dictionary and apply this method iteratively, similarly to BID2. However, given that the synthetic dictionary obtained using adversarial training is already strong, we only observe small improvements when doing more than one iteration, i.e., the improvements on the word translation task are usually below 1%. In this subsection, our motivation is to produce reliable matching pairs between two languages: we want to improve the comparison metric such that the nearest neighbor of a source word, in the target language, is more likely to have this particular source word as a nearest neighbor. Nearest neighbors are by nature asymmetric: y being a K-NN of x does not imply that x is a K-NN of y. In high-dimensional spaces (BID36), this leads to a phenomenon that is detrimental to matching pairs based on a nearest-neighbor rule: some vectors, dubbed hubs, are with high probability nearest neighbors of many other points, while others (anti-hubs) are not nearest neighbors of any point. This problem has been observed in different areas, from matching image features in vision (BID22) to translating words in text understanding applications. Various solutions have been proposed to mitigate this issue, some being reminiscent of pre-processing steps already existing in spectral clustering algorithms (BID48). However, most studies aiming at mitigating hubness consider a single feature distribution. In our case, we have two domains, one for each language.
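A minimal sketch of one round of the two-player game from the previous subsection is given below (our own simplified PyTorch illustration: the discriminator here is a toy MLP, and details such as label smoothing and input dropout, described in Section 3, are omitted):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    d = 300
    W = nn.Linear(d, d, bias=False)                       # the mapping W
    D = nn.Sequential(nn.Linear(d, 2048), nn.LeakyReLU(),
                      nn.Linear(2048, 1))                 # toy discriminator
    opt_D = torch.optim.SGD(D.parameters(), lr=0.1)
    opt_W = torch.optim.SGD(W.parameters(), lr=0.1)

    def adversarial_step(x_src, y_tgt):
        labels = torch.cat([torch.ones(len(x_src)), torch.zeros(len(y_tgt))])
        # L_D: train the discriminator to label mapped source (1) vs. target (0).
        logits = torch.cat([D(W(x_src).detach()), D(y_tgt)]).squeeze(1)
        loss_D = F.binary_cross_entropy_with_logits(logits, labels)
        opt_D.zero_grad(); loss_D.backward(); opt_D.step()
        # L_W: update the mapping with flipped labels so that it fools D.
        logits = torch.cat([D(W(x_src)), D(y_tgt)]).squeeze(1)
        loss_W = F.binary_cross_entropy_with_logits(logits, 1.0 - labels)
        opt_W.zero_grad(); loss_W.backward(); opt_W.step()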
This particular case is taken into account in prior work, which proposes a pairing rule based on reverse ranks, and by the inverted soft-max (ISF) of Smith et al. FORMULA0, which we evaluate in our experimental section. These methods are not fully satisfactory because the similarity updates are different for the words of the source and target languages. Additionally, ISF requires cross-validating a parameter, whose estimation is noisy in an unsupervised setting where we do not have a direct cross-validation criterion. In contrast, we consider a bi-partite neighborhood graph, in which each word of a given dictionary is connected to its K nearest neighbors in the other language. We denote by N T (W x s) the neighborhood, on this bi-partite graph, associated with a mapped source word embedding W x s. All K elements of N T (W x s) are words from the target language. Similarly, we denote by N S (y t) the neighborhood associated with a word t of the target language. We consider the mean similarity of a source embedding x s to its target neighborhood as r T (W x s) = (1/K) Σ y t ∈ N T (W x s) cos(W x s, y t), where cos(., .) is the cosine similarity. Likewise, we denote by r S (y t) the mean similarity of a target word y t to its neighborhood. These quantities are computed for all source and target word vectors with the efficient nearest neighbors implementation by BID23. We use them to define a similarity measure CSLS(., .) between mapped source words and target words, as CSLS(W x s, y t) = 2 cos(W x s, y t) − r T (W x s) − r S (y t). Intuitively, this update increases the similarity associated with isolated word vectors. Conversely, it decreases the similarities of vectors lying in dense areas. Our experiments show that CSLS significantly increases the accuracy of word translation retrieval, while not requiring any parameter tuning. 3 TRAINING AND ARCHITECTURAL CHOICES We use unsupervised word vectors that were trained using fastText 2. These correspond to monolingual embeddings of dimension 300 trained on Wikipedia corpora; therefore, the mapping W has size 300 × 300. Words are lower-cased, and those that appear less than 5 times are discarded for training. As a post-processing step, we only select the first 200k most frequent words in our experiments. For our discriminator, we use a multilayer perceptron with two hidden layers of size 2048, and Leaky-ReLU activation functions. The input to the discriminator is corrupted with dropout noise with a rate of 0.1. As suggested by BID16, we include a smoothing coefficient s = 0.2 in the discriminator predictions. We use stochastic gradient descent with a batch size of 32, a learning rate of 0.1 and a decay of 0.95 both for the discriminator and W. We divide the learning rate by 2 every time our unsupervised validation criterion decreases. The embedding quality of rare words is generally not as good as the one of frequent words (BID29), and we observed that feeding the discriminator with rare words had a small, but not negligible, negative impact. As a result, we only feed the discriminator with the 50,000 most frequent words. At each training step, the word embeddings given to the discriminator are sampled uniformly. Sampling them according to the word frequency did not have any noticeable impact on the results. BID43 showed that imposing an orthogonality constraint on the linear operator led to better performance. Using an orthogonal matrix has several advantages. First, it ensures that the monolingual quality of the embeddings is preserved.
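The two equations above translate directly into a few lines of numpy. The following sketch is our illustration (assuming row-wise embedding matrices) and computes the full CSLS score matrix:

    import numpy as np

    def csls(Wx, Y, k=10):
        # Wx: (n, d) mapped source embeddings; Y: (m, d) target embeddings.
        Wx = Wx / np.linalg.norm(Wx, axis=1, keepdims=True)
        Y = Y / np.linalg.norm(Y, axis=1, keepdims=True)
        sims = Wx @ Y.T                                   # cosine similarities
        r_T = np.sort(sims, axis=1)[:, -k:].mean(axis=1)  # mean sim to K target NNs
        r_S = np.sort(sims, axis=0)[-k:, :].mean(axis=0)  # mean sim to K source NNs
        return 2 * sims - r_T[:, None] - r_S[None, :]

    # The translation of source word i is csls(Wx, Y).argmax(axis=1)[i].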
Indeed, an orthogonal matrix preserves the dot product of vectors, as well as their ℓ2 distances, and is therefore an isometry of the Euclidean space (such as a rotation). Moreover, it made the training procedure more stable in our experiments. In this work, we propose to use a simple update step to ensure that the matrix W stays close to an orthogonal matrix during training (BID8). Specifically, we alternate the update of our model with the following update rule on the matrix W: W ← (1 + β)W − β(W W T)W, where β = 0.01 is usually found to perform well. This method ensures that the matrix stays close to the manifold of orthogonal matrices after each update. In practice, we observe that the eigenvalues of our matrices all have a modulus close to 1, as expected. The refinement step requires generating a new dictionary at each iteration. In order for the Procrustes solution to work well, it is best to apply it on correct word pairs. As a result, we use the CSLS method described in Section 2.3 to select more accurate translation pairs in the dictionary. To increase the quality of the dictionary even more, and to ensure that W is learned from correct translation pairs, we only consider mutual nearest neighbors, i.e. pairs of words that are mutually nearest neighbors of each other according to CSLS. This significantly decreases the size of the generated dictionary, but improves its accuracy, as well as the overall performance. Selecting the best model is a challenging, yet important task in the unsupervised setting, as it is not possible to use a validation set (using a validation set would mean that we possess parallel data). To address this issue, we perform model selection using an unsupervised criterion that quantifies the closeness of the source and target embedding spaces. Specifically, we consider the 10k most frequent source words, and use CSLS to generate a translation for each of them. We then compute the average cosine similarity between these deemed translations, and use this average as a validation metric. We found that this simple criterion is better correlated with the performance on the evaluation tasks than optimal transport distances such as the Wasserstein distance (BID40). FIG1 shows the correlation between the evaluation score and this unsupervised criterion (without stabilization by learning rate shrinkage). We use it as a stopping criterion during training, and also for hyper-parameter selection in all our experiments. In this section, we empirically demonstrate the effectiveness of our unsupervised approach on several benchmarks, and compare it with state-of-the-art supervised methods. We first present the cross-lingual evaluation tasks that we consider to evaluate the quality of our cross-lingual word embeddings. Then, we present our baseline model. Last, we compare our unsupervised approach to our baseline and to previous methods. In the appendix, we offer a complementary analysis on the alignment of several sets of English embeddings trained with different methods and corpora. Word translation. The task considers the problem of retrieving the translation of given source words. The problem with most available bilingual dictionaries is that they are generated using online tools like Google Translate, and do not take into account the polysemy of words. Failing to capture word polysemy in the vocabulary leads to a wrong evaluation of the quality of the word embedding space.
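As a toy numpy illustration of the orthogonalization rule above (our own sketch, starting from a perturbed rotation, which is effectively the regime during training), repeatedly applying the update pulls the singular values of W back towards 1:

    import numpy as np

    def orthogonalize(W, beta=0.01):
        # One step of W <- (1 + beta) W - beta (W W^T) W.
        return (1 + beta) * W - beta * (W @ W.T) @ W

    Q, _ = np.linalg.qr(np.random.randn(300, 300))  # an orthogonal matrix
    W = Q + 0.01 * np.random.randn(300, 300)        # slightly off the manifold
    for _ in range(500):
        W = orthogonalize(W)
    print(np.abs(W @ W.T - np.eye(300)).max())      # ~0: W is orthogonal again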
Other dictionaries are generated using phrase tables of machine translation systems, but they are very noisy or trained on relatively small parallel corpora. For this task, we create high-quality dictionaries of up to 100k pairs of words using an internal translation tool to alleviate this issue. We make these dictionaries publicly available as part of the MUSE library 3. We report results on these bilingual dictionaries, as well as on those previously released, to allow for a direct comparison with previous approaches. For each language pair, we consider 1,500 query source and 200k target words. Following standard practice, we measure how many times one of the correct translations of a source word is retrieved, and report precision@k for k = 1, 5, 10. Cross-lingual semantic word similarity. We also evaluate the quality of our cross-lingual word embedding space using word similarity tasks. This task aims at evaluating how well the cosine similarity between two words of different languages correlates with a human-labeled score. We use the SemEval 2017 competition data (Camacho-Collados et al. FORMULA0), which provides large, high-quality and well-balanced datasets composed of nominal pairs that are manually scored according to a well-defined similarity scale. We report Pearson correlation. Sentence translation retrieval. Going from the word to the sentence level, we consider bag-of-words aggregation methods to perform sentence retrieval on the Europarl corpus. We consider 2,000 source sentence queries and 200k target sentences for each language pair and report the precision@k for k = 1, 5, 10, which accounts for the fraction of pairs for which the correct translation of the source words is among the k nearest neighbors. We use the idf-weighted average to merge word embeddings into sentence embeddings. The idf weights are obtained using another 300k sentences from Europarl. In what follows, we present the results on word translation retrieval using our bilingual dictionaries in Table 1 and our comparison to previous work in TAB1, where we significantly outperform previous approaches. We also present results on the sentence translation retrieval task in TAB3 and the cross-lingual word similarity task in Table 4. Finally, we present results on word-by-word translation for English-Esperanto in Table 5. Baselines. In our experiments, we consider a supervised baseline that uses the closed-form Procrustes solution given above, trained on a dictionary of 5,000 source words. This baseline can be combined with different similarity measures: NN for nearest-neighbor similarity, ISF for Inverted SoftMax, and the CSLS approach described in Section 2.2. Cross-domain similarity local scaling. This approach has a single parameter K defining the size of the neighborhood. The performance is very stable and therefore K does not need cross-validation: the results are essentially the same for K = 5, 10 and 50; therefore we set K = 10 in all experiments. In Table 1, CSLS provides a strong and robust gain in performance across all language pairs, with up to 7.2% in en-eo. We observe that Procrustes-CSLS is almost systematically better than Procrustes-ISF, while being computationally faster and not requiring hyper-parameter tuning. In TAB1, we compare our Procrustes-CSLS approach to previous models presented in BID31, Smith et al. FORMULA0 and BID2 on the English-Italian word translation task, on which state-of-the-art models have already been compared.
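A sketch of the retrieval metric used throughout this section (our own helper function, assuming a precomputed similarity matrix and gold translation sets):

    import numpy as np

    def precision_at_k(scores, gold, k=1):
        # scores: (n_queries, n_targets) similarities, e.g. CSLS scores.
        # gold[i]: set of correct target indices for query i (handles polysemy).
        topk = np.argsort(-scores, axis=1)[:, :k]
        hits = [bool(gold[i] & set(row.tolist())) for i, row in enumerate(topk)]
        return float(np.mean(hits))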
We show that our Procrustes-CSLS approach obtains an accuracy of 44.9%, outperforming all previous approaches. In TAB3, we also obtain a strong gain in accuracy in the Italian-English sentence retrieval task using CSLS, from 53.5% to 69.5%, outperforming previous approaches by an absolute gain of more than 20%. Impact of the monolingual embeddings. For the word translation task, we obtained a significant boost in performance when considering fastText embeddings trained on Wikipedia, as opposed to previously used CBOW embeddings trained on the WaCky datasets (BID3), as can be seen in TAB1. Among the two factors of variation, we noticed that this boost in performance was mostly due to the change in corpora. The fastText embeddings, which incorporate more syntactic information about the words, obtained only two percent more accuracy compared to CBOW embeddings trained on the same corpus, out of the 18.8% gain. We hypothesize that this gain is due to the similar co-occurrence statistics of Wikipedia corpora. Figure 3 in the appendix shows results on the alignment of different monolingual embeddings and concurs with this hypothesis. We also obtained better results for monolingual evaluation tasks such as word similarities and word analogies when training our embeddings on the Wikipedia corpora. Adversarial approach. Table 1 shows that the adversarial approach provides a strong system for learning cross-lingual embeddings without parallel data. On the es-en and en-fr language pairs, Adv-CSLS obtains a P@1 of 79.7% and 77.8%, which is only 3.2% and 3.3% below the supervised approach. Additionally, we observe that most systems still obtain decent results on distant languages that do not share a common alphabet (en-ru and en-zh), for which methods exploiting identical character strings are simply not applicable (BID2). This method allows us to build a strong synthetic vocabulary using similarities obtained with CSLS. The gain in absolute accuracy observed with CSLS on the Procrustes method is even more important here, with differences between Adv-NN and Adv-CSLS of up to 8.4% on es-en. As a simple baseline, we tried to match the first two moments of the projected source and target embeddings, which amounts to solving DISPLAYFORM0 and solving the sign ambiguity (BID45). This attempt was not successful, which we explain by the fact that this method tries to align only the first two moments, while adversarial training matches all the moments and can learn to focus on specific areas of the distributions instead of considering global statistics. Refinement: closing the gap with supervised approaches. The refinement step on the synthetic bilingual vocabulary constructed after adversarial training brings an additional and significant gain in performance, closing the gap between our approach and the supervised baseline. In Table 1, we observe that our unsupervised method even outperforms our strong supervised baseline on en-it and en-es, and is able to retrieve the correct translation of a source word with up to 83% accuracy. The better performance of the unsupervised approach can be explained by the strong similarity of co-occurrence statistics between the languages, and by the limitation of the supervised approach that uses a pre-defined fixed-size vocabulary (of 5,000 unique source words): in our case the refinement step can potentially use more anchor points. Table 4: Cross-lingual wordsim task. NASARI (Camacho-Collados et al. FORMULA0) refers to the official SemEval 2017 baseline. We report Pearson correlation.
Table 5: BLEU score on English-Esperanto word-by-word translation.
                       en-eo   eo-en
    Dictionary - NN     6.1    11.9
    Dictionary - CSLS  11.1    14.3
Although being a naive approach, word-by-word translation is enough to get a rough idea of the input sentence. The quality of the generated dictionary has a significant impact on the BLEU score. In TAB3, we also observe a strong gain in accuracy (up to 15%) on sentence retrieval using bag-of-words embeddings, which is consistent with the gain observed on the word retrieval task. Application to a low-resource language pair and to machine translation. Our method is particularly suited for low-resource languages, for which there only exists a very limited amount of parallel data. We apply it to the English-Esperanto language pair. We use the fastText embeddings trained on Wikipedia, and create a dictionary based on an online lexicon. The performance of our unsupervised approach on English-Esperanto is 28.2%, compared to 29.3% with the supervised method. On Esperanto-English, our unsupervised approach obtains 25.6%, which is 1.3% better than the supervised method. The dictionary we use for that language pair does not take into account the polysemy of words, which explains why the results are lower than on other language pairs. People commonly report the P@5 to alleviate this issue. In particular, the P@5 for English-Esperanto and Esperanto-English is 46.5% and 43.9% respectively. To show the impact of such a dictionary on machine translation, we apply it to the English-Esperanto Tatoeba corpora (BID44). We remove all pairs containing sentences with unknown words, resulting in about 60k pairs. Then, we translate sentences in both directions by doing word-by-word translation. In Table 5, we report the BLEU score with this method, when using a dictionary generated using nearest neighbors, and CSLS. With CSLS, this naive approach obtains 11.1 and 14.3 BLEU on English-Esperanto and Esperanto-English respectively. Table 6 in the appendix shows some examples of sentences in Esperanto translated into English using word-by-word translation. As one can see, the meaning is mostly conveyed in the translated sentences, but the translations contain some simple errors. For instance, "mi" is translated into "sorry" instead of "i", etc. The translations could easily be improved using a language model. Work on bilingual lexicon induction without parallel corpora has a long tradition, starting with the seminal works by BID37 and BID13. Similar to our approach, they exploit the distributional structure, but using discrete word representations such as TF-IDF vectors. Following studies (BID14; BID38; BID41; BID25; BID19; BID21) leverage statistical similarities between two languages to learn small dictionaries of a few hundred words. These methods need to be initialized with a seed bilingual lexicon, using for instance the edit distance between source and target words. This can be seen as prior knowledge, only available for closely related languages. There is also a large body of work in statistical decipherment, where the machine translation problem is reduced to a deciphering problem, and the source language is considered as a ciphertext (BID39; BID35). Although initially not based on distributional semantics, recent studies show that the use of word embeddings can bring significant improvement in statistical decipherment (BID10). The rise of distributed word embeddings has revived some of these approaches, now with the goal of aligning embedding spaces instead of just aligning vocabularies.
Cross-lingual word embeddings can be used to extract bilingual lexicons by computing the nearest neighbor of a source word, but they also allow other applications such as sentence retrieval or cross-lingual document classification (BID24). In general, they are used as building blocks for various cross-lingual language processing systems. More recently, several approaches have been proposed to learn bilingual dictionaries mapping from the source to the target space (BID31; BID51; BID12; BID0). In particular, BID47 showed that adding an orthogonality constraint to the mapping can significantly improve performance, and has a closed-form solution. This approach was further referred to as the Procrustes approach in BID43. The hubness problem for cross-lingual word embedding spaces has been investigated in prior work, whose authors added a correction to the word retrieval algorithm by incorporating a nearest-neighbor reciprocity term. More similar to our cross-domain similarity local scaling approach, BID43 introduced the inverted softmax to down-weight similarities involving often-retrieved hub words. Intuitively, given a query source word and a candidate target word, they estimate the probability that the candidate translates back to the query, rather than the probability that the query translates to the candidate. Recent work by BID43 leveraged identical character strings in both source and target languages to create a dictionary with low supervision, on which they applied the Procrustes algorithm. Similar to this approach, recent work by BID2 used identical digits and numbers to form an initial seed dictionary, and performed an update similar to our refinement step, but iteratively until convergence. While they showed they could obtain good results using as little as twenty parallel words, their method still needs cross-lingual information and is not suitable for languages that do not share a common alphabet. For instance, the method of BID2 does not work on the word translation task for any of the language pairs in our dataset, because the digits were filtered out from the datasets used to train the fastText embeddings. This iterative EM-based algorithm initialized with a seed lexicon has also been explored in other studies (BID19; BID26). There have been a few attempts to align monolingual word vector spaces with no supervision. Similar to our work, BID50 employed adversarial training, but their approach differs from ours in multiple ways. First, they rely on sharp drops of the discriminator accuracy for model selection. In our experiments, their model selection criterion does not correlate with the overall model performance, as shown in FIG1. Furthermore, it does not allow for hyper-parameter tuning, since it selects the best model over a single experiment. We argue this is a serious limitation, since the best hyper-parameters vary significantly across language pairs. Despite considering small vocabularies of a few thousand words, their method obtained weak results compared to supervised approaches. More recently, BID49 proposed to minimize the earth-mover distance after adversarial training. They compare their results only to their supervised baseline trained with a small seed lexicon, which is one to two orders of magnitude smaller than what we report here. In this work, we show for the first time that one can align word embedding spaces without any cross-lingual supervision, i.e., solely based on unaligned datasets of each language, while reaching or outperforming the quality of previous supervised approaches in several cases.
Using adversarial training, we are able to initialize a linear mapping between a source and a target space, which we also use to produce a synthetic parallel dictionary. It is then possible to apply the same techniques proposed for supervised settings, namely Procrustes optimization. Two key ingredients contribute to the success of our approach: First, we propose a simple criterion that is used as an effective unsupervised validation metric. Second, we propose the similarity measure CSLS, which mitigates the hubness problem and drastically increases the word translation accuracy. As a result, our approach produces high-quality dictionaries between different pairs of languages, with up to 83.3% on the Spanish-English word translation task. This performance is on par with supervised approaches. Our method is also effective on the English-Esperanto pair, thereby showing that it works for low-resource language pairs, and can be used as a first step towards unsupervised machine translation. In order to gain a better understanding of the impact of using similar corpora or similar word embedding methods, we investigated merging two English monolingual embedding spaces using either Wikipedia or the Gigaword corpus (BID33), and either Skip-Gram, CBOW or fastText methods (see Figure 3). Figure 3: English to English word alignment accuracy. Evolution of word translation retrieval accuracy with regard to word frequency, using either Wikipedia (Wiki) or the Gigaword corpus (Giga), and either skip-gram, continuous bag-of-words (CBOW) or fastText embeddings. The model can learn to perfectly align embeddings trained on the same corpus but with different seeds (a), as well as embeddings learned using different models (overall, when employing CSLS, which is more accurate on rare words) (b). However, the model has more trouble aligning embeddings trained on different corpora (Wikipedia and Gigaword) (c). This can be explained by the difference in co-occurrence statistics of the two corpora, particularly on the rarer words. Performance can be further degraded by using both different models and different types of corpus (d).
Source: mi kelkfoje parolas kun mia najbaro tra la barilo.
Hypothesis: sorry sometimes speaks with my neighbor across the barrier.
Reference: i sometimes talk to my neighbor across the fence.
Source: la viro malantaŭ ili ludas la pianon.
Hypothesis: the man behind they plays the piano.
Reference: the man behind them is playing the piano.
Source: bonvole protektu min kontraŭ tiuj malbonaj viroj.
Hypothesis: gratefully protects hi against those worst men.
Reference: please defend me from such bad men.
Table 6: Esperanto-English. Examples of fully unsupervised word-by-word translations. The translations reflect the meaning of the source sentences, and could potentially be improved using a simple language model.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
H196sainb
Aligning languages without the Rosetta Stone: with no parallel data, we construct bilingual dictionaries using adversarial training, cross-domain local scaling, and an accurate proxy criterion for cross-validation.
Questions that require counting a variety of objects in images remain a major challenge in visual question answering (VQA). The most common approaches to VQA involve either classifying answers based on fixed-length representations of both the image and question or summing fractional counts estimated from each section of the image. In contrast, we treat counting as a sequential decision process and force our model to make discrete choices of what to count. Specifically, the model sequentially selects from detected objects and learns interactions between objects that influence subsequent selections. A distinction of our approach is its intuitive and interpretable output, as discrete counts are automatically grounded in the image. Furthermore, our method outperforms the state of the art architecture for VQA on multiple metrics that evaluate counting. Visual question answering (VQA) is an important benchmark to test for context-specific reasoning over complex images. While the field has seen substantial progress, counting-based questions have seen the least improvement. Intuitively, counting should involve finding the number of distinct scene elements or objects that meet some criteria; see Fig. 1 for an example. In contrast, the predominant approach to VQA involves representing the visual input with the final feature map of a convolutional neural network (CNN), attending to regions based on an encoding of the question, and classifying the answer from the attention-weighted image features (BID32; BID31 b; BID7; BID14). Our intuition about counting seems at odds with the effects of attention, where a weighted average obscures any notion of distinct elements. As such, we are motivated to re-think the typical approach to counting in VQA and propose a method that embraces the discrete nature of the task. Our approach is partly inspired by recent work that represents images as a set of distinct objects, as identified by object detection, and makes use of the relationships between these objects (BID26). We experiment with counting systems that build off of the vision module used for these two works, which represents each image as a set of detected objects. For training and evaluation, we create a new dataset, HowMany-QA. It is taken from the counting-specific union of VQA 2.0 (BID10) and Visual Genome QA. We introduce the Interpretable Reinforcement Learning Counter (IRLC), which treats counting as a sequential decision process. We treat learning to count as learning to enumerate the relevant objects in the scene. As a result, IRLC returns not only a count but also the objects supporting its answer. This output is produced through an iterative method. Each step of this sequence has two stages: First, an object is selected to be added to the count. Second, the model adjusts the priority given to unselected objects based on their configuration with the selected objects (Fig. 1). We supervise only the final count and train the decision process using reinforcement learning (RL). Additional experiments highlight the importance of the iterative approach when using this manner of weak supervision. Furthermore, we train the current state of the art model for VQA on HowMany-QA and find that IRLC achieves a higher accuracy and lower count error. Lastly, we compare the grounded counts of our model to the attentional focus of the state of the art baseline to demonstrate the interpretability gained through our approach. Figure 1: IRLC takes as input a counting question and image. Detected objects are added to the returned count through a sequential decision process. The above example illustrates actual model behavior after training.
Visual representations for counting. As a standalone problem, counting from images has received some attention, but typically within specific problem domains. BID23 explore training a CNN to count directly from synthetic data. Counts can also be estimated by learning to produce density maps for some category of interest (typically people), as in several prior works (e.g., Oñoro-Rubio & López-Sastre; BID34). Density estimation simplifies the more challenging approach of counting by instance-by-instance detection (BID19). Methods to detect objects and their bounding boxes have advanced considerably (BID8; BID21; BID6; BID35), but tuning redundancy reduction steps in order to count is unreliable. Here, we overcome this limitation by allowing flexible, question-specific interactions during counting. Alternative approaches attempt to model subitizing, which describes the human ability to quickly and accurately gauge numerosity when at most a few objects are present. Prior work demonstrates that CNNs may be trained towards a similar ability when estimating the number of salient objects in a scene. This approach was later extended to counting 80 classes of objects simultaneously; that model is trained to estimate counts within each subdivision of the full image, where local counts are typically within the subitizing range. The above studies apply counting to a fixed set of object categories. In contrast, we are interested in counting during visual question answering, where the criteria for counting change from question to question and can be arbitrarily complex. This places our work in a different setting than those that count from the image alone. For example, one such study applies its trained models to a subset of VQA questions, but the analysis was limited to the specific subset of examples where the question and answer labels agreed with the object detection labels used for training. Here, we overcome this limitation by learning to count directly from question/answer pairs. Visual question answering. The potential of deep learning to fuse visual and linguistic reasoning has been recognized for some time. Visual question answering poses the challenge of retrieving question-specific information from an associated image, often requiring complex scene understanding and flexible reasoning. In recent years, a number of datasets have been introduced for studying this problem (BID20; BID37; BID0; BID10). The majority of recent progress has been aimed at the so-named "VQA" datasets (BID0; BID10), where counting questions represent roughly 11% of the data. Though our focus is on counting questions specifically, prior work on VQA is highly relevant. An early baseline for VQA represents the question and image at a coarse granularity, respectively using a "bag of words" embedding along with spatially-pooled CNN outputs to classify the answer (BID36). In BID20, a similar fixed-length image representation is fused with the question embeddings as input to a recurrent neural network (RNN), from which the answer is classified. Attention. More recent variants have chosen to represent the image at a finer granularity by omitting the spatial pooling of the CNN feature map and instead use attention to focus on relevant image regions before producing an answer (BID32; BID31 b; BID7; BID14).
These works use the spatially-tiled feature vectors output by a CNN to represent the image; others follow the intuition that a more meaningful representation may come from parsing the feature map according to the locations of objects in the scene (BID24; BID13). Notably, using object detection was a key design choice for the winning submission to the VQA 2017 challenge. Work directed at VQA with synthetic images (which sidesteps the challenges created by computer vision) has further demonstrated the utility that relationships may provide as an additional form of image annotation (BID26). Interpretable VQA. The use of "scene graphs" in real-image VQA would have the desirable property that intermediate model variables would be grounded in concepts explicitly, a step towards making neural reasoning more transparent. A conceptual parallel to this is found in Neural Module Networks (BID2 b; BID12), which gain interpretability by grounding the reasoning process itself in defined concepts. The general concept of interpretable VQA has been the subject of recent interest. We address this at the level of counting in VQA. We show that, despite the challenge presented by this particular task, an intuitive approach gains in both performance and interpretability over the state of the art. Within the field of VQA, the majority of progress has been aimed at the VQA dataset (BID0) and, more recently, VQA 2.0 (BID10), which expands the total number of questions in the dataset and attempts to reduce bias by balancing answers to repeated questions. VQA 2.0 consists of 1.1M questions pertaining to the 205K images from COCO. The examples are divided according to the official COCO splits. In addition to VQA 2.0, we incorporate the Visual Genome (VG) dataset. Visual Genome consists of 108K images, roughly half of which are part of COCO. VG includes its own visual question answering dataset. We include examples from that dataset when they pertain to an image in the VQA 2.0 training set. In order to evaluate counting specifically, we define a subset of the QA pairs, which we refer to as HowMany-QA. Our inclusion criteria were designed to filter QA pairs where the question asks for a count, as opposed to simply an answer in the form of a number (Fig. 2). For the first condition, we require that the question contains one of the following phrases: "how many", "number of", "amount of", or "count of". We also reject a question if it contains the phrase "number of the", since this phrase frequently refers to a printed number rather than a count (i.e. "what is the number of the bus?"). Lastly, we require that the ground-truth answer is a number between 0 and 20 (inclusive). The original VQA 2.0 train set includes roughly 444K QA pairs, of which 57,606 meet the above criteria. Figure 2: Examples of question-answer pairs that are excluded from HowMany-QA. This selection exemplifies the common types of "number" questions that do not require counting and therefore distract from our objective: (from left to right) time, general number-based answers, ballparking, and reading numbers from images. Importantly, the standard VQA evaluation metrics do not distinguish these from counting questions; instead, performance is reported for "number" questions as a whole. Due to our filter and focus on counting questions, we cannot make use of the official test data since its annotations are not available. Hence, we divide the validation data into separate development and test sets.
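These criteria amount to a simple predicate; a short Python sketch (our own paraphrase of the stated rules, assuming the answer has already been parsed to an integer where possible) is:

    def in_howmany_qa(question, answer):
        """Inclusion filter for HowMany-QA, following the criteria above."""
        q = question.lower()
        if not any(p in q for p in ("how many", "number of", "amount of", "count of")):
            return False
        if "number of the" in q:   # usually a printed number, not a count
            return False
        return isinstance(answer, int) and 0 <= answer <= 20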
More specifically, we apply the above criteria to the official validation data and select 5,000 of the resulting QA pairs to serve as the test data. The remaining 17,714 QA pairs are used as the development set. As mentioned above, the HowMany-QA training data is augmented with available QA pairs from Visual Genome, which are selected using the same criteria. A breakdown of the size and composition of HowMany-QA is provided in TAB4. All models compared in this work are trained and evaluated on HowMany-QA. To facilitate future comparison to our work, we have made the training, development, and test question IDs available for download.

In this work, we focus specifically on counting in the setting of visual question answering (where the criteria for counting change on a question-by-question basis). In addition, we are interested in model interpretability. We explore this notion by experimenting with models that are capable of producing question-guided counts which are visually grounded in object proposals. Rather than substantially modifying existing counting approaches, we compare three models whose architectures naturally fit within our experimental scope. These models each produce a count from the outputs of an object detection module and use identical strategies to encode the question and compare it to the detected objects. The models differ only in terms of how these components are used to produce a count (FIG0).

Our approach is inspired by the strategy behind the current state of the art in VQA, which infers objects as the input to the question-answering system. This inference is performed using the Faster R-CNN architecture BID21. The Faster R-CNN proposes a set of regions corresponding to objects in the image. It encodes the image as a set of bounding boxes {b_1, ..., b_N}, b_i ∈ R^4, and a complementary set of object encodings {v_1, ..., v_N}, v_i ∈ R^2048, corresponding to the locations and feature representations of each of the N detected objects, respectively (blue box in FIG0).

Rather than train our own vision module from scratch, we make use of publicly available pretrained object proposals. These provide rich, object-centric representations for each image in our dataset. These representations are fixed when learning to count and are shared across each of the QA models we experiment with.

Each architecture encodes the question and compares it to each detected object via a scoring function. We define q as the final hidden state of an LSTM BID11 after processing the question, and compute a score vector for each object (FIG0, green boxes):

q = LSTM(x_1, ..., x_T)    (1)
s_i = f_S([q, v_i])        (2)

Here, x_t denotes the word embedding of the question token at position t, and s_i ∈ R^n denotes the score vector encoding the relevance of object i to the question. Following prior work, we implement the scoring function f_S: R^m → R^n as a layer of Gated Tanh Units (GTU) BID28. [,] denotes vector concatenation.

We experiment with jointly training the scoring function to perform caption grounding, which we supervise using region captions from Visual Genome. Region captions provide linguistic descriptions of localized regions within the image, and the goal of caption grounding is to identify which object a given caption describes (details provided in Section B.1 of the Appendix). Caption grounding uses a strategy identical to that for question answering (Eqs. 1 and 2): an LSTM is used to encode each caption, and the scoring function f_S is used to encode its relevance to each detected object.
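A minimal PyTorch sketch of Eqs. 1-2 follows. A GTU layer computes tanh(W1 x) ⊙ σ(W2 x); the class name and the specific dimensions are illustrative, not taken from the paper's released code.

```python
import torch
import torch.nn as nn

class GTUScorer(nn.Module):
    """Scores each detected object against the question encoding (Eqs. 1-2)."""
    def __init__(self, word_dim=300, q_dim=1024, obj_dim=2048, score_dim=512):
        super().__init__()
        self.lstm = nn.LSTM(word_dim, q_dim, batch_first=True)
        self.fc_tanh = nn.Linear(q_dim + obj_dim, score_dim)
        self.fc_gate = nn.Linear(q_dim + obj_dim, score_dim)

    def forward(self, word_embs, obj_feats):
        # word_embs: (1, T, word_dim) question embeddings; obj_feats: (N, obj_dim)
        _, (q, _) = self.lstm(word_embs)                 # final hidden state
        q = q.squeeze(0).expand(obj_feats.size(0), -1)   # tile q for each object
        x = torch.cat([q, obj_feats], dim=1)             # [q, v_i]
        # Gated Tanh Unit: tanh branch modulated by a sigmoid gate
        return torch.tanh(self.fc_tanh(x)) * torch.sigmoid(self.fc_gate(x))
```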
The weights of the scoring function are tied for counting and caption grounding (FIG0). We include results from experiments where caption grounding is ignored.

Interpretable RL Counter (IRLC). For our proposed model, we aim to learn how to count by learning what to count. We assume that each counting question implicitly refers to a subset of the objects within a scene that meet some variable criteria. In this sense, the goal of our model is to enumerate that subset of objects. To implement this as a sequential decision process, we need to represent the probability of selecting a given action and how each action affects subsequent choices. To that end, we project the object scores s ∈ R^{N×n} to a vector of logits κ ∈ R^N, representing how likely each object is to be counted, where N is the number of detected objects:

κ = f_κ(s)    (3)

We also compute a matrix of interaction terms ρ ∈ R^{N×N} that are used to update the logits κ. The value ρ_ij represents how selecting object i will change κ_j. We calculate this interaction from a compressed representation of the question (Wq), the dot product of the normalized object vectors (v̂_i^T v̂_j), the object coordinates (b_i and b_j), and basic overlap statistics (IoU_ij, O_ij, and O_ji):

ρ_ij = f_ρ([Wq, v̂_i^T v̂_j, b_i, b_j, IoU_ij, O_ij, O_ji])    (4)

where f_ρ: R^m → R is a 2-layer MLP with ReLU activations. For each step t of the counting sequence, we greedily select the action with the highest value (interpreted as either selecting the next object to count or terminating), and update κ accordingly:

a_t = argmax([κ_t, ζ])        (5)
κ_{t+1} = κ_t + ρ(a_t, ·)     (6)

where ζ is a learnable scalar representing the logit value of the terminal action, and κ_0 is the result of Equation 3 (FIG0). The action a_t is expressed as the index of the selected object. ρ(a_t, ·) denotes the row of ρ indexed by a_t. Each object is only allowed to be counted once. We define the count C as the timestep t at which the terminal action was selected, i.e., a_t = N + 1.

This approach bears some similarity to Non-Maximal Suppression (NMS), a staple technique in object detection used to suppress redundant proposals. However, our approach is far less rigid and allows the question to determine how similar and/or overlapping objects interact.

Training IRLC. Because the process of generating a count requires making discrete decisions, training requires techniques from reinforcement learning. Given our formulation, a natural choice is to apply REINFORCE BID29. To do so, we calculate a distribution over action probabilities p_t from κ_t and generate a count by iteratively sampling actions from the distribution:

p_t = softmax([κ_t, ζ])    (7)
a_t ~ p_t                  (8)

We calculate the reward using Self-Critical Sequence Training (BID22; BID17), a variation of policy gradient. We define E = |C − C_GT| to be the count error and define the reward as R = E_greedy − E, where E_greedy is the baseline count error obtained by greedy action selection (which is also how the count is measured at test time). From this, we define our (unnormalized) counting loss as the negative reward-weighted log-likelihood of the sampled action sequence (Eq. 9). Additionally, we include two auxiliary objectives to aid learning. For each sampled sequence, we measure the total negative policy entropy H across the observed time steps. We also measure the average interaction strength at each time step and collect the total as an interaction penalty P_I (Eq. 10), where L_1 is the Huber loss from Eq. 12. Including the entropy objective is a common strategy when using policy gradient BID30 and is used to improve exploration. The interaction penalty is motivated by the a priori expectation that interactions should be sparse.
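To make the greedy roll-out of Eqs. 5-6 concrete, here is a NumPy sketch of the counting loop as we describe it above. All names are ours; the logits, interactions, and terminal logit are assumed to be given.

```python
import numpy as np

def irlc_greedy_count(kappa, rho, zeta, max_steps=20):
    """Greedy roll-out of the counting sequence.

    kappa: (N,) initial logits; rho: (N, N) interaction terms;
    zeta: scalar logit of the terminal action.
    Returns the count and the indices of counted objects.
    """
    kappa = kappa.astype(float).copy()
    counted = []
    for _ in range(max_steps):
        logits = kappa.copy()
        logits[counted] = -np.inf          # each object may be counted once
        if zeta >= logits.max():
            break                           # terminal action wins: stop counting
        a = int(np.argmax(logits))          # Eq. 5: pick the next object
        counted.append(a)
        kappa = kappa + rho[a]              # Eq. 6: selecting a updates every logit
    return len(counted), counted
```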
During training, we minimize a weighted sum of the three losses, normalized by the number of decision steps. As before, we provide training and implementation details in the Appendix (Sec. B.2).

SoftCount. As a baseline approach, we train a model to count directly from the outputs s of the scoring function. For each object, we project its score vector s_i to a scalar value and apply a sigmoid nonlinearity, denoted σ, to assign the object a count value between 0 and 1. The total count is the sum of these fractional, object-specific count values:

C = Σ_i σ(w^T s_i)    (11)

We train this model by minimizing the Huber loss associated with the absolute difference e = |C − C_GT| between the predicted count C and the ground-truth count C_GT (Eq. 12). For evaluation, we round the estimated count C to the nearest integer and limit the output to the maximum ground-truth count (in this case, 20).

Attention Baseline (UpDown). As a second baseline, we re-implement the QA architecture introduced in prior work, which the authors refer to as UpDown. We focus on this architecture for three main reasons. First, it represents the current state of the art for VQA 2.0. Second, it was designed to use the visual representations we employ. And, third, it exemplifies the common two-stage approach of deploying question-based attention over image regions (here, detected objects) to get a fixed-length visual representation (Eqs. 13 and 14), and then classifying the answer based on this average and the question encoding (Eq. 15), where s ∈ R^{N×n} denotes the matrix of score vectors for each of the N detected objects and α ∈ R^N denotes the attention weights. Here, each function f is implemented as a GTU layer and ⊗ denotes element-wise multiplication. For training, we use a cross-entropy loss, with the target given by the ground-truth count. At test time, we use the most probable count given by p.

We use two metrics for evaluation. For consistency with past work, we report the standard VQA test metric of accuracy. Since accuracy does not measure the degree of error, we also report root-mean-squared error (RMSE), which captures the typical deviation between the estimated and ground-truth count and emphasizes extreme errors. Details are provided in the Appendix (Sec. D).

To better understand the performance of the above models, we also report the performance of two non-visual baselines. The first baseline (Guess1) shows the performance when the estimated count is always 1 (the most common answer in the training set). The second baseline (LSTM) learns to predict the count directly from a linear projection of the question embedding q (Eq. 1).

IRLC achieves the highest overall accuracy and (with SoftCount) the lowest overall RMSE on the test set (TAB6). Interestingly, SoftCount clearly lags in accuracy but is competitive in RMSE, arguing that accuracy and RMSE are not redundant. We observe this to result from the fact that IRLC is less prone to small errors and very slightly more prone to large errors (which disproportionately impact RMSE). However, whereas UpDown improves in accuracy at the cost of RMSE, IRLC is substantially more accurate without sacrificing overall RMSE.

To gain more insight into the performance of these models, we calculate these metrics within the development set after separating the data according to how common the subject of the count is during training. We break up the questions into 5 roughly equal-sized bins representing increasingly uncommon subjects.
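A sketch of the SoftCount objective (Eq. 11 plus the Huber loss of Eq. 12) is below; `torch.nn.functional.smooth_l1_loss` is PyTorch's Huber loss. The function name and argument names are illustrative.

```python
import torch
import torch.nn.functional as F

def softcount_loss(score_vectors, w, ground_truth):
    """SoftCount sketch: sum of per-object sigmoid counts, trained with Huber loss.

    score_vectors: (N, n) outputs of the scoring function; w: (n,) projection;
    ground_truth: scalar ground-truth count.
    """
    per_object = torch.sigmoid(score_vectors @ w)   # fractional counts in (0, 1)
    count = per_object.sum()                        # Eq. 11: total soft count
    loss = F.smooth_l1_loss(count, torch.tensor(float(ground_truth)))
    return count, loss
```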
We include a 6th bin for subjects never seen during training. The accuracy and RMSE across the development set are reported for each of these bins in Figure 5.

Organizing the data this way reveals two main trends. First, all models perform better when asked to count subjects that were common during training. Second, the performance improvements offered by IRLC over UpDown persist over all groupings of the development data.

We introduce a novel analysis to quantify how well counted objects match the subject of the question. To perform this analysis, we form generic questions that refer to the object categories in the COCO object detection dataset. We take the object proposals counted in response to a given question and compare them to the ground-truth COCO labels to determine how relevant the counted object proposals are. Our metric takes on a value of 1 when the counted objects perfectly map onto the category to which the question refers. Values around 0 indicate that the counted objects were not relevant to the question. Section D of the Appendix details how the grounding quality metric is calculated. We perform this analysis for each of the 80 COCO categories using the images in the HowMany-QA development set. FIG2 compares the grounding quality of SoftCount and IRLC, where each point represents the average grounding quality for a particular COCO category. As with the previous two metrics, grounding quality is highest for COCO categories that are more common during training. We observe that IRLC consistently grounds its counts in objects that are more relevant to the question than does SoftCount. A paired t-test shows that this trend is statistically significant (p < 10^-15).

FIG3: Examples of failure cases with common and rare subjects ("people" and "ties," respectively). Each example shows the output of IRLC, where boxes correspond to counted objects, and the output of UpDown, where boxes are shaded according to their attention weights.

The design of IRLC is inspired by the ideal of interpretable VQA BID4. One hallmark of interpretability is the ability to predict failure modes. We argue that this is made more approachable by requiring IRLC to identify the objects in the scene that it chooses to count. FIG3 illustrates two failure cases that exemplify observed trends in IRLC. In particular, IRLC has little trouble counting people (they are the most common subject) but encounters difficulty with referring phrases (in this case, "sitting on the bench"). When asked to count ties (a rare subject), IRLC includes a sleeve in the output, demonstrating the tendency to misidentify objects with few training examples. These failures are obvious by virtue of the grounded counts, which point out exactly which objects IRLC counted. In comparison, the attention focus of UpDown (representing the closest analogy to a grounded output) does not reveal any clear pattern. From the attention weights, it is unclear which scene elements form the basis of the returned count. Indeed, the two models may share similar deficits. We observe that, in many cases, they produce similar counts. However, we stress that without IRLC and the chance to observe such similarities, such deficits of the UpDown model would be difficult to identify. The Appendix includes further visualizations and comparisons of model output, including examples of how IRLC uses the iterative decision process to produce discrete, grounded counts (Sec. A).
We present an interpretable approach to counting in visual question answering, based on learning to enumerate objects in a scene. By using RL, we are able to train our model to make binary decisions about whether a detected object contributes to the final count. We experiment with two additional baselines and control for variations due to visual representations and for the mechanism of visual-linguistic comparison. Our approach achieves state of the art for each of the evaluation metrics. In addition, our model identifies the objects that contribute to each count. These groundings provide traction for identifying the aspects of the task that the model has failed to learn, and thereby improve not only performance but also interpretability.

A EXAMPLES

Figure 8: Example outputs produced by each model. For SoftCount, objects are shaded according to the fractional count of each (0 = transparent; 1 = opaque). For UpDown, we similarly shade the objects but use the attention focus to determine opacity. For IRLC, we plot only the boxes from objects that were selected as part of the count. At each timestep, we illustrate the unchosen boxes in pink and shade each box according to κ_t (corresponding to the probability that the box would be selected at that time step; see main text). We also show the already-selected boxes in blue. For each of the questions, the counting sequence terminates at t = 3, meaning that the returned count C is 3. For each of these questions, that is the correct answer. The example on the far right is a 'correct failure,' a case where the correct answer is returned but the counted objects are not related to the question. These kinds of subtle failures are revealed with the grounded counts.

B.1 CAPTION GROUNDING

We experiment with jointly training counting and caption grounding. The goal of caption grounding is, given a set of objects and a caption, to identify the object that the caption describes. Identical to the first stages of answering the counting question, we use an LSTM to encode the caption and compare it to each of the objects using the scoring function:

h = LSTM(x_1, ..., x_T)
s_i = f_S([h, v_i])

where h ∈ R^1024, x_t ∈ R^300 is the embedding for the token at timestep t of the caption, T is the caption length, and f_S: R^m → R^n is the scoring function (Sec. 4.3). The embedding of object i is denoted by v_i, and the relevance of this object to the caption is encoded by the score vector s_i. We project each such score vector to a scalar logit α_i and apply a softmax nonlinearity to estimate p ∈ R^N, where N is the number of object proposals and p_i denotes the probability that the caption describes object proposal i:

p = softmax(α)

During training, we randomly select four of the images in the batch of examples to use for caption grounding (rather than the full 32 images that make up a batch). To create training data from the region captions in Visual Genome, we assign each caption to one of the detected object proposals. To do so, we compute the intersection over union (IoU) between the ground-truth region that the caption describes and the coordinates of each object proposal. We assign the object proposal with the largest IoU to the caption. If the maximum IoU for the given caption is less than 0.5, we ignore it during training. We compute the grounding probability p for each caption to which we can successfully assign a detection and train using the cross-entropy loss averaged over the captions. We weight the loss associated with caption grounding by 0.1 relative to the counting loss.
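The caption-to-proposal assignment rule above (largest IoU, discarded below 0.5) can be written in a few lines; the helper names below are ours.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    return inter / union if union > 0 else 0.0

def assign_caption(region_box, proposal_boxes, threshold=0.5):
    """Assign a region caption to the best-overlapping proposal, or None."""
    scores = [iou(region_box, p) for p in proposal_boxes]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best if scores[best] >= threshold else None
```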
Each of the considered counting models makes use of the same basic architecture for encoding the question and comparing it with each of the detected objects. For each model, we initialize the word embeddings from GloVe BID18 and encode the question with an LSTM of hidden size 1024. The only difference in the model-specific implementations of the language module is the hidden size of the scoring function f_S. We determined these specifics from the optimal settings observed during initial experiments. We use a hidden size of 512 for SoftCount and UpDown and a hidden size of 2048 for IRLC. We observed that the former two models were more prone to overfitting, whereas IRLC benefited from the increased capacity. When training on counting, we optimize using Adam BID15. For SoftCount and UpDown, we use a learning rate of 3×10^-4 and decay the learning rate by 0.8 when the training accuracy plateaus. For IRLC, we use a learning rate of 5×10^-4 and decay the learning rate by 0.99999 every iteration. For all models, we regularize using dropout and apply early stopping based on development set accuracy (see below).

When training IRLC, we apply the sampling procedure 5 times per question and average the losses. We weight the entropy penalty P_H and interaction penalty P_I (Eq. 10) both by 0.005 relative to the counting loss. These penalty weights yield the best development set accuracy within the hyperparameter search we performed (Fig. 10).

IRLC auxiliary loss. We performed a grid search to determine the optimal settings for the weights of the auxiliary losses used when training IRLC. From our observations, the entropy penalty is important for balancing the exploration of the model during training. In addition, the interaction penalty prevents degenerate counting strategies. The results of the grid search suggest that these auxiliary losses improve performance but become unhelpful if given too much weight (Fig. 10). In any case, IRLC outperforms the baseline models across the range of settings explored.

Figure 11: HowMany-QA test set performance for models trained with the full HowMany-QA training data (blue) and trained without the additional data from Visual Genome (green).

Data augmentation with Visual Genome. Here, we compare performance on the HowMany-QA test set for models trained with and without additional data from Visual Genome QA. In all cases, performance benefits from the additional training data (Fig. 11). On average, excluding Visual Genome from the training data decreases accuracy by 2.7% and increases RMSE by 0.12. Interestingly, the performance of IRLC is the most robust to the loss of training data.

Ordinality of UpDown output. Whereas the training objectives for SoftCount and IRLC intrinsically reflect the ordinal nature of counts, the same is not true for UpDown. For example, the losses experienced by SoftCount and IRLC reflect the degree of error between the estimated count and the ground truth; however, UpDown is trained only to place high probability mass on the ground-truth value (missing by 1 or by 10 are treated as equally incorrect). We examine the patterns in the output count probabilities from UpDown to ask whether the model learns an ordinal representation despite its non-ordinal training objective. Figure 12 illustrates these trends. When the estimated count is less than 5, the second-most probable count is very frequently adjacent to the most probable count.
When the estimated count is larger than 5, the probability distribution is less smooth, such that the second-most probable count is often considerably different from the most probable count. This suggests that UpDown learns ordinality for lower count values (where training data is abundant) but fails to generalize this concept to larger counts (where training data is more sparse).

Figure 12: (Left) Average count probability (Eq. 15) from UpDown, grouped according to the estimated count. (Right) Cumulative distribution of the absolute difference between the top two predicted counts, shown for when the most likely count was less than 5 (blue) and when it was greater than or equal to 5 (green). The probability distributions are much less smooth when the estimated count is large.

Accuracy. The VQA dataset includes annotations from ten human reviewers per question. The accuracy of a given answer a depends on how many of the provided answers it agrees with. It is scored as correct if at least 3 human answers agree:

Acc(a) = min(#humans that said a / 3, 1)

Each answer's accuracy is averaged over each 10-choose-9 set of human answers. As described in the main text, we only consider examples where the consensus answer was in the range of 0-20. We use all ten labels to calculate accuracy, regardless of whether individual labels deviate from this range. The accuracy values we report are taken from the average accuracy over some set of examples.

RMSE. This metric simply quantifies the typical deviation between the model count and the ground truth. Across a set of N examples, we calculate this metric as

RMSE = sqrt( (1/N) Σ_i (Ĉ_i − C_i)^2 )

where Ĉ_i and C_i are the predicted and ground-truth counts, respectively, for question i. RMSE is a measurement of error, so lower is better.

Grounding Quality. We introduce a new evaluation method for quantifying how relevant the objects counted by a model are to the type of object it was asked to count. This evaluation metric takes advantage of the ground-truth labels included in the COCO dataset. These labels annotate each object instance of 80 different categories for each of the images in the development set. We make use of GloVe embeddings to compute semantic similarity. We use GloVe(x) ∈ R^300 to denote the (L2-normalized) GloVe embedding of category x. For each image m, the analysis is carried out in two stages. First, we assign one of the COCO categories (or the background) to each of the object proposals used for counting. For each object proposal, we find the object in the COCO labels with the largest IoU. If the IoU is above 0.5, we assign the object proposal to the category of the COCO object; otherwise, we assign the object proposal to the background. Below, we use k_i^m to denote the category assigned to object proposal i in image m. Second, for each of the COCO categories present in image m, we use the category q (e.g., q = "car") to build a question (e.g., "How many cars are there?"). For SoftCount and IRLC, the count returned in response to this question is the sum of each object proposal's inferred count value:

C^(m,q) = Σ_i w_i^(m,q)

where N_m is the number of object proposals in image m and w_i^(m,q) is the count value given to proposal i. We use the count values to compute a weighted sum of the semantic similarity between the assigned object-proposal categories k and the question category q:

S^(m,q) = Σ_i w_i^(m,q) · GloVe(k_i^m)^T GloVe(q)

where semantic similarity is estimated from the dot product between the embeddings of the assigned category and the question category.
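The two standard metrics above are straightforward to compute; a sketch follows (function names are ours).

```python
import math
from itertools import combinations

def vqa_accuracy(answer, human_answers):
    """VQA accuracy: average of min(#agree / 3, 1) over all 10-choose-9 subsets."""
    scores = [min(list(subset).count(answer) / 3.0, 1.0)
              for subset in combinations(human_answers, 9)]
    return sum(scores) / len(scores)

def rmse(predicted, ground_truth):
    """Root-mean-squared error between predicted and ground-truth counts."""
    n = len(predicted)
    return math.sqrt(sum((p - g) ** 2 for p, g in zip(predicted, ground_truth)) / n)
```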
If k_i^m corresponds to the background, we replace its embedding with a vector of zeros. The final metric is computed for each COCO category by accumulating the results over all images that contain a label for that category and normalizing by the net count to get an average:

quality(q) = ( Σ_m S^(m,q) ) / ( Σ_m C^(m,q) )

The interpretation of this metric is straightforward: on average, how relevant are the counted objects to the subject of the question.
We perform counting for visual question answering; our model produces interpretable outputs by counting directly from detected objects.
Graphs are fundamental data structures required to model many important real-world data, from knowledge graphs, physical and social interactions to molecules and proteins. In this paper, we study the problem of learning generative models of graphs from a dataset of graphs of interest. After learning, these models can be used to generate samples with similar properties as the ones in the dataset. Such models can be useful in many applications, e.g., drug discovery and knowledge graph construction. The task of learning generative models of graphs, however, has its unique challenges. In particular, how to handle symmetries in graphs and the ordering of their elements during the generation process are important issues. We propose a generic graph neural net based model that is capable of generating any arbitrary graph. We study its performance on a few graph generation tasks compared to baselines that exploit domain knowledge. We discuss potential issues and open problems for such generative models going forward.

Graphs are natural representations of information in many problem domains. For example, relations between entities in knowledge graphs and social networks are well captured by graphs, and they are also good for modeling the physical world, e.g., molecular structure and the interactions between objects in physical systems. Thus, the ability to capture the distribution of a particular family of graphs has many applications. For instance, sampling from the graph model can lead to the discovery of new configurations that share the same global properties, as is, for example, required in drug discovery BID10. Obtaining graph-structured semantic representations for natural language sentences BID15 requires the ability to model (conditional) distributions on graphs. Distributions on graphs can also provide priors for Bayesian structure learning of graphical models BID23.

Probabilistic models of graphs have been studied for a long time, from at least two perspectives. On one hand, there are random graph models that robustly assign probabilities to large classes of graphs (BID8; BID1). These make strong independence assumptions and are designed to capture only certain graph properties, like degree distribution and diameter. While these are effective models of the distributions of graphs found in some domains, such as social networks, they are poor models of more richly structured graphs where small structural differences can be functionally significant, such as those encountered in chemistry or when representing the meaning of natural language sentences. As an alternative, a more expressive class of models makes use of graph grammars, which generalize devices from formal language theory so as to produce non-sequential structures BID27. Graph grammars are systems of rewrite rules that incrementally derive an output graph via a sequence of transformations of intermediate graphs. While symbolic graph grammars can be made stochastic or otherwise weighted using standard techniques BID5, two problems remain from a learnability standpoint. First, inducing grammars from a set of unannotated graphs is nontrivial, since formalism-appropriate derivation steps must be inferred and transformed into rules (BID17; Aguiñaga et al., 2016, for example). Second, as with linear output grammars, graph grammars make a hard distinction between what is in the language and what is excluded, making such models problematic for applications where it is inappropriate to assign zero probability to certain graphs.
In this work we develop an expressive model which makes no assumptions on the graphs and can therefore assign probabilities to any arbitrary graph. Our model generates graphs in a manner similar to graph grammars, where during the course of a derivation new structure (specifically, a new node or a new edge) is added to the existing graph, and where the probability of that addition event depends on the history of the graph derivation. To represent the graph during each step of the derivation, we use a representation based on graph-structured neural networks (graph nets). Recently there has been a surge of interest in graph nets for learning graph representations and solving graph prediction problems (BID11; BID6; BID2; BID14; BID9). These models are structured according to the graph being utilized, and are parameterized independent of graph size, therefore invariant to isomorphism, providing a good match for our purposes. We evaluate our model by fitting graphs in three problem domains: (1) generating random graphs with certain common topological properties (e.g., cyclicity); (2) generating molecule graphs; and (3) conditional generation of parse trees. Our proposed model performs better than random graph models and LSTM baselines on (1) and (2), and is close to an LSTM sequence-to-sequence with attention model on (3). We also analyze the challenges our model is facing, e.g., the difficulty of learning and optimization, and discuss possible ways to make it better.

The earliest probabilistic model of graphs, developed by BID8, assumed an independent, identical probability for each possible edge. This model leads to rich mathematical theory on random graphs, but it is too simplistic to model more complicated graphs that violate this i.i.d. assumption. Most of the more recent random graph models involve some form of "preferential attachment"; for example, in BID1 the more connections a node has, the more likely it is to be connected to new nodes added to the graph. Another class of graph models aims to capture the small-diameter and local-clustering properties of graphs, like the small-world model. Such models usually capture just one property of the graphs we want to model and are not flexible enough to model a wide range of graphs. BID18 proposed the Kronecker graphs model, which is capable of modeling multiple properties of graphs, but it still only has limited capacity, to allow tractable mathematical analysis.

There is a significant amount of work from the natural language processing and program synthesis communities on modeling the generation of trees. One line of work proposed a recursive neural network model to build parse trees for natural language and visual scenes. BID21 developed probabilistic models of parsed syntax trees for source code. Vinyals et al. (2015c) flattened a tree into a sequence and then modeled parse tree generation as a sequence-to-sequence task. BID7 proposed recurrent neural network models capable of modeling any top-down transition-based parsing process for generating parse trees. BID16 developed models for context-free grammars for generating SMILES string representations of molecule structures. Such tree models are very good at their task of generating trees, but they are incapable of generating more general graphs that contain more complicated loopy structures.

Our graph generative model is based on a class of neural net models we call graph nets.
Originally developed in BID28, a range of variants of such graph-structured neural net models have been developed and applied to various graph problems more recently (BID11; BID14; BID2; BID9). Such models learn representations of graphs, nodes and edges based on a propagation process which communicates information across a graph, and are invariant to graph isomorphism because of the graph-size-independent parameterization. We use these graph nets to learn representations for making various decisions in the graph generation process.

Our work shares some similarity with the recent work of BID12, where a graph is constructed to solve reasoning problems. The main difference between our work and BID12 is that our goal in this paper is to learn and represent unconditional or conditional densities on a space of graphs given a representative sample of graphs, whereas BID12 is primarily interested in using graphs as intermediate representations in reasoning tasks. However, BID12 does offer a probabilistic semantics for its graphs (the soft, real-valued node and connectivity strengths). But, as a generative model, BID12 makes a few strong assumptions about the generation process, e.g., a fixed number of nodes for each sentence, independent probabilities for edges given a batch of new nodes, etc., while our model makes none of these assumptions. On the other hand, as we are modeling graph structures, the samples from our model are graphs in which an edge or node either exists or does not; in BID12, all the graph components, e.g., the existence of a node or edge, are soft, and it is this form of soft node/edge connectivity that is used for other reasoning tasks. Dense and soft representations may be good for some applications, while sparse discrete graph structures may be good for others. Potentially, our graph generative model can also be used in an end-to-end pipeline to solve prediction problems, like BID12.

Our generative model of graphs is a sequential process which generates one node at a time and connects each node to the partial graph already generated by creating edges one by one. The actions by which our model generates graphs are illustrated in FIG0 (for the formal presentation, refer to Algorithm 1 in Appendix A). Briefly, in this generative process, in each iteration we (1) sample whether to add a new node of a particular type or terminate; if a node type is chosen, we add a node of this type to the graph and (2) check if any further edges are needed to connect the new node to the existing graph; if yes, we (3) select a node in the graph and add an edge connecting the new node to the selected node. The algorithm goes back to step (2) and repeats until the model decides not to add another edge. Finally, the algorithm goes back to step (1) to add subsequent nodes.

There are many different ways to tweak this generation process. For example, edges can be made directional or typed by jointly modeling the node selection process with type and direction random variables (in the molecule generation experiments below, we use typed nodes and edges). Additionally, constraints on certain structural aspects of graphs can be imposed, such as forbidding self-loops or multiple edges between a pair of nodes. The graph generation process can be seen as a sequence of decisions: (1) add a new node or not (with probabilities provided by an f_addnode module), (2) add a new edge or not (probabilities provided by f_addedge), and (3) pick one node to connect to the new node (probabilities provided by f_nodes).
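The control flow of this process can be summarized in a short sketch. Here the three module calls are stubbed out as callables that sample a decision from the corresponding output distributions; the function name and the stopping convention (returning None to terminate) are ours.

```python
def generate_graph(f_addnode, f_addedge, f_nodes, max_nodes=50):
    """Sketch of the sequential generation process described above."""
    nodes, edges = [], []
    while len(nodes) < max_nodes:
        node_type = f_addnode(nodes, edges)      # (1) add a node, or terminate?
        if node_type is None:
            break
        v = len(nodes)
        nodes.append(node_type)
        while f_addedge(nodes, edges, v):        # (2) add another edge to v?
            u = f_nodes(nodes, edges, v)         # (3) pick the other endpoint
            edges.append((u, v))
    return nodes, edges
```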
One example graph with a corresponding decision sequence is shown in Figure 6 in the Appendix. Note that different orderings of the nodes and edges can lead to different decision sequences for the same graph; how to properly handle these orderings is therefore an important issue, which we discuss below. Once the graph is transformed into such a sequence of structure-building actions, we can use a number of different generative models to model it. One obvious choice is to treat the sequences as sentences in natural language and use conventional LSTM language models. We propose to use graph nets to model this sequential decision process instead. That is, we define the modules that provide probabilities for the structure-building events (f_addnode, f_addedge and f_nodes) in terms of graph nets. As graph nets make use of the structure of the graph to create representations of nodes and edges via an information propagation process, this parameterization is more sensitive to the structures being constructed than is possible in an LSTM-based action sequence model.

For any graph G = (V, E), we associate a node embedding vector h_v ∈ R^H with each node v ∈ V. These vectors can be computed initially from node inputs, e.g., node type embeddings, and then propagated on the graph to aggregate information from the local neighborhood. The propagation process is iterative: in each round of propagation, a "message" vector is computed on each edge, and after all the messages are computed, each node collects all incoming messages and updates its own representation, as characterized in Eqs. 1, 2 and 3:

m_{u→v} = f_e(h_u, h_v, x_{u,v})        (1)
a_v = Σ_{u: (u,v)∈E} m_{u→v}            (2)
h'_v = f_n(a_v, h_v)                    (3)

where f_e and f_n are mappings that can be parameterized as neural networks, x_{u,v} is a feature vector for the edge (u, v), e.g., an edge type embedding, m_{u→v} is the message vector from u to v, a_v is the aggregated incoming message for node v, and h'_v is the new representation for node v after one round of propagation. A typical choice for f_e and f_n is to use fully connected neural nets for both, but f_n can also be any recurrent neural network core, like a GRU or LSTM. In our experience LSTM and GRU cores perform similarly; we therefore use the simpler GRUs for f_n throughout our experiments.

Given a set of node embeddings h_V = {h_1, ..., h_|V|}, one round of propagation, denoted prop(h_V, G), returns a set of transformed node embeddings h'_V which aggregates information from each node's neighbors (as specified by G). It does not change the graph structure. Multiple rounds of propagation, i.e., prop(prop(··· (h_V, G) ···, G), G), can be used to aggregate information across a larger neighborhood. Furthermore, different rounds of propagation can have different sets of parameters to further increase the capacity of this model; all our experiments use this setting.

To compute a vector representation for the whole graph, we first map the node representations to a higher-dimensional space, h_v^G = f_m(h_v); these mapped vectors are then summed together to obtain a single vector h_G (Eq. 4):

h_G = Σ_{v∈V} h_v^G                      (4)

The dimensionality of h_G is chosen to be higher than that of h_v, as the graph contains more information than individual nodes. A particularly useful variant of this aggregation module uses a separate gating network which predicts g_v^G = σ(g_m(h_v)) for each node, where σ is the logistic sigmoid function and g_m is another mapping function, and computes h_G as a gated sum (Eq. 5):

h_G = Σ_{v∈V} g_v^G ⊙ h_v^G              (5)

The sum can also be replaced with other reduce operators, like mean or max.
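For concreteness, here is a minimal PyTorch sketch of one propagation round (Eqs. 1-3, with a GRU node update) and the gated readout (Eq. 5). The class and dimensions are illustrative; this is not the paper's released implementation.

```python
import torch
import torch.nn as nn

class PropagationLayer(nn.Module):
    """One round of message passing with a GRU node update."""
    def __init__(self, node_dim=16, edge_dim=8):
        super().__init__()
        self.message = nn.Linear(2 * node_dim + edge_dim, 2 * node_dim)  # f_e
        self.update = nn.GRUCell(2 * node_dim, node_dim)                 # f_n

    def forward(self, h, edges, edge_feats):
        # h: (|V|, node_dim); edges: list of (u, v); edge_feats: (|E|, edge_dim)
        agg = torch.zeros(h.size(0), 2 * h.size(1))
        for (u, v), x_uv in zip(edges, edge_feats):
            agg[v] += self.message(torch.cat([h[u], h[v], x_uv]))  # m_{u->v}, Eq. 1-2
        return self.update(agg, h)                                  # h'_v, Eq. 3

def gated_readout(h, f_m, g_m):
    """Graph-level readout: sum_v sigmoid(g_m(h_v)) * f_m(h_v) (Eq. 5)."""
    return (torch.sigmoid(g_m(h)) * f_m(h)).sum(dim=0)
```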
We use gated sum in all our experiments. We denote the aggregation operation across the graph without propagation as h_G = R(h_V, G).

Our graph generative model defines a distribution over the sequence of graph-generating decisions by defining a probability distribution over possible outcomes for each step. Each of the decision steps is modeled using one of the following three modules.

(a) f_addnode(G). In this module, we take an existing graph G as input, together with its node representations h_V, to produce the parameters necessary to decide whether to terminate the algorithm or add another node (this will be probabilities for each node type if nodes are typed). To compute these probabilities, we first run T rounds of propagation to update node vectors, after which we compute a graph representation vector and predict an output from there through a standard MLP followed by a softmax or logistic sigmoid. This process is formulated in Eqs. 6, 7 and 8:

h_V^(T) = prop^(T)(h_V, G)               (6)
h_G = R(h_V^(T), G)                      (7)
f_addnode(G) = softmax(f_an(h_G))        (8)

Here the superscript (T) indicates the result after running the propagation T times, and f_an is an MLP that maps the graph representation vector h_G to the action output space, here the probability (or a vector of probability values) of adding a new node (type) or terminating. After the predictions are made, the new node vectors h_V^(T) are carried over to the next step, and the same carry-over is applied after each and every decision step. This makes the node vectors recurrent, across both the propagation steps and the different decision steps.

(b) f_addedge(G, v). This module is similar to (a); we only change the output module slightly, as in Eq. 9, to get the probability of adding an edge to the newly created node v through a different MLP f_ae, after getting the graph representation vector h_G:

f_addedge(G, v) = σ(f_ae(h_G))           (9)

(c) f_nodes(G, v). In this module, after T rounds of propagation, we compute a score for each node (Eq. 10), which is then passed through a softmax to be properly normalized (Eq. 11):

s_u = f_s(h_u^(T), h_v^(T))              (10)
p(y) = softmax(s)                        (11)

f_s maps node state pairs h_u and h_v to a score s_u for connecting u to the new node v, and p(y) is the output distribution over nodes. This can be extended to handle typed edges by making s_u a vector of scores of the same size as the number of edge types, and taking the softmax over all nodes and edge types.

Initializing Node States. Whenever a new node is added to the graph, we need to initialize its state vector. If there are inputs associated with the node, they can be used to compute the initialization. We also aggregate across the graph to get a graph vector and use it as an extra source of input for initialization. More concretely, the node state for a new node v is initialized as

h_v = f_init(R_init(h_V, G), x_v)        (12)

Here x_v is any input feature associated with the node, e.g., a node type embedding, R_init(h_V, G) computes a graph representation, and f_init is an MLP. If R_init(h_V, G) were not part of the input to the initialization module, nodes with the same input features added at different stages of the generation process would have the same initialization. Adding the graph vector fixes this issue.

Conditional Generative Model. The graph generative model described above can also be used for conditional generation, where some input is used to condition the generation process. We only need to make a few minor changes to the model architecture, by making a few design decisions about where to add in the conditioning information.
The conditioning information comes in the form of a vector, and it can be added to one or more of the following modules: (1) the propagation process; (2) the output components of the three modules, i.e., in f_n, f_e and f_s; (3) the node state initialization module f_init. In our experiments, we use the conditioning information only in f_n and f_init. Standard techniques for improving conditioning, like attention, can also be used, where we can use the graph representation to compute a query vector.

Our graph generative model defines a joint distribution p(G, π) over graphs G and node and edge orderings π (corresponding to the derivation in a traditional graph grammar). When generating samples, both the graph itself and an ordering are generated by the model. For both training and evaluation, we are interested in the marginal p(G) = Σ_{π∈P(G)} p(G, π). This marginal is, however, intractable to compute for moderately large graphs, as it involves a sum over all possible permutations. To evaluate this marginal likelihood we therefore need to use either sampling or some approximation. One Monte Carlo estimate is based on importance sampling:

p(G) = Σ_π p(G, π) = E_{q(π|G)}[ p(G, π) / q(π|G) ]    (13)

Here q(π|G) is any proposal distribution over permutations, and the estimate can be obtained by generating a few samples from q(π|G) and then averaging p(G, π)/q(π|G) over the samples. The variance of this estimate is minimized when q(π|G) = p(π|G). When a fixed canonical ordering is available for any arbitrary G, we can use it to train and evaluate our model by taking q(π|G) to be a delta function that puts all its probability on this canonical ordering. This choice of q, however, may be far from the optimal proposal p(π|G).

In training, since direct optimization of log p(G) is intractable, we instead learn the joint distribution p(G, π) by maximizing the expected joint log-likelihood

E_{p_data(G, π)}[ log p(G, π) ]    (14)

Given a dataset of graphs, we can get samples from p_data(G) fairly easily, and we have the freedom to choose p_data(π|G) for training. Since the maximizer of Eq. 14 is p(G, π) = p_data(G, π), to make the training process match the evaluation process we can take p_data(π|G) = q(π|G). Training with such a p_data(π|G) will drive the posterior of the model distribution p(π|G) close to the proposal distribution q(π|G), therefore improving the quality of our estimate of the marginal probability.

Ordering is an important issue for our graph model; in the experiments we always use a fixed ordering or a uniform random ordering for training, and leave the potentially better solution of learning an ordering to future work. In particular, in the learning-to-rank literature there is an extensive body of work on learning distributions over permutations, for example the Mallows model BID22 and the Plackett-Luce model (BID26; BID20), which may be used here. Interested readers can also refer to BID18 for discussions of similar ordering issues from different angles.

We study the properties and performance of different graph generation models and ordering strategies on three different tasks; more experimental results and detailed settings are included in Appendix C. In the first experiment, we train graph generative models on three sets of synthetic undirected graphs: cycles, trees, and graphs generated by the Barabasi-Albert model BID1, which is a good model for power-law degree distributions.
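The importance-sampling estimate of Eq. 13 is easy to implement in log space. Below is a sketch under the assumption that the model and proposal expose log-density callables; all names are ours.

```python
import math

def log_marginal_estimate(log_p_joint, sample_q, log_q, G, num_samples=10):
    """Importance-sampling estimate of log p(G) = log E_q[p(G, pi) / q(pi | G)].

    log_p_joint(G, pi) and log_q(G, pi) are assumed to return log-densities;
    sample_q(G) draws an ordering pi from the proposal q(pi | G).
    """
    log_ratios = []
    for _ in range(num_samples):
        pi = sample_q(G)
        log_ratios.append(log_p_joint(G, pi) - log_q(G, pi))
    # log-mean-exp for numerical stability
    m = max(log_ratios)
    return m + math.log(sum(math.exp(r - m) for r in log_ratios) / num_samples)
```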
We generate data on the fly during training; all cycles and trees have between 10 and 20 nodes, and the Barabasi-Albert model is set to generate graphs of 15 nodes, where each node is connected to 2 existing nodes when added to the graph. For comparison, we contrast our model against the Erdős-Rényi random graph model BID8 and an LSTM baseline. We estimate the edge probability parameter p in the Erdős-Rényi model using maximum likelihood. For the LSTM model, we sequentialized the decision sequences (see Figure 6 for an example) used by the graph model and trained LSTM language models on them. During training, for each graph we uniformly randomly permute the ordering of the nodes and order the edges by node indices, and then present the permuted graph to the graph model and the LSTM model. In experiments on all three sets, we used a graph model with node state dimensionality of 16 and the number of propagation steps set to T = 2, while the LSTM model has a hidden state size of 64. The two models have roughly the same number of parameters (LSTM 36k, graph model 32k).

The training curves plotting −log p(G, π), with G, π sampled from the training distribution, comparing the graph model and the LSTM model, are shown in FIG1. From these curves we can clearly see that the graph models train faster and have better asymptotic performance as well.

Figure 3: Degree histogram for samples generated by models trained on Barabasi-Albert graphs. The histogram labeled "Ground Truth" shows the data distribution estimated from 10,000 examples.

Since our graphs have known topological properties, we can also evaluate the samples of these models and see how well they align with these properties. We generated 10,000 samples from each model. For cycles and trees, we evaluate what percentage of the samples are actually cycles or trees. For graphs generated by the Barabasi-Albert model, we compute the node degree distribution. The results are shown in Table 1 and Figure 3. Again we can see that the proposed graph model is capable of matching the training data well in terms of all these metrics. Note that we used the same graph model on three different sets of graphs, and the model learns to adapt to the data. Here the success of the graph model compared to the LSTM baseline can be partly attributed to the ability to refer to specific nodes in a graph. The ability to do this inevitably requires keeping track of a varying set of objects and then pointing to them, which is non-trivial for an LSTM. Pointer networks (Vinyals et al., 2015b) can be used to handle the pointers, but building a varying set of objects is challenging in the first place, and the graph model provides a way to do it.

In the second experiment, we train graph generative models for the task of molecule generation. Recently, a number of papers have tackled this problem by using RNN language models on SMILES string representations of molecules (BID10; BID29; BID4). An example of a molecule and its corresponding SMILES string is shown in Figure 4. BID16 took one step further and used a context-free grammar to model the SMILES strings. However, molecules are inherently graph-structured objects in which it is possible to have cycles.

We used the ChEMBL database (the latest version, 23) for this study; previous versions of ChEMBL were also used in BID29 and BID24 for molecule generation. We filtered the database and chose to model molecules with at most 20 heavy atoms. This resulted in a training / validation / test split of 130,830 / 26,166 / 104,664 examples.
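The Erdős-Rényi maximum-likelihood fit and the cycle/tree sample checks are both one-liners. A sketch with NetworkX follows (function names are ours; nx.is_tree covers the tree case directly).

```python
import networkx as nx

def fit_erdos_renyi_p(graphs):
    """MLE of the edge probability: total edges / total possible edges."""
    edges = sum(g.number_of_edges() for g in graphs)
    possible = sum(n * (n - 1) // 2
                   for n in (g.number_of_nodes() for g in graphs))
    return edges / possible

def is_cycle(g):
    """A connected graph in which every node has degree 2 is a single cycle."""
    return nx.is_connected(g) and all(d == 2 for _, d in g.degree())

# nx.is_tree(g) checks the tree property for the tree-structured samples.
```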
The RDKit chemical toolkit is used to convert between the SMILES strings and the graph representations of the molecules. Both the nodes and the edges in molecule graphs are typed. All the model hyperparameters are tuned on the validation set; the number of propagation steps T is chosen from {1, 2}.

We compare the graph model with baseline LSTM language models trained on the SMILES strings as well as on the graph-generating sequences used by the graph model. RDKit can produce canonical SMILES representations for each molecule with an associated edge ordering; we therefore train the models using these canonicalized representations. We also trained these models with permuted orderings. For the graph model, we randomly permute the node ordering and change the edge ordering correspondingly; for the LSTM on SMILES, we first convert the SMILES string into a graph representation, permute the node ordering, and then convert back to a SMILES string without canonicalization, similar to BID3.

Table 3: Negative log-likelihood evaluation on small molecules with no more than 6 nodes.

We evaluate the negative log-likelihood for all models with the canonical ordering on the test set. We also generate 100,000 samples from each model and evaluate how many of them are valid, well-formatted molecule representations and how many of the generated samples are not already seen in the training set, following BID29 and BID24. The results are shown in TAB3, which also lists the type of graph-generating sequence and the ordering each model is trained on. Note that the models trained with random ordering are not tailored to the canonical ordering used in evaluation. In Appendix C.2, we show the distributions of a few chemical metrics for the generated samples to further assess their quality. The LSTM on SMILES strings has a slight edge in terms of likelihood evaluated under the canonical ordering (which is domain specific), but the graph model generates significantly more valid and novel samples. It is also interesting that the LSTM model trained with random ordering improves performance under canonical ordering; this is probably related to overfitting. Lastly, when compared using the generic graph-generation decision sequence, the graph architecture outperforms the LSTM in NLL as well.

It is intractable to estimate the marginal likelihood p(G) = Σ_π p(G, π) for large molecules. However, for small molecules this is possible. We did the enumeration and evaluated the 6 models on small molecules with no more than 6 nodes, comparing the negative log-likelihood obtained with the fixed ordering, the best possible ordering, and the true marginal; the results are shown in Table 3. On these small molecules, the graph model trained with random ordering has better marginal likelihood, and surprisingly, for the models trained with fixed ordering, the canonical ordering they are trained on is not always the best ordering, which suggests that there is big potential in actually learning an ordering.

Figure 5 shows a visualization of the molecule generation process for the graph model. The model trained with canonical ordering learns to generate a node and immediately connect it to the latest part of the generated graph, while the model trained with random ordering takes a completely different approach, generating pieces first and then connecting them together at the end.

In the last experiment, we look at a conditional graph generation task: generating parse trees given an input natural language sentence.
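A sketch of the validity/novelty evaluation with RDKit follows. Chem.MolFromSmiles returns None on malformed strings, and Chem.MolToSmiles canonicalizes before the novelty comparison; the function name and return convention are ours, and the training SMILES are assumed to be valid.

```python
from rdkit import Chem

def validity_and_novelty(sample_smiles, training_smiles):
    """Fraction of samples that parse as molecules, and fraction of the valid
    ones whose canonical form is unseen in training."""
    train_set = {Chem.MolToSmiles(Chem.MolFromSmiles(s)) for s in training_smiles}
    mols = [Chem.MolFromSmiles(s) for s in sample_smiles]
    mols = [m for m in mols if m is not None]           # drop invalid samples
    canon = [Chem.MolToSmiles(m) for m in mols]         # canonical SMILES
    validity = len(mols) / len(sample_smiles)
    novelty = sum(c not in train_set for c in canon) / max(len(canon), 1)
    return validity, novelty
```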
We took the Wall Street Journal dataset with sequentialized parse trees used in Vinyals et al. (2015c), and trained LSTM sequence-to-sequence models with attention as the baselines, on both the sequentialized trees and the decision sequences used by the graph model. In this dataset the parse trees are sequentialized following a top-down depth-first traversal ordering; we therefore used this ordering to train our graph model as well. Besides this, we also conducted experiments using the breadth-first traversal ordering. We changed our graph model slightly and replaced the loop for generating edges with a single step that picks one node as the parent for each new node, to adapt to the tree structure. This shortens the decision sequence for the graph model, although the flattened parse tree sequence the LSTM uses is still shorter. We also employed an attention mechanism to get better conditioning information, as for the sequence-to-sequence model.

Table 4: Parse tree generation results, evaluated on the Eval set.

Table 4 shows the perplexity of the different models on this task. Since the lengths of the decision sequences for the graph model and the sequentialized trees are different, we normalized the log-likelihood of all models using the length of the flattened parse trees to make them comparable. To measure sample quality we used another metric that checks whether the generated parse tree exactly matches the ground-truth tree. From these results we can see that the LSTM on sequentialized trees is better on both metrics, but the graph model does better than the LSTM trained on the same, more generic graph-generating decision sequences, which is compatible with what we observed in the molecule generation experiment.

One important issue for the graph model is that it relies on the propagation process to communicate information about the graph structure, and during training we only run propagation for a fixed number of T steps (in this case T = 2). Therefore, after a change to the tree structure, it is not possible for remote parts of the graph to become aware of this change in such a small number of propagation steps. Increasing T can make information flow further on the graph; however, the more propagation steps we use, the slower the graph model becomes, and the more difficult it is to train. For this task, a tree-structured model like R3NN BID25 may be a better fit, since it can propagate information over the whole tree by doing one bottom-up and one top-down pass in each iteration. On the other hand, the graph model is modeling a longer sequence than the sequentialized tree, and the graph structure is constantly changing, and therefore so is the model structure, which makes training such graph models considerably harder than training LSTMs.

The graph model in the proposed form is a powerful model capable of generating arbitrary graphs. However, as we have seen in the experiments and the analysis, there are still a number of challenges facing these models. Here we discuss a few of these challenges and possible solutions going forward.

Ordering. The ordering of nodes and edges is critical for both learning and evaluation. In the experiments we always used predefined distributions over orderings. However, it may be possible to learn an ordering of nodes and edges by treating the ordering π as a latent variable; this is an interesting direction to explore in the future.
Long Sequences: The generation process used by the graph model is typically a long sequence of decisions. If other forms of sequentializing the graph are available, e.g. SMILES strings or flattened parse trees, then such sequences are typically 2-3x shorter. This is a significant disadvantage for the graph model: it not only makes it harder to get the likelihood right, but also makes training more difficult. To alleviate this problem we can tweak the graph model to be more tied to the problem domain, reducing multiple decision steps and loops to single steps. Scalability: Scalability is a challenge for the graph generative model we proposed in this paper. Large graphs typically lead to very long graph generating sequences. On the other hand, the graph nets use a fixed number of propagation steps T to propagate information on the graph, and large graphs require a large T to have sufficient information flow, which also limits the scalability of these models. To solve this problem, we may use models that sequentially sweep over edges, like BID25, or come up with ways to do coarse-to-fine generation. We have found that training such graph models is more difficult than training typical LSTM models. The sequences these models are trained on are very long, and the model structure is constantly changing, which leads to various scaling issues and adds to the difficulty. We found that lowering the learning rate can solve a lot of the instability problems, but more satisfying solutions may be obtained by tweaking the model architecture. In this paper, we proposed a powerful deep generative model capable of generating arbitrary graphs through a sequential process. We studied its properties on a few graph generation problems. This model has shown great promise and has unique advantages over standard LSTM models. We hope that our results can spur further research in this direction to obtain better generative models of graphs.

The graph generation process is presented in Algorithm 1 for reference:

Algorithm 1: Sequential graph generation
1: initialize an empty graph G, t ← 0
2: while f_addnode(G) samples "add one more node" do
3:     incorporate node v_t into G
4:     compute the probability of adding an edge to v_t and sample z_t,i
5:     while z_t,i = 1 do    (add edges pointing to new node v_t)
6:         compute the probabilities of selecting each node in V_t
7:         sample a node, add the corresponding edge, and re-sample z_t,i
8:     t ← t + 1

Possible Sequence 1: <add node (node 0)> <don't add edge> <add node (node 1)> <add edge> <pick node 0 (edge)> <don't add edge> <add node (node 2)> <add edge> <pick node 0 (edge)> <add edge> <pick node 1 (edge)> <don't add edge> <don't add node>

Possible Sequence 2: <add node (node 1)> <don't add edge> <add node (node 0)> <add edge> <pick node 1 (edge)> <don't add edge> <add node (node 2)> <add edge> <pick node 1 (edge)> <add edge> <pick node 0 (edge)> <don't add edge> <don't add node>

Figure 6: An example graph and two corresponding decision sequences.

Figure 6 shows an example graph. Here the graph contains three nodes {0, 1, 2} and three edges, {(0, 1), (0, 2), (1, 2)}, as implied by the decision sequences. Consider generating the nodes in the order 0, 1, 2 and generating edge (0, 2) before (1, 2); the corresponding decision sequence is then the one shown on the left. Here the decisions are indented to clearly show the two loop levels. On the right we show another possible generating sequence, generating node 1 first and then nodes 0 and 2. In general, for each graph there may be many different orderings that can generate it. In this section we present more implementation details about our graph generative model.
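To make the control flow of Algorithm 1 concrete, here is a minimal Python sketch of the generation loop. The three decision modules are passed in as black-box callables standing in for the learned networks f_addnode, f_addedge, and f_nodes (which in the real model are conditioned on the current graph via propagation); the stubs at the bottom are purely illustrative.

```python
import random

def generate_graph(f_addnode, f_addedge, f_nodes, max_nodes=20):
    """Minimal sketch of the sequential generation in Algorithm 1.
    f_addnode(G): prob. of adding one more node
    f_addedge(G, v): prob. of adding one more edge to node v
    f_nodes(G, v): selection weights over the existing nodes
    """
    nodes, edges = [], []
    G = (nodes, edges)
    while len(nodes) < max_nodes and random.random() < f_addnode(G):
        v = len(nodes)
        nodes.append(v)                            # incorporate node v_t
        while len(nodes) > 1 and random.random() < f_addedge(G, v):
            weights = f_nodes(G, v)                # scores over V_t
            u = random.choices(range(len(weights)), weights=weights)[0]
            edges.append((u, v))                   # edge pointing to v_t
    return G

# Illustrative stubs in place of the learned modules:
print(generate_graph(lambda G: 0.7,
                     lambda G, v: 0.4,
                     lambda G, v: [1.0] * (len(G[0]) - 1) + [0.0]))
```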
The message function f_e is implemented as a fully connected neural network: m_{u→v} = f_e(h_u, h_v, x_{u,v}), where x_{u,v} is the feature of edge (u, v). We can also use an additional edge function f_e' to compute the message in the reverse direction, m'_{v→u} = f_e'(h_u, h_v, x_{u,v}). When not using reverse messages, the node activation vectors are computed as a_v = Σ_{u:(u,v)∈E} m_{u→v}. When reverse messages are used, the node activations are a_v = Σ_{u:(u,v)∈E} m_{u→v} + Σ_{u:(v,u)∈E} m'_{u→v}. The node update function f_n is implemented as a recurrent cell: h'_v = f_n(a_v, h_v) = RNNCell(a_v, h_v), where RNNCell can be a vanilla RNN cell, a GRU cell, or an LSTM cell. In the experiments, we used a linear layer in the message functions f_e in place of the MLP, and we set the dimensionality of the outputs to be twice the dimensionality of the node state vectors h_u. For the synthetic graphs and molecules, f_e and f_e' share the same set of parameters, while for the parsing task, f_e and f_e' have different parameters. We always use GRU cells in our model. Overall, GRU cells and LSTM cells perform equally well, and both are significantly better than vanilla RNN cells, but GRU cells are slightly faster than LSTM cells. Note that each round of propagation can be thought of as a graph propagation "layer". When propagating for a fixed number of rounds T, we can tie the parameters across all layers, but we found that using different parameters on all layers performs consistently better. We use untied weights in all experiments. For aggregating across the graph to get a graph representation vector, we first map the node representations h_v into a higher-dimensional space, h_v^G = f_m(h_v), where f_m is another MLP, and then h_G = Σ_{v∈V} h_v^G is the graph representation vector. We found a gated sum h_G = Σ_{v∈V} g_v^G ⊙ h_v^G to be consistently better than a simple sum, where g_v^G = σ(g_m(h_v)) is a gating vector. In the experiments we always use this form of gated sum, both f_m and g_m are implemented as a single linear layer, and the dimensionality of h_G is set to twice the dimensionality of h_v. (a) f_addnode(G): This module takes an existing graph as input and produces a binary (non-typed nodes) or categorical (typed nodes) output. More concretely, after obtaining a graph representation h_G, we feed it into an MLP f_an to output scores. For graphs where the nodes are not typed, we have f_an(h_G) ∈ R, and the probability of adding one more node is p(add one more node | G) = σ(f_an(h_G)). For graphs where the nodes can be one of K types, we make f_an output a (K+1)-dimensional vector, set p_k = softmax(f_an(h_G))_k for all k, and then p(add one more node with type k | G) = p_k. The extra type K + 1 represents the decision of not adding any more nodes. In the experiments, f_an is always implemented as a linear layer and we found this to be sufficient. (b) f_addedge(G, v): This module takes the current graph and a newly added node v as input and produces the probability of adding an edge. In terms of implementation it is treated exactly the same as (a), except that we add the new node into the graph first, and use a different set of parameters both in the propagation module and in the output module, where we use a separate f_ae in place of f_an. This module always produces Bernoulli probabilities, i.e. the probability of either adding one more edge or not. Typed edges are handled in (c). (c) f_nodes(G, v): This module picks one of the nodes in the graph to be connected to node v.
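A minimal NumPy sketch of one propagation round and the gated-sum readout follows. It assumes a linear message function over the concatenated node states (edge features omitted) and replaces the GRU update with a single tanh layer to keep the sketch short; both simplifications are noted in the comments.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8                                    # node state dimensionality
W_msg = rng.normal(size=(2 * D, 2 * D))  # linear message fn, output dim = 2x D
W_upd = rng.normal(size=(D, 3 * D))      # stand-in for the GRU cell

def propagate(h, edges):
    """One round of message passing. h: [N, D] node states."""
    a = np.zeros((len(h), 2 * D))
    for u, v in edges:                   # forward messages m_{u->v}
        a[v] += np.concatenate([h[u], h[v]]) @ W_msg
    # Node update: the paper uses a GRU cell; a tanh layer is used
    # here purely to keep the sketch short.
    return np.tanh(np.concatenate([h, a], axis=1) @ W_upd.T)

def readout(h, W_m, W_g):
    """Gated-sum graph representation h_G = sum_v g_v * f_m(h_v)."""
    hG_v = h @ W_m.T                               # project to 2x D space
    gate = 1.0 / (1.0 + np.exp(-(h @ W_g.T)))      # sigmoid gating vectors
    return (gate * hG_v).sum(axis=0)

h = rng.normal(size=(4, D))
h = propagate(h, [(0, 1), (1, 2), (2, 3)])
print(readout(h, rng.normal(size=(2 * D, D)), rng.normal(size=(2 * D, D))).shape)
```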
After propagation, we have node representation vectors h_u^(T) for all u ∈ V; a score s_u ∈ R for each node u is then computed as s_u = f_s(h_u^(T), h_v^(T)). The probability of a node being selected is a softmax over these scores, p(u) = exp(s_u) / Σ_{u'} exp(s_{u'}). For graphs with J types of edges, we produce a vector s_u ∈ R^J for each node u by simply changing the output size of the MLP f_s. The probability of a node u and edge type j being selected is then a softmax over all scores across all nodes and edge types, p(u, j) = exp(s_{u,j}) / Σ_{u',j'} exp(s_{u',j'}). When a new node v is created, its node vector h_v needs to be initialized. In our model the node vector h_v is initialized using inputs from a few different sources: a node type embedding or any other node features that are available; a summary of the current graph, computed as a graph representation vector after aggregation; and any conditioning information, if available. Among these, the node type embedding e comes from a standard embedding module; the graph summary is implemented as a graph aggregation operation; and h_v is then initialized from these inputs. The conditioning vector c summarizes any conditional input information: for images this can be the output of a convolutional neural network, and for text this can be the output of an LSTM encoder. In the parse tree generation task, we employed an attention mechanism similar to the one used in Vinyals et al. (2015c). More specifically, we used an LSTM to obtain the representation of each input word h_i^c, for i ∈ {1, ..., L}. Whenever a node is created in the graph, we compute a query vector, which is again an aggregate over all node vectors. This query vector is used to compute a score u_i for each input word. For learning we have a set of training graphs, and we train our model to maximize the expected joint likelihood E_{p_data(G)} E_{p_data(π|G)} [log p(G, π)], as discussed in Section 3.4. Given a graph G and a specified ordering π of the nodes and edges, we can obtain a particular graph generating sequence (Appendix A shows an example of this). The log-likelihood log p(G, π) can then be computed for this sequence, where the likelihood for each individual step is computed using the output modules described in B.2. For p_data(π|G) we explored two possibilities: the canonical ordering in the particular domain, and uniform random ordering. The canonical ordering is a fixed ordering of a graph's nodes and edges given a graph. For molecules, the SMILES string specifies an ordering of nodes and edges, which we use as the canonical ordering; in the implementation we used the default ordering provided by the chemical toolbox RDKit. For parsing we tried two canonical orderings, the depth-first-traversal ordering and the breadth-first-traversal ordering. For uniform random ordering we first generate a random permutation of node indices, which gives us the node ordering, and then sort the edges according to the node indices to get the edge ordering. When evaluating the marginals we take the permutations of edges into account as well. In this section we describe the experiment setup in more detail and present further experiment results not included in the main paper. For this experiment the hidden size of the LSTM model is set to 64, the size of node states in the graph model is 16, and the number of propagation steps is T = 2. For both models we selected the learning rates from {0.001, 0.0005, 0.0002} on each of the three sets. We used the Adam (BID13) optimizer for both.
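Returning to the node-selection module (c) above, a minimal sketch is given below. It assumes the score for each (node, edge type) pair is a linear function of the concatenated node states; the model uses an MLP f_s, so the linear form and the dimensions here are simplifications.

```python
import numpy as np

def select_node(h, h_v, W_s):
    """f_nodes: score every existing node against the new node v.

    h:   [N, D] propagated node states h_u^(T)
    h_v: [D]    state of the newly added node v
    W_s: [J, 2D] linear scoring layer (an MLP in general); J edge types
    Returns a distribution over all (node, edge type) pairs.
    """
    pairs = np.concatenate([h, np.tile(h_v, (len(h), 1))], axis=1)  # [N, 2D]
    s = pairs @ W_s.T                                               # [N, J]
    e = np.exp(s - s.max())
    return e / e.sum()       # softmax across all nodes and edge types

rng = np.random.default_rng(0)
D, N, J = 8, 5, 3
p = select_node(rng.normal(size=(N, D)), rng.normal(size=D),
                rng.normal(size=(J, 2 * D)))
print(p.shape, p.sum())      # (5, 3), sums to 1.0
```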
Model Details: Our graph model has a node state dimensionality of 128, and the LSTM models have a hidden size of 512. The two models have roughly the same number of parameters (around 2 million). Our graph model uses GRU cells as f_n; we have tried LSTMs as well, but they perform similarly to GRUs. We have also tried GRUs for the baselines, but LSTM models work slightly better. The node state dimensionality and learning rate are chosen according to a grid search in {32, 64, 128, 256} × {0.001, 0.0005, 0.0002, 0.0001}, while for the LSTM models the hidden size and learning rate are chosen from {128, 256, 512, 1024} × {0.001, 0.0005, 0.0002}. The best learning rate for the graph model is 0.0001, while for the LSTM model the learning rate is 0.0002 or 0.0005. The LSTM model used a dropout rate of 0.5, while the graph model used a dropout rate of 0.2, applied to the last layer of the output modules. As discussed in the main paper, the graph model is significantly more unstable than the LSTM model, and therefore a much smaller learning rate should be used. The number of propagation steps T is chosen from {1, 2}; increasing T is in principle beneficial for the graph representations, but it is also more expensive. For this task a small T already shows good performance, so we did not explore this much further. Overall, the graph model is roughly 2-3x slower than an LSTM model with a similar number of parameters in our comparison. Here we examine the distribution of chemical metrics for the valid samples generated from the trained models. For this study we chose a range of chemical metrics available from RDKit, and computed the metrics for 100,000 samples generated from each model. For reference, we also computed the same metrics for the training set, and compared the sample metrics with the training set metrics. For each metric, we create a histogram to show its distribution across the samples, and compare it to the histogram on the training set by computing the KL divergence between them. The results are shown in FIG6. Note that all models are able to match the training distribution on these metrics quite well. Notably, the graph model and the LSTM model trained on permuted node and edge sequences have a bias towards generating molecules with higher SA scores, a measure of the ease of synthesizing the molecules. This is probably due to the fact that these models are trained to generate molecular graphs in arbitrary order (as opposed to following the canonical order, which makes sense chemically), and are therefore more likely to generate things that are harder to synthesize. However, this can be overcome if we train with RL to optimize for this metric. The graph model trained with permuted nodes and edges also has a slight bias toward generating larger molecules with more atoms and bonds. We also note that the graph and LSTM models trained on permuted node and edge sequences can still be improved, as they are not even overfitting after 1 million training steps. This is because with node and edge permutation, these models see on the order of n! times more data than the other models. Given more training time these models can improve further. Changing the bias for f_addnode and f_addedge: Since our graph generation model is very modular, it is possible to tweak the model after it has been trained. For example, we can tweak a single bias parameter in f_addnode and f_addedge to increase or decrease the graph size and edge density.
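As a concrete illustration of this kind of post-hoc tweak, the sketch below shifts a decision module's output bias in logit space; the function names are hypothetical stand-ins, since the text does not specify how the bias is stored.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tweaked_prob(logit, delta):
    """Shift a trained decision module's bias in logit space.
    For f_addnode, a positive delta lengthens generation (larger
    graphs); for f_addedge, it raises the edge density."""
    return sigmoid(logit + delta)

# Example: a raw logit of 0.0 (p = 0.5) shifted either way.
print(tweaked_prob(0.0, +0.5), tweaked_prob(0.0, -0.5))  # ~0.62 vs ~0.38
```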
In FIG7 (a) we show the shift in the distribution of the number of atoms in the samples when changing the f_addnode bias. As the bias changes, the samples change accordingly, while the model is still able to generate a high percentage of valid samples. Figure 8 (b) shows the shift in the distribution of the number of bonds in the samples when changing the f_addedge bias. The number of bonds, i.e. the number of edges in the molecular graph, changes as this bias changes. Note that this level of fine-grained control of edge density in sample generation is not straightforward to achieve with LSTM models trained on SMILES strings. Note, however, that increasing the f_addedge bias slightly changed the average node degree but negatively affected the total number of bonds. This is because the edge density also affects the molecule size: when the bias is negative, the model tends to generate larger molecules to compensate for this change, and when this bias is positive, the model tends to generate smaller molecules. Combining the f_addedge bias and the f_addnode bias can achieve the net effect of changing edge density. Step-by-step molecule generation visualization: Here we show a few examples of step-by-step molecule generation. Figure 9 shows an example of such a step-by-step generation process for a graph model trained on canonical ordering, and FIG0 shows one such example for a graph model trained on permuted random ordering.

Figure 9: Step-by-step generation process visualization for a graph model trained with canonical ordering.

Overfitting the Canonical Ordering: When trained with canonical ordering, our model will adapt its graph generating behavior to the ordering it is being trained on; Figure 9 and FIG0 show examples of how the ordering used for training can affect the graph generation behavior. On the other hand, training with canonical ordering can result in overfitting more quickly than training with uniform random ordering. In our experiments, training with uniform random ordering rarely overfits at all, but with canonical ordering the model overfits much more quickly. Effectively, with random ordering the model sees potentially factorially many possible orderings for the same graph, which can help reduce overfitting, but this also makes learning harder, as many orderings do not exploit the structure of the graphs at all. Another interesting observation about training with canonical ordering is that models trained with canonical ordering may not assign the highest probabilities to the canonical ordering after training. From Table 3 we can see that the log-likelihood for the canonical ordering (labeled "fixed ordering") is not always the same as for the best possible ordering, even though they are quite close. FIG0 shows an example histogram of the negative log-likelihood −log p(G, π) across all possible orderings π for a small molecule, under a model trained with canonical ordering. We can see that the small negative log-likelihood values concentrate on very few orderings, and a large number of orderings have significantly larger NLL. This shows that the model can learn to concentrate probability on orderings close to the canonical ordering, but it still "leaks" some probability to other orderings. Model Details: In this experiment we used a graph model with a node state dimensionality of 64, and an LSTM encoder with hidden size 256.
Attention over the input is implemented using a graph aggregation operation to compute a query vector, which is then used to attend to the encoder LSTM states, as described in B.3. The baseline LSTM models have a hidden size of 512 for both the encoder and the decoder. Dropout of 0.5 is applied to both the encoder and the decoder. For the graph model, the dropout in the decoder is reduced to 0.2 and applied to various output modules and the node initialization module. The baseline models have more than twice as many parameters as the graph model (52M vs 24M), mostly due to using a larger encoder. The node state dimensionality for the graph model and the hidden size of the encoder LSTM are chosen from a grid search {16, 32, 64, 128} × {128, 256, 512}. For the LSTM seq2seq model, the sizes of the encoder and decoder are always tied and selected from {128, 256, 512}. For all models the learning rate is selected from {0.001, 0.0005, 0.0002}. For the LSTM encoder, the input text is always reversed, which empirically is slightly better than the normal order. For the graph model we experimented with T ∈ {1, 2, 3, 4, 5}. A larger T can in principle be beneficial for getting better graph representations; however, this also means more computation time and more instability. T = 2 is a reasonable balance for this task.

Figure 10: Step-by-step generation process visualization for a graph model trained with permuted random ordering.
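The attention step described at the top of this section can be sketched as follows, under a simple assumption: the score for each encoder state is its dot product with the graph-derived query (the text does not specify the exact scoring function, so this form is a stand-in).

```python
import numpy as np

def attend(query, enc_states):
    """Attend over encoder LSTM states h^c_i given a query vector
    aggregated from the graph's node vectors.
    query: [D]; enc_states: [L, D]. Returns a context vector."""
    scores = enc_states @ query          # u_i = h^c_i . q (assumed form)
    w = np.exp(scores - scores.max())
    w /= w.sum()                         # softmax over the input words
    return w @ enc_states                # attention-weighted context

rng = np.random.default_rng(0)
print(attend(rng.normal(size=16), rng.normal(size=(7, 16))).shape)  # (16,)
```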
Hy1d-ebAb
We study the graph generation problem and propose a powerful deep generative model capable of generating arbitrary graphs.
We introduce a neural network that represents sentences by composing their words according to induced binary parse trees. We use Tree-LSTM as our composition function, applied along a tree structure found by a fully differentiable natural language chart parser. Our model simultaneously optimises both the composition function and the parser, thus eliminating the need for externally-provided parse trees which are normally required for Tree-LSTM. It can therefore be seen as a tree-based RNN that is unsupervised with respect to the parse trees. As it is fully differentiable, our model is easily trained with an off-the-shelf gradient descent method and backpropagation. We demonstrate that it achieves better performance compared to various supervised Tree-LSTM architectures on a textual entailment task and a reverse dictionary task. Finally, we show how performance can be improved with an attention mechanism which fully exploits the parse chart, by attending over all possible subspans of the sentence. Recurrent neural networks, in particular the Long Short-Term Memory (LSTM) architecture BID10 and some of its variants BID8 BID1, have been widely applied to problems in natural language processing. Examples include language modelling BID35 BID13, textual entailment BID2 BID30, and machine translation BID1 BID36 amongst others. The topology of an LSTM network is linear: words are read sequentially, normally in left-to-right order. However, language is known to have an underlying hierarchical, tree-like structure BID4. How to capture this structure in a neural network, and whether doing so leads to improved performance on common linguistic tasks, is an open question. The Tree-LSTM network BID37 BID41 provides a possible answer, by generalising the LSTM to tree-structured topologies. It was shown to be more effective than a standard LSTM in semantic relatedness and sentiment analysis tasks. Despite their superior performance on these tasks, Tree-LSTM networks have the drawback of requiring an extra labelling of the input sentences in the form of parse trees. These can be either provided by an automatic parser BID37, or taken from a gold-standard resource such as the Penn Treebank BID18. BID39 proposed to remove this requirement by including a shift-reduce parser in the model, to be optimised alongside the composition function based on a downstream task. This makes the full model non-differentiable so it needs to be trained with reinforcement learning, which can be slow due to high variance. Our proposed approach is to include a fully differentiable chart parser in the model, inspired by the CYK constituency parser BID5 BID40 BID15. Due to the parser being differentiable, the entire model can be trained end-to-end for a downstream task by using stochastic gradient descent. Our model is also unsupervised with respect to the parse trees, similar to BID39. We show that the proposed method outperforms baseline Tree-LSTM architectures based on fully left-branching, right-branching, and supervised parse trees on a textual entailment task and a reverse dictionary task. We also introduce an attention mechanism in the spirit of BID1 for our model, which attends over all possible subspans of the source sentence via the parse chart. Our work can be seen as part of a wider class of sentence embedding models that let their composition order be guided by a tree structure. 
These can be further split into two groups: models that rely on traditional syntactic parse trees, usually provided as input, and models that induce a tree structure based on some downstream task. In the first group, BID27 take inspiration from the standard Montagovian semantic treatment of composition. They model nouns as vectors, and relational words that take arguments (such as adjectives, that combine with nouns) as tensors, with tensor contraction representing application BID6. These tensors are trained via linear regression based on a downstream task, but the tree that determines their order of application is expected to be provided as input. BID32 and BID33 also rely on external trees, but use recursive neural networks as the composition function. Instead of using a single parse tree, BID20 propose a model that takes as input a parse forest from an external parser, in order to deal with uncertainty. The authors use a convolutional neural network composition function and, like our model, rely on a mechanism similar to the one employed by the CYK parser to process the trees. BID23 propose a related model, also making use of syntactic information and convolutional networks to obtain a representation in a bottom-up manner. Convolutional neural networks can also be used to produce embeddings without the use of tree structures, such as in BID14. BID3 propose an RNN that produces sentence embeddings optimised for a downstream task, with a composition function that works similarly to a shift-reduce parser. The model is able to operate on unparsed data by using an integrated parser. However, it is trained to mimic the decisions that would be taken by an external parser, and is therefore not free to explore using different tree structures. introduce a probabilistic model of sentences that explicitly models nested, hierarchical relationships among words and phrases. They too rely on a shift-reduce parsing mechanism to obtain trees, trained on a corpus of gold-standard trees. In the second group, BID39 shows the most similarities to our proposed model. The authors use reinforcement learning to learn tree structures for a neural network model similar to BID3, taking performance on a downstream task that uses the computed sentence representations as the reward signal. BID16 take a slightly different approach: they formalise a dependency parser as a graphical model, viewed as an extension to attention mechanisms, and hand-optimise the backpropagation step through the inference algorithm. All the models take a sentence as input, represented as an ordered sequence of words. Each word w i ∈ V in the vocabulary is encoded as a (learned) word embedding w i ∈ R d. The models then output a sentence representation h ∈ R D, where the output space R D does not necessarily coincide with the input space R d. Our simplest baseline is a bag-of-words (BoW) model. Due to its reliance on addition, which is commutative, any information on the original order of words is lost. Given a sentence encoded by embeddings w 1,..., w n it computes DISPLAYFORM0 where W is a learned input projection matrix. An obvious choice for a baseline is the popular Long Short-Term Memory (LSTM) architecture of BID10. It is a recurrent neural network that, given a sentence encoded by embeddings w 1,..., w T, runs for T time steps t = 1... T and computes DISPLAYFORM0 where σ(x) = 1 1+e −x is the standard logistic function. The LSTM is parametrised by the matrices W ∈ R 4D×d, U ∈ R 4D×D, and the bias vector b ∈ R 4D. 
The vectors σ(i t), σ(f t), σ(o t) ∈ R D are known as input, forget, and output gates respectively, while we call the vector tanh(u t) the candidate update. We take h T, the h-state of the last time step, as the final representation of the sentence. Following the recommendation of BID12, we deviate slightly from the vanilla LSTM architecture described above by also adding a bias of 1 to the forget gate, which was found to improve performance. Tree-LSTMs are a family of extensions of the LSTM architecture to tree structures BID37 BID41. We implement the version designed for binary constituency trees. Given a node with children labelled L and R, its representation is computed as DISPLAYFORM0 where w in is a word embedding, only nonzero at the leaves of the parse tree; and h L, h R and c L, c R are the node children's h-and c-states, only nonzero at the branches. These computations are repeated recursively following the tree structure, and the representation of the whole sentence is given by the h-state of the root node. Analogously to our LSTM implementation, here we also add a bias of 1 to the forget gates. While the Tree-LSTM is very powerful, it requires as input not only the sentence, but also a parse tree structure defined over it. Our proposed extension optimises this step away, by including a basic CYK-style BID5 BID40 BID15 chart parser in the model. The parser has the property of being fully differentiable, and can therefore be trained jointly with the Tree-LSTM composition function for some downstream task. The CYK parser relies on a chart data structure, which provides a convenient way of representing the possible binary parse trees of a sentence, according to some grammar. Here we use the chart as an efficient means to store all possible binary-branching trees, effectively using a grammar with only a single non-terminal. This is sketched in simplified form in TAB0 for an example input. The chart is drawn as a diagonal matrix, where the bottom row contains the individual words of the input sentence. The n th row contains all cells with branch nodes spanning n words (here each cell is represented simply by the span -see FIG0 below for a forest representation of the nodes in all possible trees). By combining nodes in this chart in various ways it is possible to efficiently represent every binary parse tree of the input sentence. The unsupervised Tree-LSTM uses an analogous chart to guide the order of composition. Instead of storing sets of non-terminals, however, as in a standard chart parser, here each cell is made up of a pair of vectors (h, c) representing the state of the Tree-LSTM RNN at that particular node in the tree. The process starts at the bottom row, where each cell is filled in by calculating the Tree-LSTM output- with w set to the embedding of the corresponding word. These are the leaves of the parse tree. Then, the second row is computed by repeatedly calling the Tree-LSTM with the appropriate children. This row contains the nodes that are directly combining two leaves. They might not all be needed for the final parse tree: some leaves might connect directly to higher-level nodes, which have not yet been considered. However, they are all computed, as we cannot yet know whether there are better ways of connecting them to the tree. This decision is made at a later stage. 
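Since the composition equations themselves were lost in extraction, the following NumPy sketch gives one standard parameterization of the binary Tree-LSTM consistent with the description above: five gates computed from the word embedding and the two children's states, with a bias of 1 added to the forget gates as stated. The exact variant used in the paper may differ in details.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tree_lstm_cell(w, hL, cL, hR, cR, W, U, b):
    """Binary Tree-LSTM composition (one standard parameterization).

    w: word embedding (nonzero only at leaves); hL, cL, hR, cR:
    children's h- and c-states (zero at leaves).
    W: [5D, d], U: [5D, 2D], b: [5D].
    """
    z = W @ w + U @ np.concatenate([hL, hR]) + b
    i, fL, fR, o, u = np.split(z, 5)
    c = cL * sigmoid(fL + 1.0) + cR * sigmoid(fR + 1.0) \
        + sigmoid(i) * np.tanh(u)        # forget-gate bias of 1, as in the text
    h = sigmoid(o) * np.tanh(c)
    return h, c

rng = np.random.default_rng(0)
d, D = 6, 8
W, U, b = rng.normal(size=(5*D, d)), rng.normal(size=(5*D, 2*D)), np.zeros(5*D)
h, c = tree_lstm_cell(rng.normal(size=d), *[np.zeros(D)]*4, W, U, b)
print(h.shape)   # (8,): the node's h-state
```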
Starting from the third row, ambiguity arises, since constituents can be built up in more than one way: for example, the constituent "neuro linguistic programming" in TAB0 can be made up either by combining the leaf "neuro" and the second-row node "linguistic programming", or by combining the second-row node "neuro linguistic" and the leaf "programming". In these cases, all possible compositions are performed, leading to a set of candidate constituents (c_1, h_1), ..., (c_n, h_n). Each is assigned an energy, computed using the cosine similarity function cos(·, ·) and a (trained) vector of weights u. All energies are then passed through a softmax function to normalise them, and the cell representation is finally calculated as a weighted sum of all candidates using the softmax output. The softmax uses a temperature hyperparameter t which, for small values, has the effect of making the distribution sparse by making the highest score tend to 1. In all our experiments the temperature is initialised as t = 1, and smoothly decreases as t = 1/2^e, where e ∈ Q is the fraction of training epochs that have been completed. In the limit as t → 0+, this mechanism will only select the highest-scoring option, and is equivalent to the argmax operation. The same procedure is repeated for all higher rows, and the final output is given by the h-state of the top cell of the chart. The whole process is sketched in FIG0 for an example sentence. Note how, for instance, the final sentence representation can be obtained in three different ways, each represented by a coloured circle. All are computed, and the final representation is a weighted sum of the three, represented by the dotted lines. When the temperature t reaches very low values, this effectively reduces to the single "best" tree, as selected by gradient descent. All models are implemented in Python 3.5.2 with the DyNet neural network library BID26 at commit 25be489. The code for all following experiments will be made available on the first author's website shortly after the publication date of this article. Performance on the development data is used to determine when to stop training. Each model is trained three times, and the test set performance is reported for the model performing best on the development set. The textual entailment model was trained on a 2.2 GHz Intel Xeon E5-2660 CPU, and took three days to converge. The reverse dictionary model was trained on an NVIDIA GeForce GTX TITAN Black GPU, and took five days to converge. In addition to the baselines already described in §3, for the following experiments we also train two additional Tree-LSTM models that use a fixed composition order: one that uses a fully left-branching tree, and one that uses a fully right-branching tree. We test our model and baselines on the Stanford Natural Language Inference task BID2, consisting of 570 k manually annotated pairs of sentences. Given two sentences, the aim is to predict whether the first entails, contradicts, or is neutral with respect to the second. For example, given "children smiling and waving at camera" and "there are children present", the model would be expected to predict entailment. For this experiment, we choose 100D input embeddings, initialised with 100D GloVe vectors BID28 and with out-of-vocabulary words set to the average of all other vectors. This results in a 100 × 37 369 word embedding matrix, fine-tuned during training.
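Returning to the cell-combination step described at the start of this section, the sketch below shows how one chart cell combines its candidate constituents. The energy is taken here to be the cosine similarity between the trained vector u and each candidate's h-state, which is one plausible reading of the description above, and the temperature t sharpens the softmax toward a hard argmax as it decreases.

```python
import numpy as np

def softmax(x, t):
    e = np.exp((x - x.max()) / t)
    return e / e.sum()

def fill_cell(candidates, u, t):
    """Combine all candidate constituents for one chart cell.

    candidates: list of (c, h) pairs, one per split point.
    Energy e_k = cos(u, h_k) (an assumed form of the energy); the
    cell state is the softmax-weighted sum of all candidates."""
    hs = np.stack([h for _, h in candidates])
    cs = np.stack([c for c, _ in candidates])
    e = hs @ u / (np.linalg.norm(hs, axis=1) * np.linalg.norm(u) + 1e-8)
    a = softmax(e, t)            # low t -> nearly hard selection
    return a @ cs, a @ hs        # weighted (c, h) for this cell

rng = np.random.default_rng(0)
cands = [(rng.normal(size=8), rng.normal(size=8)) for _ in range(3)]
c, h = fill_cell(cands, rng.normal(size=8), t=0.5)
print(h.shape)                   # (8,)
```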
For the supervised Tree-LSTM model, we used the parse trees included in the dataset. For training we used the Adam optimisation algorithm BID17, with a batch size of 16. Given a pair of sentences, one of the models is used to produce the embeddings s_1, s_2 ∈ R^100. Following BID39 and BID3, we then compute a combined feature vector for the pair and pass it through a final linear layer, where B ∈ R^{3×200} and b ∈ R^3 are trained parameters. TAB1 lists the accuracy and number of parameters for our model and baselines, as well as for other sentence embedding models in the literature. When the information is available, we report both the number of intrinsic model parameters and the number of word embedding parameters. For other models these figures are based on the data from the SNLI website and the original papers. Attention is a mechanism which allows a model to soft-search for relevant parts of a sentence. It has been shown to be effective in a variety of linguistic tasks, such as machine translation BID1 BID38, summarisation BID29, and textual entailment BID31. In the spirit of BID1, we modify our LSTM model such that it returns not just the output of the last time step, but rather the outputs for all steps. Thus, we no longer have a single pair of vectors s_1, s_2 as before, but rather two lists of vectors s_{1,1}, ..., s_{1,n1} and s_{2,1}, ..., s_{2,n2}. Then, we replace s_1 with an attended summary f(s_{1,1}, ..., s_{1,n1}; s_{2,n2}), where f is the attention mechanism, with vector parameter a and matrix parameters A_i, A_s. This can be interpreted as attending over sentence 1, informed by the context of sentence 2 via the vector s_{2,n2}. Similarly, s_2 is replaced by an analogously defined attended summary, with separate attention parameters. We also extend the mechanism of BID1 to the Unsupervised Tree-LSTM. In this case, instead of attending over the list of outputs of an LSTM at different time steps, attention is over the whole chart structure described in §3.4. Thus, the model is no longer attending over all words in the source sentences, but rather over all their possible subspans. The results for both attention-augmented models are reported in TAB2.

Table 4: Median rank (lower is better) and accuracies (higher is better) at 10 and 100 on the three test sets for the reverse dictionary task: seen words (S), unseen words (U), and concept descriptions (C).

We also test our model and baselines on the reverse dictionary task of BID9, which consists of 852 k word-definition pairs. The aim is to retrieve the name of a concept from a list of words, given its definition. For example, when provided with the sentence "control consisting of a mechanical device for controlling fluid flow", a model would be expected to rank the word "valve" above other confounders in a list. We use three test sets provided by the authors: two sets involving word definitions, either seen during training or held out; and one set involving concept descriptions instead of formal definitions. Performance is measured via three statistics: the median rank of the correct answer over a list of over 66 k words, and the proportion of cases in which the correct answer appears in the top 10 and top 100 ranked words (top-10 accuracy and top-100 accuracy). As output embeddings, we use the 500D CBOW vectors BID25 provided by the authors. As input embeddings we use the same vectors, reduced to 256 dimensions with PCA. Given a training definition as a sequence of (input) embeddings w_1, ..., w_n ∈ R^256, the model produces an embedding s ∈ R^256 which is then mapped to the output space via a trained projection matrix W ∈ R^{500×256}.
The training objective to be maximised is then the cosine similarity cos(Ws, d) between the projected definition embedding and the (output) embedding d of the word being defined. For the supervised Tree-LSTM model, we additionally parsed the definitions with Stanford CoreNLP to obtain parse trees. We use simple stochastic gradient descent for training. The first 128 batches are held out from the training set to be used as development data. The softmax temperature is allowed to decrease as described in §3.4 until it reaches a value of 0.005, and is then kept constant. This was found to give the best performance on the development set. Table 4 shows the results for our model and baselines, as well as the numbers for the cosine-based "w2v" models of BID9, taken directly from their paper. Our bag-of-words model consists of 193.8 k parameters; our LSTM uses 653 k parameters; the fixed-branching, supervised, and unsupervised Tree-LSTM models all use 1.1 M parameters. On top of these, the input word embeddings consist of 113 123 × 256 parameters. Output embeddings are not counted, as they are not updated during training. The results in TAB1 show a strong performance of the Unsupervised Tree-LSTM against our tested baselines, as well as against other similar methods in the literature with a comparable number of parameters. For the textual entailment task, our model outperforms all baselines, including the supervised Tree-LSTM, as well as some of the other sentence embedding models in the literature with a higher number of parameters. The use of attention, extended for the Unsupervised Tree-LSTM to cover all possible subspans, further improves performance. In the reverse dictionary task, the poor performance of the supervised Tree-LSTM can be explained by the unusual tokenisation used in the dataset of BID9: punctuation is simply stripped, turning e.g. "(archaic) a section of a poem" into "archaic a section of a poem", or stripping away the semicolons in long lists of synonyms. On the one hand, this might seem unfair on the supervised Tree-LSTM, which received suboptimal trees as input. On the other hand, it demonstrates the robustness of our method to noisy data. Our model also performed well in comparison to the LSTM and the other Tree-LSTM baselines. Despite the slower training time due to the additional complexity, FIG2 shows how our model needed fewer training examples to reach convergence in this task. Following BID39, we also manually inspect the learned trees to see how closely they match conventional syntax trees, as would typically be assigned by trained linguists. We analyse the same four sentences they chose. The trees produced by our model are shown in Figure 3. One notable feature is the fact that verbs are joined with their subject noun phrases first, which differs from the standard verb phrase structure. However, formalisms such as combinatory categorial grammar BID34, through type-raising and composition operators, do allow such constituents. The spans of prepositional phrases in (b), (c) and (d) are correctly identified at the highest level, but only in (d) does the structure of the subtree match convention. As could be expected, other features, such as the attachment of the full stops or of some determiners, do not appear to match human intuition. We presented a fully differentiable model to jointly learn sentence embeddings and syntax, based on the Tree-LSTM composition function. We demonstrated its benefits over standard Tree-LSTM on a textual entailment task and a reverse dictionary task.
Introducing an attention mechanism over the parse chart was shown to further improve performance on the textual entailment task. The model is conceptually simple, and easy to train via backpropagation and stochastic gradient descent with popular deep learning toolkits based on dynamic computation graphs, such as DyNet BID26 and PyTorch. The unsupervised Tree-LSTM we presented is relatively simple, but could plausibly be improved by combining it with aspects of other models. It should be noted in particular that the function assigning an energy to alternative ways of forming constituents is extremely basic and does not rely on any global information about the sentence. Using a more complex function, perhaps relying on a mechanism such as the tracking LSTM in BID3, might lead to improvements in performance. Techniques such as batch normalization BID11 or layer normalization BID0 might also lead to further improvements. In future work, it may be possible to obtain trees closer to human intuition by training models to perform well on multiple tasks instead of a single one, an important feature for intelligent agents to demonstrate BID21. Elastic weight consolidation BID19 has been shown to help with multitask learning, and could be readily applied to our model.
BJMuY-gRW
Represent sentences by composing them with Tree-LSTMs according to automatically induced parse trees.
Pruning neural networks for wiring-length efficiency is considered. Three techniques are proposed and experimentally tested: distance-based regularization, nested-rank pruning, and layer-by-layer bipartite matching. The first two algorithms are used in the training and pruning phases, respectively, and the third is used in the neuron-arrangement phase. Experiments show that distance-based regularization with weight-based pruning tends to perform the best, with or without layer-by-layer bipartite matching. These results suggest that these techniques may be useful in creating neural networks for implementation in widely deployed specialized circuits. 3. After iterating between the previous two steps a sufficient number of times, apply layer-by-layer bipartite matching to further optimize the energy of the layouts. The algorithm uses the realization that finding the optimal permutation of nodes in one layer that minimizes the wiring length to the nodes of other layers, assuming their positions are fixed, is equivalent to the weighted bipartite matching problem, for which the Hungarian algorithm is polynomial-time and exact (BID19). Apply this optimization algorithm layer by layer to the nodes of the pruned network. We run pruning experiments on a fully-connected neural network for MNIST, which contains two hidden layers of 300 and 100 units, respectively (this is the standard LeNet-300-100 architecture that has been widely studied in the pruning literature). We also try pruning the fully connected layers of a 10-layer convolutional network trained on the Street View House Numbers dataset (BID20). We show energy-accuracy curves for one setting of hyperparameters for each of these datasets in FIG0. In TAB1 we show a subset of the results of a hyperparameter grid search for these two datasets. We record the accuracy and energy after each pruning iteration, and then for each set of hyperparameters choose the model with the lowest energy above some threshold accuracy. For each target accuracy we show the weight-based results (which are comparable to the technique of BID2 and form a baseline) and the results of the distance-based regularization technique. We found that nested-rank pruning can perform better than pure weight-based pruning; however, distance-based regularization tends to outperform techniques that use nested-rank pruning, although sometimes distance-based regularization with nested-rank pruning performs best in the lower-accuracy, low-energy regime, as can be seen in the right graph of FIG0. In these tables we obtain a wide range of values at the highest accuracy (which we suspect is due to randomness in initial accuracy) but more consistency at the lower accuracies. For MNIST, our best-performing set of hyperparameters results in a compression ratio of 1.64 percent at 98% accuracy, comparable to the state of the art for this initial architecture and dataset (BID21). In Table 3 we apply the bipartite matching heuristic to the best-performing network obtained using weight-based regularization and the best-performing network using weight-distance-based regularization for each target accuracy. Across both datasets the distance-based regularization outperforms weight-based regularization on average across four trials, in some cases by close to 70%. In this paper we consider the novel problem of learning accurate neural networks that have low total wiring length, because this corresponds to energy consumption in the fundamental limit.
We introduce weight-distance regularization, nested-rank pruning, and layer-by-layer bipartite matching, and show through ablation studies that all of these algorithms are effective and can even reach state-of-the-art compression ratios. These results suggest that these techniques may be worth the computational effort if the neural network is to be widely deployed, if significantly lower energy is worth the slight decrease in accuracy, or if the application is to be deployed on either a specialized circuit or a general-purpose processor.

Table 2: Average and standard deviation over four trials for the Street View House Numbers task on both the wiring-length metric (energy) and the remaining-edges metric (edges). We note that with the appropriate hyperparameter setting our algorithm often outperforms the baseline weight-based techniques (p=0) on both the energy and the number of remaining edges.

Table 3: Results of applying the bipartite matching algorithm on the best-performing weight-based pruning network and the best-performing distance-based regularization method, before and after applying layer-by-layer bipartite matching. Average and standard deviation over 4 trials presented.
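The layer-by-layer matching step described above reduces to a standard assignment problem, sketched below with SciPy's Hungarian-algorithm solver. The energy model here, |weight| times Euclidean distance summed over connections, is an assumption standing in for the paper's exact wiring-length metric, and the array names are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def rearrange_layer(pos_prev, pos_next, W_in, W_out, slots):
    """Permute one layer's neurons to minimize wiring length to the
    (fixed) adjacent layers.

    pos_prev: [P, 2], pos_next: [Q, 2] coordinates of adjacent layers
    W_in: [P, N] weights into the layer; W_out: [N, Q] weights out
    slots: [N, 2] candidate coordinates for this layer's N neurons
    Each surviving edge contributes |w| * distance to the energy.
    """
    N = slots.shape[0]
    cost = np.zeros((N, N))              # cost[i, j]: neuron i in slot j
    for j in range(N):
        d_prev = np.linalg.norm(pos_prev - slots[j], axis=1)   # [P]
        d_next = np.linalg.norm(pos_next - slots[j], axis=1)   # [Q]
        cost[:, j] = np.abs(W_in).T @ d_prev + np.abs(W_out) @ d_next
    rows, cols = linear_sum_assignment(cost)   # Hungarian algorithm
    return cols, cost[rows, cols].sum()        # slot per neuron, total energy

rng = np.random.default_rng(0)
perm, energy = rearrange_layer(rng.random((4, 2)), rng.random((3, 2)),
                               rng.random((4, 5)), rng.random((5, 3)),
                               rng.random((5, 2)))
print(perm, energy)
```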
rygpbR2Pi7
Three new algorithms with ablation studies to prune neural network to optimize for wiring length, as opposed to number of remaining weights.
Given a video and a sentence, the goal of weakly-supervised video moment retrieval is to locate the video segment which is described by the sentence without having access to temporal annotations during training. Instead, a model must learn how to identify the correct segment (i.e. moment) when only being provided with video-sentence pairs. Thus, an inherent challenge is automatically inferring the latent correspondence between visual and language representations. To facilitate this alignment, we propose our Weakly-supervised Moment Alignment Network (wMAN), which exploits a multi-level co-attention mechanism to learn richer multimodal representations. The aforementioned mechanism is comprised of a Frame-By-Word interaction module as well as a novel Word-Conditioned Visual Graph (WCVG). Our approach also incorporates a novel application of positional encodings, commonly used in Transformers, to learn visual-semantic representations that contain contextual information about their relative positions in the temporal sequence through iterative message-passing. Comprehensive experiments on the DiDeMo and Charades-STA datasets demonstrate the effectiveness of our learned representations: our combined wMAN model not only outperforms the state-of-the-art weakly-supervised method by a significant margin but also does better than strongly-supervised state-of-the-art methods on some metrics. Video understanding has been a mainstay of artificial intelligence research. Recent work has sought to better reason about videos by learning more effective spatio-temporal representations. The video moment retrieval task, also known as text-to-clip retrieval, combines language and video understanding to find activities described by a natural language sentence. The main objective of the task is to identify the video segment within a longer video that is most relevant to a sentence. This requires a model to learn the mapping of correspondences (alignment) between the visual and natural language modalities. In the strongly-supervised setting, existing methods generally learn joint visual-semantic representations by projecting video and language representations into a common embedding space and leverage the provided temporal annotations to learn regressive functions for localization. However, such temporal annotations are often ambiguous and expensive to collect. Recent work seeks to circumvent these problems by addressing this task in the weakly-supervised setting, where only full video-sentence pairs are provided as weak labels. However, the lack of temporal annotations renders the aforementioned approaches infeasible. In that approach (Figure 1a), a Text-Guided Attention (TGA) mechanism is proposed to attend to segment-level features w.r.t. the sentence-level representations. However, such an approach treats the segment-level visual representations as independent inputs and ignores the contextual information derived from other segments in the video. More importantly, it does not exploit the fine-grained semantics of each word in the sentence. Consequently, existing methods are not able to reason about the latent alignment between the visual and language representations comprehensively.

Figure 1: Given a video and a sentence, our aim is to retrieve the most relevant segment (the red bounding box in this example). Existing methods consider video frames as independent inputs and ignore the contextual information derived from other frames in the video.
They compute a similarity score between the segment and the entire sentence to determine their relevance to each other. In contrast, our proposed approach aggregates contextual information from all the frames using graph propagation and leverages fine-grained frame-by-word interactions for more accurate retrieval. (Only some interactions are shown to prevent overcrowding the figure.)

In this paper, we take another step towards addressing the limitations of current weakly-supervised video moment retrieval methods by exploiting the fine-grained temporal and visual relevance of each video frame to each word (Figure 1b). Our approach is built on two core insights: 1) the temporal occurrence of frames or segments in a video provides vital visual information required to reason about the presence of an event; 2) the semantics of the query are integral to reasoning about the relationships between entities in the video. With this in mind, we propose our Weakly-Supervised Moment Alignment Network (wMAN). An illustrative overview of our model is shown in Figure 2. The key component of wMAN is a multi-level co-attention mechanism that is encapsulated by a Frame-by-Word (FBW) interaction module as well as a Word-Conditioned Visual Graph (WCVG). To begin, we exploit the similarity scores of all possible pairs of visual frame and word features to create frame-specific sentence representations and word-specific video representations. The intuition is that frames relevant to a word should have a higher measure of similarity compared to the rest. The word representations are updated by their word-specific video representations to create visual-semantic representations. Then a graph (WCVG) is built upon the frame and visual-semantic representations as nodes, introducing another level of attention between them. During the message-passing process, the frame nodes are iteratively updated with relational information from the visual-semantic nodes to create the final temporally-aware multimodal representations. The contribution of each visual-semantic node to a frame node is dynamically weighted based on their similarity. To learn such representations, wMAN also incorporates positional encodings into the visual representations to integrate contextual information about their relative positions. Such contextual information encourages the learning of temporally-aware multimodal representations. To learn these representations, we use a Multiple Instance Learning (MIL) framework that is similar in nature to the Stacked Cross Attention Network (SCAN) model. The SCAN model leverages image region-by-word interactions to learn better representations for image-text matching. In addition, the WCVG module draws inspiration from the Language-Conditioned Graph Network (LCGN), which seeks to create context-aware object features in an image. However, the LCGN model works with sentence-level representations, which do not account for the semantics of each word to each visual node comprehensively. wMAN also distinguishes itself from the above-mentioned models by extracting temporally-aware multimodal representations from videos and their corresponding descriptions, whereas SCAN and LCGN only work on images. The contributions of our paper are summarized below: • We propose a simple yet intuitive MIL approach for weakly-supervised video moment retrieval from language queries by exploiting fine-grained frame-by-word alignment.
• Our novel Word-Conditioned Visual Graph learns richer visual-semantic context through a multi-level co-attention mechanism. • We introduce a novel application of positional embeddings in video representations to learn temporally-aware multimodal representations. To demonstrate the effectiveness of our learned temporally-aware multimodal representations, we perform extensive experiments over two datasets, DiDeMo and Charades-STA, where we outperform the state-of-the-art weakly-supervised model by a significant margin and strongly-supervised state-of-the-art models on some metrics. Most of the recent works in video moment retrieval based on natural language queries are in the strongly-supervised setting, where the provided temporal annotations can be used to improve the alignment between the visual and language modalities. Among them, the Moment Alignment Network (MAN) utilizes a structured graph network to model temporal relationships between candidate moments, but one of the distinguishing factors with our wMAN is that our iterative message-passing process is conditioned on the multimodal interactions between frame and word representations. The TGN model bears some resemblance to ours in leveraging frame-by-word interactions to improve performance. However, it only uses a single level of attention, which is not able to infer the correspondence between the visual and language modalities comprehensively. In addition, we reiterate that all these methods train their models using strong supervision, whereas we address the weakly-supervised setting of this task. There are also a number of tasks closely related to video moment retrieval, such as temporal activity detection in videos. A general pipeline of proposal and classification is adopted by various temporal activity detection models, with the temporal proposals learnt by temporal coordinate regression. However, these approaches assume a predefined list of activities is provided, rather than an open-ended list supplied via natural language queries at test time. Methods for visual phrase grounding also tend to be provided with natural language queries as input, but the task is performed over image regions to locate a related bounding box rather than over video segments to locate the correct moment. In the video moment retrieval task, given a ground-truth video-sentence pair, the goal is to retrieve the video moment most relevant to the description. The weakly-supervised version of this task, which we address, can be formulated under the multiple instance learning (MIL) paradigm. When training using MIL, one receives a bag of items, where the bag is labeled as a positive if at least one item in the bag is a positive, and is labeled as a negative otherwise. In weakly-supervised moment retrieval, we are provided with a video-sentence pair (i.e., a bag), and the video segments are the items that we must learn to correctly label as relevant to the sentence (i.e., positive) or not. Following prior work, we assume sentences are only associated with their ground-truth video, and any other videos are negative examples. To build a good video-sentence representation, we introduce our Weakly-Supervised Moment Alignment Network (wMAN), which learns context-aware visual-semantic representations from fine-grained frame-by-word interactions.
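The section does not spell out wMAN's training loss; the sketch below shows one standard MIL-style objective consistent with this framing, a margin-based ranking loss that pushes the true video-sentence pair above negative videos. The margin value and function name are illustrative assumptions.

```python
import numpy as np

def mil_ranking_loss(sim_pos, sim_negs, margin=0.5):
    """Margin-based MIL ranking objective (an assumed form, not
    necessarily wMAN's exact loss): the ground-truth video-sentence
    pair should score at least `margin` above every negative video.
    sim_pos: scalar; sim_negs: [M] similarities to negative videos."""
    return np.maximum(0.0, margin + sim_negs - sim_pos).sum()

print(mil_ranking_loss(0.9, np.array([0.2, 0.85, 0.4])))  # 0.45
```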
As seen in Figure 2, our network has two major components: representation learning constructed from the Frame-by-Word attention and positional embeddings, described in Section 3.1, and a Word-Conditioned Visual Graph where we update video segment representations based on context from the rest of the video, described in Section 3.2. These learned video segment representations are used to determine their relevance to their corresponding attended sentence representations using a LogSumExp (LSE) pooling similarity metric, described in Section 3.3. In this section we discuss our initial video and sentence representations, which are updated with contextual information in Section 3.2. Each word in an input sentence is encoded using GloVe embeddings and then fed into a Gated Recurrent Unit (GRU). The output of this GRU is denoted as W = {w_1, w_2, ..., w_Q}, where Q is the number of words in the sentence. Each frame in the input video is encoded using a pretrained Convolutional Neural Network (CNN). In the case of a 3D CNN this actually corresponds to a small chunk of sequential frames, but we shall refer to this as a frame representation throughout this paper for simplicity. To capture long-range dependencies, we feed the frame features into a Long Short-Term Memory (LSTM) network. The latent hidden state outputs from the LSTM are concatenated with positional encodings (described below) to form the initial video representations, denoted as V = {v_1, v_2, ..., v_N}, where N is the number of frame features for video V. To provide some notion of the relative position of each frame, we include the positional encoding (PE) features which have been used in language tasks like learning language representations using BERT. These PE features can be thought of as similar to the temporal endpoint features (TEF) used in prior work on the strongly-supervised moment retrieval task, but the PE features provide information about the temporal position of each frame rather than the rough position at the segment level. For the desired PE features of dimension d, let pos indicate the temporal position of each frame, i the index of the feature being encoded, and M a scalar constant; then the PE features are defined as $PE(pos, 2i) = \sin(pos / M^{2i/d})$ and $PE(pos, 2i+1) = \cos(pos / M^{2i/d})$. Through experiments, we found the hyper-parameter M = 10,000 works well for all videos. These PE features are concatenated with the LSTM-encoded frame features at the corresponding frame position before going to the cross-modal interaction layers. Rather than relating a sentence-level representation with each frame as done in prior work, we aggregate similarity scores between all frame and word combinations from the input video and sentence. These Frame-By-Word (FBW) similarity scores are used to compute attention weights to identify which frame and word combinations are important for retrieving the correct video segment. More formally, for N video frames and Q words in the input, we compute the similarity score s_ij between every frame v_i and word w_j. Note that v now represents the concatenation of the video frame features and the PE features. Frame-Specific Sentence Representations. We obtain the normalized relevance of each word w.r.t. each frame from the FBW similarity matrix, and use it to compute attention for each word. Using the above-mentioned attention weights, a weighted combination of all the words is created, with words correlated to the frame gaining high attention. Intuitively, a word-frame pair should have a high similarity score if the frame contains a reference to the word.
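The computations above are straightforward to express in code. The following is a minimal sketch of the sinusoidal PE features and the two-way frame-by-word attention; the dot-product similarity, the tensor shapes and the use of PyTorch are illustrative assumptions, not the authors' exact implementation:

```python
import torch

def positional_encodings(num_frames: int, d: int, M: float = 10_000.0) -> torch.Tensor:
    """Sinusoidal PE features: one d-dimensional vector per frame position (d assumed even)."""
    pos = torch.arange(num_frames, dtype=torch.float32).unsqueeze(1)   # (N, 1)
    i = torch.arange(0, d, 2, dtype=torch.float32)                     # even feature indices
    freq = pos / (M ** (i / d))                                        # (N, d/2)
    pe = torch.zeros(num_frames, d)
    pe[:, 0::2] = torch.sin(freq)                                      # even dims: sine
    pe[:, 1::2] = torch.cos(freq)                                      # odd dims: cosine
    return pe

def frame_by_word(v: torch.Tensor, w: torch.Tensor):
    """v: (N, d) frame features (already concatenated with PEs), w: (Q, d) word features.
    Returns frame-specific sentence reps and word-specific video reps via two-way attention."""
    s = v @ w.t()                                # (N, Q) similarity; dot product assumed
    attn_words = torch.softmax(s, dim=1)         # normalize over words for each frame
    frame_specific_sentence = attn_words @ w     # (N, d): weighted sum of word features
    attn_frames = torch.softmax(s, dim=0)        # normalize over frames for each word
    word_specific_video = attn_frames.t() @ v    # (Q, d): weighted sum of frame features
    return frame_specific_sentence, word_specific_video
```

Note that in wMAN the PE features are concatenated with the LSTM outputs rather than added, which is why the sketch simply treats v as the already-augmented frame features.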
Then the frame-specific sentence representation, which emphasizes the words relevant to the frame, is defined as the attention-weighted sum of the word representations. Note that these frame-specific sentence representations do not participate in the iterative message-passing process (Section 3.2). Instead, they are used to infer the final similarity score between a video segment and the query (Section 3.3). Word-Specific Video Representations. To determine the normalized relevance of each frame w.r.t. each word, we compute the attention weights of each frame analogously. Similarly, we attend to the visual frame features with respect to each word by creating a weighted combination of visual frame features determined by the relevance of each frame to the word; this weighted combination defines each word-specific video representation. These word-specific video representations are used in our Word-Conditioned Visual Graph, which we will discuss in the next section. Given the sets of visual representations, word representations and their corresponding word-specific video representations, the WCVG aims to learn temporally-aware multimodal representations by integrating visual-semantic and contextual information into the visual features. To begin, the word representations are updated with their corresponding video representations to create a new visual-semantic representation w_j^vis by concatenating each word w_j and the word-specific video representation f_j. Intuitively, the visual-semantic representations not only contain the semantic context of each word but also a summary of the video with respect to each word. A fully connected graph is then constructed with the visual features v_i and the embedded attention of the visual-semantic representations w_j^vis as nodes. Iterative Word-Conditioned Message-Passing. The iterative message-passing process introduces a second round of FBW interaction similar to that in Section 3.1.1 to infer the latent temporal correspondence between each frame v_i and visual-semantic representation w_j^vis. To realize this, we first learn a projection W_1 followed by a ReLU over w_j^vis to obtain a new word representation used to compute a new similarity matrix s_ij on every message-passing iteration; namely, we obtain a replacement for w_j via w_j' = ReLU(W_1 w_j^vis). Updates of Visual Representations. During the update process, each visual-semantic node sends its message (represented by its representation) to each visual node, weighted by their edge weights. The representations of the visual nodes at the t-th iteration are updated by summing up the incoming messages, where the attention weight a_ij is obtained by applying the word-level attention of Section 3.1.1 to the newly computed FBW similarity matrix s_ij, and W_2 is a learned projection to make v_i^t the same dimensions as the frame-specific sentence representation l_i, which is finally used to compute a sentence-segment similarity score. The final updated visual representations are used to compute the relevance of each frame to its attended sentence representations. A segment is defined as any arbitrary continuous sequence of visual features. We denote a segment as S = {v_k}_{k=1}^{K}, where K is the number of frame features contained within the segment S. We adopt the LogSumExp (LSE) pooling similarity metric used in SCAN to determine the relevance each proposal segment has to the query, where λ is a hyperparameter that weighs the relevance of the most salient parts of the video segment to the corresponding frame-specific sentence representations.
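To make the message-passing and pooling steps concrete, here is a hedged sketch of the WCVG update and the LSE pooling; the residual form of the node update, the projection dimensions and the dot-product similarity are assumptions made for illustration:

```python
import torch
import torch.nn as nn

class WCVGSketch(nn.Module):
    """Illustrative word-conditioned message passing; not the authors' exact implementation."""
    def __init__(self, d: int, num_iters: int = 3):
        super().__init__()
        self.W1 = nn.Linear(2 * d, d)   # projects concatenated [w_j ; f_j] back to d dims
        self.W2 = nn.Linear(2 * d, d)   # projects visual-semantic messages for the update
        self.num_iters = num_iters

    def forward(self, v, w, f):
        # v: (N, d) frame nodes, w: (Q, d) words, f: (Q, d) word-specific video reps
        w_vis = torch.cat([w, f], dim=1)            # (Q, 2d) visual-semantic nodes
        for _ in range(self.num_iters):
            w_proj = torch.relu(self.W1(w_vis))     # new word representation per iteration
            s = v @ w_proj.t()                      # (N, Q) fresh FBW similarity matrix
            a = torch.softmax(s, dim=1)             # attention over visual-semantic nodes
            v = v + a @ self.W2(w_vis)              # summed incoming messages (residual assumed)
        return v                                    # temporally-aware frame representations

def lse_similarity(frame_scores: torch.Tensor, lam: float = 6.0) -> torch.Tensor:
    """LogSumExp pooling over the per-frame relevance scores of a segment, as in SCAN;
    lam controls how strongly the most salient frames dominate."""
    return torch.logsumexp(lam * frame_scores, dim=0) / lam
```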
Finally, following prior work, given a triplet (X+, Y+, Y−), where (X+, Y+) is a positive pair and (X+, Y−) a negative pair, we use a margin-based ranking loss L_T to train our model, which ensures that the positive pair's similarity score is better than the negative pair's by at least a margin. Sim_LSE is used as the similarity metric between all pairs. At test time, Sim_LSE is also used to rank the candidate temporal segments generated by sliding windows, and the top scoring segments will be the localized segments corresponding to the input query sentence. We evaluate the capability of wMAN to accurately localize video moments based on natural language queries without temporal annotations on two datasets - DiDeMo and Charades-STA. On the DiDeMo dataset, we adopt the mean Intersection-Over-Union (IOU) and Recall@N at IOU threshold = θ. Recall@N represents the percentage of the test sliding window samples which have an overlap of at least θ with the ground-truth segments. mIOU is the average IOU with the ground-truth segments for the highest-ranking segment to each query input. On the Charades-STA dataset, only the Recall@N metric is used for evaluation. Charades-STA. The Charades-STA dataset is built upon the original Charades dataset, which contains video-level paragraph descriptions and temporal annotations for activities. Charades-STA is created by breaking down the paragraphs to generate sentence-level annotations and aligning the sentences with corresponding video segments. In total, it contains 12,408 and 3,720 query-moment pairs in the training and test sets respectively. For fair comparison with the weakly-supervised model TGA, we use the same non-overlapping sliding windows of sizes 128 and 256 frames to generate candidate temporal segments. These 21 segments will be used to compute the similarities with the input query, and the top scored segment will be returned as the localization result. For fair comparison, we utilize the same input features as the state-of-the-art method. Specifically, the word representations are initialized with GloVe embeddings and fine-tuned during the training process. For the experiments on DiDeMo, we use the provided mean-pooled visual frame and optical flow features. The visual frame features are extracted from the fc7 layer of VGG-16 pretrained on ImageNet. The input visual features for our experiments on Charades-STA are C3D features. We adopt an initial learning rate of 1e-5 and a margin of 0.5 in our model's triplet loss (Eq. 9). In addition, we use three iterations for the message-passing process. Our model is trained end-to-end using the ADAM optimizer. The results in Table 1 show that our full model outperforms the TGA model by a significant margin on all metrics. In particular, the Recall@1 accuracy when IOU = 0.7 obtained by our model is almost double that of TGA. It is notable that we observe a consistent trend of the Recall@1 accuracies improving the most across all IOU values. This not only demonstrates the importance of richer joint visual-semantic representations for accurate localization but also the superior capability of our model to learn them. Our model also performs comparably to the strongly-supervised MAN model on several metrics. To better understand the contributions of each component of our model, we present a comprehensive set of ablation experiments in Table 2. Note that our combined wMAN model comprises the FBW and WCVG components as well as the incorporation of PEs.
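As a sketch of the margin-based ranking objective described at the start of this section, the following shows one plausible formulation; the use of two negative directions (a negative sentence and a negative video for each positive pair) is an assumption borrowed from common practice with this loss, not a detail confirmed by the text:

```python
import torch

def triplet_ranking_loss(sim_pos, sim_neg_sent, sim_neg_vid, margin: float = 0.5):
    """Margin-based ranking loss over triplets: the positive video-sentence pair's
    Sim_LSE score must exceed each sampled negative's by at least `margin`."""
    loss_sent = torch.clamp(margin + sim_neg_sent - sim_pos, min=0.0)  # negative sentence
    loss_vid = torch.clamp(margin + sim_neg_vid - sim_pos, min=0.0)    # negative video
    return (loss_sent + loss_vid).mean()
```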
The results obtained by our FBW variant demonstrate that capturing fine-grained frame-by-word interactions is essential to inferring the latent temporal alignment between these two modalities. More importantly, the results in the second row (FBW-WCVG) show that the second stage of multimodal attention, introduced by the WCVG module, encourages the augmented learning of intermodal relationships. Finally, we also observe that incorporating positional encodings into the visual representations (FBW-WCVG + PE) is especially helpful in improving Recall@1 accuracies for all IOU values. We provide results for a model variant that includes TEFs, which encode the location of each video segment. In Table 2, our experiments show that TEFs actually hurt performance slightly, while our model variant with PEs (FBW-WCVG + PE) performs best. To gain insights into the fine-grained interactions between frames and words, we provide visualizations in Figure 3. Our model is able to determine the most salient frames with respect to each word relatively well. In both examples, we observe that the top three salient frames with respect to each word are generally distributed over the same subset of frames. This seems to be indicative of the fact that our model leverages contextual information from all video frames as well as words in determining the salience of each frame to a specific word. Table 3 reports the results on the DiDeMo dataset. In addition to reporting the state-of-the-art weakly-supervised results, we also include the results obtained by strongly-supervised methods. It can be observed that our model outperforms the TGA model by a significant margin, even tripling the Recall@1 accuracy achieved by them. This demonstrates the effect of learning richer joint visual-semantic representations on the accurate localization of video moments. In fact, our full model outperforms the strongly-supervised TGN and MCN models on the Recall@1 metric by approximately 10%. We observe a consistent trend in the ablation studies (Table 4) as with those of Charades-STA. In particular, through comparing the ablation models FBW and FBW-WCVG, we demonstrate the effectiveness of our multi-level co-attention mechanism in WCVG, where it improves the Recall@1 accuracy by a significant margin. Similar to our observations in Table 2, PEs help to encourage accurate latent alignment between the visual and language modalities, while TEFs fail in this aspect. In this work, we propose our weakly-supervised Moment Alignment Network with Word-Conditioned Visual Graph, which exploits a multi-level co-attention mechanism to infer the latent alignment between visual and language representations at the fine-grained word and frame level. Learning context-aware visual-semantic representations helps our model to reason about the temporal occurrence of an event as well as the relationships of entities described in the natural language query. Figure 3: Visualization of the final relevance weights of each word in the query with respect to each frame. Here, we display the top three weights assigned to the frames for each phrase. The colors of the three numbers indicate the correspondence to the words in the query sentence. We also show the ground truth (GT) temporal annotation as well as our predicted weakly localized temporal segments in seconds. The highly correlated frames to each query word generally fall into the GT temporal segment in both examples. In Table 5, we show comparisons of the different methods with different numbers of model parameters.
While wMAN has 18M parameters as compared to 3M parameters in TGA, the performance gains are not simply attributed to the number of model parameters. We increase the dimensions of the visual and semantic representations as well as the corresponding fully-connected layers in the TGA model, which leads to a total of 19M parameters. Despite having more parameters than wMAN, it still does significantly worse on all metrics. We also provide results obtained by a direct adaptation of the Language-Conditioned Graph Network (LCGN), which is designed to work at the image level for VQA. While LCGN leverages attention over the words in the natural language query, the computed attention is only conditioned on the entire sentence without contextual information derived from the objects' visual representations. In contrast, the co-attention mechanism in our combined wMAN model is conditioned on both semantic and contextual visual information derived from words and video frames respectively. LCGN is also a lot more complicated and requires significantly more computing resources than wMAN. Despite possessing many more parameters than wMAN, it is still not able to achieve results comparable to ours. In this section, we include an ablation on the number of message-passing rounds required to learn effective visual-semantic representations. In our experiments, we have found that three rounds work best on both Charades-STA and DiDeMo. C ABLATION ON TEST SETS. The results in Tables 8 and 9 emphasize the importance of both PEs and WCVG in our approach. We observe the best performances when PEs are integrated into the visual-semantic representations learned by the WCVG. The consistency of these performance gains across both datasets seems indicative of the increased capability of our model to learn temporally-aware visual-semantic representations.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BJx4rerFwB
Weakly-Supervised Text-Based Video Moment Retrieval
In machine learning tasks, overfitting frequently crops up when the number of samples from the target domain is insufficient, since the generalization ability of the classifier is poor in this circumstance. To solve this problem, transfer learning utilizes the knowledge of similar domains to improve the robustness of the learner. The main idea of existing transfer learning algorithms is to reduce the difference between domains by sample selection or domain adaptation. However, no matter what transfer learning algorithm we use, the difference always exists, and the hybrid training of source and target data reduces the fitting capability of the learner on the target domain. Moreover, when the relatedness between domains is too low, negative transfer is more likely to occur. To tackle this problem, we propose a two-phase transfer learning architecture based on ensemble learning, which uses the existing transfer learning algorithms to train the weak learners in the first stage, and uses the predictions of the target data to train the final learner in the second stage. Under this architecture, the fitting capability and generalization capability can be guaranteed at the same time. We evaluated the proposed method on public datasets, which demonstrates its effectiveness and robustness. Transfer learning has attracted more and more attention since it was first proposed in 1995 BID11 and is becoming an important field of machine learning. The main purpose of transfer learning is to solve the problem that identically distributed data is hard to get in practical applications by using differently distributed data from similar domains. Several different kinds of transfer strategies have been proposed in recent years, and transfer learning can be divided into four categories BID17, including instance-based transfer learning, feature-based transfer learning, parameter-based transfer learning and relation-based transfer learning. In this paper, we focus on how to enhance the performance of instance-based transfer learning and feature-based transfer learning when limited labeled data from the target domain can be obtained. In transfer learning tasks, when diff-distribution data is obtained to improve the generalization ability of learners, the fitting ability on the target data set will be affected more or less; especially when the domains are not related enough, negative transfer might occur BID11, and it is hard to trade off between generalization and fitting. Most of the existing methods to prevent negative transfer are based on similarity measures (e.g., maximum mean discrepancy (MMD), KL divergence), which are used for choosing useful knowledge from source domains. However, similarity and transferability are not equivalent concepts. To solve these problems, we propose a novel transfer learning architecture to improve the fitting capability of the final learner on the target domain, while the generalization capability is provided by the weak learners. As shown in FIG0, to decrease the learning error on the target training set when limited labeled data on the target domain can be obtained, ensemble learning is introduced, and the performances of transfer learning algorithms are significantly improved as a result. In the first stage, traditional transfer learning algorithms are applied to diversify the training data (e.g., adaptive weight adjustment of boosting-based transfer learning, or different parameter settings of domain adaptation). Then the diversified training data is fed to several weak classifiers to improve the generalization ability on target data.
To guarantee the fitting capability on target data, the predictions of the target data are vectorized and fed to the final estimator. This architecture brings the following advantages: • When the similarity between domains is low, the final estimator can still achieve good performance on the target training set. Firstly, source data and target data are mixed together to train the weak learners; then a super learner is used to fit the predictions of the target data. • Parameter setting is simplified, and performance is better than that of individual estimators under normal conditions. To test the effectiveness of the method, we respectively modified TrAdaboost BID1 and BDA BID16 as the base algorithms for data diversification, and the desired results were achieved. 1.1 RELATED WORK. TrAdaboost, proposed by BID1, is a typical instance-based transfer learning algorithm, which transfers knowledge by reweighting samples of the target domain and source domain according to the training error. In this method, source samples are used directly for hybrid training. As the earliest boosting-based transfer learning method, TrAdaboost has many inherent defects (e.g., high requirements for similarity between domains; negative transfer can easily happen). Moreover, TrAdaboost is extended from Adaboost and uses the WMA (Weighted Majority Algorithm) BID6 to update the weights, so the weight of a source instance that is not correctly classified on a consistent basis would converge to zero by iteration ⌈N/2⌉, and the instance would not be used in the final classifier's output, since that classifier only uses boosting iterations ⌈N/2⌉ → N. Two weaknesses caused by disregarding the first half of the ensembles, analysed in BID0, are listed below: • As the source weights converge rapidly, after ⌈N/2⌉ iterations the source weights will be too low to make full use of source knowledge. • Later classifiers merely focus on the harder instances. To deal with the rapid convergence of TrAdaboost, BID2 proposed TransferBoost, which applies a two-phase training process at each iteration to test whether negative transfer has occurred and adjusts the weights according to the result. BID0 introduces an adaptive factor in the weight update to slow down the convergence. BID19 proposed multi-source TrAdaboost, aiming to utilize instances from multiple source domains to improve the transfer performance. In this paper, we still use the WMA to achieve data diversification in the experiment of instance-based transfer learning, but stacking rather than boosting is used for the final predictions. Feature-based transfer learning mainly realizes transfer learning by reducing the distribution difference (e.g., MMD) between the source domain and the target domain through feature mapping, which is the most studied method for transferring knowledge in recent years. BID12 proposed transfer component analysis (TCA) as early as 2011; TCA achieves knowledge transfer by mapping the features of the source domain and target domain to a new feature space where the MMD between domains can be minimized. However, not using labels brings the defect that only the marginal distribution can be matched. To address the problem, BID8 proposed joint distribution adaptation (JDA), which fits the marginal distribution and conditional distribution at the same time; for unlabeled target data, it utilizes pseudo-labels provided by a classifier trained on source data. After that, BID16 extended JDA for imbalanced data.
In neural networks, it is easy to transfer knowledge by pre-training and fine-tuning because the features extracted by lower layers are mostly common for different tasks BID20; to transfer knowledge in the higher layers, which extract task-specific features, BID15, BID9 and BID10 add the MMD to the optimization target in the higher layers. The learning performance in the target domain using the source data could be poorer than that without using the source data; this phenomenon is called negative transfer BID11. To avoid negative transfer, BID7 point out that one of the most important research issues in transfer learning is to determine whether a given source domain is effective in transferring knowledge to a target domain, and then to determine how much of the knowledge should be transferred from a source domain to a target domain. Researchers have proposed to evaluate the relatedness between the source and target domains. When limited labeled target data can be obtained, two of the methods are listed below: • Introduce predefined parameters to quantify the relevance between the source and target domains. However, it is very labor-intensive and time-consuming to manually select their proper values. • Examine the relatedness between domains directly to guide transfer learning. The notion of positive transferability was first introduced in BID13 for the assessment of synergy between the source and target domains in their prediction models, and a criterion to measure the positive transferability between sample pairs of different domains in terms of their prediction distributions is proposed in that research. BID14 proposed a kernel method to evaluate the task relatedness and the instance similarities to avoid negative transfer. BID4 proposed a method to detect the occurrence of negative transfer, which can also delay the point of negative transfer in the process of transfer learning. BID3 remind us that most previous work treats knowledge from every source domain as a valuable contribution to the task on the target domain, which could increase the risk of negative transfer. A two-phase multiple-source transfer framework is proposed, which can effectively downgrade the contributions of irrelevant source domains and properly evaluate the importance of source domains even when the class distributions are imbalanced. Stacking is one of the ensemble learning methods that fuses multiple weak classifiers to get a better performance than any single one BID5. When using stacking, the diversification of the weak learners has an important impact on the performance of the ensemble BID18. Here are some common ways to achieve diversification: • Diversifying input data: using different subsets of samples or features. • Diversifying outputs: classifiers are only for certain categories. • Diversifying models: using different classification models. In this paper, we can also regard the proposed architecture as a stacking model which uses transfer learning algorithms to achieve input diversification. In this section, we introduce how instance-based transfer learning is applied to the proposed architecture. We use TrAdaboost as an example and make a simple modification to turn it into stacking. In TrAdaboost, we need a large amount of labeled data on the source domain and limited labeled data on the target domain. We use X = X_S ∪ X_T to represent the feature space, with source space X_S and target space X_T defined accordingly. Then the hybrid training data set is defined in Equation 1.
The hybrid training set combines the n labeled source samples and the m labeled target samples. The weight vector is initialized as $w^1 = (w^1_1, \dots, w^1_{n+m})$. In the t-th iteration, the weights are updated by Equation 3: $w^{t+1}_i = w^t_i \, \beta^{|h_t(x_i) - c(x_i)|}$ for source samples and $w^{t+1}_i = w^t_i \, \beta_t^{-|h_t(x_i) - c(x_i)|}$ for target samples, where $h_t$ is the hypothesis of the t-th weak learner and $c(x_i)$ the true label. Here, $\beta_t = \epsilon_t / (1 - \epsilon_t)$ and $\beta = 1/(1 + \sqrt{2 \ln n / N})$. It is noteworthy that the original TrAdaboost is for binary classification. In order to facilitate experimental analysis and comparison, we extend the traditional TrAdaboost to a multi-classification algorithm according to Multi-class Adaboost proposed in BID21; $\beta_t$ and $\beta$ are then redefined accordingly (Equation 4), where K is the class number. Equation 5 defines the final output $P_k(x)$ for each class. Moreover, for the single-label problem, we use softmax to transfer $P_k(x)$ to probabilities. To address the rapid convergence problem, BID2 proposed TransferBoost, which utilizes all the weak classifiers for the ensemble, but in the experiments, early stopping can improve the final performance, so which classifiers should be chosen is still a problem. BID0 proposed dynamic TrAdaboost, in which an adaptive factor is introduced to limit the convergence rate of the source sample weights. However, it is not always effective in practical use. The theoretical upper bound of the training error on the target domain is not changed in dynamic TrAdaboost; it is still related to the training errors on the target domain (Equation 6). Algorithm 1: stacked generalization for instance-based transfer learning. For t = 1, ..., N: (1) call the learner, providing it the labeled target data set with the distribution w^t, and get back a hypothesis for S; (2) calculate the weighted error on S; (3) get a subset S_t of the source task whose error is below the threshold λ; (4) call the learner, providing it S_t ∪ T, and get back a hypothesis for T; (5) calculate the error on T using Equation 2; (6) update β_t using Equation 4 and update the new weight vector using Equation 3. After the loop, construct probability vectors by concatenating the weak learners' predictions on the target data and train the super learner on them. Although dynamic TrAdaboost can improve the weights of source samples after iteration ⌈N/2⌉ and the generalization capability is improved, it is very likely that the error rate on the source domain ϵ_t increases, and sometimes it even aggravates the occurrence of negative transfer when the domains are not similar enough. We use stacking to address the problems above in this section; in the data diversification stage, TrAdaboost is used to reweight samples for each weak classifier. Meanwhile, because we make use of all the weak classifiers, to avoid the high source weights of irrelevant source samples negatively affecting the task in early iterations, a two-phase sampling mechanism is introduced in our algorithm. A formal description of stacking for instance-based transfer learning is given in Algorithm 1. The main differences between stacking for instance-based transfer learning and TrAdaboost are listed as follows: • A two-phase sampling process and an extra parameter λ are introduced. Firstly, the target data is fed to the weak learner, and the weighted error rate of the source samples is used to decide which samples can be used for hybrid learning by comparing it with the threshold λ. As the source weights reduce with the number of iterations increasing, more and more source samples will be utilized. • Stacking rather than TrAdaboost is used to get the final output. We construct a feature matrix from the outputs of the weak learners on the target data, then use a super learner (e.g., logistic regression in our experiment) to fit the labels. In this way, the training error on the target set can be minimized.
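A compact sketch of this two-phase procedure is given below; the choice of decision trees as weak learners, the per-sample screening rule and the exact update constants are simplifying assumptions, not the authors' implementation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

def stacked_instance_transfer(Xs, ys, Xt, yt, n_rounds=20, lam=0.3):
    """Phase 1: train weak learners on reweighted source+target data, TrAdaboost-style.
    Phase 2: fit a super learner on the weak learners' predicted probabilities for the
    target data. Inputs are assumed to be numpy arrays."""
    n, m = len(Xs), len(Xt)
    w = np.ones(n + m) / (n + m)                        # joint weight vector over source+target
    beta = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n) / n_rounds))
    learners, meta_features = [], []
    for _ in range(n_rounds):
        # two-phase sampling: screen source samples with a target-only learner
        screen = DecisionTreeClassifier(max_depth=3).fit(Xt, yt)
        err_src = (screen.predict(Xs) != ys).astype(float)
        keep = err_src <= lam                           # simplistic per-sample screen (assumption)
        X = np.vstack([Xs[keep], Xt])
        y = np.concatenate([ys[keep], yt])
        sw = np.concatenate([w[:n][keep], w[n:]])
        h = DecisionTreeClassifier(max_depth=3).fit(X, y, sample_weight=sw)
        # weighted error on target, then WMA-style weight updates
        miss_t = (h.predict(Xt) != yt).astype(float)
        eps = (w[n:] * miss_t).sum() / w[n:].sum()
        beta_t = max(eps, 1e-8) / max(1.0 - eps, 1e-8)
        w[:n] *= beta ** (h.predict(Xs) != ys)          # shrink misclassified source weights
        w[n:] *= beta_t ** -miss_t                      # grow misclassified target weights
        w /= w.sum()
        learners.append(h)
        meta_features.append(h.predict_proba(Xt))       # probability vector per target sample
    # Phase 2: super learner fitted on the concatenated probability vectors
    Z = np.hstack(meta_features)
    super_learner = LogisticRegression(max_iter=1000).fit(Z, yt)
    return learners, super_learner
```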
When compared with TrAdaboost, stacking is insensitive to the performance of each weak classifier because the training error on target data can be minimized in stacking, which means it is more robust in most cases and brings some benefits: • When using stacking, all of the weak classifiers can be used. • When the source domain is not related enough, stacking performs better. One popular method for feature-based transfer learning to achieve knowledge transfer is domain adaptation, which minimizes the distribution difference between domains by mapping features to a new space, where we can measure the distribution difference by the MMD. Generally speaking, we use P(X_S), P(X_T) and P(Y_S|X_S), P(Y_T|X_T) to represent the marginal distributions and conditional distributions of the source domain and target domain respectively. In Pan et al., transfer component analysis (TCA) was proposed to find a mapping which makes P(ϕ(X_S)) ≈ P(ϕ(X_T)); the MMD between domains in TCA is defined as $MMD(X_S, X_T) = \big\| \frac{1}{n}\sum_{i=1}^{n} \phi(x_i^{S}) - \frac{1}{m}\sum_{j=1}^{m} \phi(x_j^{T}) \big\|^2_{\mathcal{H}}$. Long et al. proposed joint distribution adaptation (JDA) to minimize the differences of the marginal distribution and conditional distribution at the same time; the MMD of the conditional distributions is defined analogously per class c over the samples belonging to that class. In BID16, balanced distribution adaptation (BDA) was proposed, in which an extra parameter µ is introduced to adjust the importance of the two distributions; the optimization target is the weighted sum $(1-\mu)\,MMD(P(X_S), P(X_T)) + \mu \sum_c MMD(P(X_S|Y_S = c), P(X_T|Y_T = c))$. To solve the nonlinear problem, we can use a kernel matrix K, and the optimization problem can be formalized as $\min_A \; \mathrm{tr}\big(A^{\top} K ((1-\mu) M_0 + \mu \textstyle\sum_c M_c) K^{\top} A\big) + \lambda \|A\|_F^2 \;\; \text{s.t.} \;\; A^{\top} K H K^{\top} A = I$, where H is a centering matrix, M_0 and M_c represent the MMD matrices of the marginal distribution and conditional distribution respectively, and A is the mapping matrix. In domain adaptation, performance is sensitive to the selection of parameters (e.g., kernel type, kernel parameter or regularization parameter). For instance, if we use the rbf kernel, $\kappa(x_i, x_j) = \exp(-\|x_i - x_j\|^2 / (2\sigma^2))$ (Equation 11), to construct the kernel matrix, the selection of the kernel parameter σ has an influence on the mapping. In this paper, we use BDA as a base algorithm in the proposed architecture to achieve data diversification by using different kernel types and parameter settings. By taking advantage of stacking, we can get a better transfer performance than any single algorithm. Here, we introduce how we choose the kernel parameter in our experiments. In ensemble learning, it is important to use unrelated weak classifiers for a better performance (i.e., learners should have different kinds of classification capabilities); moreover, the performances of the learners should not be too different, or the poor learners will have a negative effect on the ensemble. In other words, we choose the kernel parameter in the largest range where the performance is acceptable. We take the following steps to select parameters. Firstly, search for the best value of the kernel parameter σ for the weak classifier, where the accuracy on the validation set is Accuracy_max. Secondly, set a threshold parameter λ and find an interval (σ_min, σ_max) around σ where the accuracy on the validation set satisfies Accuracy_max − λ ≤ Accuracy for σ ∈ (σ_min, σ_max). Finally, select N parameters in (σ_min, σ_max) by uniformly-spaced sampling. When multiple types of kernels are utilized, we choose parameter sets for each separately by repeating the above steps.
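This parameter-selection procedure can be sketched as follows, assuming a `val_accuracy` callback (hypothetical) that trains a weak learner with BDA under a given kernel parameter and returns its validation accuracy:

```python
import numpy as np

def select_kernel_params(candidate_sigmas, val_accuracy, lam=0.05, n_select=10):
    """Pick N kernel parameters as in Section 3.2: find the best sigma, take the widest
    surrounding interval whose validation accuracy stays within `lam` of the best, then
    sample uniformly inside it. `candidate_sigmas` is assumed sorted ascending."""
    accs = np.array([val_accuracy(s) for s in candidate_sigmas])
    best = int(accs.argmax())
    acc_max = accs[best]
    lo = hi = best
    while lo > 0 and accs[lo - 1] >= acc_max - lam:              # grow the interval left
        lo -= 1
    while hi < len(accs) - 1 and accs[hi + 1] >= acc_max - lam:  # grow the interval right
        hi += 1
    return np.linspace(candidate_sigmas[lo], candidate_sigmas[hi], n_select)
```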
In our method, the settings of λ and N should be taken into consideration: if λ is too large, the performance of each learner cannot be guaranteed; if λ is too small, the training data cannot be diversified enough. Setting N to a large number helps to get a better performance in most cases, while the complexity could be high. Algorithm 2: stacked generalization for feature-based transfer learning. For t = 1, ..., N: (1) construct the kernel matrix K_t using κ_t; (2) solve the eigendecomposition problem and use the d smallest eigenvectors to build A_t; (3) train the t-th learner on the mapped source and target features; (4) call learner t on the mapped target features and construct probability vectors by concatenating the predicted probabilities of the K classes, where K is the number of classes. After the loop, construct the target feature matrix from these probability vectors and train the super learner. In our algorithm, the kernel functions κ_t can be differentiated by kernel types or kernel parameters. In the t-th iteration, we choose κ_t to map the feature space and then get the matrix A_t by BDA; $A_t^{\top} K_{tar}$ and $A_t^{\top} K_{src}$ are fed to the weak learner. After that, we concatenate the predictions on $A_t^{\top} K_{tar}$ as the features of the super learner. In this paper, we assume that there is limited labeled data on the target set, so we use a modified BDA, which uses real labels rather than pseudo-labels, to adapt the conditional distribution. To evaluate the effectiveness of our method, 6 public datasets and 11 domains, as shown in TAB3, are used in our experiments on instance-based transfer learning and feature-based transfer learning. Figure 3: Accuracy of instance-based transfer on MNIST vs. USPS, varying with the number of iterations and the ratio of #target samples. Figure 3 shows the experiment of transfer between MNIST and USPS under different iterations and ratios of #source and #target. We observe that stacking achieves much better performance than TrAdaboost. Firstly, the accuracy of the stacking method is higher when the ratio changes from 1% to 10%; especially, the fewer the labeled target samples are, the more improvement the stacking method achieves. Secondly, few iterations are required for the stacking method to achieve a relatively good performance, and when the curve is close to convergence, there is still about 5% improvement compared with TrAdaboost. Moreover, in both transfer tasks, USPS → MNIST and MNIST → USPS, the stacking method performs significantly better than TrAdaboost. The reason why stacking performs better is analyzed in Section 3.1: we assume that the introduction of source data leads to underfitting on the target data during hybrid training. To confirm our hypothesis, we made a comparison of the training error on source data and target data between TrAdaboost and the stacking method; FIG2 shows the results of four of the transfer tasks. TAB4 shows the results of all the transfer tasks under 20 iterations. BDA is chosen as the base algorithm to achieve data diversification in our experiment; we mainly test the influence that different kernel functions have on the performance and the effectiveness of the method proposed in Section 3.2. The ratio of #source and #target is set to 5%, and we use the rbf kernel, poly kernel and sam kernel to conduct our experiment; for the sake of simplicity, the kernel functions are defined in TAB5, where γ is a variable for different weak learners. To observe how the selection of kernels affects the feature distribution, we visualize the feature representations learned by BDA under different kernel parameters and kernel types when adapting domains A and C.
As shown in FIG5(a), the data distribution and the similarity between domains change with the kernel parameters; when compared with FIG5(b), which presents the feature distribution of the rbf kernel, it is obvious that using different kernel types can provide more diversity. In this paper, to construct the kernel set by sampling parameters in a range where the performance is not too much worse than the best one, we followed the method given in Section 3.2 and set the threshold λ to vary from 5% to 10% for different tasks. For each kernel type, we select 10 different parameters (i.e., 10 weak classifiers) for stacking. Table 4 shows the comparison between the single algorithms and ensemble learning. For each kernel type, we give the best accuracy, the average accuracy of the weak learners and the accuracy of ensemble learning, with random forest and logistic regression as the weak learner and super learner respectively. The accuracy of integrating all the kernel types is shown in the last column, and the best performance of each task is in bold. We can learn from the table that ensemble learning outperforms the best single learner in all the tasks, and in most cases, using both rbf and another type of kernel is able to improve the performance. However, when we should use multiple kernel types in stacking needs to be further studied. In summary, the reason why the proposed method can improve the performance of feature-based transfer learning is as follows. Firstly, we use a super learner to fit the target domain, so the bias of the weak learners introduced by hybrid training with source data is reduced. Secondly, multiple kernels are utilized to achieve data diversification, so we can integrate the classification abilities of weak learners trained on diff-distribution data. In this paper, we proposed a two-phase transfer learning architecture, which uses a traditional transfer learning algorithm to achieve data diversification in the first stage and fits the target data in the second stage by the stacking method, so the generalization ability and fitting ability on target data can be satisfied at the same time. The experiments on instance-based transfer learning and feature-based transfer learning over 11 domains prove the validity of our method. In summary, this framework has the following advantages: • No matter whether the source domain and target domain are similar, the training error on the target data set can be minimized theoretically. • We reduce the risk of negative transfer in a simple and effective way without a similarity measure. • The introduction of ensemble learning gives a better performance than any single learner. • Most existing transfer learning algorithms can be integrated into this framework. Moreover, there are still some problems that require further study: other data diversification methods for transfer learning might be useful in our model, such as changing the parameter µ in BDA, integrating multiple kinds of transfer learning algorithms, or even applying this framework to multi-source transfer learning.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
ryxOIsA5FQ
How to use stacked generalization to improve the performance of existing transfer learning algorithms when limited labeled data is available.
Deep learning has achieved astonishing results on many tasks with large amounts of data and generalization within the proximity of training data. For many important real-world applications, these requirements are unfeasible and additional prior knowledge on the task domain is required to overcome the resulting problems. In particular, learning physics models for model-based control requires robust extrapolation from fewer samples - often collected online in real-time - and model errors may lead to drastic damages of the system. Directly incorporating physical insight has enabled us to obtain a novel deep model learning approach that extrapolates well while requiring fewer samples. As a first example, we propose Deep Lagrangian Networks (DeLaN) as a deep network structure upon which Lagrangian mechanics have been imposed. DeLaN can learn the equations of motion of a mechanical system (i.e., system dynamics) with a deep network efficiently while ensuring physical plausibility. The resulting DeLaN network performs very well at robot tracking control. The proposed method did not only outperform previous model learning approaches at learning speed but exhibits substantially improved and more robust extrapolation to novel trajectories and learns online in real-time. In the last five years, deep learning has propelled most areas of learning forward at an impressive pace BID22 BID31 BID41 - with the exception of physically embodied systems. This lag in comparison to other application areas is somewhat surprising as learning physics models is critical for applications that control embodied systems, reason about prior actions or plan future actions (e.g., service robotics, industrial automation). Instead, most engineers prefer classical off-the-shelf modeling as it ensures physical plausibility - at a high cost of precise measurements and engineering effort. These plausible representations are preferred as these models guarantee to extrapolate to new samples, while learned models only achieve good performance in the vicinity of the training data. To learn a model that obtains physically plausible representations, we propose to use the insights from physics as a model prior for deep learning.
In particular, the combination of deep learning and physics seems natural as the compositional structure of deep networks enables the efficient computation of the derivatives at machine precision BID35 and, thus, can encode a differential equation describing physical processes. Therefore, we suggest to encode the physics prior in the form of a differential equation in the network topology. This adapted topology amplifies the information content of the training samples, regularizes the end-to-end training, and emphasizes robust models capable of extrapolating to new samples while simultaneously ensuring physical plausibility. Hereby, we concentrate on learning models of mechanical systems using the Euler-Lagrange equation, a second order ordinary differential equation (ODE) originating from Lagrangian mechanics, as the physics prior. We focus on learning models of mechanical systems as this problem is one of the fundamental challenges of robotics BID7 BID39. The contribution of this work is twofold. First, we derive a network topology called Deep Lagrangian Networks (DeLaN) encoding the Euler-Lagrange equation originating from Lagrangian mechanics. This topology can be trained using standard end-to-end optimization techniques while maintaining physical plausibility. Therefore, the obtained model must comply with physics. Unlike previous approaches to learning physics BID1 BID25, which engineered fixed features from physical assumptions requiring knowledge of the specific physical embodiment, we are 'only' enforcing physics upon a generic deep network. For DeLaN only the system state and the control signal are specific to the physical system but neither the proposed network structure nor the training procedure. Second, we extensively evaluate the proposed approach by using the model to control a simulated 2 degrees of freedom (dof) robot and the physical 7-dof robot Barrett WAM in real time. We demonstrate DeLaN's control performance where DeLaN learns the dynamics model online starting from random initialization. In comparison to analytic and other learned models, DeLaN yields a better control performance while at the same time extrapolating to new desired trajectories. In the following we provide an overview of related work (Section 2) and briefly summarize Lagrangian mechanics (Section 3). Subsequently, we derive our proposed approach DeLaN and show the necessary characteristics for end-to-end training (Section 4). Finally, the experiments in Section 5 evaluate the model learning performance for both simulated and physical robots. Here, DeLaN outperforms existing approaches. Models describing the system dynamics, i.e. the coupling of control input τ and system state q, are essential for model-based control approaches BID17. Depending on the control approach, the control law relies either on the forward model f, mapping from control input to the change of system state, i.e., $\ddot{q} = f(q, \dot{q}, \tau)$, or on the inverse model $f^{-1}$, mapping from system change to control input, i.e., $\tau = f^{-1}(q, \dot{q}, \ddot{q})$. Examples for the application of these models are inverse dynamics control BID7, which uses the inverse model to compensate system dynamics, while model-predictive control BID4 and optimal control BID46 use the forward model to plan the control input. These models can be either derived from physics or learned from data. The physics models must be derived for the individual system embodiment and require precise knowledge of the physical properties BID0.
When learning the model, mostly standard machine learning techniques are applied to fit either the forward or inverse model to the training data (further information can be found in the model learning survey by BID33). E.g., authors used Linear Regression BID39 BID14, Gaussian Mixture Regression BID3 BID20, Gaussian Process Regression BID21 BID34 BID32, Support Vector Regression BID5 BID10, feedforward BID18 BID26 BID25 BID38 or recurrent neural networks BID37 to fit the model to the observed measurements. Only few approaches incorporate prior knowledge into the learning problem. BID38 use the graph representation of the kinematic structure as input. The work of BID1, commonly referenced as the standard system identification technique for robot manipulators BID40, uses the Newton-Euler formalism to derive physics features using the kinematic structure and the joint measurements such that the learning of the dynamics model simplifies to linear regression. Similarly, BID25 hard-code these physics features within a neural network and learn the dynamics parameters using gradient descent rather than linear regression. Even though these physics features are derived from physics, the learned parameters for mass, center of gravity and inertia must not necessarily comply with physics, as the learned parameters may violate the positive definiteness of the inertia matrix or the parallel axis theorem BID44. Furthermore, the linear regression is commonly underdetermined, only allows to infer linear combinations of the dynamics parameters, and cannot be applied to closed-loop kinematics BID40. DeLaN follows the line of structured learning problems but in contrast to previous approaches guarantees physical plausibility and provides a more general formulation. This general formulation enables DeLaN to learn the dynamics for any kinematic structure, including kinematic trees and closed-loop kinematics, and in addition does not require any knowledge about the kinematic structure. Therefore, DeLaN is identical for all mechanical systems, which is in strong contrast to the Newton-Euler approaches, where the features are specific to the kinematic structure. Only the system state and input are specific to the system but neither the network topology nor the optimization procedure. The combination of differential equations and neural networks has previously been investigated in the literature. Early on, BID23 and BID24 proposed to learn the solution of partial differential equations (PDE) using neural networks, and currently this topic is being rediscovered by BID35, BID42 and BID28. Most research focuses on using machine learning to overcome the limitations of PDE solvers. E.g., BID42 proposed the Deep Galerkin method to solve a high-dimensional PDE from scattered data. Only the work of BID36 took the opposite standpoint of using the knowledge of the specific differential equation to structure the learning problem and achieve lower sample complexity. In this paper, we follow the same motivation as BID36 but take a different approach. Rather than explicitly solving the differential equation, DeLaN only uses the structure of the differential equation to guide the learning problem of inferring the equations of motion. Thereby the differential equation is only implicitly solved. In addition, the proposed approach uses a different encoding of the partial derivatives, which achieves the efficient computation within a single feed-forward pass, enabling the application within control loops.
Describing the equations of motion for mechanical systems has been extensively studied and various formalisms to derive these equations exist. The most prominent are Newtonian, Hamiltonian and Lagrangian mechanics. Within this work Lagrangian mechanics is used, more specifically the Euler-Lagrange formulation with non-conservative forces and generalized coordinates (more information can be found in the textbooks BID13 BID7 BID9). Generalized coordinates are coordinates that uniquely define the system configuration. This formalism defines the Lagrangian L as a function of generalized coordinates q describing the complete dynamics of a given system. The Lagrangian is not unique and every L which yields the correct equations of motion is valid. The Lagrangian is generally chosen to be $L = T - V$, where T is the kinetic energy and V is the potential energy. The kinetic energy T can be computed for all choices of generalized coordinates using $T = \frac{1}{2}\dot{q}^{\top} H(q) \dot{q}$, whereas H(q) is the symmetric and positive definite inertia matrix BID7. The positive definiteness ensures that all non-zero velocities lead to positive kinetic energy. Applying the calculus of variations yields the Euler-Lagrange equation with non-conservative forces described by $\frac{d}{dt} \frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = \tau$, where τ are the generalized forces. Substituting L and dV/dq = g(q) into Equation 3 yields the second order ordinary differential equation (ODE) described by $H(q)\ddot{q} + c(q, \dot{q}) + g(q) = \tau$, where c describes the forces generated by the Centripetal and Coriolis forces BID9. Using this ODE any multi-particle mechanical system with holonomic constraints can be described. For example, various authors used this ODE to manually derive the equations of motion for coupled pendulums BID13, robotic manipulators with flexible joints BID2 BID43, parallel robots BID30 BID11 BID27 or legged robots BID15 BID12. Starting from the Euler-Lagrange equation (Equation 4), traditional engineering approaches would estimate H(q) and g(q) from the approximated or measured masses, lengths and moments of inertia. On the contrary, most traditional model learning approaches would ignore the structure and learn the inverse dynamics model directly from data. DeLaN bridges this gap by incorporating the structure introduced by the ODE into the learning problem and learning the parameters in an end-to-end fashion. More concretely, DeLaN approximates the inverse model by representing the unknown functions g(q) and H(q) as feed-forward networks. Rather than representing H(q) directly, the lower-triangular matrix L(q) is represented as a deep network. Therefore, g(q) and H(q) are described by $\hat{g} = \hat{g}(q; \psi)$ and $\hat{H}(q; \theta) = \hat{L}(q; \theta)\hat{L}(q; \theta)^{\top}$, where $\hat{\cdot}$ refers to an approximation and θ and ψ are the respective network parameters. The parameters θ and ψ can be obtained by minimizing the violation of the physical law described by Lagrangian mechanics. Therefore, the optimization problem is described by $\theta^{*}, \psi^{*} = \arg\min_{\theta, \psi} \ell\big(\hat{f}^{-1}(q, \dot{q}, \ddot{q}; \theta, \psi), \tau\big)$ with $\hat{f}^{-1} = \hat{L}\hat{L}^{\top}\ddot{q} + \frac{d}{dt}\big(\hat{L}\hat{L}^{\top}\big)\dot{q} - \frac{1}{2}\frac{\partial}{\partial q}\big(\dot{q}^{\top}\hat{L}\hat{L}^{\top}\dot{q}\big) + \hat{g}(q)$, where $\hat{f}^{-1}$ is the inverse model and ℓ can be any differentiable loss function. The computational graph of $\hat{f}^{-1}$ is shown in FIG0. Using this formulation one can conclude further properties of the learned model. Neither $\hat{L}$ nor $\hat{g}$ are functions of $\dot{q}$ or $\ddot{q}$ and, hence, the obtained parameters should, within limits, generalize to arbitrary velocities and accelerations. In addition, the obtained model can be reformulated and used as a forward model. Solving Equation 6 for $\ddot{q}$ yields the forward model described by $\ddot{q} = \big(\hat{L}\hat{L}^{\top}\big)^{-1}\big(\tau - \frac{d}{dt}\big(\hat{L}\hat{L}^{\top}\big)\dot{q} + \frac{1}{2}\frac{\partial}{\partial q}\big(\dot{q}^{\top}\hat{L}\hat{L}^{\top}\dot{q}\big) - \hat{g}(q)\big)$, where $\hat{L}\hat{L}^{\top}$ is guaranteed to be invertible due to the positive definite constraint.
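A schematic sketch of this structure is shown below. It builds Ĥ = L̂L̂ᵀ from a network predicting the lower-triangular entries and evaluates the inverse model; the softplus on the diagonal for positive definiteness, the network sizes, and the use of autograd for the derivatives (instead of the analytic computation derived in the next section) are simplifying assumptions for illustration:

```python
import torch
import torch.nn as nn

class DeLaNSketch(nn.Module):
    """Schematic DeLaN inverse model; not the paper's exact implementation."""
    def __init__(self, n_dof: int, hidden: int = 64):
        super().__init__()
        self.n = n_dof
        n_tril = n_dof * (n_dof + 1) // 2
        self.l_net = nn.Sequential(nn.Linear(n_dof, hidden), nn.ReLU(),
                                   nn.Linear(hidden, n_tril))   # lower-triangular entries of L
        self.g_net = nn.Sequential(nn.Linear(n_dof, hidden), nn.ReLU(),
                                   nn.Linear(hidden, n_dof))    # gravitational torques g(q)

    def mass_matrix(self, q):
        l = self.l_net(q)
        L = torch.zeros(self.n, self.n)
        idx = torch.tril_indices(self.n, self.n)
        L[idx[0], idx[1]] = l
        # enforce a positive diagonal (assumption) so H = L L^T is positive definite
        L = L + torch.diag(nn.functional.softplus(L.diagonal()) - L.diagonal())
        return L @ L.t()

    def inverse_model(self, q, qd, qdd):
        """tau = H qdd + dH/dt qd - 1/2 d(qd^T H qd)/dq + g(q)."""
        q = q.requires_grad_(True)
        H = self.mass_matrix(q)
        # dH/dt via the chain rule: sum_k (dH/dq_k) qd_k, computed here with autograd
        dH_dq = torch.autograd.functional.jacobian(self.mass_matrix, q,
                                                   create_graph=True)   # (n, n, n)
        dH_dt = torch.einsum('ijk,k->ij', dH_dq, qd)
        quad = qd @ H @ qd                                              # qd^T H qd (scalar)
        dquad_dq = torch.autograd.grad(quad, q, create_graph=True)[0]
        return H @ qdd + dH_dt @ qd - 0.5 * dquad_dq + self.g_net(q)
```

Training then amounts to minimizing a differentiable loss, e.g. the MSE between the predicted and the measured torques, with standard gradient-based optimizers.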
However, solving the optimization problem of Equation 5 directly is not possible due to the ill-posedness of the Lagrangian L not being unique. The Euler-Lagrange equation is invariant to linear transformation and, hence, the Lagrangian L' = αL + β solves the Euler-Lagrange equation if α is non-zero and L is a valid Lagrangian. This problem can be mitigated by adding an additional penalty term to Equation 5, where Ω is the L2-norm of the network weights. Solving the optimization problem of Equation 9 with a gradient-based end-to-end learning approach is non-trivial due to the positive definite constraint and the derivatives contained in $\hat{f}^{-1}$. In particular, $d(\hat{L}\hat{L}^{\top})/dt$ and $\partial(\dot{q}^{\top}\hat{L}\hat{L}^{\top}\dot{q})/\partial q_i$ cannot be computed using automatic differentiation, as t is not an input of the network and most implementations of automatic differentiation do not allow the backpropagation of the gradient through the computed derivatives. Therefore, the derivatives contained in $\hat{f}^{-1}$ must be computed analytically to exploit the full gradient information for training of the parameters. In the following we introduce a network structure that fulfills the positive-definite constraint for all parameters (Section 4.1), prove that the derivatives $d(\hat{L}\hat{L}^{\top})/dt$ and $\partial(\dot{q}^{\top}\hat{L}\hat{L}^{\top}\dot{q})/\partial q_i$ can be computed analytically (Section 4.2), and show an efficient implementation for computing the derivatives using a single feed-forward pass (Section 4.3). Using these three properties the resulting network architecture can be used within a real-time control loop and trained using standard end-to-end optimization techniques. The derivatives $d(\hat{L}\hat{L}^{\top})/dt$ and $\partial(\dot{q}^{\top}\hat{L}\hat{L}^{\top}\dot{q})/\partial q_i$ are required for computing the control signal τ using the inverse model and, hence, must be available within the forward pass. In addition, the second order derivatives, used within the backpropagation of the gradients, must exist to train the network using end-to-end training. To enable the computation of the second order derivatives using automatic differentiation, the forward computation must be performed analytically. Both derivatives have closed form solutions and can be derived by first computing the respective derivative of L and second substituting the reshaped derivative of the vectorized form l. For the temporal derivative this yields $\frac{d}{dt}(LL^{\top}) = \frac{dL}{dt} L^{\top} + L \frac{dL}{dt}^{\top}$, whereas dL/dt can be substituted with the reshaped form of dl/dt. Each network layer i consists of an affine transformation and the non-linearity g, i.e., $h_i = g(W_i h_{i-1} + b_i)$; the temporal derivative simplifies as the network weights W_i and biases b_i are time-invariant, i.e., dW_i/dt = 0 and db_i/dt = 0. Therefore, dl/dt is described by $\frac{dl}{dt} = \frac{\partial l}{\partial q}\dot{q}$. Due to the compositional structure of the network and the differentiability of the non-linearity, the derivative with respect to the network input, ∂l/∂q, can be computed by recursively applying the chain rule, i.e., $\frac{\partial h_i}{\partial h_{i-1}} = \mathrm{diag}\big(g'(W_i h_{i-1} + b_i)\big) W_i$, where g' is the derivative of the non-linearity. Similarly to the previous derivation, the partial derivative of the quadratic term can be computed using the chain rule, which yields $\frac{\partial}{\partial q_i}\big(\dot{q}^{\top} L L^{\top} \dot{q}\big) = \dot{q}^{\top}\Big(\frac{\partial L}{\partial q_i} L^{\top} + L \frac{\partial L}{\partial q_i}^{\top}\Big)\dot{q}$, whereas ∂L/∂q_i can be constructed using the columns of the previously derived ∂l/∂q. Therefore, all derivatives included within $\hat{f}^{-1}$ can be computed in closed form. The derivatives of Section 4.2 must be computed within a real-time control loop and only add minimal computational complexity in order to not break the real-time constraint.
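The recursive chain-rule computation of l and ∂l/∂q can be sketched in a few lines; the ReLU nonlinearity is assumed here purely for illustration (the derivation only requires a differentiable g):

```python
import numpy as np

def lagrangian_forward(q, weights, biases):
    """Single forward pass that returns the network output l and its Jacobian dl/dq
    by propagating d h_i / d h_{i-1} = diag(g'(a_i)) W_i through the layers."""
    h = q
    J = np.eye(q.shape[0])                 # running Jacobian dh/dq
    for W, b in zip(weights, biases):
        a = W @ h + b                      # affine transformation
        h = np.maximum(a, 0.0)             # non-linearity g (ReLU assumed)
        gprime = (a > 0.0).astype(float)   # g'(a) for ReLU
        J = (gprime[:, None] * W) @ J      # chain rule: diag(g'(a)) W_i applied to J
    return h, J                            # l and dl/dq

# dl/dt then follows from the chain rule, since the weights are time-invariant:
# dl_dt = J @ qd, for joint velocities qd.
```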
l and ∂l/∂q, required within the temporal and quadratic-term derivatives above, can be computed simultaneously using an extended standard layer. Extending the affine transformation and non-linearity of the standard layer with an additional sub-graph for computing ∂h_i/∂h_{i−1} yields the Lagrangian layer, described by

$$h_{i} = g(a_{i}), \qquad a_{i} = W_{i} h_{i-1} + b_{i}, \qquad \frac{\partial h_{i}}{\partial h_{i-1}} = \mathrm{diag}\!\left(g'(a_{i})\right) W_{i}.$$

The computational graph of the Lagrangian layer is shown in FIG1. Chaining Lagrangian layers yields the compositional structure of ∂l/∂q and enables its efficient computation. Additional reshaping operations compute dL̂/dt and ∂L̂/∂q_i.

To demonstrate the applicability and extrapolation of DeLaN, the proposed network topology is applied to model-based control for a simulated 2-dof robot (FIG2) and the physical 7-dof robot Barrett WAM (FIG2). The performance of DeLaN is evaluated using the tracking error on training and test trajectories and compared to a learned and an analytic model. This evaluation scheme follows existing work (BID34; BID38), as the tracking error is the relevant performance indicator, while the mean squared error (MSE) obtained using sample-based optimization exaggerates model performance (BID16); an offline comparison evaluating the MSE on datasets can be found in Appendix A. In addition, and in contrast to most previous work, we strictly limit all model predictions to real-time and perform the learning online, i.e., the models are randomly initialized and must learn the model during the experiment.

Within the experiment the robot executes multiple desired trajectories with specified joint positions, velocities and accelerations. The control signal, consisting of motor torques, is generated using a non-linear feed-forward controller, i.e., a low-gain PD controller augmented with a feed-forward torque τ_ff to compensate the system dynamics. The control law is described by

$$\tau = K_{p}\,(q_{d} - q) + K_{d}\,(\dot{q}_{d} - \dot{q}) + \tau_{ff}(q_{d}, \dot{q}_{d}, \ddot{q}_{d}),$$

where K_p, K_d are the controller gains and q_d, q̇_d, q̈_d are the desired joint positions, velocities and accelerations (a sketch follows below). The control loop is shown in FIG2. For all experiments the control frequency is set to 500Hz, while the desired joint state and, respectively, τ_ff are updated with a frequency of f_d = 200Hz. All feed-forward torques are computed online and, hence, the computation time is strictly limited to T ≤ 1/200s. The tracking performance is defined as the sum of the MSE evaluated at the sampling points of the reference trajectory.

For the desired trajectories two different data sets are used. The first data set contains all single-stroke characters (created by BID45 and publicly available), while the second uses cosine curves in joint space (FIG2). The 20 characters are spatially and temporally re-scaled to comply with the robot kinematics, and the joint references are computed using the inverse kinematics. Due to the different characters, the desired trajectories contain smooth and sharp turns and cover a wide variety of shapes, but are limited to a small task-space region. In contrast, the cosine trajectories are smooth but cover a large task-space region.

The performance of DeLaN is compared to an analytic inverse dynamics model, a standard feed-forward neural network (FF-NN) and a PD controller. For the analytic model the torque is computed using the Recursive Newton-Euler algorithm (RNE) (BID29), which computes the feed-forward torque using estimated physical properties of the system, i.e., the link dimensions, masses and moments of inertia; for the implementation the open-source library PyBullet is used. Both deep networks use the same dimensionality and ReLU non-linearities, and must learn the system dynamics online starting from random initialization.
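The control law above is simple enough to state in a few lines. The sketch below is a generic illustration, not the experimental code: the gain values and the placeholder inverse model are invented, and the learned model is evaluated at the desired state to produce the feed-forward torque.

```python
import numpy as np

def feedforward_control(q, qd, q_des, qd_des, qdd_des, inv_model, Kp, Kd):
    """Low-gain PD controller plus the learned feed-forward torque; inv_model
    is the inverse dynamics f^-1(q, qd, qdd) -> tau, evaluated at the
    desired state as in the control law above."""
    tau_ff = inv_model(q_des, qd_des, qdd_des)
    return Kp @ (q_des - q) + Kd @ (qd_des - qd) + tau_ff

# hypothetical 2-dof example; the zero model stands in for DeLaN
Kp, Kd = np.diag([50.0, 50.0]), np.diag([5.0, 5.0])
inv_model = lambda q, qd, qdd: np.zeros(2)
tau = feedforward_control(np.zeros(2), np.zeros(2),
                          np.array([0.3, -0.2]), np.zeros(2), np.zeros(2),
                          inv_model, Kp, Kd)
```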
The training samples containing the joint states and applied torques (q, q̇, q̈, τ)_{0,...,T} are directly read from the control loop, as shown in FIG2. The training runs in a separate process on the same machine and solves the optimization problem online. Once the training process has computed a new model, the inverse model f̂⁻¹ of the control loop is updated.

The 2-dof robot shown in FIG2 is simulated using PyBullet and executes the character and cosine trajectories. FIG3 shows the ground-truth torques of the characters 'a', 'd', 'e', their ground-truth components, and the learned decomposition using DeLaN. Even though DeLaN is trained on the superimposed torques, DeLaN learns to disambiguate the inertial force H(q)q̈, the Coriolis and centrifugal force c(q, q̇) and the gravitational force g(q), as the respective curves overlap closely. Hence, DeLaN is capable of learning the underlying physical model using the proposed network topology trained with standard end-to-end optimization.

FIG3 also shows the offline MSE on the test set averaged over multiple seeds for the FF-NN and DeLaN w.r.t. different training set sizes. The different training set sizes correspond to combinations of n random characters, i.e., a training set size of 1 corresponds to training the model on a single character and evaluating the performance on the remaining 19 characters. DeLaN clearly obtains a lower test MSE than the FF-NN, and the difference in performance increases as the training set is reduced. This increasing difference on the test MSE highlights the reduced sample complexity and the good extrapolation to unseen samples.

This difference in performance is amplified on the real-time control task, where the models are learned online starting from random initialization. FIG4 (a and b) shows the accumulated tracking error per testing character and the testing error averaged over all test characters, while FIG4 shows the qualitative comparison of the control performance. It is important to point out that all shown results are averaged over multiple seeds and only incorporate characters not used for training, focusing the evaluation on extrapolation to new trajectories. The qualitative comparison shows that DeLaN is able to execute all 20 characters when trained on 8 random characters. The obtained tracking error is comparable to that of the analytic model, which in this case contains the simulation parameters and is therefore optimal. In contrast, the FF-NN shows significant deviation from the desired trajectories when trained on 8 random characters. The quantitative comparison of the accumulated tracking error over seeds (FIG4) shows that DeLaN obtains a lower tracking error for all training set sizes compared to the FF-NN. This good performance using only a few training characters shows that DeLaN has a lower sample complexity and better extrapolation to unseen trajectories than the FF-NN.

On the cosine trajectories, the performance of DeLaN and the FF-NN is initially comparable. When the velocities are increased, the performance of the FF-NN deteriorates because the new trajectories no longer lie within the vicinity of the training distribution, as the domain of the FF-NN is defined as (q, q̇, q̈). Therefore, the FF-NN cannot extrapolate to the testing data. In contrast, the domains of the networks L̂ and ĝ composing DeLaN consist only of q, rather than (q, q̇, q̈). This reduced domain enables DeLaN, within limits, to extrapolate to the test trajectories.
The increase in tracking error is caused by the structure of f̂⁻¹, where model errors scale quadratically with velocities. However, the obtained tracking error on the testing trajectories is still significantly lower compared to the FF-NN.

For the physical experiments the desired trajectories are executed on the Barrett WAM, a robot with direct cable drives. The direct cable drives produce high torques generating fast and dexterous movements, but yield complex dynamics which cannot be modelled using rigid-body dynamics due to the variable stiffness and lengths of the cables (the cable drives could be modelled simplistically using two joints connected by a massless spring). Therefore, the Barrett WAM is ideal for testing the applicability of model learning and of analytic models (here obtained using a publicly available URDF) on complex dynamics. For the physical experiments we focus on the cosine trajectories, as these trajectories produce dynamic movements while the character trajectories are mainly dominated by the gravitational forces. In addition, only the dynamics of the four lower joints are learned, because these joints dominate the dynamics and the upper joints cannot be sufficiently excited to retrieve the dynamics parameters.

FIG5 shows the tracking error on the cosine trajectories for the simulated Barrett WAM (c and d) and for the physical Barrett WAM (e and f). It is important to note that the simulation only covers the rigid-body dynamics, not the direct cable drives, and that the simulation parameters are inconsistent with the parameters of the analytic model; the analytic model is therefore not optimal. On the training trajectories executed on the physical system the FF-NN performs better than DeLaN and the analytic model. DeLaN achieves a slightly better tracking error than the analytic model, which uses the same rigid-body assumptions as DeLaN. This shows that DeLaN can learn a dynamics model of the WAM but is limited by the model assumptions of Lagrangian Mechanics, which cannot represent the dynamics of the direct cable drives.

We introduced the concept of incorporating a physics prior within the deep learning framework to achieve lower sample complexity and better extrapolation. In particular, we proposed Deep Lagrangian Networks (DeLaN), a deep network on which Lagrangian Mechanics is imposed. This specific network topology enabled us to learn the system dynamics using end-to-end training while maintaining physical plausibility. We showed that DeLaN is able to learn the underlying physics from a superimposed signal, as DeLaN can recover the contributions of the inertial, gravitational and centripetal forces from sensor data. The quantitative evaluation within a real-time control loop assessing the tracking error showed that DeLaN can learn the system dynamics online, obtains lower sample complexity and generalizes better than a feed-forward neural network. DeLaN can extrapolate to new trajectories as well as to increased velocities, where the performance of the feed-forward network deteriorates due to overfitting to the training data. When applied to a physical system with complex dynamics, the bounded representational power of the physics prior can be limiting. However, this limited representational power enforces physical plausibility and yields the lower sample complexity and substantially better generalization.
In future work the physics prior should be extended to represent a wider class of systems by introducing additional non-conservative forces within the Lagrangian.

(FIG6 caption: The mean squared error averaged over 20 seeds on the training set (a) and test set (b) of the character trajectories for the two-joint robot. The models are trained offline using n characters and tested using the remaining 20 − n characters. The training samples are corrupted with white noise, while the performance is tested on noise-free trajectories.)

To evaluate the performance of DeLaN without the control task, DeLaN was trained offline on previously collected data and evaluated using the mean squared error (MSE) on the test and training sets. For comparison, DeLaN is compared to the system identification approach (SI) described by BID1, a feed-forward neural network (FF-NN) and the Recursive Newton-Euler algorithm (RNE) using an analytic model. For this comparison, one must point out that the system identification approach relies on the availability of the kinematics, as the Jacobians and transformations w.r.t. every link must be known to compute the necessary features. In contrast, neither DeLaN nor the FF-NN requires this knowledge, and both must implicitly also learn the kinematics.

FIG6 shows the MSE averaged over 20 seeds on the character data set executed on the two-joint robot. For this data set, the models are trained using noisy samples and evaluated on noise-free and previously unseen characters. The FF-NN performs best on the training set, but overfits to the training data and therefore does not generalize to unseen characters. In contrast, the SI approach does not overfit to the noise and extrapolates to previously unseen characters. In comparison, the structure of DeLaN regularizes the training and prevents overfitting to the corrupted training data. Therefore, DeLaN extrapolates better than the FF-NN, but not as well as the SI approach.

Similar results can be observed on the cosine data set using the Barrett WAM simulated in SL (FIG7). The FF-NN performs best on the training trajectory, but its performance deteriorates when the network must extrapolate to higher velocities. SI performs worse on the training trajectory but extrapolates to higher velocities. In comparison, DeLaN performs comparably to the SI approach on the training trajectory and extrapolates significantly better than the FF-NN, though not as well as the SI approach.

For the physical system (FIG7), the results differ from those in simulation. On the physical system the SI approach only achieves the same performance as RNE, which is significantly worse than the performance of DeLaN and the FF-NN. When evaluating the extrapolation to higher velocities, the analytic model and the SI approach extrapolate to higher velocities, while the MSE of the FF-NN increases significantly. In comparison, DeLaN extrapolates better than the FF-NN but not as well as the analytic model or the SI approach. This performance difference between simulation and the physical system can be explained by the underlying model assumptions and the robustness to noise. While DeLaN only assumes rigid-body dynamics, the SI approach additionally assumes exact knowledge of the kinematic structure. In simulation both assumptions are valid. However, for the physical system, the exact kinematics are unknown due to production imperfections, and the direct cable drives applying torques to flexible joints violate the rigid-body assumption.
Therefore, the SI approach performs significantly worse on the physical system. Furthermore, noise robustness becomes more important for the physical system due to the inherent sensor noise. While the linear regression of the SI approach is easily corrupted by noise or outliers, the gradient-based optimization of the networks is more robust. This robustness can be observed in Figure 9, which shows the correlation between the variance of the Gaussian noise corrupting the training data and the MSE on the simulated, noise-free cosine trajectories. With increasing noise levels, the MSE of the SI approach increases significantly faster than that of the models learned using gradient descent. In conclusion, the extrapolation of DeLaN to unseen trajectories and higher velocities is not as good as that of the SI approach, but significantly better than that of the generic FF-NN. This improved extrapolation over the generic network is achieved by the Lagrangian Mechanics prior of DeLaN. Even though this prior promotes extrapolation, it also hinders performance on the physical robot, because the prior cannot represent the dynamics of the direct cable drives. Therefore, DeLaN performs worse than the FF-NN, which does not assume any model structure. However, DeLaN outperforms the SI approach on the physical system, which also assumes rigid-body dynamics and additionally requires exact knowledge of the kinematics.
BklHpjCqKm
This paper introduces a physics prior for Deep Learning and applies the resulting network topology for model-based control.
Many challenging prediction problems, from molecular optimization to program synthesis, involve creating complex structured objects as outputs. However, available training data may not be sufficient for a generative model to learn all possible complex transformations. By leveraging the idea that evaluation is easier than generation, we show how a simple, broadly applicable, iterative target augmentation scheme can be surprisingly effective in guiding the training and use of such models. Our scheme views the generative model as a prior distribution, and employs a separately trained filter as the likelihood. In each augmentation step, we filter the model's outputs to obtain additional prediction targets for the next training epoch. Our method is applicable in the supervised as well as semi-supervised settings. We demonstrate that our approach yields significant gains over strong baselines both in molecular optimization and program synthesis. In particular, our augmented model outperforms the previous state-of-the-art in molecular optimization by over 10% in absolute gain.

Deep architectures are becoming increasingly adept at generating complex objects such as images, text, molecules, or programs. Many useful generation problems can be seen as translation tasks, where the goal is to take a source (precursor) object such as a molecule and turn it into a target satisfying given design characteristics. Indeed, molecular optimization of this kind is a key step in drug development, though the adoption of automated tools remains limited due to accuracy concerns. We propose here a simple, broadly applicable meta-algorithm to improve translation quality.

Translation is a challenging task for many reasons. Objects are complex, and the available training data pairs do not fully exemplify the intricate ways in which valid targets can be created from the precursors. Moreover, precursors provided at test time may differ substantially from those available during training, a scenario common in drug development. While data augmentation and semi-supervised methods have been used to address some of these challenges, the focus has been on either simple prediction tasks (e.g., classification) or augmenting data primarily on the source side. We show, in contrast, that iteratively augmenting translation targets significantly improves performance on complex generation tasks in which each precursor corresponds to multiple possible outputs.

Our iterative target augmentation approach builds on the idea that it is easier to evaluate candidate objects than to generate them. Thus a learned predictor of target object quality (a filter) can be used to effectively guide the generation process. To this end, we construct an external filter and apply it to the complex generative model's sampled translations of training set precursors. Candidate translations that pass the filter criteria become part of the training data for the next training epoch. The translation model is therefore iteratively guided to generate candidates that pass the filter. The generative model can be viewed as an adaptively tuned prior distribution over complex objects, with the filter as the likelihood. For this reason, it is helpful to apply the filter at test time as well, or to use the approach transductively to adapt the generation process to novel test cases. The approach is reminiscent of self-training or reranking approaches employed with some success for parsing.
However, in our case, it is the candidate generator that is complex, while the filter is relatively simple and remains fixed during the iterative process. We demonstrate that our meta-algorithm is quite effective and consistent in its ability to improve translation quality in the supervised setting. On a program synthesis task, under the same neural architecture, our augmented model outperforms the MLE baseline by 8% and the RL model by 3% in top-1 generalization accuracy (in absolute measure). On molecular optimization (Jin et al., 2019a), their sequence-to-sequence translation baseline, when combined with our target data augmentation, achieves a new state-of-the-art and outperforms their graph-based approach by over 10% in success rate. Their graph-based methods are also improved by iterative target augmentation, with more than 10% absolute gain. These results reflect the difficulty of generation in comparison to evaluation; indeed, the gains persist even if the filter quality is reduced somewhat. Source-side augmentation with unlabeled precursors (the semi-supervised setting) can further improve results, but only when combined with the filter in the target data augmentation framework. We provide ablation experiments to empirically highlight the effect of our method and also offer some theoretical insights for why it is effective.

Molecular Optimization The goal of molecular optimization is to learn to modify compounds so as to improve their chemical properties. Several prior works used reinforcement learning approaches, while Jin et al. (2019a; b) formulated this problem as graph-to-graph translation and significantly outperformed previous methods. However, their performance remains imperfect due to the limited size of given training sets. Our work uses property prediction models to check whether generated molecules have the desired chemical properties. Recent advances in graph convolutional networks have provided effective solutions for predicting those properties in silico. In this work, we use an off-the-shelf property prediction model to filter proposed translation pairs during data augmentation.

Program Synthesis Program synthesis is the task of generating a program (in a domain-specific language) based on given input-output specifications. One can check a generated program's correctness by simply executing it on each input and verifying its output; prior work leverages this idea in decoding procedures, while also using structural constraints on valid programs.

Semi-supervised Learning Our method is related to various approaches in semi-supervised learning. In image and text classification, data augmentation and label guessing are commonly applied to obtain artificial labels for unlabeled data. In machine translation, some approaches sample new targets from a stationary distribution in order to match the model distribution to the exponentiated payoff distribution centered at a single target sentence. Back-translation creates extra translation pairs by using a backward translation system to translate unlabeled sentences from the target language into the source language. In contrast, our method works in the forward direction, because many translation tasks are not symmetric. Moreover, our data augmentation is carried out over multiple iterations, in which we use the augmented model to generate new data for the next iteration. In syntactic parsing, our method is closely related to self-training.
They generate new parse trees from unlabeled sentences by applying an existing parser followed by a reranker, and then treat the resulting parse trees as new training targets. However, their method is not iterative, and their reranker is explicitly trained to operate over the top k outputs of the parser; in contrast, our filter is independent of the generative model. In addition, we show that our approach, which can be viewed as iteratively combining reranking and self-training, is theoretically motivated and can improve the performance of highly complex neural models in multiple domains. Co-training and tri-training also augment a parsing dataset by adding targets on which multiple baseline models agree. Instead of using multiple learners, our method uses task-specific constraints to select correct outputs.

Our iterative target augmentation framework can be applied to any conditional generation task with task-specific constraints. For example, molecular optimization (Jin et al., 2019a; b) is the task of transforming a given molecule X into another compound Y with improved chemical properties, while constraining Y to remain similar to X. Program synthesis is the task of generating a program Y satisfying an input specification X; for example, X may be a set of input-output test cases which Y must pass.

(Figure 1 caption: Illustration of our data generation process in the program synthesis setting. Given an input-output specification, we first use our generation model to generate candidate programs, and then select correct programs using our external filter.)

Without loss of generality, we formulate the generation task as a translation problem. For a given input X, the model learns to generate an output Y satisfying the constraint c. The proposed augmentation framework can be applied to any translation model M trained on an existing dataset. As illustrated in Figure 1, our method is an iterative procedure in which each iteration consists of the following two steps:

• Augmentation Step: Let D_t be the training set at iteration t. To construct the next training set D_{t+1}, we feed each input X_i ∈ D (the original training set, not D_t) into the translation model up to C times to sample C candidate translations. We take the first K distinct translations for each X_i satisfying the constraint c and add them to D_{t+1}. When we do not find K distinct valid translations, we simply add the original translation Y_i to D_{t+1}.

• Training Step: We continue to train the model M_t on the new training set D_{t+1} for one epoch.

The above training procedure is summarized in Algorithm 1 (a sketch is given below). As the constraint c is known a priori, we can construct an external filter to remove generated outputs that violate c during the augmentation step. At test time, we also use this filter to screen predicted outputs. To propose the final translation of a given input X, we have the model generate up to L outputs until we find one satisfying the constraint c. If all L attempts fail for a particular input, we just output the first of the failed attempts. Finally, as an additional improvement, we observe that the augmentation step can be carried out for unlabeled inputs X that have no corresponding Y. Thus we can further augment our training dataset in the transductive setting by including test set inputs during the augmentation step, or in the semi-supervised setting by simply including unlabeled inputs.
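The procedure can be summarized in a short sketch. The `model.train_epoch` and `model.sample` interfaces are our own simplifications of Algorithm 1, and `constraint(x, y)` stands for the external filter; n1 and n2 denote the bootstrap and augmentation epoch counts used in the paper's notation.

```python
def iterative_target_augmentation(model, data, constraint,
                                  n1=5, n2=10, C=200, K=4):
    """Sketch of the training procedure (Algorithm 1)."""
    for _ in range(n1):                         # bootstrap on the original pairs
        model.train_epoch(data)

    for _ in range(n2):                         # iterative target augmentation
        augmented = []
        for x, y in data:
            targets = []
            for _ in range(C):                  # up to C samples per input
                cand = model.sample(x)
                if constraint(x, cand) and cand not in targets:
                    targets.append(cand)
                if len(targets) == K:           # keep the first K distinct hits
                    break
            if len(targets) < K:
                targets.append(y)               # fall back on the original target
            augmented.extend((x, t) for t in targets)
        model.train_epoch(augmented)
    return model
```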
We provide here some theoretical motivation for our iterative target augmentation framework. For simplicity, we consider an external filter c_{X,Y} that is a binary indicator of whether output Y satisfies the desired constraint in relation to input X. In other words, we would like to generate Y such that Y ∈ B(X) = {Y | c_{X,Y} = 1}. If the initial translation model P(Y|X) serves as a reasonable prior distribution over outputs, we could simply "invert" the filter and use the posterior

$$P(Y \mid X,\, c_{X,Y} = 1) \propto P(Y \mid X)\, c_{X,Y}$$

as the ideal translation model. While this posterior calculation is typically not feasible exactly, it could be approximated through samples; however, it relies heavily on the appropriateness of the prior (the model prior to augmentation). Instead, we go a step further and iteratively optimize our parametrically defined prior translation model P_θ(Y|X). Note that the resulting prior can become much more concentrated around acceptable translations. We maximize the log-likelihood that candidate translations satisfy the constraints implicitly encoded in the filter:

$$\max_{\theta}\; \sum_{X \in \mathcal{D}} \log P_{\theta}(c_{X,Y} = 1 \mid X).$$

In many cases there are multiple viable outputs for any given input X, and the training data may provide only one (or none) of them. Therefore, we treat the output structure Y as a latent variable, and expand the inner term of the objective as

$$P_{\theta}(c_{X,Y} = 1 \mid X) = \sum_{Y} P_{\theta}(Y \mid X)\, c_{X,Y}.$$

Since the above objective involves discrete latent variables Y, we propose to maximize it using the standard EM algorithm, especially its incremental, approximate variant. The target augmentation step in our approach is a sampled version of the E-step, where the posterior samples are drawn with rejection sampling guided by the filter. The number of samples K controls the quality of the approximation to the posterior. The additional training step based on the augmented targets corresponds to a generalized M-step.

More precisely, let P^{(t)}_θ(Y|X) be the current translation model after t epochs of augmentation training. In epoch t+1, the augmentation step first samples C different candidates for each input X using the old model parameterized by θ^{(t)}, and then removes those which violate the constraint c; the remaining candidates are interpretable as samples from the current posterior P^{(t)}(Y|X, c_{X,Y}=1) ∝ P^{(t)}_θ(Y|X) c_{X,Y}. As a result, the training step maximizes the EM auxiliary objective via stochastic gradient descent:

$$Q\!\left(\theta, \theta^{(t)}\right) = \sum_{X \in \mathcal{D}}\; \mathbb{E}_{Y \sim P^{(t)}(Y \mid X,\, c_{X,Y}=1)}\!\left[\log P_{\theta}(Y \mid X)\right].$$

We train the model over multiple iterations and show empirically that model performance indeed keeps improving as we add more iterations. The EM approach is likely to converge to a different and better-performing translation model than the initial posterior calculation discussed above.

We demonstrate the broad applicability of iterative target augmentation by applying it to two tasks from different domains: molecular optimization and program synthesis.

(Figure 2 caption: Illustration of molecular optimization. Molecules can be modeled as graphs, with atoms as nodes and bonds as edges. Here, the task is to train a translation model to modify a given input molecule into a target molecule with a higher drug-likeness (QED) score. The constraint has two components: the output Y must be highly drug-like, and must be sufficiently similar to the input X.)

The goal of molecular optimization is to learn to modify molecules so as to improve their chemical properties. As illustrated in Figure 2, this task is formulated as a graph-to-graph translation problem. Similar to machine translation, the training set is a set of molecular pairs {(X, Y)}: X is the input molecule (precursor) and Y is a similar molecule with improved chemical properties. Each molecule in the training set D is further labeled with its property score.
Our method is well-suited to this task because the target molecule is not unique: each precursor molecule can be modified in many different ways to optimize its properties.

External Filter The constraint for this task contains two parts: 1) the chemical property of Y must exceed a certain threshold β, and 2) the molecular similarity between X and Y must exceed a certain threshold δ. The molecular similarity sim(X, Y) is defined as the Tanimoto similarity on Morgan fingerprints, which measures the structural overlap between two molecules. In real-world settings, ground-truth values of chemical properties are often evaluated through experimental assays, which are too expensive and time-consuming to run for iterative target augmentation. Therefore, we construct an in silico property predictor F_1 to approximate the true property evaluator F_0. To train this property prediction model, we use the molecules in the training set and their labeled property values. The predictor F_1 is parameterized as a graph convolutional network and trained using the Chemprop package. During data augmentation, we use F_1 to filter out molecules whose predicted property is under the threshold β (a sketch follows below).

We follow the evaluation setup of Jin et al. (2019b) for two molecular optimization tasks:

1. QED Optimization: The task is to improve the drug-likeness (QED) of a given compound X. The property constraint requires the QED score of the output Y to fall in the range [0.9, 1.0].

2. DRD2 Optimization: The task is to optimize biological activity against the dopamine type 2 receptor (DRD2). The similarity constraint is sim(X, Y) ≥ 0.4 and the property constraint is DRD2(Y) ≥ 0.5, where DRD2(Y) ∈ [0, 1] is the predicted probability of biological activity given by a previously published activity model.

We treat the outputs of the in silico evaluators from prior work as ground truth, and we use them only during test-time evaluation.

Evaluation Metrics. During evaluation, we are interested both in the probability that the model will find a successful modification for a given molecule, and in the diversity of the successful modifications when there are multiple.

(Table 1 caption: Performance of different models on QED and DRD2 optimization tasks. Italicized models with + are augmented with iterative target augmentation. We emphasize that iterative target augmentation remains critical to performance in the semi-supervised and transductive settings; data augmentation without an external filter instead decreases performance.)

1. Success: The fraction of molecules X for which any of the outputs Y_1...Y_Z meets the required similarity and property constraints (specified previously for each task). This is our main metric.

2. Diversity: For each molecule X, we measure the average Tanimoto distance (defined as 1 − sim(Y_i, Y_j)) between pairs within the set of successfully translated compounds among Y_1...Y_Z. If there is one or fewer successful translations, then the diversity is 0. We average this quantity across all test molecules.

We consider the following two model architectures from Jin et al. (2019a), to show that our augmentation scheme is not tied to a specific neural architecture:

1. VSeq2Seq, a sequence-to-sequence translation model generating molecules via their SMILES strings.

2. HierGNN, a hierarchical graph-to-graph architecture that achieves state-of-the-art performance on the QED and DRD2 tasks, outperforming VSeq2Seq by a wide margin.

We apply our iterative augmentation procedure to the above two models, generating up to K = 4 new targets per precursor during each epoch of iterative target augmentation.
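As an illustration of the two-part molecular filter, the sketch below checks both constraints with RDKit. Note the substitution: RDKit's built-in QED implementation stands in for the learned Chemprop predictor F_1 used in the paper, and the threshold values mirror the QED task described above.

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem, QED

def passes_qed_filter(src_smiles, out_smiles, delta=0.4, beta=0.9):
    """Two-part constraint: Tanimoto similarity on Morgan fingerprints
    >= delta, and the property score of the output >= beta."""
    src, out = Chem.MolFromSmiles(src_smiles), Chem.MolFromSmiles(out_smiles)
    if src is None or out is None:              # reject invalid SMILES outright
        return False
    fp_src = AllChem.GetMorganFingerprintAsBitVect(src, 2, nBits=2048)
    fp_out = AllChem.GetMorganFingerprintAsBitVect(out, 2, nBits=2048)
    sim = DataStructs.TanimotoSimilarity(fp_src, fp_out)
    return sim >= delta and QED.qed(out) >= beta
```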
Additionally, we evaluate our augmentation of VSeq2Seq in a transductive setting, as well as in a semi-supervised setting where we provide 100K additional source-side precursors from the ZINC database. Full hyperparameters are given in Appendix A.

As shown in Table 1, our iterative augmentation paradigm significantly improves the performance of VSeq2Seq and HierGNN. On both datasets, the translation success rate increases by over 10% in absolute terms for both models. In fact, VSeq2Seq+, our augmentation of the simple VSeq2Seq model, outperforms the non-augmented version of HierGNN. This strongly confirms our hypothesis about the inherent challenge of learning translation models in data-sparse scenarios. Moreover, we find that adding more precursors during data augmentation further improves the VSeq2Seq model. On the QED dataset, the translation success rate improves from 89.0% to 92.6% by just adding test set molecules as precursors (VSeq2Seq+, transductive). When instead adding 100K precursors from the external ZINC database, the performance further increases to 95.0% (VSeq2Seq+, semi-supervised). We observe similar improvements on the DRD2 task as well. Beyond the accuracy gain, our augmentation strategy also improves the diversity of generated molecules. For instance, on the DRD2 dataset, our approach yields a 100% relative gain in output diversity.

Although the property predictor used in data augmentation is different from the ground-truth property evaluator used at test time, the difference in evaluators does not derail the overall training process. Here we analyze the influence of the quality of the property predictor used in data augmentation. Specifically, we rerun our experiments using less accurate predictors in the property-predicting component of our external filter. We obtain these less accurate predictors by undertraining Chemprop and decreasing its hidden dimension. For comparison, we also report results with the oracle property predictor, which is the ground-truth property evaluator. As shown in Figure 3, on the DRD2 dataset, we are able to maintain strong performance despite using predictors that deviate significantly from the ground truth. This implies that our framework can potentially be applied to other properties that are harder to predict. On the QED dataset, our method is less tolerant to inaccurate property prediction because the property constraint is much tighter: it requires the QED score of an output Y to be in the range [0.9, 1.0].

(Table 2 caption: Ablation analysis of filtering at training and test time. "Train" indicates a model whose training process uses data augmentation according to our framework. "Test" indicates a model that uses the external filter at prediction time to discard candidate outputs which fail to pass the filter. The evaluation for VSeq2Seq(no-filter) is conducted after 10 augmentation epochs, as the best validation set performance only decreases over the course of training.)

Importance of External Filtering Our full model (VSeq2Seq+) uses the external filter during both training and testing. We further experiment with VSeq2Seq(test), a version of our model trained without data augmentation but which uses the external filter to remove invalid outputs at test time. As shown in Table 2, VSeq2Seq(test) performs significantly worse than our full model trained with data augmentation. Similarly, a model VSeq2Seq(train), trained with the data augmentation but without prediction-time filtering, also performs much worse than the full model.
In addition, we run an augmentation-only version of the model without an external filter. This model (referred to as VSeq2Seq(no-filter) in Table 2) augments the data in each epoch by simply using the first K distinct candidate translations for each precursor X in the training set, without using the external filter at all. In addition, we provide this model with the 100K unlabeled precursors from the semi-supervised setting. Nevertheless, we find that the performance of this model steadily declines from that of the bootstrapped starting point with each data augmentation epoch. Thus the external filter is necessary to prevent poor targets from leading the model training astray.

In program synthesis, the source is a set of input-output specifications for the program, and the target is a program that passes all test cases. Our method is suitable for this task because the target program is not unique: multiple programs may be consistent with the given input-output specifications. The external filter is straightforward for this task: we simply check whether the generated output passes all test cases (a sketch follows below). Note that at evaluation time, each instance contains extra held-out input-output test cases; the program must pass these in addition to the given test cases in order to be considered correct. When we perform prediction-time filtering, we do not use the held-out test cases in our filter.

Table 3: Model performance on the Karel program synthesis task (Top-1 Generalization). MLE+ is our augmented version of the MLE baseline.

    MLE                        71.91
    MLE + RL + Beam Search     77.12
    MLE+ (ours)                80.17

Our task is based on the educational Karel programming language used for evaluation in prior work. Commands in the Karel language guide a robot's actions in a 2D grid, and may include for loops, while loops, and conditionals. Figure 1 contains an example. We follow the experimental setup of prior work.

Evaluation Metrics. The evaluation metric is top-1 generalization. This metric measures how often the model can generate a program that passes the input-output test cases on the test set. At test time, we use our model to generate up to L candidate programs and select the first one to pass the input-output specifications (not including the held-out test cases).

Models and Baselines. Our main baseline is the MLE model of prior work, which consists of a CNN encoder for the input-output grids and an LSTM decoder, along with a hand-coded syntax checker. It is trained to maximize the likelihood of the provided target program. Our model is the augmentation of this MLE baseline with our iterative target augmentation framework. As with molecular optimization, we generate up to K = 4 new targets per precursor during each augmentation step. Additionally, we compare against the best model from the same prior work, which fine-tunes the same MLE architecture using an RL method with beam search to estimate gradients. We use the same hyperparameters as the original MLE baseline; see Appendix A for details.

Table 3 shows the performance of our model in comparison to previous work. Our model (MLE+) outperforms the base MLE model by a wide margin. Moreover, our model outperforms the best reinforcement learning model (RL + Beam Search), which was trained to directly maximize the generalization metric. This demonstrates the efficacy of our approach in the program synthesis domain. Since our augmentation framework is complementary to architectural improvements, we hypothesize that other techniques, such as execution-based synthesis, can benefit from our approach as well.
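A generic version of the program-synthesis filter is sketched below. Here `program` stands for any executable form of a decoded candidate (for Karel, this would be an interpreter call on the token sequence), and treating crashes as failures is our own assumption.

```python
def passes_all_tests(program, tests):
    """Keep a candidate only if executing it reproduces every given
    input-output pair; exceptions (crashes, bad states) count as failures."""
    for inp, expected in tests:
        try:
            if program(inp) != expected:
                return False
        except Exception:
            return False
    return True

# usage with a hypothetical candidate
candidate = lambda x: x * 2
print(passes_all_tests(candidate, [(1, 2), (3, 6)]))  # True
```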
In this work, we have presented an iterative target augmentation framework for generation tasks with multiple possible outputs. Our approach is theoretically motivated, and we demonstrate strong empirical results on both the molecular optimization and program synthesis tasks, significantly outperforming baseline models on each task. Moreover, we find that iterative target augmentation is complementary to architectural improvements, and that its effect can be quite robust to the quality of the external filter. Finally, in principle our approach is applicable to other domains as well.

Our augmented models share the same hyperparameters as their baseline counterparts in all cases. For the VSeq2Seq model we use batch size 64, embedding and hidden dimension 300, VAE latent dimension 30, and an LSTM with depth 1 (bidirectional in the encoder, unidirectional in the decoder). For models using iterative target augmentation, n_1 is set to 5 and n_2 is set to 10, while the baseline models are trained for 20 epochs (corresponding to n_1 = 20, n_2 = 0). The HierGNN model shares the same hyperparameters as in Jin et al. (2019a). For the training-time and prediction-time filtering parameters, we set K = 4, C = 200, and L = 10 for both the QED and DRD2 tasks.

For the Karel program synthesis task, we use the same hyperparameters as the baseline MLE model. We use a beam size of 64 at test time, the same as the MLE baseline, but simply sample programs from the decoder distribution when running iterative target augmentation during training. The baseline model is trained for 100 epochs, while the model employing iterative target augmentation is trained as normal for n_1 = 15 epochs followed by n_2 = 50 epochs of iterative target augmentation. Due to the large size of the full training dataset, in each epoch of iterative augmentation we use 1/10 of the dataset, so in total we make 5 passes over the entire dataset. For the training-time and prediction-time filtering parameters, we set K = 4, C = 50, and L = 10.

In Table 4 we provide the training, validation, and test set sizes for all of our tasks. For each task we use the same splits as our baselines.

Table 4: Number of source-target pairs in the training, validation, and test sets for each task.

             Training Set    Validation Set    Test Set
    QED           88306              360            800
    DRD2          34404              500           1000
    Karel       1116854             2500           2500

In Figure 5, we provide the validation set performance per iterative target augmentation epoch for our VSeq2Seq+ model on both the QED and DRD2 tasks. The corresponding figure for the MLE+ model on the Karel task is in the main text (Figure 4).

In our molecular optimization tasks, we experiment with the effect of modifying K, the number of new targets added per precursor during each training epoch. In all other experiments we have used K = 4. Since taking K = 0 corresponds to the base non-augmented model, it is unsurprising that performance may suffer when K is too small. However, as shown in Table 5, at least in molecular optimization there is relatively little change in performance for K much larger than 4.

We also experiment with a version of our method which continually grows the training dataset by keeping all augmented targets, instead of discarding new targets at the end of each epoch. We chose the latter version for our main experiments due to its closer alignment with our EM motivation. However, we demonstrate in Table 6 that the performance gains from continually growing the dataset are small to insignificant in our molecular optimization tasks.
Table 6: Performance of our proposed augmentation scheme, VSeq2Seq+, compared to an alternative version (VSeq2Seq+, keep-targets) which keeps all generated targets and continually grows the training dataset.

In Table 7 we provide the same ablation analysis for program synthesis that we provided in the main text for molecular optimization, demonstrating that both training-time iterative target augmentation and prediction-time filtering are beneficial to model performance. However, we note that even MLE(train), our model without prediction-time filtering, outperforms the best RL method from prior work.

Table 7: Ablation analysis of filtering at training and test time. "Train" indicates a model whose training process uses data augmentation according to our framework. "Test" indicates a model that uses the external filter at prediction time to discard candidate outputs which fail to pass the filter. Note that MLE and MLE(test) are based on an MLE checkpoint which underperforms the published results by 1 point, due to training for fewer epochs.
rylztAEYvr
We improve generative models by proposing a meta-algorithm that filters new training data from the model's outputs.
The Boltzmann distribution is a natural model for many systems, from brains to materials and biomolecules, but is often of limited utility for fitting data because Monte Carlo algorithms are unable to simulate it in available time. This gap between the expressive capabilities and sampling practicalities of energy-based models is exemplified by the protein folding problem, since energy landscapes underlie contemporary knowledge of protein biophysics, but computer simulations are challenged to fold all but the smallest proteins from first principles. In this work we aim to bridge the gap between the expressive capacity of energy functions and the practical capabilities of their simulators by using an unrolled Monte Carlo simulation as a model for data. We compose a neural energy function with a novel and efficient simulator based on Langevin dynamics to build an end-to-end-differentiable model of atomic protein structure given amino acid sequence information. We introduce techniques for stabilizing backpropagation under long roll-outs and demonstrate the model's capacity to make multimodal predictions and to, in some cases, generalize to unobserved protein fold types when trained on a large corpus of protein structures.

Many natural systems, such as cells in a tissue or atoms in a protein, organize into complex structures from simple underlying interactions. Explaining and predicting how macroscopic structures such as these arise from simple interactions is a major goal of science and, increasingly, machine learning. The Boltzmann distribution is a foundational model for relating local interactions to system behavior, but can be difficult to fit to data. Given an energy function U_θ[x], the probability of a system configuration x scales exponentially with energy as

$$p_{\theta}(x) = \frac{1}{Z} \exp\!\left(-U_{\theta}[x]\right), \qquad (1)$$

where the (typically intractable) constant Z normalizes the distribution. Importantly, simple energy functions U_θ[x] consisting of weak, local interactions can collectively encode complex system behaviors, such as the structures of materials and molecules or, when endowed with latent variables, the statistics of images, sound, and text (BID0; BID17). Unfortunately, learning the model parameters θ and generating samples x ~ p_θ(x) of the Boltzmann distribution is difficult in practice, as these procedures depend on expensive Monte Carlo simulations that may struggle to mix effectively. These difficulties have driven a shift towards generative models that are easier to learn and sample from, such as directed latent variable models and autoregressive models.

The protein folding problem provides a prime example of both the power of energy-based models at describing complex relationships in data and the challenge of generating samples from them.

(Figure 1 caption: An unrolled simulator as a model for protein structure. NEMO combines a neural energy function for coarse protein structure, a stochastic simulator based on Langevin dynamics with learned (amortized) initialization, and an atomic imputation network to build atomic coordinate output from sequence information. It is trained end-to-end by backpropagating through the unrolled folding simulation.)

Decades of research in biochemistry and biophysics support an energy landscape theory of protein folding, in which the folds that natural protein sequences adopt are those that minimize free energy.
Without the availability of external information, such as coevolutionary information or homologous structures, to constrain the energy function, however, contemporary simulations are challenged to generate globally favorable low-energy structures in available time. How can we get the representational benefits of energy-based models with the sampling efficiency of directed models? Here we explore a potential solution: directly training an unrolled simulator of an energy function as a model for data. By directly training the sampling process, we eschew the question 'when has the simulator converged' and instead demand that it produce a useful answer in a fixed amount of time. Leveraging this idea, we construct an end-to-end differentiable model of protein structure that is trained by backpropagation through folding (FIG0). NEMO (Neural Energy Modeling and Optimization) can learn at scale to generate 3D protein structures consisting of hundreds of points directly from sequence information. Our main contributions are:

• Neural energy simulator model for protein structure that composes a deep energy function, unrolled Langevin dynamics, and an atomic imputation network into an end-to-end differentiable model of protein structure given sequence information

• Efficient sampling algorithm based on a transform integrator for efficient sampling in transformed coordinate systems

• Stabilization techniques for long roll-outs of simulators that can exhibit chaotic dynamics and, in turn, exploding gradients during backpropagation

• Systematic analysis of combinatorial generalization with a new dataset of protein sequence and structure

Protein modeling Our model builds on a long history of coarse-grained modeling of protein structure (BID2; BID5).

(Figure 2 caption: A neural energy function models coarse-grained structure and is sampled by internal coordinate dynamics. (A) The energy function is formulated as a Markov Random Field with structure-based features and sequence-based weights computed by neural networks. (B) To rapidly sample low-energy configurations, the Langevin dynamics simulator leverages both (i) an internal coordinate parameterization, which is more effective for global rearrangements, and (ii) a Cartesian parameterization, which is more effective for localized structural refinement. (C) The base features of the structure network are rotationally and translationally invariant internal coordinates (not shown), pairwise distances, and pairwise orientations.)

Structured Prediction Energy Networks (SPENs) with unrolled optimization (BID6) are a highly similar approach to ours, differing in the use of optimization rather than sampling. Additional methodologically related work includes approaches that learn energy functions and samplers simultaneously (BID25; BID20; BID8), learn efficient MCMC operators (BID20), build expressive approximating distributions with unrolled Monte Carlo simulations (BID18; BID23), and learn the parameters of simulators with implicitly defined likelihoods (BID10; BID24).

Overview NEMO is an end-to-end differentiable model of protein structure X conditioned on sequence information s, consisting of three components (FIG0): (i) a neural energy function U_θ[x; s] for coarse-grained structure x given sequence, (ii) an unrolled simulator that generates approximate samples from U via internal coordinate Langevin dynamics (§ 2.3), and (iii) an imputation network that generates an atomic model X from the final coarse-grained sample x^(T) (§ 2.4).
All components are trained simultaneously via backpropagation through the unrolled process.

Proteins Proteins are linear polymers (sequences) of amino acids that fold into defined 3D structures. The 20 natural amino acids have a common monomer structure [-(N-H)-(Cα-R)-(C=O)-] with variable side-chain R groups that can differ in properties such as hydrophobicity, charge, and ability to form hydrogen bonds. When placed in solvent (such as water or a lipid membrane), interactions between the side chains, backbone, and solvent drive proteins into particular 3D configurations ('folds'), which are the basis for understanding protein properties such as biochemical activity, ligand binding, and interactions with drugs.

Coordinate representations We predict protein structure X in terms of 5 positions per amino acid: the four heavy atoms of the backbone (N, Cα, and the carbonyl C=O) and the center of mass of the side-chain R group. While it is well established that the locations of the Cα carbons are sufficient to reconstruct a full atomic structure, we include these additional positions for evaluating backbone hydrogen bonding (secondary structure) and coarse side-chain placement. Internally, the differentiable simulator generates an initial coarse-grained structure (one position per amino acid), with the loss function targeted to the midpoint of the Cα carbon and the side-chain center of mass.

Sequence conditioning We consider two modes for conditioning our model on sequence information: 1-seq, in which s is an L × 20 matrix containing a one-hot encoding of the amino acid sequence, and Profile, in which s is an L × 40 matrix encoding both the amino acid sequence and a profile of evolutionarily related sequences (§ B.7).

Internal coordinates In contrast to Cartesian coordinates x, which parameterize structure in terms of the absolute positions of points x_i ∈ R³, internal coordinates z parameterize structure in terms of relative distances and angles between points. We adopt a standard convention for internal coordinates of chains (BID13), where each point x_i is placed in a spherical coordinate system defined by the three preceding points x_{i−1}, x_{i−2}, x_{i−3} in terms of a radius (bond length) b_i, a polar angle (bond angle) a_i ∈ [0, π), and an azimuthal angle (dihedral angle) d_i ∈ [0, 2π) (Figure 2B). We define z_i = {b̃_i, ã_i, d_i}, where b̃_i, ã_i are unconstrained parameterizations of b_i and a_i (§ A.1). The transformation x = F(z) from internal coordinates to Cartesian coordinates is then defined (up to the chosen angle conventions) by the recurrence

$$x_{i} = x_{i-1} + b_{i}\left(\cos(a_{i})\,\hat{u}_{i-1} + \sin(a_{i})\cos(d_{i})\,(\hat{n}_{i-1}\!\times\!\hat{u}_{i-1}) + \sin(a_{i})\sin(d_{i})\,\hat{n}_{i-1}\right),$$

where û_i = (x_i − x_{i−1})/‖x_i − x_{i−1}‖ is the unit bond vector and n̂_i = (û_{i−1} × û_i)/‖û_{i−1} × û_i‖ is a unit vector normal to each bond plane. The inverse transformation z = F⁻¹(x) is simpler to compute, as it only involves local (and fully parallelizable) calculations of distances and angles (§ A.1).

Deep Markov Random Field We model the distribution of a structure x conditioned on a sequence s with the Boltzmann distribution, p_θ(x|s) = (1/Z(s)) exp(−U_θ[x; s]). The base features of the energy function are:

1. Internal coordinates z. All internal coordinates except six are invariant to rotation and translation, and we mask those six in the energy loss.

2. Distances D_ij = ‖x_i − x_j‖ between all pairs of points. We further process these with 4 radial basis functions with (learned) Gaussian kernels.

3. Orientation vectors v̂_ij, which are unit vectors encoding the relative position of point x_j in a local coordinate system of x_i with base vectors (û_i − û_{i+1})/‖û_i − û_{i+1}‖, n̂_{i+1}, and the cross product thereof.

Langevin dynamics The Langevin dynamics is a stochastic differential equation that asymptotically samples from the Boltzmann distribution (Equation 1).
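For concreteness, one step of the chain-extension transformation x = F(z) can be sketched as follows. This is an illustrative implementation: the angle and sign conventions here are one common choice and may differ from the paper's exact convention, and the bond lengths and angles in the usage example are invented.

```python
import numpy as np

def extend_chain(x3, b, a, d):
    """Place the next point from internal coordinates (one step of x = F(z)).
    x3 holds the three preceding points; u is the unit bond vector and n the
    bond-plane normal, matching the frame described in the text."""
    u_prev = (x3[1] - x3[0]) / np.linalg.norm(x3[1] - x3[0])
    u = (x3[2] - x3[1]) / np.linalg.norm(x3[2] - x3[1])
    n = np.cross(u_prev, u)
    n /= np.linalg.norm(n)
    frame = np.stack([u, np.cross(n, u), n], axis=1)   # local orthonormal basis
    local = b * np.array([np.cos(a),
                          np.sin(a) * np.cos(d),
                          np.sin(a) * np.sin(d)])
    return x3[2] + frame @ local

# grow a toy chain point-by-point from internal coordinates
chain = [np.zeros(3), np.array([1.5, 0.0, 0.0]), np.array([2.3, 1.2, 0.0])]
chain.append(extend_chain(chain[-3:], b=1.5, a=1.9, d=np.pi / 3))
```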
It is typically simulated by a first-order discretization,

$$x^{(t+1)} = x^{(t)} - \epsilon\, \nabla_{x} U\!\left[x^{(t)}\right] + \sqrt{2\epsilon}\;\xi^{(t)}, \qquad \xi^{(t)} \sim \mathcal{N}(0, I),$$

where ε is the step size.

Internal coordinate dynamics The efficiency with which Langevin dynamics explores conformational space is highly dependent on the geometry (and thus the parameterization) of the energy landscape U(x). While Cartesian dynamics are efficient at local structural rearrangement, internal coordinate dynamics much more efficiently sample global, coherent changes to the topology of the fold (Figure 2B). We therefore interleave the Cartesian Langevin dynamics with preconditioned internal coordinate dynamics of the form

$$z^{(t+1)} = z^{(t)} - \epsilon\, C\, \nabla_{z} U\!\left[F\!\left(z^{(t)}\right)\right] + \sqrt{2\epsilon}\; C^{1/2}\, \xi^{(t)},$$

where C is a preconditioning matrix that sets the relative scaling of changes to each degree of freedom. For all simulations we unroll T = 250 time steps, each of which comprises one Cartesian step followed by one internal coordinate step (§ A.3).

Transform integrator Simulating internal coordinate dynamics is often computationally intensive, as it requires rebuilding the Cartesian geometry x from the internal coordinates z with F(z) (BID13), which is an intrinsically sequential process. Here we bypass the need for recomputing coordinate transformations at every step by instead performing on-the-fly transform integration (Figure 3). The idea is to directly apply coordinate updates from one coordinate system to another by numerically integrating the Jacobian. This can be favorable when the Jacobian has a simple structure, as in our case, where it requires only distributed cross products.

Local reference frame reconstruction The imputation network builds an atomic model X from the final coarse coordinates x^(T). Each atomic coordinate X_{i,j} of atom type j at position i is placed in a local reference frame around x_i^(T), with frame directions e_{i,j}(z; θ) and offsets r_{i,j}(z; θ) computed by a 1D convolutional neural network (FIG2).

We train and evaluate the model on a set of ~67,000 protein structures (domains) that are hierarchically and temporally split. The model is trained by gradient descent using a composite loss that combines terms from likelihood-based and empirical-risk-minimization-based training.

(Figure 3 caption: A transform integrator simulates Langevin dynamics in a more favorable coordinate system (e.g. internal coordinates z) directly in terms of the untransformed state variables (e.g. Cartesian x), computing the transformed forces f_z = (∂x/∂z)ᵀ f_x at each step. This exchanges the cost of an inner-loop transformation step (e.g. geometry construction F(z)) for an extra Jacobian evaluation, which is fully parallelizable on modern hardware (e.g. GPUs).)

Structural stratification There are several scales of generalization in protein structure prediction, which range from predicting the structure of a sequence that differs from the training set at a few positions to predicting a 3D fold topology that is absent from the training set. To test these various levels of generalization systematically across many different protein families, we built a dataset on top of the CATH hierarchical classification of protein folds (BID11). CATH hierarchically organizes proteins from the Protein Data Bank (BID7) into domains (individual folds) that are classified at the levels of Class, Architecture, Topology, and Homologous superfamily (from general to specific). We collected protein domains from CATH releases 4.1 and 4.2 up to length 200 and hierarchically and temporally split this set (§ B.1) into training (~35k folds), validation (~21k folds), and test sets (~10k folds).
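The first-order Langevin discretization above amounts to a two-term update: a drift down the energy gradient plus Gaussian noise. A minimal sketch with a toy quadratic energy is given below.

```python
import numpy as np

def langevin_step(x, grad_U, eps=1e-3, rng=np.random.default_rng()):
    """One first-order (unadjusted) Langevin step, which asymptotically
    samples the Boltzmann distribution exp(-U(x)) / Z."""
    return x - eps * grad_U(x) + np.sqrt(2.0 * eps) * rng.standard_normal(x.shape)

# toy usage: U(x) = ||x||^2 / 2, i.e. a standard Gaussian target
grad_U = lambda x: x
x = np.zeros(2)
for _ in range(10000):
    x = langevin_step(x, grad_U)
```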
The final test set is subdivided into four subsets, C, A, T, and H, based on the level of maximal similarity between a given test domain and domains in the training set. For example, domains in the C or A sets may share class and potentially architecture classifications with train but will not share topology (i.e. fold).

Likelihood. The gradient of the data-averaged log likelihood of the Boltzmann distribution is

∇_θ E_data[log p_θ(x|s)] = E_model[∇_θ U(x; s)] − E_data[∇_θ U(x; s)],

which, when ascended, will minimize the average energy of samples from the data relative to samples from the model. In an automatic differentiation setting, we implement a Monte Carlo estimator for (the negative of) this gradient by adding the energy gap,

L_ML = U(x^(data); s) − U(⊥(x^(model)); s),

to the loss, where ⊥ is an identity operator that sets the gradient to zero (a stop-gradient); a code sketch of this loss follows at the end of this passage.

Empirical risk. In addition to the likelihood loss, which backpropagates through the energy function but not the whole simulation, we developed an empirical risk loss composing several measures of protein model quality. It takes the form schematized in FIG2. Our combined loss sums all of the terms, L = L_ER + L_ML, without weighting.

We found that the long roll-outs of our simulator were prone to chaotic dynamics and exploding gradients, as seen in other work (BID12). Unfortunately, when chaotic dynamics do occur, it is typical for all gradients to explode (across learning steps), and standard techniques such as gradient clipping (BID14) are unable to rescue learning (§ B.5). To stabilize training, we developed two complementary techniques that regularize against chaotic simulator dynamics while still facilitating learning when they arise. They are:

• Lyapunov regularization. We regularize the simulator time-step function (rather than the energy function) to be approximately 1-Lipschitz. (If exactly satisfied, this eliminates the possibility of chaotic dynamics.)
• Damped backpropagation through time. We exponentially decay gradient accumulation on the backwards pass of automatic differentiation by multiplying each backwards iteration by a damping factor, which we adaptively tune to cancel the scale of the exploding gradients. This can be thought of as a continuous relaxation of, and a quantitatively tunable alternative to, truncated backpropagation through time.

Figure 5: Examples of fold generalization at the topology and architecture level. These predicted structures show a range of prediction accuracy for structural generalization (C and A) tasks, with the TM-score comparing the top-ranked 3D-Jury pick against the target. The largest clusters are the three most-populated clusters derived from 100 models per domain with a within-cluster cutoff of TM > 0.5. CATH IDs: 2oy8A03; 5c3uA02; 2y6xA00; 3cimB00; 4ykaC00; 2f09A00; 3i5qA02; 2ayxA01.

For each of the 10,381 protein structures in our test set, we sampled 100 models from NEMO, clustered them by structural similarity, and selected a representative structure by a standard consensus algorithm. For evaluation of performance we focus on the TM-score (BID27), a measure of structural similarity between 0 and 1 for which TM > 0.5 is typically considered an approximate reconstruction of a fold.

Calibrated uncertainty. We find that, when the model is confident (i.e. the number of distinct structural clusters is low, ∼1-3), it is also accurate, with some predictions having average TM > 0.5 (FIG1). Unsurprisingly, the confidence of the model tends to go with the difficulty of generalization, with the most confident predictions from the H test set and the least confident from C.
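The stop-gradient energy gap above is simple to express in an autodiff framework. Below is a minimal PyTorch sketch; U, x_data, x_model, and s are placeholder names for the energy module, a data structure, a simulator sample, and the sequence conditioning.

```python
import torch

def energy_gap_loss(U, x_data, x_model, s):
    """L_ML = U(x_data; s) - U(stopgrad(x_model); s).
    Detaching the simulator sample implements the identity-with-zero-gradient
    operator: parameter gradients still flow through both evaluations of U,
    but not back through the simulation that produced x_model. Descending
    this loss lowers the energy of data relative to model samples."""
    return U(x_data, s) - U(x_model.detach(), s)
```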
Structural generalization. However, even when sequence identity is low and generalization difficulty is high (FIG1, center), the model is still able to make some accurate predictions of 3D structures. Figure 5 illustrates some of these successful predictions at the C and A levels, specifically 4ykaC00, 5c3uA02, and beta-sheet formation in 2oy8A03. We observe that the predictive distribution is multimodal, with non-trivial differences between the clusters representing alternate packings of the chain. In some of the models there is an uneven distribution of uncertainty along the chain, which sometimes corresponds to loosely packed regions of the protein.

Comparison to an end-to-end baseline. We constructed a baseline model that is a non-iterative replica of NEMO, replacing the coarse-grained simulator module (and energy function) with a two-layer bidirectional LSTM that directly predicts coarse internal coordinates z (followed by transformation to Cartesian coordinates with F). We trained this baseline across a range of hyperparameter values and found that for the difficult C, A, and T tasks, NEMO generalized more effectively than the RNNs (TAB1). For the best-performing 2x300 architecture, we trained two additional replicates and report the averaged performance in FIG1 (right). Additionally, we report the results of a sequence-only NEMO model in TAB1. Paralleling secondary structure prediction (BID16), we find that the availability of evolutionary information has a significant impact on prediction quality.

This work presents a novel approach for protein structure prediction that combines the inductive bias of simulators with the speed of directed models. A major advantage of the approach is that model sampling (inference) times can be considerably faster than conventional approaches to protein structure prediction (TAB5). There are two major disadvantages. First, the computational cost of training and sampling is higher than that of angle-predicting RNNs (FIG0) such as our baseline. Consequently, those methods have been scaled to larger datasets than ours (in protein length and diversity), which are more relevant to protein structure prediction tasks. Second, the instability of backpropagating through long simulations is unavoidable and only partially remedied by our approaches of Lipschitz regularization and gradient damping. These approaches can also lead to slower learning and less expressive energy functions. Methods for efficient (i.e. subquadratic) N-body simulations and for more principled stabilization of deep networks may be relevant to addressing both of these challenges in the future.

We described a model for protein structure given sequence information that combines a coarse-grained neural energy function and an unrolled simulation into an end-to-end differentiable model. To realize this idea at the scale of real proteins, we introduced an efficient simulator for Langevin dynamics in transformed coordinate systems and stabilization techniques for backpropagating through long simulator roll-outs. We find that the model is able to predict the structures of protein molecules with hundreds of atoms while capturing structural uncertainty, and that the model can structurally generalize to distant fold classifications more effectively than a strong baseline.

First, an encoder network (MPNN, bottom left) processes the sequence and outputs energy function weights as well as simulator hyperparameters (top center).
Second, the simulator iteratively modifies the structure via Langevin dynamics based on the gradient of the energy landscape (Forces, bottom center). Third, the imputation network constructs predicted atomic coordinates X from the final simulator time step x^(T). During training, the true atomic coordinates X (Data), predicted atomic coordinates X, simulator trajectory x^(1), ..., x^(T), and secondary structure predictions SS (Model) feed into a composite loss function (Loss, bottom right), which is then optimized via backpropagation.

Inverse transformation. The inverse transformation z = F^{-1}(x) involves fully local computations of bond lengths and angles.

Jacobian. The Jacobian ∂x/∂z defines the infinitesimal response of the Cartesian coordinates x to perturbations of the internal coordinates z, and is defined element-wise by the partial derivatives ∂x_j/∂z_i. It will be important both for converting Cartesian forces into angular torques and bond forces and for the development of our transform integrator. The Jacobian has a simple form that can be understood by imagining the protein backbone as a robot arm that is planted at x_0 (Figure 2B). Increasing or decreasing the bond length b_i extends or retracts all downstream coordinates along the bond's axis; moving a bond angle a_i drives circular motion of all downstream coordinates around the bond normal vector n̂_i centered at x_{i-1}; and moving a dihedral angle d_i drives circular motion of downstream coordinates x_j around the bond vector û_{i-1} centered at x_{i-1}.

Unconstrained representations. Bond lengths and angles are subject to the constraints b_i > 0 and 0 < a_i < π. We enforce these constraints by representing these degrees of freedom in terms of fully unconstrained variables b̃_i and ã_i via the transformations b_i = log(1 + e^{b̃_i}) and a_i = π / (1 + e^{−ã_i}) (see the code sketch earlier). All references to the internal coordinates z and Jacobians ∂x/∂z will refer to the use of fully unconstrained representations (TAB3).

FIG2 provides an overall schematic of the model, including the components of the energy function.

CNN primitives. All convolutional neural network primitives in the model schematic (FIG2) follow a common structure consisting of stacks of residual blocks. Each residual block consists of a layer of channel mixing (1x1 convolution), a variable-sized convolution layer, and a second layer of channel mixing. We use dropout with p = 0.9 and Batch Renormalization on all convolutional layers. Batch Renormalization rather than Batch Normalization was necessary owing to the large variation in the sizes of the protein structures and the resulting large variation in mini-batch statistics.

Why sampling vs. optimization. Deterministic methods for optimizing the energy U(x; s), such as gradient descent or quasi-Newton methods, can effectively seek local minima of the energy surface, but are challenged to optimize globally and completely ignore the contribution of the widths of energy minima (entropy) to their probability. We prefer sampling to optimization for three reasons: (i) noise in sampling algorithms can facilitate faster global conformational exploration by overcoming local minima and saddle points, (ii) sampling generates populations of states that respect the width (entropy) of wells in U and can be used for uncertainty quantification, and (iii) sampling allows training with an approximate maximum likelihood objective (Equation 5).

Langevin dynamics. The Langevin dynamics are a stochastic dynamics that sample from the canonical ensemble.
They are defined as a continuous-time stochastic differential equation and are simulated in discrete time with the first-order discretization given earlier: each time step of size ε involves a descent step down the energy gradient plus a perturbation of Gaussian noise. Importantly, as time tends toward infinity, the time-distribution of the Langevin dynamics converges to the canonical ensemble. Our goal is to design a dynamics that converges to an approximate sample in a very short period of time.

The comparison of this algorithm with naive integration is given in Figure 8. The corrector step is important for eliminating the large second-order errors that arise in curvilinear motions caused by angle changes (Figure 2B and Figure 8). In principle, higher-order numerical integration methods or more time steps could increase accuracy at the cost of more evaluations of the Jacobian, but we found that second-order effects were the most relevant on our timescales.

Mixed integrator. Cartesian dynamics favor local structural rearrangements, such as transitioning from a helical to an extended conformation, while internal coordinate dynamics favor global motions such as a change of the overall fold topology. Since both kinds of structural rearrangement are important to the folding process, we form a hybrid integrator (Algorithm 3) by taking one step with each integrator per force evaluation.

Translational and rotational detrending. Both Cartesian and internal coordinates are overparameterized with 3L degrees of freedom, since only 3L − 6 degrees of freedom are necessary to encode a centered and un-oriented structure. As a consequence, a significant fraction of the per-time-step changes Δx can be explained by rigid translational and rotational motions of the entire structure. We isolate and remove these components of motion by treating the system {x_1, ..., x_L} as a set of particles with unit mass and computing effective structural translational and rotational velocities by summing point-wise momenta. The translational component of motion is simply the average displacement across positions, Δx^Trans = ⟨Δx_i⟩. For rotational motion around the center of mass, it is convenient to define the non-translational motion as Δx̄_i = Δx_i − Δx^Trans and the centered Cartesian coordinates as x̃_i = x_i − ⟨x_i⟩. The point-wise angular momentum is then l_i = x̃_i × Δx̄_i, and we define a total angular velocity of the structure, ω, by summing these and dividing by the moment of inertia; a code sketch of this detrending step follows below.

Speed clipping. We found it helpful to stabilize the model by enforcing a speed limit on overall structural motions for the internal coordinate steps. This prevents small changes to the energy function during learning from causing extreme dynamics that in turn produce a non-informative learning signal. To accomplish this, we translationally and rotationally detrend the update of the predictor step Δx and compute a hypothetical time step ε̃_z that would limit the fastest motion to 2 Angstroms per iteration. We then compute modified predictor and corrector steps subject to this new, potentially slower, time step. While this breaks the asymptotics of Langevin dynamics, (i) it is unlikely on our timescales that we achieve stationarity, and (ii) it can be avoided by regularizing the dynamics away from situations where clipping is necessary.
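For unit masses, the detrending step above reduces to a few lines. The sketch below solves for ω with the full 3×3 inertia tensor, which is the standard rigid-body form; the text itself only says "dividing by the moment of inertia", so treat this as one plausible reading.

```python
import numpy as np

def detrend(x, dx):
    """Remove net translation and rigid rotation from per-step
    displacements dx of unit-mass particles x (both of shape (L, 3))."""
    dx = dx - dx.mean(axis=0)              # subtract translational motion
    xc = x - x.mean(axis=0)                # center coordinates on the mean
    L_tot = np.cross(xc, dx).sum(axis=0)   # total angular momentum sum_i l_i
    # inertia tensor I = sum_i (|r_i|^2 * Id - r_i r_i^T) for unit masses
    I = (xc ** 2).sum() * np.eye(3) - xc.T @ xc
    omega = np.linalg.solve(I, L_tot)      # angular velocity of the structure
    return dx - np.cross(omega, xc)        # subtract rotational motion w x r
```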
In the future, non-Gaussian perturbations with kinetic energies similar to Relativistic Monte Carlo might accomplish the same speed-limiting goal in a more principled manner. The final integrator combining these ideas is presented in Figure 3.

B APPENDIX B: TRAINING

B.1 DATA. For a training and validation set, we downloaded all protein domains of length L ≤ 200 from Classes α, β, and α/β in CATH release 4.1, and then hierarchically purged a randomly selected set of A, T, and H categories. This created three validation sets of increasing levels of difficulty: H, which contains domains with superfamilies that are excluded from train (but whose fold topologies may be present); T, which contains fold topologies that were excluded from train (fold generalization); and A, which contains secondary structure architectures that were excluded from train. For a test set, we downloaded all folds that were new to CATH release 4.2, which (owing to the propensity of structural biology to produce new structures of previously solved folds) provided 10,381 test domains. We further stratified this test set into C, A, T, and H categories based on their nearest CATH classification in the training set. We also analyzed test-set stratifications based on nearest neighbors in both training and validation in FIG0. We note that the validation set was not explicitly used to tune hyperparameters due to the large cost of training (2 months on 2 M40 GPUs), but we did keep track of validation statistics during training. We optimized all models for 200,000 iterations with Adam.

We optimize the model using a composite loss containing several terms, which are detailed as follows.

Distance loss. We score distances in the model with a contact-focused distance loss, in which per-pair distance errors are averaged under contact-focusing weights built from the sigmoid function applied to the target distances.

Angle loss. We use a Euclidean loss between unit-length feature vectors H(z_a) and H(z_b) that map the angles {a_i, d_i} to the unit sphere. Other angular losses, such as the negative log probability of a von Mises-Fisher distribution, are based on the inner product of the feature vectors, H(z_a) · H(z_b), rather than the Euclidean distance ‖H(z_a) − H(z_b)‖ between them. It is worth noting that these two quantities are directly related for unit vectors: ‖H(z_a) − H(z_b)‖² = 2 − 2 H(z_a) · H(z_b). Taking z_a as fixed and z_b as the argument, the Euclidean loss has a cusp at z_a whereas the von Mises-Fisher loss is smooth around z_a. This is analogous to the difference between L1 and L2 losses, where the cusped L1 loss favors median behavior while the smooth L2 loss favors average behavior.

Trajectory loss. In a further analogy to reinforcement learning, damped backpropagation through time necessitates an intermediate loss function that can criticize transient states of the simulator. We compute this by featurizing the per-time-step coordinates as the product D_ij v̂_ij (Figure 2C) and doing the same contact-weighted averaging as the distance loss.

Template Modelling (TM) Score. The TM-score (BID27),

TM = max over superpositions of (1/L) Σ_i 1 / (1 + (d_i / d_0(L))²), with d_0(L) = 1.24 (L − 15)^{1/3} − 1.8,

is a measure of superposition quality between two protein structures that was presented as an approximately length-independent alternative to RMSD. Here d_i = ‖x_i^{model} − x_i^{target}‖ is the per-residue distance under a given superposition, and the TM-score is the best attainable value of the preceding quantity over all possible superpositions of the two structures. This requires iterative optimization, which we implemented with a sign gradient descent with 100 iterations to optimally superimpose the model and target structure.
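For a fixed superposition, the TM-score reduces to a few lines; a minimal NumPy sketch follows. The outer superposition search (the sign-gradient descent mentioned above) is not shown.

```python
import numpy as np

def tm_score(x_model, x_target):
    """TM-score of two already-superimposed structures of length L (> 15):
    TM = (1/L) * sum_i 1 / (1 + (d_i / d0(L))^2),
    with the length-dependent scale d0(L) = 1.24 * (L - 15)^(1/3) - 1.8.
    The reported TM-score is the maximum over all rigid superpositions."""
    L = len(x_target)
    d0 = 1.24 * np.cbrt(L - 15) - 1.8
    d = np.linalg.norm(x_model - x_target, axis=1)   # per-residue distances
    return float(np.mean(1.0 / (1.0 + (d / d0) ** 2)))
```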
We backpropagate through this unrolled superposition optimization as well as through that of the simulator.

Hydrogen bond loss. We determine intra-backbone hydrogen bonds using the electrostatic model of DSSP. First, we place virtual hydrogens at 1 Å along the negative angle bisector of the C_{i-1}−N_i−Cα_i bond angle. Second, we compute a putative energy U^{h-bond}_{ij} (in kcal/mol) for each potential hydrogen bond from an amide donor at i to a carbonyl acceptor at j as

U^{h-bond}_{ij} = 0.084 (1/D_{ON} + 1/D_{CH} − 1/D_{OH} − 1/D_{CN}) · 332,

where D_ab = ‖X_{i,a} − X_{j,b}‖ is the Euclidean distance between atom a of residue i and atom b of residue j. We then make hard assignments of hydrogen bonds for the data by thresholding this energy. We 'predict' the probabilities of hydrogen bonds of the data given the model via logistic regression of soft model assignments, where a, b, c are learned parameters with softplus parameterizations enforcing a, b > 0 and σ(u) = 1/(1 + exp(−u)) is the sigmoid function. The final hydrogen bond loss is the cross-entropy between these predictions and the data.

Secondary structure prediction. We output standard 8-class predictions of secondary structure and score them with a cross-entropy loss.

The combination of energy function, simulator, and refinement network can build an atomic-level model of protein structure from sequence, and our goal is to optimize (meta-learn) this entire procedure by gradient descent. Before going into specifics of the loss function, however, we will discuss challenges and solutions for computing gradients of unrolled simulations in the face of chaos. Gradient-based learning of iterative computational procedures such as Recurrent Neural Networks (RNNs) is well known to be subject to the problems of exploding and vanishing gradients (BID14). Informally, these occur when the sensitivities of model outputs to inputs become either extremely large or extremely small and the gradient is no longer an informative signal for optimization. We find that backpropagation through unrolled simulations such as those presented is no exception to this rule. Often we observed that a model would productively learn for tens of thousands of iterations, only to suddenly and catastrophically exhibit diverging gradients from which the optimizer could not recover, even when the observed simulation dynamics exhibited no obvious qualitative changes in behavior and the standard solutions of gradient clipping (BID14) were in effect. Similar phenomena have been observed previously in the context of meta-learning and are explored in detail in a concurrent work (BID12).

In FIG5, we furnish a minimal example that illustrates how chaos can lead to irrevocable loss of learning. We see that even for a simple particle-in-a-well, some choices of system parameters (such as too large a time step) can lead to chaotic dynamics, which are synonymous with explosive gradients. This example is hardly contrived, and is in fact a simple model of the distance potentials between coordinates in our simulations. Moreover, it is important to note that chaos may not be easy to diagnose: for learning rates α ∈ [1.7, 1.8] the position of the particle x remains more or less confined in the well while the sensitivities diverge to 10^200. It seems unlikely that meta-learning would be able to recover after descending into chaos.

The view per time step. Exploding gradients and chaotic dynamics involve the same mechanism: a multiplicative accumulation of sensitivities.
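The DSSP electrostatic hydrogen-bond energy above is easy to compute directly; a minimal sketch follows, where N, H, C, O are 3-vectors for the donor amide nitrogen, its virtual hydrogen, and the acceptor carbonyl carbon and oxygen. The −0.5 kcal/mol cutoff in the usage note is the conventional DSSP assignment threshold, which the extraction does not show explicitly.

```python
import numpy as np

def hbond_energy(N, H, C, O):
    """DSSP electrostatic hydrogen-bond energy (kcal/mol) between an amide
    donor (N with virtual H) and a carbonyl acceptor (C=O):
    E = 0.084 * (1/d_ON + 1/d_CH - 1/d_OH - 1/d_CN) * 332."""
    d = lambda a, b: np.linalg.norm(a - b)
    return 0.084 * (1/d(O, N) + 1/d(C, H) - 1/d(O, H) - 1/d(C, N)) * 332.0

# usage: a hard hydrogen-bond assignment by thresholding
# is_hbond = hbond_energy(N, H, C, O) < -0.5
```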
In dynamical systems, this multiplicative accumulation of sensitivities is frequently phrased as 'exponentially diverging sensitivity to initial conditions'. Intuitively, this can be understood by examining how the Jacobian of an entire trajectory decomposes into a product of per-step Jacobians,

∂x^(T)/∂x^(0) = Π_{t=1}^{T} ∂x^(t)/∂x^(t−1).

When the norms of the per-time-step Jacobians ∂x^(t)/∂x^(t−1) are typically larger than 1, the sensitivity ‖∂x^(T)/∂x^(0)‖ will grow exponentially with T. Ideally, we would keep these norms well behaved, which is the rationale behind recent work on stabilization of RNNs (BID9). Next we offer a general-purpose regularizer to approximately enforce this goal for any differentiable computational iteration with continuous state.

FIG5 (caption fragment): the particle position x (top) and the sensitivities ∂x^(T)/∂x^(0) (bottom). When the step size α is small, these dynamics converge to a periodic orbit. After some critical step size, the dynamics undergo a period-doubling bifurcation, become chaotic, and the gradients regularly diverge to huge numbers.

Approximate Lipschitz conditions. One condition that guarantees that a deterministic map F: R^N → R^N, x_t = F(x_{t−1}, θ), cannot exhibit exponential sensitivity to initial conditions is the condition of being non-expansive (also known as 1-Lipschitz or metric). That is, for any two input points x_a, x_b ∈ R^N, iterating the map cannot increase the distance between them: ‖F(x_a, θ) − F(x_b, θ)‖ ≤ ‖x_a − x_b‖. Reapplying the map to the bound immediately implies ‖F^(t)(x_a) − F^(t)(x_b)‖ ≤ ‖x_a − x_b‖ for any number of iterations t. Thus, two initially close trajectories iterated through a non-expansive mapping must remain at least that close for arbitrary time. We approximately enforce non-expansivity by performing an online sensitivity analysis within simulations. At randomly selected time steps, the current time step x^(t) is rolled back to the preceding state and re-executed with small Gaussian perturbations to the state, ε ∼ N(0, 10^{-4} I). We regularize the sensitivity by adding a penalty on the measured expansion ratio to the loss; a code sketch of one plausible form of this penalty appears below. Interestingly, the stochastic nature of this approximate regularizer is likely a good thing: a truly non-expansive map is quite limited in what it can model. However, being 'almost' non-expansive seems to be incredibly helpful for learning.

Damped backpropagation through time. The approximate Lipschitz conditions (or Lyapunov regularization) encourage but do not guarantee stable backpropagation. When chaotic phase transitions or other instabilities occur, we need a fall-back plan to be able to continue learning. At the same time, we would like gradient descent to proceed in the usual manner when simulator dynamics are well behaved.

To reduce our reliance on alignments and the generation of profiles for inference of new sequences, while still leveraging evolutionary sequence data, we augmented our training set by dynamically spiking diverse, related sequences into the model during training. Given a set of M sequences in the alignment, we sample a sequence t based on its normalized Hamming distance d_t to the query, with a probability that decays with d_t under a scaling parameter (EDA) that we set to 5. When the alternate sequence contains gaps, we construct a chimeric sequence that substitutes those sites with the query. This strategy increased the number of available sequence-structure pairs by several orders of magnitude, and we used it for both profile- and 1-seq-based training.

C APPENDIX C

For each sequence from the CATH release 4.2 dataset, 100 structures were generated from both the profile and sequence-only models, while a single structure was generated from the RNN baseline models.
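The exact penalty is not shown in this extraction, so the sketch below gives one plausible form of the described procedure: re-execute a step from a Gaussian-perturbed state and penalize expansion ratios above 1. step_fn, sigma, and target are illustrative names, not the paper's.

```python
import torch

def lyapunov_penalty(step_fn, x, sigma=1e-2, target=1.0):
    """Stochastic sensitivity probe of a simulator time step: re-run the
    step from a perturbed state (perturbation ~ N(0, sigma^2 I)) and
    penalize expansion beyond `target`, approximately encouraging the
    step function to be 1-Lipschitz."""
    delta = sigma * torch.randn_like(x)
    expansion = (step_fn(x + delta) - step_fn(x)).norm() / delta.norm()
    return torch.relu(expansion - target) ** 2
```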
The reported TM-scores were calculated using Maxcluster (BID19). A single representative structure was chosen from the ensemble of 100 structures using 3D-Jury. A pairwise distance matrix of TM-scores was calculated for all of the 100 structures in the ensemble. Clusters were determined by agglomerative hierarchical clustering with complete linkage, using a TM-score threshold of 0.5 to determine cluster membership.

FIG0: Sampling speed. Per-protein sampling times for various batch sizes across NEMO and one of the RNN baselines (300 hidden units) on a single Tesla M40 GPU with 12GB memory and 20 cores. For all results in the main paper, 100 models were sampled per protein followed by consensus clustering with 3D-Jury, adding an additional factor of 10² cost between NEMO and the RNN.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
Byg3y3C9Km
We use an unrolled simulator as an end-to-end differentiable model of protein structure and show it can (sometimes) hierarchically generalize to unseen fold topologies.
Progress in understanding how individual animals learn requires high-throughput standardized methods for behavioral training and ways of adapting training. During the course of training with hundreds or thousands of trials, an animal may change its underlying strategy abruptly, and capturing these changes requires real-time inference of the animal's latent decision-making strategy. To address this challenge, we have developed an integrated platform for automated animal training, together with an iterative decision-inference model that is able to infer the momentary decision-making policy and predict the animal's choice on each trial with an accuracy of ~80%, even when the animal is performing poorly. We also combined decision predictions at single-trial resolution with automated pose estimation to assess movement trajectories. Analysis of these features revealed categories of movement trajectories that associate with decision confidence. We designed our automated training system as follows.
[ 0, 1, 0, 0, 0, 0 ]
Hylu4mYIIS
Automated mice training for neuroscience with online iterative latent strategy inference for behavior prediction
Recurrent neural networks (RNNs) are a powerful tool for modeling sequential data. Despite their widespread usage, understanding how RNNs solve complex problems remains elusive. Here, we characterize how popular RNN architectures perform document-level sentiment classification. Despite their theoretical capacity to implement complex, high-dimensional computations, we find that trained networks converge to highly interpretable, low-dimensional representations. We identify a simple mechanism, integration along an approximate line attractor, and find this mechanism present across RNN architectures (including LSTMs, GRUs, and vanilla RNNs). Overall, these results demonstrate that surprisingly universal and human-interpretable computations can arise across a range of recurrent networks.

Recurrent neural networks (RNNs) are a popular tool for sequence modelling tasks. These architectures are thought to learn complex relationships in input sequences and exploit this structure in a nonlinear fashion. RNNs are typically viewed as black boxes, despite considerable interest in better understanding how they function. Here, we focus on studying how recurrent networks solve document-level sentiment analysis, a simple but longstanding benchmark task for language modeling (BID6; BID13). We demonstrate that popular RNN architectures, despite having the capacity to implement high-dimensional and nonlinear computations, in practice converge to low-dimensional representations when trained on this task. Moreover, using analysis techniques from dynamical systems theory, we show that locally linear approximations to the nonlinear RNN dynamics are highly interpretable. In particular, they all involve approximate low-dimensional line attractor dynamics, a useful dynamical feature that can be implemented by linear dynamics and used to store an analog value (BID10). Furthermore, we show that this mechanism is surprisingly consistent across a range of RNN architectures.

We trained four RNN architectures (LSTM (BID3), GRU (BID0), Update Gate RNN (UGRNN) (BID1), and standard (vanilla) RNNs) on binary sentiment classification tasks. We trained each network type on each of three datasets: the IMDB movie review dataset, which contains 50,000 highly polarized reviews (BID7); the Yelp review dataset, which contains 500,000 user reviews (BID14); and the Stanford Sentiment Treebank, which contains 11,855 sentences taken from movie reviews (BID11). For each task and architecture, we analyzed the best-performing networks, selected using a validation set (see Appendix A for details).

We analyzed trained networks by linearizing the dynamics around approximate fixed points. Approximate fixed points are state vectors {h*} that do not change appreciably under the RNN dynamics, h* ≈ F(h*, 0). We find them numerically by defining the loss q = (1/N) ‖h − F(h, 0)‖²₂ and then minimizing q with respect to hidden states, h, using standard auto-differentiation methods (BID2); a code sketch of this fixed-point finder follows below. We ran this optimization multiple times starting from different initial values of h. These initial conditions were sampled randomly from the state activations visited by the trained network, which was done to intentionally sample states related to the operation of the RNN. For brevity, we explain our approach using the working example of the LSTM trained on the Yelp dataset (FIG0). We find similar results for all architectures and datasets; these are shown in Appendix B. As an initial exploratory analysis step, we performed principal components analysis (PCA) on the RNN states concatenated across 1,000 test examples. The top 2-3 PCs explained ∼90% of the variance in hidden state activity (FIG0, black line).
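The fixed-point search described above can be sketched directly against a PyTorch RNNCell-style update; the names and hyperparameters below are illustrative, not the paper's.

```python
import torch

def find_fixed_points(cell, h_init, tol=1e-6, lr=1e-2, max_iter=10_000):
    """Minimize q(h) = (1/N) * ||h - F(h, 0)||_2^2 over a batch of hidden
    states h, initialized from states visited during network operation.
    q near numerical zero gives fixed points; small nonzero q, slow points.
    `cell(x, h)` is a torch.nn.RNNCell/GRUCell-style update h_t = F(h_{t-1}, x_t)."""
    h = h_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([h], lr=lr)
    x0 = torch.zeros(h.shape[0], cell.input_size)   # zero input
    for _ in range(max_iter):
        opt.zero_grad()
        q = ((h - cell(x0, h)) ** 2).mean(dim=1)    # per-state "speed"
        q.sum().backward()
        opt.step()
        if q.max() < tol:
            break
    return h.detach(), q.detach()
```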
The distribution of hidden states visited by untrained networks on the same test set was much higher dimensional (FIG0, gray line), suggesting that training the networks stretched the geometry of their representations along a low-dimensional subspace. We then visualized the RNN dynamics in this low-dimensional space by forming a 2D histogram of the density of RNN states colored by the sentiment label (FIG0). This visualization is reminiscent of integration dynamics along a line attractor, a well-studied mechanism for evidence accumulation in simple recurrent networks (BID10; BID8), and we reasoned that similar dynamics may be used for sentiment classification.

The hypothesis that RNNs approximate line attractor dynamics during sentiment classification makes four specific predictions, which we investigate and confirm in subsequent sections. First, the fixed points form an approximately 1D manifold that is aligned/correlated with the readout weights (Section 3.2). Second, all fixed points are attracting and marginally stable. That is, in the absence of input (or, perhaps, if a string of neutral/uninformative words is encountered) the RNN state should rapidly converge to the closest fixed point and then should not change appreciably (Section 3.4). Third, locally around each fixed point, inputs representing positive vs. negative evidence should produce linearly separable effects on the RNN state vector along some dimension (Section 3.5). Finally, these instantaneous effects should be integrated by the recurrent dynamics along the direction of the 1D fixed point manifold (Section 3.5).

We numerically identified the location of ∼500 RNN fixed points using previously established methods (BID12; BID2). We then projected these fixed points into the same low-dimensional space used in FIG0. Although the PCA projection was fit to the RNN hidden states, and not the fixed points, a very high percentage of variance in fixed points was captured by this projection (FIG0), suggesting that the RNN states remain close to the manifold of fixed points. We call the vector that describes the main axis of variation of the 1D manifold m. Consistent with the line attractor hypothesis, the fixed points appeared to be spread along a 1D curve when visualized in PC space, and furthermore the principal direction of this curve was aligned with the readout weights (FIG0). We further verified that this low-dimensional approximation was accurate by using locally linear embedding (LLE; BID9) to parameterize a 1D manifold of fixed points in the raw, high-dimensional data. This provided a scalar coordinate, θ_i ∈ [−1, 1], for each fixed point, which was well matched to the position of the fixed point manifold in PC space (coloring of points in FIG0).

We next aimed to demonstrate that the identified fixed points were marginally stable, and thus could be used to preserve accumulated information from the inputs. To do this, we used a standard linearization procedure (BID4) to obtain an approximate but highly interpretable description of the RNN dynamics near the fixed point manifold. Given the last state h_{t−1} and the current input x_t, the approach is to locally approximate the update rule with a first-order Taylor expansion:

h_t ≈ F(h*, x*) + J^rec Δh_{t−1} + J^inp Δx_t,

where Δh_{t−1} = h_{t−1} − h* and Δx_t = x_t − x*, and {J^rec, J^inp} are Jacobian matrices of the system: J^rec_{ij} = ∂F(h*, x*)_i / ∂h_j and J^inp_{ij} = ∂F(h*, x*)_i / ∂x_j. We choose h* to be a numerically identified fixed point and x* = 0; thus we have F(h*, x*) ≈ h* and Δx_t = x_t.
Under this choice, the expansion above reduces to a discrete-time linear dynamical system:

Δh_t = J^rec Δh_{t−1} + J^inp x_t.

It is important to note that both Jacobians depend on which fixed point we choose to linearize around, and should thus be thought of as functions of h*; for notational simplicity we do not denote this dependence explicitly. By reducing the nonlinear RNN to a linear system, we can analytically estimate the network's response to a sequence of T inputs. In this approximation, the effect of each input x_t is decoupled from all others; that is, the final state is given by the sum of all individual effects. (We consider the case where the network has closely converged to a fixed point, so that h_0 = h* and thus Δh_0 = 0.) We can restrict our focus to the effect of a single input, x_t (i.e. a single term in this sum). Let k = T − t be the number of time steps between x_t and the end of the document. The total effect of x_t on the final RNN state becomes

Δh_T = (J^rec)^k J^inp x_t = R Λ^k L J^inp x_t = Σ_a λ_a^k r_a (ℓ_a^T J^inp x_t),

where L = R^{−1}, the columns of R (denoted r_a) contain the right eigenvectors of J^rec, the rows of L (denoted ℓ_a) contain the left eigenvectors of J^rec, and Λ is a diagonal matrix containing the complex-valued eigenvalues, λ_1, λ_2, ..., λ_N, which are sorted based on their magnitude. From equation 3 we see that x_t affects the representation of the network through N terms (called the eigenmodes or modes of the system). The magnitude of each mode after k steps scales as |λ_a|^k; thus, each mode either decays to zero or diverges exponentially fast, with a time constant given by τ_a = 1 / |log|λ_a||. This time constant has units of tokens (or, roughly, words) and yields an interpretable number for the effective memory of the system; a code sketch computing these time constants follows below.

FIG1 plots the eigenvalues and associated time constants and shows the distribution of all eigenvalues at three representative fixed points along the fixed point manifold (FIG1). In FIG1, we plot the decay time constants of the top three modes; the slowest-decaying mode persists for ∼1000 time steps, while the next two modes persist for ∼100 time steps, with lower modes decaying even faster. Since the average review length for the Yelp dataset is ∼175 words, only a small number of modes can represent information from the beginning of the document. Overall, these eigenvalue spectra are consistent with our observation that RNN states only explore a low-dimensional subspace when performing sentiment classification. RNN activity along the majority of dimensions is associated with fast time constants and is therefore quickly forgotten.

Restricting our focus to the top eigenmode for simplicity (there may be a few slow modes of integration), the effect of a single input, x_t, on the network activity (equation 3) becomes r_1 (ℓ_1^T J^inp x), where we have dropped the dependence on t since λ_1 ≈ 1, so the effect of x is largely insensitive to the exact time it was input to the system. Using this expression, we separately analyzed the effects of specific words. We first examined the term J^inp x for various choices of x (i.e. various word tokens). This quantity represents the instantaneous linear effect of x on the RNN state vector and is shared across all eigenmodes. We projected the resulting vectors onto the same low-dimensional subspace shown in FIG0. We see that positive- and negative-valence words push the hidden state in opposite directions.
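The linearization-to-timescales pipeline above is a few lines with autodiff; below is a minimal PyTorch/NumPy sketch assuming an RNNCell/GRUCell-style update, with illustrative names.

```python
import numpy as np
import torch

def mode_timescales(cell, h_star):
    """Linearize the update around a fixed point (J_rec = dF/dh at h*,
    x* = 0) and convert eigenvalue magnitudes into memory timescales
    in tokens: tau_a = 1 / |log |lambda_a||."""
    x0 = torch.zeros(1, cell.input_size)
    f = lambda h: cell(x0, h.unsqueeze(0)).squeeze(0)       # F(h, 0)
    J_rec = torch.autograd.functional.jacobian(f, h_star)   # (N, N)
    lam = np.linalg.eigvals(J_rec.detach().numpy())
    order = np.argsort(-np.abs(lam))                         # sort by magnitude
    lam = lam[order]
    tau = 1.0 / np.abs(np.log(np.abs(lam)))                  # tokens of memory
    return lam, tau
```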
In contrast to valenced words, neutral words exert much smaller effects on the RNN state (FIG2). While J^inp x represents the instantaneous effect of a word, only the features of this input that overlap with the top few eigenmodes are reliably remembered by the network. The scalar quantity ℓ_1^T J^inp x, which we call the input projection, captures the magnitude of change induced by x along the eigenmode associated with the longest timescale. Again we observe that the valence of x strongly correlates with this quantity: neutral words have an input projection near zero while positive and negative words produce larger-magnitude responses of opposite sign. Furthermore, this is reliably observed across all fixed points. FIG2 shows the average input projection for positive, negative, and neutral words; the histogram shows the distribution of these average effects across all fixed points along the line attractor.

Finally, if the input projection onto the top eigenmode is non-negligible, then the right eigenvector r_1 (which is normalized to unit length) represents the direction along which x is integrated. If the RNN implements an approximate line attractor, then r_1 (and potentially other slow modes) should align with the principal direction of the manifold of fixed points, m. We indeed observe a high degree of overlap between r_1 and m, both visually in PC space (FIG2) and quantitatively across all fixed points (FIG2).

In this work we applied dynamical systems analysis to understand how RNNs solve sentiment analysis. We found a simple mechanism, integration along a line attractor, present in multiple architectures trained to solve the task. Overall, this work provides preliminary but optimistic evidence that different, highly intricate network models can converge to similar solutions that may be reduced and understood by human practitioners.

An RNN is defined by a nonlinear update rule h_t = F(h_{t−1}, x_t), which is applied recursively from an initial state h_0 over a sequence of inputs x_1, x_2, ..., x_T. Let N and M denote the dimensionality of the hidden states and the input vectors, so that h_t ∈ R^N and x_t ∈ R^M. In sentiment classification, T represents the number of word tokens in a sequence, which can vary on a document-by-document basis. To process word sequences for a given dataset, we build a vocabulary and encode words as one-hot vectors. These are fed to a dense linear embedding layer with an embedding size of M = 128 (x_t are the embeddings in what follows). The word embeddings were trained from scratch simultaneously with the RNN. We considered four RNN architectures, LSTM (BID3), GRU (BID0), UGRNN (BID1), and the vanilla RNN (VRNN), each corresponding to a separate nonlinear update rule F(·, ·). For the LSTM architecture, h_t consists of a concatenated hidden state vector and cell state vector, so that N is twice the number of computational units; in all other architectures N is equal to the number of units. The RNN prediction is evaluated at the final time step T and is given by ŷ = w^T h_T + b, where we call w ∈ R^N the readout weights. In the LSTM architecture, the cell state vector is not read out, and thus half of the entries in w are enforced to be zero under this notation. We examined three benchmark datasets for sentiment classification: the IMDB movie review dataset, which contains 50,000 highly polarized reviews (BID7); the Yelp review dataset, which contains 500,000 user reviews (BID14); and the Stanford Sentiment Treebank, which contains 11,855 sentences taken from movie reviews (BID11).
The Stanford Sentiment Treebank also contains short phrases with labeled sentiments; these were not analyzed. For each task and architecture, we performed a randomized hyper-parameter search and selected the best networks based on a validation set. All models were trained using Adam (BID5) with a batch size of 64. The hyper-parameter search was performed over the following ranges: the initial learning rate (10^-5 to 10^-1), learning rate decay factor (0 to 1), gradient norm clipping (10^-1 to 10), L2 regularization penalty (10^-3 to 10^-1), and the β_1 (0.5 to 0.99) and β_2 (0.9999 to 0.99) parameters of the Adam optimization routine. We additionally trained a bag-of-words model (logistic regression trained with word counts) as a baseline comparison. The accuracies of our final models on the held-out test set are summarized in Table 1.

We analyzed the best-performing models for each combination of architecture type and dataset. For each model, we numerically identified a large set of fixed points {h*} (BID12). Briefly, we accomplished this by first defining the loss function q = (1/N) ‖h − F(h, 0)‖²₂, and then minimizing q with respect to hidden states, h, using standard auto-differentiation methods (BID2). We ran this optimization multiple times starting from different initial values of h. These initial conditions were sampled randomly from the state activations during the operation of the trained network, which was done to intentionally sample states related to the operation of the RNN. We varied the stopping tolerance for q using 9 points logarithmically spaced between 10^-9 and 10^-5, running the optimization from 1000 different initial conditions for each tolerance. This allowed us to find approximate fixed points of varying speeds. Values of q at numerical zero are true fixed points, while small but non-zero values are called slow points. Slow points are often reasonable places to perform a linearization, assuming that √q, which is akin to speed, is slow compared to the operation of the network.

Below we provide figures summarizing the linear integration mechanism for each combination of architecture (LSTMs, GRUs, Update Gate RNNs, vanilla RNNs) and dataset (Yelp, IMDB, and Stanford Sentiment). Note that the first figure, LSTMs trained on Yelp, reproduces the figures in the main text; we include it here for completeness. The description of each panel is given in FIG0; these descriptions are the same across all figures. We find that these mechanisms are remarkably consistent across architectures and datasets. (lower left) Instantaneous effect of word inputs, J^inp x, for positive (green), negative (red), and neutral (cyan) words. Blue arrows denote ℓ_1, the top left eigenvector. The PCA projection is the same as in FIG1, but centered around each fixed point. (lower middle left) Average of ℓ_1^T J^inp x over 100 different words, shown for positive, negative, and neutral words. (lower middle right) Same plot as in FIG1, with an example fixed point highlighted (approximate fixed points in grey). Blue arrows denote r_1, the top right eigenvector. (lower right) Distribution of r_1^T m (overlap of the top right eigenvector with the fixed point manifold) over all fixed points. The null distribution is over randomly generated unit vectors of the size of the hidden state.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SJlKDVS2hV
We analyze recurrent networks trained on sentiment classification, and find that they all exhibit approximate line attractor dynamics when solving this task.
Parallel developments in neuroscience and deep learning have led to mutually productive exchanges, pushing our understanding of real and artificial neural networks in sensory and cognitive systems. However, this interaction between fields is less developed in the study of motor control. In this work, we develop a virtual rodent as a platform for the grounded study of motor activity in artificial models of embodied control. We then use this platform to study motor activity across contexts by training a model to solve four complex tasks. Using methods familiar to neuroscientists, we describe the behavioral representations and algorithms employed by different layers of the network, using a neuroethological approach to characterize motor activity relative to the rodent's behavior and goals. We find that the model uses two classes of representations, which respectively encode the task-specific behavioral strategies and the task-invariant behavioral kinematics. These representations are reflected in the sequential activity and population dynamics of neural subpopulations. Overall, the virtual rodent facilitates grounded collaborations between deep reinforcement learning and motor neuroscience.

Animals have nervous systems that allow them to coordinate their movement and perform a diverse set of complex behaviors. Mammals, in particular, are generalists in that they use the same general neural network to solve a wide variety of tasks. This flexibility in adapting behaviors towards many different goals far surpasses that of robots or artificial motor control systems. Hence, studies of the neural underpinnings of flexible behavior in mammals could yield important insights into the classes of algorithms capable of complex control across contexts and inspire algorithms for flexible control in artificial systems.

Recent efforts at the interface of neuroscience and machine learning have sparked renewed interest in constructive approaches in which artificial models that solve tasks similar to those solved by animals serve as normative models of biological intelligence. Researchers have attempted to leverage these models to gain insights into the functional transformations implemented by neurobiological circuits, prominently in vision, but also increasingly in other areas, including audition and navigation. Efforts to construct models of biological locomotion systems have informed our understanding of the mechanisms and evolutionary history of bodies and behavior. Neural control approaches have also been applied to the study of reaching movements, though often in constrained behavioral paradigms where supervised training is possible. While these approaches model parts of the interactions between animals and their environments, none attempt to capture the full complexity of embodied control, involving how an animal uses its senses, body, and behaviors to solve challenges in a physical environment.

The development of models of embodied control is valuable to the field of motor neuroscience, which typically focuses on restricted behaviors in controlled experimental settings. It is also valuable for AI research, where flexible models of embodied control could be applicable to robotics. Here, we introduce a virtual model of a rodent to facilitate grounded investigation of embodied motor systems.
The virtual rodent affords a new opportunity to directly compare principles of artificial control to biological data from real-world rodents, which are more experimentally accessible than humans. We draw inspiration from emerging deep reinforcement learning algorithms which now allow artificial agents to perform complex and adaptive movement in physical environments with sensory information that is increasingly similar to that available to animals. Similarly, our virtual rodent exists in a physical world, equipped with a set of actuators that must be coordinated for it to behave effectively. It also possesses a sensory system that allows it to use visual input from an egocentric camera located on its head and proprioceptive input to sense the configuration of its body in space.

There are several questions one could answer using the virtual rodent platform. Here we focus on the problem of embodied control across multiple tasks. While some efforts have been made to analyze neural activity in reduced systems trained to solve multiple tasks, those studies lacked the important element of motor control in a physical environment. Our rodent platform presents the opportunity to study how representations of movements, as well as sequences of movements, change as a function of goals and task contexts. To address these questions, we trained our virtual rodent to solve four complex tasks within a physical environment, all requiring the coordinated control of its body. We then ask "Can a neuroscientist understand a virtual rodent?", a more grounded take on the originally satirical "Can a biologist fix a radio?" or the more recent "Could a neuroscientist understand a microprocessor?".

We take a more sanguine view of the tremendous advances that have been made in computational neuroscience in the past decade, and posit that the supposed 'failure' of these approaches in synthetic systems is partly a misdirection. Analysis approaches in neuroscience were developed with the explicit purpose of understanding sensation and action in real brains, and are often implicitly rooted in the types of architectures and processing that are thought relevant in biological control systems. With this philosophy, we use analysis approaches common in neuroscience to explore the types of representations and dynamics that the virtual rodent's neural network employs to coordinate multiple complex movements in the service of solving motor and cognitive tasks.

We implemented a virtual rodent body (Figure 1) in MuJoCo, based on measurements of laboratory rats (see Appendix A.1). The rodent body has 38 controllable degrees of freedom. The tail, spine, and neck consist of multiple segments with joints, but are controlled by tendons that co-activate multiple joints (spatial tendons in MuJoCo). The rodent will be released as part of dm_control/locomotion. The virtual rodent has access to proprioceptive information as well as "raw" egocentric RGB-camera (64×64 pixels) input from a head-mounted camera. The proprioceptive inputs include internal joint angles and angular velocities, the positions and velocities of the tendons that provide actuation, egocentric vectors from the root (pelvis) of the body to the positions of the head and paws, a vestibular-like upright orientation vector, touch or contact sensors in the paws, as well as egocentric acceleration, velocity, and 3D angular velocity of the root.
Figure 2: Visualizations of the four tasks the virtual rodent was trained to solve: (A) jumping over gaps ("gaps run"), (B) foraging in a maze ("maze forage"), (C) escaping from a hilly region ("bowl escape"), and (D) touching a ball twice with a forepaw with a precise timing interval between touches ("two-tap").

We implemented four tasks adapted from previous work in deep reinforcement learning and motor neuroscience to encourage diverse motor behaviors in the rodent. The tasks are as follows: Run along a corridor, over "gaps", with a reward for traveling along the corridor at a target velocity (Figure 2A). Collect all the blue orbs in a maze, with a sparse reward for each orb collected (Figure 2B). Escape a bowl-shaped region by traversing hilly terrain, with a reward proportional to distance from the center of the bowl (Figure 2C). Approach orbs in an open field, activate them by touching them with a forepaw, and touch them a second time after a precise interval of 800 ms with a tolerance of ±100 ms; there is a time-out period if the second touch is not within the tolerated window, and rewards are provided sparsely on the first and second touch (Figure 2D). We did not provide the agent with a cue or context indicating its task. Rather, the agent had to infer the task from the visual input and behave appropriately.

Figure 3: The virtual rodent agent architecture. Egocentric visual image inputs are encoded into features via a small residual network, and proprioceptive state observations are encoded via a small multi-layer perceptron. The features are passed into a recurrent LSTM module. The core module is trained by backpropagation during training of the value function. The outputs of the core are also passed as features to the policy module (with the dashed arrow indicating no backpropagation along this path during training), along with shortcut paths from the proprioceptive observations as well as the encoded features. The policy module consists of one or more stacked LSTMs (with or without skip connections), which then produce the actions via a stochastic policy.

Emboldened by recent results in which end-to-end RL produces a single terrain-adaptive policy, we trained a single architecture on multiple motor-control-reliant tasks (see Figure 3). To train a single policy to perform all four tasks, we used an IMPALA-style setup for actor-critic deep RL: parallel workers collected rollouts and logged them to a replay buffer, from which a central learner sampled data to perform updates. The value-function critic was trained using off-policy correction via V-trace; a reference sketch of this correction is given below. To update the actor, we used a variant of MPO where the E-step is performed using advantages determined from the empirical returns and the value function, instead of the Q-function. Empirically, we found that the "escape" task was more challenging to learn during interleaved training relative to the other tasks. Consequently, we present results arising from training a single-task expert on the escape task and training the multi-task policies using kickstarting for that task, with a weak coefficient (0.001 or 0.005). Kickstarting on this task made the seeds more reliably solve all four tasks, facilitating comparison of the multi-task policies with different architectures (i.e. the policy having 1, 2, or 3 layers, with or without skip connections across those layers). The procedure yields a single neural network that uses visual inputs to determine how to behave and coordinates its body to move in ways required to solve the tasks.
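The paper does not spell out its off-policy correction in pseudocode; below is a standard V-trace target computation (Espeholt et al., 2018) as a reference sketch, assuming a single continuing rollout with no episode-boundary masking. All names are illustrative.

```python
import numpy as np

def vtrace_targets(rewards, values, boot_value, rhos, gamma=0.99,
                   clip_rho=1.0, clip_c=1.0):
    """Off-policy V-trace value targets for a length-T rollout.
    `rhos` are importance ratios pi/mu of learner to behavior policy;
    `values` are V(x_0..x_{T-1}) and `boot_value` is V(x_T)."""
    T = len(rewards)
    rho_bar = np.minimum(clip_rho, rhos)         # clipped IS weights
    c_bar = np.minimum(clip_c, rhos)             # clipped trace cutoffs
    values_tp1 = np.append(values[1:], boot_value)
    deltas = rho_bar * (rewards + gamma * values_tp1 - values)
    vs_minus_v = np.zeros(T)
    acc = 0.0
    for t in reversed(range(T)):                 # backward recursion
        acc = deltas[t] + gamma * c_bar[t] * acc
        vs_minus_v[t] = acc
    return values + vs_minus_v                   # corrected targets v_s
```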
See video examples of a single policy solving episodes of each task: gaps, forage, escape, and two-tap.

We analyzed the virtual rodent's neural network activity in conjunction with its behavior to characterize how it solves multiple tasks (Figure 4A). We used analyses and perturbation techniques adapted from neuroscience, where a range of techniques has been developed to highlight the properties of real neural networks. Biological neural networks have been hypothesized to control, select, and modulate movement through a variety of debated mechanisms, ranging from explicit neural representations of muscle forces and behavioral primitives to the more abstract production of neural dynamics that could underlie movement. A challenge with nearly all of these models, however, is that they have largely been inspired by findings from individual behavioral tasks, making it unclear how to generalize them to a broader range of naturalistic behaviors. To provide insight into the mechanisms underlying movement in the virtual rodent, and to potentially give insight by proxy into the mechanisms underlying behavior in real rats, we systematically tested how the different network layers encoded and generated different aspects of movement. For all analyses, we logged the virtual rodent's kinematics, joint angles, computed forces, sensory inputs, and the cell-unit activity of the LSTMs in core and policy layers during 25 trials per task from each network architecture.

We began our analysis by quantitatively describing the behavioral repertoire of the virtual rodent. A challenge in understanding the neural mechanisms underlying behavior is that behavior can be described at many timescales. On short timescales, one could describe rodent locomotion using a set of actuators that produce joint-specific patterns of forces and kinematics. However, on longer timescales, these force patterns are organized into coordinated, re-used movements, such as running, jumping, and turning. These movements can be further combined to form behavioral strategies or goal-directed behaviors. Relating neural representations to motor behaviors therefore requires analysis methods that span multiple timescales of behavioral description.

To systematically examine the classes of behaviors these networks learn to generate and how they are differentially deployed across tasks, we developed sets of behavioral features that describe the kinematics of the animal on fast (5-25 Hz), intermediate (1-25 Hz), or slow (0.3-5 Hz) timescales (Appendix A.2, A.3). As validation that these features reflect meaningful differences across behaviors, embedding them using tSNE produced a behavioral map in which virtual rodent behaviors were segregated to different regions of the map (Figure 4B) (see video). This behavioral repertoire of the virtual rodent consisted of many behaviors observed in rodents, such as rearing, jumping, running, climbing, and spinning. While the exact kinematics of the virtual rodent's behaviors did not exactly match those observed in real rats, they did reproduce unexpected features. For instance, the stride frequency of the virtual rodent during galloping matches that observed in rats (Appendix A.3). We next investigated how these behaviors were used by the virtual rodent across tasks. On short timescales, low-level motor features like joint speed and actuator forces occupied similar regions in principal component space (Figure 4C).
In contrast, behavioral kinematics, especially on long, 0.3-5 Hz timescales, were more differentiated across tasks. Similar results held when examining overlap in other dimensions using multidimensional scaling. Overall this suggests that the network learned to adapt similar movements in a selective manner for different tasks, indicating that the agent exhibited a form of behavioral flexibility. We next examined the neural activity patterns underlying the virtual rodent's behavior to test if networks produced behaviors through explicit representations of forces, kinematics, or behaviors. As expected, core and policy units operate on distinct timescales (see Appendix A.3, Figure 9). Units in the core typically fluctuated over timescales of 1-10 seconds, likely representing variables associated with context and reward. In contrast, units in policy layers were more active over subsecond timescales, potentially encoding motor and behavioral features. To quantify which aspects of behavior were encoded in the core and policy layers, and how these patterns varied across layers, we used representational similarity analysis (RSA). RSA provides a global measure of how well different features are encoded in layers of a neural network by analyzing the geometries of network activity upon exposure to several stimuli, such as objects. To apply RSA, first a representational similarity (or, equivalently, dissimilarity) matrix is computed that quantifies the similarity of neural population responses to a set of stimuli. To test if different neural populations show similar stimulus encodings, these similarity matrices can then be directly compared across different network layers. Multiple metrics, such as the matrix correlation or dot product, can be used to compare these neural representational similarity matrices. Here we used the linear centered kernel alignment (CKA) index, which shows invariance to orthonormal rotations of population activity. RSA can also be used to directly test how well a particular stimulus feature is encoded in a population. If each stimulus can be quantitatively described by one or more feature vectors, a similarity matrix can also be computed across the set of stimuli themselves. The strength of encoding of a particular set of features can then be measured by comparing the correlation of the stimulus feature similarity matrix and the neuronal similarity matrix. The correlation strength directly reflects the ability of a linear decoder trained on the neuronal population vector to distinguish different stimuli. Unlike previous applications of RSA in the analysis of discrete stimuli such as objects, behavior evolves continuously. To adapt RSA to behavioral analysis, we partitioned time by discretizing each behavioral feature into 50 clusters (Appendix A.4). As expected, RSA revealed that core and policy layers encoded somewhat distinct behavioral features. Policy layers contained greater information about fast timescale kinematics in a manner that was largely conserved across layers, while core layers showed more moderate encoding of kinematics that was stronger for slow behavioral features (Figure 5B,C). This difference in encoding was largely consistent across all architectures tested (Figure 5D). The feature encoding of policy networks was somewhat consistent with the emergence of a hierarchy of behavioral abstraction.
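As a concrete illustration of the cluster-based RSA procedure described above, here is a minimal Python sketch. The feature and activity arrays, their shapes, and the sklearn dependency are assumptions for illustration; computing linear CKA directly from the response matrices is equivalent to comparing the similarity matrices XX^T and YY^T.

import numpy as np
from sklearn.cluster import KMeans

def cluster_means(X, labels, k):
    # Average population response within each of the k behavior clusters.
    return np.stack([X[labels == i].mean(axis=0) for i in range(k)])

def linear_cka(X, Y):
    # Linear CKA between two (k x units) or (k x features) response matrices.
    X = X - X.mean(axis=0); Y = Y - Y.mean(axis=0)
    return (np.linalg.norm(Y.T @ X, 'fro') ** 2 /
            (np.linalg.norm(X.T @ X, 'fro') * np.linalg.norm(Y.T @ Y, 'fro')))

T, k = 10000, 50
feats = np.random.randn(T, 30)            # e.g. fast kinematic features
acts_a = np.random.randn(T, 128)          # core unit activity
acts_b = np.random.randn(T, 128)          # policy unit activity
labels = KMeans(n_clusters=k, n_init=10).fit_predict(feats)
Xa, Xb, F = (cluster_means(M, labels, k) for M in (acts_a, acts_b, feats))
print(linear_cka(Xa, Xb))   # layer-vs-layer encoding similarity
print(linear_cka(Xa, F))    # how strongly the layer encodes the features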
In networks trained with three policy layers, representations were distributed in timescales across layers, with the last layer (policy 2) showing stronger encoding of fast behavioral features, and the first layer (policy 0) instead showing stronger encoding of slow behavioral features. However, policy layer activity, even close to the motor periphery, did not show strong explicit encoding of behavioral kinematics or forces. We then investigated the degree to which the rodent's neural networks used the same neural representations to produce behaviors, such as running or spinning, that were shared across tasks. Embedding population activity into two dimensions using multidimensional scaling revealed that core neuron representations were highly distinct across all tasks, while policy layers contained more overlap (Figure 6A), suggesting that some behavioral representations were re-used. Comparison of representational similarity matrices for behaviors that were shared across tasks revealed that policy layers tended to possess a relatively similar encoding of behavioral features, especially fast behavioral features, over tasks (Figure 6C; Appendix A.4). This was validated by inspection of neural activity during individual behaviors shared across tasks (Appendix A.5, Figure 10). Core layer representations across almost all behavioral categories were more variable across tasks, consistent with encoding behavioral sequences or task variables. Interestingly, when comparing this cross-task encoding similarity across architectures, we found that one-layer networks showed a marked increase in the similarity of behavioral encoding across tasks (Figure 6D). This suggests that in networks with lower computational capacity, animals must rely on a smaller, shared behavioral representation across tasks. While RSA described which behavioral features were represented in core and policy activity, we were also interested in describing how neural activity changes over time to produce different behaviors. We began by analyzing neural activity during the production of stereotyped behaviors. Core and policy units in the two-tap task showed sequentially organized peaks of activity (Figure 7), uniformly tiling time between both taps of the two-tap sequence. This sequential activation was observed across tasks and behaviors in the policy network, including during running (see video) where, consistent with policy networks encoding short-timescale kinematic features in a task-invariant manner, neural activity sequences were largely conserved across tasks (see Appendix A.5, Figure 10). These sequences were reliably repeated across instances of the respective behaviors and, in the case of the two-tap sequence, showed reduced neural variability relative to surrounding timepoints (see Appendix A.6, Figure 11). The finding of sequential activity hints at a putative mechanism for the rodent's behavioral production. We next hoped to systematically quantify the types of sequential and dynamical activity present in core and policy networks without presupposing the behaviors of interest. To describe population dynamics in relation to behavior, we first applied principal components analysis (PCA) to the activity during the performance of single tasks and visualized the gradient of the population vector as a vector field. Figure 8A shows such a vector field representation of the first two principal components of the core and final policy layer during the two-tap task. We generated vector fields by discretizing the PC space into a two-dimensional grid and calculating the average neural activity gradient with respect to time for each bin.
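A minimal numpy sketch of this vector-field construction (the array shapes and bin count are assumptions):

import numpy as np

acts = np.random.randn(5000, 128)              # (time, units) activity
acts = acts - acts.mean(axis=0)
_, _, Vt = np.linalg.svd(acts, full_matrices=False)
pcs = acts @ Vt[:2].T                          # (time, 2) PC trajectory
grad = np.gradient(pcs, axis=0)                # d(activity)/dt in PC space

n_bins = 20
edges = [np.linspace(pcs[:, d].min(), pcs[:, d].max(), n_bins + 1)
         for d in range(2)]
ix = [np.clip(np.digitize(pcs[:, d], edges[d]) - 1, 0, n_bins - 1)
      for d in range(2)]
field = np.zeros((n_bins, n_bins, 2))          # mean gradient per grid bin
count = np.zeros((n_bins, n_bins, 1))
np.add.at(field, (ix[0], ix[1]), grad)
np.add.at(count, (ix[0], ix[1]), 1.0)
field = field / np.maximum(count, 1.0)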
The vector fields showed strong signatures of rotational dynamics across all layers, likely a signature of the previously described sequential activity. To extract rotational patterns, we used jPCA, a dimensionality reduction method that extracts latent rotational dynamics in neural activity. The resulting jPCs form an orthonormal basis that spans the same space as the first six traditional PCs, while maximally emphasizing rotational dynamics. Figure 8B shows the vector fields of the first two jPC planes for the core and final policy layers, along with their characteristic frequency. Consistent with our previous findings, jPC planes in the core have lower characteristic frequencies than those in policy layers across tasks (Figure 8C). The jPC planes also individually explained a large percentage of total neural variability (Figure 8D). These rotational dynamics in the policy and core jPC planes were respectively associated with the production of behaviors and the reward structure of the task. For example, in the two-tap task, rotations in the fastest jPC plane in the core were concurrent with the approach to reward, while rotations in the second fastest jPC were concurrent with long timescale transitions between running to the orb and performing the two-tap sequence. Similarly, the fastest jPC in policy layers was correlated with the phase of running, while the second fastest was correlated with the phase of the two-tap sequence (video). This trend of core and policy neural dynamics respectively reflecting task-related and behavioral features was also present in other tasks. For example, in the maze forage task, the first two jPC planes in the core respectively correlated with reaching the target orb and discovering the location of new orbs, while those in the policy were correlated with low-level locomotor features such as running phase (video). Along with RSA, these findings support a model in which the core layer transforms sensory information into a contextual signal in a task-specific manner. This signal then modulates activity in the policy toward different trajectories that generate appropriate behaviors in a more task-independent fashion. For a more complete set of behaviors with neural dynamics visualizations overlaid, see Appendix A.7. To causally demonstrate the differing roles of core and policy units in respectively encoding task-relevant features and movement, we performed silencing and activation of different neuronal subsets in the two-tap task. We identified two stereotyped behaviors (rears and spinning jumps) that were reliably used in two different seeds of the agent to reach the orb in the task. We ranked neurons according to the degree of modulation of their z-scored activity during the performance of these behaviors. We then inactivated subsets of neurons by clamping activity to the mean values between the first and second taps and observed the effects of inactivation on trial success and behavior. In both seeds analyzed, inactivation of policy units had a stronger effect on motor behavior than the inactivation of core units. For instance, in the two-tap task, ablation of 64 neurons in the final policy layer disrupts the performance of the spinning jump (Appendix A.8, Figure 12B; video).
In contrast, ablation of behavior-modulated core units did not prevent the production of the behavior, but mildly affected the way in which the behavior was directed toward objects in the environment. For example, ablation of a subset of core units during the performance of a spinning jump had a limited effect, but sometimes resulted in jumps that missed the target orbs (video; see Appendix A.8, Figure 12C). We also performed a complementary perturbation aimed to elicit behaviors by overwriting the cell state of neurons in each layer with the average time-varying trajectory of neural activity measured during natural performance of a target behavior. The efficacy of stimulation was found to depend on the gross body posture and behavioral state of an animal, but was nevertheless successful in some cases. For example, during the two-tap sequence, we were able to elicit spinning movements common to searching behaviors in the forage task (video; see Appendix A.8, Figure 12D, E). The efficacy of this activation was more reliable in layers closer to the motor output (Figure 12D). In fact, activation of core units rarely elicited spins, but rather elicited sporadic dashes reminiscent of the searching strategy many models exhibit during the forage task (video). For many computational neuroscientists and artificial intelligence researchers, an aim is to reverse-engineer the nervous system at an appropriate level of abstraction. In the motor system, such an effort requires that we build embodied models of animals equipped with artificial nervous systems capable of controlling their synthetic bodies across a range of behavior. Here we introduced a virtual rodent capable of performing a variety of complex locomotor behaviors to solve multiple tasks using a single policy. We then used this virtual nervous system to study principles of the neural control of movement across contexts and described several commonalities between the neural activity of artificial control and previous descriptions of biological control. A key advantage of this approach relative to experimental approaches in neuroscience is that we can fully observe sensory inputs, neural activity, and behavior, facilitating more comprehensive testing of theories related to how behavior can be generated. Furthermore, we have complete knowledge of the connectivity, sources of variance, and training objectives of each component of the model, providing a rare ground truth to test the validity of our neural analyses. With these advantages in mind, we evaluated our analyses based on their capacity to both describe the algorithms and representations employed by the virtual rodent and recapitulate the known functional objectives underlying its creation without prior knowledge. To this end, our description of core and policy as respectively representing value and motor production is consistent with the model's actor-critic training objectives. But beyond validation, our analyses provide several insights into how these objectives are reached. RSA revealed that the cell activity of core and policy layers had greater similarity with behavioral and postural features than with short-timescale actuators. This suggests that the representation of behavior is useful in the moment-to-moment production of motor actions in artificial control, a model that has been previously proposed in biological action selection and motor control.
These behavioral representations were more consistent across tasks in the policy than in the core, suggesting that task context and value activity in the core engaged task-specific behavioral strategies through the reuse of shared motor activity in the policy. Our analysis of neural dynamics suggests that reused motor activity patterns are often organized as sequences. Specifically, the activity of policy units uniformly tiles time in the production of several stereotyped behaviors like running, jumping, spinning, and the two-tap sequence. This finding is consistent with reports linking sequential neural activity to the production of stereotyped motor and task-oriented behavior in rodents, including during task delay periods, as well as in singing birds. Similarly, by relating rotational dynamics to the virtual rodent's behavior, we found that different behaviors were seemingly associated with distinct rotations in neural activity space that evolved at different timescales. These findings are consistent with a hierarchical control scheme in which policy layer dynamics that generate reused behaviors are activated and modulated by sensorimotor signals from the core. This work represents an early step toward the constructive modeling of embodied control for the purpose of understanding the neural mechanisms behind the generation of behavior. Incrementally and judiciously increasing the realism of the model's embodiment, behavioral repertoire, and neural architecture is a natural path for future research. Our virtual rodent possesses far fewer actuators and touch sensors than a real rodent, uses a vastly different sense of vision, and lacks integration with olfactory, auditory, and whisker-based sensation. While the virtual rodent is capable of locomotor behaviors, an increased diversity of tasks involving decision making, memory-based navigation, and working memory could give insight into "cognitive" behaviors of which rodents are capable. Furthermore, biologically-inspired design of neural architectures and training procedures should facilitate comparisons to real neural recordings and manipulations. We expect that this comparison will help isolate residual elements of animal behavior generation that are poorly captured by current models of motor control, and encourage the development of artificial neural architectures that can produce increasingly realistic behavior. To construct the virtual rodent model, we obtained the mass and lengths of the largest body segments that influence the physical properties of the virtual rodent. First, we dissected cadavers of two female Long-Evans rats and measured the mass of relevant limb segments and organs. Next, we measured the lengths of body segments over the skin of animals anesthetized with 2% v/v isoflurane anesthesia in oxygen. We confirmed that these skin-based measurements approximated bone lengths by measuring bone lengths in a third cadaver. The care and experimental manipulation of all animals were reviewed and approved by the appropriate Institutional Animal Care and Use Committee.
Table 2: Length measurements of limb segments used to construct the virtual rodent model from 7 female Long-Evans rats. Measurements were performed using calipers either over the skin or over dissected bones (*). Thoracic and sacral refer to vertebral segments. L and R refer to the left and right sides of the animal's body.
We generated features describing the whole-body pose and kinematics of the virtual rodent on fast, intermediate, and slow temporal scales. To describe the whole-body pose, we took the top 15 principal components of the virtual rodent's joint angles and joint positions to yield two 15-dimensional sets of eigenpostures. We combined these into a 30-dimensional set of postural features. To describe the animal's whole-body kinematics, we computed the continuous wavelet transform of each eigenposture using a Morlet wavelet spanning 25 scales. For each set of eigenpostures this yielded a 375-dimensional time-frequency representation of the underlying kinematics. We then computed the top 15 principal components of each 375-dimensional time-frequency representation and combined them to yield a 30-dimensional representational description of the animal's behavioral kinematics. To facilitate comparison of kinematics to neural representations on different timescales, we used three sets of wavelet frequencies on 1-25 Hz (intermediate), 0.3-5 Hz (slow), or 5-25 Hz (fast) timescales. In separate work, we have found that combining postural and kinematic information improves separation of animal behaviors in behavioral embeddings. Therefore, we combined postural and dynamical features, the latter on intermediate timescales, to yield a 60-dimensional set of 'behavioral features' that we used to map the animal's behavior using tSNE (Figure 4C). tSNEs were made using the Barnes-Hut approximation with a perplexity of 30.
A.3 POWER SPECTRAL DENSITY OF BEHAVIOR AND NETWORK ACTIVITY
Figure 9: (A) Power spectral density estimates of four different features describing animal behavior, computed by averaging the spectral density of the top ten principal components of each feature, weighted by the variance they explain. (B) Power spectral density estimates of four different network layers, computed by averaging the spectral density of the top ten principal components of each matrix of activations, weighted by the variance they explain. Notice that policy layers have more power in high frequency bands than core layers. Arrows mark peaks in the power spectra corresponding to locomotion. Notably, the 4-5 Hz frequency of galloping in the virtual rat matches that measured in laboratory rats. Power spectral density was computed using Welch's method with a 10 s window size and 5 s overlap.
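A minimal sketch of this feature pipeline (eigenpostures, Morlet wavelet kinematics, and Welch spectra); the sampling rate, array shapes, frequency grid, and the PyWavelets dependency are assumptions:

import numpy as np
import pywt
from scipy.signal import welch

fs = 200.0                                    # assumed sampling rate (Hz)
joints = np.random.randn(20000, 60)           # (time, joint-angle) features

# Eigenpostures: top principal components of the pose (here 15).
joints = joints - joints.mean(axis=0)
_, _, Vt = np.linalg.svd(joints, full_matrices=False)
eig = joints @ Vt[:15].T                      # (time, 15)

# Morlet wavelet transform of each eigenposture over 25 frequencies.
freqs = np.geomspace(1.0, 25.0, 25)           # "intermediate" 1-25 Hz band
scales = pywt.central_frequency('morl') * fs / freqs
tf = np.hstack([np.abs(pywt.cwt(eig[:, i], scales, 'morl')[0]).T
                for i in range(eig.shape[1])])  # (time, 15 * 25)
# The top 15 PCs of `tf` would give the kinematic features; postural and
# kinematic PCs are then concatenated before the tSNE embedding.

# Power spectra as in Figure 9: Welch's method, 10 s windows, 5 s overlap.
f, pxx = welch(eig[:, 0], fs=fs, nperseg=int(10 * fs), noverlap=int(5 * fs))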
A.4 REPRESENTATIONAL SIMILARITY ANALYSIS
We used representational similarity analysis to compare population representations across different network layers and to compute the encoding strength of different features describing animal behavior in the population. Representational similarity analysis has in the past been used to compare neural population responses in tasks where behavioral stimuli are discrete, for instance corpora of objects or faces. A challenge in scaling such approaches to neural analysis in the context of behavior is that behavior unfolds continuously in time. It is thus a priori unclear how to discretize behavior into discrete chunks in which to compare representations. Formally, we defined eight sets of features B_i, i = 1, ..., 8, describing the behavior of the animal on different timescales. These included features such as joint angles, the angular speed of the joint angles, eigenposture coefficients, and actuator forces that vary on short timescales, as well as behavioral kinematics, which vary on longer timescales, and 'behavioral features', which consisted of both kinematics and eigenpostures. Each feature set is a matrix B_i ∈ R^(M×q_i), where M is the number of timepoints in the experiment and q_i is the number of features in the set. We discretized each set B_i using k-means clustering with k = 50 to yield a partition of the timepoints in the experiment, P_i. Using the discretization defined in P_i, we can perform representational similarity analysis to compare the structure of population responses across neural network layers L_m and L_n, or between a given network layer and features of the behavior B_i. Following notation in prior work, we let X ∈ R^(k×p) be a matrix of population responses across p neurons and the k behavioral categories in P_i. We let Y ∈ R^(k×q) be either the matrix of population responses from q neurons in a distinct network layer, or a set of q features describing the behavior of the animal in the feature set B_i. After computing the response matrices in a given behavioral partition, we compared the representational structure of the matrices XX^T and YY^T. To do so, we compute the similarity between these matrices using the linear Centered Kernel Alignment (CKA) index, which is invariant under orthonormal rotations of the population activity. The CKA coefficient is
CKA(X, Y) = ||Y^T X||_F^2 / (||X^T X||_F ||Y^T Y||_F),
where ||·||_F is the Frobenius norm. For centered X and Y, the numerator is equivalent to the dot product between the vectorized similarity matrices XX^T and YY^T. For a given network layer L_m and a behavioral partition P_i, we can thus compute the layer-to-layer similarity CKA(X_{L_m}, X_{L_n}) and the layer-to-behavior similarity CKA(X_{L_m}, B̄_i), where B̄_i denotes the feature averages over the k behavioral categories. The former describes the similarity across two layers of the network, and the latter describes the similarity of the network activity to a set of behavioral descriptors. An additional challenge comes when restricting this analysis to comparing the neural representations of behaviors across different tasks T_a, T_b, where not all behaviors are necessarily used in each task. To make such a comparison, we denote B_i(T_a) to be the set of behavioral clusters observed in task T_a, and B_i(T_a) ∩ B_i(T_b) to be the set of behaviors used in each of the two tasks. We can then define a restricted partition of timepoints for each task, P_i^(Ta,Tb) or P_i^(Tb,Ta), that includes only these behaviors, and compute the representational similarity of the same layer across tasks as the CKA index between the response matrices evaluated on these two restricted partitions. We have presented a means of performing representational similarity analysis across continuous time domains, where the natural units of discretization are unclear and likely manifold. While we focused on analyzing responses on the population level, it is likely that different subspaces of the population may encode information about distinct behavioral features at different timescales, which is still an emerging domain in representational similarity analysis techniques.
A.5 NEURAL POPULATION ACTIVITY ACROSS TASKS DURING RUNNING
Figure 10: Average activity in the final policy layer (policy 2) during running cycles across different tasks. In each heatmap, rows correspond to the absolute averaged z-scored activity for individual neurons, while columns denote time relative to the mid-stance of the running phase. Across heatmaps, neurons are sorted by the time of peak activity in the tasks denoted on the left, such that each column of heatmaps contains the same average activity information with rearranged rows. Aligned running bouts were acquired by manually segmenting the principal component space of policy 2 activity to find instances of mid-stance running and analyzing the surrounding 200 ms.
During the execution of stereotyped behaviors, neural variability was reduced (Figure 11).
Recall that in our setting, neurons have no intrinsic noise, but inherit motor noise through observations of the state (i.e., via sensory reafference). This effect loosely resembles, and perhaps informs one line of interpretation of, the widely reported phenomenon of neural variability reducing with stimulus or task onset. Our reproduction of this effect, which simply emerges from training, suggests that variance modulation may partly arise from moments in a task that benefit from increased behavioral precision.
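A minimal numpy sketch of this across-trial variability analysis (the alignment window, sampling rate, and array shapes are assumptions):

import numpy as np

acts = np.random.randn(50000, 128)                   # (time, units)
onsets = np.sort(np.random.randint(200, 49000, 40))  # behavior onset indices
win = np.arange(-100, 100)                           # +/- 0.5 s at 200 Hz

z = (acts - acts.mean(axis=0)) / acts.std(axis=0)    # z-score each unit
trials = np.stack([z[t + win] for t in onsets])      # (trials, time, units)
variability = trials.var(axis=0).mean(axis=-1)       # mean across units
# A dip in `variability` around the aligned timepoint indicates reduced
# across-trial neural variability during the stereotyped behavior.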
We built a physical simulation of a rodent, trained it to solve a set of tasks, and analyzed the resulting networks.
We present Optimal Transport GAN (OT-GAN), a variant of generative adversarial nets minimizing a new metric measuring the distance between the generator distribution and the data distribution. This metric, which we call mini-batch energy distance, combines optimal transport in primal form with an energy distance defined in an adversarially learned feature space, resulting in a highly discriminative distance function with unbiased mini-batch gradients. Experimentally we show OT-GAN to be highly stable when trained with large mini-batches, and we present state-of-the-art results on several popular benchmark problems for image generation. Generative modeling is a major sub-field of Machine Learning that studies the problem of how to learn models that generate images, audio, video, text or other data. Applications of generative models include image compression, generating speech from text, planning in reinforcement learning, semi-supervised and unsupervised representation learning, and many others. Since generative models can be trained on unlabeled data, which is almost endlessly available, they have enormous potential in the development of artificial intelligence. The central problem in generative modeling is how to train a generative model such that the distribution of its generated data will match the distribution of the training data. Generative adversarial nets (GANs) represent an advance in solving this problem, using a neural network discriminator or critic to distinguish between generated data and training data. The critic defines a distance between the model distribution and the data distribution which the generative model can optimize to produce data that more closely resembles the training data. A closely related approach to measuring the distance between the distributions of generated data and training data is provided by optimal transport theory. By framing the problem as optimally transporting one set of data points to another, it represents an alternative method of specifying a metric over probability distributions and provides another objective for training generative models. The dual problem of optimal transport is closely related to GANs, as discussed in the next section. However, the primal formulation of optimal transport has the advantage that it allows for closed form solutions and can thus more easily be used to define tractable training objectives that can be evaluated in practice without making approximations. A complication in using primal form optimal transport is that it may give biased gradients when used with mini-batches (see BID1) and may therefore be inconsistent as a technique for statistical estimation. In this paper we present OT-GAN, a variant of generative adversarial nets incorporating primal form optimal transport into its critic. We derive and justify our model by defining a new metric over probability distributions, which we call Mini-batch Energy Distance, combining optimal transport in primal form with an energy distance defined in an adversarially learned feature space. This combination results in a highly discriminative metric with unbiased mini-batch gradients. In Section 2 we provide the preliminaries required to understand our work, and we put our contribution into context by discussing the relevant literature. Section 3 presents our main theoretical contribution: the mini-batch energy distance. We apply this new distance metric to the problem of learning generative models in Section 4, and show state-of-the-art results in Section 5.
Finally, Section 6 concludes by discussing the strengths and weaknesses of the proposed method, as well as directions for future work. Generative adversarial nets BID10 were originally motivated using game theory: A generator g and a discriminator d play a zero-sum game where the generator maps noise z to simulated images y = g(z) and where the discriminator tries to distinguish the simulated images y from images x drawn from the distribution of training data p. The discriminator takes in each image x and y and outputs an estimated probability that the given image is real rather than generated. The discriminator is rewarded for putting high probability on the correct classification, and the generator is rewarded for fooling the discriminator. The goal of training is then to find a pair of (g, d) for which this game is at a Nash equilibrium. At such an equilibrium, the generator minimizes its loss, or negative game value, which can be defined as
L(g) = sup_d E_{x∼p}[log d(x)] + E_{z}[log(1 − d(g(z)))].    (1)
Arjovsky et al. re-interpret GANs in the framework of optimal transport theory. Specifically, they propose the Earth-Mover distance or Wasserstein-1 distance as a good objective for generative modeling:
D_EMD(p, g) = inf_{γ∈Π(p,g)} E_{(x,y)∼γ}[c(x, y)],    (2)
where Π(p, g) is the set of all joint distributions γ(x, y) with marginals p(x), g(y), and where c(x, y) is a cost function that they take to be the Euclidean distance. If the p(x) and g(y) distributions are interpreted as piles of earth, the Earth-Mover distance D_EMD(p, g) can be interpreted as the minimum amount of "mass" that γ has to transport to turn the generator distribution g(y) into the data distribution p(x). For the right choice of cost c, this quantity is a metric in the mathematical sense, meaning that D_EMD(p, g) ≥ 0 and D_EMD(p, g) = 0 if and only if p = g. Minimizing the Earth-Mover distance in g is thus a valid method for deriving a statistically consistent estimator of p, provided p is in the model class of our generator g. Unfortunately, the minimization over γ in Equation 2 is generally intractable, so they turn to the dual formulation of this optimal transport problem:
D_EMD(p, g) = sup_{‖f‖_L ≤ 1} E_{x∼p}[f(x)] − E_{y∼g}[f(y)],    (3)
where we have replaced the minimization over γ with a maximization over the set of 1-Lipschitz functions. This optimization problem is generally still intractable, but they argue that it is well approximated by using the class of neural network GAN discriminators or critics described earlier in place of the class of 1-Lipschitz functions, provided we bound the norm of their gradient with respect to the image input. Making this substitution, the objective becomes quite similar to that of our original GAN formulation in Equation 1. In follow-up work, BID11 propose a different method of bounding the gradients in the class of allowed critics, and provide strong empirical results supporting this interpretation of GANs. In spite of their success, however, we should note that GANs are still only able to solve this optimal transport problem approximately. The optimization with respect to the critic cannot be performed perfectly, and the class of obtainable critics only very roughly corresponds to the class of 1-Lipschitz functions. The connection between GANs and dual form optimal transport is further explored by BID2 and BID6, who extend the analysis to different optimal transport costs and to a broader model class including latent variables. An alternative approach to generative modeling is chosen by BID7, who instead chose to approximate the primal formulation of optimal transport.
They start by taking an entropically smoothed generalization of the Earth-Mover distance, called the Sinkhorn distance BID4:
D_Sinkhorn(p, g) = inf_{γ∈Π_β(p,g)} E_{(x,y)∼γ}[c(x, y)],    (4)
where the set of allowed joint distributions Π_β is now restricted to distributions with entropy of at least some constant β. BID7 then approximate this distance by evaluating it on mini-batches of data X, Y consisting of K data vectors x, y. The cost function c then gives rise to a K×K transport cost matrix C, where C_{i,j} = c(x_i, y_j) tells us how expensive it is to transport the i-th data vector x_i in mini-batch X to the j-th data vector y_j in mini-batch Y. Similarly, the coupling distribution γ is replaced by a K×K matrix M of soft matchings between these i, j elements, which is restricted to the set of matrices M with all positive entries, with all rows and columns summing to one, and with sufficient entropy −Tr[M log(M^T)] ≥ α. The resulting distance, evaluated on a mini-batch, is then
W_c(X, Y) = inf_M Tr[M C^T],    (5)
where the minimization is over the set of soft matching matrices just described. In practice, the minimization over the soft matchings M can be found efficiently on the GPU using the Sinkhorn algorithm. Consequently, BID7 call their method of using Equation 5 in generative modeling Sinkhorn AutoDiff. The great advantage of this mini-batch Sinkhorn distance is that it is fully tractable, eliminating the instabilities often experienced with GANs due to imperfect optimization of the critic. However, a disadvantage is that the expectation of Equation 5 over mini-batches is no longer a valid metric over probability distributions. Viewed another way, the gradients of Equation 5, for fixed mini-batch size, are not unbiased estimators of the gradients of our original optimal transport problem in Equation 4. For this reason, BID1 propose to instead use the Energy Distance, also called Cramer Distance, as the basis of generative modeling:
D_ED(p, g) = sqrt( 2 E[‖x − y‖] − E[‖x − x′‖] − E[‖y − y′‖] ),    (6)
where x, x′ are independent samples from data distribution p and y, y′ are independent samples from the generator distribution g. In Cramer GAN they propose training the generator by minimizing this distance metric, evaluated in a latent space which is learned by the GAN critic. In the next section we propose a new metric for generative modeling, combining the insights of GANs and optimal transport. Although our work was performed concurrently to that by BID7 and BID1, it can be understood most easily as forming a synthesis of the ideas used in Sinkhorn AutoDiff and Cramer GAN. As discussed in the last section, most previous work in generative modeling can be interpreted as minimizing a distance D(g, p) between a generator distribution g(x) and the data distribution p(x), where the distributions are defined over a single vector x which we here take to be an image. However, in practice deep learning typically works with mini-batches of images X rather than individual images. For example, a GAN generator is typically implemented as a high dimensional function G(Z) that turns a mini-batch of random noise Z into a mini-batch of images X, which the GAN discriminator then compares to a mini-batch of images from the training data. The central insight of Mini-batch GAN is that it is strictly more powerful to work with the distributions over mini-batches g(X), p(X) than with the distributions over individual images. Here we further pursue this insight and propose a new distance over mini-batch distributions D[g(X), p(X)], which we call the Mini-batch Energy Distance.
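Before deriving the new distance, here is a minimal numpy sketch of the Sinkhorn soft matching of Equation 5, which the distance below will reuse. The regularization strength and iteration count are illustrative assumptions; entropic regularization with strength lam stands in for the explicit entropy constraint.

import numpy as np
from scipy.special import logsumexp

def sinkhorn(C, lam=100.0, n_iters=500):
    """Soft matching M for a (K, K) cost matrix C, rows/columns summing to 1."""
    log_G = -lam * C                    # Gibbs kernel, kept in log space
    log_u = np.zeros(C.shape[0])
    log_v = np.zeros(C.shape[1])
    for _ in range(n_iters):            # alternating marginal projections
        log_u = -logsumexp(log_G + log_v[None, :], axis=1)
        log_v = -logsumexp(log_G + log_u[:, None], axis=0)
    M = np.exp(log_u[:, None] + log_G + log_v[None, :])
    return M, np.sum(M * C)             # matching and transport cost Tr[M C^T]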
This new distance combines optimal transport in primal form with an energy distance defined in an adversarially learned feature space, resulting in a highly discriminative distance function with unbiased mini-batch gradients. In order to derive our new distance function, we start by generalizing the energy distance given in Equation 6 to general non-Euclidean distance functions d. Doing so gives us the generalized energy distance:
D_GED(p, g) = sqrt( 2 E[d(X, Y)] − E[d(X, X′)] − E[d(Y, Y′)] ),    (7)
where X, X′ are independent samples from distribution p and Y, Y′ are independent samples from g. This distance is typically defined for individual samples, but it is valid for general random objects, including mini-batches like we assume here. The energy distance D_GED(p, g) is a metric, in the mathematical sense, as long as the distance function d is a metric BID13. Under this condition, meaning that d satisfies the triangle inequality and several other conditions, we have that D(p, g) ≥ 0, and D(p, g) = 0 if and only if p = g. Using individual samples x, y instead of mini-batches X, Y, BID23 showed that such generalizations of the energy distance can equivalently be viewed as a form of maximum mean discrepancy, where the MMD kernel k is related to the distance function via d(x, y) = k(x, x) + k(y, y) − 2k(x, y). We find the energy distance perspective more intuitive here and follow Cramer GAN in using this perspective instead. We are free to choose any metric d for use in Equation 7, but not all choices will be equally discriminative when used for generative modeling. Here, we choose d to be the entropy-regularized Wasserstein distance, or Sinkhorn distance, as defined for mini-batches in Equation 5. Although the average over mini-batch Sinkhorn distances is not a valid metric over probability distributions p, g, resulting in the biased gradients problem discussed in Section 2, the Sinkhorn distance is a valid metric between individual mini-batches, which is all we require for use inside the generalized energy distance. Putting everything together, we arrive at our final distance function over distributions, which we call the Mini-batch Energy Distance. Like with the Cramer distance, we typically work with the squared distance, which we define as
D²_MED(p, g) = 2 E[W_c(X, Y)] − E[W_c(X, X′)] − E[W_c(Y, Y′)],    (8)
where X, X′ are independently sampled mini-batches from distribution p and Y, Y′ are independent mini-batches from g. We include the subscript c to make explicit that this distance depends on the choice of transport cost function c, which we will learn adversarially as discussed in Section 4. D²_MED(p, g) still incorporates the primal form optimal transport of the Sinkhorn distance, which in Section 5 we show leads to much stronger discriminative power and more stable generative modeling. In concurrent work, BID8 independently propose a very similar loss function to ours, but using a single sample from the data and generator distributions. We obtained best results using two independently sampled mini-batches from each distribution. In the last section we defined the mini-batch energy distance which we propose using for training generative models. However, we left undefined the transport cost function c(x, y) on which it depends. One possibility would be to choose c to be some fixed function over vectors, like Euclidean distance, but we found this to perform poorly in preliminary experiments. Although minimizing the mini-batch energy distance D²_MED(p, g) guarantees statistical consistency for simple fixed cost functions c like Euclidean distance, the resulting statistical efficiency is generally poor in high dimensions.
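A minimal numpy sketch of a sample estimator of Equation 8, reusing the `sinkhorn` helper from the sketch above. A simple fixed Euclidean cost is used here purely for illustration (Section 4 replaces it with an adversarially learned cost), and the four-cross-term combination is a scaled version of the Equation 8 estimator, matching the loss used for training in Algorithm 1.

import numpy as np

def euclidean_cost(A, B):
    """Pairwise Euclidean transport costs between two (K, dim) mini-batches."""
    return np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)

def med_squared(X, Xp, Y, Yp, cost=euclidean_cost):
    """Scaled estimator of D^2_MED from two data (X, Xp) and two model (Y, Yp) batches."""
    W = lambda A, B: sinkhorn(cost(A, B))[1]
    return (W(X, Y) + W(X, Yp) + W(Xp, Y) + W(Xp, Yp)
            - 2.0 * W(X, Xp) - 2.0 * W(Y, Yp))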
This means that there typically exist many bad distributions g for which D²_MED(p, g) is so close to zero that we cannot tell p and g apart without requiring an enormous sample size. To solve this we propose learning the cost function adversarially, so that it can adapt to the generator distribution g and thereby become more discriminative. In practice we implement this by defining c to be the cosine distance between vectors v_η(x) and v_η(y), where v_η is a deep neural network that maps the images in our mini-batch into a learned latent space. That is, we define the transport cost to be
c_η(x, y) = 1 − (v_η(x) · v_η(y)) / (‖v_η(x)‖₂ ‖v_η(y)‖₂),    (9)
where we choose η to maximize the resulting mini-batch energy distance. In practice, training our generative model g_θ and our adversarial transport cost c_η is done by alternating gradient descent, as is standard practice in GANs BID10. Here we choose to update the generator more often than we update our critic. This is contrary to standard practice and ensures our cost function c does not become degenerate. If c were to assign zero transport cost to two non-identical regions in image space, the generator would quickly adjust to take advantage of this. Whereas a quickly adapting critic controls the generator in standard GANs, this works the other way around in our case. Contrary to standard GANs, our generator has a well defined and statistically consistent training objective even when the critic is not updated, as long as the cost function c is not degenerate. We also investigated forcing v_η to be one-to-one by parameterizing it using a RevNet (Gomez et al.), thereby ensuring c cannot degenerate, but this proved unnecessary if the generator is updated often enough. Our full training procedure is described in Algorithm 1, and is visually depicted in FIG1. Here we compute the matching matrix M in W_c using the Sinkhorn algorithm. Unlike Genevay et al., we do not backpropagate through this algorithm. Ignoring the gradient flow through the matchings M is justified by the envelope theorem (see e.g. BID3): Since M is chosen to minimize W_c, the gradient of W_c with respect to this variable is zero (when projected into the allowed space M). Algorithm 1 assumes we use standard SGD for optimization, but we are free to use other optimizers. In our experiments we use Adam BID12. Our algorithm for training generative models can be generalized to include conditional generation of images given some side information s, such as a text description of the image or a label. When generating an image y we simply draw s from the training data and condition the generator on it. The rest of the algorithm is identical to Algorithm 1 but with (Y, S) in place of Y, and similar substitutions for X, X′, Y′. The full algorithm for conditional generation is detailed in Algorithm 2 in the appendix.
Algorithm 1 Optimal Transport GAN (OT-GAN) training algorithm with step size α, using minibatch SGD for simplicity
Require: n_gen, the number of iterations of the generator per critic iteration
Require: η_0, initial critic parameters; θ_0, initial generator parameters
1: for t = 1 to N do
2:   Sample X, X′, two independent mini-batches from real data, and Y, Y′, two independent mini-batches from the generated samples
3:   L ← W_c(X, Y) + W_c(X, Y′) + W_c(X′, Y) + W_c(X′, Y′) − 2 W_c(X, X′) − 2 W_c(Y, Y′)
4:   if t mod (n_gen + 1) = 0 then
5:     η ← η + α · ∇_η L
6:   else
7:     θ ← θ − α · ∇_θ L
8:   end if
9: end for
In this section, we demonstrate the improved stability and consistency of the proposed method on five different datasets with increasing complexity. One advantage of OT-GAN compared to regular GAN is that for any setting of the transport cost c, i.e.
any fixed critic, the objective is statistically consistent for training the generator g. Even if we stop updating the critic, the generator should thus never diverge. With a bad fixed cost function c the signal for learning g may be very weak, but at least it should never point in the wrong direction. We investigate whether this theoretical property holds in practice by examining a simple toy example. We train generative models using different types of GAN on a 2D mixture of 8 Gaussians, with means arranged on a circle. The goal for the generator is to recover all 8 modes. For the proposed method and all the baseline methods, the architectures are simple MLPs with ReLU activations. A similar experimental setting has been considered in BID16 to demonstrate the mode coverage behavior of various GAN models. There, GANs using mini-batch features, DAN-S, are shown to capture all 8 modes when training converges. To test the consistency of GAN models, we stop updating the discriminator after 15k iterations and visualize the generator distribution for an additional 25k iterations. As shown in FIG2, mode collapse occurs in a mini-batch feature GAN after a few thousand iterations of training with a fixed discriminator. However, using the mini-batch energy distance, the generator does not diverge and the generated samples still cover all 8 modes of the data. The first column shows the data distribution. The top row shows the training of OT-GAN using the mini-batch energy distance. The bottom row shows the training with the original GAN loss (DAN-S). The latter collapses to 3 out of 8 modes after fixing the discriminator, while OT-GAN remains consistent. CIFAR-10 is a well-studied dataset of 32×32 color images for generative models BID14. We use this dataset to investigate the importance of the different design decisions made with OT-GAN, and we compare the visual quality of its generated samples with other state-of-the-art GAN models. Our model and the other reported models are trained in an unsupervised manner. We choose the "inception score" as a numerical assessment to compare the visual quality of samples generated by different models. Our generator and critic are standard convnets, similar to those used by DCGAN BID17, but without any batch normalization, layer normalization, or other stabilizing additions. Appendix B contains additional architecture and training details. We first investigate the effect of batch size on training stability and sample quality. As shown in Figure 3, training is not very stable when the batch size is small (i.e., 200). As batch size increases, training becomes more stable and the inception score of samples increases. Unlike previous methods, our objective (the mini-batch energy distance, Section 3) depends on the chosen mini-batch size: Larger mini-batches are more likely to cover many modes of the data distribution, thereby not only yielding lower variance estimates but also making our distance metric more discriminative. To reach the large batch sizes needed for optimal performance we make use of multi-GPU training. In this work we only use up to 8 GPUs per experiment, but we anticipate more GPUs to be useful when using larger models. In Figure 4 we present the samples generated by our model trained with a batch size of 8000. In addition, we also compare with the sample quality of other state-of-the-art GAN models in Table 1. OT-GAN achieves a score of 8.47 ± .12, outperforming all baseline models.
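A minimal numpy sketch of the inception score used for these comparisons. Treating `probs` as Inception-network class probabilities p(y|x) for generated samples is an assumption; in practice the score is usually averaged over several splits of the sample set.

import numpy as np

def inception_score(probs, eps=1e-12):
    """probs: (N, n_classes) softmax outputs; returns exp(E_x KL(p(y|x) || p(y)))."""
    p_y = probs.mean(axis=0, keepdims=True)          # marginal class distribution
    kl = (probs * (np.log(probs + eps) - np.log(p_y + eps))).sum(axis=1)
    return float(np.exp(kl.mean()))

probs = np.random.dirichlet(np.ones(1000), size=5000)  # placeholder predictions
print(inception_score(probs))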
To evaluate the importance of using optimal transport in OT-GAN, we repeat our CIFAR-10 experiment with random matching of samples. Our mini-batch energy distance objective remains valid when we match samples randomly rather than using optimal transport. In this case the mini-batch energy distance reduces to the regular (generalized) energy distance. We repeat our CIFAR-10 experiment and train a generator with the same architecture and hyperparameters as above, but with random matching of samples instead of optimal transport. The highest resulting Inception score achieved during the training process is 4.64 using this approach, as compared to 8.47 with optimal transport. Figure 5 shows a random sample from the resulting model. Figure 4: Samples generated by OT-GAN on CIFAR-10, without using labels. Figure 5: Samples generated without using optimal transport. To illustrate the ability of OT-GAN in generating high quality images on more complex datasets, we train OT-GAN to generate 128×128 images on the dog subset of ImageNet BID20. A smaller batch size of 2048 is used due to GPU memory constraints. As shown in Figure 6, the samples generated by OT-GAN contain fewer nonsensical images, and the sample quality is significantly better than that of a tuned DCGAN variant, which still suffers from mode collapse. The superior image quality is confirmed by the inception score achieved by OT-GAN (8.97 ± 0.09) on this dataset, which outperforms that of DCGAN (8.19 ± 0.11). Figure 6: ImageNet Dog subset samples generated by OT-GAN (left) and DCGAN (right). To further demonstrate the effectiveness of the proposed method on conditional image synthesis, we compare OT-GAN with state-of-the-art models on text-to-image generation (BID19; BID25). As shown in Table 2, the images generated by OT-GAN with batch size 2048 also achieve the best inception score here. Example images generated by our conditional generative model on the CUB test set are presented in FIG4.
Table 2: Inception scores on the CUB test set by state-of-the-art methods (BID19; BID25) and the proposed OT-GAN; higher inception scores mean better image quality. The three prior methods score 2.88 ± .04, 3.62 ± .07, and 3.70 ± .04, versus 3.84 ± .05 for OT-GAN.
We have presented OT-GAN, a new variant of GANs where the generator is trained to minimize a novel distance metric over probability distributions. This metric, which we call mini-batch energy distance, combines optimal transport in primal form with an energy distance defined in an adversarially learned feature space, resulting in a highly discriminative distance function with unbiased mini-batch gradients. OT-GAN was shown to be uniquely stable when trained with large mini-batches and to achieve state-of-the-art results on several common benchmarks. One downside of OT-GAN, as currently proposed, is that it requires large amounts of computation and memory. We achieve the best results when using very large mini-batches, which increases the time required for each update of the parameters. All experiments in this paper, except for the mixture of Gaussians toy example, were performed using 8 GPUs and trained for several days. In future work we hope to make the method more computationally efficient, as well as to scale up our approach to multi-machine training to enable generation of even more challenging and high resolution image datasets. A unique property of OT-GAN is that the mini-batch energy distance remains a valid training objective even when we stop training the critic.
Our implementation of OT-GAN updates the generative model more often than the critic, where GANs typically do this the other way around (see e.g. BID11). As a result we learn a relatively stable transport cost function c(x, y), describing how (dis)similar two images are, as well as an image embedding function v_η(x) capturing the geometry of the training data. Preliminary experiments suggest these learned functions can be used successfully for unsupervised learning and other applications, which we plan to investigate further in future work.
Algorithm 2 Conditional Optimal Transport GAN (OT-GAN) training algorithm with step size α, using minibatch SGD for simplicity
Require: n_gen, the number of iterations of the generator per critic iteration
Require: η_0, initial critic parameters; θ_0, initial generator parameters
1: for t = 1 to N do
2:   Sample (X, S), (X′, S′), two independent mini-batches from real data with side information, and (Y, S), (Y′, S′), two independent mini-batches from the generator, re-using the same side information
3:   L ← W_c(X, Y) + W_c(X, Y′) + W_c(X′, Y) + W_c(X′, Y′) − 2 W_c(X, X′) − 2 W_c(Y, Y′)
4:   if t mod (n_gen + 1) = 0 then
5:     η ← η + α · ∇_η L
6:   else
7:     θ ← θ − α · ∇_θ L
8:   end if
9: end for
B CIFAR-10 ARCHITECTURE AND TRAINING DETAILS
The generator and critic are implemented as convolutional networks. Their architectures are loosely based on DCGAN with various modifications. Weight normalization and data-dependent initialization BID21 are used for both. The generator maps latent codes sampled from a 100-dimensional uniform distribution between -1 and 1 to 32×32 color images. The main module of the generator is a 2×2 nearest-neighbor upsampling operation followed by a convolution with a 5×5 kernel using gated linear units BID5. The main module of the critic is a convolution with a 5×5 kernel and stride 2 using the concatenated ReLU activation function BID24. Notably, the generator and critic do not use an activation normalization technique such as batch or layer normalization. We train the model using Adam with a learning rate of 3 × 10^-4, β₁ = 0.5, β₂ = 0.999. We update the generator 3 times for every critic update. OT-GAN includes two additional hyperparameters for the Sinkhorn algorithm: the number of iterations to run the algorithm, and 1/λ, which is the entropy penalty of alignments. Initial tuning found a value of 500 to work well for both. To illustrate the importance of learning the transport cost function adversarially, we repeat our CIFAR-10 experiment using cosine distance defined in the original feature space: c(x, y) = 1 − (x · y)/(‖x‖₂ ‖y‖₂), where x, y are original image pixel values. In this case the transport cost function is a fixed distance function, while all other experimental settings are the same as those of OT-GAN. The highest inception score during the training process is 4.93, as compared to 8.47 when learning the cost function adversarially using another neural network. The generated samples are shown in FIG5. To further investigate sample diversity and mode collapse in GANs, we train the same generator using DCGAN and OT-GAN on the ImageNet dog dataset for a large number of epochs. For DCGAN we observe mode collapse starting to occur after about 900 epochs, as indicated in Figure 9. The model does not recover from this if we continue training. We have observed similar behavior for many other types of GAN. For OT-GAN we continued to train for 13000 epochs on this dataset but never observed any mode collapse or reduction in sample diversity. Figure 9: ImageNet dog samples generated with DCGAN (left) after 900 epochs and OT-GAN (right) after 13000 epochs.
When training long enough, DCGAN suffers from mode collapse as indicated by the highlighted samples. We did not observe any mode collapse for OT-GAN, even when training for many more epochs.
An extension of GANs combining optimal transport in primal form with an energy distance defined in an adversarially learned feature space.
We build a virtual agent for learning language in a 2D maze-like world. The agent sees images of the surrounding environment, listens to a virtual teacher, and takes actions to receive rewards. It interactively learns the teacher's language from scratch based on two language use cases: sentence-directed navigation and question answering. It learns simultaneously the visual representations of the world, the language, and the action control. By disentangling language grounding from other computational routines and sharing a concept detection function between language grounding and prediction, the agent reliably interpolates and extrapolates to interpret sentences that contain new word combinations or new words missing from training sentences. The new words are transferred from the answers of language prediction. Such a language ability is trained and evaluated on a population of over 1.6 million distinct sentences consisting of 119 object words, 8 color words, 9 spatial-relation words, and 50 grammatical words. The proposed model significantly outperforms five comparison methods for interpreting zero-shot sentences. In addition, we demonstrate human-interpretable intermediate outputs of the model in the appendix. Some empiricists argue that language may be learned based on its usage: the successful use of a word reinforces the understanding of its meaning as well as the probability of it being used again in the future. BID3 emphasizes the role of social interaction in helping a child develop the language, and posits the importance of the feedback and reinforcement from the parents during the learning process. This paper takes a positive view of the above behaviorism and tries to explore some of the ideas by instantiating them in a 2D virtual world where interactive language acquisition happens. This interactive setting contrasts with a common learning setting in that language is learned from dynamic interactions with environments instead of from static labeled data. Language acquisition can go beyond mapping language as input patterns to output labels for merely obtaining high rewards or accomplishing tasks. We take a step further to require the language to be grounded BID13. Specifically, we consult the paradigm of procedural semantics BID24, which posits that words, as abstract procedures, should be able to pick out referents. We will attempt to explicitly link words to environment concepts instead of treating the whole model as a black box. Such a capability also implies that, depending on the interactions with the world, words would have particular meanings in a particular context, and some content words in the usual sense might not even have meanings in our case. As a result, the goal of this paper is to acquire "in-context" word meanings regardless of their suitability in all scenarios. On the other hand, it has been argued that a child's exposure to adult language provides inadequate evidence for language learning BID7, but some induction mechanism should exist to bridge this gap. This property is critical for any AI system to learn an infinite number of sentences from a finite amount of training data. This type of generalization problem is specially addressed in our problem setting. After training, we want the agent to generalize to interpret zero-shot sentences of two types:
1) interpolation, new combinations of previously seen words for the same use case, or 2) extrapolation, new words transferred from other use cases and models. (FIG0 caption: testing ZS2 sentences contain a new word, e.g., "watermelon", that never appears in any training sentence but is learned from a training answer. The figure is only a conceptual illustration of language generalization; in practice it might take many training sessions before the agent can generalize. Due to space limitations, the maps are only partially shown.) In the following, we will call the first type ZS1 sentences and the second type ZS2 sentences. Note that so far the zero-shot problems addressed by most recent work BID14 BID4 on interactive language learning belong to the category of ZS1. In contrast, a reliable interpretation of ZS2 sentences, which is essentially a transfer learning problem, will be a major contribution of this work. We created a 2D maze-like world called XWORLD (FIG0) as a testbed for interactive grounded language acquisition and generalization. In this world, a virtual agent has two language use cases: navigation (NAV) and question answering (QA). For NAV, the agent needs to navigate to correct places indicated by language commands from a virtual teacher. For QA, the agent must correctly generate single-word answers to the teacher's questions. NAV tests language comprehension, while QA additionally tests language prediction. They happen simultaneously: when the agent is navigating, the teacher might ask questions regarding its current interaction with the environment. Once the agent reaches the target or the time is up, the current session ends and a new one is randomly generated according to our configuration (Appendix B). The ZS2 sentences defined in our setting require word meanings to be transferred from single-word answers to sentences, or more precisely, from language prediction to grounding. This is achieved by establishing an explicit link between grounding and prediction via a common concept detection function, which constitutes the major novelty of our model. With this transferring ability, the agent is able to comprehend a question containing a new object learned from an answer, without retraining the QA pipeline. It is also able to navigate to a freshly taught object without retraining the NAV pipeline. It is worthwhile emphasizing that this seemingly "simple" world in fact poses great challenges for language acquisition and generalization, because: The state space is huge. Even for a 7×7 map with 15 wall blocks and 5 objects selected from 119 distinct classes, there are already octillions (10^27) of possible different configurations, not to mention the intra-class variance of object instances (see FIG0 in the appendix). For two configurations that differ in only one block, their successful navigation paths could be completely different. This requires an accurate perception of the environment. Moreover, the configuration constantly changes from session to session, and from training to testing. In particular, the target changes across sessions in both location and appearance. The goal space implied by the language for navigation is huge. For a vocabulary containing only 185 words, the total number of distinct commands that can be said by the teacher conforming to our defined grammar is already over half a million. Two commands that differ by only one word could imply completely different goals. This requires an accurate grounding of language. The environment demands a strong language generalization ability from the agent. The agent has to learn to interpret zero-shot sentences that might be as long as 13 words.
It has to "plug" the meaning of a new word or word combination into a familiar sentential context while trying to still make sense of the unfamiliar whole. The recent work BID14 BID4 addresses ZS1 (for short sentences with several words) but not ZS2 sentences, which is a key difference between our learning problem and theirs. We describe an end-to-end model for the agent to interactively acquire language from scratch and generalize to unfamiliar sentences. Here "scratch" means that the model does not hold any assumption of the language semantics or syntax. Each sentence is simply a sequence of tokens with each token being equally meaningless in the beginning of learning. This is unlike some early pioneering systems (e.g., SHRDLU BID23 and ABIGAIL ) that hard-coded the syntax or semantics to link language to a simulated world-an approach that presents scalability issues. There are two aspects of the interaction: one is with the teacher (i.e., language and rewards) and the other is with the environment (e.g., stepping on objects or hitting walls). The model takes as input RGB images, sentences, and rewards. It learns simultaneously the visual representations of the world, the language, and the action control. We evaluate our model on randomly generated XWORLD maps with random agent positions, on a population of over 1.6 million distinct sentences consisting of 119 object words, 8 color words, 9 spatial-relation words, and 50 grammatical words. Detailed analysis (Appendix A) of the trained model shows that the language is grounded in such a way that the words are capable to pick out referents in the environment. We specially test the generalization ability of the agent for handling zero-shot sentences. The average NAV success rates are 84.3% for ZS1 and 85.2% for ZS2 when the zero-shot portion is half, comparable to the rate of 90.5% in a normal language setting. The average QA accuracies are 97.8% for ZS1 and 97.7% for ZS2 when the zero-shot portion is half, almost as good as the accuracy of 99.7% in a normal language setting. Our model incorporates two objectives. The first is to maximize the cumulative reward of NAV and the second is to minimize the classification cost of QA. For the former, we follow the standard reinforcement learning (RL) paradigm: the agent learns the action at every step from reward signals. It employs the actor-critic (AC) algorithm to learn the control policy (Appendix E). For the latter, we adopt the standard supervised setting of Visual QA : the groundtruth answers are provided by the teacher during training. The training cost is formulated as the multiclass cross entropy. The model takes two streams of inputs: images and sentences. The key is how to model the language grounding problem. That is, the agent must link (either implicitly or explicitly) language concepts to environment entities to correctly take an action by understanding the instruction in the current visual context. A straightforward idea would be to encode the sentence s with an RNN and encode the perceived image e with a CNN, after which the two encoded representations are mixed together. Specifically, let the multimodal module be M, the action module be A, and the prediction module be P, this idea can be formulated as: BID14; BID4 all employ the above paradigm. In their implementations, M is either vector concatenation or element-wise product. 
For any particular word in the sentence, fusion with the image could happen anywhere starting from M all the way to the end, right before a label is output. This is due to the fact that the RNN folds the string of words into a compact embedding which then goes through the subsequent black-box computations. (Figure 2 caption: an overview of the model. We process e by always placing the agent at the center via zero padding. This helps the agent learn navigation actions by reducing the variety of target representations. c, a, and v are the predicted answer, the navigation action, and the critic value for policy gradient, respectively. φ denotes the concept detection function shared by language grounding and prediction. M_A generates a compact representation from x_loc and h for navigation (Appendix C).) Therefore, language grounding and other computational routines are entangled. Because of this, we say that this paradigm has an implicit language grounding strategy. Such a strategy poses a great challenge for processing a ZS2 sentence because it is almost impossible to predict how a new word learned from language prediction would perform in the complex entanglement involved. Thus a careful inspection of the grounding process is needed. The main idea behind our approach is to disentangle language grounding from other computations in the model. This disentanglement makes it possible for us to explicitly define language grounding around a core function that is also used by language prediction. Specifically, both grounding and prediction are cast as concept detection problems, where each word (embedding) is treated as a detector. This opens up the possibility of transferring word meanings from the latter to the former. The overall architecture of our model is shown in Figure 2. We begin with our definition of "grounding." We define a sentence as generally a string of words of any length. A single word is a special case of a sentence. Given a sentence s and an image representation h = CNN(e), we say that s is grounded in h as x if I) h consists of M entities, where an entity is a subset of visual features, and II) x ∈ {0, 1}^M, with each entry x[m] representing a binary selection of the mth entity of h. Thus x is a combinatorial selection over h. Furthermore, x is explicit if III) it is formed by the grounding of (some) individual words of s (i.e., compositionality). We say that a framework has an explicit grounding strategy if IV) all language-vision fusions in the framework are explicit groundings. For our problem, we propose a new framework with an explicit grounding strategy: a = A(M_A(x, h)), c = P(M_P(x, h)), (Eq. 2) where the sole language-vision fusion x in the framework is an explicit grounding. Notice in the above how the grounding process, as a "bottleneck," allows only x but not other linguistic information to flow to the downstream of the network. That is, M_A, M_P, A, and P all rely on the grounding x but not on other sentence representations. By doing so, we expect x to summarize all the necessary linguistic information for performing the tasks. The benefits of this framework are two-fold. First, the explicit grounding strategy provides a conceptual abstraction BID12 that maps high-dimensional linguistic input to a lower-dimensional conceptual state space and abstracts away irrelevant input signals. This improves the generalization for similar linguistic inputs. Given e, all that matters for NAV and QA is x.
This guarantees that the agent will perform in exactly the same way on the same image e, even given different sentences, as long as their groundings x are the same. It disentangles language grounding from subsequent computations such as obstacle detection, path planning, action making, and feature classification, which all should be inherently language-independent routines. Second, because x is explicit, the roles played by the individual words of s in the grounding are interpretable. This is in contrast to Eq. 1, where the roles of individual words are unclear. The interpretability provides a possibility of establishing a link between language grounding and prediction, which we will perform in the remainder of this section. Let h ∈ R^{N×D} be a spatially flattened feature cube (originally in 3D, now with the 2D spatial domain collapsed into 1D for notational simplicity), where D is the number of channels and N is the number of locations in the spatial domain. We adopt three definitions for an entity: 1) a feature vector at a particular image location, 2) a particular feature map along the channel dimension, and 3) a scalar feature at the intersection of a feature vector and a feature map. Their groundings are denoted as x_loc(s, h) ∈ {0, 1}^N, x_feat(s, h) ∈ {0, 1}^D, and x_cube(s, h) ∈ {0, 1}^{N×D}, respectively. In the rest of the paper, we remove s and h from x_loc, x_feat, and x_cube for notational simplicity while always assuming a dependency on them. We assume that x_cube is a low-rank matrix that can be decomposed into the two: x_cube = x_loc · x_feat^T. To make the model fully differentiable, in the following we relax the definition of grounding so that x_loc ∈ [0, 1]^N, x_feat ∈ [0, 1]^D, and x_cube ∈ [0, 1]^{N×D}. The attention map x_loc is responsible for image spatial attention. The channel mask x_feat is responsible for selecting image feature maps, and is assumed to be independent of the specific h, namely, x_feat(s, h) = x_feat(s). Intuitively, h can be modulated by x_feat before being sent to downstream processing. A recent paper by de Vries et al. BID8 proposes an even earlier modulation of the visual processing by directly conditioning some of the parameters of a CNN on the linguistic input. Finally, we emphasize that our explicit grounding, even though instantiated as a soft attention mechanism, is different from the existing visual attention models. Some attention models, such as that of de Vries et al., violate definitions III and IV. Some work violates definition IV in that language is fused with vision by a multilayer perceptron (MLP) after image attention. BID0 proposes a pipeline similar to ours but violates definition III, in that the image spatial attention is computed from a compact question embedding output by an RNN. With language grounding disentangled, we now relate it to language prediction. This relation is a common concept detection function. We assume that every word in the vocabulary, as a concept, is detectable against entities of type 1) as defined in Section 2.2.1. For a meaningful detection of spatial-relation words that are irrelevant to image content, we incorporate parametric feature maps into h to learn spatial features. Assuming a precomputed x_feat, the concept detection operates by sliding over the spatial domain of the feature cube h, and can be written as a function φ: χ = φ(h, x_feat, u), (Eq. 3) where χ ∈ R^N is a detection score map and u is a word embedding vector.
This function scores the embedding u against each feature vector of h, modulated by x_feat, which selects which feature maps to use for the scoring. (Figure 3 caption: an illustration of the attention cube x_cube = x_loc · x_feat^T, where x_loc attends to image regions and x_feat selects feature maps. In this example, for the question "What is the color of the object in the northeast?", x_loc is computed from "northeast." In order for the agent to correctly answer "red" (color) instead of "watermelon" (object name), x_feat has to be computed from the sentence pattern "What... color...?") Intuitively, each score on χ indicates the detection response of the feature vector in that location. A higher score represents a higher detection response. While there are many potential forms for φ, we implement it as φ(h, x_feat, u) = h · (x_feat ∘ u), where ∘ is the element-wise product. To do so, we let the word embeddings u ∈ R^D, where D is equal to the number of channels of h. For prediction, we want to output a word given a question s and an image e. Suppose that x_loc and x_feat are the grounding of s. Based on the detection function φ, M_P outputs a score vector m ∈ R^K over the entire lexicon, where each entry of the vector is: m[k] = x_loc^T φ(h, x_feat, u_k), (Eq. 4) where u_k is the kth entry of the word embedding table. The above suggests that m[k] is the result of weighting the scores on the map χ_k by x_loc. It represents the correctness of the kth lexical entry as the answer to the question s. To predict an answer c, a softmax is applied to m, consistent with the multiclass cross-entropy training cost. Note that the role of x_feat in the prediction is to select which feature maps are relevant to the question s. Otherwise it would be confusing for the agent what to predict (e.g., whether to predict a color or an object name). By using x_feat, we expect that different feature maps encode different image attributes (see an example in the caption of Figure 3). More analysis of x_feat is performed in Appendix A. To compute x_cube, we compute x_loc and x_feat separately. We want x_loc to be built on the detection function φ. One can expect to compute a series of score maps χ of individual words and merge them into x_loc. Suppose that s consists of L words {w_l}, with w_l = u_k being some word k in the dictionary. Let τ(s) be a sequence of indices {l_i} where 0 ≤ l_i < L. This sequence function τ decides which words of the sentence are selected and organized in what order. We define x_loc as x_loc = Υ(φ(h, 1, w_{l_1}), ..., φ(h, 1, w_{l_i}), ..., φ(h, 1, w_{l_I})), (Eq. 5) where 1 ∈ {0, 1}^D is a vector of ones, meaning that it selects all the feature maps for detecting w_{l_i}, and Υ is an aggregation function that combines the sequence of score maps χ_{l_i} of individual words. As such, φ makes it possible to transfer new words from Eq. 4 to Eq. 5 during test time. (Figure 4 caption: a symbolic example of the 2D convolution for transforming attention maps, illustrated with cross correlation for "apple", "northwest", and "northwest of apple". A 2D convolution can be decomposed into two steps: flipping and cross correlation. The attention map of "northwest" is treated as an offset filter to translate that of "apple." Note that in practice, the attention is continuous and noisy, and the interpreter has to learn to find out the words (if any) to perform this convolution.) If we were provided with an oracle that is able to output a parsing tree for any sentence, we could set τ and Υ according to the tree semantics. Neural module networks (NMNs) BID15 rely on such a tree for language grounding. They generate a network of modules where each module corresponds to a tree node. However, labeled trees are needed for training. Below we propose to learn τ and Υ based on word attention to bypass the need for labeled structured data. (First, the sketch below illustrates the concept detection and answer scoring just described.)
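To make the computation concrete, here is a minimal numpy sketch of the detection function φ of Eq. 3 and the answer scoring of Eq. 4. This is our illustration, not the authors' code: the shapes, the uniform spatial attention, and the random tensors are assumptions for demonstration only.

```python
import numpy as np

def phi(h, x_feat, u):
    # Concept detection (Eq. 3): score the word embedding u against every
    # feature vector of the flattened cube h, with channels gated by x_feat.
    # h: (N, D), x_feat: (D,), u: (D,)  ->  chi: (N,) detection score map
    return h @ (x_feat * u)

def answer_scores(h, x_loc, x_feat, E):
    # Answer scoring (Eq. 4): weight each word's detection map by the
    # spatial attention x_loc; E is the (K, D) word embedding table.
    return np.array([x_loc @ phi(h, x_feat, E[k]) for k in range(len(E))])

# Toy usage with random tensors (no trained weights).
N, D, K = 49, 64, 185                      # locations, channels, lexicon size
rng = np.random.default_rng(0)
h = rng.normal(size=(N, D))
E = rng.normal(size=(K, D))
x_loc = np.full(N, 1.0 / N)                # uniform spatial attention
x_feat = rng.uniform(size=D)               # soft channel mask in [0, 1]
m = answer_scores(h, x_loc, x_feat, E)
p = np.exp(m - m.max()); p /= p.sum()      # softmax over the lexicon
print(int(p.argmax()))                     # index of the predicted answer
```

Note how new words only require a new row in the embedding table E: the same φ then serves both prediction (Eq. 4) and grounding (Eq. 5), which is exactly what enables the ZS2 transfer.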
We start by feeding a sentence s = {w_l} of length L to a bidirectional RNN. It outputs a compact sentence embedding s_emb and a sequence of L word context vectors w_l. Each w_l summarizes the sentential pattern around that word. We then employ a meta controller, called the interpreter, in an iterative manner. For the ith interpretation step, the interpreter computes a soft word attention from the similarities S_cos(p_{i−1}, w_l) between its state and the word context vectors, updating its state p_i with a GRU (Eq. 6), where S_cos is cosine similarity and GRU is the gated recurrent unit BID6. Here we use τ* to represent an approximation of τ via soft word attention. We set p_0 to the compact sentence embedding s_emb. After this, the attended word s_i is fed to the detection function φ. The interpreter aggregates the score map of s_i through a gated update in which the new map is combined with the previous attention map via a 2D convolution (Eq. 7), where * denotes the 2D convolution, σ is sigmoid, and ρ_i is a scalar update gate; W and b are parameters to be learned. Finally, the interpreter outputs x_loc^I as x_loc, where I is the predefined maximum step. Note that in the above we formulate the map transform as a 2D convolution. This operation enables the agent to reason about spatial relations. Recall that each attention map x_loc is egocentric. When the agent needs to attend to a region specified by a spatial relation referring to an object, it can translate the object attention with the attention map of the spatial-relation word, which serves as a 2D convolutional offset filter (Figure 4). For this reason, we set y_0 as a one-hot map where the map center is one, to represent the identity translation. A similar mechanism of spatial reasoning via convolution was explored by BID20 for a voxel-grid 3D representation. By assumption, the channel mask x_feat is meant to be determined solely from s; namely, which features to use should only depend on the sentence itself, not on the value of the feature cube h. Thus it is computed as x_feat = σ(MLP(RNN(s))) (Eq. 8), where the RNN returns an average state of processing s, followed by an MLP with the sigmoid activation. (A toy numeric sketch of the map-transform convolution is given below.)
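The following toy numpy/scipy sketch illustrates how a 2D convolution translates an object's attention map by a spatial-relation offset, as in Figure 4. It is an illustration under our assumptions: the learned gate ρ_i, the σ(W·+b) transform, and real (noisy) detection maps are omitted, and the one-hot maps are idealized.

```python
import numpy as np
from scipy.signal import convolve2d

SIZE, CENTER = 7, 3                          # a 7x7 egocentric attention map
apple = np.zeros((SIZE, SIZE))
apple[4, 4] = 1.0                            # "apple" attended at (4, 4)
northwest = np.zeros((SIZE, SIZE))
northwest[CENTER - 1, CENTER - 1] = 1.0      # offset filter: one cell up-left

# Convolving the object map with the offset filter translates the peak,
# yielding the attention for "northwest of apple".
combined = convolve2d(apple, northwest, mode="same")
print(np.unravel_index(combined.argmax(), combined.shape))   # -> (3, 3)
```

A filter whose peak sits at the map center would reproduce the identity translation, which is why y_0 is set to a center-one-hot map.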
3 RELATED WORK Our XWORLD is similar to the 2D block world MazeBase. However, we emphasize the problem of grounded language acquisition and generalization, while they do not. There have been several 3D simulated worlds such as ViZDoom BID18, DeepMind Lab, and Malmo BID16. Still, these other settings are intended for visual perception and control, with less or no language. Our problem setting draws inspiration from the AI roadmap delineated in prior work. Like theirs, we have a teacher in the environment that assigns tasks and rewards to the agent, potentially with a curriculum. Unlike their proposal of using only the linguistic channel, we have multiple perceptual modalities, the fusion of which is believed to be the basis of meaning BID19. Contemporary to our work, several end-to-end systems also address language grounding problems in a simulated world with deep RL (e.g., Misra et al.). Other recent work BID15 on zero-shot multitask learning treats language tokens as (parsed) labels for identifying skills. In these papers, the zero-shot settings are not intended for language understanding. The problem of grounding language in perception can perhaps be traced back to the early work of Siskind (1994; 1999), although no statistical learning was adopted at that time. Our language learning problem is related to some recent work on learning to ground language in images and videos BID27. The navigation task is relevant to robotics navigation under language commands BID5. The question-answering task is relevant to image question answering (VQA) (BID10; BID0; BID8). The interactive setting of learning to accomplish tasks is similar to that of learning to play video games from pixels. For all the experiments, both the sentences and the environments change from session to session, and from training to testing. The sentences are drawn conforming to the teacher's grammar. There are three types of language data: NAV commands, QA questions, and QA answers, which are illustrated in FIG3. In total, there are ~570k NAV commands, ~1m QA questions, and 136 QA answers (all the content words plus "nothing" and minus "between"). The environment configurations are randomly generated from octillions of possibilities of a 7×7 map, conforming to some high-level specifications such as the numbers of objects and wall blocks. For NAV, our model is evaluated on four types of navigation commands: nav_obj: navigate to an object; nav_col_obj: navigate to an object with a specific color; nav_nr_obj: navigate to a location near an object; nav_bw_obj: navigate to a location between two objects. For QA, our model is evaluated on twelve types of questions (rec_* in TAB3). We refer the reader to Appendix B for a detailed description of the experiment settings. Four comparison methods and one ablation method are described below: [CA] A variant of our model that replaces the interpreter with a contextual attention model. This attention model uses a gated RNN to convert a sentence to a filter which is then convolved with the feature cube h to obtain the attention map x_loc. The filter covers the 3×3 neighborhood of each feature vector in the spatial domain. The rest of the model is unchanged. [SAN] An adaptation of a model originally proposed for VQA. We replace our interpreter with its stacked attention model to compute the attention map x_loc. Instead of employing a pretrained CNN as in the original, we train a CNN from scratch to accommodate XWORLD. The CNN is configured as the one employed by our model. The rest of our model is unchanged. [VL] An adaptation of another model originally proposed for VQA. We flatten h and project it to the word embedding space R^D. Then it is appended to the input sentence s as the first word. The augmented sentence goes through an LSTM whose last state is used for both NAV and QA (FIG0, Appendix D). [CE] An adaptation of a model proposed by BID10 which was originally proposed for image captioning. It instantiates L as a vanilla LSTM which outputs a sentence embedding. Then h is projected and concatenated with the embedding. The concatenated vector is used for both NAV and QA (FIG0, Appendix D). This concatenation mechanism is also employed by BID14. [NAVA] An ablation of our model that does not have the QA pipeline (M_P and P) and is trained only on the NAV tasks. The rest of the model is the same. In the following experiments, we train all six approaches (four comparison methods, one ablation, and our model) with a small learning rate of 1×10^-5 and a batch size of 16, for a maximum of 200k minibatches. Additional training details are described in Appendix C. After training, we test each approach on 50k sessions. For NAV, we compute the average success rate of navigation, where a success is defined as reaching the correct location before the time-out of a session. For QA, we compute the average accuracy in answering the questions.
In this experiment, the training and testing sentences (including NAV commands and QA questions) are sampled from the same distribution over the entire sentence space. We call it the normal language setting. The training reward curves and testing results are shown in FIG4. VL and CE have quite low rewards, without convergence. These two approaches do not use the spatial attention x_loc, and thus always attend to whole images with no focus. The region of a target location is tiny compared to the whole egocentric image (a ratio of 1:(7×2−1)^2 = 1:169). It is possible that this difficulty drives the models towards overfitting QA without learning useful representations for NAV. Both CA and SAN obtain rewards and success rates slightly worse than ours. This suggests that in a normal language setting, existing attention models are able to handle previously seen sentences. However, their language generalization abilities, especially on ZS2 sentences, are usually very weak, as we will demonstrate in the next section. The ablation NAVA has a very large variance in its performance. Depending on the random seed, its reward can reach that of SAN (−0.8), or it can be as low as that of CE (−3.0). Comparing it to our full method, we conclude that even though the QA pipeline operates on a completely different set of sentences, it learns useful local sentential knowledge that results in an effective training of the NAV pipeline. Note that all four comparison methods obtained high QA accuracies (>70%, see FIG4), despite their distinct NAV results. This suggests that QA, as a supervised learning task, is easier than NAV as an RL task in our scenario. One reason is that the ground-truth label in QA is a much stronger training signal than the reward in NAV. Another reason might be that NAV additionally requires learning the control module, which is absent in QA. Our more important question is whether the agent has the ability to interpret zero-shot sentences. For comparison, we use CA and SAN from the previous section, as they achieved good performance in the normal language setting. For a zero-shot setting, we set up two language conditions: [ZS1] Some word pairs are excluded from both the NAV commands and the QA questions. We consider three types of unordered combinations of the content words: (object, spatial relation), (object, color), and (object, object). We randomly hold out X% of the word pairs during training. NewWord [ZS2] Some single words are excluded from both the NAV commands and the QA questions, but can appear in the QA answers. We randomly hold out X% of the object words during training. We vary the value of X (12.5, 20.0, 50.0, 66.7, or 90.0) in both conditions to test how sensitive the generalization is to the amount of held-out data. For evaluation, we report the test results only for the zero-shot sentences that contain the held-out word pairs or words. The results are shown in FIG5. We draw three conclusions from the results. First, ZS1 sentences are much easier to interpret than ZS2 sentences. Neural networks seem to inherently have this capability to some extent. This is consistent with what has been observed in some previous work BID14 BID4 that addresses generalization on new word combinations. Second, ZS2 sentences are difficult for CA and SAN. Even with a held-out portion as small as X% = 12.5%, their navigation success rates and QA accuracies drop by up to 80% and 35%, respectively, compared to those in the normal language setting.
In contrast, our model tends to maintain the same results until X = 90.0. Impressively, it achieves an average success rate of 60% and an average accuracy of 83% even when the number of new object words is 9 times that of the seen object words in the NAV commands and QA questions, respectively! Third, in ZS2, if we compare the slopes of the success-rate curves with those of the accuracy curves (as shown in FIG5), we notice that the agent generalizes better on QA than on NAV. This further verifies our finding in the previous section that QA is in general an easier task than NAV in XWORLD. This demonstrates the necessity of evaluating NAV in addition to QA, as NAV requires additional language grounding to control. Interestingly, we notice that nav_bw_obj is an outlier command type for which CA is much less sensitive to the increase of X in ZS2. A deep analysis reveals that for nav_bw_obj, CA learns to cheat by looking for the image region that contains the special pattern of object pairs, without having to recognize the objects. This suggests that neural networks tend to exploit data in unexpected ways to achieve tasks if no constraints are imposed BID21. To sum up, our model exhibits a strong generalization ability on both ZS1 and ZS2 sentences, the latter of which pose a great challenge for traditional language grounding models. Although we use a particular 2D world for evaluation in this work, the promising results imply the potential for scaling to an even larger vocabulary and grammar with a much larger language space. We discuss the possibility of adapting our model to an agent with similar language abilities in a 3D world (e.g., BID16). This is our goal for the future, but here we would like to share some preliminary thoughts. Generally speaking, a 3D world will pose a greater challenge for vision-related computations. The key element of our model is the attention cube x_cube that is intended for an explicit language grounding, including the channel mask x_feat and the attention map x_loc. The channel mask only depends on the sentence, and thus is expected to work regardless of the world's dimensionality. The interpreter depends on a sequence of score maps χ, which for now are computed by multiplying a word embedding with the feature cube (Eq. 3). A more sophisticated definition of φ will be needed to detect objects in a 3D environment. Additionally, the interpreter models the spatial transform of attention as a 2D convolution (Eq. 7). This assumption will not be valid for reasoning about 3D spatial relations on 2D images, and a better transform method that accounts for perspective distortion is required. Lastly, the surrounding environment is only partially observable to a 3D agent. A working memory, such as an LSTM added to the action module A, will be important for navigation in this case. Despite these changes to be made, we believe that our general explicit grounding strategy and the common detection function shared by language grounding and prediction shed some light on the adaptation. We have presented an end-to-end model of a virtual agent for acquiring language from a 2D world in an interactive manner, through visual and linguistic perception channels. After learning, the agent is able to both interpolate and extrapolate to interpret zero-shot sentences that contain new word combinations or even new words. This generalization ability is supported by an explicit grounding strategy that disentangles the language grounding from the subsequent language-independent computations.
It also depends on sharing a detection function between language grounding and prediction as the core computation. This function enables word meanings to transfer from the prediction to the grounding during test time. Promising language acquisition and generalization results have been obtained in the 2D XWORLD. We hope that this work can shed some light on acquiring and generalizing language in a similar way in a 3D world. Channel mask x_feat. We inspect the channel mask x_feat, which allows the model to select certain feature maps from a feature cube h and predict an answer to the question s. We randomly sample 10k QA questions and compute x_feat for each of them using the grounding module L. We divide the 10k questions into 134 groups, where each group corresponds to a different answer. Then we compute a Euclidean distance matrix D where entry D[i, j] is the average distance between the x_feat of a question from the ith group and that from the jth group (FIG6; residue of this figure, namely its axis labels listing the eight color words, the 119 object words from "apple" to "zebra", and the nine spatial-relation words, together with example questions such as "Where is the green apple located?", "What is between the lion and the hedgehog?", "The grid in the east of the cherry?", "The location of the fish is?", and "What is in blue?", is omitted here). The matrix shows that there are three topics (object, color, and spatial relation). The distances computed within a topic are much smaller than those computed across topics. This demonstrates that, with the channel mask, the model is able to look at different subsets of features for questions of different topics, while using the same subset of features for questions of the same topic. It also implies that the feature cube h is learned to organize feature maps according to image attributes. To intuitively demonstrate how the interpreter works, we visualize the word context vectors w_l in Eq. 6 for a total of 20k word locations l (10k from QA questions and the other 10k from NAV commands). Each word context vector is projected down to a space of 50 dimensions using PCA BID17, after which we use t-SNE BID22 to further reduce the dimensionality to 2. The t-SNE method uses a perplexity of 100 and a learning rate of 200, and runs for 1k iterations.
The visualization of the 20k points is shown in FIG7. Recall that in Eq. 6 the word attention is computed by comparing the interpreter state p_{i−1} with the word context vectors w_l. In order to select the content words to generate meaningful score maps via φ, the interpreter is expected to differentiate them from the remaining grammatical words based on their contexts. This expectation is verified by the above visualization, in which the context vectors of the content words (in blue, green, and red) and those of the grammatical words (in black) are mostly separated. FIG7 shows some example questions whose word attentions are computed from the word context vectors. It can be seen that the content words are successfully selected by the interpreter. Attention map x_loc. Finally, we visualize the computation of the attention map x_loc. In each of the following six examples, the intermediate attention maps x_loc^i are shown, and x_loc^I is the final output of the interpreter at the current time. Note that not all the results of the three steps are needed to generate the final output. Some might be discarded according to the value of the update gate ρ_i. As a result, sometimes the interpreter may produce "bogus" intermediate attention maps which do not contribute to x_loc. Each example also visualizes the environment terrain map x_terr (defined in Appendix C), which perfectly detects all the objects and blocks, and thus provides a good guide for the agent to navigate successfully without hitting walls or confounding targets. For a better visualization, the egocentric images are converted back to the normal view. The environment is configured as follows: I) The maximum number of time steps is four times the map size; that is, the agent only has 7×4 = 28 steps to reach a target. II) The number of objects on the map ranges from 1 to 5. III) The number of wall blocks on the map ranges from 0 to 15. IV) The positive reward when the agent reaches the correct location is 1.0. V) The negative rewards for hitting walls and for stepping on non-target objects are −0.2 and −1.0, respectively. VI) The time penalty of each step is −0.1. (Residue of the flattened vocabulary table omitted: it interleaves the 119 object words from "apple" to "zebra" with the eight color words, the nine spatial-relation words, and the grammatical words such as "and", "what", "where", and "please".) The teacher has a vocabulary size of 185.
There are 9 spatial relations, 8 colors, 119 distinct object classes, and 50 grammatical words. Every object class contains 3 different instances. All object instances are shown in FIG0. Every time the environment is reset, a number of object classes are randomly sampled and an object instance is randomly sampled for each class. There are in total 16 types of sentences that the teacher can speak, including 4 types of NAV commands and 12 types of QA questions. Each sentence type has several non-recursive templates, and corresponds to a concrete type of task that the agent must learn to accomplish. In total there are 1,639,015 distinct sentences, with 567,579 for NAV and 1,071,436 for QA. The sentence length varies between 2 and 13. The object, spatial-relation, and color words of the teacher's language are listed in TAB1. These are the content words that can be grounded in XWORLD. All the others are grammatical words. Note that the differentiation between the content and the grammatical words is automatically learned by the agent according to the tasks. Every word is represented by an entry in the word embedding table. The sentence types that the teacher can speak are listed in TAB3. Each type has a triggering condition about when the teacher says that type of sentence. Besides the shown conditions, an extra condition for NAV commands is that the target must be reachable from the current agent location. An extra condition for color-related questions is that the object color must be one of the eight defined colors. If at any time step multiple types are triggered, we randomly sample one for NAV and another for QA. After a sentence type is sampled, we generate a sentence according to the corresponding sentence templates. The environment image e is a 156×156 egocentric RGB image. The CNN in F has four convolutional layers: (3, 3, 64), (2, 2, 64), (2, 2, 256), (1, 1, 256), where (a, b, c) represents a layer configuration of c kernels of size a applied at stride width b. All four layers are ReLU activated. To enable the agent to reason about spatial-relation words (e.g., "north"), we append an additional parametric cube to the convolutional output to obtain h. This parametric cube has the same number of channels as the CNN output, and it is initialized with a zero mean and a zero standard deviation. For NAV, x_loc is used as the target to reach on the image plane. However, knowing this alone does not suffice. The agent must also be aware of walls and possibly confounding targets (other objects) in the environment. Toward this end, M_A further computes an environment terrain map x_terr = σ(h f), where f ∈ R^D is a parameter vector to be learned and σ is sigmoid. We expect that x_terr detects any blocks informative for navigation. Note that x_terr is unrelated to the specific command; it solely depends on the current environment. After stacking x_loc and x_terr together, M_A feeds them to another CNN followed by an MLP. This CNN has two convolutional layers, (3, 1, 64) and (3, 1, 4), both with paddings of 1. It is followed by a three-layer MLP where each layer has 512 units and is ReLU activated. The action module A contains a two-layer MLP, of which the first layer has 512 ReLU-activated units and the second layer is a softmax whose output dimension is equal to the number of actions. We use adagrad BID9 with a learning rate of 10^-5 for stochastic gradient descent (SGD). The reward discount factor is set to 0.99. All the parameters have a default weight decay of 10^-4 × 16. For each layer, its parameters are initialized with zero mean and a standard deviation of 1/√K, where K is the number of parameters of that layer. (A sketch of the perception CNN just described is given below.)
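As a sanity check on the layer specification above, here is a minimal PyTorch sketch. It is our illustration, assuming the "(a, b, c)" triples map to kernel size a, stride b, and c output channels; note that the resulting spatial size of 13 = 2×7−1 matches the egocentric view of the 7×7 map.

```python
import torch
import torch.nn as nn

# Perception CNN: four conv layers (3,3,64), (2,2,64), (2,2,256), (1,1,256),
# each ReLU activated, applied to a 156x156 egocentric RGB image.
perception = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, stride=3), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=2, stride=2), nn.ReLU(),
    nn.Conv2d(64, 256, kernel_size=2, stride=2), nn.ReLU(),
    nn.Conv2d(256, 256, kernel_size=1, stride=1), nn.ReLU(),
)

e = torch.zeros(1, 3, 156, 156)   # batch of one egocentric image
h = perception(e)
print(h.shape)                    # torch.Size([1, 256, 13, 13])
```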
We set the maximum interpretation step I = 3. The whole model is trained end to end. Some additional implementation details of the baselines in Section 4.3 are described below. [CA] Its RNN has 512 units. [VL] Its CNN has four convolutional layers: (3, 2, 64), (3, 2, 64), (3, 2, 128), and (3, 1, 128). This is followed by a fully-connected layer of size 512, which projects the feature cube to the word embedding space. The RNN has 512 units. For either QA or NAV, the RNN's last state goes through a three-layer MLP, of which each layer has 512 units (FIG0). [CE] It has the same layer-size configuration as VL (FIG0). [SAN] Its RNN has 256 units. Following the original approach, we use two attention layers. All the layers of the above baselines are ReLU activated. The agent has one million exploration steps in total, and the exploration rate λ decreases linearly from 1 to 0.1. At each time step, the agent takes an action a ∈ {left, right, up, down} with a probability of λ · (1/4) + (1 − λ) · π_θ(a|s, e), where π_θ is the current policy, and s and e denote the current command and environment image, respectively. To stabilize the learning, we also employ experience replay (ER). The environment inputs, rewards, and the actions taken by the agent in the most recent 10k time steps are stored in a replay buffer. During training, each time a minibatch {a_i, s_i, e_i, r_i}_{i=1}^N is sampled from the buffer, using the rank-based sampler, which has proven to increase training efficiency by prioritizing rare experiences. We then compute the gradient of the actor-critic objective over the minibatch, where v is the value function, θ are the current parameters, θ⁻ are the target parameters that have an update delay, and γ is the discount factor. This gradient maximizes the expected reward while minimizing the temporal-difference (TD) error. Note that because of ER, our AC method is off-policy. To avoid introducing biases into the gradient, importance ratios are needed. However, we ignored them in the above gradient for implementation simplicity. We found that the current implementation worked well in practice for our problem. We exploit curriculum learning BID2 to gradually increase the environment complexity to help the agent learn. The following quantities are increased in proportion to min(1, G′/G), where G′ is the number of sessions trained so far and G is the total number of curriculum sessions: I) The size of the open space on the environment map. II) The number of objects in the environment. III) The number of wall blocks. IV) The number of object classes that can be sampled from. V) The lengths of the NAV command and the QA question. We found that this curriculum is important for efficient learning. Specifically, the gradual changes of quantities IV and V are supported by findings that children learn new words in a linguistic corpus much faster after partial exposure to the corpus. In the experiments, we set G = 25k during training, while we do not use any curriculum during testing (i.e., testing with the maximum difficulty). (A minimal sketch of this linear schedule is given below.)
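For concreteness, the linear curriculum schedule min(1, G′/G) can be sketched as follows. This is a toy illustration only; the range values are taken from the environment configuration above, and the interpolation-to-integer rounding is our assumption.

```python
# Linear curriculum: each difficulty quantity grows with min(1, G'/G).
def curriculum_value(min_v, max_v, sessions_so_far, total_sessions=25_000):
    frac = min(1.0, sessions_so_far / total_sessions)
    return min_v + frac * (max_v - min_v)

# e.g., the number of wall blocks (0 to 15) halfway through the curriculum:
print(round(curriculum_value(0, 15, 12_500)))   # -> 8
```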
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
H1UOm4gA-
Training an agent in a 2D virtual world for grounded language acquisition and generalization.
Reinforcement learning algorithms, though successful, tend to over-fit to training environments, thereby hampering their application to the real world. This paper proposes $\text{WR}^{2}\text{L}$, a robust reinforcement learning algorithm that achieves significantly robust performance on low- and high-dimensional control tasks. Our method formalises robust reinforcement learning as a novel min-max game with a Wasserstein constraint, enabling a correct and convergent solver. Apart from the formulation, we also propose an efficient and scalable solver following a novel zero-order optimisation method that we believe can be useful to numerical optimisation in general. We empirically demonstrate significant gains compared to standard and robust state-of-the-art algorithms on high-dimensional MuJoCo environments. Reinforcement learning (RL) has become a standard tool for solving decision-making problems with feedback, and though significant progress has been made, algorithms often over-fit to training environments and fail to generalise across even slight variations of transition dynamics. Robustness to changes in transition dynamics is a crucial component for adaptive and safe RL in real-world environments. Motivated by real-world applications, recent literature has focused on the above problems, proposing a plethora of algorithms for robust decision-making. Most of these techniques borrow from game theory to analyse, typically in discrete state and action spaces, worst-case deviations of agents' policies and/or environments. These methods have also been extended to linear function approximators and deep neural networks, showing (modest) improvements in performance across a variety of disturbances, e.g., action uncertainties or dynamical model variations. In this paper, we propose a generic framework for robust reinforcement learning that can cope with both discrete and continuous state and action spaces. Our algorithm, termed Wasserstein Robust Reinforcement Learning (WR2L), aims to find the best policy, where any given policy is judged by the worst-case dynamics amongst all candidate dynamics in a certain set. This set is essentially the average Wasserstein ball around a reference dynamics P_0. The constraint makes the problem well-defined, as searching over arbitrary dynamics can only result in non-performing systems. The measure of performance is the standard RL objective, the expected return. Both the policy and the dynamics are parameterised; the policy parameters θ^[k] may be the weights of a deep neural network, and the dynamics parameters φ^[j] the parameters of a simulator or differential equation solver. The algorithm performs estimated descent steps in φ space and, after (almost) convergence, performs an update of the policy parameters, i.e., in θ space. Since φ^[j] may be high-dimensional, we adapt a zeroth-order sampling method to make estimations of gradients, and in order to define the constraint set by which φ^[j] is bounded, we generalise the technique to estimate Hessians (Proposition 2). We emphasise that although access to a simulator with parameterisable dynamics is required, the actual reference dynamics P_0 need not be known explicitly nor learnt by our algorithm. Put another way, we are in the "RL setting", not the "MDP setting" where the transition probability matrix is known a priori.
The difference is made obvious, for example, in the fact that we cannot perform dynamic programming, and a particular transition probability can only be estimated from sampling, not retrieved explicitly. Hence, our algorithm is not model-based in the traditional sense of learning a model to perform planning. We believe our contribution is useful and novel for two main reasons. Firstly, our framing of the robust learning problem is in terms of dynamics uncertainty sets defined by the Wasserstein distance. Whilst we are not the first to introduce the Wasserstein distance into the context of MDPs, we believe our formulation is amongst the first suitable for application to the demanding application space we desire, that being high-dimensional, continuous state and action spaces. Secondly, we believe our solution approach is both novel and effective (as evidenced by the experiments below, see Section 5), and does not place a great demand on model or domain knowledge, merely access to a simulator or differentiable equation solver that allows for the parameterisation of dynamics. Furthermore, it is not computationally demanding, in particular because it does not attempt to build a model of the dynamics, and operations involving matrices are efficiently executable using the Jacobian-vector product facility of automatic differentiation engines. A Markov decision process (MDP) is denoted by M = ⟨S, A, P, R, γ⟩, where S ⊆ R^d denotes the state space, A ⊆ R^n the action space, P: S × A × S → [0, 1] is a state transition probability describing the system's dynamics, R: S × A → R is the reward function measuring the agent's performance, and γ ∈ [0, 1) specifies the degree to which rewards are discounted over time. At each time step t, the agent is in state s_t ∈ S and must choose an action a_t ∈ A, transitioning it to a new state s_{t+1} ∼ P(s_{t+1}|s_t, a_t) and yielding a reward R(s_t, a_t). A policy π: S × A → [0, 1] is defined as a probability distribution over state-action pairs, where π(a_t|s_t) represents the density of selecting action a_t in state s_t. Upon subsequent interactions with the environment, the agent collects a trajectory τ of state-action pairs. The goal is to determine an optimal policy π by solving max_π E_{τ∼p_π(τ)}[R_Total(τ)], where p_π(τ) = µ_0(s_0) Π_t π(a_t|s_t) P(s_{t+1}|s_t, a_t) denotes the trajectory density function, with µ_0(·) the initial state distribution, and R_Total(τ) = Σ_t γ^t R(s_t, a_t) the return, that is, the total accumulated (discounted) reward. We make use of the Wasserstein distance to quantify variations from a reference transition density P_0(·). The latter being a probability distribution, one may consider other divergences, such as the Kullback-Leibler (KL) divergence or total variation (TV). Our main intuition for choosing the Wasserstein distance is explained below, but we note that it has a number of desirable properties. Firstly, it is symmetric (W_p(µ, ν) = W_p(ν, µ), a property K-L lacks). Secondly, it is well-defined for measures with different supports (a property K-L also lacks). Indeed, the Wasserstein distance is flexible in the forms of the measures that can be compared: discrete, continuous, or a mixture. Finally, it takes into account the underlying geometry of the space the distributions are defined on, which can encode valuable information. It is defined as follows. Let X be a metric space with metric d(·, ·). Let C(X) be the space of continuous functions on X and let M(X) be the set of probability measures on X. Let µ, ν ∈ M(X).
Let K(µ, ν) be the set of couplings between µ and ν: that is, the set of joint distributions κ ∈ M(X × X) whose marginals agree with µ and ν respectively. Given a metric (serving as a cost function) d(·, ·) for X, the p'th Wasserstein distance W_p(µ, ν) for p ≥ 1 between µ and ν is defined as: W_p(µ, ν) = (inf_{κ ∈ K(µ,ν)} ∫_{X×X} d(x, y)^p dκ(x, y))^{1/p} (in this paper, and mostly for computational convenience, we use p = 2, though other values of p are applicable). The desirable properties of the Wasserstein distance aside, our main intuition for choosing it is described thus: per the definition, constraining the possible dynamics to be within an ε-Wasserstein ball of a reference dynamics P_0(·) means constraining it in a certain way. The Wasserstein distance has the form mass × distance. If this quantity is constrained to be less than a constant, then if the mass is large, the distance is small, and if the distance is large, the mass is small. Intuitively, when modelling the dynamics of a system, it may be reasonable to concede that there could be a systemic error, or bias, in the model, but that bias should not be too large. It is also reasonable to suppose that occasionally the behaviour of the system may be wildly different from the model, but that this should be a low-probability event. If the model is frequently wrong by a large amount, then there is no use in it. In a sense, the Wasserstein ball formalises these assumptions. Due to the continuous nature of the state and action spaces considered in this work, we resort to deep neural networks to parameterise policies, which we write as π_θ(a_t|s_t), where θ ∈ R^{d_1} is a set of tunable hyper-parameters to optimise. For instance, these policies can correspond to multilayer perceptrons for MuJoCo environments, or to convolutional neural networks in the case of high-dimensional states depicted as images. Exact policy details are ultimately application dependent and, consequently, provided in the relevant experiment sections. In principle, one can similarly parameterise dynamics models using deep networks (e.g., LSTM-type models) to provide one or more action-conditioned future state predictions. Though appealing, going down this path led us to agents that discover worst-case transition models which minimise the training objective but lack any valid physical meaning. For instance, original experiments we conducted on CartPole ended up involving transitions that alter angles without any change in angular velocities. More importantly, these effects became more apparent in high-dimensional settings where the number of potential minimisers increases significantly. It is worth noting that we are not the first to realise such an artifact when attempting to model physics-based dynamics using deep networks. Some authors remedy these problems by introducing Lagrangian mechanics to deep networks, while others argue the need to model dynamics given by differential equation structures directly. Though incorporating physics-based priors into deep networks is an important and challenging task that holds the promise of scaling model-based reinforcement learning to efficient solvers, in this paper we rather study an alternative direction focusing on perturbing differential equation solvers and/or simulators with respect to the dynamics specification parameters φ ∈ R^{d_2}. Not only would such a consideration reduce the dimension of the parameter spaces representing transition models, but it would also guarantee valid dynamics due to the nature of the simulator.
Though tackling some of the above problems, such a direction arrives with a new set of challenges related to computing gradients and Hessians of black-box solvers. In Section 4, we develop an efficient and scalable zero-order method for valid and accurate model updates. Unconstrained Loss Function: Having equipped agents with the capability of representing policies and perturbing transition models, we are now ready to present an unconstrained version of WR2L's loss function. Borrowing from robust optimal control, we define robust reinforcement learning as an algorithm that learns best-case policies under worst-case transitions: max_θ min_φ E_{τ∼p_φ^θ(τ)}[R_Total(τ)], (Equation 5) where p_φ^θ(τ) is a trajectory density function parameterised by both policies and transition models, i.e., θ and φ, respectively. At this stage, it should be clear that our formulation, though inspired by robust optimal control, is, truthfully, more generic as it allows for parameterised classes of transition models without incorporating additional restrictions on the structure or the scope by which variations are executed. Constraints & Complete Problem Definition: Clearly, the problem in Equation 5 is ill-defined due to the arbitrary class of parameterised transitions. To ensure well-behaved optimisation objectives, we next introduce constraints to bound search spaces and ensure convergence to feasible transition models. For a valid constraint set, our method assumes access to samples from a reference dynamics model P_0(·|s, a), and bounds learnt transitions in an ε-Wasserstein ball around P_0(·|s, a), i.e., the set defined as: {φ : W_2^2(P_φ(·|s, a), P_0(·|s, a)) ≤ ε for all (s, a) ∈ S × A}, (Equation 6) where ε ∈ R_+ is a hyperparameter used to specify the "degree of robustness", in a similar spirit to maximum-norm bounds in robust optimal control. It is worth noting that though we have access to samples from a reference simulator, our setting is by no means restricted to model-based reinforcement learning in an MDP setting. That is, our algorithm operates successfully given only traces from P_0 accompanied by its specification parameters, e.g., pole lengths, torso masses, etc., a more flexible framework that does not require full model learners. Though defining a better-behaved optimisation objective, the set in Equation 6 introduces an infinite number of constraints when considering continuous state and/or action spaces. To remedy this problem, we target a relaxed version that considers a constraint on the average Wasserstein distance bounded by a hyperparameter: E_{(s,a)}[W_2^2(P_φ(·|s, a), P_0(·|s, a))] ≤ ε. (Equation 7) The sampling of (s, a) in the expectation is done as follows: we sample trajectories using the reference dynamics P_0 and a policy π that chooses actions uniformly at random (uar). Then (s, a) pairs are sampled uar from those collected trajectories. For a given pair (s, a), W_2^2(P_φ(·|s, a), P_0(·|s, a)) is approximated through the empirical distribution: we use the state that followed (s, a) in the collected trajectories as a data point. Estimating the Wasserstein distance using empirical data is standard; see, e.g., Peyré et al. One approach which worked well in our experiments was to assume that the dynamics are given by deterministic functions plus Gaussian noise with diagonal covariance matrices. This makes estimation easier in high dimensions, since sampling in each dimension is independent of the others, and the total number of samples needed is a constant factor of the number of dimensions. Gaussian distributions also have closed-form expressions for the Wasserstein distance, given in terms of mean and covariance.
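For the diagonal-Gaussian case just mentioned, the closed form is simple. The following numpy sketch (our illustration, parameterising each Gaussian by a mean vector and a vector of standard deviations) computes the squared 2-Wasserstein distance between two such next-state distributions.

```python
import numpy as np

def w2_sq_diag_gauss(mu1, sig1, mu2, sig2):
    # Squared 2-Wasserstein distance between Gaussians with diagonal
    # covariances: ||mu1 - mu2||^2 plus the covariance (Bures) term, which
    # reduces to per-dimension squared differences of standard deviations.
    return float(np.sum((mu1 - mu2) ** 2) + np.sum((sig1 - sig2) ** 2))

# Example: next-state distributions under reference vs. perturbed dynamics.
mu0, s0 = np.array([0.0, 1.0]), np.array([0.10, 0.20])
mu1, s1 = np.array([0.2, 0.9]), np.array([0.15, 0.20])
print(w2_sq_diag_gauss(mu0, s0, mu1, s1))   # 0.05 + 0.0025 = 0.0525
```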
We thus arrive at WR$^2$L's optimisation problem, allowing for best policies under worst-case yet bounded transition models: Wasserstein Robust Reinforcement Learning Objective: Our solution alternates between updates of $\theta$ and $\phi$, keeping one fixed while updating the other. Fixing dynamics parameters $\phi$, policy parameters $\theta$ can be updated by solving the resulting maximisation problem, which is the formulation of a standard RL problem. Consequently, one can easily adapt any policy search method for updating policies under fixed dynamical models. As described later in Section 4, we make use of proximal policy optimisation. When updating $\phi$ given a set of fixed policy parameters $\theta$, the Wasserstein constraints must be respected. Unfortunately, even with the simplification introduced in Section 3.1, the constraint is still difficult to compute. To alleviate this problem, we propose to approximate the constraint by its Taylor expansion up to second order. That is, defining $W(\phi) := \mathbb{E}_{(s,a)}\left[W_2^2(P_\phi(\cdot|s, a), P_0(\cdot|s, a))\right]$, the above can be approximated around $\phi_0$ by a second-order Taylor expansion. Recognising that $W(\phi_0) = 0$ (the distance between identical probability densities) and $\nabla_\phi W(\phi_0) = 0$, since $\phi_0$ minimises $W(\phi)$, we can simplify the approximation to the Hessian term alone. Substituting our approximation back into the original problem in Equation 8, we reach the following optimisation problem for determining model parameters given fixed policies: where $H_0$ is the Hessian of the expected squared 2-Wasserstein distance evaluated at $\phi_0$. Optimisation problems with quadratic constraints can be efficiently solved using interior-point methods. To do so, one typically approximates the loss with a first-order expansion and determines a closed-form solution. Consider a pair of parameters $\theta^{[k]}$ and $\phi^{[j]}$ (which will correspond to the parameters of the $j$'th inner loop of the $k$'th outer loop in the algorithm we present). To find $\phi^{[j+1]}$, we solve: It is easy to show that a minimiser of the above can be derived in closed form, with $g^{[k,j]}$ denoting the gradient evaluated at $\theta^{[k]}$ and $\phi^{[j]}$. Generic Algorithm: Having described the two main steps needed for updating policies and models, we now summarise these findings in the pseudo-code of Algorithm 1. As the Hessian of the Wasserstein distance is evaluated based on the reference dynamics and any policy $\pi$, we pass it, along with $\epsilon$ and $\phi_0$, as inputs. Algorithm 1 then operates in a descent-ascent fashion in two main phases. In the first (lines 5 to 10 in Algorithm 1), dynamics parameters are updated using the closed-form solution in Equation 10, while ensuring that learning rates abide by a step-size condition (we used the Wolfe conditions, though other methods are possible). With this, the second phase (line 11) utilises any state-of-the-art reinforcement learning method to adapt policy parameters, generating $\theta^{[k+1]}$. Regarding the termination condition for the inner loop, we leave this as a decision for the user.
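Since the displayed equations are elided here, the following sketch shows the standard minimiser of a linearised loss over an ellipsoidal (quadratic) trust region, which is what the closed-form update described above amounts to; the exact form of Equation 10 should be checked against the original paper, and all names below are ours.

```python
import numpy as np

def model_update(phi_0, g, H_0, eps):
    """Hedged sketch of the Phase-I step: minimise g^T (phi - phi_0) subject to
    0.5 (phi - phi_0)^T H_0 (phi - phi_0) <= eps. A Lagrangian argument gives
    the boundary solution phi = phi_0 - sqrt(2*eps / (g^T H_0^{-1} g)) H_0^{-1} g."""
    H_inv_g = np.linalg.solve(H_0, g)                      # H_0^{-1} g
    scale = np.sqrt(2.0 * eps / max(float(g @ H_inv_g), 1e-12))
    return phi_0 - scale * H_inv_g
```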
It could be, for example, a large finite time-out, or the norm of the gradient $g^{[k,j]}$ falling below a threshold, or whichever happens first. Algorithm 1:
input: Hessian $H_0$ of the Wasserstein distance, radius $\epsilon$, reference parameters $\phi_0$
for $k = 0, 1, \dots$ do
  set $\phi^{[k,0]} \leftarrow \phi_0$ and $j \leftarrow 0$
  Phase I: update model parameters while fixing the policy:
  while termination condition not met do
    compute the descent direction for the model parameters as given by Equation 10
    update the candidate solution, while satisfying step-size conditions (see discussion below) on the learning rate $\alpha$
  end while
  perform the model update, setting $\phi^{[k+1]}$ to the final candidate
  Phase II: update the policy given the new model parameters: use any standard reinforcement learning algorithm for ascending in the gradient direction, with $\beta^{[k]}$ a learning rate, generating $\theta^{[k+1]}$
end for
Consider a simulator (or differential equation solver) $S_\phi$ whose dynamics are parameterised by a real vector $\phi$, and on which we can execute steps of a trajectory (i.e., the simulator takes as input an action $a$ and gives back a successor state and reward). For generating novel physics-grounded transitions, one can simply alter $\phi$ and execute the instructions in $S_\phi$ from some state $s \in S$, while applying an action $a \in A$. Not only does this ensure valid (under mechanics) transitions, but it also promises scalability, as specification parameters typically reside in lower-dimensional spaces compared to the number of tuneable weights when using deep networks as transition models. Gradient Estimation: Recalling the update rule in Phase I of Algorithm 1, we realise the need for estimating the gradient of the loss function with respect to the vector specifying the dynamics of the environment, i.e., $g^{[k,j]}$, at each iteration $j$ of the inner loop. Handling simulators as black-box models, we estimate the gradients by sampling from a Gaussian distribution with mean $0$ and covariance matrix $\sigma^2 I$. Our choice of such estimates is not arbitrary but theoretically grounded, as one can easily prove the following proposition: Proposition 1 (Zero-Order Gradient Estimate). For fixed $\theta$ and $\phi$, the gradient can be computed from zero-order (perturbation-based) function evaluations. Hessian Estimation: Having derived a zero-order gradient estimator, we now generalise these notions to a form allowing us to estimate the Hessian. It is worth reminding the reader that this Hessian estimator needs to be run only once, before executing the instructions in Algorithm 1 (i.e., $H_0$ is passed as an input). Precisely, we prove the following proposition: Proposition 2 (Zero-Order Hessian Estimate). The Hessian of the Wasserstein distance around $\phi_0$ can likewise be estimated from function evaluations, with $W_{(s,a)}(\phi) := W_2^2(P_\phi(\cdot|s, a), P_0(\cdot|s, a))$. Proofs of these propositions are given in Appendix A. They allow for a procedure where gradient and Hessian estimates are based simply on simulator evaluations obtained while perturbing $\phi$ and $\phi_0$. It is important to note that in order to apply the above, we are required to be able to evaluate the Wasserstein distance itself. An empirical estimate of the $p$-Wasserstein distance between two measures $\mu$ and $\nu$ can be obtained by computing the $p$-Wasserstein distance between the empirical distributions evaluated at sampled data. That is, one can approximate $\mu$ by $\mu_n = \frac{1}{n}\sum_{i=1}^{n}\delta_{x_i}$, where the $x_i$ are identically and independently distributed according to $\mu$; approximating $\nu$ similarly by $\nu_n$, the distance $W_p(\mu_n, \nu_n)$ then serves as the estimate. We evaluate WR$^2$L on a variety of continuous control benchmarks from the MuJoCo environment. Dynamics in our benchmarks were parameterised by variables defining physical behaviour, e.g., the density of the robot's torso, the friction of the ground, and so on.
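As an illustration of the zero-order machinery (the propositions' exact expressions are not reproduced above), a Gaussian-smoothing gradient estimator of the kind described, treating the loss as a black box, might look as follows; the helper name and the baseline subtraction are our additions.

```python
import numpy as np

def zero_order_gradient(loss_fn, phi, sigma=0.05, n_samples=100, rng=None):
    """Monte-Carlo gradient estimate: E[xi * (f(phi + xi) - f(phi))] / sigma^2
    with xi ~ N(0, sigma^2 I). Subtracting f(phi) is a standard
    variance-reduction baseline and does not bias the estimate."""
    rng = np.random.default_rng() if rng is None else rng
    base = loss_fn(phi)
    grad = np.zeros_like(phi, dtype=float)
    for _ in range(n_samples):
        xi = sigma * rng.standard_normal(phi.shape)
        grad += xi * (loss_fn(phi + xi) - base)
    return grad / (n_samples * sigma ** 2)
```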
We consider both low- and high-dimensional dynamics and demonstrate that our algorithm outperforms state-of-the-art methods from both standard and robust reinforcement learning. We are chiefly interested in policy generalisation across environments with varying dynamics, which we measure using average test returns on novel systems. The comparison against standard reinforcement learning algorithms allows us to understand whether a lack of robustness is a critical challenge for sequential decision making, while comparisons against robust algorithms test whether we outperform state-of-the-art methods that considered a similar setting to ours. From standard algorithms, we compare against proximal policy optimisation (PPO) and trust region policy optimisation (TRPO), an algorithm based on the natural actor-critic. From robust algorithms, we demonstrate how WR$^2$L fares against robust adversarial reinforcement learning (RARL) and action-perturbed Markov decision processes (PR-MDP). Due to space constraints, the results are presented in Appendix B.2. It is worth noting that we attempted to include deep deterministic policy gradients (DDPG) in our comparisons. Results including DDPG were, however, omitted as it failed to show any significant robustness performance even on relatively simple systems, such as the inverted pendulum; see the results reported in Appendix B.3. During initial trials, we also performed experiments parameterising models using deep neural networks. Results demonstrated that these models, though minimising training-data error, fail to provide valid physics-grounded dynamics. For instance, we arrived at inverted-pendulum models that vary pole angles without exerting any angular speeds. This problem became even more apparent in high-dimensional systems, e.g., Hopper, Walker, etc., due to the increased number of possible minima. As such, the results presented in this section make use of our zero-order method, which can be regarded as a scalable alternative for robust solutions. We evaluate our method on both low- and high-dimensional MuJoCo tasks. We consider a variety of systems including CartPole, Hopper, and Walker2D, all of which require direct joint-torque control. In keeping with the generality of our method, we utilise these frameworks as-is, with no additional alterations; that is, we use the exact settings of these benchmarks as shipped with OpenAI Gym, without any reward shaping, state-space augmentation, feature extraction, or any other modifications of that sort. Details are given in Appendix B. Due to space constraints, the results for one-dimensional parameter variations are given in Appendix B.2, where it can be seen that WR$^2$L outperforms both robust and non-robust algorithms when one-dimensional simulator variations are considered. Figure 1 shows results for dynamics variations along two dimensions. Here again, our method demonstrates considerable robustness. The fourth column, "PPO mean", refers to experiments where PPO is trained on dynamics sampled uniformly at random from the Wasserstein constraint set. It displays more robustness than when trained on just the reference dynamics; however, as can be seen from Fig. 2, our method performs noticeably better in high dimensions, which is the main strength of our algorithm. Results with High-Dimensional Model Variation: Though the above results demonstrate robustness, an argument against a min-max objective can be made, especially when only considering low-dimensional changes in the simulator.
Namely, one can argue the need for such an objective as opposed to simply sampling a set of systems and determining policies performing well on average, similar to the approach proposed in prior ensemble-based work. A counter-argument to the above is that a gradient-based optimisation scheme is more efficient than a sampling-based one when high-dimensional changes are considered. In other words, a sampling procedure is hardly applicable when more than a few parameters are altered, while WR$^2$L can remain suitable. To assess these claims, we conducted two additional experiments on the Hopper and HalfCheetah benchmarks. In the first, we trained robustly while changing friction and torso densities, and tested on 1,000 systems generated by varying all 11 dimensions of the Hopper dynamics, and 21 dimensions of the HalfCheetah system. The histograms in Figures 2(b) and (f) demonstrate that the empirical densities of the average test returns are mostly centered around 3,000 for the Hopper and around 4,500 for the HalfCheetah, which improves on that of PPO trained on the reference dynamics (Figures 2(a) and (e)), whose return mass is mostly accumulated at around 1,000 in the case of the Hopper and almost equally distributed in the case of the HalfCheetah. Such improvements, however, can be an artifact of the careful choice of the low-dimensional degrees of freedom allowed to be modified during Phase I of Algorithm 1. To get further insights, Figures 2(c) and (g) demonstrate the effectiveness of our method trained and tested while allowing all 11 parameters of the Hopper simulator, and all 21 dimensions of the HalfCheetah, to be tuned. Indeed, our results are in accordance with those of the previous experiment, depicting that most of the test returns' mass remains around 3,000 for the Hopper and improves to accumulate around 4,500 for the HalfCheetah. Interestingly, however, our algorithm is now capable of acquiring higher returns on all systems, since it is allowed to alter all parameters defining the simulator. As such, we conclude that WR$^2$L outperforms others when high-dimensional simulator variations are considered. In Figures 2(d) and (h), we see the results for PPO trained with dynamics sampled uar from the Wasserstein constraint set. We see that although this training method worked well in the two-dimensional variation case (see Figures 1(d), (h), (l)), it does not scale well to high dimensions, and our method does better. Previous work on robust MDPs, whilst valuable in its own right, is not sufficient for the RL setting, due to the need in the latter case for efficient solutions for large state and action spaces, and the fact that the dynamics are not known a priori. Closer to our own work, one line of research approaches the robustness problem by training on an ensemble of dynamics in order to be deployed on a target environment. The algorithm introduced, Ensemble Policy Optimisation (EPOpt), alternates between two phases: (i) given a distribution over dynamics for which simulators (or models) are available (the source domain), train a policy that performs well for the whole distribution; (ii) gather data from the deployed environment (target domain) to adapt the distribution. The objective is not max-min, but a softer variation defined by conditional value-at-risk (CVaR). The algorithm samples a set of dynamics $\{\phi_k\}$ from a distribution over dynamics $P_\psi$, and for each dynamics $\phi_k$ it samples a trajectory using the current policy parameters $\theta_i$. It then selects the worst-performing $\epsilon$-fraction of the trajectories to use to update the policy parameters.
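The worst $\epsilon$-fraction selection just described is simple to state in code; a minimal sketch (our own, not EPOpt's implementation):

```python
import numpy as np

def worst_fraction_indices(returns, eps=0.1):
    """Indices of the worst-performing eps-fraction of sampled trajectories,
    which an EPOpt-style update would then feed to the policy optimiser."""
    returns = np.asarray(returns)
    k = max(1, int(np.ceil(eps * len(returns))))
    return np.argsort(returns)[:k]
```

Note the contrast with WR$^2$L, which descends in $\phi$-space rather than filtering sampled systems.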
Clearly this process bears some resemblance to our algorithm, but there is a crucial difference: our algorithm takes descent steps in the $\phi$ space. The difference is important when the dynamics parameters sit in a high-dimensional space, since in that case optimisation-from-sampling could demand a considerable number of samples. In any case, our experiments demonstrate that our algorithm performs well even in these high dimensions. We note that we were unable to find the code for this paper, and did not attempt to implement it ourselves. The CVaR criterion is also adopted in related work in which, rather than sampling trajectories and finding a quantile in terms of performance, two policies are trained simultaneously: a "protagonist", which aims to optimise performance, and an adversary, which aims to disrupt the protagonist. The protagonist and adversary train alternately, with one being fixed whilst the other adapts. We made comparisons against this algorithm in our experiments. More recent work studies robustness with respect to action perturbations. There are two forms of perturbation addressed: (i) Probabilistic Action Robust MDP (PR-MDP), and (ii) Noisy Action Robust MDP (NR-MDP). In PR-MDP, when an action is taken by an agent, a different, possibly adversarial action is taken instead with probability $\alpha$. In NR-MDP, when an action is taken, a perturbation is added to the action itself. In both variants, the algorithm is suitable for use with deep neural networks, and the paper reports experiments on InvertedPendulum, Hopper, Walker2d and Humanoid. We tested against PR-MDP in some of our experiments, and found it to be lacking in robustness (see Section 5). In other related work, a non-stationary Markov Decision Process model is considered, where the dynamics can change from one time step to another. The constraint is based on the Wasserstein distance; specifically, the Wasserstein distance between the dynamics at times $t$ and $t'$ is bounded by $L|t - t'|$, i.e., the dynamics is $L$-Lipschitz with respect to time, for some constant $L$. They approach the problem by treating nature as an adversary and implement a minimax algorithm. The basis of their algorithm is that, because the dynamics change slowly (due to the Lipschitz constraint), a planning algorithm can project into the future the scope of possible dynamics and plan for the worst. The resulting algorithm, known as Risk-Averse Tree Search, is, as the name implies, a tree search algorithm. It operates on a sequence of "snapshots" of the evolving MDP, which are instances of the MDP at points in time. The algorithm is tested on a small grid world, and does not appear to be readily extensible to the continuous state and action scenarios our algorithm addresses. To summarise, our paper uses the Wasserstein distance for quantifying variations in possible dynamics, in common with the latter work, but is suited to applying deep neural networks for continuous state and action spaces. Our algorithm does not require a full dynamics model to be available to it, merely a parameterisable dynamics. It competes well with the above papers, and operates well for high-dimensional problems, as evidenced by the experiments. In this paper, we proposed a robust reinforcement learning algorithm capable of outperforming others in terms of test returns on unseen dynamics. The algorithm makes use of Wasserstein constraints for policies generalising across varying domains, and considers a zero-order method for scalable solutions.
Empirically, we demonstrated superior performance against state-of-the-art methods from both standard and robust reinforcement learning on low- and high-dimensional MuJoCo environments. In future work, we aim to consider robustness in terms of other components of MDPs, e.g., state representations, reward functions, and others. Furthermore, we will implement WR$^2$L on real hardware, considering sim-to-real experiments. Diagonal elements (proof outline for the Hessian estimate): the expectation is analysed in three sub-cases according to the perturbation indices (all equal, pairwise equal, and, as Sub-Case III, all distinct); combining these yields the diagonal contribution. Off-diagonal elements (i.e., when $i \neq j$): the above analysis is repeated for the expectation of the off-diagonal elements of the matrix $B$, which similarly splits into three sub-cases depending on the indices, with Sub-Case III again covering indices that are all distinct; using these results, and due to the symmetric properties of $H$, the off-diagonal contribution follows. Finally, analysing $c$ and substituting the above back into the original approximation in Equation 11, the linearity of the expectation yields the statement of the proposition. For clarity, we summarise the variables parameterising the dynamics in Table 1, and detail specifics next. CartPole: The goal of this classic control benchmark is to balance a pole by driving a cart along a rail. The state space is composed of the position $x$ and velocity $\dot{x}$ of the cart, as well as the angle $\theta$ and angular velocity $\dot{\theta}$ of the pole. We consider two termination conditions in our experiments: 1) the pole deviates from the upright position beyond a pre-specified threshold, or 2) the cart deviates from its zeroth initial position beyond a certain threshold. To conduct robustness experiments, we parameterise the dynamics of the CartPole by the pole length $l_p$, and test by varying $l_p \in [0.3, 3]$. Hopper: In this benchmark, the agent is required to control a hopper robot to move forward without falling. The state of the hopper is represented by the positions $\{x, y, z\}$ and linear velocities $\{\dot{x}, \dot{y}, \dot{z}\}$ of the torso in global coordinates, as well as the angles $\{\theta_i\}_{i=0}^{2}$ and angular speeds $\{\dot{\theta}_i\}_{i=0}^{2}$ of the three joints. During training, we exploit an early-stopping scheme if "unhealthy" states of the robot were visited. Parameters characterising the dynamics included the densities $\{\rho_i\}_{i=0}^{3}$ of the four links, the armature $\{a_i\}_{i=0}^{2}$ and damping $\{\zeta_i\}_{i=0}^{2}$ of the three joints, and the friction coefficient $\mu_g$. To test for robustness, we varied both frictions and torso densities, leading to significant variations in dynamics. We further conducted additional experiments while varying all 11 dimensional specification parameters. Walker2D: This benchmark is similar to Hopper except that the controlled system is a biped robot with seven bodies and six joints. The dimensions of its dynamics are extended accordingly, as reported in Table 1. Here, we again varied the torso density $\rho_0$ for the robustness experiments. HalfCheetah: This benchmark is similar to the above except that the controlled system is a two-dimensional slice of a three-dimensional cheetah robot. Parameters specifying the simulator consist of 21 dimensions, 7 of which represent densities. In our two-dimensional experiments we varied the torso density and floor friction, while in the high-dimensional ones we allowed the algorithm to control all 21 variables. Our experiments included training and testing phases.
During the training phase we applied Algorithm 1 to determine robust policies while updating transition model parameters according to the min-max formulation. Training was performed independently for each of the algorithms on the relevant benchmarks, while ensuring best operating conditions by using hyper-parameter values reported elsewhere. For all benchmarks, policies were represented using parametrised Gaussian distributions, with their means given by a neural network and their standard deviations by a group of free parameters. The neural network consisted of two hidden layers with 64 units and hyperbolic tangent activations in each of the layers. The final layer used a linear activation so as to output a real number. Following the actor-critic framework, we also trained a standalone critic network having the same structure as that of the policy. For each policy update, we rolled out trajectories in the current worst-case dynamics to collect a number of transitions. The number of these transitions was application-dependent and varied between benchmarks in the range of 5,000 to 10,000. The policy was then optimised (i.e., Phase II of Algorithm 1) using proximal policy optimisation with generalised advantage estimation. To solve the minimisation problem in the inner loop of Algorithm 1, we sampled a number of dynamics from a diagonal Gaussian distribution centered at the current worst-case dynamics model. The number of sampled dynamics and the variance of the sampling distribution depended on both the benchmark itself and the dimensionality of the dynamics. Gradients needed for model updates were estimated using the results in Propositions 7 and 8. Finally, we terminated training when the policy entropy dropped below an application-dependent threshold. When testing, we evaluated policies on unseen dynamics that exhibited simulator variations as described earlier. We measured performance using returns averaged over 20 episodes, with a maximum length of 1,000 time steps, on the testing environments. We note that we used non-discounted mean episode rewards to compute such averages. Figure 3 shows the robustness of policies on various tasks. For a fair comparison, we trained two standard policy gradient methods (TRPO and PPO) and two robust RL algorithms (RARL, PR-MDP) with the reference dynamics preset by our algorithm. The range of evaluation parameters was intentionally designed to include dynamics outside of the $\epsilon$-Wasserstein ball. Clearly, WR$^2$L outperforms all baselines in this benchmark. As mentioned in the experiments section of the main paper, we refrained from presenting results involving deep deterministic policy gradients (DDPG) due to its lack of robustness even on simple systems, such as the CartPole. In Section 3.2 we presented a closed-form solution to the constrained optimisation problem for the model parameters; the derivation of the form of that minimiser proceeds as described there.
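A sketch of the policy parameterisation described above (two tanh hidden layers of 64 units, a linear output for the mean, and free standard-deviation parameters); the class and attribute names are ours:

```python
import torch
import torch.nn as nn

class GaussianPolicy(nn.Module):
    """Gaussian policy: mean from a 64-64 tanh network, per-dimension
    standard deviations as free (state-independent) parameters."""
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.mean_net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.Tanh(),
            nn.Linear(64, 64), nn.Tanh(),
            nn.Linear(64, act_dim),  # linear output layer
        )
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def forward(self, obs):
        mean = self.mean_net(obs)
        return torch.distributions.Normal(mean, self.log_std.exp())
```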
[ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HyxwZRNtDr
An RL algorithm that learns to be robust to changes in dynamics
Partially observable Markov decision processes (POMDPs) are a natural model for scenarios where one has to deal with incomplete knowledge and random events. Applications include, but are not limited to, robotics and motion planning. However, many relevant properties of POMDPs are either undecidable or very expensive to compute in terms of both runtime and memory consumption. In our work, we develop a game-based abstraction method that is able to deliver safe bounds and tight approximations for important sub-classes of such properties. We discuss the theoretical implications and showcase the applicability of our results on a broad spectrum of benchmarks. In offline motion planning, we aim to find a strategy for an agent that ensures certain desired behavior, even in the presence of dynamical obstacles and uncertainties BID0. If random elements, like uncertainty in the outcome of an action or in the movement of dynamic obstacles, need to be taken into account, the natural model for such scenarios are Markov decision processes (MDPs). MDPs are non-deterministic models which allow the agent to perform actions under full knowledge of the current state of the agent and its surrounding environment. In many applications, though, full knowledge cannot be assumed, and we have to deal with partial observability BID1. For such scenarios, MDPs are generalized to partially observable MDPs (POMDPs). In a POMDP, the agent does not know the exact state of the environment, but only an observation that can be shared between multiple states. Additional information about the likelihood of being in a certain state can be gained by tracking the observations over time. This likelihood is called the belief state. Using an update function mapping a belief state, an action, and the newly obtained observation to a new belief state, one can construct a (typically infinite) MDP, commonly known as the belief MDP. While model checking and strategy synthesis for MDPs are, in general, well-manageable problems, POMDPs are much harder to handle and, due to the potentially infinite belief space, many problems are actually undecidable BID2. Our aim is to apply abstraction and abstraction refinement techniques to POMDPs in order to get good and safe approximative results for different types of properties. As a case study, we work with a scenario featuring a controllable agent. Within a certain area, the agent needs to traverse a room while avoiding both static obstacles and randomly moving opponents. The area is modeled as a grid, the static obstacles as grid cells that may not be entered. Our assumption for this scenario is that the agent always knows its own position, but the position of an opponent is only known if its distance from the agent is below a given threshold and if the opponent is not hidden behind a static obstacle. We assume that the opponents move probabilistically. This directly leads to a POMDP model for our case study. For simplification purposes, we only deal with one opponent, although our approach supports an arbitrary number of opponents. We assume the observation function of our POMDPs to be deterministic, but more general POMDPs can easily be simplified to this case. The goal is to find a strategy which maximizes the probability of navigating through the grid from an initial to a target location without collision. For a grid size of $n \times n$ cells and one opponent, the number of states in the POMDP is in $O(n^4)$, i.e., the state space grows rapidly with increasing grid size.
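For reference, the belief update mentioned above takes the standard form sketched below; this is a generic illustration over finite state and observation spaces (the tensor layouts are our choice), not the tooling used in this work.

```python
import numpy as np

def belief_update(belief, T, O, action, observation):
    """b'(s') ∝ O[s', o] * sum_s T[s, a, s'] * b(s), renormalised.
    T has shape (S, A, S'); O has shape (S', num_observations)."""
    predicted = belief @ T[:, action, :]        # predict: sum_s b(s) T(s, a, s')
    updated = predicted * O[:, observation]     # correct: weight by observation
    return updated / updated.sum()
```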
In order to handle non-trivial grids, we propose an approach using game-based abstraction BID3. Intuitively, we lump together all states that induce the same observation; for each position of the agent, we can distinguish between all states in which the opponent's position is known, but states in which the position is unknown are merged into one far-away state BID4. In order to get a safe approximation, we show that any strategy computed with our abstraction that guarantees a certain level of safety can be mapped to a strategy for the original POMDP guaranteeing at least the same level of safety. In particular, we establish a simulation relation between paths in the probabilistic game and paths in the POMDP. Intuitively, each path in the POMDP can be reproduced in the probabilistic game if the second player resolves the nondeterminism in a certain way. Game-based model checking assumes the non-determinism to be resolved in the worst way possible, so it will provide a lower bound on the level of safety achievable in the actual POMDP. For the full proof, see BID4. We analyzed the game-based models using the PRISM-games model checker and compared the obtained results with the state-of-the-art POMDP model checker PRISM-pomdp BID5, showing that we can handle grids that are considerably larger than what PRISM-pomdp can handle, while still getting schedulers that induce values which are close to optimal. TAB0 shows a few of our experiments for verifying a reach-avoid property on a grid without obstacles. The columns show the probability (computed by the respective method) of reaching a goal state without a collision. As one can see, the abstraction approach is faster by orders of magnitude than solving the POMDP directly, and for large grids the game model also is much smaller, while still yielding very good approximations of the actual probabilities. The strategies induce even better values when they are mapped back to the original POMDP. While being provably sound, our approach still targets an undecidable problem and as such is not complete, in the sense that in general no strategy with maximum probability of success can be deduced. In particular, for cases with few paths to the goal location, the gap between the obtained bounds and the actual maximum can become large. For those cases, we define a scheme to refine the abstraction by encoding one or several steps of history into the current state, which leads to larger games and accordingly longer computation times, but also to better results. TAB0 showcases an implementation of this one-step history refinement. We use a benchmark representing a long, narrow tunnel, in which the agent has to pass the opponent once but, due to the abstraction, can actually run into it repeatedly if the abstraction player has the opponent re-appear in front of the agent. With longer tunnels, the probability of safely arriving in a goal state diminishes. Adding a refinement which remembers the last known position of the opponent, and thus restricts the non-deterministic movement, keeps the probability constant for arbitrary tunnel lengths. We developed a game-based abstraction technique to synthesize strategies for a class of POMDPs. This class encompasses typical grid-based motion planning problems under restricted observability of the environment. For these scenarios, we efficiently compute strategies that allow the agent to maneuver through the grid in order to reach a given goal state while at the same time avoiding collisions with faster-moving obstacles.
Experiments show that our approach can handle state spaces up to three orders of magnitude larger than those handled by general-purpose state-of-the-art POMDP solvers, in less time, while at the same time using fewer states to represent the same grid sizes.
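As a compact illustration of the lumping at the heart of this abstraction, consider the sketch below; it is a simplification that uses Manhattan distance only and omits occlusion by static obstacles, and all names are ours.

```python
def abstract_state(agent_pos, opponent_pos, threshold):
    """Merge all concrete states inducing the same observation: every state in
    which the opponent is out of view collapses into a single 'far-away'
    abstract state per agent position."""
    dist = abs(agent_pos[0] - opponent_pos[0]) + abs(agent_pos[1] - opponent_pos[1])
    if dist <= threshold:
        return (agent_pos, opponent_pos)   # opponent position observed: keep it
    return (agent_pos, "far-away")         # merged abstract state
```

The second player of the probabilistic game then resolves where the opponent re-emerges from the far-away state, which is exactly the non-determinism that the history refinement restricts.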
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
rJeKVt-iaV
This paper provides a game-based abstraction scheme to compute provably sound policies for POMDPs.
In this paper we approach two relevant deep learning topics: i) the handling of graph-structured input data, and ii) a better understanding and analysis of deep networks and related learning algorithms. With this in mind we focus on the topological classification of reachability in a particular subset of planar graphs (mazes). Doing so, we are able to model the topology of data while staying in Euclidean space, thus allowing its processing with standard CNN architectures. We suggest a suitable architecture for this problem and show that it can express a perfect solution to the classification task. The shape of the cost function around this solution is also derived and, remarkably, does not depend on the size of the maze in the large-maze limit. Responsible for this behavior are rare events in the dataset which strongly regulate the shape of the cost function near this global minimum. We further identify an obstacle to learning in the form of poorly performing local minima in which the network chooses to ignore some of the inputs. We further support our claims with training experiments and numerical analysis of the cost function on networks with up to $128$ layers. Deep convolutional networks have achieved great success in recent years by presenting human and super-human performance on many machine learning problems, such as image classification, speech recognition, and natural language processing. Importantly, the data in these common tasks present particular statistical properties and normally rest on regular lattices (e.g., images) in Euclidean space (BID3). Recently, more attention has been given to other highly relevant problems in which the input data belongs to non-Euclidean spaces. Such data may present a graph structure when they represent, for instance, social networks, knowledge bases, brain activity, protein interactions, 3D shapes, and human body poses. Although some works found in the literature propose methods and network architectures specifically tailored to tackle graph-like input data (BID3; BID4; BID15; BID22; BID23), in comparison with other topics in the field this one is still not extensively investigated. Another recent focus of interest of the machine learning community is the detailed analysis of the functioning of deep networks and related algorithms (BID8; BID12). The successful minimization of high-dimensional non-convex loss functions by means of stochastic gradient descent techniques is theoretically unlikely, yet the practical achievements suggest the contrary. The hypothesis that very deep neural nets do not suffer from local minima (BID9) is not completely proven (BID36). The already classical adversarial examples (BID27), as well as new doubts about supposedly well-understood questions, such as generalization (BID43), bring even more relevance to a better understanding of the methods. In the present work we aim to advance simultaneously in the two directions described above. To accomplish this goal we focus on the topological classification of graphs (BID29; BID30). However, we restrict our attention to a particular subset of planar graphs constrained by a regular lattice.
The reason for that is threefold: i) doing so, we still touch upon the issue of real-world graph-structured data, such as the 2D pose of a human body (BID1; BID16) or road networks (BID25; BID39); ii) we maintain the data in Euclidean space, allowing its processing with standard CNN architectures; iii) this particular class of graphs has various non-trivial statistical properties derived from percolation theory and conformal field theories (BID5; BID20; BID34), allowing us to analytically compute various properties of a deep CNN proposed by the authors to tackle the problem. Specifically, we introduce Maze-testing, a specialized version of the reachability problem in graphs (BID42). In Maze-testing, random mazes, defined as $L$ by $L$ binary images, are classified as solvable or unsolvable according to the existence of a path between given starting and ending points in the maze (vertices in the planar graph). Other recent works approach maze problems without framing them as graphs (BID37; BID28; BID33). However, to do so with mazes (and maps) is common practice in graph theory (BID2; BID32) and in applied areas, such as robotics (BID11; BID7). Our Maze-testing problem enjoys a high degree of analytical tractability, thereby allowing us to gain important theoretical insights regarding the learning process. We propose a deep network to tackle the problem that consists of $O(L^2)$ layers, alternating convolutional, sigmoid, and skip operations, followed at the end by a logistic regression function. We prove that such a network can express an exact solution to this problem, which we call the optimal-BFS (breadth-first search) minimum. We derive the shape of the cost function around this minimum. Quite surprisingly, we find that gradients around the minimum do not scale with $L$. This peculiar effect is attributed to rare events in the data. In addition, we shed light on a type of sub-optimal local minima in the cost function which we dub "neglect minima". Such minima occur when the network discards some important features of the data samples and instead develops a sub-optimal strategy based on the remaining features. Minima similar in nature to the above optimal-BFS and neglect minima are shown to occur in numerical training and to dominate the training dynamics. Despite the fact that Maze-testing is a toy problem, we believe that its fundamental properties can be observed in real problems, as is frequently the case in natural phenomena (BID31), making the presented analytical analysis of broader relevance. Importantly, our framework also relates to neural network architectures with augmented memory, such as Neural Turing Machines (BID13) and memory networks (BID40; BID35). The hot-spot images (FIG9), used to track the state of our graph search algorithm, may be seen as an external memory. Therefore, observing how activations spread from the starting to the ending point in the hot-spot images, and analyzing errors and the landscape of the cost function (Sec. 5), is analogous to analyzing how errors occur in the memory of the aforementioned architectures. This connection gets even stronger when such memory architectures are employed over graph-structured data to perform tasks such as natural language reasoning and graph search (BID17; BID14). In these cases, it can be considered that their memories in fact encode graphs, as happens in our framework.
Thus, the present analysis may eventually help towards a better understanding of the cost functions of memory architectures, potentially leading to improvements of their weight initialization and optimization algorithms, thereby facilitating training (BID26). The paper is organized as follows: Sec. 2 describes in detail the Maze-testing problem. In Sec. 3 we suggest an appropriate architecture for the problem. In Sec. 4 we describe an optimal set of weights for the proposed architecture and prove that it solves the problem exactly. In Sec. 5 we report on training experiments and describe the observed training phenomena. In Sec. 6 we provide an analytical understanding of the observed training phenomena. Finally, we conclude with a discussion and an outlook. Let us introduce the Maze-testing classification problem. Mazes are constructed as random two-dimensional, $L \times L$, black-and-white arrays of cells ($I$), where the probability ($\rho$) of having a black cell is given by $\rho_c = 0.59274$, while the other cells are white. An additional image ($H_0$), called the initial hot-spot image, is provided. It defines the starting point by being zero (Off) everywhere except on a $2 \times 2$ square of cells having the value 1 (On), chosen at a random position (see FIG9). A Maze-testing sample (i.e., a maze and a hot-spot image) is labelled Solvable if the ending point, defined as a $2 \times 2$ square at the center of the maze, is reachable from the starting point (defined by the hot-spot image) by moving horizontally or vertically along black cells. The sample is labelled Unsolvable otherwise. (FIG9 caption: A Maze-testing sample consists of a maze ($I$) and an initial hot-spot image ($H_0$). The proposed architecture processes $H_0$ by generating a series of hot-spot images ($H_{i>0}$) of the same dimension as $H_0$; however, their pixels are not binary but rather take on values between 0 (Off, pale-orange) and 1 (On, red). This architecture can represent an optimal solution, wherein the red region in $H_0$ spreads over the black cluster in $I$ to which it belongs. Once the spreading has been exhausted, the Solvable/Unsolvable label is determined by the values of $H_n$ at the center (ending point) of the maze. In the depicted example, the maze in question is Unsolvable; therefore the On cells do not reach the ending point at the center of $H_n$.) A maze in a Maze-testing sample has various non-trivial statistical properties which can be derived analytically based on results from percolation theory and conformal field theory (BID5; BID20; BID34). Throughout this work we directly employ such statistical properties; however, we refer the reader to the aforementioned references for further details and mathematical derivations. At the particular value chosen for $\rho$, the problem is at the percolation threshold, which marks the phase transition between two different connectivity regimes of the maze: below $\rho_c$ the chance of having a solvable maze decays exponentially with $r$ (the geometrical distance between the ending and starting points). Above $\rho_c$ it tends to a constant at large $r$. Exactly at $\rho_c$, the chance of having a solvable maze decays as a power law ($1/r^{\eta}$, $\eta = 5/24$). We note in passing that although Maze-testing can be defined for any $\rho$, only the choice $\rho = \rho_c$ leads to a computational problem whose typical complexity increases with $L$. Maze-testing datasets can be produced very easily by generating random arrays and then analyzing their connectivity properties using breadth-first search (BFS), whose worst-case complexity is $O(L^2)$.
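A sample generator along these lines is easy to write; the sketch below follows the description above (black probability $\rho_c$, a random $2 \times 2$ starting square, BFS labelling), with the simplification that only black starting cells seed the search. All names are ours.

```python
import numpy as np
from collections import deque

def make_maze_sample(L=16, rho=0.59274, rng=None):
    """One (maze, hot-spot, label) sample: BFS over black cells from the
    2x2 starting square; Solvable iff the 2x2 central square is reached."""
    rng = np.random.default_rng() if rng is None else rng
    maze = (rng.random((L, L)) < rho).astype(np.int8)    # 1 = black (walkable)
    hot = np.zeros((L, L), dtype=np.int8)
    i, j = rng.integers(0, L - 1, size=2)                # top-left of 2x2 start
    hot[i:i + 2, j:j + 2] = 1
    visited = np.zeros((L, L), dtype=bool)
    queue = deque((a, b) for a in range(L) for b in range(L)
                  if hot[a, b] and maze[a, b])
    for a, b in queue:
        visited[a, b] = True
    while queue:                                         # standard BFS
        a, b = queue.popleft()
        for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            x, y = a + da, b + db
            if 0 <= x < L and 0 <= y < L and maze[x, y] and not visited[x, y]:
                visited[x, y] = True
                queue.append((x, y))
    c = L // 2
    return maze, hot, bool(visited[c - 1:c + 1, c - 1:c + 1].any())
```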
Notably, as the system size grows larger, the chance of producing solvable mazes decays as $1/L^{\eta}$, and so, for very large $L$, the labels will be biased towards Unsolvable. There are several ways to de-bias the dataset. One is to select an unbiased subset of it. Alternatively, one can gradually increase the size of the starting point to a starting square whose length scales as $L^{\eta}$. Unless stated otherwise, we simply leave the dataset biased but define a normalized test error ($err$), which is proportional to the average mislabeling rate of the dataset divided by the average probability of being solvable. Here we introduce an image classification architecture to tackle the Maze-testing problem. We frame maze samples as a subclass of planar graphs, defined as regular lattices in Euclidean space, which can be handled by regular CNNs. Although applicable to general cases, graph-oriented architectures find it difficult to handle large sparse graphs due to regularization issues (BID15; BID22), whereas we show that our architecture can perform reasonably well on this planar subclass. Our network, shown in FIG9, is a deep feedforward network with skip layers, followed by a logistic regression module. The deep part of the network consists of $n$ alternating convolutional and sigmoid layers. Each such layer ($i$) receives two $L \times L$ images, one corresponding to the original maze ($I$) and the other being the output of the previous layer ($H_{i-1}$). It performs the operation DISPLAYFORM0, where $*$ denotes a 2D convolution, the $K$ convolutional kernel is $1 \times 1$, the $K_{hot}$ kernel is $3 \times 3$, $b$ is a bias, and $\sigma(x) = (1 + e^{-x})^{-1}$ is the sigmoid function. The logistic regression layer consists of two perceptrons ($j = 0, 1$), acting on DISPLAYFORM1, where $\vec{H}_n$ is the rasterized/flattened version of $H_n$, $W_j$ is a $2 \times L^2$ matrix, and $b_{reg}$ is a vector of dimension 2. The logistic regression module outputs the label Solvable if $p_1 \geq p_0$ and Unsolvable otherwise. The cost function we used during training was the standard negative log-likelihood. As we next show, the architecture above can provide an exact solution to the problem by effectively forming a cellular automaton executing a breadth-first search (BFS). A choice of parameters which achieves this is $\lambda \geq \lambda_c = 9.727 \pm 0.001$, DISPLAYFORM0, where $q_{center}$ is the index of $\vec{H}_n$ which corresponds to the center of the maze. Let us explain how the above neural network processes the image (see also FIG9). Initially, $H_0$ is On only at the starting point. Passing through the first convolutional-sigmoid layer, it outputs $H_1$, which will be On (i.e., have values close to one) on all black cells which neighbor the On cells, as well as on the original starting point. Thus On regions spread over the black cluster which contains the original starting point, while white clusters and black clusters which do not contain the starting point remain Off (close to zero in $H_i$). The final logistic regression layer simply checks whether one of the $2 \times 2$ cells at the center of the maze is On and outputs the label accordingly. To formalize the above, we start by defining two activation thresholds, $v_l$ and $v_h$, and refer to activations which are below $v_l$ as being Off and to those above $v_h$ as being On. The quantity $v_l$ is defined as the smallest of the three real solutions of the equation $v_l = \sigma(5\lambda v_l - 0.5\lambda)$.
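To make the block structure and the BFS weights concrete, here is a sketch of one convolutional-sigmoid layer together with one weight assignment consistent with the case analysis that follows ($K_{hot} = \lambda$ on the plus-shaped stencil, $K = 6\lambda$, $b = -6.5\lambda$). These particular magnitudes are our reconstruction, since the displayed weight vector is elided above; only their qualitative roles are taken from the text.

```python
import torch
import torch.nn as nn

class HotSpotLayer(nn.Module):
    """One block: H_i = sigmoid(K_hot * H_{i-1} + K * I + b), with a 3x3 kernel
    on the hot-spot image and a 1x1 kernel on the maze (skip connection to I)."""
    def __init__(self):
        super().__init__()
        self.k_hot = nn.Conv2d(1, 1, kernel_size=3, padding=1, bias=False)
        self.k_maze = nn.Conv2d(1, 1, kernel_size=1, bias=False)
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, h_prev, maze):
        return torch.sigmoid(self.k_hot(h_prev) + self.k_maze(maze) + self.bias)

def set_bfs_weights(layer, lam=9.727):
    """Assumed optimal-BFS weights: a white cell's pre-activation is then at
    most -1.5*lam (stays Off), while a black cell with an On neighbour gets at
    least 0.4*lam (switches On), reproducing the BFS spreading."""
    with torch.no_grad():
        plus = torch.tensor([[0., 1., 0.], [1., 1., 1.], [0., 1., 0.]])
        layer.k_hot.weight.copy_(lam * plus.view(1, 1, 3, 3))
        layer.k_maze.weight.fill_(6.0 * lam)
        layer.bias.fill_(-6.5 * lam)
```

Stacking $n$ such blocks (with the same weights in every block) and reading out the $2 \times 2$ centre of $H_n$ reproduces the cellular-automaton BFS described in the text.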
Notably, we previously chose $\lambda > \lambda_c$, as this is the critical value above which three real solutions for $v_l$ (rather than one) exist. For $v_h$ we choose $0.9$. Next, we go case by case and show that the action of the convolutional-sigmoid layer switches activations between Off and On just as a BFS would. This amounts to bounding the expression $\sigma(K_{hot} * H_{i-1} + K * I + b)$ for all possible $3 \times 3$ sub-arrays of $H_{i-1}$ and $1 \times 1$ sub-arrays of $I$. There are thus $2^{10}$ possibilities to be examined. FIG1 shows the desired action of the layer in three important cases (A-C). Each case depicts the maze shape around some arbitrary point $x$, the hot-spot image around $x$ before the action of the layer ($H_{i-1}$), and the desired action of the layer ($H_i$). Case A. Having a white cell at $x$ implies $I[x] = 0$, and therefore the argument of the above sigmoid is smaller than $-0.5\lambda_c$, regardless of $H_{i-1}$ at and around $x$. Thus $H_i[x] < v_l$, and so it is Off. As the 9 activations of $H_{i-1}$ played no role, case A in fact covers $2^9$ different cases. Case B. Consider a black cell at $x$, with $H_{i-1}$ in its vicinity all being Off (vicinity here refers to $x$ and its 4 neighbors). Here the argument is smaller than or equal to $5\lambda_c v_l - 0.5\lambda_c$, and so the activation remains Off, as desired. Case B covers $2^4$ cases, as the values of $H_{i-1}$ on the 4 corners were irrelevant. Case C. Consider a black cell at $x$ with one or more On activations of $H_{i-1}$ in its vicinity. Here the argument is larger than $v_h \lambda_c - 0.5\lambda_c = 0.4\lambda_c$. The sigmoid is then larger than 0.97, implying it is On. Case C covers $2^4(2^5 - 1)$ different cases. Since $2^9 + 2^4 + 2^4(2^5 - 1) = 2^{10}$, we have exhausted all possible cases. Lastly, it can easily be verified that given an On (Off) activation at the center of the full maze, the logistic regression layer will output the label Solvable (Unsolvable). Let us now determine the required depth for this specific architecture. The previous analysis tells us that at depth $d$ unsolvable mazes would always be labelled correctly; however, solvable mazes would be labelled correctly only if the shortest path between the starting point and the center is $d$ or less. The worst-case scenario thus occurs when the center of the maze and the starting point are connected by a one-dimensional curve twisting its way along $O(L^2)$ sites. Therefore, for perfect performance the network depth would have to scale as the number of sites, namely $n = O(L^2)$. A tighter but probabilistic bound on the minimal depth can be established by borrowing various results from percolation theory. It is known, from BID44, that the typical length of the shortest path ($l$) for critical percolation scales as $r^{d_{min}}$, where $r$ is the geometrical distance and $d_{min} = 1.1$. Moreover, it is known that the probability distribution $P(l|r)$ has a tail which falls as $l^{-2}$ for $l \gtrsim r^{d_{min}}$ (BID10). Consequently, the chance that at distance $r$ the shortest path is longer than $r^{d_{min}} r^{a}$, where $a$ is some small positive number, decays to zero, and so $d$ should scale as $L$ with a power slightly larger than $d_{min}$ (say $n = L^{1.2}$). We have performed several training experiments with our architecture on $L = 16$ and $L = 32$ mazes with depths $n = 16$ and $n = 32$, respectively, and datasets of sizes $M = 1000$, $M = 10000$, and $M = 50000$. Unless stated otherwise, we used a batch size of 20 and a learning rate of 0.02. In the following, we split the experiments into two different groups corresponding to the related phenomena observed during training, which will be analyzed in detail in the next section. Optimal-BFS-like minima.
For $L = 16$, $M = 10000$ mazes and a positive random initialization for $K_{hot}$ and $K$, the network found a solution with $\approx 9\%$ normalized test error in 3 out of the 7 different initializations (the baseline test error was 50%). In all three successful cases the minimum was a variant of the optimal-BFS minimum, which we refer to as the checkerboard-BFS minimum. It is similar to the optimal-BFS minimum but spreads the On activations from the starting point using a checkerboard pattern rather than a uniform one, as shown in FIG2. The fact that it reaches $\approx 9\%$ test error rather than zero is attributed to this checkerboard behavior, which can occasionally miss the exit point of the maze. Neglect minima. Again for $L = 16$ but allowing for negative entries in $K$ and $K_{hot}$, the test error, following 14 attempts and 500 epochs, did not improve below 44%. Analyzing the weights of the network, the 6% improvement over the baseline error (50%) came solely from identifying the inverse correlation between many white cells near the center of the maze and the chance of being solvable. Notably, this heuristic approach completely neglects information regarding the starting point of the maze. For $L = 32$ mazes, despite trying several random initialization strategies (including positive entries), dataset sizes, and learning rates, the network always settled into such a partial neglect minimum. In an unsuccessful attempt to push the weights away from such partial neglect behavior, we performed further training experiments with a biased dataset in which the maze itself was uncorrelated with the label. More accurately, marginalizing over the starting point, there is an equal chance for both labels given any particular maze. To achieve this, a maze shape was chosen randomly and then many random locations were tried out for the starting point using that same maze. From these, we picked 5 that resulted in a Solvable label and 5 that resulted in an Unsolvable label. Maze shapes which were always Unsolvable were discarded. Both the $L = 16$ and $L = 32$ networks trained on this biased dataset performed poorly and yielded 50% test error. Interestingly, they improved their cost function by settling into weights in which $b \approx -10$ is large compared to $[K_{hot}]_{ij} \lesssim 1$, while $W$ and $b_{reg}$ were close to zero (of order 0.01). We have verified that such weights imply that activations in the last layer have a negligible dependence on the starting point and a weak dependence on the maze shape. We thus refer to this minimum as a "total neglect minimum". Here we seek an analytical understanding of the aforementioned training phenomena through the analysis of the cost function around solutions similar or equal to those the network settled into during training. Specifically, we shall first study the cost function landscape around the optimal-BFS minimum. As will become clearer at the end of that analysis, the optimal-BFS minimum shares many similarities with the checkerboard-BFS minimum obtained during training, and one thus expects a similar cost function landscape around both of these. The second phenomenon analyzed below is the total neglect minimum obtained during training on the biased dataset. The total neglect minimum can be thought of as an extreme version of the partial neglect minima found for $L = 32$ on the original dataset. Our analysis of the cost function near the optimal-BFS minimum will be based on two separate models capturing the short- and long-scale behavior of the network near this minimum.
In the first model we approximate the network by linearizing its action around weak activations. This model will enable us to identify the density of what we call "bugs" in the network. In the second model we discretize the activation levels of the neural network into binary variables and study how the resulting cellular automaton behaves when such bugs are introduced. FIG3 shows a numerical example of the dynamics we wish to analyze. Up to layer 19 ($H_{19}$) the On activations spread according to BFS; however, at $H_{20}$ a very faint, localized, unwanted On activation begins to develop (a bug) and quickly saturates ($H_{23}$). Past this point BFS dynamics continues normally but spreads both the original and the unwanted On activations. While not shown explicitly, On activations still appear only on black maze cells. Notably, the bug developed in a rather large black region, as can be deduced from the large red region at its origin. See also a short movie showing the occurrence of this bug at https://youtu.be/2I436BVAVdM and more bugs at https://youtu.be/kh-AfOo4TkU. At https://youtu.be/t-_TDkt3ER4 a similar behavior is shown for the checkerboard-BFS. Unlike an algorithm, a neural network is an analog entity, and so a priori there are no sharp distinctions between a functioning and a malfunctioning neural network. An algorithm can be debugged, and the bug can be identified as happening at a particular time step. However, it is unclear whether one can generally pin-point a layer and a region within it where a deep neural network clearly malfunctioned. Interestingly, we show that in our toy problem such pin-pointing of errors can be done in a sharp fashion by identifying fast and local processes which cause an unwanted switching between Off and On activations in $H_i$ (see FIG3). We call these events bugs, as they are local, harmful, and have a sharp meaning in the algorithmic context. Below we obtain asymptotic expressions for the chance of generating such bugs as the network weights are perturbed away from the optimal-BFS minimum. The main result of this subsection, derived below, is that the density of bugs (or chance of a bug per cell) scales as DISPLAYFORM0 for $(\lambda - \lambda_c) \lesssim 0$, and is zero for $\lambda - \lambda_c \geq 0$, where $C \approx 1.7$. Following the analysis below, we expect the same dependence to hold for generic small perturbations, only with different $C$ and $\lambda_c$. We have tested this claim numerically on several other types of perturbations (including ones that break the $\pi/2$ rotational symmetry of the weights) and found that it holds. To derive Eq. 1, we first recall the analysis in Sec. 4: initially, as $\lambda$ is decreased it has no effect other than shifting $v_l$ (the Off activation threshold) to a higher value. However, at the critical value ($\lambda = \lambda_c$, $v_l = v_{l,c}$) the solution corresponding to $v_l$ vanishes (becomes complex) and the correspondence with the BFS algorithm no longer holds in general. This need not mean that all Off activations are no longer stable. Indeed, recall that in Sec. 4 the argument that a black Off cell in the vicinity of Off cells remains Off (FIG1, Case B) assumed a worst-case scenario in which all the cells in its vicinity were both Off and black, and had the maximal Off activation allowed ($v_l$). However, if some cells in its vicinity are white, their Off activation levels are mainly determined by the absence of the large $K$ term in the sigmoid argument and are orders of magnitude smaller than $v_l$.
We come to the conclusion that black Off cells in the vicinity of many white cells are less prone to being spontaneously turned On than black Off cells which are part of a large cluster of black cells (see also the bug in FIG3). In fact, using the same arguments one can show that infinitesimally below $\lambda_c$ only uniformly black mazes will cause the network to malfunction. To further quantify this, consider a maze of size $l \times l$ where the hot-spot image is initially all zero and thus Off. Intuitively, this hot-spot image should be thought of as a sub-area of a larger maze located far away from the starting point. In this case a functioning network must leave all activation levels below $v_l$. To assess the chance of bugs we thus study the probability that the output of the final convolutional-sigmoid layer will have one or more On cells. To this end, we find it useful to linearize the system around low activations, yielding (see the Appendix for a complete derivation) DISPLAYFORM1, where $r_b$ denotes black cells ($I(r_b) = 1$), the sum is over the black cells nearest-neighboring $r_b$, and DISPLAYFORM2. For a given maze ($I$), this defines a linear Hermitian operator ($L_I$) with random off-diagonal matrix elements dictated by $I$ via the restriction of the off-diagonal terms to black cells. Stability of Off activations is ensured if this linear operator is contracting, or equivalently if all its eigenvalues are smaller than 1 in magnitude. Hermitian operators with local noisy entries have been studied extensively in physics, in the context of disordered systems and Anderson localization (BID19). Let us describe the main relevant results. For almost all $I$'s, the spectrum of $L_I$ consists of localized eigenfunctions ($\phi_m$). Any such function is centered around a random site ($x_m$) and decays exponentially away from that site, with a decay length of $\chi$, which in our case would be several cells long. Thus, given a $\phi_m$ with an eigenvalue $|E_m| > 1$, $t$ repeated actions of the convolutional-sigmoid layer will make $\psi_n[x]$ in a $\chi$-vicinity of $x_m$ grow in size as $e^{E_m t}$. Thus $(|E_m| - 1)^{-1}$ gives the characteristic time it takes such a localized eigenfunction to grow into an unwanted localized region with an On activation, which we define as a bug. Our original question of determining the chance of bugs now translates into a linear-algebra task: finding $N_{\bar{\lambda}}$, the number of eigenvalues of $L_I$ which are larger than 1 in magnitude, averaged over $I$, for a given $\bar{\lambda}$. Since $\bar{\lambda}$ simply scales all eigenvalues, one finds that $N_{\bar{\lambda}}$ is the number of eigenvalues larger than $\bar{\lambda}^{-1}$ in $L_I$ with $\bar{\lambda} = 1$. Analyzing this latter operator, it is easy to show that the maximal eigenvalue occurs when $\phi_m(r)$ has a uniform pattern on a large uniform region where $I$ is black. Indeed, if $I$ contains a uniform black box of dimension $l_u \times l_u$, the maximal eigenvalue is easily shown to be $E_{l_u} = 5 - 2\pi^2/l_u^2$. However, the chance that such a uniform region exists goes as $(l/l_u)^2 e^{\log(\rho_c) l_u^2}$, and so $P(\Delta E) \propto l^2 e^{2\pi^2 \log(\rho_c)/\Delta E}$, where $\Delta E = 5 - E$. This reasoning is rigorous as far as lower bounds on $N_{\bar{\lambda}}$ are concerned; however, it turns out to capture the functional behavior of $P(\Delta E)$ near $\Delta E = 0$ accurately (BID18), namely $P(\Delta E) \approx C\, l^2\, e^{2\pi^2 \log(\rho_c)/\Delta E}$, where the unknown constant $C$ captures the dependence on various microscopic details. In the Appendix we find numerically that $C \approx 0.7$.
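The linear-algebra task just described can be probed numerically; below is a sketch under our reading of the operator as the identity plus the adjacency matrix of the black subgraph, scaled by $\bar{\lambda}$ (this reading is consistent with the quoted maximal eigenvalue $5 - 2\pi^2/l_u^2$ on a uniform black box, but it is an assumption, as the displayed operator is elided above).

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import eigsh

def top_eigenvalue(maze):
    """Largest eigenvalue of (I + A_black): Off activations are stable when
    lambda_bar times this value stays below 1; larger values signal bugs."""
    idx = {p: k for k, p in enumerate(zip(*np.nonzero(maze)))}
    op = lil_matrix((len(idx), len(idx)))
    for (a, b), k in idx.items():
        op[k, k] = 1.0                                   # self coupling
        for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            q = (a + da, b + db)
            if q in idx:
                op[k, idx[q]] = 1.0                      # black-neighbour coupling
    return float(eigsh(op.tocsr(), k=1, which='LA',
                       return_eigenvectors=False)[0])
```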
Following this we find DISPLAYFORM3 DISPLAYFORM4 The range of integration is chosen to include all eigenvalues which, following a multiplication by $\bar\lambda$, would be larger than 1. To conclude, we have found the number of isolated unwanted On activations which develop on $l \times l$ Off regions. Dividing this number by $l^2$ we obtain the density of bugs ($\rho_{bug}$) near $\lambda \approx \lambda_c$. The last technical step is thus to express $\rho_{bug}$ in terms of $\lambda$. Focusing on the small-$\rho_{bug}$ region, or $\Delta E \to 0^+$, we find that $\Delta E = 0$ occurs when $\frac{d\sigma}{dx}\big(\sigma^{-1}(\eta_\infty(\lambda))\big) = 1/(5\lambda)$, $\bar\lambda = 1/5$, and $\lambda = \lambda_c = 9.72$. Expanding around $\lambda = \lambda_c$ we find DISPLAYFORM5. Approximating the integral over $P(x)$ and taking the leading scale dependence, we arrive at Eq. FORMULA3 with the constant there given by the above $C$ rescaled by the factor $\frac{10\lambda_c}{49-\lambda_c}$.

In this subsection we wish to understand the large-scale effect of $\rho_{bug}$, namely its effect on the test error and the cost function. Our key results here are that DISPLAYFORM0 DISPLAYFORM1; despite its appearance, it can be verified that the above right-hand side is smaller than $L^{-5/48}$ within its domain of applicability.

To derive Eqs. FORMULA9 and FORMULA10, we note that as a bug is created in a large maze, it quickly switches On the cells within the black "room" in which it was created. From this region it spreads according to BFS and turns On the entire cluster connected to the buggy room (see FIG3). To assess the effect this bug has on performance, first note that solvable mazes would be labelled Solvable regardless of bugs; however, unsolvable mazes might appear solvable if a bug occurs on a cell which is connected to the center of the maze. Assuming we have an unsolvable maze, we thus ask what is the chance of it being classified as solvable. Given a particular unsolvable maze instance ($I$), the chance of classifying it as solvable is given by $p_{err}(I) = 1 - (1-\rho_{bug})^s$, where $s$ counts the number of sites in the cluster connected to the central site (the central cluster). The probability distribution of $s$ for percolation is known and given by $p(s) = B s^{1-\tau}$, $\tau = 187/91$ (BID6), with $B$ an order-one constant which depends on the underlying lattice. Since clusters have a fractional dimension, the maximal cluster size is $L^{d_f}$. Consequently, $p_{err}(I)$ averaged over all $I$ instances is given by DISPLAYFORM3, which can be easily expressed in terms of Gamma functions ($\Gamma(x)$, $\Gamma(a, x)$) (see BID0). In the limit $\rho_{bug} \lesssim L^{-d_f}$, where its derivatives with respect to $\rho_{bug}$ are maximal, it simplifies to DISPLAYFORM4, whereas for $\rho_{bug} > L^{-d_f}$ its behavior changes to $p_{err} = \big({-B}\,\Gamma(2-\tau)\big)\,\rho_{bug}^{\tau-2}$. Notably, once $\rho_{bug}$ becomes of order one, several of the approximations we took break down.

Let us relate $p_{err}$ to the test error (err). Earlier, the cost function was defined as the mislabeling chance over the average chance of being solvable ($p_{solvable}$). Following the above discussion, the mislabelling chance is $p_{err}\,p_{solvable}$, and consequently $err = p_{err}$. Combining Eqs. 1 and 5 we obtain our key results, Eqs. FORMULA9 and FORMULA10.

As a side note, one should appreciate a potential training obstacle that has been avoided, related to the fact that $err \propto \rho_{bug}^{5/91}$. Considering $L \to \infty$, if $\rho_{bug}$ were simply proportional to $(\lambda_c - \lambda)$, err would have a sharp singularity near zero. For instance, as one reduces err by a factor of $1/e$, the gradients increase by $e^{86/5} \approx 3\times 10^7$. These effects are in accordance with one's intuition that a few bugs in a long algorithm will typically have a devastating effect on performance.
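A quick numeric sketch (our own) of the averaged mislabeling chance: integrate the percolation cluster-size distribution $p(s) = B s^{1-\tau}$ against the per-instance error $1 - (1-\rho_{bug})^s$ up to the maximal cluster size $L^{d_f}$, with $d_f = 91/48$ for 2D percolation; $B$ is set to 1 purely for illustration.

```python
import numpy as np
from scipy.integrate import quad

tau, d_f, B = 187 / 91, 91 / 48, 1.0   # B is an order-one constant; 1.0 assumed

def p_err(rho_bug, L):
    integrand = lambda s: B * s ** (1 - tau) * (1 - (1 - rho_bug) ** s)
    val, _ = quad(integrand, 1, L ** d_f)
    return val

L = 36
for rho in (1e-6, 1e-4, 1e-2):
    print(f"rho_bug={rho:.0e}  p_err={p_err(rho, L):.4g}")
# For rho_bug >~ L^{-d_f} the curve should follow the rho^(tau-2) = rho^(5/91)
# scaling quoted in the text.
```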
Interestingly, however, the essential singularity in $\rho_{bug}(\lambda)$, derived in the previous section, completely flattens the gradients near $\lambda_c$. Thus the essential singularity, which comes directly from rare events in the dataset, strongly regulates the test error and, in a related way, the cost function. However, it also has a negative side effect concerning the robustness of generalization. Given a finite dataset, the rarity of events is bounded, and so having $\lambda < \lambda_c$ may still provide perfect performance. However, when encountering a larger dataset, some samples with rarer events (i.e. larger black regions) would appear, and the network will fail sharply on these (i.e. the wrong prediction would get a high probability). Further implications of this dependence on rare events for training and generalization errors will be studied in future work.

To provide an explanation for this phenomenon, let us divide the activations of the upper layer into their starting-point dependent and independent parts. Let $H_n$ denote the activations at the top layer. We expand them as a sum of two functions DISPLAYFORM0 where the functions $A$ and $B$ are normalized such that their variance on the data is 1, with coefficients $\alpha$ and $\beta$, respectively. Notably, near the reported total neglect minima we found that $\alpha/\beta \approx e^{-10}$. Also note that for the biased dataset the maze itself is uncorrelated with the labels, and thus $\beta$ can be thought of as noise.

Clearly, any solution to the Maze-testing problem requires the starting-point dependent part ($\alpha$) to become larger than the independent part ($\beta$). We argue, however, that in the process of increasing $\alpha$ the activations will have to go through an intermediate "noisy" region. In this noisy region $\alpha$ grows in magnitude, but much less than $\beta$, and in particular obeys $\alpha < \beta^2$. As shown in the Appendix, the negative log-likelihood, a commonly used cost function, is proportional to $\beta^2 - \alpha$ for $\alpha, \beta \ll 1$. Thus it penalizes random false predictions and, within a region obeying $\alpha < \beta^2$, it has a minimum (global with respect to that region) when $\alpha = \beta = 0$; the latter is the definition of a total neglect minimum.

Establishing the above $\alpha \ll \beta^2$ conjecture analytically requires several pathological cases to be examined and is left for future work. In this work we provide an argument for its typical correctness, along with supporting numerics in the Appendix. A deep convolutional network with a finite kernel has a notion of distance and locality. For many parameter ranges it exhibits a typical correlation length ($\chi$), that is, a scale beyond which two activations are statistically independent. Clearly, to solve the current problem $\chi$ has to grow to order $L$, such that information from the input reaches the output. However, as $\chi$ gradually grows, relevant and irrelevant information is mixed and propagated onto the final layer. While $\beta$ depends on information which is locally accessible at each layer (i.e. the maze shape), $\alpha$ requires information to travel from the first layer to the last. Consequently $\alpha$ and $\beta$ are expected to scale differently, as $e^{-L/\chi}$ and $e^{-1/\chi}$ respectively (for $\chi \ll L$). Given this, one finds that $\alpha \ll \beta^2$ as claimed. Further numerical support of this conjecture is shown in the Appendix, where an upper bound on the ratio $\alpha/\beta^2$ is studied on 100 different paths leading from the total neglect minimum found during training to the checkerboard-BFS minimum. In all cases there is a large region around the total neglect minimum in which $\alpha \ll \beta^2$.
Despite their black-box reputation, in this work we were able to shed some light on how a particular deep CNN architecture learns to classify topological properties of graph-structured data. Instead of focusing our attention on general graphs, which would correspond to data in non-Euclidean spaces, we restricted ourselves to planar graphs over regular lattices, which are still capable of modelling real-world problems while being suitable for CNN architectures. We described a toy problem of this type (Maze-testing) and showed that a simple CNN architecture can express an exact solution to this problem. Our main contribution was an asymptotic analysis of the cost function landscape near two types of minima which the network typically settles into: BFS-type minima, which effectively execute a breadth-first search algorithm, and poorly performing minima in which important features of the input are neglected.

Quite surprisingly, we found that near the BFS-type minima gradients do not scale with $L$, the maze size. This implies that global optimization approaches can find such minima in an average time that does not increase with $L$. Such very moderate gradients are the result of an essential singularity in the cost function around the exact solution. This singularity in turn arises from rare statistical events in the data which act as early precursors to failure of the neural network, thereby preventing a sharp and abrupt increase in the cost function. In addition, we identified an obstacle to learning whose severity scales with $L$, which we called neglect minima. These are poorly performing minima in which the network neglects some important features relevant for predicting the label. We conjectured that these occur because the gradual incorporation of these important features into the prediction requires some period in the training process in which predictions become noisier. A "wall of noise" then keeps the network in a poorly performing state.

It would be interesting to study how well the results and lessons learned here generalize to other tasks which require very deep architectures. These include the importance of rare events, the essential singularities in the cost function, the localized nature of malfunctions (bugs), and neglect minima stabilized by walls of noise. These conjectures could potentially be tested analytically, using other toy models, as well as on real-world problems such as basic graph algorithms (e.g. shortest-path) (BID14); textual reasoning on the bAbI dataset, which can be modelled as a graph; and primitive operations in "memory" architectures (e.g. copy and sorting) (BID13). More specifically, the importance of rare events can be analyzed by studying the statistics of errors on the dataset as it is perturbed away from a numerically obtained minimum. Technically, one should test whether the perturbation induces a typical small deviation of the prediction on most samples in the dataset, or rather a strong deviation on just a few samples. Bugs can be similarly identified by comparing the activations of the network at the numerically obtained minimum and at some small perturbation of that minimum, while again looking at typical versus extreme deviations. Such an analysis can potentially lead to safer and more robust designs, where the network fails typically and mildly rather than rarely and strongly. Turning to partial neglect minima, these can be identified provided one has some prior knowledge of the relevant features in the dataset.
The correlations or mutual information between these features and the activations at the final layer can then be studied to detect any sign of neglect. If problems involving neglect are discovered, it may be beneficial to add extra terms to the cost function which encourage more mutual information between these neglected features and the labels, thereby overcoming the noise barrier and pushing the training dynamics away from such neglect minima.

We have implemented the architecture described in the main text using Theano (BID38) and tested how the cost changes as a function of $\delta = \lambda_c - \lambda$ ($\lambda_c = 9.727\ldots$) for mazes of sizes $L = 24, 36$ and depth (number of layers) 128. These depths are enough to keep the error rate negligible at $\delta = 0$. A slight change compared to Maze-testing as described in the main text is that the hot-spot was fixed at a distance $L/2$ for all mazes. The size of the datasets was between $10^5$ and $10^6$. We numerically obtained the normalized performance ($\mathrm{cost}_L(\delta)$) as a function of $L$ and $\delta$. As follows from the equation in the main text, the curves $\log\!\big(L^{-2+5/24}\,\mathrm{cost}_L(\delta)\big)$ for $L = 24$ and $L = 36$ should collapse onto each other for $\rho_{bug} < L^{-d_f}$. FIG4 of the main text depicts three such curves: two for $L = 36$, to give an impression of the statistical error, and one for $L = 24$ (green), along with the fit to the theory (dashed line). The fit, which involves two parameters (the proportionality constant in the equation of the main text and $C$), captures the behavior well over three orders of magnitude. As our results are only asymptotic, both in the sense of large $L$ and of $\lambda \to \lambda_c$, minor discrepancies are expected.

To prepare the action of the sigmoid-convolutional layer for linearization, we find it useful to introduce the following variables on locations ($r_b$, $r_w$) with black (b) and white (w) cells DISPLAYFORM0 $a(r_w) = e^{-5.5\lambda}$. For a uniform black maze the inhomogeneous term vanishes and the resulting linear equation is homogeneous. Consequently destabilization occurs exactly at $\bar\lambda = 1/5$ and is not blurred by the inhomogeneous terms. Recall that $\lambda_c$ is defined as the value at which the two lower solutions merge and vanish (become complex). It is now easy to verify that, even within the linear approximation, destabilization occurs exactly at $\lambda_c$. The source of this agreement is the fact that $d$ vanishes for a uniform black maze. The qualitative lesson here is thus the following: the eigenvectors of $S$ with large $s$ are associated with large black regions in the maze. It is only on the boundaries of such regions that $d$ is non-zero. Consequently, near $\lambda \approx \lambda_c$ the $d$ term projected on the largest eigenvalues can, to a good accuracy, be ignored, and the stability analysis can be carried out on the homogeneous equation $\psi = S\psi$, where $s_n < 1$ means stability and $s_n > 1$ implies a bug.

Consider an abstract classification task where data points $x \in X$ are classified into two categories $l \in \{0, 1\}$ using a deterministic function $f: X \to \{0, 1\}$, and further assume for simplicity that the chance of $f(x) = 0$ is equal to that of $f(x) = 1$. Phrased as a conditional probability distribution, $P_f(l|x)$ is given by $P_f(f(x)|x) = 1$ while $P_f(\lnot f(x)|x) = 0$. Next we wish to compare the following family of distributions DISPLAYFORM0 where $g: X \to \{0, 1\}$ is a random function, uncorrelated with $f(x)$, outputting the labels $\{0, 1\}$ with equal probability. Notably, at $\alpha = 1/2, \beta = 0$ it yields $P_f$, while at $\alpha, \beta = 0$ it is simply the maximum entropy distribution. Let us measure the log-likelihood of $P_{\alpha,\beta}$ under $P_f$ for $\alpha, \beta \ll 1$; expanding $L(\alpha, \beta)$ to second order, one finds a constant plus an $O(\alpha) - O(\beta^2)$ dependence. We thus find that $\beta$ reduces the log-likelihood, in what can be viewed as a penalty on false confidence or noise.
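The $O(\alpha) - O(\beta^2)$ structure can be checked numerically. The sketch below (ours; the exact parametrization of $P_{\alpha,\beta}$ is an assumption consistent with the leading-order form given in the next passage) Monte-Carlo-estimates the log-likelihood for random $f$ and independent random $g$, and compares against the second-order expansion $\Delta L \approx \alpha - \beta^2/2$.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000
f = rng.integers(0, 2, N)          # true deterministic labels
g = rng.integers(0, 2, N)          # random labels, uncorrelated with f

def loglik(alpha, beta):
    # assumed leading-order form:
    # P_{a,b}(l=f(x)|x) = 1/2 + [alpha + beta*(2g-1)*(2f-1)] / 2
    p_correct = 0.5 + 0.5 * (alpha + beta * (2 * g - 1) * (2 * f - 1))
    return np.log(p_correct).mean()

L0 = loglik(0.0, 0.0)
for a, b in [(0.02, 0.0), (0.0, 0.1), (0.02, 0.1)]:
    print(f"alpha={a} beta={b}  dL={loglik(a, b) - L0:+.5f}  "
          f"second-order prediction={a - b**2 / 2:+.5f}")
```

The measured change in log-likelihood grows linearly in $\alpha$ but is penalized quadratically in $\beta$, matching the claim that noise is punished while weak signal gains little.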
Assuming, as argued in the main text, that $\alpha$ is constrained to be smaller than $\beta^2$ near $\beta \approx 0$, it is preferable to take both $\alpha$ and $\beta$ to zero and reach the maximal entropy distribution. We note in passing that the same arguments can easily be generalized to $f(x), g(x)$ taking real values, leading again to an $O(\alpha) - O(\beta^2)$ dependence in the cost function.

Let us relate the above notation to that of the main text. Clearly $x = (I, H_0)$ and $\{0, 1\} = \{Unsolvable, Solvable\}$. Next we recall that in the main text $\alpha$ and $\beta$ multiplied the vector functions representing the $H_0$-dependent and $H_0$-independent parts of $H_n$. The probability estimated by the logistic regression module was given by
$$P(Solvable|x) = \frac{e^{\vec K_{Solvable} \cdot \vec H_n}}{e^{\vec K_{Solvable} \cdot \vec H_n} + e^{\vec K_{Unsolvable} \cdot \vec H_n}}, \qquad P(Unsolvable|x) = \frac{e^{\vec K_{Unsolvable} \cdot \vec H_n}}{e^{\vec K_{Solvable} \cdot \vec H_n} + e^{\vec K_{Unsolvable} \cdot \vec H_n}},$$
which yields, to leading order in $\alpha$ and $\beta$, $P_{\alpha,\beta}(l|x) = 1/2 + \alpha(2l - 1)\,$DISPLAYFORM1 where $\vec K_- = (\vec K_{Solvable} - \vec K_{Unsolvable})/2$ and $(2l - 1)$ is understood as taking the values $\pm 1$. Consequently, $(2f - 1)$ and $(2g - 1)$ are naturally identified with $\vec K_{Solvable} \cdot \vec A/N_A$ and $\vec K_{Solvable} \cdot \vec B/N_B$ respectively, with $N_A$ and $N_B$ being normalization constants ensuring a variance of 1, and with $(\alpha, \beta)$ rescaled accordingly by $(N_A, N_B)$. Recall also that, by construction of the dataset, the $g$ we thus obtain is uncorrelated with $f$.

SUPPORTING NUMERICS FOR THE $\alpha \ll \beta^2$ CONJECTURE. Here we provide numerical evidence showing that $\alpha \ll \beta^2$ in a large region around the total neglect minima found during the training of our architecture on the biased dataset (i.e. the one where marginalizing over the starting-point yields a 50/50 chance of being solvable regardless of the maze shape).

For a given set of $K_{hot}$, $K$, and $b$ parameters, we fix the maze shape and study the variance of the top-layer activations given $O$ different starting points. We pick the maximal of these and then average this maximal variance over $O$ different mazes. This yields our estimate of $\alpha$; in fact it is an upper bound on $\alpha$, as this averaged max-variance may reflect wrong predictions, provided that they depend on $H_0$. We then obtain an estimate of $\beta$ by again calculating the average max-variance of the top layer, now however with $H_0 = 0$ for all maze shapes (a toy sketch of this estimation procedure is given after the figure caption below). Next we chose 100 random paths parametrized by $\gamma$, leading from the total neglect minimum ($\gamma = 0$) through a random point at $\gamma = 15$, and then to the checkerboard-BFS minimum at $\gamma = 30$. The random point was placed within a hyper-cube of side length 4 having the total neglect minimum at its center. The path was a simple quadratic interpolation between the three points. The graph below shows the statistics of $\alpha/\beta^2$ on these 100 different paths. Notably, no path even had $\alpha > e^{-30}\beta^2$ within the hyper-cube. We have tried other side lengths for the hyper-cube (12 and 1) and arrived at the same results.

Figure caption: The natural logarithm of an upper bound on $\alpha/\beta^2$ as a function of a parametrization ($\gamma$) of a path leading from the numerically obtained total neglect minimum to the checkerboard-BFS minimum through a random point. The three different curves show the max, mean, and median based on 100 different paths. Notably, no path violated the $\alpha \ll \beta^2$ constraint in the vicinity of the total neglect minimum.
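The estimation procedure just described can be sketched in a few lines. Here the trained network is replaced by a toy stand-in with a deliberately weak starting-point dependence; the stand-in, its sizes, and all names are placeholders of ours, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_net(maze, h0, W=rng.standard_normal((2, 8))):
    """Stand-in for the top-layer activations: a tiny random map of
    (maze summary, starting-point summary) -> 8 neurons, with the
    starting-point contribution made deliberately weak."""
    x = np.array([maze.mean(), 1e-5 * h0.sum()])
    return np.tanh(x @ W)

mazes = [(rng.random((24, 24)) < 0.59).astype(float) for _ in range(50)]
starts = [np.eye(24 * 24)[rng.integers(24 * 24)].reshape(24, 24) for _ in range(20)]

# alpha: maze-averaged maximal per-neuron variance across starting points
alpha = np.mean([np.stack([toy_net(I, h0) for h0 in starts]).var(0).max()
                 for I in mazes])
# beta: the same max-variance statistic across mazes, with H0 = 0
beta = np.stack([toy_net(I, np.zeros((24, 24))) for I in mazes]).var(0).max()
print(f"alpha={alpha:.3e}  beta={beta:.3e}  alpha/beta^2={alpha / beta**2:.3e}")
```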
BJGWO9k0Z
A toy dataset based on critical percolation in a planar graph provides an analytical window to the training dynamics of deep neural networks
While neural networks can be trained to map from one specific dataset to another, they usually do not learn a generalized transformation that can extrapolate accurately outside the space of training. For instance, a generative adversarial network (GAN) exclusively trained to transform images of cars from light to dark might not have the same effect on images of horses. This is because neural networks are good at generation within the manifold of the data that they are trained on. However, generating new samples outside of the manifold, or extrapolating "out-of-sample", is a much harder problem that has been less well studied. To address this, we introduce a technique called neuron editing that learns how neurons encode an edit for a particular transformation in a latent space. We use an autoencoder to decompose the variation within the dataset into activations of different neurons and generate transformed data by defining an editing transformation on those neurons. By performing the transformation in a trained latent space, we encode fairly complex and non-linear transformations of the data with much simpler distribution shifts to the neurons' activations. We showcase our technique on image domain/style transfer and two biological applications: removal of batch artifacts representing unwanted noise and modeling the effect of drug treatments to predict synergy between drugs.

Many experiments in biology are conducted to study the effect of a treatment or a condition on a set of samples. For example, the samples can be groups of cells and the treatment can be the administration of a drug. However, experiments and clinical trials are often performed on only a small subset of samples from the entire population. Usually, it is assumed that the effects generalize in a context-independent manner, without mathematically attempting to model the effect and its potential interactions with the context. However, mathematically modeling the effect and its potential interactions with context information would give us a powerful tool that would allow us to assess how the treatment would generalize beyond the samples measured.

We propose a neural network-based method for learning a general edit function corresponding to treatment in the biological setting. While neural networks offer the power and flexibility to learn complicated ways of transforming data from one distribution to another, they are often overfit to the training dataset in the sense that they only learn how to map one specific data manifold to another, and not a general edit function. Indeed, popular neural network architectures like GANs pose the problem as one of learning to generate the post-treatment data distributions from pre-treatment data distributions. Instead, we reframe the problem as that of learning an edit function between the pre- and post-treatment versions of the data, one that could be applied to other datasets.

We propose to learn such an edit, which we term neuron editing, in the latent space of an autoencoder neural network with non-linear activations. First we train an autoencoder on the entire population of data which we are interested in transforming. This includes all of the pre-treatment samples and the post-treatment samples from the subset of the data on which we have post-treatment measurements. The internal layers of this autoencoder represent the data with all existing variation decomposed into abstract features (neurons) that allow the network to reconstruct the data accurately (BID28; BID4; BID17; BID24).
Neuron editing involves extracting differences between the observed pre- and post-treatment activation distributions for neurons in this layer, and then applying them to pre-treatment data from the rest of the population to synthetically generate post-treatment data. Performing the edit node-by-node in this space actually encodes complex multivariate edits in the ambient space, performed on denoised and meaningful features, owing to the fact that these features are themselves complex non-linear combinations of the input features.

While neuron editing is a general technique that could be applied to the latent space of any neural network, even GANs themselves, we instead focus exclusively on the autoencoder in this work to leverage three of its key advantages. First, we seek to model complex distribution-to-distribution transformations between large samples in high-dimensional space. While this is generally intractable due to the difficulty of estimating joint probability distributions, research has provided evidence that working in a lower-dimensional manifold facilitates learning transformations that would otherwise be infeasible in the original ambient space (BID32; BID21; BID29). The non-linear dimensionality reduction performed by autoencoders finds intrinsic data dimensions that essentially straighten the curvature of data in the ambient space. Thus complex effects can become simpler shifts in distribution that are computationally efficient to apply.

Second, by performing the edit on the neural network's internal layer, we allow for the modeling of some context dependence. Some neurons of the internal layer have a drastic change between pre- and post-treatment versions of the experimental subpopulation, while other neurons, such as those that encode context information not directly associated with treatment, have less change in the embedding layer. The latter neurons are less heavily edited but still influence the output jointly with the edited neurons, due to their integration in the decoding layers. These edited neurons interact with the data-context-encoding neurons in complex ways that may be more predictive of treatment than the experimental norm of simply assuming widespread, context-free generalization of results.

Third, editing in a low-dimensional internal layer allows us to edit a denoised version of the data. Because of the reconstruction penalty, the more significant dimensions are retained through the bottleneck dimensions of an autoencoder while noise dimensions are discarded. Thus, by editing in the hidden layer, we avoid editing noise and instead edit significant dimensions of the data.

We note that neuron editing makes the assumption that the internal neurons have semantic consistency across the data, i.e., the same neurons encode the same types of features for every data manifold. We demonstrate that this holds in our setting because the autoencoder learns a joint manifold of all of the given data, including pre- and post-treatment samples of the experimental subpopulation and pre-treatment samples from the rest of the population. Recent results show that neural networks prefer to learn patterns over memorizing inputs, even when they have the capacity to do so (BID31).

We demonstrate that neuron editing extrapolates better than generative models on two important criteria. First, as to the original goal, the predicted change on extrapolated data more closely resembles the predicted change on interpolated data.
Second, the editing process produces more complex variation, since it simply preserves the existing variation in the data rather than needing a generator to learn to create it. We compare the predictions from neuron editing to those of several generation-based approaches: a traditional GAN; a GAN implemented with residual blocks (ResnetGAN), to show that generating residuals is not the same as editing (BID26); and a CycleGAN (BID33). While in other applications, like natural images, GANs have shown an impressive ability to generate plausible individual points, we illustrate that they struggle with these two criteria. We also motivate why neuron editing is performed at inference time by comparing against a regularized autoencoder that performs the internal-layer transformations during training, where the decoder learns to undo the transformation and reconstruct the input unchanged (BID0).

In the following section, we detail the neuron editing method. Then, we motivate the extrapolation problem by trying to perform natural image domain transfer on the canonical CelebA dataset. We then move to two biological applications where extrapolation is essential: correcting the artificial variability introduced by measuring instruments (batch effects), and predicting the combined effects of multiple drug treatments (combinatorial drug effects) (BID1).

Let $S, T, X \subseteq \mathbb{R}^d$ be sampled sets representing $d$-dimensional source, target, and extrapolation distributions, respectively. We seek a transformation that has two properties: when applied to $S$ it produces a distribution equivalent to the one represented by $T$, and when applied to $T$ it is the identity function. GANs learn a transformation with these properties, and when parameterized with ReLU or leaky ReLU activations, as they often are, this transformation also has the property that it is piecewise linear (BID11; BID14). However, the GAN optimization paradigm produces transformations that do not behave comparably on both $S$ and $X$. Therefore, instead of learning such a transformation, we define a transformation with these properties on a learned space (summarized in FIG0).

We first train an encoder/decoder pair $E/D$ to map the data into an abstract neuron space, decomposed into high-level features, such that it can also decode from that space, i.e., an objective $L$: DISPLAYFORM0 where MSE is the mean-squared error. Then, without further training, we separately extract the activations of an $n$-dimensional internal layer of the network for inputs from $S$ and from $T$, denoted by $a_S: S \to \mathbb{R}^n$, $a_T: T \to \mathbb{R}^n$. We define a piecewise linear transformation, called NeuronEdit, which we apply to these distributions of activations: DISPLAYFORM1 where $a \in \mathbb{R}^n$ consists of the $n$ activations for a single network input, $p^S_j, p^T_j \in \mathbb{R}^n$ consist of the $j$-th percentiles of activations (i.e., for each of the $n$ neurons) over the distributions of $a_S, a_T$ correspondingly, and all operations are taken pointwise, i.e., independently for each of the $n$ neurons in the layer. Then, we define $NeuronEdit(a_S): S \to \mathbb{R}^n$, given by $x \mapsto NeuronEdit(a_S(x))$, and equivalently for $a_T$ and any other distribution (or collection) of activations over a set of network inputs.
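The percentile-based edit itself reduces to a few lines of array code. Below is a minimal NumPy sketch of one plausible instantiation (the paper's exact piecewise-linear form sits behind the DISPLAYFORM placeholder above, so the interpolation scheme and all names here are our assumptions): for each neuron, the source-to-target shift is estimated at a grid of percentiles and added to each activation at its rank within the source distribution.

```python
import numpy as np

def neuron_edit(a, a_src, a_tgt, n_percentiles=100):
    """Percentile-based edit, applied independently to each of the n neurons.
    a     : (m, n)  activations to transform
    a_src : (Ns, n) internal-layer activations on the source set S
    a_tgt : (Nt, n) internal-layer activations on the target set T
    """
    qs = np.linspace(0, 100, n_percentiles + 1)
    p_src = np.percentile(a_src, qs, axis=0)   # (n_percentiles+1, n)
    p_tgt = np.percentile(a_tgt, qs, axis=0)
    shift = p_tgt - p_src                      # per-neuron, per-percentile shift
    out = np.empty_like(a, dtype=float)
    for j in range(a.shape[1]):
        # percentile rank of each activation within the source distribution
        ranks = np.interp(a[:, j], p_src[:, j], qs)
        out[:, j] = a[:, j] + np.interp(ranks, qs, shift[:, j])
    return out
```

Applied to the source activations, this instantiation reproduces the target distribution approximately (property 1 below); the paper's exact form additionally guarantees identity on the target (property 2). Applying the edit through the frozen decoder, roughly X_hat = D(neuron_edit(a_X, a_S, a_T)), yields the edited extrapolation data described next.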
Therefore, the NeuronEdit function operates on distributions, represented via activations over network input samples, and transforms the input activation distribution based on the difference between the source and target distributions (considered via their percentile discretization). We note that the NeuronEdit function has the three properties of a GAN generator: (1) $NeuronEdit(a_S) \approx a_T$ (in terms of the represented $n$-dimensional distributions); (2) $NeuronEdit(a_T) = a_T$; (3) piecewise linearity. However, we are also able to guarantee that the neuron editing performed on the source distribution $S$ will be the same as that performed on the extrapolation distribution $X$, which would not be the case with the generator of a GAN.

To apply the learned transformation to $X$, we first extract the activations of the internal layer computed by the encoder, $a_X$. Then, we cascade the transformations applied to the neuron activations through the decoder without any further training. Thus, the transformed output $\hat X$ is obtained by: DISPLAYFORM2

Crucially, the nomenclature of an autoencoder no longer strictly applies. If we allowed the encoder or decoder to train with the transformed neuron activations, the network could learn to undo these transformations and still produce the identity function. However, since we freeze training and apply these transformations exclusively at inference, we turn an autoencoder into a generative model that need not be close to the identity.

Training a GAN in this setting could utilize only the data in $S$ and $T$, since we have no real examples of the output for $X$ to feed to the discriminator. Neuron editing, on the other hand, is able to model the variation intrinsic to $X$ in an unsupervised manner despite not having real post-transformation data for $X$. Since we know a priori that $X$ will differ substantially from $S$, this provides significantly more information. Furthermore, GANs are notoriously tricky to train (BID22; BID12; BID30). Adversarial discriminators suffer from oscillating optimization dynamics, uninterpretable losses (BID6), and, most debilitatingly, mode collapse (BID25; BID15; BID20). Mode collapse refers to the discriminator being unable to detect differences in variability between real and fake examples. In other words, the generator learns to generate a point that is very realistic, but produces that same point for most (or even all) inputs, no matter how different the inputs are. In practice, we see that the discriminator struggles to detect differences between real and fake distributions even without mode collapse, as evidenced by the generator favoring ellipsoid output instead of the more complex and natural variability of the real data. Since our goal is not to generate convincing individual examples of the post-transformation output, but the more difficult task of generating convincing entire distributions of the post-transformation output, this is a worrisome defect of GANs. Neuron editing avoids all of these traps by learning an unsupervised model of the data space with the easier-to-train autoencoder. The essential step that facilitates generation is the isolation of the variation in the neuron activations that characterizes the difference between the source and target distributions. There is a relationship between neuron editing and the well-known word2vec embeddings in natural language processing (BID10). There, words are embedded in a latent space where a meaningful transformation, such as changing the gender of a word, is a constant vector in this space.
This vector can be learned on one example, like transforming man to woman, and then extrapolated to another example, like king, to predict the location in the space of queen. Neuron editing is an extension in complexity of word2vec's vector arithmetic, because instead of transforming a single point into another single point, it transforms an entire distribution into another distribution.

In this section we compare neuron editing to various generating methods: a regularized autoencoder, a standard GAN, a ResnetGAN, and a CycleGAN. For the regularized autoencoder, the regularization penalized differences between the distributions of the source and target in a latent layer, using maximum mean discrepancy (BID0; BID8). The image experiment used convolutional layers with stride-two filters of size four, with 64-128-256-128-64 filters in the layers. All other models used fully connected layers of size 500-250-50-250-500. In all cases, leaky ReLU activation was used with a 0.2 leak. Training was done with minibatches of size 100, with the Adam optimizer (BID16) and a learning rate of 0.001; a sketch of this configuration appears at the end of this passage.

We first consider a motivational experiment on the canonical image dataset of CelebA. If we want to learn a transformation that turns a given image of a person with black hair into that same person except with blond hair, a natural approach would be to collect two sets of images, one with all black-haired people and another with all blond-haired people, and teach a generative model to map between them. The problem with this approach is that the learned model will only be able to apply the hair color change to images similar to those in the collection, and will be unable to extrapolate. This is illustrated in FIG1, where we collect images that have the attribute male and the attribute black hair and try to map to the set of images with the attribute male and the attribute blond hair. Then, after training on this data, we extrapolate and apply the transformation to females with black hair, which had not been seen during training.

The GAN models are unable to successfully model this transformation on out-of-sample data. In the parts of the image that should stay the same (everything but the hair color), they do not always generate a recreation of the input. In the hair color, only sometimes is the color changed. The regular GAN model especially has copious artifacts that are a result of the difficulty in training these models. This provides further evidence of the benefits of avoiding these complications when possible, for example by using the stable training of an autoencoder and editing it as we do in neuron editing. In FIG1, we motivate why we need to perform the NeuronEdit transformation on the internal layer of a neural network, as opposed to applying it in some other latent space like PCA. Only in the neuron space has this complex and abstract transformation of changing the hair color (and only the hair color) been decomposed into a relatively simple and piecewise linear shift.

We next demonstrate another application of neuron editing's ability to learn to transform a distribution based on a separate source/target pair: batch correction. Batch effects are differences in the observed data that are caused by technical artifacts of the measurement process. In other words, we can measure the same sample twice and get two rather different datasets back. When we measure different samples, batch effects get confounded with the true difference between the samples.
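As a concrete rendering of the fully connected configuration described above (500-250-50-250-500 layers, 0.2 leaky ReLU, Adam at learning rate 0.001, minibatches of 100), here is a minimal PyTorch sketch; the paper's framework and the exact placement of nonlinearities are not specified, so those details are our assumptions.

```python
import torch
import torch.nn as nn

def mlp(sizes):
    layers = []
    for i in range(len(sizes) - 1):
        layers += [nn.Linear(sizes[i], sizes[i + 1]), nn.LeakyReLU(0.2)]
    return nn.Sequential(*layers[:-1])   # no nonlinearity after the last layer

d = 35                                   # e.g. the 35 proteins in the cytometry data
encoder, decoder = mlp([d, 500, 250, 50]), mlp([50, 250, 500, d])
params = list(encoder.parameters()) + list(decoder.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

def train_step(x):                       # x: a (100, d) minibatch
    opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(x)), x)
    loss.backward()
    opt.step()
    return loss.item()
```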
Batch effects are a ubiquitous problem in biological experimental data that (in the best case) prevent combining measurements or (in the worst case) lead to wrong conclusions. Addressing batch effects is a goal of many new models (BID9; BID27; BID7; BID13), including some deep learning methods (BID23; BID0).

One method for grappling with this issue is to repeatedly measure an identical control (spike-in) set of cells with each sample, and correct each sample based on the variation in its version of the control (BID3). In our terminology of generation, we choose our source/target pair to be Control1/Control2, and then extrapolate to Sample1. Our transformed Sample1 can then be compared to raw Sample2 cells, rid of any variation induced by the measurement process. We would expect this to be a natural application of neuron editing, as the data distributions are complex and the control population is unlikely to be similar to any (much less all) of the samples.

The dataset we investigate in this section comes from a mass cytometry (BID5) experiment which measures the amounts of particular proteins in each cell, in two different individuals infected with dengue virus (BID0). The data consists of 35 dimensions, where Control1, Control2, Sample1, and Sample2 have 18919, 22802, 94556, and 55594 observations, respectively. The two samples were measured in separate runs, so in addition to the true biological difference between them, there are also technical artifacts creating variation between them. From the controls, we can see one such batch effect, characterized by artificially low readings of the protein IFNg in Control1 (the x-axis in Figure 3a).
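In this spike-in setup, the correction amounts to a one-liner reusing the pieces sketched above (array names are ours; encode/decode are assumed to be the trained, frozen networks applied row-wise to cells-by-35 matrices):

```python
# Control1/Control2: the repeatedly measured spike-in controls from the two runs.
# Sample1: the out-of-sample cells to correct.
a_c1, a_c2, a_s1 = encode(control1), encode(control2), encode(sample1)
sample1_corrected = decode(neuron_edit(a_s1, a_src=a_c1, a_tgt=a_c2))
# sample1_corrected can now be compared directly against the raw Sample2 cells.
```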
This removes the batch effect that caused higher readings of InfG, while preserving all other real variation, including both the intra-sample variation and the variation separating the populations in CCR6.Unlike the other generative models, neuron editing produced the intended transformations for the proteins InfG and CCR6, and here we go on to confirm that its are accurate globally across all dimensions. In FIG2, a PCA embedding of the whole data space is visualized for Control1 (light blue), Control2 (light red), Sample1 (dark blue), and post-transformation Sample1 (dark red). The transformation from Control1 to Control2 mirrors the transformation applied to Sample1. Notably, the other variation (intra-sample variation) is preserved. In FIG2, we see that for every dimension, the variation between the controls corresponds accurately to the variation introduced by neuron editing into the sample. These global assessments across the full data space offer additional corroboration that the transformations produced by neuron editing reasonably reflect the transformation as evidenced by the controls. Finally, we consider biological data from a combinatorial drug experiment on cells from patients with acute lymphoblastic leukemia BID1. The dataset we analyze consists of cells Figure 3: The of learning to batch correct a sample based on the difference in a repeatedlymeasured control population measured. The true transformation is depicted with a solid arrow and the learned transformation with a dashed arrow. There is a batch effect that should be corrected in IFNg (horizontal) with true variation that should be preserved in CCR6 (vertical). All of the GANs attempt to get rid of all sources of variation (but do so only partially because the input is out-ofsample). The autoencoder does not move the data at all. Neuron editing corrects the batch effect in IFNg while preserving the true biological variation in CCR6. under four treatments: no treatment (basal), BEZ-235 (Bez), Dasatinib (Das), and both Bez and Das (Bez+Das). These measurements also come from mass cytometry, this time on 41 dimensions, with the four datasets consisting of 19925, 20078, 19843, and 19764 observations, respectively. In this setting, we define the source to be the basal cells, the target to be the Das cells, and then extrapolate to the Bez cells. We hold out the true Bez+Das data and attempt to predict the effects of applying Das to cells that have already been treated with Bez. A characteristic effect of applying Das is a decrease in p4EBP1 (seen on the x-axis of FIG3). No change in another dimension, pSTATS, is associated with the treatment (the y-axis of FIG3). Neuron editing accurately models this horizontal change, without introducing any vertical change or losing variation within the extrapolation dataset FIG3 ). The regularized autoencoder, as before, does not change the output at all, despite the manipulations within its internal layer FIG3. None of the three GAN models accurately predict the real combination: the characteristic horizontal shift is identified, but additional vertical shifts are introduced and much of the original within-Bez variability is lost FIG3 -e).We note that since much of the variation in the target distribution already exists in the source distribution and the shift is a relatively small one, we might expect the ResnetGAN to be able to easily mimic the target. 
However, despite the residual connections, it still suffers from the same problems as the other models using the generating approach: namely, the GAN objective encourages all output to be like the target it trained on. That the GANs are not able to replicate even two-dimensional slices of the target data shows that they have not learned the appropriate transformation. To further evaluate whether neuron editing produces a meaningful transformation globally, we here make a comparison across every dimension. FIG4 compares the real and predicted means (in 6a) and variances (in 6b) of each dimension. Neuron editing more accurately predicts the principal direction and magnitude of the transformation across all dimensions. Furthermore, neuron editing better preserves the variation in the real data. In almost all dimensions, the GAN generates data with less variance than really exists.

In this paper, we tackled a data-transformation problem inspired by biological experimental settings: that of generating transformed versions of data based on observed pre- and post-transformation versions of a small subset of the available data. This problem arises during clinical trials, or in settings where the effects of drug treatment (or other experimental conditions) are only measured in a subset of the population but are expected to generalize beyond that subset. Here we introduce a novel approach, which we call neuron editing, for applying the treatment effect to the remainder of the dataset. Neuron editing makes use of the encoding learned by the latent layers of an autoencoder and extracts the changes in activation distribution between the observed pre- and post-treatment measurements. Then, it applies these same edits to the internal-layer encodings of other data to mimic the transformation. We show that performing the edit on neurons of an internal layer results in more realistic transformations of image data, and successfully predicts synergistic effects of drug treatments in biological data. Moreover, we note that it is feasible to learn complex data transformations in the non-linear, dimensionality-reduced space of a hidden layer, rather than in the ambient space where joint probability distributions are difficult to estimate. Finally, learning edits in a hidden layer allows for interactions between the edit and other context information from the dataset during decoding. Future work along these lines could include training parallel encoders with the same decoder, or training to generate conditionally.
rygZJ2RcF7
We reframe the generation problem as one of editing existing points, and as a result extrapolate better than traditional GANs.
We present a representation for describing transition models in complex uncertain domains using relational rules. For any action, a rule selects a set of relevant objects and computes a distribution over properties of just those objects in the resulting state, given their properties in the previous state. An iterative greedy algorithm is used to construct a set of deictic references that determine which objects are relevant in any given state. Feed-forward neural networks are used to learn the transition distribution on the relevant objects' properties. This strategy is demonstrated to be both more versatile and more sample-efficient than learning a monolithic transition model in a simulated domain in which a robot pushes stacks of objects on a cluttered table.

Many complex domains are appropriately described in terms of sets of objects, properties of those objects, and relations among them. We are interested in the problem of taking actions to change the state of such complex systems in order to achieve some objective. To do this, we require a transition model, which describes the system state that results from taking a particular action, given the previous system state. In many important domains, ranging from interacting with physical objects to managing the operations of an airline, actions have localized effects: they may change the state of the object(s) being directly operated on, as well as some objects that are related to those objects in important ways, but will generally not affect the vast majority of other objects.

In this paper, we present a strategy for learning state-transition models that embodies these assumptions. We structure our model in terms of rules, each of which depends on and affects only the properties of, and relations among, a small number of objects in the domain, and only very few of which may apply for characterizing the effects of any given action. Our primary focus is on learning the kernel of a rule: that is, the set of objects that it depends on and affects. At a moderate level of abstraction, most actions taken by an intentional system are inherently directly parametrized by at least one object that is being operated on: a robot pushes a block, an airport management system reschedules a flight, an automated assistant commits to a venue for a meeting. It is clear that properties of these "direct" objects are likely to be relevant to predicting the action's effects and that some properties of these objects will be changed. But how can we characterize which other objects, out of all the objects in a household or airline network, are relevant for prediction or likely to be affected?

To do so, we make use of the notion of a deictic reference. In linguistics, a deictic (literally "pointing") reference is a way of naming an object in terms of its relationship to the current situation rather than in global terms. So, "the object I am pushing," "all the objects on the table nearest me," and "the object on top of the object I am pushing" are all deictic references. This style of reference was introduced as a representation strategy for AI systems by BID0, under the name indexical-functional representations, for the purpose of compactly describing policies for a video-game agent, and has been in occasional use since then. We will learn a set of deictic references, for each rule, that characterize, relative to the object(s) being operated on, which other objects are relevant.
Given this set of relevant objects, the problem of describing the transition model on a large, variable-size domain reduces to describing a transition model on fixed-length vectors characterizing the relevant objects and their properties and relations, which we represent and learn using standard feed-forward neural networks. Next, we briefly survey related work and describe the problem more formally, and then provide an algorithm for learning both the structure, in terms of deictic references, and the parameters, in terms of neural networks, of a sparse relational transition model. We go on to demonstrate this algorithm in a simulated robot-manipulation domain in which the robot pushes objects on a cluttered table.

Rule learning: Rule learning has a long history in artificial intelligence. The novelty in our approach is the combination of learning discrete structures with flexible parametrized models in the form of neural networks. We are inspired by very early work on rule learning by BID4, which sought to find predictive rules in simple noisy domains, using Boolean combinations of binary input features to predict the effects of actions. This approach has a modern re-interpretation in the form of schema networks (BID5). The rules we learn are lifted, in the sense that they can be applied to objects generally and are not tied to specific bits or objects in the input representation, and probabilistic, in the sense that they make a distributional prediction about the outcome. In these senses, this work is similar to that of Pasula et al. and methods that build on it (BID9; BID8; BID6). In addition, the approach of learning to use deictic expressions was inspired by Pasula et al. and used also by BID7, in the form of object-oriented reinforcement learning, and by BID2. BID2, however, relies on a full description of the states in ground first-order logic and does not have a mechanism to introduce new deictic references into the action model. Our representation and learning algorithm improve on the Pasula et al. strategy by using the power of feed-forward neural networks as a local transition model, which allows us to address domains with real-valued properties and much more complex dependencies. In addition, our EM-based learning algorithm presents a much smoother space in which to optimize, making the overall learning faster and more robust. We do not, however, construct new functional terms during learning; that would be an avenue for future work.

Graph network models: There has recently been a great deal of work on learning graph-structured (neural) network models (BID1). There is a way in which our rule-based structure could be interpreted as a kind of graph network, although it is fairly non-standard. We can understand each object as being a node in the network, and the deictic functions as being labeled directed hyper-edges (between sets of objects). Unlike typical graph network models, we do not condition on a fixed set of neighboring nodes and edges to compute the next value of a node; in fact, a focus of our learning method is to determine which neighbors (and neighbors of neighbors, etc.) to condition on, depending on the current state of the edge labels. This means that the relevant neighborhood structure of any node changes dynamically over time, as the state of the system changes.
This style of graph network is not inherently better or worse than others: it makes a different set of assumptions (including a strong default that most objects do not change state on any given step, and the dynamic nature of the neighborhoods) which are particularly appropriate for modeling an agent's interactions with a complex environment using actions that have relatively local effects.

We assume we are working on a class of problems in which the domain is appropriately described in terms of objects. This method might not be appropriate for a single high-dimensional system in which the transition model is not sparse or factorable, or can be factored along different lines (such as a spatial decomposition) rather than along the lines of objects and properties. We also assume a set of primitive actions defined in terms of control programs that can be executed to make actual changes in the world state and then return. These might be robot motor skills (grasping or pushing an object) or virtual actions (placing an order or announcing a meeting). In this section, we formalize this class of problems, define a new rule structure for specifying probabilistic transition models for these problems, and articulate an objective function for estimating these models from data.

A problem domain is given by a tuple $D = (\Upsilon, P, F, A)$, where $\Upsilon$ is a countably infinite universe of possible objects, $P$ is a finite set of properties $P_i: \Upsilon \to \mathbb{R}$, $i \in [N_P] = \{1, \cdots, N_P\}$, and $F$ is a finite set of deictic reference functions DISPLAYFORM0 where $\wp(\Upsilon)$ denotes the powerset of $\Upsilon$. Each function $F_i \in F$ maps from an ordered list of objects to a set of objects, and we define it as DISPLAYFORM1 where the relation $f_i: \Upsilon^{m_i+1} \to \{True, False\}$ is defined in terms of the object properties in $P$. For example, if we have a location property $P_{loc}$ and $m_i = 1$, we can define $f_i(o, o_1) = \mathbb{1}_{\|P_{loc}(o) - P_{loc}(o_1)\| < 0.5}$, so that the function $F_i$ associated with $f_i$ maps from one object to the set of objects that are within 0.5 distance of its center; here $\mathbb{1}$ is an indicator function. Finally, $A$ is a set of action templates DISPLAYFORM2 where $\Psi$ is the space of executable control programs. Each action template is a function parameterized by continuous parameters $\alpha_i \in \mathbb{R}^{d_i}$ and a tuple of $n_i$ objects that the action operates on. In this work, we assume that $P$, $F$ and $A$ are given.

A problem instance is characterized by $I = (D, U)$, where $D$ is a domain as defined above and $U \subset \Upsilon$ is a finite universe of objects with $|U| = N_U$. For simplicity, we assume that, for a particular instance, the universe of objects remains constant over time. In the problem instance $I$, we characterize a state $s$ in terms of the concrete values of all properties in $P$ on all objects in $U$; that is, DISPLAYFORM0. A problem instance induces the definition of its action space $A$, constructed by applying every action template $A_i \in A$ to all tuples of $n_i$ elements in $U$ and all assignments $\alpha_i$ to the continuous parameters; namely, DISPLAYFORM1.

In many domains there is substantial uncertainty, and the key to robust behavior is the ability to model this uncertainty and make plans that respect it. A sparse relational transition model (SPARE) for a domain $D$, when applied to a problem instance $I$ for that domain, defines a probability density function over the state $s'$ resulting from taking action $a$ in state $s$.
Our objective is to specify this function in terms of the domain elements $P$, $F$, and $A$, in such a way that it will apply to any problem instance, independent of the number and properties of the objects in its universe. We achieve this by defining the transition model in terms of a set of transition rules, $T = \{T_k\}_{k=1}^K$, and a score function $C: T \times S \to \mathbb{N}$. The score function takes as input a state $s$ and a rule $T \in T$, and outputs a non-negative integer. If the output is 0, the rule does not apply; otherwise, the rule predicts the distribution of the next state to be $p(s' \mid s, a; T)$. The final prediction of SPARE is DISPLAYFORM0 where $T^* = \arg\max_{T \in T} C(T, s)$ and the matrix DISPLAYFORM1 is the default predicted covariance for any state that is not predicted to change, so that our problem is well-formed in the presence of noise in the input. Here $I_{N_U}$ is an identity matrix of size $N_U$, and $\mathrm{diag}([\sigma_i]_{i=1}^{N_P})$ represents a square diagonal matrix with $\sigma_i$ on the main diagonal, denoting the default variance for property $P_i$ if no rule applies. Note that the transition rules will be learned from past experience, with a loss function specified in Section 3.3. In the rest of this section, we formalize the definition of transition rules and the score function.

A transition rule $T = (A, \Gamma, \Delta, \phi_\theta, v_{default})$ is characterized by an action template $A$, two ordered lists of deictic references $\Gamma$ and $\Delta$ of sizes $N_\Gamma$ and $N_\Delta$, a predictor $\phi_\theta$, and the default variances DISPLAYFORM3 for each property $P_i$ under this rule. The action template is defined as operating on a tuple of $n$ object variables, which we will refer to as DISPLAYFORM4. A reference list uses functions to designate a list of additional objects or sets of objects, by making deictic references DISPLAYFORM5 based on previously designated objects.

Figure 2: Instead of directly mapping from the current state $s$ to the next state $s'$, our prediction model uses deictic references to find subsets of objects for prediction. In the leftmost graph, we illustrate which relations are used to construct the input objects, with two rules for the same action template. For rule $T_1$, the reference list applies $\gamma_1$ to the target object $o_2$ and adds input features, computed by an aggregator $g$ on $o_3, o_6$, to the inputs of the predictor of rule $T_1$. Similarly for rule $T_2$, the first deictic reference selects $o_3$ and then $\gamma_2$ is applied on $o_3$ to get $o_1$. The predictors $\phi_\theta$ are neural networks that map the fixed-length input to a fixed-length output, which is applied to a set of objects computed from a relational graph on all the objects, derived from the reference list, to compute the whole next state $s'$. Because the predictor outputs only a single property, we use a "de-aggregator" function $h$ to assign its prediction to both objects $o_4, o_6$.

In particular, $\Gamma$ generates a list of objects whose properties affect the prediction made by the transition rule, while $\Delta$ generates a list of objects whose properties are affected after taking an action specified by the action template $A$. We begin with the simple case in which every function returns a single object, then extend our definition to the case of sets. Concretely, for the $t$-th element DISPLAYFORM11 where $F \in F$ is a deictic reference function in the domain, $m$ is the arity of that function, and integer $k_j \in [n+t-1]$ specifies that object $O_{n+t}$ in the object list can be determined by applying function $F$ to objects $(O_{k_j})_{j=1}^m$. Thus, we get a new list of objects, DISPLAYFORM12.
So, reference γ_1 can only refer to the objects (O_i)_{i=1}^n that are named in the action, and determines an object O_{n+1}. Then, reference γ_2 can refer to objects named in the action or those that were determined by reference γ_1, and so on. When a function F in some γ_t ∈ Γ returns a set of objects rather than a single object, this process of adding more objects remains almost the same, except that the O_t may denote sets of objects, and the functions that are applied to them must be able to operate on sets. In the case that a function F returns a set, it must also specify an aggregator, g, that can return a single value for each property P_i ∈ P, aggregated over the set. Examples of aggregators include the mean or maximum values or possibly simply the cardinality of the set. For example, consider the case of pushing the bottom (block A) of a stack of 4 blocks, depicted in FIG1. Suppose the deictic reference is F = above, which takes one object and returns the set of objects immediately on top of the input object. Then, by applying F = above starting from the initial set O_0 = {A}, we get an ordered list of sets of objects O_1 = above(O_0), O_2 = above(O_1), …, which selects the blocks of the stack one level at a time. Returning to the definition of a transition rule, we now can see informally that if the parameters of action template A are instantiated to actual objects in a problem instance, then Γ and ∆ can be used to determine lists of input and output objects (or sets of objects). We can use these lists, finally, to construct input and output vectors. The input vector x consists of the continuous action parameters α of action A and the properties P_i(O_t) for all properties P_i ∈ P and objects O_t ∈ O^{(N_Γ)} that are selected by Γ, in an arbitrary but fixed order. In the case that O_t is a set of size greater than one, the aggregator associated with the function F that computed the reference is used to compute P_i(O_t). Similarly, for the desired output construction, we use the references in the list ∆, initialize Ô^{(0)} = O, and gradually add more objects to construct the output set of objects Ô = Ô^{(N_∆)}. The output vector is y = [P(ô)]_{ô∈Ô, P∈P}, where if ô is a set of objects, we apply a mean aggregator on the properties of all the objects in ô. The predictor φ_θ is some functional form φ (such as a feed-forward neural network) with parameters (weights) θ that will take values x as input and predict a distribution for the output vector y. It is difficult to represent arbitrarily complex distributions over output values. In this work, we restrict ourselves to representing a Gaussian distribution on all property values in y, encoded with a mean and independent variance for each dimension. Now, we describe how a transition rule can be used to map a state and action into a distribution over the new state. A transition rule T = (A, Γ, ∆, φ_θ, v_default) applies to a particular state-action pair (s, a) if a is an instance of A and if none of the elements of the input or output object lists is empty. To construct the input (and output) list, we begin by assigning the actual objects o_1, …, o_n to the object variables O_1, …, O_n in action instance a, and then successively computing references γ_i ∈ Γ based on the previously selected objects, applying the definition of the deictic reference F in each γ_i to the actual values of the properties as specified in the state s. If, at any point, a γ_i ∈ Γ or δ_i ∈ ∆ returns an empty set, then the transition rule does not apply.
If the rule does apply, and successfully selects input and output object lists, then the values of the input vector x can be extracted from s, and predictions are made on the mean and variance values, Pr(y | x) = N(µ_θ(x), Σ_θ(x)). Let µ_θ(x)[i, j] and Σ_θ(x)[i, j] be the vector entries corresponding to the predicted Gaussian parameters of property P_i of the j-th output object set ô_j, and denote s[o, P_i] as the property P_i of object o in state s, for all o ∈ U. The predicted distribution of the resulting state p(s′ | s, a; T) is computed as follows:

p(s′[o, P_i] | s, a; T) = N(µ_θ(x)[i, j], Σ_θ(x)[i, j]) if o ∈ ô_j, and N(s[o, P_i], v_i) otherwise,

where v_i ∈ v_default is the default variance of property P_i in rule T. There are two important points to note. First, it is possible for the same object to appear in the object list more than once, and therefore for more than one predicted distribution to appear for its properties in the output vector. In this case, we use the mixture of all the predicted distributions with uniform weights. Second, when an element of the output object list is a set, then we treat this as predicting the same single property distribution for all elements of that set. This strategy has sufficed for our current set of examples, but an alternative choice would be to make the predicted values be changes to the current property value, rather than new absolute values. Then, for example, moving all of the objects on top of a tray could easily specify a change to each of their poses. We illustrate how we can use transition rules to build a SPARE in Fig. 2. For each transition rule T_k ∈ T and state s ∈ S, we assign the score function value to be 0 if T_k does not apply to state s. Otherwise, we assign the total number of deictic references plus one, N_Γ + N_∆ + 1, as the score. The more references there are in a rule that is applicable to the state, the more detailed the match is between the rule's conditions and the state, and the more specific the predictions we expect it to be able to make. We frame the problem of learning a transition model from data in terms of conditional likelihood. The learning problem is, given a problem domain description D and a set of experience tuples E = {(s^(i), a^(i), s′^(i))}_{i=1}^{N_E}, find a SPARE T that minimizes the loss function

L(T; D, E) = − Σ_{i=1}^{N_E} log p(s′^(i) | s^(i), a^(i); T).

Note that we require all of the tuples in E to belong to the same domain D, and require that, for any i, s^(i), a^(i) and s′^(i) belong to the same problem instance, but individual tuples may be drawn from different problem instances (with, for example, different numbers and types of objects). In fact, to get good generalization performance, it will be important to vary these aspects across training instances. We describe our learning algorithm in three parts. First, we introduce our strategy for learning φ_θ, which predicts a Gaussian distribution on y given x. Then, we describe our algorithm for learning reference lists Γ and ∆ for a single transition rule, which enable the extraction of x and y from E. Finally, we present an EM method for learning multiple rules. For a particular transition rule T with associated action template A, once Γ and ∆ have been specified, we can extract input and output features x and y from a given set of experience samples E. We would like to learn the transition rule's predictor φ_θ to minimize the loss above. Our predictor takes the form φ_θ(x) = N(µ_θ(x), Σ_θ(x)), and a neural network is used to predict both the mean µ_θ(x) and the diagonal variance Σ_θ(x). We directly optimize the negative data-likelihood loss function

L(θ) = − Σ_{(x, y)} log N(y; µ_θ(x), Σ_θ(x)).

Let E_T ⊆ E be the set of experience tuples to which rule T applies.
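Here is a small numpy sketch of this predictive density and its negative log-likelihood: explicitly predicted properties use the network's mean and variance, while everything else falls back to N(current value, default variance). The names are illustrative, not the authors' code.

import numpy as np

def gaussian_nll(y, mu, var):
    # Negative log-likelihood of y under a univariate N(mu, var).
    return 0.5 * (np.log(2 * np.pi * var) + (y - mu) ** 2 / var)

def rule_nll(s_next, s, predicted, mu, var, v_default):
    # predicted[o][i] is True iff rule T explicitly predicts property i of
    # object o (with parameters mu[o][i], var[o][i]); otherwise we score
    # s_next under the fallback N(s[o][i], v_default[i]).
    total = 0.0
    for o in s:                              # objects
        for i in range(len(s[o])):           # properties
            if predicted.get(o, [False] * len(s[o]))[i]:
                total += gaussian_nll(s_next[o][i], mu[o][i], var[o][i])
            else:
                total += gaussian_nll(s_next[o][i], s[o][i], v_default[i])
    return total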
Then, once we have θ, we can optimize the default variances v_default of the rule under the same loss. It can be shown that these loss-minimizing values for the default predicted variances v_default are the empirical averages of the squared deviations for all unpredicted objects (i.e., those for which φ_θ does not explicitly make predictions), where averages are computed separately for each object property. We use θ, v_default ← LEARNDIST(D, E, Γ, ∆) to refer to this learning and optimization procedure for the predictor parameters and default variance.

Algorithm 1 Greedy procedure for constructing Γ.
1: procedure GREEDYSELECT(D, E, N_Γ)
2:   train model using Γ_0 = ∅, save loss L_0
3:   i ← 1
4:   while i ≤ N_Γ do
5:     for all γ ∈ R_i do
6:       train model using Γ_{i−1} ∪ {γ}, save its validation loss
7:     γ_i ← the γ with lowest saved loss; L_i ← that loss
8:     if L_i < L_{i−1} then Γ_i ← Γ_{i−1} ∪ {γ_i}; i ← i + 1
9:     else break

In the simple setting where only one transition rule T exists in our domain D, we show how to construct the input and output reference lists Γ and ∆ that will determine the vectors x and y. Suppose for now that ∆ and v_default are fixed, and we wish to learn Γ. Our approach is to incrementally build up Γ by adding γ_i = (F, (k_1, …, k_m)) tuples one at a time via a greedy selection procedure. Specifically, let R_i be the universe of possible γ_i, split the experience samples E into a training set E_train and a validation set E_val, and initialize the list Γ to be Γ_0 = ∅. For each i, compute γ_i = arg min_{γ∈R_i} L(T_γ; D, E_val), where L evaluates a SPARE T_γ with a single transition rule T = (A, Γ_{i−1} ∪ {γ}, ∆, φ_θ, v_default), where θ and v_default are computed using the LEARNDIST procedure described in Section 4.1. If the value of the loss function L(T_{γ_i}; D, E_val) is less than the value of L(T_{γ_{i−1}}; D, E_val), then we let Γ_i = Γ_{i−1} ∪ {γ_i} and continue. Otherwise, we terminate the greedy selection process with Γ = Γ_{i−1}, since further growing the list of deictic references hurts the loss. We also terminate the process when i exceeds some predetermined maximum allowed number of input deictic references, N_Γ. Pseudocode for this algorithm is provided in Algorithm 1. In our experiments we set ∆ = Γ and construct the lists of deictic references using a single pass of the greedy algorithm described above. This simplification is reasonable, as the set of objects that are relevant to predicting the transition outcome often overlaps substantially with the set of objects that are affected by the action. Alternatively, we could learn ∆ via an analogous greedy procedure nested around, or, as a more efficient approach, interleaved with, the one for learning Γ. Our training data in robotic manipulation tasks are likely to be best described by many rules instead of a single one, since different combinations of relations among objects could be present in different states. For example, we may have one rule for pushing a single object and another rule for pushing a stack of objects. We now address the case where we wish to learn K rules from a single experience set E, for K > 1. We do so via initial clustering to separate experience samples into K clusters, one for each rule to be learned, followed by an EM-like approach to further separate samples and simultaneously learn rule parameters. To facilitate the learning of our model, we will additionally learn membership probabilities Z = (z_{i,j})_{i∈[N_E], j∈[K]}, where z_{i,j} represents the probability that the i-th experience sample is assigned to transition rule T_j, and Σ_{j=1}^K z_{i,j} = 1.
We initialize membership probabilities via clustering, then refine them through EM. Because the experience samples E may come from different problem instances and involve different numbers of objects, we cannot directly run a clustering algorithm such as k-means on the (s, a, s′) samples themselves. Instead we first learn a single transition rule T = (A, Γ, ∆, φ_θ, v_default) from E using the algorithm in Section 4.2, use the resulting Γ and ∆ to transform E into x and y, and then run k-means clustering on the concatenation of x, y, and the values of the loss function when T is used to predict each of the samples. For each experience sample, the squared distance from the sample to each of the K cluster centers is computed, and membership probabilities for the sample to each of the K transition rules to be learned are initialized to be proportional to the (multiplicative) inverses of these squared distances. Before introducing the EM-like algorithm that simultaneously improves the assignment of experience samples to transition rules and learns details of the rules themselves, we make a minor modification to transition rules to obtain mixture rules. Whereas a probabilistic transition rule has been defined as T = (A, Γ, ∆, φ_θ, v_default), a mixture rule is T = (A, π_Γ, π_∆, Φ), where π_Γ represents a distribution over all possible lists of input references Γ (and similarly for π_∆ and ∆), of which there are a finite number, since the set of available reference functions F is finite, and there is an upper bound N_Γ on the maximum number of references Γ may contain. For simplicity of terminology, we refer to each possible list of references Γ as a shell, so π_Γ is a distribution over possible shells. Φ = {(Γ^(k), ∆^(k), φ_θ^(k), v_default^(k))}_{k=1}^κ is a collection of κ transition rules (i.e., predictors φ_θ^(k), each with an associated Γ^(k), ∆^(k), and v_default^(k)). To make predictions for a sample (s, a) using a mixture rule, predictions from each of the mixture rule's κ transition rules are combined according to the probabilities that π_Γ and π_∆ assign to each transition rule's Γ^(k) and ∆^(k). Rather than having our EM approach learn K transition rules, we instead learn K mixture rules, as the distributions π_Γ and π_∆ allow for smoother sorting of experience samples into clusters corresponding to the different rules, in contrast to the discrete Γ and ∆ of regular transition rules. As before, we focus on the case where, for each mixture rule, Γ^(k) = ∆^(k), and π_Γ = π_∆ as well. Our EM-like algorithm is then as follows: 1. For each j ∈ [K], initialize distributions π_Γ = π_∆ for mixture rule T_j as follows. First, use the algorithm in Section 4.2 to learn a transition rule on the weighted experience samples E_{Z_j}, with weights equal to the membership probabilities (z_{i,j})_{i∈[N_E]}. In the process of greedily assembling reference lists Γ = ∆, data likelihood loss function values are computed for multiple explored shells, in addition to the shell Γ = ∆ that was ultimately selected. Initialize π_Γ = π_∆ to distribute weight across the explored shells in proportion to their achieved data likelihoods, normalized so that the total weight assigned by π_Γ to explored shells is 1 − ε.
The remaining probability weight ε is distributed uniformly across unexplored shells. 2. Repeat the following steps for each mixture rule T_j, where we have dropped subscripting according to j for notational simplicity: (a) Learn the κ transition rules T^(k) = (A, Γ^(k), ∆^(k), φ_θ^(k), v_default^(k)) using the procedure in Section 4.2 on the weighted experience samples E_{Z_j}, where we choose Γ^(k) = ∆^(k) to be the list of references with k-th highest weight according to π_Γ = π_∆. (b) Update π_Γ = π_∆ by redistributing weight among the top κ shells according to a voting procedure where each training sample "votes" for the shell whose predictor minimizes the validation loss for that sample. In other words, the i-th experience sample E^(i) votes for shell v(i) = arg min_{k∈[κ]} L(T^(k); D, {E^(i)}). Then, shell weights are assigned to be proportional to the sum of the sample weights (i.e., membership probability of belonging to this rule) of samples that voted for each particular shell: the number of votes received by the k-th shell is V(k) = Σ_{i=1}^{|E|} 1_{v(i)=k} · z_{i,j}, for indicator function 1 and k ∈ [κ]. Then, π_Γ(k), the current k-th highest value of π_Γ, is updated to become V(k)/ξ, where ξ is a normalization factor to ensure that π_Γ remains a valid probability distribution (specifically, ξ is chosen so that the updated top-κ weights retain the same total mass as before the update). (c) Re-run Step 2a, in case the κ shells with highest π_Γ values have changed, in preparation for using the mixture rule to make predictions in the next step. 3. Update membership probabilities by scaling by the data likelihoods from using each of the K rules to make predictions: z_{i,j} ← z_{i,j} · ℓ_{i,j} / ζ_i, where ℓ_{i,j} is the data likelihood from using mixture rule T_j to make predictions for the i-th experience sample E^(i), and ζ_i = Σ_{j′=1}^K z_{i,j′} · ℓ_{i,j′} is a normalization factor to maintain Σ_{j=1}^K z_{i,j} = 1. 4. Repeat Steps 2 and 3 some fixed number of times, or until convergence. We apply our approach, SPARE, to a challenging problem of predicting the result of pushing stacks of blocks on a cluttered table top. We describe our domain and the baselines that we compare to, and report our results. In our domain D = (Υ, P, F, A), the object universe Υ is composed of blocks of different sizes and weights, and the property set P includes the shapes of the blocks (width, length, height) and the position of each block (the (x, y, z) location relative to the table). We have one action template, push(α, o), which pushes toward a target object o with parameters α = (x_g, y_g, z_g, d), where (x_g, y_g, z_g) is the 3D position of the gripper before the push starts and d is the distance of the push. The orientation of the gripper and the direction of the push are computed from the gripper location and the target object location. We simulate this 3D domain using the physically realistic PyBullet simulator. In real-world scenarios, an action cannot be executed with the exact action parameters due to inaccuracy in the motors, and hence in our simulation we add Gaussian noise on the action parameters during execution to imitate this effect. We consider the following deictic references in the reference collection F: identity(O), which takes in an object O and returns O; above(O), which takes in an object O and returns the object immediately above O; below(O), which takes in an object O and returns the object immediately below O; nearest(O), which takes in an object O and returns the object that is closest to O. Neural network (NN) We compare to a neural network function approximator that takes in as input the current state s ∈ R^{N_P × N_U} and action parameter α ∈ R^{N_A}, and outputs the next state s′ ∈ R^{N_P × N_U}.
The list of objects that appear in each state is ordered: the target objects appear first and the remaining objects are sorted by their poses (first sorted by x coordinate, then y, then z). Graph NN We compare to a fully connected graph NN. Each node of the graph corresponds to an object in the scene, and the action α is concatenated to the state of each object. Bidirectional edges connect every node in the graph. The graph NN consists of encoders for the nodes and edges, propagation networks for message passing, and a node decoder to convert back to predict the mean and variance of the next state of each object. As a sanity check, we start from a simple problem where a gripper is pushing a stack of three blocks with two extra blocks on the table. We randomly sampled 1250 problem instances by drawing random block shapes and locations from a uniform distribution within a range, while satisfying the condition that the stack of blocks is stable and the extra blocks do not affect the push. In each problem instance, we uniformly randomly sample the action parameters and obtain the training data, a collection of tuples of state, action and next state, where the target object of the push action is always the one at the bottom of the stack. We held out 20% of the training data as the validation set. We found that our approach is able to reliably select the correct combinations of the references that select all the blocks in the problem instance to construct inputs and outputs. In FIG3, we show how the performance varies as deictic references are added during a typical run of this experiment. The solid purple curve shows training performance, as measured by data likelihood on the validation set, while the dashed purple curve shows performance on a held-out test set with 250 unseen problem instances. As expected, performance improves noticeably from the addition of the first two deictic references selected by the greedy selection procedure, but not from the 4th. The brown curve shows the learned default standard deviations, used to compute data likelihoods for features of objects not explicitly predicted by the rule. As expected, the learned default standard deviation drops as deictic references are added, until it levels off after the third reference is added, since at that point the set of references captures all moving objects in the scene. Sensitivity analysis on the number of objects We compare our approach to the baselines in terms of how sensitive the performance is to the number of objects that exist in the problem instance. We continue the setting where a stack of three blocks lies on a table, with extra blocks that may affect the prediction of the next state. FIG3 shows the performance, as measured by the log data likelihood, as a function of the number of extra blocks. For each number of extra blocks, we used 1250 training problem instances with 20% as the validation set and 250 testing problem instances. When there are no extra blocks, SPARE learns a single rule whose x and y contain the same information as the inputs and outputs for the baselines. As more objects are added to the table, NN's performance drops, as the presence of these additional objects appears to complicate the scene and NN is forced to consider more objects when making its predictions. SPARE outperforms the graph NN, as its good predictions for the extra blocks contribute to the log data likelihood.
Note that, performance aside, NN is limited to problems for which the number of objects in the scenes is fixed, as it requires a fixed-size input vector containing information about all objects. Our SPARE approach does not have this limitation, and could have been trained on a single, large dataset that is the combination of the datasets with varying numbers of extra objects. However, we did not do this in our experiments for the sake of providing a fairer comparison against NN. Sample efficiency We evaluate our approach on more challenging problem instances where the robot gripper is pushing blocks on a cluttered table top and there are two additional blocks on the table that do not interfere with or get affected by the pushing action. FIG3 (c) plots the data likelihood as a function of the number of training samples. We evaluate with training samples varying from 100 to 1250, and in each setting the test dataset has 250 samples. Both our approach and the baselines benefit from having more training samples, but our approach is much more sample efficient and achieves good performance with only 500 training samples. Learning multiple transition rules Now we put our approach in a more general setting where multiple transition rules need to be learned for prediction of the next state. Our approach adopts an EM-like procedure to assign each training sample its distribution on the transition rules and learn each transition rule with re-weighted training samples. First, we construct a training dataset in which 70% of the samples involve pushing a 4-block stack. Our EM approach is able to concentrate on the 4-block case, as shown in FIG4. The three curves correspond to the three stack heights in the original dataset, and each shows the average weight assigned to the "target" rule among samples of that stack height, where the target rule is the one that starts with a high concentration of samples of that particular height. At iteration 0, we see that the rules were initialized such that samples were assigned 70% probability of belonging to specific rules, based on stack height. As the algorithm progresses, the samples separate further, suggesting that the algorithm is able to separate samples into the correct groups. Conclusion These results demonstrate the power of combining relational abstraction with neural networks to learn probabilistic state transition models for an important class of domains from very little training data. In addition, the structural nature of the learned models will allow them to be used in factored search-based planning methods that can take advantage of sparsity of effects to plan efficiently. Finally, the end purpose of obtaining a transition model for robotic actions is to enable planning to achieve high-level goals. Due to time constraints, this work did not assess the effectiveness of learned template-based models in planning, but this is a promising area of future work, as the assumption that features of objects for which a template makes no explicit prediction do not change meshes well with STRIPS-style planners, which make similar assumptions. We here provide details on our experiments and additional results. Experimental details Our experiments on SPARE in this paper used neural network predictors for making mean predictions and variance predictions, as described in Section 4.1. Each network was implemented as a fully-connected network with two hidden layers of 64 nodes each in Keras, using ReLU activations between layers, and trained with the Adam optimizer with default parameters.
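A minimal Keras sketch of such a predictor is given below, assuming a TensorFlow backend; the head outputs a mean and a log-variance per output dimension so that the diagonal-Gaussian NLL of Section 4.1 can serve as the loss. Sizes and the name make_predictor are illustrative.

import tensorflow as tf
from tensorflow import keras

def make_predictor(input_dim, output_dim):
    # Two hidden layers of 64 ReLU units; the head emits a mean and a
    # log-variance for each output dimension (diagonal Gaussian).
    x = keras.Input(shape=(input_dim,))
    h = keras.layers.Dense(64, activation="relu")(x)
    h = keras.layers.Dense(64, activation="relu")(h)
    mu = keras.layers.Dense(output_dim)(h)
    log_var = keras.layers.Dense(output_dim)(h)
    return keras.Model(x, keras.layers.Concatenate()([mu, log_var]))

def gaussian_nll(y_true, y_pred):
    # Negative log-likelihood of a diagonal Gaussian (up to a constant).
    mu, log_var = tf.split(y_pred, 2, axis=-1)
    return 0.5 * tf.reduce_sum(
        log_var + tf.square(y_true - mu) / tf.exp(log_var), axis=-1)

model = make_predictor(input_dim=10, output_dim=3)
model.compile(optimizer="adam", loss=gaussian_nll)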
Predictors for the templates approach were trained for 1000 epochs each with a decaying learning rate, starting at 1e-2 and decreasing by a factor of 0.6 every 100 epochs. The baseline NN predictor was implemented in exactly the same way. For the GNN, we used a node encoder and an edge encoder to map to latent spaces of 16 dimensions. The propagation networks consisted of 2 fully connected layers of 16 units each, and the decoder mapped back to 6 dimensions: 3 for the mean, and 3 for the variance. The GNN was trained using a decaying learning rate starting at 1e-2 and decreasing by a factor of 0.5 every 100 epochs. A total of 900 epochs were used. States were parameterized by the (x, y, z) pose of each object in the scene, ordered such that the target object of the action always appeared first, and other objects appeared in random order (except for the baseline). Action parameters included the (x, y, z) starting pose of the robotic gripper, as well as a "push distance" parameter that controls how long the push action lasts. Actions were implemented to be stochastic by adding some amount of noise to the target location of each push, reflecting possible inaccuracies in robotic control. We use the clustering-based approaches for initializing membership probabilities presented in Section 4.3. In this section, we show how well our clustering approach performs. TAB0 shows the sample separation achieved by the discrete clustering approach, where samples are assigned solely to their associated clusters found by k-means, on the push dataset for stacks of varying height. Each column corresponds to the one-third of samples which involve stacks of a particular height. Entries in the table show the proportion of samples of that stack height that have been assigned to each of the three clusters, where in generating these data the clusters were ordered so that the predominantly 2-block sample cluster came first, followed by the predominantly 3-block cluster, then the 4-block cluster. Values in parentheses are standard deviations across three runs of the experiment. As seen in the table, separation of samples into clusters is quite good, though 22.7% of 3-block samples were assigned to the predominantly 4-block cluster, and 11.5% of 2-block samples were assigned to the predominantly 3-block cluster. The sample separation evidenced in TAB0 is enough that templates trained on the three clusters of samples reliably select deictic references that consider the correct number of blocks, i.e., the 2-block cluster reliably learns a template which considers the target object and the object above the target, and similarly for the other clusters and their respective stack heights. However, initializing samples to belong solely to a single cluster, rather than initializing membership probabilities, is unlikely to be robust in general, so we turn to the proposed clustering-based approaches for initializing membership probabilities instead. TAB1 is analogous to TAB0 in structure, but shows sample separation for sample membership probabilities initialized to be proportional to the inverse distance from the sample to each of the cluster centers found by k-means. TAB2 is the same, except with membership probabilities initialized to be proportional to the square of the inverse distance to cluster centers. Sample separation is better in the case of squared distances than non-squared distances, but it is unclear whether this generalizes to other datasets.
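The clustering-based initialization just described can be sketched as follows, assuming scikit-learn's KMeans; the per-sample feature matrix is the concatenation of x, y, and the (optionally rescaled) loss values, and all names are illustrative.

import numpy as np
from sklearn.cluster import KMeans

def init_membership(features, n_rules, squared=True, eps=1e-8):
    # Cluster samples, then set z_{i,j} proportional to the inverse
    # (squared) distance from sample i to cluster center j.
    km = KMeans(n_clusters=n_rules, n_init=10).fit(features)
    d = np.linalg.norm(
        features[:, None, :] - km.cluster_centers_[None, :, :], axis=-1)
    w = 1.0 / (d ** 2 + eps) if squared else 1.0 / (d + eps)
    return w / w.sum(axis=1, keepdims=True)      # each row sums to 1

# Example: rescale the loss column to make it a more important feature.
# feats = np.hstack([x, y, 5.0 * nll[:, None]]); Z = init_membership(feats, 3)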
For our specific problem instance, the log data likelihood feature turns out to be very important for the success of these clustering-based initialization approaches. For example, Table 4 shows results analogous to those in TAB2, where the only difference is that all log data likelihoods were multiplied by five before being passed as input to the k-means clustering algorithm. Comparing the two tables, this scaling, which makes the data likelihood relatively more important as a feature, results in better data separation. This suggests that the relative importance between the log likelihood and the other input features is a parameter of these clustering approaches that should be tuned. Effect of object ordering on baseline performance The single-predictor baseline used in our experiments receives all objects in the scene as input, but this leaves open the question of in what order these objects should be presented. Because the templates approach has the target object of the action specified, in the interest of fairness this information is also provided to the baseline by having the target object always appear first in the ordering. As there is in general no clear ordering for the remainder of the objects, we could present them in a random order, but perhaps sorting the objects according to position (first by x-coordinate, then y, then z) could result in better predictions than if objects are completely randomly ordered. Table 4: Sample separation from clustering-based initialization of membership probabilities, where probabilities are assigned to be proportional to squared inverse distance to cluster centers, and the log data likelihood feature used as part of k-means clustering has been multiplied by a factor of five. Standard deviations are reported in parentheses. To analyze the effect of object ordering on baseline performance, we run the same experiment where a push action is applied to the bottom-most of a stack of three blocks, and there exist some number of additional objects on the table that do not interfere with the action in any way. Figure 6 shows our results. We test three object orderings: random ("none"), sorted according to object position ("xtheny"), and an ideal ordering where the first three objects in the ordering are exactly the three objects in the stack, ordered from the bottom up ("stack"). As expected, in all cases predicted log likelihood drops as more extra blocks are added to the scene, and the random ordering performs worst while the ideal ordering performs best. Figure 6: Effect of object ordering on baseline performance, on the task of pushing a stack of three blocks on a table top, where there are extra blocks on the table that do not interfere with the push.
SJxsV2R5FQ
A new approach that learns a representation for describing transition models in complex uncertain domains using relational rules.
Differentiable planning network architectures have been shown to be powerful in solving transfer planning tasks while possessing a simple end-to-end training feature. Many planning architectures proposed later in the literature are inspired by this design principle, in which a recursive network architecture is applied to emulate the backup operations of a value iteration algorithm. However, existing frameworks can only learn and plan effectively on domains with a lattice structure, i.e. regular graphs embedded in a certain Euclidean space. In this paper, we propose a general planning network, called Graph-based Motion Planning Networks (GrMPN), that will be able to i) learn and plan on general irregular graphs, and hence ii) render existing planning network architectures special cases. The proposed GrMPN framework is invariant to task graph permutation, i.e. graph isomorphism. As a result, GrMPN possesses generalization strength and data efficiency. We demonstrate the performance of the proposed GrMPN method against other baselines on three domains ranging from 2D mazes (regular graphs), path planning on irregular graphs, and motion planning (an irregular graph of robot configurations). Reinforcement learning (RL) is a sub-field of machine learning that studies how an agent makes sequential decisions to interact with an environment. These problems can in principle be formulated as Markov decision processes (MDP). (Approximate) dynamic programming methods such as value iteration or policy iteration are often used for policy optimization. These dynamic programming approaches can also be leveraged to handle learning, hence referred to as model-based RL. Model-based RL requires an estimation of the environment model and hence is computationally expensive, but it is shown to be very data-efficient. The second common RL paradigm is model-free, which does not require a model estimation, hence has a lower computation cost but less data-efficiency. With its recent marriage with deep learning, deep reinforcement learning (DRL) has achieved many remarkable successes on a wide variety of applications such as games, robotics, chemical synthesis, and news recommendation. DRL methods also range from model-based to model-free approaches. On the other hand, transfer learning across tasks has long been desired, because it is much more challenging in comparison to single-task learning. Recent work has proposed a very elegant idea that suggests encoding a differentiable planning module in a policy network architecture. This planning module can emulate the recursive operation of value iteration, and the resulting architecture is called Value Iteration Networks (VIN). Using this network, the agent is able to evaluate multiple future planning steps for a given policy. The planning module is designed based on a recursive application of convolutional neural networks (CNN) and max-pooling for value function updates. VIN not only allows policy optimization with more data-efficiency, but also enables transfer learning across problems with shared transition and reward structures. VIN has laid the foundation for many later differentiable planning network architectures such as QMDP-Net, planning under uncertainty, Memory Augmented Control Network (MACN), Predictron, planning networks, etc. However, these approaches, including VIN, are limited to learning with regular environment structures, i.e. where the transition function forms an underlying 2D lattice structure.
Recent works have tried to mitigate this issue by resorting to graph neural networks. These works exploit geometric intuition in environments which have irregular structures, such as generalized VIN, planning on relational domains, automated planning for scheduling, etc. What is common between these approaches is the use of graph neural networks to process irregular data structures like graphs. Among these frameworks, only GVIN is able to emulate the value iteration algorithm on irregular graphs of arbitrary sizes, e.g. generalization to arbitrary graphs. GVIN has a differentiable policy network architecture which is very similar to VIN. GVIN also has a zero-shot planning ability on unseen graphs. However, GVIN requires domain knowledge to design a graph convolution, which might keep it from becoming a universal graph-based path planning framework. In this paper, we aim to demonstrate different formulations for value iteration networks on irregular graphs. These proposed formulations are based on different graph neural network models. These models are capable of learning optimal policies on general graphs whose transition and reward functions are not provided a priori and are yet to be estimated. These models are known to be invariant to graph isomorphism, therefore they are able to generalize to graphs of different sizes and structures. As a result, they enjoy the ability of zero-shot learning to plan. Specifically, it is known that the Bellman equations can be written in the form of message passing, therefore we propose using message passing neural networks (MPNN) to emulate the value iteration algorithm on graphs. We will show the two most general formulations of graph-based value iteration networks, based on two general-purpose approaches in the MPNN family: Graph Networks (GN) and Graph Attention Networks (GAT). In particular, our contributions are three-fold: • We develop an MPNN-based path planning network (GrMPN) which can learn to plan on general graphs, e.g. regular and irregular graphs. GrMPN is a differentiable end-to-end planning network architecture trained via imitation learning. We implement GrMPN via two formulations that are based on GN and GAT. • GrMPN is a general graph-based value iteration network that renders existing graph-based planning algorithms special cases. GrMPN is invariant to graph isomorphism, which enables transfer planning on graphs of different structures and sizes. • We demonstrate the efficacy of GrMPN, which achieves state-of-the-art results on various domains including 2D mazes with regular graph structures, irregular graphs, and motion planning problems. We show that GrMPN outperforms existing approaches in terms of data-efficiency, performance and scalability. This section provides background on Markov decision processes (MDP), the value iteration algorithm, value iteration networks (VIN), and graph neural networks (GNN). An MDP is defined as M = (S, A, P, R), where S and A represent state and action spaces. P defines a transition function P(s, a, s′) = P(s′ | s, a), where s, s′ ∈ S, a ∈ A. A planning algorithm, e.g. dynamic programming, aims to find an optimal policy π : S → A so that a performance measure V^π(s) = E[Σ_t γ^t r_t | s_0 = s] is maximized for all states s ∈ S, where γ ∈ [0, 1) is a discount factor. The expectation is w.r.t. the stochasticity of P and R. Value iteration (VI) is one of the dynamic programming algorithms that can plan on M.
It starts by updating the value functions V(s), ∀s ∈ S, iteratively via the Bellman backup operator T, V^{(k)} = T V^{(k−1)}, as follows:

V^{(k)}(s) = max_a [R(s, a) + γ Σ_{s′} P(s′ | s, a) V^{(k−1)}(s′)],

where k is an update iteration index. T is applied iteratively until V^{(k)}(·) converges to the optimal values. It is proved that the operator T is a Lipschitz map with a factor of γ. In other words, as k → ∞, V^{(k)} converges to a fixed-point value function V*. As a result, the optimal policy π* can be computed as π*(s) = arg max_{a∈A} [R(s, a) + γ Σ_{s′} P(s′ | s, a) V*(s′)]. Q-value functions are also defined similarly as Q(s, a) = E[Σ_t γ^t r_t | s_0 = s, a_0 = a]. In addition, we have the relation between the V and Q value functions as V(s) = max_a Q(s, a). For goal-oriented tasks, the reward function R(s, a) can be designed to receive low values at intermediate states and high values at goal states s*. Value iteration network Planning on large MDPs might be very computationally expensive, hence transfer planning would be desirable, especially for tasks sharing similar structures of P and R. Value Iteration Networks (VIN) are a differentiable planning framework that can i) do transfer planning for goal-oriented tasks with different goal states, and ii) learn the shared underlying MDP M between tasks, i.e. learn the transition P and reward R functions. Let's assume that we want to find an optimal plan on an MDP M with unknown P and R. VIN's policy network, with embedded approximate reward and transition functions R̂ and P̂, is trained end-to-end through imitation learning. R̂ and P̂ are assumed to be from an unknown MDP M̂ whose optimal policy can form useful features about the optimal policy in M. Based on an observation feature φ(s) of state s, the relation between M and M̂ is denoted as R̂ = f_R(φ(·)) and P̂ = f_P(φ(·)). More specifically, inputs are 2D images, e.g. of the m × m maze with start and goal states; outputs are optimal paths going from start to goal. VIN embeds value iteration as a recursive application of convolutions and max-pooling over the feature channels. VIN consists of a convolutional layer Q_a ∈ R^{m×m} with |A| channels. The trainable parameters are W^R_{a,i,j} and W^P_{a,i,j} with |A| channels, which account for the reward and transition embeddings. The recursive process contains the two following convolution and max-pooling operations,

Q^{(k)}_{a,i,j} = (W^R_a ∗ R̂)_{i,j} + (W^P_a ∗ V^{(k−1)})_{i,j},  V^{(k)}_{i,j} = max_a Q^{(k)}_{a,i,j},

where the convolution operator ∗ on the R.H.S. of the first equation is written as (W ∗ V)_{i,j} = Σ_{i′,j′} W_{i′,j′} V_{i−i′,j−j′}, where i, j are cell indices in the maze. VIN has later inspired many other differentiable planning algorithms. For example, VIN's idea can again be exploited in differentiable planning architectures for planning on partially observable environments, such as QMDP-Net and Memory Augmented Control Network (MACN). A related differentiable planning network is also used in the Predictron framework, where its core planning module aims to estimate a Markov reward process that can be rolled forward for many imagined planning steps. A notable extension of VIN is Gated Path Planning Networks (GPPN), which replace the recursive max-pooling update of VIN with a learned LSTM update. These algorithms show great success at path planning on many different grid-based navigation tasks in which the states are either fully or partially observable. However, the underlying state space must assume regular lattices in order to exploit local connectivity through the help of the convolution operations of CNNs.
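The VIN recursion above can be sketched in a few lines of numpy/scipy; the kernels below are random stand-ins for the learned W^R_a and W^P_a, so this only illustrates the shape of the computation, not a trained planner.

import numpy as np
from scipy.signal import convolve2d

def vin_step(V, R, W_R, W_P):
    # One recurrent step: Q_a = W_a^R * R + W_a^P * V, then a max over
    # the |A| action channels. W_R, W_P have shape (|A|, 3, 3).
    Q = np.stack([convolve2d(R, W_R[a], mode="same")
                  + convolve2d(V, W_P[a], mode="same")
                  for a in range(W_R.shape[0])])
    return Q.max(axis=0)                     # V^{(k)} = max_a Q^{(k)}_a

m, A = 8, 4
R = np.zeros((m, m)); R[m - 1, m - 1] = 1.0  # reward image: goal cell only
W_R = 0.1 * np.random.randn(A, 3, 3)
W_P = 0.1 * np.random.randn(A, 3, 3)
V = np.zeros((m, m))
for _ in range(10):                          # k recurrent planning steps
    V = vin_step(V, R, W_R, W_P)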
There is recent effort considering planning and reinforcement learning problems whose state transitions form a general graph. Niu et al. propose such a graph-based, model-based deep reinforcement learning framework that generalizes VIN to differentiable planning on graphs, called generalized value iteration networks (GVIN). GVIN takes a graph with a certain start node and a goal node as input, and outputs an optimal plan. GVIN can learn an underlying MDP and an optimal planning policy via either imitation learning or reinforcement learning. Inspired by VIN, GVIN applies recursive graph convolution and max-pooling operators to emulate the value iteration algorithm on general graphs. With a specially designed convolution kernel, GVIN can also transfer planning to unseen graphs of arbitrary size. Specifically, an input is a directed, weighted spatial graph (a one-hot vector labels the goal state) that describes one task. GVIN constructs convolution operators W^P_a as a function of the input graph G. The reward graph signal is denoted as R̂ = f_R(G, v*), where f_R is a CNN in VIN, but an identity function in GVIN. The value functions V(v), with v ∈ V, on graph nodes can be computed recursively as follows:

Q^{(k)}_a = W^P_a (R̂ + γ V^{(k−1)}),  V^{(k)}(v) = max_a Q^{(k)}_a(v).

While VIN uses a CNN to construct W^P_a, which can only capture 2D lattice structures, GVIN designs directional and spatial kernels to construct W^P_a that try to capture translation invariance on irregular graphs. However, we will show that these kernels are not enough to capture invariance to graph isomorphism, which leads to poor performance in domains with a complex geometric structure. In particular, though it works well on multiple navigation tasks, GVIN is shown to be sensitive to the choice of hyperparameters and the specially designed convolution kernels, e.g. the directional discretization. Graph neural networks (GNN) have received much attention recently as they can process data on irregular domains such as graphs or sets. The general idea of GNN is to compute an encoded feature h_i for each node v_i based on the structure of the graph, node v_i and edge e_ij features, and previous encoded features, as h_i = f(v_i, {e_ij}_{j∈N(i)}, {h_j}_{j∈N(i)}), where f is a parametric function, and N(i) denotes the set of neighbour nodes of node i. After computing h_i (possibly applying f for k iterations), an additional function is used to compute the output at each node, o_i = g(h_i), where g is implemented using another neural network, called a read-out function. Graph convolution network Much of the earliest work on GNNs proposed extending traditional CNNs to handle convolution operations on graphs through the use of spectral methods. For example, graph convolution networks (GCN) are based on fundamental convolution operations on spectral domains. GCN must assume graphs of the same size. Therefore, these methods, which rely on the computation of graph eigenvectors, are either computationally expensive or not able to learn on graphs of arbitrary sizes. Many later-introduced graph convolutional networks on the spatial domain, such as Neural FPs, PATCHY-SAN, DCNN, etc., are able to learn on graphs of arbitrary sizes. However, they are limited to either the choice of a subset of node neighbours or random walks of k-hop neighbourhoods. These drawbacks limit graph-convolution-based methods in applications on large-scale graphs with highly arbitrary sizes and structures, and hence they are not favourable for transfer planning in MDPs.
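For contrast with the lattice case, the GVIN-style recursion written above reduces to per-action matrix-vector products on an irregular graph. The sketch below uses random row-normalized adjacency matrices in place of GVIN's learned directional kernels, so it illustrates only the structure of the computation.

import numpy as np

def gvin_step(V, r, W, gamma=0.95):
    # W: (|A|, N, N) per-action propagation matrices over graph edges;
    # Q_a = W_a (r + gamma * V), then V(v) = max_a Q_a(v).
    Q = W @ (r + gamma * V)                  # shape (|A|, N)
    return Q.max(axis=0)

N, A = 20, 4
adj = (np.random.rand(N, N) < 0.2).astype(float)
np.fill_diagonal(adj, 1.0)
W = np.stack([adj * np.random.rand(N, N) for _ in range(A)])
W /= W.sum(axis=-1, keepdims=True)           # row-normalize each channel
r = np.zeros(N); r[N - 1] = 1.0              # one-hot goal reward signal
V = np.zeros(N)
for _ in range(30):
    V = gvin_step(V, r, W)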
Graph attention network (GAT) GAT is inspired by the attention mechanism, modifying the convolution operation in GCN in order to make learning more efficient and scalable to domains of large graphs. Specifically, the encoding at each node is recursively computed as

h^{(l+1)}_i = σ(Σ_{j∈N(i)} α^{(l)}_{ij} W^{(l)} h^{(l)}_j),

where σ is an activation function, and α^{(l)}_{ij} are the attention coefficients, which are computed as

α^{(l)}_{ij} = softmax_j( LeakyReLU( a^⊤ [W^{(l)} h^{(l)}_i ‖ W^{(l)} h^{(l)}_j] ) ),

where a is a weight vector, ‖ denotes concatenation, and l denotes the embedding layer whose weights are W^{(l)}. Message passing neural network (MPNN) MPNN uses the mechanism of message passing to compute graph embeddings. In particular, the calculation of a feature at each node involves two phases: i) message passing and ii) readout. The message passing operation at a node is based on its state and all messages received from neighbour nodes. The readout is based on the node state and the calculated message. These phases are summarised as follows:

m^{(k+1)}_i = Σ_{j∈N(i)} f(h^{(k)}_i, h^{(k)}_j, e_ij),  h^{(k+1)}_i = g(h^{(k)}_i, m^{(k+1)}_i).

MPNN is a unified framework for graph convolution and the other graph neural networks existing at that time, e.g. graph convolution networks and Laplacian-based methods, Gated Graph Neural Networks (GG-NN), Interaction Networks (IN), Molecular Graph Convolutions, and Deep Tensor Neural Networks (Schütt et al., 2017). Gilmer et al. have made great effort in converting these frameworks into MPNN variants. MPNN is designed similarly to GG-NN, in which a GRU is applied to implement the recursive message operation, but it differs in the message and output functions. Specifically, the message sent from node j to i is implemented as f(h_i, h_j, e_ij) = A(e_ij) h_j, where A(e_ij) is implemented as a neural network that maps the edge feature e_ij to a matrix that acts on h_j. Another variant of the message function that is additionally based on the receiver node feature h_i was also implemented in the paper. Updating messages with all received information h_i, h_j, e_ij is inspired by earlier work. MPNN has shown state-of-the-art performance in prediction tasks on large graph datasets, e.g. molecular properties. Graph network (GN) Graph networks (GN) are a general framework that combines all previous graph neural networks. The update operations of GN involve node, edge, and global graph features. Therefore it renders MPNN, GNN, GCN, and GAT special cases. Specifically, if we denote an additional global graph feature as u, the updates of GN consist of three update functions g and three aggregation functions f. These functions are implemented based on the message passing mechanism. The aggregation functions are: 1) m_i = f^{e→v}({e_ij}_{j∈N(i)}) aggregates messages sent from edges to compute information of node i; 2) m_e = f^{e→u}({e_ij}_{j∈N(i), ∀i}) aggregates messages sent from all edges to the global node u; 3) m_v = f^{v→u}({v_i}_{∀i}) aggregates messages sent from all nodes to the global node u. These aggregation functions must be invariant to the order of nodes, which is critical to the Weisfeiler-Lehman (WL) graph isomorphism test and regarded as an important requirement for graph representation learning. Using the aggregated information, the three update functions for edge, node, and global features are defined as follows:

e′_ij = g^e(e_ij, v_i, v_j, u),  v′_i = g^v(v_i, m_i, u),  u′ = g^u(u, m_e, m_v).

The aggregation functions could be element-wise summations, averages, or max/min operations. The update functions could use general neural networks. The use of edge features and the global graph node makes GN distinct from MPNN. In addition, the use of a recursive update from immediate neighbours is in contrast to the multi-hop updates of many spectral methods.
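A plain-numpy sketch of one GAT layer as written above (dense loops for clarity; a real implementation would vectorize and batch this):

import numpy as np

def gat_layer(H, adj, W, a, leaky=0.2):
    # H: (N, d) node features; adj: (N, N) 0/1 adjacency with self-loops;
    # W: (d, d') weights; a: (2 d',) attention weight vector.
    Z = H @ W
    N = Z.shape[0]
    logits = np.full((N, N), -np.inf)
    for i in range(N):
        for j in range(N):
            if adj[i, j]:
                s = a @ np.concatenate([Z[i], Z[j]])      # a^T [Wh_i || Wh_j]
                logits[i, j] = s if s > 0 else leaky * s  # LeakyReLU
    alpha = np.exp(logits - logits.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)             # softmax over N(i)
    return np.maximum(alpha @ Z, 0.0)                     # ReLU activation

N, d, d2 = 5, 4, 8
H = np.random.randn(N, d)
adj = np.ones((N, N))                                     # toy: fully connected
H2 = gat_layer(H, adj, W=np.random.randn(d, d2), a=np.random.randn(2 * d2))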
In this section, we propose to use a general graph neural network to construct inductive biases for "learning to plan", called graph-based motion planning networks (GrMPN). Similar to GVIN, our framework is trained on a set of motion planning problems as inputs and associated optimal paths (or complete optimal policies) as outputs. The inputs are general graphs whose reward and transition functions are not known. At test time, GrMPN takes i) a test problem defined as a general graph and ii) a pair of a starting node and a goal node. The target is to return an optimal plan for the test problem. The training data is D = {(G_i, τ*_i)}_{i=1}^N, where G_i is a general graph G = (V, E), and τ*_i is an optimal path (or an optimal policy) which can be generated by an expert as a demonstration. The target is to learn an optimal policy for a new graph G with a starting node and a goal node. This learning problem can be formulated as either imitation learning or reinforcement learning. While it is straightforward to train with RL, within the scope of this paper we only focus on the formulation of imitation learning. General GrMPN framework We propose a general framework for graph-based value iteration that is based on the principle of message passing. First, we design a feature-extraction function to learn a reward function r: r = f_R(G; v*), given a graph G and a goal node v*. We use a similar setting to GVIN for f_R (a more general architecture is depicted in Appendix A.1). Second, GrMPN consists of the following recurrent application of graph operations at all nodes i and edges ij:

q^{(k)}_{ij} = g^e(q^{(k−1)}_{ij}, v^{(k−1)}_i, v^{(k−1)}_j),  m^{(k)}_i = f^{e→v}({q^{(k)}_{ij}}_{j∈N(i)}),  v^{(k)}_i = g^v(v^{(k−1)}_i, m^{(k)}_i),

where k is the processing step index; v_i is the node feature (which also contains the node's value function); and f^{e→v}, g^e, g^v are defined as aggregation and update functions. Note that we use the Q-values q_{a_i} as edge features. If the transition is stochastic, we can use |A| channels on edges. GrMPN can be implemented based on GN. GrMPN does not represent the global node u. It consists of the representations of nodes and edges, and hence uses the one aggregation function and two update functions written above. Note that for brevity the above updates assume deterministic actions, similar to the settings in VIN and GVIN. The message aggregation is equivalent to a sum over next states, Σ_{s′} p(s′ | s, a) V(s′), similar to the implementation of GPPN. The algorithm of GrMPN via Graph Networks (GrMPN-GN), whose input is a graph G = (V, E) and a goal node v*, is summarized in Algorithm 1. In general, many graph neural networks can be used to represent the graph-based value iteration module. As MPNN, GG-NN and other similar approaches are special cases of GN, we can easily rewrite GrMPN as an application of these approaches. In this section, we draw a connection of GrMPN to GAT, and show how GAT can also be used to represent a graph-based VI module. We use multiple heads to represent |A| channels for the Q-value functions. Each head represents a value function corresponding to an action:

Q^{(k)}_{a,i} = Σ_{j∈N(i)} α^a_{ij} W_a v^{(k−1)}_j,  V^{(k)}_i = σ(Q^{(k)}_{·,i}),

where σ is a Maxout activation function or a max over possible actions (in our implementation we chose the latter); and α^a_{ij} are the attention coefficients of channel a, which are computed as in GAT, α^a_{ij} = softmax_j(LeakyReLU(a_a^⊤ [W_a v_i ‖ W_a v_j])). We denote this formulation as GrMPN-GAT. We can also make a connection between GrMPN-GN and GrMPN-GAT by reformulating the updates and coefficients of GrMPN-GAT as message passing, in which edge attentions become edge updates. The edge features now carry the attention coefficients α as additional information.
GrMPN-GAT is then rewritten as a GN module with attention,

e′_ij = g^e(e_ij, v_i, v_j),  with e_ij = (α_ij, q_ij),

where e_ij is the feature of the edge i-j, which contains the attention coefficients and q_ij (equivalent to the Q-value function q_{a_i}). The algorithm of GrMPN-GAT can be described similarly to GrMPN-GN in Algorithm 1. We evaluate the proposed frameworks GrMPN-GAT and GrMPN-GN on 2D mazes, irregular graphs, and motion planning problems. These problems range from simple regular graphs (2D mazes) to irregular graphs of simple (irregular grid) and complex geometric structures (motion planning). Through these experiments we want to confirm the following points: 1. GrMPN frameworks based on general graph neural networks not only perform comparably to VIN on lattice structures, but do so with more data-efficiency, because GrMPN-GAT and GrMPN-GN are invariant to graph isomorphism. 2. GrMPN is able to generalize well to unseen graphs of arbitrary sizes and structures, because it exploits graph isomorphism in a principled way. In particular, GrMPN handles long-range planning well by providing a great generalization ability a) across nodes in a graph and b) across graphs. Therefore it is able to cope with planning on larger problems of high complexity, and hence significantly improves task performance and data-efficiency. 3. GrMPN-GAT, exploiting attended (weighted) updates for value functions, i.e. non-local updates, should outperform GrMPN-GN, which is only based on a uniform sum over actions. In all experiments, we follow the standard encode-decode graph network design. Firstly, each graph component (node, edge) feature is passed through a two-layered MLP encoder with ReLU activations and 16 hidden units each. To increase robustness, this two-layered network is also used in the other graph network components, including both the graph block module (GrMPN-GAT or GrMPN-GN) and the decoder network. Notably, in the lattice 20 × 20 experiment, as the number of nodes is significantly larger, we instead increase the number of hidden units to 32. We use an additional node encoding f_v to compute node features v in the motion planning problems (see Appendix A.1 for the full policy network architecture). Finally, we use the standard RMSProp algorithm with a learning rate of 0.001 in all experiments. The number of message passing steps k is set differently in different experiments. Specifically, k is set equal to 10, 15, and 20 for the 12 × 12, 16 × 16 and 20 × 20 2D mazes, respectively. Meanwhile, k = 20 in all other irregular graph experiments, except for the last motion planning task tested on roadmaps of 500 configurations, where we set k = 40 to handle large graphs. We use other standard settings as used in the papers of VIN and GVIN (note that VIN sets the recurrence parameter to k = 40 on 2D mazes and k = 200 on irregular graphs and motion planning). We use three different metrics, which are also used in prior work: i) %Accuracy (Acc) is the percentage of nodes whose predicted actions are optimal, ii) %Success (Succ) is the percentage of nodes whose predicted paths reach the goal, and iii) path difference (Diff) is the Euclidean distance, over all nodes, between the predicted path and the optimal path. Training: While extensions to RL training are straightforward, we focus only on imitation learning. Therefore we use the same objective function used by VIN and GVIN, i.e. a cross-entropy loss for the supervised learning problem with dataset {v, a* = π*(v)}, where v is a state node with an optimal action a* demonstrated by an expert.
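To tie the pieces together, here is a condensed numpy sketch of a GrMPN-GN-style forward pass and the imitation loss described above: a one-hot goal reward is written into the node features, k message-passing steps play the role of value-iteration sweeps (a plain mean aggregation stands in for the learned GN block), and per-node action logits are trained with cross-entropy against expert actions. All shapes and names are illustrative, not the authors' implementation.

import numpy as np

def grmpn_forward(adj, goal, W_msg, W_upd, W_out, k=20):
    # adj: (N, N) adjacency; node features h hold the value information.
    N, d = adj.shape[0], W_upd.shape[0]
    h = np.zeros((N, d))
    h[goal, 0] = 1.0                          # reward encoding at the goal
    deg = adj.sum(axis=1, keepdims=True) + 1e-8
    for _ in range(k):                        # k message-passing (VI) steps
        m = (adj @ (h @ W_msg)) / deg         # aggregate neighbor messages
        h = np.tanh(h + m @ W_upd)            # node update
    return h @ W_out                          # per-node action logits

def imitation_loss(logits, expert_actions):
    # Cross-entropy between predicted action distributions and the expert.
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    idx = np.arange(len(expert_actions))
    return -np.log(p[idx, expert_actions] + 1e-12).mean()

N, d, A = 12, 16, 4
rng = np.random.default_rng(0)
adj = np.ones((N, N)) - np.eye(N)             # toy fully-connected task graph
logits = grmpn_forward(adj, goal=0,
                       W_msg=0.1 * rng.normal(size=(d, d)),
                       W_upd=0.1 * rng.normal(size=(d, d)),
                       W_out=rng.normal(size=(d, A)))
loss = imitation_loss(logits, expert_actions=np.zeros(N, dtype=int))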
In this experiment we carry out evaluations on 2D mazes, which have a regular graph structure. We compare GrMPN against VIN, GVIN, and GPPN. The environment and experiment settings follow VIN, GVIN, and GPPN, and we use the same script used in those works to generate graphs. For each graph, we generated seven optimal trajectories corresponding to different start and goal nodes. Note that only GPPN requires a complete optimal policy, which gives an optimal action at every graph node; this setting gives GPPN a slight advantage. We train and test on graphs of sizes 12 × 12, 16 × 16, and 20 × 20. The number of generated graphs for training is chosen from small to large with values {200, 1000, 5000}. The size of the testing data is fixed to 1000 graphs of corresponding size. The results shown in Table 1 tell that GrMPN-GAT and GrMPN-GN not only outperform the other baselines but are also more data-efficient. GPPN has a slightly better performance with a large amount of data. We note that GPPN must assume the data consists of optimal policies instead of a small set of optimal demonstration trajectories as in VIN, GVIN and our methods. GVIN is a graph-based VIN, but it relies on specially designed convolution kernels that are based on a choice of discretised directions. Therefore GVIN is not able to perform as well as GrMPN-GAT and GrMPN-GN, which are based on principled graph networks and known to be invariant to graph isomorphism. GrMPN-GAT and GrMPN-GN are significantly better in terms of data-efficiency on the large 20 × 20 domains. On these large domains, learning algorithms often require a large amount of data; however, GrMPN-GAT and GrMPN-GN can still learn well with a limited amount of data. This shows how important invariance to graph isomorphism is for learning on graphs. The performance of GrMPN-GAT on bigger domains is better than on small domains, because the number of nodes involved in training is larger in large graphs. We show more ablation results in the Appendix. This experiment uses the same script used by GVIN, which is based on NetworkX, to create synthetic graphs whose nodes have random coordinates drawn from a box in 2D space. We vary the parameters of the generation program to create three types of irregular graphs: Dense, Sparse, and Tree-like. For Tree-like graphs, we use NetworkX's geographical_threshold_graph function, setting the connectedness probability between nodes to a small value. We create Tree-like graphs, which are not considered in GVIN, because these graphs pose two main challenges. First, with the same number of nodes and amount of generated graphs, tree-like graphs have much fewer nodes for training. Second, Tree-like graphs raise a major issue: they are ideal for evaluating generalization in long-range planning, which requires good propagation of value functions across the nodes of a graph. We generate 10000 graphs, with the number of nodes varying in {10, 100}. The label for each graph is an optimal policy, i.e. an optimal action at every graph node. Training uses 6/7 of the generated data, while testing uses 1/7. The comparison results are described in Tables 2 (on Dense), 3 (on Sparse), and 7 (on Tree-like). Testing is performed on irregular graphs of different sizes: 100 and 150 nodes on Dense, 100 nodes on Sparse. The results show that GrMPN methods perform comparably with GVIN on Dense graphs in terms of Success rate, but slightly better in terms of Accuracy and Distance difference. On Sparse graphs, GrMPN-GAT and GrMPN-GN, based on the principle of message passing, are able to propagate value updates quickly across nodes.
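The irregular-graph task generation described above can be sketched with NetworkX as follows; the threshold value, node count, and goal choice are illustrative, and each node is labeled with its first move along a Dijkstra shortest path to the goal, which serves as the expert action.

import networkx as nx

def make_task(n_nodes=100, theta=50, seed=0):
    # Geometric graph with 2D node positions; smaller theta -> sparser graph.
    G = nx.geographical_threshold_graph(n_nodes, theta, seed=seed)
    pos = nx.get_node_attributes(G, "pos")
    for u, v in G.edges():                   # Euclidean edge lengths
        G[u][v]["weight"] = ((pos[u][0] - pos[v][0]) ** 2
                             + (pos[u][1] - pos[v][1]) ** 2) ** 0.5
    return G

def optimal_policy(G, goal):
    # Expert label: for every reachable node, the optimal next node.
    paths = nx.single_source_dijkstra_path(G, goal, weight="weight")
    return {v: p[-2] for v, p in paths.items() if len(p) > 1}

G = make_task()
goal = max(G.nodes())                        # arbitrary goal choice
policy = optimal_policy(G, goal)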
On Sparse graphs, GrMPN-GAT and GrMPN-GN, based on the principle of message passing, are able to propagate updates quickly across nodes. Sampling-based methods such as probabilistic roadmaps (PRM) have been shown to be very efficient in practice. PRM is one of the most widely used techniques in robotic motion planning, especially for applications in navigation. In such an application, a motion planning algorithm must find an optimal path that satisfies i) the environment's geometry constraints, i.e. a collision-free path, and ii) the robot system constraints, e.g. differential constraints. PRM is a multiple-query method that is very useful in highly structured environments such as large buildings. We evaluate GrMPN on two motion planning problems: 2D navigation with a holonomic mobile robot and manipulation with a simulated 7-DoF Baxter robot arm. We aim to improve PRM by bringing it closer to an online-planning method through transfer planning. In this section, we show that GrMPN outperforms GVIN on such tasks with a complex geometric structure. Setting: Note that the input of the simulated 7-DoF Baxter robot forms a kinematic tree. Therefore we propose to use three alternative encoding layers f_v to compute node features: GN, GCN or MLP. For the 2D navigation with a mobile robot, we only use an MLP to encode robot locations. In addition, we use a simple MLP layer of 16 nodes to encode the one-hot goal value. Then, the outputs from these two encoders are concatenated and used as node features. This architecture renders our implementation a hierarchical graph-based planning method. Data generation: For the mobile robot navigation, we use a standard PRM algorithm to construct 2000 roadmaps for training, each with 200 robot configurations. A graph node is represented as a robot configuration, which is a generalized coordinate (x, y) where x, y are 2D translation coordinates. We generate two different test sets, each consisting of 1000 roadmaps: with 200 and 500 configurations respectively. Each generation run uses a different setting of environment obstacles. For each generated graph, we use the Dijkstra algorithm to provide one optimal trajectory corresponding to one random pair of start and goal states. For the simulated 7-DoF Baxter robot arm, we use the same script as in prior work to generate different environment settings (with different obstacles), and set different start and goal configurations. For each environment setting, the roadmap and the optimal trajectories from 20 randomly selected start nodes to the goal are then found by using PRM* and the Dijkstra algorithm provided by the OMPL library (Şucan et al., 2012). In total, we generate 280 roadmaps, each with 300 configurations. The test set contains 80 roadmaps, each with about 1200 configurations. Analysis: Table 4 shows the test performance on 2D navigation with a mobile robot. It shows that GrMPN methods not only outperform GVIN but also possess a great generalization ability. We additionally evaluate the generalization ability of GrMPN methods by taking the model trained on Tree-like data, as described in the Irregular Graphs section, and testing it on the created roadmap test set. The distance difference (Diff) computes the cost difference between the predicted path and the optimal path planned by Dijkstra. We skip reporting on GVIN on large testing graphs (500 configuration nodes) due to its degraded performance. The model trained on Tree-like graph data could also generalize well on unseen graphs generated by PRM on different environments.
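For concreteness, the following is a minimal sketch of the roadmap generation described above, using NetworkX: sample collision-free configurations, connect nearest neighbors with distance-weighted edges, and label an optimal trajectory with Dijkstra. The collision_free callback and the neighbor count k are illustrative stand-ins for the real obstacle checker and PRM parameters.

```python
import numpy as np
import networkx as nx

def build_prm_roadmap(n_configs=200, k=6, collision_free=lambda p: True, seed=0):
    """Build a toy PRM roadmap (a sketch; `collision_free` stands in
    for the actual obstacle checker used in the experiments)."""
    rng = np.random.default_rng(seed)
    nodes = []
    while len(nodes) < n_configs:                 # rejection-sample configurations
        p = rng.uniform(0.0, 1.0, size=2)
        if collision_free(p):
            nodes.append(p)
    g = nx.Graph()
    for i, p in enumerate(nodes):
        g.add_node(i, pos=p)
    for i, p in enumerate(nodes):                 # connect k nearest neighbors
        dists = [(np.linalg.norm(p - q), j) for j, q in enumerate(nodes) if j != i]
        for dist, j in sorted(dists)[:k]:
            g.add_edge(i, j, weight=dist)         # edge feature = distance
    return g

g = build_prm_roadmap()
# Expert label for imitation learning: an optimal path between a random pair.
path = nx.dijkstra_path(g, source=0, target=42, weight="weight")
```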
In addition, they can generalize to much bigger graphs (with 500 nodes). This suggests GrMPN is able to do zero-shot learning to plan. Table 5 shows the test performance on the motion planning task with a simulated 7-DoF Baxter arm. We skip reports on GVIN due to its poor performance and scalability when learning on large graphs. The results show that GrMPN methods are able to do motion planning on this complex geometric domain with high accuracy and success rates. Table 5: Simulated 7-DoF Baxter arm motion planning: test performance with different choices of node encoding (MLP-, GCN-, GN-) and graph-based planning (GrMPN-GAT and GrMPN-GN). Figure 1: GrMPN based on graph neural networks. A APPENDIX Our proposed graph-based motion planning network is inspired by VIN and GVIN, and depicted in Fig. 1. The results in Figs. 2 and 3 show ablations of GrMPN-GAT and GrMPN-GN with different numbers of graph processing steps. The figures show the value function maps after training, i.e., how value functions on a test graph (after training) are computed depending on the number k of processing steps. More nodes are updated with a larger number of processing steps, which corresponds to more batches of value iteration updates. This ablation also shows that GrMPN-GAT is able to generalize across nodes better than GrMPN-GN. This generalization ability would significantly help with long-range planning problems. The information on the generated graphs is summarized in Table 6. As seen in Fig. 4, GVIN is not able to spread the update to nodes that are far from the goal node. This figure also shows that GrMPN-GAT has a slightly better generalization ability across nodes than GrMPN-GN. The value functions of GrMPN-GN have more un-updated nodes (see the color of nodes that are far from the goal node, labeled in black) than those of GrMPN-GAT. This explains why GrMPN-GAT performs slightly better than GrMPN-GN. The results in Table 7 tell that GVIN is not able to cope with very sparse graphs and long-range planning. This shows GVIN has a weak generalization ability. Data generation: The roadmap and trajectory generation for the mobile robot and the simulated 7-DoF Baxter arm follows the procedure described in the motion planning section above; Fig. 6 shows an example roadmap with an optimal trajectory, and Fig. 5 depicts an example generated environment for the Baxter arm. Figures 7 and 8 show that GVIN is not able to update value functions of nodes far from the goal, while GrMPN-GAT and GrMPN-GN can generalize value updates well to such nodes. The color map in Figure 8 also suggests that GrMPN-GAT has slightly wider value propagation, which means better generalization for long-range planning and across task graphs.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HkxLiJSKwB
We propose an end-to-end differentiable planning network for graphs, applicable to many motion planning problems
We describe techniques for training high-quality image denoising models that require only single instances of corrupted images as training data. Inspired by a recent technique that removes the need for supervision through image pairs by employing networks with a "blind spot" in the receptive field, we address two of its shortcomings: inefficient training and poor final denoising performance. This is achieved through a novel blind-spot convolutional network architecture that allows efficient self-supervised training, as well as application of Bayesian distribution prediction on output colors. Together, they bring the self-supervised model on par with fully supervised deep learning techniques in terms of both quality and training speed in the case of i.i.d. Gaussian noise. Denoising, the removal of noise from images, is a major application of deep learning. Several architectures have been proposed for general-purpose image restoration tasks, e.g., U-Nets BID13, hierarchical residual networks BID11, and residual dense networks BID17. Traditionally, the models are trained in a supervised fashion with corrupted images as inputs and clean images as targets, so that the network learns to remove the corruption. BID9 introduced NOISE2NOISE training, where pairs of corrupted images are used as training data. They observe that when certain statistical conditions are met, a network faced with the impossible task of mapping corrupted images to corrupted images learns, loosely speaking, to output the "average" image. For a large class of image corruptions, the clean image is a simple per-pixel statistic -such as mean, median, or mode -over the stochastic corruption process, and hence the restoration model can be supervised using corrupted data by choosing the appropriate loss function to recover the statistic of interest. While removing the need for clean training images, NOISE2NOISE training still requires at least two independent realizations of the corruption for each training image. While this eases data collection significantly compared to noisy-clean pairs, large collections of (single) poor images are still much more widespread. This motivates investigation of self-supervised training: how much can we learn from just looking at bad data? While foregoing supervision would lead to the expectation of some regression in performance, can we make up for it by making stronger assumptions about the corruption process? In this paper, we show that under the assumption of additive Gaussian noise that is i.i.d. between pixels, no concessions in denoising performance are necessary. We draw inspiration from the recent NOISE2VOID (N2V) training technique of BID7. The algorithm needs no image pairs, and uses just individual noisy images as training data, assuming that the corruption is zero-mean and independent between pixels. The method is based on blind-spot networks where the receptive field of the network does not include the center pixel. This allows using the same noisy image as both training input and training target -because the network cannot see the correct answer, using the same image as target is equivalent to using a different noisy realization. 
This approach is self-supervised in the sense that the surrounding context is used to predict the value of the output pixel without a separate reference image BID3. The networks used by BID7 do not have a blind spot by design, but are trained to ignore the center pixel using a masking scheme where only a few output pixels can contribute to the loss function, reducing training efficiency considerably. We remedy this with a novel architecture that allows efficient training without masking. Furthermore, the existence of the blind spot leads to poor denoising quality. We derive a scheme for combining the network output with the data in the blind spot, bringing the denoising quality on par with conventionally trained networks. In our blind-spot network architecture, we effectively construct four denoiser network branches, each having its receptive field restricted to a different direction. A single-pixel offset at the end of each branch separates the receptive field from the center pixel. The results are then combined by 1×1 convolutions. In practice, we run four rotated versions of each input image through a single receptive-field-restricted branch, yielding a simpler architecture that performs the same function. This also implicitly shares the convolution kernels between the branches and thus avoids the four-fold increase in the number of trainable weights. Our convolutional blind-spot networks are designed by combining multiple branches that each have their receptive field restricted to a half-plane (FIG0) that does not contain the center pixel. The principle of limiting the receptive field has been used in PixelCNN (van den Oord et al.) image synthesis networks, where only pixels synthesized before the current pixel are allowed in the receptive field. We combine the four branches with a series of 1×1 convolutions to obtain a receptive field that can extend arbitrarily far in every direction but does not contain the center pixel. In order to transform a restoration network into one with a restricted receptive field, we modify each individual layer so that its receptive field is fully contained within one half-plane, including the center row/column. The receptive field of the resulting network includes the center pixel, so we offset the feature maps by one pixel before combining them. Layers that do not extend the receptive field, e.g., concatenation, summation, 1×1 convolution, etc., can be used without modifications. Convolution layers. To restrict the receptive field of a zero-padding convolution layer to extend only, say, upwards, the easiest solution is to offset the feature maps downwards when performing the convolution operation. For an h × w kernel size, a downwards offset of k = ⌊h/2⌋ pixels is equivalent to using a kernel that is shifted upwards so that all weights below the center line are zero. Specifically, we first append k rows of zeros to the top of the input tensor, then perform the convolution, and finally crop out the k bottom rows of the output. Downsampling and upsampling layers. Many image restoration networks involve downsampling and upsampling layers, and by default, these extend the receptive field in all directions. Consider, e.g., a bilinear 2 × 2 downsampling step followed immediately by a nearest-neighbor 2 × 2 upsampling step. The contents of every 2 × 2 pixel block in the output now correspond to the average of this block in the input, i.e., information has been transferred in every direction within the block. We fix this problem by again applying an offset to the data.
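The pad-convolve-crop trick just described can be written compactly. The following PyTorch sketch (assuming kernel_size ≥ 3, so the shift k is at least 1) restricts a convolution's receptive field to extend upwards only:

```python
import torch.nn as nn
import torch.nn.functional as F

class ShiftedConv2d(nn.Module):
    """Convolution whose receptive field extends upwards only
    (a sketch of the pad-convolve-crop trick described above)."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.k = kernel_size // 2                      # rows to shift by
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                              padding=kernel_size // 2)

    def forward(self, x):
        x = F.pad(x, (0, 0, self.k, 0))                # k zero rows at the top
        x = self.conv(x)
        return x[:, :, :-self.k, :]                    # crop k bottom rows
```

Shifting the output downwards in this way is equivalent to zeroing all kernel weights below the center line, so the layer can only look at the current row and rows above it.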
It is sufficient to restrict the receptive field for the pair of downsampling and upsampling layers, which means that only one of the layers needs to be modified, and we have chosen to attach the offsets to the downsampling layers. For a 2 × 2 bilinear downsampling layer, we can restrict the receptive field to extend upwards only by padding the input tensor with one row of zeros at the top and cropping out the bottom row before performing the actual downsampling operation. In their basic form, blind-spot networks suffer from the inability to utilize the data at the center pixel at test time; yet, clearly, the observed value carries information about the underlying clean signal. For training it is mandatory to disconnect the information flow from a pixel position to itself, but there is no such restriction when using the network to restore novel images after it has been trained. We capitalize on this by training the network to predict, based on the context, a distribution of values instead of a single mean prediction, and applying maximum a posteriori estimation at test time. In Bayesian training BID12 BID8 BID5, the network predicts output distributions using a negative log-likelihood loss function. We model the data using multivariate Gaussian distributions: for images with c color components, we have the denoising network output a vector of means $\mu_y$ and a covariance matrix $\Sigma_y$ for each pixel. For convenience, we parameterize the c × c inverse per-pixel covariance matrix as $\Sigma_y^{-1} = A_y^T A_y$, where $A_y$ is an upper triangular matrix. This choice ensures that $\Sigma_y$ is positive semidefinite with non-negative diagonal entries, as required for a covariance matrix. For RGB images, the network thus outputs a total of nine values per pixel: the three-component mean $\mu_y$ and the six nonzero elements of $A_y$. Let $f(y; \mu_y, \Sigma_y)$ denote the probability density of a multivariate Gaussian distribution $N(\mu_y, \Sigma_y)$ at target pixel color y, i.e., $f(y; \mu_y, \Sigma_y) = (2\pi)^{-c/2}\, |\Sigma_y|^{-1/2} \exp[-\tfrac{1}{2}(y - \mu_y)^T \Sigma_y^{-1} (y - \mu_y)]$. Under our parameterization, the corresponding negative log-likelihood loss to optimize during training is $L = \tfrac{1}{2}\, \| A_y (y - \mu_y) \|^2 - \log |A_y| + C$, where C is a constant term that can be discarded. Because $A_y$ is a triangular matrix, its determinant $|A_y|$ is the product of its diagonal elements. To avoid numerical issues, we clamp this determinant to a small positive epsilon (ε = 10^−8) so that the logarithm is always well-defined. We assume that all of our images are corrupted by additive uniform Gaussian noise $N(0, \sigma^2 I)$ with a known standard deviation σ. Using noisy targets means that there is a "baseline" level of noise in the network output distributions that we must discount. Thanks to the blind spot, the network output is independent of the noise in the center pixel, so their (co-)variances are additive. We can therefore calculate $\Sigma_p = \Sigma_y - \sigma^2 I$ to determine the actual uncertainty $\Sigma_p$ of the network. To avoid negative variances due to approximation errors, the diagonal elements of $\Sigma_p$ can be clamped to zero. Let us now derive our maximum a posteriori (MAP) denoising procedure. For each pixel, our goal is to find the most likely clean value x given our knowledge of the noisy value $\tilde{x}$ and the output distribution predicted by the network based on the blind-spot neighborhood Ω. It follows that $\hat{x} = \arg\max_x P(x|\tilde{x})\,P(x|\Omega) = \arg\max_x f(\tilde{x}; x, \sigma^2 I)\, f(x; \mu_y, \Sigma_p)$, where we have first applied Bayes' theorem to obtain the MAP objective $P(\tilde{x}|x)P(x|\Omega)$, and then expressed the associated probabilities as pdfs of Gaussian distributions.
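A minimal PyTorch sketch of this loss for RGB (c = 3), assuming the network outputs are flattened to nine values per pixel; the exact packing of the outputs is an illustrative choice, not the paper's stated layout:

```python
import torch

def gaussian_nll(out, y, eps=1e-8):
    """Negative log-likelihood for the per-pixel multivariate Gaussian.

    out: (N, 9) -> mean mu (3 values) and the 6 nonzero entries of the
         upper-triangular matrix A_y (a real implementation would also
         constrain the diagonal of A_y to be positive, e.g. via exp).
    y:   (N, 3) noisy target colors.
    """
    mu, a = out[:, :3], out[:, 3:]
    A = y.new_zeros(y.shape[0], 3, 3)
    iu = torch.triu_indices(3, 3)
    A[:, iu[0], iu[1]] = a                      # build upper-triangular A_y
    r = torch.einsum('nij,nj->ni', A, y - mu)   # A_y (y - mu_y)
    quad = 0.5 * (r ** 2).sum(dim=1)            # 1/2 ||A_y (y - mu_y)||^2
    det = A[:, 0, 0] * A[:, 1, 1] * A[:, 2, 2]  # |A_y| = product of diagonal
    return (quad - torch.log(det.clamp_min(eps))).mean()
```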
In the first term we have exploited the symmetry of the Gaussian distribution, and as the prior term P(x|Ω) we use the prediction of the network with the baseline uncertainty removed. Following BID0, the mean, and consequently the argmax, of this product of two Gaussian distributions is $\hat{x} = (\Sigma_p^{-1} + \sigma^{-2} I)^{-1}(\Sigma_p^{-1}\mu_y + \sigma^{-2}\tilde{x})$. For the baseline experiments, as well as for the backbone of our blind-spot networks, we use the same U-Net BID13 architecture as BID9; see their appendix for details. The only differences are that layers DEC CONV1A and DEC CONV1B output 96 feature maps like the other convolution layers at the decoder stage, and layer DEC CONV1C is removed. After combining the four receptive-field-restricted branches, we thus have 384 feature maps. These are fed into three successive 1×1 convolutions with 384, 96, and n output channels, respectively, where n is the number of output components for the network. All convolution layers except the last 1×1 convolution use leaky ReLU with α = 0.1. All networks were trained using Adam with default parameters BID6, learning rate λ = 0.0003, and a minibatch size of 4. As training data, we used random 256×256 crops from the 50K images in the ILSVRC2012 (Imagenet) validation set. The training continued until 1.2M images were shown to the network. All training and test images were corrupted with Gaussian noise, σ = 25. Table 1 shows the denoising quality in dB for the four test datasets used. From the BSD300 dataset we use the 100 validation images only. Similar to BID7, we use the grayscale version of the BSD68 dataset; for this case we train a single-channel (c = 1) denoiser using only the luminance channel of the training images. All our blind-spot noise-to-noise networks use the convolutional architecture (Section 2) and are trained without masking. In BSD68 our simplified L2 variant closely matches the original NOISE2VOID training, suggesting that our network with an architecturally enforced blind spot is approximately as capable as the masking-based network trained by BID7. We see that the denoising quality of our Full setup (Section 3) is on par with the N2N and N2C baselines, and clearly surpasses standard blind-spot denoising (L2) that does not exploit the information in the blind spot. Table 1 caption: Comparison against BID9 and BID7. Full is our blind-spot training and denoising method as described in Section 3. Per-comp. is an ablated setup where each color component is treated as an independent univariate Gaussian, highlighting the importance of expressing color outputs as multivariate distributions. L2 refers to training using the standard L2 loss function and ignoring the center pixel when denoising. Columns N2N and N2C refer to NOISE2NOISE training of BID9 and traditional supervised training with clean targets (i.e., noise-to-clean), respectively. Results within 0.05 dB of the best result for each dataset are shown in boldface. Doing the estimation separately for each color channel (Per-comp.) performs significantly worse, except in the grayscale BSD68 dataset where it is equivalent to the Full method. FIG1 shows example denoising results. Our Full setup produces images that are virtually identical to the N2N baseline both visually and in terms of PSNR. The ablated Per-comp. setup tends to produce color artifacts, demonstrating the shortcomings of the simpler per-component univariate model. Finally, the L2 variant that ignores the center pixel during denoising produces visible checkerboard patterns, some of which can also be seen in the images of BID7.
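The MAP step is just the posterior mean of a product of two Gaussians. A sketch of the per-pixel computation, omitting the clamping of negative variances mentioned above:

```python
import torch

def map_denoise(mu, Sigma_y, x_tilde, sigma):
    """Posterior mean combining the prior N(mu, Sigma_p) predicted from the
    blind-spot context with the likelihood N(x_tilde, sigma^2 I).

    mu: (N, 3), Sigma_y: (N, 3, 3), x_tilde: (N, 3), sigma: noise std.
    """
    I = torch.eye(3, device=mu.device).expand_as(Sigma_y)
    Sigma_p = Sigma_y - sigma ** 2 * I            # remove baseline noise variance
    # x_hat = (Sigma_p^-1 + sigma^-2 I)^-1 (Sigma_p^-1 mu + sigma^-2 x_tilde)
    P = torch.linalg.inv(Sigma_p)
    A = P + I / sigma ** 2
    b = torch.einsum('nij,nj->ni', P, mu) + x_tilde / sigma ** 2
    return torch.linalg.solve(A, b)
```

As the formula shows, pixels whose predicted uncertainty Sigma_p is small trust the network's prediction mu, while pixels with large uncertainty fall back toward the observed noisy value x_tilde.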
We have shown that self-supervised training, looking at noisy images only, without the benefit of seeing the same image under different noise realizations, is sufficient for learning deep denoising models on par with those that make use of another realization as a training target, be it clean or corrupted. Currently this comes at the cost of assuming pixel-wise independent noise with a known analytic likelihood model. PixelCNNs (BID16; BID14) generate novel images in scanline order, one pixel at a time, by conditioning the possible pixel colors using all previous, already generated pixels. The training uses masked convolutions that prevent looking at pixels that would not have been generated yet; one good implementation of masking (van den Oord et al.) combines a vertical half-space (previous scanlines) with a horizontal line (current scanline). In our application we use four half-spaces to exclude the center pixel only. Regrettably, the term "blind spot" has a slightly different meaning in PixelCNNs: van den Oord et al. (BID15) use it to denote valid input pixels that the network in question fails to see due to poor design, whereas we follow the naming convention of BID7 so that a blind spot is always intentional. Applying Bayesian statistics to denoising has a long history. Non-local means BID1, BM3D BID2, and WNNM BID4 identify a group of similar pixel neighborhoods and estimate the center pixel's color from those. This is conceptually similar to our solution, which uses a convolutional network to represent the mapping from neighborhoods to the distilled outputs. Both approaches need only the noisy images, but while the explicit block-based methods determine a small number of neighborhoods from the input image alone, our blind-spot training can implicitly identify and regress an arbitrarily large number of neighborhoods from a collection of noisy training data.
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
H1e7g4edw4
We learn high-quality denoising using only single instances of corrupted images as training data.
Reinforcement learning (RL) agents improve through trial-and-error, but when reward is sparse and the agent cannot discover successful action sequences, learning stagnates. This has been a notable problem in training deep RL agents to perform web-based tasks, such as booking flights or replying to emails, where a single mistake can ruin the entire sequence of actions. A common remedy is to "warm-start" the agent by pre-training it to mimic expert demonstrations, but this is prone to overfitting. Instead, we propose to constrain exploration using demonstrations. From each demonstration, we induce high-level "workflows" which constrain the allowable actions at each time step to be similar to those in the demonstration (e.g., "Step 1: click on a textbox; Step 2: enter some text"). Our exploration policy then learns to identify successful workflows and samples actions that satisfy these workflows. Workflows prune out bad exploration directions and accelerate the agent’s ability to discover rewards. We use our approach to train a novel neural policy designed to handle the semi-structured nature of websites, and evaluate on a suite of web tasks, including the recent World of Bits benchmark. We achieve new state-of-the-art results, and show that workflow-guided exploration improves sample efficiency over behavioral cloning by more than 100x. We are interested in training reinforcement learning (RL) agents to use the Internet (e.g., to book flights or reply to emails) by directly controlling a web browser. Such systems could expand the capabilities of AI personal assistants BID42, which are currently limited to interacting with machine-readable APIs, rather than the much larger world of human-readable web interfaces. Reinforcement learning agents could learn to accomplish tasks using these human-readable web interfaces through trial-and-error BID44. But this learning process can be very slow in tasks with sparse reward, where the vast majority of naive action sequences lead to no reward signal BID46 BID30. This is the case for many web tasks, which involve a large action space (the agent can type or click anything) and require a well-coordinated sequence of actions to succeed. A common countermeasure in RL is to pre-train the agent to mimic expert demonstrations via behavioral cloning BID37 BID23, encouraging it to take similar actions in similar states. But in environments with diverse and complex states such as websites, demonstrations may cover only a small slice of the state space, and it is difficult to generalize beyond these states (overfitting). Indeed, previous work has found that warm-starting with behavioral cloning often fails to improve over pure RL BID41. At the same time, simple strategies to combat overfitting (e.g. using fewer parameters or regularization) cripple the policy's flexibility BID9, which is required for complex spatial and structural reasoning in user interfaces. In this work, we propose a different method for leveraging demonstrations. Rather than training an agent to directly mimic them, we use demonstrations to constrain exploration. By pruning away bad exploration directions, we can accelerate the agent's ability to discover sparse rewards.
Figure 1: Workflow-guided exploration (WGE). For all demonstrations d: induce a workflow lattice from d. Workflow policy loop: observe an initial environment state; π_w samples a workflow from a lattice; roll out an episode e from the workflow; use e to update π_w; if e gets reward +1, add e to the replay buffer. Periodically, if the replay buffer size exceeds a threshold: sample episodes from the replay buffer and update π_n with the sampled episodes. Neural policy loop: observe an initial environment state; π_n rolls out an episode e; update π_n and the critic V with e; if e gets reward +1, add e to the replay buffer. After inducing workflow lattices from demonstrations, the workflow policy π_w performs exploration by sampling episodes from sampled workflows. Successful episodes are saved to a replay buffer, which is used to train the neural policy π_n. Furthermore, because the agent is not directly exposed to demonstrations, we are free to use a sophisticated neural policy with a reduced risk of overfitting. To constrain exploration, we employ the notion of a "workflow" BID13. For instance, given an expert demonstration of how to forward an email, we might infer the following workflow: Click an email title → Click a "Forward" button → Type an email address into a textbox → Click a "Send" button. This workflow is more high-level than an actual policy: it does not tell us exactly which email to click or which textbox to type into, but it helpfully constrains the set of actions at each time step. Furthermore, unlike a policy, it does not depend on the environment state: it is just a sequence of steps that can be followed blindly. In this sense, a workflow is environment-blind. The actual policy certainly should not be environment-blind, but for exploration, we found environment-blindness to be a good inductive bias. To leverage workflows, we propose the workflow-guided exploration (WGE) framework as illustrated in Figure 1: 1. For each demonstration, we extract a lattice of workflows that are consistent with the actions observed in the demonstration (Section 3). 2. We then define a workflow exploration policy π_w (Section 4), which explores by first selecting a workflow, and then sampling actions that fit the workflow. This policy gradually learns which workflow to select through reinforcement learning. 3. Reward-earning episodes discovered during exploration enter a replay buffer, which we use to train a more powerful and expressive neural network policy π_n (Section 5). A key difference between the web and traditional RL domains such as robotics BID5 or game-playing BID8 is that the state space involves a mix of structured (e.g. HTML) and unstructured inputs (e.g. natural language and images). This motivates us to propose a novel neural network policy (DOMNET), specifically designed to perform flexible relational reasoning over the tree-structured HTML representation of websites. We evaluate workflow-guided exploration and DOMNET on a suite of web interaction tasks, including the MiniWoB benchmark of BID41, the flight booking interface for Alaska Airlines, and a new collection of tasks that we constructed to study additional challenges such as noisy environments, variation in natural language, and longer time horizons. Compared to previous results on MiniWoB BID41, which used 10 minutes of demonstrations per task (approximately 200 demonstrations on average), our system achieves much higher success rates and establishes new state-of-the-art results with only 3-10 demonstrations per task.
In the standard reinforcement learning setup, an agent learns a policy π(a|s) that maps a state s to a probability distribution over actions a. At each time step t, the agent observes an environment state s_t and chooses an action a_t, which leads to a new state s_{t+1} and a reward r_t = r(s_t, a_t). The goal is to maximize the expected return E[R], where $R = \sum_t \gamma^t r_{t+1}$ and γ is a discount factor. Typical reinforcement learning agents learn through trial-and-error: rolling out episodes (s_1, a_1, ..., s_T, a_T) and adjusting their policy based on the results of those episodes. We focus on settings where the reward is delayed and sparse. Specifically, we assume that the agent receives reward only at the end of the episode, and the reward is high (e.g., +1) for only a small fraction of possible trajectories and is uniformly low (e.g., −1) otherwise. With large state and action spaces, it is difficult for the exploration policy to find episodes with positive rewards, which prevents the policy from learning effectively. We further assume that the agent is given a goal g, which can either be a structured key-value mapping (e.g., {task: forward, from: Bob, to: Alice}) or a natural language utterance (e.g., "Forward Bob's message to Alice"). The agent's state s consists of the goal g and the current state of the web page, represented as a tree of elements (henceforth DOM tree). We restrict the action space to click actions Click(e) and type actions Type(e,t), where e is a leaf element of the DOM tree, and t is a string from the goal g (a value from a structured goal, or consecutive tokens from a natural language goal). FIG2 shows an example episode for an email processing task. The agent receives +1 reward if the task is completed correctly, and −1 reward otherwise. Given a collection of expert demonstrations d = (s̃_1, ã_1, ..., s̃_T, ã_T), we would like to explore actions a_t that are "similar" to the demonstrated actions ã_t. Workflows capture this notion of similarity by specifying a set of similar actions at each time step. Formally, a workflow z_{1:T} is a sequence of workflow steps, where each step z_t is a function that takes a state s_t and returns a constrained set z_t(s_t) of similar actions. We use a simple compositional constraint language (Appendix A) to describe workflow steps. For example, with z_t = Click(Tag("img")), the set z_t(s_t) contains click actions on any DOM element in s_t with tag img. We induce a set of workflows from each demonstration d = (s̃_1, ã_1, ..., s̃_T, ã_T) as follows. For each time step t, we enumerate a set Z_t of all possible workflow steps z_t such that ã_t ∈ z_t(s̃_t). The set of workflows is then the cross product Z_1 × · · · × Z_T of the steps. We can represent the induced workflows as paths in a workflow lattice, as illustrated in FIG2. To handle noisy demonstrations where some actions are unnecessary (e.g., when the demonstrator accidentally clicks on the background), we add shortcut steps that skip certain time steps. We also add shortcut steps for any consecutive actions that can be collapsed into a single equivalent action (e.g., collapsing two type actions on the same DOM element into a single Type step). These shortcuts allow the lengths of the induced workflows to differ from the length of the demonstration. We henceforth ignore these shortcut steps to simplify the notation. The induced workflow steps are not equally effective.
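A minimal sketch of the induction step: for each demonstrated action, keep every workflow step in a small library that admits that action. The step constructors and the state's dom_elements attribute below are illustrative stand-ins for the constraint language of Appendix A, not the paper's implementation.

```python
# Each workflow step is a function: state -> set of allowed actions.
def steps_matching(demo_action, state, step_library):
    """Return all workflow steps z_t with demo_action in z_t(state)."""
    return [z for z in step_library if demo_action in z(state)]

def click_tag(tag):
    """Workflow step: click any DOM element with the given tag."""
    def step(state):
        return {('click', e) for e in state.dom_elements if e.tag == tag}
    return step

# Example library; a real one would also include Near, SameRow, Class, ...
step_library = [click_tag('img'), click_tag('div'), click_tag('span')]
# The lattice is the per-time-step list of matching steps:
# lattice = [steps_matching(a_t, s_t, step_library) for (s_t, a_t) in demo]
```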
For example in FIG2, the workflow step Click(Near(Text("Bob"))) (click an element near the text "Bob") is too specific to the demonstration scenario, while Click(Tag("div")) (click on any <div> element) is too general and covers too many irrelevant actions. The next section describes how the workflow policy π_w learns which workflow steps to use. Figure 2: From each demonstration, we induce a workflow lattice based on the actions in that demonstration; for the demonstration with goal = {task: forward, from: Bob, to: Alice}, the lattice contains steps such as Click(And(Tag("img"), Class("icon"))), Type(SameRow(Like("to")), Field("to")), and Type(Tag("input"), Field("to")). Given a new environment, the workflow policy samples a workflow (a path in the lattice, as shown in bold) and then samples actions that fit the steps of the workflow. Our workflow policy interacts with the environment to generate an episode in the following manner. At the beginning of the episode, the policy conditions on the provided goal g, and selects a demonstration d that carried out a similar goal: $d = \arg\max_{d'} \mathrm{sim}(g, g_{d'})$, where sim(g, g_d) measures the similarity between g and the goal g_d of demonstration d. In our tasks, we simply let sim(g, g_d) be 1 if the structured goals share the same keys, and −∞ otherwise. Then, at each time step t with environment state s_t, we sample a workflow step z_t according to the following distribution: $\pi_w(z \mid d, t) = \exp(\psi_{z,t,d}) \,/ \sum_{z' \in Z_t} \exp(\psi_{z',t,d})$, where each ψ_{z,t,d} is a separate scalar parameter to be learned. Finally, we sample an action a_t uniformly from the set z_t(s_t), i.e. $\pi_w(a_t \mid z_t, s_t) = \mathbb{1}[a_t \in z_t(s_t)] / |z_t(s_t)|$. The overall probability of exploring an episode e = (s_1, a_1, ..., s_T, a_T) is then $p_w(e) = \prod_{t=1}^{T} \big[\sum_{z \in Z_t} \pi_w(z \mid d, t)\, \pi_w(a_t \mid z, s_t)\big]\, p(s_t \mid s_{t-1}, a_{t-1})$, where p(s_t | s_{t−1}, a_{t−1}) is the (unknown) state transition probability. Note that π_w(z|d, t) is not a function of the environment state s_t at all. Its decisions depend only on the selected demonstration and the current time t. This environment-blindness means that the workflow policy uses far fewer parameters than a state-dependent policy, enabling it to learn more quickly and preventing overfitting. Due to environment-blindness, the workflow policy cannot solve the task, but it quickly learns to prefer certain good behaviors, which can help the neural policy learn. To train the workflow policy, we use a variant of the REINFORCE algorithm BID49 BID44. In particular, after rolling out an episode e = (s_1, a_1, ..., s_T, a_T), we approximate the gradient using the unbiased estimate $\sum_{t=1}^{T} \nabla \log \pi_w(z_t \mid d, t)\,(G_t - v_{d,t})$, where G_t is the return at time step t and v_{d,t} is a baseline term for variance reduction. Sampled episodes from the workflow policy that receive a positive reward are stored in a replay buffer, which will be used for training the neural policy π_n. As outlined in Figure 1, the neural policy is learned using both on-policy and off-policy updates (where episodes are drawn from the replay buffer). Both updates use A2C, the synchronous version of the advantage actor-critic algorithm. Since only episodes with reward +1 enter the replay buffer, the off-policy updates behave similarly to supervised learning on optimal trajectories. Furthermore, successful episodes discovered during on-policy exploration are also added to the replay buffer. Model architecture. We propose DOMNET, a neural architecture that captures the spatial and hierarchical structure of the DOM tree.
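A sketch of the workflow policy and its REINFORCE update, assuming a fixed lattice and one logit ψ per (workflow step, time step) as described above; the learning rate and update form follow the standard REINFORCE-with-baseline recipe.

```python
import numpy as np

class WorkflowPolicy:
    """Minimal sketch of the environment-blind workflow policy pi_w.
    psi[t] holds one logit per candidate workflow step at time t."""
    def __init__(self, lattice, lr=0.1):
        self.lattice = lattice                       # lattice[t] = list of steps
        self.psi = [np.zeros(len(steps)) for steps in lattice]
        self.lr = lr

    def sample_step(self, t, rng):
        logits = self.psi[t]
        p = np.exp(logits - logits.max())
        p /= p.sum()                                 # softmax over steps Z_t
        return rng.choice(len(logits), p=p), p

    def reinforce_update(self, trajectory, returns, baselines):
        # trajectory: list of (t, chosen step index z, softmax probs p)
        for (t, z, p), G, v in zip(trajectory, returns, baselines):
            grad_log = -p                            # d log softmax / d psi
            grad_log[z] += 1.0
            self.psi[t] += self.lr * (G - v) * grad_log
```

Usage is as in Figure 1: sample a step per time step with sample_step, sample an action uniformly from the step's action set, and call reinforce_update on the finished episode.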
As illustrated in FIG4, the model first embeds the DOM elements and the input goal, and then applies a series of attentions on the embeddings to finally produce a distribution over actions π_n(a|s) and a value function V(s), the critic. We highlight our novel DOM embedder, and defer other details to Appendix C. We design our DOM embedder to capture the various interactions between DOM elements, similar to recent work in graph embeddings BID24 BID35 BID16. In particular, DOM elements that are "related" (e.g., a checkbox and its associated label) should pass their information to each other. To embed a DOM element e, we first compute the base embedding v^e_base by embedding and concatenating its attributes (tag, classes, text, etc.). In order to capture the relationships between DOM elements, we next compute two types of neighbor embeddings: 1. We define the spatial neighbors of e to be any element e' within 30 pixels of e, and then sum up their base embeddings to get the spatial neighbor embedding v^e_spatial. 2. We define the depth-k tree neighbors of e to be any element e' such that the least common ancestor of e and e' in the DOM tree has depth at most k. Intuitively, tree neighbors of a higher depth are more related. For each depth k, we apply a learnable affine transformation f on the base embedding of each depth-k tree neighbor e', and then apply max pooling to get the tree neighbor embedding v^e_tree. We evaluate our approach on three suites of interactive web tasks: 1. MiniWoB: the MiniWoB benchmark of BID41. 2. MiniWoB++: a new set of tasks that we constructed to incorporate additional challenges not present in MiniWoB, such as stochastic environments and variation in natural language. 3. Alaska: the mobile flight booking interface for Alaska Airlines, inspired by the FormWoB benchmark of BID41. We describe the common task settings of the MiniWoB and MiniWoB++ benchmarks, and defer the description of the Alaska benchmark to Section 6.3.3. Environment. Each task contains a 160px × 210px environment and a goal specified in text. The majority of the tasks return a single sparse reward at the end of the episode: either +1 (success) or −1 (failure). For greater consistency among tasks, we disabled all partial rewards in our experiments. The agent has access to the environment via a Selenium web driver interface. The public MiniWoB benchmark contains 80 tasks. We filtered for the 40 tasks that only require actions in our action space, namely clicking on DOM elements and typing strings from the input goal. Many of the excluded tasks involve somewhat specialized reasoning, such as being able to compute the angle between two lines, or solve algebra problems. For each task, we used Amazon Mechanical Turk to collect 10 demonstrations, which record all mouse and keyboard events along with the state of the DOM when each event occurred. Evaluation metric. We report the success rate: the percentage of test episodes with reward +1. Since we have removed partial rewards, the success rate is a linear scaling of the average reward, and is equivalent to the definition of success rate in BID41. We compare the success rates across the MiniWoB tasks of the following approaches: • SHI17: the system from BID41, pre-trained with behavioral cloning on 10 minutes of demonstrations (approximately 200 demonstrations on average) and fine-tuned with RL. Unlike DOMNET, this system primarily uses a pixel-based representation of the state.
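A rough PyTorch sketch of the two neighbor embeddings, assuming precomputed element positions and pairwise LCA depths; the way DOMNET combines these with the goal attention is not reproduced here, and the final concatenation is an illustrative choice.

```python
import torch

def dom_neighbor_embeddings(base, pos, lca_depth, f_k, radius=30.0):
    """Sketch of the DOMNET spatial and tree neighbor embeddings.

    base:      (n, d) base embeddings (from embedded tag/class/text attributes)
    pos:       (n, 2) pixel coordinates of element centers
    lca_depth: (n, n) depth of the least common ancestor in the DOM tree
    f_k:       list of nn.Linear layers, one affine map per tree depth k
    """
    n, d = base.shape
    dist = torch.cdist(pos, pos)                          # pairwise pixel distances
    spatial_mask = (dist <= radius).float()
    v_spatial = spatial_mask @ base                       # sum over spatial neighbors

    v_tree = []
    for k, f in enumerate(f_k):                           # depth-k tree neighbors
        mask = (lca_depth <= k).unsqueeze(-1)             # (n, n, 1) boolean
        transformed = f(base).unsqueeze(0).expand(n, n, d)
        neg_inf = torch.full_like(transformed, float('-inf'))
        # max pool over the transformed embeddings of depth-k neighbors
        v_tree.append(torch.where(mask, transformed, neg_inf).max(dim=1).values)
    return torch.cat([base, v_spatial] + v_tree, dim=-1)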
• DOMNET+BC+RL: our proposed neural policy, DOMNET, but pre-trained with behavioral cloning on 10 demonstrations and fine-tuned with RL, like SHI17. During behavioral cloning, we apply early stopping based on the reward on a validation set. • DOMNET+WGE: our proposed neural policy, DOMNET, trained with workflow-guided exploration on 10 demonstrations. For DOMNET+BC+RL and DOMNET+WGE, we report the test success rate at the time step where the success rate on a validation set reaches its maximum. The results are shown in Figure 3. By comparing SHI17 with DOMNET+BC+RL, we can roughly evaluate the contribution of our new neural architecture DOMNET, since the two share the same training procedure (BC+RL). While SHI17 also uses the DOM tree to compute text alignment features in addition to the pixel-level input, our DOMNET uses the DOM structure more explicitly. We find DOMNET+BC+RL to empirically improve the success rate over SHI17 on most tasks. By comparing DOMNET+BC+RL and DOMNET+WGE, we find that workflow-guided exploration enables DOMNET to perform even better on the more difficult tasks, which we analyze in the next section. Some of the workflows that the workflow policy π_w learns are shown in Appendix B. 6.3 ANALYSIS We constructed and released the MiniWoB++ benchmark of tasks to study additional challenges a web agent might encounter, including: longer time horizons (click-checkboxes-large), "soft" reasoning about natural language (click-checkboxes-soft), and stochastically varying layouts (multi-orderings, multi-layouts). TAB1 lists the tasks and their time horizons (the number of steps needed for a perfect policy to carry out the longest goal) as a crude measure of task complexity. We first compare the performance of DOMNET trained with BC+RL (baseline) and DOMNET trained with WGE (our full approach). The proposed WGE model outperforms the BC+RL model by an average of 42% absolute success rate. We analyzed their behaviors and noticed two common failure modes of training with BC+RL that are mitigated by instead training with WGE: 1. The BC+RL model has a tendency to take actions that prematurely terminate the episode (e.g., hitting "Submit" in click-checkboxes-large before all required boxes are checked). One likely cause is that these actions occur across all demonstrations, while other non-terminating actions (e.g., clicking different checkboxes) vary across demonstrations. 2. The BC+RL model occasionally gets stuck in cyclic behavior such as repeatedly checking and unchecking the same checkbox. These failure modes stem from overfitting to parts of the demonstrations, which WGE avoids. Next, we analyze the workflow policy π_w learned by WGE. The workflow policy π_w by itself is too simplistic to work well at test time for several reasons: 1. Workflows ignore the environment state and therefore cannot respond to differences in the environment, such as the different layouts in multi-layouts. 2. The workflow constraint language lacks the expressivity to specify certain actions, such as clicking on synonyms of a particular word in click-checkboxes-soft. 3. The workflow policy lacks the expressivity to select the correct workflow for a given goal. Nonetheless the workflow policy π_w is sufficiently constrained to discover reward some of the time, and the neural policy π_n is able to learn the right behavior from such episodes. As such, the neural policy can achieve high success rates even when the workflow policy π_w performs poorly.
While MiniWoB tasks provide structured goals, we can also apply our approach to natural language goals. We collected a training dataset using the overnight data collection technique BID47. In the email-inbox-nl task, we collected natural language templates by asking annotators to paraphrase the task goals (e.g., "Forward Bob's message to Alice" → "Email Alice the email I got from Bob") and then abstracting out the fields ("Email <TO> the email I got from <FROM>"). During training, the workflow policy π_w receives states with both the structured goal and the natural language utterance generated from a random template, while the neural policy π_n receives only the utterance. At test time, the neural policy is evaluated on unseen utterances. The results in TAB1 show that the WGE model can learn to understand natural language goals (93% success rate). Note that the workflow policy needs access to the structured inputs only because our constraint language for workflow steps operates on structured inputs. The constraint language could potentially be modified to work with utterances directly (e.g., After("to") extracts the utterance word after "to"), but we leave this for future work. We applied our approach to the Alaska benchmark, a more realistic flight search task on the Alaska Airlines mobile site inspired by the FormWoB task in BID41. In this task, the agent must complete the flight search form with the provided information (6-7 fields). We ported the web page to the MiniWoB framework with a larger 375px × 667px screen, replaced the server backend with a surrogate JavaScript function, and clamped the environment date to March 1, 2017. Following BID41, we give partial reward based on the fraction of correct fields in the submitted form if all required fields are filled in. Despite this partial reward, the reward is still extremely sparse: there are over 200 DOM elements (compared to ≈10-50 in MiniWoB tasks), and a typical episode requires at least 11 actions involving various types of widgets such as autocompletes and date pickers. The probability that a random agent gets positive reward is less than 10^−20. We first performed experiments on Alaska-Shi17, a clone of the original Alaska Airlines task in BID41, where the goal always specifies a roundtrip flight (two airports and two dates). On their dataset, our approach, using only 1 demonstration, achieves an average reward of 0.97, compared to their best result of 0.57, which uses around 80 demonstrations. Our success motivated us to test a more difficult version of the task which additionally requires selecting the flight type (a checkbox for one-way flights), the number of passengers (an increment-decrement counter), and the seat type (hidden under an accordion). We achieve an average reward of 0.86 using 10 demonstrations. This demonstrates that our method can handle long horizons on real-world websites. To evaluate the demonstration efficiency of our approach, we compare DOMNET+WGE with DOMNET+BC+RL trained on increased numbers of demonstrations. We compare DOMNET+WGE trained on 10 demonstrations with DOMNET+BC+RL trained on 10, 100, 300, and 1000 demonstrations. The test rewards on several of the hardest tasks are summarized in FIG3. Increasing the number of demonstrations improves the performance of BC+RL, as it helps prevent overfitting. However, on every evaluated task, WGE trained with only 10 demonstrations still achieves much higher test reward than BC+RL with 1000 demonstrations.
This corresponds to an over 100x sample-efficiency improvement of our method over behavioral cloning in terms of the number of demonstrations. Learning agents for the web. Previous work on learning agents for web interactions falls into two main categories. First, simple programs may be specified by the user BID50 or may be inferred from demonstrations BID1. Second, soft policies may be learned from scratch or "warm-started" from demonstrations BID41. Notably, sparse rewards prevented BID41 from successfully learning, even when using a moderate number of demonstrations. While policies have proven to be more difficult to learn, they have the potential to be expressive and flexible. Our work takes a step in this direction. Sparse rewards without prior knowledge. Numerous works attempt to address sparse rewards without incorporating any additional prior knowledge. Exploration methods BID32 BID11 BID48 help the agent better explore the state space to encounter more reward; shaping rewards BID31 directly modify the reward function to encourage certain behaviors; and other works BID22 augment the reward signal with additional unsupervised reward. However, without prior knowledge, helping the agent receive additional reward is difficult in general. Imitation learning. Various methods have been proposed to leverage additional signals from experts. For instance, when an expert policy is available, methods such as DAGGER BID40 and AGGREVATE BID39 BID43 can query the expert policy to augment the dataset for training the agent. When only expert demonstrations are available, inverse reinforcement learning methods BID0 BID15 BID19 BID7 infer a reward function from the demonstrations without using reinforcement signals from the environment. The usual method for incorporating both demonstrations and reinforcement signals is to pre-train the agent with demonstrations before applying RL. Recent work extends this technique by introducing different objective functions and regularization during pre-training, and mixing demonstrations and rolled-out episodes during RL updates BID20 BID18 BID46 BID30. Instead of training the agent on demonstrations directly, our work uses demonstrations to guide exploration. The core idea is to explore trajectories that lie in a "neighborhood" surrounding an expert demonstration. In our case, the neighborhood is defined by a workflow, which only permits action sequences analogous to the demonstrated actions. Several previous works also explore neighborhoods of demonstrations via reward shaping BID10 BID21 or off-policy sampling BID26. One key distinction of our work is that we define neighborhoods in terms of action similarity rather than state similarity. This distinction is particularly important for web tasks: we can easily and intuitively describe how two actions are analogous (e.g., "they both type a username into a textbox"), while it is harder to decide if two web page states are analogous (e.g., the email inboxes of two different users will have completely different emails, but they could still be analogous, depending on the task). Hierarchical reinforcement learning. Hierarchical reinforcement learning (HRL) methods decompose complex tasks into simpler subtasks that are easier to learn. Main HRL frameworks include abstract actions BID45 BID25 BID17, abstract partial policies BID33, and abstract states BID38 BID14 BID27. These frameworks require varying amounts of prior knowledge.
The original formulations required programmers to manually specify the decomposition of the complex task, while BID3 only requires supervision to identify subtasks, and BID6 and BID12 learn the decomposition fully automatically, at the cost of performance. Within the HRL methods, our work is closest to BID33 and the line of work on constraints in robotics BID36 BID34. The work in BID33 specifies partial policies, which constrain the set of possible actions at each state, similar to our workflow items. In contrast to previous instantiations of the HAM framework BID2 BID28, which require programmers to specify these constraints manually, our work automatically induces the constraints from user demonstrations, which do not require special skills to provide. The approaches of BID36 also resemble our work in learning constraints from demonstrations, but differ in the way they use the demonstrations. Whereas our work uses the learned constraints for exploration, BID36 only uses the constraints for planning and builds a knowledge base of constraints to use at test time. Summary. Our workflow-guided framework represents a judicious combination of demonstrations, abstractions, and expressive neural policies. We leverage the targeted information of demonstrations and the inductive bias of workflows. But this is only used for exploration, protecting the expressive neural policy from overfitting. As a result, we are able to learn rather complex policies from a very sparse reward signal and very few demonstrations. Acknowledgments. This work was supported by NSF CAREER Award IIS-1552635. We try to keep the constraint language as minimal and general as possible. The main part of the language is the object selector (elementSet), which selects either objects that share a specified property, or objects that align spatially. These two types of constraints should be applicable in many typical RL domains such as game playing and robot navigation. To avoid a combinatorial explosion of relatively useless constraints, we limit the number of nested elementSet applications to 3, where the third application must be the Class filter. When we induce workflow steps from a demonstration, the valid literal values for tags, strings, and classes are extracted from the demonstration states. Example workflows: login-user: "Enter the username "ashlea" and password "k0UQp" and press login." Goal: {username: ashlea, password: k0UQp}. Workflow: Type(Tag("input_text"), Field("username")) → Type(Tag("input_password"), Field("password")) → Click(Like("Login")). email-inbox: "Find the email by Ilka and forward it to Krista." Goal: {task: forward, name: Ilka, to: Krista}. Workflow: Click(Near(Field("by"))) → Click(SameCol(Like("Forward"))) → Type(And(Near("Subject"), Class("forward-sender")), Field("to")) → Click(Tag("span")).
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
ryTp3f-0-
We solve the sparse rewards problem on web UI tasks using exploration guided by demonstrations
Nowadays deep learning is one of the main topics in almost every field. It has helped to achieve amazing results in a great number of tasks. The main problem is that this kind of learning, and consequently the neural networks that can be defined as deep, is resource-intensive. They need specialized hardware to perform a computation in a reasonable time. Unfortunately, that alone is not sufficient to make deep learning "usable" in real life. Many tasks are required to run as close to real-time as possible. So it is necessary to optimize many components, such as code, algorithms, numeric accuracy and hardware, to make them "efficient and usable". All these optimizations can help us to produce incredibly accurate and fast learning models. Our work focused on two main tasks that have gained significant attention from researchers: automated face detection and emotion recognition. Since these are computationally intensive tasks, not much has been specifically developed or optimized for embedded platforms. We use a dataset that has 6 classes (FIG2). To perform reasonable tests, an input image of size 100x100x3 has been used. As shown in Fig. 4, we compared results based on the computation time of the pipeline with and without accelerations. The Raspberry Pi needs a computation time that is double the time needed by a Movidius: for example, the former needs 150ms per frame against the 70ms of the latter. We conducted several tests and report the inference time for each task and for the whole pipeline in TAB0.
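A generic timing harness consistent with how such per-stage latencies are usually measured; run_stage is a placeholder for the actual detection or recognition inference call, which the text does not specify.

```python
import time

def time_stage(run_stage, frame, n_warmup=5, n_runs=50):
    """Average per-frame latency of one pipeline stage in milliseconds
    (a generic sketch; `run_stage` stands in for the face-detection or
    emotion-recognition inference call)."""
    for _ in range(n_warmup):          # warm-up runs to stabilize caches
        run_stage(frame)
    start = time.perf_counter()
    for _ in range(n_runs):
        run_stage(frame)
    return (time.perf_counter() - start) / n_runs * 1000.0

# e.g., total = time_stage(detect_faces, img) + time_stage(classify_emotion, face)
```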
[ 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
SyxkWkdPoX
Embedded architecture for deep learning on optimized devices for face detection and emotion recognition
Word embedding is a useful approach to capture co-occurrence structures in a large corpus of text. In addition to the text data itself, we often have additional covariates associated with individual documents in the corpus (e.g. the demographic of the author, the time and venue of publication, etc.), and we would like the embedding to naturally capture the information of the covariates. In this paper, we propose a new tensor decomposition model for word embeddings with covariates. Our model jointly learns a \emph{base} embedding for all the words as well as a weighted diagonal transformation to model how each covariate modifies the base embedding. To obtain the specific embedding for a particular author or venue, for example, we can then simply multiply the base embedding by the transformation matrix associated with that author or venue. The main advantages of our approach are data efficiency and the interpretability of the covariate transformation matrix. Our experiments demonstrate that our joint model learns substantially better embeddings conditioned on each covariate compared to the standard approach of learning a separate embedding for each covariate using only the relevant subset of data. Furthermore, our model encourages the embeddings to be "topic-aligned" in the sense that the dimensions have specific independent meanings. This allows our covariate-specific embeddings to be compared by topic, enabling downstream differential analysis. We empirically evaluate the benefits of our algorithm on several datasets, and demonstrate how it can be used to address many natural questions about the effects of covariates. The use of factorizations of co-occurrence statistics in learning low-dimensional representations of words is an area that has received a large amount of attention in recent years, perhaps best represented by how widespread algorithms such as GloVe BID10 and Word2Vec BID8 are in downstream applications. In particular, suppose we have a set of words i ∈ [n], where n is the size of the vocabulary. The aim is to, for a fixed dimensionality d, assign a vector v_i ∈ R^d to each word in the vocabulary in a way that preserves semantic structure. In many settings, we have a corpus with additional covariates on individual documents. For example, we might have news articles from both conservative and liberal-leaning publications, and using the same word embedding for all the text can lose interesting information. Furthermore, we suggest that there are meaningful semantic relationships that can be captured by exploiting the differences in these conditional statistics. To this end, we propose the following two key questions that capture the problems that our work addresses, and for each, we give a concrete motivating example of a problem in the semantic inference literature that it encompasses. Question 1: How can we leverage conditional co-occurrence statistics to capture the effect of a covariate on word usage? For example, did William Shakespeare truly write all the works credited to him, or have there been other "ghostwriters" who have contributed to the Shakespeare canon? This is the famous Shakespeare authorship question, for which historians have proposed various candidates as the true authors of particular plays or poems BID5.
If the latter scenario is the case, what in particular distinguishes the writing style of one candidate from another, and how can we infer who the most likely author of a work is from a set of candidates?

Question 2: Traditional factorization-based embedding methods are rotationally invariant, so that individual dimensions do not have semantic meaning. How can we break this invariance to yield a model which aligns topics with interpretable dimensions? There has been much interest in the differences in language and rhetoric that appeal to different demographics. For example, studies have been done regarding "ideological signatures" specific to voters by partisan alignment (Robinson et al.), in which linguistic differences were proposed along focal axes, such as the "mind versus the body" in texts with more liberal or conservative ideologies. How can we systematically infer topical differences such as these between different communities?

Questions such as these, or more broadly covariate-specific trends in word usage, motivated this study. Concretely, our goal is to provide a general framework through which embeddings of sets of objects with co-occurrence structure, as well as the effects of conditioning on particular covariates, can be learned jointly. As a byproduct, our model also gives natural meaning to the different dimensions of the embeddings, by breaking the rotational symmetry of previous embedding-learning algorithms, such that the resulting vector representations of words and covariates are "topic-aligned".

Previous Work. Typically, algorithms for learning embeddings rely on the intuition that some function of the co-occurrence statistics is low rank. Studies such as GloVe and Word2Vec proposed methods based on minimizing the low-rank approximation error of nonlinear transforms of the co-occurrence statistics. Let A be the n × n matrix with A_ij the co-occurrence between words i and j, where co-occurrence is defined as the (possibly weighted) number of times the words occur together in a window of fixed length. For example, GloVe aimed to find vectors v_i ∈ R^d and biases b_i ∈ R such that the loss

Σ_{i,j} f(A_ij) (v_i^T v_j + b_i + b_j − log A_ij)^2

was minimized, where f was some fixed increasing weight function. Word2Vec aimed to learn vector representations via minimizing a neural-network based loss function. A related embedding approach is to directly perform principal component analysis on the PMI (pointwise mutual information) matrix of the words (Bullinaria & Levy). PMI-factorization based methods aim to find vectors {v_i} such that

v_i · v_j ≈ PMI(i, j) = log ( P(i, j) / (P(i) P(j)) ),

where the probabilities are taken over the co-occurrence matrix. This is essentially the same as finding a low-rank matrix V such that V^T V ≈ PMI, and empirical results show that the resulting embedding captures useful semantic structure. The ideas of several previous studies on the geometry of word embeddings were helpful in formulating our model. A random-walk based mathematical framework for understanding these different successful learning algorithms was proposed in BID1, in which the corpus generation process is a random process driven by the random walk of a discrete-time discourse vector c_t ∈ R^d. In this framework, our work can be thought of as analyzing the effects of covariates on the random walk transition kernel and the stationary distribution. Additionally, there have been previous studies of "multi-sense" word embeddings BID11 BID9, which is similar to our idea that the same word can have different meanings in different contexts.
However, in the multi-sense setting, the idea is that the word intrinsically has different meanings (for example, "crane" can be an action, a bird, or a vehicle), whereas in ours, the different meanings are imposed by conditioning on a covariate. Finally, tensor methods have been used in other settings recently, such as collaborative filtering BID13 and (Li & Farias), to learn the effects of conditioning on some summary statistics.

Our Contributions. There are several reasons why a joint learning model based on tensor factorization is more desirable than performing GloVe m times, where m is the number of covariates, so that each covariate-specific corpus has its own embedding. Our main contributions are a decomposition algorithm that addresses these issues, and the methods for systematic analysis we propose. The first issue that arises is sample complexity. In particular, because for the most part words are used in roughly similar ways across different contexts, the resulting embeddings should not be too different, except perhaps along specific dimensions. Thus, it is better to jointly train an embedding model along the covariates to aggregate the co-occurrence structure, especially in cases where the entire corpus is large, but many conditional corpuses (conditioned on a covariate) are small. Secondly, simply training a different embedding for each corpus makes it difficult to compare the embeddings across the covariate dimension. Because of issues such as the rotation invariance of GloVe-like models, specific dimensions mean different things across different runs (and initializations) of these algorithms. The model we propose has the additional property that it induces a natural basis to view the embeddings in, one which is "topic-aligned" in the sense that it is not rotation-invariant and thus implies independent topic meanings given to different dimensions.

Paper Organization. In section 2, we provide our embedding algorithm, as well as mathematical justification for its design. In section 3, we detail our datasets. In section 4, we validate our algorithm with respect to intrinsic properties and standard metrics. In section 5, we propose several experiments for systematic downstream analysis.

Notation. Throughout this section, we will assume a vocabulary of size n and a discrete covariate to condition on of size m (for example, the community that the corpus comes from, i.e. liberal or conservative discussion forums). It is easy to see how our algorithm generalizes to higher-order tensor decompositions when there are multiple dimensions of covariates to condition on (for example, slicing along community and slicing along timeframe simultaneously). Words will be denoted with indices i, j ∈ [n] and covariates with index k ∈ [m]. Dimensions in our embedding are referred to by index t ∈ [d]. We will denote the co-occurrence tensor as A ∈ R^{n×n×m}, where A_ijk denotes how many times words i and j occurred together within a window of some fixed length, in the corpus coming from covariate k. The output of our algorithm will be two sets of vectors, {v_i ∈ R^d} and {c_k ∈ R^d}, as well as bias terms that also fit into the objective. Finally, let ⊙ denote the element-wise product between two vectors.

Objective Function and Discussion. Here, we give the objective function our method minimizes, and provide some explanation for how one should imagine the effect of the covariate weights.
The objective function we minimize is the following partial non-negative tensor factorization objective for jointly training word vectors and weight vectors representing the effect of covariates, adapted from the original GloVe objective (note that c_k ⊙ v_i plays the role of the embedding of word i conditioned on covariate k):

J = Σ_{i,j ∈ [n], k ∈ [m]} f(A_ijk) ( (c_k ⊙ v_i)^T (c_k ⊙ v_j) + b_ik + b_jk − log A_ijk )^2,

which is to be optimized over {v_i ∈ R^d}, {c_k ∈ R^d}, and {b_ik ∈ R}. To gain a little more intuition for why this is a reasonable objective function, note that the resulting objective for a single "covariate slice" k is essentially

Σ_{i,j} f(A_ijk) ( (c_k ⊙ v_i)^T (c_k ⊙ v_j) + b_ik + b_jk − log A_ijk )^2,

which is equivalent to the original GloVe model since the c_k can be absorbed into the v_i. We used f(x) = (min(100, x)/100)^{0.75}, to parallel the original objective function in BID10. One can think of the dimensions our model learns as independent topics, and the effects of the covariate weights c_k as upweighting or downweighting the importance of these topics in contributing to the conditional co-occurrence statistics.

Here we provide a geometric perspective on our model in the context of some prior work. This geometric interpretation is not necessary to apply our method and to understand its results. Throughout this section, note that at a high level, the aim of our method is to learn sets {v_i ∈ R^d}, {c_k ∈ R^d}, and {b_ik ∈ R} for 1 ≤ i ≤ n, 1 ≤ k ≤ m, such that for a fixed k, the vectors {c_k ⊙ v_i} and the biases {b_ik} approximate the vectors and biases that would have been learned from running GloVe on only the k-th slice of the co-occurrence tensor. We now provide some rationale for why this is a reasonable objective. The main motivation for our algorithm is the following geometric intuition, inspired in part by the model of BID1. In their model, corpus generation is determined by the nature of the random walk performed by a context vector c_t over time steps. In particular, for each timestep t, the context vector c_t emits words i with probability ∝ exp(v_i · c_t), and then c_t updates its location based on a transition distribution. The assumption is that the transition distribution has a stationary distribution that is uniform over some ellipse. One natural equivalence family of transition matrices that preserves this property is the group resulting from multiplication by a positive semidefinite matrix. Therefore, we consider a natural extension of this model, where the embedding resulting from conditioning on different covariates is equivalent to multiplying the transition matrix by a symmetric PSD matrix. Alternatively, this is equivalent to a model where the transition kernel remains unchanged, but the embedding vectors themselves are multiplied by a symmetric PSD matrix (namely, the Moore-Penrose pseudoinverse of the original PSD matrix). This is the viewpoint we adopt.

Figure 1: The effects of conditioning on covariates (covariates are discussion forums, described in Section 3). Left: baseline embedding with some possible word embedding positionings. Middle, right: embedding under effect of covariates: for example, "hillary" and "crooked" are pushed closer together under effects of The Donald, and "healthcare" pushed closer to "universal" and "free" under effects of SandersForPresident. Context vectors and random walk transitions also shown.

In this framework, we assign each covariate k its own symmetric PSD matrix, B_k. It is well-known that any symmetric PSD matrix can be factorized as B_k = R_k^T D_k R_k for some orthonormal basis R_k and some (nonnegative) diagonal D_k.
Thus it suffices to consider the effect of a covariate on some ground truth "base embedding" M as applying the linear operator B_k to each embedding vector, resulting in the new embedding B_k M. This model is quite expressive in its own right, but we consider a natural restriction where we propose that there exists some universal (at least across covariates) basis R under which the resulting embeddings are affected by covariates via multiplication by just a diagonal matrix, instead of a PSD matrix. In particular, we note that

B_k M = R_k^T D_k R_k M = R_k^T D_k M_k,

where M_k = R_k M is a rotated version of M. Now, in the restricted model where all the R_k are equal, we can write all the M_k as RM, so it suffices to just consider the rotation of the basis that the original embedding was trained in, where R is just the identity (since matrix-factorization based word embedding models are rotation invariant). Under this model, the co-occurrence statistics under some transformation should be equivalent to M^T D_k^2 M. A careful comparison shows that this approximation is precisely that which is implied by equation 4, as desired. Note that this is essentially saying that in this distributed word representation model, there exists some rotation of the embedding space under which the effect of the covariate separates along dimensions. The implication is that there is some set of independent "topics" that each covariate will upweight or downweight in importance (or possibly ignore altogether with a weight of 0), characterizing the effect of this conditioning directly in terms of the effect on these topics.

Algorithm Details. Our model learns the resulting parameters {v_i ∈ R^d}, {c_k ∈ R^d}, and {b_ik} by using the Adam BID6 algorithm, which was empirically shown to yield good convergence in the original GloVe setting. The specific hyperparameters used for each dataset will be described in the next section. The word and covariate weight vectors were initialized as random unit vectors.

We evaluated our method on two primary datasets. In both datasets, co-occurrence statistics were formed by considering size 8 windows and using an inverse-distance weighting (e.g. neighboring words had 1 added to their co-occurrence, and words 3 apart had 1/3 added), which was suggested by some implementations of BID10. The first dataset, referred to as the "book dataset", consists of the full text from 29 books written by 4 different authors. The books we used were J.K. Rowling's "Harry Potter" series (7 books), "Cormoran Strike" series (3 books), and "The Casual Vacancy"; C. S. Lewis's "The Chronicles of Narnia" series (7 books), and "The Screwtape Letters"; George R. R. Martin's "A Song of Ice and Fire" series (5 books); and Stephenie Meyer's "Twilight" series (4 books), and "The Host". These books are fiction works in similar genres, with highly overlapping vocabularies and common themes. A trivial way of learning series-specific tendencies in word usage would be to cluster according to unique vocabularies (for example, only the "Harry Potter" series would have words such as "Harry" and "Ron" frequently), so the co-occurrence tensor was formed by looking at all words that occurred in all of the series with multiple books, which eliminated all series-specific words. Furthermore, series by the same author had very different themes, so there is no reason intrinsic to the vocabulary to believe the weight vectors would cluster by author.
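As a concrete illustration of the training procedure just described, here is a minimal PyTorch sketch of the joint objective and one Adam step. The batch of tensor entries is synthetic, the dimensions are toy values, and the exact form of the log term follows the GloVe-style reconstruction above, so treat this as an assumption about the implementation rather than the authors' exact code.

import torch

n, m, d = 1000, 6, 100  # toy vocabulary size, number of covariates, embedding dimension
V = torch.nn.functional.normalize(torch.randn(n, d)).requires_grad_()  # word vectors v_i (random unit init)
C = torch.nn.functional.normalize(torch.randn(m, d)).requires_grad_()  # covariate weight vectors c_k
B = torch.zeros(n, m, requires_grad=True)                              # biases b_ik

def weight_fn(x):
    # f(x) = (min(100, x) / 100) ** 0.75, as stated in the text
    return (torch.clamp(x, max=100.0) / 100.0) ** 0.75

def batch_loss(i, j, k, a):
    # GloVe-style weighted squared error on a batch of nonzero entries A_ijk = a
    u, w = C[k] * V[i], C[k] * V[j]  # covariate-specific embeddings c_k ⊙ v
    pred = (u * w).sum(dim=1) + B[i, k] + B[j, k]
    return (weight_fn(a) * (pred - torch.log(a)) ** 2).sum()

opt = torch.optim.Adam([V, C, B], lr=1e-5)  # learning rate reported for the politics dataset
i, j = torch.randint(0, n, (512,)), torch.randint(0, n, (512,))
k, a = torch.randint(0, m, (512,)), torch.rand(512) * 50 + 1  # synthetic counts
opt.zero_grad(); batch_loss(i, j, k, a).backward(); opt.step()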
For the book dataset, the vocabulary size was 5,020, and after tuning our algorithm to embed this dataset, we used 100 dimensions and a learning rate of 10^{-5}. The second dataset, referred to as the "politics dataset", was a collection of comments made in 2016 in 6 different subreddits on the popular discussion forum reddit, and was selected to address both Questions 1 and 2. The covariate was the discussion forum, and the subreddits we used were AskReddit, news, politics, SandersForPresident, The Donald, and WorldNews. AskReddit was chosen as a baseline discussion forum with a very general vocabulary usage, and the discussion forums for the Sanders and Trump support bases were also selected, as well as three politically-relevant but more neutral communities (it should be noted that the politics discussion forum tends to be very left-leaning). We considered a vocabulary of size 15,000, after removing the 28 most common words (suggested by other works in the embedding literature) and entries of the co-occurrence tensor with fewer than 10 occurrences (for the sake of training efficiency). The embedding used 200 dimensions and a learning rate of 10^{-5}.

We performed the tensor decomposition algorithm on the book dataset, and considered how well the weight vectors of the covariate clustered by series and also by author. (Figure: (a) 2D PCA of book dataset weight vectors (tensor decomposition algorithm, 100 dimensions); (b) 2D PCA of book dataset topic vectors, as predicted by LDA (100 topics).) For every book in every series, the closest weight vectors by series were all the books in the same series. Furthermore, for every book written by every author, the closest weight vectors by author were all the books by the same author. This clear clustering behavior, even when only conditioning on co-occurrence statistics for words that appear in all series (throwing out series-specific terms), implies the ability of the weight vectors to cluster according to higher-order information, perhaps such as writing style. As a baseline, we considered the topic breakdown on the same co-occurrence statistics predicted by Latent Dirichlet Allocation BID2 with various dimensionalities, all of which failed to produce any meaningful clusters.

Consider the problem of learning an embedding for the text of a particular book (or series). The main advantage given by using the covariate-specific embedding that our method yields over applying GloVe to the individual slice is data efficiency: by pooling the co-occurrence statistics of words across other books, we are able to give better (less noisy) estimates of the vectors, especially for sparsely-occurring or nonexistent words. For reference, individual books contained between 26,747 and 355,814 words. To this end, we performed the following experiment. For some book k, consider two embeddings: 1) the result of performing GloVe on the co-occurrence statistics of just the book, and 2) the (weighted) embedding resulting from our algorithm, specific to the book. Then, we tested these resulting embeddings using a standard suite of evaluation metrics, including cluster purity and correlation similarities. Our method outperformed method 1 on all 7 tasks, often by a significant margin. As noted in BID1, it is unreasonable to ask for sparsity in the word embeddings themselves. However, our topic-specific weighting scheme for the covariate weight vectors implies that sparsity is desirable in the weights.
The weight sparsity resulting from our algorithm was experimentally verified through many runs of our embedding method, as well as across different optimization methods. Note that in the objective function, sparse coordinates will become "stuck" at 0, because the gradient update of c_kt is proportional to c_kt:

∂J/∂c_kt = Σ_{i,j} 2 f(A_ijk) ( (c_k ⊙ v_i)^T (c_k ⊙ v_j) + b_ik + b_jk − log A_ijk ) · 2 c_kt v_it v_jt.

The dimensions that the covariates were sparse in did not overlap by much: the average number of sparse coordinates per weight vector was 20.7, and the average number of coordinates that was sparse in both vectors of a random pair was 5.2. This suggests that the set of "topics" each conditional slice of the tensor does not care about is fairly specific. Experimenting with regularizing the word vectors forced the model into a smaller subspace (i.e. sparse dimensions existed but were shared across all the words), which is not useful and provides further evidence for the natural isotropic distribution of word vectors. Regularizing the weights of the covariate vectors by adding an ℓ1 penalty term did not noticeably change the sparsity pattern. To confirm that the sparsity was a result of the separation of covariate effects rather than an artifact of our algorithm, we ran our decomposition on a co-occurrence tensor which was the result of taking the same slice (subreddit) and subsampling its entries 3 times, creating 3 slices of essentially similar co-occurrence statistics. We applied our algorithm with different learning parameters, and the resulting weight vectors after 90 iterations (when the output vectors converged) were extremely non-sparse, with between 0 and 2 sparse coordinates per weight vector. The dimensions that are specifically 0 for a covariate correspond to topics that are relatively less relevant for that covariate. In the next section, we develop methods to systematically interpret the covariate weights in terms of topics.

A simple test of inferring topic meaning (i.e. the topics coordinates are associated with) is to consider the set of words which are large in the given coordinate. Concretely, the task is, given some index t ∈ [d], to output the words whose (normalized) vectors have the largest value in dimension t. We show the results of this experiment for several of the sparse coordinates in the AskReddit weight vector. There are several conclusions to be drawn from this experiment. Firstly, while there is some clear noise, specific topic meanings do seem to appear in certain coordinates (which we infer in the table above). It is reasonable that meaning would appear out of coordinates which are sparsely weighted in some covariate, because this means that it is a topic that is relevant in other discussion forums but purposely ignored in some specific forum, so it is likely to have a consistent theme. When we performed this task for coordinates which had low variance in their weights between covariates, the resulting words were much less consistent. It is also interesting to see how covariates weight a topic whose meaning we have identified. For example, for coordinate 194 (corresponding to military themes), AskReddit placed negligible weight, news and worldnews placed weight 2.06 and 2.04 respectively, SandersForPresident and The Donald placed weight 0.41 and 0.38 respectively, and politics placed weight 0.05. This process can also be reversed: for example, consider coordinates small in worldnews and large in news.
One example was coordinate 188, and upon checking the words that were large in this coordinate ({taser, troops, supremacists, underpaid, rioters, amendment, racially, hispanic}) it seemed clear that it had themes of rights and protests, which makes sense as a domestic issue, not a global one.

We performed the following experiment: which pairs of words start off close in the baseline embedding, yet under some covariate weights move far apart (or vice versa)? Concretely, the task is, for a fixed word i and a fixed covariate k, to identify words j such that the distance ‖c_k ⊙ v_i − c_k ⊙ v_j‖ is large while ‖v_i − v_j‖ is small (or vice versa), where the magnitude of drift is quantified by the ratio of these two distances in the normalized embedding. The motivation is to find words whose general usage is similar, but which have very different meanings in specific communities. We present a few representative examples of this experiment below, for k = The Donald.

Combining the previous two sections allows us to do an end-to-end case study on words that drift under a covariate, so we can explain specifically which topics (under reweighting) caused this shift. For example, the words "immigrant" and "parasite" were significantly closer under the weights placed by The Donald, so we considered dimensions that were simultaneously large in the vector v_immigrant − v_parasite and sparse in the weight vector c_k for k = The Donald. The dimensions 89 and 139 were sparse and also the 2nd and 3rd largest coordinates in the difference vector, so they had a large contribution to the subspace which was zeroed out under the reweighting. Words that were large in these dimensions (and thus representative of the zeroed-out subspace meaning) included {misunderstanding, populace, scapegoat, rebuilding} for 89, and {answers, barriers, applicants, backstory, selfless, indigenous} for 139. This suggests two independent reasons for the drift, namely dimensions corresponding to emotional appeal and legal immigration respectively being zeroed out.

One of the most famous downstream applications of recent embedding methods such as BID10 and BID8 is their ability to capture analogies. This is typically formulated as: given words a, b, and c, find the word d such that v_a − v_b ≈ v_c − v_d. We considered how well our method captured covariate-specific analogies, which appear in a covariate-specific embedding but not most others. To this end, we considered experiments of the form: for fixed words a, b, c, determine words d such that for some covariate k, the quantity

‖(c_k ⊙ v_a − c_k ⊙ v_b) − (c_k ⊙ v_c − c_k ⊙ v_d)‖

is small, yet for other k, the quantity is large. The intuition is that under the covariate transform, v_c − v_d points roughly in the same direction as v_a − v_b, and d is close to c in semantic meaning. In particular, we set a = "hillary", c = "trump", and found words b for which there existed a d consistently at the top across subreddits (implying the existence of strong analogies). For example, when b = "woman", d = "man" was the best analogy for every weighting. Then, for these b, we considered words d whose relative rankings in the subreddits had high variance. The differential analogies captured were quite striking, and we present several representative examples in TAB4.

Discussion. We have presented a joint tensor model that essentially learns an embedding for each word and for each covariate. This model makes it very simple to compute the covariate-specific embedding: we just take the element-wise vector product. It also enables us to systematically interpret the covariate vector by looking at dimensions along which weight is large or 0.
Our experiments show that these dimensions can be interpreted as coherent topics. While we focus on word embeddings, our tensor covariate embedding model can be naturally applied in other settings. For example, there is a large amount of interest in learning embeddings of individual genes to capture biological interactions. The natural covariates here are the cell types, and our method would be able to model cell-type specific gene interactions. Another interesting setting with conditional covariates would be time-series specific embeddings, where data efficiency becomes more of an issue. We hope our framework is general enough that it will be of use to practitioners in these settings and others. We also experimented with using AdaGrad (Duchi et al.) as the optimization method, but the resulting weight vectors in the politics dataset had highly-overlapping sparse dimensions. This implies that the optimization method tried to fit the model to a smaller-dimensional subspace, which is not a desirable source of sparsity.
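A minimal sketch of the two diagnostics described above (covariate-specific drift and covariate-specific analogies), written against the V and C tensors from the earlier training sketch; the function names are illustrative and not from the original paper.

import torch
import torch.nn.functional as F

def covariate_embedding(V, C, k):
    # Row-normalized covariate-specific embedding c_k ⊙ v_i.
    return F.normalize(C[k] * V, dim=1)

def drift_ratio(V, C, k, i, j):
    # Ratio of distances: pairs far apart under covariate k but close in the
    # normalized base embedding get a large ratio (and vice versa).
    E_k, E = covariate_embedding(V, C, k), F.normalize(V, dim=1)
    return torch.norm(E_k[i] - E_k[j]) / torch.norm(E[i] - E[j])

def analogy_candidates(V, C, k, a, b, c, topn=5):
    # Words d minimizing ||(v_a - v_b) - (v_c - v_d)|| under covariate k.
    E = covariate_embedding(V, C, k)
    target = E[c] - (E[a] - E[b])  # ideal position of v_d
    dists = torch.norm(E - target, dim=1)
    return torch.topk(-dists, topn).indices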
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
B1suU-bAW
Using the same embedding across covariates doesn't make sense, we show that a tensor decomposition algorithm learns sparse covariate-specific embeddings and naturally separable topics jointly and data-efficiently.
Deep Learning has received significant attention due to its impressive performance in many state-of-the-art learning tasks. Unfortunately, while very powerful, Deep Learning is not well understood theoretically, and in particular only recently have results on the complexity of training deep neural networks been obtained. In this work we show that large classes of deep neural networks with various architectures (e.g., DNNs, CNNs, Binary Neural Networks, and ResNets), activation functions (e.g., ReLUs and leaky ReLUs), and loss functions (e.g., Hinge loss, Euclidean loss, etc) can be trained to near optimality with desired target accuracy using linear programming, in time that is exponential in the input data and parameter space dimension and polynomial in the size of the data set; improvements of the dependence in the input dimension are known to be unlikely assuming $P\neq NP$, and improving the dependence on the parameter space dimension remains open. In particular, we obtain polynomial time algorithms for training for a given fixed network architecture. Our work applies more broadly to empirical risk minimization problems, which allows us to generalize various previous results and obtain new complexity results for previously unstudied architectures in the proper learning setting.

Deep Learning is a powerful tool for modeling complex learning tasks. Its versatility allows for nuanced architectures that capture various setups of interest and has demonstrated a nearly unrivaled performance on state-of-the-art learning tasks across many domains. At the same time, the fundamental behavior of Deep Learning methods is not well understood. One particular aspect that recently gained significant interest is the computational complexity of training such networks. The basic training problem is usually formulated as an empirical risk minimization problem (ERM) that can be phrased as

min_{φ ∈ Φ} (1/D) Σ_{i=1}^{D} ℓ(f(x̂_i, φ), ŷ_i),    (1)

where ℓ is some loss function, (x̂_i, ŷ_i)_{i=1}^{D} is an i.i.d. sample from some data distribution D, and f is a neural network architecture parameterized by φ ∈ Φ with Φ being the parameter space of the considered architecture (e.g., network weights).
The empirical risk minimization problem is solved in lieu of the general risk minimization problem (GRM) min_{φ ∈ Φ} E_{(x,y)∼D}[ℓ(f(x, φ), y)], which is usually impossible to solve due to the inaccessibility of D. Several works have studied the training problem for specific architectures, both in the proper and improper learning setup. In the former, the resulting "predictor" obtained from (1) is always of the form f(·, φ̂) for some φ̂ ∈ Φ, whereas in the latter, the predictor is allowed to be outside the class of functions {f(·, φ): φ ∈ Φ} as long as it satisfies a certain approximation guarantee with respect to the solution of (1).¹ In both cases, all results basically establish trainability in time that is exponential in the network parameters but polynomial in the amount of data.

In this work we complement and significantly extend previous work by providing a principled method to convert the empirical risk minimization problem in (1) associated with the learning problem for various architectures into a linear programming problem (LP) in the proper learning setting. The obtained linear programming formulations are of size roughly exponential in the input dimension and in the parameter space dimension, and linear in the size of the data set. This provides new bounds on the computational complexity of the training problem. For an overview on Complexity Theory we refer the reader to BID6.

Our work is most closely related to BID18, BID27, and BID5. In BID27 the authors show that ℓ1-regularized networks can be learned improperly in polynomial time in the size of the data (with a possibly exponential architecture-dependent constant) for networks with ReLU-like activations (but not actual ReLUs) and an arbitrary number of layers k. These results were then generalized in BID18 to actual ReLU activations. In both cases the improper learning setup is considered, i.e., the learned predictor is not a neural network itself and the learning problem is solved approximately for a given target accuracy. In contrast to these works, BID5 considered proper and exact learning, however only for k = 2 (i.e., one hidden layer). In relation to these works, we consider the proper learning setup for an arbitrary number of layers k and a wide range of activations, loss functions, and architectures. As previous works, except BID5, we consider the approximate learning setup as we are solving the empirical risk minimization problem, and we also establish generalization of our so-trained models. Our approach makes use of results from BID9 that allow for reformulating non-convex optimization problems with small treewidth and discrete as well as continuous decision variables as approximate linear programming formulations. To the best of our knowledge, there are no previous papers that propose LP-based approaches for training neural networks. There are, however, proposed uses of Mixed-Integer and Linear Programming technology in other aspects of Deep Learning. Some examples of this include feature visualization BID17, generating adversarial examples BID15 BID3 BID17, counting linear regions of a Deep Neural Network BID25, performing inference BID1, and providing strong convex relaxations for trained neural networks BID2. We first establish a general framework that allows us to reformulate (regularized) ERM problems arising in Deep Learning (among others!) into approximate linear programs with explicit bounds on their complexity.
The resulting methodology allows for providing complexity upper bounds for specific setups simply by plugging in complexity measures for the constituting elements such as layer architecture, activation functions, and loss functions. In particular, our approach overcomes limitations of previous approaches in terms of handling the accuracy of the approximations of non-linearities used in the approximation functions to achieve the overall target accuracy.

Principled Training through LPs. If ε > 0 is arbitrary, then for any sample size D there exists a data-independent linear program, i.e., the LP can be written down before seeing the data, with the following properties:

¹In the case of Neural Networks, for example, improper learning could lead to a predictor that does not correspond to a Neural Network, but that might behave closely to one.

Solving the ERM problem to ε-optimality. The linear program describes a polyhedron P such that for every realized data set (X̂, Ŷ) ∈ ([−1, 1]^n × [−1, 1]^m)^D there is a face F_{X̂,Ŷ} ⊆ P such that optimizing a certain linear function over F_{X̂,Ŷ} solves (1) to ε-optimality, returning a feasible parametrization φ̃ ∈ Φ which is part of our hypothesis class (i.e., we consider proper learning). The face F_{X̂,Ŷ} ⊆ P is simply obtained from P by fixing certain variables of the linear program using the values of the actual sample; equivalently, by Farkas' lemma, this can be achieved by modifying the objective function to ensure optimization over the face belonging to the data. As such, the linear program has a build-once-solve-many feature. We will also show that a possible data-dependent LP formulation is meaningless (see Appendix B).

Size of the linear program. The size, measured as bit complexity, of the linear program is roughly O((2L/ε)^{n+m+N} · D), where L is a constant depending on ℓ, f, and Φ that we will introduce later, n, m are the dimensions of the data points, i.e., x̂_i ∈ R^n and ŷ_i ∈ R^m for all i ∈ [D], and N is the dimension of the parameter space Φ. The overall learning algorithm is then obtained by formulating and solving the linear program, e.g., with the ellipsoid method whose running time is polynomial in the size of the input BID20. Even sharper size bounds can be obtained for specific architectures assuming network structure (see Appendix F), and our approach immediately extends to regularized ERMs (see Appendix E). It is important to mention that the constant L measures a certain Lipschitzness of the ERM training problem. While not exactly requiring Lipschitz continuity in the same way, Lipschitz constants have been used before for measuring complexity in the improper learning framework (see BID18) and more recently have been shown to be linked to generalization in BID19.

Generalization. Additionally, we establish that the solutions obtained for the ERM problem via our linear programming approach generalize, utilizing techniques from stochastic optimization. We also show that using our approach one can obtain a significant improvement on the results of BID18 when approximating the general risk minimization problem. Due to space limitations, however, we relegate this discussion to Appendix I.

Throughout this work we assume both data and parameters to be well-scaled, which is a common assumption and mainly serves to simplify the representation of our results; the main assumption is the reasonable boundedness, which can be assumed without significant loss of generality as actual computations assume boundedness in any case (see also BID22 for arguments advocating the use of normalized coefficients in neural networks).
More specifically, we assume x̂_i ∈ [−1, 1]^n and ŷ_i ∈ [−1, 1]^m for all i ∈ [D], as well as Φ ⊆ [−1, 1]^N. We point out three important features of our results. First, we provide a solution method that has provable optimality guarantees for the ERM problem, ensures generalization, and has linear dependency on the data (in terms of the complexity of the LP), without assuming convexity of the optimization problem. To the best of our knowledge, the only result presenting optimality guarantees in a proper learning, non-convex setting is that of BID5. Second, the linear program that we construct for a given sample size D is data-independent in the sense that it can be written down before seeing the actual data realization, and as such it encodes reasonable approximations of all possible data sets that can be given as an input to the ERM problem. This in particular shows that our linear programs are not simply discretizing space: if one considers a discretization of data contained in [−1, 1]^n × [−1, 1]^m, the total number of possible data sets of size D is exponential in D, which makes the linear dependence on D of the size of our LPs a remarkable feature. Finally, our approach can be directly extended to handle commonly used regularizers, as we show in Appendix E; for ease of presentation, though, we omit regularizers throughout our main discussions.

Complexity for various network architectures. We apply our methodology to various well-known neural network architectures and either generalize previous results or provide completely new ones. We provide an overview of our results in TAB0, where k is the number of layers, w is the width of the network, n/m are the input/output dimensions, and N is the total number of parameters. We use G to denote the directed graph defining the neural network and ∆ the maximum vertex in-degree in G. In all results, the node computations are linear with a bias term and normalized coefficients, and the activation functions have Lipschitz constant at most 1 and 0 as a fixed point; these include ReLU, Leaky ReLU, eLU, and Tanh, among others.

We would like to point out that certain improvements in the results in TAB0 can be obtained by further specifying whether the ERM problem corresponds to regression or classification. For example, the choice of loss functions and the nature of the output data y (discrete or continuous) typically rely on this. We can exploit such features in the construction of the LPs (see the proof of Theorem 3.1) and provide a sharper bound on the LP size. Nonetheless, these improvements are not especially significant, and in the interest of clarity and brevity we prefer to provide a unified discussion of ERM. Missing proofs have been relegated to the appendix due to space limitations.

The complexity of the training problem for the Fully Connected DNN case is arguably the most studied and, to the best of our knowledge, all training algorithms with approximation or optimality guarantees have a polynomial dependency on D only after fixing the architecture (depth, width, input dimension, etc.). In our setting, once the architecture is fixed, we obtain a polynomial dependence in both D and 1/ε. Moreover, our results show that in the bounded case one can obtain a training algorithm with polynomial dependence on D across all architectures, assuming very little on the specific details of the network (loss, activation, etc). This answers an open question left by BID5 regarding the possibility of a training algorithm with polynomial dependence on D.
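To get a feel for the size bound just described, here is a toy calculation of the (2L/ε)^{n+m+N} · D estimate; the Lipschitz constant L and all dimensions below are made-up illustrative values, not from the paper.

def lp_size_bound(L, eps, n, m, N, D):
    # Rough LP size from the construction: O((2L/eps)**(n+m+N) * D).
    # Linear in the sample size D, exponential in n + m + N.
    return (2 * L / eps) ** (n + m + N) * D

for D in (10**3, 10**6):
    print(D, "%.3g" % lp_size_bound(L=10.0, eps=0.1, n=2, m=1, N=4, D=D))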
In addition, we show that a uniform LP can be obtained without compromising that dependence on D. The reader might wonder if the exponential dependence on the other parameters of our LP sizes can be improved, namely the input dimension n + m and the parameter space dimension N (we are ignoring for the moment the exponent involving the depth k, as it will typically be dominated by N). The dependence on the input dimension is unlikely to be improved due to the NP-hardness results in BID10, whereas obtaining a polynomial dependence on the parameter space dimension remains open (see BID5). A recent paper BID4 provides an NP-hard DNN training problem that becomes polynomially solvable when the input dimension is fixed. However, this result considers a fixed architecture, thus the parameter space dimension is a constant and the running time is measured with respect to D.

In the following let [n] ≔ {1, . . ., n} and [n]_0 ≔ {0, . . ., n}. Given a graph H, we will use V(H) and E(H) to denote the vertex-set and edge-set of H, respectively, and δ_H(u) will be the set of edges incident to vertex u. We will need:

Definition 2.1. For a function g: K ⊆ R^n → R, we denote its Lipschitz constant with respect to the p-norm by L_p(g), i.e., the smallest constant such that |g(x) − g(y)| ≤ L_p(g) ‖x − y‖_p for all x, y ∈ K.

Moreover, in the following let E_{ω∈Ω}[·] and V_{ω∈Ω}[·] denote the expectation and variance with respect to the random variable ω ∈ Ω, respectively. The basic ERM problem is typically of the form (1), where ℓ is some loss function, (x̂_i, ŷ_i)_{i=1}^{D} is an i.i.d. sample from some data distribution D that we have reasonable sampling access to, and f is a model that is parametrized by φ ∈ Φ. We consider the proper learning setting here, where the computed solution to the ERM problem has to belong to the hypothesis class induced by Φ; for a detailed discussion see Appendix A.2. We next define the Lipschitz constant of an ERM problem with respect to the infinity norm, namely L ≔ L_∞ of the map (x, y, φ) ↦ ℓ(f(x, φ), y) over the domain [−1, 1]^n × [−1, 1]^m × Φ. We emphasize that here we are considering the data-dependent entries as variables as well, and not only the parameters Φ, as is usually done in the literature. This is because we will construct data-independent LPs, a subtlety that will become clear later.

A neural network can be understood as a function f defined over a directed graph that maps inputs x ∈ R^n to f(x) ∈ R^m. The directed graph G = (V, E), which represents the network architecture, often naturally decomposes into layers V = ∪_{i=0}^{k} V_i, where V_0 is referred to as the input layer and V_k as the output layer. All other layers we refer to as hidden layers. These graphs neither have to be acyclic (as in the case of recurrent neural networks) nor does the layer decomposition imply that arcs are only allowed between adjacent layers (as in the case of ResNets). In feed-forward networks, however, the graph is assumed to be acyclic. For the unfamiliar reader we provide a more formal definition in Appendix A.3.

We will introduce the key concepts that we need to formulate and solve binary optimization problems with small treewidth, which will be the main workhorse behind our results. The treewidth of a graph is a parameter used to measure how tree-like a given graph is. Among all its equivalent definitions, the one we will use in this work is the following:

Definition 2.3. Let G be an undirected graph. A tree-decomposition BID24 of G is a pair (T, Q), where T is a tree and Q = {Q_t : t ∈ V(T)} is a family of subsets of V(G) such that (i) every vertex of G is contained in some bag Q_t, (ii) for every edge of G both endpoints appear together in some bag Q_t, and (iii) for every vertex v of G, the set {t ∈ V(T) : v ∈ Q_t} induces a subtree of T. The width of the decomposition is defined as max {|Q_t| : t ∈ V(T)} − 1.
The treewidth of G is the minimum width over all tree-decompositions of G. We refer to the Q_t as bags, as customary. An example of a tree-decomposition is given in FIG0 in Appendix A.1. In addition to width, another important feature of a tree-decomposition (T, Q) we use is the size of the tree-decomposition, given by |V(T)|.

Consider a problem of the form

(BO)  min c^T x + d^T y  s.t.  f_i(x) ≤ 0 for i ∈ [q],  y_j = g_j(x) for j ∈ [p],  x ∈ {0, 1}^n,  y ∈ R^p,

where the f_i and g_j are arbitrary functions that we access via a function value oracle, i.e., an oracle that returns the function value upon presentation with an input. We will further use the concept of an intersection graph.

Definition 2.4. The intersection graph Γ[I] for an instance I of BO is the undirected graph which has a vertex for each x variable and an edge for each pair of x variables that appear in any common constraint. Note that in the above definition we have ignored the y variables, which will be of great importance later. The sparsity of a problem is now given by the treewidth of its intersection graph, and we obtain:

Theorem 2.5. Consider an instance of BO whose intersection graph admits a tree-decomposition (T, Q) of width ω. Then there is a linear program of size O(2^ω (|V(T)| + p)) that is equivalent to BO.

Theorem 2.5 is an immediate generalization of a theorem in BID9, distinguishing the variables y, which do not need to be binary in nature, but are fully determined by the binary variables x. The proof is omitted as it is almost identical to the proof in BID9. For the sake of completeness, we include a proof sketch in Appendix C.1.

We will now show how we can obtain an approximate LP formulation for the ERM problem. A notable feature is that our LP formulation is data-independent in the sense that we can write down the LP, for a given sample size D, before having seen the actual data; the LP is later specialized to a given data set by fixing some of its variables. This subtlety is extremely important as it prevents trivial solutions, where some non-deterministic guess provides a solution to the ERM problem for a given data set and then simply writes down a small LP that outputs the network configuration; such an LP would be of small size (the typical notion of complexity used for LPs), however not efficiently computable. By making the construction independent of the data we circumvent this issue; we provide a short discussion in Appendix B and refer the interested reader to BID13 BID12; BID11 for an in-depth discussion of this subtlety. Slightly simplifying, we might say for now that in the same way we do not want algorithms to be designed for a fixed data set, we do not want to construct LPs for a specific data set but for a wide range of data sets.

As mentioned before, we assume x̂_i ∈ [−1, 1]^n and ŷ_i ∈ [−1, 1]^m as normalization. Since the BO problem only considers linear objective functions, we begin by reformulating the ERM problem in the following equivalent form:

min (1/D) Σ_{d=1}^{D} L_d  s.t.  L_d = ℓ(f(x̂_d, φ), ŷ_d) for d ∈ [D],  φ ∈ Φ.

Motivated by this reformulation, we study an approximation to the following set:

S(D, Φ, ℓ, f) ≔ { (x, y, φ, L) : L_d = ℓ(f(x_d, φ), y_d) for d ∈ [D], φ ∈ Φ }.

The variables x_d ∈ [−1, 1]^n and y_d ∈ [−1, 1]^m for d ∈ [D] denote the data variables, which will be assigned values upon a specification of a data set of sample size D. Let r ∈ R with −1 ≤ r ≤ 1. Given γ ∈ (0, 1), we can approximate r as a sum of inverse powers of 2, within additive error proportional to γ.
For N_γ ≔ ⌈log_2 γ^{-1}⌉ there exist values z_h ∈ {0, 1} with h ∈ [N_γ], so that

| r − Σ_{h∈[N_γ]} z_h / 2^h | ≤ γ  (up to a sign bit for negative r).

Our strategy is now to approximately represent the x, y, φ variables via these binary approximations and consider the following approximation of S(D, Φ, ℓ, f):

S′(D, Φ, ℓ, f) ≔ { (x, y, φ, L, z) : x_d = Σ_{h∈[N_γ]} z_h^{x_d}/2^h, y_d = Σ_{h∈[N_γ]} z_h^{y_d}/2^h, φ = Σ_{h∈[N_γ]} z_h^{φ}/2^h (componentwise), L_d = ℓ(f(x_d, φ), y_d) for d ∈ [D] },

where all z variables are binary. We can readily describe the error of the approximation of S(D, Φ, ℓ, f) by S′(D, Φ, ℓ, f) in the ERM problem induced by the discretization:

Lemma 3.1. Let γ = ε/(2L). Then for every (x, y, φ, L) ∈ S(D, Φ, ℓ, f) there exists (x̃, ỹ, φ̃, L̃) ∈ S′(D, Φ, ℓ, f) with |L_d − L̃_d| ≤ ε for all d ∈ [D].

By substituting out the x, y, φ by means of the equations of S′(D, Φ, ℓ, f), we obtain a feasible region as in BO. So far, we have phrased the ERM problem in terms of a binary optimization problem using a discretization of the continuous variables. This in and of itself is neither insightful nor useful. In this section we will perform the key step, reformulating the convex hull of S′(D, Φ, ℓ, f) as a moderate-sized linear program by means of Theorem 2.5, exploiting the small treewidth of the ERM problem. After replacing the (x, y, φ) variables in S′(D, Φ, ℓ, f) using the z variables, we can see that the intersection graph of S′(D, Φ, ℓ, f) is given by Figure 1a, where we use (x, y, φ) as stand-ins for the corresponding binary variables z^x, z^y, z^φ. Recall that the intersection graph does not include the L variables. It is not hard to see that a valid tree-decomposition for this graph is given by Figure 1b. This tree-decomposition has size D and width N_γ(n + m + N) − 1 (much less than the N_γ(N + Dn + Dm) binary variables). This yields our main theorem:

Theorem 3.1. Let ε > 0 and let L be the architecture Lipschitz constant of the ERM problem. Then there is a data-independent LP such that (a) its size is O((2L/ε)^{n+m+N} · D); (b) for every realized data set (X̂, Ŷ), optimizing a certain linear function over an appropriate face of its feasible region solves the ERM problem to ε-optimality; (c) the number of evaluations of ℓ and f needed for its construction is independent of D; and (d) it can be constructed in time O((2L/ε)^{n+m+N} · D).

Proof. The proof of part (a) follows directly from Theorem 2.5 using N_γ = log(2L/ε), along with the tree-decomposition of Figure 1b, which implies |V(T)| + p = 2D in this case. Parts (b), (c) and (d) rely on the explicit construction for Theorem 2.5 and are given in Appendix D.

Observe that, equivalently, by Farkas' lemma, optimizing over the face can also be achieved by modifying the objective function in a straightforward way. Also note that the number of evaluations of ℓ and f is independent of D. We would like to further point out that we can provide an interesting refinement of the theorem from above: if Φ has an inherent network structure (as in the case of Neural Networks), one can exploit treewidth-based sparsity of the network itself in order to obtain a smaller linear program with the same approximation guarantees as before. This allows us to reduce the exponent in the exponential term of the LP size to an expression that depends on the sparsity of the network, instead of its size. For brevity of exposition, we relegate this discussion to Appendix F. Another improvement can be obtained by using more information about the input data. Assuming extra structure on the input, one could potentially improve the n + m exponent on the LP size. We relegate the discussion of this feature to Remark D.1 in the Appendix, as it requires the explicit construction of the LP, which we also provide in the Appendix.

Consider now fully-connected, feed-forward DNNs with k layers, of the form

f(x) = T_k(σ(T_{k−1}(σ(· · · σ(T_1(x)))))),

where σ is the ReLU activation function σ(x) ≔ max{0, x} applied component-wise and each T_i: R^{w_{i−1}} → R^{w_i} is an affine linear function. Here w_0 = n is the dimension of the input data and w_k = m is the dimension of the output of the network. We write T_i(z) = A_i z + b_i with A_i ∈ [−1, 1]^{w_i × w_{i−1}} and b_i ∈ [−1, 1]^{w_i} as normalization. Thus, if v is a node in layer i, the node computation performed in v is of the form â^T z + b̂, where â is a row of A_i and b̂ is a component of b_i. Note that in this case the dimension of the parameter space Φ is exactly the number of edges of the network.
Hence, we use N to represent the number of edges as well. We begin with a short technical lemma, with which we can immediately establish the corollary that follows.

Lemma 4.1. Let a ∈ [−1, 1]^w, b ∈ [−1, 1], and z ∈ R^w. Then |z^T a + b| ≤ w‖z‖_∞ + 1.

Corollary 4.2. The architecture Lipschitz constant of the ERM problem for fully-connected DNNs with ReLU activations satisfies L_∞ ≤ w^{O(k²)}, and hence Theorem 3.1 applies with L = w^{O(k²)}.

Proof. Proving that the architecture Lipschitz constant is L_∞ ≤ w^{O(k²)} suffices. Note that all node computations take the form h(z, a, b) = z^T a + b for a ∈ [−1, 1]^w and b ∈ [−1, 1]. The only difference is made in the domain of z, which varies from layer to layer. The 1-norm of the gradient of h is at most ‖z‖_1 + ‖a‖_1 + 1 ≤ ‖z‖_1 + w + 1 which, in virtue of Lemma 4.1, implies that a node computation on layer i (where the weights are considered variables as well) has Lipschitz constant at most w^{O(i)}, which shows that the Lipschitz constants can be multiplied layer-by-layer to obtain the overall architecture Lipschitz constant. Since ReLUs have Lipschitz constant equal to 1, and the product of the layer-wise constants is at most ∏_{i=1}^{k} w^{O(i)} = w^{O(k²)}, the result follows.

The reader might have noticed that a sharper bound for the Lipschitz constant above could have been used; however, we chose simpler bounds for the sake of presentation. It is worthwhile to compare the previous result to the closely related exact training result of BID5 for networks with one hidden layer.

Remark 4.4. We point out a few key differences of this result with the algorithm we can obtain from solving the LP in Corollary 4.2: (a) One advantage of our result is the benign dependency on D. An algorithm that solves the training problem using our proposed LP has polynomial dependency on the data size regardless of the architecture. (b) As we have mentioned before, our approach is able to construct an LP before seeing the data. (c) The dependency on w of our algorithm is also polynomial. To be fair, we are including an extra parameter N, the number of edges of the Neural Network, on which the size of our LP depends exponentially. (d) We are able to handle any output dimension m and any number of layers k. (e) We do not assume convexity of the loss function, which causes the resulting LP size to depend on how well behaved ℓ is in terms of its Lipschitzness. (f) The result of BID5 has two advantages over ours: there is no boundedness assumption on the coefficients, and they are able to provide a globally optimal solution instead of an ε-approximation.

Another interesting point can be made with respect to Convolutional Neural Networks (CNN). In these, convolutional layers are included, which help to significantly reduce the number of parameters involved in the neural network. From a theoretical perspective, a CNN can be obtained by simply enforcing certain parameters of a fully-connected DNN to be equal. This implies that Lemma 4.5 can also be applied to CNNs, with the key difference residing in the parameter N, which is the dimension of the parameter space and does not correspond to the number of edges in a CNN. In TAB0 we provide explicit LP sizes for common architectures. These can be directly obtained from Lemma 4.5, using the specific Lipschitz constants of the loss functions. We provide explicit computations in Appendix G.

We have presented a novel framework which shows that training a wide variety of neural networks can be done in time which depends polynomially on the data set size, while satisfying a predetermined arbitrary optimality tolerance. Our approach is realized by approaching training through the lens of linear programming. Moreover, we show that training using a particular data set is closely related to the face structure of a data-independent polytope.
Our contributions not only improve the best known algorithmic results for neural network training with optimality/approximation guarantees, but also shed new light on (theoretical) neural network training by bringing together concepts of graph theory, polyhedral geometry, and non-convex optimization as a tool for Deep Learning. While the LPs we are constructing are large, and likely to be difficult to solve by straightforward use of our formulation, we strongly believe the theoretical foundations we lay here can also have practical implications in the Machine Learning community. First of all, we emphasize that all our architecture-dependent terms are worst-case bounds, which can be improved by assuming more structure in the corresponding problems. Additionally, the history of Linear Programming has provided many interesting cases of extremely large LPs that can be solved to near-optimality without necessarily generating the complete LP description. In these cases the theoretical understanding of the LP structure is crucial to drive the development of incremental solution strategies. Finally, we would like to point out an interesting connection between the way our approach works and the currently most practically effective training algorithm: stochastic gradient descent. Our LP approach implicitly "decomposes" the problem for each data point, and the LP merges them back together without losing any information nor optimality guarantee, even in the non-convex setting. This is the core reason why our LP has a linear dependence on D, and bears close resemblance to SGD, where single data points (or batches of those) are used in a given step. As such, our results might provide a new perspective, through low treewidth, on why the current practical algorithms work so well, and perhaps hint at a synergy between the two approaches. We believe this can be an interesting path to bring our ideas to practice.

An alternative definition to Definition 2.3 of treewidth that the reader might find useful is the following; recall that a chordal graph is a graph where every induced cycle has length exactly 3.

Definition A.1. An undirected graph G = (V, E) has treewidth ≤ ω if there exists a chordal graph H = (V, E′) with E ⊆ E′ and clique number ≤ ω + 1.

H in the definition above is sometimes referred to as a chordal completion of G. In FIG0 we present an example of a graph and a valid tree-decomposition. The reader can easily verify that the conditions of Definition 2.3 are met in this example. Moreover, using Definition A.1 one can verify that the treewidth of the graph in FIG0 is exactly 2. Among the folklore results we use in Sections C.1 and F is the following.

Lemma A.3. Let (T, Q) be a tree-decomposition of a graph G and let K be a clique of G. Then there exists t ∈ T such that K ⊆ Q_t.

An important distinction is the type of solution to the ERM problem that we allow. In proper learning we require the solution to satisfy φ ∈ Φ, i.e., the model has to be from the considered model class induced by Φ and takes the form f(·, φ*) for some φ* ∈ Φ, with

E_{(x,y)∼D}[ℓ(f(x, φ*), y)] = min_{φ∈Φ} E_{(x,y)∼D}[ℓ(f(x, φ), y)],

and this can be relaxed to ε-approximate (proper) learning by allowing for an additive error ε > 0 in the above. In contrast, in improper learning we allow for a model g(·) that cannot necessarily be obtained as f(·, φ) with φ ∈ Φ, with

E_{(x,y)∼D}[ℓ(g(x), y)] ≤ min_{φ∈Φ} E_{(x,y)∼D}[ℓ(f(x, φ), y)],

with a similar approximate version. As we mentioned in the main body, this article considers the proper learning setup.

In a Neural Network, the graph G defining the network can be partitioned into layers. This means that V(G) = ∪_{i=0}^{k} V_i for some sets V_i, the layers of the network.
Each vertex v ∈ V_i with i ∈ [k]_0 has an associated set of in-nodes denoted by δ^+(v) ⊆ V, so that (w, v) ∈ E for all w ∈ δ^+(v), and an associated set of out-nodes δ^−(v) ⊆ V defined analogously. If i = 0, then δ^+(v) are the inputs (from data), and if i = k, then δ^−(v) are the outputs of the network. Moreover, each node v ∈ V performs a node computation g_i(δ^+(v)), where g_i: R^{|δ^+(v)|} → R with i ∈ [k] is typically a smooth function (often these are linear or affine linear functions), and then the node activation is computed as a_i(g_i(δ^+(v))), where a_i: R → R with i ∈ [k] is a (not necessarily smooth) function (e.g., ReLU activations of the form a_i(x) = max{0, x}), and the value on all out-nodes w ∈ δ^−(v) is set to a_i(g_i(δ^+(v))) for nodes in layer i ∈ [k]. In feed-forward networks, we can further assume that if v ∈ V_i, then δ^+(v) ⊆ ∪_{j=0}^{i−1} V_j, i.e., all arcs move forward in the layers.

As mentioned before, the assumption that the construction of the LP is independent of the specific data is important and reasonable, as it basically prevents us from constructing an LP for a specific data set, which would be akin to designing an algorithm for a specific data set in the ERM problem. To further illustrate the point, suppose we did the latter; then a correct algorithm would be a simple print statement of the optimal configuration φ̂. Clearly this is nonsensical, and we want the algorithm to work for all types of data sets as inputs. We have a similar requirement for the construction of the LP, with the only difference that the number of data points D has to be known at the time of construction. As such, LPs more closely resemble a circuit model of computation (similar to the complexity class P/poly); see BID13 BID12; Braun and Pokutta (2018+) for details. The curious reader might still wonder how our main results change if we allow the LPs in Theorem 3.1 to be data-dependent, i.e., if we construct a specific linear program after we have seen the data set:

Remark B.1. To obtain a data-dependent linear program we can follow the same approach as in Section 3 and certainly produce an LP that will provide the same approximation guarantees for a fixed data set. This is not particularly insightful, as it is based on a straightforward enumeration which takes a significant amount of time, considering that it only serves one data set. On the other hand, our result shows that by including the input data as variables, we do not induce an exponential term in the size of the data set D, and we can keep the number of function evaluations roughly the same.

Our approach shares some similarities with stochastic gradient descent (SGD) based training: data points are considered separately (or in small batches) and the method (in the case of SGD) or the LP (in our case) ensures that the information gained from a single data point is integrated into the overall ERM solution. In the case of SGD this happens through sequential updates of the form x_{t+1} ← x_t − η∇f_i(x_t), where i is a random function corresponding to a training data point (X̂_i, Ŷ_i) from the ERM problem. In our case, it is the LP that "glues together" solutions obtained from single training data points by means of leveraging the low treewidth. This is reflected in the linear dependence on D in the problem formulation size.

Proof of Lemma 3.1. Choose binary values z̃ so as to attain the discretization approximation for the variables x, y, φ, and define x̃, ỹ, φ̃, L̃ from z̃ according to the definition of S′(D, Φ, ℓ, f).
Proof of Lemma 3.1. Choose binary values ẑ so as to attain the ε-approximation for the variables x, y, φ as in FORMULA18, and define x̂, ŷ, φ̂, L̂ from ẑ according to the definition of S_ε(D, Φ, ℓ, f). Since DISPLAYFORM0, by Lipschitzness we obtain |L̂_d − L_d| ≤ ε. The result then follows. Proof of Lemma 4.1. The result can be verified directly, since for a ∈ [−1, 1]^w and b ∈ [−1, 1] it holds that |zᵀa + b| ≤ w‖z‖_∞ + 1. Proof of Lemma 4.5. The proof follows almost directly from the proof of Corollary 4.2. The two main differences are (1) the input dimension of a node computation, which can be at most Δ instead of w, and (2) the fact that an activation function a with Lipschitz constant 1 and a(0) = 0 satisfies |a(z)| ≤ |z|, thus the bound on the domain of each node computation computed in Lemma 4.1 applies. The layer-by-layer argument can be applied as the network is feed-forward. DISPLAYFORM1 Let us recall the definition of BO: DISPLAYFORM2 We sketch the proof of this result. Proof. Since the support of each f_i induces a clique in the intersection graph, there must exist a bag Q such that supp(f_i) ⊆ Q (Lemma A.3). The same holds for each g_j. We modify the tree-decomposition to include the y_j variables in the following way:
• For each j ∈ [p], choose a bag Q containing supp(g_j) and add a new bag Q^(j) consisting of Q ∪ {y_j} and connected to Q.
• We do this for every j ∈ [p], with a different Q^(j) for each different j. This creates a new tree-decomposition (T′, Q′) of width at most ω + 1, which has each variable y_j contained in a single bag Q^(j), which is a leaf.
• The size of the new tree-decomposition is |T′| = |T| + p.
From here, we proceed as follows:
• For each t ∈ T′, if Q′_t contains y_j for some j ∈ [p], then we construct DISPLAYFORM3; otherwise we simply construct DISPLAYFORM4. Note that these sets have size at most 2^{|Q′_t|}.
• We define variables X[Y, N] where Y, N form a partition of Q′_{t1} ∩ Q′_{t2}. These are at most 2^ω |V(T′)|.
• For each t ∈ T′ and v ∈ F_t, we create a variable λ_v. These are at most 2^ω |V(T′)|.
We formulate the following linear optimization problem: DISPLAYFORM5. Note that the notation in the last constraint is justified since by construction supp(g_j) ⊆ Q^(j). The proof of the fact that LBO is equivalent to BO follows from the arguments in BID9. The key difference justifying the addition of the y variables lies in the fact that they only appear in leaves of the tree-decomposition (T′, Q′), and thus in no intersection of two bags. The gluing argument using variables X[Y, N] then follows directly, as it is then only needed for the x variables to be binary. We can substitute out the x and y variables and obtain an LP whose variables are only λ_v and X[Y, N]. This produces an LP with at most 2 · 2^ω |V(T′)| variables and (2 · 2^ω + 1)|V(T′)| constraints. This proves that the size of the LP is O(2^ω(|V(T)| + p)), as required. DISPLAYFORM6 In this Section we show how to construct the polytope in Theorem 3.1. We first recall the following definition: DISPLAYFORM7 and recall that S_ε(D, Φ, ℓ, f) is a discretized version of the set mentioned above. From the tree-decomposition detailed in Section 3.2, we see that the data-dependent variables x, y, L are partitioned in different bags, one for each DISPLAYFORM8. Let us index the bags using d. Since all data variables have the same domain, the sets F_d we construct in the proof of Theorem 2.5 will be the same for all d ∈ [D]. Using this observation, we can construct the LP as follows: 1. Fix, say, d = 1 and enumerate all binary vectors corresponding to the discretization of the variables in that bag. The only evaluations of ℓ and f are performed in the construction of F_1. As for the additional computations, the bottleneck lies in creating all λ variables, which takes time O((2L/ε)^{n+m+N} D).
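Since the displayed discretization (FORMULA18) is elided in this extraction, the following sketch shows one standard way such a binary discretization of [−1, 1] can work, chosen so that the rounding error is at most ε; the particular binary-expansion encoding is our assumption, not necessarily the paper's.

```python
# A small sketch of the discretization idea: approximate a variable in [-1, 1]
# by a binary expansion with enough bits that |x - x_hat| <= eps.
import math

def discretize(x, eps):
    n_bits = math.ceil(math.log2(2.0 / eps))      # grid fine enough for error <= eps
    levels = 2 ** n_bits
    idx = round((x + 1.0) / 2.0 * (levels - 1))   # index encoded by n_bits binary values
    bits = [(idx >> i) & 1 for i in range(n_bits)]
    x_hat = -1.0 + 2.0 * idx / (levels - 1)
    return bits, x_hat

bits, x_hat = discretize(0.37, eps=0.01)
assert abs(0.37 - x_hat) <= 0.01
print(bits, x_hat)
```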
Remark D.1. Note that in step 1 of the LP construction we are enumerating all possible discretized values of x_1, y_1, i.e., we are implicitly assuming all points in [−1, 1]^{n+m} are possible inputs. This is reflected in the (2L/ε)^{n+m} term in the LP size estimation. If one were to use another discretization method (or a different "point generation" technique) using more information about the input data, this term could be improved and the explicit exponential dependency of the LP size on the input dimension could be alleviated significantly. However, note that in a fully-connected neural network we have N ≥ n + m, and thus an implicit exponential dependency on the input dimension could remain unless more structure is assumed. This is in line with the NP-hardness results. We leave the full development of this potential improvement for future work. DISPLAYFORM9 DISPLAYFORM10 and let φ* be an optimal solution to the ERM problem with input data (X̂, Ŷ). Consider now binary variables z_x̂, z_ŷ to attain the ε-approximation and define x̃, ỹ from z_x̂, z_ŷ, i.e., DISPLAYFORM11 and similarly for ỹ. Define the set DISPLAYFORM12 and, similarly as before, define S_ε(X̃, Ỹ, Φ, ℓ, f) to be the discretized version (on the variables φ). The following Lemma shows the quality of approximation to the ERM problem obtained using S(X̃, Ỹ, Φ, ℓ, f) and subsequently S_ε(X̃, Ỹ, Φ, ℓ, f). DISPLAYFORM13 Proof. The first inequality follows from the same proof as in Lemma 3.1. For the second inequality, let φ̃ be the binary approximation to φ, and let L̃ be defined by DISPLAYFORM14. Since x̃, ỹ, φ̃ are approximations to x̂, ŷ, φ, by Lipschitzness we know that DISPLAYFORM15. Proof. Since φ̂ ∈ Φ, and φ* is a "true" optimal solution to the ERM problem, we immediately have DISPLAYFORM16. On the other hand, by the previous Lemma we know there exists DISPLAYFORM17. Note that since the objective is linear, the optimization problem in the previous Corollary is equivalent if we replace S_ε(X̃, Ỹ, Φ, ℓ, f) by its convex hull. Therefore the only missing link to the face property of the data-independent polytope is the following: DISPLAYFORM18 Proof. The proof follows from simply fixing variables in the corresponding LBO that describes DISPLAYFORM19: for each d ∈ [D] and v ∈ F_d, we simply need to set λ_v = 0 whenever the (x, y) components of v do not correspond to X̃, Ỹ. We know this is well defined, since X̃, Ỹ are already discretized, thus there must be some v ∈ F_d corresponding to them. The structure of the resulting LP is the same as LBO, so the fact that it is exactly conv(S_ε(X̃, Ỹ, Φ, ℓ, f)) follows. The fact that it is a face of conv(S_ε(D, Φ, ℓ, f)) follows from the fact that the procedure simply fixed some inequalities to be tight. A common practice to avoid over-fitting is the inclusion of regularizer terms in the ERM objective. This leads to problems of the form DISPLAYFORM0 where R(·) is a function, typically a norm, and λ > 0 is a parameter to control the strength of the regularization. Regularization is generally used to promote generalization and discourage over-fitting of the obtained ERM solution. The reader might notice that our arguments in Section 3 regarding the epigraph reformulation of the ERM problem and the tree-decomposition of its intersection graph can be applied as well, since the regularizer term does not add any extra interaction between the data-dependent variables. The previous analysis extends immediately to the case with regularizers after appropriate modification of the architecture Lipschitz constant L to include R(·). Definition E.1. Consider a regularized ERM problem with parameters D, Φ, ℓ, f, R, and λ.
We define its DISPLAYFORM1 over the domain DISPLAYFORM2. So far we have considered general ERM problems, exploiting only the structure of the ERM induced by the finite-sum formulation. We will now study ERM under Network Structure, i.e., specifically ERM problems as they arise in the context of Neural Network training. We will see that in the case of Neural Networks, we can exploit the sparsity of the network itself to obtain better LP formulations of conv(S_ε(D, Φ, ℓ, f)). Suppose the network is defined by a graph G, and recall that in this case Φ ⊆ [−1, 1]^{E(G)}. By using additional auxiliary variables s representing the node computations and activations, we can describe S(D, Φ, ℓ, f) in the following way: DISPLAYFORM0 The only difference with our original description of S(D, Φ, ℓ, f) is that we explicitly "store" node computations in the variables s. These new variables will allow us to better use the structure of G. Assumption F.1. To apply our approach in this context we need to further assume Φ to be the class of Neural Networks with normalized coefficients and bounded node computations. This means that we restrict to the case when s ∈ [−1, 1]^{|V(G)| · D}. Under Assumption F.1 we can easily derive an analogous description of S_ε(D, Φ, ℓ, f) using this node-based representation of S(D, Φ, ℓ, f). In such a description we also include a binary representation of the auxiliary variables s. Let Γ be the intersection graph of such a formulation of S_ε(D, Φ, ℓ, f), and let Γ_φ be the sub-graph of Γ induced by the variables φ. Using a tree-decomposition (T, Q) of Γ_φ, we can construct a tree-decomposition of Γ in the following way: 1. DISPLAYFORM1 is a copy of (T, Q). 2. We connect the trees T_i in a way that the resulting graph is a tree (e.g., they can simply be concatenated one after the other). It is not hard to see that this is a valid tree-decomposition of Γ, of size |T| · D (since the bags were duplicated D times) and width N_γ(tw(Γ_φ) + |V(G)| + n + m). We now turn to providing a bound on tw(Γ_φ). To this end we observe the following: 1. The architecture variables φ are associated to edges of G. Moreover, two variables φ_e, φ_f, with e, f ∈ E, appear in a common constraint if and only if there is a vertex v such that e, f ∈ δ+(v). 2. This implies that Γ_φ is a sub-graph of the line graph of G. Recall that the line graph of a graph G is obtained by creating a node for each edge of G and connecting two nodes whenever the respective edges share a common endpoint. The treewidth of a line graph is related to the treewidth of the base graph (see BID8; BID14; BID7; BID21): DISPLAYFORM2 In Section 4 we specified our results (the size of the data-independent LPs) for feed-forward networks with 1-Lipschitz activation functions. However, we kept as a parameter L_∞, the Lipschitz constant of ℓ(·, ·) over DISPLAYFORM0, with U_k (a quantity depending on the widths w_j) a valid bound on the output of the node computations, as proved in Lemma 4.1. Note that U_k ≤ w^{k+1}. In this Section we compute this Lipschitz constant for various common loss functions. It is important to mention that we are interested in the Lipschitzness of ℓ with respect to both the output layer and the data-dependent variables as well (not a usual consideration in the literature). Note that a bound on the Lipschitz constant L_∞ is given by sup_{z,y} ‖∇ℓ(z, y)‖_1.
• Quadratic Loss: ℓ(z, y) = ‖z − y‖_2². In this case it is easy to see that DISPLAYFORM1.
• Absolute Loss: ℓ(z, y) = ‖z − y‖_1. In this case we can directly verify that the Lipschitz constant with respect to the infinity norm is at most 2m.
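The closed-form bounds above can be sanity-checked numerically; the sketch below samples the gradient-norm bound for the quadratic loss on a box domain. The domain bound U and the sample count are arbitrary placeholder choices, and random sampling only gives evidence, not a proof.

```python
# Numeric sanity check (a sketch, not a proof) of L_inf <= sup ||grad l(z, y)||_1
# for the quadratic loss l(z, y) = ||z - y||_2^2 on [-U, U]^m x [-1, 1]^m.
import numpy as np

rng = np.random.default_rng(0)
m, U = 3, 2.0

def grad_l1_norm(z, y):
    # grad_z = 2(z - y), grad_y = -2(z - y); we stack both because Lipschitzness
    # is needed w.r.t. the output layer *and* the data-dependent variables.
    g = 2.0 * (z - y)
    return np.abs(g).sum() + np.abs(-g).sum()

worst = 0.0
for _ in range(20000):
    z = rng.uniform(-U, U, m)
    y = rng.uniform(-1.0, 1.0, m)
    worst = max(worst, grad_l1_norm(z, y))

print(worst, "<=", 4.0 * m * (U + 1))   # crude closed-form bound: 4m(U + 1)
```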
• Cross Entropy Loss with Soft-max Layer: In this case we include the Soft-max computation in the definition of ℓ, therefore DISPLAYFORM2, where S(z) is the Soft-max function defined as DISPLAYFORM3, which in principle cannot be bounded. Nonetheless, since we are interested in the domain DISPLAYFORM4.
• Hinge Loss: ℓ(z, y) = max{1 − zᵀy, 0}. Using a similar argument as for the Quadratic Loss, one can easily see that the Lipschitz constant with respect to the infinity norm is at most m(U_k + 1) ≤ m(w^{k+1} + 1).
A Binarized activation unit (BiU) is parametrized by p + 1 values b, a_1, ..., a_p. Upon a binary input vector z_1, z_2, ..., z_p, the output is the binary value y defined by DISPLAYFORM0, and y = 0 otherwise. Now suppose we form a network using BiUs, possibly using different values for the parameter p. In terms of the training problem, we have a family of (binary) vectors x_1, ..., x_D in R^n, binary labels, and corresponding binary label vectors y_1, ..., y_D in R^m, and as before we want to solve the ERM problem. Here, the parametrization φ refers to a choice for the pair (a, b) at each unit. In the specific case of a network with 2 nodes in the first layer and 1 node in the second layer, and m = 1, BID10 showed that it is NP-hard to train the network so as to obtain zero loss, when n = D. Moreover, the authors argued that even if the parameters (a, b) are restricted to be in {−1, 1}, the problem remains NP-hard. See BID16 for an empirically efficient training algorithm for BiUs. In this section we apply our techniques to the ERM problem to obtain an exact polynomial-size data-independent formulation for each fixed network (but arbitrary D) when the parameters (a, b) are restricted to be in {−1, 1}. We begin by noticing that we can reformulate the problem using an epigraph formulation as before. Moreover, since the data points in a BiU are binary, if we keep the data points as variables, the resulting linear-objective optimization problem is a binary optimization problem of the form BO. This allows us to claim the following: Proof. The result follows from applying Theorem 2.5 directly to the epigraph formulation of BiU training, keeping x and y as variables. In this case an approximation is not necessary. The construction time and the data-independence follow along the same arguments used in the approximate setting before.
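A BiU is simple enough to state directly in code. In the sketch below we assume the elided threshold rule is y = 1 iff aᵀz + b ≥ 0 (the exact display is missing from the text), and we wire up the 2-1 network with {−1, 1} parameters discussed above; the particular parameter values are illustrative only.

```python
# A direct sketch of a binarized activation unit (BiU); the >= 0 threshold
# convention is our assumption, since the displayed rule is elided.
def biu(a, b, z):
    return 1 if sum(ai * zi for ai, zi in zip(a, z)) + b >= 0 else 0

# A tiny 2-1 BiU network with parameters (a, b) restricted to {-1, 1},
# matching the hard training instance discussed above.
def two_layer_biu(z, params):
    (a1, b1), (a2, b2), (a3, b3) = params
    h = [biu(a1, b1, z), biu(a2, b2, z)]
    return biu(a3, b3, h)

params = (([1, -1], 1), ([-1, 1], -1), ([1, 1], -1))
print([two_layer_biu(z, params) for z in ([0, 0], [0, 1], [1, 0], [1, 1])])
```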
DISPLAYFORM1, where ℓ is some loss function, f is a neural network architecture with parameter space Φ, and (x, y) ∈ R^{n+m} is drawn from the distribution D. We solve the finite-sum problem, i.e., the empirical risk minimization problem DISPLAYFORM2. We will show in this section that, for any 1 > α > 0 and ε > 0, we can choose a (reasonably small!) sample size D, so that with probability 1 − α it holds: DISPLAYFORM3. As the size of the linear program that we use for training only depends linearly on the number of data points, this also implies that we will have a linear program of reasonable size as a function of α and ε. The following proposition summarizes the generalization argument used in stochastic programming (see also BID26) for the sample-average problem with objective terms F(x, γ_i). If x̂ ∈ X is an ε-approximate solution to this sampled problem, with L and σ² as above, then with probability 1 − α it holds that GRM(φ̂) ≤ min_{φ∈Φ} GRM(φ) + 6ε, i.e., φ̂ is a 6ε-approximate solution to min_{φ∈Φ} GRM(φ). A closely related result regarding an approximation to the GRM problem for neural networks is provided by BID18 in the improper learning setting. The following corollary to (BID18, Corollary 4.5) can be directly obtained, rephrased to match our notation: Theorem I.4 (BID18). There exists an algorithm that outputs φ̂ such that, with probability 1 − α, DISPLAYFORM4. Remark I.5. In contrast to the above result of BID18, note that in our paper we consider the proper learning setting, where we actually obtain a neural network. In addition we point out several key differences between Theorem I.4 and the algorithmic version of our result when solving the LP in Corollary I.3 of size as in FORMULA1: (a) In FORMULA1, the dependency on the input dimension is better than in Theorem I.4. (b) The dependency on the Lipschitz constant is significantly better in FORMULA1, although we have to point out that we are relying on the Lipschitz constant with respect to all inputs of the loss function and on a potentially larger domain. (c) The dependency on ε is also better in FORMULA1. (d) We are not assuming convexity of ℓ, and we consider general m. (e) The dependency on k in FORMULA1 is much more benign than the one in Theorem I.4, which is doubly exponential. Remark I.6. Since the first submission of this article, a manuscript by BID23 was published which extended the results of BID18. This work provides an algorithm with similar characteristics to the one by BID18, but in the proper learning setting, for depth-2 ReLU networks with convex loss functions. The running time of the algorithm (rephrased to match our notation) is (n/α)^{O(1)} · 2^{(w/ε)^{O(1)}}. Analogous to the comparison in Remark I.5, we obtain a much better dependence with respect to ε, and we do not rely on convexity of the loss function or on constant depth of the neural network.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HkMwHsCctm
Using linear programming we show that the computational complexity of approximate Deep Neural Network training depends polynomially on the data size for several architectures
The extended Kalman filter (EKF) is a classical signal processing algorithm which performs efficient approximate Bayesian inference in non-conjugate models by linearising the local measurement function, avoiding the need to compute intractable integrals when calculating the posterior. In some cases the EKF outperforms methods which rely on cubature to solve such integrals, especially in time-critical real-world problems. The drawback of the EKF is its local nature, whereas state-of-the-art methods such as variational inference or expectation propagation (EP) are considered global approximations. We formulate power EP as a nonlinear Kalman filter, before showing that linearisation results in a globally iterated algorithm that exactly matches the EKF on the first pass through the data, and iteratively improves the linearisation on subsequent passes. An additional benefit is the ability to calculate the limit as the EP power tends to zero, which removes the instability of the EP-like algorithm. The resulting inference scheme solves non-conjugate temporal Gaussian process models in linear time, $\mathcal{O}(n)$, and in closed form. Temporal Gaussian process models can be solved in linear computational scaling, O(n), in the number of data n (Hartikainen and Särkkä, 2010). However, non-conjugate (i.e., non-Gaussian likelihood) GP models introduce a computational problem in that they generally involve approximating intractable integrals in order to update the posterior distribution when data is observed. The most common numerical method used in such scenarios is sigma-point integration, with Gauss-Hermite cubature being a popular way to choose the sigma-point locations and weights. A drawback of this method is that the number of cubature points scales exponentially with the dimensionality d. Lower-order sigma-point methods allow accuracy to be traded off for scalability; for example, the unscented transform (which forms the basis for the unscented Kalman filter, see Särkkä, 2013) requires only 2d + 1 cubature points. One significant alternative to cubature methods is linearisation. Although such an approach has gone out of fashion lately, García-Fernández et al. showed that a globally iterated version of the statistically linearised filter (SLF, Särkkä, 2013), which performs linearisation w.r.t. the posterior rather than the prior, performs in line with expectation propagation in many modelling scenarios, whilst also providing local convergence guarantees (Appendix D explains the connection to our proposed method). Crucially, linearisation guarantees that the integrals required to calculate the posterior have a closed form solution, which results in significant computational savings if d is large. Motivated by these observations, and with the aim of illustrating the connections between classical filtering methods and EP, we formulate power EP as a Gaussian filter parametrised by a set of local likelihood approximations. The linearisations used to calculate these approximations are then refined during multiple passes through the data. We show that a single iteration of our approach is identical to the extended Kalman filter, and furthermore that we are able to calculate exactly the limit as the EP power tends to zero, since there are no longer any intractable integrals that depend on the power. The result is a global approximate inference algorithm for temporal GPs that is efficient and stable, easy to implement, scales to problems with large data and high-dimensional latent states, and consistently outperforms the EKF.
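To fix notation before the derivations, here is the generic EKF measurement update in numpy, in its standard textbook form with an explicit noise Jacobian J_r as used in this paper; this is only a sketch, not the authors' implementation, and the demo measurement model is hypothetical.

```python
# Standard EKF measurement update with Jacobians w.r.t. state (Jx) and noise (Jr).
import numpy as np

def ekf_update(m, P, y, h, Jx, Jr, R):
    """m, P: predicted state mean/cov; y: observation; h(x, r): measurement
    function; Jx(m), Jr(m): its Jacobians at the mean; R: noise covariance."""
    Hx, Hr = Jx(m), Jr(m)
    S = Hx @ P @ Hx.T + Hr @ R @ Hr.T          # innovation covariance
    K = np.linalg.solve(S, Hx @ P).T           # Kalman gain P Hx^T S^{-1}
    m_new = m + K @ (y - h(m, np.zeros(R.shape[0])))
    P_new = P - K @ S @ K.T
    return m_new, P_new

# Tiny demo with a nonlinear scalar measurement of a 2-d state.
h = lambda x, r: np.array([np.sin(x[0])]) + r
Jx = lambda x: np.array([[np.cos(x[0]), 0.0]])
Jr = lambda x: np.eye(1)
m, P = ekf_update(np.zeros(2), np.eye(2), np.array([0.4]), h, Jx, Jr, 0.1 * np.eye(1))
print(m, P)
```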
We consider non-conjugate (i.e., non-Gaussian likelihood) Gaussian process models with one-dimensional inputs t (i.e., time) which have a dual kernel (left) and discrete state space (right) form, where x_k ∈ R^s is the latent state vector containing the GP dynamics. Each x^(i)_k contains the state dynamics for one latent GP; for example, a Matérn-5/2 GP prior is modelled with a three-dimensional state (the process and its first two derivatives). The hyperparameters θ of the kernel K_θ determine the state transition matrix A_{θ,k} and the process noise q_k ∼ N(0, Q_{θ,k}). The measurement model h(x_k, r_k) is a (nonlinear) function of x_k and the observation noise r_k ∼ N(0, R_k). Our aim is to calculate the posterior over the latent states, p(x_k | y_1, ..., y_n) for k < n, otherwise known as the smoothing solution, which can be obtained via application of a Gaussian filter (to obtain the filtering solution p(x_k | y_1, ..., y_k)) followed by a Gaussian smoother. If h(·) is linear, then the Kalman filter and Rauch-Tung-Striebel (RTS, Särkkä, 2013) smoother return the optimal solution. Gaussian filtering and smoothing: As with most approximate inference methods, we approximate the filtering distributions with Gaussians, p(x_k | y_{1:k}) ≈ N(x_k; m_k, P_k). The prediction step remains the same as in the standard Kalman filter, with the resulting distribution acting as the EP cavity on the forward (filtering) pass. To account for the non-Gaussian likelihood in the update step, we follow the power EP formulation, introducing an intermediary step in which the parameters of the approximate likelihoods, N(x_k; m̃_k, Ṽ_k), are set via a moment matching procedure and stored before continuing with the Kalman updates. This PEP formulation, with power α, makes use of the fact that the required moments can be calculated via the derivatives of the log-normaliser, Z_k, of the tilted distribution, giving the site updates. After the mean and covariance of our new likelihood approximation have been calculated, we can proceed with a modified set of linear Kalman filter updates. As in the filtering pass, we augment the standard RTS smoother with another moment matching step, where the cavity distribution is calculated by removing (a fraction α of) the local likelihood from the marginal smoothing distribution p(x_k | y_{1:n}). Here J_x ∈ R^{a×s} and J_r ∈ R^{a×a} are the Jacobians of h(·) evaluated at the mean w.r.t. x_k and r_k respectively. This new Gaussian form means that the remaining expectation term in the moment matching step is zero, so the required moments are available analytically. Therefore, we can now update the approximate likelihood in closed form (Appendix B gives the derivation). The result when we use this closed form update (with α = 1) to modify the filter updates is exactly the EKF (see Appendix C for the proof). Additionally, since these updates are now available in closed form, a variational free energy method (α → 0) is simple to implement and doesn't require any matrix subtractions and inversions, which can be costly and unstable. Taking α → 0 prior to linearisation is not possible because the intractable integrals also depend on α. Appendix A describes our full iterative algorithm. In Fig. 2, we compare our approach (EKF-PEP, α = 1) to EP and the EKF on two non-conjugate GP tasks (see Appendix E for the full formulations). Whilst our method is suited to large datasets, we focus here on small time series for ease of comparison. In the left-hand plot, a log-Gaussian Cox process (approximated with a Poisson model for 200 equal time interval bins) is used to model the intensity of coal mining accidents. EKF-PEP and the EKF match the EP posterior well, with EKF-PEP obtaining an even tighter match to both the mean and marginal variances.
The right-hand plot shows a similar comparison for 133 accelerometer readings in a simulated motorcycle crash, using a heteroscedastic noise model. Linearisation in this model is a crude approximation to the true likelihood, but we observe that iteratively refining the linearisation vastly improves the posterior in some regions. This new perspective on linearisation in approximate inference unifies the PEP and EKF paradigms for temporal data, and provides an improvement to the EKF that requires no additional implementation effort. Key areas for further exploration are the effect of adjusting α (i.e., changing the cavity and the linearisation point), and the use of statistical linearisation as an alternative method for obtaining the local approximations.
Appendix A. The proposed globally iterated EKF-PEP algorithm.
Algorithm 1: Globally iterated extended Kalman filter with power EP-style updates and discretised state space model.
Input: h, H, J_x, J_r, α (measurement model, Jacobians and EP power)
m_0 ← 0, P_0 ← P_∞, e_{1:n} = 0 (initial state)
while not converged do (iterated EP-style loop)
  for k = 1 to n do (forward pass, FILTERING)
    evaluate Jacobians ...
Appendix B. Here we derive in full the closed form site updates after linearisation. Plugging the derivatives into the site updates, we get intermediate expressions; by the matrix inversion lemma, and with R̃ defined so that the scaled noise term appears explicitly, applying the matrix inversion lemma for a second time we obtain a simplified form. Together, the above calculations give the approximate site mean and covariance. Appendix C. Analytical linearisation in EP (α = 1) results in an iterated version of the EKF. Here we prove that a single pass of the proposed EP-style algorithm with linearisation is exactly equivalent to the EKF. Plugging the closed form site updates, with α = 1 (since the filter predictions can be interpreted as the cavity with the full site removed), into our modified Kalman filter update equations, we get a new set of Kalman updates in which the latent noise terms are determined by scaling the observation noise with the Jacobian of the state. This can be rewritten to explicitly show that there are two innovation covariance terms, S_k and Ŝ_k, which act on the state mean and covariance separately. Calculating the inverse of Ŝ_k and the inverse of S_k shows how the two are related, and hence, recalling that R̃_k = J_{r,k} R_k J_{r,k}ᵀ, the update simplifies to give exactly the extended Kalman filter updates (the EKF update step). Posterior linearisation (García-Fernández et al., 2015) is a filtering algorithm that iteratively refines local posterior approximations based on statistical linear regression (SLR), and can be seen as a globally iterated extension of the SLR filter (Särkkä, 2013). The idea is that the measurement function is linearised with respect to the posterior, rather than the prior, which is particularly beneficial when the measurement noise is small, such that the prior and posterior can have very different locations and variance. One drawback of using SLR is that it does not generally result in closed form updates; however, it does provide local convergence guarantees. We have shown in Section 2 that on the first filtering pass our proposed algorithm is equivalent to the EKF. However, the power EP formulation of the smoothing pass iteratively refines the approximate likelihood parameters in the context of the posterior (with a fraction of the local likelihood removed). Letting α → 0 during the cavity calculation implies that the expectations are now with respect to the full marginal posterior.
This shows that the PLF is a version of our algorithm in which α = 0 and analytical linearisation is replaced with SLR. This motivates the following observation: posterior linearisation is a variational free energy method in which the intractable integrals required for posterior calculation are solved via linearisation of the likelihood mean function. This is intuitive, since the formulation of the PLF is based on minimizing local KL divergences. The local convergence analysis in García-Fernández et al. depends on using SLR as the linearisation method and initialising the state sufficiently close to a fixed point. However, it now becomes apparent why both the PLF and our algorithm are generally more stable than EP: no covariance subtractions and inversions are necessary in calculating the cavity distribution, which avoids the possibility of negative-definite covariance matrices. Log-Gaussian Cox process: The coal mining dataset contains the dates of 191 coal mine explosions in Britain between the years 1851-1962, discretised into n = 200 equal time interval bins. We use a log-Gaussian Cox process to model this count data. Assuming the process has locally constant intensity in the subregions allows a Poisson likelihood to be used for each bin, where we define f to be the log-intensity. However, the Poisson is a discrete probability distribution and the EKF applies to continuous observations. Therefore we use a Gaussian approximation to the Poisson likelihood, noticing that the first two moments of the Poisson distribution are equal to the intensity λ_k = exp(f_k). Heteroscedastic noise model: The motorcycle crash experiment consists of 131 simulated readings from an accelerometer on a motorcycle helmet during impact. A single GP is not a good model for this data due to the heteroscedasticity of the observation noise, therefore it is common to model the noise separately. We model the process with one GP for the mean and another for the time-varying observation noise. Letting r_k ∼ N(0, 1), we place GP priors over f^(1) and f^(2), both with Matérn-3/2 kernels: f^(1)(t) ∼ GP(0, κ_{θ1}(t, t′)) and f^(2)(t) ∼ GP(0, κ_{θ2}(t, t′)), with measurement model h(x_k, r_k) = f^(1)_k + φ(f^(2)_k) r_k, where φ(z) = log(1 + exp(z)). In practice a problem arises when linearising this likelihood model. Since the mean of r_k is 0, the Jacobian of the noise term disappears when evaluated at the mean, regardless of the value of f^(2). Hence we reformulate the model to improve identifiability, using a transformed measurement function h̃(x_k, r_k) based on (y_k − f^(1)_k).
(Figure: Left, the EKF-PEP method; right, the PEP equivalent. The top plots show the posterior for f^(1)(t) (the mean process), the middle plots show the posterior for f^(2)(t) (the observation noise process), and the bottom plots show the full model.)
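For concreteness, a sketch of this heteroscedastic measurement function and its Jacobians is given below; the state layout (which components of x_k carry f^(1) and f^(2)) is our assumption. Evaluating at r = 0 reproduces the identifiability problem noted above: the Jacobian entry for f^(2) vanishes.

```python
# Sketch of h(x, r) = f1 + softplus(f2) * r with its Jacobian entries;
# the indices idx_f1, idx_f2 are hypothetical state-layout choices.
import numpy as np

def softplus(z):
    return np.log1p(np.exp(z))

def h(x, r, idx_f1=0, idx_f2=1):
    return x[idx_f1] + softplus(x[idx_f2]) * r

def jacobians(x, r, idx_f1=0, idx_f2=1):
    f2 = x[idx_f2]
    sig = 1.0 / (1.0 + np.exp(-f2))        # d softplus(f2) / d f2
    Jx = np.zeros_like(x)
    Jx[idx_f1] = 1.0
    Jx[idx_f2] = sig * r                   # vanishes at the mean, where r = 0
    Jr = softplus(f2)                      # d h / d r
    return Jx, Jr

print(jacobians(np.array([0.5, -1.0]), r=0.0))
```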
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HkxNKk2VKS
We unify the extended Kalman filter (EKF) and the state space approach to power expectation propagation (PEP) by solving the intractable moment matching integrals in PEP via linearisation. This leads to a globally iterated extension of the EKF.
This paper explores the simplicity of learned neural networks under various settings: learned on real vs random data, varying size/architecture, and using large minibatch size vs small minibatch size. The notion of simplicity used here is that of learnability, i.e., how accurately the prediction function of a neural network can be learned from labeled samples from it. While learnability is different from (in fact often higher than) test accuracy, the results herein suggest that there is a strong correlation between small generalization errors and high learnability. This work also shows that there exist significant qualitative differences in shallow networks as compared to popular deep networks. More broadly, this paper extends, in a new direction, previous work on understanding the properties of learned neural networks. Our hope is that such an empirical study of understanding learned neural networks might shed light on the right assumptions that can be made for a theoretical study of deep learning. Over the last few years neural networks have significantly advanced the state of the art on several tasks such as image classification (LeCun et al. 1990a; Krizhevsky et al. 2012; He et al. 2016), object detection (Ren et al. 2015; He et al. 2017; Redmon & Farhadi 2018), image captioning, and speech recognition (Xiong et al. 2018), and have transformed the areas of computer vision and natural language processing. Despite the success of neural networks in making these advances, the reasons for their success are not well understood. Understanding the performance of neural networks and the reasons for their success are major open problems at the moment. Questions about the performance of neural networks can be broadly classified into two groups: i) optimization, i.e., how are we able to train large neural networks well even though it is NP-hard to do so in the worst case, and ii) generalization, i.e., how is it that the training error and test error are close to each other for large neural networks where the number of parameters in the network is much larger than the number of training examples (highly overparametrized). This paper explores three aspects of generalization in neural networks. The first aspect is the performance of neural networks on random training labels. While neural networks generalize well (i.e., training and test error are close to each other) on real datasets even in highly overparametrized settings, BID33 shows that neural networks are nevertheless capable of achieving zero training error on random training labels. Since any given network will have large error on random test labels, BID33 concludes that neural networks are indeed capable of poor generalization. However, since the labels of the test set are random and completely independent of the training data, this leaves open the question of whether neural networks learn simple patterns even on random training data. Indeed, the results of BID22 establish that even in the presence of massive label noise in the training data, neural networks obtain good test accuracy on real data. This suggests that neural networks might learn some simple patterns even with random training labels. The first question this paper asks is (Q1): Do neural networks learn simple patterns on random training data? A second, very curious, aspect about the generalization of neural networks is the observation that increasing the size of a neural network helps in achieving better test error even if a training error of zero has already been achieved (see, e.g., BID21), i.e., larger neural networks have better generalization error.
This is contrary to traditional wisdom in statistical learning theory, which holds that larger models give better training error but at the cost of higher generalization error. A recent line of work proposes that the reason for better generalization of larger neural networks is implicit regularization, or in other words, larger learned models are simpler than smaller learned models; see the related work for references. The second question this paper asks is (Q2): Do larger neural networks learn simpler patterns compared to smaller neural networks when trained on real data? The third aspect about generalization that this paper considers is the widely observed phenomenon that using large minibatches for stochastic gradient descent (SGD) leads to poor generalization (LeCun et al.). (Q3): Are neural networks learned with small minibatch sizes simpler compared to those learned with large minibatch sizes? All the above questions have been looked at from the point of view of flat/sharp minimizers (BID11). Here flat/sharp corresponds to the curvature of the loss function around the learned neural network. BID18 for true vs random data, BID24 for large vs small neural networks, and BID16 for small vs large minibatch training all look at the sharpness of minimizers in various settings and connect it to the generalization performance of neural networks. While there certainly seems to be a connection between the sharpness of the learned neural network and generalization, there is as yet no unambiguous notion of this sharpness to quantify it. See BID4 for more details. This paper takes a complementary approach: it looks at the above questions through the lens of learnability. Let us say we are considering a multi-class classification problem with c classes, and let D denote a distribution over the inputs x ∈ R^d. Given a neural network N, draw n independent samples x^tr_1, ..., x^tr_n from D and train a neural network N′ on the training data DISPLAYFORM0. The learnability of a neural network N is defined to be DISPLAYFORM1. Note that L(N) implicitly depends on D, the architecture and learning algorithm used to learn N′, as well as n. This dependence is suppressed in the notation but will be clear from context. Intuitively, the larger L(N) is, the easier it is to learn N from data (a short code rendering of this metric is given below). This notion of learnability is not new and is very closely related to probably approximately correct (PAC) learnability (BID15). In the context of neural networks, learnability has been well studied from a theoretical point of view, as we discuss briefly in Sec. 2. There we also discuss some related empirical results, but to the best of our knowledge there has been no work investigating the learnability of neural networks that are encountered in practice. This paper empirically investigates the learnability of neural networks of varying sizes/architectures and minibatch sizes, learned on real/random data, in order to answer (Q1), (Q2) and (Q3). The main contributions of this paper are as follows: DISPLAYFORM2 The results in this paper suggest that there is a strong correlation between generalizability and learnability of neural networks, i.e., neural networks that generalize well are more learnable compared to those that do not generalize well. Our experiments suggest that:
• Neural networks do not learn simple patterns on random data.
• Learned neural networks of large size/architectures that achieve higher accuracies are more learnable.
• Neural networks learned using small minibatch sizes are more learnable compared to those learned using large minibatch sizes.
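Rendered in code, the learnability metric is simply the agreement rate between the teacher network's predictions and those of the student trained to imitate it (function and variable names below are hypothetical):

```python
# L(N): fraction of fresh test inputs on which N' reproduces N's predicted label.
import numpy as np

def learnability(preds_teacher, preds_student):
    return float(np.mean(np.asarray(preds_teacher) == np.asarray(preds_student)))

print(learnability([1, 2, 2, 0], [1, 2, 0, 0]))   # 0.75
```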
Experiments also suggest that there are qualitative differences between learned shallow networks and deep networks, and further investigation is warranted. Paper organization: The paper is organized as follows. Section 2 gives an overview of related work. Section 3 presents the experimental setup and results. Section 5 concludes the paper with some discussion of the results and future directions. Learnability of the concept class of neural networks has been addressed from a theoretical point of view in two recent lines of work. The first line of work shows hardness of learning by exhibiting a distribution and neural net that is hard to learn by certain types of algorithms. We will mention one of the recent results; further information can be obtained from the references therein. (see also BID26; BID25) show that there exist families of single hidden layer neural networks of small size that are hard to learn for statistical query algorithms (statistical query algorithms (BID14) capture a large class of learning algorithms, in particular, many deep learning algorithms such as SGD). The result holds for log-concave distributions on the input and for a wide class of activation functions. If each sample is used only once, then the hardness in their result means that the number of samples required is exponentially large. These results do not seem directly applicable to input distributions and networks encountered in practice. The second line of work shows, under various assumptions on D and/or N, that the learnability of neural networks is close to 1 (Arora et al. FORMULA1; Janzamin et al. FORMULA1; BID5; BID34). Recently, BID6 gave a provably efficient algorithm for learning one hidden layer neural networks consisting of sigmoids. However, their algorithm, which uses the kernel method, is different from the ones used in practice, and the output hypothesis is not in the form of a neural network. Using one neural net to train another has also been used in practice, e.g., Ba & Caruana FORMULA1; BID10; BID30. The goal in these works is to train a small neural net to fit the data with high accuracy by a process often called distillation. To this end, first a large network is trained to high accuracy. Then a smaller network is trained on the original data, but instead of class labels, the training now uses the classification probabilities or related quantities of the large network. Thus the goal of this line of research, while related, is different from our goal. In this section, we will describe our experiments and present results. All our experiments were performed on CIFAR-10 (BID17). The 60,000 training examples were divided into three subsets D_1, D_2 and D_3, with D_1 and D_2 having 25000 samples each and D_3 having 10000 samples. We overload the term D_i to denote both the unlabeled as well as labeled data points in the i-th split; usage will be clear from context. For all the experiments, we use vanilla stochastic gradient descent (SGD), i.e., no momentum parameter, with an initial learning rate of 0.01. We decrease the learning rate by a factor of 3/4 if there is no decrease in train error for the last 10 epochs. Learning proceeds for 500 epochs or until the training zero-one error becomes smaller than 1%, whichever is earlier. Unless mentioned otherwise, a minibatch size of 64 is used and the final training zero-one error is smaller than 1%. For training, we minimize logloss and do not use weight decay. The experimental setup is as follows.
Step 1: Train a network N_1 on (labeled) D_1.
Step 2: Use N_1 to predict labels for (unlabeled) D_2, denoted by N_1(D_2).
Step 3: Train another network N_2 on the data (D_2, N_1(D_2)).
Learnability of a network is computed as DISPLAYFORM0. All the numbers reported here were averaged over 5 independent runs. We now present experimental results aimed at answering (Q1), (Q2) and (Q3), raised in Section 1. The first set of experiments is aimed at understanding the effect of data on the simplicity of learned neural networks. We work with three different kinds of data. In this section we vary the data in three ways:
• True data: Use labeled images from CIFAR-10 for D_1 in Step 1.
• Random labels: Use unlabeled images from CIFAR-10 for D_1 in Step 1 and assign them random labels uniformly between 1 and 10.
• Random images: Use random images and labels in Step 1, where each pixel in the image is drawn uniformly from [−1, 1].
For this set of experiments the architecture of N_1 was the same as that of N_2. The networks N_1 and N_2 were varied over different architectures, including VGG, GoogLeNet, ResNet (He et al. 2016a), and PreActResNet (BID9). We also run the same experiment on shallow convolutional neural networks with one convolutional layer and one fully connected layer. For the shallow networks, we vary the number of filters in N_1 and N_2 over {16, 32, 64, 128, 256, 512, 1024}. We start with 16 filters since that is the minimum number of filters for which the training zero-one error goes below 1%. The learnability values for various networks for true data, random labels and random images are presented in Table 1 for shallow convolutional networks, in TAB1 for popular deep convolutional networks, and in TAB2; this clearly demonstrates that the complexity of a learned neural network heavily depends on the training data. Given that the complexity of the learned model is closely related to its generalizability, this further supports the view that generalization in neural networks heavily depends on training data. Similar results can be observed for shallow convolutional networks on CIFAR-100 in Table 4. It is perhaps surprising that the learnability of networks trained on random data is substantially higher than 10% for shallow networks; on the other hand, it is close to 10% for deeper networks. Some of this is due to class imbalance: in the case of true data, class imbalance is minimal for all architectures, while when trained on random labels or random images, the output of N_1 on D_2 was skewed. This happened both for shallow and deeper networks, but was slightly more pronounced for shallow networks. TAB3 presents the percentage of each class in the labels of N_1 on D_2. However, we do not have a quantification of how much of the learnability in the case of shallow networks arises due to class imbalance, nor a compelling reason for the high learnability of shallow networks. TAB6 presents these results for VGG-11 and GoogLeNet. The key point we would like to make from these tables is that if we focus on those examples where N_1 does not predict the true label correctly (i.e., TLP = 0, the first row in the tables), we see that approximately half of these examples are still learned correctly by N_2. Contrast this with the learnability values of N_1 learned with random data, which are all less than 20%. This suggests that networks learned on true data make simpler predictions even on examples which they misclassify.
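Before moving on, the Step 1-3 protocol above can be summarized in runnable form. The sketch below substitutes a small sklearn MLP on synthetic data for the paper's CNNs on CIFAR-10, purely to show the shape of the pipeline; all data shapes and model settings are placeholder choices.

```python
# End-to-end sketch of Steps 1-3 plus the learnability computation on D3.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X1 = rng.normal(size=(500, 20))                 # D1 inputs
y1 = np.digitize(X1[:, 0], [-0.5, 0.5])         # synthetic 3-class labels for D1
X2 = rng.normal(size=(500, 20))                 # D2 (unlabeled)
X3 = rng.normal(size=(200, 20))                 # D3 (held out)

n1 = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300).fit(X1, y1)        # Step 1
labels_2 = n1.predict(X2)                                                     # Step 2
n2 = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300).fit(X2, labels_2)  # Step 3

learnability = np.mean(n1.predict(X3) == n2.predict(X3))   # agreement on D3
print(learnability)
```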
The second set of experiments is aimed at understanding the effect of network size and architecture on the learnability of the learned neural network. First, we work with shallow convolutional neural networks (CNNs) with 1 convolutional layer and 1 fully connected layer. The results are presented in Table 10. Even though the training accuracy is always greater than 99%, the test accuracy increases with the size of N_1; Neyshabur et al. FORMULA1 report similar results for 2-layer multilayer perceptrons (MLPs). It is clear that for any fixed N_2, the learnability of the learned network increases as the number of filters in N_1 increases. This suggests that the larger learned networks are indeed simpler than the smaller learned networks. Note also that for every N_1, its learnability values are always larger than its test accuracy when N_2 has 16 or more filters. This suggests that N_2 learns information about N_1 that is not contained in the data. We performed the same experiment for some popular architectures as in Section 3.2. The results are presented in TAB1. Note that the accuracies reported here are significantly lower than those reported in the published literature for the corresponding models; the reason for this is that our data size is essentially cut in half (see Section 3.1). Except for the case where N_2 is ResNet18 and N_1 is either a VGG or ResNet, there is a positive correlation between test accuracy and learnability, i.e., a network with higher test accuracy is more learnable. We do not know the reason for the exception mentioned above. Furthermore, the pattern observed for shallow networks, that learnability is larger than accuracy, does not seem to always hold for these larger networks. The third set of experiments is aimed at understanding the effect of minibatch size on the learned model. For this set of experiments, N_1 and N_2 are again varied over different architectures while keeping the architectures of N_1 and N_2 the same. The minibatch size for training N_2 (Step 3) is fixed to 64, while the minibatch size for training N_1 (Step 1) is varied over {32, 64, 128, 256}. TAB2 presents these results. It is clear from these results that for any architecture, increasing the minibatch size leads to a reduction in learnability. This suggests that using a larger minibatch size in SGD leads to a more complex neural network as compared to using a smaller minibatch size. In this section, we explore the slightly orthogonal question of whether neural networks learned with different random initializations converge to the same neural network, as functions. While there are some existing works, e.g., BID7, which explore linear interpolation between the parameters of two learned neural networks with different initializations, we are interested here in understanding whether different SGD solutions still correspond to the same function. In order to do this, we compute the confusion matrix for different SGD solutions: if SGD is run k times (k = 5 in this case), the (i, j) entry of the confusion matrix, where 1 ≤ i, j ≤ k, gives the fraction of examples on which the i-th and j-th SGD solutions agree. We computed the confusion matrices for the shallow network and for VGG-11. For both networks, we see that the off-diagonal entries are quite close to each other. This seems to suggest that while the different SGD solutions are not the same as functions, they agree on a common subset (93% for the shallow network and 73% for VGG-11) of examples. Furthermore, for VGG-11, the off-diagonal entries are very close to the test accuracy; this behavior of VGG-11 seems common to other popular architectures as well. This might seem to suggest that different SGD solutions agree on precisely those examples which they predict correctly, which would in turn mean that the subset of examples on which different SGD solutions agree with each other is precisely the set of correctly predicted examples. However, this does not seem to be the case. Figures 1 and 2 show the histograms of the number of distinct predictions for the shallow network and VGG-11 respectively. For each number i ∈ [k], they show the fraction of examples for which the k SGD solutions make exactly i distinct predictions. The fraction of examples for which there is exactly 1 prediction, or equivalently on which all the SGD solutions agree, is significantly smaller than the test accuracies reported above.
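The agreement ("confusion") matrix just described, together with the per-example distinct-prediction counts used in the histograms, can be computed as follows (a plain numpy sketch over stored per-run predictions; the example array is hypothetical):

```python
# Pairwise-agreement matrix over k SGD runs, plus distinct-prediction counts.
import numpy as np

def agreement_matrix(preds):               # preds: k x n array of predicted labels
    preds = np.asarray(preds)
    k = preds.shape[0]
    M = np.empty((k, k))
    for i in range(k):
        for j in range(k):
            M[i, j] = np.mean(preds[i] == preds[j])
    return M

def distinct_counts(preds):                # per-example number of distinct predictions
    return np.array([len(set(col)) for col in np.asarray(preds).T])

preds = np.array([[0, 1, 2, 1], [0, 1, 0, 1], [0, 2, 2, 1]])   # k = 3 runs, n = 4
print(agreement_matrix(preds))
print(distinct_counts(preds))
```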
The experimental results so far show a clear correlation between learnability and generalizability of learned neural networks. This naturally leads to the question of why this is the case. We hypothesize that learnability captures the inductive bias of SGD training of neural networks. More precisely, when we start training, intuitively, the initial random network generalizes well (i.e., both train and test errors are high, so the generalization gap is small) and is also simple (learnability is high). As SGD changes the network to reduce the training error, it becomes more complex (learnability decreases) and the generalization error increases. FIG2, which shows the plots of learnability and generalizability of shallow 2-layer CNNs, supports this hypothesis. DISPLAYFORM0 This paper explores the learnability of learned neural networks under various scenarios. The results herein suggest that while learnability is often higher than test accuracy, there is a strong correlation between low generalization error and high learnability of the learned neural networks. This paper also shows that there are some qualitative differences between shallow and popular deep neural networks. Some questions that this paper raises are the effect of optimization algorithms, hyperparameter selection and initialization schemes on learnability. On the theoretical front, it would be interesting to characterize neural networks that can be learned efficiently via backprop. Given the strong correlation between learnability and generalization, driving the network to converge to learnable networks might help achieve better generalization.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rJ1RPJWAW
Exploring the Learnability of Learned Neural Networks
With the proliferation of models for natural language processing (NLP) tasks, it is even harder to understand the differences between models and their relative merits. Simply looking at differences between holistic metrics such as accuracy, BLEU, or F1 does not tell us \emph{why} or \emph{how} a particular method is better and how dataset biases influence the choices of model design. In this paper, we present a general methodology for \emph{interpretable} evaluation of NLP systems and choose the task of named entity recognition (NER) as a case study, which is a core task of identifying people, places, or organizations in text. The proposed evaluation method enables us to interpret the \emph{model biases}, \emph{dataset biases}, and how the \emph{differences in the datasets} affect the design of the models, identifying the strengths and weaknesses of current approaches. By making our analysis tool available, we make it easy for future researchers to run similar analyses and drive the progress in this area.
[ 0, 0, 0, 1, 0 ]
HJxTgeBtDr
We propose a generalized evaluation methodology to interpret model biases, dataset biases, and their correlation.
The task of visually grounded dialog involves learning goal-oriented cooperative dialog between autonomous agents who exchange information about a scene through several rounds of questions and answers. We posit that requiring agents to adhere to the rules of human language while also maximizing information exchange is an ill-posed problem, and observe that humans do not stray from a common language because they are social creatures who have to communicate with many people every day, and it is far easier to stick to a common language even at the cost of some efficiency loss. Using this as inspiration, we propose and evaluate a multi-agent dialog framework where each agent interacts with, and learns from, multiple agents, and show that this results in more relevant and coherent dialog (as judged by human evaluators) without sacrificing task performance (as judged by quantitative metrics). Intelligent assistants like Siri and Alexa are increasingly becoming an important part of our daily lives, be it in the household, the workplace or in public places. As these systems become more advanced, we will have them interacting with each other to achieve a particular goal (BID9). We want these conversations to be interpretable to humans for the sake of transparency and ease of debugging. Having the agents communicate in natural language is one of the most universal ways of ensuring interpretability. This motivates our work on goal-driven agents which interact in coherent language understandable to humans. To that end, this paper builds on work by BID2 on goal-driven visual dialog agents. The task is formulated as a conversation between two agents, a Question (Q-) and an Answer (A-) Bot. The A-Bot is given an image, while the Q-Bot is given only a caption to the image. Both agents share a common objective, which is for the Q-Bot to form an accurate mental representation of the unseen image, using which it can retrieve, rank or generate that image. This is facilitated by the exchange of 10 pairs of questions and answers between the two agents, using a shared vocabulary. BID2 trained the agents first in isolation via supervision from the VisDial dataset (BID1), followed by making them interact and adapt to each other via reinforcement learning by optimizing for better task performance. While trying to maximize performance, the agents learn to communicate in non-grammatical and semantically meaningless sentences in order to maximize the exchange of information. This reduces the transparency of the AI system to human observers and is undesirable. We address this problem by proposing a multi-agent dialog framework where each agent interacts with multiple agents. This is motivated by our observation that humans adhere to syntactically and semantically coherent language, which we hypothesize is because they have to interact with an entire community, and having a private language for each person would be extremely inefficient. We show that our multi-agent (with multiple Q-Bots and multiple A-Bots) dialog system results in more coherent and human-interpretable dialog between agents, without compromising on task performance, which also validates our hypothesis. This makes the agents seem more helpful, transparent and trustworthy. We will make our code available as open-source. The game involves two collaborative agents, a question bot (Q-Bot) and an answer bot (A-Bot).
The A-Bot is provided an image I (represented as a feature embedding y_gt extracted by, say, a pretrained CNN model (BID13)), while the Q-Bot is provided with only a caption of the image. The Q-Bot is tasked with estimating a vector representation ŷ of I, which is used to retrieve that image from a dataset. Both agents receive a common penalty from the environment which is equal to the error in ŷ with respect to y_gt. Thus, an unlimited number of games may be simulated without human supervision, motivating the use of reinforcement learning in this framework. Our primary focus in this work is to ensure that the agents' dialog remains coherent and understandable while also being informative and improving task performance. For concreteness, an example of dialog that is informative yet incoherent: question: "do you recognize the guy and age is the adult?", answered with: "you couldn't be late teens, his". The example shows that the bots try to extract and convey as much information as possible in a single question/answer (sometimes by incorporating multiple questions or answers into a single statement). But in doing so they lose basic semantic and syntactic structure. Most of the major works which combine vision and language have traditionally focused on the problems of image captioning (BID7; BID17; BID14; BID5; BID12; BID19) and visual question answering (BID0; BID20; BID4; BID18). The problem of visual dialog is relatively new and was first introduced by BID1, who also created the VisDial dataset to advance the research on visually grounded dialog. The dataset was collected by pairing two annotators on Amazon Mechanical Turk to chat about an image. They formulated the task as a 'multi-round' VQA task and evaluated individual responses at each round in an image guessing setup. In a subsequent work, BID2 proposed a Reinforcement Learning based setup where they allowed the Question bot and the Answer bot to have a dialog with each other with the goal of correctly predicting the image unseen to the Question bot. However, in their work they noticed that the reinforcement learning based training quickly led the bots to diverge from natural language. In fact, recent work showed that language emerging from two agents interacting with each other might not even be interpretable or compositional. Our multi-agent framework aims to alleviate this problem and prevent the bots from developing a specialized language between them. Interleaving supervised training with reinforcement learning also helps prevent the bots from becoming incoherent and generating nonsensical dialog. Recent work has also proposed using such goal-driven dialog agents for other related tasks, including negotiation (BID10) and collaborative drawing. We believe that our work can easily extend to those settings as well. Another line of work proposed a generative-discriminative framework for visual dialog, training only an answer bot to generate informative answers for ground truth questions. These answers were then fed to a discriminator, which was trained to rank the generated answer among a set of candidate answers. This is a major restriction of their model, as it can only be trained when this additional information of candidate answers is available, which restricts it to a supervised setting. Furthermore, since they train only the answer bot and have no question bot, they cannot simulate an entire dialog, which also prevents them from learning by self-play via reinforcement.
BID16 further improved upon this generative-discriminative framework by formulating the discriminator as a more traditional GAN BID3, where the adversarial discriminator is tasked to distinguish between human-generated and machine-generated dialogs. Furthermore, unlike the earlier framework, they modeled the discriminator using an attention network which also utilized the dialog history in predicting the next answer, allowing it to maintain coherence and consistency across dialog turns. We briefly describe the agent architectures and leave the details for the appendix. The question bot architecture we use is inspired by the answer bot architecture in BID2, but the individual units have been modified to provide more useful representations. Similar to the original architecture, our Q-Bot, shown in FIG2, also consists of 5 parts: (a) a fact encoder, (b) a state-history encoder, (c) a caption encoder, (d) an image regression network and (e) a question decoder. The fact encoder is modeled using a Long Short-Term Memory (LSTM) network, which encodes the previous question-answer pair into an embedding F^Q_t. We modify the state-history encoder to incorporate a two-level hierarchical encoding of the dialog. It uses the fact embedding F^Q_t at each time step to compute attention over the history of dialog, (F^Q_1, F^Q_2, ..., F^Q_t), and produce a history encoding S^Q_t. The key modification (compared to the earlier architecture) in our architecture is the addition of a separate LSTM to compute a caption embedding C. This is key to ensuring that the hierarchical encoding does not exclusively attend to the caption while generating the history embedding, and prevents the occurrence of repetitive questions in a dialog, since the encoding now has an adequate representation of the previous facts. The caption embedding is then concatenated with F^Q_t and S^Q_t, and passed through multiple fully connected layers to compute the state-history encoder embedding e^Q_t and the predicted image feature embedding ŷ_t = f(S^Q_t). The encoder embedding e^Q_t is fed to the question decoder, another LSTM, which generates the question q_t. For all LSTMs and fully connected layers in the model we use a hidden layer size of 512. The image feature vector is 4096-dimensional. The word embeddings and the encoder embeddings are 300-dimensional. The architecture for the A-Bot, also inspired by prior work and shown in FIG2, is similar to that of the Q-Bot. It has 3 components: (a) a question encoder, (b) a state-history encoder and (c) an answer decoder. The question encoder computes an embedding Q_t for the question to be answered, q_t. The history encoding (F^A_1, F^A_2, F^A_3, ..., F^A_t) → S^A_t uses a similar two-level hierarchical encoder, where the attention is computed using the question embedding Q_t. The caption is passed on to the A-Bot as the first element of the history, which is why we do not use a separate caption encoder. Instead, we use the fc7 feature embedding of a pretrained VGG-16 BID13 model to compute the image embedding I. The three embeddings S^A_t, Q_t, I are concatenated and passed through another fully connected layer to extract the encoder embedding e^A_t. The answer decoder, another LSTM, uses this embedding e^A_t to generate the answer a_t. Similar to the Q-Bot, we use a hidden layer size of 512 for all LSTMs and fully connected layers. The image feature vector coming from the CNN is 4096-dimensional (fc7 features from VGG-16). The word embeddings and the encoder embeddings are 300-dimensional. We follow the training process proposed in BID2.
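The following PyTorch sketch illustrates one reading of the Q-Bot encoder path described above (fact LSTM, fact-driven attention over the dialog history, a separate caption LSTM, and heads producing e_t and ŷ_t). Hidden sizes follow the text; all module names are illustrative and this is not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QBotEncoder(nn.Module):
    """Sketch of the Q-Bot encoder: an LSTM fact encoder, attention over the
    fact history driven by the current fact, a separate caption LSTM, and
    heads producing the state encoding e_t and the image prediction y_hat."""
    def __init__(self, vocab=8645, emb=300, hid=512, img=4096):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.fact_lstm = nn.LSTM(emb, hid, batch_first=True)
        self.cap_lstm = nn.LSTM(emb, hid, batch_first=True)
        self.attn = nn.Linear(2 * hid, 1)      # scores (history, fact) pairs
        self.to_e = nn.Linear(3 * hid, hid)    # fuse fact, history, caption
        self.to_img = nn.Linear(hid, img)      # feature regression network f

    def encode_seq(self, lstm, tokens):
        _, (h, _) = lstm(self.embed(tokens))
        return h[-1]                           # (B, hid) final hidden state

    def forward(self, facts, caption):
        # facts: list of token tensors, one per past QA round; caption: tokens
        F_t = [self.encode_seq(self.fact_lstm, f) for f in facts]
        H = torch.stack(F_t, dim=1)            # (B, T, hid) history of facts
        query = H[:, -1]                       # current fact F^Q_t
        scores = self.attn(torch.cat(
            [H, query.unsqueeze(1).expand_as(H)], dim=-1)).squeeze(-1)
        S_t = (F.softmax(scores, dim=1).unsqueeze(-1) * H).sum(1)
        C = self.encode_seq(self.cap_lstm, caption)
        e_t = torch.tanh(self.to_e(torch.cat([query, S_t, C], dim=-1)))
        return e_t, self.to_img(e_t)           # encoder embedding, y_hat_t
```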
Two agents, a Q-Bot and an A-Bot, are first trained in isolation, by supervision from the VisDial dataset. After this supervised pretraining for 15 epochs over the data, we smoothly transition the agents to learn by reinforcement via a curriculum. Specifically, for the first K rounds of dialog for each image, the agents are trained using supervision from the VisDial dataset. For the remaining 10−K rounds, however, they are trained via reinforcement learning. K starts at 9 and is linearly annealed to 0 over 10 epochs. The individual phases of training are described below, with some details in the appendix, and a sketch of the schedule follows this paragraph. In the supervised part of training, both the Q-Bot and A-Bot are trained separately. Both the Q-Bot and A-Bot are trained with a Maximum Likelihood Estimation (MLE) loss computed using the ground-truth questions and answers, respectively, for every round of dialog. The Q-Bot simultaneously optimizes another objective: minimizing the Mean Squared Error (MSE) loss between the true and predicted image embeddings. The ground-truth dialogs and image embeddings come from the VisDial dataset. Given the true dialog history, image features and ground-truth question, the A-Bot generates an answer to that question. Given the true dialog history and the previous question-answer pair, the Q-Bot is made to generate the next question to ask the A-Bot. However, there are multiple problems with this training scheme. First, MLE is known to result in models that generate repetitive dialogs and often produce generic responses. Second, the agents are never allowed to interact during training. Thus, when they interact during testing, they end up facing out-of-distribution questions and answers, and produce unexpected responses. Third, the sequentiality of dialog is lost when the agents are trained in an isolated, supervised manner. To alleviate the issues pointed out with supervised training, we let the two bots interact with each other via self-play (no ground truth except images and captions). This interaction is also in the form of questions asked by the Q-Bot, and answered in turn by the A-Bot, using a shared vocabulary. The state space is partially observed and asymmetric, with the Q-Bot observing {c, q_1, a_1, ..., q_10, a_10} and the A-Bot observing the same, plus the image I. Here, c is the caption, and (q_i, a_i) is the i-th dialog pair exchanged, where i = 1...10. The action space for both bots consists of all possible output sequences of a specified maximum length (Q-Bot: 16, A-Bot: 9, as specified by the dataset) under a fixed vocabulary (size 8645). Note that these parameter values are chosen to comply with the VisDial dataset. Each action involves predicting words sequentially until a stop token is predicted, or the generated statement reaches the maximum length. Additionally, the Q-Bot also produces a guess of the visual representation of the input image (a VGG fc7 embedding of size 4096). Both Q-Bot and A-Bot share the same objective and get the same reward to encourage cooperation. Information gain in each round of dialog is incentivized by setting the reward as the change in distance of the predicted image embedding to the ground-truth image representation. This means that a QA pair is of high quality only if it helps the Q-Bot make a better prediction of the image representation.
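A minimal sketch of the SL-to-RL curriculum referenced above: K supervised rounds followed by 10 − K reinforcement rounds per dialog, with K annealed from 9 to 0 over 10 epochs. The exact annealing formula below is an assumption consistent with "linearly annealed".

```python
def supervised_rounds(epoch, k0=9, anneal_epochs=10):
    """Number of supervised rounds K for this epoch: starts at 9 and is
    linearly annealed to 0 over `anneal_epochs` epochs."""
    return max(0, k0 - (k0 * epoch) // anneal_epochs)

for epoch in range(12):
    K = supervised_rounds(epoch)
    plan = ["SL"] * K + ["RL"] * (10 - K)   # per-round training mode
    print(epoch, K, plan)
```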
Both policies are modeled by neural networks, as discussed in Section 4. However, as noted above, this RL optimization problem is ill-posed, since rewarding the agents for information exchange does not motivate them to stick to the rules and conventions of the English language. Thus, we follow the elaborate curriculum scheme described above, despite which the bots are still observed to diverge from natural language and produce non-grammatical and incoherent dialog. Thus, we propose a multi-bot architecture to help the agents interact in diverse and coherent, yet informative, dialog. Learning Algorithm: A dialog round at time t consists of the following steps: 1) the Q-Bot, conditioned on the state encoding, generates a question q_t; 2) the A-Bot updates its state encoding with q_t and then generates an answer a_t; 3) both Q-Bot and A-Bot encode the completed exchange as a fact embedding; 4) the Q-Bot updates its state encoding to incorporate this fact; and finally 5) the Q-Bot predicts the image representation for the unseen image conditioned on its updated state encoding. Similar to BID1, we use the REINFORCE BID15 algorithm, which updates network parameters in response to experienced rewards. The per-round rewards maximized are:

r_t = d(ŷ_{t−1}, y_gt) − d(ŷ_t, y_gt),    (1)

where d is the distance between the predicted and ground-truth image embeddings. This reward is positive if the image representation generated at time t is closer to the ground truth than the representation at time t − 1, hence incentivizing information gain at each round of dialog. The REINFORCE update rule ensures that a (q_t, a_t) exchange that is informative has its probabilities pushed up. Do note that the image feature regression network f is trained directly via supervised gradient updates on the L2 loss. In this section we describe our proposed Multi-Agent Dialog Framework (MADF) in detail. The motivation behind it is the observation that allowing a pair of agents to interact with each other and learn via reinforcement for too long leads to them developing an idiosyncratic private language which does not adhere to the rules of human language, and is hence not understandable by human observers. We claim that if, instead of allowing a single pair of agents to interact, we were to make the agents more social, and make them interact with and learn from multiple other agents, they would be disincentivized to develop a private language, and would have to conform to the common language. In particular, we create either multiple Q-Bots to interact with a single A-Bot, or multiple A-Bots to interact with a single Q-Bot. All these agents are initialized with the learned parameters from the supervised pretraining as described in Section 5.1. Then, for each batch of images from the VisDial dataset, we randomly choose a Q-Bot to interact with the A-Bot, or randomly choose an A-Bot to interact with the Q-Bot, as the case may be. The two chosen agents then have a complete dialog consisting of 10 question-answer pairs about each of those images, and update their respective weights based on the rewards received (as per Equation 1) during the conversation, using the REINFORCE algorithm. This process is repeated for each batch of images, over the entire VisDial dataset. It is important to note that histories are not shared across batches. MADF can be understood in detail using the pseudocode in Algorithm 1. We use the VisDial 0.9 dataset for our task, introduced by BID1. The data was collected using Amazon Mechanical Turk by pairing 2 annotators and asking them to chat about the image in a multi-round VQA setup.
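A compact sketch of one MADF training epoch in the spirit of Algorithm 1. Here `rollout` and `reinforce_update` are placeholders for the dialog simulation and the policy-gradient step; they are assumptions for illustration, not functions from the paper.

```python
import random

def reward(dist_prev, dist_curr):
    """Eq. 1: positive when the new image prediction moved closer to y_gt."""
    return dist_prev - dist_curr

def madf_epoch(q_bots, a_bots, batches, rollout, reinforce_update):
    """One MADF epoch: for each batch, randomly pair one Q-Bot with one A-Bot
    (either list may have length 1), run a full 10-round dialog, and update
    only the chosen pair via REINFORCE. Histories are not shared across
    batches, so each rollout starts fresh."""
    for batch in batches:
        q = random.choice(q_bots)     # e.g., 3 Q-Bots with a single A-Bot,
        a = random.choice(a_bots)     # or a single Q-Bot with 3 A-Bots
        rewards, log_probs = rollout(q, a, batch, rounds=10)
        reinforce_update(q, a, rewards, log_probs)
```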
One of the annotators acts as the questioner and has access to only the caption of the image, and has to ask questions of the other annotator, who acts as the answerer and must answer the questions based on the visual information from the actual image. This dialog repeats for 10 rounds, at the end of which the questioner has to guess what the image was. [Table 1: Model, MRR, Mean Rank, R@1, R@5, R@10; rows include the Answer Prior baseline of BID1 and our systems.] [Figure 3: The percentile scores of the ground-truth image compared to the entire test set of 40k images. The X-axis denotes the dialog round number (from 1 to 10), while the Y-axis denotes the image retrieval percentile score.] We perform our experiments on VisDial v0.9 (the latest available release) containing 83k dialogs on COCO-train and 40k on COCO-val images, for a total of 1.2M dialog question-answer pairs. We split the 83k into 82k for train, 1k for validation, and use the 40k as test, in a manner consistent with BID1. The caption is considered to be the first round in the dialog history. We evaluate the performance of our model using 6 metrics, proposed by BID2: 1) Mean Reciprocal Rank (MRR), 2) Mean Rank, 3) Recall@1, 4) Recall@5, 5) Recall@10 and 6) Image Retrieval Percentile. Mean Rank and MRR compute the average rank (and its reciprocal, respectively) assigned to the answer generated by the A-Bot, over a set of 100 candidate answers for each question (also averaged over all the 10 rounds). Recall@k computes the percentage of answers with rank less than k. Image Retrieval Percentile is a measure of how close the image prediction generated by the Q-Bot is to the ground truth. All the images in the test set are ranked according to their distance from the predicted image embedding, and the rank of the ground-truth embedding is used to calculate the image retrieval percentile. Table 1 compares the Mean Rank, MRR, Recall@1, Recall@5 and Recall@10 of our agent architecture and dialog framework (below the horizontal line) with previously proposed architectures (above the line). SL refers to the agents after only the isolated, supervised training of Section 5.1. RL-1Q,1A refers to a single, idiosyncratic pair of agents trained via reinforcement as in Section 5.2. RL-1Q,3A and RL-3Q,1A refer to social agents trained via our Multi-Agent Dialog Framework in Section 5.3, with 1Q,3A referring to 1 Q-Bot and 3 A-Bots, and 3Q,1A referring to 3 Q-Bots and 1 A-Bot. It can be seen that our agent architectures clearly outperform all previously proposed generative architectures in MRR, Mean Rank and R@10, but not in R@1 and R@5. This indicates that our approach produces consistently good answers (as measured by MRR, Mean Rank and R@10), even though they might not be the best possible answers (as measured by R@1 and R@5). SL has the best MRR and Mean Rank, which drop drastically in RL-1Q,1A. The agents trained by MADF recover and are able to outperform all previously proposed models. Fig. 3 shows image retrieval percentile scores over dialog rounds. The percentile score decreases monotonically for SL, but is stable for all versions using RL. There are no quantitative metrics to comprehensively evaluate dialog quality, hence we conduct a human evaluation study. A total of 20 evaluators (randomly chosen students) were shown the caption and the 10 QA-pairs generated by each system for one of 4 randomly chosen images, and asked to give an ordinal ranking (from 1 to 4) for each metric.
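Given the 1-based rank of the ground-truth answer among the 100 candidates for each question, the answer-ranking metrics defined above reduce to simple statistics; a minimal sketch with an illustrative input.

```python
import numpy as np

def dialog_metrics(ranks):
    """ranks: 1-based rank of the GT answer among 100 candidates, one entry
    per question (all rounds of all dialogs pooled together)."""
    ranks = np.asarray(ranks, dtype=float)
    return {
        "MRR": float(np.mean(1.0 / ranks)),
        "MeanRank": float(np.mean(ranks)),
        **{f"R@{k}": float(np.mean(ranks <= k)) for k in (1, 5, 10)},
    }

print(dialog_metrics([1, 3, 7, 42, 2]))
```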
If the evaluator was also given access to the image, she was asked only to evaluate metrics 3, 4 and 5, while if the evaluator was not shown the image, she was asked only to evaluate metrics 1, 2 and 5. Table 2 contains the average ranks obtained on each metric (lower is better). The results convincingly prove our hypothesis that having multiple A-Bots to interact with and learn from will improve the Q-Bot, and vice versa. This is because having multiple A-Bots to interact with gives the Q-Bot access to a variety of diverse dialog, leading to more stable updates with lower bias. The results confirm this, with the Q-Bot Relevance rank being best in RL-1Q,3A, and the A-Bot Relevance rank being best in RL-3Q,1A. These two dialog systems, which were trained via MADF, also have the best overall dialog coherence by a significant margin over RL-1Q,1A and SL. We show two of the examples shown to the human evaluators in Figure 4. The trends observed in the scores given by human evaluators are also clearly visible in these examples. MADF agents are able to model the human responses much better than the other agents. It can also be seen that although the RL-1Q,1A system has greater diversity in its responses, the quality of those responses is greatly degraded, with the A-Bot's answers especially being both non-grammatical and irrelevant. In Section 5.1, we discussed how the MLE loss used in SL results in models which generate repetitive dialog, which can be seen in Fig. 4. Consider the first image, where in the SL QA-generations the Q-Bot repeats the same questions multiple times, and gets inconsistent answers from the A-Bot for the same question. By contrast, all 10 QA-generations for RL-3Q,1A are grammatically correct. The Q-Bot's questions are very relevant to the image being considered, and the A-Bot's answers appropriate and correct. In this paper we propose a novel Multi-Agent Dialog Framework (MADF), inspired by human communities, to improve the dialog quality of AI agents. We show that training 2 agents with supervised learning can lead to uninformative and repetitive dialog. Furthermore, we observe that the task performance (measured by the image retrieval percentile scores) for the system trained via supervision only deteriorates as the dialog round number increases. We hypothesize that this is because the agents were trained in isolation and never allowed to interact during supervised learning, which leads to failure during testing, when they encounter out-of-distribution samples (generated by the other agent, instead of ground truth) for the first time. We show how allowing a single pair of agents to interact and learn from each other via reinforcement learning dramatically improves their percentile scores, which additionally do not deteriorate over multiple rounds of dialog, since the agents have interacted with one another and been exposed to the other's generated questions or answers. However, the agents, in an attempt to improve task performance, end up developing their own private language which does not adhere to the rules and conventions of human languages, and generate non-grammatical and nonsensical statements. As a result, the dialog system loses interpretability and sociability. [Figure 4: Two randomly selected images from the VisDial dataset, followed by the ground-truth (human) and generated dialog about that image for each of our 4 systems (SL, RL-1Q,1A, RL-1Q,3A, RL-3Q,1A).]
These images were also used in the human evaluation shown in Table 2. To address this, we propose a multi-agent dialog framework based on self-play reinforcement learning, where a single A-Bot is allowed to interact with multiple Q-Bots and vice versa. Through a human evaluation study, we show that this leads to significant improvements in dialog quality measured by relevance, grammar and coherence. This is because interacting with multiple agents prevents any particular pair from maximizing performance by developing a private language, since it would harm performance with all the other agents. We plan to explore several other multi-bot architectural settings and perform a more thorough human evaluation for qualitative analysis of our dialog. We also plan on incorporating an explicit perplexity-based reward term in our reinforcement learning setup to further improve the dialog quality. We will also experiment with using a discriminative answer decoder which uses information about the possible answer candidates to rank the generated answer with respect to all the candidate answers, and use the ranking performance to train the answer decoder. Another avenue for future exploration is to use a richer image feature embedding to regress on. Currently, we use a regression network to compute the estimated image embedding which represents the Q-Bot's understanding of the image. We plan to implement an image generation GAN which can use this embedding as a latent code to generate an image which can be visualized. While MADF in its current form only works if we have multiple Q-Bots or multiple A-Bots but not both, future work could possibly look at incorporating that into the framework, while ensuring that the updates do not become too unstable. During supervised pretraining, at time step t the Q-Bot's fact encoder is fed with the ground-truth QA pair for t − 1, the state/history encoder is fed with all the ground-truth QA pairs up to t − 1, and the caption encoder is given the true caption. These encoders then generate their respective embeddings, which are fed into the feature regression network and the question decoder to produce ŷ_t and to update the hidden state of the question decoder, respectively. The Q-Bot is then trained by maximizing the likelihood p(q_gt | E^Q_t) of the training data q_gt, computed using the softmax probabilities given by the question decoder. Simultaneously, the Mean Squared Error (MSE) loss between the predicted image embedding and the ground truth is also minimized. Effectively, the loss

L^Q_t = −log p(q_gt | E^Q_t) + ||ŷ_t − y_gt||²

is minimized. At time step t, the A-Bot's question encoder is fed with the ground-truth question for t, the state/history/image encoder is fed with all the ground-truth QA pairs up to t − 1 and the image I. These encoders then generate their respective embeddings, which are fed into the answer decoder to produce a_t. The A-Bot is trained by maximizing the likelihood p(a_gt | E^A_t) of the training data a_gt, computed using the softmax probabilities given by the answer decoder. Effectively, the loss

L^A_t = −log p(a_gt | E^A_t)

is minimized. During reinforcement learning, the Q-Bot is given only the caption c_gt and the A-Bot is given only the image I and the caption c_gt as inputs. At time step t, the Q-Bot's fact encoder is fed with the generated QA pair for t − 1, the state/history encoder is fed with all the generated QA pairs up to t − 1, and the caption encoder is given the true image caption c_gt as input. These encoders then generate their respective embeddings, which are fed into the feature regression network and the question decoder to produce ŷ_t and q_t respectively. The change in distance between ŷ_t and y_gt due to the current QA pair is given as a reward to the Q-Bot (Eqn. 1), which it uses to train itself via REINFORCE.
Simultaneously, the Mean Squared Error (MSE) loss between the predicted image embedding and the ground truth is also minimized via supervision. Effectively, the loss

L^Q_t = −G_t log p(q_t | E^Q_t) + ||ŷ_t − y_gt||²

is minimized, where G_t = Σ_{k=0}^{10−t} γ^k r_{t+k+1} indicates the Monte-Carlo return at step t, and γ is a discount factor equal to 0.99. At time step t, the A-Bot's question encoder is fed with the generated question q_t, the state/history/image encoder is fed with all the generated QA pairs up to t − 1 and the image I. These encoders then generate their respective embeddings, which are fed into the answer decoder to produce a_t. The A-Bot also receives the same reward as the Q-Bot, and trains itself via REINFORCE. Effectively, the loss

L^A_t = −G_t log p(a_t | E^A_t)

is minimized, where G_t = Σ_{k=0}^{10−t} γ^k r_{t+k+1} indicates the Monte-Carlo return at step t, and γ is a discount factor equal to 0.99.
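A small sketch of the Monte-Carlo return and the REINFORCE loss above, assuming `rewards[t]` stores the reward received after the action at round t (so a standard backward scan reproduces G_t = Σ_k γ^k r_{t+k+1}):

```python
import torch

def monte_carlo_returns(rewards, gamma=0.99):
    """rewards: 1-D tensor where rewards[t] is the reward that follows the
    action at round t. Returns G_t, computed by a backward scan."""
    G, out = torch.zeros(()), []
    for r in reversed(rewards):
        G = r + gamma * G
        out.append(G)
    return torch.stack(out[::-1])

def reinforce_loss(log_probs, returns):
    """-G_t * log p(action_t), summed over rounds; returns are treated as
    constants so only the policy receives gradients."""
    return -(returns.detach() * log_probs).sum()

print(monte_carlo_returns(torch.tensor([1.0, 0.0, 2.0])))
```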
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BJxX9mx8y7
Social agents learn to talk to each other in natural language towards a goal
Posterior collapse in Variational Autoencoders (VAEs) arises when the variational distribution closely matches the uninformative prior for a subset of latent variables. This paper presents a simple and intuitive explanation for posterior collapse through the analysis of linear VAEs and their direct correspondence with Probabilistic PCA (pPCA). We identify how local maxima can emerge from the marginal log-likelihood of pPCA, which yields similar local maxima for the evidence lower bound (ELBO). We show that training a linear VAE with variational inference recovers a uniquely identifiable global maximum corresponding to the principal component directions. We provide empirical evidence that the presence of local maxima causes posterior collapse in deep non-linear VAEs. Our findings help to explain a wide range of heuristic approaches in the literature that attempt to diminish the effect of the KL term in the ELBO to reduce posterior collapse. The generative process of a deep latent variable model entails drawing a number of latent factors from an uninformative prior and using a neural network to convert such factors to real data points. Maximum likelihood estimation of the parameters requires marginalizing out the latent factors, which is intractable for deep latent variable models. The influential work of BID21 and BID28 on Variational Autoencoders (VAEs) enables optimization of a tractable lower bound on the likelihood via a reparameterization of the Evidence Lower Bound (ELBO) BID18 BID4. This has created a surge of recent interest in automatic discovery of the latent factors of variation for a data distribution based on VAEs and principled probabilistic modeling BID15 BID5 BID8 BID13. Unfortunately, the quality and the number of the latent factors learned are directly controlled by the extent of a phenomenon known as posterior collapse, where the generative model learns to ignore a subset of the latent variables. Most existing work suggests that posterior collapse is caused by the KL-divergence term in the ELBO objective, which directly encourages the variational distribution to match the prior. Thus, a wide range of heuristic approaches in the literature have attempted to diminish the effect of the KL term in the ELBO to alleviate posterior collapse. By contrast, we hypothesize that posterior collapse arises due to spurious local maxima in the training objective. Surprisingly, we show that these local maxima may arise even when training with exact marginal log-likelihood. While linear autoencoders BID30 have been studied extensively BID2 BID23, little attention has been given to their variational counterpart. A well-known relationship exists between linear autoencoders and PCA: the optimal solution to the linear autoencoder problem has decoder weight columns which span the subspace defined by the principal components. The Probabilistic PCA (pPCA) model BID32 recovers the principal component subspace as the maximum likelihood solution of a Gaussian latent variable model. In this work, we show that pPCA is recovered exactly using linear variational autoencoders. Moreover, by specifying a diagonal covariance structure on the variational distribution we recover an identifiable model which at the global maximum has the principal components as the columns of the decoder. The study of linear VAEs gives us new insights into the cause of posterior collapse.
Following the analysis of BID32, we characterize the stationary points of pPCA and show that the variance of the observation model directly impacts the stability of local stationary points: if the variance is too large then the pPCA objective has spurious local maxima, which correspond to a collapsed posterior. Our contributions include: • We prove that linear VAEs can recover the true posterior of pPCA, and that using the ELBO to train linear VAEs does not add any additional spurious local maxima. Further, we prove that at its global optimum, the linear VAE recovers the principal components. • We show that posterior collapse may occur in optimization of the marginal log-likelihood, without powerful decoders. Our experiments verify the analysis of the linear setting and show that these insights extend even to high-capacity, deep, non-linear VAEs. • By learning the observation noise carefully, we are able to reduce posterior collapse. We present evidence that the success of existing approaches in alleviating posterior collapse depends on their ability to reduce the stability of spurious local maxima. Probabilistic PCA. We define the probabilistic PCA (pPCA) model as follows. Suppose latent variables z ∈ R^k generate data x ∈ R^n. A standard Gaussian prior is used for z and a linear generative model with a spherical Gaussian observation model for x:

p(z) = N(0, I),  p(x | z) = N(Wz + µ, σ²I).

The pPCA model is a special case of factor analysis BID3, which replaces the spherical covariance σ²I with a full covariance matrix. As pPCA is fully Gaussian, both the marginal distribution for x and the posterior p(z|x) are Gaussian and, unlike factor analysis, the maximum likelihood estimates of W and σ² are tractable BID32. Variational Autoencoders. Recently, amortized variational inference has gained popularity as a means to learn complicated latent variable models. In these models, the marginal log-likelihood, log p(x), is intractable but a variational distribution, q(z|x), is used to approximate the posterior, p(z|x), allowing tractable approximate inference. To do so we typically make use of the Evidence Lower Bound (ELBO):

log p(x) ≥ E_{q(z|x)}[log p(x|z)] − KL(q(z|x) || p(z)).

The ELBO consists of two terms, the KL divergence between the variational distribution, q(z|x), and the prior, p(z), and the expected conditional log-likelihood. The KL divergence forces the variational distribution towards the prior and so has reasonably been the focus of many attempts to alleviate posterior collapse. We hypothesize that in fact the marginal log-likelihood itself often encourages posterior collapse. In Variational Autoencoders (VAEs), two neural networks are used to parameterize q_φ(z|x) and p_θ(x|z), where φ and θ denote two sets of neural network weights. The encoder maps an input x to the parameters of the variational distribution, and then the decoder maps a sample from the variational distribution back to the inputs. Posterior collapse. The most consistent issue with VAE optimization is posterior collapse, in which the variational distribution collapses towards the prior: ∃ i s.t. ∀ x, q_φ(z_i|x) ≈ p(z_i). This reduces the capacity of the generative model, making it impossible for the decoder network to make use of the information content of all of the latent dimensions. While posterior collapse is typically described using the variational distribution as above, one can also define it in terms of the true posterior p(z|x) as: ∃ i s.t. ∀ x, p(z_i|x) ≈ p(z_i). Related Work. BID12 discuss the relationship between robust PCA methods BID6 and VAEs.
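Because pPCA is fully Gaussian, both sampling and the exact marginal log-likelihood take only a few lines of NumPy/SciPy; a minimal sketch with illustrative sizes.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
n, k, sigma2, N = 10, 3, 0.5, 2000
W = rng.normal(size=(n, k))
mu = rng.normal(size=n)

# Generate data from the pPCA model: z ~ N(0, I), x ~ N(Wz + mu, sigma2 * I).
Z = rng.normal(size=(N, k))
X = Z @ W.T + mu + np.sqrt(sigma2) * rng.normal(size=(N, n))

# The marginal over x is Gaussian with covariance C = W W^T + sigma2 * I.
C = W @ W.T + sigma2 * np.eye(n)
avg_ll = multivariate_normal(mean=mu, cov=C).logpdf(X).mean()
print("average marginal log-likelihood:", avg_ll)
```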
In particular, they show that at stationary points the VAE objective locally aligns with pPCA under certain assumptions. We study the pPCA objective explicitly and show a direct correspondence with linear VAEs. BID12 show that the covariance structure of the variational distribution may help smooth out the loss landscape. This is an interesting result whose interaction with ours is an exciting direction for future research. BID14 motivate posterior collapse through an investigation of the learning dynamics of deep VAEs. They suggest that posterior collapse is caused by the inference network lagging behind the true posterior during the early stages of training. A related line of research studies issues arising from approximate inference causing a mismatch between the variational distribution and the true posterior BID10 BID19 BID16. By contrast, we show that local maxima may exist even when the variational distribution matches the true posterior exactly. Alemi et al. use an information-theoretic framework to study the representational properties of VAEs. They show that with infinite model capacity there are solutions with equal ELBO and marginal log-likelihood which span a range of representations, including posterior collapse. We find that even with weak (linear) decoders, posterior collapse may occur. Moreover, we show that in the linear case this posterior collapse is due entirely to the marginal log-likelihood. The most common approach for dealing with posterior collapse is to anneal a weight on the KL term during training from 0 to 1 BID5 BID31 BID24 BID15 BID17. Unfortunately, this means that during the annealing process, one is no longer optimizing a bound on the log-likelihood. In addition, it is difficult to design these annealing schedules, and we have found that once regular ELBO training resumes the posterior will typically collapse again (Section 5.2). Another approach proposes a constraint on the KL term, called "free bits", where the gradient of the KL term per dimension is ignored if the KL is below a given threshold. Unfortunately, this method reportedly has some negative effects on training stability BID26. Delta-VAEs BID26 instead choose prior and variational distributions carefully such that the variational distribution can never exactly recover the prior, allocating free bits implicitly. Several other papers have studied alternative formulations of the VAE objective BID27 BID11 BID0. BID11 analyze the VAE objective with the goal of improving image fidelity under Gaussian observation models. Through this lens they discuss the importance of the observation noise. BID29 point out that due to the diagonal covariance used in the variational distribution of VAEs, they are encouraged to pursue orthogonal representations. They use linearizations of deep networks to prove their results under a modification of the objective function that explicitly ignores latent dimensions with posterior collapse. Our formulation is distinct in focusing on linear VAEs without modifying the objective function and proving an exact correspondence between the global solution of linear VAEs and the principal components. BID23 studies the optimization challenges in the linear autoencoder setting. They expose an equivalence between pPCA and Bayesian autoencoders and point out that when σ² is too large, information about the latent code is lost. A similar phenomenon is discussed in the supervised learning setting by BID7.
BID23 also show that suitable regularization allows the linear autoencoder to exactly recover the principal components. We show that the same can be achieved using linear variational autoencoders with a diagonal covariance structure. In this section we compare and analyze the optimal solutions to both pPCA and linear variational autoencoders. [Figure 1: Stationary points of pPCA. A zero column of W is perturbed in the directions of two orthogonal principal components (µ_5 and µ_7) and the loss surface (marginal log-likelihood) is shown. The stability of the stationary points depends critically on σ². Left: σ² is able to capture both principal components. Middle: σ² is too large to capture one of the principal components. Right: σ² is too large to capture either principal component.] We first discuss the maximum likelihood estimates of pPCA and then show that a simple linear VAE is able to recover the global optimum. Moreover, the same linear VAE recovers identifiability of the principal components (unlike pPCA, which only spans the PCA subspace). Finally, we analyze the loss landscape of the linear VAE, showing that the ELBO does not introduce any additional spurious maxima. The pPCA model defined above is a fully Gaussian linear model and thus we can compute both the marginal distribution for x and the posterior p(z | x) in closed form:

p(x) = N(µ, WWᵀ + σ²I),
p(z | x) = N(M⁻¹Wᵀ(x − µ), σ²M⁻¹),

where M = WᵀW + σ²I. This model is particularly interesting to analyze in the setting of variational inference, as the ELBO can also be computed in closed form (see Appendix C). Stationary points of pPCA. We now characterize the stationary points of pPCA, largely repeating the thorough analysis of BID32 (see Appendix A of their paper). The maximum likelihood estimate of µ is the mean of the data. We can compute W_MLE and σ²_MLE as follows:

W_MLE = U_k (Λ_k − σ²_MLE I)^{1/2} R,
σ²_MLE = (1 / (n − k)) Σ_{j=k+1}^{n} λ_j.

Here U_k corresponds to the first k principal components of the data, with the corresponding eigenvalues λ_1, ..., λ_k stored in the k × k diagonal matrix Λ_k. The matrix R is an arbitrary rotation matrix which accounts for weak identifiability in the model. We can interpret σ²_MLE as the average variance lost in the projection. The MLE solution is the global optimum. Stability of W_MLE. One surprising observation is that σ² directly controls the stability of the stationary points of the marginal log-likelihood (see Appendix A). In Figure 1, we illustrate one such stationary point of pPCA under different values of σ². We computed this stationary point by taking W to have three principal-component columns and zeros elsewhere. Each plot shows the same stationary point perturbed by two orthogonal eigenvectors corresponding to other principal components. The stability of the stationary points depends on the size of σ²: as σ² increases, the stationary point tends towards a stable local maximum. While this example is much simpler than a non-linear VAE, we find in practice that the same principle applies. Moreover, we observed that the non-linear dynamics make it difficult to learn a smaller value of σ² automatically (FIG5). We now show that linear VAEs are able to recover the globally optimal solution to probabilistic PCA. We will consider the following VAE model:

p(x | z) = N(Wz + µ, σ²I),
q(z | x) = N(V(x − µ), D),

where D is a diagonal covariance matrix which is used globally for all data points. While this is a significant restriction compared to typical VAE architectures, which define an amortized variance for each input point, it is sufficient to recover the global optimum of the probabilistic model.
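The MLE solution above is straightforward to compute from an eigendecomposition of the sample covariance; a minimal sketch (choosing R = I for concreteness).

```python
import numpy as np

def ppca_mle(X, k):
    """Maximum-likelihood pPCA fit: W_MLE = U_k (Lambda_k - sigma2 I)^(1/2) R
    with sigma2_MLE the mean of the discarded eigenvalues (R = I here)."""
    mu = X.mean(axis=0)
    S = np.cov(X - mu, rowvar=False)          # sample covariance
    lam, U = np.linalg.eigh(S)
    lam, U = lam[::-1], U[:, ::-1]            # sort eigenpairs descending
    sigma2 = lam[k:].mean()                   # average variance lost
    W = U[:, :k] @ np.diag(np.sqrt(np.maximum(lam[:k] - sigma2, 0.0)))
    return W, mu, sigma2
```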
Lemma 1. The global maximum of the ELBO objective for the linear VAE defined above is identical to the global maximum of the marginal log-likelihood of pPCA. Proof. The global optimum of pPCA is obtained at the maximum likelihood estimate of W and σ², which is specified only up to an orthogonal transformation of the columns of W, i.e., any rotation R in the expression for W_MLE results in the same marginal log-likelihood. The linear VAE is able to recover the global optimum of pPCA only when the columns of W are orthogonal; in this case, setting V = M⁻¹Wᵀ and D = σ²M⁻¹ (which is diagonal) recovers the true posterior at the global optimum. In this case, the ELBO equals the marginal log-likelihood and is maximized when the decoder has weights W = W_MLE. Since the ELBO lower-bounds the log-likelihood, the global maximum of the ELBO for the linear VAE is the same as the global maximum of the marginal likelihood for pPCA. Full details are given in Appendix C. In fact, the diagonal covariance of the variational distribution allows us to identify the principal components at the global optimum. Corollary 1. The global optimum of the VAE solution has the scaled principal components as the columns of the decoder network. Proof. Follows directly from the proof of Lemma 1 and the form of W_MLE given above. Finally, we can recover full identifiability by requiring D = I. We discuss this in Appendix B. We have shown that at its global optimum the linear VAE is able to recover the pPCA solution and additionally enforces orthogonality of the decoder weight columns. However, the VAE is trained with the ELBO rather than the marginal log-likelihood. The majority of existing work suggests that the KL term in the ELBO objective is responsible for posterior collapse, and so we should ask whether this term introduces additional spurious local maxima. Surprisingly, for the linear VAE model the ELBO objective does not introduce any additional spurious local maxima. We provide a sketch of the proof here, with full details in Appendix C. Theorem 1. The ELBO objective does not introduce any additional local maxima to the pPCA model. Proof. (Sketch) If the decoder network has orthogonal columns then the variational distribution can capture the true posterior, and thus the variational objective exactly recovers the marginal log-likelihood at stationary points. If the decoder network does not have orthogonal columns then the variational distribution is no longer tight. However, the ELBO can always be increased by rotating the columns of the decoder towards orthogonality. This is because the variational distribution fits the true posterior more closely while the marginal log-likelihood is invariant to rotations of the weight columns. Thus, any additional stationary points in the ELBO objective must necessarily be saddle points. The theoretical results presented in this section provide new intuition for posterior collapse in general VAEs. Our results suggest that the ELBO objective, in particular the KL between the variational distribution and the prior, is not entirely responsible for posterior collapse: even exact marginal log-likelihood may suffer. The evidence for this is two-fold. We have shown that marginal log-likelihood may have spurious local maxima, but also that in the linear case the ELBO objective does not add any additional spurious local maxima. Rephrased, in the linear setting the problem lies entirely with the probabilistic model.
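For concreteness, here is a minimal PyTorch sketch of this linear VAE: a decoder W, an encoder V, and one globally shared diagonal covariance D, trained on the ELBO with the closed-form KL to the prior. σ² is held fixed here, though it could equally be a learnable parameter; the class and variable names are illustrative.

```python
import torch
import torch.nn as nn

class LinearVAE(nn.Module):
    """Linear VAE: q(z|x) = N(V(x - mu), D) with one global diagonal D,
    p(x|z) = N(Wz + mu, sigma2 * I). Trained by maximizing the exact ELBO."""
    def __init__(self, n, k, sigma2=1.0):
        super().__init__()
        self.V = nn.Linear(n, k, bias=False)
        self.W = nn.Linear(k, n, bias=False)
        self.mu = nn.Parameter(torch.zeros(n))
        self.log_d = nn.Parameter(torch.zeros(k))   # global diagonal cov. D
        self.sigma2 = sigma2

    def elbo(self, x):
        m = self.V(x - self.mu)                     # variational mean
        d = self.log_d.exp()
        z = m + d.sqrt() * torch.randn_like(m)      # reparameterization
        recon = self.W(z) + self.mu
        ll = -0.5 * (((x - recon) ** 2).sum(1) / self.sigma2
                     + x.size(1) * torch.log(
                         torch.tensor(2 * torch.pi * self.sigma2)))
        # Closed-form KL(N(m, D) || N(0, I)).
        kl = 0.5 * ((m ** 2).sum(1) + d.sum() - self.log_d.sum() - m.size(1))
        return (ll - kl).mean()

model = LinearVAE(n=20, k=5)
x = torch.randn(64, 20)
loss = -model.elbo(x)   # minimize the negative ELBO with any optimizer
```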
[FIG0: The marginal log-likelihood and optimal ELBO of MNIST pPCA solutions over increasing hidden dimension. Green represents the MLE solution (global maximum); the red dashed line is the optimal ELBO solution, which matches the global optimum. The blue line shows the marginal log-likelihood of the solutions using the full decoder weights when σ² is fixed to its MLE solution for 50 hidden dimensions.] In this section we present empirical evidence found from studying two distinct claims. First, we verified our theoretical analysis of the linear VAE model. Second, we explored to what extent these insights apply to deep non-linear VAEs. In FIG0 we display the likelihood values for various optimal solutions to the pPCA model trained on the MNIST dataset. We plot the maximum log-likelihood and numerically verify that the optimal ELBO solution is able to exactly match this (Lemma 1). We also evaluated the model with all principal components used but with a fixed value of σ² corresponding to the MLE solution for 50 hidden dimensions. This is equivalent to σ² ≈ λ_222. Here the log-likelihood is optimal at 50 hidden dimensions, as expected, but interestingly the likelihood decreases for 300 hidden dimensions: including the additional principal components has made the solution worse under marginal log-likelihood. We explored how well the analysis of the linear VAEs extends to deep non-linear models. To do so, we trained VAEs with Gaussian observation models on the MNIST dataset. This is a fairly uncommon choice of model for this dataset, which is nearly binary, but it provides a good setting for us to investigate our theoretical findings. FIG3 shows the cumulative distribution of the per-dimension KL divergence between the variational distribution and the prior at the end of training. We observe that using a smaller value of σ² prevents the posterior from collapsing and allows the model to achieve a substantially higher ELBO. It is possible that the difference in ELBO is due entirely to the change of scale introduced by σ² and not because of differences in the learned representations. To test this hypothesis we took each of the trained models and optimized for σ² while keeping all other parameters fixed (TAB1). As expected, the ELBO increased, but the relative ordering remained the same, with a significant gap still present. The final model is evaluated on the training set. We also tuned σ² for the trained model and re-evaluated it to confirm that the difference in loss is due to differences in latent representations. The role of KL-annealing. An alternative approach to tuning σ² is to scale the KL term directly by a coefficient, β. For β < 1 this provides a loose lower bound on the ELBO, but for appropriate choices of β and learning rate, this scheme can be made equivalent to tuning σ². In this section we explore this technique. We found that KL-annealing may provide temporary relief from posterior collapse, but that if σ² is not appropriately tuned then ultimately ELBO training will recover the default solution. In FIG4 we show the proportion of units collapsed by threshold for several fixed choices of σ² when β is annealed from 0 to 1 over the first 100 epochs. The solid lines correspond to the final model while the dashed lines correspond to the model at 80 epochs of training. Early on, KL-annealing is able to reduce posterior collapse, but ultimately we recover the ELBO solution from FIG3. After finding that KL-annealing alone was insufficient to prevent posterior collapse, we explored KL-annealing while learning σ².
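A sketch of the ingredients used in these experiments: a linear β schedule, a Gaussian observation NLL with a learnable log σ², and the per-dimension KL used to count collapsed units. The warm-up length and collapse threshold are illustrative choices, not values from the paper.

```python
import torch

def beta(step, warmup=100_000):
    """Linear KL-annealing coefficient: 0 -> 1 over `warmup` steps."""
    return min(1.0, step / warmup)

def gaussian_nll(x, recon, log_sigma2):
    """Per-sample observation NLL with a single learnable scalar log sigma^2
    (an nn.Parameter in the model, so sigma^2 is learned online)."""
    return 0.5 * (((x - recon) ** 2) / log_sigma2.exp()
                  + log_sigma2 + torch.log(torch.tensor(2 * torch.pi))).sum(1)

def per_dim_kl(mu, log_var):
    """KL(q(z_i|x) || N(0,1)) per latent dimension, averaged over a batch;
    dimensions with tiny values are counted as collapsed."""
    return (0.5 * (mu ** 2 + log_var.exp() - 1.0 - log_var)).mean(0)

# loss = (gaussian_nll(x, recon, log_sigma2) + beta(step) * kl.sum(1)).mean()
```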
Based on our analysis in the linear case we expect that this should work well: while β is small, the model should be able to learn to reduce σ². To test this, we trained the same VAE as above on MNIST data, but this time we allowed σ² to be learned. The results are presented in FIG5. We trained first using the standard ELBO objective and then again using KL-annealing. The ELBO objective learns to reduce σ² but ultimately learns a solution with a large degree of posterior collapse. Using KL-annealing, the VAE is able to learn a much smaller σ² value and ultimately reduces posterior collapse. Interestingly, despite significantly differing representations, these two models have approximately the same final training ELBO. This is consistent with the analysis of BID0, who showed that there can exist solutions equal under the ELBO with differing posterior distributions. We trained deep convolutional VAEs with 500 hidden dimensions on images from the CelebA dataset (resized to 64x64). In FIG6 we show the training ELBO for the standard ELBO objective and training with KL-annealing. In each case, σ² is learned online. As in FIG5, KL-annealing enabled the VAE to learn a smaller value of σ², which corresponded to a better final ELBO value and reduced posterior collapse (FIG7). By analyzing the correspondence between linear VAEs and pPCA we have made significant progress towards understanding the causes of posterior collapse. We have shown that for simple linear VAEs posterior collapse is caused by spurious local maxima in the marginal log-likelihood, and we demonstrated empirically that the same local maxima seem to play a role when optimizing deep non-linear VAEs. In future work, we hope to extend this analysis to other observation models and provide theoretical support for the non-linear case. Here we briefly summarize the analysis of BID32 with some simple additional observations. We recommend that interested readers study Appendix A of BID32 for the full details. We begin by formulating the conditions for stationary points of Σ_i log p(x_i):

S C⁻¹ W = W,

where S denotes the sample covariance matrix (assuming we set µ = µ_MLE, which we do throughout), and C = WWᵀ + σ²I (note that the dimensionality is different from M). There are three possible solutions to this equation: (1) W = 0, (2) C = S, or (3) the more general solutions. (1) and (2) are not particularly interesting to us, so we focus herein on (3). We can write W = ULVᵀ using its singular value decomposition. Substituting back into the stationary-point equation, we recover the following:

S U L = U (σ²I + L²) L.

Noting that L is diagonal, if the j-th singular value (l_j) is non-zero, this gives S u_j = (σ² + l_j²) u_j, where u_j is the j-th column of U. Thus, u_j is an eigenvector of S with eigenvalue λ_j = σ² + l_j². Thus, all potential solutions can be written as W = U_q (K_q − σ²I)^{1/2} R, with singular values written as k_j = σ² or k_j = σ² + l_j², and with R representing an arbitrary orthogonal matrix. From this formulation, one can show that the global optimum is attained with σ² = σ²_MLE and U_q and K_q chosen to match the leading singular vectors and values of S. Consider stationary points of the form W = U_q (K_q − σ²I)^{1/2}, where U_q contains arbitrary eigenvectors of S. In the original pPCA paper they show that all solutions except the leading principal components correspond to saddle points in the optimization landscape. However, this analysis depends critically on σ² being set to the true maximum likelihood estimate.
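The stability claim is easy to probe numerically. The sketch below builds data with a known spectrum, forms a stationary point whose second decoder column is zero, and perturbs that column along a discarded principal direction u_3: the likelihood difference is positive exactly when λ_3 exceeds the fixed σ², matching the analysis. Sizes and the perturbation magnitude are illustrative.

```python
import numpy as np

def ppca_ll(X, W, mu, sigma2):
    """Average exact pPCA marginal log-likelihood, C = W W^T + sigma2 * I."""
    n = X.shape[1]
    C = W @ W.T + sigma2 * np.eye(n)
    diff = X - mu
    _, logdet = np.linalg.slogdet(C)
    quad = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(C), diff)
    return -0.5 * (n * np.log(2 * np.pi) + logdet + quad).mean()

rng = np.random.default_rng(0)
scales = np.sqrt(np.array([8.0, 4.0, 2.0, 1.0, 0.5, 0.4, 0.3, 0.2]))
X = rng.normal(size=(20000, 8)) * scales      # known eigenvalue spectrum
mu = X.mean(0)
lam, U = np.linalg.eigh(np.cov(X - mu, rowvar=False))
lam, U = lam[::-1], U[:, ::-1]                # lam[2] (= lambda_3) is ~2

for sigma2 in (1.0, 4.0):                     # below / above lambda_3
    W = np.zeros((8, 2))                      # stationary point: one PC kept,
    W[:, 0] = U[:, 0] * np.sqrt(lam[0] - sigma2)   # second column zero
    base = ppca_ll(X, W, mu, sigma2)
    Wp = W.copy()
    Wp[:, 1] = 0.05 * U[:, 2]                 # perturb zero column along u_3
    print(sigma2, ppca_ll(X, Wp, mu, sigma2) - base)  # > 0 iff lam[2] > sigma2
```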
Here we repeat their analysis, considering other (fixed) values of σ². We consider a small perturbation to a column of W, of the form εu_j. To analyze the stability of the perturbed solution, we check the sign of the dot product of the perturbation with the likelihood gradient at w_i + εu_j. Ignoring terms in ε², the dot product is proportional to

(λ_j / k_i − 1) u_jᵀ C⁻¹ u_j.

Now, C⁻¹ is positive definite, and so the sign depends only on λ_j / k_i − 1. The stationary point is stable (a local maximum) only if the sign is negative. If k_i = λ_i then the maximum is stable only when λ_i > λ_j; in words, the top q principal components are stable. However, we must also consider the case k_i = σ². BID32 show that if σ² = σ²_MLE, then this also corresponds to a saddle point, as σ² is the average of the smallest eigenvalues, meaning some perturbation will be unstable (except in a special case which is handled separately). However, what happens if σ² is not set to be the maximum likelihood estimate? In this case, it is possible that there are no unstable perturbation directions (that is, λ_j < σ² for too many j). In this case, when σ² is fixed, there are local optima where W has zero columns: the same solutions that we observe in non-linear VAEs corresponding to posterior collapse. Note that when σ² is learned, in non-degenerate cases the local maxima presented above become saddle points where σ² is made smaller by its gradient. In practice, we find that even when σ² is learned, local maxima exist in the non-linear case. Linear autoencoders suffer from a lack of identifiability which causes the decoder columns to span the principal component subspace instead of recovering it. Here we show that linear VAEs are able to recover the principal components up to scaling. We once again consider the linear VAE introduced above:

p(x | z) = N(Wz + µ, σ²I),  q(z | x) = N(V(x − µ), D).

The output of the VAE, x̂, is distributed as

x̂ ~ N(WV(x − µ) + µ, WDWᵀ + σ²I).

Therefore, the linear VAE is invariant to the following transformation:

W → WA,  V → A⁻¹V,  D → A⁻¹DA⁻¹,

where A is a diagonal matrix with non-zero entries, so that the transformed D is well-defined. We see that the directions of the columns of W are always identifiable, and thus the principal components can be exactly recovered. Moreover, we can recover complete identifiability by fixing D = I, so that there is a unique global maximum. Here we present details of the analysis of the stationary points of the ELBO objective. To begin, we first derive closed-form solutions for the components of the marginal log-likelihood (including the ELBO). The VAE we focus on is the one defined above, with a linear encoder, a linear decoder, a Gaussian prior, and a Gaussian observation model. Remember that one can express the marginal log-likelihood as:

log p(x) = KL(q(z|x) ‖ p(z|x)) − KL(q(z|x) ‖ p(z)) + E_{q(z|x)}[log p(x|z)],

labeling the three terms (A), (B) and (C) respectively. Each of the terms (A)-(C) can be expressed in closed form for the linear VAE. Note that the KL term (A) is minimized when the variational distribution is exactly the true posterior distribution. The term (B) can be expressed as

KL(q(z|x) ‖ p(z)) = ½ (tr(D) + ||V(x − µ)||² − k − log det D).

The term (C) can be expressed as an expectation of the Gaussian log-density. Noting that Wz ~ N(WV(x − µ), WDWᵀ), we can compute the expectation analytically and obtain

E_{q(z|x)}[log p(x|z)] = −(n/2) log(2πσ²) − (||x − µ − WV(x − µ)||² + tr(WDWᵀ)) / (2σ²).

To compute the stationary points we must take derivatives with respect to µ, D, W, V, and σ². As before, we have µ = µ_MLE at the global maximum, and for simplicity we fix µ here for the remainder of the analysis.
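The closed-form terms above are easy to sanity-check against Monte-Carlo estimates; a small sketch verifying the reconstructed expression for (C) (all sizes and parameter values are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, sigma2 = 6, 3, 0.3
W, V = rng.normal(size=(n, k)), rng.normal(size=(k, n))
mu, D = rng.normal(size=n), np.diag(rng.uniform(0.1, 1.0, size=k))
x = rng.normal(size=n)

m = V @ (x - mu)                                   # variational mean
resid = x - mu - W @ m
# Closed-form term (C) as reconstructed above.
C_closed = (-0.5 * n * np.log(2 * np.pi * sigma2)
            - 0.5 / sigma2 * (resid @ resid + np.trace(W @ D @ W.T)))

# Monte-Carlo estimate of E_q[log p(x|z)] with z ~ N(m, D).
Z = m + rng.normal(size=(200_000, k)) @ np.linalg.cholesky(D).T
ll = (-0.5 * n * np.log(2 * np.pi * sigma2)
      - 0.5 / sigma2 * ((x - mu - Z @ W.T) ** 2).sum(1))
print(C_closed, ll.mean())   # the two should agree closely
```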
Taking the marginal likelihood over the whole dataset, at the stationary points we have (setting the derivatives of the ELBO with respect to V and D to zero, computed using standard matrix derivative identities (Petersen et al.)):

V = M⁻¹Wᵀ,  D = σ² (diag(WᵀW) + σ²I)⁻¹.

These equations yield the expected solution for the variational distribution directly: we recover the true posterior mean in all cases, and the correct posterior covariance when the columns of W are orthogonal. We will now proceed with the proof of Theorem 1. Theorem 1. The ELBO objective does not introduce any additional local maxima to the pPCA model. Proof. If the columns of W are orthogonal then the marginal log-likelihood is recovered exactly at all stationary points. This is a direct consequence of the posterior mean and covariance being recovered exactly at all stationary points, so that (A) is zero. We must give separate treatment to the case where there is a stationary point without orthogonal columns of W. Suppose we have such a stationary point; using the singular value decomposition we can write W = ULRᵀ, where U and R are orthogonal matrices. Note that log p(x) is invariant to the choice of R BID32. However, the choice of R does have an effect on the first term (A): this term is minimized when R = I, and thus the ELBO must increase. To formalize this argument, we compute (A) at a stationary point. From above, at every stationary point the mean of the variational distribution exactly matches the true posterior. Thus the KL simplifies to:

KL(q(z|x) ‖ p(z|x)) = ½ (tr(σ⁻² M̃ D) − k + log det(σ² M̃⁻¹) − log det D)
                    = ½ (tr(M̃ M⁻¹) − k + log det M − log det M̃)
                    = ½ (log det M − log det M̃),

where M = diag(WᵀW) + σ²I and M̃ = WᵀW + σ²I; the trace term equals k because M̃ and M share the same diagonal. Now consider applying a small rotation to W: W → WR. As the optimal D and V are continuous functions of W, this corresponds to a small perturbation of these parameters too, for a sufficiently small rotation. Importantly, log det M̃ remains fixed for any orthogonal choice of R, but log det M does not. Thus, we choose R to minimize this term. In this manner, (A) shrinks, meaning that the ELBO, (C) − (B), must increase. Thus, if the stationary point existed, it must have been a saddle point. We now describe how to construct such a small rotation matrix. First note that without loss of generality we can assume that det(R) = 1. (Otherwise, we can flip the sign of a column of R and the corresponding column of U.) Additionally, we have WR = UL, which has orthogonal columns. The special orthogonal group of determinant-1 orthogonal matrices is a compact, connected Lie group, and therefore the exponential map from its Lie algebra is surjective. This means that we can find an upper-triangular matrix B such that R = exp{B − Bᵀ}. We take R_ε = exp{(1/n)(B − Bᵀ)}, where n is an integer chosen to ensure that the elements of (1/n)(B − Bᵀ) are within ε > 0 of zero. This matrix is a rotation in the direction of R which we can make arbitrarily close to the identity by a suitable choice of ε. This is verified through the Taylor series expansion of R_ε. Thus, we have identified a small perturbation to W (and D and V) which decreases the posterior KL (A) but keeps the marginal log-likelihood constant. Thus, the ELBO increases and the stationary point must be a saddle point.
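The reconstructed KL identity and the rotation argument can be checked numerically: the KL is positive when the decoder columns are not orthogonal, varies with the rotation R, and vanishes once WᵀW is diagonalized. A sketch under the formulas as reconstructed above.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, sigma2 = 8, 3, 0.4
W = rng.normal(size=(n, k))

def kl_to_posterior(W):
    """KL between the optimal diagonal-covariance q and the true posterior
    (means matched): 0.5 * (log det M - log det M_tilde)."""
    M_tilde = W.T @ W + sigma2 * np.eye(k)               # true posterior term
    M = np.diag(np.diag(W.T @ W)) + sigma2 * np.eye(k)   # diagonal surrogate
    return 0.5 * (np.linalg.slogdet(M)[1] - np.linalg.slogdet(M_tilde)[1])

print(kl_to_posterior(W))                    # > 0: columns not orthogonal
Q, _ = np.linalg.qr(rng.normal(size=(k, k))) # a random rotation R
print(kl_to_posterior(W @ Q))                # changes with the choice of R ...
U, L, _ = np.linalg.svd(W, full_matrices=False)
print(kl_to_posterior(U * L))                # ... and is 0 once W^T W is diag.
```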
We would like to extend our linear analysis to the case where we have a Bernoulli observation model, as this setting also suffers severely from posterior collapse. The analysis may also shed light on more general categorical observation models which have also been used. Typically, in these settings a continuous latent space is still used (for example, Bowman et al.). We will consider the following model:

p(z) = N(0, I),  p(x | z) = Bernoulli(y),  y = σ(Wz + µ),

where σ denotes the sigmoid function, σ(y) = 1/(1 + exp(−y)), and we assume an independent Bernoulli observation model over x. Unfortunately, under this model it is difficult to reason about the stationary points. There is no closed-form solution for the marginal likelihood p(x) or the posterior distribution p(z|x). Numerical integration methods exist which may make it easy to evaluate this quantity in practice, but they will not immediately provide us with a good gradient signal. We can characterize the density of y using the change-of-variables formula: noting that Wz + µ ~ N(µ, WWᵀ), the pre-activation of y is Gaussian, so y follows a logit-normal distribution. We can write the marginal likelihood as

p(x) = E_z[ y(z)^x (1 − y(z))^{1−x} ],

where (·)^x is taken to be elementwise. Unfortunately, the expectation of a logit-normal distribution has no closed form BID1, and so we cannot tractably compute the marginal likelihood. Similarly, under the ELBO we need to compute the expected reconstruction error. This can be written as

E_{q(z|x)}[log p(x|z)] = ∫ log( y(z)^x (1 − y(z))^{1−x} ) N(z; V(x − µ), D) dz,

another intractable integral. Visualizing stationary points of pPCA. For this experiment we computed the pPCA MLE using a subset of 10000 random training images from the MNIST dataset. We evaluate and plot the marginal log-likelihood in closed form on this same subset. MNIST VAE. The VAEs we trained on MNIST all had the same architecture: 784-1024-512-k-512-1024-784. The VAE parameters were optimized jointly using the Adam optimizer BID20. We trained the VAE for 1000 epochs in total, keeping the learning rate fixed throughout. We performed a grid search over values for the learning rate and report results for the model which achieved the best training ELBO. CelebA VAE. We used the convolutional architecture proposed by BID15. Otherwise, the experimental procedure followed that of the MNIST VAEs. We also trained convolutional VAEs on the CelebA dataset using fixed choices of σ². As expected, the same general pattern emerged as in FIG1. Reconstructions from the KL-annealed model are shown in FIG9. We also show the output of interpolating in the latent space in FIG10. To produce the latter plot, we compute the variational means of 3 input points (top left, top right, bottom left) and interpolate linearly between them. We also extrapolate out to a fourth point (bottom right), which lies on the plane defined by the other points.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
r1xaVLUYuE
We show that posterior collapse in linear VAEs is caused entirely by marginal log-likelihood (not ELBO). Experiments on deep VAEs suggest a similar phenomenon is at play.
Transformers have achieved state-of-the-art results on a variety of natural language processing tasks. Despite good performance, Transformers are still weak in long-sentence modeling, where the global attention map is too dispersed to capture valuable information. In such cases, the local/token features that are also significant to sequence modeling are omitted to some extent. To address this problem, we propose a multi-scale attention model (MUSE) that concatenates attention networks with convolutional networks and position-wise feed-forward networks to explicitly capture local and token features. Considering parameter size and computational efficiency, we re-use the feed-forward layer of the original Transformer and adopt a lightweight dynamic convolution as the implementation. Experimental results show that the proposed model achieves substantial performance improvements over the Transformer, especially on long sentences, and pushes the state of the art from 35.6 to 36.2 on the IWSLT 2014 German-to-English translation task, and from 30.6 to 31.3 on the IWSLT 2015 English-to-Vietnamese translation task. We also reach state-of-the-art performance on the WMT 2014 English-to-French translation dataset, with a BLEU score of 43.2. In recent years, the Transformer has been remarkably adept at sequence learning tasks like machine translation, text classification, language modeling, etc. It is based solely on an attention mechanism that captures global dependencies between input tokens, dispensing with recurrence and convolutions entirely. The key idea of the self-attention mechanism is updating token representations based on a weighted sum of all input representations. However, recent research has shown that the Transformer has surprising shortcomings in long sequence learning, exactly because of its use of self-attention. As shown in Figure 1 (a), in the task of machine translation, the performance of the Transformer drops with the increase of the source sentence length, especially for long sequences. The reason is that the attention can be over-concentrated and dispersed, as shown in Figure 1 (b), and only a small number of tokens are represented by attention. It may work fine for shorter sequences, but for longer sequences it causes insufficient representation of information and makes it difficult for the model to comprehend the source information in its entirety. In recent work, local attention that constrains the attention to focus on only part of the sequence has been used to address this problem. However, it deprives self-attention of the ability to capture long-range dependencies and has not demonstrated effectiveness in sequence-to-sequence learning tasks. To build a module with the inductive biases of both local and global context modeling in sequence-to-sequence learning, we hybridize self-attention with convolution and present parallel multi-scale attention, called MUSE. It encodes inputs into hidden representations and then applies self-attention and depth-separable convolution transformations in parallel. [Figure 1: The left figure shows that the performance drops largely with the increase of sentence length on the De-En dataset. The right figure shows the attention map from the 3rd encoder layer. As we can see, the attention map is too dispersed to capture sufficient information. For example, "[EOS]", contributing little to word alignment, is surprisingly over-attended.] The convolution compensates for the insufficient use of local information, while the self-attention focuses on capturing the dependencies.
Moreover, this parallel structure is highly extensible: new transformations can easily be introduced as new parallel branches, and the structure is also favorable to parallel computation. The main contributions are summarized as follows: • We find that the attention mechanism alone suffers from dispersed weights and is not suitable for long-sequence representation learning. The proposed method addresses this problem and achieves much better performance on long-sequence generation. • We propose parallel multi-scale attention and explore a simple but efficient method to combine convolution with self-attention in a single module. • MUSE outperforms all previous models trained with the same data at comparable model size, with state-of-the-art BLEU scores on three main machine translation tasks. • The proposed method enables parallel representation learning; experiments show that inference speed can be increased by 31% on GPUs.

Like other sequence-to-sequence models, MUSE adopts an encoder-decoder framework. The encoder takes a sequence of word embeddings (x_1, ..., x_n) as input, where n is the input length, and transforms the word embeddings into a sequence of hidden representations z = (z_1, ..., z_n). Given z, the decoder generates a sequence of text (y_1, ..., y_m) token by token. The encoder is a stack of N MUSE modules; residual connections and layer normalization are used to connect adjacent layers. The decoder is similar, except that each MUSE module in the decoder not only captures features from the generated text representations but also attends over the output of the encoder stack through an additional context attention; residual connections and layer normalization are likewise used to connect modules and adjacent layers.

The key part of the proposed model is the MUSE module, which contains three main parts: self-attention for capturing global features, depthwise-separable convolution for capturing local features, and a position-wise feed-forward network for capturing token features. The module takes the output X of layer (i − 1) as input and generates the output representation in a fused way: MUSE(X) = Attention(X) + Conv(X) + Pointwise(X), where "Attention" refers to self-attention, "Conv" refers to dynamic convolution, and "Pointwise" refers to a position-wise feed-forward network. The following subsections list the details of each part. (Figure 2: multi-scale attention hybridizes point-wise transformation, convolution, and self-attention to learn multi-scale sequence representations in parallel; convolution and self-attention are projected into the same space to learn contextual representations.) We also propose MUSE-simple, a simple version of MUSE that generates the output representation in the same way except that it does not include the convolution operation: MUSE-simple(X) = Attention(X) + Pointwise(X).

2.1 ATTENTION MECHANISM FOR GLOBAL CONTEXT REPRESENTATION Self-attention is responsible for learning representations of global context. For a given input sequence X, it first projects X into three representations: key K, query Q, and value V. Then it uses a self-attention mechanism to obtain the output representation: Attention(X) = σ(Q, K, V) W^O with Q = X W^Q and K = X W^K, where W^O, W^Q, W^K, and W^V are projection parameters. The self-attention operation σ is the scaled dot-product between key, query, and value: σ(Q, K, V) = softmax(Q K^T / √d_k) V_1. Note that we conduct a projecting operation over the value in our self-attention mechanism, V_1 = V W^V (an illustrative code sketch of the fused module is given below).
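For concreteness, the parallel fusion described above can be sketched in a few lines of PyTorch. This is an illustrative re-implementation under our own naming, not the authors' code; it uses a plain depthwise convolution where the paper uses dynamic convolution, and assumes PyTorch ≥ 1.9 for the batch_first attention flag.

```python
import torch
import torch.nn as nn

class MuseModuleSketch(nn.Module):
    """Sketch of the parallel fusion in one MUSE layer:
    output = Attention(X) + Conv(X) + Pointwise(X)."""
    def __init__(self, d_model, n_heads=4, kernel_size=7, d_ff=768):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Depthwise conv stands in for the paper's dynamic convolution.
        self.conv = nn.Conv1d(d_model, d_model, kernel_size,
                              padding=kernel_size // 2, groups=d_model)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                 nn.Linear(d_ff, d_model))

    def forward(self, x):                                  # (batch, seq, d)
        a, _ = self.attn(x, x, x)                          # global branch
        c = self.conv(x.transpose(1, 2)).transpose(1, 2)   # local branch
        p = self.ffn(x)                                    # token branch
        return a + c + p                                   # parallel fusion
```

Because the three branches read the same input and their outputs are summed, each branch can be computed independently, which is what enables the parallel speedups reported later.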
We introduce convolution operations into MUSE to capture local context. To learn contextual sequence representations in the same hidden space, we choose depthwise convolution (denoted DepthConv in the experiments) as the convolution operation, because it comprises two separate transformations: a point-wise projecting transformation and a contextual transformation. The original convolution operator is not separable, whereas DepthConv can share the same point-wise projecting transformation with the self-attention mechanism. We choose dynamic convolution, the best-performing variant of DepthConv, as our implementation. Each convolution sub-module contains multiple cells with different kernel sizes, used for capturing different-range features. The output of the convolution cell with kernel size k is Conv_k(X) = DepthConv_k(V_2) W^out with V_2 = X W^V, where W^V and W^out are parameters and W^V is a point-wise projecting transformation matrix; DepthConv refers to the depthwise convolution in the work of Wu et al. (2019a). For an input sequence X, the output O is the depthwise convolution of V_2 along the sequence dimension, where d is the hidden size. Note that we conduct the same projecting operation over the input in our convolution mechanism, V_2 = X W^V, as in the self-attention mechanism.

Shared projection To learn contextual sequence representations in the same hidden space, the projection in the self-attention mechanism, V_1 = V W^V, and that in the convolution mechanism, V_2 = X W^V, are shared, because the shared projection maps the input features into the same hidden space. If we instead conduct two independent projections, V_1 = V W^{V_1} and V_2 = X W^{V_2}, where W^{V_1} and W^{V_2} are two different parameter matrices, we call it separate projection. We analyze the necessity of applying shared rather than separate projection in the experiments (a code sketch of the shared projection follows below). We also introduce a gating mechanism to automatically select the weights of the different convolution cells.

To learn token-level representations, MUSE concatenates the self-attention network with a position-wise feed-forward network at each layer. Since the linear transformations are the same across different positions, the position-wise feed-forward network can be seen as a token feature extractor: Pointwise(X) = max(0, X W_1 + b_1) W_2 + b_2, where W_1, b_1, W_2, and b_2 are projection parameters.
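The shared projection can be made explicit in code. The sketch below (our own naming, not the paper's implementation) uses one nn.Linear instance for W^V so that the attention value path (V_1 = V W^V) and the convolution path (V_2 = X W^V) operate in the same hidden space:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedProjectionBranches(nn.Module):
    """Illustrative sketch: a single value projection W^V is shared by the
    self-attention branch and the depthwise-convolution branch."""
    def __init__(self, d_model, kernel_size=7):
        super().__init__()
        self.w_q = nn.Linear(d_model, d_model, bias=False)
        self.w_k = nn.Linear(d_model, d_model, bias=False)
        self.w_v = nn.Linear(d_model, d_model, bias=False)  # shared W^V
        self.depthwise = nn.Conv1d(d_model, d_model, kernel_size,
                                   padding=kernel_size // 2, groups=d_model)
        self.w_out = nn.Linear(d_model, d_model, bias=False)

    def forward(self, x):                       # x: (batch, seq, d_model)
        q, k, v = self.w_q(x), self.w_k(x), self.w_v(x)
        scores = q @ k.transpose(-2, -1) / (x.size(-1) ** 0.5)
        attn_branch = F.softmax(scores, dim=-1) @ v
        # The conv branch reuses the SAME projected values v (= X W^V).
        conv_branch = self.w_out(
            self.depthwise(v.transpose(1, 2)).transpose(1, 2))
        return attn_branch + conv_branch
```

A separate-projection variant would simply instantiate a second nn.Linear for the convolution path; in the ablation reported below, that variant loses 1.4 BLEU.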
We evaluate MUSE on four machine translation tasks. This section describes the datasets, experimental settings, detailed results, and analysis. The WMT 2014 English-French translation dataset, consisting of 36M sentence pairs, is adopted as a big dataset to test our model. We use the standard split of development and test sets: newstest2014 as the test set and newstest2012+newstest2013 as the development set. Following previous work, we adopt a joint source and target BPE factorization with a vocabulary size of 40K. For the medium dataset, we borrow the setup of previous work and adopt the WMT 2014 English-German translation dataset, which consists of 4.5M sentence pairs, with a BPE vocabulary size of 32K; the test and validation datasets are the standard ones. IWSLT De-En and En-Vi datasets Besides, we perform experiments on two small IWSLT datasets to test the small version of MUSE against other comparable models. The IWSLT 2014 German-English translation dataset consists of 160K sentence pairs; we also adopt a joint source and target BPE factorization with a vocabulary size of 32K. The IWSLT 2015 English-Vietnamese translation dataset consists of 133K training sentence pairs. For the En-Vi task, we build a dictionary including all source and target tokens; the vocabulary size is 17.2K for English and 6.8K for Vietnamese.

For fair comparison, we only compare with models reported under comparable model size and the same training data. We do not compare with Wu et al. (2019b) because it is an ensemble method. We build MUSE-base and MUSE-large with parameter sizes comparable to Transformer-base and Transformer-large. We adopt multi-head attention as the implementation of self-attention in the MUSE module; the number of attention heads is set to 4 for MUSE-base and 16 for MUSE-large. We also add the network architecture built with MUSE-simple, constructed in the same way, to the comparison. MUSE consists of 12 residual blocks for the encoder and 12 residual blocks for the decoder; the model dimension is set to 384 for MUSE-base and 768 for MUSE-large, and the hidden dimension of the non-linear transformation is set to 768 for MUSE-base and 3072 for MUSE-large. MUSE-large is trained on 4 Titan RTX GPUs, while MUSE-base is trained on a single NVIDIA RTX 2080 Ti GPU. The batch size is calculated at the token level, which is called dynamic batching. We adopt dynamic convolution as the variant of depthwise-separable convolution and tune the kernel size on the validation set. For convolution with a single kernel, we use a kernel size of 7 for all layers; in the case of dynamically selected kernels, the kernel size is 3 for small kernels and 15 for large kernels, for all layers. The training hyperparameters are tuned on the validation set. We use the Adam optimizer with a learning rate of 0.001, a warmup of 4K updates, and inverse learning-rate decay. For the De-En dataset, we train the model for 20K steps with a batch size of 4K tokens; parameters are updated every 4 steps, and the dropout rate is set to 0.4. For the En-Vi dataset, we train the model for 10K steps with a batch size of 4K tokens; parameters are also updated every 4 steps, and the dropout rate is set to 0.3. We save checkpoints every epoch and average the last 10 checkpoints for inference. During inference, we adopt beam search with a beam size of 5 for the De-En, En-Fr, and En-Vi translation tasks. The length penalty is set to 0.8 for En-Fr according to the validation results and to 1 for the two small datasets, following default settings; we do not tune beam width or length penalty beyond previously reported settings. The BLEU metric is adopted to evaluate model performance.

As shown in Table 1, MUSE outperforms all previous models on En-De and En-Fr translation, including state-of-the-art stand-alone self-attention models and convolutional models. This shows that neither self-attention nor convolution alone is enough for sequence-to-sequence learning; the proposed parallel multi-scale attention improves over both on En-De and En-Fr. Table 1 (BLEU, En-De / En-Fr): ConvSeq2Seq 25.2 / 40.5; SliceNet 26.1 / –; Transformer 28.4 / 41.0; Weighted Transformer 28.9 / 41.4; Layer-wise Coordination 29.1 / –; Transformer (relative position) 29.2 / 41.5. Relative-position and local-attention constraints bring improvements over the original self-attention model, but parallel multi-scale attention outperforms them. MUSE can also scale to small models and small datasets: as depicted in Table 2, MUSE-base pushes the state of the art from 35.7 to 36.3 on the IWSLT De-En translation dataset. Table 1 and Table 2 show that MUSE-simple, which contains the basic idea of parallel multi-scale attention, achieves state-of-the-art performance on three tasks. In this subsection we compare MUSE and its variants on IWSLT De-En translation to answer the following questions.
Does combining self-attention with convolution certainly improve the model? To bridge the gap between the point-wise transformation, which learns token-level representations, and self-attention, which learns representations of global context, we introduce convolution to enhance our multi-scale attention. As we can see from the first experiment group of Table 3, convolution is important in parallel multi-scale attention. However, it is not easy to combine convolution and self-attention in one module to build better representations for sequence-to-sequence tasks. As shown in the first line of both the second and third groups of Table 3, simply learning local representations by using convolution or depthwise-separable convolution in parallel with self-attention harms performance. Furthermore, combining depthwise-separable convolution (in this work we choose its best variant, dynamic convolution, as the implementation) is even worse than combining standard convolution.

How should convolution and self-attention be combined? We conjecture that convolution and self-attention both learn contextual sequence representations, and that they should share the point-wise transformation and perform the contextual transformation in the same hidden space. We therefore first project the input to a hidden representation and perform a variant of depthwise convolution and self-attention transformations in parallel. The first two experiments in the third group of Table 3 validate the utility of the shared projection in parallel multi-scale attention: the shared projection gains 1.4 BLEU over the separate projection and brings an improvement of 0.5 BLEU over MUSE-simple (without DepthConv).

How large should the kernel size be? Comparative experiments show that a too-large kernel harms performance for both DepthConv and standard convolution. Since self-attention and point-wise transformations are also present, simply applying the growing-kernel-size schedule proposed in SliceNet does not work. Thus, we propose to use dynamically selected kernel sizes and let the learned network decide the kernel size for each layer.

Parallel multi-scale attention brings time efficiency on GPUs The underlying parallel structure (compared to the sequential structure within each Transformer block) allows MUSE to be computed efficiently on GPUs. For example, we can combine small matrices into large matrices; while this does not reduce the number of actual operations, it can be better parallelized by GPUs to speed up computation. Concretely, for each MUSE module we first concatenate W^Q, W^K, W^V of self-attention and W_1 of the point-wise feed-forward transformation into a single encoder matrix W^Enc, and then perform the transformations, namely self-attention, depthwise-separable convolution, and the non-linear transformation, in parallel to learn multi-scale representations in the hidden layer (see the sketch below). W^O, W_2, and W^out can likewise be combined into a single decoder matrix W^Dec, and the decoder of the sequence-to-sequence architecture can be implemented similarly. In Table 4, we conduct comparisons to show the speed gains of this implementation; the batch size is set to one sample per batch to simulate an online inference environment. Under settings where the numbers of parameters are similar for MUSE and Transformer, about a 31% increase in inference speed can be obtained. The experiments use MUSE with 6 MUSE-simple modules and a Transformer with 6 base blocks; the hidden size is set to 512.
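The matrix-fusion trick above can be illustrated directly with tensor operations. The snippet below is a sketch with illustrative shapes, not the authors' implementation: the four projection matrices are concatenated once, so a single large matmul replaces four smaller ones, which GPUs execute more efficiently.

```python
import torch

# Fused-projection sketch: one GEMM for W_Q, W_K, W_V and the first FFN
# matrix W_1, then split the result. Shapes are illustrative.
d, d_ff, seq = 512, 2048, 30
x = torch.randn(seq, d)
w_q, w_k, w_v = (torch.randn(d, d) for _ in range(3))
w_1 = torch.randn(d, d_ff)

w_enc = torch.cat([w_q, w_k, w_v, w_1], dim=1)    # (d, 3d + d_ff)
fused = x @ w_enc                                  # one large GEMM
q, k, v, ff = fused.split([d, d, d, d_ff], dim=1)  # same values as 4 GEMMs
```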
It is worth noticing that for the MUSE structure used in the main experiments, a similar speedup should ideally be observed if the computing device is powerful enough; however, this was not the case in our preliminary experiments. We also need to point out that the implementation is far from fully optimized, and the results are only meant to demonstrate the feasibility of the procedure. Table 4 (inference speed, tokens/s): Transformer 132; MUSE 173; acceleration 31%.

Parallel multi-scale attention generates much better long sequences As demonstrated in Figure 3, MUSE generates better sequences of various lengths than self-attention, but it is remarkably adept at generating long sequences: for sequences longer than 100 tokens, MUSE is two times better.

Lower layers prefer local context and higher layers prefer more contextual representations MUSE contains multiple dynamic convolution cells, whose streams are fused by a gating mechanism; the weight for each dynamic cell is a scalar. Here we analyze the weights of the different dynamic convolution cells in different layers. Figure 4 shows that as the layer depth increases, the weights of the dynamic convolution cells with small kernel sizes gradually decrease. This demonstrates that lower layers prefer local features while higher layers prefer global features, which corresponds to previous findings.

MUSE not only gains BLEU scores, but also generates more reasonable sentences and increases translation quality. We conduct a case study on the De-En dataset; the cases are shown in Table 5. In case 1, although the baseline Transformer translates many words correctly according to the source sentence, the translated sentence is not fluent at all; the Transformer does not capture the relationship between some words and their neighbors, such as "right" and "clap". By contrast, MUSE captures them well by combining local convolution with global self-attention. In case 2, the causal adverbial clause is correctly translated by MUSE, while the Transformer misses the word "why" and fails to translate it.

Sequence-to-sequence learning is an important task in machine learning; it involves understanding and generating sequences, and machine translation is the touchstone of sequence-to-sequence learning. Traditional approaches usually adopt long short-term memory networks to learn the representation of sequences. However, these models are either built upon auto-regressive structures requiring longer encoding time or perform worse on real-world natural language processing tasks. Recent studies explore convolutional neural networks (CNNs) or self-attention to support highly parallel sequence modeling that does not require an auto-regressive structure during encoding, thus bringing large efficiency improvements; they are strong at capturing local or global dependencies, respectively. There are several studies on combining self-attention and convolution; however, they do not surpass both the convolutional and the self-attention mechanisms. Sukhbaatar et al. (2019b) propose to augment convolution with self-attention by directly concatenating them in computer vision tasks; however, as demonstrated in Table 3, their method does not work for sequence-to-sequence learning tasks. State-of-the-art models on question answering tasks still build on self-attention and do not adopt the ideas in QANet. Both self-attention and convolution models outperform the Evolved Transformer by nearly 2 BLEU on En-Fr translation.
It seems that learning global and local context through stacking self-attention and convolution layers does not beat either self-attention or convolution models alone. In contrast, the proposed parallel multi-scale attention outperforms previous convolution- and self-attention-based models on the main translation tasks, showing its effectiveness for sequence-to-sequence learning.

Although the self-attention mechanism has been prevalent in sequence modeling, we find that attention suffers from dispersed weights, especially for long sequences, resulting from insufficient local information. To address this problem, we present parallel multi-scale attention (MUSE) and MUSE-simple. MUSE-simple introduces the idea of parallel multi-scale attention into sequence-to-sequence learning, and MUSE fuses self-attention, convolution, and point-wise transformation together to explicitly learn global, local, and token-level sequence representations. In particular, we find from empirical results that the shared projection plays an important part in its success and is essential for our multi-scale learning. Beyond the inspiring new state of the art on three machine translation datasets, detailed analyses and model variants also verify the effectiveness of MUSE. In future work, we would like to explore the detailed effects of the shared projection on contextual representation learning. We are excited about the future of parallel multi-scale attention and plan to apply this simple but effective idea to other tasks, including image and speech.

A.1 CASE STUDY Case 1. Source: wenn sie denken, dass die auf der linken seite jazz ist und die, auf der rechten seite swing ist, dann klatschen sie bitte. Target: if you think the one on the left is jazz and the one on the right is swing, clap your hands. Transformer: if you think it's jazz on the left, and those on the right side of the swing are clapping, please. MUSE: if you think the one on the left is jazz, and the one on the right is swing, please clap. Case 2. Source: und deswegen haben wir uns entschlossen in berlin eine halle zu bauen, in der wir sozusagen die elektrischen verhältnisse der insel im maßstab eins zu drei ganz genau abbilden können. Target: and that's why we decided to build a hall in berlin, where we could precisely reconstruct, so to speak, the electrical ratio of the island on a one to three scale. Transformer: and so in berlin, we decided to build a hall where we could sort of map the electrical proportions of the island at scale one to three very precisely. MUSE: and that's why we decided to build a hall in berlin, where we can sort of map the electric relationship of the island at the scale one to three very precisely. Table 5: Case study on the De-En dataset. The blue bolded words denote wrong translations and the red bolded words denote correct translations. In case 1, the Transformer fails to capture the relationship between some words and their neighbors, such as "right" and "clap". In case 2, the causal adverbial clause is correctly translated by MUSE, while the Transformer misses the word "why" and fails to translate it.
This paper proposes a new model that combines multi-scale information for sequence-to-sequence learning.
Training neural networks with verifiable robustness guarantees is challenging. Several existing approaches utilize linear relaxation based neural network output bounds under perturbation, but they can slow down training by a factor of hundreds depending on the underlying network architecture. Meanwhile, interval bound propagation (IBP) based training is efficient and significantly outperforms linear relaxation based methods on many tasks, yet it may suffer from stability issues since its bounds are much looser, especially at the beginning of training. In this paper, we propose a new certified adversarial training method, CROWN-IBP, which combines the fast IBP bounds in a forward bounding pass with a tight linear relaxation based bound, CROWN, in a backward bounding pass. CROWN-IBP is computationally efficient and consistently outperforms IBP baselines in training verifiably robust neural networks. We conduct large-scale experiments on the MNIST and CIFAR datasets and outperform all previous linear relaxation and bound propagation based certified defenses in ℓ∞ robustness. Notably, we achieve 7.02% verified test error on MNIST at ε = 0.3 and 66.94% on CIFAR-10 at ε = 8/255.

The success of deep neural networks (DNNs) has motivated their deployment in some safety-critical environments, such as autonomous driving and facial recognition systems. Applications in these areas make understanding the robustness and security of deep neural networks urgently needed, especially their resilience under malicious, finely crafted inputs. Unfortunately, the performance of DNNs is often so brittle that even imperceptibly modified inputs, also known as adversarial examples, are able to completely break the model. The robustness of DNNs under adversarial examples is well studied from both the attack (crafting powerful adversarial examples) and the defense (making the model more robust) perspectives. Recently, it has been shown that defending against adversarial examples is a very difficult task, especially under strong and adaptive attacks. Early defenses such as distillation have been broken by stronger attacks like C&W. Many defense methods have been proposed recently, but their robustness improvement cannot be certified: no provable guarantees can be given to verify their robustness. In fact, most of these uncertified defenses become vulnerable under stronger attacks.

Several recent works seek to give provable guarantees on robustness performance, such as linear relaxations, interval bound propagation, ReLU stability regularization, distributionally robust optimization, and semidefinite relaxations. Linear relaxation of neural networks is one of the most popular categories among these certified defenses. These methods use the dual of a linear program, or several similar approaches, to provide a linear relaxation of the network (referred to as a "convex adversarial polytope"), and the resulting bounds are tractable for robust optimization. However, they are both computationally and memory intensive and can increase model training time by a factor of hundreds. On the other hand, interval bound propagation (IBP) is a simple and efficient method for training verifiable neural networks, and it has achieved state-of-the-art verified errors on many datasets.
However, since the IBP bounds are very loose during the initial phase of training, the training procedure can be unstable and sensitive to hyperparameters. In this paper, we first discuss the strengths and weaknesses of existing linear relaxation based and interval bound propagation based certified robust training methods. Then we propose a new certified robust training method, CROWN-IBP, which marries the efficiency of IBP and the tightness of a linear relaxation based verification bound, CROWN. CROWN-IBP bound propagation involves an IBP based fast forward bounding pass and a tight convex relaxation based backward bounding pass (CROWN), which scales linearly with the size of the neural network output and is very efficient for problems with low output dimensions. Additionally, CROWN-IBP provides flexibility for exploiting the strengths of both IBP and convex relaxation based verifiable training methods. The efficiency, tightness, and flexibility of CROWN-IBP allow it to outperform state-of-the-art methods for training verifiable neural networks with ℓ∞ robustness under all settings on the MNIST and CIFAR-10 datasets. In our experiments, on the MNIST dataset we reach 7.02% and 12.06% IBP verified error under ℓ∞ distortions ε = 0.3 and ε = 0.4, respectively, outperforming the state-of-the-art IBP baseline (8.55% and 15.01%). On CIFAR-10, at ε = 2/255, CROWN-IBP decreases the verified error from 55.88% (IBP) to 46.03% and matches convex relaxation based methods; at larger ε, CROWN-IBP outperforms all other methods by a noticeable margin.

Neural network robustness verification algorithms seek upper and lower bounds of an output neuron for all possible inputs within a set S, typically a norm-bounded perturbation. Most importantly, the margins between the ground-truth class and any other class determine model robustness. However, it has been shown that finding the exact output range is a non-convex, NP-complete problem. Therefore, recent works resort to giving relatively tight but computationally tractable bounds of the output range using necessary relaxations of the original problem. Many of these robustness verification approaches are based on linear relaxations of the non-linear units in neural networks, including CROWN, DeepPoly, Fast-Lin, DeepZ, and Neurify. After linear relaxation, they bound the output of a neural network f_i(·) by linear upper and lower hyperplanes in the perturbation Δx: A Δx + b_L ≤ f_i(x_0 + Δx) ≤ A Δx + b_U, where the row vector A is the product of the network weight matrices W^(l) and diagonal matrices D^(l) reflecting the ReLU relaxations for output neuron i, and b_L and b_U are two bias terms unrelated to Δx (a sketch of how such linear bounds are concretized over an ℓ∞ ball follows below). Additionally, some approaches solve the Lagrangian dual of the verification problem, and others propose semidefinite relaxations, which are tighter than linear relaxation based methods but computationally expensive. Bounds on the local Lipschitz constant of a neural network can also be used for verification. Besides these deterministic verification approaches, randomized smoothing can be used to certify the robustness of any model in a probabilistic manner.
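To make the hyperplane bounds above concrete: once a linear relaxation produces A, b_L, and b_U for an output neuron, the worst case of A Δx over an ℓ∞ ball of radius ε is ±ε‖A‖₁ by Hölder's inequality. A minimal sketch (function and argument names are ours):

```python
import torch

def concretize_linear_bounds(A, b_L, b_U, x0, eps):
    """Turn the linear relaxation  A @ dx + b_L <= f_i(x0 + dx) <= A @ dx + b_U,
    valid for ||dx||_inf <= eps, into concrete scalar output bounds."""
    center = A @ x0                    # linear term at dx = 0
    radius = eps * A.abs().sum()       # worst case of A @ dx: eps * ||A||_1
    return center - radius + b_L, center + radius + b_U
```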
To improve the robustness of neural networks against adversarial perturbations, a natural idea is to generate adversarial examples by attacking the network and then use them to augment the training set. More recently, it was shown that adversarial training can be formulated as solving a minimax robust optimization problem. Given a model with parameters θ, a loss function L, and a training data distribution X, the training algorithm aims to minimize the robust loss, defined as the maximum loss within a neighborhood {x + δ | δ ∈ S} of each data point x, leading to the following robust optimization problem: min_θ E_{(x,y)∼X} [ max_{δ∈S} L(f(x + δ; θ), y) ]. Madry et al. proposed to use projected gradient descent (PGD) to approximately solve the inner max and then use the loss on the perturbed example x + δ to update the model (a minimal sketch of this inner maximization is given at the end of this section). Networks trained by this procedure achieve state-of-the-art test accuracy under strong attacks.

Despite being robust under strong attacks, models obtained by this PGD-based adversarial training do not have verified error guarantees. Due to the non-convexity of neural networks, a PGD attack can only compute a lower bound of the robust loss (the inner maximization problem), and minimizing a lower bound of the inner max cannot guarantee that the robust loss itself is minimized. In other words, even if a PGD attack cannot find a perturbation with large loss, that does not mean no such perturbation exists. This becomes problematic in safety-critical applications, since those models need certified safety.

Verifiable adversarial training methods, on the other hand, aim to obtain a network with good robustness that can be verified efficiently. This can be done by combining adversarial training and robustness verification: instead of using PGD to find a lower bound of the inner max, certified adversarial training uses a verification method to find an upper bound of the inner max, and then updates the parameters based on this upper bound of the robust loss. Minimizing an upper bound of the inner max guarantees that the true robust loss is also minimized. There are two certified robust training methods that are closely related to our work, and we describe them in detail below.

The first uses the linear relaxation bounds of a convex adversarial polytope during training; related approaches include Dvijotham et al. (2018b). Since the bound propagation process of a convex adversarial polytope is too expensive, several methods were proposed to improve its efficiency, like Cauchy projections and dynamic mixed training. However, even with these speed-ups, the training process is still slow. Also, this method may significantly reduce a model's standard accuracy (accuracy on the natural, unmodified test set). As we will demonstrate shortly, we find that this method tends to over-regularize the network during training, which is harmful for obtaining good accuracy.

The second is interval bound propagation (IBP), which uses a very simple rule to compute the pre-activation outer bounds for each layer of the neural network. Unlike linear relaxation based methods, IBP does not relax ReLU neurons and does not consider the correlations between neurons of different layers, yielding much looser bounds. Prior work proposed a variety of abstract domains to give sound over-approximations for neural networks, including the "Box/Interval domain" (referred to as IBP here), and showed that it could scale to much larger networks than other approaches could at the time. Subsequent work demonstrated that IBP could outperform many state-of-the-art results by a large margin with more precise approximations for the last linear layer and better training schemes. However, IBP can be unstable to use and hard to tune in practice, since the bounds can be very loose, especially during the initial phase of training, posing a challenge to the optimizer. To mitigate instability, a mixture of regular and minimax robust cross-entropy losses is used as the model's training loss.
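As a concrete reference for the PGD inner maximization discussed above, here is a minimal sketch (ours, not code from any of the cited papers); input-range clipping and random restarts are omitted for brevity:

```python
import torch

def pgd_attack(model, loss_fn, x, y, eps, alpha, steps):
    """Approximately solve  max_{||delta||_inf <= eps} L(f(x + delta), y)."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = loss_fn(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()   # ascend the loss
            delta.clamp_(-eps, eps)        # project back into the eps-ball
    return (x + delta).detach()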
Notation. We define an L-layer feed-forward neural network recursively as f(x) = z^(L)(x), with z^(l)(x) = W^(l) h^(l−1)(x) + b^(l) and h^(l)(x) = σ(z^(l)(x)) for l = 1, ..., L, where h^(0)(x) = x, n_0 represents the input dimension, n_L is the number of classes, and σ is an element-wise activation function. We use z to represent pre-activation neuron values and h to represent post-activation neuron values. Consider an input example x_k with ground-truth label y_k; we define the set S(x_k, ε) = {x : ‖x − x_k‖∞ ≤ ε}, and we desire a robust network to classify every point in S(x_k, ε) as y_k. We define element-wise upper and lower bounds for z^(l) and h^(l) as z^(l)_L ≤ z^(l) ≤ z^(l)_U and h^(l)_L ≤ h^(l) ≤ h^(l)_U.

(Table 1: IBP-trained models have low IBP verified errors, but when verified with a typically much tighter bound, including the convex adversarial polytope (CAP) and CROWN, the verified errors increase significantly. CROWN is generally tighter than the convex adversarial polytope; however, the gap between CROWN and IBP is still large, especially at large ε. We used a 4-layer CNN for all datasets to compute these bounds. We implemented CROWN with efficient CNN support on GPUs: https://github.com/huanzhang12/CROWN-IBP)

Verification Specifications. The neural network verification literature typically defines a specification vector c ∈ R^{n_L} that gives a linear combination of the neural network output, c^T f(x). In robustness verification, typically we set c_i = 1 where i is the ground-truth class label, c_j = −1 where j is the attack target label, and all other elements of c to 0; this represents the margin between class i and class j. For an n_L-class classifier and a given label y, we define a specification matrix C ∈ R^{n_L×n_L} as: C_{ij} = 1 if j = y and i ≠ y; C_{ij} = −1 if i = j and i ≠ y; C_{ij} = 0 otherwise (note that the y-th row contains all 0s). Importantly, each element of the vector m := C f(x) ∈ R^{n_L} gives a margin between class y and one other class. We define the lower bound of C f(x) for all x ∈ S(x_k, ε) as m(x_k, ε), which is a very important quantity: when all elements of m(x_k, ε) > 0, x_k is verifiably robust for any perturbation with ℓ∞ norm at most ε. m(x_k, ε) can be obtained by a neural network verification algorithm, such as convex adversarial polytope, IBP, or CROWN. It has been shown that for the cross-entropy (CE) loss, max_{x∈S(x_k,ε)} L(f(x); y) ≤ L(−m(x_k, ε); y). This gives us the opportunity to solve the robust optimization problem by minimizing this tractable upper bound of the inner max, which guarantees that max_{x∈S(x_k,ε)} L(f(x), y) is also minimized.

Interval Bound Propagation (IBP). IBP uses a simple bound propagation rule. For the input layer we set x_L ≤ x ≤ x_U element-wise. For affine layers we have z^(l)_U = W^(l) (h^(l−1)_U + h^(l−1)_L)/2 + |W^(l)| (h^(l−1)_U − h^(l−1)_L)/2 + b^(l) and z^(l)_L = W^(l) (h^(l−1)_U + h^(l−1)_L)/2 − |W^(l)| (h^(l−1)_U − h^(l−1)_L)/2 + b^(l), where |W^(l)| takes element-wise absolute values. Note that h^(0)_U = x_U and h^(0)_L = x_L. For element-wise monotonic increasing activation functions σ, h^(l)_U = σ(z^(l)_U) and h^(l)_L = σ(z^(l)_L). (For inputs bounded with general norms, IBP can be applied as long as the norm can be converted to per-neuron intervals after the first affine layer; for example, for ℓ_p norms (1 ≤ p ≤ ∞), Hölder's inequality can be applied at the first affine layer to obtain h^(1)_L and h^(1)_U, and the IBP rules for later layers do not change.)

We found that IBP can be viewed as training a simple augmented ReLU network that is friendly to optimizers (see Appendix A for more discussion). We also found that a network trained using IBP can obtain good verified errors when verified using IBP, but it can get much worse verified errors using linear relaxation based verification methods, including the convex adversarial polytope (CAP) (equivalently, Fast-Lin) and CROWN. Table 1 demonstrates that this gap can be very large at large ε.
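The specification matrix and the IBP rules above translate directly into code. A minimal sketch for fully connected layers (helper names are ours; vectors are 1-D tensors):

```python
import torch

def spec_matrix(n_classes, y):
    """C for label y: row j (j != y) encodes the margin f_y - f_j;
    the y-th row is all zeros."""
    C = -torch.eye(n_classes)
    C[:, y] += 1.0
    return C

def ibp_affine(W, b, h_L, h_U):
    """IBP rule for z = W h + b, written in center/radius form."""
    mid = (h_U + h_L) / 2
    rad = (h_U - h_L) / 2
    z_mid = W @ mid + b
    z_rad = W.abs() @ rad
    return z_mid - z_rad, z_mid + z_rad

def ibp_relu(z_L, z_U):
    # Element-wise monotone activation: apply it to both endpoints.
    return z_L.clamp(min=0), z_U.clamp(min=0)
```

Merging C into the last layer amounts to calling ibp_affine with C @ W_last and C @ b_last; the returned lower bound is exactly m_IBP(x, ε).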
However, IBP is a very loose bound during the initial phase of training, which makes training unstable and hard to tune; purely using IBP frequently leads to divergence. Prior work proposed to use a schedule in which ε is gradually increased during training, and a mixture of the robust cross-entropy loss with the natural cross-entropy loss as the objective to stabilize training: min_θ E_{(x,y)} [ κ L(x; y; θ) + (1 − κ) L(−m_IBP(x, ε); y; θ) ].

In Figure 1 we train a small 4-layer MNIST model, linearly increasing ε from 0 to 0.3 over 60 epochs. We plot the ℓ∞ induced norm of the 2nd CNN layer during the training process of CROWN-IBP and of convex relaxation based training. The norm of the weight matrix under convex relaxation based training does not increase. When ε becomes larger (roughly at ε = 0.2, epoch 40), the norm even starts to decrease slightly, indicating that the model is forced to learn smaller-norm weights; meanwhile, the verified error also starts to ramp up, possibly due to the lack of capacity. We conjecture that linear relaxation based training over-regularizes the model, especially at larger ε. In CROWN-IBP, however, the norms of the weight matrices keep increasing during training, and the verified error does not significantly increase when ε reaches 0.3. Another issue with current linear relaxation based training and verification methods is their high computational and memory cost and poor scalability: for the small network in Figure 1, the convex adversarial polytope (with 50 random Cauchy projections) is 8 times slower and takes 4 times more memory than CROWN-IBP (without using random projections). The convex adversarial polytope scales even worse for larger networks; see Appendix J for a comparison.

Overview. We have reviewed IBP and linear relaxation based methods above. As shown previously, IBP performs well at large ε with much smaller verified error and efficiently scales to large networks, but it can be sensitive to hyperparameters due to its very imprecise bound in the beginning phase of training. On the other hand, linear relaxation based methods give tighter lower bounds at the cost of high computational expense, but they over-regularize the network at large ε and prevent us from achieving good standard and verified accuracy. We propose CROWN-IBP, a new certified defense in which we optimize the following problem (θ represents the network parameters): min_θ E_{(x,y)} [ κ L(x; y; θ) + (1 − κ) L(−m(x, ε); y; θ) ], where our lower bound of the margin, m(x, ε) = (1 − β) m_IBP(x, ε) + β m_CROWN-IBP(x, ε), is a combination of two bounds of different natures, the IBP bound and a CROWN-style bound (detailed below), and L is the cross-entropy loss (a code sketch of this objective is given at the end of this section). Note that the combination is inside the loss function and is thus still a valid lower bound, so the upper-bound guarantee on the robust loss still holds and we remain within the minimax robust optimization framework. Similar to IBP and TRADES, we use a mixture of the natural and robust training losses with parameter κ, allowing us to explicitly trade off between clean accuracy and verified accuracy. At a high level, the computation of the CROWN-IBP lower bound m_CROWN-IBP(x, ε) consists of IBP bound propagation in a forward bounding pass and CROWN-style bound propagation in a backward bounding pass. We discuss the details of the CROWN-IBP algorithm below.

Forward Bound Propagation in CROWN-IBP. In CROWN-IBP, we first obtain z^(l)_L and z^(l)_U for all layers by applying the IBP rules above. We then obtain m_IBP(x, ε) = z^(L)_L (assuming C is merged into W^(L)). The time complexity is comparable to two forward propagation passes of the network.
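A minimal sketch of the CROWN-IBP training objective, assuming the two margin lower bounds have already been computed (argument names are ours):

```python
import torch.nn.functional as F

def crown_ibp_loss(logits, y, m_ibp_lo, m_crown_lo, kappa, beta):
    """Natural CE mixed with a robust CE built on the convex combination
    of the IBP and CROWN-style margin lower bounds."""
    m_lo = (1 - beta) * m_ibp_lo + beta * m_crown_lo   # combined lower bound
    robust = F.cross_entropy(-m_lo, y)    # upper-bounds the worst-case CE
    natural = F.cross_entropy(logits, y)
    return kappa * natural + (1 - kappa) * robust
```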
Linear Relaxation of ReLU Neurons. Given z^(l)_L and z^(l)_U computed in the previous step, we first check whether some neurons are always active (z^(l)_{L,k} > 0) or always inactive (z^(l)_{U,k} < 0); these neurons are effectively linear. For an unstable neuron k with z^(l)_{L,k} < 0 < z^(l)_{U,k}, its output can be bounded by linear functions: α_k z^(l)_k ≤ σ(z^(l)_k) ≤ (z^(l)_{U,k} / (z^(l)_{U,k} − z^(l)_{L,k})) (z^(l)_k − z^(l)_{L,k}), where CROWN proposes to adaptively select the lower-bound slope α_k = 1 when z^(l)_{U,k} ≥ |z^(l)_{L,k}| and α_k = 0 otherwise, which minimizes the relaxation error (a code sketch of this relaxation is given at the end of this section). Following this relaxation, for an input vector z^(l) we effectively replace the ReLU layer with a linear layer, giving element-wise upper and lower bounds of its output, where D^(l)_L and D^(l)_U are two diagonal matrices representing the "weights" (slopes) of the relaxed ReLU layer. Other general activation functions can be supported similarly. In the following we focus on conceptually presenting the algorithm; more details of each term can be found in the Appendix.

Backward Bound Propagation in CROWN-IBP. Unlike IBP, CROWN-style bounds start bounding from the last layer, so we refer to this as backward bound propagation (not to be confused with the back-propagation algorithm for obtaining gradients). Suppose we want to obtain the lower bound of z^(L) (we assume the specification matrix C has been merged into W^(L)), which can be bounded linearly using the relaxation above. CROWN-style bounds choose the lower bound of σ(z^(L−1)_k) when the corresponding entry of W^(L) is positive, and the upper bound otherwise. We then merge W^(L) and the linearized ReLU layer together and define A^(L−1) = W^(L) D^(L−1), along with the corresponding bias terms. A^(L−1) is then merged with the next linear layer, which is straightforward by plugging in z^(L−1) = W^(L−1) h^(L−2) + b^(L−1). We then continue to unfold the next ReLU layer σ(z^(L−2)) using its linear relaxations and compute A^(L−2) = A^(L−1) W^(L−1) D^(L−2) in a similar manner. Along with this bound propagation process, we compute a series of matrices A^(l), l ∈ {L−1, ..., 0}. At this point, we have merged all layers of the network into one linear layer: z^(L) is lower-bounded by A^(0) Δx plus accumulated bias terms for any perturbation Δx. For ReLU networks, the convex adversarial polytope uses a very similar bound propagation procedure; CROWN-style bounds additionally allow an adaptive selection of α_k and thus often give better bounds (e.g., see Table 1). We give details on each term in Appendix L.

Computational Cost. Ordinary CROWN and the convex adversarial polytope treat each intermediate layer in turn as the final layer of the network in order to obtain its bounds. For each layer m, we need a different set of matrices A^{m,(l)}, l ∈ {m − 1, ..., 0}. This causes the following computational issues: • Computation of all A^{m,(l)} matrices is expensive. Suppose the network has n neurons in each of the L − 1 intermediate layers and the input layer, and n_L ≪ n neurons in the output layer (assuming L ≥ 2); the time complexity of ordinary CROWN or convex adversarial polytope is O(L² n³), whereas ordinary forward propagation takes only O(L n²) time per example. Thus ordinary CROWN does not scale to large networks for training, due to its quadratic dependency on L and the extra factor-Ln overhead. • When both W^(l) and W^(l−1) represent convolutional layers with small kernel tensors K^(l) and K^(l−1), there are no efficient GPU operations to form the required matrix products directly from K^(l) and K^(l−1). Existing implementations either unfold at least one of the convolutional kernels into fully connected weights or use sparse matrices to represent W^(l) and W^(l−1); both suffer from poor hardware efficiency on GPUs. In CROWN-IBP, we use IBP to obtain the bounds of intermediate layers, which takes only about twice the regular forward propagation time (O(L n²)), so the first issue does not arise. The time complexity of the backward bound pass is O(L n² n_L), only n_L times slower than forward propagation and significantly more scalable than ordinary CROWN (which is Ln times slower than forward propagation, where typically n ≫ n_L). The convolution issue is also not a concern, since we start from the last specification layer W^(L), which is a small fully connected layer.
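The per-neuron ReLU relaxation described above can be sketched as follows (our own illustrative code; z_L, z_U are the pre-activation bounds from the IBP forward pass):

```python
import torch

def crown_relu_relaxation(z_L, z_U):
    """Linear relaxation of ReLU on [z_L, z_U], per neuron:
    d_lo * z <= relu(z) <= d_up * z + b_up."""
    active = z_L >= 0                 # always on: identity
    inactive = z_U <= 0               # always off: zero
    unstable = ~(active | inactive)

    d_up = torch.where(unstable,
                       z_U / (z_U - z_L).clamp(min=1e-12),
                       active.float())
    b_up = torch.where(unstable, -z_L * d_up, torch.zeros_like(z_L))
    # Adaptive lower slope: alpha = 1 if z_U >= |z_L| else 0.
    d_lo = torch.where(unstable, (z_U >= -z_L).float(), active.float())
    return d_lo, d_up, b_up
```

In the backward pass, the propagated matrix A is multiplied by diag(d_lo) or diag(d_up) element-wise, choosing per entry according to the sign of A.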
Suppose we need to compute a product of the form A^(L−1) W^(L−1) where W^(L−1) is a convolutional layer: this can be computed on GPUs using the transposed convolution operator with kernel K^(L−1), without unfolding any convolutional layers. Conceptually, the backward pass of CROWN-IBP propagates a small specification matrix W^(L) backwards, replacing affine layers with their transposed operators and activation-function layers with a diagonal matrix product. This allows an efficient implementation and better scalability.

Benefits of CROWN-IBP. Tightness, efficiency, and flexibility are the unique benefits of CROWN-IBP: • CROWN-IBP is based on CROWN, a tight linear relaxation based lower bound, which can greatly improve the quality of the bounds obtained by IBP to guide verifiable training and improve stability; • CROWN-IBP avoids the high computational cost of convex relaxation based methods: the time complexity is reduced from O(L² n³) to O(L n² n_L), well suited to problems where the output size n_L is much smaller than the input and intermediate layers' sizes; also, there is no quadratic dependency on L. Thus, CROWN-IBP is efficient on relatively large networks; • The objective is strictly more general than IBP and allows the flexibility to exploit the strengths of both IBP (good at large ε) and convex relaxation based methods (good at small ε). We can slowly decrease β to 0 during training to avoid the over-regularization problem while keeping the initial phase of IBP training more stable by providing a much tighter bound; we can also keep β = 1, which helps outperform convex relaxation based methods in the small-ε regime (e.g., ε = 2/255 on CIFAR-10).

Models and training schedules. We evaluate CROWN-IBP on three models similar to those used in prior work on the MNIST and CIFAR-10 datasets, with different ℓ∞ perturbation norms; we denote the small, medium, and large models as DM-small, DM-medium, and DM-large. During training, we first warm up (regular training without robust loss) for a fixed number of epochs and then increase ε from 0 to ε_train using a ramp-up schedule of R epochs; similar techniques are used in many other works. For both IBP and CROWN-IBP, a natural cross-entropy (CE) loss with weight κ may be added, with κ scheduled to decrease linearly from κ_start to κ_end within the R ramp-up epochs. Prior work used κ_start = 1 and κ_end = 0.5. To understand the trade-off between verified accuracy and standard (clean) accuracy, we explore two more settings: κ_start = κ_end = 0 (without the natural CE loss) and κ_start = 1, κ_end = 0. For β, a linear schedule during the ramp-up period is used; we always set β_start = 1 and β_end = 0, except that we set β_start = β_end = 1 for CIFAR-10 at ε = 2/255. Detailed model structures and hyperparameters are in Appendix C. Our training code for IBP and CROWN-IBP and our pre-trained models are publicly available.

Metrics. The verified error is the percentage of test examples where at least one element of the lower bound m(x_k, ε) is < 0. It is a guaranteed upper bound of the test error under any ℓ∞ perturbation (a minimal sketch of this metric is given at the end of this section). We obtain m(x_k, ε) using IBP or CROWN-IBP. We also report standard (clean) errors and errors under a 200-step PGD attack; PGD errors are lower bounds of the test errors under ℓ∞ perturbations.

Comparison to IBP. Table 2 presents the standard, verified, and PGD errors under different ε for each dataset and each κ setting. We test CROWN-IBP on the same model structures as in Table 1 of the IBP baseline; these three model architectures are presented in Table A in the Appendix.
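The verified error metric from the Metrics paragraph above reduces to a few lines once the margin lower bounds are available (a sketch with our own naming):

```python
import torch

def verified_error(margin_lb):
    """margin_lb: (batch, n_L) lower bounds m(x_k, eps) of C f(x).
    The true-class entry is 0 by construction, so an example is
    certified iff no entry is negative."""
    return (margin_lb < 0).any(dim=1).float().mean().item()
```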
Here we only report results with the DM-large model structure, as it performs best under all settings; small and medium models are deferred to Table C in the Appendix. When κ_start = κ_end = 0, no natural CE loss is added and the model focuses on minimizing verified error, but the lack of a natural CE loss may lead to unstable training, especially for IBP; the κ_start = 1, κ_end = 0.5 setting emphasizes minimizing standard error, usually at the cost of slightly higher verified error rates; κ_start = 1, κ_end = 0 typically achieves the best balance. We observe that under the same κ settings, CROWN-IBP outperforms IBP in both standard error and verified error. The benefit of CROWN-IBP is significant, especially when the model and ε are large. We highlight that CROWN-IBP reduces the verified error rate obtained by IBP from 8.21% to 7.02% on MNIST at ε = 0.3 and from 55.88% to 46.03% on CIFAR-10 at ε = 2/255 (to our knowledge, the first time an IBP based method outperforms the results of convex relaxation based training, and our model also has a better standard error). We also note that we are the first to obtain a verifiable bound on CIFAR-10 at ε = 16/255.

(Notes on Table 2: the results in Table 4 of the IBP baseline are evaluated using mixed integer programming (MIP) and linear programming (LP), which are strictly smaller than IBP verified errors but computationally expensive to obtain; for a fair comparison, we use the IBP verified errors reported in their Table 3. † According to direct communication with the authors, achieving the 68.44% IBP verified error requires adding an extra PGD adversarial training loss; without PGD, the verified error is 72.91% (LP/MIP verified) or 73.52% (IBP verified), and our results should be compared to 73.52%. ‡ Although not explicitly mentioned, the CIFAR-10 models in that work are trained using ε_train = 1.1 ε_test; we follow their settings. § We use β_start = β_end = 1 for this setting, and thus the CROWN-IBP bound (β = 1) is used to evaluate the verified error.)

Trade-off Between Standard Accuracy and Verified Accuracy. To show the trade-off between standard and verified accuracy, we evaluate the DM-large CIFAR-10 model with ε_test = 8/255 under different κ settings, keeping all other hyperparameters unchanged. For each κ_end ∈ {0.5, 0.25, 0}, we uniformly choose 11 values of κ_start ∈ [κ_end, 1]. A larger κ_start or κ_end tends to produce better standard errors, and we can explicitly control the trade-off between standard accuracy and verified accuracy. In Figure 2 we plot the standard and verified errors of IBP- and CROWN-IBP-trained models with different κ settings; each cluster in the figure has 11 points, representing the 11 different κ_start values. Models with lower verified errors tend to have higher standard errors. However, CROWN-IBP clearly outperforms IBP, with improvements in both standard and verified accuracy, and pushes the Pareto front towards the lower-left corner, indicating overall better performance. To reach the same verified error of 70%, CROWN-IBP can reduce the standard error from roughly 55% to 45%.

Training Stability. To discourage hand-tuning on a small set of models and to demonstrate the stability of CROWN-IBP over a broader range of models, we evaluate IBP and CROWN-IBP on a variety of small and medium sized model architectures (18 for MNIST and 17 for CIFAR-10), detailed in Appendix D. To evaluate training stability, we compare verified errors under different ε ramp-up schedule lengths (R = {30, 60, 90, 120} on CIFAR-10 and R = {10, 15, 30, 60} on MNIST)
and different κ settings. Instead of reporting just the best model, we compare the best, worst, and median verified errors over all models. Our results are presented in Figure 3: (a) is for MNIST with ε = 0.3; (c) and (d) are for CIFAR-10 with ε = 8/255. We observe that CROWN-IBP achieves better performance consistently under different schedule lengths. In addition, IBP with κ = 0 cannot stably converge on all models when the schedule is short; under all other κ settings, CROWN-IBP always performs better. We conduct additional training stability experiments on the MNIST and CIFAR-10 datasets under other model and ε settings, and the observations are similar (see Appendix H).

We propose a new certified defense method, CROWN-IBP, which combines the fast interval bound propagation (IBP) bound and a tight linear relaxation based bound, CROWN. Our method enjoys the high computational efficiency provided by IBP while facilitating the tight CROWN bound to stabilize training under the robust optimization framework, and it provides the flexibility to trade off between the two. Our experiments show that CROWN-IBP consistently outperforms other IBP baselines in both standard errors and verified errors and achieves state-of-the-art verified test errors for ℓ∞ robustness.

Given a fixed neural network (NN) f(x), IBP gives a very loose estimate of the output range of f(x). However, during training, since the weights of the network can be updated, we can equivalently view IBP as an augmented neural network, which we denote as an IBP-NN (Figure A). Unlike a usual network, which takes an input x_k with label y_k, the IBP-NN takes two points x_L = x_k − ε and x_U = x_k + ε as inputs (where x_L ≤ x ≤ x_U element-wise). The bound propagation process can be equivalently seen as forward propagation in a specially structured neural network, as shown in Figure A. After the last specification layer C (typically merged into W^(L)), we obtain m(x_k, ε). Then −m(x_k, ε) is sent to a softmax layer for prediction. Importantly, since [m(x_k, ε)]_{y_k} = 0 (the y_k-th row of C is always 0), the top-1 prediction of the augmented IBP network is y_k if and only if all other elements of m(x_k, ε) are positive, i.e., the original network will predict correctly for all x_L ≤ x ≤ x_U. When we train the augmented IBP network with an ordinary cross-entropy loss and desire it to predict correctly on an input x_k, we are implicitly doing robust optimization. The simplicity of IBP-NN may help a gradient-based optimizer to find better solutions. On the other hand, while the computation of convex relaxation based bounds can also be cast as an equivalent network (e.g., the "dual network" of convex adversarial polytope training), its construction is significantly more complex and sometimes requires non-differentiable indicator functions (the sets I+, I−, and I). As a consequence, it can be challenging for the optimizer to find a good solution, and the optimizer tends to make the bounds tighter naively by reducing the norms of the weight matrices and over-regularizing the network, as demonstrated in Figure 1.

Both IBP and CROWN-IBP produce lower bounds m(x, ε), and a larger lower bound has better quality. To measure the relative tightness of the two bounds, we take the average, over all training examples, of the difference between the two margin lower bounds, m_CROWN-IBP(x_k, ε) − m_IBP(x_k, ε); a positive value indicates that CROWN-IBP is tighter than IBP (a minimal sketch follows below). In Figure B we plot this averaged bound difference during the ε schedule for one MNIST model and one CIFAR-10 model. We observe that during the early phase of training, when the ε schedule just starts, CROWN-IBP produces significantly better bounds than IBP.
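The tightness measure described above is simply the mean difference of the two margin lower bounds; a one-line sketch (naming ours):

```python
def avg_bound_gap(m_crown_ibp, m_ibp):
    # Positive values mean the CROWN-IBP lower bound is tighter (larger).
    return (m_crown_ibp - m_ibp).mean().item()
```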
A tighter lower bound m(x, ε) gives a tighter upper bound for max_{δ∈S} L(x + δ; y; θ), making the minimax optimization problem more effective to solve. As the training ε schedule proceeds, the model gradually learns how to make the IBP bounds tighter, and eventually the difference between the two bounds becomes close to 0.

Why does CROWN-IBP stabilize IBP training? For a randomly initialized network or a naturally trained network, IBP bounds are very loose. But as Table 1 shows, a network trained using IBP can eventually obtain quite tight IBP bounds and high verified accuracy; the network can adapt to IBP bounds and learn a specific set of weights that make IBP tight while also classifying examples correctly. However, since training has to start from weights that produce loose IBP bounds, the beginning phase of IBP training can be challenging and is vitally important. We observe that IBP training can have a large performance variance across models and initializations, and IBP is more sensitive to hyperparameters like κ and the ε schedule length; in Figure 3, many IBP models converge sub-optimally (large worst/median verified errors). The reason for this instability is that during the beginning phase of training, the loose bounds produced by IBP make the robust loss ineffective, and it is challenging for the optimizer to reduce this loss and find a set of good weights that produce tight IBP verified bounds in the end. Conversely, if our bounds are much tighter at the beginning, the robust loss always remains in a reasonable range during training, and the network can gradually learn to find a good set of weights that make the IBP bounds increasingly tighter (this is evident in Figure B). Initially, tighter bounds can be provided by a convex relaxation based method like CROWN, and they are gradually replaced by IBP bounds (using β_start = 1, β_end = 0), eventually leading to a model with learned tight IBP bounds.

The goal of the following experiments is to reproduce the performance reported in the IBP baseline and demonstrate the advantage of CROWN-IBP under the same experimental settings. Specifically, to reproduce the IBP results for CIFAR-10, we train using a large batch size and a long training schedule on TPUs (we can also replicate these results on multiple GPUs using a reasonable amount of training time; see Section F). For this set of experiments we use the same code base as the IBP baseline. For model performance on a comprehensive set of small and medium sized models trained on a single GPU, please see Table D in Section F, as well as the training stability experiments in Section 4 and Section H. The model structures (DM-small, DM-medium, and DM-large) used in Table C and Table 2 are listed in Table A; these three model structures are the same as in the IBP baseline. Training hyperparameters are detailed below: • For the MNIST IBP baseline, we follow exactly the same set of hyperparameters as the original work. We train for 100 epochs (60K steps) with a batch size of 100 and use warm-up and ε ramp-up durations of 2K and 10K steps. The learning rate for the Adam optimizer is set to 1 × 10^-3 and decayed by 10× at steps 15K and 25K. Our IBP results match the reported numbers. Note that we always use IBP verified errors rather than MIP verified errors. We use the same schedule for CROWN-IBP with ε_train = 0.2 (ε_test = 0.1) in Table C and Table 2. For ε_train = 0.4, this schedule obtains verified error rates of 4.22%, 7.01%, and 12.84% at ε_test = {0.2, 0.3, 0.4} using the DM-large model, respectively.
• For MNIST CROWN-IBP with ε_train = 0.4 in Table C and Table 2, we train for 200 epochs with a batch size of 256. We use the Adam optimizer and set the learning rate to 5 × 10^-4. We warm up with 10 epochs of regular training and gradually ramp up ε from 0 to ε_train over 50 epochs. We reduce the learning rate by 10× at epochs 130 and 190. Under this schedule, IBP's performance becomes worse (by about 1-2% in all settings), but the schedule improves the CROWN-IBP verified error at ε_test = 0.4 from 12.84% to 12.06% and does not affect the verified errors at other ε_test levels. • For CIFAR-10, we follow the setting of the IBP baseline and train for 3200 epochs on 32 TPU cores. We use a batch size of 1024 and a learning rate of 5 × 10^-4. We warm up for 320 epochs and ramp up ε for 1600 epochs. The learning rate is reduced by 10× at epochs 2600 and 3040. We use random horizontal flips and random crops as data augmentation and normalize images according to per-channel statistics. Note that this schedule is slightly different from the one used in the IBP baseline: we use a smaller batch size due to TPU memory constraints (we used TPUv2, which has half the memory capacity of TPUv3), and we also decay the learning rates later. We found that this schedule improves both the IBP baseline performance and the CROWN-IBP performance by around 1%; for example, at ε = 8/255, this improved schedule reduces the verified error from 73.52% to 72.68% for the IBP baseline (κ_start = 1.0, κ_end = 0.5) using the DM-large model.

Hyperparameters κ and β. We use a linear schedule for both hyperparameters, decreasing κ from κ_start to κ_end while increasing β from β_start to β_end; the schedule length is set to the same length as the ε schedule (a sketch of these linear schedules is given below). In both IBP and CROWN-IBP, the hyperparameter κ trades off between clean accuracy and verified accuracy. Figure 2 shows that κ_end can significantly affect this trade-off, while κ_start has a minor impact compared to κ_end. In general, we recommend κ_start = 1 and κ_end = 0 as a safe starting point; we can then adjust κ_end to a larger value if better standard accuracy is desired. The setting κ_start = κ_end = 0 (pure minimax optimization) can be challenging for IBP, as there is no natural loss as a stabilizer; under this setting, CROWN-IBP usually produces a model with good (sometimes the best) verified accuracy but noticeably worse standard accuracy (on CIFAR-10 at ε = 8/255 the difference can be as large as 10%), so this setting is recommended only when a model with the best verified accuracy is desired at the cost of noticeably reduced standard accuracy. Compared to IBP, CROWN-IBP adds one additional hyperparameter, β. β has a clear meaning: balancing between the convex relaxation based bound and the IBP bound. β_start is always set to 1, since we want CROWN-IBP to provide tighter bounds to stabilize the early phase of training, when IBP bounds are very loose; β_end determines whether we use a convex relaxation based bound (β_end = 1) or an IBP based bound (β_end = 0) after the ε schedule. Thus, we set β_end = 1 for the cases where convex relaxation based methods outperform IBP (e.g., CIFAR-10 at ε = 2/255) and β_end = 0 for the cases where IBP outperforms convex relaxation based bounds. We do not tune or grid-search this hyperparameter.

(Table A: "CONV k w×h+s" represents a 2D convolutional layer with k filters of size w×h using a stride of s in both dimensions; "FC n" denotes a fully connected layer with n outputs. The last fully connected layer is omitted. All networks use ReLU activation functions.)
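The linear schedules for ε, κ, and β described above can all share one helper; a sketch under our own naming:

```python
def linear_ramp(step, start_step, ramp_steps, v_start, v_end):
    """Generic linear schedule used for eps (0 -> eps_train), kappa
    (kappa_start -> kappa_end), and beta (beta_start -> beta_end)."""
    if step <= start_step:
        return v_start
    t = min((step - start_step) / max(ramp_steps, 1), 1.0)
    return v_start + t * (v_end - v_start)

# e.g. eps   = linear_ramp(step, warmup, R, 0.0, eps_train)
#      kappa = linear_ramp(step, warmup, R, 1.0, 0.0)
#      beta  = linear_ramp(step, warmup, R, 1.0, 0.0)
```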
In all our training stability experiments, we use a large number of relatively small models and train them on a single GPU. These small models cannot achieve state-of-the-art performance, but they can be trained quickly and cheaply, allowing us to explore training stability over a variety of settings and report min, median and max statistics. We use the following hyperparameters: • For MNIST, we train 100 epochs with batch size 256. We use the Adam optimizer and a learning rate of 5 × 10−4. The first epoch is standard training for warming up. We gradually increase ε linearly per batch during training with a schedule length of 60 epochs (see the sketch at the end of this subsection). We reduce the learning rate by 50% every 10 epochs after the ε schedule ends. No data augmentation technique is used, and the whole 28 × 28 images are used (normalized to the 0-1 range). • For CIFAR, we train 200 epochs with batch size 128. We use the Adam optimizer and a learning rate of 0.1% (1 × 10−3). The first 10 epochs are standard training for warming up. We gradually increase ε linearly per batch during training with a schedule length of 120 epochs. We reduce the learning rate by 50% every 10 epochs after the ε schedule ends. We use random horizontal flips and random crops as data augmentation. The three channels are normalized with mean (0.4914, 0.4822, 0.4465) and standard deviation (0.2023, 0.1914, 0.2010); these are per-channel statistics of the CIFAR-10 training set. All verified error numbers are evaluated on the test set using IBP, since the networks are trained using IBP (β = 0 after ε reaches the target ε_train), except for CIFAR ε = 2/255, where we set β = 1 to compute the CROWN-IBP verified error. Table B gives the 18 model structures used in our training stability experiments. These model structures are designed by us and are not taken from prior work. Most CIFAR-10 models share the same structures as the MNIST models (unless noted in the table) except that their input dimensions are different. Model A is too small for CIFAR-10, thus we remove it for the CIFAR-10 experiments. Models A-J are the "small models" reported in Figure 3. Models K-T are the "medium models" reported in Figure 3. For the results in Table 1, we use a small model (model structure B) for all three datasets. These MNIST and CIFAR-10 models can each be trained on a single NVIDIA RTX 2080 Ti GPU within a few hours. In Table 2 we report results from the best DM-Large model. Table C presents the verified, standard (clean) and PGD attack errors for all three model structures used in the IBP paper (DM-Small, DM-Medium and DM-Large) trained on the MNIST and CIFAR-10 datasets. We evaluate IBP and CROWN-IBP under the same three κ settings as in Table 2. We use the hyperparameters detailed in Section C to train these models. We can see that given any model structure and any κ setting, CROWN-IBP consistently outperforms IBP. Table B: Model structures used in our training stability experiments. We use ReLU activations for all models. We omit the last fully connected layer as its output dimension is always 10. In the table, "Conv k w × w + s" represents a 2D convolutional layer with k filters of size w × w and a stride of s. Models A-J are referred to as "small models" and models K-T are referred to as "medium models". In this section we present additional experiments on a variety of smaller MNIST and CIFAR-10 models which can be trained on a single GPU. The purpose of this experiment is to compare model performance statistics (min, median and max) on a wide range of models, rather than a few hand-selected models.
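For the per-batch ε ramp-up used in these stability experiments, a minimal sketch (our own illustration; the batch counts in the example are made up) could look as follows.

```python
# Epsilon schedule: zero during warm-up, then linear growth per batch up to
# eps_train, then constant until the end of training.
def eps_at(batch_idx, warmup_batches, ramp_batches, eps_train):
    if batch_idx < warmup_batches:               # standard training (warm-up)
        return 0.0
    t = (batch_idx - warmup_batches) / float(ramp_batches)
    return min(t, 1.0) * eps_train

# Example: 1 warm-up epoch of 200 batches, 60-epoch ramp of 200 batches each.
print(eps_at(150, 200, 60 * 200, 0.4))    # 0.0, still warming up
print(eps_at(6200, 200, 60 * 200, 0.4))   # 0.2, half-way through the ramp
```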
The model structures used in these experiments are detailed in Table B. In Table D, we present the best, median and worst verified and standard (clean) test errors for models trained on MNIST and CIFAR-10 using IBP and CROWN-IBP. Although these small models cannot achieve state-of-the-art performance, CROWN-IBP's best, median and worst verified errors among all model structures consistently outperform those of IBP. Notably, in many situations the worst-case verified error improves significantly using CROWN-IBP, because IBP training is not stable on some of the models. It is worth noting that in this set of experiments we explore a different setting: ε_train = ε_test. We found that both IBP and CROWN-IBP tend to overfit to the training dataset on MNIST with small ε, thus verified errors are not as good as those presented in Table C. This overfitting issue can be alleviated by using ε_train > ε_test (as used in Table 2 and Table C), or by using an explicit ℓ1 regularization, which will be discussed in detail in Section I. To further test the training stability of CROWN-IBP, we run each MNIST experiment (using selected models in Table B) 5 times to get the mean and standard deviation of the verified and standard errors on the test set. Results are presented in Table E. Table E: Means and standard deviations of verified and standard errors of 10 MNIST models trained using CROWN-IBP. The architectures of these models are presented in Table B. We run each model 5 times to compute its mean and standard deviation. An ℓ1 regularization term in CROWN-IBP training helps when ε_train = 0.1 or 0.2. The verified and standard errors on the training and test sets with and without regularization can be found in Table F. We can see that with a small ℓ1 regularization added (λ = 5 × 10−5), we can reduce verified errors on the test set significantly (a minimal sketch of this penalty is given below). This makes CROWN-IBP comparable to the numbers reported for the convex adversarial polytope; at ε = 0.1, the best model using convex adversarial polytope training can achieve 3.67% verified error, while CROWN-IBP achieves a 3.60% best certified error on the models presented in Table F. The overfitting is likely caused by IBP's strong learning power without over-regularization, which also explains why IBP based methods significantly outperform linear relaxation based methods at larger ε values. Using early stopping can also improve the verified error on the test set; see Figure D. J TRAINING TIME In Table G we present the training time of CROWN-IBP, IBP and the convex adversarial polytope on several representative models. All experiments are measured on a single RTX 2080 Ti GPU with 11 GB RAM, except for 2 DM-Large models where we use 4 RTX 2080 Ti GPUs to speed up training. We can observe that CROWN-IBP is in practice 1.5 to 3.5 times slower than IBP. Theoretically, CROWN-IBP is up to n_L = 10 times slower than IBP; however, the total training time is usually far less than 10 times longer, since the CROWN-IBP bound is only computed during the ε ramp-up phase, and CROWN-IBP has higher GPU computation intensity and thus better GPU utilization than IBP. Random projections can accelerate training for the convex adversarial polytope, but using random projections alone is not sufficient to scale purely linear relaxation based methods to larger datasets; thus we advocate a combination of IBP bounds with linear relaxation based methods as in CROWN-IBP, which offers good scalability and stability. We also note that the random projection based acceleration can be applied to the backward bound propagation (CROWN-style bound) in CROWN-IBP to further speed it up.
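The ℓ1 penalty discussed in Section I admits a one-line implementation; the following sketch (an assumed helper, not the reference code) adds λ Σ|w| over the weight tensors to the training loss.

```python
# Small L1 regularization (lambda = 5e-5 in the experiments above) that
# alleviated the MNIST overfitting; biases (1-D parameters) are skipped.
def l1_penalty(model, lam=5e-5):
    return lam * sum(p.abs().sum() for p in model.parameters() if p.dim() > 1)

# loss = crown_ibp_loss(...) + l1_penalty(model)   # added to the training loss
```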
The use of 32 TPUs for our CIFAR-10 experiments is not necessary. We use TPUs mainly to obtain a completely fair comparison to IBP, as their implementation was TPU-based. Since TPUs are not widely available, we additionally implemented CROWN-IBP using multiple GPUs. We train the best models in Table 2 on 4 RTX 2080 Ti GPUs. As shown in Table H, we can achieve comparable verified errors using GPUs, and the differences between GPU and TPU training are around ±0.5%. Training time is reported in Table G. L EXACT FORMS OF THE CROWN-IBP BACKWARD BOUND CROWN is a general framework that replaces non-linear functions in a neural network with linear upper and lower hyperplanes with respect to pre-activation variables, such that the entire neural network function can be bounded by a linear upper hyperplane and a linear lower hyperplane for all x ∈ S (S is typically a norm-bounded ball or a box region). CROWN achieves such linear bounds by replacing non-linear functions with linear bounds, and by utilizing the fact that linear combinations of linear bounds are still linear; thus these linear bounds can propagate through layers. (Footnote: for CIFAR-10 ε = 2/255 we use β_start = β_end = 1, the same as in Table 2, and thus the CROWN-IBP bound is used to evaluate the verified error.) Suppose we have a non-linear vector function σ applied to an input (pre-activation) vector z. CROWN requires elementwise linear bounds of the general form A_σ z + b_σ ≤ σ(z) ≤ Ā_σ z + b̄_σ. In general, the specific bounds A_σ, b_σ, Ā_σ, b̄_σ for different σ need to be given on a case-by-case basis, depending on the characteristics of σ and the pre-activation range z ≤ z ≤ z̄. In neural networks, common choices of σ are ReLU, tanh, sigmoid, maxpool, etc. The convex adversarial polytope is also a linear relaxation based technique that is closely related to CROWN, but only for ReLU layers. For ReLU such bounds are simple: A_σ = D and Ā_σ = D̄ are diagonal matrices and b_σ = 0, where the diagonal entries are D_k = 1 if z_k > 0 (i.e., this neuron is always active), D_k = 0 if z̄_k < 0 (i.e., this neuron is always inactive), and D_k = α otherwise, for any 0 ≤ α ≤ 1; similarly, D̄_k = 1 if z_k > 0, D̄_k = 0 if z̄_k < 0, and D̄_k = z̄_k/(z̄_k − z_k) otherwise, with the corresponding upper intercept b̄_k = −D̄_k z_k for the unstable neurons. Note that CROWN-style bounds require knowing all pre-activation bounds z^(l) and z̄^(l). We assume these bounds are valid for x ∈ S. In CROWN-IBP, these bounds are obtained by interval bound propagation (IBP). With pre-activation bounds z^(l) and z̄^(l) given (for x ∈ S), we rewrite the CROWN lower bound for the special case of ReLU neurons: Theorem L.1 (CROWN Lower Bound). For an L-layer neural network function f(x): R^{n_0} → R^{n_L}, ∀j ∈ [n_L], ∀x ∈ S, we have f̲_j(x) ≤ f_j(x), where the lower bound f̲_j is constructed by a backward recursion of the linear coefficients over layers l ∈ {0, · · ·, L − 1}.
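To illustrate the two ingredients combined in CROWN-IBP under the notation above, here is a small numpy sketch (our own, not the paper's code): interval propagation of pre-activation bounds through an affine layer, and the diagonal CROWN ReLU relaxation coefficients derived from those bounds.

```python
import numpy as np

def ibp_affine(l, u, W, b):
    """Propagate the interval [l, u] through z = W x + b."""
    mid, rad = (u + l) / 2.0, (u - l) / 2.0
    mid_out = W @ mid + b
    rad_out = np.abs(W) @ rad
    return mid_out - rad_out, mid_out + rad_out

def crown_relu_coeffs(z_low, z_up, alpha=0.0):
    """Diagonal slopes/intercepts of the linear ReLU relaxation."""
    # Lower line: slope 1 (always active), 0 (always inactive), alpha otherwise.
    d_low = np.where(z_low > 0, 1.0, np.where(z_up < 0, 0.0, alpha))
    # Upper line through (z_low, 0) and (z_up, z_up) for unstable neurons.
    unstable = (z_low <= 0) & (z_up >= 0)
    d_up = np.where(z_low > 0, 1.0, 0.0)
    d_up[unstable] = z_up[unstable] / (z_up[unstable] - z_low[unstable] + 1e-12)
    b_up = np.zeros_like(z_up)
    b_up[unstable] = -d_up[unstable] * z_low[unstable]
    return d_low, d_up, b_up   # sigma(z) is bounded by d_low*z and d_up*z + b_up
```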
Skxuk1rFwB
We propose a new certified adversarial training method, CROWN-IBP, that achieves state-of-the-art robustness for L_inf norm adversarial perturbations.
Network pruning is widely used for reducing the heavy inference cost of deep models in low-resource settings. A typical pruning algorithm is a three-stage pipeline, i.e., training (a large model), pruning and fine-tuning. During pruning, according to a certain criterion, redundant weights are pruned and important weights are kept to best preserve the accuracy. In this work, we make several surprising observations which contradict common beliefs. For all state-of-the-art structured pruning algorithms we examined, fine-tuning a pruned model only gives comparable or worse performance than training that model with randomly initialized weights. For pruning algorithms which assume a predefined target network architecture, one can get rid of the full pipeline and directly train the target network from scratch. Our observations are consistent for multiple network architectures, datasets, and tasks, which imply that: 1) training a large, over-parameterized model is often not necessary to obtain an efficient final model, 2) learned "important" weights of the large model are typically not useful for the small pruned model, 3) the pruned architecture itself, rather than a set of inherited "important" weights, is more crucial to the efficiency of the final model, which suggests that in some cases pruning can be useful as an architecture search paradigm. Our results suggest the need for more careful baseline evaluations in future research on structured pruning methods. We also compare with the "Lottery Ticket Hypothesis" (Frankle & Carbin 2019), and find that with the optimal learning rate, the "winning ticket" initialization used there does not bring improvement over random initialization. Over-parameterization is a widely-recognized property of deep neural networks, which leads to high computational cost and high memory footprint for inference. As a remedy, network pruning (BID12; BID14) has been identified as an effective technique to improve the efficiency of deep networks for applications with a limited computational budget. A typical procedure of network pruning consists of three stages: 1) train a large, over-parameterized model (sometimes there are pretrained models available), 2) prune the trained large model according to a certain criterion, and 3) fine-tune the pruned model to regain the lost performance. This pipeline rests on two common beliefs: first, that training a large, over-parameterized model is necessary to obtain a strong final small model; second, that both the pruned architecture and its associated "important" weights are essential for the final efficient model (Han et al. 2015). Thus most existing pruning techniques choose to fine-tune a pruned model instead of training it from scratch. The preserved weights after pruning are usually considered to be critical, as how to accurately select the set of important weights is a very active research topic in the literature (BID14; BID4). In this work, we show that both of the beliefs mentioned above are not necessarily true for structured pruning methods, which prune at the level of convolution channels or larger. Based on an extensive empirical evaluation of state-of-the-art pruning algorithms on multiple datasets with multiple network architectures, we make two surprising observations. First, for structured pruning methods with predefined target network architectures (Figure 2), directly training the small target model from random initialization can achieve the same, if not better, performance as the model obtained from the three-stage pipeline. In this case, starting with a large model is not necessary and one could instead directly train the target model from scratch.
Second, for structured pruning methods with auto-discovered target networks, training the pruned model from scratch can also achieve comparable or even better performance than fine-tuning. This observation shows that for these pruning methods, what matters more may be the obtained architecture rather than the preserved weights, even though training the large model is needed to find that target architecture. Interestingly, for an unstructured pruning method that prunes individual parameters, we found that training from scratch can mostly achieve comparable accuracy with pruning and fine-tuning on smaller-scale datasets, but fails to do so on the large-scale ImageNet benchmark. Note that in some cases, if a pretrained large model is already available, pruning and fine-tuning from it can save the training time required to obtain the efficient model. The contradiction between some of our results and those reported in the literature might be explained by less carefully chosen hyper-parameters, data augmentation schemes and unfair computation budgets for evaluating baseline approaches. Figure 2: Difference between predefined and automatically discovered target architectures, with channel pruning as an example (predefined: prune x% channels in each layer of a 4-layer model; automatic: prune a%, b%, c%, d% channels in each layer). The pruning ratio x is user-specified, while a, b, c, d are determined by the pruning algorithm. Unstructured sparse pruning can also be viewed as automatic. Our results advocate a rethinking of existing structured network pruning algorithms. It seems that the over-parameterization during the first-stage training is not as beneficial as previously thought. Also, inheriting weights from a large model is not necessarily optimal, and might trap the pruned model into a bad local minimum, even if the weights are considered "important" by the pruning criterion. Instead, our results suggest that the value of automatic structured pruning algorithms sometimes lies in identifying efficient structures and performing implicit architecture search, rather than in selecting "important" weights. For most structured pruning methods which prune channels/filters, this corresponds to searching the number of channels in each layer. In Section 5, we discuss this viewpoint through carefully designed experiments, and show that the patterns in the pruned model can provide design guidelines for efficient architectures. The rest of the paper is organized as follows: in Section 2, we introduce background and some related works on network pruning; in Section 3, we describe our methodology for training the pruned model from scratch; in Section 4, we experiment on various pruning methods and show our main results for both pruning methods with predefined and automatically discovered target architectures; in Section 5, we discuss the value of automatic pruning methods in searching efficient network architectures; in Section 6, we discuss some implications and conclude the paper. The recent success of deep convolutional networks (BID13; BID8) has been coupled with an increased requirement of computation resources. In particular, the model size, memory footprint, number of computation operations (FLOPs) and power usage are major aspects inhibiting the use of deep neural networks in some resource-constrained settings. Those large models can be infeasible to store and run in real time on embedded systems.
To address this issue, many methods have been proposed, such as low-rank approximation of weights (BID11), weight quantization, knowledge distillation (BID2) and network pruning (BID14), among which network pruning has gained notable attention due to its competitive performance and compatibility. One major branch of network pruning methods is individual weight pruning, and it dates back to Optimal Brain Damage (BID12) and Optimal Brain Surgeon, which prune weights based on the Hessian of the loss function. More recently, it was proposed to prune network weights with small magnitude, and this technique was further incorporated into the "Deep Compression" pipeline (b) to obtain highly compressed models. Other works propose a data-free algorithm to remove redundant neurons iteratively, use Variational Dropout to prune redundant weights, or learn sparse networks through L0-norm regularization based on stochastic gates. However, one drawback of these unstructured pruning methods is that the resulting weight matrices are sparse, which cannot lead to compression and speedup without dedicated hardware/libraries. In contrast, structured pruning methods prune at the level of channels or even layers. Since the original convolution structure is still preserved, no dedicated hardware/libraries are required to realize the benefits. Among structured pruning methods, channel pruning is the most popular, since it operates at the most fine-grained level while still fitting in conventional deep learning frameworks. Some heuristic methods include pruning channels based on their corresponding filter weight norm (BID14) and the average percentage of zeros in the output (BID3). Group sparsity is also widely used to smooth the pruning process after training (BID0; BID10). BID4 and related work impose sparsity constraints on channel-wise scaling factors during training, whose magnitudes are then used for channel pruning. BID6 uses a similar technique to prune coarser structures such as residual blocks. He et al. (2017b) and related methods minimize the next layer's feature reconstruction error to determine which channels to keep. Other criteria include optimizing the reconstruction error of the final response layer while propagating an "importance score" for each channel, using Taylor expansion to approximate each channel's influence on the final loss and pruning accordingly, analyzing the intrinsic correlation within each layer to prune redundant channels, and layer-wise compensate filter pruning to improve commonly-adopted heuristic pruning metrics. He et al. (2018a) propose to allow pruned filters to recover during the training process. BID15 prune certain structures in the network based on the current input. Our work is also related to some recent studies on the characteristics of pruning algorithms. One study shows that random channel pruning can perform on par with a variety of more sophisticated pruning criteria, demonstrating the plasticity of network models. In the context of unstructured pruning, the Lottery Ticket Hypothesis conjectures that certain connections, together with their randomly initialized weights, can enable a comparable accuracy to the original network when trained in isolation. We provide comparisons between the Lottery Ticket Hypothesis and this work in Appendix A. Another study shows that training a small-dense model cannot achieve the same accuracy as a pruned large-sparse model with an identical memory footprint.
In this work, we reveal a different and rather surprising characteristic of structured network pruning methods: fine-tuning the pruned model with inherited weights is no better than training it from scratch; the resulting pruned architectures are more likely to be what brings the benefit. In this section, we describe in detail our methodology for training a small target model from scratch. Target Pruned Architectures. We first divide network pruning methods into two categories. In a pruning pipeline, the target pruned model's architecture can be determined either by a human (i.e., predefined) or by the pruning algorithm (i.e., automatic) (see Figure 2). When a human predefines the target architecture, a common criterion is the ratio of channels to prune in each layer. For example, we may want to prune 50% of the channels in each layer of VGG. In this case, no matter which specific channels are pruned, the pruned target architecture remains the same, because the pruning algorithm only locally prunes the least important 50% of channels in each layer. In practice, the ratio in each layer is usually selected through empirical studies or heuristics. Examples of predefined structured pruning include BID14, He et al. (2017b) and He et al. (2018a). When the target architecture is automatically determined by a pruning algorithm, it is usually based on a pruning criterion that globally compares the importance of structures (e.g., channels) across layers. Examples of automatic structured pruning include BID4 and BID6. Unstructured pruning also falls in the category of automatic methods, where the positions of pruned weights are determined by the training process and the pruning algorithm, and it is usually not possible to predefine the positions of zeros before training starts. Datasets, Network Architectures and Pruning Methods. In the network pruning literature, the CIFAR-10, CIFAR-100 (BID9) and ImageNet datasets are the de-facto benchmarks, while VGG, ResNet and DenseNet (BID4) are the common network architectures. We evaluate four predefined pruning methods (BID14, ThiNet, He et al. (2017b) and He et al. (2018a)), two automatic structured pruning methods (BID4 and BID6), and one unstructured pruning method. For the first six methods, we evaluate using the same (target model, dataset) pairs as presented in the original papers to keep our results comparable. For the last one, we use the aforementioned architectures instead, since the ones in the original paper are no longer state-of-the-art. On the CIFAR datasets, we run each experiment with 5 random seeds, and report the mean and standard deviation of the accuracy. Training Budget. One crucial question is how long we should train the small pruned model from scratch. Naively training for the same number of epochs as we train the large model might be unfair, since the small pruned model requires significantly less computation for one epoch. Alternatively, we could compute the floating point operations (FLOPs) for both the pruned and large models, and choose the number of training epochs for the pruned model that would lead to the same amount of computation as training the large model. Note that it is not clear how to train the models to "full convergence" given the stepwise decaying learning rate schedule commonly used in the CIFAR/ImageNet classification tasks.
In our experiments, we use Scratch-E to denote training the small pruned models for the same number of epochs, and Scratch-B to denote training for the same amount of computation budget (on ImageNet, if the pruned model saves more than 2× FLOPs, we just double the number of epochs for training Scratch-B, which amounts to a smaller computation budget than large model training). When extending the number of epochs in Scratch-B, we also extend the learning rate decay schedule proportionally; a sketch of this budget rule is given at the end of this section. One may argue that we should instead train the small target model for fewer epochs since it may converge faster. However, in practice we found that increasing the training epochs within a reasonable range is rarely harmful. In our experiments we found that in most cases Scratch-E is enough, while in other cases Scratch-B is needed for a comparable accuracy to fine-tuning. Note that our evaluations use the same computation as large model training without counting the computation in fine-tuning, since in our evaluated methods fine-tuning does not take too long; if anything this still favors the pruning and fine-tuning pipeline. Implementation. In order to keep our setup as close to the original papers as possible, we use the following protocols: 1) if a previous pruning method's training setup is publicly available, e.g., BID4, BID6 and He et al. (2018a), we adopt the original implementation; 2) otherwise, for simpler pruning methods, e.g., BID14 and unstructured magnitude-based pruning, we re-implement the three-stage pruning procedure and generally achieve similar results as in the original papers; 3) for the remaining two methods, the pruned models are publicly available but without the training setup, thus we choose to re-train both large and small target models from scratch. Interestingly, the accuracy of our re-trained large models is higher than what is reported in the original papers. This could be due to the difference in deep learning frameworks: we used PyTorch while the original papers used Caffe (BID8). In this case, to accommodate the effects of different frameworks and training setups, we report the relative accuracy drop from the unpruned large model. We use standard training hyper-parameters and data-augmentation schemes, which are used both in standard image classification models (BID4) and in network pruning methods (BID14; BID4; BID6). The optimization method is SGD with Nesterov momentum, using a stepwise decay learning rate schedule. For random weight initialization, we adopt a standard scheme. For results of models fine-tuned from inherited weights, we either use the released models from the original papers (case 3 above) or follow the common practice of fine-tuning the model using the lowest learning rate from training the large model (BID14). For CIFAR, training/fine-tuning takes 160/40 epochs. For ImageNet, training/fine-tuning takes 90/20 epochs. For reproducing the results and more details about the settings, see our code at: https://github.com/Eric-mingjie/rethinking-network-pruning. In this section we present our experimental results comparing training pruned models from scratch and fine-tuning from inherited weights, for both predefined and automatic (Figure 2) structured pruning, as well as for a magnitude-based unstructured pruning method. We also include a comparison with the Lottery Ticket Hypothesis and an experiment on transfer learning from image classification to object detection in the Appendix, due to space limits. We also put the results and discussions on one more pruning method (Soft Filter Pruning, He et al. (2018a)) in the Appendix.
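As a concrete illustration of the Scratch-B budget rule described above, the following sketch (our own; the helper name and interface are assumptions) converts the FLOPs saving into an extended epoch count and stretches the decay milestones proportionally.

```python
# Scratch-B budget: match the large model's total training FLOPs, capping the
# extension at 2x on ImageNet, and rescale learning-rate decay milestones.
def scratch_b_schedule(base_epochs, flops_large, flops_pruned,
                       milestones, imagenet=False):
    ratio = flops_large / float(flops_pruned)
    if imagenet and ratio > 2.0:
        ratio = 2.0                      # at most double the epochs on ImageNet
    epochs = int(round(base_epochs * ratio))
    scale = epochs / float(base_epochs)
    return epochs, [int(round(m * scale)) for m in milestones]

# Example: CIFAR schedule of 160 epochs, decays at 80/120, 2.5x FLOPs saving.
print(scratch_b_schedule(160, 2.5, 1.0, [80, 120]))  # (400, [200, 300])
```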
L1-norm based Filter Pruning (BID14) is one of the earliest works on filter/channel pruning for convolutional networks. In each layer, a certain percentage of filters with smaller L1-norm are pruned (a short sketch of this criterion follows at the end of this section). Table 1 shows our results. The Pruned Model column shows the list of predefined target models (see BID14 for configuration details on each model). We observe that in each row, scratch-trained models achieve at least the same level of accuracy as fine-tuned models, with Scratch-B slightly higher than Scratch-E in most cases. On ImageNet, both Scratch-B models are better than the fine-tuned ones by a noticeable margin. Table 1: Results (accuracy) for L1-norm based filter pruning (BID14). "Pruned Model" is the model pruned from the large model. Configurations of Model and Pruned Model are both from the original paper. ThiNet greedily prunes the channel that has the smallest effect on the next layer's activation values. As shown in TAB2, for VGG-16 and ResNet-50, both Scratch-E and Scratch-B can almost always achieve better performance than the fine-tuned model, often by a significant margin. The only exception is Scratch-E for VGG-Tiny, where the model is pruned very aggressively from VGG-16 (FLOPs reduced by 15×), and as a result, the training budget for Scratch-E is drastically reduced. The training budget of Scratch-B for this model is also 7 times smaller than for the original large model, yet it can achieve the same level of accuracy as the fine-tuned model. Regression based Feature Reconstruction (He et al. 2017b) prunes channels by minimizing the feature map reconstruction error of the next layer. In contrast to ThiNet, this optimization problem is solved by LASSO regression. Results are shown in TAB3. Again, in terms of relative accuracy drop from the large models, scratch-trained models are better than the fine-tuned models. TAB3: Results (accuracy) for regression based feature reconstruction (He et al. 2017b). Pruned models such as "VGG-16-5x" are defined in He et al. (2017b). Similar to TAB2, we compare the relative accuracy drop from unpruned large models. Network Slimming (BID4) imposes L1-sparsity on channel-wise scaling factors from Batch Normalization layers (BID7) during training, and prunes channels with lower scaling factors afterward. Since the channel scaling factors are compared across layers, this method produces automatically discovered target architectures. As shown in Table 4, for all networks, the small models trained from scratch can reach the same accuracy as the fine-tuned models. More specifically, we found that Scratch-B consistently outperforms the fine-tuned models (8 out of 10 experiments), while Scratch-E is slightly worse but still mostly within one standard deviation. Table 4: Results (accuracy) for Network Slimming (BID4). "Prune ratio" stands for the total percentage of channels that are pruned in the whole network. The same ratios as in the original paper are used for each model. Sparse Structure Selection (BID6) also uses sparsified scaling factors to prune structures, and can be seen as a generalization of Network Slimming. Other than channels, pruning can be applied to residual blocks in ResNet or groups in ResNeXt. We examine residual block pruning, where ResNet-50 is pruned to ResNet-41, ResNet-32 and ResNet-26. Table 5 shows our results. On average Scratch-E outperforms pruned models, and for all models Scratch-B is better than both. Table 5: Results (accuracy) for residual block pruning using Sparse Structure Selection (BID6). In the original paper no fine-tuning is required, so there is a "Pruned" column instead of "Fine-tuned" as before.
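A minimal PyTorch sketch of the L1-norm criterion (our illustration, not the original implementation): each convolution filter is scored by the L1 norm of its weights and the smallest fraction is dropped.

```python
import torch
import torch.nn as nn

def l1_filter_mask(conv: nn.Conv2d, prune_ratio: float) -> torch.Tensor:
    # One L1 norm per output filter: sum |w| over (in_channels, kH, kW).
    norms = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    n_prune = int(prune_ratio * norms.numel())
    keep = torch.ones_like(norms, dtype=torch.bool)
    if n_prune > 0:
        drop = torch.topk(norms, n_prune, largest=False).indices
        keep[drop] = False
    return keep  # True for filters to keep; used to build the smaller layer

conv = nn.Conv2d(3, 64, 3)
print(l1_filter_mask(conv, 0.5).sum().item())  # 32 filters kept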
Unstructured magnitude-based weight pruning can also be treated as automatically discovering architectures, since the positions of the exact zeros cannot be determined before training, but we highlight its differences from structured pruning in this separate subsection. Because all the network architectures we evaluated are fully-convolutional (except for the last fully-connected layer), for simplicity, we only prune weights in convolution layers here. Before training the pruned sparse model from scratch, we re-scale the standard deviation of the Gaussian distribution for weight initialization based on how many non-zero weights remain in each layer (a sketch of the mask and rescaled initialization follows at the end of this subsection). This is to keep a constant scale of the backward gradient signal, which however in our observations does not bring gains compared with the unscaled counterpart. Table 6: Results (accuracy) for unstructured pruning. "Prune Ratio" denotes the percentage of parameters pruned in the set of all convolutional weights. As shown in Table 6, on the smaller-scale CIFAR datasets, when the prune ratio is small (≤ 80%), Scratch-E sometimes falls short of the fine-tuned results, but Scratch-B is able to perform at least on par with the latter. However, we observe that in some cases, when the prune ratio is large (95%), fine-tuning can outperform training from scratch. On the large-scale ImageNet dataset, we note that Scratch-B is mostly worse than the fine-tuned result by a noticeable margin, despite reaching a decent accuracy level. This could be due to the increased difficulty of directly training highly sparse networks (CIFAR), or the scale/complexity of the dataset itself (ImageNet). Another possible reason is that compared with structured pruning, unstructured pruning significantly changes the weight distribution (more details in Appendix G). The difference in scratch-training behaviors also suggests an important difference between structured and unstructured pruning. While we have shown that, for structured pruning, the inherited weights in the pruned architecture are not better than random, the pruned architecture itself turns out to be what brings the efficiency benefits. In this section, we assess the value of architecture search in automatic network pruning algorithms (Figure 2) by comparing pruning-obtained models and uniformly pruned models. Note that the connection between network pruning and architecture learning has also been made in prior works (BID4), but to our knowledge we are the first to isolate the effect of inheriting weights and solely compare pruning-obtained architectures with uniformly pruned ones, by training both of them from scratch. Parameter Efficiency of Pruned Architectures. In FIG0 (left), we compare the parameter efficiency of architectures obtained by an automatic channel pruning method (Network Slimming, BID4) with a naive predefined pruning strategy that uniformly prunes the same percentage of channels in each layer. All architectures are trained from random initialization for the same number of epochs. We see that the architectures obtained by Network Slimming are more parameter efficient, as they can achieve the same level of accuracy using 5× fewer parameters than uniformly pruned architectures. For unstructured magnitude-based pruning, we conducted a similar experiment shown in FIG0 (right). Here we uniformly sparsify all individual weights at a fixed probability, and the architectures obtained this way are much less efficient than the pruned architectures.
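The following PyTorch sketch (our own; the rescaling rule is an assumed variant of standard He initialization restricted to surviving weights) illustrates the magnitude-based mask and the standard-deviation rescaling described above.

```python
import torch
import torch.nn as nn

def magnitude_mask(weight: torch.Tensor, prune_ratio: float) -> torch.Tensor:
    flat = weight.detach().abs().flatten()
    k = int(prune_ratio * flat.numel())
    threshold = flat.kthvalue(k).values if k > 0 else flat.new_tensor(-1.0)
    return weight.detach().abs() > threshold

def sparse_init_(weight: torch.Tensor, mask: torch.Tensor):
    fan_in = weight[0].numel() * mask.float().mean().item()  # effective fan-in
    std = (2.0 / max(fan_in, 1.0)) ** 0.5                    # He-style std
    with torch.no_grad():
        weight.normal_(0.0, std).mul_(mask)                  # zero pruned weights

conv = nn.Conv2d(64, 64, 3)
m = magnitude_mask(conv.weight, 0.8)
sparse_init_(conv.weight, m)
```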
Figure 4: The average sparsity pattern of all 3×3 convolutional kernels in certain layer stages in an unstructured-pruned VGG-16. Darker color means a higher probability of the weight being kept. We also found that the channel/weight pruned architectures exhibit very consistent patterns (see TAB8 and Figure 4). This suggests the original large models may be redundantly designed for the task, and the pruning algorithm can help us improve the efficiency. This also confirms the value of automatic pruning methods for searching efficient models on the architectures evaluated. More Analysis. However, there also exist cases where the architectures obtained by pruning are not better than uniformly pruned ones. We present such results in FIG1, where the architectures obtained by pruning (blue) are not significantly more efficient than uniformly pruned architectures (red). This phenomenon happens more often on modern architectures like ResNets and DenseNets. When we investigate the sparsity patterns of those pruned architectures (shown in TAB13, 19 and 20 in Appendix H), we find that they exhibit near-uniform sparsity patterns across stages, which might be the reason why they can only perform on par with uniform pruning. In contrast, for VGG, the pruned sparsity patterns can always beat the uniform ones, as shown in FIG0 and Figure 6. We also show the sparsity patterns of VGG pruned by Network Slimming (BID4) in TAB2 of Appendix H, and they are rather far from uniform. Compared to ResNet and DenseNet, we can see that VGG's redundancy is rather imbalanced across layer stages. Network pruning techniques may help us identify the redundancy better in such cases. Generalizable Design Principles from Pruned Architectures. Given that the automatically discovered architectures tend to be parameter efficient on the VGG networks, one may wonder: can we derive generalizable principles from them on how to design a better architecture? We conduct several experiments to answer this question. For Network Slimming, we use the average number of channels in each layer stage (layers with the same feature map size) from the pruned architectures to construct a new set of architectures; we call this approach "Guided Pruning". For magnitude-based pruning, we analyze the sparsity patterns (Figure 4) in the pruned architectures, and apply them to construct a new set of sparse models, which we call "Guided Sparsification". The results are shown in Figure 6. It can be seen that for both Network Slimming and magnitude-based pruning, the guided designs perform on par with the architectures obtained directly by pruning. Interestingly, these guided design patterns can sometimes be transferred to a different VGG-variant and/or dataset. In Figure 6, we distill the patterns of pruned architectures from VGG-16 on CIFAR-10 and apply them to design efficient VGG-19 on CIFAR-100. These sets of architectures are denoted as "Transferred Guided Pruning/Sparsification". We can observe that they (brown) may sometimes be slightly worse than the architectures directly pruned (blue), but are significantly better than uniform pruning/sparsification (red). In these cases, one does not need to train a large model to obtain an efficient model, as the transferred design patterns can help us achieve the efficiency directly. Discussions with Conventional Architecture Search Methods. Popular techniques for network architecture search include reinforcement learning and evolutionary algorithms (BID16). In each iteration, a randomly initialized network is trained and evaluated to guide the search, and the search process usually requires thousands of iterations to find the target architecture.
In contrast, using network pruning as architecture search only requires a one-pass training; however, the search space is restricted to the set of all "sub-networks" inside a large network, whereas traditional methods can search for more variations, e.g., activation functions or different layer orders. Figure 6: Pruned architectures obtained by different approaches, all trained from scratch, averaged over 5 runs. "Guided Pruning/Sparsification" means using the average sparsity patterns in each layer stage to design the network; "Transferred Guided Pruning/Sparsification" means using the sparsity patterns obtained by a pruned VGG-16 on CIFAR-10 to design the network for VGG-19 on CIFAR-100. Following the design guidelines provided by the pruned architectures, we achieve better parameter efficiency, even when the guidelines are transferred from another dataset and model. Related work uses a similar pruning technique to Network Slimming (BID4) to automate the design of network architectures, and BID1 prunes channels using reinforcement learning to automatically compress the architecture. On the other hand, in the network architecture search literature, sharing/inheriting trained parameters during searching has become a popular approach for reducing the training budget, but once the target architecture is found, it is still trained from scratch to maximize the accuracy. Our results encourage more careful and fair baseline evaluations of structured pruning methods. In addition to high accuracy, training predefined target models from scratch has the following benefits over conventional network pruning procedures: a) since the model is smaller, we can train it using less GPU memory and possibly faster than training the original large model; b) there is no need to implement the pruning criterion and procedure, which sometimes requires fine-tuning layer by layer and/or needs to be customized for different network architectures (BID14; BID4); c) we avoid tuning additional hyper-parameters involved in the pruning procedure. Our results do support the viewpoint that automatic structured pruning finds efficient architectures in some cases. However, if the accuracy of pruning and fine-tuning is achievable by training the pruned model from scratch, it is also important to evaluate the pruned architectures against uniformly pruned baselines (both trained from scratch), to demonstrate the method's value in identifying efficient architectures. If the uniformly pruned models are not worse, one could also skip the pipeline and train them from scratch. Even if pruning and fine-tuning fails to outperform the mentioned baselines in terms of accuracy, there are still some cases where using this conventional wisdom can be much faster than training from scratch: a) when a pre-trained large model is already given and little or no training budget is available; we also note that pre-trained models can only be used when the method does not require modifications to the large model training process; b) when there is a need to obtain multiple models of different sizes, or one does not know what the desirable size is, in which situations one can train a large model and then prune it by different ratios. The Lottery Ticket Hypothesis conjectures that inside the large network, a sub-network together with its initialization makes the training particularly effective, and together they are termed the "winning ticket".
In this hypothesis, the original initialization of the sub-network (before large model training) is needed for it to achieve competitive performance when trained in isolation. Their experiments show that training the sub-network with randomly re-initialized weights performs worse than training it with the original initialization inside the large network. In contrast, our work does not require reuse of the original initialization of the pruned model, and shows that random initialization is enough for the pruned model to achieve competitive performance. The results seem to be contradictory, but there are several important differences in the evaluation settings: a) our main conclusion is drawn on structured pruning methods, although for small-scale problems (CIFAR) it also holds for unstructured pruning; the Lottery Ticket work only evaluates unstructured pruning; b) our evaluated network architectures are all relatively large modern models used in the original pruning methods, while most of the experiments in the Lottery Ticket work use small shallow networks (< 6 layers); c) we use momentum SGD with a large initial learning rate (0.1), which is widely used in prior image classification (BID4) and pruning works (BID14; BID4; BID6) to achieve high accuracy and is the de facto default optimization setting on CIFAR and ImageNet, while the Lottery Ticket experiments mostly use a smaller initial learning rate. FIG3: Results with two initial learning rates, 0.1 and 0.01, on the CIFAR-10 dataset; each point is averaged over 5 runs. Using the winning ticket as initialization only brings improvement when the learning rate is small (0.01); however, such a small learning rate leads to a lower accuracy than the widely used large learning rate (0.1). In this section, we show that the difference in learning rate is what causes the seemingly contradicting behaviors between our work and the Lottery Ticket Hypothesis, in the case of unstructured pruning on CIFAR. For structured pruning, when using both large and small learning rates, the winning ticket does not outperform random initialization. We test the Lottery Ticket Hypothesis by comparing models trained with the original initialization ("winning ticket") and models trained from randomly re-initialized weights; a minimal sketch of this comparison is given after this section. We experiment with two choices of initial learning rate (0.1 and 0.01) with a stepwise decay schedule, using momentum SGD. 0.1 is used in our previous experiments and in most prior works on CIFAR and ImageNet. Following the Lottery Ticket methodology, we investigate both iterative pruning (prune 20% in each iteration) and one-shot pruning for unstructured pruning. We show our results for unstructured pruning in FIG3 and TAB13, and for L1-norm based filter pruning (BID14) in Table 9. TAB13: Results with two initial learning rates: 0.1 and 0.01. The same results are visualized in FIG3. Using the winning ticket as initialization only brings improvement when the learning rate is small (0.01); however, such a small learning rate leads to a lower accuracy than the widely used large learning rate (0.1). From FIG3 and TAB13, we see that for unstructured pruning, using the original initialization as in the Lottery Ticket Hypothesis only provides an advantage over random initialization with the small initial learning rate 0.01. For structured pruning as in BID14, it can be seen from Table 9 that using the original initialization is only on par with random initialization for both large and small initial learning rates. In both cases, we can see that the small learning rate gives lower accuracy than the widely-used large learning rate.
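The comparison protocol can be summarized in a short sketch (ours, not the original code; `train` is an assumed training callback, and the magnitude mask is the same idea as in the earlier unstructured pruning sketch).

```python
import copy
import torch

def _mask(p, ratio):
    """Keep-mask for the largest-magnitude (1 - ratio) fraction of weights."""
    k = max(int(ratio * p.numel()), 1)
    thr = p.detach().abs().flatten().kthvalue(k).values
    return (p.detach().abs() > thr).float()

def lottery_ticket_experiment(model, prune_ratio, train):
    init_state = copy.deepcopy(model.state_dict())    # original initialization
    train(model)                                      # train the large model
    masks = {n: _mask(p, prune_ratio)                 # one mask per weight tensor
             for n, p in model.named_parameters() if p.dim() > 1}
    winning = copy.deepcopy(model)
    winning.load_state_dict(init_state)               # reset to "winning ticket"
    reinit = copy.deepcopy(model)
    for m in reinit.modules():                        # fresh random re-init
        if hasattr(m, "reset_parameters"):
            m.reset_parameters()
    for net in (winning, reinit):                     # same sparsity, retrained
        with torch.no_grad():
            for n, p in net.named_parameters():
                if n in masks:
                    p.mul_(masks[n])
        train(net)
    return winning, reinit
```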
To summarize, in our evaluated settings, the winning ticket only brings improvement in the case of unstructured pruning with a small initial learning rate, but this small learning rate yields inferior accuracy compared with the widely-used large learning rate. Note that the Lottery Ticket authors also report, in their Section 5, that the winning ticket cannot be found on ResNet-18/VGG using the large learning rate. The reason why the original initialization is helpful when the learning rate is small might be that the weights of the final trained model are not too far from the original initialization, due to the small parameter update step size. Table 9: Experiments on the Lottery Ticket Hypothesis with a structured pruning method (L1-norm based filter pruning, BID14) and two initial learning rates: 0.1 and 0.01. In both cases, using winning tickets does not bring improvement in accuracy. Soft Filter Pruning (SFP) (He et al. 2018a) prunes filters every epoch during training but also keeps updating the pruned filters, i.e., the pruned weights have a chance to be recovered. In the original paper, SFP can run either on a randomly initialized model or on a pretrained model. It falls into the category of predefined methods (Figure 2). TAB16 shows our results without using pretrained models, and TAB17 shows the results with a pretrained model. We use the authors' code for obtaining the results. It can be seen that Scratch-E outperforms pruned models most of the time, and Scratch-B outperforms pruned models in nearly all cases. Therefore, our conclusion also holds for this method. TAB17: Results for SFP (He et al. 2018a) using pretrained models. We have shown that for structured pruning the small pruned model can be trained from scratch to match the accuracy of the fine-tuned model in classification tasks. To see whether this phenomenon also holds for transfer learning to other vision tasks, we evaluate the L1-norm based filter pruning method (BID14) on the PASCAL VOC object detection task, using Faster-RCNN. Object detection frameworks usually require transferring model weights pre-trained on ImageNet classification, and one can perform pruning either before or after the weight transfer. More specifically, the former could be described as "train on classification, prune on classification, fine-tune on classification, transfer to detection", while the latter is "train on classification, transfer to detection, prune on detection, fine-tune on detection". We call these two approaches Prune-C (classification) and Prune-D (detection) respectively, and report the results in TAB2. With a slight abuse of notation, here Scratch-E/B denotes "train the small model on classification, transfer to detection", and is different from the setup of detection without ImageNet pre-training as in prior work. TAB2: Results (mAP) for pruning on the detection task. The pruned models are chosen from BID14. Prune-C refers to pruning on classification pre-trained weights; Prune-D refers to pruning after the weights are transferred to the detection task. Scratch-E/B means pre-training the pruned model from scratch on classification and transferring to detection. For this experiment, we adopt publicly available code and default hyper-parameters, and use the PASCAL VOC 07 trainval/test sets as our training/test sets. For backbone networks, we evaluate ResNet-34-A and ResNet-34-B from L1-norm based filter pruning (BID14), which are pruned from ResNet-34. TAB2 shows our results, and we can see that the model trained from scratch can surpass the performance of fine-tuned models under the transfer setting.
Another interesting observation from TAB2 is that Prune-C is able to outperform Prune-D, which is surprising since, if our goal task is detection, directly pruning away weights that are considered unimportant for detection should presumably be better than pruning on the pre-trained classification models. We hypothesize that this might be because pruning early in the classification stage makes the final model less prone to being trapped in a bad local minimum caused by inheriting weights from the large model. This is in line with our observation that Scratch-E/B, which trains the small models from scratch starting even earlier, at the classification stage, can achieve further performance improvement. It would be interesting to see whether our observation still holds if the model is very aggressively pruned, since it might not have enough capacity to be trained from scratch and achieve decent accuracy. Here we provide results using Network Slimming (BID4) and L1-norm based filter pruning (BID14). From TAB3, it can be seen that when the prune ratio is large, training from scratch is better than the fine-tuned models by an even larger margin, and in this case the fine-tuned models are significantly worse than the unpruned models. Note that in TAB2, the VGG-Tiny model we evaluated for ThiNet is also a very aggressively pruned model (FLOPs reduced by 15× and parameters reduced by 100×). TAB3: Results for Network Slimming (BID4) when the models are aggressively pruned. "Prune ratio" stands for the total percentage of channels that are pruned in the whole network. Larger ratios are used than in the original paper of BID4. Generally, pruning algorithms use fewer epochs for fine-tuning than for training the large model (BID14). For example, L1-norm based filter pruning (BID14) uses 164 epochs for training on the CIFAR-10 dataset, and only fine-tunes the pruned networks for 40 epochs. This is because small learning rates are mostly used for fine-tuning to better preserve the weights from the large model. Here we experiment with fine-tuning for more epochs (e.g., for the same number of epochs as Scratch-E) and show that it does not bring noticeable performance improvement. Table 16: "Fine-tune-40" stands for fine-tuning for 40 epochs, and so on. Scratch-E models are trained for 160 epochs. We observe that fine-tuning for more epochs does not help improve the accuracy much, and models trained from scratch can still perform on par with fine-tuned models. We use L1-norm filter pruning (BID14) for this experiment. Table 16 shows our results with different numbers of epochs for fine-tuning. It can be seen that fine-tuning for more epochs gives a negligible accuracy increase and sometimes a small decrease, and Scratch-E models are still on par with models fine-tuned for enough epochs. In our experiments, we use the standard training schedule for both CIFAR (160 epochs) and ImageNet (90 epochs). Here we show that our observation still holds after we extend the standard training schedule. We use L1-norm based filter pruning (BID14) for this experiment. TAB8 shows our results when we extend the standard training schedule of CIFAR from 160 to 300 epochs. We observe that scratch-trained models still perform at least on par with fine-tuned models. TAB8: Results for L1-norm based filter pruning (BID14) when the training schedule of the large model is extended from 160 to 300 epochs. Figure 8: Weight distribution of convolutional layers for different pruning methods. We use VGG-16 and CIFAR-10 for this visualization. We compare the weight distribution of unpruned models, fine-tuned models and scratch-trained models.
Top: results for Network Slimming (BID4). Bottom: results for unstructured pruning. Accompanying the discussion in subsection 4.3, we show the weight distribution of unpruned large models, fine-tuned pruned models and scratch-trained pruned models, for two pruning methods: (structured) Network Slimming (BID4) and unstructured pruning. We choose VGG-16 and CIFAR-10 for visualization and compare the weight distribution of unpruned models, fine-tuned models and scratch-trained models. For Network Slimming, the prune ratio is 50%. For unstructured pruning, the prune ratio is 80%. Figure 8 shows our results. We can see that the weight distributions of fine-tuned models and scratch-trained pruned models are different from those of the unpruned large models: the weights that are close to zero are much fewer. This seems to imply that there are fewer redundant structures in the found pruned architecture, and supports the viewpoint of architecture learning for automatic pruning methods. For unstructured pruning, the fine-tuned model also has a significantly different weight distribution from the scratch-trained model: it has nearly no close-to-zero values. This might be a potential reason why sometimes models trained from scratch cannot achieve the accuracy of the fine-tuned models, as shown in Table 6. In this section we provide additional results on sparsity patterns for the pruned models, accompanying the discussions of "More Analysis" in Section 5. TAB13: Sparsity patterns of PreResNet-164 pruned on CIFAR-10 by Network Slimming, shown in FIG1 (left), under different prune ratios. The top row denotes the total prune ratio. The values denote the ratio of channels to be kept. We can observe that for a certain prune ratio, the sparsity patterns are close to uniform (across stages). Table 19: Average sparsity patterns of 3×3 kernels of PreResNet-110 pruned on CIFAR-100 by unstructured pruning, shown in FIG1 (middle), under different prune ratios. The top row denotes the total prune ratio. The values denote the ratio of weights to be kept. We can observe that for a certain prune ratio, the sparsity patterns are close to uniform (across stages). TAB2: Average sparsity patterns of 3×3 kernels of DenseNet-40 pruned on CIFAR-100 by unstructured pruning, shown in FIG1 (right), under different prune ratios. The top row denotes the total prune ratio. The values denote the ratio of weights to be kept. We can observe that for a certain prune ratio, the sparsity patterns are close to uniform (across stages). TAB2: Sparsity patterns of VGG-16 pruned on CIFAR-10 by Network Slimming, shown in FIG0 (left), under different prune ratios. The top row denotes the total prune ratio. The values denote the ratio of channels to be kept. For each prune ratio, the later stages tend to have more redundancy than earlier stages.
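For the "average sparsity pattern" tables above, a small numpy sketch (assumed data layout, our own illustration) shows how the per-stage 3×3 keep-probability maps can be computed from the binary pruning masks.

```python
import numpy as np

def stage_pattern(masks):
    """masks: list of boolean arrays shaped (out_ch, in_ch, 3, 3) for one stage."""
    kernels = np.concatenate([m.reshape(-1, 3, 3) for m in masks], axis=0)
    return kernels.mean(axis=0)   # 3x3 map: probability each position is kept

# Synthetic example: three layers in one stage, ~20% of weights kept.
rng = np.random.default_rng(0)
masks = [rng.random((64, 64, 3, 3)) > 0.8 for _ in range(3)]
print(stage_pattern(masks).round(2))
```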
rJlnB3C5Ym
In structured network pruning, fine-tuning a pruned model only gives comparable performance with training it from scratch.
Brushing techniques have a long history, with the first interactive selection tools appearing in the 1990s. Since then, many additional techniques have been developed to address selection accuracy, scalability and flexibility issues. Selection is especially difficult in large datasets where many visual items tangle and overlap. This paper investigates a novel brushing technique which relies not only on the actual brushing location but also on the shape of the brushed area. Firstly, the user brushes the region where trajectories of interest are visible. Secondly, the shape of the brushed area is used to select similar items. Thirdly, the user can adjust the degree of similarity to filter out the requested trajectories. This technique encompasses two types of comparison metrics: the piece-wise Pearson correlation and a similarity measurement based on information geometry. We apply it to concrete scenarios with datasets from air traffic control, eye-tracking data and GPS trajectories. Figure 1: The main rationale of our shape-based brushing technique. A) An unselected trail set where the user wishes to select the only curved one. B) A standard brushing technique, where brushing the curved trail also selects every other trail which touches the brushing area. C) Our method uses the brushing input for comparison with the brushed trajectories, and only the trajectories similar in shape are selected. Such techniques help to visually select items of interest with interactive paradigms (e.g., lasso, boxes, brush) in a view. When the user visually detects a relevant pattern (e.g., a specific curve or a trend), the brushing technique can then be applied to select it. While this selection can be seamlessly performed, the user may still face issues when the view becomes cluttered with many tangled items. In such dense visualizations, existing brushing techniques also select items in the vicinity of the target and thus capture part of the clutter (see Figure 1). To address this issue, the user can adjust the brushing parameters by changing the brush size or the selection box locations. However, this may take time and require many iterations or trials. This paper proposes a novel brushing technique that filters trajectories taking into account the shape of the brush in addition to the brush area. This dual input is provided at the same time and opens novel opportunities for brushing techniques. The cornerstone of such a technique is a shape comparison algorithm. This algorithm must provide a numerical similarity measurement which is ordered (low values for unrelated shapes and high values for correlated shapes), continuous (no steps in the computed metric) and carries semantics, so that the user can partially understand the logic behind the similarity measurement. Thus, to build such a dual filtering technique, the following design requirements (DR) must be fulfilled: • DR3: The technique enables incremental selection refinement. • DR4: The technique is interactive. Taking into account the identified requirements (DR1-DR4), this paper presents a novel shape-based brushing tool. To the best of our knowledge, such a combination of brushing and shape comparison techniques has not yet been explored in trajectory analysis, and this paper fills this gap. In the remainder of the paper, the following subjects are presented. First, previous works in the domain of brushing and shape comparison are provided.
Second, the brushing pipeline is detailed with explanations of the comparison metrics and data processing. Next, the use of the technique is demonstrated through different use cases. The brushing technique is then discussed in terms of usefulness, specific applications and possible limitations. Finally, the paper concludes with a summary of our contribution and provides future research directions.

There are various domain-specific techniques targeting trail exploration and analysis. In this section, we explore three major components of selection techniques for trail-set exploration and analysis relevant to our work: brushing, query-by-content, and similarity measurement. Trail-set exploration relies on pattern discovery where relevant trails need to be selected for further analysis. Brushing is a selection technique for information visualization, where the user interactively highlights a subset of the data by defining an area of interest. This technique has been shown to be a powerful and generic interaction technique for information retrieval. The selection can be further refined using interactive filtering techniques. The approach presented in this paper is based on dynamic queries and direct manipulation.

Spatio-temporal data, and trajectory data in particular, are complex to visualize because of their 3D and time-varying nature. For this reason, several systems and frameworks have been specifically designed to visualize them. Most of these systems include selection techniques based on brushing, and some of them enable further query refinement through Boolean operations. These techniques do not take into account the shape of the trails, so selecting a specific trail with a particular shape requires many manipulations and iterations to fine-tune the selection.

While this paper proposes a shape-based brushing technique for trail sets, researchers have explored shape-based selection techniques in different contexts, both using arbitrary shapes and sketch-based queries. Sketch-based querying presents several advantages over traditional selection. It has been used for volumetric datasets and for neural pathway selection. This last work is the closest to the current study. However, the authors presented a domain-specific application and based their algorithm on the Euclidean distance. This is not a robust metric for similarity detection, since it is hard to define a value indicating high similarity, and such a value varies greatly with the domain and the data considered. In addition, this metric supports neither direction and orientation matching nor the combination of brushing with filtering.

User-sketched pattern matching also plays an important role in searching and localizing time-series patterns of interest. For example, Holz and Feiner defined a relaxed selection technique in which the user draws a query to select the relevant part of a displayed time series. Correll et al. propose a sketch-based query system to retrieve time series using dynamic time warping, mean square error or the Hough transform. They present all matches individually in a small multiple, arranged according to the similarity measurement. These techniques, like the one proposed here, also take advantage of sketches to manipulate data. However, they are designed for querying rather than selecting, and specifically for 2D data.
Other approaches use boxes and spheres to specify regions of interest, and the desired trails are obtained if they intersect these regions. However, many parameters must be changed by the analyst to achieve even a single selection: the regions of interest must be re-scaled appropriately and re-positioned back and forth multiple times for each operation. Additionally, many selection box modifications are required to refine the selection, which hinders selection efficiency.

Given a set of trajectories, we are interested in retrieving the subset of trajectories most similar to a user-sketched query. Common approaches include selecting the K-nearest neighbors (KNN) based on the Euclidean distance (ED) or on elastic matching metrics (e.g., Dynamic Time Warping, DTW). Affinity cues have also been used to group objects; for example, objects of identical color are given a high similarity coefficient for the color affinity. The Euclidean distance is the simplest to calculate but, unlike mathematical similarity measurements which are usually bounded between 0 and 1 or -1 and 1, ED is unbounded and task-specific. A number of works have suggested transforming the raw data into a lower-dimensional representation (e.g., SAX, PAA). However, these require adjusting many abstract parameters which are dataset-dependent, which reduces their flexibility. Prior work has also presented global and local proximity measures for 2D sketches. The second measure is used for similarity detection where an object is contained within another one, and is not relevant to this work. While the first measure refers to the distance between two objects (mostly circles and lines), there is no guarantee that the approach generalizes to large datasets such as eye-tracking, GPS or aircraft trajectories. In contrast, Dynamic Time Warping has been considered the best measurement in various applications to select shapes by matching their representation. It has been used in gesture recognition, eye movements and shapes. An overview of existing metrics is available in the literature. The k-nearest neighbor (KNN) approach has also long been studied for trail similarity detection. However, using this metric, two trails may yield a good match (i.e., a small difference measure as above) even if they have very different shapes. Other measurements for trajectory segment similarity are the Minimum Bounding Rectangles (MBR) or the Fréchet distance, which leverage the perpendicular distance, the parallel distance and the angular distance to compute the distance between two trajectories.

In order to address the aforementioned issues, we propose and investigate two different approaches. The first approach directly calculates correlations on the x-axis and y-axis independently between the shape of the brush and the trails (Section 3.1.1). The second approach (Section 3.1.2) is based on the geometrical information of the trails, i.e., the trails are transformed into a new space (using eigenvectors of the covariance matrix) which is more suitable for similarity detection. Our approach leverages the potential of these two metrics to foster efficient shape-based brushing for large cluttered datasets. As such, it allows detailed motif discovery to be performed interactively.

This section presents the interactive pipeline (Figure 2) which fulfills the identified design requirements (DR1-DR4).
As in any interactive system, user input plays the main role and operates at every stage of the data processing. First, the input data (i.e. a trail set) is given. Next, the user performs a brush, from which the pipeline extracts the brushed items, the brush area and its shape. Then, a comparison metric is computed between every brushed item and the shape of the brush (similarity measurement). After the binning process and its filtering, the resulting data is presented to the user. The user can then refine the brush area and choose another comparison metric until the desired items are selected.

As previously detailed in the related work section, many comparison metrics exist. While our pipeline can use any metric that fulfills design requirement DR2 (a continuous, ordered and meaningful comparison), the presented pipeline contains two complementary algorithms: Pearson correlation and FPCA. The first compares shapes through the correlation between their representative vertices, while the latter compares curvature. As shown in Figure 3, each metric produces different results. The user can use either of them depending on the type of filtering to be performed.

During the initial development of this technique, we first considered using the Euclidean distance (ED) and DTW, but we rapidly observed their limitations, and we argue that PC and FPCA are more suitable for trajectory datasets. First, PC values are easier to threshold: a PC value > 0.8 provides a clear indication of the similarity of two shapes. Moreover, to accurately discriminate between complex trajectories, we need to go beyond the performance of ED. Furthermore, the direction of the trajectories, which is essential for our brushing technique, is not supported by the ED and DTW similarity measures. Another disadvantage of ED is its domain- and task-specific threshold, which can vary drastically depending on the context. PC, which we use in our approach, relies on the same threshold independently of the type of dataset. The two following sections detail the two proposed algorithms.

Pearson's Correlation (PC) is a statistical tool that measures the correlation between two datasets and produces a continuous measurement in [-1, 1], with 1 indicating a high degree of similarity and -1 an anti-correlation indicating an opposite trend. This metric is well suited (DR2) for measuring dataset similarity (i.e. similarity between trajectory points). Pearson's correlation between two trails T_i and T_j on the x-axis can be defined as follows:

PC_x(T_i, T_j) = \frac{E\big[(T_{i,x} - \bar{T}_{i,x})(T_{j,x} - \bar{T}_{j,x})\big]}{\sigma_{T_{i,x}} \, \sigma_{T_{j,x}}}

where \bar{T}_{i,x} and \bar{T}_{j,x} are the means, E the expectation, and \sigma_{T_{i,x}}, \sigma_{T_{j,x}} the standard deviations. For two-dimensional points, the correlation is computed on both the x-axis and the y-axis. This metric is invariant to point translation and trajectory scale, but it does not take into account the order of points along a trajectory. Therefore, the pipeline also considers the FPCA metric, which is more appropriate to trajectory shape but does not take into account negative correlation.
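Before turning to FPCA, a minimal NumPy sketch of this piece-wise correlation is given below. It assumes both curves have already been resampled to the same number of points; the function name and the choice of averaging the x and y correlations into a single score are our illustrative assumptions, not necessarily the authors' exact implementation.

```python
import numpy as np

def pearson_similarity(trail, shape):
    """Mean of per-axis Pearson correlations between two resampled curves.

    Both arguments are (n, 2) arrays; the result lies in [-1, 1], with 1 for
    strongly correlated shapes and -1 for opposite-direction (anti-correlated) ones.
    """
    scores = []
    for axis in (0, 1):                      # x-axis, then y-axis
        a, b = trail[:, axis], shape[:, axis]
        sa, sb = a.std(), b.std()
        if sa == 0.0 or sb == 0.0:           # degenerate, axis-aligned segment
            scores.append(0.0)
        else:
            scores.append(((a - a.mean()) * (b - b.mean())).mean() / (sa * sb))
    return float(np.mean(scores))
```

A score above a fixed threshold (e.g. 0.8, as suggested above) would then flag a trail as similar to the brush shape, independently of the dataset.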
Functional Data Analysis is a well-known information geometry approach that captures the statistical properties of multivariate data functions, such as curves modeled as points in an infinite-dimensional space (usually the L^2 space of square-integrable functions). Functional Principal Component Analysis (FPCA) computes the data variability around the mean curve of a cluster while estimating the Karhunen-Loève expansion scores. A simple analogy can be drawn with the Principal Component Analysis (PCA) algorithm, where eigenvectors and their eigenvalues are computed: FPCA performs the same operations with eigenfunctions (piecewise splines) and their principal component scores to model the statistical properties of a considered cluster:

\gamma_i(t) = \bar{\gamma}(t) + \sum_{j \ge 1} b_{i,j} \, \varphi_j(t)

where the b_{i,j} are real-valued random variables called principal component scores, and the \varphi_j are the principal component functions, which obey the orthonormality condition \int \varphi_j(t) \varphi_k(t) \, dt = \delta_{jk}; they are the (vector-valued) eigenfunctions of the covariance operator with eigenvalues \lambda_j. We refer the prospective reader to the work of Hurter et al. for a discrete implementation. With this model, knowing the mean curve \bar{\gamma} and the principal component functions \varphi_j, a group of curves can be described and reconstructed (inverse FPCA) with the matrix of the principal component scores b_{i,j} of each curve. Usually, a finite vector (with fixed dimension d) of b_j scores is selected such that the explained variance is more than a defined percentile.

Figure 2. The interaction pipeline that filters items according to the user's brush input. The pipeline extracts the brushed items but also the shape of the brush, which is then compared with the brushed items (metrics stage). Brushed items are stored in bins displayed as small multiples, where the user can interactively refine the selection (binning and filtering stage). Finally, the user can adjust the selection with additional brushing interactions. Note that both PC and FPCA can be used for the binning process, but separately: each small multiple comes from one metric exclusively.

Figure 3. An example of the two shape-comparison metrics. The user brushed around curve 3 and thus also selected curves 1 and 2. Thanks to the Pearson computation, the associated small multiples show that only curve 2 is correlated to the shape of the brush. Curve 3 is anti-correlated since it goes in a direction opposite to the shape of the brush. The FPCA computation does not take direction into account but rather curvature similarity; as such, only curve 3 is considered highly similar to the brush shape input.

To compute a continuous and meaningful metric (DR2), the metric computation uses the two first principal components (PCs) to define the representative point of a considered trajectory. The metric is then computed as the Euclidean distance between the shape of the brush and each brushed trajectory in the Cartesian PC1/PC2 scatterplot (Figure 4). Each distance is then normalized between 0 and 1, with 1 corresponding to the largest difference in shape between the brush shape and the corresponding trajectory.

Taking into account the computed comparison metrics, the pipeline stores the resulting values into bins. Items can then be sorted in a continuous way from the least similar to the most similar. While the Pearson measurements lie in [-1, 1] and the FPCA distances in [0, 1], the binning process operates in the same way for both. Each bin is then used to visually show the trajectories it contains through small multiples (we use 5 small multiples, which gives a good compromise between visualization compactness and trajectory visibility). The user can then interactively filter the selected items (DR4) with a range slider on top of the small-multiple visualization. The user is thus able to decide whether to remove uncorrelated items or to refine the correlated ones with more restrictive criteria (DR3).
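To make the PC1/PC2 distance concrete, the sketch below substitutes ordinary PCA on resampled, flattened curves for the paper's full spline-based FPCA; the function name, the use of NumPy's SVD, and the max-normalization are our illustrative assumptions.

```python
import numpy as np

def fpca_like_similarity(trails, shape):
    """Distance-based similarity in a PC1/PC2 space, as a stand-in for FPCA.

    `trails` is a list of (n, 2) resampled curves and `shape` the resampled
    brush shape; every curve is flattened to a 2n-vector, projected onto the
    first two principal components, and compared to the brush by Euclidean
    distance, normalized to [0, 1] (0 = identical, 1 = most different).
    """
    X = np.stack([t.reshape(-1) for t in trails + [shape]])  # (k+1, 2n)
    X = X - X.mean(axis=0)                                    # center the data
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    proj = X @ vt[:2].T                                       # scores in PC1/PC2
    d = np.linalg.norm(proj[:-1] - proj[-1], axis=1)          # distance to the brush
    return d / d.max() if d.max() > 0 else d
```

The normalized distances can then be dropped directly into the binning stage described above.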
This technique is designed to enable flexible and rapid brushing of trajectories, by both the location and the shape of the brush. The interaction paradigm of the technique is now described and illustrated in a scenario where an air traffic management expert studies the flight data depicted in Figure 6.

Aircraft trajectories can be visually represented as connected line segments that form a path on a map. Given the flight level (altitude) of the aircraft, the trajectories can be presented in 3D and visualized by varying their appearance or changing their representation to basic geometry types. Since the visualization considers a large number of trajectories that compete for the visual space, these visualizations often present occlusion and visual clutter issues, rendering exploration difficult. Edge bundling techniques have been used to reduce clutter and occlusion, but they come at the cost of distorting the trajectory shapes, which might not always be desirable. Analysts need to explore this kind of dataset in order to perform diverse tasks. Some of these tasks compare expected aircraft trajectories with the actual trajectories. Other tasks detect unexpected patterns and carry out traffic analysis in complex areas with dense traffic. To this end, various trajectory properties such as aircraft direction, flight level and shape are examined. However, most systems only support selection techniques that rely on start and end points, or on predefined regions. We argue that the interactive shape brush technique is helpful for these kinds of tasks, as they require the visual inspection of the data, the detection of specific patterns and then their selection for further examination. As these specific patterns might differ from the rest of the data precisely because of their shape, a technique that enables their selection through this characteristic makes their manipulation easier, as detailed in the example scenario.

We consider a dataset that includes 4320 aircraft trajectories of variable lengths from one day of flight traffic over the French airspace. We define a trail T as a set of real-valued consecutive points T = [(Tx_1, Ty_1), (Tx_2, Ty_2), ..., (Tx_n, Ty_n)], where n is the number of points and (Tx_i, Ty_i) corresponds to the i-th coordinate of the trail. Figure 6 depicts an example of 4133 trails (aircraft in the French airspace). The brush shape consists of a set of real-valued consecutive points S = [(Sx_1, Sy_1), (Sx_2, Sy_2), ..., (Sx_m, Sy_m)], where m is the number of points. Note that while the length n of each trail is fixed, the length m of the shape S depends on the length of the user brush. The shape S is smoothed using a 1€ filter and then resampled to facilitate the trail comparison. The similarity metrics are then applied to subsequences of each trail of approximately the same length as the brush shape S. In order to do this, each trail is first resampled so that each pair of consecutive vertices on the trail is separated by the same distance l.

The user starts by exploring the data using pan and zoom operations. They are interested in the trajectories from the south-east of France to Paris. The user can choose whether they are looking for a subsequence match or an exact match. A subsequence match involves the detection of trajectories having a subsequence locally similar to the shape S.
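The equidistant resampling and subsequence extraction described above could look like the following sketch (linear interpolation over cumulative arc length; function names and the sliding-window formulation are our assumptions):

```python
import numpy as np

def resample_equidistant(points, l):
    """Resample a polyline so consecutive vertices are separated by distance l."""
    points = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])     # arc length at each vertex
    targets = np.arange(0.0, s[-1], l)
    x = np.interp(targets, s, points[:, 0])
    y = np.interp(targets, s, points[:, 1])
    return np.column_stack([x, y])

def subsequences(trail, window):
    """Sliding windows of `window` vertices, for subsequence matching."""
    return [trail[i:i + window] for i in range(len(trail) - window + 1)]
```

For a subsequence match, the best similarity score over all windows of a trail would then be retained.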
An exact-match comparison also takes into account the lengths of the trajectory and of the shape S in order to select a trajectory, i.e., its length must be approximately similar to the length of the shape. This option is especially useful to select a trajectory by its start and end points (e.g., finding trajectories taking off from an airport A and landing at an airport B). The exact matching is supported by analyzing the lengths of the trail and of the shape before applying the similarity metric algorithm.

The analyst in the scenario activates the subsequence match and starts brushing in the vicinity of the target trajectories, following the trajectory shape with the mouse. This defines both the brush region and the brush shape. Once the brushing has been completed, the selected trajectories are highlighted in green, as depicted in Figure 5-(b).

Figure 5. (a) The user brushes the trajectories in order to select those from the south-east of France to Paris. (b) They select the most correlated values to take into account the direction of the trails.

The similarity calculation between the shape and the brushed region produces a similarity value for each trail contained in the region, and the trails are distributed in small multiples as detailed in Section 3. Once the user has brushed, they can adjust the selection by selecting one of the bins displayed in the small multiples and using its range slider. This enables fine control over the final selection and makes the algorithm's thresholding easier to understand, as the user has control over both the granularity of the exploration and the chosen similarity level. The range slider position controls the similarity level, and its size determines the number of trajectories selected at each slider position: the smallest size selects one trajectory at a time. The range slider size and position are adjusted by direct manipulation. As the bins are equally sized, the distribution of the similarity might not be linear across the small multiples. This makes navigation easier, since the trajectory distribution in the small multiples is continuous. However, it also entails that not every bin corresponds to the same similarity value interval. To keep this information available to the user, a colored heatmap (from red to green) displays the actual distribution (see the binning sketch below).

In the current scenario, the expert, who wishes to select only the flights to Paris and not from Paris, selects the trajectories that are correlated with the original brush. These trajectories are on the right side of the small multiple, highlighted in green as depicted in Figure 5-(b). The expert is then interested in exploring the flights that land on the north landing strip but that are not coming from the east. For this, they perform a new shape brush to identify the planes that do come from the east, distinguishable by the "C" shape of their trajectories, as depicted in Figure 7. To select the geometry precisely, the expert switches to the FPCA metric using a keyboard shortcut. In this case, the small multiples arrange the trajectories from less similar to more similar. This entails that the small multiples based on FPCA also enable the selection of all the trajectories that do not match the specified shape but which are contained in the brushing region.
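A minimal sketch of the equal-count binning referenced above, including the per-bin similarity intervals that drive the red-to-green heatmap, might look as follows (the helper name and return format are ours):

```python
import numpy as np

def equal_count_bins(scores, n_bins=5):
    """Split similarity scores into equally sized bins (one per small multiple).

    Returns, for each bin, the indices of its items and its actual
    [min, max] similarity interval, e.g. to color a heatmap.
    """
    scores = np.asarray(scores)
    order = np.argsort(scores)               # least similar -> most similar
    bins = np.array_split(order, n_bins)     # equal item counts, not equal ranges
    intervals = [(scores[b].min(), scores[b].max()) for b in bins if len(b)]
    return bins, intervals
```

Because the bins hold equal item counts, sliding the range slider adds or removes trajectories one by one, while the returned intervals expose the non-uniform similarity ranges.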
As all trajectories passing through the north landing strip are contained in the brushing region, the most similar trajectories correspond to the ones that have a "C" shape in the same orientation as the brush shape, and that thus come from the east. The less similar ones are those that interest the analyst, so they can be selected by choosing the most dissimilar small multiple, as depicted in Figure 7-(b).

Figure 6. One day's aircraft trajectories in the French airspace, including taxiing, take-off, cruise, final approach and landing phases. Selecting specific trajectories using the standard brushing technique yields inaccurate results due to the large number of trajectories, occlusions, and closeness in the spatial representation.

We argue that there is a strong demand for targeted brushing to select motifs in datasets. In various domains, including aircraft trajectories, eye tracking, GPS trajectories or brain fiber analysis, there is a substantial need to discover hidden motifs in large datasets. Undoubtedly, retrieving the desired trails in such datasets helps analysts to focus on the most interesting parts of the data.

The system was built using C# and OpenTK on a 64-bit Dell XPS 15 laptop. Although both PC and FPCA provide different but valuable results, the running performance was 10 times faster with PC compared to FPCA. The technique was first tested informally with experts from the aerospace domain with more than 10 years of experience in trajectory analysis. While the collected feedback was largely positive, we observed some limitations regarding the understanding of our filtering parameters. Given the novelty of our interaction technique, users needed a small training period to better understand the semantics of our small multiples. Nevertheless, our technique provides a good initial selection without any parameter adjustment. Because the presented technique is not designed to replace standard brushing but rather to complement it, we argue that an evaluation comparing techniques that are substantially different would not capture the actual usefulness of our technique. The goal of this technique is to facilitate trajectory selection in dense areas, where standard brushing would require multiple user actions (panning, zooming, brushing). Therefore, a use-case approach has been chosen to showcase relevant scenarios of usage.

Eye-tracking technologies are gaining popularity for analyzing human behaviour in visualization analysis, human factors, human-computer interaction, neuroscience, psychology and training. The principle consists in finding the likely objects of interest by tracking the movements of the user's eyes. Using a camera, the pupil center position is detected, and the gaze, i.e., the point in the scene the user is fixating on, is computed using a prior calibration procedure. The gaze data therefore consist of sampled trails representing the movements of the user's eye gaze while completing a given task. Two important types of recorded movements characterize eye behaviour: fixations and saccades. Fixations are the eye positions the user fixates for a certain amount of time; in other words, they describe the locations that captured the attention of the user. The saccades connect the different fixations, i.e., they represent the rapid movements of the eye from one location to another. The combination of these eye movements is called the scanpath (Figure 8A). The scanpath is subject to overplotting.
This challenge may be addressed through precise brushing techniques to select specific trails. Usually, fixation events are studied to create an attention map which shows the salient elements in the scene. The salient elements are located in high-density fixation areas. However, the temporal connections between the different fixations provide additional information. The saccades enable the links between the fixations to be maintained and the temporal meaning of the eye movement to be held. Discovering patterns in raw scanpath data is difficult since, in contrast to aircraft trajectories, eye movements are sparser and less regular (Figure 8). To address this, different kinds of visualizations for scanpaths have been proposed in the literature. For example, edge bundling techniques minimize the visual clutter of large and occluded graphs. However, these techniques either alter trail properties such as shape and geometric information, or are otherwise computationally expensive, which makes them unsuitable for precise exploration and mining of large trail datasets. Moreover, it is possible to animate eye movements in order to gain insight into the different fixations and saccades. However, given the large eye-movement datasets retrieved from lengthy experiments containing thousands of saccades, this approach is unnecessarily time-consuming and expensive. Therefore, we next describe how this study's approach supports proper and more efficient motif discovery on such eye-tracking datasets.

The tested dataset is adapted from Peysakhovich et al., where a continuous recording of eye movements in a cockpit was performed. The gaze data was recorded at 50 Hz. Sequential points located in a square of 20 × 20 pixels and separated by at least 200 ms were stored as a fixation event and replaced by their average, in order to reduce noise coming from the microsaccades and the tracking device.
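A dispersion-based sketch of this fixation preprocessing is shown below; the 20-pixel box and 200 ms threshold come from the text, while the greedy windowing, the parameter names, and the assumption that timestamps are in seconds are ours.

```python
import numpy as np

def detect_fixations(points, times, box=20.0, min_dur=0.2):
    """Greedy dispersion-based fixation detection: consecutive samples that
    stay inside a box x box pixel square for at least min_dur seconds are
    replaced by their average position."""
    fixations, start = [], 0
    for i in range(1, len(points)):
        window = points[start:i + 1]
        if ((window.max(axis=0) - window.min(axis=0)) > box).any():
            # dispersion exceeded: close the previous window if long enough
            if times[i - 1] - times[start] >= min_dur:
                fixations.append(points[start:i].mean(axis=0))
            start = i            # begin a new window at the offending sample
    if len(points) > 0 and times[len(points) - 1] - times[start] >= min_dur:
        fixations.append(points[start:].mean(axis=0))
    return np.array(fixations)
```

The resulting fixation/saccade sequence is what the shape-based brush then operates on.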
In order to illustrate some examples, we can consider a domain expert who wishes to explore the movements of a pilot's eyes in a cockpit. When performing a task, the pilot scans the different instruments in the cockpit, focuses more on certain instruments, or interacts with them. Especially in this context, the order of the pilot's attention is important, since checking a parameter on one instrument may give an indication about the information displayed on another instrument. For example, the priority of the Primary Flight Display (PFD) instrument compared to the Flight Control Unit (FCU) differs in the cruise phase as compared to the final landing approach. As an example of analysis, the user wishes to explore the movements of the eye from the Primary Flight Display (PFD) to the Navigation Display (ND). Selecting these scanpaths using traditional brushing techniques would be challenging because of the clutter: brushing those scanpaths would introduce additional accidental selections. Therefore, they brush these scanpaths using a shape that extends from the PFD to the ND, applying the Pearson metric to take the direction into account. Figure 8-(a) depicts the brushed eye movements that correspond to the most correlated trails in the small multiple. There are several saccades between those two devices, which is in line with the fact that saccadic movements between the PFD and the ND are typically caused by parameter-checking routines. However, when the user changes the selection and brushes the scanpath between the ND and the FCU, it is surprising to see that there is only one saccade between them.

Brushing now with a shape that goes between the PFD and the FCU (Figure 8-(c)) reveals only one scanpath. This is difficult to visualize in the raw data or using the standard brushing technique. A final shape searches for an eye movement from the PFD to the LAI, passing by the FCU, in only one saccade (Figure 8-(d)). To determine the meaning of this behavior, the tool also enables the expert to exploit a continuous transition to increase the visibility and gain insight into when these saccadic movements occurred (temporal view). The user can change the visual mapping from the (x, y) gaze location to the (time, y) temporal view. This smooth transition avoids abrupt changes to the visualization (Figure 9).

GPS trajectories consist of sequential spatial locations recorded by a measurement instrument. Subjects such as people, wheeled vehicles, transportation modes and devices may be tracked by analyzing the spatial positions provided by these instruments. Analysts may need to explore and analyze the different paths followed by the users. The advances in position-acquisition and ubiquitous devices have produced extremely large location datasets, which capture the mobility of different moving targets such as autonomous vehicles, pedestrians, natural phenomena, etc. The commonness of these datasets calls for novel approaches to discover information and mine the data. Traditionally, researchers analyse GPS logs by defining a distance function (e.g., KNN) between two trajectories and then applying expensive processing algorithms for similarity detection. For example, they first convert the trajectories into a set of road segments by leveraging map-matching algorithms. Afterwards, the relationships between trajectories are managed using indexing structures.

Using the data provided by Zheng et al., we seek to investigate the different locations visited by users in Beijing. The data consists of GPS trajectories collected for the Geolife project by 182 users over a period of more than five years (from April 2007 to August 2012). Each trajectory is represented by 3D latitude, longitude and altitude points. A range of users' outdoor movements were recorded, including life routines such as travelling to work, sports, shopping, etc. As the quantity of GPS data becomes increasingly large and complex, proper brushing is challenging. Using bounding boxes somewhat alleviates this difficulty by setting the key points of interest at the major corners. However, many boxes must be placed carefully for one single selection. The boxes can help the analysts to select all the trajectories that pass through a specific location, but do not simplify the analysis of overlapping and directional trajectories. This study's approach intuitively supports path differentiation for overlapping trajectories and takes direction into account. For example, we are interested in answering questions about the activities people perform and their sequential order. For this dataset, the authors were interested in finding event sequences that could inform tourists. The shape-based brushing could serve as a tool to further explore their results. For example, if they find an interesting classical sequence that passes through locations A and B, they can further explore whether this sequence corresponds to a larger sequence and what other locations are visited before or after.
A first brushing and refinement using the FPCA metric and the small multiples enables them to select all the trajectories that include a precise event sequence passing through a set of locations, as depicted in Figure 10. A second brushing using the Pearson metric enables further explorations that also take into account the direction of the trajectories. By switching between the correlated trajectories and the anti-correlated ones, the user can gain insight into the visitation order of the selected locations.

Figure 10. Three different trajectories containing three different event sequences.

The proposed brushing technique leverages existing methods with the novel usage of the shape of the brush as an additional filtering parameter. The interaction pipeline shows different data processing steps where the comparison algorithm between the brushed items and the shape of the brush plays a central role. While the presented pipeline contains two specific and complementary comparison metric computations, another one can be used as long as it fulfills the continuity and metric semantic requirements (DR2). There are indeed many standard approaches (ED, DTW, discrete Fréchet distance) that are largely used by the community and could be used to extend our technique when faced with different datasets. Furthermore, the contribution of this paper is a novel shape-based brushing technique and not simply a shape similarity measure. In our work, we found two reasonable similarity measures that fulfill the needs of our shape-based brushing method: the FPCA distance comparison provides an accurate curve similarity measurement, while the Pearson metric provides a complementary criterion with the direction of the trajectory.

In terms of visualization, the binning process provides a valuable overview of the ordering of the trajectory shapes. This important step eases the filtering and adjustment of the selected items. It is important to mention that this filtering operates in a continuous manner, as trajectories are added or removed one by one when adjusting this filtering parameter. This practice helps to fine-tune the selected items with accurate filtering parameters. The presented scenarios show how small-multiple interaction can provide flexibility. This is especially the case when the user brushes specific trajectories that are then removed by setting the compatibility metric to uncorrelated; this operation performs a brush removal. The proposed filtering method can also consider other types of binning and allows different possible representations (i.e. various visual mapping solutions).

This paper illustrates the shape-based brushing technique with three application domains (air traffic, eye tracking, GPS data), but it can be extended to any moving-object dataset. However, our evaluation is limited by the number of studied application domains. Furthermore, even if various users and practitioners participated in the design of the technique, and assessed the simplicity and intuitiveness of the method, we did not conduct a more formal evaluation. The shape-based brush is aimed at complementing the traditional brush, and in no way do we argue that it is more efficient or effective than the original technique for all cases. The scenarios are examples of how this technique enables the selection of trails that would otherwise be difficult to manipulate, and of how using the brush area and its shape to perform comparison opens novel brushing perspectives.
We believe they provide strong evidence of the potential of such a technique. The technique also presents limitations in its selection flexibility, as it is not yet possible to combine selections. Many extensions can be applied to the last step of the pipeline to support this. This step mainly addresses DR4, where the selection can be refined thanks to user inputs. As such, multiple selections can be envisaged and finally composed. Boolean operations can be considered with the standard And, Or and Not. While this composition is easy to model, it remains difficult for an end user to master the operations when there are more than two subset operations. As a solution, Hurter et al. proposed an implicit item composition with a simple drag-and-drop technique. The pipeline can be extended with the same paradigm, where a placeholder can store filtered items which are then composed to produce the final result. The user can then refine the selection by adding, removing or merging multiple selections.

In this paper, a novel sketch-based brushing technique for trail selection was proposed and investigated. This approach facilitates user selection in occluded and cluttered data visualizations, where the selection is performed on a standard brush basis while taking into account the shape of the brush area as a filtering tool. This brushing tool works as follows. Firstly, the user brushes the trajectory of interest, trying to follow its shape as closely as possible. Then the system pre-selects every trajectory which touches the brush area. Next, the algorithm computes a distance between every brushed shape and the shape of the brushed area. Comparison scores are then sorted, and the system displays visual bins presenting trajectories from the lowest scores (unrelated or dissimilar trajectories) to the highest scores (highly correlated or similar trajectories). The user can then adjust a filtering parameter to refine the actual selection: trajectories that touch the brushed area and have a suitable correlation with the shape of the brushed area. The cornerstone of this shape-based technique relies on the shape comparison method. Therefore, we chose two algorithms which provide enough flexibility to adjust the set of selected trajectories: one relies on functional decomposition analysis, which ensures a shape curvature comparison, while the other ensures an accurate geometry-based comparison (the Pearson algorithm). To validate the efficiency of this method, we showed three examples of usage with various types of trail datasets.

This work can be extended in many directions. We can first extend it to additional application domains and other types of datasets, such as car or animal movements or any type of time-varying data. We can also consider other types of input to extend the mouse pointer usage. Virtual reality data exploration, within the so-called immersive analytics domain, is a relevant extension which will be investigated in the near future. Finally, we can also consider adding machine learning to help users brush relevant trajectories. For instance, in a very dense area, where the relevant trajectories or even parts of the trajectories are not visible due to occlusion, additional visual processing may be useful to guide the user during the brushing process.
yaeJLwvTr
Interactive technique to improve brushing in dense trajectory datasets by taking into account the shape of the brush.
Generative Adversarial Networks (GANs) can produce images of surprising complexity and realism, but are generally structured to sample from a single latent source, ignoring the explicit spatial interaction between multiple entities that could be present in a scene. Capturing such complex interactions between different objects in the world, including their relative scaling, spatial layout, occlusion, or viewpoint transformation, is a challenging problem. In this work, we propose to model object composition in a GAN framework as a self-consistent composition-decomposition network. Our model is conditioned on the object images from their marginal distributions and can generate a realistic image from their joint distribution. We evaluate our model through qualitative experiments and user evaluations in scenarios when either paired or unpaired examples for the individual object images and the joint scenes are given during training. Our results reveal that the learned model captures potential interactions between the two object domains given as input, and outputs new instances of the composed scene at test time in a reasonable fashion.

Generative Adversarial Networks (GANs) have emerged as a powerful method for generating images conditioned on a given input. The input cue could be in the form of an image BID1 BID20, a text phrase BID32 BID23 BID10, or a class label or layout BID18 BID19 BID0. The goal in most of these GAN instantiations is to learn a mapping that translates a given sample from a source distribution to generate a sample from the output distribution. This primarily involves transforming either a single object of interest (apples to oranges, horses to zebras, label to image, etc.), or changing the style and texture of the input image (day to night, etc.). However, these direct input-centric transformations do not directly capture the fact that a natural image is a 2D projection of a composition of multiple objects interacting in a 3D visual world. In this work, we explore the role of compositionality in learning a function that maps images of different objects sampled from their marginal distributions (e.g., chair and table) into a combined sample (table-chair) that captures their joint distribution.

Modeling compositionality in natural images is a challenging problem due to the complex interactions possible among different objects with respect to relative scaling, spatial layout, occlusion or viewpoint transformation. Recent work using spatial transformer networks BID9 within a GAN framework BID14 decomposes this problem by operating in a geometric warp parameter space to find a geometric modification for a foreground object. However, this approach is limited to a fixed background and does not consider more complex interactions in the real world. Another recent work generates scenes conditioned on text and a scene graph, explicitly reasoning about objects and their relations BID10.

We develop a novel approach to model object compositionality in images. We consider the task of composing two input object images into a joint image that captures their joint interaction in natural images. For instance, given an image of a chair and a table, our formulation should be able to generate an image containing the same chair-table pair interacting naturally. For a model to be able to capture the composition correctly, it needs knowledge of the occlusion ordering, i.e., that the table comes in front of the chair, and of the spatial layout, i.e., that the chair slides inside the table.
To the best of our knowledge, we are among the first to solve this problem in the image-conditional space without any prior explicit information about the objects' layout. Our key insight is to reformulate the problem of composing two objects into first composing the given object images to generate a joint combined image which models the object interaction, and then decomposing the joint image back to obtain the individual ones. This reformulation enforces a self-consistency constraint through a composition-decomposition network. However, in some scenarios, one does not have access to paired examples of the same object instances with their combined compositional image. For instance, to generate the joint image from the image of a given table and a chair, we might not have any example of that particular chair beside that particular table, while we might have images of other chairs and other tables together. We add an inpainting network to our composition-decomposition layers to handle this unpaired case as well.

Through qualitative and quantitative experiments, we evaluate our proposed Compositional-GAN approach in two training scenarios: (a) paired: when we have access to paired examples of individual object images with their corresponding composed image; (b) unpaired: when we have a dataset from the joint distribution without it being paired with any of the images from the marginal distributions.

Generative adversarial networks (GANs) have been used in a wide variety of settings, including image generation BID4 BID31 BID11 and representation learning BID24 BID15. The loss function in GANs has been shown to be very effective in optimizing high-quality images conditioned on available information. Conditional GANs BID18 generate appealing images in a variety of applications, including image-to-image translation both in the case of paired and unpaired data, inpainting missing image regions BID20 BID30, generating photorealistic images from labels BID18 BID19, and solving for photo super-resolution BID13 BID12.

Image composition is a challenging problem in computer graphics where objects from different images are to be overlaid in one single image. Appearance and geometric differences between these objects are obstacles that can result in non-realistic composed images. BID35 addressed the composition problem by training a discriminator network that could distinguish realistic composite images from synthetic ones. BID27 developed an end-to-end deep CNN for image harmonization to automatically capture the context and semantic information of the composite image. This model outperformed its predecessors BID26 BID29, which transferred statistics of hand-crafted features to harmonize the foreground and the background in the composite image. Recently, BID14 used spatial transformer networks as a generator by performing geometric corrections to warp a masked object to adapt to a fixed background image. Moreover, BID10 computed a scene layout from given scene graphs, which revealed explicit reasoning about relationships between objects, and converted the layout to an output image. Despite the success of these approaches in improving perceptual realism, they do not address the more realistic and complex setting where no explicit prior information about the scene layout is given. In the general case, which we address, each object should be rotated, scaled, and translated in addition to occluding others and/or being occluded, in order to generate a realistic composite image.
We propose a generative network for composing two objects by learning how to handle their relative scaling, spatial layout, occlusion, and viewpoint transformation. Given a set of images from the marginal distribution of the first object, X = {x_1, ..., x_n}, and a set of images from the marginal distribution of the second object, Y = {y_1, ..., y_n}, in addition to a set of real images from their joint distribution containing both objects, C = {c_1, ..., c_n}, we generate realistic composite images containing the objects given from the first two sets. We propose a conditional generative adversarial network for two scenarios: (1) paired inputs-output in the training set, where each image in C is correlated with an image in X and one in Y; and (2) unpaired training data, where images in C are not paired with images in X and Y. It is worth noting that our goal is not to learn a generative model of all possible compositions, but to learn to output the mode of the distribution. The modular components of our proposed approach are critical in learning the mode of plausible compositions. For instance, our relative appearance flow network handles the viewpoint, and the spatial transformer network handles affine transformations, eventually making the generator invariant to these transformations. In the following sections, we first summarize conditional generative adversarial networks, and then discuss our network architecture and its components for the two circumstances.

Starting from a random noise vector z, GANs generate images c of a specific distribution by adversarially training a generator, G, versus a discriminator, D. While the generator tries to produce realistic images, the discriminator opposes the generator by learning to distinguish between real and fake images. In conditional GAN models (cGANs), auxiliary information, x, in the form of an image or a label is fed into the model alongside the noise vector ({x, z} → c) BID18. The objective of cGANs is therefore an adversarial loss function formulated as:

\mathcal{L}_{cGAN}(G, D) = \mathbb{E}_{x,c \sim p_{data}(x,c)}[\log D(x, c)] + \mathbb{E}_{x \sim p_{data}(x), z \sim p_z(z)}[\log(1 - D(x, G(x, z)))]

where G and D minimize and maximize this loss function, respectively.

The convergence of the above GAN objective, and consequently the quality of the generated images, can be improved if an L1 loss penalizing the deviation of generated images from their ground truth is added. Thus, the generator's objective function is summarized as:

G^* = \arg\min_G \max_D \mathcal{L}_{cGAN}(G, D) + \lambda \, \mathbb{E}_{x,c \sim p_{data}(x,c), z \sim p_z(z)}\big[\|c - G(x, z)\|_1\big]

In our proposed compositional GAN, the model ({(x, y), z} → c) is conditioned on two input images, (x, y), concatenated channel-wise in order to generate an image from the target distribution p_data(c). We have access to real samples of these three distributions during training (in the two paired and unpaired scenarios). Similar to prior work, we ignore random noise as the input to the generator, and dropout is the only source of randomness in the network.
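For concreteness, a minimal PyTorch sketch of one optimization step under this cGAN-plus-L1 objective is given below; the network interfaces, the channel-wise concatenation of the condition, and the BCE-with-logits formulation are our assumptions rather than the paper's exact code.

```python
import torch
import torch.nn.functional as F

def cgan_losses(D, G, x, c_real, lam=100.0):
    """One step of the cGAN + L1 objective above (a sketch)."""
    c_fake = G(x)                                  # noise comes from dropout only
    # Discriminator: real (x, c) -> 1, fake (x, G(x)) -> 0
    d_real = D(torch.cat([x, c_real], dim=1))
    d_fake = D(torch.cat([x, c_fake.detach()], dim=1))
    d_loss = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) \
           + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    # Generator: fool D while staying close to the ground truth in L1
    g_adv = D(torch.cat([x, c_fake], dim=1))
    g_loss = F.binary_cross_entropy_with_logits(g_adv, torch.ones_like(g_adv)) \
           + lam * F.l1_loss(c_fake, c_real)
    return d_loss, g_loss
```

In the compositional setting, x would be the channel-wise concatenation of the two object images.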
In some specific domains, the relative viewpoint of the objects should be changed accordingly to generate a natural composite image. Irrespective of the paired or unpaired inputs-output cases, we train a relative encoder-decoder appearance flow network BID34, G_RAFN, taking the two input images and synthesizing a new viewpoint of the first object, x_i^{RAFN}, given the viewpoint of the second one, y_i, encoded in its binary mask. The relative appearance flow network is trained on a set of images in X with arbitrary azimuth angles α_i ∈ {−180°, −170°, ..., 180°}, along with their target images in an arbitrary new viewpoint with azimuth angle θ_i ∈ {−180°, −170°, ..., 180°}, and a set of foreground masks of images in Y in the target viewpoints. The network architecture for our relative appearance flow network is illustrated in the appendix, and its loss function is formulated as:

L(G_{RAFN}) = \|G_{RAFN}(x_i, M^{fg}_{y_i}) - x^r_i\|_1 + \mathcal{L}_{CE}(\hat{M}^{fg}_{x_i}, M^{fg}_{x_i})

i.e., an L1 reconstruction term on the synthesized view plus a cross-entropy term on the predicted foreground mask. As mentioned above, G_RAFN is the encoder-decoder network predicting appearance flow vectors, which after a bilinear sampling step generates the synthesized view. The encoder-decoder mask generating network, G^M_RAFN, shares the weights of its encoder with G_RAFN, while its decoder is designed for predicting the foreground mask of the synthesized image. Moreover, x^r_i is the ground-truth image for x_i in the new viewpoint, \hat{M}^{fg}_{x_i} is its predicted foreground mask, and M^{fg}_{x_i}, M^{fg}_{y_i} are the ground-truth foreground masks for x_i and y_i, respectively.

In this section, we propose a model, G, for composing two objects when there is a corresponding composite real image for each pair of input images in the training set. In addition to the relative AFN discussed in the previous section, to relatively translate the center-oriented input objects, we train our variant of the spatial transformer network (STN) BID9, which simultaneously takes the two RGB images, x_i^{RAFN} and y_i, and generates their relatively translated versions, x_i^T and y_i^T.

Figure 1: Compositional GAN training model both for paired and unpaired training data. The yellow box refers to the RAFN step for synthesizing a new viewpoint of the first object given the foreground mask of the second one, which is applied only during training with paired data. The orange box represents the process of inpainting the input segmentations for training with unpaired data. The rest of the model is similar for the paired and unpaired cases, and includes the STN followed by the self-consistent composition-decomposition network.

The main backbone of our proposed model consists of a self-consistent composition-decomposition network, with both parts as conditional generative adversarial networks. The composition network, G_c, takes the two translated input RGB images, x_i^T and y_i^T, concatenated channel-wise in a batch of size N × 6 × H × W, and generates their corresponding output, ĉ_i, of size N × 3 × H × W, composed of the two input images appropriately. This generated image is then fed into the decomposition network, G_dec, to be decomposed back into its constituent objects, x̂_i^T and ŷ_i^T, in addition to G^M_dec, which predicts probability segmentation masks of the composed image, M̂_{x_i} and M̂_{y_i}. The two decomposition components G_dec and G^M_dec share weights in their encoder network but differ in the decoder. We assume the ground-truth foreground masks of the inputs and the target composite image are available; thus we remove the background from all images in the network for simplicity. A GAN loss with gradient penalty BID6 is applied on top of the generated images ĉ_i, x̂_i^T, ŷ_i^T to make them look realistic, in addition to multiple L1 loss functions penalizing the deviation of the generated images from their ground truth. A schematic of our full network, G, is represented in FIG2, and the loss function is summarized as:

L(G) = L_{GAN} + \lambda_1 L_{L1}(G_c) + \lambda_2 L_{L1}(G_{dec}) + \lambda_3 L_{CE}(G^M_{dec})

where (x_i^T, y_i^T) = STN(x_i^{RAFN}, y_i), and x_i^c, y_i^c are the ground-truth transposed full object inputs corresponding to c_i. Moreover, L_{L1}(G_{dec}) is the self-consistency constraint penalizing the deviation of the decomposed images from their corresponding transposed inputs, and L_{CE} is the cross-entropy loss applied to the predicted probability segmentation masks. We also add the gradient penalty introduced by BID6 to improve the convergence of the GAN loss functions. If a viewpoint transformation is not needed for the objects' domain, one can replace x_i^{RAFN} with x_i in the above equations.
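A sketch of the supervised part of this composition-decomposition objective (GAN and gradient-penalty terms omitted) could look as follows, with the network interfaces and the loss grouping as our assumptions:

```python
import torch
import torch.nn.functional as F

def paired_losses(G_c, G_dec, G_M_dec, x_T, y_T, c_real, x_c, y_c, mask_target,
                  lam1=100.0, lam2=50.0, lam3=1.0):
    """Supervised terms of the composition-decomposition objective (sketch).

    mask_target: N x H x W integer map derived from the ground-truth
    segmentation masks (0 = first object, 1 = second object).
    """
    c_hat = G_c(torch.cat([x_T, y_T], dim=1))   # compose: N x 6 x H x W -> N x 3 x H x W
    x_hat, y_hat = G_dec(c_hat)                 # decompose back into the two objects
    mask_logits = G_M_dec(c_hat)                # N x 2 x H x W segmentation logits
    loss = lam1 * F.l1_loss(c_hat, c_real) \
         + lam2 * (F.l1_loss(x_hat, x_c) + F.l1_loss(y_hat, y_c)) \
         + lam3 * F.cross_entropy(mask_logits, mask_target)
    return loss, c_hat
```

The second term is the self-consistency constraint: the decomposed objects must match the transposed ground-truth inputs.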
Here, we propose a variant of the model discussed in Section 3.3 for broader object domains where paired inputs-outputs are not available or are hard to collect. In this setting, there is no one-to-one mapping between images in the sets X, Y and images in the composite domain, C. However, we still assume that foreground and segmentation masks are available for images in all three sets during training; the background is therefore again removed for simplicity. Given the segmentation masks M_{x_i}, M_{y_i} of the joint ground-truth image c_i, we first crop and resize the object segments x^c_{i,s} = c_i ⊙ M_{x_i} and y^c_{i,s} = c_i ⊙ M_{y_i} to be at the center of the image, similar to the input center-oriented objects at test time, and denote them by x_{i,s} and y_{i,s}.

For each object, we add a self-supervised inpainting network BID20, G_f, as a component of our compositional GAN model to generate full objects from the given segments of image c_i, encouraging the network to learn object occlusions and spatial layouts more accurately. For this purpose, we apply a random mask on each x_i ∈ X to zero out the pixel values in the mask region, and train a conditional GAN, G^x_f, to fill in the missing regions. To guide this masking process toward the similar task of generating full objects from segmentations, we use the foreground masks of images in Y for zeroing out images in X. Another cGAN network, G^y_f, is trained similarly to fill in the missing regions of masked images in Y. The loss function for each inpainting network is:

L(G^x_f) = \mathcal{L}_{cGAN}(G^x_f, D^x_f) + \lambda \, \|G^x_f(x_i \odot (1 - M)) - x_i\|_1

i.e., an adversarial loss plus an L1 reconstruction loss with respect to the unmasked image, and likewise for G^y_f. Therefore, starting from the two inpainting networks trained on sets X and Y, we generate a full object from each segment of image c_i, both for the center-oriented segments, x_{i,s} and y_{i,s}, and for the original segments, x^c_{i,s} and y^c_{i,s}. We can then train a spatial transformer network similar to the model for paired data discussed in Section 3.3, followed by the composition-decomposition networks, to generate the composite image and its probability segmentation masks.
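The self-supervised masking behind each inpainting network might be sketched as below, where occluding an X-object with a Y-foreground mask produces the (input, target) training pair; the names and the BCE adversarial term are our assumptions:

```python
import torch
import torch.nn.functional as F

def inpainting_step(G_f, D_f, x, y_fg_mask, lam=100.0):
    """One self-supervised inpainting step (sketch): zero out the region of x
    covered by another object's foreground mask and train G_f to fill it in."""
    x_masked = x * (1.0 - y_fg_mask)           # occlude x where the Y-object would be
    x_full = G_f(x_masked)                     # predicted full object
    d_fake = D_f(x_full)
    g_loss = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake)) \
           + lam * F.l1_loss(x_full, x)        # adversarial + L1 reconstruction
    return g_loss, x_full
```

At training time for the composite images, the same generator then converts each segment of c_i into a full object before the STN and composition steps.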
After training the network, we study the performance of the model on new images, x, y, from the marginal distributions of the sets X and Y, along with their foreground masks, to generate a natural-looking composite image containing the two objects. However, since generative models cannot generalize very well to a new example, we continue optimizing the network parameters given the two input test instances to remove artifacts and generate sharper results BID1. Since the ground truth for the composite image and the target spatial layout of the objects are not available at test time, the self-consistency cycle in our decomposition network provides the only supervision for penalizing deviation from the original objects, through an L1 loss.

We freeze the weights of the relative spatial transformer and appearance flow networks, and only refine the weights of the composition-decomposition layers, where the GAN loss is applied given the real samples from our training set. We again ignore the background for simplicity, given the foreground masks of the input instances. The ground-truth masks of the transposed full input images, M^{fg}_x, M^{fg}_y, can also be obtained by applying the pre-trained RAFN and STN to the input masks. We then use the Hadamard product to multiply the predicted masks M̂_x, M̂_y with the object foreground masks M^{fg}_x, M^{fg}_y, respectively, to eliminate artifacts outside of the target region of each object. One should note that M^{fg}_x, M^{fg}_y are the foreground masks of the full transposed objects, while M̂_x and M̂_y are the predicted segmentation masks. Therefore, the loss function for this refinement is:

L(G) = L_{GAN} + \lambda \big( \|\hat{x}^T - x^T\|_1 + \|\hat{y}^T - y^T\|_1 \big)

where x̂^T, ŷ^T are the generated decomposed images, and x^T and y^T are the transposed inputs.

FIG3: Test results on the chair-table (A) and basket-bottle (B) composition tasks trained with either paired or unpaired data. "NN" stands for the nearest-neighbor image in the paired training set, and "NoInpaint" shows the results of the unpaired model without the inpainting network. In both paired and unpaired cases, ĉ_before and ĉ_after show outputs of the generator before and after the inference refinement network, respectively. Also, ĉ_after^s represents the summation of the masked transposed inputs after the refinement step.

In the experiments, we will present: (1) images generated directly from the composition network, ĉ, before and after this refinement step, and (2) images generated directly based on the predicted segmentation masks as ĉ_s = M̂_x ⊙ x^T + M̂_y ⊙ y^T.
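The mask-based compositing ĉ_s, including the Hadamard cleanup of the predicted masks, reduces to a few lines (a sketch with our naming):

```python
import torch

def masked_composite(x_T, y_T, M_hat_x, M_hat_y, M_fg_x, M_fg_y):
    """Compose the final image from predicted segmentation masks (sketch).

    The predicted masks are first cleaned with a Hadamard product against the
    full-object foreground masks, then used to blend the transposed inputs:
    c_s = M_x * x_T + M_y * y_T.
    """
    M_x = M_hat_x * M_fg_x      # keep predictions only inside each object's region
    M_y = M_hat_y * M_fg_y
    return M_x * x_T + M_y * y_T
```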
We made sure that these two subsets do not overlap with the chairs and tables in the composite images, to avoid the occurrence of implicit pairing in our experiments. Chairs and tables in the input-output sets can pose at a random azimuth angle in the range [−180°, 180°] at steps of 10°. As discussed in Section 3.2, feeding the foreground mask of an arbitrary table with a random azimuth angle, in addition to the input chair, into our trained relative appearance flow network synthesizes the chair in a viewpoint consistent with the table. Synthesized test chairs, X^RAFN, are represented in the third row of FIG3-A. In addition, to study our network components, we visualize the model outputs at different steps in FIG3-A. To evaluate our network with paired training data on a new input chair and table, represented as X and Y respectively, we find their nearest neighbor composite example in the training set in terms of the constituent chair and table features extracted from a pre-trained VGG19 network BID25. As shown in the fourth row of FIG3-A, the nearest neighbors are different enough to be certain that the network is not memorizing its training data. We also illustrate the output of the network before and after the inference refinement step discussed in Section 3.5, in terms of the generator's prediction, ĉ, as well as the direct summation of the masked transposed inputs, ĉ_s, for both paired and unpaired training models. The refinement step sharpens the synthesized image and removes artifacts generated by the model. Our results from the model trained on unpaired data, shown in the figure, are comparable with those from paired data. Moreover, we depict the performance of the model without our inpainting network in the eighth row, where occlusions are not correct in multiple examples. FIG3-A emphasizes that our model has successfully resolved the challenges involved in this composition task, where in some regions, such as the chair handle, the table occludes the chair, while in other regions, such as the table legs, the chair occludes the table. More exemplar images, as well as some of the failure cases for both paired and unpaired scenarios, are presented in Appendix C.1. We have also conducted an Amazon Mechanical Turk evaluation BID33 to compare the performance of our algorithm in different scenarios, including training with and without paired data, and before and after the final inference refinement network. From a set of 90 test images of chairs and tables, we asked 60 evaluators to select their preferred composite image generated by the model trained on paired data versus images generated by the model trained on unpaired data, both after the inference refinement step. As a result, 57% of the composite images generated by our model trained on paired inputs-outputs were preferred to the ones generated through the unpaired scenario. This shows that, even without paired examples during training, our proposed model performs reasonably well. We repeated the same study to compare the quality of images generated before and after the inference refinement step, where the latter was preferred 71.3% of the time to the non-refined images, revealing the benefit of the last refinement module in generating higher-quality images. In this experiment, we address the compositional task of putting a bottle in a basket. Similar to the chair-table problem, we manually composed Shapenet bottles with baskets to prepare a training set of 100 paired examples.
We trained the model both with and without the paired data, similarly to Section 4.1, and represent the outputs of the network before and after the inference refinement in FIG3-B. In addition, nearest neighbor examples in the paired training set are shown for each new input instance (fourth row), as well as the model's predictions in the unpaired case without the inpainting network (eighth column). As is clear from the results, our inpainting network plays a critical role in the success of our unpaired model, especially for handling occlusions. This problem statement is similarly interesting, since the model should identify which pixels should be occluded. For instance, in the first column of FIG3-B, the region inside the basket is occluded by the blue bottle while the region outside occludes the bottle. More examples are shown in Appendix C.2. Similarly, we evaluate the performance of our model on this task through an Amazon Mechanical Turk study with 60 evaluators and a set of 45 test images. In summary, outputs from our paired training were preferred to the unpaired case 57% of the time, and the inference refinement step was observed to be useful in 64% of the examples.
Figure: Test examples for the face-sunglasses composition task. Top two rows: input sunglasses and face images; 3rd and 4th rows: the output of our compositional GAN for the paired and unpaired models, respectively; last row: images generated by the ST-GAN BID14 model.
These results confirm the benefit of the refinement module and the comparable performance of training in the unpaired scenario with the paired training case. Ablation Studies and Other Baselines: In Appendix C.3, we repeat the experiments with each component of the model removed one at a time to study their effect on the final composite image. In addition, in Appendix C.4, we show the poor performance of two baseline models (CycleGAN and Pix2Pix) on a challenging composition task. In this section, we compose a pair of sunglasses with a face image, similar to BID14, where the face should be fixed while the sunglasses should be rescaled and transformed relatively. We used the CelebA dataset BID17, followed its training/test splits, and cropped images to 128 × 128 pixels. We hand-crafted 180 composite images of celebrity faces from the training split with sunglasses downloaded from the web to prepare a paired training set. However, we could still use our manual composite set for the unpaired case, with access to the segmentation masks separating sunglasses from faces. In the unpaired scenario, we used 6K images from the training split, not overlapping with our composite images, as the set of individual faces during training. In this case, since the pair of glasses always occludes the face, we report results based on the summation of the masked transposed inputs, ĉ_s, for both the paired and the unpaired training data. We also compare our results with the ST-GAN model BID14, which assumes the face image is fixed and warps the glasses in the geometric warp parameter space. Our results, both in the paired and unpaired cases, shown in FIG4, look more realistic in terms of the scale, rotation angle, and location of the sunglasses, at the cost of only 180 paired training images or 180 unpaired images with segmentation masks. More example images are illustrated in Appendix C.5. To confirm this observation, we have studied the results by asking 60 evaluators to score our model predictions versus ST-GAN on a set of 75 test images.
According to this study, where we compare our model trained on paired data with ST-GAN, 84% of the users evaluated our network predictions favorably. Moreover, when comparing ST-GAN with our unpaired model, 73% of the evaluators selected the latter. These results confirm the ability of our model to generalize to new test examples and support our claim that both our paired and unpaired models significantly outperform the recent ST-GAN model in composing a face with a pair of sunglasses. In this paper, we proposed a novel compositional GAN model addressing the problem of object composition in conditional image generation. Our model captures the relative linear and viewpoint transformations needed to be applied on each input object (in addition to their spatial layout and occlusions) to generate a realistic joint image. To the best of our knowledge, we are among the first to solve the compositionality problem without having any explicit prior information about the objects' layout. We evaluated our compositional GAN through multiple qualitative experiments and user evaluations for the two cases of paired versus unpaired training data. In the future, we plan to extend this work toward generating images composed of multiple (more than two) and/or non-rigid objects. The architecture of our relative appearance flow network is illustrated in FIG5; it is composed of an encoder-decoder set of convolutional layers for predicting the appearance flow vectors, which, after a bilinear sampling, generate the synthesized view. The second decoder (last row of layers in FIG5) is for generating the foreground mask of the synthesized image, following a shared encoder network BID34. All convolutional layers are followed by batch normalization BID7 and a ReLU activation layer, except for the last convolutional layer in each decoder. In the flow decoder, the output is fed into a Tanh layer, while in the mask prediction decoder, the last convolutional layer is followed by a Sigmoid layer so that outputs lie in the range [0, 1]. A diagram of our relative spatial transformer network is represented in FIG6. The two input images (e.g., chair and table) are concatenated channel-wise and fed into the localization network to generate two sets of parameters, θ_1, θ_2, for the affine transformations. This single network is simultaneously trained on the two images, learning the relative transformations required to get close to the given target images. In this figure, orange feature maps are the output of a conv2d layer (represented along with their corresponding number of channels and dimensions), and yellow maps are the output of max-pool2d followed by ReLU. The blue layers represent fully connected layers. More test results are presented in Figure 6, with a few failure test examples in FIG8, for both paired and unpaired training models. Here, viewpoint and linear transformations, in addition to occluding object regions, should be performed properly to generate a realistic image.
Figure 6: Test results on the chair-table composition task trained with either paired or unpaired data. "NN" stands for the nearest neighbor image in the paired training set, and "NoInpaint" shows the results of the unpaired model without the inpainting network. In both paired and unpaired cases, ĉ_before and ĉ_after show outputs of the generator before and after the inference refinement network, respectively. Also, ĉ^after_s represents the summation of the masked transposed inputs after the refinement step.
In the bottle-basket composition, the main challenge is the relative scale of the objects, besides their partial occlusions.
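As a concrete illustration of the relative spatial transformer described above, here is a minimal PyTorch sketch in which one localization network sees both objects and predicts two affine parameter sets. The layer sizes and the two-head parameterization are our assumptions based on the description, not the authors' exact architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelativeSTN(nn.Module):
    """One localization network over the channel-wise concatenation of the two
    objects predicts theta_1 and theta_2 (one 2x3 affine matrix per object)."""
    def __init__(self):
        super().__init__()
        self.loc = nn.Sequential(
            nn.Conv2d(6, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(64 * 16, 2 * 6),   # two affine parameter sets
        )

    def forward(self, x, y):
        theta = self.loc(torch.cat([x, y], dim=1)).view(-1, 2, 2, 3)
        grid_x = F.affine_grid(theta[:, 0], x.size(), align_corners=False)
        grid_y = F.affine_grid(theta[:, 1], y.size(), align_corners=False)
        # Warp each object with its own predicted affine transformation.
        return (F.grid_sample(x, grid_x, align_corners=False),
                F.grid_sample(y, grid_y, align_corners=False))
```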
In Figure 8, we visualize more test examples and study the performance of our model before and after the inference refinement step for both paired and unpaired scenarios. The third column of this figure represents the nearest neighbor training example found for each new input pair, (X, Y), through their features extracted from the last layer of the pre-trained VGG19 network BID25. Moreover, the seventh column shows outputs of the trained unpaired network without including the inpainting component during training, revealing the necessity of the inpainting network while training with unpaired data. We repeat the experiments on composing a bottle with a basket, with each component of the model removed one at a time, to study their effect on the final composite image. Qualitative results are illustrated in FIG9. The first and second columns show the bottle and basket images which are concatenated channel-wise as the input to the network. The following columns are:
- 3rd column: no reconstruction loss on the composite image results in wrong color and faulty occlusion,
- 4th column: no cross-entropy mask loss in training results in faded bottles,
- 5th column: no GAN loss in training and inference generates outputs with a different color and lower quality than the input image,
- 6th column: no decomposition generator (G_dec) and self-consistency cycle results in partially missed bottles,
- 7th and 8th columns: the full model in the paired and unpaired scenarios.
Figure 8: More test results on the basket-bottle composition task trained with either paired or unpaired data. "NN" stands for the nearest neighbor image in the paired training set, and "NoInpaint" shows the results of the unpaired model without the inpainting network. In both paired and unpaired cases, ĉ_before and ĉ_after show outputs of the generator before and after the inference refinement network, respectively. Also, ĉ^after_s represents the summation of the masked transposed inputs after the refinement step.
The purpose of our model is to capture object interactions in the 3D space projected onto a 2D image by handling their spatial layout, relative scaling, occlusion, and viewpoint for generating a realistic image. These factors distinguish our model from the CycleGAN and Pix2Pix models, whose goal is only changing the appearance of the given image. In this section, we compare to these two models. To be able to compare, we use the mean scaling and translation parameters of our training set to place each input bottle and basket together, obtaining an input with 3 RGB channels (9th column in FIG9). We then train a ResNet generator on our paired training data with an adversarial loss added to an L1 regularizer. Since the structure of the input image might be different from its corresponding ground-truth image (due to different object scalings and layouts), the ResNet model works better than a U-Net, but still generates unrealistic images (10th column in FIG9). We follow the same approach for the unpaired data through the CycleGAN model (11th column in FIG9). As apparent from the qualitative results, it is not easy for either the Pix2Pix or CycleGAN networks to learn the transformation between samples from the input distribution and those of the occluded outputs. Adding a pair of sunglasses to an arbitrary face image requires a proper linear transformation of the sunglasses to align well with the face. We illustrate test examples of this composition problem in FIG2, including results of both the paired and unpaired training scenarios in the third and fourth columns.
In addition, the last column of each composition example represents the outputs of the ST-GAN model BID14.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rygPUoR9YQ
We develop a novel approach to model object compositionality in images in a GAN framework.
Recent studies have highlighted adversarial examples as a ubiquitous threat to different neural network models and many downstream applications. Nonetheless, as unique data properties have inspired distinct and powerful learning principles, this paper aims to explore their potential toward mitigating adversarial inputs. In particular, our results reveal the importance of using the temporal dependency in audio data to gain discriminative power against adversarial examples. Tested on automatic speech recognition (ASR) tasks and three recent audio adversarial attacks, we find that (i) input transformations developed from image adversarial defense provide limited robustness improvement and are vulnerable to advanced attacks; (ii) temporal dependency can be exploited to gain discriminative power against audio adversarial examples and is resistant to the adaptive attacks considered in our experiments. Our results not only show promising means of improving the robustness of ASR systems, but also offer novel insights into exploiting domain-specific data properties to mitigate the negative effects of adversarial examples. Deep neural networks (DNNs) have been widely adopted in a variety of machine learning applications BID18 BID20. However, recent work has demonstrated that DNNs are vulnerable to adversarial perturbations BID32 BID10. An adversary can add negligible perturbations to inputs and generate adversarial examples to mislead DNNs, first found in image-based machine learning tasks BID10 BID2 BID21 BID7a BID30. Beyond images, given the wide application of DNN-based audio recognition systems, such as Google Home and Amazon Alexa, audio adversarial examples have also been studied recently BID0 BID8 BID17. Comparing image and audio learning tasks, although their state-of-the-art DNN architectures are quite different (i.e., convolutional vs. recurrent neural networks), the attacking methodology towards generating adversarial examples is fundamentally unanimous: finding adversarial perturbations through the lens of maximizing the training loss or optimizing some designed attack objectives. For example, the same attack loss function proposed in BID8 is used to generate adversarial examples in both visual and speech recognition models. Nonetheless, different types of data usually possess unique or domain-specific properties that can potentially be used to gain discriminative power against adversarial inputs. In particular, the temporal dependency in audio data is an innate characteristic that has already been widely adopted in machine learning models. However, in addition to improving learning performance on natural audio examples, it is still an open question whether or not temporal dependency can be exploited to help mitigate the negative effects of adversarial examples. The focus of this paper is two-fold. First, we investigate the robustness of automatic speech recognition (ASR) models under input transformation, a commonly used technique in the image domain to mitigate adversarial inputs. Our experimental results show that four implemented transformation techniques on audio inputs, including waveform quantization, temporal smoothing, down-sampling and autoencoder reformation, provide limited robustness improvement against the recent attack method proposed in BID1, which aims to circumvent the gradient obfuscation issue incurred by input transformations. Second, we demonstrate that temporal dependency can be used to gain discriminative power against adversarial examples in ASR.
We evaluate the proposed temporal dependency method on both the LIBRIS BID11 and Mozilla Common Voice datasets against three state-of-the-art attack methods BID0 BID36 considered in our experiments, and show that such an approach achieves promising identification of non-adaptive and adaptive attacks. Moreover, we also verify that the proposed method can resist the strong proposed adaptive attacks in which the defense implementations are known to the attacker. Finally, we note that although this paper focuses on the case of audio adversarial examples, the methodology of leveraging unique data properties to improve model robustness could be naturally extended to different domains. The promising results also shed new light on designing adversarial defenses against attacks on various types of data. Related work: An adversarial example for a neural network is an input x_adv that is similar to a natural input x but will yield a different output after passing through the neural network. Currently, there are two different types of attacks for generating audio adversarial examples: the speech-to-label attack and the speech-to-text attack. The speech-to-label attack aims to find an adversarial example x_adv close to the original audio x but yielding a different (wrong) label. To do so, Alzantot et al. proposed a genetic algorithm BID0, and Cisse et al. proposed a probabilistic loss function BID8. The speech-to-text attack requires the transcribed output of the adversarial audio to be the same as the desired output, which has been made possible by BID16. Yuan et al. demonstrated practical "wav-to-API" audio adversarial attacks BID36. Another line of research focuses on adversarial training or data augmentation to improve model robustness BID28 BID26 BID29 BID31, which is beyond our scope. Our proposed approach focuses on gaining discriminative power against adversarial examples through embedded temporal dependency, which is compatible with any ASR model and does not require adversarial training or data augmentation.
DO LESSONS FROM IMAGE ADVERSARIAL EXAMPLES TRANSFER TO THE AUDIO DOMAIN? Although in recent years both image and audio learning tasks have witnessed significant breakthroughs accomplished by advanced neural networks, these two types of data have unique properties that lead to distinct learning principles. In images, the pixels entail spatial correlations corresponding to hierarchical object associations and color descriptions, which are leveraged by convolutional neural networks (CNNs) for feature extraction. In audio, the waveforms possess an apparent temporal dependency, which is widely adopted by recurrent neural networks (RNNs). For the segmentation task in the image domain, spatial consistency has played an important role in improving model robustness BID22. However, it remains unknown whether temporal dependency can have a similar effect of improving model robustness against audio adversarial examples. In this paper, we aim to address the following fundamental questions: (a) do lessons learned from image adversarial examples transfer to the audio domain?; and (b) can temporal dependency be used to discriminate audio adversarial examples?
Moreover, studying the discriminative power of temporal dependency in audio not only highlights the importance of using unique data properties toward building robust machine learning models, but also aids in devising principles for investigating more complex data such as videos (spatial + temporal properties) or multimodal cases (e.g., images + texts). Here we summarize two primary findings concluded from our experimental results in Section 4. Audio input transformation is not effective against adversarial attacks: Input transformation is a widely adopted defense technique in the image domain, owing to its low operating cost and easy integration with existing network architectures BID24 BID33 BID9. Generally speaking, input transformation aims to perform a certain feature transformation on the raw image in order to disrupt the adversarial perturbations before passing it to a neural network. Popular approaches include bit quantization, image filtering, image reprocessing, and autoencoder reformation BID35 BID12 BID25. However, many existing methods have been shown to be bypassed by subsequent or adaptive adversarial attacks BID3 BID13 BID4 BID23. Moreover, Athalye et al. BID1 pointed out that input transformation may cause obfuscated gradients when generating adversarial examples and thus give a false sense of robustness. They also demonstrated that in many cases this gradient obfuscation issue can be circumvented, making input transformation still vulnerable to adversarial examples. Similarly, in our experiments we find that audio input transformations based on waveform quantization, temporal filtering, signal downsampling or autoencoder reformation suffer from a similar weakness: the tested model with input transformation becomes fragile to adversarial examples when one adopts the attack considering gradient obfuscation as in BID1. Temporal dependency possesses strong discriminative power against adversarial examples in automatic speech recognition: Instead of input transformation, in this paper we propose to exploit the inherent temporal dependency in audio data to discriminate adversarial examples. Tested on automatic speech recognition (ASR) tasks, we find that the proposed methodology can effectively detect audio adversarial examples while minimally affecting the recognition performance on normal examples. In addition, experimental results show that an adaptive adversarial attack we consider, even when knowing every detail of the deployed temporal dependency method, cannot generate adversarial examples that bypass the proposed temporal dependency-based approach. Combining these two primary findings, we conclude that the weaknesses of defense techniques identified in the image case are very likely to transfer to the audio domain. On the other hand, exploiting unique data properties to develop defense methods, such as using temporal dependency in ASR, can lead to promising defense approaches that can resist adaptive adversarial attacks. In this section, we introduce the effect of basic input transformations on audio adversarial examples and analyze temporal dependency in audio data. We will also show that such temporal dependency can potentially be leveraged to discriminate audio adversarial examples. Inspired by image input transformation methods, and as a first attempt, we applied some primitive signal processing transformations to audio inputs. These transformations are useful, easy to implement, fast to operate, and have delivered several interesting findings.
Quantization: By rounding the amplitude of sampled audio data to the nearest integer multiple of q, the adversarial perturbation can be disrupted, since its amplitude is usually small in the input space. We choose q = 128, 256, 512, 1024 as our parameters.
Local smoothing: We use a sliding window of fixed length for local smoothing to reduce the adversarial perturbation. For an audio sample x_i, we consider the K − 1 samples before and after it, denoted by [x_{i−K+1}, ..., x_i, ..., x_{i+K−1}], as a local reference sequence and replace x_i by the smoothed value (average, median, etc.) of its reference sequence.
Downsampling: Based on sampling theory, it is possible to down-sample a band-limited audio file without sacrificing the quality of the recovered signal, while mitigating the adversarial perturbations in the reconstruction phase. In our experiments, we down-sample the original 16kHz audio data to 8kHz and then perform signal recovery.
Autoencoder: In the field of adversarial image defense, the MagNet method BID25 is an effective way to remove adversarial noise: an autoencoder is used to project the adversarial input distribution onto the benign distribution. In our experiments, we implement a sequence-to-sequence autoencoder: the whole audio is cut into frame-level pieces, each piece is passed through the autoencoder, and the outputs are concatenated in the final stage; passing the whole audio through the autoencoder directly proved ineffective, as it is hard to utilize the underlying information this way.
Since an audio sequence has an explicit temporal dependency (e.g., correlations in consecutive waveform segments), here we aim to explore whether such temporal dependency is affected by adversarial perturbations. The pipeline of the temporal dependency based method is shown in FIG0. Given an audio sequence, we propose to select the first k portion of it (i.e., the prefix of length k) as input for the ASR to obtain the transcribed result S_k. We also insert the whole sequence into the ASR and select the prefix of length k of the transcribed result, denoted S_{whole,k}, which has the same length as S_k. We then compare the consistency between S_k and S_{whole,k} in terms of temporal dependency distance. Here we adopt the word error rate (WER) as the distance metric BID19. For a normal/benign audio instance, S_k and S_{whole,k} should be similar, since the ASR model is consistent for different sections of a given sequence due to its temporal dependency. However, for audio adversarial examples, since the added perturbation aims to alter the ASR output toward the targeted transcription, it may fail to preserve the temporal information of the original sequence. Therefore, due to the loss of temporal dependency, S_k and S_{whole,k} in this case will not produce consistent results. Based on this hypothesis, we leverage the transcription of the first k portion and the prefix of length k of the whole-sequence transcription to recognize adversarial inputs. The presentation flow of the experimental results is summarized as follows. We first introduce the datasets, target learning models, attack methods, and evaluation metrics for the different defense/detection methods that we focus on. We then discuss the defense/detection effectiveness of the different methods against each attack, respectively. Finally, we evaluate strong adaptive attacks against these defense/detection methods.
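For concreteness, here is a minimal NumPy/SciPy sketch of the three signal-processing transformations above, using the parameter values the text reports (q = 256, K = 4, 16kHz to 8kHz); treating the waveform as a 1-D float array is our simplifying assumption:

```python
import numpy as np
from scipy.signal import medfilt, resample

def quantize(x, q=256):
    """Round waveform amplitudes to the nearest integer multiple of q."""
    return np.round(x / q) * q

def local_smooth(x, K=4):
    """Median smoothing over a window of the K-1 samples on each side
    (window length 2K - 1, which is odd as medfilt requires)."""
    return medfilt(x, kernel_size=2 * K - 1)

def down_up_sample(x, factor=2):
    """Down-sample (e.g., 16kHz -> 8kHz), then recover the original length."""
    low = resample(x, len(x) // factor)
    return resample(low, len(x))
```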
We show that, due to different data properties, the autoencoder based defense cannot effectively recover the ground truth for adversarial audio and may also have negative effects on benign instances. Input transformation is less effective in defending adversarial audio than adversarial images. In addition, even when some input transformation is effective for recovering some adversarial audio data, we find that it is easy to perform adaptive attacks against it. The proposed TD method can effectively detect adversarial audio generated by different attacks targeting various learning tasks (classification and speech-to-text translation). In particular, we propose different types of strong adaptive attacks against the TD detection method. We show that these strong adaptive attacks are not able to generate effective adversarial audio against TD, and we provide some case studies to further understand the performance of TD. In our experiments, we measure effectiveness on several adversarial audio generation methods. For the audio classification attack, we used the Speech Commands dataset. For the speech-to-text attack, we benchmark each method on both the LibriSpeech and Mozilla Common Voice datasets. In particular, for the Commander Song attack BID36, we measure the generated adversarial audios provided by the authors.
Datasets: LibriSpeech dataset: LibriSpeech BID27 is a corpus of approximately 1000 hours of 16kHz English speech derived from audiobooks from the LibriVox project. We used samples from its test-clean set available on their website; the average duration is 4.294s. We generated adversarial examples using the attack method of Carlini & Wagner (2018). Mozilla Common Voice dataset: Common Voice is a large audio dataset provided by Mozilla. This dataset is public and contains samples of human speech audio files. We used the 16kHz-sampled data released by Carlini & Wagner (2018), whose average duration is 3.998s. The first 100 samples from its test set are used to mount attacks, which is the same attack experimental setup as in that work. Speech Commands dataset: The Speech Commands dataset BID34 is an audio dataset containing 65000 audio files. Each audio clip is a single command lasting one second. Commands are "yes", "no", "up", "down", "left", "right", "on", "off", "stop", and "go".
Model and learning tasks: For the speech-to-text task, we use the DeepSpeech speech-to-text transcription network, which is a biRNN based model with beam search to decode text. For the audio classification task, we use a convolutional speech commands classification model. For the Commander Song attack, we evaluate the performance on the Kaldi speech recognition platform. Genetic algorithm based attack against audio classification (GA): For the audio classification task, we consider the state-of-the-art attack proposed in BID0. Here an audio classification model is attacked, with audio classes including "yes, no, up, down, etc.". The attack aims to make the network misclassify an adversarial instance, in either a targeted or untargeted fashion. Commander Song attack against speech-to-text translation (Commander): Commander Song BID36 is a targeted speech-to-text attack which can attack audio extracted from popular songs. The adversarial audio can even be played over the air while retaining its adversarial characteristics. Since the Commander Song code is not available, we measure effectiveness on the generated adversarial audios provided by the authors.
Optimization based attack against speech-to-text translation (Opt): We consider the targeted speech-to-text attack proposed by Carlini & Wagner (2018), which uses the CTC-loss of a speech recognition system as the objective function and solves the task of adversarial attack as an optimization problem. Evaluation metrics: For a defense method such as input transformation, since it aims to recover the ground truth (original instances) from adversarial instances, we use the word error rate (WER) and character error rate (CER) BID19 as evaluation metrics to measure recovery efficiency. WER and CER are commonly used metrics measuring the error between recovered text and the ground truth at the word or character level. Generally speaking, the error rate (ER) is defined as ER = (S + D + I) / N, where S, D, I are the numbers of substitutions, deletions and insertions computed by dynamic string alignment, and N is the total number of words/characters in the ground-truth text. To fairly evaluate the effectiveness of these transformations against the speech-to-text attack, we also report the ratio of translation distance between an instance and its corresponding ground truth before and after transformation. For instance, as a controlled experiment, given an audio instance x (with adversarial instance denoted as x_adv), its corresponding ground truth y, and the ASR function g(·), we calculate the effectiveness ratio for benign instances as R_benign = D(g(T(x)), y) / D(g(x), y), where T(·) denotes the result of the transformation and D(·,·) is the distance function (WER and CER in our case). For adversarial audio, we calculate the similar efficiency ratio as R_adv = D(g(T(x_adv)), y) / D(g(x_adv), y). For the detection method, the standard evaluation metric is the area under the curve (AUC) score, which evaluates detection efficiency. The proposed TD method is the first data-specific metric to detect adversarial audio, and it focuses on how many adversarial instances are captured (true positives) without affecting benign instances (false positives). Therefore, we follow the standard criteria and report AUC for TD. For the proposed TD method, we compare the temporal dependency based on WER, CER, as well as the longest common prefix (LCP). LCP is a commonly used metric to evaluate the similarity between two strings. Given strings b_1 and b_2, the corresponding LCP is defined as max{k : b_1[1..k] = b_2[1..k]}, i.e., the length of their longest shared prefix. In this section, we evaluate our autoencoder based defense and input transformation defenses for the classification attack (GA) and the speech-to-text attacks (Commander and Opt). We summarize our results in TAB1 and list some basic findings. For Commander, due to the unreleased training data, we are not able to train an autoencoder. For GA and Opt, we have sufficient data to train autoencoders. Here we perform the primitive input transformations for audio classification targeted attacks and evaluate the corresponding effects. Due to space limitations, we defer the results of untargeted attacks to the supplemental material. GA: We first evaluate our input transformations against the audio classification attack (GA) in BID0. We implemented their attack with 500 iterations, limited the magnitude of the adversarial perturbation within 5 (smaller than the quantization step we used in the transformation), and generated 50 adversarial examples per attack task (more targets are shown in the supplementary material). The attack success rate is 84% on average. For ease of illustration, we use Quantization-256 as our input transformation.
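Since WER/CER and the effectiveness ratios drive all quantitative comparisons here, a self-contained sketch may help; the helper names are ours, and g (the ASR) and T (the transformation) are assumed callables:

```python
def edit_ops(ref, hyp):
    """Levenshtein alignment between token lists; returns S + D + I."""
    n, m = len(ref), len(hyp)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1): d[i][0] = i
    for j in range(m + 1): d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[n][m]

def error_rate(ref, hyp, char_level=False):
    """WER (word tokens) or CER (character tokens): ER = (S + D + I) / N."""
    ref_t = list(ref) if char_level else ref.split()
    hyp_t = list(hyp) if char_level else hyp.split()
    return edit_ops(ref_t, hyp_t) / max(len(ref_t), 1)

def effectiveness_ratio(g, T, x, y_true):
    """R = D(g(T(x)), y) / D(g(x), y), with D taken here as WER."""
    return error_rate(y_true, g(T(x))) / max(error_rate(y_true, g(x)), 1e-8)
```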
As observed in FIG2, the attack success rate decreased to only 2.1%, and 63.8% of the adversarial instances were converted back to their original (true) labels. We also measure the possible effects on original audio due to our transformation methods: the original audio classification accuracy without our transformation is 89.2%, and the rate only slightly decreased to 89.0% after our transformation, which means the effects of input transformation on benign instances are negligible. In addition, it also shows that for classification tasks, such input transformation is more effective in mitigating the negative effects of adversarial perturbations. A potential reason is that classification tasks do not rely on audio temporal dependency but focus on local features, while the speech-to-text task is harder to defend with the tested input transformations. Commander: We also evaluate our input transformation method against the Commander Song attack BID36, which implemented an air-to-API adversarial attack. In the paper, the authors reported a 91% attack detection rate using their defense method. We measured our Quan-256 input transformation on 25 adversarial examples obtained via personal communication. Based on the same detection evaluation metric as in BID36, Quan-256 attains a 100% detection rate, characterizing all the adversarial examples. Opt: Here we consider the state-of-the-art audio attack proposed by Carlini & Wagner (2018). We separately choose 50 audio files from two audio datasets (Common Voice, LIBRIS) and generate attacks based on the CTC-loss. We evaluate several primitive signal processing methods as input transformations under the WER and CER metrics in TAB1 and A2. We then also evaluate the WER and CER based effectiveness ratios mentioned before to quantify the effectiveness of each transformation. R_benign is shown in the brackets of the first two columns in TAB1 and A2, while R_adv is shown in the brackets of the last two columns within those tables. We compute our results using both the ground truth and the adversarial target "This is an adversarial example" as references. Here, a small R_benign close to 1 indicates that the transformation has little effect on benign instances, while a small R_adv indicates that the transformation is effective in recovering adversarial audio back to benign. Tables A1 and A2 show that most of the input transformations (e.g., Median-4, Downsampling and Quan-256) effectively reduce the adversarial perturbation without affecting the original audio too much. Although these input transformations show certain effectiveness in defending against adversarial audio, we find that it is still possible to generate adversarial audio via adaptive attacks, as shown in Section 4.4. Towards defending against (non-adaptive) adversarial images, MagNet BID25 has achieved promising performance by using an autoencoder to mitigate adversarial perturbations. Inspired by it, here we apply a similar autoencoder structure to audio and test whether such input transformation can be applied to defend against adversarial audio. We apply a MagNet-like method to the feature-extracted audio spectrum map: we build an encoder to compress the information of the original audio features into a latent vector z, then use z for reconstruction by passing it through another decoder network at the frame level, and combine the frames to obtain the transformed audio BID15. Here we analyze the performance of the autoencoder transformation for both the GA and Opt attacks.
We find that MagNet, which achieved great effectiveness in defending adversarial images in the oblivious attack setting BID4 BID23, has limited effect on audio defense. GA: Our results in TAB1 show that, against the classification attack, the autoencoder did not perform well, only reducing the attack success rate to 8.2% and losing to the other input transformation methods. Since the attack success rate can be reduced to 10% simply by destroying the original audio data and resorting to random guessing, it is hard to say that the autoencoder method performs well. Opt: We report that the autoencoder does not work well for transforming benign instances (57.6 WER on Common Voice compared to 27.5 WER without transformation; 30.0 WER on LIBRIS compared to 12.4 WER without transformation), and it also fails to recover adversarial audio (76.5 WER on Common Voice and 99.4 WER on LIBRIS). This shows that non-adaptive additive adversarial perturbations can bypass the MagNet-like autoencoder on audio, which implies that image and audio data have different robustness implications. In this section, we evaluate the proposed TD detection method on different attacks. We first report the AUC for detecting different attacks with TD to demonstrate its effectiveness, and we then provide some additional analysis and examples to help better understand TD. We only evaluate our TD method on the speech-to-text attacks (Commander and Opt), because the audio in the Speech Commands dataset for the classification attack is just a single command lasting one second, so its temporal dependency is not obvious. Commander: For the Commander Song attack, we directly examine whether the generated adversarial audio is consistent with its prefix of length k or not. We report that, using the TD method with k = 1/2, all the generated adversarial samples showed inconsistency and thus were successfully detected. Opt: Here we show the empirical performance of distinguishing adversarial audio by leveraging the temporal dependency of audio data. In the experiments, we use three metrics, WER, CER and LCP, to measure the inconsistency between S_k and S_{whole,k}. As a baseline, we also directly train a one-layer LSTM with 64 hidden feature dimensions on the collected adversarial and benign audio instances for classification. Some examples of translation results for benign and adversarial audio are shown in TAB2. Here we consider three types of adversarial targets: short - hey google; medium - this is an adversarial example; and long - hey google please cancel my medical appointment. We report the AUC scores of these detection results for k = 1/2 in TAB3. We can see that, using WER as the detection metric, the temporal dependency based method can achieve an AUC as high as 0.936 on Common Voice and 0.93 on LIBRIS. We also explore different values of k and observe that the results do not vary too much (detailed results can be found in Table A6 in the Appendix). When k = 4/5, the AUC score based on CER can reach 0.969, which shows that such a temporal dependency based method is indeed promising in terms of distinguishing adversarial instances. Interestingly, these results suggest that the temporal dependency based method is an easily implemented but effective method for characterizing adversarial audio attacks. In this section, we evaluate adaptive attacks against the defense and detection methods. Since the autoencoder based defense almost fails to defend against the different attacks, here we focus on the input transformation based defense and TD detection.
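A minimal sketch of the TD consistency check may clarify the pipeline; it reuses the error_rate helper from the WER sketch above, asr is an assumed callable returning a transcription string, and the threshold is a placeholder (the paper reports AUC over the raw scores rather than fixing a threshold):

```python
def td_inconsistency(asr, audio, k=0.5):
    """Transcribe the first k portion and the whole audio, then compare the
    prefix transcription S_k against the same-length word prefix of the
    whole-audio transcription S_{whole,k} via WER."""
    cut = int(len(audio) * k)
    S_k = asr(audio[:cut])                 # transcription of the prefix
    S_whole = asr(audio)                   # transcription of the whole audio
    S_whole_k = " ".join(S_whole.split()[:len(S_k.split())])
    return error_rate(S_whole_k, S_k)      # large value suggests adversarial

def detect(asr, audio, k=0.5, threshold=0.3):
    """Flag the input as adversarial when the two transcriptions disagree."""
    return td_inconsistency(asr, audio, k) > threshold
```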
Given that Opt is the strongest attack here, we mainly apply Opt to perform adaptive attacks against the speech-to-text translation task. We list the structure of our experiments in TAB4. For full results, please refer to the Appendix. Here we apply adaptive attacks against the preceding input transformations and thereby evaluate the robustness of the input transformations as defenses. We implemented our adaptive attack against three input transformation methods: quantization, local smoothing, and downsampling. For these transformations, we leverage a gradient-masking aware approach to generate adaptive attacks. In the optimization based attack, the attack is achieved by solving the optimization problem min_δ ||δ||_2^2 + c · l(x + δ, t), where δ refers to the perturbation, x the benign audio, t the target phrase, and l(·) the CTC-loss. The parameter c is iterated over to trade off the importance of being adversarial against remaining close to the original instance. For the quantization transformation, we assume the adversary knows the quantization parameter q. We then change the attack's objective function to min_δ ||qδ||_2^2 + c · l(x + qδ, t). After that, all the adversarial audios become resistant to the quantization transformation, with only a small increase in the magnitude of the adversarial perturbation, which is imperceptible to human ears. When q is large enough, the distortion increases, but the transformation process then also becomes ineffective due to too much information loss. For the downsampling transformation, the adaptive attack is conducted by performing the attack on the sampled elements of the original audio sequence. Since the whole process is differentiable, we can perform the adaptive attack through the gradient directly, and all the adversarial audios attack successfully. For the local smoothing transformation, average smoothing is also differentiable, so we can pass the gradient effectively. To attack the median smoothing transformation, we can simply route the gradient back to the median element and update its value, similar to backpropagation through a max-pooling layer. By implementing these adaptive attacks, all the smoothing transformations are shown to be ineffective. We chose our samples randomly from the LIBRIS and Common Voice audio datasets, with 50 audio samples each. We implemented our adaptive attack on these samples and passed them through the corresponding input transformations. We use downsampling from 16kHz to 8kHz, median/average smoothing with one-sided sequence length K = 4, and the quantization method with q = 256 as our input transformation methods. In Carlini & Wagner (2018), decibels (a logarithmic scale that measures the relative loudness of an audio sample) are applied as the measurement of the magnitude of perturbation: dB(x) = max_i 20 · log_10(x_i), where x is the sampled sequence of the adversarial audio. The relative perturbation is calculated as dB_x(δ) = dB(δ) − dB(x), where δ is the crafted adversarial noise. We measured our adaptive attack based on the same criterion. We show that all the adaptive attacks become effective with reasonable perturbation, as shown in Table 6. As suggested in Carlini & Wagner (2018), almost all the adversarial audios have distortion dB_x(δ) from −15dB to −45dB, which is tolerable to human ears. From Table 6, the added perturbations are mostly within this range. Adaptive Attacks Against Temporal Dependency Based Method: To thoroughly evaluate the robustness of the temporal dependency based method, we also perform strong adaptive attacks against it.
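As an illustration of the quantization-aware objective, here is one gradient step in PyTorch; ctc_loss_fn is an assumed differentiable callable wrapping the ASR model, and the step size and c are illustrative, not tuned values from the paper:

```python
import torch

def adaptive_quant_attack_step(delta, x, target, ctc_loss_fn,
                               q=256, c=1.0, lr=0.01):
    """One descent step on ||q*delta||_2^2 + c * CTC(x + q*delta, target),
    so the crafted perturbation survives rounding to multiples of q."""
    delta.requires_grad_(True)
    loss = (q * delta).pow(2).sum() + c * ctc_loss_fn(x + q * delta, target)
    loss.backward()
    with torch.no_grad():
        delta -= lr * delta.grad   # gradient descent on the perturbation
        delta.grad.zero_()
    return delta
```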
Notably, even if the adversary knows k, the adaptive attack is hard to conduct, because the detection process is non-differentiable. Therefore, we propose three types of strong adaptive attacks here, aiming to explore the robustness of the temporal consistency based method. Segment attack: Given the knowledge of k, we first split the audio into two parts: the prefix of length k of the audio, S_k, and the rest, S_{k−}. We then apply a similar attack to add perturbation only to S_k. We hope this audio can be attacked successfully without changing S_{k−}, since the second part would not receive gradient updates. Therefore, when performing the temporal-based consistency check, T(S_k) would be translated consistently with T(S_{whole,k}). Concatenation attack: To maximally leverage the information of k, we propose two ways to attack both S_k and S_{k−} individually, and then concatenate them together: 1. the target of S_k is the first k-portion of the adversarial target, and S_{k−} is attacked to the rest; 2. the target of S_k is the whole adversarial target, while we attack S_{k−} to be silence, meaning S_{k−} transcribes to nothing. This is different from the segment attack, where S_{k−} is not modified at all. Combination attack: To balance the attack success rate on both the sections and the whole sentence against TD, we use the attack objective min_δ ||δ||_2^2 + c · (l(x + δ, t) + l((x + δ)_k, t_k)), where x refers to the whole sentence. For the segment attack, we found that in most cases the attack cannot succeed: the attack success rate remains at 2% for the 50 samples in both the LIBRIS and Common Voice datasets, and some of the examples are shown in the Appendix. We conjecture the reasons are: 1. S_k alone is not enough to be attacked to the adversarial target, due to the temporal dependency; 2. the speech recognition result on S_{k−} cannot be applied to the whole recognition process and therefore breaks the recognition process for S_k. For the concatenation attack, we also found that the attack itself fails. That is, the transcribed result of adv(S_k) + adv(S_{k−}) differs from the translation of S_k + S_{k−}. Some examples are shown in the Appendix. The failure of the concatenation adaptive attack shows even more explicitly that temporal dependency plays an important role in audio: even if the separate parts are successfully attacked into the target, the concatenated instance again totally breaks the perturbation and therefore renders the adaptive attack inefficient. On the contrary, such concatenation has negligible effects on benign audio instances, which provides a promising direction for detecting adversarial audio. For the combination attack, we vary the section portion k_D used by TD and evaluate the cases where the adaptive attacker uses the same/different section k_A. We define Rand(a, b) as uniform sampling from [a, b]. We consider a stronger attacker, for whom k_A can be a set containing random sections. The detection results for different settings are shown in TAB5. From the results, we can see that when |k_A| = 1, if the attacker uses the same k_A as k_D to perform the adaptive attack, the attack can achieve relatively good performance, while if the attacker uses a different k_A, the attack will fail, with AUC above 85%. We also evaluate the case where the defender randomly samples k_D during detection and find that it is then very hard for the adaptive attacker to perform attacks, which can improve model robustness in practice. For |k_A| > 1, the attacker can achieve some attack success when the set contains k_D.
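The combination-attack objective above translates directly into a loss; a minimal PyTorch sketch (ctc_loss_fn is again an assumed differentiable callable, and t_prefix the truncated target) is:

```python
import torch

def combination_attack_loss(delta, x, t_full, t_prefix, ctc_loss_fn,
                            k=0.5, c=1.0):
    """Adaptive loss against TD: make the whole perturbed audio and its
    first k portion both transcribe to the (correspondingly truncated)
    target: ||delta||_2^2 + c * ( l(x+delta, t) + l((x+delta)_k, t_k) )."""
    adv = x + delta
    cut = int(adv.shape[-1] * k)
    return (delta.pow(2).sum()
            + c * (ctc_loss_fn(adv, t_full)
                   + ctc_loss_fn(adv[..., :cut], t_prefix)))
```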
But when |k_A| increases, the attacker's performance becomes worse. The complete results are given in the Appendix. Notably, the random-sample based TD appears to be robust in all cases. This paper proposes to exploit the temporal dependency property in audio data to characterize audio adversarial examples. Our experimental results show that while four primitive input transformations on audio fail to withstand adaptive adversarial attacks, temporal dependency is shown to be resistant to these attacks. We also demonstrate the power of temporal dependency for characterizing adversarial examples generated by three state-of-the-art audio adversarial attacks. The proposed method is easy to operate and does not require model retraining. We believe our results shed new light on exploiting unique data properties toward adversarial robustness. This work is partially supported by DARPA grant 00009970.
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
r1g4E3C9t7
Adversarial audio discrimination using temporal dependency
In order to alleviate the notorious mode collapse phenomenon in generative adversarial networks (GANs), we propose a novel training method for GANs in which certain fake samples can be reconsidered as real ones during the training process. This strategy can reduce the gradient value that the generator receives in the region where gradient exploding happens. We show that the theoretical equilibrium between the generator and discriminator actually can seldom be realized in practice. This results in an unbalanced generated distribution that deviates from the target one when fake datapoints overfit to real ones, which explains the non-stability of GANs. We also prove that, by penalizing the difference between discriminator outputs and considering certain fake datapoints as real for adjacent real and fake sample pairs, gradient exploding can be alleviated. Accordingly, a modified GAN training method is proposed, with a more stable training process and better generalization. Experiments on different datasets verify our theoretical analysis. In the past few years, generative adversarial networks (GANs) have been one of the most popular topics in generative models and have recently achieved great success in generating diverse and high-quality images. GANs are powerful tools for learning generative models, which can be expressed as a zero-sum game between two neural networks. The generator network produces samples from an arbitrary given distribution, while the adversarial discriminator tries to distinguish between real data and generated data. Meanwhile, the generator network tries to fool the discriminator by producing plausible samples which are close to real samples. When a final theoretical equilibrium is achieved, the discriminator can never distinguish between real and fake data. However, we show that a theoretical equilibrium often cannot be achieved with discrete finite samples in datasets during the training process in practice. Although GANs have achieved remarkable progress, numerous researchers have tried to improve the performance of GANs from various aspects, because of inherent problems in GAN training such as instability and mode collapse. It has been shown that the original GAN objective does not provide a theoretical generalization guarantee, via an analysis of the generalization capacity of the neural network distance: for a low-capacity discriminator, it cannot provide the generator enough information to fit the target distribution, owing to its inability to detect mode collapse. Later work argued that poor generation capacity in GANs comes from discriminators trained on discrete finite datasets, resulting in overfitting to real data samples and gradient exploding when generated datapoints approach real ones. As a result, a zero-centered gradient penalty on linear interpolations between real and fake samples (GAN-0GP-interpolation) was proposed to improve generalization capability and prevent mode collapse caused by gradient exploding. Recent work further studied generalization from the new perspective of privacy protection. In this paper, we focus on the mode collapse that results from gradient exploding, as studied in that prior work, and achieve better generalization with a much more stable training process.
Our contributions are as follows:
1. We show that a theoretical equilibrium, in which the optimal discriminator outputs a constant for both real and generated data, is unachievable for an empirical discriminator during the training process. Due to this fact, it is possible that gradient exploding happens when fake datapoints approach real ones, resulting in an unbalanced generated distribution that deviates from the target one.
2. We show that when generated datapoints are very close to real ones in distance, penalizing the difference between discriminator outputs and considering fake as real can alleviate gradient exploding to prevent overfitting to certain real datapoints.
3. We show that when more fake datapoints are moved towards a single real datapoint, the gradients of the generator on fake datapoints very close to the real one cannot be reduced, which partly explains the more serious overfitting phenomenon and the increasingly unbalanced generated distribution.
4. Based on the zero-centered gradient penalty on data samples (GAN-0GP-sample) proposed in prior work, we propose a novel GAN training method that considers some fake samples as real ones, according to the discriminator outputs in a training batch, to effectively prevent mode collapse. Experiments on synthetic and real world datasets verify that our method can stabilize the training process and achieve a more faithful generated distribution.
In the sequel, we use the terminologies of generated samples (datapoints) and fake samples (datapoints) interchangeably. Tab. 1 lists some key notations used in the rest of the paper:
- D_0: discriminator with the sigmoid function in the last layer removed
- D_r = {x_1, ..., x_n}: the set of n real samples
- D_g = {y_1, ..., y_m}: the set of m generated samples
- D_f = {f_1, ..., f_m}: the candidate set of M_1 generated samples to be selected as real
- D_FAR ⊂ {f_1, ..., f_m}: the set of M_0 generated samples considered as real
Unstability. GANs have been considered difficult to train and often behave unstably in the training process. Various methods have been proposed to improve the stability of training: many works have stabilized training with well-designed structures and better objectives. Gradient penalties to enforce Lipschitz continuity are also a popular direction to improve stability, including Gulrajani et al., Petzka et al. (2018), Roth et al., and Qi (2017). From the theoretical aspect, it has been shown that GAN optimization based on gradient descent is locally stable, and local convergence has been proved for simplified zero-centered gradient penalties under suitable assumptions. For better convergence, a two time-scale update rule (TTUR) and exponential moving averaging (EMA) (Yazıcı et al.) have also been studied. Mode collapse. Mode collapse is another persistent, essential problem for the training of GANs, meaning a lack of diversity in the generated samples. The generator may sometimes fool the discriminator by producing a very small set of high-probability samples from the data distribution. Recent work studied the generalization capacity of GANs and showed that the model distributions learned by GANs do miss a significant number of modes. A large number of ideas have been proposed to prevent mode collapse. Multiple generators have been applied to achieve a more faithful distribution. Mixed samples have been considered as the inputs of the discriminator to convey information on diversity. Recent work also studied mode collapse from a probabilistic treatment and from the entropy of the distribution.
In the original GAN, the discriminator D maximizes the following objective: E_{x∼p_r}[log D(x)] + E_{y∼p_g}[log(1 − D(y))] (Eqn. 1). The logistic sigmoid function σ(x) = 1/(1 + e^{−x}) is usually used in practice, leading to D(x) = σ(D_0(x)), and to prevent gradient collapse, the generator G maximizes L_G = E_{y∼p_g}[log σ(D_0(y))], where D_0 is usually represented by a neural network. It has been shown that the optimal discriminator D in Eqn. 1 is D*(x) = p_r(x) / (p_r(x) + p_g(x)). As the training progresses, p_g will be pushed closer to p_r. If G and D are given enough capacity, a global equilibrium is reached when p_r = p_g, in which case the best strategy for D on supp(p_r) ∪ supp(p_g) is just to output 1/2, and the optimal value for Eqn. 1 is 2 log(1/2). With finite training examples in the training dataset D_r in practice, an empirical version is applied to approximate Eqn. 1, using (1/n) Σ_i log D(x_i) + (1/m) Σ_i log(1 − D(y_i)), where x_i is from the set D_r of n real samples and y_i is from the set D_g of m generated samples, respectively. Mode collapse in the generator is attributed to gradient exploding in the discriminator. When a fake datapoint y_0 is pushed towards a real datapoint x_0 and |D(x_0) − D(y_0)| ≥ ε is satisfied, the absolute value of the directional derivative of D in the direction µ = x_0 − y_0 will approach infinity as ||x_0 − y_0|| → 0, leading to gradient exploding. Since the gradient ∇_{y_0} D(y_0) at y_0 outweighs gradients towards other modes in a mini-batch, gradient exploding at datapoint y_0 will move multiple fake datapoints towards x_0, resulting in mode collapse. Theoretically, the discriminator outputs a constant 1/2 when a global equilibrium is reached. However, in practice the discriminator can often easily distinguish between real samples and fake samples, making a theoretical equilibrium unachievable. Because the distribution p_r of the real data is unknown to the discriminator, the discriminator will always consider datapoints in the set D_r of real samples as real and those in D_g of generated samples as fake. Even when the generated distribution p_g is equivalent to the target distribution p_r, D_r and D_g are disjoint with probability 1 when they are sampled from two continuous distributions respectively. In this case, p_g is actually pushed towards samples in D_r instead of the target distribution. However, we show next that, because a theoretical equilibrium is unachievable for the empirical discriminator during the training process, an unbalanced distribution would be generated that deviates from the target distribution. Proposition 1. For the empirical discriminator in the original GAN, unless p_g is a discrete uniform distribution on D_r, the optimal discriminator output on D_r and D_g is not a constant 1/2, since there exists a more optimal discriminator which can be constructed as an MLP with O(2d_x) parameters. See Appendix A for the detailed proof. If all the samples in D_r can be remembered by the discriminator and the generator, and only if the generated samples cover D_r uniformly, which means D_g = D_r, can a theoretical equilibrium in the discriminator be achieved. However, before the generator covers all the samples in D_r uniformly during the training process, the unachievability of the theoretical equilibrium makes it possible that there exists a real datapoint x_0 with a higher discriminator output than that of a generated datapoint y_0. When y_0 approaches x_0 very closely, gradient exploding and overfitting to a single datapoint happen, resulting in an unbalanced distribution and visible mode collapse. See the generated results of the original GAN on a Gaussian dataset in Fig. 1a and 1e.
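To make the objectives above concrete, the following is a minimal PyTorch sketch of the empirical discriminator objective of Eqn. 1 and the non-saturating generator objective. The function names, and the convention that d0 returns the pre-sigmoid output D_0, are our own assumptions rather than code from the paper.

    import torch.nn.functional as F

    def discriminator_loss(d0, x_real, y_fake):
        # Negated empirical objective of Eqn. 1 with D = sigmoid(D_0):
        # -[ mean log sigmoid(D_0(x)) + mean log(1 - sigmoid(D_0(y))) ]
        # note: log(1 - sigmoid(t)) == logsigmoid(-t)
        return -(F.logsigmoid(d0(x_real)).mean()
                 + F.logsigmoid(-d0(y_fake)).mean())

    def generator_loss(d0, y_fake):
        # Non-saturating loss: maximize mean log sigmoid(D_0(y))
        return -F.logsigmoid(d0(y_fake)).mean()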
In Fig. 1a and 1e, the generated distribution neither covers the target Gaussian distribution nor fits all the real samples in D_r, making an unbalanced distribution visible. Furthermore, since in practice the discriminator and the generator are represented by neural networks with finite capacity and the dataset D_r is relatively huge, the generator can never memorize every discrete sample, which also makes a theoretical equilibrium unachievable. In the following subsections, we are interested in ways of stabilizing the output of the discriminator to alleviate gradient exploding and achieve a more faithful generated distribution. Let us first consider a simplified scenario where a fake datapoint y_0 is close to a real datapoint x_0. The generator updates y_0 according to the gradient that it receives from the discriminator with respect to the fake datapoint y_0, which can be computed as ∇_{y_0} L_G(y_0) = (1 − σ(D_0(y_0))) ∇_{y_0} D_0(y_0). When y_0 approaches x_0 very closely and a theoretical discriminator equilibrium is not achieved here, namely D_0(x_0) − D_0(y_0) ≥ ε, the absolute value of the directional derivative (∇_µ D_0)_{y_0} in the direction µ = x_0 − y_0 tends to explode and will outweigh the directional derivatives in other directions. When y_0 is very close to x_0, the norm of the gradient the generator receives from the discriminator with respect to y_0 can then be approximated as ||∇_{y_0} L_G(y_0)|| ≈ (1 − σ(D_0(y_0))) (D_0(x_0) − D_0(y_0)) / ||x_0 − y_0|| (Eqn. 6). If y_0 is in the neighborhood of x_0, i.e., ||x_0 − y_0|| ≤ δ, where δ is a small positive value, we call {x_0, y_0} a close real and fake pair. We are interested in reducing the approximated value of the gradient for a fixed pair {x_0, y_0} to prevent multiple fake datapoints overfitting to a single real one. Note that the output of D_0 for the real datapoint x_0 has a larger value than that for the fake datapoint y_0. So for a fixed pair {x_0, y_0}, when D_0(y_0) increases and D_0(x_0) − D_0(y_0) decreases, the target value decreases; and when D_0(y_0) decreases and D_0(x_0) − D_0(y_0) increases, the target value increases, according to Eqn. 6. Now we consider a more general scenario where, for a real datapoint x_0 in a set of n real samples, there are m_0 generated datapoints {y_1, y_2, ..., y_{m_0}} very close to x_0 in the set of m generated samples. We are especially interested in the optimal discriminator output at x_0 and {y_1, y_2, ..., y_{m_0}}. For simplicity, we make the assumption that discriminator outputs at these points of interest are not affected by other datapoints in D_r and D_g. We also assume the discriminator has enough capacity to achieve the optimum in this local region. However, without any constraint, the discriminator will consistently enlarge the gap between outputs for real datapoints and those for generated ones. Thus, an extra constraint is needed to alleviate the difference between discriminator outputs on real and fake datapoints. It comes naturally to penalize the L_2 norm of D_0(x_0) − D_0(y_i). Denoting the discriminator output for x_0, D_0(x_0), as ξ_0 and D_0(y_i) as ξ_i, i = 1, ..., m_0, we obtain an empirical discriminator objective whose term of interest f(ξ_0, ξ_1, ..., ξ_{m_0}) collects the local contributions of x_0 and {y_i} together with the penalty; the optimal outputs are characterized in Proposition 2 (proof in Appendix B). Based on Proposition 2, penalizing the difference between discriminator outputs on close real and fake pairs {x_0, y_i} can reduce the norm of ∇_{y_i} L_G(y_i) in Eqn. 6, making it possible to move fake datapoints to other real datapoints instead of leaving them trapped at x_0. However, in practice it is hard to find the close real and fake pairs and penalize the corresponding difference between discriminator outputs.
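As an illustration, the quantity ||∇_{y_0} L_G(y_0)|| discussed above can be measured directly with automatic differentiation. The following is a minimal sketch under our own naming assumptions (d0 returns the pre-sigmoid discriminator output).

    import torch
    import torch.nn.functional as F

    def generator_grad_norm(d0, y0):
        # Norm of the gradient the generator receives at a fake point y0,
        # i.e. ||grad_y log sigmoid(D_0(y))||, cf. Eqn. 6
        y = y0.detach().clone().requires_grad_(True)
        loss = F.logsigmoid(d0(y)).sum()
        (grad,) = torch.autograd.grad(loss, y)
        return grad.norm()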
If we directly penalize the L_2 norm of D_0(x_i) − D_0(y_i) when {x_i, y_i} is not a pair of close datapoints, ||∇_{y_i} L_G(y_i)|| for y_i may even get larger. Consider the case where D_0(y_i) has a higher value than D_0(x_i), which could happen when x_i has more corresponding close fake datapoints than the real datapoint x_{y_i} corresponding to y_i, by Proposition 2. Direct penalization will make D_0(y_i) lower; then D_0(x_{y_i}) − D_0(y_i) gets higher and ||∇_{y_i} L_G(y_i)|| higher. Thus, in practice we could enforce a zero-centered gradient penalty of the form ||(∇D_0)_v||^2 to stabilize the discriminator output, where v can be real datapoints or fake datapoints. Although it has been argued that the discriminator can have zero gradient on the training dataset and may still have gradient exploding outside the training dataset, we believe a zero-centered gradient penalty can make it harder for the discriminator to distinguish between real and fake datapoints, and can fill the gap between discriminator outputs on close real and fake pairs to prevent overfitting to some extent. Fig. 1b and 1f show an alleviated overfitting phenomenon compared with no gradient penalty in Fig. 1a and 1e. Prior work proposed another zero-centered gradient penalty of the form ||(∇D_0)_v||^2, where v is a linear interpolation between real and fake datapoints, to prevent gradient exploding. However, we consider it not a very efficient method to fill the gap between discriminator outputs on close real and fake pairs. To begin with, the result of a direct linear interpolation between real and fake datapoints may not lie in supp(p_r) ∪ supp(p_g). Although interpolation on latent codes has also been considered, it needs an extra encoder, which increases operational complexity. Furthermore, for an arbitrary pair of real and fake datapoints, the probability that the linear interpolation between them lies where gradient exploding happens is close to 0, because large gradients occur when a fake datapoint closely approaches a real datapoint, leaving the gap between discriminator outputs on close real and fake pairs hard to fill. Based on Proposition 2, we also find that when more fake datapoints are moved towards the corresponding real datapoint, ||∇_{y_i} L_G(y_i)|| for a fake datapoint y_i can only increase, according to Eqn. 6. It means that as training goes on, more fake datapoints tend to be attracted to one single real datapoint, and it becomes ever easier to attract many more fake datapoints to that real one. This partly explains the instability of the GAN training process: especially during the later stage of training, similar generated samples are seen. Compared with Fig. 1a, 1b and 1c at iteration 100,000, Fig. 1e, 1f and 1g at iteration 200,000 show a worse generalization, with much more similar samples generated as training goes on. In this subsection, we aim to make ||∇_{y_i} L_G(y_i)||, i = 1, ..., m_0, smaller for the optimal empirical discriminator by considering some fake samples as real on close real and fake pairs, based on the above discussions. Suppose each fake datapoint is considered as a real datapoint with probability p_0 when training on real datapoints, resulting in an empirical discriminator objective in which A is a binary random variable taking values in {0, 1} with Pr(A = 1) = p_0, and whose term of interest h(ξ_0, ξ_1, ..., ξ_{m_0}) again collects the local contributions. Note that only penalizing the difference between discriminator outputs on close real and fake pairs, as in Subsection 4.2, is just the special case of considering fake as real with p_0 = 0.
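The zero-centered penalty ||(∇D_0)_v||^2 discussed in Subsection 4.2 above is straightforward to implement. This is a minimal sketch with our own function and variable names.

    import torch

    def zero_centered_gradient_penalty(d0, v):
        # ||grad_v D_0(v)||^2 averaged over a batch v, where v may hold
        # real or fake datapoints (the GAN-0GP-sample style penalty)
        v = v.detach().clone().requires_grad_(True)
        (grads,) = torch.autograd.grad(d0(v).sum(), v, create_graph=True)
        return grads.flatten(1).pow(2).sum(dim=1).mean()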
Based on Proposition 3, considering fake datapoints as real with an increasing probability p_0 in the real-data part of training can reduce the norm of ∇_{y_i} L_G(y_i) in Eqn. 6. It means that when we consider more fake samples as real in the regions where large gradients happen, the attraction of the real datapoint on fake ones can be alleviated, making it easier for them to be moved to other real datapoints and preventing overfitting to one single real datapoint. Note that for a fixed p_0, when the number m_0 of fake datapoints very close to the real one increases, more fake datapoints will be considered as real, alleviating the influence of an increasing m_0 discussed in Subsection 4.2. To overcome the problem of overfitting to some single datapoints and achieve a better generalization, we propose that fake samples generated by the generator in real time can be trained as real samples in the discriminator. For the original N real samples in a training batch, we substitute them with N_0 real samples from D_r and M_0 generated samples from D_g, where N = N_0 + M_0. Our approach is mainly aimed at preventing large gradients in regions where many generated samples overfit one single real sample. To find generated samples in these regions, we choose the generated samples with low discriminator output, for the reason that the discriminator tends to have a lower output in a region where more generated datapoints approach one real datapoint, by Proposition 2. Therefore, we choose the needed M_0 generated samples, denoted as the set D_FAR, as real samples from a larger generated set D_f containing M_1 generated samples {f_1, f_2, ..., f_{M_1}}: D_FAR consists of the M_0 samples of D_f with the lowest discriminator outputs. We also add a zero-centered gradient penalty on real datapoints based on the discussions in Subsection 4.2, resulting in the empirical discriminator objective of Eqn. 11 for a batch containing N real samples and M fake samples. In practice, we usually let N = M. Because some fake datapoints are trained as real ones, the zero-centered gradient penalty is actually enforced on a mixture of real and fake datapoints. When we sample more generated datapoints for D_f to decide the needed M_0 datapoints considered as real, the probability of finding the overfitting regions with large gradients is higher. When more fake datapoints in D_FAR that are close to corresponding real ones are considered as real for training, it is equivalent to increasing the value of p_0 in Subsection 4.3. For a larger D_FAR, the number of real samples N_0 decreases for a fixed batch size N, and the speed of covering real samples may be slowed at the beginning of training, owing to the fact that some fake datapoints are considered as real, so the discriminator will not be so confident in giving fake samples a large gradient to move them towards real ones. Based on our theoretical analysis, our method can stabilize the discriminator output and efficiently prevent mode collapse caused by gradient exploding. A more faithful generated distribution is achieved in practice. To test the effectiveness of our method in preventing an unbalanced distribution resulting from overfitting to only some real datapoints, we designed a dataset with finite real samples coming from a Gaussian distribution and trained MLP-based GANs with different gradient penalties and with our method on that dataset. For the gradient penalties in all GANs, the weight λ is set to 10.
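A minimal sketch of this batch construction follows. The helper name, the z_dim argument, and the defaults (N_0 = 48, M_0 = 16, M_1 = 256, matching the synthetic-experiment settings reported below) are our own assumptions.

    import torch

    def build_real_batch(d0, g, x_real, z_dim, n0=48, m0=16, m1=256):
        # 'Real' batch for Eqn. 11: n0 true samples plus D_FAR, the m0
        # candidates (out of m1 generated ones) with the LOWEST
        # discriminator outputs, i.e. from likely overfitting regions
        with torch.no_grad():
            z = torch.randn(m1, z_dim, device=x_real.device)
            candidates = g(z)                        # the candidate set D_f
            scores = d0(candidates).view(-1)
            far = candidates[scores.argsort()[:m0]]  # D_FAR
        return torch.cat([x_real[:n0], far], dim=0)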
The training batch size is set to 64, and one quarter of the real training batch consists of generated samples picked from 256 generated samples according to the discriminator output, namely M_0 = 16 and M_1 = 256 in Eqn. 11. The learning rate is set to 0.003 for both G and D. The result is shown in Fig. 1. It can be observed that the original GAN, GAN-0GP-sample and GAN-0GP-interpolation all have a serious overfitting problem leading to a biased generated distribution as training goes on, while our method can generate much better samples with good generalization. We also test our method on a mixture of 8 Gaussians dataset where random samples in different modes are far from each other. The evolution of our method is depicted in Fig. 2. We observe that although our method only covers 3 modes at the beginning, it covers the other modes gradually, because it alleviates the gradient exploding on close real and fake datapoints. It is possible for fake datapoints to be moved to other Gaussian modes when the attraction of those modes is larger than that of the overfitted datapoints. Hence, our method has the ability to find the uncovered modes and achieve a faithful distribution, even when samples in high-dimensional space are far from each other. More synthetic experiments can be found in Appendix D. To test our method on real-world data, we compare our method with GAN-0GP-sample on CIFAR-10, CIFAR-100 and the more challenging ImageNet dataset, with ResNet architectures similar to those used in prior work. Inception score and FID are used as quantitative metrics. The FID score is evaluated on 10k generated images, and the statistics of the real data are calculated at the same scale as the generation. Better generation is indicated by a higher Inception score and a lower FID value. The maximum number of iterations for the CIFAR experiments is 500k, while for ImageNet it is 600k because of the training difficulty with many more modes. We build on publicly available code. The weight λ for the gradient penalty is also set to 10. The training batch size is set to 64 and, for a better gradient alleviation on close real and fake datapoints, half of the real training batch consists of generated samples, with M_0 = 32 and M_1 = 256 in Eqn. 11. For the CIFAR experiments, we use the RMSProp optimizer with α = 0.99 and a learning rate of 10^{-4}. For the ImageNet experiments, we use the Adam optimizer with α = 0, β = 0.9 and TTUR with learning rates of 10^{-4} and 3 × 10^{-4} for the generator and discriminator respectively. We use an exponential moving average with decay 0.999 over the weights to produce the final model. The results on Inception score and FID are shown in Fig. 3 and 4. Our method outperforms GAN-0GP-sample by a large margin. As predicted in Section 5, the speed of our method in covering real samples can be slowed at the beginning of training, with some fake samples considered as real. However, our method can cover more modes and achieves a much better balanced generation than the baseline. The losses of the discriminator and the generator during CIFAR-10 training are shown in Fig. 5. It can be observed that our method has a much more stable training process. Owing to the overfitting to some single datapoints and an unbalanced generated distribution missing modes, the losses of the discriminator and the generator for GAN-0GP-sample gradually deviate from the optimal theoretical values, namely 2 log 2 ≈ 1.386 for the discriminator and log 2 ≈ 0.693 for the generator. In contrast, our method has a much more stable discriminator output, keeping the losses of the discriminator and the generator very close to the theoretical case.
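The weight averaging mentioned above admits a very small sketch; this is a generic EMA update under our own naming, not the exact code used for the experiments.

    import copy
    import torch

    # g_ema = copy.deepcopy(g).eval().requires_grad_(False)  # frozen copy

    @torch.no_grad()
    def ema_update(g, g_ema, decay=0.999):
        # p_ema <- decay * p_ema + (1 - decay) * p, applied after each G step
        for p, p_ema in zip(g.parameters(), g_ema.parameters()):
            p_ema.mul_(decay).add_(p, alpha=1.0 - decay)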
These loss curves show in practice that our method stabilizes the discriminator output on close real and fake datapoints, preventing more datapoints from being trapped in a local region, and that it achieves a better generalization. Figure 6: Inception score and FID on ImageNet of GAN-0GP-sample and of GAN-0GP-sample with our method. The losses of the discriminator and the generator on CIFAR-100 and image samples can be found in Appendix D. For the challenging ImageNet task, we train GANs to learn a generative model of all 1000 classes at resolution 64 × 64, given the limitations of our hardware. Our models are completely unsupervised learning models with no labels used, in contrast to prior work where another 256 dimensions of the latent code z are used as labels. The results in Fig. 6 show that our method still outperforms GAN-0GP-sample on ImageNet. Our method can produce samples of state-of-the-art quality without using any category labels, and it stabilizes the training process. Randomly selected samples and the losses of the discriminator and the generator during the training process can be found in Appendix D. In this paper, we explain the reason that an unbalanced distribution is often generated in GAN training. We show that a theoretical equilibrium for the empirical discriminator is unachievable during the training process. We analyze the effect on the gradient that the generator receives from the discriminator of penalizing the difference between discriminator outputs on close real and fake pairs and of the trick of considering fake as real. Based on the theoretical analysis, we propose a novel GAN training method that considers some fake samples as real ones according to the discriminator outputs in a training batch. Experiments on diverse datasets verify that our method can stabilize the training process and improve the performance by a large margin. The empirical discriminator maximizes the empirical objective (1/n) Σ_i log D(x_i) + (1/m) Σ_i log(1 − D(y_i)). When p_g is a discrete uniform distribution on D_r, the generated samples in D_g are the same as the real samples in D_r. It is obvious that the discriminator outputs 1/2 to achieve the optimal value, since it cannot distinguish fake samples from real ones. For a continuous distribution p_g, it has been proved that an ε-optimal discriminator can be constructed as a one-hidden-layer MLP with O(d_x(m + n)) parameters, namely D(x) ≥ 1/2 + ε/2 for all x ∈ D_r and D(y) ≤ 1/2 − ε/2 for all y ∈ D_g, where D_r and D_g are disjoint with probability 1. In this case, the discriminator objective has a larger value than the theoretical optimal version, so the optimal discriminator output on D_r and D_g is not a constant 1/2 in this case. Even if the discriminator has far fewer parameters than O(d_x(m + n)), there exist a real datapoint x_0 and a generated datapoint y_0 satisfying D(x_0) ≥ 1/2 + ε/2 and D(y_0) ≤ 1/2 − ε/2. Whether p_g is a discrete distribution covering only part of the samples in D_r or a continuous distribution, there exists a generated datapoint y_0 with y_0 ∉ D_r. Assume that the samples are normalized. Let W_1 ∈ R^{2×d_x}, W_2 ∈ R^{2×2} and W_3 ∈ R^2 be weight matrices, b ∈ R^2 an offset vector, and k_1, k_2 constants. We can construct the needed discriminator as an MLP with two hidden layers containing O(2d_x) parameters, with the weight matrices set accordingly. For any input v ∈ D_r ∪ D_g, the discriminator output is computed by composing the layers with the sigmoid function σ(x) = 1/(1 + e^{−x}). Let α = W_1 v − b, which is bounded in terms of a constant l < 1 for the relevant inputs. Let β = σ(k_1 α); then, as k_2 → ∞, the output converges to the desired value.
Hence, for any input v ∈ D_r ∪ D_g, the discriminator outputs the desired value. In this case, the discriminator objective also has a more optimal value than the theoretical optimal version, so the optimal discriminator output on D_r and D_g is again not a constant 1/2. We now turn to the term f(ξ_0, ξ_1, ..., ξ_{m_0}). To achieve the optimal value, set ∂f/∂ξ_i = 0 for i = 0, ..., m_0. It is obvious that ξ_1 = ξ_2 = ... = ξ_{m_0} = ξ. Hence we can solve for ξ as a function of ξ_0; substituting Eqn. 28 into Eqn. 26 yields the optimality condition of Eqn. 29 for ξ_0, and Eqn. 28 and Eqn. 26 respectively give the remaining relations. Note that there must exist an optimal ξ_0 satisfying the stationarity condition in Eqn. 29. We next turn to the term h(ξ_0, ξ_1, ..., ξ_{m_0}). Setting ∂h/∂ξ_i = 0 for i = 0, ..., m_0, we again have ξ_1 = ξ_2 = ... = ξ_{m_0} = ξ, together with 1 − σ(ξ_0) − 2nk_0(ξ_0 − ξ) = 0, from which ξ can be solved in closed form. The derivative of g(ξ_0) with respect to ξ_0 satisfies g'(ξ_0) > 0. Hence ξ*_0 increases as p_0 increases. From Eqn. 33, we further know that ξ* increases and ξ*_0 − ξ* decreases as p_0 increases. Figure 7: Losses of the discriminator (not including the regularization term) and the generator on CIFAR-100 for GAN-0GP-sample and GAN-0GP-sample with our method. For the synthetic experiments, the network architectures are the same as in prior work, while for the real-world data experiments we use similar ResNet-based architectures. We use PyTorch for development.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HyxsR24tvS
We propose a novel GAN training method that considers certain fake samples as real to alleviate mode collapse and stabilize the training process.
We present a tool for Interactive Visual Exploration of Latent Space (IVELS) for model selection. Evaluating generative models of discrete sequences from a continuous latent space is a challenging problem, since their optimization involves multiple competing objective terms. We introduce a model-selection pipeline to compare and filter models throughout consecutive stages of increasingly complex and expensive metrics. We present the pipeline in an interactive visual tool to enable the exploration of the metrics, analysis of the learned latent space, and selection of the best model for a given task. We focus specifically on the variational auto-encoder family in a case study of modeling peptide sequences, which are short sequences of amino acids. This task is especially interesting due to the presence of multiple attributes we want to model. We demonstrate how an interactive visual comparison can assist in evaluating how well an unsupervised auto-encoder meaningfully captures the attributes of interest in its latent space. Unsupervised representation learning and generation of text from a continuous space is an important topic in natural language processing. This problem has been successfully addressed by variational auto-encoders (VAE) BID16 and variations, which we will introduce in Section 2. The same methods are also relevant to areas like drug discovery, as therapeutic small molecules and macromolecules (nucleic acids, peptides, proteins) can be represented as discrete linear sequences, analogous to text strings. Our case study of interest is modeling peptide sequences. In the VAE formulation, we define the sequence representation as a latent variable modeling problem of inputs x and latent variables z, where the joint distribution p(x, z) is factored as p(z)p_θ(x|z) and the inference of the hidden variable z for a given input x is approximated through an inference network q_φ(z|x). The auto-encoder training typically aims to minimize two competing objectives: (a) reconstruction of the input and (b) regularization in the latent space. Term (b) acts as a proxy for two real desiderata: (i) "meaningful" representations in latent space, and (ii) the ability to sample new datapoints from p(x) through p(z)p_θ(x|z). These competing goals and objectives form a fundamental trade-off, and as a consequence, there is no easy way to measure the success of an auto-encoder model. Instead, measuring success requires careful consideration of multiple different metrics. The discussion of the metrics is in Section 2.2, and they will be incorporated in the IVELS tool (Sections 5.1 and 5.2). For generating discrete sequences while controlling user-specific attributes, for example peptide sequences with specific functionality, it is crucial to consider conditional generation. Figure 1: Overview of the IVELS tool. In every stage, we can filter the models to select the ones with satisfactory performance. In the first stage, models can be compared using the static metrics that are typically computed during training (left). In the second stage, we investigate the activity vs noise of the learned latent space (top right) and evaluate whether we can linearly separate attributes (not shown). During the third stage, the tool enables interactive exploration of the attributes in a 2D projection of the latent space (bottom right). The most straightforward approach would be limiting the training set to those sequences with the desired attributes.
However, this would require large quantities of data labeled with exactly those attributes, which is often not available. Moreover, the usage of models trained on a specific set of labeled data will likely be restricted to that domain. In contrast, unlabeled sequence data is often freely available. Therefore, a reasonable approach for model training is to train a VAE on a large corpus without requiring attribute labels, then leverage the structure in the latent space for conditional generation based on attributes which are introduced post-hoc. As a prerequisite for this goal, we focus on how q_φ(z|x) encodes the data with specific attributes. We introduce the encoding of the data subset corresponding to a specific attribute, i.e. the subset marginal posterior, in Section 3. This will be important in the IVELS tool (Sections 5.3 and 5.4). Now that we have introduced our models (the VAE family), the importance of conditioning on attributes, and our case study of interest (peptide generation), we turn to the focus of our paper. To assist in the model selection process, we present a visual tool for interactive exploration and selection of auto-encoder models. Instead of selecting models by one single unified metric, the tool enables a machine learning practitioner to interactively compare different models, visualize several metrics of interest, and explore the latent space of the encoder. This exploration is built around distributions in the latent space of data subsets, where the subsets are defined by the attributes of interest. We will quantify whether a linear classifier can discriminate attributes in the latent space, and enable visual exploration of the attributes with 2D projections. The setup allows the definition of new ad-hoc attributes and sets to assist users in understanding the learned latent space. The tool is described in Section 5. In Section 6, we discuss some observations we made using IVELS as it relates to our specific domain of peptide modeling and different variations of VAE models. We approach the unsupervised representation learning problem using auto-encoders (AE) BID13. This class of methods maps an input x to a continuous latent space z ∈ R^D from which the input has to be reconstructed. Regular AEs can learn representations that lead to high reconstruction accuracy when they are trained with sparsity constraints, but they are not suitable for sampling new data points from their latent z-space. On the other hand, variational auto-encoders (VAE) BID16, which frame an auto-encoder in a probabilistic formalism that constrains the expressivity of z, allow for easy sampling. In a VAE, each sample x defines an encoding distribution q_φ(z|x), and this encoder distribution is constrained to be close to a simple prior distribution p(z). We consider the case of the encoder specifying a diagonal Gaussian distribution only, i.e. q_φ(z|x) = N(z; µ(x), Σ(x)) with Σ(x) = diag[exp(log σ²_d(x))]. The encoder neural network produces the log variances log σ²_d(x). The standard VAE objective is defined as follows (where D_KL is the Kullback-Leibler divergence): L_VAE = E_{q_φ(z|x)}[− log p_θ(x|z)] + D_KL(q_φ(z|x) || p(z)). We explore a number of popular model variations for the problem of modeling peptide sequences. With the standard VAE, we observe the same posterior collapse as detailed for natural language in BID3, meaning q(z|x) ≈ p(z), such that no meaningful information is encoded in z-space.
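As a concrete reference, here is a minimal sketch of L_VAE for token sequences under a diagonal Gaussian encoder; the tensor shapes and the function name are our own assumptions.

    import torch
    import torch.nn.functional as F

    def vae_loss(recon_logits, x, mu, log_var, beta=1.0):
        # recon_logits: (batch, seq_len, vocab); x: (batch, seq_len) token ids
        # Reconstruction NLL, summed over sequence positions
        rec = F.cross_entropy(recon_logits.transpose(1, 2), x,
                              reduction="none").sum(dim=1)
        # Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) )
        kl = 0.5 * (mu.pow(2) + log_var.exp() - 1.0 - log_var).sum(dim=1)
        # beta = 1 recovers L_VAE; beta < 1 gives the beta-VAE variant below
        return (rec + beta * kl).mean()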
To address this issue, we introduce a multiplier β on the weight of the second (KL) term BID12, i.e. a β-VAE with β < 1. BID0 analyze the representation vs. reconstruction trade-off with a rate-distortion (RD) curve, which also provides a justification for tuning β to achieve a different trade-off along the RD curve. We also considered two major variations in the VAE family: Wasserstein auto-encoders (WAE) BID29 and adversarial auto-encoders (AAE) BID21. WAE factors an optimal transport plan through the encoder-decoder pair, under the constraint that the marginal posterior q_φ(z) = E_{x∼p(x)} q_φ(z|x) equals a prior distribution, i.e. q_φ(z) = p(z). This is relaxed to an objective similar to L_VAE above, but with the per-sample D_KL regularizer replaced by a divergence regularizing a whole minibatch as an approximation of q_φ(z). In WAE training with maximum mean discrepancy (MMD) or with a discriminator (= AAE), we found a benefit in regularizing the encoder variance as in BID24 BID1. For MMD, we used a random-features approximation of the Gaussian kernel BID22. In terms of model architectures, the default setting is a bidirectional GRU encoder and a GRU decoder. Skip-connections can be introduced between the latent code z and the decoder output BID7, which was motivated by avoiding latent variable collapse. Alternatively, one can replace the standard recurrent decoder with a deconvolutional decoder BID30, which makes the decoder non-autoregressive, thus forcing it to rely more on z. The starting point of any model evaluation is the set of typical numerical metrics logged during training. Here we consider the following metrics:
• Reconstruction log likelihood log p_θ(x|z) and D_KL(q(z|x)||p(z)), computed over the heldout set.
• MMD between q_φ(z) (the average over heldout encodings q(z|x)) and the prior p(z).
• Encoder log σ²_j(x), averaged over heldout samples and over components j = 1...D. Large negative values indicate the encoder has collapsed to a deterministic one.
• Reconstruction BLEU score on heldout samples.
• Perplexity evaluated by an external language model for samples from the prior z ∼ p(z) or from heldout encodings z ∼ q_φ(z|x).
All of our sample-based metrics (BLEU, perplexity) use beam search as the decoding scheme. Metric of interest 2: Activity and encoder variance. We use (unit) activity as a proxy to investigate how many latent dimensions encode useful information about the input. We will extend the concept to evaluate whether the marginal posterior q_φ(z) is far from the prior. Active units are defined as the number of dimensions d = 1...D where the activity A_d = Var_{x∼p(x)}[µ_d(x)] is above a threshold BID4. The activity tells us whether the encoder mean µ_φ(x) varies over observations. To expand on this notion, we follow BID18 and focus on the marginal posterior q_φ(z). In the usual parametrization, where the encoder specifies the mean µ_φ(x) and diagonal covariance Σ_φ(x) of a Gaussian distribution, the total covariance is given by the law of total covariance: Cov_{q_φ(z)}[z] = Cov_{p(x)}[µ_φ(x)] + E_{p(x)}[Σ_φ(x)]. This tells us that the d-th diagonal element of the covariance matrix Cov_{q_φ(z)}[z] is the sum of the activity A_d and the average encoder variance E_x[σ²_d(x)] (i.e. how much noise is injected along this dimension on average). To satisfy q_φ(z) = p(z), we need at least the first and second order moments of q_φ(z) to match those of p(z), and thus we need Cov_{q_φ(z)}[z] ≈ I. Inspecting the two terms of Cov_{q_φ(z)}[z] along the diagonal can thus tell us (i) an obvious way to violate closeness to the prior and (ii) whether the covariance is dominated by activity or encoder uncertainty. In this work, we focus on unconditionally trained VAE models.
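The diagonal decomposition above is easy to compute from encoder outputs; a minimal numpy sketch (our own naming, with an assumed activity threshold) follows.

    import numpy as np

    def activity_and_noise(mu, log_var, threshold=1e-2):
        # mu, log_var: (num_samples, D) encoder outputs on a heldout set
        activity = mu.var(axis=0)                   # A_d = Var_x[mu_d(x)]
        avg_enc_var = np.exp(log_var).mean(axis=0)  # E_x[sigma_d^2(x)]
        # diag of Cov_q(z)[z] is their sum; prior matching wants values near 1
        total = activity + avg_enc_var
        active_units = int((activity > threshold).sum())
        return activity, avg_enc_var, total, active_units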
Even though several approaches have been proposed to incorporate attribute or label information during VAE training BID17 BID26, they require all labels to be present at training time, or require a strategy to deal with missing labels to enable semi-supervised learning. To avoid this, we follow BID8 and aim to train a sequence auto-encoder model unconditionally, relying on the structure of the latent z-space to define the attributes post-hoc. This process also eliminates the need for retraining the VAE when new labels are acquired or new attributes are defined. Specifically, from the interactivity standpoint, this enables end users or model trainers to interactively specify new attribute groups or subsets of the data. We aim to enable attribute-conditioned generation by understanding the learned latent space of a model. Let the attributes be a = [a_1, ..., a_n], with each attribute a_i taking a value y ∈ A_i (typically: positive, negative, not available). Since the probability of the intersection of n attributes can be small or zero, we focus on conditioning on a single attribute at a time. In general, we define a subset S of our dataset as those datapoints where attribute a_i = y and denote the corresponding distribution as p_S(x) or p_{a_i=y}(x) = p(x|a_i = y). By focusing on the subset S defined by selecting on a_i = y, we have the flexibility to define new subsets online with the same notation. In the auto-encoder formulation and the variants we discussed, an important object is the marginal posterior q_φ(z) = E_{x∼p(x)} q_φ(z|x), which was introduced as the aggregate posterior by BID21. This distribution is central to the analysis of BID14 as well as to WAE, which relies on q_φ(z) = p(z). Let us now define the marginal posterior for subset S with distribution p_S(x): q^S_φ(z) = E_{x∼p_S(x)} q_φ(z|x). The subset marginal posterior is an essential distribution for our analysis, as it tells us how the distribution corresponding to a specific attribute is encoded in the latent space. Since we aim to be able to sample from q^S_φ(z) conditionally, we require the distribution to have two properties. First, q^S_φ(z) needs to be distinct from the distribution q_φ(z) (Aim 1). We found that the underlying data-generating distributions of labeled and unlabeled data do not necessarily match for biological applications. Since there might be an underlying reason why a data point has not been labeled, a model should learn which points are of interest for a particular attribute. The second aim is that q^S_φ(z) should have sufficient regularity to capture the distribution with simple density modeling (Aim 2). Being able to discriminate between different attribute labels within the same category is crucial when aiming to generate sequences with a particular property. To be able to analyze arbitrary subsets in an interactive exploration of q^S_φ(z), we focus on the following two metrics of interest. Metric of interest 3: Attribute discriminability in the latent space. This metric addresses the question of whether we can easily discriminate the subset S (corresponding to attribute a_i) in the latent space. To address the aims introduced above, we consider two approaches to define S and S′:
• a_i available vs a_i not available (Aim 1).
• a_i = y vs a_i = y′, two different attribute labels (e.g., y = positive, y′ = negative) (Aim 2).
Metric of interest 4: 2D projections of the full marginal posterior q_φ(z) and the place of each subset marginal posterior q^S_φ(z) in it.
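Sampling from q^S_φ(z) is straightforward once the subset is selected; the following minimal sketch (our own naming) draws Monte-Carlo samples from the subset marginal posterior.

    import numpy as np

    def subset_posterior_samples(mu, log_var, attr_values, y):
        # Keep only samples whose attribute equals y (the subset S), then
        # draw z ~ N(mu(x), diag(sigma^2(x))) for each kept sample
        mask = attr_values == y
        m, lv = mu[mask], log_var[mask]
        return m + np.exp(0.5 * lv) * np.random.randn(*m.shape)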
While static metrics can provide an intuition for the quality of the latent space, we further aim to analyze the well-formedness of the space. Thus, we investigate how attributes cluster visually in 2D projections for different models. Peptides are single linear chains of amino acids. We consider sequences that are composed of the twenty natural amino acids, i.e., our vocabulary consists of 20 different characters. The length of the sequences is restricted to ≤ 25. Depending on the amino acid composition and distribution, peptides can have a range of biological functions, e.g., antimicrobial, anticancer, hormonal, and are therefore useful in therapeutic and material applications. Latent variable models such as VAEs have been successfully applied for capturing higher-order, context-dependent constraints in biological peptide sequences BID23, for semi-supervised generation of antimicrobial peptide sequences BID6, and for revealing distinct cancer-specific methylation patterns of DNA sequences BID28. BID10 have used VAE models for the automatic design of chemicals from SMILES strings. In this study, we focus on comparing the latent spaces learned by modeling peptide sequences at the character level with VAE and its variants. Furthermore, we investigate the feasibility of using a latent space learned from a large corpus of unlabeled sequences to track representative, distinct functional families of peptides. For this purpose, we mainly focus on five biological functions or attributes of peptides: antimicrobial (AMP), anticancer, hormonal, toxic and antihypertensive. Frequently, these peptides are naturally expressed in a variety of host organisms and are therefore identified as the most promising alternative to conventional small molecule drugs. As an example, given the global emergence of multidrug-resistant bacteria or "superbugs" and a dry discovery pipeline of new antibiotics, AMPs are considered exciting candidates for future infection treatment strategies. Water solubility is also considered as an additional attribute. Our labeled dataset comprises sequences with different attributes curated from a number of publicly available databases BID25 BID9 BID15 BID2 BID11; below we provide details of the labeled dataset. The labeled data are augmented with the unlabeled dataset from Uniprot-SwissProt and Uniprot-Trembl BID5, totaling ∼1.7M datapoints. The data is split into separate train, valid, and test sets corresponding to 80%, 10% and 10% of the data, respectively. Data for which an attribute is present is upsampled with a factor of 20× during training. DISPLAYFORM0 Our tool aims to support the role of a model trainer as described by BID27. This role does not assume extensive domain knowledge, but an understanding of the model itself. As such, we
The details of the models appearing in the figures are found in Appendix A. The first level FIG2 is a side by side view of a user-specified subset of the metrics logged during training (rows) across multiple models (columns). In our example, we select only the model checkpoints from the final epoch of each training run. These metrics would be typically inspected by a model trainer in graphs over time, for example in tensorboard. However, while graphs over time excel at communicating for a single metric, comparing models across multiple metrics is challenging and time-consuming. By showing an arbitrary number of metrics at once, the IVELS tool enables the selection of promising models for further analysis. The tool further simplifies the selection process through the ability to sort the columns by different metrics. Sorting makes it easy to select models that achieve at least a certain performance threshold. FIG3 presents the encoding of Metric of Interest 2, i.e. the diagonal of Cov q φ (z) [z]. For interpretability, we sort the latent dimensions according to decreasing activity. This visual representation allows inspection of the balance between activity (encoder mean changing depending on observations), and average encoder variance (how much the noise is dominating). We discuss the observations for different models in Section 6.2. Figure 4: Level 2.2. Attribute discriminability in the latent space. Despite being trained fully unsupervised, all models successfully encode multiple attributes in z. Given that we established that z is actively encoding information in the first part of level 2, the second part of level 2 aims to evaluate whether we can linearly separate attributes within the learned space. This is a prerequisite for level 3, which assumes that the encodings can be related to the attributes in a meaningful way. Figure 4 presents the for the models across the attributes that are available in the dataset. Following the Metric of Interest 3, we differentiate between positive and negative labels (indicated by lab) and labeled and unlabeled samples (indicated by between). For each of these scenarios, we train a logistic regression on z of the training set and evaluate it on the training, validation and test sets. To account for a dynamic number of different labels, the for lab are the accuracy, whereas between is measured in AUC. The y axis is scaled in. We allow the user to select either t-SNE BID19 or linear projection on axes of interest (PCA). To visualize the different attributes, we enable color-coding in two modes: show labels and compare labels. " Show labels" will color-code the different values a single attribute can assume, in our case positive and negative. " Compare labels" allows to select two different attributes with a specific value. Using this mode, we can for example examine whether there is a section of the latent space that has learned both soluble and AMP peptides. Should a data point have been annotated with both of the selected values, it is color-coded with a separate color. 6 DISCUSSION AND From stage 2 (Fig. 4) it is encouraging to see that in this unconditionally trained model the different attributes are salient as judged by a simple linear classifier trained on the latent dimensions. Some general trends appear. We can observe generally high performance across all models, which is promising for conditional sampling. One difference of note is that the β-VAE and AAE models perform worse on the AMP attribute. 
The discriminators for attributes with limited annotation, specifically water solubility, overfit on the training set, which indicates a need for further annotation efforts. The results further demonstrate that toxicity is more challenging than the remaining attributes, despite being the second-most common attribute in the dataset. These results set the stage for further investigation of the latent space. Fig. 5 shows the t-SNE projections of the latent space learned using three different models (Level 3). Two distinct attributes, positive antimicrobial (amp_amp_pos, in blue) and antihypertensive (antihyper_antihyper, in red), are also shown. Interestingly, these two attributes are better separated in the latent space obtained using AAE and WAE (Fig. 5, middle and right) compared to that from the VAE (Fig. 5, left). From the biological perspective, antihypertensive peptides are distinct in terms of length, amino acid composition, and mode of action. Antimicrobial peptides are typically longer than the antihypertensive ones. Also, antimicrobial peptides function by disrupting the cell membrane, while antihypertension properties originate from enzyme inhibition BID20. From the Cov_{q_φ(z)}[z] plot (Level 2.1, FIG3), we can observe that the VAE (column 3) suffers from posterior collapse, since it has no active units. We can further see that the β-VAE (columns 1 and 2) addresses the collapse issue, with about half of its dimensions in z-space active. Interestingly, the dimensions that are not active become dominated by encoder variance, such that the total variance for each dimension is close to 1. The skip-connection added to the GRU (2nd column) leads to slightly higher activity around the tail of the active dimensions, though the difference is minor. The WAE and AAE (columns 4 and 5) have relatively little encoder variance, meaning they are almost deterministic. Notably, the WAE covariance is furthest away from the prior. From the t-SNE plots (Fig. 5) we see that the WAE and AAE show good clustering of attributes like positive antimicrobial and antihypertensive, showing that the latent space clearly captures those attributes, even though they were not incorporated at training time. We presented a tool for Interactive Visual Exploration of Latent Space (IVELS) for model selection, focused on auto-encoder models for peptide sequences. Even though we present the tool with this use case, the principle is generally useful for models which do not have a single metric to compare and evaluate. With some adaptation to the model and metrics, this tool could be extended to evaluate other latent variable models, either for sequences or images, speech synthesis models, etc. In all those scenarios, having a usable, visual and interactive tool for model architects and model trainers will enable efficient exploration and selection of different model variations. The results from this evaluation can further guide the generation of samples with the desired attribute(s).
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
Hkl8EILFdN
We present a visual tool to interactively explore the latent space of an auto-encoder for peptide sequences and their attributes.
Neural networks trained through stochastic gradient descent (SGD) have been around for more than 30 years, but they still escape our understanding. This paper takes an experimental approach, with a divide-and-conquer strategy in mind: we start by studying what happens in single neurons. While being the core building block of deep neural networks, the way they encode information about the inputs and how such encodings emerge is still unknown. We report experiments providing strong evidence that hidden neurons behave like binary classifiers during training and testing. During training, analysis of the gradients reveals that a neuron separates two categories of inputs, which are impressively constant across training. During testing, we show that the fuzzy, binary partition described above embeds the core information used by the network for its prediction. These observations bring to light some of the core internal mechanics of deep neural networks, and have the potential to guide the next theoretical and practical developments. Deep neural networks are methods full of good surprises. Today, to perform image classification, one can train a 100M-parameter convolutional neural network (CNN) with 1M training examples. Beyond raising questions about generalization, it appears that the classification models derived from those CNNs offer object detectors for free, simply by thresholding activation maps BID22; BID5. The learned representations also appear to be universal enough to be re-used on new tasks, even in an entirely different domain (e.g. from natural to medical images in BID10). If memory or computation are bottlenecks, no problem: networks with binary weights and binary activations work just as well BID23. What characteristics of SGD-trained neural networks allow these intriguing behaviours to emerge? Deep neural networks also have their limitations. They currently pose lots of difficulties with respect to continuous learning BID15, robustness BID27 BID22, and unsupervised learning BID6. Are there other good surprises to expect in those fields, or do those difficulties correspond to fundamental limitations of SGD-trained deep neural networks? In order to answer both questions, a better understanding of deep neural networks is definitely needed. Since the intricate nature of the network hinders theoretical developments, we believe experiments offer a valuable alternative path to gain insight into the key mechanisms supporting the success of neural networks, thereby paving the way for both future theoretical and practical developments. In other words: analysing how something works helps understanding why it works, and gives ideas to make it work better. In particular, the workings of hidden neurons, while they are the core building block of deep neural networks, are still a mystery. It is tempting to associate hidden neurons with the detection of semantically relevant concepts. Accordingly, many works studying neurons have focused on their interpretability. A common and generally admitted conception consists in considering that they represent concepts with a level of abstraction that grows with the layer depth BID19. This conception has been supported by several works showing that intermediate feature maps in convolutional neural networks can be used to detect higher-level objects through simple thresholding BID22; BID5.
However, it is not clear if these observations reflect the entire relevant information captured by that feature map, or, on the contrary, if this interpretation ignores important aspects of it. In other words, the complete characterization of the way a neuron encodes information about the input remains unknown. Moreover, the dynamics of training that lead to the encoding of information used by a neuron are, to our knowledge, unexplored. This paper uses an experimental approach that advances the understanding of both these aspects of neurons. The main finding of our paper is the following: the encodings and dynamics of a neuron can approximately be characterized by the behaviour of a binary classifier. More precisely: 1. During training, we observe that the sign of the partial derivative of the loss with respect to the activation of a sample in a given neuron is impressively constant (except when the neuron is too far from the output layer). We observe experimentally that this leads a neuron to push the activation of samples either up or down, partitioning the inputs into two categories of nearly equal size. 2. During testing, quantization and binarization experiments show that the fuzzy, binary partition observed in point 1 embeds the core information used by the network for its predictions. This surprisingly simple behaviour has been observed across different layers, different networks and at different problem scales (MNIST, CIFAR-10 and ImageNet). It seems like hidden neurons have a clearly defined behaviour that naturally emerges in neural networks trained with stochastic gradient descent. This behaviour has, to our knowledge, remained undiscovered until now, and raises intriguing questions to address in future investigations. Previous works trying to understand the function of a neuron focus on its interpretability in terms of semantically relevant concepts. In the context of convolutional neural networks for image classification, several recent works have investigated how the activation of a single neuron is related to the input image, by developing methods to visualize the image structures that activate a neuron the most. Those methods include the training of a deconvolution network to project the feature activations back to the input pixel space, and the analysis of how a neuron activation decreases when occluding portions of the input image, revealing which parts of the scene are important for this neuron activation. Inverse problem formulations have also been considered to reconstruct an image by inverting a representation obtained inside the network, using a gradient-descent approach regularized by different kinds of image models BID20 BID22. More recently, BID5 went a step further by developing methods to quantify the interpretability of the signal extracted through the previously described visualization methods. All those works conclude that (some of) the individual neurons have the capability to capture visually consistent structures. The fact that object detection emerges when considering units with the highest activation inside a CNN trained to recognize scenes supports the idea that a binary form of encoding is embedded within the trained network. However, it is not clear if these observations reflect the entire relevant information captured by the studied feature map.
Moreover, investigating further the emergence of concepts in neurons is also motivated by the observation that the object detection technique only works on a subset of the feature maps, leaving the understanding of the others as an open question. Our paper leaves interpretability behind, but provides experiments for the validation of a complete description of the encoding of information in any neuron. Since the idea of binary encoding is central to our work, it is also related to works considering network binarization in a power-consumption context, to mitigate the computational and memory requirements of convolutional networks. In BID8, only the weights are constrained to two possible values while, in BID23 and BID12, both the filters and the inputs to convolutional layers are approximated with binary values. The fact that those methods induce only a negligible loss in accuracy reveals that the conventional continuous definition of activations is certainly redundant. Motivated by those previous observations, our work further challenges the binary nature of individual neurons. It does not force binary activations during training, but instead reveals that a bimodal activation pattern naturally emerges from a conventional training procedure. While such an observation has already been presented in BID1 for ReLU networks, we go further by showing that there is no causal relation between the thresholding nature of the activation function and the binary encoding emerging in hidden neurons. Indeed, we show that a binary encoding emerges even in deep linear networks. An important part of our work relies on the observation that the gradients used by the learning algorithm follow some consistent, predictable patterns. This observation has already been highlighted by BID24 and BID25. However, while these works focus on the gradients with respect to parameters on a batch of samples, we analyse the gradients with respect to activations on single samples. This difference of perspective is crucial for the understanding of the representation learned by a neuron, and is a key aspect of our paper. Our goal is to describe the behaviour of neurons in a neural network. Given the growing complexity of neural networks, it is useful to define which part of the architecture we denote as a neuron. We associate neurons with activation functions: each application of a non-linear function to a single value defines one neuron. Following the literature, we will refer to the value preceding the application of the activation function as the pre-activation, and to the result of it as the activation. In order to reflect the spatial structure of convolutional layers, we consider the different pixels of a feature map as different activations from the same neuron when studying statistical distributions. We experiment with three different architectures: a 2-layer MLP with 0.5 dropout trained on MNIST BID18, a 12-layer CNN with batchnorm BID13 trained on CIFAR-10 (BID17), and a 50-layer ResNet BID11 trained on ImageNet BID9. In addition to the ReLU activation BID21, we also analyse a version of the MLP with the sigmoid activation function, and a version of the 12-layer CNN without a non-linear activation function. We will thus analyse five different models. Throughout the paper, we will repeatedly refer to specific layers of these networks. For the MLP, we simply refer to the two fully-connected layers as dense1-act and dense2-act, act being replaced by the activation function used (relu or sigmoid).
The cifar CNN is divided in 4 stages of three layers. Layers from a stage have the same spatial dimensions and stages are separated by max-pooling layers. We refer to each layer through the index of its stage and the position of the layer inside the stage, starting at 0. Stage2layer0 thus refers to the first layer of the third stage. We use the ResNet50 network as provided by the Keras applications. We re-use their notations and refer to layers through their stage (in numbers) and block index (in letters). We only study the neurons after combination of the block outputs and the skip connections. The very first layer does not belong to a standard ResNet block, and is denoted as conv1. More information about the models and their training procedure can be found in the Appendix. Our experiments were implemented using the Keras BID7 and Tensorflow BID0 libraries. We start our quest of understanding a neuron by watching the gradients flowing through it. Most of the works analysing training dynamics of neural networks have focused on analysing gradients of the loss with respect to parameters, since these are directly used by the learning method. However, gradients with respect to the activations can also give us precious insights, since they directly reveal how the representation of a single sample is constructed. We proceed to a standard training of the cifar CNN and the MNIST MLP networks until convergence. During training, but in a separate process, we record the gradient of the loss with respect to the activations of each input on a regular basis (every 100 batches for cifar and every 10 batches for MNIST, leading to 1600 and 2350 recordings respectively). Measurements were only performed on a random subset of neurons and samples due to memory limitations (see Appendix for more details). For each (input sample, neuron) pair, we compute the average sign of the partial derivatives with respect to the corresponding activation, as recorded at the different training steps. This value tells us whether an increased activation generally benefits (negative average) or penalizes (positive average) the classification of the sample. Due to the use of float32 precision, zero partial derivatives appear at some point in training when the sample is correctly classified, making the gradient very small. Since the signs of these values are not relevant, they are ignored when the average sign is calculated. FIG1 shows, for ten randomly selected neurons from different layers, the histograms of the computed average signs (there is one value per input sample). As one can see, the average partial derivative sign is either 1 or -1 for most of the samples, which indicates that the derivative sign does not change at all through the training. This is exactly the behaviour one would expect in the output of a binary classifier trying to separate two categories. Since around half of the activations have positive derivatives and the other half negative ones, a neuron seemingly tries to partition the input distribution in two distinct and nearly equally-sized categories. While the training of neural networks could potentially be a very noisy procedure, we thus observe a remarkably clear and regular signal in the activation gradients. The regularity of training has already been observed for weights in BID24 and BID25; we observe it now through the lens of activations.
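To make the measurement procedure concrete, the following is a minimal PyTorch sketch of one way to record per-sample activation-gradient signs. It is not the authors' code; all names (record_activation_gradients, sign_sum, sign_cnt) and the hook-based mechanics are our own assumptions.

```python
import torch

# Running statistics across recordings; rows index the fixed probe samples,
# columns index the monitored neurons.
sign_sum = None   # sum of derivative signs across recordings
sign_cnt = None   # number of non-zero derivatives seen per (sample, neuron)

def record_activation_gradients(model, loss_fn, x, y, layer):
    """Record sign(d loss / d activation) for every (sample, neuron) pair.

    x, y must be the same fixed probe subset at every recording so that
    rows stay aligned across training.
    """
    global sign_sum, sign_cnt
    grads = {}

    def hook(module, inputs, output):
        output.retain_grad()          # keep gradients of this non-leaf tensor
        grads['act'] = output

    handle = layer.register_forward_hook(hook)
    model.zero_grad()
    loss_fn(model(x), y).backward()
    handle.remove()

    g = grads['act'].grad.flatten(1)  # shape: (num_samples, num_neurons)
    s = torch.sign(g)
    nz = (g != 0).float()             # exactly-zero derivatives are ignored
    if sign_sum is None:
        sign_sum, sign_cnt = s.clone(), nz.clone()
    else:
        sign_sum += s
        sign_cnt += nz

# After training, the average sign per (sample, neuron) pair:
# avg_sign = sign_sum / sign_cnt.clamp(min=1)
# Values near +1 or -1 mean the derivative sign essentially never flipped.
```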
In particular, we observe that the activation of a sample in a neuron should be pushed in the same direction throughout nearly all of training to improve its prediction: either up or down. Histograms aggregating all neurons of a layer can be found in the Appendix. This behaviour is much less apparent in layers far from the output. Indeed, the histogram corresponding to a neuron from stage2layer2-relu shows more sign changes than the one from stage3layer2-relu. Stage0layer0-relu is even worse: the majority of the partial derivatives constantly change signs during training. This raises a question: are the same regular dynamics present in early layers, while hidden by undesirable noise? It has been observed that noise in gradients increases exponentially with depth in ReLU-networks due to the derivative discontinuity at 0 BID4. Indeed, the linear version of the cifar CNN (fourth row) provides a much clearer signal than the ReLU version (third row). However, other sources of noise are present: the histogram of stage0layer0-linear average derivative signs does not have a pronounced bimodal behaviour. Is the observed noise an inconvenience emerging from the architecture and training procedure, or rather a key aspect of learning? We leave this question as future work. The gradients strongly indicate that a neuron tries to separate two categories of inputs. Does this effectively happen during training? We assign each sample to a category based on its average activation partial derivative sign, and see how both categories' pre-activations evolve across the recordings. Categories are named 'low' and 'high' for positive and negative derivatives respectively. Figure 2 shows the results for a neuron in dense2-relu, dense2-sigmoid, stage3layer2-relu and in stage3layer2-linear. The dynamics of more neurons can be found in the Appendix and in video format on the following link: https://www.youtube.com/channel/UC5VC20umb8r55sOkbNExB4A. This visualization unveils a seemingly endless struggle to separate both categories. While very slow, the signal is effectively there: both categories are distinguished through the training procedure. However, training stops before both categories are completely separated. As will be discussed in Section 6, this raises a question that we believe is crucial: what mechanism regulates which samples are well partitioned in a neuron? To illustrate that the dynamics are not a simple translation, the final highest pre-activations are highlighted in yellow in the visualizations. The figures show the histograms of the average sign of partial derivatives of the loss with respect to the activation of samples, as collected over training for a random neuron in ten different layers. An average derivative sign of 1 means that the derivative of the activation of this sample was positive in all the recordings performed during training. For layers close enough to the output, we clearly observe two distinct categories: some sample activations should always go up, others always down. This reveals that the neuron receives consistent information about how to affect the activation of a sample, allowing it to act as a binary classifier. As detailed in Section 3, the layers from the first two rows are part of a network trained on MNIST (with ReLU and sigmoid activation functions respectively), the third and fourth row on CIFAR-10 (with ReLU and no activation function respectively). Another question begs to be answered: according to which mechanism are the high and low categories defined?
The average sign of the loss function partial derivative with respect to the activation of a sample determines the category, and seems to be constant along training - at least for layers close to the output (FIG1). Categories are thus mainly fixed by the initialization of the network's parameters. Moreover, the sign of the derivative signal is heavily conditioned on the class of the input. In particular, in neurons of the output layer, partial derivative signs only depend on the class label, and not on the input. FIG7 in the Appendix shows that in dense2-relu, a class is in most cases either entirely present or absent from a category, and is only occasionally split across the low and high categories. Category definition is thus approximately a selection of a random subset of classes, determined by the random initial parameters between the studied neuron and the output layer. We leave further exploration of these mechanisms as future work. Figure 2: Evolution of the pre-activation distributions across training. Plots correspond to one neuron from dense2-relu (first row), dense2-sigmoid (second row), stage3layer2-relu (third row) and stage3layer2-linear (fourth row). Pre-activations are separated in two categories, high and low, based on the average partial derivative sign over training of their corresponding activation (see FIG1). We can see that both categories are being separated during training. The final highest pre-activations of the high category are highlighted to show that it is not a simple translation. Supplementary images from other neurons can be found in the Appendix and in video format on https://www.youtube.com/channel/UC5VC20umb8r55sOkbNExB4A. We have shown that neurons operate like binary classifiers during training. Does this also reflect the way a neuron encodes information about the input during testing? Even though the categories are not completely separated, does this partition provide the necessary information for the next layer? In this Section, we test if all the information a neuron transmits is encoded in the binary partition observed in the previous Section. We do this by studying how the performance of neural networks changes when activations of a trained layer are modified through specifically designed quantization and binarization strategies. The strategies are designed not only to highlight the hypothetical binary aspect of the encodings, but also to reveal structural components of it: how fuzzy is the binary rule and can we locate the thresholds? Moreover, this Section also studies ResNet50, since computational limitations are less of a problem. The first experiment aims at testing if a neural network trained in a standard way is robust to quantization of pre-activations. Instead of accepting a continuous range of values, only two distinct values per neuron can be provided. Are these two values per neuron enough for transmitting the relevant information to the next layers? The quantization is based on the percentile rank of a pre-activation with respect to the pre-activation distribution of the neuron. For each neuron, percentiles are computed based on a subset of the data (training or test). The percentile corresponding to a chosen rank is then used as a threshold, separating the pre-activations in two distinct sets. A pre-activation will be quantized to the average value of the set it belongs to. Eleven thresholds equally spaced between 0 and 100 are tried out for the experiment.
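As an illustration of this quantization rule, the following is a minimal numpy sketch; the function name and the exact tie handling at the threshold are our own assumptions, not the authors' code.

```python
import numpy as np

def quantize_preactivations(pre, rank):
    """Two-level quantization of one neuron's pre-activations.

    pre:  1-D array of pre-activations (reference subset for this neuron)
    rank: percentile rank in [0, 100] used as the threshold

    Each pre-activation is replaced by the mean of the set (below or above
    the threshold) it falls into, so only two distinct values remain.
    """
    threshold = np.percentile(pre, rank)
    below = pre <= threshold
    low_value = pre[below].mean() if below.any() else threshold
    high_value = pre[~below].mean() if (~below).any() else threshold
    return np.where(below, low_value, high_value)

# Eleven thresholds equally spaced between 0 and 100, as in the experiment:
# for rank in np.linspace(0, 100, 11):
#     quantized = quantize_preactivations(pre, rank)
```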
While the percentile is computed for each neuron specifically, the percentile rank used as a threshold is the same for all of them. FIG2 shows how accuracy on the test set is affected when quantization is performed on different layers. No form of training to adapt to this new pre-activation distribution is applied. The first and penultimate layers of each network are studied, as well as one intermediate layer for the cifar10 CNN and ResNet50. The signal is clear: neural networks are astonishingly robust to quantization of their pre-activations, although not explicitly designed to be so. Performance is quite robust to the chosen threshold, with a preference for higher percentile ranks. Amongst the 8 layers tested, only the conv1 layer from ResNet50 shows a significant decrease in accuracy when its pre-activations are quantized. We believe this is due to the poor quality of the gradients in early layers, as discussed in Section 4.1. The quantization experiment suggests that each neuron transmits a binary signal to the next layer, which is a first step for confirming our hypothesis. But we still don't have a clear view on how the signal is encoded. Is there a clear threshold or a fuzzy rule? When can we be confident that a pre-activation should be considered as a member of the low category or the high one? What is the size of both categories? This section presents the design and results of a sliding window binarization experiment whose purpose is to provide insights around these questions. In this experiment, instead of separating the pre-activations in two groups using a single percentile rank as threshold, we use two thresholds, forming a window. Activations between the two thresholds are mapped to 1, and activations outside of it are mapped to 0. The experiment is performed using a window with a width of 10 percentile ranks and a center that slides from rank 5 to rank 95. Thus, only 10% of all the pre-activations of a neuron are mapped to 1. Which 10% is fixed by the center of the window: if the center is at rank 35, only the activations between the 30 and 40 percentiles are mapped to 1. Similarly to the quantization experiment, the percentiles are computed using a randomly selected subset of the data (train or test). With such a binarization method, the only information from the original signal that remains is whether the activation was inside or outside the window. The usefulness of this information for a particular window depends on the coding scheme used by the neuron. The results can thus potentially provide insights about the organization of the representation and allow us to indirectly observe the binary partition used to encode information. To measure the usefulness of the transformed pre-activations, we monitor the test accuracy after reinitialization and retraining of the layers that follow the layer where the binarization has been performed. For computational reasons, linear classifier probes BID2 are used for analysing ResNet50 layers instead of retraining all the subsequent layers. With this approach, we can verify if a network is able to make good use of the information contained in the binarized pre-activations and learn useful patterns that generalize to the test set. Since a neuron hypothetically transmits information through a binary partition of the inputs, performance of the network should be better when the pre-activations inside the window correspond to the same category.
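The binarization rule itself is simple; a minimal numpy sketch (names and boundary handling assumed by us) could look as follows:

```python
import numpy as np

def window_binarize(pre, center, width=10):
    """Map pre-activations inside a percentile-rank window to 1, outside to 0.

    pre:    1-D array of one neuron's pre-activations
    center: center of the window, in percentile ranks
    width:  window width in percentile ranks (10 in the experiment)
    """
    low = np.percentile(pre, max(center - width / 2, 0))
    high = np.percentile(pre, min(center + width / 2, 100))
    return ((pre >= low) & (pre <= high)).astype(np.float32)

# Window centers sliding from rank 5 to rank 95:
# for center in range(5, 96, 10):
#     binary = window_binarize(pre, center)
```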
FIG2: Quantization is performed on a single layer at a time, using a range of percentile ranks as quantization thresholds. Except for conv1 (the very first layer of ResNet50), the networks are astonishingly robust to quantization, suggesting that neurons provide a binary signal to the next layers. The average percentile rank of the zero pre-activation (which corresponds to ReLU's and sigmoid's threshold) is also provided. As detailed in Section 3, the layers from the first two rows are part of a network trained on MNIST (with ReLU and sigmoid activation functions respectively), the third and fourth row on CIFAR-10 (with ReLU and no activation function respectively) and the fifth row on ImageNet (with ReLU activation). The performance should decrease when the window is located in fuzzy regions, where both categories are equally present. This experiment thus provides a tool to indirectly measure the presence of the two categories used by the coding scheme. The results presented in Figure 4 show a clear signal across all layers and networks: the further away the center of the window is from rank 50, the better the performance of the network. Moreover, the symmetry around percentile rank 50 is striking. Given the binarization strategy, these results indicate a fuzzy partition of two categories, with a threshold around percentile rank 50, and a confidence in the category that increases the higher (or lower) the activation. The fact that a window centered at the 50th percentile rank does not induce random predictions indicates that the sizes of the categories are not always equal, but vary across neurons. Our results thus strengthen the hypothesis emerging from our analysis of the training dynamics, according to which a neuron partitions the inputs in two distinct but overlapping categories of quasi equal size. These new experiments tell us that this partition also characterizes how neurons encode information about the inputs. Interestingly, there is no causal relation between the thresholding nature of activation functions and the binary behaviour that we observe in the pre-activations. Indeed, while the binary partition observed seems to be symmetrically arranged around the 50th percentile rank (Figure 4), the positions of the ReLU or sigmoid thresholds (0 value) aren't (see FIG2, or Table 1 in the Appendix). Moreover, the binary behaviour also emerges in linear networks, which don't have any thresholding effect in hidden neurons. This observation is quite unexpected, as previous studies on activation binarization focused on binarization at the threshold of the activation function BID1, which now seems quite arbitrary. In this paper, we try to validate an ambitious hypothesis describing the behaviour of a neuron in a neural network during training and testing. Our hypothesis is surprisingly simple: a neuron behaves like a binary classifier, separating two categories of inputs. The categories, of nearly equal size, are provided by the backpropagated gradients and are impressively consistent during training for layers close enough to the output. While stronger validation is needed, our current experiments, run on networks of different depths and widths, all validate this behaviour. Our results have direct implications on the interpretability of neurons. Studies analysing interpretability focused on the highest activations, e.g. above the 99.5 percentile in BID5. While these activations are the ones that are most clearly discriminated by the neuron, we show that they do not reflect the complete behaviour of the neuron at all.
Our experiments reveal that neurons tend to consistently learn concepts that distinguish half of the observed samples, which is fundamentally different. We expect that our observations stimulate further investigations in a number of intriguing research directions disclosed by our analysis. Firstly, since our analysis observes (in FIG2) but does not explain the binary behaviour of neurons in the first layers of a very deep network, it would be interesting to investigate further the regularity of gradients (cfr. Section 4.1) in layers far from the output. This could potentially unveil simple training dynamics which are currently hidden by noise or, on the contrary, reveal that the unstable nature of the backpropagated gradients is a fundamental ingredient supporting the convergence of first layer neurons. Ultimately, these results would provide the missing link for a complete characterization of training dynamics in deep networks. Secondly, our work offers a new perspective on the role of activation functions. Their current motivation is that adding non-linearities increases the expressivity of the network. This, however, does not explain why one particular non-linearity is better than another. Our lack of understanding of the role of activation functions heavily limits our ability to design them. Our results suggest a local and precise role for activation functions: promoting and facilitating the emergence of a binary encoding in neurons. This could be translated in activation functions with a forward pass consisting of well-positioned binarization thresholds, and a backward pass that takes into account how well a sample is partitioned locally, at the neuron level. Finally, we believe that our work provides a new angle of attack for the puzzle of the generalization gap. Indeed, combining our observations with the works on neuron interpretability tells us that a neuron, while not able to finish its partitioning before convergence, seems to prioritize samples with common patterns (cfr. Figure 2). This prioritization effect during training has already been observed indirectly in BID3, and we are now able to localize and study it in depth. The dynamics behind this prioritization between samples of a same category should provide insights about the generalization puzzle. While most previous works have focused on the width of local minima BID16, the regularity of the gradients and the prioritization effect suggest that the slope leading to it also matters: local minima with good generalization abilities are stronger attractors and are reached more rapidly. Figure 4: Sliding window binarization experiment: pre-activations inside a window with a width of percentile rank 10 are mapped to 1, pre-activations outside of it to 0. The information that remains in the signal is only whether the pre-activation was inside or outside the window. Observing if a new network can use this information for classification reveals structure about the encoding: which window positions provide the most important information for a classifier? The results show a clear pattern across all layers and networks that confirms an encoding based on a fuzzy, binary partition of the inputs in two categories of nearly equal size. As detailed in Section 3, the layers from the first two rows are part of a network trained on MNIST (with ReLU and sigmoid activation functions respectively), the third and fourth row on CIFAR-10 (with ReLU and no activation function respectively) and the fifth row on ImageNet (with ReLU activation).
Two main lessons emerge from our original experimental investigation. The first one arises from the observation that the sign of the loss function partial derivative with respect to the activation of a specific sample is constant along training for the neurons that are sufficiently close to the output, and states that those neurons simply aim at partitioning samples with positive/negative partial derivative sign. The second one builds on two experiments that challenge the partitioning behaviour of neurons in all network layers, and concludes that, as long as it separates large and small pre-activations, a binarization of the neuron's pre-activations in an arbitrary layer preserves most of the information embedded in this layer about the network task. As a main outcome, rather than supporting definitive conclusions, the unique observations made in our paper raise a number of intriguing and potentially very important questions about network learning capabilities. Those include questions related to the convergence of first layer neurons in presence of noisy/unstable partial derivatives, the design of activation functions, and the generalization puzzle. To be filled in. Training information: • Learning rate: 1e-1 for ReLU, 1 for the sigmoid activation function • Batch size: 128 • Number of epochs: 50. One layer is composed of a convolution, an activation function and BatchNormalization. We denote a layer with L(n) where n is the number of filters in the convolution. We denote max-pooling as MP and global average pooling as GP. Architecture: DISPLAYFORM0. Training information: • Learning rate: 1e-2, divided by 5 after epoch 60 • Batch size: 32 • Number of epochs: 100 • Data augmentation is used (but not when retraining after binarization - Figure 4). This network is directly taken from the Keras applications. Information about the architecture can be found at: https://github.com/fchollet/keras/blob/master/keras/applications/resnet50.py. Training information is not provided. In Section 4, gradients and pre-activations are recorded for • 30.000 samples for dense1 and dense2 • 10.000 for stage3layer2 • 5.000 for stage2layer2 • 500 for stage0layer0. The samples were randomly selected. Computing percentiles is also performed on randomly selected samples for computation efficiency. More precisely, we limit the number of samples used to 100.000. The probes on ResNet50 used for Figure 4 used only 100.000 training samples from the ImageNet dataset. The test error, however, is computed on the complete ImageNet validation set. C SUPPLEMENTARY IMAGES AND TABLES. Table 1: Average and standard deviation of the percentile rank of the ReLU threshold (0 value) across neurons of a layer. The percentile rank is based on the pre-activation distributions of each neuron. We observe that in the last layers, the position of the ReLU threshold is nearly the same for all neurons, suggesting convergence to a very precise position in the pre-activation distribution. Overall, given the noisy nature of its position, the ReLU threshold does not seem to be the cause of the binary behaviour of neurons observed in this paper. The figures show the histograms of the average sign of partial derivatives of the loss with respect to sample activations, as collected over training for all neurons in ten different layers. An average derivative sign of 1 means that the derivative of the activation of a neuron for this sample was positive in all the recordings performed during training.
The histograms represent the statistics on all (neuron, sample) pairs of the layer. For layers close enough to the output, we clearly observe two distinct categories: some sample activations should always go up, others always down. This reveals that the neuron receives consistent information about how to affect the activation of a sample. The neuron-wise histograms in FIG1 moreover show that more or less half of the input samples have negative derivatives, and the other half positive ones, allowing a neuron to act as a binary classifier. As detailed in Section 3, the layers from the first two rows are part of a network trained on MNIST (with ReLU and sigmoid activation functions respectively), the third and fourth row on CIFAR-10 (with ReLU and no activation function respectively). Figure 6: Evolution of the pre-activation distributions across training. Each line corresponds to the dynamics of a different neuron. Plots correspond to neurons from dense2-relu (rows 1-3) and from dense2-sigmoid (rows 4-6). Pre-activations are separated in two categories, high and low, based on the average partial derivative sign over training of their corresponding activation. We can see that both categories are being separated during training. The final highest pre-activations of the high category are highlighted to show that it is not a simple translation. These illustrations can be seen in video format on https://www.youtube.com/channel/UC5VC20umb8r55sOkbNExB4A. Figure 7: Evolution of the pre-activation distributions across training. Each line corresponds to the dynamics of a different neuron. Plots correspond to neurons from stage3layer2-relu (rows 1-3) and from stage3layer2-linear (rows 4-6). Pre-activations are separated in two categories, high and low, based on the average partial derivative sign over training of their corresponding activation. We can see that both categories are being separated during training. The final highest pre-activations of the high category are highlighted to show that it is not a simple translation. These illustrations can be seen in video format on https://www.youtube.com/channel/UC5VC20umb8r55sOkbNExB4A. Histogram showing the consistency between the class of a sample and its belonging to the low category of a neuron (samples whose activation should be decreased) in dense2-relu. In most cases, nearly all the elements of a class are in the same category, as can be seen by the two peaks at 0 and 100%.
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
H1srNebAZ
We report experiments providing strong evidence that a neuron behaves like a binary classifier during training and testing
Automatic classification of objects is one of the most important tasks in engineering and data mining applications. Although using more complex and advanced classifiers can help to improve the accuracy of classification systems, it can also be achieved by analyzing data sets and their features for a particular problem. Feature combination is one approach which can improve the quality of the features. In this paper, a structure similar to a Feed-Forward Neural Network (FFNN) is used to generate an optimized linear or non-linear combination of features for classification. A Genetic Algorithm (GA) is applied to update the weights and biases. Since the nature of data sets and their features impacts the effectiveness of combination and classification systems, linear and non-linear activation functions (or transfer functions) are used to achieve a more reliable system. Experiments on several UCI data sets, using the minimum distance classifier as a simple classifier, indicate that the proposed linear and non-linear intelligent FFNN-based feature combination can present more reliable and promising results. By using such a feature combination method, there is no need to use a more powerful and complex classifier anymore. A quick review of engineering problems reveals the importance of classification and its applications in medicine, mechanical and electrical engineering, computer science, power systems and so on. Some of its important applications include disease diagnosis using classification methods to diagnose thyroid, Parkinson's BID4, and Alzheimer's disease BID7; or fault detection in power systems, such as BID6, which uses classification methods to detect winding faults in windmill generators; BID12, using a neuro-fuzzy based classification method to detect faults in AC motors; and also fault detection in batch processes in chemical engineering BID22. In all classification problems, extracting useful knowledge and features from data such as images, signals, waveforms, etcetera can lead to the design of efficient classification systems. As extracted data and their features are not usually suitable for the classification purpose, two major approaches can be considered. The first approach considers all the classifiers and tries to select effective ones, even if their complexity and computational cost are increased. The second approach, focusing on the features, enhances the separability of the data, and then uses the improved features and data for classification. Feature combination is one of the common actions used to enhance features. In classic combination methods, different feature vectors are lumped into a single long composite vector BID19. In some modern techniques, in addition to the combination of feature vectors, the dimension of the feature space is reduced. The reduction process can be done by feature selection, transmission, and projection or mapping techniques, such as Linear Discriminant Analysis (LDA), Principal Component Analysis (PCA), Independent Component Analysis (ICA) and boosting BID19. In many applications, feature combination is fulfilled to improve the efficiency of classification, such as in BID3, where PCA and Modular PCA (MPCA) along with Quad-Tree based hierarchically derived Longest Run (QTLR) features are used to recognize handwritten numerals as a statistical-topological feature combination. Another application of feature combination is English character recognition, where structural and statistical features are combined and then a BP network is used as a classifier.
Feature combination has many applications; however, before using it, some questions should be answered: which kind of combination method is useful for the studied application and the available data set? Is reduction of the feature space dimension always useful? Is a linear feature combination method better than a non-linear one? In this paper, using the structure of a Feed-Forward Neural Network (FFNN) along with a Genetic Algorithm (GA) as a powerful optimization algorithm, Linear Intelligent Feature Combination (LIFC) and Non-Linear Intelligent Feature Combination (NLIFC) systems are introduced to present combination systems that adapt to the nature of the data sets and their features. In the proposed method, the original features are fed into a semi-FFNN structure to map the features into a new feature space, and then the outputs of this intelligent mapping structure are classified by a minimum distance classifier via the cross-validation technique. In each generation, the weights and biases of the semi-FFNN structure are updated by the GA and the correct recognition rate (or error recognition rate) is evaluated. In the rest of this paper, overviews of the minimum distance classifier, the Feed-Forward Neural Network structure, and the Genetic Algorithm are described in Sections 2, 3, and 4, respectively. In Section 5, the proposed method and its mathematical consideration are presented. Experimental results, and a comparison between the proposed method and other feature combinations and classifiers using the same database, are discussed in Section 6. Eventually, the conclusion is presented in Section 7. The minimum distance classifier (or 1-nearest neighbor classifier) is one of the simplest classification methods, which works based on the measured distance between an unknown input data point and the available data in classes. Distance is defined as an index of similarity; according to this definition, the minimum distance means the maximum similarity. The distance between two vectors can be calculated by various procedures, such as the Euclidean distance, normalized Euclidean distance, Mahalanobis distance, Manhattan distance, etcetera. The Euclidean distance is the most prevalent procedure and is presented in Eq. 1: D = ||X - Y|| = (sum_{i=1}^{n} (x_i - y_i)^2)^{1/2}, (1) where D is the distance between the two vectors X and Y, ||X - Y|| means the second (Euclidean) norm, and n is the dimension of X and Y, where X = (x_1, x_2, ..., x_n) and Y = (y_1, y_2, ..., y_n). FIG0 shows the concept of a minimum distance classifier. As can be seen, the distance between the unknown input data and C2 is the minimum among all distances, therefore this input is assigned to class C2. Artificial Neural Networks (ANNs) are designed based on a model of the human brain and its neural cells. Although human knowledge of the brain is very limited, its performance can be understood according to observations and the physiology and anatomy of the brain BID13. A prominent trait of an ANN is its ability to learn complicated relationships between input and output vectors. In general, these networks are capable of modeling many non-linear functions. This ability lets neural networks be used in practical problems such as comparative diagnoses and controlling non-linear systems. Nowadays, different topologies are proposed for implementing ANNs in supervised, unsupervised, and reinforcement applications. Feed-forward is a dominant topology used in supervised learning procedures. The feed-forward topology for an ANN is shown in FIG1. As can be seen, information is fed into the ANN via the input layer, which simply distributes the input information into the main body of the ANN.
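To make the decision rule of Eq. 1 concrete, a minimal numpy sketch of such a minimum distance (1-nearest neighbor) classifier could look as follows; the function and variable names are our own, not the paper's.

```python
import numpy as np

def minimum_distance_classify(x, train_X, train_y):
    """Assign x to the class of the nearest stored sample (Euclidean distance, Eq. 1).

    x:        (n,) feature vector of the unknown input
    train_X:  (N, n) stored samples
    train_y:  (N,) class labels of the stored samples
    """
    distances = np.linalg.norm(train_X - x, axis=1)  # D = ||X - Y|| per stored sample
    return train_y[np.argmin(distances)]             # minimum distance = maximum similarity

# Example: a point near class 2's sample is assigned to class 2.
# X = np.array([[0.0, 0.0], [1.0, 1.0]]); y = np.array([1, 2])
# minimum_distance_classify(np.array([0.9, 1.1]), X, y)  # -> 2
```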
In this transmission, the quantities of information are changed through multiplication by the synapse weights of the connections between the input layer and the next layer. Applying activation functions in the next layers, the updated information arrives at the output layer. The general equation is given by Eq. 2: y = g(sum_j w2_j * s(sum_i w1_{ij} * x_i)), (2) where the coefficients g and s are the activation functions of N2 and the N1s, respectively, the w1s are the synapse weights between the input layer and the hidden layer, and the w2s are the synapse weights between the hidden layer and the output layer. It is noticeable that in this structure information flows from input to output and there is no feedback; also, there are no disconnections or jump connections between layers. In evolution theory, the individuals of a population evolve to be more adaptable to their environment. Therefore, the individuals that can do this better have more chance to survive. These algorithms are stochastic optimization techniques. In this kind of technique, the information of each generation is transferred to the next generation by chromosomes. Each chromosome consists of genes, and each gene represents a particular feature or behavior. The Genetic Algorithm (GA) is one of the most well-known evolutionary algorithms. In the GA's process, first of all, an initial population is created based on the necessities of the problem. After that, the objective function is evaluated. In order to achieve the best solution, offspring are created from parents in the reproduction step by crossover and mutation. Consequently, the best solution is obtained after a predetermined number of iterations BID11. As mentioned before, various methods may be used to improve the ability of a classification system. In some cases, we are interested in a more complex and powerful classifier; although helpful, it often reduces decision-making speed and increases computational cost. The other way is to apply pre-processing on the training data before changing the kind of classifier or its complexity. Feature combination is one of the most common ways used to enhance the quality of features, so that simple classifiers can discriminate them easily. In most feature combination methods, such as LDA, PCA, ICA, MPCA and so on, the main strategy is to reduce the feature space dimension, whereas, depending on the nature of the data sets and their features, sometimes dimension reduction is needed, sometimes combining the features in the same dimension is enough, and sometimes an increase of the feature space dimension may be useful. The main idea of the method proposed in this paper is to apply a linear or non-linear intelligent feature mapping into a new solution space; with this method, the discriminability of the data is increased. In general, the proposed method is illustrated in FIG2 and can be represented as follows: let R be the solution space; according to the mapping concepts, we have f: R^n -> R^m, where the superscripts n and m are the dimensions of the solution space (or feature dimensions) before and after the mapping process, respectively. If n > m, then the feature dimension is reduced from n dimensions to m dimensions by the transfer function f. If n = m, then there is no change in dimensionality and only the transfer function is applied to the features. The feature dimension is increased for n < m. This equation describes only the generality of the issue, whereas in the proposed method not only is the feature space dimension changed and a transfer function applied, but the features are also combined in a linear or non-linear format.
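As a rough illustration of the GA loop described above, here is a minimal numpy sketch. The operators and parameters (uniform crossover, Gaussian mutation, simple elitism, population size) are our own assumptions; the paper does not specify these details.

```python
import numpy as np

rng = np.random.default_rng(0)

def ga_optimize(fitness, dim, pop_size=50, generations=100,
                mutation_rate=0.1, mutation_scale=0.3):
    """Minimize `fitness` over real-valued chromosomes of length `dim`.

    Chromosomes here would encode the weights and biases of the mapping
    structure; `fitness` returns the error recognition rate.
    """
    pop = rng.normal(size=(pop_size, dim))
    best, best_score = pop[0].copy(), np.inf
    for _ in range(generations):
        scores = np.array([fitness(c) for c in pop])
        if scores.min() < best_score:
            best, best_score = pop[scores.argmin()].copy(), scores.min()
        parents = pop[np.argsort(scores)[:pop_size // 2]]    # selection
        pa = parents[rng.integers(len(parents), size=pop_size)]
        pb = parents[rng.integers(len(parents), size=pop_size)]
        mask = rng.random((pop_size, dim)) < 0.5             # uniform crossover
        children = np.where(mask, pa, pb)
        mutate = rng.random((pop_size, dim)) < mutation_rate  # Gaussian mutation
        pop = children + mutate * rng.normal(scale=mutation_scale,
                                             size=(pop_size, dim))
        pop[0] = best                                         # elitism
    return best, best_score
```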
As shown in FIG3, X = (x_1, x_2, ..., x_n) is an input data vector, Y = (y_1, y_2, ..., y_m) is an output data vector, and F is a transfer function, which can be a general parametric function such as a feed-forward neural network structure. After feeding X into this structure, if m is one (m = 1), we project all features onto one axis (or one dimension), and so we have: y = g(sum_{i=1}^{n} w_i * x_i + b). And if m is more than one (m > 1), then we have: y_j = g(sum_{i=1}^{n} w_{ij} * x_i + b_j), for j = 1, ..., m, where the function g may be a linear or non-linear activation (or transfer) function. In this paper, as can be seen in Fig. 5(a) and Eq. 6, in the case of a linear transfer function (like purelin) the ys are only the weighted summation of the primary features (the xs): y_j = sum_{i=1}^{n} w_{ij} * x_i + b_j. (6) The function g can also be non-linear, as shown in Eq. 7 and Fig. 5(b). This non-linear function is a kind of sigmoid transfer function whose shape can be changed by the coefficient alpha: g(u) = 1 / (1 + e^{-alpha * u}). (7) Figure 5: Linear and non-linear activation functions used in the proposed method. Now, finding the optimum weights and biases for increasing the separability of features, which leads to increasing the efficiency of the classification system, is the heart of this paper. The GA is utilized as one of the most accurate and powerful optimization tools. It is applied to the features of each data set after establishing the structure of the mentioned intelligent mapping system and initializing the weights and biases with random values, as shown in FIG4. Then a very simple classifier is used, namely the minimum distance classifier with the leave-one-out cross-validation technique (Bishop), and the error recognition rate is calculated. In this step, the stopping criterion is evaluated, which is either the least error recognition rate or the given number of generations for the GA. If neither of the stopping criteria is satisfied, the GA updates the weights and biases. This process is repeated again and again until one of the stopping criteria becomes satisfied. In other words, the mapping system (intelligent combination system) is the fitness function of the GA, and the weights and biases are the GA's chromosomes. In order to evaluate the ability of the proposed combination methods, four classification tasks from UCI data sets are used BID2. Useful information about the studied data sets is described as follows. The Iris data contains 50 samples from each of three species, namely, Iris setosa, Iris versicolor, and Iris virginica. Sepal length, sepal width, petal length and petal width are the four features extracted for each species. Wine: These data are the results of a chemical analysis of wines grown in the same region in Italy and derived from three different cultivars. The analysis determines the quantities of 13 features extracted from each type of wine. These features are alcohol, malic acid, ash, alcalinity of ash, magnesium, total phenols, flavanoids, non-flavanoid phenols, proanthocyanins, color intensity, hue, OD280/OD315 of diluted wines, and proline. Moreover, this data set contains 178 samples categorized in three classes. The Glass data set consists of 214 samples of nine features from each type of glass: building-windows-float-processed, building-windows-non-float-processed, vehicle-windows-float-processed, containers, tableware, and headlamps. The extracted features are refractive index, sodium, magnesium, aluminum, silicon, potassium, calcium, barium, and iron. Ionosphere: This radar data was collected by a system in Goose Bay, Labrador. This system consists of a phased array of 16 high-frequency antennas with a total transmitted power on the order of 6.4 kilowatts. The radar data are categorized in two groups: "Good" and "Bad".
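Putting Eqs. 6-7 together with the evaluation loop, a minimal numpy sketch of the GA fitness function could look as follows. All names are our own assumptions, and it is written to plug into the `ga_optimize` sketch above.

```python
import numpy as np

def make_fitness(X, y, n_out, alpha=0.2, nonlinear=True):
    """Build a fitness function: leave-one-out error of a minimum distance
    classifier on the mapped features.

    A chromosome encodes W (n_out x n_in) and b (n_out), cf. Eqs. 6-7.
    """
    n_in = X.shape[1]

    def fitness(chrom):
        W = chrom[:n_out * n_in].reshape(n_out, n_in)
        b = chrom[n_out * n_in:]
        Z = X @ W.T + b                              # Eq. 6: linear combination
        if nonlinear:
            Z = 1.0 / (1.0 + np.exp(-alpha * Z))     # Eq. 7: sigmoid with slope alpha
        errors = 0
        for i in range(len(Z)):                      # leave-one-out 1-NN
            d = np.linalg.norm(Z - Z[i], axis=1)
            d[i] = np.inf                            # exclude the held-out sample
            errors += int(y[np.argmin(d)] != y[i])
        return errors / len(Z)                       # error recognition rate

    return fitness

# Chromosome length = n_out * n_in + n_out, matching ga_optimize(fitness, dim).
```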
"Good" radar returns show evidence of some types of structure in the ionosphere. And "Bad" returns dont do so, their signals pass through the ionosphere. As mentioned before, studying the nature of data set in order to design efficient combination and classification systems may be so important. Therefore all possible condition (namely: dimension reduction, dimension increase, only combination of features in same dimension, linear and nonlinear mapping) are considered. For each data set, classification using cross-validation is applied ten times. Classification parameters of GA are also considered similar for all data sets to present same condition, as shown in Table.Coefficient α is 0.2for all non-linear feature combinations. Fig.7 shows typical convergence curves of error recognition rate for studied data sets. Figure 7: Typical convergence curves of error recognition rates for all studied data set of UCI using GA. It is worth noting that; Correct recognition rate = 100 Error recognition rate. Tables FORMULA2 and FORMULA3 present obtained from all studied data sets in all mentioned condition. In each condition, classification is done 10 times. Minimum, maximum and average of correct classification rates are calculated to evaluate accuracy and reliability. It should be mentioned that proposed method works based on error recognition rate, but we report correct recognition rate, here. In order to clear this Tables, consider Iris which has 4 features or dimensions. In first condition features are projected and combined into lower dimension (with 2 dimensions). In second, only combination of features is fulfilled under intelligent combination function and in third one, features are projected and combined into higher dimension (with 8 dimensions). In all conditions, classification is done using linear (L) and non-linear (NL) transfer function. As it can be implied from Tables and FORMULA3, best combination form for Iris is non-linear mapping that dimension of feature space is reduced, although obtained in different condition are approximately similar winner condition is more reliable and accurate. The best classification rate for Wine is obtained by non-linear combination method while feature space dimension is increased. It is completely different for Glass in order to achieve efficient classification system for Glass it is enough to combine features without any changing in feature space dimension. Also, using non-linear combination feature while dimension is reduced can lead to best recognition rates for Ionosphere. In order to compare the performance of proposed method with other combination methods, two common used combination methods, LDA and PCA, are considered in this section. Both methods reduce dimension of feature space. LDA reduce dimension of features to (C-1) dimensions which C is the number of classes. PCA is also reduced feature dimension, but in PCA projected feature Table 3: Obtained for Iris and Wine for 10 times classification in 6 conditions. Table 4: Obtained for Iris and Wine for 10 times classification in 6 conditions. space dimension may be absolutely less than original feature space dimension. TAB4 shows the mapping spaces for LDA and PCA. DISPLAYFORM0 DISPLAYFORM1 DISPLAYFORM2 In addition to LDA and PCA, obtained have been compared with other reported classification rate that used same data sets in other literatures as shown in TAB5. 
The obtained results, compared with the other results, show the importance of working on the data and features before using complex classifiers with high computational costs. In all data sets, both proposed methods (LIFC and NLIFC) provide high-quality features, so a simple classifier such as the minimum distance classifier can discriminate the classes easily and presents acceptable classification rates: for Iris, the correct recognition rate is increased from 94.66% to 100%, and for Wine this rate is increased from 76.96% to 99.43%. Also, the correct recognition rate reaches 86.91% for Glass and 97.15% for Ionosphere. In order to design a more efficient classification system, extracting useful knowledge and features from the data set is very important and helpful. In many cases, it is more reasonable to spend time and energy analyzing the features instead of using more complex classifiers with high computational costs. In this paper, intelligent feature combination is proposed to enhance the quality of features, and then the minimum distance classifier is used as a simple classifier to obtain results. The obtained results confirm that the appropriate kind of combination method depends on the nature of the data set and its features. For some data sets, using a non-linear mapping system while reducing the dimension of the feature space is useful, and sometimes using a linear mapping system while increasing the dimension of the feature space leads to a more efficient classification system. For Iris and Ionosphere, using the non-linear intelligent mapping system while reducing the dimension of the feature space yields correct recognition rates of 100% and 97.15%, respectively. Using non-linear intelligent mapping while increasing the dimension of the feature space leads to a correct recognition rate of 99.43% for Wine. It is interesting that the best results for Glass are obtained when the features are combined by non-linear mapping without any change in the dimension of the feature space.
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HJqUtdOaZ
A method for enriching and combining features to improve classification accuracy
Recent improvements to Generative Adversarial Networks (GANs) have made it possible to generate realistic images in high resolution based on natural language descriptions such as image captions. Furthermore, conditional GANs allow us to control the image generation process through labels or even natural language descriptions. However, fine-grained control of the image layout, i.e. where in the image specific objects should be located, is still difficult to achieve. This is especially true for images that should contain multiple distinct objects at different spatial locations. We introduce a new approach which allows us to control the location of arbitrarily many objects within an image by adding an object pathway to both the generator and the discriminator. Our approach does not need a detailed semantic layout; only the bounding boxes and the respective labels of the desired objects are needed. The object pathway focuses solely on the individual objects and is iteratively applied at the locations specified by the bounding boxes. The global pathway focuses on the image background and the general image layout. We perform experiments on the Multi-MNIST, CLEVR, and the more complex MS-COCO data set. Our experiments show that through the use of the object pathway we can control object locations within images and can model complex scenes with multiple objects at various locations. We further show that the object pathway focuses on the individual objects and learns features relevant for these, while the global pathway focuses on global image characteristics and the image background. Understanding how to learn powerful representations from complex distributions is the intriguing goal behind adversarial training on image data. While recent advances have enabled us to generate high-resolution images with Generative Adversarial Networks (GANs), currently most GAN models still focus on modeling images that either contain only one centralized object (e.g. faces (CelebA), objects (ImageNet), birds (CUB-200), flowers (Oxford-102), etc.) or images from one specific domain (e.g. LSUN bedrooms, LSUN churches, etc.). This means that, overall, the variance between images used for training GANs tends to be low BID14. However, many real-life images contain multiple distinct objects at different locations within the image and with different relations to each other. This is for example visible in the MS-COCO data set BID11, which consists of images showing different objects at different locations within one image. In order to model images with these complex relationships, we need models that can model images containing multiple objects at distinct locations. To achieve this, we need control over what kinds of objects are generated (e.g. persons, animals, objects, etc.), the location, and the size of these objects. This is a much more challenging task than generating a single object in the center of an image. Current work (BID10, BID9, BID6) often approaches this challenge by using a semantic layout as additional conditional input. While this can be successful in controlling the image layout and object placement, it also places a high burden on the generating process since a complete scene layout must be obtained first. We propose a model that does not require a full semantic layout, but instead only requires the desired object locations and identities (see Figure 1).
One part of our model, called the global pathway, is responsible for generating the general layout of the complete image, while a second path, the object pathway, is used to explicitly generate the features of different objects based on the relevant object label and location. The generator gets as input a natural language description of the scene (if existent), the locations and labels of the various objects within the scene, and a random noise vector. The global pathway uses this to create a scene layout encoding which describes high-level features, and generates a global feature representation from it. The object pathway generates a feature representation of a given object at a location described by the respective bounding box and is applied iteratively over the scene at the locations specified by the individual bounding boxes. We then concatenate the feature representations of the global and the object pathway and use this to generate the final image. The discriminator, which also consists of a global and an object pathway, gets as input the image, the bounding boxes and their respective object labels, and the textual description. The global pathway is then applied to the whole image and obtains a feature representation of the global image features. In parallel, the object pathway focuses only on the areas described by the bounding boxes and the respective object labels and obtains feature representations of these specific locations. Again, the outputs of both the global and the object pathway are merged and the discriminator is trained to distinguish between real and generated images. In contrast to previous work, we do not generate a scene layout of the whole scene but only focus on relevant objects which are placed at the specified locations, while the global consistency of the image is the responsibility of the other part of our model. To summarize our model and contributions: 1) We propose a GAN model that enables us to control the layout of a scene without the use of a scene layout. 2) Through the use of an object pathway which is responsible for learning features of different object categories, we gain control over the identity and location of arbitrarily many objects within a scene. 3) The discriminator judges not only if the image is realistic and aligned to the natural language description, but also whether the specified objects are at the given locations and of the correct object category. 4) We show that the object pathway does indeed learn relevant features for the different objects, while the global pathway focuses on general image features and the background. Having more control over the general image layout can lead to a higher quality of images (BID16, BID6) and is also an important requirement for semantic image manipulation BID5. Approaches that try to exert some control over the image layout utilize Generative Adversarial Nets BID3, Refinement Networks (e.g. BID1, Xu et al. (2018a)), recurrent attention-based models (e.g. BID12), autoregressive models (e.g. Reed et al. (2016c)), and even memory networks supplying the image generation process with previously extracted image features. One way to exert control over the image layout is by using natural language descriptions of the image, e.g. image captions, as shown by Reed et al. (2016b), Zhang et al. (2018a), and Xu et al. (2018b). However, these approaches are trained only with images and their respective captions, and it is not possible to specifically control the layout or placement of specific objects within the image.
Several approaches suggested using a semantic layout of the image, generated from the image caption, to gain more fine-grained control over the final image. BID10 and BID9 use a scene layout to generate images in which given objects are drawn within their specified segments based on the generated scene layout. BID6 use the image caption to generate bounding boxes of specific objects within the image and predict the object's shape within each bounding box. This is further extended by BID5 by making it possible to manipulate images on a semantic level. While these approaches offer a more detailed control over the image layout, they heavily rely on a semantic scene layout for the image generating process, often implying complex preprocessing steps in which the scene layout is constructed. The two approaches most closely related to ours are by BID16 and BID14. BID14 introduce a model that consists of individual "blocks" which are responsible for different object characteristics (e.g. color, shape, etc.). However, their approach was only tested on the synthetic SHAPES data set BID0, which has only comparatively low variability and no image captions. Reed et al. (2016b) condition both the generator and the discriminator on either a bounding box containing the object or keypoints describing the object's shape. However, the used images are still of relatively low variability (e.g. birds) and only contain one object, usually located in the center of the image. In contrast, we model images with several different objects at various locations and apply our object pathway multiple times to each image, both in the generator and in the discriminator. Figure 1: Both the generator and the discriminator of our model consist of a global and an object pathway. The global pathway focuses on global image characteristics, such as the background, while the object pathway is responsible for modeling individual objects at their specified location. Additionally, we use the image caption and bounding box label to obtain individual labels for each bounding box, while Reed et al. (2016b) only use the image caption as conditional information. For our approach, the central goal is to generate objects at arbitrary locations within a scene while keeping the scene overall consistent. For this we make use of a generative adversarial network (GAN) BID3. A GAN consists of two networks, a generator and a discriminator, where the generator tries to reproduce the true data distribution and the discriminator tries to distinguish between generated data points and data points sampled from the true distribution. We use the conditional GAN framework, in which both the generator and the discriminator get additional information, such as labels, as input. The generator G (see Figure 1) gets as input a randomly sampled noise vector z, the location and size of the individual bounding boxes bbox_i, a label for each of the bounding boxes encoded as a one-hot vector l_onehot_i, and, if existent, an image caption embedding ϕ obtained with a pretrained char-CNN-RNN network from Reed et al. (2016b). As a pre-processing step (A), the generator constructs labels label_i for the individual bounding boxes from the image caption ϕ and the provided labels l_onehot_i of each bounding box. For this, we concatenate the image caption embedding ϕ and the one-hot vector of a given bounding box l_onehot_i and create a new label embedding label_i by applying a matrix multiplication followed by a non-linearity (i.e. a fully connected layer).
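A minimal numpy sketch of this label-construction step (step A) is shown below; the shapes, the ReLU non-linearity, and all names are our own assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_bbox_label(phi, l_onehot, W, b):
    """Fuse caption embedding and bounding-box one-hot into label_i.

    phi:      (d_phi,) caption embedding (zeros if no caption exists)
    l_onehot: (n_classes,) one-hot label of this bounding box
    W, b:     parameters of the fully connected layer
    """
    x = np.concatenate([phi, l_onehot])
    return np.maximum(W @ x + b, 0.0)   # matrix multiplication + non-linearity (ReLU assumed)

# Example shapes: d_phi = 128, n_classes = 80, label dimension = 64
# W = rng.normal(size=(64, 208)); b = np.zeros(64)
# label_i = make_bbox_label(rng.normal(size=128), np.eye(80)[3], W, b)
```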
The resulting label label_i contains the previous label as well as additional information from the image caption, such as color or shape, and is potentially more meaningful. In case of missing image captions, we use the one-hot embedding l_onehot_i only. The generator consists of two different streams which get combined later in the process. First, the global pathway (B) is responsible for creating a general layout of the global scene. It processes the previously generated local labels label_i for each of the bounding boxes and replicates them spatially at the location of each bounding box. In areas where the bounding boxes overlap, the label embeddings label_i are summed up, while the areas with no bounding boxes remain filled with zeros. Convolutional layers are applied to this layout to obtain a high-level layout encoding, which is concatenated with the noise vector z and the image caption embedding ϕ, and the result is used to generate a general image layout f_global. Second, the object pathway (C) is responsible for generating features of the objects f_local_i within the given bounding boxes. This pathway creates a feature map of a predefined resolution using convolutional layers which receive the previously generated label label_i as input. This feature map is further transformed with a Spatial Transformer Network (STN) BID7 to fit into the bounding box at the given location on an empty canvas. The same convolutional layers are applied to each of the provided labels, i.e. we have one object pathway that is applied several times across different labels label_i and whose output feeds onto the corresponding coordinates on the empty canvas. Again, features within overlapping bounding box areas are summed up, while areas outside of any bounding box remain zero. As a final step, the outputs of the global and object pathways f_global and f_local_i are concatenated along the channel axis and are used to generate the image in the final resolution, using common GAN procedures. The specific changes of the generator compared to standard architectures are the object pathway, which generates additional features at specific locations based on provided labels, as well as the layout encoding, which is used as additional input to the global pathway. These two extensions can be added to the generator in any existing architecture with limited extra effort. The discriminator receives as input an image (either original or generated), the location and size of the bounding boxes bbox_i, the labels for the bounding boxes as one-hot vectors l_onehot_i, and, if existent, the image caption embedding ϕ. Similarly to the generator, the discriminator also possesses both a global (D) and an object (E) pathway. The global pathway takes the image and applies multiple convolutional layers to obtain a representation f_global of the whole image. The object pathway first uses an STN to extract the objects from within the given bounding boxes and then concatenates these extracted features with the spatially replicated bounding box label l_onehot_i. Next, convolutional layers are applied and the resulting features f_local_i are again added onto an empty canvas within the coordinates specified by the bounding box. Note, similarly to the generator, we only use one object pathway that is applied to multiple image locations, where the outputs are then added onto the empty canvas, summing up overlapping parts and keeping areas outside of the bounding boxes set to zero.
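The canvas mechanism shared by both pathways — place each object's feature map into its bounding box on a zero canvas and sum overlaps — can be made concrete with a short sketch. The version below uses simple bilinear resizing and slicing as a stand-in for the paper's STN, so it illustrates the placement logic rather than the actual implementation; tensor sizes and integer pixel coordinates are assumptions.

```python
import torch
import torch.nn.functional as F

def place_on_canvas(obj_features, bboxes, canvas_hw):
    """Sum per-object feature maps onto a zero canvas at their bounding boxes.
    obj_features: list of (C, h, w) tensors, one per object.
    bboxes: list of (x, y, w, h) in integer pixel coordinates of the canvas.
    canvas_hw: (H, W) of the canvas. Resizing stands in for the STN."""
    C = obj_features[0].shape[0]
    H, W = canvas_hw
    canvas = torch.zeros(C, H, W)
    for feat, (x, y, w, h) in zip(obj_features, bboxes):
        # Resize the object's feature map to the bounding box size...
        resized = F.interpolate(feat.unsqueeze(0), size=(h, w),
                                mode="bilinear", align_corners=False)[0]
        # ...and add it in place; overlapping boxes are summed, the rest stays zero.
        canvas[:, y:y + h, x:x + w] += resized
    return canvas
```

In the discriminator, the same idea runs in reverse: features are first cropped from the bounding box regions (in the paper via the STN) before being processed and placed back onto a canvas.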
Finally, the outputs of both the object and global pathways f_local_i and f_global are concatenated along the channel axis and we again apply convolutional layers to obtain a merged feature representation. At this point, the features are concatenated either with the spatially replicated image caption embedding ϕ (if existent) or the sum of all one-hot vectors l_onehot_i along the channel axis, one more convolutional layer is applied, and the output is classified as either generated or real. For the general training, we can utilize the same procedure that is used in the GAN architecture being modified with our proposed approach. In our work we mostly use the StackGAN (Zhang et al., 2018a) and AttnGAN (Xu et al., 2018b) frameworks, which use a modified objective function taking into consideration the additional conditional information and provided image captions. As such, our discriminator D and our generator G optimize the following conditional GAN objective:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{data}}\big[\log D(x, c)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z, c), c)\big)\big],$$

where x is an image, c is the conditional information for this image (e.g. label_i, bounding boxes bbox_i, or an image caption ϕ), z is a randomly sampled noise vector used as input for G, and p_data is the true data distribution. Zhang et al. (2018a) and others use an additional technique called conditioning augmentation for the image captions, which helps improve the training process and the quality of the generated images. In the experiments in which we use image captions (MS-COCO) we also make use of this technique (more detailed information about the implementation can be found in the Appendix). For the evaluation, we aim to study the quality of the generated images with a particular focus on the generalization capabilities and the contribution of specific parts of our model, in both controllable and large-scale cases. Thus, in the following sections, we evaluate our approach on three different data sets: the Multi-MNIST data set, the CLEVR data set, and the MS-COCO data set. In our first experiment, we used the Multi-MNIST data set BID2 for testing the basic functionality of our proposed model. Using the implementation provided by BID2, we created 50,000 images of resolution 64 × 64 px that contain exactly three normal-sized MNIST digits in non-overlapping locations on a black background.

Figure 2: Multi-MNIST images generated by the model. Training included only images with three individual normal-sized digits. Highlighted bounding boxes and yellow ground truth labels for visualization.

As a first step, we tested whether our model can learn to generate digits at the specified locations and whether we can control the digit identity, the generated digit's size, and the number of generated digits per image. According to the results, we can control the location of individual digits, their identity, and their size, even though all training images contain exactly three digits in normal size. Figure 2 shows that we can control how many digits are generated within an image (rows A-B, for two to five digits) and various sizes of the bounding box (row C). As a second step, we created an additional Multi-MNIST data set in which all training images contain only digits 0-4 in the top half and only digits 5-9 in the bottom half of the image. For testing digits in the opposite half, we can see that the model is indeed capable of generalizing the position (row D, left), i.e.
it can generate digits 0-4 in the bottom half of the image and digits 5-9 in the top half of the image. Nevertheless, we also observed that this does not always work perfectly, as the network sometimes alters digits towards the ones it has seen during training at the respective locations, e.g. producing a "4" more similar to a "9" if it is in the bottom half of the image, or generating a "7" more similar to a "1" if it is in the top half of the image. As a next step, we created a Multi-MNIST data set with images that only contain digits in the top half of the image, while the bottom half is always empty. We can see (Figure 2, row D, right) that the resulting model is not able to generate digits in the bottom half of the image (see Figure 6 in the Appendix for more details on this). Controlling for the location still works, i.e. bounding boxes are filled with "something", but the digit identity is not clearly recognizable. Thus, the model is able to control both the object identity and the object location within an image and can generalize to novel object locations to some extent. To test the impact of our model extensions, i.e. the object pathway in both the generator and the discriminator as well as the layout encoding, we performed ablation studies on the previously created Multi-MNIST data set with three digits at random locations. We first disabled the use of the layout encoding in the generator and left the rest of the model unchanged. In the results (Figure 2, row E, left), we can see that, overall, both the digit identity and the digit locations are still correct, but minor imperfections can be observed within various images. This is most likely due to the fact that the global pathway of the generator has no information about the digit identity and location until its features get merged with the object pathway. As a next test, we disabled the object pathway of the discriminator and left the rest of the model unmodified. Again, we see (row E, right) that we can still control the digit location, although, again, minor imperfections are visible. More strikingly, we have a noticeably higher error rate in the digit identity, i.e. the wrong digit is generated at a given location, most likely because there is no object pathway in the discriminator controlling the object identity at the various locations. In comparison, the imperfections are different when only the object pathway of the generator is disabled (row F, left). The layout encoding and the feedback of the discriminator seem to be enough to still produce the digits in the correct image location, but the digit identity is often incorrect or not recognizable at all. Finally, we tested disabling the object pathway in both the discriminator and the generator (see row F, right). This leads to a loss of control of both image location and identity, and sometimes even results in images with more or fewer than three digits per image. This shows that the layout encoding alone, without any of the object pathways, is not enough to control the digit identity and location. Overall, these results indicate that we do indeed need both the layout encoding, for a better integration of the global and object pathways, and the object pathways in both the discriminator and the generator, for optimal results. In our second experiment we used more complex images containing multiple objects of different colors and shapes.
The goal of this experiment was to evaluate the generalization ability of our object pathway across different object characteristics. For this, we performed tests similar to BID14, albeit on the more complex CLEVR data set BID8. In the CLEVR data set objects are characterized by multiple properties, in our case the shape, the color, and the size. Based on the implementation provided by BID8, we rendered 25,000 images with a resolution of 64 × 64 pixels containing 2-4 objects per image. The label for a given bounding box of an object is the object shape and color (both encoded as one-hot encodings and then concatenated), while the object size is specified through the height and width of the bounding box. Similar to the first experiment, we tested our model for controlling the object characteristics, size, and location. In the first row of FIG1 we present the results of the trained model, where the left image of each pair shows the originally rendered one, while the right image was generated by our model. We can confirm that the model can control both the location and the objects' shape and color characteristics. The model can also generate images containing an arbitrary number of objects (fourth and fifth pairs), even though a maximum of four objects per image was seen during training. The CLEVR data set offers a split specifically intended to test the generalization capability of a model, in which cylinders can be either red, green, purple, or cyan and cubes can be either gray, blue, brown, or yellow during training, while spheres can have any of these colors. During testing, the colors between cylinders and cubes are reversed. Based on these restrictions, we created a second data set of 25,000 training images for testing our model. Results of the test are shown in the second row of FIG1 (again, the left image of each pair shows the originally rendered one, while the right image was generated by our model). We can see that the color transfer to novel shape-color combinations takes place, but, similarly to the Multi-MNIST results, we can see some artifacts, where e.g. some cubes look a bit more like cylinders and vice versa. Overall, the CLEVR experiment confirms the indication that our model can control object characteristics (provided through labels) and object locations (provided through bounding boxes) and can generalize to novel object locations, novel amounts of objects per image, and novel object characteristic combinations within reasonable boundaries.

Table 1: Comparison of the Inception Score (IS) and Fréchet Inception Distance (FID) on the MS-COCO data set for different models. Note: the IS and FID values of our models are not necessarily directly comparable to the other models, since our model gets at test time, in addition to the image caption, up to three bounding boxes and their respective object labels as input. Table footnotes: (2) When using the ground truth bounding boxes at test time (as we do) the IS increases to 11.94 ± 0.09. (3) The FID score was calculated with samples generated with the pretrained model provided by the authors. (4) The authors report a "best" value of 25.89 ± 0.47, but when calculating the IS with the pretrained model provided by the authors we only obtain an IS of 23.61. Other researchers on the authors' Github website report a similar value for the pretrained model. (5) We use the updated source code (IS of 10.62) as our baseline model.

For our final experiment, we used the MS-COCO data set BID11 to evaluate our model on natural images of complex scenes.
In order to keep our evaluation comparable to previous work, we used the 2014 train/test split consisting of roughly 80,000 training and 40,000 test images and rescaled the images to a resolution of 256 × 256 px. At train time, we used the bounding boxes and object labels of the three largest objects within an image, i.e. we used zero to three bounding boxes per image. Similarly to work by BID9, we only considered objects that cover at least 2% of the image for the bounding boxes. To evaluate our results quantitatively, we computed both the Inception Score (IS, larger is better), which tries to evaluate how recognizable and diverse objects within images are, as well as the Fréchet Inception Distance (FID, smaller is better), which compares the statistics of generated images with real images BID4. As a qualitative evaluation, we generated images that contain more than one object and checked whether the bounding boxes can control the object placement. We tested our approach with two commonly used architectures for text-to-image synthesis, namely the StackGAN (Zhang et al., 2018a) and the AttnGAN (Xu et al., 2018b), and compared the images generated by these and our models. In the StackGAN, the training process is divided into two steps: first, it learns a generator for images with a resolution of 64 × 64 px based on the image captions, and second, it trains a second generator, which uses the smaller images (64 × 64 px) from the first generator and the image caption as input to generate images with a resolution of 256 × 256 px. Here, we added the object pathways and the layout encoding at the beginning of both the first generator and the second generator and used the object pathway in both discriminators. The other parts of the StackGAN architecture and all hyperparameters remain the same as in the original training procedure for the MS-COCO data set. We trained the model three times from scratch and randomly sampled three sets of 30,000 image captions from the test set for each model. We then calculated the IS and FID values on each of the nine samples of 30,000 generated images and report the averaged values. As presented in Table 1, our StackGAN with added object pathways outperforms the original StackGAN both on the IS and the FID, increasing the IS from 10.62 to 12.12 and decreasing the FID from 74.05 to 55.30. Note, however, that this might also be due to the additional information our model is provided with, as it receives up to three bounding boxes and respective bounding box labels per image in addition to the image caption. We also extended the AttnGAN by Xu et al. (2018b), the current state-of-the-art model on the MS-COCO data set (based on the Inception Score), with our object pathway to evaluate its impact on a different model.

Figure 4: Examples of images generated from the given caption from the MS-COCO data set. A) shows the original images and the respective image captions, B) shows images generated by our StackGAN+OP (with the corresponding bounding boxes for visualization), and C) shows images generated by the original StackGAN.

As opposed to the StackGAN, the AttnGAN consists of only one model which is trained end-to-end on the image captions by making use of multiple, intermediate discriminators. Three discriminators judge the output of the generator at an image resolution of 64 × 64, 128 × 128, and 256 × 256 px. Through this, the image generation process is guided at multiple levels, which helps during the training process.
Additionally, the AttnGAN implements an attention technique through which the networks focus on specific areas of the image for specific words in the image caption and adds an additional loss that checks if the image depicts the content as described by the image caption. Here, in the same way as for the StackGAN, we added our object pathway at the beginning of the generator as well as to the discriminator that judges the generator outputs at a resolution of 64 × 64 px. All other discriminators, the higher layers of the generator, and all other hyperparameters and training details stay unchanged. Table 1 shows that adding the object pathway to the AttnGAN increases the IS of our baseline model (the pretrained model provided by the authors) from 23.61 to 24.76, while the FID is roughly the same as for the baseline model. To evaluate whether the StackGAN model equipped with an object pathway (StackGAN+OP) actually generates objects at the given positions, we generated images that contain multiple objects and inspected them visually. Figure 4 shows some example images; more can be seen in the Appendix in FIG3. We can observe that the StackGAN+OP indeed generates images in which the objects are at appropriate locations. In order to more closely inspect our global and object pathways, we can also disable them during the image generation process. Figure 5 shows additional examples, in which we generate the same image with either the global or the object pathway disabled during the generation process. Row C of Figure 5 shows images in which the object pathway was disabled and, indeed, we observe that the images contain mostly background information, while objects at the location of the bounding boxes are either not present or of much less detail than when the object pathway is enabled. Conversely, row D of Figure 5 shows images which were generated when the global pathway was disabled. As expected, areas outside of the bounding boxes are empty, but we also observe that the bounding boxes indeed contain images that resemble the appropriate objects. These results indicate, as in the previous experiments, that the global pathway does indeed model holistic image features, while the object pathway focuses on specific, individual objects. When we add the object pathway to the AttnGAN (AttnGAN+OP) we observe similar results. Again, we are able to control the location and identity of objects through the object pathway; however, we observe that the AttnGAN+OP, as well as the AttnGAN in general, tends to place objects corresponding to specific features at many locations throughout the image. For example, if the caption contains the word "traffic light" the AttnGAN tends to place objects similar to traffic lights throughout the whole image. Since our model only focuses on generating objects at given locations, while not enforcing that these objects only occur at these locations, this behavior leads to the result that the AttnGAN+OP generates desired objects at the desired locations, but might also place the same object at other locations within the image. Note, however, that we only added the object pathway to the lowest generator and discriminator and that we might gain even more control over the object location by introducing object pathways to the higher generators and discriminators, too.

Figure 5: Examples of images generated from the given caption from the MS-COCO data set. A) shows the original images and the respective image captions, B) shows images generated by our StackGAN+OP (with the corresponding bounding boxes for visualization) with the object pathway enabled, C) shows images generated by our StackGAN+OP when the object pathway is disabled, and D) shows images generated by our StackGAN+OP when the global pathway is disabled.

In order to further evaluate the quality of the generations, we ran an object detection test on the generated images using a pretrained YOLOv3 network BID15. Here, the goal is to measure how often an object detection framework, which was trained on MS-COCO as well, can detect a specified object at a specified location. The results confirm the previously made observations: for both the StackGAN and the AttnGAN the object pathway seems to improve the image quality, since YOLOv3 detects a given object correctly more often when the images are generated with an object pathway as opposed to images generated with the baseline models. The StackGAN generates objects at the given bounding box, resulting in an Intersection over Union (IoU) greater than 0.3 for all tested labels and greater than 0.5 for 86.7% of the tested labels. In contrast, the AttnGAN tends to place salient object features throughout the image, which leads to an even higher detection rate by the YOLOv3 network, but a smaller average IoU (only 53.3% of the labels achieve an IoU greater than 0.3). Overall, our experiments on the MS-COCO data set indicate that it is possible to add our object pathway to pre-existing GAN models without having to change the overall model architecture or training process. Adding the object pathway provides us with more control over the image generation process and can, in some cases, increase the quality of the generated images as measured via the IS or FID. Our experiments indicate that we do indeed get additional control over the image generation process through the introduction of object pathways in GANs. This enables us to control the identity and location of multiple objects within a given image based on bounding boxes and thereby facilitates the generation of more complex scenes. We further find that the division of work into a global and an object pathway seems to improve the image quality both subjectively and based on quantitative metrics such as the Inception Score and the Fréchet Inception Distance. The results further indicate that the focus on global image statistics by the global pathway and the more fine-grained attention to detail of specific objects by the object pathway works well. This is visualized, for example, in rows C and D of Figure 5. The global pathway (row C) generates features for the general image layout and background but does not provide sufficient details for individual objects. The object pathway (row D), on the other hand, focuses entirely on the individual objects and generates features specifically for a given object at a given location. While this is the desired behavior of our model, it can also lead to sub-optimal images if there are no bounding boxes for objects that should be present within the image. This can often be the case if the foreground object is too small (in our case less than 2% of the total image) and is therefore not specifically labeled.
In this case, the objects are sometimes not modeled in the image at all, despite being prominent in the respective image caption, since the object pathway does not generate any features. We can observe this, for example, in images described as "many sheep are standing on the grass", where the individual sheep are too small to warrant a bounding box. In this case, our model will often only generate an image depicting grass and other details, while not containing any sheep at all. Another weakness is that bounding boxes that overlap too much (empirically, an overlap of more than roughly 30%) also often lead to sub-optimal objects at that location. Especially in the overlapping section of bounding boxes we often observe local inconsistencies or failures. This might be the result of our merging of the different features within the object pathway, since they are simply added to each other at overlapping areas. A more sophisticated merging procedure could potentially alleviate this problem. Another approach would be to additionally enhance the bounding box layout by predicting the specific object shape within each bounding box, as done for example by BID6. Finally, our model currently does not generate the bounding boxes and labels automatically. Instead, they have to be provided at test time, which somewhat limits the usability for unsupervised image generation. However, even when using ground truth bounding boxes, our models still outperform other current approaches that are tested with ground truth bounding boxes (e.g. BID6) based on the IS and FID. This is even without the additional need of learning to specify the shape within each bounding box, as done by BID6. In the future, this limitation can be avoided by extracting the relevant bounding boxes and labels directly from the image caption, as it is done for example by BID6. With the goal of understanding how to gain more control over the image generation process in GANs, we introduced the concept of an additional object pathway. Such a mechanism for differentiating between a scene representation and object representations allows us to control the identity, location, and size of arbitrarily many objects within an image, as long as the objects do not overlap too strongly. In parallel, a global pathway, similar to a standard GAN, focuses on the general scene layout and generates holistic image features. The object pathway, on the other hand, gets as input an object label and uses this to generate features specifically for this object, which are then placed at the location given by a bounding box. The object pathway is applied iteratively for each object at each given location and as such, we obtain a representation of individual objects at individual locations and of the general image layout (background, etc.) as a whole. The features generated by the object and global pathways are then concatenated and used to generate the final image output. Our tests on synthetic and real-world data sets suggest that the object pathway is an extension that can be added to common GAN architectures without much change to the original architecture and can, along with more fine-grained control over the image layout, also lead to better image quality. Here we provide some more details about the exact implementation of our experiments. To train our GAN approach on the Multi-MNIST (CLEVR) data set we use the Stage-I Generator and Discriminator from the StackGAN MS-COCO architecture (https://github.com/hanzhanggit/StackGAN-Pytorch).
In our following description, an upsample block describes the following sequence: nearest neighbor upsampling with factor 2, a convolutional layer with X filters (filter size 3 × 3, stride 1, padding 1), batch normalization, and a ReLU activation. The bounding box labels are one-hot vectors encoding the digit identity (CLEVR: encoding object shape and color). Please refer to Table 2 for detailed information on the individual layers described in the following. For all leaky ReLU activations, alpha was set to 0.2. In the object pathway of the generator we first create a zero tensor O_G which will contain the feature representations of the individual objects. We then spatially replicate each bounding box label into a 4 × 4 layout and apply two upsampling blocks. The resulting tensor is then added to the tensor O_G at the location of the bounding box using a spatial transformer network. In the global pathway of the generator we first obtain the layout encoding. For this we create a tensor that contains the one-hot labels at the location of the bounding boxes and is zero everywhere else. We then apply three convolutional layers, each followed by batch normalization and a leaky ReLU activation. We reshape the output and concatenate it with the noise tensor (sampled from a random normal distribution). This tensor is then fed into a dense layer, followed by batch normalization and a ReLU activation, and the output is reshaped to (−1, 4, 4). We then apply two upsampling blocks to obtain a tensor of shape (−1, 16, 16). At this point, the outputs of the object and the global pathway are concatenated along the channel axis to form a tensor of shape (−1, 16, 16). We then apply another two upsampling blocks, resulting in a tensor of shape (−1, 64, 64), followed by a convolutional layer and a TanH activation to obtain the final image of shape (−1, 64, 64). In the object pathway of the discriminator we first create a zero tensor O_D which will contain the feature representations of the individual objects. We then use a spatial transformer network to extract the image features at the locations of the bounding boxes. The one-hot label of each bounding box is spatially replicated and concatenated with the previously extracted features. We then apply a convolutional layer, batch normalization and a leaky ReLU activation to the concatenation of features and label and, again, use a spatial transformer network to resize the output to the shape of the respective bounding box before adding it to the tensor O_D. In the global pathway of the discriminator, we apply two convolutional layers, each followed by batch normalization and a leaky ReLU activation, and concatenate the resulting tensor with the output of the object pathway. After this, we again apply two convolutional layers, each followed by batch normalization and a leaky ReLU activation. We concatenate the resulting tensor with the conditioning information about the image content, in this case the sum of all one-hot vectors.
To this tensor we apply another convolutional layer, batch normalization, a leaky ReLU activation, and another convolutional layer, to obtain the final output of the discriminator. Similarly to the procedure of StackGAN and other conditional GANs, we train the discriminator to classify real images with correct labels (the sum of one-hot vectors supplied in the last step of the process) as real, while generated images with correct labels and real images with (randomly sampled) incorrect labels should be classified as fake.

StackGAN-Stage-I: For training the Stage-I generator and discriminator (images of size 64 × 64 pixels) we follow the same procedure and architecture outlined in the previous section about the training on the Multi-MNIST and CLEVR data sets. The only difference is that we now have image captions as an additional description of the image. As such, to obtain the bounding box labels we concatenate the image caption embedding (downloaded from https://github.com/reedscot/icml2016) and the one-hot encoded bounding box label and apply a dense layer with 128 units, batch normalization, and a ReLU activation to it, to obtain a label for each bounding box. In the final step of the discriminator, when we concatenate the feature representation with the conditioning vector, we use the image encoding as conditioning vector and do not use any bounding box labels at this step. The rest of the training proceeds as described in the previous section, except for the changed shape of the bounding box labels. All other details can be found in Table 2.

StackGAN-Stage-II: In the second part of the training, we train a second generator and discriminator to generate images with a resolution of 256 × 256 pixels. The generator gets as input images with a resolution of 64 × 64 pixels (generated by the trained Stage-I generator) and the image caption and uses them to generate images with a 256 × 256 pixel resolution. A new discriminator is trained to distinguish between real and generated images. On the Stage-II generator we perform the following modifications: we use the same procedure as in the Stage-I generator to obtain the bounding box labels. To obtain an image encoding from the generated 64 × 64 image we use three convolutional layers, each followed by batch normalization and a ReLU activation, to obtain a feature representation of shape [−1, 16, 16]. Additionally, we replicate each bounding box label (obtained with the dense layer) spatially at the locations of the bounding boxes on an empty canvas and then concatenate it along the channel axis with the image encoding and the spatially replicated image caption embedding. As in the standard StackGAN we then apply more convolutional layers with residual connections to obtain the final image embedding of shape [−1, 16, 16], which provides the input for both the object and the global pathway. The generator's object pathway gets as input the image encoding described in the previous step. First, we create a zero tensor O_G which will contain the feature representations of the individual objects. We then use a spatial transformer network to extract the features from within the bounding box and reshape those features to [−1, 16, 16]. After this, we apply two upsample blocks and then use a spatial transformer network to add the features to O_G within the bounding box region. This is done for each of the bounding boxes within the image.
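The upsample block defined above and the bounding-box feature extraction used by the discriminator's object pathway are straightforward to write down. The following PyTorch sketch is an illustration under our own assumptions (the exact filter counts per stage are given in the paper's Table 2); crops are extracted with simple resizing rather than the paper's spatial transformer network.

```python
import torch.nn as nn
import torch.nn.functional as F

def upsample_block(in_ch, out_ch):
    """Nearest neighbor upsampling x2, conv (3x3, stride 1, padding 1), BN, ReLU."""
    return nn.Sequential(
        nn.Upsample(scale_factor=2, mode="nearest"),
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

def extract_bbox_features(image, bbox, out_hw=(16, 16)):
    """Crop the bbox region (x, y, w, h) from a (C, H, W) image and resize it to a
    fixed spatial size, standing in for the STN-based extraction."""
    x, y, w, h = bbox
    crop = image[:, y:y + h, x:x + w].unsqueeze(0)
    return F.interpolate(crop, size=out_hw, mode="bilinear", align_corners=False)[0]
```

The replicated one-hot label would then be concatenated with these extracted features along the channel axis before the object pathway's convolutions are applied.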
The generator's global pathway gets as input the image encoding and uses the same convolutional layers and upsampling procedures as the original StackGAN Stage-II generator. The outputs of the object and global pathway are merged at the resolution of [−1, 64, 64] by concatenating the two outputs along the channel axis. After this, we continue using the standard StackGAN architecture to generate the final images. The Stage-II discriminator's object pathway first creates a zero tensor O_D which will contain the feature representations of the individual objects. It gets as input the image (resolution of 256 × 256 pixels) and we use a spatial transformer network to extract the features from the bounding box and reshape those features. We spatially replicate the bounding box label (one-hot encoding) to a shape of [−1, 32, 32] and concatenate it with the extracted features along the channel axis. This is then given to the object pathway, which consists of two convolutional layers with batch normalization and a LeakyReLU activation. The output of the object pathway is again transformed to the width and height of the bounding box with a spatial transformer network and then added to O_D. This procedure is performed with each of the bounding boxes within the image (maximum of three during training). The Stage-II discriminator's global pathway consists of the standard StackGAN layers, i.e. it gets as input the image (256 × 256 pixels) and applies convolutional layers with stride 2 to it. The outputs of the object and global pathways are merged at the resolution of [−1, 32, 32] by concatenating the two outputs along the channel axis. We then apply more convolutional layers with stride 2 to decrease the resolution. After this, we continue in the same way as the original StackGAN.

AttnGAN: On the AttnGAN we only modify the training at the lower layers of the generator and the first discriminator (working on images of 64 × 64 pixel resolution). For this, we perform the same modifications as described in the StackGAN-Stage-I generator and discriminator. In the generator we obtain the bounding box labels in the same way as in the StackGAN, by concatenating the image caption embedding with the respective one-hot vector and applying a dense layer with 100 units, batch normalization, and a ReLU activation to obtain a bounding box label. In contrast to the previous architectures, we follow the AttnGAN implementation in using the gated linear unit (GLU) as the standard activation for our convolutional layers in the generator. In the generator's object pathway we first create a zero tensor O_G which will contain the feature representations of the individual objects. We then spatially replicate each bounding box label into a 4 × 4 layout and apply two upsampling blocks with 768 and 384 filters (filter size=3, stride=1, padding=1). The resulting tensor is then added to the tensor O_G at the location of the bounding box using a spatial transformer network. In the global pathway of the generator we first obtain the layout encoding in the same way as in the StackGAN-I generator, except that the three convolutional layers of the layout encoding now have 50, 25, and 12 filters respectively (filter size=3, stride=2, padding=1). We concatenate it with the noise tensor (sampled from a random normal distribution) and the image caption embedding.
This tensor is then fed into a dense layer with 24,576 units, followed by batch normalization and a ReLU activation, and the output is reshaped. We then apply two upsampling blocks with 768 and 384 filters. At this point the outputs of the object and the global pathways are concatenated along the channel axis. We then apply another two upsampling blocks with 192 and 96 filters. This feature representation is then used by the following layers of the AttnGAN generator in the same way as detailed in the original paper and implementation. In the object pathway of the discriminator we first create a zero tensor O_D which will contain the feature representations of the individual objects. We then use a spatial transformer network to extract the image features at the locations of the bounding boxes. The one-hot label of each bounding box is spatially replicated to a shape of (−1, 16, 16) and concatenated with the previously extracted features. We then apply a convolutional layer with 192 filters (filter size=4, stride=1, padding=1), batch normalization and a leaky ReLU activation to the concatenation of features and label and, again, use a spatial transformer network to resize the output to the shape of the respective bounding box before adding it to the tensor O_D. In the global pathway of the discriminator we apply two convolutional layers with 96 and 192 filters (filter size=4, stride=2, padding=1), each followed by batch normalization and a leaky ReLU activation, and concatenate the resulting tensor with the output of the object pathway. After this, we again apply two convolutional layers with 384 and 768 filters (filter size=4, stride=2, padding=1), each followed by batch normalization and a leaky ReLU activation. We concatenate the resulting tensor with the spatially replicated image caption embedding. To this tensor we apply another convolutional layer with 768 filters (filter size=3, stride=1, padding=1), batch normalization, a leaky ReLU activation, and another convolutional layer with one filter (filter size=4, stride=4, padding=0), to obtain the final output of the discriminator. The rest of the training and all other hyperparameters and architectural values are left the same as in the original implementation.

Table 2: Layer details for the Multi-MNIST, CLEVR, MS-COCO-I, and MS-COCO-II architectures.

Figure 6: Systematic test of digits over vertically different regions. Training set included three normal-sized digits only in the top half of the image. Highlighted bounding boxes and yellow ground truth labels for visualization. We can see that the model fails to generate recognizable digits once their location is too far in the bottom half of the image, as this location was never observed during training.

Some examples show typical failure cases of our model, where there is no bounding box for the foreground object present. As a result, our model only generates the background, without the appropriate foreground object, even though the foreground object is very clearly described in the image caption.
Figure 9 provides similar results but for random bounding box positions. The first six examples show images generated by our StackGAN where we changed the location and size of the respective bounding boxes. The last three examples show failure cases in which we changed the location of the bounding boxes to "unusual" locations. For the image with the child on the bike, we put the bounding box of the bike somewhere in the top half of the image and the bounding box for the child somewhere in the bottom part. Similarly, for the man sitting on a bench, we put the bench in the top and the man in the bottom half of the image. Finally, for the image depicting a pizza on a plate, we put the plate location in the top half of the image and the pizza in the bottom half. Figure 8 shows results of text-to-image synthesis on the MS-COCO data set with the AttnGAN architecture. Rows A show the original image and image caption, rows B show the images generated by our AttnGAN + Object Pathway and the given bounding boxes for visualization, and rows C show images generated by the original AttnGAN (pretrained model obtained from https://github.com/taoxugit/AttnGAN). The last block of examples (last row) shows typical failure cases, in which the model does generate the appropriate object within the bounding box, but also places the same object at multiple other locations within the image. Similarly to the StackGAN, FIG4 shows images generated by our AttnGAN where we randomly change the location of the various bounding boxes. Again, the last three examples show failure cases where we put the locations of the bounding boxes at "uncommon" positions. In the image depicting the sandwiches we put the location of the plate in the top half of the image, in the image with the dogs we put the dogs' location in the top half, and in the image with the motorbike we put the human in the left half and the motorbike in the right half of the image. To further inspect the quality of the location and recognizability of the generated objects within an image, we ran a test on object detection using a YOLOv3 network BID15 that was also pretrained on the MS-COCO data set. We use the PyTorch implementation from https://github.com/ayooshkathuria/pytorch-yolo-v3 to get the bounding box and label predictions for our images. We follow the standard guidelines and keep all hyperparameters for the YOLOv3 network as in the implementation. We picked the 30 most common training labels (based on how many captions contain these labels) and evaluate the models on these labels; see TAB7. In the following, we evaluate how often the pretrained YOLOv3 network recognizes a specific object within a generated image that should contain this object based on the image caption. For example, we expect an image generated from the caption "a young woman taking a picture with her phone" to contain a person somewhere in the image, and we check whether the YOLOv3 network actually recognizes a person in the generated image. Since the baseline StackGAN and AttnGAN only receive the image caption as input (no bounding boxes and no bounding box labels) we decided to only use captions that clearly imply the presence of the given label (see TAB7). We chose this strategy in order to allow for a fair comparison of the resulting presence or absence of a given object.
Specifically, for a given label we choose all image captions from the test set that contain one of the associated words for this label (associated words were chosen manually, see TAB7) and then generated three images for each caption with each model. Finally, we counted the number of images in which the given object was detected by the YOLOv3 network. Table 4 shows the ratio of images for each label and each model in which the given object was detected at any location within the image. Additionally, for our models that also receive the bounding boxes as input, we calculated the Intersection over Union (IoU) between the ground truth bounding box (the bounding box supplied to the model) and the bounding box predicted by the YOLOv3 network for the recognized object. Table 4 presents the average IoU (for the models that have an object pathway) for each object in the images in which YOLOv3 detected the given object. For each image in which YOLOv3 detected the given object, we calculated the IoU between the predicted bounding box and the ground truth bounding box for the given object. In the cases in which either an image contains multiple instances of the given object (i.e. multiple different bounding boxes for this object were given to the generator) or YOLOv3 detects the given object multiple times, we used the maximum IoU between all predicted and ground truth bounding boxes for our statistics. Table 4 summarizes the results for the 30 tested labels. We can observe that the StackGAN with object pathway outperforms the original StackGAN when comparing the recall of the YOLOv3 network, i.e. in how many images with a given label the YOLOv3 network actually detected the given object. The recall of the original StackGAN is higher than 10% for 26.7% of the labels, while our StackGAN with object pathway results in a recall greater than 10% for 60% of the labels. The IoU is greater than 0.3 for every label, while 86.7% of the labels have an IoU greater than 0.5 (original images: 100%) and 30% have an IoU greater than 0.7 (original images: 96.7%). This indicates that we can indeed control the location and identity of various objects within the generated images. Compared to the StackGAN, the AttnGAN achieves a much greater recall, with 80% and 83.3% of the labels having a recall greater than 10% for the original AttnGAN and the AttnGAN with object pathway respectively. The difference in recall values between the original AttnGAN and the AttnGAN with object pathway is also smaller, with our AttnGAN having a higher (lower) recall than the original AttnGAN (we only count cases where the difference is at least 5%) in 26.7% (13.3%) of the labels. The average IoU, on the other hand, is a lot smaller for the AttnGAN than for the StackGAN. We only achieve an IoU greater than 0.3 (0.5, 0.7) for 53.3% (3.3%, 0%) of the labels. As mentioned in the discussion (subsection 4.4), we attribute this to the observation that the AttnGAN tends to place seemingly recognizable features of salient objects at arbitrary locations throughout the image. This might contribute to the overall higher recall but may negatively affect the IoU. Overall, these results further confirm our previous experiments and highlight that the addition of the object pathway to the different models not only enables direct control of object location and identity but can also help to increase the image quality.
The increase in image quality is supported by a higher Inception Score, a lower Fréchet Inception Distance (for StackGAN), and a higher performance of the YOLOv3 network in detecting objects within generated images.

Table 4: Results of YOLOv3 detections on generated and original images. Recall provides the fraction of images in which YOLOv3 detected the given object. IoU (Intersection over Union) measures the maximum IoU per image in which the given object was detected.
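The IoU statistic used throughout this evaluation is the standard box-overlap measure. A small self-contained sketch of the computation, including the per-image maximum over multiple predicted and ground-truth boxes, is given below (boxes in (x, y, w, h) pixel format; the function names are ours).

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x, y, w, h)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Intersection rectangle.
    ix = max(ax, bx)
    iy = max(ay, by)
    iw = max(0, min(ax + aw, bx + bw) - ix)
    ih = max(0, min(ay + ah, by + bh) - iy)
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def max_iou_per_image(predicted_boxes, ground_truth_boxes):
    """Maximum IoU over all predicted/ground-truth pairs, as used when an image
    contains multiple instances of an object or YOLOv3 fires multiple times."""
    return max(iou(p, g) for p in predicted_boxes for g in ground_truth_boxes)
```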
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
H1edIiA9KQ
Extend GAN architecture to obtain control over locations and identities of multiple objects within generated images.
The demand for abstractive dialog summarization is growing in real-world applications. For example, customer service centers or hospitals would like to summarize customer service interactions and doctor-patient interactions. However, few researchers have explored abstractive summarization on dialogs due to the lack of suitable datasets. We propose an abstractive dialog summarization dataset based on MultiWOZ. If we directly apply previous state-of-the-art document summarization methods to dialogs, there are two significant drawbacks: informative entities such as restaurant names are difficult to preserve, and the contents from different dialog domains are sometimes mismatched. To address these two drawbacks, we propose Scaffold Pointer Network (SPNet) to utilize the existing annotation on speaker role, semantic slot and dialog domain. SPNet incorporates these semantic scaffolds for dialog summarization. Since ROUGE cannot capture the two drawbacks mentioned, we also propose a new evaluation metric that considers critical informative entities in the text. On MultiWOZ, our proposed SPNet outperforms state-of-the-art abstractive summarization methods on all the automatic and human evaluation metrics. Summarization aims to condense a piece of text to a shorter version, retaining the critical information. On dialogs, summarization has various promising applications in the real world. For instance, an automatic doctor-patient interaction summary can save doctors a massive amount of time spent filling in medical records. There is also a general demand in industry for summarizing meetings in order to track project progress. Generally, multi-party conversations with interactive communication are more difficult to summarize than single-speaker documents. Hence, dialog summarization is a promising direction in the summarization field. There are two types of summarization: extractive and abstractive. Extractive summarization selects sentences or phrases directly from the source text and merges them into a summary, while abstractive summarization attempts to generate novel expressions to condense information. Previous dialog summarization research mostly studied extractive summarization. Extractive methods merge selected important utterances from a dialog to form the summary. Because dialogs are highly dependent on their histories, it is difficult to produce coherent discourses with a set of non-consecutive conversation turns. Therefore, extractive summarization is not the best approach to summarize dialogs. However, most modern abstractive methods focus on single-speaker documents rather than dialogs due to the lack of dialog summarization corpora. Popular abstractive summarization datasets like CNN/Daily Mail are built on news documents. The AMI meeting corpus is the common benchmark, but it only has extractive summaries. In this work, we introduce a dataset for abstractive dialog summarization based on MultiWOZ. Seq2Seq models such as Pointer-Generator have achieved high-quality summaries of news documents. However, directly applying a news summarizer to dialogs results in two drawbacks: informative entities such as place names are difficult to capture precisely, and contents in different domains are summarized unequally. To address these problems, we propose Scaffold Pointer Network (SPNet). SPNet incorporates three types of semantic scaffolds in dialog: speaker role, semantic slot, and dialog domain.
Firstly, SPNet adds separate encoders to the attentional Seq2Seq framework, producing distinct semantic representations for different speaker roles. Then, our method takes delexicalized utterances as input to produce a delexicalized summary, and fills in the slot values to generate the complete summary. Finally, we incorporate the dialog domain scaffold by jointly optimizing a dialog domain classification task along with the summarization task. We evaluate SPNet with both automatic and human evaluation metrics on MultiWOZ. SPNet outperforms Pointer-Generator and Transformer on all the metrics.

2 RELATED WORK

Rush et al. (2015) first applied modern neural models to abstractive summarization. Their approach is based on the Seq2Seq framework and attention mechanism, achieving state-of-the-art results on the Gigaword and DUC-2004 datasets. Later work proposed a copy mechanism in summarization, demonstrating its effectiveness by combining the advantages of the extractive and abstractive approaches. See et al. (2017) applied pointing as the copy mechanism and used a coverage mechanism to discourage repetition. Most recently, reinforcement learning (RL) has been employed in abstractive summarization. RL-based approaches directly optimize the objectives of summarization. However, deep reinforcement learning approaches are difficult to train and more prone to exposure bias. Recently, pre-training methods have become popular in NLP applications. BERT and GPT have achieved state-of-the-art performance in many tasks, including summarization. For instance, one approach pre-trains a hierarchical document encoder for extractive summarization. Another proposed two strategies to incorporate a pre-trained model (GPT) into the abstractive summarizer, achieving better performance. However, there has not been much research on adapting pre-trained models to dialog summarization. Dialog summarization, specifically meeting summarization, has been studied extensively. Previous work generally focused on statistical machine learning methods in extractive dialog summarization: one study used skip-chain conditional random fields (CRFs) as a ranking method in extractive meeting summarization, and another compared support vector machines (SVMs) with LDA-based topic models for producing decision summaries. However, abstractive dialog summarization was less explored due to the lack of a suitable benchmark. Recent work created abstractive dialog summary benchmarks with existing dialog corpora. One study annotated topic descriptions in the AMI meeting corpus as the summaries. However, the topics they defined are coarse, such as "industrial designer presentation". They also proposed a model with a sentence-gated mechanism incorporating dialog acts to perform abstractive summarization. Another work first built a model to summarize audio-visual meeting data with an abstractive method. However, previous work has not investigated the utilization of semantic patterns in dialog, so we explore it in depth in our work. As discussed above, state-of-the-art document summarizers are not applicable in conversation settings. We propose Scaffold Pointer Network (SPNet) based on Pointer-Generator. SPNet incorporates three types of semantic scaffolds to improve abstractive dialog summarization: speaker role, semantic slot and dialog domain. We first introduce Pointer-Generator. It is a hybrid of the typical Seq2Seq attention model and the pointer network. The Seq2Seq framework encodes the source sequence and generates the target sequence with the decoder. The input sequence is fed into the encoder token by token, producing the encoder hidden states h_i in each encoding step.
The decoder receives the word embedding of the previous word and generates a distribution to decide the target element in this step, retaining decoder hidden states s_t. In Pointer-Generator, the attention distribution a^t is computed as:

$$e_i^t = v^T \tanh(W_h h_i + W_s s_t + b_{attn}), \qquad a^t = \mathrm{softmax}(e^t),$$

where W_h, W_s, v and b_attn are all learnable parameters. With the attention distribution a^t, the context vector h*_t is computed as the weighted sum of the encoder's hidden states. The context vector is regarded as the attentional information in the source text:

$$h_t^* = \sum_i a_i^t h_i.$$

Pointer-Generator differs from the typical Seq2Seq attention model in the generation process. The pointing mechanism combines copying words directly from the source text with generating words from a fixed vocabulary. The generation probability p_gen is calculated as "a soft switch" to choose between copy and generation:

$$p_{gen} = \sigma\big(w_{h^*}^T h_t^* + w_s^T s_t + w_x^T x_t + b_{ptr}\big),$$

where x_t is the decoder input, and w_{h*}, w_s, w_x and b_ptr are all learnable parameters. σ is the sigmoid function, so the generation probability p_gen has a range of [0, 1]. The ability to select between copy and generation corresponds to a dynamic vocabulary. The pointer network forms an extended vocabulary for the copied tokens, including all the out-of-vocabulary (OOV) words that appear in the source text. The final probability distribution P(w) on the extended vocabulary is computed as follows:

$$P_{vocab}(w) = \mathrm{softmax}\big(V'(V[s_t, h_t^*] + b) + b'\big),$$
$$P(w) = p_{gen} P_{vocab}(w) + (1 - p_{gen}) \sum_{i: w_i = w} a_i^t,$$

where P_vocab is the distribution on the original vocabulary, and V, V', b and b' are learnable parameters used to calculate this distribution. Our Scaffold Pointer Network (depicted in Figure 1) is based on Pointer-Generator. The contribution of SPNet is three-fold: separate encoding for different roles, incorporating the semantic slot scaffold, and incorporating the dialog domain scaffold. Our encoder-decoder framework employs separate encoding for different speakers in the dialog. User utterances x_t^usr and system utterances x_t^sys are fed into a user encoder and a system encoder separately to obtain the encoder hidden states. The attention distributions and context vectors are calculated as described in section 3.1. In order to merge these two encoders in our framework, the decoder's hidden state s_0 is initialized by combining the final hidden states of the user and system encoders. The pointing mechanism in our model follows Equation 3, and we obtain the context vector h*_t from both encoders.

Figure 1: SPNet overview. The blue and yellow boxes are the user and system encoders, respectively. The encoders take the delexicalized conversation as input. The slot values are aligned with their slot positions. The pointing mechanism merges the attention distribution and the vocabulary distribution to obtain the final distribution. We then fill the slot values into the slot tokens to convert the template to a complete summary. SPNet also performs domain classification to improve the encoder representation.

We integrate the semantic slot scaffold by performing delexicalization on the original dialogs. Delexicalization is a common pre-processing step in dialog modeling. Specifically, delexicalization replaces the slot values with their semantic slot names (e.g. replacing 18:00 with [time]). It is easier for language modeling to process delexicalized texts, as they have a reduced vocabulary size. But the generated sentences lack semantic information due to the delexicalization. Some previous dialog system research ignored this issue or output a single delexicalized utterance as the generated response. We propose to perform delexicalization in dialog summarization, since delexicalized utterances can simplify dialog modeling. We fill the slots in the generated templates with the copy and pointing mechanism. We first train the model with the delexicalized utterances.
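To make the slot scaffold concrete, a minimal sketch of delexicalization and the reverse fill-in step is shown below. It assumes slot annotations in the style of MultiWOZ's belief spans; the function names and data format are ours, for illustration only.

```python
def delexicalize(utterance, slot_values):
    """Replace annotated slot values with slot tokens, e.g. '18:00' -> '[time]'.
    slot_values maps slot names to the surface strings found in the utterance."""
    for slot, value in slot_values.items():
        utterance = utterance.replace(value, f"[{slot}]")
    return utterance

def lexicalize(template, slot_values):
    """Fill slot tokens in a generated template back in with their values."""
    for slot, value in slot_values.items():
        template = template.replace(f"[{slot}]", value)
    return template

# Example:
# delexicalize("I booked a table at 18:00.", {"time": "18:00"})
#   -> "I booked a table at [time]."
```

In SPNet itself the fill-in is driven by the attention distribution over the source tokens, as described next, rather than by a fixed lookup like this.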
At generation time, the attention distribution a^t over the source tokens instructs the decoder to fill the slots with lexicalized values:

w ← value(w_{i*}),   i* = argmax_i a_i^t,   if w ∈ w_slot

Note that w_slot specifies the tokens that represent slot names (e.g. [hotel place], [time]). The decoder directly copies the lexicalized value value(w_i) conditioned on the attention distribution a_i^t. If w is not a slot token, then the probability P(w) is calculated as in Equation 4.

We integrate the dialog domain scaffold through a multi-task framework. The dialog domain indicates the content of the conversation task, for example booking a hotel, restaurant or taxi in the MultiWOZ dataset. Generally, the content varies across domains, so multi-domain summarization is more difficult than single-domain summarization. We include domain classification as an auxiliary task to incorporate the prior that different domains have different content. Feedback from the domain classification task provides domain-specific information for the encoder to learn better representations. For domain classification, we feed the concatenated encoder hidden state through a binary classifier with two linear layers, producing the domain probability d. The i-th element d_i of d represents the probability of the i-th domain:

d = σ(U'(U [h^usr; h^sys] + b_d) + b_d')

where U, U', b_d and b_d' are all trainable parameters in the classifier. We denote the loss function of summarization as loss_1 and that of domain classification as loss_2. Assuming the target word at timestep t is w*_t, loss_1 is the arithmetic mean of the negative log-likelihood of w*_t over the generated sequence:

loss_1 = −(1/T) Σ_{t=1}^{T} log P(w*_t)

The domain classification task is a multi-label binary classification problem. We use the binary cross-entropy loss between the i-th domain label d̂_i and the predicted probability d_i for this task:

loss_2 = −(1/|D|) Σ_{i=1}^{|D|} ( d̂_i log d_i + (1 − d̂_i) log(1 − d_i) )

where |D| is the number of domains. Finally, we reweight the classification loss with a hyperparameter λ, and the objective function is:

loss = loss_1 + λ loss_2

4 EXPERIMENTAL SETTINGS

We validate SPNet on the MultiWOZ-2.0 dataset. MultiWOZ consists of multi-domain conversations between a tourist and an information center clerk on various booking tasks or domains, such as booking restaurants, hotels, taxis, etc. There are 10,438 dialogs spanning seven domains; 3,406 of them are single-domain (8.93 turns on average) and 7,302 are multi-domain (15.39 turns on average). During MultiWOZ data collection, an instruction is provided for crowd workers to perform the task. We use these instructions as the dialog summaries; an example is shown in Table 2. Dialog domain labels are extracted from the existing MultiWOZ annotation. In the experiments, we split the dataset into 8,438 training, 1,000 validation, and 1,000 testing dialogs.
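As a concrete reference for the objective defined above, the sketch below computes loss_1 (mean negative log-likelihood), loss_2 (mean binary cross-entropy over the domains) and the reweighted sum. Names and shapes are illustrative assumptions, not the authors' code:

```python
import torch
import torch.nn.functional as F

def spnet_loss(step_log_probs, target_ids, domain_logits, domain_labels,
               lam=0.5):
    """Joint objective sketch: summarization NLL + weighted domain BCE.

    step_log_probs: (T, V)  log P(w) at each decoding step
    target_ids:     (T,)    gold summary token ids w*_t
    domain_logits:  (|D|,)  pre-sigmoid domain scores
    domain_labels:  (|D|,)  multi-hot gold domain labels (float)
    """
    # loss_1: mean negative log-likelihood of the gold tokens
    nll = -step_log_probs.gather(1, target_ids.unsqueeze(1)).mean()

    # loss_2: binary cross-entropy, averaged over the |D| domains
    bce = F.binary_cross_entropy_with_logits(domain_logits, domain_labels)

    # Final objective: loss = loss_1 + lambda * loss_2
    return nll + lam * bce
```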
"You are going to") significantly contribute to the ROUGE score and readability, but they are almost irrelevant to essential contents. The semantic slot values (e.g. [restaurant name], [time] ) are more essential compared to other words in the summary. However, ROUGE did not take this into consideration. To address this drawback in ROUGE, we propose a new evaluation metric: Critical Information Completeness (CIC). Formally, CIC is a recall of semantic slot information between a candidate summary and a reference summary. CIC is defined as follows: Models ROUGE-1 ROUGE-2 ROUGE-L CIC base (Pointer-Gen) 62 where V stands for a set of delexicalized values in the reference summary, Count match (v) is the number of values co-occurring in the candidate summary and reference summary, and m is the number of values in set V. In our experiments, CIC is computed as the arithmetic mean over all the dialog domains to retain the overall performance. CIC is a suitable complementary metric to ROUGE because it accounts for the most important information within each dialog domain. CIC can be applied to any summarization task with predefined essential entities. For example, in news summarization the proper nouns are the critical information to retain. We implemented our baselines with OpenNMT framework . We delexicalize utterances according to the belief span annotation. To maintain the generalizability of SPNet, we combine the slots that refer to the same information from different dialog domains into one slot (e.g. time). Instead of using pre-trained word embeddings like GloVe , we train word embeddings from scratch with a 128-dimension embedding layer. We set the hidden states of the bidirectional LSTM encoders to 256 dimensions, and the unidirectional LSTM decoder to 512 dimension. Our model is optimized using Adam with a learning rate of 0.001, β 1 = 0.9, β 2 = 0.999. We reduce the learning rate to half to avoid overfitting when the validation loss increases. We set the hyperparameter λ to 0.5 in the objective function and the batch size to eight. We use beam search with a beam size of three during decoding. We use the validation set to select the model parameter. Our model with and without multi-task takes about 15 epochs and seven epochs to converge, respectively. To demonstrate SPNet's effectiveness, we compare it with two state-of-the-art methods, PointerGenerator and Transformer . Pointer-Generator is the stateof-the-art method in abstractive document summarization. In inference, we use length penalty and coverage penalty mentioned in. The hyperparameters in the original implementation were used. Transformer uses attention mechanisms to replace recurrence for sequence transduction. Transformer generalizes well to many sequence-to-sequence problems, so we adapt it to our task, following the implementation in the official OpenNMT-py documentation. We show all the models' in Table 1. We observe that SPNet reaches the highest score in both ROUGE and CIC. Both Pointer-Generator and Transformer achieve high ROUGE scores, but a relative low CIC scores. It suggests that the baselines have more room for improvement on preserving critical slot information. All the scaffolds we propose can be applied to different neural network models. In this work we select Pointer-Generator as our base model in SPNet because we observe that Transformer only has a small improvement over Pointer-Generator but is having a higher cost on training time and computing resources. 
We observe that SPNet outperforms the other methods on all the automatic evaluation metrics by a large margin, as it incorporates all three semantic scaffolds. The semantic slot scaffold contributes the most to SPNet's increased performance, bringing the largest improvement on all automatic evaluation metrics.

Conversation
Client: I'm looking for a place to stay. It needs to be a guesthouse and include free wifi.
Help desk: There are 23 hotels that meet your needs. Would you like to narrow your search by area and/or price range?
Client: I would like for it to be cheap and include free parking.
Help desk: There are nine guesthouse hotels in various areas. What part of town are you hoping for?
Client: Nothing in particular. I just need it booked for 6 people for a total of 4 nights starting from sunday. I'd also like the reference number, please.
Help desk: Sure! We will book you in a hotel with 4 stars. The reservation will be for 6 people for 4 nights. How many rooms would you like to reserve?
Client: Well, for six people I suppose I will be needing three rooms, for four nights, starting Sunday.
Help desk: You're booked at the Alexander Bed and Breakfast, 517a coldham lane, for 6 people for four nights starting Sunday. Your reference number is JXVKZ7KV.
Client: Thank you! I'm also looking for a restaurant. Ideally an Italian place in the same price range in the centre.
Help desk: I found Ask restaurant would you like to book that?
Client: Is the name of the restaurant Ask restaurant?
Help desk: Yes, the restaurant is called Ask restaurant. Would you like to book a table there?
Client: Yes. Book it for the same group of people on the same day at 18:45. Can I get the reference number please?
Help desk: Great! Your booking for 6 people at the restaurant Ask on Sunday at 18:45 was successful! Your reference number is Y2Y8QYBY.
Client: I am all set. Thank you.
Help desk: Thank you for using the Cambridge TowneInfo Help Desk!

Summaries
Pointer-Generator: You are traveling to Cambridge and looking forward to try local restaurants. You are looking for a place to stay. The hotel should include free wifi and should be in the type of guesthouse. The hotel does n't need to include internet. If there is no such hotel, how about one that has free wifi. Once you find the hotel you want to book it for 6 people and 4 nights starting from Sunday. If the booking fails how about 1 nights. Make sure you get the reference number. You are also looking for information particular restaurant. Its name is called Ask restaurant restaurant. Make you you the price range and postcode.
SPNet: You are looking for information in Cambridge. You are looking for a place to stay. The hotel should be in the type of guest house and should include free wifi. The hotel should be in the cheap price range. Once you find the hotel you want to book it for 6 people and 4 nights starting from Sunday. Make sure you get the reference number. You are also looking for a restaurant. The restaurant should be in the same price range as the hotel and should serve Italian food. The restaurant should be in the same area as the hotel. Once you find the restaurant you want to book a table for the same group of people at 18:45 on the same day. Make sure you get the reference number.
Ground truth: You are planning your trip in Cambridge. You are looking for a place to stay. The hotel should include free wifi and should be in the type of guest house. The hotel should be in the cheap price range and should include free parking.
Once you find the hotel you want to book it for 6 people and 4 nights starting from Sunday. Make sure you get the reference number. You are also looking for a restaurant. The restaurant should be in the same price range as the hotel and should be in the centre. The restaurant should serve italian food. Once you find the restaurant you want to book a table for the same group of people at 18:45 on the same day. Make sure you get the reference number.

Table 2: An example dialog and the Pointer-Generator, SPNet and ground truth summaries. We underline semantic slots in the conversation. Red denotes incorrect slot values and green denotes the correct ones.

We also perform human evaluation to verify whether our method's increased performance on automatic evaluation metrics entails better human-perceived quality. We randomly select 100 test samples from the MultiWOZ test set for evaluation and recruit 150 crowd workers from Amazon Mechanical Turk. For each sample, we show the conversation, the reference summary, and the summaries generated by Pointer-Generator and SPNet to three different participants. The participants are asked to score each summary on three indicators — relevance, conciseness and readability — on a 1 to 5 scale, and to rank the summary pair (ties allowed).

We present the human evaluation results in Table 3. In the scoring part, SPNet outperforms Pointer-Generator on relevance and readability; all generated summaries are relatively concise, so they score very similarly on conciseness. Ground truth is still perceived as more relevant and readable than SPNet, although it does not receive a high absolute score: from the evaluators' feedback, we found that they felt the ground truth did not cover all the necessary information in the conversation and that its phrasing was not entirely natural. This motivates us to collect a dialog summarization dataset with high-quality human-written summaries in the future. The ranking evaluation reveals larger differences between the summaries: SPNet outperforms Pointer-Generator by a large margin, and its performance is relatively close to the ground truth summary.

Relevance / Conciseness / Readability — Ground truth: 3.83 / 3.67 / 3.87
Table 3: The upper part is the scoring results and the lower part is the ranking results. SPNet outperforms Pointer-Generator on all three human evaluation metrics, and the differences are significant with confidence over 99.5% under a Student's t-test. In the ranking part, the percentage of each choice is shown as a decimal; win, lose and tie refer to the standing of the former summary in each ranked pair.

Table 2 shows an example dialog along with the summaries from all models and the ground truth. We observe that Pointer-Generator ignores some essential fragments, such as the restaurant booking information (6 people, Sunday, 18:45). Missing information usually belongs to the last several domains (restaurant in this case) of a multi-domain dialog. We also observe that separately encoding the two speakers reduces repetition and inconsistency: for instance, Pointer-Generator's summary mentions "free wifi" several times and has conflicting requirements on wifi. This is because dialogs contain information redundancy, and a single-encoder model ignores this dialog property.

Our method has limitations. In the example shown in Table 2, our summary does not mention the hotel name (Alexander Bed and Breakfast) or its address (517a Coldham Lane) mentioned in the source.
This occurs because the ground truth summaries do not cover such details in the training data; as a supervised method, SPNet can hardly generate a summary containing additional information beyond the ground truth. However, in some cases SPNet can also correctly summarize content not covered in the reference summary (see Table 6 in the Appendix). Furthermore, although SPNet achieves much-improved performance, applying it still requires extra annotations for the semantic scaffolds. For a dialog dataset, the speaker role scaffold is a natural pattern to model, and most multi-domain dialog corpora carry domain annotations; for text such as news, topic categories such as sports or entertainment can serve as domain annotations. We find that the semantic slot scaffold brings the most significant improvement, but it is seldom explicitly annotated. However, the semantic slot scaffold can be relaxed to any critical entities in the corpus, such as team names in sports news or professional terminology in a technical meeting.

We adapt a dialog generation dataset, MultiWOZ, to an abstractive dialog summarization dataset. We propose SPNet, an end-to-end model that incorporates speaker role, semantic slot and dialog domain as semantic scaffolds to improve abstractive summary quality. We also propose an automatic evaluation metric, CIC, that considers semantic slot relevance and serves as a complementary metric to ROUGE. SPNet outperforms the baseline methods on both automatic and human evaluation metrics, suggesting that semantic scaffolds effectively improve abstractive summarization quality in the dialog setting. Moreover, we can easily extend SPNet to other summarization tasks. We plan to apply the semantic slot scaffold to news summarization; specifically, we can annotate critical entities such as person names or location names to ensure that they are captured correctly in the generated summary. We also plan to collect a human-human dialog dataset with more diverse human-written summaries.

A SUPPLEMENT TO CASE STUDY

Supplement Summary
Transformer: You are planning your trip in Cambridge. You are looking for a place to stay. The hotel doesn't need to include internet and should include free parking. The hotel should be in the type of guesthouse. If there is no such hotel, how about one that is in the moderate price range? Once you find the hotel, you want to book it for 6 people and 4 nights starting from Sunday. Make sure you get the reference number. You are also looking forward to dine. The restaurant should be in the centre. Make sure you get the reference number.

Which one is better in your opinion, summary 1 (Pointer-Generator) or summary 2 (SPNet) or tie?
Evaluator: Summary 2 (SPNet)
What is the reason for your rank? We recommend you to talk about it from relevance, conciseness and readability.
Evaluator: Summary 2 is more coherent and relevant. It is just as concise but more readable.
Which one is better in your opinion, summary 2 (SPNet) or summary 3 (Ground Truth) or tie?
Evaluator: Tie
What is the reason for your rank? We recommend you to talk about it from relevance, conciseness and readability.
Evaluator: Summary 2 and 3 are almost identical. They are equally coherent, relevant, and readable.

Help desk: Their postcode is cb11lh and if you want a reservation just let me know the time, party size, and day and I'll make it for you.
Client: Not now, thanks. Here's a weird one, though — are there any expensive 0-star hotels in the center of town?
Help desk: I'm sorry, but there are no expensive 0-star hotels in the area. Can I help you find something else?
Client: Do you have a hotel in the moderate price range?
Help desk: The Cityroomz Hotel is moderatly priced, and has free internet and parking. Would you like to make a reservation now?
Client: That sounds nice, yes please book a room for 3 guests staying for 5 nights starting on Sunday.
Help desk: I'm sorry, those particular dates are filled, if you'd like I could try to make the trip shorter or maybe change days?
Client: How about just for the one night?
Help desk: Alright, I was able to book your hotel room at the Cityroomz, for 3 people, for one night. Your reference number is E62IXZGB. Is there anything else I can do?
Client: I will also need a taxi to go from the restaurant to the hotel, I want to leave the restaurant at 02:15.
Help desk: I have booked your taxi. It will be a blue volvo and the contact number is 07729446320.
Client: Great. Thanks. That's all I need today.
Help desk: You're welcome. Have a wonderful visit.

Pointer-Generator: You are traveling to Cambridge. You are looking for a restaurant. The restaurant should serve 1 food and should be in the moderate price range. Make sure you get postcode and address. You are also looking for information in the centre. The hotel should have a star of 0 and should include free wifi. The hotel doesn't need to include internet. If there is no such hotel, how about one that doesn't have free parking? Once you find the hotel, you want to book it for 3 people and 5 nights starting from Sunday. If the booking fails, how about 1 night? Make sure you get the reference number.
SPNet: You are traveling to Cambridge and looking forward to try local restaurants. You are looking for a place to dine. The restaurant should be in the moderate price range and should be in the centre. Make sure you get postcode. You are also looking for a place to stay. The hotel should be in the type of hotel and should have 0-star. The hotel should be in the same area as the restaurant and should be in the same price range as the restaurant. Once you find the hotel, you want to book it for 3 people and 5 nights starting from Sunday. If the booking fails, how about 1 night? Make sure you get the reference number. You also want to book a taxi to commute between the 2 places. You want to leave the restaurant by 02:15. Make sure you get contact number and car type.
Ground truth: You are traveling to Cambridge and looking forward to try local restaurants. The restaurant should be in the centre and should be in the moderate price range. Make sure you get postcode. You are also looking for a place to stay. The hotel should be in the expensive price range and should have a star of 0. The hotel should be in the same area as the restaurant. If there is no such hotel, how about one that is in the moderate price range? Once you find the hotel you want to book it for 3 people and 5 nights starting from Sunday. If the booking fails how about 1 night. Make sure you get the reference number. You also want to book a taxi to commute between the 2 places. You want to leave the restaurant by 02:15. Make sure you get contact number and car type.

Table 5: An example dialog and the Pointer-Generator, SPNet and ground truth summaries. The dialog spans three domains: restaurant, hotel and taxi. We underline semantic slots in the conversation. Red denotes incorrect slot values and green denotes the correct ones.
B1eibJrtwr
We propose a novel end-to-end model (SPNet) to incorporate semantic scaffolds for improving abstractive dialog summarization.
Knowledge bases (KB), both automatically and manually constructed, are often incomplete --- many valid facts can be inferred from the KB by synthesizing existing information. A popular approach to KB completion is to infer new relations by combinatory reasoning over the information found along other paths connecting a pair of entities. Given the enormous size of KBs and the exponential number of paths, previous path-based models have considered only the problem of predicting a missing relation given two entities, or evaluating the truth of a proposed triple. Additionally, these methods have traditionally used random paths between fixed entity pairs or, more recently, learned to pick paths between them. We propose a new algorithm, MINERVA, which addresses the much more difficult and practical task of answering questions where the relation is known but only one entity is given. Since random walks are impractical in a setting with an unknown destination and combinatorially many paths from a start node, we present a neural reinforcement learning approach which learns how to navigate the graph conditioned on the input query to find predictive paths. On a comprehensive evaluation on seven knowledge base datasets, we found MINERVA to be competitive with many current state-of-the-art methods.

Automated reasoning, the ability of computing systems to make new inferences from observed evidence, has been a long-standing goal of artificial intelligence. We are interested in automated reasoning on large knowledge bases (KB) with rich and diverse semantics BID44 BID1 BID5. KBs are highly incomplete BID26, and facts not directly stored in a KB can often be inferred from those that are, creating exciting opportunities and challenges for automated reasoning. For example, consider the small knowledge graph in Figure 1. We can answer the question "Who did Malala Yousafzai share her Nobel Peace prize with?" from the following reasoning path: Malala Yousafzai → WonAward → Nobel Peace Prize 2014 → AwardedTo → Kailash Satyarthi. Our goal is to automatically learn such reasoning paths in KBs. We frame the learning problem as one of query answering, that is to say, answering questions of the form (Malala Yousafzai, SharesNobelPrizeWith, ?).

From its early days, the focus of automated reasoning approaches has been to build systems that can learn crisp symbolic logical rules BID24 BID34. Symbolic representations have also been integrated with machine learning, especially in statistical relational learning BID29 BID15 BID21 BID22, but due to poor generalization performance, these approaches have largely been superseded by distributed vector representations. Learning embeddings of entities and relations using tensor factorization or neural methods has been a popular approach (BID31 BID2, inter alia), but these methods cannot capture chains of reasoning expressed by KB paths. Neural multi-hop models BID30 BID17 BID47 address the aforementioned problems to some extent by operating on KB paths embedded in vector space. However, these models take as input a set of paths which are gathered by performing random walks independent of the query relation.

Figure 1: A small fragment of a knowledge base represented as a knowledge graph. Solid edges are observed and dashed edges are part of queries. Note how each query relation (e.g. SharesNobelPrizeWith, Nationality, etc.) can be answered by traversing the graph via "logical" paths between the entity 'Malala Yousafzai' and the corresponding answer.
Additionally, models such as those developed in BID30; BID9 use the same set of initially collected paths to answer a diverse set of query types (e.g. MarriedTo, Nationality, WorksIn, etc.). This paper presents a method for efficiently searching the graph for answer-providing paths using reinforcement learning (RL) conditioned on the input question, eliminating any need for precomputed paths. Given a massive knowledge graph, we learn a policy which, given the query (entity_1, relation, ?), starts from entity_1 and learns to walk to the answer node by choosing a labeled relation edge at each step, conditioning on the query relation and the entire path history. This formulates the query-answering task as a reinforcement learning (RL) problem where the goal is to take an optimal sequence of decisions (choices of relation edges) to maximize the expected reward (reaching the correct answer node). We call the RL agent MINERVA for "Meandering In Networks of Entities to Reach Verisimilar Answers."

Our RL-based formulation has many desirable properties. First, MINERVA has the built-in flexibility to take paths of variable length, which is important for answering harder questions that require complex chains of reasoning BID42. Secondly, MINERVA needs no pretraining and trains on the knowledge graph from scratch with reinforcement learning; no other supervision or fine-tuning is required, representing a significant advance over prior applications of RL in NLP. Third, our path-based approach is computationally efficient, since by searching in a small neighborhood around the query entity it avoids ranking all entities in the KB as in prior work. Finally, the reasoning paths found by our agent automatically form an interpretable provenance for its predictions.

The main contributions of the paper are: (a) We present the agent MINERVA, which learns to do query answering by walking on a knowledge graph conditioned on an input query, stopping when it reaches the answer node. The agent is trained using reinforcement learning, specifically policy gradients (§ 2). (b) We evaluate MINERVA on several benchmark datasets and compare favorably to Neural Theorem Provers (NTP) BID39 and Neural LP, which do logical rule learning in KBs, and also to state-of-the-art embedding based methods such as DistMult BID54, ComplEx BID48 and ConvE BID12. (c) We also extend MINERVA to handle partially structured natural language queries and test it on the WikiMovies dataset (§ 3.3) BID25.

We also compare to DeepPath BID53, which uses reinforcement learning to pick paths between entity pairs. The main difference is that the state of their RL agent includes the answer entity, since it is designed for the simpler task of predicting whether a fact is true or not. As such, their method cannot be applied directly to our more challenging query answering task, where the second entity is unknown and must be inferred. Nevertheless, MINERVA outperforms DeepPath on their benchmark NELL-995 dataset when compared in their experimental setting (§ 3.2.2).

We formally define the task of query answering in a KB. Let E denote the set of entities and R denote the set of binary relations. A KB is a collection of facts stored as triplets (e_1, r, e_2) where e_1, e_2 ∈ E and r ∈ R. From the KB, a knowledge graph G can be constructed, where the entities e_1, e_2 are represented as nodes and the relation r as a labeled edge between them.
Formally, a knowledge graph is a directed labeled multigraph G = (V, E, R), where V and E denote the vertices and edges of the graph respectively. The vertex set V coincides with the entity set E, and the edge set satisfies E ⊆ V × R × V. Also, following previous approaches BID2 BID30 BID53, we add the inverse relation of every edge, i.e. for an edge (e_1, r, e_2) ∈ E, we add the edge (e_2, r^{−1}, e_1) to the graph. (If the set of binary relations R does not contain the inverse relation r^{−1}, it is added to R as well.)

Since KBs have a lot of missing information, two natural tasks have emerged in the information extraction community — fact prediction and query answering. Query answering seeks to answer questions of the form (e_1, r, ?), e.g. (Toronto, locatedIn, ?), whereas fact prediction involves predicting whether a fact is true or not, e.g. (Toronto, locatedIn, Canada)?. Algorithms for fact prediction can be used for query answering, but with significant computational overhead, since all candidate answer entities must be evaluated, making it prohibitively expensive for large KBs with millions of entities. In this work, we present a query answering model that learns to efficiently traverse the knowledge graph to find the correct answer to a query, eliminating the need to evaluate all entities.

Query answering reduces naturally to a finite horizon sequential decision making problem as follows: we represent the environment as a deterministic partially observed Markov decision process on the knowledge graph G derived from the KB (§ 2.1). Our RL agent is given an input query of the form (e_1q, r_q, ?). Starting from the vertex corresponding to e_1q in G, the agent follows a path in the graph, stopping at a node that it predicts as the answer (§ 2.2). Using a training set of known facts, we train the agent with policy gradients, more specifically REINFORCE with control variates (§ 2.3).

Let us begin by describing the environment: a finite horizon, deterministic, partially observed Markov decision process on the knowledge graph G derived from the KB, specified as a 5-tuple (S, O, A, δ, R), each element of which we elaborate below.

States. The state space S consists of all valid combinations in E × E × R × E. Intuitively, we want a state to encode the query (e_1q, r_q), the answer (e_2q), and the location of exploration e_t (the current location of the RL agent). Thus a state S ∈ S is represented by S = (e_t, e_1q, r_q, e_2q).

Observations. The complete state of the environment is not observed: the agent knows its current location (e_t) and (e_1q, r_q), but not the answer (e_2q), which remains hidden. Formally, the observation function O: S → E × E × R is defined as O(S = (e_t, e_1q, r_q, e_2q)) = (e_t, e_1q, r_q).

Actions. The set of possible actions A_S from a state S = (e_t, e_1q, r_q, e_2q) consists of all outgoing edges of the vertex e_t in G. Formally, A_S = {(e_t, r, v) ∈ E : r ∈ R, v ∈ V} ∪ {(s, ∅, s)}. This means that at each state the agent can select which outgoing edge to take, knowing the label of the edge r and the destination vertex v. In the implementation, we unroll the computation graph up to a fixed number of time steps T, and we augment each node with a special action called 'NO OP' which goes from a node to itself. Some questions are easier to answer and need fewer steps of reasoning than others.
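As a concrete illustration of the environment just defined, the sketch below implements the graph with inverse edges, observations that hide the answer, the NO OP self-loop, the deterministic transition, and the terminal 0/1 reward described in the next paragraphs. All names and the integer encoding of relations are our own assumptions, not the authors' code:

```python
from collections import defaultdict

class KGEnvironment:
    """Illustrative sketch of the deterministic, partially observed
    environment. Relations are integer ids; the inverse of relation r
    is encoded as r + n_relations, and NO_OP uses id 2 * n_relations."""

    def __init__(self, triples, n_relations, max_steps=3):
        self.no_op = 2 * n_relations
        self.out_edges = defaultdict(list)
        for e1, r, e2 in triples:
            self.out_edges[e1].append((r, e2))
            self.out_edges[e2].append((r + n_relations, e1))  # inverse edge
        self.max_steps = max_steps

    def reset(self, e1, query_relation, answer):
        self.current, self.query, self.answer = e1, query_relation, answer
        self.t = 0
        return (self.current, self.query)     # observation hides the answer

    def actions(self):
        # All outgoing edges of the current node, plus the NO_OP self-loop
        return self.out_edges[self.current] + [(self.no_op, self.current)]

    def step(self, action_idx):
        _, dest = self.actions()[action_idx]
        self.current = dest                   # deterministic transition
        self.t += 1
        done = self.t >= self.max_steps
        reward = 1.0 if done and self.current == self.answer else 0.0
        return (self.current, self.query), reward, done
```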
The NO OP action allows the agent to remain at a node for any number of time steps. This is especially helpful when the agent has managed to reach a correct answer at a time step t < T and can continue to stay at the 'answer node' for the rest of the time steps. Alternatively, we could have allowed the agent to take a special 'STOP' action, but we found the current setup to work sufficiently well. As mentioned before, we also add the inverse relation of each triple, i.e. for the triple (e_1, r, e_2), we add the triple (e_2, r^{−1}, e_1) to the graph. We found this important because it allows our agent to undo a potentially wrong decision.

Transition. The environment evolves deterministically by simply updating the state to the new vertex incident to the edge selected by the agent; the query and answer remain the same. Formally, the transition function is δ: S × A → S defined by δ(S, A) = (v, e_1q, r_q, e_2q), where S = (e_t, e_1q, r_q, e_2q) and A = (e_t, r, v).

Rewards. We only have a terminal reward of +1 if the current location is the correct answer at the end, and 0 otherwise. To elaborate, if S_T = (e_t, e_1q, r_q, e_2q) is the final state, then we receive a reward of +1 if e_t = e_2q and 0 otherwise, i.e. R(S_T) = I{e_t = e_2q}.

To solve the finite horizon deterministic partially observable Markov decision process described above, we design a randomized non-stationary history-dependent policy π = (d_1, d_2, ..., d_{T−1}), where d_t: H_t → P(A_{S_t}) and the history H_t = (H_{t−1}, A_{t−1}, O_t) is just the sequence of observations and actions taken. We restrict ourselves to policies parameterized by a long short-term memory network (LSTM) BID19.

An agent based on an LSTM encodes the history H_t as a continuous vector h_t ∈ R^{2d}. We also have embedding matrices r ∈ R^{|R|×d} and e ∈ R^{|E|×d} for the binary relations and entities respectively. The history embedding for H_t = (H_{t−1}, A_{t−1}, O_t) is updated according to the LSTM dynamics:

h_t = LSTM(h_{t−1}, [a_{t−1}; o_t])

where a_{t−1} ∈ R^d and o_t ∈ R^d denote the vector representations of the action/relation at time t − 1 and the observation/entity at time t respectively, and [;] denotes vector concatenation. To elucidate, a_{t−1} = r_{A_{t−1}}, i.e. the embedding of the relation corresponding to the label of the edge the agent chose at time t − 1, and o_t = e_{e_t} if O_t = (e_t, e_1q, r_q), i.e. the embedding of the entity corresponding to the vertex the agent is at at time t. Based on the history embedding h_t, the policy network decides which action to take from all available actions (A_{S_t}), conditioned on the query relation. Recall that each possible action represents an outgoing edge with the edge relation label l and destination vertex/entity d, so the embedding of each A ∈ A_{S_t} is [r_l; e_d], and stacking the embeddings of all the outgoing edges we obtain the matrix A_t. The network taking these as inputs is parameterized as a two-layer feed-forward network with ReLU nonlinearity which takes in the current history representation h_t and the embedding of the query relation r_q, and outputs a probability distribution over the possible actions from which a discrete action is sampled. In other words,

d_t = softmax(A_t (W_2 ReLU(W_1 [h_t; r_q]))),    A_t ∼ Categorical(d_t)

Note that the nodes in G do not have a fixed ordering or number of outgoing edges. The size of the matrix A_t is |A_{S_t}| × 2d, so the decision probabilities d_t lie on a simplex of size |A_{S_t}|. Also, the procedure above is invariant to the order in which the edges are presented, as desired, and falls within the purview of neural networks designed to be permutation invariant BID56.
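The two sketches below, again illustrative rather than the authors' code, put the pieces together: a policy module implementing h_t = LSTM(h_{t−1}, [a_{t−1}; o_t]) and the two-layer feed-forward scoring of out-edges, followed by the REINFORCE procedure (empirical average over training facts, multiple rollouts, moving-average baseline as control variate, entropy bonus) described immediately below. The relation vocabulary is assumed to include the inverse and NO_OP ids used by the earlier environment sketch:

```python
import torch
import torch.nn as nn

class MinervaPolicy(nn.Module):
    """History-dependent policy sketch; dimensions are illustrative."""

    def __init__(self, n_entities, n_relation_ids, dim=64):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relation_ids, dim)   # incl. inverses, NO_OP
        self.lstm = nn.LSTMCell(2 * dim, 2 * dim)      # input [a_{t-1}; o_t]
        self.mlp = nn.Sequential(nn.Linear(3 * dim, 2 * dim), nn.ReLU(),
                                 nn.Linear(2 * dim, 2 * dim))

    def step(self, prev_rel, cur_ent, query_rel, state, edge_rels, edge_dests):
        # h_t = LSTM(h_{t-1}, [a_{t-1}; o_t])
        x = torch.cat([self.rel(prev_rel), self.ent(cur_ent)], dim=-1)
        h, c = self.lstm(x, state)
        # Score out-edges: A_t (W_2 ReLU(W_1 [h_t; r_q])), then sample
        q = self.mlp(torch.cat([h, self.rel(query_rel)], dim=-1)).squeeze(0)
        A = torch.cat([self.rel(edge_rels), self.ent(edge_dests)], dim=-1)
        dist = torch.distributions.Categorical(logits=A @ q)
        action = dist.sample()
        return action, dist.log_prob(action), dist.entropy(), (h, c)

def reinforce_update(policy, optimizer, env, batch, start_rel,
                     num_rollouts=20, beta=0.02, decay=0.95,
                     baseline=[0.0]):   # mutable default = persistent baseline
    """REINFORCE with moving-average baseline and entropy bonus (sketch)."""
    total_loss, total_reward, n = 0.0, 0.0, 0
    for e1, r, e2 in batch:                     # empirical average over D
        for _ in range(num_rollouts):           # fixed rollouts per fact
            env.reset(e1, r, e2)
            state, prev, done = None, start_rel, False
            log_probs, entropies, reward = [], [], 0.0
            while not done:
                rels, dests = zip(*env.actions())
                a, logp, ent, state = policy.step(
                    torch.tensor([prev]), torch.tensor([env.current]),
                    torch.tensor([r]), state,
                    torch.tensor(rels), torch.tensor(dests))
                prev = rels[a.item()]
                _, reward, done = env.step(a.item())
                log_probs.append(logp); entropies.append(ent)
            advantage = reward - baseline[0]    # additive control variate
            total_loss = (total_loss
                          - advantage * torch.stack(log_probs).sum()
                          - beta * torch.stack(entropies).mean())
            total_reward += reward; n += 1
    optimizer.zero_grad()
    (total_loss / n).backward()
    optimizer.step()
    baseline[0] = decay * baseline[0] + (1 - decay) * total_reward / n
```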
Finally, to summarize, the parameters of the LSTM, the weights W_1, W_2, the corresponding biases (not shown above for brevity), and the embedding matrices form the parameters θ of the policy network. For the policy network (π_θ) described above, we want to find parameters θ that maximize the expected reward:

J(θ) = E_{(e_1, r, e_2) ∼ D} E_{A_1, ..., A_{T−1} ∼ π_θ} [ R(S_T) ]

where we assume there is a true underlying distribution (e_1, r, e_2) ∼ D. To solve this optimization problem, we employ REINFORCE as follows:

• The first expectation is replaced with the empirical average over the training dataset.
• For the second expectation, we approximate by running multiple rollouts for each training example. The number of rollouts is fixed, and for all our experiments we set this number to 20.
• For variance reduction, a common strategy is to use an additive control variate baseline BID18 BID14 BID13. We use a moving average of the cumulative discounted reward as the baseline and tune the weight of this moving average as a hyperparameter. (In our experiments we found that a learned baseline performed similarly, but we settled for the moving average owing to its simplicity.)
• To encourage diversity in the paths sampled by the policy at training time, we add an entropy regularization term to our cost function, scaled by a constant β.

We now present empirical studies of MINERVA in order to establish that (i) MINERVA is competitive for query answering on small (Sec. 3.1.1) as well as large KBs (Sec. 3.1.2), (ii) MINERVA is superior to path-based models that do not search the KB efficiently or that train query-specific models (Sec. 3.2), (iii) MINERVA can not only be used for well-formed queries, but can also easily handle partially structured natural language queries (Sec. 3.3), (iv) MINERVA is highly capable of reasoning over long chains, and (v) MINERVA is robust to train and has much faster inference time (Sec. 3.5).

To gauge the reasoning capability of MINERVA, we begin with the task of query answering on KBs, i.e. we want to answer queries of the form (e_1, r, ?). Note that, as mentioned in Sec. 2, this task is subtly different from fact checking in a KB. Also, as most of the previous literature works in the regime of fact checking, their rankings include variations of both (e_1, r, x) and (x, r, e_2). However, since we do not have access to e_2 in the question answering scenario, the same ranking procedure does not hold for us — we only need to rank on (e_1, r, x). This difference in ranking made it necessary for us to re-run all the implementations of previous work. We used the implementations or the best pre-trained models (whenever available) of BID39 and BID12. For MINERVA to produce a ranking of answer entities during inference, we do a beam search with a beam width of 50, rank entities by the probability of the trajectory the model took to reach them, and give the remaining entities a rank of ∞.

Method We compare MINERVA with various state-of-the-art models using HITS@1,3,10 and mean reciprocal rank (MRR), which are standard metrics for KB completion tasks. In particular, we compare against the embedding based models DistMult BID54, ComplEx BID48 and ConvE BID12. For ConvE and ComplEx, we used the implementation released by BID12 1 on the best hyperparameter settings reported by them. For DistMult, we use our highly tuned implementation (which, e.g., performs better than the state-of-the-art results of BID46).
We also compare with two recent works on learning logical rules in KBs, namely Neural Theorem Provers (NTP) BID39 and NeuralLP. BID39 also report an NTP model which is trained with an additional ComplEx objective (NTP-λ). For these models, we used the implementations released by the corresponding authors 2 3, again on the best hyperparameter settings reported by them.

Table 3: Query answering on the KINSHIP and UMLS datasets.

Dataset We use three standard datasets: COUNTRIES BID3, KINSHIP, and UMLS BID21. The COUNTRIES dataset contains countries, regions, and subregions as entities and is carefully designed to explicitly test the logical rule learning and reasoning capabilities of link prediction models. The queries are of the form LocatedIn(c, ?) and the answer is a region (e.g. LocatedIn(Egypt, ?) with the answer Africa). The dataset has 3 tasks (S1-3 in Table 2), each requiring reasoning steps of increasing length and difficulty (see BID39 for more details about the tasks). Following the design of the COUNTRIES dataset, for tasks S1 and S2 we set the maximum path length T = 2, and for S3 we set T = 3. The Unified Medical Language System (UMLS) dataset comes from biomedicine: the entities are biomedical concepts (e.g. disease, antibiotic) and the relations include treats and diagnoses. The KINSHIP dataset contains kinship relationships among members of the Alyawarra tribe from Central Australia. For these two tasks we use a maximum path length of T = 2. Also, for MINERVA we turn off entity embeddings in these experiments.

Observations For the COUNTRIES dataset, in TAB1 we report a stronger metric — the area under the precision-recall curve — as is common in the literature. We can see that MINERVA compares favorably with or outperforms all the baseline models except on task S2 of COUNTRIES, where the ensemble model NTP-λ and ConvE outperform it, albeit with a higher variance across runs. Our gains are much more prominent on task S3, which is the hardest among all the tasks. The KINSHIP and UMLS datasets are small KB datasets with around 100 entities each, and as we see from Table 3, embedding based methods (ConvE, ComplEx and DistMult) perform much better than methods which aim to learn logical rules (NTP, NeuralLP and MINERVA). On KINSHIP, MINERVA outperforms both NeuralLP and NTP, and it matches the HITS@10 performance of NTP on UMLS. Unlike COUNTRIES, these datasets were not designed to test the logical rule learning ability of models, and given their small size, embedding based models are able to achieve very high performance. Combining both kinds of methods gives a slight increase in performance, as can be seen from the results of NTP-λ. However, when we initialized MINERVA with pre-trained ComplEx embeddings, we did not find a significant increase in performance.

Dataset Next we evaluate MINERVA on three large KG datasets — WN18RR, FB15K-237 and NELL-995. The WN18RR BID12 and FB15K-237 BID46 datasets are created from the original WN18 and FB15K datasets respectively by removing various sources of test leakage, making the datasets more realistic and challenging. The NELL-995 dataset released by BID53 has separate graphs for each query relation, where the graph for a query relation can contain triples from the test set of another query relation. For the query answering experiment, we combine all the graphs and remove all test triples (and the corresponding triples with inverse relations) from the combined graph. We also noticed that several triples in the test set had an entity (source or target) that never appeared in the graph.
Since there will be no trained embeddings for those entities, we removed them from the test set, which reduced its size from 3,992 queries to 2,818 queries. We observe that on FB15K-237, however, embedding based methods dominate over MINERVA and NeuralLP. Upon deeper inspection, we found that the query relation types of the FB15K-237 knowledge graph differ significantly from the others.

Analysis of query relations of FB15K-237: We analyzed the query relation types of the FB15K-237 dataset. Following BID2, we categorized the query relations into (M)any-to-1, 1-to-M or 1-to-1 relations. An example of an M-to-1 relation would be /people/profession (what is the profession of person X?). Examples of 1-to-M relations would be /music/instrument/instrumentalists (who plays the music instrument X?) or /people/ethnicity/people (who are the people with ethnicity X?). From a query answering point of view, the answer to these questions is a list of entities. However, at evaluation time, the model is judged on whether it is able to predict the one target entity that is in the query triple. Also, since MINERVA outputs the end points of paths as target entities, it is sometimes possible that the particular target entity of the triple has no path from the source entity (although there are paths to other 'correct' answer entities). TAB10 (in the appendix) shows a few other examples of relations belonging to different classes. Following BID2, we classify a relation as 1-to-M if the ratio of the cardinality of tail to head entities is greater than 1.5, and as M-to-1 if it is less than 0.67. In the validation set of FB15K-237, 54% of the queries are 1-to-M, whereas only 26% are M-to-1. Contrast this with NELL-995, where 27% are 1-to-M and 36% are M-to-1, or UMLS, where only 18% are 1-to-M. Table 10 (in the appendix) shows a few relations from the FB15K-237 dataset which have a high tail-to-head ratio. The average ratio for 1-to-M relations in FB15K-237 is 13.39 (substantially higher than 1.5). As explained before, the current evaluation scheme is not well suited to 1-to-M relations, and the high percentage of 1-to-M relations in FB15K-237 also explains the suboptimal performance of MINERVA.

We also checked the frequency of occurrence of the various unique path types. We define a path type as the sequence of relation types (ignoring the entities) in a path. Intuitively, a predictive path which generalizes across queries will occur many times in the graph. Figure 2 shows the plot. As we can see, the characteristics of FB15K-237 are quite different from those of the other datasets. For example, in NELL-995, more than 1000 different path types occur more than 1000 times. WN18RR has only 11 different relation types, which means there are only 11^3 possible path types of length 3 and even fewer of them would be predictive; as can be seen, there are a few path types which occur more than 10^4 times and around 50 of them occur more than 1000 times.

Figure 2: Count of the number of unique path types of length 3 which occur more than 'x' times in various datasets. For example, in NELL-995 there are more than 10^3 path types which occur more than 10^3 times. However, for FB15K-237, we see a sharp decrease as 'x' becomes higher, suggesting that path types do not repeat often.

However, in FB15K-237,
Since MINERVA cannot find path types which repeat often, it finds it hard to learn path types that generalize.

In this experiment, we compare to a model which gathers paths based on random walks and tries to predict the answer entity. Neural multi-hop models BID30 BID47 operate on paths between entity pairs in a KB. However, these methods need to know the target entity in order to pre-compute the paths between entity pairs. BID17 is an exception in this regard, as they do random walks starting from a source entity e_1 and then, using the path, train a classifier to predict the target answer entity; however, they only consider one path starting from a source entity. In contrast, BID30; BID47 use information from multiple paths between the source and target entity. We design a baseline model which combines the strengths of both these approaches. Starting from e_1, the model samples k = 100 random paths of up to a maximum length of T = 3. Following BID30, we encode each path with an LSTM followed by a max-pooling operation to featurize the paths. This feature is concatenated with the source entity and query relation vectors and then passed through a feed-forward network which scores all possible target entities. The network is trained with a multi-class cross-entropy objective based on observed triples, and during inference we rank target entities according to the model score. The PATH-BASELINE column of Table 4 shows the performance of this model on the three datasets. As we can see, MINERVA outperforms this baseline significantly. This shows that a model which predicts based on a set of randomly sampled paths does not do as well as MINERVA, because it either loses important paths during random walking or fails to aggregate predictive features from all k paths, many of which would be irrelevant to answering the given query. The latter is akin to the problem with distant supervision BID27, where important evidence gets lost amidst a plethora of irrelevant information. However, by taking each step conditioned on the query relation, MINERVA can effectively reduce the search space and focus on paths relevant to answering the query.

We also compare MINERVA with DeepPath, which uses RL to pick paths between entity pairs. For a fair comparison, we only rank the answer entities against the negative examples in the dataset used in their experiments 5 and report the mean average precision (MAP) score for each query relation. DeepPath feeds the paths its agent gathers as input features to the path ranking algorithm (PRA) BID22, which trains a per-relation classifier. Unlike them, we train a single model that learns for all query relations, so as to enable our agent to leverage correlations and more data. If our agent is not able to reach the correct entity or one of the negative entities, the corresponding entities get a score of negative infinity. If MINERVA fails to reach any of the entities in the set of correct and negative entities, then we fall back to a random ordering of the entities. As the results show, MINERVA matches or outperforms DeepPath on most query relations.

Queries in KBs are structured in the form of triples. However, this is unsatisfactory, since for most real applications the queries appear in natural language. As a first step in this direction, we extend MINERVA to take in "partially structured" queries. We use the WikiMovies dataset BID25, which contains questions in natural language, albeit generated by templates created by human annotators. An example question is "Which is a film written by Herb Freed?".
WikiMovies also has an accompanying KB which can be used to answer all the questions. We link the entity occurring in the question to the KB via simple string matching. To form the vector representation of the query relation, we design a simple question encoder which computes the average of the embeddings of the question words. The word embeddings are learned from scratch; we do not use any pretrained embeddings. We compare our results with those reported in TAB7. For this experiment, we found that T = 1 sufficed, suggesting that WikiMovies is not the best testbed for multi-hop reasoning, but this experiment is a promising first step towards the realistic setup of using KBs to answer natural language questions.

While chains in a KB need not be very long to get good empirical results BID30 BID9, in principle MINERVA can be used to learn long reasoning chains. To evaluate this, we test our model on the synthetic 16-by-16 grid world dataset created by the authors of Neural LP, where the task is to navigate to a particular cell (answer entity) starting from a random cell (start entity) by following a set of directions (query relation). The KB consists of atomic triples of the form (e_1, North, e_2) relating neighboring cells. The queries consist of a sequence of directions (e.g. North, SouthWest, East) and are classified into classes based on the path lengths. FIG2 shows the accuracy for varying path lengths. Compared to Neural LP, MINERVA is much more robust for queries that require longer paths, showing minimal degradation in performance even for the longest paths in the dataset.

Training time. Figure 5 plots the HITS@10 scores on the development set against the training time, comparing MINERVA with DistMult. It can be seen that MINERVA converges to a higher score much faster than DistMult. It is also interesting to note that even during the early stages of training, MINERVA has much higher performance than DistMult, even though during these initial stages MINERVA is essentially doing random walks in the neighborhood of the source entity e_1. This implies that MINERVA's approach of searching for an answer in the neighborhood of e_1 is a much more efficient and smarter strategy than ranking all entities in the knowledge graph (as done by DistMult and other related methods).

Inference time. At test time, embedding based methods such as ConvE, ComplEx and DistMult rank all entities in the graph. Hence, for a test-time query, the running time is always O(|E|), where E denotes the set of entities (= nodes) in the graph. MINERVA, on the other hand, is efficient at inference time, since it essentially searches for answer entities in its local neighborhood. The main cost at inference time for MINERVA is computing the probabilities of all outgoing edges along the path, so the inference time of MINERVA depends only on the degree distribution of the graph. If we assume the knowledge graph obeys a power law degree distribution, like many natural graphs, then the average inference time for MINERVA can be shown to be O(α/(α−1)) when the power-law exponent α > 1, and the median inference time is O(1) for all values of α. Note that these quantities are independent of the number of entities |E|. For instance, on the test set of WN18RR, the wall-clock inference time of MINERVA is 63s, whereas that of a GPU implementation of DistMult, which is the simplest among the lot, is 211s. Similarly, the wall-clock inference time on the test set of NELL-995 for a GPU implementation of DistMult is 115s, whereas that of MINERVA is 35s.
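To unpack the O(α/(α−1)) claim above, here is a short derivation under the hedged assumption that out-degrees follow a continuous Pareto distribution with exponent α and minimum degree 1:

```latex
% Pareto(alpha) out-degree model: f(d) = \alpha d^{-(\alpha+1)}, d \ge 1, \alpha > 1.
\mathbb{E}[d] \;=\; \int_{1}^{\infty} d \cdot \alpha d^{-(\alpha+1)} \,\mathrm{d}d
            \;=\; \alpha \int_{1}^{\infty} d^{-\alpha} \,\mathrm{d}d
            \;=\; \frac{\alpha}{\alpha-1},
\qquad
\operatorname{median}(d) \;=\; 2^{1/\alpha} \;=\; O(1).
```

Since each step only scores the out-edges of the current node, a T-step rollout then costs O(T · α/(α−1)) in expectation and O(T) at the median — independent of |E|, consistent with the wall-clock comparison above.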
Query based Decision Making. At each step, before making a decision, our agent conditions on the query relation. Figure 4 shows examples where, based on the query relation, the probabilities peak on different actions. For example, when the query relation is WorksFor, MINERVA assigns a much higher probability to taking the edge CoachesTeam than AthletePlaysInLeague. We also see similar behavior on the WikiMovies dataset, where the query consists of words instead of a fixed-schema relation.

Model Robustness. TAB8 also reports the mean and standard deviation across three independent runs of MINERVA. We found it easy to obtain and reproduce the highest scores across several runs, as can be seen from the low deviations in the scores. Similarly, the inverse relations give the agent the ability to recover from a potentially wrong decision taken earlier. Example (ii) below shows such a case, where the agent took an incorrect decision at the first step but was able to revert it because of the presence of the inverted edges.

Examples of reasoning paths: (i) MINERVA can learn general rules; (ii) MINERVA can learn shorter paths (e.g., for the entity Richard F. Velky).

Learning vector representations of entities and relations using tensor factorization BID31 BID32 BID2 BID38 BID33 BID54 or neural methods BID43 BID46 BID49 has been a popular approach to reasoning with a knowledge base. However, these methods cannot capture more complex reasoning patterns, such as those found by following inference paths in KBs. Multi-hop link prediction approaches (BID22 BID30 BID17 BID47 BID9) address the problems above, but the reasoning paths that they operate on are gathered by performing random walks independent of the type of query relation. BID22 further filter paths from the set of sampled paths based on the restriction that a path must end at one of the target entities in the training set and be within a maximum length. These constraints make the paths query dependent, but they are heuristic in nature. Our approach eliminates any necessity to pre-compute paths and learns to efficiently search the graph conditioned on the input query relation.

Inductive Logic Programming (ILP) BID29 aims to learn general-purpose predicate rules from examples and background knowledge. Early ILP systems such as FOIL BID36 and PROGOL BID28 are either rule-based or require negative examples, which are often hard to find in KBs (by design, KBs store true facts). Statistical relational learning methods BID15 BID21 BID41, along with probabilistic logic BID37 BID4, combine machine learning and logic, but these approaches operate on symbols rather than vectors and hence do not enjoy the generalization properties of embedding based approaches.

A few prior works treat inference as search over the space of natural language. BID35 propose a task (WikiNav) in which the nodes in the graph are Wikipedia pages and the edges are hyperlinks to other wiki pages. Each entity is represented by the text of its page, and hence the agent is required to reason over natural language to navigate the graph. Similar to WikiNav is Wikispeedia BID51, in which an agent needs to learn to traverse to a given target entity node (wiki page) as quickly as possible. BID0 propose natural logic inference, in which they cast inference as a search from a query to any valid premise.
Related Work. Learning vector representations of entities and relations using tensor factorization BID31 BID32 BID2 BID38 BID33 BID54 or neural methods BID43 BID46 BID49 has been a popular approach to reasoning with a knowledge base. However, these methods cannot capture more complex reasoning patterns, such as those found by following inference paths in KBs. Multi-hop link prediction approaches BID22 BID30 BID17 BID47 BID9 address this problem, but the reasoning paths they operate on are gathered by performing random walks that are independent of the type of the query relation. BID22 further filters paths from the set of sampled paths based on the restriction that a path must end at one of the target entities in the training set and be within a maximum length. These constraints make the paths query dependent, but they are heuristic in nature. Our approach eliminates any need to pre-compute paths and learns to efficiently search the graph conditioned on the input query relation.

Inductive Logic Programming (ILP) BID29 aims to learn general-purpose predicate rules from examples and background knowledge. Early work in ILP, such as FOIL BID36 and PROGOL BID28, is either rule-based or requires negative examples, which are often hard to find in KBs (by design, KBs store true facts). Statistical relational learning methods BID15 BID21 BID41, along with probabilistic logic BID37 BID4, combine machine learning and logic, but these approaches operate on symbols rather than vectors and hence do not enjoy the generalization properties of embedding-based approaches.

There is some prior work that treats inference as search over the space of natural language. BID35 propose a task (WikiNav) in which the nodes of the graph are Wikipedia pages and the edges are hyperlinks to other wiki pages. Each entity is represented by the text of its page, so the agent has to reason over natural language to navigate through the graph. Similar to WikiNav is Wikispeedia BID51, in which an agent needs to learn to traverse to a given target entity node (wiki page) as quickly as possible. BID0 propose natural logic inference, in which they cast inference as a search from a query to any valid premise; at each step, the actions are one of the seven lexical relations introduced by BID23.

Neural Theorem Provers (NTP) BID39 and Neural LP are methods for learning logical rules that can be trained end-to-end with gradient-based learning. NTPs are constructed from Prolog's backward-chaining inference method. They operate on vectors rather than symbols, thereby providing a success score for each proof path. However, since a score can be computed between any two vectors, the computation graph becomes quite large because of such soft matching during the substitution step of backward chaining. For tractability, NTPs resort to heuristics such as keeping only the top-K scoring proof paths, trading off guarantees of exact gradients. Also, the efficacy of NTPs has yet to be shown on large KBs. Neural LP introduces a differentiable rule-learning system using operators defined in TensorLog BID7. It has an LSTM-based controller with a differentiable memory component BID16 BID45, and the rule scores are calculated via attention. Even though differentiable memory allows end-to-end training, it necessitates accessing the entire memory, which can be computationally expensive; RL approaches capable of hard selection of memory BID57 are computationally attractive. MINERVA uses a similar hard selection of relation edges to walk on the graph. More importantly, MINERVA outperforms both these methods on their respective benchmark datasets.

DeepPath BID53 uses RL-based approaches to find paths in KBs. However, the state of its MDP requires the target entity to be known in advance, so its path-finding strategy depends on knowing the answer entity. MINERVA needs no knowledge of the target entity and instead learns to find the answer entity among all entities. DeepPath additionally feeds its gathered paths to the Path Ranking Algorithm BID22, whereas MINERVA is a complete system trained to do query answering. DeepPath also uses fixed pretrained embeddings for its entities and relations. Lastly, when comparing MINERVA with DeepPath in their experimental setting on the NELL dataset, we match or outperform their results.

MINERVA is also similar to methods for learning to search for structured prediction BID8 BID10 BID11 BID40 BID6. These methods are based on imitating a reference policy (oracle) that makes near-optimal decisions at every step. In our problem setting, it is unclear what a good reference policy would be. For example, a shortest-path oracle between two entities would not be ideal, since the answer-providing path should depend on the query relation.

Conclusion. We explored a new way of performing automated reasoning over large knowledge bases in which we use the knowledge-graph representation of the knowledge base and train an agent to walk to the answer node conditioned on the input query. We achieve state-of-the-art results on multiple benchmark knowledge base completion tasks, and we also show that our model is robust and can learn long chains of reasoning. Moreover, it needs no pretraining or initial supervision. Future research directions include applying more sophisticated RL techniques and working directly on textual queries and documents.

Table 10: A few example 1-to-M relations from FB15K-237 with a high cardinality ratio of tails to heads.

Experimental Details. We choose the relation and entity embedding size as 200. The action embedding is formed by concatenating the entity and relation embeddings. We use a 3-layer LSTM with a hidden size of 400. The hidden-layer size of the MLP (weights W1 and W2) is set to 400.
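Purely as an illustration of the architecture just described, the following PyTorch sketch uses the stated dimensions (embedding size 200, 3-layer LSTM with hidden size 400, MLP hidden size 400); the module names, the state construction, and the dot-product scoring of candidate actions are our simplifications, not a faithful reproduction of the released implementation.

```python
# Minimal sketch of the policy network: an LSTM over the path history plus
# an MLP (weights W1, W2) that scores candidate (relation, entity) actions.
import torch
import torch.nn as nn

class PolicyNetwork(nn.Module):
    def __init__(self, num_entities, num_relations, emb_dim=200, hidden=400):
        super().__init__()
        self.entity_emb = nn.Embedding(num_entities, emb_dim)
        self.relation_emb = nn.Embedding(num_relations, emb_dim)
        # History encoder over (relation, entity) action embeddings.
        self.history = nn.LSTM(2 * emb_dim, hidden, num_layers=3, batch_first=True)
        self.w1 = nn.Linear(hidden + emb_dim, hidden)
        self.w2 = nn.Linear(hidden, 2 * emb_dim)

    def forward(self, path_entities, path_relations, query_relation,
                cand_entities, cand_relations):
        # Action embedding = [relation embedding; target entity embedding].
        actions = torch.cat([self.relation_emb(path_relations),
                             self.entity_emb(path_entities)], dim=-1)
        h, _ = self.history(actions)                  # (B, T, hidden)
        state = torch.cat([h[:, -1],
                           self.relation_emb(query_relation)], dim=-1)
        proj = self.w2(torch.relu(self.w1(state)))    # (B, 2 * emb_dim)
        cands = torch.cat([self.relation_emb(cand_relations),
                           self.entity_emb(cand_entities)], dim=-1)  # (B, A, 2*emb_dim)
        scores = torch.einsum('bd,bad->ba', proj, cands)
        return torch.distributions.Categorical(logits=scores)
```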
We use Adam BID20 with the default parameters for the REINFORCE updates. In our experiments, we tune our model over two hyperparameters, viz., β, the entropy regularization constant, and λ, the moving-average constant for the REINFORCE baseline (a toy sketch of this update is given below). Table 11 lists the best hyperparameters for all datasets. The NELL dataset released by BID53 includes two additional tasks for which scores were not reported in the paper, so we were unable to compare against DeepPath on them. Nevertheless, we ran MINERVA on these tasks and report our results.
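As a toy sketch of the REINFORCE update referenced above (the β and λ defaults below are placeholders, not the tuned values from Table 11):

```python
# Minimal sketch: one REINFORCE update with entropy regularization (beta)
# and an exponential moving-average baseline (decay lam).
import torch

def reinforce_step(optimizer, log_probs, entropies, rewards, baseline,
                   beta=0.02, lam=0.05):
    # log_probs, entropies: (B, T) per-step values from the policy;
    # rewards: (B,) terminal rewards (carry no gradient); baseline: float.
    advantage = rewards - baseline
    loss = -(advantage.unsqueeze(1) * log_probs).sum(dim=1).mean()
    loss = loss - beta * entropies.sum(dim=1).mean()  # entropy bonus
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Moving-average baseline for variance reduction.
    return (1 - lam) * baseline + lam * rewards.mean().item()
```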