We propose a novel unsupervised generative model, Elastic-InfoGAN, that learns to disentangle object identity from other low-level aspects in class-imbalanced datasets. We first investigate the issues surrounding the uniformity assumptions made by InfoGAN, and demonstrate its inability to properly disentangle object identity in imbalanced data. Our key idea is to make the discovery of the discrete latent factor of variation invariant to identity-preserving transformations of real images, and to use that as the signal for learning the latent distribution's parameters. Experiments on both artificial (MNIST) and real-world (YouTube-Faces) datasets demonstrate the effectiveness of our approach on imbalanced data through: (i) better disentanglement of object identity as a latent factor of variation; and (ii) better approximation of the class imbalance in the data, as reflected in the learned parameters of the latent distribution.

Generative models aim to model the true data distribution, so that fake samples that seemingly belong to the modeled distribution can be generated. Recent deep neural network based models such as Generative Adversarial Networks and Variational Autoencoders have led to promising results in generating realistic samples for high-dimensional and complex data such as images. More advanced models show how to discover disentangled representations, in which different latent dimensions can be made to represent independent factors of variation (e.g., pose, identity) in the data (e.g., human faces). InfoGAN in particular tries to learn an unsupervised disentangled representation by maximizing the mutual information between the discrete or continuous latent variables and the corresponding generated samples. For discrete latent factors (e.g., digit identities), it assumes that they are uniformly distributed in the data, and approximates them accordingly with a fixed uniform categorical distribution. Although this assumption holds for many existing benchmark datasets, real-world data often follows a long-tailed distribution and rarely exhibits perfect balance between the categories. Indeed, applying InfoGAN to imbalanced data can result in incoherent groupings, since it is forced to discover potentially non-existent factors that are uniformly distributed in the data; see Fig. 1. In this work, we augment InfoGAN to discover disentangled categorical representations from imbalanced data. Our model, Elastic-InfoGAN, makes two simple and intuitive modifications to InfoGAN. First, we remodel the way latent variables are drawn from the latent distribution: we lift the assumption of any prior knowledge about the class imbalance, and instead of fixing the class probabilities beforehand, we treat them as learnable parameters of the optimization process. To enable gradients to flow back to the class probabilities, we employ the Gumbel-Softmax distribution, which acts as a proxy for the categorical distribution and generates differentiable samples with properties similar to those of categorical samples. Second, we enforce the network to assign the same latent category to an image I and its transformed version I', which induces the discovered latent factors to be invariant to identity-preserving transformations such as illumination, translation, rotation, and scale changes.
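As a rough illustration of the first modification (class probabilities kept as learnable parameters and sampled through the Gumbel-Softmax relaxation so that gradients reach the prior), the following PyTorch sketch shows the idea; the variable names, shapes, and initialization are illustrative and not taken from the authors' code:

```python
import torch
import torch.nn.functional as F

k = 10                                              # number of latent categories (e.g., 10 MNIST digits)
prior_logits = torch.zeros(k, requires_grad=True)   # uniform initialization; learned jointly with G and Q

def sample_latent_codes(batch_size, tau=0.1):
    """Draw differentiable, near-one-hot categorical codes c ~ Gumbel-Softmax(p, tau)."""
    logits = prior_logits.expand(batch_size, k)      # logits act as (unnormalized) log class probabilities
    return F.gumbel_softmax(logits, tau=tau, hard=False)   # gradients flow back to prior_logits

# class probabilities recovered from the learned logits
p = torch.softmax(prior_logits, dim=0)
```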
Although there are multiple meaningful ways to partition unlabeled data -- e.g., with digits, one partitioning could be based on identity, whereas another could be based on stroke width -- we aim to discover the partitioning that groups objects according to a high-level factor like identity while being invariant to low-level "nuisance" factors like lighting, pose, and scale changes. (Figure 1, left and center: samples generated with an InfoGAN model learned with a fixed uniform categorical distribution Cat(K = 10, p = 0.1) on balanced and imbalanced data, respectively; each row corresponds to a different learned latent category. Right: samples generated with Elastic-InfoGAN using its automatically learned latent categorical distribution. Although InfoGAN discovers digit identities in the balanced data, it produces redundant/incoherent groupings in the imbalanced data; in contrast, our model is able to discover digit identities in the imbalanced data.) Such partitionings focusing on object identity are more likely to be useful for downstream visual recognition applications (e.g., semi-supervised object recognition). In sum, our modifications to InfoGAN lead to better disentanglement and categorical grouping of the data (Fig. 1), while at the same time enabling the discovery of the original imbalance through the learned probability parameters of the Gumbel-Softmax distribution. Importantly, these modifications do not impede InfoGAN's ability to jointly model both continuous and discrete factors in either balanced or imbalanced data scenarios. Our contributions can be summarized as follows. To our knowledge, our work is the first to tackle the problem of unsupervised generative modeling of categorical disentangled representations in imbalanced data. We show qualitatively and quantitatively our superiority in comparison to InfoGAN and other relevant baselines. Our work takes a step forward in the direction of modeling real data distributions, by not only explaining what modes of a factor of variation are present in the data, but also discovering their respective proportions.

Disentangled representation learning. Learning disentangled representations of data has a vast literature. InfoGAN is one of the most popular unsupervised GAN-based disentanglement methods; it learns disentanglement by maximizing the mutual information between the latent codes and the generated images. It has shown promising results for discovering meaningful latent factors in balanced datasets like MNIST, CelebA, and SVHN. The recent method of JointVAE extends beta-VAE by jointly modeling both continuous and discrete factors, using Gumbel-Softmax sampling. However, both InfoGAN and JointVAE assume uniformly distributed data, and hence fail to be equally effective on imbalanced data, as evident from Fig. 1 and our experiments. Our work proposes modifications to InfoGAN that enable it to discover meaningful latent factors in imbalanced data.

Learning from imbalanced data. Real-world data have a long-tailed distribution, which can impede learning, since the model can get biased towards the dominant categories. To alleviate this issue, researchers have proposed re-sampling and class reweighting techniques to oversample rare classes and down-weight dominant classes. These methods have been shown to be effective in the supervised setting, in which the class distributions are known a priori. (Figure 2: Elastic-InfoGAN takes a categorical code sampled from a Gumbel-Softmax distribution and a noise vector to generate fake samples. Apart from the original InfoGAN loss functions, there are two additional constraints: real images x and transformed versions x', created with identity-preserving operations (e.g., small rotations), have their inferred latent code distributions forced to be close, and their entropy is constrained to be low. The use of differentiable latent variables from the Gumbel-Softmax enables gradients to flow back to the class probabilities to update them.)
There are also unsupervised clustering methods that deal with imbalanced data with unknown class distributions. Our model works in the same unsupervised setting; however, unlike these methods, we propose an unsupervised generative model that learns to disentangle latent categorical factors in imbalanced data.

Leveraging data augmentation for unsupervised image grouping. Some works use data augmentation for transformation-invariant unsupervised clustering or representation learning. The main idea is to maximize the mutual information or similarity between the features of an image and its corresponding transformed image. However, unlike our approach, these methods do not target imbalanced data and do not perform generative modeling.

Let X = {x_1, x_2, ..., x_N} be a dataset of N unlabeled images from k different classes. No knowledge about the nature of the class imbalance is available beforehand. Our goal is twofold: (i) learn a generative model G which can disentangle object category from other aspects (e.g., digits in MNIST, face identity in YouTube-Faces); and (ii) recover the unknown true class-imbalance distribution via the generative modeling process. In the following, we first briefly discuss InfoGAN, which addressed this problem in the balanced setting. We then explain how InfoGAN can be extended to the scenario of imbalanced data.

Learning disentangled representations within the GAN framework was introduced in InfoGAN. The intuition is for generated samples to retain the information about the latent variables, and consequently for the latent variables to gain control over certain aspects of the generated image. In this way, different types of latent variables (discrete categorical vs. continuous) can control discrete (e.g., digit identity) or continuous (e.g., digit rotation) variations in the generated images. Formally, InfoGAN does this by maximizing the mutual information between the latent code c and the generated samples G(z, c), where z ∼ P_noise(z) and G is the generator network. The mutual information I(c; G(z, c)) can then be used as a regularizer in the standard GAN training objective. Computing I(c; G(z, c)), however, requires the posterior P(c|x), which is intractable. The authors circumvent this by using a lower bound of I(c; G(z, c)), which approximates P(c|x) via a neural-network-based auxiliary distribution Q(c|x). The training objective hence becomes

min_{G,Q} max_D V_InfoGAN(D, G, Q) = V(D, G) − λ L_I(G, Q),   with   L_I(G, Q) = E_{c∼P(c), x∼G(z,c)}[log Q(c|x)] + H(c),

where D is the discriminator network, V(D, G) is the standard GAN value function, L_I is the lower bound of the mutual information, and H(c) is the entropy of the latent code distribution. Training with this objective results in latent codes c having control over the different factors of variation in the generated images G(z, c).
To model discrete variations in the data, InfoGAN employs non-differentiable samples from a uniform categorical distribution with fixed class probabilities, i.e., c ∼ Cat(K = k, p = 1/k), where k is the number of discrete categories to be discovered. As shown in Fig. 1, applying InfoGAN to an imbalanced dataset results in suboptimal disentanglement, since the uniform prior assumption does not match the actual ground-truth data distribution of the discrete factor (e.g., digit identity). To address this, we propose two augmentations to InfoGAN. The first is to enable learning of the latent distribution's parameters (class probabilities), which requires gradients to be backpropagated through the latent code samples c; the second is to enforce identity-preserving transformation invariance in the learned latent variables, so that the resulting disentanglement favors groups that coincide with object identities.

Learning the prior distribution. To learn the prior distribution, we replace the fixed categorical distribution in InfoGAN with the Gumbel-Softmax distribution, which enables sampling of differentiable samples. The continuous Gumbel-Softmax distribution can be smoothly annealed into a categorical distribution. Specifically, if p_1, p_2, ..., p_k are the class probabilities, then a k-dimensional vector c can be sampled in a differentiable way as

c_i = exp((log p_i + g_i) / τ) / Σ_{j=1}^{k} exp((log p_j + g_j) / τ),   for i = 1, ..., k.

Here the g_i, g_j are samples drawn from Gumbel(0, 1), and τ (the softmax temperature) controls the degree to which samples from the Gumbel-Softmax resemble the categorical distribution; low values of τ make the samples close to one-hot. In theory, InfoGAN's behavior in the class-balanced setting (Fig. 1, left) could be replicated in the imbalanced case (where grouping becomes incoherent, Fig. 1, center) by simply replacing the fixed uniform categorical distribution with a Gumbel-Softmax with learnable class probabilities p_i; i.e., gradients can flow back to update the class probabilities (which are uniformly initialized) to match the true class imbalance. Once the true imbalance is reflected in the class probabilities, proper categorical disentanglement (Fig. 1, right) becomes feasible. Empirically, however, this ideal behavior is not observed consistently. As shown in Fig. 3 (left), unsupervised grouping can focus on non-categorical attributes such as the rotation of a digit. (Figure 3: Different ways for unsupervised learning based methods to group unlabeled data, based on rotation (left) vs. digit identity (right); two groups are shown for each grouping.) Although this is one valid way to group unlabeled data, our goal in this work is to prefer groupings that correspond to class identity, as in Fig. 3 (right).

Learning object identities. To capture object identity as the factor of variation, we make another modification to InfoGAN. Specifically, to make the model focus on high-level object identity and be invariant to low-level factors like rotation, thickness, and illumination, we explicitly apply identity-preserving transformations to real images, and enforce the latent prediction Q(c|x) to be invariant to these transformations. Note that such transformations (i.e., data augmentations) are standard for learning invariant representations in visual recognition tasks. Formally, for any real image x ∼ P_data(x), we apply a set of transformations δ to obtain a transformed image x' = δ(x). It is important to point out that these transformations are not learned during optimization. Instead, we use fixed, simple transformations which guarantee that the human-defined object identity label of the original image x and the transformed image x' remains the same.
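Such fixed transformations might look as follows in practice (an illustrative torchvision sketch; the specific parameter values and the use of tensor inputs are assumptions, not the authors' exact augmentation pipeline):

```python
import torch
import torchvision.transforms as T

# Illustrative identity-preserving transformations (fixed, not learned):
# small rotations for digits, horizontal flips / slight crops for faces.
digit_transform = T.RandomRotation(degrees=10)
face_transform = T.Compose([
    T.RandomHorizontalFlip(p=0.5),
    T.RandomResizedCrop(64, scale=(0.9, 1.0)),
])

x = torch.rand(1, 1, 28, 28)       # a dummy digit image batch
x_t = digit_transform(x)           # x and x_t keep the same (unknown) identity
```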
For example, the digit identity of a 'one' from MNIST remains the same if a small rotation (±10 degrees) is applied; similarly, a face identity remains the same upon horizontal flipping. We hence formulate our transformation constraint loss as

L_trans = E_{x∼P_data(x)} [ d(Q(c_x|x), Q(c_x'|x')) ],

where d(·) is a distance metric (e.g., cosine distance), and Q(c_x|x) and Q(c_x'|x') are the latent code predictions for the real image x and the transformed image x', respectively. Note that ideally Q(c|x), for either x ∼ P_data(x) or x ∼ P_g(G), should have low entropy (a peaky class distribution) to allow proper inference of the latent object category. The mutual-information objective automatically enforces a peaky class distribution of Q(c|x) for x ∼ P_g(G), because the sampled input latent code c from the Gumbel-Softmax is peaky. For x ∼ P_data(x), however, L_trans alone is not sufficient, as it can be optimized in a suboptimal manner (e.g., if c_x ≈ c_x' but both have high entropy). We hence add an additional entropy loss which forces c_x and c_x' to have low-entropy class distributions:

L_ent = E_{x∼P_data(x)} [ H(c_x) + H(c_x') ].

The losses L_trans and L_ent, together with the Gumbel-Softmax parameterization, constitute our overall training objective:

min_{G,Q,p} max_D  V_InfoGAN(D, G, Q) + λ_2 L_trans + λ_3 L_ent.

V_InfoGAN plays the role of generating realistic images and associating the latent variables with some factor of variation in the data, while the addition of L_trans pushes the discovered factor of variation to be close to object identity. Finally, L_ent's objective is to ensure that Q behaves similarly for the real and fake image distributions. The latent codes sampled from the Gumbel-Softmax, the generated fake images, and the losses operating on fake images are all functions of the class probabilities p_i. Thus, during the minimization phase, the gradients are used to optimize the class probabilities along with G and Q in the backward pass.

In this section, we perform quantitative and qualitative analyses to demonstrate the advantage of Elastic-InfoGAN in discovering categorical disentanglement on imbalanced datasets. We use MNIST and YouTube-Faces. MNIST is by default a balanced dataset with 70k images, with a similar number of training samples for each of its 10 classes; we artificially introduce imbalance over 50 random splits (maximum imbalance ratio 10:1 between the largest and smallest class). YouTube-Faces is a real-world imbalanced video dataset with a varying number of training samples (frames) for the 40 face identity classes used; the smallest/largest class has 53/695 images, for a total of 10,066 tightly-cropped face images. All results are reported as the average over (i) 50 runs (over 50 random imbalances) for MNIST, and (ii) 5 runs over the same imbalanced dataset for YouTube-Faces. We use MNIST to provide a proof of concept for our approach. For example, one of the ways in which different 'ones' in MNIST vary is rotation, which can be used as a factor (as opposed to object identity) to group data in imbalanced cases (recall Fig. 3, left); using rotation as a transformation in L_trans should therefore alleviate this problem. We ultimately care most about YouTube-Faces, since it is more representative of real-world data, both in terms of challenging visual variations (e.g., facial pose, scale, expression, and lighting changes) as well as inherent class imbalance. For this reason, the effect of the augmentations in L_trans will be more reflective of how well our model can work on real-world data.
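Before describing the baselines, here is a minimal sketch of how the two additional losses defined above (L_trans and L_ent) might be computed from Q's predictions; the cosine distance and the λ weights follow the descriptions in the text, while the exact reductions are assumptions:

```python
import torch
import torch.nn.functional as F

def transformation_loss(q_x, q_xt):
    """L_trans: cosine distance between latent-code predictions for x and its transform x'."""
    return (1.0 - F.cosine_similarity(q_x, q_xt, dim=1)).mean()

def entropy_loss(q_x, q_xt):
    """L_ent: encourage low-entropy (peaky) latent-code predictions for real images."""
    def entropy(p, eps=1e-8):
        return -(p * (p + eps).log()).sum(dim=1).mean()
    return entropy(q_x) + entropy(q_xt)

# q_x, q_xt: softmax outputs of Q for a real batch and its transformed copy, shape (B, k)
q_x  = torch.softmax(torch.randn(32, 10), dim=1)
q_xt = torch.softmax(torch.randn(32, 10), dim=1)
loss = 10.0 * transformation_loss(q_x, q_xt) + 1.0 * entropy_loss(q_x, q_xt)  # lambda_2 = 10, lambda_3 = 1
```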
We design different baselines to show the importance of having learnable priors for the latent variables and of applying our transformation constraint.
• Uniform InfoGAN: the original InfoGAN with a fixed, uniform categorical distribution.
• Ground-truth InfoGAN: InfoGAN with a fixed but imbalanced categorical distribution, where the class probabilities reflect the ground-truth class imbalance.
• Ground-truth InfoGAN + Transformation constraint: similar to the previous baseline, but with our data transformation constraint (L_trans).
• Gumbel-Softmax: InfoGAN without a fixed prior for the latent variables; instead, the priors are learned using the Gumbel-Softmax technique.
• Gumbel-Softmax + Transformation constraint: apart from having a learnable prior, we also apply our transformation constraint (L_trans). This is a variant of our final approach without the entropy loss (L_ent).
• Gumbel-Softmax + Transformation constraint + Entropy loss (Elastic-InfoGAN): our final model with all losses, L_trans and L_ent, in addition to V_InfoGAN(D, G, Q).
• JointVAE: a VAE-based baseline which performs joint modeling of disentangled discrete and continuous factors.
Our evaluation should capture how well we learn class-specific disentanglement on the imbalanced dataset, and how well we recover the ground-truth class distribution of the imbalanced dataset. To capture these aspects, we apply three evaluation metrics:
• Average Entropy (ENT): evaluates two properties: (i) whether the images generated for a given categorical code belong to the same ground-truth class, i.e., whether the ground-truth class histogram for images generated for each categorical code has low entropy; and (ii) whether each ground-truth class is associated with a single, unique categorical code. We generate 1000 images for each of the k latent categorical codes and compute class histograms using a pre-trained classifier to obtain a k × k matrix (where rows index latent categories and columns index ground-truth categories). We report the average entropy across the rows (testing (i)) and across the columns (testing (ii)).
• Normalized Mutual Information (NMI): we treat the latent category assignments of the fake images (1000 fake images per categorical code) as one clustering, and the category assignments of the fake images by the pre-trained classifier as another clustering. NMI measures the correlation between the two clusterings; its value varies between 0 and 1, and the higher the NMI, the stronger the correlation.
• Root Mean Square Error (RMSE) between the predicted and actual class distributions: measures the accuracy of approximating the true class distribution of the imbalanced dataset. Since the learned latent distribution may not be aligned with the ground-truth distribution (e.g., the first dimension of the learned distribution might capture 9's in MNIST whereas the first dimension of the ground-truth distribution is for 0's), we need a way to align the two. For this, we use the pre-trained classifier to classify the generated images for each latent variable and assign the variable to the most frequent class. If more than one latent variable is assigned to the same class, their priors are added before computing the distance to the known prior of the ground-truth class.
We first evaluate disentanglement quality as measured by NMI and average entropy (ENT). (Figure 4: representative image generations on a random imbalanced MNIST split; each row corresponds to a learned latent variable. Our approach generates inconsistent images in only row 2, whereas Uniform InfoGAN does so in rows 1, 2, 6, 8 and JointVAE in rows 3, 5, 6, 7, 9, 10.)
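For concreteness, the ENT, NMI, and RMSE metrics above could be computed roughly as follows (an illustrative sketch, assuming the fake images have already been labeled by the pre-trained classifier; not the authors' evaluation code):

```python
import numpy as np
from scipy.stats import entropy
from sklearn.metrics import normalized_mutual_info_score

def ent_nmi(latent_labels, pred_labels, k):
    """latent_labels[i]: latent code used to generate fake image i;
    pred_labels[i]: ground-truth class assigned to it by the pre-trained classifier."""
    hist = np.zeros((k, k))
    for c, y in zip(latent_labels, pred_labels):
        hist[c, y] += 1
    row_ent = np.mean([entropy(row / row.sum()) for row in hist if row.sum() > 0])      # (i) purity of each latent code
    col_ent = np.mean([entropy(col / col.sum()) for col in hist.T if col.sum() > 0])    # (ii) one code per ground-truth class
    nmi = normalized_mutual_info_score(latent_labels, pred_labels)
    return row_ent, col_ent, nmi

def prior_rmse(hist, learned_prior, true_prior):
    """Assign each latent code to its most frequent predicted class; priors of codes mapped
    to the same class are summed, and unassigned classes keep probability 0."""
    aligned = np.zeros_like(true_prior)
    for c in range(hist.shape[0]):
        aligned[int(hist[c].argmax())] += learned_prior[c]
    return float(np.sqrt(np.mean((aligned - true_prior) ** 2)))
```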
In particular, our full model obtains significant boosts of 0.101 and 0.104 in NMI, and reductions of 0.222 and 0.305 in ENT, compared to the Uniform InfoGAN baseline for MNIST and YouTube-Faces, respectively. The boost is even more significant when compared to JointVAE: 0.1977 and 0.3380 in NMI, and reductions of 0.4658 and 0.9963 in ENT, for MNIST and YouTube-Faces, respectively. This again is a result of JointVAE's assumption of a uniform categorical prior, along with its poorer quality generations. We see that our transformation constraint generally improves performance both when the ground-truth prior is known (Ground-truth InfoGAN vs. Ground-truth InfoGAN + Transformation constraint) and when the prior is learned (Gumbel-Softmax vs. Gumbel-Softmax + Transformation constraint). This shows that enforcing the network to learn groupings that are invariant to identity-preserving transformations helps it learn a disentangled representation in which the latent dimensions correspond more closely to identity-based classes. Also, learning the prior using the Gumbel-Softmax leads to better categorical disentanglement than fixed uniform priors, which demonstrates the importance of learning the prior distribution on imbalanced data. Overall, our approach of using the Gumbel-Softmax to learn the latent prior distribution together with our transformation constraint works better than applying either individually, which demonstrates their complementarity. Interestingly, using a fixed ground-truth prior (Ground-truth InfoGAN) does not result in better disentanglement than learning the prior (Gumbel-Softmax). This requires further investigation, but we hypothesize that having a rigid prior makes optimization more difficult compared to allowing the network to converge to a distribution on its own, as there are multiple losses that need to be optimized simultaneously. Finally, in Table 2, we evaluate how well the Gumbel-Softmax can recover the ground-truth prior distribution. For this, we compute the RMSE between the learned and ground-truth prior distributions. Our full model (transformation constraint + entropy loss) produces the best estimate of the true class imbalance for both datasets, as evident from the lowest RMSE. Our improvement over the Gumbel-Softmax baseline indicates the importance of our transformation loss L_trans and entropy loss L_ent in approximating the class imbalance. We next qualitatively evaluate the disentanglement achieved by our approach. Figs. 4, 5, and 7 show results for MNIST and YouTube-Faces. Overall, Elastic-InfoGAN generates more consistent images for each latent code compared to Uniform InfoGAN and JointVAE. For example, in Fig. 4, Elastic-InfoGAN only generates inconsistent images in the second row, whereas the baseline approaches generate inconsistent images in several rows. Similarly, in Fig. 7, Elastic-InfoGAN generates faces of the same person for a given latent variable more consistently than the baselines. Both Uniform InfoGAN and JointVAE, on the other hand, tend to mix up identities within the same categorical code because they incorrectly assume a uniform prior distribution. Finally, we demonstrate that Elastic-InfoGAN does not impede the modeling of continuous factors in the imbalanced setting. Specifically, one can augment the input with continuous latent codes (e.g., r_1, r_2 ∼ Unif(−1, 1)) along with the existing categorical and noise vectors.
In Fig. 6, we show the results of continuous code interpolation; each of the two continuous codes largely captures a particular continuous factor (stroke width on the left, and digit rotation on the right).

In this work, we proposed a new unsupervised generative model that learns categorical disentanglement in imbalanced data. Our model learns the class distribution of the imbalanced data and enforces invariance in the discrete latent variables. Our results demonstrate superior performance over alternative baselines. We hope this work will motivate other researchers to pursue this interesting research direction in generative modeling of imbalanced data.

For MNIST, we operate on the original 28x28 image size, with a 10-dimensional categorical code to represent the 10 digit categories. For YouTube-Faces, we crop the faces using the provided bounding box annotations, resize them to 64x64 resolution, and use a 40-dimensional categorical code to represent 40 face identities (the first 40 categories sorted alphabetically), following prior work. The pre-trained classification architecture used for MNIST evaluation consists of 2 convolutional and 2 fully connected layers, with max pooling and ReLU after every convolutional layer. For YouTube-Faces classification, we fine-tune a ResNet-50 network pretrained on VGGFace2 for face recognition. We set λ_1 = 1 (for the mutual-information term), λ_2 = 10 (for L_trans), and λ_3 = 1 (for L_ent); these hyperparameters were chosen to balance the magnitudes of the different loss terms. Finally, one behavior we observe is that if the random initialization of the class probabilities is too skewed (only a few classes have high probability values), it becomes very difficult for them to be optimized to the ideal state. We hence initialize them with the uniform distribution, which makes training much more stable.

Elastic-InfoGAN architecture for MNIST: we follow the exact architecture described in InfoGAN. The generator network G takes as input a 64-dimensional Gaussian noise vector z and 10-dimensional samples from the Gumbel-Softmax distribution. The discriminator D and the latent code prediction network Q share most of the layers except the final fully connected layers.

Elastic-InfoGAN architecture for YouTube-Faces: we operate on cropped face images resized to 64x64 resolution. Our architecture is based on the one proposed in StackGAN-v2, where we use its 2-stage version for generating 64x64 resolution images. The input is a 100-dimensional Gaussian noise vector z and 40-dimensional samples c from the Gumbel-Softmax distribution. An initial fully connected layer maps the input (the concatenation of z and c) to an intermediate feature representation. A series of upsampling + convolutional blocks (interleaved with batch normalization and gated linear units) increases the spatial resolution of the feature representation, going from 1024 channels (feature size 4 x 4 x 1024) to 64 channels (feature size 64 x 64 x 64). For the first stage, a convolutional network transforms the feature representation into a 3-channel output while maintaining the spatial resolution; this serves as the fake image from the first stage. The second stage takes the 64 x 64 x 64 features, forwards them through a network containing residual blocks and convolutional layers while again maintaining the 64 x 64 spatial resolution, and a convolutional layer maps the resulting feature into a 64 x 64 resolution fake image, which is the one used for evaluation.
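A rough PyTorch sketch of the first-stage generator described above (nearest-neighbour upsampling, the Tanh output, and other layer details are illustrative assumptions, not the exact StackGAN-v2 configuration):

```python
import torch
import torch.nn as nn

class UpBlock(nn.Module):
    """Upsample x2, conv, batch norm, gated linear unit (GLU halves the channel count)."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.block = nn.Sequential(
            nn.Upsample(scale_factor=2, mode='nearest'),
            nn.Conv2d(c_in, 2 * c_out, 3, padding=1, bias=False),
            nn.BatchNorm2d(2 * c_out),
            nn.GLU(dim=1),
        )
    def forward(self, x):
        return self.block(x)

class Stage1Generator(nn.Module):
    """Sketch: 100-d noise + 40-d Gumbel-Softmax code -> 4x4x1024 features -> 64x64x64 -> RGB image."""
    def __init__(self, z_dim=100, c_dim=40):
        super().__init__()
        self.fc = nn.Linear(z_dim + c_dim, 4 * 4 * 1024)
        self.up = nn.Sequential(UpBlock(1024, 512), UpBlock(512, 256),
                                UpBlock(256, 128), UpBlock(128, 64))
        self.to_rgb = nn.Sequential(nn.Conv2d(64, 3, 3, padding=1), nn.Tanh())
    def forward(self, z, c):
        h = self.fc(torch.cat([z, c], dim=1)).view(-1, 1024, 4, 4)
        h = self.up(h)                       # (B, 64, 64, 64) features, reused by stage 2
        return self.to_rgb(h), h

g = Stage1Generator()
img, feat = g(torch.randn(8, 100), torch.rand(8, 40))
```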
The discriminator networks are identical at both stages. Each consists of 4 convolutional layers interleaved with batch normalization and leaky ReLU layers, which serve as the common layers for both the D and Q networks. After that, D has one non-shared convolutional layer which maps the feature representation into a scalar real/fake score. For Q, a pair of non-shared convolutional layers maps the feature representation into a 40-dimensional latent code prediction. We train the generative and discriminative modules in a similar way to prior work: we first update the discriminator based on the real/fake adversarial loss; in the next step, after computing the remaining losses (mutual information + L_trans + L_ent), we update the generator (G), the latent code predictor (Q), and the latent distribution parameters at once. Our optimization alternates between these two phases. For MNIST, we train all baselines for 200 epochs with a batch size of 64. For YouTube-Faces, we train until convergence, as measured by the qualitative realism of the generated images, with a batch size of 50. We use τ = 0.1 when sampling from the Gumbel-Softmax, which results in samples with very low entropy (very close to one-hot vectors from a categorical distribution).

Here we describe the exact class imbalances used in our experiments. For MNIST, we include below the 50 random imbalances created; for YouTube-Faces, we include the true ground-truth class imbalance over the first 40 categories. The imbalances reflect the class frequencies.

A.2.1 MNIST
• 0.147, 0.037, 0.033, 0.143, 0.136, 0.114, 0.057, 0.112, 0.143, 0.078
• 0.061, 0.152, 0.025, 0.19, 0.12, 0.036, 0.092, 0.185, 0.075, 0.064
• 0.173, 0.09, 0.109, 0.145, 0.056, 0.114, 0.075, 0.03, 0.093, 0.116
• 0.079, 0.061, 0.033, 0.139, 0.145, 0.135, 0.057, 0.062, 0.169, 0.121
• 0.053, 0.028, 0.111, 0.142, 0.13, 0.121, 0.107, 0.066, 0.125, 0.118
• 0.072, 0.148, 0.092, 0.081, 0.119, 0.172, 0.05, 0.109, 0.085, 0.073
• 0.084, 0.143, 0.07, 0.082, 0.059, 0.163, 0.156, 0.063, 0.074, 0.105
• 0.062, 0.073, 0.065, 0.183, 0.099, 0.08, 0.05, 0.16, 0.052, 0.177
• 0.139, 0.113, 0.074, 0.06, 0.068, 0.133, 0.142, 0.13, 0.112, 0.03
• 0.046, 0.128, 0.059, 0.112, 0.135, 0.164, 0.142, 0.125, 0.051, 0.037
• 0.107, 0.057, 0.154, 0.122, 0.05, 0.111, 0.032, 0.044, 0.136, 0.187
• 0.129, 0.1, 0.039, 0.112, 0.119, 0.095, 0.047, 0.14, 0.156, 0.064
• 0.146, 0.08, 0.06, 0.072, 0.051, 0.119, 0.176, 0.11, 0.158, 0.028

A.3 DISCUSSION ABOUT EVALUATING PREDICTED CLASS IMBALANCE IN SEC. 4.2
To measure the ability of a generative model to approximate the class imbalance present in the data, we derive a metric in Section 4.2 of the main paper, the results of which are presented in Table 2. Even though we obtain better results as measured by the RMSE between the approximated and the original imbalance distributions, we would like to discuss certain flaws of this metric. In its current form, we compute the class histogram (using the pre-trained classifier, which classifies each fake image into one of the ground-truth categories) for a latent code and associate the latent code with its most frequent class. If multiple latent codes get associated with the same ground-truth class, there will be ground-truth classes whose predicted class probability is zero. This is rarely an issue for MNIST, as it only has 10 ground-truth classes, and thus in most cases both our method and the baselines assign each latent code to a unique ground-truth class.
However, for YouTube-Faces, after associating latent codes with ground-truth categories in this manner, roughly 10-13 ground-truth classes (out of 40) get associated with zero probability for both our approach and the baselines (due to multiple latent codes being associated with the same majority ground-truth class). Our metric may therefore be too strict, especially in difficult settings with many confusable ground-truth categories. The tricky part about evaluating how well the model approximates the class imbalance is that there are two key aspects that need to be measured simultaneously: not only should (i) the raw probability values discovered match the ground-truth class-imbalance distribution, but (ii) the class probabilities approximated by the latent codes must correspond to the correct ground-truth classes. For example, if the original data had 80% samples from class A and 20% from class B, the generative model should not only estimate the imbalance as 80%-20%, but must also associate 80% with class A and 20% with class B (instead of 80% with class B and 20% with class A). Another way to evaluate whether a model captures the ground-truth class imbalance could be the FID score, but it is worth noting that a method can have a good FID score without disentangling the different factors of variation. Given the limitation of our metric on YouTube-Faces, we have also measured the min/max of the predicted prior values. For YouTube-Faces, the min/max of the predicted and ground-truth priors are: Gumbel-Softmax: min 2.76748415e-05, max 0.0819286481; ours without L_ent: min 0.00211485, max 0.06152404; ours complete: min 0.00336615, max 0.06798439; and ground truth: min 0.005265, max 0.069044. Our full method's min/max most closely matches that of the ground truth, and the overall ordering of the methods follows that of Table 2 using our RMSE-based metric. In sum, we have made an effort to evaluate accurate class-imbalance prediction in multiple ways, but this is an area which calls for better metrics to evaluate a model's ability to approximate the class-imbalance distribution.
TL;DR: Elastic-InfoGAN is a modification of InfoGAN that learns, without any supervision, disentangled representations in class-imbalanced data.
Many real applications show a great deal of interest in learning multiple tasks from different data sources/modalities with unbalanced samples and dimensions. Unfortunately, existing cutting-edge deep multi-task learning (MTL) approaches cannot be directly applied to these settings, due to either heterogeneous input dimensions or heterogeneity in the optimal network architectures of different tasks. It is thus necessary to develop a knowledge-sharing mechanism that can handle the intrinsic discrepancies among network architectures across tasks. To this end, we propose a flexible knowledge-sharing framework for jointly learning multiple tasks from distinct data sources/modalities. The proposed framework allows each task to own its task (data)-specific network design via a compact tensor representation, while sharing is achieved through partially shared latent cores. By providing more elaborate sharing control with latent cores, our framework is effective in transferring task-invariant knowledge, while also being efficient in learning task-specific features. Experiments on both single and multiple data source/modality settings display the promising results of the proposed method, which is especially favourable in insufficient-data scenarios.

Multi-task learning (MTL) is an approach for boosting the overall performance of each individual task by learning multiple related tasks simultaneously. In the deep learning setting, jointly fitting sufficiently flexible deep neural networks (DNNs) to data of multiple tasks can be seen as adding an inductive bias to the deep models, which can facilitate the learning of feature representations that are preferable to all tasks. Recently, deep MTL has been successfully explored in a broad range of applications, such as computer vision, natural language processing, and speech recognition. Nevertheless, one key challenge in deep MTL remains largely unaddressed: almost all existing deep MTL approaches restrict themselves to the setting of multi-label learning (or multi-output regression). In other words, different tasks must be fed with input data from the same source (or domain). This requirement, however, seriously limits the applicability of those models to a more realistic scenario of deep MTL, where the tasks involve distinct data sources (domains) with unbalanced sample sizes or dimensions. More specifically, tasks from some domains with abundant samples or small input dimensions are relatively easy to handle, whereas tasks from other domains are quite challenging due to insufficient training data and large dimensionality. For instance, classifying hand-written digits (MNIST) is somewhat similar to the recognition of hand-drawn characters (Omniglot). The Omniglot task is much harder than the MNIST task, as each character in Omniglot has only 20 training samples, while the input dimensionality is about 15 times larger than an MNIST digit. As another example, predicting binary attributes (e.g., 'young', 'bald', 'receding hairline') from human face images (CelebA) ought to be related to age group classification using human photos taken in the wild (Adience). The Adience task turns out to be the more difficult one, since the wild images are not preprocessed and are 7.6 times fewer than the CelebA samples. Hence, it makes good sense to jointly learn these tasks to extract better feature representations, especially for the hard tasks, which could be achieved through transferring domain-specific knowledge from easy tasks.
(Figure 1: Comparison of network architectures of MRN, deep multi-task representation learning (DMTRL) in the CNN setting, and our TRMTL (general setting and CNN setting) w.r.t. two tasks. The shared portion is depicted in yellow. MRN: original weights are totally shared at the lower layers, and the relatedness between tasks at the top layers is modelled by tensor normal priors. DMTRL (TT or Tucker): all layer-wise weights must be of equal shape so that they can be stacked and decomposed into factors; for each task, almost all factors are shared at each layer except the very last 1D vector, and this pattern of sharing is identical at all layers. TRMTL (general): layer-wise weights are separately encoded into TR-format for each task, and a subset of latent cores is selected to be tied across the two tasks; the portions of sharing can differ from layer to layer. TRMTL (CNN): spatial cores (height and width cores) of the tensorized convolutional kernel are shared, while the cores for the input/output channels of the kernel are task-specific.)

Unfortunately, existing cutting-edge deep MTL models are only suited to multi-label learning, where different tasks share the same training inputs (i.e., X_i = X_j for i ≠ j, where X_i denotes the input for task T_i), and thus cannot be directly applied to the above learning scenarios. This is because those models fail to provide knowledge-sharing mechanisms that can cope with the intrinsic discrepancies among network architectures across tasks. Such discrepancies arise either from the heterogeneous dimensions of the input data or from the heterogeneous designs of layer-wise structures. Conventionally, knowledge-sharing mechanisms in deep MTL are based on hard or soft parameter sharing. Hard sharing models share all parameters at the lower layers but share no parameters at the upper layers across tasks. Soft sharing models, on the other hand, learn one DNN per task with its own set of parameters, and the tasks are implicitly connected through regularization terms imposed on the aligned weights. The common issue with both mechanisms is that, for the shared part, the network architectures of all tasks are strictly required to be identical. As a result, some tasks have to compromise on a sub-optimal network architecture, which may lead to deterioration of the overall performance. Ideally, at every potentially shared layer, each task should be capable of encoding both task-specific and task-independent portions of variation. To overcome this limitation, we propose a latent-subspace knowledge-sharing mechanism that allows each task to be associated with its own source (domain) of data. By utilizing a tensor representation, different portions of parameters can be shared via latent cores as common knowledge at distinct layers, so that each task can better convey its private knowledge. In this work, we realize the proposed framework via the tensor ring (TR) format and refer to it as tensor ring multi-task learning (TRMTL), as shown in Figure 1. Our main contributions are twofold: (i) we offer a new distributed knowledge-sharing mechanism that can address the discrepancies of network architectures among tasks; compared to existing deep MTL models that only handle multi-label learning, the joint learning of tasks from multiple datasets (domains) with heterogeneous architectures becomes feasible. (ii) We provide a TR-based implementation of the proposed framework, which further enhances the performance of deep MTL models in terms of both compactness and expressive power.
High-order tensors are multi-way arrays of real numbers. Let W ∈ R^{N_1 × ··· × N_D} be a Dth-order tensor, written in calligraphic letters, where each of its D dimensions is called a mode (or way). Early work successfully applied tensor decompositions to applications such as imaging analysis and computer vision. As a special case of tensor networks, the recent TR decomposition factorizes a tensor W into a sequence of 3rd-order latent cores that are multiplied circularly; an example of the TR-format is illustrated in Figure 2. In the TR-format, any two adjacent latent cores are 'linked' by a common dimension of size R_{k+1}, k ∈ {1, ..., D}, and the last core is connected back to the first core through the border rank condition R_{D+1} = R_1. Compared with the tensor train (TT) format, TR generalizes TT by relaxing the border rank condition. Prior work concludes that TR is more flexible than TT w.r.t. low-rank approximation: in TT, the pattern of the rank distribution over cores tends to be fixed, with the ranks of the middle cores often much larger than those of the side cores, whereas TR-ranks have no such drawback and can be distributed equally over the cores. Under the same approximation accuracy, the overall ranks in TR are usually much smaller than those in TT, which makes TR a more compact model than TT.

Our general framework learns one DNN per task by representing the original weights of each layer with a tensor representation layer, i.e., with a sequence of latent cores. A subset of the cores is then tied across multiple tasks to encode task-independent knowledge, while the remaining cores of each task are treated as private cores for task-specific knowledge. We start this section by describing the tensor representation layer, which lays the groundwork for our deep MTL approach. Our TR-based implementation is called the tensor ring representation layer (TRRL). Following the spirit of the TT-matrix representation, TR is able to represent a large matrix more compactly via the TR-matrix format. Specifically, let W be a matrix of size U × V with U = U_1 ··· U_D and V = V_1 ··· V_D; one can then establish a one-to-one correspondence between a matrix element W(i, j) and a tensor element W((φ_1(i), ψ_1(j)), ..., (φ_D(i), ψ_D(j))) using the compound index (φ_k(·), ψ_k(·)) for mode k ∈ {1, ..., D}. The TR-matrix format is then

W(i, j) = Tr( G^(1)[φ_1(i), ψ_1(j)] G^(2)[φ_2(i), ψ_2(j)] ··· G^(D)[φ_D(i), ψ_D(j)] ),    (1)

where 'Tr' is the trace operation, G^(k) denotes the kth latent core, and G^(k)[φ_k(i), ψ_k(j)] ∈ R^{R_k × R_{k+1}} is the slice of G^(k) indexed by the kth compound index. Notice that equation 1 implies TRRL is more powerful than a TT layer in terms of model expressivity, as TRRL can in fact be written as a sum of R_1 TT layers. In the deep MTL context, the benefits of tensorization in our TRRL are twofold: a sparser, more compact tensor network format for each task, and a potentially finer sharing granularity across tasks.

3.2 THE PROPOSED KNOWLEDGE-SHARING FRAMEWORK
3.2.1 THE GENERAL FORMULATION
Our sharing strategy is to partition each layer's parameters into task-independent TR-cores as well as task-specific TR-cores. More specifically, for some hidden layer of an individual task t ∈ {1, ..., T}, we begin by reformulating the layer's weights W_t ∈ R^{U_t × V_t} in terms of TR-cores by means of TRRL. Next, the layer's input tensor H_t is transformed into the layer's output tensor Y_t by contracting it with this TR-represented weight, where the common TR-cores subset {G_com^(·)} has C elements which can be chosen arbitrarily from the set of all D_t cores, leaving the remaining cores {G_t^(·)} as task-specific TR-cores.
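To make this sharing mechanism concrete, the following NumPy sketch reconstructs layer weights from a ring of 4th-order TR-cores, with the first C cores tied across two tasks and the remaining cores task-specific (the shapes, ranks, and einsum-based contraction are illustrative, not the paper's actual configuration):

```python
import numpy as np

def tr_matrix(cores):
    """Reconstruct a weight matrix from TR-cores; cores[k] has shape (r_k, i_k, o_k, r_{k+1}),
    with the last bond dimension wrapping back to the first (the TR 'ring' condition)."""
    out = cores[0]                                     # (r0, i0, o0, r1)
    for core in cores[1:]:
        # contract the shared bond dimension and merge the input/output modes
        out = np.einsum('aijb,bklc->aikjlc', out, core)
        r0, i1, i2, o1, o2, r1 = out.shape
        out = out.reshape(r0, i1 * i2, o1 * o2, r1)
    return np.trace(out, axis1=0, axis2=3)             # close the ring -> (prod i_k, prod o_k)

# Two tasks share the first C = 2 cores of a layer and keep the last core private.
r = 5
shared = [np.random.randn(r, 4, 4, r) for _ in range(2)]   # task-independent cores
private_a = [np.random.randn(r, 7, 4, r)]                  # task-specific core (task A)
private_b = [np.random.randn(r, 5, 8, r)]                  # task-specific core (task B)
W_a = tr_matrix(shared + private_a)    # shape (4*4*7, 4*4*4) = (112, 64)
W_b = tr_matrix(shared + private_b)    # shape (4*4*5, 4*4*8) = (80, 128)
```

In an actual TRMTL layer the cores would be trainable parameters, and the contraction (or its application directly to the layer input) would be performed inside the forward pass.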
Note that our TRMTL neither restricts which cores to share, nor requires the shared cores to be consecutive. Finally, we reshape the output tensor Y_t back into a vector y_t ∈ R^{V_t}. The portion of sharing, which is mainly measured by C, can be set to different values from layer to layer. Under this formulation, TRMTL represents each element of the weight matrix as a function of a sequential product of slice matrices from the corresponding shared and private cores. Intuitively, this means the value of each weight element is partially determined by some common latent factors and partially affected by some private latent factors; the sharing is thus carried out in a distributed fashion, which is more efficient than conventional sharing strategies in which each weight element is either fully shared or not shared at all. Although we describe our general framework in terms of the TR format, it is straightforward to implement it with other tensor network representations, such as Tucker, TT, projected entangled pair states (PEPS), and the multi-scale entanglement renormalization ansatz (MERA), as long as each layer-wise weight matrix is tensorized and decomposed into a sequence of latent cores. Our model can also be easily extended to a convolutional kernel K ∈ R^{H × W × I × O}, where H × W is the spatial size and I and O are the numbers of input and output channels. Here the TRRL is similar to TR-based weight compression, but we use different 4th-order latent cores in the TR-matrix. As a special case of our general framework (TRMTL-CNN), we share only the spatial cores (height and width cores) of the tensorized kernel (via TRRL), while the cores corresponding to the input/output channels may differ from task to task; C is typically 1 for small spatial dimensions, so there is no need to specify how many and which cores to share for TRMTL-CNN.

4 EXPERIMENTAL RESULTS
We compare our TRMTL with single task learning (STL), MRN, and two variants of DMTRL. We repeat each experiment 5 times and record the average accuracy. Detailed settings and more experimental results are given in the supplementary material. Before sharing, we first tensorize each layer-wise weight into a Dth-order tensor whose D modes have roughly the same dimensionality, such that the cores have approximately equal sizes if we assume the same TR-ranks. In this manner, we can measure the fraction of knowledge sharing by the number of shared cores. D is empirically set to between 4 and 6 in most of our tests. For simplicity, we assume the TR-ranks are identical for all TR-cores across layers for each task, and we choose the TR-ranks by cross validation over the values 5, 10, and 15. Note that there is no tensorization step in DMTRL, which selects TT-ranks via tensor decomposition according to a specified threshold. Our general TRMTL is highly flexible, as we impose no restrictions on which cores are shared and where they are shared across tasks. In practice, we may trade some of this flexibility for ease of sharing-pattern selection by introducing useful prior knowledge about the domain; for instance, many vision tasks tend to share more cores at the lower layers than at the upper layers. There are various strategies for selecting the shared cores with respect to both their location and their number. Prior work has observed that distinct cores control an image at different scales of resolution.
This was demonstrated by decomposing a tensorized 2D image and then adding noise to one specific core at a time: the core in the first location controls small-scale patches, while the core in the last location influences large-scale partitions. Motivated by this, under the general formulation of Section 3.2.1, we preferentially share features from the detailed scale to the coarse scale, i.e., we follow a natural left-to-right order when selecting a different number C of cores at distinct layers. C needs to be tuned via cross validation; in practice, we apply a greedy layer-by-layer search on C to effectively reduce the search space. Another practical option is to prune the search space by following the useful guideline that C tends to decrease as the layer index increases. For certain CNN-based architectures, we adopt the special case TRMTL-CNN. Since the cores produced by the tensorized convolutional kernel have specific roles, we share only the cores associated with the spatial dimensions (height and width cores), leaving the input/output cores task-specific. In our tests, C is just 1 due to the small spatial kernels, which eliminates the need to tune this hyper-parameter.

We begin our tests with data from a single domain to validate the basic properties of our model. Our first validation is conducted on MNIST, where task A is to classify the odd digits and task B is to classify the even ones. To see how sharing styles and the hyper-parameter C affect performance, we examine various patterns from three representative categories, as shown in Figure 3. For instance, patterns in the 'bottom-heavy' category share more parameters at the bottom layers than at the top layers, while 'top-heavy' indicates the opposite style. The validation uses a multi-layer perceptron (MLP) with three tensorized hidden layers, each encoded with 4 TR-cores. The pattern '014', for example, means that C is 0, 1, and 4 from the lower to the higher layers, respectively. We gauge the transferability between tasks with unbalanced training samples by the averaged accuracy on the small-sample tasks. Clearly, the 'bottom-heavy' patterns achieve significantly better results than those from the other two categories. The pattern '420' is reasonable and clearly outperforms the pattern '044' in Figure 3, since '044' shares all weights at the top layers but shares nothing at the bottom layer. Within each category, TRMTL is robust to small perturbations of C in pattern selection: both '410' and '420' obtain similarly good performance. We also examine the complexity of the compared models on MNIST. In Table 1, STL and MRN have 6,060K and 3,096K parameters, respectively, since they share weights in the original space. DMTRL-Tucker and DMTRL-TT, with the pre-training trick, are parameterized by 1,194K and 1,522K parameters. In contrast, TRMTL achieves the best accuracies while the numbers of parameters drop to 16K and 13K. This huge reduction is due to tensorization and the resulting much sparser TRRL with lower overall ranks. Our next validation is carried out on the Omniglot dataset to verify the efficacy of knowledge transfer from data-abundant tasks to data-scarce ones within one data domain. Omniglot consists of 1,623 unique characters from 50 alphabets at a resolution of 105 × 105. We divide the alphabets into 5 tasks (task A to task E), each of which covers the alphabets of 10 languages.
We now test a more challenging case, where only one task (task C) has sufficient samples while the samples of the other 4 tasks are limited. Figure 4 shows the change in accuracy for each task, with and without the aid of the data-rich task. We observe that TRMTL is able to make the most of the useful knowledge from task C and significantly boosts the accuracies of all other tasks. In our last validation, we explore whether the proposed sharing mechanism also works for recurrent neural networks. We test on the UCF11 dataset, which contains 1,651 YouTube video clips of 11 actions, converted to a resolution of 120 × 180 × 3. We assign 5 actions ('basketball', 'biking', 'diving', 'golf swinging', and 'horse back riding') to task A and leave the remaining 6 actions ('soccer juggling', 'swinging', 'tennis swinging', 'trampoline jumping', 'volleyball spiking', and 'walking') as task B. The RNN is implemented as a one-layer long short-term memory (LSTM) network with an input length of 190. The weights corresponding to the input video are tensorized and encoded into 4 TR-cores. Only one layer of cores needs to be shared, and they are shared in a left-to-right order. The recognition precisions w.r.t. the number of shared cores are recorded in Table 2. We find that sharing TR-cores between tasks via TRMTL significantly improves performance compared to the no-sharing case, and sharing all 4 TR-cores achieves the best results in this RNN setting.

In this section, we show the key advantage of our method in handling multiple tasks defined on distinct data domains, where the optimal network architectures of the tasks may differ. We first verify this on the Omniglot and MNIST combination, where task A is to classify hand-drawn characters from the first 10 alphabets, while task B is to recognize the 10 hand-written digits. Task A is much harder than task B, as each character in task A has very few training samples (only 20 per character). Table 3 shows the architecture specification of TRMTL using a 4-layer MLP: task A and task B possess their respective layer-wise network structures, while different portions of cores can be partially shared across layers. In contrast, to apply DMTRL, one has to first convert the heterogeneous inputs into equal-sized features using one hidden layer with totally unshared weights, so that the weights of the following layers, having the same shape, can be stacked up. In Table 4, TRMTL obtains results similar to its competitors on the easier MNIST task, while both TRMTL-200 and TRMTL-211 significantly outperform STL and DMTRL by a large margin on the more difficult Omniglot task. The poor performance of DMTRL is due to its architecture not being able to share any features at the bottom hidden layer while having to share almost all features at the upper layers. We also conduct experiments on the challenging Office-Home dataset to evaluate the effectiveness of TRMTL in handling data from distinct domains. The dataset contains over 10,000 images collected from different domains, namely Art, Clipart, Product, and Real World, which form task A to task D, respectively. Each task is assigned to recognize 65 object categories appearing in offices or homes. The image styles and levels of difficulty vary from task to task; e.g., images from Product (task C) have empty backgrounds while images from Art (task A) have complex backgrounds.
We train three FC layers on features extracted from the images of each task using a pre-trained VGG-16. In Figure 5, our TRMTL variants consistently outperform the other competitors by a large margin, i.e., over 5% in accuracy on the toughest task A when 80% of the samples are available. The noticeable improvements are mainly credited to our sharing mechanism, which effectively shares the common signature of object identity across tasks regardless of their individual image styles. For TRMTL, we observe that TRMTL-HT exceeds TRMTL-HM by at least 2% in averaged accuracy and by 1% on the hardest task A, showing the efficacy of employing non-identical architectures for sharing high-level features. To further illustrate the merit of sharing knowledge with heterogeneous architectures, we next apply TRMTL directly to raw images via CNNs. We test on two large-scale human face datasets: Adience and CelebA. Adience contains 26,580 unfiltered 227 × 227 face photos taken in the wild, with variations in appearance, pose, lighting, etc.; CelebA has a total of 202,599 preprocessed face images at a resolution of 218 × 178. For this test, task A, assigned to the Adience data, is to predict the age group a person belongs to (8 classes), and task B, associated with the CelebA data, is to classify 40 binary facial attributes. Note that task A is much harder than task B, as the number of samples in Adience (with face images in the wild) is about 7.6 times smaller than that of CelebA (with cropped and aligned face images). Since Adience bears similarity to CelebA, we are interested in whether the performance of the tough task (task A) can be enhanced by jointly learning on the two data domains. The heterogeneous architectures of TRMTL-CNN are shown in Table 5, in which we adopt the special case of TRMTL that shares the spatial cores of the convolutional kernels while preserving the differences w.r.t. the input and output channel cores. We focus on comparing this heterogeneous case with the homogeneous case in Table 5, where the shared structures are identical. As expected, in Figure 5, TRMTL significantly outperforms the other methods on the hard task A. At the same time, TRMTL obtains the best averaged accuracies over the two tasks in nearly all cases, indicating that the data-scarce task A has little harmful impact on the data-abundant task B. We also observe that TRMTL-HM yields worse accuracies than TRMTL-HT, which implies that compromising on an identical CNN design for all tasks, such as the input/output channel cores and stride sizes, leads to deteriorated overall performance. This test also shows the effectiveness of TRMTL in sharing low-level features with heterogeneous architectures. Our general TRMTL framework relies on the manual selection of shared cores, i.e., one needs to specify the number of shared cores C at each layer if the cores are shared in a left-to-right order across tasks. Although we can employ efficient heuristics, the search space of this hyper-parameter may grow rapidly as the number of layers increases. Besides greedy search, a more sophisticated option is to automatically select sharable core pairs that have the highest similarity: we may consider two cores a candidate pair if the same perturbation of the two cores induces similar changes in the errors of the respective tasks. In this way, one can adaptively select the most similar cores across tasks according to a certain threshold, leaving the rest as private cores.
We should also point out that the tensorization operation plays a key role in our proposed sharing mechanism. Due to tensorization, the cores can be shared at a much finer granularity via our TRMTL framework. Furthermore, tensorizing the weight matrix into a high-order weight tensor yields a more compact tensor network format (with much lower overall ranks), and thus a higher compression ratio for the parameters. In contrast, DMTRL tends to produce many more parameters without tensorization. In this work, we have extended conventional deep MTL to a broader paradigm where multiple tasks may involve more than one source data domain. To resolve the issues caused by the discrepancies among different tasks' network structures, we have introduced a novel knowledge-sharing framework for deep MTL that partially shares latent cores via the tensor network format. Our method is empirically verified on various learning settings and achieves the state of the art in helping tasks to improve their overall performance. Shallow MTL via matrix factorization requires the weight vectors of T tasks to be equal-sized, so that these weights can be stacked up into one weight matrix W ∈ R^{M×T}. The work assumes W to be low-rank and factorizes it as W = LS. Here, L ∈ R^{M×K} consists of K task-independent latent basis vectors, whereas each column vector of S ∈ R^{K×T} is task-specific and contains the mixing coefficients of these common latent bases. This was extended to its tensorial counterpart, deep multi-task representation learning (DMTRL), by making use of tensor factorization. Likewise, DMTRL starts by putting the equal-shaped weight matrices side by side along the 'task' mode to form a 3rd-order weight tensor W ∈ R^{M×N×T}. In the case of a CNN, this weight tensor corresponds to a 5th-order filter tensor K ∈ R^{H×W×U×V×T}. DMTRL then factorizes W (or K), for instance via the TT-format, into 3 TT-cores (or 5 TT-cores for K). Analogously, the first 2 TT-cores (or the first 4 TT-cores) play exactly the same role as L for the common knowledge; the very last TT-core is in fact a matrix (similar to S), with each column representing the task-specific information. The fundamental difference between our TRMTL and DMTRL is that ours can tailor heterogeneous network structures to various tasks, whereas DMTRL is not flexible enough to deal with such variations across tasks. Specifically, our TRMTL differs widely from DMTRL and generalizes it in a variety of aspects. To reach TRMTL from DMTRL-TT, one needs to take four major types of generalizations (G1-G4), as shown in Figure 6. First (G1), TRMTL tensorizes the weight into a higher-order weight tensor before factorizing it. By doing so, the weight can be embedded into more latent cores than the mere 3 cores (or 5 cores) of DMTRL, which yields a more compact model and makes sharing at a finer granularity feasible. Second (G2), DMTRL stringently requires that the first D−1 cores (D being the weight tensor's order) all be shared at every hidden layer, and only the last vector is kept for private knowledge; by contrast, TRMTL allows any sharing pattern at each layer. Third (G3), in TRMTL there is no need for layer-wise weights to be equal-sized and stacked into one big tensor, so each task may have its individual input domain. Finally (G4), TRMTL further generalizes the TT-format to the TR-format: for each task in DMTRL, the first core must be a matrix and the last core must be a vector (with both border rank and outer mode size being 1), a boundary restriction that the TR-format removes.
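For contrast with the TR sketch given earlier, the snippet below mimics the DMTRL-TT construction just described under illustrative sizes: the equal-shaped task weights are stacked along the 'task' mode, the leading TT-cores (with boundary rank 1) carry the shared latent basis, and the last core holds one task-specific column per task. This is our reading of the description above, not code from either paper.

```python
import numpy as np

def tt_reconstruct(cores):
    """Reconstruct a tensor from TT-cores of shape (r_k, n_k, r_{k+1}) with r_1 = r_{d+1} = 1."""
    full = cores[0]
    for core in cores[1:]:
        full = np.einsum('a...b,bnc->a...nc', full, core)
    return full.squeeze(0).squeeze(-1)   # drop the two boundary rank-1 modes

# DMTRL-TT style sharing (illustrative sizes): task weights W_t of shape (M, N) are stacked
# into a 3rd-order tensor (M, N, T); the first two TT-cores hold the shared latent basis,
# while the last core (a matrix with one column per task) carries the task-specific mixing.
M, N, T, r = 6, 5, 3, 4
rng = np.random.default_rng(1)
shared_cores = [rng.normal(size=(1, M, r)), rng.normal(size=(r, N, r))]   # common knowledge
task_core    =  rng.normal(size=(r, T, 1))                                # task-specific columns
W_stack = tt_reconstruct(shared_cores + [task_core])                       # shape (M, N, T)
W_task0 = W_stack[:, :, 0]                                                 # weight matrix of task 0
print(W_stack.shape, W_task0.shape)
```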
Notice that our TRMTL also conceptually subsumes DMTRL-Tucker in terms of the first three types of generalizations (G1-G3). It is also worth mentioning that prior work applies the TR-format only for weight compression in a single deep net, whereas ours incorporates a more general tensor network framework into the deep MTL context. The multilinear relationship network (MRN) was lately proposed, which incorporates tensor normal priors over the parameter tensors of the task-specific layers. However, like related methods, MRN follows the architecture in which all the lower layers are shared, which is also not tailored for the extended MTL paradigm and may harm transferability if the tasks are not tightly correlated. In addition, the relatedness of tasks is captured by covariance structures over features, classes and tasks. Constantly updating these covariance matrices (via SVD) becomes computationally prohibitive for large-scale networks. Compared to these non-latent-subspace methods, TRMTL is highly compact and needs far fewer parameters, which is clearly advantageous for tasks with small sample sizes. The detailed specification of the network architecture and the factorized TRRL representation for the experiments on the MNIST dataset are recorded in Table 6. In Table 7, our TRMTL achieves the best results and is robust to small perturbations of C for pattern selection, since both the '410' and '420' patterns obtain similarly good performance. For the Omniglot dataset, we adopt a CNN architecture similar to that of the previous experiment, where the last two convolution layers and the first fully connected layer are represented using TRRL with the input/output feature modes of the TR-cores being {2, 2, 2}, {4, 2, 2}, and {2, 2, 2, 2}, {4, 4, 2, 2}, and {18, 12, 12, 9}, {4, 4, 4, 4}. Table 8 displays the details of the network specification. The best sharing pattern of our model is '432'. Figure 7 shows how much the accuracy of each task changes (for the case of 50% training data), both with and without the aid of the data-rich task. Table 9 summarizes the performance of the compared methods when distinct fractions of the data are used for training. Our TRMTL obtains the best overall performance in both data-rich and data-scarce situations. In this section, we also conduct more experiments on the CIFAR-10 dataset. We assign the 10 classes to 3 tasks, in which task A relates to non-animals; task B comprises 4 animal classes, namely 'cat', 'dog', 'deer' and 'horse'; task C contains the remaining 2 classes. We would like to verify the performance of the different models in transferring useful knowledge from the data-abundant tasks to the data-scarce task within one source data domain. To this end, we first test on the CIFAR dataset in settings where each task may have insufficient training samples, e.g., 5%, 10% or 50%. For this test, we adopt the following architecture:, where C3 stands for a 3 × 3 convolutional layer. We employ the general form of TRRL on the last two CNN layers and the first two FC layers, where most of the parameters concentrate, yielding 4 TR-cores per layer. Figure 8 illustrates how the accuracies of one task (two tasks) vary with the sample fraction, given that the remaining two tasks (one task) have access to the full data. We observe that the accuracies of our model exceed those of the other competitors by a relatively large margin (shown in solid lines) in the cases of limited training samples, e.g., 5% or 10%.
In the meantime, the advantage of our TRMTL is still significant in terms of the averaged accuracies of the three tasks (shown in dashed lines), which implies that the data-scarce task has little adverse influence on the data-abundant tasks. Table 10 reports the results of our two best patterns ('4431' and '4421'), as well as the 'bad' pattern '4444'. Clearly, TRMTL ('4431' and '4421') outperforms the other methods in nearly all cases. For task A, for instance, the precision of TRMTL-4431 increases by 1.7% when the data of task C becomes 100%. Moreover, this enhancement grows further to 5.5% when the training samples of both task B and task C are fully available. This is in contrast to MRN, whose precision improvements are merely 0.4% and 3.0% in the corresponding scenarios. Again, it is also interesting to get an idea of what our model has learned via visualization of the high-level features. Figure 9 illustrates the task-specific features of our TRMTL (and DMTRL) using t-SNE for dimensionality reduction. We can see a clear pattern of clustered features produced by our model that are well separated for different classes, which could be more beneficial for the downstream classification tasks. In this section, we show the advantage of our method in handling tasks with heterogeneous inputs within a single source data domain. For this test, the tasks are assigned input images with different spatial sizes or distinct channels (i.e., RGB or grayscale) on the CIFAR-10 dataset. In order to apply DMTRL, one has to first convert the heterogeneous inputs into equal-sized features using one hidden layer with totally unshared weights, so that the weights of the following layers can be stacked up and factorized. To better show the influence of heterogeneous inputs on the competitors, we adopt an MLP with 4 hidden layers. The architectures for the heterogeneous spatial-size case and the distinct-channel case are shown in Tables 11 and 12, respectively; these tables also list the layer-wise input and output modes of the FC2, FC3 and FC4 layers for a good sharing pattern of our TRMTL.
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HygUBaEFDB
a distributed latent-space based knowledge-sharing framework for deep multi-task learning
Board games often rely on visual information such as the location of the game pieces and textual information on cards. Due to this reliance on visual feedback, blind players are at a disadvantage because they cannot read the cards or see the location of the game pieces and may be unable to play a game without sighted help. We present Game Changer, an augmented workspace that provides both audio descriptions and tactile additions to make the state of the board game accessible to blind and visually impaired players. In this paper, we describe the design of Game Changer and present findings from a user study in which 7 blind participants used Game Changer to play against a sighted partner. Most players stated the game was more accessible with the additions from Game Changer and felt that Game Changer could be used to augment other games.
[ 0, 0, 1, 0, 0 ]
5IP2w6397
Game Changer is a system that provides both audio descriptions and tactile additions to make the state of the board game accessible to blind and visually impaired players.
In many applications labeled data is not readily available, and needs to be collected via pain-staking human supervision. We propose a rule-exemplar model for collecting human supervision to combine the scalability of rules with the quality of instance labels. The supervision is coupled such that it is both natural for humans and synergistic for learning. We propose a training algorithm that jointly denoises rules via latent coverage variables, and trains the model through a soft implication loss over the coverage and label variables. Empirical evaluation on five different tasks shows that our algorithm is more accurate than several existing methods of learning from a mix of clean and noisy supervision, and the coupled rule-exemplar supervision is effective in denoising rules. With the ever-increasing reach of machine learning, a common hurdle to new adoptions is the lack of labeled data and the pain-staking process involved in collecting it via human supervision. Over the years, several strategies have evolved for reducing the tedium of collecting human supervision. On the one hand are methods like active learning and crowd-consensus learning that seek to reduce the cost of supervision in the form of per-instance labels. On the other hand is the rich history of rule-based methods where humans code-up their supervision as labeling rules. There is growing interest in learning from such scalable, albiet noisy, supervision (; ; ; ;). However, clean task-specific instance labels continue to be critical for reliable even when fine-tuning models pre-trained on indirect supervision . In this paper we propose a unique blend of cheap coarse-grained supervision in the form of rules and expensive fine-grained supervision in the form of labeled instances. Instead of supervising rules and instance labels independently, we propose that each labeling rule be attached with exemplars of where the rule correctly'fires'. Thus, the rule can be treated as a noisy generalization of those exemplars. Often rules are coded up only after inspecting data. As a human inspects instances, he labels them, and then generalizes them to rules. Thus, humans provide paired supervision of rules and exemplars demonstrating correct deployment of that rule. We explain further with two illustrative applications. Our examples below are from the text domain because rules have been traditionally used in many NLP tasks, but our learning algorithm is agnostic to how rules are expressed. Sentiment Classification Consider an instance I highly recommend this modest priced cellular phone that a human inspects for a sentiment labeling task. After labeling it as positive, he can easily generalize it to a rule Contains'highly recommend' → positive label. This rule generalizes to several more instances, thereby eliminating the need of per-instance labeling on those. However, the label assigned by this rule on unseen instances may not be as reliable as the explicit label on this specific exemplar it generalized. For example, it misfires on I would highly recommend this phone if it weren't for their poor service. Slot-filling Consider a slot-filling task on restaurant reviews over labels like cuisine, location, and time. When an annotator sees an instance like: what chinese restaurants in this city have good reviews?, after labeling token chinese as cuisine, he generalizes it to a rule: (. * ese|. * ian|mexican) restaurants → (cuisine) restaurants. 
This rule matches hundreds of instances in the unlabeled set, but could wrongly label a phrase like these restaurants. We present in Section 3 other applications where such supervision is natural. Our focus in this paper is developing algorithms for training models under such coupled rule-exemplar supervision. Our main challenge is that the labels induced by the rules are more noisy than instance-level supervised labels because humans tend to over generalize as we saw in the illustrations above. Learning with noisy labels with or without additional clean data has been a problem of long-standing interest in ML (; ; b; ;). However, we seek to design algorithms that better capture rule-specific noise with the help of exemplars around which we have supervision that the rule fired correctly. We associate a latent random variable on whether a rule correctly'covers' an instance, and jointly learn the distribution among the label and all cover variables. This way we simultaneously train the classifier with corrected rule-label examples, and restrict over-generalized rules. In summary our contributions in this paper are as follows: Our contributions We propose the paradigm of supervision in the form of rules generalizing labeled exemplars that is natural in several applications. We design a training method that simultaneously denoises over-generalized rules via latent coverage variables, and trains a classification model with a soft implication loss that we introduce. Through experiments on five tasks spanning question classification, spam detection, sequence labeling, and record classification we show that our proposed paradigm of supervision enables an effective synergy between rule-level and instance-level supervision. We compare our algorithm to several recent frameworks for learning with noisy supervision and constraints, and show much better with our method. We first formally describe the problem of learning from rules generalizing examplars on a classification task. Let X denote the space of instances and Y = {1, . . ., K} denote the space of class labels. Let the set of labeled examples be L = {(x 1, 1, e 1),..., (x n, n, e n)} where x i ∈ X is an instance, i ∈ Y is its user-provided label, and e i ∈ {R 1, . . ., R m, ∅} denotes that x i is an exemplar for rule e i. Some labeled instances may not be generalized to rules and for them e i = ∅. Also, a rule can have more than one exemplar associated with it. Each rule R j could be a blackbox function R j: x → {j, ∅} that takes as input an instance x ∈ X and assigns it either label j or no-label. When the ith labeled instance is an exemplar for rule R j (that is, e i = R j), the label of the instance i should be j. Additionally, we have a different set of unlabeled instances U = {x n+1, . . ., x N}. The cover set H j of rule R j is the set of all instances in U ∪ L for which R j assigns a noisy label j. An instance may be covered by more than one rule or no rule at all, and the labels provided by these rules may be conflicting. Our goal is to train a classification model P θ (y|x) using L and U to maximize accuracy on unseen test instances. A baseline solution is to use R j to noisily label the covered U instances using majority or other consensus method of resolving conflicts. We then train P θ (y|x) on the noisy labels using existing algorithms for learning from noisy and clean labels (; b). However, we expect to be able to do better by learning the systematic pattern of noise in rules along with the classifier P θ (y|x). 
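As a concrete (and entirely hypothetical) illustration of the setup above, the snippet below encodes two labeling rules as black-box functions, pairs one of them with its exemplar, and computes the cover and a majority-vote label for an instance; the conflicting firings on the "poor service" example show why rule-induced labels on the cover set are noisy. The rule patterns and helper names are invented for illustration only.

```python
import re
from collections import Counter

# Hypothetical rule-exemplar pairs for the sentiment example: each rule is a
# black-box function mapping an instance to its class label or None (no label).
RULES = {
    "R1": lambda x: "positive" if "highly recommend" in x.lower() else None,
    "R2": lambda x: "negative" if re.search(r"poor (service|quality)", x.lower()) else None,
}
EXEMPLARS = {  # labeled instances from which each rule was generalized
    "R1": ("I highly recommend this modest priced cellular phone", "positive"),
}

def cover_and_majority(x):
    """Return the rules covering x and a majority-vote label (None if uncovered)."""
    fired = {}
    for j, rule in RULES.items():
        label = rule(x)
        if label is not None:
            fired[j] = label                       # x belongs to the cover set H_j
    if not fired:
        return fired, None
    majority, _ = Counter(fired.values()).most_common(1)[0]   # ties broken arbitrarily
    return fired, majority

print(cover_and_majority("I would highly recommend this phone "
                         "if it weren't for their poor service"))
# -> both rules fire with conflicting labels, so the consensus label is unreliable,
#    which is exactly the kind of systematic noise the latent coverage variables address.
```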
Our noise model on R j A basic premise of our learning paradigm is that the noise induced by a rule R j is due to over-generalizing the exemplar(s) seen when creating the rule. And, there exists a smaller neighborhood closer to the exemplar(s) where the noise is zero. We model this phenomenon by associating a latent Bernoulli random variable r ji for each instance x i in the stated cover set H j of each rule R j. When r ji = 1, rule R j has not over-generalized on x i, and there is no noise in the label j that R j assigns to x i. When r ji = 0 we flag an over-generalization, and abstain from labeling x i as j suspecting it to be too noisy. We call r ji s as the latent coverage variables. We propose to learn the distribution of r j using another network with parameters φ that outputs the probability P jφ (r j |x) that r j = 1. We then seek to jointly learn P θ (y|x) and P jφ (r j |x) to model the distribution over the true label y and true coverage r j for each rule j and each x in H j. Thus P jφ plays the role of restricting a rule R j so that r j is not necessarily 1 for all instances in its cover set H j An example We make our discussion concrete with an example. Figure 1 shows a two-dimensional X space with labeled points L denoted as red crosses and blue circles and unlabeled points as dots, and the true labels as color of the region. We show two rule-exemplar pairs: (x 1, y 1 = red, R 1), (x 2, y 2 = blue, R 2). Clearly, both rules R 1, R 2 have over-generalized to the wrong region. If we train a classifier with many examples in H 1 ∪ H 2 wrongly labeled by rules, then even with a noise tolerant loss function like , the classifier P θ (y|x) might be misled. In contrast, what we hope to achieve is to learn the P jφ (r j |x) distribution using the limited labeled data and the overlap among the rules such that Pr(r j |x) predicts a value of 0 for examples wrongly covered. Such examples are then excluded from training P θ. The dashed boundaries indicate the revised boundaries of R j s that we can hope to learn based on consensus on the labeled data and the set of rules. Even after such restriction, R j s are useful for training the classifier because of the unlabeled points inside the dashed regions that get added to the labeled set. 2.1 HOW WE JOINTLY LEARN P θ AND P jφ In general we will be provided with several rules with arbitrary overlap in the set of labeled L and unlabeled examples U that they cover. Intuitively, we want the label distribution P θ (y|x) to correctly restrict the coverage distribution P jφ (r j |x), which in turn can provide clean labels to instances in U that can be used to train P θ (y|x). We have two types of supervision in our setting. First, individually for each of the networks we have ground truth values of y and r j for some instances. For the P θ (y|x) distribution, supervision on y is provided by the human labeled data L, and we use these to define the usual log-likelihood as one term in our training objective: For learning the distribution P jφ (r j |x) over the coverage variables, the only sure-shot labeled data is that r ji = 1 for any x i that is an exemplar of rule R j and r ji = 0 for any x i ∈ H j whose label i is different from j. For other labeled instances x i covered with rules R j with agreeing labels, that is i = j we do not strictly require that r ji = 1. In the example above the corrected dashed red boundary excludes a red labeled point to reduce its noise on other points. 
However, if the number of labeled exemplars is too small, we regularize the networks towards more rule firings by adding a noise-tolerant r_ji = 1 loss on the instances with agreeing labels, for which we use the generalized cross-entropy loss. Note that for the other instances x_i in R_j's cover H_j, the value of r_ji is unknown and latent. The second type of supervision is on the relationship between r_ji and y_i for each x_i ∈ H_j (Figure 2: negative implication loss). A rule R_j imposes a causal constraint that when r_ji = 1, the label y_i has to be j: r_ji = 1 ⟹ y_i = j, ∀ x_i ∈ H_j. We convert this hard constraint into a (log) probability of the constraint being satisfied under the P_θ(y|x) and P_jφ(r_j|x) distributions. Figure 2 shows a surface plot of this log probability as a function of P_θ(j|x) (shown as axis P(y) in the figure) and P_jφ(r_j = 1|x) (shown as axis P(r) in the figure) for a single rule. (Table 1, referenced in the experiments, lists statistics of the datasets and their rules: %Cover is the fraction of instances in U covered by at least one rule; Precision refers to the micro precision of the rules; Conflict denotes the fraction of instances covered by conflicting rules among all covered instances; Avg |H_j| is the average cover size of a rule in U; Rules Per Instance is the average number of rules covering an instance in U.) Observe that the likelihood drops sharply when P(r_j|x) is close to 1 but P(y = j|x) is close to zero. For all other values of these probabilities the log-likelihood is flat and close to zero. Specifically, when P_jφ predicts low values of r_j for an x, the P_θ(y|x) surface is flat, effectively withdrawing the (x, j) supervision from training the classifier P_θ. Thus maximizing this likelihood provides a soft enforcement of the constraint without any other unwanted biases. We call this the negative implication loss (a minimal sketch is given below). We do not need to explicitly model the conflict among rules, i.e., that when an x_i is covered by two rules R_j and R_k with differing labels (j ≠ k), r_ji and r_ki cannot both be 1. This is because the constraint among the pairs (y_i, r_ji) and (y_i, r_ki) as stated in Equation 3 subsumes it. During training we then seek to maximize the log of the above probability along with the normal data-likelihood terms. Putting the terms in Equations 1, 2 and 4 together, our final training objective is: We refer to our training loss as a denoised rule-label implication loss, or ImplyLoss for short. The LL(φ) term seeks to denoise rule coverage, which then influences the y distribution via the implication loss. We explored several other methods of enforcing the constraint between y and r_j in the training of the P_θ and P_jφ networks. Our method ImplyLoss consistently performed the best among the methods we tried, including the recent posterior regularization method of enforcing soft constraints and co-training. Network Architecture: Our network has three modules. (i) A shared embedding layer that provides the feature representation of the input; when labeled data is scarce, this will typically be a pre-trained layer from a related task, and we describe the embedding module for each task individually in the experiments section. (ii) A classification network that models P_θ(y|x) with parameters θ; the embedding of an input x is passed through multiple non-linear layers with ReLU activation and a last linear layer followed by Softmax to output a distribution over the class labels. (iii) A rule network that models P_jφ(r_j = 1|x) whose parameters φ are shared across all rules.
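To tie the pieces together, here is a minimal PyTorch sketch of the classification network, the shared rule network, and the soft implication term as we read the (elided) Equations 3-4: the constraint r_j = 1 ⟹ y = j is violated only when the rule is trusted while the classifier disagrees, giving probability 1 − P_jφ(r_j = 1|x)·(1 − P_θ(j|x)). Layer sizes follow the description for the Question dataset (1024-d ELMo embeddings, two 512-d hidden layers) but are otherwise illustrative; the tolerance constant and the rule-to-class map are implementation assumptions.

```python
import torch
import torch.nn as nn

EMB_DIM, N_CLASSES, N_RULES, HID = 1024, 6, 68, 512        # illustrative sizes

class Classifier(nn.Module):                                 # models P_theta(y | x)
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(EMB_DIM, HID), nn.ReLU(),
                                 nn.Linear(HID, HID), nn.ReLU(),
                                 nn.Linear(HID, N_CLASSES))
    def forward(self, emb):                                  # emb: pre-trained embedding of x
        return torch.softmax(self.net(emb), dim=-1)

class RuleNetwork(nn.Module):                                # models P_phi(r_j = 1 | x), shared over rules
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(EMB_DIM + N_RULES, HID), nn.ReLU(),
                                 nn.Linear(HID, HID), nn.ReLU(),
                                 nn.Linear(HID, 1))
    def forward(self, emb, rule_id):                         # rule id is one-hot encoded and concatenated
        one_hot = nn.functional.one_hot(rule_id, N_RULES).float()
        return torch.sigmoid(self.net(torch.cat([emb, one_hot], dim=-1))).squeeze(-1)

def implication_log_prob(p_y_rule, p_r, eps=1e-8):
    """log P(r_j = 1 => y = j) = log(1 - p_r * (1 - p_y_rule)).
    Sharply negative only when p_r ~ 1 and p_y_rule ~ 0; flat when p_r is small,
    which effectively withdraws the rule's (x, j) supervision from the classifier."""
    return torch.log(1.0 - p_r * (1.0 - p_y_rule) + eps)

rule_class = torch.randint(0, N_CLASSES, (N_RULES,))        # hypothetical rule -> class-label map
emb = torch.randn(4, EMB_DIM)                                # a batch of 4 instance embeddings
rule_id = torch.tensor([0, 3, 3, 7])                         # the rule covering each instance
p_r = RuleNetwork()(emb, rule_id)                            # coverage probabilities
p_y_rule = Classifier()(emb).gather(1, rule_class[rule_id].unsqueeze(1)).squeeze(1)
print(implication_log_prob(p_y_rule, p_r))                   # per-instance implication term
```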
The input to the network is rule-specific and concatenates the embedding of the input instance x, and a a one-hot encoding of the rule id'j'. The inputs are transformed through multiple layers of ReLU before passing through a Sigmoid activation which outputs the probability P jφ (r j = 1|x). We compare our training algorithms against simple baselines, existing error-tolerant learning algorithms, and existing constraint-based learning in deep networks. We evaluate across five datasets spanning three task types: text classification, sequence labeling, and record classification. We augment the datasets with rules, that we obtained manually in three cases, from pre-existing public sources in one case, and automatically in another. Table 1 presents statistics summarizing the datasets and rules. A brief description of each appears below. Question Classification : This is a TREC-6 dataset to classify a question to one of six categories: {Abbreviation, Entity, Description, Human, Location, Numeric-value}. The training set has 5452 instances which are split as 68 for L, 500 for validation, and the remaining as U. Each example in L is generalized as a rule represented by a regular expression. E.g. After labeling How do you throw a housewarming party? as Description we define a rule (how|How|what|What)(does|do|to|can). * → Description. More rules in Table 4 of supplementary. Although, creating such 68 generalised rules required 90 minutes, the generalizations cover 4637 instances in U, almost two orders of magnitude more instances than in L! On an average each of our rule covered 124 instances (|H j | column in Table 1). But the precision of labels assigned by rules was only 63.8%, and 22.5% of covered instances had an inter-rule conflict. This clearly demonstrates the noise in the rule labelings. Accuracy is used as the performance metric. MIT-R 1 : This is a slot-filling task on sentences about restaurant search and the task is to label each token as one of {Location, Hours, Amenity, Price, Cuisine, Dish, Restaurant Name, Rating, Other}. The training data is randomly split into 200 sentences (1690 tokens) as L, 500 sentences (4k tokens) as validation and remaining 6.9k sentences (64.9k tokens) as U. We manually generalize 15 examples in L. E.g. After inspecting the sentence where can i get the highest rated burger within ten miles and labeling highest rated as Rating, we provide the rule:. * (highly|high|good|top|highest)(rate|rating|rated). * → Rating to the matched positions. More examples in Table 7 of supplementary. Although, creating 15 generalizing rules took 45 minutes of annotator effort, the rules covered roughly 9k tokens in U. F1 metric is used for evaluation on the default test set of 14.2k tokens over 1.5k sentences. : This dataset contains 5.5k text messages labeled as spam/not-spam, out of which 500 were held out for validation and 500 for testing. We manually generalized 69 exemplars to rules. Remaining examples go in the U set. The rules here check for presence of keywords or phrases in the SMS. * guaranteed gift. * → spam. A rule covers 31 examples on an average and has a precision of 97.3%. However, in this case only 40% of the unlabeled set is covered by a rule. We report F1 here since class is skewed. More examples in Table 5 of supplementary. Youtube Spam Classification : Here the task is to classify comments on YouTube videos as Spam or Not-Spam. 
We obtain this from Snorkel's Github page 2, which provides 10 labeling functions which we use as rules, an unlabeled train set which we use as U, a labeled dev set to guide the creation of their labeling functions which we use as L, and labeled test and validation sets which we use in the same roles. Their labeling functions have a large coverage (258 on average), and a precision of 78.6%. Census Income : This UCI dataset is extracted from the 1994 U.S. census. It lists a total of 13 features of an individual such as age, education level, marital status, country of origin etc. The primary task on it is binary classification -whether a person earns more than $50K or not. The train data consists of 32563 records. We choose 83 random data points as L, 10k points as U and 5561 points as validation data. For this case we created the rules synthetically as follows: We hold out disjoint 16k random points from the training dataset as a proxy for human knowledge and extract a PART decision list from it as our set of rules. We retain only those rules which fire on L. Network Architecture Since our labeled data is small we depend on pre-trained resources. As the embedding layer we use a pretrained ELMO network where 1024 dimensional contextual token embeddings serve as representations of tokens in the MIT-R sentences, and their average serve as representation for sentences in Question and SMS dataset. Parameters of the embedding network are held fixed during training. For sentences in the YouTube dataset, we use Snorkel's 2 architecture of a simple bag-of-words feature representation marking the frequent uni-grams and bi-grams present in a sentence using a few-hot vector. For the Census dataset categorical features are represented as one hot vectors, while real valued features are simply normalized. For MIT-R, Question and SMS both classification and rule-weight network contain two 512 dimensional hidden layers with ReLU activation. For Census, both the networks contain two 256 dimensional hidden layers with ReLU activation. For YouTube, the classifier network is a simple logistic regression like in Snorkel's code. The rule network has one 32-dimensional hidden layer with ReLU activation. Each reported number is obtained by averaging over five random initializations. Whenever a method involved hyper-parameters to weigh the relative contribution of various terms in the objective, we used a validation dataset to tune the value of the hyper-parameter. Hyperparameters used are provided in Section C of supplementary. In Table 2 we compare our method with the following alternatives on each of the five datasets: Majority: that predicts via majority vote among the rules that cover an instance. This baseline indicates the stand-alone quality of rules, no network is learned here. Ties are broken arbitrarily for class-balanced datasets or by using a default class. Table 2, shows that the accuracy of majority is quite poor indicating either poor precision or poor coverage of the rule sets 3. Only-L: Here we train the classifier P θ (y|x) only on the labeled data L using the standard crossentropy loss (Equation 1). Rule generalisations are not utilized at all in this case. We observe in Table 2 that even with the really small labeled set we used for each dataset, the accuracy of a classifier learned with clean labeled data is much higher than noisy majority labels of rules. We consider this method as our baseline and report the gains on remaining methods. 
L+Umaj: Next we train the classifier on L along with U maj obtained by labeling instances in U with the majority label among the rules applicable to the instance. The row corresponding to L+Umaj in Table 2 provides the gains of this method over Only-L. We observe gains with the noisily labeled U in four out of the five cases. Noise-tolerant: Since labels in U maj are noisy, we next use's noise tolerant generalized cross entropy loss on them with regular cross-entropy loss on the clean L as follows: Parameter q ∈ controls the noise tolerance which we tune as a hyper-parameter. We observe that in all cases the above objective improves beyond Only-L validating that noise-tolerant loss functions are useful for learning from noisy labels on U maj. Learning to Reweight (L2R) (b): is a recent method for training with a mix of clean and noisy labeled data. They train the classifier by meta-learning to re-weight the loss on the noisily labelled instances (U maj) with the help of the clean examples (L). This method shows huge variance in its accuracy gains over Only-L across datasets and is worse in two of the cases. All the above methods employ no extra parameters to denoise or weight individual rules. We next compare with a number of methods that do. L+Usnorkel: This method replaces Majority-based consensus with Snorkel's generative model that assigns weights to rules and labels examples in U. Thereafter we use the same approach as in L+Umaj with just Snorkel's soft-labels instead of Majority on U. The are mixed and we do not get any consistent gains over Only-L and over L+Umaj. We also compare with using noise-tolerant loss on U labeled by Snorkel (Eqn:6) which we call SnorkelNoise-Tolerant. We observe more consistent improvements then, but these are not much better than Noise-Tolerant on U maj. We next compared with a method that simultaneously learns two sets of networks P θ and P jφ like in ours but with different loss function and training schedule. Posterior Regularization (PR): This method proposed in also treats rules as softconstraints and has been used for training neural networks for structured outputs. They use's posterior regularization framework to train the two networks in a teacher-student setup. We adapt the same framework and get a procedure as follows: The student proposes a distribution over y and r j s using current P θ and P jφ, the teacher uses the constraint in Eq 3 to revise the distributions so as to minimize the probability of violations, the student updates parameters θ and φ to minimize KL distance with the revised distribution. The detailed formulation appear in the Section A of supplementary. We find that this method is worse than Only-L in two cases and worse than the noise-tolerant method that does not train extra φ parameters. Overall our approach of training with denoised rule-label implication loss provides much better accuracy than all the above eight methods and we get consistent gains over Only-L on all datasets. On the question dataset we get 11.9 points gains over Only-L whereas the best gains by existing method was 0.5. A useful property of our method compared to the PR method above is that the training process is simple and fits into the batch stochastic gradient training template. In contrast, PR requires special alternating computations. We next perform a number of diagnostics experiments to explain the reasons for the superior performance of our method. 
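For reference, a minimal sketch of the noise-tolerant generalized cross-entropy term used by the Noise-Tolerant baseline above on the noisy U_maj labels (the noisy-label part of Equation 6); the default value of q and the clamping constant are illustrative assumptions, and q is tuned on the validation set as stated.

```python
import torch

def generalized_cross_entropy(probs, targets, q=0.7):
    """Noise-tolerant loss L_q(p) = (1 - p_y^q) / q with q in (0, 1].

    q -> 0 recovers standard cross-entropy, while q = 1 gives an MAE-like loss,
    so q trades off noise robustness against convergence speed.
    """
    p_y = probs.gather(1, targets.unsqueeze(1)).squeeze(1).clamp_min(1e-8)
    return ((1.0 - p_y.pow(q)) / q).mean()

# Usage sketch: clean labels in L keep the ordinary cross-entropy term, while the
# noisy majority labels on U_maj are penalized with the robust term above.
probs = torch.softmax(torch.randn(8, 6), dim=1)
targets = torch.randint(0, 6, (8,))
print(generalized_cross_entropy(probs, targets))
```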
Diagnostics: Effectiveness of learning true coverage via P jφ An important part of our method is the rule-specific denoising learned via the P jφ network. In the chart alongside we plot the original precision of each rule on the test data, and the precision after suppressing those rule labelings where P jφ (r j |x) predicts 0 instead of 1. Observe now that the precision is more than 91% on all datasets. For the Question dataset, the precision jumped from 64% to 98%. The percentage of labelings suppressed (shown by the dashed line) is higher on datasets with noisier rules (e.g. compare Question and SMS). This shows that P jφ is able to denoise rules by capturing the distribution of the latent true coverage variables with the limited LL(φ) loss and indirectly via the implication loss. We next evaluate the importance of the exemplar-rule pairs in learning the P jφ and P θ networks. The exemplars of a rule give an interesting new form of supervision about an instance where a labeling rule must fire. To evaluate the importance of this supervision, we exclude the r j = 1 likelihood on rule-exemplar pairs from LL(φ), that is, the first term in Equation 2 is dropped. In the table below we see that performance of ImplyLoss drops when the exemplar-rule supervision is removed. Interestingly, even after this drop, the performance of ImplyLoss surpasses most of the methods in Table 2 indicating that even without exemplar-rule pairs our training objective is effective at learning from rules and labeled instances. Effect of increasing labeled data L We increase L while keeping the number of rules fixed on the Question dataset. In the attached plot we see the accuracy of our method (ImplyLoss) against Only-L and Posterior Reg. We observe the expected trend that the gap between the method narrows as labeled data increases. Learning from noisily labeled data has been extensively studied in settings like crowdsourcing. One category of these algorithms upper-bound the loss function to make it robust to noise. These include methods like MAE , Generalized Cross Entropy (CE) , and Ramp loss . Most of these assume that noise is independent of the input given the true label. In our model noise is systematic and instance-dependent. A second category assume that a small clean dataset is available along with noisily labeled data. This is also true in our case, and we compared with a state of the art method in that category Ren et al. (2018b) that chooses a descent direction that aligns with a clean validation set using meta-learning. Others in this category include:'s method of iteratively selecting examples with smallest loss, and's method of learning a separate network to transform noisy labels to cleaned ones which are used to impose a cross-entropy loss on P θ (y|x). In contrast, we perform rule-specific cleaning via latent coverage variables and a flexible implication loss which withdraws y supervision when P jφ (r ji |x) assumes low values. Another way of relating clean and noisy labels is via an instance-independent confusion matrix learned jointly with the classifier (; ; b; a). These works assume that the confusion matrix is instance independent, which does not hold for our case. uses confidence from the classifier to eliminate noise but they need to ensure that the network does not memorize noise. Our learning setup also has the advantage of extracting confidence from a different network. 
There is growing interest in integrating logical rules with labeled examples for training networks, specifically for structured outputs (; ; ; ; a).; convert rules on output nodes of network, to (almost differentiable) loss functions during training. The primary difference of these methods from ours is that they assume that rules are correct whereas we assume them to be noisy. Accordingly, we simultaneously correct the rules and use them to improve the classifier, whereas they use the rules as-is to train the network outputs. A well-known framework for working with soft rules is posterior regularization which is used in to train deep structured output networks while harnessing logic rules. works only with noisy rules treating them as black-box labeling functions and assigns a linear weight to each rule based on an agreement objective. Our learning model is more powerful that attempts to learn a non-linear network to restrict rule boundaries rather than just weight their outputs. We presented a comparison with both these approaches in the experimental section, and showed superior performance. To the best of our knowledge, our proposed paradigm of coupled rule-exemplar supervision is novel, and our proposed training algorithm is able to harness them in ways not possible by existing frameworks for learning from rules or noisy supervision. We proposed a new rule-exemplar model for collecting human supervision to combine the scalability of top-level rules with the quality of instance-level labels. We show that such supervision is natural since humans typically inspect examples to code rules. Furthermore, such coupled examples provide supervision on correct firing of rules which help to denoise rules. We propose to train the classifier while jointly denoising rules via latent coverage variables imposing a soft-implication constraint on the true label. Empirically on five datasets we show that our training algorithm that performs rule-specific denoising is better than generic noise-tolerant learning. In future we plan to deploy this framework on other applications where human supervision is a scarce resource. We model a joint distribution Q(y, r 1, . . ., r n |x) to capture the interaction among the label random variable y and coverage random variables r 1,..., r n of any instance x. We use r to compactly represent r 1,..., r n. Strictly speaking, when a rule R j does not cover x, the r j is not a random variable and its value is pinned to 0 but we use this fixed-tuple notation for clarity. The random variables r j and y impose a constraint on the joint distribution Q: for a x ∈ H j when r j = 1, the label y cannot be anything other than j. r j = 1 =⇒ y = j ∀x ∈ H j We can convert this into a soft constraint on the marginals of the distribution Q by stating the probability of y = j Q(y, r j = 1|x) should be small. The singleton marginals of Q along the y and r j variables are tied to the P θ and P jφ (r j |x) we seek to learn. A network with parameters θ models the classifier P θ (y|x), and a separate network with φ variables (shared across all rules) learns the P jφ (r j |x) distribution. The marginals of joint Q should match these trained marginals and we use a KL term for that: We call the combined KL term succinctly as KL(Q, P θ) + KL(Q, P φ). Further the P θ and P jφ distributions should maximize the log-likelihood on their respective labeled data as provided in Equation 1 and Equation 2 respectively. 
Putting all the above objectives together with hyper-parameters α > 0, λ > 0 we get our final objective as: We show in Section A.1 that this gives rise to the solution for Q in terms of P θ, P jφ and alternately for P θ, P jφ in terms of Q as follows. where δ(y = j ∧ r j = 1) is an indicator function that is 1 when the constraint inside holds, else it is 0. Computing marginals of the above using straight-forward message passing techniques we get: Thereafter, we solve for θ and φ in terms of a given Q as Here, γ = 1 α. This gives rise to an alternating optimization algorithm as in the posterior regularization framework of. We initialize θ and φ randomly. Then in a loop, we perform the following two steps alternatively much like the EM algorithm . Here we compute marginals Q(y|x) and Q(r j |x) from current P θ and P jφ using Equations 12 and 13 respectively for each x in a batch. This computation is straight-forward and does not require any neural optimization. We can interpret the Q(y|x) as a small correction of the P θ (y|x) so as to align better with the constraints imposed by the rules in Equation 3. Likewise Q(r j |x) is an improvement of current P jφ s in the constraint preserving direction. For example, the expected r j values might be reduced for an instance if its probability of y being j is small. Parameter update step: We next reoptimize the θ and φ parameters to match the corrected Q distribution as shown in Equation 14. This is solved using standard stochastic gradient techniques. The Q terms can just be viewed as weights at this stage which multiply the loss or label likelihood. A pseudocode of our overall training algorithm is described in Algorithm 1. Input: L, U Initialize parameters θ, φ randomly for a random training batch from U ∪ L do Obtain P θ (y|x) from the classification network. Obtain P jφ (r j |x) j∈[n] from the rule-weight network. Calculate Q(y|x) using Eqn 12 and Q(r j |x) j∈[n] using Eqn 13. Update θ and φ by taking a step in the direction to minimize the loss in Eqn 14. end for Output: θ, φ Treat each Q(y, r) as an optimization variable with the constraint that y,r Q(y, r) = 1. We express this constraint with a Langrangian multiplier η in the objective. Also, define a distribution It is easy to verify that the KL terms in our objective 10 can be collapsed as KL(Q; P θ,φ). The rewritten objective (call it F (Q, θ, φ) ) is now: Next we solve for ∂F ∂Q(y,r) = 0 after expressing the marginals in their expanded forms: e.g. Q(y, r j |x) = r1,...,rj−1,rj+1,...,rn Q(y, r 1, . . ., r n |x). This gives us ∂F ∂Q(y, r) = log Q(y, r) − log P θ,φ (y, r|x) Equating it to zero and substituting for P θ,φ we get the solution for Q(y, r) in Equation 11. The proof for the optimal P θ and P jφ while keeping Q fixed in Equation 15 is easy and we skip here. We provide a list of rules for each task type. Great News! Call FREEFONE 08006344447 to claim your guaranteedå£1000 CASH orå£2000 gift. cuisine1a= ['italian','american', 'japanese','spanish','mexican', 'chinese','vietnamese','vegan'] cuisine1b= ['bistro','delis'] cuisine2= ['barbecue','halal', 'vegetarian','bakery'] can you find me some chinese food For all the experiments we use a learning rate of 1e-4, batch-size of 32 and a dropout of 0.8 (keep probability). All the models were trained for a maximum of 100 epochs. We use early stopping using a validation set. We provide a list of hyperparameters used in our experiments. Table 9: Meta-learning rate of Learning to Reweight method (L2R) for various datasets
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SkeuexBtDr
Coupled rule-exemplar supervision and a implication loss helps to jointly learn to denoise rules and imply labels.
We consider the problem of learning the reward and policy from expert examples under unknown dynamics. Our proposed method builds on the framework of generative adversarial networks and introduces empowerment-regularized maximum-entropy inverse reinforcement learning to learn near-optimal rewards and policies. Empowerment-based regularization prevents the policy from overfitting to expert demonstrations, which advantageously leads to more generalized behaviors that result in learning near-optimal rewards. Our method simultaneously learns empowerment through variational information maximization along with the reward and policy under the adversarial learning formulation. We evaluate our approach on various high-dimensional complex control tasks. We also test our learned rewards in challenging transfer learning problems where the training and testing environments are made to differ from each other in terms of dynamics or structure. The results show that our proposed method not only learns near-optimal rewards and policies that match expert behavior but also performs significantly better than state-of-the-art inverse reinforcement learning algorithms. Reinforcement learning (RL) has emerged as a promising tool for solving complex decision-making and control tasks from predefined high-level reward functions BID23. However, defining an optimizable reward function that inculcates the desired behavior can be challenging for many robotic applications, which include learning social-interaction skills BID17, dexterous manipulation BID5, and autonomous driving BID10. Inverse reinforcement learning (IRL) BID14 addresses the problem of learning reward functions from expert demonstrations, and it is often considered a branch of imitation learning BID2. Prior work in IRL includes maximum-margin BID0 BID18 and maximum-entropy BID24 formulations. Currently, maximum-entropy (MaxEnt) IRL is a widely used approach to IRL, and it has been extended to use non-linear function approximators such as neural networks in scenarios with unknown dynamics by leveraging sampling-based techniques BID3 BID5 BID9. However, designing an IRL algorithm is usually complicated as it requires, to some extent, hand engineering, such as choosing domain-specific regularizers BID5. Rather than learning reward functions and solving the IRL problem, imitation learning (IL) learns a policy directly from expert demonstrations. Prior work addressed the IL problem through behavior cloning (BC), which learns a policy from expert trajectories using supervised learning BID15. Although BC methods are simple solutions to IL, they require a large amount of data because of compounding errors induced by covariate shift BID19. To overcome BC's limitations, the generative adversarial imitation learning (GAIL) algorithm BID8 was proposed. GAIL uses the formulation of Generative Adversarial Networks (GANs) BID7, i.e., a generator-discriminator framework, where a generator is trained to generate expert-like trajectories while a discriminator is trained to distinguish between generated and expert trajectories. Although GAIL is a highly effective and efficient framework, it does not recover transferable/portable reward functions along with the policies, thus narrowing its use cases to similar problem instances in similar environments.
Reward function learning is ultimately preferable, if possible, over direct imitation learning, as rewards are portable functions that represent the most basic and complete representation of agent intention and can be re-optimized in new environments and for new agents. Reward learning is challenging as there can be many optimal policies explaining a set of demonstrations and many reward functions inducing an optimal policy BID14 BID24. Recently, an adversarial inverse reinforcement learning (AIRL) framework BID6, an extension of GAIL, was proposed that offers a solution to the former issue by exploiting the maximum-entropy IRL method BID24, whereas the latter issue is addressed by learning disentangled reward functions, modeling the reward as a function of state only instead of both state and action. However, AIRL fails to recover the ground-truth reward when the ground-truth reward is a function of both state and action. For example, the reward function in any locomotion or ambulation task contains a penalty term that discourages actions with large magnitudes. This need for action regularization is well known in the optimal control literature and limits the use cases of a state-only reward function in most practical real-life applications. A more generalizable and useful approach would be to formulate the reward as a function of both states and actions, which induces action-driven reward shaping that has been shown to play a vital role in quickly recovering optimal policies BID13. In this paper, we propose the empowerment-regularized adversarial inverse reinforcement learning (EAIRL) algorithm. Empowerment BID20 is a mutual-information-based theoretic measure, like state- or action-value functions, that assigns a value to a given state to quantify the extent to which an agent can influence its environment. Our method uses variational information maximization BID12 to learn empowerment in parallel with learning the reward and policy from expert data. Empowerment acts as a regularizer on policy updates to prevent overfitting to the expert demonstrations, which in practice leads to learning robust rewards. Our experimentation shows that the proposed method not only recovers near-optimal policies but also recovers robust, transferable, disentangled, state-action-based reward functions that are near-optimal. The results on reward learning also show that EAIRL outperforms several state-of-the-art IRL methods by recovering reward functions that lead to optimal, expert-matching behaviors. On policy learning, the results demonstrate that policies learned through EAIRL perform comparably to GAIL and AIRL with a non-disentangled (state-action) reward function but significantly outperform policies learned through AIRL with a disentangled (state-only) reward and the GAN interpretation of Guided Cost Learning (GAN-GCL) BID4. We consider a Markov decision process (MDP) represented as a tuple (S, A, P, R, ρ0, γ), where S denotes the state space, A denotes the action space, P represents the transition probability distribution, i.e., P: S × A × S → [0, 1], R(s, a) corresponds to the reward function, ρ0 is the initial state distribution ρ0: S → R, and γ ∈ [0, 1] is the discount factor. Let q(a|s, s′) be an inverse model that maps the current state s ∈ S and next state s′ ∈ S to a distribution over actions A, i.e., q: S × S × A → [0, 1]. Let π be a stochastic policy that takes a state and outputs a distribution over actions such that π: S × A → [0, 1].
Let τ and τ E denote a set of trajectories, a sequence of state-action pairs (s 0, a 0, · · · s T, a T), generated by a policy π and an expert policy π E, respectively, where T denotes the terminal time. Finally, let Φ(s) be a potential function that quantifies a utility of a given state s ∈ S, i.e., Φ: S → R. In our proposed work, we use an empowerment-based potential function Φ(·) to regularize policy update under MaxEnt-IRL framework. Therefore, the following sections provide a brief on MaxEnt-IRL, adversarial reward and policy learning, and variational information-maximization approach to learn the empowerment. MaxEnt-IRL BID24 ) models expert demonstrations as Boltzmann distribution using parametrized reward r ξ (τ) as an energy function, i.e., DISPLAYFORM0 where r ξ (τ) = T t=0 r ξ (s t, a t) is a commutative reward over given trajectory τ, parameterized by ξ, and Z is the partition function. In this framework, the demonstration trajectories are assumed to be sampled from an optimal policy π *, therefore, they get the highest likelihood whereas the suboptimal trajectories are less rewarding and hence, are generated with exponentially decaying probability. The main computational challenge in MaxEnt-IRL is to determine Z. The initial work in MaxEnt-IRL computed Z using dynamic programming BID24 whereas modern approaches BID5 a; BID6 present importance sampling technique to approximate Z under unknown dynamics. This section briefly describes Adversarial Inverse Reinforcement Learning (AIRL) BID6 algorithm which forms a baseline of our proposed method. AIRL is the current state-of-the-art IRL method that builds on GAIL BID8, maximum entropy IRL framework BID24 and GAN-GCL, a GAN interpretation of Guided Cost Learning BID5 a). GAIL is a model-free adversarial learning framework, inspired from GANs BID7, where the policy π learns to imitate the expert policy behavior π E by minimizing the JensenShannon divergence between the state-action distributions generated by π and the expert state-action distribution by π E through following objective DISPLAYFORM0 where D is the discriminator that performs the binary classification to distinguish between samples generated by π and π E, λ is a hyper-parameter, and H(π) is an entropy regularization term E π [log π]. Note that GAIL does not recover reward; however, BID4 shows that the discriminator can be modeled as a reward function. Thus AIRL BID6 ) presents a formal implementation of BID4 and extends GAIL to recover reward along with the policy by imposing a following structure on the discriminator: DISPLAYFORM1 where f ξ,ϕ (s, a, s) = r ξ (s) + γh ϕ (s) − h ϕ (s) comprises a disentangled reward term r ξ (s) with training parameters ξ, and a shaping term F = γh ϕ (s) − h ϕ (s) with training parameters ϕ. The entire D ξ,ϕ (s, a, s) is trained as a binary classifier to distinguish between expert demonstrations τ E and policy generated demonstrations τ. The policy is trained to maximize the discriminative reward DISPLAYFORM2 consists of free-parameters as no structure is imposed on h ϕ (·), and as mentioned in BID6, the reward function r ξ (·) and function F are tied upto a constant (γ − 1)c, where c ∈ R; thus the impact of F, the shaping term, on the recovered reward r is quite limited and therefore, the benefits of reward shaping are not fully realized. Mutual information (MI), an information-theoretic measure, quantifies the dependency between two random variables. 
In intrinsically-motivated reinforcement learning, a maximal of mutual information between a sequence of K actions a and the final state s reached after the execution of a, conditioned on current state s is often used as a measure of internal reward BID12, known as Empowerment Φ(s), i.e., DISPLAYFORM0 where p(s |a, s) is a K-step transition probability, w(a|s) is a distribution over a, and p(a, s |s) is a joint-distribution of K actions a and final state s 2. Intuitively, the empowerment Φ(s) of a state s quantifies an extent to which an agent can influence its future. Thus, maximizing empowerment induces an intrinsic motivation in the agent that enforces it to seek the states that have the highest number of future reachable states. Empowerment, like value functions, is a potential function that has been previously used in reinforcement learning but its applications were limited to small-scale cases due to computational intractability of MI maximization in higher-dimensional problems. Recently, however, a scalable method BID12 was proposed that learns the empowerment through the moreefficient maximization of variational lower bound, which has been shown to be equivalent to maximizing MI BID1. The lower bound was derived (for complete derivation see Appendix A.1) by representing MI in term of the difference in conditional entropies H(·) and utilizing the non-negativity property of KL-divergence, i.e., DISPLAYFORM1 where DISPLAYFORM2 is a variational distribution with parameters φ and w θ (·) is a distribution over actions with parameters θ. Finally, the lower bound in Eqn. 5 is maximized under the constraint H(a|s) < η (prevents divergence, see BID12) to compute empowerment as follow: DISPLAYFORM3 where β is η dependent temperature term. BID12 also applied the principles of Expectation-Maximization (EM) BID1 to learn empowerment, i.e., alternatively maximizing Eqn. 6 with respect to w θ (a|s) and q φ (a|s, s). Given a set of training trajectories τ, the maximization of Eqn. 6 w.r.t q φ (·) is shown to be a supervised maximum log-likelihood problem whereas the maximization w.r.t w θ (·) is determined through the functional derivative ∂I/∂w = 0 under the constraint a w(a|s) = 1. The optimal w * that maximizes Eqn. 6 turns out to be 1 DISPLAYFORM4, where Z(s) is a normalization term. Substituting w * in Eqn. 6 showed that the empowerment Φ(s) = 1 β log Z(s) (for full derivation, see Appendix A.2).Note that w * (a|s) is implicitly unnormalized as there is no direct mechanism for sampling actions or computing Z(s). BID12 introduced an approximation w * (a|s) ≈ log π(a|s) + Φ(s) where π(a|s) is a normalized distribution which leaves the scalar function Φ(s) to account for the normalization term log Z(s). Finally, the parameters of policy π and scalar function Φ are optimized by minimizing the discrepancy, l I (s, a, s), between the two approximations (log π(a|s) + Φ(s)) and β log q φ (a|s, s)) through either absolute (p = 1) or squared error (p = 2), i.e., DISPLAYFORM5 3 EMPOWERED ADVERSARIAL INVERSE REINFORCEMENT LEARNINGWe present an inverse reinforcement learning algorithm that learns a robust, transferable reward function and policy from expert demonstrations. 
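To make the empowerment machinery above concrete before turning to the method's components, here is a minimal sketch of the discrepancy loss l I (s, a, s′) of Eqn. 7, assuming PyTorch tensors of per-sample log-densities; the function and argument names are ours and not from the paper.

import torch

def empowerment_discrepancy(log_q, log_pi, phi_s, beta=1.0, p=2):
    # l_I(s, a, s') = | beta * log q_phi(a|s, s') - (log pi_theta(a|s) + Phi(s)) |^p
    # log_q, log_pi, phi_s: tensors of shape [batch]; p=1 (absolute) or p=2 (squared) error.
    diff = beta * log_q - (log_pi + phi_s)
    return diff.abs().pow(p).mean()

Minimizing this quantity ties the scalar potential Φ to the variational approximation of empowerment; the same term reappears below as a regularizer on the policy update.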
Our proposed method comprises (i) an inverse model q φ (a|s, s) that takes the current state s and the next state s to output a distribution over actions A that ed in s to s transition, (ii) a reward r ξ (s, a), with parameters ξ, that is a function of both state and action, (iii) an empowerment-based potential function Φ ϕ (·) with parameters ϕ that determines the reward-shaping function F = γΦ ϕ (s) − Φ ϕ (s) and also regularizes the policy update, and (iv) a policy model π θ (a|s) that outputs a distribution over actions given the current state s. All these models are trained simultaneously based on the objective functions described in the following sections to recover optimal policies and generalizable reward functions concurrently.3.1 INVERSE MODEL q φ (a|s, s) OPTIMIZATION As mentioned in Section 2.3, learning the inverse model q φ (a|s, s) is a maximum log-likelihood supervised learning problem. Therefore, given a set of trajectories τ ∼ π, where a single trajectory is a sequence states and actions, i.e., τ i = {s 0, a 0, · · ·, s T, a T} i, the inverse model q φ (a|s, s) is trained to minimize the mean-square error between its predicted action q(a|s, s) and the action a taken according to the generated trajectory τ, i.e., DISPLAYFORM6 Empowerment will be expressed in terms of normalization function Z(s) of optimal w * (a|s), i.e., DISPLAYFORM7. Therefore, the estimation of empowerment Φ ϕ (s) is approximated by minimizing the loss function l I (s, a, s), presented in Eqn. 7, w.r.t parameters ϕ, and the inputs (s, a, s) are sampled from the policy-generated trajectories τ. To train the reward function, we first compute the discriminator as follow: DISPLAYFORM0 where r ξ (s, a) is the reward function to be learned with parameters ξ. We also maintain the target ϕ and learning ϕ parameters of the empowerment-based potential function. The target parameters ϕ are a replica of ϕ except that the target parameters ϕ are updated to learning parameters ϕ after every n training epochs. Note that keeping a stationary target Φ ϕ stabilizes the learning as also mentioned in BID11. Finally, the discriminator/reward function parameters ξ are trained via binary logistic regression to discriminate between expert τ E and generated τ trajectories, i.e., DISPLAYFORM1 3.4 POLICY OPTIMIZATION POLICY π θ (a|s)We train our policy π θ (a|s) to maximize the discriminative rewardr(s, a, s) = log(D(s, a, s) − log(1 − D(s, a, s))) and to minimize the loss function l I (s, a, s) = β log q φ (a|s, s) − (log π θ (a|s) + Φ ϕ (s)) p which accounts for empowerment regularization. Hence, the overall policy training objective is: DISPLAYFORM2 where policy parameters θ are updated using any policy optimization method such as TRPO BID21 or an approximated step such as PPO BID22.Algorithm 1 outlines the overall training procedure to train all function approximators simultaneously. Note that the expert samples τ E are seen by the discriminator only, whereas all other models are trained using the policy generated samples τ. Furthermore, the discriminating rewardr(s, a, s) boils down to the following expression (Appendix B.1): DISPLAYFORM3 where f (s, a, s) = r ξ (s, a) + γΦ ϕ (s) − Φ ϕ (s). 
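As an illustration of the structured discriminator and its training loss described above, the following sketch computes the discriminator logit and the discriminative reward; it assumes the reward r ξ (s, a), the potentials Φ ϕ (s) and Φ ϕ' (s′) (target copy), and log π θ (a|s) have already been evaluated as batch tensors, and the function names are ours.

import torch

def discriminative_reward(r_sa, phi_s, phi_next, log_pi, gamma=0.99):
    # f(s, a, s') = r_xi(s, a) + gamma * Phi(s') - Phi(s)
    # D = exp(f) / (exp(f) + pi(a|s)) = sigmoid(f - log pi(a|s)), hence
    # log D - log(1 - D) = f - log pi(a|s), the reward the policy maximizes.
    f = r_sa + gamma * phi_next - phi_s
    return f - log_pi

def discriminator_loss(logits_expert, logits_policy):
    # Binary logistic regression: expert transitions labelled 1,
    # policy-generated transitions labelled 0.
    bce = torch.nn.functional.binary_cross_entropy_with_logits
    return bce(logits_expert, torch.ones_like(logits_expert)) + \
           bce(logits_policy, torch.zeros_like(logits_policy))

The same logit is used in two places: as the reward signal for the policy (together with the -λ I l I regularization term) and as the input to the cross-entropy loss that trains r ξ.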
Thus, an alternative way to express our policy training objective is E τ [log π θ (a|s)r π (s, a, s)], where r π (s, a, s) =r(s, a, s) − λ I l I (s, a, s), DISPLAYFORM4 Update θ i to θ i+1 using natural gradient update rule (i.e., TRPO/PPO) with the gradient: DISPLAYFORM5 After every n epochs sync ϕ with ϕ Fig (b) represents a problem where environment structure is modified during testing, i.e., a reward learned on a maze with left-passage is transferred to a maze with right-passage to the goal (green).which would undoubtedly yield the same as Eqn. 11, i.e., maximize the discriminative reward and minimize the loss l I. The analysis of this alternative expression is given in Appendix B to highlight that our policy update rule is equivalent to MaxEnt-IRL policy objective BID4 except that it also maximizes the empowerment, i.e., DISPLAYFORM6 where, λ and γ are hyperparameters, andĤ(·) is the entropy-regularization term depending on π(·) and q(·). Hence, our policy is regularized by the empowerment which induces generalized behavior rather than locally overfitting to the limited expert demonstrations. Our proposed method, EAIRL, learns both reward and policy from expert demonstrations. Thus, for comparison, we evaluate our method against both state-of-the-art policy and reward learning techniques on several control tasks in OpenAI Gym. In case of policy learning, we compare our method against GAIL, GAN-GCL, AIRL with state-only reward, denoted as AIRL(s), and an augmented version of AIRL we implemented for the purposes of comparison that has state-action reward, denoted as AIRL(s, a). In reward learning, we only compare our method against AIRL(s) and AIRL(s, a) as GAIL does not recover rewards, and GAN-GCL is shown to exhibit inferior performance than AIRL BID6. Furthermore, in the comparisons, we also include the expert The performance of policies obtained from maximizing the learned rewards in the transfer learning problems. It can be seen that our method performs significantly better than AIRL BID6 and exhibits expert-like performance in all five randomly-seeded trials which imply that our method learns near-optimal, transferable reward functions.performances which represents a policy learned by optimizing a ground-truth reward using TRPO BID21. The performance of different methods are evaluated in term of mean and standard deviation of total rewards accumulated (denoted as score) by an agent during the trial, and for each experiment, we run five randomly-seeded trials. To evaluate the learned rewards, we consider a transfer learning problem in which the testing environments are made to be different from the training environments. More precisely, the rewards learned via IRL in the training environments are used to re-optimize a new policy in the testing environment using standard RL. We consider two test cases shown in the FIG0.In the first test case, as shown in FIG0, we modify the agent itself during testing. We trained a reward function to make a standard quadruped ant to run forward. During testing, we disabled the front two legs (indicated in red) of the ant (crippled-ant), and the learned reward is used to reoptimize the policy to make a crippled-ant move forward. Note that the crippled-ant cannot move sideways (Appendix C.1). Therefore, the agent has to change the gait to run forward. In the second test case, shown in FIG0, we change the environment structure. The agent learns to navigate a 2D point-mass to the goal region in a simple maze. 
We re-position the maze central-wall during testing so that the agent has to take a different path, compared to the training environment, to reach the target (Appendix C.2). TAB0 summarizes the means and standard deviations of the scores over five trials. It can be seen that our method recovers near-optimal reward functions as the policy scores almost reach the expert scores in all five trials even after transfering to unseen testing environments. Furthermore, our method performs significantly better than both Next, we considered the performance of the learned policy specifically for an imitation learning problem in various control tasks. The tasks, shown in FIG3, include (i) making a 2D halfcheetah robot to run forward, (ii) making a 3D quadruped robot (ant) to move forward, (iii) making a 2D swimmer to swim, and (iv) keeping a friction less pendulum to stand vertically up. For each algorithm, we provided 20 expert demonstrations generated by a policy trained on a ground-truth reward using TRPO BID21. TAB2 presents the means and standard deviations of policy learning performance scores, over the five different trials. It can be seen that EAIRL, AIRL(s, a) and GAIL demonstrate similar performance and successfully learn to imitate the expert policy, whereas AIRL(s) and GAN-GCL fails to recover a policy. This section highlights the importance of empowerment-regularized MaxEnt-IRL and modeling rewards as a function of both state and action rather than restricting to state-only formulation on learning rewards and policies from expert demonstrations. In the scalable MaxEnt-IRL framework BID4 BID6, the normalization term is approximated by importance sampling where the importance-sampler/policy is trained to minimize the KL-divergence from the distribution over expert trajectories. However, merely minimizing the divergence between expert demonstrations and policy-generated samples leads to localized policy behavior which hinders learning generalized reward functions. In our proposed work, we regularize the policy update with empowerment i.e., we update our policy to reduce the divergence from expert data distribution as well as to maximize the empowerment (Eqn.12). The proposed regularization prevents premature convergence to local behavior which leads to robust state-action based rewards learning. Furthermore, empowerment quantifies the extent to which an agent can control/influence its environment in the given state. Thus the agent takes an action a on observing a state s such that it has maximum control/influence over the environment upon ending up in the future state s.Our experimentation also shows the importance of modeling discriminator/reward functions as a function of both state and action in reward and policy learning under GANs framework. The re-ward learning show that state-only rewards (AIRL(s)) does not recover the action dependent terms of the ground-truth reward function that penalizes high torques. Therefore, the agent shows aggressive behavior and sometimes flips over after few steps (see the accompanying video), which is also the reason that crippled-ant trained with AIRL's disentangled reward function reaches only the half-way to expert scores as shown in TAB0. Therefore, the reward formulation as a function of both states and actions is crucial to learning action-dependent terms required in most real-world applications, including any autonomous driving, robot locomotion or manipulation task where large torque magnitudes are discouraged or are dangerous. 
The policy learning further validate the importance of the state-action reward formulation. TAB2 shows that methods with state-action reward/discriminator formulation can successfully recover expert-like policies. Hence, our empirical show that it is crucial to model reward/discriminator as a function of state-action as otherwise, adversarial imitation learning fails to learn ground-truth rewards and expert-like policies from expert data. We present an approach to adversarial reward and policy learning from expert demonstrations by regularizing the maximum-entropy inverse reinforcement learning through empowerment. Our method learns the empowerment through variational information maximization in parallel to learning the reward and policy. We show that our policy is trained to imitate the expert behavior as well to maximize the empowerment of the agent over the environment. The proposed regularization prevents premature convergence to local behavior and leads to a generalized policy that in turn guides the reward-learning process to recover near-optimal reward. We show that our method successfully learns near-optimal rewards, policies, and performs significantly better than state-of-the-art IRL methods in both imitation learning and challenging transfer learning problems. The learned rewards are shown to be transferable to environments that are dynamically or structurally different from training environments. In our future work, we plan to extend our method to learn rewards and policies from diverse human/expert demonstrations as the proposed method assumes that a single expert generates the training data. Another exciting direction would be to build an algorithm that learns from sub-optimal demonstrations that contains both optimal and non-optimal behaviors. For completeness, we present a derivation of presenting mutual information (MI) as variational lower bound and maximization of lower bound to learn empowerment. As mentioned in section 2.3, the variational lower bound representation of MI is computed by defining MI as a difference in conditional entropies, and the derivation is formalized as follow. ≥ −E w(a|s) log w(a|s) + E p(s |a,s)w(a|s) [log q(a|s, s)] DISPLAYFORM0 The empowerment is a maximal of MI and it can be formalized as follow by exploiting the variational lower bound formulation (for details see BID12). w,q E p(s |a,s)w(a|s) [− 1 β log w(a|s) + log q(a|s, s)]As mentioned in section 2.3, given a training trajectories, the maximization of Eqn. 13 w.r.t inverse model q(a|s, s) is a supervised maximum log-likelihood problem. The maximization of Eqn. 13 w.r.t w(a|s) is derived through a functional derivative ∂I w,q /∂w = 0 under the constraint a w(a|s) = 1. For simplicity, we consider discrete state and action spaces, and the derivation is as follow: By using the constraint a w(a|s) = 1, it can be shown that the optimal solution w * (a|s) = 1 Z(s) exp(u(s, a)), where u(s, a) = βE p(s |a,s) [log q(a|s, s)] and Z(s) = a u(s, a). This solution maximizes the lower bound since ∂ 2 I w (s)/∂w 2 = − a 1 w(a|s) < 0. DISPLAYFORM0 In this section we derive the Empowerment-regularized formulation of maximum entropy IRL. Let τ be a trajectory sampled from expert demonstrations D and p ξ (τ) ∝ p(s 0)Π T −1 t=0 p(s t+1 |s t, a t) exp r ξ (st,at) be a distribution over τ. 
As mentioned in Section 2, the IRL objective is to maximize the likelihood: DISPLAYFORM0 Furthermore, as derived in BID6, the gradient of above equation w.r.t ξ can be written as: DISPLAYFORM1 where r ξ (·) is a parametrized reward to be learned, and p ξ,t = s t =t,a t =t p ξ (τ) denotes marginalization of state-action at time t. Since, it is unfeasible to draw samples from p ξ, BID4 proposed to train an importance sampling distribution µ(τ) whose varience is reduced by defining µ(τ) as a mixture of polices, i.e., µ(a|s) = 1 2 (π(a|s) +p(a|s)), wherep is a rough density estimate over demonstrations. Thus the above gradient becomes: DISPLAYFORM2 We train our importance-sampler/policy π to maximize the empowerment Φ(·) for generalization and to reduce divergence from true distribution by minimizing DISPLAYFORM3, the matching terms of π(τ) and p ξ (τ) cancel out, ing into entropy-regularized policy update. Furthermore, as we also include the empowerment Φ(·) in the policy update to be maximized, hence the overall objective becomes: DISPLAYFORM4 Our discriminator is trained to minimize cross entropy loss as mention in Eqn. 10, and for the proposed structure of our discriminator Eqn. 9, it can be shown that the discriminator's gradient w.r.t its parameters turns out to be equal to Equation 14 (for more details, see BID6). On the other hand, our policy training objective is DISPLAYFORM5 In the next section, we show that the above policy training objective is equivalent to Equation 15. We train our policy to maximize the discriminative rewardr(s, a, s) = log(D(s, a, s) − log(1 − D(s, a, s))) and minimize the information-theoretic loss function l I (s, a, s). The discriminative rewardr(s, a, s) simplifies to: DISPLAYFORM0 where f (s, a, s) = r(s, a) + γΦ(s) − Φ(s). The entropy-regularization is usually scaled by the hyperparameter, let say λ h ∈ R, thusr(s, a, s) = f (s, a, s) − λ h log π(a|s). Hence, assuming single-sample (s, a, s), absolute-error for l I (s, a, s) = | log q φ (a|s, s) − (log π(a|s) + Φ(s))|, and l i > 0, the policy is trained to maximize following: DISPLAYFORM1 = r(s, a) + γΦ(s) − Φ(s) − λ h log π(a|s) − log q(a|s, s) + log π(a|s) + Φ(s) = r(s, a) + γΦ(s) − λ h log π(a|s) − log q(a|s, s) + log π(a|s)Note that, the potential function Φ(s) cancels out and we scale the leftover terms of l I with a hyperparameter λ I. Hence, the above equation becomes: r π (s, a, s) = r(s, a, s) + γΦ(s) + (λ I − λ h) log π(a|s) − λ I log q(a|s, s)We combine the log terms together as: r π (s, a, s) = r(s, a) + λ I Φ(s) + λĤ(·)where λ is a hyperparameter, andĤ(·) is an entropy regularization term depending on q(a|s, s) and π(a|s). Therefore, it can be seen that the Eqn. 17 is equivalent/approximation to Eqn. 15. The following figures show the difference between the path profiles of standard and crippled Ant. It can be seen that the standard Ant can move sideways whereas the crippled ant has to rotate in order to move forward. The following figures show the path profiles of a 2D point-mass agent to reach the target in training and testing environment. It can be seen that in the testing environment the agent has to take the opposite route compared to the training environment to reach the target. We use two-layer ReLU network with 32 units in each layer for the potential function h ϕ (·) and Φ ϕ (·), reward function r ξ (·), discriminators of GAIL and GAN-GCL. 
Figure 5: The top and bottom rows show the path followed by a 2D point-mass agent (yellow) to reach the target (green) in the training and testing environments, respectively.
Furthermore, the policy π θ (·) of all presented models and the inverse model q φ (·) of EAIRL are represented by two-layer ReLU networks with 32 units in each layer, where the network's output parametrizes a Gaussian distribution, i.e., we assume a Gaussian policy. For all experiments, we use the temperature term β = 1. We evaluated both the mean-squared and absolute error forms of l I (s, a, s′) and found that both lead to similar performance in reward and policy learning. We set the entropy regularization weight to 0.1 and 0.001 for reward and policy learning, respectively. The hyperparameter λ I was set to 1.0 for reward learning and 0.001 for policy learning. The target parameters of the empowerment-based potential function Φ ϕ (·) were updated every 5 and 2 epochs during reward and policy learning, respectively. Although the reward learning hyperparameters are also applicable to policy learning, we decrease the magnitude of the entropy and information regularizers during policy learning to speed up the policy's convergence to optimal values. Furthermore, we set the batch size to 2,000 and 20,000 steps per TRPO update for the pendulum and the remaining environments, respectively. For the methods BID6 BID8 presented for comparison, we use their suggested hyperparameters. We also use policy samples from the previous 20 iterations as negative data to train the discriminator of all IRL methods presented in this paper, to prevent the parametrized reward functions from overfitting the current policy samples.
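For reference, a minimal PyTorch sketch of the two-layer, 32-unit Gaussian network described above is given below; the class and attribute names are ours, and the choice of a state-independent log-standard-deviation is an assumption rather than a detail stated in the paper.

import torch
import torch.nn as nn

class GaussianMLP(nn.Module):
    # Two-layer ReLU network (32 units per layer) whose output parameterizes a
    # diagonal Gaussian; usable for the policy pi_theta(a|s) (input: s) and the
    # inverse model q_phi(a|s, s') (input: concatenation of s and s').
    def __init__(self, in_dim, act_dim, hidden=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean = nn.Linear(hidden, act_dim)
        self.log_std = nn.Parameter(torch.zeros(act_dim))  # assumption: state-independent std

    def forward(self, x):
        mu = self.mean(self.body(x))
        return torch.distributions.Normal(mu, self.log_std.exp())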
Our method introduces empowerment-regularized maximum-entropy inverse reinforcement learning to learn near-optimal rewards and policies from expert demonstrations.
Recurrent neural networks (RNNs) can learn continuous vector representations of symbolic structures such as sequences and sentences; these representations often exhibit linear regularities (analogies). Such regularities motivate our hypothesis that RNNs that show such regularities implicitly compile symbolic structures into tensor product representations (TPRs;), which additively combine tensor products of vectors representing roles (e.g., sequence positions) and vectors representing fillers (e.g., particular words). To test this hypothesis, we introduce Tensor Product Decomposition Networks (TPDNs), which use TPRs to approximate existing vector representations. We demonstrate using synthetic data that TPDNs can successfully approximate linear and tree-based RNN autoencoder representations, suggesting that these representations exhibit interpretable compositional structure; we explore the settings that lead RNNs to induce such structure-sensitive representations. By contrast, further TPDN experiments show that the representations of four models trained to encode naturally-occurring sentences can be largely approximated with a bag of words, with only marginal improvements from more sophisticated structures. We conclude that TPDNs provide a powerful method for interpreting vector representations, and that standard RNNs can induce compositional sequence representations that are remarkably well approximated byTPRs; at the same time, existing training tasks for sentence representation learning may not be sufficient for inducing robust structural representations Compositional symbolic representations are widely held to be necessary for intelligence BID8 ), particularly in the domain of language BID7. However, neural networks have shown great success in natural language processing despite using continuous vector representations rather than explicit symbolic structures. How can these continuous representations yield such success in a domain traditionally believed to require symbol manipulation?One possible answer is that neural network representations implicitly encode compositional structure. This hypothesis is supported by the spatial relationships between such vector representations, which have been argued to display geometric regularities that parallel plausible symbolic structures of the elements being represented (Mikolov et al. 2013 ; see Figure 1).Analogical relationships such as those in Figure 1 are special cases of linearity properties shared by several methods developed in the 1990s for designing compositional vector embeddings of symbolic structures. The most general of these is tensor product representations (TPRs; BID22 . Symbolic structures are first decomposed into filler-role bindings; for example, to represent the sequence, the filler 5 may be bound to the role of first element, the filler 2 may be bound to the role of second element, and so on. Each filler f i and -crucially -each role r i has a vector embedding; these two vectors are combined using their tensor product f i ⊗ r i, and these tensor products are summed to produce the representation of the sequence: f i ⊗ r i. This linear combination can predict the linear relations between sequence representations illustrated in Figure 1. (a) (b) (c) Figure 1: Plots of the first two principal components of (a) word embeddings BID14, (b) digit-sequence embeddings learned by an autoencoder (Section 2), and (c) sentences (InferSent: Conneau et al. 2017). All demonstrate systematicity in the learned vector spaces. 
In this article, we test the hypothesis that vector representations of sequences can be approximated as a sum of filler-role bindings, as in TPRs. We introduce the Tensor Product Decomposition Network (TPDN) which takes a set of continuous vector representations to be analyzed and learns filler and role embeddings that best predict those vectors, given a particular hypothesis for the relevant set of roles (e.g., sequence indexes or structural positions in a parse tree).To derive structure-sensitive representations, in Section 2 we look at a task driven by structure, not content: autoencoding of sequences of meaningless symbols, denoted by digits. The focus here is on sequential structure, although we also devise a version of the task that uses tree structure. For the representations learned by these autoencoders, TPDNs find excellent approximations that are TPRs. In Section 3, we turn to sentence-embedding models from the contemporary literature. It is an open question how structure-sensitive these representations are; to the degree that they are structuresensitive, our hypothesis is that they can be approximated by TPRs. Here, TPDNs find less accurate approximations, but they also show that a TPR equivalent to a bag-of-words already provides a reasonable approximation; these suggest that these sentence representations are not robustly structure-sensitive. We therefore return to synthetic data in Section 4, exploring which architectures and training tasks are likely to lead RNNs to induce structure-sensitive representations. To summarize the contributions of this work, TPDNs provide a powerful method for interpreting vector representations, shedding light on hard-to-understand neural architectures. We show that standard RNNs can induce compositional representations that are remarkably well approximated by TPRs and that the nature of these representations depends, in intrepretable ways, on the architecture and training task. Combined with our finding that standard sentence encoders do not seem to learn robust representations of structure, these findings suggest that more structured architectures or more structure-dependent training tasks could improve the compositional capabilities of existing models. The Tensor Product Decomposition Network (TPDN), depicted in FIG0, learns a TPR that best approximates an existing set of vector encodings. While TPDNs can be applied to any structured space, including embeddings of images or words, this work focuses on applying TPDNs to sequences. The model is given a hypothesized role scheme and the dimensionalities of the filler and role embeddings. The elements of each sequence are assumed to be the fillers in that sequence's representation; for example, if the hypothesized roles are indexes counting from the end of the sequence, then the hypothesized filler-role pairs for would be (4:last, 2:second-to-last, 5:third-to-last).The model then learns embeddings for these fillers and roles that minimize the distance between the TPRs generated from these embeddings and the existing encodings of the sequences. Before the comparison is performed, the tensor product (which is a matrix) is flattened into a vector, and a linear transformation M is applied (see Appendix B for an ablation study showing that this transformation, which was not a part of the original TPR proposal, is necessary). The overall function computed by the architecture is thus M (flatten( i r i ⊗ f i)). 
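To make the computation concrete, here is a minimal NumPy sketch of the TPDN encoder M (flatten( i r i ⊗ f i)); the embedding matrices and the linear map M are the quantities fitted by training, and all variable names are ours.

import numpy as np

def tpdn_encode(filler_ids, role_ids, F, R, M):
    # F: [num_fillers, d_f] filler embeddings, R: [num_roles, d_r] role embeddings,
    # M: [d_enc, d_f * d_r] final linear transformation.
    bound = sum(np.outer(F[f], R[r]) for f, r in zip(filler_ids, role_ids))
    return M @ bound.reshape(-1)

# e.g., encoding the sequence 5 2 4 under left-to-right roles 0 1 2:
# tpdn_encode([5, 2, 4], [0, 1, 2], F, R, M)

Training then amounts to choosing F, R, and M so that these predicted vectors match the encodings produced by the network under analysis, e.g. under a mean-squared-error objective.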
To establish the effectiveness of the TPDN at uncovering the structural representations used by RNNs, we first apply the TPDN to sequence-to-sequence networks trained on an autoencoding objective: they are expected to encode a sequence of digits and then decode that encoding to reproduce the same sequence (FIG0). In addition to testing the TPDN, this experiment also addresses a scientific question: do different architectures (specifically, unidirectional, bidirectional, and tree-based sequence-to-sequence models) induce different representations? Digit sequences: The sequences consisted of the digits from 0 to 9. We randomly generated 50,000 unique sequences with lengths ranging from 1 to 6 inclusive and averaging 5.2; these sequences were divided into 40,000 training sequences, 5,000 development sequences, and 5,000 test sequences. Architectures: For all sequence-to-sequence networks, we used gated recurrent units as the recurrent units. We considered three encoder-decoder architectures: unidirectional, bidirectional, and tree-based. 3 The unidirectional encoders and decoders follow the setup of BID26: the encoder is fed the input elements one at a time, left to right, updating its hidden state after each element. The decoder then produces the output sequence using the final hidden state of the encoder as its input. The bidirectional encoder combines left-to-right and right-toleft unidirectional encoders BID19; for symmetry, we also create a bidirectional decoder, which has both a left-to-right and a right-to-left unidirectional decoder whose hidden states are concatenated to form bidirectional hidden states from which output predictions are made. Our final topology is tree-based RNNs BID17 BID24, specifically the Tree-GRU encoder of BID6 and the tree decoder of. These architectures require a tree structure as part of their input; we generated a tree for each sequence using a deterministic algorithm that groups digits based on their values (see Appendix C). To control for initialization effects, we trained five instances of each architecture with different random initializations. Role schemes: We consider 6 possible methods that networks might use to represent the roles of specific digits within a sequence; see FIG1 for examples of these role schemes.1. Left-to-right: Each digit's role is its index in the sequence, counting from left to right. 2. Right-to-left: Each digit's role is its index in the sequence, counting from right to left. 3. Bidirectional: Each digit's role is an ordered pair containing its left-to-right index and its right-to-left index (compare human representations of spelling, Fischer-Baum et al. 2010). 4. Wickelroles: Each digit's role is the digit before it and the digit after it BID35. 5. Tree positions: Each digit's role is its position in a tree, such as RRL (left child of right child of right child of root). The tree structures are given by the algorithm in Appendix C.6. Bag-of-words: All digits have the same role. We call this a bag-of-words because it represents which digits ("words") are present and in what quantities, but ignores their positions. We hypothesize that RNN autoencoders will learn to use role representations that parallel their architectures: left-to-right roles for a unidirectional network, bidirectional roles for a bidirectional network, and tree-position roles for a tree-based network. Evaluation: We evaluate how well a given sequence-to-sequence network can be approximated by a TPR with a particular role scheme as follows. 
First, we train a TPDN with the role scheme in question (Section 1.1). Then, we take the original encoder/decoder network and substitute the fitted TPDN for its encoder FIG0 ). We do not conduct any additional training upon this hybrid network; the decoder retains exactly the weights it learned in association with the original encoder, while the TPDN retains exactly the weights it learned for approximating the original encoder (including the weights on the final linear layer). We then compute the accuracy of the ing hybrid network; we call this metric the substitution accuracy. High substitution accuracy indicates that the TPDN has approximated the encoder well enough for the decoder to handle the ing vectors. Performance of seq2seq networks: The unidirectional and tree-based architectures both performed the training task nearly perfectly, with accuracies of 0.999 and 0.989 (averaged across five runs). Accuracy was lower (0.834) for the bidirectional architecture; this might mean that the hidden size of 60 becomes too small when divided into two 30-dimensional halves, one half for each direction. Quality of TPDN approximation: For each of the six role schemes, we fitted a TPDN to the vectors generated by the trained encoder, and evaluated it using substitution accuracy (Section 2.1). The , in FIG1, show that different architectures do use different representations to solve the task. The tree-based autoencoder can be well-approximated using tree-position roles but not using any of the other role schemes. By contrast, the unidirectional architecture is approximated very closely (with a substitution accuracy of over 0.99 averaged across five runs) by bidirectional roles. Left-to-right roles are also fairly successful (accuracy = 0.87), and right-to-left roles are decidedly unsuccessful (accuracy = 0.11). This asymmetry suggests that the unidirectional network uses mildly bidirectional roles: while it is best approximated by bidirectional roles, it strongly favors one direction over the other. Though the model uses bidirectional roles, then, roles with the same left-to-right position (e.g.,, and) can be collapsed without much loss of accuracy. Finally, the bidirectional architecture is not approximated well by any of the role schemes we investigated. It may be implementing a role scheme we did not consider, or a structure-encoding scheme other than TPR. Alternately, it might simply not have adopted any robust method for representing sequence structure; this could explain why its accuracy on the training task was relatively low (0.83). Will the TPDN's success with digit-sequence autoencoders extend to models trained on naturally occurring data? We explore this question using sentence representations from four models: InferSent , a BiLSTM trained on the Stanford Natural Language Inference (SNLI) corpus BID3; Skip-thought , an LSTM trained to predict the sentence before or after a given sentence; the Stanford sentiment model (SST) BID25, a tree-based recursive neural tensor network trained to predict movie review sentiment; and SPINN BID4, a tree-based RNN trained on SNLI. More model details are in Appendix E. We now fit TPDNs to these four sentence encoding models. We experiment with all of the role schemes used in Section 2 except for Wickelroles; for sentence representations, the vocabulary size |V | is so large that the Wickelrole scheme, which requires |V | 2 distinct roles, becomes intractable. 
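To make the role schemes of Section 2 concrete (including the |V| 2 blow-up of Wickelroles just mentioned), here is a small sketch that assigns role labels to a token sequence; tree-position roles are omitted because they require the parse produced by the procedure in Appendix C, and the edge-marker convention for Wickelroles is our assumption.

def assign_roles(seq, scheme):
    n = len(seq)
    if scheme == "left_to_right":
        return list(range(n))
    if scheme == "right_to_left":
        return list(range(n - 1, -1, -1))
    if scheme == "bidirectional":          # (left-to-right index, right-to-left index)
        return [(i, n - 1 - i) for i in range(n)]
    if scheme == "wickel":                 # (previous token, next token); '#' marks the edges
        padded = ["#"] + list(seq) + ["#"]
        return [(padded[i], padded[i + 2]) for i in range(n)]
    if scheme == "bag_of_words":           # every token gets the same role
        return [0] * n
    raise ValueError(scheme)

# assign_roles([4, 2, 5], "bidirectional")  ->  [(0, 2), (1, 1), (2, 0)]

Each distinct label returned here corresponds to one learned role embedding in the TPDN, which is why the Wickelrole inventory scales with the square of the vocabulary size.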
Preliminary experiments showed that the TPDN performed poorly when learning the filler embeddings from scratch, so we used pretrained word embeddings; for each model, we use the word embeddings used by that model. We fine-tuned the embeddings with a linear transformation on top of the word embedding layer (though the embeddings themselves remain fixed). Thus, what the model has to learn are: the role embeddings, the linear transformation to apply to the fixed filler embeddings, and the final linear transformation applied to the sum of the filler/role bindings. We train TPDNs on the sentence embeddings that each model generates for all SNLI premise sentences BID3. For other training details see Appendix E. Table 1a shows the mean squared errors (MSEs) for various role schemes. In general, the MSEs show only small differences between role schemes, except that tree-position roles do noticeably outperform other role schemes for SST. Notably, bag-of-words roles perform nearly as well as the other role schemes, in stark contrast to the poor performance of bag-of-words roles in Section 2. MSE is useful for comparing models but is less useful for assessing absolute performance since the exact value of this error is not very interpretable. In the next section, we use downstream tasks for a more interpretable evaluation. Tasks: We assess how the tensor product approximations compare to the models they approximate at four tasks that are widely accepted for evaluating sentence embeddings: Stanford Sentiment Treebank (SST), rating the sentiment of movie reviews BID25; Microsoft Research Evaluation: We use SentEval to train a classifier for each task on the original encodings produced by the sentence encoding model. We freeze this classifier and use it to classify the vectors generated by the TPDN. We then measure what proportion of the classifier's predictions for the approximation match its predictions for the original sentence encodings.4Results: For all tasks besides SNLI, we found no marked difference between bag-of-words roles and other role schemes (Table 2a). For SNLI, we did see instances where other role schemes outperformed bag-of-words (Table 2b). Within the SNLI , both tree-based models (SST and SPINN) are best approximated with tree-based roles. InferSent is better approximated with structural roles than with bag-of-words roles, but all structural role schemes perform similarly. Finally, Skip-thought cannot be approximated well with any role scheme we considered. It is unclear why Skip-thought has lower than the other models. Overall, even for SNLI, bag-of-words roles provide a fairly good approximation, with structured roles yielding rather modest improvements. Based on these , we hypothesize that these models' representations can be characterized as a bag-of-words representation plus some incomplete structural information that is not always encoded. This explanation is consistent with the fact that bag-of-words roles yield a strong but imperfect approximation for the sentence embedding models. However, this is simply a conjecture; it is possible that these models do use a robust, systematic structural representation that either involves a role scheme we did not test or that cannot be characterized as a tensor product representation at all. We now complement the TPDN tests with sentence analogies. By comparing pairs of minimally different sentences, analogies might illuminate representational details that are difficult to discern in individual sentences. 
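Before describing the constructions in detail, a small synthetic check (ours, with random embeddings rather than any trained model) illustrates why such analogies hold exactly for TPRs with left-to-right roles but not with right-to-left roles.

import numpy as np

rng = np.random.default_rng(0)
words = {w: rng.normal(size=4) for w in ["I", "see", "you", "know", "now"]}
roles = {i: rng.normal(size=3) for i in range(4)}

def tpr(sentence, right_to_left=False):
    positions = range(len(sentence))[::-1] if right_to_left else range(len(sentence))
    return sum(np.outer(words[w], roles[p]) for w, p in zip(sentence, positions))

for rl in (False, True):
    lhs = tpr("I see now".split(), rl) - tpr("I see".split(), rl)
    rhs = tpr("you know now".split(), rl) - tpr("you know".split(), rl)
    print("right-to-left" if rl else "left-to-right", np.allclose(lhs, rhs))
# left-to-right True   (both differences reduce to the binding now:2)
# right-to-left False  (the words before "now" shift roles between the 2- and 3-word sentences)

The analogies constructed below are designed along the same lines, so that each one holds only under particular role schemes.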
We construct sentence-based analogies that should hold only under certain role schemes, such as the following analogy (expressed as an equation as in Mikolov et al. 2013): DISPLAYFORM0 A left-to-right role scheme makes equivalent to (f :r denotes the binding of filler f to role r): In, both sides reduce to now:2, so holds for representations using left-to-right roles. However, if instead used right-to-left roles, it would not reduce in any clean way, so would not hold. We construct a dataset of such role-diagnostic analogies, where each analogy should only hold for certain role schemes. For example, works for left-to-right roles or bag-of-words roles, but not the other role schemes. The analogies use a vocabulary based on to ensure plausibility of the constructed sentences. For each analogy, we create 4 equations, one isolating each of the four terms (e.g. I see = I see now -you know now + you know). We then compute the Euclidean distance between the two sides of each equation using each model's encodings. The are in Table 1b. InferSent, Skip-thought, and SPINN all show most consistent with bidirectional roles, while SST shows most consistent with tree-based or bidirectional roles. The bag-of-words column shows poor performance by all models, indicating that in controlled enough settings these models can be shown to have some more structured behavior even though evaluation on examples from applied tasks does not clearly bring out that structure. These analogies thus provide independent evidence for our from the TPDN analysis: these models have a weak notion of structure, but that structure is largely drowned out by the non-structure-sensitive, bag-of-words aspects of their representations. However, the other possible explanations mentioned above−namely, the possibilities that the models use alternate role schemes that we did not test or that they use some structural encoding other than tensor product representation−still remain. The previous section suggested that all sentence models surveyed did not robustly encode structure and could even be approximated fairly well with a bag of words. Motivated by this finding, we now investigate how aspects of training can encourage or discourage compositionality in learned representations. To increase interpretability, we return to the setting (from Section 2) of operating over digit sequences. We investigate two aspects of training: the architecture and the training task. Teasing apart the contribution of the encoder and decoder: In Section 2, we investigated autoencoders whose encoder and decoder had the same topology (unidirectional, bidirectional, or treebased). To test how each of the two components contributes to the learned representation, we now expand the investigation to include networks where the encoder and decoder differ. We crossed all three encoder types with all three decoder types (nine architectures in total). The are in TAB6 in Appendix D. The decoder largely dictates what roles are learned: models with unidirectional decoders prefer mildly bidirectional roles, models with bidirectional decoders fail to be well-approximated by any role scheme, and models with tree-based decoders are best approximated by tree-based roles. However, the encoder still has some effect: in the tree/uni and tree/bi models, the tree-position roles perform better than they do for the other models with the same decoders. 
Though work on novel architectures often focuses on the encoder, this finding suggests that focusing on the decoder may be more fruitful for getting neural networks to learn specific types of representations. The contribution of the training task: We next explore how the training task affects the representations that are learned. We test four tasks, illustrated in Table 3a: autoencoding (returning the input sequence unchanged), reversal (reversing the input), sorting (returning the input digits in ascending order), and interleaving (alternating digits from the left and right edges of the input). Table 3b gives the substitution accuracy for a TPDN trained to approximate a unidirectional encoder that was trained with a unidirectional decoder on each task. Training task noticeably influences the learned representations. First, though the model has learned mildly bidirectional roles favoring the left-to-right direction for autoencoding, for reversal the right-to-left direction is far preferred over left-to-right. For interleaving, the model is approximated best with strongly bidirectional roles: that is, bidirectional roles work nearly perfectly, while neither unidirectional scheme works well. Finally, for sorting, bag-of-words roles work nearly as well as all other schemes, suggesting that the model Table 3: (a) Tasks used to test for the effect of task on learned roles (Section 4). (b) Accuracy of the TPDN applied to models trained on these tasks with a unidirectional encoder and decoder. All numbers are averages across five random initializations. has learned to discard most structural information since sorting does not depend on structure. These experiments suggest that RNNs only learn compositional representations when the task requires them. This might explain why the sentence embedding models do not seem to robustly encode structure: perhaps the training tasks for these models do not heavily rely on sentence structure (e.g. BID12 achieved high accuracy on SNLI using a model that ignores word order), such that the models learn to ignore structural information, as was the case with models trained on sorting. There are several approaches for interpreting neural network representations. One approach is to infer the information encoded in the representations from the system's behavior on examples targeting specific representational components, such as semantics BID13; BID16 or syntax . Another approach is based on probing tasks, which assess what information can be easily decoded from a vector representation BID20 BID2 Kádár et al. 2017; Ettinger et al. 2018; compare work in cognitive neuroscience, e.g. BID9 ). Our method is wider-reaching than the probing task approach, or the Mikolov et al. FORMULA0 analogy approach: instead of decoding a single feature, we attempt to exhaustively decompose the vector space into a linear combination of filler-role bindings. The TPDN's successful decomposition of sequence representations in our experiments shows that RNNs can sometimes be approximated with no nonlinearities or recurrence. This finding is related to the of , who argued that LSTMs dynamically compute weighted sums of their inputs; TPRs replace the weights of the sum with the role vectors. also showed that recurrence is largely unnecessary for practical applications. BID33 report very good performance for a sequence model without recurrence; importantly, they find it necessary to incorporate sequence position embeddings, which are similar to the left-to-right roles discussed in Section 2. 
Methods for interpreting neural networks using more interpretable architectures have been proposed before based on rules and automata BID10 BID34.Our decomposition of vector representations into independent fillers and roles is related to work on separating latent variables using singular value decomposition and other factorizations BID29 BID0. For example, in face recognition, eigenfaces BID21 BID30 and TensorFaces (; BID32 use such techniques to disentangle facial features, camera angle, and lighting. Finally, there is a large body of work on incorporating explicit symbolic representations into neural networks (for a recent review, see BID1 ; indeed, tree-shaped RNNs are an example of this approach. While our work is orthogonal to this line of work, we note that TPRs and other filler-role representations can profitably be used as an explicit component of neural models (; BID11 ; BID28 BID18 . What kind of internal representations could allow simple sequence-to-sequence models to perform the remarkable feats they do, including tasks previously thought to require compositional, symbolic representations (e.g., translation)? Our experiments show that, in heavily structure-sensitive tasks, sequence-to-sequence models learn representations that are extremely well approximated by tensorproduct representations (TPRs), distributed embeddings of symbol structures that enable powerful symbolic computation to be performed with neural operations BID23. We demonstrated this by approximating learned representations via TPRs using the proposed tensor-product decomposition network (TPDN). Variations in architecture and task were shown to induce different types and degrees of structure-sensitivity in representations, with the decoder playing a greater role than the encoder in determining the structure of the learned representation. TPDNs applied to mainstream sentence-embedding models reveal that unstructured bag-of-words models provide a respectable approximation; nonetheless, this experiment also provides evidence for a moderate degree of structuresensitivity. The presence of structure-sensitivity is corroborated by targeted analogy tests motivated by the linearity of TPRs. A limitation of the current TPDN architecture is that it requires a hypothesis about the representations to be selected in advance. A fruitful future research direction would be to automatically explore hypotheses about the nature of the TPR encoded by a network. Here we analyze how several aspects of the TPDN architecture contribute to our . For all of the experiements described in this section, we used TPDNs to approximate a sequence-to-sequence network with a unidirectional encoder and unidirectional decoder that was trained to perform the reversal task (Section 4); we chose this network because it was strongly approximated by right-toleft roles, which are relatively simple (but still non-trivial). One area where our model diverges from traditional tensor product representations is in the presence of the final linear layer (step 5 in FIG0). This layer is necessary if one wishes to have freedom to choose the dimensionality of the filler and role embeddings; without it, the dimensionality of the representations that are being approximated must factor exactly into the product of the dimensionality of the filler embeddings and the dimensionality of the role embedding (see FIG0 . It is natural to wonder whether the only contribution of this layer is in adjusting the dimensionality or whether it serves a broader function. 
TAB3 shows the of approximating the reversal sequence-tosequence network with and without this layer; it indicates that this layer is highly necessary for the successful decomposition of learned representations. (Tables follow all appendix text.) Two of the parameters that must be provided to the TPDN are the dimensionality of the filler embeddings and the dimensionality of the role embeddings. We explore the effects of these parameters in FIG7. For the role embeddings, substitution accuracy increases noticeably with each increase in dimensionality until the dimensionality hits 6, where accuracy plateaus. This behavior is likely due to the fact that the reversal seq2seq network is most likely to employ right-to-left roles, which involves 6 possible roles in this setting. A dimensionality of 6 is therefore the minimum embedding size needed to make the role vectors linearly independent; linear independence is an important property for the fidelity of a tensor product representation BID22. The accuracy also generally increases as filler dimensionality increases, but there is a less clear point where it plateaus for the fillers than for the roles. The body of the paper focused on using the tensor product (f i ⊗ r i, see FIG0) as the operation for binding fillers to roles. There are other conceivable binding operations. Here we test two alternatives, both of which can be viewed as special cases of the tensor product or as related to it: circular convolution, which is used in holographic reduced representations BID15, and elementwise product (f i r i). Both of these are restricted such that roles and fillers must have the same embedding dimension (N f = N r). We first try setting this dimension to 20, which is what was used as both the role and filler dimension in all tensor product experiments with digit sequences. Red indicates accuracy under 1%; dark blue indicates accuracy over 80%. The models whose substitution accuracies are displayed are all TPDNs trained to approximate a sequence-to-sequence model that was trained on the task of reversal. We found that while these dimensions were effective for the tensor product binding operation, they were not effective for elementwise product and circular convolution TAB4 ). When the dimension was increased to 60, however, the elementwise product performed roughly as well as as the tensor product; circular convolution now learned one of the two viable role schemes (right-to-left roles) but failed to learn the equally viable bidirectional role scheme. Thus, our preliminary experiments suggest that these other two binding operations do show promise, but seem to require larger embedding dimensions than tensor products do. At the same time, they still have fewer parameters overall compared to the tensor product because their final linear layers (of dimensionality N) are much smaller than those used with a tensor product (of dimensionality N 2). When inputting digit sequences to our tree-based model, the model requires a predefined tree structure for the digit sequence. We use the following algorithm to generate this tree structure: at each timestep, combine the smallest element of the sequence (other than the last element) with its neighbor immediately to the right, and replace the pair with that neighbor. If there is a tie for the smallest digit, choose the leftmost tied digit. 
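A minimal implementation of this grouping rule, under our reading that a merged pair is subsequently represented by the value of its right member and that ties go to the leftmost digit, is sketched below; the step-by-step example that follows traces the same procedure.

def build_tree(seq):
    # Each node is (comparison value, subtree); leaves are single digits.
    nodes = [(d, d) for d in seq]
    while len(nodes) > 1:
        # leftmost smallest element, excluding the last position
        i = min(range(len(nodes) - 1), key=lambda j: nodes[j][0])
        merged = (nodes[i + 1][0], (nodes[i][1], nodes[i + 1][1]))
        nodes[i:i + 2] = [merged]
    return nodes[0][1]

# build_tree([5, 2, 3, 7, 1, 9])  ->  ((5, ((2, 3), 7)), (1, 9))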
For example, the following shows step-by-step how the tree for the sequence 523719 would be generated:• 5 2 3 7 1 9• 5 2 3 7 DISPLAYFORM0 Section 4 summarized the of our experiments which factorially varied the training task, the encoder and the decoder. Here we report the full of these experiments in two tables: TAB5 shows the accuracies achieved by the sequence-to-sequence models at the various training tasks, and TAB6 shows the substitution accuracies of TPDNs applied to the trained sequence-to-sequence models for all architectures and tasks. As much as possible, we standardized parameters across all sequence-to-sequence models that we trained on the digit-sequence tasks. For all decoders, when computing a new hidden state, the only input to the recurrent unit is the previous hidden state (or parent hidden state, for a tree-based decoder), without using any previous outputs as inputs to the hidden state update. This property is necessary for using a bidirectional decoder, since it would not be possible to generate the output both before and after each bidirectional decoder hidden state. We also inform the decoder of when to stop decoding; that is, for sequential models, the decoder stops once its output is the length of the sequence, while for tree-based models we tell the model which positions in the tree are leaves. Stopping could alternately be determined by some action of the decoder (e.g., generating an end-of-sequence symbol); for simplicity we chose the strategy outlined above instead. For all architectures, we used a digit embedding dimensionality of 10 (chosen arbitrarily) and a hidden layer size of 60 (this hidden layer size was chosen because 60 has many integer factors, making it amenable to the dimensionality analyses in Appendix B.2). For the bidirectional architectures, the forward and backward recurrent layers each had a hidden layer size of 30, so that their concatenated hidden layer size was 60. For bidirectional decoders, a linear layer condensed the 60-dimensional encoding into 30 dimensions before it was passed to the forward and backward decoders. The networks were trained using the Adam optimizer with the standard initial learning rate of 0.001. We used negative log likelihood, computed over the softmax probability distributions for each output sequence element, as the loss function. Training proceeded with a batch size of 32, with loss on the held out development set computed after every 1,000 training examples. Training was halted when the loss on the heldout development set had not improved for any of the development loss checkpoints for a full epoch of training (i.e. 40,000 training examples). Once training completed, the parameters from the best-performing checkpoint were reloaded and used for evaluation of the network. When applying TPDNs to the digit-based sequence-to-sequence models, we always used 20 as both the filler embedding dimension and the role embedding dimension. This decision was based on the experiments in Appendix B.2; we selected filler and role embedding dimensions that were safely above the cutoff needed to lead to successful decomposition. The TPDNs were trained with the same training regimen as the sequence-to-sequence models, except that, instead of using negative log likelihood as the loss function, for the TPDNs we used mean squared error between the predicted vector representation and the actual vector representation from the original sequence-to-sequence network. The TPDNs were given the sequences of fillers (i.e. 
the digits), the roles hypothesized to go with those fillers, the sequence embeddings produced by the RNN, and the dimensionalities of the filler embeddings, role embeddings, and final linear transformation. The parameters that were updated by training were the specific values for the filler embeddings, the role embeddings, and the final linear transformation. For all four sentence encoding models, we used publicly available and freely downloadable pretrained versions found at the following links:• we use the SPINN-PI-NT version, which is equivalent to a tree-LSTM BID27 with 300-dimensional hidden states. For training a TPDN to approximate the sentence encoding models, the filler embedding dimensions were dictated by the size of the pretrained word embeddings; these dimensions were 300 for InferSent and SPINN, 620 for Skip-thought, and 25 for SST. The linear transformation applied to the word embeddings did not change their size. For role embedding dimensionality we tested all role dimensions in {1, 5, 10, 20, 40, 60}. The best-performing dimension was chosen based on preliminary experiments and used for all subsequent experiments; we thereby chose role dimensionalities of 10 for InferSent and Skip-thought, 20 for SST, and 5 for SPINN. In general, role embedding dimensionalities of 5, 10, and 20 all performed noticeably better than 1, 40, and 60, but there was not much difference between 5, 10, and 20.The training regimen for the TPDNs on sentence models was the same as for the TPDNs trained on digit sequences. The TPDNs were given the sequences of fillers (i.e. the words), the roles hypothesized to go with those fillers, the sequence embeddings produced by the RNN, the initial pretrained word embeddings, the dimensionalities of the linearly-transformed filler embeddings, the role embeddings, and the final linear transformation. The parameters that were updated by training were the specific values for the role embeddings, the linear transformation that was applied to the pretrained word embeddings, and the final linear transformation. The sentences whose encodings we trained the TPDNs to approximate were the premise sentences from the SNLI corpus BID3. We also tried instead using the sentences in the WikiText-2 corpus but found better performance with the SNLI sentences. This is plausibly because the shorter, simpler sentences in the SNLI corpus made it easier for the model to learn the role embeddings without distraction from the fillers. For each TPDN trained to approximate a sentence encoder, we evaluate it on four downstream tasks: (i) Stanford Sentiment Treebank (SST), which is labeling the sentiment of movie reviews BID25; this task is further subdivided into SST2 (labeling the reviews as positive or negative) and SST5 (labeling the reviews on a 5-point scale, where 1 means very negative and 5 means very positive). The metric we report for both tasks is accuracy.(ii) Microsoft Research Paraphrase Corpus (MRPC), which is labeling whether two sentences are paraphrases of each other . For this task, we report both accuracy and F1. (iii) Semantic Textual Similarity Benchmark (STS-B), which is giving a pair of sentences a score on a scale from 0 to 5 indicating how similar the two sentences are BID5. For this task, we report Pearson and Spearman correlation coefficients. (iv) Stanford Natural Language Inference (SNLI), which involves labeling a pair of sentences to indicate whether the first entails the second, contradicts the second, or neither BID3. 
For this task, we report accuracy as the evaluation metric. The first result we report for the TPDN approximations of sentence encoders is similar to the substitution accuracy used for digit encoders. Here, we use SentEval to run these downstream evaluations.

Table 9: The proportion of times that a classifier trained on a sentence encoding model gave the same downstream-task predictions based on the original sentence encoding model and based on a TPDN approximating that model, where the TPDN uses the role schemes indicated by the column header. For all tasks but STS-B, these numbers show the proportion of predictions that matched; chance performance is 0.5 for SST2 and MRPC, 0.2 for SST5, and 0.33 for SNLI. For STS-B, the metric shown is the Pearson correlation between the TPDN's similarity ratings and the original model's similarity ratings; chance performance here is 0.0.

Table 10: Downstream task performance for classifiers trained and tested on the TPDNs that were trained to approximate each of the four applied models. The rightmost column indicates the performance of the original model (without the TPDN approximation).
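Putting the preceding sections together, the following is a minimal PyTorch sketch (our own code; module names, dimensions, and the synthetic targets are placeholders) of fitting a TPDN to a trained encoder's representations with a mean squared error objective. For the sentence encoders, the filler embeddings would instead be a learned linear transformation of the pretrained word embeddings.

```python
import torch
import torch.nn as nn

class TPDN(nn.Module):
    def __init__(self, n_fillers, n_roles, d_filler, d_role, d_encoding):
        super().__init__()
        self.filler_emb = nn.Embedding(n_fillers, d_filler)
        self.role_emb = nn.Embedding(n_roles, d_role)
        # final linear transformation from the flattened sum of tensor products
        self.out = nn.Linear(d_filler * d_role, d_encoding, bias=False)

    def forward(self, fillers, roles):
        # fillers, roles: (batch, seq_len) index tensors
        f = self.filler_emb(fillers)                  # (B, L, d_filler)
        r = self.role_emb(roles)                      # (B, L, d_role)
        bound = torch.einsum('blf,blr->bfr', f, r)    # bind and sum over positions
        return self.out(bound.reshape(bound.size(0), -1))

# toy usage: approximate 60-dimensional encodings of digit sequences
tpdn = TPDN(n_fillers=10, n_roles=6, d_filler=20, d_role=20, d_encoding=60)
opt = torch.optim.Adam(tpdn.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

fillers = torch.randint(0, 10, (32, 6))        # digit ids
roles = torch.arange(6).flip(0).repeat(32, 1)  # right-to-left role ids
target = torch.randn(32, 60)                   # encoder states to approximate (synthetic here)

opt.zero_grad()
loss = loss_fn(tpdn(fillers, roles), target)
loss.backward()
opt.step()
```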
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BJx0sjC5FX
RNNs implicitly implement tensor-product representations, a principled and interpretable method for representing symbolic structures in continuous space.
We address the problem of teaching an RNN to approximate list-processing algorithms given a small number of input-output training examples. Our approach is to generalize the idea of parametricity from programming language theory to formulate a semantic property that distinguishes common algorithms from arbitrary non-algorithmic functions. This characterization leads naturally to a learned data augmentation scheme that encourages RNNs to learn algorithmic behavior and enables small-sample learning in a variety of list-processing tasks. Since the earliest days of neural network research, some of the most important questions about neural models have focused on their ability to capture the crispness, systematicity and compositionality that characterize symbolic computation and human cognition BID2 BID11, and to do so with a human-like number of examples BID10. While recent studies have demonstrated promising in training recurrent neural networks (RNNs) to approximate symbolic algorithms in domains like list manipulation BID4 BID7, binary arithmetic BID8, graph traversal BID3, and planar geometry BID12, the question of sample efficiency remains very much open. Difficult algorithmic problems may require tens or hundreds of thousands of labelled training examples, and even simple tasks on small inputs seem to require more data than should be necessary BID9.Our goal in this paper is to teach RNNs to approximate list-processing algorithms f:: DISPLAYFORM0. Inspired by the idea of parametricity BID13 ) from type theory and functional programming, we hypothesize that a feature that distinguishes many algorithms from arbitrary functions is that they commute with some family of element-wise changes to their inputs. We describe a method for learning this family from the training set D, and show how this learned information can be used to create an augmented training set for an RNN. Our experiments show that this augmentation scheme makes it possible to approximate algorithms from small training sets, in some cases requiring only a single example per input list length. RNN inductive biases. Our data augmentation approach is motivated by the failure patterns of unaugmented training. The confusion matrix in FIG0 shows the performance of an RNN (an LSTM BID5 with 128 hidden units) trained with ten examples to copy lists of two elements. The failure mode is clear: the model acts as an interpolating lookup table: the model tends to map the regions of input space around each training input x i to the training output f (x i). This is an entirely appropriate function model for classification, but a lookup table is clearly a poor example to follow for algorithm learning. Our approach for the remainder of this paper will be to formulate a semantic property that distinguishes algorithms from lookup tables, and then use data augmentation to nudge an RNN in an algorithmic direction. The confusion matrix for an RNN trained with ten examples to implement the copy algorithm. The inputs (rows) and outputs (columns) are each pairs of digits 0-9. A red pixel in cell (i, j) indicates that the model predicted output sequence j for input sequence i; perfect performance for this copy task would fill the diagonal of the heatmap with red pixels. The gray stripes indicate the ten training examples. Parametricity. 
Any computable function is technically an algorithm, but we have the intuition that some functions are more "algorithm-y" than others: an algorithm is a function that "does the same thing" to any input fed to it, while a nonalgorithmic function like a lookup table has different, idiosyncratically defined behavior for each possible input. Put another way, a change to the input of an algorithm should translate systematically to a change to its output. In our copy example, f =. If we modify the input by replacing the'9' token with a'3', then making the same substitution on the output side produces the correct equation We can make this intuition quantitative by drawing on a family of in type theory and programming language theory based on the idea of type parametricity, and often called "theorems for free" BID13. The main is the following: Any parametrically polymorphic (for brevity, we will simply say "polymorphic" henceforth) function f:: DISPLAYFORM0 where a is a type parameter 1, commutes with element-wise application of any function g:: a → a: DISPLAYFORM1 An illustrative example: doubling each of the elements of an integer list and then reversing the gives the same output as reversing and then doubling. Parametricity captures some intuitions about what makes a function algorithmic; it accounts for the copy example above, for instance, and in general, if a function commutes with element-wise transformations of its inputs, it cannot depend in a lookup-y way on the details of its input's elements. Since our interest is not limited to polymorphic functions, some generalization of equation 1 is required. The function drop evens, which removes the even elements from a list of integers, for instance, is a clearly legitimate algorithm that fails to obey equation 1; g: x → 2x is a counterexample. But while drop evens does not commute with all element-wise transformations, it does commute with a subset of them, namely those that preserve parity. So this motivates a hypothesis: an algorithm f:: [Int] → [Int] should have some systematicity in its definition that is reflected in its commuting with some set of element-wise transformations DISPLAYFORM2 We can draw a high-level analogy here with techniques from math and physics that characterize an object in terms of its symmetries. If we can learn G, we can use it to augment our training set to help learn f: given an input-output training example (x, f (x)), then for any g ∈ G, (g(x), g(f (x)) will also be a valid training example, and can be use for data augmentation. This section describes a technique for learning G from f's training data. We parameterize each g as a collection of swaps DISPLAYFORM0, each of which replaces each instance of an s token with the corresponding t token. For instance, if g = {3 → 4, 7 → 1}, then g =. Our approach to learning g will be to train a classifier C that predicts whether or not a given collection of swaps should commute with f. Given two training input-output pairs (x i, f (x i)) and (x j, f (x j)), chosen so that x i and x j have the same length, we first determine whether or not x i and x j are related by a set of swaps. Supposing there is a swap set g with g(x i) = x j, we then see whether g also relates the output lists g(f (x i))? = f (x j): if it does, then we have a positive training example of a swap set that commutes with f, and if it does not, then we have a negative example. 
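The following toy check (our own code, not from the paper) illustrates the commutation property: a polymorphic function such as reverse commutes with any elementwise map, while drop_evens commutes only with parity-preserving maps.

```python
def elementwise(g, xs):
    return [g(x) for x in xs]

def commutes(f, g, xs):
    # equation 1 specialized to a single input list
    return f(elementwise(g, xs)) == elementwise(g, f(xs))

reverse = lambda xs: xs[::-1]
drop_evens = lambda xs: [x for x in xs if x % 2 != 0]
double = lambda x: 2 * x          # changes the parity of odd elements
add_two = lambda x: x + 2         # preserves parity

xs = [5, 2, 3, 7, 1, 9]
print(commutes(reverse, double, xs))      # True: reverse is polymorphic
print(commutes(drop_evens, double, xs))   # False: doubling breaks parity
print(commutes(drop_evens, add_two, xs))  # True: parity-preserving maps commute
```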
Repeating this process for each pair of length-matched lists in our training set, we obtain a collection of training examples of positive (commutes with f) and negative (does not commute with f) swap sets. A promising feature of this setup is that while the original learning problem (learning f) has n training examples, the problem of learning G has O(n 2) examples. To support small-sample learning, we make the simplifying factorization assumption that each swap contributes independently to the commutativity of the whole swap set. Our classifier C acts on a swap set g = {g 1, . . ., g m} = {s 1 → t 1, . . ., s m → t m} by C(g) = smoothed min m i=1 c(g i), where c classifies individual swaps, and smoothed min(v) = −γlog i e −vi/γ BID1. For the experiments in this paper, c consisted of a bilinear layer that combined ten-dimensional embeddings of s i and t i, followed by a RELU nonlinearity and a linear layer with a scalar output. Per-sequence classification. We have assumed that a given swap can be classified independently of the input list to which it is to be applied. This assumption is violated by functions like sort: g = {3 → 6} commutes with sort for the input list, but not for the list, for example. To deal with this, we distinguish per-task classification, as described above, from persequence classification, which depends on the input list elements in addition to the swap set. Extending the factorization assumption from the per-task model, for a swap set g = {g 1, . . . g m} and input sequence x = [x 1, . . . x k], the per-sequence classifier predicts C(g, x) = smoothed min({c(g i, x j) | i = 1,... m, j = 1,... k}. Here, c extends the per-task classifier by adding a bilinear combination of the sequence element with the combination of the swap elements. Augmentation. To use a trained classifier C to generate an augmented training set, we randomly generate a large collection of candidate swap sets g, recording for each the classifier score C(g). Each combination of a swap set g and training pair (x, f (x)) gives a candidate augmentation example (g(x), g(f (x))). Any given candidate augmentation example could be generated in multiple ways by starting from different input-output pairs, so we assign to each candidate example the sum of the scores of the swap sets g that could be used to generate it. We take our augmentation training set to be the 1000 top-scoring candidates for each list length. For both the per-sequence and per-task models, if our classifier's training set is entirely missing either positive or negative examples, the model reverts to random augmentation. We compare four models: the per-task and per-sequence augmentation models described above, a random augmentation model that samples swap sets uniformly at random, and a no-augmentation baseline. We evaluate each on the list-processing functions shown in figure 2. We divide these functions into three types: polymorphic, token-dependent (depends on the identities of individual tokens), and order-dependent (depends on the order relations between tokens). These function categories are used in post-hoc analysis only; they are not explicitly made available to our models during training or testing. Expecting problem difficulty to scale to some extent with output sequence length, we also note for each function the length of its output when evaluated on an input list of length n. 
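A simplified sketch of this pipeline (our own code; the learned bilinear per-swap classifier is replaced by a placeholder scoring function): extract the swap set relating two equal-length inputs, label it by whether it also relates the outputs, combine per-swap scores with the smoothed minimum, and accumulate scores over candidate augmentation examples.

```python
import math

def swap_set(xs, ys):
    # the swap set {s -> t} relating two equal-length lists, or None if none exists
    if len(xs) != len(ys):
        return None
    g = {}
    for s, t in zip(xs, ys):
        if g.get(s, t) != t:
            return None
        g[s] = t
    return {s: t for s, t in g.items() if s != t}

def apply_swaps(g, xs):
    return [g.get(x, x) for x in xs]

def smoothed_min(values, gamma=1.0):
    # soft minimum used to aggregate per-swap scores into a swap-set score
    return -gamma * math.log(sum(math.exp(-v / gamma) for v in values))

def swap_set_score(g, per_swap_score, gamma=1.0):
    # per-task factorization: each swap s -> t is scored independently
    return smoothed_min([per_swap_score(s, t) for s, t in g.items()], gamma)

def augmentation_candidates(train_pairs, swap_sets, per_swap_score):
    # every (swap set, training pair) combination yields a candidate example;
    # identical candidates accumulate the scores of all swap sets producing them
    scored = {}
    for g in swap_sets:
        score = swap_set_score(g, per_swap_score)
        for x, fx in train_pairs:
            cand = (tuple(apply_swaps(g, x)), tuple(apply_swaps(g, fx)))
            scored[cand] = scored.get(cand, 0.0) + score
    return sorted(scored, key=scored.get, reverse=True)

# building a labeled training example for the classifier, with f = drop_evens
f = lambda xs: [x for x in xs if x % 2 != 0]
x_i, x_j = [5, 2, 3], [7, 2, 9]
g = swap_set(x_i, x_j)                                   # {5: 7, 3: 9}
print(g, apply_swaps(g, f(x_i)) == f(x_j))               # True -> positive example

# placeholder scorer standing in for the trained classifier: reward parity-preserving swaps
parity_score = lambda s, t: 1.0 if (s - t) % 2 == 0 else -1.0
candidates = augmentation_candidates([(x_i, f(x_i)), (x_j, f(x_j))],
                                     [{5: 7, 3: 9}, {5: 6}, {2: 4}], parity_score)
print(candidates[:2])
```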
Functions that implement a filtering operation will have variable length outputs, but for simplicity, we group them with functions like copy in the "≈ n" output-length category. For each target function, we trained our models on even-length lists of lengths 2, 4, 6 and 8, and tested on all lengths 2-8. All list items were integers in the range. To evaluate our models' sample efficiency, we considered learning problems with 1, 5, 10, 20, 30, 40, and 50 input-output examples per input list length. For all problems, our model was an LSTM with two hidden layers of 128 units. Results. On the polymorphic target functions, all augmentation models, including the random one, substantially outperformed the non-augmented baseline. This performance is to be expected, as equation 1 guarantees that these functions commute with all element-wise input transformations. For functions with output length 1 and n, the augmented models are able to achieve close to perfect accuracy with only a single example per input list length. Moving to the non-polymorphic token-dependent functions, random augmentation not only ceases to suffice for good performance, but in fact often delivers lower accuracy than the non-augmented baseline, while both learned augmentation models continue to perform well on most target functions. For the order-dependent functions, the analysis in section 2 suggests that per-sequence augmentation should outperform per-task. In practice, the two learned models achieve roughly equal accuracy, perhaps reflecting the fact that the more expressive per-sequence model requires more data to train correctly. Moreover, while the learned augmentation models outperform the non-augmented baseline on the triangle task and on the very few-shot versions of the sorting tasks, their advantage over the non-augmented baseline is much less marked than on the other function types. Still, the learned models largely avoid the destructive effects of applying random augmentation. Future directions. Our augmentation schemes are model-agnostic; it will be interesting in the future to pair them with models like pointer networks or memory-augmented RNNs. It will also be interesting to extend the techniques of this paper to domains beyond list processing, for instance to the geometric algorithms studied in BID12.
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
r1lQeoCVu4
Learned data augmentation instills algorithm-favoring inductive biases that let RNNs learn list-processing algorithms from fewer examples.
The goal of multi-label learning (MLL) is to associate a given instance with its relevant labels from a set of concepts. Previous works of MLL mainly focused on the setting where the concept set is assumed to be fixed, while many real-world applications require introducing new concepts into the set to meet new demands. One common need is to refine the original coarse concepts and split them into finer-grained ones, where the refinement process typically begins with limited labeled data for the finer-grained concepts. To address the need, we propose a special weakly supervised MLL problem that not only focuses on the situation of limited fine-grained supervision but also leverages the hierarchical relationship between the coarse concepts and the fine-grained ones. The problem can be reduced to a multi-label version of negative-unlabeled learning problem using the hierarchical relationship. We tackle the reduced problem with a meta-learning approach that learns to assign pseudo-labels to the unlabeled entries. Experimental demonstrate that our proposed method is able to assign accurate pseudo-labels, and in turn achieves superior classification performance when compared with other existing methods. Multi-label learning (MLL) is an important learning problem with a wide range of applications BID2 BID0 BID11. While traditional setting focuses on the scenario where the label classes are fixed before learning, many real-world applications face different situations. One scenario that is common in many applications is the growing number of classes BID13, where the growth splits high-level concepts to finer-grained ones BID1. For example, the set of classes might start from high-level concepts such as {Animal, . . ., Food}, and then grow to include finer-grained concepts like {Cat, . . ., Dog, . . ., Apple, . . ., Banana}. Typical applications may have collected sufficient number of labeled data for learning the high-level concepts in a fully supervised manner, but it can be challenging for the applications to efficiently adapt the classifier from the high-level (coarse-grained) concepts to the finer-grained ones. Conquering the challenge calls for two components: one is a strategic algorithm to actively collect a few fine-grained and informative labels, and the other is an effective learning model to exploit the fine-grained labels that have been partially collected. This work focuses on the design of the second component-learning an accurate fine-grained classifier with only limited supervision. In particular, we assume that the model receives a data set that contains all the coarse-grained labels and a few fine-grained ones, as shown in FIG0. Then, the problem of constructing a predictive fine-grained model with the presented data set falls under the big umbrella of weakly supervised learning. Specifically, when we focus on leveraging the coarse-grained labels to build a fine-grained classifier, the problem resembles learning with inexact supervision considered by BID12, where the coarse-grained labels are not in the exact form for the desired output and could only provide weak information about the target fine-grained labels. On the other hand, if we focus on using the fine-grained part of the labels to train the classifier, the problem can be viewed as a multi-label variant of learning with incomplete supervision as some instances receive their exact fine-grained ground-truth labels whereas some do not have labels at all BID12. 
While both the aforementioned problems have attracted much research attention, the combination of them (inexact and incomplete supervision) which our problem of interest can be cast as, has not yet been carefully investigated to the best of our knowledge. Organization In this work, we start from a formal definition of our problem of interest. We then demonstrate a simple way to reduce the original problem into a special form of negative-unlabeled learning problem BID7 leveraging the label hierarchy. To tackle the reduced problem, we begin with a discussion on the caveats carried by some possible existing approaches, and propose a new model that undertakes the challenges posed by inexact and incomplete supervision through a novel learning to learn method which jointly exploits the hierarchical relationship between the coarse-and fine-grained labels, as well as the benefits of all available data in hand. The key idea within our model is to take into account all available information to learn the labeling assignments for the unlabeled entries, called pseudo-labels, and use them to guide the decent direction of the parameter updates on the underlying classifier. Finally, we experimentally demonstrate that the proposed method not only assigns accurate pseudo-labels to the unknown entries but also enjoys significantly better performance than other methods for learning fine-grained classifiers under the limited supervision setting. Formally, we denote an instance by a feature vector x ∈ R d, and its relevant labels by a bit vector y ∈ {−1, 1}K to indicate whether the labels in a pre-defined set Y = {y 1, ..., y K} are relevant, i.e., y[k] = 1 if and only if y k is relevant. In this work, rather than assuming that the set Y is fixed, we consider the problem of splitting the original high-level concepts into finer-grained ones, refining the label set of interest from Y c = {y 1, ..., y C} into Y f = {y 11, y 12, ..., y C1, y C2, ...} as shown in FIG2. Let y c and y f be the corresponding label vectors for Y c and Y f respectively. Assume that we receive a data set DISPLAYFORM0 consisting of N examples that are annotated only with the high-level (coarse-grained) labels, and an additional small set DISPLAYFORM1 of M examples with their fine-grained labels annotated, our goal is to leverage these examples to learn an accurate fine-grained classifier Φ(θ, x): DISPLAYFORM2 K where θ is the model parameter and K is the total number of fine-grained classes. Fully-Supervised Learning A straightforward way to learn a fine-grained classifier is to utilize only the fully-annotated training examples in D tr through standard supervised approaches. Nevertheless, the small number of examples in this set might be unable to train a strong classifier. Moreover, it completely ignores the (weak) supervision provided by the abundant coarse labels. Multi-Label Learning with Missing Labels One way to make use of the higher-level supervision on learning the fine-grained concepts is to leverage the hierarchical relationship where the TAB0 To tackle the reduced problem, one way is to treat the unknown fine-grained labels as missing entries, and apply MLL algorithms that can learn with the presence of missing labels BID4 BID8 BID9. BID9 proposed a classic empirical risk minimization styled method LEML, attempting to solve the optimization problem that arrives at the model parameters which can most accurately recover the observed training labels. 
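As groundwork for the reduction mentioned above, the example below (our own toy code, with an assumed two-level hierarchy) shows how an irrelevant coarse label lets us mark all of its fine-grained children as observed negatives, while the children of relevant coarse labels remain unlabeled, yielding the negative-unlabeled label matrix that the methods discussed here operate on.

```python
import numpy as np

hierarchy = {                # coarse class -> its fine-grained children (illustrative)
    'Animal': ['Cat', 'Dog'],
    'Food': ['Apple', 'Banana'],
}
fine_classes = [c for children in hierarchy.values() for c in children]

def deduce_fine_labels(coarse_labels):
    # coarse_labels: dict mapping coarse class -> +1 (relevant) / -1 (irrelevant)
    y_f = np.zeros(len(fine_classes), dtype=int)       # 0 = unobserved entry
    for coarse, children in hierarchy.items():
        if coarse_labels[coarse] == -1:
            for c in children:
                y_f[fine_classes.index(c)] = -1        # observed negative
    return y_f

print(fine_classes)
print(deduce_fine_labels({'Animal': 1, 'Food': -1}))   # [0 0 -1 -1]
```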
Roughly, their objective is formulated as: DISPLAYFORM0 where Ω is the set of indices of the observed entries in Y f and L is a loss function that measures the discrepancy between the predicted and ground-truth labels. From the objective, however, we note that only the observed entries contribute to the learning of model parameters, and the unobserved ones are basically ignored in the model training process. Unfortunately, in our setting, as the observed fine-grained labels are mostly deduced from the irrelevance of their parent labels, LEML is thus unable to exploit the weak supervision provided by the relevant coarse labels. One-Class Multi-Label Learning Another plausible direction to approach the reduced problem is through one-class multi-label learning methods BID10. A common approach took in these methods is to assume the values of the unobserved entries to be the opposite class of the observed ones, and train a cost-sensitive classifier with different weights given to the observed and unobserved entries. Nonetheless, as the underlying ground truths for the missing entries are not necessarily the presumed class, without careful algorithm redesign or label distribution estimation, these methods may suffer from the introduced label bias that in suboptimal performances. While existing solutions have developed different ways of treating the unknown entries during the learning process, they somehow do not delicately exploit the benefits of the unlabeled entries as mentioned in the previous section. In light of this, we seek for a method that could more properly leverage the missing entries with a key assumption that: When the missing entries are all correctly recovered and used in the training process, the classifier learned could achieve the best performance. Based on the assumption, we attempt to find the best labeling assignment to the unknown entries, called pseudo-labels, which when the model is trained accordingly, can lead to best classification performance on the fine-grained concepts. Towards this goal, we propose to use the few examples that receive their fully-annotated fine-grained labels in D tr as a validation set to evaluate the classifier's performance on the fine-grained concepts. Formally, we aim to train our fine-grained classifier using the examples in D tr with a pseudo fine-grained label matrix Y pseudo where: DISPLAYFORM0 where every p ij is a pseudo label to be determined and the objective is: DISPLAYFORM1 Note that with different pseudo-labels assigned to the missing entries, we arrive at different optimal model parameter θ * (Y pseudo). And the optimal assignment of the pseudo-labels should be based on the validation performance of the ing classifier: DISPLAYFORM2 However, solving Eq. 4 to find the optimal pseudo-label assignment requires a computationally prohibiting two-loop optimization procedure. To conquer the optimization challenge, inspired by recent works in meta-learning literature BID3 BID6, we attempt to tackle the problem with an iterative approach which dynamically find the best pseudo-label assignments locally at each optimization step. Specifically, consider a typical gradient descent update step: DISPLAYFORM3 where α is the step size and t is the current timestep. Then, at each iteration t, we aim to learn the pseudo-label assignment which leads to the model parameters that minimize the validation loss after a single update step: DISPLAYFORM4 Solving Eq. 6 at each timestep t could, nevertheless, still be very expensive. 
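A minimal PyTorch sketch of this one-step lookahead (our own code; the classifier, loss, and shapes are stand-ins): relax the pseudo-labels, take a virtual gradient step on the training loss, evaluate the validation loss at the updated parameters, and differentiate it back to the pseudo-labels. The sign of the resulting gradient is what the approximation introduced next uses.

```python
import torch
import torch.nn.functional as F

d, K, alpha = 16, 4, 0.1
model = torch.nn.Linear(d, K)                        # stand-in fine-grained classifier

x_tr = torch.randn(32, d)
y_obs = -torch.ones(32, K)                           # observed entries (deduced negatives)
mask_unk = torch.rand(32, K) > 0.5                   # unknown entries to be pseudo-labeled
pseudo = torch.zeros(32, K, requires_grad=True)      # relaxed pseudo-labels in [-1, 1]

x_val = torch.randn(8, d)
y_val = torch.where(torch.rand(8, K) > 0.5, torch.ones(8, K), -torch.ones(8, K))

def mll_loss(logits, targets):
    # targets in {-1, +1}; simple logistic multi-label loss
    return F.softplus(-targets * logits).mean()

y_train = torch.where(mask_unk, torch.tanh(pseudo), y_obs)
theta = list(model.parameters())                      # [weight, bias]
grads = torch.autograd.grad(mll_loss(model(x_tr), y_train), theta, create_graph=True)
theta_next = [p - alpha * g for p, g in zip(theta, grads)]   # virtual SGD step (Eq. 5)

logits_val = F.linear(x_val, theta_next[0], theta_next[1])
val_loss = mll_loss(logits_val, y_val)
grad_pseudo = torch.autograd.grad(val_loss, pseudo)[0]       # used for the sign rule below
print(grad_pseudo.shape)
```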
As a result, we propose a simple approximation of (Y_pseudo)*_t by looking at the gradient direction (sign) of the validation loss w.r.t. the pseudo-labels. In particular, we assign pseudo-labels at timestep t by:

p_ij = −sign( ∂ loss_val(θ_{t+1}(Y_pseudo)) / ∂ p_ij )

4 EXPERIMENTS AND DISCUSSION

To justify the effectiveness of the proposed method, we test our method on a multi-label image dataset, MS COCO BID5. We compare our method with three baseline methods, namely, a standard fully-supervised (FS) learning model, LEML, a classic approach for handling the typical missing-label setup BID9, and a representative method that tackles the problem of one-class classification (OCC) BID10. We deploy a fully-connected neural network as our underlying base model. In TAB0, we show the results of different methods with varying size of D_tr. It can be seen that our method consistently achieves the best performance across different settings. It is worthwhile to note that although the standard fully-supervised approach does not leverage the partially labeled examples in D_tr, it surprisingly outperforms the other two baseline methods in many cases. To investigate the reasons for this, we plot the learning curves of different methods in Figure 3. From the figure, we see that although LEML and OCC achieve comparable, or even better, performance than the fully-supervised approach at the very beginning of the learning process, the two approaches then quickly suffer from overfitting, which results in the performance drop. For LEML, we conjecture that the performance degradation comes from the overwhelming number of negative entries dominating the learning dynamic. And arguably, the severe overfitting of OCC results from the overly simple assumption on the missing entries, which brings label bias into the learning objective.

Recovery rate of our method. To understand the benefits of the pseudo-labels learned in our approach, we show how our method is capable of correctly recovering the missing entries, as well as the correlation between the recovery rate and model performance. In Figure 4, we plot the recovery performance of the learned pseudo-labels measured by F1-loss (1 − F1-score), and the horizontal bars are the corresponding F1-loss obtained by simply treating all missing entries as ones and by assigning them random labels. We can see from the figure that the pseudo-labels learned by our method recover the missing entries much more accurately than the two naive baselines. In addition, there is a strong correlation between the recovery rate and model classification performance. With more accurate assignment of pseudo-labels on the unknown entries, the trained model is able to achieve stronger classification performance.

We design a tailored method through a meta-learning strategy, which learns to accurately assign pseudo-labels to the unknown entries of a special weakly supervised MLL problem. Experimental results show that our proposed method not only assigns accurate pseudo-labels, but also enables the underlying classifier to perform better than other possible existing solutions.
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rylVYjqHdN
We propose a special weakly-supervised multi-label learning problem along with a newly tailored algorithm that learns the underlying classifier by learning to assign pseudo-labels.
We develop a reinforcement learning based search assistant which can assist users through a set of actions and sequence of interactions to enable them realize their intent. Our approach caters to subjective search where the user is seeking digital assets such as images which is fundamentally different from the tasks which have objective and limited search modalities. Labeled conversational data is generally not available in such search tasks and training the agent through human interactions can be time consuming. We propose a stochastic virtual user which impersonates a real user and can be used to sample user behavior efficiently to train the agent which accelerates the bootstrapping of the agent. We develop A3C algorithm based context preserving architecture which enables the agent to provide contextual assistance to the user. We compare the A3C agent with Q-learning and evaluate its performance on average rewards and state values it obtains with the virtual user in validation episodes. Our experiments show that the agent learns to achieve higher rewards and better states. Within the domain of search, the recent advances have focused on personalizing the search through recommendations BID27 BID19. While the quality of recommendations have improved, the conventional search interface has not innovated much to incorporate useful contextual cues which are often missed. Conventional search interface enables the end user to perform a keyword based faceted search where the typical work flow goes as follows: the end user types in her search query, applies some filters and then modifies the query based on the . This iterative interaction naturally paves way for incorporating conversations in the process. Instead of the search engine just retrieving the best set, it can interact with the user to collect more contextual cues. For example, if a user searches for birthday gift, the search engine could follow-up by asking who are you buying the gift for. Such information and interaction can provide more humanlike and engaging search experience along with assisting user in discovering their search intent. In this work we address this problem by developing a Reinforcement Learning (RL) BID21 based conversational search agent which interacts with the users to help them in narrowing down to relevant search by providing them contextual assistance. RL based dialogue agents have been designed for tasks like restaurant, bus and hotel reservation BID18 which have limited and well-defined objective search modalities without much scope for subjective discussion. For instance, when searching for a restaurant, the user can specify her preferences (budget, distance, cuisines etc) due to which the problem can be modeled as a slot filling exercise. In contrast, suppose a designer is searching for digital assets (over a repository of images, videos etc) to be used in a movie poster. She would start with a broad idea and her idea would get refined as the search progresses. The modified search intent involves an implicit cognitive feedback which can be used to improve the search . We model our agent for this type of search task. Since the user preferences can not be modeled using a fixed set of facets, we end up with a very large search space which is not the case with most other goal oriented RL agents. We model the search process as a sequence of alternate interactions between the user and the RL agent. 
The extent to which the RL agent could help the user depends on the sequence and the type of actions it takes according to user behavior. Under the RL framework, intermediate rewards is given to the agent at each step based on its actions and state of conversational search. It learns Since true conversational data is not easily available in search domain, we propose to use query and session log data to develop a stochastic virtual user environment to simulate training episodes and bootstrap the learning of the agent. Our agent interacts with the user to gauge user intent and treats the search engine as a black box service which makes it easily deployable over any search engine. We perform qualitative experiments by simulating validation episodes with different reinforcement learning algorithms under various formulations of the state space to evaluate the performance of the trained agent. Our contributions are three-fold: 1) formulating conversational interactive search as a reinforcement learning problem and proposing a generic and easily extendable set of states, actions and rewards; 2) developing a stochastic user model which can be used to efficiently sample user actions while simulating an episode; 3) we develop A3C (Asynchronous Advantage Actor-Critic) BID15 algorithm based architecture to predict the policy and state value functions of RL agent and compare it with other RL algorithms over performance on validation episodes. There have been various attempts at modeling conversational agents, as dialogue systems BID4 BID31 BID23 BID12 and text-based chat bots BID5 BID13 b; BID24 BID28. Some of these have focused on modeling goal driven RL agent such as indoor way finding system BID5 to assist humans to navigate to their destination and visual input agents which learn to navigate and search an object in a 3-D environment space BID32. Domain independent RL based dialogue systems have been explored in the past. For example, BID23 uses User Satisfaction (US) as the sole criteria to reward the learning agent and completely disregards Task Success(TS). But US is a subjective metric and is much harder to measure or annotate real data with. In our formulation, we provide a reward for task success at the end of search along with extrinsic and auxiliary rewards at intermediate steps (discussed in section 3.4). Other RL based information seeking agents extract information from the environment by sequentially asserting questions but these have not been designed on search tasks involving human interaction and behavior BID1.Neural Conversation Model, based on the SEQ2SEQ framework, uses an end-to-end and domain independent encoder-decoder architecture to maximize the likelihood of next utterance given the previous utterances BID24. The ing model generates grammatically correct sentences though they tend to be repetitive, less engaging and lack consistency such that it cannot perform coherent and meaningful conversation with real humans. To overcome these issues, deep RL has been combined with Neural Conversation Model to foster sustained conversations based on the long-term success of dialogues BID14. The model is initialized with MLE parameters and tuned further using policy gradient BID22 through rewards which capture ease of answering, information flow and semantic coherence. 
Neural conversation models initialized with MLE parameters have also been improved using batch policy gradient method which efficiently uses labeled data comprising of scores assigned to each dialogue in the conversation BID10. These models require labeled conversational data which is not available for the subjective search tasks we discussed. Our model assists the user at different steps in the search through a set of assist actions instead. RL has also been used for improving document retrieval through query reformulation where the agent sequentially reformulates a given complex query provided by the user BID17 BID16. But their work focuses on single turn episodes where the model augments the given query by adding new keywords. In contrast, our agent engages the user directly into the search which comprises of sequence of alternate turns between user and agent with more degrees of freedom (in terms of different actions the agent can take).To minimize human intervention while providing input for training such agents in spoken dialogue systems, simulated speech outputs have been used to bypass spoken language unit BID4. The system uses word based features obtained from raw text dialogues to represent the state of the Reinforcement Learning Agent. This approach enables to reduce the system's dependence on hand engineered features. User models for simulating user responses have been obtained by using LSTM which learns inter-turn dependency between the user actions. They take as input multiple user dialogue contexts and outputs dialogue acts taking into account history of previous dialogue acts and dependence on the domain BID0.Another prominent research area, closely related to conversational agents, is that of question answering. In case of machine comprehension, the task is to reason over multiple statements to formulate the answer to a given query. Memory Networks BID28 use memories to store information (facts and contexts) internally in a compressed form for later use. RL has been used in visual tasks BID20 BID6 where given an image, history of the utterances and a follow-up question about the image, the agent is trained to answer the question. Unlike machine comprehension based question answering tasks, Visual Dialog tasks are closer to conversational agents as they require the model to maintain and employ the context of the conversation when formulating the replies. But unlike Visual Dialog, our responses are not grounded in the given image and require knowledge beyond the immediate context of the conversation. Often task oriented dialogue systems are difficult to train due to absence of real conversations and subjectivity involved in measuring shortcomings and success of a dialogue BID8. Evaluation becomes much more complex for subjective search systems due to absence of any label which tells whether the intended task had been completed or not. We evaluate our system through rewards obtained while interacting with the user model and also on various real world metrics (discussed in experiments section) through human evaluation. In this paper, we experiment with two different RL algorithms -the Asynchronous Advantage Actor Critic (A3C) algorithm BID15 and the Q-learning BID29. We first discuss preliminaries of RL, then provide the details of the action-state spaces, rewards and virtual user we modeled followed by discussion of the above algorithms and our architecture. Reinforcement Learning is the paradigm to train an agent to operate in an environment E. 
The agent interacts with the environment in a series of independent episodes and each episode comprises of a sequence of turns. At each turn, the agent observes the state s of the environment (s ∈ S, where S is defined as the state space -the set of possible states) and performs an action a (a ∈ A, where A is defined as the action space -the set of all the possible actions). When the agent performs an action, the state of the environment changes and the agent gets the corresponding reward BID21 ). An optimal policy maximizes cumulative reward that the agent gets based on the actions taken according to the policy from start till the final terminal state is reached in the episode. Action space A is designed to enable the search agent to interact with the user and help her in searching the desired assets conveniently. The agent actions can be divided into two sets -the set of probe intent actions -P and general actions -G as described in TAB0 respectively. The agent uses the probe intent actions P to explicitly query the user to learn more about her context. For instance, the user may make a very open-ended query ing in a diverse set of even though none of them is a good match. In such scenarios, the agent may prompt the user to refine her query or add some other details like where the search would be used. Alternatively, the agent may cluster the search and prompt the user to choose from the clustered categories. These actions serve two purposes -they carry the conversation further and they provide various cues about the search context which is not evident from the search query provided by the user. show display corresponding to most recent user query add to cart suggest user to bookmark assets for later reference ask to download suggest user to download some if they suit her requirement ask to purchase advise the user to buy some paid assets provide discount offer special discounts to the user based on search history sign up ask the user to create an account to receive updates regarding her search ask for feedback take feedback about the search so far provide help list possible ways in which the agent can assist the user salutation greet the user at the beginning; say goodbye when user concludes the searchThe set G consists of generic actions like displaying assets retrieved corresponding to the user query, providing help to the user etc. While probe intent actions are useful to gauge user intent, set G comprises of actions for carrying out the functionality which the conventional search interface provides like "presenting search ". We also include actions which promote the business use cases (such as prompting the user to signup with her email, purchase assets etc). The agent is rewarded appropriately for such prompts depending on the subsequent user actions. Our experiments show that the agent learns to perform different actions at appropriate time steps in search episodes. We model the state representation in order to encapsulate facets of both search and conversation. The state s at every turn in the conversation is modeled using the history of user actionshistory user, 1 history of agent actions -history agent, discretized relevance scores of search -score and a variable length conv which represents number of user responses in the conversation till that point. The variables history user and history agent comprises of user and agent actions in last k turns of the conversational search respectively. 
This enables us to capture the context of the conversation (in terms of sequence of actions taken). Each user-action is represented as one-hot vector of length 9 (which is the number of unique user actions). Similarly, each agent-action has been represented as a one-hot vector of length 12. The history of the last 10 user and agent actions is represented as concatenation of these one-hot vectors. We use vectors with zero padding wherever needed such as when current history comprises of less than 10 turns. The variable score quantifies the degree of similarity between most recent query and the top 10 most relevant search assets retrieved. They have been taken in state representation to incorporate the dependency between the relevance of probe intent actions and quality of search retrieved. Similarly, length conv has been included since appropriateness of other agent actions like sign up may depend on the duration for which the user has been searching. Reinforcement Learning is concerned with training an agent in order to maximize some notion of cumulative reward. In general, the action taken at time t involves a long term versus short term reward trade-off leading to the classic exploration-exploitation problem. This problem manifests itself even more severely in the context of conversational search. For instance, let us say that the user searches for "nature". Since the user explicitly searched for something, it would seem logical that the most optimal action is to provide the search to the user. Alternatively, instead of going for immediate reward, the agent could further ask the user if she is looking for "posters" or "portraits" which would help in narrowing down the search in the long run. Determining the most optimal action at any point of the conversation is a non-trivial task which highlights the importance of reward modeling. Since we aim to optimize dialogue strategy and do not generate dialogue utterances, we assign the rewards corresponding to the appropriateness of the action performed by the agent considering the state and history of the search. We have used some rewards such as task success (based on implicit and explicit feedback from the user during the search) which is also used in PARADISE framework BID25. At the same time several metrics used by the PARADISE cannot be used for evaluating our system or modeling rewards. For instance, time required (number of turns) for user to search desired cannot be penalized since it can be possible that user is finding the system engaging and helpful in refining the better which may increase number of turns in the search. We model the total reward which the agent gets in one complete dialogue comprises of three kinds of rewards and can be expressed in the form of following equation: DISPLAYFORM0 First kind of reward (r T C) is based on the completion of the task (Task Completion TC) which is download and purchase in the case of our search problem. This reward is provided once at the end of the episode depending on whether the task is completed or not. As second kind of rewards, we provide instantaneous extrinsic rewards BID7 -(r extrinsic) based on the response that the user gives subsequent to an agent action. Rewards provided on the basis of interaction with simulated user have been studied and compared with inverse RL previously BID3. We categorize the user action into three feedback categories, namely good, average or bad. 
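A small illustrative sketch of assembling this state vector (our own code; the exact discretization of the relevance scores and the specific score bins are assumptions):

```python
import numpy as np

N_USER_ACTIONS, N_AGENT_ACTIONS, HISTORY = 9, 12, 10

def one_hot(idx, n):
    v = np.zeros(n)
    v[idx] = 1.0
    return v

def encode_history(action_ids, n_actions):
    # concatenate one-hot vectors of the last HISTORY actions, zero-padded if shorter
    vecs = [one_hot(a, n_actions) for a in action_ids[-HISTORY:]]
    pad = [np.zeros(n_actions)] * (HISTORY - len(vecs))
    return np.concatenate(vecs + pad)

def build_state(user_history, agent_history, top10_scores, length_conv):
    score = np.digitize(top10_scores, bins=[0.25, 0.5, 0.75])   # discretized relevance (assumed bins)
    return np.concatenate([
        encode_history(user_history, N_USER_ACTIONS),     # 90 dims
        encode_history(agent_history, N_AGENT_ACTIONS),   # 120 dims
        score,                                            # 10 dims
        [length_conv],                                    # 1 dim
    ])

state = build_state([0, 3, 5], [2, 7, 1], np.random.rand(10), length_conv=3)
print(state.shape)   # (221,)
```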
For example, if the agent prompts the user to refine the query and the user does follow the prompt, the agent gets a high reward because the user played along with the agent, while if the user refuses, a low reward is given to the agent. A moderate reward will be given if the user herself refines the query without the agent's prompt. Depending on these feedback categories, r_extrinsic is provided at every time step in the search. Apart from the extrinsic rewards, we define a set of auxiliary tasks T_A specific to the search problem which can be used to provide additional reward signals, r_auxiliary, using the environment. We define T_A = {# clicks on results, # add to cart, # cluster category clicks, whether the sign up option is exercised}. r_auxiliary is determined and provided at every turn in the search based on the values of the different auxiliary task metrics defined in T_A up to that turn in the search. Such rewards promote a policy which improves the performance on these tasks.

The RL agent is trained to learn the optimal action policy, which requires actual conversational search data; such data is not available since conversational agents have not been used in the context of the search task we defined. To bypass this issue and bootstrap training, we propose a user model that simulates user behavior to interact with the agent during training and validation. Our methodology can be used to model a virtual user using any query and session log data. We developed a stochastic environment where the modeled virtual human user responds to the agent's actions. The virtual human user has been modeled using query session data from a major stock photography and digital asset marketplace, which contains information on queries made by real users, the corresponding clicks, and other interactions with the assets. This information has been used to generate a user which simulates human behavior while searching and converses with the agent during a search episode. We map every record in the query log to one of the user actions as depicted in TAB2. Figure 1 shows an example mapping from session data to user action. (Caption for Figure 1: Example of mapping session data to user actions. The session data comprises a sequence of logs; each log comprises the search query, filters applied (content type), an offset field, and the interaction performed by the user (such as search, click, etc.).)

To model our virtual user, we used the query and session log data of approximately 20 days. The virtual user is modeled as a finite state machine by extracting conditional probabilities P(User Action u | History h of User Actions). These probabilities are employed for sampling the next user action given the fixed-length history of her actions in an episode. The agent performs an action in response to the sampled user action. Subsequent to the action performed by the agent, the next user action is sampled, which modifies the state and is used to determine the reward the agent gets for its previous action. TAB3 shows a snippet of the conditional probability matrix of user actions given the history of the last 3 user actions. The query and session log data has been taken from an asset search platform where the marketer can define certain offers/promotions which kick in when the user takes certain actions; for instance, the user can be prompted to add some images to the cart (via a pop-up box). Users' responses to such prompts on the search interface are used as a proxy to model the effect of the RL agent on the virtual user's sampled action subsequent to different probe actions by the agent.
This ensures that our conditional probability distribution covers the whole probability space of user behavior. In order to incorporate the effect of other agent actions such as sign up which are not present in the query logs, we tweaked the probability distribution realistically in order to bootstrap the agent. The agent can be trained through Q-learning BID26 ) which consists of a real valued function Q: S × A → IR. This Q-function maps every state-action pair (s, a) to a Q-value which is a numerical measure of the expected cumulative reward the agent gets by performing a in state s. Suppose the agent takes an action a in state s such that the environment makes a transition to a state s, the Q-value for the pair (s, a) is updated as follows: DISPLAYFORM0 where α is the learning rate, r is the immediate reward for performing action a in state s in i th user turn in the episode. For our case, an episode refers to one complete session of conversational search between the user and the agent. Once the Q-values are learned, given a state, the action with the maximum Q-value is chosen. In order to prevent the agent from always exploiting the best action in a given state, we employ an − greedy exploration policy BID30, 0 < < 1. The size of our state space is of the order of ≈ 10 7. For Q-learning, we use the table storage method where the Q-values for each state is stored in a lookup table which is updated at every step in a training episode. In this algorithm, we maintain a value function V π and a stochastic policy π as a function of the state. The policy π: A × S → IR defines a probability distribution π(a|s) over the set of actions which the agent may take in state s and is used to sample agent action given the state. The value function V π: S → IR represents the expected cumulative reward from current time step in an episode if policy π is followed after observing state s i.e. V π (s) = IE a∼π(.|s) [Q π (s, a)]. We propose a neural architecture (figure 2) which preserves the context of the conversational search for approximating the policy and value functions. The architecture comprises of a LSTM BID9 which processes the state at a time step t (input i t = s t) and generates an embedding h t which is further processed through a fully connected layer to predict the probability distribution over different actions using softmax function BID2 ) and the value of the input state. Following equations describes our architecture. DISPLAYFORM0, where w LSTM represents the parameters of the LSTM, θ p and θ v represents the set of parameters of the last fully connected layer which outputs the policy p and value v st of the input state s t respectively. We represent all the parameters by θ = {w LSTM, θ p, θ v}. The LSTM state is reset to zero vectors at the start of a search episode. At time-step t in the search episode, the state s t is given as input to the model. The cell and hidden state (c t−1, h t−1) of the LSTM is maintained based on the previous states (s 0, s 1, ..., s t−1) which have already been processed. The LSTM unit remembers the previous states which enables our model to capture the effect of observed states in the search while predicting the probability of different actions. This "memory" implicitly allows our agent to make the next prediction based on the transitions and user behavior observed so far allowing it to mimic the strategy of a real agent assisting the user. 
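A rough PyTorch sketch of this context-preserving architecture (our own module and variable names; the single shared output layer and the state dimensionality are simplifications): an LSTM cell consumes the state at each turn, its hidden and cell state carry the episode's context forward, and a fully connected layer on top produces the action distribution p_t and the state value V(s_t).

```python
import torch
import torch.nn as nn

class ContextPreservingA3C(nn.Module):
    def __init__(self, state_dim, n_actions, lstm_size=250):
        super().__init__()
        self.lstm = nn.LSTMCell(state_dim, lstm_size)
        self.head = nn.Linear(lstm_size, n_actions + 1)   # policy logits + state value

    def forward(self, state, hc):
        h, c = self.lstm(state, hc)
        out = self.head(h)
        policy = torch.softmax(out[:, :-1], dim=-1)       # p_t over agent actions
        value = out[:, -1]                                 # V(s_t)
        return policy, value, (h, c)

    def initial_state(self, batch_size=1):
        # LSTM state is reset to zeros at the start of every search episode
        size = self.lstm.hidden_size
        return (torch.zeros(batch_size, size), torch.zeros(batch_size, size))

# toy episode: the hidden/cell state carries the conversation context across turns
net = ContextPreservingA3C(state_dim=221, n_actions=12)   # 221 is an assumed state size
hc = net.initial_state()
for t in range(3):
    s_t = torch.randn(1, 221)
    policy, value, hc = net(s_t, hc)
    action = torch.multinomial(policy, 1).item()
```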
The parameters are tuned by optimizing the loss function loss total which can be decomposed into three types of losses.loss total (θ) = loss policy (θ) + loss value (θ) + loss entropy (θ) Figure 2: A3C architecture for predicting policy p t and value V (s t). Current search state s t is processed by a LSTM followed by a fully connected layer. The cell state c t and hidden state h t of LSTM from previous time step is retained while processing the next state during an episode. The same fully connected layer is used for prediction at different time steps in an episode. The episode terminates at time step T.We now explain all the three losses. In A3C algorithm, the agent is allowed to interact with the environment to roll-out an episode. The network parameters are updated after completion of every n-steps in the roll-out. An n-step roll-out when the current state is s t can be expressed as (s t, a t, r t, s t+1, v st) → (s t+1, a t+1, r t+1, s t+1, v st+1) →... → (s t+n−1, a t+n−1, r t+n−1, s t+n, v st+n−1). We also calculate V (s t+n ; θ) in order to estimate loss value which is defined as: DISPLAYFORM1 Thus an n-step roll-out allows us to estimate the target value of a given state using the actual rewards realized and value of the last state observed at the end of the roll-out. Value of a terminal state s T is defined as 0. Each roll-out yields n samples to train the network on the value loss function using these estimated values. In a similar way, the network is trained on loss policy which is defined as: DISPLAYFORM2 The above loss function tunes the parameter in order to shift the policy in favor of actions which provides better advantage A(a t, s t, θ) given the state s t. This advantage can be interpreted as additional reward the agent gets by taking action a t in state s t over the average value of the state V (s t ; θ) as the reference. However, this may bias the agent towards a particular or few actions due to which the agent may not explore other actions in a given state. To prevent this, we add entropy loss to the total loss function which aims at maximizing the entropy of probability distribution over actions in a state.loss entropy (θ) = − a∈A −p(a|s i ; θ) log(p(a|s i ; θ)), for i = t, t + 1,..., t + n − 1The total loss function loss total incorporates exploitation-exploration balance through policy and entropy loss functions optimization. The value function V π (s) is used for determining value of a state to be used as reference while determining advantage of different actions in loss policy. We use Adam optimizer BID11 for optimizing the loss function on model parameters θ. In order to improve the exploration capacity of the final agent trained, A3C comprises of a global model and uses multiple asynchronous agents which interact with their own copy of environment in parallel. Each agent uses its local gradients of the loss function with respect to model parameters to update the parameters of the global model and then copies the parameters of the global model for subsequent training. This is repeated after completion of every fixed number of episodes for each agent which in faster convergence. Including the vectors which encode the history of agent and user actions in last k turns of the search to the state captures the local context. User behavior at current time-step can be affected by queries far away in the history. Since the search episode may extend indefinitely, local context is not sufficient to capture this behavior. 
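The three loss terms for an n-step rollout can be sketched as follows (our own code with assumed shapes; practical implementations often also weight the value and entropy terms): the bootstrapped n-step return gives the value target, the advantage weights the policy loss, and the entropy term discourages premature convergence to a few actions.

```python
import torch

def a3c_losses(rewards, values, log_probs, entropies, bootstrap_value, gamma=0.9):
    # rewards[t]; values[t] = V(s_t; theta); log_probs[t] = log p(a_t | s_t; theta);
    # entropies[t] = entropy of p(. | s_t); bootstrap_value = V(s_{t+n}; theta),
    # taken to be 0 when the rollout ends at a terminal state.
    R = bootstrap_value
    loss_value = torch.tensor(0.0)
    loss_policy = torch.tensor(0.0)
    loss_entropy = torch.tensor(0.0)
    for r, v, logp, ent in zip(reversed(rewards), reversed(values),
                               reversed(log_probs), reversed(entropies)):
        R = r + gamma * R                       # bootstrapped n-step return target
        advantage = R - v
        loss_value = loss_value + advantage.pow(2)
        loss_policy = loss_policy - logp * advantage.detach()
        loss_entropy = loss_entropy - ent       # minimizing this maximizes entropy
    return loss_policy + loss_value + loss_entropy

# toy usage with a 5-step rollout produced by the network
n = 5
rewards = [torch.tensor(1.0)] * n
values = [torch.randn((), requires_grad=True) for _ in range(n)]
log_probs = [torch.randn((), requires_grad=True) for _ in range(n)]
entropies = [torch.rand((), requires_grad=True) for _ in range(n)]
loss_total = a3c_losses(rewards, values, log_probs, entropies, torch.tensor(0.0))
loss_total.backward()
```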
The LSTM unit in our architecture aggregates the local context as it sequentially processes the states in an episode into a global context which in capturing context at a global search level. In this section, we evaluate the trained agent with the virtual user model and discuss the obtained with the two reinforcement learning techniques, A3C and Q-learning, and compare them. For each algorithm, we simulate validation episodes after each training episode and plot the average rewards and mean value of the states obtained during the validation episodes. We also developed a chat-search interface (implementation details in section 6.1 in the appendix) where real users can interact with the trained agent during their search. 2 Some conversations between the trained agent and real humans using this interface have been provided in section 6.2 in the appendix. The global model is obtained using 10 local agents which are trained in parallel threads (each trained over 350 episodes). We compare the validation using this global model for different state representations for conversational search and hyper-parameter settings such as discount factor (γ) (which affects exploration vs exploitation trade-off) and the LSTM size which controls the context preserving capacity of our architecture. We experiment with 3 values of discount factor and fix the LSTM size to 250. Figure 3 shows the validation trend in average rewards corresponding to 3 discount factors. Greater discount factor (lower value of γ) in lowers weights for the future rewards. With a large discount factor, the agent tries to maximize the immediate rewards by taking the greedy actions since future rewards are discounted to a larger extent. We validate this by computing the variance in the for each case. We do not consider the values before 100 episodes as the network is under-fitting in that region. The variance values for the 3 cases (γ = 0.90, 0.70, 0.60) are 1.5267, 1.627, and 1.725 respectively. Since the agent takes more greedy actions with higher discount factors, the variance in the reward values also increases since the greedy approach yields good rewards in some episodes and bad rewards in others. Figure 3: Plot of average validation reward against number of training episodes for A3C agent. The size of LSTM is 250 for each plot with varying discount factor; γ = 0.90 (left), γ = 0.80 (middle) and γ = 0.60 (right). It can be observed that a lower γ value in higher variance in the rewards ing in a greedy (less exploratory) policy. For further experiments, we fix the discount value to 0.90 as this value achieves better explorationexploitation balance. In the next setup, we vary the size of the LSTM as 100, 150 and 250 to determine the effect of size of the context preserved. FIG0 depicts the trend in mean value of states observed in an episode. We observe that larger size of the LSTM in better states which the agent observes on an average since average state value is higher. This demonstrates that a bigger LSTM size providing better capacity to remember the context in agent performing actions which yield improved states in search. In this experiment, we model the state vector without incorporating the action history vectorshistory user and history agent. In FIG1, we plot the mean state values observed in an episode and compare the case where the history of actions is added to state vector with the case where it is not added. The 3 plots correspond to different LSTM sizes. 
For large LSTM size (= 250), the history need not be explicitly added to the state as the LSTM is able to preserve the context and eventually achieves the same mean state values. But if the LSTM does not have enough capacity, as in case of LSTM size = 100, the mean state values observed with history vector included in the state is more than when it is not included. This demonstrates that including the local context in state representation is useful to enable the architecture to aggregate it into global context. FIG2 shows the trend in average reward in validation episode versus number of training episodes. We experimented with values of different hyper-parameters for Q-learning such as discount (γ) and exploration control parameter determined their optimal values to be 0.70 and 0.90 respectively based on trends in the validation curve and average reward value at convergence. We compare the A3C agent (with LSTM size 250 and γ = 0.90 (left plot in figure 3) ) with the Q-learning agent (figure 6). It can be observed that the A3C agent is able to obtain better averaged awards (≈ 2.0) in validation episodes upon convergence as compared to the Q-agent which obtains ≈ 0.50. Since A3C algorithm performs and generalize better than Q-learning approach, we evaluated our system trained using A3C algorithm in the rebuttal period through professional designers who regularly use image search site for their design tasks. To evaluate the effectiveness of our system when interacting with professional designers, we asked them to search images which they will use while designing a poster on natural scenery using both our conversational search agent and conventional search interface provided by stock photography marketplace. We collected feedback, from 12 designers, for both the search modalities across the different set of tasks. We asked the designers to rate our conversational search system on following metrics. TAB4 shows average rating value of each of these metrics.1. Information flow to measure the extent to which the agent provide new information and suggestions which helped in driving the search forward (on a scale of 1 to 5 where 5 represents high information flow). 2. Appropriateness of actions to measure the suitability of actions taken by the agent during the search in terms of coherence (on a scale of 1 to 5 where 5 represents high appropriateness and denotes that it took right actions at right time during the search). 3. Repetitiveness to measure how repetitive was the agents actions in providing assistance during their search (on a scale of 1-5 where 1 represents not repetitive at all). In addition to the above metrics, we also asked the designers to compare our system to conventional search interface of the stock photography marketplace in terms of following metrics:1. Engagement: This is to measure how interactive and engaging conversational search is compared to conventional search on scale of 1 to 5 where 5 represents that conversational search is much more engaging and 1 represents same engagement as conventional search. We averaged the rating provided by the designers. Our system could achieve an average rating of 2.67 in this metric. We asked the designers to compare the two search modalities in terms of time required to search and reach to desired search . We asked them to choose one of three options -conversational search required, 1.more time, 2. About the same time, 3. Less time (faster), than conventional search. 
About 33.3% of the designers said that it required more time, while 16.7% said that conversational search reduced the time required to reach the desired search results. The remaining 50% believed that it required about the same time. 3. Ease of using conversational search compared to conventional search: We asked them to choose one of three options - conversational search is, 1. Difficult to use and adds additional burden, 2. About the same to use, 3. Much easier to use, compared to conventional search. 33.3% of the designers believed that conversational search is easier than conventional search, 41.7% said that it is the same as conventional search, while 25% believed that it is more difficult to perform conversational search than conventional search. The above evaluation shows that although we trained the bootstrapped agent through the user model, it performs decently well with actual users by driving their search forward with appropriate actions without being very repetitive. The comparison with conventional search shows that conversational search is more engaging. In terms of search time, it resulted in more search time for some designers, while it reduced the overall time required to find the desired results in other cases; in the majority of cases it required about the same time. The designers are regular users of the conventional search interface and well versed with it; even then the majority of them did not face any cognitive load while using our system, with 33.3% of them believing that it is easier than conventional search. In this paper, we develop a Reinforcement Learning based search assistant to interact with customers to help them search digital assets suited to their use-case. We model the rewards, state space, and action space, and develop an A3C based architecture which leverages the context of search to predict the policy. The trained agent is able to obtain higher average rewards in the validation episodes with the virtual user and observes states with better values, indicative of providing a better search experience. We also propose a virtual stochastic user model to interact with and train the RL agent in the absence of labeled conversational data, which accelerates the process of obtaining a bootstrapped agent. As the next step, we would deploy our system to collect true conversational data which can be used to fine-tune the current model as well as to train a new model which can generate the natural language responses in addition to deciding the action. In different search domains, designing the state and action space can take significant time, which makes every situation an absolutely new task to be solved. To approach this issue as future work, another system can be designed which helps in the automation of state space characterization with the help of system query logs. The chat interface comprises a two-pane window, one for text dialogues and the other for viewing search results. The chat interface allows the user to convey queries in the form of dialogue in an unrestricted format. The user message is parsed by the NLU, which deduces the user action from the input message. The NLU additionally obtains and redirects the query to the search engine if the user action is new query or refine query. The user action and search engine output are forwarded to the RL agent by the NLU. The RL agent performs an action according to the learned policy, based on which a formatted response is displayed on the chat interface by the NLU. In addition to inputting the message, the chat interface provides functionality for other user actions such as liking a search result, adding assets to cart, etc.
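The hand-off between the interface components described above can be sketched schematically as follows; the component objects and method names (nlu, search_engine, rl_agent, state) are hypothetical stand-ins for the actual modules, and only the order of the hand-off is illustrated.

```python
def handle_user_turn(message, nlu, search_engine, rl_agent, state):
    """One turn of the chat pipeline: NLU -> search engine -> RL agent -> response.

    All four component objects are placeholders standing in for the deployed modules.
    """
    user_action, query = nlu.parse(message)          # deduce the user action (and query, if any)
    results = None
    if user_action in ("new_query", "refine_query"):
        results = search_engine.search(query)        # fetch assets and related categorical options
    state = state.update(user_action, results)       # fold this turn into the search state
    agent_action = rl_agent.act(state)               # pick the action favored by the learned policy
    return nlu.format_response(agent_action, results), state
```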
The NLU is a rule based unit which utilizes dependencies between words and their POS tags in a sentence to obtain query and user action. 3 The query is formulated sequentially by locating the primary keyword and then determining the surrounding keywords. A distinction between the user actions new query and ref ine query is made depending on whether the current primary keyword is present in previous queries. A database is used to store the user queries to incorporate context of conversation while formulating the search query to be sent to the search engine. Similarly rules and stop words are used for determining different user actions. For instance, words like'yes/'no' are used for providing feedback signals when the agent asks different probing questions, words like'more' means that the user is requesting more for the same query,'cluster category click' determined based on whether the user clicks on one of the categorical options provided by the agent,'search similar' is determined on the basis of whether the user drags the url of one of the search into the text area. The search engine seeks assets from asset database corresponding to the search query formulated by NLU. The asset database comprises of about 5 million assets of different types like images, videos etc. Assets are indexed in asset database using tags and other metadata for retrieval during the search. 4. The search engine also fetches and clusters assets having tags similar to most relevant assets retrieved to formulate related categorical asset options. Relevance scores of retrieved assets are calculated on the basis of the degree of match between asset metadata and the query. These scores are dispatched with fetched assets and categorical options to the NLU. User: hello Agent: Hello, how may I help you? User: i want images of cars Agent: Categories might help you get better responses, click on the options below -sporty cars, expensive cars, city cars, sedan cars User: city cars Agent: Here are some of the images User: show more Agent: Do you want to browse through the following options? -urban city cars, wrecked city cars, traffic city cars User: urban city cars Agent: Your cart is the place where you can add the images you like. Click on the add to cart icon User: <User adds few assets to cart> Agent: Could you refine your query further so I can get you better images? User: racing Agent: Your cart is the place where you can add the images you like. Click on the add to cart icon User: <User drags an asset into text area to search for similar ones> Agent: What are you going to use the images for? User: i am organizing a racing competition Agent: Here you go, these are some of the best matches for your query User: <User adds few assets to cart> Agent: Would you like to add something to your query? User: no, bye Example 2 Agent: Hi, type in the box below User: show some cpu Agent: Here are some of the images User: can you show some more Agent: Check out some images that we have User: <User drags an asset into text area to search for similar ones> Agent: Where will you use these images? User: in my mac Agent: Results for your query User: <User adds few assets to cart> Agent: I can sign you up, search images for you, add them to your cart and much more. Type in the box to chat User: show me more Agent: Is there anything else you want to search? 
User: yes Agent: We also have many images in these categories -Computer golden cpu, Technology golden cpu, Internet golden cpu, Abstract golden cpu, Chip golden cpu User: internet golden cpu Agent: Want me to get you anything else? User: no, thanks for the help. Agent: You may enter your search query User: show me some windows Agent: Check out the images User: more Agent: Here are some of the images User: more Agent: Check out some images that we have User: <User search for assets similar to one of the search displayed> Agent: Refine your query further to get better User: reflecting windows Agent: Would you like to add something to your collections now? You can simply click on the add to cart icon User: <User adds few assets to cart> Agent: Could you refine your query further so I can get you better images? User: shining reflecting windows Agent: You could add the images to your cart for better exploration, click on the cart icon User: no Agent: Where will you use these images? User: i will use them for construction designs Agent: Here you go, these are some of the best matches for your query User: show some images of mountains Agent: We also have many images in these categories, should I show you some?User: nature mountains Agent: Where will you use these images? User: i am organizing adventure sports Agent: Check out the images User: can you show some more Agent: Images can be added to cart for building a separate collection of stuff that you like. You can simply click on add to cart button User: <User adds few assets to cart> Agent: Want me to get you anything else? User: i want people riding bike on mountains Agent: We also have many images in these categories, should I show you some? -high mountains, hiking sports mountains, mountaineering mountains User: hiking sports mountains Agent: Signing up takes a moment, just give me your email id User: ABC@XYZ.com
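The rule-based action detection used by the NLU in the conversations above (keyword cues such as 'yes'/'no' for feedback, 'more' for requesting additional results, dragging an asset for similarity search, or reusing an earlier primary keyword to mark a refinement) can be illustrated with a small sketch; the keyword lists and function name are illustrative, not the deployed rules.

```python
def detect_user_action(message, previous_queries, clicked_category=False, dragged_asset=False):
    """Toy keyword rules for mapping a user message to an agent-visible action label."""
    text = message.lower().strip()
    if dragged_asset:
        return "search_similar"            # user dragged a result into the text area
    if clicked_category:
        return "cluster_category_click"    # user clicked one of the offered categories
    if text in {"yes", "no"}:
        return "feedback"                  # answer to a probing question
    if "more" in text.split():
        return "request_more"
    # crude primary-keyword check: reuse of an earlier keyword suggests a refinement
    keywords = set(text.split())
    if any(keywords & set(q.split()) for q in previous_queries):
        return "refine_query"
    return "new_query"
```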
rkfbLilAb
A Reinforcement Learning based conversational search assistant which provides contextual assistance in subjective search (like digital assets).
We present a simple approach based on pixel-wise nearest neighbors to understand and interpret the functioning of state-of-the-art neural networks for pixel-level tasks. We aim to understand and uncover the synthesis/prediction mechanisms of state-of-the-art convolutional neural networks. To this end, we primarily analyze the synthesis process of generative models and the prediction mechanism of discriminative models. The main hypothesis of this work is that convolutional neural networks for pixel-level tasks learn a fast compositional nearest neighbor synthesis/prediction function. Our experiments on semantic segmentation and image-to-image translation show qualitative and quantitative evidence supporting this hypothesis. Convolutional neural networks (CNNs) have revolutionized computer vision, producing impressive for discriminative tasks such as image classification and semantic segmentation. More recently, they have also produced startlingly impressive for image generation through generative models. However, in both cases, such feed-forward networks largely operate as "black boxes." As a community, we are still not able to succinctly state why and how such feed-forward functions generate a particular output from a given input. If a network fails on a particular input, why? How will a network behave on never-before-seen data? To answer such questions, there is a renewed interest in so-called explainable AI. The central goal in this (re)invigorated space is the development of machine learning systems that are designed to be more interpretable and explanatory. Explanation-by-correspondence: One attractive approach to interpretability stems from casebased reasoning or "explanation-by-example" BID30. Such an approach dates back to classic AI systems that predict medical diagnoses or legal judgments that were justified through case studies or historical precedent BID0. For example, radiologists can justify diagnoses of an imaged tumor as'malignant' by reference to a previously-seen example BID11. However, this approach can generate only N explanations given N training exemplars. Our work demonstrates that deep networks can generate exponentially more explanations through composition: e.g., this part of the image looks like this part of exemplar A, while another part looks like that part of exemplar B. We term this "explanation-by-correspondence", since our explanations provide detailed correspondence of parts (or even pixels) of a query image to a set of exemplars. Spatial prediction: In this work, we focus on the class of CNNs designed to make predictions at each image pixel. Many problems in computer vision can be cast in this framework, e.g., semantic segmentation, depth estimation, image synthesis, and image translation. We explore a simple hypothesis for explaining the behavior of such networks: they operate by cutting-and-pasting image patches found in training data. Consider the top row of FIG0, where we visualize the output of BID27's translation network trained to synthesize images of building facades from label masks. Why does the network generate the strange diagonal gray edge at the top? To answer this question, we visualize image pixels extracted from the closest-matching nearest-neighbor (NN) patches found in the training data. Remarkably, NN-synthesis looks quite similar to the CNN output, providing a clear explanation for the synthesized corner artifact. 
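As a toy illustration of such nearest-neighbor synthesis, the sketch below predicts every output pixel by matching its surrounding input patch against a training database of patch/pixel pairs. It matches raw input patches with a brute-force squared-Euclidean distance purely for illustration; the approach developed in this paper instead matches learned pixel embeddings, and the patch size and array shapes here are assumptions.

```python
import numpy as np

def compositional_nn_synthesis(query, train_inputs, train_outputs, patch=5):
    """Toy cut-and-paste synthesis: predict each output pixel by matching its
    surrounding input patch to the training set and copying the corresponding
    training output pixel. Inputs are H x W x C float arrays; outputs may be
    H x W (labels) or H x W x 3 (colors) arrays of matching spatial size."""
    r = patch // 2
    H, W = query.shape[:2]
    # Build a flat database of (input patch, output pixel) pairs from training images.
    db_patches, db_pixels = [], []
    for x, y in zip(train_inputs, train_outputs):
        xp = np.pad(x, ((r, r), (r, r), (0, 0)), mode="edge")
        for i in range(x.shape[0]):
            for j in range(x.shape[1]):
                db_patches.append(xp[i:i + patch, j:j + patch].ravel())
                db_pixels.append(y[i, j])
    db_patches = np.stack(db_patches)
    out = np.zeros((H, W) + train_outputs[0].shape[2:], dtype=train_outputs[0].dtype)
    qp = np.pad(query, ((r, r), (r, r), (0, 0)), mode="edge")
    for i in range(H):                      # brute-force search; purely illustrative
        for j in range(W):
            q = qp[i:i + patch, j:j + patch].ravel()
            idx = np.argmin(((db_patches - q) ** 2).sum(axis=1))
            out[i, j] = db_pixels[idx]
    return out
```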
Figure 1: We propose a non-parametric method to explain and modify the behavior of convolutional networks, including those that classify pixels and those that generate images. For example, given the label mask on the top, why does a network generate strange gray border artifacts? Given the image on the bottom, how is a network able to segment out the barely-visible lamppost? We advocate an "explanation-by-example" approach to interpretation BID11. Next to the CNN output, we show the closest-matching training exemplar image. It appears to provide coarse explanations of such behaviors, though the quality of the output is still lacking (e.g., additional cars are hallucinated in the bottom row). On the right, we show the output obtained through a compositional nearest-neighbor operation that simply matches input patches to those in the training set and returns the corresponding output label. This means that the output is created by cutting-and-pasting (composing) patches of training images. To ensure that inconsistent patches are not composed together, one needs to match patches using an embedding that captures both global semantics (e.g., architectural styles) and local structure (e.g., windows versus doors). We demonstrate that local convolutional neighborhoods of feature activations produce such a rich embedding. Such a perspective allows one to explain errors and modify the biases of a network by changing the set of image patches used for non-parametric matching.

Compositional nearest-neighbors: Our central thesis is consistent with recent work on network memorization BID47, but notably, naive memorization fails to explain how and why networks generalize to never-before-seen data. We explain the latter through composition: the synthesized output in FIG0 consists of image patches copied from different training images. Given a database of N training images with K pixels each, global nearest-neighbors can produce N possible output images. On the other hand, compositional nearest-neighbors can produce (NK)^K outputs, an exponentially larger set. Each output image can be obtained by independently matching each of the K patches in the input query to one of NK patches in the training set. However, many of these outputs may be unrealistic. For example, one should not synthesize a facade by composing a door above a window. To ensure global consistency, one needs to match patches using a carefully-tuned metric that captures such global knowledge. But where do we obtain such a metric? BID16 show that the penultimate ("FC7") layer of image classification networks learns embeddings of images that produce remarkably accurate nearest neighbors. We apply this observation to spatial prediction networks in order to learn local embeddings of image patches or even pixels. These embeddings are quite rich in that they encode semantic knowledge that is both local (geometric structures centered at the pixel) and global (e.g., color and architectural style of the entire facade).

Correspondence and bias: Beyond being a mechanism for interpretation, we demonstrate that compositional NN matching is a viable algorithm that may approach the accuracy of highly-tuned CNNs. Although slower than a feed-forward net, compositional matching is attractive in two respects: (1) It provides spatial correspondences between pixels in the predicted output and pixels in the training set. Spatial correspondences may be useful in practical applications such as label transfer BID31. (2) Implicit biases of the network can be explicitly manipulated by changing the set of images used for matching - it need not be the same set used for training the network. As an illustrative example, we can force a pre-trained image generation network to predict European or American building facades by restricting the set of images used for matching. Such a manipulation may, for example, be used to modify a biased face recognition network to process genders and races in a more egalitarian fashion. We introduce a Compositional Nearest Neighbors pipeline for interpreting and modifying the behavior of Convolutional Neural Networks.
Specifically, we demonstrate that CNNs appear to work by memorizing image patches from training data, and then composing them into new configurations. To make compositional matching viable, CNNs learn a local embedding of image patches that captures both global and local semantics. An accurate local embedding is crucial in order to efficiently process an exponentially-large set of potential outputs. We validate our hypothesis on state-of-the-art networks for image translation and semantic image segmentation. Finally, we also show evidence that compositional matching can be used to predict activations of internal layers, generate spatial correspondences, and manipulate the implicitly-learned biases of a network. We broadly classify networks for spatial prediction into two categories: discriminative prediction, where one is seeking to infer high-level semantic information from RGB values; and image generation, where the intent is to synthesize a new image from a given input "prior". There is a broad literature for each of these tasks, and here we discuss the ones most relevant to ours. Discriminative models: An influential formulation for state-of-the-art spatial prediction tasks is that of fully convolutional networks BID33. These have been used for pixel prediction problems such as semantic segmentation BID33 BID24 BID36 BID4 BID12, depth/surface-normal estimation BID3 BID17, or low-level edge detection BID44 BID4. Substantial progress has been made to improve the performance by employing deeper architectures BID25, or increasing the capacity of the models BID4, or utilizing skip connections, or intermediate supervision BID44. However, we do not precisely know what these models are actually capturing to do pixel-level prediction. In the race for better performance, the interpretability of these models has been typically ignored. In this work, we focus on interpreting encoder-decoder architectures for spatial classification BID36 BID2.Image generation: BID23 proposed a two-player min-max formulation where a generator G synthesized an image from random noise z, and a discriminator (D) is used to distinguish the generated images from the real images. While this Generative Adversarial Network (GAN) formulation was originally proposed to synthesize an image from random noise vectors z, this formulation could also be used to synthesize new images from other priors, such as, a low resolution image or label mask by treating z as an explicit input to be conditioned upon. This conditional image synthesis via generative adversarial formulation has been well utilized by multiple follow-up works to synthesize a new image conditioned on a low-resolution image BID15, class labels BID35, and other inputs BID27 BID49. While the quality of synthesis from different inputs has rapidly improved in recent history, interpretation of GANs has been relatively unexplored. In this work, we examine the influential Pix2Pix network BID27 and demonstrate an intuitive non-parametric representation for explaining its impressive . Interpretability: There is a substantial body of work BID46 BID34 BID48 BID6 on interpreting general convolutional neural networks (CNNs). The earlier work of BID46 presented an approach to understand and visualize the functioning of intermediate layers of CNN. BID34 proposed to invert deep features to visualize what is learned by CNNs, similar to inverting HOG features to understand object detection BID42. 
BID48 demonstrated that object detectors automatically pop up while learning the representation for scene categories. BID28 explored interactive modification of a pre-trained network to learn novel concepts, and recently BID6 proposed to quantify interpretability by measuring scene semantics such as objects, parts, texture, material etc. Despite this, understanding the space of pixel-level CNNs is not well studied. The recent work of PixelNN BID5 focuses on highquality image synthesis by making use of a two-stage matching process that begins by feed-forward CNN processing and ends with a nonparametric matching of high-frequency detail. We differ in our focus on interpretability rather than image synthesis, our examination of networks for both discriminative classification and image synthesis, and our simpler single-stage matching process that does not require feed-forward processing. Compositionality: The design of part-based models BID14 BID18, pictorial structures or spring-like connections BID21, star-constellation models BID43 BID20, and the recent works using CNNs share a common theme of compositionality. While the earlier works explicitly enforce the idea of composing different parts for object recognition in the algorithmic formulation, there have been suggestions that CNNs also take a compositional approach BID46 BID28 BID6. We see compositional embeddings as rather different than compositional objects/parts. Using BID26 )'s terminology, embeddings can be viewed as "distributed representations", while objects/parts can be viewed as "sparse representations". Much past work has argued that distributed representations are central to the success of deep networks BID29. We agree and posit that this is one reason why CNNs outperform classic hierarchical models of parts/objects. Specifically, point out that classic part models can be implemented as CNNs with sparse activations, where individual neurons correspond to individual part responses. In practice, many neurons are not interpretable when examined individually, as pointed out by BID48. An embedding perspective offers one solution that does not require individual dimensions to be meaningful -e.g., nearest neighbors in an embedding will not change if one applies a well-behaved linear transformation (e.g., rotation) to the embedding space. This is consistent with past work BID39 that suggests that that linear combinations of activations are equally as informative as the original activations. Finally, if high-level activations represent objects, how can 4K activations (e.g., the typical dimension of FC7) represent 30K+ objects BID7? Our central thesis is that activations do not correspond to individual objects/parts, but rather the dimensions of a local embedding space in which objects/parts are points (matchable with nearest-neighbors). We now introduce our method to interpret various fully convolutional networks designed for pixellevel tasks. Global nearest-neighbors: Our starting point is the observation that classification networks can be interpreted as linear classifiers defined on nonlinear features extracted from the penultimate layer (e.g., "FC7" features) of the network. We formalize this perspective with the following notation: DISPLAYFORM0 where φ(x) ∈ R N corresponds to the penultimate FC7 features computed from input image x. 
Typically, the parameters of the linear classifier {w y} and those of the feature encoder φ(·) are trained on large-scale supervised datasets: Figure 2: Overview of pipeline: Given an input label or image (top-left of each box), our approach extracts an embedding for each pixel. We visualize two pixels with a yellow and white dot. The embedding captures both local and global context, which are crudely visualized with the surrounding rectangular box. We then find the closest matching patches in the training set (with a nearest neighbor search), and then report back the corresponding pixel labels to generate the final output (bottom-left of each box). We visualize an example for label-to-image synthesis on the left, and image-to-label prediction on the right. DISPLAYFORM1 a nonparametric nearest-neighbor (NN) predictor for complex tasks such as image captioning. We write this as follows: DISPLAYFORM2 Importantly, the above NN classifier performs quite well even when feature encoders φ(·) are trained for classification rather than as an explicit embedding. In some sense, deep nets seem to implicitly learn embeddings upon which simple linear classifiers (or regressors) operate. We argue that such a NN perspective is useful in interpreting the predicted classification since the corresponding training example can be seen as a visual "explanation" of the prediction -e.g., the predicted label for x is "dog" because x looks similar to training image x n *.Pixel nearest-neighbors: We now extend the above observation to pixel-prediction networks that return back a prediction for each pixel i in an image: DISPLAYFORM3 We write Label i (·) for the label of the i th pixel and φ i (·) for its corresponding feature vector. Because we will also examine pixel-level prediction networks trained to output a continuous value, we write out the following formulation for pixel-level regression: DISPLAYFORM4 For concreteness, consider a Pix2Pix BID27 ) network trained to regress RGB values at each pixel location. These predictions are obtained by convolving features from the penultimate layer with filters of size 4 × 4 × 128. In this case, the three filters that generate R,G, and B values can be written as a matrix W of size M × N, where M = 3 and N = 4 * 4 * 128 = 2048. Analogously, φ i (x) corresponds to N dimensional features extracted by reshaping local 4 × 4 convolutional neighborhoods of features from the penultimate feature map (of size H × W × 128). We now can perform nearest-neighbor regression to output pixel values: DISPLAYFORM5 where y n *,m * refers to the m th pixel from the n th training image. Importantly, pixel-level nearest neighbors reveals spatial correspondences for each output pixel. We demonstrate that these can be used to provide an intuitive explanation of pixel outputs, including an explanation of errors that otherwise seem quite mysterious (see FIG0). We explored different distance functions such as Euclidean and cosine distance. Similar to past work BID16, we found that cosine distance consistently performed slightly better, so we use that in all of our experiments. Comp NN Global NN Convolutional Neural NetworkComp NN Global NN Figure 3: We visualize a non-parametric approach to computing activations from internal layers. By matching to a training database of decoder 4 features from Pix2Pix, we can compute activations for the next layer (decoder 3) with nearest-neighbors. Each image represents a feature map of 3 continuous channels visualized in the R,G, and B planes. 
The collective set of 4 images displays 12 out of the 256 channels in decoder 3. Global nearest-neighbors (i.e., matching to the training image with the most similar decoder 4 layer and returning its associated decoder 3 layer) produces poor matches, but compositionally pasting together matches from different exemplars produces activations that are nearly identical to those computed by the underlying CNN.

We now extend our compositional nearest-neighbor formulation to internal convolutional layers. Recall that activations at a spatial position i and layer j can be computed using thresholded linear functions of features (activations) from the previous layer:

a_ij(x) = max(0, W φ_ij(x)),

where we write a_ij for the vector of activations corresponding to the i-th pixel position from layer j, possibly computed with bilinear interpolation BID33, and φ_ij for the local convolutional neighborhood of features (activations) from the previous layer that are linearly combined with a bank of linear filters W to produce a_ij. For concreteness, let the previous layer be decoder 4 from Pix2Pix, and the current layer be decoder 3. We can then write a_ij ∈ R^M where M = 256 and φ_ij ∈ R^N where N = 4 * 4 * 512 = 8192. We similarly posit that one can produce approximate activations by nearest neighbors. Specifically, let us run Pix2Pix on the set of training images, and construct a dataset of training patches with features φ_ij(x_n) as data and corresponding activation vectors a_ij(x_n) as labels. We can then predict activation maps for a query image x with NN:

a_ij(x) = a_{m*j}(x_{n*}), where (n*, m*) = argmin_{n,m} Dist(φ_ij(x), φ_mj(x_n)).

Notably, this is done without requiring explicit access to the filters W. Rather, such responses are implicitly encoded in the training dataset of patches and activation labels. We show that such an approach actually produces reasonable activations (Fig. 3).

The previous paragraph demonstrated that composing image patches using embeddings from interior layers could "explain" the behavior of the subsequent layer. We now ask a more ambitious question - could such a procedure "explain" the behavior of all subsequent layers? That is, could it produce output predictions that mimic the behavior of the entire network? To do so, we regress the final-layer pixel value from each stored patch:

Pixel_i(x) = y_{n*, m*}, where (n*, m*) = argmin_{n,m} Dist(φ_ij(x), φ_mj(x_n)).

As written, the above procedure is inefficient because features are interpolated (to pixel resolution) before they are matched to the patch database. Instead, it is natural to match features at their native resolution, and then interpolate the matches. Concretely, given a predicted activation map a_ij(x), we define S_j(i) as the "output-affecting" field of the neuron at spatial position i in layer j, i.e., the set of output pixels which that position affects; to avoid overlap between different positions, we restrict S_j(i) to a central subset of those pixels. For example, feature positions i in decoder 4, which has a shape of 32 × 32 × 256, can be rewritten as (x, y) with x = i/32 and y = i%32, and each such position corresponds to an 8 × 8 patch of the final 256 × 256 × 3 output, so that S_j(x, y) = Pixel[8x...8(x+1), 8y...8(y+1), 0...3]. One can therefore generate an output by constructing a dataset of training patches with features φ_ij(x_n) as data and the corresponding output regions y_{n, S_j(i)} as labels, and applying the compositional nearest-neighbor formulation at interior layer j to produce the output of the last layer:

Pixel_{S_j(i)}(x) = y_{n*, S_j(m*)}, where (n*, m*) = argmin_{n,m} Dist(φ_ij(x), φ_mj(x_n)),

and y_{n*, S_j(m*)} refers to the pixels of the set S_j(m*) from the n*-th training image. This results in significant speed-ups.

Coarse-to-fine nearest-neighbor search: An important special case is the bottleneck feature, which is computed from an activation map of size 1 × 1 × 512. In this case, we posit that the corresponding feature φ_ij(x) ∈ R^512 is a good global descriptor of image x. Matching to bottleneck features in the training set is quite fast because the bottleneck acts as a compact global descriptor for matching entire images. The downside is that the matches are not compositional. By matching to convolutional embeddings extracted from later layers, one can compute progressively more compositional matches, that are initially global, then patch-based, and finally pixel-based (see FIG2). In our experiments, we found that such embeddings could be used to prune the NN pixel search, significantly speeding up run-time performance (e.g., we first prune the training database to a shortlist of images with similar bottleneck features, then search through these images for similar patches, and then search through those patches for similar pixels).

Bias modification: Finally, our results suggest that the matching database defined above serves as an explicit "associative memory" of a network BID10. We can explicitly modify the memory by changing the dataset of training images {x_n}, labels {y_n}, or both. We experiment with various modifications in our experimental results. One modification that consistently produced smoother visual results was to refine the training labels to those predicted by the network itself:

y_n ← f(x_n), where f(·) denotes the trained feed-forward network.

Such a procedure for "self-supervised" learning is sometimes used when training labels are known to be noisy. From an associative-memory perspective, we posit that such labels capture a more faithful representation of a network's internal memory. Unless otherwise specified, all results make use of the above matching database. We visualize the impact of self-supervised labels in Fig. 5.

Figure 4: Adding composition by matching to later layers: We apply compositional nearest-neighbor matching to features extracted from different layers, starting with the bottleneck layer and progressing to the penultimate deconv2 layer. We match local neighborhoods of convolutional embeddings, which naturally allows for more composition as we use later layers.

Figure 5: Original labels vs. self-supervised labels: Given the label input on the left, we show the results of Pix2Pix in the Convolutional Neural Networks column, and the results of non-parametric matching to the training set using the original labels and the predicted "self-supervised" labels of the Pix2Pix network. Generating images with the predicted labels looks smoother, though the qualitative behavior of the network is still explained by the original training labels. We quantify this in our experimental results, and include additional qualitative visualizations of the original and self-supervised labels in Figs. 13 and 14.

Figure 6: Reconstruction: We can use our nonparametric matching framework to generate reconstructions by replacing the exemplar target label y_n with the exemplar input image x_n. This can be done for both image generation and discrete label prediction. We find that, perhaps surprisingly, pixel embeddings contain enough local information to reconstruct the input pixel. We show additional results in FIG0.

We found that replacing the label y_n with the input image x_n is a helpful diagnostic for visualization. This illustrates the ability of the learned embedding and compositional matching framework to reconstruct the input query. The reconstructed input for a global NN match is simply the best-matching exemplar input image (see Fig. 6). We find that, perhaps surprisingly, pixel embeddings contain enough local information to reconstruct the input pixel. We show additional results in our experimental section.

We now discuss information-theoretic properties of the introduced nearest-neighbor embedding φ_i(·) presented in Sec. 3. Specifically, we show that this embedding produces sufficient statistics BID40 BID38 BID1 for various pixel-level tasks. We begin with the simpler case of predicting a global class label y from an image x. If we assume that the input image x, the global feature embedding φ(x), and the output label y form a Markov chain x → φ(x) → y, we can write the following:

p(y | x) = p(y | φ(x)). [Sufficiency]

As BID1 show, standard loss functions in deep learning (such as cross-entropy) search for an embedding that minimizes the entropy of the label y given the representation φ(x). This observation explicitly shows that the embedding φ(x) is trained to serve as a sufficient representation of the data x that is rich enough in information to predict the label y. In particular, if the learned embedding satisfies the above Markov assumption, the prediction y will not improve even when given access to the raw data x.
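The argument above can be restated compactly (under the stated Markov assumption, with q_θ denoting the model's predictive distribution, and the cross-entropy objective upper-bounding the conditional entropy it drives down):

```latex
% Markov assumption: the label depends on the image only through the embedding.
\[
  x \rightarrow \phi(x) \rightarrow y
  \;\Longrightarrow\;
  p(y \mid x) = p\big(y \mid \phi(x)\big),
  \qquad
  \mathbb{E}\big[-\log q_\theta\big(y \mid \phi_\theta(x)\big)\big]
  \;\ge\;
  H\big(y \mid \phi_\theta(x)\big).
\]
```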
Pixel-wise embeddings: Spatial networks predict a set of pixel-wise labels {y i} given an input image. If we assume that the pixel-wise labels are conditionally independent given {φ i (x)} and x, then we can write the joint posterior distribution over labels with the following product: DISPLAYFORM1 where φ i (x) are the sufficient statistics needed to predict label y i, and φ(x) = {φ i (x)} is the aggregate set of these sufficient statistics. One can similarly show that pixel-wise cross-entropy losses jointly minimize the entropy of the labels y i given φ i (x). This suggests that pixel-wise features φ i (x) do serve as a remarkably rich characterization of the image. This characterization includes both global properties (e.g., the color of a building facade being synthesized) as well as local properties (e.g., the presence of a particular window ledge being synthesized). Importantly, this requires the conditional independence assumption from Eq. to be conditioned on the entire image x rather than just the local pixel value x i (from which it would be hard to extract global properties).An interesting observation from the factorization shown in Eq. FORMULA1 is that we can synthesize an image by predicting pixel values independently. Thus, this theoretical observation suggests that a simple nearest-neighbor regression for every output pixel can synthesize plausible images. The above pixel-wise formulation also suggests that internal feature layers serve as sufficient representations to predict activations for subsequent layers. Interestingly, skip connections that directly connect lower layers to higher layers break the Markov independence assumption. In other words, skip connections suggest that higher layer features often do not serve as sufficient representations, in that subsequent predictions improve when given access to earlier layers. However, Eq. technically still holds so long as we write φ i (x) for the concatenated representation including lower-level features. For example, in Pix2Pix, we write φ i (x) ∈ R N where N = 4 * 4 * (64 + 64) = 2048, where the second set of 64 channel features are copied from the first encoder layer. In the next Section, we show qualitative and quantitative experiments supporting this analysis. Prior work on embeddings: Now that our framework has been described both algorithmically and theoretically, we compare it to a large body of related work. The idea that intermediate CNN layers learn embeddings is not new. This dates back at least to BID11, and was popularized in recent history with BID37 BID16. Indeed, much contemporary work makes use of "off-the-shelf" CNN layers as features, where earlier layers tend to encode more generic feature representations BID45. However, such representations are typically global and refer to the entire image. Alternatively, one can extract local pixel-based feature representations, but these are typically defined by 1x1 slices of a convolutional feature map BID24 BID32. Our theoretical analysis, while quite straightforward, shows that the optimal local representation (in terms of sufficiency) is given by a convolutional neighborhood of overlapping activations. Finally, we show that compositional matching with such local embeddings significantly outperforms global matching (see Fig. 3), and rivals the accuracy of feedforward CNN predictions (Table 1). We now present experimental for discriminative networks trained for semantic segmentation, as well as generative networks trained for image synthesis. 
The goal of the experiments is to show that images generated with a simple NN regression are a good way to interpret the internal operations of a convolutional neural network. Networks: We use SegNet BID2, a recent state-of-the-art network for image segmentation, and Pix2Pix BID27, a state-of-the-art network for conditional image synthesis and translation. We evaluate our findings for multiple datasets and tasks on which the original networks were trained. These include tasks such as synthesizing facades from architectural labels and vice versa, predicting segmentation class labels from urban RGB images, and synthesizing Google maps from aerial or satellite views. Semantic segmentation: We use the CityScape BID13 and CamVid (; BID9 datasets for the task of semantic segmentation. Both datasets are annotated with semantic class labels for outdoor images collected from a car driving on the road. We use SegNet BID2 for CamVid sequences, and Pix2Pix BID27 for the CityScape dataset. FIG3 shows qualitatively for these datasets. We can observe in FIG3 that the compositional NN produces a nearly-identical (CompNN column) to the one produced by the network (Pix2Pix and SegNet columns). This suggests that our method enables a good interpretation of discriminative deep networks on pixel-level classification. The of semantic segmentation on cityscape and CamVid dataset. This suggests the following observations. First, the difference between images generated by generative networks (Pix2Pix and SegNet columns) and NN embedding (CompNN column) is surprisingly small. Thus, our method can perfectly interpret discriminative deep networks on pixel-level classification. Second, we can notice some noise edges with a high gradient (see columns 5-8). This phenomenon can also be used to understand the difficulty of image segmentation task: borders with high gradients are usually hard to classify due to the ambiguous patches in training set. The suggest that Comp NN (our approach) can also explain the from Pix2Pix. We conclude this because our approach reproduces color and the structure of the Pix2Pix output, including a few artifacts (e.g., the image cuts in the 6th and 7th columns).Architectural labels-to-facades: We followed BID27 for this setting and used the annotations from (Tyleček &Šára, 2013). There are 400 training images in this dataset, and 100 images in the validation set. We use the same dataset to generate architectural labels from images of facades. Pix2Pix BID27 models are trained using 400 images from the training set for both labels-to-facades and vice versa. FIG4 shows qualitative examples of synthesizing real-world images using pixel-wise nearest neighbor embedding in its first and second rows. We observe that the NN-embedding perfectly explains the generation of deformed edges (see CompNN column), and how the generative architecture is eventually memorizing patches from the training set. Satellite-to-maps: This dataset contains 1096 training images and 1000 testing images scraped from Google Maps. We use the same settings used by BID27 for this task. Bias modification: Given the same label input, we show different obtained by matching to different databases (using an embedding learned by Pix2Pix). 
By modifying the database to include specific buildings from specific locations, one can introduce and remove implicit biases in the original network (e.g.,one can generate "European" facades versus "American" facades).can observe that our synthesis (CompNN column) is nearly identical to the image generated by the network (Pix2Pix columns).Bias modification: Figure 9 shows that the output of nonparametric matching (3rd -6th columns from left to right) can be controlled through explicit modification of the matching database. In simple words, this experiment shows that we can control properties of the synthesized image by simply specifying the exemplars in the database. We can observe in Figure 9 that the synthesized images preserve the structure (e.g., windows, doors, and roof), since it is the conditional input, but the textural components (e.g., color) change given different databases. Reconstruction: This experiment shows that the learned embeddings of a CNN also enables the reconstruction of the input images in a compositional fashion (as discussed in Sec. 3). The of this experiment are shown in FIG0. The Figure has the following organization. In the middle columns (enclosed in a box), the Figure shows the input and output images. The first two columns (from left-to-right) show the reconstructions of the input images using a global nearest-neighbors and the proposed compositional nearest-neighbors approach. The last two columns show the reconstruction of the output images, also using a global nearest-neighbors and the proposed compositional nearest-neighbors approach. We can observe that the reconstructions of the input images using the global nearest-neighbors approach overall resembles the structure of the scene. However, the reconstructions of the input images using the proposed compositional nearest-neighbors reproduce the input scene with a remarkable accuracy. These suggest that the learned embedding is rich in global and local information to either reconstruct both the input and output images. We can conclude then that CNNs understand an input image by finding the patches from the training images that enable the composition of an image reproducing the input. To the best of our knowledge, this is the first approach that reconstructs an input image using training instances using a learned pixel-embedding. Correspondence map: FIG0 shows a correspondence map that explicitly illustrates how an output image is synthesized by cutting-and-pasting patches from training images. We can observe that patches are selected from many different styles of facades, but in such a manner that ensures that the composed output is globally consistent while maintaining the appropriate local structures (such as windows, doors, awnings, etc.). This implies the learned patch embedding captures both global semantics and local structure. We present the quantitative analysis of our pixel-wise nearest neighbor approach with an end-to-end pipeline in Table 1. We report classification accuracy of ground truth labels and mean intersection-over-union (IoU) compared to the predicted labels for the task of semantic segmentation. We can observe in Table 1 that compositional matching approaches the accuracy of the baseline CNN, and dramatically outperforms global matching (sometimes by a factor of 2X). Finally, self-supervised labels (SS) overall perform similarly to the original labels (O), but almost consistently help for compositional matching and consistently hurt for global matching. 
We posit that this is due to the fact that self-supervised labels tend to be overly-smoothed, and so act as a form of spatial regularization that helps compositional matching. FIG0: Reconstruction: Given the correspondences of NN features from penultimate the layer, we reconstruct the test input images (third column from left-to-right) by using the compositional nearest-neighbor approach: copying and pasting corresponding image patches from the input images of the training set. The reconstructions using a compositional nearest-neighbor approach is shown in the second column, while the reconstructions using a global nearest-neighbor approach is shown in the first column. The learned embedding thus enables not only the reconstruction of the input image, but also of the output image (see the last two columns). These suggest that the embedding possess not only the information relevant to a specific task, but also semantic information from the original image. We can conclude then that CNNs understand an input image by finding the patches from the training images that enable the composition of an image reproducing the input. Implementation Details: We used U-net as the generator for Pix2Pix, and used publicly available Tensorflow code for Pix2Pix 2 and SegNet 3. For a slightly faster computation, we used the Eigen Library to implement the cosine distance. For Cityscape dataset, we shortlist 100 global neighborhoods using global bottleneck features for compositional NN searching. This leads to a 30 times speedup. For CamVid dataset, we shortlist 10 global neighborhoods. We can observe in previous that the quality of generated images is hardly affected. We used 40-threads for these experiments. The average compute time per image is 22 minutes and 13 minutes for Cityscape and CamVid dataset respectively. In this paper, we have presented a simple approach based on pixel-wise nearest neighbors to understand and interpret the functioning of convolutional neural networks for spatial prediction tasks. Our analysis suggests that CNNs behave as compositional nearest neighbor operators over a training set of patch-label pairs that act as an associative memory. But beyond simply memorizing, CNNs can generalize to novel data by composing together local patches from different training instances. Also, we argued that networks for pixel-level tasks learn sufficient statistics that enable the gener- Table 1: We compare compositional nearest neighbors (CompNN) to the baseline CNN and different global nearest neighbor approaches, obtained by matching feature maps from different layers (Global-Bottleneck and Global-Decode2). We report mean pixel accuracy and intersection-overunion, where predicted segmentation labels are compared to ground-truth labels. We specifically use the embedding learned by BID27 for Facades-to-Labels (Facades) and CityScape, and embedding learned by BID2 for CamVid. On average, CompNN performs 5% worse than the baseline CNN, though in some cases (CityScapes) it performs equally. However, compositional matching dramatically outperforms global matching, sometimes by a factor of 2X (Facade and CityScape IoU). In terms of global matching, the last feature layer (Decode2) strictly outperforms the intermediate Bottleneck layer, but is significantly larger (128 3 versus 512 dimensions). Finally, self-supervised labels (SS) overall perform similarly to the original labels (O), but almost consistently help for compositional matching and consistently hurt for global matching. 
We posit that this is due to the fact that self-supervised labels tend to be overly-smoothed, and so act as a form of spatial regularization for compositional matching. ation of pixel predictions. Our analysis and experiments not only support this argument, but also enables example-based explanations of network behavior and explicit modulation of the implicit biases learned by the network. We hope that our framework enables further analysis of convolutional networks from a non-parametric perspective. FIG0: Global NN v.s. Comp NN. We show synthesized images using our CompNN methods and four global NN approaches (global nearest neighbor on bottleneck feature embedding and Decode2 feature embedding using self-supervised labels and original labels respectively). We can observe that compositional nearest neighbor outperforms other global nearest neighbor approaches, using Decode2 features (the penultimate layer) sometimes can generate more similar structures (See row 1,4). FIG0 shows the synthesized images using several global NN approaches and a CompNN approach. We can observe that the of global NN approaches overall resembles global properties of the output of the Convolutional Neural Network (CNN) and of the CompNN approach. For instance, in the top two rows, the output of the global NN resembles the color of the facade and structural properties of the buildings. Also, in the bottom two rows, we can observe that the global NN overall captures the organization of the scene because many labels in the global NN overlap considerably with the output of the CNN and the ground truth. In this section, we show more of our proposed Compositional Nearest Neighbors. Both self-supervised labels and original labels are evaluated in this section. We can observe in FIG0 that the of the Compositional Nearest Neighbors (CompNN) approach are quite similar to those of the Convolutional Neural Network. We can also observe that the CompNN produces smoother when it uses self-supervised labels than when it uses the original ones. Moreover, the self-supervised CompNN method produces that are more alike to those of the Convolutional Neural Network. Original Labels Self-Supervised Labels Figure 13: Compositional Nearest Neighbors (CompNN) segmentation using self-supervised and original labels. Overall, CompNN produces similar compared with those of the Convolutional Neural Network. In particular, CompNN produces smoother when it uses selfsupervised labels than when it uses the original labels. Fig. 14 shows additional image syntheses using a CNN and CompNN with original and self-supervised labels. As discussed earlier, the CompNN with self-supervised labels produces a smoother image than when it uses the original labels. B APPENDIX: EXPERIMENTAL DETAILS B.1 COMPUTATIONAL COMPLEXITY Although Compositional Nearest Neighbors provide insights into the internal operations of a CNN, its computational complexity is very high. In this section, we show some experimental details to speed up the CompNN process. Assume a dataset with N images, each with H × W pixels, and M filters from the last layer of a CNN. Then, the computational complexity for synthesizing one image using CompNN is DISPLAYFORM0 Original Labels Self-Supervised Labels Figure 14: Synthesized images for pixel-wise prediction tasks with a Convolutional Neural Network, and Compositional Nearest Neighbors using self-supervised and original labels. We now introduce several approaches to speed up searching process. 
Although Numpy from Python calculates the distance between two features quickly, the iterations for synthesizing a pixel are slow. To alleviate this, we implemented the CompNN using C++. When a dataset has a large number of training instances, we used bottleneck features to narrow the training set. Especially in the segmentation problem, we can generate OK with only 5-10 training reference. Our implementation uses several threads in order to speedup the process. Specifically, each thread is in charge of synthesizing a disjoint set of pixels. The synthesis of facades using the Facades dataset (400 training samples) takes about 2 hours with 20-30 threads on the CPU. This can be used as a reference for experiments on other datasets.
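The shortlisting trick described above can be sketched as a two-stage procedure: first restrict the database to the training images whose global bottleneck feature is closest to the test image's, then run per-pixel matching only over pixels of the shortlisted images. The sketch below assumes per-image pixel embeddings and bottleneck features are precomputed, and reuses compositional_nn_predict from the earlier sketch; all names are illustrative rather than taken from the authors' code.

```python
import numpy as np

def shortlist_then_match(test_feats, test_bottleneck,
                         train_image_feats, train_image_labels,
                         train_bottlenecks, k_global=100):
    """Two-stage CompNN: (1) shortlist k_global training images whose global
    bottleneck feature is most similar to the test image's, (2) run per-pixel
    compositional matching only against pixels of the shortlisted images."""
    # Stage 1: global shortlist by cosine similarity of bottleneck features.
    q = test_bottleneck / (np.linalg.norm(test_bottleneck) + 1e-8)
    t = train_bottlenecks / (np.linalg.norm(train_bottlenecks, axis=1, keepdims=True) + 1e-8)
    shortlist = (t @ q).argsort()[-k_global:]

    # Stage 2: pool pixel embeddings and labels from the shortlisted images only.
    db_feats = np.concatenate(
        [train_image_feats[i].reshape(-1, train_image_feats[i].shape[-1]) for i in shortlist])
    db_labels = np.concatenate(
        [train_image_labels[i].reshape(-1, train_image_labels[i].shape[-1]) for i in shortlist])

    # Per-pixel matching as in the earlier compositional_nn_predict sketch.
    return compositional_nn_predict(test_feats, db_feats, db_labels)
```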
H1TWfmnNf
Convolutional Neural Networks behave as Compositional Nearest Neighbors!
Consider a world in which events occur that involve various entities. Learning how to predict future events from patterns of past events becomes more difficult as we consider more types of events. Many of the patterns detected in the dataset by an ordinary LSTM will be spurious since the number of potential pairwise correlations, for example, grows quadratically with the number of events. We propose a type of factorial LSTM architecture where different blocks of LSTM cells are responsible for capturing different aspects of the world state. We use Datalog rules to specify how to derive the LSTM structure from a database of facts about the entities in the world. This is analogous to how a probabilistic relational model specifies a recipe for deriving a graphical model structure from a database. In both cases, the goal is to obtain useful inductive biases by encoding informed independence assumptions into the model. We specifically consider the neural Hawkes process, which uses an LSTM to modulate the rate of instantaneous events in continuous time. In both synthetic and real-world domains, we show that we obtain better generalization by using appropriate factorial designs specified by simple Datalog programs. Temporal sequence data is abundant in applied machine learning. A common task is to impute missing events, e.g., to predict the future from the past. Often this is done by fitting a generative probability model. For evenly spaced sequences, historically popular models have included hidden Markov models and discrete-time linear dynamical systems, with more recent interest in recurrent neural network models such as LSTMs. For irregularly spaced sequences, a good starting point is the Hawkes process, a self-exciting temporal point process; many variations and enhancements have been published, including neural variants using LSTMs. All of these models can be described schematically by Figure 1a. Events e i, e i+1,... are assumed to be conditionally independent of previous events, given the system state s i (which may or may not be fully known given events e 1, . . ., e i). That is, s i is enough to determine the joint distribution of the i th event and the updated state s i+1, which is needed to recursively predict all subsequent events. Figure 1a and its caption show the three types of influence in the model. The update, affect, and depend arrows are characterized by parameters of the model. In the case of a recurrent neural network, these are the transition, input, and output matrices. Our main idea in this paper is to inject structural zeros into these weight matrices. Structural zeros are weights that are fixed at zero regardless of the model parameters. In other words, we will remove many connections (synapses) from both the recurrent and non-recurrent portions of the neural network. Parameter estimation must use the sparse remaining connections to explain the observed data. Specifically, we partition the neural state s i ∈ R d into a number of node blocks. Different node blocks are intended to capture different aspects of the world's state at step i. By zeroing out rectangular blocks of the weight matrix, we will restrict how these node blocks interact with the events and with one another. An example is depicted in Figures 1b (affect, depend) and 1d (update). In addition, by reusing nonzero blocks within a weight matrix, we can stipulate (for example) that event e affects node block b in the same way in which event e affects node block b. 
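As a concrete (hypothetical) illustration of structural zeros, one can implement the block-sparsity pattern as a fixed binary mask multiplied into a trainable weight matrix, so that gradient updates can never create interactions outside the allowed (target block, source block) pairs. The class name and block sizes below are illustrative, not part of the paper's released code.

```python
import torch
import torch.nn as nn

class BlockMaskedLinear(nn.Module):
    """Linear map whose weight matrix is forced to zero outside an
    allowed set of (target block, source block) pairs (structural zeros)."""
    def __init__(self, block_sizes, allowed_pairs):
        super().__init__()
        d = sum(block_sizes)
        self.weight = nn.Parameter(torch.randn(d, d) * 0.01)
        # Offsets of each node block inside the full state vector.
        offsets = [0]
        for s in block_sizes:
            offsets.append(offsets[-1] + s)
        mask = torch.zeros(d, d)
        for tgt, src in allowed_pairs:
            mask[offsets[tgt]:offsets[tgt + 1], offsets[src]:offsets[src + 1]] = 1.0
        self.register_buffer("mask", mask)  # fixed: not updated by training

    def forward(self, h):
        return h @ (self.weight * self.mask).t()

# Example: 3 node blocks of size 8; each block updates only itself (block-diagonal),
# except that block 0 is additionally allowed to influence block 2.
layer = BlockMaskedLinear([8, 8, 8], allowed_pairs=[(0, 0), (1, 1), (2, 2), (2, 0)])
```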
Such parameter tying makes it possible to generalize from frequent events to rare events of the same type. Although our present experiments are small, we are motivated by the challenges of scale. Real-world domains may have millions of event types, including many rare types. To model organizational behavior, we might consider a dataset of meetings and emails in a large organization. To model supply chains, we might consider purchases of goods and services around the world. In an unrestricted model, anything in the past could potentially influence anything in the future, making estimation extremely difficult. Structural zeroes and parameter tying, if chosen carefully, should help us avoid overfitting to coincidental patterns in the data. Analogous architectures have been proposed in the world of graphical models and causal models. Indeed, to write down such a model is to explicitly allow specific direct interactions and forbid the rest. For example, the edges of a Gaussian graphical model explicitly indicate which blocks of the inverse covariance matrix are allowed to be nonzero. Some such models reuse blocks . As another example, a factorial HMM -an HMM whose states are m-tuples-can be regarded as a simple example of our architecture. The state s i can be represented using m node blocks, each of which is a 1-hot vector that encodes the value of a different tuple element. The key aspect of a factorial HMM is that the stochastic transition matrix (update in Figure 1d) is fully block-diagonal. The affect matrix is 0, since the HMM graphical model does not feed the output back into the next state; the depend matrix is unrestricted. But how do we know which interactions to allow and which to forbid? This is a domain-specific modeling question. In general, we would like to exploit the observation that events are structured objects with participants (which is why the number of possible event types is often large). For example, a travel event involves both a person and a place. We might assume that the probability that Alice travels to Chicago depends only on Alice's state, the states of Alice's family members, and even the state of affairs in Chicago. Given that modeling assumption, parameter estimation cannot try to derive this probability (presumably incorrectly) from the state of the coal market. These kinds of systematic dependencies can be elegantly written down using Datalog rules, as we will show. Datalog rules can refer to database facts, such as the fact that Alice is a person and that she is related to other people. Given these facts, we use Datalog rules to automatically generate the set of possible events and node blocks, and the ways in which they influence one another. Datalog makes it easy to give structured names to the events and node blocks. The rules can inspect these structures via pattern-matching. In short, our contribution is to show how to use a Datalog program to systematically derive a constrained neural architecture from a database. Datalog is a blend of logic and databases, both of which have previously been used in various formalisms for deriving a graphical model architecture from a database . Our methods could be applied to RNN sequence models. In this setting, each possible event type would derive its unnormalized probability from selected node blocks of state s i. Normalizing these probabilities to sum to 1 would yield the model's distribution for event e i. Only the normalizing constant would depend on all node blocks. 
In this paper, we focus on the even more natural setting of real-time events. Here no normalizing constant is needed: the events are not in competition. As we will see in section 5.1, it is now even possible for different node blocks to generate completely independent sequences of timestamped events. The observed dataset is formed by taking the union of these sequences. In the real-time setting, event e i has the form k i @t i where k i ∈ K is the type of the event and t i ∈ R is its time. The probability of an event of type k at any specific instant t is infinitesimal. We will model how this infinitesimal probability depends on selected node blocks of s i. There is no danger that two events will ever occur at the same instant, i.e., the probability of this is 0. We begin by describing our baseline model for this setting, drawn from. In general, a multivariate point process is a distribution over possible sequences of events e 1 = k 1 @t 1, e 2 = k 2 @t 2,... where 0 < t 1 < t 2 <.... A common paradigm for defining such processes, starting with , is to describe their temporal evolution as in Figure 1a. Each s i is deterministically computed from s i−1 (update) and e i−1 (affect), according to some formula, so by induction, s i is a deterministic summary of the first i − 1 events. e i = k i @t i is then emitted stochastically from some distribution parameterized by s i (depend). The structure of the depend distribution is the interesting part. s i is used, for each event type k ∈ K, to define some time-varying intensity function λ k: (t i−1, ∞) → R ≥0. This intensity function is treated as the parameter of an inhomogeneous Poisson process, which stochastically generates a set of future events of type k at various times in (t i−1, ∞). 2 Thus, all these |K| Poisson processes together give us many events of the form e = k@t. The first such event-the one with the earliest time t-is taken to be the next event e i. The remaining events are discarded (or in practice, never generated). As our baseline method, we take the neural Hawkes process to be our method for computing s i and defining the intensity function λ k from it. In that work, s i actually describes a parametric function of the form h: (t i−1, ∞) → R d, which describes how the hidden state of the system evolves following event e i−1. That function is used to define the intensity functions via 2 Under an inhomogenous Poisson process, disjoint intervals generate events independently, and the number of events on the interval (a, b] is Poisson-distributed with mean b a λ k (t) dt. Thus, on a sufficiently narrow interval (t, t + dt], the probability of a single event is approximately λ k (t) dt and the probability of more than one event is approximately 0, with an error of O(dt 2) in both cases. so the parameters of depend are the vectors v k and the monotonic functions f k. Once e i = k i @t i has been sampled, the parameters for s i+1 are obtained by where Ψ is inspired by the structure of an LSTM, the affect parameters are given by matrix U and the event embeddings w k, and the depend parameters are given by matrix V. In this paper, we will show an advantage to introducing structural zeroes into v k, U, and V. In real world, atomic events typically involve a predicate and a few arguments (called entities in the following), in which case it makes sense to decompose an event type into a structured form 3 such as email(alice,bob), travel(bob,chicago), etc. For generality, we also allow entities to have structured forms when necessary. 
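A minimal sketch of the per-type intensity computation follows, assuming the scaled softplus transfer function f_k commonly used with the neural Hawkes process (the exact choice of f_k is an assumption here, since the displayed equation did not survive extraction); V stacks the vectors v_k and s holds per-type scale parameters.

```python
import torch
import torch.nn.functional as F

def intensities(h_t, V, s):
    """Per-type intensities lambda_k(t) = f_k(v_k . h(t)), where f_k is a
    scaled softplus keeping every intensity strictly positive.

    h_t: (D,)   hidden state h(t) at the query time t
    V:   (K, D) one vector v_k per event type
    s:   (K,)   positive softness/scale parameters, one per type
    """
    raw = V @ h_t                      # (K,) unconstrained scores
    return s * F.softplus(raw / s)     # (K,) positive intensities
```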
Then naturally, in such a world with many entities, we would like to partition the state vector h(t) into a set of node blocks {h b (t)} b∈B and associate node blocks with entities. For example, we may associate h mind(alice) (t) to alice and h mind(bob) (t) to bob. Note that mind(alice) is just an example of the kind of node blocks that can be associated with alice. There can be another node block associated with the physical condition of alice and be called body(alice). Of course when there is only one node block associated with alice, we can also simply call it alice. From now on, we use teal-colored typewriter font for events and orange-colored font for node blocks. From Figure 1b, we already see that an event may only depend on and affect a subset of hidden nodes in h(t), and this further prompts us to figure out a way to describe our inductive biases on which node blocks are to determine the intensity of a given event as well as which node blocks are to be updated by one. We propose a general interface based on Datalog-a declarative logic programming language-to assert our inductive biases into a deductive database as facts and rules. Then as each event happens, we can query the database to figure out which node blocks determine its intensity and which node blocks will be updated by it. In this section, we walk through our Datalog interface by introducing its keywords one step a time. We write keywords in boldfaced typewriter font, and color-code them for both presentation and reading convenience. The colors we use are consistent with the examples in Figure 1. We first need to specify what is a legal node block in our system by using the keyword is block: where b can be replaced with a node block name like alice, bob, chicago and etc. Such a Datalog statement is a database fact. Then we use the keyword is event to specify what is a legal event type in our system: where k can be replaced with email(alice,bob), email(bob,alice), travel(bob,chicago) and etc. As we may have noticed, there may be many variants of email(S,R) where the variables S and R can take values as alice, bob and etc. To avoid writing a separate fact for each pair of S and R, we may summarize facts of the same pattern as a rule: head of rule body of rule (5a) 3 Similar structured representation of events has been common in natural language semantics and philosophy . where:-is used to separate the head and body. Capitalized identifiers such as S and R denote variables. A rule mean: for any value of the variables, the head is known to be true if the body is known to be true. A fact such as is event(email(alice,bob)) is simply a rule with no body (so the :-is omitted), meaning that the body is vacuously true. To figure out what event types are legal in our system, we can query the database by: is event(K)? which returns every event type k that instantiates is event(k). Note that, unlike a fact or rule that ends with a period , a query ends with a question mark . We can declare database rules and facts about which events depend on which node blocks using the depend keyword as: where k and b are replaced with Datalog variables or values for event and node block respectively, and condition 1,...,condition N stands for the body of rule. An example is as follows: depend(travel(bob,chicago), X):-resort(X),at(X,chicago). By querying the database for a given k using depend(k,B)? we get B d k that is the set of all the node blocks b that instantiates depend(k,b) and has superscript d for depend. 
Then we have: where σ(·) is the sigmoid function, r ranges over all the rules and r depend(k, b) means "the rule r proves the fact depend(k, b)". The matrices A r ∈ R The aggregator ⊕ represents pooling operation on a set of non-negative vectors. We choose ⊕ = and ⊕ = max because it is appropriate to sum the dependencies over all the rules but extract the "max-dependency" among all the node blocks for each rule. As shown in equation, the intensity of travel(bob,chicago) is determined by both resorts and his friends at chicago so these two possible motivations should be summed up. But bob may only stay at one of his friends' home and can only afford going to a few places, so only the "best friend" and "signature resort" matter and that is why we use max-pooling for ⊕. As a matter of implementation, we modify each depend rule to have the rule index r as a third argument: This makes it possible to apply semantics-preserving transformations to the ing Datalog program without inadvertently changing the neural architecture. Moreover, if the Datalog programmer specifies the third argument r explicitly, then we do not modify that rule. As a , it is possible for multiple rules to share the same r, meaning that they share parameters. We can declare database rules and facts about which events affect which node blocks using the affect keyword as: such that we know which node blocks to update as each event happens. For example, we can allow travel(bob,chicago) to update h X (t) for any X who is a friend of bob and at chicago: affect(travel(bob,chicago), X)):-friend(bob,X), at(X,chicago). By querying the database for a given k using we get B a k that is the set of all the node blocks b that instantiates affect(k,b) where the superscript a stands for affect. Then each node block h b (t) updates itself as shown in equation Similar to how A r and B r in equation are declared, a U r is implicitly declared by each affect rule such that we have: where ⊕ =. This term is analogous to the Uw k term in section 2.1 Note that we can also modify each affect rule (as we do for depend in section 3.2) to have the rule index r as a third argument. By explicitly specifying r, the Datalog programmer can allow multiple affect rules to share U r. We can specify how node blocks update one another by using the update keyword: meaning the node block b updates the node block b when k happens. Note that b can equal b. It is often useful to write this rule: which means that whenever K causes B to update, B gets to see its own previous state (as well as K). To update the node block b with event k, we need where r ranges over all rules and This term is analogous to the Vh(t) term in section 2.1. Having equations and, we pass ψ 0,b,k + ψ 1,b,k through the activation functions and obtain the updated h b,new. Similar to depend and affect, we can also explicitly specify an extra argument r in each update rule to allow multiple rules to share V r. Parameter sharing (in depend, affect and update) is important because it works as a form of regularization: shared parameters tend to get updated more often than the individual ones, thus leaving the latter less likely to overfit the training data when we "early-stop" the training procedure. When each event type k is declared using is event(k), the system automatically creates event embedding vectors v k and w k and they will be used in equations and respectively. 
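The pooling structure just described (sum over rules, max over node blocks within a rule) can be sketched schematically as below. The per-rule matrices and the final softplus are stand-ins for the paper's exact parameterization, which is only partially reproduced above, so this should be read as an assumed simplification rather than the precise equation.

```python
import torch
import torch.nn.functional as F

def block_intensity(blocks_by_rule, rule_mats, block_states, v_k, s_k):
    """Schematic intensity for one event type k.

    blocks_by_rule: {rule_id: [node blocks b such that rule r proves depend(k, b)]}
    rule_mats:      {rule_id: matrix A_r mapping a block's state into a shared space}
    block_states:   {block name: current hidden vector h_b(t)}
    Within each rule, block contributions are max-pooled (keep the single strongest
    dependency); contributions of different rules are summed.
    """
    total = 0.0
    for r, blocks in blocks_by_rule.items():
        per_block = torch.stack([torch.sigmoid(rule_mats[r] @ block_states[b])
                                 for b in blocks])           # (n_blocks, D')
        total = total + per_block.max(dim=0).values          # max over blocks in rule r
    raw = v_k @ total                                         # scalar score
    return s_k * F.softplus(raw / s_k)                        # positive intensity
```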
When some event types involve many entities which in a very large number of event types, this design might end up with too many parameters, thus being hard to generalize to unseen data. We can allow event types to share embedding vectors by adding an extra argument to the keyword is event: is event(k,m):-condition 1,...,condition N. where m is an index to a pair of embedding vectors v m and w m. There can be more than one pair that is used by an event type k as shown in this example: is event(email(S,R), S), is event(email(S,R), R), is event(email(S,R), email) and etc. Then we compute the final embedding vectors of email(S,R) as: Similar argument in section 3.4 applies here that sharing embedding vectors across event types is a form of regularization. In a simplified version of our approach, we could use a homogeneous neural architecture where all events have the same dimension, etc. In our actual implementation, we allow further flexibility by using Datalog rules to define dimensionalities, activation functions, and multi-layer structures for event embeddings. This software design is easy to work with, but is orthogonal to the machine learning contribution of the paper, so we describe it in Appendix A.4. , we can learn the parameters of the proposed model by locally maximizing in equation using any stochastic gradient method: Its log-likelihood given the sequence over the observation interval [0, T] is as follows: The only difference is that our Datalog program affects the neural architecture, primarily by dictating that some weights in the model are structurally zero. Concretely, to compute and its gradient, as each event e i = k i @t i happens, we need to query the database with depend(k,B)? for the node blocks that each k depends on in order to compute log λ ki (t i) and the Monte Carlo approximation to Then we need to query the database with affect(k,B)? for the node blocks to be affected and update them. A detailed recipe is Algorithm 1 of Appendix B.1 including a down-sampling trick to handle large K. Prediction Given an event sequence prefix k 1 @t 1, k 2 @t 2,..., k i−1 @t i−1, we may wish to predict the time and type of the next event. The time t i has density p i (t) = λ(t) exp − t ti−1 λ(s)ds where λ(t) = k∈K λ k (t), and we choose ∞ ti−1 tp i (t)dt as the time prediction because it has the lowest expected L 2 loss. Given the next event time t i, the most likely type would simply be arg max k λ k (t i), but the most likely next event type without knowledge of t i is arg max k The integrals in the preceding equations can be estimated using i.i.d. samples of t i drawn from p i (t). We draw t i using the thinning algorithm (; ;). Given t i, we draw k i from the distribution where the probability of each type k is proportional to λ k (t i). A full sequence can be rolled out by repeatedly feeding the sampled event back into the model and then drawing the next. See Appendix B.2 for implementation details. We show how to use our Datalog interface to inject inductive biases into the neural Hawkes process (NHP) on multiple synthetic and real-world datasets. On each dataset, we compare the model with modified architecture-we call it structured neural Hawkes process (or structured-NHP) with the plain vanilla NHP on multiple evaluation metrics. See Appendix C for experimental details (e.g., dataset statistics and training details). We implemented the model in PyTorch . 
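Since the displayed log-likelihood did not survive extraction, the sketch below writes out the standard point-process objective assumed here: the sum of log-intensities at observed events minus the integral of the total intensity over [0, T], with the integral estimated by uniform Monte Carlo samples (the same estimator referenced in Algorithm 1). Names are illustrative.

```python
import torch

def mc_log_likelihood(event_log_intensities, total_intensity_fn, T, n_samples=100):
    """Monte Carlo estimate of
        sum_i log lambda_{k_i}(t_i)  -  integral_0^T  sum_k lambda_k(t) dt.

    event_log_intensities: (I,) tensor of log lambda_{k_i}(t_i) at observed events
    total_intensity_fn:    callable t -> sum_k lambda_k(t), returning a scalar
    """
    times = torch.rand(n_samples) * T     # i.i.d. uniform sample times on [0, T]
    integral = T * torch.stack(
        [torch.as_tensor(total_intensity_fn(t)) for t in times]).mean()
    return event_log_intensities.sum() - integral
```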
pointed out, it is important for a model family to handle the superposition of real-time sequences, because in various real settings, some event types tend not to interact. For example, the activities of two strangers rarely influence each other, although they are simultaneously monitored and thus form a single observed sequence. In this section, we experiment on the data known to be drawn from a superposition of M neural Hawkes processes with randomly initialized parameters. Each process X has four event types event(K,X) where K can be 1, 2, 3 and 4. To leverage the knowledge about the superposition structure, one has to either implement a mixture of neural Hawkes processes or transform a single neural Hawkes process to a superposition model by (a) zeroing out specific elements of v k such that λ k (t) for k ∈ K X depends on only a subset S of the LSTM hidden nodes, (b) setting specific LSTM parameters such that events of type k ∈ K Y don't affect the nodes in S and (c) making the LSTM transition matrix a blocked-structured matrix such that different node blocks don't update each other. Neither way is trivial. With our Datalog interface, we can explicitly construct such a superposition process rather easily by writing simple datalog rules as follows: (20b) update(X, unit(X)):-is block(X). Events of X do not influence Y at all, and processes don't share parameters. We generated learning curves (Figure 2) by training a structured-NHP and a NHP on increasingly long prefixes of the training set. As we can see, the structured model substantially outperform NHP at all training sizes. The neural Hawkes process gradually improves its performance as more training sequences become available: it perhaps learns to set its w k and LSTM parameters from data. However, thanks to the right inductive bias, the structured model requires much less data to achieve somewhat close to the oracle performance. Actually, as shown in Figure 2, the structured model only needs 1/16 of training data as NHP does to achieve a higher likelihood. The improvement of the structured model over NHP is statistically significant with p-value < 0.01 as shown by the pair-permutation test at all training sizes of all the datasets. Elevator System Dataset . In this dataset, two elevator cars transport passengers across five floors in a building (; ;). Each event type has the form stop(C,F) meaning that C stops at F to pick up or drop off passengers where C can be car1 and car2 and F can be floor1,..., floor5. This dataset is representative of many real-world domains where individuals physically move from one place to another for, e.g., traveling, job changing, etc. With our Datalog interface, we can explicitly express our inductive bias that each stop(C,F) depends on and affects the associated node blocks C and F: (21b) The set of inductive biases is desirable because whether a C will head to a F and stops there is primarily determined by C's state (e.g., whether it is already on the way of sending anyone to that floor) and F's state (e.g., whether there is anyone on that floor waiting for a car). We also declare a global node block, building, that depends on and affects every event in order to compensate for any missing knowledge (e.g., the state of the joint controller for the elevator bank, and whether it's a busy period for the humans) and/or missing data (e.g., passengers arrive at certain floors and press the buttons). Appendix C.2 gives a full Datalog specification of the model that we used for the experiments in this domain. 
More details about this dataset (e.g. pre-processing) can be found in Appendix C.1.2. EuroEmail Dataset . In this domain, we model the email communications between anonymous members of an European research institute. Each event type has the form email(S,R) meaning that S sends an email to R where S and R are variables that take the actual members as values. With our Datalog interface, we can express our knowledge that each event depends on and affects its sender and receiver as the following rules: depend(send(S,R), S). depend(send(S,R), R). Appendix C.2 gives a full Datalog specification of the model that we used for the experiments in this domain. More details about this dataset (e.g. pre-processing) can be found in Appendix C.1.3. We evaluate the models in three ways as shown in Figure 3. We first plot learning curves (Figure 3a) by training a structured-NHP and an NHP on increasingly long prefixes of each training set. Then we show the per-sequence scatterplots in Figure 3b. We can see that either in learning curve or scatterplots, structured-NHP consistently outperforms NHP, which proves that structured-NHP is both more data-efficient and more predictive. Finally, we compare the models on the prediction tasks and datasets as shown in Figure 3c. We make minimum Bayes risk predictions as explained in section 4. We evaluate the type prediction with 0-1 loss, yielding an error rate. We can see, in both of Elevator and EuroEmail datasets, structured-NHP could be significantly more accurate on type prediction. We evaluate the time prediction with L 2 loss, and reported the mean squared error as a percentage of the variance of the true time interval (denoted as MSE%). Note that we get can MSE%=1.0 if we always predict t i as t i−1 + ∆t where ∆t is the average length of time intervals. Figure 3c shows that the structured model outperforms NHP on event type prediction on both datasets, although for time prediction they perform neck to neck. We speculate that it might be because the structure information is more directly related to the event type (because of its structured term) but not time. There has been extensive research about having inductive biases in the architecture design of a machine learning model. The epitome of this direction is perhaps the graphical models where edges between variables are usually explicitly allowed or forbidden . There has also been work in learning such biases from data. For example, proposed to encourage the block-structured states for Hidden Markov Models (HMM) by enforcing a sparsityinducing prior over the non-parametric Bayesian model. and Bratières et al. attempted to learn structured kernels for Gaussian processes. Our work is in the direction of injecting inductive biases into a neural temporal model-a class of models that is useful in various domains such as demand forecasting , personalization and recommendation , event prediction and knowledge graph modeling . Incorporating structural knowledge in the architecture design of such a model has drawn increasing attention over the past few years. introduced a factored state space in continuous-time Markov processes. and proposed to consider direct dependencies among events in graphical event models. developed a hybrid model that decomposes exchangeable sequences into a global part that is associated with common patterns and a local part that reflects individual characteristics. However, their approaches are all bounded to the kinds of inductive biases that are easy to specify (e.g. by hand). 
Our work enables people to use a Datalog program to conveniently specify the neural architecture based on a deductive database-a much richer class of knowledge than the previous work could handle. Although logic programming languages and databases have both previously been used to derive a graphical model architecture , we are, to the best of our knowledge, the first to develop such a general interface for a neural event model. As future work, we hope to develop an extension where events can also trigger assertions and retractions of facts in the Datalog database. Thanks to the Datalog rules, the model architecture will dynamically change along with the facts. For example, if Yoyodyne Corp. hires Alice, then the Yoyodyne node block begins to influence Alice's actions, and K expands to include a new (previously impossible) event where Yoyodyne fires Alice. Moreover, propositions in the database-including those derived via other Datalog rules-can now serve as extra bits of system state that help define the λ k intensity functions in. Then the system's learned neural state s i is usefully augmented by a large, exact set of boolean propositions-a division of labor between learning and expert knowledge. In this section, we elaborate on the details of the transition function Ψ that is introduced in section 2.1; more details about them may be found in. where the interval (t i−1, t i] has consecutive observations k i−1 @t i−1 and k i @t i as endpoints. At t i, the continuous-time LSTM reads k i @t i and updates the current (decayed) hidden cells c(t) to new initial values c i+1, based on the current (decayed) hidden state h(t i), as follows: At time t i, the updated state vector is ] is given by, which continues to control h(t) except that i has now increased by 1). On the interval (t i, t i+1], c(t) follows an exponential curve that begins at c i+1 (in the sense that lim t→t + i c(t) = c i+1 ) and decays, as time t increases, toward c i+1 (which it would approach as t → ∞, if extrapolated). We initialize each node block h b = 0, and then have it read a special beginning-of-stream (BOS) event bos@t 0 where bos is a special event type and t 0 is set to be 0. Then equations- define c 1 (from c 0 def = 0), c 1, δ 1, and o 1. This is the initial configuration of the system as it waits for the first event to happen: this initial configuration determines the hidden state h(t) and the intensity functions λ k (t) over t ∈ (0, t 1]. The bos event affects every node block but depends on none of them because we do not generate it. When the system is initiated, the following rule is automatically asserted by our program so users don't have to do it by themselves. affect(bos,X):-is block(X). More details about why bos is desirable can be found in. The vanilla neural Hawkes process can be specified using our interface as follows: (28b) update(global,global,K). (28c) where h global (t) is the only node block that every event type k depends on and affects. Equation falls back to f k (v k σ(Aσ(BCh global (t)))) which is not exactly the same with, yet at least as expressive as equation. A.4 OPTIONAL architecture, input AND output KEYWORDS As discussed in section 3.5, the embedding vector of each event is just the sum of trainable vectors. Actually, we further allow users to write Datalog rules to define embedding models that have multilayer structures and activation functions of interest. 
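A small sketch of the continuous-time decay just described: between events, the cell decays exponentially from its updated value c toward its asymptote c̄ at rate δ, and the hidden state reads the decayed cell through the output gate. The gate equations themselves, which were garbled above, are not reproduced; this follows the usual neural-Hawkes parameterization as an assumption.

```python
import torch

def decayed_state(c, c_bar, delta, o, t_elapsed):
    """State of the continuous-time LSTM at time t_i + t_elapsed:
    the cell decays exponentially from c toward c_bar at rate delta, and the
    hidden state applies the output gate to the (squashed) decayed cell."""
    c_t = c_bar + (c - c_bar) * torch.exp(-delta * t_elapsed)
    h_t = o * (2 * torch.sigmoid(2 * c_t) - 1)
    return c_t, h_t
```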
We can define a L-layer neural network using the architecture keyword as: where n is a (structured) term as the model name, D 0 is the input dimension, D l and a l are the output dimension and activation type of l-th layer respectively. The example below defines a model named emb that has a neural layer with hyper-tangent activation followed by a linear layer. architecture (Note that we allow using = for architecture to indicate that there should be only one model under each name n, although it is not supported in the standard datalog implementation. We can assign to each k a model n and spell out its arguments x 1, x 2, . . . (to be concatenated in order) for input embedding computation using the input keyword: and follow the same format for output embedding computation using the output keyword. Note that we use = again. The example below means that each w email(S,R) is computed by passing the concatenation of S and R into model emb and that v email(S,R) is computed the same way: input(email(S,R))= emb(S,R). (32a) output(email(S,R))= emb(S,R). B ALGORITHM DETAILS In this section, we elaborate on the details of algorithms. The log-likelihood in equation can be computed by calling Algorithm 1. The down sampling trick (line 32 of Algorithm 1) can be used when there are too many event types. It gives an unbiased estimate of the total intensity k∈K λ k (t), yet remains much less computationally expensive especially when J |K|. In our experiments, we found that its variance over the entire corpus turned out small, although it may, in theory, suffer large variance. As future work, we will explore sampling from proposal distributions where the probability of choosing any k is (perhaps trained to be) proportional to its actual intensity λ k (t), in order to further reduce the variance. But this is not within the scope of this paper. Note that, in principle, we have to make Datalog queries after every event, to figure out which node blocks are affected by that event and to find the new intensities of all events. However, certain Datalog queries may be slow. Thus, in practice, rather than repeatedly making the same queries, we just memorize the the first time and look it up when it is needed again. Problems emerge when events are allowed to change the database (e.g. asserting and retracting facts as in Appendix D), then this may change the of some queries, and thus the memos for those queries are now incorrect. In this case, we might explore using some other more flexible query language that creates memos and keeps them up to date . Given an event sequence prefix k 1 @t 1, k 2 @t 2,..., k i−1 @t i−1, we can call Algorithm 2 to draw the single next event. A full sequence can be rolled out by repeatedly feeding the sampled event back into the model and then drawing the next (calling Algorithm 2 another time). How do we construct the upper bound λ * (line 8 of Algorithm 2)? We express the upper bound as λ * = k∈K λ * k and find λ * k ≥ λ k (t) for each k. We copy the formulation of λ k (t) here for easy reference: where each summand g dd h bd (t) = g dd ·o id ·(2σ(2c d (t))− 1) is upper-bounded by max c∈{c id,c id} g dd · o id · (2σ(2c) − 1). Note that the coefficients g dd may be either positive or negative. C EXPERIMENTAL DETAILS C.1 DATASET STATISTICS Table 1 shows statistics about each dataset that we use in this paper. We synthesize data by sampling event sequences from different structured processes. 
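The thinning procedure referenced above (Algorithm 2) can be sketched as rejection sampling from a homogeneous proposal process whose rate λ* upper-bounds the total intensity; each proposed time is accepted with probability equal to the ratio of the true total intensity to λ*. The helper names below are illustrative, not the paper's implementation.

```python
import numpy as np

def sample_next_time(total_intensity_fn, t_prev, lam_star, rng=np.random):
    """Thinning: propose candidate times from a homogeneous Poisson process with
    rate lam_star >= sum_k lambda_k(t), and accept each candidate with probability
    total_intensity_fn(t) / lam_star."""
    t = t_prev
    while True:
        t += rng.exponential(1.0 / lam_star)                 # next proposal
        if rng.uniform() <= total_intensity_fn(t) / lam_star:
            return t                                         # accepted event time
```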
Each structured process is a mixture model of M neural Hawkes processes and each neural Hawkes process(X) has four event types event1(X), event2(X), event3(X) and event4(X). We chose M = 4, 8, 16 and end up with three different datasets. We chose the sequence length I = 21 and then used the thinning algorithm (; ;) to sample the first I events over [0, ∞). We set T = t I, i.e., the time of the last generated event. We generate 2000, 100 and 100 sequences for each training, dev, and test set respectively. We examined our method in a simulated 5-floor building with 2 elevator cars. The system was initially built in Fortran by and then rebuilt in Python by. During a typical afternoon down-peak rush hour (when passengers go from floor-2,3,4,5 down to the lobby), elevator cars travel to each floor and pick up passengers that have (stochastically) arrived there according to a traffic profile that can be found in and. In this dataset, each event type is stop(C,F) where C can be car1 and car2 and F can be floor1,..., floor5. So there are 10 event types in total in this simulated building. We repeated the (one-hour) simulation 1200 times to collect the event sequences, each of which has around 1200 time-stamped records of which car stops at which floor. We randomly sampled disjoint train, dev and test sets with 1000, 100 and 100 sequences respectively. EuroEmail is proposed by. It was generated using email data from a large European research institute, and was highly anonymized. The emails only represent communications between institution members, which are indexed by integers, with timestamps. In the dataset are 986 users and 332334 email communications spanning over 800 days. However, most users only send or receive one or two emails, leaving this dataset extremely sparse. We extracted all the emails among the top 20 most active users, and end up with 5881 emails. We split the single long sequence into 120 sequences with average length of 48, and set the training, dev, test size as 100, 10, 10 respectively. In this dataset, event type is defined as send(S,R), where S and R are members in this organization. Then there're 20 × 20 = 400 different event types, where we assume that people may send emails to themselves. In this section, we give a full Datalog specification of the model that we used for the experiments on each dataset. Here is the full program for Elevator domain. (33a) is block(car2). (33b) is block(floor1). (33c) is block(floor2). (33d) is block(floor3). (33e) is block(floor4). (33f) is block(floor5). (33g) is block(building). (33h) is car(car1). (33i) is car(car2). (33j) is floor(floor1). (33k) is floor(floor2). (33l) is floor(floor3). (33m) is floor(floor4). (33n) is floor(floor5). (33o) is event(stop(C,F)):-is car(C),is floor(F). (33p) depend(stop(C,F), C). (33q) depend(stop(C,F), F). (33r) depend(stop(C,F), building). (33s)
S1ghzlHFPS
Factorize LSTM states and zero-out/tie LSTM weight matrices according to real-world structural biases expressed by Datalog programs.
In this work, we aim to solve data-driven optimization problems, where the goal is to find an input that maximizes an unknown score function given access to a dataset of input, score pairs. Inputs may lie on extremely thin manifolds in high-dimensional spaces, making the optimization prone to falling-off the manifold. Further, evaluating the unknown function may be expensive, so the algorithm should be able to exploit static, offline data. We propose model inversion networks (MINs) as an approach to solve such problems. Unlike prior work, MINs scale to extremely high-dimensional input spaces and can efficiently leverage offline logged datasets for optimization in both contextual and non-contextual settings. We show that MINs can also be extended to the active setting, commonly studied in prior work, via a simple, novel and effective scheme for active data collection. Our experiments show that MINs act as powerful optimizers on a range of contextual/non-contextual, static/active problems including optimization over images and protein designs and learning from logged bandit feedback. Data-driven optimization problems arise in a range of domains: from protein design to automated aircraft design , from the design of robots to the design of neural net architectures and learning from logged feedback, such as optimizing user preferences in recommender systems. Such problems require optimizing unknown reward or score functions using previously collected data consisting of pairs of inputs and corresponding score values, without direct access to the score function being optimized. This can be especially challenging when valid inputs lie on a low-dimensional manifold in the space of all inputs, e.g., the space of valid aircraft designs or valid images. Existing methods to solve such problems often use derivative-free optimization (Snoek et al.). Most of these techniques require active data collection where the unknown function is queried at new inputs. However, when function evaluation involves a complex real-world process, such as testing a new aircraft design or evaluating a new protein, such active methods can be very expensive. On the other hand, in many cases there is considerable prior data -existing aircraft and protein designs, and advertisements and user click rates, etc. -that could be leveraged to solve the optimization problem. In this work, our goal is to develop an optimization approach to solve such optimization problems that can readily operate on high-dimensional inputs comprising a narrow, low-dimensional manifold, such as natural images, readily utilize offline static data, and learn with minimal active data collection if needed. We can define this problem setting formally as the optimization problem where the function f (x) is unknown, and we have access to a dataset D = {(x 1, y 1),..., (x N, y N)}, where y i denotes the value f (x i). If no further data collection is possible, we call this the data-driven model-based optimization setting. This can also be extended to the contextual setting, where the aim is to optimize the expected score function value across a context distribution. That is, where π maps contexts c to inputs x, such that the expected score under the context distribution p 0 (c) is optimized. As before, f (c, x) is unknown and we have access to a dataset D = {(c i,, where y i is the value of f (c i, x i). Such contextual problems with logged datasets have been studied in the context of contextual bandits ). 
A simple way to approach these model-based optimization problems is to train a proxy function f θ (x) or f θ (c, x), with parameters θ, to approximate the true score, using the dataset D. However, directly using f θ (x) in place of the true function f (x) in Equation generally works poorly, because the optimizer will quickly find an input x for which f θ (x) outputs an erroneously large value. This issue is especially severe when the inputs x lie on a narrow manifold in a high-dimensional space, such as the set of natural images . The function f θ (x) is only valid near the training distribution, and can output erroneously large values when queried at points chosen by the optimizer. Prior work has sought to addresses this issue by using uncertainty estimation and Bayesian models for f θ (x), as well as active data collection (Snoek et al.). However, explicit uncertainty estimation is difficult when the function f θ (x) is very complex or when x is high-dimensional. Instead of learning f θ (x), we propose to learn the inverse function, mapping from values y to corresponding inputs x. This inverse mapping is one-to-many, and therefore requires a stochastic mapping, which we can express as f −1 θ (y, z) → x, where z is a random variable. We term such models model inversion networks (MINs). MINs provide us with a number of desirable properties: they can utilize static datasets, handle high-dimensional input spaces such as images, can handle contextual problems, and can accommodate both static datasets and active data collection. We discuss how to design simple active data collection methods for MINs, leverage advances in deep generative modeling (Goodfellow et al.;), and scale to very high-dimensional input spaces. We experimentally demonstrate MINs in a range of settings, showing that they outperform prior methods on high-dimensional input spaces, perform competitively to Bayesian optimization methods on tasks with active data collection and lower-dimensional inputs, and substantially outperform prior methods on contextual optimization from logged data (Swaminathan & Joachims, a). Bayesian optimization. In this paper, we aim to solve data-driven optimization problems. Most prior work aimed at solving such optimization problems has focused on the active setting. This includes algorithms such as the cross entropy method (CEM) and related derivative-free methods; , reward weighted regression Peters & Schaal, Bayesian optimization methods based on Gaussian processes;, and variants that replace GPs with parametric acquisition function approximators such as Bayesian neural networks and latent variable models (; b; a), as well as more recent methods such as CbAS . These methods require the ability to query the true function f (x) at each iteration to iteratively arrive at a near-optimal solution. We show in Section 3.3 that MINs can be applied to such an active setting as well, and in our experiments we show that MINs can perform competitively with these prior methods. Additionally, we show that MINs can be applied to the static setting, where these prior methods are not applicable. Furthermore, most conventional BO methods do not scale favourably to high-dimensional input spaces, such as images, while MINs can handle image inputs effectively. Contextual bandits. Equation 2 captures the class of contextual bandit problems. 
Prior work on batch contextual bandits has focused on batch learning from bandit feedback (BLBF), where the learner needs to produce the best possible policy that optimizes the score function from logged experience. Existing approaches build on the counterfactual risk minimization (CRM) principle (Swaminathan & Joachims, a;b), and have been extended to work with deep nets. In our comparisons, we find that MINs substantially outperform these prior methods in the batch contextual bandit setting. Deep generative modeling. Recently, deep generative modeling approaches have been very successful at modelling high-dimensional manifolds such as natural images (Goodfellow et al.; Van Den Oord et al.;), speech (van den), text (Yu et al.), alloy composition prediction (Nguyen et al.), etc. MINs combine the strength of such generative models with important algorithmic decisions to solve model-based optimization problems. In our experimental evaluation, we show that these design decisions are important for adapting deep generative models to model-based optimization, and it is difficult to perform effective optimization without them. In this section, we describe our model inversion networks (MINs) method, which can perform both active and passive model-based optimization over high-dimensional input spaces. Problem statement. Our goal is to solve optimization problems of the form x* = arg max_x f(x), where the function f(x) is not known, but we must instead use a dataset of input-output tuples D = {(x_i, y_i)}. In the contextual setting described in Equation 2, each datapoint is also associated with a context c_i. For clarity, we present our method in the non-contextual setting, but the contextual setting can be derived analogously by conditioning all functions on the context. In the active setting, which is most often studied in prior work, the algorithm is allowed to actively query f(x) one or more times on each iteration to augment the dataset, while in the static setting, only an initial static dataset is available. The goal is to obtain the best possible x (i.e., the one with the highest possible value of f(x)). One naïve way of solving MBO problems is to learn a proxy score function f_θ(x) via standard empirical risk minimization. We could then maximize this learned function with respect to x via standard optimization methods. However, naïve applications of such a method would fail for two reasons. First, the proxy function f_θ(x) may not be accurate outside the samples on which it is trained, and optimization with respect to it may simply lead to values of x for which f_θ(x) makes the largest mistake in the negative direction. The second problem is more subtle. When x lies on a narrow manifold in a very high-dimensional space (such as the space of natural images), the optimizer can produce invalid values of x, which result in arbitrary outputs when fed into f_θ(x). Since the shape of this manifold is unknown, it is difficult to constrain the optimizer to prevent this. This second problem is rarely addressed or discussed in prior work, which typically focuses on optimization over low-dimensional and compact domains with known bounds. Part of the reason for the brittleness of the naïve approach above is that f_θ(x) has a high-dimensional input space, making it easy for the optimizer to find inputs x for which the proxy function produces an unreasonable output. Can we instead learn a function with a small input space, which implicitly understands the space of valid, in-distribution values for x?
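To make the failure mode above concrete, the following is a minimal sketch (not the authors' code) of the naive baseline: fit a proxy score model by empirical risk minimization and then run gradient ascent on its input. All names and hyperparameters (ProxyModel, layer sizes, step counts, learning rates) are illustrative; on image-like inputs this kind of procedure tends to drift off the data manifold and exploit the proxy's errors, which is exactly the issue discussed above.

```python
# Minimal sketch of the naive forward-model baseline: ERM proxy + gradient ascent on x.
import torch
import torch.nn as nn

class ProxyModel(nn.Module):
    """Simple MLP proxy f_theta(x) trained with a squared-error objective."""
    def __init__(self, x_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)

def fit_proxy(xs, ys, epochs=100, lr=1e-3):
    model = ProxyModel(xs.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = ((model(xs) - ys) ** 2).mean()   # standard empirical risk minimization
        loss.backward()
        opt.step()
    return model

def naive_optimize(model, x_init, n_steps=200, lr=0.05):
    # Gradient ascent on the proxy score; nothing constrains x to the data manifold,
    # so the optimizer is free to exploit regions where f_theta is wrong.
    x = x_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        (-model(x.unsqueeze(0))).backward()
        opt.step()
    return x.detach()
```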
The main idea behind our approach is to model an inverse map that produces a value of x given a score value y, given by f^{-1}_θ: Y → X. The input to the inverse map is a scalar, making it comparatively easy to constrain to valid values, and by directly generating the inputs x, an approximation to the inverse function must implicitly understand which input values are valid. As multiple x values can correspond to the same y, we design f^{-1}_θ as a stochastic map that maps a score value along with a d_z-dimensional random vector to an x, f^{-1}_θ: Y × Z → X, where z is distributed according to a prior distribution p_0(z). To define the inverse map objective, let the data distribution be denoted p_D(x, y), let p_D(y) be the marginal over y, and let p(y) be any distribution defined on Y (which could be equal to p_D(y)). We can train the proxy inverse map f^{-1}_θ under distribution p(y) by minimizing the following objective: L_p(D) = E_{y ∼ p(y)}[D(p_D(x|y), p_{f^{-1}}(x|y))] (Equation 3), where p_{f^{-1}}(x|y) is the distribution over x induced by the inverse map, i.e., the distribution of f^{-1}_θ(y, z) with z ∼ p_0(z), and D is a measure of divergence between the two distributions. Using the Kullback-Leibler divergence leads to maximum likelihood learning, while the Jensen-Shannon divergence motivates a GAN-style training objective. MINs can be adapted to the contextual setting by passing in the context as an input and learning f^{-1}_θ(y_i, z, c_i). In standard empirical risk minimization, we would choose p(y) to be the data distribution p_D(y), such that the expectation can be approximated simply by sampling training tuples (x_i, y_i) from the training set. However, as we will discuss in Section 3.3, a more careful choice for p(y) can lead to better performance. The MIN algorithm is based on training an inverse map, and then using it via the inference procedure in Section 3.2 to infer the x that approximately optimizes f(x). The structure of the MIN algorithm is shown in Algorithm 1. Once the inverse map is trained, the goal of our algorithm is to generate the best possible x, i.e., one that maximizes the true score function as well as possible given the dataset. Since a score y needs to be provided as input to the inverse map, we must select the score y at which to query the inverse map to obtain a near-optimal x. One naïve heuristic is to pick the best observed score y_max ∈ D and produce x_max ∼ f^{-1}_θ(y_max) as the output. However, the method should be able to extrapolate beyond the best score seen in the dataset, especially in contextual settings, where a good score may not have been observed for all contexts. In order to extrapolate as far as possible, while still staying on the valid data manifold, we need to measure the validity of the generated values of x. One way to do this is to measure the agreement between the learned inverse map and an independently trained forward model f_θ: the values of y for which the generated samples x are predicted to have a score similar to y are likely in-distribution, whereas those where the forward model predicts a very different score may be too far outside the training distribution. Since the latent variable z captures the multiple possible outputs of the one-to-many inverse map, we can further optimize over z for a given y to find the best, most trustworthy output x. This can be formalized as the following optimization: y*, z* := arg max_{y,z} f_θ(f^{-1}_θ(y, z)) + λ_1 log p_0(z) − λ_2 ||f_θ(f^{-1}_θ(y, z)) − y||² (Equation 4), where λ_1, λ_2 ≥ 0 trade off staying likely under the prior over z against agreement between the forward and inverse maps, and the final output is x* = f^{-1}_θ(y*, z*). This optimization can be motivated as finding an extrapolated score that corresponds to values of x that lie on the valid input manifold, and for which independently trained forward and inverse maps agree.
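The following is a minimal sketch of the inference step in Equation 4, assuming the inverse map and an independently trained forward model are available as differentiable PyTorch modules, a standard Gaussian prior over z, and illustrative (not paper-specified) penalty weights and step counts.

```python
# Sketch of the MIN inference procedure (Equation 4): search over (y, z) for an
# extrapolated score on which the forward and inverse maps agree.
# `inverse_map(y, z) -> x` and `forward_model(x) -> y_hat` are assumed to be trained.
import torch

def min_inference(inverse_map, forward_model, y_start, z_dim,
                  steps=500, lr=1e-2, lam_agree=1.0, lam_prior=0.1):
    y = torch.tensor([float(y_start)], requires_grad=True)
    z = torch.zeros(z_dim, requires_grad=True)
    opt = torch.optim.Adam([y, z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x = inverse_map(y, z)                      # candidate design for the current (y, z)
        y_hat = forward_model(x)                   # independent forward model's score estimate
        agreement = (y_hat - y).pow(2).sum()       # forward and inverse maps should agree
        log_prior = -0.5 * z.pow(2).sum()          # log p0(z) for a standard Gaussian prior
        objective = y_hat.sum() - lam_agree * agreement + lam_prior * log_prior
        (-objective).backward()                    # gradient ascent on the objective
        opt.step()
    with torch.no_grad():
        return inverse_map(y, z), y.item()
```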
Although this optimization uses an approximate forward map f_θ(x), we show in our experiments in Section 4 that it produces substantially better results than optimizing with respect to a forward model alone. The inverse map substantially constrains the search space, requiring an optimization over a 1-dimensional y and a (relatively) low-dimensional z, rather than the full space of inputs. This scheme can be viewed as a special (deterministic) case of a probabilistic optimization procedure described in Appendix A. A naïve implementation of the training objective in Equation 3 samples y from the data distribution p_D(y). However, as we are most interested in the inverse map's predictions for high values of y, it is much less important for the inverse map to predict accurate x values for values of y that are far from the optimum. We could consider increasing the weights on datapoints with larger values of y. In the extreme case, we could train only on the best datapoint: either the single datapoint with the largest y or, in the contextual case, the datapoint with the largest y for each context. More generally, we can define the optimal y distribution p*(y), which is simply the delta function centered on the best y, p*(y) = δ_{y*}(y), in the deterministic case. If we instead assume that the observed scores have additive noise (i.e., we observe f(x) + ε for some noise ε), then p*(y) would be a distribution centered around the optimal y. Of course, training on p*(y) is not practical, since it heavily down-weights most of the training data, leading to a very high-variance training objective; moreover, p*(y) is not even known in general, since the optimal data point is likely not in our training set. In this section, we will propose a better choice for p(y) that trades off the variance due to an overly peaked training distribution and the bias due to training on the "wrong" distribution (i.e., anything other than p*(y)). We can train under a distribution other than the empirical distribution by using importance sampling, such that we sample from p_D and assign an importance weight, given by w_i = p(y_i)/p_D(y_i). By bounding the variance and the bias of the gradient of the L_p(D) estimate, with respect to the reweighted objective without sampling error under y drawn from p*(y), we obtain the following result (proof in Appendix B). Theorem 3.1 ((Informal) Bias + variance bound in MINs). Let L(p*) be the objective under p*(y) without sampling error: L(p*) = E_{y ∼ p*(y)}[D(p_D(x|y), p_{f^{-1}}(x|y))]. Let N_y be the number of datapoints with the particular y value observed in D. Then, for some constants C_1, C_2, C_3, with high confidence, the gradient error obeys a bound of the form ||∇_θ L̂_p(D) − ∇_θ L(p*)|| ≤ C_1 E_{y ∼ p(y)}[1/√N_y] + C_2 √(d_2(p || p_D)/|D|) + C_3 D_TV(p*, p), where d_2 denotes the exponentiated Renyi divergence and D_TV the total variation divergence (both defined in Appendix B). Theorem 3.1 suggests a tradeoff between being close to the optimal distribution p*(y) and reducing variance by covering the full data distribution p_D. We observe that the distribution p(y) that minimizes the RHS of the bound in Theorem 3.1 has the following form: p(y) ∝ (N_y/(N_y + K)) · g(p*(y)), where g is an increasing function of p*(y) that ensures that the distributions p and p* are close. Theoretically, g(·) is an increasing, piece-wise linear function of its argument. We can interpret the expression for p(y) as a product of two likelihoods: the optimality of a particular y value and the likelihood of a particular y not being rare in D. We empirically choose an exponential parametric form for this function, which we describe in Section 3.5. This upweights the samples with higher scores and reduces the weight on rare y-values (i.e., those with low N_y), while preventing the weight on common y-values from growing, since N_y/(N_y + K) saturates to 1 for large N_y.
This is consistent with our intuition: we would like to upweight datapoints with high y-values, provided the number of samples at those values is not too low. Of course, for continuous-valued scores, we rarely see the same score twice. Therefore, we bin the y-values into discrete bins for the purpose of weighting, as we discuss in Section 3.5. While the passive setting requires care in finding the best value of y for the inverse map, the active setting presents a different challenge: choosing a new query point x at each iteration to augment the dataset D and make it possible to find the best possible optimum. Prior work on bandits and Bayesian optimization often uses Thompson sampling (TS) (; ; Srinivas et al.) as the data-collection strategy. TS maintains a posterior distribution over functions p(f_t | D_{1:t}). At each iteration, it samples a function from this distribution and queries the point x_t that greedily optimizes this sampled function. TS offers an appealing query mechanism, since it achieves sub-linear Bayesian regret (defined as the expected cumulative difference between the value of the optimal input and the selected input), given by O(√T), where T is the number of queries. Maintaining a posterior over high-dimensional parametric functions is generally intractable. However, we can devise a scheme to approximate Thompson sampling with MINs. To derive this method, first note that sampling f_t from the posterior is equivalent to sampling (x, y) pairs consistent with f_t: given sufficiently many (x, y) pairs, there is a unique smooth function f_t that satisfies f_t(x_i) = y_i. For example, we can infer a quadratic function exactly from three points. For a more formal description, we refer readers to the notion of Eluder dimension (Russo & Van Roy). Thus, instead of maintaining intractable beliefs over the function, we identify a function by the samples it generates, and define a way to sample synthetic (x, y) points such that they implicitly define a unique function sample from the posterior. To apply this idea to MINs, we train the inverse map f^{-1}_{θ_t} at each iteration t with an augmented dataset D'_t = D_t ∪ S_t, where S_t is a dataset of synthetically generated input-score pairs corresponding to unseen y values in D_t. Training f^{-1}_{θ_t} on D'_t corresponds to training f^{-1}_{θ_t} to be an approximate inverse map for a function f_t sampled from p(f_t | D_{1:t}), as the synthetically generated samples S_t implicitly induce a model of f_t. We can then approximate Thompson sampling by obtaining x_t from f^{-1}_{θ_t}, labeling it via the true function, and adding it to D_t to produce D_{t+1}. Pseudocode for this method, which we call "randomized labeling," is presented in Algorithm 2. In Appendix C, we further derive O(√T) regret guarantees under mild assumptions. Implementation-wise, this method is simple, does not require estimating explicit uncertainty, and works with arbitrary function classes, including deep neural networks.
Algorithm 2 Randomized labeling:
1: Initialize the dataset D_0 and the inverse map f^{-1}_0.
2: for step t in {0, ..., T-1} do
3: Generate a synthetic dataset S_t corresponding to unseen data points y_i (by randomly pairing noisy observed x_i values with unobserved y values).
4: Train inverse map f^{-1}_t on D'_t = D_t ∪ S_t, using the reweighting described in Section 3.3.
5: Query function f at x_t = f^{-1}_t(y_t, z) for a chosen target score y_t and z ∼ p_0(z).
6: Observe outcome (x_t, f(x_t)) and update D_{t+1} = D_t ∪ {(x_t, f(x_t))}.
7: end for
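A minimal sketch of this randomized-labeling loop is shown below. The training and inference routines are passed in as user-supplied callables, and the way synthetic targets are drawn (noisy copies of observed inputs paired with optimistically high scores) only mirrors the textual description above; all noise scales and sizes are illustrative.

```python
# Sketch of active data collection with randomized labeling (in the spirit of Algorithm 2).
# `f` is the true score function (queried once per round); `train_inverse_map(xs, ys)` and
# `propose_input(inverse_map, target_y)` are user-supplied callables.
import numpy as np

def randomized_labeling_loop(f, xs, ys, train_inverse_map, propose_input,
                             n_rounds=20, n_synthetic=64, seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(n_rounds):
        # Synthetic pairs: perturbed observed inputs matched with optimistic, unseen
        # score values; together with the real data they implicitly pin down one
        # plausible function consistent with the observations so far.
        idx = rng.integers(0, len(xs), size=n_synthetic)
        x_syn = xs[idx] + 0.01 * rng.standard_normal(xs[idx].shape)
        y_syn = ys.max() + np.abs(rng.standard_normal(n_synthetic))
        x_aug = np.concatenate([xs, x_syn])
        y_aug = np.concatenate([ys, y_syn])

        inv_expl = train_inverse_map(x_aug, y_aug)   # exploration copy (augmented data)
        x_t = propose_input(inv_expl, y_aug.max())   # e.g. the inference step of Section 3.2
        y_t = f(x_t)                                 # single query to the true function
        xs = np.concatenate([xs, x_t[None]])
        ys = np.append(ys, y_t)

    inv_exploit = train_inverse_map(xs, ys)          # exploitation copy (real data only)
    return inv_exploit, xs, ys
```

In this section, we describe our instantiation of MINs for high-dimensional inputs with deep neural network models. GANs (Goodfellow et al.)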
have been successfully used to model the manifold of high-dimensional inputs, without the need for explicit density modelling, and are known to produce more realistic samples than other models such as VAEs or Flows. The inverse map in MINs needs to model the manifold of valid x, thus making GANs a suitable choice. We can instantiate our inverse map with a GAN by choosing D in Equation 3 to be the Jensen-Shannon divergence measure. Since we generate x conditioned on y, the discriminator is parameterized as Disc(x|y), and trained to output 1 for a valid (x, y) pair (i.e., where y = f(x) and x comes from the data) and 0 otherwise. Thus, we optimize the following objective: min_θ max_{Disc} E_{y ∼ p(y)}[ E_{x ∼ p_D(x|y)}[log Disc(x|y)] + E_{z ∼ p_0(z)}[log(1 − Disc(f^{-1}_θ(y, z)|y))] ]. This model is similar to a conditional GAN (cGAN), which has been used in the context of modeling the distribution of x conditioned on a discrete-valued label. As discussed in Section 3.3, we additionally reweight the data distribution using importance sampling. To that end, we discretize the space Y into B discrete bins b_1, ..., b_B and, following Section 3.3, weight each bin according to w_{b_i} ∝ (N_{b_i}/(N_{b_i} + K)) exp(−|ȳ_{b_i} − y*|/τ), where N_{b_i} is the number of datapoints in the bin, ȳ_{b_i} is the bin's representative score value, y* is the maximum score observed, and τ is a hyperparameter. (After discretization, using notation from Section 3.3, for any y that lies in bin b, N_y is taken to be N_b.) In the active setting, we perform active data collection using the synthetic relabelling algorithm described in Section 3.4. In practice, we train two copies of f^{-1}_θ. The first, which we call the exploration model f^{-1}_expl, is trained with data augmented via synthetically generated samples (i.e., D'_t). The other copy, called the exploitation model f^{-1}_exploit, is trained on only real samples (i.e., D_t). This improves stability during training, while still performing data collection as dictated by Algorithm 2. To generate the augmented dataset D'_t in practice, we sample y values from p*(y) (the distribution over high-scoring ys observed in D_t), and add positive-valued noise, thus making the augmented y values higher than those in the dataset, which promotes exploration. The corresponding inputs x are simply sampled from the dataset D_t, or uniformly sampled from the bounded input domain when one is provided in the problem statement (for example, in benchmark function optimization). After training, we infer the best possible x from the trained model using the inference procedure described in Section 3.2. In the active setting, the inference procedure is applied on f^{-1}_exploit, the inverse map that is trained only on real data points. The goal of our empirical evaluation is to answer the following questions. Can MINs successfully solve optimization problems of the form shown in Equations 1 and 2, in static settings and active settings, better than or comparably to prior methods? Can MINs generalize to high-dimensional spaces, where valid inputs x lie on a lower-dimensional manifold, such as the space of natural images? Is reweighting the data distribution important for effective data-driven model-based optimization? Does our proposed inference procedure effectively discover valid inputs x with better values than any value seen in the dataset? Does randomized labeling help in active data collection? We first study the data-driven model-based optimization setting. This requires generating points that achieve a better function value than any point in the training set or, in the contextual setting, better than the policy that generated the dataset for each context.
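The binned reweighting described in Section 3.5 above can be implemented in a few lines. The sketch below assumes scalar scores stored in a NumPy array; the bin count, K and τ are free hyperparameters here, and the exponential form mirrors the weighting described above rather than any exact published formula.

```python
# Sketch of the bin-based reweighting: w(bin) proportional to
# N_b / (N_b + K) * exp(-|y_bin - y*| / tau), turned into per-datapoint sampling weights.
import numpy as np

def reweighting_weights(ys, n_bins=20, K=5.0, tau=1.0):
    ys = np.asarray(ys, dtype=float)
    edges = np.linspace(ys.min(), ys.max(), n_bins + 1)
    bin_idx = np.clip(np.digitize(ys, edges) - 1, 0, n_bins - 1)
    counts = np.bincount(bin_idx, minlength=n_bins).astype(float)
    centers = 0.5 * (edges[:-1] + edges[1:])
    y_star = ys.max()
    bin_w = counts / (counts + K) * np.exp(-np.abs(centers - y_star) / tau)
    bin_w = bin_w / (bin_w.sum() + 1e-12)
    # Per-datapoint weight: the bin's weight spread evenly over the datapoints in that bin.
    w = bin_w[bin_idx] / np.maximum(counts[bin_idx], 1.0)
    return w / w.sum()

# Usage: sample a training batch for the inverse map with these weights, e.g.
# idx = np.random.choice(len(ys), size=batch_size, p=reweighting_weights(ys))
```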
We evaluate our method on a batch contextual bandit task proposed in prior work and on a high-dimensional contextual image optimization task. We also evaluate our method on several non-contextual tasks that require optimizing over high-dimensional image inputs to evaluate a semantic score function, including hand-written characters and real-world photographs. Batch contextual bandits. We first study the contextual optimization problem described in Equation 2. The goal is to learn a policy, purely from static data, that predicts the correct bandit arm x for each context c, such that the policy achieves a high overall score f(c, π(c)) on average across contexts drawn from a distribution p_0(c). Table 1: The BanditNet column is our implementation; we were unable to replicate the performance from prior work (details in Appendix D). MINs outperform both BanditNet and BanditNet*, both with and without the inference procedure in Section 3.2. MINs w/o reweighting perform on par with full MINs on MNIST, and slightly worse on CIFAR-10, while still outperforming the baseline. We follow the protocol set out in prior work, which evaluates contextual bandit policies trained on a static dataset for a simulated classification task. The data is constructed by selecting images from the (MNIST/CIFAR) dataset as the context c, a random label as the input x, and a binary indicator indicating whether or not the label is correct as the score y. Multiple schemes can be used for selecting random labels for generating the dataset, and we evaluate on two such schemes, as described below. We report the average score on a set of new contexts, which is equal to the average 0-1 accuracy of the learned model on a held-out test set of images (contexts). We compare our method to previously proposed techniques, including the BanditNet model, on the MNIST and CIFAR-10 datasets. Note that this task is different from regular classification, in that the observed feedback ((c_i, x_i, y_i) pairs) is partial, i.e., we do not observe the correct label for each context (image) c_i, but only whether or not the label in the training tuple is correct. We evaluate on two datasets: data generated by selecting random labels x_i for each context c_i, and data where the correct label is used 49% of the time, which matches the protocol in prior work. We compare to BanditNet on identical dataset splits. We report the average 0-1 test accuracy for all methods in Table 1. The results show that MINs drastically outperform BanditNet on both MNIST and CIFAR-10 datasets, indicating that MINs can successfully perform contextual model-based optimization in the static (data-driven) setting. The results also show that utilizing the inference procedure in Section 3.2 produces an improvement of about 1.5% and 1.0% in test accuracy on MNIST and CIFAR-10, respectively. Character stroke width optimization. In the next experiment, we study how well MINs optimize over high-dimensional inputs, where valid inputs lie on a lower-dimensional manifold. We constructed an image optimization task out of the MNIST dataset. The goal is to optimize directly over the image pixels, to produce images with the thickest stroke width, such that the image corresponds either (a) to any valid character or (b) to a valid instance of a particular character class. A successful algorithm will produce the thickest character that is still recognizable.
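The score being maximized here can be written as a tiny function: the appendix later describes the stroke-width score as the number of pixels above an intensity threshold, and the specific threshold below is an assumption for illustration.

```python
# Sketch of the stroke-width score used for the MNIST image-optimization tasks:
# the count of pixels whose intensity exceeds a threshold.
import numpy as np

def stroke_width_score(image, threshold=0.5):
    """image: 2D array of pixel intensities in [0, 1]; thicker strokes give higher scores."""
    return float((np.asarray(image) > threshold).sum())
```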
Figure 2: MIN optimization to obtain the youngest faces when trained on faces older than 15 (left) and older than 25 (right). Generated faces (bottom) are obtained via inference in the inverse map at different points during model training. Real faces of varying ages (including ages lower than those used to train the model) are shown in the top rows. We overlay the actual age (negative of the score function) for each face on the real images, and the age obtained from subjective user rankings on the generated faces. The score function being optimized (maximized) in this case is the negative age of the face. In Figure 1, we observe that MINs generate images x that maximize the respective score functions in each case. We also evaluate on a harder task where the goal is to maximize the number of disconnected blobs of black pixels in an image of a digit. For comparison, we evaluate a method that directly optimizes the image pixels with respect to a forward model, of the form f_θ(x). In this case, the solutions are far off the manifold of valid characters. We also compare to MINs without the reweighting scheme and the inference procedure, where y is the maximum possible y in the dataset, to demonstrate the benefits of these two aspects. Semantic image optimization. The goal in these tasks is to quantify the ability of MINs to optimize high-level properties that require semantic understanding of images. We consider MBO tasks on the IMDB-Wiki faces dataset, where the function f(x) is the negative of the age of the person in the image. Hence, images with younger people have higher scores. We construct two versions of this task: one where the training data consists of all faces older than 15 years, and the other where the model is trained on all faces older than 25 years. This ensures that our model cannot simply copy the youngest face. To obtain ground truth scores for the generated faces, we use subjective judgement from human participants. We perform a study with 13 users. Each user was asked to answer a set of 35 binary-choice questions, each asking the user to pick the older image of the two provided alternatives. We then fit an age function to this set of binary preferences, analogously to prior work. Figure 2 shows the images produced by MINs. For comparison, we also present some samples of images from the dataset partitioned by the ground truth score. We find that the most likely age for optimal images produced by training MINs on images of people 15 years or older was 13.6 years, with the best image having an age of 12.2. The model trained on ages 25 and above produced more mixed results, with an average age of 26.2 and a minimum age of 23.9. We report these results in Table 2. This task is exceptionally difficult, since the model must extrapolate outside of the ages seen in the training set, picking up on patterns in the images that can be used to produce faces that appear younger than any face the model has seen, while avoiding unrealistic images. We also conducted experiments on contextual image optimization with MINs. We studied contextual optimization over hand-written digits to maximize stroke width, using either the character category as the context c, or the top one-fourth or top half of the image. In the latter case, MINs must learn to complete the image while maximizing the stroke width. In the case of class-conditioned optimization, MINs attain an average score over the classes of 237.6, while the dataset average is 149.0.
In the case where the context is the top half or quarter of the image, MINs obtain average scores of 223.57 and 234.32, respectively, while the dataset average is 149.0 for both tasks. We report these results in Table 3. We also conducted a contextual optimization experiment on faces from the CelebA dataset, with some example images shown in Figure 3. The context corresponds to the choice of the attributes brown hair, black hair, bangs, or moustache. The optimization score is given by the sum of the attributes wavy hair, eyeglasses, smiling, and no beard. Qualitatively, we can see that MINs successfully optimize the score while obeying the target context, though evaluating the true score is impossible without subjective judgement on this task. We discuss these experiments in more detail in Appendix D.1. Figure 3: Optimized x produced from contextual training on CelebA. Context = (brown hair, black hair, bangs, moustache) and f(x) = 1(wavy hair) + 1(eyeglasses) + 1(smiling) + 1(no beard). We show the produced x for two contexts. The model optimizes the score for both observed contexts, such as brown or black hair, and extrapolates to unobserved contexts, such as brown and black hair. In the active MBO setting, MINs must select which new datapoints to query to improve their estimate of the optimal input. In this setting, we compare to prior model-based optimization methods, and evaluate the exploration technique described in Section 3.4. Global optimization on benchmark functions. We first compare MINs to prior work in Bayesian optimization on standard benchmark problems (DNGO): the 2D Branin function and the 6D Hartmann function. As shown in Table 4, MINs reach within ±0.1 units of the global minimum (minimization is performed here, instead of maximization), performing comparably with commonly used Bayesian optimization methods based on Gaussian processes. We do not expect MINs to be as efficient as GP-based methods, since MINs rely on training parametric neural networks with many parameters, which is less efficient than GPs on low-dimensional tasks. Exact Gaussian processes and adaptive Bayesian linear regression outperform MINs in terms of optimization precision and the number of samples queried, but MINs achieve comparable performance with about 4x more samples. We also report the performance of MINs without the random labeling exploration method, instead selecting the next query point by greedily maximizing the current model with some additive noise. We find that the random relabeling method produces substantially better results than the greedy data collection approach, indicating the importance of effective exploration methods for MINs.
Table 4 (global optima in parentheses):
Function             Spearmint        DNGO            MIN             MIN + greedy
Branin (0.398)       0.398 ± 0.0      0.398 ± 0.0     0.398 ± 0.02    0.4 ± 0.05
Hartmann6 (−3.322)   −3.3166 ± 0.02   −3.319 ± 0.00   −3.315 ± 0.05   −3.092 ± 0.12
Protein fluorescence maximization. In the next experiment, we study a high-dimensional active MBO task that has been studied in prior work. This task requires optimizing over protein designs by selecting variable-length sequences of codons, where each codon can take on one of 20 values. In order to model discrete values, we use a Gumbel-softmax GAN, previously employed as a baseline in related work. For backpropagation, we choose a temperature τ = 0.75 for the Gumbel-softmax operation; this is also mentioned in Appendix D. The aim in this task is to produce a protein with maximum fluorescence. Each algorithm is provided with a starting dataset, and then allowed an identical, limited number of score function queries.
For each query made by an algorithm, it receives a score value from an oracle. We use the trained oracles released in prior work. These oracles are separately trained forward models, and can potentially be inaccurate, especially for datapoints not observed in the starting static dataset. We compare to CbAS and other baselines, including CEM (the cross-entropy method), RWR (reward-weighted regression) and a method that uses a forward model, GB (Gómez-Bombarelli et al.), as reported in prior work. For evaluation, we report the ground-truth score of the output of optimization (max), and the 50th-percentile ground-truth score of all the samples produced via sampling (without inference in the MIN case), so as to be comparable to prior work. In Table 5, we show that MINs are comparable to the best performing method on this task, and produce samples with the highest score among all the methods considered. These results suggest that MINs can perform competitively with previously proposed model-based optimization methods in the active setting, reaching comparable or better performance when compared both to Bayesian optimization methods and to previously proposed methods on a higher-dimensional protein design task. In this work, we presented a novel approach to model-based optimization (MBO). Instead of learning a proxy forward function f_θ(x) from inputs x to scores y, MINs learn a stochastic inverse mapping from scores y to inputs. MINs are resistant to out-of-distribution inputs and can optimize over high-dimensional x values where valid inputs lie on a narrow manifold. By using simple and principled design decisions, such as re-weighting the data distribution, MINs can perform effective model-based optimization even from static, previously collected datasets in the data-driven setting, without the need for active data collection. We also described ways to perform active data collection if needed. Our experiments showed that MINs are capable of solving MBO tasks in both contextual and non-contextual settings, and are effective for highly semantic score functions such as the age of the person in an image. Prior work has usually considered MBO in the active or "on-policy" setting, where the algorithm actively queries data as it learns. In this work, we introduced the data-driven MBO problem statement and devised a method to perform optimization in such scenarios. This is important in settings where data collection is expensive and where abundant datasets exist, for example, protein design, aircraft design and drug design. Further, MINs define a family of algorithms that show promising results on MBO problems over extremely large input spaces. While MINs scale to high-dimensional tasks such as model-based optimization over images, and are performant in both contextual and non-contextual settings, we believe there are a number of interesting open questions for future work. The interaction between active data collection and reweighting should be investigated in more detail, and poses interesting consequences for MBO, bandits and reinforcement learning. Better and more principled inference procedures are also a direction for future work. Another avenue is to study various choices of training objectives in MIN optimization. In this section, we show that the inference scheme described in Equation 4, Section 3.2, emerges as a deterministic relaxation of the probabilistic inference scheme described below.
We re-iterate that in Section 3.2, a singleton x * is the output of optimization, however the procedure can be motivated from the perspective of the following probabilistic inference scheme. Let p(x|y) denote a stochastic inverse map, and let p f (y|x) be a probabilistic forward map. Consider the following optimization problem: arg max where p θ (x|y) is the probability distribution induced by the learned inverse map (in our case, this corresponds to the distribution of f −1 θ (y, z) induced due to randomness in z ∼ p 0 (·)), p f (x|y) is the learned forward map, H is Shannon entropy, and D is KL-divergence measure between two distributions. In Equation 4, maximization is carried out over the input y to the inverse-map, and the input z which is captured inp in the above optimization problem, i.e. maximization over z in Equation 4 is equivalent to choosingp subject to the choice of singleton/ Dirac-deltap. The Lagrangian is given by: In order to derive Equation 4, we restrictp to the Dirac-delta distribution generated by querying the learned inverse map f −1 θ at a specific value of z. Now note that the first term in the Lagrangian corresponds to maximizing the "reconstructed"ŷ similarly to the first term in Equation 4. If p f is assumed to be a Gaussian random variable with a fixed variance, then log p f (ŷ|x) = −||ŷ − µ(x)|| Finally, in order to obtain the log p 0 (z) term, note that, D(p(x|y), p θ (x|y)) ≤ D(δ z (·), p 0 (·)) = − log p 0 (z) (by the data processing inequality for KL-divergence). Hence, constraining log p 0 (z) instead of the true divergence gives us a lower bound on L. Maximizing this lower bound (which is the same as Equation 4) hence also maximizes the true Lagrangian L. In this section, we provide details on the bias-variance tradeoff that arises in MIN training. Our analysis is primarily based on analysing the bias and variance in the 2 norm of the gradient in two cases -if we had access to infinte samples of the distribution over optimal ys, p * (y) (this is a Dirac-delta distribution when function f (x) evaluations are deterministic, and a distribution with non-zero variance when the function evaluations are stochastic or are corrupted by noise). Let −1 (y j)) denote the empirical objective that the inverse map is trained with. We first analyze the variance of the gradient estimator in Lemma B.2. In order to analyse this, we will need the expression for variance of the importance sampling estimator, which is captured in the following Lemma. Lemma B.1 (Variance of IS ). Let P and Q be two probability measures on the space (X, F) such that d 2 (P ||Q) < ∞. Let x 1, · · ·, x N be N randomly drawn samples from Q, and f: X → R is a uniformly-bounded function. Then for any δ ∈, with probability atleast 1 − δ, Equipped with Lemma B.1, we are ready to show the variance in the gradient due to reweighting to a distribution for which only a few datapoints are observed. θ. Let N y denote the number of datapoints observed in D with score equal to y, and letL p (D) be as defined, where the expectation is computed with respect to the dataset D. Then, there exist some constants C 1, C 2 such that with a confidence at least 1 − δ, Proof. We first bound the range in which the random variable ∇ θLp (D) can take values as a function of number of samples observed for each y. All the steps follow with high probability, i.e. with probability greater than 1 − δ, is the exponentiated Renyi-divergence between the two distributions p and q, i.e. dy. 
The first step follows by applying Hoeffding's inequality on each inner term in the sum corresponding to y j and then bounding the variance due to importance sampling ys finally using concentration bounds on variance of importance sampling using Lemma B.1. Thus, the gradient can fluctuate in the entire range of values as defined above with high probability. Thus, with high probability, atleast 1 − δ, The next step is to bound the bias in the gradient that arises due to training on a different distribution than the distribution of optimal ys, p * (y). This can be written as follows: where D TV is the total variation divergence between two distributions p and p *, and L is a constant that depends on the maximum magnitude of the divergence measure D. Combining Lemma B.2 and the above , we prove Theorem 3.1. In this section, we explain in more detail the randomized labeling algorithm described in Section 3.4. We first revisit Thompson sampling, then provide arguments for how our randomized labeling algorithm relates to it, highlight the differences, and then prove a regret bound for this scheme under mild assumptions for this algorithm. Our proof follows commonly available proof strategies for Thompson sampling. 1: Initialize a policy πa: X → R, data so-far D0 = {}, a prior over θ in f θ -P (θ * |D0) 2: for step t in {0, . . ., T-1} do 3: θt ∼ P (θ * |Ft) (Sample θt from the posterior) 4: Query xt = arg maxx E[f θ t (x) | θ = θt] (Query based on the posterior probability xt is optimal) 5: Observe outcome: (xt, f (xt)) 6: Dt+1 = Dt ∪ (xt, f (xt)) 7: end for Notation The TS algorithm queries the true function f at locations (x t) t∈N and observes true function values at these points f (x t). The true function f (x) is one of many possible functions that can be defined over the space R |X |. Instead of representing the true objective function as a point object, it is common to represent a distribution p * over the true function f. This is justified because, often, multiple parameter assignments θ, can give us the same overall function. We parameterize f by a set of parameters θ *. The T period regret over queries x 1, · · ·, x T is given by the random variable Since selection of x t can be a stochastic, we analyse Bayes risk , we define the Bayes risk as the expected regret over randomness in choosing x t, observing f (x t), and over the prior distribution P (θ *). This definition is consistent with. Let π TS be the policy with which Thompson sampling queries new datapoints. We do not make any assumptions on the stochasticity of π TS, therefore, it can be a stochastic policy in general. However, we make 2 assumptions (A1, A2). The same assumptions have been made in.: sup x f (x) − inf x f (x) ≤ 1 (Difference between max and min scores is bounded by 1) -If this is not true, we can scale the function values so that this becomes true. A2: Effective size of X is finite. 1 TS (Alg 3) queries the function value at x based on the posterior probability that x is optimal. More formally, the distribution that TS queries x t from can be written as: π TS t = P (x * = ·|D t). When we use parameters θ to represent the function parameter, and thus this reduces to sampling an input that is optimal with respect to the current posterior at each iteration: MINs (Alg 2) train inverse maps f θ (z, y), where y ∈ R. We call an inverse map optimal if it is uniformly optimal given θ t, i.e. ||f where ε t is controllable (usually the case in supervised learning, errors can be controlled by cross-validation). 
Now, we are ready to show that the regret incurred the randomized labelling active data collection scheme is bounded by O(√ T). Our proof follows the analysis of Thompson sampling presented in. We first define information ratio and then use it to prove the regret bound. related the expected regret of TS to its expected information gain i.e. the expected reduction in the entropy of the posterior distribution of X *. Information ratio captures this quantity, and is defined as: where I(·, ·) is the mutual information between two random variables and all expectations E t are defined to be conditioned on D t. If the information ratio is small, Thompson sampling can only incur large regret when it is expected to gain a lot of information about which x is optimal. then bounded the expected regret in terms of the maximum amount of information any algorithm could expect to acquire, which they observed is at most the entropy of the prior distribution of the optimal x. Lemma C.1 (Bayes-regret of vanilla TS) ). For any T ∈ N, if Γ t ≤ Γ (i.e. information ratio is bounded above) a.s. for each t ∈ {1, . . ., T}, We refer the readers to the proof of Proposition 1 in. The proof presented in does not rely specifically on the property that the query made by the Thompson sampling algorithm at each iteration x t is posterior optimal, but rather it suffices to have a bound on the maximum value of the information ratio Γ t at each iteration t. Thus, if an algorithm chooses to query the true function at a datapoint x t such that these queries always contribute in learning more about the optimal function, i.e. I(·, ·) appearing in the denominator of Γ is always more than a threshold, then information ratio is lower bounded, and that active data collection algorithm will have a sublinear asymptotic regret. We are interested in the case when the active data collection algorithm queries a datapoint x t at iteration t, such that x t is the optimum for a functionfθ t, wherê θ t is a sample from the posterior distribution over θ t, i.e.θ t lies in the high confidence region of the posterior distribution over θ t given the data D t seen so far. In this case, the mutual information between the optimal datapoint x and the observed (x t, f (x t)) input-score pair is likely to be greater than 0. More formally, The randomized labeling scheme for active data collection in MINs performs this step. The algorithm samples a bunch of (x, y) datapoints, sythetically generated, -for example, in our experiments, we add noise to the values of x, and randomly pair them with unobserved or rarely observed values of y. If the underlying true function f is smooth, then there exist a finite number of points that are sufficient to uniquely describe this function f. One measure to formally characterize this finite number of points that are needed to uniquely identify all functions in a function class is given by Eluder dimension (Russo & Van Roy). By augmenting synthetic datapoints and training the inverse map on this data, the MIN algorithm ensures that the inverse map is implicitly trained to be an accurate inverse for the unique function fθ t that is consistent with the set of points in the dataset D t and the augmented set S t. Which sets of functions can this scheme represent? The functions should be consistent with the data seen so far D t, and can take randomly distributed values outside of the seen datapoints. 
This can roughly argued to be a sample from the posterior over functions, which Thompson sampling would have maintained given identical history D t. Lemma C.2 (Bounded-error training of the posterior-optimal x t preserves asymptotic Bayes-regret). ∀t ∈ N, letx t be any input such that f (x t) ≥ max x E[f (x)|D t ] − ε t. If MIN chooses to query the true function atx t and if the sequence (ε t) t∈N satisfies T t=0 ε t = O(√ T), then, the regret from querying this ε t -optimalx t which is denoted in general as the policyπ TS is given by E[Regret(T,π Proof. This lemma intuitively shows that if posterior-optimal inputs x t can be "approximately" queried at each iteration, we can still maintain sublinear regret. To see this, note: The second term can be bounded by the absolute value in the worst case, which amounts T t=0 ε t extra Bayesian regret. As Bayesian regret of TS is O( √ T) and Theorem C.3 (Bayesian Regret of randomized labeling active data collection scheme proposed in Section 3.4 is O( √ T)). Regret incurred by the MIN algorithm with randomized labeling is of the order O((ΓH(X *) + C)T ). Proof. Simply put, we will combine the insight about the mutual information I(x, (x t, f (x t))) > 0 and C.2 in this proof. Non-zero mutual information indicates that we can achieve a O(√ T) regret if we query x t s which are optimal corresponding to some implicitly defined forward function lying in the high confidence set of the true posterior given the observed datapoints D t. Lemma C.2 says that if bounded errors are made in fitting the inverse map, the overall regret remains O(√ T). More formally, if ||f and now application of Lemma C.2 gives us the extra regret incurred. (Note that this also provides us a way to choose the number of training steps for the inverse map) Further, note if we sample x t at iteration t from a distribution that shares support with the true posterior over optimal x t (which is used by TS), we still incur sublinear, bounded O(Γ H(A *)T ) regret. In the worst case, the overall bias caused due to the approximations will lead to an additive cumulative increase in the Bayesian regret, and hence, there is a constant Figure 4: Contextual MBO on MNIST. In (a) and (b), top one-half and top one-fourth of the image respectively and in (c) the one-hot encoded label are provided as contexts. The goal is to produce the maximum stroke width character that is valid given the context. In (a) and (b), we show triplets of the groundtruth digit (green), the context passed as input (yellow) and the produced images x from the MIN model (purple). In this set of static dataset experiments, we study contextual MBO tasks on image pixels. Unlike the contextual bandits case, where x corresponds to an image label, here x corresponds to entire images. We construct several tasks. First, we study stroke width optimization on MNIST characters, where the context is the class of the digit we wish to optimize. Results are shown in Figure 4. MINs correctly produce digits of the right class, and achieve an average score over the digit classes of 237.6, whereas the average score of the digits in the dataset is 149.0. The next task is to test the ability of MINs to be able to complete/inpaint unobserved patches of an image given an observed context patch. 
We use two masks: mask A, where only the top half of the image is visible, and mask B, where only the top one-fourth of the image is visible, to mask out portions of the image and present the masked image as context c to the MIN, with the goal being to produce a valid completion x while still maximizing the score corresponding to the stroke width. We present some sample completions in Figure 4. (Figure 5: The aim is to maximize the score of an image, which is given by the sum of the attributes: eyeglasses, smiling, wavy hair and no beard. MINs produce optimal x; visually, these solutions indeed optimize the score.) The quantitative results are presented in Table 6. We find that MINs are effective compared to the completions present in the dataset in terms of score, while still producing a visibly valid character. We evaluate MINs on a complex semantic optimization task on the CelebA dataset. We choose a subset of attributes and provide their one-hot encoding as context to the model. The score is equal to the L1 norm of the binary indicator vector for a different subset of attributes, disjoint from the context. We present our results in Figure 3. We observe that MINs produce diverse images consistent with the context, are able to effectively infer the score function, and learn features to maximize it. Some of the model-produced optimized solutions were presented in Section 4 in Figure 3. In this section, we present the produced generations for some other contexts. Figure 7 shows these results. In this section, we present some additional results for non-contextual image optimization problems. We also evaluated our optimization procedure on the CelebA dataset in a non-contextual setting. The reward function is the same as that in the contextual setting: the sum of the attributes wavy hair, no beard, smiling and eyeglasses. We find that MINs are able to successfully produce solutions in this scenario as well. We show some optimized outputs at different iterations of the model in Figure 5. cGAN baseline. We compare our MIN model to a cGAN baseline on the IMDB-Wiki faces dataset for the semantic age optimization task. In general, we found that the cGAN model learned to ignore the score value passed as input, even when trained on the entire dataset (without excluding the youngest faces), and behaved almost like a regular unconditional GAN model when queried to produce images x corresponding to the smallest age. We suspect that this could be because the age of a person does not provide enough direct signal to guide the model to utilize it, unless other tricks, such as the reweighting proposed in Section 3.3 that explicitly directs the model's attention to datapoints of interest, are used. We present the produced optimized x in Figure 6. In Figure 8, we highlight the quantitative score values for the stroke width score function (defined as the number of pixels whose intensity exceeds a threshold). Note that MINs achieve the highest average score while still resembling a valid digit that stays inside the manifold of valid digits, unlike a forward model, which can achieve high values of the score function (number of pixels turned on) but does not stay on the manifold of valid digits.
Figure 7: Images returned by MIN optimization over images. We note that MINs perform successful optimization over an objective defined by the sum of desired attributes. Moreover, for unseen contexts, such as both brown and black hair, the optimized solutions align reasonably with the context and optimize for the score as well. In this section, we explain the experimental details and the setup of our model. For our experiments involving MNIST and optimization of benchmark functions, we used the same architecture as a fully connected GAN, where the generator and discriminator are both fully connected networks. We based our code for this part on an open-source implementation (Linder-Norén). For the forward model experiments in these settings, we used a 3-layer feedforward ReLU network with 256 hidden units per layer. For all experiments on CelebA and IMDB-Wiki faces, we used the VGAN model and the associated codebase as our starting setup. For experiments on batch contextual bandits, we used a fully connected discriminator and generator for MNIST, and a convolutional generator and a ResNet18-like discriminator for CIFAR-10. The prediction in this setting is categorical (1 of 10 labels needs to be predicted), so instead of using REINFORCE or derivative-free optimization to train the inverse map, we used the Gumbel-softmax trick with a temperature τ = 0.75 to be able to use stochastic gradient descent to train the model. For the protein fluorescence maximization experiment, we used a 2-layer, 256-unit feed-forward Gumbel-softmax inverse map and a 2-layer feed-forward discriminator. We trained the models present in open-source implementations of BanditNet (Sachdeva), but were unable to reproduce the results reported in the BanditNet paper. We therefore reported the numbers published in the BanditNet paper in the main text as well. The temperature hyperparameter τ, which is used to compute the reweighting distribution, is adaptively chosen based on the 90th percentile score in the dataset. For example, if the difference between y_max and the 90th-percentile score is given by α, we choose τ = α. This scheme can adaptively change temperatures in the active setting. In order to select the constant that decides whether the bin corresponding to a particular value of y is small or not, we first convert the expression N_y/(N_y + λ) to use densities rather than absolute counts, that is, p̂_D(y)/(p̂_D(y) + λ), where p̂_D(y) is the empirical density of observing y in D, and we then use the same constant λ = 0.003. We did not observe much sensitivity to λ values in the range [0.0001, 0.007], all of which performed similarly. We usually fixed the number of bins to 20 for the purposes of reweighting; however, note that the inverse map was still trained on continuous y values, which helps it extrapolate. In the active setting, we train two copies of f^{-1} jointly side by side. One of them is trained on the augmented datapoints generated by the randomized labelling procedure, and the other copy is trained only on the real datapoints. This is done to prevent instabilities while training inverse maps. Training can also be made more incremental in this manner: we do not need to train an inverse map to optimality inside every iteration of the active MIN algorithm, but can instead train both inverse maps for a fixed number of gradient steps.
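For the categorical-output settings mentioned above (bandit labels, protein codons), the inverse map's output layer can use the Gumbel-softmax relaxation so that the whole model stays differentiable. The sketch below uses PyTorch's built-in gumbel_softmax with the τ = 0.75 mentioned in the text; the layer sizes and the way the score is concatenated with z are illustrative assumptions, not the exact architecture used in the experiments.

```python
# Sketch of a Gumbel-softmax output head for an inverse map with categorical outputs,
# allowing gradients to flow through discrete choices (temperature tau = 0.75).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CategoricalInverseMap(nn.Module):
    def __init__(self, z_dim, n_classes, hidden=256, tau=0.75):
        super().__init__()
        self.tau = tau
        self.net = nn.Sequential(
            nn.Linear(z_dim + 1, hidden), nn.ReLU(),   # +1 input for the score y
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes))

    def forward(self, y, z, hard=False):
        logits = self.net(torch.cat([y.unsqueeze(-1), z], dim=-1))
        # Relaxed one-hot sample; hard=True gives a straight-through one-hot vector.
        return F.gumbel_softmax(logits, tau=self.tau, hard=hard)
```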
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SklsBJHKDS
We propose a novel approach to solve data-driven model-based optimization problems in both passive and active settings that can scale to high-dimensional input spaces.
Generating formal-language represented by relational tuples, such as Lisp programs or mathematical expressions, from a natural-language input is an extremely challenging task because it requires to explicitly capture discrete symbolic structural information from the input to generate the output. Most state-of-the-art neural sequence models do not explicitly capture such structure information, and thus do not perform well on these tasks. In this paper, we propose a new encoder-decoder model based on Tensor Product Representations (TPRs) for Natural- to Formal-language generation, called TP-N2F. The encoder of TP-N2F employs TPR'binding' to encode natural-language symbolic structure in vector space and the decoder uses TPR'unbinding' to generate a sequence of relational tuples, each consisting of a relation (or operation) and a number of arguments, in symbolic space. TP-N2F considerably outperforms LSTM-based Seq2Seq models, creating a new state of the art on two benchmarks: the MathQA dataset for math problem solving, and the AlgoList dataset for program synthesis. Ablation studies show that improvements are mainly attributed to the use of TPRs in both the encoder and decoder to explicitly capture relational structure information for symbolic reasoning. When people perform explicit reasoning, they can typically describe the way to the step by step via relational descriptions. There is ample evidence that relational representations are important for human cognition (e.g., (; ; ; ;) ). Although a rapidly growing number of researchers use deep learning to solve complex symbolic reasoning and language tasks (a recent review is ), most existing deep learning models, including sequence models such as LSTMs, do not explicitly capture human-like relational structure information. In this paper we propose a novel neural architecture, TP-N2F, to solve natural-to formal-language generation tasks (N2F). In the tasks we study, math or programming problems are stated in naturallanguage, and answers are given as programs, sequences of relational representations, to solve the problem. TP-N2F encodes the natural-language symbolic structure of the problem in an input vector space, maps this to a vector in an intermediate space, and uses that vector to produce a sequence of output vectors that are decoded as relational structures. Both input and output structures are modelled as Tensor Product Representations (TPRs) . During encoding, NL-input symbolic structures are encoded as vector space embeddings using TPR'binding' (following); during decoding, symbolic constituents are extracted from structure-embedding output vectors using TPR'unbinding' (following Huang et al. (2018;). Our contributions in this work are as follows. (i) We propose a role-level analysis of N2F tasks. (ii) We present a new TP-N2F model which gives a neural-network-level implementation of a model solving the N2F task under the role-level description proposed in (i). To our knowledge, this is the first model to be proposed which combines both the binding and unbinding operations of TPRs to achieve generation tasks through deep learning. (iii) State-of-the-art performance on two recently developed N2F tasks shows that the TP-N2F model has significant structure learning ability on tasks requiring symbolic reasoning through program synthesis. The TPR mechanism is a method to create a vector space embedding of complex symbolic structures. 
The type of a symbol structure is defined by a set of structural positions or roles, such as the leftchild-of-root position in a tree, or the second-argument-of-R position of a given relation R. In a particular instance of a structural type, each of these roles may be occupied by a particular filler, which can be an atomic symbol or a substructure (e.g., the entire left sub-tree of a binary tree can serve as the filler of the role left-child-of-root). For now, we assume the fillers to be atomic symbols. 1 The TPR embedding of a symbol structure is the sum of the embeddings of all its constituents, each constituent comprising a role together with its filler. The embedding of a constituent is constructed from the embedding of a role and the embedding of the filler of that role: these are joined together by the TPR'binding' operation, the tensor (or generalized outer) product ⊗. Formally, suppose a symbolic type is defined by the roles {r i}, and suppose that in a particular instance of that type, S, role r i is bound by filler f i. The TPR embedding of S is the order-2 tensor where {f i} are vector embeddings of the fillers and {r i} are vector embeddings of the roles. In Eq. 1, and below, for notational simplicity we conflate order-2 tensors and matrices. As a simple example, consider the symbolic type string, and choose roles to be r 1 = first element, r 2 = second element, etc. Then in the specific string S = cba, the first role r 1 is filled by c, and r 2 and r 3 by b and a, respectively. The TPR for S is c ⊗ r 1 + b ⊗ r 2 + a ⊗ r 3, where a, b, c are the vector embeddings of the symbols a, b, c, and r i is the vector embedding of role r i. A TPR scheme for embedding a set of symbol structures is defined by a decomposition of those structures into roles bound to fillers, an embedding of each role as a role vector, and an embedding of each filler as a filler vector. Let the total number of roles and fillers available be n R, n F, respectively. Define the matrix of all possible role vectors to be R ∈ R dR×nR, with column i, [R]:i = r i ∈ R dR, comprising the embedding of r i. Similarly let F ∈ R dF×nF be the matrix of all possible filler vectors. The TPR T ∈ R dF×dR. Below, d R, n R, d F, n F will be hyper-parameters, while R, F will be learned parameter matrices. Using summation in Eq.1 to combine the vectors embedding the constituents of a structure risks non-recoverability of those constituents given the embedding T of the the structure as a whole. The tensor product is chosen as the binding operation in order to enable recovery of the filler of any role in a structure S given its TPR T. This can be done with perfect precision if the embeddings of the roles are linearly independent. In that case the role matrix R has a left inverse U: U R = I. Now define the unbinding (or dual) vector for role r j, u j, to be the j th column of U: U:j. Then, we have r i u j = δ ji. This means that, to recover the filler of r j in the structure with TPR T, we can take its tensor inner product (or matrix-vector product) with u j: In the architecture proposed here, we will make use of both TPR binding using the tensor product with role vectors r i and TPR unbinding using the tensor inner product with unbinding vectors u j. Binding will be used to produce the order-2 tensor T S embedding of the NL problem statement. Unbinding will be used to generate output relational tuples from an order-3 tensor H. 
Because they pertain to different representations (of different orders, in fact), the binding and unbinding vectors we will use are not related to one another. We propose a general TP-N2F neural network architecture operating over TPRs to solve N2F tasks under a proposed role-level description of those tasks. In this description, natural-language input is represented as a straightforward order-2 role structure, and formal-language relational representations of outputs are represented with a new order-3 recursive role structure proposed here. Figure 1 shows an overview diagram of the TP-N2F model, which depicts the following high-level description. As shown in Figure 1, while the natural-language input is a sequence of words, the output is a sequence of multi-argument relational tuples such as (R A_1 A_2), a 3-tuple consisting of a binary relation (or operation) R with its two arguments. The "TP-N2F encoder" uses two LSTMs to produce, for each word, a pair consisting of a filler vector and a role vector, which are bound together with the tensor product. These tensor products, concatenated, comprise the "context" over which attention will operate in the decoder. The sum of the word-level TPRs, flattened to a vector, is treated as a representation of the entire problem statement; it is fed to the "Reasoning MLP", which transforms this encoding of the problem into a vector encoding the solution. This is the initial state of the "TP-N2F decoder" attentional LSTM, which outputs at each time step an order-3 tensor representing a relational tuple. To generate a correct tuple from decoder operations, the model must learn to give the order-3 tensor the form of a TPR for an (R A_1 A_2) tuple (detailed explanation in Sec. 3.1.2). In the following sections, we first introduce the details of our proposed role-level description for N2F tasks, and then present how our proposed TP-N2F model uses TPR binding and unbinding operations to create a neural-network implementation of this description of N2F tasks. In this section, we propose a role-level description of N2F tasks, which specifies the filler/role structures of the input natural-language symbolic expressions and the output relational representations. Instead of encoding each token of a sentence with a non-compositional embedding vector looked up in a learned dictionary, we use a learned role-filler decomposition to compose a tensor representation for each token. Given a sentence S with n word tokens {w_0, w_1, ..., w_{n−1}}, each word token w_t is assigned a learned role vector r_t, soft-selected from the learned dictionary R, and a learned filler vector f_t, soft-selected from the learned dictionary F (Sec. 2). The mechanism closely follows that of , and we hypothesize similar results: the role and filler approximately encode the grammatical role of the token and its lexical semantics, respectively. Each word token w_t is then encoded as the tensor product T_t = f_t ⊗ r_t; given the set of all its token embeddings {T_0, ..., T_{n−1}}, the sentence S as a whole is assigned a TPR equal to the sum of the TPR embeddings of all its word tokens: T_S = Σ_{t=0}^{n−1} T_t. Using TPRs to encode natural language has several advantages. First, natural-language TPRs can be interpreted by exploring the distribution of tokens grouped by the role and filler vectors they are assigned by a trained model (as in ). Second, TPRs avoid the Bag-of-Words (BoW) confusion : the BoW encoding of Jay saw Kay is the same as the BoW encoding of Kay saw Jay, but the encodings are different with TPR embedding, because the role filled by a symbol changes with its context.
In this section, we propose a novel recursive role-level description for representing symbolic relational tuples. Each relational tuple contains a relation token and multiple argument tokens. Given a binary relation rel, a relational tuple can be written as (rel arg_1 arg_2) where arg_1, arg_2 indicate the two arguments of relation rel. Let us adopt the two positional roles p_i^rel = arg_i-of-rel for i = 1, 2; the filler of role p_i^rel is arg_i. Now let us use role decomposition recursively, noting that the role p_i^rel can itself be decomposed into a sub-role p_i = arg_i-of-⋅ (argument i of an as-yet-unspecified relation), which has a sub-filler rel. Suppose that arg_i, rel, p_i are embedded as vectors a_i, r, p_i. Then the TPR encoding of p_i^rel is r ⊗ p_i, so the TPR encoding of filler arg_i bound to role p_i^rel is a_i ⊗ (r ⊗ p_i). The tensor product is associative, so we can omit parentheses and write the TPR for the formal-language expression, the relational tuple (rel arg_1 arg_2), as: H = a_1 ⊗ r ⊗ p_1 + a_2 ⊗ r ⊗ p_2 (Eq. 3). Given the unbinding vectors p'_i for the positional role vectors p_i and the unbinding vector r' for the vector r that embeds relation rel, each argument can be unbound in two steps, as shown in Eqs. 4-5: B_i = H · p'_i (Eq. 4) and a_i = B_i · r' (Eq. 5). Here · denotes the tensor inner product, which for the order-3 H and order-1 p'_i in Eq. 4 is defined componentwise by [H · p'_i]_{jk} = Σ_l [H]_{jkl} [p'_i]_l; in Eq. 5, · is equivalent to the matrix-vector product. Our proposed scheme can be contrasted with the TPR scheme in which (rel arg_1 arg_2) is embedded as r ⊗ a_1 ⊗ a_2 (e.g., ; ). In that scheme, an n-ary-relation tuple is embedded as an order-(n + 1) tensor, and unbinding an argument requires knowing all the other arguments (to use their unbinding vectors). In the scheme proposed here, an n-ary-relation tuple is still embedded as an order-3 tensor: there are just n terms in the sum in Eq. 3, using n position vectors p_1, ..., p_n; unbinding simply requires knowing the unbinding vectors for these fixed position vectors. In the model, the order-3 tensor H of Eq. 3 has a different status than the order-2 tensor T_S of Sec. 3.1.1. T_S is a TPR by construction, whereas H is a TPR as a result of successful learning. To generate the output relational tuples, the decoder assumes each tuple has the form of Eq. 3, and performs the unbinding operations which that structure calls for. In Appendix Sec. A.3, it is shown that, if unbinding each of a set of roles from some unknown tensor T gives a target set of fillers, then T must equal the TPR generated by those role/filler pairs, plus some tensor that is irrelevant because unbinding from it produces the zero vector. In other words, if the decoder succeeds in producing filler vectors that correspond to output relational tuples that match the target, then, as far as what the decoder can see, the tensor that it operates on is the TPR of Eq. 3. To generate formal relational tuples from natural-language descriptions, a learning strategy for the mapping between the two structures is particularly important. As shown in Figure 1, we formalize the learning scheme as learning a mapping function f_mapping(·), which, given a structural representation of the natural-language input, T_S, outputs a tensor T_F from which the structural representation of the output can be generated. At the role level of description, there is nothing more to be said about this mapping; how it is modeled at the neural-network level is discussed in Sec. 3.2.1. As shown in Figure 1, the TP-N2F model is implemented with three steps: encoding, mapping, and decoding.
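A small numerical sketch of the order-3 tuple encoding and its two-step unbinding may help; the dimensions and vectors below are arbitrary illustrations (and, unlike in the model, the relation unbinding vector is computed exactly from the known relation vector rather than generated by a learned linear map).

```python
import numpy as np

d_A, d_Rel, d_P = 10, 6, 2      # argument, relation, position dims (illustrative)
rng = np.random.default_rng(1)

a1, a2 = rng.normal(size=d_A), rng.normal(size=d_A)   # argument embeddings a_1, a_2
r = rng.normal(size=d_Rel)                            # relation embedding
P = rng.normal(size=(d_P, 2))                         # position vectors p_1, p_2 as columns

# Eq. 3: H = a_1 ⊗ r ⊗ p_1 + a_2 ⊗ r ⊗ p_2  (an order-3 tensor)
H = (np.einsum('a,r,p->arp', a1, r, P[:, 0]) +
     np.einsum('a,r,p->arp', a2, r, P[:, 1]))

# Unbinding vectors for the fixed positional roles and for the relation
P_unbind = np.linalg.pinv(P)     # rows satisfy p_i . p'_j = delta_ij
r_unbind = r / (r @ r)           # exact relation unbinding vector: r . r' = 1

# Eq. 4: first unbinding step, B_1 = H · p'_1  (an order-2 tensor of shape d_A x d_Rel)
B1 = np.einsum('arp,p->ar', H, P_unbind[0])
# Eq. 5: second unbinding step, a_1 = B_1 · r'
a1_recovered = B1 @ r_unbind
print(np.allclose(a1_recovered, a1))                  # True
```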
The encoding step is implemented by the TP-N2F natural-language encoder (TP-N2F Encoder), which takes the sequence of word tokens as inputs, and encodes them via TPR binding according to the TP-N2F role scheme for natural-language input given in Sec. 3.1.1. The mapping step is implemented by an MLP called the Reasoning Module, which takes the encoding produced by the TP-N2F Encoder as input. It learns to map the natural-language-structure encoding of the input to a representation that will be processed under the assumption that it follows the role scheme for output relational-tuples specified in Sec. 3.1.2: the model needs to learn to produce TPRs such that this processing generates correct output programs. The decoding step is implemented by the TP-N2F relational tuples decoder (TP-N2F Decoder), which takes the output from the Reasoning Module (Sec. 3.1.3) and decodes the target sequence of relational tuples via TPR unbinding. The TP-N2F Decoder utilizes an attention mechanism over the individual-word TPRs T t produced by the TP-N2F Encoder. The detailed implementations are introduced below. The TP-N2F encoder follows the role scheme in Sec. 3.1.1 to encode each word token w t by softselecting one of n F fillers and one of n R roles. The fillers and roles are embedded as vectors. These embedding vectors, and the functions for selecting fillers and roles, are learned by two LSTMs, the Filler-LSTM and the Role-LSTM. (See Figure 2.) At each time-step t, the Filler-LSTM and the Role-LSTM take a learned word-token embedding w t as input. The hidden state of the Filler-LSTM, h t F, is used to compute softmax scores u F k over n F filler slots, and a filler vector f t = F u F is computed from the softmax scores (recall from Sec. 2 that F is the learned matrix of filler vectors). Similarly, a role vector is computed from the hidden state of the Role-LSTM, h t R. f F and f R denote the functions that generate f t and r t from the hidden states of the two LSTMs. The token w t is encoded as T t, the tensor product of f t and r t. T t replaces the hidden vector in each LSTM and is passed to the next time step, together with the LSTM cell-state vector c t: see-. After encoding the whole sequence, the TP-N2F encoder outputs the sum of all tensor products t T t to the next module. We use an MLP, called the Reasoning MLP, for TPR mapping; it takes an order-2 TPR from the encoder and maps it to the initial state of the decoder. Detailed equations and implementation are provided in Sec. A.2.1 of the Appendix. Figure 2: Implementation of the TP-N2F encoder. The TP-N2F Decoder is an RNN that takes the output from the reasoning MLP as its initial hidden state for generating a sequence of relational tuples (Figure 3). This decoder contains an attentional LSTM called the Tuple-LSTM which feeds an unbinding module: attention operates on the context vector of the encoder, consisting of all individual encoder outputs {T t}. The hidden-state H of the Tuple-LSTM is treated as a TPR of a relational tuple and is unbound to a relation and arguments. During training, the Tuple-LSTM needs to learn a way to make H suitably approximate a TPR. At each time step t, the hidden state H t of the Tuple-LSTM with attention (The version in) is fed as input to the unbinding module, which regards H t as if it were the TPR of a relational tuple with m arguments possessing the role structure described in Sec. 
3.1.2. (In Figure 3, the assumed hypothetical form of H^t, as well as that of B^t_i below, is shown in a bubble with a dashed border.) To decode a binary relational tuple, the unbinding module decodes it from H^t using the two steps of TPR unbinding given in Eqs. 4-5. The positional unbinding vectors p'_i are learned during training and shared across all time steps. After the first unbinding step, i.e., the inner product of H^t with p'_i, we get tensors B^t_i. These are treated as the TPRs of two arguments a^t_i bound to a relation r^t. A relational unbinding vector r'^t is computed by a linear function from the sum of the B^t_i and used to compute the inner product with each B^t_i to yield a^t_i, which are treated as the embeddings of the argument vectors. Based on TPR theory, r'^t is passed to a linear function to get r^t as the embedding of a relation vector. Finally, the softmax probability distribution over symbolic outputs is computed for relations and arguments separately. In generation, the most probable symbol is selected. (More detailed equations are in Appendix Sec. A.2.3.) Figure 3: Implementation of the TP-N2F decoder. During inference, natural-language questions are encoded via the encoder, and the Reasoning MLP maps the output of the encoder to the input of the decoder. We use greedy decoding (selecting the most likely class) to decode one relation and its arguments. The relation and argument vectors are concatenated to construct a new vector as the input for the Tuple-LSTM in the next step. TP-N2F is trained using back-propagation with the Adam optimizer and teacher forcing. At each time step, the ground-truth relational tuple is provided as the input for the next time step. As the TP-N2F decoder decodes a relational tuple at each time step, the relation token is selected only from the relation vocabulary and the argument tokens only from the argument vocabulary. For an input I that generates N output relational tuples, the loss is the sum, over the N tuples, of the cross-entropy losses between the true labels and the predicted tokens for the relation and for each argument. The proposed TP-N2F model is evaluated on two N2F tasks: generating operation sequences to solve math problems and generating Lisp programs. In both tasks, TP-N2F achieves state-of-the-art performance. We further analyze the behavior of the unbinding relation vectors in the proposed model. Results on each task and the analysis of the unbinding relation vectors are presented in turn. Details of experiments and datasets are described in Sec. A.1 in the Appendix. Given a natural-language math problem, we need to generate a sequence of operations (operators and corresponding arguments) from a set of operators and arguments to solve the given problem. Each operation is regarded as a relational tuple by viewing the operator as the relation, e.g., (add, n1, n2). We test TP-N2F for this task on the MathQA dataset . The MathQA dataset consists of about 37k math word problems, each with a corresponding list of multi-choice options and the corresponding operation sequence. In this task, TP-N2F is deployed to generate the operation sequence given the question. The generated operations are executed with the execution script from to select a multi-choice answer. As there are about 30% noisy data (where the execution script returns the wrong answer when given the ground-truth program; see Sec.
A.1 of the Appendix), we report both execution accuracy (of the final multi-choice answer after running the execution engine) and operation sequence accuracy (where the generated operation sequence must match the ground truth sequence exactly). TP-N2F is compared to a baseline provided by the seq2prog model in , an LSTM-based seq2seq model with attention. Our model outperforms both the original seq2prog, designated SEQ2PROG-orig, and the best reimplemented seq2prog after an extensive hyperparameter search, designated SEQ2PROG-best. Table 1 presents the . To verify the importance of the TP-N2F encoder and decoder, we conducted experiments to replace either the encoder with a standard LSTM (denoted LSTM2TP) or the decoder with a standard attentional LSTM (denoted TP2LSTM). We observe that both the TPR components of TP-N2F are important for achieving the observed performance gain relative to the baseline. Generating Lisp programs requires sensitivity to structural information because Lisp code can be regarded as tree-structured. Given a natural-language query, we need to generate code containing function calls with parameters. Each function call is a relational tuple, which has a function as the relation and parameters as arguments. We evaluate our model on the AlgoLisp dataset for this task and achieve state-of-the-art performance. The AlgoLisp dataset is a program synthesis dataset. Each sample contains a problem description, a corresponding Lisp program tree, and 10 input-output testing pairs. We parse the program tree into a straight-line sequence of tuples (same style as in MathQA). AlgoLisp provides an execution script to run the generated program and has three evaluation metrics: the accuracy of passing all test cases (Acc), the accuracy of passing 50% of test cases (50p-Acc), and the accuracy of generating an exactly matching program (M-Acc). AlgoLisp has about 10% noisy data (details in the Appendix), so we report both on the full test set and the cleaned test set (in which all noisy testing samples are removed). TP-N2F is compared with an LSTM seq2seq with attention model, the Seq2Tree model in , and a seq2seq model with a pre-trained tree decoder from the Tree2Tree autoencoder (SAPS) reported in. As shown in Table 2, TP-N2F outperforms all existing models on both the full test set and the cleaned test set. Ablation experiments with TP2LSTM and LSTM2TP show that, for this task, the TP-N2F Decoder is more helpful than TP-N2F Encoder. This may be because lisp codes rely more heavily on structure representations. To interpret the structure learned by the model, we extract the trained unbinding relation vectors from the TP-N2F Decoder and reduce the dimension of vectors via Principal Component Analysis. Kmeans clustering on the average vectors are presented in Figure 4 and Figure 5 (in Appendix A.6). Results show that unbinding vectors for operators or functions with similar semantics tend to be close to each other. For example, with 5 clusters in the MathQA dataset, arithmetic operators such as add, subtract, multiply, divide are clustered together, and operators related to square or volume of geometry are clustered together. With 4 clusters in the AlgoLisp dataset, partial/lambda functions and sort functions are in one cluster, and string processing functions are clustered together. 
Note that there is no direct supervision to inform the model about the nature of the operations, and the TP-N2F decoder has induced this role structure using weak supervision signals from question/operationsequence-answer pairs. More clustering are presented in the Appendix A.6. N2F tasks include many different subtasks such as symbolic reasoning or semantic parsing (; ; ; ; ;). These tasks require models with strong structure-learning ability. TPR is a promising technique for encoding symbolic structural information and modeling symbolic reasoning in vector space. TPR binding has been used for encoding and exploring grammatical structural information of natural language . TPR unbinding has also been used to generate natural language captions from images . Some researchers use TPRs for modeling deductive reasoning processes both on a rule-based model and deep learning models in vector space (; ;). However, none of these previous models takes advantage of combining TPR binding and TPR unbinding to learn structure representation mappings explicitly, as done in our model. Although researchers are paying increasing attention to N2F tasks, most of the proposed models either do not encode structural information explicitly or are specialized to particular tasks. Our proposed TP-N2F neural model can be applied to many tasks. In this paper we propose a new scheme for neural-symbolic relational representations and a new architecture, TP-N2F, for formal-language generation from natural-language descriptions. To our knowledge, TP-N2F is the first model that combines TPR binding and TPR unbinding in the encoderdecoder fashion. TP-N2F achieves the state-of-the-art on two instances of N2F tasks, showing significant structure learning ability. The show that both the TP-N2F encoder and the TP-N2F decoder are important for improving natural-to formal-language generation. We believe that the interpretation and symbolic structure encoding of TPRs are a promising direction for future work. We also plan to combine large-scale deep learning models such as BERT with TP-N2F to take advantage of structure learning for other generation tasks. In this section, we present details of the experiments of TP-N2F on the two datasets. We present the implementation of TP-N2F on each dataset. The MathQA dataset consists of about 37k math word problems ((80/12/8)% training/dev/testing problems), each with a corresponding list of multi-choice options and an straight-line operation sequence program to solve the problem. An example from the dataset is presented in the Appendix A.4. In this task, TP-N2F is deployed to generate the operation sequence given the question. The generated operations are executed to generate the solution for the given math problem. We use the execution script from to execute the generated operation sequence and compute the multi-choice accuracy for each problem. During our experiments we observed that there are about 30% noisy examples (on which the execution script fails to get the correct answer on the ground truth program). Therefore, we report both execution accuracy (the final multi-choice answer after running the execution engine) and operation sequence accuracy (where the generated operation sequence must match the ground truth sequence exactly). The AlgoLisp dataset is a program synthesis dataset, which has 79k/9k/10k training/dev/testing samples. Each sample contains a problem description, a corresponding Lisp program tree, and 10 input-output testing pairs. 
We parse the program tree into a straight-line sequence of commands from leaves to root and (as in MathQA) use the symbol # i to indicate the of the i th command (generated previously by the model). A dataset sample with our parsed command sequence is presented in the Appendix A.4. AlgoLisp provides an execution script to run the generated program and has three evaluation metrics: accuracy of passing all test cases (Acc), accuracy of passing 50% of test cases (50p-Acc), and accuracy of generating an exactly matched program (M-Acc). AlgoLisp has about 10% noise data (where the execution script fails to pass all test cases on the ground truth program), so we report both on the full test set and the cleaned test set (in which all noisy testing samples are removed). We use d R, n R, d F, n F to indicate the TP-N2F encoder hyperparameters, the dimension of role vectors, the number of roles, the dimension of filler vectors and the number of fillers. d Rel, d Arg, d P os indicate the TP-N2F decoder hyper-parameters, the dimension of relation vectors, the dimension of argument vectors, and the dimension of position vectors. In the experiment on the MathQA dataset, we use n F = 150, n R = 50, d F = 30, d R = 20, d Rel = 20, d Arg = 10, d P os = 5 and we train the model for 60 epochs with learning rate 0.00115. The reasoning module only contains one layer. As most of the math operators in this dataset are binary, we replace all operators taking three arguments with a set of binary operators based on hand-encoded rules, and for all operators taking one argument, a padding symbol is appended. For the baseline SEQ2PROG-orig, TP2LSTM and LSTM2TP, we use hidden size 100, single-direction, one-layer LSTM. For the SEQ2PROG-best, we performed a hyperparameter search on the hidden size for both encoder and decoder; the best score is reported. In the experiment on the AlgoLisp dataset, we use n F = 150, n R = 50, d F = 30, d R = 30, d Rel = 30, d Arg = 20, d P os = 5 and we train the model for 50 epochs with learning rate 0.00115. We also use one-layer in the reasoning module like in MathQA. For this dataset, most function calls take three arguments so we simply add padding symbols for those functions with fewer than three arguments. A.2.1 TP-N2F ENCODER Atten is the attention mechanism used in , which computes the dot product between h t input and each T t. Then a linear function is used on the concatenation of h t input and the softmax scores on all dot products to generate H t. The following equations show the attention mechanism: score is the score function of the attention. In this paper, the score function is dot product. At each timestep t, the 2-step unbinding process described in Sec. 3.1.2 operates first on an encoding of the triple as a whole, H, using two unbinding vectors p i that are learned but fixed for all tuples. This first unbinding gives an encoding of the two operator-argument bindings, B i. The second unbinding operates on the B i, using a generated unbinding vector for the operator, r, giving encodings of the arguments, a i. The generated unbinding vector for the operator, r, and the generated encodings of the arguments, a i, each produce a probability distribution over symbolic operator outputs Rel and symbolic argument outputs Arg i; these probabilities are used in the cross-entropy loss function. For generating a single symbolic output, the most-probable symbols are selected. 
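To illustrate the two-step unbinding process described above, below is a PyTorch-style sketch of an unbinding module (our own illustration with assumed shapes and module names; the authors' exact parameterization, e.g., how the linear function over the sum of the B_i is defined, is given in Appendix A.2.3 and may differ).

```python
import torch
import torch.nn as nn

class UnbindingModule(nn.Module):
    """Sketch of two-step TPR unbinding applied to the decoder state H^t."""
    def __init__(self, d_arg, d_rel, d_pos, n_rel_vocab, n_arg_vocab, n_args=2):
        super().__init__()
        self.p_unbind = nn.Parameter(torch.randn(n_args, d_pos))   # learned positional unbinding vectors p'_i
        self.to_rel_unbind = nn.Linear(d_arg * d_rel, d_rel)       # linear map: sum_i B_i -> r'^t (flattened)
        self.to_rel_embed = nn.Linear(d_rel, d_rel)                # linear map: r'^t -> r^t
        self.rel_scores = nn.Linear(d_rel, n_rel_vocab)            # logits over the relation vocabulary
        self.arg_scores = nn.Linear(d_arg, n_arg_vocab)            # logits over the argument vocabulary

    def forward(self, H):                                          # H: (batch, d_arg, d_rel, d_pos)
        # Step 1 (Eq. 4): B_i = H . p'_i, one order-2 tensor per positional role
        B = torch.einsum('barp,ip->biar', H, self.p_unbind)        # (batch, n_args, d_arg, d_rel)
        # Relational unbinding vector generated from the sum of the B_i
        r_unbind = self.to_rel_unbind(B.sum(dim=1).flatten(start_dim=1))
        # Step 2 (Eq. 5): a_i = B_i . r'^t
        a = torch.einsum('biar,br->bia', B, r_unbind)              # argument embeddings
        rel_logits = self.rel_scores(self.to_rel_embed(r_unbind))  # distribution over relations
        arg_logits = self.arg_scores(a)                            # distribution over arguments per position
        return rel_logits, arg_logits
```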
The dimensions are: Question: Consider a number a, compute factorial of a TP-N2F(correct): (¡=,arg1,1) (-,arg1,1) (self,#1) (*,#2,arg1) (if,#0,1,#3) (lambda1,#4) (invoke1,#5,a) LSTM(wrong): (¡=,arg1,1) (-,arg1,1) (self,#1) (*,#2,arg1) (if,#0,1,#3) (lambda1,#4) (len,a) (invoke1,#5,#6) Question: Given an array of numbers and numbers b and c, add c to elements of the product of elements of the given array and b, what is the product of elements of the given array and b? TP-N2F(correct): (partial, b,*) (partial1,c,+) (map,a,#0) (map,#2,#1) LSTM(wrong): (partial1,b,+) (partial1,c,+) (map,a,#0) (map,#2,#1) Question: You are given an array of numbers a and numbers b, c and d, let how many times you can replace the median in a with sum of its digits before it becomes a single digit number and b be the coordinates of one end and c and d be the coordinates of another end of segment e, your task is to find the length of segment e rounded down TP-N2F(correct): (digits arg1) (len #0) (== #1 1) (digits arg1) (reduce #3 0 +) (self #4) (+ 1 #5) (if #2 0 #6) (lambda1 #7) (sort a) (len a) (/ #10 2) (deref #9 #11) (invoke1 #8 #12) (-#13 c) (digits arg1) (len #15) (== #16 1) (digits arg1) (reduce #18 0 +) (self #19) (+ 1 #20) (if #17 0 #21) (lambda1 #22) (sort a) (len a) (/ #25 2) (deref #24 #26) (invoke1 #23 #27) (-#28 c) (* #14 #29) (-b d) (-b d) (* #31 #32) (+ #30 #33) (sqrt #34) (floor #35) LSTM(wrong): (digits arg1) (len #0) (== #1 1) (digits arg1) (reduce #3 0 +) (self #4) (+ 1 #5) (if #2 0 #6) (lambda1 #7) (sort a) (len a) (/ #10 2) (deref #9 #11) (invoke1 #8 #12 c) (-#13) (-b d) (-b d) (* #15 #16) (* #14 #17) (+ #18) (sqrt #19) (floor #20) Question: Given numbers a, b, c and e, let d be c, reverse digits in d, let a and the number in the range from 1 to b inclusive that has the maximum value when its digits are reversed be the coordinates of one end and d and e be the coordinates of another end of segment f, find the length of segment f squared TP-N2F(correct): (digits c) (reverse #0) (* arg1 10) (+ #2 arg2) (lambda2 #3) (reduce #1 0 #4) (-a #5) (digits c) (reverse #7) (* arg1 10) (+ #9 arg2) (lambda2 #10) (reduce #8 0 #11) (-a #12) (* #6 #13) (+ b 1) (range 0 #15) (digits arg1) (reverse #17) (* arg1 10) (+ #19 arg2) (lambda2 #20) (reduce #18 0 #21) (digits arg2) (reverse #23) (* arg1 10) (+ #25 arg2) (lambda2 #26) (reduce #24 0 #27) (¿ #22 #28) (if #29 arg1 arg2) (lambda2 #30) (reduce #16 0 #31) (-#32 e) (+ b 1) (range 0 #34) (digits arg1) (reverse #36) (* arg1 10) (+ #38 arg2) (lambda2 #39) (reduce #37 0 #40) (digits arg2) (reverse #42) (* arg1 10) (+ #44 arg2) (lambda2 #45) (reduce #43 0 #46) (¿ #41 #47) (if #48 arg1 arg2) (lambda2 #49) (reduce #35 0 #50) (-#51 e) (* #33 #52) (+ #14 #53) LSTM(wrong): (-a d) (-a d) (* #0 #1) (digits c) (reverse #3) (* arg1 10) (+ #5 arg2) (lambda2 #6) (reduce #4 0 #7) (-#8 e) (+ b 1) (range 0 #10) (digits arg1) (reverse #12) (* arg1 10) (+ #14 arg2) (lambda2 #15) (reduce #13 0 #16) (digits arg2) (reverse #18) (* arg1 10) (+ #20 arg2) (lambda2 #21) (reduce #19 0 #22) (¿ #17 #23) (if #24 arg1 arg2) (lambda2 #25) (reduce #11 0 #26) (-#27 e) (* #9 #28) (+ #2 #29) A.6 UNBINDING RELATION VECTOR CLUSTERING We run K-means clustering on both datasets with k = 3, 4, 5, 6 clusters and the are displayed in Figure 4 and Figure 5. As described before, unbinding-vectors for operators or functions with similar semantics tend to be closer to each other. 
For example, in the MathQA dataset, arithmetic operators such as add, subtract, multiply, divide are clustered together at middle, and operators related to geometry such as square or volume are clustered together at bottom left. In AlgoLisp dataset, basic arithmetic functions are clustered at middle, and string processing functions are clustered at right.
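A minimal sketch of how this analysis can be reproduced, assuming the trained relation unbinding vectors have been extracted into an array (the library choice, file name, and parameters below are ours, not specified by the paper):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# rel_unbind: (n_relations, d_rel) array of trained unbinding relation vectors,
# averaged per relation symbol over the evaluation set (assumed to be pre-extracted).
rel_unbind = np.load('relation_unbinding_vectors.npy')   # hypothetical file name

coords = PCA(n_components=2).fit_transform(rel_unbind)   # 2D coordinates for plotting
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(rel_unbind)

for k in range(5):
    members = np.where(labels == k)[0]
    print(f'cluster {k}: relation indices {members.tolist()}')
```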
This paper we present a defogger, a model that learns to predict future hidden information from partial observations. We formulate this model in the context of forward modeling and leverage spatial and sequential constraints and correlations via convolutional neural networks and long short-term memory networks, respectively. We evaluate our approach on a large dataset of human games of StarCraft: Brood War, a real-time strategy video game. Our models consistently beat strong rule-based baselines and qualitatively produce sensible future game states. We consider the problem of joint state estimation and next-state prediction in partially observable environments with complex dynamics. We take as a concrete example the problem of defogging in the real-time strategy (RTS) video game StarCraft, which we define as predicting the features of the game state that are hidden to the player. Forward modeling, the prediction of what is going to happen next, is a core enabler both for reactive control and for longer term planning. Many researchers are attempting to build and create algorithms that are able to model the future, especially in next frame video prediction and robotic planning BID9 BID0 One particular difficulty of forward modeling is to deal with the uncertainty of making a prediction with only a partial model and a partial view of the world. BID4; BID3.In RTS games such as StarCraft, players must build an economy and control agents, called units, on a 2 dimensional grid to overcome their opponents. Several inherent limitations of any real-world setting are made explicit in such RTS games. First, by the "fog of war" which only allows players to see the surroundings of their own units and are thus unable to fully access the true game state. Second, the low-level dynamics are extremely complex, because several hundreds of agents interact together. However, there is an implicit spatio-temporal structure that makes long-term reasonning depend mostly on lower-resolution abstractions that can be obtained by averaging fine-grained characteristics over time and space. This poses a challenge for both human and computer players alike and predicting hidden information is a key step towards efficient planning in such environments, In this paper, as a first step towards learning a fully-featured forward model of the environment, the task we propose is to uncover hidden information and to predict the next state from observational data. We present a comprehensive analysis of a StarCraft Defogger, which predict features of the game at different levels of granularity: global features of the game such as the buildings of the opponent, and local features such as army density averaged by regions. Starting from a map of the environment subsampled in time and space, we propose a deep architecture of stacked long short-term memory cells applied convolutionally in an encoder-decoder architecture to predict the full state at different spatial resolutions. Individual layers of convolutional LSTMs encode the dynamics of the local features, and are aggregated in subsequent layers to model lower-resolution movements and global features. Trained on a large dataset of human replays BID8, the model significantly outperforms strong rule-based baselines on several metrics. In this paper, we look at inferring the hidden information, from partial observations, either to make use of the inferred values directly, or to use them to decide what subsequent information to gather, i.e. lift our known unknowns. 
We do so with a forward model (see Section 4), because it allows us to leverage the spatio-temporal structure of the game. StarCraft is an imperfect information game with a rich body of research BID13. Few works attempt to deal with the fog of war: by using particle filtering to estimate the position of the opponent units BID18, or by predicting where, how and when the opponent will attack BID15. Poker is another imperfect information game, on which research started jointly with the field of game theory. BID11 yielded AIs that beat human professionals. One can draw a distant parallel between the general problem of defogging and instance segmentation in vision (in the presence of occlusions) BID14. Learning a forward model can be used to do video frame prediction BID10. Methods predicting in feature space include BID9, in which the authors predict future frames in the semantic segmentation space. Forward models have a rich history in control theory, where various parametric and nonparametric methods were proposed for optimal filtering of different classes of stochastic systems BID5. Attempts were made to treat partially observed cases BID7, but not in the case of branching processes. BID12 enhances the exploration of a deep reinforcement learning algorithm with a learned predictive model in Atari. BID19 learn a forward model (jointly with a model-free controller) and use it for planning in Sokoban and MiniPacman. Finally, BID17 estimate combat outcomes in StarCraft by employing forward models in a Monte Carlo Tree Search algorithm. 3 "DEFOGGING": PROBLEM STATEMENT "Defogging" consists of recovering the whole state s_t from partial observations o_{1...t} in imperfect information games. We explain why it matters in RTS games. Let {U}_n denote the set of all sets {u_1, ..., u_n} consisting of n units, with u_i ∈ U and U the set of unit types. As the state space we use S = ∪_{n=1}^N {U}_n, where N is the maximum number of a player's units. We consider two-player games for which the full game state at time t is s_t = (s_t^(1), s_t^(2)) ∈ S^2. At each time-step t, each player p receives observations of state s_t. We assume that each player fully observes her own state s_t^(p), while the opponent's units are only observed when they fall within the vision of the player's own units. These visibility rules for player p can be represented as a function ζ^(p): S^2 → S which maps a state s_t to the subset of units of s_t that are visible to player p. The task of defogging, akin to filtering, consists in deriving the full state s_t from the observations of one player o_{1...t}^(p). Hence, the partial observation setting considered here is different from the classical one, where only diffusion in the state space considered or the moments of branching are supposed to be known. In Real-Time Strategy games, the task of defogging poses several challenges. First, it requires remembering virtually everything that was shown even when those units are hidden under the fog of war again (long-term memory), with as many characteristics as possible, as seeing a unit of a given type at a given location at a given time may represent different information sets (or lead to different belief states) than if any of these characteristics was changed. Second, excellent performance in defogging can only be achieved by a model that can reason about this memory and apply inference steps that correspond to the game dynamics. Third, such a model can leverage the correlations of observations (due to the distribution over valid strategies), but should not overfit those. Finally, in most RTS games, inference in such a model from the raw state representation is a real computational burden.
As a consequence, it is necessary to formulate abstractions for making model training and evaluation tractable. Even though it is difficult to even estimate the set of valid states for s t with o 1...t precisely, humans have no difficulty in obtaining a useful estimate of the state of the opponent. It allows them to rule out large subsets of the possible states, which would reduce the computational cost of any algorithmic approach (even for model-free control, it would lower the variability in the input). Also, their estimate of the state of the opponent allows them to efficiently gather information (known unknowns). Having a good enough defogger would allow to test direct perfect information approaches in real RTS settings. Finally, human professional players exhibit theory of mind (recursive, counterfactual, reasoning about both players strategies) based on their inferred state. Algorithmic defoggers may enable a more game theoretic approach to the abstract strategic view of RTS games. We propose models based on an encoder-decoder architecture, as baselines for the defogging problem. We consider the formulated task as a parametric estimation problem. The goal is to reconstruct some coarse representation of s t from the observations o 1...t and predict both the number of units n in the opponent's state and their approximate states {u 1, . . . ., u n} that include unit type and location. We take two types of state representation in our approach: 1) unit counts per type and map region, and 2) unit type existence in a map region. The map is split into a grid of H × W cells. For each cell (c x, c y), we compute x (p) t,cx,cy ∈ N d which contains counts of units of player p for each unit type, and DISPLAYFORM0 t,cx,cy ∈ {0, 1} d that accounts for unit type presence or absence in the cell. Here d is the number of different unit types. To estimate parameters, we consider two types of loss functions: Huber ("smooth L1") loss L r (x, x) for unit type counts and binary cross-entropy loss L c (ŷ, y) for unit presence. We denoted x = (x We restricted the family of models to explore by having an encoder-decoder approach (as shown in FIG2), in which we encode x with a ConvNet (E), which does poolings in c x and c y coordinates if necessary so that we only have one vector per time-step. Then we apply a temporal inference mechanism and memory (R), for which we mostly experimented with recurrent neural networks. We tried using a simple temporal convolution, but did not notice significantly increased improvements in our metrics. We concatenate the localized embeddings of x with the output of R, and we decode with a ConvNet (D). Finally we apply a regression head (H r) and a classification head (H c) to each of the decoded vectors in each coordinate. In practice we do 2D spatial pooling before the classification head H c. DISPLAYFORM1 Because the individual statistics of each unit are difficult to predict on a macro level, and the resolution and dimensionality would be too high to be efficiently tractable, we downsample the state space into the counts of units per unit type, in evenly spaced blocks, at larger time steps than the game engine tick. The dimensions of the most popular (and representative) map are 512 × 512 walktiles, we do different models with spatially sum-pooled tiles from 32 × 32 with a striding of 32, up to 128 × 128 with a striding of 64 walktiles. 
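To ground the description of the two output heads and their losses, here is a minimal PyTorch-style sketch (shapes, module names, the choice of max-pooling, and the λ weighting between losses are our own assumptions, not the authors' implementation).

```python
import torch
import torch.nn as nn

class DefoggerHeads(nn.Module):
    """Regression head H_r (per-cell unit counts) and classification head H_c (global presence)."""
    def __init__(self, d_dec, n_unit_types):
        super().__init__()
        self.H_r = nn.Conv2d(d_dec, n_unit_types, kernel_size=1)   # counts per cell and unit type
        self.H_c = nn.Linear(d_dec, n_unit_types)                  # presence logits from pooled features

    def forward(self, dec):                     # dec: (B, d_dec, H, W) decoded feature map
        counts = self.H_r(dec)                  # (B, n_unit_types, H, W)
        pooled = dec.amax(dim=(2, 3))           # 2D spatial pooling before the classification head
        presence_logits = self.H_c(pooled)      # (B, n_unit_types)
        return counts, presence_logits

huber = nn.SmoothL1Loss()                       # L_r for unit-type counts
bce = nn.BCEWithLogitsLoss()                    # L_c for unit-type presence

def defogger_loss(counts, presence_logits, target_counts, target_presence, lam=1.0):
    # Weighted sum of the regression and classification losses (lam is a hyperparameter).
    return huber(counts, target_counts) + lam * bce(presence_logits, target_presence)
```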
Precisely, when the game map has a true height and width of H 0 and W 0, we apply a grid of size g to downsample it into a map of height H = H0 g and W = W0 g Our feature map of height H and width W is of size T × C × H × W, where T is time, C has the first half reserved for our own units and the second for enemy units. To produce coarse frame predictions, we do not need to do so at a small time difference. However, skipping a large number of frames between each prediction runs into the problem where during a period of time, a unit may appear and disappear into the fog of war, as we uncover a part but cover it up soon after. If this quick glance of the enemy is not featurized, we lose key information during the scouting process. Thus, we combine frames instead of simply skipping, and featurize units by their last seen location in a set of frames. With a step of s frames, we downsample the game in time to create T = T0 s, with the real game taking T 0 frames. The full game state holds auxiliary information that are essential to making good predictions in the future. The opponent faction (or race), if selected, is key to predicting the type of units that we need to predict. The map layout is key to predicting unit movement, as ground units cannot walk across cliffs, as well as give restrictions as to the starting locations (before any opponent units has been seen). We featurize the faction as a static embedding, replicated H × W times and concatenated to each time slice along the feature dimension We also have a ConvNet (M) that takes in features specific to each map: the walkability of each tile, buildability of each tile, ground height of each tile, and starting locations. This ConvNet is designed with the same grid size and stride as the featurizer, to ensure a H × W output size. We consider several different models, and we have done a search through model space to find the most effective models for this problem. We employ random hyperparameter search followed by manually guided random plus grid searches to find the best hyperparameters for our models. We did a preliminary search over grid sizes, and choose a grid of 32 × 32 predicting 15 seconds at a time to do most experiments on. We reason that 30 seconds is sufficient for some fast units to reach across the map, so it should be a good upper bound on how well we can expect a defogger to perform. We also present for a grid of 64 × 64, as well as models with time skips of 5 and 30 seconds. Finally, we fix the dataset such that we predict minutes 3 to 11, with the reasoning that the very early game is not as interesting since enemies have not been found, and any prediction might just be guessing, and the most interesting parts of the openings happen during this time frame. We choose not to use later than 11 minutes to avoid the long tail, since variability increases with game duration. We tried several different specifications of the model described in FIG2. We tried vanilla ConvNets, as well as one with enough striding such that the map is eventually downsampled into a 1 × 1 feature vector before R, thereby avoiding a pooling layer at the end of the encoder. We also try a conv-lstm encoder, where there are recurrent connections in the encoder. This is done by tiling the encoder recurrent layer one after the other E 1 → R 1 → E 2 → R 2 →...., where each encoder does some downsampling and each recurrent layer is replicated across the spatial grid. 
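A rough sketch of the featurization described above, pooling unit positions into grid-cell counts per type and combining s consecutive frames by keeping each unit's last seen location; the input format assumed here (lists of per-unit observations) is our own simplification, not the TorchCraft representation.

```python
import numpy as np

def featurize_frames(frames, map_h, map_w, g, n_types):
    """frames: list of s frames; each frame is a list of (unit_id, unit_type, x, y) observations
    in walktile coordinates. Returns a (n_types, H, W) count map with H = map_h // g, W = map_w // g.
    Later frames overwrite earlier ones, so a unit glimpsed briefly is still featurized at its
    last seen location rather than lost when the fog covers it again."""
    H, W = map_h // g, map_w // g
    last_seen = {}                                  # unit id -> (type, cell y, cell x)
    for frame in frames:
        for unit_id, unit_type, x, y in frame:
            last_seen[unit_id] = (unit_type, min(y // g, H - 1), min(x // g, W - 1))
    counts = np.zeros((n_types, H, W), dtype=np.float32)
    for unit_type, cy, cx in last_seen.values():
        counts[unit_type, cy, cx] += 1
    return counts
```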
The recurrent layer (R) was always an LSTM, except for a conv-only model in which we replaced it with a temporal convolution. The decoder (D) was always another ConvNet, whose properties where other hyperparameters, with the same activation function (or residuals) as the encoder, without LSTMs ever. Its padding was set at all layers to be half the kernel size of the convolution rounded down, so that it produces exaclty one output tile per input tile. We did random search over the space of hyperparameters. We searched over two optimizers (SGD, BID6), the regression loss (MSE, Huber), λ values, and predicting the target value directly or predicting the difference between input and target, number of dimensions for hidden units, number of layers. We also searched over using residual blocks, gated convolutions, different nonlinearities. In our final models, we noticed several trends. Generally, gated convolutions BID2 did the best, while residual blocks with an information bottleneck did not improve over ReLU (probably due to the limited depth of our models). Adam was the more robust optimizer and converges faster than SGD, though SGD might converge to a better minimum given that it was trained for long enough. We noticed that the most robust (across tasks) model was the conv-lstm. The simple model performed the worst, as too much information is lost in the pooling layer after E (before R), and this layer is hard to avoid due to the difference in sizes of StarCraft maps. The striding model could learn, but is much more sensitive to hyperparameters, as was the conv-only model. Additionally, the lack of a memory in conv only make it much more difficult to reason about long term dependencies, as each frame of the model must reason about details such as the location of the enemy base, and therefore is less computationally efficient. In reference to section 4.1 and FIG2, we report with: L r Huber, L c binary cross entropy, R LSTM, D vanilla ConvNet, and two different flavors of E. For conv-lstm, we show for a model with depth d, meaning that there are d instances of a d-convolution layers followed by downsampling and a recurrent layer (so a d = 2 corresponds to 4 convolutions, d = 3 to 9). For striding, we show for a model with depth 4, entailing that the input is downsampled by a factor of 16, so in most cases there is no global pooling at the end. We define four metrics as proxies for measuring the impact of defogging in RTS games, we present some baselines and the of our models on these metrics, on a human dataset of StarCraft: Brood War games. We use the train, valid, and test set given by BID8 to do forward modeling and defogging. The dataset includes 65k human games of StarCraft: Brood War at generally high skill level. We implemented the models in PyTorch using TorchCraft BID16. We considered a variety of different metrics to measure the performance of our model on. Two downstream StarCraft tasks obvious for a forward model is strategy prediction and tactics prediction. In strategy prediction, presence or absence of certain buildings is central to determining what units the opponent will produce. Thus, we can measure the prediction of all opponent buildings in a future frame. In tactics prediction, the key is to determine the approximate locations of enemy units in the future. Furthermore, we care the most about enemy units that we cannot see. 
However, it's unclear how to define the fog of war when doing this prediction, since the areas we have vision might change drastically between two skip frames, so we measure the error on both visible and hidden enemy units. Thus, we measure two metrics, one where we define the fog of war to be just where the input frame does not see units, and one where we simply observe all enemy units. Finally, to use this in control, we wish to minimize the number of miscounts of enemy hidden units. This in 4 metrics. All metrics but the first (g_op_b) are measured per type per tile. When we measure existence/absence from a regression head, we take a threshold on the regression outputs so that numbers above this threshold counts as existence. These are cross validated per model. The metrics are: g_op_b (global opponent buildings) Existence of each opponent building type on any tile, from the classification head.hid_u (hidden units) Existence of hidden units, necessarily belonging to your opponent, from the regression head.op_u (opponent units) Existence of all opponent units output from the regression head, note that we see some of the opponent units (but they can move or be destroyed).abs_diff (absolute difference) An L1 loss on the counts of enemy units, from the regression head, over true positives Table 1: Score of the models, and of the best baseline for each task (in F1). See Table 2 in Appendix for full details. The absolute difference metric is measured only on the true positives of the model, so we subtract the best baseline to each model score, because a high precision low recall score will have a weaker cost and baseline cost than a high recall model. Thus, as the baseline is different for each model, to give an order of magnitude, we display the lowest (i.e. best of) the best baselines on the eponym line. These abs_diff numbers can only compare the models (who beat all the baselines as they are all negatives). More negative is better than baseline. * We could not train a single model to do well on both the regression and classification heads, so we display for op_b from a striding model with slightly different weights on each head. To validate the strength of our models, we validate their performance against some hard-coded baselines, similar to what rule-based bots use traditionally in StarCraft: Brood War competitions. These baselines rely exclusively on what was previously seen, as well as some rules of the games (to infer hidden requirements of units that we saw).We rely on four different baselines to measure success:1. Input -The baseline predicts by copying the input frame.2. Perfect memory (PM) -The baseline remembers everything in the past, and units are never removed, maximizing recall.3. Perfect memory + rules (PM+R) -This baseline is designed to maximize g_op_b, by using perfect memory and game rules to infer the existence of unit types that are prequisites for unit types that are seen.4. Previous Seen (PS) -This baseline takes the position of the last seen unit in the map, which is what most rule based bots do in real games. When a location is revealed and no units are at the spot, the count is again reset to 0.In order to beat these baselines, our models have to learn a good correlation model on the units and buildings type, remember what it has seen before and understand the dynamics of the movement of the units. 
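For concreteness, here is one way the per-type, per-tile existence metrics could be computed from the regression head with a validated threshold (our own sketch; the paper does not provide the evaluation code).

```python
import numpy as np

def existence_f1(pred_counts, true_counts, threshold):
    """pred_counts, true_counts: (N, C, H, W) arrays of per-type, per-tile unit counts.
    A tile/type pair is predicted as 'existing' if the regressed count exceeds the threshold,
    which is cross-validated per model on the validation set."""
    pred = pred_counts > threshold
    true = true_counts > 0
    tp = np.logical_and(pred, true).sum()
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(true.sum(), 1)
    return 2 * precision * recall / max(precision + recall, 1e-8)

# Example threshold sweep on the validation set (0.1 increments up to 1.5, as in the text):
# best_t = max(np.arange(0.1, 1.6, 0.1), key=lambda t: existence_f1(val_pred, val_true, t))
```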
We report baselines and models scores according to the metrics described above, on 64 and 32 walktiles effective grids (striding), with time steps of 5, 15, and 30 seconds, in Table 1.Figure 2: Plots show unit counts of the specified type per map cell (darker dots correspond to higher counts). Top row of each plot shows model inputs (i.e. observed units), middle row shows model predicted unit distributions and bottom row shows real unit distributions. In the left two plots, we observe that our model sees only a few opponent units, and tries to predict the location of their army under the for of war, in places it had not seen units before. In the right two plots, we observe our model remembering the locations of the opponent base and army. To obtain the existence thresholds from a regression output, we run a sweep on the validation set on 0.1 increments up to 1.5, where the model predicts existence if it predicts more than this value in the output. This value is usually very low, indicating that our model is sure of grid locations with zero units. We also finetune the probability of predicting global opponent building existence the same way, in that we predict existence if the probability output by the model is greater than p. Generally, p tends to be slightly above 0.5 to most maximize the F1 score. We report the on the test set with the best thresholds on the validation set. We note that for g_op_b prediction, the baselines already do very well. It is hard to beat the best baseline, PM+R. However, most of our models have higher recall than the baseline, indicating that they predict many more unexpected buildings, at the expense of mispredicting existing buildings. We do best above baseline on predicting the global existence of opponent buildings 30 seconds in the future, on whatever grid size. Our models make the most gains above baseline on unit prediction. Since units often move very erratically, this is difficult for a baseline that only predicts the previous frame. In order to predict units well, the model must have a good understanding of the dynamics of the game as well as the possible strategies taken by players in the game. For our baselines, the more coarse the grid size the easier it is to predict unit movement, since small jitters won't change the featurization. Additionally, predicting closer in time also improves the , since units move less often. We do 24% better in F1 in predicting the baseline for hidden opponent units 15 seconds in the future with a grid size of 32. In predicting all opponent units, we do 19% better, since baselines are able to more easily predict the existence of units that are already seen. These are the most useful cases for a defogger, since predict tactical movements 15 seconds in the future can aid a bot controller in predicting what opponents will do next. Predicting 5 seconds is not as useful, as enemies will not move much, and predicting at a grid size of 64 is harder for our model to use, as units on the diagonals of a grid box might not even see each other. We notice that our models have much higher precision than the baselines, with some over twice the precision. This supports the observation that our models have a better usage of their memory than the baselines, and are able to remember objects, but also "forget" them based on visibility of their previous positions and other correlates. On all unit prediction tasks, our models beat the baseline by a significant amount. 
Finally, we try to give a metric to approximate an estimate of how well the model can do in control. During control, we wish to minimize the number of mispredictions of opponent units. However, we noticed that the absolute number of mispredicted units would average to 10s of thousands per game for the baselines, and only several thousand per game for our model. This is because if we continually mispredict, for example, using the perfect memory baseline, then the mispredictions would add up over the length of the game. This shows how bad the baselines are at the regression task compared to classification, often 10x or more off from our models. To create more sane outputs comparable to the outputs of our models, we only display the L1 score over the true positives of the model. Table 2: Score of the models, and of the best baseline for each task (in F1). The absolute difference metric is measured only on the true positives of the model, so we subtract the best baseline. Thus, the baseline is different for each model, so we do not display it. More negative is better than baseline.
In this work we study generalization of neural networks in gradient-based meta-learning by analyzing various properties of the objective landscapes. We experimentally demonstrate that as meta-training progresses, the meta-test solutions obtained by adapting the meta-train solution of the model to new tasks via few steps of gradient-based fine-tuning, become flatter, lower in loss, and further away from the meta-train solution. We also show that those meta-test solutions become flatter even as generalization starts to degrade, thus providing an experimental evidence against the correlation between generalization and flat minima in the paradigm of gradient-based meta-leaning. Furthermore, we provide empirical evidence that generalization to new tasks is correlated with the coherence between their adaptation trajectories in parameter space, measured by the average cosine similarity between task-specific trajectory directions, starting from a same meta-train solution. We also show that coherence of meta-test gradients, measured by the average inner product between the task-specific gradient vectors evaluated at meta-train solution, is also correlated with generalization. To address the problem of the few-shot learning, many meta-learning approaches have been proposed recently , , , and among others. In this work, we take steps towards understanding the characteristics of the landscapes of the loss functions, and their relation to generalization, in the context of gradient-based few-shot meta-learning. While we are interested in understanding the properties of optimization landscapes that are linked to generalization in gradient-based meta-learning in general, we focus our experimental work here within a setup that follows the recently proposed Model Agnostic Meta-Learning (MAML) algorithm . The MAML algorithm is a good candidate for studying gradient-based meta-learning because of its independence from the underlying network architecture. Our main insights and contributions can be summarized as follows: 1. As gradient-based meta-training progresses: • the adapted meta-test solutions become flatter on average, while the opposite occurs when using a finetuning baseline. • the adapted final solutions reach lower average support loss values, which never increases, while the opposite occurs when using a finetuning baseline. 2. When generalization starts to degrade due to overtraining, meta-test solutions keep getting flatter, implying that, in the context of gradient-based meta-learning, flatness of minima is not correlated with generalization to new tasks. 3. We empirically show that generalization to new tasks is correlated with the coherence between their adaptation trajectories, measured by the average cosine similarity between trajectory directions. Also correlated with generalization is the coherence between metatest gradients, measured by the average inner product between meta-test gradient vectors evaluated at meta-train solution. We also show that this metric is correlated to generalization for few-shot regression tasks where the model must learn to fit sine function curves. Furthermore, based on these observations, we take initial steps to propose a regularizer for MAML based training and provide experimental evidence for its effectiveness. There has been extensive research efforts on studying the optimization landscapes of neural networks in the standard supervised learning setup. 
Such work has focused on the presence of saddle points versus local minima in high-dimensional landscapes, the role of overparametrization in generalization, and loss barriers between minima and their connectivity along low-loss paths, to name a few examples. One hypothesis that has gained popularity is that the flatness of the minima of the loss function found by stochastic gradient-based methods results in good generalization. Some of this work measures flatness by the spectral norm of the Hessian of the loss, with respect to the parameters, at a given point in the parameter space, while other work considers the determinant of the Hessian of the loss, with respect to the parameters, as the measure of flatness. In all of the work on flatness of minima cited above, the authors found that flatter minima correlate with better generalization. In contrast to previous work on understanding the objective landscapes of neural networks in the classical supervised learning paradigm, in our work we explore the properties of objective landscapes in the setting of gradient-based meta-learning. We consider the meta-learning scenario where we have a distribution over tasks p(T), and a model f parametrized by θ that must learn to adapt to tasks T_i sampled from p(T). The model is trained on a set of training tasks {T_i}^train and evaluated on a set of testing tasks {T_i}^test, all drawn from p(T). In this work we only consider classification tasks, with {T_i}^train and {T_i}^test using disjoint sets of classes to constitute their tasks. Here we consider the setting of k-shot learning, that is, when f adapts to a task T_i^test, it only has access to a set of few support samples. We then evaluate the model's performance on T_i^test using a new set of target samples D_i. By gradient-based meta-learning, we mean that f is trained using information about the gradient of a certain loss function L(f(D_i; θ)) on the tasks. Throughout this work the loss function is the cross-entropy between the predicted and true class. MAML learns an initial set of parameters θ such that, on average, a new task T_i can be learned from only a few support samples and a few steps of gradient descent (the inner-loop adaptation of Eq. 1), with the meta-update (Eq. 2) involving second-order derivatives. We also use the first-order approximation of MAML, where these second-order derivatives are omitted, and we refer to that other algorithm as First-Order MAML. For the finetuning baseline, the model is trained in a standard supervised learning setup: the model is trained to classify all the classes from the training split using a stochastic gradient-based optimization algorithm, its output layer size being equal to the number of meta-train classes. During evaluation on meta-test tasks, the model's final (fully-connected) layer is replaced by a layer with the appropriate size for the given meta-test task (e.g. for 5-way classification, the output layer has five logits), with its parameter values initialized to random values or with another initialization algorithm; then all the model parameters are optimized to the meta-test task, just like for the other meta-learning algorithms. Figure 1: Visualizations of metrics measuring properties of objective loss landscapes. The black arrows represent the descent on the support loss and the dotted lines represent the corresponding displacement in the parameter space. (a) Curvature of the loss for an adapted meta-test solution θ̃_i (for a task T_i) is measured as the spectral norm of the Hessian matrix of the loss. (b) Coherence of adaptation trajectories to different meta-test tasks is measured as the average cosine similarity for pairs of trajectory directions.
(c) A meta-train solution θ^s is characterized by the coherence of the meta-test gradients, measured by the average inner product for pairs of meta-test gradient vectors; a direction vector is obtained by dividing a trajectory displacement vector (from the meta-train solution θ^s to the meta-test solution θ̃_i) by its norm. In the context of gradient-based meta-learning, we define generalization as the model's ability to reach a high accuracy on a testing task T_i^test, evaluated with a set of target samples D_i, for several testing tasks. This accuracy is computed after f, starting from a given meta-training parametrization θ^s, has optimized its parameters to the task T_i^test using its support samples, reaching the solution θ̃_i^test. To study generalization, we consider the optimization landscapes L(f(D_i; θ)), and 1) the properties of these loss landscapes evaluated at the solutions θ̃_i^test; 2) the adaptation trajectories when f, starting from θ^s, adapts to those solutions; as well as 3) the properties of those landscapes evaluated at the meta-train solutions θ^s. See Figure 1 for a visualization of our different metrics. We follow the evolution of the metrics as meta-training progresses: after each epoch, which results in a different parametrization θ^s, we adapt f to several meta-test tasks, compute the metrics averaged over those tasks, and compare them with the generalization performance (the average target accuracy) on those tasks. We do not deal with the objective landscapes involved during meta-training, as this is beyond the scope of this work. From here on, we drop the superscript test from our notation, as we exclusively deal with objective landscapes involving meta-test tasks T_i, unless specified otherwise. We start our analysis of the objective loss landscapes by measuring properties of the landscapes at the adapted meta-test solutions θ̃_i. More concretely, we measure the curvature of the loss at those minima, and whether flatter minima are indicative of better generalization for the meta-test tasks. After s meta-training iterations, we have a model f parametrized by θ^s. During the meta-test, f must adapt to several meta-test tasks T_i independently. For a given T_i, f adapts by performing a few steps of full-batch gradient descent on the objective landscape L(f(D_i; θ)), using the set of support samples D_i, and reaches an adapted solution θ̃_i. Here we are interested in the curvature of L(f(D_i; θ̃_i)), that is, the objective landscape when evaluated at such a solution, and whether, on average, flatter solutions favour better generalization. Considering the Hessian matrix of this loss w.r.t. the model parameters, H(D_i; θ̃_i) = ∇²_θ L(f(D_i; θ̃_i)), we measure the curvature of the loss surface around θ̃_i using the spectral norm ‖·‖_σ of this Hessian matrix, as illustrated in Figure 1(a). We define the average loss curvature for meta-test solutions θ̃_i, obtained from a meta-train solution θ^s, as E[‖H(D_i; θ̃_i)‖_σ], where the expectation is taken over meta-test tasks. Note that we do not measure the curvature of the loss at θ^s, since θ^s is not a point of convergence of f for the meta-test tasks. In fact, at θ^s, since the model has not been adapted to the unseen meta-test classes, the target accuracy for the meta-test tasks is random chance on average. Thus, measuring the curvature of the meta-test support loss at θ^s does not relate to the notion of flatness of minima. Instead, in this work we characterize the meta-train solution θ^s by measuring the average inner product between the meta-test gradients, as explained later in Section 4.3. Other than analyzing the objective landscapes at the different minima reached when f adapts to new tasks, we also analyze the adaptation trajectories to those new tasks, and whether some similarity between them can be indicative of good generalization.
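Before moving on to trajectories, the curvature metric E[‖H(D_i; θ̃_i)‖_σ] above can be made concrete. The following is a minimal sketch, not the authors' code, of estimating the spectral norm of the Hessian of the support loss at an adapted solution via power iteration on Hessian-vector products; it assumes a PyTorch model, and `model`, `loss_fn`, `support_x`, `support_y` are illustrative placeholders.

```python
import torch

def hessian_spectral_norm(model, loss_fn, support_x, support_y, n_iters=20):
    params = [p for p in model.parameters() if p.requires_grad]
    loss = loss_fn(model(support_x), support_y)
    grads = torch.autograd.grad(loss, params, create_graph=True)

    # Start power iteration from a random unit vector in parameter space.
    v = [torch.randn_like(p) for p in params]
    norm = torch.sqrt(sum((x ** 2).sum() for x in v))
    v = [x / norm for x in v]

    sigma = 0.0
    for _ in range(n_iters):
        # Hessian-vector product: differentiate <grad, v> w.r.t. the parameters.
        gv = sum((g * x).sum() for g, x in zip(grads, v))
        hv = torch.autograd.grad(gv, params, retain_graph=True)
        sigma = torch.sqrt(sum((h ** 2).sum() for h in hv))  # ||Hv|| with ||v|| = 1
        v = [h / (sigma + 1e-12) for h in hv]
    return float(sigma)  # estimate of the spectral norm of H(D_i; theta_i)
```

Averaging this estimate over a set of meta-test tasks gives the quantity tracked across meta-training epochs.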
Let us consider a model f adapting to a task T_i by starting from θ^s and moving in parameter space by performing T steps of full-batch gradient descent with ∇_θ L(f(D_i; θ)) until reaching θ̃_i. We define the adaptation trajectory to a task T_i starting from θ^s as the sequence of iterates (θ^s, θ_i^(1), ..., θ̃_i). To simplify the analyses and alleviate some of the challenges in dealing with trajectories of multiple steps in a parameter space of very high dimension, we define the trajectory displacement vector (θ̃_i − θ^s). We define a trajectory direction vector θ̂_i as the unit vector θ̂_i = (θ̃_i − θ^s)/‖θ̃_i − θ^s‖. We define a metric for the coherence of adaptation trajectories to meta-test tasks T_i, starting from a meta-train solution θ^s, as the average inner product between their direction vectors, E[θ̂_i^T θ̂_j] (Eq. 5). The inner product between two meta-test trajectory direction vectors is illustrated in Figure 1(b). In addition to characterizing the adaptation trajectories at meta-test time, we characterize the objective landscapes at the meta-train solutions θ^s. More concretely, we measure the coherence of the meta-test gradients evaluated at θ^s. The coherence between the meta-test gradients can be viewed in relation to the metric for coherence of adaptation trajectories of Eq. 5 from Section 4.2. Even after simplifying an adaptation trajectory to its displacement vector, measuring distances between trajectories of multiple steps in the parameter space can be problematic: because of the symmetries within the architectures of neural networks, where neurons can be permuted, different parameterizations θ can represent identically the same function f that maps inputs to outputs. This problem is even more prevalent for networks with a higher number of parameters. Since here we ultimately care about the functional differences that f undergoes in the adaptation trajectories, measuring distances between functions in the parameter space, either using the Euclidean norm or the cosine similarity between direction vectors, can be problematic. Thus, to further simplify the analyses on adaptation trajectories, we can measure coherence between trajectories of only one step (T = 1). Since we are interested in the relation between such trajectories and the generalization performance of the models, we measure the target accuracy at those meta-test solutions obtained after only one step of gradient descent. We define those solutions as θ̃_i = θ^s − α g_i, where g_i = ∇_θ L(f(D_i; θ^s)) is the meta-test support gradient evaluated at θ^s. To make meta-training consistent with meta-testing, for the meta-learning algorithms we also use T = 1 for the inner-loop updates of Eq. 1. We thus measure coherence between the meta-test gradient vectors g_i that lead to those solutions. Note that the learning rate α is constant and is the same for all experiments on the same dataset. In contrast to Section 4.2, here we observed in practice that the average inner product between meta-test gradient vectors, and not just between their direction vectors, is more correlated to the average target accuracy. The resulting metric is thus the average inner product between meta-test gradients evaluated at θ^s, which we define as E[g_i^T g_j]. The inner product between two meta-test gradients, evaluated at θ^s, is illustrated in Figure 1(c). We show in the experimental results in Sections 5.2 and 5.3 that the coherence of the adaptation trajectories, as well as of the meta-test gradients, correlates with generalization on the meta-test tasks.
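A minimal sketch of the two coherence metrics defined above, assuming flattened parameter and gradient vectors (e.g. NumPy arrays); the function and variable names are illustrative and not taken from the original implementation.

```python
import numpy as np
from itertools import combinations

def trajectory_coherence(theta_s, adapted_solutions):
    """E[theta_hat_i^T theta_hat_j]: mean inner product between unit
    displacement (direction) vectors of pairs of adaptation trajectories."""
    directions = []
    for theta_i in adapted_solutions:
        d = theta_i - theta_s                       # trajectory displacement
        directions.append(d / (np.linalg.norm(d) + 1e-12))
    return np.mean([np.dot(a, b) for a, b in combinations(directions, 2)])

def gradient_coherence(meta_test_gradients):
    """E[g_i^T g_j]: mean inner product between pairs of meta-test support
    gradients evaluated at the meta-train solution theta_s."""
    return np.mean([np.dot(a, b) for a, b in combinations(meta_test_gradients, 2)])
```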
We apply our analyses to the two most widely used benchmark datasets for few-shot classification problems: the Omniglot and MiniImagenet datasets. We use the standardized CNN architecture commonly used for these benchmarks. We perform our experiments using three different gradient-based meta-learning algorithms: MAML, First-Order MAML and a Finetuning baseline. For more details on the meta-learning datasets, architecture and meta-learning hyperparameters, see Appendix A. We closely follow the usual experimental setup for these benchmarks. Except for the Finetuning baseline, the meta-learning algorithms use during meta-training the same number of ways and shots as during meta-testing. For our experiments, we follow the standard settings: for MiniImagenet, training and testing our models on 5-way 1-shot learning, as well as 5-way 5-shot; and for Omniglot, 5-way 1-shot, 5-way 5-shot, 20-way 1-shot, and 20-way 5-shot. Each experiment was repeated for five independent runs. For the meta-learning algorithms, the choice of hyperparameters closely follows the original MAML setup. For our finetuning baseline, most of the original MAML hyperparameters were left unchanged, as we want to compare the effect of the pre-training procedure; thus the architecture and meta-test procedures are kept fixed. We kept the same optimizer as for the meta-update of MAML (Adam), and performed a hyperparameter search on the mini-batch size to use, for each setting that we present. (For our reproduction of the meta-train and meta-test accuracy, see Figures 10a and 10b in Appendix B.1.) After each training epoch, we compute E[‖H(D_i; θ̃_i)‖_σ] using a fixed set of 60 randomly sampled meta-test tasks T_i. Across all settings, we observe that MAML first finds sharper solutions θ̃_i until reaching a peak; then, as the number of epochs grows, those solutions become flatter, as seen in Figure 2. We then verify whether this flattening correlates with generalization. On the contrary, and remarkably, even as f starts to show poorer generalization (see Figure 3a), the solutions keep getting flatter, as shown in Figure 3c. Thus, for the case of gradient-based meta-learning, flatter minima do not appear to favour better generalization. We perform the same analysis for our finetuning baseline (Figures 4a, 4c), with results suggesting that flatness of solutions might be more closely linked with E[L(f(D_i; θ̃_i))], the average level of support loss attained by the solutions θ̃_i (see Figures 4b and 3b), which is not an indicator of generalization. We also noted that across all settings involving MAML and First-Order MAML, this average meta-test support loss E[L(f(D_i; θ̃_i))] decreases monotonically as meta-training progresses.
We then performed the analysis on the other settings, with the same observations (see Figure 5b, and Figure 11 in Appendix B.2 for the full set of experiments). We also perform the analysis on the Finetuning baselines, which reach much lower target accuracies, and where we see that E[θ̂_i^T θ̂_j] remains much closer to zero, meaning that trajectory directions are roughly orthogonal to each other, akin to random vectors in high dimension (see Figure 6a). As an added observation, we also include our experimental results on the average meta-test trajectory norm E[‖θ̃_i − θ^s‖_2]. Despite the clear correlation between E[θ̂_i^T θ̂_j] and generalization for the settings that we show in Figures 5 and 11, we observed that for some other settings this relationship appears less linear. We conjecture that such behavior might arise from the difficulties of measuring distances between networks in the parameter space, as explained in Section 4.3. Here we present our results on the characterization of the objective landscapes at the meta-train solutions θ^s, by measuring the average inner product between meta-test gradient vectors g_i. We observe that coherence between meta-test gradients is correlated with generalization, which is consistent with the observations on the coherence of adaptation trajectories from Section 5.2. In Figure 7, we compare E[g_i^T g_j] with the average target accuracy across our experiments. This metric consistently correlates with generalization across the different settings. As in Section 5.2, for our finetuning baselines we observe very low coherence between meta-test gradients (see Figure 6b). Based on the observations we make in Sections 5.2 and 5.3, we propose to regularize gradient-based meta-learning as described in Section 6. Here we extend our analysis by presenting experimental results on E[g_i^T g_j] for few-shot regression. Specifically, we use a learning problem composed of training tasks and test tasks, where each task is a sine function parameterized as y = a sin(bx + c). We train a two-layer MLP which learns to fit meta-training sine functions using only a few support samples, and generalization implies reaching a low Mean Squared Error (MSE) averaged over the target sets of many meta-test sine functions. Results are presented in Figure 8. Similar to our analysis of the few-shot classification setting, we observe that in the case of few-shot regression, generalization (the negative average target MSE on meta-test tasks) strongly correlates with E[g_i^T g_j]. Although MAML has become a popular method for meta-training, there exists a significant generalization gap between its performance on the target sets of the meta-train tasks and of the meta-test tasks, and regularizing MAML has not received much research attention yet. Based on our observations on the coherence of adaptation trajectories, we take first steps in this direction by adding a regularization term based on E[θ̂_i^T θ̂_j]. Within a meta-training iteration, we first let f adapt to the n training tasks T_i following Eq. 1. We then compute the average direction vector θ̂_µ = (1/n) Σ_{i=1}^n θ̂_i. For each task, we want to reduce the angle defined by θ̂_i^T θ̂_µ, and thus introduce the penalty Ω = −θ̂_i^T θ̂_µ, obtaining the regularized solutions θ̃′_i. The outer-loop gradients are then computed, just like in MAML following Eq. 2, but using these regularized solutions θ̃′_i instead of θ̃_i. We obtain the variant of MAML with regularized inner-loop updates, as detailed in Algorithm 1 and sketched below.
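The following is a rough sketch of the inner-loop correction described above, one possible reading of Algorithm 1, with all parameters flattened into single vectors for clarity; it is not the authors' implementation, and names such as `regularize_inner_updates` and `gamma` are illustrative.

```python
import torch

def regularize_inner_updates(theta_s, adapted, gamma=0.01):
    """theta_s: flattened meta-train parameters; adapted: list of flattened
    per-task adapted solutions (theta_tilde_i). Returns the corrected
    solutions used in place of theta_tilde_i for the MAML outer-loop update."""
    theta_s = theta_s.detach()
    with torch.no_grad():
        directions = [(t - theta_s) / (t - theta_s).norm() for t in adapted]
        theta_mu = torch.stack(directions).mean(dim=0)   # average direction vector

    corrected = []
    for t in adapted:
        t = t.detach().requires_grad_(True)
        d = (t - theta_s) / (t - theta_s).norm()          # direction of this trajectory
        omega = -(d * theta_mu).sum()                     # penalty on the angle to theta_mu
        omega.backward()
        corrected.append((t - gamma * t.grad).detach())   # one correction step
    return corrected
```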
We used this regularizer with MAML (Second-Order) for "Omniglot 20-way 1-shot", thereby tackling the most challenging few-shot classification setting for Omniglot. As shown in Figure 9, we observed an increase in meta-test target accuracy: the performance increases from 94.05% to 95.38% (average over five trials, 600 test tasks each), providing a ∼23% relative reduction in meta-test target error. Algorithm 1 (Regularized MAML: added penalty on angles between inner-loop updates): 1: Sample a batch of n tasks T_i. 2: for all T_i do 3: Perform the inner-loop adaptation as in Eq. 1, obtaining θ̃_i. 4: end for. 5: Compute the average direction vector θ̂_µ = (1/n) Σ_{i=1}^n θ̂_i. 6: Compute the corrected inner-loop updates: 7: for all T_i do 8: θ̃′_i = θ̃_i − γ∇_θ Ω(θ), where Ω(θ) = −θ̂_i^T θ̂_µ. 9: end for. 10: Perform the meta-update as in Eq. 2, but using the corrected solutions θ̃′_i. We experimentally demonstrate that when using gradient-based meta-learning algorithms such as MAML, meta-test solutions, obtained after adapting neural networks to new tasks via few-shot learning, become flatter, lower in loss, and further away from the meta-train solution as meta-training progresses. We also show that those meta-test solutions keep getting flatter even when generalization starts to degrade, thus providing an experimental argument against the correlation between generalization and flat minima. More importantly, we empirically show that generalization to new tasks is correlated with the coherence between their adaptation trajectories, measured by the average cosine similarity between the adaptation trajectory directions, and also with the coherence between the meta-test gradients, measured by the average inner product between meta-test gradient vectors evaluated at the meta-train solution. We also show this correlation for few-shot regression tasks. Based on these observations, we take first steps towards regularizing MAML-based meta-training. As future work, we plan to test the effectiveness of this regularizer on various datasets, meta-learning problem settings, architectures and gradient-based meta-learning algorithms. A ADDITIONAL EXPERIMENTAL DETAILS. We use the standard architecture commonly used in few-shot classification work, consisting of 4 modules stacked on each other, each composed of 64 filters of 3 × 3 convolution, followed by a batch normalization layer, a ReLU activation layer, and a 2 × 2 max-pooling layer. With Omniglot, strided convolution is used instead of max-pooling, and images are downsampled to 28 × 28. With MiniImagenet, we use fewer filters to reduce overfitting: 48 filters, whereas the original MAML used 32. As the loss function to minimize, we use the cross-entropy between the predicted and target classes. The Omniglot dataset consists of a total of 1623 classes, each comprising 20 instances. The classes correspond to distinct characters taken from 50 different alphabets, but the taxonomy among characters is not used. The MiniImagenet dataset comprises 64 training classes, 12 validation classes and 24 test classes. Each of those classes was randomly sampled from the original Imagenet dataset, and each contains 600 instances with a reduced size of 84 × 84. We follow the same experimental setup as for training and testing the models using MAML and First-Order MAML.
During meta-training, the inner-loop updates are performed via five steps of full-batch gradient descent (except for Section 5.3, where T = 1), with a fixed learning rate α of 0.1 for Omniglot and 0.01 for MiniImagenet, while Adam is used as the optimizer for the meta-update, without any learning rate scheduling, using a meta-learning rate β of 0.001. At meta-test time, adaptation to a meta-test task is always performed with the same number of steps as for the meta-training inner-loop updates. For the Omniglot experiments, we use mini-batches of 16 and 8 tasks for the 1-shot and 5-shot settings respectively, while for the MiniImagenet experiments, we use batches of 4 and 2 tasks for the 1-shot and 5-shot settings respectively. Note also that, in k-shot learning for an m-way classification task T_i, the set of support samples D_i comprises k × m samples. Each meta-training epoch comprises 500 meta-training iterations. For the finetuning baseline, we kept the same hyperparameters for the Adam optimizer during meta-training, and for the adaptation during meta-test. We searched over the training hyperparameter values for the mini-batch size and the number of iterations per epoch. Experiments are run for 100 epochs each. In order to limit meta-overfitting and maximize the highest average meta-test target accuracy, the finetuning models see roughly 100 times less training data per epoch compared to a MAML training epoch. In order to evaluate the baseline on the 1-shot and 5-shot meta-test tasks, during training we used mini-batches of 64 images with 25 iterations per epoch for 1-shot learning, and mini-batches of 128 images with 12 iterations per epoch for 5-shot learning. At meta-test time, we use Xavier initialization to initialize the weights of the final layer. The few-shot regression problems follow the sine-curve fitting setup described in the main text, which is also present in earlier meta-learning work. The performance of the models trained with MAML and First-Order MAML, for the few-shot learning settings of Omniglot and MiniImagenet, is presented in Figure 10. The results include the target accuracies on meta-train tasks and on meta-test tasks (generalization) as meta-training progresses. The relation between the target accuracy on meta-test tasks and the angles between trajectory directions is presented in Figure 11. The relation between the target accuracy on meta-test tasks and the average inner product between meta-test gradients evaluated at the meta-train solution is presented in Figure 12.
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SygT21SFvB
We study generalization of neural networks in gradient-based meta- learning by analyzing various properties of the objective landscape.
There have been multiple attempts with variational auto-encoders (VAE) to learn powerful global representations of complex data using a combination of latent stochastic variables and an autoregressive model over the dimensions of the data. However, for the most challenging natural image tasks the purely autoregressive model with stochastic variables still outperform the combined stochastic autoregressive models. In this paper, we present simple additions to the VAE framework that generalize to natural images by embedding spatial information in the stochastic layers. We significantly improve the state-of-the-art on MNIST, OMNIGLOT, CIFAR10 and ImageNet when the feature map parameterization of the stochastic variables are combined with the autoregressive PixelCNN approach. Interestingly, we also observe close to state-of-the-art without the autoregressive part. This opens the possibility for high quality image generation with only one forward-pass. In representation learning the goal is to learn a posterior latent distribution that explains the observed data well BID0. Learning good representations from data can be used for various tasks such as generative modelling and semi-supervised learning (; BID14 BID14 BID23 . The decomposition of variational auto-encoders (VAE) (; BID14 provides the potential to disentangle the internal representation of the input data from local to global features through a hierarchy of stochastic latent variables. This makes the VAE an obvious candidate for learning good representations. However, in order to make inference tractable VAEs contain simplifying assumptions. This limits their ability to learn a good posterior latent representation. In complex data distributions with temporal dependencies (e.g. text, images and audio), the VAE assumption on conditional independence in the input distribution limits the ability to learn local structures. This has a significant impact on its generative performance, and thereby also the learned representations. Additionally, the one-layered VAE model with a N (0, I) latent prior poses serious constraints on the posterior complexity that the model is able to learn. A deep hierarchy of stochastic latent variables should endow the model with more expressiveness, but the VAE has a tendency to skip the learning of the higher representations since they pose a direct cost in its optimization term. There have been several attempts to eliminate the limitations of the VAE. Some concern formulating a more expressive variational distribution BID3 BID25 BID30 where other concerns learning a deeper hierarchy of latent variables. These contributions have ed in better performance, but are still limited when modelling complex data distributions where a conditional independence does not apply. When parameterizing the VAE decoder with recurrent neural networks BID17 BID1 BID7, the decoding architecture gets too powerful which in unused latent stochastic variables.The limitations of the VAE have spawned interest towards other generative models such as Generative Adversarial Networks (GAN) BID8 and the autoregressive Pixel-CNN/PixelRNN models BID33. These methods have proven powerful in learning good generative models, but the lack of stochastic latent variables makes them less suitable for representation learning purposes. Lately, we have seen several successful attempts to combine VAEs with PixelCNNs BID11. 
This results in a model where the global structure of the data is learned in the stochastic latent variables of the VAE and the local structure is learned in the PixelCNN. However, despite the additional complexity and potential extra expressiveness, these models do not outperform a simple autoregressive model BID32. (Figure 1: A visualization of FAME, where the solid lines denote the variational approximation (inference/encoder/recognition) network and dashed lines denote the generative model (decoder) network for training. When performing reconstructions during training, the input image is concatenated with the output of the generative model (blue), and when generating, the model follows a normal autoregressive sampling flow (red) while also using the stochastic latent variables z = z_1, ..., z_L. Both the variational approximation and the generative model follow a top-down hierarchical structure, which enables precision-weighted stochastic variables in the variational approximation.) In this paper we present the Feature Map Variational Auto-Encoder (FAME) that combines the top-down variational approximation presented in the Ladder Variational Auto-Encoder (LVAE) with a spatial (feature map) representation of the stochastic latent variables and an autoregressive decoder. We show that (i) FAME outperforms the previous state-of-the-art log-likelihood on MNIST, OMNIGLOT, CIFAR10 and ImageNet, (ii) FAME learns a deep hierarchy of stochastic latent variables without inactivated latent units, and (iii) by removing the autoregressive decoder FAME performs close to the previous state-of-the-art log-likelihood, suggesting that it is possible to get good-quality generation with just one forward pass. The VAE BID14 is a generative model with a hierarchy of stochastic latent variables: p_θ(x, z) = p_θ(x|z_1) p_θ(z_L) ∏_{i=1}^{L−1} p_θ(z_i|z_{i+1}), where z = z_1, ..., z_L, θ denotes the parameters, and L denotes the number of stochastic latent variable layers. The stochastic latent variables are usually modelled as conditionally independent Gaussian distributions with a diagonal covariance: p_θ(z_i|z_{i+1}) = N(z_i | µ_θ,i(z_{i+1}), diag(σ²_θ,i(z_{i+1}))), with p_θ(z_L) = N(z_L | 0, I). Since the posterior p(z|x) is often intractable, we introduce a variational approximation q_φ(z|x) with parameters φ. In the original VAE formulation, q_φ(z|x) is decomposed as a bottom-up inference path through the hierarchy of the stochastic layers: q_φ(z|x) = q_φ(z_1|x) ∏_{i=2}^{L} q_φ(z_i|z_{i−1}), with q_φ(z_1|x) = N(z_1 | µ_φ,1(x), diag(σ²_φ,1(x))) and q_φ(z_i|z_{i−1}) = N(z_i | µ_φ,i(z_{i−1}), diag(σ²_φ,i(z_{i−1}))). We optimize an evidence lower bound (ELBO) to the log-likelihood log p_θ(x) = log ∫_z p_θ(x, z) dz. BID2 introduced the importance weighted bound (Eq. 6): L_K(θ, φ; x) = E_{q_φ}[ log (1/K) Σ_{k=1}^{K} p_θ(x, z^(k)) / q_φ(z^(k)|x) ], with z^(k) ~ q_φ(z|x), and proved that log p_θ(x) ≥ L_{K+1}(θ, φ; x) ≥ L_K(θ, φ; x). For K = 1 the bound coincides with the standard ELBO: L(θ, φ; x) = L_1(θ, φ; x). The hierarchical structure of both the variational approximation and the generative model gives the VAE the expressiveness to learn different representations of the data throughout its stochastic variables, going from local (e.g. edges in images) to global features (e.g. class-specific information). However, we can apply a recursive argument BID22 to show that when optimizing with respect to the parameters θ and φ, the VAE is regularized towards not using its top-most latent variables, i.e. towards q_φ(z_L | ·) = p_θ(z_L). This is evident if we rewrite Equation 6 for K = 1 and isolate the KL-divergence term for the top-most layer: setting q_φ(z_L | ·) = p_θ(z_L) makes this term vanish and is a local maximum, so learning a useful representation in z_L can be disregarded throughout the remainder of training. The same argument can be used for all subsequent layers z_{2:L}; hence the VAE has a tendency to collapse towards not using the full hierarchy of latent variables.
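For reference, the importance-weighted bound L_K above can be estimated per batch as in the following minimal sketch; PyTorch-style `encoder`, `decoder` and `prior_log_prob` callables are assumed, and all names are illustrative placeholders rather than the paper's implementation.

```python
import math
import torch

def iwae_bound(x, encoder, decoder, prior_log_prob, K=5000):
    log_ws = []
    for _ in range(K):
        z, log_qz = encoder(x)               # z ~ q_phi(z|x) and log q_phi(z|x)
        log_px_z = decoder(x, z)             # log p_theta(x|z)
        log_ws.append(log_px_z + prior_log_prob(z) - log_qz)
    log_w = torch.stack(log_ws, dim=0)       # shape (K, batch)
    # L_K = log (1/K) sum_k w_k, computed stably with log-sum-exp.
    return torch.logsumexp(log_w, dim=0) - math.log(K)
```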
There are different ways to get around this collapse tendency, the simplest being to down-weight the KL-divergence with a temperature term BID1. This term is applied during the initial phase of optimization and thereby downscales the regularizing effect. However, this only works for a limited number of hierarchically stacked latent variables. Formulating a deep hierarchical VAE is not the only cause of inactive latent variables; they also occur when the parameterization of the decoder gets too powerful BID17 BID7. This can be caused by using autoregressive decoders such as p_θ(x_i | x_{<i}, z). Chen et al. circumvent this by introducing the Variational Lossy AutoEncoder (VLAE), where they define the architectures of the VAE and the autoregressive model such that they capture global and local structures respectively. They also utilize the power of more expressive posterior approximations using inverse autoregressive flows BID25 BID15. In the PixelVAE, BID11 takes a similar approach to defining the generative model but makes a simpler factorizing decomposition in the variational approximation, q_φ(z|x) = ∏_{i=1}^{L} q_φ(z_i|x), where the terms have some degree of parameter sharing. This formulation results in a less flexible model. In BID15 and BID11 we have seen that VAEs with simple decompositions of the stochastic latent variables and a powerful autoregressive decoder can result in good generative performance and representation learning. However, despite the additional cost of learning a VAE, we only see an improvement in the log-likelihood over the PixelCNN for small gray-scale image datasets. We propose FAME, which extends the VAE with a top-down variational approximation similar to the LVAE (Sønderby et al., 2016) combined with spatial stochastic latent layers and an autoregressive decoder, so that we ensure expressive latent stochastic variables learned in a deep hierarchy (cf. Figure 1). The LVAE does not change the generative model but changes the variational distribution to be top-down like the generative model. Furthermore, the variational distribution shares parameters with the generative model, which can be viewed as a precision-weighted (inverse variance) combination of information from the prior and the data distribution. The variational approximation is defined as q_φ(z|x) = q_φ(z_L|x) ∏_{i=1}^{L−1} q_φ(z_i|z_{i+1}, x). The stochastic latent variables are all fully factorized Gaussian distributions and are therefore modelled by q_φ(z_i|z_{i+1}, x) = N(z_i | µ_i, diag(σ²_i)). Instead of letting the mean and variance have separate parameters (as in the VAE), the LVAE lets the mean and variance be defined in terms of a function of x (the bottom-up data part) and the generative model (the top-down prior): σ²_i = 1 / (σ_φ,i^{−2} + σ_θ,i^{−2}) and µ_i = (µ_φ,i σ_φ,i^{−2} + µ_θ,i σ_θ,i^{−2}) / (σ_φ,i^{−2} + σ_θ,i^{−2}), where µ_φ,i = µ_φ,i(x) and µ_θ,i = µ_θ,i(z_{i+1}), and likewise for the variance functions. This precision-weighted parameterization has previously yielded excellent results for densely connected networks. We have seen multiple contributions (e.g. BID11) where VAEs (and similar models) have been parameterized with convolutions in the deterministic layers h_{i,j}, for j = 1, ..., M, where M is the number of layers connecting the stochastic latent variables z_i. The size of the spatial feature maps decreases towards the higher latent representations, and transposed convolutions are used in the generative model. In FAME we propose to extend this notion, so that each of the stochastic latent layers z_1, ..., z_{L−1} is also convolutional. This gives the model more expressiveness in the latent layers, since it keeps track of the spatial composition of the data (and thereby learns better representations).
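A minimal sketch of the precision-weighted combination defined above: the bottom-up (data-dependent) and top-down (prior) Gaussian estimates for a latent layer are merged by their inverse variances. Plain NumPy is assumed, and the function name is an illustrative placeholder.

```python
import numpy as np

def precision_weighted_merge(mu_phi, var_phi, mu_theta, var_theta):
    """Merge the bottom-up estimate (mu_phi, var_phi) and the top-down
    estimate (mu_theta, var_theta) into the parameters of q(z_i | z_{i+1}, x)."""
    prec_phi, prec_theta = 1.0 / var_phi, 1.0 / var_theta
    var_q = 1.0 / (prec_phi + prec_theta)
    mu_q = var_q * (mu_phi * prec_phi + mu_theta * prec_theta)
    return mu_q, var_q
```

The merge naturally favours whichever of the two estimates has the higher precision, which is how the data-dependent path and the prior share information in the top-down approximation.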
The top stochastic layer z L in FAME is a fully-connected dense layer, which makes it simpler to condition on a non-informative N (0, I) prior and sample from a learned generative model p θ (x, z). For the i = 1,..., L − 1 stochastic latent variables, the architecture is as follows: DISPLAYFORM0 where CNN and CONV denote a convolutional neural network and convolutional layer respectively. The top-most latent stochastic layer z L is computed by: DISPLAYFORM1 This new feature map parameterization of the stochastic layers should be viewed as a step towards a better variational model where the test ELBO and the amount of activated stochastic units are direct meaures hereof. From van den Oord et al. FIG0; we have seen that the PixelCNN architecture is very powerful in modelling a conditional distribution between pixels. In FAME we introduce a PixelCNN in the input dimension of the generative model p θ (x|z) (cf. Figure 1). During training we concatenate the input with the reconstruction data in the channel dimension and propagate it through the PixelCNN, similarly to what is done in BID11. When generating samples we fix a sample from the stochastic latent variables and generate the image pixel by pixel autoregressively. We test FAME on images from which we can compare with a wide range of generative models. First we evaluate on gray-scaled image datasets: statically and dynamically binarized MNIST and OMNIGLOT . The OMNIGLOT dataset is of particular interest due to the large variance amongst samples. Secondly we evaluate our models on natural image datasets: CIFAR10 BID18 Table 2: Negative log-likelihood performance on dynamically (left) and statically (right) binarized MNIST in nats. For the dynamically binarized MNIST show the for the FAME No Concatenation that has no dependency on the input image. The evidence lower-bound is computed with 5000 importance weighted samples L 5000 (θ, φ; x).modelling the gray-scaled images we assume a Bernoulli B distribution using a Sigmoid activation function as the output and for the natural images we assume a Categorical distribution π by applying the 256-way Softmax approach introduced in van den BID33. We evaluate the grayscaled images with L 5000 (cf. Equation 6) and due to runtime and space complexity we evaluate the natural images with L 1000.We use a hierarchy of 5 stochastic latent variables. In case of gray-scaled images the stochastic latent layers are dense with sizes 64, 32, 16, 8, 4 (equivalent to Sønderby et al. FORMULA0) and for the natural images they are spatial (cf. Table 1). There was no significant difference when using feature maps (as compared to dense layers) for modelling gray-scaled images. We apply batchnormalization BID12 and ReLU activation functions as the non-linearity between all hidden layers h i,j and use a simple PixelCNN as in van den BID33 with 4 residual blocks. Because of the concatenation in the autoregressive decoder (cf. Figure 1), generation is a cumbersome process that scales linearly with the amount of pixels in the input image. Therefore we have defined a slightly changed parameterization denoted FAME No Concatenation, where the concatenation with the input is omitted. The generation has no dependency on the input data distribution and can therefore be performed in one forward-pass through the generative model. For optimization we apply the Adam optimizer with a constant learning rate of 0.0003. We use 1 importance weighted sample and temperature scaling from.3 to 1. 
during the initial 200 epochs for gray-scaled images, and from 0.01 to 1 during the first 400 epochs for natural images. All models are trained using the same optimization scheme. The MNIST dataset serves as a good sanity check and has a myriad of previously published generative modelling benchmarks. We experienced a much faster convergence rate with FAME compared to training a regular LVAE. On the dynamically binarized MNIST dataset we see a significant improvement (cf. Table 2). (Table 1: The convolutional layers (Conv), filter size (F), depth (K), stride (S), dense layers (Dense) and dimensionality (D) used in defining FAME for gray-scaled and natural images. The architecture is defined such that we ensure dimensionality reduction throughout the hierarchical stochastic layers. The autoregressive decoder is a PixelCNN (van den Oord et al.) with a mask A convolution, F=7x7, K=64, S=1, followed by 4 residual blocks of convolutions with mask B, F=3x3, K=64, S=1. Finally, there are three non-residual layers of convolutions with mask B, where the last is the output layer with a Sigmoid activation for gray-scaled images and a 256-way Softmax for natural images.) However, on the statically binarized MNIST, the parameterization and current optimization strategy were unsuccessful in achieving state-of-the-art results (cf. Table 2). In FIG1 we see random samples drawn from a N(0, I) distribution and propagated through the decoder parameters θ. We also trained the FAME No Concatenation model, which performs nearly on par with the previously state-of-the-art VLAE model that, in comparison, utilizes a skip-connection from the input distribution to the generative decoder, p_θ(x_i | x_{<i}, z). This shows that a better parameterization of the VAE improves the performance without the need for tedious autoregressive generation. There was no significant difference in the KL(q(z|x)||p(z)) between FAME and FAME No Concatenation: FAME uses 10.85 nats on average to encode images, whereas FAME No Concatenation uses 12.29 nats. Figure 3: Negative log-likelihood performance on OMNIGLOT in nats — IWAE (Burda et al., 2015a) 103.38; LVAE (Sønderby et al.) 102.11; RBM (Burda et al., 2015b) 100.46; DVAE BID26 97.43; DRAW (Gregor et al.) 96.50; Conv DRAW (Gregor et al., 2016) 91.00; VLAE (Chen et al.) 89.83; FAME 82.54. The evidence lower bound is computed with 5000 importance weighted samples, L_5000(θ, φ; x). OMNIGLOT consists of 50 alphabets of handwritten characters, where each character has a limited number of samples. Each character has high variance, which makes it harder to fit a good generative model compared to MNIST. TAB3 presents the negative log-likelihood of FAME for OMNIGLOT and demonstrates a significant improvement over the previously published state-of-the-art. FIG1 shows generated samples from the learned θ parameter space. From Sønderby et al. we have seen that the LVAE is able to learn a much tighter L_1 ELBO compared to the VAE. For the MNIST experiments, the L_1 ELBO is at 80.11 nats compared to the L_5000 of 77.82 nats. Similarly, the OMNIGLOT L_1 ELBO is 86.62 nats compared to 82.54 nats. This shows significant improvements when using importance weighted samples and indicates that the parameterization of FAME can be done in a way so that the bound is even tighter. We also find that the top-most latent stochastic layer is not collapsing into its prior, since the KL(q(z_5|x)||p(z_5)) is 5.04 nats for MNIST and 3.67 nats for OMNIGLOT.
In order to analyze the contribution from the autoregressive decoder we experimented on masking the contribution from either the concatenated image or the output of the FAME decoder before feeding it into the PixelCNN layers (cf. Figure 1). In FIG2 we see the of reconstructing MNIST images when masking out the contribution from the stochastic variables and in FIG2 we mask out the contribution from the concatenated input image. We investigate the performance of FAME on two natural image datasets: CIFAR10 and ImageNet. Learning a generative model on natural images is more challenging, which is also why there are many tricks that can be done in regards to the autoregressive decoding BID32. However, since we are interested in the additional expressiveness of a LVAE parameterization with convolutional stochastic latent variables, we have chosen a suboptimal architecture for the autoregressive decoding (cf. Table 1) BID33 ). An obvious improvement to the decoder would be to incorporate the PixelCNN++, but by using the simpler architecture we ensure that the improvements in log-likelihood is not a of a strong autoregressive model. From TAB3 we see the performance from FAME and FAME No Concatenation on the CIFAR10 dataset. Similarly to the gray-scaled images, FAME outperforms current state-of-the-art sig- Table 4: Negative log-likelihood performance on ImageNet in bits/dim. The evidence lower-bound is computed with 1000 importance weighted samples L 1000 (θ, φ; x).nificantly. It is also interesting to see how FAME No Concatenation performs close to the previously published state-of-the-art . Especially in the image space, this could prove interesting, since the FAME No Concatenation has no additional autoregressive runtime complexity. We only investigated the 32x32 ImageNet dataset, since the training time is significant and it outperformed the 64x64 models (cf. Table 4), whereas the previously published 64x64 ImageNet models consistently outperform their 32x32 counterpart. In FIG0 we show samples from FAME on the CIFAR10 dataset. Similarly to previously published it is difficult to analyze the performance from the samples. However, we can conclude that FAME is able to capture spatial correlations in the images for generating sharp samples. It is also interesting to see how it captures the contours of objects in the images. We have presented FAME, an extension to the VAE that significantly improve state-of-the-art performance on standard benchmark datasets. By introducing feature map representations in the latent stochastic variables in addition to top-down inference we have shown that the model is able to capture representations of complex image distributions while utilizing a powerful autoregressive architecture as a decoder. In order to analyze the contribution from the VAE as opposed to the autoregressive model, we have presented without concatenating the input image when reconstructing and generating. This parameterization shows on par with the previously state-of-the-art without depending on the time consuming autoregressive generation. Further directions for FAME is to (i) test it on larger image datasets with images of a higher resolution, (ii) expand the model to capture other data modalities such as audio and text, (iii) combine the model in a semi-supervised framework.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
Hy_o3x-0b
We present a generative model that proves state-of-the-art results on gray-scale and natural images.
Plain recurrent networks greatly suffer from the vanishing gradient problem while Gated Neural Networks (GNNs) such as Long-short Term Memory (LSTM) and Gated Recurrent Unit (GRU) deliver promising in many sequence learning tasks through sophisticated network designs. This paper shows how we can address this problem in a plain recurrent network by analyzing the gating mechanisms in GNNs. We propose a novel network called the Recurrent Identity Network (RIN) which allows a plain recurrent network to overcome the vanishing gradient problem while training very deep models without the use of gates. We compare this model with IRNNs and LSTMs on multiple sequence modeling benchmarks. The RINs demonstrate competitive performance and converge faster in all tasks. Notably, small RIN models produce 12%–67% higher accuracy on the Sequential and Permuted MNIST datasets and reach state-of-the-art performance on the bAbI question answering dataset. Numerous methods have been proposed for mitigating the vanishing gradient problem including the use of second-order optimization methods (e.g., Hessian-free optimization BID15), specific training schedules (e.g., Greedy Layer-wise training BID20 BID7 BID24), and special weight initialization methods when training on both plain FFNs and RNNs BID3 BID16 BID13 BID10 BID26 BID11.Gated Neural Networks (GNNs) also help to mitigate this problem by introducing "gates" to control information flow through the network over layers or sequences. Notable examples include recurrent networks such as Long-short Term Memory (LSTM) BID8, Gated Recurrent Unit (GRU) BID1, and feedforward networks such as Highway Networks (HNs) BID21, and Residual Networks (ResNets) BID5. One can successfully train very deep models by employing these models, e.g., ResNets can be trained with over 1,000 layers. It has been demonstrated that removing (lesioning) or reordering (re-shuffling) random layers in deep feedforward GNNs does not noticeable affect the performance of the network BID23 Noticeably, one interpretation for this effect as given by BID4 is that the functional blocks in HNs or ResNets engage in an Unrolled Iterative Estimate (UIE) of representations and that layers in this block of HNs or ResNets iteratively refine a single set of representations. In this paper, we investigate if the view of Iterative Estimation (IE) can also be applied towards recurrent GNNs (Section 2.1). We present a formal analysis for GNNs by examining a dual gate design common in LSTM and GRU (Section 2.2). The analysis suggests that the use of gates in GNNs encourages the network to learn an identity mapping which can be beneficial in training deep architectures BID6 BID4.We propose a new formulation of a plain RNN, called a Recurrent Identity Network (RIN), that is encouraged to learn an identity mapping without the use of gates (Section 2). This network uses ReLU as the activation function and contains a set of non-trainable parameters. This simple yet effective method helps the plain recurrent network to overcome the vanishing gradient problem while it is still able to model long-range dependencies. This network is compared against two competing networks, the IRNN and LSTM, on several long sequence modeling tasks including the adding problem (Section 3.1), Sequential and Permuted MNIST classification tasks (Section 3.2), and bAbI question answering tasks (Section 3.3). 
RINs show faster convergence than IRNNs and LSTMs in the early stage of the training phase and reach competitive performance in all benchmarks. Note that the use of ReLU in RNNs usually leads to training instability, and therefore the network is sensitive to training hyperparameters. Our proposed RIN network demonstrates that a plain RNN does not suffer from this problem, even with the use of ReLUs, as shown in Section 3. We discuss further implications of this network and related work in Section 4. Representation learning in RNNs requires that the network build a latent state which reflects the temporal dependencies over a sequence of inputs. In this section, we explore an interpretation of this process using iterative estimation (IE), a view that is similar to the UIE view for feedforward GNNs. Formally, we characterize this viewpoint in Eq. 1: the expectation of the difference between the hidden activation at step t, h_t, and the last hidden activation at step T, h_T, is zero, E[h_t − h_T] = 0. This formulation implies that an RNN layer maintains and updates the same set of representations over the input sequence. Given the fact that the hidden activation at every step is an estimation of the final activation (Eq. 2), we derive Eq. 3: E[h_t − h_{t−1}] = 0. (Figure 1: Observation of learning identity mapping in an LSTM model trained on the adding problem task (see Section 3.1). The average estimation error is computed over a batch of 128 samples of the test set. (a) and (b) show the evaluation of Eq. 1 and Eq. 3 respectively. The x-axis indicates the index of the step that is compared with the final output h_T or with its previous step h_{t−1}.) Fig. 1 shows an empirical observation of the IE in the adding problem (experimental details in Section 3.1). Here, we use the Average Estimation Error (AEE) measure BID4 to quantify the expectation of the difference between two hidden activations. The measured AEEs in Fig. 1 are close to 0, indicating that the LSTM model fulfills the view of IE. The results also suggest that the network learns an identity mapping, since the activation levels are similar on average across all recurrent updates. In the next section, we show that the use of gates in GNNs encourages the network to learn an identity mapping, and examine whether this analysis can be extended to plain recurrent networks. Popular GNNs such as the LSTM and GRU, and recent variants such as the Phased LSTM BID17 and the Intersection RNN BID2, share the same dual-gate design (Eq. 4): h_t = H_t ⊙ T_t + h_{t−1} ⊙ C_t, where t ∈ [1, T], H_t = σ(x_t, h_{t−1}) represents the hidden transformation, T_t = τ(x_t, h_{t−1}) is the transform gate, and C_t = φ(x_t, h_{t−1}) is the carry gate. σ, τ and φ are recurrent layers that have their own trainable parameters and activation functions, and ⊙ represents the element-wise product operator. Note that h_t may not be the output activation at the recurrent step t; for example, in the LSTM, h_t represents the memory cell state. Typically, the elements of the transform gate T_{t,k} and carry gate C_{t,k} are between 0 (closed) and 1 (open); the value indicates the openness of the gate at the kth neuron. Hence, a plain recurrent network is a subcase of Eq. 4 when T_t = 1 and C_t = 0. Note that, conventionally, the initial hidden activation h_0 is 0 to represent a "void state" at the start of computation. For h_0 to fit into Eq. 4's framework, we define an auxiliary state h_{−1} as the previous state of h_0, and T_0 = 1, C_0 = 0.
We also define another auxiliary state h_{T+1} = h_T, with T_{T+1} = 0 and C_{T+1} = 1, as the succeeding state of h_T. Based on the recursive definition in Eq. 4, we can write the final layer output h_T as an accumulation of the per-step contributions (Eq. 5), where ⊙ over a series of terms represents repeated element-wise multiplication. According to Eq. 3, and supposing that Eq. 5 fulfills Eq. 1, we can use a zero-mean residual ε_t to describe the difference between the outputs of successive recurrent steps: h_t = h_{t−1} + ε_t with E[ε_t] = 0 (Eq. 6). Plugging Eq. 6 into Eq. 5 yields Eqs. 8–9; the complete derivation is presented in Appendix A. Eq. 8 performs an identity mapping when the carry gate C_t is always open. In Eq. 9, the term Σ_{i=1}^{t} ε_i represents "a level of representation that is formed between h_1 and h_t". Moreover, the term ⊙_{j=t}^{T} C_j extracts the "useful" part of this representation and contributes it to the final representation of the recurrent layer. Here, we interpret "useful" as any quantity that helps in minimizing the cost function. Therefore, the contribution λ_t at each recurrent step quantifies the representation that is learned at step t. Furthermore, it is generally believed that a GNN manages and maintains the latent state through the carry gate, such as the forget gate in the LSTM. If the carry gate is closed, then it is impossible for the old state to be preserved while undergoing recurrent updates. However, if we set C_t = 0 for t ∈ [1, T] in Eq. 9, we obtain Eq. 10, and if h_0 = 0 (the void state at the start), we can turn Eq. 10 into Eq. 11. Eq. 11 shows that the state can be preserved without the help of the carry gate. This indicates that it is possible for a plain recurrent network to learn an identity mapping as well. Motivated by the preceding iterative estimation interpretation of RNNs, we formulate a novel plain recurrent network variant, the Recurrent Identity Network (RIN): h_t = ReLU(W x_t + (U + I) h_{t−1} + b), where W is the input-to-hidden weight matrix, U is the hidden-to-hidden weight matrix, and I is a non-trainable identity matrix that acts as a "surrogate memory" component. This formulation encourages the network to preserve a copy of the last state by embedding I into the hidden-to-hidden weights. This "surrogate memory" component maintains the representation encoded in the past recurrent steps. In this section, we compare the performance of the RIN, IRNN, and LSTM in a set of tasks that require modeling long-range dependencies. The adding problem is a standard task for examining the capability of RNNs to model long-range dependencies BID8. In this task, two numbers are randomly selected from a long sequence, and the network has to predict their sum. The task becomes challenging as the length of the sequence T increases, because the relevant numbers can be far from each other in a long sequence. We report experimental results from three datasets that have sequence lengths of T_1 = 200, T_2 = 300, and T_3 = 400 respectively. Each dataset has 100,000 training samples and 10,000 testing samples. Each sequence of a dataset has T_i numbers that are randomly sampled from a uniform distribution in [0, 1]. Each sequence is accompanied by a mask that indicates the two chosen random positions. We compare the performance of RINs, IRNNs, and LSTMs using the same experimental settings. Each network has one hidden layer with 100 hidden units. Note that an LSTM has four times more parameters than the corresponding RIN and IRNN models. The optimizer minimizes the Mean Squared Error (MSE) between the target sum and the predicted sum.
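Returning to the RIN formulation above, the following is a minimal sketch (assumed PyTorch) of the update, where a non-trainable identity matrix is added to the hidden-to-hidden weights and ReLU is the activation; the class and variable names are illustrative.

```python
import torch
import torch.nn as nn

class RINCell(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.W = nn.Linear(input_size, hidden_size, bias=True)    # input-to-hidden (with bias b)
        self.U = nn.Linear(hidden_size, hidden_size, bias=False)  # trainable hidden-to-hidden
        # Non-trainable identity acting as the "surrogate memory" component.
        self.register_buffer("I", torch.eye(hidden_size))

    def forward(self, x_t, h_prev):
        # h_t = ReLU(W x_t + (U + I) h_{t-1} + b); h_prev @ I simply adds the
        # previous state back in, which lets the layer default to an identity mapping.
        return torch.relu(self.W(x_t) + self.U(h_prev) + h_prev @ self.I)
```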
We initially used the RMSprop BID22 optimizer. However, some IRNN models failed to converge using this optimizer. Therefore, we chose the Adam optimizer so a fair comparison can be made between the different networks. The batch size is 32. Gradient clipping value for all models is 100. The models are trained with maximum 300 epochs until they converged. The initial learning rates are different between the datasets because we found that IRNNs are sensitive to the initial learning rate as the sequence length increases. The learning rates α 200 = 10 −4, α 300 = 10 DISPLAYFORM0 and α 400 = 10 −6 are applied to T 1, T 2 and T 3 correspondingly. The input-to-hidden weights of RINs and IRNNs and hidden-to-hidden weights of RINs are initialized using a similar method to BID13 where the weights are drawn from a Gaussian distribution N (0, 10 −3). The LSTM is initialized with the settings where the input-to-hidden weights use Glorot Uniform BID3 and hidden-to-hidden weights use an orthogonal matrix as suggested by BID19. Bias values for all networks are initialized to 0. No explicit regularization is employed. We do not perform an exhaustive hyperparameter search in these experiments. The baseline MSE of the task is 0.167. This score is achieved by predicting the sum of two numbers as 1 regardless of the input sequence. FIG1 shows MSE plots for different test datasets. RINs and IRNNs reached the same level of performance in all experiments, and LSTMs performed the worst. Notably, LSTM fails to converge in the dataset with T 3 = 400. The use of ReLU in RINs and IRNNs causes some degree of instability in the training phase. However, in most cases, RINs converge faster and are more stable than IRNNs (see training loss plots in Fig. 5 of Appendix B). Note that because IRNNs are sensitive to the initial learning rate, applying high learning rates such as α = 10 −3 for T 2 and T 3 could cause the training of the network to fail. Sequential and Permuted MNIST are introduced by Le et al. FORMULA0 for evaluating RNNs. Sequential MNIST presents each pixel of the MNIST handwritten image BID14 to the network sequentially (e.g., from the top left corner of the image to the bottom right corner of the image). After the network has seen all 28 × 28 = 784 pixels, the network produces the class of the image. This task requires the network to model a very long sequence that has 784 steps. Permuted MNIST is an even harder task than the Sequential MNIST in that a fixed random index permutation is applied to all images. This random permutation breaks the association between adjacent pixels. The network is expected to find the hidden relations between pixels so that it can correctly classify the image. All networks are trained with the RMSprop optimizer BID22 ) and a batch size of 128. The networks are trained with maximum 500 epochs until they are converged. The initial learning rate is set to α = 10 −6. Weight initialization follows the same setup as Section 3.1. No explicit regularization is added. TAB0 summarizes the accuracy performance of the networks on the Sequential and Permuted MNIST datasets. For small network sizes, RINs outperform IRNNs in their accuracy performance. For bigger networks, RINs and IRNNs achieve similar performance; however, RINs converge much faster than IRNNs in the early stage of training (see FIG2). LSTMs perform the worst on both tasks in terms of both convergence speed and final accuracy. Appendix C presents the full experimental . 
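As a small illustration of how the Sequential and Permuted MNIST inputs described above can be constructed, the following sketch flattens each 28 × 28 image into a 784-step sequence of single pixels and optionally applies one fixed random permutation shared across the whole dataset; NumPy is assumed and the names are illustrative.

```python
import numpy as np

def to_pixel_sequences(images, permute=False, seed=0):
    """images: (N, 28, 28) array of grayscale images -> (N, 784, 1) sequences."""
    seqs = images.reshape(len(images), 784, 1)                # one pixel per time step
    if permute:
        perm = np.random.RandomState(seed).permutation(784)   # fixed permutation for all images
        seqs = seqs[:, perm, :]
    return seqs
```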
To investigate the limit of RINs, we adopted the concept of Deep Transition (DT) Networks BID18 for increasing the implicit network depth. In this extended RIN model called RIN-DT, each recurrent step performs two hidden transitions instead of one (the formulation is given in Appendix D). The network modification increases the inherent depth by a factor of two. The showed that the error signal could survive 784 × 2 = 1568 computation steps in RIN-DTs. In FIG3, we show the evidence of learning identity mapping empirically by collecting the hidden activation from all recurrent steps and evaluating Eqs. 1 and 3. The network matches the IE when AEE is close to zero. We also compute the variance of the difference between two recurrent steps. FIG3 suggests that all networks bound the variance across recurrent steps. FIG3 (b) offers a closer perspective where it measures the AEE between two adjacent steps. The levels of activations for all networks are always kept the same on an average, which is an evidence of learning identity mapping. We also observed that the magnitude of the variance becomes significantly larger at the last 200 steps in IRNN and RIN. Repeated application of ReLU may cause this effect during recurrent update BID9. Other experiments in this section exhibit similar behaviors, complete are shown in Appendix C FIG1 ). Note that this empirical analysis only demonstrates that the tested RNNs have the evidence of learning identity mapping across recurrent updates as RINs and IRNNs largely fulfill the view of IE. We do not over-explain the relationship between this analysis and the performance of the network.. The x-axis indicates the index of the step that compares with the final output h T or its previous step h t−1, and y-axis represents the average estimation error (AEE). DISPLAYFORM0 The bAbI dataset provides 20 question answering tasks that measure the understanding of language and the performance of reasoning in neural networks BID25. Each task consists of 1,000 training samples and 1,000 test samples. A sample consists of three parts: a list of statements, a question and an answer (examples in TAB1). The answer to the question can be inferred from the statements that are logically organized together. The red square is below the blue square. Then she journeyed to the garden. The red square is to the left of the pink rectangle. Question: Is the blue square below the pink rectangle? Answer: Garden. Answer: No. We compare the performance of the RIN, IRNN, and LSTM on these tasks. All networks follow a network design where the network firstly embeds each word into a vector of 200 dimensions. The statements are then appended together to a single sequence and encoded by a recurrent layer while another recurrent layer encodes the question sequence. The outputs of these two recurrent layers are concatenated together, and this concatenated sequence is then passed to a different recurrent layer for decoding the answer. Finally, the network predicts the answer via a softmax layer. The recurrent layers in all networks have 100 hidden units. This network design roughly follows the architecture presented in BID11. The initial learning rates are set to α = 10 −3 for RINs and LSTMs and α = 10 −4 for IRNNs because IRNNs fail to converge with a higher learning rate on many tasks. We chose the Adam optimizer over the RMSprop optimizer because of the same reasons as in the adding problem. The batch size is 32. Each network is trained for maximum 100 epochs until the network converges. 
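For reference, the encoder-concatenate-decode design just described can be sketched as follows. Plain ReLU RNN layers stand in for the RIN/IRNN/LSTM variants being compared, and reading "concatenated outputs" as the two final hidden states is an assumption of this sketch.

```python
import torch
import torch.nn as nn

class QANet(nn.Module):
    """Sketch of the bAbI architecture described above: a shared 200-d word embedding,
    one recurrent encoder for the appended statements, one for the question,
    concatenation of the two encodings, a decoding recurrent layer, and a softmax."""
    def __init__(self, vocab_size, n_answers, emb=200, hidden=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.story_enc = nn.RNN(emb, hidden, nonlinearity="relu", batch_first=True)
        self.query_enc = nn.RNN(emb, hidden, nonlinearity="relu", batch_first=True)
        self.decoder = nn.RNN(2 * hidden, hidden, nonlinearity="relu", batch_first=True)
        self.out = nn.Linear(hidden, n_answers)

    def forward(self, story_tokens, query_tokens):
        _, hs = self.story_enc(self.embed(story_tokens))      # (1, B, hidden)
        _, hq = self.query_enc(self.embed(query_tokens))      # (1, B, hidden)
        fused = torch.cat([hs, hq], dim=-1).transpose(0, 1)   # (B, 1, 2*hidden)
        _, hd = self.decoder(fused)
        return self.out(hd.squeeze(0))   # answer logits; softmax via cross-entropy loss
```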
The recurrent layers in the network follow the same initialization steps as in Section 3.1.The in TAB2 show that RINs can reach mean performance similar to the state-of-theart performance reported in BID11. As discussed in Section 3.1, the use of ReLU as the activation function can lead to instability during training of IRNN for tasks that have lengthy statements (e.g.. 3-Three Supporting Facts, 5-Three Arg. Relations). In this paper, we discussed the iterative representation refinement in RNNs and how this viewpoint could help in learning identity mapping. Under this observation, we demonstrated that the contribution of each recurrent step a GNN can be jointly determined by the representation that is formed up to the current step, and the openness of the carry gate in later recurrent updates. Note in Eq. 9, the element-wise multiplication of C t s selects the encoded representation that could arrive at the output of the layer. Thus, it is possible to embed a special function in C t s so that they are sensitive to certain pattern of interests. For example, in Phased LSTM, the time gate is inherently interested in temporal frequency selection BID17.Motivated by the analysis presented in Section 2, we propose a novel plain recurrent network variant, the Recurrent Identity Network (RIN), that can model long-range dependencies without the use of gates. Compared to the conventional formulation of plain RNNs, the formulation of RINs only adds a set of non-trainable weights to represent a "surrogate memory" component so that the learned representation can be maintained across two recurrent steps. Experimental in Section 3 show that RINs are competitive against other network models such as IRNNs and LSTMs. Particularly, small RINs produce 12%-67% higher accuracy in the Sequential and Permuted MNIST. Furthermore, RINs demonstrated much faster convergence speed in early phase of training, which is a desirable advantage for platforms with limited computing resources. RINs work well without advanced methods of weight initializations and are relatively insensitive to hyperparameters such as learning rate, batch size, and selection of optimizer. This property can be very helpful when the time available for choosing hyperparameters is limited. Note that we do not claim that RINs outperform LSTMs in general because LSTMs may achieve comparable performance with finely-tuned hyperparameters. The use of ReLU in RNNs might be counterintuitive at first sight because the repeated application of this activation is more likely causing gradient explosion than conventional choices of activation function, such as hyperbolic tangent (tanh) function or sigmoid function. Although the proposed IRNN BID13 reduces the problem by the identity initialization, in our experiments, we usually found that IRNN is more sensitive to training parameters and more unstable than RINs and LSTMs. On the contrary, feedforward models that use ReLU usually produce better and converge faster than FFNs that use the tanh or sigmoid activation function. In this paper, we provide a promising method of using ReLU in RNNs so that the network is less sensitive to the training conditions. The experimental also support the argument that the use of ReLU significantly speeds up the convergence. During the development of this paper, a recent independent work BID27 presented a similar network formulation with a focus on training of deep plain FFNs without skip connections. 
DiracNet uses the idea of ResNets where it assumes that the identity initialization can replace the role of the skip-connection in ResNets. DiracNet employed a particular kind of activation function -negative concatenated ReLU (NCReLU), and this activation function allows the layer output to approximate the layer input when the expectation of the weights are close to zero. In this paper, we showed that an RNN can be trained without the use of gates or special activation functions, which complements the findings and provides theoretical basis in BID27.We hope to see more empirical and theoretical insights that explains the effectiveness of the RIN by simply embedding a non-trainable identity matrix. In future, we will investigate the reasons for the faster convergence speed of the RIN during training. Furthermore, we will investigate why RIN can be trained stably with the repeated application of ReLU and why it is less sensitive to training parameters than the two other models. A ALGEBRA OF EQS. 8-9Popular GNNs such as LSTM, GRU; and recent variants such as the Phased-LSTM BID17, and Intersection RNN BID2, share the same dual gate design described as follows: DISPLAYFORM0 where t ∈ [1, T], H t = σ(x t, h t−1) represents the hidden transformation, T t = τ (x t, h t−1) is the transform gate, and C t = φ(x t, h t−1) is the carry gate. σ, τ and φ are recurrent layers that have their trainable parameters and activation functions. represents element-wise product operator. Note that h t may not be the output activation at the recurrent step t. For example in LSTM, h t represents the memory cell state. Typically, the elements of transform gate T t,k and carry gate C t,k are between 0 (close) and 1 (open), the value indicates the openness of the gate at the kth neuron. Hence, a plain recurrent network is a subcase of Eq. 14 when T t = 1 and C t = 0.Note that conventionally, the initial hidden activation h 0 is 0 to represent a "void state" at the start of computation. For h 0 to fit into Eq. 4's framework, we define an auxiliary state h −1 as the previous state of h 0, and T 0 = 1, C 0 = 0. We also define another auxiliary state h T +1 = h T, T T +1 = 0, and C T +1 = 1 as the succeeding state of h T.Based on the recursive definition in Eq. 4, we can write the final layer output h T as follows: DISPLAYFORM1 where we use to represent element-wise multiplication over a series of terms. According to Eq. 3, and supposing that Eq. 5 fulfills the Eq. 1, we can use a zero-mean residual t for describing the difference between the outputs of recurrent steps: DISPLAYFORM2 Then we can rewrite Eq. 16 as: DISPLAYFORM3 Substituting Eq. 18 into Eq. 15: DISPLAYFORM4 We can rearrange Eqn. 20 to DISPLAYFORM5 The term λ in Eq. 23 can be reorganized to, DISPLAYFORM6 B DETAILS IN THE ADDING PROBLEM EXPERIMENTS Average Estimation Error RIN 2-100 1st IRNN 2-100 1st LSTM 2-100 1st 0 100 200 300 400 500 600 700 800 layer 2 step index RIN 2-100 2nd IRNN 2-100 2nd LSTM 2-100 2nd DISPLAYFORM7 DISPLAYFORM8 In Section 3.2, we tested an additional model for RINs, which takes the concept of Deep Transition Networks (DTNs) BID18. Instead of stacking the recurrent layers, DTNs add multiple nonlinear transitions in a single recurrent step. This modification massively increases the depth of the network. In our RIN-DTs, the number of transition per recurrent step is two. Because the length of the sequence for Sequential and Permuted MNIST tasks is 784, RIN-DTs have the depth of 784 × 2 = 1568. The recurrent layer is defined in Eqs. 
26-27, in which each recurrent step performs two hidden transitions instead of one: DISPLAYFORM0 DISPLAYFORM1
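A plausible explicit rendering of the equations referred to above is given below. The dual-gate recursion follows directly from the definitions of H_t, T_t and C_t; the two-transition RIN-DT step is an illustrative reconstruction that reuses the identity-augmented RIN update and need not match the original Eqs. 26-27 exactly.

```latex
% Dual-gate recursion (Eq. 14), as implied by the surrounding definitions:
h_t \;=\; H_t \odot T_t \;+\; h_{t-1} \odot C_t, \qquad
H_t=\sigma(x_t,h_{t-1}),\;\; T_t=\tau(x_t,h_{t-1}),\;\; C_t=\phi(x_t,h_{t-1}).

% Assumed two-transition RIN-DT step (illustrative reconstruction):
h_t^{(1)} \;=\; \mathrm{ReLU}\!\big(W x_t + (U_1+\mathbf{I})\,h_{t-1} + b_1\big), \qquad
h_t \;=\; \mathrm{ReLU}\!\big((U_2+\mathbf{I})\,h_t^{(1)} + b_2\big).
```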
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
Hyp3i2xRb
We propose a novel network called the Recurrent Identity Network (RIN) which allows a plain recurrent network to overcome the vanishing gradient problem while training very deep models without the use of gates.
Recently, there has been a surge in interest in safe and robust techniques within reinforcement learning (RL). Current notions of risk in RL fail to capture the potential for systemic failures such as abrupt stoppages from system failures or surpassing of safety thresholds and the appropriate responsive controls in such instances. We propose a novel approach to fault-tolerance within RL in which the controller learns a policy can cope with adversarial attacks and random stoppages that lead to failures of the system subcomponents. The of the paper also cover fault-tolerant (FT) control so that the controller learns to avoid states that carry risk of system failures. By demonstrating that the class of problems is represented by a variant of SGs, we prove the existence of a solution which is a unique fixed point equilibrium of the game and characterise the optimal controller behaviour. We then introduce a value function approximation algorithm that converges to the solution through simulation in unknown environments. Reinforcement learning (RL) provides the promise of adaptive agents being able to discover solutions merely through repeated interaction with their environment. RL has been deployed in a number of real-world settings in which, using RL, an adaptive agent learns to perform complex tasks, often in environments shared by human beings. Large scale factory industrial applications, traffic light control , robotics and autonomous vehicles are notable examples of settings to which RL methods have been applied. Numerous automated systems are however, susceptible to failures and unanticipated outcomes. Moreover, many real-world systems amenable to RL suffer the potential for random stoppages and abrupt failures; actuator faults, failing mechanical system components, sensor failures are few such examples. In these settings, executing preprogrammed behaviours or policies that have been trained in idealised simulated environments can prove vastly inadequate for the task of ensuring the safe execution of tasks. Consequently, in the presence of such occurrences, the deployment of RL agents introduces a risk of catastrophic outcomes whenever the agent is required to act so as to avoid adverse outcomes in unseen conditions. The important question of how to control the system in a way that is both robust against systemic faults and, minimises the risk of faults or damage therefore arises. In response to the need to produce RL algorithms that execute tasks with safety guarantees, a significant amount of focus has recently been placed on safe execution, robust control and riskminimisation (Garcıa and Fernández, 2015). Examples include H ∞ control , coherent risk, conditional value at risk . In general, these methods introduce an objective 1 defined with an expectation measure that either penalises actions that lead to greater uncertainty or embeds a more pessimistic view of the world (for example, by biasing the transition predictions towards less desirable states). In both cases, the ing policies act more cautiously over the horizon of the problem as compared to policies trained with a standard objective function. Despite the recent focus on safe methods within RL, the question of how to train an RL agent that can cope with random failures remains unaddressed. In particular, at present the question of how to produce an RL policy that can cope with an abrupt failure of some system subcomponent has received no systematic treatment. 
Similarly, the task of addressing how to produce RL policies that account for the risk of states in which such failures occur has not been addressed. In this paper, we for the first time produce a method that learns optimal policies in response to random and adversarial systems attacks that lead to stoppages of system (sub)components that may produce adverse events. Our method works by introducing an adversary that seeks to determine a stopping criterion to stop the system at states that lead to the worst possible (overall) outcomes for the controller. Using a game-theoretic construction, we then show how a policy that is robust against adversarial attacks that lead to abrupt failure can be learned by an adaptive agent using an RL updating method. In particular, the introduction of an adversary that performs attacks at states that lead to worst outcomes generates experiences for the adaptive RL agent to learn a best-response policy against such scenarios. To tackle this problem, we construct a novel two-player stochastic game (SG) in which one of the players, the controller, is delegated the task of learning to modify the system dynamics through its actions that maximise its payoff and an adversary or'stopper' that enacts a strategy that stops the system in such a way that maximises the controller's costs. This produces a framework that finds optimal policies that are robust against stoppages at times that pose the greatest risk of catastrophe. The main contribution of the paper is to perform the first systematic treatment of the problem of robust control under worst-case failures. In particular, we perform a formal analysis of the game between the controller and the stopper. Our main are centered around a minimax proof that establishes the existence of a value of the game. This is necessary for simulating the stopping action to induce fault-tolerance. Although minimax proofs are well-known in game theory (; ;), replacing a player's action set with stopping rules necessitates a minimax proof (which now relies on a construction of open sets) which markedly differs to the standard methods within game theory. Additionally, crucial to our analysis is the characterisation of the adversary optimal stopping rule (Theorem 3). Our tackle optimal stopping problems (OSPs) under worst-case transitions. OSPs are a subclass of optimal stochastic control (OSC) problems in which the goal is to determine a criterion for stopping at a time that maximises some state-dependent payoff . The framework is developed through a series of theoretical : first, we establish the existence of a value of the game which characterises the payoff for the saddle point equilibrium (SPE). Second, we prove a contraction mapping property of a Bellman operator of the game and that the value is a unique fixed point of the operator. Third, we prove the existence and characterise the optimal stopping time. We then prove an equivalence between the game of control and stopping and worst-case OSPs and show that the fixed point solution of the game solves the OSP. Finally, using an approximate dynamic programming method, we develop a simulation-based iterative scheme that computes the optimal controls. The method applies in settings in which neither the system dynamics nor the reward function are known. Hence, the agent need only observe its realised rewards by interacting with the environment. At present, the coverage of FT within RL is limited. 
In RL is applied to tackle systems in which faults might occur and subsequently incur a large cost. Similarly, RL is applied to a problem in in which an RL method for Bayesian discrimination which is used to segment the state and action spaces. Unlike these methods in which infrequent faults from the environment generate negative feedback, our method introduces an adversary that performs the task of simulating high-cost stoppages (hence, modelling faults) that induce an FT trained policy. A relevant framework is a two-player optimal stopping game (Dynkin game) in which each player chooses one of two actions; to stop the game or continue . Dynkin games have generated a vast literature since the setting requires a markedly different analysis from standard SG theory. In the case with one stopper and one controller such as we are concerned with, the minimax proof requires a novel construction using open sets to cope with the stopping problem for the minimax . Presently, the study of optimal control that combines control and stopping is limited to a few studies e.g. . Similarly, games of control and stopping have been analysed in continuous-time (; ;). In these analyses, all aspects of the environment are known and in general, solving these problems requires computing analytic solutions to non-linear partial differential equations which are often analytically insoluble and whose solutions can only be approximated numerically at very low dimensions. Current iterative methods in OSPs (and approximated dynamic programming methods e.g. ) in unknown environments are restricted to risk-neutral settings -introducing a notion of risk (generated adversarially) adds considerable difficulty as it requires generalisation to an SG involving a controller and stopper which alters the proofs throughout. In particular, the solution concept is now an SG SPE, the existence of which must be established. As we show, our framework provides an iterative method of solving OSPs with worst-case transitions in unknown environments and hence, generalises existing OSP analyses to incorporate a notion of risk. The paper is organised as follows: we firstly give a formal description of the FT RL problem we tackle and the OSP with worst-case transitions and give a concrete example to illustrate an application of the problem. In Sec. 2, we introduce the underlying SG framework which we use within the main theoretical analysis which we perform in Sec. 3. Lastly, in Sec. 4, we develop an approximate dynamic programming approach that enables the optimal controls to be computed through simulation, followed by some concluding remarks. We now describe the main problem with which we are concerned that is, FT RL. We later prove an equivalence between the OSPs under worst-case transitions and the FT RL problem and characterise the solution of each problem. We concern ourselves with finding a control policy that copes with abrupt system stoppages and failures at the worst possible states. Unlike standard methods in RL and game theory that have fixed time horizons (or purely random exit times) in the following, the process is stopped by a fictitious adversary that uses a stopping strategy or rule to decide when to stop given its state observations. In order to generate an FT control, we simulate the adversary's action whilst the controller determines its optimal policy. This as we show, induces a form of control that is an FT best-response control. 
A formal description is as follows: an agent exercises actions that influence the sequence of states visited by the system. At each state, the agent receives a reward which is dependent on the state and the chosen action. The agent's actions are selected by a policy π: S× A → -a map from the set of states S and the set of actions A to a probability. We assume that the action set is a discrete compact set and that the agent's policy π is drawn from a compact policy set Π. The horizon of the problem is T ∈ N × {∞}. However, at any given point τ S ≤ T the system may stop (randomly) and the problem terminates where τ S ∼ f ({0, . . ., T}) is a measurable, random exit time and f is some distribution on {0, . . ., T}. If after k ≤ T time steps the system stops, the agent incurs a cost of G(S k) and the process terminates. For any s ∈ S and for any π ∈ Π, the agent's performance function is given by: where a ∧ b:= min{a, b}, E is taken w.r.t. the transition function P. The performance function consists of a reward function R: S× A → R which quantifies the agent's immediate reward when the system transitions from one state to the next, a bequest function G: S → R which quantifies the penalty incurred by the agent when the system is stopped and γ ∈ [0, 1[, a discount factor. We assume R and G are bounded and measurable. The FT control problem which we tackle is one in which the controller acts both with concern for abrupt system failures and stoppages. In particular, the analysis is performed in sympathy with addressing the problem of how the controller should act in two scenarios -the first involves acting in environments that are susceptible to adversarial attacks or random stoppages in high costs states. Such situations are often produced in various real-world scenarios such as engine failures in autonomous vehicles, network power failures and digital (communication) networks attacks. The second scenario involves a controller that seeks to avoid system states that yield a high likelihood of systemic (subcomponent) failure. Examples of this case include an agent that seeks to avoid performing tasks that increase the risk of some system failure, for example increasing stress that in component failure or breakages within robotics. To produce a control that is robust in these scenarios, it is firstly necessary to determine a stopping rule that stops the system at states that incur the highest overall costs. Applying this stopping rule to the system subsequently induces a response by the controller that is robust against systemic faults at states in which stopping inflicts the greatest overall costs. This necessitates a formalism that combines an OSP to determine an optimal (adversarial) stopping rule and secondly, a RL problem. Hence, problem we consider is the following: where the minimisation is taken pointwise and V is a set of stochastic processes of the form v: Ω → T where T ⊆ {0, 1, 2 . . .} is a set of stopping times. Hereon, we employ the following shorthand R(s, a) ≡ R a s for any s ∈ S, a ∈ A. The dual objective consists of finding both a stopping rule that minimises J and an optimal policy that maximises J. By considering the tasks as being delegated to two individual players, the problem becomes an SG between a controller that seeks to maximise J by manipulating state visitations through its actions and an adversarial stopper that chooses a stopping rule to stop the process in order to minimise J. 
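A plausible explicit form of the performance function and of the resulting minimax problem, written here as an assumption consistent with the description (discounted rewards accumulated up to the stopping time, a discounted bequest term at stoppage, a minimising stopper and a maximising controller), is:

```latex
J^{\pi,\tau}(s) \;=\; \mathbb{E}\!\left[\,\sum_{t=0}^{(\tau\wedge T)-1} \gamma^{t}\, R(s_t,a_t)
\;+\; \gamma^{\tau\wedge T}\, G\!\left(s_{\tau\wedge T}\right) \;\Big|\; s_0=s \right],
\qquad a_t \sim \pi(\cdot\mid s_t),

\text{find } (\hat\tau,\hat\pi)\in\mathcal{V}\times\Pi \;\text{ s.t. }\;
J^{\hat\tau,\hat\pi} \;=\; \min_{\tau\in\mathcal{V}}\;\max_{\pi\in\Pi}\; J^{\tau,\pi}.
```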
We later consider a setting in which neither player has up-front knowledge of the transition model or objective function but each only observes their realised rewards. The of this paper also tackle OSPs under a worst-case transitions -problems in which the goal is to find a stopping ruleτ under the adverse non-linear expectation E P:= min π∈Π E P,π s.th. Here, the agent seeks to find an optimal stopping time in a problem in which the system transitions according to an adversarial (worst-case) probability measure. To elucidate the ideas, we now provide a concrete practical example namely that of actuator failure within RL applications. Consider an adaptive learner, for example a robot that uses a set of actuators to perform actions. Given full operability of its set of actuators, the agent's actions are determined by a policy π: S ×A → which maps from the state space S and the set of actions A to a probability. In many systems, there exists some risk of actuator failure at which point the agent thereafter can affect the state transitions by operating only a subset of its actuators. In this instance, the agent's can only execute actions drawn from a subset of its action space ⊂ A and hence, the agent is now restricted to policies of the form π partial: S × → -thereafter its expected return is given by the value function V π partial (this plays the role of the bequest function G in). In order to perform robustly against actuator failure, it is therefore necessary to consider a set of stopping times T ⊆ {0, 1, 2, . . .} and a stopping criterionτ: Ω → T which determines the worst states for the agent's functionality to be impaired so that it can only use some subset of its set of actuators. The problem involves finding a pair (τ,π) ∈ V × Π -a stopping time and control policy s.th. where s:= s 0, a t ∼ π and Hence the role of the adversary is to determine and execute the stopping actionτ that leads to the greatest reduction in the controller's overall payoff. The controller in turn learns to execute the policyπ which involves playing a policyπ partial ∈ arg max V π partial after the adversary has executed its stopping action. The ing policyπ is hence robust against actuator failure at the worst possible states. Embedded within problem is an interdependence between the actions of the players -that is, the solution to the problem is jointly determined by the actions of both players and their responses to each other. The appropriate framework to tackle this problem is therefore an SG . In this setting, the state of the system is determined by a stochastic process {s t |t = 0, 1, 2, . . .} whose values are drawn from a state space S ⊆ R p for some p ∈ N. The state space is defined on a probability space (Ω, B, P), where Ω is the sample space, B is the set of events and P is a map from events to probabilities. We denote by F = (F n) n≥0 the filtration over (Ω, B, P) which is an increasing family of σ−algebras generated by the random variables s 1, s 2,.... We operate in a Hilbert space V of real-valued functions on L 2, i.e. a complete 2 vector space which we equip with a norm ·: is a probability measure. The problem occurs over a time interval {0, . . . K} where K ∈ N × {∞} is the time horizon. A stopping time is defined as a random variable τ: Ω → {0, . . ., K} for which {ω ∈ Ω|τ (ω) ≤ t} ∈ F t for any t ∈ {0, . . ., K} -this says that given the information generated by the state process, we can determine if the stopping criterion has occurred. 
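To make the actuator-failure scenario concrete, the sketch below wraps a Gym-style environment so that the admissible action set shrinks once a stopping rule fires. The wrapper, the four-tuple step interface and the `stopper` callable are illustrative assumptions, not part of the proposed framework.

```python
class ActuatorFailureWrapper:
    """Illustrative wrapper around a Gym-style discrete-action environment.

    Before the (adversarially chosen) stopping time the controller may use the full
    action set A = {0, ..., n_actions - 1}; once `stopper` fires, only the reduced set
    `surviving_actions` remains, mimicking a partial actuator failure. `stopper` is any
    callable state -> bool and stands in for the learned worst-case stopping rule.
    """
    def __init__(self, env, surviving_actions, stopper):
        self.env = env
        self.surviving_actions = list(surviving_actions)
        self.stopper = stopper
        self.failed = False

    def reset(self):
        self.failed = False
        return self.env.reset()

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        if not self.failed and self.stopper(obs):
            self.failed = True            # failure triggered at this state
        return obs, reward, done, info

    def available_actions(self):
        if self.failed:
            return self.surviving_actions
        return list(range(self.env.action_space.n))
```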
An SG is an augmented Markov decision process which proceeds by two players tacking actions that jointly manipulate the transitions of a system over K rounds which may be infinite. At each round, the players receive some immediate reward or cost which is a function of the players' joint actions. The framework is zero-sum so that a reward for player I simultaneously represents a cost for player II. Formally, a two-player zero-sum SG is a 6−tuple S, A i∈{1,2}, P, R, γ where S = {s 1, s 2, . . ., s n} is a set of n ∈ N states, A i is an action set for each player i ∈ {1, 2}. The map P: S×A 1 ×A 2 ×S → is a Markov transition probability matrix i.e. P (s ; s, a 1, a 2) is the probability of the state s being the next state given the system is in state s and actions a 1 ∈ A 1 and a 2 ∈ A 2 are applied by player I and player II (resp.). The function R: S× A 1 × A 2 is the one-step reward for player I and represents one-step cost for player II when player I takes action a 1 ∈ A 1 and player II takes action a 2 ∈ A 2 and γ ∈ [0, 1[ is a discount factor. The goal of each player is to maximise its expected cumulative return -since the game is antagonistic, the total expected reward received by player I which we denote by J, represents a total expected cost for player II. Denote by Π i, the space of strategies for each player i ∈ {1, 2}. For SGs with Markovian transition dynamics, we can safely dispense with path dependencies in the space of strategies. 3 Consequently, w.log. we restrict ourselves to the class of behavioural strategies that depend only on the current state and round, namely Markov strategies, hence for each player i, the strategy space Π i consists of strategies of the form It is well-known that for SGs, an equilibrium exists in Markov strategies even when the opponent can draw from non-Markovian strategies . In SGs, it is usual to consider the case A 1 = A 2 so that the players' actions are drawn from the same set. We depart from this model and consider a game in which player II can choose a strategy which determines a time to stop the process contained within the set T ⊆ {0, 1, 2, . . .} which consists of F− measurable stopping times. In this setting, player I can manipulate the system dynamics by taking actions drawn from A 1 (we hereon use A) and at each point, player II can decide to intervene to stop the game. The value of the game exists if we can commute the max and min operators: We denote the value by J:= val and denote by (k,π) ∈ V × Π the pair that satisfies Jk,π ≡ J. The value, should it exist, is the minimum payoff each player can guarantee itself under the equilibrium strategy. In general, the functions val + [J] and val − [J] may not coincide. Should J exist, it constitutes an SPE of the game in which neither player can improve their payoff by playing some other control -an analogous concept to a Nash equilibrium for the case of two-player zero-sum games. Thus the central task to establish an equilibrium involves unambiguously assigning a value to the game, that is proving the existence of J. In this section, we present the key and perform the main analysis of the paper. Our first task is to prove the existence of a value of the game. This establishes a fixed or stable point which describes the equilibrium policies enacted by each player. Crucially, the equilibrium describes the maximum payoff that the controller can expect in an environment that is subject to adversarial attacks that stop the system or some subcomponent. 
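For reference, the upper and lower values can be written explicitly. The operator order below follows the convention of Lemma A.1 in the appendix (minimum over the stopper's rules, maximum over the controller's policies) and is otherwise an assumption:

```latex
\mathrm{val}^{+}[J] \;=\; \min_{\tau\in\mathcal{V}}\;\max_{\pi\in\Pi}\; J^{\tau,\pi},
\qquad
\mathrm{val}^{-}[J] \;=\; \max_{\pi\in\Pi}\;\min_{\tau\in\mathcal{V}}\; J^{\tau,\pi},
\qquad
J^{\star}\ \text{exists iff}\ \ \mathrm{val}^{+}[J] = \mathrm{val}^{-}[J].
```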
Unlike standard SGs with two controllers, introducing a stopping criterion requires an alternative analysis in which i) an equilibrium with Markov strategies in which one of the players uses a stopping criterion is determined and ii) the stopping criterion is characterised. It is well-known that introducing a stopping action to one of the players alters the analysis of SGs the standard methods of which cannot be directly applied (c.f. Dynkin games ). Our second task is to perform an analysis that enables us to construct an approximate dynamic programming method. This enables the value function to be computed through simulation. This, as we show in Sec. 4, underpins a simulation-based scheme that is suitable for settings in which the transition model and reward function is a priori unknown. Lastly, we construct an equivalence between robust OSPs and games of control and stopping. We defer some of the proofs to the appendix. Our develop the theory of risk within RL to cover instances in which the agent has concern the process at a catastrophic system state. Consequently, we develop the theory of SGs to cover games of control and stopping when neither player has up-front environment knowledge. We prove an equivalence between robust OSPs and games of control and stopping and demonstrate how each problem can be solved in unknown environments. A central task is to prove that the Bellman operator for the game is a contraction mapping. Thereafter, we prove convergence to the unique value. Consider a Borel measurable function which is absolutely integrable w.r.t. the transition kernel ss, where P a ss ≡ P (s ; s, a) is the probability of the state s being the next state given the action a ∈ A and the current state is s. In this paper, we denote by (P J)(s):= S J[s]P a sds. We now introduce the operator of the game which is of central importance: The operator T enables the game to be broken down into a sequence of sub minimax problems. It will later play a crucial role in establishing a value iterative method for computing the value of the game. We now briefly discuss strategies. A player strategy is a map from the opponent's policy set to the player's own policy set. In general, in two player games the player who performs an action first employs the use of a strategy. Typically, this allows the player to increase its rewards since their action is now a function of the other player's later decisions. Markov controls use only information about the current state and duration of the game rather than using information about the opponent's decisions or the game history. Seemingly, limiting the analysis to Markov controls in the current game may restrict the abilities of the players to perform optimally. Our first however proves the existence of the value in Markov controls: Theorem 1 establishes the existence of the game which permits commuting the max and min operators of the objective. Crucially, the theorem secures the existence of an equilibrium pair (τ,π) ∈ V × Π, whereπ ∈ Π is the controller's optimal Markov policy when it faces adversarial attacks that stop the system. Additionally, Theorem 1 establishes the existence of a given by J, the computation of which, is the subject of the next section. We can now establish the optimal strategies for each player. To this end, we now define best-response strategies which shall be useful for further characterising the equilibrium: Definition 1. 
The set of best-response (BR) strategies for player I against the stopping time τ ∈ V (BR strategies for player II against the control policy π ∈ Π) is defined by: The question of computing the value of the game remains. To this end, we now prove that repeatedly applying T produces a sequence that converges to the value. In particular, the game has a fixed point property which is stated in the following: There exists a unique function J ∈ L 2 s.th. Theorem 2 establishes the existence of a fixed point of T and that the fixed point coincides with the value of the game. Crucially, it suggests that J can be computed by an iterative application of the Bellman operator which underpins a value iterative method. We study this aspect in Sec. 4 where we develop an iterative scheme for computing J. Definition 2. The pair (τ,π) ∈ V × Π is an SPE iff: An SPE therefore defines a strategic configuration in which both players play their BR strategies. With reference to the FT RL problem, an SPE describes a scenario in which the controller optimally responds against stoppages at the set of states that inflict the greatest costs to the controller. In particular, we will demonstrate thatπ ∈ Π is a BR to a system that undergoes adversarial attacks. Proposition 1. The pair (τ,π) ∈ V × Π consists of BR strategies and constitutes an SPE. By Prop. 1, when the pair (τ,π) is played, each player executes its BR strategy. The strategic response then induces FT behaviour by the controller. We now turn to the existence and characterising the optimal stopping time for player II. The following establishes its existence. Theorem 3. There exists an F-measurable stopping time: The theorem characterises and establishes the existence of the player II optimal stopping time which, when executed by the adversary, induces an FT control by the controller. Having shown the existence of the optimal stopping time τ, by Theorem 3 and Theorem 1, we find: Theorem 4. Letτ be the player II optimal stopping time defined in and let τ be the optimal stopping time for the robust OSP (c.f.) then τ =τ. Theorem 4 establishes an equivalence between the robust OSP and the SG of control and stopping hence, any method that computesτ for the SG yields a solution to the robust OSP. We now develop a simulation-based value-iterative scheme. We show that the method produces an iterative sequence that converges to the value of the game from which the optimal controls can be extracted. The method is suitable for environments in which the transition model and reward functions are not known to either player. The fixed point property of the game established in Theorem 2 immediately suggests a solution method for finding the value. In particular, we may seek to solve the fixed point equation (FPE) J = T J. Direct approaches at solving the FPE are not generally fruitful as closed solutions are typically unavailable. To compute the value function, we develop an iterative method that tunes weights of a set of basis functions {φ k : R p → R|k ∈ 1, 2, . . . D} to approximate J through simulated system trajectories and associated costs. Algorithms of this type were first introduced by Watkins as an approximate dynamic programming method and have since been augmented to cover various settings. Therefore the following can be considered as a generalised Q-learning algorithm for zero-sum controller stopper games. Let us denote by Φr:= D j=1 r(j)φ j an operator representation of the basis expansion. 
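Because the iterative scheme developed next repeatedly applies the operator T through simulation, it helps to keep a concrete form of the operator in view. The following is a plausible rendering, assumed from the controller-stopper structure and from the stopping rule used later (stop as soon as the bequest value falls below the continuation value); it is not a verbatim transcription:

```latex
(T J)(s) \;=\; \min\Big\{\, G(s),\;
\max_{a\in\mathcal{A}}\big[\, R^{a}_{s} \;+\; \gamma\,(P^{a} J)(s) \,\big] \Big\},
\qquad
(P^{a}J)(s) \;=\; \int_{\mathcal{S}} J(s')\, P^{a}_{s s'}\, ds'.
```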
The algorithm is initialised with weight vector r 0 = (r 0,..., r 0 (P)) ∈ R d. Then as the trajectory {s t |t = 0, 1, 2, . . .} is simulated, the algorithm produces an updated series of vectors {r t |t = 0, 1, 2, . . .} by the update: Theorem 5 demonstrates that the method converges to an approximation of J. We provide a bound for the approximation error in terms of the basis choice. We define the function Q which the algorithm approximates by: We later show that Q serves to approximate the value J. In particular, we show that the algorithm generates a sequence of weights r n that converge to a vector r and that Φr, in turn approximates Q. To complete the connection, we provide a bound between the outcome of the game when the players use controls generated by the algorithm. We introduce our player II stopping criterion which now takes the form: Let us define a orthogonal projection Π and the function F by the following: ΠQ:= arg min We now state the main of the section: Theorem 5. r n converges to r where r is the unique solution: ΠF (Φr) = Φr. The following provide approximation bounds when employing the projection Π:, then the following hold: Hence the error bound in approximation of J is determined by the goodness of the projection. Theorem 5 and Theorem 6 thus enable the FT RL problem to be solved by way of simulating the behaviour of the environment and using the update rule to approximate the value function. Applying the stopping rule in, by Theorem 6 and Theorem 2, means the pair (τ,π) is generated where the policyπ approximates the policyπ which is FT against adversarial stoppages and faults. In this paper, we tackled the problem of fault-tolerance within RL in which the controller seeks to obtain a control that is robust against catastrophic failures. To formally characterise the optimal behaviour, we constructed a new discrete-time SG of control and stopping. We established the existence of an equilibrium value then, using a contraction mapping argument, showed that the game can be solved by iterative application of a Bellman operator and constructed an approximate dynamic programming algorithm so that the game can be solved by simulation. Assumption A.2. Ergodicity: i) Any invariant random variable of the state process is P −almost surely (P −a.s.) a constant. Assumption A.3. Markovian transition dynamics: the transition probability function P satisfies the following equality: Assumption A.4. The constituent functions {R, G} in J are square integrable: that is, R, G ∈ L 2 (µ). We begin the analysis with some preliminary lemmata and definitions which are useful for proving the main . Definition A.1. An operator T: V → V is said to be a contraction w.r.t a norm · if there exists a constant c ∈ [0, 1[ s.th for any V 1, V 2 ∈ V we have that: Definition A.2. An operator T : V → V is non-expansive if ∀V 1, V 2 ∈ V we have: Definition A.3. The residual of a vector V ∈ V w.r.t the operator T : V → V is: Lemma A.1. Define val + [f]:= min b∈B max a∈A f (a, b) and define val − [f]:= max a∈A min b∈B f (a, b), then for any b ∈ B we have that for any f, g ∈ L and for any c ∈ R >0: Lemma A.2. For any f, g, h ∈ L and for any c ∈ R >0 we have that: The following lemma, whose proof is deferred is a required for proving the contraction mapping property of the operator T. Lemma A.4. The probability transition kernel P is non-expansive, that is: The following estimates provide bounds on the value J which we use later in the development of the iterative algorithm. 
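A minimal sketch of the simulation-based update is given below. It is modelled on temporal-difference value iteration for optimal stopping with linear function approximation; the learning-rate schedule, the behaviour policy generating the actions, and the exact form of the update are assumptions rather than a transcription of the update rule above.

```python
import numpy as np

def td_update(r, phi, s, a, reward, s_next, G, gamma=0.99, lr=1e-3):
    """One simulation-based update of the basis weights r (illustrative sketch).

    phi(s) returns the feature vector (phi_1(s), ..., phi_D(s)) and G(s) the
    bequest/stopping payoff. The continuation value is the basis expansion
    (Phi r)(s) = phi(s) . r; the target truncates it by the stopping payoff,
    mirroring the min{G, .} structure of the game operator. Actions are assumed
    to come from the controller's behaviour policy during simulation.
    """
    v_s = phi(s) @ r
    v_next = min(G(s_next), phi(s_next) @ r)   # stopper exercises its option if G is lower
    td_error = reward + gamma * v_next - v_s
    return r + lr * td_error * phi(s)

def stopping_rule(r, phi, G, s):
    """Player II stops the first time the bequest value drops below the
    approximate continuation value, i.e. G(s) <= (Phi r)(s)."""
    return G(s) <= phi(s) @ r
```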
We defer the proof of the to the appendix. Proposition A.1. The operator T in is a contraction. Lemma A.5. Let T: V → V be a contraction mapping in · and let J be a fixed point so that T J = J then there exists a constant c ∈ [0, 1[ s.th: Lemma A.6. Let T 1 : V → V, T 2 : V → V be contraction mappings and suppose there exists vectors J 1, J 2 s.th T 1 J 1 = J 1 and T 2 J 2 = J 2 (i.e. J 1, J 2 are fixed points w.r.t T 1 and T 2 respectively) then ∃c 1, c 2 ∈ [0, 1[ s.th: Lemma A.7. The operator T satisfies the following: 2. (Constant shift) Let I(s) ≡ 1 be the unit function, then for any J ∈ L 2 and for any scalar α ∈ R, T satisfies T (J + αI)(s) = T J(s) + αI(s). Proof of Lemma A.1. We begin by noting the following inequality for any f: have that for all b ∈ V: From we can straightforwardly derive the fact that for any b ∈ V: (this can be seen by negating each of the functions in and using the properties of the max operator). Assume that for any b ∈ V the following inequality holds: Since holds for any b ∈ V and, by, we have in particular that whenever holds which gives the required . Lemma A.2 and Lemma A.3 are given without proof but can be straightforwardly checked. Proof of Lemma A.4. The proof is standard, we give the details for the sake of completion. Indeed, using the Tonelli-Fubini theorem and the iterated law of expectations, we have that: where we have used Jensen's inequality to generate the inequality. This completes the proof. Proof of Proposition A.1. We wish to prove that: Firstly, we observe that: 1[) and. The follows after applying Lemma A.2 and Lemma A.3. Proof of Lemma A.5. The proof follows almost immediately from the triangle inequality, indeed for any J ∈ L 2: where we have added and subtracted T J to produce the inequality. The then follows after inserting the definition of T (J). Proof of Lemma A.6. The proof follows directly from Lemma A.5. Indeed, we observe that for any J ∈ L 2 we have where we have added and subtracted J to produce the inequality. The then follows from Lemma A.5. Proof of Lemma A.7. Part 2 immediately follows from the properties of the max and min operators. It remains only to prove part 1. We seek to prove that for any s ∈ S, if J ≤J then We begin by firstly making the following observations: 1. For any x, y, h ∈ V x ≤ y =⇒ min{x, h} ≤ min{y, h}. 2. For any Assume that J ≤J, then we observe that: ≤ γ max where we have used in the penultimate line. The immediately follows after applying. Proof of Theorem 1. We begin by noting the following inequality holds: The inequality follows by noticing J k,π ≤ max We now observe that: where we have used the stationarity property and, in the limit m → ∞ and, in the last line we used the Fatou lemma. The constant c is given by c: Hence, we now find that Now since holds ∀π ∈ Π we find that: Lastly, applying min operator we observe that: It now remains to show the reverse inequality holds: Indeed, we observe that = min We now apply the min operator to both sides of which gives: After taking expectations, we find that: Now by Jensen's inequality and, using the stationarity of the state process (recall the expectation is taken under π) we have that: By standard arguments of dynamic programming, the value of the game with horizon n can be obtained from n iterations of the dynamic recursion; in particular, we have that: Inserting and into gives: where c(m):= Hence, we find that: we deduce the after noting that G(The proofs of the in Sec. 
4 are constructed in a similar fashion that in (approximate dynamic programming). However, the analysis incorporates some important departures due to the need to accommodate the actions of two players that operate antagonistically. We now prove the first of the two of Sec. 4. Proof of Theorem 5. We firstly notice the construction ofτ given bŷ is sensible since we observe that min{t|G(s t) ≤ J } = min{t|G(s t) ≤ min{G(s t), Q (s t)} = min{t|G(s t) ≤ Q }. Step 1 Our first step is to prove the following bound: Proof. which is the required . Step 2 Our next task is to prove that the quantity Q is a fixed point of F and hence we can apply the operator F to achieve the approximation of the value. Proof. Using the definition of T (c.f. Step 3 We now prove that the operator ΠF is a contraction on Q, that is the following inequality holds: Proof. The proof follows straightforwardly by the properties of a projection mapping: Step 4 The is proven using the orthogonality of the (orthogonal) projection and by the Pythagorean theorem. Indeed, we have that: Proof. Φr − Q 2 = Φr − ΠQ 2 + ΠQ − Q Hence, we find that which is the required . Result 2 Proof. The proof by Jensen's inequality, stationarity and the non-expansive property of P. In particular, we have Inserting the definitions of Q andQ into then gives: It remains therefore to place a bound on the term Q −Q. We observe that by the triangle inequality and the fixed point properties of F on Q andF onQ we have ≤ γ Q − Φr + Q − Φr So that The then follows after substituting the of step 4. Let us now define the following quantity: Step 5 Proof. We now observe that s k can be described in terms of an inner product. Indeed, using the iterated law of expectations we have that Step 5 enables us to use classic arguments for approximate dynamic programming. In particular, following step 5, Theorem 6 follows directly from Theorem 2 in with only a minor adjustment in substituting the max operator with min.
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
Bygw86VKwS
The paper tackles fault-tolerance under random and adversarial stoppages.
We propose a novel framework to generate clean video frames from a single motion-blurred image. While a broad range of literature focuses on recovering a single image from a blurred image, in this work, we tackle a more challenging task i.e. video restoration from a blurred image. We formulate video restoration from a single blurred image as an inverse problem by setting clean image sequence and their respective motion as latent factors, and the blurred image as an observation. Our framework is based on an encoder-decoder structure with spatial transformer network modules to restore a video sequence and its underlying motion in an end-to-end manner. We design a loss function and regularizers with complementary properties to stabilize the training and analyze variant models of the proposed network. The effectiveness and transferability of our network are highlighted through a large set of experiments on two different types of datasets: camera rotation blurs generated from panorama scenes and dynamic motion blurs in high speed videos. Our code and models will be publicly available. Capturing an image is not an instant process; to capture enough photons, the photosensitive elements of a camera have to be exposed to light for a certain interval of time, called exposure time. Therefore, during this interval if an object is moving in the observed scene or the camera is undergoing an arbitrary motion, the ing image will contain a blurring artifact known as motion blur. In general, motion blur is an unwanted behaviour in vision applications e.g.image editing , visual SLAM (and 3D reconstruction , as it degrades the visual quality of images. To cope with this type of artifact, image deblurring aims to restore a sharp image from a blurred image. This problem is known to be ill-posed since the blur kernel used for deconvolution is generally assumed to be unknown. Earlier studies assume a uniform-blur over the image to simplify the estimation of the single deconvolution blur kernel used to remove the blur (; ;). Even though the methods deploy deblurring tasks with uniform-blur assumption, the assumption is often violated in practice. For instance, when the blur is caused by out-of-plane camera rotation, the blur pattern becomes spatially variant. Moreover, the problem is more complex when objects in a scene are moving i.e.dynamic blur. While previous literature focuses on recovering a sharp image from a blurred image, we tackle a more challenging task i.e.video restoration from a blurred image. Restoring the underlying image sequence of a blurred image requires both contents and motion prediction. We formulate video restoration from a blurred image as an inverse problem where a clean sequence of images and their motion as latent factors, and a blurred image as an observation. Some of previous deblurring approaches (; ; ; ;) also estimate the underlying motion in a blurred image, however, their goal remains in single frame restoration. proposed to extract video frames from a single motion-blurred image. Their approach is close to image translation model without inferring underlying motions between the latent frames. addressed this issue by estimating pixel level motion from a given blurred input. However, their model is still prone to sequential error propagation as frames are predicted in a sequential manner using a deblurred middle frame. In this paper, we propose a novel framework to generate a clean sequence of images from a single motion-blurred image. 
Our framework is based on a single encoder-decoder structure with Spatial Transformer Network modules (STN) and Local Warping layers (LW) to restore an image sequence and its underlying motion. Specifically, a single encoder is used to extract intermediate features which are passed to multiple decoders with predicted motion from STN and LW modules to generate a sequence of deblurred images. We evaluate our model on two types of motion blur. For rotation blur, which is caused by abrupt camera motion, we generated a synthetic dataset from panoramic images (J. Xiao & Torralba., 2012). For dynamic blur caused by fast moving objects in a scene, we used a high speed video dataset . The proposed model is evaluated on the panorama and the high speed video datasets under various motion patterns. Both the quantitative metrics and qualitative highlight that our method is more robust and performs favorably against the competing approach 1. For further investigation, we demonstrate the transferability of our model by cross-dataset evaluation. We also propose a simpler and lighter variation of our model guiding that our approach is flexible and can be easily extended to arbitrary number of frame prediction model with negligible performance trade-off. In short, our contributions are as follows. 1) We propose a novel unified architecture to restore clean video frames from a single motion-blurred image in an end-to-end manner. 2) Loss terms are designed to stably train the proposed network. 3) We perform thorough experiments to analyze the transferability and flexibility of the proposed architecture. 4) The performance of our model quantitatively and qualitatively performs favorably against the competing approach. Moreover due to flexibility of our model, we show that our approach is robust to heavy blurs where the previous approach fails. Image deblurring in general refers to the restoration of an image affected by blur. In this paper, we focus exclusively on the motion blur. Image deblurring is an ill-posed inverse problem when a blur kernel is unknown i.e.blind deconvolution problem, as different latent images can be transformed to a blurred image depending on its blur kernel. Early stage of deblurring studies (; ; ; ; ; ; ;) assume a single blur kernel that is applied to an image globally. The restoration of blur images is often modeled as a maximization problem of probabilistic models . To narrow down the ambiguity of the blur kernel estimation, natural image priors (; ; 2016;) are exploited. For instance, formulate the blur kernel estimation as a process to recover internal recurrence of image patches. 0 regularization , dark channel prior and extreme channel prior are also used to improve image deblurring. While single blur kernel estimation approaches are effective when blur kernels are shift-invariant, they fail when the blur is not spatially uniform. A non-uniform blur can be caused by camera rotation, depth variation or moving objects in a scene. To restore images affected by motion blur from pure rotations, use the geometric information of the camera motion as a prior to recover the non-uniform blur model. Recently, deep network based methods are proposed to handle general blur patterns without the uniform blur assumption. propose multi-scale deep networks with multi-scale loss that mimics coarse-to-fine approaches to restore sharp images under non-uniform blurred images. proposed a spatially variant neural networks to learn spatially variant kernels. 
proposed to extract a video sequence from a single motion-blurred image using multiple deep networks. They showed that deep networks can successfully generate an image sequence from a blurred image, however there remains a few limitations. Their proposed framework consists of multiple networks of which each network is specialized to predict a specific frame in a sequence. Each network is trained separately and sequentially starting from the middle frame and then adjacent frames taking previously predicted frames as inputs. As a , the non-middle frame prediction heavily relies on previously predicted frames including the middle frame itself, therefore when the middle frame is erroneous the error propagates across frames. proposed a two-step strategy to generate a video from a motion-blurred image using three complementary networks. They used video autoencoder to learn motion and frame generation from clean frames as a pretraining phase. Latter, they introduced a motion disentangle network to extract motion from blurred image. They also used independent deblurring network as their approach requires a clean middle frame generated from a blurred image in advance. Although their approach takes motion information into account, the approach generates frames sequentially starting from the middle frame to adjacent frames which in error propagation just as in. Unlike the previous works, our approach runs in an end-to-end manner within a single training stage without error propagation across frames. Collecting a large number of natural motion-blurred images is a daunting task. Hence, a common practice in computer vision research is to generate blurry images by combining a sequence of sharp images using various approaches ranging from simple averaging to learnable methods . The source of motion blur in an image can be generalized into two main categories: rapid camera motion (camera shake) and dynamic motion of objects in the scene. In this section, we briefly explain how we generate a blurry image dataset by considering each case individually. In order to generate a rotation blurred image dataset, we use the SUN360 panorama dataset (J. Xiao & Torralba., 2012). This dataset provides various panoramas with 360 • field of view. Hence, a virtual camera can be modeled to point at different orientations to represent the camera rotation in SO. Given a panorama P of size H × W, we developed a simple yet effective framework to generate blurred images. First, the panorama is projected onto a unit sphere by linearly mapping each pixel coordinate (x, y) ∈ P into spherical coordinates (θ, φ) with θ ∈ (0, 2π) and φ ∈ (−π/2, π/2). Then, a synthetic image can be captured via a virtual camera by re-projecting the 3D points on the sphere into an image plane as briefly discussed in and. Using this procedure we first capture an image by positioning the virtual camera at an arbitrary orientation. We call the image generated at this orientation initial image. Then, we rotate the camera by a random rotation matrix (with β = (β x, β y, β z) its Euler angle representation) and capture a second image at the new camera position called final image. We finally use a quaternion spherical linear interpolation technique (Slerp) The camera rotation angle is uniformly sampled from [−10 •, 10 •]. In order to generate a realistic blurred image, the number of intermediate images have to be adjusted automatically depending upon the rotation magnitude between the initial and final frames. 
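A minimal sketch of the orientation-sampling and Slerp step is given below, assuming SciPy's rotation utilities and leaving out the sphere-to-image rendering; the choice of how many intermediate frames to render for a given rotation magnitude is addressed next.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def sample_camera_trajectory(n_frames, max_angle_deg=10.0, rng=None):
    """Sample initial/final virtual-camera orientations and Slerp between them.

    Returns rotation matrices for the virtual camera; rendering each one against
    the spherical panorama yields the intermediate sharp frames. The per-axis
    rotation range follows the text ([-10, 10] degrees); only the relative
    rotation matters, so the initial orientation is taken as the identity here.
    """
    rng = np.random.default_rng() if rng is None else rng
    r_init = Rotation.from_euler("xyz", [0.0, 0.0, 0.0])
    beta = rng.uniform(-max_angle_deg, max_angle_deg, size=3)   # Euler angles (deg)
    r_final = Rotation.from_euler("xyz", beta, degrees=True)
    key_rots = Rotation.from_quat(np.vstack([r_init.as_quat(), r_final.as_quat()]))
    slerp = Slerp([0.0, 1.0], key_rots)
    times = np.linspace(0.0, 1.0, n_frames)
    return slerp(times).as_matrix()          # (n_frames, 3, 3)

# A blurred sample is then obtained by rendering each orientation and averaging the
# resulting frames (averaging is stated for the high-speed-video case and assumed here).
```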
Therefore, we use a simple linear relationship between the number of frames to be generated (n) and the rotation magnitude as follows: n = c + (1/3)·‖β‖, where c is a constant and ‖β‖ is the magnitude of β. In this manner, we use 1000 panoramic images from which we generate 26,000 training and 3,200 test images of size 128 × 128 px. In order to generate more realistic and generic (arbitrary camera motions and dynamic scene) blurred images, we take advantage of a GoPro high speed video dataset. This dataset provides 22 training and 11 test scenes, each scene containing frames of size 1280 × 720 px. A blurry image is generated by averaging n consecutive frames. In our experiments, we fixed n = 7 and generated 20,000 training and 2,000 test images by randomly cropping images of size 256 × 256 px. In this section, we describe our network structure and loss functions. To explicitly take the camera motion into consideration, our network is designed with spatial transformer networks (STNs) in an encoder-decoder structure (Fig. 2). Given a blurry image I_b as an input, our model outputs {I_1, ..., I_(m−1), I_m, I_(m+1), ..., I_n}, where I_m is the deblurred middle frame and the I_j with j ≠ m are the recovered non-middle frames. To ensure sharp image generation and stable training, the network is trained using three loss terms: a multi-scale photometric loss, a transformation consistency loss and a penalty term. The middle frame I_m is reconstructed using a U-net-like network. The encoder contains five convolutional blocks, each block containing two layers of convolutions with a spatial kernel size of 3 × 3 and stride sizes of 2 and 1, respectively. The feature maps are downsampled to half size and the number of channels is doubled after each convolutional block. The decoder network also contains five convolutional blocks to upsample features and to predict images at different scales. In each block, a feature is first upscaled using a transposed convolution (deconvolution) layer of kernel size 4 × 4 and a stride size of 2. The image predicted in the previous block is also upscaled in the same manner. The upsampled feature and its respective image are then concatenated channel-wise with the corresponding feature from the encoder (skip connection, as shown in Fig. 2a), then passed through five layers of convolutions with dense connections to output a feature, which is used to predict an image at the current block. In this manner, features and images are successively upsampled to predict a full-scale middle frame. Along with the last feature map from the decoder, the predicted image is finally passed through a refining convolutional block. It contains seven dilated convolutional layers, each with a kernel size of 3 × 3 and different dilation constants. The purpose of this network is to further refine the predicted frame with contextual information by effectively enlarging the receptive field size of the network. The non-middle frames are reconstructed based on the encoded features of the middle frame via learned transformations by STN modules and local warping (LW) networks. First, the encoded middle frame feature U_e ∈ R^(H×W×C), with width W, height H and C channels, from the encoder is transformed into a non-middle frame feature using a feature transformer network (FTN). Second, the decoded middle frame image I_d ∈ R^(H×W×3) predicted from the corresponding middle frame decoder is transformed using an image transformer network (ITN).
Third, the transformed feature and image are concatenated channel-wise and are passed through a decoder to predict a non-middle frame (Fig. 2b). We also input the middle frame feature into the non-middle frame decoder in order to guide the decoder to learn the spatial relation between the middle and the non-middle frame. The decoder network here is similar to the one used for predicting a middle frame in the previous section. These mechanisms can be summarized with the notation l ∈ {1, ..., k} as an index over the k feature levels (scales), t as a subscript for the transformed feature/image, and I_p as the frame predicted by a non-middle frame decoder D. Each non-middle frame is reconstructed by applying multi-scale transformer networks to the middle frame encoder. Given ground truth non-middle frames during training, our model learns the transformation parameters to be applied to the middle frame at different scales in order to output the desired non-middle frames. The fact that unique transformer networks are applied at each feature and image scale gives the model the capacity to learn various types of transformations, hence making it robust to different blur patterns including large blurs. STNs in our model learn non-local transformations in a given motion-blurred image. In order to compensate for locally variant transformations, we designed a local warping network. This network is conditioned on the input feature like the STN; however, instead of predicting global transformation parameters, it predicts a pixel-wise displacement, i.e. motion flow. Given an input feature U ∈ R^(H×W×C), the local warping network outputs a motion flow of size H × W × 2. We used two convolutional layers of kernel size 3 × 3 for the network. By warping the input feature with the predicted motion flow, we obtain a locally transformed feature which is used as an input to the decoder (Fig. 2b). Given a blurry input, in order to generate video frames that are sharp both locally and globally, we trained our network with a multi-scale photometric loss between the images predicted by the decoder network and the ground truth image. Bilinear downsampling is used to resize the ground truth image to the corresponding predicted frame size at the different scales. Let l denote a scale level, k the total number of scales, {ŷ_l} for l = 1, ..., k a set of predicted images from the smallest size (ŷ_1) to the full scale (ŷ_k), and {y_l} for l = 1, ..., k a set of downsampled ground truth images, where y_k is the full-scale ground truth image. For training a model predicting a sequence with n frames from a single blurry image, we compute the multi-scale photometric loss as the term L_mp of the final training loss. We use individual transformer networks for each feature level when predicting non-middle frames. This augments our model with the capacity to learn transformations at different levels, making it robust to various blur patterns. However, we expect the transformations at different scales to be aligned for successfully reconstructing temporally consistent non-middle frames. Especially at the initial stages of the training, where the transformer parameters are random, it is beneficial that our model understands the relationship between the transformations across different scales. In order to impose this notion on our model and facilitate smooth training, we propose the transformation consistency loss. Let {θ_l} for l = 1, ..., k be the set of predicted transformation parameters at the different scales.
The transformation consistency loss for predicting the n − 1 non-middle frames can be defined as the term L_tc of the final training loss, where |·|_2 is an ℓ2 loss between the transformation parameters. Predicting multiple frames from a single blurry image can be problematic at times when the model fails to learn any type of transformation and simply replicates the middle frame prediction as the non-middle frames. In order to remedy this issue, we design a penalty term to enforce diversity among the generated images. This is accomplished by explicitly maximizing the sum of absolute differences (SAD), i.e. minimizing the negative SAD, between a predicted frame and its time-symmetric (about the middle frame) ground truth frame. For example, when predicting seven frames {I_1, ..., I_4, ..., I_7}, we enforce the predicted image I_1 to be different content-wise from the ground truth image I_7, and vice versa. The penalty is imposed in a symmetric manner such that the model learns to be sensitive to smaller transformations close to the middle frame as well as larger transformations at the end frames. Given a predicted frame ŷ_i and the corresponding time-symmetric ground truth y_(n+1−i), the penalty term is computed as the term L_p of the final training loss, where m is the middle frame index and n is the total number of frames. The final training loss function is defined as the weighted sum of these terms, where λ_tc and λ_p are weight coefficients for the transformation consistency loss and the penalty term, respectively. We used λ_tc = 1e3 and λ_p = 0.5. The task at hand has two main ambiguities: i) temporal shuffling and ii) reverse ordering. As explained in Section 3, motion blur is the result of an averaging process, and restoring a temporally consistent (no shuffling) sharp frame sequence from a given motion-blurred input is a non-trivial task as the averaging destroys the temporal order. Previous work mentions that a photometric loss is not a sufficient constraint to make their network converge; hence, they propose a pair-wise order-invariant loss to train their network. The same loss function has also been used to fine-tune the recurrent video decoder in a related network. We find experimentally that a multi-scale photometric loss is a sufficient constraint to train our network. We further impose more constraints using other loss terms to improve performance (see ablation studies in Sec. 5.4.2). By design, our model allows motions to be learned in a symmetric manner (about the middle frame), with transformer networks close to the middle frame decoding smaller motions and those further from the middle frame decoding larger motions. This notion is enforced by the transformation consistency loss and the symmetric constraint term during training. The fact that our model is optimized in a joint manner allows frames to be reconstructed in a motion-guided sequence. Other than temporal shuffling, another issue is reverse ordering. Given a single motion-blurred input, recovering the ground truth order is a highly ill-posed problem which is intractable, since reversely ordered frames result in the same motion-blurred image. Neither our work nor previous works are capable of predicting the right order. Hence, we evaluate frame reconstructions using both the ground truth order and its reverse order, then report the higher metric in the experiment section. Our model is implemented and trained using PyTorch. We chose Adam as the optimizer with β_1 and β_2 fixed to 0.9 and 0.999, respectively.
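As a rough illustration of how the three loss terms described above could be combined in PyTorch, consider the minimal sketch below. It uses the weight coefficients quoted in the text (λ_tc = 1e3, λ_p = 0.5); the choice of an L1 photometric error, the per-scale averaging, and the tensor layouts are assumptions made for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def multiscale_photometric_loss(preds, target):
    # preds: list of predicted frames, coarse to fine, each (B, 3, h_l, w_l)
    # target: full-resolution ground-truth frame (B, 3, H, W)
    loss = 0.0
    for pred in preds:
        gt = F.interpolate(target, size=pred.shape[-2:],
                           mode='bilinear', align_corners=False)  # bilinear downsampling of GT
        loss = loss + F.l1_loss(pred, gt)                          # photometric error at this scale
    return loss

def transformation_consistency_loss(thetas):
    # thetas: list over scales of predicted STN parameters for one frame, each (B, P)
    # assumes the same parameterization at every scale; penalizes disagreement between scales
    loss = 0.0
    for i in range(len(thetas) - 1):
        loss = loss + F.mse_loss(thetas[i], thetas[i + 1])
    return loss

def symmetric_penalty(preds_full, gts_full):
    # preds_full / gts_full: lists of the n full-scale predicted / ground-truth frames
    # negative absolute difference between a frame and its time-symmetric ground truth
    n = len(preds_full)
    loss = 0.0
    for i in range(n):
        loss = loss - torch.mean(torch.abs(preds_full[i] - gts_full[n - 1 - i]))
    return loss

def total_loss(l_mp, l_tc, l_p, lambda_tc=1e3, lambda_p=0.5):
    return l_mp + lambda_tc * l_tc + lambda_p * l_p
```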
On our synthetic blur dataset, we train the model using images of size 128 × 128 px and a mini-batch size of 8 to predict the initial, middle and final frames. A mini-batch size of 4 and an input size of 256 × 256 px are used to predict sequences of frames when training on the high speed video dataset. In all experiments, we train our model for 80 epochs. We set the learning rate λ = 1e−4 at the start of the training and decay it by half at epochs 40 and 60. All the training and test images are cropped from the original resolution images without resizing. In this section, we analyze the performance of our model qualitatively and quantitatively on both camera shake blurs generated from panoramic scenes and dynamic blurs obtained from averaging frames in high speed videos. We report test results using the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) metrics. To purely evaluate the quality of the generated images without the ordering estimation issue due to reverse ordering, we report the higher PSNR/SSIM metric of either the ground truth order or the reverse order of frames, i.e. max{PSNR/SSIM(predictions, ground truth order), PSNR/SSIM(predictions, reverse order)}. For dynamic blur, we compare our results with the competing approach as they also use the same high speed video dataset. The averaged metrics obtained on the testing set are summarized in Table 1. The results show that our model performs favorably against their model on both PSNR and SSIM on the high speed video dataset. While the middle frame prediction shows a moderate performance increase, for the initial and final frame predictions our model outperforms by a large margin. The performance gap between the middle frame and the non-middle frames is relatively larger in their method than in ours. This is due to the sequential prediction, which makes the non-middle frame prediction heavily dependent on the generated middle frame, resulting in error propagation. As stated in their paper, this limitation is particularly problematic when a heavy blur affects the input image, since the middle frame prediction becomes less reliable. Our approach is relatively robust to heavy blur as the proposed model generates frames independently from multiple decoders; therefore the error is not propagated (Fig. 7).
Figure 3: Rotation blurred images generated from panorama scenes. The top row shows ground truth frames and the bottom row shows frames restored from the blurred inputs.
Figure 5: Partially blurred dynamic motion example. The man is blurred with dynamic motion while the background is close to static.
We also evaluate our model on camera rotational blurs (Table 1, Panorama blur). In terms of blurriness, the panorama scenario is more challenging as we simulate a broader range of blurs than the blurs in the high speed video dataset, i.e. from sharp images to heavily blurred images. We observed visually better results for the panorama scenario but lower quantitative numbers. One main reason is that the panorama ground truths are relatively sharper, while the high speed video contains blurry ground truth images due to dynamic motion and short exposure time. Therefore, the visual results on panorama scenes from our model are sharp even though the quantitative performance is relatively lower. Please refer to Sec. 7.1 in the appendix for the quantitative results on seven-frame predictions. The qualitative results for panoramic scenes and high speed videos show that our model can successfully restore multiple frames from a blurred input under various blur patterns. The restored frames from a blurred image in the panorama scenario (Fig. 3) are relatively sharper than in the dynamic blur case. As mentioned earlier, one of the reasons is the higher quality of the ground truth images.
In the high speed video scenario (Fig. 4), ground truth images have locally blurred contents due to fast dynamic motions. We compare our approach and the previous method on relatively heavily blurred images from the high speed video dataset. As can be seen from Fig. 4, our method reconstructs contents consistently across frames and restores visually sharper videos compared to the previous method. In the case of dynamic blurs, where the motion of multiple objects and static objects are mixed as shown in Fig. 5, our model can generate video frames unravelling the underlying spatially varying motion. Failure cases happen for undersampled and heavily blurred inputs, as can be seen from Fig. 7 and Fig. 9d. Please refer to Sec. 8 in the appendix for more qualitative results.
Figure 6: STN transformation visualization. The ground truth middle frame is transformed to a non-middle frame using the pure STN transformation.
In order to visually analyze whether the STN module in our model reasonably infers the blur motion, we apply the STN transformation to the middle frame and compare it with the ground truth non-middle frames. Let i be the index of the initial frame, m the middle frame, and f the final frame. The input image is obtained by averaging a set of frames {F_j} for j = i, ..., f. We can obtain the STN transformation parameters {θ_j}, j ≠ m, from the feature transformer network, which transforms middle frame features to non-middle frame features. We apply the STN transformation T with the parameter θ_j to the middle frame F_m to visualize whether the transformation implies valid motion information. Fig. 6 shows that the transformation spatially aligns the contents of the middle frame to those of the non-middle frames. We report a cross-dataset panorama→high speed video evaluation to assess the generalization capability of our model. A model trained on the panoramic scenes is evaluated on the high speed video test set (Table 2). Despite a performance degradation, our model trained on the panorama dataset performs on par with the competing approach trained on the high speed video dataset. The absence of dynamic motion in the panorama dataset, which is apparent in high speed videos, can be one contributing factor explaining the performance loss, in addition to the domain gap, e.g. image contents, sharpness, blurriness. Model size can be a bottleneck when the number of frames in a video to be predicted increases, since our model uses multiple decoders and transformer networks. Hence, we experiment with a lighter model by replacing the decoders and STN modules with weight-shared layers. As opposed to using individual decoders for each non-middle frame, we use a single decoder, i.e. weight-shared decoders. For the STN, we apply inverse transformations by assuming symmetric motion about the middle frame, therefore reducing the number of transformer networks by half. We tested this light architecture on a model that predicts three (initial, middle and final) frames, and it reduces the number of model parameters by 48% while yielding only a 0.4 dB performance decrease on average (Table 2). The feature transformer network (FTN) at different levels of abstraction is the core part of our model for network convergence. The local warping (LW) layer also significantly improves the performance of our model. The best model performance is, however, achieved with the three network components (FTN, LW, ITN) combined (Table 3). As mentioned earlier, the multi-scale photometric loss (PML) is a sufficient constraint to make our network converge during training.
We also experimentally find that a model trained with the transformation consistency loss (TCL) not only converges faster with smoother behavior but also gives better performance during testing. The penalty term (PT) gives a marginal performance improvement when predicting fewer frames, as the photometric loss is already a sufficient constraint. In the 3-frame prediction model, the penalty term improved performance marginally, by around 0.25 dB, while in the 7-frame prediction model it improved performance by approximately 0.6 dB. The penalty term enforces the model to consider subtle differences, especially when the motion is small. We present a novel unified architecture that restores video frames from a single blurred image in an end-to-end manner. During training, the feature and image transformer networks indirectly learn blur motions without motion supervision. The designed loss function with regularizers enforces the model to consider subtle differences and leads to fast loss convergence. We evaluate our model on the two datasets with rotation blurs and dynamic blurs and demonstrate qualitatively and quantitatively favorable performance against the competing approach. The cross-dataset evaluation demonstrates that our model can generalize even when the training and test sets have significantly different blur patterns and a domain gap. We additionally propose a lighter version of the model by weight-sharing of the decoders and STN modules under a symmetric frame motion assumption. This modification enables the model to have a negligible parameter size increment even when the number of predicted frames is high. Unlike the previous approaches, our model predicts frames in a single step without middle frame dependency. It is advantageous not only because it is simple to use but also because it is robust to heavy blurs where middle frame prediction often fails. Overall, the simplicity and flexibility of our method make it a promising approach for future applications such as deblurring, temporal super resolution and flow estimation from a motion-blurred image. Here, we present additional details pertaining to the experiments that could not be included in the main text due to space constraints. As can be inferred from Table 4, our method consistently performs favorably against the competing method. The prediction performs best for middle frames and consistently decreases for non-middle frames for both our method and the competing method. The overall performance reported here is consistent with the results in the main paper. Here, we show qualitative results (frame-by-frame comparison) from the high speed video dataset that could not be included in the main text due to space constraints. We also tested our model on motion-blurred examples from prior work, and the restored videos are shown in Fig. 9. Here, we show qualitative results from the SUN360 panorama dataset (J. Xiao & Torralba, 2012) that could not be included in the main text due to space constraints. The samples contain various blur patterns, e.g. static, partial blur, and heavy blur.
We present a novel unified architecture that restores video frames from a single motion-blurred image in an end-to-end manner.
High performance of deep learning models typically comes at the cost of considerable model size and computation time. These factors limit applicability for deployment on memory- and battery-constrained devices such as mobile phones or embedded systems. In this work we propose a novel pruning technique that eliminates entire filters and neurons according to their relative L1-norm as compared to the rest of the network, yielding more compression and decreased redundancy in the parameters. The resulting network is non-sparse yet much more compact and requires no special infrastructure for its deployment. We prove the viability of our method by achieving 97.4%, 47.8% and 53% compression of LeNet-5, ResNet-56 and ResNet-110 respectively, exceeding state-of-the-art compression reported on ResNet without losing any performance compared to the baseline. Our approach not only exhibits good performance but is also easy to implement on many architectures. While deep learning models have become the method of choice for a multitude of applications, their training requires a large number of parameters and extensive computational costs (energy, memory footprint, inference time). This limits their deployment on storage- and battery-constrained devices, such as mobile phones and embedded systems. To compress deep learning models without loss in accuracy, previous work proposed pruning weights by optimizing the network's complexity using second-order derivative information BID1 BID4. While the second-order derivative introduces a high computational overhead, BID7 BID9 explored low-rank approximations to reduce the size of the weight tensors. Another line of work BID3 BID14 proposed to prune individual layer weights with the lowest absolute value (non-structural sparsification of layer weights). BID2 followed the same strategy while incorporating quantization and Huffman coding to further boost compression. While the aforementioned methods considered every layer independently, BID12 proposed to prune the network weights in a class-blind manner, e.g. individual layer weights are pruned according to their magnitude as compared to all weights in the network. Notably, all approaches that prune weights non-structurally generally result in high-sparsity models that require dedicated hardware and software. Structured pruning alleviates this by removing whole filters or neurons, producing a non-sparse compressed model. In this regard, BID11 proposed channel-wise pruning according to the L1-norm of the corresponding filter. BID15 learned a compact model based on learning structured sparsity of different parameters. A data-free algorithm was implemented to remove redundant neurons iteratively on fully connected layers in BID13. In BID6, connections leading to weak activations were pruned. Finally, BID16 pruned neurons by measuring their importance with respect to the penultimate layer. Generally, in structured pruning, each layer is pruned separately, which requires calculation of layer importance before training. This work features two key components: a) Blindness: all layers are considered simultaneously; blind pruning was first introduced by BID12 to prune individual weights; b) Structured Pruning: removal of entire filters instead of individual weights.
To the best of our knowledge, we are the first to use these two components together to prune filters based on their relative L1-norm compared to the sum of all filters' L1-norms across the network, instead of pruning filters according to their L1-norm within the layer BID11, inducing a global importance score for each filter. The contribution of this paper is two-fold: i) Proposing a structured class-blind pruning technique to compress the network by removing whole filters and neurons, which results in a compact non-sparse network with the same baseline performance. ii) Introducing a visualization of global filter importance to devise the pruning percentage of each layer. As a result, the proposed approach achieves higher compression gains with higher accuracy compared to the state-of-the-art reported on ResNet-56 and ResNet-110 on the CIFAR-10 dataset BID8. Consider a network with a convolutional (conv) layer and a fully connected (fc) layer. We denote each filter Filter_i, where i ∈ [1, F], and F is the total number of filters in the conv layer. Each filter is a 3D kernel space consisting of channels, where each channel contains 2D kernel weights. For the fc layer, we denote W_m, a 1-D feature space containing all the weights connected to a certain neuron Neuron_m, with m ∈ [1, N] and N denoting the number of neurons. It should be noted that we do not prune the classification layer. Each pruning iteration in our algorithm is structured as follows:
Algorithm 1 Pruning procedure
1: for i ← 1 to F do                          — loop over the filters of a conv layer
2:     compute the L1-norm of all channels' kernel weights of Filter_i
3:     norm_conv(i) ← this L1-norm divided by the filter's number of kernel weights
   ⋮   (analogous steps compute the normalized norms norm_fc(m) for the fc neurons and the global threshold)
11:    if norm_conv(i) < threshold then
12:        prune(Filter_i)                     — remove filter if its normalized norm is less than threshold
13: for m ← 1 to N do
14:    if norm_fc(m) < threshold then
15:        prune(Neuron_m)                     — remove neuron if its normalized norm is less than threshold
Importance calculation. Although pre-calculation of filters' or layers' sensitivity to be pruned is not needed in our method, it can be visualized as part of the pruning criteria. In our algorithm, blindness implies constructing a hidden importance score, which corresponds to the relative normalized L1-norm. For instance, the relative importance of a certain filter in a conv layer w.r.t. all other filters in all layers is the ratio between the filter's normalized norm and the sum of all filters' normalized norms across the network, i.e. importance(Filter_i) = norm(i) / Σ_j norm(j). Normalization. As each layer's filters have a different number of kernel weights, we normalize the filters' L1-norms by dividing each by the number of kernel weights corresponding to the filter (Lines 3 and 6 as indicated in Algorithm 1). Alternatively, without normalization, filters with a higher number of kernel weights would have a higher probability of a higher L1-norm, and hence a lower probability of being pruned. Retraining process. Pruning without further adaptation results in performance loss. Therefore, in order to regain the base performance, it is necessary for the model to be retrained. To this end, we apply an iterative pruning schedule that alternates between pruning and retraining. This is conducted until a maximum compression is reached without losing the base accuracy.
Method                  Err%   Par.%   E.Par%
BID3                    0.77   92.00   84.00
Srinivas et al. BID14   0.81   95.84   91.68
Han et al. BID2         0.74   97.45   –
Table 2: Results on LeNet-5.
Error% for different percentages of parameters pruned (Par.%); "E.Par%" is the effective pruning percentage after adding the extra indices' storage for non-structured pruning, as studied by BID0.
3 Experiment
In order to assess the efficacy of the proposed method, we evaluate the performance of our technique on a set of different networks: first, LeNet-5 on MNIST BID10; second, ResNet-56 and ResNet-110 (BID5) on CIFAR-10 BID8. We use identical training settings as BID5; after pruning, we retrain with a learning rate of 0.05. For ResNet, when a filter is pruned, the corresponding batch-normalization weight and bias applied on that filter are pruned accordingly. After all pruning iterations are finished, a new model with the remaining number of parameters is created. We report compression results on the existing benchmark BID11 BID16. As shown in Table 1, we outperform the state-of-the-art compression reported by BID16 on both ResNet-56 and ResNet-110, with a lower classification error even compared to the baseline. In Table 2, while using one-shot pruning, the influence of our method's different components, structured pruning and blindness, is analyzed by removing one component in each test, resulting in: i) Non-Structured – pruning applied on weights separately; ii) Non-Blind – every layer is pruned individually. Then, the effect of the pruning strategy on the method with all its components is analyzed by comparing: i) Ours-Oneshot – using one-shot pruning, and ii) Ours – using iterative pruning. Comparing the versions that use one-shot pruning, our method has fewer parameters than the other versions ("Non-Structured" and "Non-Blind"). Finally, applying pruning iteratively is superior to one-shot pruning. We also show that our method performs better than the previously mentioned non-structured weight pruning techniques BID3 BID14. The proposed structured class-blind pruning offers comparable performance to BID2, without requiring dedicated hardware and software to realize the compression. We presented a novel structured pruning method to compress neural networks without losing accuracy. By pruning layers simultaneously instead of looking at each layer individually, our method combines all filters and output features of all layers and prunes them according to a global threshold. We have surpassed the state-of-the-art compression reported on ResNet-56 and ResNet-110 on CIFAR-10 BID16, compressing more than 47% and 53% respectively. Also, we showed that only 11K parameters are sufficient to exceed the baseline performance on LeNet-5, compressing more than 97%. To realize the advantages of our method, no customized hardware or libraries are needed. It is worth noting that, due to removing whole filters and neurons, the pruning percentage reflects the effective model compression percentage. For future work, we are dedicated to proving the applicability of our method on several different architectures and datasets. Hence, we plan to experiment on VGG-16, ResNet on ImageNet and/or other comparable architectures.
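To make the class-blind, normalized-L1 criterion of Algorithm 1 concrete, the following is a minimal PyTorch-style sketch. It only zeroes out the selected filters and neurons as a stand-in for structural removal, uses a pruning fraction to derive the global threshold, and skips details the paper handles explicitly (excluding the classification layer, removing matching batch-norm parameters, and the prune–retrain iterations); all function and variable names are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn

def normalized_unit_norms(model):
    """Collect the L1-norm of every conv filter / fc neuron, normalized by
    its number of kernel weights, across the whole network (class-blind)."""
    norms = []  # list of (module, unit index, normalized L1-norm)
    for mod in model.modules():
        if isinstance(mod, nn.Conv2d):
            w = mod.weight.data                      # (F, C, kh, kw)
            l1 = w.abs().sum(dim=(1, 2, 3)) / w[0].numel()
        elif isinstance(mod, nn.Linear):
            w = mod.weight.data                      # (N, in_features)
            l1 = w.abs().sum(dim=1) / w.shape[1]
        else:
            continue
        for i, v in enumerate(l1.tolist()):
            norms.append((mod, i, v))
    return norms

def prune_step(model, prune_fraction=0.1):
    """One pruning iteration: a single global threshold over all normalized norms."""
    norms = normalized_unit_norms(model)
    values = torch.tensor([v for _, _, v in norms])
    threshold = torch.quantile(values, prune_fraction).item()
    for mod, i, v in norms:
        if v < threshold:
            mod.weight.data[i].zero_()   # stand-in for structurally removing the filter/neuron
```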
We propose a novel structured class-blind pruning technique to produce highly compressed neural networks.
The recent success of neural networks for solving difficult decision tasks has incentivized incorporating smart decision making "at the edge." However, this work has traditionally focused on neural network inference, rather than training, due to memory and compute limitations, especially in emerging non-volatile memory systems, where writes are energetically costly and reduce lifespan. Yet, the ability to train at the edge is becoming increasingly important as it enables applications such as real-time adaptability to device drift and environmental variation, user customization, and federated learning across devices. In this work, we address four key challenges for training on edge devices with non-volatile memory: low weight update density, weight quantization, low auxiliary memory, and online learning. We present a low-rank training scheme that addresses these four challenges while maintaining computational efficiency. We then demonstrate the technique on a representative convolutional neural network across several adaptation problems, where it out-performs standard SGD both in accuracy and in number of weight updates. Deep neural networks have shown remarkable performance on a variety of challenging inference tasks. As the energy efficiency of deep-learning inference accelerators improves, some models are now being deployed directly to edge devices to take advantage of increased privacy, reduced network bandwidth, and lower inference latency. Despite edge deployment, training happens predominately in the cloud. This limits the privacy advantages of running models on-device and in static models that do not adapt to evolving data distributions in the field. Efforts aimed at on-device training address some of these challenges. Federated learning aims to keep data on-device by training models in a distributed fashion (Konecný et al., 2016). On-device model customization has been achieved by techniques such as weight-imprinting , or by retraining limited sets of layers. On-chip training has also been demonstrated for handling hardware imperfections . Despite this progress with small models, on-chip training of larger models is bottlenecked by the limited memory size and compute horsepower of edge processors. Emerging non-volatile (NVM) memories such as resistive random access memory (RRAM) have shown great promise for energy and area-efficient inference . However, on-chip training requires a large number of writes to the memory, and RRAM writes cost significantly more energy than reads (e.g., 10.9 pJ/bit versus 1.76 pJ/bit ). Additionally, RRAM endurance is on the order of 10 6 writes , shortening the lifetime of a device due to memory writes for on-chip training. In this paper, we present an online training scheme amenable to NVM memories to enable next generation edge devices. Our contributions are an algorithm called Streaming Kronecker Sum Approximation (SKS), and its analysis, which addresses the two key challenges of low write density and low auxiliary memory; two techniques "gradient max-norm" and "streaming batch norm" to help training specifically in the online setting; a suite of adaptation experiments to demonstrate the advantages of our approach. Efficient training for resistive arrays. Several works have aimed at improving the efficiency of training algorithms on resistive arrays. Of the three weight-computations required in training (forward, backprop, and weight update), weight updates are the hardest to parallelize using the array structure. 
Stochastic weight updates allow programming of all cells in a crossbar at once, as opposed to row/column-wise updating. Online Manhattan rule updating can also be used to update all the weights at once. Several works have proposed new memory structures to improve the efficiency of training. The number of writes has also been quantified in the context of chip-in-the-loop training. Distributed gradient descent. Distributed training in the data center is another problem that suffers from expensive weight updates. Here, the model is replicated onto many compute nodes and, in each training iteration, the mini-batch is split across the nodes to compute gradients. The distributed gradients are then accumulated on a central node that computes the updated weights and broadcasts them. These systems can be limited by communication bandwidth, and compressed gradient techniques have therefore been developed. In one such technique, the gradients are accumulated over multiple training iterations on each compute node and only gradients that exceed a threshold are communicated back to the central node. In the context of on-chip training with NVM, this method helps reduce the number of weight updates. However, the gradient accumulator requires as much memory as the weights themselves, which negates the density benefits of NVM. Low-Rank Training. Our work draws heavily from previous low-rank training schemes that have largely been developed for use in recurrent neural networks to uncouple the training memory requirements from the number of time steps inherent to the standard truncated backpropagation through time (TBPTT) training algorithm. Algorithms developed to address this memory problem include Real-Time Recurrent Learning (RTRL), Unbiased Online Recurrent Optimization (UORO), Kronecker Factored RTRL (KF-RTRL), and Optimal Kronecker Sums (OK). These latter techniques rely on the weight gradients in a weight-vector product looking like a sum of outer products (i.e., Kronecker sums) of input vectors with backpropagated errors. Instead of storing a growing number of these sums, they can be approximated with a low-rank representation involving fewer sums. The meat of most deep learning systems is the many weight matrix–activation vector products W · a. Fully-connected (dense) layers use them explicitly: y = σ(W · a + b) for each layer, where σ is a non-linear activation function (more details are discussed in Appendix C.1). Recurrent neural networks use one or many matrix-vector products per recurrent cell. Convolutional layers can also be interpreted in terms of matrix-vector products by unrolling the input feature map into strided convolution-kernel-size slices. Then, each matrix-vector product takes one such input slice and maps it to all channels of the corresponding output pixel (more details are discussed in Appendix C.2). The ubiquity of matrix-vector products allows us to adapt the techniques discussed in "Low-Rank Training" of Section 2 to other network architectures. Instead of reducing the memory across time steps, we can reduce the memory across training samples in the case of a traditional feedforward neural network. However, in traditional training (e.g., on a GPU), this technique does not confer advantages. Traditional training platforms often have ample memory to store a batch of activations and backpropagated gradients, and the weight updates ∆W can be applied directly to the weights W once they are computed, allowing temporary activation memory to be deleted. The benefits of low-rank training only become apparent when looking at the challenges of proposed NVM devices:
The benefits of low-rank training only become apparent when looking at the challenges of proposed NVM devices: Low write density (LWD). In NVM, writing to weights at every sample is costly in energy, time, and endurance. These concerns are exacerbated in multilevel cells, which require several steps of an iterative write-verify cycle to program the desired level. We therefore want to minimize the number of writes to NVM.. NVM is the densest form of memory. In 40nm technology, RRAM 1T-1R bitcells @ 0.085 um 2 are 2.8x smaller than 6T SRAM cells @ 0.242 um 2 . Therefore, NVM should be used to store the memory-intensive weights. By the same token, no other on-chip memory should come close to the size of the on-chip NVM. In particular, if our b−bit NVM stores a weight matrix of size n o × n i, we should use at most r(n i + n o)b auxiliary non-NVM memory, where r is a small constant. Despite these space limitations, the reason we might opt to use auxiliary (large, high endurance, low energy) memory is because there are places where writes are frequent, violating LWD if we were to use NVM. In the traditional minibatch SGD setting with batch size B, an upper limit on the write density per cell per sample is easily seen: 1/B. However, to store such a batch of updates without intermediate writes to NVM would require auxiliary memory proportional to B. Therefore, a trade-off becomes apparent. If B is reduced, LAM is satisfied at the cost of LWD. If B is raised, LWD is satisfied at the cost of LAM. Using low-rank training techniques, the auxiliary memory requirements are decoupled from the batch size, allowing us to increase B while satisfying both LWD and LAM 1. Additionally, because the low-rank representation uses so little memory, a larger bitwidth can be used, potentially allowing for gradient accumulation in a way that is not possible with low bitwidth NVM weights. In the next section, we elaborate on the low-rank training method. Let z (i) = W a (i) + b be the standard affine transformation building block of some larger network, e.g., y where. A minibatch SGD weight update accumulates this gradient over B samples: For a rank-r training scheme, approximate the sum by iteratively updating two rankr matricesL ∈ R no×r,R ∈ R ni×r with each new outer product:. Therefore, at each sample, we convert the rank-q = r + 1 systemLR + dz into the rank-rLR. In the next sections, we discuss how to compute rankReduce. One option for rankReduce(X) to convert from rank q = r + 1 X to rank r is a minimum error estimator, which is implemented by selecting the top r components of a singular value decomposition (SVD) of X. However, a naïve implementation is computationally infeasible and biased: solves these problems by proposing a minimum variance unbiased estimator for rankReduce, which they call the OK algorithm 2. The OK algorithm can be understood in two key steps: first, an efficient method of computing the SVD of a Kronecker sum; second, a method of splitting the singular value matrix Σ into two rank-r matrices whose outer product is a minimum-variance, unbiased estimate of Σ. Details can be found in their paper, however we include a high-level explanation in Sections 4.1.1 and 4.1.2 to aid our discussions. Note that our variable notation differs from. Recall that rankReduce should turn rank-q LR into an updated rank-rLR. q×q. Then we can find the SVD of & ), making it computationally feasible on small devices. Now we have: which gives the SVD of LR since Q L U C and Q R V C are orthogonal and Σ is diagonal. 
This SVD computation has a time complexity of O((n_i + n_o + q)q²) and a space complexity of O((n_i + n_o + q)q). There, it is shown that the problem of finding a rank-r minimum variance unbiased estimator of L R^T can be reduced to the problem of finding a rank-r minimum variance unbiased estimator of Σ and plugging it into the factored SVD above. Further, it is shown that such an optimal approximator for Σ = diag(σ_1, σ_2, ..., σ_q), where σ_1 ≥ σ_2 ≥ ··· ≥ σ_q, will involve keeping the m − 1 largest singular values and mixing the smaller singular values σ_m, ..., σ_q within their (k + 1) × (k + 1) submatrix, with m and k determined by the singular values. Let x_0 ∈ R^(k+1) be a unit vector derived from the smallest singular values, so that ||x_0||_2 = 1. Let X ∈ R^((k+1)×k) be orthogonal such that its left nullspace is the span of x_0. Then X X^T = I − x_0 x_0^T. Now, let s ∈ {−1, 1}^((k+1)×1) be uniform random signs and define Σ̂_L and Σ̂_R from the retained singular values, X, and s, where ⊙ denotes an element-wise product. Then Σ̂_L Σ̂_R^T = Σ̂ is a minimum variance, unbiased rank-r approximation of Σ (the fact that it is unbiased, E[Σ̂] = Σ, can be easily verified). Plugging Σ̂ into the factored SVD gives us a minimum variance, unbiased, rank-r approximation of L R^T. Although the standalone OK algorithm has good asymptotic computational complexity, our vector-vector outer product sum use case permits further optimizations. In this section we present these optimizations, and we refer readers to the explicit implementation, called Streaming Kronecker Sum Approximation (SKS), in Algorithm 1 of Appendix A. The main optimization is a method of avoiding recomputing the QR factorization of L and R at every step. Instead, we keep track of orthogonal matrices Q_L, Q_R and weightings c_L, c_R, c_x such that the accumulated sum can be written as Q_L (c_L c_R^T + diag(c_x)) Q_R^T. Upon receiving a new sample, a single inner loop of the numerically-stable modified Gram-Schmidt (MGS) algorithm (Björck, 1967) can be used to update Q_L and Q_R, and the coefficients computed during MGS can be used to find the new value of C = c_L c_R^T + diag(c_x). After computing Σ̂_L = Σ̂_R as above, we can orthogonalize these matrices. With this formulation, we can maintain orthogonality in Q_L, Q_R by multiplying them with the orthogonalized factors. These matrix multiplies require O((n_i + n_o)q²) multiplications, so this optimization does not improve the asymptotic complexity bounds. This optimization may nonetheless be practically significant since matrix multiplies are easy to parallelize and would typically not be the bottleneck of the computation compared to Gram-Schmidt. The next section discusses how to orthogonalize Σ̂_L efficiently. Orthogonalization of Σ̂_L is relatively straightforward. By construction, the columns of Σ̂_L are orthogonal since Z is orthogonal. However, they do not have unit norm. We can therefore pull out the norm into a separate diagonal matrix R_x with diagonal elements √c_x. We generated X by finding an orthonormal basis that is orthogonal to the vector x_0, so that we could have X X^T = I − x_0 x_0^T. An efficient method of producing this basis is through Householder matrices (x_0, X) = I − 2 v v^T / ||v||², where v = x_0 − e and (x_0, X) is a (k + 1) × (k + 1) matrix with first column x_0 and remaining columns X (user1551, 2013). The OK/SKS methods require O((n_i + n_o + q)q²) operations per sample and O(n_i n_o q) operations after collecting B samples, giving an amortized cost of O((n_i + n_o + q)q² + n_i n_o q / B) operations per sample. Meanwhile, a standard approach expands the Kronecker sum at each sample, costing O(n_i n_o) operations per sample. If q ≪ B, n_i, n_o, then the low-rank method is superior to minibatch SGD in both memory and computational cost.
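The factored SVD of Section 4.1.1 and the streaming accumulation can be illustrated with a small NumPy sketch. Note that the rank reduction below is the simple top-r truncated SVD (the biased minimum-error variant mentioned in Section 4.1), not the unbiased minimum-variance OK/SKS estimator, and the function names are illustrative.

```python
import numpy as np

def kron_sum_svd(L, R):
    """SVD of L @ R.T without forming the full n_o x n_i matrix.
    L: (n_o, q), R: (n_i, q). Returns U, singular values, V."""
    Q_L, R_L = np.linalg.qr(L)
    Q_R, R_R = np.linalg.qr(R)
    C = R_L @ R_R.T                       # small q x q core matrix
    U_C, sigma, VtC = np.linalg.svd(C)
    return Q_L @ U_C, sigma, Q_R @ VtC.T

def rank_reduce(L, R, r):
    """Naive rankReduce: keep the top-r singular components of L @ R.T."""
    U, s, V = kron_sum_svd(L, R)
    sqrt_s = np.sqrt(s[:r])
    return U[:, :r] * sqrt_s, V[:, :r] * sqrt_s

def low_rank_accumulate(samples, r):
    """Stream (dz, a) outer products into a rank-r pair (L, R)."""
    dz0, a0 = samples[0]
    L = np.zeros((dz0.shape[0], r)); R = np.zeros((a0.shape[0], r))
    for dz, a in samples:
        L_aug = np.hstack([L, dz[:, None]])     # rank-(r + 1) system ...
        R_aug = np.hstack([R, a[:, None]])
        L, R = rank_reduce(L_aug, R_aug, r)     # ... reduced back to rank r
    return L, R                                 # weight update ~ -lr * (L @ R.T)

# quick self-check of the factored SVD
rng = np.random.default_rng(0)
L0, R0 = rng.standard_normal((64, 3)), rng.standard_normal((32, 3))
U, s, V = kron_sum_svd(L0, R0)
assert np.allclose((U * s) @ V.T, L0 @ R0.T, atol=1e-8)
```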
SKS introduces variance into the gradient estimates, so here we analyze the implications for online convex convergence. We analyze the case of strongly convex loss landscapes f_t(w_t) for a flattened weight vector w_t and online sample t. In Appendix B, we show that with an inverse square-root learning rate, when the loss landscape Hessians satisfy 0 ≺ cI ⪯ ∇²f_t(w_t), and under a constraint on the size of the gradient errors ε_t, where w* is the optimal offline weight vector, the online regret is sublinear in the number of online steps T. We can approximate ||ε|| and show that convex convergence is likely when the corresponding condition is satisfied in the biased, zero-variance case (equivalent to raw SVD, i.e., not applying Section 4.1.2), or when it is satisfied in the unbiased, minimum-variance case. These conditions suggest when fast convergence may be more or less likely and also point to methods for improving convergence. We discuss these in more detail in Appendix B.3. We validate with several linear regression experiments on a static input batch X and target Y_t ∈ R^(256×100). In Figure 1(a), Gaussian noise at different strengths (represented by different colors) is added to the true batch gradients at each update step. Notice that convergence slows significantly to the right of the dashed lines, which is the region where the convergence condition no longer holds. In Figure 1(b), we validate these conditions by testing the SVD and SKS cases with rank r = 10. In these particular experiments, SKS adds too much variance, causing it to operate to the right of the dashed lines. However, both SVD and SKS can be seen to reduce their variance as training progresses. In the case of SVD, it is able to continue training as it tracks the right dashed line. Quantization. The NN is quantized in both the forward and backward directions with uniform power-of-2 quantization, where the clipping ranges are fixed at the start of training. Weights are quantized to 8 bits between -1 and 1, biases to 16 bits between -8 and 8, activations to 8 bits between 0 and 2, and gradients to 8 bits between -1 and 1. Both the weights W and weight updates ∆W are quantized to the same LSB so that weights cannot be used for accumulation beyond the fixed quantization dynamic range. This is in contrast to using high-bitwidth or floating point accumulators. See Appendix D for more details on quantization. Gradient Max-Norming. State-of-the-art methods in training, such as Adam, use auxiliary memory per parameter to normalize the gradients. Unfortunately, we lack the memory budget to support these additional variables, especially if they must be updated every sample. Instead, we propose dividing each gradient tensor by the maximum absolute value of its elements. This stabilizes the range of gradients across samples. See Appendix E for more details on gradient max-norming. In the experiments, we refer to this method as "max-norm" (as opposed to "no-norm"). Streaming Batch Normalization. Batch normalization is a powerful technique for improving training performance which has been suggested to work by smoothing the loss landscape. We hypothesize that this may be especially helpful when parameters are quantized, as in our case. However, in the online setting, we receive samples one-at-a-time rather than in batches. We therefore propose a streaming batch norm that uses moving average statistics rather than batch statistics, as described in detail in Appendix F.
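A minimal sketch of per-tensor gradient max-norming is given below, following the description in Appendix E (a moving average of the per-tensor max magnitude with decay β = 0.999 and floor ε = 1e-4). The exact update expression is not reproduced in the text, so the moving-average form and bias correction used here are assumptions for illustration only.

```python
import torch

class MaxNorm:
    """Per-tensor gradient max-norming with a moving average of the max magnitude."""
    def __init__(self, beta=0.999, eps=1e-4):
        self.beta, self.eps = beta, eps
        self.x_mv = eps      # moving average of max |grad| (state)
        self.k = 0           # number of updates seen (state)

    def __call__(self, grad):
        x_max = grad.abs().max().item()
        self.k += 1
        self.x_mv = self.beta * self.x_mv + (1 - self.beta) * x_max
        # bias-corrected moving average, floored at eps (assumed form of the update)
        x_hat = max(self.x_mv / (1 - self.beta ** self.k), self.eps)
        # normalize by the larger of the current max and the running estimate
        return grad / max(x_max, x_hat)
```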
To test the effectiveness of SKS, experiments are performed on a representative CNN with four 3 × 3 convolution layers and two fully-connected layers. We generate "offline" and "online" datasets based on MNIST (see Appendix G), including one in which the statistical distribution shifts every 10k images. We then optimize an online SGD and rank-4 SKS model for fair comparison (see Appendix H). To see the importance of different training techniques, we run several ablations in Appendix I. Finally, we compare these different training schemes in different environments, meant to model real life. In these hypothetical scenarios, a model is first trained on the offline training set, and is then deployed to a number of devices at the edge that make supervised predictions (they make a prediction, then are told what the correct prediction would have been). We present on four hypothetical scenarios. First, a control case where both external/environment and internal/NVM drift statistics are exactly the same as during offline training. Second, a case where the input image statistical distribution shifts every 10k samples, selecting from augmentations such as spatial transforms and gradients (see Section G). Third and fourth are cases where the NVM drifts from the programmed values, roughly modeling NVM memory degradation. In the third case, Gaussian noise is applied to the weights as if each weight was a single multi-level memory cell whose analog value drifted in a Brownian way. In the fourth case, random bit flips are applied as if each weight was represented by b memory cells (see Appendix G for details). For each hypothetical scenario, we plot five different training schemes: pure quantized inference (no training), bias-only training, standard SGD training, SKS training, and SKS training with max-normed gradients. In SGD training and for training biases, parameters are updated at every step in an online fashion. These are seen as different colored curves in Figure 2. Inference does best in the control case, but does poorly in adaptation experiments. SGD doesn't improve significantly on bias-only training, likely because SGD cannot accumulate gradients less than a weight LSB. SKS, on the other hand, shows significant improvement, especially after several thousand samples in the weight drift cases. Additionally, SKS shows about three orders of magnitude improvement compared to SGD in the worst case number of weight updates. Much of this reduction is due to the convolutions, where updates are applied at each pixel. However, reduction in fullyconnected writes is still important because of potential energy savings. SKS/max-norm performs best in terms of accuracy across all environments and has similar weight update cost to SKS/no-norm. To test the broader applicability of low rank training techniques, we run several experiments on ImageNet with ResNet-34 , a potentially realistic target for dense NVM inference on-chip. For ImageNet-size images, updating the low-rank approximation at each pixel quickly becomes infeasible, both because of the single-threaded nature of the algorithm, and because of the increased variance of the estimate at larger batch sizes. Instead, we focus on training the final layer weights (1000 × 512). ResNet-34 weights are initialized to those from and the convolution layers are used to generate feature vectors for 10k ImageNet training images 7, which are quantized and fed to a one-layer quantized 8 neural network. 
To speed up experiments, the layer weights are initialized to the pretrained weights, modulated by random noise that causes inference top-1 accuracy to fall to 52.7% ± 0.9%. In Table 1, we see that the unbiased SKS has the strongest recovery accuracies, although biased SVD also does quite well. The high-variance UORO and true SGD have weak or non-existent recoveries.
Method  r
SGD     –    +0.3 ± 0.2   +0.3 ± 0.2   +0.3 ± 0.2   +0.9 ± 0.2    −3.9 ± 0.8
UORO    1    +0.4 ± 0.2   +0.3 ± 0.4   −1.8 ± 0.9   −7.6 ± 1.6    −31.7 ± 1.6
SVD     1    +1.9 ± 0.2   +5.8 ± 1.0   −3.4 ± 1.0   −19.4 ± 0.9   −40.7 ± 1.1
        2    +1.4 ± 0.4   +6.5 ± 0.7   +6.3 ± 0.6   −5.2 ± 0.9    −36.3 ± 0.9
        4    +1.3 ± 0.4   +6.5 ± 0.7   +5.2 ± 0.8   −3.3 ± 1.0    −33.8 ± 0.8
        8    +1.4 ± 0.3   +5.6 ± 0.8   +4.3 ± 0.9   −2.4 ± 1.0    −32.8 ± 0.9
SKS     1    +0.3 ± 0.2   +0.3 ± 0.2   −0.7 ± 0.4   −2.7 ± 1.7    −26.5 ± 2.6
        2    +0.3 ± 0.2   +0.4 ± 0.3   −0.1 ± 0.4   +1.3 ± 0.9    −12.9 ± 1.1
        4    +0.4 ± 0.2   +0.6 ± 0.2   +1.9 ± 0.3   +8.0 ± 1.1    −5.1 ± 1.1
        8    +0.4 ± 0.2   +1.1 ± 0.2   +3.3 ± 0.7   +4.8 ± 1.5    −15.8 ± 1.7
We demonstrated the potential for SKS to solve the major challenges facing online training on NVM-based edge devices: low write density and low auxiliary memory. SKS is a computationally efficient, memory-light algorithm capable of decoupling batch size from auxiliary memory, allowing larger effective batch sizes and consequently lower write densities. Additionally, we noted that SKS may allow for training under severe weight quantization constraints, as rudimentary gradient accumulations are handled by the L, R matrices, which can have high bitwidths (as opposed to SGD, which may squash small gradients to 0). We found expressions for when SKS might have better convergence properties. Across a variety of online adaptation problems and a large-scale transfer learning demonstration, SKS was shown to match or exceed the performance of SGD while using a small fraction of the number of updates. Finally, we suspect that these techniques could be applied to a broader range of problems. Auxiliary memory minimization may be analogous to communication minimization in training strategies such as federated learning, where gradient compression is important.
For simplicity, we focus on the biased, zero-variance case (the unbiased case is similar). From, an approximately sufficient condition for sublinear-regret convergence is: B.3 DISCUSSION ON CONVERGENCE Equation suggests that as w t → w *, the constraints for achieving sublinear-regret convergence become more difficult to maintain. However, in practice this may be highly problem-dependent as the σ q will also tend to decrease near optimal solutions. To get a better sense of the behavior of the left-hand side of, suppose that: (no×ni) are the matrix weight W t gradients at batch t and || · || F is a Frobenius norm. We therefore expect both the left (proportional to ||G t || 2 F) and the right (proportional to ||w t − w * || 2) of to decrease during training as w t → w *. This behavior is in fact what is seen in Figure 1(b). If achieving convergence is found to be difficult, provides some insight for convergence improvement methods. One solution is to reduce batch size B to satisfy the inequality as necessary. This minimizes the weight updates during more repetitive parts of training while allowing dense weight updates (possibly approaching standard SGD with small batch sizes) during more challenging parts of training. Another solution is to reduce σ q. One way to do this is to increase the rank r so that the spectral energy of the updates are spread across more singular components. There may be alternate approaches based on conditioning the inputs to shape the distribution of singular values in a beneficial way. A third method is to focus on c, the lower bound on curvature of the convex loss functions. Perhaps a technique such as weight regularization can increase c by adding constant curvature in all Eigendirections of the loss function Hessian (although this may also increase the LHS of). Alternatively, perhaps low-curvature Eigen-directions are less important for loss minimization, allowing us to raise the c that we effectively care about. This latter approach requires no particular action on our part, except the recognition that fast convergence may only be guaranteed for high-curvature directions. This is exemplified in Figure 1(b), where we can see SVD track the curve for C more so than c. Finally, we note that this analysis focuses solely on the errors introduced by a floating-point version of SKS. Quantization noise can add additional error into the ε t term. We expect this to add a constant offset to the LHS of. For a weight LSB ∆, quantization noise has variance ∆ 2 /12, so we desire: C KRONECKER SUMS IN NEURAL NETWORK LAYERS A dense or fully-connected layer transforms an input a ∈ R ni×1 to an intermediate z = W · a + b to an output y = σ(z) ∈ R no×1 where σ is a non-linear activation function. Gradients of the loss function with respect to the weight parameters can be found as: which is exactly the per-sample Kronecker sum update we saw in linear regression. Thus, at every training sample, we can add (dz (i) ⊗ a (i) ) to our low rank estimate with SKS. A convolutional layer transforms an input feature map A ∈ R hin×win×cin to an intermediate feature map Z = W kern * A + b ∈ R hout×wout×cout through a 2D convolution * with weight kernel W kern ∈ R cout×k h ×kw×cin. Then it computes an output feature map y = σ(z) where σ is a non-linear activation function. 
Convolutions can be interpreted as matrix multiplications through the im2col operation, which converts the input feature map A into a matrix A_col ∈ R^{(h_out·w_out)×(k_h·k_w·c_in)} where the i-th row is a flattened version of the sub-tensor of A that is dotted with W_kern to produce the i-th pixel of the output feature map. We can multiply A_col by a flattened version of the kernel, W ∈ R^{c_out×(k_h·k_w·c_in)}, to perform the W_kern * A convolution operation with a matrix multiplication. Under the matrix multiplication interpretation, weight gradients can be represented as ∂L/∂W = Σ_j dZ_col,j ⊗ A_col,j, which is the same as h_out·w_out Kronecker sum updates. Thus, at every output pixel j of every training sample i, we can add (dZ_col,j ⊗ A_col,j) to our low-rank estimate with SKS. Note that while we already save an impressive factor of B/q in memory when computing gradients for the dense layer, we save a much larger factor of B·h_out·w_out/q in memory when computing gradients for the convolution layers, making the low-rank training technique even more crucial here. However, some care must be taken when considering activation memory for convolutions. For compute-constrained edge devices, image dimensions may be small and result in minimal intermediate feature map memory requirements. However, if image dimensions grow substantially, activation memory could dominate compared to weight storage. Clever dataflow strategies may provide a way to reduce intermediate activation storage even when performing backpropagation. In a real device, operations are expected to be performed in fixed-point arithmetic. Therefore, all of our training experiments are conducted with quantization in the loop. Our model for quantization is shown in Figure 3. The green arrows describe the forward computation. Ignoring quantization for a moment, we would have a_l = ReLU(α(W * a_{l−1}) + b), where * can represent either a convolution or a matrix multiply depending on the layer type and α is the closest power-of-2 to He initialization. For quantization, we rely on four basic quantizers: Qw, Qb, Qa, Qg, which describe weight quantization, bias and intermediate accumulator quantization, activation quantization, and gradient quantization, respectively. All quantizers use fixed clipping ranges as depicted and quantize uniformly within those ranges to the specified bitwidths. In the backward pass, follow the orange arrows from δ. Backpropagation follows standard backpropagation rules, including using the straight-through estimator for quantizer gradients. However, because we want to perform training on edge devices, these gradients must themselves be quantized. The first place this happens is after passing backward through the ReLU derivative. The other two places are before feeding back into the network parameters W, b, so that W, b cannot be used to accumulate values smaller than their LSB. Finally, instead of deriving ∆W from a backward pass through the * operator, the SKS method is used. SKS collects a_{l−1}, dz for many samples before computing the approximate ∆W. It accumulates information in two low-rank matrices L, R which are themselves quantized to 16 bits, with clipping ranges determined dynamically by the max absolute value of elements in each matrix. While SKS accumulates for B samples, leading to a factor of B reduction in the rate of updates to W, b is updated at every sample. This is feasible in hardware because b is small enough to be stored in more expensive forms of memory that have superior endurance and write power performance.
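The fixed-range uniform quantizers Qw, Qb, Qa, Qg mentioned above can be sketched generically as follows. This is a hedged illustration in NumPy: the actual clipping ranges and bitwidths are those depicted in Figure 3 (not reproduced here), and the function name is ours.

```python
import numpy as np

def quantize_uniform(x, n_bits, x_min, x_max):
    """Uniform fixed-range quantizer (a generic stand-in for Qw/Qb/Qa/Qg).

    Values are clipped to [x_min, x_max] and snapped to the nearest of
    2**n_bits - 1 steps; the step size plays the role of the LSB discussed
    in the text.
    """
    lsb = (x_max - x_min) / (2 ** n_bits - 1)
    x_clipped = np.clip(x, x_min, x_max)
    return np.round((x_clipped - x_min) / lsb) * lsb + x_min

# Example: 8-bit weights with a fixed clipping range of [-1, 1]
w = np.random.randn(4, 4)
w_q = quantize_uniform(w, n_bits=8, x_min=-1.0, x_max=1.0)
```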
Because of the coarse weight LSB size, weight gradients may be consistently quantized to 0, preventing them from accumulating. To combat this, we only apply an update if a minimum update density ρ_min = 0.01 would be achieved; otherwise we continue accumulating samples in L and R, which have much higher bitwidths. When an update does finally happen, the "effective batch size" will be a multiple of B and we increase the learning rate correspondingly. In the literature, a linear scaling rule is suggested; however, we empirically find square-root scaling works better (see Appendix H).

E GRADIENT MAX-NORMING

Figure 4: Maximum magnitude of weight gradients versus training step for standard SGD on a CNN trained on MNIST.

Figure 4 plots the magnitude of gradients seen in a weight tensor over training steps. One apparent property of these gradients is that they have a large dynamic range, making them difficult to quantize. Even when looking at just the spikes, they assume a wide range of magnitudes. One potential method of dealing with this dynamic range is to scale tensors so that their max absolute element is 1 (similar to a per-tensor AdaMax or Range Batch-Norm applied to gradients). Optimizers such as Adam, which normalize by gradient variance, provide a justification for why this sort of scaling might work well, although they work at a per-element rather than per-tensor level. We choose max-norming rather than variance-based norming because the former is computationally cheaper and potentially more amenable to quantization. However, a problem with the approach of normalizing tensors independently at each sample is that noise might be magnified during regions of quiet, as seen in the figure. What we therefore propose is normalization by the maximum of both the current max element and a moving average of the max element. Explicitly, max-norm takes two parameters - a decay factor β = 0.999 and a gradient floor ε = 10^−4 - and keeps two state variables - the number of evaluations k := 0 and the current maximum moving average x_mv := ε. Then, for a given input x, max-norm modifies its internal state and returns the normalized value x_norm (a sketch of one possible implementation is given further below, together with the streaming batch-norm statistics).

Standard batch normalization normalizes a tensor X along some axes, then applies a trainable affine transformation. For each slice X of X that is normalized independently, X̂ = γ(X − µ_b)/σ_b + β, where µ_b, σ_b are mean and standard deviation statistics of a minibatch and γ, β are trainable affine transformation parameters. In our case, we do not have the memory to hold a batch of samples at a time and must compute µ_b, σ_b in an online fashion. To see how this works, suppose we knew the statistics of each sample µ_i, σ_i for i = 1 ... B in a batch of B samples. For simplicity, assume the i-th sample is a vector X_i,: ∈ R^n containing elements X_i,j. Then µ_b = (1/B) Σ_i µ_i, while σ²_b = (1/B) Σ_i (σ²_i + µ²_i) − µ²_b. In other words, the batch variance is not equal to the average of the sample variances. However, we can keep track of a running sum of the per-sample means, µ_s, and of the per-sample second moments (sum-of-square values), sq_s, as each sample i arrives. After B samples, we divide both state variables by B and apply the relation above to get the desired batch statistics. Unfortunately, in an online setting, all samples prior to the last one in a given batch will only see statistics generated from a portion of the batch, resulting in noisier estimates of µ_b, σ_b. In streaming batch norm, we alter the above formula slightly. Notice that in online training, only the most recently viewed sample is used for training, so there is no reason to weight different samples of a given batch equally.
Therefore we can use an exponential moving average instead of a true average to track µ_s, sq_s. Specifically, let µ_s ← η µ_s + (1 − η) µ_i and sq_s ← η sq_s + (1 − η)(σ²_i + µ²_i). If we set η = 1 − 1/B, a weighting of 1/B is seen on the current sample, just as in standard averages with a batch of size B, but now all samples receive similarly clean batch statistic estimates, not just the last few samples in a batch.

For our experiments, we construct a dataset comprising an offline training, validation, and test set, as well as an online training set. Specifically, we start with the standard MNIST dataset and split the 60k training images into partitions of size 9k, 1k, and 50k. Elastic transforms are used to augment each of these partitions to 50k offline training samples, 10k offline validation samples, and 100k online training samples, respectively. Elastic transforms are also applied to the 10k MNIST test images to generate the offline test samples. The source images for the 100k online training samples are randomly drawn with replacement, so there is a certain amount of data leakage: an online algorithm may be evaluated on an image generated from the same source image as a sample it has already trained on. This is intentional and is meant to mimic a real-life scenario where a deployed device is likely to see a restrictive and repetitive set of training samples. Our experiments include comparisons to standard SGD to show that SKS's improvement is not merely due to overfitting the source images. From the online training set, we also generate a "distribution shift" dataset by applying unique additional augmentations to every contiguous 10k samples of the 100k online training samples. Four types of augmentations are explored. Class distribution clustering biases training samples belonging to similar classes to have similar indices; for example, the first thousand images may be primarily "0"s and "3"s, whereas the next thousand might have many "5"s. Spatial transforms rotate, scale, and shift images by random amounts. Background gradients both scale the contrast of the images and apply black-white gradients across the image. Finally, white noise is random Gaussian noise added to each pixel. In addition to distribution shift for testing adaptation, we also look at internal statistical shift of weights in two ways - analog and digital. For analog weight drift, we apply independent additive Gaussian noise to each weight every d = 10 steps with σ = σ_0/√(1M/d), where σ_0 = 10, and re-clip the weights between -1 and 1. This can be interpreted as each cell having a Gaussian cumulative error with σ = σ_0 after 1M steps. For digital weight drift, we apply independent binary random flips to the weight matrix bits every d steps with probability p = p_0/(1M/d), where p_0 = 10. This can be interpreted as each cell flipping an average of p_0 times over 1M steps. Note that in real life, σ_0, p_0 depend on a host of issues such as the environmental conditions of the device (temperature, humidity, etc.), as well as the rate of seeing training samples.

In order to compare standard SGD with the SKS approach, we sweep the learning rates of both to optimize accuracy. In Figure 6, we compare accuracies across a range of learning rates for four different cases: SGD or SKS, with or without max-norming gradients. Optimal accuracies are found when the learning rate is around 0.01 for all cases. For most experiments, 8b weights, activations, and gradients, and 16b biases are used.
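Because the explicit update rules for max-norm (Appendix E) and the streaming batch-norm statistics (Appendix F) are not reproduced above, the following sketch shows one plausible implementation of both. The bias-corrected moving average in MaxNorm is an assumption on our part, as is the choice to track per-slice scalar statistics; the class and variable names are ours.

```python
import numpy as np

class MaxNorm:
    """Possible per-tensor gradient max-norming (Appendix E).

    Normalizes by the larger of the current max |element| and a moving
    average of past maxima, with decay beta and floor eps."""
    def __init__(self, beta=0.999, eps=1e-4):
        self.beta, self.eps, self.k, self.x_mv = beta, eps, 0, eps

    def __call__(self, x):
        cur_max = max(float(np.max(np.abs(x))), self.eps)
        self.k += 1
        self.x_mv = self.beta * self.x_mv + (1.0 - self.beta) * cur_max
        x_mv_hat = self.x_mv / (1.0 - self.beta ** self.k)  # Adam-style bias correction (assumed)
        return x / max(cur_max, x_mv_hat)

class StreamingBatchNormStats:
    """Streaming batch statistics (Appendix F): an EMA with eta = 1 - 1/B,
    so the newest sample always carries weight 1/B."""
    def __init__(self, batch_size):
        self.eta = 1.0 - 1.0 / batch_size
        self.mu_s, self.sq_s = 0.0, 1.0

    def update(self, sample):
        mu_i = float(np.mean(sample))                # per-sample mean
        sq_i = float(np.mean(sample ** 2))           # per-sample second moment
        self.mu_s = self.eta * self.mu_s + (1.0 - self.eta) * mu_i
        self.sq_s = self.eta * self.sq_s + (1.0 - self.eta) * sq_i
        sigma = np.sqrt(max(self.sq_s - self.mu_s ** 2, 1e-8))
        return self.mu_s, sigma                      # running estimates of (mu_b, sigma_b)
```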
Experiments similar to those in Section I are used to select some of the hyperparameters related to the SKS method in particular. In most experiments, rank-4 SKS with batch sizes of 10 (for convolution layers) or 100 (for fully-connected layers) is used. Additional details can be found in the supplemental code.

Figure 6: The left two heat maps are used to select the base / standard SGD learning rate. The right two heat maps are used to select the SKS learning rate using the optimal SGD learning rate for bias training from the previous sweeps. For the SKS sweeps, the learning rate is scaled proportional to the square root of the batch size B. This results in an approximately constant optimal learning rate across batch size, especially for the max-norm case. Accuracy is reported averaged over the last 500 samples from a 10k portion of the online training set, trained from scratch.

In Figure 7, rank and weight bitwidth are swept for SKS with gradient max-norming. As expected, training accuracy improves with both higher SKS rank and bitwidth. In dense NVM applications, higher bitwidths may be achievable, allowing for corresponding reductions in the SKS rank and, therefore, reductions in the auxiliary memory requirements. In Table 2, biased (zero-variance) and unbiased (low-variance) versions of SKS are compared. Accuracy improvements are generally seen moving from biased to unbiased SKS, although the pattern differs between the no-norm and max-norm cases. In the no-norm case, a significant improvement is seen favoring unbiased SKS for fully-connected layers. In the max-norm case, the choice of biased or unbiased SKS has only a minor impact on accuracy. It might be expected that as the number of accumulated samples for a given pseudobatch increases, lower variance would be increasingly important at the expense of bias. For our network, this implies convolutions, which receive updates at every pixel of an output feature map, would preferentially have biased SKS, while the fully-connected layer would preferentially be unbiased. This hypothesis is supported by the no-norm experiments, but not by the max-norm experiments. In Table 3, several ablations are performed on SKS with max-norm. Most notably, weight training is found to be extremely important for accuracy, as bias-only training shows a ≈ 15−30% accuracy hit depending on whether max-norming is used. Streaming batch norm is also found to be quite helpful, especially in the no-norm case. Now, we explain the κ_th ablation. In Section 4.1.1, we found the SVD of a small matrix C and its singular values σ_1, ..., σ_q. This allows us to easily find the condition number of C as κ(C) = σ_1/σ_q. We suspect high condition numbers provide relatively useless update information akin to noise, especially in the presence of L, R quantization. Therefore, we prefer not to update L, R on samples whose condition number exceeds a threshold κ_th. We can avoid performing an actual SVD (saving computation) by noting that C is often nearly diagonal, leading to the approximation κ(C) ≈ C_{1,1}/C_{q,q}. Empirically, this rough heuristic works well to reduce computation load while having a minor impact on accuracy. In Table 3, κ_th = 10^8 does not appear to ubiquitously improve on the default κ_th = 100, despite being ≈ 2× slower to compute.

Table 3: Miscellaneous selected ablations. Accuracy is calculated from the last 500 samples of 10k samples trained from scratch.
Mean and unbiased standard deviation are calculated from five runs with different random seeds.

Ablation                        Accuracy (no-norm)    Accuracy (max-norm)
baseline (no modifications)     80.2% ± 1.0%          83.0% ± 1.1%
bias-only training              51.8% ± 3.2%          68.6% ± 1.4%
no streaming batch norm         68.2% ± 1.9%          81.8% ± 1.3%
no bias training                81.3% ± 1.0%          83.0% ± 1.4%
κ_th = 10^8 instead of 100      79.8% ± 1.4%          84.2% ± 1.4%
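As a small illustration of the κ_th gating heuristic ablated in Table 3, the following sketch (our own, not the paper's code) skips an L, R update whenever the cheap condition-number estimate of the small matrix C exceeds the threshold.

```python
import numpy as np

def passes_condition_gate(C, kappa_th=100.0):
    """Approximate condition-number gate for the L, R update.

    C is the small q-by-q matrix from Section 4.1.1; because it is often
    nearly diagonal, kappa(C) = sigma_1 / sigma_q is approximated by
    C[0, 0] / C[q-1, q-1], avoiding an explicit SVD.
    """
    kappa_approx = abs(C[0, 0]) / max(abs(C[-1, -1]), 1e-12)
    return kappa_approx <= kappa_th

C = np.diag([3.0, 1.0, 0.5, 0.01])        # toy, nearly diagonal C
print(passes_condition_gate(C))            # False: 3.0 / 0.01 = 300 > 100
```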
SkeXL0NKwH
We use Kronecker sum approximations for low-rank training to address challenges in training neural networks on edge devices that utilize emerging memory technologies.
Knowledge extraction techniques are used to convert neural networks into symbolic descriptions with the objective of producing more comprehensible learning models. The central challenge is to find an explanation which is more comprehensible than the original model while still representing that model faithfully. The distributed nature of deep networks has led many to believe that the hidden features of a neural network cannot be explained by logical descriptions simple enough to be understood by humans, and that decompositional knowledge extraction should be abandoned in favour of other methods. In this paper we examine this question systematically by proposing a knowledge extraction method using M-of-N rules which allows us to map the complexity/accuracy landscape of rules describing hidden features in a Convolutional Neural Network (CNN). Experiments reported in this paper show that the shape of this landscape reveals an optimal trade-off between comprehensibility and accuracy, showing that each latent variable has an optimal M-of-N rule to describe its behaviour. We find that the rules with the optimal tradeoff in the first and final layer have a high degree of explainability whereas the rules with the optimal tradeoff in the second and third layer are less explainable. The results shed light on the feasibility of rule extraction from deep networks, and point to the value of decompositional knowledge extraction as a method of explainability.

Recently there has been an increase in interest in explainable Artificial Intelligence (AI). Although in the past decade there have been major advances in the performance of neural network models, these models tend not to be explainable. In large part, this is due to the use of very large networks, specifically deep networks, which rely on distributed representations to model data accurately BID11. In contrast with symbolic AI, in which specific features are often hand picked for a problem, or symbolic Machine Learning (ML), which takes a localist approach BID15, the features used by a distributed representation do not necessarily correlate with obviously identifiable features of the data. A distributed representation may owe its strength to weak statistical correlations that a human would not be able to detect or describe in any comprehensible way. Knowledge extraction seeks to increase the explainability of neural networks by attempting to uncover the knowledge that a neural network has learned implicitly in its weights. One way of doing this is to translate trained neural networks into a set of symbolic rules or decision trees similar to the ones found in symbolic AI, ML and logic programming BID16 BID7. Rule extraction techniques have been around for decades BID20, with a number of rule extraction algorithms having been developed over the years BID12 BID4 BID22 (d'Avila BID5). These techniques generally take one of two approaches: decompositional, in which the parameters of the network are used to generate rules, or pedagogical, in which the behaviour of the network is used to generate rules BID1. In either case, the major issue with rule extraction is the complexity of the extracted rules. Even if it is possible to find a symbolic system which describes exactly a neural network (for example, feedforward, Boolean, deterministic networks can always be written as a logic program), a very large rule set derived from a very large CNN may be no more comprehensible than the original network.
Perhaps the main reason knowledge extraction proves difficult (and in particular decompositional methods of extraction) is the distributed representations found in neural networks BID11. This means that important concepts which can be used for reasoning are not always represented by single neurons but by patterns of activity over many neurons. It has been argued that the distributed nature of neural networks plays an important part in many of their capabilities BID19. Distributed representations have been identified as one of the fundamental properties of connectionism BID18. This has led many to conclude that attempting to explain latent features using symbolic knowledge extraction is a dead end, and that methods akin to distillation should be adopted instead BID7. Distillation has also been proposed as a method for improving robustness, but its efficacy has been questioned BID13 BID3. Other approaches take a more practical view. Rather than attempting to open the black box, one may settle for some guarantees on the network's behaviour, or for visualizations seeking to explain individual classifications rather than the learned model BID9 BID17 BID10. In this paper, we develop a method for empirically examining the explainability of the latent variables in neural networks. We use rule extraction by searching through a space of M-of-N rules BID20 describing a latent variable, and measuring the error and complexity of each rule. By selecting various error/complexity trade-offs, we are able to map out a rule extraction landscape which shows the relationship between how complex the extracted rules are allowed to be and how accurately they capture the behaviour of a network. When applied to a standard 4-layer CNN trained on fashion MNIST, we find that some layers have very accurate rules whereas this is not the case for others, even when using very complex rules. The discovery of a 'critical point' on the rule extraction landscape shows that there is an ideal M-of-N rule to describe each latent variable. The accuracy of those rules depends highly on the variable that we are attempting to describe, with the overall explainability trends differing greatly between layers and architectures. All layers showed similarly shaped curves, but the rules extracted with no complexity penalty were relatively much more complex in the convolutional layers than in the fully connected layers, with relative complexities over 0.4 in the convolutional layers and under 0.2 in the fully connected layers. Additionally, it was possible to find rules with near 0% error in the first and final layer, whereas rules from the second and third layer could not do much better than 15% error. In Section 2 we give a brief overview of previous algorithms used for knowledge extraction. In Section 3 we give definitions of accuracy and complexity for M-of-N rules and outline the extraction process. In Section 4 we give the experimental results of our rule extraction process for the mapping of the accuracy/complexity landscape before concluding in Section 5. One of the first attempts at knowledge extraction used a decompositional approach applied to feedforward networks, in particular the Knowledge-based Artificial Neural Networks (KBANN) BID21. This algorithm used the weights of a hidden variable to extract symbolic rules of the form IF M out of a set of N neurons (or concepts) are activated (or hold) THEN a given neuron (concept) is activated (holds), called M-of-N rules BID20.
This was followed by more sophisticated algorithms which generate binary trees in which each node is an M-of-N rule BID12 BID4 (notice that these binary trees can be reduced to IF-THEN propositional logic sentences as before). These more recent algorithms are pedagogical in that they select an M-of-N rule using the input units as the concepts (called literals in logic), based on the maximum information gain with respect to the output. By default these methods do not attempt to explain any of the latent variables of a network and simply treat the model as a black box which can be queried as an oracle to generate data for rule extraction. Many other extraction methods can be described as eclectic, containing aspects of both pedagogical and decompositional methods (cf. (d'Avila BID5) for examples). Other methods abandon the knowledge extraction paradigm and opt for alternative techniques, often more visually oriented BID9 BID7. A survey of various methods developed to solve the 'black-box' problem of neural networks can be found in BID8. Most decompositional rule extraction techniques have been applied only to shallow networks, with other techniques focusing on input/output relationships rather than attempting to explain the latent features of a deep network. The multiple hidden layers in a deep network mean that in order to explain an arbitrary hidden feature in terms of the input, a decompositional technique has to produce a hierarchy of rules (see BID22 for an example of hierarchical rule extraction). With many hidden layers, the extracted rules can quickly grow far too complex for a human to understand, unless each constituent of the rule hierarchy is exceedingly simple. Thus, the use of decompositional techniques to explain the features of a deep network end-to-end seems impractical, as argued in BID7. Nevertheless, experiments reported in this paper show that some layers of a deep network may be associated with highly explainable rules, and that within a layer some extracted rules may explain the network's behaviour remarkably well in terms of certain features. This opens the possibility of rule extraction being used as a tool for the modular explanation of network models, and it could provide insight into the similarity and disentanglement BID2 of latent features by comparing their optimal extracted rules. In what follows, we define the above ideas around rule explainability and optimality formally. In logic programming, a logical rule is an implication of the form A ← B, called A if B. The literal A is called the head of the rule and B stands for a conjunction of literals, B_1 ∧ B_2 ∧ ... ∧ B_n, called the body of the rule. Disjunctions in the body can be modelled simply as multiple rules having the same head. Most logic programs adopt a negation by failure paradigm so that A is true if and only if B is true BID6. When using rules to explain a neural network, the literals will refer to the states of neurons. For example, if a neuron x takes binary values {0,1} then we define the literal X by X = True if x = 1, and X = False if x = 0. For neurons with continuous activation values, we can define a literal by including a threshold a such that X = True if x > a, and X = False otherwise. In other words, the literal X is shorthand for the statement x > a. In neural networks, a latent variable is usually poorly described by a single conjunctive rule since there are many different input configurations which will activate a neuron.
Rather than simply adding a rule for each input pattern that activates a neuron (which essentially turns the network into a large lookup table), we look for M-of-N rules, which have been commonly used in rule extraction starting with BID20. M-of-N rules soften the conjunctive constraint on the body of logical rules by requiring only M of the N variables in the body to be true, for some specific value of M ≤ N (notice that when M = N we are left with a conjunction). For example, a rule may state that the head holds if at least two of the three literals {X_1, ¬X_2, X_3} hold, where ¬ stands for negation by failure. M-of-N rules are an attractive candidate for rule extraction for several reasons. First, they offer a compact representation which is more general than a conjunction and which naturally reflects the fan-in of input/output dependencies between the neurons in a neural network. Second, M-of-N rules are only a subset of all propositional formulas (it is easy to see that XOR cannot be represented as an M-of-N rule), meaning that our rule extraction process will not simply generate a lookup table to explain a neuron. Finally, M-of-N rules share a structural similarity with neural networks. This can be seen by viewing M-of-N rules as 'weightless perceptrons'. Any M-of-N rule can be represented by a perceptron with its output neuron representing the head, and visible neurons representing the body of the rule. In order to encode an M-of-N rule in a neural network, one just needs to set the bias of the output neuron to M and the weights of each input neuron to 1 or −1 for neurons corresponding to positive or negative literals, respectively. M-of-N rules were used in the early days of knowledge extraction but have since been largely forgotten. This paper brings M-of-N rules to the forefront of the debate on explainability again. When our network has continuous activation values, in order to define the literals to use for rule extraction we must choose a splitting value a for each neuron, which will lead to a literal of the form x > a. In order to choose splitting values for continuous neurons we use information gain BID14. Given a target neuron we wish to explain, h, we generate a literal for the target neuron by selecting a split based on the information gain with respect to the output labels of the network. That is, given a set of test examples, choose the value of the target neuron h which splits the examples in such a way as to result in the maximum decrease in entropy of the network outputs on the test examples. The input literals are then generated from the inputs to the target neuron by choosing splits for each input which maximize the information gain with respect to the target literal generated in the previous step. In practice this means that each target literal in a layer will have its own set of input literals, each corresponding to the same set of input neurons but with different splits. In the case that the layer is convolutional, each feature map corresponds to a group of neurons, each with a different input patch. Rather than test every single neuron in the feature map we only test the one whose optimal split has the maximum information gain with respect to the network output. This gives us a single rule for each feature map rather than a collection of them. The two metrics we are concerned with in rule extraction are comprehensibility and accuracy. For a given rule we can define accuracy in terms of a soundness measure. This is simply the expected difference between the predictions made by the rules and the network.
More concretely, given a neuron h in a neural network with input neurons x_i, we can use the network to compute the state of h from the state of the input neurons, which then determines the truth of the literal H. Thus we can use the network to determine the truth of H; call this N(x). Furthermore, if we have some rule R relating variables H and X_i, we can use the state of the input x to determine the value of the variables X_i, and then use R to determine the value of H; call this R(x). Given a set of input configurations to test, I (not necessarily from the test set of the network), we can measure the discrepancy between the output of the rules and the network as E(R) = (1/|I|) Σ_{x∈I} |R(x) − N(x)|, treating truth values as 0/1. In other words, we measure the average error of the rules when trying to predict the output of the network over a test set. Comprehensibility is more difficult to define as there is a degree of subjectivity. The approach we take is to look at the complexity of a rule. Here, we think of complexity in an analogous way to the Kolmogorov complexity, which is determined by a minimal description. Thus we determine the complexity of a rule by the length of its body when expressed by a (minimal) rule in disjunctive normal form (DNF). For an M-of-N rule, the complexity is simply M·(N choose M), where (N choose M) denotes the binomial coefficient. For our experiments we measure complexity in a relative manner by normalizing w.r.t. a maximum complexity. Given N possible input variables, the maximum complexity is max_M M·(N choose M), attained at M = ⌈(N+1)/2⌉, where ⌈·⌉ denotes the ceiling function (rounding to the next highest integer). Finally, in order to control for growth, we take the logarithm, giving the normalized complexity measure C(R) = log(M·(N choose M)) / log(max_M' M'·(N choose M')). As an example, suppose we have a simple perceptron whose output unit has a bias of 1 and two binary visible units with weights w_{1,1} = 1 and w_{2,1} = −0.5. Then consider the rule h = 1 ⟺ 1-of-{x_1 = 1, ¬(x_2 = 1)}. Over the entire input space we see that R(x) ≠ N(x) only when x_1 = 0 and x_2 = 1, giving us an error of 0.25. Furthermore, a 1-of-2 rule is the most complex rule possible for 2 variables as it has the longest DNF of any M-of-N rule, giving us a complexity of 1. Using the error and complexity measures above, we define a loss function for a rule R as the weighted sum L(R) = E(R) + β·C(R), in which the parameter β ∈ R+ determines the trade-off between soundness and complexity. By using a brute-force search procedure with various values of β we are able to explicitly determine the relationship between the allowed complexity of a rule and its maximum accuracy. For β = 0 the rule with the minimum loss will simply be the rule with minimum error regardless of complexity, and for β large enough the rule with the minimum loss will be a rule with 0 complexity, either a 1-of-1 rule or one of the trivial rules which either always predicts true or always predicts false (these can be represented as M-of-N rules by 0-of-N and (N+1)-of-N respectively). Given a neuron h_j with n input neurons x_i, we generate splits for each neuron using the technique just described to give us a set of literals H_j and X_i. Then, we negate the literals corresponding to neurons which have a negative weight to h_j. Using these we search through O(n²) M-of-N rules with variables X_i in the body and H_j in the head, which minimize L(R). To do this we reorder the variables according to the magnitude of the weight connecting x_i to h_j (such that we have |w_{1,j}| ≥ |w_{2,j}| ≥ ... ≥ |w_{n,j}|). Then we consider the rule M-of-{X_1, ..., X_N} for each 1 ≤ N ≤ n and each 0 ≤ M ≤ N + 1.
The search procedure only relies on the ordering of the variables X_i:

  Generate a split, s, for h by choosing the value which maximizes the information gain with respect to the network output. Use this to define the literal H.
  for each neuron x which is an input of h do
    Generate a split for x by choosing the value which maximizes the information gain with respect to H.
    Use this value to define the literal X if the connection between x and h is positive, and use it to define ¬X otherwise.
  end for

A neuron with n input neurons has O(2^n) possible M-of-N rules, which makes an exhaustive search intractable. However, here we rely on the assumption that the most accurate M-of-N rules with N literals use the literals corresponding to the N neurons with the strongest weights (i.e. the highest absolute weight values). This assumption is easily justified by the conditional independence of hidden units in all layers except for the final one. Here, since we use a softmax function, the conditional independence of the output neurons is no longer valid. One way around this would be to order the literals by their information gains rather than their weights. However, the high accuracy found in the experimental results for the rules extracted from the softmax layer when ordering the literals by weight seems to suggest that this is not necessary. By defining an order on the literals we reduce an exponential search space to a polynomial one. However, this is still computationally difficult when a large number of test examples and input neurons exist. In order to complete the rule extraction in a reasonable time, the algorithm was implemented in Spark and run on IBM cloud services. The examples used to measure the accuracy of the extracted rules were taken from the training set rather than the test set: we evaluate the accuracy of the rules with respect to the output of the network rather than the actual output, and using examples that the network was trained on was deemed to be a better representation of the behaviour the network learned. By running the search in parallel, we can map the accuracy/complexity graph for about 50 hidden neurons in the second and third layer in several hours. Increasing the number of examples used in the accuracy calculation greatly increases the time taken, and for this reason we only use 1000 examples. To demonstrate the procedure we will examine the extraction process for the first hidden feature in the CNN trained on the fashion MNIST data set. First we select 1000 random input examples from the training set and use them to compute the activations of each neuron in the CNN as well as the predicted labels of the network. Since the CNN pads the input with zeros and the input images are 28 × 28, we have 28 × 28 = 784 neurons per feature in the first layer. Each of these neurons corresponds to a different 5 × 5 patch of the input. To select a neuron to test we find the optimal splitting value of each neuron by computing the information gain of each neuron with respect to the network's predicted labels. For the first feature we find that the neuron with the maximum information gain is neuron 96, which has an information gain of 0.015 when split on the value 0.0004. This neuron corresponds to the image patch centered at (96/28, 96%28) = (3, 12). With this split we define the variable H by H := 1 iff h_96 ≥ 0.0004. Using this variable we define the input splits by choosing the values which result in the maximum information gain with respect to H.
Note our test input consists of the 1000 5 × 5 image patches centered at (3, 12) taken from the input examples. We then search through the M-of-N rules whose bodies consist of the input variables defined by the splits to determine an optimal M-of-N rule explaining H for various error/complexity tradeoffs. As we increase the complexity penalty we extract three different rules, which are visualized in Figure 1.

Figure 1: An example of the various rules extracted for the first feature. The first image represents the weights of the neuron and the following three are rules of decreasing complexity explaining the neuron. Here grey indicates that the input feature is not included in the M-of-N rule, white indicates a positive literal, and black indicates a negative literal.

We can see from the figure that many of the weights are filtered out by the rules. The most complex rule is a 5-of-13 rule which has a 0.025 error, or in other words the output of the rule agrees with the network 97.5% of the time. Adding a mild penalty to complexity changes the optimal rule to the much simpler 3-of-4 rule but raises the error to 0.043. Finally, a heavy penalty to complexity produces the trivial 1-of-1 rule which has the significantly higher error of 0.13. In order to demonstrate the results on a simple network known to be amenable to rule extraction, we apply the technique to the classic example of the DNA promoter dataset. Training a feed-forward network with a single hidden layer of 100 nodes, we find that, like we will see with the Fashion MNIST dataset, the relationship between complexity and error in the first layer is exponential, suggesting an ideal complexity/error tradeoff (FIG1). Furthermore, in the output layer we find that the rule 1-of-{H_39, H_80} gives 100% fidelity to the network. Since the splits for the hidden layer are defined only by the information gain with respect to the output, we can describe each of the literals in our 1-of-2 rule with an M-of-N rule extracted from the input layer. Extracting with no complexity penalty, the rules in question are of the form 64-of-119 for the variable H_39, which produces a single incorrect classification of H_39, and a rule of the form 32-of-61 for the variable H_80, which produces no incorrect classifications for H_80. Denoting L_119 as the set of literals in the rule explaining H_39 and L_61 as the set of literals in the rule explaining H_80, the output of the network can be predicted by the rule 1-of-{64-of-L_119, 32-of-L_61}. In general, the errors through a layer propagate when stacking rules that don't perfectly approximate each layer. This is exacerbated by the fact that in most networks, when we choose splits for the input neurons of a layer and then move down a layer, we will end up choosing different splits for the same layer when they are treated as output. In order to replace a network with an end-to-end set of hierarchical rules we must decide on a single set of splits for each layer. This can be done by simply moving down one layer at a time and selecting the input splits based on the information gain against all configurations of the new output layer, but it introduces more error. With this in mind we conduct our experiments layer by layer independently in order to provide an idealized complexity/error curve that can provide a baseline for the best achievable rules when doing rule extraction with M-of-N rules. By doing this we can examine under what circumstances M-of-N rule extraction might be useful as well as provide a method for evaluating other extraction algorithms.
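To make the search procedure of Section 3 concrete, the following is a minimal sketch of the brute-force M-of-N search. It assumes the input literals have already been binarised, sign-adjusted for negative weights, and ordered by decreasing weight magnitude; the function names and the toy data are ours, not the paper's implementation.

```python
import numpy as np
from math import comb, log, ceil

def rule_complexity(m, n):
    """Normalized complexity of an M-of-N rule: log(M * C(N, M)) divided by
    the log of the largest such value over M (trivial rules have complexity 0)."""
    if m <= 0 or m > n:
        return 0.0                                 # 0-of-N and (N+1)-of-N
    m_star = ceil((n + 1) / 2)
    max_c = m_star * comb(n, m_star)
    return log(m * comb(n, m)) / log(max_c) if max_c > 1 else 0.0

def search_m_of_n(X, h, beta):
    """Brute-force search over M-of-N rules ordered by weight magnitude.

    X    : (num_examples, n) boolean matrix of input literals, columns ordered
           by decreasing |weight| and already negated where the weight is negative.
    h    : (num_examples,) boolean target literal computed from the network.
    beta : complexity penalty.
    Returns (best_m, best_n, best_loss).
    """
    best = (0, 0, float("inf"))
    for n in range(1, X.shape[1] + 1):
        counts = X[:, :n].sum(axis=1)              # how many of the first n literals hold
        for m in range(0, n + 2):                  # includes the two trivial rules
            error = float(np.mean((counts >= m) != h))
            loss = error + beta * rule_complexity(m, n)
            if loss < best[2]:
                best = (m, n, loss)
    return best

# Toy usage: recover a planted 2-of-3 rule from random literals.
rng = np.random.default_rng(0)
X = rng.random((1000, 8)) > 0.5
h = X[:, :3].sum(axis=1) >= 2
print(search_m_of_n(X, h, beta=0.1))               # expected: (2, 3, ~0.1)
```

Sweeping beta over a range of values and recording the resulting (complexity, error) pairs yields the error/complexity landscape examined in the next section.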
In order to examine the rule extraction landscape of a neural network trained on a practical example, we tested the layerwise rule extraction search on a basic CNN trained on fashion MNIST in TensorFlow. The CNN had a standard architecture with a 32-feature convolutional layer with a 5 × 5 convolutional window followed by a 2 × 2 max pooling layer, then a 64-feature convolution with a 5 × 5 convolutional window and another 2 × 2 max pooling layer, followed by a final hidden fully connected layer of 1024 units. All units used a rectified linear activation function; 1000 random inputs from the fashion MNIST training data were chosen to test extracted rules against the network. For the convolutional layers, each feature was tested on the image patches corresponding to the maximum information gain as described in Section 3.2. For the third layer, 50 features were chosen at random to test. In the third layer we limited the search to rules with 1000 literals or less to save time, as it was very rare for even the most complex rules extracted to have over 900 literals. For the final layer the output was tested as 10 one-hot neurons, each of which underwent the rule searching procedure. For each layer, we repeated the search procedure for six different values of β: 0, 0.1, 0.2, 1, 5, 1000. This produced six different sets of extracted rules, each with a different error/complexity trade-off. For a given value of β, we averaged the complexities and errors of rules extracted for each target neuron. This allowed us to produce a graph mapping out the error/complexity landscape for rules extracted from each layer (see Figure 1). Here we can see that the complexity/error trade-off for the extracted rules differs from layer to layer. For the first and final layers we were able to extract quite accurate rules approaching a near 0 error. The second and third layers had a similar accuracy/complexity tradeoff, with the second layer showing a very slight improvement in accuracy with much more complex rules, whereas the minimum error of the third layer is not able to be improved at all with increasingly complex rules. Additionally, the trivial rules perform much worse on the second than the third layer. Importantly, we see that the optimal accuracy/complexity tradeoff is not simply a function of the number of input nodes, since the third layer performs about as well as the second layer despite having 3136 vs 800 input nodes. Similarly, the final layer provides much more accurate rules that are relatively less complex than the first layer despite having 1024 vs 25 input nodes. Despite the actual values varying for each layer, the results paint a similar picture for the shape of the rule extraction landscape in each case. Rather than a gradual decline in accuracy, there is a critical point at which the error starts to increase rapidly as the penalty on complexity increases. This suggests that there is a 'natural' set of rules for explaining the latent features. Although ultimately the trade off is subjective, the set of rules existing at the critical points cannot be made significantly more accurate if they are more complex, and will start to do exponentially worse if they are made any simpler. Current rule extraction algorithms do not explicitly take complexity into account for their optimization. Although they may include steps to attempt to simplify the extracted rules, it is not clear a priori where on the error/complexity graph these rules will land.
To the best of our knowledge, this paper is the first to make rule complexity an integral part of the extraction algorithm, as exemplified in the analysis of Figure 1. Empirical evaluation of popular extraction algorithms should be an important step in their validation. The limitations, as well as the potential, of rule extraction algorithms are also outlined in these results. In certain cases, such as we see in the second and third layer of the CNN, there are simply no simple explanations for the features in terms of their input, with even the most complex rules having an error rate of 15%. However, in other cases, such as in the final layer of the CNN, the behaviour of the output neurons can be accurately captured by relatively simple rules, with extracted rules from the final layer obtaining near 0% error even for rules with complexity of under 0.05. This lends weight to the idea that decompositional rule extraction as a general method of explainability is not possible, but opens up the possibility of selective use of decompositional algorithms depending on which layer we wish to explain in terms of the previous layer. The black box problem of neural networks presents an obstacle to their deployment into society. The black box problem has been an issue for neural networks since their creation, but as neural networks have become more integrated into society, the need for explainability has attracted considerably more attention. The success of knowledge extraction in this endeavor has overall been mixed, with most large neural networks today remaining difficult to interpret and explain. Traditionally, knowledge extraction has been a commonly used paradigm and it has been applied to various tasks. Critics, however, point out that the distributed nature of neural networks makes the specific method of decompositional rule extraction infeasible, as individual latent features are unlikely to represent anything of significance. We test this claim by applying a novel search method for M-of-N rules to explain the latent features of a CNN, and find that generally latent features can be described by an 'optimal' rule representing an ideal error/complexity trade-off for the explanation. We do this by including rule complexity as an explicit measure in the search for extracted rules. The large discrepancy in this trade-off between neurons in different layers, between layers of networks with different architectures, and even between different neurons in the same layer, suggests that rule extraction as a general technique is unlikely to provide adequate descriptions for all, or even most, latent variables. However, the fact that in many cases the explanations can be made much simpler without reducing the accuracy of the rules suggests that rule extraction can be a useful tool when examining networks with features that are likely to be easily understandable. These results indicate that decompositional rule extraction may still be an important tool for understanding the behaviour of networks. Further research would examine the effects on the accuracy/interpretability landscape of using different transfer functions, other data sets, different architectures, and various forms of regularization of the learning.
ByEtPiAcY7
Systematically examines how well we can explain the hidden features of a deep network in terms of logical rules.
Recent findings show that deep generative models can judge out-of-distribution samples as more likely than those drawn from the same distribution as the training data. In this work, we focus on variational autoencoders (VAEs) and address the problem of misaligned likelihood estimates on image data. We develop a novel likelihood function that is based not only on the parameters returned by the VAE but also on the features of the data learned in a self-supervised fashion. In this way, the model additionally captures the semantic information that is disregarded by the usual VAE likelihood function. We demonstrate the improvements in reliability of the estimates with experiments on the FashionMNIST and MNIST datasets. Deep Generative Models (DGMs) have gained in popularity due to their ability to model the density of the observed training data, from which one can draw novel samples. However, as pointed out in recent work, the inferences made by likelihood-based models, such as Variational Autoencoders (VAEs) and flow-based models, are not always reliable. They can judge out-of-distribution (OOD) samples to be more likely than in-distribution (ID) samples that are drawn from the same distribution as the training data. Concretely, a DGM trained on the FashionMNIST dataset will on average assign higher likelihoods to images from the MNIST dataset than to test images from the FashionMNIST dataset (see for example the top left image in Figure 1(a)). In this work we tackle the problem of misaligned likelihood estimates produced by VAEs on image data and propose a novel likelihood estimation during test time. Our method leverages findings reported in our earlier work (Bütepage et al.), which are summarised in Section 2, and is based on the idea of evaluating a given test image not only locally, using individual parameters returned by a VAE as is usually done, but also globally, using learned feature representations of the data. The main contribution of this paper is the introduction of a feature-based likelihood trained in a self-supervised fashion. This likelihood evaluates the model also based on the semantics of a given image and not solely on the values of each pixel. We elaborate on this idea in Section 3 and demonstrate the improvements with an empirical evaluation presented in Section 4. We emphasise that the aim of our work is exclusively to improve the reliability of the likelihood estimation produced by VAEs. We focus on image data in particular as we have not observed the misalignment in our earlier experiments on various non-image datasets from the UCI Machine Learning Repository. We plan to investigate this further in future work. Due to the lack of space we omit the experiments on non-image data as well as the specifics of VAEs, for which we refer the reader to the original publications. This section provides a background on the evaluation of VAEs and summarizes our earlier work presented in (Bütepage et al., 2019). In VAEs, the observed random variable X is assumed to be generated from the joint distribution p(X, Z) = p(X|Z)p(Z), where Z denotes the latent variables. Using variational inference, the intractable true posterior distribution p*(Z|X) is approximated with a simpler parametrised distribution q(Z|X). VAEs employ amortized inference, where encoder and decoder neural networks, φ_z(X) and φ_x(Z), are jointly trained to represent the approximate posterior distribution q(Z|φ_z(X)) and likelihood function p(X|φ_x(Z)), respectively.
From a Bayesian perspective, we can evaluate a successfully trained VAE using two different evaluation schemes, the prior predictive (PR) p_VAE^PR(x) = E_{p(Z)}[p(x | φ_x(Z))] and the approximate posterior predictive (APO) p_VAE^APO(x) = E_{q(Z|φ_z(x))}[p(x | φ_x(Z))]. Bütepage et al. argue that the likelihood estimates produced by a trained VAE are influenced by both 1) the choice of the above listed evaluation scheme and 2) the choice of the parametrisation of the likelihood function p(X|φ_x(Z)). Here, two common choices are a Gaussian distribution in the case of colored images or a Bernoulli distribution in the case of black and white (or grey-scaled) images. The effect of both 1) and 2) is best demonstrated in Figure 1(a), where we visualise the log likelihood estimates from a VAE V_1, parametrised by a Bernoulli likelihood (top row), and a VAE V_2, parametrised by a Gaussian likelihood (bottom row), using both the PR (left column) and APO (right column) evaluation schemes defined above. Both VAEs were trained on the FashionMNIST dataset and tested on test images from both the FashionMNIST and MNIST datasets. In the case of V_1 the pixel values of the images were binarised with threshold 0.5, and in the case of V_2 scaled to the interval [0, 1]. The choice of the evaluation scheme influences the variance of the estimates of the training data, as it directly affects the variability of the parameters φ_x(z) returned by the VAE (see left vs right column in Figure 1(a)). Namely, PR produces more diverse parameters corresponding to the latent representations of the whole training data while APO generates more homogeneous samples corresponding to the latent representation of a given test point x. On the other hand, the choice of the likelihood parametrisation (top vs bottom row in Figure 1(a)) influences the actual values of the estimates since images are evaluated under distributions of different shapes. We refer the interested reader to (Bütepage et al., 2019) for a detailed discussion. Note that only the top-left combination in Figure 1(a) reproduces the results reported in prior work.

Figure 1(b): Using the improved p_FEVAE(x|φ_x(z)) likelihood introduced in Section 3.

This section describes the self-supervised feature-based likelihood function which is the main contribution of this work. In addition to the influencing factors discussed in Section 2, we hypothesise that the likelihood estimates are also affected by the assumption that image pixels are independent and identically distributed (iid) around the likelihood function parameterised by the decoder. Let a test image x be represented as a concatenated vector of length D and let x_d denote its d-th component. Using the assumption of iid pixels, the likelihood function becomes a product of individual pixel-wise likelihoods, p(x|φ_x(z)) = ∏_{d=1}^{D} p(x_d | φ_x(z)_d). Therefore, when computing the probability of x, the likelihood only captures pixel-wise errors that are evaluated locally under the parameters φ_x(z) returned by the VAE and does not take into account the "global" information contained in the image (such as the semantics of the dataset). To mitigate the lack of global evaluation, we propose to weight the likelihood term during test time with an additional term that relates the semantic information of both the test point x and the parameters φ_x(z) to the semantics of the whole training dataset. We define the details below. We separately train a self-supervised classifier Γ and use its l-th layer to extract a low dimensional feature representation f_x = l(x) of an image x. We train Γ on the same training dataset {x_1,
 . . ., x_N} as we train the VAE. We then fit a Bayesian Gaussian Mixture (BGM) model with C components to the set F = {f_{x_1}, . . ., f_{x_n}} of feature representations extracted from a randomly sampled subset of the training dataset of size n < N (see also Section 4 for details). Let f_x be the feature representation of a test image x. During the evaluation of the BGM on f_x, each mixture component is assigned a weight that indicates its contribution to the generation of f_x. Let C_x denote the mixture component with the highest weight. Given likelihood parameters φ_x(z) returned by the VAE, we define the global likelihood of x as the product p_FE(x|φ_x(z)) = p_FE(f_x | C_{φ_x(z)}) · p_FE(f_{φ_x(z)} | C_{φ_x(z)}), where p_FE(f_x | C_{φ_x(z)}) is the likelihood of the test point in feature space under the mixture component C_{φ_x(z)} determined by the representation f_{φ_x(z)} of the parameters φ_x(z), and p_FE(f_{φ_x(z)} | C_{φ_x(z)}) is the likelihood of f_{φ_x(z)} under the same component C_{φ_x(z)}. The first term can be seen as a global likelihood of the test point under the decoded parameters and the second term represents a global likelihood of the parameters themselves. We then propose to evaluate the test image x under the combined likelihood function p_FEVAE(x|φ_x(z)) = p_VAE(x|φ_x(z)) · p_FE(x|φ_x(z)), where p_VAE as before captures local pixel-wise errors and p_FE additionally captures the global (semantic) likelihood. We evaluate our method with experiments on the FashionMNIST and MNIST datasets and present the results below.

Feature extraction. We obtained low dimensional features of the training data by deploying a self-supervised Jigsaw classifier Γ. The classifier receives a Jigsaw puzzle, which is a shuffled 3 × 3 grid of tiles extracted from a given image, and outputs (the class of) the permutation that was applied to the original unshuffled grid (see Appendix A for the implementation details). Note that any self-supervised learning strategy could be deployed as long as the obtained low dimensional features are of high quality and represent the training data well. After the completed training we randomly sampled n = 10000 training images {x_1, . . ., x_n} and obtained their low dimensional representations {f_1, . . ., f_n} from the first layer l^1 of the classifier Γ, to which we fitted a BGM model with C = 15 components. The parameters n and C were determined using a hyperparameter grid search. We used representations from the first layer because we hypothesise that the earlier layers of the classifier carry useful information about the training data while the later layers carry information about the task itself. We leave experiments with representations obtained from different layers for future work.

Experiment. We trained two VAEs, V_1 and V_2, and two Jigsaw classifiers, Γ_1 and Γ_2, with specifications described in Appendix A on the FashionMNIST dataset. Here, the subscripts 1 and 2 denote that the model in consideration was trained on images binarised with threshold 0.5 and on images with pixel values scaled to the interval [0, 1], respectively. As in the experiment producing the results in Figure 1(a), V_1 additionally assumes a Bernoulli likelihood and V_2 a Gaussian likelihood. For a given (binarised) test image x and parameters φ_x(z) obtained from the trained VAE V_i, we first calculate the VAE likelihood p_VAE in the usual way using the assumption of iid pixels. We then obtain the low dimensional features f_x = l^1_i(x) and f_{φ_x(z)} = l^1_i(φ_x(z)) from the first layer l^1_i of the trained Jigsaw classifier Γ_i and calculate p_FE under the fitted BGM following the definition above.
The product of the two likelihoods then equals the newly proposed likelihood p F EV AE defined above. Given this pipeline, VAE V i + Γ i for i = 1, 2 and our likelihood p F EV AE, we compared the log likelihood estimates under the PR and APO evaluation schemes on the images from the test splits of the FashionMNIST and MNIST datasets. The results are visualised in Figure 1(b). We see that our method significantly improves the estimates when using the Gaussian likelihood parametrisation (bottom row), as it clearly separates the OOD samples from the ID samples. Note that the VAE parameters φ x (z) in the PR evaluation always reflect the distribution of the entire training data. This means that the global likelihood of a test point evaluates the test point under all classes that were presented during training time. In practice this means that the PR evaluation of the global likelihood averages over all classes, which results in a less distinct separation of the OOD samples. When using the Bernoulli likelihood (top row) our method increases the variance of the likelihood of OOD samples but fails to achieve the same separation as in the Gaussian case. This is because a significant amount of the semantic information is lost during the binarisation process of the FashionMNIST dataset. The resulting binarised images are often unrecognisable, with a sparse pixel distribution that makes the task of solving Jigsaw puzzles more difficult. Since digits in MNIST images are also sparse, they become likely under p F E. We observe their estimates fusing with the FashionMNIST estimates if we corrupt the MNIST images using salt and pepper noise (see Figure 2 in Appendix B). We therefore hypothesise that in this particular case the OOD samples simply become too similar to the ID samples, suggesting that the Bernoulli likelihood is not the most appropriate modelling choice. The inadequacy of the Bernoulli distribution in VAEs has also been discussed in recent work that instead suggests a fully characterized continuous Bernoulli distribution. We have discussed how the problematic assumption that the image pixels are iid around the decoded parameters narrows the focus of the VAE likelihood function p V AE to a local area of the data density. Thus, the model likelihood function disregards the global data density, including the semantic information. Our proposed likelihood function mitigates this problem by leveraging self-supervised feature learning. In the future, we aim to evaluate our method on more complex datasets, such as CIFAR-10 and SVHN, and to design an end-to-end training procedure for VAEs using our proposed likelihood.
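To make the evaluation pipeline concrete, the following is a minimal sketch (not the authors' code) of the feature-based likelihood term and its combination with the pixel-wise VAE likelihood. It assumes scikit-learn and SciPy are available, that the self-supervised (Jigsaw) features of a subset of training images are given as a NumPy array, and that the pixel-wise log likelihood under the iid assumption has already been computed; all function and variable names are illustrative.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import BayesianGaussianMixture

def fit_feature_bgm(train_features, n_components=15, seed=0):
    """Fit a Bayesian Gaussian Mixture to self-supervised features of training images."""
    bgm = BayesianGaussianMixture(n_components=n_components,
                                  covariance_type="full", random_state=seed)
    return bgm.fit(train_features)

def feature_log_likelihood(bgm, f_x, f_phi):
    """log p_FE(f_x | C_phi) + log p(f_phi | C_phi), where C_phi is the mixture
    component assigned the highest responsibility for the decoded parameters' feature f_phi."""
    c = int(np.argmax(bgm.predict_proba(f_phi[None, :])[0]))
    comp = multivariate_normal(mean=bgm.means_[c],
                               cov=bgm.covariances_[c], allow_singular=True)
    return comp.logpdf(f_x) + comp.logpdf(f_phi)

def combined_log_likelihood(log_p_vae, bgm, f_x, f_phi):
    """log p_FEVAE(x): the pixel-wise VAE term plus the global feature-based term."""
    return log_p_vae + feature_log_likelihood(bgm, f_x, f_phi)
```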
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0 ]
HylWY1n4Yr
Improved likelihood estimates in variational autoencoders using self-supervised feature learning
Direct policy gradient methods for reinforcement learning and continuous control problems are a popular approach for a variety of reasons: 1) they are easy to implement without explicit knowledge of the underlying model; 2) they are an "end-to-end" approach, directly optimizing the performance metric of interest; 3) they inherently allow for richly parameterized policies. A notable drawback is that even in the most basic continuous control problem (that of linear quadratic regulators), these methods must solve a non-convex optimization problem, where little is understood about their efficiency from both computational and statistical perspectives. In contrast, system identification and model based planning in optimal control theory have a much more solid theoretical footing, where much is known with regards to their computational and statistical properties. This work bridges this gap showing that (model free) policy gradient methods globally converge to the optimal solution and are efficient (polynomially so in relevant problem dependent quantities) with regards to their sample and computational complexities. Recent years have seen major advances in the control of uncertain dynamical systems using reinforcement learning and data-driven approaches; examples range from allowing robots to perform more sophisticated controls tasks such as robotic hand manipulation (; BID1 ; ; ; a), to sequential decision making in game domains, e.g. AlphaGo and Atari game playing . Deep reinforcement learning (DeepRL) are becoming increasingly popular for tackling such challenging sequential decision making problems. Many of these successes have relied on sampling based reinforcement learning algorithms such as policy gradient methods, including the DeepRL approaches; here, there is little theoretical understanding of their efficiency, either from a statistical or a computational perspective. In contrast, control theory (optimal and adaptive control) has a rich body of tools, with provable guarantees, for related sequential decision making problems, particularly those that involve continuous control. These latter techniques are often model-based -they estimate an explicit dynamical model first (e.g. system identification) and then design optimal controllers. This work builds bridges between these two lines of work, namely, between optimal control theory and sample based reinforcement learning methods, using ideas from mathematical optimization. In the standard optimal control problem, the dynamics model f t, where f t is specified as x t+1 = f t (x t, u t, w t), maps a state x t ∈ R d, a control (the action) u t ∈ R k, and a disturbance w t, to the next state x t+1 ∈ R d. The objective is to find the control input u t which minimizes the long term cost, minimize T t=1 c t (x t, u t)such that x t+1 = f t (x t, u t, w t).Here the u t are allowed to depend on the history of observed states. In practice, this is often solved by considering the linearized control (sub-)problem where the dynamics are approximated by x t+1 = A t x t + B t u t + w t, and the costs are approximated by a quadratic function in x t and u t, e.g. . This work considers an important special case: the time homogenous, infinite horizon problem referred to as the linear quadratic regulator (LQR) problem. 
The herein can also be extended to the finite horizon, time in-homogenous setting, discussed in Section 5.In the LQR problem, the objective is minimize E ∞ t=0 (x t Qx t + u t Ru t)such that x t+1 = Ax t + Bu t, x 0 ∼ D.where initial state x 0 ∼ D is assumed to be randomly distributed according to distribution D; the matrices A ∈ R d×d and B ∈ R d×k are referred to as system (or transition) matrices; Q ∈ R d×d and R ∈ R k×k are both positive definite matrices that parameterize the quadratic costs. For clarity, this work does not consider a noise disturbance but only a random initial state. The importance of (some) randomization for analyzing direct methods is discussed in Section 3.Throughout, assume that A and B are such that the optimal cost is finite (for example, the controllability of the pair (A, B) would ensure this). Optimal control theory BID2 BID13 BID5 BID6 shows that the optimal control input can be written as a linear function in the state, DISPLAYFORM0 Planning with a known model. Planning can be achieved by solving the algebraic Riccati equation, DISPLAYFORM1 for a positive definite matrix P which parameterizes the "cost-to-go" (the optimal cost from a state going forward). The optimal control gain is then given as: DISPLAYFORM2 There are both algebraic solution methods to find P and (convex) SDP formulations to solve for P. More broadly, even though there are convex formulations for planning, these formulations: 1) do not directly parameterize the policy 2) they are not "end-to-end" approaches in that they are not directly optimizing the cost function of interest and 3) it is not immediately clear how to utilize these approaches in the model-free setting, where the agent only has simulation access. These formulations are discussed in Section A, where there is a discussion of how the standard SDP formulation is not a direct method that minizes the cost over the set of feasible policies. Even in the most basic case of the standard linear quadratic regulator model, little is understood as to how direct (model-free) policy gradient methods fare. This work provides rigorous guarantees, showing that, while in fact the approach is a non-convex one, directly using (model free) local search methods leads to finding the globally optimal policy. The main contributions are as follows:• (Exact case) Even with access to exact gradient evaluation, little is understood about whether or not convergence to the optimal policy occurs, even in the limit, due to the non-convexity in the problem. This work shows that global convergence does indeed occur (and does so efficiently) for local search based methods.• (Model free case) Without a model, this work shows how one can use simulated trajectories (as opposed to having knowledge of the model) in a stochastic policy gradient method where provable convergence to a globally optimal policy is guaranteed, with (polynomially) efficient computational and sample complexities.• (The natural policy gradient) Natural policy gradient methods BID18 ) -and related algorithms such as Trust Region Policy Optimization and the natural actor critic -are some of the most widely used and effective policy gradient methods (see BID12). While many argue in favor of this method based on either information geometry BID18 BID3 or based on connections to actor-critic methods BID11, these do not provably show an improved convergence rate. 
This work is the first to provide a guarantee that the natural gradient method enjoys a considerably improved convergence rate over its naive gradient counterpart. More broadly, the techniques in this work merge ideas from optimal control theory, mathematical (and zeroth order) optimization, and sample based reinforcement learning methods. These techniques may ultimately help in improving upon the existing set of algorithms, addressing issues such as variance reduction or improving upon the natural policy gradient method (with, say, a GaussNewton method). The Discussion touches upon some of these issues. In the reinforcement learning setting, the model is unknown, and the agent must learn to act through its interactions with the environment. Here, solution concepts are typically divided into: modelbased approaches, where the agent attempts to learn a model of the world, and model-free approaches, where the agent directly learns to act and does not explicitly learn a model of the world. The related work on provably learning LQRs is reviewed from this perspective. Model-based learning approaches. In the context of LQRs, the agent attempts to learn the dynamics of "the plant" (i.e. the model) and then plans, using this model, for control synthesis. Here, the classical approach is to learn the model with subspace identification . BID14 provides a provable learning (and non-asymptotic) , where the quality of the policy obtained is shown to be near optimal (efficiency is in terms of the persistence of the training data and the controllability Gramian). BID0 also provides provable, nonasymptotic learning (in a regret context), using a bandit algorithm that achieves lower sample complexity (by balancing exploration-exploitation more effectively); the computational efficiency of this approach is less clear. More recently, BID10 expands on an explicit system identification process, where a robust control synthesis procedure is adopted that relies on a coarse model of the plant matrices (A and B are estimated up to some accuracy level, naturally leading to a "robust control" setup). Arguably, this is the most general (and non-asymptotic) , that is efficient from both a statistical perspective (computationally, the method works with a finite horizon to approximate the infinite horizon). This only needs the plant to be controllable; the work herein needs the stronger assumption that the initial policy in the local search procedure is a stable controller (an assumption which may be inherent to local search procedures, discussed in Section 5).Model-free learning approaches. Model-free approaches that do not rely on an explicit system identification step typically either: 1) estimate value functions (or state-action values) through Monte Carlo simulation which are then used in some approximate dynamic programming variant or 2) directly optimize a (parameterized) policy, also through Monte Carlo simulation. Model-free approaches for learning optimal controllers is not well understood, from a theoretical perspective. Here, BID7 provides an asymptotic learnability using a value function approach, namely Q-learning. This work seeks to characterize the behavior of (direct) policy gradient methods, where the policy is linearly parameterized, as specified by a matrix K ∈ R k×d which generates the controls: DISPLAYFORM0 The cost of this K is denoted as: DISPLAYFORM1 where {x t, u t} is the trajectory induced by following K, starting with x 0 ∼ D. 
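In code, the cost C(K) of a fixed linear policy can be estimated by simulating rollouts. The following is a minimal NumPy sketch (not the paper's code): the rollout horizon, the sampler for x_0 ∼ D, and all names are illustrative, and the finite truncation is only an approximation of the infinite-horizon objective (the paper bounds this approximation error for stabilizing K in its appendix).

```python
import numpy as np

def lqr_cost(K, A, B, Q, R, sample_x0, horizon=500, n_rollouts=100, rng=None):
    """Monte Carlo estimate of C(K) = E[sum_t x_t'Q x_t + u_t'R u_t] for u_t = -K x_t,
    with dynamics x_{t+1} = A x_t + B u_t and x_0 ~ D, truncated at `horizon` steps."""
    rng = rng or np.random.default_rng(0)
    total = 0.0
    for _ in range(n_rollouts):
        x = sample_x0(rng)                      # x_0 ~ D
        for _ in range(horizon):
            u = -K @ x                          # linear state-feedback policy
            total += x @ Q @ x + u @ R @ u      # instantaneous quadratic cost
            x = A @ x + B @ u                   # noiseless linear dynamics
    return total / n_rollouts

# Example: d = 3 states, k = 1 input, x_0 drawn uniformly from the unit sphere.
d, k = 3, 1
A, B = 0.9 * np.eye(d), np.ones((d, k))
Q, R = np.eye(d), np.eye(k)
K = np.zeros((k, d))                            # a stabilizing (here trivial) policy
sample_x0 = lambda rng: (lambda v: v / np.linalg.norm(v))(rng.standard_normal(d))
print(lqr_cost(K, A, B, Q, R, sample_x0))
```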
The importance of (some) randomization, either in x 0 or noise through having a disturbance, for analyzing gradient methods is discussed in Section 3. Here, K * is a minimizer of C(·).Gradient descent on C(K), with a fixed stepsize η, follows the update rule: DISPLAYFORM2 It is helpful to explicitly write out the functional form of the gradient. Define P K as the solution to: DISPLAYFORM3 and, under this definition, it follows that C(K) can be written as: DISPLAYFORM4 Also, define Σ K as the (un-normalized) state correlation matrix, i.e. DISPLAYFORM5 Lemma 1. (Policy Gradient Expression) The policy gradient is: DISPLAYFORM6 Observe: DISPLAYFORM7 This implies: DISPLAYFORM8 x t x t using recursion and that x 1 = (A − BK)x 0. Taking expectations completes the proof. Sample based policy gradient methods introduce some randomization for estimating the gradient. REINFORCE. Let π θ (u|x) be a parametric stochastic policy, where u ∼ π θ (·|x). The policy gradient of the cost, C(θ), is: DISPLAYFORM0 where the expectation is with respect to the trajectory {x t, u t} induced under the policy π θ and where Q π θ (x, u) is referred to as the state-action value. The REINFORCE algorithm uses Monte Carlo estimates of the gradient obtained by simulating π θ.The natural policy gradient. The natural policy gradient BID18 ) follows the update: DISPLAYFORM1 where G θ is the Fisher information matrix. There are numerous succesful related approaches (; ; BID12). An important special case is using a linear policy with additive Gaussian noise (b) DISPLAYFORM2 where K ∈ R k×d and σ 2 is the noise variance. Here, the natural policy gradient of K (when σ is considered fixed) takes the form: DISPLAYFORM3 To see this, one can verify that the Fisher matrix of size kd × kd, which is indexed as DISPLAYFORM4 where i, i ∈ {1, . . . k} and j, j ∈ {1, . . . d}, has a block diagonal form where the only non-zeros blocks are [G K] (i,·),(i,·) = Σ K (this is the block corresponding to the i-th coordinate of the action, as i ranges from 1 to k). This form holds more generally, for any diagonal noise. Zeroth order optimization. Zeroth order optimization is a generic procedure BID9 ) for optimizing a function f (x), using only query access to the function values of f (·) at input points x (and without explicit query access to the gradients of f). This is also the approach in using "evolutionary strategies" . The generic approach can be described as follows: define the perturbed function as DISPLAYFORM5 For small σ, the smooth function is a good approximation to the original function. Due to the Gaussian smoothing, the gradient has the particularly simple functional form (see BID9): DISPLAYFORM6. This expression implies a straightforward method to obtain an unbiased estimate of the ∇f σ 2 (x), through obtaining only the function values f (x + ε) for random ε. This section provides a brief characterization of the optimization landscape, in order to help provide intuition as to why global convergence is possible and as to where the analysis difficulties lie. Lemma 2. (Non-convexity) If d ≥ 3, there exists an LQR optimization problem, min K C(K), which is not convex, quasi-convex, and star-convex. Section B provides a specific example. In general, for a non-convex optimization problem, gradient descent may not even converge to the global optima in the limit. For the case of LQRs, the following corollary (of Lemma 8) provides a characterization of the stationary points. Corollary 3. 
(Stationary point characterization) If ∇C(K) = 0, then either K is an optimal policy or Σ K is rank deficient. This lemma is the motivation for using a distribution over x 0 (as opposed to a deterministic starting point): E x0∼D x 0 x 0 being full rank guarantees that Σ K is full rank, which implies all stationary points are a global optima. An additive disturbance in the dynamics model also suffices. The concept of gradient domination is important in the non-convex optimization literature (; ;). A function f: R d → R is said to be gradient dominated if there exists some constant λ, such that for all x, DISPLAYFORM0 If a function is gradient dominated, this implies that if the magnitude of the gradient is small at some x, then the function value at x will be close to that of the optimal function value. The following corollary of Lemma 8 shows that C(K) is gradient dominated. DISPLAYFORM1 where λ is a problem dependent constant (and ·, · denotes the trace inner product).With gradient domination and no (spurious) local optima, one may hope that recent on escaping saddle points (; BID16 BID17 immediately imply that gradient descent converges quickly. This is not the case due to that it is not straightforward to characterize the (local) smoothness properties of C(K); this is a difficulty well studied in the optimal control theory literature, related to robustness and stability. In fact, if it were the case that C(K) is a smooth function 1 (in addition to being gradient dominated), then classical mathematical optimization would not only immediately imply global convergence, these would also imply convergence at a linear rate. First, on exact gradient methods are provided. From an analysis perspective, this is the natural starting point; once global convergence is established for exact methods, the question of using simulation-based, model-free methods can be approached with zeroth-order optimization methods. Notation. Z denotes the spectral norm of a matrix Z; Tr(Z) denotes the trace of a square matrix; σ min (Z) denotes the minimal singular value of a square matrix Z. Also, it is helpful to define DISPLAYFORM0 The following three exact update rules are considered: DISPLAYFORM0 Natural policy gradient descent: DISPLAYFORM1 Gauss-Newton: DISPLAYFORM2 Kn. The natural policy gradient descent direction is defined so that it is consistent with the stochastic case, as per Equation 4. It is straightforward to verify that the policy iteration algorithm is a special case of the Gauss-Newton method when η = 1 (for the case of policy iteration, convergence in the limit is provided in Todorov & Li ( The Gauss-Newton method requires the most complex oracle to implement: it requires access to ∇C(K), Σ K, and R+B P K B; it also enjoys the strongest convergence rate guarantee. At the other extreme, gradient descent requires oracle access to only ∇C(K) and has the slowest convergence rate. The natural policy gradient sits in between, requiring oracle access to ∇C(K) and Σ K, and having a convergence rate between the other two methods. Theorem 5. 
(Global Convergence of Gradient Methods) Suppose C(K 0) is finite and and µ > 0.• Gauss-Newton case: For a stepsize η = 1 and for DISPLAYFORM3 the Gauss-Newton algorithm (Equation 7) enjoys the following performance bound: DISPLAYFORM4 • Natural policy gradient case: For a stepsize DISPLAYFORM5 and for DISPLAYFORM6 natural policy gradient descent (Equation 6) enjoys the following performance bound: DISPLAYFORM7 Algorithm 1 Model-Free Policy Gradient (and Natural Policy Gradient) Estimation 1: Input: K, number of trajectories m, roll out length, smoothing parameter r, dimension d 2: DISPLAYFORM8 Sample a policy K i = K + U i, where U i is drawn uniformly at random over matrices whose (Frobenius) norm is r. Simulate K i for steps starting from x 0 ∼ D. Let C i and Σ i be the empirical estimates: DISPLAYFORM0 where c t and x t are the costs and states on this trajectory. 5: end for 6: Return the (biased) estimates: DISPLAYFORM1 • Gradient descent case: For an appropriate (constant) setting of the stepsize η, DISPLAYFORM2 and for DISPLAYFORM3, gradient descent (Equation 5) enjoys the following performance bound: DISPLAYFORM4 In comparison to model-based approaches, these require the (possibly) stronger assumption that the initial policy is a stable controller, i.e. C(K 0) is finite (an assumption which may be inherent to local search procedures). The Discussion mentions this as direction of future work. In the model free setting, the controller has only simulation access to the model; the model parameters, A, B, Q and R, are unknown. The standard optimal control theory approach is to use system identification to learn the model, and then plan with this learned model This section proves that model-free, policy gradient methods also lead to globally optimal policies, with both polynomial computational and sample complexities (in the relevant quantities).Using a zeroth-order optimization approach (see Section 2.2), Algorithm 1 provides a procedure to find (controllably biased) estimates, ∇C(K) and Σ K, of both ∇C(K) and Σ K. These can then be used in the policy gradient and natural policy gradient updates as follows: DISPLAYFORM0 Natural policy gradient descent: DISPLAYFORM1 where Algorithm 1 is called at every iteration to provide the estimates of ∇C(K n) and Σ Kn.The choice of using zeroth order optimization vs using REINFORCE (with Gaussian additive noise, as in Equation 3) is primarily for technical reasons 2. It is plausible that the REINFORCE estimation procedure has lower variance. One additional minor difference, again for technical reasons, is that Algorithm 1 uses a perturbation from the surface of a sphere (as opposed to a Gaussian perturbation).Theorem 6. (Global Convergence in the Model Free Setting) Suppose C(K 0) is finite, µ > 0, and that x 0 ∼ D has norm bounded by L almost surely. Also, for both the policy gradient method and the natural policy gradient method, suppose Algorithm 1 is called with parameters: DISPLAYFORM2 • Natural policy gradient case: For a stepsize DISPLAYFORM3 and for DISPLAYFORM4 then, with high probability, i.e. 
with probability greater than 1 − exp(−d), the natural policy gradient descent update (Equation 9) enjoys the following performance bound: DISPLAYFORM5 • Gradient descent case: For an appropriate (constant) setting of the stepsize η, DISPLAYFORM6 and if N satisfies DISPLAYFORM7, then, with high probability, gradient descent (Equation 8) enjoys the following performance bound: DISPLAYFORM8 This work has provided provable guarantees that model-based gradient methods and model-free (sample based) policy gradient methods convergence to the globally optimal solution, with finite polynomial computational and sample complexities. Taken together, the herein place these popular and practical policy gradient approaches on a firm theoretical footing, making them comparable to other principled approaches (e.g. subspace ID methods and algebraic iterative approaches).Finite C(K 0) assumption, noisy case, and finite horizon case. These methods allow for extensions to the noisy case and the finite horizon case. This work also made the assumption that C(K 0) is finite, which may not be easy to achieve in some infinite horizon problems. The simplest way to address this is to model the infinite horizon problem with a finite horizon one; the techniques developed in Section D.1 shows this is possible. This is an important direction for future work. Open Problems.• Variance reduction: This work only proved efficiency from a polynomial sample size perspective. An interesting future direction would be in how to rigorously combine variance reduction methods and model-based methods to further decrease the sample size.• A sample based Gauss-Newton approach: This work showed how the Gauss-Newton algorithm improves over even the natural policy gradient method, in the exact case. A practically relevant question for the Gauss-Newton method would be how to both: a) construct a sample based estimator b) extend this scheme to deal with (non-linear) parametric policies.• Robust control: In model based approaches, optimal control theory provides efficient procedures to deal with (bounded) model mis-specification. An important question is how to provably understand robustness in a model free setting. This section briefly reviews some parameterizations and solution methods for the classic LQR and related problems from control theory. Finite horizon LQR. First, consider the finite horizon case. The basic approach is to view it as a dynamic program with the value function x T t P t x t, where DISPLAYFORM0 which in turn gives optimal control DISPLAYFORM1 Another approach is to view the LQR problem as a linearly-constrained Quadratic Program in all x t and u t (where the constraints are given by the dynamics, and the problem size equals the horizon).The QP is clearly a convex problem, but this observation is not useful by itself as the problem size grows with the horizon, and naive use of quadratic programming scales badly. However, the special structure due to linear dynamics allows for simplifications and control-theoretic interpretation as follows: the Lagrange multipliers can be interpreted as "co-state" variables, and they follow a recursion that runs backwards in time known as the "adjoint system". Using Lagrange duality, one can show that this approach is equivalent to solving the Riccati recursion mentioned above. 
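As a concrete illustration of this dynamic-programming view, the following NumPy sketch runs the standard discrete-time backward Riccati recursion (the textbook recursion, not code from the paper), returning the cost-to-go matrices and the time-varying gains K_t that give the optimal control u_t = −K_t x_t.

```python
import numpy as np

def riccati_recursion(A, B, Q, R, T):
    """Backward Riccati recursion for the finite-horizon LQR.

    Returns the final cost-to-go matrix P_0 and the gains K_0, ..., K_{T-1} such
    that u_t = -K_t x_t is optimal.  For large T, K_0 approaches the static
    infinite-horizon optimal gain K* = (R + B'PB)^{-1} B'PA.
    """
    P = Q.copy()                                     # terminal cost-to-go P_T = Q
    gains = []
    for _ in range(T):
        G = R + B.T @ P @ B
        K = np.linalg.solve(G, B.T @ P @ A)          # gain at the current step
        P = Q + A.T @ P @ (A - B @ K)                # Riccati update for the previous step
        gains.append(K)
    return P, gains[::-1]                            # gains ordered t = 0, ..., T-1
```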
Popular use of the LQR in control practice is often in the receding horizon LQR, BID8;: at time t, an input sequence is found that minimizes the T -step ahead LQR cost starting at the current time, then only the first input in the sequence is used. The ing static feedback gain converges to the infinite horizon optimal solution as horizon T becomes longer. Infinite horizon LQR. Here, the constrained optimization view (QP) is not informative as the problem is infinite dimensional; the dynamic programming viewpoint readily extends. Suppose the system A, B is controllable (which guarantees optimal cost is finite). It turns out that the value function and the optimal controller are static (do not depend on t) and can be found by solving the Algebraic Riccati Equation (ARE) given in. The optimal K can then be found from equation.The main computational step is solving the ARE, which is extensively studied (e.g. ). One approach due to is to simply run the recursion DISPLAYFORM2 T P k A with P 1 = Q, which converges to the unique positive semidefinite solution of the ARE (since the fixed-point iteration is contractive). Other approaches are direct and based on linear algebra, which carry out an eigenvalue decomposition on a certain block matrix followed by a matrix inversion .Direct computation of the control input has also been considered in the optimal control literature, e.g., gradient updates in function spaces . For the linear quadratic setup, direct iterative computation of the feedback gain has been examined in (Mårtensson &), and explored further in (Mårtensson, 2012) with a view towards distributed implementations. There methods are presented as local search heuristics without provable guarantees of reaching the optimal policy. SDP formulation. The LQR problem can also be expressed as a semidefinite program (SDP) with variable P, as given in (section 5, equation, this is for a continuous-time system but there are similar discrete-time versions). This SDP can be derived by relaxing the equality in the Riccati equation to an inequality, then using the Schur complement formula to rewrite the ing Riccati inequality as linear matrix inequality; the objective in the case of LQR is the trace of the positive definite matrix variable. This formulation and its dual has been explored in BID4.It is important to note that while the optimal solution of this SDP is the unique positive semidefinite solution to the Riccati equation, which in turn gives the optimal policy K *, other feasible P (not equal to P *) do not necessarily correspond to a feasible, stabilizing policy K. This means that the feasible set of this SDP is not a convex characterization of all P that correspond to stabilizing K. Thus it also implies that if one uses any optimization algorithm that maintains iterates in the feasible set (e.g. interior point methods), no useful policy can be extracted from the iterates before convergence to P *. For this reason, this convex formulation is not helpful for parametrizing the space of policies K in manner that supports the use of local search methods (those that directly lower the cost function of interest), which is the focus of this work. Let K(A, B) denote the set of state feedback gains K such that A − BK is stable, i.e., its eigenvalues are inside the unit circle in the complex plane. This set is generally nonconvex. A concise counterexample to convexity is provided here. 
Let A and B be 3 × 3 identity matrices and This section provides the analysis of the convergence rates of the (exact) gradient based methods. First, some helpful lemmas for the analysis are provided. Throughout, it is convenient to use the following definition: DISPLAYFORM0 The policy gradient can then be written as: DISPLAYFORM1 Define the value V K (x), the state-action value Q K (x, u), and the advantage A K (x, u). V K (x, t) is the cost of the policy starting with x 0 = x and proceeding with K onwards: DISPLAYFORM0 is the cost of the policy starting with x 0 = x, taking action u 0 = u and then proceeding with K onwards: DISPLAYFORM1 The advantage can be viewed as the change in cost starting at state x and taking a one step deviation from the policy K.The next lemma is identical to that in BID19 BID20 for Markov decision processes. Lemma 7. (Cost difference lemma) Suppose K and K have finite costs. Let {x t} and {u t} be state and action sequences generated by K, i.e. starting with x 0 = x and using u t = −K x t. It holds that: DISPLAYFORM2 Also, for any x, the advantage is: DISPLAYFORM3 Proof. Let c t be the cost sequence generated by K. Telescoping the sum appropriately: DISPLAYFORM4 which completes the first claim. For the second claim, observe that: DISPLAYFORM5 And, for u = K x, DISPLAYFORM6 which completes the proof. This lemma is helpful in proving that C(K) is gradient dominated. Lemma 8. (Gradient domination) Let K * be an optimal policy. Suppose K has finite cost and µ > 0. It holds that: DISPLAYFORM7 For a lower bound, it holds that: DISPLAYFORM8 Proof. From Equation 10 and by completing the square, DISPLAYFORM9 with equality when DISPLAYFORM10 Let x * t and u * t be the sequence generated under K *. Using this and Lemma 7, DISPLAYFORM11 which completes the proof of the upper bound. For the lower bound, consider DISPLAYFORM12 −1 E K where equality holds in Equation 11. Let x t and u t be the sequence generated under K. Using that DISPLAYFORM13 which completes the proof. Recall that a function f is said to be smooth (or C 1 -smooth) if it satisfies for some finite β, it satisfies: DISPLAYFORM14 for all x, y (equivalently, it is smooth if the gradients of f are continuous). Lemma 9. ("Almost" smoothness) C(K) satisfies: DISPLAYFORM15 To see why this is related to smoothness (e.g. compare to Equation 12), suppose K is sufficiently close to K so that: DISPLAYFORM16 and the leading order term 2Tr(Σ K (K − K) E K ) would then behave as Tr((K − K) ∇C(K)). The challenge in the proof (for gradient descent) is quantifying the lower order terms in this argument. Proof. The claim immediately from Lemma 7, by using Equation 10 and taking an expectation. The next lemma spectral norm bounds on P K and Σ K are helpful: Lemma 10. It holds that: DISPLAYFORM17 Proof. For the first claim, C(K) is lower bounded as: DISPLAYFORM18 Alternatively, C(K) can be lower bounded as: DISPLAYFORM19 which proves the second claim. The next lemma bounds the one step progress of Gauss-Newton. Lemma 11. Suppose that: DISPLAYFORM0 −1 E K. Using Lemma 9 and the condition on η, DISPLAYFORM1 where the last step uses Lemma 8.With this lemma, the proof of the convergence rate of the Gauss Newton algorithm is immediate. Proof. (of Theorem 5, Gauss-Newton case) The theorem is due to that η = 1 leads to a contraction of 1 − ηµ Σ K * at every step. The next lemma bounds the one step progress of the natural policy gradient. Lemma 12. Suppose: DISPLAYFORM0. It holds that: DISPLAYFORM1 Proof. 
Since K = K − ηE K, Lemma 9 implies: DISPLAYFORM2 The last term can be bounded as: DISPLAYFORM3 Continuing and using the condition on η, DISPLAYFORM4 using Lemma 8.With this lemma, the proof of the natural policy gradient convergence rate can be completed. Proof. (of Theorem 5, natural policy gradient case) Using Lemma 10, DISPLAYFORM5 The proof is completed by induction: DISPLAYFORM6, since Lemma 12 can be applied. The proof proceeds by arguing that Lemma 12 can be applied at every step. If it were the case that DISPLAYFORM7 and by Lemma 12: DISPLAYFORM8 which completes the proof. As informally argued by Equation 13, the proof seeks to quantify how Σ K changes with η. Then the proof bounds the one step progress of gradient descent. DISPLAYFORM0 This subsections aims to prove the following: Lemma 13. (Σ K perturbation) Suppose K is such that: DISPLAYFORM1 It holds that: DISPLAYFORM2 The proof proceeds by starting with a few technical lemmas. First, define a linear operator on symmetric matrices, T K (·), which can be viewed as a matrix on d+1 2dimensions. Define this operator on a symmetric matrix X as follows: DISPLAYFORM3 Also define the induced norm of T as follows: DISPLAYFORM4 where the supremum is over all symmetric matrices X (whose spectral norm is non-zero).Also, define DISPLAYFORM5 Proof. For a unit norm vector v ∈ R d and unit spectral norm matrix X, DISPLAYFORM6 The proof is completed using the upper bound on Σ K in Lemma 10.Also, with respect to K, define another linear operator on symmetric matrices: DISPLAYFORM7 Let I to denote the identity operator on the same space. Define the induced norm · of these operators as in Equation 14. Note these operators are related to the operator T K as follows:Lemma 15. When (A − BK) has spectral radius smaller than 1, DISPLAYFORM8 Proof. When (A − BK) has spectral radius smaller than 1, T K is well defined and is the solution of DISPLAYFORM9 Since, DISPLAYFORM10 The proof of Lemma 13 seeks to bound: DISPLAYFORM11 The following two perturbation bounds are helpful in this. Lemma 16. It holds that: DISPLAYFORM12 Proof. Let ∆ = K − K. For every matrix X, DISPLAYFORM13 The operator norm of F K − F K is the maximum possible ratio in spectral norm of (F K − F K)(X) and X. Then the claim follows because AX ≤ A X. DISPLAYFORM14 Proof. Define A = I − F K, and DISPLAYFORM15 Observe: DISPLAYFORM16. DISPLAYFORM17 and so DISPLAYFORM18 This proves the main inequality. The last step of the inequality is just applying definition of the norm of DISPLAYFORM19 With these Lemmas, the proof is completed as follows: Proof. (of Lemma 13) First, the proof shows T K F K − F K ≤ 1/2, which is the desired condition in Lemma 17. First, observe that under the assumed condition on K − K, implies that DISPLAYFORM20 using that DISPLAYFORM21 ≤ 1 due to Lemma 10. Using Lemma 16, DISPLAYFORM22 Using this and Lemma 14, DISPLAYFORM23 where the last step uses the condition on K − K.Thus, DISPLAYFORM24 using Lemmas 10 and 16. Equipped with these lemmas, the one step progress of gradient descent can be bounded. Lemma 18. Suppose that DISPLAYFORM0 It holds that: DISPLAYFORM1 Proof. By Lemma 9, DISPLAYFORM2 where the last step uses Lemma 8.By Lemma 13, DISPLAYFORM3 using the assumed condition on η. Using this last claim and Lemma 10, DISPLAYFORM4 using the condition on η. In order to prove a gradient descent convergence rate, the following bounds are helpful: Lemma 19. It holds that DISPLAYFORM5 and that: DISPLAYFORM6 Proof. 
Using Lemma 10, DISPLAYFORM7 By Lemma 8, DISPLAYFORM8 µ which proves the first claim. Again using Lemma 8, DISPLAYFORM9 which proves the second claim. With these lemmas, the proof of the gradient descent convergence rate follows:Proof. (of Theorem 5, gradient descent case) First, the following argues that progress is made at t = 1. Based on Lemma 10 and Lemma 19, by choosing η to be an appropriate polynomial in DISPLAYFORM10, σ min (Q) and µ, the stepsize condition in Equation 16 is satisfied. Hence, by Lemma 18, DISPLAYFORM11 which implies that the cost decreases at t = 1. Proceeding inductively, now suppose that C(K t) ≤ C(K 0), then the stepsize condition in Equation 16 is still satisfied (due to the use of C(K 0) in bounding the quantities in Lemma 19). Thus, Lemma 18 can again be applied for the update at time t + 1 to obtain: DISPLAYFORM12 ≤ ε, and the follows. This section shows how techniques from zeroth order optimization allow the algorithm to run in the model-free setting with only black-box access to a simulator. The dependencies on various parameters are not optimized, and the notation h is used to represent different polynomial factors in the relevant factors (DISPLAYFORM0 µσmin(Q), A, B, R, 1/σ min (R)). When the polynomial also depend on dimension d or accuracy 1/, this is specified as parameters (h FIG2).The section starts by showing how the infinite horizon can be approximated with a finite horizon. This section shows that as long as there is an upper bound on C(K), it is possible to approximate both C(K) and Σ(K) with any desired accuracy. Lemma 20. For any K with finite C(K), let Σ DISPLAYFORM0 Proof. First, the bound on Σ K is proved. Define the operators T K and F K as in Section C.4, observe DISPLAYFORM1, this follows immediately from the form of F K (X) = (A + BK)X(A + BK). If X is PSD then W XW is also PSD for any W. Now, since tr(DISPLAYFORM2 (Here the last step is by Lemma 10), and all traces are nonnegative, then there must exists j < such that tr(DISPLAYFORM3 Therefore as long as DISPLAYFORM4, it follows that: DISPLAYFORM5 Here the first step is again because of all the terms are PSD, so using more terms is always better. The last step follows because F j (Σ K) is also a PSD matrix so the spectral norm is bounded by trace. In fact, it holds that tr(DISPLAYFORM6 Therefore if DISPLAYFORM7 The next lemma show that the function value and its gradient are approximate preserved if a small perturbation to the policy K is applied. Lemma 21. (C K perturbation) Suppose K is such that: DISPLAYFORM0, K then: DISPLAYFORM1 As in the proof of Lemma 16, the assumption implies that T K F K − F K ≤ 1/2, and, from Equation 15, that DISPLAYFORM2 Hence, DISPLAYFORM3 For the first term, DISPLAYFORM4 Combining the two terms completes the proof. The next lemma shows the gradient is also stable after perturbation. Lemma 22. (∇C K perturbation) Suppose K is such that: DISPLAYFORM5 Let's first look at the second term. By Lemma 8, DISPLAYFORM6 then by Lemma 13 DISPLAYFORM7 Therefore the second term is bounded by DISPLAYFORM8 By the previous lemma, DISPLAYFORM9 Since K ≤ 2 K, and K can be bounded by C(K) (Lemma 19), all the terms can be bounded by polynomials of related parameters multiplied by K − K. This section analyzes the smoothing procedure and completes the proof of gradient descent. Although Gaussian smoothing is more standard, the objective C(K) is not finite for every K, therefore technically E u∼N (0,σ 2 I) [C(K + u)] is not well defined. 
This is avoidable by smoothing in a ball. Let S r represent the uniform distribution over the points with norm r (boundary of a sphere), and B r represent the uniform distribution over all points with norm at most r (the entire sphere). When applying these sets to matrix a U, the Frobenius norm ball is used. The algorithm performs gradient descent on the following function DISPLAYFORM0 The next lemma uses the standard technique (e.g. in BID15) to show that the gradient of C r (K) can be estimated just with an oracle for function value. DISPLAYFORM1 This is the same as Lemma 2.1 in BID15, for completeness the proof is provided below. Proof. By Stokes formula, DISPLAYFORM2 By definition, DISPLAYFORM3 Under review as a conference paper at ICLR 2018Also, DISPLAYFORM4 The Lemma follows from combining these equations, and use the fact that DISPLAYFORM5 From the lemma above and standard concentration inequalities, it is immediate that it suffices to use a polynomial number of samples to approximate the gradient. Lemma 24. Given an, there are fixed polynomials h r (1/), h sample (d, 1/) such that when r ≤ 1/h r (1/), with m ≥ h sample (d, 1/) samples of U 1,..., U n ∼ S r, with high probability (at least DISPLAYFORM6 is also close to ∇C(K) with high probability. Proof. For the first part, the difference is broken into two terms: DISPLAYFORM7 For the first term, choose h r (1/) = min{1/r 0, 2h grad /} (r 0 is chosen later). By Lemma 22 when r is smaller than 1/h r (1/) = /2h grad, every point u on the sphere have ∇C(K + U) − ∇C(K) ≤ /4. Since ∇C r (K) is the expectation of ∇C(K + U), by triangle inequality ∇C r (K) − ∇C(K) ≤ /2.The proof also makes sure that r ≤ r 0 such that for any U ∼ S r, it holds that C(K + U) ≤ 2C(K). By Lemma 21, 1/r 0 is a polynomial in the relevant factors. Adding these two terms and apply triangle inequality gives the . For the second part, the proof breaks it into more terms. Let ∇ be equal to The third term is just what was bounded earlier, using h r,trunc (1/) = h r (2/) and making sure h sample,trunc (d, 1/) ≥ h sample (d, 2/), this guarantees that it is smaller than /2.For the second term, choose ≥ Therefore,∇ − ∇ is again a sum of independent vectors with bounded norm, so by Vector Bernstein's inequality, when h sample,trunc (d, 1/, L 2 /µ) is a large enough polynomial, ∇ − ∇ ≤ /4 with high probability. Adding all the terms finishes the proof. Theorem 25. There are fixed polynomials h GD,r (1/), h GD,sample (d, 1/, L 2 /µ), h GD, (d, 1/) such that if every step the gradient is computed as Lemma 24 (truncated at step), pick step size η and T the same as the gradient descent case of Theorem 5, it holds that C(K T) − C(K) ≤ with high probability (at least 1 − exp(−d)). Before the Theorem for natural gradient is proven, the following lemma shows the variance can be estimated accurately. satisfies Σ − Σ K ≤. Further, when ≤ µ/2, it holds that σ min (Σ K) ≥ µ/2.Proof. This is broken into three terms: let Σ Therefore, standard concentration bounds show that when h varsample,trunc is a large enough polynomial, Σ −Σ ≤ /2 holds with high probability. Adding these three terms gives the . Finally, the bound on σ min (Σ K) follows simply from Weyl's Theorem. Theorem 27. Suppose C(K 0) is finite and and µ > 0. 
The natural gradient follows the update rule: DISPLAYFORM0 Suppose the stepsize is set to be: DISPLAYFORM1 If the gradient and variance are estimated as in Lemma 24, Lemma 26 with r = 1/h N GD,r (1/), with m ≥ h N GD,sample (d, 1/, L 2 /µ) samples, both are truncated to h N GD, (d, 1/) iterations, then with high probability (at least 1 − exp(−d)) in T iterations where DISPLAYFORM2 µσ min (R) log 2(C(K 0) − C(K *)) ε then the natural gradient satisfies the following performance bound: DISPLAYFORM3
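For concreteness, here is a minimal sketch of the zeroth-order gradient estimator underlying Algorithm 1, assuming NumPy and a user-supplied `rollout_cost` callback that returns an empirical cost from a truncated trajectory started at x_0 ∼ D. The sphere-sampling helper and the scaling factor d_param / r^2 follow the standard smoothed-gradient form; the exact constants, truncation lengths, and sample sizes required by the analysis are those of Algorithm 1 and Theorems 6, 25, and 27, not the illustrative defaults below.

```python
import numpy as np

def sample_frobenius_sphere(shape, r, rng):
    """Draw U uniformly from matrices with Frobenius norm exactly r."""
    U = rng.standard_normal(shape)
    return r * U / np.linalg.norm(U)

def zeroth_order_gradient(K, rollout_cost, m=100, r=0.05, rng=None):
    """Smoothed-gradient estimate of C at K from m one-point cost evaluations.

    rollout_cost(K_i) should return an empirical cost estimate C_i obtained by
    simulating the perturbed policy K_i = K + U_i for finitely many steps.
    """
    rng = rng or np.random.default_rng(0)
    d_param = K.size
    grad = np.zeros_like(K, dtype=float)
    for _ in range(m):
        U = sample_frobenius_sphere(K.shape, r, rng)
        grad += rollout_cost(K + U) * U
    return (d_param / (m * r ** 2)) * grad

# One update step (Equations 8 and 9), given an estimate sigma_hat of Sigma_K:
#   K_next = K - eta * zeroth_order_gradient(K, rollout_cost)                              # gradient descent
#   K_next = K - eta * zeroth_order_gradient(K, rollout_cost) @ np.linalg.inv(sigma_hat)   # natural gradient
```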
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BJDEbngCZ
This paper shows that model-free policy gradient methods can converge to the globally optimal solution for non-convex linearized control problems.
Low-dimensional vector embeddings, computed using LSTMs or simpler techniques, are a popular approach for capturing the “meaning” of text and a form of unsupervised learning useful for downstream tasks. However, their power is not theoretically understood. The current paper derives formal understanding by looking at the subcase of linear embedding schemes. Using the theory of compressed sensing we show that representations combining the constituent word vectors are essentially information-preserving linear measurements of Bag-of-n-Grams (BonG) representations of text. This leads to a new theoretical about LSTMs: low-dimensional embeddings derived from a low-memory LSTM are provably at least as powerful on classification tasks, up to small error, as a linear classifier over BonG vectors, a that extensive empirical work has thus far been unable to show. Our experiments support these theoretical findings and establish strong, simple, and unsupervised baselines on standard benchmarks that in some cases are state of the art among word-level methods. We also show a surprising new property of embeddings such as GloVe and word2vec: they form a good sensing matrix for text that is more efficient than random matrices, the standard sparse recovery tool, which may explain why they lead to better representations in practice. Much attention has been paid to using LSTMs BID15 and similar models to compute text embeddings BID3 BID7. Once trained, the LSTM can sweep once or twice through a given piece of text, process it using only limited memory, and output a vector with moderate dimensionality (a few hundred to a few thousand), which can be used to measure text similarity via cosine similarity or as a featurization for downstream tasks. The powers and limitations of this method have not been formally established. For example, can such neural embeddings compete with and replace traditional linear classifiers trained on trivial Bag-of-n-Grams (BonG) representations? Tweaked versions of BonG classifiers are known to be a surprisingly powerful baseline and have fast implementations BID17. They continue to give better performance on many downstream supervised tasks such as IMDB sentiment classification BID21 than purely unsupervised LSTM representations BID19 BID13 BID25. Even a very successful character-level (and thus computation-intensive, taking a month of training) approach does not reach BonG performance on datasets larger than IMDB BID31. Meanwhile there is evidence suggesting that simpler linear schemes give compact representations that provide most of the benefits of word-level LSTM embeddings (; BID1 . These linear schemes consist of simply adding up, with a few modifications, standard pretrained word embeddings such as GloVe or word2vec BID24 BID29 .The current paper ties these disparate threads together by giving an information-theoretic account of linear text embeddings. We describe linear schemes that preserve n-gram information as lowdimensional embeddings with provable guarantees for any text classification task. The previous linear schemes, which used unigram information, are subcases of our approach, but our best schemes can also capture n-gram information with low additional overhead. Furthermore, we show that the original unigram information can be (approximately) extracted from the low-dimensional embedding using sparse recovery/compressed sensing BID6. 
Our approach also fits in the tradition of the older work on distributed representations of structured objects, especially the works of BID30 and BID18. The following are the main results achieved by this new world-view: 1. Using random vectors as word embeddings in our linear scheme (instead of pretrained vectors) already allows us to rigorously show that low-memory LSTMs are provably at least as good as every linear classifier operating on the full BonG vector. This is a novel theoretical result in deep learning, obtained relatively easily. By contrast, extensive empirical study of this issue has been inconclusive (apart from character-level models, and even then only on smaller datasets BID31). Note also that empirical work by its nature can only establish performance on some available datasets, not on all possible classification tasks. We prove this theorem in Section 4 by providing a nontrivial generalization of a result combining compressed sensing and learning (BID5). In fact, before our work we do not know of any provable quantification of the power of any text embedding. 2. We study theoretically and experimentally how our linear embedding scheme improves when it uses pretrained embeddings (GloVe etc.) instead of random vectors. Empirically we find that this improves the ability to preserve Bag-of-Words (BoW) information, which has the following restatement in the language of sparse recovery: word embeddings are better than random matrices for "sensing" BoW signals (see Section 5). We give some theoretical justification for this surprising finding using a new sparse recovery property characterizing when nonnegative signals can be reconstructed by l 1 -minimization. 3. Section 6 provides empirical results supporting the above theoretical work, reporting the accuracy of our linear schemes on multiple standard classification tasks. Our embeddings are consistently competitive with recent results and perform much better than all previous linear methods. Among unsupervised word-level representations they achieve state of the art performance on both the binary and fine-grained SST sentiment classification tasks BID33. Since our document representations are fast, compositional, and simple to implement given standard word embeddings, they provide strong baselines for future work. Neural text embeddings are instances of distributed representations, long studied in connectionist approaches because they decay gracefully with noise and allow distributed processing. BID14 provided an early problem formulation, and BID30 provided an elementary solution, the holographic distributed representation, which represents structured objects using circular vector convolution and has an easy and more compact implementation using the fast Fourier transform (FFT). Plate suggested applying such ideas to text, where "structure" can be quantified using parse trees and other graph structures. Our method is also closely related in form and composition to the sparse distributed memory system of BID18. Representations of BonG vectors have been studied through the lens of compression by BID28, who computed representations based on classical lossless compression algorithms using a linear program (LP). Their embeddings are still high-dimensional (d > 100K) and quite complicated to implement.
In contrast, linear projection schemes are simpler, more compact, and can leverage readily available word embeddings. BID25 also used a linear scheme, representing documents as an average of learned word and bigram embeddings. However, the motivation and benefits of encoding BonGs in low-dimensions are not made explicit. The novelty in the current paper is the connection to compressed sensing, which is concerned with recovering high-dimensional sparse signals x ∈ R N from low-dimensional linear measurements Ax, specifically by studying conditions on matrix A ∈ R d×N when this is possible (see Appendix A for some on compressed sensing and the previous work of BID5 that we build upon). In this section we define the two types of representations that our analysis will relate:1. high-dimensional BonG vectors counting the occurrences of each k-gram for k ≤ n 2. low-dimensional embeddings, from simple vector sums to novel n-gram-based embeddings Although some of these representations have been previously studied and used, we define them so as to make clear their connection via compressed sensing, i.e. that representations of the second type are simply linear measurements of the first. We now define some notation. Let V be the number of words in the vocabulary and V n be the number of n-grams (independent of word order), so that V = V 1. Furthermore set V sum n = k≤n V k and V max n = max k≤n V k. We will use words/n-grams and indices interchangeably, e.g. if (a, b) is the ith of V 2 bigrams then the one-hot vector e (a,b) will be 1 at index i. Where necessary we will use {,} to denote a multi-set and (,) to denote a tuple. For any m vectors v i ∈ R d for i = 1,..., m we define [v 1, . . ., v m] to be their concatenation, which is thus an element of R md. Finally, for any subset X ⊂ R N we denote by ∆X the set {x − x : x, x ∈ X}. Assigning to each word a unique index i ∈ [V] we define the Bag-of-Words (BoW) representation x BoW of a document to be the V -dimensional vector whose ith entry is the number of times word i occurs in the document. The n-gram extension of BoW is the Bag-of-n-Grams (BonG) representation, which counts the number of times any k-gram for k ≤ n appears in a document. Linear classification over such vectors has been found to be a strong baseline .For ease of analysis we simplify the BonG approach by merging all n-grams in the vocabulary that contain the same words but in a different order. We call these features n-cooccurrences and find that the modification does not affect performance significantly (see Table 3 in Appendix F.1). Formally for a document w 1,..., w T we define the Bag-of-n-Cooccurrences (BonC) vector as the concatenation DISPLAYFORM0 e wt,..., DISPLAYFORM1 which is thus a V sum n -dimensional vector. Note that for unigrams this is equivalent to the BoW vector. Now suppose each word w has a vector v w ∈ R d for some d V. Then given a document w 1,..., w T we define its unigram embedding as z u = T t=1 v wt. While this is a simple and widely used featurization, we focus on the following straightforward relation with BoW: if A ∈ R d×V is a matrix whose columns are word vectors v w then Ax BoW = T t=1 Ae wt = T t=1 v wt = z u. Thus in terms of compressed sensing the unigram embedding of a document is a d-dimensional linear measurement of its Bag-of-Words vector. We could extend this unigram embedding to n-grams by first defining a representation for each ngram as the tensor product of the vectors of its constituent words. 
Thus for each bigram b = (w 1, w 2) we would have v b = v w1 v T w2 and more generally v g = n t=1 v wt for each n-gram g = (w 1, . . ., w n). The document embedding would then be the sum of the tensor representations of all n-grams. The major drawback of this approach is of course the blowup in dimension, which in practice prevents its use beyond n = 2. To combat this a low-dimensional sketch or projection of the tensor product can be used, such as the circular convolution operator of BID30. Since we are interested in representations that can also be constructed by an LSTM, we instead sketch this tensor product using the element-wise multiplication operation, which we find also usually works better than circular convolution in practice (see Table 4 in Appendix F.1). Thus for the n-cooccurrence g = {w 1, . . ., w n}, we define the distributed cooccurrence (DisC) embeddingṽ g = d n−1 2 n t=1 v wt. The coefficient is required when the vectors v w are random and unit norm to ensure that the product also has close to unit norm (see Lemma B.1). In addition to their convenient form, DisC embeddings have nice theoretical and practical properties: they preserve the original embedding dimension, they reduce to unigram (word) embeddings for n = 1, and under mild assumptions they satisfy useful compressed sensing properties with overwhelming probability (Lemma 4.1).We then define the DisC document embedding to be the nd-dimensional weighted concatenation, over k ≤ n, of the sum of the DisC vectors of all k-grams in a document: DISPLAYFORM0 Here scaling factors C k are set so that all spans of d coordinates have roughly equal norm (for random embeddings C k = 1; for word embeddings C k = 1/k works well). Note that sinceṽ wt = v wt we have z = z u in the unigram case. Furthermore, as with unigram embeddings by comparing and one can easily construct a DISPLAYFORM1 As discussed previously, LSTMs have become a common way to apply the expressive power of RNNs, with success on a variety of classification, representation, and sequence-to-sequence tasks. For document representation, starting with h 0 = 0 m an m-memory LSTM initialized with word vectors v w ∈ R d takes in words w 1,..., w T one-by-one and computes the document representation DISPLAYFORM0 where h t ∈ R m is the hidden representation at time t, the forget gate f, input gate i, and input function g are a.e. differentiable nondecreasing elementwise "activation" functions R m → R m, and affine transformations T * (x, y) = W * x + U * y + b * have weight matrices W * ∈ R m×d, U * ∈ R m×m and bias vectors b * ∈ R m. The LSTM representation of a document is then the state at the last time step, i.e. z LSTM = h T. Note that we will follow the convention of using LSTM memory to refer to the dimensionality of the hidden states. Since the LSTM is initialized with an embedding for each word it requires O(m 2 + md + V d) computer memory, but the last term is just a lookup table so the vocabulary size does not factor into iteration or representation complexity. From our description of LSTMs it is intuitive to see that one can initialize the gates and input functions so as to construct the DisC embeddings defined in the previous section. We state this formally and give the proof in the unigram case (the full proof appears in Appendix B.3): Proposition 3.1. 
Given word vectors v w ∈ R d, one can initialize an O(nd)-memory LSTM that takes in words w 1,..., w T (padded by an end-of-document token assigned vector 0 d) and constructs the DisC embedding (up to zero padding), i.e. such that for all documents z LSTM = z (n). DISPLAYFORM1 By Proposition 3.1 we can construct a fixed LSTM that can compute compressed BonC representations on the fly and be further trained by stochastic gradient descent using the same memory. Our main contribution is to provide the first rigorous analysis of the performance of the text embeddings that we are aware of, showing that the embeddings of Section 3.2 can provide performance on downstream classification tasks at least as well any linear classifier over BonCs. Before stating the theorem we make two mild simplifying assumptions on the BonC vectors:1. The vectors are scaled by DISPLAYFORM0, where T is the maximum document length. This assumption is made without loss of generality.2. No n-cooccurrence contains a word more than once. While this is (infrequently) violated in practice, the problem can be circumvented by merging words as a preprocessing step. DISPLAYFORM1 (1 − γ)(1 − 2δ) the classifierŵ minimizing the 2 -regularized logistic loss over its representations satisfies DISPLAYFORM2 The above theoretical bound shows that LSTMs match BonC performance as ε → 0, which can be realized by increasing the embedding dimension d (c.f. FIG7). Compressed sensing is concerned with recovering a high-dimensional k-sparse signal x ∈ R N from a few linear measurements; given a design matrix A ∈ R d×N this is formulated as DISPLAYFORM0 where z = Ax is the measurement vector. As l 0 -minimization is NP-hard, research has focused on sufficient conditions for tractable recovery. One such condition is the Restricted Isometry Property (RIP), for which BID6 proved that can be solved by convex relaxation: DISPLAYFORM1 We will abuse notation and say (k, ε)-RIP when X is the set of k-sparse vectors. This is the more common definition, but ours allows a more general Theorem 4.2 and a tighter bound in Theorem 4.1. DISPLAYFORM2 If is a λ-Lipschitz convex loss function and w 0 ∈ R N is its minimizer over D then w.p. 1 − 2δ the linear classifierŵ A ∈ R d minimizing the 2 -regularized empirical loss function DISPLAYFORM3 for appropriate choice of C. Recall that ∆X = {x − x : x, x ∈ X} for any X ⊂ R N.While a detailed proof of this theorem is spelled out in Appendix C, the main idea is to compare the distributional loss incurred by a classifierŵ in the original space to the loss incurred by Aŵ in the compressed space. We show that the minimizer of the regularized empirical loss in the original space (ŵ) is a bounded-coefficient linear combination of samples in S, so its loss depends only on inner products between points in X. Thus using RIP and a generalization error by BID34 we can bound the loss ofŵ A, the regularized classifier in the compressed domain. Note that to get back from Theorem 4.2 the O(√ ε) bound for k-sparse inputs of BID5 we can set X to the be the set of k-sparse vectors and assume A is (2k, ε)-RIP. To apply Theorem 4.2 we need the design matrix A (n) transforming BonCs into the DisC embeddings of Section 3.2 to satisfy the following RIP condition (Lemma 4.1), which we prove using a restricted isometry for structured random sampling matrices in Appendix D: Lemma 4.1. 
Assume the setting of Theorem 4.1 and let A (n) be the nd × V sum n matrix relating DisC and BonC representations of any document by DISPLAYFORM0 is the set of BonCs of documents of length at most T.Proof of Theorem 4. DISPLAYFORM1 is the set of BonC vectors of documents of length at most T. By BonC assumption all BonCs lie within the unit ball, so we can apply Theorem 4.2 with the logistic loss, λ = 1, and R = 1 to get that a classifierŵ trained using 2 -regularized logistic loss overŜ will satisfy the required bound. Since by Proposition 3.1 one can DISPLAYFORM2 T, this completes the proof. Theorem 4.1 is proved using random vectors as the word embeddings in the scheme of Section 3. However, in practice LSTMs are often initialized with standard word vectors such as GloVe. Such embeddings cannot satisfy traditional compressed sensing properties such as RIP or incoherence. This follows essentially from the definition: word embeddings seek to capture word similarity, so similar words (e.g. synonyms) have embeddings with high inner product, which violates both properties. Thus the efficacy of real-life LSTMs must have some other explanation. But in this section we present the surprising empirical finding that pretrained word embeddings are more efficient than random vectors at encoding and recovering BoW information via compressed sensing. We further sketch a potential explanation for this , though a rigorous explanation is left for subsequent work. In recent years word embeddings have been discovered to have many remarkable properties, most famously the ability to solve analogies BID24. Our connection to compressed sensing indicates that they should have another: preservation of sparse signals as low-dimensional linear measurements. To examine this we subsample documents from the SST BID33 and IMDB BID21 classification datasets, embed them as d-dimensional unigram embeddings z = Ax for d = 50, 100, 200,..., 1600 (where A ∈ R d×V is the matrix of word embeddings and x is a document's BoW vector), solve the following LP, known as Basis Pursuit (BP), which is the standard 1 -minimization problem for sparse recovery in the noiseless case (see Appendix A): DISPLAYFORM0 Success is measured as the F 1 score of retrieved words. We use Squared Norm (SN) vectors BID0 trained on a corpus of Amazon reviews BID23 and normalized i.i.d. Rademacher vectors as a baseline. SN is used due to similarity to GloVe and its formulation via an easy-to-analyze generative model that may provide a framework to understand the (see Appendix F.2), while the Amazon corpus is used for its semantic closeness to the sentiment datasets. compared to dimension. Pretrained word embeddings (SN trained on Amazon reviews) need half the dimensionality of normalized Rademacher vectors to achieve near-perfect recovery. Note that IMDB documents are on average more than ten times longer than SST documents. Figures 1 and 2 show that pretrained embeddings require a lower dimension d than random vectors to recover natural language BoW. This is surprising as the training objective goes against standard conditions such as approximate isometry and incoherence; indeed as shown in FIG3 recovery is poor for randomly generated word collections. The latter outcome indicates that the fact that a document is a set of mutually meaningful words is important for sparse recovery using embeddings trained on co-occurrences. We achieve similar with other objectives (e.g. 
GloVe/word2vec) and other corpora (see Appendix F.1), although there is some sensitivity to the sparse recovery method, as other 1 -minimization methods work well but greedy methods, such as Orthogonal Matching Pursuit (OMP), work poorly, likely due to their dependence on incoherence BID36.For the n-gram case (i.e. BonC recovery for n > 1), although we know by Lemma 4.1 that DisC embeddings composed from random vectors satisfy RIP, for pretrained vectors it is unclear how to reason about suitable n-gram embeddings without a rigorous understanding of the unigram case, and experiments do not show the same recovery benefits. One could perhaps do well by training on cooccurrences of word tuples, but such embeddings could not be used by a word-level LSTM. As shown in FIG3, the success of pretrained embeddings for linear sensing is a local phenomenon; recovery is only efficient for naturally occurring collections of words. However, applying statistical RIP/incoherence ideas BID2 to explain this is ruled out since they require collections to be incoherent with high probability, whereas word embeddings are trained to give high inner product to words appearing together. Thus an explanation must come from some other, weaker condition. The usual necessary and sufficient requirement for recovering all signals with support S ⊂ [N] is the local nullspace property (NSP), which stipulates that vectors in the kernel of A not have too much mass on S (see Definition A.2). While NSP and related properties such as restricted eigenvalue (see Definition A.3) are hard to check, we can impose some additional structure to formulate an intuitive, verifiable perfect recovery condition for our setting. Specifically, since our signals (BoW vectors) are nonnegative, we can improve upon solving BP by instead solving nonnegative BP (BP+): minimize w 1 subject to Aw = z, w ≥ 0 d The following geometric then characterizes when solutions of BP+ recover the correct signal: Theorem 5.1 BID9. Consider a matrix A ∈ R d×N and an index subset S ⊂ [N] of size k. Then any nonnegative vector x ∈ R N + with support supp(x) = S is recovered from Ax by BP+ iff the set A S of columns of A indexed by S comprise the vertices of a k-dimensional face of the convex hull conv(A) of the columns of A together with the origin. This theorem equates perfect recovery of a BoW vector via BP+ with the vectors of its words being the vertices of some face of the polytope conv(A). The property holds for incoherent columns since the vectors are far enough that no one vector is inside the simplex formed by any k others. On the other hand, pretrained embeddings satisfy it by having commonly co-occurring words close together and other words far away, making it easier to form a face from columns indexed by the support of a BoW. We formalize this intuition as the Supporting Hyperplane Property (SHP): SHP is a very weak property implied by NSP (Corollary E.1). However, it can be checked by using convex optimization to see if the hyperplane exists (Appendix E.2). Furthermore, we show (full proof in Appendix E.1) that this hyperplane is the supporting hyperplane of the face of conv(A) with vertices A S, from which it follows by Theorem 5.1 that SHP characterizes recovery using BP+: Corollary 5.1. BP+ recovers any x ∈ R N + with supp(x) = S from Ax iff A satisfies S-SHP.Proof Sketch. By Theorem 5.1 it suffices to show equivalence of S-SHP with the column set A S comprising the vertices of a k-dimensional face of conv(A). 
A face F of polytope P is defined as its intersection with some hyperplane such that all points in P \F lie on one side of the hyperplane. (=⇒) Let F be the face of conv(A) formed by the columns A S. Then there must be a supporting hyperplane H containing F. Since the columns of A are in general position, all columns A S = A\A S lie in conv(A)\F and hence must all be on one side of H, so H is the desired hyperplane. (⇐=) Let H be the hyperplane supporting A S, with all other columns on one side of H. By convexity, H contains the simplex F of A S. Any point in conv(A)\F can be written as a convex combination of points in F and columns A S, with a positive coefficient on at least one of the columns, and so must lie on the same side of H as A S. Thus A S comprises the vertices of a face F.Thus perfect recovery of a BoW via BP+ is equivalent to the existence of a hyperplane separating embeddings of words in the document from those of the rest of the vocabulary. Intuitively, words in the same document are trained to have similar embeddings and so will be easier to separate out, providing some justification for why pretrained vectors are better for sensing. We verify that SHP is indeed more likely to be satisfied by such designs in FIG4, which also serves as an empirical check of Corollary 5.1 since SHP satisfaction implies BP recovery as the latter can do no better than BP+. We further compare to recovery using OMP/OMP+ (the latter removes negative values and recomputes the set of atoms at each iteration); interestingly, while OMP+ recovers the correct signal from SN almost as often as BP/BP+, it performs quite poorly for GloVe, indicating that these embeddings may have quite different sensing properties despite similar training objectives. As similarity properties that may explain these also relate to downstream task performance, we conjecture a relationship between embeddings, recovery, and classification that may be understood under a generative model (see Appendix F.2). However, the Section 4 bounds depend on RIP, not recovery, so these experiments by themselves do not apply. They do show that the compressed sensing framework remains relevant even in the case of non-random, pretrained word embeddings. BID1 Reported performance of best hyperparameter using Amazon GloVe embeddings. BID31 shown for comparison. The top three for each dataset are bolded, the best is italicized, and the best word-level performance is underlined. Our theoretical show that simple tensor product sketch-based n-gram embeddings can approach BonG performance and be computed by a low-memory LSTM. In this section we compare these text representations and others on several standard tasks, verifying that DisC performance approaches that of BonCs as dimensionality increases and establishing several baselines for text classification. Code to reproduce is provided at https://github.com/NLPrinceton/text_embedding. We test classification on MR movie reviews BID27, CR customer reviews BID16, SUBJ subjectivity dataset BID26, MPQA opinion polarity subtask , TREC question classification BID20, SST sentiment classification (binary and fine-grained) BID33, and IMDB movie reviews BID21. The first four are evaluated using 10-fold cross-validation, while the others have train-test splits. In all cases we use logistic regression with 2 -regularization determined by cross-validation. We further test DisC on the SICK relatedness and entailment tasks BID22 and the MRPC paraphrase detection task BID8. 
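Before describing the pairwise setup, here is a minimal sketch of how the DisC features evaluated in this section can be computed from pretrained word vectors. It reflects our simplified reading of Section 3.2 (the repository above is the reference implementation); the function and parameter names are ours.

import numpy as np

def disc_embedding(doc, vectors, n=2, C=lambda k: 1.0 / k):
    # doc:     list of tokens
    # vectors: dict mapping token -> d-dimensional numpy vector
    # n:       maximum cooccurrence order
    # C:       per-order scaling (C(k) = 1 for random vectors, 1/k works well for word embeddings)
    d = len(next(iter(vectors.values())))
    spans = []
    for k in range(1, n + 1):
        total = np.zeros(d)
        for t in range(len(doc) - k + 1):                 # every window of k consecutive words
            prod = np.ones(d)
            for w in doc[t:t + k]:
                prod *= vectors[w]                        # element-wise product sketches the tensor product
            total += d ** ((k - 1) / 2) * prod            # rescale so random products stay near unit norm
        spans.append(C(k) * total)
    return np.concatenate(spans)                          # an (n*d)-dimensional document embedding

rng = np.random.default_rng(1)
vecs = {w: rng.standard_normal(50) / np.sqrt(50) for w in ["a", "great", "movie"]}
z = disc_embedding(["a", "great", "movie"], vecs, n=2)    # 100-dimensional DisC feature for this toy document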
The inputs here are sentences pairs (a, b) and the standard featurization for document embeddings x a and x b of a and b is et al., 2015). We use logistic regression for SICK entailment and MRPC and use ridge regression to predict similarity scores for SICK relatedness, with 2 -regularization determined by cross-validation. Since BonGs are not used for pairwise tasks our theory says nothing about performance here; we include these evaluations to show that our representations are also useful for other tasks. DISPLAYFORM0 Embeddings: In the main evaluation TAB1 we use normalized 1600-dimensional GloVe embeddings BID29 ) trained on the Amazon Product Corpus BID23, which are released at http://nlp.cs.princeton.edu/DisC. We also compare the SN vectors of Section 5 trained on the same corpus with random vectors when varying the dimension FIG7 ). Results: We find that DisC representation performs consistently well relative to recent unsupervised methods; among word-level approaches it is the top performer on the SST tasks and competes on many others with skip-thoughts and CNN-LSTM, both concatenations of two LSTM representations. While success may be explained by training on a large and in-domain corpus, being able to use so much text without extravagant computing resources is one of the advantages of a simple approach. Overall our method is useful as a strong baseline, often beating BonCs and many more complicated approaches while taking much less time to represent and train on documents than neural representations FIG6 ).Finally, we analyze empirically how well our model approximates BonC performance. As predicted by Theorem 4.1, the performance of random embeddings on IMDB approaches that of BonC as dimension increases and the isometry distortion ε decreases (FIG7). Using pretrained (SN) vectors, DisC embeddings approach BonC performance much earlier, surpassing it in the unigram case. In this paper we explored the connection between compressed sensing, learning, and natural language representation. We first related LSTM and BonG methods via word embeddings, coming up with simple new document embeddings based on tensor product sketches. Then we studied their classification performance, proving a generalization of the compressed learning of BID5 to convex Lipschitz losses and a bound on the loss of a low-dimensional LSTM classifier in terms of its (modified) BonG counterpart, an issue which neither experiments nor theory have been able to resolve. Finally, we showed how pretrained embeddings fit into this sparse recovery framework, demonstrating and explaining their ability to efficiently preserve natural language information. A COMPRESSED SENSING The field of compressed sensing is concerned with recovering a high-dimensional k-sparse signal x ∈ R N from few linear measurements. In the noiseless case this is formulated as minimize w 0 subject to Aw = zwhere A ∈ R d×N is the design matrix and z = Ax is the measurement vector. Since 0 -minimization is NP-hard, a foundational approach is to use its convex surrogate, the 1 -norm, and characterize when the solution to is equivalent to that of the following LP, known as basis pursuit (BP): DISPLAYFORM0 Related approaches such as Basis Pursuit Denoising (LASSO) and the Dantzig Selector generalize BP to handle signal or measurement noise BID11; however, the word embeddings case is noiseless so these methods reduce to BP. 
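As a small worked example of BP in this noiseless setting (our own sketch using scipy's LP solver, not the recovery code used in the experiments; the sizes are arbitrary), the following recovers a toy sparse BoW from random linear measurements via the standard w = w_plus - w_minus splitting:

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
V, d, k = 200, 60, 8                          # vocabulary size, measurement dimension, document length
A = rng.standard_normal((d, V)) / np.sqrt(d)  # random "word vectors" as the sensing matrix

x = np.zeros(V)                               # a k-sparse BoW signal
x[rng.choice(V, size=k, replace=False)] = 1.0
z = A @ x                                     # unigram embedding = linear measurement

# Basis Pursuit: minimize ||w||_1 subject to Aw = z, with w = w_plus - w_minus, w_plus, w_minus >= 0
c = np.ones(2 * V)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=z, bounds=[(0, None)] * (2 * V), method="highs")
w = res.x[:V] - res.x[V:]
print("recovered support:", np.flatnonzero(w > 0.5))   # matches the support of x when d is large enough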
Note that throughout Section 5 and the Appendix we say that an 1 -minimization method recovers x from Ax if its optimal solution is unique and equivalent to the optimal solution of.An alternative way to approximately solve FORMULA1 is to use a greedy algorithm such as matching pursuit (MP) or orthogonal matching pursuit (OMP), which pick basis vectors one at a time by multiplying the measurement vector by A T and choosing the column with the largest inner product BID36. One condition through which recovery can be guaranteed is the Restricted Isometry Property (RIP): DISPLAYFORM0 A line of work started by BID6 used the RIP property to characterize matrices A such that FORMULA1 and FORMULA1 have the same minimizer for any k-sparse signal x; this occurs with overwhelming probability when d = Ω k log N k and DISPLAYFORM1 Since the ability to recover a signal x from a representation Ax implies information preservation, a natural next step is to consider learning after compression. BID5 DISPLAYFORM2 and a (2k, ε)-RIP matrix A, the hinge loss of a classifier trained on {( DISPLAYFORM3 is bounded by that of the best linear classifier over the original samples. Theorem 4.2 provides a generalization of this to any convex Lipschitz loss function. RIP is a strong requirement, both because it is not necessary for perfect, stable recovery of k-sparse vectors usingÕ(k) measurements and because in certain settings we are interested in using the above ideas to recover specific signals -those statistically likely to occur-rather than all k-sparse signals. The usual necessary and sufficient condition to recover any vector x ∈ R N with index support set S ⊂ [N] is the local nullspace property (NSP), which is implied by RIP: Definition A.2 BID11. A matrix A ∈ R d×N satisfies NSP for a set S ⊂ [N] if w S 1 < w S 1 for all nonzero w ∈ ker(A) = {v : Av = 0 d}.Theorem A.1 BID11. BP recovers any x ∈ R N + with supp(x) = S from Ax iff A satisfies NSP for S.A related condition that implies NSP is the local restricted eigenvalue property (REP): Definition A.3 BID32 DISPLAYFORM4 Lastly, a simple condition that can sometimes provide recovery guarantees is mutual incoherence: Definition A.4. A ∈ R d×N is µ-incoherent if max a,a |a T a | ≤ µ, where the maximum is taken over any two distinct columns a, a of A.While incoherence is easy to verify (unlike the previous recovery properties), word embeddings tend to have high coherence due to the training objective pushing together vectors of co-occurring words. Apart from incoherence, the properties above are hard show empirically. However, we are compressing BoW vectors, so our signals are nonnegative and we can impose an additional constraint on FORMULA1 The polytope condition is equivalent to nonnegative NSP (NSP+), a weaker form of NSP: Definition A.5 BID10. DISPLAYFORM0 w i > 0 for all nonzero w ∈ ker(A).Lemma A.1. If A ∈ R d×N satisfies NSP for some S ⊂ [N] then it also satisfies NSP+ for S.Proof (Adapted from BID10). Since A satisfies NSP, we have w S 1 < w S 1. Then for a nonzero w ∈ ker(A) such that w S ≥ 0 we will have DISPLAYFORM1 Lemma A.2. BP+ recovers any x ∈ R N + with supp(x) = S from Ax iff A satisfies NSP+ for S. For any nonzero w ∈ ker(A) such that w S ≥ 0, ∃ λ > 0 such that x + λw ≥ 0 N and A(x + λw) = Ax. 
Since BP+ uniquely recovers x, we have x + λw 1 > x 1, so NSP+ follows from the following inequality and the fact that λ is positive: DISPLAYFORM0 For any x ≥ 0 such that Ax = Ax we have that w = x − x ∈ ker(A) and w S = x S ≥ 0 since the support of x is S. Thus by NSP+ we have that DISPLAYFORM1 w i > 0, which yields DISPLAYFORM2 Thus BP+ will recover x uniquely. Lemma A.2 shows that NSP+ is equivalent to the polytope condition in Theorem A.2, as they are both necessary and sufficient conditions for BP+ recovery. Table 3: The performance of an l 2 -regularized logit classifier over Bag-of-n-Grams (BonG) vectors is generally similar to that of Bag-of-n-Cooccurrences (BonC) vectors for n = 2, 3 (largest differences bolded). Evaluation settings are the same as in Section 6. Note that for unigrams the two representations are equivalent. Table 4: Performance comparison of element-wise product (DisC) and circular convolution for encoding local cooccurrences (best for each task is bolded). Evaluation settings are the same as in Section 6. Note that for unigrams the two representations are equivalent. In this section we compare the performance of several alternative representations with the ones presented in the main evaluation TAB1. Table 3 provides a numerical justification for our use of unordered n-grams (cooccurrences) instead of n-grams, as the performance of the two featurizations are closely comparable. In Table 4 we examine the use of circular convolution instead of elementwise multiplication as linear measurements of BonC vectors BID30. To construct the former from a document w 1,..., w T we compute DISPLAYFORM0 where F is the discrete Fourier transform and F −1 its inverse. Note that for n = 1 this is equivalent to the simple unigram embedding (and thus also to the DisC embedding in). d} then for any n-gram g = (w 1, . . ., w n) we have E ṽ g 2 2 = 1. The same holds true with the additional assumption that all words in g are distinct if the word vectors are i.i.d. d-dimensional spherical Gaussians. Proof. DISPLAYFORM0 Substituting these parameters into the LSTM update and using h 0 = 0 we have ∀ t > 0 that DISPLAYFORM1... DISPLAYFORM2, by padding the end of the document with an end-of-document token whose word vector is 0 d the entries in those dimensions will be set to zero by the update at the last step. Thus up to zero padding we will have z LSTM = h T =z (n).C PROOF OF THEOREM 4.2Throughout this section we assume the setting described in Theorem 4.2. Furthermore for some positive constant C define the 2 -regularization of the loss function as DISPLAYFORM3, where (·, ·) is a convex λ-Lipschitz function in the first cordinate. Then DISPLAYFORM4 where |α i | ≤ λC m ∀ i. This holds in the compressed domain as well. Proof. If is an λ-Lipschitz function, its sub-gradient at every point is bounded by λ. So by convexity, the unique optimizer is given by taking first-order conditions: DISPLAYFORM5 Since is Lipschitz, |∂ w T xi (w T x i, y i)| ≤ λ. Therefore the first-order optimal solution ofŵ can be expressed as FORMULA1 for some α 1,..., α m satisfying |α i | ≤ λC m ∀ i, which is the desired . DISPLAYFORM6 Also since 0 N ∈ X, A is also (X, ε)-RIP and the then follows by the same argument as in (, Lemma 4.2-3). DISPLAYFORM7 Proof. The first bound follows by expanding ŵ 2 2 and using x 2 ≤ R; the second follows by expanding ŵ A 2 2, applying Lemma C.2 to bound inner product distortion, and using x 2 ≤ R. Lemma C.3. Letŵ be the linear classifier minimizing L S. Then DISPLAYFORM8 Proof. 
By Lemma C.1 we can re-expressŵ using Equation 14 and then apply the inequality from Lemma C.2 to get DISPLAYFORM9 for any x ∈ R N. Since is λ-Lipschitz taking expectations over D implies DISPLAYFORM10 Substituting Equation 14 applying Lemma C.2 also yields DISPLAYFORM11 Together the inequalities bounding the loss term and the regularization term FORMULA1 imply the . Lemma C.4. Letŵ be the linear classifier minimizing L S and let w * be the linear classifier minimizing L D. Then with probability 1 − γ DISPLAYFORM12 This holds in the compressed domain as well. Proof. By Corollary C.1 we have thatŵ is contained in a closed convex subset independent of S. Therefore since is λ-Lipschitz, L is 1 C -strongly convex, and x 2 ≤ O(R), we have by BID34, Theorem 1) that with probability 1 − γ DISPLAYFORM13, which substituted into the previous equation completes the proof. Proof of Theorem 4.2. Applying Lemma C.4 in the compressed domain yields DISPLAYFORM14, so together with Lemma C.3 and the previous inequality we have DISPLAYFORM15 We now apply Lemma C.4 in the sparse domain to get DISPLAYFORM16 2, so by the previous inequality we have DISPLAYFORM17 Substituting the C that minimizes the r.h.s. of this inequality completes the proof. D PROOF OF LEMMA 4.1We assume the setting described in Lemma 4.1, where we are concerned with the RIP condition of the matrix A (n) when multiplying vectors x ∈ X (n)T, the set of BonC vectors for documents of length at most T. This matrix can be written as DISPLAYFORM18 where A p is the d × V p matrix whose columns are the DisC embeddings of all p-grams in the vocabulary (and thus A = A 1 = A, the matrix of the original word embeddings). Note that from any x ∈ X (n) T can be written as x = [x 1, . . ., x n], where x p is a T -sparse vector whose entries correspond to p-grams. Thus we also have DISPLAYFORM19 Proof. By union bound we have that A p is (2k, ε)-RIP ∀ p ∈ [n] with probability at least 1 − nγ. Thus by Definition 4.1 we have w. DISPLAYFORM20 Similarly, DISPLAYFORM21. From Definition 4.1, taking the square root of both sides of both inequalities completes the proof. Definition D.1 BID11. Let D be a distribution over a subset S ⊂ R n. Then the set Φ = {φ 1, . . ., φ N} of functions φ i: S → R is a bounded orthonormal system (BOS) with constant B if we have E D (φ i φ j) = 1 i=j ∀ i, j and sup s∈S |φ i (s)| ≤ B ∀ i. Note that by definition B ≥ 1. DISPLAYFORM22 Proof. Note that by Theorem D.1 it suffices to show that √ dA p is a random sampling matrix associated with a BOS with constant B = 1. Let D = U V {±1} be the uniform distribution over V i.i.d. Rademacher random variables indexed by words in the vocabulary. Then by definition the matrix A p ∈ R d×Vp can be constructed by drawing random variables DISPLAYFORM23 and assigning to the ijth entry of DISPLAYFORM24 gt, where each function φ j: {±1} V → R is uniquely associated to its p-gram. It remains to be shown that this set of functions is a BOS with constant B = 1.For any two p-grams g, g and their functions φ i, φ j we have E D (φ i φ j) = E x∼D p t=1 x gt x g t, which will be 1 iff each word in g ∪ g occurs an even number of times in the product and 0 otherwise. Because all p-grams are uniquely defined under any permutation of its words (i.e. we are in fact using p-cooccurrences) and we have assumed that no p-gram contains a word more than once, each word occurs an even number of times in the product iff g = g ⇐⇒ i = j. Furthermore we have that DISPLAYFORM25 V ∀ i by construction. 
Thus according to Definition D.1 the set of functions {φ 1, . . ., φ Vp} associated to the p-grams in the vocabulary is a BOS with constant B = 1.Proof of Lemma 4.1. DISPLAYFORM26. Applying Lemma D.1 yields the . In Section 5.2, Definition 5.1 we introduced the Supporting Hyperplane Property (SHP), which by Corollary 5.1 characterizes when BP+ perfectly recovers a nonnegative signal. Together with Lemmas A.1 and A.2 this fact also shows that SHP is a weaker condition than the well-known nullspace property (NSP): Corollary E.1. If a matrix A ∈ R d×N with columns in general position satisfies NSP for some S ⊂ [N] then it also satisfies S-SHP.In this section we give the entire proof of Corollary 5.1 and describe how to verify SHP given a design matrix and a set of support indices. E.1 PROOF OF COROLLARY 5.1Recall that it suffices to show equivalence of A being S-SHP with the columns A S forming the vertices of a k-dimensional face of conv(A), where we can abuse notation to set A ∈ R d×(N +1), with the extra column being the origin 0 d, so long as we constrain N + 1 ∈ S.(=⇒): The proof of the forward direction appeared in full in the proof sketch (see Section 5.2). DISPLAYFORM0 We also know that F = {i∈S λ i A i : λ ∈ ∆ |S|} ⊆ H by convexity of H. Since any point y ∈ conv(A)\F can be written as y = DISPLAYFORM1 ∈ S such that λ j = 0, we have that DISPLAYFORM2 This implies that conv(A)\F ⊆ H − and F = conv(A) ∩ H, so since the columns of A are in general position F is a k-dimensional face of conv(A) whose vertices are the columns A S. Recall that a matrix R d×N satisfies S-SHP for S ⊂ [N] if there is a hyperplane containing the set of all columns of A indexed by S and the set of all other columns together with the origin are on one side of it. Due to Corollary 5.1, checking S-SHP allows us to know whether all nonnegative signals with index support S will be recovered by BP+ without actually running the optimization on any one of them. The property can be checked by solving a convex problem of the form The constraint enforces the property that the hyperplane contains all support embeddings, while the optimal objective value is zero iff SHP is satisfied (this follows from the fact that scaling h does not affect the constraint so if the minimal objective is zero for any single ε > 0 it is zero for all ε > 0). The problem can be solved via using standard first or second-order equality-constrained convex optimization algorithms. We set ε = 1 and p = 3 (to get a C 2 objective) and adapt the second-order method from Boyd & Vandenberghe (2004, Chapter 10). Our implementation can be found at https://github.com/NLPrinceton/sparse_recovery. Figure 7: Efficiency of pretrained embeddings as sensing vectors at d = 300 dimensions, measured via the F 1 -score of the original BoW. 200 documents from each dataset were compressed and recovered in this experiment. For fairness, the number of words V is the same for all embeddings so all documents are required to be subsets of the vocabulary of all corpora. word2vec embeddings trained on Google News and GloVe vectors trained on Common Crawl were obtained from public repositories BID24 BID29 while Amazon and Wikipedia embeddings were trained for 100 iterations using a symmetric window of size 10, a min count of 100, for SN/GloVe a cooccurrence cutoff of 1000, and for word2vec a down-sampling frequency cutoff of 10 −5 and a negative example setting of 3. 300-dimensional normalized random vectors are used as a baseline. 
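As a concrete illustration of the check described in Appendix E.2, here is an LP-feasibility sketch of our own (not the released second-order implementation linked above). It encodes strict separation with a unit margin, which is equivalent up to rescaling of the hyperplane; all function and variable names are ours.

import numpy as np
from scipy.optimize import linprog

def satisfies_shp(A, S):
    # A: d x N matrix whose columns are word vectors; S: index set of the support.
    # Looks for a hyperplane {y : h^T y = t} containing every column in A_S while all
    # remaining columns and the origin lie at least unit margin below it.
    d, N = A.shape
    S = list(S)
    Sc = [j for j in range(N) if j not in S]
    A_eq = np.hstack([A[:, S].T, -np.ones((len(S), 1))])        # h^T a_i - t = 0   for i in S
    b_eq = np.zeros(len(S))
    A_ub = np.hstack([A[:, Sc].T, -np.ones((len(Sc), 1))])      # h^T a_j - t <= -1 for j not in S
    A_ub = np.vstack([A_ub, np.append(np.zeros(d), -1.0)])      # origin: -t <= -1
    b_ub = -np.ones(len(Sc) + 1)
    res = linprog(np.zeros(d + 1), A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] * (d + 1), method="highs")
    return res.status == 0                                      # feasible  =>  S-SHP holds

# e.g. satisfies_shp(word_vectors, indices_of_words_in_a_document)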
We show in Figure 7 that the surprising effectiveness of word embeddings as linear measurement vectors for BoW signals holds for other embedding objectives and corpora as well. Specifically, we see that widely used embeddings, when normalized, match the efficiency of random vectors for retrieving SST BoW and are more efficient when retrieving IMDB BoW. Interestingly, SN vectors are most efficient and are also the only embeddings for normalizing is not needed for good performance. In Section 5.2 we gave some intuition for why pretrained word embeddings are efficient sensing vectors for natural language BoW by examining a geometric characterization of local equivalence due to BID9 in light of the usual similarity properties of word embeddings. However, this analysis does not provide a rigorous theory for our empirical . In this section we briefly discuss a model-based justification that may lead to a stronger understanding. We need a model relating BoW generation to the word embeddings trained over words co-occurring in the same BoW. As a starting point consider the model of BID0, in which a corpus is generated by a random walk c t over the surface of a ball in R d; at each t a word w is emitted w.p. DISPLAYFORM0 Minimizing the SN objective approximately maximizes corpus likelihood under this model. Thus in an approximate sense a document of length T is generated by setting a context vector c and emitting T words via with c t = c. This model is a convenient one for analysis due its simplicity and invariance to word order as well as the fact that the approximate maximum likelihood document vector is the sum of the embeddings of words in the document. Building upon the intuition established following Corollary 5.1 one can argue that, if we have the true latent SN vectors, then embeddings of words in the same document (i.e. emitted by the same context vector) will be close to each other and thus easy to separate from the embeddings of other words in the vocabulary. However, we find empirically that not all of the T words closest to the sum of the word embeddings (i.e. the context vector) are the ones emitted; indeed individual word vectors in a document may have small, even negative inner product with the context vector and still be recovered via BP. Thus any further theoretical argument must also be able to handle the recovery of lower probability words whose vectors are further away from the context vector than those of words that do not appear in the document. We thus leave to future work the challenge of explaining why embeddings ing from this (or another) model provide such efficient sensing matrices for natural language BoW.
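To make the emission model above concrete, here is a toy simulation of our own (the scaling constant and sizes are arbitrary, and the vectors are random rather than trained): it samples a document from p(w | c) proportional to exp(<c, v_w>) under a fixed context vector and checks that the sum of emitted word vectors aligns with the context direction.

import numpy as np

rng = np.random.default_rng(0)
V, d, T = 5000, 100, 40

v = rng.standard_normal((V, d))
v /= np.linalg.norm(v, axis=1, keepdims=True)          # unit-norm "word vectors"
c = rng.standard_normal(d)
c /= np.linalg.norm(c)                                 # a fixed context vector on the sphere

logits = 4.0 * (v @ c)                                 # inner products, scaled to sharpen the distribution
p = np.exp(logits - logits.max())
p /= p.sum()                                           # emission distribution p(w | c)

doc = rng.choice(V, size=T, p=p)                       # emit T words under the fixed context
z = v[doc].sum(axis=0)                                 # sum of emitted word vectors
print(np.dot(z / np.linalg.norm(z), c))                # typically well-aligned with the context direction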
We use the theory of compressed sensing to prove that LSTMs can do at least as well on linear text classification as Bag-of-n-Grams.
Some conventional transforms such as the Discrete Walsh-Hadamard Transform (DWHT) and the Discrete Cosine Transform (DCT) have been widely used as feature extractors in image processing but are rarely applied in neural networks. However, we found that these conventional transforms have the ability to capture cross-channel correlations without any learnable parameters in DNNs. This paper is the first to propose applying conventional transforms to pointwise convolution, showing that such transforms significantly reduce the computational complexity of neural networks without degrading accuracy. DWHT in particular requires no floating point multiplications but only additions and subtractions, which can considerably reduce computation overheads. In addition, its fast algorithm further reduces the complexity of floating point additions from O(n^2) to O(n log n). These non-parametric and low-computation properties yield networks that are extremely efficient in the number of parameters and operations while enjoying an accuracy gain. Our proposed DWHT-based model gained 1.49% accuracy with 79.4% fewer parameters and 48.4% fewer FLOPs compared with its baseline model (MobileNet-V1) on the CIFAR 100 dataset.

Large Convolutional Neural Networks (CNNs) and automatic Neural Architecture Search (NAS) based networks have evolved to show remarkable accuracy on tasks such as image classification and object detection, benefiting from a huge number of learnable parameters and computations. However, this large number of weights and high computational cost allow only limited applications on mobile devices with tight memory constraints and on devices that require real-time computation. To address these problems, several works, including Zhang et al. (2017b), proposed parameter- and computation-efficient blocks while maintaining almost the same accuracy as heavier CNN models. All of these blocks utilize depthwise separable convolution, which deconstructs the standard convolution with a (3 × 3 × C) kernel into a spatial-information-specific depthwise convolution (3 × 3 × 1) and a channel-information-specific pointwise (1 × 1 × C) convolution. Depthwise separable convolution achieves accuracy comparable to standard spatial convolution with greatly reduced parameters and FLOPs. These reduced resource requirements have made depthwise separable convolution, and pointwise convolution (PC) in particular, widely used in modern CNN architectures. Nevertheless, we point out that the existing PC layer is still computationally expensive and occupies a large proportion of the weight parameters. Although the demand for PC layers has been and will keep growing in modern neural network architectures, there has been little research on improving their naive structure. Therefore, this paper proposes a new PC layer formulated with non-parametric and extremely fast conventional transforms. The conventional transforms that we apply to CNN models are the Discrete Walsh-Hadamard Transform (DWHT) and the Discrete Cosine Transform (DCT), which have widely been used in image processing but rarely been applied in CNNs. We empirically found that although both of these transforms require no learnable parameters at all, they show sufficient ability to capture cross-channel correlations.
This non-parametric property enables our proposed CNN models to be significantly compressed in terms of the number of parameters, leading to get the advantages (i.e. efficient distributed training, less communication between server and clients) referred by. We note that especially DWHT is considered to be a good replacement of the conventional PC layer, as it requires no floating point multiplications but only additions and subtractions by which the computation overheads of PC layers can significantly be reduced. Furthermore, DWHT can take a strong advantage of its fast version where the computation complexity of the floating point operations is reduced from O(n 2) to O(n log n). These non-parametric and low computational properties construct extremely efficient neural network from the perspective of parameter and computation as well as enjoying accuracy gain. Our contributions are summarized as follows: • We propose a new PC layer formulated with conventional transforms which do not require any learnable parameters as well as significantly reducing the number of floating point operations compared to the existing PC layer. • The great benefits of using the bases of existing transforms come from their fast versions, which drastically decrease computation complexity in neural networks without degrading accuracy. • We found that applying ReLU after conventional transforms discards important information extracted, leading to significant drop in accuracy. Based on this finding, we propose the optimal computation block for conventional transforms. • We also found that the conventional transforms can effectively be used especially for extracting high-level features in neural networks. Based on this, we propose a new transformbased neural network architecture. Specifically, using DWHT, our proposed method yields 1.49% accuracy gain as well as 79.4% and 49.4% reduced parameters and FLOPs, respectively, compared with its baseline model (MobileNet-V1) on CIFAR 100 dataset. For reducing computation complexity of the existing convolution methods, several approaches of rethinking and deconstructing the naive convolution structures have been proposed. factorized a large sized kernel (e.g. 5 × 5) in a convolution layer into several small size kernels (e.g. 3 × 3) with several convolution layers. pointed out the limitation of existing convolution that it has the fixed receptive field. Consequently, they introduced learnable spatial displacement parameters, showing flexibility of convolution. Based on , proved that the standard convolution can be deconstructed as a single PC layer with the spatially shifted channels. Based on that, they proposed a very efficient convolution layer, namely active shift layer, by replacing spatial convolutions with shift operations. It is worth noting that the existing PC layer takes the huge proportion of computation and the number of weight parameters in modern lightweight CNN models (; ;). Specifically, MobileNet-V1 requires 94%, 74% of the overall computational cost and the overall number of weight parameters for the existing PC layer, respectively. Therefore, there were attempts to reduce computation complexity of PC layer. Zhang et al. (2017b) proposed ShuffleNet-V1 where the features are decomposed into several groups over channels and PC operation was conducted for each group, thus reducing the number of weight parameters and FLOPs by the number of groups G. 
However, it has been shown that the memory access cost increases as G increases, leading to slower inference speed. Like the aforementioned methods, our work aims to reduce the computational complexity and the number of weight parameters in a convolution layer. However, our objective is more oriented toward finding mathematically efficient algorithms that make the weights in convolution kernels more effective in feature representation as well as more efficient in terms of computation.

Quantization in neural networks reduces the number of bits used to represent the weights and/or activations. One line of work applied 8-bit quantization to weight parameters, which enabled considerable speed-up with a small drop in accuracy; another applied 16-bit fixed-point representation with stochastic rounding. Based on Han et al. (2015b), which pruned unimportant weight connections by thresholding weight values, Han et al. (2015a) successfully combined pruning with 8-bit or lower quantization and Huffman encoding. The extreme case of quantized networks approximates weights with binary (+1, −1) values. Building on this milestone, subsequent work constructed Binarized Neural Networks which either stochastically or deterministically binarize the real-valued weights and activations. These binarized weights and activations lead to significantly reduced run-time by replacing floating point multiplications with 1-bit XNOR operations. In a related direction, Local Binary CNN proposed a convolution module that uses binarized non-learnable weights in spatial convolution based on Local Binary Patterns, thus replacing multiplications with addition/subtraction operations in spatial convolution. However, it did not consider reducing the computational complexity of the PC layer and kept the weights of the PC layer as learnable floating point variables. Our work is similar to Local Binary CNN in using fixed binary weight values. However, Local Binary Patterns have limitations when applied in CNNs, since they can only be used in spatial convolution and there are no approaches that enable their fast computation.

In general, several transform techniques have been applied in image processing. The Discrete Cosine Transform (DCT) has been used as a powerful feature extractor. For an N-point input sequence, the basis kernel of DCT is defined as a list of cosine values:

C_m(n) = cos(π m (2n + 1) / (2N)), n = 0, 1, ..., N − 1, (up to a normalization factor)   (1)

where m is the index of a basis and captures higher frequency information in the input signal as m increases. This property has led DCT to be widely applied in image/video compression techniques that emphasize the power of image signals in low frequency regions. The Discrete Walsh-Hadamard Transform (DWHT) is a very fast and efficient transform that uses only +1 and −1 elements in its kernels. These binary elements allow DWHT to be computed without any multiplications, using only addition/subtraction operations. Therefore, DWHT has been widely used for fast feature extraction in many practical applications, such as texture image segmentation, face recognition, and video shot boundary detection (G. & S., 2014). Further, DWHT can take advantage of a structured-wiring-based fast algorithm (Algorithm 1) while allowing very high efficiency in encoding spatial information. The basis kernel matrix of DWHT is defined recursively from the previous kernel matrix:

H^D = [ [H^(D−1), H^(D−1)], [H^(D−1), −H^(D−1)] ],   (2)

where H^0 = 1 and D ≥ 1. In this paper we denote H^D_m as the m-th row vector of H^D in Eq. 2.
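For reference, the following numpy sketch constructs both basis kernels as written in Eqs. 1 and 2 (standard DCT-II and Sylvester-Hadamard constructions, with normalization omitted as in the proposed PC layer; the function names are ours):

import numpy as np

def dct_basis(N):
    # Eq. 1: row m, column n holds cos(pi * m * (2n + 1) / (2N)), normalization omitted
    n = np.arange(N)
    m = np.arange(N)[:, None]
    return np.cos(np.pi * m * (2 * n + 1) / (2 * N))

def dwht_basis(D):
    # Eq. 2: H^0 = [[1]], H^D = [[H, H], [H, -H]]
    H = np.array([[1.0]])
    for _ in range(D):
        H = np.block([[H, H], [H, -H]])
    return H

H = dwht_basis(3)                              # 8 x 8 kernel containing only +1 / -1 entries
assert np.allclose(H @ H.T, 8 * np.eye(8))     # rows are mutually orthogonal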
Additionally, we adopt fast DWHT algorithm to reduce computation complexity of PC layer in neural networks, ing in an extremely fast and efficient neural network. We propose a new PC layer which is computed with conventional transforms. The conventional PC layer can be formulated as follows: where (i, j) is a spatial index, and m is output channel index. In Eq. 3, N and M are the number of input and output channels, respectively. X ij ∈ R N is a vector of input X at the spatial index (i, j), N is a vector of m-th weight W in Eq. 3. For simplicity, the stride is set as 1 and the bias is omitted in Eq. 3. Our proposed method is to replace the learnable parameters W m with the bases in the conventional transforms. For example, replacing W m with H D m in Eq. 3, we now can formulate the new multiplication-free PC layer using DWHT. Similarly, the DCT basis kernels C m in Eq. 1 can substitute for W m in Eq. 3, formulating another new PC layer using DCT. Note that the normalization factors in the conventional transforms are not applied in the proposed PC layer, because Batch Normalization performs a normalization and a linear transform which can be viewed as a normalization in the existing transforms. The most important benefit of the proposed method comes from the fact that the fast algorithms of the existing transforms can be applied in the proposed PC layers for further reduction of computation. Directly applying above new PC layer gives computational complexity of O(N 2). Adopting the fast algorithms, we can significantly reduce the computation complexity of PC layer from O(N 2) to O(N logN) without any change of the computation . We demonstrate the pseudo-code of our proposed fast PC layer using DWHT in Algorithm 1 based on the Fast DWHT structure shown in Figure 1a. In Algorithm 1, for logN iterations, the evenindexed channels and odd-indexed channels are added and subtracted in element-wise manner, respectively. The ing elements which were added and subtracted are placed in the first N/2 elements and the last N/2 elements of the input of next iteration, respectively. In this computation process, each iteration requires only N operations of additions or subtractions. Consequently, Algorithm 1 gives us complexity of O(N logN) in addition or subtraction. Compared to the existing PC layer that requires complexity of O(N 2) in multiplication, our method is extremely cheaper than the conventional PC layer in terms of computation costs as shown in Figure 1b and in power consumption of computing devices . Note that, similarly to fast DWHT, DCT can also be computed in a fast manner that recursively decomposes the N -point input sequence into two subproblems of N/2-point DCT . Compared to DWHT, DCT takes advantage of using more natural shapes of cosine basis kernels, which tend to provide better feature extraction performance through capturing the frequency information. However, DCT inevitably needs multiplications for inner product between C and X vectors, and a look up table (LUT) for computing cosine kernel bases which can increase the processing time and memory access. On the other hand, as mentioned, the kernels of DWHT consist only of +1, −1 which allows for building a multiplication-free module. Furthermore, any memory access towards kernel bases is not needed if our structured-wiring-based fast DWHT algorithm (Algorithm 1) is applied. 
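The butterfly computation of Algorithm 1 can be sketched in a few lines of numpy. This is an illustration of our reading of the algorithm rather than the authors' implementation; it assumes a power-of-two channel count and zero-pads or truncates channels as described in the text.

import numpy as np

def fast_dwht_pc(X, out_channels):
    # X: feature map of shape (N, H, W); only additions/subtractions are used.
    N = X.shape[0]
    if N < out_channels:                                   # zero-pad along the channel axis
        pad = np.zeros((out_channels - N,) + X.shape[1:])
        X = np.concatenate([X, pad], axis=0)
        N = out_channels
    n = int(np.log2(N))
    assert 2 ** n == N, "fast DWHT expects a power-of-two channel count"
    Z = X
    for _ in range(n):                                     # log2(N) butterfly stages, N adds/subs each
        even, odd = Z[0::2], Z[1::2]
        Z = np.concatenate([even + odd, even - odd], axis=0)
    return Z[:out_channels]                                # truncate if fewer output channels are needed

# For a (8, H, W) input this agrees with multiplying by the 8 x 8 Walsh-Hadamard matrix
# (dwht_basis(3) in the previous sketch) along the channel axis.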
Our comprehensive experiments in Section 3.1 and 3.2 show that DWHT is more efficient than DCT in being applied in PC layer in terms of trade-off between the complexity of computation cost and accuracy. Note that, for securing more general formulation of our newly defined PC layer, we padded zeros along the channel axis if the number of input channels is less than that of output channels while truncating the output channels when the number of output channels shrink compared to that of input channels as shown in Algorithm 1. Figure 1a shows the architecture of fast DWHT algorithm described in Algorithm 1. This structuredwiring-based architecture ensures that the receptive field of each output channels is N, which means each output channel is fully reflected against all input channels through log 2 N iterations. This fullyreflected property helps to capture the input channel correlations in spite of the computation process of what channel elements will be added and subtracted being structured in a deterministic manner. For successfully fusing our new PC layer into neural networks, we explored two themes: i) an optimal block search for the proposed PC; ii) an optimal insertion strategy of the proposed block found by i), in a hierarchical manner on the blocks of networks. We assumed that there are an optimal block unit structure and an optimal hierarchy level (high-, middle-, low-level) blocks in the neural networks favored by these non-learnable transforms. Therefore, we conducted the experiments for the two aforementioned themes accordingly. We evaluated the effectiveness for each of our networks by accuracy fluctuation as the number of learnable weight parameters or FLOPs changes. For comparison, we counted total FLOPs with summation of the number of multiplications, additions and subtractions performed during the inference. Unless mentioned, we followed the default experimental setting as 128 batch size, 200 training epochs, 0.1 initial learning rate where 0.94 is multiplied per 2 epochs, and 0.9 momentum with 5e-4 weight decay value. In all the experiments, the model accuracy was obtained by taking an average of Top-1 accuracy values from three independent training . (a) A black circle indicates a channel element, and black and red lines are additions and subtractions, respectively. The number of input channels is set as 8 for simplicity. Best viewed in color. (b) x axis denotes logarithm of the number of input channels which range from 2 0 to 2 n. For simplicity, the number of output channels is set to be same as that of the input channel for all PC layers. Best viewed in color. Algorithm 1 A new pointwise convolution using fast DWHT algorithm ZeroPad1D(X, axis=1) pad zeros along the channel axis 4: end if 5: for i ← 1 to n do From a microscopic perspective, the block unit is the basic foundation of neural networks, and it determines the efficiency of the weight parameter space and computation costs in terms of accuracy. Accordingly, to find the optimal block structure for our proposed PC layer, our experiments are conducted to replace the existing PC layer blocks with our new PC layer blocks in ShuffleNet-V2 . The proposed block and its variant blocks are listed in Figure 2. Comparing the of (c) and (d) in Table 1 informs us the important fact that the ReLU activation function significantly harms the accuracy of our neural networks equipped with the conventional transforms. We empirically analyzed this phenomenon in Section 4.1. 
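For concreteness, here is a PyTorch-style sketch of a block following the series used for the MobileNet-V1 experiments later on: 3 × 3 depthwise convolution, BN, ReLU, transform-based PC, BN, with no ReLU after the transform. The ShuffleNet-V2 variants additionally use channel split, concatenation, and shuffle, which we omit here; all class and parameter names are ours.

import math
import torch
import torch.nn as nn

class DWHTPointwise(nn.Module):
    # Parameter-free pointwise layer: fast DWHT over the channel axis (cf. Algorithm 1).
    def __init__(self, out_channels):
        super().__init__()
        self.out_channels = out_channels

    def forward(self, x):                                  # x: (B, C, H, W), C assumed a power of two
        b, c, h, w = x.shape
        if c < self.out_channels:                          # zero-pad channels when expanding
            x = torch.cat([x, x.new_zeros(b, self.out_channels - c, h, w)], dim=1)
            c = self.out_channels
        for _ in range(int(math.log2(c))):
            even, odd = x[:, 0::2], x[:, 1::2]
            x = torch.cat([even + odd, even - odd], dim=1)
        return x[:, :self.out_channels]

class TransformPCBlock(nn.Module):
    # 3x3 depthwise conv -> BN -> ReLU -> DWHT-based PC -> BN (no ReLU after the transform).
    def __init__(self, channels):
        super().__init__()
        self.depthwise = nn.Conv2d(channels, channels, 3, padding=1, groups=channels, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)
        self.pc = DWHTPointwise(channels)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        return self.bn2(self.pc(self.relu(self.bn1(self.depthwise(x)))))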
Additionally, comparing the accuracy of (b) and (d) in Table 1 denotes that the proposed PC layers are superior to the PC layer which randomly initialized and fixed its weights to be non-learnable. These imply that DWHT and DCT kernels can better extract meaningful information of cross-channel correlations compared to the kernels which are randomly initialized and non-learnable. Compared to the baseline model in Table 1, (d)-DCT w/o ReLU and (d)-DWHT w/o ReLU blocks show accuracy drop by approximately 2.3% under the condition that 42% and 49.5% of learnable weight parameters and FLOPs are reduced, respectively. These imply that the proposed blocks (c) and (d) are still inefficient in trade-off between accuracy and computation costs of neural networks, leading us to more explore to find out an optimal neural network architecture. In the next subsection, we address this problem through applying conventional transforms on the optimal hierarchy level features (See Section 3.2). Based on our comprehensive experiments, we set the block structure (d) as our default proposed block which will be exploited in all the following experiments. In this section, we search on an optimal hierarchy level where our optimal block which is based on the proposed PC layer is effectively applied in a whole network architecture. The optimal hierarchy level will allow the proposed network to have the minimal number of learnable weight parameters and FLOPs without accuracy drop, which is made possible by non-parametric and extremely fast conventional transforms. It is noted that applying our proposed block on the high-level blocks in the network provides much more reduced number of parameters and FLOPs rather than applying on low-level blocks, because channel depth increases exponentially as the layer goes deeper in the network. Figure 2 on CIFAR100 dataset. All the experimented models are based on ShuffleNet-V2 with width hyper-parameter 1.1x which we customized to make the number of output channels in Stage2, 3, 4 as 128, 256, 512, respectively for fair comparison with DWHT which requires 2 n input channels. We replaced all of 13 stride 1 basic blocks (i.e. Figure 3: Performance curve of hierarchically applying our optimal block on CIFAR100, Top: in the viewpoint of the number of learnable weight parameters, Bottom: in the viewpoint of the number of FLOPs. The performance of baseline models was evaluated by ShuffleNet-V2 architecture with width hyper-parameter 0.5x, 1x, 1.1x, 1.5x. Our models were all experimented with 1.1x setting, and each dot in the figures represents mean accuracy of 3 network instances. Note that the blue line denotes the indicator of the efficiency of weight parameters or FLOPs in terms of accuracy. The upper left part from the blue line is the superior region while lower right part from blue line is the inferior region compared to the baseline models. In Figure 3, we applied our optimal block (i.e. (d) block in Figure 2 ) on high-, middle-and low-level blocks, respectively. In our experiments, we evaluate the performance of the networks depending on the number of blocks where the proposed optimal block is applied. The model that we have tested is denoted as (transform type)-(# of the proposed blocks)-(hierarchy level in Low (L), Middle (M), and High (H) where the proposed optimal block is applied). 
For example, DWHT-3-L indicates the neural network model where the first three blocks in ShuffleNet-V2 consist of the proposed blocks, while the other blocks are the original blocks of ShuffleNet-V2. It is noted that in this experiment, we fix all the blocks with stride = 2 in the baseline model to be original ShuffleNet-V2 stride = 2 blocks. Figure 3 shows the performance of the proposed methods depending on the transform types {DCT, DWHT}, hierarchy levels {L, M, H} and the number of the proposed blocks that replace the original ones in the baseline {3, 6, 10} in terms of Top-1 accuracy and the number of learnable weight parameters (or FLOPs). It is noted that, since the baseline model has only 7 blocks in the middlelevel Stage (i.e. Stage3), we performed the middle-level experiments only for DCT/DWHT-3-M and -7-M models where the proposed blocks are applied from the end of Stage3 in the baseline model. In Figure 3, the performance of our 10-H (or 10-L), 6-H (or 6-L), 3-H (or 3-L) models (7-M and 3-M only for middle-level experiments) is listed in ascending order of the number of learnable weight parameters and FLOPs. As shown in the first column of Figure 3, applying our optimal block on the high-level blocks achieved much better trade-off between the number of learnable weight parameters (FLOPs) and accuracy. Meanwhile, applying on middle-and low-level features suffered, respectively, slightly and severely from the inefficiency of the number of weight parameters (FLOPs) with regard to accuracy. This tendency is shown similarly for both DWHT-based models and DCT-based models, which implies that there can be an optimal hierarchical level of blocks favored by conventional transforms. Also note that our DWHT-based models showed slightly higher or same accuracy with less FLOPs in all the hierarchy level cases compared to our DCT-based models. This is because the fast version of DWHT does not require any multiplication but needs less amount of addition or subtraction operations compared to the fast version of DCT while it also has the sufficient ability to extract cross-channel information with the exquisite wiring-based structure. For verifying the generality of the proposed method, we also applied our methods into MobileNet-V1 . Inspired by the above showing that optimal hierarchy blocks for conventional transforms can be found in the high-level blocks, we replaced high-level blocks of baseline model (MobileNet-V1) and changed the number of proposed blocks that are replaced to verify the effectiveness of the proposed method. The experimental are described in Table 2. Remarkably, as shown in Table 2, our DWHT-6-H model yielded the 1.49% increase in Top-1 accuracy even under the condition that the 79.4% of parameters and 49.4% of FLOPs are reduced compared with the baseline 1x model. This outstanding performance improvement comes from the depthwise separable convolutions used in MobileNet-V1, where PC layers play dominant roles in computation costs and memory space, i.e. they consume 94.86% in FLOPs and 74% in the total number of parameters in the whole network . The full performance for all the hierarchy levels {L, M, H} and the number of blocks {3, 6, 10} (exceptionally, {3, 7} blocks for the middle level experiments) are described in Appendix A. 
In Appendix A, based on the comprehensive experiments it can be concluded that i) the proposed PC block always shows its better efficiency of number of parameters and FLOPs when applied on high-levels compared to when applied on low-level in the network hierarchy; ii) the performance gain start to decrease when the number of transform based PC blocks exceeded a certain capacity of the networks. Top-1 Acc (%) # of Weights (ratio) # of FLOPs (ratio) Baseline 67.15 ± 0.3 Table 2: Performance of hierarchically applying our optimal block on CIFAR100 dataset. All the models are based on MobileNet-V1 with width hyper-parameter 1x. We replaced both stride 1, 2 blocks in the baseline model with the optimal block that consist of [3 × 3 depthwise convolutionBatch Normalization -ReLU -CTPC -Batch Normalization] in series. In this section, we analyze the significant accuracy degradation of applying ReLU after our proposed PC layer. Additionally, we analyze the active utilization of 3x3 depthwise convolution weight kernel values which takes an auxiliary role for conventional transform being non-learnable. As shown in Table 1, applying ReLU after conventional transforms significantly harmed the accuracy. This is due to the properties of conventional transform basis kernels that both H D m in Eq. 2 and C m in Eq. 1 have the same number of positive and negative parameters in the kernels except for m = 0 and that the distributions of absolute values of positive and negative elements in kernels are almost identical. These properties let us know that the output channel elements that have under zero value should also be considered during the forward pass; when forwarding X ij in Eq. 3 through the conventional transforms if some important channel elements in X ij that have larger values than others are combined with negative values of C m or H D m, the important cross-channel information in the output Z ijm in Eq. 3 can reside in the value range under zero. Figure 4 shows that all the hierarchy level activations from both DCT and DWHT based PC layer have not only positive values but also negative values in almost same proportion. These negative values possibly include important cross-channel correlation information. Thus, applying ReLU on activations of PC layers which are based on conventional transforms discards crucial cross-channel information contained in negative values that must be forwarded through, leading to significant accuracy drop as shown in the of Table 1. Figure 6 empirically demonstrates above theoretical analysis by showing that as the negative value regions are fully ignored (i.e. F = ReLU), the accuracy is significantly degraded while fully reflecting the negative value regions (i.e. g = 1) shows the best accuracy. From above kernel value based analysis and its experiment, we do not use non-linear activation function after the proposed PC layer. In Figure 5 and Appendix B, it is observed that 3 × 3 depthwise convolution weights of last 3 blocks in DWHT-3-H and DCT-3-H have much less near zero values than that of baseline model. That Figure 6: Ablation study of negative slope term g in activation function F, which is defined as F (x) = max(0, x) + g * min(0, x). The performance of models were evaluated based on DCT or DWHT-13-H ShuffleNet-V2 1.1x where we applied F as an activation function after every DCT or DWHT based PC layer and Batch Normalization layer. is, the number of values which are apart from near-zero is much larger on DCT-3-H and DWHT-3-H models than on baseline model. 
We conjecture that these learnable weights whose values are apart from near-zero were actively fitted to the optimal domain that is favored by conventional transforms. Consequently, these weights are actively and sufficiently utilized to take the auxiliary role for conventional transforms which are non-learnable, deriving accuracy increase compared to the conventional PC layer as shown in Figure 3. To verify the impact of activeness of these 3 × 3 depthwise convolution weights in the last 3 blocks, we experimented with regularizing these weights varying the weight decay values. Higher weight decay values strongly regularize the scale of 3 × 3 depthwise convolution weight values in the last 3 blocks. Thus, strong constraint on the scale of these weight values hinders active utilization of these weights, which in accuracy drop as shown in Figure 7. We propose the new PC layers through conventional transforms. Our new PC layers allow the neural networks to be efficient in complexity of computation and learnable weight parameters. Especially for DWHT-based PC layer, its floating point multiplication-free property enabled extremely efficient in computation overhead. With the purpose of successfully fusing our PC layers into neural networks, we empirically found the optimal block unit structure and hierarchy level blocks in neural networks for conventional transforms, showing accuracy increase and great representability in cross-channel correlations. We further intrinsically revealed the hindrance of ReLU toward capturing the cross-channel representability and the activeness of depthwise convolution weights on the last blocks in our proposed neural network. Figure 8: Performance curve of hierarchically applying our optimal block (See Table 2 for detail settings) on CIFAR100, Top: in the viewpoint of the number of learnable weight parameters, Bottom: in the viewpoint of the number of FLOPs. The performance of baseline models was evaluated by MobileNet-V1 architecture with width hyper-parameter 0.2x, 0.35x, 0.5x, 0.75x, 1x, 1.25x. Our proposed models were all experimented with 1x setting, and each dot in the figures represents mean accuracy of 3 network instances. Our models are experimented with 10-H, 6-H, 3-H models (first column), 7-M, 3-M-Rear, 3-M-Front models (second column) and 10-L, 6-L, 3-L models (final column), listed in ascending order of the number of learnable weight parameters and FLOPs. In Figure 8, for the purpose of finding more definite hierarchy level of blocks favored by our proposed PC layers, we subdivided our middle level experiment scheme; DCT/DWHT-3-M-Front model denotes the model which applied the proposed blocks from the beginning of Stage3 in the baseline while DCT/DWHT-3-M-Rear model denotes the model which applied from the end of Stage3. The performance curves of all our proposed models in Figure 8 show that if we apply the proposed optimal block within the first 6 blocks in the network, the Top-1 accuracy is mildly or significantly deteriorated compared to the required computational cost and number of learnable parameters, informing us the important fact that there are the definite hierarchy level blocks which are favored or not favored by our proposed PC layers in the network. For the purpose of demonstrating the superiority of our proposed DCT/DWHT based PC layers over RCPC layer in all the hierarchical (i.e. low/mid/high) level layers, we compared the performance trade-off in Figure 10. 
It is noted that DCT/DWHT based PC layers almost always get higher accuracy than RCPC layer in all the hierarchical level layers. Comparing the distance between the orange or green line with the red line in Figure 10, our DCT/DWHT based PC layers showed high efficiency in trade-off between accuracy and the computational costs or number of learnable parameters, compared to RCPC layer in almost all the hierarchical levels. Table 4: Quantitative comparison between the baseline model and our proposed DCT/DWHT-based models on FDDB dataset. AP means the true positive rate at 1,000 false positives and all the models were evaluated with discontinuous criterion in FDDB dataset. In order to demonstrate the domain-generality of the proposed method, we conducted comprehensive experiments on applying our proposed PC layers to object detection, specifically to the face detection task. For face detection schemes such as anchor design, data augmentation and featuremap resolution design, we followed Zhang et al. (2017a) which is one of the baseline methods in face detection field. It is noted that there is a huge demand on real-time face detection algorithms having high detection accuracy, which leads us to applying our PC layers to a lightweight face detection network. Therefore, instead of using VGG16 as backbone network as in Zhang et al. (2017a), we set MobileNet-V1 0.25x as our baseline backbone model where extra depthwise separable blocks are added for detecting more diverse scales of face in the images. In this baseline model, we replaced the conventional PC layers within last 3, 6 blocks with our DCT/DWHT based PC layers. We trained all the models on the WIDER FACE train dataset and evaluated on WIDER FACE validation dataset and Face Detection Data Set and Benchmark (FDDB) dataset . WIDER FACE validation set has Easy, Medium and Hard subsets, which correspond to large, medium and small scale faces, respectively. Validation of the baseline model and our proposed DCT/DWHT models on WIDER FACE are described in Table 3. In Table 3, we note that, overall, our DWHT-3-H and DWHT-6-H models showed comparable or even higher mAP values than the baseline model on all the subsets (Easy, Medium, and Hard) with significantly reduced number of learnable parameters and FLOPs. Especially, DWHT-3-H model achieved 0.27% higher mAP than the baseline model under the condition that 79% of parameters and 16% of FLOPs are reduced on Hard subset. Regarding DCT-3-H and DCT-6-H models, they showed a solid improvement of mAP on Easy and Medium subsets with significantly reduced number of parameters and FLOPs compared to the baseline model. Additionally, we verified the effectiveness of the proposed method on the FDDB dataset in Table 4. We note that our DWHT-6-H and DWHT-3-H models showed comparable or even 0.09% higher AP than the baseline model with significantly reduced number of learnable parameters and FLOPs. On the other hand, our DCT-6-H and DCT-3-H models showed a small degree of degradation on AP compared to the baseline model, which is a mild degradation considering the reduced amount of parameters and FLOPs. Consequently, our comprehensive experiments on both WIDER FACE and FDDB datasets reveal the generality of our proposed method, enabling neural networks to be extremely lightweight and reduce the computational overhead.
H1l0O6EYDH
We introduce new pointwise convolution layers equipped with extremely fast conventional transforms in deep neural networks.
We introduce the notion of \emph{lattice representation learning}, in which the representation for some object of interest (e.g. a sentence or an image) is a lattice point in an Euclidean space. Our main contribution is a for replacing an objective function which employs lattice quantization with an objective function in which quantization is absent, thus allowing optimization techniques based on gradient descent to apply; we call the ing algorithms \emph{dithered stochastic gradient descent} algorithms as they are designed explicitly to allow for an optimization procedure where only local information is employed. We also argue that a technique commonly used in Variational Auto-Encoders (Gaussian priors and Gaussian approximate posteriors) is tightly connected with the idea of lattice representations, as the quantization error in good high dimensional lattices can be modeled as a Gaussian distribution. We use a traditional encoder/decoder architecture to explore the idea of latticed valued representations, and provide experimental evidence of the potential of using lattice representations by modifying the \texttt{OpenNMT-py} generic \texttt{seq2seq} architecture so that it can implement not only Gaussian dithering of representations, but also the well known straight-through estimator and its application to vector quantization. With a few notable exceptions, the majority of the practical research in representation learning assumes that the representation of the objects of interest (sentences, images, audio signals, etc.) are vectors of real numbers, as this allows us to use powerful optimization algorithms such as variants of gradient descent in the training of computational networks which encode objects into such representations and then use those representations in downstream tasks. Yet, the idea of representing objects using discrete structures (for example, through categorical variables or through the use of quantization of otherwise real valued representations) is rather enticing: sometimes we might inherently believe discrete representations to be the right way to model objects, or may want to use such representations in settings such as reinforcement learning and planning, where discrete actions are important. A classical by for maximum likelihood learning of mixture models tells us that the optimal mixing distribution can be chosen to be discrete (and in fact, the discrete set need not be larger than the amount of training data); this implies also that the optimal associated "approximate posterior" (when seen as a variational inference problem) in fact can be chosen so that it produces discrete representations. The main difficulty associated with discrete representations is that it is not straightforward to train networks that produce and use them because either there is no meaningful sense in which differentiation can be used directly (e.g. in the case of categorical variables), or in the case of quantization, the associated gradient is zero almost everywhere. In spite of these difficulties, notable progress has been made. For example, for categorical variables, , and proposed essentially the same idea, under the names of Gumbel-Softmax and the Concrete Distribution, respectively. This idea, further improved by , uses a continuous approximation to a "one-hot" encoded categorical distribution which can be learned by appropriately setting a parameter controlling the fidelity of the approximation. 
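That relaxation is simple to write down; a minimal sketch (ours) of Gumbel-Softmax / Concrete sampling, where the temperature is the fidelity-controlling parameter mentioned above:

```python
import torch

def gumbel_softmax_sample(logits, temperature):
    """Continuous relaxation of a one-hot categorical sample: perturb the
    logits with Gumbel noise and take a softmax whose temperature controls
    how close the result is to a true one-hot vector."""
    uniform = torch.rand_like(logits)
    gumbel = -torch.log(-torch.log(uniform + 1e-20) + 1e-20)
    return torch.softmax((logits + gumbel) / temperature, dim=-1)
```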
For the setting where a continuous representation is quantized to obtain the discrete one, an important development is the idea of straight-through estimation , in which quantization is applied in the "forward" direction, but replaces quantization with the identity operator when performing the "backwards" differentiation step -see also recent work attempting to provide theoretical justification of straight-through estimation . Of particular relevance to our work is VQ-VAE (van den) in which general vector quantization is used, together with straight-through estimation, to train the network and the vector quantizers. In fact, the entirety of this paper can be seen as a type of VQ-VAE where the vector. Also shown for each lattice is the cell P 0, containing the origin. The parameter of the cubic lattice ∆ has been set so that the areas of both lattice cells are identical. quantizers are highly structured instead of being free parameters in a network; as we will see, the advantages of introducing structure are that we can bring to bear analytical tools that give insight not only on how to train these networks with an equivalent continuous version, but also shed light on how we can approximate the performance of continuous Gaussian VAEs in a systematic way. In this article, we study the possibility of using lattices for object representations by borrowing from theoretical developments in information and coding theory , in particular as they apply to lossy source coding problems -see Figure 1 for an example of two lattices. In our article, a lattice is defined as the set of all integral linear mixtures of a given set of basis vectors for an Euclidean space. Lattices have been long studied in information theory as they provide powerful means for building structured codes which are analytically tractable and have good space packing and covering properties, which are useful in channel coding and lossy source coding applications, respectively . We note that it has long been known that machine learning and lossy compression are related to each other; for example, we refer the reader to the classical work on the information bottleneck (; ; ;), the connection between maximum likelihood estimation and rate distortion explored by , Lastras-Montaño, and implicitly, Rose, Neal & Hinton (1998 and a line of research on autoencoders (Giraldo & Príncipe, 2013; ;). Our work adds to this general line of research by adopting more specific structures -in this case, lattices -from the information theory field and applying them to machine learning problems. Our primary contribution will be to provide the necessary theory to train computational networks that employ lattices and dithered quantization, leading to a class of algorithms which we call dithered stochastic gradient descent algorithms. Dithering refers to the act of adding random noise to a quantized signal (e.g. an image or a time series) for the purposes of diminishing the effect that quantization has on it. The main reason dithering is important to our problem can be traced to a fundamental in the study of lattices in information theory called the "Crypto-Lemma" (see , G. D.) which to our knowledge, had not been used before to optimize, with gradient techniques, computational networks that employ quantization in a provably correct way. We will additionally connect the task of lattice representation learning with a well known technique in generative modeling using variational autoencoders; specifically, Gaussian approximate posteriors . 
In our experimental section, we will demonstrate how we can use our to train a continuous VAE that can be re-interpreted, through our main , as a discrete VAE that uses a finite dimensional lattice. To make our work concrete, we will illustrate it in the context of a standard encoder/decoder architecture; for a given object d ∈ D (where D is the space of all possible objects), it's representation is produced using an encoder: which is then decoded using a decoder: g(e(d)). We pause to remark that there is a subtle difference between the way in which the information theory and machine learning communities use the words "encoder" and "decoder"; the reader needs to be aware of this difference so as to not get lost in this article. In information theory, an encoder/decoder is almost always a means to represent and retrieve information digitally, in particular, there is a significant emphasis on the efficiency of such representations. In machine learning, an encoder's output is generally thought of as a continuous vector (of course, as we stated earlier, there are notable exceptions to this); the fact that such a continuous vector is represented digitally (in the form of a computer representation of a floating point number) is in a sense an afterthought. Significantly more important in machine learning is the capability of learning parameters for an encoder and decoder through training examples, in which case the assumption of continuity, and more generally, differentiability is much more important. Our work lies in the middle of the two fields. We are interested in a machine learning application, but want to emphasize the representation cost for the objects being encoded as a first class metric to be optimized for. In our work, the representations are discrete, this is for any d ∈ D, e(d) belongs to a discrete space, which we denote as X. To each element x ∈ X we assign a probability p(x), so that we can talk about a code for the representation space X. In this setting, a code is a mechanism for mapping x ∈ X to l(x) bits (where l(x) stands for the code bit length) so that x can be recovered, without loss, from the code for x of length l(x). It is well known that for any given code one can construct a probability distribution using the equation p(x) = 2 −l(x). It is also known that for a given distribution {p(x)}, the ideal choice for l(x) (in units of bits) is − log 2 p(x), in the sense of minimizing the number of expected bits under the assumption that the x are being generated with a distribution {p(x)}. It is also known that such ideal code length in general can only be achieved with block codes which compress multiple elements of X simultaneously. In our work, we will be indifferent to this -we will simply use l(x) = − log p(x) for the code length; in fact for our theoretical development we will be using the unit of nats (natural logarithms) as it is more convenient. Assume that D is a random quantity uniformly distributed over the training data {d 1, · · ·, d n}. Then the average representation cost for the training data is This is one of the metrics in our work. The other metric is free to be specified by the designer of the architecture. Typically, this will be of the form where is some loss function. This covers both unsupervised and supervised settings; in the latter, we assume that the loss function also includes the label for D. It also includes variational autoencoders when they are trained using the Evidence Lower Bound (ELBO). 
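As a toy illustration (ours) of the first metric, the average representation cost over the training data can be computed directly from a code; here we use the empirical distribution of the discrete representations as the code, which is our simplifying choice rather than anything prescribed by the text:

```python
import math
from collections import Counter

def average_representation_cost(representations):
    """Average code length -log p(e(d)) in nats over the training data, where
    p is taken to be the empirical distribution of the representations."""
    counts = Counter(representations)
    n = len(representations)
    return sum(-math.log(counts[r] / n) for r in representations) / n
```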
One way to create discrete representations e(D) is to quantize an otherwise continuous representation e c: where K denotes a quantization operation; for example, K could use the uniform quantizer with spacing ∆ applied to every one of the m dimensions. The encoder e, the decoder g and the representation code − log p all participate in the end-to-end objective function Under review as a conference paper at ICLR 2020 where λ > 0 is a parameter that controls the importance of the representation cost for the overall optimization process. In a nutshell, the problem to be solved is to find a theoretically sound basis for the design of good quantizers K that can be trained in an end-to-end computational network. As stated earlier, this can be seen as an attempt to bring additional mathematical rigor to the VQ-VAE concept. 3 DITHERED STOCHASTIC GRADIENT DESCENT The principles behind the use of lattices in information theory can be found in the excellent book of. We borrow from this exposition in here in order to make this work self contained. Let b 1, · · ·, b m be a basis for R m, this is, m linearly independent column vectors in R m, and define B to be the matrix obtained using these basis vectors as its columns: is defined as the set of all integral linear combinations of the {b 1, · · ·, b m}: where When clear from the context, we will use Λ to refer to a lattice. We define P 0 (Λ) to be the set of all points in R m whose closest lattice point is the origin. Given x ∈ R m, we define K Λ (x) to be the operator (the "quantizer") that takes x and maps it to the closest element of the lattice Λ: One of the main reasons we are interested in lattices is because of a mathematical tool that can be used in their analysis, which in information theory is colloquially referred to as the "Crypto-Lemma" , (G. D.): Lemma 1 (Crypto-Lemma). For a given m dimensional lattice Λ, let U be uniformly distributed over P 0 (Λ). For any x ∈ R m, the distribution of the random vector The Crypto-Lemma will give us the main mechanism for training a computational network using gradient descent algorithms and using such network (during inference) with an explicit quantization step, with a firm theoretical guarantee of equivalence. A few additional mathematical preliminaries are needed. When taking expectations we will explicitly state the probability law being used in the expectation by placing a random quantity (most of the time, a random vector) as a subindex of the expectation. If two random vectors A, B are independent, we will write A ⊥ ⊥ B. If these random vectors are independent conditional on a third random vector C, we will write (A ⊥ ⊥ B|C). We will use an upper case P to denote a probability measure, this is, a function that assigns a probability to an event passed as argument. We will use the notation P A (a) as a summary form of P ([A = a]); we sometimes will use a random vector as an argument, thus, for example, −E A log PÂ(A) can be interpreted as the average cost, in nats, of using a code designed for on the random vector A, which is drawn using a (generally) different probability law. We will use the notation f A to denote density of a continuous random vector A. We will use the notation P A|B and f A|B to denote conditional versions of the objects described above. In a slight overload of notation, recall we use f and e c to denote encoders; the correct usage should be clear from context. 
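A small sketch (ours) of the quantizer K_Λ and of sampling the dither U for the simplest case of a diagonal ("rectangular") basis, which is the lattice used in the experiments later; nearest-point search for a general basis B is more involved and is not shown.

```python
import numpy as np

def lattice_quantize_rect(x, deltas):
    """K_Lambda(x) for a rectangular lattice (diagonal basis with positive
    spacings `deltas`): round each coordinate to the nearest multiple of its
    spacing. General lattices require a proper closest-point search."""
    x, deltas = np.asarray(x, float), np.asarray(deltas, float)
    return deltas * np.round(x / deltas)

def sample_dither_rect(deltas, rng=None):
    """Draw U uniformly over the cell P_0 of the rectangular lattice: each
    coordinate is uniform on (-delta_i/2, +delta_i/2)."""
    rng = rng or np.random.default_rng()
    deltas = np.asarray(deltas, float)
    return rng.uniform(-deltas / 2.0, deltas / 2.0)

# Crypto-Lemma sanity check: over many dithers U, the samples
# lattice_quantize_rect(x + U, deltas) - U have the same distribution as x - U.
```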
If P 1 and P 2 are two probability measures, we say that P 1 P 2 (in words, P 1 is absolutely continuous with respect to P 2) if for any event A such that P 2 (A) = 0 then we have P 1 (A) = 0. Our main contribution will be centered on the following new : Theorem 1 (representation cost for dithered SGD). Let Λ be any lattice. Let X be a random R m vector distributed according to P X, andX be a random R m vector distributed according to PX, where we assume that P X PX. Assume U is uniformly distributed over P 0, and assume that The proof of this is based on a dual application of the Crypto-Lemma, and can be found in the Appendix. In the application of , P X will be associated with the empirical statistics of the encoding of training data (prior to quantization), and PX will be associated with a prior belief on the statistics of the encodings of any data, prior to quantization. Notice that the right hand side of Equation has no quantization involved. Notice also that the expression in the expectation does not use directly any global statistics about X; instead it relies on fX −U, which is a density that the designer can specify when the statistics ofX are specified and when the lattice Λ is designed. By avoiding the explicit quantization we eliminate the problem ing from the gradient of a quantized signal being zero almost everywhere, and by ensuring that the expression being optimized does not directly depend on global statistics about X, we ensure that stochastic gradient descent is feasible, since it requires that you update parameters based on gradients computed on small batches of training data. For these reasons, we shall call the machine learning algorithms that one derives from Theorem 1 dithered stochastic gradient descent algorithms. Now examine the left hand side of Equation, and notice that it does involve explicitly quantization: in fact, Z will become the discrete lattice representation of our work. Notice the conditioning in PẐ |U on the dither value U. This means that the discrete representation not only depends on the dither U, but also is being encoded using a code that depends U as well. Notice also that − log PẐ |U may not be the very best code one could be using which is − log P Z|U, in fact where the former is the "true" best possible representation cost (for a given encoder) and the latter term (a conditional KL divergence which is known to be nonnegative) denotes the excess cost. Thus the representation cost may not be, in a certain sense, "optimum", which is what we pay to make stochastic gradient descent feasible by avoiding having to compute global statistics at every step of the optimization. Having said this, our anecdotal experience is that this is not a significant problem if one allows the parameters of the encoder to also be trained, as then the encoder "adapts" so as to produce representations that approximate the prior that we have set. A reader knowledgable in the literature of lattices in lossy compression theory may be wondering what is the connection between Theorem 1 and the information theoretic linking entropy coded dithered quantization and mutual information . The answer is that Theorem 1 generalizes this to a setting where we aren't necessarily using the optimum code, which happens to be particularly useful in stochastic gradient descent algorithms. We now construct the system that we intend to study. We refer the reader to Figure 2, where we illustrate an encoder/decoder architecture coupled to a loss function. 
In the left hand side, we have a network that employs pre/post dithered quantization of a representation quantized using a lattice Λ; in the right, there is subtractive dithering but no quantization. Following Figure 2, define With this definition, we can see that in the left hand side of Figure 2, the quantized representation Z matches the definition in Equation of Theorem 1: Pre/post dithered quantization in the context of lattice representation learning. The system in the left is used during inference time, the one in the right during training time. Now in reference to the optimization objective in Equation, note that one of the things we need to do is to define a code that will be used to encode Z. For this purpose, we will use a code constructed using a distribution PẐ |U. The way we will construct this conditional distribution is also given by the equations in Theorem 1. In particular, the designer will be free to specify any distribution PX of their choice, and then construct the random vectorsX, U so that they are independent random vectors distributed according to PX and P U, respectively. Next, as in Equation we definê The distribution PẐ |U is the one associated with this definition. Encoding Z, the lattice representation of D, using the code implied by PẐ |U incurs on the cost Continuing our analysis of the optimization objective in Equation, the designer specified loss function value is (D, g(Z − U)) so that the total objective function is transformed to The difficulty in optimizing this objective function is that Z is the of quantization operation; this affects both terms. To deal with the term in the left, we observe that Lemma 1 implies that P X,Z−U = P X,X−U Now observe that (Z − U ⊥ ⊥ D|X) and (X − U ⊥ ⊥ D|X); and as a consequence This demonstrates how we get rid of the quantization by replacing it with subtractive dithering for the first term in. For the second term (the representation cost), we use Theorem 1 to obtain. so that the new optimization objective is now As indicated earlier, stochastic gradient descent is possible because fX −U does not use any global information about the training data D. The way we propose this objective function be optimized is similar to the optimization algorithms used in Variational Autoencoders (the so called "re-parametrization trick"); in particular, we propose to sample for every D, one dither U and then performing gradient descent regarding the sampled U regarded as a constant in the computational network. Assume that the role of the decoder g is to output a distribution over the data space D, this is, where Q is a distribution over D. Assume that that we choose for the loss function In this setting, the loss function is quantifying the likelihood of the data d given the dithered lattice representation Z − U. From an information theoretic coding perspective, this likelihood is the cost in bits needed to represent the data d, given Z − U. In information theory, U is regarded as common randomness shared between the encoder and decoder, and therefore there is no cost in transmitting it (in practice, pseudo random numbers with a shared seed can be used to implement this, for example). Thus to complete the cost needed to represent the data D, we need the cost of representing Z, which is in our work is given by − log PẐ |U (Z|U). 
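The distinction between the two sides of Figure 2 is the crux of the method and is easy to state in code; a tiny sketch (ours), with `quantize` standing for K_Λ and `u` for one sampled dither:

```python
def dithered_paths(x, u, quantize):
    """The two sides of Figure 2. Inference (left): the discrete code is
    Z = K_Lambda(X + U) and the decoder sees Z - U. Training (right): no
    quantization, the decoder sees X - U, which by the Crypto-Lemma has the
    same conditional distribution as Z - U given X."""
    z = quantize(x + u)          # discrete lattice representation (inference)
    return z - u, x - u          # decoder input at inference vs. at training

# Toy check with a 1-D cubic lattice of spacing 1: quantize = round and U
# uniform on (-0.5, 0.5); histograms of the two returned values over many
# dithers should match.
```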
As per our earlier development, we can substitute both of these likelihoods instead with ones which do not employ quantization, arriving to Those versed in the literature of variational autoencoders, might recognize the famous Evidence Lower BOund (ELBO) (for λ = 1) in the expression above. The term in the right-hand side is associated with the KL term (prior to taking expectation). One way to see is by noticing that ) which is the "approximate posterior" and fX −U is the "prior" (which explains why we are giving the designer full control over the distribution ofX). The term on the left, on the other hand, is associated with the "decoder" in the VAE. A very common practice in VAEs is to model the prior and the approximate posterior as Gaussians. It turns out that there is a sense in which the distribution of U approximates that of independently identically distributed (i.i.d.) Gaussian noise. To see this, we need to introduce additional concepts from the theory of lattices. The volume, second moment and and normalized second moment of the lattice cell are defined as respectively. The smallest possible second moment for any lattice on dimension m is defined as G m; we shall call any such lattice optimal. The normalized second moment of an m-dimensional hyper-sphere is denoted by G * m. It is known that A classical in the theory of lattices is that the normalized second moment of an m dimensional optimal lattice converges to the same quantity in other words, there exist lattices whose normalized second order moment approaches that of a hyper-sphere as the lattice dimension m grows to infinity. Now denote Z * to be an m dimensional Gaussian vector with independent entries each with a variance equal to σ 2 Λ, and let U m denote a random vector uniformly distributed over P 0 (Λ). Then, it is not difficult to show that where D KL denotes the KL divergence and as a consequence, combining and, we see that One way to interpret this colloquially is that the distribution of the dither of a good lattice approximates that of a Gaussian as the dimension grows to infinity. This establishes more firmly the connection between variational autoencoders that employ Gaussian approximate posteriors with lattice representations. Suppose for now thatX is chosen also to be i.i.d. Gaussian. In light of, let us analyze the performance of a good high dimensional lattice Λ by assuming U to be also i.i.d. Gaussian with variance σ 2 Λ. Then fX −U is the distribution of another i.i.d. Gaussian. Let us assume that the parameters ofX and U are such thatX − U has unit variance then becomes At this point, the expression is identical to one that one might derive from VAE theory. One important point is to note that in VAEs, the approximate posterior can have general parameters that depend on the encoder output in complex forms; for example, in the case of a Gaussian approximate posterior both the mean and the correlation matrix in the approximate posterior could depend on the encoder output. In the case in which we use lattice representations, only the mean of the approximate posterior can have dependencies on the encoder output. The significance of the observation in this subsection is that many of the (with relatively minor modifications) that the VAE community has obtained likely can be re-interpreted as involving (discrete) lattice quantized representations together with pre/post quantized dithering. The reader may be wondering: so what is new? 
The point of this section is to suggest that very good lattices approximate the behavior of Gaussian VAEs which are known to be good. To implement a practical system, Theorem 1 can be used to implement finite lattices. We will show this in the experimental section. The reader may also be asking: but why even use lattices at all? Why not use, say, categorical variables and some variant of the Concrete/Gumbel-Softmax trick, or use VQ-VAEs directly? One way to answer this question is: consider using lattices if you interested in a theoretically sound basis obtaining Gaussian VAE-like performance using discrete representations. For our experiments with finite dimensional lattices, we will be using a lattice whose basis is a diagonal matrix with positive entries. This is a simple variant of a cubic lattice in which the spacing parameter can vary per dimension so we will call it the rectangular lattice. To implement training of a VAE using this lattice, we rely on Theorem 1. For this, we need to specify a prior distribution onX. Note that one of the key things we will need is fX −U, the density ofX − U where U is uniformly distributed over the cell of the lattice we are considering here. Therefore, it is highly advantageous to choose a distribution forX so that the distribution ofX − U has a simple form. Our choice for the distribution ofX is for each entry of the vectorX to be a zero mean Laplacian distribution, this is, an origin centered symmetric distribution that decays exponentially on each side. To simplify, assume m = 1 and let ∆ > 0 the width of the uniform distribution so that U is uniformly distributed over (−∆/2, ∆/2). Let b be the parameter for the Laplacian distribution ofX, then In general we assume a collection of {∆ i} and {b i} potentially distinct parameters. Comparison of the performance (negative log likelihood in nats) of a rectangular lattice VAE, a hypothetical high dimensional lattice (in green), approximated using a Gaussian VAE where the variance of the approximate posterior is not data dependent, and and a Gaussian VAE (in red) where the variance of the Gaussian of the approximate posterior is allowed to be data dependent. We use static MNIST in our experiments. For both the encoder we use a three layer network in which the dimensions of the intermediate layer outputs are 784 → 300 → 300 → 40. The first two layers are gated linear networks with no activation function (the only nonlinearity is the multiplicative gating) and the last layer is a simple linear network with no gating or activation. For the decoder the dimensions are 40 → 300 → 300 → 784; as before the first two layers are gated linear networks with no activation function but the last layer is an ungated linear network with a sigmoid activation function. We use a batch size of 100 and Adam with learning rate 5e-4. We keep track of the model with the best validation set performance and if more than 50 epochs pass without being able to improve this performance, we stop the experiment. Then we compute the an improved log likelihood estimate by making use of importance sampling with 1000 samples for each test example. To implement this, we adapted the code made available for the in. When training for the finite dimensional lattice, the training objective for an MNIST image D is where Q is the decoder network, e c denotes the encoder network and [] i denotes the ith element of a vector. When train two types of Gaussian VAEs. 
In one case, the approximate posterior variance is not allowed to depend on the original MNIST image; we call this the Lattice Gaussian VAE because it denotes the performance of an ideal lattice VAE. The other setting is a Gaussian VAE where the approximate posterior variance is allowed to depend on the data; this we simply term Gaussian VAE. The of our experiments can be seen in Figure 3, where we illustrate the negative log likelihood of the validation set as a function of the number of training epochs. As seen, the performance of the rectangular lattice VAE is not as good as that of either of the two Gaussian VAEs, which is expected; as described in this article, simpler lattices won't perform as well as high dimensional good lattices. Notice however that the performance of the rectangular lattice is quite competitive and furthermore, notice that the performance of the two Gaussian VAEs are rather close to each other. This is encouraging for lattice based VAEs because it implies that it is in principle possible to get rather close to the performance of a Gaussian VAE using higher dimensional lattices. We then evaluated a better upper bound on negative log likelihood through the technique of importance sampling ; the are in Table 1. We see here that the performance of the rectangular lattice VAE remains quite competitive; for example the numbers reported for GumbelSoftmax for MNIST in are above 100 nats. To get closer to the performance of a Gaussian Lattice VAE, what we need to do is to increase the effective lattice dimension; for example we could use the hexagonal lattice or even higher dimensional lattices. Theorem 1 remains the main tool for training such more complex systems; this is left for future work. The present work is inspired by a belief that information theory, and in particular lossy compression theory can be very effective in serving as a theoretical foundation for problems in representation learning, including the design and analysis of highly performant practical algorithms. We have introduced lattices as a possible way to create discrete representations, and proved a fundamental which allows us to train computational networks that use lattice quantized dithering using an equivalent (in an expected sense) computational network which replaces quantization with dithering, thus allowing gradient descent to apply. This also allows us to use only local information during the optimization, thus additionally enabling stochastic gradient descent. We also established a fundamental connection between the use of good high dimensional lattices and the idea of Gaussian dithering, which is common in generative modeling settings such as Variational Autoencoders. Finally, we provided initial experimental evidence of the potential of using lattices in an VAE setting, where we contrasted the performance of a rectangular lattice based VAE and two types of Gaussian VAEs. The bottom line is that if one is interested in getting close to the performance of a Gaussian VAE with discrete representations with a good theoretical basis, we suggest the reader to consider lattices and to train them using dithered stochastic gradient descent. A.1 PROOF OF THEOREM 1 -CORE ON DITHERED SGD For reference purposes, we restate the statement of the theorem: Theorem 1 (representation cost for dithered SGD). Let Λ be any lattice. Let X be a random R m vector distributed according to P X,X be a random R m vector distributed according to PX, where we assume that P X PX. 
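Putting the pieces together, a sketch (ours) of how the rectangular-lattice VAE objective described above could be implemented. The per-coordinate prior density f_{X̂−U} has a closed form because the Laplacian convolved with the uniform dither is an averaged difference of Laplace CDF values, and the objective adds the Bernoulli reconstruction term to λ times the representation cost. All function and argument names are ours; the gated linear encoder/decoder described above is not reproduced here.

```python
import torch
import torch.nn.functional as F

def log_density_laplace_minus_uniform(t, b, delta):
    """Per-coordinate log f_{Xhat-U}(t) for Xhat ~ Laplace(0, b) and
    U ~ Uniform(-delta/2, delta/2). Since U is symmetric, Xhat - U has the
    same law as Xhat + U, so f(t) = (F(t + delta/2) - F(t - delta/2)) / delta,
    with F the Laplace CDF; b and delta may be vectors {b_i}, {delta_i}."""
    cdf = lambda s: torch.where(s < 0, 0.5 * torch.exp(s / b),
                                1.0 - 0.5 * torch.exp(-s / b))
    prob = (cdf(t + delta / 2) - cdf(t - delta / 2)) / delta
    return torch.log(prob.clamp_min(1e-30))

def rect_lattice_vae_loss(x_img, encoder, decoder, delta, b, lam=1.0):
    """Training-time objective for binarized MNIST: Bernoulli reconstruction
    from the dithered code e_c(D) - U plus lam times the representation cost
    -sum_i log f_{Xhat_i - U_i}([e_c(D) - U]_i). No quantization appears here;
    K_Lambda is only used at inference time."""
    z = encoder(x_img)                          # e_c(D), e.g. 40-dimensional
    u = (torch.rand_like(z) - 0.5) * delta      # U uniform over the lattice cell
    z_dith = z - u
    recon = F.binary_cross_entropy_with_logits(decoder(z_dith), x_img,
                                               reduction='none').sum(-1)
    rate = -log_density_laplace_minus_uniform(z_dith, b, delta).sum(-1)
    return (recon + lam * rate).mean()
```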
Assume U is uniformly distributed over P 0, and assume that U ⊥ ⊥ X and U ⊥ ⊥X. Define Then To prove this , we will rely on this Lemma: Lemma 2. Let X, U, Z be R m valued random vectors such that (X ⊥ ⊥ U |Z − U) and X ⊥ ⊥ U, with U continuous and Z discrete. Then The proof thus proceeds as follows: where (a) follows from the definition ofẐ which implies that for all x and all u, PẐ |U,X (K Λ (x + u)|u, x) = 1, (b) follows from the definition of Z, (c) follows from an application of Lemma 2, (d) is a change of variables which is feasible since X and Z − U are the only random vectors under the expectation, (e) follows from an application of the Crypto Lemma, (f) is another change of variables, (g) follows from another application of the Crypto Lemma, but this time applied to the joint distribution of X,Ẑ − U. In this appendix, we describe some for the autoencoding setting, where the goal is not to estimate densities but rather to create a representation that describes with high fidelity the original object of interest. In the autoencoding setting we are able to make a stronger statement than in the VAE example in the main body of the paper about the benefits of using better lattices. Let I m denote the m × m identity matrix. The following Lemma describes how the second moment of a lattice whose dither is "white" controls the quantization error as observed through an arbitrary function: Lemma 3 (approximation error due to lattice quantization). For a given dimension m ≥ 1, let η: R m → R be a twice differentiable function. Assume that U is a column vector uniformly distributed over the cell P 0 (Λ) of a lattice Λ, and assume that where H η denotes the Hessian (the matrix of second order derivatives) of η. Proof. It is easy to see that if u ∈ P 0 (Λ), then −u ∈ P 0 (Λ). Due to this property, the first moment of a lattice is zero: Using a Taylor series expansion, assuming σ Λ is small, Lattices that achieve G m have white dithers. This is a by Poltyrev and can be found in : Lemma 4. For any m ≥ 1, if a lattice Λ that achieves G m and U is uniformly distributed over If one keeps constant the volume V 2/m Λ of the lattice cell then by definition, We consider two lattices: a lattice spanned by a matrix proportional to I m, and a lattice with a normalized second moment equal to G m. From the previous discussion, it is easy to see that both lattices have white dithers, and therefore Lemma 3 applies. We also note that the normalized second moment of lattice spanned by a matrix proportional to I m is G 1 = 1/12. Applying Lemma 3 twice, the difference between the two approximations (one for each lattice) can be estimated by The above is an absolute estimate. In relative terms, the size of the error by using the best lattice with dimension m relative to the size of the error incurred by the cubic lattice. This is then given by In the above, η was a generic function but in our case, we would like to identify it with. Thus, we can improve the quantization error in the designer's choice of a loss function by up to 30% by using a better lattice, all at a constant representation cost. The purpose of our experiments is to draw a comparison between two techniques for quantized representation learning. 
Both techniques will be using stochastic gradient descent, which implies that we need to guarantee that the "backwards" computational network (the one used to compute the gradient of the objective functional with respect to all parameters) needs to be associated with a meaningful gradient, this is, one that can be used to iteratively improve the loss function. The first technique is a network trained with scalar or vector quantization in the forward direction. Because quantization has a zero derivative almost everywhere, it is not "meaningful" in the sense of the paragraph above. Therefore, in the backward direction we use straight-through estimation in order to create a fully differentiable backwards computational network (; van den). The second technique is an estimate of how a very good lattice in high dimensions will perform, using non-data dependent Gaussian dithering to approximate a uniform distribution over this lattice. For the experimental setting, we have chosen a seq2seq autoencoder as implemented in OpenNMT-py (; . This open source package implements a generic seq2seq architecture that encodes a variety of input types (text, audio, images) into a Figure 4: Comparison of the performance of a hypothetical high dimensional lattice, approximated using Gaussian dithering, and several quantizers trained using straight-through estimation. The x-axis is the average word negative log likelihood in bits. The y-axis is the representation cost of the quantized representation vector, averaged per word. Lower is better for both. All numbers plotted are for the test data. vector valued representation, and then decodes this representation into text. For our experiments, we have chosen a strict autoencoder experiment where we encode text into a representation and where we expect that exactly the same text be produced by the decoder from this representation. We modified this open source package (the source code will be released) by intercepting the communication between the encoder and decoder in order to apply the various quantization techniques that we are proposing to evaluate. In a seq2seq setup like the one being described in here, the loss function is the average word negative log likelihood at the output of the decoder. We will maintain the same loss function verbatim. In addition, we computed the representation cost of the quantized vector communicated from the encoder to the decoder, normalized also on a per word basis. There is a tradeoff between these two quantities -the higher the allowed representation cost, in principle the better loss function we can obtain. For the specifics of the architecture, we chose a 2 layer bidirectional GRU recurrent neural network for both the encoder and decoder, with each direction having 250 dimensions in its internal state and in the GRU's output vector (so that the total communication pipe between encoder and decoder is 500 dimensions). The OpenNMT-py package implements an optional global attention which creates additional communication paths between encoder and decoder. We disabled this, as it presents additional complications that are not important for the purposes of this article. We use Adam with an initial learning rate of 1e-4, which decays by a factor of 0.75 every 50k steps after an initial 100k steps where it is kept constant. The total number of training steps is 500k. 
The parameter λ which weights the representation cost in the total objective function is slowly annealed from a value close to 0 to its (asymptotic) value of 1.0; the value of 0.5 is achieved at 100k steps. When performing quantization and straight-through estimation, the 250 dimensions of each direction of the bi-directional GRU can be quantized in several ways, depending on how we partition the 250 dimensions. One family of such possibilities is to use N = 250 scalar quantizers each using M levels, for various values of M. Another example we demonstrate is N = 125 quantizers, each with M = 4 two dimensional code vectors. In any of the experiments, each quantizer is independent of the other quantizer, in the sense that its M levels are parameters of the model which are optimized using gradient descent. Each of the N codes is associated with a code (a distribution over its M levels), which can also be optimized using gradient descent. The average code length obtain as the representations are encoded with this code is the representation cost. For the experiments using Gaussian dithering, the output of the encoder is first linearly transformed using a linear operator with free parameters, and then it is dithered using an uncorrelated zero mean Gaussian with a diagonal correlation matrix that is also a set of free parameters. The dithered signal is then passed through another linear operator. The representation cost for Gaussian dithering is computed using techniques from Variational Autoencoders; in essence we assume an isotropic unit variance Gaussian as a prior over the representation space and then estimate the KL divergence between the dithered representation conditional on the encoder output and the Gaussian prior. The specific data that we use for the text autoencoding experiment is a set of over 4 million english sentences prepared using scripts that are available in Google's NMT tutorial (https://github.com/tensorflow/nmt, in particular the wmt16_en_de.sh script) for a German/English translation system; as this is an autoencoding setup, we only use the English sentences. For the test data, we use 3003 sentences also typically used in translation experiments for testing (newstest2014). This data set was then processed using OpenNMT-py preprocess.py tool. We only report for the test data. The can be seen in Figure 4. In the lower left hand side we see the tradeoff between the representation cost and the average word negative log likelihood. We can see that the projected performance of a good lattice is significantly better than the performance of the specific quantizers, trained with straight-through estimation that we tested. The reader may be wondering whether the high dimensional assumption that we make on the gaussian dithering approximation to good lattices implies that the projected performance may be unrealistic; we do not know with certainty at this point however we believe that a good lattice in 250 dimensions will likely be very well approximated with a Gaussian.
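A sketch (ours, and only one plausible reading of the setup) of one of the N independent scalar quantizers used in the seq2seq experiments: its M levels and the categorical code over them are trainable parameters, the forward pass snaps to the nearest level, and the backward pass is straight-through. How exactly the levels receive gradients is not spelled out in the text, so letting the gradient also flow into the selected level, as below, is our assumption; a VQ-VAE-style auxiliary loss would be another option.

```python
import torch

class LearnableScalarQuantizer(torch.nn.Module):
    """One of N independent scalar quantizers with M learnable levels and a
    learnable code (distribution) over them."""

    def __init__(self, num_levels):
        super().__init__()
        self.levels = torch.nn.Parameter(torch.linspace(-1.0, 1.0, num_levels))
        self.code_logits = torch.nn.Parameter(torch.zeros(num_levels))

    def forward(self, x):                      # x: (batch,) one scalar feature
        idx = (x.unsqueeze(-1) - self.levels).abs().argmin(dim=-1)
        q = self.levels[idx]                   # nearest level, differentiable in the levels
        x_q = q + (x - x.detach())             # forward: q; backward: identity w.r.t. x
        # representation cost in nats under this quantizer's learnable code
        rate = -torch.log_softmax(self.code_logits, dim=-1)[idx]
        return x_q, rate
```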
rJlwAa4YwS
We propose to use lattices to represent objects and prove a fundamental result on how to train networks that use them.
There were many attempts to explain the trade-off between accuracy and adversarial robustness. However, there was no clear understanding of the behaviors of a robust classifier which has human-like robustness. We argue why we need to consider adversarial robustness against varying magnitudes of perturbations not only focusing on a fixed perturbation threshold, why we need to use different method to generate adversarially perturbed samples that can be used to train a robust classifier and measure the robustness of classifiers and why we need to prioritize adversarial accuracies with different magnitudes. We introduce Lexicographical Genuine Robustness (LGR) of classifiers that combines the above requirements. We also suggest a candidate oracle classifier called "Optimal Lexicographically Genuinely Robust Classifier (OLGRC)" that prioritizes accuracy on meaningful adversarially perturbed examples generated by smaller magnitude perturbations. The training algorithm for estimating OLGRC requires lexicographical optimization unlike existing adversarial training methods. To apply lexicographical optimization to neural network, we utilize Gradient Episodic Memory (GEM) which was originally developed for continual learning by preventing catastrophic forgetting. Even though deep learning models have shown promising performances in image classification tasks, most deep learning classifiers mis-classify imperceptibly perturbed images, i.e. adversarial examples. This vulnerability can occur even when the adversarial attacks were applied before they print the images, and the printed images were read through a camera. That shows real-world threats of classifiers can exist. In addition, adversarial examples for a classifier can be transferable to other models. This transferability of adversarial examples enables attackers to exploit a target model with limited access to the target classifier. This kinds of attacks is called black-box attacks. An adversarially perturbed sample refers to the of the perturbation (adversary generation) methods that has increased adversarial loss usually starting from an original sample. It is important to notice that an adversarially perturbed sample of a classifier may not be an adversarial example, which will be explained later in subsection 1.1. It can be just a non-adversarial perturbed sample (see Figure 4). Adversary generation methods try to effectively increase adversarial loss using the available information of the target classifier. Methods to generate adversarially perturbed samples include Fast Gradient Sign Method (FGSM), Basic Iterative Method (BIM), Projected Gradient Descent (PGD), Distributionally Adversarial Attack (DAA) and Interval Attack. We will use the following terminology for the following paragraphs unless we specify otherwise. x is an original sample, y is corresponding label for x, is the perturbation norm, sign indicates the element-wise sign function and L(θ, x, y) is the loss of the classifier parameterized by θ. Fast Gradient Sign Method (FGSM) generates the adversarial x F GSM with the following formula. x F GSM = x + sign [∇ x L(θ, x, y)]. FGSM was suggested from the hypothesis that linear behaviors of classifiers are enough to cause adversarial susceptibility of models. The formula was obtained by applying local linearization of the cost function and finding the optimal perturbation. 
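For concreteness, a minimal PyTorch sketch (ours) of the l∞ FGSM update x + ε·sign(∇_x L(θ, x, y)) described above; `model` and `loss_fn` are placeholders for the target classifier and its loss.

```python
import torch

def fgsm(model, loss_fn, x, y, eps):
    """Fast Gradient Sign Method: one signed-gradient step of size eps (l_inf)."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).detach()
```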
Note that we only show the formula for l∞ norm (max norm) attacks, but we can easily obtain the formulas for other attacks by replacing the sign function with the identity function or others. In order to obtain the strongest attacks that use first-order information about the model, Projected Gradient Descent (PGD) generates the adversarially perturbed sample x_PGD by applying iterative steps of the form x^(t+1) = Π_{x+S}[ x^(t) + α · sign(∇_x L(θ, x^(t), y)) ], where x + S is the set of allowed perturbations for sample x, limited by ε; x^(0) is a random starting point in x + S; Π_{x+S}[·] denotes the projection onto x + S; and α is the step size. Note that we again only show the formula for l∞ norm attacks. The Basic Iterative Method (BIM) uses the same iterative steps as PGD, except that it starts from a fixed starting point, i.e. x^(0) = x for BIM.

Adversarial training was developed to avoid the adversarial vulnerability of a classifier. It tries to reduce a weighted sum of the standard loss (empirical risk) E[L(θ, x, y)] and the adversarial loss E[L(θ, x′, y)], i.e. αE[L(θ, x, y)] + (1 − α)E[L(θ, x′, y)], where α is a hyperparameter for adversarial training and x′ is an adversarially perturbed sample from x with ‖x′ − x‖ ≤ ε. (Usually, α = 0.5 is used for adversarial training.) By considering both standard and adversarially perturbed samples, adversarial training tries to increase accuracy on both clean and adversarially perturbed samples. In the literature on adversarial training, inner maximization refers to generating adversarial attacks, i.e. generating adversarially perturbed samples x* that maximally increase the loss, and outer minimization refers to minimizing the adversarial loss of the model. Madry et al. explained that inner maximization and outer minimization of the loss can train models that are robust against adversarial attacks. However, adversarial training has shown some issues. As several works on adversarial robustness have explained, there is a trade-off between accuracy on clean data and adversarial robustness, so adversarial training can yield a classifier whose accuracy is lower than that of a model trained with a standard (non-adversarial) method. Also, one study examined samples whose perceptual classes are changed by the perturbation while the model's prediction is not, which they called "invariance-based adversarial examples". They found that classifiers trained with adversarial training can be more susceptible to invariance-based adversarial examples.

We define three properties of human-like classification: (1) human-like classification is robust against varying magnitudes of adversarially perturbed samples and not just against a fixed maximum-norm perturbation; (2) when we consider adversarially perturbed samples with increasing magnitudes, a human-like classifier does not consider already-considered samples multiple times; and (3) human-like classification prioritizes adversarial accuracies with smaller perturbation norms. The objective of this paper is to design and train a classifier whose robustness resembles human robustness more closely than that of a model trained by standard adversarial training. We introduce Lexicographical Genuine Robustness (LGR) of classifiers to combine the three properties.

Figure 1: Examples of confusing near image pairs with different classes from the MNIST training data. The l2 norms of the pairs are 2.399, 3.100 and 3.131 from left to right. From these examples, we can say the exclusive belongingness assumption may not be realistic.
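A sketch (ours) of the l∞ PGD attack and of one adversarial-training step with the α-weighted loss above; setting the random start to zero recovers BIM. Hyperparameter names are ours, and the PGD step size is called `alpha` while the loss-mixing weight is called `mix` to keep the two uses of α apart.

```python
import torch

def pgd(model, loss_fn, x, y, eps, alpha, steps, random_start=True):
    """l_inf PGD: (optionally random) start in the eps-ball, iterated signed
    gradient steps, each projected back onto the ball; random_start=False is BIM."""
    x_adv = x + (torch.empty_like(x).uniform_(-eps, eps) if random_start
                 else torch.zeros_like(x))
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        grad, = torch.autograd.grad(loss_fn(model(x_adv), y), x_adv)
        x_adv = x_adv + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project onto x + S
    return x_adv.detach()

def adversarial_training_step(model, loss_fn, optimizer, x, y, eps,
                              mix=0.5, step_size=0.01, steps=10):
    """One update of mix*E[L(clean)] + (1-mix)*E[L(adversarial)], the weighted
    objective above, with mix playing the role of the mixing hyperparameter."""
    x_adv = pgd(model, loss_fn, x, y, eps, step_size, steps)
    optimizer.zero_grad()
    loss = mix * loss_fn(model(x), y) + (1 - mix) * loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.detach()
```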
Using LGR can prevent the problem of training classifiers with lower accuracy on clean data, because of the third property. LGR also makes it possible to avoid what we call "pseudo adversarial examples" of models, which are conceptually similar to invariance-based adversarial examples. From LGR, we introduce a candidate oracle classifier called the "Optimal Lexicographically Genuinely Robust Classifier (OLGRC)". We now move on to a more precise definition of adversarial examples and a detailed explanation of our ideas. The definition of adversarial examples by Biggio et al. has been used in many theoretical analyses of adversarial robustness. These analyses showed that adversarial examples are inevitable and that there is a trade-off between accuracy on clean data and adversarial robustness, i.e. accuracy on adversarially perturbed samples. However, we argue that simply increasing adversarial robustness can produce a classifier whose behavior is different from that of humans. Problem setting 1. In a clean input set X ⊆ R^d, let every sample x exclusively belong to one of the classes in Y, and denote its class by c_x. A classifier f assigns a class label from Y to each sample x ∈ R^d. Assume f is parameterized by θ, and L(θ, x, y) is the loss of the classifier given the input x and the label y ∈ Y. Note that this exclusive belonging assumption is introduced to simplify the analysis and can be unrealistic. In a real situation, 1) the input information might not be enough to perfectly predict the class, 2) input samples might contain noise which erases class information, 3) some input samples might be better described by a non-exclusive class (see Figure 1), or 4) labels might also contain noise due to annotation mistakes. Definition 1. Given a clean sample x ∈ X and a maximum perturbation norm (threshold) ε, a perturbed sample x' is an adversarial example by the definition of Biggio et al. if ||x' − x|| ≤ ε and f(x') ≠ c_x. Note that some perturbed samples x' and some adversarial examples may also belong to X. Although they are generated by the adversary generation methods mentioned in subsection 1.0.1, perturbed samples are not necessarily adversarial examples (see Figure 4); for example, when the allowed perturbation norm is too small, the predicted class of an adversarially perturbed sample x' can still be c_x. We focus our analysis on l_p norms for measuring distance, but the concept of adversarial examples and our analysis are not confined to these metrics; many of the ideas might also apply to adversarial examples based on relaxed versions of such metrics. Let's consider the classification task on the MNIST dataset. We use norms calculated by viewing an image as a (flattened) 784-dimensional vector. The smallest l2 norm between a pair of training images with different classes is 2.399. (The l∞ norm is more commonly used in the literature, since the l∞ norm of a perturbation must be large in order to change the perceptual class; however, a classifier adversarially trained against an l∞ adversary remains susceptible to attacks with an l0 or l2 adversary, which suggests that we also need to consider l0 and l2 norm robustness.) However, the l2 norm of the nearest training image pair between classes 6 and 7 is 5.485. Consequently, when we train a classifier using adversarial training with ε = 2.399 (one could instead use half of the minimum distance, ε = 2.399/2 ≈ 1.200, to account for the distance between the decision boundary and samples of different classes, but our explanation does not take that approach), the trained classifier might mis-classify a perturbed image of digit 6 as an image of digit 7 when the perturbation norm is 2.5 (> 2.399), even though that perturbation norm is smaller than half (2.742) of the expected minimum norm 5.485. Hence, we might want a classifier that is also robust when ε is larger than 2.399.
If we want a classifier that has no adversarial example for ε = 5.4 < 5.485, we need a classifier that outputs the original class for every training image and for every perturbed image whose perturbation norm is at most 5.4. However, the l2 norm of the nearest training image pair between classes 4 and 9 is 2.830 (see Figure 2). What does this mean? Since the image on the bottom left can also be considered an adversarial example perturbed from the top left image, the classifier would need to output class 9 when the image is an original image and class 4 when it is an adversarial example. Can we have a classifier with such psychometric power that it knows the previous history of an image? Do we have such psychometric power? Or are we simply not robust enough even for classifying MNIST data? The more important question we need to ask is "Do we really want such an ability?", and we argue the answer is "No!". This confusion arises from the gap between our intuitive understanding and Biggio et al.'s definition of adversarial examples. (Note that the reason we encounter these kinds of problems is not that we are doing multiclass classification; a similar problem can occur in binary classification, as shown in Figure 3.) Even though the intuitive definition of adversarial examples is samples generated by applying imperceptible perturbations to clean samples, by relying on Biggio et al.'s definition, some theoretical analyses have studied adversarial robustness even when the norms of the adversarial perturbations are large enough to change the perceptual classes of samples. Definition 2. Let us distinguish two kinds of class for a given clean sample x ∈ X and its corresponding perturbed sample x'. • De facto class: c_x for the clean sample x; c_{x'} for the perturbed sample x' if x' ∈ X. The de facto class is undefined for a perturbed sample x' ∈ X^C = R^d − X. • De jure class: c_x for the clean sample x; c_x for the perturbed sample x', i.e. the original class of the perturbed sample x'. Intuitively speaking, the de facto class of a sample is its current perceptual class. The name "de jure" may be debatable and confusing, but it follows the tradition of researchers who consider the original class of a perturbed sample to be its legitimate class and try to increase robustness based on it, even if the perturbation can change the de facto class. One thing to notice is that we can change the de facto class by perturbing a clean sample x when a large perturbation is allowed, but we cannot change its de jure class. De facto class and de jure class do not depend on the classifier f. Definition 3. Furthermore, we distinguish two kinds of adversarial example x', where x was the original sample of x' before the adversarial perturbation. • Pseudo adversarial example: an adversarial example x' by Definition 1 whose de facto class is different from its de jure class, i.e. c_{x'} ≠ c_x. • Genuine adversarial example: an adversarial example x' by Definition 1 whose de facto class is undefined, i.e. x' ∈ X^C.
Note that even though the classifier f determines whether a given perturbed sample x is an adversarial example or not, it doesn't affect whether an adversarial example x is a pseudo adversarial example or genuine adversarial example. For a given sample, the history (whether it was perturbed or not) of a sample will determine whether the sample is a clean sample x or a perturbed sample x. For a perturbed sample x, its de jure class c x and the classifier f will determine whether it is a non-adversarial perturbed sample or an adversarial example. Finally, the existence of de facto class will determine whether an adversarial example is a genuine or pseudo adversarial example. With our definitions, let's consider again the classification task on MNIST dataset (see Figure 2). When we think about adversarial examples for = 5.4, again, the image on the bottom left can be considered as an adversarial example perturbed from the top left image. As its de facto class is 9 no matter it's a clean or adversarial example, it is a pseudo adversarial example of top left image if it was an adversarial example. Do we want our classifier to be robust against pseudo adversarial examples? Short answer to this question is "No, we don't need to.". When we consider the classification process of humans, we do not care about whether a given sample was a clean or perturbed sample, i.e. the previous history of the sample. We only care about the most likely class of the current sample and such class is close to the concept of de facto class. And this principle was commonly used in many visual assessment of adversarial robustness [10, 13, even if some of them follow the definition of adversarial example of Biggio et al.. Let's consider a general situation where a classifier f tries to increase the adversarial robustness for a perturbation norm which is large enough so that the perturbation can change the de facto classes of some samples. In other words, the classifier tries to assign de jure class even for pseudo adversarial examples. This implies that the classifier tries to assign perceptually wrong classes for pseudo adversarial examples who are currently equivalent to clean examples, and this will decrease accuracy on clean data without increasing human-like robustness on these samples. Hence, not only we don't need to increase robustness against pseudo adversarial examples, but also we should avoid increasing robustness against them in order to get a model with human-like robustness (Note that robustness will be calculated by de jure classes of pseudo adversarial examples). Let's compare the training tasks when we only have clean samples and when we only have perturbed samples. Perturbed samples can be derived from clean samples and theoretically they can take any values in their allowed perturbation regions. Because of that perturbed samples have more uncertainty than clean samples. In order words, clean samples have more information than perturbed samples. This observation can lead to a preference to prefer using clean samples when we train a model. When we think about a training task with both clean and perturbed samples, the preference will be correspond to increasing natural accuracy before we consider the accuracy on perturbed samples. This preference can be generalized to a principle that we prioritize the adversarial accuracy on smaller perturbation norm. From the above explanations, we can summarize the properties of human classification or human-like robustness. 1. 
Human classification is robust against adversarially perturbed samples generated from varying magnitudes of perturbations and not just fixed maximum norm perturbations. 2. The previous history of a sample has no effect in classification. Only the current sample will determine the classification . From this, a human-like classifier avoids assigning de jure class for pseudo adversarial examples. More generally, a human-like classifier avoids considering already considered samples several times. 3. Human classification prioritizes the robustness for smaller perturbation norm than the robustness for larger perturbation norm. The question arising from the second property is "How do we know a given adversarial example is a pseudo adversarial example or genuine adversarial example?". It would be trivial when we know the data distribution and predefined classes for all data like the toy example in section 2. However, in practice, we only have limited training data and hard to know the data distribution. We introduce a method to estimate whether a perturbed sample x has de facto class or not, and thus try to avoid using pseudo adversarial examples for adversarial training and measure the robustness of classifiers. We then combine this with a lexicographical optimization method. Before further diving into the adversarial robustness of classifiers, we give the mathematical definitions of the accuracies. We define natural accuracy and adversarial accuracies for given maximum perturbation norm and exact perturbation norm. Note that 1 is an indicator function which has value 1 if the condition in the bracket holds and value 0 if the condition in the bracket doesn't hold. • Natural accuracy: • (Standard) Adversarial accuracy (by maximum perturbation norm): where adversarially perturbed sample x * = argmax • (Standard) Adversarial accuracy (by exact perturbation norm): where adversarially perturbed sample x * = argmax • Genuine adversarial accuracy (by maximum perturbation norm): where S max = x ∈ X |∃x ∈ X C: x − x ≤ and adversarially perturbed sample x * = argmax • Genuine adversarial accuracy (by exact perturbation norm): where S exact = x ∈ X |∃x ∈ X C: x − x = and adversarially perturbed sample x * = argmax Note that the only difference of adversarial accuracies by maximum perturbation norm and exact perturbation norm is that their allowed regions of adversarially perturbed sample x *, i.e. x: x − x ≤ vs. x: x − x =. The reason why we are separating them will be explained later. Due to the additional requirement x ∈ X C in adversarially perturbed sample x, pseudo adversarial examples will not be considered in genuine adversarial accuracy and thus give more meaningful adversarial accuracy. Depending on X, genuine adversarial accuracies can be undefined. In other word, genuine adversarial accuracies will be undefined when S max = ∅ or S exact = ∅. Definition 5. We define adversarial accuracy functions a: [0, ∞) → for a classifier f. These functions are defined by measuring adversarial accuracies with varying perturbation norms, but genuine adversarial accuracy function uses slightly modified formula. 
• (Standard) Adversarial accuracy function (by maximum perturbation norm): ] where x * = argmax • (Standard) Adversarial accuracy function (by exact perturbation norm): ] where x * = argmax • Genuine adversarial accuracy function (by exact perturbation norm): previously allowed perturbation region X = x ∈ R d: x − x < where x ∈ X and x * = argmax Likewise, the only difference of adversarial accuracy functions by maximum perturbation norm and exact perturbation norm is that their allowed regions of adversarially perturbed sample x *. Adversarial accuracy function will be also called the change of adversarial accuracy. Genuine adversarial accuracy function will be conventionally also called the change of genuine adversarial accuracy even if it is not strictly correct. We don't define genuine adversarial accuracy function by maximum perturbation norm. One thing to notice in the S exact in the definition of genuine adversarial accrucacy function is that it useX, i.e. the closure of X. The reason we are usingX instead of X will be explained in subsection 2.2. The additional requirement used in genuine adversarial accuracy function was x ∈ X C = R d − X rather than x ∈ X C. It is because we consider the situation where we continuously increase the exact perturbation norm and we want to ignore already considered points for calculation of adversarial accuracy with smaller perturbation norm. This can also be considered as using samples in previously allowed perturbation region X as a new clean input set X = X. Let's think about a toy example (see Figure 5) with predefined (pre-known) classes in order to simplify the analysis. There are only two classes −1 and 1, i.e. Y = {−1, 1}, and, i.e. we assume uniform prior probability. Let's define three classifiers f 1, f 2 and f 3 for this toy example (see Figure 6). When step function step(x) is defined Notice that natural accuracy for all three classifiers is 1. We now explain the change of adversarial accuracy for f 1 (x) by exact perturbation norm (see top right of Figure 7). When 0 < ≤ 1, we can change the predicted class for x ∈ [1, 1 +) by subtracting, and we can't change the predicted class for x / ∈ [1, 1 +), thus standard adversarial accuracy will be 1 − 1 2. When 1 < ≤ 2, there will be same amount of adversarial examples with = 1, thus (standard) adversarial accuracy will be 1 − 1 2 = 1 2. When 2 < ≤ 3, we can still change the predicted class for x ∈ by subtracting. Addition to that we can also change the predicted class for x ∈ [1 −, −1) by adding and (standard) adversarial accuracy will be − where step(x) = 1 for x ≥ 0 and step(x) = −1 for x < 0. Top: Change of (standard) adversarial accuracy for f 1 (x) by maximum perturbation norm (left) and exact perturbation norm x (right) where x = x − x, Middle: Change of adversarial accuracy for f 2 (x) by maximum perturbation norm (left) and exact perturbation norm x (right), Bottom: Change of adversarial accuracy for f 3 (x) by maximum perturbation norm (left) and exact perturbation norm x (right). Observed behaviors of f 2 and f 3 will be same when we compare the adversarial accuracy by maximum perturbation norm, however, observed behaviors of f 2 and f 3 are different when we compare the adversarial accuracy by exact perturbation norm x. When we think about the change of adversarial accuracy for f 2 (x) by exact perturbation norm, by similar analysis, we can check it will be look like middle right graph in Figure 7 when ≤ 5. However, intriguing phenomenon occurs when > 5. 
When 5 < ≤ 6, x ∈ [1, − 4) cannot change the predicted class as subtracting or adding will in the same class 1, thus adversarial accuracy will be −5 2. If ≥ 6, adversarial accuracy will be The change of adversarial accuracy for f 3 (x) by exact perturbation norm can be understand similarly with f 2 (x). Now, we move on to the explanation for the changes of genuine adversarial accuracy for f 1 (x), f 2 (x) and f 3 (x) (see Figure 8). When 0 < ≤ 1, previously allowed perturbation region X = (−2 −, −1 +) ∪ (1 −, 2 +). When > 1, previously allowed perturbation region X = (−2 −, 2 +). For calculation of genuine adversarial accuracies, we will consider four points, i.e. S exact = {−2 −, −1 +, 1 −, 2 +}, when 0 < ≤ 1 (point 0 will be counted twice when = 1) and two points, i.e. S exact = {−2 −, 2 +}, when > 1. Note that if we did not use closure in the definition of S exact , S exact = {−2 −, 1 −}, when 0 < ≤ 1 and S exact = {−2 −}, when > 1. This will ignore many points and can not measure proper robustness of classifiers. In the change of genuine adversarial accuracy for f 1 (x), when 0 < ≤ 1, −2 −, −1 + and 2 + will be non-adversarial perturbed samples and 1 − will be adversarial example, and thus a gen;exact = 3 4 = 0.75. When > 1, −2 − and 2 + will be non-adversarial perturbed samples, and thus its genuine adversarial accuracy is 1. When considering the change of genuine adversarial accuracy for f 2 (x), for 0 < < 1, −2 −, −1 +, 1 − and 2 + will be non-adversarial perturbed samples, and thus a gen;exact = 1. When = 1, −2 −, 1 − and 2 + will be non-adversarial perturbed samples and −1 + will be adversarial example, and thus a gen;exact = 3 4 = 0.75 (Actually, 1 − = 0 = −1 +, but they counted twice.). When 1 < ≤ 2, −2 − and 2 + will be non-adversarial perturbed samples, and thus a gen;exact = 1. However, when > 2, only 2 + will be non-adversarial perturbed samples and −2 − will be adversarial example, and thus a gen;exact = We introduce Lexicographical (Standard) Robustness (LSR or LR) which is a total preorder based on adversarial accuracy functions by the exact perturbation norm. Furthermore, we explain why LSR is not enough to specify a human-like classifier and why we need Lexicographical Genuine Robustness (LGR). From this, we suggest a candidate oracle classifier what we called "Optimal Lexicographically Genuinely Robust Classifier (OLGRC)". Let's say we have two classifiers f 1 and f 2 for given data D ⊆ X × Y (Here, we are considering general classifiers and not f 1 and f 2 for our toy example.). Let a 1, a 2: [0, ∞) → be the corresponding standard adversarial accuracy functions by exact perturbation norm for f 1 and f 2, respectively. Definition 6. We define a total preorder of classifiers called Lexicographical Standard Robustness (LSR). • We say "f 2 is lexicographically more robust (LR) than • "f 2 is lexicographically equivalently robust (LR) with f 1 " or denote " The reason why we consider adversarial robustness against varying magnitudes of perturbations and not a fixed maximum perturbation norm is that increasing robustness on a fixed maximum perturbation norm will not give a classifier that has human-like robustness as explained in 1.2 (The first property in 1.3.). The defined (total) preorder prioritizes the robustness for smaller perturbation norm because more information in the samples can be lost when larger perturbation is allowed, and thus adversarial accuracy for larger perturbation norm is less important (The third property in 1.3.). 
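To make the lexicographical comparison of classifiers concrete, the sketch below encodes one natural reading of the preorder just introduced: two adversarial-accuracy curves are evaluated on the same increasing grid of perturbation norms, and the classifier that is better at the first grid point where the curves differ is preferred. The grid-based discretization and the tolerance are assumptions made for illustration, since the definition in the text is stated over continuous accuracy functions.

```python
import numpy as np

def lexicographic_compare(acc1, acc2, tol=1e-9):
    """Compare two adversarial-accuracy curves sampled on the same increasing grid
    of perturbation norms. Returns +1 if the second classifier is lexicographically
    more robust, -1 if the first is, and 0 if they are lexicographically equivalent."""
    for a1, a2 in zip(acc1, acc2):
        if abs(a2 - a1) > tol:
            return 1 if a2 > a1 else -1
    return 0

# Toy curves on the grid eps = [0, 1, 2, 3]; f2 wins at the first point of difference,
# so it is preferred even though its accuracy at larger norms is lower.
acc_f1 = np.array([1.0, 0.5, 0.5, 0.5])
acc_f2 = np.array([1.0, 0.9, 0.4, 0.2])
print(lexicographic_compare(acc_f1, acc_f2))   # 1 -> f2 is lexicographically more robust
```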
This prioritization is also related to the observation that we need to avoid increasing robustness against pseudo adversarial examples who are more likely to occur when the magnitude of the perturbation is large (It is also connected to the second property in 1.3, but in an incomplete way as samples used for adversarial accuracy with small perturbation magnitude can be repeatedly used for larger perturbation magnitudes.). Furthermore, there is also a reason for using adversarial accuracy by exact perturbation norm not by maximum perturbation norm. That was because using adversarial accuracy by exact perturbation norm enables further discretibility as shown in Figure 7. Let's go back to the toy example 2 and three classifiers f 1, f 2 and f 3 for that toy example. According to the Lexicographical Standard Robustness (LSR), we have f 1 < LR f 3 < LR f 2. Then, can we say f 3 is better than f 1, and f 2 is better than f 3? Well, it is true in terms of Standard Robustness only. However, in the following subsection 3.2, we argue why f 2 can be better than f 3 in other aspects. One thing to note here is that if we define f 4 (x) = step(x) − step(x − 4), we can check f 2 = LR f 4 while f 2 = f 4. Hence, LSR doesn't have an antisymmetric property, thus it is not a partial order. In the previous subsection, we explained that the total preorder based on Lexicographical Standard Robustness (LSR) can handle the first and third properties in subsection 1.3, but only incompletely for the second property. To also handle the second property, we use genuine adversarial accuracy function which ignores already considered points for calculation of adversarial accuracy. Let's say we have two classifiers f 1 and f 2 for given data D ⊆ X × Y (Again, we are referring general classifiers and not f 1 and f 2 for our toy example.). Let a 1, a 2: [0, ∞) → be the corresponding genuine adversarial accuracy functions by exact perturbation norm for f 1 and f 2, respectively. • We say "f 2 is lexicographically genuinely more robust (LGR) than • "f 2 is lexicographically genuinely equivalently robust (LGR) with f 1 " or denote " Let's go back again to the toy example 2 and classifiers f 1, f 2 and f 3 for that toy example. According to the Lexicographical Genuine Robustness (LGR), we have Let's consider the perturbations needed to change the predicted classification . Similar to the gradients of differentiable function, the perturbations can be considered as interpretations of classifier as they can change the predicted class. When we think about changing de facto classes, we need positive perturbation x = x − x ∈ in order to change class −1 to class 1, and we need negative perturbation x ∈ (−4, −2) in order to change class 1 to class −1. From this we can see that direction of the perturbations can explain the change of de facto classes. Considering the perturbations needed to change the predicted classes for classifier f 3, we need positive perturbation x ∈ (1, ∞) in order to change class −1 to class 1, and we need negative perturbation x ∈ (−∞, −1) in order to change class 1 to class −1. Note that direction of the perturbations can explain the change of de facto classes. Considering the perturbations needed to change the predicted classes for classifier f 2, we need negative perturbation x ∈ (−6, −1) in order to change class 1 to class −1. However, not only positive perturbation x ∈ (1, ∞) can change class −1 to class 1, but also negative perturbation x ∈ (−∞, −2) can change class −1 to class 1. 
Hence, the direction of the perturbations no longer explain the change of de facto classes for f 2. We saw that the directions of the perturbations of classifier f 3 explain more the change of de facto classes than classifier f 2. Also, when the Occam's razor principle was considered, we would prefer classifier f 3 over f 2 as they have same standard adversarial robustness for x ≤ 4 and f 2 has one more decision boundary point than f 3, i.e. more complex than f 3. Optimal Lexicographically Genuinely Robust Classifier (OLGRC) is defined as the maximal classifier based on Lexicographical Genuine Robustness (LGR), i.e. this classifier o satisfies either o = LGR g or o > LGR g for any classifier g. OLGRC is determined by expanding explored regions. If each expansion step is (almost everywhere) uniquely determined and expansion can fill the whole space R d, there will be unique OLGRC (in almost everywhere sense). Whether there is unique OLGRC (in almost everywhere sense) or not will be determined by the definition of the metric. We do not cover the detailed conditions for uniqueness. The behavior of OLGRC is similar to the behavior of the support vector machine (SVM) in that its boundary tries to maximize its distance (margin) to the data points. However, linear SVM can only be trained for linearly separable problems even if we assume exclusive belonging settings. On the other hand, Kernel SVM tries to maximize its distance based on the norms of the feature space. Thus, it is probably vulnerable to adversarial attacks in the input set while OLGRC tries to maximize its distance based on the norms of the input set in order to increase adversarial robustness. When we think about the problem setting in the toy example 2, the classifier f 3 is the OLGRC as it's impossible to have a classifier whose change of genuine adversarial accuracy is higher than f 3. We are going to use l 1, l 2, · · · to denote loss functions in this section unlike section 1.1 which were used to represent l p norms. As mentioned in the second properties of the human classification, we need a method that estimates whether a perturbed sample x has de facto class or not to avoid using pseudo adversarial examples in adversarial training. To do that, we train a discriminator that is trained to distinguish clean samples and adversarially perturbed samples. Even if its classification is incomplete because of the overlapping samples, this discriminator allows us to avoid using pseudo adversarial examples for adversarial training. Note that this discriminator has a similar role with the discriminator in Generative Adversarial Nets in that its gradients will be used to generate adversarial examples. In our training method, we will use different magnitudes of perturbations Then, the discriminator will assign corresponding classes for each magnitude. As we need to estimate previously allowed perturbation region X, we provides two different inputs for each class: adversarially perturbed samples L(θ, x, c x) and their opponents x * * = argmin. When we have a discriminator, we will use lexicographical optimization (that will be mentioned in 4.2) to prioritize by avoid generating samples in the previously allowed perturbation region using the discriminator, i.e. to make x * ∈ X C, and to make the perturbed samples adversarial. 
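As a rough illustration of the discriminator component just described, the sketch below trains a binary network whose output approximates p(x ∈ X'^C | x), i.e. the probability that a sample lies outside the previously explored region. The paper's discriminator assigns one class per perturbation magnitude and also uses the "opponent" samples x**, so this binary version is a deliberate simplification; the layer sizes and function names are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegionDiscriminator(nn.Module):
    """Estimates p(x outside the previously explored region | x)."""
    def __init__(self, input_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(input_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, x):
        return torch.sigmoid(self.net(x))

def discriminator_step(disc, opt, x_inside, x_outside):
    """One BCE step: label 0 for samples already covered by smaller perturbation
    magnitudes (and clean samples), label 1 for samples at the new magnitude."""
    opt.zero_grad()
    probs = torch.cat([disc(x_inside), disc(x_outside)])
    labels = torch.cat([torch.zeros(len(x_inside), 1), torch.ones(len(x_outside), 1)])
    loss = F.binary_cross_entropy(probs, labels)
    loss.backward()
    opt.step()
    return loss.item()
```

The gradient of such a discriminator can then be used, as in the method above, to steer new adversarially perturbed samples away from regions that were already considered at smaller magnitudes.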
Gradient Episodic Memory (GEM) was originally developed to prevent catastrophic forgetting, i.e. the situation where a network trained on some tasks, and then trained on a new task after finishing the previous tasks, performs poorly on (forgets) the previous tasks. GEM is a method that locally minimizes the loss for task t without increasing the losses of all previous tasks k < t. It is based on a first-order approximation of the loss and on the angles between the different loss gradients. To the best of our knowledge, lexicographical optimization of neural networks has only been used to avoid catastrophic forgetting in continual learning. However, we argue that lexicographical optimization of neural networks is needed not only for traditional multi-task learning (MTL) but also for single-task learning (STL). Single-task learning problems can be described as learning tasks that have only one target loss; however, we often add regularization terms to the training loss in order to prevent over-fitting. As reducing the main (target) loss is more important than reducing the regularization terms, we can use lexicographical optimization that prioritizes the main loss. Progressively growing generative adversarial networks use images of different complexity; one could also apply lexicographical optimization in such models so that the discriminator and the generator make sure to learn simple structures correctly first. As Lexicographical Genuine Robustness (LGR) also considers multiple accuracies with a preference order, it can likewise be considered a problem that requires lexicographical optimization. To better understand GEM, assume there are losses l_1(θ), ..., l_T(θ) with a lexicographical preference; in other words, we want to reduce l_t(θ) without increasing l_1(θ), ..., l_{t−1}(θ) for t ∈ {1, ..., T}. We also have (pre-projection) parameter updates g_1, ..., g_T where g_t = −η ∇l_t(θ) and η is a learning rate. Lexicographical improvement is locally satisfied when ⟨g_t, g_k⟩ ≥ 0 for all k < t. If this is not satisfied, we can project g_t to the nearest g̃_t that does satisfy ⟨g̃_t, g_k⟩ ≥ 0 for all k < t, i.e. g̃_t = argmin_{g : ⟨g, g_k⟩ ≥ 0, ∀k<t} ||g_t − g||^2. As this is a quadratic program (QP), the GEM authors suggested solving it via its dual problem and recovering g̃_t, which is efficient when t ≪ p (where p is the number of parameters in the neural network). Unlike continual learning, which only reduces the loss for the current task without forgetting previous tasks, we reduce multiple losses simultaneously with lexicographical preferences. For each lexicographical training step, we could apply a weight update for task 1 through task T in turn, but this requires calculating g̃_t for each task t and is computationally expensive. Instead of applying several small steps for the different tasks, we suggest applying only one combined weight update for each lexicographical training step. We call this approach the "Onestep method". Given the suggested parameter updates g̃_1, ..., g̃_T, consider their weighted mean g̃_Onestep = Σ_{t=1}^{T} α_t g̃_t where α_1, ..., α_T ≥ 0 and Σ_{t=1}^{T} α_t = 1. This means that we can obtain the same lexicographical training effect by simply applying the combined weight update g̃_Onestep. Considering adversarial robustness for different perturbations is itself not new; since standard accuracy can be considered adversarial accuracy with zero perturbation, measuring standard accuracy together with adversarial accuracy can already be regarded as an example.
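Before continuing, here is a small numpy sketch (not the authors' code) of the GEM-style projection and the Onestep combination described above. The sequential correction shown is exact only when a single higher-priority constraint is active; with several constraints the exact projection is the small QP mentioned in the text, which GEM solves in its dual form.

```python
import numpy as np

def project_update(g_t, higher_priority_grads):
    """Project the proposed update g_t so that, to first order, it does not increase
    any higher-priority loss, i.e. <g_tilde, g_k> >= 0 for all k. Each conflicting
    component is removed in turn (exact for one constraint, approximate otherwise)."""
    g = g_t.copy()
    for g_k in higher_priority_grads:
        dot = float(np.dot(g, g_k))
        if dot < 0:                                  # conflict: update would increase l_k
            g = g - dot / float(np.dot(g_k, g_k)) * g_k
    return g

def onestep_update(projected_updates, alphas):
    """'Onestep' combination: a convex combination of the per-loss projected updates."""
    return sum(a * g for a, g in zip(alphas, projected_updates))
```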
Recently, a research considered the model's robustness against multiple perturbation types and suggested adversarial training schemes for multiple perturbation types. However, their adversarial training methods ("Max" and "Avg" strategies) did not consider the different importance of adversarial accuracy with different magnitudes. Similar concepts with pseudo adversarial examples and problems of using them for adversarial training have been studied. The concept of pseudo adversarial example is similar to the concept called "invariance-based adversarial example" whose predicted class by f is the same with the original class c x even if the predicted class by an oracle classifier o is changed. However, their definition requires an oracle classifier o which is hard to be defined while our definition requires predefined class for samples in clean data set X, and thus it is easier to do theoretical analysis. Invalid adversarial example is also similar to pseudo adversarial example. Their definition assumes data distribution of each class should be a manifold which limits the behavior of data distribution while we don't set manifold requirement in order to consider every possible situation. There were some attempts to balance the accuracy of clean data and accuracy on perturbed samples of classifiers. MixTrain uses adversarial training by dynamically adjusting the hyperparameter α for adversarial training. TRADES (TRadeoff-inspired Adversarial DEfense via Surrogate-loss minimization) method tries to minimize the difference between predicted of adversarially perturbed samples and predicted clean samples instead of minimizing the difference between predicted of adversarially perturbed samples and clean labels. To our best knowledge, there were no attempts to understand the different importance of adversarial accuracies of different magnitudes and prioritized training methods for adversarial robustness. We also handle the problem of simply increasing standard adversarial robustness, i.e. simply finding classifiers who are lexicographically more robust (LSR) than others. In order to compare the different training methods, we experimented with 5 different training methods: standard (non-adversarial) training, standard adversarial training, TRADES, OLSRC and OLGRC. OLSRC refers to the model that trained by applying Onestep method in subsection 4.3 without applying the adversary generation method that avoids generating samples in the previously allowed perturbation region in subsection 4.1. OLGRC refers to the model that trained by applying Onestep method in subsection 4.3 with adversary generation method that avoid generating samples in the previously allowed perturbation region in subsection 4.1. We used PGD method (using exact perturbation norms) to generate adversarially perturbed samples. We used ADAM algorithm to train the discriminator for OLGRC. We found that using mini-batch training for lexicographical optimization might not work well and lexicographical optimization would require full batch training. As lexicographical optimization uses different weights update from a standard method (which is using more than one objective function), simply using mini-batch gradients update can in catastrophic forgetting in other mini-batches. 
In other words, even if a weights update with lexicographical optimization can improve losses for current mini-batch satisfying the lexicographical improvement, as losses functions on different mini-batch will be different from current mini-batch, the current weight update can increase losses in different mini-batch. In order to avoid this problem, we applied full batch training in all experiments. We did not plot for the changes of genuine adversarial accuracy in our experiments. We think it is unnecessary to plot them for toy example 2. It is impossible to plot the changes of genuine adversarial accuracy for the MNIST experiment as we don't know the actual data distribution and don't have predefined classes for all data. (Note that even if we can use discriminators to estimate them, the discriminators depends on the trained classifiers and estimated changes of genuine adversarial accuracy may not be comparable.) We randomly generated 100 training and 100 test samples from the toy example. Fully connected neural network with one hidden layer (with 256 hidden neurons and leaky-ReLU non-linearity with parameter 0.2) was used for experiments. Full batch training was used with learning rate of 0.015 for 1000 epochs (iterations). Gradient descent algorithm was used for weights update. 6.1.1 The first experiment: examining the effect of lexicographical optimization In order to see the effect of lexicographical optimization in adversarial training, in this experiment, we only used perturbation norm 4 for adversarial attacks. We used α = 0.5 for standard adversarial training. In order to apply comparable effects on the training of OLSRC, we used α 1, α 2 = 0.5 for weights of Onestep method and 10 −10 was used for numerical stability in GEM algorithm. We used 1 λ = 1.0 for TRADES training. Standard (non-adversarial) training and OLGRC were not experimented. Comparing the change of accuracies and losses by iterations in Figure 9, we can observe that training processes of standard adversarial training and TRADES are not stable and classifiers can not be trained properly as both methods don't have prioritization of losses. (It seems TRADES method is less fluctuating than standard adversarial training, but it could be because of different effect of loss function.) On the other hand, training of OLSRC is much more stable as it prioritizes natural cross-entropy loss. Comparing the plots for change of adversarial accuracy in Figure 10, the final classifier obtained by standard adversarial training achieved 0 for both natural accuracy and adversarial accuracy (perturbation norm: 4). The final classifier obtained by TRADES training achieved 1.0 natural accuracy and almost 0 adversarial accuracy (perturbation norm: 4). However, it might achieve 1.0 natural accuracy by chance considering the fluctuating training accuracy. Final OLSRC achieved 1.0 natural accuracy and about 0.5 adversarial accuracy (perturbation norm: 4). In order to see the effect of avoiding already explored regions, in the second experiment, we used exact perturbation norms 1, 2, 3, 4, 5, 6 for adversarial attacks. Only OLSRC and OLGRC were experimented. We used for weights of Onestep method and 10 −3 was used for numerical stability in GEM algorithm. When the generated adversarially perturbed samples using discriminator were observed, we can check that the perturbation process avoids already explored regions even though it is incomplete. 
For example, when = 4, 5, perturbed samples went to the right direction without making any mistake (Recall that previously allowed perturbation region X is (−2 −, 2 +) when > 1, and in order to avoid already explored points, perturbed samples need to move outward.). Estimated p(x ∈ X C |x) also roughly capture the regions that need to be explored. Comparing the final classifiers and changes of adversarial accuracy in Figure 12, we can observe the shape of the trained OLGRC and its changes of adversarial accuracy are quite similar to f 3 in section 2 which is the theoretical OLGRC. Notice that it was not achievable when we only used training method for OLSRC as shown in the figure. Finding a human-like classifier Figure 11: Plotted graphs show estimated probabilities that the input is not in the previously allowed perturbation region, i.e. estimated p(x ∈ X C |x). Red: [−2, −1) and blue: dashed lines represent the regions for class −1 and class 1. Generated adversarially perturbed samples using discriminator were color plotted class −1: red and class 1: blue. In order to prevent catastrophic forgetting in different mini-batches in mini-batch training, we only used randomly sampled 2000 samples as training data and full batch training was used with learning rate 0.001 for 2000 epochs (iterations). Note that our will not be comparable with other previous analysis on MNIST data because we are using smaller training data. For this experiment, we used common architecture with two convolution layers and two fully connected layers which can be found at https://github.com/MadryLab/mnist_challenge. In order to speed up the training, we applied the ADAM algorithm after projections were applied because of the easiness of implementation. However, as applying adaptive gradient optimization after projections might violate lexicographical improvements, we speculate that it would be better to apply projection after adaptive gradient optimization method was applied. We used the Projected Gradient Descent method with 40 iterations to generates adversarial attacks. Only l 2 norm 4 adversarial attack is used for adversarial training for l 2 norm robust model and only l ∞ norm 0.3 adversarial attack is used for adversarial training for l ∞ norm robust model. We used α = 0.5 for standard adversarial training. In order to apply comparable effects on the training of OLSRC and OLGRC, we used α 1, α 2 = 0.5 for weights of Onestep method. We used 1 λ = 1.0 for TRADES training. Note that due to different formulation of losses training of TRADES will not be directly comparable with other training methods. When we compare the of different training methods (shown in Table 1), we can notice that using OLSRC and OLGRC are better than standard adversarial training Table 2: Results on test data when l 2 norm attacks were used for training and test expectation, trained OLSRC was not lexicographically more robust than trained OLGRC (even on the trained data). It could be the of simultaneously reducing more than one loss and applying the ADAM after projections were applied. (When it comes to natural accuracy for both experiments, TRADES achieved the best . It could be because of different formulation of losses. It also achieved the smallest training loss in both experiments among adversarially trained models. Results on training data were not shown.) In this work, we explained why existing adversarial training methods cannot train a classifier that has human-like robustness. 
We identified three properties of human-like classification: human-like classification should be robust against varying magnitudes of adversarially perturbed samples and not just on a fixed maximum norm perturbations, when we consider robustness on increasing magnitudes of adversarial perturbations, a human-like classifier should avoid considering already considered points multiple times, and human-like classification need to prioritize the robustness against adversarially perturbed samples with smaller perturbation norm. The suggested properties explain why previous methods for adversarial training and evaluation can be incomplete. For example, the second property explains why commonly used evaluation of adversarial robustness may not fully reveal our intuitive understanding of human-like robustness as standard adversarial accuracies don't avoid pseudo adversarial examples. We defined a candidate oracle classifier called Optimal Lexicographically Genuinely Robust Classifier (OL-GRC). OLGRC is (almost everywhere) uniquely determined when dataset and norm were given. In order to train a OLGRC, we suggested a method to generate adversarially perturbed samples using a discriminator. We proposed to use Gradient Episodic Memory (GEM) for lexicographical optimization and an approach to applying GEM when simultaneously reducing multiple losses with lexicographical preferences. From the first experiment on the toy example from section 2, we showed that lexicographical optimization enables stable training even when other adversarial training methods failed to do so. The second experiment on the same toy example showed that we can use discriminator to roughly generate adversarially perturbed samples by avoiding already explored regions. Because of that, we could train a classifier that is similar to the theoretical OLGRC. From the experiment on the MNIST data, we showed that our methods (OLSRC and OLGRC) achieved better performances on natural accuracy and adversarial accuracy than using standard adversarial training method. In our work, we applied GEM method to adversarial training which is not traditionally a multi-task learning (MTL) problem. This perspective also leads us to use multiobjective optimization (without lexicographical preference) to the problems those were not considered as such. For example, one can use multiobjective optimization to train a single ensemble model that reduces losses in different datasets instead of training different models separately and averaging them. Multiobjective optimization can be used to find an efficient black-box attack by finding adversarial examples that can fool a list of models. By replacing the calculation of an average, it can also be used to smoothen the interpretations of a model. Gradient episodic memory (GEM) with standard gradient descent optimization method is slow and it needs to be combined with adaptive gradient update algorithms. One needs to try applying adaptive gradient update algorithms before the projection was applied. Also, GEM with mini-batch training cannot prevent not increasing losses in the other mini-batches. It is a serious limitation in deep learning applications. Future work needs to find a way to handle this problem. To simplify the problem of finding a human-like classifier, we assumed the exclusive belonging which is unrealistic in many problems. We need analysis when this assumption is violated. 
We might need to consider easing the lexicographical preference as we expect to get the accuracy that is less than 1 when the exclusive belonging assumption is violated. Another approach would be estimating the hypothetical original data which satisfies the exclusive belonging assumption. In that approach, we consider current data are obtained by adding some input or label noises to the unknown original data. Our training method will find a classifier that is robust against one form (l 1, l 2 or l ∞) of adversarial attacks with different magnitudes. However, we need to find a classifier that is robust against many forms of adversarial attacks (including shift, rotation, spatial transformation, etc.) with different magnitudes as attackers can try different kinds of attacks to exploit the classifier. Our model suggest a (almost everywhere) unique classifier that is robust against one form of adversarial robustness with some conditions. Because of this, in order to find a classifier that is robust against many forms of adversaries, we need to define a combined metric (or its generalization). Figure 13, Below: Change of adversarial accuracy for f (x) by exact perturbation norm x. Notice that f does not satisfy non-increasing Lexicographical Standard Robustness property for 2 < x < 3. We can think of the same toy example in section 2 except for the prior probability. For example, p(c = −1) =. Then, it might be more reasonable to depending on a weighted version of when we define the change of adversarial accuracy functions, i.e. it will be depending on p(c=cx) rather than depending on. 9.3 Interpretation of a classifier: Negative Adversarial Remover (NAR) Definition 8. We define decision boundary (DB) for a classifier f and a class c ∈ Y. • Decision boundary: DB c = x ∈ R d: ∀N (x), ∃x 1, x 2 ∈ N (x) such that f (x 1) = c, f (x 2) = c where N (x) is a neighborhood of x. Note that when f is calculated from an accessible differentiable function g, i.e. f (x) = argmax c∈Y g(x) c, DB c is not equivalent to N B c = x ∈ R d: ∀N (x), ∃x 1, x 2 ∈ N (x) such that g(x 1) c ≥ p(C = c), g(x 2) c < p(C = c) x ∈ R d: g(x) c = p(C = c) when prior is not uniform. N B c will be called neutral boundary (NB). Definition 9. We define negative adversarial remover (NAR) and nearest decision boundary point (NDBP) for a classifier f, a sample x ∈ R d and a class c ∈ Y. • Negative adversarial remover: NAR c (x) = − argmin Note that NAR and NDBP for a sample x can be more than one points. One can check that x = NAR c (x) + NDBP c (x). This indicates that when f (x) = c, NAR c (x) can be an interpretation of the sample x as it is the perturbation that change a point in the decision boundary, i.e. NDBP c (x), to sample x. NDBP c (x) is also similar to the concept called baseline in Integrated Gradients interpretation method while NDBP c (x) is dependent on sample x unlike baseline which will be predefined by users. If f is calculated from an accessible differentiable function g, i.e. f (x) = argmax c∈Y g(x) c, we can use DeepFool algorithm or Fast Adaptive Boundary (FAB)-attack to estimate NAR c (x) when c = f (x). If we only have f, we can use Boundary attack or HopSkipJumpAttack to estimate NAR c (x) when c = f (x).
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BJeGFs9FsH
We try to design and train a classifier whose adversarial robustness is more resemblance to robustness of human.
Generating complex discrete distributions remains one of the challenging problems in machine learning. Existing techniques for generating complex distributions with high degrees of freedom depend on standard generative models like Generative Adversarial Networks (GAN), the Wasserstein GAN, and associated variations. Such models are based on an optimization involving the distance between two continuous distributions. We introduce a Discrete Wasserstein GAN (DWGAN) model which is based on a dual formulation of the Wasserstein distance between two discrete distributions. We derive a novel training algorithm and corresponding network architecture based on this formulation. Experimental results are provided both for synthetic discrete data and for real discretized data from MNIST handwritten digits. Generative Adversarial Networks (GAN) BID3 have gained significant attention in the field of machine learning. The goal of GAN models is to learn how to generate data based on a collection of training samples. The GAN provides a unique training procedure by treating the learning optimization as a two-player game between a generator network and a discriminator network. Since the learning process involves optimizing over two different networks simultaneously, the GAN is hard to train and is oftentimes unstable BID11. Newly developed models such as the Wasserstein GAN aim to improve the training process by leveraging the Wasserstein distance in the optimization, as opposed to the Kullback-Leibler or Jensen-Shannon divergences utilized by the original GAN. A source of interest in generative models arises from natural language processing. In natural language applications, a generative model is necessary to learn complex distributions of text documents. Although both the GAN and the Wasserstein GAN approximate a distance between two continuous distributions, and use a continuous sample distance, prior research efforts BID4 BID12 BID10 have applied these models to discrete probability distributions while advocating a few modifications. However, using a continuous sample distance in the discrete case may lead to discrepancies. More precisely, as will be demonstrated via explicit examples, a small continuous distance does not necessarily imply a small discrete distance. This observation has potentially serious ramifications for generating accurate natural language text and sentences using GAN models. To address the above issues, we propose a Discrete Wasserstein GAN (DWGAN) which is directly based on a dual formulation of the Wasserstein distance between two discrete distributions. A principal challenge is to enforce the dual constraints in the corresponding optimization. We derive a novel training algorithm and corresponding network architecture as one possible solution. Generative Adversarial Networks (GANs) BID3 model a sample-generating distribution by viewing the problem as a two-player game between a generator and a discriminator, which acts as an adversary. The generator takes an input from a random distribution p(z) over a latent variable z, and maps it to the space of data x. The discriminator takes inputs from the real data and samples from the generator, and attempts to distinguish between the real and generated samples. (Table 2: Example of a large gap between discrete and continuous distances for a discrete sample with 9 classes.) Formally, the GAN plays the following two-player minimax game: min_G max_D E_{x∼p_data}[log D(x)] + E_{z∼p(z)}[log(1 − D(G(z)))], where D is the discriminator network and G is the generator network.
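For reference, a minimal sketch of the corresponding loss computation in PyTorch is given below. It uses the non-saturating generator loss that is commonly used in practice rather than the literal minimax objective, and it assumes the discriminator `D` ends in a sigmoid so that its output is a probability; none of this is specific to the paper.

```python
import torch

def gan_losses(D, G, x_real, z, eps=1e-8):
    """Discriminator maximizes log D(x) + log(1 - D(G(z))); the (non-saturating)
    generator maximizes log D(G(z)). Losses are returned as quantities to minimize."""
    x_fake = G(z)
    d_loss = -(torch.log(D(x_real) + eps).mean()
               + torch.log(1.0 - D(x_fake.detach()) + eps).mean())
    g_loss = -torch.log(D(x_fake) + eps).mean()
    return d_loss, g_loss
```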
In theory, the GAN approximates the Jensen-Shannon divergence (JSD) between the generated and real data distribution. showed that several divergence metrics including the JSD do not always provide usable gradients. Therefore, optimization based on JSD minimization, as incorporated in the GAN, will not converge in certain cases. To overcome the problem, proposed the Wasserstein GAN which is an approximation to the dual problem of the Wasserstein distance. The authors showed that the Wasserstein distance provides sufficient gradients almost everywhere, and is more robust for training purposes. The dual problem of the Wasserstein distance involves an optimization over all 1-Lipschitz functions BID13. The Wasserstein GAN approximates the dual problem by clipping all network weights to ensure that the network represents a k-Lipschitz function for some value of k. A recent variant of the Wasserstein GAN BID4 enforced the k-Lipschitz property by adding a gradient penalty to the optimization. Although the formulation of the Wasserstein GAN approximates the Wasserstein distance between two continuous distributions, using a continuous sample distance x − y, existing research efforts BID4 BID12 BID10 have directly used it to model discrete probability distributions by adding the following modifications. Each component of the input vectors of training data is encoded in a one-hot representation. A softmax nonlinearity is applied in the last layer of the output of the generator to produce a probability that corresponds with the one-hot representation of the training data. During training, the output of the softmax layers becomes the input to the critic network without any rounding step. To generate a new sample, an argmax operation over each generator's softmax output vectors is applied to produce a valid discrete sample. The usage of continuous sample distance in the standard Wasserstein GAN for discrete problems as described above creates some discrepancies in the model. These discrepancies are illustrated in TAB0. In TAB0, we have two different outputs from the generator's softmax with the same real sample reference. Although the first softmax output produces the same value as the real sample when it is rounded using argmax (hence has discrete distance 0 to the real sample), it has a larger continuous distance compared to the second softmax output which produces one mistake when rounded (has discrete distance 1 to the real sample). In the discrete case, with a large number of classes, as shown in Table 2, even though the generator output produces a discrete sample with the same value as the real sample when rounded, there still exists a very large continuous distance. This difference between continuous and discrete distance becomes greater for a larger number of discrete classes. Motivated to correct modeling discrepancies as described in Section 2, which occur due to the mismatched use of the standard Wasserstein GAN in discrete problems, we propose a new GAN architecture that is directly based on the Wasserstein distance between two discrete distributions. Let a vector x = (x, x,...) be a discrete multivariate random variable where each component x(i) can take discrete values from {1, 2, 3, . . ., k}. Let P r and P s be two probability distributions over the set of values for x. 
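The discrepancy between the continuous and discrete sample distances can be reproduced in a few lines of numpy; the toy numbers below are chosen only for illustration and are not the entries of TAB0. The first softmax output rounds to the correct class but is diffuse, the second rounds to the wrong class but lies closer to the one-hot target in Euclidean distance.

```python
import numpy as np

def continuous_distance(p, q):
    """Euclidean distance between (rows of) softmax outputs / one-hot vectors."""
    return float(np.linalg.norm(p - q))

def discrete_distance(p, q):
    """Hamming distance after rounding each variable with argmax."""
    return int(np.sum(p.argmax(axis=1) != q.argmax(axis=1)))

real   = np.array([[1.00, 0.00, 0.00]])   # one-hot real sample: 1 variable, 3 classes
soft_a = np.array([[0.34, 0.33, 0.33]])   # rounds to the correct class
soft_b = np.array([[0.45, 0.55, 0.00]])   # rounds to a wrong class

print(continuous_distance(real, soft_a), discrete_distance(real, soft_a))  # ~0.81, 0
print(continuous_distance(real, soft_b), discrete_distance(real, soft_b))  # ~0.78, 1
```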
The Wasserstein distance between two probability distributions P_r and P_s is defined as W(P_r, P_s) = min_{γ ∈ Π(P_r, P_s)} Σ_{i,j} γ(x_i, x_j) d(x_i, x_j). The notation Π(P_r, P_s) denotes the set of all joint probability distributions γ(x, x') whose marginals are P_r and P_s respectively, and d(x_i, x_j) denotes the elementary distance between two samples x_i and x_j. We are particularly interested in the sample distance defined as the Hamming distance (the sum of the zero-one distances of the components), i.e. d(x_i, x_j) = Σ_{l=1}^{m} I(x_i(l) ≠ x_j(l)). TAB1 shows an example of this sample distance metric. As is visible in the formulation above, computing the Wasserstein distance between two discrete probability distributions is a Linear Program (LP) whose runtime is polynomial in the size of the problem. However, for real-world discrete distributions, the size of the problem grows exponentially. For example, if the number of variables in the vector x is 100, and each variable can take values in the set {1, 2, ..., 10} so that k = 10, the size of the LP is O(10^100), reflecting the number of configurations for x. The resulting LP is intractable to solve. We follow a similar approach by considering the dual formulation of the Wasserstein distance. Kantorovich duality BID2 BID13 tells us that the dual linear program of the Wasserstein distance can be computed as max_f E_{x∼P_r}[f(x)] − E_{x∼P_s}[f(x)], subject to f(x_i) − f(x_j) ≤ d(x_i, x_j) for all pairs x_i, x_j. The function f maps a sample to a real value. Note that unlike the continuous Wasserstein distance, in which the maximization is over all 1-Lipschitz functions without additional constraints, the maximization above is over all functions that satisfy the inequality constraints in Eq. 5. The dual formulation of the Wasserstein distance is still intractable since the maximization is over all functions satisfying these inequality constraints. We aim to approximate the dual Wasserstein formulation by replacing f with a family of parameterized functions f_w that satisfy the inequality constraints. The parameterized functions f_w are modeled using a neural network. Unfortunately, it is difficult to construct a neural network architecture that models f_w while also explicitly satisfying the inequality constraints involving the discrete sample distance defined in Eq. 3. To overcome the problem of approximating f with neural networks, we note that the maximization in the dual formulation is equivalent to the following optimization: max_h E_{x∼P_r, x'∼P_s}[h(x, x')], subject to h(x, x') ≤ d(x, x'), where h(x, x') plays the role of f(x) − f(x') in the original dual. Instead of approximating f(x), we aim to design a neural network architecture that approximates h(x, x') and satisfies the inequality constraints in Eq. 5. The key idea is that this new optimization is equivalent to the original dual formulation of the Wasserstein distance (explained in the sequel), even though the optimal form for h is not explicitly specified. Our selected architecture for the generator network employs the same softmax nonlinearity trick as the standard Wasserstein GAN described in Section 2. The generator network is a parameterized function g_θ that maps random noise z to a sample in one-hot representation. The last layer of the generator network uses a softmax nonlinearity to produce a probability vector that corresponds to the one-hot representation of the real samples. Our key modeling difference lies in the critic network. The critic network takes two inputs, one from the real samples and one from the output of the generator.
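Before detailing the critic architecture, note that the primal LP defined above can be solved exactly when the supports are tiny, which is useful as a sanity check. The sketch below (an illustration, not part of the paper) computes the discrete Wasserstein distance for a given cost matrix with scipy's linprog; `p_r`, `p_s`, and `cost` are assumed toy inputs.

```python
import numpy as np
from scipy.optimize import linprog

def discrete_wasserstein(p_r, p_s, cost):
    """Exact W(P_r, P_s) = min_gamma sum_ij gamma_ij * cost_ij, with row sums p_r
    and column sums p_s. Feasible only for very small supports."""
    n_r, n_s = cost.shape
    A_eq = np.zeros((n_r + n_s, n_r * n_s))
    for i in range(n_r):
        A_eq[i, i * n_s:(i + 1) * n_s] = 1.0       # sum_j gamma_ij = p_r[i]
    for j in range(n_s):
        A_eq[n_r + j, j::n_s] = 1.0                # sum_i gamma_ij = p_s[j]
    b_eq = np.concatenate([p_r, p_s])
    res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n_r * n_s), method="highs")
    return res.fun

# Toy usage with a 0/1 (Hamming-style) cost between three configurations.
p_r = np.array([0.5, 0.3, 0.2])
p_s = np.array([0.2, 0.2, 0.6])
cost = 1.0 - np.eye(3)                             # d = 0 on the diagonal, 1 otherwise
print(discrete_wasserstein(p_r, p_s, cost))        # 0.4 for this example
```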
The architecture of the critic network is visualized in FIG2. Let y ∈ {0, 1}^{m×k} be the one-hot representation of x, where m is the number of variables and k is the number of classes for each variable. The critic network takes two inputs: y from the real training data, and ŷ from the output of the generator network. Let us define ρ_w as a parameterized function that takes the input pair (y, ŷ) and produces an output vector v ∈ [−1, 1]^m. From the generator output ŷ, we compute the rounded sample x̂. Let u ∈ {0, 1}^m be a vector that contains the element-wise zero-one distance between a real training sample x and the rounded sample x̂ from the generator, i.e. u(i) = I[x(i) ≠ x̂(i)]. We define our approximation to the function h as a parameterized function h_w given by h_w = u^T v = u^T ρ_w(y, ŷ). The "filter" vector u ensures that the output of h_w always satisfies the inequality constraints in Eq. 7, since |h_w(y, ŷ)| = |u^T v| ≤ Σ_i u(i) = d(x, x̂). An illustration of this neural network architecture and construction is provided in FIG2. As we can see from FIG2, the critic network consists of two separate sub-networks. The first sub-network takes input from a batch of samples of the training data, while the second sub-network takes input from a batch of samples produced by the generator. Each sub-network has its own set of intermediate layers. The outputs of the two sub-networks are concatenated and taken as input to a fully connected layer which produces a tensor of size n × m. The dimension n indicates the number of samples in a batch, and m is the number of variables. To produce a tensor v whose values range from −1 to 1, a tanh nonlinearity is applied. The "filter" tensor u is applied to v via an element-wise multiplication. The output of the critic network is calculated by taking the sum of the result of this element-wise multiplication of u and v, yielding a vector of n elements containing the value of h_w(y, ŷ) for each pair of real and generated samples. We also included additional modifications based on theory to facilitate the training of the networks. Note that since h(x_i, x_j) = f(x_i) − f(x_j), antisymmetry holds at the optimum, and we can attempt to enforce this condition known from theory: if we flip the inputs to h_w we should get the negative of the output, i.e. h_w(y, ŷ) = −h_w(ŷ, y). To model this fact, we randomly swapped the samples from the real training data and the generator output, so that some of the real data was fed to the first sub-network and some to the second sub-network. If a pair of samples was flipped, we multiplied the output of the network by −1. Another modification that we applied to the network was to introduce a scaling factor in the softmax function such that the output of the scaled softmax is closer to zero or one. Specifically, we applied the function softmax(x)(i) = exp(k·x(i)) / Σ_j exp(k·x(j)), for some constant k ≥ 1. The training algorithm for our proposed architecture is described in Algorithm 1.

Algorithm 1 Discrete Wasserstein GAN
1: Input: learning rate α, batch size n, the number of critic iterations per generator iteration n_critic
2: repeat
3:   for j = 1, ..., n_critic do
4:     Sample a batch {y_1, ..., y_n} from the real data
5:     Sample a batch of random noise {z_1, ..., z_n}
6:     w ← w + α · ∇_w [ (1/n) Σ_{i=1}^{n} h_w(y_i, g_θ(z_i)) ]
7:   end for
8:   Sample a batch {y_1, ..., y_n} from the real data
9:   Sample a batch of random noise {z_1, ..., z_n}
10:  θ ← θ − α · ∇_θ [ (1/n) Σ_{i=1}^{n} h_w(y_i, g_θ(z_i)) ]
11: until convergence

In contrast with the continuous GANs, where many models have been proposed to improve the performance of GAN training, only a few GAN formulations have been proposed for modeling discrete probability distributions.
BID4 use the standard continuous Wassersten GAN with adjustments described in Section 2. Similar techniques are used by BID12 to address several natural language generation tasks. augment the original GAN architecture with a maximum likelihood technique and combine the discriminator output with importance sampling from the maximum likelihood training. propose a Boundaryseeking GAN (BGAN) that trains the generator to produce samples that lie in the decision boundary of the discriminator. BGAN can be applied for discrete cases provided that the generator outputs a parametric conditional distribution. Other GAN models BID15 exploit the REINFORCE policy gradient algorithm BID14 to overcome the difficulty of backpropagation in the discrete setting. BID6 combine adversarial training with Variational Autoencoders BID7 to model discrete probability distributions. Evaluating the performance of generative models objectively and effectively is hard, since it is difficult to automatically tell whether a generated sample is a valid sample from the real distribution. Previous research advocates user studies with human graders, especially in image generation tasks, or proxy measures like perplexity and corpus-level BLEU in natural language generation. However, such techniques are far from ideal to objectively evaluate the performance of GAN models. To address the limitations above, we propose a synthetic experiment that captures the complexity of modeling discrete distributions, but still has a simple strategy to objectively evaluate performance. The synthetic experiment is based on a classic tic-tac-toe game. We generalize the classic 2 player tic-tac-toe game to include arbitrary k players and arbitrary m-by-m board sizes (rather than the default 3-by-3 board). The goal is to model the true generating distribution P r which is the uniform distribution over valid configurations of the board when a generalized tic-tac-toe game has ended (e.g. the final game state). We generalized the concept of a valid board in 3-by-3 games, in which one player has a winning state and marks filling a full column, row, or diagonal. For the purpose of our experiment, we made a simplification to the valid rule, i.e. as long as the board has at least one full column, row and diagonal taken by at least one player, it is considered to be a valid configuration. FIG3 shows examples of valid and non-valid board configurations. In our construction above, it is easy to check if a generated sample is a valid sample under the real distribution. Hence it is possible to validate objectively the performance of a generative model. Furthermore, it is also easy to sample from the real distribution to create synthetic training data. We uniformly sample random board configurations, accepting a sample if it is valid, and rejecting it if invalid. We construct several metrics to track the performance of the model. The first measure is the percentage of valid samples which characterizes the quality of the samples generated by the generator network. For a bigger board the percentage of valid samples does not tell much about the progress of learning since it takes a while to get to a valid sample. We construct another metric which is the average of maximum player's gain. The maximum player's gain for a board configuration is defined as the maximum number of cells taken by a player in a full column, row, or diagonal. FIG4 shows the value of maximum player's gain for three different 5-by-5 board configurations. 
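Both the validity rule and the maximum player's gain metric are straightforward to express in code. The sketch below is one reading of them rather than the authors' implementation (the cell encoding, the helper names, and the restriction to the two main diagonals are assumptions made here), together with the rejection-sampling procedure used to build the synthetic training data; the FIG4 boards discussed next can be checked with the same helpers.

import numpy as np

def lines(board):
    """Yield every row, column, and the two main diagonals of a square board."""
    m = board.shape[0]
    for i in range(m):
        yield board[i, :]
        yield board[:, i]
    yield np.diagonal(board)
    yield np.diagonal(np.fliplr(board))

def is_valid(board):
    """Valid if some full row/column/diagonal is taken entirely by one player.
    Cells: 0 = empty, 1..num_players = a player's mark."""
    return any(line[0] != 0 and np.all(line == line[0]) for line in lines(board))

def max_players_gain(board, num_players):
    """Maximum number of cells held by one player within any row/column/diagonal."""
    return max(int(np.sum(line == p))
               for line in lines(board)
               for p in range(1, num_players + 1))

def sample_valid_boards(size, num_players, n_samples, seed=0):
    """Rejection sampling from the uniform distribution over valid boards."""
    rng = np.random.default_rng(seed)
    boards = []
    while len(boards) < n_samples:
        board = rng.integers(0, num_players + 1, size=(size, size))
        if is_valid(board):
            boards.append(board)
    return boards

boards = sample_valid_boards(size=5, num_players=8, n_samples=3)
print(max_players_gain(boards[0], num_players=8))   # equals 5 for any valid 5-by-5 board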
In the left board, player 2 and 4 have the maximum (3 cells); in the middle board player 2 takes 4 cells; and in the right board, player 2 achieves the maximum of 5 cells. Note that for k-by-k boards, if the average of maximum player's gain is equal to k, it means that all the samples are valid. Therefore, closer average of maximum player's gain to k indicates a better quality of samples. Besides those two metrics, we also track the percentage of unique samples and the percentage of new samples, i.e. samples that do not appear in the training data. In the experiment, we compare our Discrete Wasserstein GAN model with the standard Wasserstein GAN model (with tricks described in Section 2) on 3-by-3 and 5-by-5 board with 2 players and 8 players. Note that the number of classes is equal to the number of players plus one since we need an additional class for encoding empty cells. We restrict the generator and critic networks in both models to have a single hidden layer within fully connected networks to ease training. As we can see from Figure 4, our DWGAN networks achieve good performance (in terms of the average of the percentage of valid samples and the maximum player's gain metrics) much faster than the standard WGAN with softmax and one-hot representation tricks. In both 3-by-3 boards with 2 players and 5-by-5 boards with 8 players our DWGAN networks only take less than a third of the iterations taken by the standard WGAN to achieve similar performance. We observe that our DWGAN networks have a mode collapse problem that occurs after achieving top performances. Figure 5a shows that the DWGAN can achieve the average of maximum player's gain close to 5 for a 5-by-5 board in 500 iterations while maintaining the percentage of unique samples close to 100%. After it produces those diverse samples, the network model begins to suffer from a mode collapse and the percentage of unique samples decrease to less than 10% after iteration 550. Based on our analysis, this behavior is caused by the fact that the network optimizes the function difference DISPLAYFORM0, which tends to cause an advantage if the values of g θ (z i) are not diverse. To overcome this issue, we add a norm penalty to the critic network optimization, i.e: DISPLAYFORM1 where λ is the penalty constant. Figure 5b shows the effect of the norm penalty to the performance of DWGAN and its sample diversity. We observe that the DWGAN network with a norm penalty can achieve 96% valid samples while maintaining the diversity in samples it generates (around 50% unique samples). To model more complex discrete distributions, we used MNIST digits discretized to binary values BID8 as the training data with the goal to generate new digits with our proposed Discrete Wasserstein GAN. As a baseline, we trained a standard Wasserstein GAN on the continuous digit dataset. Similar to our synthetic experiments, we restricted the generator and critic networks to have only a single hidden layer within fully connected networks. Figure 6 shows that our model produces a similar quality of discretized digit images compared to the continuous value digits produced by the standard Wasserstein GAN trained on continuous-valued data. We further generated 100 samples from our DWGAN model, prior to mode collapse, illustrating the diversity of samples. We proposed the Discrete Wasserstein GAN (DWGAN) which approximates the Wasserstein distance between two discrete distributions. 
We derived a novel training algorithm and corresponding network architecture for a dual formulation to the problem, and presented promising experimental . Our future work focuses on exploring techniques to improve the stability of the training process, and applying our model to other datasets such as for natural language processing. A linear program (LP) is a convex optimization problem in which the objective and constraint functions are linear. Consider a vector variable x ∈ R n, matrix A ∈ R n×m, and vectors c ∈ R n, and b ∈ R m. An LP is given in the following standard form, DISPLAYFORM0 The Lagrange dual function is given by, DISPLAYFORM1 The Lagrange dual problem is to maximize g(λ, ν) subject to λ 0. Equivalently, the dual problem may be written as an LP in inequality form with vector variable ν ∈ R m, DISPLAYFORM2 The dual of the above problem is equivalent to the original LP in standard form. Due to the weaker form of Slater's condition, strong duality holds for any LP in standard or inequality form provided that the primal problem is feasible. Similarly, strong duality holds for LPs if the dual is feasible. Consider discrete probability distributions over a finite set X with cardinality |X |. Assume an elementary sample distance d X (x 1, x 2) for x 1, x 2 ∈ X. The sample distance evaluates the semantic similarity between members of set X. Define P r (x) and P s (x) for x ∈ X as two discrete probability distributions. In this case, we may define the exact discrete Wasserstein distance between P r and P s as a linear program as follows, with D ∈ R |X |×|X | + whose matrix entries correspond to the sample distance DISPLAYFORM0 The dual LP is given as follows. DISPLAYFORM1 At the optimum it is known that ν = −µ, and the dual LP is equivalent to the following optimization problem. Note that there still exist |X | × |X | constraints. DISPLAYFORM2 Example 1. The following example provides a closer look at the dual optimization problem. Consider a finite set X = {1, 2, 3}. Let P s (x) be given by the discrete distribution P s = 0.2, P s = 0.7 and P s = 0.1. Similarly, let P r (x) be given by the discrete distribution P r = 0.4, P r = 0.4, P r = 0.2. Define the elementary sample distance d X (x 1, x 2) = 1 if x 1 = x 2 and d X (x 1, x 2) = 0 if x 1 = x 2. Therefore, the sample distance matrix D for this discrete example is the following: DISPLAYFORM3 The optimal value of the matrix T provides the optimal transport of mass from P s to P r, The objective value of the primal and dual is equal to 0.3 which is the total mass moved from P s to P r. In the solution to the dual problem, ν = [0 −1 0] T and µ = −ν. In this example, it is seen that the optimal ν = −µ. DISPLAYFORM4 For the synthetic experiments, we use Julia v0.5.2 programming language with Knet deep learning framework. Below is the code containing functions needed to generate tic-tac-toe board. 
# np = number of players
# -- critic updates --
for j = 1:c_iter
    # fake samples
    z = KnetArray(randn(Float32, nz, n))
    fv = netG(wG, z, np, softmax_scaling)
    # critic output on the real + fake batches
    outputC = netC(wC, rv, fv, np, lambda)
    tl.log_value("output_C", outputC, itC)
    tl.log_value("output", outputC, it)
    gC = netC_grad(wC, rv, fv, np, lambda)
    tl.log_value("grad_C_mean", mean(map(x -> mean(Array(x)), gC)), itC)
    tl.log_value("grad_C_std", mean(map(x -> std(Array(x)), gC)), itC)
    # gradient ascent step on the critic parameters
    for i in 1:length(wC)
        wC[i] += lrC * gC[i]
    end
    it += 1
    itC += 1
end
if itC % decayitC == 0
    lrC = decayC * lrC
end
# train generator
for j = 1:g_iter
    z = KnetArray(randn(Float32, nz, n))
    outputG = netGC(wG, wC, z, rv, np, softmax_scaling, lambda)
    tl.log_value("output_G", outputG, itG)
    tl.log_value("output", outputG, it)
    gG = netGC_grad(wG, wC, z, rv, np, softmax_scaling, lambda)
    tl.log_value("grad_G_mean", mean(map(x -> mean(Array(x)), gG)), itG)
    tl.log_value("grad_G_std", mean(map(x -> std(Array(x)), gG)), itG)
    # generator parameter update; the original listing is truncated here, so the
    # loop below is assumed by analogy with the critic step (lrG is a generator
    # learning rate that does not appear in the excerpt)
    for i in 1:length(wG)
        wG[i] += lrG * gG[i]
    end
end
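For readers who prefer a framework-neutral restatement of the Section 3 construction to the Knet fragment above, the following PyTorch sketch is an illustration only (layer sizes, module names, and the merged two-sub-network structure are simplifications of the FIG2 architecture): it produces the tanh-bounded vector v, forms the filter vector u from the element-wise disagreement between the real sample and the rounded generator output, returns h_w = u^T v, and includes the scaled softmax applied at the generator output.

import torch
import torch.nn as nn

class DWGANCritic(nn.Module):
    """Sketch of the critic h_w(y, y') = u^T v with v in [-1, 1]^m."""

    def __init__(self, m, k, hidden=128):
        super().__init__()
        self.real_net = nn.Sequential(nn.Linear(m * k, hidden), nn.ReLU())
        self.fake_net = nn.Sequential(nn.Linear(m * k, hidden), nn.ReLU())
        self.merge = nn.Linear(2 * hidden, m)

    def forward(self, y_real, y_fake):
        # y_real: one-hot real samples, y_fake: generator softmax outputs, both (n, m, k)
        a = self.real_net(y_real.flatten(1))
        b = self.fake_net(y_fake.flatten(1))
        v = torch.tanh(self.merge(torch.cat([a, b], dim=1)))        # (n, m), in [-1, 1]
        # Filter vector u: element-wise zero-one distance between the real sample
        # and the rounded (argmax) generator sample; no gradient flows through u.
        u = (y_real.argmax(dim=2) != y_fake.argmax(dim=2)).float()  # (n, m)
        return (u * v).sum(dim=1)            # h_w per pair; |h_w| <= Hamming distance

def scaled_softmax(logits, scale=5.0):
    """Softmax with a sharpening constant k >= 1, pushing outputs toward 0/1."""
    return torch.softmax(scale * logits, dim=-1)

A critic step would then take a gradient ascent step on the batch mean of this output, and a generator step a descent step on the same quantity, in the spirit of Algorithm 1.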
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
Bkv76ilDz
We propose a Discrete Wasserstein GAN (DWGAN) model which is based on a dual formulation of the Wasserstein distance between two discrete distributions.
We introduce the open-ended, modular, self-improving Omega AI unification architecture which is a refinement of Solomonoff's Alpha architecture, as considered from first principles. The architecture embodies several crucial principles of general intelligence including diversity of representations, diversity of data types, integrated memory, modularity, and higher-order cognition. We retain the basic design of a fundamental algorithmic substrate called an ``AI kernel'' for problem solving and basic cognitive functions like memory, and a larger, modular architecture that re-uses the kernel in many ways. Omega includes eight representation languages and six classes of neural networks, which are briefly introduced. The architecture is intended to initially address data science automation, hence it includes many problem solving methods for statistical tasks. We review the broad software architecture, higher-order cognition, self-improvement, modular neural architectures, intelligent agents, the process and memory hierarchy, hardware abstraction, peer-to-peer computing, and data abstraction facility. In today's AI research, most researchers focus on specific application problems and they develop the capabilities of their AI solutions only to the extent that these specific applications require them. While challenging AI problems such as natural language understanding require a broader view, most researchers do not begin with an all-encompassing architecture and then adapt to a specific application. It is usually more efficient to pursue a bottom-up development methodology for the experimental , and as a , progress in ambitious architectures for generality may have stalled. To achieve generality, a rigorous architectural approach has several benefits such as easing development, allowing future extensions while remaining backwards compatible, and exposing problems before they happen since we can conceptualize complex use-cases. In other words, it is at least better software engineering, however, there are also scientific benefits such as understanding the functions and capabilities required by a general-purpose AI system much better, and address these problems fully. Since the most general problem is attacked, the architecture can follow a rigorous design process which will eliminate redundancies, leading us to a more mathematically elegant design. And finally, since use-cases will lead the design, the will be empirically firmer than a special-purpose application. A design from first principles is rarely undertaken, and it is arduous, but it can produce highly effective systems. We build upon the most powerful architectures for general AI, and then identify the requirements, from which we introduce refinements to the existing architectures, introducing new architectural ideas and incorporating new AI technologies in the process. The ing deep technological integration architecture is a compact, scalable, portable, AI platform for general-purpose AI with many possible applications in wide domains. In this section, we review the requirements of a general AI system, and from this vantage point we formulate design principles for constructing a general system. A general AI system cannot contain any and all specific solutions in its memory, therefore it must equal the computer scientist in terms of its productive capacity of solutions. The requirement of a universal problem solver therefore is fundamental to any such design. 
Naturally, this implies the existence of Turing-complete programming languages, and a universal method to generalize -which implies a universal principle of induction such as Solomonoff induction. A suitably general probabilistic inference method such as Bayesian inference is implied since most AI problems are probabilistic in nature. It must have practically effective training methods for learning tasks, such as the GPU accelerated training methods used in deep learning. The system must have an integrated memory for cumulative learning. The architecture must be modular for better scalability and extensibility; human brain is a little like that as the neo-cortex has a grid of cortical columns, which are apparently functionally equivalent structures. A general AI system must be able to support robotics, however, it should not be limited to agent architectures; it must also support traditional applications like databases, web search, and mobile computing. To accommodate for such a wide variety of functions, the architecture must expose a Swiss army knife like AI toolkit, to provide a unified AI API to developers. Such an API can then be served over the cloud, or via fog computing. Machine learning applications generally require hardware with high performance computing support. Therefore, the architecture should be compatible with high performance computing hardware such as GPUs, and FPGAs to be able to scale to many clients. The general AI system must also address all the hard challenges of a natural environment as formulated by [, Chapter 2]: the system must cope with the partially observable environments, multi-agent environments, competition and cooperation, stochastic environments, uncertainty, nondeterminism, sequential environments, dynamic environments, continuous environments, and unknown environments. A tall order, if there were ever one. Therefore, the system must be designed with these features of the environment in mind, for accommodating their needs. AIXI BID6 addresses partially observable environments, however, the rest of the features require architectural support in most cases, such as the necessity of providing a theory-theory module (a cognitive module that has a theory of other minds), or showing that the system will discover and adapt to other minds. To provide for multi-agent environments, the system can offer a self-simulation virtualization layer so that the agent can conceive of situations involving entities like itself. To support proper modeling of environments like with stochastic and uncertainty, we need an extensive probabilistic representation language to deal with non-trivial probabilistic problems; the language must cover common models such as hiearchical hidden markov layer models; it should offer a wide range of primitives to choose from, which must be supplied by the architecture. The representation language must also provide the means to combine primitives meaningfully, and obtain short programs for common patterns. The mystique art of designing compact representation languages therefore remains a vital part of AI research. To provide for effective representation of things like sequential, dynamic, continuous environments, the architecture can provide effective representation primitives and schemas. For dealing with unknown environments, the architecture can provide an agent architecture that can engage in the exploration of the unknown, much as an animal does. 
Without doubt, the system must also accommodate common data types, and common tasks such as speech recognition, and the examples for more specific operations should be provided. It is important that the system allows one to implement a wide family of AI tasks for the system to be considered sufficiently general. If, for instance, the user cannot feasibly implement something like style transfer, that is popular in deep learning research, with the architecture, it should rather not be termed general. The system should support a wide range of structured, and unstructured data, including popular data types like image, audio, video, speech, and text, and have sufficiently rich models to represent these challenging kinds of data. These more human data types constitute the primary means by which humans can communicate with AI's directly. However, structured, regular and irregular data types also must be supported, since these originate from a variety of sources that can be consumed by the AI system. The system must also therefore provide an adequate perception architecture by which such a system can learn a world-representation from its sensorium which includes many senses. These processes should be sufficiently general that they can be adapted to any sort of sensorium that will work under known laws of physics. The system should also support an adequate intelligent agent architecture that supports typical goal following, or utility maximization architectures. Therefore, it also is a challenge to test system generality. Typically, a benchmark that consists of a large number of diverse AI tasks and datasets must be provided for the system to demonstrate generality. The benchmark should be diverse enough to include the whole gamut of AI problems such as typical pattern recognition problems of image recognition, speech recognition, but also natural language understanding, machine learning tasks like anomaly detection (over real-world datasets such as an industrial dataset), time-series prediction (commonly used for stock market analysis), robotics problems, game playing problems, and so forth with randomly varied parameters. We therefore arrive at an understanding of generalpurpose AI design that tries to maximize generality for every distinct aspect of a problem. The solution space must be wide enough to cover every problem domain. The methods must be independent from the data type. The tasks that can be performed should not be fixed. The system should be independent from the task to be solved; any task should be specifiable. The architecture must not depend either on a particular representation, it should cover a very wide range of representations to be able to deal with different kinds of environments. The intelligent agent code should not be environment specific, it must be adaptable to any environment and agent architecture; in other words, the system must be independent of the environment. Some principles of general intelligence are depicted in FIG0 on page 2. Many of the aforementioned problems have been addressed by existing AI architectures. We therefore take a well-understood general AI architecture called the Alpha architecture of Solomonoff BID9, and define some basic capabilities better, while incorporating newer models and methods from recent research. For the purposes of general-purpose AI, two most significant events have occured since Alpha was designed in 2002. 
First, the Gödel Machine architecture [] which also provides a level of self-reflective thinking, and presents an agent model around it. The other notable development is the immense success of deep learning methods, which now enable machines to achieve pattern recognition at human-level or better for many basic tasks. The present design therefore merges these two threads of developments into the Alpha framework. The architecture also provides for basic universal intelligent agents, and self-reflection like Gödel Machine does. Unlike Gödel Machine, we do not assume that the environment is known to a substantial degree, such things are assumed to be learnt. Like the Alpha architecture, we assume a basic problem solver that is smart enough to bootstrap the rest of the system. This component is called the AI Kernel. The system is thought to be parameter free, dependent only on the data, and the commands given. The system's interface is a graphical web-based application that allows the user to upload datasets and then apply AI tasks from the library. The system also provides an API for programming novel tasks. A basic graphical programming environment is considered for later releases since the system aims to be usable by non-programmers. We review the major components of the system architecture, and explain their functions. The AI kernel is an inductive programming system that should use a universal reference machine such as LISP. We have proposed using Church as the reference machine of such a system. However, what matters is that the AI kernel must be able to deal with all types of data, and tasks. We assume that the reference machine is variable in the right AI kernel. The kernel must be a compact code base that can run on a variety of hardware architectures to ensure portability, and the parallelization must support heterogeneous supercomputing platforms for high energy efficiency and scalability. The AI kernel supports sophisticated programmability, allowing the user to specify most machine learning tasks with a very short API. We employ OCaml generic programming to characterize the kernel's internal components, model discovery, and transfer learning algorithms. The AI kernel supports real-time operation, and can be configured to continuosly update long-term memory splitting running-time between currently running task and meta-learning. State-of-the-art bio-mimetic machine learning algorithms based on such methods as stochastic gradient descent, and evolutionary computation are available in the AI kernel, and thus chosen and used automatically. The AI kernel has integrated multi-term memory, meaning that it solves transfer learning problems automatically, and can remember solutions and representational states at multiple time scales. Heuristic Algorithmic Memory 2.0 extends Heuristic Algorithmic Memory [Özkural, 2011] to support multiple reference machines. Problem Solution Methods (PSMs) are methods that solve a given problem. These could be algorithmic solutions like sorting a list of numbers, or statistical methods like predicting a variable. The Alpha architecture basically tries a number of PSMs on a problem until it yields. However, in Omega, it is much better specified which PSMs the system should start with. Since the system is supposed to deal with unknown environments, we give priority to machine learning and statistical methods, as well model classes that directly address some challenging properties of the environment, and support hard applications like robotics. 
The diversity of the model classes and methods supported expand the range of Omega applications. The Alpha architecture can invent and retain new PSMs, that is why it should be considered an openended architecture; so is Omega. The architecture is taught how to use a problem solver via unstructured natural language examples, like the intent detection task in natural language processing. Both narrowly specialized and general-purpose methods are included in the initial library of problem solvers for initially high machine-learning capability. For approximating functions, there are model-based learning algorithms like a generic implementation of stochastic gradient for an arbitrary reference machine. For model discovery, model-free learning algorithms like genetic programming are provided. Function approximation facilities can be invoked by the ensemble machine to solve machine learning problems. Therefore, a degree of method independence is provided by allowing multistrategy solvers. A basic set of methods for solving scientific and engineering problems is provided. For computer science, the solutions of basic algorithmic problems including full software development libraries for writing basic computer programs for each reference machine (standard library). For engineering, basic optimization methods and symbolic algebra. In the ultimate form of the architecture, we should have methods for computational sciences, physical, and life sciences. A full range of basic data science / machine learning methods are provided including:Clustering Clustering is generalized to yield automated statistical modeling. Universal induction can be used to infer a PDF minimizing expected divergence (AI kernel function BID2, logistic regression BID3, and SVM BID9 are supported. Generalpurpose algorithm invokes AI kernel universal induction routines to learn a mapping from the input to a finite set. NID based classifier works with arbitrary bitstrings. Regression General-purpose algorithm invokes universal induction routines in the AI kernel to learn a stochastic operator mapping from the data domain to a real number. Classical algorithms of linear regression, logistic regression, and SVM are supported. Outlier detection The generalized outlier detection finds the points least probable given the rest of the dataset using a generalization of z-score; to first model the data again a universal set induction invocation characterizes the data. Each algorithm mentioned is exposed as a PSM in the system. An ensemble machine is introduced to the system which runs PSMs in parallel with time allocated in accordance with their expected probability of success. The associations between tasks and their success are remembered as a stochastic mapping problem solved with the universal induction routines of AI Kernel, guiding future decisions. The ensemble machine itself is exposed as a PSM. We define eight reference machines to widen the range of solutions obtainable, and types of environments/applications addressable. (GNN) representation language that encompasses common neuron types and architectures used in neural network research. It is a graphical metalanguage that can be used to define a large number of network architectures. Formally, it uses a multipartite labeled directed graph with typed vertices, as a generic representation to represent neural circuits, and the richer sort of representation allows us to extend the model to more biologically plausible, or with neuroscience-inspired models. 
The system uses this representation to facilitate automated model discovery of the right neural network for the given task when evaluating the MetaNet representation language. Church We use the Church language BID4 to represent probability distributions and solve basic algorithmic problems like adding a list of numbers, and the Towers of Hanoi problem. Components expose their interfaces in Church machine, expanding self-reflection capability. Probabilistic Logic We define a probabilistic logic programming language to deal with uncertainty and stochasticity, and the ability to solve reasoning problems. Bayesian Networks We define a general class of bayesian networks that can be used to deal with uncertainty. We use an analog computing model to represent dynamical, continuous and stochastic systems better. Picture We use the Picture BID6 language to deal with images. We use an LAPACK based matrix algebra computing package such as GNU Octave to represent mathematical solutions. We define an asynchronous model of computation for conception of fine-grain concurrent models. There are a number of ready neural representations that the system can quickly invoke. et al., 2015] are included. Recursive Deep Networks Especially useful for language processing, these networks can recognize hierarchical structures easily BID8.The networks are specified as generic network architectures that can scale to required input/output size. Any hyper-parameters are designated as variables to be learned to the AI kernel so that the hyper parameters can adapt to the problem. These networks are considered to be sufficient as providing enough library primitives. The generators for neural networks are specified such that the program generator can indeed generate all of the library networks; however, re-inventing the wheel is not a feasible idea, therefore we aim to include a complete inventory of deep learning models. A high-level component architecture without the many inter-component interactions is depicted in Figure 2 on page 6. The system's process flow is straightforward. The user presents the system with a number of datasets, and the user selects a task to be applied to the data. The system automatically recognizes different data types, however, it also allows data to be specified in detail by a description language. The system will also accept tasks to be defined via a conversational engine, and a programming interface (API). The conversational engine can learn to recognize a task via given examples, mapping text to a task specification language and backwards. The programming interface accumulates the interfaces of all the components, unified under a single facade of a generic problem solver, which is formulated as a general optimizer BID2. As in Alpha, the most general interface the system provides is that of time-limited optimization, however, the system allows to solve any well-defined problems allowing the user to define any success criterion. The problem solver then predicts the probability that a PSM will succeed in solving the sospecified problem, and then translates the input data and the task to a format that the particular PSM will understand, and also translate any back. After a task is solved, the system automatically updates its long-term memory and writes a snapshot to the disk. It then executes higher-order cognition routines to improve its PSMs, and awaits for the next task. 
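The problem-solver facade sketched below is a toy illustration of the behaviour just described, not Omega's actual API (the encode/run/decode interface, the task-kind key, and the smoothed success estimator are placeholders): each PSM receives a share of the time budget in proportion to its estimated probability of success, inputs and outputs are translated to and from the PSM's own format, and task/solver outcomes are recorded so that later allocations can be conditioned on past performance.

class ProblemSolverFacade:
    """Toy sketch of a generic problem-solver facade over a set of PSMs."""

    def __init__(self, psms):
        self.psms = psms           # name -> object exposing encode/run/decode
        self.history = []          # (task_kind, psm_name, succeeded) records

    def success_probability(self, task_kind, name):
        # Placeholder estimator; Omega would instead apply the AI kernel's
        # induction over its long-term memory of task/solver associations.
        runs = [ok for kind, n, ok in self.history if n == name and kind == task_kind]
        return (sum(runs) + 1) / (len(runs) + 2)        # smoothed success rate

    def solve(self, task_kind, task, success_criterion, time_limit):
        probs = {n: self.success_probability(task_kind, n) for n in self.psms}
        total = sum(probs.values())
        best = None
        for name, psm in self.psms.items():
            budget = time_limit * probs[name] / total   # ensemble-style time allocation
            result = psm.decode(psm.run(psm.encode(task), budget))
            succeeded = success_criterion(result)
            self.history.append((task_kind, name, succeeded))
            if succeeded and best is None:
                best = result
        return best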
The execution of PSMs is parallelized as much as possible, as many PSMs may be run in parallel, but also some methods will allow data to be sharded, and will also parallelize well themselves. A main operational goal of the system is the ability to keep track of these parallelizations well enough to present an OS like stability to the user with a simple interface. The system also allows modules to be invoked concurrently and in a distributed manner to facilitate the design of distributed and decentralized applications using the API.The PSMs are executed with a hardware abstraction layer called Stardust that provides heterogeneous peer-to-peer computing capability to the architecture. MetaNet acts as a common neural network representa- tion language. Scientific Data Language is a data specification language that allows us to describe the type, format and semantic labels of the data. Two fundamental higher-order cognitive functions are defined as analysis and synthesis. Analysis decomposes a problem into components and then tries to solve the problem by first solving sub-problems and then merging their into a solution. Synthesis generates new PSMs by combining known PSMs. These operations give the ability to observe the code of its modules, and expand the system's repertoire of PSMs continuously. Analysis is self-reflective in that sense, and synthesis is self-reification. These functions correspond to a second kind of modularity where the tasks themselves can be decomposed, and entirely new PSMs may be invented and added as new modules to the system. The system continually self-reflects through updating its algorithmic memory for accelerating future solutions. It also keeps a record of task performance for trying to retroactively optimize past solutions. The components expose themselves via a high-level reference machine (Church) which acts as the system "glue code" to compose and decompose system functions. Since Church is quite expressive, it can also act as the system's task description code, and be used to recognize, decompose, and compose tasks and solutions. The synthesis and analysis modules operate over the system's modular cognition itself, helping with synthesis of new solution methods and analysis of problems. The system uses self-models to guide its self-improvement, for instance, by trying to optimize its performance. Analysis and synthesis can learn how to accomplish this as they can use the execution history to improve the retrospectively. After a new problem is solved, therefore, the system can continuously try to improve its consolidated memory of PSMs by trying to generate new PSMs that will improve performance over history, or by decomposing problems to accelerate their execution. A general objective such as maximizing energy efficiency of solutions can be sought for self-improvement. PSMs embody a basic kind of modularity in the system which are extended with modular neural architectures. These architectural schemas are a cortical organization that decomposes the networks into many cortical columns, which are henceforth again decomposed into micro-columns, with variant geometries. This organization schema is called MetaCortex, and it is a way to describe larger networks that can digest a variety of data sources, and construct larger neural models with better modularity, that is better data/model encapsulation based on affinity. 
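The two higher-order functions just described can be stated compactly; the following sketch is only a schematic rendering (the decompose/solve/merge callables stand in for capabilities that the AI kernel and the Church-level glue code would supply, and plain composition is used as the simplest stand-in for combining PSMs).

def analysis(problem, decompose, solve, merge):
    """Analysis: decompose a problem into sub-problems, solve them recursively,
    and merge the partial results into a solution."""
    parts = decompose(problem)
    if not parts:                  # atomic problem: solve it directly
        return solve(problem)
    sub_solutions = [analysis(part, decompose, solve, merge) for part in parts]
    return merge(problem, sub_solutions)

def synthesis(psm_a, psm_b):
    """Synthesis: build a new problem solution method out of known ones
    (function composition is used here as the simplest combinator)."""
    return lambda problem: psm_b(psm_a(problem))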
There are architectures such as multi-column committee networks that already implement these architectures, however, we would expand this to the entire library of networks described. Basic goal-following and utility-maximization agents can be realized similarly to time-series prediction. A typical two part model of learning representations (world model), and planning will be provided. A basic neural template will provide for multi-modal perception, multi-tasking, task decomposition and imitation learning. Neural templates corresponding to different kinds of agents such as Deep Mind's I2A model BID9 will be provided. The intelligent agents have a real-time architecture, they run at a fixed number of iterations every second. At this shortest period of synchronization, mostly backpropagation like learning algorithms, and simulation are allowed to complete. Everything else is run in the for longer time-scales. The processes and memory are organized hierarchically from long-term, heavy tasks to short-term, lightweight tasks. At the shortest scale, the system has neural memory units like LSTM, that last at the scale of one task, and model-based local training/inference algorithms like backpropagation algorithms. At a longer scale which corresponds to one iteration of problem solution procedure, the system remembers the best solutions so far, and it updates its mid-term memory with them to improve the solution performance in the next iteration. At this scale, the system will also engage in more processes such as the just mentioned memory update operation, and more expensive training algorithms such as genetic algorithms. At the highest scale, the system runs the most expensive model-free learning algorithms that can search over architectures, models, and components, and updates its persistent, long term memory based on the statistics about solutions of the new problem after solving it to guide the solution of new problems. The system also updates its PSMs by executing its higher-order cognitive functions at this scale. The architecture depends on a Hardware Abstraction Layer (HAL) in the form of Stardust peer-to-peer computing substrate. Stardust provides a bytecode representation that can be run on both multi-core cluster, GPU clusters, and FPGA clusters in the future. Stardust uses virtualization technology for compartmentalization and basic security. It uses a lightweight kernel, and provides parallel and distributed computing primitives. Peer-to-peer computing is facilitated by a node software that users download and operate to earn fees from the network with a cryptographic utility token. Since approaching human-level will typically require several petaflops/sec of computing speed, scaling to a significant number of global users requires peer-to-peer computing. If a proportion of profit is paid to the users, this can incentivize their contribution, providing a cost-effective computing platform for the architecture. A canonical data representation seems essential for Alpha family of architectures, because the PSMs can vary wildly in their assumptions. That is why, a common data format is required. The format we propose has standard representations for both structured (tabular, tree, network, etc.), unstructured (like text, audio, image, video) and complex data types. It supports web, cloud and fog computing data sources, thus also abstracting data ingestion. Each PSM handles the data differently, mapping to an internal representation if necessary. 
Therefore, every element of the data can be given a type, the format of the data may be specified (such as a 10x10 table of integers), and semantic labels may be ascribed to data elements. For instance, each dataset has a different domain, which may be designated with a domain path. Likewise, the physical units, or other semantic information may be annotated on a dataset with arbitrary attributes. The specification language must be also modular allowing to include other modules. The system automatically ingests known data formats, recognizes them, and converts into this common format which may henceforth be modified by the user. In the future, we are planning to design a data cleaning facility to improve the data at this stage. We gave the overview of an ambitious architecture based on Solomonoff's Alpha Architecture, and Schmidhuber's Gödel Machine architecture. The system is like Alpha, because it re-uses the basic design of PSMs. It is also similar to Gödel Machine architecture, because it can deploy a kind of probabilistic logical inference for reasoning and it can also observe some of its internal states and improve itself. The system also has basic provisions for intelligent agents, but it is not limited to them. We saw that the first important issue with implementing Alpha was to decide a basic set of primitives that will grant it sufficient intelligence to deal with human-scale problems. It remains to be demonstrated empirically that is the case, however, two of the eight reference machines have been implemented and seen to operate effectively. A criticism may be raised that we have not explained much about how the AI Kernel works. We only assume that it presents a generalized universal induction approximation that can optimize functions, rich enough to let us define basic machine learning tasks. It surely cannot be Levin search, but it could be any effective multi-strategy optimization method such as evolutionary architecture search BID7. We are using an extension of the approach in Fourier Network Search BID6 which is also likely general enough. The memory update is also not detailed but it is assumed that it is possible to extend an older memory design called heuristic algorithmic memory so that it works for any reference machine. We also did not explain in detail how many components work due to lack of space, which is an issue to be tackled in a longer future version of the present paper. In the future, we would like to support the architectural design with experiments, showing if the system is imaginative enough to come up with neural architectures or hybrid solutions that did not appear to humans. The algorithms used are expensive, therefore they might not work very well with the extremely large models required by the best vision processing systems; but to accommodate such models, it might be required that the system evolves only parts of the system and not the entire architecture. The system is intended to be tested on basic psychometric tests first, and a variety of data science problems to see if we can match the competence of the solution a human data scientist would achieve.
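As a closing illustration of the data abstraction facility described above, a descriptor of the kind the specification language is meant to express might look as follows. This is purely illustrative (the field names, the domain-path string, and the unit annotations are invented here; the actual Scientific Data Language syntax is not given in this overview).

# Hypothetical dataset descriptor, in the spirit of the Scientific Data Language:
# element types, tabular format, a domain path, semantic labels, physical units,
# and modular inclusion of other descriptor modules.
dataset_descriptor = {
    "name": "turbine_sensor_readings",
    "domain": "engineering/energy/wind",
    "format": {"kind": "table", "rows": 10000, "columns": 3},
    "columns": [
        {"name": "timestamp", "type": "datetime"},
        {"name": "rotor_speed", "type": "float", "unit": "rpm"},
        {"name": "power_output", "type": "float", "unit": "kW",
         "semantic_label": "prediction_target"},
    ],
    "includes": ["common_sensor_types"],
}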
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
r1lAmD94kQ
It's a new AGI architecture for trans-sapient performance. This is a high-level overview of the Omega AGI architecture, which is the basis of a data science automation system. Submitted to a workshop.
Conversational machine comprehension requires a deep understanding of the conversation history. To enable traditional, single-turn models to encode the history comprehensively, we introduce Flow, a mechanism that can incorporate intermediate representations generated during the process of answering previous questions, through an alternating parallel processing structure. Compared to shallow approaches that concatenate previous questions/answers as input, Flow integrates the latent semantics of the conversation history more deeply. Our model, FlowQA, shows superior performance on two recently proposed conversational challenges (+7.2% F1 on CoQA and +4.0% on QuAC). The effectiveness of Flow also shows in other tasks. By reducing sequential instruction understanding to conversational machine comprehension, FlowQA outperforms the best models on all three domains in SCONE, with +1.8% to +4.4% improvement in accuracy. Q3 The young girl and her dog set out a trip into the woods one day. Upon entering the woods the girl and her dog found that the woods were dark and cold. The girl was a little scared and was thinking of turning back, but yet they went on. … Figure 1: An illustration of conversational machine comprehension with an example from the Conversational Question Answering Challenge dataset (CoQA).Humans seek information in a conversational manner, by asking follow-up questions for additional information based on what they have already learned. Recently proposed conversational machine comprehension (MC) datasets BID20 BID3 aim to enable models to assist in such information seeking dialogs. They consist of a sequence of question/answer pairs where questions can only be understood along with the conversation history. Figure 1 illustrates this new challenge. Existing approaches take a single-turn MC model and augment the current question and context with the previous questions and answers BID3 BID20. However, this offers only a partial solution, ignoring previous reasoning 1 processes performed by the model. We present FLOWQA, a model designed for conversational machine comprehension. FLOWQA consists of two main components: a base neural model for single-turn MC and a FLOW mechanism that encodes the conversation history. Instead of using the shallow history, i.e., previous questions and answers, we feed the model with the entire hidden representations generated during the process of answering previous questions. These hidden representations potentially capture related information, such as phrases and facts in the context, for answering the previous questions, and hence provide additional clues on what the current conversation is revolving around. This FLOW mechanism is also remarkably effective at tracking the world states for sequential instruction understanding BID15: after mapping world states as context and instructions as questions, FLOWQA can interpret a sequence of inter-connected instructions and generate corresponding world state changes as answers. The FLOW mechanism can be viewed as stacking single-turn QA models along the dialog progression (i.e., the question turns) and building information flow along the dialog. This information transfer happens for each context word, allowing rich information in the reasoning process to flow. The design is analogous to recurrent neural networks, where each single update unit is now an entire question answering process. 
Because there are two recurrent structures in our modeling, one in the context for each question and the other in the conversation progression, a naive implementation leads to a highly unparallelizable structure. To handle this issue, we propose an alternating parallel processing structure, which alternates between sequentially processing one dimension in parallel of the other dimension, and thus speeds up training significantly. FLOWQA achieves strong empirical on conversational machine comprehension tasks, and improves the state of the art on various datasets (from 67.8% to 75.0% on CoQA and 60.1% to 64.1% on QuAC). While designed for conversational machine comprehension, FLOWQA also shows superior performance on a seemingly different task -understanding a sequence of natural language instructions (framed previously as a sequential semantic parsing problem). When tested on SCONE BID15, FLOWQA outperforms all existing systems in three domains, ing in a range of accuracy improvement from +1.8% to +4.4%. Our code can be found in https://github.com/momohuang/FlowQA. In this section, we introduce the task formulations of machine comprehension in both single-turn and conversational settings, and discuss the main ideas of state-of-the-art MC models. Given an evidence document (context) and a question, the task is to find the answer to the question based on the context. The context C = {c 1, c 2, . . . c m} is described as a sequence of m words and the question Q = {q 1, q 2 . . . q n} a sequence of n words. In the extractive setting, the answer A must be a span in the context. Conversational machine comprehension is a generalization of the singleturn setting: the agent needs to answer multiple, potentially inter-dependent questions in sequence. The meaning of the current question may depend on the conversation history (e.g., in Fig. 1, the third question such as ' Where?' cannot be answered in isolation). Thus, previous conversational history (i.e., question/answer pairs) is provided as an input in addition to the context and the current question. For single-turn MC, many top-performing models share a similar architecture, consisting of four major components: question encoding, context encoding, reasoning, and finally answer prediction. Initially the word embeddings (e.g., BID18 BID19 of question tokens Q and context tokens C are taken as input and fed into contextual integration layers, such as LSTMs BID9 or self attentions BID32, to encode the question and context. Multiple integration layers provide contextualized representations of context, and are often inter-weaved with attention, which inject question information. The context integration layers thus produce a series of query-aware hidden vectors for each word in the context. Together, the context integration layers can be viewed as conducting implicit reasoning to find the answer span. The final sequence of context vectors is fed into the answer prediction layer to select Figure 2: An illustration of the conversation flow and its importance. As the current topic changes over time, the answer to the same question changes accordingly.the start and end position of answer span. To adapt to the conversational setting, existing methods incorporate previous question/answer pairs into the current question and context encoding without modifying higher-level (reasoning and answer prediction) layers of the model. Our model aims to incorporate the conversation history more comprehensively via a conceptually simple FLOW mechanism. 
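As a concrete illustration of the inputs defined above, a single prediction step can be packaged as follows. The class and field names are chosen here for illustration only, and the history pair is an invented example loosely following the Figure 1 story; the sketch only fixes the task interface, not any model.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ConversationalMCExample:
    """One step of conversational MC: the model sees the context, the previous
    question/answer pairs, and the current question; in the extractive setting
    the answer is a (start, end) token span in the context."""
    context: List[str]                    # the evidence document, tokenized
    history: List[Tuple[str, str]]        # previous (question, answer) pairs
    question: str                         # the current question
    answer_span: Tuple[int, int]          # gold start/end token indices

example = ConversationalMCExample(
    context="The young girl and her dog set out a trip into the woods one day .".split(),
    history=[("Who went on a trip?", "The young girl and her dog")],
    question="Where?",
    answer_span=(10, 12),                 # "into the woods"
)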
We first introduce the concept of FLOW (Section 3.1), propose the INTEGRATION-FLOW layers (Section 3.2), and present an end-to-end architecture for conversational machine comprehension, FLOWQA (Section 3.3). Successful conversational MC models should grasp how the conversation flows. This includes knowing the main topic currently being discussed, as well as the relevant events and facts. Figure 2 shows a simplified CoQA example where such conversation flow is crucial. As the conversation progresses, the topic being discussed changes over time. Since the conversation is about the context C, we consider FLOW to be a sequence of latent representations based on the context tokens (the middle part of Fig 2). Depending on the current topic, the answer to the same question may differ significantly. For example, when the dialog is about the author's father's funeral, the answer to the question What did he feel? would be lonely, but when the conversation topic changes to five years after the death of the author's father, the answer becomes we saved each other. A naive implementation of FLOW would pass the output hidden vectors from each integration layer during the (i − 1)-th question turn to the corresponding integration layer for Q i. This is highly unparalleled, as the contexts have to be read in order, and the question turns have to be processed sequentially. To achieve better parallelism, we alternate between them: context integration, processing sequentially in context, in parallel of question turns; and flow, processing sequentially in question turns, in parallel of context words (see Fig. 3). This architecture significantly improves efficiency during training. Below we describe the implementation of an INTEGRATION-FLOW (IF) layer, which is composed of a context integration layer and a FLOW component. We pass the current context representation C h i for each question i into a BiLSTM layer. All question i (1 ≤ i ≤ t) are processed in parallel during training. DISPLAYFORM0 Q u e s t io n T u r n s C o n te x t W o rd s FLOW After the integration, we have t context sequences of length m, one for each question. We reshape it to become m sequences of length t, one for each context word. We then pass each sequence into a GRU 3 so the entire intermediate representation for answering the previous questions can be used when processing the current question. We only consider the forward direction since we do not know the (i + 1)-th question when answering the i-th question. All context word j (1 ≤ j ≤ m) are processed in parallel. DISPLAYFORM1 We reshape the outputs from the FLOW layer to be sequential to context tokens, and concatenate them to the output of the integration layer. DISPLAYFORM2 In summary, this process takes C h i and generates C h+1 i, which will be used for further contextualization to predict the start and end answer span tokens. When FLOW is removed, the IF layer becomes a regular context integration layer and in this case, a single layer of BiLSTM. We construct our conversation MC model, FLOWQA, based on the single-turn MC structure (Sec. 2.2) with fully-aware attention BID10. The full architecture is shown in Fig. 4. In this section, we describe its main components: initial encoding, reasoning and answer prediction. We embed the context into a sequence of vectors, C = {c 1, . . ., c m} with pretrained GloVe BID18, CoVE and ELMo BID19 embeddings. Similarly, each question at the i-th turn is embedded into a sequence of vectors Q i = {q i,1, . . 
We construct our conversational MC model, FLOWQA, based on the single-turn MC structure (Sec. 2.2) with fully-aware attention BID10. The full architecture is shown in Fig. 4. In this section, we describe its main components: initial encoding, reasoning, and answer prediction. We embed the context into a sequence of vectors, C = {c_1, ..., c_m}, with pretrained GloVe BID18, CoVe, and ELMo BID19 embeddings. Similarly, each question at the i-th turn is embedded into a sequence of vectors Q_i = {q_{i,1}, ..., q_{i,n}}, where n is the maximum question length over all questions in the conversation.

Attention (on Question) Following DrQA BID1, for each question, we compute word-level attention to enhance the context word embeddings with the question. The resulting question-specific context input representation is denoted C^0_i. For completeness, a restatement of this representation can be found in Appendix C.1.

Question Integration with QHierRNN Similar to many MC models, contextualized embeddings for the questions are obtained using multiple layers of BiLSTM (we use two layers): Q^1_i = BiLSTM(Q_i) and Q^2_i = BiLSTM(Q^1_i). We build a pointer vector for each question, to be used in the answer prediction layer, by first taking a weighted sum of the word vectors in the question: q̃_i = Σ_k α_{i,k} q^2_{i,k}, with α_{i,k} ∝ exp(w · q^2_{i,k}), where w is a trainable vector. We then encode the question history hierarchically with LSTMs to generate history-aware question vectors (QHierRNN): p_1, ..., p_t = LSTM(q̃_1, ..., q̃_t). The final vectors, p_1, ..., p_t, will be used in the answer prediction layer.

The reasoning component has several IF layers on top of the context encoding, inter-weaved with attention (first on the question, then on the context itself). We use fully-aware attention BID10, which concatenates all layers of hidden vectors and uses S(x, y) = ReLU(Ux)^T D ReLU(Uy) to compute the attention score between x and y, where U and D are trainable parameters and D is a diagonal matrix. Below we give the details of each layer (from bottom to top).

Integration-Flow ×2 First, we take the question-augmented context representation C^0_i and pass it through two IF layers.

Attention (on Question) After contextualizing the context representation, we perform fully-aware attention on the question for each context word.

Integration-Flow We concatenate the output from the previous IF layer with the attended question vectors, and pass the result as input to another IF layer.

Attention (on Context) We apply fully-aware attention on the context itself (self-attention).

Integration We concatenate the output from the previous IF layer with the self-attention vectors, and feed the result to a final BiLSTM layer.

We use the same answer span selection method BID29 BID10 to estimate the start and end probabilities P^S_{i,j}, P^E_{i,j} of the j-th context token for the i-th question. Since there are unanswerable questions, we also calculate the no-answer probability P^∅_i for the i-th question. For completeness, the equations for answer span selection are in Appendix C.1.

In this section, we evaluate FLOWQA on recently released conversational MC datasets. We experiment with the QuAC BID3 and CoQA BID20 datasets. While both datasets follow the conversational setting (Section 2.1), QuAC asked crowdworkers to highlight answer spans from the context, and CoQA asked for free text as an answer to encourage natural dialog. While this may call for a generation approach, BID31 shows that an extractive approach which can handle Yes/No answers has a high upper bound of 97.8% F1. Following this observation, we apply the extractive approach to CoQA. We handle Yes/No questions by computing P^Y_i and P^N_i, the probabilities of answering yes and no, using the same equation as P^∅_i (Eq. 17), and find a span in the context for all other questions.
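The sketch below illustrates the answer prediction step just described: each context position is scored against the history-aware question vector p_i, and separate scores are produced for no-answer, yes, and no. The bilinear scoring form and the pooling are assumptions made for illustration; the paper's exact equations are deferred to its Appendix C.1.

```python
# A sketch of span prediction with no-answer / yes / no handling (assumed bilinear form).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpanPredictor(nn.Module):
    def __init__(self, ctx_dim: int, q_dim: int):
        super().__init__()
        self.start = nn.Bilinear(ctx_dim, q_dim, 1)   # scores for P^S_{i,j}
        self.end = nn.Bilinear(ctx_dim, q_dim, 1)     # scores for P^E_{i,j}
        # one score each for "no answer", "yes", "no", from a pooled context vector
        self.special = nn.Linear(ctx_dim + q_dim, 3)

    def forward(self, ctx, p):
        # ctx: [m, ctx_dim] final context vectors; p: [q_dim] question pointer vector
        m = ctx.size(0)
        p_rep = p.expand(m, -1)
        start_logits = self.start(ctx, p_rep).squeeze(-1)   # [m]
        end_logits = self.end(ctx, p_rep).squeeze(-1)       # [m]
        pooled = torch.cat([ctx.mean(dim=0), p], dim=-1)
        no_ans, yes, no = self.special(pooled)
        return (F.softmax(start_logits, dim=0),
                F.softmax(end_logits, dim=0),
                torch.sigmoid(no_ans), torch.sigmoid(yes), torch.sigmoid(no))

pred = SpanPredictor(ctx_dim=250, q_dim=250)
ps, pe, p_noans, p_yes, p_no = pred(torch.randn(30, 250), torch.randn(250))
```

At prediction time, the no-answer (and, for CoQA, yes/no) scores are consulted first, and a span is selected from the start/end distributions otherwise, as described above.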
The main evaluation metric is F 1, the harmonic mean of precision and recall at the word level.5 In CoQA, we report the performance for each context domain (children's story, literature from Project Gutenberg, middle and high school English exams, news articles from CNN, Wikipedia, AI2 Science Questions, Reddit articles) and the overall performance. For QuAC, we use its original evaluation metrics: F 1 and Human Equivalence Score (HEQ). HEQ-Q is the accuracy of each question, where the answer is considered correct when the model's F 1 score is higher than the average human F 1 score. Similarly, HEQ-D is the accuracy of each dialog -it is considered correct if all the questions in the dialog satisfy HEQ.Comparison Systems We compare FLOWQA with baseline models previously tested on CoQA and QuAC. BID20 presented PGNet (Seq2Seq with copy mechanism), DrQA BID1 and DrQA+PGNet (PGNet on predictions from DrQA) to address abstractive answers. To incorporate dialog history, CoQA baselines append the most recent previous question and answer to the current question.6 BID3 applied BiDAF++, a strong extractive QA model to QuAC dataset. They append a feature vector encoding the turn number to the question embedding and a feature vector encoding previous N answer locations to the context embeddings (denoted as N -ctx). Empirically, this performs better than just concatenating previous question/answer pairs. Yatskar FLOWQA (N -Ans) is our model: similar to BiDAF++ (N -ctx), we append the binary feature vector encoding previous N answer spans to the context embeddings. Here we briefly describe the ablated systems: "-FLOW" removes the flow component from IF layer (Eq. 2 in Section 3.2), "-QHIER-RNN" removes the hierarchical LSTM layers on final question vectors (Eq. 7 in Section 3.3).Results TAB2 report model performance on CoQA and QuAC, respectively. FLOWQA yields substantial improvement over existing models on both datasets (+7.2% F 1 on CoQA, +4.0% F 1 on QuAC). The larger gain on CoQA, which contains longer dialog chains, 7 suggests that our FLOW architecture can capture long-range conversation history more effectively. TAB4 shows the contributions of three components: QHierRNN, the hierarchical LSTM layers for encoding past questions, FLOW, augmenting the intermediate representation from the machine reasoning process in the conversation history, and N -Ans, marking the gold answers to the previous N questions in the context. We find that FLOW is a critical component. Removing QHier-RNN has a minor impact (0.1% on both datasets), while removing FLOW in a substantial performance drop, with or without using QHierRNN (2-3% on QuAC, 4.1% on CoQA). Without both components, our model performs comparably to the BiDAF++ model (1.0% gain).8 Our model exploits the entire conversation history while prior models could leverage up to three previous turns. By comparing 0-Ans and 1-Ans on two datasets, we can see that providing gold answers is more crucial for QuAC. We hypothesize that QuAC contains more open-ended questions with multiple valid answer spans because the questioner cannot see the text. The semantics of follow-up questions may change based on the answer span selected by the teacher among many valid answer spans. Knowing the selected answer span is thus important. We also measure the speedup of our proposed alternating parallel processing structure (Fig. 3) over the naive implementation of FLOW, where each question is processed in sequence. 
Based on the training time each epoch takes (i.e., time needed for passing through the data once), the speedup is 8.1x on CoQA and 4.2x on QuAC. The higher speedup on CoQA is due to the fact that CoQA has longer dialog sequences, compared to those in QuAC. In this section, we consider the task of understanding a sequence of natural language instructions. We reduce this problem to a conversational MC task and apply FLOWQA. FIG4 gives a simplified example of this task and our reduction. Task Given a sequence of instructions, where the meaning of each instruction may depend on the entire history and world state, the task is to understand the instructions and modify the world accordingly. More formally, given the initial world state W 0 and a sequence of natural language instructions {I 1, . . ., I K}, the model has to perform the correct sequence of actions on W 0, to obtain {W 1, . . ., W K}, the correct world states after each instruction. Reduction We reduce sequential instruction understanding to machine comprehension as follows.• Context C i: We encode the current world state W i−1 as a sequence of tokens.• Question Q i: We simply treat each natural language instruction I i as a question.• Answer A i: We encode the world state change from W i−1 to W i as a sequence of tokens. At each time step i, the current context C i and question Q i are given to the system, which outputs the answer A i. Given A i, the next world state C i+1 is automatically mapped from the reduction rules. We encode the history of instructions explicitly by concatenating preceding questions and the current one and by marking previous answers in the current context similar to N -Ans in conversational MC tasks. Further, we simplify FLOWQA to prevent overfitting. Appendix C.2 contains the details on model simplification and reduction rules, i.e., mapping from the world state and state change to a sequence of token. During training, gold answers (i.e., phrases mapped from world state change after each previous instruction) are provided to the model, while at test time, predicted answers are used. We evaluate our model on the sequential instruction understanding dataset, SCONE BID15, which contains three domains (SCENE, TANGRAMS, ALCHEMY). Each domain has a different environment setting (see Appendix C.2). We compare our approaches with prior works BID15 BID8, which are semantic parsers that map each instruction into a logical form, and then execute the logical form to update the world state, and BID5, which maps each instruction into actions similar to our case. The model performance is evaluated by the correctness of the final world state after five instructions. Our learning set-up is similar to that of BID5, where the supervision is the change in world states (i.e., analogous to logical form), while that of BID15 and used world states as a supervision. The development and test set are reported in TAB5. Even without FLOW, our model (FLOWQA-FLOW) achieves comparable in two domains (Tangrams and Alchemy) since we still encode the history explicitly. When augmented with FLOW, our FLOWQA model gains decent improvements and outperforms the state-of-the-art models for all three domains. Sequential question answering has been studied in the knowledge base setting BID11 BID23 BID28, often framed as a semantic parsing problem. Recent datasets BID3 BID20 BID4 BID22 enabled studying it in the textual setting, where the information source used to answer questions is a given article. 
Existing approaches attempted on these datasets are often extensions of strong single-turn models, such as BiDAF BID24 and DrQA BID1, with some manipulation of the input. In contrast, we propose a new architecture suitable for multi-turn MC tasks by passing the hidden model representations of preceding questions using the FLOW design. Dialog response generation requires reasoning about the conversation history as in conversational MC. This has been studied in social chit-chats (e.g., BID21 BID14 BID7 and goal-oriented dialogs (e.g., BID2 BID13 . Prior work also modeled hierarchical representation of the conversation history BID17 . While these tasks target reasoning with the knowledge base or exclusively on the conversation history, the main challenge in conversational MC lies in reasoning about context based on the conversation history, which is the main focus in our work. We presented a novel FLOW component for conversational machine comprehension. By applying FLOW to a state-of-the-art machine comprehension model, our model encodes the conversation history more comprehensively, and thus yields better performance. When evaluated on two recently proposed conversational challenge datasets and three domains of a sequential instruction understanding task (through reduction), FLOWQA outperforms existing models. While our approach provides a substantial performance gain, there is still room for improvement. In the future, we would like to investigate more efficient and fine-grained ways to model the conversation flow, as well as methods that enable machines to engage more active and natural conversational behaviors, such as asking clarification questions. Recall that the FLOW operation takes in the hidden representation generated for answering the current question, fuses into its memory, and passes it to the next question. Because the answer finding (or reasoning) process operates on top of a context/passage, this FLOW operation is a big memory operation on an m×d matrix, where m is the length of the context and d is the hidden size. We visualize this by computing the cosine similarity of the FLOW memory vector on the same context words for consecutive questions, and then highlight the words that have small cosine similarity scores, i.e., the memory that changes more significantly. The highlighted part of the context indicates the QA model's guess on the current conversation topic and relevant information. Notice that this is not attention; it is instead a visualization on how the hidden memory is changing over time. The example is from CoQA BID20.Q1: Where did Sally go in the summer? → Q2: Did she make any friends there?Sally had a very exciting summer vacation. She went to summer camp for the first time. She made friends with a girl named Tina. They shared a bunk bed in their cabin. Sally's favorite activity was walking in the woods because she enjoyed nature. Tina liked arts and crafts. Together, they made some art using leaves they found in the woods. Even after she fell in the water, Sally still enjoyed canoeing. She was sad when the camp was over, but promised to keep in touch with her new friend. Sally went to the beach with her family in the summer as well. She loves the beach. Sally collected shells and mailed some to her friend, Tina, so she could make some arts and crafts with them. Sally liked fishing with her brothers, cooking on the grill with her dad, and swimming in the ocean with her mother. The summer was fun, but Sally was very excited to go back to school. 
She missed her friends and teachers. She was excited to tell them about her summer vacation.

[In the original, the same passage is repeated with different regions highlighted for each subsequent question transition; the highlighting, which marks where the FLOW memory changes most, cannot be reproduced here. The remaining transitions shown are: Q4: What was Tina's favorite activity? → Q5: What was Sally's? Q9: Had Sally been to camp before? → Q10: How did she feel when it was time to leave? Q16: Does she like it? → Q17: Did she do anything interesting there? (The conversation is now talking about Sally's trip to the beach with her family.) Q18: Did she fish and cook alone? → Q19: Who did she fish and cook with?]

We found that in the first transition (i.e., from Q1 to Q2), many memory regions change significantly. This is possibly due to the fact that the FLOW operation is taking in the entire context at the start. Later on, the FLOW memory changes more dramatically at places where the current conversation is focusing. For example, from Q4 to Q5, several places that talk about Sally's favorite activity show higher memory change, such as "was walking in the woods", "she enjoyed nature", and "enjoyed canoeing". From Q16 to Q17, we can see that several memory regions about the interesting things Sally did during the trip to the beach are altered more significantly. And from Q18 to Q19, we can see that all the activities Sally had done with her family are being activated, including "went to the beach with her family", "fishing with her brothers", "cooking on the grill with her dad", and "swimming in the ocean with her mother". Together, we can clearly see that the more active memory regions correspond to what the current conversation is about, as well as to the related facts revolving around the current topic. As the topic shifts through the conversation, regions with higher memory activity move along.

Another example (CoQA BID20). Context: When my father was dying, I traveled a thousand miles from home to be with him in his last days. It was far more heartbreaking than I'd expected, one of the most difficult and painful times in my life. After he passed away I stayed alone in his apartment. There were so many things to deal with. It all seemed endless. I was lonely. I hated the silence of the apartment. But one evening the silence was broken: I heard crying outside. I opened the door to find a little cat on the steps. He was thin and poor. He looked the way I felt. I brought him inside and gave him a can of fish. He ate it and then almost immediately fell sound asleep. The next morning I checked with neighbors and learned that the cat had been abandoned by his owner who'd moved out. So the little cat was there all alone, just like I was. As I walked back to the apartment, I tried to figure out what to do with him. But as soon as I opened the apartment door he came running and jumped into my arms.

Analysis (this analysis refers to a different example, about the pope's visit, whose context is not included here): In this example, there is also a jump in the dialog at the question "Where is the pope flying to?" The dialog jumps from discussing the event itself to the ending of the event (where the pope is leaving for London). BiDAF++ fails to grasp this topic shift. Although "How Great Thou Art" is a song that Boyle will sing during the event, it is not the song Boyle will sing when the pope is leaving for London. On the other hand, FLOWQA is able to capture this topic shift because the intermediate representation for answering the previous question "Where is the pope flying to?" will indicate that the dialog is revolving around the ending of the event (i.e., the last sentence).

Question-specific context input representation: We restate how the question-specific context input representation C^0_i is generated, following DrQA BID1. For each context word j, attention weights α_{i,j,k} over the words of question i are computed from the word embeddings, and the attended question vector for that context word is Σ_k α_{i,j,k} g^Q_{i,k}, where g^Q_{i,k} is the GloVe embedding of the k-th word in the i-th question and g^C_j is the GloVe embedding of the j-th context word. The final question-specific context input representation C^0_i contains: the word embeddings, a binary indicator em_{i,j} of whether the j-th context word occurs in the i-th question, and the output of the attention.

Answer Span Selection Method: We restate how answer span selection is performed (following BID29 BID10) to estimate the start and end probabilities P^S_{i,j}, P^E_{i,j} of the j-th context token for the i-th question. To address unanswerable questions, we also compute the probability of having no answer, P^∅_i. For each question Q_i, we first use P^∅_i to predict whether it has no answer. If it is answerable, we predict the span to be (j_s, j_e) with the maximum P^S_{i,j_s} P^E_{i,j_e}, subject to the constraint 0 ≤ j_e − j_s ≤ 15.

Hyper-parameter setting and additional details: We use spaCy for tokenization. We additionally fine-tune the GloVe embeddings of the top 1000 most frequent question words. All RNN output sizes are set to 125, so the BiRNN output size is 250.
The attention hidden size used in fully-aware attention is set to 250. During training, we use a dropout rate of 0.4 BID25 after the embedding layer (GloVe, CoVe and ELMo) and before applying any linear transformation. In particular, we share the dropout mask when the model parameter is shared, which is also known as variational dropout BID6. We batch the dialogs rather than individual questions. The batch size is set to one dialog for CoQA (since there can be as much as 20+ questions in each dialog), and three dialog for QuAC (since the question number is smaller). The optimizer is Adamax BID12 with a learning rate α = 0.002, β = (0.9, 0.999) and = 10 −8. A fixed random seed is used across all experiments. All models are implemented in PyTorch (http://pytorch.org/). We use a maximum of 20 epochs, with each epoch passing through the data once. It roughly takes 10 to 20 epochs to converge. We begin by elaborating the simplification for FLOWQA for the sequential instruction understanding task. First, we use the 100-dim GloVe embedding instead of the 300-dim GloVe and we do not use any contextualized word embedding. The GloVe embedding is fixed throughout training. Secondly, the embeddings for tokens in the context C are trained from scratch since C consists of synthetic tokens. Also, we remove word-level attention because the tokens in contexts and questions are very different (one is synthetic, while the other is natural language). Additionally, we remove self-attention since we do not find it helpful in this reduced QA setting (possibly due to the very short context here). We use the same hidden size for both integration LSTMs and FLOW GRUs. However, we tune the hidden size for the three domains independently, h = 100, 75, 50 for SCENE, TANGRAMS and ALCHEMY, respectively. We also batch by dialog and use a batch size of 8. A dropout rate of 0.3 is used and is applied before every linear transformation. Environment for the Three Domains In SCENE, each environment has ten positions with at most one person at each position. The domain covers four actions (enter, leave, move, and trade-hats) and two properties (hat color, shirt color). In TANGRAMS, the environment is a list containing at most five shapes. This domain contains three actions (add, move, swap) and one property (shape). Lastly, in ALCHEMY, each environment is seven numbered beakers and covers three actions (pour, drain, mix) dealing with two properties (color, amount).Reducing World State to Context Now, we give details on the encoding of context from the world state. In SCENE, there are ten positions. For each position, there could be a person with shirt and hat, a person with a shirt, or no person. We encode each position as two integers, one for shirt and one for hat (so the context length is ten). Both integers take the value that corresponds to being a color or being empty. In TANGRAMS, originally there are five images, but some commands could reduce the number of images or bring back removed images. Since the number of images present is no greater than five, we always have five positions available (so the context length is five). Each position consists of an integer, representing the ID of the image, and a binary feature. Every time an image is removed, we append it at the back. The binary feature is used to indicate if the image is still present or not. In ALCHEMY, there are always seven beakers. So the context length is seven. 
Each position consists of two numbers, the color of the liquid at the top unit and the number of units in the beaker. An embedding layer is used to turn each integer into a 10-dim vector. Reducing the Logical Form to Answer Next, we encode the change of world states (i.e., the answer) into four integers. The first integer is the type of action that is performed. The second and third integers represent the position of the context, which the action is acted upon. Finally, the fourth integer represents the additional property for the action performed. For example, in the ALCHEMY domain, (0, i, j, 2) means "pour 2 units of liquid from beaker i to beaker j", and (1, i, i, 3) means "throw out 3 units of liquid in beaker i". The prediction of each field is viewed as a multi-class classification problem, determined by a linear layer.
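The following is a small sketch of the SCONE reduction encodings described above, for the ALCHEMY domain: the world state (seven beakers) becomes the "context", and a world-state change becomes an "answer" of four integers (action type, two positions, and a property). The example values are taken from the descriptions above; the function names and vocabulary are illustrative assumptions.

```python
# A sketch of the ALCHEMY encodings used in the reduction to conversational MC.
from typing import List, Tuple

NUM_BEAKERS = 7

def encode_world_state(beakers: List[List[str]]) -> List[Tuple[str, int]]:
    """Encode each beaker as (color of the top unit, number of units); 'empty' if drained."""
    context = []
    for contents in beakers:
        top_color = contents[-1] if contents else "empty"
        context.append((top_color, len(contents)))
    return context

def encode_answer(action: int, i: int, j: int, prop: int) -> Tuple[int, int, int, int]:
    """The state change is four integers, each predicted by a separate classifier.
    E.g., (0, i, j, 2) = 'pour 2 units of liquid from beaker i to beaker j',
          (1, i, i, 3) = 'throw out 3 units of liquid in beaker i'."""
    return (action, i, j, prop)

# Example world: beaker 0 holds two units of red, beaker 1 holds one unit of green.
beakers = [["red", "red"], ["green"], [], [], [], [], []]
print(encode_world_state(beakers))
print(encode_answer(0, 0, 1, 2))  # pour 2 units from beaker 0 into beaker 1
```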
We propose the Flow mechanism and an end-to-end architecture, FlowQA, that achieves SotA on two conversational QA datasets and a sequential instruction understanding task.
We consider reinforcement learning and bandit structured prediction problems with very sparse loss feedback: only at the end of an episode. We introduce a novel algorithm, RESIDUAL LOSS PREDICTION (RESLOPE), that solves such problems by automatically learning an internal representation of a denser reward function. RESLOPE operates as a reduction to contextual bandits, using its learned loss representation to solve the credit assignment problem, and a contextual bandit oracle to trade-off exploration and exploitation. RESLOPE enjoys a no-regret reduction-style theoretical guarantee and outperforms state of the art reinforcement learning algorithms in both MDP environments and bandit structured prediction settings. Current state of the art learning-based systems require enormous, costly datasets on which to train supervised models. To progress beyond this requirement, we need learning systems that can interact with their environments, collect feedback (a loss or reward), and improve continually over time. In most real-world settings, such feedback is sparse and delayed: most decisions made by the system will not immediately lead to feedback. Any sort of interactive system like this will face at least two challenges: the credit assignment problem (which decision(s) did the system make that led to the good/bad feedback?); and the exploration/exploitation problem (in order to learn, the system must try new things, but these could be bad).We consider the question of how to learn in an extremely sparse feedback setting: the environment operates episodically, and the only feedback comes at the end of the episode, with no incremental feedback to guide learning. This setting naturally arises in many classic reinforcement learning problems (§4): a barista robot will only get feedback from a customer after their cappuccino is finished 1. It also arises in the context of bandit structured prediction BID41 BID9 (§2.2), where a structured prediction system must produce a single output (e.g., translation) and observes only a scalar loss. We introduce a novel reinforcement learning algorithm, RESIDUAL LOSS PREDICTION (RESLOPE) (§ 3), which aims to learn effective representations of the loss signal. By effective we mean effective in terms of credit assignment. Intuitively, RESLOPE attempts to learn a decomposition of the episodic loss into a sum of per-time-step losses. This process is akin to how a person solving a task might realize before the task is complete when and where they are likely to have made suboptimal choices. In RESLOPE, the per-step loss estimates are conditioned on all the information available up to the current point in time, allowing it to learn a highly non-linear representation for the episodic loss (assuming the policy class is sufficiently complex; in practice, we use recurrent neural network policies). When the system receives the final episodic loss, it uses the difference between the observed loss and the cumulative predicted loss to update its parameters. Algorithmically, RESLOPE operates as a reduction (§3.3) to contextual bandits , allowing the bandit algorithm to handle exploration/exploitation and focusing only on the credit assignment problem. RESIDUAL LOSS PREDICTION is theoretically motivated by the need for variance reduction techniques when estimating counterfactual costs (Dudík et al., 2014) and enjoys a no-regret bound (§3.3) when the underlying bandit algorithm is no-regret. 
Experimentally, we show the efficacy of RESLOPE on four benchmark reinforcement problems and three bandit structured prediction problems (§ 5.1), comparing to several reinforcement learning algorithms: Reinforce, Proximal Policy Optimization and Advantage Actor-Critic. We focus on finite horizon, episodic Markov Decision Processes (MDPs) in this paper, which captures both traditional reinforcement learning problems (§ 4) and bandit structured prediction problems (§ 2.2). Our solution to this problem, RESIDUAL LOSS PREDICTION (§ 3) operates in a reduction framework. Specifically, we assume there exists "some" machine learning problem that we know how to solve, and can treat as an oracle. Our reduction goal is to develop a procedure that takes the reinforcement learning problem described above and map it to this oracle, so that a good solution to the oracle guarantees a good solution to our problem. The specific oracle problem we consider is a contextual bandit learning algorithm, relevant details of which we review in §2.1.Formally, we consider a (possibly virtual) learning agent that interacts directly with its environment. The interaction between the agent and the environment is governed by a restricted class of finitehorizon Markov Decision Processes (MDP), defined as a tuple {S, s 0, A, P, L, H} where: S is a large but finite state space, typically S ⊂ R d; s 0 ∈ S is a start state; A is a finite action space 2 of size K; P = {P(s |s, a): s, s ∈ S, a ∈ A } is the set of Markovian transition probabilities; L ∈ R |S| is the state dependent loss function, defined only at terminal states s ∈ S; H is the horizon (maximum length of an episode).The goal is to learn a policy π, which defines the behavior of the agent in the environment. We consider policies that are potentially functions of entire trajectories 3, and potentially produce distributions over actions: π(s) ∈ ∆ A, where ∆ A is the A-dimensional probability simplex. However, to ease exposition, we will present the in terms of policies that depend only on states; this can be accomplished by simply blowing up the state space. Let d π h denote the distribution of states visited at time step h when starting at state s 0 and operating according to π: d π h+1 (s) = E s h ∼d π h,a h ∼π(s h) P(s | s = s h, a = a h) The quality of the policy π is quantified by its value function or q-value function: V π (s) ∈ R associates each state with the expected future loss for starting at this state and following π afterwards; Q π (s, a) ∈ R associates each state/action pair with the same expected future loss: DISPLAYFORM0 The learning goal is to estimate a policy π from a hypothesis class of policies Π with minimal expected loss: J(π) = V π (s 0). The contextual bandit learning problem can be seen as a tractable special case of reinforcement learning in which the time horizon H = 1. In particular, the world operates episodically. At each round t, the world reveals a context (i.e. feature vector) x t ∈ X; the system chooses an action a t; the world reveals a scalar loss t (x t, a t) ∈ R +, a loss only for the selected action that may depend stochastically on x t and a t. The total loss for a system over T rounds is the sum of losses: T t=1 t (x t, a t). The goal in policy optimization is to learn a policy π: x → A from a policy class Π that has low regret with respect to the best policy in this class. 
Assuming the learning algorithm produces a sequence of policies π 1, π 2,..., π T, its regret is: DISPLAYFORM0 The particular contextual bandit algorithms we will use in this paper perform a second level of reduction: they assume access to an oracle supervised learning algorithm that can optimize a cost-sensitive loss (Appendix C), and transform the contextual bandit problem to a cost-sensitive classification problem. Algorithms in this family typically vary along two axes: how to explore (faced with a new x how does the algorithm choose which action to take); and how to update (Given the observed loss t, how does the algorithm construct a supervised training example on which to train). More details are in Appendix A. In structured prediction, we observe structured input sequences x SP ∈ X and the goal is to predict a set of correlated output variables y SP ∈ Y. For example, in machine translation, the input x SP is a sentence in an input language (e.g., Tagalog) and the output y SP is a sentence in an output language (e.g., Chippewa). In the fully supervised setting, we have access to samples (x SP, y SP) from some distribution D over input/output pairs. Structured prediction problems typically come paired with a structured loss (y SP,ŷ SP) ∈ R + that measures the fidelity of a predicted outputŷ SP to the "true" output y SP. The goal is to learn a function f: X → Y with low expected loss under D: DISPLAYFORM0 Recently, it has become popular to solve structured prediction problems incrementally using some form of recurrent neural network (RNN) model. When the output y SP contains multiple parts (e.g., words in a translation), the RNN can predict each word in sequence, conditioning each prediction on all previous decisions. Although typically such models are trained to maximize cross-entropy with the gold standard output (in a fully supervised setting), there is mounting evidence that this has similar drawbacks to pre-RNN techniques, such as overfitting to gold standard prefixes (the model never learns what to do once it has made an error) and sensitivity to errors of different severity (due to error compounding). In order to achieve this we must formally map from the structured prediction problem to the MDP setting; this mapping is natural and described in detail in Appendix B.Our focus in this paper is on the recently proposed bandit structured prediction setting BID9 BID41, at training time, we only have access to input x SP from the marginal distribution D X. For example, a Chippewa speaker sees an article in Tagalog, and asks for a translation. A system then produces a single translationŷ SP, on which a single "bandit" loss (ŷ SP | x SP) is observed. Given only this bandit feedback, without ever seeing the "true" translation, the system must learn. Our goal is to learn a good policy in a Markov Decision Process (§ 2) in which losses only arrive at the end of episodes. Our solution, RESIDUAL LOSS PREDICTION (RESLOPE), automatically deduces per-step losses based only on the episodic loss. To gain an intuition for how this works, suppose you are at work and want to meet a colleague at a nearby coffee shop. In hopes of finding a more efficient path to the coffee shop, you take a different path than usual. While you're on the way, you run into a friend and talk to them for a few minutes. You then arrive at the coffee shop and your colleague tells you that you are ten minutes late. 
To estimate the value of the different path, you wonder: how much of these ten minutes is due to taking the different path versus talking to your friend? If you can accurately estimate that you spent seven minutes talking to your friend (you lost track of time), you can conclude that the disadvantage of the different path is three minutes. RESLOPE addresses the problem of sparse reward signals and credit assignment by learning a decomposition of the reward signal, essentially doing automatic reward shaping (evaluated in §5.3). Finally, it addresses the problem of exploration vs. exploitation by relying on a strong underlying contextual bandit learning algorithm with provably good exploration behavior. Akin to the coffee shop example, RESLOPE learns a decomposition of the episodic loss (i.e., the total time spent getting from work to the coffee shop) into a sum of per-time-step losses (i.e., the time spent on each activity along the route). RESLOPE operates as a reduction from reinforcement learning with episodic loss to contextual bandits. In this way, RESLOPE solves the credit assignment problem by predicting residual losses, and relies on the underlying contextual bandit oracle to solve explore/exploit. RESLOPE operates online, incrementally updating a policy π learn once per episode.

Figure 1: RESIDUAL LOSS PREDICTION: the system assigns a part-of-speech tag sequence to the sentence "International Conference for Learning Representations". Each state represents a partial labeling. The end state e = [Noun, Noun, Preposition, Verb, Noun] is associated with an episodic loss ℓ(e), the total Hamming loss in comparison to the optimal output structure e* = [Adjective, Noun, Preposition, Noun, Noun]. We emphasize that our algorithm assumes access to neither the optimal output structure nor the per-time-step Hamming loss; only the total Hamming loss is observed in this case (ℓ(e) = 2).

In the structured contextual bandit setting, we assume access to a reference policy π ref that was perhaps pretrained on supervised data and which we wish to improve; a hyperparameter β controls how much we trust π ref. As π learn improves, we replace π ref with π learn. In the RL setting, we set β = 0. We initially present a simplified variant of RESLOPE that mostly follows the learned policy (and the reference policy as appropriate), except for a single deviation per episode. This algorithm closely follows the bandit version of the Locally Optimal Learning to Search (LOLS) approach of BID9, with three key differences: residual loss prediction; alternative exploration strategies; and alternative parameter update strategies. We assume access to a contextual bandit oracle CB that supports the following API:
1. CB.ACT(π learn, x), where x is the input example; this returns a tuple (a, p), where a is the selected action and p is the probability with which that action was selected.
2. CB.COST(π learn, x, a) returns the estimated cost of taking action a in the given context.
3. CB.UPDATE(π learn, x, a, p, c), where x is the input example, a ∈ [K] is the selected action, p ∈ (0, 1] is the probability of that action, and c ∈ R is the target cost.
The requirement that the contextual bandit algorithm also predicts costs (CB.COST) is somewhat non-standard, but is satisfied by many contextual bandit algorithms in practice, which often operate by regressing on costs and picking the minimal predicted-cost action. We describe the specific contextual bandit approaches we use in §3.2. Algorithm 1 shows how our reduction is constructed formally.
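Before turning to Algorithm 1 itself, the sketch below illustrates the contextual bandit oracle interface just listed (ACT / COST / UPDATE). The epsilon-greedy exploration and the regression-based cost predictor inside the class are illustrative assumptions, not the specific oracles used in the paper.

```python
# A minimal Python sketch of the CB oracle API (ACT / COST / UPDATE) described above.
import random
from typing import Callable, List, Tuple

class ContextualBanditOracle:
    def __init__(self, predict_costs: Callable[[object], List[float]],
                 update_regressor: Callable[[object, int, float, float], None],
                 epsilon: float = 0.1):
        self.predict_costs = predict_costs        # x -> estimated cost per action
        self.update_regressor = update_regressor  # (x, a, target_cost, weight) -> None
        self.epsilon = epsilon

    def act(self, x) -> Tuple[int, float]:
        """Return (action, probability), balancing exploration and exploitation."""
        costs = self.predict_costs(x)
        k = len(costs)
        greedy = min(range(k), key=lambda a: costs[a])
        a = random.randrange(k) if random.random() < self.epsilon else greedy
        p = self.epsilon / k + (1.0 - self.epsilon) * (1.0 if a == greedy else 0.0)
        return a, p

    def cost(self, x, a: int) -> float:
        """Estimated cost of taking action a in context x (the non-standard CB.COST)."""
        return self.predict_costs(x)[a]

    def update(self, x, a: int, p: float, c: float) -> None:
        """Learn from bandit feedback c for the action actually taken."""
        self.update_regressor(x, a, c, 1.0 / p)
```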
It uses a MAKEENVIRONMENT(t) function to construct a new environment (randomly in RL and by selecting the tth example in bandit structured prediction). To learn a good policy, RESLOPE reduces long horizon trajectories to singlestep contextual bandit training examples. In each episode, RESLOPE picks a single time step to deviate. Prior to the deviation step, it executes π learn as a roll-in policy and after the deviation step, it executes a β mixture of π learn and π ref (Figure 5). At the deviation step, it calls CB.ACT to handle the exploration and choose an action. At every step, it calls CB.COST to estimate the cost of that action. Finally, it constructs a single contextual bandit training example for the deviation step, whose input was the observation at that step, whose action and probability are those that were selected by CB.ACT, and whose cost is the observed total cost minus the cost of every other action taken in this trajectory. This example is sent to CB.UPDATE. When the contextual bandit policy is an RNN (as in our setting), this will then compute a loss which is back-propagated through the RNN. Choose rollout policy π mix to be π ref with probability β or π learn t−1 with probability 1 − β 8:for all time steps h = 1... env. H do 9:x ← env. STATEFEATURES {computed by an RNN} 10: DISPLAYFORM0 a ← a The contextual bandit oracle receives examples where the cost for only one predicted action is observed, but no others. It learns a policy for predicting actions minimizing expected loss by estimating the unobserved target costs for the unpredicted actions and exploring different actions to balance the exploitation exploration trade-off (§ 3.2). The contextual bandit oracle then calls a cost-sensitive multi-class oracle (Appendix C) given the target costs and the selected action. CB.UPDATE: Cost Estimation Techniques. The update procedure for our contextual bandit oracles takes an example x, action a, action probability p and cost c as input and updates its policy. We do this by reducing to a cost-sensitive classification oracle (Appendix C), which expects an example x and a cost vector y ∈ R K that specifies the cost for all actions (not just the selected one). The reduction challenge is constructing this cost-sensitive classification example given the input to CB.UPDATE. We consider three methods: inverse propensity scoring BID18, doubly robust estimation (Dudík et al., 2014) and multitask regression .Inverse Propensity Scoring (IPS): IPS uses the selected action probability p to correct for the shift in action proportions predicted by the policy π learn. IPS estimates the target cost vector y as: DISPLAYFORM0, where 1 is an indicator function and where a is the selected action and c is the observed cost. While IPS yields an unbiased estimate of costs, it typically has a large variance as p → 0. The doubly robust estimator uses both the observed cost c as well as its own predicted costsŷ(i) for all actions, forming a target that combines these two sources of information. DR estimates the target cost vector y as: DISPLAYFORM0 The DR estimator remains unbiased, and the estimated loss y helps decrease its variance. The multitask regression estimator functions somewhat differently from IPS and DR. Instead of reducing to to cost-sensitive classification, MTR reduces directly to importance-weighted regression. MTR maintains K different regressors for predicting costs given input/action pairs. 
Given x, a, c, p, MTR constructs a regression example, whose input is (x, a), whose target output is c and whose importance weight is 1/p. Uniform: explores randomly with probability and otherwise acts greedily BID42.Boltzmann: varies action probabilities where action a is chosen with probability proportional to exp[−c(a)/temp], where temp ∈ R + is the temperature, and c(a) is the predicted cost of action a. Bootstrap Exploration: BID0 trains a bag of multiple policies simultaneously. Each policy in the bag votes once on its predicted action, and an action is sampled from this distribution. To train, each example gets passed to each policy Poisson(λ = 1)-many times, which ensures diversity. Bootstrap can operate in "greedy update" and "greedy prediction" mode . In greedy update, we always update the first policy in the bag exactly once. In greedy prediction, we always predict the action from the first policy during exploitation. For simplicity, we first consider the case where we have access to a good reference policy π ref but do not have access to good Q-value estimates under π ref.The only way one can obtain a Q-value estimate is to do a roll-out, but in a non-resettable environment, we can only do this once. We will subsequently consider the case of suboptimal (or missing) reference policies, in which the goal of the analysis will change from competing with π ref to competing with both π ref and a local optimality guarantee. Theorem 1. Setting β = 1, running RESLOPE for N episodes with a contextual bandit algorithm, the average returned policyπ = E n π n has regret equal to the suboptimality of π ref, namely: DISPLAYFORM0 where CB (N) is the cumulative regret of the underlying contextual bandit algorithm after N steps, and approx is an approximation error term that goes to zero as N → ∞ so long as the contextual bandit algorithm is no-regret and assuming all costs are realizable under the hypothesis class used by RESLOPE.In particular, when the problem is realizable and the contextual bandit algorithm is no-regret, RES-LOPE is also no-regret. The realizability assumption is unfortunate, but does not appear easy to remove (see Appendix D for the proof).In the case that π ref is not known to be optimal, or not available, we follow the LOLS analysis and obtain a regret to a convex combination of π ref and the learned policy's one-step deviations (a form of local optimality) and can additionally show the following (proof in Appendix E): Theorem 2. For arbitrary β, define the combined regret ofπ as: DISPLAYFORM1 The first term is suboptimality to π ref; the second term is suboptimality to the policy's own realizable one-step deviations. Given a contextual bandit learning algorithm, and under a realizability assumption, the combined regret ofπ satisfies: DISPLAYFORM2 Again, if the contextual bandit algorithm is no regret, then CB /N → 0 as N → ∞; see Appendix E for the proof. Finally, we present the multiple deviation variant of RESLOPE. Algorithm 2 shows how RESLOPE operates under multiple deviations. The difference between the single and multiple deviation mode is twofold: 1. Instead of deviating at a single time step, multi-dev RESLOPE performs deviations at each time step in the horizon; 2. Instead of generating a single contextual bandit example per episode, multi-dev RESLOPE generates H examples, where H is the length of the time horizon, effectively updating the policy H times. These two changes means that we update the learned policy π learn multiple times per episode. 
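The following schematic sketch shows the multiple-deviation episode loop just described (Algorithm 2), focusing on the residual-cost construction: the contextual bandit target for each step is the observed episodic loss minus the predicted costs of all other steps. The `env` and `cb` interfaces follow the ones used earlier in the paper (STATEFEATURES/STEP and ACT/COST/UPDATE), but this is an illustration under those assumptions, not the authors' code.

```python
# A schematic sketch of the multi-deviation RESLOPE episode loop (cf. Algorithm 2).
def reslope_episode(env, cb, policy):
    examples = []          # one contextual bandit example per time step
    predicted_costs = []   # cb's cost estimate for the action taken at each step
    while not env.done():
        x = env.state_features()          # computed by the RNN policy
        a, p = cb.act(policy, x)          # explore/exploit handled by the CB oracle
        predicted_costs.append(cb.cost(policy, x, a))
        examples.append((x, a, p))
        env.step(a)

    episodic_loss = env.loss()            # the only feedback, observed at episode end
    total_predicted = sum(predicted_costs)
    for h, (x, a, p) in enumerate(examples):
        # Residual target: episodic loss minus the predicted costs of all *other* steps.
        residual = episodic_loss - (total_predicted - predicted_costs[h])
        cb.update(policy, x, a, p, residual)
```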
Empirically, we found this to be crucial for achieving superior performance. Although the generated samples for the same episode are not independent, this is made up for by the huge increase in the number of available samples for training (i.e., T×H samples for multiple deviations versus only T samples in the single-deviation mode). (Like Algorithm 1, Algorithm 2 returns the average policy π̄ = (1/T) Σ_t π^learn_t.) The theoretical analysis that precedes still holds in this case, but only makes sense when β = 0 because there is no longer any distinction between roll-in and roll-out, and so the guarantee reduces to a local optimality guarantee. We conduct experiments on both reinforcement learning and structured prediction tasks. Our goal is to evaluate how quickly different learning algorithms learn from episodic loss. We implement our models on top of the DyNet neural network optimization package BID30.

Reinforcement Learning Environments We perform experiments in four standard reinforcement learning environments: Blackjack (classic card game), Hex (two-player board game), Cartpole (aka "inverted pendulum"), and Gridworld. Our implementations of these environments are described in Appendix F and largely follow the OpenAI Gym BID7 implementations. We report results in terms of cumulative loss, where loss is −1 × reward, for consistency with the loss-based exposition above and the loss-based evaluation of bandit structured prediction (§2.2). We also conduct experiments on structured prediction tasks. The evaluation framework we consider is the fully online setup described in §2.2, measuring the degree to which various algorithms can effectively improve by observing only the episodic loss, and effectively balance exploration and exploitation. We learn from one structured example at a time and we do a single pass over the available examples. We measure performance in terms of average cumulative loss on the online examples as well as on a held-out evaluation dataset. The loss on the online examples measures how much the algorithm is penalized for unnecessary exploration. We perform experiments on the three tasks described in detail in Appendix G: English Part of Speech Tagging, English Dependency Parsing, and Chinese Part of Speech Tagging. We compare against three common reinforcement learning algorithms: Reinforce BID47 with a baseline whose value is an exponentially weighted running average of rewards; Proximal Policy Optimization (PPO) BID39; and Advantage Actor-Critic (A2C) BID26. For the structured prediction experiments, since the bandit feedback is simulated based on labeled data, we can also estimate an "upper bound" on performance by running a supervised learning algorithm that uses full information (thus forgoing issues of both exploration/exploitation and credit assignment). We run supervised DAgger to obtain such an upper bound. In all cases, our policy is a recurrent neural network BID14 that maintains a real-valued hidden state and combines: (a) its previous hidden state, (b) the features from the environment (described for each environment in the preceding sections), and (c) an embedding of its previous action. These form a new hidden state, from which a prediction is made. Formally, at time step h, v_h is the hidden state representation, f(state_h) are the features from the environment, and a_h is the action taken.
The recursion is: v_h = ReLU(A [v_{h−1}; f(state_h); emb(a_{h−1})]), with v_0 = const. Here, A is a learned matrix, const is an initial (learned) state, emb is a (learned) action embedding function, and ReLU is a rectified linear unit applied element-wise. Given the hidden state v_h, an action must be selected. This is done using a simple feedforward network operating on v_h with either no hidden layers (in which case the output vector is o_h = B v_h) or a single hidden layer (where o_h = B_2 ReLU(B_1 v_h)). In the case of RESLOPE and DAgger, which expect cost estimates as the output of the policy, the output values o_h are used as the predicted costs (and a_h might be the argmin of these costs when operating greedily). In the case of Reinforce, PPO, and A2C, which expect action probabilities, these are computed as softmax(−o_h), from which, for instance, an action a_h is sampled. Details on optimization, hyperparameters, and "deep learning tricks" are reported in Appendix H.

We study several questions empirically: 1. How does RESIDUAL LOSS PREDICTION compare to policy gradient methods on reinforcement learning and bandit structured prediction tasks? (§5.1) 2. What is the effect of ablating various parts of the RESLOPE approach, including multiple deviations? (§5.2) 3. Does RESLOPE succeed in learning a good representation of the loss? (§5.3)

In our first set of experiments, we compare RESLOPE to the competing approaches on the four reinforcement learning tasks described above. FIG1 shows the results. In Blackjack, Hex, and Grid, RESLOPE outperforms all the competing approaches with lower loss earlier in the learning process (though for Hex and Grid they all finish at the same near-optimal policy). For Cartpole, RESLOPE significantly underperforms both Reinforce and PPO. Furthermore, in both Blackjack and Grid, bootstrap exploration significantly improves upon Boltzmann exploration. In general, both RESLOPE variants perform quite well. (We also conducted experiments with PPO using larger minibatches; these are reported in FIG6 in the appendix. In those experiments, we adjusted the minibatch size and number of epochs to match exactly the PPO algorithm described in BID39: in each iteration, each of N actors collects T timesteps of data, and the surrogate loss is constructed on these NT timesteps and optimized with minibatch Adam for K epochs. With these adjustments, PPO's performance falls between RESLOPE and Reinforce on Blackjack, is slightly superior to RESLOPE on Hex, better than everything on Cartpole, and roughly equivalent to RESLOPE on Gridworld. We were, unfortunately, unable to conduct these experiments in the structured prediction setting, because the state memoization necessary to implement PPO with large/complex environments overflowed our system's memory quite quickly.)

In our second set of experiments, we compare the same algorithms plus the fully supervised DAgger algorithm on the three structured prediction problems; the results are in FIG2. Here, we can observe RESLOPE significantly outperforming all alternative algorithms (except, of course, DAgger) on training loss (and also on heldout (development) loss; see Figure 9 in the appendix). There is still quite a gap to fully supervised learning, but nonetheless RESLOPE is able to reduce training error significantly on all tasks: by over 25% on English POS, by about half on English dependency parsing, and by about 10% on Chinese POS tagging.

In our construction of RESLOPE, there are several tunable parameters: which contextual bandit learner to use (IPS, DR, MTR), which exploration strategy (Uniform, Boltzmann, Bootstrap), and, for Bootstrap, whether to do greedy prediction and/or greedy update. In Table 1 (in the Appendix), we show the results on all tasks for ablating these various parameters. For the purpose of the ablation, we fix the "baseline" system as: DR, Bootstrap, and both greedy prediction and greedy updates, though this is not uniformly the optimal setting (and therefore these numbers may differ slightly from the preceding figures). The primary take-aways from these results are: MTR and DR are competitive, but IPS is much worse; Bootstrap is much better than either other exploration method (especially uniform, not surprisingly); greedy prediction is a bit of a wash, with only small differences either way; and greedy update is important.
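To make the exploration strategies compared in this ablation concrete, here is a small sketch of Boltzmann exploration over predicted costs and bootstrap exploration with a Poisson-resampled bag of policies, as described in Section 3.2. The helper interfaces, the temperature, and the bag handling are illustrative assumptions rather than the exact implementation used in the experiments.

```python
# A small sketch of the Boltzmann and bootstrap exploration strategies ablated above.
import math
import random
from typing import Callable, Sequence

def boltzmann_action(costs: Sequence[float], temperature: float = 1.0) -> int:
    """Sample an action with probability proportional to exp(-cost / temperature)."""
    weights = [math.exp(-c / temperature) for c in costs]
    r = random.random() * sum(weights)
    acc = 0.0
    for a, w in enumerate(weights):
        acc += w
        if r <= acc:
            return a
    return len(costs) - 1

def bootstrap_action(policies: Sequence[Callable[[object], int]], x) -> int:
    """Each policy in the bag votes once; sample an action from the vote distribution."""
    votes = [pi(x) for pi in policies]
    return random.choice(votes)

def poisson_one_sample() -> int:
    """Draw from Poisson(1) by inversion (Knuth's method)."""
    threshold, k, prod = math.exp(-1.0), 0, 1.0
    while True:
        prod *= random.random()
        if prod <= threshold:
            return k
        k += 1

def bootstrap_update(policies: Sequence, x, a: int, p: float, c: float,
                     greedy_update: bool = True) -> None:
    """Pass the example to each policy Poisson(1)-many times to keep the bag diverse;
    with greedy update, the first policy is always updated exactly once."""
    for i, pi in enumerate(policies):
        count = 1 if (greedy_update and i == 0) else poisson_one_sample()
        for _ in range(count):
            pi.update(x, a, p, c)   # assumed per-policy update method
```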
In Appendix I, we consider the effect of single vs multiple deviations and observe that multiple deviations are significantly important for all algorithms, with Reinforce and PPO behaving quite poorly with only single deviations.

In our final set of experiments, we study RESLOPE's performance under different, and especially non-additive, loss functions. Our goal is to investigate RESLOPE's ability to learn good representations for the episodic loss. We consider the following different incremental loss functions for each time step: Hamming (0/1 loss at each position), Time-Sensitive (the cost for an error at position h is equal to h) and Distance-Sensitive (the cost for predicting â instead of a is |â − a|). To combine these per-step losses into a per-trajectory loss for a trajectory τ of length H, we compute the H-dimensional loss vector suffered by RESLOPE along this trajectory. To consider both additive and non-additive combinations, we consider Lp norms of this loss vector. When the norm is L1, this is the simple additive loss. More generally, we consider ℓ(τ) = (Σ_{t=1}^{H} ℓ(t)^p)^{1/p} for any p > 0.

We have also conducted experiments with PPO with larger minibatches; these are reported in the appendix in FIG6. In those experiments, we adjusted the minibatch size and number of epochs to match exactly with the PPO algorithm described in BID39. In each iteration, each of N actors collects T timesteps of data. Then we construct the surrogate loss on these NT time steps of data, and optimize it with minibatch Adam for K epochs. With these adjustments, PPO's performance falls between RESLOPE and Reinforce on Blackjack, slightly superior to RESLOPE on Hex, better than everything on Cartpole, and roughly equivalent to RESLOPE on Gridworld. We were, unfortunately, unable to conduct these experiments in the structured prediction setting, because the state memoization necessary to implement PPO with large/complex environments overflowed our system's memory quite quickly.

We run six different experiments using different incremental and episodic loss functions. For each incremental loss function (i.e., hamming, time sensitive, distance sensitive) we run two experiments: using the total hamming loss (additive) and an Lp norm with p = 5 (non-additive). Results are presented in FIG3, where the x-axis shows the number of episodes and the y-axis measures the incremental loss using the true loss function (light colors) and using RESLOPE (dark colors); if RESLOPE worked perfectly, these would coincide. We observe the following. RESLOPE can always learn the optimal representation for the incremental loss when the episodic loss function is additive. This is the case for all three incremental loss functions: hamming, time sensitive, and distance sensitive. Learning is faster when the episodic loss function is additive. While RESLOPE is still able to learn a good representation even when using the L5 norm loss, this happens much later in comparison to the additive loss function (40k time steps for the L5 norm vs 20k for the total hamming loss). Not surprisingly, performance degrades as the episodic loss function becomes non-additive. This is most acute when using the L5 norm with the incremental hamming loss. This is expected: with the distance- and time-sensitive loss functions, RESLOPE observes a smoother loss and learns to distinguish between different time steps based on the implicit encoding of time and distance information in the observed loss, so it can still learn a good representation for these smoother episodic loss functions. This is shown empirically for the time- and distance-sensitive loss functions.
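As a small illustration of how the per-step losses are combined, here is a sketch in plain Python/NumPy; the function name is ours, not the paper's:

```python
import numpy as np

def episodic_loss(step_losses, p=1.0):
    """Combine an H-dimensional vector of per-step losses into one episodic loss
    via an Lp norm: ell(tau) = (sum_t ell(t)^p)^(1/p).
    p = 1 recovers the simple additive loss; p = 5 gives the non-additive setting."""
    v = np.asarray(step_losses, dtype=float)
    return float(np.sum(v ** p) ** (1.0 / p))

# Example: Hamming per-step losses for a length-6 trajectory
hamming = [0, 1, 0, 1, 1, 0]
print(episodic_loss(hamming, p=1))  # additive: 3.0
print(episodic_loss(hamming, p=5))  # non-additive L5 norm: ~1.246
```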
RESIDUAL LOSS PREDICTION builds most directly on the bandit learning to search frameworks LOLS BID9 and BLS BID40. The "bandit" version of LOLS was analyzed theoretically but not empirically in the original paper; BID40 found that it failed to learn empirically. They addressed this by requiring additional feedback from the user, which worked well empirically but did not enjoy any theoretical guarantees. RESLOPE achieves the best of both worlds: a strong regret guarantee, good empirical performance, and no need for additional feedback. The key ingredient for making this work is using the residual loss structure together with strong base contextual bandit learning algorithms. A number of recent algorithms have updated "classic" learning to search approaches with deep learning underpinnings BID48 BID21. These aim to incorporate sequence-level global loss functions to mitigate the discrepancy between training and test time, but only apply in the fully supervised setting. Mixing of supervised learning and reinforcement signals has become more popular in structured prediction recently, generally to do a better job of tuning for a task-specific loss using either Reinforce BID35 or Actor-Critic BID2. The bandit variant of the structured prediction problem was studied by BID41, who proposed a Reinforce-based method for optimizing log-linear structured prediction models under bandit feedback. A standard technique for dealing with sparse and episodic reward signals is reward shaping BID31: supplying additional rewards to a learning agent to guide its learning process, beyond those supplied by the underlying environment. Typical reward shaping is hand-engineered; RESLOPE essentially learns a good task-specific reward shaping automatically. The most successful baseline approach we found is Proximal Policy Optimization (PPO, BID39), a variant of Trust Region Policy Optimization (TRPO, BID38) that is more practical. Experimentally, we have seen RESLOPE typically learn more quickly than PPO. Theoretically, both have useful guarantees of a rather incomparable nature. Since RESLOPE operates as a reduction to a contextual bandit oracle, it can continually improve as better contextual bandit algorithms become available, for instance the work of Syrgkanis et al. (2016b) and BID0.

Although RESLOPE is quite effective, there are a number of shortcomings that need to be addressed in future work. For example, the bootstrap sampling algorithm is prohibitively expensive in terms of both memory and time. One approach for tackling this would be the amortized bootstrap approach of BID27, which uses amortized inference in conjunction with implicit models to approximate the bootstrap distribution over model parameters. There is also a question of whether the reduction to contextual bandits creates "reasonable" contextual bandit problems in conjunction with RNNs. While some contextual bandit algorithms assume strong convexity or linearity, the ones we employ operate on arbitrary policy classes, provided a good cost-sensitive learner exists. The degree to which this is true will vary by neural network architecture, as will what can be guaranteed (e.g., no-regret full-information online neural learning).
A more significant problem in the multi-deviation setting is that as RESLOPE learns, the residual costs will change, leading to a shifting distribution of costs; in principle this could be addressed using CB algorithms that work in adversarial settings BID43 BID16, but this largely remains an open challenge. RESLOPE is currently designed for discrete action spaces. Extension to continuous action spaces BID22 BID23 remains an open problem.

We thank Paul Mineiro and the anonymous reviewers for very helpful comments and insights (especially reviewer #3, whose patient comments on the analysis section of this paper were incredibly helpful). We also thank Khanh Nguyen, Shi Feng, Kianté Brantley, Moustafa Meshry, and Sudha Rao for reviewing earlier drafts of this work, and Alekh Agarwal, Nan Jiang, and Adith Swaminathan for helpful discussions and comments. This work was partially funded by an Amazon Research Award. This material is based upon work supported by the National Science Foundation under Grant No. 1618193. Any opinions, findings, and/or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

We assume that contexts are chosen i.i.d. from an unknown distribution D(x), the actions are chosen from a finite action set A, and the distribution over losses D(ℓ | a, x) is fixed over time, but is unknown. In this context, the key challenge in contextual bandit learning is the exploration/exploitation problem. Classic algorithms for the contextual bandit problem such as EXP4.P BID5 can achieve a √T regret bound; in particular, a bound of the form Regret(T) ≤ O(√(T K log N)), where K = |A| and N is the number of policies under consideration. When the regret is provably sublinear in T, such algorithms are often called "no regret" because their average regret per time step goes to zero as T → ∞.

The particular contextual bandit algorithms we will use in this paper perform a second level of reduction: they assume access to an oracle supervised learning algorithm that can optimize a cost-sensitive loss, and transform the contextual bandit problem to a cost-sensitive classification problem. Algorithms in this family typically vary along two axes: (1) how to explore, i.e., faced with a new x, how does the algorithm choose which action to take; and (2) how to update, i.e., given the observed loss ℓ_t, how does the algorithm construct a supervised training example on which to train. As a simple example, an algorithm might explore uniformly at random on 10% of the examples and return the best-guess action on 90% of examples (ε-greedy exploration). A single round for such an algorithm consists of a tuple (x, a, p), where p is the probability with which the algorithm took action a. (In the current example, this would be p = 0.1/K for all actions except π(x), and p = 0.9 + 0.1/K for a = π(x).) If the update rule were "inverse propensity scaling" (IPS) BID18, the generated cost-sensitive learning example would have x as an input, and a cost vector c ∈ R^K with zeros everywhere except in position a, where it would take value ℓ/p (the observed loss scaled by the inverse of the probability of the chosen action). The justification for this scaling is that, in expectation over a ∼ p, the expected value of this cost vector is equal to the true costs for each action. Neither of these choices is optimal (IPS has very high variance as p gets small); we discuss alternative exploration strategies and variance reduction strategies in §3.2.
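For concreteness, here is a minimal sketch of one round of an ε-greedy contextual bandit learner with an IPS update; the function and variable names are our own illustrative choices, not the paper's implementation:

```python
import numpy as np

def one_round(x, policy_scores, observed_loss_fn, n_actions, eps=0.1, rng=np.random):
    """One contextual-bandit round: explore (eps-greedy), observe the loss of the
    chosen action only, and build an IPS cost vector for a cost-sensitive update."""
    greedy = int(np.argmin(policy_scores(x)))       # current best-guess action
    probs = np.full(n_actions, eps / n_actions)
    probs[greedy] += 1.0 - eps                       # exploration distribution
    a = rng.choice(n_actions, p=probs)               # sampled action
    p = probs[a]                                     # probability of the chosen action
    loss = observed_loss_fn(a)                       # bandit feedback: loss of action a only
    cost = np.zeros(n_actions)
    cost[a] = loss / p                               # IPS: unbiased in expectation over a ~ probs
    return a, p, cost                                # (x, cost) becomes a cost-sensitive example
```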
Recently, it has become popular to solve structured prediction problems incrementally using some form of recurrent neural network (RNN) model. When the output y contains multiple parts (e.g., words in a translation), the RNN can predict each word in sequence, conditioning each prediction on all previous decisions. Although such models are typically trained to maximize cross-entropy with the gold standard output (in a fully supervised setting), there is mounting evidence that this has similar drawbacks to pre-RNN techniques, such as overfitting to gold standard prefixes (the model never learns what to do once it has made an error) and sensitivity to errors of different severity (due to error compounding). By casting the structured prediction problem explicitly as a sequential decision making problem BID11 BID10 BID37 BID29, we can avoid these problems by applying imitation-learning style algorithms to its solution. This "Learning to Search" framework (Figure 5) solves structured prediction problems by: (1) converting structured and control problems to search problems by defining a search space of states S and an action set A; (2) defining structured features over each state to capture the inter-dependency between output variables; (3) constructing a reference policy π^ref based on the supervised training data; and (4) learning a policy π^learn that imitates or improves upon the reference policy.

[Figure 5: An example of a search space defined by a Learning to Search (L2S) algorithm. A search space is defined in terms of the set of states X and the set of actions A. The agent starts at the initial state S and queries the roll-in policy π^in twice; next, at state R, the agent considers all three actions as possible one-step deviations. The agent queries the roll-out policy π^out to generate three different trajectories from the set of possible output structures Y.]

In the bandit structured prediction setting, this maps nicely to the type of MDPs described at the beginning of this section. The formal reduction, following BID11, is to ignore the first action a_0 and to transition to an "initial state" s_1 by drawing an input x_SP ∼ D_X. The search space of the structured prediction task then generates the remainder of the state/action space for this example. The episode terminates when a state s_H that corresponds to a "final output" is reached, at which point the structured prediction loss ℓ(ŷ_{s_H} | x_SP) is computed on the output that corresponds to s_H. This then becomes the loss function L in the MDP. Clearly, learning a good policy under this MDP is equivalent to learning a structured prediction model with low expected loss.

Many of the contextual bandit approaches we use in turn reduce the contextual bandit problem to a cost-sensitive classification problem. Cost-sensitive classification problems are defined by inputs x and cost vectors y ∈ R^K, where y(i) is the cost of choosing class i on this example. The goal in cost-sensitive classification is to learn a classifier f : X → [K] such that the expected cost E_{(x,y)}[y(f(x))] is small. A standard strategy for solving cost-sensitive classification is via reduction to regression in a one-against-all framework BID4. Here, a regression function g(x, i) ∈ R is learned that predicts costs given input/class pairs. A predicted class on an input x is chosen as argmin_i g(x, i). This cost-sensitive one-against-all approach achieves low regret when the underlying regressor is good. In practice, we use regression against the Huber loss.
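A minimal sketch of the one-against-all reduction from cost-sensitive classification to regression follows; the class name and the specific regressor are illustrative assumptions rather than the paper's code:

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

class CostSensitiveOAA:
    """One regressor per class predicts that class's cost; predict by argmin."""
    def __init__(self, n_classes):
        # Huber-loss regression, as mentioned in the text
        self.regs = [SGDRegressor(loss="huber") for _ in range(n_classes)]

    def update(self, x, costs):
        # costs[i] is the (possibly IPS-scaled) cost of class i on example x
        for i, c in enumerate(costs):
            self.regs[i].partial_fit(x.reshape(1, -1), [c])

    def predict(self, x):
        est = [r.predict(x.reshape(1, -1))[0] for r in self.regs]
        return int(np.argmin(est))
```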
In a now-classic lemma, BID20 and BID1 show that the difference in total loss between two policies can be computed exactly as a sum of per-time-step advantages of one over the other.

Lemma 1 (BID1; BID20). For all policies π and π′: J(π) − J(π′) = Σ_{h=1}^{H} E_{s ∼ d_π^h} [ Q^{π′}(s, π) − Q^{π′}(s, π′) ].

Proof of Theorem 1. Let π_n be the nth learned policy and π̄ be the average learned policy. We wish to bound J(π̄) − J(π*). We proceed as follows, largely following the AggreVaTe analysis BID36. We begin by noting that DISPLAYFORM1 and will concern ourselves with bounding the first difference. DISPLAYFORM2 Fix an n, and consider the sum above for a fixed deviation time step h^dev. In what follows, we consider π_n to represent both the learned policy as well as the contextual bandit cost estimator, CB.COST. DISPLAYFORM3 DISPLAYFORM4 where Residual(π_n, h^dev, s) is the estimated residual on this example. Since the above analysis holds for an arbitrary n, it holds in expectation over n; thus: DISPLAYFORM5 In the first line, the term in square brackets is exactly the cost being minimized by the contextual bandit algorithm and thus reduces to the regret of the CB algorithm. In this equation, we have H-many regret-minimizing online learners: one estimating the policy and the remaining H − 1 estimating the costs. BID8 (Theorem 7.3) proves that in a K-player game, if each player minimizes its internal regret, then the overall values converge in time-average to the value of the game. In order to apply this to our setting, we need to convert from external regret (which we are assuming about the underlying learners) to internal regret (which the theorem requires). This can be done using, for instance, a standard reduction that converts any external-regret-minimizing algorithm into an internal-regret-minimizing one. From there, by the strong realizability assumption, and the fact that multiple no-regret minimizers will achieve a time-averaged minimax value, we can conclude that as N → ∞, the approximation error term will vanish. Moreover, the term in the round parentheses (...) is exactly the expected value of the target of the contextual bandit cost. Therefore, if the CB algorithm has regret sublinear in N, both CB(N) and the approximation error term go to zero as N → ∞. This completes the proof that the overall algorithm is no-regret.

First, we observe (LOLS Eq 6): DISPLAYFORM6 Then (LOLS Eq 7): DISPLAYFORM7 So far nothing has changed. It will be convenient to define DISPLAYFORM8. For each n, fix the deviation time step h^dev_n. We plug these together à la LOLS and get: DISPLAYFORM9 DISPLAYFORM10 DISPLAYFORM11 The final step follows because the inner-most expectation is exactly what the contextual bandit algorithm is estimating, and Q^{π_n}_β(s^dev) is exactly the expectation of the observed loss. At this point the rest of the proof follows that of Theorem 1, relying on the same internal-to-external regret transformation and the joint no-regret minimization of all "players."

Blackjack is a card game where the goal is to obtain cards that sum to as near as possible to 21 without going over. Players play against a fixed dealer who hits until they have at least 17. Face cards (Jack, Queen, King) have a point value of 10. Aces can either count as 11 or 1, and a card is called "usable" at 11. The reward for winning is +1, drawing is 0, and losing is −1. The world is partially visible: the player can see their own cards and one of the two initial dealer cards.

Hex is a classic two-player board game invented by Piet Hein and independently by John Nash BID28. The board is an n×n rhombus of hexagonal cells. Players alternately place a stone of their color on any empty cell. To win, a player connects her two opposing sides with her stones.
We use n = 5; the world is fully visible to the agent, with each hexagon showing as unoccupied, occupied with white, or occupied with black. The reward is +1 for winning and −1 for losing.

Cart Pole is a classic control problem variously referred to as the "cart-pole", "inverted pendulum", or "pole balancing" problem BID3. It is an example of an inherently unstable dynamic system, in which the objective is to control translational forces that position a cart at the center of a finite-width track while simultaneously balancing a pole hinged on the cart's top. In this task, a pole is attached by a joint to a cart which moves along a frictionless track (Figure 6c). The system is controlled by applying a force of +1 or −1 to the cart; thus, we operate in a discrete action space with only two actions. The pendulum starts upright, and the goal is to prevent it from falling over. The episode ends when the pole is more than 15 degrees from the vertical axis, or the cart moves more than 2.4 units from the center. The state is represented by four values indicating the pole's position, its angle to the vertical axis, and the linear and angular velocities. The total cumulative reward at the end of the episode is the total number of time steps the pole remained upright before the episode terminated.

Grid World consists of a simple 3×4 grid, with a +1 reward in the upper-right corner and a −1 reward immediately below it; the cell at is blocked (Figure 6d). The agent starts at a random unoccupied square. Each step costs 0.05 and the agent has a 10% chance of misstepping. The agent only gets partial visibility of the world: it gets an indicator feature specifying which directions it can step. The only reward observed is the complete sum of rewards over an episode.

English POS Tagging: we conduct POS tagging experiments over the 45 Penn Treebank BID25 tags. We simulate a domain adaptation setting by training a reference policy on the TweetNLP dataset BID33, which achieves good accuracy in domain but performs badly out of domain. We simulate bandit episodic loss over the entire Penn Treebank Wall Street Journal (sections 02–21 and 23), comprising 42k sentences and about one million words. The measure of performance is the average Hamming loss. We define the search space by sequentially selecting greedy part-of-speech tags for words in the sentence from left to right.

Chinese POS Tagging: we conduct POS tagging experiments over the Chinese Penn Treebank (3.0) BID49 tags. We simulate a domain adaptation setting by training a reference policy on the Newswire domain from the Chinese Treebank Dataset BID50 and simulate bandit episodic feedback from the spoken conversation domain. We simulate bandit episodic loss over 40k sentences and about 300k words. The measure of performance is the average Hamming loss. We define the search space by sequentially selecting greedy part-of-speech tags for words in the sentence from left to right.

English Dependency Parsing: for this task, we assign a grammatical head (i.e., parent) to each word in the sentence. We train an arc-eager dependency parser BID32 which chooses among (at most) four actions at each state: Shift, Reduce, Left or Right. The reference policy is trained on the TweetNLP dataset and evaluated on the Penn Treebank corpus. Performance is measured by the unlabeled attachment score (UAS), which measures the fraction of words that are assigned the correct parent.

In all structured prediction settings, the feature representation begins with pretrained (and non-updated) embeddings.
For English, these are the 6B GloVe embeddings BID34; for Chinese, these are the FastText embeddings BID19. We then run a bidirectional LSTM BID17 over the input sentence. The input features for labeling the nth word in the POS tagging experiments are the biLSTM representations at position n. The input features for dependency actions are a concatenation of the biLSTM features of the next word on the buffer and the two words on the top of the stack. We optimize all parameters of the model using the Adam optimizer (see Footnote 9), with a tuned learning rate, a moving-average rate for the mean of β1 = 0.9 and for the variance of β2 = 0.999; epsilon (for numerical stability) is fixed at 1e−8 (these are the DyNet defaults). The learning rate is tuned in the range {0.05, 0.01, 0.005, 0.001, 0.0005, 0.0001}.

For the structured prediction experiments, the following input feature hyperparameters are tuned:
• Word embedding dimension ∈ {50, 100, 200, 300} (for the Chinese embeddings, which come only in 300-dimensional versions, we took the top singular vectors to reduce the dimensionality)
• BiLSTM dimension ∈ {50, 150, 300}
• Number of BiLSTM layers ∈ {1, 2}
• Pretraining: DAgger or AggreVaTe initialization, with probability of rolling in with the reference policy ∈ {0.0, 0.999^N, 0.99999^N, 1.0}, where N is the number of examples
• Policy RNN dimension ∈ {50, 150, 300}
• Number of policy layers ∈ {1, 2}
• Roll-out probability β ∈ {0.0, 0.5, 1.0}

For each task, the network architecture that was optimal for supervised pretraining was fixed and used for all bandit learning experiments (see Footnote 10). For the reinforcement learning experiments, we tuned:
• Policy RNN dimension ∈ {20, 50, 100}
• Number of policy layers ∈ {1, 2}

Some parameters we do not tune: the nonlinearities used, the size of the action embeddings (we use 10 in all cases), and the input RNN form for the text experiments (we always use an LSTM instead of an RNN or GRU, based on preliminary experiments). We do not regularize our models (weight shrinkage only reduced performance in initial experiments), nor do we use dropout. Pretraining of the structured prediction models ran for 20 passes over the data with early stopping based on held-out loss. The state of the optimizer was reset once bandit learning began. The variance across different configurations was relatively small across RL tasks, so we chose a two-layer policy with 20-dimensional vectors for all RL tasks. Each algorithm also has a set of hyperparameters; we tune them as below:
• Reinforce: with baseline or without baseline

Footnote 9: We initially experimented also with RMSProp BID46 and AdaGrad BID12, but Adam consistently performed as well as or better than the others on all tasks.
Footnote 10: English POS tagging and dependency parsing: DAgger 0.99999^N, 300-dim embeddings, 300-dim 1-layer LSTM, 2-layer 300-dimensional policy; Chinese POS tagging: DAgger 0.999^N, 300-dim embeddings, 50-dim 2-layer LSTM, 1-layer 50-dimensional policy.

Table 1: Results of ablating various parts of the RESIDUAL LOSS PREDICTION approach. Columns are tasks. The first two rows are the cumulative average loss over multiple runs and its standard deviation. The numbers in the rest of each column measure how much it hurts (positive number) or helps (negative number) to ablate the corresponding parameter. To keep the numbers on a similar scale, the changes are reported as multiples of the standard deviation.
So a value of 2.0 means that the cumulative loss gets worse by an additive factor of two standard deviations.
• A2C: a multiplier on the relative importance of actor loss and critic loss ∈ {0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0}
• PPO: with baseline or without baseline; and epsilon parameter ∈ {0.01, 0.05, 0.1, 0.2, 0.4, 0.8}
• RESLOPE: update strategy (IPS, DR, MTR) and exploration strategy (Uniform, Boltzmann or Bootstrap)

In each reinforcement/bandit experiment, we optimistically pick algorithm hyperparameters and learning rate based on final evaluation criteria, noting that this likely provides unrealistically optimistic performance for all algorithms. We perform 100 replicates of every experiment in the RL setting and 20 replicates in the structured prediction setting. We additionally ablate various aspects of RESLOPE in §5.2. We employ only two "tricks," both of which are defaults in DyNet: gradient clipping (using the default DyNet settings) and smart parameter initialization (DyNet uses Glorot initialization BID15).

Next, we consider the single-deviation version of RESLOPE versus the multiple-deviation version. To enable comparison with alternative algorithms, we also experiment with variants of Reinforce, PPO and DAgger that are only allowed single deviations as well (also chosen uniformly at random).

[Figure 9: Average loss (top) and heldout loss (bottom) during learning for three bandit structured prediction problems. Also included is supervised learning with DAgger. The main difference between the training loss and the development loss is that in the development data the system needn't explore, and so the gaps between algorithms which explore different amounts (e.g., especially on English POS tagging) disappear.]

The results are shown in FIG8. Not surprisingly, all algorithms suffer when only allowed single deviations. PPO makes things worse over time (likely because its updates are very conservative, such that even in the original PPO paper the authors advocate multiple runs over the same data), as does Reinforce. DAgger still learns, though more slowly, when only allowed a single deviation. RESLOPE behaves similarly, though not quite as poorly. Overall, this suggests that even though the samples generated with multiple deviations by RESLOPE are no longer independent, the gain in the number of samples more than makes up for this.

Experiments were conducted on a synthetic sequence labeling dataset. Input sequences are random integers (between one and ten) of length 6. The ground truth label for the hth word is the corresponding input mod 4. We generate 16k training sequences for this experiment. We run RESLOPE with bootstrap sampling in multiple-deviation mode. We use the MTR cost estimator, and optimize the policies using Adam with a learning rate of 0.01. [Figure caption: the x-axis shows the number of episodes and the y-axis measures the incremental loss using the true loss function (light colors) and using RESLOPE (dark colors); if RESLOPE worked perfectly, these would coincide.]

In this section, we study RESLOPE's performance under different, and especially non-additive, loss functions. This experiment is akin to the experimental setting in Section 5.3, but it is performed on the grid world reinforcement learning environment, where the quantitative aspects of the loss function are well understood. We study a simple 4×4 grid, with a +1 reward in the upper-right corner and a −1 reward immediately below it; the cells at and are blocked. The agent starts at a random position in the grid.
Each step costs 0.05 and the probability of success is 0.9. The agent has full visibility of the world: it knows its horizontal and vertical position in the grid. We consider two different episodic reward settings: (1) the only reward observed is the complete sum of losses over an episode (additive setting); and (2) the only reward observed is the L5 norm of the vector of losses over an episode (non-additive setting). Results are shown in FIG9 and are very similar to the structured prediction setting (Section 5.3): performance is better when the loss is additive (blue) than when it is non-additive (green).
HJNMYceCW
We present a novel algorithm for solving reinforcement learning and bandit structured prediction problems with very sparse loss feedback.
Adversarial neural networks solve many important problems in data science, but are notoriously difficult to train. These difficulties come from the fact that optimal weights for adversarial nets correspond to saddle points, and not minimizers, of the loss function. The alternating stochastic gradient methods typically used for such problems do not reliably converge to saddle points, and when convergence does happen it is often highly sensitive to learning rates. We propose a simple modification of stochastic gradient descent that stabilizes adversarial networks. We show, both in theory and practice, that the proposed method reliably converges to saddle points. This makes adversarial networks less likely to "collapse," and enables faster training with larger learning rates.

Adversarial networks play an important role in a variety of applications, including image generation, style transfer BID2 BID17, domain adaptation BID11, imitation learning BID15, privacy BID9 BID0, fair representation BID9, etc. One particularly motivating application of adversarial nets is their ability to form generative models, as opposed to the classical discriminative models (BID13; BID7). While adversarial networks have the power to attack a wide range of previously unsolved problems, they suffer from a major flaw: they are difficult to train. This is because adversarial nets try to accomplish two objectives simultaneously; weights are adjusted to maximize performance on one task while minimizing performance on another. Mathematically, this corresponds to finding a saddle point of a loss function - a point that is minimal with respect to one set of weights, and maximal with respect to another. Conventional neural networks are trained by marching down a loss function until a minimizer is reached (FIG0). In contrast, adversarial training methods search for saddle points rather than a minimizer, which introduces the possibility that the training path "slides off" the objective function and the loss goes to −∞ (FIG0), resulting in "collapse" of the adversarial network. As a result, many authors suggest using early stopping, gradient/weight clipping, or specialized objective functions BID13 to maintain stability.

In this paper, we present a simple "prediction" step that is easily added to many training algorithms for adversarial nets. We present theoretical analysis showing that the proposed prediction method is asymptotically stable for a class of saddle point problems. Finally, we use a wide range of experiments to show that prediction enables faster training of adversarial networks using large learning rates without the instability problems that plague conventional training schemes.

[FIG0 caption: If minimization (or, conversely, maximization) is more powerful, the solution path "slides off" the loss surface and the algorithm becomes unstable, resulting in a sudden "collapse" of the network.]

Saddle-point optimization problems have the general form min_u max_v L(u, v) for some loss function L and variables u and v. Most authors use the alternating stochastic gradient method to solve saddle-point problems involving neural networks. This method alternates between updating u with a stochastic gradient descent step, and then updating v with a stochastic gradient ascent step. When simple/classical SGD updates are used, the steps of this method can be written as u_{k+1} = u_k − α_k L_u(u_k, v_k) and v_{k+1} = v_k + β_k L_v(u_{k+1}, v_k). Here, {α_k} and {β_k} are learning rate schedules for the minimization and maximization steps, respectively.
The vectors L_u(u, v) and L_v(u, v) denote (possibly stochastic) gradients of L with respect to u and v. In practice, the gradient updates are often performed by an automated solver, such as the Adam optimizer BID19, and include momentum updates. We propose to stabilize the training of adversarial networks by adding a prediction step. Rather than calculating v_{k+1} using u_{k+1}, we first make a prediction, ū_{k+1}, about where the u iterates will be in the future, and use this predicted value to obtain v_{k+1}. The prediction step tries to estimate where u is going to be in the future by assuming its trajectory remains the same as in the current iteration.

We now discuss a few common adversarial network problems and their saddle-point formulations. Generative Adversarial Networks (GANs) fit a generative model to a dataset using a game in which a generative model competes against a discriminator BID13. The generator, G(z; θ_g), takes random noise vectors z as inputs, and maps them onto points in the target data distribution. The discriminator, D(x; θ_d), accepts a candidate point x and tries to determine whether it is really drawn from the empirical distribution (in which case it outputs 1), or fabricated by the generator (output 0). During a training iteration, noise vectors drawn from a Gaussian distribution are pushed through the generator network G to form a batch of generated data samples denoted by D_fake. A batch of empirical samples, D_real, is also prepared. One then tries to adjust the weights of each network to solve a saddle point problem, which is popularly formulated as min_{θ_g} max_{θ_d} E_{x ∈ D_real} f(D(x; θ_d)) + E_{z} f(1 − D(G(z; θ_g); θ_d)). Here f is any monotonically increasing function. Initially, BID13 proposed using f(x) = log(x).

Domain Adversarial Networks (DANs) (BID11; BID9) take data collected from a "source" domain, and extract a feature representation that can be used to train models that generalize to another "target" domain. For example, in the domain adversarial neural network (DANN, BID11), a set of feature layers maps data points into an embedded feature space, and a classifier is trained on these embedded features. Meanwhile, the adversarial discriminator tries to determine, using only the embedded features, whether the data points belong to the source or target domain. A good embedding yields a better task-specific objective on the target domain while fooling the discriminator, and is found by solving DISPLAYFORM1. Here L_d is any adversarial discriminator loss function and L_y^k denotes the task-specific loss; θ_f, θ_d, and θ_y^k are the network parameters of the feature mapping, discriminator, and classification layers.

It is well known that alternating stochastic gradient methods are unstable when using simple logarithmic losses. This led researchers to explore multiple directions for stabilizing GANs, either by adding regularization terms BID12 BID4, applying a myriad of training "hacks" BID14, re-engineering network architectures, or designing different solvers. Specifically, the Wasserstein GAN (WGAN) approach modifies the original objective by replacing f(x) = log(x) with f(x) = x. This led to a training scheme in which the discriminator weights are "clipped." However, as discussed in prior work, WGAN training is unstable at high learning rates, or when used with popular momentum-based solvers such as Adam. Currently, it is known to work well only with RMSProp. The unrolled GAN is a new solver that can stabilize training at the cost of more expensive gradient computations.
Each generator update requires the computation of multiple extra discriminator updates, which are then discarded when the generator update is complete. While avoiding GAN collapse, this method requires increased computation and memory.

In the convex optimization literature, saddle point problems are more well studied. One popular solver is the primal-dual hybrid gradient (PDHG) method BID10, which has been popularized by BID3, and has been successfully applied to a range of machine learning and statistical estimation problems BID12. PDHG relates closely to the method proposed here - it achieves stability using the same prediction step, although it uses a different type of gradient update and is only applicable to bi-linear problems. Stochastic methods for convex saddle-point problems can be roughly divided into two categories: stochastic coordinate descent BID6 and stochastic gradient descent BID5. Similar optimization algorithms have been studied for reinforcement learning BID8. Recently, a "doubly" stochastic method that randomizes both primal and dual updates was proposed for strongly convex bilinear saddle point problems. For general saddle point problems, "doubly" stochastic gradient descent methods are discussed in Nemirovski et al. and Palaniappan & Bach (2016), in which primal and dual variables are updated simultaneously based on the previous iterates and the current gradients.

[Figure 2: A schematic depiction of the prediction method. When the minimization step is powerful and moves the iterates a long distance, the prediction step (dotted black arrow) causes the maximization update to be calculated further down the loss surface, resulting in a more dramatic maximization update. In this way, prediction methods prevent the maximization step from getting overpowered by the minimization update.]

We present three ways to explain the effect of prediction: an intuitive, non-mathematical perspective; a more analytical viewpoint involving dynamical systems; and finally a rigorous proof-based approach. The standard alternating SGD switches between minimization and maximization steps. In this algorithm, there is a risk that the minimization step can overpower the maximization step, in which case the iterates will "slide off" the edge of the saddle, leading to instability (FIG0). Conversely, an overpowering maximization step will dominate the minimization step, and drive the iterates to extreme values as well. The effect of prediction is visualized in Figure 2. Suppose that a maximization step takes place starting at the red dot. Without prediction, the maximization step has no knowledge of the algorithm history, and will be the same regardless of whether the previous minimization update was weak (Figure 2a) or strong (Figure 2b). Prediction allows the maximization step to exploit information about the minimization step. If the previous minimization step was weak (Figure 2a), the prediction step (dotted black arrow) stays close to the red dot, resulting in a weak predictive maximization step (white arrow). But if we arrived at the red dot using a strong minimization step (Figure 2b), the prediction moves a long way down the loss surface, resulting in a stronger maximization step (white arrows) to compensate.

To get stronger intuition about prediction methods, let's look at the behavior of the method on a simple bi-linear saddle of the form L(u, v) = vᵀKu, where K is a matrix.
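A minimal NumPy sketch of this toy problem follows, comparing the alternating updates with and without the prediction step; the extrapolation ū_{k+1} = u_{k+1} + (u_{k+1} − u_k) encodes the "trajectory stays the same" assumption above, while the stepsizes, dimensions, and random K are illustrative choices of ours rather than values from the paper.

```python
import numpy as np

def bilinear_saddle(alpha=0.1, beta=0.1, steps=2000, predict=True, seed=0):
    """Alternating gradient updates on L(u, v) = v^T K u, optionally with prediction."""
    rng = np.random.default_rng(seed)
    K = rng.standard_normal((3, 3))
    u, v = rng.standard_normal(3), rng.standard_normal(3)
    for _ in range(steps):
        # minimization step on u:  grad_u L = K^T v
        u_new = u - alpha * (K.T @ v)
        # prediction: assume u keeps moving along its current trajectory
        u_bar = u_new + (u_new - u) if predict else u_new
        # maximization step on v:  grad_v L = K u, evaluated at the predicted u
        v = v + beta * (K @ u_bar)
        u = u_new
    return np.linalg.norm(u) + np.linalg.norm(v)   # distance from the saddle at the origin

print("with prediction:   ", bilinear_saddle(predict=True))   # decays toward the saddle
print("without prediction:", bilinear_saddle(predict=False))  # orbits without decaying
```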
When exact (non-stochastic) gradient updates are used, the iterates follow the path of a simple dynamical system with closed-form solutions. We give here a sketch of this argument; a detailed derivation is provided in the Supplementary Material. When the (non-predictive) gradient method is applied to the linear problem, the resulting iterations can be written as u_{k+1} = u_k − αKᵀv_k and v_{k+1} = v_k + βKu_{k+1}. When the stepsize α gets small, this behaves like a discretization of the system of differential equations u̇ = −Kᵀv, v̇ = (β/α)Ku, where u̇ and v̇ denote the derivatives of u and v with respect to time. These equations describe a simple harmonic oscillator, and the closed-form solution for u is u(t) = C cos(Σ^{1/2} t + φ), where Σ is a diagonal matrix (containing the eigenvalues of (β/α)KᵀK), and the matrix C and vector φ depend on the initialization. We can see that, for small values of α and β, the non-predictive algorithm approximates an undamped harmonic motion, and the solutions orbit around the saddle without converging.

The prediction step improves convergence because it produces damped harmonic motion that sinks into the saddle point. When applied to the linearized problem, we get the dynamical system u̇ = −Kᵀv, v̇ = (β/α)K(u + αu̇), which has solutions of the form u(t) = C e^{−tαΣ/2} cos(Σ^{1/2} t + φ) (up to higher-order corrections in α). From this analysis, we see that the damping caused by the prediction step causes the orbits to converge into the saddle point, and the error decays exponentially fast.

While the arguments above are intuitive, they are also informal and do not address issues like stochastic gradients, non-constant stepsize sequences, and more complex loss functions. We now provide a rigorous convergence analysis that handles these issues. We assume that the function L(u, v) is convex in u and concave in v. We can then measure convergence using the "primal-dual" gap DISPLAYFORM0, where (u*, v*) is a saddle. Using these definitions, we formulate the following convergence result. Theorem 1. Suppose the function L(u, v) is convex in u, concave in v, and that the partial gradient DISPLAYFORM1; then the SGD method with prediction converges in expectation, and we have the error bound DISPLAYFORM0.

We present a wide range of experiments to demonstrate the benefits of the proposed prediction step for adversarial nets. We consider a saddle point problem on a toy dataset constructed using MNIST images, and then move on to consider state-of-the-art models for three tasks: GANs, domain adaptation, and learning of fair classifiers. Additional results, and additional experiments involving mixtures of Gaussians, are presented in the Appendix. The code is available at https://github.com/jaiabhayk/stableGAN.

We consider the task of classifying MNIST digits as being even or odd. To make the problem interesting, we corrupt 70% of odd digits with salt-and-pepper noise, while we corrupt only 30% of even digits. When we train a LeNet network on this problem, we find that the network encodes and uses information about the noise; when a noise vs no-noise classifier is trained on the deep features generated by LeNet, it gets 100% accuracy. The goal of this task is to force LeNet to ignore the noise when making decisions. We create an adversarial model of the form given above, in which L_y is a softmax loss for the even vs odd classifier. We make L_d a softmax loss for the task of discriminating whether the input sample is noisy or not. The classifier and discriminator were both pre-trained using the default LeNet implementation in Caffe BID18. Then the combined adversarial net was jointly trained both with and without prediction. For implementation details, see the Supplementary Material.
Figure 3 summarizes our findings. In this experiment, we considered applying prediction to both the classifier and the discriminator. We note that our task is to retain good classification accuracy while preventing the discriminator from doing better than the trivial strategy of classifying odd digits as noisy and even digits as non-noisy. This means that the discriminator accuracy should ideally be ∼0.7. As shown in FIG1, the prediction step hardly makes any difference when evaluated at the small learning rate of 10⁻⁴. However, when evaluated at higher rates, FIG1 shows that the prediction solvers are very stable while the one without prediction collapses (blue solid line is flat) very early. FIG1 shows that the default learning rate of the Adam solver (10⁻³) is unstable unless prediction is used.

Next, we test the efficacy and stability of our proposed predictive step on generative adversarial networks (GANs), which are formulated as saddle point problems and are popularly solved using a heuristic approach BID13. We consider an image modeling task using CIFAR-10 on the recently popular convolutional GAN architecture, DCGAN. We compare our predictive method with DCGAN and the unrolled GAN, using the training protocol described in prior work. Note that we compared against the unrolled GAN with the stop-gradient switch on and K = 5 unrolling steps. All the approaches were trained for five random seeds and 100 epochs each. We start by comparing all three methods using the default solver for DCGAN (the Adam optimizer) with learning rate = 0.0002 and β1 = 0.5. Figure 4 compares the generated sample images (at the 100th epoch) and the training loss curves for all approaches. The discriminator and generator loss curves in Figure 4e show that without prediction, the DCGAN collapses at the 45th and 57th epochs. Similarly, Figure 4f shows that the training for the unrolled GAN collapses in at least three instances. The training procedure using predictive steps never collapsed during any epochs. Qualitatively, the images generated using prediction are more diverse than the DCGAN and unrolled GAN images. Figure 5 compares all approaches when trained with a 5× higher learning rate (0.001, the default for the Adam solver). As observed previously, the standard and unrolled solvers are very unstable and collapse at this higher rate. However, as shown in Figures 5d and 5a, training remains stable when a predictive step is used, and generates images of reasonable quality. The training procedure for both DCGAN and unrolled GAN collapsed on all five random seeds. The results on various additional intermediate learning rates, as well as on the high-resolution ImageNet dataset, are in the Supplementary Material. In the Supplementary Material, we also present one additional comparison showing results with a higher momentum of β1 = 0.9 (learning rate = 0.0002). We observe that all the training approaches are stable; however, the quality of images generated using DCGAN is inferior to that of the predictive and unrolled methods. Overall, of the 25 training settings we ran (five learning rates × five random seeds), the DCGAN training procedure collapsed in 20 instances while the unrolled GAN collapsed in 14 experiments (not counting multiple collapses within each training setting). In contrast, we find that our simple predictive-step method collapsed only once. Note that prediction adds trivial cost to the training algorithm. Using a single TitanX Pascal, a training epoch of DCGAN takes 35 secs. With prediction, an epoch takes 38 secs. The unrolled GAN method, which requires extra gradient steps, takes 139 secs/epoch.
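To show how a prediction step slots into a standard GAN training loop, here is a hedged PyTorch-style sketch; the model interfaces, loss choice, and the decision to extrapolate the generator's parameters before the discriminator update are illustrative assumptions of ours, not a transcription of the authors' code.

```python
import torch

def gan_step_with_prediction(G, D, opt_G, opt_D, real_batch, noise_dim,
                             bce=torch.nn.BCELoss()):
    """One training iteration: update G, then update D against *predicted* G weights."""
    z = torch.randn(real_batch.size(0), noise_dim)

    # --- generator (minimization player) update ---
    old_params = [p.detach().clone() for p in G.parameters()]
    opt_G.zero_grad()
    g_loss = bce(D(G(z)), torch.ones(real_batch.size(0), 1))   # non-saturating generator loss
    g_loss.backward()
    opt_G.step()

    # --- prediction: extrapolate G's trajectory, p_bar = p_new + (p_new - p_old) ---
    new_params = [p.detach().clone() for p in G.parameters()]
    with torch.no_grad():
        for p, p_new, p_old in zip(G.parameters(), new_params, old_params):
            p.copy_(2 * p_new - p_old)

    # --- discriminator (maximization player) update, computed at the predicted G ---
    opt_D.zero_grad()
    d_loss = bce(D(real_batch), torch.ones(real_batch.size(0), 1)) + \
             bce(D(G(z).detach()), torch.zeros(real_batch.size(0), 1))
    d_loss.backward()
    opt_D.step()

    # --- restore G's actual (non-predicted) parameters before the next iteration ---
    with torch.no_grad():
        for p, p_new in zip(G.parameters(), new_params):
            p.copy_(p_new)
    return g_loss.item(), d_loss.item()
```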
Finally, we draw quantitative comparisons based on the inception score, which is a widely used metric for the visual quality of generated images. For this purpose, we consider the current state-of-the-art Stacked GAN BID16 architecture. TAB0 lists the inception scores computed on the generated samples from Stacked GAN trained (200 epochs) with and without prediction at different learning rates. The joint training of Stacked GAN collapses when trained at the default learning rate of the Adam solver (i.e., 0.001). However, reasonably good samples are generated if the same model is trained with prediction on both generator networks. The right end of TAB0 also lists the inception scores measured after fewer epochs for higher learning rates. This suggests that models trained with prediction are not only stable but also converge faster when using higher learning rates. For reference, the inception score on real images from the CIFAR-10 dataset is 11.51 ± 0.17.

We consider the domain adaptation task (BID11) wherein the representation learned using the source domain samples is altered so that it can also generalize to samples from the target distribution. We use the problem setup and hyper-parameters as described in BID11, using the OFFICE dataset (experimental details are shared in the Supplementary Material). In TAB1, comparisons are drawn with respect to target domain accuracy on six pairs of source-target domain tasks. We observe that the prediction step has mild benefits on the "easy" adaptation tasks with very similar source and target domain samples. However, on the transfer learning tasks of AMAZON-to-WEBCAM, WEBCAM-to-AMAZON, and DSLR-to-AMAZON, which have noticeably distinct data samples, an extra prediction step gives an absolute improvement of 1.3–6.9% in predicting target domain labels.

Finally, we consider the task of learning fair feature representations (BID9) such that the final learned classifier does not discriminate with respect to a sensitive variable. As proposed in BID9, one way to measure fairness is using the discrimination y_disc = | (1/N_0) Σ_{i: s_i = 0} ŷ_i − (1/N_1) Σ_{i: s_i = 1} ŷ_i |, where ŷ_i is the classifier's prediction for the ith sample. Here s_i is a binary sensitive variable for the ith data sample and N_k denotes the total number of samples belonging to the kth sensitive class. Similar to the domain adaptation task, the learning of each classifier can be formulated as a minimax problem (BID9). Unlike the previous example, though, this task has a model selection component. From a pool of hundreds of randomly generated adversarial deep nets, for each value of t, one selects the model that maximizes the difference y_{t,Δ} = y_acc − t · y_disc. The "Adult" dataset from the UCI machine learning repository is used. The task (y_acc) is to classify whether a person earns ≥ $50k/year. The person's gender is chosen to be the sensitive variable. Details are in the supplementary material. To demonstrate the advantage of using prediction for model selection, we follow the protocol developed in BID9. In this work, the search space is restricted to a class of models that consist of a fully connected autoencoder, one task-specific discriminator, and one adversarial discriminator. The encoder output from the autoencoder acts as input to both discriminators. In our experiment, 100 models are randomly selected. During the training of each adversarial model, L_d is a cross-entropy loss while L_y is a linear combination of reconstruction and cross-entropy losses.
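A small sketch of the discrimination measure and the model-selection score described above (NumPy; the function names are ours):

```python
import numpy as np

def discrimination(y_pred, s):
    """y_disc: absolute gap in mean predictions between the two sensitive groups."""
    y_pred, s = np.asarray(y_pred, dtype=float), np.asarray(s)
    return abs(y_pred[s == 0].mean() - y_pred[s == 1].mean())

def selection_score(y_acc, y_disc, t):
    """Model-selection criterion: y_{t,Delta} = y_acc - t * y_disc."""
    return y_acc - t * y_disc

# Example: predictions and sensitive attribute for 6 validation samples
preds = [1, 0, 1, 1, 0, 0]
sens  = [0, 0, 0, 1, 1, 1]
print(discrimination(preds, sens))        # |2/3 - 1/3| = 0.333...
print(selection_score(0.85, 1/3, t=0.5))  # 0.6833...
```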
Once all the models are trained, the best model for each value of t is selected by evaluating on the validation set. FIG3 plots the results on the test set for the AFLR approach with and without prediction steps in its default Adam solver. For each value of t, FIG3 and panel 6c also compare the number of layers in the selected encoder and discriminator networks. When using prediction for training, relatively stronger encoder models are produced and selected during validation, and hence the prediction-trained models generalize better on the test set.

We present a simple modification to the alternating SGD method, called a prediction step, that improves the stability of adversarial networks. We present theoretical results showing that the prediction step is asymptotically stable for solving saddle point problems. We show, using a variety of test problems, that prediction steps prevent network collapse and enable training with a wider range of learning rates than plain SGD methods.

Here, we provide a detailed derivation of the harmonic oscillator behavior of the algorithm on the simple bi-linear saddle of the form L(x, y) = yᵀKx, where K is a matrix. Note that, within a small neighborhood of a saddle, all smooth weakly convex objective functions behave like this bi-linear model. To see why, consider a smooth objective function L with a saddle point at x* = 0, y* = 0. Within a small neighborhood of the saddle, we can approximate the function L to high accuracy using its Taylor approximation DISPLAYFORM0, where L_xy denotes the matrix of mixed partial derivatives with respect to x and y. Note that the first-order terms have vanished from this Taylor approximation because the gradients are zero at a saddle point. The O(x²) and O(y²) terms vanish as well because the problem is assumed to be weakly convex around the saddle. Up to third-order error (which vanishes quickly near the saddle), this Taylor expansion has the bi-linear form above. For this reason, stability on saddles of this form is a necessary condition for convergence of FORMULA2, and the analysis here describes the asymptotic behavior of the prediction method on any smooth problem for which the method converges. We will show that, as the learning rate gets small, the iterates of the non-prediction method rotate in orbits around the saddle without converging. In contrast, the iterates of the prediction method fall into the saddle and converge.

When the conventional gradient method is applied to the linear problem, the resulting iterations can be written as x_{k+1} = x_k − αKᵀy_k and y_{k+1} = y_k + βKx_{k+1}. When the stepsize α gets small, this behaves like a discretization of the differential equations ẋ = −Kᵀy and ẏ = (β/α)Kx, where ẋ and ẏ denote the derivatives of x and y with respect to time. The differential equations describe a harmonic oscillator. To see why, differentiate the first equation and plug in the second to get a differential equation in x alone: ẍ = −(β/α)KᵀKx. We can decompose this into a system of independent single-variable problems by considering the eigenvalue decomposition (β/α)KᵀK = UΣUᵀ. We now multiply both sides by Uᵀ, and make the change of variables z ← Uᵀx, to get z̈ = −Σz, where Σ is diagonal. This is the standard equation for undamped harmonic motion, and its solution is z = A cos(Σ^{1/2} t + φ), where cos acts entry-wise, and the diagonal matrix A and vector φ are constants that depend only on the initialization.
Changing back into the variable x, we get the solution DISPLAYFORM4. We can see that, for small values of α and β, the non-predictive algorithm approximates an undamped harmonic motion, and the solutions orbit around the saddle without converging. The prediction step improves convergence because it produces damped harmonic motion that sinks into the saddle point. When applied to the linearized problem, the iterates of the predictive method satisfy DISPLAYFORM5. For small α, this approximates the dynamical system DISPLAYFORM6. Like before, we differentiate and use FORMULA0 to obtain DISPLAYFORM7. Finally, multiply both sides by Uᵀ and perform the change of variables z ← Uᵀx to get DISPLAYFORM8. This equation describes a damped harmonic motion. The solutions have the form DISPLAYFORM9. Changing back to the variable x, we see that the iterates of the original method satisfy DISPLAYFORM10, where A and φ depend on the initialization. From this analysis, we see that for small constant α the orbits of the lookahead method converge into the saddle point, and the error decays exponentially fast.

A PROOF OF THEOREM 1

DISPLAYFORM11 In the following proofs, we use g_u(u, v) and g_v(u, v) to represent the stochastic approximations of the gradients, where DISPLAYFORM12. We show the convergence of the proposed stochastic primal-dual gradients for the primal-dual gap DISPLAYFORM13. We prove the O(1/√k) convergence rate in Theorem 1 by using Lemma 1 and Lemma 2, which present the contraction of the primal and dual updates, respectively. DISPLAYFORM14 Proof. Using the primal update, we have DISPLAYFORM15. Taking the expectation on both sides of the equation and substituting, we get DISPLAYFORM16. Since L(u, v) is convex in u, we have DISPLAYFORM17. The claim is proved by combining FORMULA0 and FORMULA1.

Lemma 2. Suppose L(u, v) is concave in v and has Lipschitz gradients; then DISPLAYFORM18. Proof. From the dual update, we have DISPLAYFORM19. Taking the expectation on both sides of the equation, substituting, and applying DISPLAYFORM20, we get DISPLAYFORM21. Reorganize FORMULA1 to get DISPLAYFORM22. The right-hand side can be represented as DISPLAYFORM23, where DISPLAYFORM24 DISPLAYFORM25 DISPLAYFORM26. Lipschitz smoothness is used for one of these terms; the prediction step is used for another; the primal update is used for a third; and the boundedness assumptions are used for the rest. DISPLAYFORM27 Combining equations (25), (28) and (35) gives (36): DISPLAYFORM28. Rearranging the order of terms completes the proof.

We now present the proof of Theorem 1. Proof. Combining FORMULA0 and FORMULA0 from the Lemmas, the primal-dual gap satisfies DISPLAYFORM29. Accumulating from k = 1, ..., l, we obtain DISPLAYFORM30.

Our findings are summarized in FIG1. In addition, FIG5 provides a head-to-head comparison of two popular solvers, Adam and SGD, using the predictive step. Not surprisingly, the Adam solver shows relatively better performance and convergence even with an additional predictive step. This also suggests that the default hyper-parameters of the Adam solver can be retained and used for training these networks without resorting to further hyper-parameter tuning (as is currently common practice).

Experimental details: To evaluate the domain adaptation task, we consider the OFFICE dataset. OFFICE is a small-scale dataset consisting of images collected from three distinct domains: AMAZON, DSLR and WEBCAM. For such a small-scale dataset, it is non-trivial to learn features from images of a single domain. For instance, consider the largest subset, AMAZON, which contains only 2,817 labeled images spread across 31 different categories.
However, one can leverage the power of domain adaptation to improve cross domain accuracy. We follow the protocol listed in BID11 and the same network architecture is used. Caffe BID18 is used for implementation. The training procedure from BID11 is kept intact except for the additional prediction step. In TAB1 comparisons are drawn with respect to target domain accuracy on three pairs of source-target domain tasks. The test accuracy is reported at the end of 50,000 training iterations. Experimental details: The "Adult" dataset from the UCI machine learning repository is used, which consists of census data from ∼ 45, 000 people. The task is to classify whether a person earns ≥ $50k/year. The person's gender is chosen to be the sensitive variable. We binarize all the category attributes, giving us a total of 102 input features per sample. We randomly split data into 35,000 samples for training, 5000 for validation and 5000 for testing. The reported here is an average over five such random splits. Toy Dataset: To illustrate the advantage of the prediction method, we experiment on a simple GAN architecture with fully connected layers using the toy dataset. The constructed toy example and its architecture is inspired by the one presented in. The two dimensional data is sampled from the mixture of eight Gaussians with their means equally spaced around the unit circle centered at. The standard deviation of each Gaussian is set at 0.01. The two dimensional latent vector z is sampled from the multivariate Gaussian distribution. The generator and discriminator networks consist of two fully connected hidden layers, each with 128 hidden units and tanh activations. The final layer of the generator has linear activation while that of discriminator has sigmoid activation. The solver optimizes both the discriminator and the generator network using the objective in. We use adam solver with its default parameters (i.e., learning rate = 0.001, β 1 = 0.9, β 2 = 0.999) and with input batch size of 512. The generated two dimensional samples are plotted in the figure.The straightforward utilization of the adam solver fails to construct all the modes of the underlying dataset while both unrolled GAN and our method are able to produce all the modes. We further investigate the performance of GAN training algorithms on data sampled from a mixture of a large number of Gaussians. We use 100 Gaussian modes which are equally spaced around a circle of radius 24 centered at. We retain the same experimental settings as described above and train GAN with two different input batch sizes, a small and a large batch setting. The Figure plots the generated sample output of GAN trained (for fixed number of epochs) under the above setting using different training algorithms. Note that for small batch size input, the default as well as the unrolled training for GAN fails to construct actual modes of the underlying dataset. We hypothesize that this is perhaps due to the batch size, 64, being smaller than the number of input modes. When trained with small batch the GAN observe samples only from few input modes at every iteration. This causes instability leading to the failure of training algorithms. This scenario is pertinent to real datasets wherein the number of modes are relatively high compared to input batch size. In this section we demonstrate the advantage of prediction methods for generating higher resolution images of size 128 x 128. 
For this purpose, the state-of-the-art AC-GAN architecture is considered and conditionally trained on images of all 1000 classes from the ImageNet dataset. We used the publicly available code for AC-GAN, and all parameters were kept at their defaults as in the original implementation. Figure 14 plots the inception score measured at every training epoch of the AC-GAN model with and without prediction. The score is averaged over five independent runs. From the figure, it is clear that even at this higher resolution and with a large number of classes, the prediction method remains stable and speeds up training.
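Returning to the toy mixture-of-Gaussians experiments described earlier in this section, the sketch below shows a minimal PyTorch setup matching the stated configuration (eight modes equally spaced on the unit circle, standard deviation 0.01, two fully connected hidden layers of 128 tanh units, linear generator output, sigmoid discriminator output, Adam with its default parameters, batch size 512). The circle's centre, the training length, and the loop structure are assumptions, and the prediction step itself is omitted here.

```python
# Sketch of the toy 8-Gaussian GAN setup described above (not the authors' code).
# Architecture and optimizer settings follow the text; everything else is assumed.
import torch
import torch.nn as nn

def sample_real(batch, n_modes=8, radius=1.0, std=0.01):
    """Mixture of Gaussians equally spaced on a circle (assumed centre: the origin)."""
    angles = 2 * torch.pi * torch.randint(n_modes, (batch,)) / n_modes
    centers = radius * torch.stack([torch.cos(angles), torch.sin(angles)], dim=1)
    return centers + std * torch.randn(batch, 2)

def mlp(out_act):
    return nn.Sequential(nn.Linear(2, 128), nn.Tanh(),
                         nn.Linear(128, 128), nn.Tanh(),
                         nn.Linear(128, 2 if out_act is None else 1),
                         nn.Identity() if out_act is None else out_act)

G = mlp(out_act=None)          # generator: 2-d latent z -> 2-d sample, linear output
D = mlp(out_act=nn.Sigmoid())  # discriminator: 2-d point -> probability of being real

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3, betas=(0.9, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3, betas=(0.9, 0.999))
bce, batch = nn.BCELoss(), 512

for step in range(2000):                      # training length is an assumption
    # discriminator step
    real, fake = sample_real(batch), G(torch.randn(batch, 2)).detach()
    loss_d = bce(D(real), torch.ones(batch, 1)) + bce(D(fake), torch.zeros(batch, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # generator step (the prediction step discussed above is omitted in this plain sketch)
    loss_g = bce(D(G(torch.randn(batch, 2))), torch.ones(batch, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```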
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
Skj8Kag0Z
We present a simple modification to the alternating SGD method, called a prediction step, that improves the stability of adversarial networks.
An important type of question that arises in Explainable Planning is a contrastive question, of the form "Why action A instead of action B?". These kinds of questions can be answered with a contrastive explanation that compares properties of the original plan containing A against the contrastive plan containing B. An effective explanation of this type serves to highlight the differences between the decisions that have been made by the planner and what the user would expect, as well as to provide further insight into the model and the planning process. Producing this kind of explanation requires the generation of the contrastive plan. This paper introduces domain-independent compilations of user questions into constraints. These constraints are added to the planning model, so that a solution to the new model represents the contrastive plan. We introduce a formal description of the compilation from user question to constraints in a temporal and numeric PDDL2.1 planning setting. Explainable AI (XAI) is an emerging and important research area within AI. Recent work has shown that AI Planning is an important tool in XAI, as its decision-making mechanisms are model-based and so in principle more transparent. This recent work includes many approaches towards providing explanations in AI planning. BID3 gives an in-depth overview of this work and different terms used within the XAI landscape. In particular, BID16 shows that if an AI system behaves "explicably" there is less of a need for explanations. However, this is not always possible and explanation is sometimes required. BID2 tackles explanation as a model reconciliation problem, arguing that the explanation must be a difference between the human model and AI model. BID14 show that by representing plans as first order logic formulae generating explanations is feasible in real time. In contrast, in this paper we focus on contrastive "why" questions. BID4 highlight some important questions in XAIP and discuss possible answers, and also describe how these "why" questions are especially important. BID15 outlines the approach to planning as an iterative process for bet- ter modelling preferences and providing explanations. We propose to follow this same approach. The aim of explanations is to improve the user's levels of understanding and trust in the system they are using. These explanations can be local (regarding a specific plan) or global (concerning how the planning system works in general). In this paper we focus on local explanations of temporal and numeric planning problems, introducing an approach for explaining why a planner has made a certain decision. Through active exploration of these specific cases, the user may also gain global insight into the way in which the planner makes decisions. (See BID9 BID10 Ribeiro, Singh, and Guestrin 2016) ).To achieve an understanding of a decision, it is important that explanations adapt to the specific context and mental model of the user. One step towards this is to support the user iteratively asking different questions suitable for their context. BID6 identify ten question types that a user might have about an intelligent system, also described by BID13. BID8 show in a grounded study that of these, the questions why and why not provided the most benefit in terms of objective understanding and feelings of trust. In the context of planning why not questions are contrastive questions, because the user is asking why some action was selected rather than some other action that was not. 
Instead, Miller argues that all such questions can be asked as contrastive questions of the form "Why action A rather than action B?" BID11. Contrastive questions capture the context of the question; they more precisely identify the gaps in the user's understanding of a plan that needs to be explained BID7. A contrastive question about a plan can be answered by a contrastive explanation. Contrastive explanations will compare the original plan against a contrastive plan that accounts for the user expectation. Providing contrastive explanations is not only effective in improving understanding, but is simpler than providing a full causal analysis BID12.Following the approach of we propose an approach to contrastive explanations through a dialogue with the user. The proposed approach consists of an iterative four-stage process illustrated in FIG0. First the user asks a contrastive question in natural language. Second, a constraint is derived from the user question, in the following we refer to this constraint as the formal question. Third a hypothetical model (HModel) is generated which encapsulates this constraint. A solution to this model is the hypothetical plan (HPlan) that can be compared to the original plan to show the consequence of the user suggestion. The user can compare plans and iterate the process by asking further questions, and refining the HModel. This allows the user to combine different compilations to create a more constrained HModel, producing more meaningful explanations, until the explanation is satisfactory. Each stage of this process represents a vital research challenge. This paper describes and formalises the third stage of this process: compiling the formal question into a hypothetical model for temporal and numeric planning. We are interested in temporal and numeric planning problems, for which optimal solutions are difficult to find. Therefore, while the process described above serves for explanation, the insight of the user can also in guiding the planning process to a more efficient solution. As noted by BID15, the explanations could also give the user the opportunity to improve the plan with respect to their own preferences. The user could have hidden preferences which have not been captured in the model. The user could ask questions which enforce constraints that favour these preferences. The new plan could be sub-optimal, but more preferable to the user. The contribution of this paper is a formalisation of domain-independent and planner-agnostic compilations from formal contrastive questions to PDDL2.1 (Fox and Long 2003), necessary for providing contrastive explanations. The compilations shown are not exhaustive. However, they do cover an interesting set of questions which users would commonly have about both classical and temporal plans. The paper is organised as follows. The next section describes the planning definitions we will use throughout the paper. In Section 3 we describe the running example that we use to demonstrate our compilations throughout the paper. In Section 4 we list the set of formal questions that we are interested in, and formalise the compilations of each of these into constraints. Finally, we conclude the paper in Section 5 whilst touching on some interesting future work. Our definition of a planning model follows the definition of PDDL2.1 given by (Fox and Long 2003), extended by a set of time windows as follows. Definition 1 A planning model is a pair Π = D, P rob. 
The domain D = P s, V s, As, arity is a tuple where P s is a finite set of predicate symbols, V s is a finite set of function symbols, As is a set of action schemas, called operators, and arity is a function mapping all of these symbols to their respective arity. The problem P rob = Os, I, G, W is a tuple where Os is the set of objects in the planning instance, I is the initial state, G is the goal condition, and W is a set of time windows. A set of atomic propositions P is formed by applying the predicate symbols P s to the objects Os (respecting arities). One proposition p is formed by applying an ordered set of objects o ⊆ O to one predicate ps, respecting its arity. For example, applying the predicate (block on ? a ?b) with arity 2 to the ordered set of objects {blockA, blockB} forms the proposition (block on blockA blockB). This process is called "grounding" and is denoted with: ground(ps, χ) = p where χ ⊆ O is an ordered set of objects. Similarly the set of primitive numeric expressions (PNEs) V are formed by applying the function symbols V s to Os. A state s consists of a time t ∈ R, a logical part s l ⊆ P, and a numeric part s v that describes the values for the PNE's at that state. The initial state I is the state at time t = 0.The goal G = g 1,..., g n is a set of constraints over P and V that must hold at the end of an action sequence for a plan to be valid. More specifically, for an action sequence φ = a 1, a 2,..., a n each with a respective time denoted by Dispatch(a i), we use the definition of plan validity from (Fox and Long 2003) (Definition 15 "Validity of a Simple Plan"). A simple plan is the sequence of actions φ which defines a happening sequence, t i=0...k and a sequence of states, s i=0...k+1 such that s 0 = I and for each i = 0... k, s i+1 is the of executing the happening at time t i. The simple plan φ is valid if s k+1 |= G.Each time window w ∈ W is a tuple w = w lb, w ub, w v where w v is a proposition which becomes true or a numeric effect which acts upon some n ∈ V. w lb ∈ R is the time at which the proposition becomes true, or the numeric effect is applied. w ub ∈ R is the time at which the proposition becomes false. The constraint w lb < w ub must hold. Note that the numeric effect is not effected at w ub.Similar to propositions and PNEs, the set of ground actions A is generated from the substitution of objects for operator parameters with respect to it's arity. Each ground action is defined as follows:Definition 2 A ground action a ∈ A has a duration Dur(a) which constrains the length of time that must pass between the start and end of a; a start (end) condition P re (a) (P re (a)) which must hold at the state that a starts (ends); an invariant condition P re ↔ (a) which must hold throughout the entire execution of a; add effects Eff (a) +, Eff (a) + ⊆ P that are made true at the start and ends of the action respectively; delete effects Eff (a) −, Eff (a) − ⊆ P that are made false at the start and end of the action respectively; and numeric effects Eff (a) DISPLAYFORM0 n that act upon some n ∈ V. We use as a running example the following planning model. FIG1 shows the domain D. The domain describes a scenario in which a robot is able to move between connected waypoints and mark them as visited. The domain contains three predicate symbols (robot at, connected, visited) with arities 2, 2, and 1 respectively. The domain includes only a single function symbol travel time with arity 2.There is a single operator goto waypoint. FIG2 shows the problem P rob. 
The problem specifies 7 objects: wp0, wp1, wp2, wp3, wp4, wp5 and kenny. The initial state specifies which propositions are initially true, such as the current location of the robot (robot at kenny wp0), and the initial values of the PNEs, e.g. (= (travel time wp5 wp3) 4.68). The goal is specified as a constraint over P ∪ V, in this example it is that the robot has visited all of the locations. FIG4 shows an example plan that solves this problem. This plan might appear sub-optimal. The robot moves from waypoint wp2 to wp1 and then immediately returns to wp2. This second action might seem redundant to the user. However, upon closer inspection of the connectivity of waypoints (shown in FIG3) we can see that the plan is in fact the optimal one. Visiting waypoint wp1 is a goal of the problem, and it is only connected to waypoints wp0 and wp2, both of which have already been visited. Waypoint wp0 is only (define (problem task) (:domain turtlebot_demo) (:objects wp0 wp1 wp2 wp3 wp4 wp5 -waypoint kenny -robot) (:init (robot_at kenny wp0) (visited wp0) (connected wp0 wp2) (connected wp0 wp4) (connected wp1 wp0) (connected wp1 wp2) (connected wp2 wp1) (connected wp2 wp4) (connected wp2 wp5) (connected wp3 wp5) (connected wp5 wp0) (connected wp5 wp2) (connected wp5 wp3) (= (travel_time wp0 wp2) 1.45) (= (travel_time wp0 wp4) 2)... (:goal (and (visited wp1) (visited wp2) (visited wp3) (visited wp4) (visited wp5) ))) The robot is only allowed to move along the directed arrows. connected to waypoints wp2 and wp4, wp2 has been visited and wp4 is a dead end. For these reasons combined, the only logical option is to move back to wp2 after completing the goal of visiting wp1. This type of behaviour similarly happens between waypoints wp3 and wp5.A graphical representation such as FIG3 is not always available, and so even for this simple model and plan, deducing the reasoning behind the planned actions is not trivial. This is an example of where XAIP is useful. Using our proposed approach the user could have asked the question: "Why do we use the action (goto waypoint kenny wp1 wp2), rather than not using Figure 6: The hypothetical plan that accounts for the user's suggestion, avoiding the action of moving from wp1 to wp2. The cost of the plan is its duration (20.81).it?". From this question we could generate a contrastive plan with this constraint enforced (shown in Figure 6). Comparing the actions and costs of the original and the new plan could shed light on why the action needed to be used. The user can carry on asking questions until they were satisfied. Definition 3 An explanation problem is a tuple E = Π, φ, Q, in which Π is a planning model (Definition 1), φ is the plan generated by the planner, and Q is the specific question posed by the user. The problem is to provide insight that helps the user to answer question Q.In this paper, we assume that the user knows the model Π and the plan φ, so answers such as stating the goal of the problem will not increase their understanding. Given this, we propose the following set of questions, and provide a formal description for compilations of this set of formal questions of temporal plans: 1. Why is action a used in state s, rather than action b? (Section 4.1) 2. Why is action a not used in the plan, rather than being used? (Section 4.2) 3. Why is action a used in the plan, rather than not being used? (Section 4.3) 4. Why is action a used outside of time window w, rather than only being allowed within w? (Section 4.4) 5. 
Why is action a not used in time window w, rather than being used within w? (Section 4.5) 6. Why is action a used at time t, rather than at least some time t after/before t? (Section 4.6) 7. Why is action a not performed before (after) action b, rather than a being performed after (before) b? (Section 4.7) These questions were derived by systematically assessing ways that counterfactual situations could occur in plans, and choosing those that would be useful over many applications. This is not an exhaustive list of possible constraints that can be enforced upon the original model, however, it does represent a list of questions that would be useful in specific contexts and applications. Part of being able to answer these questions is the ability to reason about what would happen in the counterfactual cases. We approach this problem by generating plans for the counterfactual cases via compilations. A compilation of a planning instance where the model is given by Π, and a question is given by Q is shown as Compilation(Π, Q) = Π where: Π = P s, V s, As, arity, Os, I, G, W We call Π the hypothetical model, or HModel. However, Π can also be used as the input model so that the user can iteratively ask questions about some model, i.e: DISPLAYFORM0 This allows the user to stack questions, further increasing their understanding of the plan through combining compilations. Combining compilations this way provides a much wider set of possible constraints. After the HModel is formed, it is solved to give the HPlan. Any new operators that are used in the compilation to enforce some constraint are trivially renamed back to the original operators they represent. For each iteration of compilation the HPlan is validated against the original model Π. Given a plan φ, a formal question Q is asked of the form:Why is the operator o with parameters χ used in state s, rather than the operator n with parameters χ? where o = n or χ = χ For example, given the example plan in FIG4 the user might ask:"Why is (goto waypoint kenny wp2 wp5) used, rather than (goto waypoint kenny wp2 wp4)? " They might ask this because a goal of the problem is to visit wp4. As the robot visits wp5 from wp3 later in the plan, it might make sense to the user for the robot to visit wp4 earlier, as wp5 will be visited at a later point. To generate the HPlan, a compilation is formed such that the ground action b = ground(n, χ) appears in the plan in place of the action a i = ground(o, χ). Given the example above b = ground(goto waypoint, {kenny, wp2, wp4}), and a i = ground(goto waypoint, {kenny, wp2, wp5}). Given a plan: φ = a 1, a 2,..., a n The ground action a i at state s is replaced with b, which is executed, ing in state I, which becomes the new initial state in the HModel. A time window is created for each durative action that is still executing in state s. These model the end effects of the concurrent actions. A plan is then generated from this new state with these new time windows for the original goal, which gives us the plan: DISPLAYFORM0 The HPlan is then the initial actions of the original plan φ concatenated with b and the new plan φ: a 1, a 2,..., a i−1, b, a 1, a 2,..., a n Specifically, the HModel Π is: Π = P s, V s, As, arity, Os, I, G, W ∪ C where:• I is the final state obtained by executing 1 a 1, a 2,..., a i−1, b from state I.• C is a set of time windows w x, for each durative action a j that is still executing in the state I. 
For each such action, w x specifies that the end effects of that action will become true at the time point at which the action is scheduled to complete. Specifically: DISPLAYFORM1 In the case in which an action a j that is executing in state I has an overall condition that is violated, this is detected when the plan is validated against the original model. As an example, given the user question above, the new initial state I from the running example is shown below:(:init (robot at kenny wp4) (visited wp2) (visited wp1) (visited wp4) (connected wp0 wp2) (connected wp0 wp4) (connected wp1 wp0) (connected wp1 wp2 In this state the robot has visited the waypoints wp2, wp1, and wp4, and is currently at wp4. This new initial state is then used to plan for the original goals to get the plan φ, which, along with b and φ, gives the HPlan. However, the problem is unsolvable from this state as there are no connections from wp4 to any other waypoint. By applying the user's constraint, and showing there are no more applicable actions, it answers the above question: "because by doing this there is no way to complete the goals of the problem".This compilation keeps the position of the replaced action in the plan, however, it may not be optimal. This is because we are only re-planning after the inserted action has been performed. The first half of the plan, because it was originally planned to support a different set of actions, may now be inefficient, as shown by BID1 .If the user instead wishes to replace the action without necessarily retaining its position in the plan, then the following constraints on adding and removing an action from the plan can be applied iteratively, as mentioned previously. Given a plan φ, a formal question Q is asked of the form:Why is the operator o with parameters χ not used, rather than being used?1 We use VAL to validate this execution. We use the add and delete effects of each action, at each happening (provided by VAL), up to the replacement action to compute I. For example, given the example plan in FIG4 the user might ask: "Why is (goto waypoint kenny wp2 wp4) not used, rather than being used?" They might ask this because a goal of the problem is to visit wp4. As the robot is at wp2 early in the plan, and you can visit wp4 from wp2, it might make sense to the user for the robot to visit wp4 at that time. To generate the HPlan, a compilation is formed such that the action a = ground(o, χ) must be applied for the plan to be valid. The compilation introduces a new predicate has done a, which represents which actions have been applied. Using this, the goal is extended to include that the user suggested action has been applied. The HModel Π is: Π = P s, V s, As, arity, Os, I, G, W where • P s = P s ∪ {has done a}• As = {o} ∪ As \ {o}• arity (x) = arity(x), ∀x ∈ arity• arity (has done a) = arity (o) = arity(o)• G = G ∪ {ground(has done a, χ)} where the new operator o extends o with the add effect has done a with corresponding parameters, i.e. 
DISPLAYFORM0 For example, given the user question above, the operator goto waypoint from the running example is extended to goto waypoint with the additional add effect has done a:(:durative-action goto_waypoint':parameters (?v -robot ?from ?to -waypoint):duration(= ?duration (travel_time ?from ?to)):condition (at start (robot_at ?v ?from) (over all (connected ?from ?to)):effect (and (at end (visited ?to)) (at start (not (robot_at ?v ?from))) (at end (robot_at ?v ?to)) (at end (has done goto waypoint' ?v ?from ?to))))) and the goal is extended to include the proposition: (has done goto waypoint kenny wp2 wp4). Given a plan φ, a formal question Q is asked of the form:Why is the operator o with parameters χ used, rather than not being used?For example, given the example plan in FIG4 the user might ask: "Why is (goto waypoint kenny wp1 wp2) used, rather than not being used?" A user might ask this because the robot has already satisfied the goal to visit wp2 before this point with the action (goto waypoint kenny wp0 wp2). The user might think the second action (goto waypoint kenny wp1 wp2) seems redundant. The specifics of the compilation is similar to the compilation in Section 4.2. The HModel is extended to introduce a new predicate not done action which represents actions that have not yet been performed. The operator o is extended with the new predicate as an additional delete effect. The initial state and goal are then extended to include the user selected grounding of not done action. Now, when the user selected action is performed it deletes the new goal and so invalidates the plan. This ensures the user suggested action is not performed. For example, given the user question above, an HPlan is generated that does not include the action (goto waypoint kenny wp1 wp2), and is shown in Figure 6. Given a plan φ, a formal question Q is asked of the form:Why is the operator o with parameters χ used outside of time lb < t < ub, rather than only being allowed within this time window?For example, given the example plan in FIG4 the user might ask: "Why is (goto waypoint kenny wp0 wp4) used outside of times 0 and 2, rather than being restricted to that time window?" A user can ask this because the action (goto waypoint kenny wp0 wp4) is used at the end of the plan, the robot starts at wp0 and must visit wp4 to satisfy a goal. The user might think that satisfying this goal earlier in the plan will free up time for the robot to complete the other goals. To generate the HPlan, the planning model is compiled such that the ground action a = ground(o, χ) can only be used between times lb and ub. To do this, the original operator o is replaced with two operators o a and o ¬a, which extend o with extra constraints. Operator o ¬a replaces the original operator o for all other actions ground(o, χ), where χ = χ. The action ground(o ¬a, χ) cannot be used (this is enforced using the compilation for forbidding an action described in Section 4.3). Operator o a acts as the operator o specifically for the action a = ground(o, χ), which has an added constraint that it can only be performed between lb and ub. Specifically, the HModel Π is: Π = P s, V s, As, arity, Os, I, G, W where:• P s = P s ∪ {can do a, not done a} DISPLAYFORM0 where the new operators o ¬a and o a extend o with the delete effect not done a and the precondition can do a, respectively. 
i.e: DISPLAYFORM1 As the proposition ground(can do a, χ) must be true for ground(o a, χ) to be performed, this ensures that the action a can only be performed within the times lb and ub. Other actions from the same operator can still be applied at any time using the new operator o ¬a. As in Section 4.3 we make sure the ground action ground(o ¬a, χ) can never appear in the plan. For example, given the user question above, the operator goto waypoint from FIG1 The initial state is extended to include the proposition (not done goto waypoint kenny wp0 wp4) and the time window 0, 2, (can do goto waypoint kenny wp0 wp4). This time window enforces that the proposition (can do goto waypoint kenny wp0 wp4) is true between times 0 and 2. The ing HPlan is:Following the user suggestion, the action is no longer applied outside of the time window, and in fact does not appear in the plan at all. Given a plan φ, a formal question Q is asked of the form:Why is the operator o with parameters χ not used at time lb < t < ub, rather than being used in this time window? For example, given the example plan in FIG4 the user might ask:"Why is (goto waypoint kenny wp0 wp4) not used between times 0 and 2, rather than being used in this time window? " The HPlan given in Section 4.4 shows the user that there is a better plan which does not have the action in this time window. However, the user may only be satisfied once they have seen a plan where the action is performed in their given time window. To allow this the action may have to appear in other parts of the plan as well. This constraint differs from Section 4.4 in two ways: first the action is now forced to be applied in the time window, and second the action can be applied at other times in the plan. This constraint is useful in cases such as a robot that has a fuel level. As fuel is depleted when travelling between waypoints, the robot must refuel, possibly more than once. The user might ask "why does the robot not refuel between the times x and y (as well as the other times it refuels)?".To generate the HPlan, the planning model is compiled such that the ground action a = ground(o, χ) is forced to be used between times lb and ub, but can also appear at any other time. This is done using a combination of the compilation in Section 4.2 and a variation of the compilation in Section 4.4. Simply, the former ensures that new action ground(o a, χ) must appear in the plan, and the latter ensures that the action can only be applied within the time window. The variation of the latter compilation is that the operator o ¬a is not included, and instead the original operator is kept in the domain. This allows the original action a = ground(o, χ) to be applied at other times in the plan. Given this, the HModel Π is: Π = P s, V s, As, arity, Os, I, G, W where:• P s = P s ∪ {can do a, has done a} • As = {o a} ∪ As • arity (x) = arity(x), ∀x ∈ arity • arity (can do a) = arity (has done a) = arity (o a) = arity(o) • G = G ∪ {ground(has done a, χ)} • W = W ∪ {lb, ub, ground(can do a, χ) } As wp4 is a dead end there is no valid HPlan following this suggestion. Given a plan φ, a formal question Q is asked of the form:Why is the operator o with parameters χ used at time t, rather than at least some duration t after/before t? For example, given the example plan in FIG4 the user might ask:"Why is (goto waypoint kenny wp2 wp5) used at time 5.45, rather than at least 4 seconds earlier? 
" A user might ask this question in general because they expected an action to appear earlier or later in a plan. This could happen for a variety of reasons. In domains with resources that are depleted by specific actions, and are replenished by others, such as fuel for vehicles, these questions may arise often. A user might want an explanation for why a vehicle was refueled earlier or later than what was expected. In this case the refuel action can be delayed or advanced to answer this question. For this particular example the user might want the action (goto waypoint kenny wp2 wp5) to be advanced nearer the start of the plan. The user might see that in the original plan the robot goes from wp2 to wp1 at time 1.45 and then instantly goes back again. The user might think that a better action would be to go from wp2 to wp5 before this. The user might notice that wp5 is connected to more waypoints than wp1. Having these extra options might prevent redundant actions that revisit waypoints. To generate the HPlan, the planning model is compiled such that the ground action a = ground(o, χ) is forced to be used in time window w which is at least t before/after t. This compilation is an example of a combination of two other compilations: adding an action (in Section 4.2) and forbidding the action outside of a time window (in Section 4.4). The latter enforces that the action can only be applied within the user specified time window, while the former enforces that the action must be applied. The HModel Π is: Π = P s, V s, As, arity, Os, I, G, W where:• P s = P s ∪ {can do a, not done a, has done a} DISPLAYFORM0 ground(not done a, χ), ground(has done a, χ) }• W = W ∪ bef ore: 0, tReal, ground(can do a, χ) af ter: tReal, inf, ground(can do a, χ) where the new operators o a and o ¬a both extend o. The latter with the delete effect not done a, while o a extends o with the precondition can do a and add effect has done a; i.e.: DISPLAYFORM1 This ensures that the ground action a = ground(o a, χ) must be present in the plan between the times 0 and tReal, or tReal and inf, depending on the user question, and between those times only. In addition, the user selected action is forced to be performed using the same approach as in Section 4.2. Given the user question above, the HPlan is: Given a plan φ, a formal question Q is asked of the form:Why is the operator o with parameters χ used before (after) the operator n with parameters χ, rather than after (before)? where o = n or χ = χFor example, given the example plan in FIG4 the user might ask:"Why is (goto waypoint kenny wp2 wp1) used before (goto waypoint kenny wp2 wp5), rather than after?"A user might ask this because there are more connections from wp5 than wp2. The user might think that if the robot has more choice of where to move to, the planner could make a better choice, giving a more efficient plan. The compilation to the HModel is performed in the following way. First, a directed-acyclic-graph (DAG) N, E is built to represent each ordering between actions suggested by the user. For example the ordering of Q is a ≺ b where a = ground(o, χ) and b = ground(n, χ).This DAG is then encoded into the model Π to create Π. For each edge (a, b) ∈ E two new predicates are added: ordered ab representing that an edge exists between a and b in the DAG, and traversed ab representing that the edge between actions a and b has been traversed. For each node representing a ground action a ∈ N, the action is disallowed using the compilation from Section 4.3. 
Also, for each such action a new operator o a is added to the domain, with the same functionality of the original operator o. The arity of the new operator, arity(o a) is the combined arity of the original operator plus the arity of all of a's sink nodes. Specifically, the HModel Π is: Π = P s, V s, As, arity, Os, I, G, W where: DISPLAYFORM0 • As = {o a} ∪ As, ∀a ∈ N • arity (x) = arity(x), ∀x ∈ arity DISPLAYFORM1 where χ and χ are the parameters of a and b, respectively. In the above, we abuse the arity notation to specify the arity of an action to mean the arity of the operator from which it was ground; e.g. arity(a) = arity(o) where a = ground(o, χ).Each new operator o a extends o with the precondition that all incoming edges must have been traversed, i.e. the source node has been performed. The effects are extended to add that its outgoing edges have been traversed. That is:P re (o a) = P re (o) ∪ {ordered ab ∈ P s, ∀b} ∪ {traversed ca ∈ P s, ∀c} DISPLAYFORM2 This ensures that the ordering the user has selected is maintained within the HPlan. As the operator o a has a combined arity of the original operator plus the arity of all of a's sink nodes, there exists a large set of possible ground actions. However, for all b ∈ N, ordered ab is a precondition of o a; and for each edge (a, b) ∈ E the ground proposition ground(ordered ab, χ, χ) is added to the initial state to represent that the edge exists in the DAG. Therefore, the only grounding of the operator that can be performed is the action with parameters χ + χ. This drastically reduces the size of the search space. For example given the user question above, two new operators node goto waypoint kenny wp2 wp5 (shown in FIG5) and node goto waypoint kenny wp2 wp1 are added to the domain. These extend operator goto waypoint from FIG1 as described above. The HPlan generated is shown below: In this paper we have presented an approach to compiling a set of formal contrastive questions into domain independent constraints. These are then used within the XAI paradigm to provide explanations. We have described how these compilations form a part of a series of stages which start with a user question and end with an explanation. This paper formalises and provides examples of these compilations in PDDL 2.1 for temporal and numeric domains and planners. DISPLAYFORM3 We have defined a series of questions which we believe a user may have about a plan in a PDDL2.1 setting. These questions cover a large set of scenarios, and can be stacked to create new interesting constraints which may answer a much richer set of questions. We acknowledge that the questions we provide compilations for do not cover the full set of contrastive questions one may have about a plan. For example the question, "Why is the operator o with parameters χ used at time lb < t < ub, rather than not being used in this time window?", can be answered using a variant of Section 4.5. For future work we plan to investigate which compilations will form an atomic set whose elements can be stacked to cover the full set of possible contrastive questions. We also acknowledge that the compilations we have formalised may have equivalent compilations. However, the ones we have described have proven successful for explanations. In future work, we will look to extend this work in several ways. While we define how to calculate plans for contrastive cases, we do not take full advantage of contrastive explanations by explaining the difference between two plans BID11. 
In particular, we will look to extend the presentation beyond plans alone to showing the difference between two causal chains as well. We will also explore contrastive explanations with preferences in PDDL 3 BID5. We will look at producing a language for expressing questions and constraints on plans; LTL will likely play a role in defining the semantics of any such language. Additional concepts concerning plan structure, such as the ability to specify that an action is part of the causal support for a goal or sub-goal, will be needed. As it stands, when we add a constraint to include an action, the constraint may be satisfied in trivial ways that are not relevant to answering the user's question: the action may be redundant, or undone in the HPlan as described in BID4, in which case the explanation may not be deemed satisfactory. These additional concepts will help solve this problem, as well as allowing users to ask more expressive questions such as "Why did you use action A rather than action B for achieving P?". Finally, we will provide functional and human-behavioural evaluations of our explanations, to assess both that they are satisfactory from a user perspective and that they provide actionable insight into the plan.
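As a concrete companion to the compilations described in Sections 4.2 and 4.3, the sketch below applies the "force an action" and "forbid an action" transformations to a toy dictionary encoding of a planning model. The encoding, helper names, and simplifications (no operator renaming, no arity bookkeeping, no PDDL syntax) are ours for illustration and are not the implementation used in this work.

```python
# Illustrative sketch of the "force an action" (Sec. 4.2) and "forbid an action"
# (Sec. 4.3) compilations on a toy dictionary encoding of a planning model.
import copy

def force_action(model, op_name, params):
    """Sec. 4.2: the operator gains a has_done_<op> add effect over its parameters,
    and the goal requires the user-selected grounding of that proposition."""
    hmodel = copy.deepcopy(model)
    op = hmodel["operators"][op_name]
    op["add_effects_end"].append(f"has_done_{op_name} " + " ".join(op["params"]))
    hmodel["goal"].append(f"has_done_{op_name} " + " ".join(params))
    return hmodel

def forbid_action(model, op_name, params):
    """Sec. 4.3: the operator gains a not_done_<op> delete effect; the selected
    grounding is placed in the initial state and the goal, so applying that ground
    action deletes a goal proposition and invalidates the plan."""
    hmodel = copy.deepcopy(model)
    op = hmodel["operators"][op_name]
    op["del_effects_end"].append(f"not_done_{op_name} " + " ".join(op["params"]))
    hmodel["init"].append(f"not_done_{op_name} " + " ".join(params))
    hmodel["goal"].append(f"not_done_{op_name} " + " ".join(params))
    return hmodel

model = {
    "operators": {"goto_waypoint": {"params": ["?v", "?from", "?to"],
                                    "add_effects_end": ["visited ?to", "robot_at ?v ?to"],
                                    "del_effects_end": []}},
    "init": ["robot_at kenny wp0", "visited wp0"],
    "goal": ["visited wp1", "visited wp2", "visited wp3"],
}

hmodel = force_action(model, "goto_waypoint", ["kenny", "wp2", "wp4"])
print(hmodel["goal"][-1])   # has_done_goto_waypoint kenny wp2 wp4
```

A planner run on the resulting HModel would then yield the HPlan that is compared against the original plan when forming the explanation.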
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
S1l6Qa3mcN
This paper introduces domain-independent compilations of user questions into constraints for contrastive explanations.
Methods that learn representations of nodes in a graph play a critical role in network analysis since they enable many downstream learning tasks. We propose Graph2Gauss - an approach that can efficiently learn versatile node embeddings on large scale (attributed) graphs that show strong performance on tasks such as link prediction and node classification. Unlike most approaches that represent nodes as point vectors in a low-dimensional continuous space, we embed each node as a Gaussian distribution, allowing us to capture uncertainty about the representation. Furthermore, we propose an unsupervised method that handles inductive learning scenarios and is applicable to different types of graphs: plain/attributed, directed/undirected. By leveraging both the network structure and the associated node attributes, we are able to generalize to unseen nodes without additional training. To learn the embeddings we adopt a personalized ranking formulation w.r.t. the node distances that exploits the natural ordering of the nodes imposed by the network structure. Experiments on real world networks demonstrate the high performance of our approach, outperforming state-of-the-art network embedding methods on several different tasks. Additionally, we demonstrate the benefits of modeling uncertainty - by analyzing it we can estimate neighborhood diversity and detect the intrinsic latent dimensionality of a graph. Graphs are a natural representation for a wide variety of real-life data, from social and rating networks (Facebook, Amazon), to gene interactions and citation networks (BioGRID, arXiv). Node embeddings are a powerful and increasingly popular approach to analyze such data BID0. By operating in the embedding space, one can employ proved learning techniques and bypass the difficulty of incorporating the complex node interactions. Tasks such as link prediction, node classification, community detection, and visualization all greatly benefit from these latent node representations. Furthermore, for attributed graphs by leveraging both sources of information (network structure and attributes) one is able to learn more useful representations compared to approaches that only consider the graph BID33 BID24 BID5.All existing (attributed) graph embedding approaches represent each node by a single point in a low-dimensional continuous vector space. Representing the nodes simply as points, however, has a crucial limitation: we do not have information about the uncertainty of that representation. Yet uncertainty is inherent when describing a node in a complex graph by a single point only. Imagine a node for which the different sources of information are conflicting with each other, e.g. pointing to different communities or even revealing contradicting underlying patterns. Such discrepancy should be reflected in the uncertainty of its embedding. As a solution to this problem, we introduce a novel embedding approach that represents nodes as Gaussian distributions: each node becomes a full distribution rather than a single point. Thereby, we capture uncertainty about its representation. To effectively capture the non-i.i.d. nature of the data arising from the complex interactions between the nodes, we further propose a novel unsupervised personalized ranking formulation to learn the embeddings. Intuitively, from the point of view of a single node, we want nodes in its immediate neighborhood to be closest in the embedding space, while nodes multiple hops away should become increasingly more distant. 
This ordering between the nodes imposed by the network structure w.r.t the distances between their embeddings naturally leads to our ranking formulation. Taking into account this natural ranking from each node's point of view, we learn more powerful embeddings since we incorporate information about the network structure beyond first and second order proximity. Furthermore, when node attributes (e.g. text) are available our method is able to leverage them to easily generate embeddings for previously unseen nodes without additional training. In other words, Graph2Gauss is inductive, which is a significant benefit over existing methods that are inherently transductive and do not naturally generalize to unseen nodes. This desirable inductive property comes from the fact that we are learning an encoder that maps the nodes' attributes to embeddings. The main contributions of our approach are summarized as follows: a) We embed nodes as Gaussian distributions allowing us to capture uncertainty. b) Our unsupervised personalized ranking formulation exploits the natural ordering of the nodes capturing the network structure at multiple scales.c) We propose an inductive method that generalizes to unseen nodes and is applicable to different types of graphs: plain/attributed, directed/undirected. The focus of this paper is on unsupervised learning of node embeddings for which many different approaches have been proposed. For a comprehensive recent survey see BID0, BID8. Approaches such as DeepWalk and node2vec BID25 BID9 look at plain graphs and learn an embedding based on random walks by extending or adapting the Skip-Gram BID21 architecture. LINE BID30 uses first-and second-order proximity and trains the embedding via negative sampling. SDNE similarly has a component that preserves second-order proximity and exploits first-order proximity to refine the representations. GraRep BID1 is a factorization based method that considers local and global structural information. Tri-Party Deep Network Representation (TRIDNR) BID24 considers node attributes, network structure and potentially node labels. CENE BID28 similarly to BID5 treats the attributes as special kinds of nodes and learns embeddings on the augmented network. Text-Associated DeepWalk (TADW) BID33 performs low-rank matrix factorization considering graph structure and text features. Heterogeneous networks are consider in BID29, while Huang et al. similarly to considers labels. GraphSAGE is an inductive method that generates embeddings by sampling and aggregating attributes from a nodes local neighborhood and requires the edges of the new nodes. Graph convolutional networks are another family of approaches that adapt conventional CNNs to graph data BID16 BID3 BID13 BID22 BID23 BID26. They utilize the graph Laplacian and the spectral definition of a convolution and boil down to some form of aggregation over neighbors such as averaging. They can be thought of as implicitly learning an embedding, e.g. by taking the output of the last layer before the supervised component. See BID22 for an overview. In contrast to this paper, most of these methods are (semi-)supervised. The graph variational autoencoder (GAE) BID17 ) is a notable exception that learns node embeddings in an unsupervised manner. Few approaches consider the idea of learning an embedding that is a distribution. BID31 are the first to learn Gaussian word embeddings to capture uncertainty. 
Closest to our work, BID12 represent knowledge graphs and Dos BID4 study heterogeneous graphs for node classification. Both approaches are not applicable for the context of unsupervised learning of (attributed) graphs that we are interested in. The method in BID12 learns an embedding for each component of the triplets (head, tail, relation) in the knowledge graph. Note that we cannot naively employ this method by considering a single relation "has an edge" and a single entity "node". Since their approach considers similarity between entities and relations, all nodes would be trivially similar to the single relation. Considering the semi-supervised approach proposed in Dos BID4, we cannot simply "turn off" the supervised component to adapt their method for unsupervised learning, since given the defined loss we would trivially map all nodes to the same Gaussian. Additionally, both of these approaches do not consider node attributes. In this section we introduce our method Graph2Gauss (G2G) and detail how both the attributes and the network structure influence the learning of node representations. The embedding is carried out in two steps: (i) the node attributes are passed through a non-linear transformation via a deep neural network (encoder) and yield the parameters associated with the node's embedding distribution; (ii) we formulate an unsupervised loss function that incorporates the natural ranking of the nodes as given by the network structure w.r.t. a dissimilarity measure on the embedding distributions. Problem definition. Let G = (A, X) be a directed attributed graph, where A ∈ R N ×N is an adjacency matrix representing the edges between N nodes and X ∈ R N ×D collects the attribute information for each node where x i is a D dimensional attribute vector of the i th node. 1 V denotes the set of all nodes. We aim to find a lower-dimensional Gaussian distribution embedding DISPLAYFORM0 such that nodes similar w.r.t. attributes and network structure are also similar in the embedding space given a dissimilarity measure ∆(h i, h j). In FIG4 for example we show nodes that are embedded as two dimensional Gaussians. To capture the structural information of the network in the embedding space, we propose a personalized ranking approach. That is, locally per node i we impose a ranking of all remaining nodes w.r.t. their distance to node i in the embedding space. More precisely, in this paper we exploit the k-hop neighborhoods of each node. Given some anchor node i, we define N ik = {j ∈ V |i = j, min(sp(i, j), K) = k} to be the set of nodes who are exactly k hops away from node i, where V is the set of all nodes, K is a hyper-parameter denoting the maximum distance we are wiling to consider, and sp(i, j) returns either the length of the shortest path starting at node i and ending in node j or ∞ if node j is not reachable. Intuitively, we want all nodes belonging to the 1-hop neighborhood of i to be closer to i w.r.t. their embedding, compared to the all nodes in its 2-hop neighborhood, which in turn are closer than the nodes in its 3-hop neighborhood and so on up to K. Thus, the ranking that we want to ensure from the perspective of node i is DISPLAYFORM0 or equivalently, we aim to satisfy the following pairwise constraints DISPLAYFORM1 Going beyond mere first-order and second-order proximity this enables us to capture the network structure at multiple scales incorporating local and global structure. Dissimilarity measure. 
To solve the above ranking task we have to define a suitable dissimilarity measure between the latent representation of two nodes. Since our latent representations are distributions, similarly to Dos BID4 and BID12 we employ the asymmetric KL divergence. This gives the additional benefit of handling directed graphs in a sound way. More specifically, given the latent Gaussian distribution representation of two nodes h i, h j we define DISPLAYFORM2 Here we use the notation µ i, Σ i to denote the outputs of some functions µ θ (x i) and Σ θ (x i) applied to the attributes x i of node i and tr denotes the trace of a matrix. The asymmetric KL divergence also applies to the case of an undirected graph by simply processing both directions of the edge. We could alternatively use a symmetric dissimilarity measure such as the Jensen-Shannon divergence or the expected likelihood (probability product kernel). The functions µ θ (x i) and Σ θ (x i) are deep feed-forward non-linear neural networks parametrized by θ. It is important to note that these parameters are shared across instances and thus enjoy statistical strength benefits. Additionally, we design µ θ (x i) and Σ θ (x i) such that they share parameters as well. More specifically, a deep encoder f θ (x i) processes the node's attributes and outputs an intermediate hidden representation, which is then in turn used to output µ i and Σ i in the final layer of the architecture. We focus on diagonal covariance matrices. 2 The mapping from the nodes' attributes to their embedding via the deep encoder is precisely what enables the inductiveness of Graph2Gauss. Since it is intractable to find a solution that satisfies all of the pairwise constraints defined in Sec. 3.1 we turn to an energy based learning approach. The idea is to define an objective function that penalizes ranking errors given the energy of the pairs. More specifically, denoting the KL divergence between two nodes as the respective energy, E ij = D KL (N j ||N i), we define the following loss to be optimized DISPLAYFORM0 where DISPLAYFORM1 } is the set of all valid triplets. The E ij k terms are positive examples whose energy should be lower compared to the energy of the negative examples E ij l. Here, we employed the so called square-exponential loss BID18 which unlike other typically used losses (e.g. hinge loss) does not have a fixed margin and pushes the energy of the negative terms to infinity with exponentially decreasing force. In our setting, for a given anchor node i, the energy E ij should be lowest for nodes j in his 1-hop neighborhood, followed by a higher energy for nodes in his 2-hop neighborhood and so on. Finally, we can optimize the parameters θ of the deep encoder such that the loss L is minimized and the pairwise rankings are satisfied. Note again that the parameters are shared across all instances, meaning that we share statistical strength and can learn them more easily in comparison to treating the distribution parameters (e.g. µ i, Σ i) independently as free variables. The parameters are optimized using Adam BID15 ) with a fixed learning rate of 0.001.Sampling strategy. For large graphs, the complete loss is intractable to compute, confirming the need for a stochastic variant. The naive approach would be to sample triplets from D t uniformly, i.e. replace (i,j k,j l)∈Dt with E (i,j k,j l)∼Dt in Eq. 1. However, with the naive sampling we are less likely to sample triplets that involve low-degree nodes since high degree nodes occur in many more pairwise constraints. 
This in turn means that we update the embedding of low-degree nodes less often which is not desirable. Therefore, we propose an alternative node-anchored sampling strategy. Intuitively, for every node i, we randomly sample one other node from each of its neighborhoods (1-hop, 2-hop, etc.) and then optimize over all the corresponding pairwise constraints DISPLAYFORM2 Naively applying the node-anchored sampling strategy and optimizing Eq. 1, however, would lead to biased estimates of the gradient. Theorem 1 shows how to adapt the loss such that it is equal in expectation to the original loss under our new sampling strategy. As a consequence, we have unbiased estimates of the gradient using stochastic optimization of the reformulated loss. Theorem 1 For all i, let (j 1, . . ., j K) be independent uniform random samples from the sets (N i1, . . ., N iK) and |N i * | the cardinality of each set. Then L is equal in expectation to DISPLAYFORM3 We provide the proof in the appendix. For cases where the number of nodes N is particularly large we can further subsample mini-batches, by selecting anchor nodes i at random. Furthermore, in our experimental study, we analyze the effect of the sampling strategy on convergence, as well as the quality of the stochastic variant w.r.t. the obtained solution and the reached local optima.2 To ensure that they are positive definite, in the final layer we outputσ id ∈ R and obtain σ id = elu(σ id)+1. Inductive learning. While during learning we need both the network structure (to evaluate the ranking loss) and the attributes, once the learning concludes, the embedding for a node can be obtained solely based on its attributes. This enables our method to easily handle the issue of obtaining representations for new nodes that were not part of the network during training. To do so we simply pass the attributes of the new node through our learned deep encoder. Most approaches cannot handle this issue at all, with a notable exception being SDNE and GraphSAGE. However, both approaches require the edges of the new node to get the node's representation, and cannot handle nodes that have no existing connections. In contrast, our method can handle even such nodes, since after the model is learned we rely only on the attribute information. Plain graph embedding. Even though attributed graphs are often found in the real-world, sometimes it is desirable to analyze plain graphs. As already discussed, our method easily handles plain graphs, when the attributes are not available, by using one-hot encoding of the nodes instead. As we later show in the experiments we are able to learn useful representations in this scenario, even outperforming some attributed approaches. Naturally, in this case we lose the inductive ability to handle unseen nodes. We compare the one-hot encoding version, termed G2G oh, with our full method G2G that utilizes the attributes, as well as all remaining competitors. Encoder architecture. Depending on the type of the node attributes (e.g. images, text) we could in principle use CNNs/RNNs to process them. We could also easily incorporate any of the proposed graph convolutional layers inheriting their benefits. However, we observe that in practice using simple feed-forward architecture with rectifier units is sufficient, while being much faster and easier to train. Better yet, we observed that Graph2Gauss is not sensitive to the choice of hyperparameters such as number and size of hidden layers. 
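To make the encoder and the ranking loss concrete, the following is a minimal sketch in PyTorch. The diagonal-Gaussian KL term implements the dissimilarity measure defined above, the elu(x)+1 trick from the footnote keeps the covariance entries positive (treated as variances here), and the square-exponential form E_pos^2 + exp(-E_neg) is our reading of the loss described in the text rather than a verbatim reproduction.

```python
# Minimal sketch of the Graph2Gauss encoder and ranking loss (not the reference code).
# The square-exponential loss form below is inferred from the textual description.
import torch
import torch.nn as nn
import torch.nn.functional as F

class G2GEncoder(nn.Module):
    def __init__(self, attr_dim, hidden_dim, embed_dim):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(attr_dim, hidden_dim), nn.ReLU())
        self.mu = nn.Linear(hidden_dim, embed_dim)
        self.sigma = nn.Linear(hidden_dim, embed_dim)

    def forward(self, x):
        h = self.hidden(x)
        mu = self.mu(h)
        sigma = F.elu(self.sigma(h)) + 1.0   # keeps the diagonal covariance entries positive
        return mu, sigma

def kl_diag_gauss(mu_i, sig_i, mu_j, sig_j):
    """Asymmetric energy E_ij = KL(N_j || N_i) for diagonal Gaussians (sig = variances)."""
    ratio = sig_j / sig_i
    diff = (mu_i - mu_j) ** 2 / sig_i
    return 0.5 * (ratio + diff - 1.0 - torch.log(ratio)).sum(-1)

def square_exponential_loss(e_pos, e_neg):
    """Push positive (closer-hop) energies down and negative energies up, with no fixed
    margin (assumed form: E_pos^2 + exp(-E_neg))."""
    return (e_pos ** 2 + torch.exp(-e_neg)).mean()

# toy usage: anchor node i with a 1-hop neighbour j_k and a 2-hop neighbour j_l
enc = G2GEncoder(attr_dim=16, hidden_dim=64, embed_dim=8)
x = torch.randn(3, 16)                       # attributes of nodes [i, j_k, j_l]
mu, sigma = enc(x)
e_pos = kl_diag_gauss(mu[0], sigma[0], mu[1], sigma[1])
e_neg = kl_diag_gauss(mu[0], sigma[0], mu[2], sigma[2])
loss = square_exponential_loss(e_pos, e_neg)
loss.backward()
```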
We provide more detailed information and sensible defaults in the appendix. Complexity. The time complexity for computing the original loss is O(N 3) where N is the number of nodes. Using our node-anchored sampling strategy, the complexity of the stochastic version is O(K 2 N) where K is the maximum distance considered. Since a small value of K ≤ 2 consistently showed good performance, K 2 becomes negligible and thus the complexity is O(N), meaning linear in the number of nodes. This coupled with the small number of epochs T needed for convergence (T ≤ 2000 for all shown experiments, see e.g. FIG2) and an efficient GPU implementation also made our method faster than most competitors in terms of wall-clock time. We compare Graph2Gauss with and without considering attributes (G2G, G2G oh) to several competitors namely: TRIDNR and TADW BID24 BID33 as representatives that consider attributed graphs, GAE BID17 as the unsupervised graph convolutional representative, and node2vec BID9 as a representative of the random walk based plain graph embeddings. Additionally, we include a strong Logistic Regression baseline that considers only the attributes. As with all other methods we train TRIDNR in a unsupervised manner, however, since it can only process raw text as attributes (rather than e.g. bag-of-words) it is not always applicable. Furthermore, since TADW, and GAE only support undirected graphs we must symmetrize the graph before using them -giving them a substantial advantage, especially in the link prediction task. Moreover, in all experiments if the competing techniques use an L dimensional embedding, G2G's embedding is actually only half of this dimensionality so that the overall number of'parameters' per node (mean vector + variance terms of the diagonal Σ i) matches L.Dataset description. We use several attributed graph datasets. Cora BID20 is a well-known citation network labeled based on the paper topic. While most approaches report on a small subset of this dataset we additionally extract from the original data the entire network and name these two datasets CORA (N = 19793, E = 65311, D = 8710, K = 70) and CORA-ML (N = 2995, E = 8416, D = 2879, K = 7) respectively. CITESEER (N = 4230, E = 5358, D = 2701, K = 6) BID6, DBLP (N = 17716, E = 105734, D = 1639, K = 4) and PUBMBED (N = 18230, E = 79612, D = 500, K = 3) BID27 are other commonly used citation datasets. We provide all datasets, the source code of G2G, and further supplementary material (https://www.kdd.in.tum.de/g2g). Setup. Link prediction is a commonly used task to demonstrate the meaningfulness of the embeddings. To evaluate the performance we hide a set of edges/non-edges from the original graph and train on the ing graph. Similarly to BID17 and we create a validation/test set that contains 5%/10% randomly selected edges respectively and equal number of randomly selected non-edges. We used the validation set for hyper-parameter tuning and early stopping and the test set only to report the performance. As by convention we report the area under the ROC curve (AUC) and the average precision (AP) scores for each method. To rank the candidate edges we use the negative energy −E ij for Graph2Gauss, and the exact same approach as in the respective original methods (e.g. dot product of the embeddings).Performance on real-world datasets. TAB0 shows the performance on the link prediction task for different datasets and embedding size L = 128. 
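The link prediction protocol described above (rank held-out edges and non-edges by the negative energy and report AUC/AP) can be summarized in a few lines; this is an illustrative sketch using scikit-learn metrics, not the authors' evaluation code, and `energy` is again a placeholder for E_ij computed from the learned Gaussian embeddings.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def link_prediction_scores(test_edges, test_non_edges, energy):
    """Score each held-out candidate pair by the negative energy -E_ij and
    report AUC / AP, mirroring the evaluation protocol described in the text."""
    pairs = list(test_edges) + list(test_non_edges)
    labels = np.array([1] * len(test_edges) + [0] * len(test_non_edges))
    scores = np.array([-energy(i, j) for (i, j) in pairs])
    return roc_auc_score(labels, scores), average_precision_score(labels, scores)
```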
As we can see, our method significantly outperforms the competitors across all datasets, which is a strong sign that the learned embeddings are useful. Furthermore, even the constrained version of our method G2G oh that does not consider attributes at all outperforms the competitors on some datasets. While GAE achieves comparable performance on some of the datasets, their approach does not scale to large graphs. In fact, for graphs beyond 15K nodes we had to revert to slow training on the CPU since the data did not fit on the GPU memory (12GB). The simple Logistic Regression baseline showed surprisingly strong performance, even outperforming some of the more complicated methods. We also include the performance on the so-called "Cora-ML Easy" dataset, obtained from the Cora-ML dataset by making it undirected and selecting the nodes in the largest connected component. We see that while node2vec struggles on the original real-world data, it significantly improves in this "easy" setting. On the contrary, Graph2Gauss handles both settings effortlessly. This demonstrates that Graph2Gauss can be readily applied in realistic scenarios on potentially messy real-world data. Sensitivity analysis. In Figs. 1(a) and 1(b) we show the performance w.r.t. the dimensionality of the embedding, averaged over 10 trials. G2G is able to learn useful embeddings with strong performance even for relatively small embedding sizes. Even for the case L = 2, where we embed the points as one-dimensional Gaussian distributions (L = 1 + 1 for the mean and the sigma of the Gaussian), G2G still outperforms all of the competitors irrespective of their much higher embedding sizes. Finally, we evaluate the performance w.r.t. the percentage of training edges varying from 15% to 85%, averaged over 10 trials. We can see in Figs. 1(c) and 1(d) that Graph2Gauss strongly outperforms the competitors, especially for a small number of training edges. The dashed line indicates the percentage above which we can guarantee to have every node appear at least once in the training set. The performance below that line is then indicative of the performance in the inductive setting. Since the structure-only methods are unable to compute meaningful embeddings for unseen nodes, we cannot report their performance below the dashed line. Setup. Node classification is another task commonly used to evaluate the strength of the learned embeddings, after they have been trained in an unsupervised manner. We evaluate the node classification performance for three datasets (Cora-ML, Citeseer and DBLP) that have ground-truth classes. First, we train the embeddings on the entire training data in an unsupervised manner (excluding the class labels). Then, following BID25 we use a varying percentage of randomly selected nodes and their learned embeddings along with their labels as training data for a logistic regression, while evaluating the performance on the rest of the nodes. We also optimize the regularization strength for each method/dataset via cross-validation. We show results averaged over 10 trials. Performance on real-world datasets. Fig. 2 compares the methods w.r.t. the classification performance for different percentages of labeled nodes. We can see that our method clearly outperforms the competitors. Again, the constrained version of our method that does not consider attributes is able to outperform some of the competing approaches. Additionally, we can conclude that in general our method shows stable performance regardless of the percentage of labeled nodes.
This is a highly desirable property since it shows that should we need to perform classification, it is sufficient to train only on a small percentage of labeled nodes. Figure 3(a) shows the validation set ROC score for the link prediction task w.r.t. the number of triplets (i, j k, j l) seen. We can see that both sampling strategies are able to reach the same performance as the full loss with a significantly smaller (< 4.2%) number of pairs seen (note the log scale). It also shows that the naive random sampling converges slower than the node-anchored sampling strategy. FIG2 gives us some insight as to why: our node-anchored sampling strategy achieves significantly lower loss. Finally, FIG2 shows that our node-anchored sampling strategy has lower variance of the gradient updates, which is another contributor to faster convergence. (Figure 3 legend: our node-anchored sampling, naive random sampling, full loss; x-axis: number of pairs seen.) Learning an embedding that is a distribution rather than a point-vector allows us to capture uncertainty about the representation. We perform several experiments to evaluate the benefit of modeling uncertainty. FIG3 shows that the learned uncertainty is correlated with neighborhood diversity, where for a node i we define diversity as the number of distinct classes among the nodes in its p-hop neighborhood (∪ 1≤k≤p N ik). Since the uncertainty for a node i is an L-dimensional vector (diagonal covariance) we show the average across the dimensions. In line with our intuition, nodes with a less diverse neighborhood have significantly lower variance compared to more diverse nodes whose immediate neighbors belong to many different classes, thus making their embedding more uncertain. The figure shows the results on the Cora dataset for the p = 3 hop neighborhood. Similar results hold for the other datasets. This is particularly impressive given the fact that we learn our embedding in a completely unsupervised manner, yet the uncertainty was able to capture the diversity w.r.t. the class labels of the neighbors of a node, which were never seen during training. Figure 4(b) shows that using the learned uncertainty we are able to detect the intrinsic latent dimensionality of the graph. Each line represents the average variance (over all nodes) for a given dimension l for each epoch. We can see that as the training progresses past the stopping criterion (link prediction performance on the validation set) and we start to overfit, some dimensions exhibit a relatively stable average variance, while for others the variance increases with each epoch. By creating a simple rule that monitors the average change of the variance over time we were able to automatically detect these relevant latent dimensions (colored in red). This holds for multiple datasets and is shown here for Cora-ML. Interestingly, the number of detected latent dimensions is close to the number of ground-truth communities. The next obvious question is then how the performance changes if we remove these highly uncertain dimensions whose variance keeps increasing with training. FIG3 (c) answers exactly that. By removing progressively more and more dimensions, starting with the most uncertain first, we see an imperceptibly small change in performance. Only once we start removing the true latent dimensions do we see a noticeable degradation in performance. The dashed lines show the performance if we re-train the model, setting L = 6, equal to the detected number of latent dimensions.
As a last study of uncertainty, in a use case analysis, the nodes with high uncertainty reveal additional interesting patterns. For example, in the Cora dataset, one of the highly uncertain nodes was the paper "The use of word shape information for cursive script recognition" by R.J. Whitrow: surprisingly, all citations (edges) of that paper (as extracted from the dataset) were towards other papers by the same author. As discussed in Sec. 3.4, G2G is able to learn embeddings even for nodes that were not part of the network's structure during training time. Thus, it not only supports transductive but also inductive learning. To evaluate how our approach generalizes to unseen nodes we perform the following experiment: (i) first we completely hide 10%/25% of nodes from the network at random; (ii) we proceed to learn the node embeddings for the rest of the nodes; (iii) after learning is complete we pass the (new) unseen test nodes through our deep encoder to obtain their embedding; (iv) we evaluate by calculating the link prediction performance (AUC and AP scores) using all their edges and the same number of non-edges. As the results in TAB1 clearly show, since we are utilizing the rich attribute information, we are able to achieve strong performance for unseen nodes. This is true even when a quarter of the nodes are missing. This makes our method applicable in the context of large graphs where training on the entire network is not feasible. Note that SDNE and GraphSAGE cannot be applied in this scenario, since they also require the edges for the unseen nodes to produce an embedding. Graph2Gauss is the only inductive method that can obtain embeddings for a node based only on the node attributes. One key application of node embedding approaches is creating meaningful visualizations of a network in 2D/3D that support tasks such as data exploration and understanding. Following BID30 and Pan et al. FORMULA4, we first learn a lower-dimensional L = 128 embedding for each node and then map those representations to 2D with t-SNE BID19. Additionally, since our method is able to learn useful representations even in low dimensions, we embed the nodes as 2D Gaussians and visualize the resulting embedding. This has the added benefit of visualizing the nodes' uncertainty as well. FIG4 shows the visualization for the Cora-ML dataset. We see that Graph2Gauss learns an embedding in which the different classes are clearly separated. We proposed Graph2Gauss, the first unsupervised approach that represents nodes in attributed graphs as Gaussian distributions and is therefore able to capture uncertainty. Analyzing the uncertainty reveals the latent dimensionality of a graph and gives insight into the neighborhood diversity of a node. Since we exploit the attribute information of the nodes, we can effortlessly generalize to unseen nodes, enabling inductive reasoning. Graph2Gauss leverages the natural ordering of the nodes w.r.t. their neighborhoods via a personalized ranking formulation. The strength of the learned embeddings has been demonstrated on several tasks, specifically achieving high link prediction performance even in the case of low-dimensional embeddings. As future work we aim to study personalized rankings beyond the ones imposed by the shortest path distance. A PROOF OF THEOREM 1. To prove Theorem 1, we start with the loss L s (Eq. 2), and show that by applying the expectation operator we will obtain the original loss L (Eq. 1).
From there it trivially follows that taking the gradient with respect to L s for a set of samples gives us an unbiased estimate of the gradient of L.First we notice that both L and L s are summing over i, thus it is sufficient to show that the losses are equal in expectation for a single node i. Denoting with L (i)s the loss for a single node i and with E i,k,l = E ij k 2 + exp −Eij l for notational convenience we have: DISPLAYFORM0 In step we have expanded the sum over k < l in independent terms. In step we have marginalized the expectation over the variables that do not appear in the expression, e.g. for the term E (j1,...,j K)∼(Ni1,...,N iK) |N i1 | · |N i2 | · E i12 we can marginalize over j p where p = 1 and p = 2 since the term doesn't depend on them. In step we have expanded the expectation term. In step we have substituted p(j p) with 1 |Nij p | since we are sampling uniformly at random. Since DISPLAYFORM1 s is equal to L (i) in expectation it follows that ∇L s based on a set of samples is an unbiased estimate of ∇L. Architecture and hyperparameters. We observed that Graph2Gauss is not sensitive to the choice of hyperparameters such as number and size of hidden layers. Better yet, as shown in Sec. 4.4, Graphs2Gauss is also not sensitive to the size of the embedding L. Thus, for a new graph, one can simply pick a relatively large embedding size and if required prune it later similarly to the analysis performed in FIG3.As a sensible default we recommend an encoder with a single hidden layer of size s 1 = 512. More specifically, to obtain the embeddings for a node i we have DISPLAYFORM0 where x i are node attributes, relu and elu are the rectified linear unit and exponential linear unit respectively. In practice, we found that the softplus works equally well as the elu for making sure that σ i are positive and in turn Σ i is positive definite. We used Xavier initialization BID7 for the weight matrices W ∈ R D×s1, b ∈ R s1, W µ ∈ R s1×L/2, b µ ∈ R L/2, W Σ ∈ R s1×L/2, b Σ ∈ R L/2. As discussed in Sec. 3.4, multiple hidden layers, or other architectures such as CNNs/RNNs can also be used based on the specific problem. Unlike other approaches using Gaussian embeddings BID31 BID12 BID4 we do not explicitly regularize the norm of the means and we do not clip the covariance matrices. Given the self-regularizing nature of the KL divergence this is unnecessary, as was confirmed in our experiments. The parameters are optimized using Adam BID15 ) with a fixed learning rate of 0.001 and no learning rate annealing/decay. Edge cover. Some of the methods such as node2vec BID9 are not able to produce an embedding for nodes that have not been seen during training. Therefore, it is important to make sure that during the train-validation-test split of the edge set, every node appears at least once in the train set. Random sampling of the edges does not guarantee this, especially when allocating a low percentage of edges in the train set during the split. To guarantee that every node appears at least once in the train set we have to find an edge cover. An edge cover of a graph is a set of edges such that every node of the graph is incident to at least one edge of the set. The minimum edge cover problem is the problem of finding an edge cover of minimum size. The dashed line in FIG0 and 1(d) indicates exactly the size of the minimum edge cover. This condition had to be satisfied for the competing methods, however, since Graph2Gauss is inductive, it does not require that every node is in the train set.
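As a rough illustration of the recommended default encoder described above (one hidden layer of 512 rectifier units shared by the mean and uncertainty heads, with elu(·)+1 keeping the final outputs positive), a PyTorch-style sketch follows; the class and layer names are ours, and it is a sketch rather than the original implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianEncoder(nn.Module):
    """Maps node attributes x_i to a Gaussian embedding (mu_i, sigma_i)."""
    def __init__(self, n_attrs, emb_dim, hidden=512):
        super().__init__()
        self.hidden = nn.Linear(n_attrs, hidden)     # shared hidden layer
        self.mu_head = nn.Linear(hidden, emb_dim)    # mean head
        self.sigma_head = nn.Linear(hidden, emb_dim) # uncertainty head

    def forward(self, x):
        h = F.relu(self.hidden(x))
        mu = self.mu_head(h)
        sigma = F.elu(self.sigma_head(h)) + 1.0      # positive per-dimension sigma
        return mu, sigma
```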
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
r1ZdKJ-0W
We embed nodes in a graph as Gaussian distributions allowing us to capture uncertainty about their representation.
While great progress has been made at making neural networks effective across a wide range of tasks, many are surprisingly vulnerable to small, carefully chosen perturbations of their input, known as adversarial examples. In this paper, we advocate for and experimentally investigate the use of logit regularization techniques as an adversarial defense, which can be used in conjunction with other methods for creating adversarial robustness at little to no cost. We demonstrate that much of the effectiveness of one recent adversarial defense mechanism can be attributed to logit regularization and show how to improve its defense against both white-box and black-box attacks, in the process creating a stronger black-box attacks against PGD-based models. Neural networks, despite their high performance on a variety of tasks, can be brittle. Given data intentionally chosen to trick them, many deep learning models suffer extremely low performance. This type of data, commonly referred to as adversarial examples, represent a security threat to any machine learning system where an attacker has the ability to choose data input to a model, potentially allowing the attacker to control a model's behavior. Today, adversarial examples are typically created by small, but carefully chosen transformations of data that models are otherwise high-performant on. This is primarily due to the ease of experimentation with existing datasets BID4, though the full threat of adversarial examples is only limited by the ability and creativity of an attacker's example generation process. Even with the limited threat models considered in current research, performance on adversarially chosen examples can be dramatically worse than unperturbed data -for example, white-box accuracy on adversarially chosen examples for the CIFAR-10 image classification task BID10 ) is lower than 50%, even for the most robust defenses known today BID12 BID9, while unperturbed accuracy can be as high as 98. 5%.Current defenses against adversarial examples generally come in one of a few flavors. Perhaps the most common approach is to generate adversarial examples as part of the training procedure and explicitly train on them ("adversarial training"). Another approach is to transform the model's input representation in a way that thwarts an attacker's adversarial example construction mechanism. While these methods can be effective, care must be taken to make sure that they are not merely obfuscating gradients BID1. Last, generative models can be built to model the original data distribution, recognizing when the input data is out of sample and potentially correcting it BID18 BID16. Of these, perhaps the most robust today is adversarial logit pairing BID9, which extends the adversarial training work of BID12 by incorporating an additional term to make the logits (pre-softmax values) of an unperturbed and adversarial example more similar. In this work, we show that adversarial logit pairing derives a large fraction of its benefits from regularizing the model's logits toward zero, which we demonstrate through simple and easy to understand theoretical arguments in addition to empirical demonstration. 
Investigating this phenomenon further, we examine two alternatives for logit regularization, finding that both result in improved robustness to adversarial examples, sometimes surprisingly so; for example, using the right amount of label smoothing BID21 can result in greater than 40% robustness to a projected gradient descent (PGD) attack BID12 on CIFAR-10 while training only on the original, unperturbed training examples, and is also a compelling black-box defense. We then present an alternative formulation of adversarial logit pairing that separates the logit pairing and logit regularization effects, improving the defense. The end result of these investigations is a defense that sets a new state-of-the-art for PGD-based adversaries on CIFAR-10 for both white-box and black-box attacks, while requiring little to no computational overhead on top of adversarial training. Before proceeding with our analysis, it is prudent to review existing work on adversarial training for context. While adversarial examples have been examined in the machine learning community in some capacity for many years BID3, their study has drawn a sharp focus in the current renaissance of deep learning, starting with BID20 and BID5. In BID5, adversarial training is presented as training with a weighted loss between an original and adversarial example, i.e. with a loss of DISPLAYFORM0 where g(x) is a function representing the adversarial example generation process, originally presented as g(x) = x + ε · sign(∇ x J(θ, x, y)), α is a weighting term between the original and adversarial examples typically set to 0.5, and as usual θ are the model parameters to learn, J is a cross-entropy loss, m is the dataset size, x (i) is the ith input example, and y (i) is its label. Due to the use of a single signed gradient with respect to the input example, this method was termed the "fast gradient sign method" (FGSM), requiring a single additional forward and backward pass of the network to create. BID11 extended FGSM into a multi-step attack, iteratively adjusting the perturbation applied to the input example through several rounds of FGSM. This was also the first attack that could be described as a variant of projected gradient descent (PGD). Both of these approaches primarily target an L ∞ threat model, where the L ∞ norm between the original and adversarial example is constrained to a small value. BID12 built upon these works by initializing the search process for the adversarial perturbation randomly, and is the strongest attack currently available to the best of our knowledge. Through extensive experiments, they showed that even performing PGD with a single random initialization is able to approximate the strongest adversary found with current first-order methods. However, as with multi-step FGSM, performing adversarial training with this approach can be rather expensive, taking an order of magnitude longer than standard training due to requiring N + 1 forward and backward passes of the model, where N is the number of PGD iterations. Improving on PGD-based adversarial training, BID9 introduced adversarial logit pairing (ALP), which adds a term to the adversarial training loss function that encourages the model to have similar logits for original and adversarial examples: DISPLAYFORM1 where L was set to an L 2 loss in experiments and f (x, θ) returns the logits of the model corresponding to example x.
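For concreteness, a short PyTorch sketch of single-step FGSM example generation and of the weighted clean/adversarial training loss described above follows. The function names and the use of autograd here are illustrative; the experiments in this paper were built on CleverHans/TensorFlow rather than this code.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps):
    """Single-step FGSM: x_adv = x + eps * sign(grad_x J(theta, x, y))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).detach()

def adversarial_training_loss(model, x, y, eps, alpha=0.5):
    """Weighted clean/adversarial loss sketch (alpha = 0.5 as in the text)."""
    x_adv = fgsm_example(model, x, y, eps)
    return alpha * F.cross_entropy(model(x), y) + \
           (1.0 - alpha) * F.cross_entropy(model(x_adv), y)
```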
Adversarial logit pairing has the motivation of increasing the amount of structure in the learning process, by encouraging the model to have similar prediction patterns on the original and adversarial examples, a process reminiscent of distillation BID7. BID9 also studied a baseline version of ALP, called "clean logit pairing", which paired randomly chosen unperturbed examples together. Surprisingly, this worked reasonably well, inspiring them to experiment with a similar idea they call "clean logit squeezing", regularizing the L 2 norm of the model's logits, which worked even more effectively, though this idea itself was not combined with adversarial training. It is this aspect of the work that is most related to what we study in this paper. We now show how adversarial logit pairing BID9 acts as a logit regularizer. For notational convenience, denote DISPLAYFORM0 c as the logit of the model for class c on example i in its original, unperturbed form, and˜ DISPLAYFORM1 c as the logit for the corresponding adversarial example. The logit pairing term in adversarial logit pairing is a simple L 2 loss: DISPLAYFORM2 While it is obvious that minimizing this term will have the effect of making the original and adversarial logits more similar in some capacity, what precise effect does it have on the model during training? To examine this, we look at the gradient of this loss term with respect to the logits themselves: DISPLAYFORM3 Under the assumption that the adversarial example moves the model's predictions away from the correct label (as should be the case with any reasonable adversarial example, such as an untargeted PGD-based attack), we have that DISPLAYFORM4 is the correct category, and DISPLAYFORM5 otherwise. Keeping in mind that model updates move in the direction opposite of the gradient, then the update to the model's weights will attempt to make the original logits smaller and the adversarial logits larger when c = y (i) and will otherwise attempt to make the original logits larger and the adversarial logits smaller. However, this must be considered in the context of the adversarial training lossJ -in particular, the standard cross-entropy loss used inJ for the adversarial example g(x (i) ) already encourages the adversarial logits to be higher for the correct category and smaller for all incorrect categories, and furthermore the scale of the lossJ typically is an order of magnitude larger than the adversarial pairing loss. Thus, we argue that the main effect of adversarial logit pairing is actually in the remaining two types of updates, encouraging the logits of the original example to be smaller for the correct category and larger for all incorrect categories. These two updates have very similar effects to simply regularizing model logits, e.g. in a manner similar to "logit squeezing" BID9 or label smoothing BID21.We can also view this from a different perspective by explicitly incorporating the scale of the logits in the logit pairing term. If we factor out a shared scale factor γ from each logit, the logit pairing term becomes DISPLAYFORM6 implying that DISPLAYFORM7 meaning that the model would always attempt to update the scale of the logits in the opposite direction of the sign of γ, i.e. toward zero, so long as the logits were not identical. In practice, this affect is counterbalanced by the adversarial training term, which requires non-identical logits to minimize its cross-entropy loss. 
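A hedged sketch of the ALP objective follows: the adversarial training loss plus an L2 pairing term between clean and adversarial logits. The pairing weight `lam` is a placeholder constant introduced here for illustration, and the adversarial examples are assumed to have been generated separately (e.g. by PGD).

```python
import torch
import torch.nn.functional as F

def alp_loss(model, x, x_adv, y, lam=0.5, alpha=0.5):
    """Adversarial training loss plus an L2 logit-pairing penalty (sketch)."""
    logits_clean, logits_adv = model(x), model(x_adv)
    adv_train = alpha * F.cross_entropy(logits_clean, y) + \
                (1.0 - alpha) * F.cross_entropy(logits_adv, y)
    pairing = ((logits_clean - logits_adv) ** 2).sum(dim=1).mean()
    return adv_train + lam * pairing
```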
Given this interpretation, in this work we now explore 1) whether this can be verified experimentally, explicitly into a form where the effect of logit regularization and pairing can be disentangled, and 4) whether the above analysis can yield insights to making more adversarially robust methods. Perhaps the most straightforward way to test our hypothesis is to examine the logits of a model trained with ALP vs one trained with standard adversarial training. If true, the model trained with ALP will have logits that are generally smaller in magnitude. We do this in FIG0 We see that it is indeed the case that the logits for a model trained with ALP are of smaller magnitude than those of a model trained with PGD, with a variance reduction of the logits from 8.31 to 4.02 on clean test data (distributions on a set of PGD adversarial examples are very similar). This provides evidence that ALP does have the effect of regularizing logits, though this data alone is not sufficient to determine if this is a key mechanism in improved performance. To answer this, we can examine if standard adversarial training can be improved by explicitly regularizing the logits. If adversarial robustness can be improved, but similar improvements can not be made to ALP, then at least some of the benefits of ALP can be attributed to logit regularization. We present the of this experiments in FIG0 (right), implemented using the "logit squeezing" form of regularization (L 2 -regularization on the logits).These show that incorporating regularization on model logits is able to recover slightly more than half of the improvement from logit pairing, with too little regularization having only a small effect, and too much regularization approaching the point of being harmful. However, when added to a model already trained with ALP, regularizing the logits does not lead to any improvement, and in fact hurts performance, likely due to putting too much strength into logit regularization. This evidence makes clear that one of the key improvements from logit pairing is due to a logit regularization effect. We would like to emphasize that these are not meant to diminish ALP in any sense -adversarial logit pairing works, and our goals are to investigate the mechanism by which it works and explore if it can be generalized or improved. Given these , it is worth examining other methods that have an effect of regularizing logits in order to tell whether this is a more general phenomenon. Label Smoothing. Label smoothing is the process of replacing the one-hot training distribution of labels with a softer distribution, where the probability of the correct class has been smoothed out onto the incorrect classes BID21. Concretely, label smoothing uses the target DISPLAYFORM0 c is the target probability for class c for example i, C is the number of categories and s ∈ [0, 1 − 1 C] is the smoothing strength. Label smoothing was originally introduced as a form of regularization, designed to prevent models from being too confident about training examples, with the goal of improved generalization. It can be easily implemented as a preprocessing step on the labels, and does not affect model training time in any significant way. Interestingly, BID11 found that disabling the small amount of label smoothing present in a model trained on ImageNet actually improved adversarial robustness roughly by 1%. 
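Since the smoothed-target formula itself is elided above, the sketch below uses one common convention consistent with the stated range s ∈ [0, 1 − 1/C]: probability 1 − s on the true class and s/(C − 1) on each incorrect class. Treat the exact form as an assumption on our part.

```python
import torch
import torch.nn.functional as F

def label_smoothing_loss(logits, y, s=0.1):
    """Cross-entropy against smoothed targets: 1 - s on the true class,
    s / (C - 1) on every other class (one common convention, assumed here)."""
    C = logits.size(1)
    targets = torch.full_like(logits, s / (C - 1))
    targets.scatter_(1, y.unsqueeze(1), 1.0 - s)
    return -(targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```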
Here we find a different effect, with the caveat of relatively different experimental setups from BID11.In FIG2 (left) we show the effect label smoothing has on the performance of a model trained only on clean (i.e. non-adversarial) training data. Very surprisingly, using only label smoothing can in a model that is nearly as robust as models trained with PGD-based adversarial training or adversarial logit pairing and take an order of magnitude less time to train -though we note that when PGD and ALP-based models are trained only on adversarial examples rather than a mixture of clean and adversarial data, their robustness exceeds this performance by around 5%. Furthermore, this benefit of label smoothing comes at no significant loss in accuracy on unperturbed test data, while generally, adversarial training tends to trade off original vs adversarial performance. Another curiosity is that adding in any label smoothing at all dramatically improves robustness to FGSMbased adversaries (adding label smoothing of s = .01 brought accuracy up from 6.1% to 38.3%), while PGD-based attacks saw much more gradual improvement. Examining the logits FIG2, we see a striking difference between the models -the model trained with label smoothing both has a dramatically smaller dynamic range of logits and also presents a much more bimodal logit distribution than the model trained without label smoothing. In other words, it has learned to predict extremely consistent values for logits, which is what we attribute its adversarial robustness to. Anecdotally, we observed that this behavior held for all positive values of s, with a stronger effect the higher s was. Additional experiments with label smoothing are given in Section 5.3. Recently, a new form of data augmentation was found that, in contrast to standard label-preserving data augmentation, combined different training examples together, dramatically altering both the appearance of the training examples and their labels. Introduced concurrently by BID8; BID22, these types of data augmentation typically have the form of element-wise weighted averaging of two input examples (typically images), with the training label also determined as a weighted average of the original two training labels (represented as one-hot vectors). Besides making target labels soft (i.e. not 1-of-K) during training time, these methods also encourage models to behave linearly between examples, which may improve robustness to out of sample data. Interestingly, found that this type of data augmentation improved robustness to FGSM attacks on ImageNet BID15, but BID9 found that the method did not improve robustness against a targeted attack with a stronger PGD-based adversary. In our experiments we found evidence agreeing with both -when applying mixup, we found a sizeable increase in robustness to FGSM adversaries, going from 6.1% on CIFAR-10 by training without mixup to 30.8% with mixup, but did not observe a significant change when evaluated against a PGD-based adversary. While robustness to a PGD adversary with only 5 steps increased by a tiny amount (from 0.0% to 0.5%), robustness to a 10-step PGD adversary remained at 0%. In our experiments, we use VH-mixup, the slightly improved version of mixup from BID19. While we have now considered alternate methods by which logits can be regularized, at this point it is still not clear exactly how they might be used with or interact with the logit regularization effect of adversarial logit pairing. 
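For reference, the core mixup operation discussed above can be sketched as follows. This shows only plain mixup on a batch; VH-mixup, used in the experiments, additionally mixes vertical/horizontal image halves, and the Beta(α, α) mixing coefficient is the convention from the mixup papers rather than something specified here.

```python
import numpy as np
import torch

def mixup_batch(x, y_onehot, alpha=1.0):
    """Element-wise convex combination of two examples and of their one-hot labels."""
    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return x_mix, y_mix
```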
Doing so requires separating out the logit pairing and logit regularization effects of ALP.In adversarial logit pairing BID9, the logit pairing term is implemented as an L 2 loss: DISPLAYFORM0 though other losses such as an L 1 or Huber loss are also possible. We would like to break L into separate pairing and regularization terms: DISPLAYFORM1 where the purpose of the first term is explicitly for making the logits more similar (with as little logit regularization as possible), and the second term is explicitly for regularizing the logits toward zero. There are several natural choices for h, such as the the Jensen-Shannon divergence, a cosine similarity, or any similarity metric that does not have a significant regularization effect. We have found that simply taking the cross entropy between the distributions induced by the logits was effective -depending on the actual values of the logits, this can either still have a mild squeezing effect (if the logits are very different), a mild expanding effect (if the logits are very similar), or something in between. One implementation detail worth noting is that it can be difficult to reason about and set the relative strengths of the pairing loss and adversarial training loss. To that end, we set the strength of the pairing loss h as a constant fraction of the adversarial loss, implemented by setting the coefficient of the loss as a constant multiplied by a non-differentiable version of the ratio between the losses. By decomposing adversarial logit pairing explicitly into logit pairing and logit regularization terms in this way, adversarial robustness to a 10-step PGD attack improves by an absolute 1.9% over ALP, or 5.6% over standard PGD-based adversarial training. In the experiments on CIFAR-10 throughout this paper, we used a ResNet BID6, equivalent to the "simple" model of BID12, with a weight decay of 2 · 10 −4 and a momentum optimizer with strength of 0.9. Standard data augmentation of random crops and horizontal flips was used. After a warm up period of 5 epochs, the learning rate peaked at 0.1 and decayed by a factor of 10 at 100 and 150 epochs, training for a total of 200 epochs for models not trained on adversarial examples and 101 epochs for models using adversarial training -adversarial accuracy tends to increase for a brief period of time after a learning rate decay, then quickly drop by a small amount, an empirical finding also echoed by. The minibatch size was 128.Adversarial examples were constrained to a maximum L ∞ norm of.03, and all PGD-based attacks used a step size of 0.0078. Adversarial attacks were constructed using the CleverHans library BID13, implemented in TensorFlow BID0. All experiments were done on two Nvidia Geforce GTX 1080 Ti GPUs. Given these forms of logit regularization, perhaps the most natural question is whether they can be combined to create an even more robust model. Thus, in this section we focus exclusively on making a model (and comparable baselines) as robust as possible to PGD-based attacks. In particular, for baseline methods (PGD-based adversarial training BID12 and adversarial logit pairing ), we opt to train exclusively on adversarial examples, effectively setting α = 0 in Equation 1, which roughly trades off accuracy of 4 − 5% for clean test examples for a similar gain in adversarial performance. 
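A possible reading of this decomposition is sketched below: a cross-entropy pairing term h between the two induced distributions, an explicit logit-squeezing term, and a pairing coefficient tied to the adversarial loss through a non-differentiable ratio. Which set of logits serves as the target of the cross-entropy, and which logits are squeezed, are our assumptions; the constants are the ones quoted in the text, and this is a sketch rather than the paper's implementation.

```python
import torch
import torch.nn.functional as F

def decomposed_pairing_loss(logits_clean, logits_adv, adv_loss, ratio=0.125, beta=1e-3):
    """Separate pairing and regularization terms (sketch under stated assumptions)."""
    # Pairing term h: cross-entropy between the two induced distributions
    # (adversarial logits taken as the fixed target here -- an assumption).
    pairing = -(F.softmax(logits_adv, dim=1).detach() *
                F.log_softmax(logits_clean, dim=1)).sum(dim=1).mean()
    # Explicit logit regularization ("squeezing") toward zero.
    squeeze = (logits_clean ** 2).sum(dim=1).mean()
    # Pairing strength set as a constant fraction of the adversarial loss,
    # via a non-differentiable ratio, as described in the text.
    scale = ratio * (adv_loss / pairing).detach()
    return scale * pairing + beta * squeeze
```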
To combine the logit regularization methods together, we use a small amount of label smoothing (s = 0.1), use VH-mixup BID19 on the input examples, use the logit pairing formulation of Section 4.1 with β = 10 −3, and set the ratio between the adversarial training loss and the pairing loss to 0.125, which focuses the loss on keeping adversarial and original examples similar. These parameters were not tuned much due to resource constraints. We refer to this combination simply as LRM ("Logit Regularization Methods").White-box performance is shown in Table 1. LRM achieves the highest level of adversarial robustness of the methods considered for all PGD-based attacks, and to the best of our knowledge represents the most robust method on CIFAR-10 to date. However, like other adversarial defenses, this comes at the cost of performance on the original test set, which makes sense -from the perspective of adversarial training, a clean test image is simply the center of the set of feasible adversarial examples. Nonetheless, it is interesting that the tradeoff between adversarial and non-adversarial performance can continue to be pushed further, with the optimal value of that tradeoff dependent on application (e.g. whether worst-case performance is more important than performance on unperturbed examples).Next, black-box performance is shown in TAB1. As is standard in most black-box evaluations of adversarial defenses, this is performed by generating adversarial examples with one model (the "Source") and evaluating them on a separate independently trained model (the "Target"). As found in other works (e.g. BID12), the success of a black-box attack depends both on how similar the training procedure was between the source and target models and on the strength of the source model -for example, LRM uniformly in a stronger black-box attack than ALP BID9, which itself is a uniformly stronger black-box attack than adversarial training with. As such, using LRM as the source mildly damages the black-box defenses of PGD and ALP. In both the white-box and black-box analyses, we found that label smoothing was surprisingly effective given its near-zero cost. For black-box attacks in particular TAB1, label smoothing was generally among the most robust models across all different sources, with the label smoothing target network having the highest minimum performance across sources, a fact which was not shared even by any of the adversarially-trained models. Inspired by this, we conducted further investigation where we have found similarly surprising mixed behavior. In this experiment, we investigated the robustness of a model trained with label smoothing to stronger white-box attacks, noting that performance on PGD-based attacks from TAB0 dropped considerably when moving from 5 iterations of PGD to 10 and 20 iterations. We found that this trend continued, with accuracy dropping all the way to 13.6% when performing a 200-iteration PGD attack, a trend that was not observed with any of the three models trained on PGD attacks. This suggests that label smoothing, while providing only a mild amount of worst-case adversarial robustness, can actually make the adversarial optimization problem much more challenging, which we believe is also the underlying reason for its effectiveness against black-box attacks. The exact mechanism by which it does this, however, remains elusive, which we think is an interesting avenue for further research in black-box defenses. 
In this work, we have shown the usefulness of logit regularization for improving the robustness of neural networks to adversarial examples. We first presented an analysis of adversarial logit pairing, the current state-of-the-art in adversarial defense, showing that roughly half of its improvement over adversarial training can be attributed to a non-obvious logit regularization effect. Based on this, we investigated two other forms of logit regularization, demonstrating the benefits of both, and then presented an alternative method for adversarial logit pairing that more cleanly decouples the logit pairing and logit regularization effects while also improving performance. By combining these logit regularization techniques, we were able to create both a stronger defense against white-box PGD-based attacks and also a stronger attack against PGD-based defenses, both of which come at almost no additional cost over PGD-based adversarial training. We also showed the surprising strength of label smoothing as a black-box defense and its corresponding weakness only to highly optimized white-box attacks. We anticipate that future work will push the limits of logit regularization even further to improve defenses against adversarial examples, possibly using more techniques originally devised for other purposes BID14. We also hope that these investigations will yield insights into training adversarially robust models without the overhead of multi-step adversarial training, an obstacle that has made it challenging to scale up adversarial defenses to larger datasets without a sizable computational budget.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
Bylj6oC5K7
Logit regularization methods help explain and improve state of the art adversarial defenses
In deep learning, performance is strongly affected by the choice of architecture and hyperparameters. While there has been extensive work on automatic hyperpa- rameter optimization for simple spaces, complex spaces such as the space of deep architectures remain largely unexplored. As a , the choice of architecture is done manually by the human expert through a slow trial and error process guided mainly by intuition. In this paper we describe a framework for automatically designing and training deep models. We propose an extensible and modular lan- guage that allows the human expert to compactly represent complex search spaces over architectures and their hyperparameters. The ing search spaces are tree- structured and therefore easy to traverse. Models can be automatically compiled to computational graphs once values for all hyperparameters have been chosen. We can leverage the structure of the search space to introduce different model search algorithms, such as random search, Monte Carlo tree search (MCTS), and sequen- tial model-based optimization (SMBO). We present experiments comparing the different algorithms on CIFAR-10 and show that MCTS and SMBO outperform random search. We also present experiments on MNIST, showing that the same search space achieves near state-of-the-art performance with a few samples. These experiments show that our framework can be used effectively for model discov- ery, as it is possible to describe expressive search spaces and discover competitive models without much effort from the human expert. Code for our framework and experiments has been made publicly available Deep learning has seen a surge in popularity due to breakthroughs in applications such as computer vision, natural language processing, and reinforcement learning BID12; BID24 ). An important observation in much of the recent work is that complex architectures are important for achieving high performance BID12 BID20. Larger datasets and more powerful computing infrastructures are likely to increase our ability to effectively train larger, deeper, and more complex architectures. However, improving the performance of a neural network is not as simple as adding more layers or parameters-it often requires clever ideas such as creating more branches or adding skip connections BID12. Even popular techniques such as dropout BID27 and batch normalization BID14 do not always lead to better performance, and need to be judiciously applied to be helpful. Currently, choosing appropriate values for these architectural hyperparameters requires close supervision by a human expert, in a trial and error manual search process largely guided by intuition. The expert is burdened by having to make the large number of choices involved in the specification of a deep model. Choices interact in non-obvious ways and strongly impact performance. The typical workflow has the expert specify a single model, train it, and compute a validation score. Based on the validation score, previous experience, and information gathered during training, the expert decides if the trained model is satisfactory or not. If the model is considered unsatisfactory, the expert has to think about model variations that may lead to better performance. From the perspective of the expert, it would be convenient to search over architectures automatically, just as we search over simple scalar hyperparameters, such as the learning rate and the regularization coefficient. 
Ideally, the expert would have control in setting up the search space to incorporate inductive biases about the task being solved and constraints about computational resources. Prior to this work, achieving this goal was hard because expressing model search spaces using general hyperparameter optimization tools requires the human expert to manually distill a set of relevant scalar architectural hyperparameters. The main contributions of our work are 1. a modular, compositional, and extensible language for compactly representing expressive search spaces over models that (a) gives control to the human expert over what model variations to consider; (b) makes it easy to automatically search for performant models in the search space; (c) allows models to be directly compiled to computational graphs without the human expert having to write additional code. 2. model search algorithms that rely on the tree-structured search spaces induced by our language to systematically and efficiently search for performant models; namely, we (a) show that by using constructs in our language, even random search can be effective; (b) compare different model search algorithms experimentally, and show that random search is outperformed by algorithms that leverage the structure of the search space to generalize more effectively across different models. The main differences between our work and previous work are that we develop a modular, composable and extensible language, focusing on the problem of searching over deep architectures. This focus allows the expert to compactly set up a search space, search over it, and automatically compile models to their corresponding computational graphs. Our language can be seen as an effort to combine the functionalities of a deep model specification language (e.g., Tensorflow BID0) and a structured hyperparameter search language (e.g., Hyperopt BID32). Model search has a long and rich history in machine learning and statistics. There has been a wide variety of theoretical and empirical research in this area BID1 BID4 BID3 BID22, including Bayesian optimization methods BID13 BID15 BID26. However, conventional methods are primarily designed for searching over hyperparameters living in Euclidean space. Such methods are ill suited in today's context, where the discrete architectural choices are just as important as the numerical values of the hyperparameters. Searching over architectures using current hyperparameter optimization algorithms requires the expert to distill structural choices into scalar hyperparameters. As a , typically only a few simple global structural hyperparameters are considered, e.g., the depth of the network or whether to use dropout or not. This constrains the richness of the search space, preventing the expert from finding unexpected model variations leading to better performance; e.g., perhaps dropout is useful only after certain types of layers, or batch normalization only helps in the first half of the network. Architecture search has also been considered under the topic of neuroevolution BID28 ), which uses evolutionary (i.e., genetic) strategies to define and search a space of models. In classical approaches, neuroevolution attempts to jointly choose the topology and the parameters of the architecture using genetic algorithms. Architecture search has received renewed interest recently. BID31, BID9 BID21 use evolutionary algorithms which start from an initial model and evolve it based on its validation performance. 
BID33 propose a reinforcement learning procedure based on policy gradient for searching for convolutional and LSTM architectures. BID2 propose a reinforcement learning procedure based on Q-learning for searching for convolutional architectures. Unfortunately all these approaches consider fixed hard-coded model search spaces that do not easily allow the human expert to incorporate inductive biases about the task being solved, making them unsuitable as general tools for architecture search. For example, evolutionary approaches require an encoding for the models in the search space and genetic operators (e.g., mutation and crossover) which generate encodings for new models out of encodings of old ones. These aspects are handcrafted and hard-coded so it is hard for the human expert to change the search space in flexible ways. Perhaps different model encodings or genetic operators can be considered, but these knobs give somewhat loose and indirect control over the model search space. The reinforcement learning approaches considered suffer from similar issues-the search spaces are hard-coded and not easily modifiable. None of these approaches have the compositionality, modularity, and extensibility properties of our language. BID4 propose Tree of Parzen Estimators (TPE), which can be used to search over structured hyperparameter spaces, and use it to tune the hyperparameters of a Deep Boltzmann Machine. BID32 use TPE to search for values of the hyperparameters of a computer vision system, and show that it can find better values than the best ones previously known. TPE is a general hyperparameter search algorithm, and therefore requires considerable effort to use-for any fixed model search space, using TPE requires the human expert to distill the hyperparameters of the search space, express the search space in Hyperopt BID32 (an implementation of TPE), and write the code describing how values of the hyperparameters in the search space compile to a computational graph. In contrast, our language is modular and composable in the sense that:1. search spaces (defined through modules) are constructed compositionally out of simpler search spaces (i.e., simpler modules);2. hyperparameters for composite modules are derived automatically from the hyperparameters of simpler modules;3. once values for all hyperparameters of a module have been chosen, the ing model can be automatically mapped to a computational graph without the human expert having to write additional code. Our framework reduces the problem of searching over models into three modular components: the model search space specification language, the model search algorithm, and the model evaluation algorithm. Model Search Specification Language: The model search space specification language is built around the concept of a modular computational module. This is akin to the concept of a module BID5 used in deep learning frameworks such as Torch BID8: by implementing the module interface, the internal implementation becomes irrelevant. These modules allow one to express easily complex design choices such as whether to include a module or not, choose between modules of different types, or choose how many times to repeat a module structure. The main insight is that complex modules can be created compositionally out of simpler ones. The behavior of complex modules is generated automatically out of the behavior of simpler modules. 
Furthermore, our language is extensible, allowing the implementation of new types of modules by implementing a high-level interface local to the module. Model Search Algorithm: The way the model search space is explored is determined by the model search algorithm. This part of the framework decides how much effort to allocate to each part of the search space based on the performance observed for previous models. The model search algorithm typically requires a model evaluation algorithm that computes the performance of a fully specified model. The search algorithm will then use this information to determine which models to try next. The search algorithm interacts with the search space only through a minimal interface that allows it to traverse the space of models and evaluate models discovered this way. This interface is the same irrespective of the specific search space under consideration. We experiment with different search algorithms, such as Monte Carlo tree search and Sequential Model Based Optimization BID13. Having fully specified a model, i.e., having reached a leaf in the tree defined by our model search space, we can evaluate how good this model is according to some criterion defined by the expert. This typically involves training the model on a training set and evaluating it on a validation set. The training procedure often has multiple hyperparameters that can be tuned (e.g., the choice of the optimization algorithm and its hyperparameters, and the learning rate schedule). If the expert does not know how to write down a reasonable training procedure for every model in the search space, the expert can introduce hyperparameters for the evaluation algorithm and search over them using our specification language. Any of the above components can be changed, improved, or extended, while keeping the others fixed. The fact that different components interact only through well-defined interfaces makes it possible to extend and reuse this framework. We believe that DeepArchitect will be an interesting platform for future research in deep learning and hyperparameter tuning for architecture search. The computational module is the fundamental unit of our model search space specification language. We define a computational module as a function DISPLAYFORM0 where n is the dimensionality of the input, H is the set of valid values for the hyperparameters, p is the number of parameters, and m is the dimensionality of the output. The set H can be structured or simply the cross product of scalar hyperparameter sets, i.e., H = H 1 ×... × H H, where H is the number of scalar hyperparameters. The set H is assumed to be discrete in both cases. Definition merits some discussion. For conciseness we have not explicitly represented it, but the number of parameters p and the output dimensionality m can both be functions of the input dimensionality n and the chosen hyperparameter values h ∈ H. For example, an affine module with h dense hidden units has output dimensionality m = h and number of parameters p = (n + 1)h: a weight matrix W ∈ R h×n and a bias vector b ∈ R h. A similar reasoning can be carried out for a convolutional module: the number of parameters p depends on the input dimensionality, the number of filters, and the size of the filters; the dimensionality of the output m depends on the input dimensionality, the number of filters, the size of the filters, the stride, and the padding scheme. 
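To make the module abstraction concrete, here is an illustrative Python sketch of a single-input single-output module interface and of a basic Affine module whose output dimensionality and parameter count follow the example above. The method names are ours and do not reflect the framework's actual API; this is a sketch of the idea only.

```python
class Module:
    """Illustrative single-input single-output module interface."""
    def hyperparameters(self):   # dict: hyperparameter name -> list of possible values
        raise NotImplementedError
    def is_specified(self):      # True once all hyperparameters have been assigned
        raise NotImplementedError
    def assign(self, name, value):   # choose a value for one hyperparameter
        raise NotImplementedError
    def outdim(self, indim):     # m as a function of n and the chosen values
        raise NotImplementedError
    def compile(self, indim):    # map the fully specified module to a graph fragment
        raise NotImplementedError

class Affine(Module):
    """Basic module with one hyperparameter (number of hidden units h); once h is
    chosen, the output dimensionality is h and the parameter count is (n + 1) * h."""
    def __init__(self, hs):
        self.hs, self.h = hs, None
    def hyperparameters(self):
        return {"num_hidden": self.hs}
    def is_specified(self):
        return self.h is not None
    def assign(self, name, value):
        self.h = value
    def outdim(self, indim):
        return self.h
    def num_parameters(self, indim):
        return (indim + 1) * self.h
```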
The fact that p and m are functions of the input dimensionality and the chosen hyperparameter values is one of the main observations that allows us to do architecture search: once we know the input dimensionality and have fixed values for the hyperparameters, the structure of the computation performed by the module is determined, and this information can be propagated to other modules. We say that a module is fully specified when values for all hyperparameters of the module have been chosen and the input dimensionality is known. We focus on search spaces for architectures that have a single input terminal and a single output terminal. By this, we only mean that the input and output of the module have to be a single tensor of arbitrary order and dimensionality. For example, convolutional modules take as input an order-three tensor and return as output an order-three tensor, therefore they are single-input single-output modules under our definition. We also assume that the output of a module is used as input to at most a single module, i.e., we assume no output sharing. These restrictions were introduced to simplify exposition. The single-input single-output case with no sharing is simpler to develop and exemplifies the main ideas that allow us to develop a framework for automatic architecture search. The ideas developed in this work extend naturally to the multiple-input multiple-output case with sharing. Additionally, often we can represent modules that are not single-input single-output by defining new modules that encapsulate many signal paths from input to output. For example, a residual module BID12 can be treated in our framework by noting that it is single-input before the skip connection split and single-output after the skip connection merge. Many top-performing architectures, such as AlexNet BID19, VGG BID25, and ResNet BID12, are captured in our language. We distinguish between basic computational modules and composite computational modules. Basic modules perform some well-defined transformation; affine, batch normalization, and dropout are examples of basic modules. The ideas developed in this section are perhaps best illustrated with an example. See FIG0 for the definition of an example search space in LISP-like pseudocode that closely parallels our implementation. The search space, which results from the composition of several modules and is therefore also a module itself, encodes 24 different models, corresponding to the 24 possible paths from the root to the leaves of the tree. The space is defined using three composite modules (Concat, MaybeSwap, and Optional) and five basic modules (Conv2D, BatchNormalization, ReLU, Dropout, and Affine). Concat introduces no additional hyperparameters, but it has to specify all the modules that have been delegated to it; MaybeSwap introduces a binary hyperparameter that encodes whether to swap the order of the pair of modules or not; Optional introduces a binary hyperparameter that encodes whether to include the module or not. The behavior of the basic modules in FIG0 is simple: Conv2D takes lists of possible values for the number of filters, the size of the filters, and the stride; BatchNormalization and ReLU have no hyperparameters; Dropout takes a list of the possible values for the dropout probability; Affine takes a list of the possible values for the number of hidden units.
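To build intuition for how such a space enumerates into concrete models, the following is a hedged, standalone sketch (independent of the framework's actual classes; in the real language later choices can be conditional on earlier ones, so the tree is not simply a cross product) that lists every fully specified model of a tiny space by assigning one hyperparameter at a time:

    # Hypothetical sketch: each hyperparameter has a list of possible values; a leaf
    # (fully specified model) corresponds to choosing one value per hyperparameter.
    def enumerate_models(hyperparams, partial=None):
        partial = partial or {}
        if len(partial) == len(hyperparams):          # all hyperparameters assigned: a leaf
            return [dict(partial)]
        name, values = next((k, v) for k, v in hyperparams.items() if k not in partial)
        models = []
        for v in values:                              # one child edge per possible value
            partial[name] = v
            models += enumerate_models(hyperparams, partial)
            del partial[name]
        return models

    space = {"num_filters": [32, 64], "filter_size": [3, 5], "bn_before_relu": [True, False]}
    print(len(enumerate_models(space)))               # 8 fully specified models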
Choosing different values for the hyperparameters of the composite modules may affect the structure of the resulting architecture, while choosing different values for the hyperparameters of the basic modules only affects the structure of the corresponding local transformations. The search space of FIG0 results from the composition of basic and composite modules; therefore it is a module itself and can be characterized by its input, output, parameters, and hyperparameters. Our set of composite modules is not minimal: e.g., given an Empty basic module, which has no hyperparameters or parameters and simply does the identity transformation, and an Or composite module, which introduces an extra hyperparameter encoding the choice of a specific module in its list, the composite modules Optional and MaybeSwap can be defined as (Optional B) = (Or Empty B) and (MaybeSwap B1 B2) = (Or (Concat B1 B2) (Concat B2 B1)). Given a search space defined by a module, there is an underlying tree over fully specified models: we build this tree by sequentially assigning values to each of the hyperparameters of the module. Each internal node in the tree corresponds to some partial assignment to the hyperparameters of the module, and each terminal node (i.e., each leaf) corresponds to a fully specified model. We can also think about an internal node as corresponding to the state of a module before assigning a value to the next unassigned hyperparameter. The branching factor of a node corresponds to the number of possible values for the hyperparameter under consideration at that node, and traversing a specific edge from that node to a child corresponds to assigning the value encoded by that edge to the hyperparameter under consideration. As a tree has a single path between the root and any leaf, the paths from root to leaves are in one-to-one correspondence with fully specified models. A leaf is reached when there are no hyperparameters left to specify. In FIG0 we have drawn a path through the search space of FIG0 from the root (labeled node 0), where all hyperparameters are unassigned, to a terminal node (labeled node 4), where all hyperparameters have been assigned values. Each branch in the tree corresponds to the assignment of some value to some hyperparameter. At node 0, we are choosing between 32 or 64 filters; at node 1, we are choosing between filters of size 3 or 5; at node 2, we are choosing between applying batch normalization before or after ReLU; at node 3, we are choosing whether to do dropout or not. Node 4 is terminal and corresponds to a fully specified model. Decisions at each node are conditional on decisions previously made. Internal nodes with a single child (i.e., branches for hyperparameters with a single possible value) have been collapsed and omitted from FIG0. Other paths may have different lengths, e.g., picking a path through the right child of node 3 corresponds to adding a Dropout module, which requires an additional hyperparameter choice for the dropout probability when compared to the path from the root to node 4. Search spaces arising from module composition have their traversal functionality automatically derived from the traversal functionality of their component modules: a basic module knows how to sequentially assign values to its hyperparameters, and a composite module knows how to sequentially assign values to its hyperparameters and call the sequential assignment functionality for its component modules. This is akin to recursive expression evaluation in programming languages.
To traverse the search space, i.e., to assign values to all hyperparameters of the module defining the search space, all that is needed is for each module to know how to sequentially specify itself. Modules resulting from the composition of other modules will then automatically be sequentially specifiable. The three local operations that a module needs to implement for traversal are: to test whether it is fully specified (i.e., whether it has reached a leaf yet); if it is not specified, to return which hyperparameter it is specifying and what the possible values for it are; and, given a choice for the current hyperparameter under consideration, to traverse the edge to the child of the current node corresponding to the chosen value. Once values for all hyperparameters of a module have been chosen, the fully specified model can be automatically mapped to its corresponding computational graph. We call this mapping compilation. This operation only requires that each module knows how to locally map itself to a computational graph: compilation is derived recursively from the compilation of simpler modules. For example, if we know how to compile Conv2D, ReLU, and Or modules, we will automatically be able to compile all modules built from them. This behavior is also similar to recursive expression evaluation in programming languages. In this section, we consider different search algorithms that are built on top of the functionality described above. Some of these algorithms rely on the search space being tree-structured. One of the challenges of our setting is that deep models are expensive to train, so unless we have access to extraordinary computational resources, only a moderate number of evaluations will be practical. Random search is the simplest algorithm that we can consider. At each node of the tree, we choose an outgoing edge uniformly at random, until we reach a leaf node (i.e., a model). Even just random search is interesting, as the model search space specification language allows us to capture expressive structural search spaces. Without our language, randomly selecting an interesting architecture to try would not be possible without considerable effort from the human expert. Monte Carlo tree search (MCTS) (BID17) is an approximate planning technique that has been used effectively in many domains BID24. Contrary to random search, MCTS uses the information gathered so far to steer its policy towards better performing parts of the search space. MCTS maintains a search tree that is expanded incrementally one node at a time. MCTS uses two policies: a tree policy, which determines the path to be traversed from the root to the frontier of the already expanded tree; and a rollout policy, which determines the path to be traversed from the frontier of the already expanded tree until a leaf is reached. Once a leaf is reached, the model encoded by it is evaluated (e.g., trained on the training set and evaluated on the validation set), and the resulting score is used to update the statistics of the nodes in the currently expanded tree on the path to the leaf. Each node in the expanded tree keeps statistics about the number of times it was visited and the average score of the models that were evaluated in the subtree at that node. The rollout policy is often simple, e.g., the random policy described in Section 5.1. The tree policy typically uses an upper confidence bound (UCB) approach.
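Before turning to the UCB details below, here is a hedged sketch of random search written against the three traversal operations just described (the method names mirror the module interface listed in the appendix; the evaluate call is a hypothetical placeholder for training and validating the compiled model):

    import random

    def random_specify(module):
        # Repeatedly pick a uniformly random value for the hyperparameter currently
        # under consideration, until the module (and hence the model) is fully specified.
        while not module.is_specified():
            choices = module.get_choices()                 # possible values of the current hyperparameter
            module.choose(random.randrange(len(choices)))  # traverse the corresponding edge in the tree
        return module

    # After specification, the model can be compiled to a computational graph and scored, e.g.:
    #   score = evaluate(module.compile(in_x, train_feed, eval_feed))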
Let n be the number of visits of a node v ∈ T, where T denotes the currently expanded tree, and let n_1, ..., n_b and X̄_1, ..., X̄_b be, respectively, the number of visits and the average scores of the b children of v. The tree policy at v chooses to traverse an edge corresponding to a child maximizing the UCB score X̄_i + 2c √(2 log n / n_i), where c ∈ R+ is a constant capturing the trade-off between exploration and exploitation: larger values of c correspond to larger amounts of exploration. If at node v some of its children have not been added to the tree, there will be some i ∈ {1, ..., b} for which n_i = 0; in this case we define the UCB score to be infinite, and therefore, unexpanded children always take precedence over expanded children. If multiple unexpanded children are available, we expand one uniformly at random. When MCTS visits a node in the expanded part of the tree, it has to expand all children of that node before expanding any children of its currently expanded children. This is undesirable when there are hyperparameters that can take a large number of related values. We often consider hyperparameters which take numeric values, and similar values often result in similar performance. For example, choosing between 64 or 80 filters for a convolutional module might not have a dramatic impact on performance. A way of addressing such hyperparameters is to restructure the branches of the tree by doing bisection. Assume that the set of values of a hyperparameter has a natural ordering. At a node, rather than committing directly to a value of the hyperparameter, we commit sequentially: first we decide whether we are choosing a value in the first or second half of the set of values, and then we recurse on the chosen half until we have narrowed it down to a single value. See an example tree in FIG1 and the corresponding restructured tree in FIG1. Tree restructuring involves a tradeoff between depth and breadth: the tree in FIG1 has depth 1, while the tree in FIG1 has depth 3. The restructured tree can have better properties in the sense that there is more sharing between different values of the hyperparameters. We could also consider restructured trees with branching factors different than two, again trading off depth and breadth. If the branching factor of the restructured tree is larger than the number of children of the hyperparameter, the restructuring has no effect, i.e., the original and restructured trees are equal. The restructuring operation allows MCTS to effectively consider hyperparameters with a large number of possible values. MCTS is tabular in the sense that it keeps statistics for each node in the tree. While the restructuring operation described in Section 5.3 increases sharing between different hyperparameter values (FIG1 caption: the result of restructuring the tree with bisection; MCTS applied to this tree results in more sharing when compared to the original tree, e.g., sampling a path reaching node 1 provides information about nodes 1, 2, and 3), it still suffers from the problem that nodes have no way of sharing information other than through common ancestors. This is problematic because differences in hyperparameter values at the top levels of the tree lead to little sharing between models, even if the resulting models happen to be very similar. Sequential Model Based Optimization (SMBO) BID13 allows us to address this problem by introducing a surrogate function which can be used to capture relationships between models and how promising it is to evaluate any specific model.
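A hedged sketch of this tree policy (plain Python, with hypothetical node fields; unvisited children receive an infinite score so they are always expanded first):

    import math

    def ucb_score(parent_visits, child_visits, child_avg_score, c=1.0):
        if child_visits == 0:
            return float("inf")            # unexpanded children take precedence
        exploration = 2 * c * math.sqrt(2 * math.log(parent_visits) / child_visits)
        return child_avg_score + exploration

    def select_child(node, c=1.0):
        # node.children is assumed to be a list of child nodes, each keeping
        # .visits and .avg_score statistics, as described in the text.
        return max(node.children,
                   key=lambda ch: ucb_score(node.visits, ch.visits, ch.avg_score, c))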
The surrogate function can use expressive features to capture architecture patterns that influence performance, e.g., features about sequences of basic modules that occur in the model. The surrogate function can then be optimized to choose which model to evaluate next. Exactly optimizing the surrogate function over a search space can be difficult, as often there is a combinatorially large number of models. To approximately optimize the surrogate function, we do some number of random rollouts from the root of the tree until we hit leaf nodes (i.e., models), score each of them with the surrogate function (i.e., determine, according to the surrogate function, how promising it is to evaluate that model), and evaluate the model that has the highest score according to the surrogate function. We also introduce an exploratory component where we flip a biased coin and choose between evaluating a random model or evaluating the best model according to the surrogate function. The surrogate function is updated after each evaluation. In our experiments, we use a simple surrogate function: we train a ridge regressor to predict model performance, using the models evaluated so far and their corresponding performances as training data. We only use features based on n-grams of sequences of basic modules, disregarding the values of the hyperparameters. More complex features, surrogate functions, and training losses are likely to lead to better search performance, but we leave these to future work. As a reminder, once we assign values to all hyperparameters of the module defining the search space, we need to compute a score for the resulting model, i.e., a score for the path from the root to the corresponding leaf encoding the model to evaluate. The specific way to compute scores is defined by the human expert, and it typically amounts to training the model on a training set and evaluating the trained model on a validation set. The score of the model is the resulting validation performance. The training process often has its own hyperparameters, such as: what optimization algorithm to use and its corresponding hyperparameters, the learning rate schedule (e.g., the initial learning rate, the learning rate reduction multiplier, and how many epochs without improving the validation performance the algorithm waits before reducing the learning rate), how many epochs without improving the validation performance the algorithm waits before terminating the training process (i.e., early stopping), and what data augmentation strategies to use and their corresponding hyperparameters. The behavior of the evaluation algorithm with respect to the values of its hyperparameters is defined by the expert for the task being considered, so the compilation step described in Section 4.3 for this functionality has to be implemented by the expert. Nonetheless, these user hyperparameters can be included in the search space and searched over in the same way as the architecture hyperparameters described in Section 4.1. We illustrate how our framework can be used to search over all hyperparameters of a model, i.e., both architecture and training hyperparameters, using only high-level insights. We choose a search space of deep convolutional models based around the ideas that depth is important, batch normalization helps convergence, and dropout is sometimes helpful.
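A hedged sketch of the simple surrogate described above: n-gram counts over the sequence of basic module types serve as features for a ridge regressor fit to the (model, performance) pairs observed so far (scikit-learn is assumed to be available; all names here are illustrative):

    from collections import Counter
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.linear_model import Ridge

    def ngram_features(module_types, n_max=2):
        # module_types: e.g. ["Conv2D", "BatchNormalization", "ReLU", "Affine"]
        feats = Counter()
        for n in range(1, n_max + 1):
            for i in range(len(module_types) - n + 1):
                feats["|".join(module_types[i:i + n])] += 1
        return feats

    class Surrogate(object):
        def __init__(self):
            self.vectorizer, self.regressor = DictVectorizer(), Ridge(alpha=1.0)
            self.features, self.scores = [], []

        def update(self, module_types, score):
            # Refit on all (model, performance) pairs observed so far.
            self.features.append(ngram_features(module_types))
            self.scores.append(score)
            self.regressor.fit(self.vectorizer.fit_transform(self.features), self.scores)

        def predict(self, module_types):
            x = self.vectorizer.transform([ngram_features(module_types)])
            return self.regressor.predict(x)[0]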
We search over architectures and evaluate our models on CIFAR-10. The training hyperparameters that we consider are whether to use ADAM or SGD with momentum, the initial learning rate, the learning rate reduction multiplier, and the rate reduction patience, i.e., how many epochs without improvement to wait before reducing the current learning rate. We use standard data augmentation techniques: we zero-pad the CIFAR-10 images to size 40 × 40 × 3, randomly crop a 32 × 32 portion, and flip horizontally at random. We could search over these too if desired. We compare the search algorithms described in Section 5 in terms of the best model found, according to validation performance, as a function of the number of evaluations. We run each algorithm 5 times, for 64 model evaluations each time. All models were trained for 30 minutes on GeForce GTX 970 GPUs in machines with similar specifications. In FIG2 and FIG2, we see that all search algorithms find performant solutions (around 89% accuracy) after 64 evaluations. In FIG2, we see that for fewer than 6 evaluations there is considerable variance between the different algorithms; the more sophisticated model search algorithms are not able to outperform random search with so few evaluations. In FIG2, we see that both SMBO and MCTS with bisection eventually outperform random search; MCTS with bisection starts outperforming random search around 32 evaluations, while for SMBO, it happens around 16 evaluations. Surprisingly, MCTS without restructuring does not outperform random search. We think that this is because there are too many possible values for the first few hyperparameters in the tree, so MCTS will not be able to identify and focus on high-performance regions of the search space within the number of evaluations available. MCTS with bisection and SMBO do not suffer from these problems, and therefore can identify and focus on high-performance regions of the search space earlier. In addition to achieving a higher top accuracy, MCTS with bisection and SMBO evaluate a larger fraction of high-performance models when compared to random search, as can be seen in FIG2. The main goal of the previous experiment is to show that more complex model search algorithms can outperform random search by better leveraging the structure of the search. We are not attempting to achieve state-of-the-art performance. We now show that using the same search space on MNIST with a larger time budget leads to close to state-of-the-art performance. The data augmentation scheme is slightly different, as we no longer randomly flip the image horizontally, but now consider random rotations where the maximum angle of rotation is also added as a hyperparameter to the search space. In this experiment, we randomly sample 16 models in the search space and train them for up to 3 hours or until the validation performance fails to increase for more than 128 epochs. The best of the 16 sampled models, chosen according to validation performance, has test accuracy equal to 99.72%, which is close to the single-model state-of-the-art of 99.77% BID23. Additionally, taking a simple majority-voting ensemble of the 5 best performing models yielded the same validation accuracy as the best single model and increased test accuracy to 99.75%.
The performance profile of the sampled models and the architecture and hyperparameters of the best model are presented in Appendix A. We can build good ensembles by sampling models in the search space and building an ensemble out of the best ones. It has been observed in the literature that model diversity often improves ensemble performance. Our results suggest that it is possible to define search spaces that work well across a range of tasks, having the potential to significantly reduce the burden on the human expert. We described a framework for automatically designing and training deep models. This framework consists of three fundamental components: the model search space specification language, the model search algorithm, and the model evaluation algorithm. The model search space specification language is composable, modular, and extensible, and allows us to easily define expressive search spaces over architectures. The model evaluation algorithm determines how to compute a score for a model in the search space. Models can be automatically compiled to their corresponding computational graphs. Using the model search space specification language and the model evaluation algorithm, we can introduce model search algorithms for exploring the search space. Using our framework, it is possible to do random search over interesting spaces of architectures without much effort from the expert. We also described more complex model search algorithms, such as MCTS, MCTS with tree restructuring, and SMBO. We present experiments on CIFAR-10 comparing different model search algorithms and show that MCTS with tree restructuring and SMBO outperform random search. Code for our framework and experiments has been made publicly available. We hope that this paper will lead to more work and better tools for automatic architecture search. In Section 7, we considered a search space of deep convolutional models having structural hyperparameters for the depth of the network, whether to apply batch normalization before or after ReLU, and whether to use dropout; hyperparameters for the number and size of the convolutional filters; and training hyperparameters for the learning rate schedule. We show in FIG3 the LISP-like pseudocode for the search space considered in Section 7, and in FIG4 the corresponding runnable Python implementation in our framework. (FIG3/FIG4 include, e.g., a dictionary of training hyperparameters with entries such as 'optimizer_type', 'learning_rate_init', 'rate_mult', 'rate_patience', and 'stop_patience', and module declarations built from Conv2D, Concat, and RepeatTied.) In FIG3 and FIG4, to include training hyperparameters in the search space, we concatenate the module that encapsulates the training hyperparameters (the module assigned to MH) and the modules that encapsulate the remaining model hyperparameters (the modules other than MH in the declaration of M). The Python specification of the model search space in FIG4 is remarkably close in both semantics and length to the LISP-like pseudocode in FIG3. We omit some hyperparameters in FIG3 because we did not consider multiple values for them, e.g., for Conv2D modules, we always used same-size padding and the initialization scheme described in BID11. Our implementation has code modularity and reusability benefits. For example, we can define an auxiliary function to instantiate modules and then use it in the instantiation of the module for the complete search space.
This is illustrated in FIG4 with the definition of Module_fn and its use in the declaration of M. See Figure 6a for the performance profile of 16 models randomly sampled from the search space in FIG4. See Figure 6b for the architecture and training hyperparameters of the best model found in the 16 samples. We provide a brief description of a representative subset of the types of basic and composite modules that we have implemented in our framework. It is simple to define new modules beyond this list by implementing the module interface described in Section C. Basic modules take no other modules when instantiated, having only local hyperparameters and parameters.
• Affine: Dense affine transformation. Hyperparameters: number of hidden units and initialization scheme of the parameters. Parameters: dense matrix and bias vector.
• UserHyperparams: User-defined hyperparameters. Hyperparameters: hyperparameters determined by the human expert. Parameters: none.
• Empty: Identity. Hyperparameters: none. Parameters: none.
Composite modules take other modules as arguments when instantiated, which we will call submodules. The behavior of a composite module depends on its submodules. The hyperparameters which a composite module has to specify depend on the values of the hyperparameters of the composite module and the hyperparameters of the submodules; e.g., Or takes a list of submodules but it only has to specify the hyperparameters of the submodule that it ends up choosing. A composite module is responsible for specifying its submodules, which is done through calls to the module interfaces of the submodules.
• Concat: Takes a list of submodules and connects them in series. Hyperparameters: hyperparameters of the submodules. Parameters: parameters of the submodules.
We describe the module interface as we implemented it in Python. To implement a new type of module, one only needs to implement the module interface.

    class Module(object):
        def initialize(self, in_d, scope)
        def get_outdim(self)
        def is_specified(self)
        def get_choices(self)
        def choose(self, choice_i)
        def compile(self, in_x, train_feed, eval_feed)

Figure 7: Module interface used by all modules, irrespective of whether they are basic or composite. To implement a new type of module, the human expert only needs to implement the module interface.
• initialize: Tells a module its input dimensionality. A composite module is responsible for initializing the submodules that it uses.
• get_outdim: Once a module is fully specified, we can determine its output dimensionality by calling get_outdim. The output dimensionality is a function of the input dimensionality (which is determined when initialize is called) and the values of the hyperparameters chosen.
• is_specified: Tests whether a module is fully specified. If a module is fully specified, get_outdim and compile may be called.
• get_choices: Returns a list of the possible values for the hyperparameter currently being specified.
• choose: Chooses one of the possible values for the hyperparameter being specified. The module assigns the chosen value to that hyperparameter and either transitions to the next hyperparameter to specify or becomes fully specified. The module maintains internally the state of its search process.
• compile: Creates the computational graph of the model in a deep learning model specification language, such as TensorFlow or PyTorch.
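As a hedged illustration of how a basic module might implement this interface (a simplified, standalone sketch rather than the framework's actual code; compile here returns a NumPy computation instead of building a TensorFlow graph, and the random parameter initialization is only a stand-in):

    import numpy as np

    class AffineModule(object):
        def __init__(self, hidden_units_choices):
            self.hidden_units_choices = list(hidden_units_choices)
            self.hidden_units = None          # the single hyperparameter, initially unspecified
            self.in_d = None

        def initialize(self, in_d, scope=None):
            self.in_d = in_d                  # the module is told its input dimensionality

        def is_specified(self):
            return self.hidden_units is not None

        def get_choices(self):
            return self.hidden_units_choices

        def choose(self, choice_i):
            self.hidden_units = self.hidden_units_choices[choice_i]

        def get_outdim(self):
            return self.hidden_units          # m = h for an affine module

        def compile(self, in_x, train_feed=None, eval_feed=None):
            # p = (in_d + 1) * h parameters: a weight matrix plus a bias vector.
            W = 0.01 * np.random.randn(self.hidden_units, self.in_d)
            b = np.zeros(self.hidden_units)
            return in_x @ W.T + b

    m = AffineModule([64, 128, 256])
    m.initialize(in_d=32)
    m.choose(1)                               # pick 128 hidden units
    y = m.compile(np.ones((4, 32)))           # output has shape (4, 128)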
For composite modules, compilation can be performed recursively, through calls to the compile functions of its submodules. Composite modules rely on calls to the module interfaces of their submodules to implement their own module interfaces. For example, Concat needs to call get_outdim on the last submodule of the series connection to determine its own output dimensionality, and needs to call choose on the submodules to specify itself. One of the design choices that makes the language modular is the fact that a composite module can implement its own module interface through calls to the module interfaces of its submodules. All information about the specification of a module is local to itself or kept within its submodules. We can define new modules with complex signal paths as long as their existence is encapsulated, i.e., a module may have many signal paths as long as they fork from a single input and merge to a single output, as illustrated in Figure 8. Figure 8: A module with many signal paths from input to output. To implement a module, the human expert only needs to implement its module interface. M1, M2, M3, and M4 are arbitrary single-input single-output modules; g1 and g2 are arbitrary transformations that may have additional hyperparameters. The hyperparameters of g1 and g2 can be managed internally by NewModule. In Figure 8 there is a single input fed into M1, M2, and M3. M1, M2, M3, M4, and M5 are arbitrary single-input single-output submodules of NewModule. The module interface of NewModule can be implemented using the module interfaces of its submodules. Instantiating a module of type NewModule requires submodules for M1, M2, M3, M4, and M5, and potentially lists of possible values for the hyperparameters of g1 and g2. A residual module which chooses what type of merging function to apply, e.g., additive or multiplicative, is an example of a module with hyperparameters for the merging functions. A module of the type NewModule is fully specified after we choose values for all the hyperparameters of M1, M2, M3, M4, M5, g1, and g2. Testing if M1, M2, M3, M4, and M5 are fully specified can be done by calling is_specified on the corresponding submodule. The output dimensionality of NewModule can be computed as a function of the values of the hyperparameters of g2 and the output dimensionality of M5 and M4, which can be obtained by calling get_outdim. Similarly, for get_choices we have to keep track of which hyperparameter we are specifying, which can either come from M1, M2, M3, M4, and M5, or from g1 and g2. If we are choosing values for a hyperparameter in M1, M2, M3, M4, and M5, we can call get_choices and choose on that submodule, while for the hyperparameters of g1 and g2 we have to keep track of the state in NewModule. compile is similar in the sense that it is implemented using calls to the compile functionality of the submodules.
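Continuing the sketch above, a hedged illustration of how a composite Concat-style module could implement the same interface purely by delegating to its submodules (again a standalone simplification, not the framework's actual code):

    class ConcatModule(object):
        def __init__(self, submodules):
            self.submodules = list(submodules)
            self.in_d = None

        def initialize(self, in_d, scope=None):
            self.in_d = in_d

        def is_specified(self):
            return all(m.is_specified() for m in self.submodules)

        def _current(self):
            # The first submodule that is not yet fully specified.
            return next(m for m in self.submodules if not m.is_specified())

        def get_choices(self):
            return self._current().get_choices()

        def choose(self, choice_i):
            self._current().choose(choice_i)

        def get_outdim(self):
            return self.submodules[-1].get_outdim()    # series connection: last submodule's output

        def compile(self, in_x, train_feed=None, eval_feed=None):
            # Compile submodules in series, propagating dimensionalities from one to the next.
            x, d = in_x, self.in_d
            for m in self.submodules:
                m.initialize(d)
                x = m.compile(x, train_feed, eval_feed)
                d = m.get_outdim()
            return x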
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rkTBjG-AZ
We describe a modular and composable language for describing expressive search spaces over architectures and simple model search algorithms applied to these search spaces.
Deep learning networks have achieved state-of-the-art accuracies on computer vision workloads like image classification and object detection. The top-performing systems, however, typically involve big models with numerous parameters. Once trained, a challenging aspect for such top-performing models is deployment on resource-constrained inference systems -- the models (often deep networks or wide networks or both) are compute and memory intensive. Low-precision numerics and model compression using knowledge distillation are popular techniques to lower both the compute requirements and memory footprint of these deployed models. In this paper, we study the combination of these two techniques and show that the performance of low-precision networks can be significantly improved by using knowledge distillation techniques. We call our approach Apprentice and show state-of-the-art accuracies using ternary precision and 4-bit precision for many variants of ResNet architecture on the ImageNet dataset. We study three schemes in which one can apply knowledge distillation techniques to various stages of the train-and-deploy pipeline. Background: Today's high-performing deep neural networks (DNNs) for computer vision applications comprise multiple layers and involve numerous parameters. These networks have O(Giga-FLOPS) compute requirements and generate models which are O(Mega-Bytes) in storage BID4. Further, the memory and compute requirements during training and inference are quite different BID23. Training is performed on big datasets with large batch-sizes, where the memory footprint of activations dominates the model memory footprint. On the other hand, the batch-size during inference is typically small and the model's memory footprint dominates the runtime memory requirements. Because of the complexity in compute, memory and storage requirements, the training phase of the networks is performed on CPU and/or GPU clusters in a distributed computing environment. Once trained, a challenging aspect is deployment of trained models on resource-constrained inference systems such as portable devices or sensor networks, and for applications in which real-time predictions are required. Performing inference on edge devices comes with severe constraints on memory, compute and power. Additionally, ensemble-based methods, which one can potentially use to get improved accuracy predictions, become prohibitive in resource-constrained systems. Quantization using low-precision numerics BID37 BID45 BID21 BID24 BID10 BID46 BID26 BID35 BID23 and model compression BID3 BID16 BID27 have emerged as popular solutions for resource-constrained deployment scenarios. With quantization, a low-precision version of the network model is generated and deployed on the device. Operating in lower precision mode reduces compute as well as data movement and storage requirements. However, the majority of existing works in low-precision DNNs sacrifice accuracy over baseline full-precision networks. With model compression, a smaller (student) model trained to mimic a larger model is deployed on the device. In the first scheme we study, a full-precision teacher network is trained jointly with a low-precision student (apprentice) network. In the second scheme, we start with a full-precision trained network and transfer knowledge from this trained network continuously to train a low-precision network from scratch. We find that the low-precision network converges faster (albeit to similar accuracies as the first scheme) when a trained complex network guides its training. In the third scheme, we start with a trained full-precision large network and an apprentice network that has been initialized with full-precision weights.
The apprentice network's precision is then lowered, and the network is fine-tuned using knowledge distillation techniques. We find that the low-precision network's accuracy marginally improves and surpasses the accuracy obtained via the first scheme. This scheme then sets the new state-of-the-art accuracies for the ResNet models at ternary and 4-bit precision. Overall, the contributions of this paper are the techniques to obtain low-precision DNNs using knowledge distillation techniques. Each of our schemes produces a low-precision model that surpasses the accuracy of the equivalent low-precision model published to date. One of our schemes also helps a low-precision model converge faster. We envision these accurate low-precision models to simplify the inference deployment process on resource-constrained systems and even otherwise on cloud-based deployment systems. Lowering precision of model parameters: Resource-constrained inference systems impose significant restrictions on memory, compute and power budget. With regard to storage, model (or weight) parameters and activation maps occupy memory during the inference phase of DNNs. During this phase, memory is allocated for the input (IFM) and output feature maps (OFM) required by a single layer in the DNN, and these dynamic memory allocations are reused for other layers. The total memory allocation during inference is then the maximum of IFM and maximum of OFM memory required across all the layers, plus the sum of all weight tensors BID23. When the inference phase for DNNs is performed with a small batch size, the memory footprint of the weights exceeds the footprint of the activation maps. This aspect is shown in Figure 1 for 4 different networks (AlexNet BID18, Inception-Resnet-v2 BID32, ResNet-50 and ResNet-101) running 224x224 image patches. Thus, lowering the precision of the weight tensors helps lower the memory requirements during deployment. One other aspect of lowering the memory footprint is that the working set size of the workload starts to fit on chip, and by reducing accesses to DRAM (off-chip) memory, the compute core starts to see better performance and energy savings (DRAM accesses are expensive in latency and energy). Benefit of low-precision compute: Low-precision compute simplifies hardware implementation. For example, the compute unit to perform the convolution operation (multiplication of two operands) involves a floating-point multiplier when using full-precision weights and activations. The floating-point multiplier can be replaced with much simpler circuitry (xnor and popcount logic elements) when using binary precision for weights and activations BID7 BID26. Similarly, when using ternary precision for weights and full-precision for activations, the multiplier unit can be replaced with a sign comparator unit. Simpler hardware also helps lower the inference latency and energy budget. Thus, operating in lower precision mode reduces compute as well as data movement and storage requirements. The drawback of low-precision models, however, is degraded accuracy. We discuss later in the paper the network accuracies obtained using methods proposed in the literature. These accuracies serve as the starting point and baselines we compare to in our work. Low-precision networks: Low-precision DNNs are an active area of research. Most low-precision networks acknowledge the over-parameterization aspect of today's DNN architectures and/or the aspect that lowering the precision of neurons post-training often does not impact the final performance.
Reducing the precision of weights for an efficient inference pipeline has been very well studied. Works like BinaryConnect (BC), Ternary-weight networks (TWN) BID20, fine-grained ternary quantization BID22 and INQ BID44 target precision reduction of network weights. Accuracy is almost always affected when quantizing the weights significantly below 8-bits of precision. For AlexNet on ImageNet, TWN loses 5% Top-1 accuracy. Schemes like INQ, the work in BID31, and BID22 perform fine-tuning to quantize the network weights. Work in XNOR-NET BID26, binary neural networks BID7, DoReFa BID45 and trained ternary quantization (TTQ) BID46 targets the training pipeline. While TTQ targets weight quantization, most works targeting activation quantization show that quantizing activations always hurts accuracy. The XNOR-NET approach degrades Top-1 accuracy by 12% and DoReFa by 8% when quantizing both weights and activations to 1-bit (for AlexNet on ImageNet). Work by BID10 advocates for low-precision fixed-point numbers for training. They show 16-bits to be sufficient for training on the CIFAR-10 dataset. Work by BID30 quantizes gradients in a distributed computing system. The general technique in distillation-based methods involves using a teacher-student strategy, where a large deep network trained for a given task teaches shallower student network(s) on the same task. The core concepts behind knowledge distillation or transfer techniques have been around for a while. BID3 show that one can compress the information in an ensemble into a single network. BID2 extend this approach to study shallow, but wide, fully connected topologies by mimicking deep neural networks. To facilitate learning, the authors introduce the concept of learning on logits rather than on the probability distribution. BID16 propose a framework to transfer knowledge by introducing the concept of temperature. The key idea is to divide the logits by a temperature factor before performing the Softmax function. By using a higher temperature factor, the contribution of the incorrect classes to the soft targets is boosted. This then facilitates more information flowing to the model parameters during the back-propagation operation. FitNets BID27 extend this work by using intermediate hidden layer outputs as target values for training a deeper, but thinner, student model. Net2Net BID5 also uses a teacher-student network system with a function-preserving transformation approach to initialize the parameters of the student network. The goal in the Net2Net approach is to accelerate the training of a larger student network. BID43 use attention as a mechanism for transferring knowledge from one network to another. In a similar theme, BID42 propose an information metric using which a teacher DNN can transfer the distilled knowledge to other student DNNs. In the N2N learning work, BID1 propose a reinforcement learning based approach for compressing a teacher network into an equally capable student network. They achieve a compression factor of 10x for ResNet-34 on CIFAR datasets. Sparsity and hashing: A few other popular techniques for model compression are pruning BID19 BID11 BID12, hashing BID39 and weight sharing BID6 BID9. Pruning leads to removing neurons entirely from the final trained model, making the model a sparse structure. With hashing and weight-sharing schemes, a hash function is used to alias several weight parameters into a few hash buckets, effectively lowering the parameter memory footprint. To realize the benefits of sparsity and hashing schemes at runtime, efficient hardware support is required (e.g.
support for irregular memory accesses BID38 BID25). We introduce the concept of knowledge distillation in this section. BID3, BID16 and BID36 analyze this topic in great detail. The joint cost function we use is L(x; W_T, W_A) = α·H(y, P_T) + β·H(y, P_A) + γ·H(z_T, P_A) (equation 1), where W_T and W_A are the parameters of the teacher and the student (apprentice) network, respectively, y is the ground truth, z_T denotes the logits of the teacher network, P_T and P_A denote the Softmax outputs of the teacher and apprentice networks, H(·) denotes a loss function, and α, β and γ are weighting factors to prioritize the output of a certain loss function over the other. In equation 1, lowering the first term of the cost function gives a better teacher network and lowering the second term gives a better student network. The third term is the knowledge distillation term whereby the student network attempts to mimic the knowledge in the teacher network. In BID16, the logits of the teacher network are divided by a temperature factor τ. Using a higher value for τ produces a softer probability distribution when taking the Softmax of the logits. In our studies, we use the cross-entropy function for H(·), set α = 1, β = 0.5 and γ = 0.5, and perform the transfer learning process using the logits (inputs to the Softmax function) of the teacher network. In our experiments we study the effect of varying the depth of the teacher and the student network, and the precision of the neurons in the student network. Low-precision DNNs target the storage and compute efficiency aspects of the network. Model compression targets the same efficiency parameters from the point of view of network architecture. With Apprentice we combine both these techniques to improve the network accuracy as well as the runtime efficiency of DNNs. Using the teacher-student setup described in the last section, we investigate three schemes using which one can obtain a low-precision model for the student network. The first scheme (scheme-A) jointly trains both networks: a full-precision teacher and a low-precision student network. The second scheme (scheme-B) trains only the low-precision student network but distills knowledge from a trained full-precision teacher network throughout the training process. The third scheme (scheme-C) starts with a trained full-precision teacher and a full-precision student network but fine-tunes the student network after lowering its precision. Before we get into the details of each of these schemes, we discuss the accuracy numbers obtained using low-precision schemes described in the literature. These accuracy figures serve as the baseline for comparative analysis. We focus on sub-8-bit precision for inference deployments, specifically ternary and 4-bit precision. We found the TTQ scheme BID46 to achieve the state-of-the-art accuracy with ternary precision for weights and full-precision (32-bit floating-point) for activations. On ImageNet-1K, TTQ achieves a 33.4% Top-1 error rate with a ResNet-18 model. We implemented the TTQ scheme for ResNet-34 and ResNet-50 models trained on ImageNet-1K and achieved 28.3% and 25.6% Top-1 error rates, respectively. This scheme is our baseline for 2-bits weight and full-precision activations. For 2-bits weight and 8-bits activation, we find the work by BID22 to achieve the best accuracies reported in the literature. For ResNet-50, BID22 obtain 29.24% Top-1 error. We consider this work to be our baseline for 2-bits weight and 8-bits activation models. For 4-bits precision, we find the WRPN scheme BID23 to report the highest accuracy. We implemented this scheme for 4-bits weight and 8-bits activations.
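A hedged NumPy sketch of a three-term distillation loss of this form (illustrative only and not the paper's exact implementation: here the knowledge-transfer term is instantiated as cross-entropy against the teacher's Softmax output at temperature τ, which is one concrete way of realizing the third term):

    import numpy as np

    def softmax(z, tau=1.0):
        z = z / tau
        z = z - z.max(axis=1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    def cross_entropy(target_probs, pred_probs, eps=1e-12):
        return -np.mean(np.sum(target_probs * np.log(pred_probs + eps), axis=1))

    def apprentice_loss(y_onehot, z_teacher, z_student,
                        alpha=1.0, beta=0.5, gamma=0.5, tau=1.0):
        p_teacher = softmax(z_teacher)
        p_student = softmax(z_student)
        loss_teacher = cross_entropy(y_onehot, p_teacher)                  # term 1: better teacher
        loss_student = cross_entropy(y_onehot, p_student)                  # term 2: better student
        loss_distill = cross_entropy(softmax(z_teacher, tau), p_student)   # term 3: mimic the teacher
        return alpha * loss_teacher + beta * loss_student + gamma * loss_distill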
For ResNet-34 and ResNet-50 models trained on ImageNet-1K, we achieve 29.7% and 28.4% Top-1 error rates, respectively. In the first scheme that we investigate, a full-precision teacher network is jointly trained with a low-precision student network. FIG0 shows the overall training framework. We use the ResNet topology for both the teacher and student network. When using a certain depth for the student network, we pick the teacher network to have either the same or larger depth. In BID3 and BID16, only the student network trains while distilling knowledge from the teacher network. In our case, we jointly train with the rationale that the teacher network would continuously guide the student network not only with the final trained logits, but also on what path the teacher takes towards generating those final higher-accuracy logits. We implement the pre-activation version of ResNet in TensorFlow BID0. For low-precision numerics, when using ternary precision we use the ternary weight network scheme BID20, where the weight tensors are quantized into {−1, 0, 1} with a per-layer scaling coefficient computed based on the mean of the positive terms in the weight tensor. We use the WRPN scheme BID23 to quantize weights and activations to 4-bits or 8-bits. We do not lower the precision of the first layer and the final layer in the apprentice network. This is based on the observation in almost all prior works that lowering the precision of these layers degrades the accuracy dramatically. While training and during fine-tuning, the gradients are still maintained at full-precision. Results with ResNet-18: TAB2 shows the effect of lowering precision on the accuracy (Top-1 error) of ResNet-18 with baseline (no teacher) and with ResNet-34, ResNet-50 and ResNet-101 as teachers. In the table, A denotes the precision of the activation maps (in bits) and W denotes the precision of the weights. The baseline Top-1 error for full-precision ResNet-18 is 30.4%. By lowering the precision without using any help from a teacher network, the accuracy drops by 3.5% when using ternary and 4-bits precision (the column corresponding to "Res-18 Baseline" in the table). With the distillation-based technique, the accuracy of low-precision configurations improves significantly. In fact, the accuracy of the full-precision ResNet-18 also improves when paired with a larger full-precision ResNet model (the row corresponding to "32A, 32W" in TAB2). The best full-precision accuracy was achieved with a student ResNet-18 and ResNet-101 as the teacher (improvement by 0.35% over the baseline). The gap between full-precision ResNet-18 and the best achieved ternary-weight ResNet-18 is only 1% (improvement of 2% over the previous best). With "8A, 4W", we find the accuracy of the student ResNet-18 model to beat the baseline accuracy. We hypothesize regularization with low precision (and distillation) to be the reason for this. "8A, 4W" improving the accuracy beyond the baseline figure is only seen for ResNet-18. FIG2 shows the difference in Top-1 error rate achieved by our best low-precision student networks (when trained under the guidance of a teacher network) versus not using any help from a teacher network. For this figure, the difference in Top-1 error of the best low-precision student network is calculated from the baseline full-precision network (i.e., ResNet-18 with 30.4% Top-1 error), i.e., we want to see how close a low-precision student network can come to a full-precision baseline model.
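A hedged NumPy sketch of ternary weight quantization in the spirit of the scheme described above (the 0.7-times-mean-absolute-weight threshold follows the TWN heuristic and is an assumption here; the per-layer scale is computed from the magnitudes of the weights that survive thresholding):

    import numpy as np

    def ternarize(w):
        # w: full-precision weight tensor of one layer
        delta = 0.7 * np.mean(np.abs(w))                  # TWN-style threshold (assumed)
        q = np.zeros_like(w)
        q[w > delta] = 1.0
        q[w < -delta] = -1.0
        mask = np.abs(w) > delta
        scale = np.mean(np.abs(w[mask])) if mask.any() else 0.0   # per-layer scaling coefficient
        return scale * q                                  # quantized values in {-scale, 0, +scale}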
We find our low-precision network accuracies to significantly close the gap to full-precision accuracy (and for some configurations even beat the baseline accuracy). BID16 mention improving the baseline full-precision accuracy when a student network is paired with a teacher network. They mention improving the accuracy of a small model on the MNIST dataset. We show the efficacy of distillation-based techniques on a much bigger model (ResNet) with a much larger dataset (ImageNet). Results with ResNet-34 and ResNet-50: TAB3, FIG4 and FIG4 show the difference in Top-1 error achieved by our best low-precision ResNet-34 and ResNet-50 student networks, respectively, and compare with results obtained using methods proposed in the literature. Our Apprentice scheme significantly closes the gap between full-precision baseline networks and low-precision variants of the same networks. In most cases our scheme betters the previously reported accuracy numbers by 1.5%-3%. In scheme-A, we use a teacher network that is always as large or larger in number of parameters than the student network. We experimented with a ternary ResNet-34 student network which was paired with a full-precision ResNet-18. The ternary model for ResNet-34 is about 8.5x smaller in size compared to the full-precision ResNet-18 model. The final trained accuracy of the ResNet-34 ternary model with this setup is 2.7% worse than that obtained by pairing the ternary ResNet-34 network with a ResNet-50 teacher network. This suggests that the distillation scheme works only when the teacher network is higher in accuracy than the student network (and not necessarily bigger in capacity). Further, the benefit from using a larger teacher network saturates at some point. This can be seen by picking a precision point, say "32A, 2W", and looking at the error rates along the row in TAB2, 2 and 3. One concern we had in the early stages of our investigation with joint training of a low-precision small network and a high-precision large network was the influence of the small network's accuracy on the accuracy of the large network. When using the joint cost function, the smaller network's probability scores are matched with the predictions from the teacher network. The joint cost is added as a term to the total loss function (equation 1). This led us to posit that the larger network's learning capability will be affected by the inherent impairment in the smaller low-precision network. Further, since the smaller student network learns from the larger teacher network, a vicious cycle might form where the student network's accuracy will further drop because the teacher network's learning capability is being impeded. However, in practice, we did not see this phenomenon occurring: in each case where the teacher network was jointly trained with a student network, the accuracy of the teacher network was always within 0.1% to 0.2% of the accuracy of the teacher network without it jointly supervising a student network. This could be because of our choice of α, β and γ values. In Section 4, we mentioned the temperature τ for the Softmax function and the hyper-parameters α = 1, β = 0.5 and γ = 0.5. Since we train directly on the logits of the teacher network, we did not have to experiment with the appropriate value of τ. τ is required when training on the soft targets produced by the teacher network. Although we did not do extensive studies experimenting with training on soft targets as opposed to logits, we find that τ = 1 gives us the best results when training on soft targets.
BID16 mention that when the student network is significantly smaller than the teacher network, small values of τ are more effective than large values. For a few of the low-precision configurations, we experimented with α = β = γ = 1, and with α = 0.9, β = 1 and γ = 0.1 or 0.3. Each of these configurations yielded a lower-performance model compared to our original choice for these parameters. For the third term in equation 1, we experimented with a mean-squared error loss function and also a loss function with logits from both the student and the teacher network (i.e., H(z_T, z_A)). We did not find any improvement in accuracy compared to our original choice of the cost function formulation. A thorough investigation of the behavior of the networks with other values of hyper-parameters and different loss functions is an agenda for our future work. Overall, we find the distillation process to be quite effective in getting us high-accuracy low-precision models. All our low-precision models surpass previously reported low-precision accuracy figures. For example, the TTQ scheme achieves a 33.4% Top-1 error rate for ResNet-18 with 2-bits weight. Our best ResNet-18 model, using scheme-A, with 2-bits weight achieves ∼31.5% error rate, improving the model accuracy by ∼2% over TTQ. Similarly, the scheme in BID22 achieves 29.2% Top-1 error with 2-bits weight and 8-bits activation. The best-performing Apprentice network at this precision achieves 27.2% Top-1 error. For Scheme-B and Scheme-C, which we describe next, Scheme-A serves as the new baseline. In this scheme, we start with a trained teacher network. Referring back to FIG0, the input image is passed to both the teacher and the student network, except that the learning with back-propagation happens only in the low-precision student network, which is trained from scratch. This is the scheme used by BID3 and BID16 for training their student networks. In this scheme, the first term in equation 1 zeroes out and only the last two terms in the equation contribute toward the loss function. (Figure 5: Top-1 error rate versus epochs of four student networks using scheme-A and scheme-B.) With scheme-B, one can pre-compute and store the logit values for the input images on disk and access them during training of the student network. This saves the forward-pass computations in the teacher network. Scheme-B might also help the scenario where a student network attempts to learn the "dark knowledge" from a teacher network that has already been trained on some private or sensitive data (in addition to the data the student network is interested in training on). With scheme-A, we had the hypothesis that the student network would be influenced by not only the "dark knowledge" in the teacher network but also the path the teacher adopts to learn the knowledge. With scheme-B we find that the student network gets to similar accuracy numbers as the teacher network, albeit in fewer epochs. With this scheme, the training accuracies are similar to those reported in TAB2, 2 and 3. The low-precision student networks, however, learn in fewer epochs. Figure 5 plots the Top-1 error rates for a few of the configurations from our experiment suite. In each of these plots, the student network in scheme-B converges around the 80th-85th epoch, compared to about 105 epochs in scheme-A.
In general, we find the student networks with scheme-B to learn in about 10%-20% fewer epochs than the student networks trained using scheme-A.
5.4 SCHEME-C: FINE-TUNING THE STUDENT MODEL
Scheme-C is very similar to scheme-B, except that the student network is primed with weights from full-precision training before the start of the training process. At the beginning of the training process, the precision of the weights and activations is lowered and the student network is, in effect, fine-tuned on the dataset. Similar to scheme-B, only the final two terms in equation 1 comprise the loss function, and the low-precision student network is trained with the back-propagation algorithm. Since the network starts from a good initial point, a comparatively low learning rate is used throughout the training process. There is no clear recipe for learning rates (and change of learning rate with epochs) which works across all the configurations. In general, we find training with a learning rate of 1e-3 for 10 to 15 epochs, followed by 1e-4 for another 5 to 10 epochs, followed by 1e-5 for another 5 epochs, to give us the best accuracy. Some configurations run for about 40 to 50 epochs before stabilizing. For these configurations, we find training using scheme-B with warm startup (train the student network at full-precision for about 25-30 epochs before lowering the precision) to be equally good. BID41 investigate a similar scheme for binary precision on AlexNet. Our experiments show that distillation is an overkill for AlexNet and one can get comparable accuracies using techniques proposed in BID33 BID23. Further, while BID41 hypothesize that the distillation scheme will work on larger networks, we show in this paper how to make it work. Other work uses a similar scheme for AlexNet and mentions that starting from a non-globally-optimal checkpoint gives better accuracy, though we did not find this observation to hold in our experiments. We find the final accuracy of the models obtained using scheme-C to be (marginally) better than those obtained using scheme-A or scheme-B. TAB5 shows error rates of a few configurations of the low-precision student network obtained using scheme-A (or scheme-B) and scheme-C. For the ResNet-50 student network, the accuracy with ternary weights is further improved by 0.6% compared to that obtained using scheme-A. Note that the performance of ternary networks obtained using scheme-A is already state-of-the-art. Hence, for ResNet-50 ternary networks, 24.7% Top-1 error rate is the new state-of-the-art. With this, ternary ResNet-50 is within 0.9% of baseline accuracy (23.8% vs. 24.7%). Similarly, with 4-bits weight and 8-bits activations, the ResNet-50 model obtained using scheme-C is 0.4% better than that obtained with scheme-A (closing the gap to within 1.3% of full-precision ResNet-50 accuracy). Scheme-C is useful when one already has a trained network which can be fine-tuned using knowledge distillation schemes to produce a low-precision variant of the trained network. As mentioned earlier, low precision is a form of model compression. There are many works which target network sparsification and pruning techniques to compress a model. With ternary precision, the model size reduces to 2/32 (i.e., 1/16th) of the full-precision model size. With Apprentice, we show how one can get a performant model with ternary precision. Many works targeting network pruning and sparsification target a full-precision model to implement their scheme. To be comparable in model size to ternary networks, a full-precision model needs to be sparsified by 93.75%.
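To make the 93.75% figure explicit (a simple sanity check of the arithmetic, ignoring for the moment the index/metadata overhead discussed next):

    size(ternary) / size(full precision) = 2 bits / 32 bits = 1/16 = 6.25%
    required sparsity for an equal-size full-precision model = 1 - 1/16 = 15/16 = 93.75%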
Further, to be effective, a sparse model needs to store a key for every non-zero value denoting the position of the value in the weight tensor. This adds storage overhead, and a sparse model needs to be about 95% sparse to be at par in memory size with a 2-bit model. Note that ternary precision also has inherent sparsity (zero is a term in the ternary symbol dictionary); we find our ternary models to be about 50% sparse. In prior work, including BID12, sparsification of full-precision networks is proposed, but the sparsity achieved is less than 93.75%. Further, the techniques in these works lead to larger degradation in accuracy compared to our ternary models. Overall, we believe our ternary precision models are state-of-the-art not only in accuracy (we improve the accuracy compared to prior ternary precision models) but also when one considers the size of the model at the accuracy level achieved by low-precision or sparse networks. While low-precision networks have system-level benefits, the drawback of such models is degraded accuracy when compared to full-precision models. We present three schemes based on the knowledge distillation concept to improve the accuracy of low-precision networks and close the gap between the accuracy of these models and full-precision models. Each of the three schemes improves the accuracy of the low-precision network configuration compared to prior proposals. We motivate the need for a smaller model size in low-batch, real-time and resource-constrained inference deployment systems. We envision the low-precision models produced by our schemes to simplify the inference deployment process on resource-constrained systems and on cloud-based deployment systems where low latency is a critical requirement. In addition to the ImageNet dataset, we also experiment with the Apprentice scheme on the CIFAR-10 dataset. The CIFAR-10 dataset BID17 consists of 50K training images and 10K testing images in 10 classes. We use various depths of the ResNet topology for this study. Our implementation of ResNet for CIFAR-10 closely follows the configuration in BID14. The network inputs are 32×32 images. The first layer is a 3×3 convolutional layer, followed by a stack of 6n layers with 3×3 convolutions on feature map sizes 32, 16 and 8, with 2n layers for each feature map size. The numbers of filters are 16, 32 and 64 in each set of 2n layers. This is followed by a global average pooling, a 10-way fully connected layer and a softmax layer. Thus, in total there are 6n+2 weight layers. FIG6 shows the impact of lowering precision as the depth of ResNet varies. As the network becomes larger in size, the impact of lowering precision is diminished (relative to the accuracy of the network at that depth when using full precision). For example, with ResNet-110, the full-precision Top-1 error rate is 6.19%. At the same depth, ternarizing the model gives a similar error rate (6.24%). Comparing this with ResNet-20, the gap between the full-precision and ternary model (2-bits weight and 32-bits activations) is 0.8% (7.9% vs. 8.7% Top-1 error). Overall, we find that ternarizing a model closely follows the accuracy of the baseline full-precision model. However, lowering both weights and activations almost always leads to large accuracy degradation. The accuracy of a 2-bits weight and 8-bits activation network is 0.8%-1.6% worse than the full-precision model. Using the Apprentice scheme, this gap is considerably lowered.
FIG6 shows the impact of lowering precision when a low-precision (student) network is paired with a full-precision (teacher) network. For this analysis we use scheme-A, where we jointly train both the teacher and student networks. The mix of ResNet depths used for this study includes 32, 44 and 56. We experimented with a variation of scheme-C where a network is first compressed using the distillation scheme (using a deeper ResNet as the teacher network), followed by lowering the precision and fine-tuning. The fine-tuning is done for 35-40 epochs with a very low learning rate, without the influence of any teacher network (no distillation). For this experiment, the student network starts from a higher-accuracy compressed model compared to scheme-C (since distillation improves the accuracy of the student network at full precision as well). FIG7 shows the results with this experimental setting. For each configuration, we find the error rate to lie in between the corresponding error rates shown in FIG6, i.e. this scheme is better than low-precision training from scratch but not as good as training with the methodology described in scheme-A. On average, we find scheme-A to give 0.7% better accuracy at low-precision configurations compared to the scheme mentioned here, highlighting the benefits of "joint" low-precision training from scratch with distillation (the Apprentice scheme). Many works proposing low-precision knobs advocate for training from scratch or training (for a significant number of epochs) with warm startup; the results from this experiment are in line with the observations in these papers. Some works proposing low-precision networks advocate for making the layers wider (or the model larger) to recover accuracy at low precision. These works propose making the layers wider by 2x or 3x. While these works show the benefits of low precision, making the model larger increases the number of raw computations. Future work could investigate low precision with a smaller layer widening factor (say 1.10x or 1.25x). This would help inference latency while maintaining accuracy at par with baseline full-precision networks. As mentioned in section 5.5, sparsifying a model more than a certain percentage leads to accuracy loss. Investigating hyper-sparse network models without accuracy loss using distillation-based schemes is another interesting avenue of further research.
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
B1ae1lZRb
We show that knowledge transfer techniques can improve the accuracy of low precision networks and set new state-of-the-art accuracy for ternary and 4-bits precision.
Deep neural networks (DNNs) are widely used in many applications. However, their deployment on edge devices has been difficult because they are resource hungry. Binary neural networks (BNNs) help to alleviate the prohibitive resource requirements of DNNs, where both activations and weights are limited to 1-bit. We propose an improved binary training method (BNN+), by introducing a regularization function that encourages training weights around binary values. In addition to this, to enhance model performance we add trainable scaling factors to our regularization functions. Furthermore, we use an improved approximation of the derivative of the sign activation function in the backward computation. These additions are based on linear operations that are easily implementable into the binary training framework. We show experimental results on CIFAR-10, obtaining an accuracy of 86.5% with AlexNet and 91.3% with a VGG network. On ImageNet, our method also outperforms the traditional BNN method and XNOR-Net on AlexNet, by margins of 4% and 2% top-1 accuracy respectively. Deep neural networks (DNNs) have demonstrated success for many supervised learning tasks ranging from voice recognition to object detection BID26 BID11. The focus has been on increasing accuracy, in particular for image tasks, where deep convolutional neural networks (CNNs) are widely used. However, their increasing complexity poses a new challenge, and has become an impediment to widespread deployment in many applications; specifically when trying to deploy such models to resource-constrained and lower-power devices. A typical DNN architecture contains tens to thousands of layers, resulting in millions of parameters. As an example, AlexNet BID16 requires 200MB of memory and VGGNet BID26 requires 500MB of memory. Large model sizes are further exacerbated by their computational cost, requiring GPU implementation to allow real-time inference. Such requirements evidently cannot be accommodated by edge devices as they have limited memory, computation power, and battery. This motivated the community to investigate methods for compressing and reducing the computation cost of DNNs. To make DNNs compatible with the resource constraints of low-power devices, several approaches have been developed, such as network pruning BID17, architecture design BID25, and quantization BID0 BID4. In particular, weight compression using quantization can achieve very large savings in memory, where binary (1-bit) and ternary (2-bit) approaches have been shown to obtain competitive accuracy BID10 BID31 BID29. Using such schemes reduces model sizes by 8x to 32x depending on the bit resolution used for computations. In addition to this, inference speed-ups are obtained by quantizing the activation layers. In this way, both the weights and activations are quantized so that one can replace the expensive dot products and activation function evaluations with bitwise operations. This reduction in bit-width benefits hardware accelerators such as FPGAs and neural network chips. An issue with using low-bit DNNs is the drastic drop in accuracy compared to their full-precision counterparts, and this is made even more severe upon quantizing the activations. This problem is largely due to noise and lack of precision in the training objective of the networks during back-propagation BID19. Although quantizing the weights and activations has been attracting large interest thanks to its computational benefits, closing the gap in accuracy between the full-precision and the quantized version remains a challenge.
Indeed, quantizing weights cause drastic information loss and make neural networks harder to train due to a large number of sign fluctuations in the weights. Therefore, how to control the stability of this training procedure is of high importance. In theory, it is infeasible to back-propagate in a quantized setting as the weights and activations employed are discontinuous and discrete. Instead, heuristics and approximations are proposed to match the forward and backward passes. Often weights at different layers of DNNs follow a certain structure. How to quantize the weights locally, and maintaining a global structure to minimize a common cost function is important BID18.Our contribution consists of three ideas that can be easily implemented in the binary training framework presented by BID10 to improve convergence and generalization accuracy of binary networks. First, the activation function is modified to better approximate the sign function in the backward pass, second we propose two regularization functions that encourage training weights around binary values, and lastly a scaling factor is introduced in both the regularization term as well as network building blocks to mitigate accuracy drop due to hard binarization. Our method is evaluated on CIFAR-10 and ImageNet datasets and compared to other binary methods. We show accuracy gains to traditional binary training. We focus on challenges present in training binary networks. The training procedure emulates binary operations by restricting the weights and activations to single-bit so that computations of neural networks can be implemented using arithmetic logic units (ALU) using XNOR and popcount operations. More specifically, XNOR and popcount instructions are readily available on most CPU and GPU processing units. Custom hardware would have to be implemented to take advantage of operations with higher bits such as 2 to 4 bits. The goal of this binary training is to reduce the model size and gain inference speedups without performance degradation. Primary work done by BID0 (BinaryConnect) trains deep neural networks with binary weights {−1, +1}. They propose to quantize real values using the sign function. The propagated gradient applies update to weights |w| ≤ 1. Once the weights are outside of this region they are no longer updated, this is done by clipping weights between {−1, +1}. In that work, they did not consider binarizing the activation functions. BNN BID10 ) is the first purely binary network quantizing both the weights and activations. They achieve comparable accuracy to their prior work on BinaryConnect, and achieve significantly close performance to full-precision, by using large and deep networks. Although, they performed poorly on large datasets like ImageNet BID24. The ing network presented in their work obtains 32× compression rate and approximately 7× increase in inference speed. To alleviate the accuracy drop of BNN on larger datasets, BID23 proposed XNORNet, where they strike a trade-off between compression and accuracy through the use of scaling factors for both weights and activation functions. In their work, they show performance gains compared to BNN on ImageNet classification. The scaling factors for both the weights and activations are computed dynamically, which slows down training performance. Further, they introduced an additional complexity in implementing the convolution operations on the hardware, slightly reducing compression rate and speed up gains. 
DoReFa-Net BID30 further improves XNOR-Net by approximating the activations with more bits. The proposed rounding mechanism allows for low bit back-propagation as well. Although they perform multi-bit quantization, their model still suffers from large accuracy drop upon quantizing the last layer. Later in ABC-Net, BID29 propose several strategies, adjusting the learning rate for larger datasets. They show BNN achieve similar accuracy as XNOR-Net without the scaling overhead by adding a regularizer term which allows binary networks to generalize better. They also suggest a modified BNN, where they adopted the strategy of increasing the number of filters, to compensate for accuracy loss similar to wide reduced-precision networks BID21. More recently, developed a second-order approximation to the sign activation function for a more accurate backward update. In addition to this, they pre-train the network in which they want to binarize in full precision using the hard tangent hyperbolic (htanh) activation, see FIG0. They use the pre-trained network weights as an initialization for the binary network to obtain state of the art performance. Training a binary neural network faces two major challenges: on weights, and on activation functions. As both weights and activations are binary, the traditional continuous optimization methods such as SGD cannot be directly applied. Instead, a continuous approximation is used for the sign activation during the backward pass. Further, the gradient of the loss with respect to the weights are small. So as training progresses weight sign remains unchanged. These are both addressed in our proposed method. In this section, we present our approach to training 1-bit CNNs in detail. We quickly revisit quantization through binary training as first presented by BID0. In BID10, the weights are quantized by using the sign function which is +1 if w > 0 and −1 otherwise. In the forward pass, the real-valued weights are binarized to w b, and the ing loss is computed using binary weights throughout the network. For hidden units, the sign function non-linearity is used to obtain binary activations. Prior to binarizing, the real weights are stored in a temporary variable w. The variables w are stored because one cannot back-propagate through the sign operation as its gradient is zero everywhere, and hence disturbs learning. To alleviate this problem the authors suggest using a straight through estimator BID7 for the gradient of the sign function. This method is a heuristic way of approximating the gradient of a neuron, DISPLAYFORM0 where L is the loss function and 1 is the indicator function. The gradients in the backward pass are then applied to weights that are within [−1, +1]. The training process is summarized in Figure 1. As weights undergo gradient updates, they are eventually pushed out of the center region and instead make two modes, one at −1 and another at +1. This progression is also shown in Figure 1. DISPLAYFORM1 Figure 1: Binary training, where arrows indicate operands flowing into operation or block. Reproduced from (left). A convolutional layer depicting weight histogram progression during the popular binary training. The initial weight distribution is a standard Gaussian (right). Our first modification is on closing the discrepancy between the forward pass and backward pass. Originally, the sign derivative is approximated using the htanh(x) activation, as in FIG0. 
Instead, we modify the Swish-like activation BID22 BID1 BID6, which has been shown to outperform other activations on various tasks. The modification is performed by taking its derivative and centering it around 0, giving SS_β(x) = 2σ(βx)[1 + βx(1 − σ(βx))] − 1, where σ(z) is the sigmoid function and the scale β > 0 controls how fast the activation function asymptotes to −1 and +1. The β parameter can be learned by the network or be hand-tuned as a hyperparameter. As opposed to the Swish function, which is unbounded on the right side, the modification makes it bounded and a valid approximator of the sign function. As a result, we call this activation SignSwish, and its gradient is d/dx SS_β(x) = β(2 − βx tanh(βx/2)) / (1 + cosh(βx)), which is a closer approximation to the sign derivative compared to the htanh activation. Comparisons are made in FIG0. BID10 noted that the STE fails to learn weights near the borders of −1 and +1. As depicted in FIG0, our proposed SignSwish activation alleviates this, as it remains differentiable near −1 and +1, allowing weights to change signs during training if necessary. Note that the derivative d/dx SS_β(x) is zero at two points, controlled by β. Indeed, it is simple to show that the derivative is zero for x ≈ ±2.4/β. By adjusting the parameter β, it is possible to adjust the location at which the gradients start saturating, in contrast to the STE estimators, where it is fixed. Thus, the larger β is, the closer the approximation is to the derivative of the sign function. In general, a regularization term is added to a model to prevent over-fitting and to obtain robust generalization. The two most commonly used regularization terms are the L1 and L2 norms. If one were to embed these regularization functions in binary training, they would encourage the weights to be near zero, which does not align with the objective of a binary network. Instead, it is important to define a function that encourages the weights around −1 and +1. Further, BID23 present a scale to enhance the performance of binary networks. This scale is computed dynamically during training, using the statistics of the weights. To make the regularization term more general we introduce scaling factors α, resulting in a symmetric regularization function with two minima, one at −α and another at +α. As these scales are introduced in the regularization function and are embedded into the layers of the network, they can be learned using backpropagation. The Manhattan regularization function is defined as R1(w) = |α − |w||, whereas the Euclidean version is defined as R2(w) = (α − |w|)², where α > 0 is the scaling factor. As depicted in Figure 3, in the case of α = 1 the weights are penalized at varying degrees upon moving away from the objective quantization values, in this case {−1, +1}. The proposed regularizing terms are in line with the wisdom of the regularization function R(w) = (1 − w²)1{|w|≤1} as introduced in BID29. A primary difference is the introduction of a trainable scaling factor, and formulating the terms such that the gradients capture appropriate sign updates to the weights. Further, the regularization introduced in BID29 does not penalize weights that are outside of [−1, +1]. One can re-define their function to include a scaling factor as R(w) = (α − w²)1{|w|≤α}. In Figure 3, we depict the different regularization terms to help with intuition. Figure 3: R1(w) (left) and R2(w) (right) regularization functions for α = 0.5 (solid line) and α = 1 (dashed line).
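A small sketch of the pieces just defined may help; it implements the SS_β activation and its derivative as written above, together with the two scaled regularizers, and is an illustration rather than the authors' implementation.

import torch

def signswish(x, beta=5.0):
    # SS_beta(x) = 2*sigmoid(beta*x) * (1 + beta*x*(1 - sigmoid(beta*x))) - 1
    s = torch.sigmoid(beta * x)
    return 2.0 * s * (1.0 + beta * x * (1.0 - s)) - 1.0

def signswish_grad(x, beta=5.0):
    # Derivative used as a surrogate for the sign gradient; it vanishes near x = ±2.4/beta.
    return beta * (2.0 - beta * x * torch.tanh(beta * x / 2.0)) / (1.0 + torch.cosh(beta * x))

def r1(w, alpha):
    # Manhattan regularizer R1: minima at w = -alpha and w = +alpha.
    return torch.abs(alpha - torch.abs(w)).sum()

def r2(w, alpha):
    # Euclidean regularizer R2: same minima, quadratic penalty.
    return ((alpha - torch.abs(w)) ** 2).sum()

During training, the chosen regularizer, summed over layers and weighted by a coefficient λ, would simply be added to the task loss.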
The scaling factor α is trainable, as a the regularization functions can adapt accordingly. Combining both the regularization and activation ideas, we modify the training procedure by replacing the sign backward approximation with that of the derivative of SS β activation. During training, the real weights are no longer clipped as in BNN training, as the network can back-propagate through the SS β activation and update the weights correspondingly. Additional scales are introduced to the network, which multiplies into the weights of the layers. The regularization terms introduced are then added to the total loss function, DISPLAYFORM0 where L(W, b) is the cost function, W and b are the sets of all weights and biases in the network, W l is the set weights at layer l and α l is the corresponding scaling factor. Here, R is the regularization function or. Further, λ controls the effect of the regularization term. To introduce meaningful scales, they are added to the basic blocks composing a typical convolutional neural network. For example, for convolutions, the scale is multiplied into the quantized weights prior to the convolution operation. Similarly, in a linear layer, the scales are multiplied into the quantized weights prior to the dot product operation. This is made more clear in the training algorithm 1.The scale α is a single scalar per layer, or as proposed in BID23 is a scalar for each filter in a convolutional layer. For example, given a CNN block with weight dimensionality (C in, C out, H, W), where C in is the number of input channels, C out is the number of output channels, and H, W, the height and width of the filter respectively, then the scale parameter would be a vector of dimension C out, that factors into each filter. As the scales are learned jointly with the network through backpropagation, it is important to initialize them appropriately. In the case of the Manhattan penalizing term, given a scale factor α and weight filter then the objective is to minimize DISPLAYFORM1 The minimum of the above is obtained when DISPLAYFORM2 Similarly, in the case of the Euclidean penalty the minimum is obtained when α * = mean(|W |) The scales are initialized with the corresponding optimal values after weights have been initialized first. One may notice the similarity of these optimal values with that defined by BID23, whereas in their case the optimal value for the weight filters and activations better matches the R 2 (w) goal. A difference is on how these approximations are computed, in our case they are updated on the backward pass, as opposed to computing the values dynamically. The final ing BNN+ training method is defined in Algorithm 1. In the following section, we present our experimental and important training details. Algorithm 1 BNN+ training. L is the unregularized loss function.λ and R 1 are the regularization terms we introduced. SS β is the SignSwish function we introduced and (SS β) is its derivative. N is the number of layers.• indicates element-wise multiplication. BatchNorm specifies how to batchnormalize the activation and BackBatchNorm how to back-propagate through the normalization. ADAM specifies how to update the parameters when their gradients are known. Require: a minibatch of inputs and targets (x 0, x *), previous weights W, previous weights' scaling factors α, and previous BatchNorm parameters θ. Ensure: updated weights W t+1, updated weights' scaling factors α t+1 and updated BatchNorm parameters θ t+1. {1. 
Forward propagation:} s 0 ← x 0 W 0 {We do not quantize the first layer.} DISPLAYFORM3 {We use our modified straight-through estimator to back-propagate through sign: DISPLAYFORM4 We evaluate our proposed method with the accuracy performance of training using BNN+ scheme versus other proposed binary networks, BID10 ; BID23 ; BID29 . We run our method on CIFAR-10 and ImageNet datasets and show accuracy gains. They are discussed in their respective sections below. The CIFAR-10 data BID15) consists of 50,000 train images and a test set of 10,000. For pre-processing the images are padded by 4 pixels on each side and a random crop is taken. We train both, AlexNet BID16, and VGG BID26 using the ADAM optimizer. The architecture used for VGG is conv → conv → conv → conv → conv → conv → fc → fc FORMULA2 where conv(·) is a convolutional layer, and fc(·) is a fully connected layer. The standard 3 × 3 filters are used in each layer. We also add a batch normalization layer BID12 prior to activation. For AlexNet, the architecture from BID14 is used, and batch normalization layers are added prior to activations. We use a batch size of 256 for training. Many learning rates were experimented with such as 0.1, 0.03, 0.001, etc, and the initial learning rate for AlexNet was set to 10 −3, and 3 × 10 −3 for VGG. The learning rates are correspondingly reduced by a factor 10 every 10 epoch for 50 epochs. We set the regularization parameter λ to 10 −6, and use the regularization term as defined in. In these experiments weights are initialized using BID2 initialization. Further, the scales are introduced for each convolution filter, and are initialized by sorting the absolute values of the weights for each filter and choosing the 75 th percentile value. The are summarized in TAB0. The ILSVRC-2012 dataset consists of ∼ 1.2M training images, and 1000 classes. For pre-processing the dataset we follow the typical augmentation: the images are resized to 256 × 256, then are randomly cropped to 224 × 224 and the data is normalized using the mean and standard deviation statistics of the train inputs; no additional augmentation is done. At inference time, the images are first scaled to 256 × 256, center cropped to 224 × 224 and then normalized. We evaluate the performance of our training method on two architectures AlexNet and Resnet-18. Following previous work, we used batch-normalization before each activation function. Additionally, we keep the first and last layers to be in full precision, as we lose 2-3% accuracy otherwise. This approach is followed by other binary methods that we compare to BID10 BID23 BID29. The are summarized in TAB1. In all the experiments involving R 1 regularization we set the λ to 10 −7 and R 2 regularization to 10 −6. Also, in every network, the scales are introduced per filter in convolutional layers, and per column in fully connected layers. The weights are initialized using a pre-trained model with htan activation function as done in. Then the learning rate for AlexNet is set to 2.33 × 10 − 3 and multiplied by 0.1 at the 12 th, 18 th epoch for a total of 25 epochs trained. For the 18-layer ResNet the learning rate is started from 0.01 and multiplied by 0.1 at 10 th, 20 th, 30 th epoch. We proposed two regularization terms and and an activation term with a trainable parameter β. We run several experiments to better understand the effect of the different modifications to the training method, especially using different regularization and asymptote parameters β. 
The parameter β is trainable and would add one more parameter to learn through back-propagation; however, we fixed β throughout our experiments to explicit values. The results are summarized in TAB1. Through our experiments, we found that adding a regularizing term with heavy penalization degrades the network's ability to converge, as the total loss would then be largely due to the regularizing term and not the target cross-entropy loss. Similarly, the regularizing term was set to small values in BID29. As a result, we set λ to a reasonably small value, 10 −5 to 10 −7, so that the scales move slowly as the weights gradually converge to stable values. Some preliminary experimentation involved gradually increasing the regularization with respect to batch iteration updates done in training, though this approach requires careful tuning and was not pursued further. From TAB1, and referring to networks without regularization, we see the benefit of using the SignSwish approximation versus the STE. This was also noted in prior work, where their second-order approximation provided better results. There is not much difference between using R 1 versus R 2 towards model generalization, although since the loss metric used was the cross-entropy loss, the order of R 1 better matches the loss metric. Lastly, it seems moderate values of β are better than small or large values. Intuitively, this happens because for small values of β the gradient approximation is not good enough, and as β increases the gradients become too large, hence small noise could cause large fluctuations in the sign of the weights. We did not compare our network with that work, as they introduce a shortcut connection that proves to help even the full-precision network. As a final remark, we note that the learning rate is of great importance and properly tuning this is required to achieve convergence. Table 3 summarizes the best results of the ablation study and compares them with BinaryNet, XNOR-Net, and ABC-Net. Table 3: Comparison of top-1 and top-5 accuracies of our method BNN+ with BinaryNet, XNOR-Net and ABC-Net on ImageNet, summarized from TAB1. The results of BNN, XNOR, & ABC-Net are reported from the corresponding papers BID23 BID10 BID29. Results for ABC-Net on AlexNet were not available, and so are not reported. To summarize, we propose three incremental ideas that help binary training: i) adding a regularizer to the objective function of the network, ii) trainable scale factors that are embedded in the regularizing term, and iii) an improved approximation to the derivative of the sign activation function. We obtain competitive results by training AlexNet and ResNet-18 on the ImageNet dataset. For future work, we plan on extending these to efficient models such as CondenseNet BID9, MobileNets BID8, MnasNet BID28 and on object recognition tasks.
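As a companion to the training procedure sketched in Algorithm 1, the snippet below shows one way to express the forward binarization with a per-filter scale and a straight-through-style backward pass that substitutes the SS_β derivative for the sign gradient; it is a simplified sketch under the assumptions above, not the authors' code.

import torch

class BinarizeSTE(torch.autograd.Function):
    # sign() in the forward pass; SS_beta derivative as the surrogate gradient.

    @staticmethod
    def forward(ctx, w, beta):
        ctx.save_for_backward(w)
        ctx.beta = beta
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        beta = ctx.beta
        surrogate = beta * (2.0 - beta * w * torch.tanh(beta * w / 2.0)) / (1.0 + torch.cosh(beta * w))
        return grad_out * surrogate, None  # no gradient for beta in this sketch

def scaled_binary_weights(weight, alpha, beta=5.0):
    # Binarize a (C_out, C_in, H, W) filter bank and apply one learnable scale per output filter.
    wb = BinarizeSTE.apply(weight, beta)
    return wb * alpha.view(-1, 1, 1, 1)  # alpha has shape (C_out,)

The scaled binary weights would then be fed to an ordinary convolution, and the scales α would receive gradients both from the task loss and from the R1/R2 regularization term.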
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SJfHg2A5tQ
The paper presents an improved training mechanism for obtaining binary networks with smaller accuracy drop that helps close the gap with its full precision counterpart
Clustering is a fundamental machine learning method. The quality of its is dependent on the data distribution. For this reason, deep neural networks can be used for learning better representations of the data. In this paper, we propose a systematic taxonomy for clustering with deep learning, in addition to a review of methods from the field. Based on our taxonomy, creating new methods is more straightforward. We also propose a new approach which is built on the taxonomy and surpasses some of the limitations of some previous work. Our experimental evaluation on image datasets shows that the method approaches state-of-the-art clustering quality, and performs better in some cases. Clustering is one of the most fundamental unsupervised machine learning problems. Its main goal is to separate data into clusters of similar data points. Besides having its own applications, it is beneficial for multiple other fundamental tasks. For instance, it can serve for automatic data labeling for supervised learning and as a pre-processing step for data visualization and analysis. However, the performance of clustering algorithms is dependent on the type of the input data, such that different problems and datasets could require different similarity measures and different separation techniques. As a , dimensionality reduction and representation learning have been extensively used alongside clustering, in order to map the input data into a feature space where separation is easier with respect to the problem's context. Using deep neural networks (DNNs), it is possible to learn non-linear mappings allowing to transform the data into more clustering-friendly representations. In the past, dimensionality reduction (or representation learning) and clustering have been treated separately, and sequentially applied on the data BID3 BID22 BID23. However, recent research has shown that jointly optimizing for both problems can achieve decent BID20 BID28 BID29 BID13.One of our main contributions is the formulation of a taxonomy of methods that use deep learning for clustering. Our taxonomy facilitates the overview of existing methods and the creation of new ones by using the best properties of the existing ones in a modular manner. Based on the taxonomy, we propose a new method that combines advantageous properties of some existing methods. We use an autoencoder-based method for learning better representations of the data which are clustering-friendly, with a state-of-the-art training procedure. The training has two phases, the first one being standard autoencoder training with the mean squared error reconstruction loss, and the second one is based on a loss function combining the reconstruction loss and a clustering-specific loss. Moreover, in the second phase, we alternate between optimizing the network model, and updating the clustering assignments. The rest of the paper is organized as follows: the taxonomy of clustering with deep learning and the corresponding building blocks is described in Section 2. In Section 3, several related methods are briefly described and compared based on the taxonomy. Subsequently, in Section 4, a new method is proposed and discussed based on the building blocks of the taxonomy. Results of the proposed method are shown in Section 5, followed by in Section 6. The most successful methods for clustering with deep neural networks all work following the same principle: representation learning using DNNs and using these representations as input for a specific clustering method. 
Every method consists of the following parts, for each of which there are several options to choose from:• Neural network training procedure, consisting of the following:-Main neural network branch and its usage * Architecture of main neural network branch, described in Section 2.1 * Set of deep features used for clustering, described in Section 2.2 -Neural network losses:* Non-clustering loss, described in Section 2.3 * Clustering loss, described in Section 2.4 * Method to combine the two losses, described in Section 2.5 -Cluster updates, described in Section 2.6• After the network training: re-run clustering (optional), described in Section 2.7 In most deep learning methods for clustering, the "main branch" of the neural network (apart from side branches towards non-clustering losses, see Section 2.3) is used to transform the inputs into a latent representation that is used for clustering. The following neural network architectures have previously been used for this purpose:• Multilayer perceptron (MLP): Feedforward network, consisting of several layers of neurons, such that the output of every hidden layer is the input to next one.• Convolutional neural network (CNN): Inspired by biology, more precisely by the organization of the animal visual cortex. Useful for applications to regular-grid data such as images, if locality and shift-equivariance/invariance of feature extraction is desired.• Deep belief network (DBN): Generative graphical model, consisting of several layers of latent variables. It is composed of several shallow networks such as restricted Boltzmann machines, such that the hidden layer of each sub-network serves as the visible layer of the next sub-network. DNNs serve for clustering as mappings to better representations. The features of these representations can be drawn from different layers of the network or even from several ones. It is possible to separate this choice into two categories:• One layer: Refers to the general case where only the output of the last layer of the network is used. This approach benefits from the low dimensionality of the representation.• Several layers: Refers to the case where the representation is a combination of the outputs of several layers. Based on that, the representation is richer and allows the embedded space to represent more complex semantic representations, which might enhance the separation process and help in the similarity computation BID19. The non-clustering loss is independent of the clustering algorithm and usually enforces a desired constraint on the learned model. The following are possible options for non-clustering loss functions:• No non-clustering loss: No additional non-clustering loss functions are used. In such cases, the network model is only constrained by the clustering loss requirements. For most clustering losses, no non-clustering loss can have a danger of worse representations/, or theoretically even collapsing clusters BID29, but the latter rarely occurs in practice.• Autoencoder reconstruction loss: The autoencoder consists of two parts: an encoder and a decoder. The encoder maps its input x to a representation z in a latent space Z. During training, the decoder tries to reconstruct x from z, making sure that useful information has not been lost by the encoding phase. In the context of clustering methods, once the training is done the decoder part is no longer used, and the encoder is left for mapping its input to the latent space Z. 
By applying this procedure, autoencoders can successfully learn useful representations in cases where the output's dimensionality is different from the input's or when random noise is injected into the input BID25. Additionally, they can also be used for dimensionality reduction goals BID6. Generally, the reconstruction loss is a distance measure d_AE(x_i, f(x_i)) between the input x_i to the autoencoder and the corresponding reconstruction f(x_i). One particular formulation of it uses the mean squared error of the two variables: L = Σ_i ||x_i − f(x_i)||², where x_i is the input and f(x_i) is the autoencoder reconstruction. This loss function guarantees that the learned representation preserves important information from the initial one, which is why reconstruction is possible. • Other tasks: Additional information about training samples that is available in the form of targets, even if not perfectly suitable to dictate clustering, can be used in a (multi-task) non-clustering loss to encourage meaningful feature extraction. The second type of functions is specific to the clustering method and the clustering-friendliness of the learned representations, therefore such functions are called clustering loss functions. The following are options for clustering loss functions: • No clustering loss: Even if a neural network has only non-clustering losses (Section 2.3), the features it extracts can be used for clustering after training (Sections 2.6-2.7). The neural network serves in this case for changing the representation of the input, for instance changing its dimensionality. Such a transformation could be beneficial for the clustering sometimes, but using a clustering loss usually yields better results BID28 BID29. • k-Means loss: Assures that the new representation is k-means-friendly BID29, i.e. data points are evenly distributed around the cluster centers. In order to obtain such a distribution, a neural network is trained with the following loss function: L = Σ_i Σ_k s_ik ||z_i − µ_k||², where z_i is an embedded data point, µ_k is a cluster center and s_ik is a boolean variable for assigning z_i to µ_k. Minimizing this loss with respect to the network parameters assures that the distance between each data point and its assigned cluster center is small. Having that, applying k-means would result in better clustering quality. • Cluster assignment hardening: Requires using soft assignments of data points to clusters. For instance, Student's t-distribution can be used as the kernel to measure the similarity between points and centroids BID24. This distribution Q is formulated as follows: q_ij = (1 + ||z_i − µ_j||²/ν)^(−(ν+1)/2) / Σ_j' (1 + ||z_i − µ_j'||²/ν)^(−(ν+1)/2), where z_i is an embedded data point, µ_j is the j-th cluster centroid, and ν is a constant, e.g. ν = 1. These normalized similarities between points and centroids can be considered as soft cluster assignments. The cluster assignment hardening loss then enforces making these soft assignment probabilities stricter. It does so by letting the cluster assignment probability distribution Q approach an auxiliary (target) distribution P which guarantees this constraint. BID28 propose the following auxiliary distribution: p_ij = (q_ij² / Σ_i q_ij) / Σ_j' (q_ij'² / Σ_i q_ij'). By squaring the original distribution and then normalizing it, the auxiliary distribution P forces assignments to have stricter probabilities (closer to 0 and 1). It aims to improve cluster purity, put emphasis on data points assigned with high confidence, and to prevent large clusters from distorting the hidden feature space BID28.
One way to formulate the divergence between the two probability distributions is using the Kullback-Leibler divergence BID11. It is formulated as L = KL(P||Q) = Σ_i Σ_j p_ij log(p_ij / q_ij), which is minimized for the aforementioned Q and P via neural network training. • Balanced assignments loss: This loss has been used alongside other losses such as the previous one BID4. Its goal is to enforce having balanced cluster assignments. It is formulated as follows: L_ba = KL(G||U), where U is the uniform distribution and G is the probability distribution of assigning a point to each cluster: g_k = P(y = k) = (1/N) Σ_i q_ik. By minimizing equation 6, the probability of assigning each data point to a certain cluster becomes uniform across all possible clusters BID4. It is important to note that this property (uniform assignment) is not always desired. Thus, in case any prior is known, it is still possible to replace the uniform distribution by the known prior one. • Locality-preserving loss: This loss aims to preserve the locality of the clusters by pushing nearby data points together BID8. Mathematically, it is formulated as follows: L = Σ_i Σ_{j ∈ N_k(i)} s(x_i, x_j) ||z_i − z_j||², where N_k(i) is the set of k nearest neighbors of the data point x_i, and s(x_i, x_j) is a similarity measure between the points x_i and x_j. • Group sparsity loss: It is inspired by spectral clustering, where a block-diagonal similarity matrix is exploited for representation learning BID17. Group sparsity is itself an effective feature selection method. In BID8, the hidden units were divided into G groups, where G is the assumed number of clusters. When given a data point x_i, the obtained representation has the form {φ_g(x_i)} for g = 1, …, G. Thus the loss can be defined as follows: L = Σ_i Σ_g λ_g ||φ_g(x_i)||, where λ_g are the weights for the sparsity groups, defined as λ_g = λ √n_g, where n_g is the group size and λ is a constant. • Cluster classification loss: Cluster assignments obtained during cluster updates (Section 2.6) can be used as "mock" class labels for a classification loss in an additional network branch, in order to encourage meaningful feature extraction in all network layers BID7. • Agglomerative clustering loss: Agglomerative clustering merges two clusters with maximum affinity (or similarity) in each step until some stopping criterion is fulfilled. A neural network loss inspired by agglomerative clustering BID30 is computed in several steps. First, the cluster update step (Section 2.6) merges several pairs of clusters by selecting the pairs with the best affinity (some predefined measure of similarity between clusters). Then network training retrospectively even further optimizes the affinity of the already merged clusters (it can do so because the affinity is measured in the latent space to which the network maps). After the next cluster update step, the network training switches to retrospectively optimizing the affinity of the newest set of newly merged cluster pairs. In this way, cluster merging and retrospective latent space adjustments go hand in hand. Optimizing the network parameters with this loss function would result in a clustering space more suitable for (agglomerative) clustering. In the case where a clustering and a non-clustering loss function are used, they are combined as follows: L(θ) = α L_c(θ) + (1 − α) L_n(θ), where L_c(θ) is the clustering loss, L_n(θ) is the non-clustering loss, and α ∈ [0; 1] is a constant specifying the weighting between both functions. It is an additional hyperparameter for the network training. It can also be changed during training following some schedule.
The following are methods to assign and schedule the values of α:• Pre-training, fine-tuning: First, α is set to 0, i.e. the network is trained using the nonclustering loss only. Subsequently, α is set to 1, i.e. the non-clustering network branches (e.g. autoencoder's decoder) are removed and the clustering loss is used to train (fine-tune) the obtained network. The constraint forced by the reconstruction loss could be lost after training the network long enough for clustering only. In some cases, losing such constraints may lead to worse (see TAB0).• Joint training: 0 < α < 1, for example α = 0.5, i.e. the network training is affected by both loss functions.• Variable schedule: α is varied during the training dependent on a chosen schedule. For instance, start with a low value for α and gradually increase it in every phase of the training. In phases with α = 1, no non-clustering loss is imposed, with potential disadvantages (see No nonclustering loss in Section 2.3). Similarly, in phases with α = 0, no clustering loss is imposed, with potential disadvantages (see No clustering loss in Section 2.4). Clustering methods can be broadly categorized into hierarchical and partitional (centroid-based) approaches BID10. Hierarchical clustering combines methods which aim to build a hierarchy of clusters and data points. On the other hand, partitional (centroid-based) clustering groups methods which create cluster centers and use metric relations to assign each of the data points into the cluster with the most similar center. In the context of deep learning for clustering, the two most dominant methods of each of these categories have been used. Agglomerative clustering, which is a hierarchical clustering method, has been used with deep learning BID30. The algorithm has been briefly discussed in Section 2.4. In addition, k-means, which falls into the category of centroid-based clustering, was extensively used BID28 BID29 BID13 BID7.During the network training, cluster assignments and centers (if a centroid-based method is used) are updated. Updating cluster assignments can have one of the two following forms:• Jointly updated with the network model: Cluster assignments are formulated as probabilities, therefore have continuous values between 0 and 1. In this case, they can be included as parameters of the network and optimized via back-propagation.• Alternatingly updated with the network model: Clustering assignments are strict and updated in a different step than the one where the network model is updated. In this case, several scenarios are possible, dependent on two main factors:-Number of iterations: Number of iterations of the chosen clustering algorithm, that are executed at every cluster update step. For instance, in BID28, at each cluster update step, the algorithm runs until a fixed percentage of points change assignments between two consecutive iterations. -Frequency of updates: How often are cluster updates started. For instance in BID30, for every P network model update steps, one cluster updates step happens. Once the training converges, the network should have learned a mapping from the input space to a more clustering-friendly space with respect to the dataset it was trained on. In other words, if the training was performed on digit images of N × N pixel size, the network should be able to map a set of N × N images to a space where clustering is easier. With such a mapping, it makes sense to run a clustering algorithm on a desired dataset. 
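To make the combination of losses and the update machinery described above concrete, here is a compact sketch of one joint training step that mixes the autoencoder reconstruction loss with the cluster assignment hardening loss; the weighting alpha, the module names and the choice of optimizer are illustrative assumptions rather than a specific published implementation.

import torch
import torch.nn.functional as F

def soft_assignments(z, centroids, nu=1.0):
    # Student's t similarities q_ij between embedded points z and cluster centroids.
    d2 = torch.cdist(z, centroids) ** 2
    q = (1.0 + d2 / nu) ** (-(nu + 1.0) / 2.0)
    return q / q.sum(dim=1, keepdim=True)

def target_distribution(q):
    # Sharpened auxiliary distribution p_ij: square the assignments, then renormalize.
    p = q ** 2 / q.sum(dim=0, keepdim=True)
    return p / p.sum(dim=1, keepdim=True)

def joint_step(encoder, decoder, centroids, x, optimizer, alpha=0.5):
    z = encoder(x)
    x_rec = decoder(z)
    q = soft_assignments(z, centroids)
    p = target_distribution(q).detach()           # treated as a fixed target
    clustering_loss = F.kl_div(q.log(), p, reduction="batchmean")
    reconstruction_loss = F.mse_loss(x_rec, x)
    loss = alpha * clustering_loss + (1.0 - alpha) * reconstruction_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Here the centroids would typically be registered as trainable parameters so that they are jointly updated with the network, or re-estimated in a separate alternating step as discussed above.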
However, the majority of the presented methods performs clustering during the training and obtains its clustering results from the last training iteration. Therefore, the following are reasons for re-running the clustering after the training is done: • Clustering a similar dataset: The general and the most trivial case is to reuse the learned feature representation mapping on another dataset which is similar to the one that has been used but has different data. • Obtaining better results: Under certain circumstances, it is possible that the results of clustering after the training are better than the ones obtained during the learning procedure. For instance, in BID30, such a behavior is reported. One possible reason for this to happen is that the cluster update step during the training doesn't go all the way till the end (see Number of iterations in Section 2.6), meaning that older steps used older representations that might be worse. Therefore, some of the cluster merging steps (agglomerative clustering) were performed on a less optimal feature representation, which is why clustering after the training performed better. Clustering has been extensively studied and researched. Its application with deep neural networks has gained additional interest in the last few years, due to the success of supervised deep learning. However, in most cases, clustering is handled in an unsupervised fashion, making its application with deep learning less trivial and requiring more modeling effort and theoretical analysis. Therefore, several approaches have been presented over the last years, trying to use the representational power of DNNs for preprocessing clustering inputs. Each of these approaches used different network architectures, structures, loss functions and training methods in order to achieve their results and to improve the clustering quality. The following are some of the interesting methods that have been previously introduced. DEC is one of the most promising approaches in the field. It is based on autoencoders as network architecture and initialization method, and uses k-means for clustering BID28. As for training the neural network, the method first pretrains the model using a standard input reconstruction loss function. Secondly, the network's model is fine-tuned using the cluster assignment hardening loss and the clustering centers are updated. The clusters are iteratively refined by learning from their high-confidence assignments with the help of the auxiliary target distribution. As a consequence, the method showed decent results and has later been used as a reference to compare new methods' performances. DCN is another autoencoder-based method that uses k-means for clustering BID29. Similar to DEC, in the first phase, the network is pretrained using the autoencoder reconstruction loss. However, the second phase is different. In contrast to DEC, the network is jointly trained using a mathematical combination of the autoencoder reconstruction loss and the k-means clustering loss function. Thus, due to the fact that strict cluster assignments were used during the training (instead of probabilities such as in DEC), the method required an alternation process between the network training and the cluster updates. The method performed well and even led to better results than DEC on the MNIST dataset. With respect to the presented taxonomy, the approach in DBC BID13 is almost identical to DEC except for using convolutional autoencoders.
Namely, it also uses k-means for clustering and the same training method: pretraining with autoencoder reconstruction loss and fine tuning using the cluster assignment hardening loss. Additionally, the same advantages and disadvantages are shared by both methods. Thus, due to the fact that DBC uses convolutional layers, it outperformed DEC's clustering quality on image datasets which was obviously expected. JULE uses a convolutional neural network for representation learning. For clustering, a hierarchical approach is used, specifically, the agglomerative clustering method is employed. Concerning the training, the method only uses a clustering loss, specifically, the agglomerative loss. Additionally, the method has a period hyper-parameter, by which the training behavior is altered. Namely, this hyper-parameter specifies the number of model updates that should be applied before the clustering algorithm executes a clustering iteration, for instance, ten learning sessions followed by fusing two clusters into one. In experiments, the method showed great , for example on MNIST, it performed better than all the other methods. However, the disadvantages of the lack of any nonclustering loss (see No non-clustering loss in Section 2.3) may be particularly pronounced, at least in theory BID29. CCNN uses a clustering CNN BID7 ) to achieve joint clustering and representation learning. One of the internal layers of the CCNN forms the feature space. At the same time, the CCNN's softmax layer predicts the cluster labels. Initially, features from k random images from the dataset are used to initialize the cluster centers. k-Means is performed on the features extracted from the input dataset to get corresponding cluster labels. Based on the assigned labels, and the labels predicted by the softmax layer, the CCNN parameters can be updated using the clustering classification loss discussed in section 2.4. The extracted features of the minibatch are then further used to update the corresponding cluster centroids. Besides the described methods, multiple attempts have been made in the field of clustering with deep learning. An interesting work is by BID19 where a standard autoencoder was used without additional clustering loss functions. However, the outputs of several layers of the network beside the last one are used as the final feature representation. This layer concatenation led to superior even when compared with methods which included a clustering-specific loss function. Moreover, in BID8, joint training was performed with a combination of an autoencoder reconstruction loss, a locality-preserving loss, and a group sparsity loss. Another work is by BID4, it is very similar to DEC, except for adding an additional term to the clustering loss which is a balanced assignments loss. By this addition, they alleviate the danger of obtaining degenerate solutions, but introduce again the need for alternating between network training and clustering updates. In addition to the mentioned methods, multiple others exist BID18 BID5 BID31 BID1 BID15 BID27 BID2.Rather than directly using a neural network to extract high-level features of samples, infinite ensemble clustering BID14 uses neural networks to generate infinite ensemble partitions and to fuse them into a consensus partition to obtain the final clustering. After identifying a taxonomy of clustering with deep learning (Section 2) and comparing methods in the field based on it TAB0, creating new improved methods became more straightforward. 
For instance, by looking at TAB0, one could notice that some combinations of method properties could lead to new methods. In some cases, such combinations could also surpass the limitations of the previous approaches and lead to better results. This procedure was followed during this work. Namely, we picked an interesting combination of taxonomy features and came up with a new method (FIG0). Our method uses a convolutional architecture, since our target clustering datasets are image datasets. Additionally, the network training has two phases. The first one is pretraining with an autoencoder reconstruction loss. In the second phase, the autoencoder loss and the cluster assignment hardening are jointly optimized. This second phase is different from DEC and DBC, which only use the cluster assignment hardening loss at this level. Omitting the reconstruction loss during one phase of the network training could lead to worse representations and solutions (see No non-clustering loss in Section 2.3). Therefore, combining the reconstruction loss with the cluster assignment hardening loss makes a lot more sense. This phase is also different from DCN, which has the joint training property, but uses the k-means loss. The k-means loss forces the training to alternate between joint training and clustering updates due to the hard cluster assignments. Using the cluster assignment hardening loss, this alternation procedure is no longer needed in our approach since this loss uses soft assignments which can be jointly updated with the network updates. Once both training phases are done, the network should be able to map its input into a more clustering-friendly space. Based on this assumption, we use the output of the network as the input to the k-means method, which produces the final clustering results. In this section we evaluate our model on real-world data and compare the results against the methods previously discussed in section 3. Validation Metrics: For evaluation, we use the clustering accuracy (ACC) and normalized mutual information (NMI) metrics BID21 BID26 BID0. These metrics lie in the range [0, 1], with 1 being the perfect clustering, and 0 being the worst. Experimental Setup: Training the network involved trying out several architectures and network sizes. In addition, it required tuning the learning hyper-parameters, such as the learning rate, initialization parameters, mini-batch size and others. In particular, we use a learning rate of 0.01 with a momentum of 0.9, in addition to batch normalization BID9 and L2 regularization. The presented results are the best ones obtained during the experimentation phase. Datasets: The experiments were performed on several publicly available datasets: • MNIST: Consists of 70000 images of hand-written digits of 28 × 28 pixel size. The digits are centered and size-normalized. • COIL20: Contains 1440 gray-scale images of size 32 × 32, showing 20 objects (72 images per object). The images of each object were taken 5 degrees apart BID16. Performance: TAB0 shows the clustering performance in terms of accuracy and NMI for various clustering DNN approaches. The results for all the methods are borrowed from their respective publications. From the table it can be seen that the proposed algorithm performs comparably to, if not better than, many state-of-the-art approaches. Figures 2 and 3 show the clustering spaces at different stages of training the proposed network, with true cluster labels shown using different colors. The clustering spaces are 120-dimensional and 320-dimensional for MNIST and COIL20, respectively.
The visualizations show that the proposed method results in much more clustering-friendly spaces than both the original image space and the autoencoder space. In this work, we present a taxonomy for clustering with deep learning, identifying the general framework and discussing the different building blocks and their possible options. In addition, a summary of methods in the field and their specific use of the taxonomy is presented alongside a general comparison of many of these methods. Using this taxonomy and the summary of previous methods, generating new methods becomes clearer and easier and can be done by creating new combinations of the taxonomy's building blocks. Moreover, we present a new method based on one such combination. Our method overcomes the limitations of several previous ones, approaches state-of-the-art performance, and performs better in some cases.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
B1eT9VMgOX
Unifying framework to perform clustering using deep neural networks
Generative models often use human evaluations to determine and justify progress. Unfortunately, existing human evaluation methods are ad-hoc: there is currently no standardized, validated evaluation that: measures perceptual fidelity, is reliable, separates models into clear rank order, and ensures high-quality measurement without intractable cost. In response, we construct Human-eYe Perceptual Evaluation (HYPE), a human metric that is grounded in psychophysics research in perception, reliable across different sets of randomly sampled outputs from a model, in separable model performances, and efficient in cost and time. We introduce two methods. The first, HYPE-Time, measures visual perception under adaptive time constraints to determine the minimum length of time (e.g., 250ms) that model output such as a generated face needs to be visible for people to distinguish it as real or fake. The second, HYPE-Infinity, measures human error rate on fake and real images with no time constraints, maintaining stability and drastically reducing time and cost. We test HYPE across four state-of-the-art generative adversarial networks (GANs) on unconditional image generation using two datasets, the popular CelebA and the newer higher-resolution FFHQ, and two sampling techniques of model outputs. By simulating HYPE's evaluation multiple times, we demonstrate consistent ranking of different models, identifying StyleGAN with truncation trick sampling (27.6% HYPE-Infinity deception rate, with roughly one quarter of images being misclassified by humans) as superior to StyleGAN without truncation (19.0%) on FFHQ. Historically, likelihood-based estimation techniques served as the de-facto evaluation metric for generative models BID18 BID5. But recently, with the application of generative models to complex tasks such as image and text generation BID14 BID34, likelihood or density estimation grew no longer tractable BID46. Moreover, for high-dimensional problems, even likelihood-based evaluation has been called into question BID46. Consequently, most generative tasks today resort to analyzing model outputs BID41 BID43 BID11 BID21 BID7 BID37. These output evaluation metrics consist of either automatic algorithms that do not reach the ideals of likelihood-based estimation, or ad-hoc human-derived methods that are unreliable and inconsistent BID41 BID11.Consider the well-examined and popular computer vision task of realistic face generation BID14. Automatic algorithms used for this task include Inception Score (IS) BID43 and Fréchet Inception Distance (FID) BID17. Both have been discredited for evaluation on non-ImageNet datasets such as faces BID2 BID40 BID6 BID38. They are also much more sensitive to visual corruptions such as salt and pepper noise than to semantic distortions such as swirled images BID17. So, while automatic metrics are consistent and standardized, they cannot fully capture the semantic side of perceptual fidelity BID6.Realizing the constraints of the available automatic metrics, many generative modeling challenges resort to summative assessments that are completely human BID41 BID43 BID11. These human measures are ad-hoc, each executed in idiosyncrasy without proof of reliability or grounding to theory, and high variance in their estimates BID43 BID11 BID33. These characteristics combine to a lack of reliability, and downstream, a lack of clear separability between models. 
Theoretically, given sufficiently large sample sizes of human evaluators and model outputs, the law of large numbers would smooth out the variance and reach eventual convergence; but this would occur at a high cost and a long delay. In this paper, we present HYPE (HUMAN EYE PERCEPTUAL EVALUATION) that addresses these criteria in turn. It: measures the perceptual fidelity of generative model outputs via a grounded method inspired by psychophysics methods in perceptual psychology, is a reliable and consistent estimator, is statistically separable to enable a comparative ranking, and ensures a cost and time efficient method through modern crowdsourcing techniques such as training and aggregation. We present two methods of evaluation. The first, called HYPE time, is drawn directly from psychophysics literature BID22 ) and displays images using adaptive time constraints to determine the time-limited perceptual threshold a person needs to distinguish real from fake BID9. The HYPE time score is understood as the minimum time, in milliseconds, that a person needs to see the model's output before they can distinguish it as real or fake. Small HYPE time scores indicate that model outputs can be identified even at a glance; large scores suggest that people need to dedicate substantial time and attention. The second method, called HYPE ∞, is derived from the first to make it simpler, faster, and cheaper while maintaining reliability. It measures human deception from fake images with no time constraints. The HYPE ∞ score is interpretable as the rate at which people mistake fake images and real images, given unlimited time to make their decisions. We demonstrate HYPE's performance on unconditional generation of human faces using generative adversarial networks (GANs) BID14. We evaluate four state-of-the-art GANs: WGAN-GP BID16, BEGAN BID4, ProGAN BID20, and the most recent StyleGAN BID21. First, we track progress across the years on the popular CelebA dataset BID28. We derive a ranking based on perception (HYPE time, in milliseconds) and error rate (HYPE ∞, as a percentage) as follows: StyleGAN (439.4ms, 50.7%), ProGAN (363.7ms, 40.3%), BEGAN (111.1ms, 10.0%), WGAN-GP (100.0ms, 3.8%). A score of 500ms on HYPE time indicates that outputs from the model become indistinguishable from real, when shown for 500ms or less, but any more would start to reveal notable differences. A score of 50% on HYPE ∞ represents indistinguishable from real, conditioned on the real training set, while a score above 50% through 100% represents hyper-realism in which generated images appear more real than real ones when drawn from a mixed pool of both. Next, we test StyleGAN trained on the newer FFHQ dataset BID21, comparing between outputs generated when sampled with and without the truncation trick, a technique used to prune low-fidelity generated images BID7 BID21. We find that outputs generated with the truncation trick (363.2ms, 27.6%) significantly outperforms those without it (240.7ms, 19.0%), which runs counter to scores reported by FID.HYPE indicates that GANs have clear, measurable perceptual differences between them. HYPE produces identical rankings between HYPE time and HYPE ∞. We also find that even the best eval- Images on the right exhibit the highest HYPE scores, the highest human perceptual fidelity. uated model, StyleGAN trained on FFHQ and sampled with the truncation trick, only performs at 27.6% HYPE ∞, suggesting substantial opportunity for improvement. 
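To make the interpretation of HYPE ∞ concrete, the following sketch computes a deception rate from per-image judgments and averages it over evaluators. The data layout and function names are illustrative assumptions, not the deployed pipeline.

```python
# Illustrative HYPE-Infinity score: percentage of images an evaluator labels incorrectly,
# averaged across evaluators. 50 corresponds to chance; scores above 50 suggest hyper-realism.
def deception_rate(judgments):
    # judgments: list of (is_fake, judged_fake) boolean pairs, one per image
    errors = sum(is_fake != judged_fake for is_fake, judged_fake in judgments)
    return 100.0 * errors / len(judgments)

def hype_infinity(per_evaluator_judgments):
    rates = [deception_rate(j) for j in per_evaluator_judgments]
    return sum(rates) / len(rates)
```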
Finally, we show that we can reliably reproduce these with 95% confidence intervals using 30 human evaluators at $60 in a task that takes 10 minutes. While important measures, we do not focus on diversity, overfitting, entanglement, training stability, and computational and sample efficiency of the model BID6 BID29 and instead aim to construct the gold standard for human perceptual fidelity. We deploy HYPE as a rapid solution for researchers to measure their generative models, requiring just a single click to produce reliable scores and measure progress. We deploy HYPE at https://hype.stanford.edu, where researchers can upload a model and retrieve a HYPE score in 10 minutes for $60. Future work would extend HYPE to adapt to other generative tasks such as text generation or abstractive summarization. Model creators can choose to perform two different evaluations and receive two different scores: the HYPE time score, which gathers time-limited perceptual thresholds to measure the psychometric function and report the minimum time people need to make accurate classifications, and the HYPE ∞ score, a simplified approach which assesses people's error rate under no time constraint. HYPE displays a series of images one by one to crowdsourced evaluators on Amazon Mechanical Turk and asks the evaluators to assess whether each image is real or fake. Half of the images are drawn from the model's training set (e.g., FFHQ or CelebA), which constitute the real images. The other half are drawn from the model's output. We use modern crowdsourcing training and quality control techniques to ensure high quality labels BID31. Our first method, HYPE time, measures time-limited perceptual thresholds. It is rooted in psychophysics literature, a field devoted to the study of how humans perceive stimuli, to evaluate human time thresholds upon perceiving an image. Our evaluation protocol follows the procedure known as the adaptive staircase method (see FIG1). An image is flashed for a limited length of time, after which the evaluator is asked to judge whether it is real or fake. If the evaluator consistently answers correctly, the staircase descends and flashes the next image with less time. If the evaluator is incorrect, the staircase ascends and provides more time. This process requires sufficient iterations to converge on the minimum time needed for each evaluator to sustain correct guesses in a sample-efficient manner BID9, producing what is known as the psychometric function BID51, the relationship of timed stimulus exposure to accuracy. For example, for an easily distinguishable set of generated images, a human evaluator would immediately drop to the lowest millisecond exposure. However, for a harder set, it takes longer to converge and the person would remain at a longer exposure level in order to complete the task accurately. The modal time value is the evaluator's perceptual threshold: the shortest exposure time at which they can maintain effective performance BID9 BID15 ). HYPE time displays three blocks of staircases for each evaluator. An image evaluation begins with a 3-2-1 countdown clock, each number displaying for 500 ms. The sampled image is then displayed for the current exposure time. Immediately after each image, four perceptual mask images are rapidly displayed for 30ms each. These noise masks are distorted to prevent visual afterimages and further sensory processing on the image afterwards BID15 ). We generate masks from the test images, using an existing texture-synthesis algorithm BID35. 
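The staircase logic just described can be sketched as a short simulation. This is illustrative rather than the deployed task: the starting exposure, step sizes, and exposure range used here follow the values reported in the next paragraph, and the 3-up/1-down bookkeeping is one straightforward reading of that rule.

```python
# Illustrative 3-up/1-down staircase: three consecutive correct answers shorten the exposure
# (harder), a single incorrect answer lengthens it (easier), clamped to [100ms, 1000ms].
import statistics

def run_block(respond, images, start_ms=500, step_down=30, step_up=10, lo=100, hi=1000):
    exposure, streak, history = start_ms, 0, []
    for image in images:
        history.append(exposure)
        if respond(image, exposure):      # True if the evaluator classified the image correctly
            streak += 1
            if streak == 3:
                exposure = max(lo, exposure - step_down)
                streak = 0
        else:
            exposure = min(hi, exposure + step_up)
            streak = 0
    return statistics.mode(history)       # modal exposure, the evaluator's perceptual threshold
```

Averaging the modal exposure over an evaluator's blocks then gives that evaluator's HYPE time contribution.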
Upon each submission, HYPE time reveals to the evaluator whether they were correct. Image exposure times fall in the range [100ms, 1000ms], which we derive from the perception literature BID13. All blocks begin at 500ms and last for 150 images (50% generated, 50% real), values empirically tuned from prior work BID9 BID10. Exposure times are raised at 10ms increments and reduced at 30ms decrements, following the 3-up/1-down adaptive staircase approach. This 3-up/1-down approach theoretically leads to a 75% accuracy threshold that approximates the human perceptual threshold BID27 BID15 BID9.Every evaluator completes multiple staircases, called blocks, on different sets of images. As a , we observe multiple measures for the model. We employ three blocks, to balance quality estimates against evaluators' fatigue (BID25 BID42 . We average the modal exposure times across blocks to calculate a final value for each evaluator. Higher scores indicate a better model, whose outputs take longer time exposures to discern from real. Building on the previous method, we introduce HYPE ∞ : a simpler, faster, and cheaper method after ablating HYPE time to optimize for speed, cost, and ease of interpretation. HYPE ∞ shifts from a measure of perceptual time to a measure of human deception rate, given infinite evaluation time. The HYPE ∞ score gauges total error on the task, enabling the measure to capture errors on both fake and real images, and effects of hyperrealistic generation when fake images look even more realistic than real images. HYPE ∞ requires fewer images than HYPE time to find a stable value, at a 6x reduction in time and cost (10 minutes per evaluator instead of 60 minutes, at the same rate of $12 per hour). Higher scores are better, like HYPE time: a HYPE ∞ value of 10% indicates that only 10% of images deceive people, whereas 50% indicates that people are mistaking real and fake images at chance, rendering fake images indistinguishable from real. Scores above 50% suggest hyperrealistic images, as evaluators mistake images at a rate greater than chance, on average mistaking more fake images to be real than real ones and vice versa. HYPE ∞ shows each evaluator a total of 100 images: 50 real and 50 fake. We calculate the proportion of images that were judged incorrectly, and aggregate the judgments over the n evaluators on k images to produce the final score for a given model. To ensure that our reported scores are consistent and reliable, we need to sample sufficient model outputs, select suitable real images for comparison, and hire, qualify, and appropriately pay enough evaluators. To ensure a wide coverage of images, we randomly select the fake and real images provided to workers from a pool of 5000 images (see Sampling sufficient model outputs, below).Comparing between single evaluators can be problematic. To ensure HYPE is reliable, we must use a sufficiently large number of evaluators, n, which can be treated as a hyperparameter. To determine a suitable number, we use our experimental (further discussed in the Results section) to compute bootstrapped 95% confidence intervals (CI) across various values of n evaluators. To obtain a high-quality pool of evaluators, each is required to pass a qualification task. Such a pre-task filtering approach, sometimes referred to as a person-oriented strategy, is known to outperform process-oriented strategies that perform post-task data filtering or processing BID31. Our qualification task displays 100 images (50 real and 50 fake) with no time limits. 
Evaluators pass if they correctly classify 65% of both real and fake images. This threshold should be treated as a hyperparameter and may change depending upon the GANs used in the tutorial and the desired discernment ability of the chosen evaluators. We choose 65% based on the cumulative binomial probability of 65 binary choice answers out of 100 total answers: there is only a one in one-thousand chance that an evaluator will qualify by random guessing. Unlike in the staircase task itself, fake qualification images are drawn equally from multiple different GANs. This is to ensure an equitable qualification across all GANs, as to avoid a qualification that is biased towards evaluators who are particularly good at detecting one type of GAN. The qualification is designed to be taken occasionally, such that a pool of evaluators can assess new models on demand. Payment. Evaluators are paid a base rate of $1 for working on the qualification task. To incentivize evaluators to remained engaged throughout the task, all further pay after the qualification comes from a bonus of $0.02 per correctly labeled image. This pay rate typically in a wage of approximately $12 per hour, which is above a minimum wage in our local state. Sampling sufficient model outputs. The selection of K images to evaluate from a particular model is a critical component of a fair and useful evaluation. We must sample a large enough number of images that fully capture a model's generative diversity, yet balance that against tractable costs in the evaluation. We follow existing work on evaluating generative output by sampling K = 5000 generated images from each model BID43 BID32 BID49 and K = 5000 real images from the training set. From these samples, we randomly select images to give to each evaluator. Datasets. We evaluate on two datasets of human faces:1. CelebA-64 BID28 is popular dataset for unconditional image generation, used since 2015. CelebA-64 includes 202,599 images of human faces, which we align and crop to be 64 × 64 pixel images using a standard mechanism. We train all models without using attributes.2. FFHQ-1024 BID21 ) is a newer dataset released in 2018 with StyleGAN and includes 70,000 images of size 1024 × 1024 pixels. Architectures. We evaluate on four state-of-the-art models trained on CelebA-64: StyleGAN , ProGAN BID20, BEGAN BID4, and WGAN-GP BID16. We also evaluate on two types of sampling from StyleGAN trained on FFHQ-1024: with and without the truncation trick, which we denote StyleGAN trunc and StyleGAN no-trunc respectively. For parity on our best models across datasets, StyleGAN trained on CelebA-64 is sampled with the truncation trick. We train StyleGAN, ProGAN, BEGAN, and WGAN-GP on CelebA-64 using 8 Tesla V100GPUs for approximately 5 days. We use the official released pretrained StyleGAN model on FFHQ-1024 BID21.We sample noise vectors from the d-dimensional spherical Gaussian noise prior z ∈ R d ∼ N (0, I) during training and test times. We specifically opted to use the same standard noise prior for comparison, yet are aware of other priors that optimize for FID and IS scores BID7. We select training hyperparameters published in the corresponding papers for each model. We evaluate all models for each task with the two HYPE methods: HYPE time and HYPE ∞.Evaluator recruitment. We recruit 360 total human evaluators across our 12 evaluations, each of which included 30 evaluators, from Amazon Mechanical Turk. Each completed a single evaluation in {CelebA-64, FFHQ-1024} × {HYPE time, HYPE ∞}. 
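Returning to the qualification threshold above, the stated one-in-one-thousand figure follows from a cumulative binomial tail, which can be checked in a couple of lines (this reproduces the overall 65-of-100 calculation and ignores the stricter per-class requirement):

```python
# Probability that random guessing (p = 0.5) gets at least 65 of 100 binary answers correct.
from math import comb

p_chance = sum(comb(100, k) for k in range(65, 101)) / 2 ** 100
print(f"p(pass by guessing) = {p_chance:.4f}")   # roughly 0.002, i.e. on the order of 1 in 1000
```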
To maintain a between subjects study in this evaluation, we did not allow duplicate evaluators across tasks or methods. In total, we recorded (4 CelebA-64 + 2 FFHQ-1024) models × 30 evaluators × 550 responses = 99, 000 total responses for our HYPE time evaluation and (4 CelebA-64 + 2 FFHQ-1024) models × 30 evaluators × 100 responses = 18, 000 total responses for our HYPE ∞ evaluation. Metrics. For HYPE time, we report the modal perceptual threshold in milliseconds. For HYPE ∞, we report the error rate as a percentage of images, as well as the breakdown of this rate on real and fake images individually. To show that our for each model are separable, we report a oneway ANOVA with Tukey pairwise post-hoc tests to compare all models within each {CelebA-64, FFHQ-1024} × {HYPE time, HYPE ∞} combination. As mentioned previously, reliability is a critical component of HYPE, as an evaluation is not useful if a researcher can re-run it and get a different answer. To show the reliability of HYPE, we use bootstrap BID12, a form of simulation, to simulate what the would be if we resample with replacement from this set of labels. Our goal is to see how much variation we may get in the outcome. We therefore report evaluator 95% bootstrapped confidence intervals, along with standard deviation of the bootstrap sample distribution. Confidence intervals (CIs) are defined as the region that captures where the modal exposure might be estimated to be if the same sampling procedure were repeated many times. For this and all following , bootstrapped confidence intervals were calculated by randomly sampling 30 evaluators with replacement from the original set of evaluators across 10, 000 iterations. Note that bootstrapped CIs do not represent that there necessarily exists substantial uncertainty-our reported modal exposure (for HYPE time) or detection rate (for HYPE ∞) is still the best point estimate of the value. We discuss bootstrapped CIs for other numbers of evaluators later on in the Cost Tradeoffs section. First, we report using the above datasets, models and metrics using HYPE time. Next, we demonstrate the HYPE ∞'s approximates the ones from HYPE time at a fraction of the cost and time. Next, we trade off the accuracy of our scores with time. We end with comparisons to FID. CelebA-64. We find that StyleGAN trunc ed in the highest HYPE time score (modal exposure time), at a mean of 439.3ms, indicating that evaluators required nearly a half-second of exposure to accurately classify StyleGAN trunc images (Table 1). StyleGAN trunc is followed by ProGAN at 363.7ms, a 17% drop in time. BEGAN and WGAN-GP are both easily identifiable as fake, so they are tied in third place around the minimum possible exposure time available of 100ms. Both BEGAN and WGAN-GP exhibit a bottoming out effect -reaching our minimum time exposure of 100ms quickly and consistently 1. This means that humans can detect fake generated images at 100ms and possibly lower. Thus, their scores are identical and indistinguishable. To demonstrate separability between StyleGAN trunc, ProGAN, BEGAN, and WGAN-GP together, we report from a one-way analysis of variance (ANOVA) test between all four models, where each model's input is the list of modes from each model's 30 evaluators. The ANOVA confirm that there is a statistically significant omnibus difference (F = 83.5, p < 0.0001). Pairwise post-hoc analysis using Tukey tests confirms that all pairs of models are separable (all p < 0.05), with the exception of BEGAN and WGAN-GP (n.s.).FFHQ-1024. 
We find that StyleGAN trunc ed in a higher exposure time than StyleGAN no-trunc, at 363.2ms and 240.7ms, respectively (Table 2). While the 95% confidence intervals that represent a very conservative overlap of 2.7ms, an unpaired t-test confirms that the difference between the two models is significant (t = 2.3, p = 0.02). Table 3: HYPE ∞ on four GANs trained on CelebA-64. Evaluators were deceived most often by StyleGAN trunc images, followed by ProGAN, BEGAN, and WGAN-GP. We also display the breakdown of the deception rate on real and fake images individually; counterintuitively, real errors increase with the errors on fake images, because evaluators become more confused and distinguishing factors between the two distributions become harder to discern. We observe a consistently separable difference between StyleGAN trunc and StyleGAN no-trunc and clear delineations between models TAB2. HYPE ∞ ranks StyleGAN trunc (27.6%) above StyleGAN no-trunc (19.0%) with no overlapping CIs. Separability is confirmed by an unpaired t-test (t = 8.3, p < 0.001). One of HYPE's goals is to be cost and time efficient. When running HYPE, there is an inherent tradeoff between accuracy and time, as well as between accuracy and cost. This is driven by the law of large numbers: recruiting additional evaluators in a crowdsourcing task often produces more consistent , but at a higher cost (as each evaluator is paid for their work) and a longer amount of time until completion (as more evaluators must be recruited and they must complete their work).To manage this tradeoff, we run an experiment with HYPE ∞ on StyleGAN trunc. We completed an additional evaluation with 60 evaluators, and compute 95% bootstrapped confidence intervals, choosing from 10 to 120 evaluators (Figure 4). We see that the CI begins to converge around 30 evaluators, our recommended number of evaluators to recruit and the default that we build into our system. As FID is one of the most frequently used evaluation methods for unconditional image generation, it is imperative to compare HYPE against FID on the same models (Table 5). We show through Spearman rank-order correlation coefficients that FID is correlated with neither human judgment measure, not HYPE time (ρ = −0.0286) nor with HYPE ∞ (ρ = −0.0857), where a Spearman correlation of -1.0 is ideal because lower FID and higher HYPE scores indicate stronger models. Meanwhile, HYPE time and HYPE ∞ exhibit strong correlation (ρ = 0.9429), where 1.0 is ideal because they are directly related. We calculate FID across the standard protocol of evaluating 50K generated and 50K real images for both CelebA-64 and FFHQ-1024, reproducing scores for StyleGAN no-trunc. Table 5: HYPE scores compared to FID. We put an asterisk on the most realistic GAN for each score (lower the better for FID, higher the better for HYPE). FID scores do not correlate fully with the human evaluation scores of HYPE ∞ on both CelebA-64 and FFHQ-1024 tasks. FID scores were calculated using 50K real (CelebA-64 or FFHQ-1024) and 50K generated images for each model. Cognitive psychology. We leverage decades of cognitive psychology to motivate how we use stimulus timing to gauge the perceptual realism of generated images. It takes an average of 150ms of focused visual attention for people to process and interpret an image, but only 120ms to respond to faces because our inferotemporal cortex has dedicated neural resources for face detection BID39 BID8. 
Perceptual masks are placed between a person's response to a stimulus and their perception of it to eliminate post-processing of the stimuli after the desired time exposure BID44. Prior work in determining human perceptual thresholds BID15 generates masks from their test images using the texture-synthesis algorithm BID35. We leverage this literature to establish feasible lower bounds on the exposure time of images, the time between images, and the use of noise masks. Success of automatic metrics. Common generative modeling tasks include realistic image generation BID14, machine translation BID0, image captioning BID48, and abstract summarization BID30, among others. These tasks often resort to automatic metrics like the Inception Score (IS) BID43 and Fréchet Inception Distance (FID) BID17 to evaluate images and BLEU BID34, CIDEr BID47 and METEOR BID1 scores to evaluate text. While we focus on how realistic generated content appears, other automatic metrics also measure diversity of output, overfitting, entanglement, training stability, and computational and sample efficiency of the model BID6 BID29 BID2. Our metric may also capture one aspect of output diversity, insofar as human evaluators can detect similarities or patterns across images. Our evaluation is not meant to replace existing methods but to complement them. Limitations of automatic metrics. Prior work has asserted that there exists coarse correlation of human judgment to FID BID17 and IS BID43, leading to their widespread adoption. Both metrics depend on the Inception v3 Network BID45, a pretrained ImageNet model, to calculate statistics on the generated output (for IS) and on the real and generated distributions (for FID). The validity of these metrics when applied to other datasets has been repeatedly called into question BID2 BID40 BID6 BID38. Perturbations imperceptible to humans alter their values, similar to the behavior of adversarial examples BID26. Finally, similar to our metric, FID depends on a set of real examples and a set of generated examples to compute high-level differences between the distributions, and there is inherent variance to the metric depending on the number of images and which images were chosen-in fact, there exists a correlation between accuracy and budget (cost of computation) in improving FID scores, because spending a longer time and thus higher cost on compute will yield better FID scores BID29. Nevertheless, this cost is still lower than paid human annotators per image. Human evaluations. Many human-based evaluations have been attempted to varying degrees of success in prior work, either to evaluate models directly BID11 BID33 or to motivate using automated metrics BID43 BID17. Prior work also used people to evaluate GAN outputs on CIFAR-10 and MNIST and even provided immediate feedback after every judgment BID43. They found that generated MNIST samples have saturated human performance-that is, people cannot distinguish generated numbers from real MNIST numbers, while still finding 21.3% error rate on CIFAR-10 with the same model BID43. This suggests that different datasets will have different levels of complexity for crossing realistic or hyper-realistic thresholds. The closest recent work to ours compares models using a tournament of discriminators BID33 BID24. The design would likely affect humans' absolute thresholds, as cognitive load may be of consideration; the number of humans required per task may require significant increase if evaluating fairly across all possible categories. 
Practically, the most valuable direction for the community to pursue with HYPE is likely one that includes the most difficult categories, especially when progress on those is hard to measure using automatic metrics. In the case of text generation (translation, caption generation), HYPE time may require much longer and much higher range adjustments to the perceptual time thresholds for text comprehensibility than those used in visual perception BID24.Future Work. We plan to extend HYPE to different imaging datasets and imaging tasks such as conditional image generation, as well as to text and video, such as translation BID34 and video captioning BID23. Future work would also explore budget-optimal estimation of HYPE scores and adaptive estimation of evaluator quality BID19. Additional improvements involve identifying images that require more evaluators BID50. We also aim to build in faster time exposures under 100ms -ideally down to 13ms, the minimum time exposure of human perception -for tasks that require that level of granularity. Doing so requires careful engineering solution, since 100ms appears to be the minimum time that is trustable before we are throttled by JavaScript paint and rendering times on modern browsers. We will investigate the ecological validity of our methods -that is, whether HYPE's evaluation is representative of how a person would perceive a GAN in everyday life. For instance, HYPE shows evaluators whether they classified an image correctly immediately after they answer. While this is standard practice in the psychophysics literature for staircase tasks, it likely does not reflect how one might encounter generated content in everyday life. Notably, in pilot studies, we found that without such feedback, evaluators were far less consistent and our metric would not be stable. Finally, we plan to investigate whether the reliability of HYPE may be impacted by the month or year at which it is run, as the population of available crowdsourced workers may differ across these factors. Anecdotally, we have found HYPE to be reliable regardless of the time of day.7 HYPE provides researchers with two human evaluation methods for GANs that are grounded in psychopisics to measure human perceptual fidelity directly, provide task designs that in consistent and reliable , distinguishes between different model performances through separable , is cost and time efficient. We report two metrics: HYPE time and HYPE ∞. HYPE time uses time perceptual thresholds where longer time constraints are more difficult to achieve because they give humans more time to interpret the generated content and observe artifacts. HYPE ∞ reports the error rate under unlimited time, where higher rates indicate a more realistic set of outputs. We demonstrate the efficacy of our approach on unconditional image generation across four GANs {StyleGAN, ProGAN, BEGAN, WGAN-GP} and two datasets of human faces {CelebA-64, FFHQ-1024}, with two types of output sampling on StyleGAN {with the truncation trick, without the truncation trick}. To encourage progress of generative models towards human-level visual fidelity, we deploy our evaluation system at https://hype.stanford.edu, so anyone can upload and evaluate their models based on HYPE at the click of a button. A. CONFIDENCE INTERVALS
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SJgZSULYdN
HYPE is a reliable human evaluation metric for scoring generative models, starting with human face generation across 4 GANs.
Machine translation has recently achieved impressive performance thanks to recent advances in deep learning and the availability of large-scale parallel corpora. There have been numerous attempts to extend these successes to low-resource language pairs, yet requiring tens of thousands of parallel sentences. In this work, we take this research direction to the extreme and investigate whether it is possible to learn to translate even without any parallel data. We propose a model that takes sentences from monolingual corpora in two different languages and maps them into the same latent space. By learning to reconstruct in both languages from this shared feature space, the model effectively learns to translate without using any labeled data. We demonstrate our model on two widely used datasets and two language pairs, reporting BLEU scores of 32.8 and 15.1 on the Multi30k and WMT English-French datasets, without using even a single parallel sentence at training time. Thanks to recent advances in deep learning BID33 BID0 and the availability of large-scale parallel corpora, machine translation has now reached impressive performance on several language pairs. However, these models work very well only when provided with massive amounts of parallel data, in the order of millions of parallel sentences. Unfortunately, parallel corpora are costly to build as they require specialized expertise, and are often nonexistent for low-resource languages. Conversely, monolingual data is much easier to find, and many languages with limited parallel data still possess significant amounts of monolingual data. There have been several attempts at leveraging monolingual data to improve the quality of machine translation systems in a semi-supervised setting BID25 BID15 BID16 BID39. Most notably, BID30 proposed a very effective data-augmentation scheme, dubbed "back-translation", whereby an auxiliary translation system from the target language to the source language is first trained on the available parallel data, and then used to produce translations from a large monolingual corpus on the target side. The pairs composed of these translations with their corresponding ground truth targets are then used as additional training data for the original translation system. Another way to leverage monolingual data on the target side is to augment the decoder with a language model BID11. And finally, BID3; have proposed to add an auxiliary auto-encoding task on monolingual data, which ensures that a translated sentence can be translated back to the original one. All these works still rely on several tens of thousands parallel sentences, however. Previous work on zero-resource machine translation has also relied on labeled information, not from the language pair of interest but from other related language pairs BID7 BID17 BID2 or from other modalities BID26 BID22. The only exception is the work by BID29; BID28, where the machine translation problem is reduced to a deciphering problem. Unfortunately, their method is limited to rather short sentences and it has only been demonstrated on a very simplistic setting comprising of the most frequent short sentences, or very closely related languages. Left (autoencoding): the model is trained to reconstruct a sentence from a noisy version of it. x is the target, C(x) is the noisy input,x is the reconstruction. Right (translation): the model is trained to translate a sentence in the other domain. 
The input is a noisy translation (in this case, from source-to-target) produced by the model itself, M, at the previous iteration (t), y = M (t) (x). The model is symmetric, and we repeat the same process in the other language. See text for more details. In this paper, we investigate whether it is possible to train a general machine translation system without any form of supervision whatsoever. The only assumption we make is that there exists a monolingual corpus on each language. This set up is interesting for a twofold reason. First, this is applicable whenever we encounter a new language pair for which we have no annotation. Second, it provides a strong lower bound performance on what any good semi-supervised approach is expected to yield. The key idea is to build a common latent space between the two languages (or domains) and to learn to translate by reconstructing in both domains according to two principles: (i) the model has to be able to reconstruct a sentence in a given language from a noisy version of it, as in standard denoising auto-encoders BID36.(ii) The model also learns to reconstruct any source sentence given a noisy translation of the same sentence in the target domain, and vice versa. For (ii), the translated sentence is obtained by using a back-translation procedure BID30, i.e. by using the learned model to translate the source sentence to the target domain. In addition to these reconstruction objectives, we constrain the source and target sentence latent representations to have the same distribution using an adversarial regularization term, whereby the model tries to fool a discriminator which is simultaneously trained to identify the language of a given latent sentence representation BID8. This procedure is then iteratively repeated, giving rise to translation models of increasing quality. To keep our approach fully unsupervised, we initialize our algorithm by using a naïve unsupervised translation model based on a word by word translation of sentences with a bilingual lexicon derived from the same monolingual data BID4. As a , and by only using monolingual data, we can encode sentences of both languages into the same feature space, and from there, we can also decode/translate in any of these languages; see FIG0 for an illustration. While not being able to compete with supervised approaches using lots of parallel resources, we show in section 4 that our model is able to achieve remarkable performance. For instance, on the WMT dataset we can achieve the same translation quality of a similar machine translation system trained with full supervision on 100,000 sentence pairs. On the Multi30K-Task1 dataset we achieve a BLEU above 22 on all the language pairs, with up to 32.76 on English-French. Next, in section 2, we describe the model and the training algorithm. We then present experimental in section 4. Finally, we further discuss related work in section 5 and summarize our findings in section 6. In this section, we first describe the architecture of the translation system, and then we explain how we train it. The translation model we propose is composed of an encoder and a decoder, respectively responsible for encoding source and target sentences to a latent space, and to decode from that latent space to the source or the target domain. We use a single encoder and a single decoder for both domains BID17. The only difference when applying these modules to different languages is the choice of lookup tables. 
Let us denote by W S the set of words in the source domain associated with the (learned) words embeddings Z S = (z s 1, ...., z s |W S |), and by W T the set of words in the target domain associated with the embeddings Z T = (z t 1, ...., z t |W T |), Z being the set of all the embeddings. Given an input sentence of m words x = (x 1, x 2, ..., x m) in a particular language, ∈ {src, tgt}, an encoder e θenc,Z (x,) computes a sequence of m hidden states z = (z 1, z 2, ..., z m) by using the corresponding word embeddings, i.e. Z S if = src and Z T if = tgt; the other parameters θ enc are instead shared between the source and target languages. For the sake of simplicity, the encoder will be denoted as e(x,) in the following. These hidden states are vectors in R n, n being the dimension of the latent space. A decoder d θ dec,Z (z,) takes as input z and a language, and generates an output sequence y = (y 1, y 2, ..., y k), where each word y i is in the corresponding vocabulary W. This decoder makes use of the corresponding word embeddings, and it is otherwise parameterized by a vector θ dec that does not depend on the output language. It will thus be denoted d(z,) in the following. To generate an output word y i, the decoder iteratively takes as input the previously generated word y i−1 (y 0 being a start symbol which is language dependent), updates its internal state, and returns the word that has the highest probability of being the next one. The process is repeated until the decoder generates a stop symbol indicating the end of the sequence. In this article, we use a sequence-to-sequence model with attention BID0, without input-feeding. The encoder is a bidirectional-LSTM which returns a sequence of hidden states z = (z 1, z 2, ..., z m). At each step, the decoder, which is also an LSTM, takes as input the previous hidden state, the current word and a context vector given by a weighted sum over the encoder states. In all the experiments we consider, both encoder and decoder have 3 layers. The LSTM layers are shared between the source and target encoder, as well as between the source and target decoder. We also share the attention weights between the source and target decoder. The embedding and LSTM hidden state dimensions are all set to 300. Sentences are generated using greedy decoding. We consider a dataset of sentences in the source domain, denoted by D src, and another dataset in the target domain, denoted by D tgt. These datasets do not correspond to each other, in general. We train the encoder and decoder by reconstructing a sentence in a particular domain, given a noisy version of the same sentence in the same or in the other domain. At a high level, the model starts with an unsupervised naïve translation model obtained by making word-by-word translation of sentences using a parallel dictionary learned in an unsupervised way BID4. Then, at each iteration, the encoder and decoder are trained by minimizing an objective function that measures their ability to both reconstruct and translate from a noisy version of an input training sentence. This noisy input is obtained by dropping and swapping words in the case of the auto-encoding task, while it is the of a translation with the model at the previous iteration in the case of the translation task. In order to promote alignment of the latent distribution of sentences in the source and the target domains, our approach also simultaneously learns a discriminator in an adversarial setting. 
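Before turning to the training procedure in detail, a minimal sketch of the shared encoder described above is given next: a single bidirectional LSTM whose parameters are shared across languages, with a language-specific embedding table selected by the language argument. The dimensions follow the paper (300-dimensional embeddings and latent states, 3 layers); the class and variable names, and all other details, are illustrative assumptions rather than the authors' code.

```python
# Illustrative shared encoder with per-language word embeddings (PyTorch-style sketch).
import torch.nn as nn

class SharedEncoder(nn.Module):
    def __init__(self, n_src_words, n_tgt_words, dim=300, layers=3):
        super().__init__()
        self.embeddings = nn.ModuleDict({
            "src": nn.Embedding(n_src_words, dim),
            "tgt": nn.Embedding(n_tgt_words, dim),
        })
        # The recurrent parameters are shared between the two languages; the per-direction
        # hidden size is chosen so that the bidirectional output is dim-dimensional.
        self.lstm = nn.LSTM(dim, dim // 2, num_layers=layers,
                            bidirectional=True, batch_first=True)

    def forward(self, token_ids, lang):          # lang is "src" or "tgt"
        z, _ = self.lstm(self.embeddings[lang](token_ids))
        return z                                  # one latent vector per input word
```

The decoder mirrors this structure, sharing its LSTM and attention parameters across languages and selecting only the output lookup table by language.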
The newly learned encoder/decoder are then used at the next iteration to generate new translations, until convergence of the algorithm. At test time and despite the lack of parallel data at training time, the encoder and decoder can be composed into a standard machine translation system. Training an autoencoder of sentences is a trivial task, if the sequence-to-sequence model is provided with an attention mechanism like in our work 1. Without any constraint, the auto-encoder very quickly learns to merely copy every input word one by one. Such a model would also perfectly copy sequences of random words, suggesting that the model does not learn any useful structure in the data. To address this issue, we adopt the same strategy of Denoising Auto-encoders (DAE) BID36 ), and add noise to the input sentences (see FIG0 -left), similarly to BID13. Considering a domain = src or = tgt, and a stochastic noise model denoted by C which operates on sentences, we define the following objective function: DISPLAYFORM0 wherex ∼ d(e(C(x), ), ) means thatx is a reconstruction of the corrupted version of x, with x sampled from the monolingual dataset D. In this equation, ∆ is a measure of discrepancy between the two sequences, the sum of token-level cross-entropy losses in our case. Noise model C(x) is a randomly sampled noisy version of sentence x. In particular, we add two different types of noise to the input sentence. First, we drop every word in the input sentence with a probability p wd. Second, we slightly shuffle the input sentence. To do so, we apply a random permutation σ to the input sentence, verifying the condition ∀i ∈ {1, n}, |σ(i) − i| ≤ k where n is the length of the input sentence, and k is a tunable parameter. To generate a random permutation verifying the above condition for a sentence of size n, we generate a random vector q of size n, where q i = i + U (0, α), and U is a draw from the uniform distribution in the specified range. Then, we define σ to be the permutation that sorts the array q. In particular, α < 1 will return the identity, α = +∞ can return any permutation, and α = k + 1 will return permutations σ verifying ∀i ∈ {1, n}, |σ(i) − i| ≤ k. Although biased, this method generates permutations similar to the noise observed with word-by-word translation. In our experiments, both the word dropout and the input shuffling strategies turned out to have a critical impact on the , see also section 4.5, and using both strategies at the same time gave us the best performance. In practice, we found p wd = 0.1 and k = 3 to be good parameters. The second objective of our approach is to constrain the model to be able to map an input sentence from a the source/target domain 1 to the target/source domain 2, which is what we are ultimately interested in at test time. The principle here is to sample a sentence x ∈ D 1, and to generate a corrupted translation of this sentence in 2. This corrupted version is generated by applying the current translation model denoted M to x such that y = M (x). Then a corrupted version C(y) is sampled (see FIG0 -right). The objective is thus to learn the encoder and the decoder such that they can reconstruct x from C(y). The cross-domain loss can be written as: DISPLAYFORM0 where ∆ is again the sum of token-level cross-entropy losses. 
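A minimal sketch of the corruption process C(x) described above follows. The values p_wd = 0.1 and k = 3 are the ones reported in the text; the order of the two operations and the guard against dropping every word are our own illustrative choices.

```python
# Illustrative noise model C(x): word dropout followed by a length-limited local shuffle.
import random

def corrupt(sentence, p_wd=0.1, k=3):
    # 1) Drop each word with probability p_wd (keep at least one word).
    kept = [w for w in sentence if random.random() >= p_wd] or sentence[:1]
    # 2) Shuffle locally: sort positions by q_i = i + U(0, k + 1),
    #    so no word moves more than k positions from where it started.
    scores = [i + random.uniform(0, k + 1) for i in range(len(kept))]
    order = sorted(range(len(kept)), key=lambda i: scores[i])
    return [kept[i] for i in order]

print(corrupt("a man is playing a guitar on stage".split()))
```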
Intuitively, the decoder of a neural machine translation system works well only when its input is produced by the encoder it was trained with, or at the very least, when that input comes from a distribution very close to the one induced by its encoder. Therefore, we would like our encoder to output features in the same space regardless of the actual language of the input sentence. If such condition is satisfied, our decoder may be able to decode in a certain language regardless of the language of the encoder input sentence. Note however that the decoder could still produce a bad translation while yielding a valid sentence in the target domain, as constraining the encoder to map two languages in the same feature space does not imply a strict correspondence between sentences. Fortunately, the previously introduced loss for cross-domain training in equation 2 mitigates this concern. Also, recent work on bilingual lexical induction has shown that such a constraint is very effective at the word level, suggesting that it may also work at the sentence level, as long as the two latent representations exhibit strong structure in feature space. In order to add such a constraint, we train a neural network, which we will refer to as the discriminator, to classify between the encoding of source sentences and the encoding of target sentences BID8. The discriminator operates on the output of the encoder, which is a sequence of latent vectors (z 1, ..., z m), with z i ∈ R n, and produces a binary prediction about the language of the en- DISPLAYFORM0, where 0 corresponds to the source domain, and 1 to the target domain. The discriminator is trained to predict the language by minimizing the following cross-entropy loss: DISPLAYFORM1, where (x i, i) corresponds to sentence and language id pairs uniformly sampled from the two monolingual datasets, θ D are the parameters of the discriminator, θ enc are the parameters of the encoder, and Z are the encoder word embeddings. The encoder is trained instead to fool the discriminator: DISPLAYFORM2 with j = 1 if i = 2, and vice versa. Final Objective function The final objective function at one iteration of our learning algorithm is thus: DISPLAYFORM3 where λ auto, λ cd, and λ adv are hyper-parameters weighting the importance of the auto-encoding, cross-domain and adversarial loss. In parallel, the discriminator loss L D is minimized to update the discriminator. In this section we describe the overall training algorithm and the unsupervised criterion we used to select hyper-parameters. The final learning algorithm is described in Algorithm 1 and the general architecture of the model is shown in FIG1. As explained previously, our model relies on an iterative algorithm which starts from an initial translation model M (line 3). This is used to translate the available monolingual data, as needed by the cross-domain loss function of Equation 2. At each iteration, a new encoder and decoder are trained by minimizing the loss of Equation 4 -line 7 of the algorithm. Then, a new translation model M (t+1) is created by composing the ing encoder and decoder, and the process repeats. To jump start the process, M simply makes a word-by-word translation of each sentence using a parallel dictionary learned using the unsupervised method proposed by BID4, which only leverages monolingual data. 
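The two adversarial terms can be sketched as follows (an illustrative PyTorch-style fragment, not the authors' code). Here `discriminator` is assumed to be any module mapping the latent states to one logit per position, and `lang_id` is 0 for the source language and 1 for the target; in the full objective these terms sit alongside the denoising and cross-domain losses, weighted by λ_adv, λ_auto, and λ_cd.

```python
# Illustrative adversarial losses on the encoder's latent states.
import torch
import torch.nn.functional as F

def discriminator_loss(discriminator, latent, lang_id):
    # Update only the discriminator: predict which language produced these latent states.
    logits = discriminator(latent.detach())
    target = torch.full_like(logits, float(lang_id))
    return F.binary_cross_entropy_with_logits(logits, target)

def adversarial_loss(discriminator, latent, lang_id):
    # Update the encoder: push the discriminator toward predicting the *other* language.
    logits = discriminator(latent)
    target = torch.full_like(logits, float(1 - lang_id))
    return F.binary_cross_entropy_with_logits(logits, target)
```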
The intuition behind our algorithm is that as long as the initial translation model M retains at least some information of the input sentence, the encoder will map such translation into a representation in feature space that also corresponds to a cleaner version of the input, since the encoder is trained to denoise. At the same time, the decoder is trained to predict noiseless outputs, conditioned on noisy features. Putting these two pieces together will produce less noisy translations, which will enable better back-translations at the next iteration, and so on so forth. Infer bilingual dictionary using monolingual data BID4 3:M ← unsupervised word-by-word translation model using the inferred dictionary 4:for t = 1, T do DISPLAYFORM0 end for 10:return M TAB1 11: end procedure In order to select hyper-parameters, we wish to have a criterion correlated with the translation quality. However, we do not have access to parallel sentences to judge how well our model translates, not even at validation time. Therefore, we propose the surrogate criterion which we show correlates well with BLEU BID27, the metric we care about at test time. For all sentences x in a domain 1, we translate these sentences to the other domain 2, and then translate the ing sentences back to 1. The quality of the model is then evaluated by computing the BLEU score over the original inputs and their reconstructions via this two-step translation process. The performance is then averaged over the two directions, and the selected model is the one with the highest average score. Given an encoder e, a decoder d and two non-parallel datasets D src and D tgt, we denote M src→tgt (x) = d(e(x, src), tgt) the translation model from src to tgt, and M tgt→src the model in the opposite direction. Our model selection criterion M S(e, d, D src, D tgt) is: FIG3 shows a typical example of the correlation between this measure and the final translation model performance (evaluated here using a parallel dataset). DISPLAYFORM0 The unsupervised model selection criterion is used both to a) determine when to stop training and b) to select the best hyper-parameter setting across different experiments. In the former case, the Spearman correlation coefficient between the proposed criterion and BLEU on the test set is 0.95 in average. In the latter case, the coefficient is in average 0.75, which is fine but not nearly as good. For instance, the BLEU score on the test set of models selected with the unsupervised criterion are sometimes up to 1 or 2 BLEU points below the score of models selected using a small validation set of 500 parallel sentences. In this section, we first describe the datasets and the pre-processing we used, then we introduce the baselines we considered, and finally we report the extensive empirical validation proving the effectiveness of our method. We will release the code to the public once the revision process is over. In our experiments, we consider the English-French and English-German language pairs, on three different datasets. WMT'14 English-French We use the full training set of 36 million pairs, we lower-case them and remove sentences longer than 50 words, as well as pairs with a source/target length ratio above 1.5, ing in a parallel corpus of about 30 million sentences. Next, we build monolingual corpora by selecting the English sentences from 15 million random pairs, and selecting the French sentences from the complementary set. The former set constitutes our English monolingual dataset. 
The latter set is our French monolingual dataset. The lack of overlap between the two sets ensures that there is not exact correspondence between examples in the two datasets. The validation set is comprised of 3,000 English and French sentences extracted from our monolingual training corpora described above. These sentences are not the translation of each other, and they will be used by our unsupervised model selection criterion, as explained in 3.2. Finally, we report on the full newstest2014 dataset. WMT'16 English-German We follow the same procedure as above to create monolingual training and validation corpora in English and German, which in two monolingual training corpora of 1.8 million sentences each. We test our model on the newstest2016 dataset. The task 1 of the Multi30k dataset BID6 above, we split the training and validation sets into monolingual corpora, ing in 14,500 monolingual source and target sentences in the training set, and 500 sentences in the validation set. TAB1 summarizes the number of monolingual sentences in each dataset, along with the vocabulary size. To limit the vocabulary size on the WMT en-fr and WMT de-en datasets, we only considered words with more than 100 and 25 occurrences, respectively. Word-by-word translation (WBW) The first baseline is a system that performs word-by-word translations of the input sentences using the inferred bilingual dictionary BID4. This baseline provides surprisingly good for related language pairs, like English-French, where the word order is similar, but performs rather poorly on more distant pairs like English-German, as can be seen in Table 2.Word reordering (WR) After translating word-by-word as in WBW, here we reorder words using an LSTM-based language model trained on the target side. Since we cannot exhaustively score every possible word permutation (some sentences have about 100 words), we consider all pairwise swaps of neighboring words, we select the best swap, and iterate ten times. We use this baseline only on the WMT dataset that has a large enough monolingual data to train a language model. Using the reference, we produce the best possible generation using only the words given by WBW. The performance of this method is an upper-bound of what any model could do without replacing words. Supervised Learning We finally consider exactly the same model as ours, but trained with supervision, using the standard cross-entropy loss on the original parallel sentences. To implement our baseline and also to initialize the embeddings Z of our model, we first train word embeddings on the source and target monolingual corpora using fastText BID1, and then we apply the unsupervised method proposed by BID4 to infer a bilingual dictionary which can be use for word-by-word translation. Since WMT yields a very large-scale monolingual dataset, we obtain very high-quality embeddings and dictionaries, with an accuracy of 84.48% and 77.29% on French-English and GermanEnglish, which is on par with what could be obtained using a state-of-the-art supervised alignment method BID4.On the Multi30k datasets instead, the monolingual training corpora are too small to train good word embeddings (more than two order of magnitude smaller than WMT). We therefore learn word vectors on Wikipedia using fastText 2. Table 2: BLEU score on the Multi30k-Task1 and WMT datasets using greedy decoding. 
Discriminator Architecture The discriminator is a multilayer perceptron with three hidden layers of size 1024, Leaky-ReLU activation functions and an output logistic unit. We include a smoothing coefficient s = 0.1 in the discriminator predictions.

Training Details The encoder and the decoder are trained using Adam BID18, with a learning rate of 0.0003, β1 = 0.5, and a mini-batch size of 32. The discriminator is trained using RMSProp BID35 with a learning rate of 0.0005. We evenly alternate between one encoder-decoder update and one discriminator update. We set λ_auto = λ_cd = λ_adv = 1.

Table 2 shows the BLEU scores achieved by our model and the baselines we considered. First, we observe that word-by-word translation is surprisingly effective when translating into English, obtaining BLEU scores of 16.77 and 10.09 for fr-en on the Multi30k-Task1 and WMT datasets, respectively. Word reordering only slightly improves upon word-by-word translation. Our model, instead, clearly outperforms these baselines, even on the WMT dataset, which has more diversity of topics and sentences with much more complicated structure. After just one iteration, we obtain BLEU scores of 27.48 and 12.10 for the en-fr task. Interestingly, we do even better than oracle reordering on some language pairs, suggesting that our model not only reorders but also correctly substitutes some words. After a few iterations, our model obtains BLEU scores of 32.76 and 15.05 on the Multi30k-Task1 and WMT datasets for the English-to-French task, which is rather remarkable.

Comparison with supervised approaches Here, we assess how much labeled data our two large monolingual corpora are worth. On WMT, we trained the very same NMT architecture on both language pairs, but with supervision, using various amounts of parallel data. FIG4 (right) shows the resulting performance. Our unsupervised approach obtains the same performance as a supervised NMT model trained on about 100,000 parallel sentences, which is impressive. Of course, adding more parallel examples allows the supervised approach to outperform our method, but the good performance of our unsupervised method suggests that it could be very effective for low-resource languages where no parallel data are available. Moreover, these results open the door to the development of semi-supervised translation models, which will be the focus of future investigation. With a phrase-based machine translation system, we obtain 21.6 and 22.4 BLEU on WMT en-fr and fr-en, which is better than the supervised NMT baseline we report for that same amount of parallel sentences, which obtains 16.8 and 16.4 respectively. However, if we train the same supervised NMT model with BPE BID31, we obtain 22.6 BLEU for en-fr, suggesting that our results on unsupervised machine translation could also be improved by using BPE, as this removes unknown words (about 9% of the words in de-en are otherwise replaced by the unknown token).

Iterative Learning FIG4 (left) illustrates the quality of the learned model after each iteration of the learning process on the language pairs of the Multi30k-Task1 dataset, with the remaining results provided in Table 2. One can see that the quality of the obtained model is high just after the first iteration of the process.

Table 3 (excerpt):
Source: une femme aux cheveux roses habillée en noir parle à un homme.
Iteration 0: a woman at hair roses dressed in black speaks to a man.
Iteration 1: a woman at glasses dressed in black talking to a man.
Iteration 2: a woman at pink hair dressed in black speaks to a man.
Iteration 3: a woman with pink hair dressed in black is talking to a man.
Reference: a woman with pink hair dressed in black talks to a man.

Source: une photo d'une rue bondée en ville.
Iteration 0: a photo a street crowded in city.
Iteration 1: a picture of a street crowded in a city.
Iteration 2: a picture of a crowded city street.
Iteration 3: a picture of a crowded street in a city.
Reference: a view of a crowded city street.
Subsequent iterations yield significant gains, although with diminishing returns. At iteration 3, the performance gains are marginal, showing that our approach quickly converges. Table 3 shows examples of translations of three sentences on the Multi30k dataset, as we iterate. Iteration 0 corresponds to the word-by-word translation obtained with our cross-lingual dictionary, which clearly suffers from word order issues. We can observe that the quality of the translations increases at every iteration.

We perform an ablation study to understand the importance of the different components of our system. To this end, we have trained multiple versions of our model with some missing components: the discriminator, the cross-domain loss, the auto-encoding loss, etc. Table 4 shows that the best performance is obtained with the simultaneous use of all the described elements.

Table 4: Ablation study on the Multi30k-Task1 dataset.

The most critical component is the unsupervised word alignment technique, either in the form of a back-translation dataset generated using word-by-word translation, or in the form of pretrained embeddings which make it possible to map sentences of different languages into the same latent space. On the English-French pair of Multi30k-Task1, with a back-translation dataset but without pretrained embeddings, our model obtains BLEU scores of 25.29 and 26.10, which is only a few points below the model using all components. Similarly, when the model uses pretrained embeddings but no back-translation dataset (when λ_cd = 0), it obtains 25.44 and 27.14. On the other hand, a model that does not use any of these components only reaches 8.78 and 9.15 BLEU. The adversarial component also significantly improves the performance of our system, with a difference of up to 5.33 BLEU on the French-English pair of Multi30k-Task1. This confirms our intuition that, to really benefit from the cross-domain loss, one has to ensure that the distribution of latent sentence representations is similar across the two languages. Without the auto-encoding loss (when λ_auto = 0), the model only obtains 20.02 BLEU, which is 8.05 BLEU points below the method using all components. Finally, performance is also greatly degraded when the corruption process of the input sentences is removed, as the model has a much harder time learning useful regularities and merely learns to copy input data.

A work similar to ours is the style transfer method with non-parallel text by BID32. The authors consider a sequence-to-sequence model, where the latent state given to the decoder is also fed to a discriminator. The encoder is trained with the decoder to reconstruct the input, but also to fool the discriminator. The authors also found it beneficial to train two discriminators, one for the source and one for the target domain. They then trained the decoder so that the recurrent hidden states during the decoding process of a sentence in a particular domain are not distinguishable according to the respective discriminator.
This algorithm, called Professor Forcing, was initially introduced by BID20 to encourage the dynamics of the decoder observed during inference to be similar to the ones observed at training time. Similarly, BID38 also propose to use an adversarial training approach to learn representations invariant to specific attributes. In particular, they train an encoder to map the observed data to a latent feature space, and a model to make predictions based on the encoder output. To remove bias existing in the data from the latent codes, a discriminator is also trained on the encoder outputs to predict specific attributes, while the encoder is jointly trained to fool the discriminator. They show that the obtained invariant representations lead to better generalization on classification and generation tasks. Before that, BID14 trained a variational autoencoder BID19 where the decoder input is the concatenation of an unstructured latent vector and a structured code representing the attribute of the sentence to generate. A discriminator is trained on top of the decoder to classify the labels of generated sentences, while the decoder is trained to satisfy this discriminator. Because of the non-differentiability of the decoding process, at each step their decoder takes as input the probability vector predicted at the previous step. Perhaps the most relevant prior work is an approach that essentially optimizes directly for the model selection metric we propose in Section 3.2. One drawback of that approach, which has not been applied to the fully unsupervised setting, is that it requires back-propagating through the sequence of discrete predictions using reinforcement learning-based approaches, which are notoriously inefficient. In this work, we instead propose to a) use a symmetric architecture, and b) freeze the translator from source to target when training the translator from target to source, and vice versa. By alternating this process we operate with a fully differentiable model and we converge efficiently. In the vision domain, several studies tackle the unsupervised image translation problem, where the task consists of mapping between two image domains A and B without paired supervision. For instance, in the CoGAN architecture BID23, two generators are trained to learn a common representation space between two domains by sharing some of their convolutional layers. This is similar to our strategy of sharing the LSTM weights across the source and target encoders and decoders. A similar approach has been proposed based on variational autoencoders and generative adversarial networks BID10. BID34 use similar approaches for emoji generation, and apply a regularization term to the generator so that it behaves like an identity mapping when provided with input images from the target domain. BID40 introduced a cycle consistency loss to capture the intuition that if an image is mapped from A to B, and then from B to A, the resulting image should be identical to the input one. Our approach is also reminiscent of the Fader Networks architecture, where a discriminator is used to remove the information related to specific attributes from the latent states of an autoencoder of images. The attribute values are then given as input to the decoder. The decoder is trained with real attributes, but at inference time it can be fed with any attribute values to generate variations of the input images. The model presented in this paper can be seen as an extension of the Fader Networks to the text domain, where the attribute is the language itself.
We presented a new approach to neural machine translation where a translation model is learned using monolingual datasets only, without any alignment between sentences or documents. The principle of our approach is to start from a simple unsupervised word-by-word translation model, and to iteratively improve this model based on a reconstruction loss, and using a discriminator to align latent distributions of both the source and the target languages. Our experiments demonstrate that our approach is able to learn effective translation models without any supervision of any sort.
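As a concrete illustration of the unsupervised model-selection criterion from Section 3.2, the sketch below computes the round-trip BLEU in both directions and averages them. It is only a sketch under assumptions: the translation callables and the use of NLTK's corpus_bleu are illustrative choices, not part of the original system.

```python
# Round-trip (two-step) BLEU used as an unsupervised model-selection score.
from nltk.translate.bleu_score import corpus_bleu

def round_trip_bleu(sentences, forward, backward):
    """BLEU between original sentences and their translate-then-back-translate reconstructions."""
    reconstructions = [backward(forward(s)) for s in sentences]
    references = [[s.split()] for s in sentences]      # corpus_bleu expects lists of token lists
    hypotheses = [r.split() for r in reconstructions]
    return corpus_bleu(references, hypotheses)

def model_selection_score(d_src, d_tgt, m_src2tgt, m_tgt2src):
    """MS(e, d, D_src, D_tgt): average of the two round-trip BLEU scores."""
    return 0.5 * (round_trip_bleu(d_src, m_src2tgt, m_tgt2src)
                  + round_trip_bleu(d_tgt, m_tgt2src, m_src2tgt))
```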
[ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rkYTTf-AZ
We propose a new unsupervised machine translation model that can learn without using parallel corpora; experimental results show impressive performance on multiple corpora and pairs of languages.
We derive a new intrinsic social motivation for multi-agent reinforcement learning (MARL), in which agents are rewarded for having causal influence over another agent's actions, where causal influence is assessed using counterfactual reasoning. The reward does not depend on observing another agent's reward function, and is thus a more realistic approach to MARL than that taken in previous work. We show that the causal influence reward is related to maximizing the mutual information between agents' actions. We test the approach in challenging social dilemma environments, where it consistently leads to enhanced cooperation between agents and higher collective reward. Moreover, we find that rewarding influence can lead agents to develop emergent communication protocols. Therefore, we also employ influence to train agents to use an explicit communication channel, and find that it leads to more effective communication and higher collective reward. Finally, we show that influence can be computed by equipping each agent with an internal model that predicts the actions of other agents. This allows the social influence reward to be computed without the use of a centralised controller, and as such represents a significantly more general and scalable inductive bias for MARL with independent agents.

Deep reinforcement learning (RL) has made impressive progress on specific tasks with well-defined reward functions, but it remains difficult to learn intelligent behavior that generalizes across multiple domains. Intrinsic motivation is a technique for addressing this problem by developing general reward functions that encourage an agent to learn across a variety of tasks BID26. Previous approaches to intrinsic motivation have broadly fallen into two categories: curiosity, or a drive for novelty (e.g. BID17 BID24), and empowerment, or a drive to be able to manipulate the environment. We posit that this body of work has largely overlooked an important intrinsic motivation that is key to human learning: social interaction. Humans have remarkable social learning abilities; some authors suggest that it is social learning that has given rise to cultural evolution, and allowed us to achieve unprecedented progress and coordination on a massive scale BID31 BID11. Others emphasize that our impressive capacity to learn from others far surpasses that of other animals, apes, and even other proto-human species BID10 BID9. Therefore, we propose an intrinsic reward function designed for multi-agent RL (MARL) which rewards agents for having a causal influence on other agents' actions. Causal influence is assessed using counterfactual reasoning; at each timestep, an agent simulates alternate, counterfactual actions that it could have taken, and assesses their effect on another agent's behavior. Actions that lead to a relatively larger change in the other agent's behavior are considered highly influential and are rewarded. We show how this reward is related to maximizing the mutual information between agents' actions, and is thus a form of social empowerment. We hypothesize that rewarding influence may therefore encourage cooperation between agents. We also take inspiration from experiments in human cognition showing that newborn infants are sensitive to correspondences between their own actions and the actions of other people, and use this to coordinate their behavior with others BID30 BID13. To study our proposed social influence reward in the MARL setting, we adopt the Sequential Social Dilemmas (SSDs) of BID20.
These are challenging multi-agent environments with a game-theoretic reward structure, similar to the Prisoner's Dilemma. For each individual agent, 'defecting' (non-cooperative behavior) has the highest payoff. However, the collective reward will be better if all agents choose to cooperate. The paradoxical payoff structure of these tasks makes achieving cooperative social dynamics difficult.

• Finally, rather than computing social influence using a centralised training framework as in prior work (e.g. BID4 BID3), we extend the approach by attaching an internal Model of Other Agents (MOA) network to each agent and training it to predict the actions of every other agent. The agent can then simulate counterfactual actions and use its own internal MOA to predict how these will affect other agents, thus computing its own intrinsic influence reward. Using a MOA to predict and reward influence allows us to compute an intrinsic social reward by observing other agents' past actions, without a centralised controller, and without requiring access to another agent's reward function. We believe this is an important innovation over prior work (e.g. BID4 BID3). When we consider likely future applications of MARL, such as autonomous driving, it becomes apparent that centralised training or the sharing of reward functions are unrealistic assumptions, since autonomous vehicles are likely to be produced by a wide variety of organizations and institutions with mixed motivations. Rather, a social reward function which only depends on observing the behavior of agents acting in the environment, and which can give rise to coordinated, cooperative behavior, represents a more promising approach.

We consider a MARL Markov game defined by the tuple ⟨S, T, A, r⟩, in which multiple agents which do not share weights are trained to independently maximize their own individual reward. The environment state is given by s ∈ S. At each timestep t, each agent k chooses an action a^k_t ∈ A. The actions of all N agents are combined to form a joint action a_t = [a^0_t, ..., a^N_t], which produces a transition in the environment T(s_{t+1} | a_t, s_t), according to the state transition function T. Each agent then receives its own reward r^k(a_t, s_t), which may depend on the actions of other agents. A history of these variables over time is termed a trajectory, τ = {s_t, a_t, r_t}^T_{t=0}. We consider a partially observable setting in which each agent k can only view a portion of the true state, s^k_t. Each agent seeks to maximize its own total expected discounted future reward, R^k = E[Σ_{t=0}^T γ^t r^k_t], where γ is a discount factor. A distributed asynchronous advantage actor-critic approach (A3C) BID14 is used to train each agent's independent policy π^k. The policy is learned via REINFORCE with a baseline BID33. Architecturally, our agents consist of a convolutional layer, fully connected layers, a Long Short-Term Memory (LSTM) network BID7, and linear layers which output π^k and the value function V^{π^k}(s). We will refer to the internal LSTM state of agent k at timestep t as u^k_t.

Social influence intrinsic motivation modifies an agent's reward function so that it becomes R^k = α E^k + β I^k, where E^k is the extrinsic or environmental reward, and I^k is the causal influence reward. We compute I^k by generating counterfactual actions that the agent could have taken at each timestep, and assessing how taking these would have affected other agents' behavior.
A counterfactual is the estimated probability that "Y would be y had X been x, in situation Z = z", where X, Y, and Z are random variables, and x, y and z are their values BID19. Importantly, it is a counterfactual because we condition on a set of evidence z, and because the assignment X = x is counter to what we actually observed; in reality, X took on some other value. To simplify notation, let z_t = ⟨u^B_t, s^B_t⟩, so that conditioning on z_t is equivalent to conditioning on all relevant variables (all shaded variables in Figure 1). We can also forego the do operator, noting that p(a^B_t | do(ã^A_t), z_t) ≡ p(a^B_t | ã^A_t, z_t) in this case, because z satisfies the back-door criterion BID18. Now, consider averaging over several counterfactuals ã^A_t. This gives us the marginal policy of B,

p(a^B_t | z_t) = Σ_{ã^A_t} p(a^B_t | ã^A_t, z_t) p(ã^A_t | z_t),

that is, B's policy if A were not considered. The discrepancy between the marginal policy of B and the conditional policy of B given A's action is a measure of the causal influence of A on B; it gives the degree to which B changes its planned action distribution because of A's behavior. Thus, the causal influence intrinsic reward for agent A is

I^A_t = D_KL[ p(a^B_t | a^A_t, z_t) || p(a^B_t | z_t) ].   (1)

The causal influence reward in Eq. 1 is related to the mutual information (MI) between the actions of agents A and B, which is given by

I(a^A_t; a^B_t | z_t) = Σ_{a^A_t} p(a^A_t | z_t) D_KL[ p(a^B_t | a^A_t, z_t) || p(a^B_t | z_t) ],   (2)

where we see that the D_KL factor in Eq. 2 is the causal influence reward given in Eq. 1. The connection to mutual information is interesting, because a frequently used intrinsic motivation for single-agent RL is empowerment, which rewards the agent for having high mutual information between its actions and the future state of the environment (e.g. Capdepuy et al.). To the extent that the social influence reward defined in Eq. 1 is an approximation of the MI, A is rewarded for having empowerment over B's actions. By sampling N independent trajectories τ_n from the environment, in which A's actions a^A_t are drawn from its policy p(a^A_t | z_t), the MI in Eq. 2 can be estimated from the sampled influence rewards:

I(a^A; a^B) ≈ (1/N) Σ_{n=1}^{N} Σ_{a^B_t} p(a^B_t | a^A_t, z_t) log [ p(a^B_t | a^A_t, z_t) / p(a^B_t | z_t) ],   (3)

evaluated along each τ_n. Thus, in expectation, the social influence reward is the MI between agents' actions. Whether the policy trained with Eq. 1 actually learns to approximate the MI depends on the learning dynamics. We calculate the intrinsic social influence reward using Eq. 1 because, unlike Eq. 2, which gives an estimate of the symmetric bandwidth between A and B, Eq. 1 gives the directed causal effect of the specific action taken by agent A, a^A_t. We believe this will result in an easier reward to learn, since it allows for better credit assignment; agent A can more easily learn which of its actions lead to high influence. Note that this approach requires that agent A choose its action before B, and therefore A can influence B but B cannot influence A; in other words, we must impose a sequential ordering on agents' actions, and there cannot be mutual influence. We improve upon this approach in Section 2.4. For now, we allow only a fixed number of agents (∈ [1, N−1]) to be influencers, and the rest are influencees. Only an influencer gets the causal influence reward, and only an influencee can be influenced. At each timestep, the influencers choose their actions first, and these actions are then given as input to the influencees; if agents A and B are influencers, and C is an influencee, then C receives both of their actions as input. We also experiment with replacing the KL-divergence with several other measures, including the Jensen-Shannon Divergence (JSD), and find that the influence reward is robust to the choice of measure.
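A minimal sketch of the counterfactual computation behind Eq. 1 is given below, assuming access to B's conditional policy for every counterfactual action of A (as a centralised controller would provide); the array names are illustrative and not from the paper.

```python
import numpy as np

def influence_reward(p_b_given_a, p_a_given_z, a_taken):
    """Eq. 1: KL between B's conditional policy and its counterfactual-averaged marginal.

    p_b_given_a: array [n_actions_A, n_actions_B]; row i is p(a^B | a^A = i, z).
    p_a_given_z: array [n_actions_A]; A's own policy p(a^A | z), used to average counterfactuals.
    a_taken:     index of the action A actually took.
    """
    conditional = p_b_given_a[a_taken]          # p(a^B | a^A_t, z_t)
    marginal = p_a_given_z @ p_b_given_a        # p(a^B | z_t), averaged over counterfactuals
    return float(np.sum(conditional * np.log(conditional / marginal)))
```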
According to BID13, human children rapidly learn to use communication to influence the behavior of others when engaging in cooperative activities. They explain that "this ability to influence the partner via communication has been interpreted as evidence for a capacity to form shared goals with others", and that this capacity may be "what allows humans to engage in a wide range of cooperative activities". Therefore, we investigate a second use of the social influence reward: learning inter-agent communication protocols, using an approach similar to Reinforced Inter-Agent Learning (RIAL) BID3 (see FIG3). To train the agents to communicate, we augment our initial network with an additional A3C output head that learns a communication policy π_c over which symbol to emit, and a communication value function V_c (these are separate from the normal policy and value function used for acting in the environment, π_e and V_e, which are trained only with environmental reward E). The influence reward is used, in addition to environmental reward, to train the communication policy π_c. Counterfactuals are employed to assess how much influence an agent's communication message, m^A_t, has on another agent's action in the next timestep, a^B_{t+1}. Importantly, we hypothesize that communication can only be influential if it is useful to another agent. There is nothing that compels agent B to act based on agent A's communication message; if it does not contain valuable information, B is free to ignore it. In fact, previous work has shown that selfish agents do not learn to use this type of ungrounded, cheap-talk communication channel effectively. In contrast, for A to gain influence via communication, m^A_t must contain valuable information that informs B about how best to maximize its own reward, so much so that it actually causes B to change its intended action.

Computing the causal influence reward as introduced in Section 2.1 requires knowing the probability of B's next action given a counterfactual, p(a^B_t | ã^A_t, s^B_t), which we previously obtained by using a centralised controller that could access other agents' policy networks. While using a centralised training framework is common in MARL (e.g. BID4 BID3), it is less realistic than a scenario in which each agent is trained independently. We can relax this assumption and achieve independent training by equipping each agent with its own internal Model of Other Agents (MOA). The MOA consists of a second set of fully-connected and LSTM layers connected to the agent's convolutional layer (see FIG4), and is trained to predict all other agents' next actions given their current actions and the agent's egocentric view of the state: p(a_{t+1} | a_t, s^A_t). The MOA is trained using a cross-entropy loss over observed action trajectories. A trained MOA can be used to compute the social influence reward in the following way. Each agent can "imagine" counterfactual actions that it could have taken at each timestep, and use its internal MOA to predict the effect on other agents. It can then give itself reward for taking actions that it estimates were the most influential. This has an intuitive appeal, because it resembles how humans reason about their effect on others BID2. We may often find ourselves asking counterfactual questions of the form, "How would she have reacted if I had said or done something else in that situation?", which we can only answer using our internal model of others.
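The MOA-based variant can be sketched in the same way, replacing B's true policy with the agent's own learned predictions. Below, moa is a hypothetical callable returning p(a^B_{t+1} | a^A_t, s^A_t) as a probability vector; it stands in for, but is not, the exact network described in the paper.

```python
import numpy as np

def moa_influence_reward(moa, state_a, own_policy, a_taken, n_actions):
    """Influence reward computed from an internal Model of Other Agents.

    moa(state_a, a): predicted distribution over B's next action if A were to take action a.
    own_policy:      p(a^A_t | s^A_t), used to weight the counterfactual predictions.
    """
    predictions = np.stack([moa(state_a, a) for a in range(n_actions)])  # [n_actions_A, n_actions_B]
    conditional = predictions[a_taken]            # prediction given the action actually taken
    marginal = own_policy @ predictions           # counterfactual-averaged prediction
    return float(np.sum(conditional * np.log(conditional / marginal)))
```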
Both the MOA and communication approaches are an important improvement over the original model shown in Figure 1, which computed influence within a given timestep and required that agent A choose its action a^A_t first and that this action be transmitted to agent B as input. This meant that only some agents (those acting first) could be influencers. In contrast, using influence for communication or with a MOA are general approaches that can be implemented in any agent, and they allow all agents to mutually influence each other. (Figure caption: Shaded nodes are conditioned on, and we intervene on a^A_t (blue node) by replacing it with counterfactuals. Nodes with a green border must be modeled using the MOA module. Note that there is no backdoor path between a^A_t and s_t, since it would require traversing a collider that is not in the conditioning set.) We now seek to estimate influence in the next timestep, meaning the influence of a^A_t on a^B_{t+1}. We enable agents to condition their policy on the actions of other agents in the previous timestep (actions are visible), and we only give the social influence reward to an agent when the agent it is attempting to influence is within its field of view, because the estimates of p(a^B_{t+1} | a^A_t, s^A_t) are likely to be more accurate when B is visible to A. The latter constraint could have the side-effect of encouraging agents to stay in closer proximity. However, an intrinsic social reward based on proximity is also a reasonable approach to approximating human social motivation: humans seek affiliation and to spend time near other people BID30.

First proposed by BID20, Sequential Social Dilemmas (SSDs) are spatially and temporally extended multi-agent games that have a payoff structure similar to that of the Prisoner's Dilemma (PD). That is, an individual agent can obtain higher reward by engaging in defecting, non-cooperative behavior (and thus is rationally motivated to defect), but the average payoff per agent will be higher if all agents cooperate (see Figure 9 of the Appendix). The paradoxical reward structure makes it extremely difficult for traditional RL agents to learn to coordinate to solve the tasks. We experiment with two SSDs in this work: a public goods game, Cleanup, and a tragedy-of-the-commons game, Harvest (see FIG7). In both games, apples (green tiles) provide the rewards, and agents also have the ability to punish each other with a fining beam. Further details are available in Appendix Section 6.1.

Several attempts have been made to develop intrinsic social motivation rewards. BID25 developed hand-crafted rewards specific to a foraging environment, in which agents were punished for eating more than their fair share of food. Another approach gave agents an emotional intrinsic reward based on their perception of their neighbours' cooperativeness in a networked version of the iterated prisoner's dilemma BID34. This approach is limited to scenarios in which it is possible to directly classify each action as cooperative or non-cooperative, which is untenable in complex settings with long-term strategies, such as the SSDs under investigation here. Further work introduced an inequity-aversion motivation, which penalized agents if their rewards differed too much from those of the group. Another approach used prosocial reward shaping to show that if even a single agent is trained to optimize for the rewards of other agents, it can help the group obtain better collective outcomes BID21.
However, these approaches both require the ability to observe other agents' rewards, which may be an unrealistic assumption, depending on the application. Another body of work has focused on training agents to learn emergent communication protocols BID3, with many authors finding that selfish agents do not learn to use an ungrounded, cheap-talk communication channel effectively. BID0 find that, in theory, the information revealed in communication (in equilibrium) is proportional to the amount of common interest; thus, as agents' interests diverge, no communication is to be expected. And while communication can emerge when agents are prosocial BID3 or hand-crafted, self-interested agents do not learn to communicate. We test whether the social influence reward can encourage agents to learn to communicate more effectively in complex environments with challenging social dilemma dynamics. Interestingly, BID15 show that a robot trained with a curiosity-based intrinsic motivation to maximize learning progress learns to prefer vocalizing sounds imitated by another robot over interaction with other objects in the environment. Follow-up papers suggest that curiosity may be a sufficient motivation to encourage agents, or even children, to learn to communicate with others BID16 BID6.

Our MOA network is related to work on machine theory of mind BID22, which demonstrated that a model trained to predict agents' actions is able to model false beliefs. With LOLA, BID5 train agents that model the impact of their policy on the parameter updates of other agents, and directly incorporate this into the agent's own learning rule. Other work proposes causal influence as a way to measure coordination between agents, specifically using Convergence Cross Mapping (CCM) to analyze the degree of dependence between two agents' policies. The limitation of this approach is that CCM estimates of causality are known to degrade in the presence of stochastic effects BID29. Counterfactual reasoning has also been used in a multi-agent setting to marginalize out the effect of one agent on a predicted global value function estimating collective reward, and thus obtain an improved baseline for computing each agent's advantage function BID4. A similar paper shows that counterfactuals can be used with potential-based reward shaping to improve credit assignment for training a joint policy in multi-agent RL BID1. However, once again these approaches rely on a centralised controller. Following in the tradition of the empowerment literature, authors have investigated mutual information (MI) as a powerful tool for designing social rewards. BID28 train agents to maximize or minimize the MI between their actions and a categorical goal, and show how this can be used to signal or hide the agent's intentions. However, this approach depends on agents pursuing a known, categorical goal. BID8, in pursuit of the ultimate video game adversary, develop an agent that maximizes its empowerment over its own states, minimizes the player's empowerment over their states, and maximizes its empowerment over the player's next state. This third goal, termed transfer empowerment, is obtained by maximizing the MI between the agent's actions and the player's future state. While similar to our approach, the authors find that agents trained with transfer empowerment simply tend to stay near the player. Further, the agents are not trained with RL; rather, the authors analytically compute these measures in simple grid-world environments.
As such, the agent cannot learn to model other agents or the environment. The following sections present the results of training agents with the social influence reward in three settings: using a centralised controller, using an explicit communication channel, and using a learned model of other agents (MOA). In each case we compare against a standard A3C agent, and against an ablated version of the model which is architecturally identical but does not receive the influence reward. We measure the total collective reward obtained using the best hyperparameter setting, tested with 5 random seeds. It is worth noting that we use a curriculum learning approach which gradually increases the weight of the social influence reward over C steps (C ∈ [0.2, 3.5] × 10^8); this can lead to a slight delay before the influence models' performance improves. We also provide the results of an additional experiment in Section 6.2 of the Appendix, which tests the social influence reward in a simplified environment where the effects of influence are clear. We encourage the reader to examine that section to gain a better intuition for how social influence can foster cooperative behavior in an otherwise selfish agent.

Figures 6(a) and 6(d) show the results of training with the influence reward using a centralised controller, as described in Section 2.1. With this method, the influencer agents transmit their intended action to the influenced agents at each timestep. Therefore, we benchmark against an ablated version of the influence model with visible actions but no influence reward. As is evident in Figures 6(a) and 6(d), introducing an awareness of other agents' actions helps, but having the social influence reward eventually leads to significantly higher collective reward in both games. While these aggregated results demonstrate the success of our models, they are not sufficient to understand the mechanism through which social influence is helping the agents achieve cooperative behavior. Therefore, we investigated the trajectories produced by high-scoring models in both Cleanup and Harvest; the analysis revealed interesting behavior. As an example, in the Cleanup video available at https://youtu.be/iH_V5WKQxmo, a single agent (shown in purple) was trained with the social influence reward. We see that unlike the other agents, which continue to randomly move and explore while waiting for apples to spawn, the influencer has a strange economy of motion; it only moves on the map when it is pursuing an apple, then stops. Interestingly, examining the trajectory reveals that the influencer uses only two moves to explore the map: turn left, which turns the agent in place without traversing the map, and move right, which moves the agent one square to the right on the map. Why did the influencer learn to use only these two moves? We can see that the influencer agent only chooses to move right (i.e. traverse the map) when it is pursuing an apple which is present. The rest of the time it simply turns left on the spot. At t = 49, there is a moment of high influence between the influencer and the yellow influencee, which is shown in FIG9. The influencer has chosen to move right towards an apple that is outside of the egocentric field of view of the yellow agent. Because the purple agent only moves when apples are available, this signals to the yellow agent that an apple must be present above it which it cannot see. This changes the yellow agent's distribution over its planned action, p(a^B_t | a^A_t, s^B_t), and allows the purple agent to gain influence.
A similar moment occurs when the influencer signals to an agent that has been cleaning the river that no apples have appeared, by continuing to turn left (see FIG3 in the Appendix). In this example, the influencer agent learned to use its own actions as a sort of binary code which signals the presence or absence of apples in the environment. We also observe this effect in the influence agents in the Harvest task. This type of action-based communication could be likened to the bee waggle dance discovered by von Frisch. Thus, rewarding agents for increasing the mutual information between their actions gave rise not only to cooperative behavior, but in this case, to emergent communication. These results further support the idea of using influence as a reward for training agents to communicate.

Figures 6(b) and 6(e) show the results of training the agents to use an explicit communication channel, and its effect on their collective reward. In this case, the ablated baseline is a model that has the same structure as in FIG3, but in which the communication policy π_c is trained only with environmental reward. We observe that the agents which are trained to use the communication channel with the additional social influence reward achieve significantly higher collective reward in both games. In fact, in the case of Cleanup, we found that α = 0 in the optimal hyperparameter settings, meaning that it was most effective to train the communication head with zero extrinsic or environmental reward (see TAB3 in the Appendix). This suggests that influence alone can be a sufficient mechanism for training an effective communication policy.

To analyze the communication behaviour learned by the agents, we introduce three metrics. Speaker consistency is a normalized score ∈ [0, 1] which assesses the entropy of p(a^k | m^k) and p(m^k | a^k) to determine how consistently a speaker agent emits a particular symbol when it takes a particular action, and vice versa (the formula is given in Appendix Section 6.3.4). We expect this measure to be high if, for example, the speaker always emits the same symbol when it is cleaning the river. We also introduce two measures of instantaneous coordination (IC), which are both measures of mutual information (MI): symbol/action IC = I(m^A_t; a^B_{t+1}) measures the MI between the influencer/speaker's symbol and the influencee/listener's next action, and action/action IC = I(a^A_t; a^B_{t+1}) measures the MI between the influencer's action and the influencee's action in the next timestep. To compute these measures we first average over all trajectory steps, then take the maximum value between any two agents, to determine whether any pair of agents is coordinating. Note that these measures are all instantaneous, as they consider only short-term dependencies across two consecutive timesteps, and cannot capture whether an agent communicates influential compositional messages, i.e. information that requires several consecutive symbols to transmit and only then affects the other agents' behavior. The models trained with the influence reward exhibit more consistent communication and more coordination, especially in moments where influence is high. FIG10 presents the results. The speaker consistency metric reveals that agents trained with the influence reward communicate less ambiguously about their own actions, indicating that the emergent communication is more meaningful.
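The instantaneous coordination metrics can be estimated with a simple plug-in mutual-information estimate over trajectory counts, sketched below; this is an illustrative implementation of the metric's definition, not the authors' analysis code.

```python
import numpy as np

def empirical_mi(xs, ys):
    """Plug-in MI estimate between two aligned sequences of discrete values."""
    xs, ys = np.asarray(xs, dtype=int), np.asarray(ys, dtype=int)
    joint = np.zeros((xs.max() + 1, ys.max() + 1))
    for x, y in zip(xs, ys):
        joint[x, y] += 1.0
    joint /= joint.sum()
    px, py = joint.sum(axis=1, keepdims=True), joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log(joint[nz] / (px @ py)[nz])))

# symbol/action IC between speaker k and listener j over one trajectory:
#   empirical_mi(symbols_k[:-1], actions_j[1:])
# action/action IC is the same with symbols_k replaced by actions_k.
```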
The instantaneous coordination metrics demonstrate that the baseline agents trained without the influence reward show almost no signs of coordinating behavior with communication, i.e. speakers saying A and listeners doing B consistently. This is aligned with both theoretical results in the cheap-talk literature BID0 and recent empirical results in MARL (e.g. BID3). In contrast, we do see highly coordinated behavior between influence agents, but only when we limit the analysis to timesteps on which influence was high (cf. influential moments in FIG10). If we inspect the results for agents trained with influence on the two tasks, a common pattern emerges: influence is sparse in time. An agent's influence is greater than its mean influence in less than 10% of timesteps. Because the listener agent is not compelled to listen to any given speaker, listeners selectively listen to a speaker only when it is beneficial, and influence cannot occur all the time. Only when the listener decides to change its action based on the speaker's message does influence occur, and in these moments we observe high I(m^A_t; a^B_{t+1}); an effect that is lost when averaging over the entire trajectory. It appears the influencers have learned a strategy of communicating meaningful information about their own actions, and gaining influence when this becomes relevant enough for the listener to act upon it.

Examining the relationship between the reward obtained by individual agents and the degree to which they were influenced by other agents gives a compelling result: agents that are the most influenced also achieve higher individual environmental reward, E^k. We sampled 100 different experimental conditions (i.e., hyper-parameters and random seeds) for both games, collected the influence and individual rewards, normalized them across the 5 agents in each condition, and correlated the resulting lists of values. We found that agents who are more often influenced tend to achieve higher task reward in both Cleanup, ρ = .67, p < 0.001, and Harvest, ρ = .34, p < 0.001. This supports the hypothesis stated in Section 2.3: in order to gain influence over another agent by communicating with it, the communication message should contain information that helps the listener maximize its own environmental reward. Since better listeners/influencees are more successful in terms of task reward, we have evidence that useful information was transmitted to them.

Finally, we investigate whether the influence reward is still effective when computed without a centralised controller, but rather through each agent's own internal Model of Other Agents (MOA) network. In this case, we extend the training period from 3 × 10^8 steps to 5 × 10^8, in order to give the MOA model time to train. We also allow the policy LSTM to condition on the actions of other agents in the last timestep. We compare against an ablated version of this architecture (shown in FIG4), which does not use the output of the MOA module to compute a reward; rather, the MOA module can be thought of as an unsupervised auxiliary task that may help the model to learn a better shared embedding layer, encouraging it to encode information relevant to predicting other agents' behavior. Figures 6(c) and 6(f) show the collective reward obtained for agents trained with a MOA module. While we see that the auxiliary task does help to improve reward over the A3C baseline, the influence agent gets consistently higher collective reward.
Impressively, for Cleanup, the MOA model scores higher than the original influence agents trained using the centralised controller (CC). As shown in Figure 6(c), the MOA baseline also achieves high collective reward, suggesting that the auxiliary task of modeling other agents helps the MOA agents cooperate more effectively in Cleanup. Further, the independent design of the MOA method allows each agent to influence every other agent, thus generating more reward signal and a greater chance to develop two-way cooperative behavior. Table 4 of the Appendix gives the final collective reward obtained by each model for all three experiments. Interestingly, several influence models are able to achieve higher collective reward than the previous state-of-the-art scores for these environments (275 for Cleanup and 750 for Harvest). This is compelling, given that previous work relied on the assumption that agents could view one another's rewards; we make no such assumption, instead relying only on agents viewing each other's actions.

The experiments above have demonstrated that an intrinsic social reward based on having causal influence on the actions of other agents consistently improves cooperation and leads to higher collective return in the multi-agent social dilemmas under investigation. In some cases, the influence reward drove agents to learn an emergent communication protocol via their actions. This is compelling, and confirms the connection between maximizing influence and maximizing the mutual information between agents' actions. However, it is important to consider the limitations of the influence reward. Whether it will always give rise to cooperative behavior may depend on the specifics of the environment, the task, and the trade-off between environmental and influence reward. Although influence is arguably necessary for cooperation (e.g. two agents cooperating to lift a box would have a high degree of influence between their actions), it may not be sufficient, in that it may be possible to influence another agent without helping it. For example, it is possible that agents could have gained influence in the tasks studied here by threatening to attack other agents with their fining beam. We believe this type of behavior did not emerge because communicating information represents the cheapest and most effective way to gain influence; influencers do not have to sacrifice much in terms of their own environmental reward in order to communicate with other agents. Rewarding influence over an explicit communication channel may not be subject to this limitation, because influential communication may be inherently beneficial to the listener (at least in the case where listeners and speakers interact repeatedly). Since listeners can easily ignore communication messages if they do not help to obtain environmental reward, a speaker must transmit valuable information in order to gain influence through communication. There is no advantage to the speaker in communicating unreliably, because it would lose influence with the listener over time (although this is no longer guaranteed in one-shot interactions). Indeed, our results reveal that agents benefit from being influenced by (listening to) communication messages by obtaining higher individual reward, suggesting that the messages contain valuable information. Further, we found that the communication protocols learned via the influence reward were more meaningful, and that the influence reward allowed agents to obtain higher collective return.
Therefore, we suggest that influence could be a promising way to train emergent communication protocols in various settings. Finally, we have shown that influence can be computed by augmenting agents with an internal model that predicts the actions of other agents, and using this MOA model to simulate the effect of an agent's actions on others. This represents an important step forward in multi-agent intrinsic social motivation, because it implies that the influence reward can be computed without having access to another agent's reward function, or requiring a centralised controller. Using counterfactuals to allow agents to understand the effects of their actions on other agents could be a promising approach with a number of extensions. Perhaps agents could use counterfactuals to develop a form of 'empathy', by simulating how their actions affect another agent's value function. Or, social influence could be used to drive coordinated behavior in robots attempting to do cooperative manipulation and control tasks. Finally, if we view multi-agent networks as a single agent, influence could be used as a regularizer to encourage different modules of the network to integrate information from other networks; for example, perhaps it could prevent collapse in hierarchical RL.

In each of the sequential social dilemma (SSD) games studied above, an agent is rewarded +1 for every apple it collects, but the apples are a limited resource. In Harvest (a tragedy-of-the-commons game), apples regenerate more slowly the faster they are harvested, and if an exploiting agent consumes all of the apples, they will not grow back; agents must cooperate to harvest sustainably. In Cleanup (a public goods game), apples are generated based on the amount of waste in a nearby river. Agents can use a cleaning beam action to clean the river when they are positioned in it, or they can simply consume the apples that other agents produce. Agents also have a fining beam action which they can use to punish nearby agents with a fine of −50 reward. Figure 9 gives the Schelling diagram for both SSD tasks under investigation. A Schelling diagram BID23 BID20 shows the relative payoffs for a single agent's strategy given a fixed number of other agents who are cooperative. Schelling diagrams generalize payoff matrices to multi-agent settings, and make it easy to visually recognize game-theoretic properties like Nash equilibria (see BID23 for more details).

As a proof-of-concept experiment to test whether the influence reward works as expected, we constructed a special environment, shown in Figure 10. In this environment, one agent (teal) is trapped in a box. The other agent (purple) has a special action it can use to open the box... or it can simply choose to consume apples, which exist outside the box and are inexhaustible in this environment. As expected, a vanilla A3C agent learns to act selfishly; the purple agent will simply consume apples, and chooses the open-box action in 0% of trajectories once the policy has converged. A video of A3C agents trained in this environment is available at https://youtu.be/C8SE9_YKzxI, which shows that the purple agent leaves its compatriot trapped in the box throughout the trajectory. In contrast, an agent trained with the social influence reward chooses the open-box action in 88% of trajectories, releasing its fellow agent so that they are both able to consume apples. A video of this behavior is available at https://youtu.be/Gfo248-qt3c.
Further, as Figure 11(a) reveals, the purple influencer agent usually chooses to open the box within the first few steps of the trajectory, giving its fellow agent more time to collect reward. Most importantly, though, Figure 11(b) shows the influence reward over the course of a trajectory in the Box trapped environment. The agent chooses the open-box action in the second timestep; at this point, we see a corresponding spike in the influence reward. This reveals that the influence reward works as expected, incentivizing an action which has a strong (and in this case, prosocial) effect on the other agent's behavior.

Figure 11: The Box trapped proof-of-concept experiment reveals that an agent gets high influence for letting another agent out of a box in which it is trapped. (a) Number of times the open-box action occurs at each trajectory step over 100 trajectories. (b) Influence reward over a trajectory in Box trapped.

All models are trained with a single convolutional layer with a kernel of size 3, stride of size 1, and 6 output channels. This is connected to two fully connected layers of size 32 each, and an LSTM with 128 cells. We use a discount factor γ = 0.99. The number of agents N is fixed to 5. As mentioned in Section 2.2, the social influence reward can be computed using a number of divergence measures, including JSD. We also experiment with training the agents using the pointwise mutual information (the innermost term of Eq. 3), which is given by:

PMI(a^A_t; a^B_t | z_t) = log [ p(a^B_t | a^A_t, z_t) / p(a^B_t | z_t) ].

This PMI term is precisely the local information flow proposed by BID12 as a measure of direct causal effect; the expectation of the PMI over p(a^B, a^A | z) is the MI, and it gives us a measure of the influence of a single action of A on the single action taken by B. In addition to the comparison function used to compute influence, there are many other hyperparameters that can be tuned for each model. We use a random search over hyperparameters, ensuring a fair comparison by using the same search size over the baseline parameters that are shared with the influence models. For all models we search for the optimal entropy reward and learning rate, where we anneal the learning rate from an initial value lr_init to lr_final. The sections below give the parameters found to be most effective for each of the three experiments. In the centralised-controller setting we vary the number of influencers from 1 to 4, the influence reward weight β, and the number of curriculum steps C over which the weight of the influence reward is linearly increased. In this setting, since we have a centralised controller, we also experiment with giving the influence reward to the agent being influenced as well, and find that this sometimes helps. This 'influencee' reward is not used in the other two experiments, since it precludes independent training. The hyperparameters found to give the best performance for each model are shown in the Appendix tables.

Table 4: Final collective reward over the last 50 agent steps for each of the models considered. Bolded entries represent experiments in which the influence models significantly outperformed the scores reported in previous work on inequity aversion. This is impressive, considering the inequity-averse agents are able to view all other agents' rewards. We make no such assumption, and yet are able to achieve similar or superior performance.

The speaker consistency metric is calculated as:

SC = 1/2 [ (1 − H(p(a^k | m^k)) / H_max) + (1 − H(p(m^k | a^k)) / H_max) ],

where H is the entropy function and H_max is the maximum entropy based on the number of discrete symbols or actions.
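A sketch of the speaker-consistency computation, following the reconstructed formula above, is given below; the count-table representation and the exact normalization are assumptions of this sketch rather than details from the paper.

```python
import numpy as np

def _entropy(p):
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def speaker_consistency(joint_counts):
    """joint_counts[a, m]: co-occurrence counts of an agent's actions and emitted symbols."""
    joint = joint_counts / joint_counts.sum()
    p_a, p_m = joint.sum(axis=1), joint.sum(axis=0)
    h_max_m, h_max_a = np.log(len(p_m)), np.log(len(p_a))
    # conditional entropies H(m|a) and H(a|m), averaged over the conditioning variable
    h_m_given_a = sum(p_a[a] * _entropy(joint[a] / p_a[a]) for a in range(len(p_a)) if p_a[a] > 0)
    h_a_given_m = sum(p_m[m] * _entropy(joint[:, m] / p_m[m]) for m in range(len(p_m)) if p_m[m] > 0)
    return 0.5 * ((1.0 - h_m_given_a / h_max_m) + (1.0 - h_a_given_m / h_max_a))
```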
The goal of the metric is to measure how much of a 1:1 correspondence exists between a speaker's action and the speaker's communication message.

Figure 12: A moment of high influence between the purple influencer and the magenta influencee.

FIG3 shows an additional moment of high influence in the Cleanup game. The purple influencer agent can see the area within the white box, and therefore all of the apple patch. The field of view of the magenta influencee is outlined with the magenta box; it cannot see whether apples have appeared, even though it has been cleaning the river, which is the action required to cause apples to appear. When the purple influencer turns left and does not move towards the apple patch, this signals to the magenta agent that no apples have appeared, since otherwise the influencer would move right.

Table 4 presents the final collective reward obtained by each of the models tested in the experiments presented in Section 4. We see that in several cases, the influence agents are even able to outperform the state-of-the-art results reported on these tasks, despite the fact that the previously proposed solution requires that agents can view other agents' rewards, whereas we do not make this assumption and instead only require that agents can view each other's actions. It is important to note that collective reward is not always the perfect metric of cooperative behavior, a finding that was also discovered and emphasized in previous work BID20. In particular, we find that there is a spurious solution to the Harvest game in which one agent fails to learn and fails to collect any apples. This leads to very high collective reward, since it means there is one fewer agent that can exploit the others, and makes sustainable harvesting easier to achieve. Therefore, for the results shown in the paper, we eliminate any random seed in Harvest for which one of the agents has failed to learn to collect apples, as in previous work. However, here we also present an alternative strategy for assessing the overall collective outcomes: weighting the total collective reward by an index of equality of the individual returns. Specifically, we compute the Gini coefficient over the N agents' individual returns,

G = (Σ_{i=1}^N Σ_{j=1}^N |E^i − E^j|) / (2N Σ_{i=1}^N E^i),

which gives us a measure of the inequality of the returns, where G ∈ [0, 1], with G = 0 indicating perfect equality. Thus, 1 − G is a measure of equality; we use this to weight the collective reward for each experiment, and plot the results in FIG4. Once again, we see that the influence models give the highest final performance, even with this new metric. Finally, we would like to show that the influence reward is robust to the choice of hyperparameter settings. Therefore, in FIG5, we plot the collective reward of the top 5 best hyperparameter settings for each experiment, over 5 random seeds each. Once again, the influence models result in higher collective reward, which provides evidence that the model is robust to the choice of hyperparameters.

In this section we include the results of training explicitly prosocial agents, which directly optimize for the collective reward of all agents. Previous work (e.g. BID21) has shown that training agents to optimize for the rewards of other agents can help the group obtain better collective outcomes. Following a similar principle, we implemented agents that optimize for a convex combination of their own individual reward E^k and the collective reward of all other agents, Σ_{i=1, i≠k}^N E^i. Thus, the reward function for agent k is R^k = E^k + η Σ_{i=1, i≠k}^N E^i.
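The equality-weighted collective reward and the prosocial reward mix just described can be sketched as follows; this is an illustrative computation over per-agent returns, not the training code.

```python
import numpy as np

def gini(returns):
    """Gini coefficient over per-agent returns (0 = perfect equality); assumes a non-zero total."""
    r = np.asarray(returns, dtype=float)
    return np.abs(r[:, None] - r[None, :]).sum() / (2.0 * len(r) * r.sum())

def equality_weighted_collective_reward(returns):
    return (1.0 - gini(returns)) * float(np.sum(returns))

def prosocial_reward(env_rewards, k, eta=0.85):
    """R^k = E^k + eta * sum of the other agents' environmental rewards."""
    others = sum(r for i, r in enumerate(env_rewards) if i != k)
    return env_rewards[k] + eta * others
```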
We conducted the same hyperparameter search over the parameters mentioned in Section 6.3.1, varying the weight η placed on the collective reward. As expected, we find that agents trained to optimize for collective reward attain higher collective reward in both Cleanup and Harvest, as is shown in FIG7. In both games, the optimal value was η = 0.85. Interestingly, however, the equality in the individual returns for these agents is extremely low. Across the hyperparameter sweep, no solution to the Cleanup game which scored more than 20 points in terms of collective return was found in which all agents scored an individual return above 0. It seems that in Cleanup, when agents are trained to optimize for collective return, they converge on a solution in which some agents never receive any reward. Note that training agents to optimize for collective reward requires that each agent can view the rewards obtained by other agents. As discussed previously, the social influence reward is a novel way to obtain cooperative behavior that does not require this assumption.
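Returning to the equality-weighted collective reward introduced above, a minimal sketch (non-negative individual returns assumed; the function name is ours):

```python
import numpy as np

def equality_weighted_collective_reward(returns):
    """Sketch: weight the total collective reward by equality (1 - Gini).

    returns: array of shape (N,) holding each agent's individual return.
    """
    returns = np.asarray(returns, dtype=float)
    n = len(returns)
    total = returns.sum()
    if total <= 0:
        return 0.0
    # Gini coefficient: sum of absolute pairwise differences, normalized.
    abs_diffs = np.abs(returns[:, None] - returns[None, :]).sum()
    gini = abs_diffs / (2.0 * n * total)
    equality = 1.0 - gini          # 1 = perfect equality, 0 = maximal inequality
    return equality * total        # equality-weighted collective reward
```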
B1lG42C9Km
We reward agents for having a causal influence on the actions of other agents, and show that this gives rise to better cooperation and more meaningful emergent communication protocols.
This work adopts the very successful distributional perspective on reinforcement learning and adapts it to the continuous control setting. We combine this within a distributed framework for off-policy learning in order to develop what we call the Distributed Distributional Deep Deterministic Policy Gradient algorithm, D4PG. We also combine this technique with a number of additional, simple improvements such as the use of N-step returns and prioritized experience replay. Experimentally we examine the contribution of each of these individual components, and show how they interact, as well as their combined contributions. Our show that across a wide variety of simple control tasks, difficult manipulation tasks, and a set of hard obstacle-based locomotion tasks the D4PG algorithm achieves state of the art performance. The ability to solve complex control tasks with high-dimensional input and action spaces is a key milestone in developing real-world artificial intelligence. The use of reinforcement learning to solve these types of tasks has exploded following the work of the Deep Q Network (DQN) algorithm BID11, capable of human-level performance on many Atari games. Similarly, ground breaking achievements have been made in classical games such as Go. However, these algorithms are restricted to problems with a finite number of discrete actions. In control tasks, commonly seen in the robotics domain, continuous action spaces are the norm. For algorithms such as DQN the policy is only implicitly defined in terms of its value function, with actions selected by maximizing this function. In the continuous control domain this would require either a costly optimization step or discretization of the action space. While discretization is perhaps the most straightforward solution, this can prove a particularly poor approximation in highdimensional settings or those that require finer grained control. Instead, a more principled approach is to parameterize the policy explicitly and directly optimize the long term value of following this policy. In this work we consider a number of modifications to the Deep Deterministic Policy Gradient (DDPG) algorithm BID9. This algorithm has several properties that make it ideal for the enhancements we consider, which is at its core an off-policy actor-critic method. In particular, the policy gradient used to update the actor network depends only on a learned critic. This means that any improvements to the critic learning procedure will directly improve the quality of the actor updates. In this work we utilize a distributional BID0 version of the critic update which provides a better, more stable learning signal. Such distributions model the randomness due to intrinsic factors, among these is the inherent uncertainty imposed by function approximation in a continuous environment. We will see that using this distributional update directly in better gradients and hence improves the performance of the learning algorithm. Due to the fact that DDPG is capable of learning off-policy it is also possible to modify the way in which experience is gathered. In this work we utilize this fact to run many actors in parallel, all feeding into a single replay table. This allows us to seamlessly distribute the task of gathering Authors contributed equally. The Deterministic Policy Gradient (DPG) algorithm BID19 upon which this work is based starts from a different set of ideas, namely the policy gradient theorem of BID22. 
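For reference, the stochastic policy gradient theorem cited here and the deterministic variant discussed next are usually written as follows; these are standard statements from the cited literature, restated because the corresponding equations appear only as placeholders in the text.

```latex
% Stochastic policy gradient theorem (Sutton et al.):
\nabla_\theta J(\theta)
  = \mathbb{E}_{x \sim \rho^{\pi},\, a \sim \pi_\theta}
    \big[ \nabla_\theta \log \pi_\theta(a \mid x)\, Q^{\pi_\theta}(x, a) \big]

% Deterministic policy gradient theorem (Silver et al.), used throughout this work:
\nabla_\theta J(\theta)
  = \mathbb{E}_{x \sim \rho}
    \big[ \nabla_\theta \pi_\theta(x)\,
          \nabla_a Q^{\pi_\theta}(x, a)\big|_{a = \pi_\theta(x)} \big]
```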
The deterministic policy gradient theorem builds upon this earlier approach, but replaces the stochastic policy with one that includes no randomness. This approach is particularly important because it had previously been believed that the deterministic policy gradient did not exist in a model-free setting. The form of this gradient is also interesting in that it does not require one to integrate over the action space, and hence may require less samples to learn. DPG was later built upon by BID9 who extended this algorithm and made use of a deep neural network as the function approximator, primarily as a mechanism for extending these to work with vision-based inputs. Further, this entire endeavor lends itself very readily to an off-policy actorcritic architecture such that the actor's gradients depend only on derivatives through the learned critic. This means that by improving estimation of the critic one is directly able to improve the actor gradients. Most interestingly, there have also been recent attempts to distribute updates for the DDPG algorithm, (e.g. BID15 and more generally in this work we build on work of BID5 for implementing distributed actors. Recently, BID0 showed that the distribution over returns, whose expectation is the value function, obeys a distributional Bellman equation. Although the idea of estimating a distribution over returns has been revisited before BID21 BID13, Bellemare et al. demonstrated that this estimation alone was enough to achieve state-of-the-art on the Atari 2600 benchmarks. Crucially, this technique achieves these gains by directly improving updates for the critic. In this work we consider a standard reinforcement learning setting wherein an agent interacts with an environment in discrete time. At each timestep t the agent makes observations x t P X, takes actions a t P A, and receives rewards rpx t, a t q P R. Although we will in general make no assumptions about the inputs X, we will assume that the environments considered in this work have real-valued actions A " R d .In this standard setup, the agent's behavior is controlled by a policy π : X Ñ A which maps each observation to an action. The state-action value function, which describes the expected return conditioned on first taking action a P A from state x P X and subsequently acting according to π, is defined as DISPLAYFORM0 and is commonly used to evaluate the quality of a policy. While it is possible to derive an updated policy directly from Q π, such an approach typically requires maximizing this function with respect to a and is made complicated by the continuous action space. Instead we will consider a parameterized policy π θ and maximize the expected value of this policy by optimizing Jpθq " ErQ π θ px, π θ pxqqs. By making use of the deterministic policy gradient theorem BID19 one can write the gradient of this objective as DISPLAYFORM1 where ρ is the state-visitation distribution associated with some behavior policy. Note that by letting the behavior policy differ from π we are able to empirically evaluate this gradient using data gathered off-policy. While the exact gradient given by assumes access to the true value function of the current policy, we can instead approximate this quantity with a parameterized critic Q w px, aq. By introducing the Bellman operator pT π Qqpx, aq " rpx, aq`γE " DISPLAYFORM2 whose expectation is taken with respect to the next state x 1, we can minimize the temporal difference (TD) error, i.e. 
the difference between the value function before and after applying the Bellman update. Typically the TD error will be evaluated under separate target policy and value networks, i.e. networks with separate parameters pθ 1, w 1 q, in order to stabilize learning. By taking the twonorm of this error we can write the ing loss as DISPLAYFORM3 In practice we will periodically replace the target networks with copies of the current network weights. Finally, by training a neural network policy using the deterministic policy gradient in and training a deep neural to minimize the TD error in we obtain the Deep Deterministic Policy Gradient (DDPG) algorithm. Here a sample-based approximation to these gradients is employed by using data gathered in some replay table. The approach taken in this work starts from the DDPG algorithm and includes a number of enhancements. These extensions, which we will detail in this section, include a distributional critic update, the use of distributed parallel actors, N -step returns, and prioritization of the experience replay. First, and perhaps most crucially, we consider the inclusion of a distributional critic as introduced in BID0. In order to introduce the distributional update we first revisit in terms of the return as a random variable Z π, such that Q π px, aq " E Z π px, aq. The distributional Bellman operator can be defined as DISPLAYFORM0 where equality is with respect to the probability law of the random variables; note that this expectation is taken with respect to distribution of Z as well as the transition dynamics. While the definition of this operator looks very similar to the canonical Bellman operator defined in, it differs in the types of functions it acts on. The distributional variant takes functions which map from state-action pairs to distributions, and returns a function of the same form. In order to use this function within the context of the actor-critic architecture introduced above, we must parameterize this distribution and define a loss similar to that of Equation 4. We will write the loss as DISPLAYFORM1 for some metric d that measures the distance between two distributions. Two components that can have a significant impact on the performance of this algorithm are the specific parameterization used for Z w and the metric d used to measure the distributional TD error. In both cases we will give further details in Appendix A; in the experiments that follow we will use the Categorical distribution detailed in that section. We can complete this distributional policy gradient algorithm by including the action-value distribution inside the actor update from Equation 2. This is done by taking the expectation with respect to the action-value distribution, i.e. 
DISPLAYFORM2 Algorithm 1 D4PG Input: batch size M, trajectory length N, number of actors K, replay size R, exploration constant, initial learning rates α 0 and β 0 1: Initialize network weights pθ, wq at random 2: Initialize target weights pθ 1, w 1 q Ð pθ, wq 3: Launch K actors and replicate network weights pθ, wq to each actor 4: for t " 1,..., T do Sample M transitions px i:i`N, a i:i`N´1, r i:i`N´1 q of length N from replay with priority p i 6: DISPLAYFORM0 Compute the actor and critic updates DISPLAYFORM1 Update network parameters θ Ð θ`α t δ θ, w Ð w`β t δ w If t " 0 mod t target, update the target networks pθ 1, w 1 q Ð pθ, wq If t " 0 mod t actors, replicate network weights to the actors 11: end for 12: return policy parameters θ DISPLAYFORM0 Sample action a " π θ pxq` N p0, 1q DISPLAYFORM1 Execute action a, observe reward r and state x 1 4:Store px, a, r, x 1 q in replay 5: until learner finishes As before, this update can be empirically evaluated by replacing the outer expectation with a samplebased approximation. Next, we consider a modification to the DDPG update which utilizes N -step returns when estimating the TD error. This can be seen as replacing the Bellman operator with an N -step variant DISPLAYFORM2 where the expectation is with respect to the N -step transition dynamics. Although not used by, N -step returns are widely used in the context of many policy gradient algorithms (e.g. BID12 as well as Q-learning variants BID4 . This modification can be applied analogously to the distributional Bellman operator in order to make use of it when updating the distributional critic. Finally, we also modify the standard training procedure in order to distribute the process of gathering experience. Note from Equations that the actor and critic updates rely entirely on sampling from some state-visitation distribution ρ. We can parallelize this process by using K independent actors, each writing to the same replay table. A learner process can then sample from some replay table of size R and perform the necessary network updates using this data. Additionally sampling can be implemented using non-uniform priorities p i as in BID16. Note that this requires the use of importance sampling, implemented by weighting the critic update by a factor of 1{Rp i . We implement this procedure using the ApeX framework BID5 and refer the reader there for more details. Algorithm pseudocode for the D4PG algorithm which includes all the above-mentioned modifications can be found in Algorithm 1. Here the actor and critic parameters are updated using stochastic gradient descent with learning rates, α t and β t respectively, which are adjusted online using ADAM BID7 . While this pseudocode focuses on the learning process, also shown is pseudocode for actor processes which in parallel fill the replay table with data. The left-most set illustrates the actor network and critic torso used for the standard control and manipulation domains. The full critic architecture is completed by feeding the output of the critic torso into a relevant distribution, e.g. the categorical distribution, as defined in Section A. The right half of the figure similarly illustrates the architecture used by the parkour domains. In this section we describe the performance of the D4PG algorithm across a variety of continuous control tasks. To do so, in each environment we run our learning procedure and periodically snapshot the policy in order to test it without exploration noise. 
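Before turning to the experiments, the N-step target used in the critic update of Algorithm 1 can be sketched as follows (scalar form for clarity; the distributional learner applies the same recursion to the categorical return distribution, and the helper name is ours):

```python
def n_step_target(rewards, q_bootstrap, gamma):
    """Sketch of the N-step return target used in the critic update.

    rewards: [r_i, ..., r_{i+N-1}] from a stored trajectory segment.
    q_bootstrap: target-network value Q_{w'}(x_{i+N}, pi_{theta'}(x_{i+N})).
    Returns sum_n gamma^n * r_{i+n} + gamma^N * q_bootstrap.
    """
    target = q_bootstrap
    for r in reversed(rewards):      # fold rewards back from the bootstrap value
        target = r + gamma * target
    return target
```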
We will primarily be interested in the performance as a function of wall clock time, however we will also examine the data efficiency. Most interestingly, from a scientific perspective, we also perform a number of ablations which individually remove components of the D4PG algorithm in order to determine their specific contributions. First, we experiment with and without distributional updates. In this setting we focus on use of a categorical distribution as we found in preliminary experiments that the use of a mixture of Gaussians performed worse and was less stable with respect to hyperparameter values across different tasks; a selection of these runs can be found in Appendix C. Across all tasks-except for one which we will introduce later-we use 51 atoms for the categorical distribution. In what follows we will refer to non-distributional variants of this algorithm as Distributed DDPG (D3PG).Next, we consider prioritized and non-prioritized versions of these algorithm variants. For the nonprioritized variants, transitions are sampled from replay uniformly. For prioritized variants we use the absolute TD-error to sample from replay in the case of D3PG, and for D4PG we use the absolute distributional TD-error as described in Section A. We also vary the trajectory length N P t1, 5u. In all experiments we use a replay table of size R " 1ˆ10 6 and only consider behavior policies which add fixed Gaussian noise N p0, 1q to the current online policy; in all experiments we use a value of " 0.3. We experimented with correlated noise drawn from an Ornstein-Uhlenbeck process, as suggested by ), however we found this was unnecessary and did not add to performance. For all algorithms we initialize the learning rates for both actor and critic updates to the same value. In the next section we will present a suite of simple control problems for which this value corresponds to α 0 " β 0 " 1ˆ10´4; for the following, harder problems we set this to a smaller value of α 0 " β 0 " 5ˆ10´5. Similarly for the control suite we utilize a batch size of M " 256 and for all subsequent problems we will increase this to M " 512. We first consider evaluating performance on a number of simple, physical control tasks by utilizing a suite of benchmark tasks BID23 developed in the MuJoCo physics simulator BID24. Each task is run for exactly 1000 steps and provides either an immediate dense reward r t P r0, 1s or sparse reward r t P t0, 1u depending on the particular task. For each domain, the inputs presented to the agent consist of reasonably low-dimensional observations, many consisting of phys- ical state, joint angles, etc. These observations range between 6 and 60 dimensions, however note that the difficulty of the task is not immediately associated with its dimensionality. For example the acrobot is one of the lowest dimensional tasks in this suite which, due to its level of controllability, can prove much more difficult to learn than other, higher dimensional tasks. For an illustration of these domains see Figure 9; see Appendix D for more details. For algorithms in these experiments we consider actor and critic architectures of the form given in FIG0 and for each experiment we use K " 32 actors. FIG1 shows the performance of D4PG and its various ablations across the entire suite of control tasks. This set of plots is quite busy, however it serves as a broad set of tasks with which we can obtain a general idea of the algorithms performance. 
Later experiments on harder domains look more closely at the difference between algorithms. Here we also compare against the canonical (non-distributed) DDPG algorithm as a baseline, shown as a dotted black line. This removes all the enhancements proposed in this paper, and we can see that except on the simplest domain, Cartpole (Swingup), it performs worse than all other methods. This performance disparity worsens as we increase the difficulty of tasks, and hence for further experiments we will drop this line from the plot. Next, across all tasks we see that the best performance is obtained by the full D4PG algorithm (shown in purple and bold). Here we see that the longer unroll length of N " 5 is uniformly better (we show these as solid lines), and in particular we sometimes see for both D3PG and D4PG that an unroll length of N " 1 (shown as dashed lines) can occasionally in instability. This is especially apparent in the Cheetah (Walk) and Cartpole (Swingup Sparse) tasks. The next biggest gain is arguably due to the inclusion of the distributional critic update, where it is particularly helpful on the hardest tasks e.g. Humanoid (Run) and Acrobot. The manipulator is also quite difficult among this suite of tasks, and here we see that the inclusion of the distributional update does not help as much as in other tasks, although note that here the D3PG and D4PG variants obtain approximately the same performance. As far as the use of prioritization is concerned, it does not appear to contribute significantly to the performance of D4PG. This is not the case for D3PG, however, which on many tasks is helped significantly by the inclusion of prioritization. Next, we consider a set of tasks designed to highlight the ability of the D4PG agent to learn dexterous manipulation. Tasks of this form can prove difficult for many reasons, most notably the higher dimensionality of the control task, intermittent contact dynamics, and potential under-actuation of the manipulator. Here we use a simulated hand model implemented within MuJoCo, consisting of 13 actuators which control 22 degrees of freedom. For these experiments the wrist site is attached to a fixed location in space, about which it is allowed to rotate axially. In particular this allows the hand to pick up objects, rotate into a palm-up position, and manipulate them. We first consider a task in which a cylinder is dropped onto the hand from a random height, and the goal of the task is to catch the falling cylinder. The next task requires the agent to pick up an object from the tabletop and then maneuver it to a target position and orientation. The final task is one wherein a broad cylinder must be rotated inhand in order to match a target orientation. See Appendix E for further details regarding both the model and the tasks. For these tasks we use the same network architectures as in the previous section as well as K " 64 actors. In FIG3 we again compare the D4PG algorithm against ablations of its constituent components. Here we split the algorithms between N " 1 in the top row and N " 5 in the bottom row, and in particular we can see that across all algorithms N " 5 is uniformly better. For all tasks, the full D4PG algorithm performs either at the same level or better than other ablations; this is particularly apparent in the N " 5 case. Overall the use of priorization never seems to harm D4PG, however it does appear to be of limited additional value. Interestingly this is not necessarily the case with the D3PG variant (i.e. 
without distributional updates). Here we can see that prioritization sometimes harms the performance of D3PG, and this is very readily seen in the N " 1 case where the algorithm can either become unstable, or in the case of the Pickup and Orient task it completely fails to learn. Finally, we consider the parkour domain introduced by. In this setting the agent controls a simplified robotic walker which is rewarded for forward movement, but is impeded by a number of randomly sampled obstacles; see Figure 4 for a visualization and refer to the earlier work for further details. The first of our experiments considers a two-dimensional walker, i.e. a domain in which the walker is allowed to move horizontally and vertically, but is constrained to a fixed depth position. In this domain the obstacles presented to the agent include gaps in the floor surface, barriers it must jump over, and platforms that it can either run over or underneath. The agent is presented with proprioceptive observations x proprio P R 19 corresponding to the angles of its limbs and other functions of these quantities. It is also given access to observations x terrain P R Figure 4: Example frames taken from trained agents running in the two parkour domains.which includes features such as a depth map of the upcoming terrain, etc. In order to accommodate these inputs we utilize a network architecture as specified in FIG0. In particular we make use of a stack of feed-forward layers which process the terrain information to reduce it to a smaller number of hidden units before concatenating with the proporioceptive information for further processing. The actions in this domain take the form of torque controls a P R 6.In order to examine the performance of the D4PG algorithm in this setting we consider the ablations of the previous sections and we have further introduced a PPO baseline as utilized in the earlier paper of. For all algorithms, including PPO, we use K " 64 actors. These are shown in Figure 5 in the top row. As before we examine the performance separately for N " 1 and N " 5, and again we see that the higher unroll length in better performance. Note that we show the PPO baseline on both plots for consistency, but in both plots this is the same algorithm, with settings proposed in the earlier paper and unrolls of length 50.Here we again see a clear delineation and clear gains for each of the other algorithm components. The biggest gain comes from the inclusion of the distributional update, which we can see by comparing the non-prioritized D3PG/D4PG variants. We see marginal benefit to using prioritization for D3PG, but this gain disappears when we consider the distributional update. Finally, we can see when comparing to the PPO baseline that this algorithm compares favorably to D3PG in the case of N " 1, however is outperformed by D4PG; when N " 5 all algorithms outperform PPO.Next, in the plots shown in Figure 5 on the bottom row we also consider the performance not just in terms of training time, but also in terms of the sample complexity. In order to do so we plot the performance of each algorithm versus the number of actor steps, i.e. the quantity of transitions collected. This is perhaps more favorable to PPO, as the parallel actors considered in this work are not necessarily tuned for sample efficiency. Here we see that PPO is able to out-perform the non-prioritized version of D3PG, and early on in training is favorable compared to the prioritized version, although this trails off. 
However, we still see significant performance gains by utilizing the distributional updates, both in a prioritized and non-prioritized setting. Interestingly we see that the use of prioritization does not gain much, if any over the non-prioritized D4PG version. Early in the trajectory for N " 5, in fact, we see that the non-prioritized D4PG exhibits better performance, however later these performance curves level out. With respect to wall-clock time these small differences may be due to small latencies in the scheduling of different runs, as we see that this difference is less for the plot with respect to actor steps. Finally we consider a humanoid walker which is able to move in all three dimensions. The obstacles in this domain consist of gaps in the floor, barriers that must be jumped over, and walls with gaps that allow the agent to run through. For this experiment we utilize the same network architecture as in the previous experiment, except now the observations are of size x proprio P R 79 and x terrain P R 461. Again actions are torque controls, but in 21 dimensions. In this task we also increased the number of atoms for the categorical distribution from 51 to 101. This change increases the level of resolution for the distribution in order to keep the resolution roughly consistent with other tasks. This is a much higher dimensional problem than the previous parkour task with a significantly more difficult control task: the walker is more unstable and there are many more ways for the agent to fail than in the previous experiment. The for this particular domain are displayed in Figure 6, and here we concentrate on performance as a function of wall-clock time, restricted to the previously best performing roll-out length of N " 5. In this setting we see a clear delineation between first the PPO which are the poorest performing, the D3PG where the prioritized version has a slight edge, and finally the D4PG . Interestingly for D4PG we again see as in the twodimensional walker case, the use of prioritization seems to have no benefit, with both versions have almost identical performance curves; in fact the performance here is perhaps even closer than that of the previous set of experiments. In this work we introduced the D4PG, or Distributed Distributional DDPG, algorithm. Our main contributions include the inclusion of a distributional updates to the DDPG algorithm, combined with the use of multiple distributed workers all writing into the same replay table. We also consider a number of other, smaller changes to the algorithm. All of these simple modifications contribute to the overall performance of the D4PG algorithm; the biggest performance gain of these simple changes is arguably the use of N -step returns. Interestingly we found that the use of priority was less crucial to the overall D4PG algorithm especially on harder problems. While the use of prioritization was definitely able to increase the performance of the D3PG algorithm, we found that it can also lead to unstable updates. This was most apparent in the manipulation tasks. Finally, as our can attest, the D4PG algorithm is capable of state-of-the-art performance on a number of very difficult continuous control problems. In this section we consider two potential parameterized distributions for D4PG. Parameterized distributions, in this framework, are implemented as a neural network layer mapping the output of the critic torso (see FIG0 to the parameters of a given distribution (e.g. mean and variance). 
In what follows we will detail the distributions and their corresponding losses. Categorical Following Bellemare et al. FORMULA0, we first consider the categorical parameterization, a layer whose parameters are the logits ω i of a discrete-valued distribution defined over a fixed set of atoms z i. This distribution has hyperparameters for the number of atoms, and the bounds on the support (V min, V max). Given these, ∆ "Vmax´Vmin ´1corresponds to the distance between atoms, and z i " V min`i ∆ gives the location of each atom. We can then define the action-value distribution as DISPLAYFORM0 Observe that this distributional layer simply corresponds to a linear layer from the critic torso to the logits ω, followed by a softmax activation (see FIG5, left).However, this distribution is not closed under the Bellman operator defined earlier, due to the fact that adding and scaling these values will no longer lie on the support defined by the atoms. This support is explicitly defined by the (V min, V max) hyperparameters. As a we instead use a projected version of the distributional Bellman operator BID0; see Appendix B for more details. Letting p 1 be the probabilities of the projected distributional Bellman operator ΦT π applied to some target distribution Z target, we can write the loss in terms of the cross-entropy DISPLAYFORM1 Mixture of Gaussians We can also consider parameterizing the action-value distribution using a mixture of Gaussians; here the random variable Z has density given by DISPLAYFORM2 Thus, the distribution layer maps, through a linear layer, from the critic torso to the mixture weight ω i, mean µ i, and variance σ 2 i for each mixture component 0 ď i ď ´1 (see FIG5, center). We can then specify a loss corresponding to the cross-entropy portion of the KL divergence between two distributions. Given a sample transition px, a, r, x 1 q we can take samples from the target density z j " p target and approximate the cross-entropy term using DISPLAYFORM3 B CATEGORICAL PROJECTION OPERATORThe categorical parameterized distribution has finite support. Thus, the of applying the distributional Bellman equation will generally not coincide with this support. Therefore, some projection Figure 8: Results for using a mixture of Gaussians distribution on select control suite tasks. Shown are two learning rates as denoted in the legends as well as Categorical.step is required before minimizing the cross-entropy. The categorical projection of Bellemare et al. FORMULA0 is given by pΦpq i " ř ´1 j"0 h zi pz j qp j, @i, where h is a piecewise linear'hat' function, DISPLAYFORM4 In Figure 8 we display of running D4PG on a selection of control suite tasks using a mixture of Gaussians output distribution for two choices of learning rates. Here the distributional TD loss is minimized using the sample-based KL introduced earlier. While this is definitely a technique that is worth further exploration, we found in initial experiments that this choice of distribution underperformed the Categorical distribution by a fair margin. This lends further credence to the choice of distribution made in BID0. In this section we provide further details for the control suite domains. In particular see Figure 9 for images of the control suite tasks. The physics state S, action A, and observation X dimensionalities for each task are provided in Table 1. For the dexterous manipulation tasks we used a simulated model of the Johns Hopkins Modular Prosthetic Limb hand BID6 implemented in MuJoCo BID8. 
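Before continuing with the manipulation task details, the categorical projection operator Φ of Appendix B above can be sketched as follows; this is an equivalent per-sample formulation of the hat-function projection, and the helper names and loop structure are our own.

```python
import numpy as np

def categorical_projection(target_z, target_p, v_min, v_max, num_atoms):
    """Sketch of the projection Phi: move probability mass sitting at arbitrary
    support points target_z back onto the fixed atoms z_i = v_min + i * delta.

    target_z: shifted/scaled support, e.g. r + gamma^N * z  (shape [num_atoms]).
    target_p: probabilities on that support               (shape [num_atoms]).
    """
    delta = (v_max - v_min) / (num_atoms - 1)
    atoms = v_min + delta * np.arange(num_atoms)
    proj = np.zeros(num_atoms)
    clipped = np.clip(target_z, v_min, v_max)
    for zj, pj in zip(clipped, target_p):
        b = (zj - v_min) / delta               # continuous index of zj
        lo, hi = int(np.floor(b)), int(np.ceil(b))
        if lo == hi:                           # zj sits exactly on an atom
            proj[lo] += pj
        else:                                  # split mass linearly between neighbours
            proj[lo] += pj * (hi - b)
            proj[hi] += pj * (b - lo)
    return atoms, proj
```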
This anthropomorphic hand has a total of 22 degrees of freedom (19 in the fingers, 3 in the wrist), which are driven by a set of 13 position actuators (PD-controllers). The underactuation of the hand is due to coupling between some of the finger joints. For these experiments the wrist was positioned in a fixed location above a table, such that rotation and flexion about the wrist joints allowed the hand to pick up objects from the table, rotate into a palm-up position, and then manipulate them. We focused on a set of three tasks where the agent must learn to manipulate a cylindrical object FIG0 ). In each of these tasks, the observations contain the positions and velocities of all of the joints in the hand, the current position targets for the actuators in the hand, the position and quaternion of the object being manipulated, and its translational and rotational velocities. The Table 2: Observation components given in each of the manipulation tasks, and their corresponding dimensionalities. Here sin z, cos z refers to the sine and cosine of the target frame's angle of rotation about the z-axis. Figure 10: Sequences of frames illustrating the dexterous manipulation tasks we attempt to solve using D4PG. Top to bottom:'catch','pick-up-and-orient','rotate-in-hand'. The translucent objects shown in'pick-up-and-orient' and'rotate-in-hand' represent the goal states. observations given in each task are summarized in Table 2. The agent's actions are increments applied to the position targets for the actuators. In the'catch' task the agent must learn to catch a falling object before it strikes the table below. The position, height, and orientation of the object are randomly initialized at the start of each episode. The reward is given by r " ψppalm height´o bj height; c, mqwhere ψp; c, mq is a soft indicator function similar to one described by BID2 ψp; c, mq " " 1´tanhp w m q 2 if ą c, 1 otherwise. Here w " tanh´1p? 0.95q, and the tolerance c and margin m parameters are 0 cm and 5 cm respectively. Contact between the object and the table causes the current episode to terminate immediately with no reward, otherwise it will continue until a 500 step limit is reached. In the'pick-up-and-orient' task, the agent must pick up a cylindrical object from the table and maneuver it into a target position and orientation. Both the initial position and orientation of the object, and the position and orientation of the target are randomized between episodes. The reward function consists of two additive components that depend on the distance from the object to the target position, and on the angle between the z-axes of the object and target body frames where c pos =1 cm, m pos =5 cm, c ori =5˝, m ori =10˝. Note that the distance-dependent component of the reward multiplicatively gates the orientation component. This helps to encourage the agent to pick up the object before attempting to orient it to match the target. Each episode has a fixed duration of 500 steps. Finally, in the'rotate-in-hand' task the agent begins with a broad cylinder in its palm, and must rotate it axially in order to match a moving target. This requires dynamically forming and breaking contacts with the object being manipulated. The target angle is initialized uniformly, and then incremented on each time step using temporally correlated noise drawn from an Ornstein-Uhlenbeck process (σ=0.025˝, θ=0.01; Uhlenbeck & Ornstein 1930). 
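The soft indicator ψ(ε; c, m) defined above, and reused in the rotate-in-hand reward below, can be sketched as follows; the exact argument of the tanh is partly garbled in the text, so the reading 1 − tanh²(wε/m) outside the tolerance c is an assumption on our part.

```python
import numpy as np

def soft_indicator(eps, c, m):
    """Sketch of psi(eps; c, m): 1 inside the tolerance c, then decaying smoothly
    over the margin m. With w = arctanh(sqrt(0.95)), psi drops to ~0.05 at eps = m.
    """
    w = np.arctanh(np.sqrt(0.95))
    eps = np.asarray(eps, dtype=float)
    return np.where(eps > c, 1.0 - np.tanh(w * eps / m) ** 2, 1.0)
```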
The reward consists of two multiplicative components r " ψpcos´1pobj yaxis||xy, obj target yaxis||xy qq; c rot, m rot qψpcos´1pobj zaxis, obj target zaxis q; c ori, m ori q FORMULA0 where c rot =5˝, m rot =40˝, c ori =45˝, m ori =45˝, and ||xy denotes projection onto the global xy plane. The first component provides an incentive to match the axial rotation of the target, and the second component penalizes the agent for allowing the orientation of the cylinder's long axis to deviate too far from that of the target. The maximum episode duration is 1000 steps, with early termination if the object makes contact with the table.
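The temporally correlated noise used to move the target angle in 'rotate-in-hand' can be generated with a discrete-time Ornstein-Uhlenbeck update; a minimal sketch, assuming a unit time step and zero mean:

```python
import numpy as np

def ou_step(x, theta=0.01, sigma=0.025, mu=0.0, rng=np.random):
    """One discrete-time Ornstein-Uhlenbeck update:
    x <- x + theta * (mu - x) + sigma * N(0, 1).
    With theta = 0.01 and sigma = 0.025 this matches the target-angle noise
    parameters quoted above (unit time step assumed)."""
    return x + theta * (mu - x) + sigma * rng.standard_normal()
```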
SyZipzbCb
We develop an agent that we call the Distributional Deterministic Deep Policy Gradient algorithm, which achieves state of the art performance on a number of challenging continuous control problems.
State-action value functions (i.e., Q-values) are ubiquitous in reinforcement learning (RL), giving rise to popular algorithms such as SARSA and Q-learning. We propose a new notion of action value defined by a Gaussian smoothed version of the expected Q-value used in SARSA. We show that such smoothed Q-values still satisfy a Bellman equation, making them naturally learnable from experience sampled from an environment. Moreover, the gradients of expected reward with respect to the mean and covariance of a parameterized Gaussian policy can be recovered from the gradient and Hessian of the smoothed Q-value function. Based on these relationships we develop new algorithms for training a Gaussian policy directly from a learned Q-value approximator. The approach is also amenable to proximal optimization techniques by augmenting the objective with a penalty on KL-divergence from a previous policy. We find that the ability to learn both a mean and covariance during training allows this approach to achieve strong on standard continuous control benchmarks. Model-free reinforcement learning algorithms often alternate between two concurrent but interacting processes: policy evaluation, where an action value function (i.e., a Q-value) is updated to obtain a better estimate of the return associated with taking a specific action, and policy improvement, where the policy is updated aiming to maximize the current value function. In the past, different notions of Q-value have led to distinct but important families of RL methods. For example, SARSA BID18 BID22 BID26 ) uses the expected Q-value, defined as the expected return of following the current policy. Q-learning BID28 ) exploits a hard-max notion of Q-value, defined as the expected return of following an optimal policy. Soft Q-learning BID7 and PCL BID14 both use a soft-max form of Q-value, defined as the future return of following an optimal entropy regularized policy. Clearly, the choice of Q-value function has a considerable effect on the ing algorithm; for example, restricting the types of policies that can be expressed, and determining the type of exploration that can be naturally applied. In this work we introduce a new notion of action value: the smoothed action value functionQ π. Unlike previous notions, which associate a value with a specific action at each state, the smoothed Qvalue associates a value with a specific distribution over actions. In particular, the smoothed Q-value of a state-action pair (s, a) is defined as the expected return of first taking an action sampled from a normal distribution N (a, Σ(s)), centered at a, then following actions sampled from the current policy thereafter. In this way, the smoothed Q-value can also be interpreted as a Gaussian-smoothed or noisy version of the expected Q-value. We show that smoothed Q-values possess a number of interesting properties that make them attractive for use in RL algorithms. For one, the smoothed Q-values satisfy a single-step Bellman consistency, which allows bootstrapping to be used to train a function approximator. Secondly, for Gaussian policies, the standard optimization objective (expected return) can be expressed in terms of smoothed Q-values. Moreover, the gradient of this objective with respect to the mean and covariance of the Gaussian policy is equivalent to the gradient and the Hessian of the smoothed Q-value function, which allows one to derive updates to the policy parameters by having access to the derivatives of a sufficiently accurate smoothed Q-value function. 
This observation leads us to propose an algorithm called Smoothie, which in the spirit of (Deep) Deterministic Policy Gradient (DDPG) BID21 BID11, trains a policy using the derivatives of a trained (smoothed) Q-value function, thus avoiding the high-variance of stochastic updates used in standard policy gradient algorithms BID29 BID10. Unlike DDPG, which is well-known to have poor exploratory behavior BID7, the approach we develop is able to utilize a non-deterministic Gaussian policy parameterized by both a mean and a covariance, thus allowing the policy to be exploratory by default and alleviating the need for excessive hyperparameter tuning. Furthermore, we show that Smoothie can be easily adapted to incorporate proximal policy optimization techniques by augmenting the objective with a penalty on KL-divergence from a previous version of the policy. The inclusion of a KL-penalty is not feasible in the standard DDPG algorithm, but we show that it is possible with our formulation, and it significantly improves stability and overall performance. On standard continuous control benchmarks, our are competitive with or exceed state-of-the-art, especially for more difficult tasks in the low-data regime. We consider the standard model-free RL framework, where an agent interacts with a stochastic black-box environment by sequentially observing the state of the environment, emitting an action, and receiving a reward feedback; the goal is to find an agent that achieves maximal cumulative discounted reward. This problem can be expressed in terms of a Markov decision process (MDP) that consists of a state space S and an action space A, where at iteration t the agent encounters a state s t ∈ S and emits an action a t ∈ A, after which the environment returns a scalar reward r t ∼ R(s t, a t) and places the agent in a new state s t+1 ∼ P (s t, a t).We model the behavior of the agent using a stochastic policy π that produces a distribution over feasible actions at each state s as π(a | s). The optimization objective (expected discounted return), as a function of the policy, can then be expressed in terms of the expected action value function Q π (s, a) by, DISPLAYFORM0 where ρ π (s) is the stationary distribution of the states under π, and Q π (s, a) is recursively defined using the Bellman equation, DISPLAYFORM1 where γ ∈ is the discount factor. For brevity, we will often suppress explicit denotation of the sampling distribution R over immediate rewards and the distribution P over state transitions. The policy gradient theorem BID23 expresses the gradient of O ER (π θ) w.r.t. θ, the tunable parameters of a policy π θ, as, DISPLAYFORM2 Many reinforcement learning algorithms, including policy gradient and actor-critic variants, trade off variance and bias when estimating the random variable inside the expectation in; for example, by attempting to estimate Q π (s, a) accurately using function approximation. In the simplest scenario, an unbiased estimate of Q π (s, a) is formed by accumulating discounted rewards from each state forward using a single Monte Carlo sample. In this paper, we focus on multivariate Gaussian policies over continuous action spaces, A ≡ R da. 
We represent the observed state of the MDP as a d s -dimensional feature vector Φ(s) ∈ R ds, and parametrize the Gaussian policy by a mean and covariance function, respectively µ(s): DISPLAYFORM3 These map the observed state of the environment to a Gaussian distribution, DISPLAYFORM4 where DISPLAYFORM5 Below we develop new RL training methods for this family of parametric policies, but some of the ideas presented may generalize to other families of policies as well. We begin the formulation by reviewing some prior work on learning Gaussian policies. BID21 present a new formulation of the policy gradient, called the deterministic policy gradient, for the family of Gaussian policies in the limit where the policy covariance approaches zero. In such a scenario, the policy becomes deterministic because sampling from the policy always returns the Gaussian mean. The key observation of BID21 is that under a deterministic policy π ≡ (µ, Σ → 0), one can estimate the expected future return from a state s as, Then, one can express the gradient of the optimization objective (expected discounted return) for a parameterized π θ ≡ µ θ as, DISPLAYFORM0 This can be thought of as a characterization of the policy gradient theorem for deterministic policies. In the limit of Σ → 0, one can also re-express the Bellman equation FORMULA1 as, DISPLAYFORM1 Therefore, a value function approximator Q π w can be optimized by minimizing the Bellman error, DISPLAYFORM2 for transitions (s, a, r, s) sampled from a dataset D of interactions of the agent with the environment. Algorithms like DDPG BID11 ) alternate between improving the value function by gradient descent on, and improving the policy based on.In practice, to gain better sample efficiency, BID5 and BID21 replace the on-policy state distribution ρ π (s) in with an off-policy distribution ρ β (s) based on a replay buffer. After this substitution, the policy gradient identity in does not hold exactly, however, prior work finds that this works well in practice and improves sample efficiency. We also adopt a similar approximation in our method to make use of off-policy data. In this paper, we introduce smoothed action value functions, the gradients of which provide an effective signal for optimizing the parameters of a Gaussian policy. Our notion of smoothed Qvalues, denotedQ π (s, a), differs from ordinary Q-values Q π (s, a) in that smoothed Q-values do not assume the first action of the agent is fully specified, but rather they assume that only the mean of the distribution of the first action is known. Hence, to computeQ π (s, a), one has to perform an expectation of Q π (s,ã) for actionsã drawn in the vicinity of a. More formally, smoothed action values are defined as, DISPLAYFORM0 With this definition ofQ π, one can re-express the expected reward objective for a Gaussian policy π ≡ (µ, Σ) as, DISPLAYFORM1 The insight that differentiates this approach from prior work including BID8; BID4 is that instead of learning a function approximator for Q π (s, a) and then drawing samples to approximate the expectation in and its derivative, we directly learn a function approximator forQ π (s, a).The key observation that enables direct bootstrapping of smoothed Q-values,Q π (s, a), is that their form allows a notion of Bellman consistency. 
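Written out, the two placeholder equations above take the following form, reconstructed from the verbal definitions in the surrounding text:

```latex
% Smoothed action value (DISPLAYFORM0 above):
\tilde{Q}^{\pi}(s, a)
  \;=\; \mathbb{E}_{\tilde{a} \sim \mathcal{N}(a,\, \Sigma(s))}
        \big[\, Q^{\pi}(s, \tilde{a}) \,\big]

% Expected reward objective of a Gaussian policy \pi \equiv (\mu, \Sigma)
% expressed in terms of the smoothed Q-values (DISPLAYFORM1 above):
O_{\mathrm{ER}}(\pi)
  \;=\; \mathbb{E}_{s \sim \rho^{\pi}}
        \big[\, \tilde{Q}^{\pi}(s, \mu(s)) \,\big]
```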
First, note that for Gaussian policies π ≡ (µ, Σ) we have DISPLAYFORM2 Then, combining FORMULA0 and FORMULA0, one can derive the following one-step Bellman equation for smoothed Q-values, DISPLAYFORM3 wherer ands are sampled from R(s,ã) and P (s,ã). Below, we elaborate on how one can make use of the derivatives ofQ π to learn µ and Σ, and how the Bellman equation in enables direct optimization ofQ π. We parameterize a Gaussian policy π θ,φ ≡ (µ θ, Σ φ) in terms of two sets of parameters θ and φ for the mean and the covariance. The gradient of the objective w.r.t. mean parameters follows from the policy gradient theorem and is almost identical to, DISPLAYFORM0 Estimating the derivative of the objective w.r.t. covariance parameters is not as straightforward, sincẽ Q π is not a direct function of Σ. However, a key observation of this work is that the second derivative ofQ π w.r.t. actions is sufficient to exactly compute the derivative ofQ π w.r.t. Σ, DISPLAYFORM1 A proof of this identity is provided in the Appendix. The proof may be easily derived by expressing both sides of the equation using standard matrix calculus like DISPLAYFORM2 Then, the full derivative w.r.t. φ takes the form, DISPLAYFORM3 We can think of two ways to optimizeQ 2 whereã ∼ N (a, Σ(s)), using several samples. When the target values in these residuals are treated as fixed (i.e., using a target network), such a training procedure will achieve a fixed point whenQ π w (s, a) satisfies the recursion in the Bellman equation.The second approach requires a single function approximator forQ π w (s, a), ing in a simpler implementation, and thus we use this approach in our experimental evaluation. Suppose one has access to a tuple (s,ã,r,s) sampled from a replay buffer with knowledge of the sampling probability q(ã | s) (possibly unnormalized). Then assuming that this sampling distribution has a full support, we draw a phantom action a ∼ N (ã, Σ(s)) and optimizeQ π w (s, a) by minimizing a weighted Bellman error DISPLAYFORM0 2. For a specific pair of state and action (s, a) the expected value of the objective is, DISPLAYFORM1 Note that N (a|ã, Σ(s)) = N (ã|a, Σ(s)). Therefore, when the target valuer + γQ π w (s, µ(s)) is treated as fixed (e.g., when using target networks) this training procedure reaches an optimum wheñ Q π w (s, a) satisfies the recursion in the Bellman equation FORMULA0. In practice, we find that it is unnecessary to keep track of the probabilities q(ã | s), and assume the replay buffer provides a near-uniform distribution of actions conditioned on states. Other recent work has also benefited from ignoring or heavily damping importance weights BID13 BID27 BID20. However, it is possible when interacting with the environment to save the probability of sampled actions along with their transitions, and thus have access to q(ã | s) ≈ N (ã | µ old (s), Σ old (s)). Policy gradient algorithms are notoriously unstable, particularly in continuous control problems. Such instability has motivated the development of trust region methods that attempt to mitigate the issue by constraining each gradient step to lie within a trust region BID19, or augmenting the expected reward objective with a penalty on KL-divergence from a previous policy BID15 BID20 BID0. These stabilizing techniques have thus far not been applicable to algorithms like DDPG, since the policy is deterministic. The formulation we propose in this paper, however, is easily amenable to trust region optimization. 
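Putting the mean and covariance updates above together, a minimal sketch of one policy step at a single state, before the proximal KL term discussed next is added; the callables, learning rates, and the suggestion to project the covariance back onto positive-definite matrices are our own illustrative choices, not the authors' implementation.

```python
def smoothie_policy_step(mu, Sigma, grad_q, hess_q, lr_mu=1e-3, lr_Sigma=1e-4):
    """Sketch of one Smoothie policy update at a single state s.

    mu, Sigma: current Gaussian policy parameters at s (mean vector, covariance).
    grad_q:  callable a -> d q_tilde(s, a) / d a      (shape [d_a])
    hess_q:  callable a -> d^2 q_tilde(s, a) / d a^2  (shape [d_a, d_a])
    The mean ascends the gradient of the smoothed Q-value, and the covariance
    ascends half of its Hessian, per the identity dQ~/dSigma = 1/2 d^2Q~/da^2.
    In practice these gradients are back-propagated into the parameters of
    mu_theta(s) and Sigma_phi(s) rather than applied to mu and Sigma directly.
    """
    a = mu
    mu_new = mu + lr_mu * grad_q(a)
    Sigma_new = Sigma + lr_Sigma * 0.5 * hess_q(a)
    # Optionally project Sigma_new back onto positive-definite matrices here.
    return mu_new, Sigma_new
```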
Specifically, we may augment the objective with a penalty DISPLAYFORM0 where π old ≡ (µ old, Σ old) is a previous parameterization of the policy. The optimization is straightforward, since the KL-divergence of two Gaussians can be expressed analytically. This paper follows a long line of work that uses Q-value functions to stably learn a policy, which in the past has been used to either approximate expected BID18 BID26 BID6 or optimal BID28 BID21 BID14 BID7 BID12 future value. Work that is most similar to what we present are methods that exploit gradient information from the Q-value function to train a policy. Deterministic policy gradient BID21 is perhaps the best known of these. The method we propose can be interpreted as a generalization of the deterministic policy gradient. Indeed, if one takes the limit of the policy covariance Σ(s) as it goes to 0, the proposed Q-value function becomes the deterministic value function of DDPG, and the updates for training the Q-value approximator and the policy mean are identical. Stochastic Value Gradient (SVG) BID8 ) also trains stochastic policies using an update that is similar to DDPG (i.e., SVG with replay). The key differences with our approach are that SVG does not provide an update for the covariance, and the mean update in SVG estimates the gradient with a noisy Monte Carlo sample, which we avoid by estimating the smoothed Q-value function. Although a covariance update could be derived using the same reparameterization trick as in the mean update, that would also require a noisy Monte Carlo estimate. Methods for updating the covariance along the gradient of expected reward are essential for applying the subsequent trust region and proximal policy techniques. More recently, BID4 introduced expected policy gradients (EPG), a generalization of DDPG that provides updates for the mean and covariance of a stochastic Gaussian policy using gradients of an estimated Q-value function. In that work, the expected Q-value used in standard policy gradient algorithms such as SARSA BID22 BID18 BID26 ) is estimated. The updates in EPG therefore require approximating an integral of the expected Q-value function. Our analogous process directly estimates an integral (via the smoothed Q-value function) and avoids approximate integrals, thereby making the updates simpler. Moreover, while BID4 rely on a quadratic Taylor expansion of the estimated Q-value function, we instead rely on the strength of neural network function approximators to directly estimate the smoothed Q-value function. The novel training scheme we propose for learning the covariance of a Gaussian policy relies on properties of Gaussian integrals BID2 BID16. Similar identities have been used in the past to derive updates for variational auto-encoders BID9 and Gaussian back-propagation BID17.Finally, the perspective presented in this paper, where Q-values represent the averaged return of a distribution of actions rather than a single action, is distinct from recent advances in distributional RL BID1. Those approaches focus on the distribution of returns of a single action, whereas we consider the single average return of a distribution of actions. Although we restrict our attention in this paper to Gaussian policies, an interesting topic for further investigation is to study the applicability of this new perspective to a wider class of policy distributions. We utilize the insights from Section 3 to introduce a new RL algorithm, Smoothie. 
Smoothie maintains a parameterizedQ π w trained via the procedure described in Section 3.2. It then uses the gradient and Hessian of this approximation to train a Gaussian policy µ θ, Σ φ using the updates stated in FORMULA0 and FORMULA0. See Algorithm 1 for a simplified pseudocode of our algorithm. Input: Environment EN V, learning rates η π, η Q, discount factor γ, KL-penalty λ, batch size B, number of training steps N, target network lag τ.Initialize θ, φ, w, set θ = θ, φ = φ, w = w. for i = 0 to N − 1 do // Collect experience Sample action a ∼ N (µ θ (s), Σ φ (s)) and apply to EN V to yield r and s. Insert transition (s, a, r, s) to replay buffer. DISPLAYFORM0 We perform a number of evaluations of Smoothie compared to DDPG. We choose DDPG as a baseline because it utilizes gradient information of a Q-value approximator, much like our algorithm; and is a standard algorithm well-known to have achieve good, sample-efficient performance on continuous control benchmarks. To evaluate Smoothie we begin with a simple synthetic task which allows us to study its behavior in a restricted setting. We devised a simple single-action one-shot environment in which the reward function is a mixture of two Gaussians, one better than the other (see FIG1). We initialize the policy mean to be centered on the worse of the two Gaussians. We plot the learnable policy mean and standard deviation during training for Smoothie and DDPG in FIG1 (Left). Smoothie learns both the mean and variance, while DDPG learns only the mean and the variance plotted is the exploratory noise, whose scale is kept fixed during training. As expected we observe that DDPG cannot escape the local optimum. At the beginning of training it exhibits some movement away from the local optimum (likely due to the initial noisy approximation given by Q π w), it is unable to progress very far from the initial mean. Note that this is not an issue of exploration. The exploration scale is high enough that Q π w is aware of the better Gaussian. The issue is in the update for µ θ, which is only with regard to the derivative of Q π w at the current mean. On the other hand, we find Smoothie is successfully able to solve the task. This is because the smoothed reward function approximated byQ π w has a derivative which clearly points µ θ towards the better Gaussian. We also observe that Smoothie is able to suitably adjust the covariance Σ φ during training. Initially, Σ φ decreases due to the concavity of the smoothed reward function. As a region of convexity is entered, it begins to increase, before again decreasing to near-zero as µ θ approaches the global optimum. The learnable policy mean and standard deviation during training for Smoothie and DDPG on a simple one-shot synthetic task. The standard deviation for DDPG is the exploratory noise kept constant during training. Right: The reward function for the synthetic task along with its Gaussian-smoothed version. We find that Smoothie can successfully escape the lower-reward local optimum. We also notice Smoothie increases and decreases its policy variance as the convexity/concavity of the smoothed reward function changes. We now turn our attention to standard continuous control benchmarks available on OpenAI Gym BID3 utilizing the MuJoCo environment BID24.Our implementations utilize feed forward neural networks for policy and Q-values. We parameterize the covariance Σ φ as a diagonal given by e φ. The exploration for DDPG is determined by an Ornstein-Uhlenbeck process BID25 BID11. 
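Since the covariance is parameterized as a diagonal e^φ, the KL penalty used in Algorithm 1 has a simple closed form; a sketch of one common direction, KL(π_new || π_old), evaluated per state (the function name is ours):

```python
import numpy as np

def diag_gaussian_kl(mu, log_var, mu_old, log_var_old):
    """Sketch of KL( N(mu, diag(exp(log_var))) || N(mu_old, diag(exp(log_var_old))) ),
    the per-state proximal penalty for a Gaussian policy with diagonal covariance."""
    var, var_old = np.exp(log_var), np.exp(log_var_old)
    return 0.5 * np.sum(
        log_var_old - log_var + (var + (mu - mu_old) ** 2) / var_old - 1.0
    )
```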
Additional implementation details are provided in the Appendix. Each plot shows the average reward and standard deviation, clipped at the min and max of six randomly seeded runs after choosing the best hyperparameters. We see that Smoothie is competitive with DDPG even when DDPG uses a hyperparameter-tuned noise scale, and Smoothie learns the optimal noise scale (the covariance) during training. Moreover, we observe significant advantages in terms of final reward performance, especially in the more difficult tasks like Hopper, Walker2d, and Humanoid. Across all tasks, TRPO is not sufficiently sample-efficient to provide a competitive baseline.

We compare the results of Smoothie and DDPG in FIG2. For each task we performed a hyperparameter search over actor learning rate, critic learning rate, and reward scale, and plot the average of six runs for the best hyperparameters. For DDPG we extended the hyperparameter search to also consider the scale and damping of the exploratory noise provided by the Ornstein-Uhlenbeck process. Smoothie, on the other hand, contains an additional hyperparameter to determine the weight on the KL-penalty. Despite DDPG having the advantage of its exploration decided by a hyperparameter search while Smoothie must learn its exploration without supervision, we find that Smoothie performs competitively or better across all tasks, exhibiting a slight advantage in Swimmer and Ant, while showing more dramatic improvements in Hopper, Walker2d, and Humanoid. The improvement is especially dramatic for Hopper, where the average reward is doubled. We also highlight the results for Humanoid, which, as far as we know, are the best published for a method that only trains on the order of millions of environment steps. In contrast, TRPO, which to the best of our knowledge is the only other algorithm that can achieve better performance, requires on the order of tens of millions of environment steps to achieve comparable reward. This gives added evidence to the benefits of using a learnable covariance and not restricting a policy to be deterministic.

Empirically, we found the introduction of a KL-penalty to improve the performance of Smoothie, especially on harder tasks. We present a comparison of the results of Smoothie with and without the KL-penalty on the four harder tasks in FIG3. A KL-penalty to encourage stability is not possible in DDPG; thus, our algorithm provides a much-needed solution to the inherent instability in DDPG training. We observe benefits of using a proximal policy optimization method, especially in Hopper and Humanoid, where the performance improvement is significant without sacrificing sample efficiency.

We have presented a new Q-value function, Q̃^π, that is a Gaussian-smoothed version of the standard expected Q-value, Q^π. The advantage of using Q̃^π over Q^π is that its gradient and Hessian possess an intimate relationship with the gradient of expected reward with respect to the mean and covariance of a Gaussian policy. The resulting algorithm, Smoothie, is able to successfully learn both mean and covariance during training, leading to performance that can match or surpass that of DDPG, especially when incorporating a penalty on divergence from a previous policy. The success of Q̃^π is encouraging. Intuitively, it may be argued that learning Q̃^π is more sensible than learning Q^π. The smoothed Q-values by definition make the true reward surface smoother, and thus possibly easier to learn; moreover, the smoothed Q-values have a more direct relationship with the expected discounted return objective.
We encourage future work to further investigate these claims as well as techniques to apply the underlying motivations for Q̃^π to other types of policies.

A PROOF OF EQUATION FORMULA0

We note that similar identities for Gaussian integrals exist in the literature BID16 BID17 and point the reader to these works for further information. The specific identity we state may be derived using standard matrix calculus. We make use of the fact that DISPLAYFORM0 and, for symmetric A, ∂/∂A ||v|| DISPLAYFORM1. We omit s from Σ(s) in the following equations for succinctness. The LHS of FORMULA0. Meanwhile, towards tackling the RHS of FORMULA0, we note that DISPLAYFORM2. Thus we have DISPLAYFORM3.
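One common form of the Gaussian-integral identities referred to above is given by Bonnet's and Price's theorems; the rendering below is an assumption about which identities are meant, and the exact statement used in the proof may differ.

```latex
% Bonnet's theorem (gradient w.r.t. the mean) and Price's theorem
% (gradient w.r.t. the covariance) for Gaussian expectations of a
% twice-differentiable f.  Assumed forms of the identities cited above.
\begin{align}
\nabla_{\mu}\, \mathbb{E}_{a \sim \mathcal{N}(\mu, \Sigma)}\!\left[ f(a) \right]
  &= \mathbb{E}_{a \sim \mathcal{N}(\mu, \Sigma)}\!\left[ \nabla_{a} f(a) \right], \\
\nabla_{\Sigma}\, \mathbb{E}_{a \sim \mathcal{N}(\mu, \Sigma)}\!\left[ f(a) \right]
  &= \tfrac{1}{2}\, \mathbb{E}_{a \sim \mathcal{N}(\mu, \Sigma)}\!\left[ \nabla^{2}_{a} f(a) \right].
\end{align}
```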
B1nLkl-0Z
We propose a new Q-value function that enables better learning of Gaussian policies.
Interactive Fiction games are text-based simulations in which an agent interacts with the world purely through natural language. They are ideal environments for studying how to extend reinforcement learning agents to meet the challenges of natural language understanding, partial observability, and action generation in combinatorially-large text-based action spaces. We present KG-A2C, an agent that builds a dynamic knowledge graph while exploring and generates actions using a template-based action space. We contend that the dual uses of the knowledge graph to reason about game state and to constrain natural language generation are the keys to scalable exploration of combinatorially large natural language actions. Results across a wide variety of IF games show that KG-A2C outperforms current IF agents despite the exponential increase in action space size. Natural language communication has long been considered a defining characteristic of human intelligence. We are motivated by the question of how learning agents can understand and generate contextually relevant natural language in service of achieving a goal. In pursuit of this objective we study Interactive Fiction (IF) games, or text-adventures: simulations in which an agent interacts with the world purely through natural language-"seeing" and "talking" to the world using textual descriptions and commands. To progress in these games, an agent must generate natural language actions that are coherent, contextually relevant, and able to effect the desired change in the world. Complicating the problem of generating contextually relevant language in these games is the issue of partial observability: the fact that the agent never has access to the true underlying world state. IF games are structured as puzzles and often consist of an complex, interconnected web of distinct locations, objects, and characters. The agent needs to thus reason about the complexities of such a world solely through the textual descriptions that it receives, descriptions that are often incomplete. Further, an agent must be able to perform commonsense reasoning-IF games assume that human players possess prior commonsense and thematic knowledge (e.g. knowing that swords can kill trolls or that trolls live in dark places). Knowledge graphs provide us with an intuitive way of representing these partially observable worlds. Prior works have shown how using knowledge graphs aids in the twin issues of partial observability (a) and commonsense reasoning (b), but do not use them in the context of generating natural language. To gain a sense for the challenges surrounding natural language generation, we need to first understand how large this space really is. In order to solve solve a popular IF game such as Zork1 it's necessary to generate actions consisting of up to five-words from a relatively modest vocabulary of 697 words recognized by Zork's parser. Even this modestly sized vocabulary leads to O(697 5) = 1.64 × 10 the structure required to further constrain our action space via our knowledge graph-and make the argument that the combination of these approaches allows us to generate meaningful natural language commands. Our contributions are as follows: We introduce an novel agent that utilizes both a knowledge graph based state space and template based action space and show how to train such an agent. 
We then conduct an empirical study evaluating our agent across a diverse set of IF games followed by an ablation analysis studying the effectiveness of various components of our algorithm as well as its overall generalizability. Remarkably we show that our agent achieves state-of-the-art performance on a large proportion of the games despite the exponential increase in action space size. We examine prior work in three broad categories: text-based game playing agents and frameworks as well as knowledge graphs used for natural language generation and game playing agents. LSTM-DQN , considers verb-noun actions up to two-words in length. Separate Q-Value estimates are produced for each possible verb and object, and the action consists of pairing the maximally valued verb combined with the maximally valued object. The DRRN algorithm for choice-based games estimates Q-Values for a particular action from a particular state. Fulda et al. use Word2Vec (to aid in extracting affordances for items in these games and use this information to produce relevant action verbs. reduce the combinatorially-sized action space into a discrete form using a walkthrough of the game and introduce the Action Elimination DQN, which learns to eliminate actions unlikely to cause a world change. Côté et al. introduce TextWorld, a framework for procedurally generating parser-based games, allowing a user to control the difficulty of a generated game. , an optimized interface for playing human-made IF games-formalizing this task. They further provide a comparative study of various types of agents on their set of games, testing the performance of heuristic based agents such as NAIL (b) and various reinforcement learning agents are benchmarked. We use Jericho and the tools that it provides to develop our agents. Knowledge graphs have been shown to be useful representations for a variety of tasks surrounding natural language generation and interactive fiction. and effectively use knowledge graph representations to improve neural conversational and story ending prediction models respectively. explore procedural content generation in text-adventure games-looking at constructing a quest for a given game world, and use knowledge graphs to ground generative systems trained to produce quest content. From the perspective of text-game playing agent and most in line with the spirit of our work, Ammanabrolu & Riedl (2019a) present the Knowledge Graph DQN or KG-DQN, an approach where a knowledge graph built during exploration is used as a state representation for a deep reinforcement learning based agent. Ammanabrolu & Riedl (2019b) further expand on this work, exploring methods of transferring control policies in text-games, using knowledge graphs to seed an agent with useful commonsense knowledge and to transfer knowledge between different games within a domain. Both of these works, however, identify a discrete set of actions required to play the game beforehand and so do not fully tackle the issue of the combinatorial action space. Formally, IF games are partially observable Markov decision processes (POMDP), represented as a 7-tuple of S, T, A, Ω, O, R, γ representing the set of environment states, mostly deterministic conditional transition probabilities between states, the vocabulary or words used to compose text commands, observations returned by the game, observation conditional probabilities, reward function, and the discount factor respectively (Côté et al., 2018; a). 
To deal with the ing twin challenges of partial observability and combinatorial actions, we use a knowledge graph based state space and a template-based action space-each described in detail below. Knowledge Graph State Space. Building on Ammanabrolu & Riedl (2019a), we use a knowledge graph as a state representation that is learnt during exploration. The knowledge graph is stored as a set of 3-tuples of subject, relation, object. These triples are extracted from the observations using Stanford's Open Information Extraction (OpenIE) . Human-made IF games often contain relatively complex semi-structured information that OpenIE is not designed to parse and so we add additional rules to ensure that we are parsing the relevant information. Updated after every action, the knowledge graph helps the agent form a map of the world that it is exploring, in addition to retaining information that it has learned such as the affordances associated with an object, the properties of a character, current inventory, etc. Nodes relating to such information are shown on the basis of their relation to the agent which is presented on the graph using a "you" node (see example in Fig. 2a). Ammanabrolu & Riedl (2019a) build a knowledge graph in a similar manner but restrict themselves to a single domain. In contrast, we test our methods on a much more diverse set of games defined in the Jericho framework (a). These games are each structured differentlycovering a wider variety of genres-and so to be able to extract the same information from all of them in a general manner, we relax many of the rules found in Ammanabrolu & Riedl (2019a). To aid in the generalizability of graph building, we introduce the concept of interactive objects-items that an agent is able to directly interact with in the surrounding environment. These items are directly linked to the "you" node, indicating that the agent can interact with them, and the node for the current room, showing their relative position. All other triples built from the graph are extracted by OpenIE. Further details regarding knowledge graph updates are found in Appendix A.1 An example of a graph built using these rules is seen in Fig. 2a. Template Action Space. Templates are subroutines used by the game's parser to interpret the player's action. They consist of interchangeable verbs phrases (V P) optionally followed by prepositional phrases (V P P P), e.g. Figure 2b, actions may be constructed from templates by filling in the template's blanks using words in the game's vocabulary. Templates and vocabulary words are programmatically accessible through the Jericho framework and are thus available for every IF game. Further details about how we prioritize interchangeable verbs and prepositions are available in Appendix A.2. Combining the knowledge-graph state space with the template action space, Knowledge Graph Advantage Actor Critic or KG-A2C, is an on-policy reinforcement learning agent that collects experience from many parallel environments. We first discuss the architecture of KG-A2C, then detail the training algorithm. As seen in Fig. 1, KG-A2C's architecture can broadly be described in terms of encoding a state representation and then using this encoded representation to decode an action. We describe each of these processes below. Input Representation. The input representation network is broadly divided into three parts: an observation encoder, a score encoder, and the knowledge graph. 
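As a toy illustration of the two spaces introduced above (before the encoders are described in detail), the knowledge graph can be stored as a set of triples and template-actions can be enumerated by filling blanks from the vocabulary. All triples, templates, and words below are illustrative examples, not the actual Jericho data.

```python
from itertools import product

# The knowledge graph is a set of (subject, relation, object) triples.
knowledge_graph = {
    ("you", "in", "living room"),
    ("living room", "has", "trophy case"),
    ("you", "have", "glass bottle"),
}

# A template is a verb phrase with blanks (OBJ) filled from the vocabulary.
templates = ["take OBJ", "open OBJ", "put OBJ in OBJ"]
vocab = ["egg", "sword", "lantern", "case"]

def expand_template(template, words):
    """Enumerate every action obtainable by filling a template's blanks."""
    n_blanks = template.count("OBJ")
    for combo in product(words, repeat=n_blanks):
        action = template
        for word in combo:
            action = action.replace("OBJ", word, 1)
        yield action

all_actions = [a for t in templates for a in expand_template(t, vocab)]
# Already 4 + 4 + 16 = 24 actions for this toy setup; with a few hundred
# templates and a vocabulary of roughly 700 words the space becomes
# combinatorial, which is why the graph mask described later restricts
# the words used to fill the blanks.
```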
At every step an observation consisting of several components is received: o t = (o t desc, o tgame, o tinv, a t−1) corresponding to the room description, game feedback, inventory, and previous action, and total score R t. The room description o t desc is a textual description of the agent's location, obtained by executing the command "look." The game feedback o tgame is the simulators response to the agent's previous action and consists of narrative and flavor text. The inventory o tinv and previous action a t−1 components inform the agent about the contents of its inventory and the last action taken respectively. The observation encoder processes each component of o t using a separate GRU encoder. As we are not given the vocabulary that o t is comprised of, we use subword tokenization-specifically using the unigram subword tokenization method described in. This method predicts the most likely sequence of subword tokens for a given input using a unigram language and contains a total vocabulary of size 8000. For each of the GRUs, we pass in the final hidden state of the GRU at step t − 1 to initialize the hidden state at step t. We concatenate each of the encoded components and use a linear layer to combine them into the final encoded observation o t. At each step, we update our knowledge graph G t using o t as described in Sec. 3 and it is then embedded into a single vector g t. Following Ammanabrolu & Riedl (2019a) we use Graph Attention networks or GATs (Veličković et al., 2018) with an attention mechanism similar to that described in. Node features are computed as, where N is the number of nodes and F the number of features in each node, consist of the average subword embeddings of the entity and of the relations for all incoming edges using our unigram language model. Self-attention is then used after a learnable linear transformation W ∈ IR 2F×F applied to all the node features. Attention coefficients α ij are then computed by softmaxing k ∈ N with N being the neighborhood in which we compute the attention coefficients and consists of all edges in G t. where p ∈ IR is a learnable parameter. The final knowledge graph embedding vector g t is computed as: where k refers to the parameters of the k th independent attention mechanism, W g and b g the weights and biases of the output linear layer, and represents concatenation. The final component of state embedding vector is a binary encoding c t of the total score obtained so far in the game-giving the agent a sense for how far it has progressed in the game even when it is not collecting reward. The state embedding vector is then calculated as s t = g t ⊕ o t ⊕ c t. Action Decoder. The state embedding vector s t is then used to sequentially construct an action by first predicting a template and then picking the objects to fill into the template using a series of Decoder GRUs. This gives rise to a template policy π T and a policy for each object π Oi. Architecture You are in the living room. There is a doorway to the east, a wooden door with strange gothic lettering to the west, which appears to be nailed shut, a trophy case, and a large oriental rug in the center of the room. Above the trophy case hangs an elvish sword of great antiquity. A batterypowered brass lantern is on the trophy case. You are carrying: A glass bottle The glass bottle contains: A quantity of water. 
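A minimal PyTorch-style sketch of the multi-GRU observation encoder described above follows; the layer sizes, module names, and the way hidden states are carried between steps are assumptions for illustration rather than the exact implementation.

```python
import torch
import torch.nn as nn

class ObservationEncoder(nn.Module):
    """Encodes each observation component with its own GRU and merges them."""
    def __init__(self, vocab_size=8000, emb_dim=64, hid_dim=128, out_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # One GRU per component: room description, game feedback,
        # inventory, and previous action.
        self.grus = nn.ModuleDict({
            name: nn.GRU(emb_dim, hid_dim, batch_first=True)
            for name in ["desc", "feedback", "inventory", "prev_action"]
        })
        self.merge = nn.Linear(4 * hid_dim, out_dim)

    def forward(self, token_ids, hidden):
        # token_ids: dict of component name -> LongTensor (batch, seq_len)
        # hidden:    dict of component name -> final hidden state from step
        #            t-1, carried over as described in the text (or None).
        encoded, new_hidden = [], {}
        for name, gru in self.grus.items():
            _, h = gru(self.embed(token_ids[name]), hidden.get(name))
            encoded.append(h[-1])
            new_hidden[name] = h
        return self.merge(torch.cat(encoded, dim=-1)), new_hidden
```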
Figure 2: An overall example of the knowledge graph building and subsequent action decoding process for a given state in Zork1, illustrating the use of interactive objects and the graph mask. wise, at every decoding step all previously predicted parts of the action are encoded and passed along with s t through an attention layer which learns to attend over these representations-conditioning every predicted object on all the previously predicted objects and template. All the object decoder GRUs share parameters while the template decoder GRU T remains separate. To effectively constrain the space of template-actions, we introduce the concept of a graph mask, leveraging our knowledge graph at that timestep G t to streamline the object decoding process. Formally, the graph mask m t = {o : o ∈ G t ∧ o ∈ V}, consists of all the entities found within the knowledge graph G t and vocabulary V and is applied to the outputs of the object decoder GRUsrestricting them to predict objects in the mask. Generally, in an IF game, it is impossible to interact with an object that you never seen or that are not in your inventory and so the mask lets us explore the action space more efficiently. To account for cases where this assumption does not hold, i.e. when an object that the agent has never interacted with before must be referenced in order to progress in the game, we randomly add objects o ∈ V to m t with a probability p m. An example of the graph-constrained action decoding process is illustrated in Fig. 2b. We adapt the Advantage Actor Critic (A2C) method to train our network, using multiple workers to gather experiences from the simulator, making several significant changes along the way-as described below. Valid Actions. Using a template-action space there are millions of possible actions at each step. Most of these actions do not make sense, are ungrammatical, etc. and an even fewer number of them actually cause the agent effect change in the world. Without any sense for which actions present valid interactions with the world, the combinatorial action space becomes prohibitively large for effective exploration. We thus use the concept of valid actions, actions that can change the world in a particular state. These actions can usually be recognized through the game feedback, with responses like "Nothing happens" or "That phrase is not recognized." In practice, we follow Hausknecht et al. (2019a) and use the valid action detection algorithm provided by Jericho. Formally, V alid(s t) = a 0, a 1...a N and from this we can construct the corresponding set of valid templates We further define a set of valid objects O valid (s t) = o 0, o 1...o M which consists of all objects in the graph mask as defined in Sec. 4. This lets us introduce two cross-entropy loss terms to aid the action decoding process. The template loss given a particular state and current network parameters is applied to the decoder GRU T. Similarly, the object loss is applied across the decoder GRU O and is calculated by summing cross-entropy loss from all the object decoding steps. Updates. A2C training starts with calculating the advantage of taking an action in a state A(s t, a t), defined as the value of taking an action Q(s t, a t) compared to the average value of taking all possible valid actions in that state V (s t): V (s t) is predicted by the critic as shown in Fig. 1 and r t is the reward received at step t. 
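A rough sketch of how the graph mask just introduced could be applied to an object decoder's scores (names and details are illustrative):

```python
import torch

def apply_graph_mask(object_logits, vocab, graph_entities, p_m=0.1):
    """Restrict object predictions to entities currently in the knowledge graph.

    object_logits:  (batch, vocab_size) scores from an object decoder GRU.
    vocab:          list of words, aligned with the logit dimension.
    graph_entities: set of entity strings appearing in the graph G_t.
    p_m:            probability of letting a non-graph word through, so that
                    objects the agent has never interacted with can still be
                    generated occasionally.
    """
    keep = torch.tensor(
        [(w in graph_entities) or (torch.rand(()).item() < p_m) for w in vocab]
    )
    return object_logits.masked_fill(~keep, float("-inf"))
```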
The action decoder or actor is then updated according to the gradient: updating the template policy π T and object policies π Oi based on the fact that each step in the action decoding process is conditioned on all the previously decoded portions. The critic is updated with respect to the gradient: bringing the critic's prediction of the value of being in a state closer to its true underlying value. We further add an entropy loss over the valid actions, designed to prevent the agent from prematurely converging on a trajectory. Our experiments are structured into two parts: We first present a comprehensive set of ablations designed to test the relative effectiveness of the various parts of our algorithm. The full KG-A2C is then tested on a suite of Jericho supported games and is compared to strong, established baselines. Additionally, as encouraged by Hausknecht et al. (2019a), we present the set of handicaps used by our agents: Jericho's ability to identify valid actions and the Load, Save handicap in order to acquire o t desc and o tinv using the look and inventory commands without changing the game state. Hyperparameters are provided in Appendix B. Ablation Study. Our ablation study is performed on Zork1, identified by Hausknecht et al. (2019a) to be one of the most difficult games in their suite and the subject of much prior work . Zork1 is one of the earliest IF games and is a dungeon-crawler-a player must explore a vast labyrinth while fighting off enemies and complete puzzles in order to collect treasures. It features a relatively sparse reward for collecting a treasure or moving along the right path to one, and stochasticity in terms of random enemy movements. In order to understand the contributions of different components of KG-A2C's architecture, ablate KG-A2C's knowledge graph, template-action space, and valid-action loss. In order to understand the effects of using a knowledge graph, LSTM-A2C removes all components of KG-A2C's knowledge graph. In particular, the state embedding vector is now computed as s t = o t ⊕ c t and the graph mask is not used to constrain action decoding. LSTM-A2C-masked The same as LSTM-A2C but we use interactive objects to provide an object mask that is used in the same manner as the graph mask of KG-A2C. KG-A2C-seq discards the template action space and instead decodes actions word by word up to a maximum of four words. A supervised cross-entropy-based valid action loss L V alid is now calculated by selecting a random valid action a t valid ∈ V alid(s t) and using each token in it as a target label. As this action space is orders of magnitude larger than template actions, we use teacher-forcing to enable more effective exploration while training the agent-executing a t valid with a probability p valid = 0.5 and the decoded action otherwise. In order to understand the importance of training with valid-actions, KG-A2C-unsupervised is not allowed to access the list of valid actions-the valid-action-losses L T and L O are disabled and L E now based on the full action set. Thus, the agent must explore the template action space manually. Template DQN Baseline. TDQN (a) is an extension of LSTM-DQN to template-based action spaces. This is accomplished using three output heads: one for estimating the Q-Values over templates Q(s t, u)∀u ∈ T and two for estimating Q-Values Q(s t, o 1), Q(s t, o 2)∀o i ∈ O over vocabulary to fill in the blanks of the template. The final executed action is constructed by greedily sampling from the predicted Q-values. 
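Returning to the updates described at the start of this section, the per-transition loss terms can be sketched as follows. Batching, the supervised valid-template and valid-object losses, and the exact entropy weighting are simplified here, and the coefficient values are placeholders.

```python
import torch

def a2c_loss_terms(log_prob_action, value, reward, next_value, entropy,
                   gamma=0.99, entropy_coef=0.01):
    """Simplified per-transition A2C objective for the action decoder.

    log_prob_action: summed log-probability of the decoded template and the
                     objects filled into it (decoding is autoregressive).
    value, next_value: critic estimates V(s_t) and V(s_{t+1}).
    entropy: entropy of the decoder's distribution over valid actions.
    """
    advantage = reward + gamma * next_value.detach() - value
    actor_loss = -log_prob_action * advantage.detach()   # policy gradient term
    critic_loss = advantage.pow(2)                        # value regression
    return actor_loss + critic_loss - entropy_coef * entropy
```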
To understand how humans progress in Zork1, a group of 10 human players-familiar with IF games-were asked to play Zork1 for the first time (with no access to walkthroughs). Half of the players reached a game score of around 40 before dying to the first beatable NPC, a troll, mostly due to neglecting to collect a weapon to fight it with beforehand. Three of the remaining players died to hidden traps even before reaching this point, achieving scores between 5 and 15. The final two players made it significantly past the troll gaining scores of around 70. A map of Zork1 with annotated rewards can be found in Appendix C and additional learning curves can be found in Appendix B. With this in mind, we first discuss the of the ablation study and then KG-A2C's performance over a much wider set of games found in Jericho. On Zork1, the full KG-A2C significantly outperforms all baselines and ablations as seen in Table 3b -indicating that all components of the full algorithm previously introduced are crucial for its performance. The first two possible rewards that can be received in this game are of magnitude 5 and 10, both requiring 4 steps to reach from the starting point when following an optimal policy. This is the extent of the progress of both the LSTM-A2C and the KG-A2C-seq. The LSTM-A2C more often than not collects both of these rewards while the KG-A2C-seq usually only collects one or the other in the span of an episode. KG-A2C-seq, using a action space consisting of the full vocabulary, performs significantly worse than the rest of the agents even when given the handicaps of teacher forcing and being allowed to train for significantly longer-indicating that the template based action space is necessary for effective exploration. The LSTM-A2C-masked progresses significantly further in the game due to the object mask cutting down the action space, allowing for more efficient exploration. It does not, however, perform as well as KG-A2C likely due to the lack of the graph component g t in the state embedding. LSTM-A2C and TDQN, which use the template-based action space without a knowledge graph, also struggle to progress beyond the initial rewards. Without a knowledge graph to maintain a belief over the world state and constrain the action generation, the agent is unable to produce contextually relevant commands. Thus both the templates and the knowledge graph are critical for the agent to attain state-of-the-art performance. The final component being tested, the valid action supervised loss does not appear to be a important as our choice of state and action spaces: KG-A2C-unsupervised achieves nearly comparable performance to the full algorithm, also achieving state-of-the-art when compared to prior agents, Tabula rasa reinforcement learning offers an intuitive paradigm for exploring goal driven, contextually aware natural language generation. The sheer size of the natural language action space, however, has proven to be out of the reach of existing algorithms. In this paper we introduced KG-A2C, a novel learning agent that demonstrates the feasibility of scaling reinforcement learning towards natural language actions spaces with hundreds of millions of actions. The key insight to being able to efficiently explore such large spaces is the combination of a knowledge-graph-based state space and a template-based action space. 
The knowledge graph serves as a means for the agent to understand its surroundings, accumulate information about the game, and disambiguate similar textual observations while the templates lend a measure of structure that enables us to exploit that same knowledge graph for language generation. Together they constrain the vast space of possible actions into the compact space of sensible ones. An ablation study on Zork1 shows state-of-the-art performance with respect to any currently existing general reinforcement learning agent, including those with action spaces six orders of magnitude smaller than what we consider-indicating the overall efficacy of our combined state-action space. Further, a suite of experiments shows wide improvement over TDQN, the current state-of-the-art template based agent, across a diverse set of 26 human-made IF games covering multiple genres and game structures demonstrate that our agent is able to generalize effectively. A IMPLEMENTATION DETAILS Candidate interactive objects are identified by performing part-of-speech tagging on the current observation, identifying singular and proper nouns as well as adjectives, and are then filtered by checking if they can be examined using the command examine OBJ. Only the interactive objects not found in the inventory are linked to the node corresponding to the current room and the inventory items are linked to the "you" node. The only other rule applied uses the navigational actions performed by the agent to infer the relative positions of rooms, e.g. kitchen, down, cellar when the agent performs go down when in the kitchen to move to the cellar. Templates are processed by selecting a single verb and preposition from the aliases. For the sake of agent explainability, we pick the verb and preposition that are most likely to be used by humans when playing IF games. This is done by assessing token frequencies from a dataset of human playthroughs such as those given in ClubFloyd [at/against/on/onto] ), would then be converted to take and put on. Episodes are terminated after 100 valid steps or game over/victory. Agents that decode invalid actions often wouldn't make it very far into the game, and so we only count valid-actions against the hundred step limit. All agents are trained individually on each game and then evaluated on that game. All A2C based agents are trained using data collected from 32 parallel environments. TDQN was trained using a single environment. Hyperparameters for all agents were tuned on the game of Zork1 and held constant across all other games. Final reported scores are an average over 5 runs of each algorithm. Interactive objects: tree, path, branches, forest, large, all Action: up Score: 0 ---Obs: Desc: Up a Tree You are about 10 feet above the ground nestled among some large branches. The nearest branch above you is above your reach. Beside you on the branch is a small birds nest. In the birds nest is a large egg encrusted with precious jewels, apparently scavenged by a childless songbird. The egg is covered with fine gold inlay, and ornamented in lapis lazuli and motherofpearl. Unlike most eggs, this one is hinged and closed with a delicate looking clasp. The egg appears extremely fragile. Inv: You are emptyhanded. Feedback: Up a Tree You are about 10 feet above the ground nestled among some large branches. The nearest branch above you is above your reach. Beside you on the branch is a small birds nest. 
In the bird's nest is a large egg encrusted with precious jewels, apparently scavenged by a childless songbird. The egg is covered with fine gold inlay, and ornamented in lapis lazuli and mother-of-pearl. Unlike most eggs, this one is hinged and closed with a delicate-looking clasp. The egg appears extremely fragile.
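The interactive-object detection of Appendix A.1 can be sketched with an off-the-shelf part-of-speech tagger. The NLTK tagger and the examine_fn callback below are stand-ins for whatever tagger and Jericho call were actually used; the observation text is of the kind shown in the example above.

```python
import nltk  # requires the 'punkt' and 'averaged_perceptron_tagger' data

def candidate_interactive_objects(observation, examine_fn):
    """Return words from an observation that the game will let us examine.

    observation: current room description / feedback text.
    examine_fn:  callable issuing `examine <word>` to the game and returning
                 True if the parser recognizes the object (a stand-in for the
                 actual Jericho interface).
    """
    tagged = nltk.pos_tag(nltk.word_tokenize(observation))
    # Keep singular nouns, proper nouns, and adjectives, as described above.
    candidates = {w.lower() for w, tag in tagged if tag in ("NN", "NNP", "JJ")}
    return {w for w in candidates if examine_fn(f"examine {w}")}

# For the "Up a Tree" observation this might yield words such as "branch",
# "nest", and "egg", which are then linked to the current room node.
```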
B1x6w0EtwH
We present KG-A2C, a reinforcement learning agent that builds a dynamic knowledge graph while exploring and generates natural language using a template-based action space - outperforming all current agents on a wide set of text-based games.
It is well-known that neural networks are universal approximators, but that deeper networks tend in practice to be more powerful than shallower ones. We shed light on this by proving that the total number of neurons m required to approximate natural classes of multivariate polynomials of n variables grows only linearly with n for deep neural networks, but grows exponentially when merely a single hidden layer is allowed. We also provide evidence that when the number of hidden layers is increased from 1 to k, the neuron requirement grows exponentially not with n but with n^{1/k}, suggesting that the minimum number of layers required for practical expressibility grows only logarithmically with n. Deep learning has lately been shown to be a very powerful tool for a wide range of problems, from image segmentation to machine translation. Despite its success, many of the techniques developed by practitioners of artificial neural networks (ANNs) are heuristics without theoretical guarantees. Perhaps most notably, the power of feedforward networks with many layers (deep networks) has not been fully explained. The goal of this paper is to shed more light on this question and to suggest heuristics for how deep is deep enough. It is well-known BID7 BID11 BID15 BID1 BID23 that neural networks with a single hidden layer can approximate any function under reasonable assumptions, but it is possible that the networks required will be extremely large. Recent authors have shown that some functions can be approximated by deeper networks much more efficiently (i.e. with fewer neurons) than by shallower ones. Often, these admit one or more of the following limitations: "existence proofs" without explicit constructions of the functions in question; explicit constructions, but relatively complicated functions; or applicability only to types of network rarely used in practice. It is important and timely to extend this work to make it more concrete and actionable, by deriving resource requirements for approximating natural classes of functions using today's most common neural network architectures. BID17 recently proved that it is exponentially more efficient to use a deep network than a shallow network when Taylor-approximating the product of input variables. In the present paper, we move far beyond this in the following ways: (i) we use standard uniform approximation instead of Taylor approximation, (ii) we show that the exponential advantage of depth extends to all general sparse multivariate polynomials, and (iii) we address the question of how the number of neurons scales with the number of layers. Our apply to standard feedforward neural networks and are borne out by empirical tests. Our primary contributions are as follows:• It is possible to achieve arbitrarily close approximations of simple multivariate and univariate polynomials with neural networks having a bounded number of neurons (see §3).• Such polynomials are exponentially easier to approximate with deep networks than with shallow networks (see §4).• The power of networks improves rapidly with depth; for natural polynomials, the number of layers required is at most logarithmic in the number of input variables, where the base of the logarithm depends upon the layer width (see §5). Deeper networks have been shown to have greater representational power with respect to various notions of complexity, including piecewise linear decision boundaries BID22 and topological invariants BID2. 
Recently, and showed that the trajectories of input variables attain exponentially greater length and curvature with greater network depth. Work including BID8; BID10; BID23; BID24; shows that there exist functions that require exponential width to be approximated by a shallow network. BID1 provides bounds on the error in approximating general functions by shallow networks. BID20 and BID24 show that for compositional functions (those that can be expressed by recursive function composition), the number of neurons required for approximation by a deep network is exponentially smaller than the best known upper bounds for a shallow network. BID20 ask whether functions with tight lower bounds must be pathologically complicated, a question which we answer here in the negative. Various authors have also considered the power of deeper networks of types other than the standard feedforward model. The problem has also been posed for sum-product networks BID9 and restricted Boltzmann machines BID19. showed, using tools from tensor decomposition, that shallow arithmetic circuits can express only a measure-zero set of the functions expressible by deep circuits. A weak generalization of this to convolutional neural networks was shown in. In this paper, we will consider the standard model of feedforward neural networks (also called multilayer perceptrons). Formally, the network may be considered as a multivariate function DISPLAYFORM0.., A k are constant matrices and σ denotes a scalar nonlinear function applied element-wise to vectors. The constant k is referred to as the depth of the network. The neurons of the network are the entries of the vectors σ(A · · · σ(A 1 σ(A 0 x)) · · · ), for = 1,..., k − 1. These vectors are referred to as the hidden layers of the network. Two notions of approximation will be relevant in our : -approximation, also known as uniform approximation, and Taylor approximation. Definition 3.1. For constant > 0, we say that a network N (x) -approximates a multivariate function f (x) (for x in a specified domain DISPLAYFORM1 Definition 3.2. We say that a network N (x) Taylor-approximates a multivariate polynomial p(x) of degree d if p(x) is the dth order Taylor polynomial (about the origin) of N (x).The following proposition shows that Taylor approximation implies -approximation for homogeneous polynomials. The reverse implication does not hold. Proposition 3.3. Suppose that the network N (x) Taylor-approximates the homogeneous multivariate polynomial p(x). Then, for every, there exists a network N (x) that -approximates p(x), such that N (x) and N (x) have the same number of neurons in each layer. (This statement holds for x ∈ (−R, R) n for any specified R.) DISPLAYFORM2 is a Taylor series with each E i (x) homogeneous of degree i. Since N (x) is the function defined by a neural network, it converges for every x ∈ R n. Thus, E(x) converges, as does DISPLAYFORM3. By picking δ sufficiently small, we can make each term DISPLAYFORM4 d, and therefore: DISPLAYFORM5 We conclude that N (x) is an -approximation of p(x), as desired. For a fixed nonlinear function σ, we consider the total number of neurons (excluding input and output neurons) needed for a network to approximate a given function. Remarkably, it is possible to attain arbitrarily good approximations of a (not necessarily homogeneous) multivariate polynomial by a feedforward neural network, even with a single hidden layer, without increasing the number of neurons past a certain bound. (See also Corollary 1 in BID24 .) Theorem 3.4. 
Suppose that p(x) is a degree-d multivariate polynomial and that the nonlinearity σ has nonzero Taylor coefficients up to degree d. Let m k (p) be the minimum number of neurons in a depth-k network that -approximates p. Then, the limit lim →0 m k (p) exists (and is finite). (Once again, this statement holds for x ∈ (−R, R) n for any specified R.)Proof. We show that lim →0 m 1 (p) exists; it follows immediately that lim →0 m k (p) exists for every k, since an -approximation to p with depth k can be constructed from one with depth 1. DISPLAYFORM6. We claim that each p i (x) can be Taylor-approximated by a network N i (x) with one hidden layer. This follows, for example, from the proof in BID17 that products can be Taylor-approximated by networks with one hidden layer, since each monomial is the product of several inputs (with multiplicity); we prove a far stronger about N i (x) later in this paper (see Theorem 4.1).Suppose now that N i (x) has m i hidden neurons. By Proposition 3.3, we conclude that since p i (x) is homogeneous, it may be δ-approximated by a network N DISPLAYFORM7 This theorem is perhaps surprising, since it is common for -approximations to functions to require ever-greater complexity, approaching infinity as → 0. For example, the function exp(| − x|) may be approximated on the domain (−π, π) by Fourier sums of the form m k=0 a m cos(kx). However, in order to achieve -approximation, we need to take m ∼ 1/ √ terms. By contrast, we have shown that a finite neural network architecture can achieve arbitrarily good approximations merely by altering its weights. Note also that the assumption of nonzero Taylor coefficients cannot be dropped from Theorem 3.4. For example, the theorem is false for rectified linear units (ReLUs), which are piecewise linear and do not admit a Taylor series. This is because -approximating a non-linear polynomial with a piecewise linear function requires an ever-increasing number of pieces as → 0. In this section, we compare the efficiency of shallow networks (those with a single hidden layer) and deep networks at approximating multivariate polynomials. Proofs of our main are included in the Appendix. Our first shows that uniform approximation of monomials requires exponentially more neurons in a shallow than a deep network. DISPLAYFORM0 Suppose that the nonlinearity σ has nonzero Taylor coefficients up to degree 2d. Then, we have: DISPLAYFORM1 where x denotes the smallest integer that is at least x. We can prove a comparable for m Taylor under slightly weaker assumptions on σ. Note that by setting r 1 = r 2 =... = r n = 1, we recover the of BID17 that the product of n numbers requires 2 n neurons in a shallow network but can be Taylor-approximated with linearly many neurons in a deep network. DISPLAYFORM2 It is worth noting that neither of Theorems 4.1 and 4.2 implies the other. This is because it is possible for a polynomial to admit a compact uniform approximation without admitting a compact Taylor approximation. It is natural now to consider the cost of approximating general polynomials. However, without further constraint, this is relatively uninstructive because polynomials of degree d in n variables live within a space of dimension n+d d, and therefore most require exponentially many neurons for any depth of network. We therefore consider polynomials of sparsity c: that is, those that can be represented as the sum of c monomials. This includes many natural functions. Theorem 4.3. 
Let p(x) be a multivariate polynomial of degree d and sparsity c, having monomials q 1 (x), q 2 (x),..., q c (x). Suppose that the nonlinearity σ has nonzero Taylor coefficients up to degree 2d. Then, we have: DISPLAYFORM3 These statements also hold if m uniform is replaced with m Taylor.As mentioned above with respect to ReLUs, some assumptions on the Taylor coefficients of the activation function are necessary for the we present. However, it is possible to loosen the assumptions of Theorem 4.1 and 4.2 while still obtaining exponential lower bounds on m DISPLAYFORM4 Hence, A is invertible, which means that multiplying its columns by nonzero values gives another invertible matrix. Suppose that we multiply the jth column of A by σ j to get A, where σ(x) = j σ j x j is the Taylor expansion of σ(x). Now, observe that the ith row of A is exactly the coefficients of σ(a i x), up to the degree-d term. Since A is invertible, the rows must be linearly independent, so the polynomials σ(a i x), restricted to terms of degree at most d, must themselves be linearly independent. Since the space of degree-d univariate polynomials is (d + 1)-dimensional, these d + 1 linearly independent polynomials must span the space. Hence, m Taylor 1 (p) ≤ d + 1 for any univariate degree-d polynomial p. In fact, we can fix the weights from the input neuron to the hidden layer (to be a 0, a 1, . . ., a d, respectively) and still represent any polynomial p with d + 1 hidden neurons. Proposition 4.6. Let p(x) = x d, and suppose that the nonlinearity σ(x) has nonzero Taylor coefficients up to degree 2d. Then, we have: DISPLAYFORM5 These statements also hold if m uniform is replaced with m Taylor.Proof. Part (i) follows from part (i) of Theorems 4.1 and 4.2 by setting n = 1 and r 1 = d. For part (ii), observe that we can Taylor-approximate the square x 2 of an input x with three neurons in a single layer: DISPLAYFORM6 We refer to this construction as a square gate, and the construction of Lin et al. FORMULA15 as a product gate. We also use identity gate to refer to a neuron that simply preserves the input of a neuron from the preceding layer (this is equivalent to the skip connections in residual nets BID14 We now consider how m uniform k (p) scales with k, interpolating between exponential in n (for k = 1) and linear in n (for k = log n). In practice, networks with modest k > 1 are effective at representing natural functions. We explain this theoretically by showing that the cost of approximating the product polynomial drops off rapidly as k increases. By repeated application of the shallow network construction in Lin et al. FORMULA15, we obtain the following upper bound on m uniform k (p), which we conjecture to be essentially tight. Our approach leverages the compositionality of polynomials, as discussed e.g. in BID20 and BID24, using a tree-like neural network architecture. Theorem 5.1. Let p(x) equal the product x 1 x 2 · · · x n, and suppose σ has nonzero Taylor coefficients up to degree n. Then, we have: DISPLAYFORM0 Proof. We construct a network in which groups of the n inputs are recursively multiplied up to Taylor approximation. The n inputs are first divided into groups of size b 1, and each group is multiplied in the first hidden layer using 2 b1 neurons (as described in Lin et al. FORMULA15). Thus, the first hidden layer includes a total of 2 b1 n/b 1 neurons. This gives us n/b 1 values to multiply, which are in turn divided into groups of size b 2. 
Each group is multiplied in the second hidden layer using 2 b2 neurons. Thus, the second hidden layer includes a total of 2 b2 n/(b 1 b 2) neurons. We continue in this fashion for b 1, b 2,..., b k such that b 1 b 2 · · · b k = n, giving us one neuron which is the product of all of our inputs. By considering the total number of neurons used, we conclude In fact, we can solve for the choice of b i such that the upper bound in is minimized, under the condition b 1 b 2 · · · b k = n. Using the technique of Lagrange multipliers, we know that the optimum occurs at a minimum of the function as n varies are shown for k = 1, 2, 3. Observe that the b i converge to n 1/k for large n, as witnessed by a linear fit in the log-log plot. The exact values are given by equations and. for n = 20 is shown in black. In the region above and to the right of the curve, it is possible to effectively approximate the product function (Theorem 5.1). DISPLAYFORM1 DISPLAYFORM2 Differentiating L with respect to b i, we obtain the conditions DISPLAYFORM3 Dividing by k j=i+1 b j and rearranging gives us the recursion DISPLAYFORM4 Thus, the optimal b i are not exactly equal but very slowly increasing with i (see FIG6).The following conjecture states that the bound given in Theorem 5.1 is (approximately) optimal. Conjecture 5.2. Let p(x) equal to the product x 1 x 2 · · · x n, and suppose that σ has all nonzero Taylor coefficients. Then, we have: DISPLAYFORM5 i.e., the exponent grows as n 1/k for n → ∞.We empirically tested Conjecture 5.2 by training ANNs to predict the product of input values x 1,..., x n with n = 20 (see FIG7 . The rapid interpolation from exponential to linear width aligns with our predictions. In our experiments, we used feedforward networks with dense connections between successive layers. In the figure, we show for σ(x) = tanh(x) (note that this behavior is even better than expected, since this function actually has numerous zero Taylor coefficients). Similar were also obtained for rectified linear units (ReLUs) as the nonlinearity, despite the fact that this function does not even admit a Taylor series. The number of layers was varied, as was the number of neurons within a single layer. The networks were trained using the AdaDelta optimizer to minimize the absolute value of the difference between the predicted and actual values. Input variables x i were drawn uniformly at random from the interval, so that the expected value of the output would be of manageable size. Eq. provides a helpful rule of thumb for how deep is deep enough. Suppose, for instance, that we wish to keep typical layers no wider than about a thousand (∼ 2 10) neurons. Eq. then implies n 1/k ∼ < 10, i.e., that the number of layers should be at least k ∼ > log 10 n. It would be very interesting if one could show that general polynomials p in n variables require a superpolynomial number of neurons to approximate for any constant number of hidden layers. The analogous statement for Boolean circuits -whether the complexity classes T C 0 and T C 1 are equal -remains unresolved and is assumed to be quite hard. Note that the formulations for Boolean circuits and deep neural networks are independent statements (neither would imply the other) due to the differences between computation on binary and real values. Indeed, gaps in expressivity have already been proven to exist for real-valued neural networks of different depths, for which the analogous remain unknown in Boolean circuits (see e.g. BID21 BID4 BID3 ; Montufar et al. 
FORMULA15 ; ; Telgarsky FORMULA15). We have shown how the power of deeper ANNs can be quantified even for simple polynomials. We have proved that arbitrarily good approximations of polynomials are possible even with a fixed number of neurons and that there is an exponential gap between the width of shallow and deep networks required for approximating a given sparse polynomial. For n variables, a shallow network requires size exponential in n, while a deep network requires at most linearly many neurons. Networks with a constant number k > 1 of hidden layers appear to interpolate between these extremes, following a curve exponential in n 1/k. This suggests a rough heuristic for the number of layers required for approximating simple functions with neural networks. For example, if we want no layers to have more than 2 10 neurons, say, then the minimum number of layers required grows only as log 10 n. To further improve efficiency using the O(n) constructions we have presented, it suffices to increase the number of layers by a factor of log 2 10 ≈ 3, to log 2 n. The key property we use in our constructions is compositionality, as detailed in BID24. It is worth noting that as a consequence our networks enjoy the property of locality mentioned in, which is also a feature of convolutional neural nets. That is, each neuron in a layer is assumed to be connected only to a small subset of neurons from the previous layer, rather than the entirety (or some large fraction). In fact, we showed (e.g. Prop. 4.6) that there exist natural functions computable with linearly many neurons, with each neuron is connected to at most two neurons in the preceding layer, which nonetheless cannot be computed with fewer than exponentially many neurons in a single layer, no matter how may connections are used. Our construction can also be framed with reference to the other properties mentioned in: those of sharing (in which weights are shared between neural connections) and pooling (in which layers are gradually collapsed, as our construction essentially does with recursive combination of inputs). This paper has focused exclusively on the resources (neurons and synapses) required to compute a given function for fixed network depth. (Note also of BID18 ; BID13 ; BID12 for networks of fixed width.) An important complementary challenge is to quantify the resources (e.g. training steps) required to learn the computation, i.e., to converge to appropriate weights using training data -possibly a fixed amount thereof, as suggested in. There are simple functions that can be computed with polynomial resources but require exponential resources to learn . It is quite possible that architectures we have not considered increase the feasibility of learning. For example, residual networks (ResNets) BID14 and unitary nets (see e.g. BID0 BID16) are no more powerful in representational ability than conventional networks of the same size, but by being less susceptible to the "vanishing/exploding gradient" problem, it is far easier to optimize them in practice. We look forward to future work that will help us understand the power of neural networks to learn. Without loss of generality, suppose that r i > 0 for i = 1,..., n. Let X be the multiset in which x i occurs with multiplicity r i.We first show that n i=1 (r i + 1) neurons are sufficient to approximate p(x). 
Appendix A in demonstrates that for variables y 1,..., y N, the product y 1 · · · · · y N can be Taylorapproximated as a linear combination of the 2 N functions σ(±y 1 ± · · · ± y d).Consider setting y 1,..., y d equal to the elements of multiset X. Then, we conclude that we can approximate p(x) as a linear combination of the functions σ(±y 1 ± · · · ± y d). However, these functions are not all distinct: there are r i + 1 distinct ways to assign ± signs to r i copies of x i (ignoring permutations of the signs). Therefore, there are DISPLAYFORM0 We now show that this number of neurons is also necessary for approximating p(x). Suppose that N (x) is an -approximation to p(x) with depth 1, and let the Taylor series of N (x) be p(x)+E(x). Let E k (x) be the degree-k homogeneous component of E(x), for 0 ≤ k ≤ 2d. By the definition of -approximation, sup x E(x) goes to 0 as does, so by picking small enough, we can ensure that the coefficients of each E k (x) go to 0.Let m = m uniform 1 (p) and suppose that σ(x) has the Taylor expansion ∞ k=0 σ k x k. Then, by grouping terms of each order, we conclude that there exist constants a ij and w j such that DISPLAYFORM0 For each S ⊆ X, let us take the derivative of this equation by every variable that occurs in S, where we take multiple derivatives of variables that occur multiple times. This gives DISPLAYFORM1 DISPLAYFORM2 Observe that there are r ≡ n i=1 (r i + 1) choices for S, since each variable x i can be included anywhere from 0 to r i times. Define A to be the r × m matrix with entries A S,j = h∈S a hj. We claim that A has full row rank. This would show that the number of columns m is at least the number of rows r = n i=1 (r i + 1), proving the desired lower bound on m. Suppose towards contradiction that the rows A S,• admit a linear dependence: DISPLAYFORM3 where the coefficients c are all nonzero and the S denote distinct subsets of X. Let S * be such that |c * | is maximized. Then, take the dot product of each side of the above equation by the vector with entries (indexed by j) equal to w j (DISPLAYFORM4 We can use to simplify the first term and (with k = d + |S | − |S * |) to simplify the second term, giving us: DISPLAYFORM5 DISPLAYFORM6 Consider the coefficient of the monomial ∂ ∂S * p(x), which appears in the first summand with coefficient c * · |S * |! σ d ·d!. Since the S are distinct, this monomial does not appear in any other term ∂ ∂S p(x), but it could appear in some of the terms DISPLAYFORM7 By definition, |c * | is the largest of the values |c |, and by setting small enough, all coefficients of ∂ ∂S E k (x) can be made negligibly small for every k. This implies that the coefficient of the monomial ∂ ∂S * p(x) can be made arbitrarily close to c * · |S * |! σ d ·d!, which is nonzero since c * is nonzero. However, the left-hand side of equation FORMULA27 tells us that this coefficient should be zero -a contradiction. We conclude that A has full row rank, and therefore that m uniform 1 DISPLAYFORM8 This completes the proof of part (i).We now consider part (ii) of the theorem. It follows from Proposition 4.6, part (ii) that, for each i, we can Taylor-approximate x ri i using 7 log 2 (r i) neurons arranged in a deep network. Therefore, we can Taylor-approximate all of the x ri i using a total of i 7 log 2 (r i) neurons. From BID17, we know that these n terms can be multiplied using 4n additional neurons, giving us a total of i (7 log 2 (r i) +4). 
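The construction invoked at the start of this proof rests on the polarization identity Σ_{s∈{±1}^N} s_1···s_N (s·y)^N = 2^N N! y_1···y_N, so that only the degree-N Taylor term of σ survives the signed sum. Below is a quick numerical check of this product-gate idea, using σ = exp so that all Taylor coefficients are nonzero; the constant λ and the normalization are choices made for the illustration rather than those of the cited appendix.

```python
import itertools
import math
import numpy as np

def product_gate(x, lam=0.01):
    """Approximate prod(x) with one hidden layer of 2**n neurons exp(lam*(s.x)),
    with s ranging over all sign patterns.  By the polarization identity the
    signed sum isolates the degree-n Taylor term, so dividing by (2*lam)**n
    (exp has n-th Taylor coefficient 1/n!) recovers the product up to
    O(lam**2) corrections on a bounded domain.
    """
    n = len(x)
    total = 0.0
    for signs in itertools.product((1.0, -1.0), repeat=n):
        total += math.prod(signs) * math.exp(lam * float(np.dot(signs, x)))
    return total / (2.0 * lam) ** n

x = np.array([0.5, 0.7, 0.9])
print(product_gate(x), float(np.prod(x)))  # both approximately 0.315
```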
As above, suppose that r i > 0 for i = 1,..., n, and let X be the multiset in which x i occurs with multiplicity r i.It is shown in the proof of Theorem 4.1 that n i=1 (r i + 1) neurons are sufficient to Taylorapproximate p(x). We now show that this number of neurons is also necessary for approximating p(x). Let m = m Taylor 1 (p) and suppose that σ(x) has the Taylor expansion DISPLAYFORM9 Then, by grouping terms of each order, we conclude that there exist constants a ij and w j such that DISPLAYFORM10 For each S ⊆ X, let us take the derivative of equations FORMULA15 and FORMULA15 by every variable that occurs in S, where we take multiple derivatives of variables that occur multiple times. This gives DISPLAYFORM11 DISPLAYFORM12 for |S| ≤ k ≤ d − 1. Observe that there are r = n i=1 (r i + 1) choices for S, since each variable x i can be included anywhere from 0 to r i times. Define A to be the r × m matrix with entries A S,j = h∈S a hj. We claim that A has full row rank. This would show that the number of columns m is at least the number of rows r = n i=1 (r i + 1), proving the desired lower bound on m. Suppose towards contradiction that the rows A S,• admit a linear dependence: DISPLAYFORM13 where the coefficients c are nonzero and the S denote distinct subsets of X. Set s = max |S |. Then, take the dot product of each side of the above equation by the vector with entries (indexed by DISPLAYFORM14 We can use to simplify the first term and (with k = d + |S | − s) to simplify the second term, giving us: DISPLAYFORM15 Since the distinct monomials ∂ ∂S p(x) are linearly independent, this contradicts our assumption that the c are nonzero. We conclude that A has full row rank, and therefore that m Our proof in Theorem 4.1 relied upon the fact that all nonzero partial derivatives of a monomial are linearly independent. This fact is not true for general polynomials p; however, an exactly similar argument shows that m uniform 1 (p) is at least the number of linearly independent partial derivatives of p, taken with respect to multisets of the input variables. Consider the monomial q of p such that m uniform 1 (q) is maximized, and suppose that q(x) = x (q) is equal to the number n i=1 (r i + 1) of distinct monomials that can be obtained by taking partial derivatives of q. Let Q be the set of such monomials, and let D be the set of (iterated) partial derivatives corresponding to them, so that for d ∈ D, we have d(q) ∈ Q.Consider the set of polynomials P = {d(p) | d ∈ D}. We claim that there exists a linearly independent subset of P with size at least |D|/c. Suppose to the contrary that P is a maximal linearly independent subset of P with |P | < |D|/c. Since p has c monomials, every element of P has at most c monomials. Therefore, the total number of distinct monomials in elements of P is less than |D|. However, there are at least |D| distinct monomials contained in elements of P, since for d ∈ D, the polynomial d(p) contains the monomial d(q), and by definition all d(q) are distinct as d varies. We conclude that there is some polynomial p ∈ P \P containing a monomial that does not appear in any element of P. But then p is linearly independent of P, a contradiction since we assumed that P was maximal. We conclude that some linearly independent subset of P has size at least |D|/c, and therefore that the space of partial derivatives of p has rank at least |D|/c = m We will prove the desired lower bounds for m uniform 1 (p); a very similar argument holds for m Taylor 1 (p). 
As above, suppose that r i > 0 for i = 1,..., n. Let X be the multiset in which x i occurs with multiplicity r i.Suppose that N (x) is an -approximation to p(x) with depth 1, and let the degree-d Taylor polynomial of N (x) be p(x) + E(x). Let E d (x) be the degree-d homogeneous component of E(x). Observe that the coefficients of the error polynomial E d (x) can be made arbitrarily small by setting sufficiently small. Let m = m uniform 1 (p) and suppose that σ(x) has the Taylor expansion ∞ k=0 σ k x k. Then, by grouping terms of each order, we conclude that there exist constants a ij and w j such that DISPLAYFORM16 For each S ⊆ X, let us take the derivative of this equation by every variable that occurs in S, where we take multiple derivatives of variables that occur multiple times. This gives DISPLAYFORM17 Consider this equation as S ⊆ X varies over all C s multisets of fixed size s. The left-hand side represents a linear combination of the m terms (
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SyProzZAW
We prove that deep neural networks are exponentially more efficient than shallow ones at approximating sparse multivariate polynomials.
Convolutional neural networks (CNNs) in recent years have made a dramatic impact in science, technology and industry, yet the theoretical mechanism of CNN architecture design remains surprisingly vague. The neurons of a CNN, including its distinctive element, the convolutional filter, are known to be learnable features, yet their individual role in producing the output is rather unclear. The thesis of this work is that not all neurons are equally important and some of them contain more useful information to perform a given task. Hence, we propose to quantify and rank neuron importance, and directly incorporate neuron importance in the objective function under two formulations: a game-theoretical approach based on the Shapley value, which computes the marginal contribution of each filter; and a probabilistic approach based on what we call the importance switch, using variational inference. Using these two methods we confirm the general theory that some of the neurons are inherently more important than the others. Various experiments illustrate that the learned ranks can be readily usable for structured network compression and interpretability of learned features. Neural networks have achieved state-of-the-art results in various cognition tasks, including image and speech recognition, machine translation, and reinforcement learning (; ;). Many of these applications involve CNNs, which excel in particular in vision tasks due to their ability to capture visual features by means of convolutional filters. Although the effectiveness of convolutional networks is unquestionable, the details of the architecture design and what exactly makes neural networks work remain highly uncertain. The experimental results roughly confirm that the accuracy and representational capacity of the network are correlated with its depth (;). Interestingly, the deeper architectures also become wider, although the link between width and network expressivity is questionable and the choice of the number of neurons is rather discretionary. As a result, the discussion about the network architecture often revolves around the numbers of filters and layers and their relative positioning, putting aside the conversation about the quality of the information they contain. The increasing size of network architectures has faced scrutiny, with claims that the networks are overparameterized, raising two main concerns: heavy computational load and potential overfitting. In response to the need to build networks that are smaller yet accurate, a stream of research has attempted to remove redundant units, compress the networks and design lighter architectures (;). A widespread approach to network reduction has been removing weights that are small or even close to zero. This line of research implicitly discerns that nodes with larger weights are more significant for the learning task than those with small weights. As a result, broadly speaking, this approach divides features into those that are useful, which are kept, and those that are insignificant and therefore discarded, forming a sort of binary approach. In this work, we would like to scrutinize the individual filters and form an explicit theory that states that the units in the network (both convolutional filters and nodes in fully connected layers) are not equally important when it comes to performing an inference task.
The corollary of this thesis is that CNNs learn features in a discriminative way so that some of them carry more significance than others, and the knowledge about the input is not uniformly distributed among the CNN features. This theory is in line of research that adding more filters does not make the network more expressive since learning relevant information to the network has already been addressed by other filters. Given the proposed theory, we would like to make a step forward in gaining insight what the CNN learns and propose to extend the binary approach to form a quantifiable ranking of features. In other words, we attempt to estimate the importance of each feature compared to the others with particular focus on convolutional filters, which may be visualized. We introduce a theoretical framework to quantify how important each feature is through proposing a feature ranking method based on two different approaches. The first approach derives from the game theoretical concept of Shapley value , which assesses the importance of an individual in a group of neurons based on its marginal contribution to the group. The second method takes a probabilistic approach and introduces additional learnable parameters, which we call importance switches, that take real values and are trained by means of variational inference to give more weight to the important features. The extensive experimental using these approaches indicate that some features are inherently more significant than others. The theoretical underpinnings of the feature rankings have further direct practical implications we explore. Firstly, the knowledge of the ranking allows to know which features directly impact the score of our method and consequently a more informed way of building an effective model. Thus, we are able to build a network around the the relevant features and discard the less relevant ones, effectively compressing the network achieving state-of-the-art . Secondly and perhaps more significantly, the feature ranking of convolutional features provides more interpretable information about the network and places meaning on particular features in the context of a given task, thus casting light on the black box models. To achieve human interpretability, we visualize the most significant features which significantly show the significance of repeated and complementary features. In early years of CNN development, the networks were limited to a few layers . Recently, the architectures have become deeper and wider (; . The emergence of GPU implementability and regularization algorithms has allowed to use large architectures which train and generalize well. Nevertheless, the trend towards building larger neural networks ironically opposed the research about the nature, interpretability and knowledge extraction from within the neural network models, which we are interested in this work. Therefore, we will compare our method to existing ones in terms of compression ability and interpretability, and then frame the idea in terms of neuron ranking. Compression. The early work on compression largely focused on non-Bayesian approaches, e.g., and mostly centered around non-structured pruning methods, e.g., removing single weights from the architectures of CNNs . Then, hardware-oriented structured pruning techniques made more practical speed-ups (; ; ;). 
More recently Bayesian methods using the network weights' uncertainties have achieved impressive compression rates, e.g., using sparsity inducing prior on scale parameter , using Gaussian mixture priors, and using the grouping of weights through a group Horseshoe prior, among many. However, none of these methods prune neurons based on the direct optimization for extracting the importance of each neuron. Interpretability. Broadly there are three lines of work done for intepretability of CNNs. The first line of work, the early and used-to-be very popular work, focused on visualization of neurons in CNNs to understand how information propagates within the network . Another line of work focused on probing trained CNNs to obtain local & pixel level explanations either layer-wise or class-wise using gradient information of querying points (; ;). Last line of work focused on mapping semantic concepts to latent representations of CNNs . Other somewhat related work for interpretability using Shapley value also exist (but not in the context of CNNs) . Compared to existing methods, our method provides a global view of a trained model in terms of the importance of learned features. In what follows, we introduce the two methods to perform neuron ranking. The first approach derives from the game theoretical concept of Shapley value . The concept allows to compute the importance score (payoff) of an individual based on the payoffs given to collections of individuals. We subsequently adapt the concept to rank the neurons in terms of their predictive utility. Assuming that an important feature allows for task generalization, the feature importance further translates into finding features that carry most information and usefulness in the prediction task that lead to achieving higher accuracy. A coalitional game is a game where utility is given to a group of players (in our case, nodes or neurons) instead of each agent individually. Let N be the number of agents, which in this work are CNN features (also referred as neurons or nodes). To be specific, let N l to be the number of neurons in a layer l (in unambiguous cases, for clarity we omit the subscript). For every group of players, a coalitional game specifies the payoff the members receive as a group or a coalition. We define a coalition of the neurons N of a layer L as a subset of neurons, C ⊆ N. To assess quantitatively the performance of a group of agents, each coalition is assigned to a real number, which is interpreted as a payoff that a coalition receives from being together. Mathematically, the value of a coalition is given by a characteristic function, which assigns a real number to a set of nodes. Formally, a characteristic function ν: 2 N → R maps each coalition (subset) C ⊆ N to a real number ν(C). Therefore, a coalitional game is defined by a tuple (N, ν), where N is a set of players and ν is a function that assigns payoffs to every coalition of N. A critical component of a coalitional game is specifying the choice of characteristic function that is assigned to a given subset of features. In the case of CNN, the test metric is accuracy which assesses whether the (argmax of) the network output is the correct label averaged over the number of examples. As a , we choose the accuracy on a validation set as the characteristic function, that is, ν(C) = acc(C) and ν(N) = acc(N). The question now remains how to assess the importance of a single feature given the information about the payoffs for each subset of nodes. 
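To make the choice of characteristic function concrete, here is a minimal sketch of how ν(C) = acc(C) could be evaluated for a single convolutional layer. The masking-by-zeroing strategy, the helper name `characteristic`, and the assumption that a coalition indexes output filters of one chosen layer are ours, not details taken from the text.

```python
# Sketch of a characteristic function nu(C): accuracy of the pre-trained model
# when only the filters in coalition C are kept in a chosen layer.
# Assumption (ours): filters outside C are "removed" by zeroing their weights
# and biases on a copy of the model; `model`, `layer_name`, `val_loader` are
# supplied by the user.
import copy
import torch

def characteristic(model, layer_name, coalition, val_loader, device="cpu"):
    pruned = copy.deepcopy(model).to(device).eval()
    layer = dict(pruned.named_modules())[layer_name]   # e.g. an nn.Conv2d
    keep = torch.zeros(layer.out_channels, dtype=torch.bool)
    keep[list(coalition)] = True
    with torch.no_grad():
        layer.weight[~keep] = 0.0
        if layer.bias is not None:
            layer.bias[~keep] = 0.0
        correct = total = 0
        for x, y in val_loader:
            pred = pruned(x.to(device)).argmax(dim=1)
            correct += (pred == y.to(device)).sum().item()
            total += y.numel()
    return correct / total   # the payoff nu(C) for this coalition
```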
To assess the importance of a single node given these payoffs, we employ the concept of the Shapley value, a normative division scheme for the total reward/cost that distributes the total payoff uniquely and in a fair way. Shapley proposes to evaluate each player by the marginal contribution that the player makes to every coalition, averaged over all the coalitions. The marginal contribution of an agent n is the difference between the value of a coalition C that contains n and the value of the coalition C \ n. For example, when a coalition has no members, i.e. is empty, and the neuron n_1 joins the coalition, the value of its marginal contribution is equal to the value of the one-member coalition, as the value of the empty coalition is 0: ν({n_1}) − ν(∅) = ν({n_1}), where C = {n_1}. Subsequently, when another agent n_2 joins this coalition, its marginal contribution is equal to ν({n_1, n_2}) − ν({n_1}). The process continues until all the nodes join the coalition. The coalition of all the nodes is called the grand coalition. The order of nodes, which builds subsequent coalitions up to the grand coalition, can be represented as a permutation of the nodes. For example, in the case of the permutation n_5 n_3 n_7 ... n_N ... n_2, the neuron n_5 creates the first non-empty coalition on its own, and we measure the accuracy of the pretrained model which includes only this one neuron, n_5, in the given layer of the original pre-trained model. Then the two-element coalition n_5 n_3 is formed, corresponding to the two-neuron layer, and so on. All the subsequent nodes join the coalition in the order given by the permutation. There are N! permutations of N nodes, meaning that there are N! different ways to form the grand coalition. To compute the Shapley value of the node n, we compare the accuracy of the architecture before and after adding the node n, that is, the marginal contributions of n (which may be negative) for each of the N! permutations, and divide the sum by the number of permutations. The Shapley value of n is then its averaged marginal contribution. Formally, let π denote a permutation, π(i) the place of the neuron n_i in the permutation π, and C_π(i) the coalition formed by the predecessors of n_i, such that C_π(i) = {n_j ∈ π : π(j) before π(i)}. For example, in the permutation n_5 n_3 n_7 ... n_N ... n_2, the predecessor coalition of n_7 is C_π(7) = {n_5, n_3}. The Shapley value SV_i of the node n_i is thus defined as follows:

SV_i = (1 / N!) Σ_π [ν(C_π(i) ∪ {n_i}) − ν(C_π(i))].     (1)

This formula can also be written in a form that considers sets instead of permutations:

SV_i = Σ_{C ⊆ N \ {n_i}} (|C|! (N − |C| − 1)! / N!) [ν(C ∪ {n_i}) − ν(C)].     (2)

PRACTICAL CONSIDERATIONS
First, the Shapley value is a mathematically rigorous division scheme and, strictly speaking, it has been proposed as the only measure that satisfies four normative criteria regarding fair payoff distribution. These criteria are: efficiency, where the total gain is distributed among the agents; symmetry, where if i and j are agents such that ν(C ∪ {i}) = ν(C ∪ {j}) for each coalition C of N, then SV(i) = SV(j); null player payoff, where an agent who contributes nothing to every coalition obtains zero individual payoff; and linearity, where ν(C) = ν_1(C) + ν_2(C) for every coalition implies SV_{ν_1}(i) + SV_{ν_2}(i) = SV_ν(i). Nevertheless, a characteristic function which satisfies these criteria is not feasible in the case of our application, because we do not have control over the output of the model. As a result, the characteristic function may not be monotone, which violates the first criterion. However, the payoff produced by the Shapley value, although it may not be unique, is a valid cost division which works well in practice.
Second, computing the characteristic function for every subset is combinatorial and takes exponential time complexity. Hence, for large networks, computing Shapley value is computationally infeasible. We propose the following solutions to approximate the optimal solution and obtain a sensible ranking metric based on Shapley value. The first solution entails computing the Shapley value for the subsets no larger than arbitrary k. As a we only compute the synergies that are no larger than k. Intuitively, we assume that that the larger the coalition, the less information is to be obtained from computing the large subsets. The second solution is based on sampling and sampling provides an unbiased estimate of the optimal . Thus, we first sample the characteristic function and then sample the permutations needed for the computations of the Shapley value. What comes next describes our proposal to improve the speed of computation for identifying the neuron ranking in a continuous manner. To infer the neuron ranking in each layer, we propose to make a slight modification in the existing neural network architecture. We introduce a component, the importance switch, denoted by s l for each layer l. Each importance switch is a probability vector of length D l (the output dimension of the lth layer) and D l j s l,j = 1, where s l,j is the jth element of the vector. With this addition, we rewrite the forward pass under a deep neural network model, where the function f (W l, x i) can be the convolution operation for CNNs or simple matrix multiplication for MLPs between the weights W l and the unit x i, Pre-activation followed by a switch s l: Input to the next layer after going through a nonlinearity σ: where • is an element-wise product. Introducing a switch operation between layers in a neural network model was also presented in, although in their case, the switch is a binary random variable (called a gate). The output probability under such networks with L hidden layers for solving classification problems can be written as where g is the softmax operation. A natural choice to model the distribution over the switch is the Dirichlet distribution, which defines a probability distribution over a probability vector. We model each switch as a vector of independent Dirichlet distributed random variables When there is no prior knowledge, i.e., a priori we don't know which feature would be more important for prediction, so we treat them all equally important features by setting the same value to each parameter, i.e., α 0 = α 0 * 1 D l where 1 D l is a vector of ones of length D l. When we apply the same parameter to each dimension, this special case of Dirichlet distribution is called symmetric Dirichlet distribution. In this case, if we set α 0 < 1, this puts the probability mass toward a few components, ing in only a few components that are non-zero, i.e., inducing sparse probability vector. If we set α 0 > 1, all components become similar to each other. We model the posterior over s l as the Dirichlet distribution as well but with asymmetric form to learn a different probability on different elements of the switch (or neurons), using a set of variational parameters (the parameters for the posterior). We denote the variational parameters by φ l, where each element of the vector can choose any values above 0. 
Our posterior distribution over the switch is, hence, defined by q_φ_l(s_l) = Dir(s_l | φ_l). With this parametric form of prior and posterior, we optimize the variational parameters φ_l over each layer's importance switch by maximizing the variational lower bound, with all the weights frozen to the pre-trained values:

L(φ_l) = ∫ q_φ_l(s_l) log p(D|s_l) ds_l − KL(q_φ_l(s_l) || p(s_l)).     (8)

We do this variational learning for each layer's importance switch sequentially, from the input layer to the last layer before the output layer. Computing the gradient of equation 8 with respect to φ_l requires computing the gradients of the integral (the first term on the RHS) and also of the KL divergence term (the second term on the RHS), as both depend on the value of φ_l. The KL divergence between two Dirichlet distributions can be written in closed form. However, the first term is tricky. As described in, the usual reparameterization trick, i.e., replacing a probability distribution with an equivalent parameterization of it by using a deterministic and differentiable transformation of some fixed base distribution, does not work. For instance, in an attempt to find a reparameterization, one could adopt the representation of a k-dimensional Dirichlet random variable as a normalized vector of Gamma random variables, s_{l,j} = y_j / (Σ_{j'} y_{j'}), where y_j is Gamma distributed with shape parameter φ_{l,j} and scale parameter 1. However, this does not allow us to detach the randomness from the parameters, as the parameter still appears in the Gamma distribution; hence one needs to sample from the posterior every time the variational parameters are updated, which is costly and time-consuming. Existing methods suggest either explicitly or implicitly computing the gradients of the inverse CDF of the Gamma distribution during training to decrease the variance of the gradients (e.g., and among many). The length of the importance switch we consider is at most on the order of hundreds, in which case the variance of the gradients does not affect the speed of convergence as significantly as in other cases such as Latent Dirichlet Allocation (LDA). Hence, when training the importance switch in each layer, we use the analytic mean of the Dirichlet random variable to make a point estimate of the integral, ∫ q_φ_l(s_l) log p(D|s_l) ds_l ≈ log p(D|s̃_l), where s̃_{l,j} = φ_{l,j} / Σ_{j'=1}^{D_l} φ_{l,j'}, which allows us to directly compute the gradient of this quantity without sampling from the posterior. As illustrated in Section 4, this approximation performs well with relatively low-dimensional switches.

Game-theoretic vs. probabilistic neuron ranking. How are the game-theoretic and the probabilistic neuron ranking methods related? Consider a vector of random variables r that describes a certain ranking for a certain number of neurons. The predictive distribution on the test data D*, in this case, can be obtained by integrating out a plausible distribution over the neuron rankings, which we denote by f(r):

p(D*) = ∫ p(D*|r) f(r) dr
      ≈ p(D*|r̂),

where the second line is a reasonable approximation if the distribution is highly peaked around the optimal ranking r̂, meaning that there is indeed such an optimal ranking with high confidence. This predictive distribution specifies the likelihood of the test data given that optimal ranking. In multi-class classification, this predictive distribution is the likelihood of the true labels given a classifier's predictions.
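To make the training procedure above concrete, the following is a minimal sketch of learning one layer's switch parameters with the analytic-mean point estimate and the closed-form Dirichlet KL. The softplus parameterization that keeps φ positive, the optimizer settings, and the helper `forward_with_switch` (which should run the frozen network while multiplying the chosen layer's pre-activation element-wise by s, as in the forward pass described earlier) are our assumptions, not details from the text.

```python
# Sketch: learn one layer's importance switch by maximizing the lower bound,
# using the analytic Dirichlet mean s~ = phi / sum(phi) instead of sampling.
# All network weights are assumed frozen; only phi is trained.
import torch
import torch.nn.functional as F
from torch.distributions import Dirichlet, kl_divergence

def train_switch(forward_with_switch, train_loader, num_units,
                 alpha0=0.1, epochs=10, lr=1e-2, device="cpu"):
    phi_raw = torch.zeros(num_units, device=device, requires_grad=True)
    prior = Dirichlet(torch.full((num_units,), alpha0, device=device))  # sparse symmetric prior
    opt = torch.optim.Adam([phi_raw], lr=lr)
    for _ in range(epochs):
        for x, y in train_loader:
            phi = F.softplus(phi_raw) + 1e-6          # keep variational params > 0
            s_tilde = phi / phi.sum()                  # analytic Dirichlet mean
            logits = forward_with_switch(x.to(device), s_tilde)
            nll = F.cross_entropy(logits, y.to(device), reduction="sum")
            kl = kl_divergence(Dirichlet(phi), prior)  # closed-form Dirichlet KL
            loss = nll + kl                            # negative lower bound
            opt.zero_grad()
            loss.backward()
            opt.step()
    phi = F.softplus(phi_raw).detach() + 1e-6
    return torch.argsort(phi, descending=True)         # ranking of the layer's units
```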
When we compute the Shapley value, we use an "approximate" version of this predictive likelihood, namely, we introduce a max operation for choosing a single label that has the maximum probability for that class, and then see if the label matches the true label, ing in the frequency of correct labeling as an accuracy measure. Hence, both methods attempt to find the best ranking in order to "maximize the likelihood of data". While the Shapley optimization attempts to maximize the test data likelihood approximately in a combinatorial manner, the switch optimization attempts to maximize the training data likelihood with a regularization as in equation 8 in a continuous manner. In this section we present experimental based on the two proposed approaches for CNN features ranking, the Shapley value and the importance switch methods. The tests have been performed on LeNet-5 trained on MNIST and FashionMNIST, and VGG-16 trained on CIFAR-10. To compute the rankings for both methods the same pretrained model is used. To compute the Shapley value of each neuron in the trained model, we remove the subsets of features (both weights and biases) and test the network on a validation set. As mentioned, the accuracy is the payoff for a given group of features. The computation of the complete set of payoffs is of combinatorial nature and therefore we compute the power set for layers up to 25 nodes. To account for this limitation and to illustrate better the proposed method, we choose to limit the number of nodes in the pretrained LeNet-5 architecture to 10-20-100-25. When using the trained VGG-16, we use the same number of filters in each layer as in the original architecture. For the layers with larger number of features, we use one of the two methods to compute marginal contributions. The first method uses equation 1 and only limits the number of coalitions we consider to compute SV. The second method uses equation 2 the accuracy change between two subsets which differ by a single node. Both node and the first combination were sampled uniformly at random. When we learn the importance switches, we load the same train model which has been used to compute the Shapley value and then only add parameters for switches and trained them per layer with fixing all the other network parameters to the trained values. We run the training of the importance switches for 300 epochs, however, in practice, even a few iterations is sufficient to distinguish important nodes from the rest. We start with comparing the learnt ranks of the two methods. As summarized in Table 1, the first observation is that for the model pretrained both on MNIST and FashionMNIST both methods have identified similar nodes to be the most important. The similarity is more significant for smaller layers where over 50% of top nodes (here we consider top-5 nodes for clarity and top-10 nodes for the large fc1 layer) and in three out of six cases the top two nodes are the same. Significantly for conv2 on MNIST the group of four nodes are the same, and as far as fc2 on FashionMNIST is concerned, the top five nodes chosen from the set of 25 nodes are the same (the probability to select this subset at random is 6 · 10 −5), showing that the methods agree when it comes to both convolutional and fully connected layers. 
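For the layers that are too large for exact enumeration, the marginal contributions can be estimated by sampling, as described above. The sketch below is one such permutation-sampling estimator of equation 1; the `payoff` callable stands in for the characteristic function (e.g., validation accuracy with only the given coalition kept), and the sample count is an arbitrary placeholder rather than the value used in the experiments.

```python
# Monte-Carlo estimate of per-neuron Shapley values: sample random orderings
# and accumulate each neuron's marginal contribution nu(C + {i}) - nu(C).
import random
from collections import defaultdict

def sampled_shapley(num_neurons, payoff, num_permutations=200, seed=0):
    rng = random.Random(seed)
    totals = defaultdict(float)
    neurons = list(range(num_neurons))
    for _ in range(num_permutations):
        rng.shuffle(neurons)
        coalition = []
        prev_value = payoff(coalition)          # value of the empty coalition
        for i in neurons:
            value = payoff(coalition + [i])
            totals[i] += value - prev_value     # marginal contribution of neuron i
            coalition.append(i)
            prev_value = value
    return {i: totals[i] / num_permutations for i in range(num_neurons)}
```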
For brevity, please look at the Appendix for the rankings of the less significant nodes, but what is notable is that both methods also identified similar groups of unimportant nodes, particularly in fc2 where every node indexed higher than 9 (as compared to nodes indexed lower than 9) scored very low for both methods. When it comes to larger layers, the methods are however more discrepant (yet the common nodes are still found, as seen in the case of the fc1 layer). The differences may also come from the inexact computation of the Shapley value.

| Layer | Method | FashionMNIST | MNIST |
|---|---|---|---|
| conv1 | SH | **0**, **7**, **6**, **5**, 1 | **1**, **8**, 7, 4, **6** |
| conv1 | IS | **0**, **7**, **5**, 9, **6** | **8**, **1**, 3, 9, **6** |
| conv2 | SH | **5**, 10, 0, **13**, 9 | **2**, **8**, **9**, **19**, 4 |
| conv2 | IS | **5**, 8, **13**, 14, 15 | **9**, **2**, **8**, **19**, 6 |
| fc1 | SH | **60**, **13**, 43, 88, **94**, 20, 70, 44, 32, 64 | **56**, 86, **25**, 64, 33, 17, **23**, **96**, **52**, 81 |
| fc1 | IS | **94**, 7, 50, 92, **13**, 25, **60**, 40, 75, 45 | **25**, **96**, 58, **56**, 88, **52**, **23**, 43, 30, 4 |
| fc2 | SH | **5**, **1**, **8**, **9**, **7** | **1**, **7**, 2, 3, 0 |
| fc2 | IS | **1**, **7**, **9**, **5**, **8** | **7**, **1**, 4, 6, 9 |

Table 1: Rankings of filters for the Shapley value (SH) and the importance switches (IS) methods on a four-layer network, 10-20-100-25. For each layer the top five neurons are shown (top ten for the fc1 layer); the numbers in bold indicate the common top neurons across both methods.

Interpretability: One of the main aims of this work has been to better understand the process of learning in convolutional neural networks. Building on previous works which visualized CNN filters, we want to add an extra component and interpret the visual features by means of the filter rankings. In Figure 1, we visualize feature maps produced by the filters of the first convolutional layer. Knowing the important filters allows us to ponder what features the network learns and deems useful. For instance, in the MNIST digits, the learnt filters identify local parts of the image (such as lower and upper parts of the digit '2' and opposite parts of the digit '0'). The interesting observation is that the most important features, on the one hand, complement each other (such as complementing parts of the digit '0' or the dog in CIFAR-10) but, on the other hand, overlap to seemingly reinforce their importance. Finally, the important features appear smoother as compared to unimportant ones, which outline the object with no particular focus.

Compression: The consequence of the feature ranking is that some of the nodes within each layer are less significant than others and, as argued in the network compression literature, the network may do as well without them. The procedure for the compression experiments follows from the previous experiments. Given the rankings, we prune the neurons from the bottom of the ranking and then retrain the network. We run the tests for both of the methods on several different architectures. In all the trainings we use SGD with a learning rate decreasing from 0.1 to 0.001, momentum of 0.9, weight decay of 5e-4, and early stopping. Table 2 presents the results for LeNet-5 trained on MNIST and VGG-16 trained on CIFAR-10. For LeNet-5, the compressed architecture has 17K parameters, which is less than all the other methods, and 137K FLOPs, which is second to FDOO, which however has over three times more parameters. The method fares relatively well also on VGG, producing an architecture which is smaller than others in the earlier layers but larger in later layers (the second proposed architecture has overall the least number of parameters, at the cost of some performance, though).
We hope to test a larger set of possible architectures in the future and devise a way to combine both rankings for a more optimal compression. Nevertheless, the results show that the neuron ranking method is adequate for condensing both small and large architectures.

Table 2: The structured pruning of LeNet-5 and VGG-16.

The final experiment demonstrates how our method compares to the magnitude pruning commonly done in the compression literature. In Figure 2, our method (blue trace) outperforms magnitude pruning methods (L1 and L2 norm over weights). No retraining is used in this case, to show how the proposed method retains the relevant neurons that affect the predictive accuracy. We would like to emphasize that the magnitude approaches may be more appropriate for unstructured pruning, where single weights are removed based on their magnitude. However, in the case of pruning entire channels, considering a norm of weights may be too simplistic, as the interactions between the weights within a channel are rather complex. The proposed new paradigm treats the channels as whole units that directly contribute to task generalization. In summary, this work suggests a theory that the learnable CNN features contain an inherent hierarchy where some of the features are more significant than others. This multidisciplinary work, which builds on top of probabilistic and game-theoretical concepts, proposes two methods to produce a feature ranking and select the most important features in a CNN. The striking observation is that the different methods lead to similar results and allow us to distinguish important nodes with greater confidence. The ranking methods provide an informed way to build a slim network architecture where the significant nodes remain and unimportant nodes are discarded. A future search for further methods which allow quantifying neuron importance is the next step to develop the understanding of feature importance in CNNs.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
B1eWu0NtDS
We propose CNN neuron ranking with two different methods and show their consistency in producing the result which allows to interpret what network deems important and compress the network by keeping the most relevant nodes.
This work presents a modular and hierarchical approach to learn policies for exploring 3D environments. Our approach leverages the strengths of both classical and learning-based methods, by using analytical path planners with learned mappers, and global and local policies. Use of learning provides flexibility with respect to input modalities (in mapper), leverages structural regularities of the world (in global policies), and provides robustness to errors in state estimation (in local policies). Such use of learning within each module retains its benefits, while at the same time, hierarchical decomposition and modular training allow us to sidestep the high sample complexities associated with training end-to-end policies. Our experiments in visually and physically realistic simulated 3D environments demonstrate the effectiveness of our proposed approach over past learning and geometry-based approaches. Navigation is a critical task in building intelligent agents. Navigation tasks can be expressed in many forms, for example, point goal tasks involve navigating to a specific coordinates and semantic navigation involves finding path to a specific scene or object. Such tasks may need to be performed in known (already mapped) or unknown environments. Irrespective of the task or the setting, a core problem in navigation is exploration, i.e., how to efficiently visit as much of the environment. This is useful for pre-mapping in known environments, or actually solving tasks in known environments. Recent work from has used end-to-end learning to tackle this problem. Their motivation is three fold: a) learning provides flexibility to the choice of input modalities (classical systems rely on observing geometry through use of specialized sensors, while learning systems can infer geometry directly from RGB images), b) use of learning can improve robustness to errors in explicit state estimation, and c) learning can effectively leverage structural regularities of the real world, leading to more efficient behavior in previously unseen environments. This lead to their design of an end-to-end trained neural network based policy that processed raw sensory observations to directly output actions that the agent should execute. While use of learning for exploration is well motivated, casting the exploration problem as an end-to-end learning problem has its own drawbacks. Learning about mapping, state-estimation and path-planning purely from data in an end-to-end manner can be prohibitively expensive. Consequently, past end-to-end learning work for exploration from relies on use of imitation learning and many millions of frames of experience, but still performs worse than classical methods that don't require any training at all. This motivates our work. In this paper, we investigate alternate formulations of employing learning for exploration that retains the advantages that learning has to offer, but doesn't suffer from the drawbacks of full-blown end-to-end learning. Our key conceptual insight is that use of learning for leveraging structural regularities of indoor environments, robustness to state-estimation errors, and flexibility with respect to input modalities, happens at different time scales and can thus be factored out. This motivates use of learning in a modular and hierarchical fashion inside of what one may call a'classical navigation pipeline'. 
This results in navigation policies that can work with raw sensory inputs such as RGB images, are robust to state estimation errors, and leverage regularities of real-world layouts. This yields extremely competitive performance over both geometry-based methods and recent learning-based methods, while at the same time requiring a fraction of the number of samples. More specifically, our proposed exploration architecture comprises a learned mapper (and pose estimator), a global policy, and a local policy, which are interfaced via the map and an analytical path planner. The learned mapper, together with the pose estimator, produces free-space maps from input RGB images. The global policy consumes this free-space map and employs learning to exploit structural regularities in the layout of real-world environments to produce long-term goals. These long-term goals are used to generate short-term goals for the local policy (using a geometric path planner). This local policy uses learning to directly map raw RGB images to actions that the agent should execute. Use of learning in the mapper provides flexibility with respect to input modality, the learned global policy can exploit regularities in the layout of real-world environments, and the learned local policies can use visual feedback to exhibit more robust behaviour. At the same time, the hierarchical and modular design and the use of analytical planning significantly cut down the search space during training, leading to better performance as well as sample-efficient learning. We demonstrate our proposed approach in visually and physically realistic simulators for the task of geometric exploration (visit as much area as possible). We work with the Habitat simulator from. While Habitat is already visually realistic (it uses real-world scans from ; as environments), we improve its physical realism by using actuation and odometry sensor noise models that we collected by conducting physical experiments on a real mobile robot. Our experiments and ablations in this realistic simulation reveal the effectiveness of our proposed approach for the task of exploration. A straightforward modification of our method also tackles point-goal navigation tasks, and won the AI Habitat challenge at CVPR 2019 across all tracks. Navigation has been well studied in classical robotics. There has been a renewed interest in the use of learning to arrive at navigation policies for a variety of tasks. Our work builds upon concepts in classical robotics and learning for navigation. We survey related works below. Navigation Approaches. Classical approaches to navigation break the problem into two parts: mapping and path planning. Mapping is done via simultaneous localization and mapping (; ;), by fusing information from multiple views of the environment. While sparse reconstruction can be done well with monocular RGB images (Mur-Artal and Tardós, 2017), dense mapping is inefficient or requires specialized scanners such as Kinect. Maps are used to compute paths to goal locations via path planning (; ;). These classical methods have inspired recent learning-based techniques. Researchers have designed neural network policies that reason via spatial representations (; ;), topological representations (a; b), or use differentiable and trainable planners (; ; ;). Our work furthers this research: we study a hierarchical and modular decomposition of the problem and employ learning inside these components instead of end-to-end learning. Research also focuses on incorporating semantics in SLAM.
Exploration in Navigation. While a number of works focus on passive map-building, path planning and goal-driven policy learning, a much smaller body of work tackles the problem of active SLAM, i.e., how to actively control the camera for map building. We point readers to for a detailed survey, and summarize major themes below. Most such works frame this problem as a Partially Observable Markov Decision Process (POMDP) that are approximately solved , and or seek to find a sequence of actions that minimizes uncertainty of maps . Another line of work, explores by picking vantage points (such as on the frontier between explored and unexplored regions (; ; ;) ). Recent works from; Savinov et al. (2018b); attack this problem via learning. Our proposed modular policies unify the last two lines of research, and we show improvements over representative methods from both these lines of work. Exploration has also been studied more generally in RL in the context of exploration-exploitation trade-off (; ; ;). Hierarchical and Modular Policies. Hierarchical RL (; ;) is an active area of research, aimed at automatically discovering hierarchies to speed up learning. However, this has proven to be challenging, and thus most work has resorted to using hand-defining hierarchies. For example in context of navigation, and design modular policies for navigation, that interface learned policies with low-level feedback controllers. Hierarchical and modular policies have also been used for Embodied Question Answering (a; ; b). We follow the exploration task setup proposed by where the objective is to maximize the coverage in a fixed time budget. The coverage is defined as the total area in the map known to be traversable. Our objective is train a policy which takes in an observation s t at each time step t and outputs a navigational action a t to maximize the coverage. We try to make our experimental setup in simulation as realistic as possible with the goal of transferring trained policies to the real world. We use the Habitat simulator with the Gibson and Matterport (MP3D) datasets for our experiments. Both Gibson and Matterport datasets are based on real-world scene reconstructions are thus significantly more realistic than synthetic SUNCG dataset used for past research on exploration ). In addition to synthetic scenes, prior works on learning-based navigation have also assumed simplistic agent motion. Some works limit agent motion on a grid with 90 degree rotations (; ; . Other works which implement fine-grained control, typically assume unrealistic agent motion without any noise . This consequently leads to another unrealistic assumption of knowledge of perfect agent pose. This is because since the motion is simplistic, it becomes trivial to estimate the agent pose in most cases even if it is not assumed to be known. The reason behind these assumptions on agent motion and pose is that motion and sensor noise models are not known. In order to relax both these assumptions, we collect motion and sensor data in the real-world and implement more realistic agent motion and sensor noise models in the simulator as described in the following subsection. We represent the agent pose by (x, y, o) where x and y represent the xy co-ordinate of the agent measured in metres and o represents the orientation of the agent in radians (measured counterclockwise from x-axis). Without loss of generality, assume agents starts at p 0 =. Now, suppose the agent takes an action a t. 
Each action is implemented as a control command on a robot. Let the corresponding control command be Δu_a = (x_a, y_a, o_a). Let the agent pose after the action be p_1 = (x_1, y_1, o_1). The actuation noise (ε_act) is the difference between the actual agent pose (p_1) after the action and the intended agent pose (p_0 + Δu_a): ε_act = p_1 − (p_0 + Δu_a). Mobile robots typically have sensors which estimate the robot pose as it moves. Let the sensor estimate of the agent pose after the action be p_1' = (x_1', y_1', o_1'). The sensor noise (ε_sen) is given by the difference between the sensor pose estimate (p_1') and the actual agent pose (p_1): ε_sen = p_1' − p_1. In order to implement the actuation and sensor noise models, we would like to collect data for navigational actions in the Habitat simulator. We use three default navigational actions: Forward: move forward by 25cm, Turn Right: on-the-spot rotation clockwise by 10 degrees, and Turn Left: on-the-spot rotation counter-clockwise by 10 degrees. The control commands are implemented as u_Forward = (0.25, 0, 0), u_Right = (0, 0, −10π/180) and u_Left = (0, 0, 10π/180). In practice, a robot can also rotate slightly while moving forward and translate a bit while rotating on the spot, creating rotational actuation noise in the forward action and, similarly, translational actuation noise in the on-the-spot rotation actions.

Figure 1: Overview of our approach. We use a neural network based Mapper that predicts a map and agent pose estimate from incoming RGB observations and sensor readings. This map is used by a Global policy to output a long-term goal, which is converted to a short-term goal using an analytic path planner. A Local Policy is trained to navigate to this short-term goal.

We use a LoCoBot to collect data for building the actuation and sensor noise models. We use the pyrobot API along with ROS to implement the control commands and get sensor readings. For each action a, we fit a separate Gaussian Mixture Model for the actuation noise and the sensor noise, making a total of 6 models. Each component in these Gaussian mixture models is a multivariate Gaussian in 3 variables, x, y and o. For each model, we collect 600 datapoints. The number of components in each Gaussian mixture model is chosen using cross-validation. We implement these actuation and sensor noise models in the Habitat simulator for our experiments. We will open-source the collected data and the noise models, along with their implementation in the Habitat simulator. We propose a modular navigation model, 'Active Neural Mapping'. It consists of three components: a Mapper, a Global policy and a Local policy. The Mapper predicts the map of the environment and estimates the pose of the agent based on the current observations and previous estimates. The Global policy uses the predicted map and agent pose to produce a long-term goal. The long-term goal is converted into a short-term goal using path planning. The Local policy takes navigational actions based on the current observation to reach the short-term goal. See Figure 1 for an overview. Map Representation. The Active Neural Mapping model internally maintains a spatial map, m_t, and the pose of the agent, x_t. The spatial map, m_t, is a 2 × M × M matrix where M × M denotes the map size and each element in this spatial map corresponds to a cell of size 25 cm² (5cm × 5cm) in the physical world.
Each element in the first channel denotes the probability of an obstacle at the corresponding location and each element in the second channel denotes the probability of that location being explored. A cell is considered to be explored when it is known to be free space or an obstacle. The spatial map is initialized with all zeros at the beginning of an episode, m 0 = 2×M ×M. The pose x t ∈ R 3 denotes the x and y coordinates of the agent and the orientation of the agent at time t. The agent always starts at the center of the map facing east at the beginning of the episode, Mapper. The Mapper (f M ap) takes in the current RGB observation, s t, the current and last sensor reading of the agent pose x t−1:t, last agent pose and map estimates,x t−1, m t−1 and outputs an updated map, m t, and the current agent pose estimate,x t, (see Figure 2): m t,x t = f M ap (s t, x t−1:t,x t−1, m t−1 |θ M), where θ M denote the trainable parameters of the Mapper. It consists of two learned components, a Projection Unit and a Pose Estimator. The Projection Unit outputs a egocentric top-down 2D spatial map, p ego t 2×V ×V (where V is the vision range), predicting the obstacles and the explored area in the current observation. The Pose Estimator predicts the agent pose (x t) based on past pose estimate (x t−1) and last two egocentric map predictions (p ego t−1:t). It essentially compares the current egocentric map prediction to the last egocentric map prediction transformed to the current frame to predict the pose change between the two maps. The egocentric map from the Projection Unit is transformed to a geocentric map based on the pose estimate given by the Pose Estimator and then aggregated with the previous spatial map (m t−1) to get the current map(m t). More implementation details of the Mapper are provided in the Appendix. Global Policy. The Global Policy takes h t ∈ 4×M ×M as input, where the first two channels of h t are the spatial map m t given by the Mapper, the third channel represents the current agent position estimated by the Mapper, the fourth channel represents the visited locations, i.e. ∀i, j ∈ {1, 2, . . ., m}: We perform two transformations before passing h t to the Global Policy model. The first transformation subsamples a window of size 4 × G × G around the agent from h t. The second transformation performs max pooling operations to get an output of size 4 × G × G from h t. Both the transformations are stacked to form a tensor of size 8 × G × G and passed as input to the Global Policy model. The Global Policy uses a 5-layer convolutional neural network to predict a long-term goal, g We use the Habitat simulator with the Gibson and Matterport (MP3D) datasets for our experiments. Both Gibson and Matterport consist of scenes which are 3D reconstructions of real-world environments, however Gibson is collected using a different set of cameras, consists mostly of office spaces while Matterport consists of mostly homes with a larger average scene area. We will use Gibson as our training domain, and use Matterport for domain generalization experiments. The observation space consists of RGB images of size 3 × 128 × 128 and base odometry sensor readings of size 3 × 1 denoting the change in agent's x-y coordinates and orientation. The actions space consists of three actions: move_forward, turn_left, turn_right. Both the base odometry sensor readings and the agent motion based on the actions are noisy. 
They are implemented using the sensor and actuation noise models based on real-world data as discussed in Section 3.1. We follow the Exploration task setup proposed by where the objective to maximize the coverage in a fixed time budget. Coverage is the total area in the map known to be traversable. We define a traversable point to be known if it is in the field-of-view of the agent and is less than 3m away. We use two evaluation metrics, the absolute coverage area in m 2 (Cov) and percentage of area explored in the scene (% Cov), i.e. ratio of coverage to maximum possible coverage in the corresponding scene. During training, each episode lasts for a fixed length of 1000 steps. We use train/val/test splits provided by Savva et al. 2019 for both the datasets. Note that the set of scenes used in each split is disjoint, which means the agent is tested on new scenes never seen during training. Gibson test set is not public but rather held out on an online evaluation server for the Pointgoal task. We use the validation as the test set for comparison and analysis for the Gibson domain. We do not use the validation set for hyper-parameter tuning. To analyze the performance of all the models with respect to the size of the scene, we split the Gibson validation set into two parts, a small set of 10 scenes with explorable area ranging from 16m 2 to 36m 2, and a large set of 4 scenes with traversable area ranging from 55m 2 to 100m 2. Note that the size of the map is usually much larger than the traversable area, with the largest map being about 23m long and 11m wide. Training Details. We train our model for the Exploration task in the Gibson domain and transfer it to the Matterport domain. The Projection Unit is trained to predict egocentric projections, and the Pose Estimator is trained to predict agent pose using supervised learning. The ground truth egocentric projection is computed using geometric projections from ground truth depth. The Global and Local policies are both trained using Reinforcement Learning. The reward for the Global policy is the increase in coverage and the reward for the Local policy is the reduction in Euclidean distance to the short-term goal. All the modules are trained simultaneously. Their parameters are independent, but the data distribution is inter-dependent. Based on the actions taken by the Local policy, the future input to Mapper changes, which in turn changes the map input to the Global policy and consequently affects the short-term goal given to the Local Policy. For more architecture and hyperparameter details please refer to the supplementary material. We will also open-source the code. Baselines. We use a range of end-to-end Reinforcement Learning (RL) methods as baselines: RL + 3LConv: An RL Policy with 3 layer convolutional network followed by a GRU as described by Savva et al. 2019 which is also identical to our Local Policy architecture. RL + Res18: A RL Policy initialized with ResNet18 pre-trained on ImageNet followed by a GRU. RL + Res18 + AuxDepth: This baseline is adapted from Mirowski et al. 2017 who use depth prediction as an auxiliary task. We use the same architecture as our Mapper (conv layers from ResNet18) with one additional deconvolutional layer for Depth prediction followed by 3 layer convolution and GRU for the policy. RL + Res18 + ProjDepth: This baseline is adapted form who project the depth image in an egocentric top-down in addition to the RGB image as input to the RL policy. 
Since we do not have depth as input, we use the architecture from RL + Res18 + AuxDepth for depth prediction and project the predicted depth before passing it to the 3-layer Conv and GRU policy. For all the baselines, we also feed a 32-dimensional embedding of the sensor pose reading to the GRU along with the image-based representation. This embedding is also learnt end-to-end using RL. All baselines are trained using PPO with the increase in coverage as the reward. We train the proposed ANM model and all the baselines for the Exploration task with 10 million frames on the Gibson training set. The results are shown in Table 1. The results on the Gibson Val set are averaged over a total of 994 episodes in 14 different unseen scenes. The proposed model achieves an average absolute and relative coverage of 31.379 m² / 0.924 as compared to 24.958 m² / 0.766 for the best baseline. This indicates that the proposed model is more efficient and effective at exhaustive exploration as compared to the baselines. This is because our hierarchical policy architecture reduces the horizon of the long-term exploration problem: instead of taking tens of low-level navigational actions, the Global policy only takes a few long-term goal actions. We also report the domain generalization performance on the Exploration task in Table 1 (see shaded region), where all models trained on Gibson are evaluated on the Matterport domain. ANM leads to higher domain generalization performance (57.228 m² / 0.405 vs 41.549 m² / 0.297). The absolute coverage is higher for the Matterport domain as it consists of larger scenes on average. Some visualizations of policy executions are provided in the Appendix. In Fig. 3, we plot the relative coverage (% Cov) of all the models as the episode progresses on the large and small scene sets, as well as the overall Gibson Val set. The plot on the small scene set shows that ANM is able to almost completely explore the small scenes in around 500 steps, whereas the baselines are only able to explore 85% of the small scenes in 1000 steps (see Fig. 3 center). This indicates that ANM explores more efficiently in small scenes. The plot on the large scene set shows that the performance gap between ANM and the baselines widens as the episode progresses (see Fig. 3 left). Looking at the behaviour of the baselines, we saw that they often got stuck in local areas. This behaviour indicates that they are unable to remember explored areas over long time horizons and are ineffective at long-term planning. On the other hand, ANM uses a Global policy on the map, which allows it to have memory of explored areas over long time horizons, and to plan effectively to reach distant long-term goals by leveraging analytical planners. As a result, it is able to explore effectively in large scenes with long episode lengths. Local Policy. An alternative to learning a Local Policy is to have a deterministic policy which follows the plan given by the Planner. As shown in Table 2, the ANM model performs much worse without the Local Policy. The Local Policy is designed to adapt to small errors in mapping. We observed the Local policy overcoming both false positives and false negatives encountered in mapping. For example, the Mapper could sometimes wrongly predict a carpet as an obstacle. In this case, the planner would plan to go around the carpet. However, if the short-term goal is beyond the carpet, the Local policy can understand that the carpet is not an obstacle based on the RGB observation and learn to walk over it.
Similarly, we also observed cases where the Mapper did not predict small obstacles very close to the agent, as they were not in the field-of-view due to the height of the camera. In this case, the planner would plan a path through the obstacle, where the deterministic policy would get stuck. Since the Local policy is recurrent, it learns to navigate around these obstacles by getting feedback from the environment: when the policy tries to move forward but cannot, it gets feedback that there must be an obstacle. Global Policy. An alternative to learning a Global Policy for sampling long-term goals is to use a classical algorithm called Frontier-based exploration. A frontier is defined as the boundary between the explored free space and the unexplored space. Frontier-based exploration essentially samples points on this frontier as goals to explore the space. There are different variants of Frontier-based exploration based on the sampling strategy. Prior work compares different sampling strategies and finds that sampling the point on the frontier closest to the agent gives the best results empirically. We implement this variant and use it in place of our learned Global Policy. As shown in Table 2, the Frontier-based exploration policy performs worse than the Global Policy. We observed that Frontier-based exploration spent a lot of time exploring corners or small areas behind furniture. In contrast, the trained Global policy ignored small spaces and chose distant long-term goals, which led to exploring more area. Pose Estimation. A difference between ANM and the baselines is that ANM uses additional supervision to train the Pose Estimator. In order to understand whether the performance gain comes from this additional supervision, we remove the Pose Estimator from ANM and just use the input sensor reading as our pose estimate. Results in Table 2 show that ANM still outperforms the baselines even without the Pose Estimator. We also tried passing the ground truth pose as input to the baselines instead of the sensor reading. The performance of the baselines did not improve with the ground truth pose. We deploy the trained ANM policy on a Locobot in the real world. In order to match the real-world observations to the simulator observations as closely as possible, we change the simulator input configuration to match the camera intrinsics on the Locobot. This includes the camera height and the horizontal and vertical fields-of-view. In Figure 4, we show an episode of ANM exploring the living area in an apartment. The figure shows that the policy transfers well to the real world and is able to explore the environment effectively. The long-term goals sampled by the Global policy (shown by blue circles on the map) are often towards open spaces in the explored map, which indicates that it is learning to exploit the structure of the map. We also transfer ANM to the Pointgoal task, in which the agent must navigate to a specified goal location. In addition to Success rate (Succ), Success weighted by (normalized inverse) Path Length, or SPL, is also used as a metric for evaluation, as proposed by Anderson et al. 2018. All the baseline models trained for the task of Exploration either need to be retrained or at least fine-tuned to be transferred to the Pointgoal task. The modularity of ANM provides another advantage: it can be transferred to the Pointgoal task without any additional training. For transfer to the Pointgoal task, we simply fix the Global policy to always output the PointGoal coordinates as the long-term goal and use the Local policy and Mapper trained for the Exploration task.
We found that an ANM policy trained on exploration, when transferred to the Pointgoal task performed better than several RL and Imitation Learning baselines trained on the Pointgoal task. The transferred ANM model achieves a success rate/SPL of 0.950/0.846 as compared to 0.827/0.730 for the best baseline model on Gibson val set. The ANM model also generalized significantly better than the baselines to harder goals and to the Matterport domain. In addition to better performance, ANM was also 10 to 75 times more sample efficient than the baselines. This transferred ANM policy was also the winner of the CVPR 2019 Habitat Pointgoal Navigation Challenge for both RGB and RGB-D tracks among over 150 submissions from 16 teams. These highlight a key advantage of our model that it allows us to transfer the knowledge of obstacle avoidance and control in low-level navigation across tasks, as the Local Policy and Mapper are task-invariant. More details about the Pointgoal experiments, baselines, including domain and goal generalization on the Pointgoal task are provided in the supplementary material. In this paper, we proposed a modular navigational model which leverages the strengths of classical and learning-based navigational methods. We show that the proposed model outperforms prior methods on both Exploration and PointGoal tasks and shows strong generalization across domains, goals, and tasks. In future, the proposed model can be extended to complex semantic tasks such as Semantic Goal Navigation and Embodied Question Answering by using a semantic Mapper which creates multi-channel map capturing semantic properties of the objects in the environment. The model can also be combined with prior work on Localization to relocalize in a previously created map for efficient navigation in subsequent episodes. PointGoal has been the most studied task in recent literature on navigation where the objective is to navigate to a goal location whose relative coordinates are given as input in a limited time budget. We follow the PointGoal task setup from Savva et al. 2019, using train/val/test splits for both Gibson and Matterport datasets. Note that the set of scenes used in each split is disjoint, which means the agent is tested on new scenes never seen during training. Gibson test set is not public but rather held out on an online evaluation server 5. We report the performance of our model on the Gibson test set when submitted to the online server but also use the validation set as another test set for extensive comparison and analysis. We do not use the validation set for hyper-parameter tuning. Savva et al. 2019 identify two measures to quantify the difficulty of a PointGoal dataset. The first is the average geodesic distance (distance along the shortest path) to the goal location from the starting location of the agent, and the second is the average geodesic to Euclidean distance ratio (GED ratio). The GED ratio is always greater than or equal to 1, with higher ratio ing in harder episodes. The train/val/test splits in Gibson dataset come from the same distribution of having similar average geodesic distance and GED ratio. In order to analyze the performance of the proposed model on out-of-set goal distribution, we create two harder sets, Hard-Dist and Hard-GEDR. In the Hard-Dist set, the geodesic distance to goal is always more than 10m and the average geodesic distance to the goal is 13.48m as compared to 6.9/6.5/7.0m in train/val/test splits . 
Hard-GEDR set consists of episodes with an average GED ratio of 2.52 and a minimum GED ratio of 2.0 as compared to average GED ratio 1.37 in the Gibson val set. We also follow the episode specification from Savva et al. 2019. Each episode ends when either the agent takes the stop action or at a maximum of 500 timesteps. An episode is considered a success when the final position of the agent is within 0.2m of the goal location. In addition to Success rate (Succ), we also use Success weighted by (normalized inverse) Path Length or SPL as a metric for evaluation for the PointGoal task as proposed by Anderson et al. 2018. In Table 3, we show the performance of the proposed model transferred to the PointGoal task along with the baselines trained on the PointGoal task with the same amount of data (10million frames). The proposed model achieves a success rate/SPL of 0.950/0.846 as compared to 0.827/0.730 for the best baseline model on Gibson val set. We also report the performance of the proposed model trained from scratch on the PointGoal task for 10 million frames. The indicate that the performance of ANM transferred from Exploration is comparable to ANM trained on PointGoal. This highlights a key advantage of our model that it allows us to transfer the knowledge of obstacle avoidance and control in low-level navigation across tasks, as the Local Policy and Mapper are task-invariant. On the left, we show some successful trajectories which indicate that the model is effective at long distance goals with high GED ratio. On the right, we show a failure case due to mapping error. Sample efficiency. RL models are typically trained for more than 10 million samples. In order to compare the performance and sample-efficiency, we trained the best performing RL model (RL + Res18 + GRU + ProjDepth) for 75 million frames and it achieved a Succ/SPL of 0.678/0.486. ANM reaches the performance of 0.789/0.703 SPL/Succ at only 1 million frames. These numbers indicate that ANM achieves > 75× speedup as compared to the best RL baseline. Table 3 (see shaded region), we evaluate all the baselines and ANM trained on the PointGoal task in the Gibson domain on the test set in Matterport domain as well as the harder goal sets in Gibson. We also transfer ANM trained on Exploration in Gibson on all the 3 sets. The show that ANM outperforms all the baselines at all generalization sets. Interestingly, RL based methods almost fail completely on the Hard-Dist set. We also analyze the performance of the proposed model as compared to two best baselines CMP and IL + Res18 + GRU as a function of geodesic distance to goal and GED ratio in Figure 6. The performance of the baselines drops faster as compared to ANM, especially with increase in goal distance. This indicates that end-toend learning methods are effective at short-term navigation but struggle when long-term planning is required to reach a distant goal. In Figure 9, we show some example trajectories of the ANM model along with the predicted map. The successful trajectories indicate that the model exhibits strong backtracking behavior which makes it effective at distant goals requiring long-term planning. In order to implement the actuation and sensor noise models, we would like to collect data for navigational actions in the Habitat simulator. We use three default navigational actions: Forward: move forward by 25cm, Turn Right: on the spot rotation clockwise by 10 degrees, and Turn Left: on the spot rotation counter-clockwise by 10 degrees. 
The control commands are implemented as u_Forward = (0.25, 0, 0), u_Right = (0, 0, −10·π/180) and u_Left = (0, 0, 10·π/180). In practice, a robot can also rotate slightly while moving forward and translate a bit while rotating on the spot, creating rotational actuation noise in the forward action and, similarly, translational actuation noise in the on-the-spot rotation actions. We use a Locobot to collect data for building the actuation and sensor noise models. We use the pyrobot API along with ROS to implement the control commands and get sensor readings. In order to get an accurate agent pose, we use a Hokuyo UST-10LX Scanning Laser Rangefinder (LiDAR), which is very precise in our scenario as we take static readings in 2D. We install the LiDAR on the Locobot by replacing the arm with the LiDAR. We note that the Hokuyo UST-10LX Scanning Laser Rangefinder is an expensive sensor: it costs $1600, whereas the whole Locobot costs less than $2000 without the arm. Using expensive sensors can improve the performance of a model; however, for a method to be scalable, it should ideally work with cheaper sensors too. In order to demonstrate the scalability of our method, we use the LiDAR only to collect the data for building the noise models, and not for training or deploying navigation policies in the real world. For the sensor estimate, we use the Kobuki base odometry available on the Locobot. We approximate the LiDAR pose estimate to be the true pose of the agent, as it is orders of magnitude more accurate than the base sensor. For each action, we collect 600 datapoints from both the base sensor and the LiDAR, making a total of 3600 datapoints (600 × 3 × 2). We use 500 datapoints per action to fit the actuation and sensor noise models and the remaining 100 datapoints for validation. For each action a, the LiDAR gives us accurate pose samples and the base sensor gives us noisy pose readings over the 600 trials. The deviation of the LiDAR-measured pose change from the commanded pose change provides samples of the actuation noise for action a, and the deviation of the base-sensor pose change from the LiDAR-measured pose change provides samples of the sensor noise. For each action a, we fit a separate Gaussian Mixture Model for the actuation noise and for the sensor noise using these samples, making a total of 6 models. We fit Gaussian mixture models with the number of components ranging from 1 to 20 and pick the model with the highest likelihood on the validation set. Each component in these Gaussian mixture models is a multi-variate Gaussian in 3 variables: x, y and orientation o. We implement these actuation and sensor noise models in the Habitat simulator for our experiments. The Mapper (f_Map) takes in the current RGB observation s_t ∈ R^(3×H×W), the current and last sensor readings of the agent pose x_{t−1:t}, and the map at the previous time step m_{t−1} ∈ R^(2×M×M), and outputs an updated map m_t ∈ R^(2×M×M) (see Figure 2): m_t = f_Map(s_t, x_{t−1:t}, m_{t−1} | θ_M, p_{t−1}), where θ_M denotes the trainable parameters and p_{t−1} denotes internal representations of the Mapper. The Mapper can be broken down into two parts, a Projection Unit (f_Pr) and a Pose Estimator Unit (f_PE). The Projection Unit outputs an egocentric top-down 2D spatial map p_t^ego ∈ R^(2×V×V) (where V is the vision range), predicting the obstacles and the explored area in the current observation: p_t^ego = f_Pr(s_t | θ_Pr), where θ_Pr are the parameters of the Projection Unit. It consists of ResNet18 convolutional layers to produce an embedding of the observation. This embedding is passed through two fully-connected layers followed by 3 deconvolutional layers to get the first-person top-down 2D spatial map prediction.
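To make this architecture concrete, the following is a minimal PyTorch-style sketch of such a Projection Unit. The specific layer widths (e.g., the 1024-dimensional embedding) and the feature-map size assumed for a 128 × 128 input are illustrative assumptions, not the exact configuration used in the paper.

```python
import torch.nn as nn
import torchvision

class ProjectionUnit(nn.Module):
    """Predicts a 2-channel egocentric top-down map (obstacles, explored area)
    from an RGB frame. Layer widths here are illustrative assumptions."""
    def __init__(self, vision_range=64):
        super().__init__()
        resnet = torchvision.models.resnet18(pretrained=True)
        # Keep only the convolutional trunk of ResNet18 (drop avgpool and fc).
        self.encoder = nn.Sequential(*list(resnet.children())[:-2])
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(512 * 4 * 4, 1024), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(1024, 16 * 16 * 16), nn.ReLU(), nn.Dropout(0.5),
        )
        # Three deconvolutions upsample 16x16 -> vision_range x vision_range.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 2, 3, stride=1, padding=1), nn.Sigmoid(),
        )

    def forward(self, rgb):                        # rgb: (B, 3, 128, 128)
        feat = self.encoder(rgb)                   # (B, 512, 4, 4)
        feat = self.fc(feat).view(-1, 16, 16, 16)  # (B, 16, 16, 16)
        return self.decoder(feat)                  # (B, 2, 64, 64)
```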
Now, we would like to add the egocentric map prediction (p_t^ego) to the geocentric map from the previous time step (m_{t−1}). In order to transform the egocentric map to the geocentric frame, we need the pose of the agent in the geocentric frame. The sensor reading x_t is typically noisy. Thus, we have a Pose Estimator to correct the sensor reading and give an accurate estimate of the agent's geocentric pose. In order to estimate the pose of the agent, we first calculate the relative pose change (dx) from the last time step using the sensor readings at the current and last time step (x_{t−1}, x_t). Then we apply a Spatial Transformation to the egocentric map prediction at the last frame (p_{t−1}^ego) based on the relative pose change (dx): p'_{t−1} = f_ST(p_{t−1}^ego | dx). Note that the parameters of this Spatial Transformation are not learnt, but calculated from the pose change (dx). This transforms the projection at the last step into the current egocentric frame of reference. If the sensor were accurate, p'_{t−1} would highly overlap with p_t^ego. The Pose Estimator Unit takes p'_{t−1} and p_t^ego as input and predicts the relative pose change: dx̂_t = f_PE(p'_{t−1}, p_t^ego | θ_PE). The intuition is that by looking at the egocentric predictions of the last two frames, the pose estimator can learn to predict the small translation and/or rotation that would align them better. The predicted relative pose change is then added to the last pose estimate to get the final pose estimate: x̂_t = x̂_{t−1} + dx̂_t. Finally, the egocentric spatial map prediction is transformed to the geocentric frame using the current pose prediction of the agent (x̂_t) via another Spatial Transformation and aggregated with the previous spatial map (m_{t−1}) using a channel-wise pooling operation: m_t = m_{t−1} + f_ST(p_t^ego | x̂_t). Putting everything together: m_t = f_Map(s_t, x_{t−1:t}, m_{t−1} | θ_M) = m_{t−1} + f_ST(p_t^ego | x̂_{t−1} + f_PE(f_ST(p_{t−1}^ego | x_{t−1:t}), f_Pr(s_t | θ_Pr) | θ_PE)), where θ_Pr, θ_PE ∈ θ_M and p_t^ego = f_Pr(s_t | θ_Pr). We use PyTorch for implementing and training our model. The Projection Unit in the Mapper consists of ResNet18 convolutional layers, followed by 2 fully-connected layers trained with dropout of 0.5, followed by 3 deconvolutional layers. The Pose Estimator consists of 3 convolutional layers followed by 2 fully-connected layers. The Global Policy is a 5-layer fully-convolutional network, while the Local Policy consists of a 3-layer convolutional network followed by a GRU. The Global and Local policies are both trained using Reinforcement Learning. The reward for the Global policy is the increase in coverage, and the reward for the Local policy is the reduction in Euclidean distance to the short-term goal. The implementation of the Global and Local policies is based on Kostrikov 2018. In addition to the RGB observation, the Local policy receives the relative distance and angle to the short-term goal, the current timestep, and the last action as input. We bin the relative distance (bin size increasing with distance), relative angle (5-degree bins) and current timestep (30-timestep bins) before passing them through embedding layers. This kind of discretization has been used previously for RL policies and improved the sample efficiency as compared to passing the continuous values as input directly. For fair comparison, we use the same discretization for all the baselines as well. We train all the components with 72 parallel threads, with each thread using one of the 72 scenes in the Gibson training set. This leads to a batch size of 72 for training the Mapper.
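Putting the Mapper equations above together, the sketch below illustrates one map update, including the pose correction x̂_t = x̂_{t−1} + dx̂_t and the aggregation into the geocentric map. The helper functions relative_pose, spatial_transform and spatial_transform_to_global are hypothetical placeholders standing in for the pose arithmetic and f_ST, and element-wise max is assumed for the channel-wise pooling; this is a schematic, not the authors' implementation.

```python
import torch

def mapper_step(rgb, x_prev, x_curr, m_prev, prev_ego, pose_hat_prev,
                projection_unit, pose_estimator, spatial_transform):
    """One Mapper update, mirroring m_t = m_{t-1} + f_ST(p_t^ego | x_hat_t).

    `spatial_transform(map, pose)` warps a 2xVxV map by a (dx, dy, dtheta)
    pose change; its parameters are computed from the pose, not learned.
    """
    # Egocentric prediction for the current frame: p_t^ego = f_Pr(s_t).
    p_ego = projection_unit(rgb)

    # Relative pose change according to the (noisy) sensor readings.
    dx_sensor = relative_pose(x_prev, x_curr)      # assumed helper

    # Warp last frame's prediction into the current egocentric frame.
    p_prev_warped = spatial_transform(prev_ego, dx_sensor)

    # Pose Estimator corrects the sensor reading by aligning the two maps.
    dx_hat = pose_estimator(p_prev_warped, p_ego)
    pose_hat = pose_hat_prev + dx_hat              # x_hat_t = x_hat_{t-1} + dx_hat_t

    # Transform to the geocentric frame and aggregate with the running map
    # (the paper's channel-wise pooling; element-wise max is assumed here).
    p_global = spatial_transform_to_global(p_ego, pose_hat)  # assumed helper
    m_curr = torch.max(m_prev, p_global)
    return m_curr, p_ego, pose_hat
```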
The Global policy samples a new goal every 25 timesteps. We use PPO for training the Global and Local policies, with 72 parallel threads and a horizon length of 25 steps for the Local policy and 20 steps for the Global policy (20 steps for the Global policy is equivalent to 500 low-level timesteps, as the Global policy samples a new goal every 25 timesteps). We use the Adam optimizer with a learning rate of 0.0001 for training both units in the Mapper, and Adam with a learning rate of 0.00025 for training the Global and Local policies. We use a discount factor of γ = 0.99, an entropy coefficient of 0.001, and a value loss coefficient of 0.5 for training both the Global and Local policies. The input frame size is 128 × 128 and the vision range for the Mapper is V = 64, i.e. 3.2m (each cell is 5cm in length). Since there are no parameters dependent on the map size, it can be adaptive. We train with a map size of M = 960 (equivalent to 48m). A map of size 48m × 48m is large enough for all scenes in the Gibson val set. We use an adaptive map size for Pointgoal evaluation such that the goal lies within the central 50% of the map, to handle even larger maps in the unseen test set. For the Exploration task, we train and test with a constant M = 960. For the Global policy in the Exploration task, the size of the Global Policy input is G = 240.
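For reference, the training hyper-parameters reported in this section can be gathered into a single configuration; the grouping and key names below are purely illustrative, not the authors' actual configuration file.

```python
# Illustrative consolidation of the hyper-parameters reported above.
TRAIN_CONFIG = {
    "num_threads": 72,                 # one Gibson training scene per thread
    "frame_size": (128, 128),
    "map_size_cells": 960,             # M = 960 cells -> 48m at 5cm per cell
    "cell_size_m": 0.05,
    "vision_range_cells": 64,          # V = 64 cells -> 3.2m
    "global_input_size": 240,          # G = 240 for the Exploration task
    "global_goal_interval": 25,        # Global policy samples a goal every 25 steps
    "ppo": {
        "local_horizon": 25,
        "global_horizon": 20,          # 20 global steps ~ 500 low-level steps
        "gamma": 0.99,
        "entropy_coef": 0.001,
        "value_loss_coef": 0.5,
        "lr_policies": 2.5e-4,         # Adam, Global and Local policies
    },
    "lr_mapper": 1e-4,                 # Adam, Projection Unit + Pose Estimator
}
```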
HklXn1BKDH
A modular and hierarchical approach to learn policies for exploring 3D environments.
Deep Learning for Computer Vision depends mainly on the source of supervision. Photo-realistic simulators can generate large-scale automatically labeled synthetic data, but introduce a domain gap negatively impacting performance. We propose a new unsupervised domain adaptation algorithm, called SPIGAN, relying on Simulator Privileged Information (PI) and Generative Adversarial Networks (GAN). We use internal data from the simulator as PI during the training of a target task network. We experimentally evaluate our approach on semantic segmentation. We train the networks on real-world Cityscapes and Vistas datasets, using only unlabeled real-world images and synthetic labeled data with z-buffer (depth) PI from the SYNTHIA dataset. Our method improves over no adaptation and state-of-the-art unsupervised domain adaptation techniques. Learning from as little human supervision as possible is a major challenge in Machine Learning. In Computer Vision, labeling images and videos is the main bottleneck towards achieving large scale learning and generalization. Recently, training in simulation has shown continuous improvements in several tasks, such as optical flow BID32, object detection BID31 BID52 BID47 BID36, tracking BID10, pose and viewpoint estimation BID44 BID34 BID46, action recognition BID9, and semantic segmentation BID15 BID39 BID38. However, large domain gaps between synthetic and real domains remain as the main handicap of this type of strategies. This is often addressed by manually labeling some amount of real-world target data to train the model on mixed synthetic and real-world labeled data (supervised domain adaptation). In contrast, several recent unsupervised domain adaptation algorithms have leveraged the potential of Generative Adversarial Networks (GANs) BID14 for pixel-level adaptation in this context BID1 BID45. These methods often use simulators as black-box generators of (x, y) input / output training samples for the desired task. Our main observation is that simulators internally know a lot more about the world and how the scene is formed, which we call Privileged Information (PI). This Privileged Information includes physical properties that might be useful for learning. This additional information z is not available in the real-world and is, therefore, generally ignored during learning. In this paper, we propose a novel adversarial learning algorithm, called SPIGAN, to leverage Simulator PI for GAN-based unsupervised learning of a target task network from unpaired unlabeled real-world data. We jointly learn four different networks: (i) a generator G (to adapt the pixel-level distribution of synthetic images to be more like real ones), (ii) a discriminator D (to distinguish adapted and real images), (iii) a task network T (to predict the desired label y from image x), and (iv) a privileged network P trained on both synthetic images x and adapted ones G(x) to predict their associated privileged information z. Our main contribution is a new method to leverage PI from a simulator via the privileged network P, which acts as an auxiliary task and regularizer to the task network T, the main output of our SPIGAN learning algorithm. We evaluate our approach on semantic segmentation in urban scenes, a challenging real-world task. We use the standard Cityscapes BID6 and Vistas BID33 datasets as target real-world data (without using any of the training labels) and SYNTHIA BID39 as simulator output. 
Although our method applies to any kind of PI that can be predicted via a deep network (optical flow, instance segmentation, object detection, material properties, forces, ...), we consider one of the most common and simple forms of PI available in any simulator: depth from its z-buffer. We show that SPIGAN can successfully learn a semantic segmentation network T using no real-world labels, partially bridging the sim-to-real gap (see Figure 1). SPIGAN also outperforms related state-of-the-art unsupervised domain adaptation methods. The rest of the paper is organized as follows. Section 2 presents a brief review of related works. Section 3 presents our SPIGAN unsupervised domain adaptation algorithm using simulator privileged information. We report our quantitative experiments on semantic segmentation in Section 4, and conclude in Section 5. Domain adaptation (cf. BID7 for a recent review) is generally approached either as domain-invariant learning BID19 BID17 BID11 or as a statistical alignment problem BID50 BID28. Our work focuses on unsupervised adaptation methods in the context of deep learning. This problem consists in learning a model for a task in a target domain (e.g., semantic segmentation of real-world urban scenes) by combining unlabeled data from this domain with labeled data from a related but different source domain (e.g., synthetic data from simulation). The main challenge is overcoming the domain gap, i.e. the differences between the source and target distributions, without any supervision from the target domain. The Domain Adversarial Neural Network (DANN) BID50 BID11 BID12 is a popular approach that learns domain-invariant features by maximizing domain confusion. This approach has been successfully adopted and extended by many other researchers, e.g., BID37. Curriculum Domain Adaptation is a recent evolution for semantic segmentation that reduces the domain gap via a curriculum learning approach (solving simple tasks first, such as the global label distribution in the target domain). Recently, adversarial domain adaptation based on GANs BID14 has shown encouraging results for unsupervised domain adaptation directly at the pixel level. These techniques learn a generative model for source-to-target image translation, including from and to multiple domains BID48 BID45 BID24. In particular, CycleGAN leverages cycle consistency using a forward GAN and a backward GAN to improve the training stability and performance of image-to-image translation. An alternative to GANs is Variational Auto-Encoders (VAEs), which have also been used for image translation. Several related works propose GAN-based unsupervised domain adaptation methods to address the specific domain gap between synthetic and real-world images. SimGAN BID45 leverages simulation for the automatic generation of large annotated datasets, with the goal of refining synthetic images to make them look more realistic. Sadat et al. BID40 effectively leverage synthetic data by treating foreground and background in different manners. Figure 2: SPIGAN learning algorithm from unlabeled real-world images x_r and the unpaired output of a simulator (synthetic images x_s, their labels y_s, e.g. semantic segmentation ground truth, and Privileged Information (PI) z_s, e.g., depth from the z-buffer), modeled as random variables.
Four networks are learned jointly: (i) a generator G(x_s) ∼ x_r, (ii) a discriminator D between G(x_s) = x_f and x_r, (iii) a perception task network T(x_r) ∼ y_r, which is the main target output of SPIGAN (e.g., a semantic segmentation deep net), and (iv) a privileged network P to support the learning of T by predicting the simulator's PI z_s. Similar to our approach, recent methods consider the final recognition task during the image translation process. Closely related to our work, PixelDA BID1 is a pixel-level domain adaptation method that jointly trains a task classifier along with a GAN, using simulation as its source domain but no privileged information. These approaches focus on simple tasks and visual conditions that are easy to simulate, hence having a low domain gap to begin with. On the other hand, BID21 are the first to study semantic segmentation as the task network in adversarial training. A curriculum-learning-style approach has also been used to reduce the domain gap. BID41 conduct domain adaptation by utilizing task-specific decision boundaries with classifiers. BID42 leverage the GAN framework by learning a general representation shared between the generator and segmentation networks. BID5 use a target-guided distillation to encourage the task network to imitate a pretrained model. BID57 propose to combine appearance and representation adaptation. BID49 propose an adversarial learning method to adapt in the output (segmentation) space. BID59 generate pseudo-labels based on confidence scores with a balanced class distribution and propose an iterative self-training framework. Our main novelty is the use of Privileged Information from a simulator in a generic way, by considering a privileged network in our architecture (see Figure 2). We show that, for the challenging task of semantic segmentation of urban scenes, our approach significantly improves performance by augmenting the learning objective with our auxiliary privileged task, especially in the presence of a large sim-to-real domain gap, the main problem in challenging real-world conditions. Our work is inspired by Learning Using Privileged Information (LUPI) BID51, which is linked to distillation BID18 as shown by BID29. LUPI's goal is to leverage additional data only available at training time. For unsupervised domain adaptation from a simulator, there is a lot of potentially useful information about the generation process that could inform the adaptation. However, that information is only available at training time, as we do not have access to the internals of the real-world data generator. Several works have used privileged information at training time for domain adaptation BID2 BID20 BID43 BID13. BID20 leverage RGBD information to help adapt an object detector at the feature level, while BID13 propose a similar concept of modality distillation for action recognition. Inspired by this line of work, we exploit the privileged information from simulators for sim-to-real unsupervised domain adaptation. Our goal is to design a procedure to learn a model (neural network) that solves a perception task (e.g., semantic segmentation) using raw sensory data coming from a target domain (e.g., videos of a car driving in urban environments), without using any ground truth data from the target domain. We formalize this problem as unsupervised domain adaptation from a synthetic domain (source domain) to a real domain (target domain).
The source domain consists of labeled synthetic images together with Privileged Information (PI), obtained from the internal data structures of a simulator. The target domain consists of unlabeled images. The simulated source domain serves as an idealized representation of the world, offering full control of the environment (weather conditions, types of scene, sensor configurations, etc.) with automatic generation of raw sensory data and labels for the task of interest. The main challenge we address in this work is how to overcome the gap between this synthetic source domain and the target domain, to ensure generalization of the task network in the real world without target supervision. Our main hypothesis is that the PI provided by the simulator is a rich source of information to guide and constrain the training of the target task network. The PI can be defined as any information internal to the simulator, such as depth, optical flow, or physical properties of scene components used during simulation (e.g., materials, forces, etc.). We leverage the simulator's PI within a GAN framework, called SPIGAN. Our approach is described in the next section. Let X_s = {(x_s^(i), y_s^(i), z_s^(i)), i = 1 ... N_s} be a set of N_s simulated images x_s with their labels y_s and PI z_s. We describe our approach assuming a unified treatment of the PI, but our method trivially extends to multiple separate types of PI. SPIGAN jointly learns four networks: (i) a generator G(x; θ_G), (ii) a discriminator D(x; θ_D), (iii) a task predictor T(x; θ_T), and (iv) a privileged network P(x; θ_P). The generator G is a mapping function, transforming an image x_s in X_s (source domain) to x_f in X_f (adapted or fake domain). SPIGAN aims to make the adapted domain statistically close to the target domain to maximize the accuracy of the task predictor T(x; θ_T) during testing. The discriminator D is expected to tell the difference between x_f and x_r, playing an adversarial game with the generator until a termination criterion is met (refer to Section 4.1). The target task network T is learned on the synthetic x_s and adapted G(x_s; θ_G) images to predict the synthetic label y_s, assuming the generator presents a reasonable degree of label (content) preservation. This assumption is met for the regime of our experiments. Similarly, the privileged network P is trained on the same input but to predict the PI z, which in turn assumes the generator G is also PI-preserving. During testing, only T(x; θ_T) is needed to do inference for the selected perception task. The main learning goal is to train a model θ_T that can correctly perform a perception task T in the target real-world domain. All models are trained jointly in order to exploit all available information to constrain the solution space. In this way, the PI predicted by the privileged network P is used to constrain the learning of T and to encourage the generator to model the target domain while being label- and PI-preserving. Our joint learning objective is described in the following section. We design a consistent set of loss functions and domain-specific constraints related to the main prediction task T. We optimize the following minimax objective:

min_{θ_G, θ_T, θ_P} max_{θ_D}  α L_GAN + β L_T + γ L_P + δ L_perc

where α, β, γ, δ are the weights for the adversarial loss, task prediction loss, PI regularization, and perceptual regularization respectively, further described below. Adversarial loss L_GAN. Instead of using a standard adversarial loss, we use a least-squares based adversarial loss BID30, which stabilizes the training process and generates better images in our experiments: the discriminator is trained to regress real images x_r ∼ P_r towards 1 and adapted images G(x_s; θ_G), x_s ∼ P_s, towards 0, while the generator is trained to push D(G(x_s; θ_G)) towards 1, where P_r (resp. P_s) denotes the real-world (resp. synthetic) data distribution.
Task prediction loss L_T. We learn the task network by optimizing its loss over both the synthetic images x_s and their adapted versions G(x_s; θ_G). This assumes the generator is label-preserving, i.e., that y_s can be used as a label for both images. Thanks to our joint objective, this assumption is directly encouraged during the learning of the generator through the joint estimation of θ_P, which relates to scene properties captured by the PI. Naturally, different tasks require different loss functions. In our experiments, we consider the task of semantic segmentation and use the standard cross-entropy loss over images of size W × H and a probability distribution over C semantic categories. The total combined loss in the special case of semantic segmentation is therefore the sum of the cross-entropy over the synthetic and adapted images:

L_T(T, G) = E_{x_s ∼ P_s}[ ℓ_CE(x_s, y_s) + ℓ_CE(G(x_s; θ_G), y_s) ],  with
ℓ_CE(x, y) = − Σ_{u,v}^{W,H} Σ_{c=1}^{C} 1_[c = y^(u,v)] log T(x; θ_T)^(u,v,c)

where 1_[a=b] is the indicator function. PI regularization L_P. Similarly, the auxiliary task of predicting PI also requires different losses depending on the type of PI. In our experiments, we use depth from the z-buffer and an ℓ1-norm over the depth predictions for the synthetic and adapted images:

L_P(P, G) = E_{x_s ∼ P_s}[ ‖P(x_s; θ_P) − z_s‖_1 + ‖P(G(x_s; θ_G); θ_P) − z_s‖_1 ]

Perceptual regularization L_perc. To maintain the semantics of the source images in the generated images, we additionally use the perceptual loss BID23 BID3:

L_perc(G) = E_{x_s ∼ P_s}[ ‖φ(x_s) − φ(G(x_s; θ_G))‖ ]

where φ is a mapping from image space to a pre-determined feature space (see Section 4.1 for more details). Optimization. In practice, we follow the standard adversarial training strategy to optimize our joint learning objective (Eq. 1). We alternate between updates to the parameters of the discriminator θ_D, keeping all other parameters fixed, and then fix θ_D and optimize the parameters of the generator θ_G, the privileged network θ_P, and most importantly the task network θ_T. We discuss the details of our implementation, including hyper-parameters, in Section 4.1. We evaluate our unsupervised domain adaptation method on the task of semantic segmentation in a challenging real-world domain for which training labels are not available. We select the public SYNTHIA dataset BID39 as our synthetic source domain, given the availability of automatic annotations and PI. SYNTHIA is a dataset generated from an autonomous driving simulator of urban scenes. These images were generated under different weather and illumination conditions to maximize visual variability. Pixel-wise segmentation and depth labels are provided for each image. In our experiments, we use the SYNTHIA-RAND-CITYSCAPES sequence, which contains semantic segmentation labels that are more compatible with Cityscapes. For the target real-world domains, we use the Cityscapes BID6 and Mapillary Vistas BID33 datasets. Cityscapes is one of the most widely used real-world urban scene image segmentation datasets, with images collected around urban streets in Europe. For this dataset, we use the standard split for training and validation with 2,975 and 500 images respectively. Mapillary Vistas is a larger dataset with a wider variety of scenes, cameras, locations, weather, and illumination conditions. We use 16,000 images for training and 2,000 images for evaluation. During training, none of the labels from the real-world domains are used. In our experiments, we first evaluate adaptation from SYNTHIA to Cityscapes on 16 classes, following the standard evaluation protocol used in Hoffman et al. We then evaluate the effect of using PI by conducting an ablation study with and without PI (depth) during adaptation from SYNTHIA to both Cityscapes and Vistas, on a common 7-category ontology.
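Combining the objective in Eq. 1 with the alternating optimization described above, the following is a minimal PyTorch-style sketch of one SPIGAN training iteration. The network interfaces (G, D, T, P and the feature extractor phi returning tensors), the least-squares targets for the discriminator, and the use of two optimizers are simplifying assumptions rather than the authors' exact implementation; the default loss weights follow the values reported in Section 4.1.

```python
import torch.nn.functional as F

def spigan_step(x_s, y_s, z_s, x_r, G, D, T, P, phi, opt_D, opt_GTP,
                alpha=1.0, beta=0.5, gamma=0.1, delta=0.33):
    """One alternating SPIGAN update: discriminator first, then G/T/P."""
    # --- Discriminator update (least-squares GAN): real -> 1, adapted -> 0 ---
    x_f = G(x_s).detach()
    loss_D = ((D(x_r) - 1) ** 2).mean() + (D(x_f) ** 2).mean()
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # --- Generator / task / privileged update with D fixed ---
    x_f = G(x_s)
    loss_gan = ((D(x_f) - 1) ** 2).mean()                     # fool the discriminator
    # Task loss on synthetic and adapted images (label-preserving assumption).
    loss_task = F.cross_entropy(T(x_s), y_s) + F.cross_entropy(T(x_f), y_s)
    # PI regularization: predict the simulator depth from both versions.
    loss_pi = F.l1_loss(P(x_s), z_s) + F.l1_loss(P(x_f), z_s)
    # Perceptual regularization keeps the adapted image close to the source content.
    loss_perc = F.l1_loss(phi(x_f), phi(x_s))

    loss = alpha * loss_gan + beta * loss_task + gamma * loss_pi + delta * loss_perc
    opt_GTP.zero_grad(); loss.backward(); opt_GTP.step()
    return {"D": loss_D.item(), "G/T/P": loss.item()}
```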
To be consistent with semantic segmentation best practices, we use the standard per-category intersection-over-union (IoU) and the mean intersection-over-union (mIoU) as our main validation metrics. We adapt the generator and discriminator architectures from CycleGAN and BID23. For simplicity, we use a single sim-to-real generator (no cycle consistency) consisting of two down-sampling convolution layers, nine ResNet blocks BID16, and two fractionally-strided convolution layers. Our discriminator is a PatchGAN network with 3 layers. We use the standard FCN8s architecture BID28 for both the task predictor T and the privileged network P, given its ease of training and its acceptance in domain adaptation works BID21. For the perceptual loss L_perc, we follow the implementation in BID3. The feature is constructed by concatenating the activations of a pre-trained VGG19 network BID53 at layers conv1_2, conv2_2, conv3_2, conv4_2, and conv5_2. We set hyper-parameters using a coarse grid search on a small validation set different from the target set. For Cityscapes, we use a subset of the validation set of Vistas, and vice-versa. We found a set of values that are effective across datasets and experiments, which shows they have a certain degree of robustness and generalization. The weights in our joint adversarial loss (Eq. 1) are set to α = 1, β = 0.5, γ = 0.1, δ = 0.33 for the GAN, task, privileged, and perceptual objectives respectively. This confirms that the two most important factors in the objective are the GAN and task losses (α = 1, β = 0.5). This is intuitive, as the goal is to improve the generalization performance of the task network (the task loss being an empirical proxy) across a potentially large domain gap (addressed first and foremost by the GAN loss). The regularization terms are secondary in the objective, stabilizing the training (perceptual loss) and constraining the adaptation process (privileged loss). FIG1 shows an example of our loss curves and the stability of our training. Another critical hyper-parameter for unsupervised learning is the stopping criterion. We observed that the stabilizing effects of the task and privileged losses (Eqs. 3, 5) on the GAN objective (Eq. 2) made a simple rule effective for early stopping: we stop training at the iteration when the discriminator loss is significantly and consistently better than the generator loss (iteration 90 in FIG2). This is inspired by the semi-supervised results of BID8, where effective discriminative adaptation of the task network might not always be linked to the best image generator. We evaluate the methods at two resolutions, 320 × 640 and 512 × 1024. Images are resized to the evaluated size during training and evaluation. During training, we sample crops of size 320 × 320 (resp. 400 × 400) for the lower (resp. higher) resolution experiments. In all adversarial learning cases, we do five steps of the generator for every step of the other networks. The Adam optimizer BID25 is used to adjust all parameters with an initial learning rate of 0.0002 in our PyTorch implementation BID35. Table 2: Semantic Segmentation (per-category and mean IoUs, higher is better) for SYNTHIA adapting to Cityscapes and Vistas. The last column is the ratio of images in the validation set for which we observe negative transfer (lower is better).
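Since per-category IoU and mIoU are the main validation metrics, the snippet below is a small, generic sketch of how they can be computed from a confusion matrix; it is not the evaluation code used in the paper.

```python
import numpy as np

def miou_from_predictions(preds, gts, num_classes, ignore_label=255):
    """Per-category IoU and mean IoU from lists of predicted/ground-truth label maps."""
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    for pred, gt in zip(preds, gts):
        mask = gt != ignore_label
        idx = num_classes * gt[mask].astype(np.int64) + pred[mask].astype(np.int64)
        conf += np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)
    tp = np.diag(conf)
    fp = conf.sum(axis=0) - tp
    fn = conf.sum(axis=1) - tp
    iou = tp / np.maximum(tp + fp + fn, 1)   # guard against empty categories
    return iou, iou.mean()
```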
In this section we present our evaluation of the SPIGAN algorithm in the context of adapting a semantic segmentation network from SYNTHIA to Cityscapes. Depth maps from SYNTHIA are used as PI in the proposed algorithm. We compare our to several state-of-art domain adaptation algorithms, including FCNs in the wild (FCNs wild) BID21, Curriculum DA (CDA), Learning from synthetic data (LSD) BID42, and Class-balanced Self-Training (CBST) BID59.Quantitative for these methods are shown in Table 1 for the semantic segmentation task on the target domain of Cityscapes (validation set). As reference baselines, we include training only on source images and non-adapted labels. We also provide our algorithm performance without the PI for comparison (i.e., γ = 0 in Eq. 1, named "SPIGAN-no-PI").Results show that on Cityscapes SPIGAN achieves state-of-the-art semantic segmentation adaptation in terms of mean IoU. A finer analysis of the attending to individual classes suggests that the use of PI helps to estimate layout-related classes such as road and sidewalk and object-related classes such as person, rider, car, bus and motorcycle. SPIGAN achieves an improvement of 3% in 320 × 640, 1.0% in 512 × 1024, in mean IoU with respect to the non-PI method. This improvement is thanks to the regularization provided by P (x; θ P) during training, which decreases the number of artifacts as shown in Figure 5. This comparison, therefore, confirms our main contribution: a general approach to leveraging synthetic data and PI from the simulator to improve generalization performance across the sim-to-real domain gap. To better understand the proposed algorithm, and the impact of PI, we conduct further experiments comparing SPIGAN (with PI), SPIGAN-no-PI (without PI), and SPIGAN-base (without both PI and perceptual regularization), the task network of SPIGAN trained only on the source domain (FCN source, lower bound, no adaptation), and on the target domain (FCN target, upper bound), all at 320 × 640 resolution. We also include on the Vistas dataset, which presents a more challenging adaptation problem due to the higher diversity of its images. For these experiments, we use a 7 semantic classes ontology to produce a balanced ontology common to the three datasets (SYNTHIA, Cityscapes and Vistas). Adaptation for both target domains are given in Table 2.In addition to the conventional segmentation performance metrics, we also carried out a study to measure the amount of negative transfer, summarized in Table 2. A negative transfer case is defined as a real-world testing sample that has a mIoU lower than the FCN source prediction (no adaptation).As shown in Table 2, SPIGAN-no-PI, including perceptual regularization, performs better than SPIGAN-base in both datasets. The performance is generally improved in all categories, which implies that perceptual regularization effectively stabilizes the adaptation during training. For Cityscapes, the quantitative in Table 2 show that SPIGAN is able to provide dramatic adaptation as hypothesized. SPIGAN improves the mean IoU by 17.1%, with the PI itself providing an improvement of 7.4%. This is consistent with our observation in the previous experiment (Table 1). We also notice that SPIGAN gets significant improvements on "nature", "construction", and "vehicle" categories. In addition, SPIGAN is able to improve the IoU by +15% on the "human" category, a difficult class in semantic segmentation. 
We provide examples of qualitative for the adaptation from SYNTHIA to Cityscapes in Figure 5 and Figure 7.On the Vistas dataset, SPIGAN is able to decrease the domain gap by +4.3% mean IoU. In this case, using PI is crucial to improve generalization performance. SPIGAN-no-PI indeed suffers from negative transfer, with its adapted network performing −13% worse than the FCN source without adaptation. Table 2 shows that 80% of the evaluation images have a lower individual IoU after adaptation in the SPIGAN-no-PI case (vs. 42% in the SPIGAN case).The main difference between the Cityscapes and Vistas is due to the difference in visual diversity between the datasets. Cityscapes is indeed a more visually uniform benchmark than Vistas: it was recorded in a few German cities in nice weather, whereas Vistas contains crowdsourced data from all over the world with varying cameras, environments, and weathers. This makes Cityscapes more amenable to image translation methods (including SPIGAN-no-PI), as can be seen in Figure 5 where a lot of the visual adaptation happens at the color and texture levels, whereas Figure 6 shows that SYNTHIA images adapted towards Vistas contain a lot more artifacts. Furthermore, a larger domain gap is known to increase the risk of negative transfer (cf. BID7). This is indeed what we quantitatively measured in Table 2 and qualitatively confirmed in Figure 6. SPIGAN suffers from similar but less severe artifacts. As shown in Figure 6, they are more consistent with the depth of the scene, which helps addressing the domain gap and avoids the catastrophic failures visible in the SPIGAN-no-PI case. This consistent improvement brought by PI in both of the experiments not only shows that PI imposes useful constraints that promote better task-oriented training, but also implies that PI more robustly guides the training to reduce domain shift. By comparing the on the two different datasets, we also found that all the unsupervised adaptation methods share some similarity in the performance of certain categories. For instance, the "vehicle" category has seen the largest improvement for both Cityscapes and Vistas. This trend is consistent with the well-known fact that "object" categories are easier to adapt than "stuff" BID52. However, the same improvement did not appear in the "human" category mainly because the SYNTHIA subset we used in our experiments contains very few humans. This phenomenon has been recently studied in Sadat BID40. We present SPIGAN, a novel method for leveraging synthetic data and Privileged Information (PI) available in simulated environments to perform unsupervised domain adaptation of deep networks. Our approach jointly learns a generative pixel-level adaptation network together with a target task network and privileged information models. We showed that our approach is able to address large domain gaps between synthetic data and target real-world domains, including for challenging realworld tasks like semantic segmentation of urban scenes. For future work, we plan to investigate SPIGAN applied to additional tasks, with different types of PI that can be obtained from simulation.
rkxoNnC5FQ
An unsupervised sim-to-real domain adaptation method for semantic segmentation using privileged information from a simulator with GAN-based image translation.
Adversarial training is one of the main defenses against adversarial attacks. In this paper, we provide the first rigorous study on diagnosing elements of large-scale adversarial training on ImageNet, which reveals two intriguing properties. First, we study the role of normalization. Batch normalization (BN) is a crucial element for achieving state-of-the-art performance on many vision tasks, but we show it may prevent networks from obtaining strong robustness in adversarial training. One unexpected observation is that, for models trained with BN, simply removing clean images from training data largely boosts adversarial robustness, i.e., 18.3%. We relate this phenomenon to the hypothesis that clean images and adversarial images are drawn from two different domains. This two-domain hypothesis may explain the issue of BN when training with a mixture of clean and adversarial images, as estimating normalization statistics of this mixture distribution is challenging. Guided by this two-domain hypothesis, we show disentangling the mixture distribution for normalization, i.e., applying separate BNs to clean and adversarial images for statistics estimation, achieves much stronger robustness. Additionally, we find that enforcing BNs to behave consistently at training and testing can further enhance robustness. Second, we study the role of network capacity. We find our so-called "deep" networks are still shallow for the task of adversarial learning. Unlike traditional classification tasks where accuracy is only marginally improved by adding more layers to "deep" networks (e.g., ResNet-152), adversarial training exhibits a much stronger demand on deeper networks to achieve higher adversarial robustness. This robustness improvement can be observed substantially and consistently even by pushing the network capacity to an unprecedented scale, i.e., ResNet-638. Adversarial attacks can mislead neural networks to make wrong predictions by adding human imperceptible perturbations to input data. Adversarial training is shown to be an effective method to defend against such attacks, which trains neural networks on adversarial images that are generated on-the-fly during training. Later works further improve robustness of adversarially trained models by mitigating gradient masking (Tramèr et al., 2018), imposing logits pairing , denoising at feature space (b), etc. However, these works mainly focus on justifying the effectiveness of proposed strategies and apply inconsistent pipelines for adversarial training, which leaves revealing important elements for training robust models still a missing piece in current adversarial research. In this paper, we provide the first rigorous diagnosis of different adversarial learning strategies, under a unified training and testing framework, on the large-scale ImageNet dataset . We discover two intriguing properties of adversarial training, which are essential for training models with stronger robustness. First, though Batch Normalization (BN) is known as a crucial component for achieving state-of-the-arts on many vision tasks, it may become a major obstacle for securing robustness against strong attacks in the context of adversarial training. By training such networks adversarially with different strategies, e.g., imposing logits pairing , we observe an unexpected phenomenon -removing clean images from training data is the most effective way for boosting model robustness. 
We relate this phenomenon to the conjecture that clean images and adversarial images are drawn from two different domains. This two-domain hypothesis may explain the limitation of BN when training with a mixture of clean and adversarial images, as estimating normalization statistics on this mixture distribution is challenging. We further show that adversarial training without removing clean images can also obtain strong robustness, if the mixture distribution is well disentangled at BN by constructing different mini-batches for clean images and adversarial images to estimate normalization statistics, i.e., one set of BNs exclusively for adversarial images and another set of BNs exclusively for clean images. An alternative solution for avoiding the mixture distribution at normalization is to simply replace all BNs with batch-unrelated normalization layers, e.g., group normalization, where normalization statistics are estimated on each image independently. These facts indicate that model robustness is highly related to normalization in adversarial training. Furthermore, an additional performance gain is observed by enforcing consistent behavior of BN during training and testing. Second, we find that our so-called "deep" networks (e.g., ResNet-152) are still shallow for the task of adversarial learning, and simply going deeper can effectively boost model robustness. Experiments show that directly adding more layers to "deep" networks only marginally improves accuracy for traditional image classification tasks. In contrast, substantial and consistent robustness improvement is witnessed even by pushing the network capacity to an unprecedented scale, i.e., ResNet-638. This phenomenon suggests that larger networks are encouraged for the task of adversarial learning, as the learning target, i.e., adversarial images, is a more complex distribution than clean images to fit. In summary, our paper reveals two intriguing properties of adversarial training: properly handling normalization is essential for obtaining models with strong robustness; and our so-called "deep" networks are still shallow for the task of adversarial learning. We hope these findings can benefit future research on understanding adversarial training and improving adversarial robustness. Adversarial training. Adversarial training constitutes the current foundation of state-of-the-art defenses against adversarial attacks. It was first developed in early work where both clean images and adversarial images are used for training. Later work proposes to improve robustness further by encouraging the logits from pairs of clean images and their adversarial counterparts to be similar. Instead of using both clean and adversarial images for training, the min-max formulation of adversarial training trains models exclusively on adversarial images. Subsequent works further improve model robustness or accelerate the adversarial training process. However, as these works mainly focus on demonstrating the effectiveness of their proposed mechanisms, a fair and detailed diagnosis of large-scale adversarial training strategies remains a missing piece. In this work, we provide the first detailed diagnosis, which reveals two intriguing properties of training adversarial defenders at scale. Normalization Layers. Normalization is an effective technique to accelerate the training of deep networks.
Different methods are proposed to exploit batch-wise (e.g., BN), layer-wise (e.g., layer normalization) or channel-wise (e.g., instance normalization and group normalization) information for estimating normalization statistics. Different from traditional vision tasks, where BN usually yields stronger performance than other normalization methods, we show that BN may become a major obstacle for achieving strong robustness in the context of adversarial training, and that properly handling normalization is an essential factor for improving adversarial robustness. As inconsistent adversarial training pipelines were applied in previous works, it is hard to identify which elements are important for obtaining robust models. To this end, we provide a unified framework to train and evaluate different models, for the sake of fair comparison. Training Parameters. We use the publicly available adversarial training pipeline to train all models with different strategies on ImageNet. We select ResNet-152 as the baseline network, and apply projected gradient descent (PGD) as the adversarial attacker to generate adversarial examples during training. The hyper-parameters of the PGD attacker are: maximum per-pixel perturbation ε = 16, attack step size α = 1, number of attack iterations N = 30, and the targeted class is selected uniformly at random over the 1000 ImageNet categories. We initialize the adversarial image from the clean counterpart with probability 0.2, or randomly within the allowed ε-cube with probability 0.8. All models are trained for a total of 110 epochs, and we decrease the learning rate by 10× at the 35-th, 70-th, and 95-th epoch. Evaluation. For performance evaluation, we mainly study adversarial robustness (rather than clean image accuracy) in this paper. Specifically, we follow the setting in Xie et al. (2019b), where a targeted PGD attacker is chosen as the white-box attacker to evaluate robustness. The targeted class is selected uniformly at random. We constrain the maximum per-pixel perturbation to ε = 16, set the attack step size to α = 1, and measure robustness by defending against a PGD attacker with 2000 attack iterations (i.e., PGD-2000). As in Xie et al. (2019b), we always initialize the adversarial perturbation from a random point within the allowed ε-cube. We apply these training and evaluation settings by default for all experiments, unless otherwise stated.
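For concreteness, here is a minimal PyTorch-style sketch of a targeted PGD attacker configured with the settings above (ε = 16 on the 0-255 scale, i.e. 16/255 for inputs in [0, 1], step size 1, randomly chosen targets). The exact initialization and clipping details of the authors' pipeline may differ, and the model is assumed to take images in [0, 1] and output logits.

```python
import torch
import torch.nn.functional as F

def targeted_pgd(model, x, target, epsilon=16/255, alpha=1/255, iters=30,
                 init_from_clean_prob=0.2):
    """Targeted PGD within an L_inf ball of radius epsilon around x."""
    if torch.rand(1).item() < init_from_clean_prob:
        x_adv = x.clone()                                        # start from the clean image
    else:                                                        # or from a random point in the cube
        x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0, 1)
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), target)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()                  # descend toward the target class
            x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(0, 1)
    return x_adv.detach()
```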
Besides this baseline, we also study the effectiveness of two recently proposed adversarial training strategies, and provide the results as follows.

Ratio of clean images. Different from the canonical form above, the min-max formulation of adversarial training uses no clean images. We note this min-max type optimization dates back to even earlier work. We hereby investigate the relationship between model robustness and the ratio of clean images used for training. Specifically, for each training mini-batch, we keep adversarial images unchanged, but remove their clean counterparts by 20%, 40%, 60%, 80% and 100%. We report the results in Figure 1. Interestingly, removing a portion of clean images from the training data can significantly improve model robustness, and the strongest robustness is obtained by completely removing clean images from the training set, i.e., it achieves an accuracy of 39.2% against the PGD-2000 attacker, outperforming the baseline model by a large margin of 18.3%.

Adversarial logits pairing. For performance comparison, we also explore the effectiveness of an alternative training strategy, adversarial logits pairing (ALP). Compared with the canonical form above, ALP imposes an additional loss to encourage the logits from the pairs of clean images and adversarial counterparts to be similar. As shown in Figure 2, our re-implemented ALP obtains an accuracy of 23.0% against the PGD-2000 attacker, which outperforms the baseline model by 2.1%. Compared with the strategy of removing clean images, this improvement is much smaller.

Discussion. Given the results above, we conclude that training exclusively on adversarial images is the most effective strategy for boosting model robustness. For example, by defending against the PGD-2000 attacker, the baseline strategy (referred to as 100% adv + 100% clean) obtains an accuracy of 20.9%. Adding a logits pairing loss (referred to as 100% adv + 100% clean, ALP) slightly improves the performance by 2.1%, while completely removing clean images (referred to as 100% adv + 0% clean) boosts the accuracy by 18.3%. We further plot a comprehensive evaluation curve of these three training strategies in Figure 2, by varying the number of PGD attack iterations from 10 to 2000. Surprisingly, only 100% adv + 0% clean can ensure model robustness against strong attacks, i.e., performance becomes asymptotic when allowing the PGD attacker to perform more attack iterations. Training strategies which involve clean images for training are suspected to result in worse robustness if PGD attackers are allowed to perform more attack iterations. In the next section, we will study how to make these training strategies, i.e., 100% adv + 100% clean and 100% adv + 100% clean, ALP, secure their robustness against strong attacks.

Two-domain hypothesis. Compared to feature maps of clean images, Xie et al. (2019b) show that feature maps of their adversarial counterparts tend to be more noisy. Meanwhile, several works demonstrate it is possible to build classifiers to separate adversarial images from clean images. These studies suggest that clean images and adversarial images are drawn from two different domains. This two-domain hypothesis may provide an explanation to the unexpected observation (see Sec. 4.1), and we ask: why does simply removing clean images from the training data largely boost adversarial robustness?

Figure 3: Disentangling the mixture distribution for normalization secures model robustness. Unlike the blue curves in Figure 2, these new curves become asymptotic when evaluating against attackers with more iterations, which indicates that networks using MBN_adv can behave robustly against PGD attackers with different attack iterations, even if clean images are used for training.
As a crucial element for achieving state-of-the-art results on various vision tasks, BN is widely adopted in many network architectures, e.g., Inception, ResNet and DenseNet. The normalization statistics of BN are estimated across different images. However, exploiting batch-wise statistics is a challenging task if input images are drawn from different domains, and networks therefore fail to learn a unified representation on this mixture distribution. Given our two-domain hypothesis, when training with both clean and adversarial images, the usage of BN can be the key issue resulting in the weak adversarial robustness shown in Figure 2.

Based on the analysis above, an intuitive solution arises: accurately estimating normalization statistics should enable models to train robustly even if clean images and adversarial images are mixed in each training mini-batch. To this end, we explore two ways of disentangling the mixture distribution at normalization layers to validate this argument: maintaining separate BNs for clean/adversarial images; or replacing BNs with batch-unrelated normalization layers.

Training with Mixture BN. Current network architectures estimate BN statistics using the mixed features from both clean and adversarial images, which leads to weak model robustness as shown in Figure 2. Xie et al. (2019a) propose that properly decoupling the normalization statistics for adversarial training can effectively boost image recognition. Here, to study model robustness, we apply Mixture BN (MBN), which disentangles the mixed distribution via constructing different mini-batches for clean and adversarial images for accurate BN statistics estimation (illustrated in Figure 4), i.e., one set of BNs exclusively for adversarial images (referred to as MBN_adv), and another set of BNs exclusively for clean images (referred to as MBN_clean). We do not change the structure of other layers. We verify the effectiveness of this new architecture with two (previously less robust) training strategies, i.e., 100% adv + 100% clean and 100% adv + 100% clean, ALP.

At inference time, whether an image is adversarial or clean is unknown. We thereby measure the performance of networks by applying either MBN_adv or MBN_clean separately. The results are shown in Table 1. We find the performance is strongly related to how BN is trained: when using MBN_clean, the trained network achieves nearly the same clean image accuracy as the whole network trained exclusively on clean images; when using MBN_adv, the trained network achieves nearly the same adversarial robustness as the whole network trained exclusively on adversarial images. Other factors, like whether ALP is applied for training, only cause subtle differences in performance. We further plot an extensive robustness evaluation curve of different training strategies in Figure 3. Unlike Figure 2, we observe that networks using MBN_adv can now secure their robustness against strong attacks, e.g., the robustness is asymptotic when increasing attack iterations from 500 to 2000.

The results in Table 1 suggest that BN statistics characterize different model performance. For a better understanding, we randomly sample 20 channels in a residual block and plot the corresponding running statistics of MBN_clean and MBN_adv in Figure 5. We observe that clean images and adversarial images induce significantly different running statistics, though these images share the same set of convolutional filters for feature extraction.
This observation further supports that clean images and adversarial images come from two different domains, and that current networks fail to learn a unified representation on these two domains. Interestingly, we also find that adversarial images lead to larger running mean and variance than clean images. This phenomenon is also consistent with the observation that adversarial images produce noisy patterns/outliers in the feature space (Xie et al., 2019b). As a side note, this MBN structure is also used as a practical trick for training better generative adversarial networks (GANs): it has been suggested to construct each mini-batch with only real or only generated images when training discriminators, as generated images and real images belong to different domains at an early training stage. However, unlike our situation, where BN statistics estimated on different domains remain divergent after training, a successful training of GANs, i.e., one able to generate natural images with high quality, usually learns a unified set of BN statistics on real and generated images.

Training with batch-unrelated normalization layers. Instead of applying the MBN structure to disentangle the mixture distribution, we can also train networks with batch-unrelated normalization layers, which avoid exploiting the batch dimension to calculate statistics, for the same purpose. We choose Group Normalization (GN) for this experiment, as GN can reach a comparable performance to BN on various vision tasks. GN divides channels into groups and computes the normalization statistics within each group. By replacing all BNs with GNs, the mixture training strategy 100% adv + 100% clean can now ensure robustness against strong attacks, i.e., the model trained with GN achieves 39.5% accuracy against PGD-500, and increasing attack iterations to 2000 only causes a marginal performance drop of 0.5% (39.0% accuracy against PGD-2000). Exploring other batch-unrelated normalizations in adversarial training remains future work. (A minimal sketch of both disentangling options, separate BNs and GN replacement, is given at the end of this subsection.)

Table 2: Enforcing a consistent behavior of BN at the training stage and the testing stage significantly boosts adversarial robustness. * denotes that running statistics are used in the last 10 training epochs.

Exceptional cases. There are some situations where models directly trained with BN can also ensure their robustness against strong attacks, even if clean images are included for adversarial training. Our experiments show that constraining the maximum perturbation of each pixel to a smaller value, e.g., ε = 8, is one of these exceptional cases. Prior works also show that adversarial training with clean images can secure robustness on small datasets, i.e., MNIST, CIFAR-10 and Tiny ImageNet. Intuitively, generating adversarial images on these much simpler datasets or under a smaller perturbation constraint induces a smaller gap between the two domains, therefore making it easier for networks to learn a unified representation on clean and adversarial images. Nonetheless, in this paper, we stick to the standard protocol in prior work and Xie et al. (2019b), where adversarial robustness is evaluated on ImageNet with the perturbation constraint ε = 16.
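Below is a minimal PyTorch sketch of the two disentangling options referenced above: a mixture-BN layer that keeps separate statistics for clean and adversarial mini-batches, and a helper that swaps BN for batch-unrelated GN. The class and function names and the routing flag are illustrative assumptions, not the authors' released code.

import torch.nn as nn

class MixBatchNorm2d(nn.Module):
    # Keeps two sets of BN statistics so clean and adversarial batches never mix.
    def __init__(self, num_features):
        super().__init__()
        self.bn_clean = nn.BatchNorm2d(num_features)
        self.bn_adv = nn.BatchNorm2d(num_features)
        self.route = "adv"  # set to "clean" or "adv" before each forward pass

    def forward(self, x):
        return self.bn_adv(x) if self.route == "adv" else self.bn_clean(x)

def replace_bn_with_gn(module, num_groups=32):
    # Recursively swap every BatchNorm2d for GroupNorm, whose statistics are
    # estimated per image and per group of channels (batch-unrelated).
    for name, child in module.named_children():
        if isinstance(child, nn.BatchNorm2d):
            # num_features must be divisible by num_groups for GroupNorm.
            setattr(module, name, nn.GroupNorm(num_groups, child.num_features))
        else:
            replace_bn_with_gn(child, num_groups)
    return module

In training, one would tag all MixBatchNorm2d modules with route = "clean" before the clean forward pass and route = "adv" before the adversarial one; at inference the appropriate set of statistics is selected, as in Table 1.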
As the concept of "batch" is not legitimate at inference time, BN behaves differently at training and testing : during training, the mean and variance are computed on each mini-batch, referred to as batch statistics; during testing, there is no actual normalization performed -BN uses the mean and variance pre-computed on the training set (often by running average) to normalize data, referred to as running statistics. For traditional classification tasks, batch statistics usually converge to running statistics by the end of training, thus (practically) making the impact of this inconsistent behavior negligible. Nonetheless, this empirical assumption may not hold in the context of adversarial training. We check this statistics matching of models trained with the strategy 100% adv + 0% clean, where the robustness against strong attacks is secured. We randomly sample 20 channels in a residual block, and plot the batch statistics computed on two randomly sampled mini-batches, together with the pre-computed running statistics. In Figure 6, interestingly, we observe that batch mean is almost equivalent to running mean, while batch variance does not converge to running variance yet on certain channels. Given this fact, we then study if this inconsistent behavior of BN affects model robustness in adversarial training. A heuristic approach. Instead of developing a new training strategy to make batch statistics converge to running statistics by the end of training, we explore a more heuristic solution: applying pre-computed running statistics for model training during the last 10 epochs. We report the performance comparison in Table 2. By enabling BNs to behave consistently at training and testing, this approach can further boost the model robustness by 3.0% with the training strategy 100% adv + 0% clean. We also successfully validate the generality of this approach on other two robust training strategies. More specifically, it can improve the model robustness under the training strategies MBN adv, 100% adv + 100% clean and MBN adv, 100% adv + 100% clean, ALP by 1.6% and 2.8%, respectively. These suggest that model robustness can be benefited from a consistent behavior of BN at training and testing. Moreover, we note this approach does not incur any additional training budgets. On the importance of training convolutional filters adversarially. In Section 4.2, we study the performance of models where the mixture distribution is disentangled for normalization -by applying either MBN clean or MBN adv, the trained models achieve strong performance on either clean images or adversarial images. This suggests that clean and adversarial images share the same convolutional filters to effectively extract features. We further explore whether the filters learned exclusively on adversarial images can extract features effectively on clean images, and vice versa. We first take a model trained with the strategy 100% adv + 0% clean, and then finetune BNs using only clean images for a few epochs. Interestingly, we find the accuracy on clean images can be significantly boosted from 62.3% to 73%, which is only 5.9% worse than the standard training setting, i.e., 78.9%. These indicates that convolutional filters learned solely on adversarial images can also be effectively applied to clean images. 
On the importance of training convolutional filters adversarially. In Section 4.2, we study the performance of models where the mixture distribution is disentangled for normalization: by applying either MBN_clean or MBN_adv, the trained models achieve strong performance on either clean images or adversarial images. This suggests that clean and adversarial images share the same convolutional filters to effectively extract features. We further explore whether the filters learned exclusively on adversarial images can extract features effectively on clean images, and vice versa. We first take a model trained with the strategy 100% adv + 0% clean, and then finetune its BNs using only clean images for a few epochs. Interestingly, we find the accuracy on clean images can be significantly boosted from 62.3% to 73%, which is only 5.9% worse than the standard training setting, i.e., 78.9%. These results indicate that convolutional filters learned solely on adversarial images can also be effectively applied to clean images. However, we find the opposite direction does not work: convolutional filters learned on clean images cannot extract features robustly on adversarial images (e.g., 0% accuracy against PGD-2000 after finetuning BNs with adversarial images). This phenomenon indicates the importance of training convolutional filters adversarially, as such filters can also extract features from clean images effectively. The findings here are also related to the discussion of robust/non-robust features in concurrent work; interested readers are recommended to refer to that work for more details.

Limitation of adversarial training. We note our adversarially trained models exhibit a performance tradeoff between clean accuracy and robustness: the training strategies that achieve strong model robustness usually result in relatively low accuracy on clean images. For example, 100% adv + 0% clean, MBN_adv, 100% adv + 100% clean and MBN_adv, 100% adv + 100% clean, ALP only report 62.3%, 64.4% and 65.9% clean image accuracy. By replacing BNs with GNs, 100% adv + 100% clean achieves much better clean image accuracy, i.e., 67.5%, as well as maintaining strong robustness. We note that this tradeoff has also been observed in prior work, which shows it is possible to make adversarially trained models exhibit a better tradeoff between clean accuracy and robustness. This direction deserves future attention.

As discussed in Section 4.2, current networks are not capable of learning a unified representation on clean and adversarial images. This may suggest that the "deep" network we used, i.e., ResNet-152, still underfits the complex distribution of adversarial images, which motivates us to apply larger networks for adversarial training. We simply instantiate the concept of larger networks by going deeper, i.e., adding more residual blocks. For traditional image classification tasks, the benefits brought by adding more layers to "deep" networks are diminishing, e.g., the blue curve in Figure 7 shows that the improvement of clean image accuracy becomes saturated once the network depth goes beyond ResNet-200. For a better illustration, we train deeper models exclusively on adversarial images and observe a possible underfitting phenomenon, as shown in Figure 7. In particular, we apply the heuristic policy in Section 4.3 to mitigate the possible effects brought by BN. We observe that the adversarial learning task exhibits a strong "thirst" for deeper networks to obtain stronger robustness. For example, increasing depth from ResNet-152 to ResNet-338 significantly improves the model robustness by 2.4%, while the corresponding improvement in the "clean" training setting (referred to as 0% adv + 100% clean) is only 0.5%. Moreover, this observation still holds even when pushing the network capacity to an unprecedented scale, i.e., ResNet-638. These results indicate that our so-called "deep" networks (e.g., ResNet-152) are still shallow for the task of adversarial learning, and larger networks should be used for fitting this complex distribution. Besides our findings on network depth, other work shows that increasing network width also substantially improves network robustness. These empirical observations also corroborate recent theoretical studies which argue that robust adversarial learning needs much more complex classifiers. Besides adversarial robustness, we also observe a consistent performance gain on clean image accuracy by increasing network depth (as shown in Table 7).
Our deepest network, ResNet-638, achieves an accuracy of 68.7% on clean images, outperforming the relatively shallow network ResNet-152 by 6.1%.

In this paper, we reveal two intriguing properties of adversarial training at scale: conducting normalization in the right manner is essential for training robust models on large-scale datasets like ImageNet; and our so-called "deep" networks are still shallow for the task of adversarial learning. Our discoveries may also be inherently related to our two-domain hypothesis, namely that clean images and adversarial images are drawn from different distributions. We hope these findings can help fellow researchers better understand adversarial training and further improve adversarial robustness.

In the main paper, we note that our reproduced ALP significantly outperforms the results reported in the original ALP paper, as well as those in an independent study. The main differences between our version and the original ALP implementation lie in the parameter settings, and are detailed as follows:

• learning rate decay: the original ALP decays the learning rate every two epochs at an exponential rate of 0.94, while ours decays the learning rate by 10× at the 35-th, 70-th and 95-th epoch. To ensure these two policies reach similar learning rates by the end of training, the total numbers of training epochs of the exponential decay setting and the step-wise decay setting are set as 220 and 110, respectively.
• initial learning rate: the original ALP sets the initial learning rate to 0.045 whereas we set it to 0.1 in our implementation.
• training optimizer: the original ALP uses RMSProp as the optimizer while we use Momentum SGD (M-SGD).
• PGD initialization during training: the original ALP initializes the adversarial perturbation from a random point within the allowed ε-cube, while we initialize the adversarial image by its clean counterpart with probability 0.2, or randomly within the allowed ε-cube with probability 0.8.

Table 3: The results of ALP re-implementations under different parameter settings. We show that applying stronger attackers for training, e.g., changing from PGD-10 to PGD-30, is the most important factor for achieving strong robustness. Other parameters, like the optimizer, do not lead to significant robustness changes.

By following the parameter settings listed in the ALP paper, we can train a ResNet-101 with an accuracy of 38.1% against PGD-10. The ResNet-101 performance reported in the ALP paper is 30.2% accuracy against an attack suite. This ∼8% performance gap is possibly due to different attacker settings in evaluation. However, by evaluating this model against PGD-2000, we are able to obtain a result similar to that reported in the independent study, i.e., it reports that ALP obtains 0% accuracy, and in our implementation the accuracy is 2.1%. Given these different settings, we change them one by one to train corresponding models adversarially. The results are summarized in Table 3. Surprisingly, we find the most important factor for the performance gap between the original ALP paper and ours is the attacker strength used for training: by changing the attacker from PGD-10 to PGD-30 for training, the robustness against PGD-2000 can be increased by 19.7%. Other parameters, like network backbones or the GPU number, do not lead to significant performance changes.

In this section, we explore the impact of different training parameters on model performance. As suggested in Table 3, the number of attack iterations used for training is an important factor for model robustness.
We hereby provide a detailed diagnosis of model performance trained with PGD-{5, 10, 20} for different training strategies. We report the performance in Table 4, and observe that decreasing the number of PGD attack iterations used for training usually leads to weaker robustness. Nonetheless, we note the amount of this performance change is strongly related to the training strategy. For strategies that cannot lead to models with strong robustness, i.e., 100% adv + 100% clean and 100% adv + 100% clean, ALP, this robustness degradation is extremely severe (which is similar to the observation in Table 3). For example, by training with PGD-5, these two strategies obtain nearly no robustness, i.e., ∼0% accuracy against PGD-2000. For strategies that can secure model robustness against strong attacks, changing from PGD-30 to PGD-5 for training only leads to a marginal robustness drop.

Table 4: Robustness evaluation of models adversarially trained with PGD-{30, 20, 10, 5} attackers. We observe that decreasing the number of PGD attack iterations for training usually leads to weaker robustness, while the amount of degraded robustness is strongly related to the training strategy.

In Section 4.3 (of the main paper), we study the effectiveness of applying running statistics in training. We hereby test this heuristic policy under more settings. Specifically, we consider 3 strategies, each trained with 4 different attackers (i.e., PGD-{5, 10, 20, 30}), which results in 12 different settings. We report the results in Table 5. We observe this heuristic policy boosts robustness in all settings, which further supports the importance of enforcing BN to behave consistently at training and testing.

In the main paper, our study is driven by improving adversarial robustness (measured by the accuracy against PGD-2000), while leaving the performance on clean images aside. For the completeness of performance evaluation, we list the clean image performance of these adversarially trained models in Table 7. Moreover, to facilitate performance comparison in future works, we list the corresponding accuracy against PGD-{10, 20, 100, 500} in this table as well.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HyxJhCEFDS
The first rigorous diagnosis of large-scale adversarial training on ImageNet
The gradient of a deep neural network (DNN) w.r.t. the input provides information that can be used to explain the output prediction in terms of the input features and has been widely studied to assist in interpreting DNNs. In a linear model (i.e., $g(x)=wx+b$), the gradient corresponds solely to the weights $w$. Such a model can reasonably locally linearly approximate a smooth nonlinear DNN, and hence the weights of this local model are the gradient. The other part of a local linear model, however, i.e., the bias $b$, is usually overlooked in attribution methods since it is not part of the gradient. In this paper, we observe that since the bias in a DNN also has a non-negligible contribution to the correctness of predictions, it can also play a significant role in understanding DNN behaviors. In particular, we study how to attribute a DNN's bias to its input features. We propose a backpropagation-type algorithm ``bias back-propagation (BBp)'' that starts at the output layer and iteratively attributes the bias of each layer to its input nodes as well as combining the resulting bias term of the previous layer. This process stops at the input layer, where summing up the attributions over all the input features exactly recovers $b$. Together with the backpropagation of the gradient generating $w$, we can fully recover the locally linear model $g(x)=wx+b$. Hence, the attribution of the DNN outputs to its inputs is decomposed into two parts, the gradient $w$ and the bias attribution, providing separate and complementary explanations. We study several possible attribution methods applied to the bias of each layer in BBp. In experiments, we show that BBp can generate complementary and highly interpretable explanations of DNNs in addition to gradient-based attributions.

Deep neural networks (DNNs) have produced good results for many challenging problems in computer vision, natural language processing, and speech processing. Deep learning models, however, are usually designed using fairly high-level architectural decisions, leading to a final model that is often seen as a black box that is difficult to interpret. DNNs are a highly expressive trainable class of non-linear functions, utilizing multi-layer architectures and a rich set of possible hidden non-linearities, making interpretation by a human difficult. This restricts the reliability and usability of DNNs especially in mission-critical applications where a good understanding of the model's behavior is necessary. The gradient is a useful starting point for understanding and generating explanations for the behavior of a complex DNN. Having the same dimension as the input data, the gradient can reflect the contribution to the DNN output of each input dimension. Not only does the gradient yield attribution information for every data point, but it also helps us understand other aspects of DNNs, such as the highly celebrated adversarial examples and defense methods against such attacks BID13. When a model is linear, the gradient recovers the weight vector. Since a linear model locally approximates any sufficiently smooth non-linear model, the gradient can also be seen as the weight vector of that local linear model for a given DNN at a given data point.
For a piecewise linear DNN (e.g., a DNN with activation functions such as ReLU, LeakyReLU, PReLU, and hard tanh), the gradient is exactly the weights of the local linear model. Although the gradient of a DNN has been shown to be helpful in understanding the behavior of a DNN, the other part of the locally linear model, i.e., the bias term, to the best of our knowledge, has not been studied explicitly and is often overlooked. If only considering one linear model within a small region, the bias, as a scalar, seems to contain less information than the weight vector. However, this scalar is the result of complicated processing of bias terms over every neuron and every layer based on the activations, the non-linearity functions, as well as the weight matrices of the network. Uncovering the bias's nature could potentially reveal a rich vein of attribution information complementary to the gradient. For classification tasks, it can be the case that the gradient part of the linear model contributes only a negligible portion of the target label's output probability (or even a negative logit value), and only with a large bias term does the target label's probability become larger than that of other labels to result in the correct prediction (see Sec 5). In our empirical experiments (TAB0), using only the bias term of the local linear models achieves 30-40% of the performance of the complete DNN, thus indicating that the bias term indeed plays a substantial role in the mechanisms of a DNN.

In this paper, we unveil the information embedded in the bias term by developing a general bias attribution framework that distributes the bias scalar to every dimension of the input data. We propose a backpropagation-type algorithm called "bias backpropagation (BBp)" to send and compute the bias attribution from the output and higher-layer nodes to lower-layer nodes and eventually to the input features, in a layer-by-layer manner. Specifically, BBp utilizes a recursive rule to assign the bias attribution on each node of layer ℓ to all the nodes of layer ℓ − 1, while the bias attribution on each node of layer ℓ − 1 is composed of the attribution sent from the layer below and the bias term incurred in layer ℓ − 1. The sum of the attributions over all input dimensions produced by BBp exactly recovers the bias term in the local linear model representation of the DNN at the given input point. In experiments, we visualize the bias attribution as images on a DNN trained for image classification. We show that bias attribution can highlight essential features that are complementary to what the gradient-alone attribution methods favor.

Attribution methods for deep models are an important area of modern research in machine learning, since it is important to complement the good empirical performance of DNNs with explanations for how, why, and in what manner such complicated models make their decisions. Ideally, such methods would render DNNs to be glass boxes rather than black boxes. To this end, a number of strategies have been investigated. BID10 visualized behaviors of convolutional networks by investigating the gradients of the predicted class output with respect to the input features. Deconvolution BID15 and guided backpropagation BID11 modify gradients with additional constraints. BID5 extended this to higher order gradient information by calculating the Taylor expansion, and BID0 study the Taylor expansion approach on DNNs with local renormalization layers.
BID8 proposed DeepLift, which separated the positive attribution and negative attribution, and featured custom-designed attribution scores. BID12 declared two axioms an attribution method needs to satisfy. It further developed an integrated gradient method that accumulates gradients on a straight-line path from a base input to a real data point and uses the aggregated gradients to measure the importance of input features. Class Activation Mapping (CAM) BID16 localizes the attribution based on the activation of convolution filters, and can only be applied to a fully convolutional network. Grad-CAM BID7 relaxes the all-convolution constraint of CAM by incorporating the gradient information from the non-convolutional layers. All the work mentioned above utilizes information encoded in the gradients in some form or another, but none of them explicitly investigates the importance of the bias terms, which is the focus of this paper. Some of them, e.g. BID8 and BID12, consider the overall activation of neurons in their attribution methods, so the bias terms are implicitly taken into account, but are not independently studied. Moreover, some of the previous work (e.g. CAM) focuses on the attribution for specific network architectures such as convolutional networks, while our approach generally applies to any piecewise linear DNN, convolutional or otherwise.

We can write the output f(x) of any feed-forward deep neural network in the following form: DISPLAYFORM0 where W_i and b_i are the weight matrix and bias term for layer i, ψ_i is the corresponding activation function, x ∈ X is an input data point of d_in dimensions, f(x) is the network's output prediction of d_out dimensions, and each hidden layer i has d_i nodes. In this paper, we rule out the last softmax layer from the network structure; for example, the output f(x) may refer to logits (which are the inputs to a softmax to compute probabilities) if the DNN is trained for classification tasks. The above formalization of a DNN generalizes many widely used DNN architectures. Clearly, this form can represent a fully-connected network of m layers. Moreover, the convolution operation is essentially a matrix multiplication, where every row of the matrix corresponds to applying a filter from the convolution on a certain part of the input, and therefore the resulting weight matrix has tied parameters and is very sparse, and typically has a very large (compared to the input size) number of rows. Average-pooling is essentially a linear operation and therefore is representable as a matrix multiplication, and max-pooling can be treated as an activation function. Batchnorm BID3 is a linear operation and can be combined into the weight matrix. Finally, we can represent a residual network BID2 block by appending an identity matrix at the bottom of a weight matrix so that we can keep the input values, and then add the kept input values later through another matrix operation. In this paper, we will focus on DNNs with piecewise linear activation functions, which cover most of the successfully used neural networks in a variety of application domains. Some widely used piecewise linear activation functions include the ReLU, leaky ReLU, PReLU, and the hard tanh functions. A general form of a piecewise linear activation function applied to a real value z is as follows: DISPLAYFORM0 In the above, there are h linear pieces, and these correspond to h predefined intervals on the real axis.
We define the activation pattern φ(z) of z as the index of the interval containing z, which can be any integer from 0 to h − 1. Both ψ(z) and φ(z) extend to element-wise operators when applied to vectors or high dimensional tensors. As long as the activation function is piecewise linear, the DNN is a piecewise linear function and is equivalent to a linear model at each input point x. Specifically, each piece of the DNN (associated with an input point x) is a linear model: DISPLAYFORM1 This holds true for all the possible input points x on the linear piece of the DNN. We will give a more general result later in Lemma 1. Note that DISPLAYFORM2 where x_i is the activation of layer i (x_1 is the input data) and b_{x_i} is an x_i-dependent bias vector. In the extreme case, no two input training data points share the same linear model in the equation above. Note that in this case, the DNN can still be represented as a piecewise linear model on a set of input data points, with each local linear model applied to only one data point. DISPLAYFORM3 For instance, if the ReLU ψ_ReLU(z) = max(0, z) is used as the activation function ψ(·) at every layer i, we have an activation pattern φ(DISPLAYFORM4) indicating that ReLU sets the output node p to 0 or otherwise preserves the node value. Therefore, at layer i, the rows of W_i and the elements of b_i corresponding to zeroed nodes are set to 0, while the other elements are preserved. We can apply the above process to deeper layers as well, eliminating all the ReLU functions to produce an x-specific local linear model representing one piece of the DNN, as shown above. Since the model is linear, the gradient ∂f(x)/∂x is the weight vector of the linear model. Also, given all the weights of the DNN, each linear region and the associated linear model can be uniquely determined by the ReLU patterns {φ(x_i)}_{i=2}^{m}, which are m binary vectors.

Given a specific input point x, the attribution of each dimension f(x)[j] of the DNN output (e.g., the logit for class j) to the input features aims to assign a portion of f(x)[j] to each of the input features i, and all the portions assigned to all the input features should sum up to f(x)[j]. For simplicity, in the rest of this paper, we rename f(x) to be f(x)[j], which does not lose any generality since the same attribution method can be applied for any output dimension j. According to the decomposition above, f(x) as a linear model on x can be decomposed into two parts, the linear transformation (∂f(x)/∂x) · x and the bias term b_x. The attribution of the first part is straightforward, because we can directly assign each dimension of the gradient ∂f(x)/∂x to the associated input feature, and we can generate the gradient by using the standard backpropagation algorithm. The gradient-based attribution methods have been widely studied in previous work (see Section 2). However, the attribution of the second part, i.e., the bias b, is arguably a more challenging problem: it is not obvious how to assign a portion of b to each input feature, since b is a scalar value rather than a vector that, like the gradient, has the same dimensionality as the input vector.
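Since the local model at x is exactly linear for piecewise linear networks, both of its parts can be read off with one backward pass: w is the input gradient of the chosen logit and b is the residual f(x) − w·x. Below is a minimal PyTorch sketch; the function name and the single-example batching (x of shape [1, ...]) are illustrative assumptions.

import torch

def local_linear_model(model, x, class_idx):
    # Recover w and b of the local linear model f(x) = w.x + b of a
    # piecewise-linear (e.g., ReLU) network at the input point x.
    x = x.detach().requires_grad_(True)
    logit = model(x)[0, class_idx]               # pre-softmax output for class j
    w = torch.autograd.grad(logit, x)[0]          # gradient = local weight vector
    b = logit.detach() - (w * x.detach()).sum()   # bias = f(x) - w.x
    return w, b

Summing w ⊙ x gives the gradient part of the attribution; the remaining scalar b is the quantity that BBp distributes over the input features in the next section.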
One possible reason for the dearth of bias attribution studies might be that people consider the bias, as a scalar, less important relative to the weight vector, containing only minor information about deep model decisions. The final bias scalar b_x of every local linear model, however, is the result of a complex process (see Eq. FORMULA2), where the bias term on every neuron of a layer gets modified based on the activation function (e.g., for ReLU, a bias term gets dropped if the neuron has a negative value), then propagates to the next layer based on the weight matrix, and contributes to the patterns of the activation function in the next layer. As the bias term applied to every neuron can be critical in determining the activation pattern (e.g., changing a neuron output from negative to positive for ReLU), we wish to better understand the behavior of deep models by unveiling and reversing the process of how the final bias term is generated. Moreover, as we show in our empirical studies (see Section 5), we train DNNs both with and without bias for image classification tasks, and the results show that the bias plays a significant role in producing accurate predictions. In fact, we find that it is not rare that the main component of a final logit, leading to the final predicted label, comes from the bias term, while the gradient term (∂f(x)/∂x) · x makes only a minor, or even negative, contribution to the ultimate decision. In such a case, ignoring the bias term can provide misleading input feature attributions. Intuitively, the bias component also changes the geometric shape of the piecewise linear DNN (see FIG4); this means that it is an essential component of deep models and should also be studied, as we do in this paper.

It is a mathematical fact that a piecewise linear DNN is equivalent to a linear model for each input data point. Therefore, the interpretation of the DNN's behavior on the input data should come exclusively from the information embedded in the linear model. However, we often find that the gradient of the DNN, or the weight of the linear model, does not always produce satisfying explanations in practice, and in many cases this may be due to the overlooked attribution of the bias term, which contains complementary or even key information needed to make the attribution complete. In this section, we will introduce our method for bias attribution. In particular, the goal is to find a vector β of the same dimension d_in as the input data point x such that Σ_{p=1}^{d_in} β[p] = b_x. However, it is not clear how to directly assign a scalar value b to the d_in input dimensions, since there are m layers between the outputs and the inputs. In the following, we explore the neural net structure for bias attribution and develop a backpropagation-type algorithm to attribute the bias b layer by layer from the output f(x) to the inputs in a bottom-up manner.

Recall that x_ℓ denotes the input nodes of layer ℓ ≥ 2, i.e., DISPLAYFORM0 According to the recursive computation shown above, the output f(x) can be represented as a linear model of x_ℓ, as shown in the following lemma.

Lemma 1. Given x, the output f(x) of a piecewise linear DNN can be written as a linear model of the input x_ℓ of any layer ℓ ≥ 2 (x_1 = x is the raw input) in the following form. DISPLAYFORM1

For each input node x_ℓ[p] of layer ℓ, we aim to compute β_ℓ[p] as the bias attribution on x_ℓ[p]. We further require that summing β_ℓ[p] over all input nodes of layer ℓ recovers the bias in Eq. FORMULA10, i.e., DISPLAYFORM2 so the linear model in
Eq. FORMULA10 can be represented as the sum of d_ℓ terms associated with the d_ℓ input nodes, each composed of a linear transformation part and a bias attribution part, i.e., DISPLAYFORM3 DISPLAYFORM4 We will discuss several options to compute the attribution scores α_ℓ incurred in layer ℓ − 1 (which are applied to all nodes in layer ℓ − 1), as shown below: DISPLAYFORM5 It can be easily verified that summing up the attribution β_{ℓ−1}[q] over all the nodes of layer ℓ − 1 yields the bias term in Lemma 1, when writing f(x) at x as a linear model of x_{ℓ−1}, i.e., DISPLAYFORM6 Hence, the complete attribution of f(x) on the nodes of layer ℓ − 1 can be written in the same form as the one shown above for layer ℓ, i.e., f(x) = DISPLAYFORM7. Therefore, we start from the last layer, and recursively apply this rule from the last layer to the first layer. This process backpropagates to the lower layers the bias term incurred in each layer and the bias attributions sent from higher layers. Eventually, we obtain the bias attribution β[p] for each input dimension p. The bias attribution procedure is detailed in Algorithm 1: iterating from the last layer to the first, it attributes the bias to the layer input, combines it with the bias of layer ℓ − 1, and finally returns β_1 ∈ R^{d_in}.

In the following, we discuss two possible options to compute the attribution scores α_ℓ[p], where α_ℓ[p, q] measures how much of the bias on x_ℓ[p] should be attributed to x_{ℓ−1}[q]. For both options, we design α_ℓ[p] so that the bias attribution on each neuron serves as a compensation for the weight or gradient term to achieve the desired output value, and thus the bias attribution can give complementary explanations to the gradient attribution.

We have DISPLAYFORM0 When b_{x_ℓ}[p] is negative, we may reason that, to achieve the target value of x_ℓ[p], the positive components of the gradient term DISPLAYFORM1 are larger than desirable, so that we need to apply the additional negative bias in order to achieve the desired output x_ℓ[p]. In other words, the large positive components can be thought of as the causal factor leading to the negative bias term, so we attribute more bias to the larger positive components. On the other hand, suppose b_{x_ℓ}[p] is positive; then the negative components of the gradient term are smaller (or larger in magnitude) than desirable, so the small negative values cause the bias term to be positive, and therefore we attribute more bias to the smaller negative components. Thus, we have DISPLAYFORM2 where DISPLAYFORM3 DISPLAYFORM4 We use the logistic function to attribute the bias so that the sum over all components recovers the original bias, and T serves as a temperature parameter to control the sharpness of the attribution. With T large, the bias is attributed in a more balanced manner, while with T small, the bias is attributed mostly to a few neurons in layer ℓ − 1. Also note that we only consider the non-zero components (the indicator 1_{e(ℓ−1,p,q)=1} checks whether a component is zero), as the zero-valued components do not offer any contribution to the output value. For example, consider a convolutional layer: the corresponding matrix form is very sparse, and only the non-zero entries are involved in the convolution computation with a filter.
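To make this first option concrete, here is a small NumPy sketch of how one node's bias could be split over the non-zero contributions from the previous layer. The exact normalization in the paper sits behind the elided equations, so the temperature-controlled softmax below is only one plausible instantiation of the described behaviour (negative bias pushed toward large positive contributions, positive bias toward negative ones); the function name and formulation are assumptions.

import numpy as np

def split_node_bias(bias_p, contrib, T=1.0):
    # bias_p: scalar bias of node p at layer l.
    # contrib: 1-D array of contributions W[p, q] * x_prev[q] from layer l-1
    #          (assumed to have at least one non-zero entry).
    # Returns per-component attributions that sum exactly to bias_p.
    mask = contrib != 0                        # zero components get no attribution
    scores = contrib if bias_p < 0 else -contrib
    scores = np.where(mask, scores / T, -np.inf)
    weights = np.exp(scores - scores[mask].max())
    weights /= weights.sum()
    return bias_p * weights

In BBp, such per-node splits are combined with the bias of the previous layer and propagated recursively, as summarized in Algorithm 1.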
The second option adopts a different idea but with a similar philosophy of compensating the attribution of the gradient term. Again, the target is to achieve the value of x_ℓ[p], and we may assume that, to achieve such a value, every non-zero component should on average contribute an equal share of this target value (again, we only need to consider the contribution from non-zero components). The offset of each component from this average target value can be treated as the causal factor introducing the bias term: components that are far from the average target get more compensation from the bias, and components that are close to the target, since they already fulfill their responsibility, receive less compensation from the bias. This produces the following method to compute s_ℓ[p, q], i.e., DISPLAYFORM5 Note we use the same equations for e(ℓ − 1, p, q) and α_ℓ[p, q] as defined in Eq. FORMULA0 - FORMULA0. The above two options are our designs for the α_ℓ[p, q] function. The attribution function is valid as long as DISPLAYFORM6 While we utilize the logistic function so that all attribution factors are positive, α_ℓ[p, r] can be negative and still be applicable to our BBp framework. We note that there is no single solution for the optimal attribution function. The proposed options can be applied to any piecewise-linear deep neural network, and for specific activation functions it is possible to design specialized attribution functions to obtain still better bias attribution.

We first evaluate the importance of the bias terms, or in other words, the amount of information encoded in the bias terms, by comparing networks trained both with and without bias. In TAB0, we compare results on the CIFAR-10, CIFAR-100 BID4 and Fashion MNIST BID14 datasets. We trained using the VGG-11 network of BID9, and we compare the results of training with bias and without bias. Moreover, in the trained-with-bias case, we derive the linear model g(x) = wx + b of every data point, and compare the performance using only the resulting gradient term wx and only the resulting bias term b. From the results shown in the table, we find that the bias term carries appreciable information and makes an unignorable contribution to the correct prediction of the network.

We present our bias attribution results, using the two options of attribution scores discussed in Section 4.2, and compare them to the attribution based only on gradient information. We test BBp on STL-10 (BID1) and ImageNet (ILSVRC2012) (BID6) and show the results in FIG9. For STL-10, we use a 10 layer convolutional network (conv32, maxpool, conv32, maxpool, conv32, maxpool, conv32, maxpool, conv32, dense10, where conv32 corresponds to a convolutional layer with 32 channels, kernel size 3 and padding 1), and for ImageNet we use the VGG-11 network of BID9. For both gradient and bias attribution, we select the gradient/bias corresponding to the predicted class (i.e., one row of the final layer weight matrix and one scalar from the final layer bias vector). Note that BBp is a general framework and can work with other choices of gradients and biases for the last layer (e.g. the top predicted class minus the second predicted class).

From FIG9, we can observe that the bias attribution can highlight meaningful features in the input data, and in many cases capture information that is not covered by the gradient attribution. For example, for the "bird" image of STL-10, the bias attribution shows the major characteristics of a bird such as the body shape, the round head, the yellow beak and the claws. Compared to the gradient attribution, the bias attribution seems to give cleaner explanations, and shows a much stronger focus on the head and beak.
Overall, the bias attribution method tends to attribute in a more concentrated way, and often provides explanations of the DNN complementary to the gradient information.

FIG9: The label of every image is shown in the leftmost column. The gradient attribution is the element-wise product between the linear model weight w and the data point x. The "grad.norm." and "bias norm." columns show the attribution of gradient and bias normalized to the color range. The "grad.overlay" and "bias.overlay" columns show the 10% of data features with the highest attribution magnitude overlaid on the original image. The "bias\grad overlay" columns show the data features selected by the bias attribution and not selected by the gradient attribution. Bias1 corresponds to the first proposed option for calculating the bias attribution score (Eq. FORMULA0 - FORMULA0), and bias2 (Eq.) corresponds to the second proposed option. For both options of calculating the bias attribution score, the temperature parameter T is set to 1.

For many cases, the bias attributions offer explanations complementary to the information provided by the gradient. For example, for the "bittern" image from ImageNet, BBp shows stronger attribution on the bird's head and body compared to the gradient method. For the "fire guard" image of ImageNet, BBp has clear attribution to the fire, in addition to the shape of the guard, while the gradient method only shows the shape of the guard. Similarly, for the "folding chair" of ImageNet, BBp shows clearer parts of the chair, while the gradient attribution shows less relevant features such as the wall.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
B1xeyhCctQ
Attribute the bias terms of deep neural networks to input features by a backpropagation-type algorithm; Generate complementary and highly interpretable explanations of DNNs in addition to gradient-based attributions.
This paper presents a method to autonomously find periodicities in a signal. It is based on the same idea of using the Fourier Transform and autocorrelation function presented in Vlachos et al. 2005. While showing interesting results, this method does not perform well on noisy signals or signals with multiple periodicities. Thus, our method adds several extra steps (hints clustering, filtering and detrending) to fix these issues. Experimental results show that the proposed method outperforms state of the art algorithms.

A time series is defined by its 3 main components: the trend component, the periodic component and the random component. Trend analysis and prediction are topics that have been greatly studied BID10 and will not be treated in this article; therefore every time series will be assumed stationary regarding its mean and variance, and this study focuses on the periodic component. The ability to detect and find the main characteristics of this component is not as easy as for the trend component. Yet, the ability to detect periodicities in a time series is essential to make precise forecasts. A periodicity is a pattern in a time series that occurs at regular time intervals. More precisely, the time series is said to be cyclical if the time intervals at which the pattern repeats itself cannot be precisely defined and are not constant. In contrast, seasonal time series are those in which the pattern repeats itself at constant and well defined time intervals. Thus, cyclical patterns are more difficult to detect due to their inconsistency and the fact that they usually repeat themselves over large periods of time and therefore require more data to be identified. Nevertheless, seasonal patterns are very common in time series such as those related to human behaviour, which usually have periodicities like hours and calendar (time of day, day of week, month of year). This kind of feature is well known and can be easily tested to see whether it is beneficial or not. Unfortunately, when it comes to time series related to other phenomena, the periodicities are not trivially found. For instance, tide levels are multi-periodic time series correlated to both moon cycles and sun cycles, and female menstrual cycles are related to hormonal changes. The ability to detect periodicity in time series is fundamental when it comes to forecasting BID5. Once a periodic pattern has been detected, numerous techniques can be used to model it and improve forecasts BID1. However, periodicity detection is not easy and has been greatly studied in the existing literature, but most current techniques are unable to detect periodicities without preprocessing the data BID12 or have trouble detecting multiple periodicities BID11. This paper is organised as follows: we first present the Fourier transform and the Autoperiod algorithm BID11 used to detect periodicities in a signal. Then we propose a new fully automated method, named Clustered Filtered Detrended Autoperiod (CFD-Autoperiod), which also combines the advantages of the frequency domain and the time domain while being robust to noise and able to handle multiple periodicities. Noise robustness is achieved using a density clustering on hints provided by the frequency analysis. Multiple periodicities are detected more precisely by using both detrending and filtering. Finally, we demonstrate that CFD-Autoperiod outperforms previous methods. Autocorrelation and the Fourier transform are well known techniques used to find recurrent patterns in a given signal.
The Fourier transform decomposes the original signal {s(t_j)}_{j∈[1,N]} into a linear combination of complex sinusoids, also called a Fourier series. Let N be the number of frequency components of a signal, P the periodicity of the signal and c_k the k-th series coefficient; then we have the Fourier series: DISPLAYFORM0 Thus, the amplitude and phase of each sinusoid correspond to the main frequencies contained within the signal. The Fourier transform can easily detect the presence of frequencies in a signal. However, if we need the corresponding periodicities for each frequency, then we have to return from the frequency domain to the time domain. Let DFT be the Discrete Fourier Transform of a discrete signal {s(t_j)}; then we can obtain the corresponding Periodogram P in the time domain as follows: DISPLAYFORM1 where f_k = 2πk/N corresponds to the frequency captured by each component. However, in the frequency domain each bin is separated by a constant step of 1/N, whereas in the time domain the bin size is N/(k(k+1)), thus the range of periods is increasingly wider. Therefore, the Fourier transform resolution for long periods is not sufficient to provide an accurate estimation of the periodicity.

Another way to find the dominant periodicities in a signal consists in calculating the autocorrelation function (ACF) of the given signal s(t). The autocorrelation is the correlation between the elements of a series and others from the same series separated from them by a given interval ∆t: DISPLAYFORM0 The ACF provides a more accurate estimation of each periodicity, especially for longer periods, as opposed to the Fourier transform BID11. However, it is not sufficient by itself due to the difficulty of selecting the most predominant peaks. Indeed, for a given periodicity p_1 the autocorrelation generates peaks at each multiple of p_1, hence the difficulty of selecting the relevant peaks when multiple periodicities compose a signal.

A methodology combining the advantages of both techniques has been introduced by BID11. This method uses the frequency domain (DFT) and the time domain (ACF) sequentially in order to detect periodicity. The idea is to combine both methods in such a way that they complement each other. On the one hand, as mentioned earlier, due to its step inconstancy in the time domain, the Fourier transform resolution becomes insufficient to provide good estimations for long periods, whereas the autocorrelation has a constant resolution. On the other hand, according to BID11, it is difficult to correctly detect periodicities using only the autocorrelation function. Thus they proposed the following steps: first, noise is discarded from possible periodicity hints using a threshold on the Periodogram. Then, these hints are refined using the ACF. If a periodicity hint lies on a local maximum then it can be validated; otherwise, if it lies on a local minimum it is discarded. On top of that, thanks to the ACF resolution, a gradient ascent is used to refine the remaining hints.
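As a reference point for the rest of the method, here is a minimal NumPy/SciPy sketch of the two building blocks just described, the periodogram and the autocorrelation function. The function names and the normalization of the ACF by its zero-lag value are assumptions rather than the exact formulation behind the elided equations.

import numpy as np
from scipy.signal import periodogram

def spectral_power(signal):
    # Periodogram of the signal; each frequency bin k maps back to a candidate period N/k.
    freqs, power = periodogram(signal)
    return freqs, power

def acf(signal):
    # Autocorrelation function, normalized so that ACF(0) = 1.
    s = signal - signal.mean()
    corr = np.correlate(s, s, mode="full")[len(s) - 1:]
    return corr / corr[0]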
The Fourier Transform is used to select periodicity hints. To do so, we use the 99% confidence technique of Li et al. BID11 to compute the threshold distinguishing periodicity hints from noise in the Fourier transform. Firstly, it is necessary to find the maximum amount of spectral power generated by the signal noise. Let {s′(t_j)}_{j∈[1,N]} be a permuted sequence of a periodic sequence {s(t_j)}_{j∈[1,N]}. s′ should not exhibit any periodic pattern due to the random permutation process. Therefore, the maximal spectral power generated by s′ should not be higher than the spectral power generated by a true periodicity in s. Thus, we can use this value as a threshold to eliminate the noise. To provide a 99% confidence level, this process is repeated 100 times and the 99th largest value recorded is used as the threshold.

Unfortunately, for a given periodicity in s, rather than finding a unique corresponding hint, spectral leakage may produce multiple hints near the true periodicity. This phenomenon is due to the finite resolution of the Fourier Transform and can only be avoided by knowing in advance the true periodicities of the signal. Spectral leakage generates points with a spectral power higher than the threshold provided by the 99% confidence method (FIG1) and therefore generates imprecise periodicity hints. The autocorrelation might filter most of them, but every imprecise periodicity hint increases the probability of false positives, so it is worthwhile to reduce the number of periodicity hints in order to achieve a higher precision score. Knowing that the distribution of spectral leakage is denser around the true periodicity, performing a density clustering over the periodicity hints and using the resulting centroids as periodicity hints can reduce the number of hints. A fundamental value in density clustering algorithms is the range ε in which the algorithm seeks neighbours. In our case, this value is not a constant, because the accuracy of a hint is related to the resolution of the corresponding DFT bin. A hint may have leaked from adjacent DFT bins, thus for a given hint of periodicity N/k, ε is set as the next bin value plus a constant width of 1, to avoid numerical issues when the difference from the current bin value to the next bin value is less than one: DISPLAYFORM0 The clustering is done in ascending periodicity order, hence a cluster made with small periodicities cannot be altered by bigger periodicity clusters. As shown in FIG2, the density clustering performed on the GEF dataset BID3 drastically reduces the number of periodicity hints and the resulting centroids are close to the true periodicities (24 and 168). Once the centroids have been found, they are used as periodicity hints during the validation step.

For the validation step, a search interval for each periodicity hint is needed to check whether the hint lies on a hill or a valley of the ACF. BID11 used the DFT bin size to define this search interval, but in this study we propose a different approach. A periodicity N generates hills on the ACF at each multiple of N and valleys at each multiple of N/2. Therefore, we defined the search interval R for a periodicity hint N as follows: DISPLAYFORM0 Thereafter, a quadratic function is fitted to the ACF in the search interval. In order to validate a hint, the function must have a negative second degree term and its derivative sign must change along the interval.
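A minimal sketch of the hint selection and hint validation steps described above. The 99th-percentile reading of the permutation threshold, the search interval taken here as (N/2, 3N/2), and the omission of the density clustering step are all assumptions made for illustration, not the paper's exact definitions.

import numpy as np
from scipy.signal import periodogram

def power_threshold(signal, n_perm=100, seed=0):
    # 99%-confidence noise threshold: permute the signal (destroying periodicity),
    # record the maximal periodogram power of each permutation, keep the 99th percentile.
    rng = np.random.default_rng(seed)
    maxima = [periodogram(rng.permutation(signal))[1].max() for _ in range(n_perm)]
    return np.percentile(maxima, 99)

def select_hints(signal):
    # Candidate periods: periodogram bins whose power exceeds the threshold.
    freqs, power = periodogram(signal)
    keep = power[1:] > power_threshold(signal)      # skip the DC bin
    return np.sort(1.0 / freqs[1:][keep])

def validate_hint(acf_values, period):
    # Fit a quadratic to the ACF around the hint and accept it only if the fit is
    # concave with its maximum inside the interval, i.e. the hint lies on a hill.
    lo = int(period / 2) + 1
    hi = min(int(3 * period / 2), len(acf_values) - 1)
    lags = np.arange(lo, hi)
    a, b, _ = np.polyfit(lags, acf_values[lags], 2)
    return a < 0 and lo < -b / (2 * a) < hi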
Let P 1, a periodicity of length 20 and P 2, a periodicity of length 50. The periodicity P 1 produces on the signal ACF sinusoidal correlations of wavelength 20 and the periodicity P 2 produces sinusoidal correlations of wavelength 50. Thereby, at 50 lags on the ACF, the P 1 and P 2 periodicities will produce correlations in opposite phases and therefore nullify the hill at 50 used to validate or discard the periodicity hint P 2 . To tackle this issue, periodicity hints are analysed in ascending order. If a periodicity hint is validated, a lowpass filter with an adapted cutoff frequency is applied to the signal. Consequently, the following autocorrelations will be computed on this new signal. Thus, the ing autocorrelations will not exhibit any correlation induced by frequencies higher than the cutoff frequency of the lowpass filter. The cutoff frequency must be chosen carefully. Indeed, an ideal lowpass filter is characterised by a full transmission in the pass band, a complete attenuation in the stop band and an instant transition between the two bands. However, in practice, filters are only an approximation of this ideal filter and the higher the order of the filter is, the more the filter approximates the ideal filter. In our case, we are studying the periodicities in the signal, therefore, we want a filter with a frequency response as flat as possible to avoid any negative impact on the periodicity detection. Thereby, a Butterworth filter has been chosen due to its flat frequency response with no ripples in the passband nor in the stopband. However, a Butterworth filter, despite all the good properties, has a slow roll-off attenuating frequencies nearby the cutoff frequency. For the validation step, we do not want to attenuate the periodicity hint, therefore the cutoff frequency must not be exactly equal to the frequency of the hint. For a given periodicity N k, the frequency cutoff is equal to the previous bin value minus 1, to avoid the same numerical issues as described in the Density Clustering section: DISPLAYFORM0 On the other hand, low frequencies may induce a local trend in the autocorrelation that can be problematic when validating an hint. Indeed, in order to validate a periodicity hint, a quadratic function is fitted to the ACF in the search interval as mentioned in the subsection Hints Validation. Sadly, a trend in the search interval may prevent the derivative sign to switch FIG5), and therefore prevent the correct validation of the corresponding hint. Consequently, to avoid this situation, the ACF is detrended by subtracting the best fitted line in the following interval [0, N (k−1) + 1] for a given period hint N/k. Thus, the ing ACF does not exhibit any linear trend and therefore the fitted quadratic function is able to validate or discard hint efficiently. To evaluate the performances of the proposed method it is necessary to use time series datasets with periodicities. To do so, we perform our first evaluations on synthetic signals where the ground truth is known in order to compare raw performances and evaluations on real time series datasets. Signals of length 2000 with 1 to 3 periodicities have been generated. The periodicities have been chosen in the interval using a pseudo-random process. For multi-periodic signals, this pseudo-random process ensures that periodicities are not overlapping each others by checking that one is at least twice as bigger as the previous one. 
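To make this evaluation protocol concrete, the following minimal NumPy sketch generates such synthetic test signals. The signal length and the rule that each period is at least twice the previous one follow the description above; the period range, unit amplitudes, and Gaussian noise level are illustrative assumptions rather than the exact values used in the experiments.

import numpy as np

def generate_signal(length=2000, n_periods=3, p_min=10, p_max=500,
                    noise_std=0.1, seed=None):
    # Sum of unit-amplitude sinusoids whose periods are drawn so that each
    # new period is at least twice the previous one, plus Gaussian noise.
    rng = np.random.default_rng(seed)
    periods, lo = [], p_min
    for i in range(n_periods):
        hi = p_max // (2 ** (n_periods - i - 1))   # leave room for later periods
        if lo > hi:
            break
        p = int(rng.integers(lo, hi + 1))
        periods.append(p)
        lo = 2 * p                                 # "at least twice as big"
    t = np.arange(length)
    s = sum(np.sin(2 * np.pi * t / p) for p in periods)
    return s + rng.normal(0.0, noise_std, size=length), periods

Signals generated this way contain a small number of well-separated periods, which is the regime in which the ACF-based validation step is expected to behave well.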
Finally, in order to compute precision and recall metrics, a validation criterion has been established. We assumed that a periodicity P d detected in a generated signal with a true periodicity P t is valid if: DISPLAYFORM0 The metrics have been consolidated over 500 iterations for each generated periodicity. As shown in Table 1, for a non multi-periodic signal, autoperiod and CFD-Autoperiod method achieve high precision scores whereas the Fourier Transform achieves a high recall but a really low precision score. Indeed, the Fourier Transform method does not filter the hints using the autocorrelation. Nevertheless, the autoperiod method did not detect every periodicities even for non multi-periodic signals autoperiod. This is likely due to the absence of density clustering and the narrow interval search to find the corresponding hill on the ACF. For multi-periodic signals, both recall and precision are drastically decreasing for the autoperiod method and as it can be observed, the detrending step and the use of a lowpass filter by the CFD-Autoperiod method lead to better scores. Regarding the Fourier Transform scores, due to the lack of the filtering step its recall is high but its precision score is always the lowest. Benchmarks have also been performed on synthetic signals generated via random process, without limitations on the periodicity values (Table 1). Naturally, the with an unique periodicity are similar. However, for multi-periodic signals the autoperiod and CFD-Autoperiod methods achieve lower scores. This is due to the fact that both methods use the autocorrelation to filter hints and this latter is not able to distinguish very close periodicities. Therefore, the use of autocorrelation as a validation step does not allow the detection of periodicities near each others. Nevertheless, in real datasets, most of the periodicities are sufficiently spaced to be detected by the autocorrelation function and thus remains efficient as a validation step. Benchmarks have also been performed on real datasets TAB3 ) and different types of time series have been chosen in order to test the validity of the proposed method.• GEF BID3 ): This dataset has been provided for the Global Energy Forecasting Competition 2014 (GEFCom2014) BID2, a probabilistic energy forecasting competition. The dataset is composed of 6 years of hourly load data. This time series is multi-periodic with the following periodicities: daily, weekly and bi-annual. The CFD-Autoperiod method has detected and validated 4 periodicities with 3 of them correct. Whereas the autoperiod has detected 5 periodicities with only 2 valid and has missed the long term bi-annual periodicity.• Great lakes: This dataset contains monthly water level of the 5 great lakes and is provided by the National Oceanic and Atmospheric Administration BID9. This time series is mono-periodic with a periodicity of 12 months. The autoperiod method has detected 4 different periodicities with only one correct. Among these latter, 24 and 72 periodicities were detected and are only ing correlations of the 12 periodicity. Whereas the CFD-Autoperiod has successfully filtered out the correlations of the 12 one. BID8 used this dataset as well but did not write the exact periodicities detected by their method. In their plots, the segmentation for both Ontario and Clair lakes does not correspond to a periodicity of 12.• Pseudo periodic (Keogh & Pazzani): These datasets contain 10 pseudo periodic time series generated from 10 different simulation runs. 
The data appears to be highly periodic, but never exactly repeats itself. BID8 did not write their exact but the segmentation shown on their plot seems to correspond to a detected periodicity of 155. The CFD-Autoperiod method found a periodicity of 144 and the exact true periodicity seems to be 142.• Boston Tides: This dataset contains water level records of Boston, MA from July 01 to August 31, with 6 minutes as sampling interval. It has been recently used by BID12 to evaluate their method. They successfully detected 2 periodicities but their method required a preprocessing step whereas the CFD-Autoperiod method does not require any. The first detected periodicity is 12,4 hours corresponding to the semi-diurnal constituent of 12 hours and 25.2 minutes. They have also detected 28,5 days and 29 days periodicities which correspond to a lunar month. The CFD-Autoperiod method detected a periodicity of 24 hours and 50 minutes whereas the autoperiod did not detect it. This value is interesting because it corresponds to the behaviour of a mixed tide (when there is a high high tide, a high low tide followed by a low high tide and a low low tide, in 24hour ans 50 minutes). However, it has not detected the lunar month periodicity but this might be due to the lack of data used. Indeed, BID12 used 2 months of data and the CFD-Autoperiod can only detect periodicities of a length inferior or equal to the half of the signal length. BID11. CFD-Autoperiod can be applied on noisy time series containing multiple periodicities and output raw periodicities that can later be refined by external domain specific knowledge (for instance 24h for human daily activities). One case not treated in this study concerns non-stationary series. A possible technique would consists in tracking the evolution of the periodicities through time and using a Kalman filter to track the apparition, disappearance or evolution of the detected periodicities. Using the confidence of the Kalman filter we could decide whether to continue considering the presence of a particular periodicity in the signal even if it is not detected for a while. This would strengthen the obtained by CFDAutoperiod and give more reliable periodicities. Thus, even more complex machine learning models can be built on top of them.
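For readers who wish to experiment with the approach, the NumPy sketch below condenses the detection pipeline described above into a simplified form. It keeps the permutation-based periodogram threshold and the ACF validation, but replaces the quadratic fit with a plain local-maximum check and omits the density clustering, Butterworth low-pass filtering, and detrending refinements; it should therefore be read as an approximation of CFD-Autoperiod, not as a reference implementation, and the +/-25% search window is an illustrative choice.

import numpy as np

def periodogram(s):
    # Spectral power of the positive-frequency DFT components; bin k of an
    # N-sample signal corresponds to a candidate period of N / k.
    power = np.abs(np.fft.rfft(s)) ** 2
    power[0] = 0.0                                  # discard the DC component
    return power

def noise_threshold(s, n_perm=100, seed=None):
    # Permute the series to destroy periodic structure, record the maximum
    # spectral power of each permutation, and keep the 99th percentile of
    # those maxima (the 99%-confidence rule described above).
    rng = np.random.default_rng(seed)
    maxima = [periodogram(rng.permutation(s)).max() for _ in range(n_perm)]
    return np.percentile(maxima, 99)

def autocorrelation(s):
    s = s - s.mean()
    acf = np.correlate(s, s, mode="full")[len(s) - 1:]
    return acf / acf[0]

def lies_on_hill(acf, period, rel_window=0.25):
    # Simplified stand-in for the quadratic-fit validation: keep the hint if
    # the ACF has a local maximum strictly inside a window around it.
    lo, hi = int((1 - rel_window) * period), int((1 + rel_window) * period) + 1
    if hi >= len(acf) or hi - lo < 3:
        return False
    peak = lo + int(np.argmax(acf[lo:hi]))
    return lo < peak < hi - 1

def detect_periodicities(s):
    s = np.asarray(s, dtype=float)
    n = len(s)
    hints = sorted({n / k for k
                    in np.nonzero(periodogram(s) > noise_threshold(s))[0] if k > 0})
    acf = autocorrelation(s)
    return [round(p) for p in hints if lies_on_hill(acf, p)]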
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HJMCdsC5tX
This paper presents a method to autonomously find multiple periodicities in a signal, using the FFT and ACF and adding three new steps (clustering/filtering/detrending).
We present an adversarial exploration strategy, a simple yet effective imitation learning scheme that incentivizes exploration of an environment without any extrinsic reward or human demonstration. Our framework consists of a deep reinforcement learning (DRL) agent and an inverse dynamics model contesting with each other. The former collects training samples for the latter, and its objective is to maximize the error of the latter. The latter is trained with samples collected by the former, and generates rewards for the former when it fails to predict the actual action taken by the former. In such a competitive setting, the DRL agent learns to generate samples that the inverse dynamics model fails to predict correctly, and the inverse dynamics model learns to adapt to the challenging samples. We further propose a reward structure that ensures the DRL agent collects only moderately hard samples and not overly hard ones that prevent the inverse model from imitating effectively. We evaluate the effectiveness of our method on several OpenAI gym robotic arm and hand manipulation tasks against a number of baseline models. Experimental show that our method is comparable to that directly trained with expert demonstrations, and superior to the other baselines even without any human priors. Over the past decade, imitation learning (IL) has been successfully applied to a wide range of domains, including robot learning BID7 BID20, autonomous navigation BID4 BID19, manipulation tasks BID11 BID17, and self-driving cars BID5. Traditionally, IL aims to train an imitator to learn a control policy π only from expert demonstrations. The imitator is typically presented with multiple demonstrations during the training phase, with an aim to distill them into π. To learn π effectively and efficiently, a large set of high-quality demonstrations are necessary. This is especially prevalent in current state-of-the-art IL algorithms, such as dataset aggregation (DAgger) BID18 and generative adversarial imitation learning (GAIL) BID9. Although these approaches have been the dominant algorithms in IL, a major bottleneck for them is their reliance on high-quality demonstrations, which often require extensive supervision from human experts. In addition, a serious flaw in the learned policy π is its tendency to overfit to demonstration data, preventing it from generalizing to new ones. To overcome the aforementioned challenges in IL, a number of methods have been investigated to enhance the generalizability and data efficiency, or reduce the degree of human supervision. Initial efforts in this direction were based on the idea of meta learning BID6 BID8 ), in which the imitator is trained from a meta learner that is able to quickly learn a new task with only a few set of demonstrations. However, such schemes still require training the meta-learner with tremendous amount of time and demonstration data, leaving much room for improvement. Thus, a rapidly-growing body of literature based on the concept of using forward/inverse dynamics models to learn π within an environment in a self-supervised fashion BID0 BID11 BID13 has emerged in the past few years. One key advantage of the concept is that it provides an autonomous way for preparing training data, removing the need of human intervention. In this paper, we call it self-supervised IL.Self-supervised IL allows an imitator to collect training data by itself instead of using predefined extrinsic reward functions or expert supervision during training. 
It only needs demonstration during inference, drastically decreasing the time and effort required from human experts. Although the core principles of self-supervised IL are straightforward and have been exploited in many fields BID0 BID11 BID12, recent research efforts have been dedicated to addressing the challenges of multi-modality and multi-step planning. For example, the use of forward consistency loss and forward regularizer have been extensively investigated to enhance the task performance of the imitator BID0 BID13. This becomes especially essential when the lengths of trajectories grow and demonstration samples are sparse, as multiple paths may co-exist to lead the imitator from its initial observation to the goal observation. The issue of multi-step planning has also drawn a lot of attention from researchers, and is usually tackled by recurrent neural networks (RNNs) and step-by-step demonstrations BID11 BID13. The above self-supervised IL approaches report promising , however, most of them are limited in applicability due to several drawbacks. First, traditional methods of data collection are usually inefficient and time-consuming. Inefficient data collection in poor exploration, giving rise to a degradation in robustness to varying environmental conditions (e.g., noise in motor control) and generalizability to difficult tasks. Second, human bias in data sampling range tailored to specific interesting configurations is often employed BID0 BID11. Although a more general exploration strategy called curiosity-driven exploration was later proposed in BID12, it focuses only on exploration in states novel to the forward dynamics model, rather than those directly influential to the inverse dynamics model. Furthermore, it does not discuss the applicability to continuous control domains, and fails in high dimensional action spaces according to our experiments in Section 4. Unlike the approaches discussed above, we do not propose to deal with multi-modality or multi-step planning. Instead, we focus our attention on improving the overall quality of the collected samples in the context of self-supervised IL. This motivates us to equip the model with the necessary knowledge to explore the environment in an efficient and effective fashion. In this paper, we propose a straightforward and efficient self-supervised IL scheme, called adversarial exploration strategy, which motivates exploration of an environment in a self-supervised manner (i.e., without any extrinsic reward or human demonstration). Inspired by; BID23; BID24, we implement the proposed strategy by jointly training a deep reinforcement learning (DRL) agent and an inverse dynamics model competing with each other. The former explores the environment to collect training data for the latter, and receives rewards from the latter if the data samples are considered difficult. The latter is trained with the training data collected by the former, and only generates rewards when it fails to predict the true actions performed by the former. In such an adversarial setting, the DRL agent is rewarded only for the failure of the inverse dynamics model. Therefore, the DRL agent learns to sample hard examples to maximize the chances to fail the inverse dynamics model. On the other hand, the inverse dynamics model learns to be robust to the hard examples collected by the DRL agent by minimizing the probability of failures. 
As a , as the inverse dynamics model becomes stronger, the DRL agent is also incentivized to search for harder examples to obtain rewards. Overly hard examples, however, may lead to biased exploration and cause instability of the learning process. In order to stabilize the learning curve of the inverse dynamics model, we further propose a reward structure such that the DRL agent is encouraged to explore moderately hard examples for the inverse dynamics model, but refraining from too difficult ones for the latter to learn. The self-regulating feedback structure between the DRL agent and the inverse dynamics model enables them to automatically construct a curriculum for exploration. We perform extensive experiments to validate adversarial exploration strategy on multiple OpenAI gym BID3 robotic arm and hand manipulation task environments simulated by the MuJoCo physics engine , including FetchReach, FetchPush, FetchPickAndPlace, FetchSlide, and HandReach. These environments are intentionally selected by us for evaluating the performance of inverse dynamics model, as each of them allows only a very limited set of chained actions to transition the robotic arms and hands to target observations. We examine the effectiveness of our method by comparing it against a number of self-supervised IL schemes. The experimental show that our method is more effective and data-efficient than the other self-supervised IL schemes for both low-and high-dimensional observation spaces, as well as in environments with high-dimensional action spaces. We also demonstrate that in most of the cases the performance of the inverse dynamics model trained by our method is comparable to that directly trained with expert demonstrations. The above observations suggest that our method is superior to the other self-supervised IL schemes even in the absence of human priors. We further evaluate our method on environments with action space perturbations, and show that our method is able to achieve satisfactory success rates. To justify each of our design decisions, we provide a comprehensive set of ablative analysis and discuss their implications. The contributions of this work are summarized as follows:• We introduce an adversarial exploration strategy for self-supervised IL. It consists of a DRL agent and an inverse dynamics model developed for efficient exploration and data collection.• We employ a competitive scheme for the DRL agent and the inverse dynamics model, enabling them to automatically construct a curriculum for exploration of observation space.• We introduce a reward structure for the proposed scheme to stabilize the training process.• We demonstrate the proposed method and compare it with a number of baselines for multiple robotic arm and hand manipulation tasks in both low-and high-dimensional state spaces.• We validate that our method is generalizable to tasks with high-dimensional action spaces. The remainder of this paper is organized as follows. Section 2 introduces material. Section 3 describes the proposed adversarial exploration strategy in detail. Section 4 reports the experimental , and provides an in-depth ablative analysis of our method. Section 5 concludes. In this section, we briefly review DRL, policy gradient methods, as well as inverse dynamics model. DRL trains an agent to interact with an environment E. At each timestep t, the agent receives an observation x t ∈ X, where X is the observation space of E. 
It then takes an action a t from the action space A based on its current policy π, receives a reward r, and transitions to the next observation x. The policy π is represented by a deep neural network with parameters θ, and is expressed as π(a|x, θ). The goal of the agent is to learn a policy to maximize the discounted sum of rewards G t: DISPLAYFORM0 where t is the current timestep, γ ∈ the discount factor, and T the horizon. Policy gradient methods BID10 BID25 ) are a class of RL techniques that directly optimize the parameters of a stochastic policy approximator using policy gradients. Although these methods have achieved remarkable success in a variety of domains, the high variance of gradient estimates has been a major challenge. Trust region policy optimization (TRPO) BID21 circumvented this problem by applying a trust-region constraint to the scale of policy updates. However, TRPO is a second-order algorithm, which is relatively complicated and not compatible with architectures that embrace noise or parameter sharing BID22. In this paper, we employ a more recent family of policy gradient methods, called proximal policy optimization (PPO) BID22. PPO is an approximation to TRPO, which similarly prevents large changes to the policy between updates, but requires only first-order optimization. PPO is superior in its generalizability and sample complexity while retaining the stability and reliability of TRPO 1. An inverse dynamics model I takes as input a pair of observations (x, x), and predicts the actionâ required to reach the next observation x from the current observation x. It is formally expressed as: DISPLAYFORM0 where (x, x) are sampled from the collected data, and θ I represents the trainable parameters of I. During the training phase, θ I is iteratively updated to minimize the loss function L I, expressed as: DISPLAYFORM1 where d is a distance metric, and a the ground truth action. During the testing phase, a sequence of observations {x 0,x 1, · · ·,x T} is first captured from an expert demonstration. A pair of observations (x t,x t+1) is then fed into I at each timestep t. Starting fromx 0, the objective of I is to predict a sequence of actions {â 0,â 1, · · ·,â T −1} and transition the final observationx T as close as possible. In this section, we first describe the proposed adversarial exploration strategy. We then explain the training methodology in detail. Finally, we discuss a technique for stabilizing the training process. FIG0 shows a framework that illustrates the proposed adversarial exploration strategy, which includes a DRL agent P and an inverse dynamics model I. Assume that sequence of observations and actions generated by P as it explores E using a policy π. At each timestep t, P collects a 3-tuple training sample (x t, a t, x t+1) for I, while I predicts an actionâ t and generates a reward r t for P. In this work, I is modified from Eq. FORMULA1 to include an additional hidden vector h t, which recurrently encodes the information of the past observations. I is thus expressed as: where f (·) denotes the recurrent function. θ I is iteratively updated to minimize L I, formulated as: DISPLAYFORM0 where β is a scaling constant. We employ mean squared error β||a t −â t || 2 as the distance metric d(a t,â t), since we only consider continuous control domains in this paper. It can be replaced with a cross-entropy loss for discrete control tasks. 
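A minimal PyTorch-style sketch of this recurrent inverse dynamics model and its loss is given below. Only the structure described above is taken from the text: the model predicts the action from (x_t, x_{t+1}) and a recurrent hidden state h_t, and is trained with the squared error β||a_t − â_t||². The layer sizes and the choice of a GRU cell are illustrative assumptions; the exact architectures are specified in the supplementary material.

import torch
import torch.nn as nn

class InverseDynamicsModel(nn.Module):
    # Predicts the action that moves x_t to x_{t+1}, conditioned on a
    # recurrent hidden state h_t that summarizes past observations.
    def __init__(self, obs_dim, act_dim, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(2 * obs_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh())
        self.rnn = nn.GRUCell(hidden, hidden)      # the recurrent function f(.)
        self.head = nn.Linear(hidden, act_dim)

    def forward(self, x_t, x_next, h_t):
        z = self.encoder(torch.cat([x_t, x_next], dim=-1))
        h_next = self.rnn(z, h_t)
        return self.head(h_next), h_next           # (predicted action, new hidden state)

def inverse_dynamics_loss(a_true, a_pred, beta=1.0):
    # L_I = beta * ||a_t - a_hat_t||^2, averaged over the batch.
    return beta * ((a_true - a_pred) ** 2).sum(dim=-1).mean()

# Usage: h = torch.zeros(batch_size, 256); a_pred, h = model(x_t, x_next, h)

The scalar returned by inverse_dynamics_loss is exactly the quantity that is handed back to the DRL agent as its reward, as described next.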
We directly use L I as the reward r t for P, expressed as: DISPLAYFORM1 Our method targets at improving both the quality and efficiency of the data collection process performed by P, as well as the performance of I. Therefore, the goal of the proposed framework is twofold. First, P has to learn an adversarial policy π adv (a t |x t) such that its cumulated discounted rewards DISPLAYFORM2 ) is maximized. Second, I requires to learn an optimal θ I such that Eq. is minimized. Minimizing L I (i.e., r t) leads to decreased G t|π adv, forcing P to enhance π adv to explore more difficult samples to increase G t|π adv. This implies that P is motivated to focus on I's weak points, instead of randomly collecting ineffective training samples. Training I with hard samples not only accelerates its learning progress, but also helps to boost its performance. We describe the training methodology of our adversarial exploration strategy by a pseudocode presented in Algorithm 1. Assume that P's policy π adv is parameterized by a set of trainable parameters θ P, and is represented as π adv (a t |x t, θ P). We create two buffers Z P and Z I for storing the training samples of P and I, respectively. In the beginning, Z P, Z I, E, θ P, θ I, π adv, as well as a timestep cumulative counter c are initialized. A number of hyperparameters are set to appropriate values, including the number of iterations N iter, the number of episodes N episode, the horizon T, as well as the update period T P of θ P. At each timestep t, P perceives the current observation x t from E, takes an action a t according to π adv (a t |x t, θ P), and receives the next observation x t+1 and a termination indicator ξ (lines 9-11). ξ is set to 1 only when t equals T, otherwise it is set to 0. We then store (x t, a t, x t+1, ξ) and (x t, a t, x t+1) in Z P and Z I, respectively. We update θ P every T P timesteps using the samples stored in Z P, as shown in (lines 13-21). At the end of each episode, we update θ I with samples drawn from Z I according to the loss function L I defined in Eq. (line 23). Although adversarial exploration strategy is effective in collecting hard samples, it requires additional adjustments if P becomes too strong such that the collected samples are too difficult for I to learn. Overly difficult samples lead to a large variance in gradients derived from L I, which in turn cause a performance drop in I and instability in its learning process. We analyze this phenomenon in greater detail in Section 4.5. To tackle the issue, we propose a training technique that reshapes r t as follows: DISPLAYFORM0 Algorithm 1 Adversarial exploration strategy 1: Initialize Z P, Z I, E, and model parameters θ P & θ I 2: Initialize π adv (at|xt, θ P) 3: Initialize the timestep cumulative counter c = 0 4: Set Niter, N episode, T, and T P 5: for iteration i = 1 to Niter do 6:for episode e = 1 to N episode do for timestep t = 0 to T do 8:P perceives xt from E, and predicts an action at according to π adv (at|xt, θ P) xt+1 = E(xt, at)10: DISPLAYFORM0 Store (xt, at, xt+1, ξ) in Z P Store (xt, at, xt+1) in Z I if (c % T P) == 0 then Initialize an empty batch B Initialize a recurrent state ht for (xt, at, xt+1, ξ) in Z P do 17: Evaluateât = I(xt, xt+1|ht, θ I) (calculated from Eq. FORMULA4 18:Evaluate rt(xt, at, xt+1) = L I (at,ât|θ I) (calculated from Eq. FORMULA6 19: DISPLAYFORM0 Update θ P with the gradient calculated from the samples of B Reset Z P 22: DISPLAYFORM0 Update θ I with the gradient calculated from the samples of Z I (according to Eq. 
FORMULA5 24: end where δ is a pre-defined threshold value. This technique poses a restriction on the range of r t, driving P to gather moderate samples instead of overly hard ones. Note that the value of δ affects the learning speed and the final performance. We plot the impact of δ on the learning curve of I in Section 4.5. We further provide an example in our supplementary material to visualize the effect of this technique. In this section, we present experimental for a series of robotic tasks, and validate that (i) our method is effective in both low-and high-dimensional observation spaces; (ii) our method is effective in environments with high-dimensional action spaces; (iii) our method is more data efficient than the baseline methods; and (iv) our method is robust against action space perturbations. We first introduce our experimental setup. Then, we report experimental of robotic arm and hand manipulation tasks. Finally, we present a comprehensive set of ablative analysis to validate our design decisions. We first describe the environments and tasks. Next, we explain the evaluation procedure and the method for collecting expert demonstrations. We then walk through the baselines used for comparison. We evaluate our method on a number of robotic arm and hand manipulation tasks via OpenAI gym BID3 environments simulated by the MuJoCo physics engine. We use the Fetch and Shadow Dexterous Hand BID16 for the arm and hand manipulation tasks, respectively. For the arm manipulation tasks, which include FetchReach, FetchPush, FetchPickAndPlace, and FetchSlide, the imitator (i.e., the inverse dynamic model I) takes as inputs the positions and velocities of a gripper and a target object. It then infers the gripper's action in 3-dimensional space to manipulate it. For the hand manipulation task HandReach, the imitator takes as inputs the positions and velocities of the fingers of a robotic hand, and determines the velocities of the joints to achieve the goal. In addition to low-dimensional observations (i.e., position, velocity, and gripper state), we further perform experiments for the above tasks using visual observations (i.e., high-dimensional observations) in the form of camera images taken from a third-person perspective. The detailed description of the above tasks is specified in BID16. For the detailed configurations of these tasks, please refer to our supplementary material. The primary objective of our experiments is to demonstrate the efficiency of the proposed adversarial exploration strategy in collecting training data (in a self-supervised manner) for the imitator. We compare our strategy against a number of self-supervised data collection methods (referred to as "baselines" or "baseline methods") described in Section 4.1.4. As different baseline methods employ different data collection strategies, the learning curve of the imitator also varies for different cases. For a fair comparison, the model architecture of the imitator and the amount of training data are fixed for all cases. All of the experimental are evaluated and averaged over 20 trials, corresponding to 20 different random initial seeds. In each trial, we train an imitator by the training data collected by a single self-supervised data collection method. At the beginning of each episode, the imitator receives a sequence of observations {x 0,x 1, · · ·,x T} from a successful expert demonstration. At each timestep t, the imitator infers an actionâ t from an expert observationx t+1 and its current observation x t by Eq.. 
We periodically evaluate the imitator every 10K timesteps. The evaluation is performed by averaging the success rates of reachingx T over 500 episodes. The configuration of the imitator and the hyperparameters of the baselines are summarized in the supplementary material. For each task mentioned in Section 4.1.1, we first randomly configure task-relevant settings (e.g., goal position, initial state, etc.). We then collect demonstrations from non-trivial and successful episodes performed by a pre-trained expert agent. Please note that the collected demonstrations only contain sequences of observations. The implementation details of the expert agent and the method for filtering out trivial episodes are presented in our supplementary material. We compare our proposed methodology with the following four baseline methods in our experiments.• Random: This method collects training samples by random exploration. We consider it to be an important baseline because of its simplicity and prevalence in a number of research works on self-supervised IL BID0 BID11 BID13 ).• Demo: This method trains the imitator directly with expert demonstrations. It serves as the performance upper bound, as the training data is the same as the testing data for this method.• Curiosity: This method trains a DRL agent via curiosity BID12 to collect training samples. Unlike the original implementation, we replace its DRL algorithm with PPO, as training should be done on a single thread for a fair comparison with the other baselines. This is alo an important baseline due to its effectiveness in BID13.• Noise BID15: In this method, noise is injected to the parameter space of a DRL agent to encourage exploration BID15. Please note that its exploratory behavior relies entirely on parameter space noise, instead of using any extrinsic reward. We include this method due to its superior performance and data efficiency in many DRL tasks. We compare the performance of the proposed method and the baselines on the robotic arm manipulation tasks described in Section 4.1.1. As opposed to discrete control domains, these tasks are especially challenging, as the sample complexity grows in continuous control domains. Furthermore, the imitator may not have the complete picture of the environment dynamics, increasing its difficulty to learn an inverse dynamics model. In FetchSlide, for instance, the movement of the object on the slippery surface is affected by both friction and the force exerted by the gripper. It thus motivates us to investigate whether the proposed method can help overcome the challenge. In the subsequent paragraphs, we discuss the experimental in both low-and high-dimensional observation spaces, and plot them in Figs. 2 and 3, respectively. All of the are obtained by following the procedure described in Section 4.1.2. The shaded regions in Figs. 2 and 3 represent the confidence intervals. Low-dimensional observation spaces. FIG1 plots the learning curves for all of the methods in low-dimensional observation spaces. In all of the tasks, our method yields superior or comparable performance to the baselines except for Demo, which is trained directly with expert demonstrations. In FetchReach, it can be seen that every method achieves a success rate of 1.0. This implies that it does not require a sophisticated exploration strategy to learn an inverse dynamics model in an environment where the dynamics is relatively simple. 
It should be noted that although all methods reach the same final success rate, ours learns significantly faster than Demo. In contrast, in FetchPush, our method is comparable to Demo, and demonstrates superior performance to the other baselines. Our method also learns drastically faster than all the other baselines, which confirms that the proposed strategy does improve the performance and efficiency of self-supervised IL. Our method is particularly effective in tasks that require an accurate inverse dynamics model. In FetchPickAndPlace, for example, our method surpasses all the other baselines. However, all methods including Demo fail to learn a successful inverse dynamics model in FetchSlide, which suggests that it is difficult to train an imitator when the outcome of an action is not completely dependent on the action itself. It is worth noting that Curiosity loses to Random in FetchPush and FetchSlide, and Noise performs even worse than these two methods in all of the tasks. We therefore conclude that Curiosity is not suitable for continuous control tasks, and the parameter space noise strategy cannot be directly applied to self-supervised IL. In addition to the quantitative presented above, we further discuss the empirical qualitatively. Please refer our supplementary material for a description of the qualitative . High-dimensional observation spaces. FIG2 plots the learning curves of all methods in highdimensional observation spaces. It can be seen that our method performs significantly better than the other baseline methods in most of the tasks, and is comparable to Demo. In FetchPickAndPlace, our method is the only one that learns a successful inverse dynamics model. Similar to the in FIG1, Curiosity is no better than Random in high-dimensional observation spaces. Please note that we do not include Noise in FIG2 as it performs worse enough already in low-dimensional settings. FIG1 plots the learning curves for each of the methods considered. Please note that Curiosity, Noise and our method are pre-trained with 30K samples collected by random exploration, as we observe that these methods on their own suffer from large errors in an early stage during training, which prevents them from learning at all. After the first 30K samples, they are trained with data collected by their exploration strategy instead. From the in FIG1, it can be seen that Demo easily stands out from the other methods as the best-performing model, surpassing them all by a considerable extent. Although our method is not as impressive as Demo, it significantly outperforms all of the other baseline methods, achieving a success rate of 0.4 while the others are still stuck at around 0.2. The reason that the inverse dynamics models trained by the self-supervised data-collection strategies discussed in this paper (including ours and the other baselines) are not comparable to the Demo baseline in the HandReach task is primarily due to the high-dimensional action space. It is observed that the data collected by the self-supervised data-collection strategies only cover a very limited range of the state space in the HandReach environment. Therefore, the inverse dynamics models trained with these data only learn to imitate trivial poses, leading to the poor success rates presented in FIG1. We evaluate the performance of the imitator trained in an environment with action space perturbations to validate the robustness of our adversarial exploration strategy. 
In such an environment, every action taken by the DRL agent is perturbed by a Gaussian random noise, such that the training samples collected by the DRL agent are not inline with its actual intentions. Please note that we only inject noise during the training phase, as we aim to validate the robustness of the proposed data collection strategy. The scale of the injected noise is specified in the supplementary material. We report the performance change rates of various methods for different tasks in Table. 1. The performance change rate is defined as: DISPLAYFORM0, where P r perturb and P r orig represent the highest success rates with and without action space perturbations, respectively. From Table. 1, it can be seen that our method retains the performance for most of the tasks, indicating that our method is robust to action space perturbations during the training phase. Please note that although Curiosity and Noise also achieve a change rate of 0% in HandReach and FetchSlide, they are not considered robust due to their poor performance in the original environment FIG1. Another interesting observation is that our method even gains some performance from action space perturbations in FetchPush and HandReach, which we leave as one of our future directions. We thus conclude that our method is robust to action space perturbations during the training phase, making it a practical option in real-world settings. In this section, we provide a set of ablative analysis. We examine the effectiveness of our method by an investigation of the training loss distribution, the stabilization technique, and the influence of δ. Please note that the value of δ is set to 1.5 by default, as described in our supplementary material. Training loss distribution. FIG3 plots the probability density function (PDF) of L I (derived from Eq. FORMULA5) by kernel density estimation (KDE) for the first 2K training batches during the training phase. The vertical axis corresponds to the probability density, while the horizontal axis represents the scale of L I. The curves Ours (w stab) and Ours (w/o stab) represent the cases where the stabilization technique described in Section 3.3 is employed or not, respectively. We additionally plot the curve Random in FIG3 to highlight the effectiveness of our method. It can be observed that both Ours (w stab) and Ours (w/o stab) concentrate on notably higher loss values than Random. This observation implies that adversarial exploration strategy does explore hard samples for inverse dynamics model. We validate the proposed stabilization technique in terms of the PDF of L I and the learning curve of the imitator, and plot the in FIG3 and 5, respectively. FIG3, it can be observed that the modes of Ours (w stab) are lower than those of Ours (w/o stab) in most cases, implying that the stabilization technique indeed motivates the DRL agents to favor those moderately hard samples. We also observe that for each of the five cases, the mode of Ours (w stab) is close to the value of δ (plotted in a dotted line), indicating that our reward structure presented in Eq. does help to regulate L I (and thus r t) to be around δ. To further demonstrate the effectiveness of the stabilization technique, we compare the learning curves of Ours (w stab) and Ours (w/o stab) in FIG4. It is observed that for the initial 10K samples of the five cases, the success rates of Ours (w/o stab) are comparable to those of Ours (w stab). However, their performance degrade drastically during the rest of the training phase. 
This observation confirms that the stabilization technique does contribute significantly to our adversarial exploration strategy. Although most of the DRL works suggest that the rewards should be re-scaled or clipped within a range (e.g., from -1 to 1), the unbounded rewards do not introduce any issues during the training process of our experiments. The empirical rationale is that the rewards received by the DRL agent are regulated by Eq. FORMULA8 to be around δ, as described in Section 4.5 and depicted in FIG3. Without the stabilization technique, however, the learning curves of the inverse dynamics model degrade drastically (as illustrated in FIG1, even if the reward clipping technique is applied. Influence of δ. FIG5 compares the learning curves of the imitator for different values of δ. For instance, Ours(0.1) corresponds to δ = 0.1. It is observed that for most of the tasks, the success rates drop when δ is set to an overly high or low value (e.g., 100.0 or 0.0), suggesting that a moderate value of δ is necessary for the stabilization technique. The value of δ can be adjusted dynamically by the adaptive scaling technique presented in BID15, which is left as our future direction. From the analysis presented above, we conclude that the proposed adversarial exploration strategy is effective in collecting difficult training data for the imitator. The analysis also validates that our stabilization technique indeed leads to superior performance, and is capable of guiding the DRL agent to collect moderately hard samples. This enables the imitator to pursue a stable learning curve. In this paper, we presented an adversarial exploration strategy, which consists of a DRL agent and an inverse dynamics model competing with each other for self-supervised IL. The former is encouraged to adversarially collect difficult training data for the latter, such that the training efficiency of the latter is significantly enhanced. Experimental demonstrated that our method substantially improved the data collection efficiency in multiple robotic arm and hand manipulation tasks, and boosted the performance of the inverse dynamics model in both low-and high-dimensional observation spaces. In addition, we validated that our method is generalizable to environments with high-dimensional action spaces. Moreover, we showed that our method is robust to action space perturbations. Finally, we provided a set of ablative analysis to validate the effectiveness for each of our design decisions. In addition to the quantitative presented above, we further discuss the empirical qualitatively. Through visualizing the training progress, we observe that our method initially acts like Random, but later focuses on interacting with the object in FetchPush, FetchSlide, and FetchPickAndPlace. This phenomenon indicates that adversarial exploration strategy naturally gives rise to a curriculum that improves the learning efficiency, which resembles curriculum learning BID2. Another benefit that comes with the phenomenon is that data collection is biased towards interactions with the object. Therefore, the DRL agent concentrates on collecting interesting samples that has greater significance, rather than trivial ones. For instance, the agent prefers pushing the object to swinging the robotic arm. On the other hand, although Curiosity explores the environment very thoroughly in the beginning by stretching the arm into numerous different poses, it quickly overfits to one specific pose. 
This causes its forward dynamics model to keep maintaining a low error, making it less curious about the surroundings. Finally, we observe that the exploratory behavior of Noise does not change as frequently as ours, Random, and Curiosity. We believe that the method's success in the original paper BID15 ) is largely due to extrinsic rewards. In the absence of extrinsic rewards, however, the method becomes less effective and unsuitable for data collection, especially in self-supervised IL. We employ PPO BID22 as the RL agent responsible for collecting training samples because of its ease of use and good performance. PPO computes an update at every timestep that minimizes the cost function while ensuring the deviation from the previous policy is relatively small. One of the two main variants of PPO is a clipped surrogate objective expressed as: DISPLAYFORM0 where is the advantage estimate, and a hyperparameter. The clipped probability ratio is used to prevent large changes to the policy between updates. The other variant employs an adaptive penalty on KL divergence, given by: DISPLAYFORM1 where β is an adaptive coefficient adjusted according to the observed change in the KL divergence. In this work, we employ the former objective due to its better empirical performance. In the experiments, the inverse dynamics model I(x t, x t+1 |h t, θ I) of all methods employs the same network architecture. For low-dimensional observation setting, we use 3 Fully-Connected (FC) layers with 256 hidden units followed by tanh activation units. For high-dimensional observation setting, we use 3-layer Convolutional Neural Network (CNN) followed by relu activation units. The CNNs are configured as,, and, with each element in the 3-tuple denoting the number of output features, width/height of the filter, and stride. The features extracted by stacked CNNs are then fed forward to a FC with 512 hidden units followed by relu activation units. For both low-and high-dimensional observation settings, we use the architecture proposed in BID22. During training, we periodically update the DRL agent with a batch of transitions as described in Algorithm. 1. We split the batch into several mini-batches, and update the RL agent with these mini-batches iteratively. The hyperparameters are listed in Table. 2 (our method). Our baseline Curiosity is implemented based on the work BID13. The authors in BID13 propose to employ a curiosity-driven RL agent BID12 efficiency of data collection. The curiosity-driven RL agent takes curiosity as intrinsic reward signal, where curiosity is formulated as the error in an agents ability to predict the consequence of its own actions. This can be defined as a forward dynamics model: DISPLAYFORM0 whereφ(x) is the predicted feature encoding at the next timestep, φ(x) the feature vector at the current timestep, a the action executed at the current timestep, and θ F the parameters of the forward model f. The network parameters θ F is optimized by minimizing the loss function L F: DISPLAYFORM1 For low-and high-dimensional observation settings, we use the architecture proposed in BID22. The implementation of φ depends on the model architecture of the RL agent. For low-dimensional observation setting, we implement φ with the architecture of low-dimensional observation PPO. Note that φ does not share parameters with the RL agent in this case. 
For highdimensional observation setting, we share the features extracted by the CNNs of the RL agent, then feed these features to φ which consists of a FC with 512 hidden units followed by relu activation. The hyperparameters settings can be found in Table. 2(Curiosity). We directly apply the same architecture in BID15 without any modification. Please refer to BID15 for more detail.set of demonstrations in FetchReach is relatively difficult with only 100 episodes of demonstrations. A huge performance gap is observed when the number of episodes is increased to 1,000. Consequently, Demo is selected as our Demo baseline for the presentation of the experimental in Section 4. Another advantage is that Demo demands less memory than Demo. To test the robustness of our method to noisy actions, we add noise to the actions in the training stage. Letâ t denote the predicted action by the imitator. The actual noisy action to be executed by the robot is defined as:â t:=â t + N (0, σ), where σ is set as 0.01. Note thatâ t will be clipped in the range defined by each environment. In this section, we visualize the effects of our stabilization technique with a list of rewards r in FIG7. The rows of Before and After represent the rewards before and after reward shaping, respectively. The bar on the right-hand side indicates the scale of the reward. It can be observed in FIG7 that after reward shaping, the rewards are transformed to the negative distance to the specified δ (i.e., 2.5 in this figure). As a , our stabilization technique is able to encourage the DRL agent to pursue rewards close to δ, where higher rewards can be received.
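The precise expression of the reshaped reward is not reproduced in the text above, but the visualization just described (rewards become the negative distance to the specified δ) suggests the following simple form. It is offered as a hedged sketch consistent with that description, not as the exact equation from the paper.

def shaped_reward(l_i, delta=1.5):
    # l_i: the raw reward, i.e. the current inverse-dynamics loss L_I.
    # The shaped reward peaks (at 0) when L_I equals delta and decreases as
    # L_I moves away from delta in either direction, steering the agent
    # toward moderately hard samples. delta = 1.5 is the default value used
    # in the ablation study.
    return -abs(l_i - delta)

Under this shaping, both trivially easy samples (L_I far below δ) and overly hard ones (L_I far above δ) yield low rewards, which matches the behaviour shown in FIG7.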
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
Hyxtso0qtX
A simple yet effective imitation learning scheme that incentivizes exploration of an environment without any extrinsic reward or human demonstration.
This paper proposes a dual variational autoencoder (DualVAE), a framework for generating images corresponding to multiclass labels. Recent research on conditional generative models, such as the Conditional VAE, exhibits image transfer by changing labels. However, when the dimension of the multiclass labels is large, these models cannot change images in accordance with the labels, because learning multiple distributions of the corresponding classes is necessary to transfer an image. This leads to a lack of training data. Therefore, instead of conditioning with labels, we condition with latent vectors that include label information. DualVAE divides one distribution of the latent space by linear decision boundaries using labels. Consequently, DualVAE can easily transfer an image by moving a latent vector toward a decision boundary, and is robust to missing values of multiclass labels. To evaluate our proposed method, we introduce a conditional inception score (CIS) for measuring how much an image changes toward the target class. We evaluate the images transferred by DualVAE using the CIS on the CelebA dataset and demonstrate state-of-the-art performance in a multiclass setting. Recent conditional generative models have shown remarkable success in generating and transferring images. Specifically, a conditional variational autoencoder (CVAE) BID4 can generate conditional images by learning the latent space Z that corresponds to multiclass labels. In addition, StarGAN BID1 and FaderNetworks BID5 can generate images corresponding to multiple domains by conditioning with domains such as attributes. However, when the dimension of the multiclass is increased, these models cannot transfer images corresponding to one arbitrary domain (an element of a multiclass label). The possible reasons are the following. For simplicity, we consider a binary multiclass classification. To transfer an image of a certain class, it is necessary to learn the distributions of the corresponding class. That is, assuming that the number of classes in the multiclass is N, conditional models need to create 2^N distributions. However, when N is large, training is difficult as O(2^N) training samples will be required. Hence, instead of conditioning with labels, we propose DualVAE, which conditions with latent vectors that include label information. DualVAE divides one distribution of the latent space by N linear decision boundaries, which requires learning only O(N) parameters, by adding another decoder p_w(y|z) to a variational autoencoder (VAE) BID3. DualVAE assumes that a label is modeled by the inner product of a vector of the latent space and a vector of the dual latent space. There are two advantages to the DualVAE decoder p_w(y|z) being a linear model. First, DualVAE can easily transfer an image by moving a latent vector toward a decision boundary. Second, DualVAE is robust to missing values of multiclass labels. In addition to this method, we propose the conditional inception score (CIS), a new metric for evaluating conditionally transferred images. Although the evaluation methods often used for generative models are the Inception Score (IS) BID9 and the Fréchet Inception Distance BID2, they evaluate the diversity of images and are not suitable for evaluating transferred images conditioned with domains such as attributes or classes.
Therefore, we propose a new metric to evaluate two properties: the first property pertains to whether images in one domain are transferred properly to images in another domain; the second property pertains to whether images in one domain Figure 1: Conditional VAE learns 2 n distributions for each binary multiclass label when the number of class is n. DualVAE learns n decision boundaries for dividing a distribution of latent space. u 1 is a parameter of a decision boundary, which we call a dual vector.transferred to images in another domain can preserve the original properties. By using the CIS, we compare DualVAE with other methods that can perform image-to-image translations for multiple domains. In summary, the contributions from this study are as follows: 1) We introduce DualVAE, a method for transferring images corresponding to multiclass labels and demonstrate that images can be transferred quantitatively and qualitatively. 2) We propose the CIS, a new metric that can evaluate transferred images corresponding to multiclass labels. Conditional model Several studies have been conducted to generate or transfer images conditioned with labels. For example, conditional VAE BID4 is an extension of a VAE BID3 where latent variables z are inferred using image x and label y, and image x is reconstructed with y,z. Further, a CGAN BID7 ) is a conditional model using a GAN, where a noise z and a class label y are input to the generator, and learning is performed similarly to the GAN using image x corresponding to class label y. FaderNetworks BID5 learns latent variables from which label information is eliminated by using adversarial learning and assigns attributes to images by providing labels to the decoder. Furthermore, StarGAN BID1, a method of domain transfer, had succeeded in outputting a beautiful image corresponding to an attribute by conditioning with a domain (attribute). However, all these methods are models conditioned with labels; therefore, as the dimension of the labels becomes larger, the number of training samples becomes insufficient. Connection to the Information Bottleneck As with DualVAE, there are several papers related to finding a latent variable z that predicts label y. For example, Information Bottleneck (IB) BID11 is a method for obtaining a latent expression z that solves task y. IB is a method which leaves the latent information z for solving the task y by maximizing the mutual information amount I(Z; Y). At the same time, extra information about input x is discarded by minimizing I(Z; X). Variational Information Bottleneck (VIB) BID0 succeeded in parameterizing the IB with a neural network, by performing a variational approximation. VIB can also be considered as a kind of extension of VAE. VAE minimizes the mutual information I(Z; i) between individual data i and latent variable z while maximizing I(Z; X). DualVAE can be regarded as a framework of VIB as well, and it minimizes I(Z; i) while maximizing I(Z; Y) and I(Z; X). We can also regard DualVAE as a probabilistic matrix factorization (PMF) BID8 extended to a generative model. A PMF is used in several application areas, primarily in collaborative filtering, which is a typical recommendation algorithm. It can predict missing ratings of users by assuming that the user's ratings are modeled by a linear combination of the item and user latent factors. Similarly, we experimentally show that DualVAE is also robust to missing labels. 
We devised DualVAE by adding a decoder p_w(y|z) = p(y|z, u) to the VAE to learn the decision boundaries between classes. Here, z is a vector in the latent space Z, u is a vector in the dual latent space Z*, and y is a label. Unlike the CVAE, this model does not require the label y when inferring z from x; the difference is illustrated in FIG0. The objective function of the VAE is as follows:

L_VAE(θ, φ; x) = E_{q_φ(z|x)}[ log p_θ(x|z) ] − D_KL( q_φ(z|x) || p(z) ),   (1)

where φ and θ are the parameters of the encoder and decoder of the VAE, respectively. The lower bound of DualVAE is as follows:

L_DualVAE(θ, φ, w; x, y) = L_VAE(θ, φ; x) + E_{q_φ(z|x)}[ log p_w(y|z) ],   (2)

where p_w(y|z) = Bern(y|σ(Uz)). Here, U is a domain feature matrix whose row vectors are the dual vectors u, and Bern is a Bernoulli distribution. As Equation 2 shows, the objective function of DualVAE is the objective function of the VAE plus the expected log-likelihood of p_w(y|z). Specifically, training is performed such that the inner product of z_j ∈ Z and u_i ∈ Z* predicts the label y_ij, where j is the index of a sample and i is the index of a domain. At the same time, we find the values of θ and φ that maximize the lower bound in Equation 1. We transfer images on domain i by computing the vector

w_i = z + λ u_i,   (3)

where λ ∈ R is a parameter. Image transfer is demonstrated by varying λ and decoding w_i; Equation 3 corresponds to moving a latent vector toward the decision boundary of domain i. Training (Algorithm 1) requires the images (x_j)_{j=1}^m, a batch size M, an indicator function I_ij for observed labels, VAE and dual-vector optimizers g and g_e, a hyperparameter α, and the label matrix Y = (y_ij); the encoder parameters θ, decoder parameters φ, and dual vectors U = (u_i) are initialized before optimization. Although the IS BID9 is a score for measuring generated images, it can only measure the diversity of the images and cannot be used for evaluating the domain transfer of the images. Therefore, we propose the CIS, a score for evaluating the transformation of images into multiclass target domains. The CIS is a scalar value calculated from the sum of two elements. The first is whether the domain transfer of the original image has been successful (transfer score), and the second is whether the features other than the transferred domain are retained (reconstruction score). The computation flow of these scores is shown in FIG1. We calculate the CIS using Algorithm 2. First, we assume that the number of domains is n and that the domain each image belongs to is known. We fine-tune Inception-v3 BID10 using training images as inputs and domains as outputs. To enable the model to classify images into domains, we replace the last layer of the model with a new layer that has n outputs. Next, we transfer test images into n domain images and feed the transferred images into the pretrained Inception-v3. Through this process, we obtain an n × n matrix for every original image, because one image is transferred into n domain images and each domain image is mapped to an n-dimensional vector. We then map the original image into an n-dimensional vector using Inception-v3 and subtract this vector from each row of the n × n matrix. We name this matrix M. The key points are the following: the diagonal elements of M should be large because the specified domain should change significantly, and the off-diagonal elements of M should be small because the transferred images should preserve the original features.
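Before turning to the CIS algorithm itself, the following sketch illustrates the DualVAE objective in Equation 2 and the transfer operation in Equation 3 as described above. It is a minimal reading of the method, not the authors' implementation: the encoder/decoder modules, dimensions, the α weighting, and the optional mask for missing labels are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualVAE(nn.Module):
    """Sketch of DualVAE: a VAE plus a linear label decoder p_w(y|z) = Bern(sigma(Uz))."""
    def __init__(self, encoder, decoder, latent_dim=64, num_domains=40):
        super().__init__()
        self.encoder = encoder                 # assumed to map x -> (mu, logvar)
        self.decoder = decoder                 # assumed to map z -> reconstruction in [0, 1]
        # Rows of U are the dual vectors u_i: one linear decision boundary per domain.
        self.U = nn.Parameter(torch.randn(num_domains, latent_dim) * 0.01)

    def forward(self, x):
        mu, logvar = self.encoder(x)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        x_recon = self.decoder(z)
        y_logits = z @ self.U.t()              # inner products <z, u_i> predict the labels y_ij
        return x_recon, y_logits, mu, logvar

def dual_vae_loss(x, y, x_recon, y_logits, mu, logvar, alpha=1.0, mask=None):
    """Negative of the Equation 2 lower bound (to be minimized); alpha and mask are assumptions."""
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    label_nll = F.binary_cross_entropy_with_logits(y_logits, y, reduction="none")
    if mask is not None:                       # zero out missing labels, per the sparsity experiments
        label_nll = label_nll * mask
    return recon + kld + alpha * label_nll.sum()

def transfer(model, z, domain_i, lam):
    """Equation 3: move the latent code along the dual vector of domain i, then decode."""
    w_i = z + lam * model.U[domain_i]
    return model.decoder(w_i)
```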
Algorithm 2 computes the CIS from the matrix M: a transfer score (ts) is obtained from the diagonal of abs(M), a reconstruction score (rs) from its off-diagonal elements, and the two are combined into a single scalar. In the algorithm, abs denotes taking the absolute value, diag denotes taking the diagonal elements of the matrix, notdiag denotes taking the non-diagonal elements, and avg denotes taking the mean over the multiclass values. x is an original image, and x^(i) denotes the image transferred to domain i. We performed a standard image transfer task with the 40 attributes in the CelebA BID6 dataset, which comprises approximately 200,000 images of faces of celebrities. Comparison of DualVAE and several models DualVAE was compared with several models capable of performing image-to-image translation for multiclass labels using a single model. For each model, we calculated the CIS several times by applying Algorithm 2 to 160 CelebA test images; subsequently, the average and standard deviation were obtained. DualVAE obtained a higher CIS than the other models, and the results are shown in TAB0 and Figure 4. Robustness to sparsity To demonstrate experimentally that DualVAE is robust to missing values in multiclass labels, we performed the following steps. We calculated the rs and ts values by applying Algorithm 2 to 160 CelebA test images while varying the missing ratio of CelebA's domain labels and the λ in Equation 3. As shown in FIG3, DualVAE is robust to the sparseness of domain labels, and the CIS does not decrease even when 90% of the labels are missing. Meanwhile, we found that StarGAN is not as robust as DualVAE with respect to sparseness: when 90% of the domain labels are missing, StarGAN cannot learn at all and generates identical images, so image transfer is not conducted properly. We proposed DualVAE, a simple framework for generating and transferring images corresponding to multiclass labels. Further, we introduced the CIS, a new metric for measuring how well a generated image changes in correspondence with a change of labels. The decoder of DualVAE was a simple linear model in this study; however, we would like to test more complex models in the future.
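To make the CIS computation concrete, the sketch below builds the matrix M and combines the transfer and reconstruction scores as we read the description above. The exact way ts and rs are combined, the classifier interface, and all names are assumptions, not the authors' reference code.

```python
import numpy as np

def conditional_inception_score(classifier, x_orig, transfer_fn, n_domains):
    """Sketch of Algorithm 2 as described in the text (assumptions noted above).

    classifier(images) -> (batch, n_domains) domain scores from a fine-tuned Inception-v3
    transfer_fn(x, i)  -> image x transferred to domain i
    """
    base = classifier(x_orig[None])[0]                       # n-dim vector for the original image
    rows = [classifier(transfer_fn(x_orig, i)[None])[0] for i in range(n_domains)]
    M = np.stack(rows) - base[None, :]                       # n x n matrix M

    A = np.abs(M)
    ts = np.mean(np.diag(A))                                 # diagonal should be large (domain changed)
    rs = -np.mean(A[~np.eye(n_domains, dtype=bool)])         # off-diagonal should be small (features kept)
    return ts + rs, ts, rs
```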
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
S1lnmLLKu4
a new framework using a dual space for generating images corresponding to multiclass labels when the number of classes is large
One of the most notable contributions of deep learning is the application of convolutional neural networks (ConvNets) to structured signal classification, and in particular image classification. Beyond their impressive performances in supervised learning, the structure of such networks inspired the development of deep filter banks referred to as scattering transforms. These transforms apply a cascade of wavelet transforms and complex modulus operators to extract features that are invariant to group operations and stable to deformations. Furthermore, ConvNets inspired recent advances in geometric deep learning, which aim to generalize these networks to graph data by applying notions from graph signal processing to learn deep graph filter cascades. We further advance these lines of research by proposing a geometric scattering transform using graph wavelets defined in terms of random walks on the graph. We demonstrate the utility of features extracted with this designed deep filter bank in graph classification of biochemistry and social network data (incl. state of the art in the latter case), and in data exploration, where they enable inference of EC exchange preferences in enzyme evolution. Over the past decade, numerous examples have established that deep neural networks (i.e., cascades of linear operations and simple nonlinearities) typically outperform traditional "shallow" models in various modern machine learning applications, especially given the increasing Big Data availability nowadays. Perhaps the most well known example of the advantages of deep networks is in computer vision, where the utilization of 2D convolutions enable network designs that learn cascades of convolutional filters, which have several advantages over fully connected network architectures, both computationally and conceptually. Indeed, in terms of supervised learning, convolutional neural networks (ConvNets) hold the current state of the art in image classification, and have become the standard machine learning approach towards processing big structured-signal data, including audio and video processing. See, e.g., Goodfellow et al. (2016, Chapter 9) for a detailed discussion. Beyond their performances when applied to specific tasks, pretrained ConvNet layers have been explored as image feature extractors by freezing the first few pretrained convolutional layers and then retraining only the last few layers for specific datasets or applications (e.g., BID47 BID33 . Such transfer learning approaches provide evidence that suitably constructed deep filter banks should be able to extract task-agnostic semantic information from structured data, and in some sense mimic the operation of human visual and auditory cortices, thus supporting the neural terminology in deep learning. An alternative approach towards such universal feature extraction was presented in BID28, where a deep filter bank, known as the scattering transform, is designed, rather than trained, based on predetermined families of distruptive patterns that should be eliminated to extract informative representations. The scattering transform is constructed as a cascade of linear wavelet transforms and nonlinear complex modulus operations that provides features with guaranteed invariance to a predetermined Lie group of operations such as rotations, translations, or scaling. Further, it also provides Lipschitz stability to small diffeomorphisms of the inputted signal. 
Scattering features have been shown to be effective in several audio (e.g., BID6 BID0 BID27 and image (e.g., BID7 BID40 BID34 processing applications, and their advantages over learned features are especially relevant in applications with relatively low data availability, such as quantum chemistry (e.g., BID15 BID35 .Following the recent interest in geometric deep learning approaches for processing graph-structured data (see, for example, BID4 and references therein), we present here a generalization of the scattering transform from Euclidean domains to graphs. Similar to the Euclidean case, our construction is based on a cascade of bandpass filters, defined in this case using graph signal processing BID38 notions, and complex moduli, which in this case take the form of absolute values (see Sec. 3). While several choices of filter banks could generally be used with the proposed cascade, we focus here on graph wavelet filters defined by lazy random walks (see Sec. 2). These wavelet filters are also closely related to diffusion geometry and related notions of geometric harmonic analysis, e.g. the diffusion maps algorithm of BID10 and the associated diffusion wavelets of BID11. Therefore, we call the constructed cascade geometric scattering, which also follows the same terminology from geometric deep learning. We note that similar attempts at generalizing the scattering transform to graphs have been presented in BID9 as well as BID49 and BID17. The latter two works are most closely related to the present paper. In them, the authors focus on theoretical properties of the proposed graph scattering transforms, and show that such transforms are invariant to graph isomorphism. The geometric scattering transform that we define here also possesses the same invariance property, and we expect similar stability properties to hold for the proposed construction as well. However, in this paper we focus mainly on the practical applicability of geometric scattering transforms for graph-structured data analysis, with particular emphasis on the task of graph classification, which has received much attention recently in geometric deep learning (see Sec. 4) In supervised graph classification problems one is given a training database of graph/label pairs DISPLAYFORM0 ⊂ G × Y sampled from a set of potential graphs G and potential labels Y. The goal is to use the training data to learn a model f: G → Y that associates to any graph G ∈ G a label y = f (G) ∈ Y. These types of databases arise in biochemistry, in which the graphs may be molecules and the labels some property of the molecule (e.g., its toxicity), as well as in various types of social network databases. Until recently, most approaches were kernel based methods, in which the model f was selected from the reproducing kernel Hilbert space generated by a kernel that measures the similarity between two graphs; one of the most successful examples of this approach is the Weisfeiler-Lehman graph kernel of BID37. Numerous feed forward deep learning algorithms, though, have appeared over the last few years. In many of these algorithms, task based (i.e., dependent upon the labels Y) graph filters are learned from the training data as part of the larger network architecture. 
These filters act on a characteristic signal x G that is defined on the vertices of any graph G, e.g., x G may be a vector of degrees of each vertex (we remark there are also edge based algorithms, such as BID20 and references within, but these have largely been developed for and tested on databases not considered in Sec. 4). Here, we propose an alternative to these methods in the form of a geometric scattering classifier (GSC) that leverages graph-dependent (but not label dependent) scattering transforms to map each graph G to the scattering features extracted from x G. Furthermore, inspired by transfer learning approaches such as BID33, we consider treatment of our scattering cascade as frozen layers on x G, either followed by fully connected classification layers (see FIG2), or fed into other classifiers such as SVM or logistic regression. We note that while the formulation in Sec. 3 is phrased for a single signal x G, it naturally extends to multiple signals by concatenating their scattering features. In Sec. 4.1 we evaluate the quality of the scattering features and ing classification by comparing it to numerous graph kernel and deep learning methods over 13 datasets (7 biochemistry ones and 6 social network ones) commonly studied in related literature. In terms of classification accuracy on individual datasets, we show that the proposed approach obtains state of the art on two datasets and performs competitively on the rest, despite only learning a classifier that come after the geometric scattering transform. Furthermore, while other methods may excel on specific datasets, when considering average accuracy: within social network data, our proposed GSC outperforms all other methods; in biochemistry or over all datasets, it outperforms nearly all feed forward neural network approaches, and is competitive with state of the art of graph kernels BID26 and graph recurrent neural networks BID41. We regard this as crucial in establishing the universality of graph features extracted by geometric scattering, as they provide an effective task-independent representation of analyzed graphs. Finally, to establish their unsupervised qualities, in Sec. 4.2 we use geometric scattering features extracted from enzyme data BID2 to infer emergent patterns of enzyme commission (EC) exchange preferences in enzyme evolution, validated with established knowledge from BID12. We define graph wavelets as the difference between lazy random walks that have propagated at different time scales, which mimics classical wavelet constructions found in BID29 as well as more recent constructions found in BID11. The underpinnings for this construction arise out of graph signal processing, and in particular the properties of the graph Laplacian. Let G = (V, E, W) be a weighted graph, consisting of n vertices V = {v 1, . . ., v n}, edges E ⊆ {(v, v m): 1 ≤, m ≤ n}, and weights W = {w(v, v m) > 0: (v, v m) ∈ E}. Note that unweighted graphs are considered as a special case, by setting w (v, v m DISPLAYFORM0 and zero otherwise, where we use the notation A(v, v m) to denote the (, m) entry of the matrix A so as to emphasize the correspondence with the vertices in the graph and to reserve sub-indices for enumerating objects. Define the (weighted) degree of vertex v as DISPLAYFORM1 The graph Laplacian is a symmetric, real valued positive semi-definite matrix, and thus has n nonnegative eigenvalues. Furthermore, if we set 0 = (0, . . ., 0)T to to be the n × 1 vector of all zeroes, and 1 = (1, . . 
., 1)T to be the analogous vector of all ones, then it is easy to see that L1 = 0. Therefore 0 is an eigenvalue of L and we write the n eigenvalues of L as 0 = λ 0 ≤ λ 1 ≤ · · · ≤ λ n−1 with corresponding n × 1 orthonormal eigenvectors 1/ √ n = ϕ 0, ϕ 1,..., ϕ n−1. If the graph G is connected, then λ 1 > 0. In order to simplify the following discussion we assume that this is the case, although the discussion below can be amended to include disconnected graphs as well. Since ϕ 0 is constant and every other eigenvector is orthogonal to ϕ 0, it is natural to view the eigenvectors ϕ k as the Fourier modes of the graph G, with a frequency magnitude √ λ k. Let x: V → R be a signal defined on the vertices of the graph G, which we will consider as an n × 1 vector with entries x(v). It follows that the Fourier transform of x can be defined as x(k) = x · ϕ k, where x · y is the standard dot product. This analogy is one of the foundations of graph signal processing and indeed we could use this correspondence to define wavelet operators on the graph G, as in BID22. Rather than follow this path, though, we instead take a related path similar to BID11; BID17 by defining the graph wavelet operators in terms of random walks defined on G, which will avoid diagonalizing L and will allow us to control the "spatial" graph support of the filters directly. Define the n × n transition matrix of a lazy random random walk as P = 1 2 D −1 A + I. Note that the row sums of P are all one and thus the entry P(v, v m) corresponds to the transition probability of walking from vertex v to v m in one step. Powers of P run the random walk forward, so that in particular P t (v, v m) is the transition probability of walking from v to v m in exactly t steps. We will use P as a left multiplier, in which case P acts a diffusion operator. To understand this idea more precisely, first note that a simple calculation shows that P1 = 1 and furthermore if the graph G is connected, every other eigenvalue of P is contained in. Note in particular that L and P share the eigenvector 1. It follows that P t x responds most significantly to the zero frequency x of x while depressing the non-zero frequencies of x (where the frequency modes are defined in terms of the graph Laplacian L, as described above). On the spatial side, the value P t x(v) is the weighted average of x(v) with all values x(v m) such that v m is within t steps of v in the graph G.High frequency responses of x can be recovered in multiple different fashions, but we utilize multiscale wavelet transforms that group the non-zero frequencies of G into approximately dyadic bands. As shown in Mallat (2012, Lemma 2.12), wavelet transforms are provably stable operators in the Euclidean domain, and the proof of Zou & Lerman (2018, Theorem 5.1) indicates that similar on graphs may be possible. Furthermore, the multiscale nature of wavelet transforms will allow the ing geometric scattering transform (Sec. 3) to traverse the entire graph G in one layer, which is valuable for obtaining global descriptions of G. Following BID11, define the n × n diffusion wavelet matrix at the scale 2 j as DISPLAYFORM2 Since P t 1 = 1 for every t, we see that Ψ j 1 = 0 for each j ≥ 1. Thus each Ψ j x partially recovers x(k) for k ≥ 1. The value Ψ j x(v) aggregates the signal information x(v m) from the vertices v m. Instead, it responds to sharp transitions or oscillations of the signal x within the neighborhood of v with radius 2 j (in terms of the graph path distance). 
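A short sketch of the lazy random walk and the dyadic diffusion wavelet filters discussed above may help fix the construction. It assumes a dense NumPy adjacency matrix with no isolated vertices, and the exact wavelet indexing convention (here Ψ_j = P^(2^(j−1)) − P^(2^j) for j = 1..J) is our reading of the garbled definition rather than a quote of the paper's equation.

```python
import numpy as np

def lazy_random_walk(A):
    """P = 0.5 * (D^{-1} A + I) for a (weighted) adjacency matrix A."""
    d = A.sum(axis=1)                                   # (weighted) vertex degrees
    return 0.5 * (A / d[:, None] + np.eye(A.shape[0]))

def diffusion_wavelets(A, J):
    """Diffusion wavelet filters Psi_j = P^(2^(j-1)) - P^(2^j), j = 1..J (indexing assumed)."""
    P = lazy_random_walk(A)
    powers = {1: P}
    t = 1
    while t < 2 ** J:
        powers[2 * t] = powers[t] @ powers[t]           # square repeatedly: P^(2t) = (P^t)^2
        t *= 2
    return [powers[2 ** (j - 1)] - powers[2 ** j] for j in range(1, J + 1)]
```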
Generally, the smaller j the higher the frequencies Ψ j x recovers in x. These high frequency wavelet coefficients up to the scale 2 J are denoted by: DISPLAYFORM3 Since 2 J controls the maximum scale of the wavelet, in the experiments of Sec. 4 we select J such that 2 J ∼ diam(G). FIG0 plots the diffusion wavelets at different scales on two different graphs. A geometric wavelet scattering transform follows a similar construction as the (Euclidean) wavelet scattering transform of BID28, but leverages a graph wavelet transform. In this paper we utilize the wavelet transform defined in of the previous section, but remark that in principle any graph wavelet transform could be used (see, e.g., BID49 . In Sec. 3.1 we define the graph scattering transform, in Sec. 3.2 we discuss its relation to other recently proposed graph scattering constructions BID17 BID49, and in Sec. 3.3 we describe several of its desirable properties as compared to other geometric deep learning algorithms on graphs. Machine learning algorithms that compare and classify graphs must be invariant to graph isomorphism, i.e., re-indexations of the vertices and corresponding edges. A common way to obtain invariant graph features is via summation operators, which act on a signal x = x G that can be defined on any graph G, e.g., DISPLAYFORM0 The geometric scattering transform, which is described in the remainder of this section, follows such an approach. The simplest of such summation operators computes the sum of the responses of the signal x. As described in BID44, this invariant can be complemented by higher order summary statistics of x, the collection of which form statistical moments, and which are also referred to as "capsules" in that work. For example, the unnormalized q th moments of x yield the following "zero" order geometric scattering moments: DISPLAYFORM1 We can also replace with normalized (i.e., standardized) moments of x, in which case we store its mean (q = 1), variance (q = 2), skew (q = 3), kurtosis (q = 4), and so on. In the numerical experiments described in Sec. 4 we take Q = 2, 3, 4 depending upon the database, where Q is chosen via cross validation to optimize classification performance. Higher order moments are not considered as they become increasingly unstable, and we report for both normalized and unnormalized moments. In what follows we discuss the unnormalized moments, since their presentation is simpler and we use them in conjunction with fully connected layers (FCL) for classification purposes, but the same principles also apply to normalized moments (e.g., used with SVM and logistic regression in our classification ). The invariants Sx(q) do not capture the full variability of x and hence the graph G upon which the signal x is defined. We thus complement these moments with summary statistics derived from the wavelet coefficients of x, which in turn will lead naturally to the graph ConvNet structure of the geometric scattering transform. Observe, analogously to the Euclidean setting, that in computing Sx FORMULA3, which is the summation of x(v) over V, we have captured the zero frequency of x since DISPLAYFORM2. Higher order moments of x can incorporate the full range of frequencies in x, e.g. DISPLAYFORM3 2, but they are mixed into one invariant coefficient. We can separate and recapture the high frequencies of x by computing its wavelet coefficients Ψ (J) x, which were defined in. However, Ψ (J) x is not invariant to permutations of the vertex indices; in fact, it is covariant (or equivariant). 
Before summing the individual wavelet coefficient vectors Ψ j x, though, we must first apply a pointwise nonlinearity. Indeed, define the n × 1 vector d(v) = deg(v), and note that Ψ j x · d = 0 since one can show that d is a left eigenvector of P with eigenvalue 1. If G is a regular graph then d = c1 from which it follows that Ψ j x · 1 = 0. For more general graphs d(v) ≥ 0 for v ∈ V, which implies that for many graphs 1 · d will be the dominating coefficient in an expansion of 1 in an orthogonal basis containing d; it follows that in these cases |Ψ j x · 1| 1.We thus apply the absolute value nonlinearity, to obtain nonlinear covariant coefficients |Ψ (J) x| = {|Ψ j x| : 1 ≤ j ≤ J}. We use absolute value because it is covariant to vertex permutations, nonexpansive, and when combined with traditional wavelet transforms on Euclidean domains, yields a provably stable scattering transform for q = 1. Furthermore, initial theoretical in BID49; BID17 indicate that similar graph based scattering transforms possess certain types of stability properties as well. As in, we extract invariant coefficients from |Ψ j x| by computing its moments, which define the first order geometric scattering moments: DISPLAYFORM4 These first order scattering moments aggregate complimentary multiscale geometric descriptions of G into a collection of invariant multiscale statistics. These invariants give a finer partition of the frequency responses of x. For example, whereas Sx mixed all frequencies of x, we see that Sx(j, 2) only mixes the frequencies of x captured by the graph wavelet Ψ j.First order geometric scattering moments can be augmented with second order geometric scattering moments by iterating the graph wavelet and absolute value transforms, which leads naturally to the structure of a graph ConvNet. These moments are defined as: DISPLAYFORM5 which consists of reapplying the wavelet transform operator Ψ (J) to each |Ψ j x| and computing the summary statistics of the magnitudes of the ing coefficients. The intermediate covariant coefficients |Ψ j |Ψ j x|| and ing invariant statistics Sx(j, j, q) couple two scales 2 j and 2 j within the graph G, thus creating features that bind patterns of smaller subgraphs within G with patterns of larger subgraphs (e.g., circles of friends of individual people with larger community structures in social network graphs). The transform can be iterated additional times, leading to third order features and beyond, and thus has the general structure of a graph ConvNet. The collection of graph scattering moments Sx = {Sx(q), Sx(j, q), Sx(j, j, q)} (illustrated in FIG2) provides a rich set of multiscale invariants of the graph G. These can be used in supervised settings as input to graph classification or regression models, or in unsupervised settings to embed graphs into a Euclidean feature space for further exploration, as demonstrated in Sec. 4. In order to assess the utility of scattering features for representing graphs, two properties have to be considered: stability and capacity. First, the stability property aims to essentially provide an upper bound on distances between similar graphs that only differ by types of deformations that can be treated as noise. This property has been the focus of both BID49 and BID17, and in particular the latter shows that a diffusion scattering transform yields features that are stable to graph structure deformations whose size can be computed via the diffusion framework BID11 ) that forms the basis for their construction. 
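The zeroth-, first-, and second-order moments described above can be assembled into a single feature vector as follows. This is a minimal sketch using the wavelet filters from the previous snippet; unnormalized moments are used, the restriction to j' > j in the second order follows the usual convention and is an assumption here, and the code is illustrative rather than the authors' implementation.

```python
import numpy as np

def scattering_moments(x, wavelets, Q=4):
    """Geometric scattering moments Sx(q), Sx(j, q), Sx(j', j, q) for one graph signal x."""
    feats = []
    # Zeroth order: sum_v x(v)^q
    feats += [np.sum(x ** q) for q in range(1, Q + 1)]
    # First order: sum_v |Psi_j x(v)|^q
    first = [np.abs(Psi @ x) for Psi in wavelets]
    for u in first:
        feats += [np.sum(u ** q) for q in range(1, Q + 1)]
    # Second order: sum_v |Psi_j' |Psi_j x|(v)|^q, with j' > j (assumed convention)
    for j, u in enumerate(first):
        for jp in range(j + 1, len(wavelets)):
            v = np.abs(wavelets[jp] @ u)
            feats += [np.sum(v ** q) for q in range(1, Q + 1)]
    return np.array(feats)

# Usage sketch: concatenate scattering_moments(x, wavelets) over all graph signals x of a graph
# to obtain its graph-wide feature vector before classification.
```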
While there are some technical differences between the geometric scattering here and the diffusion scattering in BID17, these constructions are sufficiently similar that we can expect both of them to have analogous stability properties. Therefore, we mainly focus here on the complementary property of the scattering transform capacity to provide a rich feature space for representing graph data without eliminating informative variance in them. We note that even in the classical Euclidean case, while the stability of scattering transforms to deformations can be established analytically BID28, their capacity is typically examined by empirical evidence when applied to machine learning tasks (e.g., BID5 BID39 BID0 . Similarly, in the graph processing settings, we examine the capacity of our proposed geometric scattering features via their discriminaive power in graph data analysis tasks. In Sec. 4.1, we describe extensive numerical experiments for graph classification problems in which our scattering coefficients are utilized in conjunction with several classifiers, namely, fully connected layers (FCL, illustrated in FIG2), support vector machine (SVM), and logistic regression. We note that SVM classification over scattering features leads to state of the art on social network data, as well as outperforming all feed-forward neural network methods in general. Furthermore, for biochemistry data (where graphs represent molecule structures), FCL classification over scattering features outperforms all other feed-forward neural networks, even though we only train the fully connected layers. Finally, to assess the scattering feature space for data representation and exploration, in Sec. 4.2 we examine its qualities when analyzing biochemistry data, with emphasis on enzyme graphs. We show that geometric scattering enables graph embedding in a relatively low dimensional Euclidean space, while preserving insightful properties in the data. Beyond establishing the capacity of our specific construction, these also indicate the viability of graph scattering transforms in general, as universal feature extractors on graph data, and complement the stability established in BID49 and BID17. DISPLAYFORM0 (a) Representative zeroth-, first-, and second-order cascades of the geometric scattering transform for an input graph signal x. The presented cascades, indexed by j, j, q, are collected together to form the set of scattering coefficients Sx defined in eqs.. DISPLAYFORM1 A d ja c e n c y m a tr ix: DISPLAYFORM2 Si gn al ve ct or: DISPLAYFORM3 Diffusion wavelets: DISPLAYFORM4 Fully connected layers: DISPLAYFORM5 We give a brief comparison of geometric scattering with other graph ConvNets, with particular interest in isolating the key principles for building accurate graph ConvNet classifiers. We begin by remarking that like several other successful graph neural networks, the graph scattering transform is covariant or equivariant to vertex permutations (i.e., commutes with them) until the final features are extracted. This idea has been discussed in depth in various articles, including BID25, so we limit the discussion to observing that the geometric scattering transform thus propagates nearly all of the information in x through the multiple wavelet and absolute value layers, since only the absolute value operation removes information on x. As in BID44, we aggregate covariant responses via multiple summary statistics (i.e., moments), which are referred to there as a capsule. 
In the scattering context, at least, this idea is in fact not new and has been previously used in the Euclidean setting for the regression of quantum mechanical energies in BID16 BID42 and texture synthesis in BID8. We also point out that, unlike many deep learning classifiers (graph included), a graph scattering transform extracts invariant statistics at each layer/order. These intermediate layer statistics, while necessarily losing some information in x (and hence G), provide important coarse geometric invariants that eliminate needless complexity in subsequent classification or regression. Furthermore, such layer by layer statistics have proven useful in characterizing signals of other types (e.g., texture synthesis in BID19 .A graph wavelet transform Ψ (J) x decomposes the geometry of G through the lens of x, along different scales. Graph ConvNet algorithms also obtain multiscale representations of G, but several works, including BID1 and, propagate information via a random walk. While random walk operators like P t act at different scales on the graph G, per the analysis in Sec. 2 we see that P t for any t will be dominated by the low frequency responses of x. While subsequent nonlinearities may be able to recover this high frequency information, the ing transform will most likely be unstable due to the suppression and then attempted recovery of the high frequency content of x. Alternatively, features derived from P t x may lose the high frequency responses of x, which are useful in distinguishing similar graphs. The graph wavelet coefficients Ψ (J) x, on the other hand, respond most strongly within bands of nearly non-overlapping frequencies, each with a center frequency k j that depends on Ψ j.Finally, graph labels are often complex functions of both local and global subgraph structure within G. While graph ConvNets are adept at learning local structure within G, as detailed in BID44 they require many layers to obtain features that aggregate macroscopic patterns in the graph. This is due in large part to the use of fixed size filters, which often only incorporate information from the neighbors of any individual vertex. The training of such networks is difficult due to the limited size of many graph classification databases (see Table 4 in Appendix D). Geometric scattering transforms have two advantages in this regard: (a) the wavelet filters are designed; and (b) they are multiscale, thus incorporating macroscopic graph patterns in every layer/order. To evaluate the proposed geometric scattering features, we test their effectiveness for graph classification on thirteen datasets commonly used for this task. Out of these, seven datasets contain biochemistry graphs that describe molecular structures of chemical compounds, as described in the following works that introduced them: NCI1 and NCI109, BID45 BID14. In these cases, each graph has several associated vertex features x that represent chemical properties of atoms in the molecule, and the classification is aimed to characterize compound properties (e.g., protein types). The other six datasets, which are introduced in BID46, contain social network data extracted from scientific collaborations (COLLAB), movie collaborations (IMDB-B & IMDB-M), and Reddit discussion threads (REDDIT-B, REDDIT-5K, REDDIT-12K). 
In these cases there are no inherent graph signals in the data, and therefore we compute general node characteristics (e.g., degree, eccentricity, and clustering coefficient) over them, as is considered standard practice in relevant literature (see, for example, BID44 In all cases, we iterate over all graphs in the database and for each one we associate graph-wide features by computing the scattering features of each of the available graph signals (provided or computed), and concatenating the features of all such signals. Then, the full scattering feature vectors of these graphs are passed to a classifier, which is trained from input labels, in order to infer the class for each graph. We consider three classifiers here: neural network with two/three fully connected hidden layers (FCL), SVM with RBF kernel, or logistic regression. We note that the scattering features (computed as described in Sec. 3) are based on either normalized or unnormalized moments over the entire graph. Here we used unnormalized moments for FCL, and normalized ones for other classifiers, but the difference is subtle and similar can be achieved for the other combinations. Finally, we also note that all technical design choices for configuring our geometric scattering or the classifiers were done as part of the cross validation described in Appendix E.We evaluate the classification of our three geometric scattering classification (GSC) settings using ten-fold cross validation (as explained in Appendix E) and compare them to 14 prominent methods for graph classification. Out of these, six are graph kernel methods, namely: WeisfeilerLehman graph kernels (WL, BID37, propagation kernel (PK, BID31, Graphlet kernels BID36, Random walks (RW, BID18, deep graph kernels (DGK, BID46, and Weisfeiler-Lehman optimal assignment kernels (WL-OA, BID26 . Seven other methods are recent geometric feed forward deep learning algorithms, namely: deep graph convolutional neural network , Graph2vec BID30, 2D convolutional neural networks (2DCNN, BID42, covariant compositional networks (CCN, BID24, Patchy-san (PSCN, BID32, with k = 10), diffusion convolutional neural networks (DCNN, BID1, and graph capsule convolutional neural networks (GCAPS-CNN, BID44 . Finally, one method is the recently introduced recurrent neural network autoencoder for graphs (S2S-N2N-PP, BID41 . Following the standard format of reported classification performances for these methods (per their respective references, see also Appendix A), our are reported in the form of average accuracy ± standard deviation (in percentages) over the ten crossvalidation folds. We remark here that many of them are not reported for all datasets, and hence, we mark N/A when appropriate. For brevity, the comparison is reported here in FIG4 in summarized form, as explained below, and in full in Appendix A.Since the scattering transform is independent of training labels, it provides universal graph features that might not be specifically optimal in each individual dataset, but overall provide stable classification . Further, careful examination of the of previous methods (feed forward algorithms in particular) shows that while some may excel in specific cases, none of them achieves the best in all reported datasets. Therefore, to compare the overall classification quality of our GSC methods with related methods, we consider average accuracy aggregated over all datasets, and within each field (i.e., biochemistry and social networks) in the following way. 
First, out of the thirteen datasets, classification on four datasets (NCI109, ENZYMES, IMDB-M, REDDIT-12K) are reported significantly less frequently than the others, and therefore we discard them and use the remaining nine for the aggregation. Next, to address reported values versus N/A ones, we set an inclusion criterion of 75% reported datasets for each method. This translates into at most one N/A in each individual field, and at most two N/A overall. For each method that qualifies for this inclusion criterion, we compute its average accuracy over reported values (ignoring N/A ones) within each field and over all datasets; this in up to three reported values for each method. The aggregated of our GSC and 13 of the compared methods appears in FIG4. These show that GSC (with SVM) outperforms all other methods on social network data, and in fact as shown Appendinx B, it achieves state of the art on two datasets of this type. Additionally, the aggregated shows that our GSC approach (with FCL or SVM) outperforms all other feed forward methods both on biochemsitry data and overall in terms of universal average accuracy 2. The CCN method is omitted from these aggregated , as its in BID24 are only reported on four biochemistry datasets. For completeness, detailed comparison of GSC with this method, which appears in FIG4, shows that our method outperforms it on two datasets while CCN outperforms GSC on the other two. Geometric scattering essentially provides a task independent representation of graphs in a Euclidean feature space. Therefore, it is not limited to supervised learning applications, and can be also utilized for exploratory graph-data analysis, as we demonstrate in this section. We focus our discussion on biochemistry data, and in particular on the ENZYMES dataset. Here, geometric scattering features can be considered as providing "signature" vectors for individual enzymes, which can be used to explore interactions between the six top level enzyme classes, labelled by their Enzyme Commission (EC) numbers BID2. In order to emphasize the properties of scattering-based feature extraction, rather than downstream processing, we mostly limit our analysis of the scattering feature space to linear operations such as principal component analysis (PCA).We start by considering the viability of scattering-based embedding for dimensionality reduction of graph data. To this end, we applied PCA to our scattering coefficients (computed from unnormalized moments), while choosing the number of principal components to capture 90% explained variance. In the ENZYMES case, this yields a 16 dimensional subspace of the full scattering features space. While the Euclidean notion of dimensionality is not naturally available in the original dataset, we note that graphs in it have, on average, 124.2 edges, 29.8 vertices, and 3 features per vertex, and therefore the effective embedding of the data into R 16 indeed provides a significant dimensionality reduction. Next, to verify the ing PCA subspace still captures sufficient discriminative information with respect to classes in the data, we compare SVM classification on the ing low dimensional vectors to the the full feature space; indeed, projection on the PCA subspace in only a small drop in accuracy from 56.85 ± 4.97 (full) to 49.83 ± 5.40 (PCA). Finally, we also consider the dimensionality of each individual class (with PCA and > 90% exp. variance) in the scattering feature space, as we expect scattering to reduce the variability in each class w.r.t. 
the full feature space. In the ENZYMES case, individual classes have PCA dimensionality ranging between 6 and 10, which is indeed significantly lower than the 16 dimensions of the entire PCA space. Appendix C summarizes these findings, and repeats the described procedure for two addi- tional biochemistry datasets (from BID45 to verify that these are not unique to the specific ENZYMES dataset, but rather indicate a more general trend for geometric scattering feature spaces. To further explore the scattering feature space, we now use it to infer relations between EC classes. First, for each enzyme e, with scattering feature vector v e (i.e., with Sx for all vertex features x), we compute its distance from class EC-j, with PCA subspace C j, as the projection distance: dist(e, EC-j) = v e − proj Sj v e. Then, for each enzyme class EC-i, we compute the mean distance of enzymes in it from the subspace of each EC-j class as D(i, j) = mean{dist(e, EC-j): e ∈ EC-i}. Appendix C summarizes these distances, as well as the proportion of points from each class that have their true EC as their nearest (or second nearest) subspace in the scattering feature space. In general, 48% of enzymes select their true EC as the nearest subspace (with additional 19% as second nearest), but these proportions vary between individual EC classes. Finally, we use these scatteringbased distances to infer EC exchange preferences during enzyme evolution, which are presented in FIG5 and validated with respect to established preferences observed and reported in BID12. We note that the there is observed independently from the ENZYMES dataset. In particular, the portion of enzymes considered from each EC is different between these data, since BID3 took special care to ensure each EC class in ENZYMES has exactly 100 enzymes in it. However, we notice that in fact the portion of enzymes (in each EC) that choose the wrong EC as their nearest subspace, which can be considered as EC "incoherence" in the scattering feature space, correlates well with the proportion of evolutionary exchanges generally observed for each EC in BID12, and therefore we use these as EC weights in FIG5. Our in FIG5 demonstrate that scattering features are sufficiently rich to capture relations between enzyme classes, and indicate that geometric scattering has the capacity to uncover descriptive and exploratory insights in graph data analysis, beyond the supervised graph classification from Sec 4.1. We presented the geometric scattering transform as a deep filter bank for feature extraction on graphs. This transform generalizes the scattering transform, and augments the theoretical foundations of geometric deep learning. Further, our evaluation on graph classification and data exploration show the potential of the produced scattering features to serve as universal representations of graphs. Indeed, classification with these features with relatively simple classifier models reaches high accuracy on most commonly used graph classification datasets, and outperforms both traditional and recent deep learning feed forward methods in terms of average classification accuracy over multiple datasets. We note that this might be partially due to the scarcity of labeled big data in this field, compared to more traditional ones (e.g., image or audio classification). However, this trend also correlates with empirical for the classic scattering transform, which excels in cases with low data availability. 
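For readers who want to reproduce the subspace analysis above, the following sketch fits one PCA subspace per EC class (keeping roughly 90% explained variance), and computes the projection distance dist(e, EC-j) and the mean distances D(i, j). It assumes scikit-learn and a matrix of per-graph scattering vectors; variable names and the centering choice are our assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def class_subspaces(features, labels, var=0.90):
    """Fit a PCA subspace per class, with enough components for ~90% explained variance."""
    return {c: PCA(n_components=var).fit(features[labels == c]) for c in np.unique(labels)}

def projection_distance(v, pca):
    """dist(e, EC-j) = || v_e - proj_{C_j}(v_e) || using the class PCA subspace."""
    v_hat = pca.inverse_transform(pca.transform(v[None]))[0]
    return np.linalg.norm(v - v_hat)

def mean_class_distances(features, labels, subspaces):
    """D(i, j) = mean over enzymes e in EC-i of dist(e, EC-j)."""
    classes = sorted(subspaces)
    D = np.zeros((len(classes), len(classes)))
    for a, ci in enumerate(classes):
        Vi = features[labels == ci]
        for b, cj in enumerate(classes):
            D[a, b] = np.mean([projection_distance(v, subspaces[cj]) for v in Vi])
    return D
```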
Finally, the geometric scattering features provide a new way for computing and considering global graph representations, independent of specific learning tasks. Therefore, they raise the possibility of embedding entire graphs in Euclidean space and computing meaningful distances between graphs with them, which can be used for both supervised and unsupervised learning, as well as exploratory analysis of graph-structured data. APPENDIX A FULL COMPARISON TABLE DISPLAYFORM0 DISPLAYFORM1 The details of the datasets used in this work are as follows (see the main text in Sec. 3 for references):NCI1 contains 4,110 chemical compounds as graphs, with 37 node features. Each compound is labeled according to is activity against non-small cell lung cancer and ovarian cancer cell lines, and these labels serve as classification goal on this data. NCI109 is similar to NCI1, but with 4,127 chemical compounds and 38 node features. MUTAG consists of 188 mutagenic aromatic and heteroaromatic nitro compounds (as graphs) with 7 node features. The classification here is binary (i.e., two classes), based on whether or not a compound has a mutagenic effect on bacterium. PTC is a dataset of 344 chemical compounds (as graphs) with nineteen node features that are divided into two classes depending on whether they are carcinogenic in rats. PROTEINS dataset contains 1,113 proteins (as graphs) with three node features, where the goal of the classification is to predict whether the protein is enzyme or not. D&D dataset contains 1,178 protein structures (as graphs) that, similar to the previous one, are classified as enzymes or non-enzymes. ENZYMES is a dataset of 600 protein structures (as graphs) with three node features. These proteins are divided into six classes of enzymes (labelled by enzyme commission numbers) for classification. COLLAB is a scientific collaboration dataset contains 5K graphs. The classification goal here is to predict whether the graph belongs to a subfield of Physics. IMDB-B is a movie collaboration dataset with contains 1K graphs. The graphs are generated on two genres: Action and Romance, the classification goal is to predict the correct genre for each graph. IMDB-M is similar to IMDB-B, but with 1.5K graphs & 3 genres: Comedy, Romance, and Sci-Fi. REDDIT-B is a dataset with 2K graphs, where each graph corresponds to an online discussion thread. The classification goal is to predict whether the graph belongs to a Q&A-based community or discussion-based community. REDDIT-5K consists of 5K threads (as graphs) from five different subreddits. The classification goal is to predict the corresponding subreddit for each thread. REDDIT-12K is similar to REDDIT-5k, but with 11,929 graphs from 12 different subreddits. Table 4 summarizes the size of available graph data (i.e., number of graphs, and both max & mean number of vertices within graphs) in these datasets, as previously reported in the literature. Graph signals for social network data: None of the social network datasets has ready-to-use node features. Therefore, in the case of COLLAB, IMDB-B, and IMDB-M, we use the eccentricity, degree, and clustering coefficients for each vertex as characteristic graph signals. In the case of REDDIT-B, REDDIT-5K and REDDIT-12K, on the other hand, we only use degree and clustering coefficient, due to presence of disconnected graphs in these datasets. Software & hardware environment: Geometric scattering and related classification code were implemented in Python with TensorFlow. 
All experiments were performed on HPC environment using an intel16-k80 cluster, with a job requesting one node with four processors and two Nvidia Tesla k80 GPUs.
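As a companion to the "Graph signals for social network data" note above, the sketch below computes the per-vertex degree, clustering coefficient, and (for connected graphs) eccentricity signals with NetworkX. It is an illustrative helper under those assumptions, not the authors' code; the connectivity check mirrors the decision to drop eccentricity for the REDDIT datasets, which contain disconnected graphs.

```python
import networkx as nx
import numpy as np

def social_graph_signals(G, use_eccentricity=True):
    """Stack characteristic vertex signals for a social network graph G."""
    nodes = list(G.nodes())
    deg = np.array([G.degree(v) for v in nodes], dtype=float)
    clust = np.array([nx.clustering(G, v) for v in nodes], dtype=float)
    signals = [deg, clust]
    if use_eccentricity and nx.is_connected(G):
        ecc = nx.eccentricity(G)                 # only defined for connected graphs
        signals.append(np.array([ecc[v] for v in nodes], dtype=float))
    return np.stack(signals, axis=1)             # shape: (n_vertices, n_signals)
```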
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SygK6sA5tX
We present a new feed forward graph ConvNet based on generalizing the wavelet scattering transform of Mallat, and demonstrate its utility in graph classification and data exploration tasks.
OCR is inevitably linked to NLP since its final output is in text. Advances in document intelligence are driving the need for a unified technology that integrates OCR with various NLP tasks, especially semantic parsing. Since OCR and semantic parsing have been studied as separate tasks so far, the datasets for each task on their own are rich, while those for the integrated post-OCR parsing tasks are relatively insufficient. In this study, we publish a consolidated dataset for receipt parsing as the first step towards post-OCR parsing tasks. The dataset consists of thousands of Indonesian receipts, which contains images and box/text annotations for OCR, and multi-level semantic labels for parsing. The proposed dataset can be used to address various OCR and parsing tasks. Optical character recognition (OCR) is a technique for converting images of characters into digitized texts. Recently, deep learning in computer vision domain has significantly improved the performances of OCR. Nonetheless, there is still huge room for improvement, especially concerning the tasks simultaneously linked to natural language processing (NLP) as well. In particular, post-OCR parsing is currently one of the most important, yet challenging problems in both OCR and NLP community. The goal of post-OCR parsing is to predict pre-defined semantic labels from the given OCR. Researchers from both domains have long tried to tackle the problem and collected a significant amount of data sets independently. However, since it is a specialized task, the datasets contain critical limitations to provide proper supervision. The OCR datasets typically do not have parsing class labels for the extracted texts. The parsing datasets usually contain error-free and well-ordered digitized texts in contrast to the erroneous outcomes from OCR process. We can add synthetic noise to the parsing data, but the distribution and error patterns could be different from the OCR errors, which would inevitably lead to the degradation of generalization performance. Over the past few years, a few post-OCR parsing datasets have been made public through post OCR challenges. For example, ICDAR 2019 Post-OCR Challenge introduced the Scanned Receipts OCR and Information Extraction (SROIE) dataset. It provides receipt images of texts and two types of annotations for OCR and parsing problem: box-level text annotations for OCR, and document-level parse annotations for parsing. Although the availability of both OCR and parsing information have given rise to active research within the field, it still possesses some shortcomings, e.g., limited data size and lack of box-level parsing annotations. Considering that only hundreds of samples are provided in the SROIE dataset, weak document-level annotations could not provide enough supervision for training a model with satisfactory performance. In this paper, we introduce a novel dataset called CORD, which stands for a Consolidated Receipt Dataset for post-OCR parsing. To the best of our knowledge, this is the first publicly available dataset which includes both box-level text and parsing class annotations. The parsing class labels are provided in two-levels. The eight superclasses include store, payment, menu, subtotal, and total. The eight superclasses are subdivided into 54 subclasses e.g., store has nine subclasses including name, address, telephone, and fax. Furthermore, it also provides line annotations for the serialization task which is a newly emerging problem as a combination of the two tasks. 
Current semantic parsing techniques can handle only well-ordered texts. Texts obtained by OCR, however, are in two-dimensional space, thus we need an appropriate serialization technique for mapping obtained texts into one-dimensional space. In our experiments, serialization has a significant impact on parsing performance. To recapitulate briefly, the key contributions of our paper are as follows: • We introduce a novel and large-scale receipt dataset that can be used for OCR and parsing tasks, from task-specific to end-to-end. • Our dataset provides multi-level labels for weakly and strongly supervised parsing tasks. The dataset and descriptions will be available on https://github.com/clovaai/cord at the time of publication. 2 Data Acquisition We collected over 11,000 Indonesian receipt images through crowd-sourcing. Receipts have been obtained from shops and restaurants. In general, crowd-sourcing involves providing a guideline and annotation tool for crowd workers. For making guidelines, we first sampled hundreds of receipts and analyzed their common structures. The sampled receipts are carefully examined and then dominant and useful parse categories were defined. The main parse classes consist of store information, payment information, menu, void menu, subtotal, void total, total. Since these structures are not easily perceptible for the human mind, we carried out pilot annotation and redesigned parse categories repeatedly. Finally, we created two guidelines for the OCR and parse annotation. For more efficient annotation, we developed a web-based annotation tool and provided it to crowd workers. The annotation tool has two types of user accounts: annotator and inspector. When the annotator completes the annotation on an image, the inspector checks the annotation and determines whether the annotator complies with the guidelines or not. If the annotation is not done properly, the corresponding image is reassigned to another annotator. Although a relatively small amount, receipts may have sensitive information, e.g., customer's name, number of credit/debit card, transaction date/time. We thoroughly inspected the image and then removed the information by blurring in the image and deleting the corresponding field in the JSONformatted file. 3 Data Specification The receipts dataset consists of more than 11,000 image and JSON pairs. An example of image and json pair is shown in Figure 1. The ground truth has three main attributes, meta, the region of The valid line field has crucial information for post-OCR parsing. The quad field contains four coordinates of quadrilateral, and the text field has the incorporating text of the corresponding box. quad and text fields are used for OCR detection/localization and recognition task, respectively. Note that only optically identifiable text instances are annotated. For parsing tasks, there are three additional fields, category, is_key, and row_id. The category indicates parse class label. Note that the details of parse classes are explained on the Section 3.2. The row_id is an index of the line. The text instances which have the same row_id are on the same line. As represented in Figure 1, BORNGA and EXPRESS have the same row_id since they are placed next to each other. The is_key flag is used to identify words that act as a key to other text elements. For example, the is_key value of the text BORNGA is 0 since it does not act as a key but a value. 
On the other hand, the text Total has an is_key value of 1 because it acts as a key to another text element, 45,500. We present dataset statistics in Table 1. The dataset consists of eight superclasses: store, payment, menu, void menu, subtotal, void total, total, and etc. These eight superclasses are divided into 54 subclasses; e.g., the superclass menu has 16 subclasses, including menu name, quantity, unit price, discount price, and submenu. Figure 2 shows the top 20 class labels from 1,000 randomly sampled data examples. The most frequent class labels belong to menu, especially menu name, price, and count. In this paper, we introduce a novel receipt dataset for a unified OCR-parsing task. The proposed dataset can be exploited not only for each task-specific problem but also for end-to-end approaches. A receipt is a complex type of document. It contains many numbers and symbols as well as plain text, and it also has a complex text layout, such as a table-shaped layout. Besides, receipt images acquired through the OCR process have various types of optical noise originating from wrinkles and warpage. Due to these characteristics, the proposed dataset is more suitable for the OCR-parsing task than synthetically generated datasets in terms of generalization performance. As mentioned in Section 3.2, our dataset provides multi-level class labels for eight superclasses and 54 subclasses. It can be used to carry out weakly supervised learning as well as strongly supervised learning. Note that we will first release 1,000 samples, which consist of 800 (train), 100 (dev), and 100 (test) examples, and the remaining data will be published in sequence. The exposure of sensitive information can cause legal problems, so we will carefully examine the data and remove such information.
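To make the annotation schema described in Section 3.1 concrete, the snippet below shows what a single valid-line entry could look like. The field nesting, exact key names, and all values are illustrative assumptions based on the description above (quad, text, row_id, category, is_key), not an actual record from the released files.

```python
# Illustrative (not actual) CORD-style ground-truth entry for one OCR'd text box.
valid_line_example = {
    "quad": {"x1": 110, "y1": 52, "x2": 245, "y2": 52,
             "x3": 245, "y3": 88, "x4": 110, "y4": 88},   # four corners of the quadrilateral
    "text": "BORNGA",          # the text contained in the box (used for recognition)
    "row_id": 0,               # boxes on the same physical receipt line share a row_id
    "category": "store.name",  # superclass/subclass parse label (naming assumed)
    "is_key": 0,               # 1 if the token is a key (e.g., "Total"), 0 if it is a value
}
```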
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SJl3z659UH
We introduce a large-scale receipt dataset for post-OCR parsing tasks.
The growth in the complexity of Convolutional Neural Networks (CNNs) is increasing interest in partitioning a network across multiple accelerators during training and pipelining the backpropagation computations over the accelerators. Existing approaches avoid or limit the use of stale weights through techniques such as micro-batching or weight stashing. These techniques either underutilize accelerators or increase memory footprint. We explore the impact of stale weights on statistical efficiency and performance in a pipelined backpropagation scheme that maximizes accelerator utilization and keeps memory overhead modest. We use 4 CNNs (LeNet-5, AlexNet, VGG and ResNet) and show that when pipelining is limited to early layers in a network, training with stale weights converges and results in models with inference accuracies comparable to those of non-pipelined training on the MNIST and CIFAR-10 datasets; a drop in accuracy of 0.4%, 4%, 0.83% and 1.45% for the 4 networks, respectively. However, when pipelining is deeper in the network, inference accuracies drop significantly. We propose combining pipelined and non-pipelined training in a hybrid scheme to address this drop. We demonstrate the implementation and performance of our pipelined backpropagation in PyTorch on 2 GPUs using ResNet, achieving speedups of up to 1.8X over a 1-GPU baseline, with a small drop in inference accuracy. Modern Convolutional Neural Networks (CNNs) have grown in size and complexity to demand considerable memory and computational resources, particularly for training. This growth sometimes makes it difficult to train an entire network with a single accelerator. Instead, the network is partitioned among multiple accelerators, typically by partitioning its layers among the available accelerators, as shown in Figure 1 for an example 8-layer network. The 8 layers are divided into 4 computationally-balanced partitions, P 0 ...P 3, and each partition is mapped to one of the 4 accelerators, A 0 ...A 3. Each accelerator is responsible for the computations associated with the layers mapped to it. However, the nature of the backpropagation algorithm used to train CNNs is that the computations of a layer are performed only after the computations of the preceding layer in the forward pass of the algorithm, and only after the computations of the succeeding layer in the backward pass. Further, the computations for one batch of input data are only performed after the computations of the preceding batch have updated the parameters (i.e., weights) of the network. These dependences underutilize the accelerators, as shown by the space-time diagram in Figure 2; only one accelerator can be active at any given point in time. The underutilization of accelerators can be alleviated by pipelining the computations of the backpropagation algorithm over the accelerators, that is, by overlapping the computations of different input data batches using the multiple accelerators. However, pipelining causes an accelerator to potentially use weights that are yet to be updated by an accelerator further down in the pipeline. The use of such stale weights can negatively affect the statistical efficiency of the network, preventing the convergence of training or producing a model with lower inference accuracy.
Common wisdom is that the use of stale weights must either be avoided, e.g., through micro-batching, be constrained to ensure the consistency of the weights within an accelerator using weight stashing, or be limited to very small networks. However, these approaches either underutilize accelerators or inflate memory usage to stash multiple copies of weights. In this paper we question this common wisdom and explore pipelining that allows for the full utilization of accelerators while using stale weights. This results in a pipelining scheme that, compared to existing schemes, is simpler to implement, fully utilizes the accelerators, and has lower memory overhead. We evaluate this pipelining scheme using 4 CNNs: LeNet-5 (trained on MNIST), AlexNet, VGG and ResNet (all trained on CIFAR-10). We analyze the impact of weight staleness and show that if pipelining is limited to early layers in the network, training does converge and the quality of the resulting models is comparable to that of models obtained with non-pipelined training. For the 4 networks, the drop in accuracy is 0.4%, 4%, 0.83% and 1.45%, respectively. However, inference accuracies drop significantly when the pipelining is deeper in the network. While this is not a limitation, since the bulk of computations that can benefit from pipelining are in the early convolutional layers, we address the drop through a hybrid scheme that combines pipelined and non-pipelined training to maintain inference accuracy while still delivering performance improvement. Evaluation shows that our pipelined training delivers a speedup of up to 1.8X on a 2-GPU system. The remainder of this paper is organized as follows. Section 2 briefly describes the backpropagation algorithm for training CNNs. Section 3 details our pipelining scheme. Section 4 describes how non-pipelined and pipelined backpropagation are combined. Section 5 highlights some of the implementation details. Experimental evaluation is presented in Section 6. Related work is reviewed in Section 7. Finally, Section 8 gives concluding remarks and directions for future work. The backpropagation algorithm consists of two passes: a forward pass that calculates the output error and a backward pass that calculates the error gradients and updates the weights of the network. The two passes are performed for input data one mini-batch at a time. In the forward pass, a mini-batch is fed into the network, propagating from the first to the last layer. At each layer l, the activations of the layer, denoted by x^(l), are computed using the weights of the layer, denoted by W^(l). When the output of the network (layer L), x^(L), is produced, it is used with the true data label to obtain a training error e for the mini-batch. In the backward pass, the error e is propagated from the last to the first layer. The error gradients with respect to the pre-activations of layer l, denoted by δ^(l), are calculated. Further, the error gradients with respect to the weights of layer l, ∂e/∂W^(l), are computed using the activations from layer l − 1 (i.e., x^(l−1)) and δ^(l). Subsequently, δ^(l) is used to calculate δ^(l−1). When ∂e/∂W^(l) has been computed for every layer, the weights are updated using the error gradients. In the forward pass, the activations of layer l, x^(l), cannot be computed until the activations of the previous layer, i.e., x^(l−1), are computed. In the backward pass, ∂e/∂W^(l) can only be computed once x^(l−1) and δ^(l) have been computed. Moreover, δ^(l) depends on δ^(l+1).
Finally, for a given mini-batch the backward pass cannot be started until the forward pass is completed and the error e has been determined. The above dependences ensure that the weights of the layers are updated using the activations and error gradients calculated from the same batch of training data in one iteration of the backpropagation algorithm. Only when the weights are updated is the next batch of training data fed into the network. These dependences limit parallelism when a network is partitioned across multiple accelerators and allow only one accelerator to be active at any point. This results in under-utilization of the accelerators. It is this limitation that pipelining addresses. (Figure 3: Pipelined backpropagation algorithm.) We illustrate our pipelined backpropagation implementation with the L-layer network shown in Figure 3, using conceptual pipeline registers. Two registers are inserted between layers l and l + 1: one register for the forward pass and a second for the backward pass. The forward register stores the activations of layer l (x^(l)). The backward register stores the gradients δ^(l+1) of layer l + 1. This defines a 4-stage pipelined backpropagation. The forward pass for layers 1 to l forms forward stage FS 1. The forward pass for layers l + 1 to L forms forward stage FS 2. Similarly, the backward passes for layers l + 1 to L and 1 to l form backward stages BKS 1 and BKS 2, respectively. The forward and backward stages are executed in a pipelined fashion on 3 accelerators: one for FS 1, one for both FS 2 and BKS 1, and one for BKS 2. In cycle 0, mini-batch 0 is fed to FS 1. The computations of the forward pass are done as in the traditional non-pipelined implementation. In cycle 1, layer l activations x^(l) are fed to FS 2 and mini-batch 1 is fed to FS 1. In cycle 2, the error for mini-batch 0 computed in FS 2 is directly fed to BKS 1, the activations of layer l, x^(l), are forwarded to FS 2, and mini-batch 2 is fed to FS 1. This pipelined execution is illustrated by the space-time diagram in Figure 4 for 5 mini-batches. The figure depicts the mini-batch processed by each accelerator in cycles 0 to 6. At steady state, all the accelerators are active in each cycle of execution. The above pipelining scheme utilizes weights in FS 1 that are yet to be updated by the errors calculated by FS 2 and BKS 1. At steady state, the activations of a mini-batch in FS 1 are calculated using weights that are 2 execution cycles old, or 2 cycles stale. This is reflected in Figure 4 by indicating the weights used by each forward stage and the weights updated by each backward stage. The weights of a forward stage are subscripted by how stale they are (negative subscripts). Similarly, the weights updated by a backward stage are subscripted by how delayed they are (positive subscripts). Further, since the updates of the weights by BKS 2 require activations calculated for the same mini-batch in FS 1 for all layers in the stage, it is necessary to save these activations until the error gradients with respect to the weights are calculated by BKS 2. Only when the weights are updated using the gradients can these activations be discarded. In the general case, we use K pairs of pipeline registers (each pair consisting of a forward register and a backward register) inserted between the layers of the network. We describe the placement of the register pairs by the Pipeline Placement Vector, PPV = (p 1, p 2, ..., p K), where p i represents the layer number after which a pipeline register pair is inserted.
Such a placement creates (K + 1) forward stages, labeled FS i, i = 1, 2,..., K + 1, and (K + 1) backward stages, labeled BKS i, i = 1, 2,..., K + 1. Forward stage FS i and backward stage BKS K−i+2 correspond to the same set of layers. Specifically, stage FS i contains layers p i + 1 to p i+1, inclusive. We assign each forward stage and each backward stage to an accelerator, with the exception of forward stage FS K+1 and backward stage BKS 1, which are assigned to the same accelerator to reduce weight staleness by an execution cycle. In total, 2K + 1 accelerators are used. We quantify weight staleness as follows. A forward stage FS i and backward stage BKS K−i+2 use the same weights, which are 2(K − i + 1) cycles old. A forward stage FS i must also store the activations of each mini-batch it processes until the corresponding backward stage BKS K−i+2 uses them to compute the weight gradients. On the one hand, the above pipelined execution allows a potential speedup of 2K + 1 over the non-pipelined implementation, keeping all the accelerators active at steady state. On the other hand, the use of stale weights may prevent training convergence or may result in a model that has an inferior inference accuracy. Further, it requires an increase in storage for activations. Our goal is to assess the benefit of this pipelined execution and the impact of its downsides. Hybrid training combines pipelined training with non-pipelined training. We start with pipelined training and, after a number of iterations, switch to non-pipelined training. This can address drops in the inference accuracy of the resulting models caused by weight staleness, but it reduces the performance benefit, since during non-pipelined training the accelerators are under-utilized. The extent of the speedup obtained by hybrid training with a given number of accelerators is determined by the number of iterations used for pipelined and non-pipelined training. Assume that n_np iterations are used to reach the best inference accuracy for non-pipelined training, and that in hybrid training, n_p iterations (n_p ≤ n_np) are pipelined, followed by n_np − n_p iterations of non-pipelined training to reach the same inference accuracy as non-pipelined training. The speedup of hybrid training with respect to non-pipelined training with 2K + 1 accelerators is n_np / (n_p/(2K + 1) + (n_np − n_p)). For large K, following Amdahl's law, the speedup approaches an upper bound of n_np/(n_np − n_p). We implement pipelined training in two ways: simulated in Caffe, where the whole training process is performed in one process with no parallelism, and actual pipelined training with parallelism across accelerators in PyTorch. The simulated execution is used to analyze statistical convergence, inference accuracy, and the impact of weight staleness unconstrained by parallelism and communication overhead. The actual execution is used to report performance, and PyTorch is used instead of Caffe to leverage its support for collective communication protocols and its flexibility in partitioning a network across multiple accelerators. Neither Caffe nor PyTorch has support for pipelined training, so both were extended to provide such support. We develop a custom Caffe layer in Python, which we call a Pipeline Manager Layer (PML), to facilitate the simulated pipelining. During the forward pass, a PML registers the input from the previous layer and passes the activation to the next layer. It also saves the activations for the layers connected to it, to be used in the backward pass. During the backward pass, a PML passes the appropriate error gradients.
It uses the corresponding activations saved during the forward pass to update weights and generate error gradients for the previous stage, using existing weight update mechanisms in Caffe. To implement actual hardware-accelerated pipelined training, we partition the network onto different accelerators (GPUs), each running its own process. Asynchronous sends and receives are used for data transfers, but all communication must go through the host CPU, since point-to-point communication between accelerators is not supported in PyTorch. This increases communication overhead. Similar to the PMLs in Caffe, the activations computed on one GPU are copied to the next GPU (via the CPU) in the forward pass, and the error gradients are sent (again via the CPU) to the preceding GPU during the backward pass. The GPUs run concurrently, achieving pipeline parallelism. Simulated pipelining is evaluated on a machine with one Nvidia GTX1060 GPU with 6 GB of memory and an Intel i9-7940X CPU with 64 GB of RAM. The performance of actual pipelining is evaluated using two Nvidia GTX1060 GPUs, each with 6 GB of memory, hosted in an Intel i7-9700K machine with 32 GB of RAM. We use four CNNs in our evaluation: LeNet-5, AlexNet, VGG-16 and ResNet, trained with minor variations to the hyperparameters, as described in the appendix. We evaluate the effectiveness of pipelined training in terms of its training convergence and its Top-1 inference accuracy, compared to those of non-pipelined training. We use the speedup to evaluate performance improvements. It is defined as the ratio of the training time of the non-pipelined implementation on a single, communication-free GPU to the training time of the pipelined training. 6.2 TRAINING CONVERGENCE AND INFERENCE ACCURACY Figure 5 shows the improvements in the inference accuracies for both pipelined and non-pipelined training as a function of the number of training iterations (each iteration corresponds to a mini-batch). The pipelined training is done using 4, 6, 8 and 10 stages. Table 1 shows where the registers are inserted in the networks using their PPV, defined in Section 3. Figure 5 shows that for all the networks, both pipelined and non-pipelined training have similar convergence patterns. They converge in more or less the same number of iterations for a given number of pipeline stages, albeit to different inference accuracies. This indicates that our approach to pipelined training with stale weights does converge, similar to non-pipelined training. Table 2 shows the inference accuracy obtained after up to 30,000 iterations of training. For LeNet-5, the inference accuracy drop is within 0.5%. However, for the other networks, there is a small drop in inference accuracy with 4 and 6 stages. AlexNet has about a 4% drop in inference accuracy, but for VGG-16 the inference accuracy drop is within 2.4%, and for ResNet-20 the accuracy drop is within 3.5%. Thus, the resulting model quality is comparable to that of a non-pipelining-trained model. However, with deeper pipelining (i.e., 8 and 10 stages), inference accuracy drops significantly. There is a 12% and an 8.5% inference accuracy drop for VGG-16 and ResNet-20, respectively. In this case, the model quality is not comparable to that of non-pipelined training. This confirms what is reported in the literature and can be attributed to the use of stale weights. Below we further explore the impact of stale weights on inference accuracy.
We wish to better understand the impact of the number of pipeline stages and the location of these stages in the network on inference accuracy. We focus on ResNet-20 because of its relatively small size and regular structure. It consists of 3 residual function groups with 3 residual function blocks within each group. In spite of this relatively small size and regular structure, it enables us to create pipelines with up to 20 stages by inserting pipeline register pairs within residual function blocks. We conduct two experiments. In the first, we increase the number of pipeline stages (from earlier layers to later layers) and measure the inference accuracy of the resulting model. The results are shown in Table 3, which gives the inference accuracy of pipelined training after 100,000 iterations as the number of pipeline stages increases. The 8-stage pipelined training is created by a PPV whose last register pair is placed after layer 7, and the subsequent pipeline schemes are created by adding pipeline register pairs after every 2 layers beyond layer 7. Clearly, the greater the number of stages, the worse the resulting model quality. The number of stale weights used in the pipelined training increases as the number of pipeline stages increases. Thus, Figure 6 depicts the inference accuracy as a function of the percentage of weights that are stale. The curve labeled "Increasing Stages" shows that the drop in inference accuracy increases as the percentage of stale weights increases. In the second experiment, we investigate the impact of the degree of staleness (Section 3). Only one pair of pipeline registers is inserted. The position of this register pair slides from the beginning of the network to its end. At every position, the percentage of stale weights remains the same as in the first experiment, but all stale weights have the same degree of staleness. The result of this experiment is shown by the curve labeled "Sliding Stage" in Figure 6. The curve shows that the inference accuracy also drops as the percentage of stale weights increases. However, it also indicates that the drop in inference accuracy remains more or less the same as in the first experiment, in which the degree of staleness is higher. Thus, the percentage of stale weights appears to be what determines the drop in inference accuracy, and not the degree of staleness of the weights. The percentage of stale weights is determined by where the last pair of pipeline registers is placed in the network. It is the position of this pair that determines the loss in inference accuracy. Therefore, it is desirable to place this last pair of registers as early as possible in the network so as to minimize the drop in inference accuracy. While at first glance this may seem to limit pipelining, it is important to note that the bulk of computations in a CNN is in the first few convolutional layers of the network. Inserting pipeline registers for these early layers can result in a large number of stages that are computationally balanced. For example, our profiling of the runtime of ResNet-20 shows that the first three residual functions take more than 50% of the training runtime. This favors more pipeline stages at the beginning of the network. Such placement has the desirable effect of reducing the drop in inference accuracy while obtaining relatively computationally balanced pipeline stages.
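The quantities used in this analysis can be made concrete with a short helper script. The sketch below is our own illustration, not the authors' code: it computes the per-stage staleness from Section 3, the fraction of stale weights implied by a PPV (all weights up to the last register pair), and the hybrid-training speedup from Section 4. The example PPV and layer weight counts are arbitrary.

```python
def staleness_per_stage(K: int) -> list:
    """Degree of staleness per forward stage: FS_i uses weights 2*(K - i + 1) cycles old,
    while FS_{K+1} (co-located with BKS_1) uses fresh weights."""
    return [2 * (K - i + 1) for i in range(1, K + 1)] + [0]

def stale_weight_fraction(ppv: list, weights_per_layer: list) -> float:
    """Fraction of weights trained with stale values: the layers up to the last register pair."""
    last_split = ppv[-1]                                  # layer index of the last register pair
    return sum(weights_per_layer[:last_split]) / sum(weights_per_layer)

def hybrid_speedup(n_np: int, n_p: int, K: int) -> float:
    """Speedup of n_p pipelined + (n_np - n_p) non-pipelined iterations on 2K+1 accelerators."""
    return n_np / (n_p / (2 * K + 1) + (n_np - n_p))

print(staleness_per_stage(3))                         # [6, 4, 2, 0] for an 8-stage pipeline (K = 3)
print(stale_weight_fraction([3, 5, 7], [10] * 20))    # 0.35: 7 of 20 equally sized layers are stale
print(hybrid_speedup(n_np=200, n_p=100, K=1))         # 1.5; approaches n_np/(n_np - n_p) = 2 as K grows
```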
This inference accuracy is compared to 30K iterations of either non-pipelined training or pipelined training with the corresponding PPV. The figure demonstrates that hybrid training converges in a manner similar to both pipelined and non-pipelined training. Table 4 shows the resulting inference accuracies. The table shows that the 20K+10K hybrid training produces a model with accuracy comparable to that of the non-pipelined model. Further, with an additional 10K iterations of non-pipelined training, the model quality is slightly better than that of the non-pipelined model. This demonstrates the effectiveness of hybrid training. We implement 4-stage pipelined training of ResNet-20/56/110/224/362 on a 2-GPU system. Each GPU is responsible for one forward stage and one backward stage. Thus, the maximum speedup that can be obtained is 2. We train every ResNet for 200 epochs. Table 5 shows the inference accuracies with and without pipelining, as well as the measured speedups of pipelined training over the non-pipelined one. The table indicates that the quality of the models produced by pipelined training is comparable to that achieved by the simulated pipelining in Caffe. The table further shows that speedup exists for all networks. Indeed, for ResNet-362, the speedup is 1.82X, which is equivalent to about 90% utilization for each GPU. The table also shows that the speedup improves as the networks get larger: with larger networks, the ratio of computation to communication overhead is higher, leading to better speedups. Moreover, we combine the 4-stage pipelined training described above with non-pipelined training to demonstrate the performance of hybrid training. We train every ResNet using pipelined training for 100 epochs and follow it with 100 epochs of non-pipelined training. Because the maximum speedup for the pipelined training is 2 and only half of the training epochs are accelerated, the maximum speedup for this hybrid training is s = t/(t/2 + t/4) = 1.33, where t is the training time of non-pipelined training. Table 5 shows the inference accuracies and speedups of the hybrid training for each ResNet and validates that hybrid training can produce a model quality comparable to the baseline non-pipelined training while speeding up the training process. As network size grows, the speedup reaches 1.29X, approaching the theoretical limit of 1.33X. Pipelined training requires the saving of intermediate activations, as described earlier in Section 3, leading to an increase in memory footprint. This increase in memory is a function not only of the placement of the pipeline registers, but also of the network architecture and the number of inputs in a mini-batch (batch size). We calculate the memory usage of the 4-stage pipelined ResNet training above to show that this increase is modest for our pipelining scheme. Specifically, we use torchsummary in PyTorch to report memory usage for weights and activations for a network and calculate the additional memory required by the additional copies of activations. The results are shown in Table 6. Assuming a batch size of 128, the percentage increase in size is close to 60%, except for ResNet-20. We compare our pipelined training scheme with two key existing systems: PipeDream and GPipe. We do so on three aspects: the pipelining scheme, performance, and memory usage. We believe that PipeDream and GPipe are representative of existing key approaches that implement pipelined training, including Decoupled Backpropagation (DDG) and Feature Replay (FR), discussed in Section 7.
Our pipelining scheme is simpler than that of PipeDream and GPipe in that we require neither weight stashing nor the division of mini-batches into micro-batches. This leads to less communication overhead and is amenable to rapid realization in machine learning frameworks such as PyTorch, or in actual hardware such as Xilinx's xDNN FPGA accelerators. Our pipelining scheme, like PipeDream's, eliminates the bubbles that exist in the pipeline, leading to better performance. For example, we obtain a speedup of 1.7X for ResNet-110 using 2 GPUs, in contrast to GPipe, which obtains a speedup of roughly 1.3X for ResNet-101 using 2 TPUs. We also obtain similar performance compared to PipeDream for similar networks. When the number of pipeline stages grows, pipeline bubbles have a more negative effect on performance, as shown by GPipe for a 4-partition pipelined ResNet-101 using 4 TPUs, whose bubble overhead doubles compared to that of the 2-partition pipelined ResNet-101. Our scheme uses less memory compared to PipeDream, although it introduces more memory overhead compared to GPipe. PipeDream saves intermediate activations during training, as we do. However, it also saves multiple copies of a network's weights for weight stashing. The memory footprint increase due to this weight stashing depends on the network architecture, including the number of weights and activations, as well as on the size of the mini-batch. For example, for VGG-16 trained on CIFAR-10 with a mini-batch size of 128 using 4-stage pipelined training, we estimate our pipelining methodology to use 49% less memory compared to PipeDream. Similarly, for VGG-16 trained on ImageNet with a mini-batch size of 32, our scheme uses 29% less memory. We estimate the memory increase due to weight stashing also using torchsummary. There has been considerable work that explores parallelism in the training of deep neural networks. There are several approaches to exploiting parallelism. One approach is to exploit data parallelism, in which each accelerator obtains a full copy of the model and processes different mini-batches of training data simultaneously. At the end of each training iteration, the gradients produced by all accelerators are aggregated and used to update the weights for all copies of the model, synchronously or asynchronously. A centralized parameter server is usually used to facilitate data communication. Although the training is performed in parallel, the communication overhead can be significant. A second approach is to exploit model parallelism. In this approach, a model is partitioned onto different accelerators. Each accelerator is only responsible for updating the weights of the portion of the model assigned to it. This approach is often used when a model is large and cannot fit into the memory of a single accelerator. However, because of the data dependences described in Section 2, only one accelerator is active during the training process, resulting in under-utilization of accelerator resources. Moreover, inter-layer activations and gradients across two consecutive stages need to be communicated during training, adding more overhead to the entire process. Pipelined parallelism addresses the under-utilization of accelerator resources for the training of large models. There have been a few studies that explore pipelined parallelism, which we review in this section. PipeDream implements pipelined training for large neural networks such as VGG-16, Inception-v3 and S2VT across multiple GPUs.
However, in their implementation, they limit the use of stale weights by weight stashing, i.e., keeping multiple versions of the network parameters (weights) during training. This increases the memory footprint of training. In contrast, we do not maintain multiple copies of weights during training, thereby reducing the memory footprint of pipelined training. GPipe implements a library in TensorFlow to enable pipelined parallelism for the training of large neural networks. GPipe pipelines micro-batches within each mini-batch to keep the gradients consistently accumulated. This eliminates the use of stale weights during training, but it does so at the expense of "pipeline bubbles" at steady state. GPipe utilizes these bubbles to reduce the memory footprint by re-computing forward activations instead of storing them. In contrast, our work has no pipeline bubbles and thus dedicates computing resources to computing the forward pass and backward pass only once during each training iteration. Huo et al. (b) implement decoupled backpropagation (DDG) using delayed gradient updates. They show that DDG guarantees convergence through a rigorous convergence analysis. Similar to PipeDream, DDG uses multiple copies of the weights and thus increases memory footprint. Further, DDG pipelines only the backward pass of training, leaving the forward pass un-pipelined. Huo et al. (a) follow up by proposing feature replay (FR), which re-computes activations during the backward pass, similar to GPipe, resulting in a smaller memory footprint and improved inference accuracy compared to DDG. In contrast, we pipeline both the forward and backward passes without maintaining multiple copies of weights or re-computing forward activations during the backward pass. Thus, in summary, our work contrasts with the above work on pipelined training in that we use pipelining with unconstrained stale weights, resulting in full pipeline utilization with a modest increase in memory usage. We extend earlier work by studying the impact of weight staleness on the quality of the model. We show that it is effective to use stale weights if the pipelining is in early layers, which is where the bulk of computations exist. Further, we extend earlier work through hybrid training, which combines both pipelined and non-pipelined training. We compare the performance and memory footprint increase of our scheme to existing work in Section 6.7. We evaluate pipelined execution of backpropagation for the training of CNNs in a way that fully utilizes accelerators, achieving a speedup of 1.82X on a 2-GPU system, and does not significantly increase memory usage, unlike previous work. We show that pipelined training with stale weights does converge. Further, we show that the inference accuracies of the resulting models are comparable to those of models obtained with traditional backpropagation, but only when pipelining is implemented in the early layers of the network, with an inference accuracy drop within 1.45% for 4-stage pipelined training, except for AlexNet. This does not limit the benefit of pipelining, since the bulk of computations is in the early convolutional layers. When pipelining is implemented deeper in the network, the inference accuracies do drop significantly, but we can compensate for this drop by combining pipelined with non-pipelined training, albeit with lower performance gains, obtaining model quality that is on average 0.19% better than the baseline inference accuracies for ResNets. This work can be extended in a number of directions.
One direction is to evaluate the approach with a larger number of accelerators, since pipelined parallelism is known to scale naturally with the number of accelerators. Another is to evaluate the approach on larger datasets, such as ImageNet. Finally, pipelined parallelism lends itself to hardware implementation. Thus, another direction for future work is to evaluate pipelined parallelism using Field Programmable Gate Array (FPGA) or ASIC accelerators. Appendix: Hyperparameters. LeNet-5 is trained on the MNIST dataset with Stochastic Gradient Descent (SGD) using a learning rate of 0.01 with an inverse learning-rate policy, a momentum of 0.9, a weight decay of 0.0005 and a mini-batch size of 100, for 30,000 iterations. The progression of inference accuracy during training is recorded with 300 tests. AlexNet is trained on the CIFAR-10 dataset with SGD with Nesterov momentum using a learning rate of 0.001 that is decreased by 10x twice during training, a momentum of 0.9, a weight decay of 0.004 and a mini-batch size of 100, for 250,000 iterations. One test is performed every epoch to record the progression of inference accuracy. VGG-16 is trained on the CIFAR-10 dataset with SGD with Nesterov momentum using a learning rate starting at 0.1 that is decreased by half every 50 epochs during training, a momentum of 0.9, a weight decay of 0.0005 and a mini-batch size of 100, for 250,000 iterations. Since VGG-16 is relatively more difficult to train than the other models, batch normalization and dropout are used during training throughout the network. One test is performed every epoch to record the progression of inference accuracy. ResNet is trained on the CIFAR-10 dataset with SGD using a learning rate starting at 0.1 and 0.01 for non-pipelined and pipelined training, respectively, that is decreased by 10x twice during training, a momentum of 0.9, a weight decay of 0.0001 and a mini-batch size of 128, for 100,000 iterations. Batch normalization is used during training throughout the network. One test is performed every 100 iterations to record the progression of inference accuracy. For the baseline non-pipelined training, ResNet-20/56/110/224/362 is trained on the CIFAR-10 dataset for 200 epochs with SGD using a learning rate of 0.1 that is decreased by a factor of 10 twice (at epochs 100 and 150), a momentum of 0.9, a weight decay of 0.0001 and a mini-batch size of 128. Batch normalization is used during training throughout the network. This set of hyperparameters can be found at https://github.com/akamaster/pytorch_resnet_cifar10. For the 4-stage pipelined training, the hyperparameters are the same as the non-pipelined baseline, except for the BKS 2 learning rate.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SkgTR3VFvH
Accelerating CNN training on a Pipeline of Accelerators with Stale Weights
Despite their impressive performance, deep neural networks exhibit striking failures on out-of-distribution inputs. One core idea of adversarial example research is to reveal neural network errors under such distribution shifts. We decompose these errors into two complementary sources: sensitivity and invariance. We show deep networks are not only too sensitive to task-irrelevant changes of their input, as is well-known from epsilon-adversarial examples, but are also too invariant to a wide range of task-relevant changes, thus making vast regions in input space vulnerable to adversarial attacks. We show such excessive invariance occurs across various tasks and architecture types. On MNIST and ImageNet one can manipulate the class-specific content of almost any image without changing the hidden activations. We identify an insufficiency of the standard cross-entropy loss as a reason for these failures. Further, we extend this objective based on an information-theoretic analysis so it encourages the model to consider all task-dependent features in its decision. This provides the first approach tailored explicitly to overcome excessive invariance and the resulting vulnerabilities. Figure 1: All images shown cause a competitive ImageNet-trained network to output the exact same probabilities over all 1000 classes (logits shown above each image). The leftmost image is from the ImageNet validation set; all other images are constructed such that they match the non-class-related information of images taken from other classes (for details see section 2.1). The excessive invariance revealed by this set of adversarial examples demonstrates that the logits contain only a small fraction of the information perceptually relevant to humans for discrimination between the classes. Adversarial vulnerability is one of the most iconic failure cases of modern machine learning models BID45 and a prime example of their weakness in out-of-distribution generalization. It is particularly striking that under i.i.d. settings deep networks show superhuman performance on many tasks BID33, while tiny targeted shifts of the input distribution can cause them to make unintuitive mistakes. The reason for these failures, and how they may be avoided or at least mitigated, is an active research area BID41 BID20 BID11. So far, the study of adversarial examples has mostly been concerned with the setting of small perturbations, or ε-adversaries BID23 BID35 BID38. Perturbation-based adversarial examples are appealing because they allow us to quantitatively measure notions of adversarial robustness BID9. However, recent work argued that the perturbation-based approach is unrealistically restrictive and called for the need of generalizing the concept of adversarial examples to the unrestricted case, including any input crafted to be misinterpreted by the learned model BID44 BID10. Yet, settings beyond ε-robustness are hard to formalize BID19. We argue here for an alternative, complementary viewpoint on the problem of adversarial examples. Instead of focusing on transformations erroneously crossing the decision boundary of classifiers, we focus on excessive invariance as a major cause of adversarial vulnerability. To this end, we introduce the concept of invariance-based adversarial examples and show that class-specific content of almost any input can be changed arbitrarily without changing activations of the network, as illustrated in figure 1 for ImageNet.
This viewpoint opens up new directions to analyze and control crucial aspects underlying vulnerability to unrestricted adversarial examples. The invariance perspective suggests that adversarial vulnerability is a consequence of narrow learning, yielding classifiers that rely only on few highly predictive features in their decisions. This has also been supported by the observation that deep networks strongly rely on spectral statistical regularities BID29, or stationary statistics BID17, to make their decisions, rather than on more abstract features like shape and appearance. We hypothesize that a major reason for this excessive invariance can be understood from an information-theoretic viewpoint of cross-entropy, which maximizes a bound on the mutual information between labels and representation, giving no incentive to explain all class-dependent aspects of the input. This may be desirable in some cases, but to achieve truly general understanding of a scene or an object, machine learning models have to learn to successfully separate essence from nuisance and subsequently generalize even under shifted input distributions. Our contributions can be summarized as follows: • We identify excessive invariance underlying striking failures in deep networks and formalize the connection to adversarial examples. • We show invariance-based adversarial examples can be observed across various tasks and types of deep network architectures. • We propose an invertible network architecture that gives explicit access to its decision space, enabling class-specific manipulations to images while leaving all dimensions of the representation seen by the final classifier invariant. • From an information-theoretic viewpoint, we identify the cross-entropy objective as a major reason for these failures. Leveraging invertible networks, we propose an alternative objective that provably reduces excessive invariance and works well in practice. In this section, we define pre-images and establish a link to adversarial examples. Definition 1 (Pre-images). Let F = f_L ∘ ... ∘ f_1 be a network with layers f_i, and let F_i denote the network up to layer i. Further, let D: R^d → {1, . . ., C} be a classifier with D = arg max_{k=1,...,C} softmax(F(x))_k. Then, for input x ∈ R^d, we define the following pre-images: (i) i-th layer pre-image: {x* ∈ R^d : F_i(x*) = F_i(x)}; (ii) logit pre-image and (iii) classifier pre-image, defined analogously with F_i replaced by F and D, respectively. Moreover, the (sub-)network is invariant to perturbations ∆x for which x* = x + ∆x lies in the corresponding pre-image. Figure 2: Connection between invariance-based (long pink arrow) and perturbation-based adversarial examples (short orange arrow). Class distributions are shown in green and blue; the dashed line is the decision boundary of a classifier. All adversarial examples can be reached either by crossing the decision boundary of the classifier via perturbations, or by moving within the pre-image of the classifier to mis-classified regions. The two viewpoints are complementary to one another and highlight that adversarial vulnerability is not only caused by excessive sensitivity to semantically meaningless perturbations, but also by excessive insensitivity to semantically meaningful transformations. Non-trivial pre-images (pre-images containing more elements than just the input x) after the i-th layer occur if the chain f_i ∘ · · · ∘ f_1 is not injective, for instance due to subsampling or non-injective activation functions like ReLU BID6. This accumulated invariance can become problematic if not controlled properly, as we will show in the following.
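As a tiny numerical illustration of why non-injective layers produce non-trivial pre-images (our own sketch, not from the paper), two inputs that differ only where a ReLU clips to zero collapse to the same activation, so every downstream computation is invariant to their difference:

```python
import numpy as np

relu = lambda v: np.maximum(v, 0.0)

x  = np.array([1.0, -2.0, 3.0])
x2 = np.array([1.0, -5.0, 3.0])   # differs only in a coordinate that ReLU clips to zero

print(relu(x), relu(x2))                   # both: [1. 0. 3.]
print(np.array_equal(relu(x), relu(x2)))   # True -> x2 lies in the first-layer pre-image of x
```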
We define perturbation-based adversarial examples by introducing the notion of an oracle (e.g., a human decision-maker or the unknown input-output function considered in learning theory). Definition 2 (Perturbation-based adversarial examples). An input x* is a perturbation-based adversarial example for x if it is (i) mis-classified: o(x*) ≠ D(x*), where D: R^d → {1, . . ., C} is the classifier and o: R^d → {1, . . ., C} is the oracle; and (ii) created by an adversary: x* = x + ∆x for some perturbation ∆x of x. Further, ε-bounded adversarial examples x* of x fulfill ||x − x*|| < ε, with ||·|| a norm on R^d and ε > 0. Usually, such examples are constructed as ε-bounded adversarial examples BID23. However, as our goal is to characterize general invariances of the network, we do not restrict ourselves to bounded perturbations. Definition 3 (Invariance-based adversarial examples). Let G denote the i-th layer, the logits, or the classifier (Definition 1), let x* ≠ x be in the G pre-image of x, and let o be an oracle (Definition 2). Then, x* is an invariance-based adversarial example for x if the oracle assigns it a different class, i.e., o(x*) ≠ o(x). Intuitively, adversarial perturbations cause the output of the classifier to change while the oracle would still consider the new input x* as being from the original class. Hence, in the context of ε-bounded perturbations, the classifier is too sensitive to task-irrelevant changes. On the other hand, movements in the pre-image leave the classifier invariant. If those movements induce a change in class as judged by the oracle, we call these invariance-based adversarial examples. In this case, however, the classifier is too insensitive to task-relevant changes. In summary, these two modes are complementary to each other, and both constitute failure modes of the learned classifier. When not restricting to ε-perturbations, perturbation-based and invariance-based adversarial examples can yield the same input x* = x_1 + ∆x_1 = x_2 + ∆x_2 with different reference points x_1 and x_2; see Figure 2. Hence, the key difference is the change of reference, which allows us to approach these failure modes from different directions. To connect these failure modes with an intuitive understanding of variations in the data, we now introduce the notion of invariance to nuisance and semantic variations, see also BID1. Definition 4 (Semantic/nuisance perturbation of an input). Let o be an oracle (Definition 2) and x* = x + ∆x a perturbed input. The perturbation ∆x is called semantic if it changes the class assigned by the oracle, o(x*) ≠ o(x), and a nuisance perturbation if it does not, o(x*) = o(x). For example, such a nuisance perturbation could be a translation or occlusion in image classification. Further, in Appendix A, we discuss the synthetic example called Adversarial Spheres from BID20, where nuisance and semantics can be explicitly formalized as rotation and norm scaling. Figure 3: The fully invertible RevNet, a hybrid of Glow and iRevNet with simple readout structure. z_s represents the logits and z_n the nuisance. As invariance-based adversarial examples manifest themselves in changes which do not affect the output of the network F, we need a generic approach that gives us access to the discarded nuisance variability. While feature nuisances are intractable to access for general architectures (see comment after Definition 1), invertible classifiers only remove nuisance variability in their final projection BID28. For C < d, we denote the classifier as D: R^d → {1, ..., C}. Our contributions in this section are: (i) we introduce an invertible architecture with a simplified readout structure, allowing us to exactly visualize manipulations in the hidden space; (ii) we propose an analytic attack based on this architecture that allows us to analyze its decision-making; and (iii) we reveal striking invariance-based vulnerability in competitive classifiers. Bijective classifiers with simplified readout.
We build deep networks that give access to their decision space by removing the final linear mapping onto the class probes in invertible RevNet classifiers and call these networks fully invertible RevNets. The fully invertible RevNet classifier can be written as D_θ = arg max_{k=1,...,C} softmax(F_θ(x))_k, where F_θ represents the bijective network. We denote z = F_θ(x), z_s = z_{1,...,C} as the logits (semantic variables), and z_n = z_{C+1,...,d} as the nuisance variables (z_n is not used for classification). In practice we choose the first C indices of the final z tensor or apply a more sophisticated DCT scheme (see appendix D) to set the subspace z_s, but other choices work as well. The architecture of the network is similar to iRevNets BID28 with some additional Glow components like actnorm BID31, squeezing, dimension splitting and affine block structure BID15; see Figure 3 for a graphical description. As all components are common in the bijective network literature, we refer the reader to Appendix D for exact training and architecture details. Due to its simple readout structure, the resulting invertible network allows us to qualitatively and quantitatively investigate the task-specific content in nuisance and logit variables. Despite this restriction, we achieve performance on par with commonly-used baselines on MNIST and ImageNet; see Table 1, which compares against standard baselines BID43, two ResNet BID25 variants, and an iRevNet BID28 with a non-invertible final projection onto the logits. Our proposed fully invertible RevNet performs roughly on par with the others. Analytic attack. To analyze the trained models, we can sample elements from the logit pre-image by computing x_met = F^{-1}(z_s, z̃_n), where z_s and z̃_n are taken from two different inputs. We term this heuristic metameric sampling. The samples would be from the true data distribution if the subspaces were factorized as P(z_s, z_n) = P(z_s)P(z_n). Experimentally we find that logit metamers reveal adversarial subspaces and are visually close to natural images on ImageNet. Thus, metameric sampling gives us an analytic tool to inspect dependencies between semantic and nuisance variables without the need for expensive and approximate optimization procedures. Attack on adversarial spheres. First, we evaluate our analytic attack on the synthetic spheres dataset, where the task is to classify samples as belonging to one out of two spheres with different radii. We choose the sphere dimensionality to be d = 100 and the radii R_1 = 1, R_2 = 10. By training a fully-connected fully invertible RevNet, we obtain 100% accuracy. After training we visualize the decision boundaries of the original classifier D and a post-hoc trained classifier on z_n (nuisance classifier); see FIG1. We densely sample points in a 2D subspace, following BID20, to visualize two cases: 1) the decision boundary on a 2D plane spanned by two randomly chosen data points; 2) the decision boundary spanned by a metameric sample x_met and a reference point x. In the metameric sample subspace we identify excessive invariance of the classifier. Here, it is possible to move any point from the inner sphere to the outer sphere without changing the classifier's predictions. However, this is not possible for the classifier trained on z_n. Most notably, the visualized failure is not due to a lack of data seen during training, but rather due to excessive invariance of the original classifier D on z_s. Thus, the nuisance classifier on z_n does not exhibit the same adversarial vulnerability in its subspace.
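The mechanics of metameric sampling can be sketched with a toy invertible map (our own illustration; in the paper F_θ is a fully invertible RevNet rather than the orthogonal linear map used here, and z_s would be actual logits):

```python
import numpy as np

# Toy metameric sampling: take the "logit" coordinates z_s from one input and the
# "nuisance" coordinates z_n from another, then invert. F is a random orthogonal
# linear map so that the inverse is exact and the example stays self-contained.
rng = np.random.default_rng(0)
d, C = 8, 2                                   # input dimension and number of classes
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))  # orthogonal matrix: F(x) = Q @ x

def F(x):     return Q @ x
def F_inv(z): return Q.T @ z                  # inverse of an orthogonal map is its transpose

x1, x2 = rng.normal(size=d), rng.normal(size=d)
z1, z2 = F(x1), F(x2)

z_met = np.concatenate([z1[:C], z2[C:]])      # z_s from x1, z_n from x2
x_met = F_inv(z_met)

# x_met produces exactly the same "logits" as x1, although it is a different input.
assert np.allclose(F(x_met)[:C], z1[:C])
print(np.linalg.norm(x_met - x1), np.linalg.norm(x_met - x2))
```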
Figure 5: Each column shows three images belonging together. The top row shows source images from which we sample the logits, the middle row shows logit metamers, and the bottom row shows images from which we sample the nuisances. The top and middle rows have the same (approximately for ResNets, exactly for fully invertible RevNets) logit activations. Thus, it is possible to change the image content completely without changing the 10- and 1000-dimensional logit vectors, respectively. This highlights a striking failure of classifiers to capture all task-dependent variability. Attack on MNIST and ImageNet. After validating its potential to uncover adversarial subspaces, we apply metameric sampling to fully invertible RevNets trained on MNIST and ImageNet; see Figure 5. The result is striking, as the nuisance variables z_n dominate the visual appearance of the logit metamers, making it possible to attach any semantic content to any logit activation pattern. Note that the entire 1000-dimensional feature vector containing probabilities over all ImageNet classes remains unchanged by any of the transformations we apply. To show our findings are not a particular property of bijective networks, we attack an ImageNet-trained ResNet-152 with a gradient-based version of our metameric attack, also known as feature adversaries BID39. The attack minimizes the mean squared error between a given set of logits from one image and those of another image (see appendix B for details). The attack shows the same failures for non-bijective models. This highlights the general relevance of our finding and poses the question of the origin of this excessive invariance, which we will analyze in the following section. In this section we identify why the cross-entropy objective does not necessarily encourage the model to explain all task-dependent variations of the data, and we propose a way to fix this. As shown in FIG1, the nuisance classifier on z_n uses task-relevant information not captured by the logit classifier D_θ on z_s (evident by its superior performance in the adversarial subspace). We leverage the simple readout structure of our invertible network and turn this observation into a formal explanation framework using information theory: Let (x, y) ∼ D with labels y ∈ {0, 1}^C. Then the goal of a classifier can be stated as maximizing the mutual information (Cover & BID14) between semantic features z_s (logits) extracted by network F_θ and labels y, denoted by I(y; z_s). Adversarial distribution shift. As the previously discussed failures required modifying input data from distribution D, we introduce the concept of an adversarial distribution shift D_Adv ≠ D to formalize these modifications. Our first assumption for D_Adv is I_{D_Adv}(z_n; y) ≤ I_D(z_n; y). Intuitively, the nuisance variables z_n of our network do not become more informative about y. Thus, the distribution shift may reduce the predictiveness of features encoded in z_s, but does not introduce or increase the predictive value of variations captured in z_n. Second, we assume I_{D_Adv}(y; z_s | z_n) ≤ I_{D_Adv}(y; z_s), which corresponds to positive or zero interaction information, see e.g. BID18. While the information in z_s and z_n can be redundant under this assumption, synergetic effects where conditioning on z_n increases the mutual information between y and z_s are excluded. Bijective networks F_θ capture all variations by design, which translates to information preservation: I(y; x) = I(y; F_θ(x)), see BID32.
Consider the reformulation I(y; x) = I(y; F_θ(x)) = I(y; z_s, z_n) = I(y; z_s) + I(y; z_n | z_s) = I(y; z_n) + I(y; z_s | z_n) (equation 5), which follows from the chain rule of mutual information (Cover & BID14), where I(y; z_n | z_s) denotes the conditional mutual information. Most strikingly, equation 5 offers two ways forward: 1. a direct increase of I(y; z_s); 2. an indirect increase of I(y; z_s | z_n) via decreasing I(y; z_n). Usually, in a classification task only I(y; z_s) is increased actively via training a classifier. While this approach is sufficient in most cases, expressed via high accuracies on training and test data, it may fail under D_Adv. This highlights why cross-entropy training may not be sufficient to overcome excessive semantic invariance. However, by leveraging the bijection F_θ we can minimize the unused information I(y; z_n) using the intuition of a nuisance classifier. Definition 5 (Independence cross-entropy loss). Let D_{θ_nc} be the nuisance classifier acting on z_n, with parameters θ_nc ∈ R^{p_2}. Then, the independence cross-entropy loss L_iCE combines the standard cross-entropy L_sCE on the logits z_s with a nuisance classification term L_nCE, which is minimized over θ_nc and maximized over the network parameters θ. The underlying principles of the nuisance classification loss L_nCE can be understood using a variational lower bound on mutual information from BID5. In summary, the minimization is with respect to a lower bound on I_D(y; z_n), while the maximization aims to tighten the bound (see Lemma 10 in Appendix C). Using these results, we now state the main result under the assumed distribution shift and successful minimization (proof in Appendix C.1): Theorem 6 (Information I_{D_Adv}(y; z_s) maximal after distribution shift). Let D_Adv denote the adversarial distribution and D the training distribution. Assume I_D(y; z_n) = 0 by minimizing L_iCE, and that the distribution shift satisfies I_{D_Adv}(z_n; y) ≤ I_D(z_n; y) and I_{D_Adv}(y; z_s | z_n) ≤ I_{D_Adv}(y; z_s). Then I_{D_Adv}(y; z_s) is maximal, i.e., the semantic variables retain all task-relevant information under the shift. Figure 6: Under distribution D, the iCE-loss minimizes I(y; z_n) (Lemma 10, Appendix C), but has no effect as the CE-loss already maximizes I(y; z_s). However, under the shift to D_Adv, the information I(y; z_s) decreases when training only under the CE-loss (orange arrow), while the iCE-loss induces I(y; z_n) = 0 and thus leaves I(y; z_s) unchanged (Theorem 6). Thus, incorporating the nuisance classifier allows for the discussed indirect increase of I_{D_Adv}(y; z_s) under an adversarial distribution shift, visualized in Figure 6. To aid stability and further encourage factorization of z_s and z_n in practice, we add a maximum-likelihood term L_MLEn on the nuisance variables to our independence cross-entropy objective (equation 6), maximizing Σ_k log p_k((z_n)_k) + log |det(J^x_θ)|, where det(J^x_θ) denotes the determinant of the Jacobian of F_θ(x) and p_k ∼ N(β_k, γ_k) with β_k, γ_k learned parameters. The log-determinant can be computed exactly in our model with negligible additional cost. Note that optimizing L_MLEn on the nuisance variables together with L_sCE amounts to maximum likelihood under a factorial prior (see Lemma 11 in Appendix C). Just as in GANs, the quality of the result relies on a tight bound provided by the nuisance classifier and convergence of the MLE term. Thus, it is important to analyze the success of the objective after training. We do this by applying our metameric sampling attack, but there are also other ways, like evaluating a more powerful nuisance classifier after training.
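A minimal sketch of one training step implementing this idea is shown below. This is our own simplification, not the authors' implementation: the bijective architecture and the maximum-likelihood term of equation 6 are omitted, a plain linear map stands in for F_θ, and the weighting `lam` is an illustrative knob rather than a value from the paper.

```python
import torch, torch.nn as nn, torch.nn.functional as F

# Toy one-step illustration of independence cross-entropy: a classifier on z_s plus an
# adversarially trained nuisance classifier on z_n whose loss the network tries to maximize.
d, C, B = 16, 4, 32
net      = nn.Linear(d, d, bias=False)   # stand-in for the bijective network F_theta
nuis_clf = nn.Linear(d - C, C)           # nuisance classifier D_{theta_nc} on z_n
opt_net = torch.optim.Adam(net.parameters(), 1e-3)
opt_nc  = torch.optim.Adam(nuis_clf.parameters(), 1e-3)
lam = 1.0

x, y = torch.randn(B, d), torch.randint(0, C, (B,))

# (1) Tighten the bound: train the nuisance classifier to predict y from z_n.
z = net(x)
loss_nc = F.cross_entropy(nuis_clf(z[:, C:].detach()), y)
opt_nc.zero_grad(); loss_nc.backward(); opt_nc.step()

# (2) Train the network: cross-entropy on the logits z_s, while pushing label
#     information out of z_n by maximizing the nuisance classifier's loss.
z = net(x)
loss_ce  = F.cross_entropy(z[:, :C], y)
loss_adv = F.cross_entropy(nuis_clf(z[:, C:]), y)
loss = loss_ce - lam * loss_adv
opt_net.zero_grad(); loss.backward(); opt_net.step()
```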
In this section, we show that our proposed independence cross-entropy loss is effective in reducing invariance-based vulnerability in practice by comparing it to vanilla cross-entropy training in four aspects: (i) error on the train and test set; (ii) perturbing nuisances via metameric sampling; (iii) the accuracy of a classifier trained on the nuisance variables, to quantify the class-specific information in them; and (iv) the effect under distribution shift on our newly introduced shiftMNIST, an augmented version of MNIST for benchmarking adversarial distribution shifts in the sense of Theorem 6. For all experiments we use the same network architecture and settings, the only difference being the two additional loss terms, as explained in Definition 5 and equation 6. In terms of test error of the logit classifier, both losses perform approximately on par, whereas the gap between train and test error vanishes for our proposed loss function, indicating less overfitting. (The classification errors are reported in appendix D.) To analyze whether our proposed loss indeed leads to independence between z_n and labels y, we attack it with our metameric sampling procedure. As we are only looking at data samples and not at samples from the model (factorized Gaussian on nuisances), this attack should reveal whether the network learned to trick the objective. In FIG2 we show interpolations between original images and logit metamers in CE- and iCE-trained fully invertible RevNets. In particular, we hold the activations z_s constant, while linearly interpolating the nuisances z_n down the column. The CE-trained network allows us to transform any image into any class without changing the logits. However, when training with our proposed iCE, the picture changes fundamentally and interpolations in the pre-image only change the style of a digit, but not its semantic content. This shows our loss has the ability to overcome excessive task-related invariance and encourages the model to explain and separate all task-related variability of the input from the nuisances of the task. FIG2 shows logit metamers x_met = F^{-1}(z_s, z̃_n) with logit activations z_s taken from the original image and z̃_n obtained by linearly interpolating from the original nuisance z_n (first row) to the nuisance of a target example z*_n (last row of the upper block). The used target example is shown at the bottom. When training with cross-entropy, virtually any image can be turned into any class without changing the logits z_s, illustrating strong vulnerability to invariance-based adversaries. Yet, training with independence cross-entropy solves the problem, and interpolations between nuisances z_n and z*_n preserve the semantic content of the image. A classifier trained on the nuisance variables of the cross-entropy-trained model performs even better than the logit classifier. Yet, a classifier on the nuisances of the independence cross-entropy-trained model performs poorly (Table 2 in appendix D). This indicates little class-specific information in the nuisances z_n, as intended by our objective function. Note also that this inability of the nuisance classifier to decode class-specific information is not due to it being hard to read out from z_n, as this would be revealed by the metameric sampling attack (see FIG2). Finally, we evaluate on shiftMNIST, which introduces targeted, task-relevant distribution shifts in two variants. (a) Binary shiftMNIST adds a label-informative binary code (a single pixel) to each training image; at test time, the binary code is not present and the network cannot rely on it anymore. (b) Textured shiftMNIST introduces textured backgrounds for each digit category, which are patches sampled from the describable textures dataset BID13.
DISPLAYFORM0 At train time the same type of texture is underlayed each digit of the same category, while texture types across categories differ. At test time, the relationship is broken and texture s are paired with digits randomly, again minimizing the mutual information between and label in a targeted manner. See Figure 8 for examples 1.It turns out that this task is indeed very hard for standard classifiers and their tendency to become excessively invariant to semantically meaningful features, as predicted by our theoretical analysis. When trained with cross-entropy, ResNets and fi-RevNets make zero errors on the train set, while having error rates of up to 87% on the shifted test set. This is striking, given that e.g. in binary shiftMNIST, only one single pixel is removed under D Adv, leaving the whole image almost unchanged. When applying our independence cross-entropy, the picture changes again. The errors made by the network improve by up to almost 38% on binary shiftMNIST and around 28% on textured shiftMNIST. This highlights the effectiveness of our proposed loss function and its ability to minimize catastrophic failure under severe distribution shifts exploiting excessive invariance. Adversarial examples. Adversarial examples often include -norm restrictions BID45, while BID19 argue for a broader definition to fully capture the implications for security. The -adversarial examples have also been extended to -feature adversaries BID39, which are equivalent to our approximate metameric sampling attack. Some works BID44 BID16 consider unrestricted adversarial examples, which are closely related to invariance-based adversarial vulnerability. The difference to human perception revealed by adversarial examples fundamentally questions which statistics deep networks use to base their decisions BID29 BID47.Relationship between standard and bijective networks. We leverage recent advances in reversible BID21 and bijective networks BID28 BID4 BID31 for our analysis. It has been shown that ResNets and iRevNets behave similarly on various levels of their representation on challenging tasks BID28 and that iRevNets as well as Glow-type networks are related to ResNets by the choice of dimension splitting applied in their residual blocks BID24. Perhaps unsurprisingly, given so many similarities, ResNets themselves have been shown to be provably bijective under mild conditions BID7. Further, excessive invariance of the type we discuss here has been shown to occur in non residual-type architectures as well BID20 BID6. For instance, it has been observed that up to 60% of semantically meaningful input dimensions on the adversarial spheres problem are learned to be ignored, while retaining virtually perfect performance BID20. In summary, there is ample evidence that RevNet-type networks are closely related to ResNets, while providing a principled framework to study widely observed issues related to excessive invariance in deep learning in general and adversarial robustness in particular. Information theory. The information-theoretic view has gained recent interest in machine learning due to the information bottleneck BID46 BID42 BID2 and usage in generative modelling BID26. As a consequence, the estimation of mutual information BID5 BID3 BID1 BID8 has attracted growing attention. The concept of group-wise independence between latent variables goes back to classical independent subspace analysis BID27 and received attention in learning unbiased representations, e.g. 
see the Fair Variational Autoencoder BID34. Furthermore, extended cross-entropy losses via entropy terms BID37 or minimizing predictability of variables BID40 has been introduced for other applications. Our proposed loss also shows similarity to the GAN loss BID22. However, in our case there is no notion of real or fake samples, but exploring similarities in the optimization are a promising avenue for future work. Failures of deep networks under distribution shift and their difficulty in out-of-distribution generalization are prime examples of the limitations in current machine learning models. The field of adversarial example research aims to close this gap from a robustness point of view. While a lot of work has studied -adversarial examples, recent trends extend the efforts towards the unrestricted case. However, adversarial examples with no restriction are hard to formalize beyond testing error. We introduce a reverse view on the problem to: show that a major cause for adversarial vulnerability is excessive invariance to semantically meaningful variations, demonstrate that this issue persists across tasks and architectures; and make the control of invariance tractable via fully-invertible networks. In summary, we demonstrated how a bijective network architecture enables us to identify large adversarial subspaces on multiple datasets like the adversarial spheres, MNIST and ImageNet. Afterwards, we formalized the distribution shifts causing such undesirable behavior via information theory. Using this framework, we find one of the major reasons is the insufficiency of the vanilla cross-entropy loss to learn semantic representations that capture all task-dependent variations in the input. We extend the loss function by components that explicitly encourage a split between semantically meaningful and nuisance features. Finally, we empirically show that this split can remove unwanted invariances by performing a set of targeted invariance-based distribution shift experiments. Example 7 (Semantic and nuisance on Adversarial Spheres BID20). Consider classifying inputs x from two classes given by radii R 1 or R 2. Further, let (r, φ) denote the spherical coordinates of x. Then, any perturbation ∆x, x * = x + ∆x with r * = r is semantic. On the other hand, if r * = r the perturbation is a nuisance with respect to the task of discriminating two spheres. In this example, the max-margin classifier D(x) = sign x − R1+R2 2 is invariant to any nuisance perturbation, while being only sensitive to semantic perturbations. In summary, the transform to spherical coordinates allows to linearize semantic and nuisance perturbations. Using this notion, invariance-based adversarial examples can be attributed to perturbations of x * = x + ∆x with following two properties 1. Perturbed sample x * stays in the pre-image {x DISPLAYFORM0 Thus, the failure of the classifier D can be thought of a mis-alignment between its invariance (expressed through the pre-image) and the semantics of the data and task (expressed by the oracle).Example 8 (Mis-aligned classifier on Adversarial Spheres). Consider the classifier DISPLAYFORM1 which computes the norm of x from its first d − 1 cartesian-coordinates. Then, D is invariant to a semantic perturbation with ∆r = R 2 − R 1 if only changes in the last coordinate x d are made. We empirically evaluate the classifier in equation 7 on the spheres problem (10M/2M samples setting BID20) and validate that it can reach perfect classification accuracy. 
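The following NumPy sketch instantiates Example 8 and previews the distribution shift analyzed next: the classifier reads only the first d−1 coordinates, reaches essentially perfect accuracy on samples drawn from the two spheres, and drops to chance once inner-sphere points are moved to the outer radius by editing only the ignored coordinate x_d. Radii and dimensionality follow the text; the sampling scheme and variable names are illustrative.

```python
import numpy as np

def sample_sphere(n, d, radius, rng):
    """Draw n points uniformly from the sphere of the given radius in d dimensions."""
    x = rng.standard_normal((n, d))
    return radius * x / np.linalg.norm(x, axis=1, keepdims=True)

def misaligned_classifier(x, r1=1.0, r2=10.0):
    """D(x) = sign(||x_{1:d-1}|| - (R1+R2)/2): ignores the last coordinate entirely."""
    return (np.linalg.norm(x[:, :-1], axis=1) > (r1 + r2) / 2).astype(int)

rng = np.random.default_rng(0)
d = 100
inner = sample_sphere(10000, d, 1.0, rng)          # true label 0 (inner sphere)
acc_clean = (misaligned_classifier(inner) == 0).mean()

# Invariance-based shift: push every inner point onto radius R2 by editing only x_d.
shifted = inner.copy()
shifted[:, -1] = np.sqrt(10.0 ** 2 - np.sum(shifted[:, :-1] ** 2, axis=1))
acc_shift = (misaligned_classifier(shifted) == 1).mean()   # oracle label is now 1
print(acc_clean, acc_shift)   # ~1.0 vs ~0.0: the excessive invariance is exploited
```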
However, by construction, perturbing the invariant dimension x * d = x d + ∆x d allows us to move all samples from the inner sphere to the outer sphere. Thus, the accuracy of the classifier drops to chance level when evaluating its performance under such a distributional shift. To conclude, this underlines how classifiers with optimal performance on finite samples can exhibit non-intuitive failure modes due to excessive invariance with respect to semantic variations. We use a standard Imagenet pre-trained Resnet-154 as provided by the torchvision package BID36 and choose a logit percept y = G(x) that can be based on any seed image. Then we optimize various imagesx to be metameric to x by simply minimizing a mean squared error loss of the form: DISPLAYFORM0 in the 1000-dimensional semantic logit space via stochastic gradient descent. We optimize with Adam in Pytorch default settings and a learning rate of 0.01 for 3000 iterations. The optimization thus takes the form of an adversarial attack targeting all logit entries and with no norm restriction on the input distance. Note that our metameric sampling attack in bijective networks is the analytic reverse equivalent of this attack. It leads to the exact solution at the cost of one inverse pass instead of an approximate solution here at the cost of thousands of gradient steps. Figure 9: Here we show a batch of randomly sampled metamers from our ImageNet-trained fully invertible RevNet-48. The quality is generally similar, sometimes colored artifacts appear. Computing mutual information is often intractable as it requires the joint probability p(x, y), see BID14 for an extensive treatment of information theory. However, following variational lower bound can be used for approximation, see BID5. Lemma 9 (Variational lower bound on mutual information). Let X, Y be random variables with conditional density p(y|x). Further, let q θ (y|x) be a variational density depending on parameter θ. Then, the lower bound DISPLAYFORM0 holds with equality if p(y|x) = q θ (y|x).While above lower bound removes the need for the computation of p(y|x), estimating the expectation E Y |X still requires sampling from it. Using this bound, we can now state the effect of the nuisance classifiation loss. Lemma 10 (Effect of nuisance classifier). Define semantics as z s = F θ (x) 1,...,C and nuisances as z n = F θ (x) C+1,...,d, where (x, y) ∼ D. Then, the nuisance classification loss yields DISPLAYFORM1 (ii) Maximization to tighten bound on I D (y; z n): Under a perfect model of the conditional density, DISPLAYFORM2 Proof. To proof above , we need to draw the connection to the variational lower bound on mutual information from Lemma 9. Let the nuisance classifier D θnc (z n) model the variational posterior q θnc (y|z n). Then we have the lower bound DISPLAYFORM3 From Lemma 9 follows, that if D θnc (z n) = p(y|z n), it holds I(y; z n) = I θnc (y; z n). Hence, the nuisance classifier needs to model the conditional density perfectly. Estimating this bound via Monte Carlo simulation requires sampling from the conditional density p(y|z n). Following BID2, we have the Markov property y ↔ x ↔ z n as labels y interact with inputs x and representation z n interacts with inputs x. Hence, DISPLAYFORM4 Including above and assuming F θ (x) = z n to be a deterministic function, we have DISPLAYFORM5 Lemma 11 (Effect of MLE-term). Define semantics as z s = F θ (x) 1,...,C and nuisances as z n = F θ (x) C+1,...,d, where (x, y) ∼ D. 
Then, the MLE-term in equation 6 together with cross-entropy on the semantics DISPLAYFORM6 minimizes the mutual information I(z s ; z n).Proof. Letz s = sof tmax(z s). Then minimizing the loss terms L sCE and L M LEn is a maximum likelihood estimation under the factorial prior DISPLAYFORM7 where Cat is a categorical distribution. As sof tmax is shift-invariant, sof tmax(x + c) = sof tmax(x), above factorial prior forz s and z n yields independence between logits z s and z n up to a constant c. Finally note, the log term and summation in L M LEn and L CE is re-formulation for computational ease but does not change its minimizer as the logarithm is monotone. From the assumptions follows I D Adv (y; z n) = 0. Furthermore, we have the assumption DISPLAYFORM0 excluding synergetic effects in the interaction information BID18. By information preservation under homeomorphisms BID32 and the chain rule of mutual information (Cover & BID14, we have DISPLAYFORM1 ..,C is obtained by the deterministic transform F, by the data processing inequality (Cover & BID14 we have the inequality I D Adv (y; x) ≥ I D Adv (y; z s). Thus, the claimed equality must hold. Remark 12. Since our goal is to maximize the mutual information I(y; z s) while minimizing I(y; z n), we need to ensure that this objective is well defined as mutual information can be unbounded from above for continuous random variables. However, due to the data processing inequality (Cover & BID14 we have I(y; z n) = I(y; F θ (x)) ≤ I(y; x). Hence, we have a fixed upper bound given by our data (x, y). Compared to BID8 there is thus no need for gradient clipping or a switch to the bounded Jensen-Shannon divergence as in BID26 is not necessary. All experiments were based on a fully invertible RevNet model with different hyperparameters for each dataset. For the spheres experiment we used Pytorch BID36 and for MNIST, as well as Imagenet Tensorflow BID0. The network is a fully connected fully invertible RevNet. It has 4 RevNet-type ReLU bottleneck blocks with additive couplings and uses no batchnorm. We train it via cross-entropy and use the Adam optimizer BID30 ) with a learning rate of 0.0001 and otherwise default Pytorch settings. The nuisance classifier is a 3 layer ReLU network with 1000 hidden units per layer. We choose the spheres to be 100-dimensional, with R 1 = 1 and R 2 = 10, train on 500k samples for 10 epochs and then validate on another 100k holdout set. We achieve 100% train and validation accuracy for logit and nuisance classifier. We use a convolutional fully invertible RevNet with additional actnorm and invertible 1x1 convolutions between each layer as introduced in BID31. The network has 3 stages, after which half of the variables are factored out and an invertible downsampling, or squeezing BID15 BID28 ) is applied. The network has 16 RevNet blocks with batch norm per stage and 128 filters per layer. We also dequantize the inputs as is typically done in flow-based generative models. The network is trained via Adamax with a base learning rate of 0.001 for 100 epochs and we multiply the it with a factor of 0.2 every 30 epochs and use a batch size of 64 and l2 weight decay of 1e-4. For training we compare vanilla cross-entropy training with our proposed independence cross-entropy loss. To have a more balanced loss signal, we normalize L nCE by the number of input dimensions it receives for the maximization step. The nuisance classifier is a fullyconnected 3 layer ReLU network with 512 units. 
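As a reference point for the coupling blocks used in the networks described in this appendix, here is a minimal PyTorch additive coupling layer: it is volume preserving (log-determinant exactly zero) and exactly invertible. The translation network size, the split and the flip between blocks are illustrative; the actual fully connected and convolutional fully invertible RevNets additionally use bottleneck blocks, actnorm and invertible 1x1 convolutions as described above.

```python
import torch
import torch.nn as nn

class AdditiveCoupling(nn.Module):
    """Additive coupling block (sketch): y1 = x1, y2 = x2 + t(x1)."""
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.half = dim // 2
        self.t = nn.Sequential(nn.Linear(self.half, hidden), nn.ReLU(),
                               nn.Linear(hidden, hidden), nn.ReLU(),
                               nn.Linear(hidden, dim - self.half))

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        return torch.cat([x1, x2 + self.t(x1)], dim=1)

    def inverse(self, y):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        return torch.cat([y1, y2 - self.t(y1)], dim=1)

def flip(z):
    """Swap the two halves between blocks so every dimension gets transformed."""
    return torch.cat([z[:, z.shape[1] // 2:], z[:, :z.shape[1] // 2]], dim=1)
```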
As data augmentation we use random shifts of 3 pixels. For the classification errors of the different architectures we compare, see TAB3, whose caption reads: Results comparing cross-entropy training (CE) with independence cross-entropy training (iCE) from Definition 5 and two architectures from the literature. The accuracy of the logit classifiers is on par for the CE and iCE networks, but the gap between train and test error is larger for CE, indicating less overfitting for iCE. Further, a classifier independently trained on the nuisance variables reaches an even smaller error than the logit classifier for CE, but only gets down to 27.70% error for iCE, indicating that we have successfully removed most of the label information from the nuisance variables and fixed the problem of excessive invariance to semantically meaningful variability at no cost in test error. For ImageNet we use a fully invertible RevNet network. The first three stages consist of additive and the last of affine coupling layers. After the final layer we apply an orthogonal 2D DCT type-II to all feature maps and read out the classes in the low-pass components of the transformation. This effectively gives us an invertible global average pooling and makes our network even more similar to ResNets, which always apply global average pooling on their final feature maps. We train the network with momentum SGD for 128 epochs, a batch size of 480 (distributed over 6 GPUs) and a base learning rate of 0.1, which is reduced by a factor of 0.1 every 32 epochs. We use momentum of 0.9 and L2 weight decay of 1e-4.
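A sketch of the invertible read-out just described, assuming SciPy's orthogonal DCT implementation; shapes and names are illustrative, not the authors' code.

```python
import numpy as np
from scipy.fft import dctn

def dct_lowpass_readout(feature_maps, num_classes):
    """Sketch of the 'invertible global average pooling': apply an orthogonal 2D DCT
    type-II to every final feature map and read the class scores from the low-pass
    (DC) coefficient of the first `num_classes` channels. Because the DCT is
    orthogonal, no information is discarded: all remaining coefficients stay
    available as nuisance variables.

    feature_maps: array of shape (N, C, H, W) with C >= num_classes.
    """
    coeffs = dctn(feature_maps, type=2, norm="ortho", axes=(2, 3))
    logits = coeffs[:, :num_classes, 0, 0]   # DC component of the class channels
    return logits, coeffs                    # coeffs can be inverted exactly
```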
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BkfbpsAcF7
We show deep networks are not only too sensitive to task-irrelevant changes of their input, but also too invariant to a wide range of task-relevant changes, thus making vast regions in input space vulnerable to adversarial attacks.
Flow-based generative models are powerful exact likelihood models with efficient sampling and inference. Despite their computational efficiency, flow-based models generally have much worse density modeling performance compared to state-of-the-art autoregressive models. In this paper, we investigate and improve upon three limiting design choices employed by flow-based models in prior work: the use of uniform noise for dequantization, the use of inexpressive affine flows, and the use of purely convolutional conditioning networks in coupling layers. Based on our findings, we propose Flow++, a new flow-based model that is now the state-of-the-art non-autoregressive model for unconditional density estimation on standard image benchmarks. Our work has begun to close the significant performance gap that has so far existed between autoregressive models and flow-based models. Deep generative models -latent variable models in the form of variational autoencoders BID16, implicit generative models in the form of GANs BID8, and exact likelihood models like PixelRNN/CNN (van den c), Image Transformer BID22, PixelSNAIL, NICE, RealNVP, and Glow BID5 BID15 -have recently begun to successfully model high dimensional raw observations from complex real-world datasets, from natural images and videos, to audio signals and natural language BID14 BID34.Autoregressive models, a certain subclass of exact likelihood models, achieve state-of-the-art density estimation performance on many challenging real-world datasets, but generally suffer from slow sampling time due to their autoregressive structure BID28 BID22. Inverse autoregressive models can sample quickly and potentially have strong modeling capacity, but they cannot be trained efficiently by maximum likelihood. Non-autoregressive flow-based models (which we will refer to as "flow models"), such as NICE, RealNVP, and Glow, are efficient for sampling, but have so far lagged behind autoregressive models in density estimation benchmarks BID5 BID15.In the hope of creating an ideal likelihood-based generative model that simultaneously has fast sampling, fast inference, and strong density estimation performance, we seek to close the density estimation performance gap between flow models and autoregressive models. In subsequent sections, we present our new flow model, Flow++, which is powered by an improved training procedure for continuous likelihood models and a number of architectural extensions of the coupling layer defined by BID5. A flow model f is constructed as an invertible transformation that maps observed data x to a standard Gaussian latent variable z = f (x), as in nonlinear independent component analysis BID1 BID10 BID9. The key idea in the design of a flow model is to form f by stacking individual simple invertible transformations BID5 BID15 BID25 BID19. Explicitly, f is constructed by composing a series of invertible flows as DISPLAYFORM0, with each f i having a tractable inverse and a tractable Jacobian determinant. This way, sampling is efficient, as it can be performed by computing DISPLAYFORM1 1 (z) for z ∼ N (0, I), and so is training by maximum likelihood, since the model density DISPLAYFORM2 is easy to compute and differentiate with respect to the parameters of the flows f i. 
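A minimal sketch of this training and sampling recipe, assuming each flow in the stack exposes a forward map returning (f_i(y), log|det ∂f_i/∂y|) together with an exact inverse; the interface is illustrative rather than any specific library's API.

```python
import torch

def flow_log_likelihood(flows, x):
    """Change-of-variables log-likelihood for a composed flow f = f_L ∘ ... ∘ f_1.
    Returns log p(x) = log N(f(x); 0, I) + sum_i log|det ∂f_i/∂y| per example.
    """
    z = x
    total_logdet = torch.zeros(x.shape[0], device=x.device)
    for f in flows:
        z, logdet = f(z)                     # each flow returns (output, log-det)
        total_logdet = total_logdet + logdet
    log_prior = torch.distributions.Normal(0.0, 1.0).log_prob(z).flatten(1).sum(1)
    return log_prior + total_logdet

def sample(flows, z):
    """Sampling: draw z ~ N(0, I) and apply the tractable inverses in reverse order."""
    for f in reversed(flows):
        z = f.inverse(z)
    return z
```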
In this section, we describe three modeling inefficiencies in prior work on flow models: uniform noise is a suboptimal dequantization choice that hurts both training loss and generalization; commonly used affine coupling flows are not expressive enough; convolutional layers in the conditioning networks of coupling layers are not powerful enough. Our proposed model, Flow++, consists of a set of improved design choices: variational flow-based dequantization instead of uniform dequantization; logistic mixture CDF coupling flows; self-attention in the conditioning networks of coupling layers. Many real-world datasets, such as CIFAR10 and ImageNet, are recordings of continuous signals quantized into discrete representations. Fitting a continuous density model to discrete data, however, will produce a degenerate solution that places all probability mass on discrete datapoints BID30. A common solution to this problem is to first convert the discrete data distribution into a continuous distribution via a process called "dequantization," and then model the ing continuous distribution using the continuous density model BID30 BID6 BID28. Dequantization is usually performed in prior work by adding uniform noise to the discrete data over the width of each discrete bin: if each of the D components of the discrete data x takes on values in {0, 1, 2, . . ., 255}, then the dequantized data is given by y = x + u, where u is drawn uniformly from D. BID29 note that training a continuous density model p model on uniformly dequantized data y can be interpreted as maximizing a lower bound on the log-likelihood for a certain discrete model P model on the original discrete data x: DISPLAYFORM0 The argument of BID29 proceeds as follows. Letting P data denote the original distribution of discrete data and p data denote the distribution of uniformly dequantized data, Jensen's inequality implies that DISPLAYFORM1 Consequently, maximizing the log-likelihood of the continuous model on uniformly dequantized data cannot lead to the continuous model degenerately collapsing onto the discrete data, because its objective is bounded above by the log-likelihood of a discrete model. While uniform dequantization successfully prevents the continuous density model p model from collapsing to a degenerate mixture of point masses on discrete data, it asks p model to assign uniform density to unit hypercubes x + D around the data x. It is difficult and unnatural for smooth function approximators, such as neural network density models, to excel at such a task. To sidestep this issue, we now introduce a new dequantization technique based on variational inference. Again, we are interested in modeling D-dimensional discrete data x ∼ P data using a continuous density model p model, and we will do so by maximizing the log-likelihood of its associated discrete model P model (x):= D p model (x + u) du. Now, however, we introduce a dequantization noise distribution q(u|x), with support over u ∈ D. Treating q as an approximate posterior, we have the following variational lower bound, which holds for all q: DISPLAYFORM0 We will choose q itself to be a conditional flow-based generative model of the form u = q x , where DISPLAYFORM1 x /∂u, and thus we obtain the objective DISPLAYFORM2 which we maximize jointly over p model and q. 
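A sketch of this objective, with `q_flow.sample_with_log_prob` and `p_flow.log_prob` as assumed interfaces for the conditional dequantization flow and the density model; uniform dequantization is recovered as the special case below.

```python
import torch

def variational_dequantization_loss(p_flow, q_flow, x):
    """Variational dequantization bound (sketch): draw noise u = q_x(eps) from a
    conditional flow with eps ~ N(0, I), then maximize
    E[ log p_model(x + u) - log q(u | x) ], a lower bound on log P_model(x).
    """
    u, log_q = q_flow.sample_with_log_prob(context=x)   # u in [0, 1)^D and log q(u|x)
    log_p = p_flow.log_prob(x + u)                       # continuous model density
    elbo = log_p - log_q
    return -elbo.mean()                                  # minimize the negative bound

def uniform_dequantization_loss(p_flow, x):
    """Special case: q is uniform on [0, 1)^D, so log q(u|x) = 0."""
    u = torch.rand_like(x)
    return -p_flow.log_prob(x + u).mean()
```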
When p model is also a flow model x = f −1 (z) (as it is throughout this paper), it is straightforward to calculate a stochastic gradient of this objective using the pathwise derivative estimator, as f (x + q x ) is differentiable with respect to the parameters of f and q. Notice that the lower bound for uniform dequantization -eqs. to -is a special case of our variational lower bound -eqs. to, when the dequantization distribution q is a uniform distribution that ignores dependence on x. Because the gap between our objective and the true expected log-likelihood DISPLAYFORM3, using a uniform q forces p model to unnaturally place uniform density over each hypercube x + D to compensate for any potential looseness in the variational bound introduced by the inexpressive q. Using an expressive flow-based q, on the other hand, allows p model to place density in each hypercube x + D according to a much more flexible distribution q(u|x). This is a more natural task for p model to perform, improving both training and generalization loss. Recent progress in the design of flow models has involved carefully constructing flows to increase their expressiveness while preserving tractability of the inverse and Jacobian determinant computations. One example is the invertible 1 × 1 convolution flow, whose inverse and Jacobian determinant can be calculated and differentiated with standard automatic differentiation libraries BID15. Another example, which we build upon in our work here, is the affine coupling layer BID6. It is a parameterized flow y = f θ (x) that first splits the components of x into two parts x 1, x 2, and then computes y = (y 1, y 2), given by DISPLAYFORM0 Here, a θ and b θ are outputs of a neural network that acts on x 1 in a complex, expressive manner, but the ing behavior on x 2 always remains an elementwise affine transformation -effectively, a θ and b θ together form a data-parameterized family of invertible affine transformations. This allows the affine coupling layer to express complex dependencies on the data while keeping inversion and log-likelihood computation tractable. Using · and exp to respectively denote elementwise multiplication and exponentiation, DISPLAYFORM1 The splitting operation x → (x 1, x 2) and merging operation (y 1, y 2) → y are usually performed over channels or over space in a checkerboard-like pattern BID6. We found in our experiments that density modeling performance of these coupling layers could be improved by augmenting the data-parameterized elementwise affine transformations by more general nonlinear elementwise transformations. For a given scalar component x of x 2, we apply the cumulative distribution function (CDF) for a mixture of K logistics -parameterized by mixture probabilities, means, and log scales π, µ, s -followed by an inverse sigmoid and an affine transformation parameterized by a and b: DISPLAYFORM0 where MixLogCDF(x; π, µ, DISPLAYFORM1 The transformation parameters π, µ, s, a, b for each component of x 2 are produced by a neural network acting on x 1 . This neural network must produce these transformation parameters for each component of x 2, hence it produces vectors a θ (x 1) and b θ (x 1) and tensors π θ (x 1), µ θ (x 1), s θ (x 1) (with last axis dimension K). The coupling transformation is then given by: DISPLAYFORM2 where the formula for computing y 2 operates elementwise. 
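The elementwise transformation can be sketched as follows; `conditioner` stands for the neural network acting on x_1, and the clamp is a numerical safeguard. The Jacobian term, which only requires the logistic mixture density, is omitted for brevity.

```python
import torch

def mix_log_cdf(x, logits_pi, mu, log_s):
    """CDF of a K-component logistic mixture, evaluated elementwise.
    x: (...,); mixture parameters logits_pi, mu, log_s: (..., K)."""
    pi = torch.softmax(logits_pi, dim=-1)
    cdf_k = torch.sigmoid((x.unsqueeze(-1) - mu) * torch.exp(-log_s))
    return (pi * cdf_k).sum(dim=-1)

def logistic_mixture_coupling(x1, x2, conditioner):
    """Flow++-style coupling (sketch): the conditioner maps x1 to elementwise
    parameters (pi, mu, log_s, a, b) for x2, and
    y2 = sigmoid^{-1}( MixLogCDF(x2; pi, mu, s) ) * exp(a) + b,   y1 = x1.
    `conditioner` is an assumed interface returning the five parameter tensors.
    """
    logits_pi, mu, log_s, a, b = conditioner(x1)
    p = mix_log_cdf(x2, logits_pi, mu, log_s).clamp(1e-6, 1 - 1e-6)  # keep logit finite
    y2 = torch.logit(p) * torch.exp(a) + b      # inverse sigmoid, then affine transform
    return x1, y2
```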
The inverse sigmoid ensures that the inverse of this coupling transformation always exists: the range of the logistic mixture CDF is, so the domain of its inverse must stay within this interval. The CDF itself can be inverted efficiently with bisection, because it is a monotonically increasing function. Moreover, the Jacobian determinant of this transformation involves calculating the probability density function of the logistic mixtures, which poses no computational difficulty. In addition to improving the expressiveness of the elementwise transformations on x 2, we found it crucial to improve the expressiveness of the conditioning on x 1 -that is, the expressiveness of the neural network responsible for producing the elementwise transformation parameters π, µ, s, a, b. Our best were obtained by stacking convolutions and multi-head self attention into a gated residual network BID20, in a manner resembling the Transformer BID34 with pointwise feedforward layers replaced by 3×3 convolutional layers. Our architecture is defined as a stack of blocks. Each block consists of the following two layers connected in a residual fashion, with layer normalization BID0 after each residual connection: DISPLAYFORM0 where Gate refers to a 1 × 1 convolution that doubles the number of channels, followed by a gated linear unit BID3. The convolutional layer is identical to the one used by PixelCNN++ BID28, and the multi-head self attention mechanism we use is identical to the one in the Transformer BID34. (We always use 4 heads in our experiments, since we found it to be effective early on in our experimentation process.)With these blocks in hand, the network that outputs the elementwise transformation parameters is simply given by stacking blocks on top of each other, and finishing with a final convolution that increases the number of channels to the amount needed to specify the elementwise transformation parameters. Here, we show that Flow++ achieves state-of-the-art density modeling performance among nonautoregressive models on CIFAR10 and 32x32 and 64x64 ImageNet. We also present ablation experiments that quantify the improvements proposed in section 3, and we present example generative samples from Flow++ and compare them against samples from autoregressive models. Our experiments employed weight normalization and data-dependent initialization. We used the checkerboard-splitting, channel-splitting, and downsampling flows of BID6; we also used before every coupling flow an invertible 1x1 convolution flows of BID15, as well as a variant of their "actnorm" flow that normalizes all activations independently (instead of normalizing per channel). Our CIFAR10 model used 4 coupling layers with checkerboard splits at 32x32 resolution, 2 coupling layers with channel splits at 16x16 resolution, and 3 coupling layers with checkerboard splits at 16x16 resolution; each coupling layer used 10 convolution-attention blocks, all with 96 filters. More details on architectures, as well as details for the other experiments, will be given in a source code release. In table 1, we show that Flow++ achieves state-of-the-art density modeling out of all nonautoregressive models, and it is competitive with autoregressive models: its performance is on par with the first generation of PixelCNN models, and it outperforms Multiscale PixelCNN BID24. As of submission, our models have not fully converged due to computational constraint and we expect further performance gain in future revision of this manuscript. 
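For reference, a rough PyTorch sketch of one conditioning block as described in Section 3.3: a 3x3 convolution and a multi-head self-attention layer, each followed by a channel-doubling 1x1 convolution with a gated linear unit, a residual connection and normalization. The exact layer ordering, normalization choice and gating details are assumptions; only the 4 attention heads and the channel-doubling gate follow the description above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvAttnBlock(nn.Module):
    """One conditioning block (sketch): gated 3x3 conv, then gated self-attention."""
    def __init__(self, channels, heads=4):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv_gate = nn.Conv2d(channels, 2 * channels, 1)   # Gate: 1x1 conv + GLU
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.attn_gate = nn.Conv2d(channels, 2 * channels, 1)
        self.norm1 = nn.GroupNorm(1, channels)   # stands in for layer normalization
        self.norm2 = nn.GroupNorm(1, channels)

    def forward(self, x):                        # x: (N, C, H, W)
        h = F.glu(self.conv_gate(torch.relu(self.conv(x))), dim=1)
        x = self.norm1(x + h)                    # residual connection + normalization
        n, c, hgt, wdt = x.shape
        seq = x.flatten(2).transpose(1, 2)       # (N, H*W, C) tokens for attention
        a, _ = self.attn(seq, seq, seq)
        a = a.transpose(1, 2).reshape(n, c, hgt, wdt)
        a = F.glu(self.attn_gate(a), dim=1)
        return self.norm2(x + a)
```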
Table 1 (excerpt, autoregressive baselines): density modeling results in bits/dim (lower is better); columns are CIFAR10, ImageNet 32x32 and ImageNet 64x64.
PixelCNN (van den Oord et al.): 3.14 / - / -
PixelRNN (van den Oord et al.): 3.00 / 3.86 / 3.63
Gated PixelCNN (van den Oord et al., BID33): 3.03 / 3.83 / 3.57
PixelCNN++ (BID28): 2.92 / - / -
Image Transformer (BID22): 2.90 / 3.77 / -
PixelSNAIL: 2.85 / 3.80 / 3.52
We ran the following ablations of our model on unconditional CIFAR10 density estimation: variational dequantization vs. uniform dequantization; logistic mixture coupling vs. affine coupling; and stacked self-attention vs. convolutions only. As each ablation involves removing some component of the network, we increased the number of filters in all convolutional layers (and attention layers, if present) in order to match the total number of parameters with the full Flow++ model. In FIG0 and Table 2, we compare the performance of these ablations relative to Flow++ at 400 epochs of training, which was not enough for these models to converge, but far enough to see their relative performance differences. Switching from our variational dequantization to the more standard uniform dequantization costs the most: approximately 0.127 bits/dim. The remaining two ablations both cost approximately 0.03 bits/dim: switching from our logistic mixture coupling layers to affine coupling layers, and switching from our hybrid convolution-and-self-attention architecture to a pure convolutional residual architecture. Note that these performance differences are present despite all networks having approximately the same number of parameters: the improved performance of Flow++ comes from improved inductive biases, not simply from increased parameter count. The most interesting result is probably the effect of the dequantization scheme on training and generalization loss. At 400 epochs of training, the full Flow++ model with variational dequantization has a train-test gap of approximately 0.02 bits/dim, but with uniform dequantization, the train-test gap is approximately 0.06 bits/dim. This confirms our claim in Section 3.1.2 that training with variational dequantization is a more natural task for the model than training with uniform dequantization. (Figure: samples from Flow++, compared against BID23; more samples are available in the appendix, Section 7.) Likelihood-based models constitute a large family of deep generative models. One subclass of such methods, based on variational inference, allows for efficient approximate inference and sampling, but does not admit exact log-likelihood computation (BID16; BID26). Another subclass, which we called exact likelihood models in this work, does admit exact log-likelihood computation. These exact likelihood models are typically specified as invertible transformations that are parameterized by neural networks (BID4; BID18; BID30; BID5; BID7; BID28). There is prior work that aims to improve the sampling speed of deep autoregressive models. The Multiscale PixelCNN (BID24) modifies the PixelCNN to be non-fully-expressive by introducing conditional independence assumptions among pixels in a way that permits sampling in a logarithmic number of steps, rather than linear. Such a change in the autoregressive structure allows for faster sampling but also makes some statistical patterns impossible to capture, and hence reduces the capacity of the model for density estimation. WaveRNN (BID13) improves sampling speed for autoregressive models for audio via sparsity and other engineering considerations, some of which may apply to flow models as well. There is also recent work that aims to improve the expressiveness of coupling layers in flow models.
BID15 demonstrate improved density estimation using an invertible 1x1 convolution flow, and demonstrate that very large flow models can be trained to produce photorealistic faces. BID21 introduce piecewise polynomial couplings that are similar in spirit to our mixture of logistics couplings. They found them to be more expressive than affine couplings, but reported little performance gains in density estimation. We leave a detailed comparison between our coupling layer and the piecewise polynomial CDFs for future work. We presented Flow++, a new flow-based generative model that begins to close the performance gap between flow models and autoregressive models. Our work considers specific instantiations of design principles for flow models -dequantization, flow design, and conditioning architecture design -and we hope these principles will help guide future research in flow models and likelihoodbased models in general.7 APPENDIX A: SAMPLES
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
Hyg74h05tX
Improved training of current flow-based generative models (Glow and RealNVP) on density estimation benchmarks
Modern deep artificial neural networks have achieved impressive through models with orders of magnitude more parameters than training examples which control overfitting with the help of regularization. Regularization can be implicit, as is the case of stochastic gradient descent and parameter sharing in convolutional layers, or explicit. Explicit regularization techniques, most common forms are weight decay and dropout, have proven successful in terms of improved generalization, but they blindly reduce the effective capacity of the model, introduce sensitive hyper-parameters and require deeper and wider architectures to compensate for the reduced capacity. In contrast, data augmentation techniques exploit domain knowledge to increase the number of training examples and improve generalization without reducing the effective capacity and without introducing model-dependent parameters, since it is applied on the training data. In this paper we systematically contrast data augmentation and explicit regularization on three popular architectures and three data sets. Our demonstrate that data augmentation alone can achieve the same performance or higher as regularized models and exhibits much higher adaptability to changes in the architecture and the amount of training data. One of the central issues in machine learning research and application is finding ways of improving generalization. Regularization, loosely defined as any modification applied to a learning algorithm that helps prevent overfitting, plays therefore a key role in machine learning (; Müller, 2012). In the case of deep learning, where neural networks tend to have several orders of magnitude more parameters than training examples, statistical learning theory indicates that regularization becomes even more crucial. Accordingly, a myriad of techniques have been proposed as regularizers: weight decay and other L p penalties; dropout and stochastic depth , to name a few examples. Moreover, whereas in simpler machine learning algorithms the regularizers can be easily identified as explicit terms in the objective function, in modern deep neural networks the sources of regularization are not only explicit, but implicit . In this regard, many techniques have been studied for their regularization effect, despite not being explicitly intended as such. That is the case of unsupervised pre-training , multi-task learning , convolutional layers , batch normalization or adversarial training . In sum, there are multiple elements in deep learning that contribute to reduce overfitting and thus improve generalization. Driven by the success of such techniques and the efficient use of GPUs, considerable research effort has been devoted to finding ways of training deeper and wider networks with larger capacity (; ;). Ironically, the increased representational capacity is eventually reduced in practice by the use of explicit regularization, most commonly weight decay and dropout. It is known, for instance, that the gain in generalization provided by dropout comes at the cost of using larger models and training for longer . Hence, it seems that with these standard regularization methods deep networks are wasting capacity . Unlike explicit regularization, data augmentation improves generalization without reducing the capacity of the model. 
Data augmentation, that is synthetically expanding a data set by apply-ing transformations on the available examples, has been long used in machine learning and identified as a critical component of many recent successful models, like AlexNet , All-CNN or ResNet , among others. Although it is most popular in computer vision, data augmentation has also proven effective in speech recognition , music source separation or text categorization . Today, data augmentation is an almost ubiquitous technique in deep learning, which can also be regarded as an implicit regularizer for it improves generalization. Recently, the deep learning community has become more aware of the importance of data augmentation (Hernández-García & König, 2018b) and new techniques, such as cutout (a) or augmentation in the feature space (b), have been proposed. Very interestingly, a promising avenue for future research has been set by recently proposed models that automatically learn the data transformations (; ; ;). Nonetheless, another study by analyzed the performance of different techniques for object recognition and concluded that one of the most successful techniques so far is still the traditional data augmentation carried out in most studies. However, despite its popularity, the literature lacks, to our knowledge, a systematic analysis of the impact of data augmentation on convolutional neural networks compared to explicit regularization. It is a common practice to train the models with both explicit regularization, typically weight decay and dropout, and data augmentation, assuming they all complement each other. included data augmentation in their analysis of generalization of deep networks, but it was questionably considered an explicit regularizer similar to weight decay and dropout. To our knowledge, the first time data augmentation and explicit regularization were systematically contrasted was the preliminary study by Hernández-García & König (2018b). The present work aims at largely extending that work both with more empirical and a theoretical discussion. Our specific contributions are the following: • Propose definitions of explicit and implicit regularization that aim at solving the ambiguity in the literature (Section 2). • A theoretical discussion based on statistical learning theory about the differences between explicit regularization and data augmentation, highlighting the advantages of the latter (Section 3). • An empirical analysis of the performance of models trained with and without explicit regularization, and different levels of data augmentation on several benchmarks (Sections 4 and 5). Further, we study their adaptability to learning from fewer examples (Section 5.2) and to changes in the architecture (Section 5.3). • A discussion on why encouraging data augmentation instead of explicit regularization can benefit both theory and practice in deep learning (Section 6). 2 raised the thought-provoking idea that "explicit regularization may improve generalization performance, but is neither necessary nor by itself sufficient for controlling generalization error." The authors came to this from the observation that turning off the explicit regularizers of a model does not prevent the model from generalizing reasonably well. This contrasts with traditional machine learning involving convex optimization, where regularization is necessary to avoid overfitting and generalize . Such observation led the authors to suggest the need for "rethinking generalization" in order to understand deep learning. 
We argue it is not necessary to rethink generalization if we instead rethink regularization and, in particular, data augmentation. Despite their thorough analysis and relevant , arguably underestimated the role of implicit regularization and considered data augmentation an explicit form of regularization much like weight decay and dropout. This illustrates that the terms explicit and implicit regularization have been used subjectively and inconsistently in the literature before. In order to avoid the ambiguity and facilitate the discussion, we propose the following definitions of explicit and implicit regularization 1: • Explicit regularization techniques are those which reduce the representational capacity of the model they are applied on. That is, given a model class H 0, for instance a neural network architecture, the introduction of explicit regularization will span a new hypothesis set H 1, which is a proper subset of the original set, i.e. H 1 H 0. • Implicit regularization is the reduction of the generalization error or overfitting provided by means other than explicit regularization techniques. Elements that provide implicit regularization do not reduce the representational capacity, but may affect the effective capacity of the model, that is the achievable set of hypotheses given the model, the optimization algorithm, hyperparameters, etc. One of the most common explicit regularization techniques in machine learning is L p -norm regularization, of which weight decay is a particular case, widely used in deep learning. Weight decay sets a penalty on the L 2 norm of the learnable parameters, thus constraining the representational capacity of the model. Dropout is another common example of explicit regularization, where the hypothesis set is reduced by stochastically deactivating a number of neurons during training. Similar to dropout, stochastic depth, which drops whole layers instead of neurons, is also an explicit regularization technique. There are multiple elements in deep neural networks that implicitly regularize the models. Note, in this regard, that the above definition, contrary to explicit regularization, does not refer to techniques, but to a regularization effect, as it can be provided by elements of very different nature. For instance, stochastic gradient descent (SGD) is known to have an implicit regularization effect without constraining the representational capacity. Batch normalization does not either reduce the capacity, but it improves generalization by smoothing the optimization landscape. Of quite a different nature, but still implicit, is the regularization effect provided by early stopping, which does not reduce the representational, but the effective capacity. By analyzing the literature, we identified some previous pieces of work which, lacking a definition of explicit and implicit regularization, made a distinction apparently based on the mere intention of the practitioner. Under such notion, data augmentation has been considered in some cases an explicit regularization technique, as in. Here, we have provided definitions for explicit and implicit regularization based on their effect on the representational capacity and argue that data augmentation is not explicit, but implicit regularization, since it does not affect the representational capacity of the model. 
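To make the distinction concrete, the snippet below shows how the two explicit regularizers studied here typically enter a model definition in Keras (the framework used for the experiments, as noted below): an L2 penalty on the kernels (weight decay) and a Dropout layer. Removing them changes the spanned hypothesis set; data augmentation, in contrast, never touches the model definition. Hyperparameter values are illustrative only.

```python
from tensorflow import keras
from tensorflow.keras import layers, regularizers

def conv_block(x, filters, weight_decay=None, dropout=None):
    """Minimal sketch: explicit regularizers are attached to the model itself,
    restricting the hypothesis set H_1 ⊂ H_0; data augmentation never appears here."""
    reg = regularizers.l2(weight_decay) if weight_decay else None
    x = layers.Conv2D(filters, 3, padding="same", activation="relu",
                      kernel_regularizer=reg)(x)
    if dropout:
        x = layers.Dropout(dropout)(x)
    return x

inputs = keras.Input((32, 32, 3))
h_regularized = conv_block(inputs, 96, weight_decay=1e-4, dropout=0.5)  # explicit reg.
h_plain = conv_block(inputs, 96)            # no explicit regularization, nothing else changes
```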
The generalization of a model class H can be analyzed through complexity measures such as the VC-dimension or, more generally, the Rademacher complexity R n (H) = E S∼D n R S (H), where: is the empirical Rademacher complexity, defined with respect to a set of data samples S = (x i, ..., x n). Then, in the case of binary classification and the class of linear separators, the generalization error of a hypothesis,ˆ S (h), can be bounded using the Rademacher complexity: with probability 1 − δ. Tighter bounds for some model classes, such as fully connected neural networks, can be obtained , but it is not trivial to formally analyze the influence on generalization of specific architectures or techniques. Nonetheless, we can use these theoretical insights to discuss the differences between explicit regularization-particularly weight decay and dropout-and implicit regularization-particularly data augmentation. A straightforward yet very relevant from the analysis of any generalization bound is the strong dependence on the number of training examples n. Increasing n drastically improves the generalization guarantees, as reflected by the second term in RHS of Equation 1 and the dependence of the Rademacher complexity (LHS) on the sample size as well. Data augmentation exploits prior knowledge of the data domain D to create new examples and its impact on generalization is related to an increment in n, since stochastic data augmentation can generate virtually infinite different samples. Admittedly, the augmented samples are not independent and identically distributed and thus, the effective increment of samples does not strictly correspond to the increment in n. This is why formally analyzing the impact of data augmentation on generalization is complex and out of the scope of this paper. Recently, some studies have taken steps in this direction by analyzing the effect of simplified data transformations on generalization from a theoretical point of view;. In contrast, explicit regularization methods aim, in general, at improving the generalization error by constraining the hypothesis class H, which hopefully should reduce its complexity, R n (H), and, in turn, the generalization errorˆ S (h). Crucially, while data augmentation exploits domain knowledge, most explicit regularization methods only naively constrain the hypothesis class. For instance, weight decay constrains the learnable models H by setting a penalty on the weights norm. have recently shown that weight decay has little impact on the generalization bounds and confidence margins. Dropout has been extensively used and studied as a regularization method for neural networks , but the exact way in which dropout may improve generalization is still an open question and it has been concluded that the effects of dropout on neural networks are somewhat mysterious, complicated and its penalty highly non-convex . have established new generalization bounds on the variance induced by a particular type of dropout on feedforward neural network. Nevertheless, dropout can also be analyzed as a random form of data augmentation without domain knowledge , that is data-dependent regularization. Therefore, any generalization bound derived for dropout can be regarded as a pessimistic bound for domain-specific, standard data augmentation. A similar argument applies for weight decay, which, as first shown by , is equivalent to training with noisy examples if the noise amplitude is small and the objective is the sum-of-squares error function. 
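The classical equivalence just cited can be checked numerically in the linear least-squares case: minimizing the expected sum-of-squares error over Gaussian input noise of variance sigma^2 is the same as adding an L2 penalty n*sigma^2*||w||^2. The sketch below, with illustrative problem sizes, compares the closed-form ridge (weight decay) solution against ordinary least squares on noise-augmented copies of the inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, sigma = 200, 10, 0.3
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)

# Weight decay: ridge regression with the penalty matched to the noise variance,
# since E_eps ||(X + eps) w - y||^2 = ||X w - y||^2 + n * sigma^2 * ||w||^2.
lam = n * sigma ** 2
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Data augmentation with input noise: least squares on many noisy copies of X.
reps = 500
X_aug = np.concatenate([X + sigma * rng.standard_normal(X.shape) for _ in range(reps)])
y_aug = np.tile(y, reps)
w_noise = np.linalg.lstsq(X_aug, y_aug, rcond=None)[0]

# The two solutions become close as the number of noisy replicas grows.
print(np.max(np.abs(w_ridge - w_noise)))
```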
In sum, many forms of explicit regularization are at least approximately equivalent to adding random noise to the training examples, which is the simplest form of data augmentation 2. Thus, it is reasonable to argue that more sophisticated data augmentation can overshadow the benefits provided by explicit regularization. In general, we argue that the reason why explicit regularization may not be necessary is that neural networks are already implicitly regularized by many elements-stochastic gradient descent (SGD), convolutional layers, normalization and data augmentation, to name a few-that provide a more successful inductive bias . For instance, it has been shown that linear models optimized with SGD converge to solutions with small norm, without any explicit regularization . In the remainder of the paper, we present a set of experiments that shed more light on the advantages of data augmentation over weight decay and dropout. This section describes the experimental setup for systematically analyzing the role of data augmentation in deep neural networks compared to weight decay and dropout and builds upon the methods used in preliminary studies (Hernández-García & König, 2018a; b;). We perform our experiments on three distinct, popular architectures that have achieved successful in object recognition tasks: the all convolutional network, All-CNN ; the wide residual network, WRN ; and the densely connected network, DenseNet . Importantly, we keep the same training hyper-parameters (learning rate, training epochs, batch size, optimizer, etc.) as in the original papers in the cases they are reported. Below we present the main features of each network and more details can be found in the supplementary material. • All-CNN: it consists only of convolutional layers with ReLU activation , it is relatively shallow and has few parameters. For ImageNet, the network has 16 layers and 9.4 million parameters; for CIFAR, it has 12 layers and 1.3 million parameters. In our experiments to compare the adaptability of data augmentation and explicit regularization to changes in the architecture, we also test a shallower version, with 9 layers and 374,000 parameters, and a deeper version, with 15 layers and 2.4 million parameters. • WRN: a residual network, ResNet , that achieves better performance with fewer layers, but more units per layer. Here, we choose for our experiments the WRN-28-10 version (28 layers and about 36.5 M parameters), which is reported to achieve the best on CIFAR. • DenseNet: a network architecture arranged in blocks whose layers are connected to all previous layers, allowing for very deep architectures with few parameters. Specifically, for our experiments we use a DenseNet-BC with growth rate k = 12 and 16 layers in each block, which has a total of 0.8 million parameters. We perform the experiments on the highly benchmarked data sets ImageNet (So as to analyze the role of data augmentation, we train every network architecture with two different augmentation schemes as well as with no data augmentation at all: • Light augmentation: This scheme is common in the literature, for example , and performs only horizontal flips and horizontal and vertical translations of 10% of the image size. • Heavier augmentation: This scheme performs a larger range of affine transformations such as scaling, rotations and shear mappings, as well as contrast and brightness adjustment. On ImageNet we additionally perform a random crop of 128 × 128 pixels. 
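For illustration, both schemes can be approximated with Keras' ImageDataGenerator, the framework reported in the experimental setup below; the parameter values for the heavier scheme are indicative only and do not reproduce the exact ranges used in the experiments.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Light scheme: horizontal flips plus 10% horizontal and vertical translations.
light_augmentation = ImageDataGenerator(
    horizontal_flip=True,
    width_shift_range=0.1,
    height_shift_range=0.1,
)

# Heavier scheme (illustrative ranges): wider affine transformations plus brightness
# adjustment; contrast changes would require a custom preprocessing_function.
heavier_augmentation = ImageDataGenerator(
    horizontal_flip=True,
    width_shift_range=0.1,
    height_shift_range=0.1,
    rotation_range=20,            # degrees
    zoom_range=0.2,               # scaling
    shear_range=10,               # shear mapping, degrees
    brightness_range=(0.7, 1.3),
)

# Usage sketch: model.fit(light_augmentation.flow(x_train, y_train, batch_size=128), ...)
```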
The choice of the allowed transformations is arbitrary and the only criterion was that the objects are still recognizable in general. We deliberately avoid designing a particularly successful scheme. The details of the heavier scheme can be consulted in the supplementary material. Every architecture is trained on each data set both with explicit regularization-weight decay and dropout as specified in the original papers-and with no explicit regularization. Furthermore, we train each model with the three data augmentation schemes. The performance of the models is computed on the held out test tests. As in previous works , we average the softmax posteriors over 10 random light augmentations, since slightly better are obtained. All the experiments are performed on Keras on top of TensorFlow and on a single GPU NVIDIA GeForce GTX 1080 Ti. This section presents the most relevant of the experiments comparing the roles of data augmentation and explicit regularization on convolutional neural networks. First, we present the experiments with the original architectures in section 5.1. Then, Sections 5.2 and 5.3 show the of training the models with fewer training examples and with shallower and deeper versions of the All-CNN architecture. The figures aim at facilitating the comparison between the models trained with and without explicit regularization, as well as between the different levels of data augmentation. The purple bars (top of each pair) correspond to the models trained without explicit regularization-weight decay and dropout-and the red bars (bottom) to the models trained with it. The different color shades correspond to the three augmentation schemes. The figures show the relative performance of each model with respect to a particular baseline in order to highlight the relevant comparisons. A detailed and complete report of all the can be found in the supplementary material. The on CIFAR refer to the top-1 test accuracy while on ImageNet we report the top-5. Figure 1: Relative improvement of adding data augmentation and explicit regularization to the baseline models, (accuracy − baseline)/accuracy * 100. The baseline accuracy is shown on the left. The suggest that data augmentation alone (purple bars) can achieve even better performance than the models trained with both weight decay and dropout (red bars). First, we contrast the regularization effect of data augmentation and weight decay and dropout on the original networks trained with the complete data sets. For that purpose, in Figure 1 we show the relative improvement in test performance achieved by adding each technique or combination of techniques to the baseline model, that is the model trained with neither explicit regularization nor data augmentation (see the left of the bars). Several can be extracted from Figure 1 and Table 1. Most importantly, training with data augmentation alone (top, purple bars) improves the performance in most cases as much as or even more than training with both augmentation and explicit regularization (bottom, red bars), on average Figure 2: Fraction of the baseline performance when the amount of available training data is reduced, accuracy/baseline * 100. The models trained wit explicit regularization present a significant drop in performance as compared to the models trained with only data augmentation. The differences become larger as the amount of training data decreases. 8.57 and 7.90 % respectively. 
This is quite a surprising and remarkable : note that the studied architectures achieved state-of-the-art at the moment of their publication and the models included both light augmentation and weight decay and dropout, whose parameters were presumably finely tuned to achieve higher accuracy. The replication of these corresponds to the middle red bars in Figure 1. We show here that simply removing weight decay and dropout-while even keeping all other hyperparameters intact, see Section 4.1-improves the formerly state-of-the-art accuracy in 4 of the 8 studied cases. Second, it can also be observed that the regularization effect of weight decay and dropout, an average improvement of 3.02 % with respect to the baseline, 1 is much smaller than that of data augmentation. Simply applying light augmentation increases the accuracy in 8.46 % on average. Finally, note that even though the heavier augmentation scheme was deliberately not designed to optimize the performance, in both CIFAR-10 and CIFAR-100 it improves the test performance with respect to the light augmentation scheme. This is not the case on ImageNet, probably due to the increased complexity of the data set. It can be observed though that the effects are in general more consistent in the models trained without explicit regularization. In sum, it seems that the performance gain achieved by weight decay and dropout can be achieved and often improved by data augmentation alone. We argue that one of the main drawbacks of explicit regularization techniques is their poor adaptability to changes in the conditions with which the hyperparameters were tuned. To test this hypothesis and contrast it with the adaptability of data augmentation, here we extend the analysis by training the Figure 3: Fraction of the original performance when the depth of the All-CNN architecture is increased or reduced in 3 layers. In the explicitly regularized models, the change of architecture implies a dramatic drop in the performance, while the models trained without explicit regularization present only slight variations with respect to the original architecture. same networks with fewer examples. The models are trained with the same random subset of data and evaluated in the same test set as the previous experiments. In order to better visualize how well each technique resists the reduction of training data, in Figure 2 we show the fraction of baseline accuracy achieved by each model when trained with 50 % and 10 % of the available data. In this case, the baseline is thus each corresponding model trained with the complete data set. Table 2 summarizes the mean and standard deviation of each combination. An extended report of , including additional experiments with 80 % and 1 % of the data, is provided in the supplementary material. One of the main of this set of experiments is that if no data augmentation is applied, explicit regularization hardly resist the reduction of training data by itself. On average, with 50 % of the available data, these models only achieve 83.20 % of the original accuracy, which, remarkably, is worse than the models trained without any explicit regularization (88.11 %). On 10 % of the data, the average fraction is the same (58.75 and 58.72 %, respectively). This implies that training with explicit regularization is even detrimental for the performance. 
When combined with data augmentation, the models trained with explicit regularization (bottom, red bars) also perform worse (88.78 and 61.16 % with 50 and 10 % of the data, respectively), than the models with just data augmentation (top, purple bars, 91.64 and 68.12 % on average). Note that the difference becomes larger as the amount of available data decreases. Importantly, it seems that the combination of explicit regularization and data augmentation is only slightly better than training without data augmentation. We can think of two reasons that could explain this: first, the original regularization hyperparameters seem to adapt poorly to the new conditions. The hyperparameters are specifically tuned for the original setup and one would have to re-tune them to achieve comparable . Second, since explicit regularization reduces the representational capacity, this might prevent the models from taking advantage of the augmented data. In contrast, the models trained without explicit regularization more naturally adapt to the reduced availability of data. With 50 % of the data, these models, trained with data augmentation achieve about 91.5 % of the performance with respect to training with the complete data sets. With only 10 % of the data, they achieve nearly 70 % of the baseline performance, on average. This highlights the suitability of data augmentation to serve, to a great extent, as true, useful data . Finally, in this section we test the adaptability of data augmentation and explicit regularization to changes in the depth of the All-CNN architecture (see Section 4.1). We show the fraction of the performance with respect to the original architecture in Figure 3. A noticeable from Figure 3 is that all the models trained with weight decay and dropout (bottom, red bars) suffer a dramatic drop in performance when the architecture changes, regardless of whether it becomes deeper or shallower and of the amount of data augmentation. As in the case of reduced training data, this may be explained by the poor adaptability of the regularization hyperparameters, which highly depend on the architecture. This highly contrasts with the performance of the models trained without explicit regularization (top, purple bars). With a deeper architecture, these models achieve slightly better performance, effectively exploiting the increased capacity. With a shallower architecture, they achieve only slightly worse performance 4. Thus, these models seem to more naturally adapt to the new architecture and data augmentation becomes beneficial. It is worth commenting on the particular case of the CIFAR-100 benchmark, where the difference between the models with and without explicit regularization is even more pronounced, in general. It is a common practice in object recognition papers to tune the parameters for CIFAR-10 and then test the performance on CIFAR-100 with the same hyperparameters. Therefore, these are typically less suitable for CIFAR-100. We believe this is the reason why the benefits of data augmentation seem even more pronounced on CIFAR-100 in our experiments. In sum, these highlight another crucial advantage of data augmentation: the effectiveness of its hyperparameters, that is the type of image transformations, depend mostly on the type of data, rather than on the particular architecture or amount of available training data, unlike explicit regularization hyperparameters. Therefore, removing explicit regularization and training with data augmentation increases the flexibility of the models. 
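For readers who want to reproduce the comparisons above, a minimal Python sketch of the two summary metrics used in the figures is given below: the relative improvement over the baseline and the fraction of the baseline performance. The accuracy values in the usage example are placeholders, not numbers from the experiments.

```python
def relative_improvement(accuracy, baseline):
    """Relative improvement over the baseline, as in Figure 1:
    (accuracy - baseline) / accuracy * 100."""
    return (accuracy - baseline) / accuracy * 100.0

def fraction_of_baseline(accuracy, baseline):
    """Fraction of the baseline performance, as in Figures 2 and 3:
    accuracy / baseline * 100."""
    return accuracy / baseline * 100.0

# Placeholder example: a model at 92.5% accuracy vs. an 89.0% baseline.
print(relative_improvement(92.5, 89.0))   # ~3.78
print(fraction_of_baseline(44.5, 89.0))   # 50.0, e.g. when trained on less data
```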
We have presented a systematic analysis of the role of data augmentation in deep convolutional neural networks for object recognition, focusing on the comparison with popular explicit regularization techniques-weight decay and dropout. In order to facilitate the discussion and the analysis, we first proposed in Section 2 definitions of explicit and implicit regularization, which have been ambiguously used in the literature. Accordingly, we have argued that data augmentation should not be considered an explicit regularizer, such as weight decay and dropout. Then, we provided some theoretical insights in Section 3 that highlight some advantages of data augmentation over explicit regularization. Finally, we have empirically shown that explicit regularization is not only unnecessary , but also that its generalization gain can be achieved by data augmentation alone. Moreover, we have demonstrated that, unlike data augmentation, weight decay and dropout exhibit poor adaptability to changes in the architecture and the amount of training data. Despite the limitations of our empirical study, we have chosen three significantly distinct network architectures and three data sets in order to increase the generality of our , which should ideally be confirmed by future work on a wider range of models, data sets and even other domains such text or speech. It is important to note, however, that we have taken a conservative approach in our experimentation: all the hyperparameters have been kept as in the original models, which included both weight decay and dropout, as well as light augmentation. This setup is clearly suboptimal for models trained without explicit regularization. Besides, the heavier data augmentation scheme was deliberately not optimized to improve the performance and it was not the scope of this work to propose a specific data augmentation technique. As future work, we plan to propose data augmentation schemes that can more successfully be exploited by any deep model. The relevance of our findings lies in the fact that explicit regularization is currently the standard tool to enable the generalization of most machine learning methods and is included in most convolutional neural networks. However, we have empirically shown that simply removing the explicit regularizers often improves the performance or only marginally reduces it, if some data augmentation is applied. These are supported by the theoretical insights provided in in Section 3. suggested that regularization might play a different role in deep learning, not fully explained by statistical learning theory . We have argued instead that the theory still naturally holds in deep learning, as long as one considers the crucial role of implicit regularization: explicit regularization seems to be no longer necessary because its contribution is already provided by the many elements that implicitly and successfully regularize the models: to name a few, stochastic gradient descent, convolutional layers and data augmentation. Data augmentation is often regarded by authors of machine learning papers as cheating, something that should not be used in order to test the potential of a newly proposed architecture (; ;). In contrast, weight decay and dropout are almost ubiquitous and considered intrinsic elements of the algorithms. 
In view of the presented here, we believe that the deep learning community would benefit if we rethink data augmentation and switch roles with explicit regularization: a good model should generalize well without the need for explicit regularization and successful methods should effectively exploit data augmentation. In this regard it is worth highlighting some of the advantages of data augmentation: Not only does it not reduce the representational capacity of the model, unlike explicit regularization, but also, since the transformations reflect plausible variations of the real objects, it increases the robustness of the model and it can be seen as a data-dependent prior, similarly to unsupervised pre-training . have shown that data augmentation consistently yields models with smaller sensitivity to perturbations. Interestingly, recent work has found that models trained with heavier data augmentation learn representations that are more similar to the inferior temporal (IT) cortex, highlighting the biological plausibility of data augmentation (Hernández-García et al., 2018). Deep neural networks are especially well suited for data augmentation because they do not rely on pre-computed features and because the large number of parameters allows them to shatter the augmented training set. Moreover, unlike explicit regularization, data augmentation can be performed on the CPU, in parallel to the gradient updates. Finally, an important from Sections 5.2 and 5.3 is that data augmentation naturally adapts to architectures of different depth and amounts of available training data, whereas explicitly regularized models are highly sensitive to such changes and need specific fine-tuning of their hyperparameters. In sum, data augmentation seems to be a strong alternative to explicit regularization techniques. Some argue that despite these advantages, data augmentation is a limited approach because it depends on some prior expert knowledge and it cannot be applied to all domains. However, we argue instead that expert knowledge should not be disregarded but exploited. A single data augmentation scheme can be designed for a broad family of data (for example, natural images) and effectively applied to a broad set of tasks (for example, object recognition, segmentation, localization, etc.). Besides, interesting recent works have shown that it is possible to automatically learn the data augmentation strategies . We hope that these insights encourage more research attention on data augmentation and that future work brings more sophisticated and effective data augmentation techniques, potentially applicable to different data modalities. This appendix presents the details of the network architectures used in the main experiments: All-CNN, Wide Residual Network (WRN) and DenseNet. All-CNN is a relatively simple, small network with a few number of layers and parameters, WRN is deeper, has residual connections and many more parameters and DenseNet is densely connected and is much deeper, but parameter effective. A.1 ALL CONVOLUTIONAL NETWORK All-CNN consists exclusively of convolutional layers with ReLU activation , it is relatively shallow and has few parameters. For ImageNet, the network has 16 layers and 9.4 million parameters; for CIFAR, it has 12 layers and about 1.3 million parameters. 
In our experiments to compare the adaptability of data augmentation and explicit regularization to changes in the architecture, we also test a shallower version, with 9 layers and 374,000 parameters, and a deeper version, with 15 layers and 2.4 million parameters. The four architectures can be described as in Table 3, where KCD(S) is a D × D convolutional layer with K channels and stride S, followed by batch normalization and a ReLU non-linearity. N.Cl. is the number of classes and Gl. Avg. refers to global average pooling. The CIFAR network is identical to the All-CNN-C architecture in the original paper, except for the introduction of the batch normalization layers. The ImageNet version also includes batch normalization layers and a stride of 2 instead of 4 in the first layer to compensate for the reduced input size (see below). Importantly, we keep the same training parameters as in the original paper in the cases they are reported. Specifically, the All-CNN networks are trained using stochastic gradient descent, with fixed Nesterov momentum 0.9, learning rate of 0.01 and decay factor of 0.1. The batch size for the experiments on ImageNet is 64 and we train during 25 epochs decaying the learning rate at epochs 10 and 20. On CIFAR, the batch size is 128, we train for 350 epochs and decay the learning rate at epochs 200, 250 and 300. The kernel parameters are initialized according to the Xavier uniform initialization . WRN is a modification of ResNet that achieves better performance with fewer layers, but more units per layer. Here we choose for our experiments the WRN-28-10 version (28 layers and about 36.5 M parameters), which is reported to achieve the best on CIFAR. It has the following architecture: where KR is a residual block with residual function BN-ReLU-KC3-BN-ReLU-KC 3. BN is batch normalization, Avg. is spatial average pooling of size 8 and FC is a fully connected layer. On ImageNet, the stride of the first convolution is 2. The stride of the first convolution within the residual blocks is 1 except in the first block of the series of 4, where it is set to 2 in order to subsample the feature maps. Similarly, we keep the training parameters of the original paper: we train with SGD, with fixed Nesterov momentum 0.9 and learning rate of 0.1. On ImageNet, the learning rate is decayed by 0.2 at epochs 8 and 15 and we train for a total of 20 epochs with batch size 32. On CIFAR, we train with a batch size of 128 during 200 epochs and decay the learning rate at epochs 60, 120 and 160. The kernel parameters are initialized according to the He normal initialization . The main characteristic of DenseNet is that the architecture is arranged into blocks whose layers are connected to all the layers below, forming a dense graph of connections, which permits training very deep architectures with fewer parameters than, for instance, ResNet. Here, we use a network with bottleneck compression rate θ = 0.5 (DenseNet-BC), growth rate k = 12 and 16 layers in each of the three blocks. The model has nearly 0.8 million parameters. The specific architecture can be descried as follows: where DB(c) is a dense block, that is a concatenation of c convolutional blocks. Each convolutional block is of a set of layers whose output is concatenated with the input to form the input of the next convolutional block. 
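As an illustration of the notation above, here is a minimal Keras sketch of the K C D(S) building block (a D × D convolution with K channels and stride S, followed by batch normalization and ReLU) together with the SGD optimizer with fixed Nesterov momentum 0.9 and learning rate 0.01 described for All-CNN. The channel count, kernel size, input shape, and number of classes are placeholder assumptions; this is not the full architecture table from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, optimizers

def conv_block(x, channels, kernel_size, stride):
    """K C D(S): D x D convolution with K channels and stride S,
    followed by batch normalization and a ReLU non-linearity."""
    x = layers.Conv2D(channels, kernel_size, strides=stride, padding="same")(x)
    x = layers.BatchNormalization()(x)
    return layers.Activation("relu")(x)

# Placeholder usage: one block with 96 channels, 3x3 kernels, stride 1 on CIFAR-sized inputs.
inputs = tf.keras.Input(shape=(32, 32, 3))
x = conv_block(inputs, channels=96, kernel_size=3, stride=1)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(10, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

# SGD with fixed Nesterov momentum 0.9 and learning rate 0.01, as described above.
model.compile(optimizer=optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=True),
              loss="categorical_crossentropy", metrics=["accuracy"])
```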
A convolutional block with bottleneck structure has the following layers: TB is a transition block, which downsamples the size of the feature maps, formed by the following layers: Like with All-CNN and WRN, we keep the training hyper-parameters of the original paper. On the CIFAR data sets, we train with SGD, with fixed Nesterov momentum 0.9 and learning rate of 0.1, decayed by 0.1 on epochs 150 and 200 and training for a total of 300 epochs. The batch size is 64 and the are initialized with He initialization. In this appendix we present the details of the heavier data augmentation scheme, introduced in Section 3.2: • Affine transformations: This appendix details the of the main experiments shown in Figures 1, 2 and 3 and provides the of many other experiments not presented above in order not to clutter the visualization. Some of these are the top-1 accuracy on ImageNet, the of the models trained with dropout, but without weight decay; and the of training with 80 % and 1 % of the data. Additionally, for many experiments we also train a version of the network without batch normalization. These are provided within brackets in the tables. Note that the original All-CNN published by did not include batch normalization. In the case of WRN, we remove all batch normalization layers except the top-most one, before the spatial average pooling, since otherwise many models would not converge. An important observation from Table 5 is that the interaction of weight decay and dropout is not always consistent, since in some cases better can be obtained with both explicit regularizers active and in other cases, only dropout achieves better generalization. In contrast, the effect of data augmentation seems to be consistent: just some light augmentation achieves much better than training only with the original data set and performing heavier augmentation almost always further improves the test accuracy, without the need for explicit regularization. Not surprisingly, batch normalization also contributes to improve the generalization of All-CNN and it seems to combine well with data augmentation. On the contrary, when combined with explicit regularization the are interestingly not consistent in the case of All-CNN: it seems to improve the generalization of the model trained with both weight decay and dropout, but it drastically reduces the performance with only dropout, in the case of CIFAR-10 and CIFAR-100 without augmentation. A probable explanation is, again, that the regularization hyperparameters would need to be readjusted with a change of the architecture. Furthermore, it seems that the gap between the performance of the models trained with and without batch normalization is smaller when they are trained without explicit regularization and when they include heavier data augmentation. This can be observed in Table 5, as well as in Table 6, which contains the of the models trained with fewer examples. It is important to note as well the benefits of batch normalization for obtaining better when training with fewer examples. However, it is surprising that there is only a small drop in the performance of WRN-95.47 % to 94.95 % without regularization-from removing the batch normalization layers of the residual blocks, given that they were identified as key components of ResNet . The in Table 6 clearly support the presented in Section 4.2: data augmentation alone better resists the lack of training data compared to explicit regularizers. 
Already with 80% and 50% of the data better are obtained in some cases, but the differences become much bigger when training with only 10% and 1% of the available data. It seems that explicit regularization prevents the model from both fitting the data and generalizing well, whereas data augmentation provides useful transformed examples. Interestingly, with only 1% of the data, even without data augmentation the models without explicit regularization perform better. The same effect can be observed in Table 7, where both the shallower and deeper versions of All-CNN perform much worse when trained with explicit regularization, even when trained without data augmentation. This is another piece of evidence that explicit regularization needs to be used very carefully, it requires a proper tuning of the hyperparameters and is not always beneficial. In this appendix we provide the computations of the Frobenius norm of the weight matrices of the models trained with different levels of explicit regularization and data augmentation, as a rough estimation of the complexity of the learned models. Table 8 shows the Frobenius norm of the weight matrices of the models trained with different levels of explicit regularization and data augmentation. The clearest is that heavier data augmentation seems to yield solutions with larger norm. This is always true except in some All-CNN models trained without batch normalization. Another observation is that, as expected, weight decay constrains the norm of the learned function. Besides, the models trained without batch normalization exhibit smaller differences between different levels of regularization and augmentation and, in the case of All-CNN, less consistency. One of the relevant presented in this paper is the poor performance of the regularized models on the shallower and deeper versions of All-CNN, compared to the models without explicit regularization (see Table 7). One hypothesis is that the amount of regularization is not properly adjusted through the hyperparameters. This could be reflected in the norm of the learned weights, shown in Table 9. However, the norm alone does not seem to fully explain the large performance differences between the different models. Finding the exact reasons why the regularized models not able to generalize well might require a much thorough analysis and we leave it as future work. Although it is out of the scope of this paper to elaborated on the taxonomy of regularization techniques for deep neural networks, an important contribution of this work is providing definitions of explicit and implicit regularization, which have been used ambiguously in the literature before. It is therefore worth mentioning here some of the previous works that have used these terms and to point to literature that has specifically elaborated on the regularization taxonomy or proposed other related terms. et al. observed that the size of neural networks could not explain and control by itself the effective capacity of neural networks and proposed that other elements should implicitly regularize the models. However, no definitions or clear distinction between explicit and implicit regularization was provided. compared different regularization techniques and mentioned the role of implicit regularization, but did not provide definitions either, and, importantly, they considered data augmentation an explicit form of regularization. We have argued against that view throughout this paper, especially in Sections 2 and 6.1. 
An extensive review of the taxonomy of regularization techniques was carried out by Kukačka et al. Although no distinction is made between explicit and implicit regularization, they define the class of regularization via optimization, which is somewhat related to implicit regularization. However, regularization via optimization is more specific than our definition, and data augmentation, among others, would not fall into that category. Other authors provided a distinction between data-independent and data-dependent regularization. They define data-independent regularization as those techniques that impose certain constraints on the hypothesis set, thus constraining the optimization problem. Examples are weight decay and dropout. We believe this is closely related to our definition of explicit regularization. Then, they define data-dependent regularization as those techniques that make assumptions on the hypothesis set with respect to the training data, as is the case of data augmentation. While we acknowledge the usefulness of such a taxonomy, we believe the division between data-independent and data-dependent regularization leaves some ambiguity about other techniques, such as batch normalization, which imposes an explicit constraint neither on H nor on the training data. The taxonomy of explicit vs. implicit regularization is however complete, since implicit regularization refers to any regularization effect that does not come from explicit (or data-independent) techniques. Finally, we argue it would be useful to distinguish between domain-specific, perceptually-motivated data augmentation and other kinds of data-dependent regularization. Data augmentation ultimately aims at creating new examples that could be plausible transformations of real-world objects. In other words, the augmented samples should be no different in nature from the available data. In statistical terms, they should belong to the same underlying probability distribution. In contrast, one can think of data manipulations that would not mimic any plausible transformation of the data, which can still improve generalization and thus fall into the category of data-dependent regularization (and implicit regularization). One example is mixup.
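As a brief, hedged illustration of this kind of data-dependent manipulation, the following NumPy sketch shows the core of mixup as it is commonly described: convex combinations of pairs of inputs and of their one-hot labels, with the mixing coefficient drawn from a Beta distribution. The Beta parameter, array shapes, and function name are illustrative assumptions, not the implementation studied in the cited work.

```python
import numpy as np

def mixup_batch(x, y_onehot, alpha=0.2, rng=np.random.default_rng()):
    """Return a mixup batch: convex combinations of examples and labels.
    x: (batch, ...) inputs, y_onehot: (batch, classes) one-hot labels."""
    lam = rng.beta(alpha, alpha)            # mixing coefficient
    perm = rng.permutation(len(x))          # random pairing of examples
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return x_mix, y_mix

# Placeholder usage with random data: 8 "images" of 32x32x3 and 10 classes.
x = np.random.rand(8, 32, 32, 3)
y = np.eye(10)[np.random.randint(0, 10, size=8)]
x_mix, y_mix = mixup_batch(x, y, alpha=0.2)
```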
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
H1eqOnNYDH
Deep neural networks trained with data augmentation do not require any other explicit regularization (such as weight decay and dropout) and exhibit greater adaptability to changes in the architecture and the amount of training data.
Adversarial feature learning (AFL) is one of the promising ways for explicitly constrains neural networks to learn desired representations; for example, AFL could help to learn anonymized representations so as to avoid privacy issues. AFL learn such a representations by training the networks to deceive the adversary that predict the sensitive information from the network, and therefore, the success of the AFL heavily relies on the choice of the adversary. This paper proposes a novel design of the adversary, {\em multiple adversaries over random subspaces} (MARS) that instantiate the concept of the {\em volunerableness}. The proposed method is motivated by an assumption that deceiving an adversary could fail to give meaningful information if the adversary is easily fooled, and adversary rely on single classifier suffer from this issues. In contrast, the proposed method is designed to be less vulnerable, by utilizing the ensemble of independent classifiers where each classifier tries to predict sensitive variables from a different {\em subset} of the representations. The empirical validations on three user-anonymization tasks show that our proposed method achieves state-of-the-art performances in all three datasets without significantly harming the utility of data. This is significant because it gives new implications about designing the adversary, which is important to improve the performance of AFL. Since its invention over ten years ago BID4, deep neural networks (DNN) have shown significant performance improvements in various fields. When we apply DNN or more general machine learning techniques to real-world data, one of the key challenges is how to systematically incorporate the desired constraints into the learned representations in a controllable manner. For example, when practitioners apply these techniques to the data that contain a lot of user information (such as images with username BID1 or data of wearables BID6), the desired representations should not contain user-information that may in privacy issues. Moreover, for legal and ethical reasons, machine learning algorithms have to make fair decisions, which do not rely on sensitive variables such as gender, age, or race BID8 BID1. Such a requires removal of information related to specific factors (such as user ID, race, etc.) from the representation; this is called censoring representations in this paper. One of the recently proposed approaches for censoring representation is adversarial feature learning (AFL) BID1 BID6 BID13, which employs the adversarial training framework to constrain the representations FIG0. Specifically, AFL considers an adversarial classifier who attempts to predict sensitive variables from the representations of a DNN and simultaneously trains the DNN to deceive the classifier. By alternatively or jointly (using gradient reversal layer proposed by BID2) training the adversary and DNN in such a manner, AFL ensures that there is little or no information about the sensitive variables in the representations. Although some previous studies report significant performance improvements of the AFL in the context of censoring representations, the success of the AFL depends on the choice of the adversarial classifier. For example, if we use a logistic regression as the adversarial classifier, AFL can only eliminate the information that is linearly separated in the representation spaces and cannot remove any non-linear dependency. 
It is also possible that deceiving some classifier might be too easy, resulting in poor performance improvements of AFL. As such, the design of the adversary is crucial for the performance of AFL; however, existing studies fail to address how to design the adversary for improving the quality of AFL. In this paper, we propose a novel design of the adversary for improving the performance of AFL, multiple adversaries over random subspaces (MARS), which considers the vulnerableness of the adversary. The proposed design is motivated by the recent report BID6 that just increasing the capacity of the adversary does not successfully improve the performance of AFL, and by the assumptions that deceiving an adversary fails to give meaningful information if the adversary is easily fooled, and that an adversary relying on a single classifier suffers from this issue. The proposed method incorporates multiple adversaries where each adversary tries to predict sensitive variables from a different subset of the representations. This design makes the adversary less vulnerable to the update of the encoder, since the encoder needs to deceive a set of diverse adversaries. In this paper, we validate the effectiveness of the proposed design by empirically showing that MARS achieves better performance compared to baselines (that use a single adversary or multiple adversaries over the entire representation space), and that MARS is less vulnerable compared to the baselines. The primary contributions of this paper are as follows: • This is the first study verifying the importance of the design of the adversary in AFL, and it proposes a novel design for improving AFL. This is significant because the results suggest that the design of the adversary is vital for the performance of AFL, and it gives new implications about designing the adversary in AFL, which is important to improve the performance of AFL. It is worth mentioning that, except for our paper, all existing studies focus only on the accuracy/capacity for designing adversaries, which is not enough for improving the performance of AFL, as shown in this paper. • The proposed method achieved state-of-the-art performance in the task of censoring representations, which is essential to extend the applicability of DNNs to many real-world applications. The empirical validation using three user-anonymization tasks shows that the proposed method allows the learning of significantly more anonymized representations with negligible performance degradation. Specifically, the probability of correctly predicting the user ID from learned representations is more than 0.07 points better on average than that of a single adversary and multiple adversaries over the entire representation space. 2 PROBLEM DEFINITION AND RELATED WORKS 2.1 PROBLEM DEFINITION: CENSORING REPRESENTATIONS Censoring representations is the task of obtaining unbiased features. Here, unbiased features are features that are less affected by S, where S is a random variable that we want to remove from the data for some reason. One typical reason is related to fairness or privacy, which requires the output of neural networks not to be affected by unfair information or not to contain user information. It should be noted that poor design of the censoring procedure significantly reduces the utility of the data. For example, the output of a random mapping f_rand apparently has no information about S, but it also gives no information about the target Y.
Alternatively, as a more realistic example, a neural network with limited capacity possibly acquires less information about S, but it may also in poorer performance. Therefore, the primary goal of censoring representation is to obtain an encoder E that reduces information about S, while maintaining information about Y. Formally, the task can be written as a joint optimization problem of the loss: DISPLAYFORM0 where X indicates the input random variable, E is an encoder that transforms X to representation R, λ is the weighting parameter, and V and L are loss functions that represent how much information about S and Y is present, respectively. Note that S can be any form of variables such as binary variable, categorical variable, or continuous variable. In this paper, we primarily consider a particular variant of censoring representation tasks, where we learn E with deep neural networks and S is the user ID (anonymization tasks). A recently proposed approach for the censoring representation task is adversarial feature learning (AFL). In AFL, V (E(X), S) is measured by an external neural network D. In other words, if the external networks can accurately predict S from R, AFL regards that R has too much information about S and if it is difficult to predict S accurately, AFL regards that R has little or no information about S. The external network D is called discriminator or adversary in this context. The information is used to update the weights of the encoder E so that the updated representations have less information about S. Formally, AFL solves the joint optimization problems: min To the best of the authors' knowledge, the adversarial training framework was first introduced by BID12, and later re-invented by BID3 in the context of an imagegeneration task. In the context of censoring representations, BID1 first proposed the use of adversarial training to remove sensitive information from the representations. They show its efficacy for fair classification tasks (S is binary) compared to an LFR (Learned Fair Representation) proposed by BID14 that regularizes the l 1 distance between the distributions for data with different S. BID6 first applied the adversarial training to learn anonymized representations on the data of wearables, where S is categorical. More recently, BID13 first introduced the notion of adversarial feature learning and showed superior performances compared to variational fair auto-encoder BID8. DISPLAYFORM0 Although AFL already shows state-of-the-art performance to learn unbiased representations, how to improve its performance is still a challenge, which is tackled in this paper. This work is motivated by previous studies conducted by BID6. They reported that simply using highcapacity networks as an adversary (such as deep neural networks) sometimes fails to improve the performance; on the other hand, the learned representations are still highly biased if the adversary does not have enough capacity. Our work can be regarded as an introduction of the new design consideration (i.e., vulnerableness) of adversary for improving the quality of AFL and validation of it by proposing a method that instantiates the concept, which is not done in the previous works. It is worth mentioning that the concept of multiple adversaries itself is already proposed and verified in the context of the image generation with adversarial training BID0, and this paper does not insist that using multiple adversaries is the primal novelty. 
From the methodological perspective, the proposed method (MARS) can be seen as an extension of the concept of multiple adversaries by introducing the concept of diversity, and we verify that this extension is essential in the context of censoring representations. Though the primary focus of this paper is limited to the context of adversarial feature learning, we further discuss the applicability of the proposed method to other applications of adversarial training, such as image/text generation BID3 BID16 or domain adaptation BID2; this is done at the end of this paper. The proposed method, multiple adversaries over random subspaces (MARS), considers multiple adversaries where each adversary is in charge of a different subset of features. The development of MARS is motivated by the assumption that the adversary should not be vulnerable and that an ensemble of diverse classifiers makes the adversary less vulnerable, resulting in improved performance of AFL. The experimental sections of this paper empirically validate the importance of the design by showing that MARS gives superior performance compared to the method with a single adversary or a simple ensemble of adversaries without enforcing diverse views, and that using an ensemble of independent classifiers as the adversary actually makes the adversary robust. It is worth mentioning that the proposed method could also benefit from variance reduction, because an ensemble of diverse classifiers typically reduces the variance. More generally, this technique is the so-called random subspace method in the context of ensemble research and is widely used in various methods, such as random forests BID5. To make clearer what parts of the proposed method contribute to the performance gain, we also verify the effect of variance reduction later in the experimental sections. 3.2 ALGORITHM FIG0 compares the overall architectures of AFL with a single adversary and with multiple adversaries over random subspaces. Since the primary difference between naive AFL and MARS is how to build the adversary (training adversaries) and how to use the information of the adversary (training encoders), we focus our explanation on these parts. There are many possible options to select a subspace; however, this paper considers the case where each subspace is randomly selected. Suppose that n is the number of dimensions of the representation R ∈ R^n, and K is the number of adversaries. In MARS, each adversary D_k is trained to predict S over a randomly selected subset of features R_k, whose dimension is m_k with m_k < n. Each adversary is trained to maximize the expected log-likelihood. Formally, the optimization problem for D_k is as follows: max DISPLAYFORM0 where Sub_k is a function that returns the subset of R, which is fixed before training. Precisely, each Sub_k determines whether to remove the i-th dimension of R with probability α. Here θ_{D_k} denotes the parameters of D_k. The optimization is done by stochastic gradient descent as usual. The adversaries are then integrated, and a single prediction is made to train the encoder. Formally, the probability distribution q_D(s = s_i | h = E(x)) is parameterized by an ensembled adversary D: DISPLAYFORM1 which means that the predictions of the adversaries D_k are integrated by averaging.
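To make this procedure concrete, here is a minimal NumPy sketch of the random-subspace ensemble. The adversaries are abstracted as callables returning class probabilities, and the masks implement Sub_k by zeroing out dropped dimensions (with each dimension dropped with probability α) rather than literally removing them, which is a simplification. All shapes, the dummy adversaries, and the helper names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def make_subspace_masks(n_dims, n_adversaries, alpha, rng=np.random.default_rng()):
    """Fixed binary masks: each Sub_k keeps a dimension with probability 1 - alpha."""
    return (rng.random((n_adversaries, n_dims)) >= alpha).astype(np.float32)

def ensemble_predict(h, masks, adversaries):
    """q_D(s | h): average the class probabilities of the K adversaries,
    each applied to its own masked view Sub_k(h) of the representation."""
    probs = [adv(h * mask) for adv, mask in zip(adversaries, masks)]
    return np.mean(probs, axis=0)

# Placeholder usage: K = 5 adversaries over a 64-dimensional representation.
K, n = 5, 64
masks = make_subspace_masks(n, K, alpha=0.5)

def make_dummy_adversary(n_classes=4, rng=np.random.default_rng()):
    """Stand-in for an MLP adversary: softmax over a random linear map."""
    W = rng.normal(size=(n, n_classes))
    def adv(x):
        logits = x @ W
        e = np.exp(logits - logits.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)
    return adv

adversaries = [make_dummy_adversary() for _ in range(K)]
h = np.random.rand(8, n)                       # a batch of representations E(x)
q_s = ensemble_predict(h, masks, adversaries)  # averaged prediction used to train E
```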
Note that the integration operator F does not have to be averaging; it may also be the max operator or the softened version of the max operator proposed in BID0. The encoder is trained to minimize the log-likelihood of Eq. 2 and the negative log-likelihood of q_M(y|h). Formally, DISPLAYFORM2 where θ_E and θ_M are the parameters of E and M, respectively. Algorithm 1 shows the overall algorithm. Algorithm 1 (Optimization of the proposed model): while not converged do: DISPLAYFORM0; update the weights of E and M (Eq. 3); end while. We used two datasets to demonstrate the efficacy of MARS. Both datasets are related to human activity recognition using the data of wearables, where privacy issues have recently been pointed out by BID6. In both datasets, the task is to learn anonymized representations (R that does not contain user-identifiable information), while maintaining classification performance. The Opportunity recognition dataset BID10 consists of sensory data regarding human activity in a breakfast scenario, and the task is to predict the activity performed by a user. A wide variety of body-worn, object-based, and ambient sensors are used (see FIG0 in BID10 for more details). Each record consists of 113 real-valued sensory readings, excluding time information. We considered two recognition tasks: gesture recognition (Opp-G) and locomotion recognition (Opp-L). Opp-G requires the recognition of 18 gesture classes, while Opp-L requires the recognition of 4 locomotion classes, i.e., stand, walk, sit, and lie. Given a sampling frequency of 30 Hz, the sliding window procedure with 30 frames and a 50% overlap produced 57,790 samples. We used data from subjects 2-4 as training/validation sets (90% for training and 10% for validation) and that of subject 1 as the test set. The USC-HAD dataset BID15 is a relatively new benchmark dataset, which contains a relatively large number of subjects (14 subjects: 7 males and 7 females). The data include 12 activity classes corresponding to the most basic and everyday activities of people's daily lives. MotionNode, which is a 6-DOF inertial measurement unit specially designed for human motion sensing applications, is used to record the outputs from accelerometers, giving 6 real-valued sensory readings. The sliding window procedure, using 30 frames and a 50% overlap, produced 172,169 samples. We used data from subjects 1-10 as training/validation sets (90% for training and 10% for validation) and that of subjects 11-14 as the test set. In all experiments, we parameterized the encoder E by a convolutional neural network (CNN) with three convolution-ReLU-pooling blocks followed by one fully connected layer, and M by logistic regression, following a previous study BID6. Every component of the network was trained with the Adam algorithm (learning rate set to 0.0001) BID7 for 150 epochs. In the optimization, we used the annealing heuristic for the weighting parameter λ following BID6; we linearly increased λ from epoch 15 to epoch 135 of training. Following previous works, we evaluated the level of anonymization by training a classifier f_eva that tries to predict S from the learned representations. That is, we regard a learned representation as well anonymized if we cannot build an accurate classifier. To be more specific, we trained the evaluator with the data used for training the encoder E and a part of the data that was not used for training E. The rest of the data is used for the evaluations.
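As a small illustration of the sliding-window segmentation described above (30-frame windows with 50% overlap at 30 Hz), the following NumPy sketch shows one way such windows could be produced. The array shapes and recording length are placeholders; this is not the authors' preprocessing code.

```python
import numpy as np

def sliding_windows(signal, window=30, overlap=0.5):
    """Segment a (time, channels) recording into fixed-length windows.
    With window=30 and overlap=0.5 the stride is 15 frames."""
    stride = int(window * (1.0 - overlap))
    starts = range(0, signal.shape[0] - window + 1, stride)
    return np.stack([signal[s:s + window] for s in starts])

# Placeholder usage: 10 minutes of 113-channel data sampled at 30 Hz.
recording = np.random.rand(10 * 60 * 30, 113)
windows = sliding_windows(recording, window=30, overlap=0.5)
print(windows.shape)  # (number of windows, 30, 113)
```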
To demonstrate the efficacy of the proposed method, we compare the following four methods and its variants: None: w/o adversary (correspond to standard CNN), Adv: w/ a single adversary, MA: w/ multiple adversaries where each adversary tries to predict from the entire space, and MARS: w/ multiple adversaries where each adversary tries to predict from a different subspace of the entire space. Each adversary is parametrized by multi-layer perceptron (MLP) with 800 hidden units. If we need to express the type of adversary, we denote it by a suffix. For example, Adv-LR means logistic regression parametrizes an adversary. Without mentioning otherwise, we set the num- In addition to None, Adv, MA, MARS, we compared a logistic repression (Adv-LR), DNNs with 400-200 hidden units, the ensemble of DNNs (MA-DNN), and the ensemble of DNNs over random subspaces (MARS-DNN) for E. Adv 2 and Adv 5 correspond to the case where we train an adversary with MLP 2 iterations or 5 iterations against one iteration for training encoder E. For evaluator f eva, we tested LR, multi-layer perceptron with 50 hidden units (MLP 1), multi-layer perceptron with 800 hidden units (MLP 2), and deep neural networks with 400-200 hidden units. The best performance in each combination of datasets and f eva is highlighted in underline and bold. The second best performance is highlighted in bold. We can make the following observations. For small-capacity evaluator (LR), a single adversary (Adv and its variants) marks superior performances to MA and MARS. However, the method with single adversary tends to give poor performance for the stronger adversary (MLP 2 or DNN). For high-capacity evaluator (MLP 2 or DNN), the proposed method outperforms baseline methods. Specifically, in the case when the evaluator is DNN, MARS and MARS-DNN marks 0.806 and 0.789 on average of user accuracy, which is significantly better than the best baseline Adv-DNN (0.863 on average of user accuracy). Increasing the iterations of training adversary does not pay off, as shown in the weak performance improvements of Adv 2 or Adv 5 against Adv. In addition to the quantitative evaluation, we also give qualitative evaluation by visualizing the learned representations using t-SNE BID9 (Figure 2). Precisely, we randomly sample 1,000 examples from the validation sets and reduce the dimension of data by using principal component analysis over the output of the E to speed up the computations and suppress some noise. The number of components is selected such that the amount of variance that needs to be explained The interesting thing here is, one method fails to achieve good final performance and the other achieve it even if both methods give the similar accuracy of D during training. For example, in the Opp-L dataset (center of the Figure), MA-DNN and MARS show similar performance curve during training, but MARS gives more than 0.10 better score at final (0.85 to 0.96). This implies that accuracy of the discriminator is not an only definitive factor that determines the success of AFL.Figure 3-c shows how the update of E affect the − log q D k. Specifically, each line represents the average effect of the loss by the update E, and the shaded area represents the minimum and maximum effect of the loss, which means that larger shaded area indicates the higher variety of the D k. The number listed next to each method represents the standard deviation of the classifiers. 
This result demonstrates that the influences of the update on each adversary are more varied for MARS compared to MA, which implies that the adversaries with higher α have more diversity, which in turn makes D less vulnerable to the update. Figures 4-a and b compare the performance for different K and α. Note that α = 0.0 means the method using the entire feature space, which is MA. The results show that, although there is no clear winner among the values of α, using subspaces gives lower accuracy in predicting S; the number of discriminators K affects the accuracy on S, while the accuracy on Y is robust to K, especially if α ≠ 0, indicating the possibility that the performance of AFL could be improved by incorporating multiple adversaries. This study proposed MARS, which incorporates multiple adversaries where each adversary has a different role, and conducted empirical validations of the efficacy of the proposed method for censoring representations, specifically user anonymization for the data of wearables. TAB0 compares the proposed method and several baselines and shows the efficacy of the proposed method against various evaluators. Figure 2 qualitatively shows that the proposed method provides well-anonymized representations. FIG4-c shows that each adversary in MARS has a diverse role, making MARS more robust to the update of E as a whole. All these results support that the proposed method is more effective in removing the influence of a specific factor (the user in our experiments) compared to the previous methods. One of the reasons why MARS works well is that the adversary is designed to have diverse views by incorporating the random subspace method, which requires the encoder to be stronger in order to deceive the adversary. It is worth mentioning that the capacity or accuracy of the adversary is not the only definitive factor that determines the success of adversarial feature learning, as shown by the superior performance of MARS over MA, which has 1/(1 − α) times the capacity of MARS. Moreover, the final performance of AFL is significantly different even if the accuracy of D is reasonably similar during training, as shown in FIG4-b. As mentioned in the related work section, such knowledge is essential to design the adversary in practice, and prior studies of adversarial feature learning did not address this issue. Although this paper focused on the case where the subsets are randomly selected and fixed, this might not be necessary. One possible extension is to determine the subsets in more sophisticated ways (e.g., by performing clustering or soft clustering on the representation space after a few training iterations), or to learn how to select the subsets themselves by adding a criterion regarding the diversity of adversaries. Also, it might be possible to realize the diversity of adversaries by methods other than subspace selection. One possible way is to constrain the weights of two adversaries so that they form orthogonal views, as used in semi-supervised learning with co-training BID11, or it might be worth trying to add different noise to each adversary. It is also worth mentioning the applicability and implications of MARS for other applications of adversarial training, such as image generation. From the perspective of applicability, MARS itself does not rely on any domain-specific settings and is therefore general enough for many applications based on adversarial training. For example, we can build multiple adversaries upon subsets of the feature space (rather than on the image space).
This makes the discriminator have diverse views, so it might be useful for preventing mode collapse, which is one of the well-known problems in image generation with adversarial training. In the context of image generation, Generative Multi-Adversarial Networks, proposed by BID0, which also use multiple adversaries, show that multiple adversaries are useful for generating better images and for avoiding mode collapse. It might be interesting to see whether enhancing the diversity of discriminators by preparing asymmetric adversaries, as in this paper, helps to generate better images or to avoid mode collapse. Table 2 shows the selected λ for each combination of datasets and baselines. Although the best hyperparameter might be determined by the balance between log q_M and log q_D, we cannot see an obvious relationship between the best λ and the difficulty of the tasks.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
ByuP8yZRb
This paper improves the quality of the recently proposed adversarial feature learning (AFL) approach for incorporating explicit constraints on representations, by introducing the concept of the {\em vulnerableness} of the adversary.
A key problem in neuroscience, and life sciences more generally, is that data is generated by a hierarchy of dynamical systems. One example of this is in \textit{in-vivo} calcium imaging data, where data is generated by a lower-order dynamical system governing calcium flux in neurons, which itself is driven by a higher-order dynamical system of neural computation. Ideally, life scientists would be able to infer the dynamics of both the lower-order systems and the higher-order systems, but this is difficult in high-dimensional regimes. A recent approach using sequential variational auto-encoders demonstrated it was possible to learn the latent dynamics of a single dynamical system for computations during reaching behaviour in the brain, using spiking data modelled as a Poisson process. Here we extend this approach using a ladder method to infer a hierarchy of dynamical systems, allowing us to capture calcium dynamics as well as neural computation. In this approach, spiking events drive lower-order calcium dynamics, and are themselves controlled by a higher-order latent dynamical system. We generate synthetic data by generating firing rates, sampling spike trains, and converting spike trains to fluorescence transients, from two dynamical systems that have been used as key benchmarks in recent literature: a Lorenz attractor, and a chaotic recurrent neural network. We show that our model is better able to reconstruct Lorenz dynamics from fluorescence data than competing methods. However, though our model can reconstruct underlying spike rates and calcium transients from the chaotic neural network well, it does not perform as well at reconstructing firing rates as basic techniques for inferring spikes from calcium data. These demonstrate that VLAEs are a promising approach for modelling hierarchical dynamical systems data in the life sciences, but that inferring the dynamics of lower-order systems can potentially be better achieved with simpler methods. Many datasets in the life sciences are generated by a hierarchy of dynamical systems, wherein lower-order dynamical systems that directly generate the data are driven by higher-order dynamical systems that are not observable. This problem is outlined in figure 1A, in which noisy observations x depend on the state z 1 of a low-order dynamical system that is perturbed by inputs u 1. The state of this dynamical system is also coupled to the state z 2 of a higher-order dynamical system, which can be perturbed independently by inputs u 2. One example of such a system in in-vivo two-photon calcium imaging from neuroscience. Calcium imaging provides systems neuroscientists with the ability to observe the activity of hundreds of neurons simultaneously during behavioural experiments. Such experiments have allowed neuroscientists to ask questions about the underlying computations and algorithms that neural circuits are implementing in perception, decision-making, memory, and many other processes. Such experiments can be characterized as observing a hierarchical dynamical system (Fig 1B) in which measurable calcium fluorescence is primarily determined by dynamics based on voltage-gated calcium channels and calcium binding to fluorescence dyes, and the rate of fluorescence transients controlled by the underlying computation. Recent applications of sequential variational autoencoders to neural data analysis has seen great success in inferring underlying computations in populations of cells in macaque and human motor cortex. 
By characterizing neural computation as low-dimensional dynamic factors in a non-hierarchical dynamical system, this work showed that these dynamic factors can be trained to generate the inhomogeneous intensity functions explaining the rate of spikes, which are assumed to follow a Poisson process. Crucially, these low-dimensional factors could also decode the reaching behaviour of macaques and humans with much higher fidelity than any other dimensionality reduction method. Although this is a significant advance in our ability to analyze neural data in the form of spike trains, two-photon calcium imaging poses the additional problem of identifying latent spike trains in fluorescence traces. This problem has been independently addressed in a number of different ways, including deconvolution and variational inference. If we continue to model the frequency of events as being generated by a Poisson process, this can be seen as a hierarchy of dynamical systems (Fig 1A), in which low-dimensional dynamics generate spike probabilities that in turn drive fluctuations in the biophysical dynamics of calcium activity (Fig 1B). Here we propose a method that extends LFADS to accommodate calcium activity using this hierarchical dynamical systems approach, in which we can infer both the latent dynamics and the latent spike trains from the observed calcium fluorescence signal. The model is a variational ladder autoencoder (VLAE) with recurrent neural networks (RNNs) that supports uncovering latent dynamical systems (Fig 1C, full directed acyclic graph in Fig A1). It can be seen as a unification of two recent applications of variational autoencoders (VAEs) in neuroscience: 1) Latent Factor Analysis for Dynamical Systems (LFADS) and 2) DeepSpike, a VAE approach to inferring spike counts from calcium imaging data. We choose the VLAE approach since it has been shown to avoid the problem of trying to reconstruct the data solely from the lower-order features, by separating latent variables via divergent deterministic pathways in the encoder and convergent deterministic pathways in the generator (Sønderby et al., 2016). Model hyperparameters are shown in Table A1. The inferred dynamical system underlying the frequency of calcium events in the data is identical to that of LFADS (Fig 1C, blue modules). The prior distributions of the initial condition g_0 and external inputs u_t are modelled as Gaussians, P(g_0) = N(µ_g0, σ²_g0) and P(u_t) = N(µ_ut, σ²_ut). The underlying dynamical system ġ = G(g_t, u_t) is modelled by a Gated Recurrent Unit (GRU) taking the initial hidden state g_0 and inputs u_t. Low-dimensional factors f_t are calculated as a linear transformation of the generator hidden state, f_t = W_fac g_t. These factors are used to reconstruct the Poisson process intensity function with a fully connected layer and an exponential non-linearity. Inferred spike counts s_t are generated by sampling z_t from Gaussian distributions P(z_t) = N(µ_zt, σ²_zt) and projecting these through an affine transformation and non-linearity along with the factors from the deeper layer. We assume a simple model of calcium dynamics, ẏ = −y_t/τ_y + α_y s_t + β_y, where the parameters τ_y, α_y, β_y are measured from the data; it is a topic for future research to fit the calcium dynamics simultaneously. In our synthetic data, these are set to 0.4 s, 1, and 0, respectively. The value of τ_y is chosen as it is the known decay time constant of GCaMP6, a calcium fluorescence indicator commonly used in calcium imaging experiments.
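To make the generative assumption concrete, here is a minimal NumPy sketch of an Euler discretization of the calcium model above, ẏ = −y_t/τ_y + α_y s_t + β_y, driving noisy fluorescence observations. The time step, observation-noise level, firing rate, and function name are placeholder assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def simulate_fluorescence(spikes, dt=1.0 / 30, tau_y=0.4, alpha_y=1.0, beta_y=0.0,
                          obs_noise=0.05, rng=np.random.default_rng()):
    """Euler integration of dy/dt = -y/tau_y + alpha_y * s_t + beta_y,
    followed by Gaussian observation noise on the fluorescence trace."""
    y = np.zeros_like(spikes, dtype=float)
    for t in range(1, len(spikes)):
        dy = -y[t - 1] / tau_y + alpha_y * spikes[t - 1] + beta_y
        y[t] = y[t - 1] + dt * dy
    return y + rng.normal(scale=obs_noise, size=y.shape)

# Placeholder usage: Poisson spike counts from an arbitrary rate, 10 s at 30 Hz.
rate_hz = 2.0
spikes = np.random.default_rng().poisson(rate_hz / 30, size=300)
fluorescence = simulate_fluorescence(spikes)
```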
The calcium fluorescence signal x_t is mapped onto the parameters of the variational posterior distributions Q(z_t|x), Q(g_0|x), and Q(u_t|x). These distributions are all modelled as Gaussians, with the means and standard deviations parameterized by a stack of bidirectional GRUs, E¹_gen, E²_gen, E¹_con, and E²_con. The final hidden states of E¹_gen and E²_gen are mapped onto the parameters of Q(z_0|x) and Q(g_0|x), respectively, with fully connected layers. The hidden states of E¹_con and E²_con are concatenated and passed as inputs to single-direction GRUs C_1 and C_2. The hidden states of C_1 and C_2 are concatenated at each time step t with s_{t−1} and f_{t−1}. Subsequently, these concatenated activities are mapped onto the parameters of Q(z_t|x) and Q(u_t|x) with fully connected layers. One of the advantages of using VLAEs is that the evidence lower bound (ELBO) formulation is the same as for VAEs despite the hierarchical latent space. As such, our cost function remains very similar to that of LFADS: an expectation over Q(z, g_0, u|x) of reconstruction log-likelihood terms (including log P(s|λ)), minus the KL terms, together with an L2 penalty involving W_gg, where W_gg is the weight matrix in the GRU that takes g as input. The likelihood function P(x_t|y_t) is modelled as a Gaussian distribution, x_t ∼ N(y_t, σ²_y), where σ²_y is learned. Although s_t is not discrete, P(s_t|λ_t) is treated as an approximate Poisson process, s_t ∼ Poisson(λ_t), with P(s_t|λ_t) = λ_t^{s_t} exp(−λ_t)/Γ(s_t + 1). This is an inductive bias to ensure that the inferred spike trains in the lower-order dynamical system are constrained to follow the dynamics of the higher-order dynamical system. The latent variables are treated as independent, with hierarchy imposed in the deterministic paths, i.e., Q(z, g_0, u|x) = Q(z|x)Q(g_0|x)Q(u|x). Parameters of our model were optimized with ADAM, with an initial learning rate of 0.01, which decreased by a factor of 0.95 whenever plateaus in training error were detected. As in LFADS training, the KL and L2 terms in the cost function were 'warmed up', i.e., multiplied by a scaling factor that gradually increased from 0 to 1. Warm-up for the deeper parameters (blue modules in Figure 1) was delayed until warm-up for the shallower parameters (red modules in Figure 1) was completed. The model was tested on synthetic data with Lorenz dynamics embedded in the frequency of calcium fluorescence transients, as described by , where generated spikes were convolved with an exponential kernel with a time constant of 0.4 ms, and white noise was added to the resulting traces. We measure the performance of the model in three ways: 1) uncovering the underlying Lorenz dynamics, 2) reconstructing the rates of calcium transients as inhomogeneous Poisson intensity functions, and 3) reconstructing the spike counts contributing to increases in the calcium fluorescence signal. The model was compared against a ground truth where the spike counts are known and LFADS is used to reconstruct the latent dynamics and intensity function, and against a model where spike counts are extracted using a deconvolution algorithm before using LFADS to reconstruct the rates and intensity function (OASIS + LFADS). It was also tested against a model that used a 1-D convolution of the intensity function to reconstruct either the first two (Gaussian-LFADS) or four (Edgeworth-LFADS) time-varying moments of fluorescence, as used previously in estimating the intensity functions of filtered Poisson processes in neuroscience. Figure 2 shows examples of the performance of our model in reconstructing the fluorescence traces (Fig 2A), Poisson intensity functions (Fig 2B), spikes (Fig 2C), and Lorenz dynamics (Fig 2D).
Visually, the model provides very close fit to the fluorescence traces, intensity functions, and Lorenz dynamics. The model also captures spike-timing, although these spike trains appear smoothed. Table 1 compares the R 2 goodness-of-fit on reconstructing held-out validation data with ground-truth latent dynamic structure. Of all approaches, our model easily performs best at reconstructing fluorescence traces. It should be noted that the reason the Gaussian and Edgeworth LFADS models perform so poorly at reconstructing fluorescence is that it only attempts to reconstruct the time-varying moments of fluorescence as a 1-D convolution of the intensity function, which stems from the approximation of exponential-kernel shot noise processes . Additionally, our model almost performs as well as LFADS in reconstructing the Lorenz dynamics. It is to be expected that LFADS performs better than our method, since there is an additional source of observation noise in our synthetic dataset generating fluorescence transients from spikes. Notably, our model does not perform as well as the deconvolution method OASIS in reconstructing spike trains, however this does not impact the ability of our model to reconstruct the latent dynamics. In fact, constraining the reconstructed spikes by the latent dynamics may mitigate any errors in spike train reconstruction that occur by deconvolution, since the deconvolution algorithm may erroneously drop spikes during high rates, whereas our model should be less likely to do so. It will be necessary to assess this possibility further. It should be noted that the deconvolution algorithm performs much better at reconstructing spike trains in our synthetic dataset than in real datasets where ground-truth spiking is known . To our knowledge, there are no known dual recordings of population 2-photon calcium imaging with ground-truth electrophysiology in a subpopulation of neurons in-vivo during behaviour driven by hypothesized low-dimensional dynamics that we would be able to validate this with. Nevertheless, since the relationship between calcium dynamics and somatic spiking is highly non-linear, especially in dendrites, it remains to be seen how useful it is to faithfully reproduce unseen spike trains in calcium fluorescence activity. The model was then tested on synthetic data generated from a 50 cell recurrent neural network with chaotic dynamics, and an external perturbation at a random time, as described in , with parameters adjusted to make the data more representative of firing rates and time scales observed in calcium imaging experiments. Spike trains were transformed into fluorescence signals using the same procedure as with the Lorenz system dataset. (Fig 3A), spike trains (Fig 3B), and intensity functions (Fig 3C). Visually, the fluorescence traces and spike trains have been reconstructed reasonably well, whereas reconstructions of the the intensity functions are highly noisy. The latent dynamics represented by the factors (Fig 3D) and external inputs (Fig 3E) also show a lot of noise. This appears to be due to difficulty in inferring external inputs, in which it is not clear whether the timing of external perturbation has been accurately inferred, despite a slight transient at roughly the time of the perturbation in the example. Table 2 compares the R 2 goodness-of-fit on reconstructing held-out validation data, which demonstrates that our model performs very well in reconstructing the fluorescence signal and does reasonably well at reconstructing spikes. 
However, in this more complex benchmark, where the deeper dynamical system is perturbed by external input, pre-processing the fluorescence transients with the OASIS deconvolution algorithm reconstructs firing rates far better than our method. We present a hierarchical recurrent variational autoencoder model capable of reconstructing latent dynamics, latent spike trains, and calcium fluorescence traces in a benchmark synthetic dataset. Of the four methods tested, our model is the only one capable of reconstructing all three. Furthermore, our model performed best in reconstructing latent dynamics in our synthetic dataset. We will need to assess our model on further synthetic benchmark data to assess the validity of our approach. Since our model is trained end-to-end, it should be possible to extend it to reconstructing raw 2-photon imaging videos, which could enable us to train models to uncover latent dynamics from arbitrarily shaped neuronal structures. This would be of great use to neuroscientists, who are largely restricted to techniques that extract fluorescence traces from regions of interest with somatic shapes, whereas the morphological diversity of dendrites is much greater. An additional advantage of using our hierarchical model is that we can obtain measures of the uncertainty in both the latent dynamics and the latent spike trains. The correlation in uncertainty between layers of this hierarchy may be what allows superior inference of the latent dynamics, despite less accurate reconstructions of the spike trains than OASIS, which provides no measure of uncertainty. We hope to improve our model to better capture the relationships between layers of this hierarchy in future work. We describe a use case in neuroscience (2-photon calcium imaging data) for which this model may be very useful. However, we are keen to investigate the general case of hierarchical dynamical systems and their utility in uncovering structure in datasets outside this domain. A APPENDIX
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
Ske066VFwS
We extend a successful recurrent variational autoencoder for dynamic systems to model an instance of dynamic systems hierarchy in neuroscience using the ladder method.
With the proliferation of specialized neural network processors that operate on low-precision integers, the performance of Deep Neural Network inference becomes increasingly dependent on the results of quantization. Despite plenty of prior work on the quantization of weights or activations for neural networks, there is still a wide gap between software quantizers and low-precision accelerator implementations, which degrades either the efficiency of the networks or that of the hardware due to the lack of software and hardware coordination at the design phase. In this paper, we propose a learned linear symmetric quantizer for integer neural network processors, which not only quantizes neural parameters and activations to low-bit integers but also accelerates hardware inference by using batch normalization fusion and low-precision accumulators (e.g., 16-bit) and multipliers (e.g., 4-bit). We use a unified way to quantize weights and activations, and the results outperform many previous approaches for various networks such as AlexNet, ResNet, and lightweight models like MobileNet, while remaining friendly to the accelerator architecture. Additionally, we apply the method to object detection models and observe high performance and accuracy in YOLO-v2. Finally, we deploy the quantized models on our specialized integer-arithmetic-only DNN accelerator to show the effectiveness of the proposed quantizer. We show that even with linear symmetric quantization, the results can be better than those of asymmetric or non-linear methods in 4-bit networks. In our evaluation, the proposed quantizer induces less than 0.4% accuracy drop in ResNet18, ResNet34, and AlexNet when quantizing the whole network as required by the integer processors.
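A linear symmetric (per-tensor) quantizer of the kind described here might look like the sketch below, where the clipping range `scale` would be the learned quantity; the training procedure, batch-normalization fusion, and accumulator widths from the paper are not reproduced.

```python
import numpy as np

def quantize_symmetric(x, scale, bits=4):
    """Map real values to signed integers in [-(2^(b-1)-1), 2^(b-1)-1] with a
    single scale; dequantization is q * step, so zero maps exactly to zero."""
    qmax = 2 ** (bits - 1) - 1
    step = scale / qmax                       # real value represented by one integer step
    q = np.clip(np.round(x / step), -qmax, qmax).astype(np.int32)
    return q, step

def dequantize(q, step):
    return q.astype(np.float32) * step

w = np.random.randn(64, 64).astype(np.float32)
# abs-max is a stand-in here; in the paper the clipping range would be learned.
q, step = quantize_symmetric(w, scale=np.abs(w).max(), bits=4)
w_hat = dequantize(q, step)
```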
[ 1, 0, 0, 0, 0, 0, 0, 0 ]
H1lBj2VFPS
We introduce an efficient quantization process that allows for performance acceleration on specialized integer-only neural network accelerator.
The prohibitive energy cost of running high-performance Convolutional Neural Networks (CNNs) has been limiting their deployment on resource-constrained platforms including mobile and wearable devices. We propose a CNN for energy-aware dynamic routing, called the EnergyNet, that achieves adaptive-complexity inference based on the inputs, leading to an overall reduction of run time energy cost without noticeably losing (or even improving) accuracy. That is achieved by proposing an energy loss that captures both computational and data movement costs. We combine it with the accuracy-oriented loss, and learn a dynamic routing policy for skipping certain layers in the networks, that optimizes the hybrid loss. Our empirical demonstrate that, compared to the baseline CNNs, EnergyNetcan trim down the energy cost up to 40% and 65%, during inference on the CIFAR10 and Tiny ImageNet testing sets, respectively, while maintaining the same testing accuracies. It is further encouraging to observe that the energy awareness might serve as a training regularization and can even improve prediction accuracy: our models can achieve 0.7% higher top-1 testing accuracy than the baseline on CIFAR-10 when saving up to 27% energy, and 1.0% higher top-5 testing accuracy on Tiny ImageNet when saving up to 50% energy, respectively. While deep learning-powered Internet of Things (IoT) devices promise to dramatically revolutionize the way we live and work by enhancing our ability to recognize, analyze, and classify the world around us, this revolution has yet to be unleashed due to many fundamental challenges. Edge devices, such as smart phones, smart sensors, drones and robots, have limited energy and computation resources since they are battery-powered and have a small form factor. On the other hand, high-performance Convolutional Neural Networks (CNNs) come at a cost of prohibitive energy consumption BID0. The CNNs with the highest accuracy have hundreds of layers and tens of millions of parameters. When deployed in practice, such networks drain the battery very quickly BID1. Recently, there have been a number of methods proposed to reduce energy cost in CNNs, while not hampering their predictive power. Most of them aim to reduce the model size or the number of computations BID2 BID3 BID4 BID5 BID6 BID7 BID8 BID9 BID10 BID11. However, BID1 shows that a smaller model size and fewer operations might not necessarily lead to a lower energy cost. BID1 uses energy cost to guide the pruning process, where the layer with the highest energy cost is pruned first. BID12 formulates the CNN training process as an optimization problem under a certain energy budget constraint. While both methods BID1 BID12 show promising towards pursuing more energy-efficient CNN models, they do not incorporate energy costs into the training loss function to explicitly learn a more energy-efficient model. Furthermore, once their model structures are learned from training, it can only be fixed during the inference time, and there is no room for input-dependent adaptivity. This paper proposes a new CNN model that combines energy cost with a dynamic routing strategy to enable adaptive energy-efficient inference. Our proposed model, termed as EnergyNet, is a gated CNN architecture which employs conditional computing to route the input data through the network Figure 1: EnergyNet Structure: each green circle G indicates an RNN gate and each blue square under G indicates one block of layers in the base model. 
To reduce the energy cost, the RNN gates generate routing strategies dynamically for different input images. By sharing the parameters between all RNN gates, they will have only 0.04% of the energy cost of the base CNN model, which is negligible. In this specific example, only the first and third blocks get executed.in an efficient path. Built on a base network (such as ResNet-34 or ResNet-50 BID13), EnergyNet uses an additional gating network BID10 to decide whether the current input should skip certain layers in the network or not. It optimizes a weighted combination of an accuracy loss and an energy loss which captures both the computational and memory data movement costs, under which EnergyNet is trained to find the optimal routing policy to reduce the energy cost of the model without degrading the prediction accuracy. Our empirical demonstrate that, compared to the base network without gating nor dynamic inference, EnergyNet can trim down the energy cost up to 40% and 65%, during inference on the CIFAR10 and Tiny ImageNet testing sets, respectively, while maintaining almost the same testing accuracy. Interestingly enough, we find the energy-aware EnergyNet can even achieve win-win, by simultaneously improving the prediction accuracy and saving energy, potentially due to its equivalent effect as a training regularization to avoid overfitting. For example, our models achieve 0.7% higher top-1 testing accuracy than the baseline on CIFAR-10 when saving up to 27% energy, and 1.0% higher top-5 accuracy on Tiny ImageNet when saving up to 50% energy, respectively. Overview: EnergyNet implements an effective dynamic routing algorithm using a set of gating networks, which shares similar ideas with BID10, as depicted in Figure 1. Each gating network associates with a block of layers in the EnergyNet. Given an input image, the gating networks decide if the corresponding block should be skipped or not. The input to each block is first sent to the gating network G, whose output is either 0 or 1. If it is 0, the block will be skipped; otherwise, it will process the input normally as in the base model. If the input and output of the block have different dimensions, then we can perform a linear projection using a shortcut connection to match the dimensions as in BID13. The core innovation in EnergyNet is the adoption of a new energy-aware loss function for learning the gating (skipping) policy, whose details we defer to the next subsection. In our implementation, we adopt the recurrent gates (RNNGates) as in BID10 (see FIG1). Gate-I (b) FFGate-II It is composed of a global average pooling followed by a linear projection that reduces the features to a 10-dimensional vector. A Long Short Term Memory (LSTM) BID14 network that contains a single layer of dimension 10 is applied to generate a binary scalar. As mentioned in BID10, this RNNGate design incurs a negligible overhead compared to its feed-forward counterpart (0.04% vs. 12.5% of the computation of the residual blocks when the baseline architecture is a ResNet). In order to further reduce the energy cost due to loading parameters into the memory, all RNNGates in the EnergyNet share the same weights. Energy-aware Learning for Dynamic Routing: The dynamic routing in EnergyNet is learned by minimizing an energy cost together with the accuracy loss. 
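The shared gate described above can be sketched in a few lines of PyTorch-style code: global average pooling, a linear projection to a 10-dimensional vector, a single-layer LSTM of hidden size 10, and a binary keep/skip decision. Only the layer sizes follow the text; everything else, including the hard threshold (which would need a soft or straight-through relaxation during training), is an assumption. The objective that trains these gates is stated next.

```python
import torch
import torch.nn as nn

class RNNGate(nn.Module):
    """Shared gate: pooled block input -> 10-d projection -> LSTM(10) -> skip/keep bit."""
    def __init__(self, in_channels, hidden=10):
        super().__init__()
        self.proj = nn.Linear(in_channels, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x, state=None):
        # x: (batch, channels, H, W) feature map entering the next block
        pooled = x.mean(dim=(2, 3))                    # global average pooling
        z = self.proj(pooled).unsqueeze(1)             # (batch, 1, hidden)
        out, state = self.lstm(z, state)
        logit = self.head(out.squeeze(1))
        # Hard decision shown for clarity; training would use a relaxed version.
        gate = (torch.sigmoid(logit) > 0.5).float()    # 1 = execute block, 0 = skip
        return gate, state

# The same RNNGate instance is reused before every block, so its weights are shared.
```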
In particular, the learning goal in the EnergyNet is defined as: min DISPLAYFORM0 Here, α is a weighting coefficient of the energy loss, and W and G denote the parameters of the base model and the gating network, respectively. Also, L(W, G) denotes the prediction loss, and E(W, G) denotes the energy cost of the CNN model associated with W and G, which is calculated by accumulating the energy cost of the layers that are not skipped. In order to compute the energy cost of each layer, we adopt the following energy model: DISPLAYFORM1 where e i and e M AC denote the energy costs of accessing the i-th memory hierarchy and one multiplyand-accumulate (MAC) operation BID15, respectively, while # M AC and # acci denote the total number of MAC operations and accesses to the i-th memory hierarchy, respectively. Note that state-of-the-art CNN accelerators commonly employ such a hierarchical memory architecture for minimizing the dominant memory access and data movement costs. In this work, we consider the most commonly used design of three memory hierarchies including the main memory, the cache memory, and local register files BID15, and employ a state-of-the-art simulation tool called "SCALE-Sim" BID16 to calculate the number of memory accesses # acci and the total number of MACs # M AC. Summary: We show that EnergyNet saves more energy than the baseline ResNet after training on CIFAR10 and Tiny ImageNet BID17. In particular, compared to the baseline ResNet, EnergyNet saves up to 40% and 65% energy cost without degrading the prediction accuracy, when processing CIFAR10 and TinyImageNet images, respectively. More encouragingly, our models can achieve 0.7% higher top-1 testing accuracy than the baseline on CIFAR-10 when saving up to 27% energy, and 1.0% higher top-5 testing accuracy on Tiny ImageNet when saving up to 50% energy, respectively. We use the ResNet-38 and ResNet-50 in BID13 as the baseline models for constructing and evaluating EnergyNet models on CIFAR-10 and Tiny ImageNet, with the ing models denoted as EnergyNet-38 and EnergyNet-50, respectively. The training process contains three steps. In step I, we set the weighting coefficient α to a small value (e.g., 0.1), which helps the model converge to the baseline accuracy first. In step II, we increase α to a larger value (e.g., 0.9) and retrain the model obtained from step I. For step III, it is only triggered if the model sees an accuracy loss larger than a threshold (default 0.1%) from step II: we then set α to a small value (e.g., 0.1) again to retrain the ing model from step II for restoring the accuracy. Such a three-step strategy proves to help stabilize training and gain better performance. Discussion: We use energy savings as a metric to quantify the ing energy efficiency improvement of EnergyNet. The energy savings is defined as E s /E total, where E total and E s are the energy cost of the baseline model and the skipped layers due to EnergyNet. From FIG2, we can conclude that EnergyNet achieves the goal of reducing energy cost while preserving or even improving the prediction accuracy. In particular, the accuracy of EnergyNet-38 and EnergyNet-50 will not drop when the energy savings is as high as 40% and 65%, respectively. 
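For reference, the energy model and hybrid objective defined earlier in this section can be sketched as follows; the per-access and per-MAC energies and the operation counts are placeholders, since the paper obtains the counts from the SCALE-Sim simulator.

```python
def layer_energy(num_macs, num_accesses, e_mac, e_mem):
    """E = sum_i e_i * #acc_i + e_mac * #MAC for one layer.

    num_accesses / e_mem: per-memory-level access counts and energies
    (e.g. main memory, cache, register file)."""
    return e_mac * num_macs + sum(e * n for e, n in zip(e_mem, num_accesses))

def energynet_loss(pred_loss, executed_layers, alpha=0.9):
    """Hybrid objective: prediction loss plus alpha times the energy of the non-skipped layers."""
    energy = sum(layer_energy(**layer) for layer in executed_layers)
    return pred_loss + alpha * energy

# Placeholder example with made-up counts and per-operation energies.
layers = [dict(num_macs=1e7, num_accesses=(1e5, 1e6, 1e7), e_mac=1.0, e_mem=(200.0, 6.0, 1.0))]
loss = energynet_loss(pred_loss=0.35, executed_layers=layers, alpha=0.9)
```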
To confirm that these experimental results are not just a coincidence, we performed 20 trials of experiments using EnergyNet-38 and observed that the 95% confidence intervals for the mean prediction accuracy and the mean energy savings are [92.47%, 92.58%] and [39.55%, 40.52%], respectively, verifying the reproducibility of EnergyNet's prediction accuracy and resulting energy savings. We observe that EnergyNet can achieve a higher accuracy than the original ResNet model. We conjecture that this is because EnergyNet can overcome overfitting when performing the dynamic routing for energy savings. Furthermore, EnergyNet can aggressively reduce energy cost by about 4× over both the ResNet-38 and ResNet-50 baselines, at the cost of 3% and 4% testing accuracy losses, respectively.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
Syxp2bgKoX
This paper proposes a new CNN model that combines energy cost with a dynamic routing strategy to enable adaptive energy-efficient inference.
Log-linear models models are widely used in machine learning, and in particular are ubiquitous in deep learning architectures in the form of the softmax. While exact inference and learning of these requires linear time, it can be done approximately in sub-linear time with strong concentrations guarantees. In this work, we present LSH Softmax, a method to perform sub-linear learning and inference of the softmax layer in the deep learning setting. Our method relies on the popular Locality-Sensitive Hashing to build a well-concentrated gradient estimator, using nearest neighbors and uniform samples. We also present an inference scheme in sub-linear time for LSH Softmax using the Gumbel distribution. On language modeling, we show that Recurrent Neural Networks trained with LSH Softmax perform on-par with computing the exact softmax while requiring sub-linear computations. Deep neural networks have achieved impressive successes in tasks spanning vision BID9 BID16, language BID3, speech BID6 BID27 and videos BID1. While these models can vastly differ in architecture, activation functions, and presence of recurrence, they (almost) all share a common trait: the softmax layer. The softmax layer, or log-linear model, is a widely used model in machine learning and statistics that transforms a feature vector into a distribution over the output space, modeling log-probabilities as a linear function of the feature vector. For example, in object classification, the softmax layer at the end of a deep convolutional network transforms a feature vector into a probability distribution over classes for the image; in language modeling using recurrent neural networks, it maps the hidden state to a distribution over next words. While parameterizing for logits offers modeling flexibility, inference and learning have linear runtime in the number of classes. Indeed, both of these require computing the un-normalized probability for every class to compute the partition function and retrieve an actual probability distribution. Problems with large output spaces arise naturally in many areas like natural language processing (NLP), where the output space is a language's vocabulary and can be on the order of hundreds of thousands of elements BID15; BID12. This can also occur in computer vision BID14 when attempting tag prediction on massive, weakly-labeled datasets such as Flickr100M BID31.Many solutions have been proposed to address this bottleneck, all revolving around two themes: approximation of the softmax probabilities or computation of exact probabilities for an approximate model. Canonical examples of the former are importance sampling (IS) or noise contrastive estimation (NCE; BID8). Instead of computing probabilities over the whole output space, these methods compute the softmax over a smaller, sampled vocabulary and re-weight the probabilities, providing an unbiased estimator. An illustration of the latter is Hierarchical Softmax BID24, where the output classes are first clustered such that you only need to compute the softmax over a smaller output space. While the former is an unbiased estimate, it comes with no concentration guarantees, and it is often more art than science to craft proposal distributions which will provide low-variance estimators. The latter, while efficient, requires carefully hand-crafted clustering of the output space, at the risk of making mistakes from which there is no recovery. 
More recently, estimators based on nearest neighbor search have been proposed for inference and learning in log-linear models BID25 BID26. These estimators hinge on Maximum Inner Product Search using Locality-Sensitive to retrieve the largest logits of the distribution and account for the tail with uniformly sampled classes. They boast strong theoretical guarantees and well-established concentration bounds. However, they were constrained to toy settings and not directly applicable to real-world, large-scale, machine learning. In this work, we build upon these estimators to make them amenable to deep learning practitioners, without losing any theoretical guarantees. We first show how they can be extended to be usable within training of deep learning models, then present our efficient implementation, adapted to deep learning hardware and frameworks. Finally, we show the applicability and efficiency of our method by evaluating on a real-world task: language modeling. We show significant perplexity gains against competing methods with significant speed-ups. Our contributions are as follows:• We present a new deep learning layer, LSH Softmax, an efficient replacement for the softmax layer based on Locality-Sensitive Hashing and the Gumbel distribution, for any deep learning architecture, with strong theoretical guarantees for sub-linear learning and inference.• We provide details for efficient implementation on deep learning hardware (GPUs) and modern deep learning frameworks BID0 BID19 ).• Empirically, we show, on several datasets, that training and sampling from LSH Softmax performs similarly to an exact softmax while requiring significantly less FLOPS. In this section, we first provide a quick overview of Neural Networks and the most popular classification layer, the softmax layer. We then present the Gumbel distribution BID7 and introduce Locality-Sensitive Hashing BID11, both of which our estimator is built upon for inference and learning. Notationally, X is the input space, e.g. X R d and Y is a discrete output space: Y {1, . . ., C}. Feedforward Networks Neural networks models are built hierarchically by applying linear and non-linear transformations in alternating fashion. Formally, given input x ∈ X, an m-layer neural network with σ(·) non-linearity transforms x into h defined as: DISPLAYFORM0 {W i} i≤m and {b i} i≤m are learned weights of the network. σ(·) denotes an element-wise nonlinearity such as ReLU (max(·, 0)) or sigmoid ((1 + exp(−·)) −1 ).Recurrent Networks Recurrent Neural Networks (RNN) are an extension of the previous setting to arbitrarily long sequences by keeping an internal state h t. Formally, given an input sequence (x 1, . . ., x T), it can be written as a dynamical system of the form: DISPLAYFORM1 where U and V are learnable weight matrices. In practice, this parametrization is not wellconditioned for optimization as it can be subject to vanishing or exploding gradients and in practice the Longer Short Term Memory (LSTM; BID10) is preferred. In both cases, these outputs are then given as input to a softmax layer which produces a distribution over the output space Y. In the rest of this work, we denote by φ the parameters of the neural network. The softmax layer is the common name given to a log-linear model for multi-classification at the end of a neural network. Let us consider the multi-classification setting with inputs in X and outputs in Y. 
Given a feature vector ψ(x) and C weight vectors {θ c} c≤C, the softmax layer parameterizes the following distribution: DISPLAYFORM0 In the particular case of neural networks, p(y|x; θ, φ) ∝ exp(h T θ i). {h T θ i} i≤C are called the logits. It is important to note that computing the distribution over the output space, for inference or learning, requires O(C) operations. For the rest of this work, θ denotes the parameters of the softmax whereas φ denotes the parameters of the neural network (producing the feature). First introduced by BID7, the Gumbel distribution is defined by the following cumulative distribution function: p(G < s) = exp(− exp(−s)). More practically, one can sample from the Gumbel distribution by first sampling U ∼ U and returning G = − log(− log(U)). This distribution is particularly useful as it casts sampling as optimization. Theorem 1 (Maddison et al. FORMULA0). Let {y i} i≤C be un-normalized probabilities (or logits) over Y and let {G i} i≤C be i.i.d Gumbel variables. Then: DISPLAYFORM0 Nearest neighbor search is a task that arises in many fields, such as information retrieval. Given a fixed set of vectors S and a distance, this task consists of, given any incoming query q, returning the vectors closest to the query according to the specified distance. In this work, we will be interested in the Maximum Inner Product Search (MIPS) task. Let S = {s 1, . . ., s N} be a subset of R d. Given a query q ∈ R d, MIPS aims at retrieving arg max s∈S q T s. This requires Θ(N) operations as one has to compute the dot-product of q with all elements of S.In the case where we assume that, for a given set S, it is needed to retrieve the nearest neighbor for a large numbers of queries, we can achieve amortized sub-linear time. This problem is commonly addressed with space partitioning techniques, such as Locality-Sensitive Hashing (LSH; BID11). LSH leverages hashing to reduce the number of candidate vectors to evaluate, based on the idea that similar vectors will hash in the same bucket. We have the following : Theorem 2 BID11 ). Given a set S of size N, a similarity measure d(·, ·) and a family of hash functions H s.t. for S > T and p > q: DISPLAYFORM0 we can construct a data structure s.t. given an incoming query q, a nearest neighbor can be retrieved, with high probability, in sub-linear time O(N ρ log N) with ρ log p log q < 1.Recent work builds on top of LSH to either reduce the number of tables BID18, or utilize more expressive hash functions BID2. A common family of hash is the hyperplane hash, i.e. for v ∼ N (0, I), h v (x) = sign v T x, also called Signed Random Projections BID4. For the rest of this work, we denote b the number of hashing bits (equivalently, the number of random vectors) per table, and L the number of tables. In this section, we show how we can apply Theorem 3.5 of BID26 to enable sublinear learning of softmax parameters in the context of deep models, i.e. where both weights and inputs can change. This is crucial for real-world use. Deep learning models for both classification BID16 and generation BID22 are often trained with a maximum-likelihood objective. Formally, given a training pair (x, y) ∈ X × Y, one aims at maximizing log p(y|x; θ, φ), where θ ∈ Θ and φ ∈ Φ are respectively the parameters of the softmax and of the neural network. To optimize this model, the usual method is to use back-propagation BID28 to differentiate and then perform stochastic gradient descent (SGD; BID17) on θ and φ. 
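As a brief aside before the gradient computation, the Gumbel-max identity of Theorem 1 above is straightforward to state in code: adding i.i.d. Gumbel noise to the logits and taking the argmax yields an exact sample from the softmax distribution. A minimal NumPy sketch:

```python
import numpy as np

def gumbel_max_sample(logits, rng=np.random.default_rng()):
    """Exact softmax sample via the Gumbel-max trick: argmax_i (logit_i + G_i)."""
    u = rng.uniform(size=logits.shape)
    gumbels = -np.log(-np.log(u))
    return int(np.argmax(logits + gumbels))

logits = np.array([2.0, 0.5, -1.0, 0.0])
sample = gumbel_max_sample(logits)   # distributed as softmax(logits)
```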
Let's denote by f (x; φ) h the feature vector given as input to the softmax. Given our notation, the objective is written as (x, y, θ, φ) DISPLAYFORM0 For backpropagation, we need to compute the gradient of w.r.t to both θ and h -the gradient w.r.t. h is then passed down to compute the gradient w.r.t. φ. DISPLAYFORM1 with DISPLAYFORM2 Computing these gradients clearly requires O(|Y|) operations. In practice, this constitutes a major bottleneck for large output spaces. BID26 shows how to compute expectation in in sub-linear time, with a well-concentrated estimator using an LSH structure. Intuitively, we can build a good estimate of the partition function by retrieving the largest logits (using LSH) and accounting for the tail with uniform samples. Applying this , we can compute the expectations necessary to compute the softmax gradients in sub-linear time. This is described in Theorem 3. Theorem 3 (LSH Softmax for Learning). Let h = f (x; φ) be input to a softmax layer with parameters {θ c} c≤C and define (x, y, θ, φ) as previously. Given S, the k-nearest neighbors of h in {θ c} c≤C and T, l uniform samples from {1, . . ., C} − S, let us define: DISPLAYFORM3 DISPLAYFORM4 These estimators are well concentrated: i.e. for, δ > 0, if k = l = O n 2 3 1 1 δ, then with probability greater than 1 − δ: DISPLAYFORM5 While Theorem 3 provides computation of the gradients in sub-linear time, it is only usable in a setting where the weights ({θ i} i≤C ) are not updated. Indeed, querying nearest neighbors in sublinear time assumes that an appropriate data structure (here LSH) was built in advance. However, when training deep models, we are required to update the weights at every training step. This necessitates online updating of the LSH structure. To maintain the sub-linear runtime, we perform these updates in a sparse manner. We describe in Algorithm 1 how this estimator can be used in a training loop, with weight updating and sparse LSH updates. Proposition 4. The softmax computations described in Algorithm 1 run in sub-linear time., n iters number of training iterations. Initialize θ and φ Initialize the MIPS structure with {θ i} i≤|V|. for j ≤ n iters doSample an example (x, y) from D. DISPLAYFORM0 Find S, k-nearest-neighbors of h using the MIPS. Define T as l indexes uniformly sampled from Y − S. DISPLAYFORM1 Pass downĝ h for back-propagation. Re-hash the updated vectors (at most (k + l)) into the right buckets. end forProof. The softmax computations can be split into three parts: retrieving nearest neighbors, computing the forward/backward passes, and rehashing updated vectors. With a sub-linear MIPS such as LSH, the first part is guaranteed to be sub-linear. For the second part, computing the partition function and the entire gradient estimator requires computing a finite number of sums over DISPLAYFORM2 3 ) terms, which is sub-linear. The third part consists of re-hashing updated vectors. Re-hashing a vector is a constant operation (consisting of b × L dot-products) and thus, given that only a sub-linear number of vectors are updated, re-hashing is sub-linear. In the last section, we presented a method to speed-up training time based on an LSH data structure. In addition to these training time gains, LSH Softmax can be utilized for computational gains at inference time as well. While MAP inference can be easily derived from the MIPS structure, sampling from the conditional distribution is often required (e.g. to generate diverse sentences in language modeling or machine translation). 
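Before turning to inference, here is a sketch of the training-time estimator behind Theorem 3 and Algorithm 1. The exact expressions did not survive extraction, so the code below uses the generic form of such estimators, an exactly-computed head over the retrieved nearest neighbors plus a re-weighted uniformly sampled tail; this form is an assumption, not a verbatim reproduction of the theorem.

```python
import numpy as np

def lsh_softmax_partition(h, theta, nn_idx, num_uniform, rng=np.random.default_rng()):
    """Estimate the partition function Z = sum_c exp(theta_c . h).

    nn_idx: indices of the (approximate) nearest neighbors of h among the class
            vectors theta, e.g. returned by an LSH/MIPS structure.
    The tail (all classes outside nn_idx) is estimated from uniform samples and
    re-weighted by its size (assumed form of the estimator).
    """
    C = theta.shape[0]
    head = np.exp(theta[nn_idx] @ h).sum()
    rest = np.setdiff1d(np.arange(C), nn_idx)
    tail_idx = rng.choice(rest, size=num_uniform, replace=False)
    tail = (len(rest) / num_uniform) * np.exp(theta[tail_idx] @ h).sum()
    return head + tail

theta = np.random.randn(1000, 64)
h = np.random.randn(64)
nn_idx = np.argsort(theta @ h)[-32:]              # stand-in for LSH/MIPS retrieval
Z_hat = lsh_softmax_partition(h, theta, nn_idx, num_uniform=64)
# log p(y|x) is then approximated by theta_y . h - log(Z_hat); the gradient
# estimators only touch the retrieved and sampled classes.
```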
These gains can be crucial for large-scale deployment. This is a direct application of BID26 that once again leverages a MIPS structure and the Gumbel distribution. By lazily evaluating Gumbel noise, once can devise an inference scheme which allows to sample from log-linear models in sub-linear time. Theorem 5 (LSH Softmax for Inference). We reuse the same notations as the once in Theorem 3. We define t − log(− log(1 − l/C)). Let {G i} i≤k be k samples from the Gumbel distribution. We then proceed to sample m ∼ Binomial(C, l/C), and sample T, m points from Y − S with associated Gumbels {G i} i≤m s.t. each G i are larger than t. Let us define: DISPLAYFORM0 Let, δ > 0, we then have the two following :1. For k = l ≥ log 1 δ,ŷ is a sample from p(y|x; θ, φ) with probability greater than 1 − δ.2. This inference scheme runs in sub-linear time. Proof. BID26 We denote by p Gumbel (·|h; θ) the implicit distribution over Y provided by this inference scheme. While we can sample from p Gumbel, we note that the likelihood is intractable. We also emphasize that this scheme can be utilized for any softmax model, regardless of the training method. Recent successes of deep neural networks hinge on their efficient implementation on specialized hardware: Graphics Processor Units (GPU), which enables training of large models in reasonable time. Often, methods with theoretically faster runtime are dismissed by practitioners because of their incompatibility with the hardware, rendering them hard to implement efficiently and ultimately not widely used. In this section, we first detail how our method is indeed amenable to GPU implementation and can amount to wall-clock gains in practice, and explain why LSH Softmax is easy to implement in the context of modern deep learning frameworks who often provide a gradient computation API.GPU Implementation Standard LSH implementations consist of three steps: DISPLAYFORM0 b, retrieve candidates in each of the L tables. Let us denote C q the number of candidates retrieved.3. Distances: Given those candidates {x 1, . . ., x Cq} ⊂ R d, compute the distances {q T x i} i≤Cq and only return the closest one. It is also important to note that deep learning models are often trained using minibatch optimization; let us describe how each of these steps can be computed efficiently and in the minibatch setting. The first step is amenable to the GPU setting; a batch of queries {q i} i≤m ⊂ R d can be represented by Q ∈ R m×d. Given that the hyperplanes are similarly presented in matrix form i.e. H ∈ R d×(b×L), the hashing step is equivalent to sign (Q · H) ∈ {0, 1} m×(b×L). This is the type of operations that GPUs excel at: matrix-multiply followed by element-wise function. The second step, while not as compatible with GPU, is still massively parallelizable using multithreading on CPU. Given the computed signatures, one can run parallelism at the query level (i.e. each thread retrieves candidates for a given query), rendering that step efficient. It also allows for more memory-efficient look-up such as BID18.The last operation is, once again, very amenable to GPU. It simply consists of a gather (i.e. building a matrix with the appropriate indexes from the candidates) into a 3-d tensor. Indeed, after the previous step, the LSH structure returns m lists of s candidates, and the gather step returns the appropriate vectors from the vocabulary into a 3-d tensor of shape R m×s×d. 
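The hashing step just described reduces to a single matrix multiply followed by an element-wise sign, which is the GPU-friendly pattern being argued for; the distance step is completed in the next paragraph. In the NumPy sketch below, the packing of each table's bits into an integer bucket key is an illustrative choice.

```python
import numpy as np

d, b, L, m = 128, 16, 8, 32                       # feature dim, bits/table, tables, batch size
H = np.random.randn(d, b * L)                     # hyperplanes for all tables at once
Q = np.random.randn(m, d)                         # a batch of queries

bits = (np.sign(Q @ H) > 0).astype(np.int64)      # step 1: one matmul + element-wise sign
bits = bits.reshape(m, L, b)

# Pack each table's b bits into an integer bucket key (illustrative packing).
keys = (bits * (1 << np.arange(b))).sum(axis=-1)  # shape (m, L): one bucket id per table
```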
As the batched queries can be also seen as a 3-d tensor R m×d×1, computing the exact distances then reduces to a batch matrix-multiply which is a very efficient operation on GPU.Software Implementation Another crucial point for practitioners is the ability to rely on frameworks automatically providing gradients, such as BID0 BID19, to implement deep learning models; this abstracts away the need to write down the exact gradients which can be both cumbersome and error-prone. An additional advantage of our estimator is that it can be effortlessly implemented in these frameworks. Indeed, given logits computed over the nearest-neighbors and the additional uniformly sampled indexes, one can compute the estimate of the partition function and thus an estimate of the loss. Computing the gradient estimators now reduces to differentiating this loss, which can be very simply done using the framework's differentiation API. After having presented our new layer LSH Softmax, we now proceed to show its applicability and efficiency in a real-world setting for deep learning practitioners, specifically towards language modeling. We first show that our method significantly outperforms approximate softmax baselines while performing within 20% of the performance of the exact softmax. We then provide a computational comparison. While we evaluate our method on NLP tasks, we want to emphasize that it is directly applicable to other domains, such as vision. However, public vision benchmark datasets with large output spaces require significantly more computational resources (e.g. 98 GPU nodes for 8 days for Flickr100M BID31) which is outside the scope of this paper. Language modeling is the task of, given a sequence of words (w 1, . . ., w T) in a vocabulary V, estimating p(w 1, . . ., w T) = t≤T p(w t |w <t). Substantial work has been done to model these distributions using non-parametric n-gram counts with additional smoothing techniques, but can fail to model long histories because of an exponential number of sequences. Recently, parametric models using RNNs have shown impressive success on these tasks BID22. In this setting, large output spaces arise naturally, as the vocabulary size can range from 10 4 to 10 6. We first describe our experimental protocol, and then report perplexity (ppl) of LSH Softmax against a set of baselines on this task for several datasets. Datasets We evaluate our method on three standard datasets for Language Modeling with varying number of characters and vocabulary size:• Penn TreeBank (PTB): We follow the pre-processing described by BID22, which in 929k training tokens, 73k validation and 82k test tokens with a 10k vocabulary size.• Text8 is a dataset consisting of the first 100 millions characters of Wikipedia, and has a vocabulary size of 44k. This dataset has been used recently in the context of language modeling BID33. We use the 90M first words for training and split the remaining between the validation and test set.• Wikitext-2. First introduced in BID21, this is a selected corpus of Wikipedia articles. It has a vocabulary size of 33k and contains 217k tokens. As previously, we split between a training, validation and testing set. Baselines We evaluate the performance of models trained with exact softmax i.e. 
computed over the entire output space, Biased Importance Sampled softmax (BIS), as presented in BID12, which consists of sub-sampling the vocabulary according to a proposal distribution based on unigram counts, and Negative Sampling (NS), proposed in, equivalent to (BIS) with a uniform distribution, standard Importance Sampling BID15 and (Our models are trained using SGD using gradient clipping, with an initial learning rate of 20. This learning rate is annealed when the validation perplexity plateaus. Our models are trained for 40 epochs for PTB, 3 epochs for Text8, 25 epochs for Wikitext-2. With the notations of Theorem 3, for LSH Softmax, we choose k = 10 |V| and l = |V|. For the IS and NS baselines, we choose to sample k + l classes from the output space for a fair comparison. We choose the number of bits per signature b log 2 |V| and choose L, number of tables, to have sufficient recall for the MIPS task. We report perplexity for a fixed architecture but comparing different softmax evaluations; we present both learning curves and perplexity on each set. We report the perplexity of all trained models using the exact probabilities i.e. the full softmax. Perplexities are reported in Table 1 and learning curves in FIG1 . We see that LSH Softmax consistently outperforms the approximate baselines by a fair margin while performing a similar number of operations, showcasing the strength of this estimator. We also observe from the training curves that approximate methods' performances tend to plateau, as IS and NS cannot target the proper classes to push down. In constrast, LSH Softmax does not. Having established that the proposed estimator performs very well on real-world tasks, we now proceed to evaluate the computation gains. It is important to note that for models with large output spaces, the softmax computation can amount to about 80% of the total computation (Joulin et 2016; BID13 ; we thus choose to only evaluate computational gains in the softmax layer. We evaluate our method in CPU, with a batch size of 1, to have an accurate estimation of the ratio of FLOPS. We report both speed-up and validation perplexity (ppl) relative difference with the exact softmax for LSH Softmax and NS. Note that NS requires the same number of operations as importance sampling (IS) but outperforms it in all tasks. Additionally, we show the speed-ups one can achieve on the One Billion Word dataset BID5, whose ppl was not evaluated due to computational constraints. We report the in Table 2. We observe that, while faster, NS performs significantly worse than LSH Softmax. Furthermore, its performance deteriorates significantly when increasing the size of the output space, contrary to LSH Softmax which always performs in the same relative range. Table 2: LSH Softmax performs closest to the exact softmax and handily outperforms importance sampling based methods with no concentration guarantees. In recent years, MIPS-based estimators for log-linear models have been explored in the literature. BID32 propose retrieving the largest logits using LSH and estimating the Softmax using only those classes. Their method is encompassed in ours by simply setting l to 0. However, we note that not accounting for the tail can lead to highly biased gradients. Indeed, BID26 show that, using only the top-k largest values leads to significantly worse performance. In a similar direction, BID30 propose using LSH at each layer and only retaining the largest activations which can be viewed as a form of adaptive dropout. 
This work differs from ours in two ways: first, their paper provides no theoretical guarantees, and second, they focus on reducing memory footprint, which is not the aim of our work. Finally, BID29 proposed using the LSH structure as a proposal distribution to evaluate the softmax. While unbiased and efficient, their method does not offer any concentration guarantees, and the estimator can have arbitrarily bad variance. In this work, we presented LSH Softmax, a softmax approximation layer for large output spaces with sub-linear learning and inference cost (in the number of states) and strong theoretical guarantees. We showcased both its applicability and efficiency by evaluating it on a common NLP task, language modeling. On several datasets for this task, we report perplexity closest to that of exact training among all baselines, as well as significant speed-ups. Our hope is that, for any architecture, this layer could be chosen in lieu of the softmax when the output space is sufficiently large to warrant the approximation. To that end, we plan to release source code with the camera-ready version.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SJ3dBGZ0Z
we present LSH Softmax, a softmax approximation layer for sub-linear learning and inference with strong theoretical guarantees; we showcase both its applicability and efficiency by evaluating on a real-world task: language modeling.
Can the success of reinforcement learning methods for simple combinatorial optimization problems be extended to multi-robot sequential assignment planning? In addition to the challenge of achieving near-optimal performance in large problems, transferability to an unseen number of robots and tasks is another key challenge for real-world applications. In this paper, we suggest a method that achieves the first success in both challenges for robot/machine scheduling problems. Our method comprises of three components. First, we show any robot scheduling problem can be expressed as a random probabilistic graphical model (PGM). We develop a mean-field inference method for random PGM and use it for Q-function inference. Second, we show that transferability can be achieved by carefully designing two-step sequential encoding of problem state. Third, we resolve the computational scalability issue of fitted Q-iteration by suggesting a heuristic auction-based Q-iteration fitting method enabled by transferability we achieved. We apply our method to discrete-time, discrete space problems (Multi-Robot Reward Collection (MRRC)) and scalably achieve 97% optimality with transferability. This optimality is maintained under stochastic contexts. By extending our method to continuous time, continuous space formulation, we claim to be the first learning-based method with scalable performance in any type of multi-machine scheduling problems; our method scalability achieves comparable performance to popular metaheuristics in Identical parallel machine scheduling (IPMS) problems. Suppose that we are given a set of robots and seek to serve a set of spatially distributed tasks. A reward is given for serving each task promptly -ing in a time-decaying reward collection problem -or when completing the entire set of tasks -ing in a makespan minimization problem. As the capability to control and route individual robots has increased [], efficient orchestration of robots arises as an important remaining concern for such problems. Multi-robot planning problems. In this paper, we focus on orchestration problems that can be formulated as robot planning problems. A key assumption in such orchestration problems is that we are given information on the "duration of time required for an assigned robot to complete a task". This duration may be deterministic (e.g. as in a Traveling Salesman Problem (TSP) or Vehicle Routing Problem (VRP)) or random with given probability distribution (c.f., []). 1. We call this duration the task completion time. Due to their combinatorial nature, robot planning problems suffer from exponential computational complexity. Even in the context of single-robot scheduling problems (e.g., TSP) scalability is a concern. Planning for multiple robots exacerbates the scalability issue. While scalable heuristic methods have been developed for various deterministic multi-robot planning problems (c.f., [Rossi Proposed methods. In the seminal paper [], the authors observed that combinatorial optimization problems such as TSP can be formulated as sequential decision making problems. Decision making in such a sequential framework relies on an estimate of future costs Q(s, a) for an existing task sequence s and candidate next task a. With this estimate, given the prior decisions s at each decision step, they select the next task a to minimize the future cost estimate. []'s solution framework relies on the following three assumptions. 
1) For each combinatorial optimization problem, one can heuristically choose how to induce a graph representation of (s, a). In the case of TSP, the paper induces a fully connected graph for every possible next task. 2) This induced graph representation can be considered as a probabilistic graphical model (PGM) []. This PGM can be used with a graph-based mean-field inference method called structure2vec [] to infer Q(s, a) for use in combinatorial optimization problems. 3) Inference of Q(s, a) can be learned by the reinforcement framework called fitted Q-iteration. We create a solution framework to achieve scalability and transferability for multi-robot planning that builds in numerous directions upon the foundation of [] as follows: 1. State representation and mean-field inference theory for random PGM. Instead of heuristically inducing a PGM, we show that a robot scheduling problem exactly induces a random PGM. Since there exists no mean-field inference theory for random PGM, we develop the theory and corresponding new structure2vec iteration. 2. Sequential encoding of information for transferability. To achieve transferability in terms of the number of robots and tasks, we carefully design a two-step hierarchical mean-field inference []. Each step is designed to infer certain information. The first step is designed to infer each task's relative graphical distance from the robots. The second step is designed to infer Q(s, a) (a here refers to a joint assignment of robots). While the first step is by its nature transferable to any number of tasks and robots, the transferability in inference of the second step is achieved by the scale-free characteristic of fitted Q-iteration [van]. That is, the relative magnitudes of Q(s, a) values are sufficient to select an action a. 3. Auction-based assignment. Even if we can infer Q(s, a) precisely, the computation time required to select an action a using the maximum Q(s, a) operation exponentially increases as robots and tasks increase. To resolve this issue, we suggest a heuristic auction that is enabled by the transferability of our Q(s, a) inference. Even though this heuristic auction selects a with only polynomial computational complexity, it provides surprisingly good choices for a. (In fact, this heuristic auction increases the performance empirically relative to using the max operation.) time τ i to complete -we call this the processsing time. This time is the same independent of which machine serves the task. We incorporate one popular extension and allow'sequence-dependent setup times'. In this case, a machine must conduct a setup prior to serving each task. The duration of this setup depends on the current task i and the task j that was previously served on that machine -we call this the setup time. The completion time for each task is thus the sum of the setup time and processing time. Under this setting, we solve the IPMS problem for make-span minimization as discussed in []. That is, we seek to minimize the total time spent from the start time to the completion of the last task. The IPMS formulation resembles our MRRC formulation in continuous-time and continuous-space and we relegate the detailed formulation to Appendix B. In Section 2, we formulated multi-robot/machine planning problems as sequential joint assignment decision problems. As in [], we will select a joint assignment using a Q-function based policy. 
Since we thus choose action a t with the largest inferred Q(s t, a t) value in state s t, the development of a Q(s t, a t) inference method is a key issue. Toward this end and motivated by these robot planning problems, we provide new in random PGM-based mean-field inference methods and a subsequent extension of the graph-neural network based inference method called structure2vec [] in Section 3.1. In Section 3.2, we discuss how a careful encoding of information using the extended structure2vec of Section 3.1 enables precise and transferable Q(s t, a t) inference. Since the computational complexity required to identify the best joint assignment is exponential with respect to the number of robots and tasks, Section 3.3 discusses how the transferability of our Q(s t, a t) inference method enables a good action choice heuristic with polynomial computational complexity. PGM. Given random variables {X k}, suppose that joint distribution of {X k} can be factored as PGM-based mean-field inference. One popular use of this PGM information is PGM-based mean-field inference. In mean-field inference, we find a surrogate distribution Q(X 1, . . ., X n) = i Q i (x i) that has smallest Kullback-Leibler distance to original joint distribution P (X 1, . . ., X n). We then use this surrogate distribution to solve the original inference problem. [] shows that when we are given PGM information, {Q i (x i)} can be analytically computed by a fixed point equation. Despite that this usefulness, in most inference problems it is unrealistic to assume we know or can infer probability distributions of a PGM. This limitation was addressed in [] using a method called structure2vec.. [] suggests that an inference problem with graph-structured data (e.g. a molecule classification problem) can be seen as a particular PGM structure that consists of two types of random variables. One type of random variables {X k} is one that serves as input of inference problem (e.g. X k denotes atomic number of atom k). Another type of random variables {H k} is latent random variable where H k is a latent random variable related to X k. Existence of probabilistic relationships among {H k} are assumed heuristically from graph structure of data. Then the particular PGM structure they assume is, where V denotes the set of vertex indexes. The goal of mean-field inference problem is to find a surrogate distribution Q k (h k) for posterior marginal P ({h k}|{x k}). However, we can't compute {Q k (h k)} since we are not given φ (H k |H i) nor φ (H k |X k). To overcome this limitation, [] develops a method called structure2vec that only requires the structure of the PGM for mean-field inference. structure2vec embeds the mean-field inference procedure, i.e. fixed point iteration on {Q k (h k)}, into fixed point iterations of neural networks on vectors {μ k}. Derivation of such fixed point iterations of neural networks can be found in and can be written asμ k = σ W 1 x k + W 2 j =kμ j where σ denotes Relu function and W denotes parameters of neural networks. Robot scheduling as random PGM-based mean-field inference. All applications of structure2vec in [Dai et al. (2016; 2017) ] heuristically decide the structure of PGM of each data point from its graph structure. The key observation we make is that inference problems in robot scheduling exactly induce a'random' PGM structure (to be precise, a 'random' Bayesian Network). Given that we start from state s t and action a t, consider a random experiment "sequential decision making using policy φ". 
In this experiment, we can define an event as'How robots serve all the remaining tasks in which sequence'. We call one such event a'scenario'. For each task t i ∈ T t, define a random variable X i as'a characteristic of task t i' (e.g. when task i is served). Given a scenario, the relationships among {X i} satisfy as a Bayesian Network. For details, see Appendix C) Note that we do not know which scenario will occur from time t and thus do not know which PGM will be realized. Besides, the inference of probability of each scenario is challenging. Putting aside this problem for a while, we first define a'random PGM' and'semi-cliques'. Denote the set of all random variables in the inference problem as X = {X i}. A random PGM is a probabilistic model of how a PGM is randomly chosen from a set of all possible PGMs on X 4. Next, denote the set of all possible probabilistic relationships on X as C X. We call them'semi-cliques'. In robot scheduling problem, a semi-clique D ij ∈ C X is a conditional dependence X i |X j. The semi-clique D ij presents as an actual clique if and only if the robot which finishes task t i chooses task t j as the next task. We will now prove that we don't have to infer the probability of each scenario, i.e. random PGM model itself. The following theorem for mean-field inference with random PGM is an extension of mean-field inference with PGM [] and suggests that only a simple inference task is required: inference of the presence probability of each semi-cliques. Theorem 1. Random PGM based mean field inference Suppose we are given a random PGM on X = {X k}. Also, assume that we know presence probability {p m} for all semi-cliques where Z k is a normalizing constant and φ m is the clique potential for clique m. From this new , we can develop the structure2vec inference method for random PGM. As in [], we restrict our discussion to when every semi-clique is between two random variables. In this case, a semi-clique can be written as D ij with its presence probability p ij. Lemma 1. Structure2vec for random PGM. Suppose we are given a random PGM model with X = {X k}. Also, assume that we know presence probability {p ij} for all semi-cliques C X = {D ij}. The fixed point iteration in Theorem 1 for posterior marginal P ({H k}|{x k}) can be embedded in a nonlinear function mapping with embedding vectorμ k as Proof of Thorem 1 and lemma 1. For brevity, proofs are relegated to the Appendix D and E. Corollary 1. For a robot scheduling problem with set of tasks t i ∈ T t, the random PGM representation for structure2vec in lemma 1 is ((T t, E {p ij} inference procedure employed in this paper is as follows. Denote ages of task i, j as age i, age j. Note that if we generate M samples of ij as {e is an unbiased and consistent estimator of E[f ( ij, age i, age j)]. For each sample k, for each task i and task j, we form a vector of u k ij = (e k ij, age i, age j) and compute We obtain {p ij} from {g ij} using softmax. Algorithm details are in Appendix F. In this section, we show how Q(s t, a t) can be precisely and transeferably inferred using a two-step structure2vec inference method (For theoretical justifications on hierarchical variational inference, see). We here assume that we are given (T t, E T T t) and inferred {p ij} so that Corollary 1 can be applied. 
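A minimal sketch of the presence-probability-weighted embedding update suggested by Lemma 1 is given below. The exact update did not survive extraction, so scaling each neighbor message by p_ij before the ReLU, consistent with the standard structure2vec update quoted earlier, is an assumption.

```python
import numpy as np

def structure2vec_random_pgm(X, P, dim=16, iters=4, seed=0):
    """Embed each task node given node features X (n, f) and semi-clique
    presence probabilities P (n, n); neighbor messages are weighted by P."""
    rng = np.random.default_rng(seed)
    n, f = X.shape
    W1 = rng.normal(scale=0.1, size=(f, dim))
    W2 = rng.normal(scale=0.1, size=(dim, dim))
    mu = np.zeros((n, dim))
    for _ in range(iters):
        msg = P @ mu                               # expected neighbor embedding under p_ij
        mu = np.maximum(0.0, X @ W1 + msg @ W2)    # ReLU(W1 x_k + W2 * weighted neighbor sum)
    return mu
```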
For brevity, we illustrate the inference procedure for the special case when the task completion time is deterministic (Appendix G illustrates how we can combine random sampling with the inference procedure to deal with task completion times as random variables). Step 1. Distance Embedding. The output vectors {μ̃^1_k} of structure2vec embed local graph information around each node []. We here focus on embedding information about robot locations around a task node and thus infer each task's 'relative graphical distance' from the robots around it. As the input of the first structure2vec ({x_k} in Lemma 1), we only use robot assignment information (if t_k is an assigned task, we set x_k to the task completion time of the assignment; if t_k is not an assigned task, we set x_k = 0). This procedure is illustrated in Figure 1. According to [], the output vectors {μ̃^1_k} of structure2vec will include sufficient information about the relative graphical distance from all robots to each task. Step 2. Value Embedding. The second step is designed to infer 'how much value is likely in the local graph around each task'. Recall that the vectors {μ̃^1_k}, the output vectors of the first step, carry information about the relative graphical distance from all robots to each task. We concatenate the 'age' of each task {age_k} to each corresponding vector in {μ̃^1_k} and use the resulting graph as the input ({x_k} in Lemma 1) of the second structure2vec, as illustrated in Figure 1. Again, the vectors {μ̃^2_k} of the output graph of the second structure2vec operation embed the local graph structure around each node. Our intuition is that {μ̃^2_k} includes sufficient information about 'how much value is likely in the local graph around each task'. Step 3. Computing Q(s_t, a_t). To infer Q(s_t, a_t), we aggregate the embedding vectors of all nodes, i.e., μ̃^2 = ∑_k μ̃^2_k, to get one vector μ̃^2 which embeds the 'value likeliness' of the global graph. We then use a layer of neural network to map μ̃^2 into Q(s_t, a_t). The detailed algorithm for the whole procedure above (combined with random task completion times) is given in Appendix G. Why is each inference step transferable? For the first step, it is trivial; the inference problem is a scale-free task. In the second step, the 'value likeliness' will be underestimated or overestimated according to the ratio of (number of robots / number of tasks) in a local graph: underestimated if the ratio in the training environment is smaller than the ratio in the testing environment; overestimated otherwise. The key idea in solving this problem is that this over/under-estimation does not matter in Q-function based action decisions [van] as long as the order of Q-function values among actions is the same. While an analytic justification of this order invariance is beyond this paper's scope, the fact that there is no over/under-estimation issue in the first-step inference problem helps this justification. In Q-function based action choice, at each time-step t, we find an action with the largest Q(s_t, a_t). We call this action choice operation the 'max-operation'. The problem with the max-operation in the multi-robot setting is that the number of computations increases exponentially as the number of robots and tasks increases. In this section, we show that the transferability of Q-function inference enables designing an efficient heuristic auction that replaces the max-operation. We call it the auction-based policy (ADP) and denote it as φ_Q_θ, where Q_θ indicates that we compute φ_Q_θ using the current Q_θ estimator.
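The three-step inference procedure of Section 3.2 can be summarized with the sketch below, which reuses the structure2vec_random_pgm helper from the earlier sketch; the embedding sizes, the linear readout, and the exact encoding of assignment information and ages into node features are assumptions for illustration, not the authors' exact architecture.

def infer_q_value(assign_cost, ages, P, params):
    # assign_cost[k]: task completion time of the assignment covering task k
    # under action a_t (0 if task k is unassigned); ages[k]: age of task k;
    # P: presence probabilities {p_ij}; params: dict of weight matrices.
    n = len(ages)
    # Step 1: distance embedding -- node features are assignment information only.
    x1 = np.asarray(assign_cost, dtype=float).reshape(n, 1)
    mu1 = structure2vec_random_pgm(x1, P, params["W1a"], params["W2a"])
    # Step 2: value embedding -- concatenate task ages to the step-1 embeddings.
    x2 = np.concatenate([mu1, np.asarray(ages, dtype=float).reshape(n, 1)], axis=1)
    mu2 = structure2vec_random_pgm(x2, P, params["W1b"], params["W2b"])
    # Step 3: aggregate node embeddings and map to a scalar Q(s_t, a_t).
    pooled = mu2.sum(axis=0)
    return float(params["w_out"] @ pooled + params["b_out"])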
At time-step t, a state s_t is a graph G_t = (R_t ∪ T_t, E_t) as defined in Section 2.1. Our ADP, φ_Q_θ, finds an action a_t (which is a matching in the bipartite graph ((R_t ∪ T_t), E_RT_t) of graph G_t) through iterations between two phases: the bidding phase and the consensus phase. We start with the bidding phase. All robots initially know the matching determined in previous iterations. We denote this matching as Y, a bipartite subgraph of ((R_t ∪ T_t), E_RT_t). When making a bid, a robot r_i ignores all other unassigned robots. For example, suppose robot r_i considers t_j for bidding. For r_i, Y ∪ {r_i t_j} is a proper action (according to the definition in Section 2.1) in an 'unassigned-robot-ignored' problem. Robot r_i thus can compute Q(s_t, Y ∪ {r_i t_j}) of the 'unassigned-robot-ignored' problem for every unassigned task t_j. If task t* has the highest value, robot r_i bids {r_i t*, Q(s_t, Y ∪ {r_i t*})} to the auctioneer. Since the number of robots ignored by r_i is different at each iteration, the transferability of Q-function inference plays a key role. The consensus phase is simple. The auctioneer finds the bid with the best value, say {r* t*, its bid value}. Then the auctioneer updates everyone's Y as Y ∪ {r* t*}. These bidding and consensus phases are iterated until we can't add an edge to Y anymore. Then the central decision maker chooses Y as φ_Q_θ(s_k). One can easily verify that the computational complexity of computing φ_Q_θ is O(|L_R| |L_T|), which is only polynomial. While a theoretical performance guarantee for this heuristic auction is outside this paper's scope, in Section 5 we show that empirically this heuristic achieves near-optimal performance. 4 LEARNING ALGORITHM In fitted Q-iteration, we fit the θ of Q_θ(s_t, a_t) with stored data using the Bellman optimality equation. That is, we choose the θ that makes the Bellman error small. Note that every update of θ needs at least one max-operation. To solve this issue, we suggest a learning framework we call auction-fitted Q-iteration. What we do is simple: when we update θ, we use the auction-based policy (ADP) defined in Section 3.3 instead of the max-operation. That is, we seek the parameter θ that minimizes the same Bellman error but with the max-operation replaced by the auction-based policy. How can we conduct exploration in the auction-fitted Q-iteration framework? Unfortunately, we can't use the ε-greedy method since such a randomly altered assignment is very likely to cause a catastrophic result in problems with a combinatorial nature. In this paper, we suggest that parameter space exploration [] can be applied. Recall that we use Q_θ(s_k, a_k) to get the policy φ_Q_θ(s_k). Note that θ denotes all neural network parameters used in the structure2vec iterations introduced in Section 5. Since Q_θ(s_k, a_k) is parametrized by θ, exploration with φ_Q_θ(s_k) can be performed by exploration with the parameter θ. Such exploration in parameter space has been introduced in the policy gradient RL literature. While this method was originally developed for policy gradient based methods, exploration in parameter space can be particularly useful in auction-fitted Q-iteration. The detailed application is as follows. When conducting exploration, we apply a random perturbation to the neural network parameters θ in structure2vec. This results in a perturbation of the Q-function used for decision making via the auction-based policy φ_Q_θ(s_k) throughout that problem. Similarly, when conducting exploitation, the current surrogate Q-function is used throughout the problem. Updates for the surrogate Q-function may only occur after each problem is complete (and typically after a group of problems).
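A compact sketch of the bidding/consensus loop described in Section 3.3 is given below; q_fn stands in for the Q(s_t, ·) estimator of Section 3.2 and is a hypothetical callable, and no attempt is made here to match the exact complexity accounting given in the text.

def auction_based_policy(robots, tasks, q_fn):
    # Greedy bidding/consensus heuristic (ADP).  q_fn(Y, r, t) should return
    # Q(s_t, Y ∪ {r -> t}) for the 'unassigned-robot-ignored' problem.
    Y = {}                                   # partial matching: robot -> task
    free_robots, free_tasks = set(robots), set(tasks)
    while free_robots and free_tasks:
        # Bidding phase: each unassigned robot bids its best task and value.
        bids = []
        for r in free_robots:
            value, task = max((q_fn(Y, r, t), t) for t in free_tasks)
            bids.append((value, r, task))
        # Consensus phase: the auctioneer commits the single best bid.
        value, r_star, t_star = max(bids)
        Y[r_star] = t_star
        free_robots.remove(r_star)
        free_tasks.remove(t_star)
    return Y

In auction-fitted Q-iteration, this same routine replaces the max-operation when forming the targets for the θ update, so no exhaustive search over joint assignments is needed.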
For MRRC, we conduct a simulation experiment in a discrete-time, discrete-state environment. We use the maze generator (see Figure 1) of the UC Berkeley CS188 Pacman project [] to generate large mazes. We generated a new maze for every training and testing experiment. Under the deterministic environment, a robot succeeds in its intended movement 100% of the time. Under the stochastic environment, a robot succeeds in its intended movement 55% of the time on grid cells with dots and moves in each other direction 15% of the time; on grid cells without dots, the rates are 70% and 10%. As described in Section 2, routing problems are already solved. That is, each robot knows how to optimally (in expectation) reach a task. To find an optimal routing policy, we use Dijkstra's algorithm for deterministic environments and dynamic programming for stochastic environments. The central assignment decision maker has enough samples of the task completion time for every possible route. We consider two reward rules: linearly decaying rewards obey f(age) = 200 − age until reaching 0, where age is the task age when served; for nonlinearly decaying rewards, f(t) = λ^t for λ = 0.99. Initial ages of tasks were uniformly distributed in the interval. Performance test. We tested the performance under four environments: deterministic/linear rewards, deterministic/nonlinear rewards, stochastic/linear rewards, and stochastic/nonlinear rewards. There are three baselines used for the performance test: an exact baseline, a heuristic baseline, and an indirect baseline. For the experiment with a deterministic environment and linearly decaying rewards, an exact optimal solution of a mixed-integer program exists and can be used as a baseline. We solve this program using Gurobi with a 60-minute cutoff to get the baseline. We also implemented the most up-to-date heuristic for MRRC in []. For the other experiments, with nonlinearly decaying rewards or a stochastic environment, no such exact optimal solution or other heuristic method exists. In these cases, we should be conservative when talking about performance. Our strategy is to construct an indirect baseline using a universally applicable algorithm called the Sequential Greedy Algorithm (SGA) []. SGA is a polynomial-time task allocation algorithm that shows decent, scalable performance for both linear and non-linear rewards. For stochastic environments, we use the mean task completion time for task allocation and re-allocate all tasks at every time-step. We construct our indirect baseline as the 'ratio between our method and SGA for experiments with deterministic, linearly decaying rewards'. Showing that this ratio is maintained for stochastic environments with both linear and nonlinear rewards suffices for our purpose. Table 1 shows experiment results for the (# of robots, # of tasks) pairs listed there; for linear/deterministic rewards, our proposed method achieves near-optimality (all above 95% optimality). While there is no exact or comparable performance baseline for experiments under the other environments, the indirect baseline (%SGA) at least shows that our method does not lose %SGA for stochastic environments compared with %SGA for deterministic environments, in both linear and nonlinear rewards. Scalability test. We count the training requirements for 93% optimality for seven problem sizes (# of robots N_R, # of tasks N_T) with deterministic/linearly decaying rewards (we can compare optimality only in this case). As we can see in Table 2, the training requirement does not appear to scale up as the problem size increases. Transferability test.
Suppose that we trained our learning algorithm with problems of three robots and 30 tasks. We can claim transferability of our algorithm if it achieves similar performance when tested on problems of 8 robots and 50 tasks compared with an algorithm specifically trained on problems of 8 robots and 50 tasks, the same size as the testing problems. Table 3 shows our comprehensive experiment to test transferability. The results on the diagonal (where training size and testing size are the same) become a baseline, and we can compare how well the networks trained with a different problem size did compared to those results. We can see that lower-direction transfer tests (trained with a larger problem and tested with smaller problems) show only a small loss in performance. For upper-direction transfer tests (trained with a smaller problem and tested with a larger problem), the performance loss was up to 4 percent. Ablation study. There are three components in our proposed method: 1) a careful encoding of information using two layers of structure2vec, 2) the new structure2vec equation with a random PGM, and 3) an auction-based assignment. Each component was removed from the full method and tested to check the necessity of the component. We test the performance in a deterministic environment with linearly decaying rewards (so that there is an optimal solution available for comparison). The experimental results are shown in Figure 2. While the full method requires more training steps, only the full method achieves near-optimal performance. For IPMS, we test with a continuous-time, continuous-state environment. While there have been many learning-based methods proposed for (single) robot scheduling problems, to the best of our knowledge our method is the first learning method to claim scalable performance among machine-scheduling problems. Hence, in this case, we focus on showing comparable performance for large problems, instead of attempting to show the superiority of our method compared with heuristics specifically designed for IPMS (in fact, no heuristic was specifically designed to solve our exact problem: makespan minimization with sequence-dependent setup and no restriction on setup times). For each task, the processing time is determined using a uniform distribution. For every ordered pair (task i, task j), a unique setup time is determined using a uniform distribution. As illustrated in Section 2, we want to minimize the makespan. As a benchmark for IPMS, we use the Google OR-Tools library. This library provides metaheuristics such as Greedy Descent, Guided Local Search, Simulated Annealing, and Tabu Search. We compare our algorithm's result with the heuristic with the best result for each experiment. We consider cases with 3, 5, 7, 10 machines and 50, 75, 100 jobs. The results are provided in Table 4. The makespan obtained by our method divided by the makespan obtained by the baseline is reported. Although our method has limitations in problems with a small number of tasks, it shows comparable performance for a large number of tasks and shows its value as the first learning-based machine scheduling method that achieves scalable performance. We presented a learning-based method that achieves the first success for multi-robot/machine scheduling problems on both challenges: scalable performance and transferability. We identified that robot scheduling problems have an exact representation as a random PGM. We developed a mean-field inference theory for random PGMs and extended the structure2vec method of [].
To overcome the limitations of fitted Q-iteration, a heuristic auction enabled by transferability is suggested. Through experimental evaluation, we demonstrate our method's success for MRRC problems under deterministic and stochastic environments. Our method is also the first learning-based algorithm to claim scalable performance among machine scheduling algorithms; it achieves comparable performance in a scalable manner. Our method for MRRC problems can be easily extended to ride-sharing problems or package delivery problems. Given a set of all user requests to serve, those problems can be formulated as an MRRC problem. For both ride-sharing and package delivery, it is reasonable to assume that the utility of a user depends on when she is completely serviced. We can model how the utility of a user decreases over time after the request appears and set the objective function of the problem as maximizing the total collected user utility. Now consider a task 'deliver user (or package) from A to B'. This is actually a task "Move to location A and then move to location B". If we know the completion time distribution of each move (as we did for MRRC), the task completion time is simply the sum of the two random variables corresponding to the task completion time distributions of the moves in the task. Indeed, ride-sharing and package delivery problems are composed of such tasks (we can ignore charging moves for simplicity, and we also don't have to consider simple relocation of vehicles or robots since we don't consider random customer arrivals). Therefore, both ride-sharing problems and package delivery problems can be formulated as MRRC problems. A MRRC WITH CONTINUOUS STATE/CONTINUOUS TIME SPACE FORMULATION, OR WITH SETUP TIME AND PROCESSING TIME In the continuous state/continuous time space formulation, the initial and ending locations of robots and tasks are arbitrary points in R^2. At every moment at which at least one robot finishes a task, we make an assignment decision for the free robot(s). We call these moments 'decision epochs' and express them as an ordered set (t_1, t_2, ..., t_k, ...). Abusing this notation slightly, we use (·)_{t_k} = (·)_k. The task completion time can consist of three components: travel time, setup time and processing time. While a robot in the travel phase or setup phase may be reassigned to other tasks, we can't reassign a robot in the processing phase. Under these assumptions, at each decision epoch robot r_i is given a set of tasks it can be assigned to: if it is in the traveling phase or setup phase, it can be assigned to any task or not assigned; if it is in the processing phase, it must be reassigned to its unfinished task. This problem can be cast as a Markov Decision Problem (MDP) whose state, action, and reward are defined as follows: State. The state s_k at decision epoch k is a directed graph G_k = (R_k ∪ T_k, E_k): R_k is the set of all robots and T_k is the set of all tasks; E_k is the set of directed edges, where a directed edge r_i t_j ∈ E_RT_k is a random variable which denotes the task completion time of robot i in R_k to service task j in T_k, and a directed edge t_i t_j ∈ E_TT_k denotes the task completion time of a robot which just finished serving task i in T_k to then service task j in T_k. E_RT_k contains information about each robot's possible assignments: E_RT_k = ∪_i E_ri_k, where E_ri_k is a singleton set if robot i is in the processing phase and it must be assigned to its unfinished task, and otherwise it is the set of possible assignments from robot r_i to the remaining tasks that are not in the processing phase. Action.
The action a k at decision epoch k is the joint assignment of robots given the current state s k = G k. The feasible action should satisfy the two constraints: No two robots can be assigned to a task; some robots may not be assigned when number of robots are more than remaining tasks. To best address those restrictions, we define an action a k at time t as a maximal bipartite matching in bipartite sub-graph ((R k ∪ T k), E RT k ) of graph G k. For example, robot i in R k is matched with task j in T k in an action a k if we assign robot i to task j at decision epoch t. We denote the set of all possible actions at epoch k as A k. Reward. In MRRC, Each task has an arbitrarily determined initial age. At each decision epoch, the age of each task increases by one. When a task is serviced, a reward is determined only by its age when serviced. Denote this reward rule as R(k). One can easily see that whether a task is served at epoch k is completely determined by s k, a k and s k+1. Therefore, we can denote the reward we get with s k, a k and s k+1 as R(s k, a k, s k+1). Objective. We can now define an assignment policy φ as a function that maps a state s k to action a k. Given s 0 initial state, an MRRC problem can be expressed as a problem of finding an optimal assignment policy φ * such that As written in 2.2, IPMS is a problem defined in continuous state/continuous time space. Machines are all identical, but processing times of tasks are all different. In this paper, we discuss IPMS with'sequence-dependent setup time'. A machine's setup time required for servicing a task i is determined by its previously served task j. In this case, the task completion time is the sum of setup time and processing time. Under this setting, we solve IPMS problem for make-span minimization objective discussed in [] (The constraints are different in this problem though); That is, minimizing total time spent from start to end to finish all tasks. Every time there is a finished task, we make assignment decision for a free machine. We call this times as'decision epochs' and express them as an ordered set (t 1, t 2, . . ., t k, . . .). Abusing this notation slightly, we use (·) t k = (·) k. Task completion time for a machine to a task consists of two components: processing time and setup time. While a machine in setup phase may be reassigned to another task, we can't reassign a machine in the processing phase. Under these assumptions, at each epoch, a machine r i is given a set of tasks it can assign: if it is in the setup phase, it can be assigned to any tasks or not assigned; if it is in the processing phase, it must be reassigned to its unfinished task. This problem can be cast as a Markov Decision Problem (MDP) whose state, action, and reward are defined as follows: State. State s k at decision epoch k is a directed graph G k = (R k ∪ T k, E k): R k is the set of all machines and T k is the set of all tasks; The set of directed edges where a directed edge ritj ∈ E RT k is a random variable which denotes task completion time of machine i in R k to service task j in T k and a directed edge titj ∈ E T T k denotes a task completion time of a machine which just finished serving task i in T k to service task j in T k. 
E RT k contains information about each robot's possible assignments: E RT k = ∪ i E ri k, where E ri k is a singleton set if machine i is in the processing phase and it must be assigned to its unfinished task, and otherwise it is the set of possible assignments from machine r i to remaining tasks that are not in the processing phase. Action. Defined the same as MRRC with continuous state/time space. Reward. In IPMS, time passes between decision epoch t and decision epoch t + 1. Denote this time as T t. One can easily see that T t is completely determined by s k, a k and s k+1. Therefore, we can denote the reward we get with s k, a k and s k+1 as T (s k, a k, s k+1). Objective. We can now define an assignment policy φ as a function that maps a state s k to action a k. Given s 0 initial state, an MRRC problem can be expressed as a problem of finding an optimal assignment policy φ * such that T (s k, a k, s k+1) |s 0. Here we analytically show that robot scheduling problem randomly induces a random Bayesian Network from state s t. Given starting state s t and action a t, a person can repeat a random experiment of "sequential decision making using policy φ". In this random experiment, we can define events' How robots serve all remaining tasks in which sequence'. We call such an event a'scenario'. For example, suppose that at time-step t we are given robots {A, B}, tasks {1, 2, 3, 4, 5}, and policy φ. One possible scenario S * can be {robot A serves task 3 → 1 → 2 and robot B serves task 5 → 4}. Define random variable X k a task characteristic, e.g.'The time when task k is serviced'. The question is,'Given a scenario S *, what is the relationship among random variables {X k}'? Recall that in our sequential decision making formulation we are given all the'task completion time' information in the s t description. Note that, task completion time is only dependent on the previous task and assigned task. In our example above, under scenario S *'when task 2 is served' is only dependent on'when task 1 is served'. That is, P (X 2 |X 1, X 3, S *) = P (X 2 |X 1, S *). This relationship is called'conditional independence'. Given a scenario S *, every relationship among {X i |S *} can be expressed using this kind of relationship among random variables. A graph with this special relationship is called'Bayesian Network' [We first define necessary definitions for our proof. In a random PGM, a PGM is chosen among all possible PGMs on {X k} and semi-cliques C. Denote the set of all possible factorization as F = {S 1, S 2, ..., S N} where a factorization with index k is denoted as S k ⊆ C. Suppose we are given P ({S = S m}).
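The two appendix formulations above share the same state representation: a graph over robots and tasks whose edges carry (possibly random) completion times. A minimal container for that state, with illustrative field names and completion times stored as Monte-Carlo samples so that the deterministic case is simply a single-sample list, might look as follows.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class SchedulingState:
    # Sketch of the state graph G_k = (R_k ∪ T_k, E_k).
    robots: List[str]
    tasks: List[str]
    ages: Dict[str, float]                        # age of each task
    rt_edges: Dict[Tuple[str, str], List[float]]  # robot->task completion-time samples
    tt_edges: Dict[Tuple[str, str], List[float]]  # task->task completion-time samples
    processing: Dict[str, str] = field(default_factory=dict)  # robot -> locked task

    def feasible_assignments(self, robot: str) -> List[str]:
        # A robot (or machine) in the processing phase must stay on its
        # unfinished task; otherwise it may be assigned to any task that is
        # not currently being processed.
        if robot in self.processing:
            return [self.processing[robot]]
        busy = set(self.processing.values())
        return [t for t in self.tasks if t not in busy]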
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rJxRJeStvB
RL can solve (stochastic) multi-robot/scheduling problems scalably and transferably using graph embedding
Curriculum learning consists in learning a difficult task by first training on an easy version of it, then on more and more difficult versions and finally on the difficult task. To make this learning efficient, given a curriculum and the current learning state of an agent, we need to find what are the good next tasks to train the agent on. Teacher-Student algorithms assume that the good next tasks are the ones on which the agent is making the fastest progress or digress. We first simplify and improve them. However, two problematic situations where the agent is mainly trained on tasks it can't learn yet or it already learnt may occur. Therefore, we introduce a new algorithm using min max ordered curriculums that assumes that the good next tasks are the ones that are learnable but not learnt yet. It outperforms Teacher-Student algorithms on small curriculums and significantly outperforms them on sophisticated ones with numerous tasks. Curriculum learning. An agent with no prior knowledge can learn a lot of tasks by reinforcement, i.e. by reinforcing (taking more often) actions that lead to higher reward. But, for some very hard tasks, it is impossible. Let's consider the following task:Figure 1: The agent (in red) receives a reward of 1 when it picks up the blue ball in the adjacent room. To do so, it has to first open the gray box, take the key inside and then open the locked door. This is an easy task for humans because we have prior knowledge: we know that a key can be picked up, that we can open a locked door with a key, etc... However, most of the time, the agent starts with no prior knowledge, i.e. it starts by acting randomly. Therefore, it has a probability near 0 of achieving the task in a decent number of time-steps, so it has a probability near 0 of getting reward, so it can't learn the task by reinforcement. One solution to still learn this task is to do curriculum learning BID0 ), i.e. to first train the agent on an easy version of the task, where it can get reward and learn, then train on more and more difficult versions using the previously learnt policy and finally, train on the difficult task. Learning by curriculum may be decomposed into two parts:1. Defining the curriculum, i.e. the set of tasks the agent may be trained on. 2. Defining the program, i.e. the sequence of curriculum's tasks it will be trained on. These two parts can be done online, during training. Curriculum learning algorithms. Defining a curriculum and a program can be done manually, e.g. by defining a hand-chosen performance threshold for advancement to the next task BID6; BID5 ).However, if an efficient algorithm is found, it may save us a huge amount of time in the future. Besides, efficient (and more efficient than humans) algorithms are likely to exist because they can easily mix in different tasks (what is hard for humans) and then:• avoid catastrophic forgetting by continuously retraining on easier tasks;• quickly detect learnable but not learnt yet tasks. Hence, it motivates the research of curriculum learning algorithms. Curriculum learning algorithms can be grouped into two categories:1. curriculum algorithms: algorithms that define the curriculum; 2. program algorithms: algorithms that define the program, i.e. that decide, given a curriculum and the learning state of the agent, what are the good next tasks to train the agent on. In this paper, we will focus on program algorithms, in the reinforcement learning context. 
Recently, several such algorithms emerged, focused on the notion of learning progress (BID4; BID3; BID2). BID4 proposed four algorithms (called Teacher-Student) based on the assumption that the good next tasks are the ones on which the agent is making the fastest progress or digress. We first simplify and improve Teacher-Student algorithms (section 4). However, even when improved, two problematic situations may occur in which the agent is mainly trained on tasks it can't learn yet or has already learnt. Therefore, we introduce a new algorithm (section 5), focused on the notion of mastering rate, based on the assumption that the good next tasks are the ones that are learnable but not learnt yet. We show that this algorithm outperforms Teacher-Student algorithms on small curriculums and significantly outperforms them on sophisticated ones with numerous tasks. 2.1 CURRICULUM LEARNING First, let's recall some general curriculum learning notions defined in BID3. A curriculum C is a set of tasks {c_1, ..., c_n}. A sample x is drawn from one of the tasks of C. A distribution d over C is a family of non-negative numbers summing to one, indexed by C, i.e. d = (d_c)_{c∈C} with d_c ≥ 0 and Σ_c d_c = 1. Without loss of generality, we propose to perceive a distribution d over tasks C as coming from an attention a over tasks, i.e. d := ∆(a). An attention a over C is a family of non-negative numbers indexed by C, i.e. a = (a_c)_{c∈C} with a_c ≥ 0. Intuitively, a_c represents the interest given to task c. Let A_C be the set of attentions over C. In BID4; BID3; BID2, several distribution converters ∆, mapping an attention over C to a distribution over C, are used (without using this terminology): • the argmax distribution converter: ∆_Amax(a)_c := 1 if c = argmax_{c'} a_{c'}, and 0 otherwise. A greedy version of it, ∆_GAmax(a) := (1 − ε)∆_Amax(a) + εu, is used in BID4, with u the uniform distribution over C. • the exponential distribution converter (BID3): ∆_Exp(a)_c := exp(a_c) / Σ_{c'} exp(a_{c'}). • the Boltzmann distribution converter (BID4): ∆_Boltz(a)_c := exp(a_c/τ) / Σ_{c'} exp(a_{c'}/τ). • the powered distribution converter (BID2): ∆_Pow(a)_c := a_c^p / Σ_{c'} a_{c'}^p. An attention function a: N → A_C is a time-varying sequence of attentions over C. A program d can be rewritten using this notion: d(t) := ∆(a(t)) for a given attention converter ∆. Finally, a program algorithm can be defined as follows (Algorithm 1): given a curriculum C and an agent A, at each time-step t it computes a distribution d(t) = ∆(a(t)) over C, draws a task c from d(t) and then a sample x from c, and trains A on x. The Teacher-Student paper (BID4) presents four attention functions called Online, Naive, Window and Sampling. They are all based on the idea that the attention must be given by the absolute value of an estimate of the learning progress over tasks, i.e. a(t) := |β(t)|, where β_c(t) is an estimate of the learning progress of the agent A on task c. For the Window attention function, they first estimate the "instantaneous" learning progress of the agent A on task c by the slope β_c^Linreg(t) of the linear regression of the points (t_1, r_t1), ..., (t_K, r_tK), where t_1, ..., t_K are the K last time-steps when the agent was trained on a sample of c and where r_ti is the return obtained by the agent at these time-steps. From this instantaneous learning progress, they define β_c as the weighted moving average of β_c^Linreg. For all the algorithms, a Boltzmann or greedy argmax distribution converter is used. For example, here is the GAmax Window program algorithm proposed in the paper: Algorithm 2: GAmax Window algorithm. input: A curriculum C; An agent A; β := 0; for t ← 1 to T do a := |β|; d := ∆_GAmax(a); Draw a task c from d and then a sample x from c; Train A on x and observe return r_t; β_c^Linreg := slope of lin. reg.
of (t_1, r_t1), ..., (t_K, r_tK); β_c := αβ_c^Linreg + (1 − α)β_c. Three curriculums were used to evaluate the algorithms, called BlockedUnlockPickup, KeyCorridor and ObstructedMaze (see appendix A for screenshots of all tasks). They are all composed of Gym MiniGrid environments (BID1). These environments are partially observable and, at each time-step, the agent receives a 7 × 7 × 3 image (figure 1) along with a textual instruction. Some environments require language understanding and memory to be efficiently solved, but the ones chosen in the three curriculums don't. The agent gets a reward of 1 − n/n_max when the instruction is executed in n steps with n ≤ n_max. Otherwise, it gets a reward of 0. Before simplifying and improving Teacher-Student algorithms, here are some suggestions about which distribution converters and attention functions of the Teacher-Student paper to use and not to use. First, in this paper, two distribution converters are proposed: the greedy argmax and the Boltzmann ones. We don't recommend using the Boltzmann distribution converter because τ is very hard to tune in order to get a distribution that is neither deterministic nor uniform. Second, four attention functions are proposed: the Online, Naive, Window and Sampling ones. We don't recommend using: • the Naive attention function, because it is a naive version of the Window one and performs worse (see figure 5 in BID4); • the Sampling attention function, because it performs worse than the Window one (see figure 5 in BID4). Moreover, the reason it was introduced was to avoid hyperparameters, but it still requires tuning an ε to avoid deterministic distributions (see algorithm 8 in BID4). This leaves the Online and Window attention functions. But the Online one is similar to the Window one when K = 1. Finally, among all that is proposed in that paper, we only recommend using the Window attention function with the greedy argmax distribution converter, i.e. using the GAmax Window algorithm (algorithm 2). This is the only one we will consider in the rest of this section. Now, let's see how we can simplify and improve the GAmax Window algorithm. First, we use the instantaneous learning progress β^Linreg directly as the attention, without the weighted moving average. Second, we convert attentions into distributions proportionally, i.e. d_c := a_c / Σ_{c'} a_{c'}. This corresponds to the powered distribution converter when p = 1. We then replace the greedy argmax distribution converter by a greedy proportional one, and improve performance (figures 2 and 3). Algorithm 3: GProp Linreg algorithm. input: A curriculum C; An agent A; β^Linreg := 0; for t ← 1 to T do a := |β^Linreg|; d := ∆_GProp(a); Draw a task c from d and then a sample x from c; Train A on x and observe return r_t; β_c^Linreg := slope of lin. reg. of (t_1, r_t1), ..., (t_K, r_tK). In the rest of this article, this algorithm will be referred to as our "baseline". The two following figures show that: • the GAmax Linreg algorithm performs similarly to the GAmax Window algorithm, as asserted before. It even seems a bit more stable because the gap between the first and last quartile is smaller. • the GProp Linreg algorithm performs better than the GAmax Linreg and GAmax Window algorithms, as asserted before. Algorithms introduced in BID4; BID3; BID2, and in particular Teacher-Student algorithms and the baseline algorithm, are focused on the notion of learning progress, based on the assumption that the good next tasks are the ones on which the agent is making the fastest progress or digress. However, two problematic situations may occur: 1. The agent may be mainly trained on tasks it already learnt.
Frame B of figure 4 shows that, around time-step 500k, the agent has already learnt Unlock and UnlockPickup but is still trained 90% of the time on them, i.e. on tasks it already learnt. 2. It may be mainly trained on tasks it can't learn yet. The more tasks the curriculum has, the more this occurs: • Frame A of figure 4 shows that, initially, the agent doesn't learn Unlock but is trained 66% of the time on UnlockPickup and BlockedUnlockPickup, i.e. on tasks it can't learn yet. • Figure 5 shows that agents spend most of the time training on the hardest task of the ObstructedMaze curriculum whereas they have not yet learnt the easy tasks. Figure 4: The agent with seed 6 was trained on the BlockedUnlockPickup curriculum using the baseline algorithm. The return and probability during training are plotted. Two particular moments of the training are framed. Figure 5: 10 agents (seeds) were trained on the ObstructedMaze curriculum using the baseline algorithm. The mean return and mean probability during training are plotted. To overcome these issues, we introduce a new algorithm, focused on the notion of mastering rate, based on the assumption that the good next tasks to train on are the ones that are learnable but not learnt yet. Why this assumption? Because it can't be otherwise. Mainly training on learnt tasks or on not-learnable ones is a waste of time. This time should be spent training on, respectively, harder or easier tasks. In subsection 5.1, we first define what learnt and learnable tasks are and then, in subsection 5.2, we present this new algorithm. Learnt tasks. A min-max curriculum is a curriculum C = {c_1, ..., c_n} along with: • a family (m^1_c)_{c∈C} where m^1_c is an estimation of the minimum mean return the agent would get on task c. It should be higher than the true minimum mean return. • and a family (M^1_c)_{c∈C} where M^1_c is an estimation of the maximum mean return the agent would get on task c. It should be lower than the true maximum mean return. On such a curriculum, we can define, for a task c: • the live mean return r̄_c(t), with r̄_c := m^1_c initially and r̄_c(t) = (r_t1 + ... + r_tK)/K, where t_1, ..., t_K are the K last time-steps when the agent was trained on a sample of c and where r_ti is the return obtained by the agent at these time-steps; • the live minimum mean return m_c(t), defined from m^1_c, and analogously the live maximum mean return M̄_c(t), defined from M^1_c. From this, we can define, for a task c, the mastering rate M_c(t) := (r̄_c(t) − m_c(t)) / (M̄_c(t) − m_c(t)). Intuitively, a task c would be said "learnt" if M_c(t) is near 1 and "not learnt" if M_c(t) is near 0. Finally, we can remark that the MR algorithm is just a more general version of the Teacher-Student algorithms and the baseline algorithm. If we consider min-max ordered curriculums without edges (i.e. just curriculums), if δ = 0, and if we use the GProp distribution converter instead of the Prop one, then the MR algorithm is exactly the GProp Linreg algorithm. The MR algorithm with δ = 0.6 (see appendix B for the min-max ordered curriculums given to this algorithm) outperforms the baseline algorithm on all the curriculums, especially on: • KeyCorridor, where the median return of the baseline is near 0 after 10M time-steps on S4R3, S5R3 and S6R3 while the first quartile of the MR algorithm is higher than 0.8 after 6M time-steps (see FIG8); • ObstructedMaze, where the last quartile of the baseline is near 0 after 10M time-steps on all the tasks while the last quartile of the MR algorithm is higher than 0.7 after 5M time-steps on 1Dl, 1Dlh, 1Dlhb, 2Dl, 2Dlhb (see FIG9).
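A small sketch of the mastering-rate computation just defined is given below; since the exact live minimum/maximum update rules are not spelled out in the text, the min/max updates used here are assumptions for illustration.

def mastering_rate(recent_returns, m1, M1):
    # recent_returns: the K most recent returns on task c;
    # m1 / M1: a-priori estimates of the minimum / maximum mean return on c.
    r_bar = sum(recent_returns) / len(recent_returns) if recent_returns else m1
    m_live = min(m1, r_bar)          # assumed live minimum mean return update
    M_live = max(M1, r_bar)          # assumed live maximum mean return update
    return (r_bar - m_live) / (M_live - m_live + 1e-8)

For example, mastering_rate([0.1, 0.3, 0.4], m1=0.0, M1=0.5) evaluates to roughly 0.53 under these assumptions, i.e. a task that is partially but not fully learnt.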
A CURRICULUMS Three curriculums were used to evaluate the algorithms: BlockedUnlockPickup (3 tasks), KeyCorridor (6 tasks) and ObstructedMaze (9 tasks).(a) Unlock. nmax = 288 DISPLAYFORM0 Figure 8: BlockedUnlockPickup curriculum. In Unlock, the agent has to open the locked door. In the others, it has to pick up the box. In UnlockPickup, the door is locked and, in BlockedUnlockPickup, it is locked and blocked by a ball. The position and color of the door and the box are random. Here are the min-max ordered curriculums given to the MR algorithm in subsection 5.3.For every task c, we set m 1 c to 0 and M 1 c to 0.5. The real maximum mean return is around 0.9 but we preferred to take a much lower maximum estimation to show we don't need an accurate one to get the algorithm working. FIG6: GProp Linreg and MR with δ = 0.6 were each tested with 10 different agents (seeds) on the BlockedUnlockPickup curriculum. The median return during training, between the first and last quartile, are plotted.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rJlGdsC9Ym
We present a new algorithm for learning by curriculum based on the notion of mastering rate that outperforms previous algorithms.
The fields of artificial intelligence and neuroscience have a long history of fertile bi-directional interactions. On the one hand, important inspiration for the development of artificial intelligence systems has come from the study of natural systems of intelligence, the mammalian neocortex in particular. On the other, important inspiration for models and theories of the brain have emerged from artificial intelligence research. A central question at the intersection of these two areas is concerned with the processes by which neocortex learns, and the extent to which they are analogous to the back-propagation training algorithm of deep networks. Matching the data efficiency, transfer and generalisation properties of neocortical learning remains an area of active research in the field of deep learning. Recent advances in our understanding of neuronal, synaptic and dendritic physiology of the neocortex suggest new approaches for unsupervised representation learning, perhaps through a new class of objective functions, which could act alongside or in lieu of back-propagation. Such local learning rules have implicit rather than explicit objectives with respect to the training data, facilitating domain adaptation and generalisation. Incorporating them into deep networks for representation learning could better leverage unlabelled datasets to offer significant improvements in data efficiency of downstream supervised readout learning, and reduce susceptibility to adversarial perturbations, at the cost of a more restricted domain of applicability. the collection of areas responsible for visual object recognition, computes hierarchically organized representations much like state-of-the art convolutional neural networks (CNNs) optimized for the task . While there are impressive similarities in the learned representations between the ventral stream and CNNs, there are important differences in how those representations are learned. While CNNs are trained in a supervised manner using a gradient descent optimization algorithm with an explicit global objective on large labelled datasets, the ventral stream learns from a much larger dataset (visual experience) but with only very sparse labelling. The latter property of cortical learning is attractive to emulate in CNNs, and more broadly across deep learning models. Attractive, not only because of the ability to make use of unlabelled data during learning, but also because it will impart the models with superior generalization and transfer properties, as discussed below. The monkey's paw effect: the problem with specifying what without specifying how A well known and often encountered pitfall of numerical optimization algorithms for high dimensional problems, such as evolutionary algorithms, simulated annealing and also gradient descent, is that they regularly yield solutions matching what your objective specifies to the letter, but far from how you intended . The short story "The Monkey's Paw" by W. W. Jacobs provides a compelling metaphor. In that story, the new owner of a magical mummified monkey's paw of Indian origin is granted three wishes. The owner first wishes for $200, and his wish is eventually granted to the penny, but with the grave side effect that it is granted through a goodwill payment from his son's employer in response to his untimely death in a terrible machinery accident . The Monkey's Paw effect is also applicable to gradient descent-based optimization of deep neural nets. 
The relative data-hungriness of current supervised learning strategies, and the use of data augmentation to improve generalization, reflect the precarious position we are in of needing to micromanage the learning process. Adversarial examples are evidence that the monkey's paw effect nonetheless persists. It is tempting to continue with the current paradigm and re-inject adversarial examples back into the learning data stream. Extrapolating, this goes in the direction of specifying the negative space of the objective, all those things the optimization should not do to solve the problem, which is potentially infinite, and rather risky in production environments like self-driving cars. Adversarial examples represent an opportunity to address the issue in a more fundamental way. It has been argued that if we could design deep learning systems with the explicit objective of "disentangling the underlying factors of variation" in an unsupervised manner, then there is much to be gained for generalization and transfer. Such an approach offers a promising solution to the Monkey's Paw effect, as there is an explicit objective of learning good representations, from which generalization and transfer follow by definition. One small challenge remains: how to express the objective of learning good representations? If we restrict ourselves to the subset of all possible inputs for which the neocortex learns good representations, the local processes of synaptic plasticity may provide valuable clues. The neocognitron model, the original CNN architecture, learned visual features through self-organization using local rules. Since its conception, our understanding of the neocortex and its neurons and synapses has progressed considerably. Recent insights into the local plasticity rules for learning in the neocortex offer new inspiration for deep representation learning paradigms that learn "disentangled representations" from large unlabelled datasets in an unsupervised manner. A selection of recent insights into the systems of plasticity of the neocortex is shown in Fig. 1. A new dendrite-centric view of synaptic plasticity is emerging with the discovery of the NMDA spike, a non-linear mechanism hypothesized to associate co-activated synapses through potentiation or structural changes driven by the resulting calcium currents (Fig. 1A-B). Such associations, in the form of co-coding clusters of synapses, have recently been experimentally observed using optical techniques (Fig. 1C). Moreover, neurons in the neocortex are known to form small cliques of all-to-all connected neurons which drive co-coding, a process that would be self-reinforced through dendritic clustering by NMDA spikes (Fig. 1D). Martinotti neurons, which are activated by such cliques of pyramidal neurons and subsequently inhibit pyramidal dendrites, provide well-timed inhibition to block further NMDA spikes, put a limit on the maximal pyramidal clique size, and also suppress activation of competing cliques (e.g. Winner-take-all (WTA) dynamics). Together, such plasticity mechanisms appear to form basic building blocks for representation learning in the feedforward pathway of the neocortex using local learning rules. While long-known competitive strategies for unsupervised representation learning indeed rely on WTA dynamics, deep learning approaches incorporating these increasingly apparent dendritic dimensions of learning processes have yet to be proposed.
Unlike CNNs, the neocortex also has a prominent feedback pathway down the hierarchy, whereby topdown input from upper layers innervate the apical tufts of pyramidal cells in layer 1 of a given cortical region . Associations between top-down and feed-forward (bottom-up) activation are known to trigger dendritic calcium spikes and dendritic bursting , which again specifically activates the WTA dynamics of the Martinotti neurons , but disinhibitory VIP neurons can also modulate their impact . These feed-back pathways have been proposed to implement predictive coding , and error back-propagation for supervised learning algorithms . While their importance for rapid object recognition has been recently demonstrated, their computational role remained inconclusive . With the demonstrated applicability of supervised learning for a broad range of problems and data distributions, and an ever expanding toolbox of optimized software libraries, it is unlikely that supervised learning, back-propagation and gradient descent will be dethroned as the work horses of AI for many years to come. Nonetheless, as applications of deep networks are moving into regions where sparse data, generalization and transfer are increasingly important, unsupervised approaches designed with the explicit goal of learning good representations from mere observation may find an important place in the AI ecosystem. Quoting Yann LeCun 2 "If intelligence is a cake, the bulk of the cake is unsupervised learning, the icing on the cake is supervised learning, and the cherry on the cake is reinforcement learning." A promising strategy would be to assume learning with sparse labels, overcoming adversarial examples, transfer learning, and few-shot learning together as the success criteria for the further development of the powerful unsupervised approaches we seek. Recent advances in our understanding of the processes of neocortical plasticity may well offer useful inspiration, but let's close with some words of moderation. Biology's solutions also show us there will be no free lunch, i.e. neocortical unsupervised learning algorithms will be less general than supervised learning by gradient descent. Neocortex relies on structure at specific spatial and temporal scales in its input streams to learn representations. Evolution has had millions of years to configure the sensory organs to provide signals to the neocortex in ways that it can make sense of them, and that serve the animal's ecological niche. We should not expect, for example, cortical unsupervised learning algorithms to cluster frozen white noise images. A neocortical solution requires a neocortical problem (e.g. from the so-called "Brain set" ), so if we are to successfully take inspiration from it, we must also work within its limitations.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
S1g_N7FIUS
Inspiration from local dendritic processes of neocortical learning to make unsupervised learning great again.
Real-world Question Answering (QA) tasks consist of thousands of words that often represent many facts and entities. Existing models based on LSTMs require a large number of parameters to support external memory and do not generalize well for long sequence inputs. Memory networks attempt to address these limitations by storing information in an external memory module, but they must examine all inputs in the memory. Hence, for longer sequence inputs the intermediate memory components proportionally scale in size, resulting in poor inference times and high computation costs. In this paper, we present Adaptive Memory Networks (AMN), which process input-question pairs to dynamically construct a network architecture optimized for lower inference times. During inference, AMN parses input text into entities within different memory slots. However, distinct from previous approaches, AMN is a dynamic network architecture that creates variable numbers of memory banks weighted by question relevance. Thus, the decoder can select a variable number of memory banks to construct an answer using fewer banks, creating a runtime trade-off between accuracy and speed. AMN is enabled by, first, a novel bank controller that makes discrete decisions with high accuracy and, second, the capabilities of a dynamic framework (such as PyTorch) that allow for dynamic network sizing and efficient variable mini-batching. In our results, we demonstrate that our model learns to construct a varying number of memory banks based on task complexity and achieves faster inference times for standard bAbI tasks and modified bAbI tasks. We achieve state-of-the-art accuracy on these tasks while, on average, 48% fewer entities are examined during inference. Question Answering (QA) tasks are gaining significance due to their widespread applicability to recent commercial applications such as chatbots, voice assistants and even medical diagnosis (BID7). Furthermore, many existing natural language tasks can also be re-phrased as QA tasks. Providing faster inference times for QA tasks is crucial. Consumer-device-based question-answering services have hard timeouts for answering questions. For example, Amazon Alexa, a popular QA voice assistant, allows developers to extend its QA capabilities by adding new "Skills" as remote services (BID0). However, these service APIs are wrapped around hard timeouts of 8 seconds, which include the time to transcribe the question to text on Amazon's servers, the round-trip transfer time of the question and the answer to and from the remote service, and sending the response back to the device. Furthermore, developers are encouraged to provide a list of questions ("utterances") a priori at each processing step to assist QA processing (BID0). Modeling QA tasks with LSTMs can be computationally expensive, which is undesirable especially during inference. Memory networks, a class of deep networks with explicit addressable memory, have recently been used to achieve state of the art on many QA tasks. Unlike LSTMs, where the number of parameters grows exponentially with the size of memory, memory networks are comparatively parameter-efficient and can learn over longer input sequences. However, they often require accessing all intermediate memory to answer a question. Furthermore, focusing attention over the intermediate state using a list of questions does not address this problem. Soft attention based models compute a softmax over all states, while hard attention models are not differentiable and can be difficult to train over a large state space.
Previous work on improving inference over memory networks has focused on using unsupervised clustering methods to reduce the search space BID2; BID19 ). Here, the memory importance is not learned and the performance of nearest-neighbor style algorithms is often comparable to a softmax operation over memories. To provide faster inference for long sequence-based inputs, we present Adaptive Memory Networks (AMN), that constructs a memory network on-the-fly based on the input. Like past approaches to addressing external memory, AMN constructs the memory nodes dynamically. However, distinct from past approaches, AMN constructs a memory architecture with network properties that are decided dynamically based on the input story. Given a list of possible questions, our model computes and stores the entities from the input story in a memory bank. The entities represent the hidden state of each word in the story while a memory bank is a collection of entities that are similar w.r.t the question. As the number of entities grow, our network learns to construct new memory banks and copies entities that are more relevant towards a single bank. Entities may reside in different bank depending on their distance from the question. Hence, by limiting the decoding step to a dynamic number of constructed memory banks, AMN achieves lower inference times. AMN is an end-to-end trained model with dynamic learned parameters for memory bank creation and movement of entities. Figure 1 demonstrates a simple QA task where AMN constructs two memory banks based on the input. During inference only the entities in the left bank are considered reducing inference times. To realize its goals, AMN introduces a novel bank controller that uses reparameterization trick to make discrete decisions with high accuracy while maintaining differentiability. Finally, AMN also models sentence structures on-the-fly and propagates update information for all entities that allows it to solve all 20 bAbI tasks. Memory Networks: Memory networks store the entire input sequence in memory and perform a softmax over hidden states to update the controller BID27; BID23 ). DMN+ connects memory to input tokens and updates them sequentially BID29 ). For inputs that consist of large number of tokens or entities, these methods can be expensive during inference. AMN stores entities with tied weights in different memory banks. By controlling the number of memory banks, AMN achieves low inference times with reasonable accuracy. Nearest neighbor methods have also been explored over memory networks. For example, Hierarchical Memory Networks separates the input memory into groups using the MIPS algorithm BID2 ). However, using MIPS is as slow as a softmax operation, so the authors propose using an approximate MIPS that gives inferior performance. In contrast, AMN is end to end differentiable, and reasons which entities are important and constructs a network with dynamic depth. Neural Turing Machine (NTM) consists of a memory bank and a differentiable controller that learns to read and write to specific locations BID9 ). In contrast to NTMs, AMN memory bank controller is more coarse grained and the network learns to store entities in memory banks instead of specific locations. AMN uses a discrete bank controller that gives improved performance for bank controller actions over NTM's mechanisms. 
However, like NTMs, our design is consistent with modeling studies of working memory (BID11) in which the brain performs robust memory maintenance and may maintain multiple working representations for individual working tasks. Sparse access memory uses approximate nearest neighbors (ANN) to reduce memory usage in NTMs (BID19). However, ANNs are not differentiable. AMN uses an input-specific memory organization that does not create sparse structures. This limits access during inference to specific entities, reducing inference times. Graph-based networks (GG-NNs, BID16, and others) use nodes with tied weights that are updated based on gated-graph state updates with shared weights over edges. However, unlike AMN, they require strong supervision over the input and teacher forcing to learn the graph structure. Furthermore, the cost of building and training these models is expensive, and if every edge is considered at every time-step the amount of computation grows on the order of O(N^3), where N represents the number of nodes/entities. AMN does not use strong supervision but can solve tasks that require transitive logic by modeling sentence walks on the fly. EntNet constructs dynamic networks based on entities with tied weights for each entity (BID12). A key-value update system allows it to update relevant (learned) entities. However, EntNet uses soft attention during inference to attend to all entities, which incurs high inference costs. To summarize, the majority of past work on memory networks uses a softmax over memory nodes, where each node may represent an input or an entity. In contrast, AMN learns to organize memory into various memory banks and performs decoding over fewer entities, reducing inference times. Conditional Computation & Efficient Inference: AMN is also related to work on conditional computation, which allows only part of a network to be active during inference, improving computational efficiency (BID1). Recently, this has often been accomplished using a gated mixture of experts (BID4; BID22). AMN conditionally attends to entities in initial banks during inference, improving performance. For faster inference using CNNs, pruning (BID15; BID10), low-rank approximations (BID3), quantization and binarization (BID20), and other tricks to improve GEMM performance (BID25) have been explored. For sequence-based inputs, pruning and compression have been explored (BID6; BID21). However, compression results in irregular sparsity, which reduces memory costs but may not reduce computation costs. Adaptive computation time (BID8) learns the number of steps required for inferring the output, and this can also be used to reduce inference times (BID5). AMN uses memory networks with a dynamic number of banks to reduce computation costs. Dynamic networks: Dynamic neural networks that change structure during inference have recently become possible due to newer frameworks such as DyNet and PyTorch. Existing work on pruning can be implemented using these frameworks to reduce inference times dynamically, as dynamic deep networks demonstrate (BID17). AMN utilizes these dynamic architecture capabilities to construct an input-dependent memory network of variable memory bank depth, and the dynamic batching feature to process a variable number of entities. Furthermore, unlike past work that requires an a priori fixed number of memory slots, AMN constructs them on-the-fly based on the input.
The learnable discrete decision-making process can be extended to other dynamic networks which often rely on REINFORCE to make such decisions BID17 ).Neuroscience: Our network construction is inspired by work on working memory representations. There is sufficient evidence for multiple, working memory representations in the human brain (Hazy et al. FORMULA0). Semantic memory BID24 ), describes a hierarchical organization starting with relevant facts at the lowest level and progressively more complex and distant concepts at higher levels. AMN constructs entities from the input stories and stores the most relevant entities based on the question in the lowest level memory bank. Progressively higher level memory banks represent distant concepts (and not necessarily higher level concepts for AMN). Other work demonstrates organization of human memory in terms of "priority structure" where attention is a gate-keeper of working memory-guided by executive control's goals, plans, and intentions as in BID26, similar in spirit to AMN's question guided network construction. In this section, we describe the design process and motivation of our memory module. Our memory network architecture is created during inference time for every story. The architecture consists of different memory banks and each memory bank stores entities from the input story. Hence, a memory entity represents the hidden state of each entity (each word in our case) from the input story while a memory bank is a collection of entities. Intuitively, each memory bank stores entities that have a similar distance score from the question. At a high level, entities are gradually and recurrently copied through memory banks to filter out irrelevant nodes such that in the final inference stage, fewer entities are considered by the decoder. Note that the word filter implies a discrete decision and that recurrence implies time. If we were to perform a strict cut off and remove entities that appear to be irrelevant at each time step, learning the reasoning logic that requires previous entities that were cut off would not be possible. Thus, smoothed discretization is required. We design filtering to be a two-stage pseudo-continuous process to simulate discrete cut offs (Π move, Π new), while keeping reference history. The overall memory (M) consists of multiple memory banks. A memory bank is a collection or group of entities (m 0...l), where m 0 denotes the initial and most general bank and m l denotes the most relevant bank. Note that |l| is input dependent and learned. First, entities are moved from m 0 gradually towards m l based off of their individual relevance to the question and second, if m l becomes too saturated, m l+1 is created. Operations in the external memory allowing for such dynamic restructuring and entity updates are described below. Note that these operations still maintain end to end differentiability.1. Memory bank creation (Π new), which creates a new memory bank depending on the current states of entities m i. If the entropy, or information contained (explained below), of m i is too high, Π new (m i) will learn to create a new memory bank m i+1 to reduce entropy.2. Moving entities across banks (Π move), which determines which entities are relevant to the current question and move such entities to further (higher importance) memory banks.3. 
Adding/Updating entities in a bank (Π au), which adds entities that are not yet encountered to the first memory bank m 0 or if the entity is already in m 0, the operation updates the entity state.4. Propagating changes across entities (Π prop), which updates the entity states in memory banks based on node current states Π prop (M) and their semantic relationships. This is to communicate transitive logic. Both Π new, Π move require a discrete decision (refer to section 4.2.1.), and in particular, for Π new we introduce the notion of entropy. That is to say if m i contains too many nodes (the entropy becomes too high), the memory module will learn to create a new bank m i+1 and move nodes to m i+1 to reduce entropy. By creating more memory banks, the model spreads out the concentration of information which in turn better discretizes nodes according to relevance. A high-level overview is shown in FIG1, followed by a mathematical detail of the model's modules. Our model adopts the encoder-decoder framework with an augmented adaptive memory module. For an overview of the algorithm, refer to Section A.1.Notation and Problem Statement: Given a story represented by N input sentences (or statements), i.e., (l 1, · · ·, l N), and a question q, our goal is to generate an answer a. Each sentence l is a sequence of N words, denoted as (w 1, · · ·, w Ns), and a question is a sequence of N q words denoted as (w 1, · · ·, w Nq). Throughout the model we refer to entities; these can be interpreted as a 3-tuple of e w = (word ID wi, hidden state w, question relevance strength s). Scalars, vectors, matrices, and dot products are denoted by lower-case letters, boldface lower-case letters and boldface capital letters, and angled brackets respectively. The input to the model, starting with the encoder, are story-question input pairs. On a macro level, sentences l 1... N are processed. On a micro level, words w 1... Ns are processed within sentences. For each w i ∈ l i, the encoder maps w i to a hidden representation and a question relevance strength ∈. The word ID of w i is passed through a standard embedding layer and then encoded through an accumulation GRU. The accumulation GRU captures the entity states through time by adding the output of each GRU time step to its respective word, stored in a lookup matrix. The initial states of e w are set to this GRU output. Meanwhile, the question is also embedded and encoded in the same manner sans accumulation. In the following, the subscripts i, j are used to iterate through the total number of words in a statement and question respectively, D stores the accumulation GRU output, and w i is a GRU encoding output. The last output of the GRU will be referred to as w N, w Nq for statements and questions. DISPLAYFORM0 DISPLAYFORM1 To compute the question relevance strength s ∈ for each word, the model uses GRU-like equations. The node strengths are first initialized to Xavier normal and the inputs are the current word states w in, the question state w Nq, and when applicable, the previous strength. Sentences are processed each time step t. DISPLAYFORM2 DISPLAYFORM3 DISPLAYFORM4 In particular, equation FORMULA2 shows where the model learns to lower the strengths of nodes that are not related the question. First, a dot product between the current word states and question state are computed for similarity (high correlation), then it is subtracted from a 1 to obtain the dissimilarity. We refer to these operations as SGRU (Strength GRU) in Algorithm 1. 
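To make the encoding step above concrete, the following is a minimal PyTorch-style sketch of the accumulation GRU and the question-relevance strength update. The layer sizes, the initial strength value, and the exact way the dissimilarity enters the gate are assumptions made for illustration; this is a sketch of the operations described above, not the authors' reference implementation.

import torch
import torch.nn as nn

# Hedged sketch: an accumulation GRU that keeps a per-word lookup D of hidden states,
# plus a GRU-like gate that scores each word's relevance to the question in [0, 1].
class SketchEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hid_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.story_gru = nn.GRUCell(emb_dim, hid_dim)
        self.question_gru = nn.GRUCell(emb_dim, hid_dim)
        # Inputs: word state, question state, previous strength, dissimilarity score.
        self.strength_gate = nn.Linear(2 * hid_dim + 2, 1)

    def encode_question(self, q_tokens):
        h = torch.zeros(1, self.question_gru.hidden_size)
        for tok in q_tokens:
            h = self.question_gru(self.embed(tok).unsqueeze(0), h)
        return h  # final question state

    def forward(self, sentence_tokens, q_state, D, strengths):
        """Process one sentence; D and strengths are dicts keyed by word id."""
        h = torch.zeros(1, self.story_gru.hidden_size)
        for tok in sentence_tokens:
            h = self.story_gru(self.embed(tok).unsqueeze(0), h)
            wid = int(tok)
            # Accumulation: add the GRU output to the word's stored entity state.
            D[wid] = D.get(wid, torch.zeros_like(h)) + h
            # Dissimilarity = 1 - <word state, question state>, as described above.
            dissim = 1.0 - torch.sum(D[wid] * q_state, dim=-1, keepdim=True)
            prev_s = strengths.get(wid, torch.full((1, 1), 0.5))
            gate_in = torch.cat([D[wid], q_state, prev_s, dissim], dim=-1)
            strengths[wid] = torch.sigmoid(self.strength_gate(gate_in))
        return D, strengths

# Usage on a toy sentence and question (token ids are arbitrary):
enc = SketchEncoder(vocab_size=50)
D, strengths = {}, {}
q_state = enc.encode_question(torch.tensor([3, 7, 9]))
D, strengths = enc(torch.tensor([1, 4, 2]), q_state, D, strengths)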
The adaptive memory module recurrently restructures entities in a question relevant manner so the decoder can then consider fewer entities (namely, the question relevant entities) to generate an answer. The following operations are performed once per sentence. As mentioned earlier, discrete decisions are difficult for neural networks to learn so we designed a specific memory bank controller Π ctrl for binary decision making. The model takes ideas from the reparameterization trick and uses custom backpropagation to maintain differentiability. In particular, the adaptive memory module needs to make two discrete decisions on a {0, 1} basis, one in Π new to create a new memory bank and the other in Π move to move nodes to a different memory bank. The model uses a scalar p ∈ {0, 1} to parameterize a Bernoulli distribution where the realization H, is the decision the model makes. However, backpropagation through a random node is intractable, so the model detaches H from the computation graph and introduces H as a new node. Finally, H is used as a mask to zero out entities in the discrete decision. Meanwhile, p is kept in the computation graph and has a special computed loss (Section 4.4). The operations below will be denoted as Π ctrl and has two instances: one for memory bank creation Π new and one for moving entities across banks Π move. In equation 9, depending on what Π ctrl is used for, q is a polymorphic function and will take on a different operation and * will be a different input. Examples of such are given in their respective sections (4.2.2.1, 4.2.2.2). DISPLAYFORM0 4.2.2 MEMORY BANK OPERATIONS 1. Memory bank creation Π new: To determine when a new memory bank is created, in other words, if the current memory bank becomes too saturated, the memory bank controller (4.2.1.) will make a discrete decision to create a new memory bank. Here, q (eq 9) is a fully connected layer and the input is the concatenation of all the current memory bank m i's entity states [w 0 ...w i] ∈ R 1,n|ew|. Intuitively, q will learn a continuous decision that is later discretized by eq 10 based on entity states and the number of entities. Note this is only performed for the last memory bank. DISPLAYFORM1 2. Moving entities through memory banks: Similar to Π new, individual entities' relevance scores are passed into the bank controller to determine H as the input. The relevance score is computed by multiplying an entity state by its respective relevance ∈ R n,|ew|. Here, q has a slight modification and is the identity function. Note that this operation can only be performed if there is a memory bank to move nodes to, namely if m i+1 exists. Additionally, each bank has a set property where it cannot contain duplicate nodes, but the same node can exist in two different memory banks. DISPLAYFORM2 3. Adding/Updating entities in a bank: Recall that entities are initially set to the output of D. However, as additional sentences are processed, new entities and their hidden states are observed. In the case of a new entity e w, the entity is added to the first memory bank m 0. If the entity already exists in m 0, then e w's corresponding hidden state is updated through a GRU. This procedure is done for all memory banks. DISPLAYFORM3 4. Propagating updates to related entities: So far, entities exist as a bag of words model and the sentence structure is not maintained. This can make it difficult to solve tasks that require transitive reasoning over multiple entities. 
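The discrete decisions above (Π new and Π move) can be illustrated with a short, hedged sketch of the bank controller: a learned probability parameterizes a Bernoulli sample, and a straight-through style trick keeps the probability in the computation graph while the hard {0, 1} realization is used as a mask. The layer shape and the exact estimator are assumptions, not the reference implementation.

import torch
import torch.nn as nn

class SketchBankController(nn.Module):
    def __init__(self, in_dim):
        super().__init__()
        # q in equation 9; a single fully connected layer, as used for bank creation.
        self.q = nn.Linear(in_dim, 1)

    def forward(self, x):
        p = torch.sigmoid(self.q(x))   # p in [0, 1], kept in the graph for the secondary loss
        hard = torch.bernoulli(p)      # discrete realization H (non-differentiable)
        # Straight-through: the forward pass uses the hard mask, gradients flow through p.
        mask = hard.detach() + p - p.detach()
        return mask, p

# Usage: mask == 1 could trigger creating a new bank (Pi_new) or moving an entity
# (Pi_move); p is additionally penalized by the secondary loss described in Section 4.4.
ctrl = SketchBankController(in_dim=128)
mask, p = ctrl(torch.randn(4, 128))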
To track sentence structure information, we model semantic relationships as a directed graph stored in adjacency matrix A. As sentences are processed word by word, a directed graph is drawn progressively from w 0... w i...w N. If sentence l k's path contains nodes already in the current directed graph, l k will include said nodes in its path. After l k is added to A, the model propagates the new update hidden state information a i among all node states using a GRU. a i for each node i is equal to the sum of the incoming edges' node hidden states. Additionally, we add a particular emphasis on l k to simulate recency. At face value, one propagation step of A will only have a reachability of its immediate neighbor, so to reach all nodes, A is raised to a consecutive power r to reach and update each intermediate node. r can be either the longest path in A or a set parameter. Again, this is done within a memory bank for all memory banks. For entities that have migrated to another bank, the update for these entities is a no-op but propagation information as per the sentence structure is maintained. A single iteration is shown below: DISPLAYFORM4 When nodes are transferred across banks, A is still preserved. If intermediate nodes are removed from a path, a transitive closure is drawn if possible. After these steps are finished at the end of a sentence, namely, the memory unit has reasoned through how large (number of memory banks) the memory should be and which entities are relevant at the current point in the story, all entities are passed through the strength modified GRU (4.1, eq 5-8) to recompute their question relevance (relevance score). After all sentences l 1...N are ingested, the decode portion of the network learns to interpret the from the memory banks. The network iterates through the memory banks using a standard attention mechanism. To force the network to understand the question importance weighting, the model uses an exponential function d to weight important memory banks higher. C m are the hidden states contained in memory m, s m are the relevance strengths of memory bank m, w Nq is the question hidden state, ps is the attention score, r, h are learned weight masks, g are the accumulated states, and l is the final logits prediction. During inference, fewer memory banks are considered. DISPLAYFORM0 Loss is comprised of two parts, answer loss, which is computed from the given annotations, and secondary loss (from Π new, Π move), which is computed from sentence and story features at each sentence time step l 0...N. Answer loss is standard cross entropy at the end of the story after l N is processed. DISPLAYFORM0 After each sentence l i, the node relevance s li is enforced by computing the expected relevance E[s li]. E[s] is determined by nodes that are connected to the answer node a in a directed graph; words that are connected to a are relevant to a. They are then weighted with a deterministic function of distance from a. DISPLAYFORM1 Additionally, bank creation is kept in check by constraining p li w.r.t. the expected number of memory banks. The expected number of memory banks can be thought of as a geometric distribution ∼ Geometric(p li) parameterized byp li, a hyperparameter. Typically, at each sentence stepp is raised to the inverse power of the current sentence step to reflect the amount of information ingested. Intuitively, this loss ensures banks are created when a memory bank contains too many nodes. On the other hand, the learned mask q (eq. 
9) enables the model to weight certain nodes a higher entropy to prompt bank creation. Through these two dependencies, the model is able to simulate bank creation as a function of the number of nodes and the type of nodes in a given memory bank. DISPLAYFORM2 All components combined, the final loss is given in the following equation DISPLAYFORM3 In this section, we evaluate AMN accuracy and inference times on the bAbI dataset and extended bAbI tasks dataset. We compare our performance with Entnet BID12 ), which recently achieved state of the art on the bAbi dataset. For accuracy measurements, we also compare with DMN+ and encoder-decoder methods. Finally we discuss the time trade offs between AMN and current SOTA methods. The portion regarding inference times are not inclusive of story ingestion. We summarize our experiments as follows:• We are able to solve all bAbi tasks using AMN. Furthermore, AMN is able to reason important entities and propagate them to the final memory bank allowing for 48% fewer entities examined during inference.• We construct extended bAbI tasks to evaluate AMN behavior. First, we extend Task 1 for multiple questions in order to gauge performance in a more robust manner. For example, if a reasonable set of questions are asked (where reasonable means that collectively they do not require all entities to answer implying entities can be filtered out), will the model still sufficiently reason through entities. We find that our network is able to reason useful entities for both tasks and store them in the final memory bank. Furthermore, we also scale bAbI for a large number of entities and find that AMN provides additional benefits at scale since only relevant entities are stored in the final memory bank. We implement our network in PyTorch BID18 ). We initialize our model using Xavier initialization, and the word embeddings utilize random uniform initialization ranging from − √ 3 to √ 3. The learning rate is set as 0.001 initially and updated with a learning rate scheduler. E[s] contains nodes in the connected components of A containing the answer node a which has relevance scores sampled from a Gaussian distribution centered at 0.75 with a variance of 0.05 (capped at 1). Nodes that are not in the connected component containing a are similarly sampled from a Gaussian centered from 0.3 with a variance of 0.1 (capped at 0).p li is initially set to 0.8 and β varies depending on the story length from 0.1 ≤ β ≤ 0.25. Note that for transitive tasks,p li is set to 0.2. We train our models using the Adam optimizer BID14. The bAbI task suite consists of 20 reasoning tasks that include deduction, induction, path finding etc. Results are from the following parameters: ≤ 200 epochs, best of 10 runs. TAB1 shows the inference performance in terms of the number of entities examined. A task is considered passed if the error rate is less than 5%.We find that AMN creates 1 − 6 memory banks for different tasks. We also find that 8 tasks can be solved by looking at just one memory bank and 14 tasks can be solved with half the total number of memory banks. Lastly, all tasks can be solved by examining less than or equal the total number of entities (e ∈ M ≤ |V | +)1. Tasks that cannot be solved in fewer than half the memory banks either require additional entities due to transitive logic or have multiple questions. For transitive logic, additional banks could be required as an relevant nodes may be in a further bank. However, this still avoids scanning all banks. 
In the case of multiple questions, all nodes may become necessary to construct all answers. We provide additional evaluation in the Appendix to examine memory bank behavior for certain tasks. TAB5 shows the number of banks created and required to solve a task, as well as the ratio of entities examined to solve the task. TAB2 shows the complexity of AMN and other SOTA models. Entnet uses an empirically selected parameter, typically set to the number of vocabulary words. GGT-NN uses the number of vocabulary words and creates k new nodes intermittently per sentence step. For tasks where nodes are easily separable, i.e., where nodes are clearly irrelevant to the question(s), AMN is able to successfully reduce the number of nodes examined. However, for tasks that require more information, such as counting (Task 7), the model is still able to obtain the correct answer without using all entities (Table 2: Memory bank analysis of indicative tasks). Lastly, for transitive logic tasks, where information is difficult to separate due to dependencies between entities, the model creates very few banks (1 or 2) and uses all nodes to correctly generate an answer. We note that in the instance where the model only creates one bank, it is very sparse, containing only one or two entities. Because variations in computation times on text are minute, the number of entities required to construct an answer is of more interest, as it directly corresponds to the number of computations required. Additionally, due to various implementations of current models, their run times can vary significantly. However, for the comparison of inference times, AMN's decoder and EntNet's decoder are highly similar and contain roughly the same number of operations. We extend the bAbI tasks by adding additional entities and sentences, and by adding multiple questions for a single story, for Task 1. We increase the number of entities to 100 in the task generation system instead of the existing 38. We also extend the story length to 90 to ensure new entities are referenced. We find that AMN creates 6 memory banks, and the ratio of entities in the final banks versus the overall entities drops to 0.13 given the excess entities that are not referenced in the questions. Multiple questions: We also augment the tasks with multiple questions to understand whether AMN can handle stories that have multiple questions associated with them. We extend our model to handle multiple questions at once to limit re-generating the network for every question. To do so, we modify bAbI to generate several questions per story for tasks that do not currently have multiple questions. For single supporting fact (Task 1), the model creates 3 banks and requires 1 bank to successfully pass the task. Furthermore, the ratio of entities required to pass the task only increases by 0.16, for a total of 0.38. In this paper, we present Adaptive Memory Networks (AMN), which learn to adaptively organize memory to answer questions with lower inference times. Unlike NTMs, which learn to read and write at individual memory locations, AMN demonstrates a novel design where the learned memory management is coarse-grained and easier to train. Through our experiments, we demonstrate that AMN can learn to reason, construct, and sort memory banks based on relevance over the question set. The AMN architecture is generic and can be extended to other types of tasks where the input sequence can be separated into different entities.
In the future, we plan to evaluate AMN on such tasks to assess its generality. We also plan to experiment with larger-scale datasets (beyond bAbI, such as documents with question pairs) that have a large number of entities, to further explore scalability. We describe our overall algorithm in pseudo-code in this section. We follow the notation described in the paper.

Algorithm 1 AMN(S, q, a)
DISPLAYFORM0
DISPLAYFORM1
for word w ∈ s do
  DISPLAYFORM2
end for
DISPLAYFORM3
for memory bank m i ∈ M do
  DISPLAYFORM4
  n mi ← SGRU(D, n mi)

We compare the computation costs of the decode operation during inference for solving the extended bAbI task. We compute the overheads for AMN, Entnet BID12 ) and GGT-NN. TAB2 gives the decode comparisons between AMN, Entnet and GGT-NN. Here, |V| represents the total number of entities for all networks. GGT-NN can dynamically create nodes, and k is a hyperparameter for the number of new nodes created for the S sentences in the input story. α is the percent of entities stored in the final bank w.r.t. the total entities for AMN. We compare the wall-clock execution times for three tasks within bAbI, with 1000 examples per task. We compare the inference times of considering all banks (and entities) versus looking only at the passing banks as required by AMN. We find that AMN requires fewer banks and, as a consequence, fewer entities, and saves inference time. In this section, we examine the memory bank behavior of AMN. Figure 3 shows the memory banks and the entity creation for a single story example, for some of the tasks from bAbI. Depending upon the task and the distance from the question, AMN creates a variable number of memory banks. The heatmap demonstrates how entities are copied across memory banks. Grey blocks indicate the absence of those banks. Figure 4 shows how propagation happens after every time step. The nodes represent entities corresponding to words in a sentence. As sentences are processed word by word, a directed graph is drawn progressively from w 0... w i...w N. If sentence l k's path contains nodes already in the current directed graph, l k will include said nodes in its path. After l k is added to A, the model propagates the new hidden state information a i among all node states using a GRU. a i for each node i is equal to the sum of the incoming edges' node hidden states. Additionally, we add a particular emphasis on l k to simulate recency. At face value, one propagation step of A will only have a reachability of its immediate neighbor, so to reach all nodes, A is raised to a consecutive power r to reach and update each intermediate node. r can be either the longest path in A or a set parameter.
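As an illustration of the propagation step just described, the sketch below aggregates the incoming neighbours' states through the adjacency matrix A and applies a GRU update, repeating the one-hop step r times as a stand-in for raising A to the power r. The shapes and aggregation details are assumptions made for the example, not the reference implementation.

import torch
import torch.nn as nn

def propagate(node_states, A, gru_cell, r=2):
    """node_states: (n, hid); A: (n, n) with A[i, j] = 1 for an edge j -> i."""
    h = node_states
    for _ in range(r):
        incoming = A @ h            # a_i: sum of incoming neighbours' hidden states
        h = gru_cell(incoming, h)   # GRU update of every node state
    return h

# Toy example: a chain 0 -> 1 -> 2 -> 3 plus an isolated node.
n, hid = 5, 32
A = torch.zeros(n, n)
A[1, 0] = A[2, 1] = A[3, 2] = 1.0
states = torch.randn(n, hid)
gru = nn.GRUCell(hid, hid)
updated = propagate(states, A, gru, r=3)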
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SJZ2Mf-0-
Memory networks with faster inference
When a bilingual student learns to solve word problems in math, we expect the student to be able to solve these problem in both languages the student is fluent in, even if the math lessons were only taught in one language. However, current representations in machine learning are language dependent. In this work, we present a method to decouple the language from the problem by learning language agnostic representations and therefore allowing training a model in one language and applying to a different one in a zero shot fashion. We learn these representations by taking inspiration from linguistics, specifically the Universal Grammar hypothesis and learn universal latent representations that are language agnostic . We demonstrate the capabilities of these representations by showing that the models trained on a single language using language agnostic representations achieve very similar accuracies in other languages. Anecdotally speaking, fluent bilingual speakers rarely face trouble translating a task learned in one language to another. For example, a bilingual speaker who is taught a math problem in English will trivially generalize to other known languages. Furthermore there is a large collection of evidence in linguistics arguing that although separate lexicons exist in multilingual speakers the core representations of concepts and theories are shared in memory BID1 BID28 BID6. The fundamental question we're interested in answering is on the learnability of these shared representations within a statistical framework. We approached this problem from a linguistics perspective. Languages have vastly varying syntactic features and rules. Linguistic Relativity studies the impact of these syntactic variations on the formations of concepts and theories BID5. Within this framework of study, the two schools of thoughts are linguistic determinism and weak linguistic influence. Linguistic determinism argues that language entirely forms the range of cognitive processes, including the creation of various concepts, but is generally agreed to be false BID18 BID5. Although there exists some weak linguistic influence, it is by no means fundamental BID0. The superfluous nature of syntactic variations across languages brings forward the argument of principles and parameters (PnP) which hypothesizes the existence of a small distributed parameter representation that captures the syntactic variance between languages denoted by parameters (e.g. head-first or head-final syntax), as well as common principles shared across all languages BID12. Universal Grammar (UG) is the study of principles and the parameters that are universal across languages BID29.The ability to learn these universalities would allow us to learn representations of language that are fundamentally agnostic of the specific language itself. Doing so would allow us to learn a task in one language and reap the benefits of all other languages without needing multilingual datasets. We take inspiration from the UG hypothesis and learn latent representations that are language agnostic which allow us to solve downstream problems in new languages without the need of any language-specific training data. We do not make any claims about the Universal Grammar hypothesis, but simply take inspiration from it. Our work attempts to unite universal (task agnostic) representations with multilingual (language agnostic) representations BID30 BID26. 
The recent trend in universal representations has been moving away from context-less unsupervised word embeddings to contextrich representations. Deep contextualized word representations (ELMo) trains an unsupervised language model on a large corpus of data and applies it to a large set of auxiliary tasks BID30. These unsupervised representations boosted the performance of models on a wide array of tasks. Along the same lines BID26 showed the power of using latent representations of translation models as features across other non-translation tasks. In general, initializing models with pre-trained language models shows promise against the standard initialization with word embeddings. Even further, BID31 show that an unsupervised language model trained on a large corpus will contain a neuron that strongly correlates with sentiment without ever training on a sentiment task implying that unsupervised language models maybe picking up informative and structured signals. In the field of multilingual representations, a fair bit of work has been done on multilingual word embeddings. BID2 explored the possibility of training massive amounts of word embeddings utilizing either parallel data or bilingual dictionaries via the SkipGram paradigm. Later on an unsupervised approach to multilingual word representations was proposed by BID9 which utilized an adversarial training regimen to place word embeddings into a shared latent space. Although word embeddings show great utility, they fall behind methods which exploit sentence structure as well as words. Less work has been done on multilingual sentence representations. Most notably both BID32 and BID4 propose a way to learn multilingual sentence representation through a translation task. We train downstream models using language agnostic universal representations on a set of tasks and show the ability for the downstream models to generalize to languages that we did not train on. Statistical language models approximate the probability distribution of a series of words by predicting the next word given a sequence of previous words. DISPLAYFORM0 where w i are indices representing words in an arbitrary vocabulary. Learning grammar is equivalent to language modeling, as the support of p will represent the set of all grammatically correct sentences. Furthermore, let j α represent a particular language. Let p jα (·) represent the language model for the j α th language and w jα represents a word from the j α th language. Let k jα represent a distributed representation of a specific language along the lines of the PnP argument . UG, through the lens of statistical language modeling, hypothesizes the existence of a factorization of p jα (·) containing a language agnostic segment. The factorization used throughout this paper is the following (• denotes function composition): DISPLAYFORM1 The distribution matching constraint d, insures that the representations across languages are common as hypothesized by the UG argument. as a distributed representation of the language of size f and returns a language specific decoded representation. e −1 jα maps our decoded representation back to the token space. For the purposes of distribution matching we utilize the GAN framework. Following recent successes we use Wasserstein-1 as our distance function d.Given two languages j α and j β the distribution of the universal representations should be within with respect to the W 1 of each other. 
Using the Kantarovich-Rubenstein duality BID37 we define DISPLAYFORM2 where L is the Lipschitz constant of f. Throughout this paper we satisfy the Lipschitz constraint by clamping the parameters to a compact space, as done in the original WGAN paper. Therefore the complete loss function for m languages each containing N documents becomes: DISPLAYFORM3 λ is a scaling factor for the distribution constraint loss. Our specific implementation of our factorization and optimization problem we denote as UG-WGAN. Each function described in the previous section we implement using neural networks. For e jα in equation 1 we use a language specific embedding table followed by a LSTM BID17 jα is non trivial therefore we use another language specific LSTM whose outputs we multiply by the transpose of the embedding table of e jα to obtain token probabilities. For regularization we utilized standard dropout after the embedding layers and layer-wise locked dropout after each LSTM's layer BID34 BID14.The critic, adopting the terminology from, takes the input from u, feeds it through a stacked LSTM, aggregates the hidden states using linear sequence attention as described in DrQA BID8. Once we have the aggregated state we map to a m × m matrix from where we can compute the total Wasserstein loss. A Batch Normalization layer is appended to the end of the critic BID19. The α, βth index in the matrix correspond to the function output of f in calculating DISPLAYFORM0 We trained UG-WGAN with a variety of languages depending on the downstream task. For each language we utilized the respective Wikipedia dump. From the wikipedia dump we extract all pages using the wiki2text 1 utility and build language specific vocabularies consisting of 16k BPE tokens BID33. During each batch we uniform sample random documents from our set of languages which are approximately the same length, therefore a batch will be mixed with respect to language. We train our language model via BPTT where the truncation length progressively grows from 15 to 50 throughout training. The critic is updated 10 times for every update of the language model. We trained each language model for 14 days on a NVidia Titan X. For each language model we would do a sweep over λ, but in general we have found that λ = 0.1 works sufficiently well for minimizing both perplexity and Wasserstein distance. A couple of interesting questions arise from the described training procedure. Is the distribution matching constraint necessary or will simple joint language model training exhibit the properties we're interested in? Can this optimization process fundamentally learn individual languages grammar while being constrained by a universal channel? What commonalities between languages can we learn and are they informative enough to be exploited?We can test out the usefulness of the distribution matching constraint by running an ablation study on the λ hyper-parameter. We trained UG-WGAN on English, Spanish and Arabic wikidumps following the procedure described above. We kept all the hyper-parameters consistent apart for augmenting λ from 0 to 10. The are shown in Figure 2. Without any weight on the distribution matching term the critic trivially learns to separate the various languages and no further training reduces the wasserstein distance. The joint language model internally learns individual language models who are partitioned in the latent space. 
We can see this by running a t-SNE plot on the universal (u(·)) representation of our model and seeing the existence of clusters of the same language, as we did in FIG4 BID25 ). A universal model satisfying the distribution matching constraint would mix all languages uniformly within its latent space. To test the universality of UG-WGAN representations we will apply them to a set of orthogonal NLP tasks. We will leave the discussion on the learnability of grammar to the Discussion section of this paper. By introducing a universal channel in our language model we reduced a representation's dependence on a single language. Therefore we can utilize an arbitrary set of languages in training an auxiliary task over UG encodings. For example we can train a downstream model only on one language's data and transfer the model trivially to any other language that UG-WGAN was trained on. To test the universality of UG-WGAN representations we first trained UG-WGAN on English, Chinese and German following the procedure described in Section 4. The embedding size of the table was 300 and the internal LSTM hidden size was 512. A dropout rate of 0.1 was used, and the model was trained with the Adam optimization method BID21 ). Since we are interested in the zero-shot capabilities of our representation, we trained our sentiment analysis model only on the English IMDB Large Movie Review dataset and tested it on the Chinese ChnSentiCorp dataset and German SB-10K BID24 BID36. We binarize the labels for all the datasets. Our sentiment analysis model ran a bi-directional LSTM on top of fixed UG representations, from which we took the last hidden state and computed a logistic regression. This was trained using standard SGD with momentum.

Method                                  IMDB     ChnSentiCorp   SB-10K
NMT + Logistic BID32                    12.44%   20.12%         22.92%
FullUnlabeledBow BID24                  11.11%   *              *
NB-SVM TRIGRAM BID27                    8.54%    18.20%         19.40%
UG-WGAN λ = 0.1 + Logistic (Ours)       8.01%    15.40%         17.32%
UG-WGAN λ = 0.0 + Logistic (Ours)       7.80%    53.00%         49.38%
Sentiment Neuron BID31                  7.70%    *              *
SA-LSTM BID13                           7.24%    *              *

We also compare against encodings learned as a by-product of multi-encoder and decoder neural machine translation as a baseline BID22. We see that UG representations are useful in situations where there is a lack of data in a specific language. The language agnostic properties of UG embeddings allow us to do successful zero-shot learning without needing any parallel corpus; furthermore, the ability to generalize from language modeling to sentiment attests to the universal properties of these representations. Although we aren't able to improve over the state of the art in a single language, we are able to learn a model that does surprisingly well on a set of languages without multilingual data. A natural language inference task consists of two sentences, a premise and a hypothesis, whose relation is either contradiction, entailment or neutral. Learning an NLI task takes a certain nuanced understanding of language. Therefore it is of interest whether or not UG-WGAN captures the necessary linguistic features. For this task we use the Stanford NLI (sNLI) dataset as our training data in English BID7. To test the zero-shot learning capabilities we created a Russian sNLI test set by randomly sampling 400 sNLI test samples and having a native Russian speaker translate both premise and hypothesis to Russian. The label was kept the same. For this experiment we trained UG-WGAN on English and Russian following the procedure described in Section 4.
We kept the hyper-parameters equivalent to the Sentiment Analysis experiment. All of the NLI models tested were run over the fixed UG embeddings. We trained two different models from the literature, the Densely-Connected Recurrent and Co-Attentive Network by BID20 and the Multiway Attention Network by BID35. Please refer to these papers for further implementation details.

Method                                                                          sNLI (English)   Russian
Densely-Connected Recurrent and Co-Attentive Network Ensemble BID20            9.90%            *
UG-WGAN (λ = 0.1) + Densely-Connected Recurrent and Co-Attentive Network BID20 12.25%           21.00%
UG-WGAN (λ = 0.1) + Multiway Attention Network BID35                            21.50%           34.25%
UG-WGAN (λ = 0.0) + Multiway Attention Network BID35                            13.50%           65.25%
UG-WGAN (λ = 0.0) + Densely-Connected Recurrent and Co-Attentive Network BID20 11.50%           68.25%
Unlexicalized features + Unigram + Bigram features BID7                         21.80%           55.00%

UG representations contain enough information to non-trivially generalize the NLI task to unseen languages. That being said, we do see a relatively large drop in performance moving across languages, which hints that either our calculation of the Wasserstein distance may not be sufficiently accurate or the universal representations are biased toward specific languages or tasks. One hypothesis might be that as we increase λ the cross-lingual generalization gap (difference in test error on a task across languages) will vanish. To test this hypothesis we conducted the same experiment where UG-WGAN was trained with a λ ranging from 0 to 10. From each of the experiments we picked the model epoch which showed the best perplexity. The NLI-specific model was the Densely-Connected Recurrent and Co-Attentive Network. Increasing λ doesn't seem to have a significant impact on the generalization gap but has a large impact on test error. Our hypothesis is that a large λ doesn't provide the model with enough freedom to learn useful representations, since the optimization's focus would largely be on minimizing the Wasserstein distance, while a small λ permits this freedom. One reason for this generalization gap might be the way we satisfy the Lipschitz constraint. It's been shown that there are better ways to enforce the constraint than clipping parameters to a compact space, such as a gradient penalty BID15. This is a future direction that can be explored. Universal Grammar also comments on the learnability of grammar, stating that statistical information alone is not enough to learn grammar and some form of native language faculty must exist, sometimes titled the poverty of stimulus (POS) argument BID10 BID23 ). The goal of our paper is not to make a statement on the Universal Grammar hypothesis. But from a machine learning perspective, we're interested in extracting informative features. That being said, it is of interest to what extent language models capture grammar, and furthermore the extent to which models trained with our objective learn grammar. One way to measure universality is by studying the perplexity of our multi-lingual language model as we increase the number of languages. To do so we trained 6 UG-WGAN models on the following languages: English, Russian, Arabic, Chinese, German, Spanish, French. We maintain the same procedure as described above. The hidden size of the language model was increased to 1024 with 16K BPE tokens being used. The first model was trained on English and Russian, the second on English, Russian and Arabic, and so on. For Arabic we still trained from left to right even though naturally the language is read from right to left. We report the results in FIG3.
As we increase the number of languages, the perplexity gap between the constrained and unconstrained UG-WGAN (λ = 0.0) decreases, which implies that, while controlling capacity, our constrained (universal, λ = 0.1) language model models language (almost) as well as jointly trained language models with no universal constraints (λ = 0.0). Furthermore, the heatmap in FIG3 shows the perplexity gap of UG-WGAN trained on any combination of 2 languages from our set of 7. We can treat these perplexities as a loose measure of the distance between languages. [Table 3: example sentences sampled from the unconstrained (λ = 0.0) and constrained (λ = 0.1) language models in English and Spanish.] We see from Figure 2 that perplexity worsens in proportion to λ. We explore the differences by sampling sentences from the unconstrained language model and the λ = 0.1 language model trained on English and Spanish, shown in Table 3. In general there is a very small difference between a language model trained with our objective and one without. The constrained model tends to make more gender mistakes and mistakes with plural and singular forms in Spanish. In English we saw virtually no fundamental differences between the language models. One explanation of this phenomenon comes from the autonomy of syntax argument, which argues that semantics have no weight on syntax BID16. Our hypothesis is that both models learn syntax well, but the models with better perplexity generate sentences with better or clearer semantic meaning. Although completely learning grammar from statistical signals might be improbable, we can still extract useful information. In this paper we introduced an unsupervised approach toward learning language agnostic universal representations by taking inspiration from the Universal Grammar hypothesis. We showed that we can use these representations to learn tasks in one language and automatically transfer them to others with no additional training.
Furthermore, we studied the importance of the Wasserstein constraint through the λ hyper-parameter. Lastly, we explored the difference between a standard multi-lingual language model and UG-WGAN by studying the generated outputs of the respective language models, as well as the growth of the perplexity gap with respect to the number of languages.
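As a concrete illustration of the distribution-matching term studied throughout, the following is a minimal sketch of the Wasserstein critic over universal representations, with the Lipschitz constraint handled by weight clipping as in the original WGAN. The critic here is a small MLP rather than the attention-pooled stacked LSTM described in Section 4, and all sizes and placeholder losses are assumptions, not the reference implementation.

import torch
import torch.nn as nn

critic = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 1))

def wasserstein_term(u_a, u_b):
    """u_a, u_b: (batch, 512) universal representations u(x) of two languages."""
    return critic(u_a).mean() - critic(u_b).mean()

def clip_critic(c=0.01):
    # Clamp parameters to a compact space to (crudely) satisfy the Lipschitz constraint.
    with torch.no_grad():
        for w in critic.parameters():
            w.clamp_(-c, c)

# One step of the joint objective: per-language language-model losses (assumed to be
# computed elsewhere by the language-specific encoders/decoders) plus the weighted
# distribution-matching term. In training, the critic is updated to maximize the
# Wasserstein term (several critic steps per language-model step, calling clip_critic()
# after each), while the encoders minimize the total loss.
lam = 0.1
u_en, u_es = torch.randn(32, 512), torch.randn(32, 512)
lm_loss_en, lm_loss_es = torch.tensor(5.1), torch.tensor(5.4)  # placeholder NLL values
total = lm_loss_en + lm_loss_es + lam * wasserstein_term(u_en, u_es)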
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
r1l9Nj09YQ
By taking inspiration from linguistics, specifically the Universal Grammar hypothesis, we learn language agnostic universal representations which we can utilize to do zero-shot learning across languages.
Generative models with both discrete and continuous latent variables are highly motivated by the structure of many real-world data sets. They present, however, subtleties in training often manifesting in the discrete latent variable not being leveraged. In this paper, we show why such models struggle to train using traditional log-likelihood maximization, and that they are amenable to training using the Optimal Transport framework of Wasserstein Autoencoders. We find our discrete latent variable to be fully leveraged by the model when trained, without any modifications to the objective function or significant fine tuning. Our model generates comparable samples to other approaches while using relatively simple neural networks, since the discrete latent variable carries much of the descriptive burden. Furthermore, the discrete latent provides significant control over generation. Unsupervised learning using generative latent variable models provides a powerful and general approach to learning the underlying, low-dimensional structure from large, unlabeled datasets. Perhaps the two most common techniques for training such models are Variational Autoencoders (VAEs), and Generative Adversarial Networks (GANs) BID8. Both have advantages and disadvantages. VAEs provide a meaningful lower bound on the log likelihood that is stable under training, as well as an encoding distribution from the data into the latent. However, they generate blurry samples due to their objective being unable to handle deterministic decoders and tractability requiring simple priors BID12. On the other hand, GANs naturally enable deterministic generative models with sharply defined samples, but their training procedure is less stable.A relatively new approach to training generative models has emerged based on minimizing the Optimal Transport (OT) distance BID30 ) between the generative model distribution and that of the data. The OT approach provides a general framework for training generative models, which promises some of the best of both GANs and VAEs. Though interesting first have been given in; BID27;, the OT approach to generative modelling is still nascent. Our contributions are twofold: we seek to improve generative modelling capabilities with discrete and continuous latent variables, but importantly, we seek also to establish that training generative models with OT can be significantly more effective than the traditional VAE approach. Discrete latent-variable models are critical to the endeavor of unsupervised learning because of the ubiquity of discreteness in the natural world, and hence in the datasets that describe it. However, they are harder to train than their continuous counterparts. This has been tackled in a number of ways (e.g., directly mitigating high-variance discrete samples BID6 BID19, parametrizing discrete distributions using continuous ones BID14 BID22 BID29, deliberate model design leveraging conjugacy).However, even in the simple case where the number of mixtures is small enough that monte-carlo sampling from the discrete latent is avoidable, training can still be problematic. For example, in BID4 a Gaussian-mixture latent-variable model (GM-LVM) was studied, and the authors were unable to train their model on MNIST using variational inference without substantially modifying the VAE objective. What appears to happen is that the model quickly learns to "hack" the VAE objective function by collapsing the discrete latent variational distribution. 
This problem only occurs in the unsupervised setting, as such models are able to learn the discrete latent in the semi-supervised version of the same problem once they have labeled samples for the discrete latent to latch onto. This is discussed in more detail in Section 2.1. The OT approach to training generative models (in particular the Wasserstein distance, discussed in Section 2.2) induces a weaker topology on the space of distributions, enabling easier convergence of distributions than in the case of VAEs BID2. Thus, one might conjecture that the OT approach would enable easier training of GM-LVMs than the VAE approach. We provide evidence that this is indeed the case, showing that GM-LVMs can be trained in the unsupervised setting on MNIST, and further motivating the value of the OT approach to generative modelling. We consider a hierarchical generative model p G with two layers of latent variables, the highest one being discrete. Explicitly, if we denote the discrete latent k with density p D (D for discrete), and the continuous latent z with density p C (C for continuous), the generative model is given by: DISPLAYFORM0 In this work, we consider a GM-LVM with categorical distribution p D = Cat(K) and continuous DISPLAYFORM1 We refer to this GM-LVM as a GM-VAE when it is trained as a VAE, or a GM-WAE when trained as a Wasserstein Autoencoder (discussed in Section 2.2). Training GM-LVMs in the traditional VAE framework (GM-VAEs) involves maximizing the evidence lower bound (ELBO) averaged over the data. Such models are empirically hard to train BID4. This is likely due to the fact that the discrete latent variational distribution learns on a completely different scale from the generative distribution. Consequently, the discrete latent tends to instantly learn some unbalanced structure where its classes are meaningless in order to accommodate the untrained generative distribution. The generative model then learns around that structure, galvanizing the meaningless discrete distribution early in training. More explicitly, if we choose a variational distribution q(z, k|x) = q C (z|k, x) q D (k|x) to mirror the prior in Equation 1, the ELBO can be written as follows: DISPLAYFORM0 Both the first and the second term in Equation 2 depend on q D (k|x). However, the second term is much smaller than the first; it is bounded by log K for uniform p D over K classes, whereas the first term is unbounded from above (though we will initialize the modes of q C to match those of the prior, making the continuous KL term initially small as well). As a consequence, q D (k|x) will immediately shut off the k values (i.e., q D (k|x) = 0 ∀x) with large reconstruction losses, DISPLAYFORM1. This is shown in the top row of Figure 1, where within the first 10 training steps the reconstruction loss has substantially decreased (Figure 1a) by simply shutting off 9 values of k in q D (k|x) (Figure 1b), resulting in a drastic increase of the discrete KL term (Figure 1a). However, this increase in the discrete KL term is negligible since the term is multiple orders of magnitude smaller than the reconstruction term in the ELBO. All of this takes place in the first few training iterations, well before the generative model has learned to use its continuous latent (see Figure 1c). Subsequently, on a slower timescale, the generative model starts to learn to reconstruct from its continuous latent, causing q C (z|k, x) to shift away from its prior toward a distribution more useful to the generative model.
We see this in Figure 1d: the continuous KL curve grows concurrently with the downturn of the reconstruction loss term. Figure 1f shows that after this transition (taking a few thousand training steps), the reconstructions from the model start to look more like MNIST digits. DISPLAYFORM2 [Figure 1: The top row shows a snapshot of the GM-VAE after 10 training steps. Loss curves are shown in (a), the discrete variational distribution in (b) with rows representing E {x|label(x)= } q D (k|x), and reconstructions are shown in (c). The bottom row shows the same snapshot after 6000 training steps.] While the generative model learns to use the continuous latent, the discrete distribution q D (k|x) never revives the k values that it shut off. This is because the generative model would not know how to use the z ∼ q C (z|k, x) values for those ks, implying a significant penalty in the reconstruction term of the ELBO. This is evidenced in Figure 1d by the discrete KL staying flat, and in Figure 1e where the columns corresponding to the shut-off k values never repopulate. We have discussed the difficulty of leveraging the structure of the latent variables in GM-VAEs using our specific implementation designed to mirror the GM-WAE of Section 2.2. Many other variants of this implementation performed similarly. Though the root cause of this difficulty has not been ascertained in generality, we expect it to be in part due to the per-data-point nature of the ELBO objective, in particular, the impact of the KL divergence term on learning the variational distribution. This point will be elaborated upon with more empirical justification in Section 3. The difficulty associated with training GM-VAEs may be interpreted as a problem of restricted convergence of a sequence of distributions, where the sequence is indexed by the training steps. If that were so, an objective function that induces a weaker topology (and therefore allows sequences to converge more easily) might help GM-LVMs converge to a distribution that non-trivially uses its discrete latent variable. Hence, we are motivated to consider approaching the training of such models using the OT framework, and in particular the Wasserstein distance as our objective, as it is known to induce a weaker topology than that of maximum likelihood. Following the OT approach of Wasserstein Autoencoders, we would like to minimize the 2-Wasserstein distance between the underlying data distribution (from which we have samples) and our GM-LVM: DISPLAYFORM0 where P Z×K is the set of all joint distributions over z and k, such that q(z, k|x) = q C (z|k, x)q D (k|x) with q C and q D parametrized below. Any parametrization of q(z, k|x) reduces the search space of the infimum, so W † 2 is in fact an upper bound on the true 2-Wasserstein distance W 2. Note that W † 2 is only equal to the true 2-Wasserstein distance when p G (y|z) is deterministic, providing an upper bound in the case of random generative models. We choose to model the "variational" distribution q(z, k|x) deliberately to mirror the structure of the prior, which differs from, for example, BID23, who assume conditional independence between z|x and k|x. Since the constrained infimum is intractable, a relaxed version of W † 2 is introduced as follows: DISPLAYFORM1 which is equivalent to the original distance when λ → ∞. This equivalence requires only that D be a divergence. As in the WAE framework, we use the Maximum Mean Discrepancy (MMD) with a mixture of inverse multiquadratic (IMQ) kernels with various bandwidths C i.
The MMD is a distance on the space of densities and has an unbiased U-estimator BID9. Explicitly, if k is a reproducing positive-definite kernel and is characteristic, then the MMD associated to k is given by

$$ \mathrm{MMD}_k(P, Q) = \big\| \mathbb{E}_{z \sim P}[k(z, \cdot)] - \mathbb{E}_{z \sim Q}[k(z, \cdot)] \big\|_{\mathcal{H}_k}. $$

IMQ kernels have fatter tails than the classic radial basis function kernels, proving more useful early in training when the encoder has not yet learned to match the aggregated posterior with the prior. The choice of bandwidth for the kernel can be fickle, so we take a mixture of kernels with a range of bandwidths C_i, reducing the sensitivity to any one choice (see BID5; BID10; BID20).

Given the discrete latent in our model, we cannot directly use Equation 4 with the MMD. Instead we integrate out the discrete latent variable in Equation 3, arriving at our GM-WAE objective function:

$$ \inf_{q_C, q_D} \; \mathbb{E}_{p_{\text{data}}(x)} \sum_{k} q_D(k|x)\, \mathbb{E}_{q_C(z|k,x)}\, \mathbb{E}_{p_G(y|z)} \|x - y\|_2^2 \;+\; \lambda\, \mathrm{MMD}\!\Big( \mathbb{E}_{p_{\text{data}}(x)} \textstyle\sum_k q_D(k|x)\, q_C(z|k,x),\; \sum_k p_D(k)\, p_C(z|k) \Big). $$

This allows us to compute the MMD between two continuous distributions, where it is defined. As mentioned in Section 1, VAEs have the disadvantage that deterministic generative models cannot be used; this is not the case for the Wasserstein distance. Thus we parametrize the generative density p_G(x|z) as a deterministic distribution x|z = g_θ(z), where g_θ is a mapping from the latent to the data space specified by a deep neural network with parameters θ. This parametrization allows the minimization of the objective function using stochastic gradient descent with automatic differentiation. To enable gradient-based minimization for the infimum in Equation 6, we parametrize q(z, k|x) = q_C(z|k, x) q_D(k|x) with neural networks. We take q_C(z|k, x) to be a Gaussian with diagonal covariance for each k, mirroring the prior, and use the reparameterization trick to compute gradients. In order to avoid back propagating through discrete variables, the expectation over the distribution q_D(k|x) is computed exactly. It could alternatively be computed by sampling using standard techniques (BID3; BID14; BID22).

As previously mentioned, the weakness of the topology induced by the Wasserstein distance on the space of distributions may enable the GM-WAE to overcome the VAE training issues presented in Section 2.1. With the objective in hand, a more precise argument can be made to support this claim. Recall from Section 2.1 that the problem with the GM-VAE was that the objective function demands the various distributions to be optimized at the individual data-point level. For example, the discrete KL term breaks off completely and becomes irrelevant due to its size. This causes the q_D(k|x) distribution to shut off k values early, which becomes galvanized as the generative model learns. However, in posing the problem in terms of the most efficient way to move one distribution p_G onto another p_data, via the latent distribution q(z, k|x), the Wasserstein distance never demands the similarity of two distributions conditioned per data point. Indeed, the E_{p_data} in Equation 6 is inside both the infimum and the divergence D. We expect that "aggregating" the posterior as such will allow q(z, k|x) (in particular, q_D(k|x)) the flexibility to learn data-point specific information while still matching the prior on aggregate. Indeed, it is also found in BID23 that using an adversarial game to minimize the distance between an aggregated posterior and the prior is successful at unsupervised training on MNIST with a discrete-continuous latent-variable model. In this work we primarily seek to show the potential for OT techniques to enable the training of GM-LVMs.
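To summarize how these pieces fit together computationally, here is a minimal sketch of the GM-WAE reconstruction term with the discrete expectation taken exactly and the continuous latent reparameterized. It is a schematic under assumed shapes; encoder_pi, encoder_gauss, and decoder are placeholder handles, not the authors' code.

```python
import numpy as np

def gm_wae_recon_term(x, encoder_pi, encoder_gauss, decoder, n_k=10):
    """x: (batch, dim_x). encoder_pi(x) -> q_D(k|x), shape (batch, n_k).
    encoder_gauss(x, k) -> (mu, sigma), each (batch, dim_z).
    decoder(z) -> reconstruction of shape (batch, dim_x)."""
    pi = encoder_pi(x)                          # q_D(k|x)
    recon = 0.0
    z_samples = []
    for k in range(n_k):
        mu, sigma = encoder_gauss(x, k)         # q_C(z|k, x), diagonal Gaussian
        eps = np.random.randn(*mu.shape)        # reparameterization trick
        z = mu + sigma * eps
        z_samples.append((pi[:, k], z))
        err = np.sum((x - decoder(z)) ** 2, axis=1)   # ||x - g_theta(z)||^2
        recon += pi[:, k] * err                 # exact expectation over k
    return recon.mean(), z_samples              # z_samples feed the MMD penalty
```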
Thus, we use relatively simple neural network architectures and train on MNIST. We use a mixture of Gaussians for the prior, with 10 mixtures to represent the 10 digits in MNIST and a non-informative uniform prior over these mixtures. Namely, for each k ∈ {0, . . ., 9}:

$$ p_D(k) = \tfrac{1}{10}, \qquad p_C(z|k) = \mathcal{N}\big(z;\, \mu^0_k,\, (\sigma^0_k)^2 I\big), $$

where the μ^0_k and σ^0_k represent the mean and covariance of each mixture and are fixed before training. We found that choosing dim(z) = 9 worked well. We choose the μ^0_k to be equidistant and, for each k, σ^0_k = σ^0 is chosen identically in order to admit ≈ 5% overlap between the 10 different modes of the prior (i.e., the distance between any pair of means μ^0_{k1} and μ^0_{k2} is 4σ^0).

For the variational distribution, we take q(z, k|x) = q_C(z|k, x) q_D(k|x) with

$$ q_D(k|x) = \pi_k(x), \qquad q_C(z|k, x) = \mathcal{N}\big(z;\, \mu_k(x),\, \mathrm{diag}(\sigma_k(x)^2)\big), $$

where each component is parametrized by a neural network. For π_k(x), a 3-layer DCGAN-style network BID25 is used with the largest convolution layer composed of 64 filters. The Gaussian networks μ_k(x), σ_k(x) are taken to be 32-unit two-hidden-layer dense networks. Finally, for the generative model, we take p^θ_G(x|z) to be deterministic with x|z = g_θ(z), using a 3-layer DCGAN-style network with the smallest transpose convolution layer composed of 128 filters. All the convolutional filters have size 5 × 5 except for the last layer, which has size 1 × 1. We use batch normalisation BID13, ReLU activation functions BID7 after each hidden layer, and Adam for optimization BID16 with a learning rate of 0.0005. We find that λ = 450 works well, although the value of λ does not impact performance appreciably as long as it is larger than a few hundred. The (μ_k, σ_k) networks are pretrained to match the prior moments, which accelerates training and improves stability (this was also done for GM-VAE in Section 2.1).

Our implementation of GM-WAE is able to reconstruct MNIST digits from its latent variables well. In Figure 2a, example data points from the held-out test set are shown on the odd rows, with their reconstructions on the respective rows below. The encoding of the input points is a two-step process, first determining in which mode to encode the input via the discrete latent, and then drawing the continuous encoding from the corresponding mode. Samples from the GM-WAE are shown in Figures 2b and 2c. Since the discrete prior p_D(k) is uniform, we can sample evenly across the k's in order from 0 through 9, while still displaying representative samples from p(z, k) = p_C(z|k) p_D(k). Again, this shows how the GM-WAE learns to leverage the structure of the prior, whereas the GM-VAE resulted in the collapse of the several modes of the prior. GM-WAE has a smooth manifold structure in its latent variables. In Figure 3a, the reconstructions of a linear interpolation with uniform step size in the continuous latent space are shown between pairs of data points. This compares similarly to other WAE and VAE approaches to MNIST. In Figure 3b, a linear interpolation is performed between the prior mode μ^0_6 and the other nine prior modes μ^0_{k≠6}. This not only shows the smoothness of the learned latent manifold in all directions around a single mode of the prior, but also shows that the variational distribution has learned to match the modes of the prior. As one would hope given the suitability of a 10-mode GM-LVM to MNIST, almost every mode of the prior now represents a different digit.
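As an illustration of the prior geometry and the mode-to-mode interpolation just described, here is a minimal sketch that builds equidistant mode means, sets σ⁰ from the stated 4σ⁰ spacing, and decodes a linear interpolation between two modes. The simplex placement of the means and the decoder handle are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

def build_prior_modes(n_k=10, dim_z=9, pairwise_dist=1.0):
    # Place the 10 means equidistantly; the vertices of a regular simplex in
    # R^9 give equal pairwise distances (one simple choice, assumed here).
    eye = np.eye(n_k)
    simplex = eye - eye.mean(axis=0)              # centered simplex, rows sum to 0
    basis = np.linalg.svd(simplex, full_matrices=False)[2][:dim_z]
    mu0 = simplex @ basis.T                       # (n_k, dim_z), equidistant rows
    mu0 *= pairwise_dist / np.linalg.norm(mu0[0] - mu0[1])
    sigma0 = pairwise_dist / 4.0                  # pairwise distance = 4 * sigma0
    return mu0, sigma0

def interpolate_modes(decoder, mu0, k_from=6, k_to=3, steps=8):
    # Decode a uniform linear interpolation between two prior modes,
    # mirroring the Figure 3b experiment. `decoder` is a placeholder handle.
    alphas = np.linspace(0.0, 1.0, steps)[:, None]
    zs = (1 - alphas) * mu0[k_from] + alphas * mu0[k_to]
    return np.stack([decoder(z) for z in zs])
```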
This level of control built into the prior requires not only a multi-modal prior, but also a training procedure that actually leverages the structure in both the prior and the variational distribution, which seems not to be the case for VAEs (see Section 2.1).

The quality of samples from our GM-WAE is related to the ability of the encoder networks to match the prior distribution. Figures 2c and 3b demonstrate that the latent manifold learned is similar to the prior. Near the modes of the prior the samples are credible handwritten digits, with the encoder networks able to capture the structure within each mode of the data manifold (variation within each column) and clearly separate the different modes (variation between rows).

We have argued that the VAE objective itself was responsible for the collapse of certain k values in the discrete variational distribution, and that the per-data-point nature of the KL played a significant role. To test this hypothesis, and to compare directly our trained WAE with the equivalent VAE discussed in Section 2.1, we initialize the VAE with the parameters of the final trained WAE and train it according to the VAE objective. At initialization, the VAE with trained WAE parameters produces high-quality samples and reconstructions (FIG1). However, after a few hundred iterations, the reconstructions deteriorate significantly (FIG1) and are not improved with further training. The learning curves over the period of training between FIG1 and 4b are shown in FIG1, where the cause of the performance deterioration is clear: the continuous KL term in the VAE objective is multiple orders of magnitude larger than the reconstruction term, causing optimization to sacrifice reconstruction in order to reduce this KL term. Of course, the approximate posterior aggregated over the data will not be far from the prior, as that distance is minimized in the WAE objective. However, this is not enough to ensure that the continuous KL term is small for every data point individually. It is thus the per-data-point nature of the KL in the VAE objective that destroys the reconstructions. Indeed, in order to minimize the per-data-point KL term in the GM-VAE objective, q_C(z|k, x) is forced toward the mean μ^0_k for every x, causing it to lose much of its x dependence. This can be seen in FIG1 where the reconstructions are less customized and blurrier.

To compare the performance of GM-WAE against GM-VAE more quantitatively, we directly compare the reconstruction loss from the VAE objective (the first term on the right-hand side of Equation 2). Strictly speaking, this quantity is ill-defined for the GM-WAE, as the generative model is chosen to be deterministic. Instead we simply use the values returned by the GM-WAE generative model as if they were the Bernoulli mean parameters of the GM-VAE. These reconstruction loss curves are shown in FIG1. Also shown are the reconstruction losses for the GM-VAE with various rescaling factors β in front of the KL terms of Equation 2. This rescaled KL term is inspired both by BID11, which studies the impact of rescaling the KL term in VAEs, and by the WAE objective itself, where λ plays the role of a regularization coefficient. While the GM-WAE is not trained to minimize this reconstruction loss, it actually achieves the best results. This shows that GM-WAE performs better at reconstructing MNIST digits than its VAE counterpart, as measured by the VAE's own reconstruction objective.
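A minimal sketch of the comparison just described, treating the deterministic GM-WAE decoder outputs as Bernoulli means and evaluating the VAE reconstruction term on them; the shapes and the decoder handles in the usage lines are illustrative assumptions.

```python
import numpy as np

def bernoulli_recon_loss(x, x_mean, eps=1e-7):
    """Negative Bernoulli log-likelihood (the VAE reconstruction term),
    with x in [0, 1] of shape (batch, dim) and x_mean the decoder output
    interpreted as Bernoulli mean parameters."""
    x_mean = np.clip(x_mean, eps, 1.0 - eps)
    ll = x * np.log(x_mean) + (1.0 - x) * np.log(1.0 - x_mean)
    return -ll.sum(axis=1).mean()

# Usage sketch: the same metric applied to both models' reconstructions.
# loss_wae = bernoulli_recon_loss(x_batch, gm_wae_decoder(z_wae))
# loss_vae = bernoulli_recon_loss(x_batch, gm_vae_decoder_mean(z_vae))
```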
We also show in FIG1 the reconstruction curve of a GM-VAE initialized with trained GM-WAE parameters. This echoes the previous discussion concerning the deterioration of the reconstructions in GM-VAEs due to the per-data-point KL term. In FIG1, the GM-VAE initialized with trained GM-WAE parameters uses a rescaling factor β = 10 for visualization purposes. The same phenomenological behavior is observed with no rescaling factor, just less visually pronounced. Overall, our results for GM-WAE are qualitatively competitive with other approaches, despite a relatively low-complexity implementation. Furthermore, GM-WAE offers more control over generation and inference due to its latent-variable structure, which cannot be achieved with the GM-VAE objective.

We have shown that the GM-WAE is able to both reconstruct data and generate new samples meaningfully from the prior distribution. We now turn to studying the variational distributions directly, including with how much fidelity a given class of digits is paired with a given discrete latent. Consider first the discrete distribution q_D(k|x) shown in FIG2, where E_{x|label(x)=ℓ} q_D(k|x) is shown in row ℓ. From the staircase structure, it is clear that this distribution learns to approximately assign each discrete latent value k to a different class of digit. However, it does not do so perfectly. This is expected, as the GM-WAE seeks only to reconstruct the data from its encoding, not to encode it in any particular way. This does not mean GM-WAE is failing to use its discrete latent effectively. Indeed, when comparing Figure 2c and FIG2, a meaningful source of overlap between different values of k and a single digit class can be seen. For example, in FIG2 the digit 5 is assigned partially to k = 3 and k = 5. In Figure 2c, 5s drawn with a big round lower loop are similar to digit 3, and 5s with a small loop and long upper bar are assigned to another cluster corresponding to digit 5. A similar discussion holds for 8s and 9s.

To assess the digit-class fidelity of the discrete encoder more quantitatively, we calculate the accuracy of the digit-class assignment according to q_D(k|x). To assign a digit-class label to each k value, we follow a protocol similar to that of BID23: we assign the digit-class label to the k value that maximizes the average discrete latent for that class, in decreasing order of that maximum. FIG2 shows the resulting accuracy throughout training. Our GM-WAE achieves an accuracy on the held-out test set just shy of 70%. The corresponding accuracies for the GM-VAE variations considered in FIG1 are also shown. The best performing GM-VAE, with a scaling factor of β = 20, achieves approximately 30%. This shows again the difficulty of the GM-VAE in capturing meaningful structure in the data. For reference, basic K-means clustering achieves 50–60%, and BID23 achieve 90% (using 16 discrete classes, and a substantially different model and training procedure).

Another way to study the latent-variable structure of GM-WAE is to consider dimensionally reduced visualizations of the continuous latent z. In FIG2 such a visualization is shown using UMAP BID24. Distinct clusters can indeed be seen in the prior and in the samples from q_C(z|k, x). Though the clusters of z ∼ q_C(z|k, x) do not fully align with those from the prior z ∼ p_C(z|k), they maintain significant overlap. Samples from q_C(z|k, x) in FIG2 are colored according to the true digit labels, and show how GM-WAE learns to assign digits to the different clusters.
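A minimal sketch of the greedy cluster-to-digit assignment protocol described above (assign each digit class to the k that maximizes its average discrete posterior, proceeding in decreasing order of that maximum) and the resulting accuracy; the array shapes are assumptions for illustration.

```python
import numpy as np

def assign_clusters(q_d, labels, n_classes=10):
    """q_d: (n, n_k) array of q_D(k|x); labels: (n,) true digit labels.
    Returns a dict mapping digit class -> assigned k, following the greedy
    protocol: classes with the largest maximum average posterior pick first."""
    avg = np.stack([q_d[labels == c].mean(axis=0) for c in range(n_classes)])
    order = np.argsort(-avg.max(axis=1))          # classes by decreasing maximum
    assignment, used = {}, set()
    for c in order:
        for k in np.argsort(-avg[c]):             # best remaining k for class c
            if int(k) not in used:
                assignment[int(c)] = int(k)
                used.add(int(k))
                break
    return assignment

def cluster_accuracy(q_d, labels, assignment):
    pred_k = q_d.argmax(axis=1)
    target_k = np.array([assignment[int(c)] for c in labels])
    return float((pred_k == target_k).mean())
```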
In particular, the 7 / 9 cluster is clearly overlapping, as seen in FIG2 and 2c. We have seen that the GM-WAE model is highly suited to the problem under study. It reconstructs data and provides meaningful samples, and it effectively uses both discrete and continuous variational distributions, all while maintaining close proximity between the variational distribution and the prior.

We studied an unsupervised generative model with a mixture-of-Gaussians latent variable structure, well suited to data containing discrete classes of objects with continuous variation within each class. We showed that such a simple and critical class of models fails to train using the VAE framework, in the sense that it immediately learns to discard its discrete-latent structure. We further exposed the root cause of this phenomenon with empirical results. We then put to the test the abstract mathematical claim that the Wasserstein distance induces a weaker topology on the space of distributions by attempting to train the same mixture-of-Gaussians model in the WAE framework. We found the Wasserstein objective is successful at training this model to leverage its discrete-continuous latent structure fully. We provided promising results on MNIST and demonstrated the additional control available to a highly structured model with both discrete and continuous latent variables. We hope this motivates further study of the exciting but nascent field of Optimal Transport in generative modeling.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
B1EiIsCctm
This paper shows that the Wasserstein distance objective enables the training of latent variable models with discrete latents in a case where the Variational Autoencoder objective fails to do so.
While machine learning models achieve human-comparable performance on sequential data, exploiting structured knowledge is still a challenging problem. Spatio-temporal graphs have been proved to be a useful tool to abstract interaction graphs, and previous works exploit carefully designed feed-forward architectures to preserve such structure. We argue that, to scale such network designs to real-world problems, a model needs to automatically learn a meaningful representation of the possible relations. Learning such an interaction structure is not trivial: on the one hand, a model has to discover the hidden relations between different problem factors in an unsupervised way; on the other hand, the mined relations have to be interpretable. In this paper, we propose an attention module able to project a graph sub-structure into a fixed-size embedding, preserving the influence that the neighbours exert on a given vertex. In a comprehensive evaluation done on real-world as well as toy tasks, we found our model competitive against strong baselines.
[ 0, 0, 1, 0, 0, 0, 0 ]
rJEGwo0cFX
A graph neural network able to automatically learn and leverage a dynamic interactive graph structure
We introduce NAMSG, an adaptive first-order algorithm for training neural networks. The method is efficient in computation and memory, and is straightforward to implement. It computes the gradients at configurable remote observation points, in order to expedite the convergence by adjusting the step size for directions with different curvatures in the stochastic setting. It also scales the updating vector elementwise by a nonincreasing preconditioner to take advantage of AMSGRAD. We analyze the convergence properties for both convex and nonconvex problems by modeling the training process as a dynamic system, and provide a strategy to select the observation factor without grid search. A data-dependent regret bound is proposed to guarantee the convergence in the convex setting. The method can further achieve an O(log(T)) regret bound for strongly convex functions. Experiments demonstrate that NAMSG works well in practical problems and compares favorably to popular adaptive methods, such as ADAM, NADAM, and AMSGRAD.

Training deep neural networks with large datasets costs a huge amount of time and computational resources. Efficient optimization methods are urgently required to accelerate the training process. First-order optimization methods are currently the most popular for training neural networks. They are easy to implement since only first-order gradients are introduced as input. Besides, they require low computation overheads except for computing gradients, which is of the same computational complexity as just evaluating the function. Compared with second-order methods, they are more effective at handling gradient noise. It has been shown that momentum is crucial to improve the performance of SGD. Momentum methods, such as heavy ball (HB), can amplify steps in low-curvature eigen-directions of the Hessian through accumulation, although careful tuning is required to ensure fine convergence along the high-curvature directions. Nesterov's Accelerated Gradient (NAG) can also be rewritten in a momentum form, and shows performance improvements over HB. The method computes the gradient at an observation point ahead of the current point along the last updating direction, and it has been illustrated that NAG suppresses the step along high-curvature eigen-directions in order to prevent oscillations. However, all these approaches are approximations of their original forms derived for exact gradients, without a full study of gradient noise. The insufficiency of HB and NAG in stochastic optimization, especially for small minibatches, has been shown, and ASGD was presented with significant improvements. However, the method requires tuning of 3 parameters, leading to huge costs that impede its practical application. Among variants of SGD, adaptive methods that scale the gradient elementwise by some form of averaging of the past gradients are particularly successful. ADAGRAD is the first popular method in this line. It is well-suited for sparse gradients since it uses all the past gradients to scale the update. Nevertheless, it suffers from rapid decay of step sizes in cases of nonconvex loss functions or dense gradients. Subsequent adaptive methods, such as RMSPROP (Tieleman & Hinton, 2012), ADADELTA, ADAM, and NADAM, mitigate this problem by using exponential moving averages of squared past gradients. It has been shown that ADAM does not converge to optimal solutions in some convex problems, and the analysis extends to RMSPROP, ADADELTA, and NADAM.
They propose AMSGRAD, which fixes the problem and shows improvements in experiments. In this paper, we propose NAMSG, which is an efficient first-order method for training neural networks. The name is derived from combining a configurable NAG method (CNAG) and AMSGRAD. NAMSG computes the stochastic gradients at configurable observation points ahead of the current parameters along the last updating direction. Nevertheless, instead of approximating NAG for exact gradients, it adjusts the learning rates for eigen-directions with different curvatures to expedite convergence in the stochastic setting, by selecting the observation distance. It also scales the update vector elementwise using the nonincreasing preconditioner of AMSGRAD. We analyze the convergence properties by modeling the training process as a dynamic system, reveal the benefits of remote gradient observations, and provide a strategy to select the observation factor without grid search. A regret bound is introduced in the convex setting, and it is further improved for strongly convex functions. Finally, we present experiments to demonstrate the efficiency of NAMSG in real problems.

2 THE NAMSG SCHEME

Before further description, we introduce the notations, with slight abuse of notation. The letter t denotes the iteration number, d denotes the dimension of vectors and matrices, ε denotes a predefined positive small value, and S^d_+ denotes the set of all positive definite d × d matrices. For a vector a ∈ R^d and a matrix M ∈ R^d × R^d, we use a/M to denote M^{-1}a, diag(a) to denote a square diagonal matrix with the elements of a on the main diagonal, M_i to denote the i-th row of M, √a for elementwise square root, a^2 for elementwise square, a/b for elementwise division, and max(a, b) to denote elementwise maximum. For any vector θ_i ∈ R^d, θ_{i,j} denotes its j-th coordinate, where j ∈ {1, 2, . . ., d}. We define F ⊂ R^d as the feasible set of points. Assume that F has bounded diameter D_∞, i.e., ‖x − y‖_∞ ≤ D_∞ for any x, y ∈ F, and that ‖∇f_t(x)‖_∞ ≤ G_∞ and ‖∇f_t(x)‖_1 ≤ G_1 for all x ∈ F. The projection operation is defined as Π_{F,A}(y) = arg min_{x∈F} ‖A^{1/2}(x − y)‖ for A ∈ S^d_+ and y ∈ R^d.

In the context of machine learning, we consider the minimization problem of a stochastic function,

$$ \min_{x \in \mathcal{F}} \; \mathbb{E}_{\xi}\, f(x, \xi), $$

where x is a d-dimensional vector consisting of the parameters of the model, and ξ is a random datum consisting of an input-output pair. Since the distribution of ξ is generally unavailable, the optimization problem is approximated by minimizing the empirical risk on the training set {ζ_1, ζ_2, ..., ζ_N}, as

$$ \min_{x \in \mathcal{F}} \; f(x) = \frac{1}{N} \sum_{n=1}^{N} f(x, \zeta_n). $$

In order to save computation and avoid overfitting, it is common to estimate the objective function and its gradient with a minibatch of training data, as

$$ f_t(x) = \frac{1}{b} \sum_{n \in S_t} f(x, \zeta_n), \qquad g_t(x) = \nabla f_t(x), $$

where the minibatch S_t ⊂ {1, 2, ..., N}, and b = |S_t| is the size of S_t.

Firstly, we propose a configurable NAG method (CNAG). Since the updating directions are partially maintained in momentum methods, gradients computed at observation points, which lie ahead of the current point along the last updating direction, contain predictive information about the forthcoming update. The remote observation points are defined as ẋ_t = x_t − η_t u_{t−1}, where u_{t−1} is the updating vector and ẋ_1 = x_1. By computing the gradient at a configurable observation point ẋ_t, and substituting the gradient with the observation gradient in the HB update, we obtain the original form of CNAG, in which α_t, β_t, η_t are configurable coefficients and m_0 = 0.
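A minimal sketch of one step of the configurable-NAG idea just described, i.e., a heavy-ball-style update whose gradient is evaluated at a remote observation point along the last update direction. The exact coefficient placement is an assumption made for illustration and may differ from the paper's equations for CNAG.

```python
import numpy as np

def cnag_step(x, m, grad_fn, alpha, beta, eta):
    """One CNAG-style step (sketch).
    x: parameters; m: momentum buffer (m_0 = 0); grad_fn: stochastic gradient.
    alpha: step size, beta: momentum coefficient, eta: observation distance."""
    x_obs = x - eta * alpha * m          # remote observation point, further along the last update
    g = grad_fn(x_obs)                   # observation gradient
    m = beta * m + (1.0 - beta) * g      # momentum accumulation (assumed form)
    x = x - alpha * m
    return x, m
```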
The observation distance η_t can be configured to accommodate gradient noise, instead of being fixed to η_t = β_t as in NAG. Both x_t and ẋ_t need to be maintained to perform this update. To make the method more efficient, we simplify the update by approximation. Assume that the coefficients α_t, β_{1t}, and η_t change very slowly between adjacent iterations. Substituting x_t by ẋ_t + η_{t−1} α_{t−1} m_{t−1}, we obtain the concise form of CNAG, in which the observation factor is µ_t = η_t (1 − β_t)/β_t, and we use x instead of ẋ for simplicity. In the practical computation of CNAG, we further rearrange the update form so that only 3 scalar-vector multiplications and 3 vector additions are required per iteration besides the gradient computation. Hereinafter, we still use the concise form for simplicity in expressions.

Then, we study the relation between CNAG and ASGD, which guides the selection of the momentum coefficient. It has been shown that ASGD improves on SGD in any information-theoretically admissible regime. By taking a long step as well as a short step, and an appropriate average of both of them, ASGD tries to make similar progress on different eigen-directions. It takes 3 hyper-parameters: the short step ᾱ, the long step parameter κ̄, and the statistical advantage parameter ξ̄. ᾱ is the same as the step size in SGD. For convex functions, κ̄ is an estimation of the condition number. The statistical advantage parameter ξ̄ ≤ √κ̄ captures the trade-off between statistical and computational condition numbers, and ξ̄ ≪ √κ̄ in high-stochasticity regimes. These hyper-parameters vary in large ranges and are difficult to estimate. The huge cost in tuning limits the application of ASGD. The appendix shows that CNAG is a more efficient equivalent form of ASGD. For CNAG with constant hyper-parameters, the momentum coefficient is β_t = β = (κ̄ − 0.49ξ̄)/(κ̄ + 0.7ξ̄). Since the condition number is generally large in real high-dimensional problems, and the statistical advantage parameter ξ̄ ≤ √κ̄, β is close to 1. To sum up, the equivalence of CNAG and ASGD shows that, in order to narrow the gap between the step sizes on eigen-directions with different curvatures, the momentum coefficient β should be close to 1.

Finally, we form NAMSG by equipping CNAG with the nonincreasing preconditioner of AMSGRAD, and projecting the parameter vector x into the feasible set F. Algorithm 1 shows the pseudo code of NAMSG. Compared with AMSGRAD, NAMSG requires low computation overheads, namely a scalar-vector multiplication and a vector addition per iteration, which are much cheaper than the gradient computation. Almost no extra memory is needed if the vector operations are run by pipelines. In most cases, especially when weight decay is applied for regularization, which limits the norm of the parameter vectors, the projection can also be omitted in implementation to save computation.

In Algorithm 1, the observation factor µ_t is configurable to accelerate convergence. However, it is costly to select it by grid search. In this section we analyze the convergence rate in a local stochastic quadratic optimization setting by investigating the optimizing process as a dynamic system, and reveal the effect of remote gradient observation for both convex and non-convex problems. Based on the analysis, we provide default values and a practical strategy to set the observation factor without grid search. The problem can be approximated locally as a stochastic quadratic optimization problem over a local set F̌ of feasible parameter points. In this problem, the gradient observation is noisy, i.e., ∇f_t(x) = ∇Φ(x) + ǵ_t, where ǵ_t is the gradient noise.
Consider the optimization process of NAMSG, and ignore the projections for simplicity. Since v̂_t varies slowly when t is large, we can ignore the change of v̂_t between recent iterations. The operation of dividing the update by √v̂_t can then be approximated by solving a preconditioned problem in the transformed variable x̃, whose Hessian Ĥ is supposed to have an improved condition number in the convex setting. Then, we model the optimization process as a dynamic system. Solving the quadratic problem by NAMSG is equivalent to solving the preconditioned problem by CNAG, where the preconditioned stochastic function is f̃_t(x̃) = f_t(V̂_t^{−1/4} x̃), the initial momentum is m̃_0 = 0, and the coefficients α = (1 − β_{1t})α_t, β = β_{1t}, and µ = µ_t are considered as constants. We use ν to denote a unit eigenvector of the Hessian H, and the corresponding eigenvalue is λ. We define the coefficients ṡ_t = ⟨ν, x̃_t⟩ and v̇_t = ⟨ν, m̃_t⟩. The coefficients are updated accordingly, with the gradient error coefficient δ_t = ⟨V̂_t^{−1/4} ǵ_t, ν⟩/λ. Substituting v̇_t by ṽ_t = α v̇_t, and denoting τ = αλ, we rewrite the update as a dynamic system governed by a gain matrix A. The eigenvalues of A are r_1 and r_2, expressed through ρ = 1 + β − τ(1 − β(1 − µ)). Denote the corresponding unit eigenvectors as w_1 and w_2, which are solved numerically since the expressions are too complicated. Defining coefficients c_1, c_2, d_1, d_2 that expand the initial state in this eigenbasis, and assuming that δ_t = σδ, where δ obeys the standard normal distribution and σ is the standard deviation of δ_t, we obtain the expectation E(ṡ_{t+1}) and the standard deviation of ṡ_{t+1} in terms of r_1 and r_2.

According to the analysis in Section 2, we recommend the momentum factor β = 0.999. Figure 1 presents the gain factor g_fac = max(|r_1|, |r_2|) and the standard deviation limit lim_{t→+∞} Std(ṡ_t) of CNAG. It is shown that, compared with HB (µ = 0), a proper observation factor µ improves the convergence rate significantly, and also accelerates the divergence in nonconvex problems where τ = αλ < 0. When the step size α is constant, compared with large curvatures, a small curvature λ converges much more slowly, forming the bottleneck of the whole training process. The problem can be alleviated by using a large µ. However, the noise level also increases along with µ when α is constant, which prohibits too large a µ. Consequently, we recommend µ = 0.1 to achieve fast convergence, and µ = 0.2 to improve generalization at the cost of more iterations, since a higher noise level is beneficial for expanding the range of exploration. For NAMSG, experiments show that when β_2 is close to 1, its variation does not affect the results significantly. We recommend β_2 = 0.99. Only the step size α is left for grid search. Figure 1 also shows that a large β and a proper µ ensure a large convergence domain, while 0 < τ < 2 is required for convergence in SGD (β = 0). Since the range of the eigenvalue λ is problem-dependent, a large maximum τ (denoted by τ_max) allows large step sizes. As shown in Figure 1(a), µ does not affect g_fac significantly for a tiny range of τ close to 0. Then, g_fac decreases almost linearly in τ down to the minimum. Consequently, training with a large step size α and a small µ is beneficial both for the convergence of tiny positive λ and for the divergence of tiny negative λ in nonconvex settings. When selecting a smaller µ and scaling α proportionally to argmin_τ g_fac, the λ that minimizes g_fac is unchanged, and the convergence rate for 0 < λ < τ_max/α is generally improved according to Figure 1. However, the noise level also increases, which prohibits too large an α.
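Putting the pieces together, here is a minimal sketch of a NAMSG-style step that combines the CNAG observation gradient with the nonincreasing (max-based) second-moment preconditioner of AMSGRAD, using the recommended defaults. This is a sketch of the scheme as described above, not a transcription of Algorithm 1, and the exact coefficient placement is an assumption.

```python
import numpy as np

def namsg_step(x, m, v, v_hat, grad_fn, alpha,
               beta1=0.999, beta2=0.99, mu=0.1, eps=1e-8):
    """One NAMSG-style step (sketch). m, v, v_hat start as zero vectors."""
    g = grad_fn(x - mu * m)                      # gradient at the observation point
    m = beta1 * m + (1.0 - beta1) * g            # first moment
    v = beta2 * v + (1.0 - beta2) * g * g        # second moment
    v_hat = np.maximum(v_hat, v)                 # nonincreasing preconditioner
    x = x - alpha * m / (np.sqrt(v_hat) + eps)   # scaled, preconditioned update
    return x, m, v, v_hat
```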
We propose a hyper-parameter policy named observation boost (OBSB). The policy performs grid search for a small portion of iterations using a small µ to select an optimal initial α. In training, when the loss flattens, it doubles µ and scales α proportionally to argmin_τ g_fac. The recommended initial µ is 0.05.

In this section, we provide a data-dependent regret bound of NAMSG in the convex setting, and further improve the bound for strongly convex functions. Since the sequence of cost functions f_t(x) is stochastic, we evaluate the convergence property of our algorithm by the regret, which is the sum over all previous steps of the difference between the online prediction f_t(x_t) and the best fixed-point parameter f_t(x*), defined as

$$ R_T = \sum_{t=1}^{T} \big( f_t(x_t) - f_t(x^*) \big). $$

When the regret of an algorithm satisfies R_T = o(T), the algorithm converges to the optimal parameters on average. The positive definiteness of Γ_t results in a nonincreasing step size and avoids the non-convergence of ADAM. We derive the following key result for NAMSG.

Theorem 1. Let {x_t}, {v_t} and {v̂_t} be the sequences obtained from Algorithm 1, with step sizes α_t = α/√t, a nonincreasing momentum schedule β_{1t} ≤ β_1 for all t ∈ {1, · · ·, T}, and x* ∈ F. Then the regret R_T admits a data-dependent bound.

By comparison with the regret bound of AMSGRAD, we find that the regret bounds of the two methods have a similar form. However, when β_1 and γ are close to 1, which is the typical situation, NAMSG has lower coefficients on all of the 3 terms. From Theorem 1, we can immediately obtain the following corollary.

Corollary 1. Suppose β_{1t} = β_1/t; then we obtain a data-dependent regret bound. The bound in Corollary 1 is considerably better than O(√(dT)).

For strongly convex functions, NAMSG further achieves an O(log(T)) regret bound with an O(1/t) step size under certain assumptions.

Theorem 2. Suppose the cost functions are λ-strongly convex, where λ is a positive constant. Let {x_t}, {v_t} and {v̂_t} be the sequences obtained from Algorithm 1, with a sufficiently large initial step size α, step sizes α_t = α/t and β_{1t} = β_1/t^2 for all t ∈ {1, · · ·, T}, and x* ∈ F. Then the regret admits an O(log(T)) bound when the gradients are sparse.

The proofs of the theorems are given in the appendix. It should be noted that, although the proofs require a decreasing schedule of α_t and β_{1t} to ensure convergence, numerical experiments show that a piecewise constant α_t and a constant β_{1t} provide fast convergence in practice.

In this section, we present experiments to evaluate the performance of NAMSG and the OBSB policy for NAMSG, compared with SGD with momentum, CNAG, and popular adaptive stochastic optimization methods, such as ADAM, NADAM, and AMSGRAD. We study logistic regression and neural networks for multiclass classification, representing convex and nonconvex settings, respectively. The experiments are carried out with MXNET. We compare the performance of SGD, ADAM, NADAM, CNAG, AMSGRAD, NAMSG and OBSB for training logistic regression and a neural network on the MNIST dataset. The dataset consists of 60k training images and 10k testing images in 10 classes. The image size is 28 × 28.

Logistic regression: In the experiment, the minibatch size is 256. The hyper-parameters for all the methods except NAMSG and OBSB are chosen by grid search (see appendix), and the best results in training are reported. In NAMSG and OBSB, only the step size α is chosen by grid search, and the other hyper-parameters are set according to the default values. We report the train and test results in Figure 2, which are the average of 5 runs. It is observed that OBSB performs the best with respect to train loss, and NAMSG also converges faster than the other methods.
The test accuracy is roughly consistent with the train loss in the initial epochs, after which both fluctuate due to overfitting. The experiment shows that NAMSG and OBSB achieve fast convergence in the convex setting.

In the next experiment, we train a simple convolutional neural network (CNN) for the multiclass classification problem on MNIST. The architecture has two 5 × 5 convolutional layers, with 20 and 50 outputs. Each convolutional layer is followed by Batch Normalization (BN) and a 2 × 2 max pooling. The network ends with a 500-way fully-connected layer with BN and ReLU, a 10-way fully-connected layer, and softmax. The hyper-parameters are set in a way similar to the previous experiment. The results are also reported in Figure 2, and are the average of 5 runs. We can see that NAMSG has the lowest train loss, which translates to good generalization performance. OBSB also converges faster than the other methods. The experiment shows that NAMSG and OBSB are efficient in non-convex problems.

Finally, we train ResNet-20 on the CIFAR-10 dataset, which consists of 50k training images and 10k testing images in 10 classes. The image size is 32 × 32. The architecture of the network is as follows. In training, the network inputs are 28 × 28 images randomly cropped from the original images or their horizontal flips to save computation. The inputs are subtracted by the global mean and divided by the standard deviation. The first layer is 3 × 3 convolutions. Then we use a stack of 18 layers with 3 × 3 convolutions on the feature maps of sizes {28, 14, 7}, respectively, with 6 layers for each feature map size. The numbers of filters are {16, 32, 64}, respectively. A shortcut connection is added to each pair of 3 × 3 filters. The subsampling is performed by convolutions with a stride of 2. Batch normalization is adopted right after each convolution and before the ReLU activation. The network ends with a global average pooling, a 10-way fully-connected layer, and softmax. In testing, the original 32 × 32 images are used as inputs.

We train ResNet-20 on CIFAR-10 using SGD, ADAM, NADAM, CNAG, AMSGRAD, NAMSG, and OBSB. The training for each network runs for 75 epochs. The hyper-parameters are selected in a way similar to the previous experiments, except that we divide the constant step size by 10 at the 12000th iteration (in the 62nd epoch). A weight decay of 0.001 is used for regularization. Two groups of hyper-parameters are obtained for each method, one of which minimizes the train loss before the dropping of the step size, and the other maximizes the mean test accuracy of the last 5 epochs. Figure 3 shows the average of 5 runs. In the experiments aiming at the fastest training speed (Figure 3(a),(b)), OBSB converges the fastest, and NAMSG is also faster than the other methods. Compared with ADAM, OBSB is more than twice as fast, and NAMSG roughly twice as fast, in reaching the train loss attained before the dropping of the step size. OBSB has the best test accuracy, and NAMSG is better than the other methods. CNAG achieves significant acceleration over SGD, and is also faster than ADAM, NADAM, and AMSGRAD. In the experiments aiming at the best generalization (Figure 3(c),(d)), OBSB still converges the fastest; NAMSG and CNAG converge at almost the same speed, which is faster than the other methods. The mean best generalization accuracies of SGD, ADAM, NADAM, CNAG, AMSGRAD, NAMSG, and OBSB are 0.9129, 0.9065, 0.9066, 0.9177, 0.9047, 0.9138, and 0.9132, respectively. CNAG achieves the highest test accuracy.
OBSB, NAMSG, and SGD obtain almost the same final test accuracy, which is much higher than that of ADAM, NADAM, and AMSGRAD. It should be noted that CNAG achieves the best test accuracy at the cost of grid search over 3 parameters, while NAMSG and OBSB only search for the step size. The experiments show that, in the machine learning problems tested, NAMSG and OBSB converge faster compared with other popular adaptive methods, such as ADAM, NADAM, and AMSGRAD. The acceleration is achieved with low computational overheads and almost no extra memory.

We present the NAMSG method, which computes the gradients at configurable remote observation points, and scales the update vector elementwise by a nonincreasing preconditioner. It is efficient in computation and memory, and is straightforward to implement. A data-dependent regret bound is proposed to guarantee the convergence in the convex setting. The bound is further improved to O(log(T)) for strongly convex functions. The analysis of the optimizing process provides a hyper-parameter policy (OBSB) which leaves only the step size for grid search. Numerical experiments demonstrate that NAMSG and OBSB converge faster than ADAM, NADAM, and AMSGRAD for the tested problems.

A.1 PROOF OF THEOREM 1

In this proof, we use y_i to denote the i-th coordinate of a vector y. From Algorithm 1 and the assumption 0 ≤ β_{1t} < 1, we rearrange inequality (A2) to obtain a bound on the per-step error. Because of the convexity of the objective functions, the regret satisfies a bound whose first inequality follows from the convexity of the f_t and whose second inequality is due to (A4). We then bound the intermediate terms: in (A7), the second inequality follows from the definition of v_t, the fifth inequality is due to the Cauchy-Schwarz inequality, and the final inequality is due to a bound on the harmonic sum. From (A7) and Lemma A2, we further bound the term P_2; the third inequality there is due to β_{1t} ≥ β_{1t+1} and v̂_{t,i}^{1/2}/α_t ≥ v̂_{t−1,i}^{1/2}/α_{t−1} by definition. In (A9), the second inequality follows from the assumption β_{1t} < β_{1t−1}, and the third and last inequalities are due to v̂_{t,i}^{1/2}/α_t ≥ v̂_{t−1,i}^{1/2}/α_{t−1} by definition and the assumption α_t = α/√t. Combining (A6), (A8), and (A9), we obtain the claimed bound, which completes the proof. Lemma A2 states that, for the parameter settings and conditions assumed in Theorem 1 (the same as in Theorem 4 of the AMSGRAD analysis), the corresponding sum is bounded; the proofs of Lemmas A1 and A2 are given in the cited reference.

A.2 PROOF OF THEOREM 2

Because the objective function is strongly convex, from (A3) and (A4) the regret satisfies a bound whose right-hand side (A11) we divide into three parts, Q_1, Q_2, and Q_3. Firstly, we bound the term Q_1 ≤ 0 (A13); the first inequality follows from β_t being nonincreasing, the second equality follows from α_t = α/t, and the last inequality is because of the assumption on the initial step size α. Finally, we bound the term Q_3; both the first equality and the first inequality follow from the assumptions α_t = α/t and β_{1t} = β_1/t^2, and the last inequality is due to v̂_{t,i} being nondecreasing by definition. Combining (A11), (A13), (A17), and (A18), we obtain the claimed O(log(T)) bound.

Algorithm A1 ASGD
Input: initial parameter vector x_1, short step ᾱ, long step parameter κ̄ ≥ 1, statistical advantage parameter ξ̄ ≤ √κ̄, iteration number T.
Output: parameter vector x_T.
1: Set x̄_1 = x_1, β̄ = 1 − 0.7^2 ξ̄/κ̄.
2: for t = 1 to T − 1 do
3:   g_t = ∇f_t(x_t).
4:   x̄_{t+1} = β̄ x̄_t + (1 − β̄)(x_t − (κ̄ ᾱ/0.7) g_t).
5:   ...

A.3 EQUIVALENCE OF CNAG AND ASGD

The pseudo code of ASGD is shown in Algorithm A1.
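A minimal Python sketch of the ASGD loop follows. Steps 1–4 mirror the listing above, while the final combination step is truncated in the extracted text, so the convex-combination weights used below are an assumption based only on the verbal description that follows (the descent iterate mixes a short gradient step with the running average).

```python
import numpy as np

def asgd(x1, grad_fn, alpha_s, kappa, xi, T):
    """ASGD sketch. alpha_s: short step, kappa: long step parameter (>= 1),
    xi: statistical advantage parameter (<= sqrt(kappa))."""
    x = x1.copy()
    x_bar = x1.copy()                       # running average iterate
    beta_bar = 1.0 - 0.49 * xi / kappa      # beta_bar = 1 - 0.7^2 * xi / kappa
    for _ in range(T - 1):
        g = grad_fn(x)
        # long step folded into the running average (step 4 of Algorithm A1)
        x_bar = beta_bar * x_bar + (1.0 - beta_bar) * (x - (kappa * alpha_s / 0.7) * g)
        # combination step (assumed weights; not given in the extracted listing)
        w = 0.7 / (0.7 + (1.0 - beta_bar))
        x = w * (x - alpha_s * g) + (1.0 - w) * x_bar
    return x
```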
ASGD maintains two iterates: the descent iterate x_t and a running average x̄_t. The running average is a weighted average of the previous average and a long gradient step from the descent iterate, while the descent iterate is updated as a convex combination of a short gradient step from the descent iterate and the running average. We rewrite the update of Algorithm A1 in matrix form and define a variable transform from (x̄_t, x_t) to (m_t, x_t) via a 2 × 2 matrix T̃ with adjustable coefficients k̃ and l̃. Combining (A20) and (A21), we obtain an update of the form [m_{t+1}; x_{t+1}] = T̄ [m_t; x_t] + T̃ b g_t with T̄ = T̃ Ā T̃^{−1}, where Ā and b are the matrix and vector of the original ASGD update. In order to minimize the number of vector computations, we solve for the adjustable coefficients k̃ and l̃ by requiring T̄_{1,2} = 0 and T̄_{2,1} = 1, and choose the corresponding solution. Combining (A22) and (A23), we obtain the update (A24), which is identical to the practical form of the CNAG update with constant hyper-parameters. The momentum coefficient of CNAG is

$$ \beta_t = \beta = \frac{0.7\,\bar{\beta}}{(1-\bar{\beta}) + 0.7} = \frac{\bar{\kappa} - 0.49\,\bar{\xi}}{\bar{\kappa} + 0.7\,\bar{\xi}}, $$

where the second equality follows from the definition of β̄ in Algorithm A1. It should be noted that the computational overhead of ASGD besides the gradient computation is 6 scalar-vector multiplications and 4 vector additions per iteration, while CNAG reduces the cost to 3 scalar-vector multiplications and 3 vector additions.

We use constant hyper-parameters in the experiments. For ADAM, NADAM, and AMSGRAD, the hyper-parameters (α, β_1, β_2) are selected from {0.0005, 0.001, 0.002, 0.005, 0.01, 0.02} × {0, 0.9, 0.99, 0.999, 0.9999} × {0.99, 0.999} by grid search. For SGD, the hyper-parameters (α, β) are selected from {0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 5.0} × {0, 0.9, 0.99, 0.999, 0.9999} by grid search. For CNAG, the hyper-parameters (α, β, µ) are selected from {0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 5.0} × {0, 0.9, 0.99, 0.999, 0.9999} × {0.001, 0.01, 0.05, 0.1, 0.2, 0.3, 0.5, 0.9} by grid search. For NAMSG and OBSB, the hyper-parameter α is selected from {0.0005, 0.001, 0.002, 0.005, 0.01, 0.02} by grid search, and (β_1, β_2, µ) are set according to the default values. In OBSB, the grid search runs for 5 epochs in the experiments on MNIST, and 20 epochs on CIFAR-10. The average convergence rate is computed every 2 epochs on MNIST, and every 10 epochs on CIFAR-10. α and µ are scaled when the convergence rate is halved to achieve fast convergence, and at the 50th epoch (when the loss flattens) to maximize generalization. The experiments are carried out on a workstation with an Intel Xeon E5-2680 v3 CPU and an NVIDIA K40 GPU. The source code of NAMSG can be downloaded at https://github.com/rationalspark/NAMSG/blob/master/Namsg.py, and the hyper-parameters can be downloaded at https://github.com/rationalspark/NAMSG/blob/master/hyperparamters.txt. The simulation environment is MXNET, which can be downloaded at http://mxnet.incubator.apache.org. The MNIST dataset can be downloaded at http://yann.lecun.com/exdb/mnist; the CIFAR-10 dataset can be downloaded at http://www.cs.toronto.edu/~kriz/cifar.html.
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HkxGaeHKvB
A new algorithm for training neural networks that compares favorably to popular adaptive methods.
Recent advances in Generative Adversarial Networks, facilitated by improvements to the framework and successful application to various problems, have resulted in extensions to multiple domains. IRGAN attempts to leverage the framework for Information Retrieval (IR), a task that can be described as modeling the correct conditional probability distribution p(d|q) over the documents (d), given the query (q). The work that proposes IRGAN claims that optimizing their minimax loss function will result in a generator which can learn the distribution, but their setup and baseline term steer the model away from an exact adversarial formulation, and this work attempts to point out certain inaccuracies in their formulation. Analyzing their loss curves gives insight into possible mistakes in the loss functions, and better performance can be obtained by using the co-training-like setup we propose, where two models are trained in a co-operative rather than an adversarial fashion.

Information Retrieval (IR) involves providing a list of ranked documents {d_1, d_2, . . ., d_k} in answer to a query q. This general formulation can be extended to various tasks like web search, where the documents are web pages and the information needs are queries; content recommendation, where the documents are items/content to suggest and the queries are users; and Question Answering, where the documents are answers and the queries are questions. The retrieved list can also be viewed as a probability distribution over candidates, one example being a distribution that decays with rank, where l is a hyperparameter. Even if the probability distribution is not explicit, it is desirable to retrieve a higher-ranked document more often than a lower-ranked document. GANs were proposed as alternatives to earlier generative models and have been shown to be capable of modeling the true data distribution well. High-dimensional settings like images and word sequences have seen some success. Given that the generator in GANs tries to model the training data's distribution, adversarial setups seem like a natural fit for IR. The learned distribution can then be used to retrieve relevant documents for incoming queries. IRGAN is a framework proposed with the hope of giving Information Retrieval access to the large literature of GANs. IRGAN consists of a discriminator and a generator. As in a typical setup, the discriminator learns to distinguish between documents produced by the real probability distribution (or the real ranking) and the generator's probability distribution. It increases the likelihood of the former and decreases it for the latter. The generator tries to bring its probability distribution closer to the real one so that it increases the likelihood of confusing the discriminator into believing that it is the true distribution. Ideally, equilibrium is achieved when the generator manages to rank the documents according to the true distribution. However, the formulation and implementation of the loss function in the work seem to have a few issues. Specifically, the use of the baseline term recommended in the work results in pitting the loss functions of the discriminator and the generator directly against each other, and this leads to issues that are conspicuous in the loss curves. The training starts off with a pre-trained discriminator and generator, and the performance of the generator decreases as the training proceeds, while you would actually expect the opposite.
This forces IRGAN to choose the generator or the discriminator based on whichever has better performance, whereas it is expected that the generator would be chosen at equilibrium. Given the traction this paper has received since its inception (53 citations as of 27th September 2018), it is important to critically analyze the work and attribute the claimed performance improvements correctly. To this end, we propose two models which outperform IRGAN on two of the three tasks and give a comparable performance on the third. They also serve as an ablation study by experimentally showing that the generator might not be playing a vital role during train or test time. The following contributions are made in this work:
• We propose a model motivated by Co-training which outperforms IRGANs.
• We point out inaccuracies in the minimax loss function used in IRGANs.
• We substantiate the same by drawing from the loss curves.

2 RELATED WORK

2.1 GENERATIVE ADVERSARIAL NETWORKS

Generative Adversarial Networks (GANs) (BID11) were proposed as an alternative to generative models (BID23) which used Markov Chains or other approximations to compute intractable probability distributions. In essence, the generator tries to model the real data distribution and the discriminator learns to differentiate between real data points and generated data points. GANs are notoriously unstable to train, and works like DCGANs (BID21) and Wasserstein GAN (BID1) have successfully attempted to alleviate a few issues. Nonetheless, GANs have been widely applied to various problems like image generation, text generation, cross-modal retrieval, and more niche ones like Interactive Image Generation (BID29), Text to Image, Image to Image style transfer (BID12) and robotics (BID4). While GANs allow generation based on a random variable z, Conditional GANs (BID17) partition the sample variable into two parts (z and y). y is used to denote which part of the probability distribution the generator has to generate from, and z plays the same role played in Vanilla GANs (BID11). Conditional GANs dovetail with IR because y can be used to represent the query or its embedding, and in theory, the model should be able to generate the required document:

$$ \min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}\!\left[\log D(x \mid y)\right] + \mathbb{E}_{z \sim p_z}\!\left[\log\!\big(1 - D(G(z \mid y) \mid y)\big)\right]. $$

We feel that an eventual adversarial formulation for IR will be similar to this in flavor. Sketch-GANs have been employed for the interesting task of retrieving similar merchant seals (images) based on an input image: DCGANs (BID21) are used to generate an image, and post training, the last layer of the discriminator is popped off and the rest of it is used as an encoder. This model, however, is specifically for retrieving image responses.

In the subsequent sections, D denotes the discriminator, G the generator, p_true the real probability distribution over documents, φ the parameters of the discriminator, θ the parameters of the generator, d the document, q the query, and r the rank of a document with respect to a query. The equations used to train the discriminator and the generator in BID11 are, respectively,

$$ \max_\phi \; \mathbb{E}_{x \sim p_{\text{data}}}\!\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z}\!\left[\log\!\big(1 - D(G(z))\big)\right], \qquad \min_\theta \; \mathbb{E}_{z \sim p_z}\!\left[\log\!\big(1 - D(G(z))\big)\right]. $$

The discriminator minimizes the likelihood of a "generated" data point and maximizes it for a "real" data point, while the generator tries to generate data points which the discriminator thinks are "real". The two models are trained alternately, and the procedure culminates in a generator which is able to produce data which looks like the real data.

This section elucidates the IRGAN formulation.
Comments by the authors are in italics (in this section alone), while normal typeface is a paraphrased version of IRGAN. IRGAN is motivated by the combination of two schools of thought: the generative retrieval model and the discriminative retrieval model. The generative retrieval model p_θ(d|q, r) tries to sample relevant documents from a candidate pool with the aim of cloning the true probability distribution p_true. The discriminative retrieval model f_φ(q, d), which is a binary classifier, tries to discriminate between real and generated pairs (q, d). Two different loss functions, IRGAN-Pointwise and IRGAN-Pairwise, are proposed. The former is called so because each data point is used independently for training, unlike in IRGAN-Pairwise where pairs of points are used. The dataset is expected to have some cue with respect to how often a document is correctly retrieved for a query, if at all. The pointwise objective is

$$ \min_\theta \max_\phi \; \sum_{n=1}^{N} \Big( \mathbb{E}_{d \sim p_{\text{true}}(d|q_n, r)}\!\left[\log D(d|q_n)\right] + \mathbb{E}_{d \sim p_\theta(d|q_n, r)}\!\left[\log\big(1 - D(d|q_n)\big)\right] \Big). $$

Note that the generator G can alternately be written as p_θ(d|q_n, r), which denotes the modeled probability distribution, and D(d|q) = σ(f_φ(d, q)) represents the discriminator's score. In some IR problems the training data may not be a set of relevant documents for each query, but rather a set of ordered document pairs ⟨d_i, d_j⟩, where d_i ≻ d_j means that the first document is more relevant for query q_n than the second document. Here o represents a real pair, and the pairwise objective takes an analogous form with D(o|q) in place of D(d|q). Note the similarity between this and the previous formula. The problem with this formula is that D(o|q) is actually supposed to denote the probability that the pair o is from the real data distribution, and not the probability that the pair is correctly ranked, as mentioned in the paper.

The generator samples documents from the candidate pool based on its belief (relevance score). This sampling has the downside that the gradients cannot be backpropagated, and policy gradients (BID25) have to be used. As an intuition, the documents can be considered as the arms of a contextual multi-arm bandit (BID2; BID16), and picking an arm can be viewed as analogous to choosing the document as relevant. The policy discovered gives us the relevance of each document, and −log(1 − D(d|q)) is the reward for picking that action/document (d). Let J^G represent the objective function of the generator that it has to maximize. The policy gradient (REINFORCE) can be written as

$$ \nabla_\theta J^G = \mathbb{E}_{d \sim p_\theta(d|q, r)}\!\left[\nabla_\theta \log p_\theta(d|q, r)\, \big(-\log(1 - D(d|q))\big)\right]. $$

To reduce the variance in REINFORCE, a standard trick is to use the advantage function instead of just the reward. This does not change the optimal parameters. Another baseline term that is suggested for each query is f_φ(d^+, q), where d^+ represents the positive document. This is legal because the term does not depend on the document (action). It is motivated by the belief that the generator's score should be pushed up more when f_φ(d^+, q) is large and less when it is low. This baseline term is used in two of their three tasks and causes the violation of the adversarial formulation, as we show in the following section.

Having shown that the generator can be optimized using REINFORCE, we focus on the loss function and show how the baseline term exacerbates training. We consider Stochastic Gradient Descent updates for ease of illustration. Consider a triple (q, d_r, d_g), where d_r denotes the correct document according to the true distribution and d_g denotes the generated document. The discriminator's updates are in the direction of ∇J^D, with the following definition.
With the baseline term included, the generator's updates are in the direction of ∇J^G. Since maximizing log(1 − z) with respect to z is the same as maximizing −log z, we can write equivalent loss functions for the two players. Note that this substitution is similar in principle to the substitution in BID11, where the motivation is to allow easier flow of gradients. It is apparent that the discriminator and the generator are optimizing directly opposite loss functions, and this is detrimental to the performance of the models. We provide experimental proof later that the performance improvements shown in IRGAN are mainly because of the discriminator maximizing the likelihood of the real data and not because of the generator.

We propose two models to compare and critically analyze performance gains facilitated by IRGAN, and illustrate them in FIG0. The first model increases the likelihood of the training data and decreases the likelihood of documents which are not relevant to the query but have a high score according to its own parameters. It maximizes the corresponding objective, where the sampling for the second term is from a candidate pool with only negative answers (denoted by p^−). Not following this will lead to undesirable updates, because sampling positive documents for the second term will result in decreasing the likelihood of real data. ψ denotes the parameters of the only model. To alleviate the pernicious loss function of IRGAN, we propose a second model which uses two discriminators in a co-operative setup influenced by Co-training (BID3). Instead of using two different views (x_1, x_2) as mentioned in that work, we use the same views for both the discriminators but let them influence each other in a feedback loop. Training is similar to Model 1, with the only difference being that each discriminator decreases the likelihood of documents relevant to the other discriminator rather than itself. This model achieves better performance than IRGAN.

This section describes the datasets, the tasks and the hyperparameters. We conduct experiments on three tasks, Web Search, Item Recommendation and Question Answering, using the same datasets mentioned in IRGAN, including InsuranceQA (BID9).

7.2 TASK

In Web Search, the task is to retrieve the document which is most relevant to the query. Each query on average has around 5 positive documents. In Content Recommendation, users give ratings for movies, and given a user, the task is to retrieve a movie that they would probably rate highly. In IRGAN, any movie retrieved for which the user rating is greater than or equal to 4 (out of a scale of 5) is considered correct. Based on the dataset statistics, around 55% of the user-movie ratings are ≥ 4. This makes the problem easy to solve. In Question Answering, every query has just one relevant document in most cases. This is thus the hardest task. The hyperparameters for the proposed models are the same as IRGAN's, except for the absence of G Epochs, for obvious reasons. Information about hyperparameter tuning is mentioned in the Appendix. We report only the P@5 and NDCG@5 values because all other metrics follow the same trend. TAB2 reports the performance of various models. As can be seen, both the Single Discriminator and the Co-training models outperform the IRGAN models. The fact that each query is associated with approximately 5 positive documents provides evidence that the proposed models can perform well in sparse reward settings.
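Returning to the loss analysis above, here is a minimal sketch of the two signals on a triple (q, d_r, d_g): the discriminator's standard objective and the generator's REINFORCE advantage whose reward −log(1 − D(d_g|q)) is offset by the baseline f_φ(d^+, q). It is an illustration written from the verbal description in this work, not IRGAN's code; the scalar score inputs stand in for network outputs.

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def discriminator_objective(f_real, f_gen):
    """J_D on a triple: maximize log D(d_r|q) + log(1 - D(d_g|q)).
    f_real = f_phi(d_r, q), f_gen = f_phi(d_g, q) are scalar scores."""
    return np.log(sigmoid(f_real)) + np.log(1.0 - sigmoid(f_gen))

def generator_advantage(f_real, f_gen):
    """Reward for the sampled document minus the suggested baseline f_phi(d+, q);
    the generator scales grad log p_theta(d_g|q, r) by this quantity."""
    reward = -np.log(1.0 - sigmoid(f_gen))     # = log(1 + exp(f_gen))
    baseline = f_real                          # f_phi(d+, q)
    return reward - baseline

# The critique's point: the discriminator pushes f_real up and f_gen down,
# while the baseline-adjusted generator signal rewards exactly the opposite
# movement, so the two objectives end up working directly against each other.
```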
This task, in contrast to the other two, has multiple relevant documents that can be retrieved for each query, making it slightly easier. Each user (query) rates a movie (document), and 55% of the entries in the train set and 56% in the test set are relevant pairs. It can be seen in TAB3 that the single discriminator model achieves only a slightly lower score, and given the small size of the dataset (943 users), it makes just 7 more mistakes when compared to IRGAN. This is not a statistically significant number, especially because the IRGAN generator is pre-initialized to a model which scores 0.34 whereas our model learns from scratch. After close correspondence with the authors of IRGAN, we obtained all the hyperparameters required for the models. Multiple random seeds were tried in vain; the results in the paper for the Question-Answering task could not be replicated. We instead report the best results out of all random seeds. We believe that if there is some random seed which gives better performance for IRGAN, it should do so for our model as well. The co-training model outperforms IRGAN-Pairwise. The loss curves in FIG1, taken from IRGAN's work, show deteriorating performance of the generator, which is in contrast to what is observed in actual adversarial training. In the minimax setting, since the generator is expected to capture the real data distribution, its performance is supposed to improve, and this can indirectly be seen in GANs and DCGANs, where the generated samples look more and more like real-world data points. Further, a deteriorating generator implies that the discriminator's improvement in performance is only because of the first term of J_D, which hints that our proposed models might be able to do better than IRGAN. The reason offered in the paper is that "A worse generator could be the result of the sparsity of document distribution, i.e., each question usually has only one correct answer". But this reason does not seem plausible, given that DCGANs have been able to model very high dimensional data, where the data distribution occupies only a tiny part of the full space. Further, the increase in performance of the discriminator in all cases is coupled with a deteriorating generator. This substantiates our claim that the discriminator and the generator are optimizing directly opposite loss functions. The item-recommendation task is a little different from the other two tasks at hand because of a large number of positive answers. When the loss curves are plotted, though the generator's performance improves, the discriminator's loss remains high and almost constant throughout the procedure, as shown in FIG2. This is another indication that the performance of IRGAN is not actually because of the adversarial setup, but because of the maximization of the likelihood of the real data. We have already shown in Section 2 that Conditional GANs are connected directly to Information Retrieval. The problem can also be viewed as a contextual multi-armed bandit problem, where each document is an arm and the context x_{q,d} can be used to determine the action-value function f_θ(x_{q,d}). In previous works BID14 ) f has been considered to be linear, but recent studies BID7 have modeled it as a deep neural network. In BID20, a parallel is drawn between Actor-Critic algorithms and GANs. This is directly related to our work because REINFORCE with a baseline can be connected to Actor-Critic algorithms when bootstrapping is used BID24 ).
The work shows a restricted scenario which involves a stateless MDP, each action setting all the pixels of the image and cross-entropy loss instead of mean-squared Bellmann residual in which GANs are equivalent to Actor-Critic algorithms. But this equivalence holds only when the baseline term is not used so the formulation in IRGAN is not exactly equivalent to a GAN framework. Another study BID10 ) draws a parallel between Inverse Reinforcement Learning BID19 ) and GANs because both the methods try to "learn" the cost function to optimize for. The experiments performed show that IRGAN is by no means state-of-the-art on those datasets. Further, the performance does not justify the large training time of 4 hours per generator epoch and 1 hour of discriminator epoch as opposed to 2 hours per epoch of the co-training model (11 GB GPU and Question Answering task). The shaky mathematical formulation renders the generator useless after training, and any gains in performance can be attributed directly to the first term of J D, where the likelihood of the real data is increased. We showed that the discriminator and generator are optimizing directly opposite loss functions and this is the cause of deleterious training. The poor performance of IRGAN on Web-Search and Question Answering and only a satisfactory performance on Content-Recommendation (which has dense rewards) lead us to speculate that it does not work well in sparse reward scenarios. This is similar to a well-known problem called the Sparse Reward Reinforcement Learning. We think that a correct formulation along with established techniques from the former, like reward shaping BID18 ) may lead to better performance. Newer methods like Hindsight Experience Replay BID0 ) which allow models to learn both from mistakes and rewards may further ameliorate learning. We would also like to explore in the direction of learning correct adversarial frameworks for more complex tasks like Image Retrieval and Question Answering which will involve learning end-toend trainable models. With advances in modeling sequences, this could also involve generation of documents rather than sampling them. The hyperparameters are mentioned in tables 6, 7, 8, 9 and 10. The following were the ranges of hyperparameter tuning, along with the best value. Gradient Descent Optimizer was used so that the comparison with IRGAN is fair. For the co-training model, for every epoch, we optimize the two discriminators several times. We call these the outer and inner epochs in TAB5. TAB6 represents the number of candidates that are chosen before performing the softmax. This is done to make the procedure computationally tractable. We use the value suggested in IRGAN. The discriminator and the generator have the same architecture in all the tasks. For the Web-retrieval task, the model has a single hidden layer 46 units. For the content-recommendation task, the model converts users and movies to a 5 dimensional embedding. This can be though to be a single hidden layer which compresses a one-hot user embedding to a 5 dimensional embedding. For the Question-Answering task, each word is initialized to a 100 dimensional random vector. A Convolutional Neural Network is then used and the window size of the convolutional kernel is. A max-pooling-over-time strategy is then used and the output is a 100 dimensional vector because each feature map is pooled to a scalar. Note that this architecture is the same as the one used in the IRGAN paper. We refer the user to that for further description.
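To make the Question-Answering encoder just described concrete, here is a rough PyTorch sketch. It is an illustration only: the convolution window size is left unspecified in the text above, so it appears as a free parameter with an arbitrary default, and combining the question and answer encodings into a score f(q, d) via cosine similarity is an assumption rather than something stated here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class QAEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, num_filters=100, window_size=3):
        super().__init__()
        # Each word starts as a 100-dimensional randomly initialized vector.
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # One feature map per output dimension; max-pooling-over-time reduces each
        # feature map to a scalar, so the final output is a 100-dimensional vector.
        self.conv = nn.Conv1d(emb_dim, num_filters, kernel_size=window_size)

    def forward(self, token_ids):                      # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)      # (batch, emb_dim, seq_len)
        h = torch.relu(self.conv(x))                   # (batch, num_filters, seq_len')
        return h.max(dim=2).values                     # max-pool over time -> (batch, 100)

def score(encoder, question_ids, answer_ids):
    # How the two encodings are combined into f(q, d) is not stated above;
    # cosine similarity is one common choice and is only an assumption here.
    return F.cosine_similarity(encoder(question_ids), encoder(answer_ids), dim=1)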
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
Syez3j0cKX
Points out problems in loss function used in IRGAN, a recently proposed GAN framework for Information Retrieval. Further, a model motivated by co-training is proposed, which achieves better performance.
Collaborative personalization, such as through learned user representations (embeddings), can improve the prediction accuracy of neural-network-based models significantly. We propose Federated User Representation Learning (FURL), a simple, scalable, privacy-preserving and resource-efficient way to utilize existing neural personalization techniques in the Federated Learning (FL) setting. FURL divides model parameters into federated and private parameters. Private parameters, such as private user embeddings, are trained locally, but unlike federated parameters, they are not transferred to or averaged on the server. We show theoretically that this parameter split does not affect training for most model personalization approaches. Storing user embeddings locally not only preserves user privacy, but also improves memory locality of personalization compared to on-server training. We evaluate FURL on two datasets, demonstrating a significant improvement in model quality with 8% and 51% performance increases, and approximately the same level of performance as centralized training with only 0% and 4% reductions. Furthermore, we show that user embeddings learned in FL and the centralized setting have a very similar structure, indicating that FURL can learn collaboratively through the shared parameters while preserving user privacy. Collaborative personalization, like learning user embeddings jointly with the task, is a powerful way to improve accuracy of neural-network-based models by adapting the model to each user's behavior (; ; ; ; ;). However, model personalization usually assumes the availability of user data on a centralized server. To protect user privacy, it is desirable to train personalized models in a privacy-preserving way, for example, using Federated Learning (; b). Personalization in FL poses many challenges due to its distributed nature, high communication costs, and privacy constraints (a; ; b; 2018; ; a). To overcome these difficulties, we propose a simple, communication-efficient, scalable, privacypreserving scheme, called FURL, to extend existing neural-network personalization to FL. FURL can personalize models in FL by learning task-specific user representations (i.e., embeddings) (; ; ; ;) or by personalizing model weights . Research on collaborative personalization in FL (; ;) has generally focused on the development of new techniques tailored to the FL setting. We show that most existing neural-network personalization techniques, which satisfy the split-personalization constraint, can be used directly in FL, with only a small change to Federated Averaging , the most common FL training algorithm. Existing techniques do not efficiently train user embeddings in FL since the standard Federated Averaging algorithm transfers and averages all parameters on a central server. Conventional training assumes that all user embeddings are part of the same model. Transferring all user embeddings to devices during FL training is prohibitively resource-expensive (in terms of communication and storage on user devices) and does not preserve user privacy. FURL defines the concepts of federated and private parameters: the latter remain on the user device instead of being transferred to the server. Specifically, we use a private user embedding vector on each device and train it jointly with the global model. These embeddings are never transferred back to the server. 
We show theoretically and empirically that splitting model parameters as in FURL affects neither model performance nor the inherent structure in learned user embeddings. While global model aggregation time in FURL increases linearly in the number of users, this is a significant reduction compared with other approaches whose global aggregation time increases quadratically in the number of users. FURL has advantages over conventional on-server training since it exploits the fact that models are already distributed across users. There is little resource overhead in distributing the embedding table across users as well. Using a distributed embeddings table improves the memory locality of both training embeddings and using them for inference, compared to on-server training with a centralized and potentially very large user embedding table. Our evaluation of document classification tasks on two real-world datasets shows that FURL has similar performance to the server-only approach while preserving user privacy. Learning user embeddings improves the performance significantly in both server training and FL. Moreover, user representations learned in FL have a similar structure to those learned in a central server, indicating that embeddings are learned independently yet collaboratively in FL. In this paper, we make the following contributions: • We propose FURL, a simple, scalable, resource-efficient, and privacy preserving method that enables existing collaborative personalization techniques to work in the FL setting with only minimal changes by splitting the model into federated and private parameters. • We provide formal constraints under which the parameter splitting does not affect model performance. Most model personalization approaches satisfy these constraints when trained using Federated Averaging , the most popular FL algorithm. • We show empirically that FURL significantly improves the performance of models in the FL setting. The improvements are 8% and 51% on two real-world datasets. We also show that performance in the FL setting closely matches the centralized training with small reductions of only 0% and 4% on the datasets. • Finally, we analyze user embeddings learned in FL and compare with the user representations learned in centralized training, showing that both user representations have similar structures. Most existing work on collaborative personalization in the FL setting has focused on FL-specific implementations of personalization. Multi-task formulations of Federated Learning (MTL-FL) present a general way to leverage the relationship among users to learn personalized weights in FL. However, this approach is not scalable since the number of parameters increases quadratically with the number of users. We leverage existing, successful techniques for on-server personalization of neural networks that are more scalable but less general, i.e., they satisfy the split-personalization constraint. Transfer learning has also been proposed for personalization in FL , but it requires alternative freezing of local and global models, thus complicating the FL training process. Moreover, some versions need access to global proxy data. uses a twolevel meta-training procedure with a separate query set to personalize models in FL. FURL is a scalable approach to collaborative personalization that does not require complex multiphase training, works empirically on non-convex objectives, and leverages existing techniques used to personalize neural networks in the centralized setting. 
We show empirically that user representations learned by FURL are similar to the centralized setting. Collaborative filtering can be seen as a specific instance of the generalized approach in FURL. Finally, while fine-tuning individual user models after FL training can be effective, we focuses on more powerful collaborative personalization that leverages common behavior among users. The main constraint in preserving privacy while learning user embeddings is that embeddings should not be transferred back to the server nor distributed to other users. While typical model parameters are trained on data from all users, user embeddings are very privacy-sensitive because a user's embedding is trained only on that user's data. FURL proposes splitting model parameters into federated and private parts. In this section, we show that this parameter-splitting has no effect on the FL training algorithm, as long as the FL training algorithm satisfies the split-personalization constraint. Models using common personalization techniques like collaborative filtering, personalization via embeddings or user-specific weights satisfy the split-personalization constraint when trained using Federated Averaging. FL algorithms typically have two steps: 1. Local Training: Each user initializes their local model parameters to be the same as the latest global parameters stored on the server. Local model parameters are then updated by individual users by training on their own data. This produces different models for each user. 1 2. Global Aggregation: Locally-trained models are "aggregated" together to produce an improved global model on the server. Many different aggregation schemes have been proposed, from a simple weighted average of parameters , to a quadratic optimization . To protect user privacy and reduce network communication, user embeddings are treated as private parameters and not sent to the server in the aggregation step. Formal conditions under which this splitting does not affect the model quality are described as follows. Suppose we train a model on data from n users, and the k-th training example for user i has features x i k, and label y i k. The predicted label isŷ, where the model has federated parameters w f ∈ R f and private parameters w i p ∈ R p ∀i ∈ {1, . . ., n}. In order to guarantee no model quality loss from splitting of parameters, FURL requires the splitpersonalization constraint to be satisfied, i.e., any iteration of training produces the same irrespective of whether private parameters are kept locally, or shared with the server in the aggregation step. The two following constraints are sufficient (but not necessary) to satisfy the splitpersonalization constraint: local training must satisfy the independent-local-training constraint, and global aggregation must satisfy the independent-aggregation constraint. The independent-local-training constraint requires that the loss function used in local training on user i is independent of private parameters for other users w j p, ∀j = i. A corollary of this constraint is that for training example k on user i, the gradient of the local loss function with respect to other users' private parameters is zero: Equation 1 is satisfied by most implementations of personalization techniques like collaborative filtering, personalization via user embeddings or user-specific model weights, and MTL-FL . Note that is not satisfied if the loss function includes a norm of the global user representation matrix for regularization. 
In the FL setting, global regularization of the user representation matrix is impractical from a bandwidth and privacy perspective. Even in centralized training, regularization of the global representation matrix slows down training a lot, and hence is rarely used in practice . Dropout regularization does not violate. Neither does regularization of the norm of each user representation separately. The independent-aggregation constraint requires, informally, that the global update step for federated parameters is independent of private parameters. In particular, the global update for federated parameters depends only on locally trained values of federated parameters, and optionally, on some summary statistics of training data. Furthermore, the global update step for private parameters for user i is required to be independent of private parameters of other users, and independent of the federated parameters. The global update for private parameters for user i depends only on locally trained values of private parameters for user i, and optionally, on some summary statistics. The independent-aggregation constraint implies that the aggregation step has no interaction terms between private parameters of different users. Since interaction terms increase quadratically in the number of users, scalable FL approaches, like Federated Averaging and its derivatives satisfy the independent-aggregation assumption. However, MTL-FL formulations do not. More formally, at the beginning of training iteration t, let w t f ∈ R f denote federated parameters and w i,t p ∈ R p denote private parameters for user i. These are produced by the global aggregation step at the end of the training iteration t − 1. Local Training At the start of local training iteration t, model of user i initializes its local federated parameters as u Global Aggregation At the end of local training, these locally updated parameters u are sent to the server for global aggregation. Equation 2 for federated parameters and Equation 3 for private parameters must hold to satisfy the independent-aggregation constraint. In particular, the global update rule for federated parameters w f ∈ R f must be of the form: where u t f,i is the local update of w f from user i in iteration t, s i ∈ R s is summary information about training data of user i (e.g., number of training examples), and a f is a function from R f +nf +ns → R f. Also, the global update rule for private parameters of user i, w i p ∈ R p, must be of the form: where u i,t p,i is the local update of w i p from user i in iteration t, s i ∈ R s is summary information about training data of user i, and a p is a function from R 2p+ns → R p. In our empirical evaluation of FURL, we use Federated Averaging as the function a f, while the function a p is the identity function w i,t+1 p:= u i,t p,i (more details in Section 3.4). However, FURL's approach of splitting parameters is valid for any FL algorithm that satisfies and. FURL works for all FL algorithms that satisfy the split-personalization constraint. Our empirical evaluation of FURL uses Federated Averaging , the most popular FL algorithm. The global update rule of vanilla Federated Averaging satisfies the independent-aggregation constraint since the global update of parameter w after iteration t is: where c i is the number of training examples for user i, and u t i is the value of parameter w after local training on user i in iteration t. Recall that u t i is initialized to w t at the beginning of local training. 
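To summarize the aggregation rule just described together with the parameter split, here is a minimal sketch of one training round. All names (server_federated, private_store, local_train) are assumptions made for the sketch, not the paper's implementation; it uses plain weighted averaging for the federated parameters and simply keeps each user's locally trained private parameters on the device, which anticipates the simplification discussed next.

import copy

def fl_round(server_federated, private_store, selected_users, local_train):
    """server_federated: dict of federated parameter arrays/tensors.
    private_store[i]:   private parameters (e.g., the user embedding) kept on device i.
    local_train(fed, priv, user): runs local SGD and returns (fed', priv', num_examples)."""
    updates, counts = [], []
    for user in selected_users:
        fed_i = copy.deepcopy(server_federated)        # download the global model
        priv_i = private_store[user]                   # already resides on the device
        fed_i, priv_i, c_i = local_train(fed_i, priv_i, user)
        private_store[user] = priv_i                   # private params never leave the device
        updates.append(fed_i)                          # only federated params are uploaded
        counts.append(c_i)

    total = float(sum(counts))
    new_federated = {}
    for name in server_federated:
        # weighted average: w^{t+1} = sum_i (c_i / sum_j c_j) * u_i^t
        new_federated[name] = sum((c / total) * u[name] for u, c in zip(updates, counts))
    return new_federated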
Our implementation uses a small tweak to the global update rule for private parameters to simplify implementation, as described below. In practical implementations of Federated Averaging , instead of sending trained model parameters to the server, user devices send model deltas, i.e., the difference between the original model downloaded from the server and the locally-trained model: d Since most personalization techniques follow Equation 1, the private parameters of user i, w The second term in the equation above is multiplied by a noisy scaling factor z i, an artifact of peruser example weighting in Federated Averaging. While it is not an essential part of FURL, our implementation ignores this scaling factor z i for private parameters. Sparse-gradient approaches for learning representations in centralized training also ignore a similar scaling factor for efficiency reasons. Thus, for the private parameters of user i, we simply retain the value after local training on user i (i.e., z i = 1) since it simplifies implementation and does not affect the model performance: where u i,t p is the local update of w i p from user i in iteration t. In other words, the global update rule for private parameters of user i is to simply keep the locally trained value from user i. While this paper focuses on learning user embeddings, our approach is applicable to any personalization technique that satisfies the split-personalization constraint. The training process is as follows: 1. Local Training: Initially, each user downloads the latest federated parameters from the server. Private parameters of user i, w i p are initialized to the output of local training from the last time user i participated in training, or to a default value if this was the first time user i was trained. Federated and private parameters are then jointly trained on the task in question. 2. Global Aggregation: Federated parameters trained in the step above are transferred back to, and get averaged on the central server as in vanilla Federated Averaging. Private parameters (e.g., user embeddings) trained above are stored locally on the user device without being transferred back to the server. These will be used for the next round of training. They may also be used for local inference. We evaluate the performance of FURL on two document classification tasks that reflect real-world data distribution across users. Datasets We use two datasets, called Sticker and Subreddit. Their characteristics are as follows. 1. In-house production dataset (Sticker): This proprietary dataset from a popular messaging app has randomly selected, anonymized messages for which the app suggested a Sticker as a reply. The features are messages; the task is to predict user action (click or not click) on the Sticker suggestion, i.e., binary classification. The messages were automatically collected, de-identified, and annotated; they were not read or labeled by human annotators. 2. Reddit comment dataset (Subreddit): These are user comments on the top 256 subreddits on reddit.com. , we filter out users who have fewer than 150 or more than 500 comments, so that each user has sufficient data. The features are comments; the task is to predict the subreddit where the comment was posted, i.e., multiclass classification. The authors are not affiliated with this publicly available dataset . Sticker dataset has 940K samples and 3.4K users (274 messages/user on average) while Subreddit has 942K samples and 3.8K users (248 comments/user on average). 
Each user's data is split (0.8/0.1/0.1) to form train/eval/test sets. Table 1 presents the summary statistics of the datasets. We formulate the problems as document classification tasks and use the LSTM-based neural network architecture in Figure 1. The text is encoded into an input representation vector by using character-level embeddings and a Bidirectional LSTM (BLSTM) layer. A trainable embedding layer translates each user ID into a user embedding vector. Finally, a Multi-Layer Perceptron (MLP) produces the prediction from the concatenation of the input representation and the user embedding. All the parameters in the character embedding, BLSTM and MLP layers are federated parameters that are shared across all users. These parameters are locally trained, sent back to the server, and averaged as in standard Federated Averaging. The user embedding is considered a private parameter. It is jointly trained with the federated parameters, but kept privately on the device. Even though user embeddings are trained independently on each device, they evolve collaboratively through the globally shared model, i.e., embeddings are multiplied by the same shared model weights. Configurations We ran 4 configurations to evaluate the performance of the models with/without FL and personalization: Global Server, Personalized Server, Global FL, and Personalized FL. Global is a synonym for non-personalized, and Server is a synonym for centralized training. The experiment combinations are shown in Table 2. Model Training Server training uses SGD, while FL training uses Federated Averaging to combine SGD-trained client models. Personalization in FL uses FURL training as described in Section 3.5. The models were trained for 30 and 40 epochs for the Sticker and Subreddit datasets, respectively. One epoch in FL means that all samples in the training set were used once. We ran hyperparameter sweeps to find the best model architectures (such as user embedding dimension, BLSTM and MLP dimensions) and learning rates. The FL configurations randomly select 10 users/round and run 1 epoch locally for each user in each round. Separate hyperparameter sweeps for FL and centralized training resulted in the same optimal embedding dimension for both configurations. The optimal dimension was 4 for the Sticker task and 32 for the Subreddit task. Metrics We report accuracy for experiments on the Subreddit dataset. However, we report AUC instead of accuracy for the Sticker dataset since the classes are highly unbalanced. Personalization improves performance significantly. User embeddings increase the AUC on the Sticker dataset by 7.85% and 8.39% in the Server and FL configurations, respectively. The improvement is even larger on the Subreddit dataset, with 37.2% and 50.51% increases in accuracy for the Server and FL settings, respectively. As shown in Figure 2, these results demonstrate that the user representations effectively learn the features of the users from their data. Personalization in FL provides similar performance to server training. There is no AUC reduction on the Sticker dataset, while the accuracy drops only 3.72% on the Subreddit dataset (as shown in Figure 2). Furthermore, the small decrease of FL compared to centralized training is expected and consistent with other results. The learning curves on the evaluation set in Figure 3 show that the performance of the FL models asymptotically approaches that of their server counterparts. Therefore, FL provides performance similar to the centralized setting while protecting user privacy.
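For reference, the architecture described at the start of this section (Figure 1) can be sketched roughly as below. Layer sizes, the mean-pooling of the BLSTM output, and all names are illustrative assumptions, not the authors' exact configuration; in FURL only the user-embedding row of the active user would live on that user's device, whereas the sketch keeps a single table just to stay self-contained.

import torch
import torch.nn as nn

class FURLClassifier(nn.Module):
    def __init__(self, num_chars, num_users, num_classes,
                 char_dim=16, lstm_dim=64, user_dim=32, hidden=64):
        super().__init__()
        # Federated parameters: shared across users and averaged on the server.
        self.char_embed = nn.Embedding(num_chars, char_dim)
        self.blstm = nn.LSTM(char_dim, lstm_dim, bidirectional=True, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(2 * lstm_dim + user_dim, hidden),
                                 nn.ReLU(), nn.Linear(hidden, num_classes))
        # Private parameter: in FURL each user's embedding is trained only on that
        # user's device and is never averaged on the server.
        self.user_embed = nn.Embedding(num_users, user_dim)

    def forward(self, char_ids, user_id):
        x = self.char_embed(char_ids)                  # (batch, seq, char_dim)
        out, _ = self.blstm(x)                         # (batch, seq, 2*lstm_dim)
        text_repr = out.mean(dim=1)                    # simple pooling (an assumption)
        u = self.user_embed(user_id)                   # (batch, user_dim)
        return self.mlp(torch.cat([text_repr, u], dim=1))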
User embeddings learned in FL have a similar structure to those learned in server training. Recall that for both datasets, the optimal embedding dimension was the same for both centralized and FL training. We visualize the user representations learned in both the centralized and FL settings using t-SNE (van der Maaten & Hinton, Nov 2008). The demonstrate that similar users are clustered together in both settings. Visualization of user embeddings learned in the Sticker dataset in Figure 4 shows that users having similar (e.g., low or high) click-through rate (CTR) on the suggested stickers are clustered together. For the Subreddit dataset, we highlight users who comment a lot on a particular subreddit, for the top 5 subreddits (AskReddit, CFB, The Donald, nba, and politics). Figure 5 indicates that users who submit their comments to the same subreddits are clustered together, in both settings. Hence, learned user embeddings reflect users' subreddit commenting behavior, in both FL and Server training. This paper proposes FURL, a simple, scalable, bandwidth-efficient technique for model personalization in FL. FURL improves performance over non-personalized models and achieves similar performance to centralized personalized model while preserving user privacy. Moreover, representations learned in both server training and FL show similar structures. In future, we would like to evaluate FURL on other datasets and models, learn user embeddings jointly across multiple tasks, address the cold start problem and personalize for users not participating in global FL aggregation.
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
Syl-_aVtvH
We propose Federated User Representation Learning (FURL), a simple, scalable, privacy-preserving and bandwidth-efficient way to utilize existing neural personalization techniques in the Federated Learning (FL) setting.
Recurrent models for sequences have been recently successful at many tasks, especially for language modeling and machine translation. Nevertheless, it remains challenging to extract good representations from these models. For instance, even though language has a clear hierarchical structure going from characters through words to sentences, it is not apparent in current language models. We propose to improve the representation in sequence models by augmenting current approaches with an autoencoder that is forced to compress the sequence through an intermediate discrete latent space. In order to propagate gradients though this discrete representation we introduce an improved semantic hashing technique. We show that this technique performs well on a newly proposed quantitative efficiency measure. We also analyze latent codes produced by the model showing how they correspond to words and phrases. Finally, we present an application of the autoencoder-augmented model to generating diverse translations. Autoencoders have a long history in deep learning BID3 BID10 BID16 BID7. In most cases, autoencoders operate on continuous representations, either by simply making a bottleneck BID3, denoising BID16, or adding a variational component BID7. In many cases though, a discrete latent representation is potentially a better fit. Language is inherently discrete, and autoregressive models based on sequences of discrete symbols yield impressive . A discrete representation can be fed into a reasoning or planning system or act as a bridge towards any other part of a larger system. Even in reinforcement learning where action spaces are naturally continuous, show that discretizing them and using autoregressive models can yield improvements. Unluckily, using discrete latent variables is challenging in deep learning. And even with continuous autoencoders, the interactions with an autoregressive component cause difficulties. Despite some success BID1 BID17, the task of meaningfully autoencoding text in the presence of an autoregressive decoder has remained a challenge. In this work we present an architecture that autoencodes a sequence s of N discrete symbols from any vocabulary (e.g., a tokenized sentence), into a K-fold (we test K = 8 and K = 32) compressed sequence c(s) of Since gradient signals can vanish when propagating over discrete variables, the compression function c(s) can be hard to train. To solve this problem, we draw from the old technique of semantic hashing BID11. There, to discretize a dense vector v one computes σ(v + n) where σ is the sigmoid function and n represents annealed Gaussian noise that pushes the network to not use middle values in v. We enhance this method by using a saturating sigmoid and a straight-through pass with only bits passed forward. These techniques, described in detail below, allow to forgo the annealing of the noise and provide a stable discretization mechanism that requires neither annealing nor additional loss factors. We test our discretization technique by amending language models over s with the autoencoded sequence c(s). We compare the perplexity achieved on s with and without the c(s) component, and contrast this value with the number of bits used in c(s). We argue that this number is a proper measure for the performance of a discrete autoencoder. It is easy to compute and captures the performance of the autoencoding part of the model. 
This quantitative measure allows us to compare the technique we introduce with other methods, and we show that it performs better than a Gumbel-Softmax BID4 BID8 in this context. Finally, we discuss the use of adding the autoencoded part c(s) to a sequence model. We present samples from a character-level language model and show that the latent symbols correspond to words and phrases when the architecture of c(s) is local. Then, we introduce a decoding method in which c(s) is sampled and then s is decoded using beam search. This method alleviates a number of problems observed with beam search or pure sampling. We show how our decoding method can be used to obtain diverse translations of a sentence from a neural machine translation model. To summarize, the main contributions of this paper are: (1) a discretization technique that works well without any extra losses or parameters to tune, (2) a way to measure the performance of autoencoders for sequence models, with baselines, and (3) an improved way to sample from sequence models trained with an autoencoder part. Below, we introduce our discretization method, the autoencoding function c(s), and finally the complete model that we use for our experiments. All code and hyperparameter settings needed to replicate our experiments will be available as open-source. As already mentioned above, our discretization method stems from semantic hashing BID11. To discretize a b-dimensional vector v, we first add noise, so v_n = v + n. The noise n is drawn from a b-dimensional Gaussian distribution with mean 0 and standard deviation 1 (deviations between 0 and 1.5 all work fine, see the ablations below). The sum is component-wise, as are all operations below. Note that noise is used only for training; during evaluation and inference n = 0. From v_n we compute two vectors: v_1 = σ'(v_n) and v_2 = (v_n < 0), where σ' is the saturating sigmoid function from BID6 BID5: σ'(x) = max(0, min(1, 1.2 σ(x) − 0.1)). The vector v_2 represents the discretized value of v and is used for evaluation and inference. During training, in the forward pass we use v_1 half of the time and v_2 the other half. In the backward pass, we let gradients always flow to v_1, even if we used v_2 in the forward computation. We will denote the vector v discretized in the above way by v_d. Since in other parts of the system we will predict v_d with a softmax, we want the number of bits to not be too large. In our experiments we stick with b = 16, so v_d is a vector of 16 bits, and so can be interpreted as an integer between 0 and 2^16 − 1 = 65535. The dense vectors representing activations in our sequence models have much larger dimensionality than 16 (often 512, see the details in the experimental section below). To discretize such a high-dimensional vector w we first have a simple fully-connected layer converting it into v = dense(w, 16). In our notation, dense(x, n) denotes a fully-connected layer applied to x and mapping it into n dimensions, i.e., dense(x, n) = xW + B, where W is a learned matrix of shape d × n, d is the dimensionality of x, and B is a learned bias vector of size n. The discretized vector v_d is converted back into a high-dimensional vector using a 3-layer feed-forward network:
h1a = dense(vd, filter_size)
h1b = dense(1.0 - vd, filter_size)
h2 = dense(relu(h1a + h1b), filter_size)
result = dense(relu(h2), hidden_size)
Above, every time we apply dense we create a new weight matrix and bias to be learned. The relu function is defined in the standard way: relu(x) = max(x, 0).
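The whole bottleneck can also be written out as a short, runnable sketch. PyTorch is used here purely for illustration; the names, module structure, and use of mean-zero unit-variance noise follow the description above, but this is a sketch under those assumptions, not the released implementation.

import torch
import torch.nn as nn

def saturating_sigmoid(x):
    return torch.clamp(1.2 * torch.sigmoid(x) - 0.1, 0.0, 1.0)

class SemhashBottleneck(nn.Module):
    def __init__(self, hidden_size=512, bits=16, filter_size=4096):
        super().__init__()
        self.to_bits = nn.Linear(hidden_size, bits)          # v = dense(w, 16)
        self.h1a = nn.Linear(bits, filter_size)
        self.h1b = nn.Linear(bits, filter_size)
        self.h2 = nn.Linear(filter_size, filter_size)
        self.out = nn.Linear(filter_size, hidden_size)

    def forward(self, w):
        v = self.to_bits(w)
        noise = torch.randn_like(v) if self.training else torch.zeros_like(v)
        vn = v + noise
        v1 = saturating_sigmoid(vn)
        v2 = (vn < 0).float()                                # the discrete bits
        if self.training and torch.rand(()).item() < 0.5:
            v_fwd = v1                                       # dense values half of the time
        else:
            # forward with the bits, gradients flow to v1 (straight-through pass)
            v_fwd = v1 + (v2 - v1).detach()
        h1 = torch.relu(self.h1a(v_fwd) + self.h1b(1.0 - v_fwd))
        h2 = torch.relu(self.h2(h1))
        return self.out(h2), v2                              # dense reconstruction and bits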
In the network above, we usually use a large filter size; in our experiments we set it to 4096 while hidden size was usually 512. We suspect that this allows the above network to recover from the discretization bottleneck by simulating the distribution of w encountered during training. Given a dense, high-dimensional vector w we will denote the corresponding returned from the network above by bottleneck(w) and the corresponding discrete vector v 2 by discrete(w). As an alternative discretization method, we consider the recently studied Gumbel-Softmax BID4 BID8. In that case, given a vector w we compute discrete g (w) by applying a linear layer mapping into 2 16 elements, ing in the logits l. During evaluation and inference we simply pick the index of l with maximum value for discrete g (w) and the vector bottleneck g (w) is computed by an embedding. During training we first draw samples g from the Gumbel distribution: g ∼ − log(− log(u)), where u ∼ U are uniform samples. Then, as in BID4, we compute x, the log-softmax of l, and set: DISPLAYFORM0 With low temperature τ this vector is close to the 1-hot vector representing the maximum index of l. But with higher temperature, it is an approximation (see FIG0 in). We multiply this vector y by the embedding matrix to compute bottleneck g (w) during training. Having the functions bottleneck(w) and discrete(w) (respectively their Gumbel-Softmax versions), we can now describe the architecture of the autoencoding function c(s). We assume that s is already a sequence of dense vectors, e.g., coming from embedding vectors from a tokenized sentence. To halve the size of s, we first apply to it 3 layers of 1-dimensional convolutions with kernel size 3 and padding with 0s on both sides (SAME-padding). We use ReLU non-linearities between the layers and layer-normalization BID0. Then, we add the input to the , forming a residual block. Finally, we process the with a convolution with kernel size 2 and stride 2, effectively halving the size of s. In the local version of this function we only do the final strided convolution, without the residual block. To autoencode a sequence s and shorten it K-fold, with K = 2 k, we first apply the above step k times obtaining a sequence s that is K times shorter. Then we put it through the discretization bottleneck described above. The final compression function is given by c(s) = bottleneck(s) and the architecture described above is depicted in FIG0.Note that, since we perform 3 convolutions with kernel 3 in each step, the network has access to a large context: 3 · 2 k−1 just from the receptive fields of convolutions in the last step. That's why we also consider the local version. With only strided convolutions, the i-th symbol in the local c(s) has only access to a fixed 2 k symbols from the sequence s and can only compress them. Training with c(s) defined above from scratch is hard, since at the beginning of training s is generated by many layers of untrained convolutions that are only getting gradients through the discretization bottleneck. To help training, we add a side-path for c(s) without discretization: we just use c(s) = s for the first 10000 training steps. In this pretraining stage the network reaches loss of almost 0 as everything needed to reconstruct s is encoded in s. After switching to c(s) = bottleneck(s) the loss is high again and improves during further training. To test the autoencoding function c(s) we will use it to prefix the sequence s in a sequence model. 
Normally, a sequence model would generate the i-th element of s conditioning on all elements of s before that, s <i, and possibly on some other inputs. For example, a language model would just condition on s <i while a neural machine translation model would condition on the input sentence (in the other language) and s <i. We do not change the sequence models in any way other than adding the sequence c(s) as the prefix of s. Actually, for reasons analogous to those in BID13, we first reverse the sequence c(s), then add a separator symbol (#), and only then concatenate it with s, as depicted in FIG1. We also use a separate set of parameters for the model predicting c(s) so as to make sure that the models predicting s with and without c(s) have the same capacity. As the architecture for the sequence model we use the Transformer . Transformer is based on multiple attention layers and was originally introduced in the context of neural machine translation. We focused on the autoencoding function c(s) and did not tune the sequence model in this work: we used all the defaults from the baseline provided by the Transformer authors (6 layers, hidden size of 512 and filter size of 4096) and only varied parameters relevant to c(s). We experimented with autoencoding on 3 different sequence tasks: on a character-level language model, on a word-level language model, and on a word-level translation model. The goal for was to check if our technique works at all, since character sequences are naturally amenable to compression into shorter sequences of objects from a larger vocabulary. For, we wanted to check if the good obtained in will still hold if the input is from a larger vocabulary and inherently more compressed space. Finally, in we want to check if this method is applicable to conditional models and how it can be used to improve decoding. We use the LM1B corpus BID2 for language modelling and we tokenize it using a subword (wordpiece) tokenizer BID12 into a vocabulary of 32000 words and word-pieces. For translation, we use the WMT English-German corpus, similarly tokenized into a vocabulary of 32000 words and word-pieces 3.Below we report both qualitative and quantitative . First, we focus on measuring the performance of our autoencoder quantitatively. To do that, we introduce a measure of discrete autoen- coder performance on sequence tasks and compare our semantic hashing based method to GumbelSoftmax on this scale. Sequence models trained for next-symbol prediction are usually trained (and often also evaluated) based on the perplexity per token that they reach. Perplexity is defined as 2 H, where H is the entropy (in bits) of a distribution. Therefore, a language model that reaches a per-word perplexity of p, say p = 32, on a sentence s can be said to compress each word from s into log(p) = 5 bits of information. Let us now assume that this model is allowed to access some additional bits of information about s before decoding. In our autoencoding case, we let it peek at c(s) before decoding s, and c(s) has K = 8 times less symbols and b = 16 bits in each symbol. So c(s) has the information capacity of 2 bits per word. If our autoencoder was perfectly aligned with the needs of the language model, then allowing it to peek into c(s) would lower its information needs by these 2 bits per word. 
The perplexity p′ of the model with access to c(s) would thus satisfy log2(p′) = 5 − 2 = 3, so its perplexity would be p′ = 8. Getting the autoencoder c(s) perfectly aligned with the language model is hard, so in practice the perplexity p′ is always higher. But since we measure it (and optimize for it during training), we can calculate how many bits the c(s) part has actually contributed to lowering the perplexity. We calculate log2(p) − log2(p′) and then, if c(s) is K-times shorter than s and uses b bits per symbol, we define the discrete sequence autoencoding efficiency as: DSAE = K (log2(p) − log2(p′)) / b = K (ln(p) − ln(p′)) / (b ln 2). The second formulation is useful when the raw numbers are given as natural logarithms, as is often the case during neural network training. Defined in this way, DSAE measures how many of the available bits in c(s) are actually used well by the model that peeks into the autoencoded part. Note that some models may have autoencoding capacity higher than the number of bits per word that log(p) indicates. In that case achieving DSAE = 1 is impossible even if log(p′) = 0 and the autoencoding is perfect. One should be careful when reporting DSAE for such over-capacitated models. So how does our method perform on DSAE and how does it compare with Gumbel-Softmax? In Table 1 we list log-perplexities of baseline and autoencoder models. We report numbers for the global version of c(s) on our 3 problems and compare it to Gumbel-Softmax on the word-level problems. We did not manage to run the Gumbel-Softmax on character-level data in our baseline configuration because it requires too much memory (as it needs to learn the embeddings for each latent discrete symbol). Also, we found that the results for Gumbel-Softmax heavily depend on how the temperature parameter τ is annealed during training. We tuned this on 5 runs of a smaller model and chose the best configuration. This was still not enough, as in many runs the Gumbel-Softmax would only utilize a small portion of the discrete symbols. We added an extra loss term to increase the variance of the Gumbel-Softmax and ran another 5 tuning runs to optimize this loss term. We used the best configuration for the experiments above. Still, we did not manage to get any information autoencoded in the translation model, and got only 12% efficiency in the language model (see Table 1). Table 2: Autoencoder-augmented language models with different noise deviations. All values from no noise (0.0) up to a deviation of 1.5 yield DSAE between 40% and 50%. Our method, on the other hand, was most efficient on character-level language modeling, where we reach almost 60% efficiency, and it retained a high 55% efficiency on the word-level language modeling task. On the translation task, our efficiency goes down to 19%, possibly because the c(s) function does not take the inputs into account, and so may not be able to compress the right parts to align with the conditional model that outputs s depending on the inputs. But even with 19% efficiency it is still useful for sampling from the model, as shown below. To make sure that our autoencoding method is stable, we experiment with different standard deviations for the noise n in the semantic hashing part. We perform these experiments on word-level language modelling with a smaller model configuration (3 layers, hidden size of 384 and filter size of 2048). The results, presented in Table 2, show that our method is robust to the amount of noise. Interestingly, we see that our method works even without any noise (standard deviation 0.0).
We suspect that this is due to the fact that half of the time in the forward computation we use the discrete values anyway and pass gradients through to the dense part. Also, note that a standard deviation of 1.5 still works, despite the fact that our saturating sigmoid is saturated for values above 2.4 as 1.2 · σ(2.4) − 0.1 = 1.0002. Finally, with deviation 1.0 the small model achieves DSAE of 48.5%, not much worse than the 55% achieved by the large baseline model and better than the larger baseline model with Gumbel-Softmax. Having trained the models, we try to find out whether the discrete latent symbols have any interpretable meaning. We start by asking a simpler question: do the latent symbols correspond to some fixed phrases or topics?We first investigate this in a 32-fold compressed character-level language model. We set c(s) to 4 random latent symbols [l 1, l 2, l 3, l 4] and decode s with beam search, obtaining:All goods are subject to the Member States' environmental and security aspects of the common agricultural policy. Now, to find out whether the second symbol in c(s) stands for anything fixed, we replace the third symbol by the second one, hoping for some phrase to be repeated. Indeed, decoding s from the new c(s) = [l 1, l 2, l 2, l 4] with beam search we obtain:All goods are charged EUR 50.00 per night and EUR 50.00 per night stay per night. Note that the beginning of the sentence remained the same, as we did not change the first symbol, and we see a repetition of EUR 50.00 per night. Could it be that this is what that second latent symbol stands for? But there were no EUR in the first sentence. Let us try again, now changing the first symbol to a different one. With c(s) = [l 5, l 2, l 2, l 4] the decoded s is:All bedrooms suited to the large suite of the large living room suites are available. We see a repetition again, but of a different phrase. So we are forced to conclude that the latent code is structured, the meaning of the latent symbols can depend on other symbols before them. Failing to decipher the code from this model, we try again with an 8-fold compressed character-level language model that uses the local version of the function c(s). Recall (see Section 2.3) that a local function c(s) with 8-fold compression generates every latent symbol from the exact 8 symbols that correspond to it in s, without any context. With this simpler c(s) the model has lower DSAE, 35%, but we expect the latent symbols to be more context-independent. And indeed: if we pick the first 2 latent symbols at random but fix the third, fourth and fifth to be the same, we obtain the following:It's studio, rather after a gallery gallery... When prices or health after a gallery gallery... I still offer hotels at least gallery gallery... So the fixed latent symbol corresponds to the word gallery in various contexts. Let us now ignore context-dependence, fix the first three symbols, and randomly choose another one that we repeat after them. Here are a few sample decodes:Come to earth and culturalized climate climate... Come together that contribution itself, itself,... Come to learn that countless threat this gas threat... In the first two samples we see that the latent symbol corresponds to climate or itself, respectively. Note that all these words or phrases are 7-characters long (and one character for space), most probably due to the architecture of c(s). 
But in the last sample we see a different phenomenon: the latent symbol seems to correspond to X threat, where X depends on the context, showing that this latent code also has an interesting structure. From the above we know that our discretization method works quantitatively and we see interesting patterns in the latent code. But how can we use the autoencoder models in practice? One well-known problem with autoregressive sequence models is decoding. In settings where the possible outputs are fairly restricted, such as translation, one can obtain good results with beam search. But results obtained by beam search lack diversity BID15. Sampling can improve diversity, but it can introduce artifacts or even change semantics in translation. We present an example of this problem in FIG2. We pick an English sentence from the validation set of our English-German dataset and translate it using beam search and sampling (left and middle columns). In the left column, we show the top 3 results from beam search using our baseline model (without autoencoder). It is not necessary to speak German to see that they are all very similar; the only difference between the first and the last one is the spaces before "%". Further beams are also like this, providing no real diversity. In the middle column we show 3 results sampled from the baseline model. There is more diversity in them, but they still share most of the first half and, unluckily, all of them actually changed the semantics of the sentence in the second half. The part African-Americans, who accounted however for only 13% of voters in the State becomes The american voters were only 13% of voters in the state in the first case, African-Americans, who accounted however for only 13% of all people in the State in the second one, and African-Americans, who elected only 13% of people in the State in the third case. This illustrates the dangers of just sampling different words during decoding. Using a model with access to the autoencoded part c(s) presents us with another option: sample c(s) and then run beam search for the sequence s appropriate for that c(s). In this way we do not introduce low-level artifacts from sampling, but still preserve high-level diversity. To sample c(s) we train a language model on c(s) with the same architecture as the model for s (and also conditioned on the input), but with a different set of weights. We then use standard multinomial sampling from this model to obtain c(s) and run a beam search on the model for s with the sampled c(s). In the right column of FIG2 we show 3 samples obtained in this way. As you can see, these samples are much more diverse and they still preserve the semantics of the original sentence, even if with sometimes strange syntax. One would back-translate the first example as: It turned out, for example, in the course of the parliamentary elections in Florida, that 33% of the early voters are African-Americans, which were, however, only 13% of the voters of the state. Note the addition of It turned out and the restructuring of the sentence. In the third sample the whole order is reversed, as it starts with 33% of the voters... instead of the election phrase. Obtaining such samples that differ in phrase order and other aspects but preserve semantics has been a challenge in neural translation.
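At a high level, this decoding procedure looks as follows. The function and method names (latent_model.sample, seq_model.beam_search) are placeholders assumed for the sketch; any implementation that can sample from the latent model and run prefix-conditioned beam search on the main model would do.

def diverse_decode(inputs, latent_model, seq_model, num_samples=3, beam_size=4):
    outputs = []
    for _ in range(num_samples):
        # 1. Sample a latent code c(s) from the separate model trained to predict it;
        #    multinomial sampling here provides the high-level diversity.
        latent_code = latent_model.sample(inputs)
        # 2. Run ordinary beam search on the main model, conditioned on the inputs
        #    and on the sampled latent code used as a (reversed) prefix.
        outputs.append(seq_model.beam_search(inputs, prefix=latent_code,
                                             beam_size=beam_size))
    return outputs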
We introduce a measure of efficiency of discrete autoencoders in sequence models and show that improved semantic hashing has over 50% efficiency. In some cases, we can decipher the latent code, showing that latent symbols correspond to words and phrases. On the practical side, sampling from the latent code and then running beam search allows to get valid but highly diverse samples, an important problem with beam search BID15.We leave a number of questions open for future work. How does the architecture of the function c(s) affect the latent code? How can we further improve discrete sequence autoencoding efficiency? Despite remaining questions, we can already see potential applications of discrete sequence autoencoders. One is the training of multi-scale generative models end-to-end, opening a way to generating truly realistic images, audio and video. Another application is in reinforcement learning. Using latent code may allow the agents to plan in larger time scales and explore more efficiently by sampling from high-level latent actions instead of just atomic moves.
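As a small aid to interpreting the efficiency numbers quoted above, the DSAE measure can be computed from measured log-perplexities as in the sketch below. It assumes per-token log-perplexities are given in nats, matching the natural-logarithm formulation mentioned earlier; the function name is ours.

import math

def dsae(log_ppl_baseline, log_ppl_autoencoder, K, bits):
    """Discrete sequence autoencoding efficiency.
    log_ppl_* are per-token log-perplexities in nats, K is the compression factor
    of c(s), and `bits` is the number of bits per latent symbol."""
    saved_bits_per_token = (log_ppl_baseline - log_ppl_autoencoder) / math.log(2)
    available_bits_per_token = bits / K
    return saved_bits_per_token / available_bits_per_token

# Example: with K = 8, b = 16, a baseline log-perplexity of 1.0 nats and 0.8 nats
# when peeking at c(s), DSAE = 8 * (1.0 - 0.8) / (16 * ln 2) is roughly 0.14 (14%).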
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BkQCGzZ0-
Autoencoders for text with a new method for using discrete latent space.
Auto-encoding and generative models have achieved tremendous success in image and signal representation learning and generation. These models, however, generally employ the full Euclidean space or a bounded subset (such as $[0,1]^l$) as the latent space, whose trivial geometry is often too simplistic to meaningfully reflect the structure of the data. This paper aims at exploring a nontrivial geometric structure of the latent space for better data representation. Inspired by differential geometry, we propose \textbf{Chart Auto-Encoder (CAE)}, which captures the manifold structure of the data with multiple charts and transition functions among them. CAE translates the mathematical definition of a manifold by parameterizing the entire data set as a collection of overlapping charts, creating local latent representations. These representations are an enhancement of the single-charted latent space commonly employed in auto-encoding models, as they reflect the intrinsic structure of the manifold. Therefore, CAE achieves a more accurate approximation of the data and generates realistic new samples. We conduct experiments with synthetic and real-life data to demonstrate the effectiveness of the proposed CAE. Autoencoding (; ;) is a central tool in unsupervised representation learning. The latent space therein captures the essential information of a given data set, serving the purposes of dimension reduction, denoising, and generative modeling. Even for models such as generative adversarial networks that do not employ an encoder, the generative component starts with a latent space. A common practice is to model the latent space as a low-dimensional Euclidean space R^l or a bounded subset of it (e.g., $[0,1]^l$), sometimes equipped with a prior probability distribution. Such spaces carry overly simple geometry and may not be adequate for representing complexly structured data. In this work, we are concerned with a widely studied structure: the manifold. A commonly held belief, known as the Manifold Hypothesis, states that real-world data often lies on, or at least near, some low-dimensional manifold embedded in the high-dimensional ambient space. Hence, a natural approach to representation learning is to introduce a low-dimensional latent space to which the data is mapped. It is desirable that such a mapping possesses basic properties such as invertibility and continuity. In differential geometry, such a notion is coined a homeomorphism. Challengingly, it is known that even for some simple manifolds, there does not always exist a homeomorphic mapping to the Euclidean space of the intrinsic dimension of the data. We elaborate on two such examples next. Consider a data set X lying on the 2-dimensional sphere S^2 embedded in the ambient space R^n where n ≥ 3. It is well known that there exist no homeomorphic maps between S^2 and an open domain in R^2. Therefore, it is impossible for a traditional autoencoder with a 2-dimensional latent space to faithfully capture the structure of the data. Consequently, the dimension of the latent space needs to be increased beyond the intrinsic dimension (two in this case). For another example, we show in Figure 1 a double torus. When one uses an autoencoder to map uniform data points on this manifold to R^2, the distribution of the points is distorted and the shape destroyed, whereas if one maps to R^3, some of the points depart from the mass and become outliers.
Furthermore, in the appendix (see Figure 11) we demonstrate that increasing the number of parameters of the autoencoder does not help overcome the coverage issue when the latent space is a single 2-dimensional space. To better reflect such structures, in this work we follow the definition of manifolds in differential geometry and propose the Chart Auto-Encoder (CAE) for learning a low-dimensional representation of data lying on a manifold. Rather than using a single function mapping, the manifold is parameterized by a collection of overlapping chart functions, each of which describes a local neighborhood, and which collectively cover the entire manifold. To the right of Figure 1, we show the same double torus, now encoded using four color-coded charts. One sees that the encoding faithfully preserves the shape of the data set, as well as the topology (two holes). To realize the parameterization, we develop a neural network architecture and propose a training regime to implement it. We conduct a comprehensive set of experiments on both synthetic and real-world data to demonstrate that CAE captures the structure of the data much better and enriches our understanding of it. One common approach to enhancing the capability of autoencoders is to impose a prior distribution on the flat Euclidean latent space, as in variational autoencoders (VAE). The distributional assumption (e.g., Gaussian) introduces low-density regions that sometimes depart from the manifold. Then, paths in these regions will either trace off the manifold or become invariant. Prior work introduces a non-Euclidean latent space to guarantee the existence of a homeomorphic representation by using a so-called homeomorphic variational autoencoder (HVAE). There are two disadvantages of this approach. First, it requires the user to know the topological class of the data set, which can be extremely difficult for real-world data. Second, it requires the computation (or estimation) of the Lie group action on the latent space. If the topology of the data is relatively simple (e.g., a sphere or torus), the computation is possible, but for more complex objects the estimation is much more difficult. Similarly, several recent works have studied autoencoders with (hyper)-spherical latent spaces. These methods allow for the detection of cyclical features, but offer little insight into the homology of the manifold, since it will always be represented by a compact genus-zero surface. Exploring the low-dimensional structure of manifolds has led to many dimension reduction techniques in the past two decades (; ; ; ; ; ; ; van der). Isomap divides a data set into local neighborhoods, which are embedded into a low-dimensional space that preserves local properties. Similarly, Laplacian Eigenmaps use embeddings induced by the Laplace-Beltrami eigenfunctions to represent the data. These methods employ a flat Euclidean space for embedding and may lose information, as mentioned above. Another line of work takes a manifold point of view to explain the Wasserstein generative adversarial network (WGAN) using optimal transport, by minimizing a distance between a manifold parameterized by a neural network and one estimated from some training data. Under the manifold hypothesis, later work extends this line and shows the theoretical existence of neural networks that approximate functions supported on low-dimensional manifolds, with a number of parameters only weakly dependent on the embedding dimension.
A key feature in their proposal is a chart determination sub-network that divides the manifold into charts and a pairing sub-network that re-combines them. All existing methods consider the chart structure of data manifolds only theoretically and assume that the manifold in question is known. Multi-chart latent space representations, however, have not been implemented or evaluated computationally. Moreover, part of the constructions relies on information about the underlying manifold, which is unavailable in most real-world applications. Additionally, important questions regarding the loss function and the training method remain open. Our work introduces an implementable neural network architecture which is able to address these challenges by directly approximating the data manifold. A manifold is a topological space locally homeomorphic to a Euclidean domain. More formally, a d-dimensional manifold can be described by a collection of pairs {(M_α, φ_α)}_α, referred to as charts, where the open sets M_α cover the manifold and each φ_α maps M_α homeomorphically onto a domain U_α ⊂ R^d (see Figure 2, left). Smoothness of the transition functions controls the smoothness of the manifold. A well-known result from differential geometry states that any compact manifold can be covered by a finite number of charts which obey these transition conditions. The intrinsic dimension of the manifold is the dimension of U_α; see for a thorough review. In practice, the coherent structure of data motivates us to model a given data set as samples from an unknown underlying manifold. One crucial task in machine learning is to explore the topological and geometric structure of the manifold and perform tasks such as classification and data generation. Mathematically, we explain the encoding and decoding process for a manifold as follows. Given some manifold M, usually embedded in a high-dimensional ambient space R^n, the encoding network constructs a local parameterization φ_α from the data manifold to the latent space U_α, and the decoding network maps U_α back to the data manifold M through φ_α^{-1}. In standard autoencoders, only one single chart U_α is used as the latent space. In our work, multiple charts are used. Different from classical dimension reduction methods where distance preservation is preferred, we do not require the local parameterization φ_α to preserve the metric; we only bound its Lipschitz constant to control the regularity of the parameterization (see Section 4 for more details). To illustrate the utility of such a multi-charted parameterization, we consider a simple example: find a latent representation of data sampled from the 1-dimensional circle S^1 embedded in R^2 (see Figure 2, right). A simple (non-chart) parameterization is (cos(z), sin(z)) with z ∈ (−∞, ∞). However, approximating this parameterization with a finite neural network is impossible, since z is unbounded and hence any multi-layer perceptron will have an infinite Vapnik-Chervonenkis dimension. One obvious alternative is to limit z ∈ [0, 2π), but this parameterization introduces a discontinuity and breaks the topology (it is theoretically known that the closed circle is not homeomorphic to [0, 2π)). Following the definition of a manifold, we instead parameterize the circle with several overlapping charts, each covering an arc with its own bounded local coordinate. Although this parameterization is more cumbersome to write, it is more suitable for representation learning, since each encoding function can be represented with a finite neural network. Moreover, the topological and geometric information of the data is maintained.
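As a small numerical illustration of this idea (not the paper's code, and with an arbitrary choice of four arcs), the following sketch covers S^1 with overlapping charts, each parameterized by a bounded local coordinate in [0, 1], and verifies that every point on the circle is reconstructed by some chart.

import numpy as np

# Four overlapping arcs, each centered at angle c and spanning +/- 2*pi/3,
# so neighboring charts overlap and their union covers the whole circle.
centers = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])
half_width = 2 * np.pi / 3

def decode(alpha, t):
    # Chart decoder: local coordinate t in [0, 1] -> point on S^1 in R^2.
    theta = centers[alpha] + (2 * t - 1) * half_width
    return np.array([np.cos(theta), np.sin(theta)])

def encode(x):
    # Pick the chart whose center is closest, then compute its local coordinate.
    theta = np.arctan2(x[1], x[0]) % (2 * np.pi)
    diffs = (theta - centers + np.pi) % (2 * np.pi) - np.pi   # signed angular offsets
    alpha = int(np.argmin(np.abs(diffs)))
    t = (diffs[alpha] / half_width + 1) / 2                   # back to [0, 1]
    return alpha, t

# Round trip: every point on the circle is recovered by its selected chart.
for theta in np.linspace(0, 2 * np.pi, 12, endpoint=False):
    x = np.array([np.cos(theta), np.sin(theta)])
    alpha, t = encode(x)
    assert np.allclose(decode(alpha, t), x)

Each decoder here is a bounded, continuous map from [0, 1], which is exactly the property that makes the chart functions representable by finite networks in the learned setting.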
Thus, instead of using only one chart as in standard autoencoders (; ;), we propose to model the latent space with multiple charts glued by their transition functions, akin to the mathematical definition of a manifold. This geometric construction reflects the intrinsic structure of the manifold. Therefore, it is able to achieve a more accurate approximation of the data and to generate realistic new samples. Moreover, once the charts and the associated transition functions are learned, the geometric information of the data manifold, including the metric, geodesics, and curvatures, can be approximated according to their definitions in differential geometry. Thus, this multi-chart latent construction leads to a better geometric understanding of the manifold. To integrate manifold structure in the latent space, we investigate a new representation of the latent space based on a multi-chart structure. We implement a novel network architecture to learn the multi-chart latent space and its transition functions. The proposed network architecture, illustrated in Figure 3, can be summarized as follows: An input x is passed into an encoding module, which creates an initial latent representation. Next, a collection of chart parameterizations (encoders E_α, playing the role of φ_α) map the initial latent representation to N different chart spaces U_α, which provide the new multi-chart latent space. Each chart representation is then passed into a decoding function (a decoder D_α, playing the role of φ_α^{-1}), which produces an approximation of the input. Finally, the chart prediction module decides which chart and associated decoder best represent the data. As detailed in A.3, this architecture, along with the proposed loss function, naturally enforces the chart transitions without explicit computation. Initial Encoder The initial encoder serves as a dimension reduction step to find a low-dimensional isometric embedding of the data from R^n to R^l. For example, given a torus in R^3 further embedded in R^1000, the initial encoder maps from R^1000 to a lower-dimensional space, ideally R^3. (Note, however, that three is not the intrinsic dimension of the torus; two is. Thus, we subsequently introduce chart encoders to map from the 3-dimensional space to R^2.) We approximate this mapping using a neural net, denoted E, with a combination of fully connected and convolution layers (see Section A.5 for details). We choose l ≪ n; this encoding can be viewed as a dimension reduction step which prepares the data to be split into each of the multi-chart latent spaces. Ideally, this step preserves the original topological and geometric information of the data set while also reducing its dimension to the minimal isometric embedding dimension of the manifold. It also improves the computational efficiency of the subsequent mapping to the multi-chart latent space. This step can be replaced with a Homeomorphic Variational Auto-Encoder in cases where the topology of the data set is known, or with an appropriately chosen random matrix projection. Chart Encoder This step provides a local parameterization of the data manifold in a chart space whose dimension is close to the intrinsic dimension of the data manifold. This splitting is done with a small collection of networks {E_α} which take z ∈ R^l as input and output N local coordinates z_α ∈ U_α. We denote the direct sum of these spaces as U = ⊕_{α=1}^{N} U_α, which is the proposed multi-chart latent space for our model.
In practice, we choose U_α = [0,1]^l and regularize the Lipschitz constant of the corresponding decoding map, to control the size and regularity of the region M_α ⊂ M parameterized by U_α (more details in Section 4). We remark that the proposed multi-chart architecture aims at constructing the correct latent space structure and understanding the geometric structure of the data manifold. The decoupled nature of the encoding operations means that the model tends to be larger in terms of the number of parameters. However, the improvement shown in the experiments is not caused by the use of more parameters, but rather by the correct latent space structure. Further experiments in Appendix A.10 show that increasing the number of parameters in a VAE alone (without increasing the latent dimension) does not allow one to simultaneously produce good reconstruction and generation. A latent space of too small a dimension will not be able to cover a manifold, and one of too large a dimension will generate points far from the manifold. Thus, the structure of the latent space is more important than the number of parameters. Decoders Each latent chart U_α is equipped with an associated decoder function D_α, which maps from the chart latent space U_α back to the ambient space. We represent each of these maps with a deep network, and the networks are trained to reconstruct the input data. Chart Selection There are several options for the chart selection module P for an input x sampled from the data manifold. In general, this module must produce a prediction or confidence measure {p_α(x)}_α = P(x) regarding which chart should be used for a given input. After training, this module can also be used to reduce the computational cost of evaluating the model, by ensuring that only a small set of decoder functions needs to be evaluated for any input signal. Output The output of the network depends on the application and the training specifics (discussed further in Section 4.1). In general, the network produces an internal latent representation z_α(x) for an input x, a reconstruction signal y ≈ x to check the fidelity of the system, as well as some confidence {p_α(x)} in this prediction. Each y_α = D_α ∘ E_α ∘ E(x) may be used as a proxy for y (in which case each p_α can be interpreted as the probability that chart α is the correct one), or some combination of the y_α may be used (in which case the p_α are interpreted as partition-of-unity weights). In this section, we explain several task-specific modeling options, loss functions, regularizations, and pre-training schemes that promote training efficiency of the proposed model. The chart prediction module assigns any input to one or more charts. If the data manifold has relatively simple geometry, such as a circle, we may use the normalized distance from the data point to the center of the patch for prediction. This procedure is extremely efficient, but is not sufficiently powerful in cases where the geometry is more complex. For example, for the same surface area, a high-curvature region may require many small charts to cover, whereas a flat region may need only one chart. In this case we can compute p_α with a deep network, denoted as the chart predictor in Figure 3, using some combination of x, z, and/or z_α as inputs. Using x as an input results in a network which is independent of the rest of the model (and can potentially be trained separately), but the size and complexity of this network will depend on the ambient dimension of x.
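A minimal PyTorch sketch of the architecture just described is given below. The layer sizes, dimensions, and the use of fully connected layers are illustrative assumptions for readability, not the paper's exact configuration; the chart predictor here takes the shared code z as input, which is one of the options discussed above.

import torch
import torch.nn as nn

class ChartAutoEncoder(nn.Module):
    def __init__(self, ambient_dim=1000, init_dim=16, chart_dim=2, n_charts=4):
        super().__init__()
        # Initial encoder E: dimension reduction from the ambient space to R^init_dim.
        self.E = nn.Sequential(nn.Linear(ambient_dim, 64), nn.ReLU(),
                               nn.Linear(64, init_dim))
        # Chart encoders E_alpha: map the shared code z to each chart space U_alpha.
        self.chart_encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(init_dim, 32), nn.ReLU(),
                           nn.Linear(32, chart_dim), nn.Sigmoid())   # U_alpha = [0,1]^d
             for _ in range(n_charts)])
        # Chart decoders D_alpha: map each chart space back to the ambient space.
        self.chart_decoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(chart_dim, 32), nn.ReLU(),
                           nn.Linear(32, ambient_dim))
             for _ in range(n_charts)])
        # Chart predictor P: confidence over charts, here computed from z.
        self.P = nn.Sequential(nn.Linear(init_dim, 32), nn.ReLU(),
                               nn.Linear(32, n_charts), nn.Softmax(dim=-1))

    def forward(self, x):
        z = self.E(x)
        z_alpha = [enc(z) for enc in self.chart_encoders]            # local coordinates
        y_alpha = [dec(za) for dec, za in zip(self.chart_decoders, z_alpha)]
        p = self.P(z)                                                # chart confidences
        return torch.stack(y_alpha, dim=1), p                        # (B, N, D) and (B, N)

model = ChartAutoEncoder()
recons, probs = model(torch.randn(8, 1000))
print(recons.shape, probs.shape)   # torch.Size([8, 4, 1000]) torch.Size([8, 4])

The per-chart outputs y_alpha and the confidences p are exactly the quantities consumed by the loss functions of the next section.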
Using the internal representation z or z_α as an input to this network allows the chart selection module to benefit from the dimension reduction of x performed by E. We propose two loss functions which lead to two slightly different interpretations of the model, based on how regions in which the charts overlap are handled. In the first regime, we define a decoder-wise loss for y_α = D_α ∘ E_α ∘ E(x) as e_α = ‖x − y_α‖_2 and an internal label ℓ_α = softmax(e_α). The chart-prediction loss, with Θ denoting the network parameters, then combines two terms: the first term models the reconstruction error of the predicted chart, and the second term is the log-likelihood of the prediction, weighted by the decoder-wise error. The second regime is based on the idea of a partition of unity (see for details). Here, we view p_α: M → R as a function with compact support in M, i.e. the closure of {x ∈ M | p_α(x) ≠ 0} is contained in M, with p_α ≥ 0 for all α and Σ_α p_α = 1. These functions serve as a partition of unity (see Figure 8 for an example). Thus, we represent any point on the manifold as a combination of the charts and use the confidence weights predicted by the chart predictor as the coefficients; the resulting objective is the partition-of-unity loss. Since it is impossible to know a priori the number of charts necessary to cover a given data set, we instead overparameterize the model by using many charts and enforce a strong regularization on the decoder functions to eliminate unnecessary charts. Note that during training, a chart function (say E_j) which is not utilized in the reconstruction of a point (i.e. p_j ≈ 0) does not get any update from the loss function. Then, adding any convex penalty centered at 0 to the weights of E_j will result in weight decay and, if a decoder is never utilized during training, its weights will go to zero. In practice, we can automatically remove these charts by eliminating them from the network when the norm of the decoder weights falls below some tolerance. This mechanism provides a way of choosing the number of charts used in the network. Namely, we overestimate the number of charts and let the network automatically eliminate unnecessary ones, resulting in an appropriate number. We also introduce an additional regularization to stabilize the training of our network by balancing the sizes of the regions M_α and stopping a small number of charts from dominating the data manifold. For example, if we would like to use our network to model a (finitely sampled) sphere S^2, then we need at least two 2-dimensional charts. However, if we regularize a network with only l_2 weight decay, it may be able to reconstruct the training data well with only one chart, without capturing the true structure of the data (see the right panel of Figure 4 for such an example). To prevent this type of overreach, we add a Lipschitz regularization to the decoders to penalize how far away nearby inputs can be mapped. Formally, the Lipschitz constant of a function f is defined as Lip(f) = sup_{x ≠ y} ‖f(x) − f(y)‖ / ‖x − y‖. Since the chart spaces are fixed as [0,1]^l, controlling the Lipschitz constant of the decoder controls the maximum (Euclidean) volume a chart can map onto. To do this, we note that the Lipschitz constant of a composition of functions can be upper bounded by the product of the Lipschitz constants of its constituent functions. The Lipschitz constant of a matrix is its spectral norm and the Lipschitz constant of the ReLU is 1.
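The following is a hedged sketch of the two training regimes just described. Since the exact loss equations are not reproduced in the text above, this is one plausible reading rather than the paper's definitive formulation: (i) weight the per-chart reconstruction errors with a soft label derived from those errors and supervise the chart predictor with the same label; (ii) treat the predicted confidences as partition-of-unity coefficients and reconstruct their blend.

import torch
import torch.nn.functional as F

def chart_prediction_loss(x, y_alpha, p):
    # x: (B, D); y_alpha: (B, N, D) per-chart outputs; p: (B, N) chart confidences.
    errs = ((y_alpha - x.unsqueeze(1)) ** 2).mean(dim=-1)        # decoder-wise errors e_alpha
    label = torch.softmax(-errs.detach(), dim=-1)                # soft label: low error -> high weight
    recon = (label * errs).sum(dim=-1).mean()                    # reconstruction of preferred charts
    nll = -(label * torch.log(p + 1e-8)).sum(dim=-1).mean()      # error-weighted prediction log-likelihood
    return recon + nll

def partition_of_unity_loss(x, y_alpha, p):
    # Blend the per-chart outputs with the partition-of-unity weights and reconstruct.
    blend = (p.unsqueeze(-1) * y_alpha).sum(dim=1)               # (B, D)
    return F.mse_loss(blend, x)

Either loss can be applied directly to the (recons, probs) pair produced by the architecture sketch above; the sign convention inside the softmax (favoring the lowest-error chart) is an assumption.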
Then, we can control an upper bound on the Lipschitz constant of a decoder function by regularizing the product of the spectral norms of its weights at each layer. Combining these ideas, and denoting the weights of the k-th layer of the α-th decoder as W_α^k, we propose a regularization on the decoder functions of a K-layer network in which the first term aims at stopping a single chart from dominating, and the second term works as a weight decay which also promotes the smoothness of each chart. Since our model jointly predicts the chart outputs and chart probabilities, it is important that our model is properly initialized, so that the range of each decoder lies somewhere on the manifold and the probability that a randomly sampled point lies in each chart is roughly equal. To do this, we begin by using the furthest point sampling (FPS) scheme to select N data points, {x_α}, from the training set which are 'far away' from each other. Then we assign each of these data points to a decoder and train each one to reconstruct its assigned point. Additionally, we train the encoder such that x_α is at the center of the chart space U_α. We further define the chart prediction probability as the corresponding categorical distribution and use it to pre-train the chart predictor. The loss for the β-th initialization example (equation 5) combines these pre-training terms. We can extend this idea of pre-training to also ensure that the charts are oriented consistently. Details are presented in A.4. We remark that the pre-training network does not aim at separating the data manifold into different clusters. The pre-training works to ensure that each of the decoders starts on the manifold, so that when training begins no decoder stays inactive. Since the chart selection module is learned in conjunction with the rest of the model, we do not assume any prior segmentation of the data. During training the charts will move, change sizes, overlap, or disappear. We present numerical results of several experiments to demonstrate the effectiveness of the proposed CAE. First, we study three illustrative geometric examples (a sphere, a double torus, and a genus-3 surface) to understand the behavior of the network. Afterward, we use the MNIST data set to further demonstrate the properties of CAE. Finally, we evaluate CAE and compare it with other models on several metrics using synthetic data sets as well as MNIST and SVHN. In our first experiment, illustrated in Figure 4, we visualize the process of applying a four-chart CAE to a data set sampled from the unit sphere (see Appendix A.2 for the network architecture). We apply the proposed loss function with and without the Lipschitz regularization discussed in Section 4.2. We use four copies of [0,1]^2 as the chart latent space in this experiment and color-code [0,1]^2 using the distance of each point to the origin. After training, we uniformly sample points in the latent space and use the learned decoders to generate points back on the unit sphere. As we can see from the middle panel of Figure 4, the four charts, when glued together, successfully cover the unit sphere. Moreover, all charts occupy the data manifold in a balanced and regularized way; that is, even though they are not uniform, no single chart dominates the rest. From the right panel of Figure 4, we can see that, when no regularization is employed, the charts are less localized. This behavior shows the necessity of using Lipschitz regularization to control the regularity of the decoder. Our second experiment is conducted on a more complex object, the double torus, shown in Figure 1.
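A sketch of the Lipschitz regularizer described at the start of this section is given below, assuming fully connected decoders. It upper-bounds each decoder's Lipschitz constant by the product of the spectral norms of its weight matrices and adds an l_2 term; the trade-off coefficients are illustrative assumptions.

import torch
import torch.nn as nn

def lipschitz_regularizer(decoders, lip_weight=1e-3, l2_weight=1e-4):
    reg = torch.zeros(())
    for dec in decoders:                                  # one decoder per chart
        lip_bound = torch.ones(())
        l2 = torch.zeros(())
        for m in dec.modules():
            if isinstance(m, nn.Linear):
                # Largest singular value = spectral norm = Lipschitz constant of the layer;
                # ReLU layers contribute a factor of 1 and are skipped.
                lip_bound = lip_bound * torch.linalg.matrix_norm(m.weight, ord=2)
                l2 = l2 + (m.weight ** 2).sum()
        reg = reg + lip_weight * lip_bound + l2_weight * l2
    return reg

This term would simply be added to whichever reconstruction loss (chart-prediction or partition-of-unity) is being used, so that charts attempting to cover too large a region of the manifold pay a growing penalty.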
The experiment illustrates some of the difficulties in using traditional auto-encoders to capture topologically complex data. Here, the data manifold has a local dimension of 2, but it is not homeomorphic to a plane. We uniformly sample the latent space of each model and apply the resulting decoder to generate points back in the ambient space. As we can see from the second-from-left plot in Figure 1, a traditional model with a 2-dimensional single-chart latent space cannot capture the overall manifold. Since this object can be embedded in R^3, a model with a 3-dimensional latent space can capture the entire manifold. However, this type of model also likely generates points off the manifold, as we can see from the second-from-right image in Figure 1. Finally, we see that our CAE with four 2-dimensional charts can produce points successfully covering the object without introducing unfaithful points. Next, we test our CAE on a genus-3 surface with ten 2-dimensional charts (detailed as CAE 2 in A.2). The left of Figure 5 shows the results of randomly sampling z_α in the chart latent space U and decoding the latent representations. The right of this figure shows which chart is active in each region of the manifold. Since this model uses a network to predict the chart segmentation, the resulting parameterization has charts of varying sizes. This allows the network to place more charts in areas of high curvature and to let charts grow over more linear regions. Nevertheless, this example demonstrates the effectiveness of our method in handling objects with complex topology. We apply the 10-chart model to the MNIST data set (now using CAE 3 as detailed in A.2). The left panel of Figure 6 reports the reconstruction results on the training data, for a given image shown in the second-to-last row. Each of the first ten rows in the corresponding column shows the decoding from the i-th chart. Note that while each decoder may produce vastly different outputs, the chart selection module chooses which is most likely to be correct. As we can observe from the image, the chart selection module successfully picks out the most faithful decoding result, which we circle and repeat in the last row of the image. This picture shows that the proposed multi-chart auto-encoder does provide faithful reconstructions of the training data. The middle panel of Figure 6 shows decoding results obtained by sampling the charts, where each row shows images generated from the corresponding decoder. Note that each chart produces only a few digits, even though every digit is covered by some chart. Additionally, within any chart the digits which that chart produces are "close" to each other (for example the 3s and 8s in chart 8 and the 5s and 6s in chart 1). This means the multi-chart latent space covers the MNIST data manifold in a balanced and regular way, similar to what we observe in the experiments conducted for geometric objects. The right panel of this figure shows the morphing of a '2' into a '3' obtained by interpolating a linear path through the latent space. Since each of the latent representations decoded along this path produces output similar to examples found in the training set, we can conclude that the approximation of the manifold given by our chart parameterization is close to the underlying distribution from which the training data is sampled. In traditional approaches this is possible because the latent space (modeled as a normal distribution) is simply connected. Our model is able to do so without using distributional assumptions, owing to the transition conditions and Lipschitz regularization.
In this experiment, we apply four traditional models (2 auto-encoders and 2 variational autoencoders) as well as three CAEs to several data sets. Details of the exact architectures of these networks can be found in A.2. For each model and data set, we are primarily interested in three measures of success: reconstruction error, unfaithfulness, and coverage (see A.10 for detailed definitions). The reconstruction error measures the fidelity of the model. The unfaithfulness measures how far synthesized data decoded from samples drawn on the latent space are from samples of the original training data. Coverage indicates how much of the training data is covered by samples decoded from the latent space. Models which produce unrealistic data when sampling from the latent space will have high unfaithfulness scores, and models which experience mode collapse will have low coverage scores. We test these measurements on four data sets, ordered from the simplest to the most complex. (1) Sphere: the data consists of 2000 equally distributed points sampled uniformly from a sphere embedded in R^3. (2) Genus-3: the genus-3 object used in Figure 5, non-trivially embedded in a higher-dimensional ambient space. (3) MNIST: the MNIST hand-written digits database containing 60k training and 10k testing images. (4) SVHN: a real-world image dataset of house numbers from Google Street View images; we focus on the individual-digits problem and preprocess the images to grayscale. Results of the evaluation metrics are summarized in Figure 7 and reported fully in the table in A.10. From these results, the CAE models clearly and consistently perform better than the other models with simple latent spaces. More specifically, when the dimension of the latent space is fixed, the CAE model performs better than the associated VAE and AE in every test. Moreover, because of the Lipschitz regularization in our model, we are able to achieve much better coverage than with the previous methods. We have proposed and investigated the use of chart-based parameterization to model manifold-structured data, by introducing multi-chart latent spaces, along with transition functions, into autoencoders. The parameterization allows us to significantly reduce the dimension of the latent encoding for efficiently representing data with complex structures. Numerically, we design geometric examples to analyze the behavior of the proposed CAE and illustrate its advantage over single-chart autoencoders. We also apply our method to real-life data sets, including MNIST and SVHN, to demonstrate the effectiveness of the proposed model. We believe that the proposed chart-based parameterization of manifold-structured data provides many opportunities for further analysis and applications. In future work, we will extend this architecture to other generative models (e.g., GANs) and apply the machinery to investigate the topology and geometry of real-world data. In this section we detail the architectures of the networks used in the numerical experiments. We denote fully connected layers as FC_y, where y is the number of units in the layer; Conv_{i,j,k,l} as convolution layers with filters of size (i, j), input dimension k, and output dimension l; and dim(U) as the dimension of the latent space. Each model was trained using the chart-prediction loss function, as we have found it to be more stable during training.
Variational Auto-Encoders; CAE 1 (4 2-dim charts, distance-confidence chart predictor); CAE 2 (10 2-dim charts, learned-confidence chart predictor); CAE 3 (10 25-dim charts, convolution layers, learned chart predictor). A key feature of the chart-based parameterization of manifolds in differential geometry is the construction of chart transition functions. As shown in Figure 2, some points on the manifold may be parameterized by multiple charts. Let φ_α and φ_β be two charts with overlapping domains (M_α ∩ M_β ≠ ∅); then the chart transition function φ_αβ can be computed as φ_β ∘ φ_α^{-1}. In our model the φ_α^{-1} are represented by the decoder networks, but directly computing φ_α itself from φ_α^{-1} is not simple, since ReLU nets are non-invertible. It would be possible to add additional modules and train a model to predict the transition function, but this adds up to N^2 new networks to train, many of which may be unnecessary (since we only need chart transition functions for overlapping charts). However, we can exploit the structure of the encoder module and re-encode the signal generated by the first decoder, using the second encoder to define a chart transition. Each chart transition function can then be modeled by the composition of the first chart's decoder with the initial encoder and the second chart's encoder. Note that if x ∈ M_α ∩ M_β, then to estimate the chart transition between U_α and U_β we need both charts to reconstruct x accurately. These conditions are naturally enforced by both of the loss functions (equation 2 and equation 3) discussed in the previous section. Therefore the chart transition functions of this network can be computed without explicitly parameterizing them or adding new terms to the loss function. One could explicitly characterize the chart transitions by re-encoding the decoded signals in a second pass through the network and computing a regularizer which measures the exact error in the transition and reconstruction. However, this type of cyclic condition is computationally expensive to implement, and we have found it unnecessary in our empirical studies. We can further extend the idea of pre-training to also orient the rest of the chart around the center c_α. To do so, we take a small sample of points N(c_α) around the center and use principal component analysis (PCA) to define a d-dimensional embedding of the local neighborhood. Let the coordinates of this neighborhood embedding be given by the PCA projection, scaled by a local constant C_i. Then we can use this coordinate system to initialize the orientation of the local charts by adding an additional regularization term to equation 5. A.5 CONVOLUTION It has been widely argued that the invariance and equivariance properties of convolution layers promote manifold-structured representations. For example, it has been conjectured 'that in a representation obtained as an output of convolutional and pooling layers, the data concentrates near a collection of low-dimensional manifolds embedded in a high-dimensional space.' In other words, applying dimension reduction operations which have localized invariance (such as convolution and pooling) maps data to relatively simple manifolds by contracting the representation space. This suggests that adding convolution and pooling layers to the beginning of the encoder networks will result in representations which are easier for our model to estimate, since the underlying geometry will be simpler. We write D as a general notation for the decoder in the model. Reconstruction error measures the fidelity of the output y. Unfaithfulness error measures how close data decoded from samples drawn on the latent space are to samples from the original training data.
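A small sketch of the re-encoding construction from A.3 is given below: for two charts alpha and beta with overlapping domains, the transition from U_alpha coordinates to U_beta coordinates is obtained by composing the learned decoder D_alpha with the initial encoder E and the chart encoder E_beta, with no dedicated transition network. The networks below are untrained stand-ins with illustrative sizes.

import torch
import torch.nn as nn

chart_dim, init_dim, ambient_dim = 2, 16, 100
D_alpha = nn.Sequential(nn.Linear(chart_dim, 32), nn.ReLU(), nn.Linear(32, ambient_dim))
E = nn.Sequential(nn.Linear(ambient_dim, 32), nn.ReLU(), nn.Linear(32, init_dim))
E_beta = nn.Sequential(nn.Linear(init_dim, 32), nn.ReLU(), nn.Linear(32, chart_dim), nn.Sigmoid())

def transition_alpha_to_beta(z_alpha):
    # phi_{alpha beta}(z) ~ E_beta(E(D_alpha(z))), meaningful on the overlap of the two charts.
    return E_beta(E(D_alpha(z_alpha)))

z_alpha = torch.rand(5, chart_dim)                 # points in U_alpha = [0,1]^2
print(transition_alpha_to_beta(z_alpha).shape)     # torch.Size([5, 2])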
We uniformly select M points in the latent space and define the unfaithfulness error via the distance of each decoded point to the training set. Often, people hope that through sampling the latent space and decoding, one can get new data in the original space that is novel or original. However, if the training set is sufficiently dense on the data manifold, newly generated data that is far from anything observed during training will not be realistic. Since we are concerned with fitting a model to the data, a properly trained model should always stay close to the underlying manifold. To select the latent variables used in this experiment, we uniformly draw M = 100 samples from the latent space used in the single-chart auto-encoder, CAE, and VAE. Coverage measures how well the latent space and decoders capture the entire data set. We uniformly draw M samples {z_i}_{i=1}^{M} from the latent space and let M* be the number of training points that lie close to at least one decoded sample; the coverage score is the fraction of the training set covered in this way. This measurement provides a quantitative way to describe the well-known "mode collapse" problem of GANs, wherein the model captures part of the data well but ignores large sections (or entire classes) of the data. A coverage score close to one indicates that the samples are well distributed on the manifold, while scores close to zero indicate that the model may be experiencing mode collapse. In this experiment, we train a model with four 1-dimensional charts to fit a circle in order to visualize the transition between charts. In Figure 8, the first row shows the output of each chart using latent variables z_i sampled on [0,1]. The top right shows the chart which has the largest p value. In the second row, we visualize the transition zones. Here, the solid colored lines are decoded samples taken from the U_i spaces. The '+'s represent training data, and their color indicates which chart had the maximum p value. The ground-truth manifold is represented by the dashed line. The last row shows the partition-of-unity functions, unwrapped from the circle to a line. From this experiment we can see that the charts have very close values in these transition zones and that the partition-of-unity functions are compactly supported. In the next experiment, we train a model with four 2-dimensional charts on data sampled from a sphere in order to visualize the effect of the regularization scheme when using a neural network as the chart prediction module. Figure 9 shows the patch prediction function on the training data at the end of pre-training and at the completion of training, respectively. From this figure we see that even though all charts cover the sphere at the beginning, the regularization is able to automatically remove some charts after training. Finally, we demonstrate a simple example of recovering geometric information from a trained model by computing the length of some geodesic curves on a sphere. Let p and q be points on the manifold with latent representations z_p and z_q. Then the line γ_z(t) = t z_p + (1 − t) z_q in the latent space will correspond to a path γ ⊂ M. To measure the length of this path, we can sample points along γ_z(t), decode them, and then accumulate the Euclidean distances between consecutive decoded points. Figure 10 shows an example of such a test using different numbers of sampling points for five different curves on the sphere. From this experiment we observe convergence of these measurements as more points are sampled along the geodesic path, validating our geometric intuition. We remark that this is a very preliminary result intended to show the potential of understanding the geometric structure of a data manifold using a multi-chart latent space.
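A short sketch of the geodesic-length estimate just described follows. A simple analytic map onto the unit sphere stands in for a trained chart decoder (an assumption made for self-containedness), so the estimate can be checked against the known arc length.

import numpy as np

def decode(z):
    # Toy chart decoder: latent (theta, phi) -> point on the unit sphere in R^3.
    theta, phi = z
    return np.array([np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi), np.cos(theta)])

def path_length(z_p, z_q, n_samples):
    ts = np.linspace(0.0, 1.0, n_samples)
    pts = np.array([decode(t * z_p + (1 - t) * z_q) for t in ts])
    return np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()

z_p = np.array([np.pi / 2, 0.0])           # two latent representations on the equator
z_q = np.array([np.pi / 2, np.pi / 2])
for n in (5, 20, 100, 1000):
    print(n, path_length(z_p, z_q, n))     # converges to pi/2 ~ 1.5708, the true arc length

The same polyline estimate applied to a learned decoder is what produces the convergence behavior reported in Figure 10.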
We will explore this direction in future work. A.10 DETAILED BENCHMARK COMPARISONS Figure 11 shows an experiment for data sampled on a double torus using VAEs with different choices of parameters. The latent space dimension is chosen as 2, which is compatible with the intrinsic dimension of the object. This experiment shows that increasing the number of parameters in a VAE alone (without increasing the latent dimension) does not allow one to simultaneously produce good reconstruction and generation. A latent space whose dimension is too small will not be able to cover the manifold, and one whose dimension is too large will generate points far from the data manifold. Thus the structure of the latent space is more important than the number of parameters; establishing this is one of the main points of this paper.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rJeBJJBYDB
Manifold-structured latent space for generative models
We tackle the problem of modeling sequential visual phenomena. Given examples of a phenomenon that can be divided into discrete time steps, we aim to take an input from any such time step and realize this input at all other time steps in the sequence. Furthermore, we aim to do this \textit{without} ground-truth aligned sequences, avoiding the difficulties of gathering aligned data. This generalizes the unpaired image-to-image problem from generating pairs to generating sequences. We extend cycle consistency to \textit{loop consistency} and alleviate difficulties associated with learning in the resulting long chains of computation. We show competitive results compared to existing image-to-image techniques when modeling several different data sets, including the Earth's seasons and the aging of human faces. Image-to-image translation has gained tremendous attention in recent years. A pioneering work by shows that it is possible to realize a real image from one domain as a highly realistic and semantically meaningful image in another when paired data between the domains are available. Furthermore, CycleGAN extended the image-to-image translation framework to the unpaired setting by relying on the ability to build a strong prior in each domain based on generative adversarial networks (GANs, ) and enforcing consistency on the cyclic transformation from and back to a domain. Methods similar to CycleGAN were also developed roughly around the same time. Since its birth, CycleGAN has become a widely adopted technique, with applications even beyond computer vision. However, CycleGAN-family models are still somewhat limited since they only handle the translation problem (directly) between two domains. Modeling more than two domains would require separate instantiations of CycleGAN between every pair of domains, resulting in quadratic model complexity. A major recent work, StarGAN, addresses this by facilitating a fully connected domain-translation graph, allowing transformation between two arbitrary domains with a single model. This flexibility, however, appears restricted to domains corresponding to specific attribute changes such as emotions and appearance. Within nature, a multitude of settings exist where neither a set of pairs nor a fully connected graph is the most natural representation of how one might proceed from one domain to another. In particular, many natural processes are sequential, and therefore the translation process should reflect this. A common phenomenon modeled as an image-to-image task is the visual change of natural scenes between two seasons, e.g., Winter and Summer. This neglects the fact that nature first proceeds to Spring after Winter and to Fall after Summer, and therefore the pairing induces a very discontinuous reflection of the underlying process. Instead, we hope that by modeling a higher-resolution discretization of this process, the model can more realistically approach the true underlying process while reducing the necessary model complexity. It is difficult to obtain paired data for many image-to-image problems. Aligned sequential data are even more difficult to come by. Thus, it is more plausible to gather a large number of examples from each step (domain) in a sequence without correspondences between the content of the examples. Therefore, we consider a setting similar to unpaired image-to-image transformation where we only have access to unaligned examples from each time step of the sequence being modeled.
Given an example from an arbitrary point in the sequence, we then generate an aligned sequence over all other time steps, expecting a faithful realization of the image at each step. The key condition required is that after generating an entire loop (returning from the last domain to the input domain), one should expect to return to the original input. This is quite a weak condition and promotes model flexibility. We denote this extension of cycle consistency as loop consistency and therefore name our approach Loop-Consistent Generative Adversarial Networks (LoopGAN). This is a departure from many image-to-image approaches that have very short (usually length 2) paths of computation defining what it means to have gone "there and back", e.g. the ability to enforce reconstruction or consistency. Since we do not have aligned sequences, the lengths of these paths for LoopGAN are as large as the number of domains being modeled, which requires different approaches to make learning feasible. These issues are not entirely different from the problems that often arise in recurrent neural networks, and we can view our model as a memory-less recurrent structure applied to images. We apply our method to the sequential phenomena of human aging and the seasons of the Alps, with extensive comparisons against baseline methods for image-to-image translation. We also present additional results on gradually changing the azimuth angle of chairs and on gradual changes of face attributes to showcase the flexibility of our model. We show favorable results against baseline methods for image-to-image translation in spite of allowing them to have substantially larger model complexity; this is consistent with how CycleGAN is trained, where two cycles are included. Generative Adversarial Networks Generative adversarial networks (GANs, ) implicitly model a distribution through two components: a generator G that transforms a sample from a simple prior noise distribution into a sample from the learned distribution over observable data, and an additional component known as the discriminator D, usually a classifier, which attempts to distinguish the generations of G from samples from the data distribution. This forms a minimax game in which both G and D adapt to one another until some equilibrium is reached. Unpaired Image-to-Image Transformation As an extension of the image-to-image translation framework (pix2pix,), CycleGAN was proposed with a similar architecture but the ability to learn transformations between two domains without paired training data. To achieve this, CycleGAN simultaneously trains two generators, one for each direction between the two domains. Besides the GAN loss enforced by domain-wise discriminators, the authors proposed to add a cycle-consistency loss which forces the two generators to be reversible. Similar to pix2pix, this model aims at learning a transformation between two domains and cannot be directly applied in a multi-domain setting that involves more than two domains. Concurrent to CycleGAN, a method named UNIT was proposed that implicitly achieves alignment between two domains using a VAE-like structure where both domains share a common latent space. Furthermore, StarGAN ( ) proposed an image-to-image translation model for multiple domains. A single network takes inputs defining the source image and the desired domain transformation; however, it has mainly been shown to be successful for domains consisting of facial attributes and expressions.
The problem of learning non-deterministic, multi-modal transformations between two image domains has also seen progress in recent years. The common approach that achieves good performance is to embed the images of both domains into a shared latent space. At test time, an input image in the source domain is first embedded into the shared latent space and then decoded into the target domain conditioned on a random noise vector. These models avoid the one-to-one deterministic mapping problem and are able to learn different transformations given the same input image. However, these models are developed exclusively for two-domain transformation and cannot be directly applied to problems with more than two domains. Style Transfer A specific task in image-to-image transformation called style transfer is broadly defined as the task of transforming a photo into an artistic style while preserving its content. Common approaches use a pre-trained CNN as a feature extractor and optimize the output image to match the low-level features of the style image and the high-level features of the content image. A network architecture innovation made popular by this field, known as AdaIN, combines instance normalization with learned affine parameters. It needs only a small set of parameters, compared to the main network weights, to achieve different style transfers within the same network. It also shows great potential for improving image quality in image generation and image-to-image transformation. Face Aging Generating a series of faces at different ages given a single face image has been widely studied in computer vision. State-of-the-art methods use a combination of a pre-trained age estimator and a GAN to learn to transform the given image to different ages in a way that is both age-accurate and preserves the original facial structure. They rely heavily on a domain-specific age estimator and thus have limited application to the more general sequential image generation tasks that we try to tackle here. Video Prediction Video prediction attempts to predict some number of future frames of a video based on a set of input frames. Full videos with annotated input frames and target frames are often required for training these models. A combination of RNN and CNN models has seen success in this task. Predictive vision techniques (; ;) that use CNNs or RNNs to generate future videos also require aligned video clips in training. A recent work added a GAN as an extra layer of supervision for learning human trajectories. At a high level, video prediction can be seen as a supervised setting of our unsupervised task. Moreover, video prediction mostly aims at predicting the movement of objects rather than the transformation of a still object or scene, which is the focus of our task. We now formulate our method and objectives. Consider a setting of n domains X_1, ..., X_n, where i < j implies that X_i occurs temporally before X_j. To make this independent of the starting domain, we additionally expect to be able to translate from X_n back to X_1, something that holds a priori when the sequence represents a periodic phenomenon. We define a single generator G(x, i) where i ∈ {1, ..., n} and x ∈ X_i. Then, a translation of an input x_i ∈ X_i from domain X_i to domain X_j is given by repeated applications of G in the form G^{j−i}(x_i, i) (allowing the second argument to wrap around from n back to 1 after each application of G).
By applying G to an input n times, we form a direct loop of translations in which the source and target domains are equal. While we use a single generator, we make use of n discriminators, where D_i is tasked with distinguishing translations from any source domain into X_i from real examples of X_i. Since we are given only samples from each domain, we write X_i = {x_j}_{j=1}^{N_i} for the N_i examples from domain X_i with data distribution p_data(x_i). Suppose x_i ∼ p_data(x_i). Then we expect that, for every other domain j, G^{‖j−i‖}(x_i, i) should be indistinguishable under D_j from (true) examples drawn from p_data(x_j). Additionally, each D_j should aim to minimize the ability of G to generate examples that it cannot identify as fake. This forms the adversarial objective for a specific domain, where G* denotes iteratively applying G until x_j is transformed into domain X_i, i.e. ‖j − i‖ times. Taking this over all possible source domains, we obtain the overall adversarial objective, where q(i) is a prior on the set of domains, e.g. uniform. In CycleGAN, the adversarial loss is supplemented with a cycle consistency loss that ensures that applying the generator from domain A to domain B, followed by applying a separate generator from B to A, acts like an identity function. However, LoopGAN has only a single generator and supports an arbitrary number of domains. Instead, we build a loop of computations by applying the generator G to a source image n times (equal to the number of domains being modeled). This constitutes loop consistency and allows us to reduce the set of possible transformations learned to those that adhere to the consistency condition. Loop consistency takes the form of an L_1 reconstruction objective for a domain X_i: L_loop(i) = E_{x_i ∼ p_data(x_i)} [ ‖G^n(x_i, i) − x_i‖_1 ]. The combined loss of LoopGAN over both adversarial and loop-consistency terms can be written as L = L_GAN + λ L_loop, where λ weighs the trade-off between the adversarial and loop consistency losses. An example instantiation of our framework for one loop in a four-domain problem is shown in Figure 1. We adopt the network architecture for style transfer proposed in as our generator. This architecture has three main components: a down-sampling module Enc(x), a sequence of residual blocks T(h, i), and an up-sampling module Dec(h). The generator G is therefore the composition G(x, i) = Dec(T(Enc(x), i)), where the dependence of T on i only relates to the step-specific AdaIN parameters while all other parameters are independent of i. Following the notations from (;), let c7-k denote a 7 × 7 Conv-ReLU layer with k filters and stride 1, dk a 3 × 3 Conv-ReLU layer with k filters and stride 2, Rk a residual block with two 3 × 3 Conv-AdaIN-ReLU layers with k filters each, and uk a 3 × 3 fractionally-strided Conv-LayerNorm-ReLU layer with k filters and stride 1/2. The layer compositions of the modules are: down-sampling: c7-32, d64, d128; residual blocks: R128 × 6; up-sampling: u128, u64, c7-3. We use the PatchGAN discriminator architecture as in : c4-64, c4-128, c4-256, c4-1, where c4-k denotes a 4 × 4 Conv-InstanceNorm-LeakyReLU(0.2) layer with k filters and stride 2. Suppose we wish to translate some x_i ∈ X_i to another domain X_j. A naive approach would formulate this as repeated application of G, |j − i| times. However, referencing our definition of G, we can unroll this to find that we must apply Enc and Dec j − i times throughout the computation, even though Enc and Dec are only responsible for bringing an observation into and out of the space of T.
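A hedged sketch of the loop-consistency term defined above is given below. The generator here is any callable G(x, i) -> x_next; the adversarial terms and the real model are omitted, and the 0-indexed domain convention is an implementation choice.

import torch

def loop_consistency_loss(G, x_i, i, n_domains):
    x = x_i
    for step in range(n_domains):                     # one full loop back to domain i
        x = G(x, (i + step) % n_domains)
    return torch.mean(torch.abs(x - x_i))             # L1 reconstruction after the loop

# Toy usage with a stand-in generator (identity plus a domain-dependent bias).
bias = torch.zeros(4, 3)
G = lambda x, i: x + bias[i]
x = torch.randn(2, 3)
print(loop_consistency_loss(G, x, i=0, n_domains=4))  # tensor(0.) for this toy generator

In training, this term would be weighted by λ and added to the adversarial losses of the n discriminators.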
This is not only a waste of computation when we only require an output at X_j, but it also has serious implications for the ability of gradients to propagate through the computation. Therefore, we implement the translation as a single application of Enc(x_i), followed by j − i applications of T, followed by a single application of Dec(h); that is, T is applied recurrently and the entire generator takes the form G^{j−i}(x_i, i) = Dec(T^{j−i}(Enc(x_i), i)). We show in our ablation studies that this re-formulation is critical to the learning process and the resulting quality of the transformations learned. Additionally, T(h, i) is given a set of separate, learnable normalization (AdaIN) parameters that it selects based on i, with all other parameters of T being shared across time steps. The overall architecture is shown in Figure 2. For all datasets, the loop-consistency loss coefficient λ is set to 10. We use the Adam optimizer ( ) with an initial learning rate of 0.0002, β_1 = 0.5, and β_2 = 0.999. We train on the face aging dataset and the Alps seasons dataset for 50 and 70 epochs, respectively, at the initial learning rate, and then linearly decay the learning rate to 0 over 10 epochs for both datasets. We apply LoopGAN to two very different sequential image generation tasks: face aging and changing seasons of scenery pictures. Baselines are built with two bi-domain models, CycleGAN and UNIT, and also a general-purpose multi-domain model, StarGAN. We are interested in the sequential transformation capabilities of separately trained bi-domain models compared to LoopGAN. Therefore, for each of the two bi-domain models, we train a separate model between every pair of sequential domains, i.e. X_i and X_{i+1}, and additionally train a model between every (not necessarily sequential) pair of domains X_i and X_j (i ≠ j). The first approach allows us to build a baseline for sequential generation by chaining the (separately learned) models in the necessary order. For instance, if we have four domains A, B, C, D, then we can train four separate CycleGAN (or UNIT) models G_{AB}, G_{BC}, G_{CD}, G_{DA} and correctly compose them to replicate the desired sequential transformation. Additionally, we can train direct versions, e.g. G_{AC}, of CycleGAN (or UNIT) for a more complete comparison against LoopGAN. We refer to composed versions of separately trained models as Chained-CycleGAN and Chained-UNIT depending on the base translation method used. Since StarGAN ( ) inherently allows transformation between any two domains, we can apply it in a chained or direct manner without training any additional models. We adopt the UTKFace dataset for modeling the face aging task. It consists of over 20,000 face-only images of different ages. We divide the dataset into four groups in order of increasing age according to the ground-truth age given in the dataset: A containing ages 10-20, B containing ages 25-35, C containing ages 40-50, and D containing ages 50-100. The numbers of images in the groups are 1531, 5000, 2245, and 4957, respectively, and a 95/5 train/test split is made. The results of LoopGAN generation are shown on the left side of Figure 3. LoopGAN shows advantages over the baseline models in two respects. First, the overall facial structure is preserved, which we believe is due to the enforced loop consistency loss. Moreover, LoopGAN is able to make more apparent age changes compared to the baseline models. In order to quantitatively compare the amount of age change between models, we obtain an age distribution of generated images by running a pre-trained age estimator, DEX.
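The following is a minimal PyTorch sketch of this recurrent generator form: encode once, apply the shared transformation T recurrently with a separate set of per-step normalization parameters, and decode once. The fully connected layers and sizes are stand-ins for readability; the actual model uses the convolutional architecture listed in the previous section.

import torch
import torch.nn as nn

class RecurrentGenerator(nn.Module):
    def __init__(self, dim=64, n_domains=4):
        super().__init__()
        self.enc = nn.Linear(3, dim)                  # stand-in for the down-sampling module Enc
        self.core = nn.Linear(dim, dim)               # weights of T shared across all steps
        # Step-specific affine (AdaIN-like) parameters; everything else is shared.
        self.gamma = nn.Parameter(torch.ones(n_domains, dim))
        self.beta = nn.Parameter(torch.zeros(n_domains, dim))
        self.dec = nn.Linear(dim, 3)                  # stand-in for the up-sampling module Dec
        self.n_domains = n_domains

    def T(self, h, i):
        h = torch.relu(self.core(h))
        return self.gamma[i] * h + self.beta[i]       # select per-step normalization parameters

    def forward(self, x, i, n_steps):
        h = self.enc(x)                               # Enc applied once
        for step in range(n_steps):                   # T applied recurrently
            h = self.T(h, (i + step) % self.n_domains)
        return self.dec(h)                            # Dec applied once

G = RecurrentGenerator()
out = G(torch.randn(2, 3), i=1, n_steps=3)            # translate 3 steps forward from domain 1
print(out.shape)                                       # torch.Size([2, 3])

Because Enc and Dec appear only once per translation, gradients from the loop-consistency loss pass through a much shorter chain of encoder/decoder computations than in the naive unrolling.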
The estimated age distributions of the generated images (from input test images) are compared against those of the training images in Figure 4. The age distribution of LoopGAN-generated images is closer to that of the training images across all four age groups when compared to the baseline models, suggesting that it more faithfully learns the sequential age distribution changes of the training data. We use the collected scenery photos of the Alps mountains across four seasons from . They are ordered into a sequence starting from Spring (A), to Summer (B), Fall (C), and Winter (D). Each season has approximately 1700 images, divided into a 95/5% training and test split. We show the results in Figure 3. Overall, LoopGAN is able to make drastic season changes while maintaining the overall structure of the input scenery images. To further quantify the generation results, we conducted a user study with Amazon Mechanical Turk (AMT). To showcase the universality of our model, we apply LoopGAN to two additional datasets in four different sequential transformation tasks: chairs with different azimuth angles, and gradual changes of face attributes in degree of smiling, gender features, and hair color. The chairs dataset comes with ground-truth azimuth angles and is divided into four sets, each containing chairs facing a distinct direction. To obtain linear manifolds for the face attributes, we train a binary classifier with 0/1 labels available for each attribute (Liu et al.) and use the predicted probability to determine the position of an image on an attribute manifold. The results are shown in Figure 5. We experiment with several network architecture variations and investigate their effect on generation quality. First, attention mechanisms have proven to be useful in GAN image generation. We added an attention mechanism in both the space and time dimensions; however, we found that the network struggles to generate high-quality images after adding this type of attention mechanism. We also noted the observation made in prior work that for down-sampling it is better to use no normalization to preserve information from the input image, while for up-sampling it is better to use layer normalization for faster training and higher quality. We applied these changes and found that they indeed help the network produce better results. The results under these variations are shown in Figure 6.a (first three rows). Moreover, we show the importance of the recurrent form of T(h) discussed in Section 4.2. We compare the choice of invoking Enc and Dec at each time step versus applying them once with some number of recurrent applications of T in Figure 6.a (last row), and show the poor quality observed when performing the loop naively. Lastly, we calculate the parameter counts of the generator networks compared in the face aging and season change experiments above and show in Figure 6.b that our final generator network architecture is parameter-efficient compared to the baseline models. For completeness, we also include a selection of failure cases in Figure 7 (best viewed in color). In both cases, the highlighted generated images (the first column in (a) and the last column in (b)) bear some semantic dissimilarity to the input images. It seems that the network sometimes overfits to more drastic transformations that preserve only the overall facial structure and orientation but neglect all other features. We proposed an extension to the family of image-to-image translation methods for the case where the set of domains corresponds to a sequence of domains. We require that the translation task can be modeled as a consistent loop.
This allows us to use a shared generator across all time steps, leading to significant efficiency gains over a naïve chaining of bi-domain image translation architectures. Despite this, our architecture shows favorable results when compared with the classic CycleGAN family of algorithms.
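As a concrete illustration of the recurrent generator and loop-consistency objective described above, a minimal PyTorch-style sketch is given below. The module layout, layer sizes, and the simple per-step affine normalization standing in for AdaIN are illustrative assumptions rather than the exact architecture; it is only meant to show the encode-once / recur / decode-once structure and the L1 loop reconstruction.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecurrentLoopGenerator(nn.Module):
    """Sketch of G: encode once, apply the shared core T k times
    (with step-specific normalization parameters), decode once."""
    def __init__(self, channels=64, n_domains=4):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, channels, 7, padding=3), nn.ReLU())
        self.core = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())
        # one (gain, bias) pair per step, a stand-in for the AdaIN parameters
        self.gain = nn.Parameter(torch.ones(n_domains, channels, 1, 1))
        self.bias = nn.Parameter(torch.zeros(n_domains, channels, 1, 1))
        self.dec = nn.Conv2d(channels, 3, 7, padding=3)
        self.n_domains = n_domains

    def forward(self, x, n_steps):
        h = self.enc(x)
        for s in range(n_steps):
            h = self.core(h) * self.gain[s % self.n_domains] + self.bias[s % self.n_domains]
        return self.dec(h)

def loop_consistency_loss(G, x):
    """L1 reconstruction after translating all the way around the loop."""
    return F.l1_loss(G(x, G.n_domains), x)

# Example: a full loop over 4 domains should approximately reconstruct the input.
G = RecurrentLoopGenerator()
x = torch.randn(2, 3, 64, 64)
loss = loop_consistency_loss(G, x)
```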
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
H1ebc0VYvH
LoopGAN extends cycle length in CycleGAN to enable unaligned sequential transformation for more than two time steps.
We propose Stochastic Weight Averaging in Parallel (SWAP), an algorithm to accelerate DNN training. Our algorithm uses large mini-batches to compute an approximate solution quickly and then refines it by averaging the weights of multiple models computed independently and in parallel. The ing models generalize equally well as those trained with small mini-batches but are produced in a substantially shorter time. We demonstrate the reduction in training time and the good generalization performance of the ing models on the computer vision datasets CIFAR10, CIFAR100, and ImageNet. Stochastic gradient descent (SGD) and its variants are the de-facto methods to train deep neural networks (DNNs). Each iteration of SGD computes an estimate of the objective's gradient by sampling a mini-batch of the available training data and computing the gradient of the loss restricted to the sampled data. A popular strategy to accelerate DNN training is to increase the mini-batch size together with the available computational resources. Larger mini-batches produce more precise gradient estimates; these allow for higher learning rates and achieve larger reductions of the training loss per iteration. In a distributed setting, multiple nodes can compute gradient estimates simultaneously on disjoint subsets of the mini-batch and produce a consensus estimate by averaging all estimates, with one synchronization event per iteration. Training with larger mini-batches requires fewer updates, thus fewer synchronization events, yielding good overall scaling behavior. Even though the training loss can be reduced more efficiently, there is a maximum batch size after which the ing model tends to have worse generalization performance (; ; ; ;). This phenomenon forces practitioners to use batch sizes below those that achieve the maximum throughput and limits the usefulness of large-batch training strategies. Stochastic Weight Averaging (SWA) ) is a method that produces models with good generalization performance by averaging the weights of a set of models sampled from the final stages of a training run. As long as the models all lie in a region where the population loss is mostly convex, the average model can behave well, and in practice, it does. We have observed that if instead of sampling multiple models from a sequence generated by SGD, we generate multiple independent SGD sequences and average models from each, the ing model achieves similar generalization performance. Furthermore, if all the independent sequences use small-batches, but start from a model trained with large-batches, the ing model achieves generalization performance comparable with a model trained solely with small-batches. Using these observations, we derive Stochastic Weight Averaging in Parallel (SWAP): A simple strategy to accelerate DNN training by better utilizing available compute resources. Our algorithm is simple to implement, fast and produces good with minor tuning. For several image classification tasks on popular computer vision datasets (CIFAR10, CIFAR100, and ImageNet), we show that SWAP achieves generalization performance comparable to models trained with small-batches but does so in time similar to that of a training run with large-batches. We use SWAP on some of the most efficient publicly available models to date, and show that it's able to substantially reduce their training times. Furthermore, we are able to beat the state of the art for CIFAR10 and train in 68% of the time of the winning entry of the DAWNBench competition. 
The mechanism by which the training batch size affects the generalization performance is still unknown. A popular explanation is that because of the reduced noise, a model trained using larger mini-batches is more likely to get stuck in a sharper global minimum. In , the authors argue that sharp minima are sensitive to variations in the data because slight shifts in the location of the minimizer will result in large increases in the average loss value. However, if flatness is taken to be the curvature as measured by the second-order approximation of the loss, then counterexamples exist. In , the authors transform a flat minimizer into a sharp one without changing the behavior of the model, and in , the authors show the reverse behavior when weight decay is not used. In , the authors predict that the batch size can be increased up to a critical size without any drop in accuracy and empirically validate this claim. For example, the accuracy begins to drop for image classification on CIFAR10 when the batch sizes exceed 1k samples. They postulate that when the batch size is large, the mini-batch gradient is close to the full gradient, and further increasing the batch size will not significantly improve the signal-to-noise ratio. In , the authors argue that, for a fixed number of epochs, using a larger batch size implies fewer model updates. They argue that changing the number of updates impacts the distance the weights travel away from their initialization, and that this distance determines the generalization performance. They show that by training with large batches for longer (thus increasing the number of updates), the generalization performance of the model is recovered. Even though this large-batch strategy generates models that generalize well, it does so in more time than the small-batch alternative. Irrespective of the generalization performance, the batch size also affects the optimization process. In , the authors show that for convex functions in the over-parameterized setting, there is a critical batch size below which an iteration with a batch size of M is roughly equivalent to M iterations with a batch size of one, and batch sizes larger than M do not improve the rate of convergence. Methods which use adaptive batch sizes exist (; ; ; ;). However, most of these methods are either designed for specific datasets or require extensive hyper-parameter tuning. Furthermore, they use the computational resources ineffectively by reducing the batch size during part of the training. Local SGD is a distributed optimization algorithm that trades off gradient precision against communication costs by allowing workers to independently update their models for a few steps before synchronizing. Post-local SGD is a variant which refines the output of large-batch training with local SGD. The authors have observed that the resulting model has better generalization than the model trained with large batches and that their scheme achieves significant speedups. In this manner, Post-local SGD is in a very similar vein to the present work. However, while Post-local SGD lets the models diverge for T iterations, where T is on the order of tens, SWAP averages the models once after multiple epochs. For example, in our ImageNet experiments (see Sec. 5) we average our models after tens of thousands of updates, while Post-local SGD does so after at most 32.
Because of this difference, we believe that the mechanisms that power the success of SWAP and Post-local SGD must be different and point to different phenomena in DNN optimization. Stochastic weight averaging (SWA) is a method where models are sampled from the later stages of an SGD training run. When the weights of these models are averaged, they result in a model with much better generalization properties. This strategy is very effective and has been adopted in multiple domains: deep reinforcement learning (Nikishin et al.), semi-supervised learning, Bayesian inference, and low-precision training. In this work, we adapt SWA to accelerate DNN training. We describe SWAP as an algorithm in three phases (see Algorithm 1): In the first phase, all workers train a single model by computing large mini-batch updates. Synchronization between workers is required at each iteration, and a higher learning rate is used. In the second phase, each worker independently refines its copy of the model to produce a different set of weights. Workers use a smaller batch size, a lower learning rate, and different randomizations of the data. No synchronization between workers is required in this phase. The last phase consists of averaging the weights of the resulting models and computing new batch-normalization statistics to produce the final output. Phase 1 is terminated before the training loss reaches zero or the training accuracy reaches 100% (for example, a few percentage points below 100%). We believe that stopping early precludes the optimization from getting stuck at a location where the gradients are too small and allows the following stage to improve the generalization performance. However, the optimal stopping accuracy is a hyper-parameter that requires tuning. During phase 2, the batch size is appropriately reduced and small-batch training is performed independently and simultaneously. Here, each worker (or a subset of them) performs training using all the data, but samples it in a different random order. Thus, at the end of the training process, each worker (or subset) will have produced a different model. Figure 1 plots the accuracies and learning-rate schedules for a run of SWAP. During the large-batch phase (phase 1), all workers share a common model and have the same generalization performance. During the small-batch phase (phase 2), the learning rates for all the workers are the same, but their testing accuracies differ, as the stochasticity causes the models to diverge from each other. We also plot the test accuracy of the averaged model that would result were we to stop phase 2 at that point. Note that the averaged model performs consistently better than each individual model. Figure 1: Learning rate schedules and CIFAR10 test accuracies for workers participating in SWAP. The large-batch phase with synchronized models is followed by the small-batch phase with diverging independent models. The test accuracy of the averaged-weight model is computed by averaging the independent models and computing the test loss for the resulting model. To visualize the mechanism behind SWAP, we plot the error achieved by our test network on a plane that contains the outputs of the three different phases of the algorithm. Inspired by and, we pick orthogonal vectors u, v that span the plane which contains θ_1, θ_2, θ_3. We plot the loss value generated by the model θ = θ_1 + αu + βv at the location (α, β).
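Before turning to the landscape plots, the three phases described above can be summarized in a short sketch. The training callbacks (train_fn_large, train_fn_small, recompute_bn_stats) are placeholders for the schedules of Algorithm 1, and the workers are shown sequentially here although in practice they run in parallel; this is a rough illustration, not the exact implementation.

```python
import copy
import torch

def swap_train(model, train_fn_large, train_fn_small, recompute_bn_stats, n_workers=8):
    # Phase 1: large-batch training of a single shared model (workers synchronized).
    train_fn_large(model)                      # stop a few points before 100% train accuracy

    # Phase 2: each worker refines its own copy with small batches and a
    # different shuffling of the data; no synchronization is needed here.
    workers = [copy.deepcopy(model) for _ in range(n_workers)]
    for w in workers:
        train_fn_small(w)

    # Phase 3: average the weights and refresh the batch-norm statistics.
    avg_state = copy.deepcopy(workers[0].state_dict())
    for key in avg_state:
        avg_state[key] = torch.stack(
            [w.state_dict()[key].float() for w in workers]).mean(dim=0)
    model.load_state_dict(avg_state)
    recompute_bn_stats(model)                  # one pass over the training data
    return model
```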
To plot a loss value, we first generate a weight vector θ, compute the batch-norm statistics for that model (through one pass over the training data), and then evaluate the test and train accuracies. In Figure 2, we plot the training and testing error for the CIFAR10 dataset. Here'LB' marks the output of phase one,'SGD' the output of a single worker after phase two, and'SWAP' the final Figure 2a, we observe that the level-sets of the training error (restricted to this plane) form an almost convex basin and that both the output of phase 1 ('LB') 2 and the output of one of the workers of phase 2 ('SGD') lie in the outer edges of the basin. Importantly, during phase 2 the model traversed to a different side of the basin (and not to the center). Also, the final model ('SWAP') is closer to the center of the basin. When we visualize these three points on the test loss landscape (Figure 2b), we observe that the variations in the topology of the basin cause the'LB' and'SGD' points to fall in regions of higher error. But, since the'SWAP' point is closer to the center of the basin, it is less affected by the change in topology. In Figure 3, we neglect the'LB' point and plot the plane spanned by three workers'SGD1','SGD2','SGD3'. In Figure 3a, we can observe that these points lie at different sides of the training error basin while'SWAP' is closer to the center. In Figure 3b, we observe that the change in topology causes the worker points to lie in regions of higher testing errors than'SWAP', which is again close to the center of both basins. For reference, we have also plotted the best model that can be generated by this region of the plane. In , the authors argue that in the later stages of SGD the weight iterates behave similar to an Ornstein Uhlenbeck process. So, by maintaining a constant learning rate the SGD iterates should reach a stationary distribution that is similar to a high-dimensional Gaussian. This distribution is centered at the local minimum, has a covariance that grows proportionally with the learning rate, inversely proportional to the batch size and has a shape that depends on both the Hessian of the mean loss and covariance of the gradient. The authors of argue that by virtue of being a high dimensional Gaussian all the mass of the distribution is concentrated near the'shell' of the ellipsoid, and therefore, it is unlikely for SGD to access the interior. They further argue that by sampling weights from an SGD run (leaving enough time steps between them) will choose weights that are spread out on the surface of this ellipsoid and their average will be closer to the center. Without any further assumptions, we can justify sampling from different SGD runs (as done in phase 2 during SWAP). As long as all runs start in the same basin of attraction, and provided the model from holds, all runs will converge to the same stationary distribution, and each run can generate independent samples from it. To win some intuition on the advantage that SWA and SWAP have over SGD, we measure the cosine similarity between the gradient descent direction, −g i, and the direction towards the output of SWAP, ∆θ = θ swap − θ i. In Figure 4, we see that the cosine similarity, ∆θ,−gi gi ∆θ, decreases as the training enters its later stages. We believe that towards the end of training, the angle between the gradient direction and the directions toward the center of the basin is large, therefore the process moves mostly orthogonally to the basin, and progress slows. 
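This cosine-similarity diagnostic can be computed directly from flattened parameter vectors. The sketch below assumes a scalar loss built with autograd and a separately held copy of the SWAP-averaged model; it is only meant to illustrate the measurement, not our exact logging code.

```python
import torch

def params_to_vector(model):
    return torch.cat([p.detach().reshape(-1) for p in model.parameters()])

def cosine_to_average(model, swap_model, loss):
    """Cosine similarity between -grad at the current iterate and the
    direction pointing from the current iterate to the SWAP average."""
    grads = torch.autograd.grad(loss, list(model.parameters()))
    g = torch.cat([gr.reshape(-1) for gr in grads])
    delta = params_to_vector(swap_model) - params_to_vector(model)
    return torch.dot(delta, -g) / (delta.norm() * g.norm() + 1e-12)
```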
However, averaging samples from different sides of the basin can (and does) make faster progress towards the center. In this section we evaluate the performance of SWAP for image classification tasks on the CIFAR10, CIFAR100, and ImageNet datasets. For the experiments in this subsection, we found the best hyper-parameters using grid searches (see Appendix A for details). We train using mini-batch SGD with Nesterov momentum (set to 0.9) and weight decay of 5×10 −4. We augment the data using cutout and use a fastto-train custom ResNet 9 from a submission 3 to the DAWNBench leaderboard (Coleman et al.). All experiments were run on one machine with 8 NVIDIA Tesla V100 GPUs and use Horovod to distribute the computation. All statistics were collected over 10 different runs. For these experiments, we used the following settings-SWAP phase one: 4096 samples per batch using 8 GPUs (512 samples per GPU). Phase one is terminated when the training accuracy reaches 98% (on average 108 epochs). SWAP phase two: 8 workers with one GPU each and 512 samples per batch for 30 epochs. The experiment that uses only large-batches had 4096 samples per batch across 8 GPUs and is run for 150 epochs. The experiments that use only small-batches had 512 samples per batch on 2 GPUs and is trained for 100 epochs. Table 1 compares the best test accuracies and corresponding training times for models trained with small-batch only, with large-batch only, and with SWAP. We report the average accuracy of the workers before averaging and the accuracy of the final model. Test Table 2 compares the best test accuracies and corresponding training times for models trained with only small-batches (for 150 epochs), with only large-batches (for 150 epochs), and with SWAP. For SWAP, we report test accuracies obtained using the last SGD iterate before averaging, and test accuracy of the final model obtained after averaging. We observe significant improvement in test accuracies after averaging the models. For both CIFAR 10 and CIFAR100, training with small-batches achieves higher testing accuracy than training with large-batches but takes much longer to train. SWAP, however, terminates in time comparable to the large-batch run but achieves accuracies on par (or better) than small batch training. Achieving state of the art training speeds for CIFAR10: At the time of writing the front-runner of the DAWNBench competition takes 37 seconds with 4 Tesla V100 GPUs to train CIFAR10 to 94% test accuracy. Using SWAP with 8 Tesla V100 GPUs, a phase one batch size of 2048 samples and 28 epochs, and a phase two batch size of 256 samples for one epoch is able to reach the same accuracy in 27 seconds. We use SWAP to accelerate a publicly available fast-to-train ImageNet model with published learning rate and batch size schedules 4. The default settings for this code modify the learning-rates and batch sizes throughout the optimization (see Figure 5). Our small-batch experiments train ImageNet for 28 epochs using the published schedules with no modification and are run on 8 Tesla V100 GPUs. Our large-batch experiments modify the schedules by doubling the batch size and doubling the learning rates (see Figure 5) and are run on 16 Tesla V100 GPUs. For SWAP phase 1, we use the large-batch settings for 22 epochs, and for SWAP phase 2, we run two independent workers each with 8 GPUs using the settings for small-batches for 6 epochs. We observe that doubling the batch size reduces the Top1 and Top5 test accuracies with respect to the small-batch run. 
SWAP, however, recovers the generalization performance at substantially reduced training times. Our are compiled in Table 3 (the statistics were collected over 3 runs). We believe it's worthy of mention that these accelerations were achieved with no tuning other than increasing the learning rates proportionally to the increase in batch size and reverting to the original schedule when transitioning between phases. taken from an existing DAWNBench submission. For a larger batch experiment, we double the batch size, double the number of GPUs and double the learning rate of the original schedule. For SWAP, we switch from the modified schedule to the original schedule as we move from phase 1 to phase 2. Top1 Accuracy (%) Top5 Accuracy (%) Training Time (min) SGD (small-batch) 76.14 ± 0.07 93. We now compare SWAP with SWA: the sequential weight averaging algorithm from. For the experiments in this section, we use the CIFAR100 dataset. We sample the same number of models for both SWA and SWAP and maintain the same number of epochs per sample. For SWA, we sample each model with 10 epochs in-between and average them to get the final model. For SWAP, we run 8 independent workers for 10 epochs each and use their average as the final model. We explore if SWA can recover the test accuracy of small-batch training on a large-batch training run. We use the same (large) batch size throughout. We follow an initial training cycle with cyclic learning rates (with cycles of 10 epochs) to sample 8 models (one from the end of each cycle). See Figure 6a for an illustration of the learning rate schedule. As expected we observe that the large-batch training run achieves lower training accuracy, but surprisingly SWA was unable to improve it (see Table 4, row 1). Large-batch followed by small-batch SWA: We evaluate the effect of executing SWA using smallbatches after a large-batch training run. We interrupt the large-batch phase at the same accuracy we interrupt phase 1 of our CIFAR100 experiment (Table 2). In this case, the small-batch phase uses a single worker and samples the models sequentially. SWA is able to reach the test accuracy of a small-batch run but requires more than three times longer than SWAP to compute the model (see Table 4, row 2). An illustration of the learning rate schedule is provided in Figure 6b. Small-batch SWA and SWAP: We start the SWA cyclic learning rate schedule from the best model found by solely small-batch training (table 2, row 1). Since the cycle length and cycle count are fixed, the only free parameter is the peak learning rate. We select this using a grid-search. Once the SWA schedule is specified, we re-use the peak learning rate settings in SWAP. We start phase two from the model that was generated as the output of phase 1 for the experiment on section 5.1 reported on table 2 rows 3 and 4. With these settings, small-batch SWA achieves better accuracy than SWAP (by around ∼ 0.9%) at 6.8x more training time. Next, we wish to explore the speed-up that SWAP achieves over SWA if the precision of SWA is set as a target. To that end, we relax the constraints on SWAP. By increasing the phase two schedule from one 10 epoch cycle to two 20 epoch cycles and sampling two models from each worker (16 models) the ing model achieved a test accuracy of 79.11% in 241 seconds or 3.5x less time. 
Test accuracy before averaging (%) We propose Stochastic Weight Averaging in Parallel (SWAP), an algorithm that uses a variant of Stochastic Weight Averaging (SWA) to improve the generalization performance of a model trained with large mini-batches. Our algorithm uses large mini-batches to compute an approximate solution quickly and then refines it by averaging the weights of multiple models trained using small-batches. The final model obtained after averaging has good generalization performance and is trained in a shorter time. We believe that this variant and this application of SWA are novel. We observed that using large-batches in the initial stages of training does not preclude the models from achieving good generalization performance. That is, by refining the output of a large-batch run, with models sampled sequentially as in SWA or in parallel as in SWAP, the ing model is able to perform as well as the models trained using small-batches only. We confirm this in the image classification datasets CIFAR10, CIFAR100, and ImageNet. Through visualizations, we complement the existing evidence that averaged weights are closer to the center of a training loss basin than the models produced by stochastic gradient descent. It's interesting to note that the basin into which the large mini-batch run is converging to seems to be the same basin where the refined models are found. So, it is possible that regions with bad and good generalization performance are connected through regions of low training loss and, more so, that both belong to an almost convex basin. Our method requires the choice of (at least) one more hyperparameter: the transition point between the large-batch and small-batch. For our experiments, we chose this by using a grid search. A principled method to choose the transition point will be the focus of future work. In future work we intend to explore the behavior of SWAP when used with other optimization schemes, such as Layer-wise Adaptive Rate Scaling (LARS) , mixed-precision training , post-local SGD or NovoGrad . The design of SWAP allows us to substitute any of these for the large-batch stage, for example, we can use local SGD to accelerate the first stage of SWAP by reducing the communication overhead. We provide the parameters used in the experiments of Section 5.1. These were obtained by doing independent grid searches for each experiment. For all CIFAR experiments, the momentum and weight decay constants were kept at 0.9 and 5 × 10 −4 respectively. Tables 5 and 6 list the remaining hyperparameters. When a stopping accuracy of 100% is listed, we mean that the maximum number of epochs were used.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rygFWAEFwS
We propose SWAP, a distributed algorithm for large-batch training of neural networks.
A common way to speed up training of large convolutional networks is to add computational units. Training is then performed using data-parallel synchronous Stochastic Gradient Descent (SGD) with a mini-batch divided between computational units. With an increase in the number of nodes, the batch size grows. However, training with a large batch often results in lower model accuracy. We argue that the current recipe for large batch training (linear learning rate scaling with warm-up) is not general enough and training may diverge. To overcome these optimization difficulties, we propose a new training algorithm based on Layer-wise Adaptive Rate Scaling (LARS). Using LARS, we scaled AlexNet and ResNet-50 to a batch size of 16K. Training of large Convolutional Neural Networks (CNNs) takes a lot of time. The brute-force way to speed up CNN training is to add more computational power (e.g. more GPU nodes) and train the network using data-parallel Stochastic Gradient Descent, where each worker receives some chunk of the global mini-batch (see e.g. BID10 or BID4). The size of a chunk should be large enough to utilize the computational resources of the worker. So scaling up the number of workers results in an increase of the batch size. But using a large batch may negatively impact the model accuracy, as was observed in BID10, BID14, BID8, BID6. Increasing the global batch while keeping the same number of epochs means that you have fewer iterations to update the weights. The straightforward way to compensate for the smaller number of iterations is to do larger steps by increasing the learning rate (LR). For example, BID10 suggests linearly scaling up the LR with batch size. However, using a larger LR makes optimization more difficult, and networks may diverge, especially during the initial phase. To overcome this difficulty, BID4 suggested a "learning rate warm-up": training starts with a small LR, which is slowly increased to the target "base" LR. With a LR warm-up and a linear scaling rule, BID4 successfully trained ResNet-50 BID5 with batch B=8K; see also BID1. Linear scaling of LR with a warm-up is the "state-of-the-art" recipe for large batch training. We tried to apply this linear scaling and warm-up scheme to train AlexNet BID11 on ImageNet BID3, but scaling stopped after B=2K since training diverged for large LRs. For B=4K the accuracy dropped from the baseline 57.6% (B=512) to 53.1%, and for B=8K the accuracy decreased to 44.8%. To enable training with a large LR, we replaced Local Response Normalization layers in AlexNet with Batch Normalization (BN) BID7. We will refer to this model as AlexNet-BN. BN improved model convergence for large LRs as well as accuracy: for B=8K the accuracy gap decreased from 14% to 2.2%. To analyze the training stability with large LRs, we measured the ratio between the norm of the gradient update and the norm of the layer weights. We observed that if this ratio is too high, the training becomes unstable. On the other hand, if the ratio is too small, then the weights don't change fast enough. The layer with the largest ratio ||∇W||/||W|| defines the global limit on the learning rate. Since this ratio varies a lot between different layers, we can speed up training by using a separate LR for each layer. Thus we propose a novel Layer-wise Adaptive Rate Scaling (LARS) algorithm. There are two notable differences between LARS and other adaptive algorithms such as ADAM BID9 or RMSProp BID16: first, LARS uses a separate learning rate for each layer and not for each weight, which leads to better stability.
And second, the magnitude of the update is defined with respect to the weight norm for better control of training speed. With LARS we trained AlexNet-BN and ResNet-50 with B=16K without accuracy loss. The training of CNNs is done using Stochastic Gradient (SG) based methods. At each step t, a mini-batch of B samples x_i is selected from the training set. The gradients of the loss function ∇L(x_i, w) are computed for this subset, and the network weights w are updated based on this stochastic gradient: w_{t+1} = w_t − λ * (1/B) * Σ_{i=1..B} ∇L(x_i, w_t). The computation of SG can be done in parallel by N units, where each unit processes a chunk of the mini-batch with B/N samples. Increasing the mini-batch permits scaling to more nodes without reducing the workload on each unit. However, it was observed that training with a large batch is difficult. To maintain the network accuracy, it is necessary to carefully adjust training hyper-parameters (learning rate, momentum, etc). BID10 suggested the following rules for training with large batches: when you increase the batch B by k times, you should also increase the LR by k times while keeping other hyper-parameters (momentum, weight decay, etc) unchanged. The logic behind linear LR scaling is straightforward: if you increase B by k times while keeping the number of epochs unchanged, you will do k times fewer steps. So it seems natural to increase the step size by k times. For example, let's take k = 2. The weight update for batch size B after 2 iterations would be: w_{t+2} = w_t − λ * (1/B) * ( Σ_{x_i ∈ B_1} ∇L(x_i, w_t) + Σ_{x_j ∈ B_2} ∇L(x_j, w_{t+1}) ). The weight update for the batch B_2 = 2 * B with learning rate λ_2, w_{t+1} = w_t − λ_2 * (1/(2B)) * Σ_{x_i ∈ B_1 ∪ B_2} ∇L(x_i, w_t), will be similar if you take λ_2 = 2 * λ, assuming that ∇L(x_j, w_{t+1}) ≈ ∇L(x_j, w_t). Using the "linear LR scaling", BID10 trained AlexNet with batch B=1K with minor (≈ 1%) accuracy loss. The scaling of AlexNet above 2K is difficult, since the training diverges for larger LRs. It was observed that linear scaling works much better for networks with Batch Normalization (e.g. BID2). For example, BID0 trained the Inception model with batch B=6400, and trained ResNet-152 with B=5K. The main obstacle to scaling up the batch is the instability of training with high LR. BID6 tried to use a less aggressive "square root scaling" of the LR with a special form of Batch Normalization ("Ghost Batch Normalization") to train AlexNet with B=8K, but still the accuracy (53.93%) was much worse than the baseline of 58%. To overcome the instability during the initial phase, BID4 proposed to use LR warm-up: training starts with a small LR, and then the LR is gradually increased to the target. After the warm-up period (usually a few epochs), you switch to the regular LR policy ("multi-steps", polynomial decay, etc). Using LR warm-up and linear scaling, BID4 trained ResNet-50 with batch B=8K without loss in accuracy. These recipes constitute the current state-of-the-art for large batch training, and we used them as the starting point of our experiments. Another problem related to large batch training is the so-called "generalization gap", observed by BID8. They came to the conclusion that "the lack of generalization ability is due to the fact that large-batch methods tend to converge to sharp minimizers of the training function." They tried a few methods to improve the generalization with data augmentation and warm-starting with a small batch, but they did not find a working solution. We used BVLC AlexNet with batch B=512 as the baseline. The model was trained using SGD with momentum 0.9, with initial LR=0.02 and a polynomial (power=2) decay LR policy, for 100 epochs. The baseline accuracy is 58.8% (averaged over the last 5 epochs).
Next we tried to train AlexNet with B=4K by using a larger LR. In our experiments we changed the base LR from 0.01 to 0.08, but training diverged with LR > 0.06 even with warm-up. The best accuracy for B=4K is 53.1%, achieved for LR=0.05. For B=8K we couldn't scale up the LR either, and the best accuracy is 44.8%, achieved for LR=0.03 (see TAB0 (a)). To stabilize the initial training phase we replaced Local Response Normalization layers with Batch Normalization (BN). We will refer to this model as AlexNet-BN. The AlexNet-BN model was trained using SGD with momentum=0.9 and weight decay=0.0005 for 128 epochs. We used a polynomial (power 2) decay LR policy with base LR=0.02. The baseline accuracy for B=512 is 60.2%. With BN we could use large LRs even without warm-up. For B=4K the best accuracy of 58.9% was achieved for LR=0.18, and for B=8K the best accuracy of 58% was achieved for LR=0.3. We also observed that BN significantly widens the range of LRs with good accuracy. Still, there is a 2.2% accuracy loss for B=8K. To check if it is related to the "generalization gap" (Keskar et al.), we looked at the loss gap between training and testing (see FIG0). We did not find a significant difference in the loss gap between B=512 and B=8K. We conclude that in this case the accuracy loss was mostly caused by the slow training and was not related to a generalization gap. The standard SGD uses the same LR λ for all layers: w_{t+1} = w_t − λ∇L(w_t). When λ is large, the update ||λ * ∇L(w_t)|| can become larger than ||w||, and this can cause divergence. This makes the initial phase of training highly sensitive to the weight initialization and to the initial LR. We found that the ratio of the L2-norm of weights and gradients, ||w||/||∇L(w_t)||, varies significantly between weights and biases, and between different layers. For example, let's take AlexNet after one iteration (see TAB1; "*.w" means layer weights and "*.b" biases). The ratio ||w||/||∇L(w)|| for the 1st convolutional layer ("conv1.w") is 5.76, and for the last fully connected layer ("fc6.w") it is 1345. The ratio is high during the initial phase, and it decreases rapidly after a few epochs (see Figure 2). If the LR is large compared to this ratio for some layer, then training may become unstable. The LR "warm-up" attempts to overcome this difficulty by starting from a small LR, which can be safely used for all layers, and then slowly increasing it until the weights grow enough to allow larger LRs. We would like to use a different approach. We want to make sure that the weight update is small compared to the norm of the weights to stabilize training: ||λ * ∇L(w_t^l)|| < η * ||w_t^l||, where η < 1 controls the magnitude of the update with respect to the weights. The coefficient η defines how much we "trust" that the value of the stochastic gradient ∇L(w_t^l) is close to the true gradient. The "trust" η depends on the batch size and increases monotonically with it: for example, for AlexNet the optimal η = 0.0002 for batch B=1K, η = 0.005 for B=4K, and η = 0.008 for B=8K. We implemented this idea by defining a local LR λ^l for each layer l: Δw_t^l = γ * λ^l * ∇L(w_t^l), where γ defines a global LR policy (e.g. steps, or exponential decay), and the local LR λ^l is defined for each layer through the "trust" coefficient η < 1: λ^l = η * ||w^l|| / ||∇L(w^l)||. Note that now the magnitude of the update for each layer doesn't depend on the magnitude of the gradient anymore, so it helps to partially eliminate vanishing and exploding gradient problems.
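As an illustration of both the per-layer diagnostic and the resulting update rule, the sketch below logs ||w||/||∇L(w)|| for each parameter tensor and applies one simplified LARS step with a local LR λ^l = η * ||w^l|| / ||∇L(w^l)||. Momentum, weight decay, and the exact bookkeeping of Algorithm 1 are omitted, and the helper names are illustrative assumptions.

```python
import torch

def weight_to_grad_ratios(model):
    """||w|| / ||grad w|| per parameter tensor, computed after loss.backward().
    Layers with a small ratio limit the largest safe global learning rate."""
    return {name: (p.detach().norm() / (p.grad.detach().norm() + 1e-12)).item()
            for name, p in model.named_parameters() if p.grad is not None}

@torch.no_grad()
def lars_step(model, global_lr, trust_coef=0.001, eps=1e-12):
    """One simplified LARS update: local LR = trust_coef * ||w|| / ||grad||,
    scaled by the global LR policy (here `global_lr`)."""
    for p in model.parameters():
        if p.grad is None:
            continue
        local_lr = trust_coef * p.norm() / (p.grad.norm() + eps)
        p.add_(p.grad, alpha=-(global_lr * local_lr).item())
```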
SGD training with LARS is summarized in Algorithm 1. LARS was designed to solve the optimization difficulties, and it does not replace standard regularization methods (weight decay, batch norm, or data augmentation). But we found that with LARS we can use larger weight decay, since LARS automatically controls the norm of the layer weights: DISPLAYFORM3 where B is the mini-batch size, N the number of training epochs, and S the number of samples in the training set. Here we assumed that the global rate policy starts from 1 and decreases during training over the training interval [0, N * S/B]. Following BID4 and BID1, we used the second setup with an extended augmentation with variable image scale and aspect ratio similar to . The baseline top-1 accuracy for this setup is 75.4%. The accuracy with B=16K is 0.7-1.4% less than the baseline. This gap is related to the smaller number of steps. We will show in the next section that one can recover the accuracy by training for more epochs. When the batch becomes large (32K), even models trained with LARS and a large LR don't reach the baseline accuracy. One way to recover the lost accuracy is to train longer (see BID6). Note that when the batch becomes large, the number of iterations decreases, so one way to try to improve the accuracy is to train longer. For example, for AlexNet and AlexNet-BN with B=16K, when we double the number of iterations from 7800 (100 epochs) to 15600 (200 epochs), the accuracy improves by 2-3% (see TAB4). We observed the same effect for ResNet-50: training for an additional 100 epochs recovered the top-1 accuracy to the baseline of 75.5%. In general, we found that we have to increase the training duration to keep the accuracy. Consider for example GoogLeNet. As a baseline we trained BVLC GoogLeNet with batch=256 for 100 epochs. The top-1 accuracy of this model is 69.2%. GoogLeNet is deep, so in the original paper the authors used auxiliary losses to accelerate SGD. We used LARS to solve the optimization difficulties, so we don't need these auxiliary losses. The original model also has no Batch Normalization, so we used data augmentation for better regularization. The baseline accuracy for B=256 is 70.3% with extended augmentation and LARS. We found that GoogLeNet is very difficult to train with a large batch even with LARS: we needed both a large number of epochs and a longer ramp-up to scale the learning rate up (see TAB5). Large batch is a key for scaling up training of convolutional networks. The existing approach for large-batch training, based on using large learning rates, leads to divergence, especially during the initial phase, even with learning rate warm-up. To solve these difficulties we proposed a new optimization algorithm, LARS, which adapts the learning rate for each layer proportionally to the ratio between the norm of the weights and the norm of the gradients. With LARS, the magnitude of the update for each layer doesn't depend on the magnitude of the gradient anymore, so it helps with vanishing and exploding gradients. But even with LARS and warm-up we couldn't increase the LR further for very large batches, and to keep the accuracy we have to increase the number of epochs and use extensive data augmentation to prevent over-fitting.
[ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rJ4uaX2aW
A new large batch training algorithm based on Layer-wise Adaptive Rate Scaling (LARS); using LARS, we scaled AlexNet and ResNet-50 to a batch of 16K.
Finding an embedding space for a linear approximation of a nonlinear dynamical system enables efficient system identification and control synthesis. The Koopman operator theory lays the foundation for identifying the nonlinear-to-linear coordinate transformations with data-driven methods. Recently, researchers have proposed to use deep neural networks as a more expressive class of basis functions for calculating the Koopman operators. These approaches, however, assume a fixed dimensional state space; they are therefore not applicable to scenarios with a variable number of objects. In this paper, we propose to learn compositional Koopman operators, using graph neural networks to encode the state into object-centric embeddings and using a block-wise linear transition matrix to regularize the shared structure across objects. The learned dynamics can quickly adapt to new environments of unknown physical parameters and produce control signals to achieve a specified goal. Our experiments on manipulating ropes and controlling soft robots show that the proposed method has better efficiency and generalization ability than existing baselines.
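To give a rough sense of the nonlinear-to-linear coordinate transformation described above, the sketch below learns an embedding in which the dynamics are approximated by a single linear operator and rolls it forward for prediction. The MLP encoder/decoder, dimensions, and training loss are illustrative assumptions; the compositional model in the paper instead uses graph neural networks over objects and a block-wise structured transition matrix.

```python
import torch
import torch.nn as nn

class KoopmanDynamics(nn.Module):
    """Single-object simplification: encode the state, evolve it linearly
    in the embedding space, and decode back to the original coordinates."""
    def __init__(self, state_dim=8, embed_dim=32):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, embed_dim))
        self.decode = nn.Sequential(nn.Linear(embed_dim, 64), nn.ReLU(), nn.Linear(64, state_dim))
        self.K = nn.Linear(embed_dim, embed_dim, bias=False)   # linear transition operator

    def rollout(self, x0, n_steps):
        z = self.encode(x0)
        preds = []
        for _ in range(n_steps):
            z = self.K(z)               # linear dynamics in the embedding space
            preds.append(self.decode(z))
        return torch.stack(preds, dim=1)

# Training would regress rollout(x0, T) onto observed future states with an MSE loss.
model = KoopmanDynamics()
future = model.rollout(torch.randn(16, 8), n_steps=10)
```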
[ 1, 0, 0, 0, 0, 0, 0 ]
H1ldzA4tPr
Learning compositional Koopman operators for efficient system identification and model-based control.
We derive reverse-mode (or adjoint) automatic differentiation for solutions of stochastic differential equations (SDEs), allowing time-efficient and constant-memory computation of pathwise gradients, a continuous-time analogue of the reparameterization trick. Specifically, we construct a backward SDE whose solution is the gradient and provide conditions under which numerical solutions converge. We also combine our stochastic adjoint approach with a stochastic variational inference scheme for continuous-time SDE models, allowing us to learn distributions over functions using stochastic gradient descent. Our latent SDE model achieves competitive performance compared to existing approaches on time series modeling. Deterministic dynamical systems can often be modeled by ordinary differential equations (ODEs). For training, a memory-efficient implementation of the adjoint sensitivity method effectively computes gradients through ODE solutions with constant memory cost. Stochastic differential equations (SDEs) are a generalization of ODEs which incorporate instantaneous noise into their dynamics (; Øksendal, 2003). They are a natural fit for modeling phenomena governed by small and unobserved interactions. In this paper, we generalize the adjoint method to dynamics defined by SDEs ing in an approach which we call the stochastic adjoint sensitivity method. Building on theoretical advances by , we derive a memory-efficient adjoint method whereby we simultaneously reconstruct the original trajectory and evaluate the gradients by solving a backward SDE (in the sense of) whose formulation we detail in Section 3. Computationally, in order to retrace the original trajectory during the backward pass, we need to reuse noise samples generated in the forward pass. In Section 4, we give an algorithm that allows arbitrarily-precise querying of a Brownian motion realization at any time point, while only storing a single random seed. Overall, this in a constant-memory algorithm that approximates the gradient arbitrarily well as step size reduces by computing vector-Jacobian products a constant number of times per-iteration. See Table 2 for a comparison of our method against previous approaches in terms of asymptotic time and memory complexity. We incorporate SDEs into a stochastic variational inference framework, whereby we efficiently compute likelihood ratios and backpropagate through the evidence lower bound using our adjoint approach. This effectively generalizes existing model families such as latent ODEs and deep Kalman filters . We review works on learning ODEs and SDEs. We refer the reader to Appendix B on for stochastic flows and (backward) Stratonovich integrals. The adjoint sensitivity method is an efficient approach to solve optimization problems by considering the dual form . recently applied this idea to obtain gradients with respect to parameters of a neural network defining an ODE. The method is scalable due to its memory-efficiency, as intermediate computations need not be cached as in regular backpropagation . Recent works have considered SDEs whose drift and diffusion functions are defined by neural networks (a,b; ;). Consider a filtered probability space (Ω, F, {F t} t∈T, P ) on which an m-dimensional adapted Wiener process {W t} t∈T is defined. An Itô SDE defines a stochastic process {Z t} t∈T by where z 0 ∈ R d is a deterministic starting value, and b: are the drift and diffusion functions, respectively. 
Here, the second integral on the right hand side of is the Itô stochastic integral (Øksendal, 2003). When the coefficients are globally Lipschitz in both the state and time components, there exists a unique strong solution to the SDE (Øksendal, 2003). Therefore, one can consider coefficients defined by neural networks that have smooth activation functions (e.g. tanh) of the form b(z, t, θ) and σ (z, t, θ). This in a model known as the neural SDE. We derive a backward Stratonovich SDE for what we call the stochastic adjoint process. A direct implication of this is a gradient computation algorithm that works by solving a set of dynamics in reverse time and relies on vector-Jacobian products without storing intermediate computation. Recall from Appendix B.3, Φ s,t (z):= Z s,z t is the solution at time t when the process is started at z at time s, and its inverse is defined asΨ s,t (z):= Φ −1 s,t (z). Consider A s,t (z) = ∇(L(Φ s,t (z))), where L is a scalar loss function. The chain rule gives A s,t (z) = ∇L(Φ s,t (z))∇Φ s,t (z). Letà s,t (z):= A s,t (Ψ s,t (z)) = ∇L(z)∇Φ s,t (Ψ s,t (z)) = ∇L(z)K s,t (z). Note that A s,t (z) =à s,t (Φ s,t (z)). Since ∇L(z) is constant, we see that (à s,t (z),Ψ s,t (z)) satisfies the following backward SDE system by Lemma C.1 (cf. Appendix C) Since can be viewed as a single SDE (with smooth coefficients) for an augmented state,à s,T (z) also has a unique strong solution. Therefore, for t = 0, we may writẽ where W · = {W t} 0≤t≤T denotes the path of the Brownian motion and F: is a deterministic measurable function (the Itô map) (, Definition 10.9). The next theorem follows immediately from and the definition of F. Theorem 3.1: The theorem is a consequence of T ) and. This implies we may solve the dynamics starting from the end state of the forward solve Z 0,z T to obtain the gradient of the loss with respect to the starting value z. To obtain the gradient with respect to the parameters, we augment the original state with parameters. Algorithm 1 summarizes this assuming access to a black-box solver SDESolve. See details in Appendix C. Input: parameters θ, start time t 0, stop time t 1, final state z t 1, loss gradient ∂L/z t 1. Input: drift b(z, t, θ), diffusion σ(z, t, θ), Wiener process sample w(t). We present a data structure that allows arbitrarilyprecise query of the sample path of the Wiener process given a global random seed based on the Brownian tree construction. The data structure facilitates the adjoint method such that we can ensure the noise sample in the backward solve is the same as in the forward solve using a split pseudorandom random number generator (PRNG). We present the procedure in Algorithm D.2 and details in Appendix D. Consider the SDEs where are Lipschitz in both arguments. Suppose and define the prior and posterior processes, respectively. Additionally, assume there is a function u: for all x ∈ R d and t ∈ R. Then, the variational free energy can be written as where the expectation is taken over the distribution of the posterior process defined by, and y 1,..., y N are observations at times t 1,..., t N, respectively. To compute the gradient with respect to parameters, we need only augment the forward equation with an extra variable whose drift function returns 1 2 |u(Z t, t)| 2 and diffusion function is 0. In this case, the backward adjoint dynamics can be derived analogously using. Appendix E includes details. 
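A minimal sketch of the key computational primitive in Algorithm 1 is given below: the adjoint dynamics only require vector-Jacobian products a^T ∂b/∂z and a^T ∂b/∂θ, which reverse-mode autodiff provides without ever forming a Jacobian. The toy drift, parameter shapes, and function names are illustrative assumptions; sign conventions and the diffusion terms of the full backward SDE are omitted here.

```python
import torch

def vjp_wrt_state_and_params(f, z, t, params, a):
    """Compute a^T (df/dz) and a^T (df/dtheta) with one backward pass.
    These vector-Jacobian products are the only derivative quantities the
    stochastic adjoint needs per step; full Jacobians are never materialized."""
    with torch.enable_grad():
        z = z.detach().requires_grad_(True)
        out = f(z, t, params)
        vjp_z, vjp_p = torch.autograd.grad(out, (z, params), grad_outputs=a,
                                           allow_unused=True)
    return vjp_z, vjp_p

# Example usage with a toy drift b(z, t, theta) = tanh(theta * z):
params = torch.randn(3, requires_grad=True)
b = lambda z, t, p: torch.tanh(p * z)
vjp_z, vjp_p = vjp_wrt_state_and_params(b, torch.randn(3), 0.0, params, torch.ones(3))
```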
We verify our theory by comparing the gradients obtained by our stochastic adjoint framework against analytically derived gradients for chosen test problems with closed-form solutions. We then fit latent SDE models with our framework on two synthetic datasets and a real dataset, verifying that the variational inference framework promotes learning a generative model of time series. Due to space constraint, we refer the reader to Appendix F for on numerical studies and Appendix N for on synthetic data. We present only the on the motion capture dataset here. We experiment on a dataset extracted from the CMU motion capture library. We use the dataset adopted by which consists of 23 walking sequences of subject number 35 that is partitioned into 16 training, 3 validation, and 4 test sequences. We include the settings in Appendix O and report the test MSE here following Yıldız et al.. Appendix B. Additional Background t. For the diffusion functions σ 1,..., σ m, we will also write σ: R d → R d×m as the matrix-valued function obtained by stacking the component functions σ i in a columnwise fashion, and index its jth row and ith column by σ j,i. Among recent work on neural SDEs, none has enabled an efficient training framework. In particular, Tzen and Raginsky (2019a); considered computing the gradient by simulating the forward dynamics of an explicit Jacobian matrix the size either the squared number of parameters or the number of parameters times the number of states, building on the pathwise approach . By contrast, the approach we present only requires evaluating vector-Jacobian products a constant number of times with respect to the number of parameters and states, which has the same asymptotic time cost as evaluating the drift and diffusion functions, and can be done automatically by modern machine learning libraries (; ; ;). Our stochastic adjoint sensitivity method involves stochastic processes running forward and backward in time. The Stratonovich stochastic integral, due to its symmetry, gives nice expressions for the backward dynamics and so is more convenient for our purpose. Our can be applied straightforwardly to Itô SDEs as well using a conversion (see e.g. (, Sec. 2 Following the treatment of Kunita , we introduce the forward and backward Stratonovich integrals. Let {F s,t} s≤t;s,t∈T be a two-sided filtration, where F s,t is the σ-algebra generated by For a continuous semimartingale {Y t} t∈T adapted to the forward filtration {F 0,t} t∈T, the Stratonovich stochastic integral is defined as where denotes the size of largest segment of the partition, and the limit is to be interpreted in the L 2 sense. The Itô integral uses instead the left endpoint Y t k rather than the average. In general, the Itô and Stratonovich integrals differ by a term of finite variation. To define the backward Stratonovich integral, we consider the backward Wiener process {W t} t∈T defined asW t = W t − W T for all t ∈ T that is adapted to the backward filtration {F t,T} t∈T. For a continuous semimartingaleY t adapted to the backward filtration, we define where Π = {0 = t N < · · · < t 0 = T} is a partition, and the limit is again in the L 2 sense. It is well known that an ODE defines a flow of diffeomorphisms . Here we consider the stochastic analogue for the Stratonovich SDE Throughout the paper, we assume b, σ are of class C ∞,1 b, so that the SDE has a unique strong solution. Let Φ s,t (z):= Z s,z t be the solution at time t when the process is started at z at time s. 
Given a realization of the Wiener process, this defines a collection S = {Φ s,t} s≤t;s,t∈T of continuous maps from R d to itself. The following theorem shows that these maps are diffeomorphisms and that they satisfy backward SDEs. Theorem B.1 (Thm. 3.7.1 ): (i) With probability 1, the collection S = {Φ s,t} s≤t;s,t∈T satisfies the flow property Moreover, each Φ s,t is a smooth diffeomorphism from R d to itself. We thus call S the stochastic flow of diffeomorphisms generated by the SDE. (ii) The backward flowΨ s,t:= Φ −1 s,t satisfies the backward SDE: for all z ∈ R d and s, t ∈ T such that s ≤ t a.s. Note that the coefficients in and differ by only a negative sign. This symmetry is due to our use of the Stratonovich integral (see Figure 2). Figure 2: Negating the drift and diffusion functions for an Itô SDE and simulating backwards from the end state gives the wrong solution. Negating the drift and diffusion functions for the converted Stratonovich SDE, however, gives the correct path when simulated in reverse time. We present our main contribution, i.e. the stochastic analog of the adjoint sensitivity method for SDEs. We use to derive another backward Stratonovich SDE for what we call the stochastic adjoint process. The direct implication of this is a gradient computation algorithm that works by solving a set of dynamics in reverse time and relies on vector-Jacobian products without storing intermediate computation produced in the forward pass. The goal is to derive the stochastic adjoint process {∂L/∂Z t} t∈T that can be simulated by evaluating only vector-Jacobian products, where L = L(Z T) is a scalar loss of the terminal state Z T. The main theoretical is Theorem 3.1. We first derive a backward SDE for the process {∂Z T /∂Z t} t∈T, assuming that Z t =Ψ t,T (Z T) for a deterministic Z T ∈ R d that does not depend on the realized Wiener process. We then extend to the case where In the latter case, the ing value cannot be interpreted as the solution to a backward SDE anymore due to loss of adaptiveness; instead we will formulate the using the Itô map . Finally, we extend the state of Z to include parameters and obtain the gradient with respect to them. We first derive the SDE for the Jacobian matrix of the backward flow. Consider the stochastic flow generated by the backward SDE as in Theorem B.1(ii). Let J s (z):= ∇Ψ s,T (z), then it satisfies the backward SDE for all s ≤ t and x ∈ R d a.s. Furthermore, let K s,t (z) = J s,t (z) −1, we have for all s ≤ t and x ∈ R d a.s. The proof included in Appendix I relies on Itô's lemma in the Stratonovich form (, Theorem 2.4.1). This lemma considers only the case where the endpoint z is fixed and deterministic. Now we compose the state process (represented by the flow) and the loss function L. WritingX s = (à s,T (z),Ψ s,T (z) ) as the augmented process, the system is a backward Stratonovich SDE of the form. As a has a unique strong solution. Without loss of generality, assume t = 0. Since admits a strong solution, we may writeà where W · = {W t} 0≤t≤T denotes the path of the Brownian motion and is a deterministic measurable function (the Itô map) (, Definition 10.9). Intuitively, F can be thought as an algorithm that computes the solution to the backward SDE given the position z at time T and the realized Brownian path. Similarly, we let G be the solution map for the forward flow. Immediately, we arrive at Theorem 3.1. 
In practice, we compute solutions to SDEs with numerical solvers F h and G h, where h = T /N denotes the mesh size of a fixed grid 1. The approximate algorithm thus outputs The following theorem provides sufficient conditions for convergence. Suppose the schemes F h and G h satisfy the following conditions: converge to 0 in probability as h → 0, and (ii) for any M > 0, we have sup |z|≤M |F h (z, W ·) − F(z, W ·)| → 0 in probability as h → 0. Then, for any starting point z of the forward flow, we have in probability as h → 0. For details and the proof see Appendix J. Usual schemes such as the Euler-Maruyama and Milstein method satisfy condition (i). Indeed, they converge pathwise (i.e. almost surely) with explicit rates for any fixed starting point . While condition (ii) is rather strong, we note that the SDEs considered here have smooth coefficients and thus the solutions enjoy nice regularity properties in the starting position. Therefore, it is reasonable to expect that the corresponding numerical schemes to also behave nicely as a function of both the mesh size and the starting position. To the best of our knowledge this property is not considered at all in the literature on numerical methods for SDEs (where the initial position is fixed), but is crucial in the proof of Theorem C.2. Detailed analysis for specific schemes is beyond the scope of this paper and is left for future research. So far we have derived the gradient of the loss with respect to the initial state. We can extend these to give gradients with respect to parameters of the drift and diffusion functions by treating them as an additional part of the state whose dynamics has zero drift and diffusion. We summarize this in Algorithm 1 2, assuming access to a numerical solver SDESolve. Note for the Euler-Maruyama scheme, the most costly terms to compute a t ∂b/∂θ and a t ∂σ i /∂θ can be evaluated by calling vjp(a t, b, θ) and vjp(a t, σ i, θ), respectively. In principle, we can simulate the forward and backward adjoint dynamics with any high-order solver of choice. However, in practice, to obtain a strong numerical solution 3 with order beyond 1/2, we need to simulate multiple integrals of the Wiener process such as These random variables are difficult to simulate exactly and costly to approximate using truncated infinite series . Note that even though the backward SDE for the stochastic adjoint does not have diagonal noise, it satisfies a commutativity property (Rößler, 2004) when the SDE of the original 1. We may also use adaptive solvers . 2. We use row vector notation here. 3. A numerical scheme is of strong order p if E [|XT − XNη|] ≤ Cη p for all T > 0, where Xt and XNη are respectively the coupled true solution and numerical solution, N and η are respectively the iteration index and step size such that N η = T, and C is independent of η. dynamics has diagonal noise. In this case, we can safely adopt certain numerical schemes of strong order 1.0 (e.g. Milstein and stochastic Runge-Kutta (Rößler, 2010)) without approximating multiple integrals or the Lévy area during simulation. We verify this formally in Appendix K. We have implemented several SDE solvers in PyTorch which include Euler-Maruyama, Milstein, and stochastic Runge-Kutta schemes with adaptive time-stepping using a PI controller . 
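To make the integration step concrete, the following is a minimal sketch of a fixed-step Euler-Maruyama integrator for a diagonal-noise SDE in PyTorch. It is illustrative only, not the solver implementation referred to above, and the names euler_maruyama, drift, and diffusion are placeholders.

    import torch

    def euler_maruyama(drift, diffusion, z0, ts):
        """Fixed-step Euler-Maruyama for a diagonal-noise SDE dZ = b(Z, t) dt + sigma(Z, t) dW.

        drift, diffusion: callables mapping (z, t) to tensors of shape (batch, d);
        z0: initial state of shape (batch, d); ts: 1-D tensor of increasing times.
        Returns a tensor of shape (len(ts), batch, d) with the state at every time."""
        z, zs = z0, [z0]
        for t0, t1 in zip(ts[:-1], ts[1:]):
            dt = t1 - t0
            dW = torch.sqrt(dt) * torch.randn_like(z)   # Brownian increment, one per dimension
            z = z + drift(z, t0) * dt + diffusion(z, t0) * dW
            zs.append(z)
        return torch.stack(zs)

For diagonal noise, the Milstein scheme mentioned above adds the standard correction term 0.5 · σ(z, t) · ∂σ/∂z(z, t) · (dW² − dt) per dimension on top of the update shown here.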
In addition, following torchdiffeq , we have created a user-friendly subclass of torch.autograd.Function that facilitates gradient computation using our stochastic adjoint framework when the neural SDE is implemented as a subclass of torch.nn.Module. We include a short code snippet covering the main idea of the stochastic adjoint in Appendix L and plan to release all code after the double-blind reviewing process. The formulation of the adjoint ensures it can be numerically integrated by merely evaluating dynamics cheaply defined by vector-Jacobian products, as opposed to whole Jacobians. However, the backward-in-time nature also introduces the additional difficulty that the same Wiener process sample path in the forward pass has to be queried again during the backward pass. Naïvely storing Brownian motion increments and related quantities (e.g. Lévy area approximations) not only implies a large memory consumption but also disables using adaptive time-stepping numerical integrators, where the evaluation timestamps in the backward pass may be different from those in the forward pass. To overcome this issue, we combine Brownian trees with splittable Pseudorandom number generators (PRNGs) and obtain a data structure that allows querying values of the Wiener process path at arbitrary times with logarithmic time cost with respect to some error tolerance. Lévy's Brownian bridge states that given a start time t s and end time t e along with their respective Wiener process values w s and w e, the marginal of the process at time t ∈ (t s, t e) is a normal distribution: We can recursively apply this formula to evaluate the process at the midpoint of any two distinct timestamps where the values are already known. Constructing the whole sample path of a Wiener process in this manner in what is known as the Brownian tree . We assume access to a splittable PRNG (Claessen and Pałka, 2013), which has an operation split that deterministically generates two (or more) keys 4 using an existing key. In addition, we assume access to an operation BrownianBridge which samples from given a key. To obtain the Wiener process value at a specific time, the seeded Brownian tree works by recursively sampling according to the Brownian tree with keys split from those of parent nodes, assuming the values at some initial and terminal times are known. The algorithm terminates when the current time under consideration is within a certain error tolerance of the desired time. We outline the full procedure in Algorithm D.2. This algorithm has constant memory cost. For fixed-step-size solvers, the tolerance that the tree will be queried at will scale as 1/L, where L is the number of steps in the solver. Thus the complexity per-step will scale as log L. Note that the variational free energy can be derived from Girsanov's change of measure theorem . To efficiently Monte Carlo estimate this quantity and its gradient, we simplify the equation by noting that for a one-dimensional process {V t} t∈T adapted to the filtration generated by a one-dimensional Wiener process {W t} t∈T, if Novikov's condition (Øksendal, 2003) is satisfied, then the process defined by the Itô integral t 0 V s dW s is a Martingale (Øksendal, 2003). 
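Returning to the Brownian tree described earlier in this paragraph, the bridge-sampling recursion can be sketched as follows. This is an illustration of the idea rather than our actual data structure: the deterministic integer reseeding stands in for a proper splittable PRNG, and the function names brownian_bridge_sample and query_brownian_tree are placeholders. The mean and variance follow directly from Lévy's Brownian bridge formula stated above.

    import numpy as np

    def brownian_bridge_sample(t, ts, te, ws, we, rng):
        """Sample W_t given W_ts = ws and W_te = we from Levy's Brownian bridge formula."""
        mean = ws + (we - ws) * (t - ts) / (te - ts)
        var = (te - t) * (t - ts) / (te - ts)
        return mean + np.sqrt(var) * rng.standard_normal()

    def query_brownian_tree(t, ts, te, ws, we, seed, tol=1e-5):
        """Recursively bisect [ts, te], reseeding deterministically at each split, until the
        bracketing interval around the query time t is smaller than tol."""
        while te - ts > tol:
            tm = 0.5 * (ts + te)
            # Deterministic child seeds stand in for a splittable PRNG: repeated queries
            # therefore always reproduce the same underlying sample path.
            rng = np.random.default_rng(seed)
            wm = brownian_bridge_sample(tm, ts, te, ws, we, rng)
            if t < tm:
                te, we, seed = tm, wm, 2 * seed + 1
            else:
                ts, ws, seed = tm, wm, 2 * seed + 2
        return ws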
Hence, E T 0 u(Z t, t) dW t = 0, and To Monte Carlo simulate the quantity in the forward pass along with the original dynamics, we need only extend the original augmented state with an extra variable L t such that the new drift and diffusion functions for the new augmented state By, the backward SDEs of the adjoint processes become In this case, neither do we need to actually simulate the backward SDE of the extra variable nor do we need to simulate its adjoint. Moreover, when considered as a single system for the augmented adjoint state, the diffusion function of the backward SDE satisfies the commutativity property. We consider three carefully designed test problems (examples 1-3 ; details in Appendix M) all of which have closed-form solutions. We compare the gradient computed from simulating our stochastic adjoint process using the Milstein scheme against the gradient evaluated by analytically solving the equations. Figure F (a) shows that for test example 1, the error between the adjoint gradient and analytical gradient decreases as the fixed step size decreases. One phenomenon not covered by our theory is that the error can be indeed be controlled by the adaptive solver. This is shown by the fact that for all three test problems, the mean-square error across dimensions tends to be smaller as the absolute tolerance is reduced (see Figure F (c, f, j) ). However, we note that the Number of Function Evaluations (NFEs) tends to be much larger than that in the ODE case , which is expected given the inherent roughness of Brownian motion paths. Sensitivity Analysis for SDEs. Gradient computation is closely related to sensitivity analysis. Computing gradients with respect to parameters of vector fields of an SDE has been extensively studied in the stochastic control literature . In particular, for low dimensional problems, this is done effectively using dynamic programming and finite differences . However, both approaches scale poorly with the dimensionality of the parameter vector. Analogous to REINFORCE (or score-function estimator) (; ;), as for some random variable H 5. However, H usually depends on the density of Z T with respect to the Lebesgue measure which can be difficulty to compute. extended this approach by weakening a non-degeneracy condition using Mallianvin calculus. Closely related to the current submission is the pathwise method , which is the continuous-time analog of the reparameterization trick . Existing methods in this regime (a; ;) all require simulating a forward SDE where each step requires computing entire Jacobian matrices. This computational cost is prohibitive for high-dimensional systems with a large number of parameters. Based on the Euler discretization, considered storing the intermediate values and performing reverse-mode automatic differentiation. They named this method the adjoint approach, which, by modern standards, is a form of "backpropagation 5. The random variable H is not unique. through the operations of a numerical solver". We comment that this approach, despite widely adopted in the field of finance for calibrating market models , has high memory cost, and relies on a fixed step size Euler-Maruyama discretization. This approach was used by to parameterize the drift and diffusion of an SDE using Gaussian processes. Backward SDEs. Our backward SDE for the stochastic adjoint process relies on the notion of backward SDEs by which is based on two-sided filtrations. 
This is different from the more traditional notion of backward SDEs where only a single filtration is defined . Based on the latter notion, forward-backward SDEs (FBSDEs) have been proposed to solve the stochastic optimal control problem . However, simulating FBSDEs is costly due to the need to estimate conditional expectations in the backward pass. Estimating conditional expectations, however, is a direct consequence of the appearance of an auxiliary process from the Martingale representation theorem . For notational convenience we suppress z and W ·. Bounding I 1. Let > 0 be given. Since G h → G in probability, there exist M 1 > 0 and h 0 > 0 such that Since the SDE defines a stochastic flow of diffeomorphisms, there exists a finite random variable C 2 such that sup |z|≤2M 1 |∇ z F| ≤ C 2, and there exists M 2 > 0 such that P(|C 2 | > M 2) <. Given M 2, there exists h 1 > 0 such that Now suppose h ≤ min{h 0, h 1}. Then, by the union bound, with probability at least 1 − 4, we have On this event, we have Thus, we have shown that I 1 converges to 0 in probability as h → 0. Bounding I 2. The idea is similar. By condition (ii), we have in probability. Using this and condition (i), for given > 0, there exist M > 0 and h 0 > 0 such that for h ≥ h 0, we have with probability at least 1 −. On this event, we have Thus I 2 also converges to 0 in probability. Recall the Stratonovich SDE with drift and diffusion functions b, σ 1,..., σ m ∈ R d × R → R d governed by a set of parameters θ ∈ R p. Consider the augmented state composed of the original state and parameters Y t = (Z t, θ). The augmented state satisfies a Stratonovich SDE with the drift function f (y, t) = (b(z, t), 0 p ) and diffusion functions. By and, the dynamics for the adjoint process of the augmented state is characterized by the backward SDE: By definitions of f and g i, the Jacobian matrices ∇f (x, s) and ∇g i (x, s) can be written as: Thus, we can write out the backward SDEs for the adjoint processes of the state and parameters separately: Now assume the original SDE has diagonal noise. Then, m = d and Jacobian matrix ∇σ i (z) can be written as: Consider the adjoint process for the augmented state along with the backward flow of the backward SDE. We write the overall state as X t = (Z t, (A z t), (A θ t) ), where we abuse notation slightly to let {Z t} t∈T denote the backward flow process. Then, by and, {X t} t∈T satisfies a backward SDE with a diffusion function that can be written as: Recall, for an SDE with diffusion function Σ(x) ∈ R d×m, it is said to satisfy the commutativity property (Rößler, 2004) When an SDE has commutative noise, the computationally intensive double Itô integrals (and the Lévy areas) need not be simulated by having the numerical scheme take advantage of the following property of iterated integrals : where the Brownian motion increment To see that the diffusion function indeed satisfies the commutativity condition, we consider several cases: • k = 1,..., d: Both LHS and RHS are zero unless j 1 = j 2 = k, since for Σ i,j 2 (x) to be non-zero, i = j 1 = j 2 = k. • k = d + 1..., 2d: Similar to the case above. • k = 2d + 1..., 2d + p:. Both LHS and RHS are zero unless Since in all scenarios, LHS = RHS, we conclude that the commutativity condition holds. 
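For reference, the commutativity condition invoked above is commonly stated as follows; we restate the standard form from the numerical SDE literature rather than quoting the original text. For a diffusion matrix $\Sigma(x) \in \mathbb{R}^{d \times m}$,
$$\sum_{k=1}^{d} \Sigma_{k,j_1}(x)\,\frac{\partial \Sigma_{i,j_2}(x)}{\partial x_k} \;=\; \sum_{k=1}^{d} \Sigma_{k,j_2}(x)\,\frac{\partial \Sigma_{i,j_1}(x)}{\partial x_k} \qquad \text{for all } i \in \{1,\dots,d\},\ j_1, j_2 \in \{1,\dots,m\}.$$
When this holds, the iterated stochastic integrals appearing in strong order-1.0 schemes can be expressed through products of one-dimensional Brownian increments, so neither double integrals nor Lévy areas need to be simulated.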
Finally, we comment that the Milstein scheme for the stochastic adjoint of diagonal noise SDEs can be implemented such that during each iteration of the backward solve, vjp is only called a number of times constant with respect to the dimensionality of the original SDE.
        return ans

    @staticmethod
    def backward(ctx, *grad_outputs):
        ts, flat_params_f, flat_params_g, *ans = ctx.saved_tensors
        f, g, dt, bm = ctx.f, ctx.g, ctx.dt, ctx.bm
        f_params, g_params = tuple(f.parameters()), tuple(g.parameters())
        n_tensors = len(ans)
        ...
        # Accumulate gradients at intermediate points.
        adj_y = _sequence_add(adj_y, tuple(grad_outputs_[i - 1] for grad_outputs_ in grad_outputs))
        ...
        return (*adj_y, None, None, None, adj_params_f, adj_params_g, None, None)
In the following, α, β, and p are parameters of SDEs, and x 0 is a fixed initial value. Example 1. Analytical solution: Example 2. Analytical solution: Example 3. Analytical solution: In each numerical experiment, we duplicate the equation 10 times to obtain a system of SDEs where each dimension has its own parameter values sampled from the standard Gaussian distribution and then passed through a sigmoid to ensure positivity. Moreover, we also sample the initial value for each dimension from a Gaussian distribution. We consider training latent SDE models with our adjoint framework to recover a 1D geometric Brownian motion and a 3D stochastic Lorenz attractor process. The main objective is to verify that the learned posterior is able to reconstruct the training data, and that the learned prior exhibits stochastic behavior. We jointly optimize the variational free energy with respect to parameters of the prior and posterior distributions at the initial latent state z 0, the prior and posterior drift, the diffusion function, the encoder, and the decoder. We include the details of dataset and architecture in Appendix N.1. For the stochastic Lorenz attractor, not only is the model able to reconstruct the data well, but also the learned prior process can produce bimodal samples in both data and latent space. This is showcased in the last row of Figure 4, where once the initial position sampled from the learned prior distribution is fixed, the latent and data space samples cluster around two modes. Note that this cannot be achieved by a latent ODE, where trajectories are determined once their initial latent state is determined. See Figure 4 for additional visualization on the synthetic Lorenz attractor dataset. See Figure 5 for visualization on the synthetic geometric Brownian motion dataset. We comment that for the second example, the posterior reconstructs the data well, and the prior process exhibits the behavior of the data. However, from the third row, we can observe that the prior process is learned such that most of the uncertainty is accounted for in the initial latent state. We leave the investigation of more interpretable prior processes for future work. Consider a geometric Brownian motion SDE: dX_t = µ X_t dt + σ X_t dW_t, X_0 = x_0. We use µ = 1, σ = 0.5, and x_0 = 0.1 + ε as the ground-truth model, where ε ∼ N(0, 0.03²). We sample 1024 time series, each of which is observed at intervals of 0.02 from time 0 to time 1. We corrupt this data using Gaussian noise with mean zero and standard deviation 0.01.
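For concreteness, the synthetic dataset just described can be generated from the exact solution of the geometric Brownian motion SDE. The sketch below uses the hyperparameters quoted above; it is an illustration with placeholder names, not the code used for the reported experiments.

    import numpy as np

    def make_gbm_dataset(n_series=1024, t_max=1.0, dt=0.02, mu=1.0, sigma=0.5,
                         x0_mean=0.1, x0_std=0.03, obs_noise=0.01, seed=0):
        """Simulate noisy observations of geometric Brownian motion using its exact solution
        X_t = X_0 * exp((mu - sigma^2 / 2) * t + sigma * W_t)."""
        rng = np.random.default_rng(seed)
        ts = np.arange(0.0, t_max + 1e-9, dt)                        # observation times
        x0 = x0_mean + x0_std * rng.standard_normal((n_series, 1))   # random initial values
        dW = np.sqrt(dt) * rng.standard_normal((n_series, len(ts) - 1))
        W = np.concatenate([np.zeros((n_series, 1)), np.cumsum(dW, axis=1)], axis=1)
        X = x0 * np.exp((mu - 0.5 * sigma ** 2) * ts + sigma * W)
        return ts, X + obs_noise * rng.standard_normal(X.shape)      # corrupt with Gaussian noise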
To recover the dynamics, we use a latent SDE model where the GRU has 1 layer and 100 hidden units, the prior and posterior drift functions are MLPs with 1 hidden layer of 100 units, and the diffusion function is an MLP with 1 hidden layer of 100 hidden units and the sigmoid activation applied at the end. The drift function in the posterior is time-inhomogenous in the sense that it takes in a context vector of size 1 at each observation that is output by the GRU from running backwards after processing all future observations. The decoder is a linear mapping from a 4 dimensional latent space to observation space. For all nonlinearities, we use the softplus function. We fix the observation model to be Gaussian with noise standard deviation 0.01. We optimize the model jointly with respect to the parameters of a Gaussian distribution for initial latent state distribution, the prior and posterior drift functions, the diffusion function, the GRU encoder, and the decoder. We use a fixed discretization with step size of 0.01 in both the forward and backward pass. We use the Adam optimizer with an initial learning rate of 0.01 that is decay by a factor of 0.999 after each iteration. We use a linear KL annealing schedule over the first 50 iterations. Consider a stochastic Lorenz attractor SDE with diagonal noise: dX t =σ (Y t − X t) dt + α x dW t, X 0 = x 0, dY t = (X t (ρ − Z t) − Y t ) dt + α y dW t, Y 0 = y 0, dZ t = (X t Y t − βZ t) dt + α z dW t, Z 0 = z 0. We use σ = 10, ρ = 28, β = 8/3, (α x, α y, α z) = (.1, .28., .3), and (x 0, y 0, z 0) sampled from the standard Gaussian distribution as the ground-truth model. We sample 1024 time series, each of which is observed at intervals of 0.025 from time 0 to time 1. We normalize these samples by their mean and standard deviation across each dimension and corrupt this data by Gaussian noise with mean zero and standard deviation 0.01. We use the same architecture and training procedure for the latent SDE model as in the geometric Brownian motion section, except that the diffusion function consists of four small neural networks, each for a single dimension of the latent SDE. We follow the preprocessing used by. Following Yıldız et al., we use a fully connected network to encode the first three observations of each sequence and thereafter predicted the remaining sequence. Note the current choice of encoder is for comparing fairly to models in the existing literature, and it may be extended to be a recurrent or attention model to enhance performance. The overall architecture is described in Appendix O and is similar to that of ODE 2 VAE Yıldız et al. with a similar number of parameters. We also use a fixed step size that is 1/5 of smallest interval between any two observations Yıldız et al.. We train latent ODE and latent SDE models with the Adam optimizer and its default hyperparameter settings, with an initial learning rate of 0.01 that is exponentially decayed with rate 0.999 during each iteration. We perform validation over the number of training iterations, KL penalty , and KL annealing schedule. All models were trained for at most 400 iterations, where we start to observe severe overfitting for most model instances. We use a latent SDE model with an MLP encoder which takes in the first three frames and outputs the mean and log-variance of the variational distribution of the initial latent state and a context vector. The decoder has a similar architecture as that for the ODE 2 VAE model Yıldız et al. 
and projects the 6-dimensional latent state into the 50-dimensional observation space. The posterior drift function takes in a 3-dimensional context vector output by the encoder along with the current state and time, whereas the prior drift only takes in the current state and time. The diffusion function is composed of multiple small neural nets, each producing a scalar for the corresponding dimension such that the posterior SDE has diagonal noise. We comment that the overall parameter count of our model is smaller than that of ODE 2 VAE for the same task. The latent ODE baseline was implemented with a similar architecture, except it does not have the diffusion and prior drift components, and its vector field defining the ODE does not take in a context vector. Therefore, the model has slightly fewer parameters than the latent SDE model. See Figure 6 for overall details of the architecture (first row, left to right: encoder and decoder; second row, left to right: prior drift, posterior drift, and diffusion functions). The main hyperparameter we tuned was the coefficient for reweighting the KL term. For both the latent ODE and SDE, we considered training the model with a reweighting coefficient in {1, 0.1, 0.01, 0.001}, either with or without a linear KL annealing schedule that increased from 0 to the prescribed value over the first 200 iterations of training.
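To make the drift, diffusion, and annealing choices above concrete, here is a rough sketch of the corresponding modules. The default sizes (4-dimensional latent state, size-1 context vector, one hidden layer of 100 units, softplus nonlinearities, sigmoid-bounded diffusion, 50-iteration annealing window) follow the geometric Brownian motion experiment; the motion capture model uses a 6-dimensional latent state and a 3-dimensional context instead. The class, method, and function names are our own placeholders rather than code from a released implementation.

    import torch
    import torch.nn as nn

    def _with_time(z, t):
        # t is assumed to be a tensor of shape (1, 1) holding the current time.
        return torch.cat([z, t.expand(z.shape[0], 1)], dim=-1)

    class LatentSDEFuncs(nn.Module):
        """Prior/posterior drift and diffusion networks for a latent SDE."""
        def __init__(self, latent_dim=4, hidden=100, context_dim=1):
            super().__init__()
            self.post_drift = nn.Sequential(
                nn.Linear(latent_dim + context_dim + 1, hidden), nn.Softplus(),
                nn.Linear(hidden, latent_dim))
            self.prior_drift = nn.Sequential(
                nn.Linear(latent_dim + 1, hidden), nn.Softplus(),
                nn.Linear(hidden, latent_dim))
            self.diffusion = nn.Sequential(
                nn.Linear(latent_dim + 1, hidden), nn.Softplus(),
                nn.Linear(hidden, latent_dim), nn.Sigmoid())

        def f_post(self, z, t, ctx):    # posterior drift: time-inhomogeneous, consumes context
            return self.post_drift(torch.cat([_with_time(z, t), ctx], dim=-1))

        def f_prior(self, z, t):        # prior drift
            return self.prior_drift(_with_time(z, t))

        def g(self, z, t):              # diagonal diffusion, shared by prior and posterior
            return self.diffusion(_with_time(z, t))

    def kl_weight(step, anneal_iters=50):
        """Linear KL annealing over the first `anneal_iters` training iterations."""
        return min(1.0, step / anneal_iters)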
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HyeL9yh4KH
We present a constant memory gradient computation procedure through solutions of stochastic differential equations (SDEs) and apply the method for learning latent SDE models.
It is clear that users should own and control their data and privacy. Utility providers are also becoming more interested in guaranteeing data privacy. Therefore, users and providers can and should collaborate in privacy protecting challenges, and this paper addresses this new paradigm. We propose a framework where the user controls what characteristics of the data they want to share (utility) and what they want to keep private (secret), without necessarily asking the utility provider to change its existing machine learning algorithms. We first analyze the space of privacy-preserving representations and derive natural information-theoretic bounds on the utility-privacy trade-off when disclosing a sanitized version of the data X. We present explicit learning architectures to learn privacy-preserving representations that approach this bound in a data-driven fashion. We describe important use-case scenarios where the utility providers are willing to collaborate with the sanitization process. We study space-preserving transformations where the utility provider can use the same algorithm on original and sanitized data, a critical and novel attribute to help service providers accommodate varying privacy requirements with a single set of utility algorithms. We illustrate this framework through the implementation of three use cases; subject-within-subject, where we tackle the problem of having a face identity detector that works only on a consenting subset of users, an important application, for example, for mobile devices activated by face recognition; gender-and-subject, where we preserve facial verification while hiding the gender attribute for users who choose to do so; and emotion-and-gender, where we hide independent variables, as is the case of hiding gender while preserving emotion detection. will make such devices not to understand the data until the visual or sound trigger is detected, with the capability to do this at the sensor level and without modifying the existing recognition system. This new paradigm of collaborative privacy environment is critical since it has also been shown that algorithmic or data augmentation and unpredictable correlations can break privacy BID7; BID14;; BID18. The impossibility of universal privacy protection has been studied extensively in the domain of differential privacy BID4, where a number of authors have shown that assumptions about the data or the adversary must be made in order to be able to provide utility BID5; BID6; BID10. We can, however, minimize the amount of privacy we are willing to sacrifice for a given level of utility. Other recent data-driven privacy approaches like have also explored this notion, but do not integrate the additional collaborative constraints. Therefore, it is important to design collaborative systems where each user shares a sanitized version of their data with the service provider in such a way that user-defined non-sensitive tasks can be performed but user-defined sensitive ones cannot, without the service provider requiring to change any data processing pipeline otherwise. Contributions-We consider a scenario where a user wants to share a sanitized representation of data X in a way that a latent variable U can be inferred, but a sensitive latent variable S remains hidden. We formalize this notion using privacy and transparency definitions. We derive an informationtheoretic bound on privacy-preserving representations. 
The metrics induced by this bound are used to learn such a representation directly from data, without prior knowledge of the joint distribution of the observed data X and the latent variables U and S. This process can accommodate for several user-specific privacy requirements, and can be modified to incorporate constraints about the service provider's existing utility inference algorithms enabling several privacy constraints to be satisfied in parallel for a given utility task. We apply this framework to challenging use cases such as hiding gender information from a facial image (a relatively easy task) while preserving subject verification (a much harder task), or designing a sanitization function that preserves subject identification on a consenting subset of users, while disallowing it on the general population. Blocking a simpler task while preserving a harder one and blocking a device from constantly listening out-of-sample data are new applications in this work, here addressed with theoretical foundations and respecting the provider's existing algorithms, which can simultaneously handle sanitized and non-sanitized data. The problem statement is detailed in Section 2, and the information-theoretic bounds are derived in Section 3. Section 4 defines a trainable adversarial game that directly attempts to achieve this bound; the section also discusses how service-provider specific requirements can be incorporated. Examples of this framework are shown in Section 5. The paper is concluded in Section 6. Complementary information and proofs are presented in the Supplementary Material. We describe a scenario in which we have access to possibly high-dimensional data X ∈ X, this data depends on two special latent variables U and S. U is called the utility latent variable, and is a variable we want to communicate, while S is called the secret, and is a variable we want to protect. We consider two agents, a service provider that wants to estimate U from X, and an actor that wants to infer S from X.We define a third agent, the privatizer, that wants to learn a space-preserving stochastic mapping Q: X → Q ⊃ X in such a way that Q(X) provides information about the latent variable U, but provides relatively little information of S. In other words, we want to find a data representation that is private with respect to S and transparent with respect to U. 1 We first recall the definition of privacy presented in BID9 DISPLAYFORM0 Definition 2.1. Privacy: Let δ s be a measure of distance between probability distributions, b s ∈ R + a positive real number, and P (S) the marginal distribution of the sensitive attribute S. The stochastic mapping Q(X) is (δ s, b s)-private with respect to S if δ s (P (S), P (S|Q(X))) < b s.We can define transparency in the same fashion: Definition 2.2. Transparency: Let δ u be a measure of distance between probability distributions, b u ∈ R + a positive real number, and P (U |X) the posterior conditional distribution of the utility variable U after observing X. The stochastic mapping Q(X) is (δ u, b u)-transparent with respect to U if δ u (P (U |X), P (U |Q(X))) < b u.Both definitions depend on the learned mapping Q; in the following section, we derive an information-theoretic bound between privacy and transparency, and show that this bound infers a particular choice of metrics δ u, δ s. We then show that this inferred metric can be directly implemented as a loss function to learn privatization transformations from data using standard machine learning tools. 
A similar analysis of these bounds for the special case where we directly observe the utility variable U (X = U) was analyzed in BID1 in the context of the Privacy Funnel. Here, we extend this to the more general case where U is observed indirectly. More importantly, these bounds are used to design a data-driven implementation for learning privacy-preserving mappings. Consider the utility and secret variables U and S defined over discrete alphabets U, S,and the observed data variable X, defined over X, with joint distribution P X,U,S. FIG0 illustrates this set-up, and shows the fundamental relationship of their entropies H(·) and mutual information. We analyze the properties of any mapping Q: X → Q, and measure the ing mutual information between the transformed variable Q(X) and our quantities of interest. Our goal is to find Q such that the information leakage from our sanitized data I(S; Q(X)) is minimized, while maximizing the shared information of the utility variable I(U ; Q(X)). We will later relate these quantities and the bounds here developed with the privacy/utility definitions presented in the previous section. Maximizing I(U ; Q(X)) is equivalent to minimizing I(U ; X | Q(X)), since I(U ; X|Q(X)) = I(U ; X) − I(U ; Q(X)). The quantity I(U ; X | Q(X)) is the information X contains about U that is censured by the sanitization mapping Q. FIG0 illustrates I(S; Q(X)) and I(U ; X|Q(X)). One can see that there exists a trade-off area, I(U, S) − I(U, S|X), that is always included in the union of I(S; Q(X)) and I(U ; X|Q(X)). The lower we make I(S; Q(X)), the higher we make the censored information I(U ; X|Q(X)), and vice versa. This induces a lower bound over the performance of the best possible mappings Q(X) that is formalized in the following lemma. Lemma 3.1. Let X, U, S be three discrete random variables with joint probability distribution P X,U,S. For any stochastic mapping Q: X → Q we have DISPLAYFORM0 Proof of this lemma is shown in Supplementary Material. To show that this bound is reachable in some instances, consider the following example. Let U and S be independent discrete random variables, and X = (U, S). The sanitization mapping Q(X) = U satisfies this bound with equality. We can also prove, trivially, an upper bound for these quantities. Lemma 3.2. Let X, U, S be three discrete random variables with joint probability distribution P X,U,S. For any stochastic mapping Q: X → Q we have: DISPLAYFORM1 That simply states that the information leakage about the secret and the censured information on the utility variable cannot exceed the total information present in the original observed variable X. We relate the terms I(S; Q(X)) and I(U ; X | Q(X)) in Eq.1 back to our definitions of privacy and transparency. DISPLAYFORM0 Here we used the fact that U is conditionally independent of Q given X. We then observe that Eq. Similarly, we can analyze I(S; Q) to get, DISPLAYFORM1 We can see from Eq.4 that the natural induced metric for measuring privacy δ s in Def,2.1 is the reverse Kullback-Leibler divergence RD KL.We can thus rewrite our fundamental tradeoff equation as DISPLAYFORM2 We show next how this bound can be used to define a trainable loss metric, allowing the privatizer to select different points in the transparency-privacy trade-off space. Assume that for any given stochastic transformation mapping Q ∼ Q(X), we have access to the posterior conditional probability distributions P (S | Q), P (U | Q), and P (U | X). Assume we also have access to the prior distribution of P (S). 
Inspired by the bounds from the previous section, the proposed privatizer loss is DISPLAYFORM0 where α ∈ is a tradeoff constant. A low α value implies a high degree of transparency, while a high value of α implies a high degree of privacy. Using Eq.5 we have a lower bound on how private or transparent the privatizer can be for any given α value, as detailed next. Theorem 3.3. For any α ∈, and stochastic mapping Q: X → Q the solution to Eq.6 guarantees the following bounds, DISPLAYFORM1 The proof is shown in Supplementary Material. To recap, we proposed a privatizer loss Eq.6 with a controllable trade-off parameter α, and showed bounds on how transparent and private our data can be for any given value of α. Next we show how to optimize this utility-privacy formulation. Even if the joint distribution of P (U, S, X) is not known, the privatizer can attempt to directly implement Eq.6 in a data-driven architecture to find the optimal Q. Assume the privatizer has access to a dataset {(x, s, u)}, where s and u are the ground truth secret and utility values of observation x. Under these conditions, the privatizer searches for a parametric mapping q = Q θ (x, z), where z is an independent random variable, and attempts to predict the best possible attack by learning P η (s | q), an estimator of P (s | q). The privatizer also needs P ψ (u|q) and P φ (u|x), estimators of P (u | q) and P (u | x) respectively, to measure how much information about the utility variable is censored with the proposed mapping. Under this setup Q θ (x, z) is obtained by optimizing the following adversarial game: DISPLAYFORM0 Here the first three terms are crossentropy loss terms to ensure our estimators P η (s|q), P ψ (u|q), and P φ (u|x) are a good approximation to the true posterior distributions. The final loss term attempts to find the best possible sampling function Q θ (x, z) such that (1 − α)I 2 (U ; X | Q) + αI 2 (S; Q) is minimized. Details on the algorithmic implementation are given in Section 7.3.1. Performance on simulated datasets is shown in Section 7.2. The proposed framework naturally provides a means to achieve collaboration from the utility provider. In this scenario, the utility provider wishes to respect the user's desired privacy, but is unwilling to change their estimation algorithm Pφ(u | x), and expects the privatizer to find a mapping that minimally affects its current performance.2 This is a more challenging scenario, with worse tradeoff characteristics, in which Q θ (x, z) is obtained by optimizinĝ DISPLAYFORM0 2 Recall that the utility provider wants to use the same algorithm for sanitized and non-sanitized data, a unique aspect of the proposed framework and critical to accept its collaboration. A final scenario addressed by the proposed framework arises when the utility provider is the sole agent to access the sanitized data, and it has estimation algorithms for both the utility and the privacy variable Pφ(u | x), Pη(s | x), that it is unwilling to modify. The service provider wishes to reassure the users that they are unable to infer the secret attribute from the sanitized data, if and when the user decides so. Under these conditions, we optimize for DISPLAYFORM0 The following examples are based on the framework presented in FIG8. Here we have the three key agents mentioned before: the utility algorithm that is used by the provider to estimate the information of interest. 
This algorithm can take the raw data (X) or the mapped data (Q(X)) and be able to infer the utility; the secret algorithm that is able to operate on the mapped data to infer the secret; the privatizer that learns a space preserving mapping Q that allows the provider to learn the utility but prevents the secret algorithm to infer the secret. The utility algorithm is trained to perform well on raw data, the secret algorithm is adversarially trained to infer the secret variable after sanitization. In the next examples we show how the proposed framework performs under different scenarios, the privatizer architecture is kept unchanged across all experiments to show that the same architecture can achieve very different objectives using the proposed framework, the detailed architectures are shown in Section 7.3.2. Extra experiments under known conditions are shown in 7.2.Figure 2: Three components of the collaborative privacy framework. Raw data can be directly fed into the secret and utility inferring algorithm. Since the privatization mapping is space preserving, the privatized data can also be directly fed to both tasks without any need for further adaptations. We begin by analyzing the subject-within-subject problem. Imagine a subset of users wish to unlock their phone using facial identification, while others opt out of the feature; we wish the face identification service to work only on the consenting subset of users. We additionally assume that the utility provider wishes to comply with the user's wishes, so we can apply the framework described in Section 4.2. Note that in this problem, the utility and secrecy variables are mutually exclusive. We solve this problem by training a space-preserving stochastic mapping Q on facial image data X, where the utility and secret variable U and S are categorical variables over consenting and nonconsenting users respectively. We test this over the FaceScrub dataset BID8 ), using VGGFace2 Cao et al. (2017 as the utility and secrecy inferring algorithm. The stochastic mapping was implemented using a stochastic adaptation of the , architecture details are provided in Section 7.3.2. Table 1 shows the top-5 categorical accuracy of the utility network over the sanitized data at various α points in the privacy-utility trade-off. Figure 3 show some representantive images on how images are sanitized. It also shows that the sanitization function is able to preserve information about the utility variable while effectively censoring the secret variable, even for unobserved images. A phone equipped with this filter at the sensor level would be effectively incapable of collecting information on nonconsenting users. Figure 3 : Left and center figures show images of consenting and nonconsenting (private) users respectively, along with their sanitized counterparts. The identity of consenting users is still easily verified, while the identity of nonconsenting users is effectively censored. Table on the right shows Top-5 accuracy performance of the subject detector after sanitization across several sanitation levels α. Performance is shown across 3 subsets, consenting users (CU) are users that decided to be detected by the utility algorithm, observed private users (OPU) are those that explicitly decided to protect their privacy, while unobserved private users (UPU) are users that decided to protect their privacy but where not available during training. Consenting users are still recognized by the system, while nonconsenting users are not. 
For example, for α = 0.4, we significantly block OPU and UPU while preserving CU Here we continue to work on facial image data X, where utility variable U is gender recognition, and the secret variable S is emotion (smiling/non-smiling). In this scenario, variables U and S are independent. We implement this over the CelebA dataset BID13, using Xception networks as our utility and privacy estimators. Table. 2 shows the distribution of the utility and secrecy estimators over the sanitized data. FIG3 shows example sanitized images. It is visually possible to identify the gender of the subject but not their emotion. Most importantly, the existing gender detection algorithm still performs correctly over the sanitized images. Table 2: Gender and emotion detection on users on raw and sanitized data. In this setup, we want to find a mapping Q that hides the gender attribute but allows subject verification. The mapping Q should prevent a standard gender detection algorithm from performing its task, while allowing a standard subject detector algorithm to still perform subject verification. This is the only experiment in this section where the secret inference algorithm is fixed. The mapping that incorporates a pretrained FaderNet was chosen as the baseline for the stochastic mapping function since this network is already trained to defeat a gender discriminator in its encoding space. This proves a suitable baseline comparison and starting point for a mapping function that needs to fool a gender discriminator in image space while simultaneously preserving subject verification performance. We show the performance of using only the pretrained gender FaderNet and demonstrate how we can improve its performance by training a posterior processing mapping (UNET) using the loss proposed in Eq.10.We tested this framework on the FaceScrub dataset BID15. FIG5 shows how the output probabilities of the gender classification model approach the prior distribution of the dataset as α increases. We see that sanitized images produce output gender probabilities close to the dataset prior even for relatively low α values. Last column of FIG5 shows how the top-5 categorical accuracy of the subject verification task varies across different α values. These suggest that under these conditions we can achieve almost perfect privacy while maintaining reasonable utility performance. DISPLAYFORM0 Inspired by information-theory bounds on the privacy-utility trade-off, we introduced a new paradigm where users and entities collaborate to achieve both utility and privacy per a user's specific requirements. One salient feature of this paradigm is that it can be completely transparentinvolving only the use of a simple user-specific privacy filter applied to user data -in the sense that it requires otherwise no modifications to the system infrastructure, including the service provider algorithmic capability, in order to achieve both utility and privacy. Representative architectures and suggest that a collaborative user-controlled privacy approach can be achieved. While the here presented clearly show the potential of this approach, much has yet to be done, of particular note is extending this approach to continuous utility and privacy variables. While the underlying framework still holds, reliably measuring information between continuous variables is a more challenging task to perform and optimize for. 
The proposed framework provides privacy metrics and bounds in expectation; we are currently investigating how the privacy tails concentrate as data is acquired and if there is a need to use information theory metrics with worst-case scenario guarantees. Modifying the information theory metrics to match some of the theoretical in (local) differential privacy is also the subject of future research. Privacy is closely related to fairness, transparency, and explainability, both in goals and in some of the underlying mathematics. A unified theory of these topics will be a great contribution to the ML community. Consider the equality I(U ; S) − I(U ; S|X) = I(S; Q(X)) − I(S; Q(X)|U ) + I(U ; X|Q(X)) − I(U ; X|Q(X), S).We know that DISPLAYFORM0 so we can guarantee DISPLAYFORM1 Proof. Theorem 3.3 DISPLAYFORM2 and α ∈.Minimizing Eq.6 respecting Eq.5 and Eq.12 is equivalent to solving: DISPLAYFORM3 Consider the following relaxation of Eq.15 DISPLAYFORM4 where A and B are positive real values. Eq.16 is a relaxation of Eq.15 because the space of possible tuples (A Q, B Q) is included in the space of possible values of R Suppose Q * is the solution to Eq.15, with corresponding values (A Q *, B Q *), and suppose (A *, B *) is the solution to Eq. 16. We know DISPLAYFORM0 Assume DISPLAYFORM1 However, A Q * > 0 and A Q * + B * ≤ A * + B * ≤ K. Therefore, (A Q *, B *) is a valid solution to Eq.16, and is smaller than the lower bound (A *, B *).This contradiction arises from assuming A * > A Q *, we thus conclude that DISPLAYFORM2 Similarly for B * and B Q * we get B * ≤ B Q *.Additionally, Eq.16 is easily solvable and has solutions DISPLAYFORM3 Consequently, we proved DISPLAYFORM4 The following experiments attempt to show how close to the theoretical bound shown in Eq. 5 we can get by following Algorithm 1 under known conditions. Consider the following scenario: Utility variable U and secret variable S are two Bernoulli variables with the following joint distribution: DISPLAYFORM0 where the marginal probabilities are P (U = 1) = ρ, P (S = 1) = β, and k is a parameter that controls the dependence between and S, k ∈ [0, min{ρ DISPLAYFORM1 For these experiments, we make both marginals equal to 0.5 (ρ = β = 0.5). Note that when k = 1, U and S are independent (I(U ; S) = 0) and when k = 0 or k = 2 they reach maximum mutual information (I(U ; S) = H(U) = H(S) = H b (0.5) = ln nats).Our observations X will be taken in the extreme case, where X contains almost perfect information about the values of U and S. We do this by assuming that X ∈ R 2 is a Gaussian Mixture Model X with the following conditional distribution: DISPLAYFORM2 We choose a low σ = 0.05; this makes it so that every pair (u, s) is mapped to almost entirely disjoint regions, therefore knowing X gives nearly perfect information about (u, s) (I(U ; X) H(U), I(S; X) H(S), I(U ; S|X) 0). For added simplicity, the privacy filter is linear: Figure 6 shows how the raw and sanitized data are distributed for varying levels of codependence k and tradeoff α for linear sanitization functions. Figure 7 shows that privacy filters optimized using Algorithm 1 learn effective privacy-preserving mappings close to the theoretical bounds, even for a simple filtering architecture. They do so without any explicit modelling of the underlying data-generating distributions, and we can achieve different tradeoff points by simply modifing the parameter α. 
Note that when variables U and S are perfectly independent or codependent, the linear filter is perfectly able to reach any point in the optimal bound. For intermediate cases, the linear filter was capable of reaching the bound in the region where no utility is compromised, but was not capable of following the optimal tradeoff line for higher levels of privacy. Figure 7: Top row shows the best privacy-utility loss so far on the validation set for different levels of codependence I(U ; S) and trade-off parameter α. Middle row shows how the sum of the estimated informations approximate the information bound for the best privacy-utility loss so far on the validation set. Finally, bottom row illustrates the trajectory in information space of the privacy-utility filters as they are being trained. DISPLAYFORM3 Here, we explicitly elaborate on how we optimize the data-driven loss functions shown in equations 8, 9, and 10 using an adversarial approach. We first detail the exact adversarial training setup that was used to perform the experiments in Section 5, and then provide the concrete network architectures used for all shown . Minimizing the objective functions in equations 8, 9, and 10, is a challenging problem in general. By focusing our attention on Eq. 8, we see that each of the four loss terms have a distinct purpose: DISPLAYFORM0 The first three loss terms minimize a crossentropy objective for functions P η (s | q), P ψ (u | q), and P φ (u | x); this ensures that these functions are good estimators of the unknown true distributions of P (s | q), P (u | q), and P (u | x), where samples q are drawn from the learned sanitization mapping q = Q θ (x, z). The final loss term attempts to find the best possible sampling function DISPLAYFORM1 We can approximately solve this problem by applying iterative Stochastic Gradient Descent to each of the individual loss terms with respect to their relevant parameters, this is similar to the procedure used to train Generative Adversarial Networks. The algorithm we used to solve Eq. 8 is shown in Algorithm 1, similarly, algorithms to solve Eq. 9 and Eq. 10 are shown in Algorithm 2 and Algorithm 3 respectively. DISPLAYFORM2 Evaluate crossentropy loss on raw utility inference 5: DISPLAYFORM3 Evaluate crossentropy loss on filtered utility inference 7: DISPLAYFORM4 Evaluate crossentropy loss on secret inference 9: DISPLAYFORM5 10: DISPLAYFORM6 Evaluate sanitation loss 11: DISPLAYFORM7 Stochastic gradient descent step on Q θ (x, z)12: until convergence We now describe the exact architecture used to implement the privacy filter on all experiments shown in Section 5. Figure 8 shows the network diagram. The architecture presented in Figure 8 is fully convolutional, so the same network definition could be used across all three experiments by varying the input layer. To speed up convergence to a good filtering solution, filters were initially trained to copy the image (under RMSE loss), and optionally infer some meaningful attribute from the input (in subject-within-subject, this attribute was a simple class label on whether the subject wished their privacy preserved). We stress that this was only done for initialization, final training of the network was done exactly as described in Algorithm 1. 
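To summarize the alternating procedure of Algorithm 1 in code, the sketch below gives one iteration of the updates. It reflects our reading of Eq. 8 (transparency measured by the divergence between P(u|x) and P(u|q), privacy by pushing P(s|q) toward the prior over S) rather than a verbatim transcription, and all class, function, and argument names (filter_net, secret_clf, util_clf_q, util_clf_x, opts, prior_s) are placeholders.

    import torch
    import torch.nn.functional as F

    def train_step(filter_net, secret_clf, util_clf_q, util_clf_x, batch, opts, alpha, prior_s):
        """One iteration of alternating updates in the spirit of Algorithm 1 / Eq. 8.

        filter_net implements the stochastic mapping q = Q_theta(x, z); the three classifiers
        estimate P(s|q), P(u|q), and P(u|x); opts is a dict of optimizers keyed by module;
        prior_s is the marginal distribution of the secret as a probability vector."""
        x, s, u = batch
        z = torch.randn(x.shape[0], filter_net.noise_dim, device=x.device)
        q = filter_net(x, z)

        # Steps 1-3: fit the three estimators by cross-entropy (filter held fixed).
        for name, clf, inp, tgt in [("secret", secret_clf, q.detach(), s),
                                    ("util_q", util_clf_q, q.detach(), u),
                                    ("util_x", util_clf_x, x, u)]:
            loss = F.cross_entropy(clf(inp), tgt)
            opts[name].zero_grad()
            loss.backward()
            opts[name].step()

        # Step 4: update the filter so P(u|q) stays close to P(u|x) while P(s|q) is
        # pushed toward the prior over S (one reading of the two terms in Eq. 8).
        log_pu_q = F.log_softmax(util_clf_q(q), dim=-1)
        pu_x = F.softmax(util_clf_x(x), dim=-1).detach()
        utility_term = F.kl_div(log_pu_q, pu_x, reduction="batchmean")
        log_ps_q = F.log_softmax(secret_clf(q), dim=-1)
        privacy_term = F.kl_div(log_ps_q, prior_s.expand_as(log_ps_q), reduction="batchmean")
        filter_loss = (1.0 - alpha) * utility_term + alpha * privacy_term
        opts["filter"].zero_grad()
        filter_loss.backward()
        opts["filter"].step()
        return float(filter_loss)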
The remaining steps of Algorithm 2 mirror Algorithm 1: take a stochastic gradient descent step on P η (s | q) via η ← η − lr∇ η H(η), evaluate the sanitization loss, and take a stochastic gradient descent step on Q θ (x, z), repeating until convergence. Algorithm 3 optimizes the loss Θ(θ), which again weights its two terms by (1 − α) and α; since both the utility and secret estimators are fixed in that scenario, each iteration simply evaluates the sanitization loss and takes a stochastic gradient descent step on Q θ (x, z), until convergence. Figure 8: Architecture of privacy filter, based on UNET. There is a single noise layer (shown in yellow) where standard Gaussian noise is injected into the network to make the resulting filtered image stochastic in nature. The other notable component is the auxiliary label softmax, used for the subject-within-subject experiment. This extra layer was trained only to initialize the network, but was not preserved during the final training stage. Input image sizes are shown for the subject-within-subject experiment. The architectures of the networks used to infer the utility and secret attributes in the emotion vs. gender experiment are identical, and are shown in Figure 9. Networks used for the experiments in Section 7.2 are shown in FIG0. All other networks used in the section are implemented as described in their respective papers.
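As an illustration of the noise-injecting filter described in the Figure 8 caption, the following is a minimal encoder-decoder sketch with a single Gaussian noise layer at the bottleneck. A full UNET would add skip connections at each resolution, and the layer sizes and class name here are illustrative assumptions rather than the architecture used in the experiments.

    import torch
    import torch.nn as nn

    class StochasticFilter(nn.Module):
        """Space-preserving image filter Q(x, z) with one Gaussian noise injection layer."""
        def __init__(self, channels=3, width=64, noise_dim=32):
            super().__init__()
            self.noise_dim = noise_dim
            self.encoder = nn.Sequential(
                nn.Conv2d(channels, width, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(width, width * 2, 4, stride=2, padding=1), nn.ReLU())
            # Noise is concatenated channel-wise at the bottleneck, making the output stochastic.
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(width * 2 + noise_dim, width, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(width, channels, 4, stride=2, padding=1), nn.Sigmoid())

        def forward(self, x, z=None):
            h = self.encoder(x)
            if z is None:
                z = torch.randn(x.shape[0], self.noise_dim, device=x.device)
            z_map = z[:, :, None, None].expand(-1, -1, h.shape[2], h.shape[3])
            return self.decoder(torch.cat([h, z_map], dim=1))

Because the output has the same shape and value range as the input image, the provider's existing utility and secret networks can consume sanitized and raw data interchangeably, which is the space-preserving property required above.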
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SJe2so0qF7
Learning privacy-preserving transformations from data. A collaborative approach
Solving tasks with sparse rewards is one of the most important challenges in reinforcement learning. In the single-agent setting, this challenge has been addressed by introducing intrinsic rewards that motivate agents to explore unseen regions of their state spaces. Applying these techniques naively to the multi-agent setting in agents exploring independently, without any coordination among themselves. We argue that learning in cooperative multi-agent settings can be accelerated and improved if agents coordinate with respect to what they have explored. In this paper we propose an approach for learning how to dynamically select between different types of intrinsic rewards which consider not just what an individual agent has explored, but all agents, such that the agents can coordinate their exploration and maximize extrinsic returns. Concretely, we formulate the approach as a hierarchical policy where a high-level controller selects among sets of policies trained on different types of intrinsic rewards and the low-level controllers learn the action policies of all agents under these specific rewards. We demonstrate the effectiveness of the proposed approach in a multi-agent gridworld domain with sparse rewards, and then show that our method scales up to more complex settings by evaluating on the VizDoom platform. Recent work in deep reinforcement learning effectively tackles challenging problems including the board game Go, Atari video games , and simulated robotic continuous control ; however, these successful approaches often rely on frequent feedback indicating whether the learning agent is performing well, otherwise known as dense rewards. In many tasks, dense rewards can be difficult to specify without inducing locally optimal but globally sub-optimal behavior. As such, it is frequently desirable to specify only a sparse reward that simply signals whether an agent has attained success or failure on a given task. Despite their desirability, sparse rewards introduce their own set of challenges. When rewards are sparse, determining which of an agent's actions led to a reward becomes more difficult, a phenomenon known in reinforcement learning as the credit-assignment problem. Furthermore, if rewards cannot be obtained by random actions, an agent will never receive a signal through which it can begin learning. As such, researchers have devised methods which attempt to provide agents with additional reward signals, known as intrinsic rewards, through which they can learn meaningful behavior . A large subset of these works focus on learning intrinsic rewards that encourage exploration of the state space (; ; ; ;). Exploring the state space provides a useful inductive bias for many sparse reward problems where the challenge lies in "finding" rewards that may only be obtained in parts of the state space that are hard to reach by random exploration. These exploration-focused approaches frequently formulate their intrinsic rewards to measure the "novelty" of a state, such that agents are rewarded for taking actions that lead to novel states. Our work approaches the question of how to apply novelty-based intrinsic motivation in the cooperative multi-agent setting. Directly applying novelty-based intrinsic motivation to the multi-agent setting in agents each exploring their shared state space independently from one another. In many cases, independent exploration may not be the most efficient method. 
For example, consider a task where multiple agents are placed in a maze and their goal is to collectively reach all of the landmarks that are spread out through the maze. It would be inefficient for the agents to explore the same areas redundantly. Instead, it would be much more sensible for agents to "divide-and-conquer," or avoid redundant exploration. Thus, an ideal intrinsic reward for this task would encourage such behavior; however, the same behavior would not be ideal for other tasks. For example, take the same maze but change the task such that all agents need to reach the same landmark. Divide-and-conquer would no longer be an optimal exploration strategy since agents only need to find one landmark and they all need to reach the same one. Cooperative multi-agent reinforcement learning can benefit from sharing information about exploration across agents; however, the question of what to do with that shared information depends on the task at hand. In order to improve exploration in cooperative multi-agent reinforcement learning, we must first identify what kinds inductive biases can potentially be useful for multi-agent tasks and then devise intrinsic reward functions that incorporate those biases. Then, we must find a way to allow our agents to adapt their exploration to the given task, rather than committing to one type of intrinsic reward function. In this work, we first introduce a candidate set of intrinsic rewards for multiagent exploration which hold differing properties with regards to how they explore the state space. Subsequently, we present a hierarchical method for simultaneously learning policies trained on different intrinsic rewards and selecting the policies which maximize extrinsic returns. Importantly, all policies are trained using a shared replay buffer, drastically improving the sample efficiency and effectiveness of learning in cooperative multi-agent tasks with sparse rewards. Single-Agent Exploration In order to solve sparse reward problems, researchers have long worked on improving exploration in reinforcement learning. To achieve these means, prior works commonly propose reward bonuses that encourage agents to reach novel states. In tabular domains, reward bonuses based on the inverse state-action count have been shown to be effective in speeding up learning . In order to scale count-based approaches to large state spaces, many recent works have focused on devising pseudo state counts to use as reward bonuses (; ;). Alternatively, some work has focused on defining intrinsic rewards for exploration based on inspiration from psychology . These works use various measures of novelty as intrinsic rewards including: transition dynamics prediction error , information gain with respect to a learned dynamics model , and random state embedding network distillation error . Multi-Agent Reinforcement Learning (MARL) Multi-agent reinforcement learning introduces several unique challenges that recent work has attempted to address. These challenges include: multi-agent credit assignment in cooperative tasks with shared rewards (; ;, non-stationarity of the environment in the presence of other learning agents , and learning of communication protocols between cooperative agents (; ;). Exploration in MARL While the fields of exploration in RL and multi-agent RL are popular, relatively little work has been done at the intersection of both. 
Prior work considers exploration with respect to opponent strategies in competitive games, as well as exploration of a large joint action space in a load balancing problem. Other work defines an intrinsic reward function for multi-agent reinforcement learning that encourages agents to take actions which have the biggest effect on other agents' behavior, otherwise referred to as "social influence," and further work defines metrics for evaluating the efficacy of reward functions in multi-agent domains. These works, while important, do not address the problem of exploring a large state space, and whether this exploration can be improved in multi-agent systems. A recent approach to collaborative evolutionary reinforcement learning shares some similarities with our approach. As in our work, the authors devise a method for learning a population of diverse policies with a shared replay buffer and dynamically selecting the best learner; however, their work is focused on single-agent tasks and does not incorporate any notion of intrinsic rewards. As such, this work is not applicable to sparse reward problems in MARL. Dec-POMDPs In this work, we consider the setting of decentralized POMDPs, which are used to describe cooperative multi-agent tasks. A decentralized POMDP (Dec-POMDP) is defined by a tuple (S, A, T, O, Ω, R, n, γ). In this setting we have n total agents. S is the set of global states in the environment, while O = ⊗_{i∈{1...n}} O_i is the set of joint observations for the agents and A = ⊗_{i∈{1...n}} A_i is the set of possible joint actions. A specific joint action at one time step is denoted as a = {a_1, ..., a_n} ∈ A and a joint observation is o = {o_1, ..., o_n} ∈ O. T is the state transition function which defines the probability P(s'|s, a), and Ω is the observation function which defines the probability P(o|a, s'). R is the reward function which maps the combination of state and joint actions to a single scalar reward. Importantly, this reward is shared between all agents, so Dec-POMDPs always describe cooperative problems. Finally, γ is the discount factor which determines how much the agents should favor immediate reward over long-term gain. Soft Actor-Critic Our approach uses Soft Actor-Critic (SAC) as its underlying algorithm. SAC incorporates an entropy term in the loss functions for both the actor and critic, in order to encourage exploration and prevent premature convergence to a sub-optimal deterministic policy. The policy gradient with an entropy term is computed as ∇_θ J(π_θ) = E_{(s,a)∼D}[∇_θ log π_θ(a|s)(−α log π_θ(a|s) + Q_ψ(s, a) − b(s))], where D is a replay buffer that stores past environment transitions, ψ are the parameters of the learned critic, b(s) is a state-dependent baseline (e.g. the state value function V(s)), and α is a reward scale parameter determining the amount of entropy in an optimal policy. The critic is learned with the loss L_Q(ψ) = E_{(s,a,r,s')∼D}[(Q_ψ(s, a) − y)²], with target y = r + γ E_{a'∼π(s')}[Q_ψ̄(s', a') − α log π(a'|s')], where ψ̄ are the parameters of the target critic, an exponential moving average of the past critics updated as ψ̄ ← (1 − τ)ψ̄ + τψ, and τ is a hyperparameter that controls the update rate. Centralized Training with Decentralized Execution A number of works in deep multi-agent reinforcement learning have followed the paradigm of centralized training with decentralized execution. This paradigm allows for agents to train while sharing information (or incorporating information that is unavailable at test time) but act using only local information, without requiring communication which may be costly at execution time.
Since most reinforcement learning applications use simulation for training, communication between agents during the training phase has a relatively lower cost. In this section we present a set of intrinsic reward functions for exploration that incorporate information about what other agents have explored. These rewards assume that each agent (indexed by i) has a novelty function f_i that determines how novel an observation is to it, based on its past experience. This function can be an inverse state visit count in discrete domains, or, in large/continuous domains, it can be represented by recent approaches for developing novelty-based intrinsic rewards in complex domains, such as random network distillation. Note that we assume that all agents share the same observation space so that each agent's novelty function can operate on all other agents' observations. In Table 1 we define the intrinsic rewards that we use in our experiments. INDEPENDENT rewards are analogous to single-agent approaches to exploration, which define the intrinsic reward for an agent as the novelty of its own new observation that occurs as a result of an action. The remaining intrinsic reward functions that we consider use the novelty functions of other agents, in addition to their own, to further inform their exploration. MINIMUM rewards consider how novel all agents find a specific agent's observation and reward that agent based on the minimum of these novelties. This method leads to agents only being rewarded for exploring areas that no other agent has explored, which could be advantageous in scenarios where redundancy in exploration is not useful or even harmful. COVERING rewards an agent for exploring areas that it considers more novel than the average agent. This reward results in agents shifting around the state space, only exploring regions as long as they are more novel to them than to their average teammate. BURROWING rewards do the opposite, only rewarding an agent for exploring areas that it considers less novel than the average agent. While seemingly counterintuitive, these rewards encourage agents to further explore areas they have already explored with the hope that they will discover new regions that few or no other agents have seen, which they will then consider less novel than average and continue to explore. As such, these rewards result in agents continuing to explore until they exhaust all possible intrinsic rewards from a given region (i.e. hit a dead end), somewhat akin to a depth-first search. LEADER-FOLLOWER uses burrowing rewards for the first agent, and covering rewards for the rest of the agents. This leads to an agent exploring a space thoroughly, and the rest of the agents following along and trying to cover that space. Note that these are not meant to be a comprehensive set of intrinsic reward functions applicable to all cooperative multi-agent tasks but rather a set of examples of how exploration can be centralized in order to take other agents into account. Our approach, described in the following sections, is agnostic to the type of intrinsic rewards used and, as such, can incorporate other reward types not described here, as long as they can be computed off-policy. For many tasks, it is impossible to know a priori which intrinsic reward will be the most helpful. Furthermore, the type of reward that is most helpful could change over the course of training if the task is sufficiently complex.
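To make the verbal descriptions above concrete, the following sketch computes each candidate reward from per-agent novelty values. Table 1 is not reproduced here, so the exact functional forms (in particular the thresholding used for COVERING and BURROWING, and whether the "average agent" includes the agent itself) follow our reading of the prose rather than the authors' definitions, and the function name is ours.

```python
import numpy as np

def intrinsic_reward(novelty, reward_type, agent_i):
    """Sketch of the candidate multi-agent intrinsic rewards.

    novelty: array of shape (n_agents,), where novelty[j] = f_j(o_i) is how
             novel agent i's new observation is *to agent j* (all agents share
             the same observation space, so every f_j can score o_i).
    reward_type: one of the variants described in the text.
    agent_i: index of the agent receiving the reward.
    """
    own = novelty[agent_i]            # f_i(o_i): the single-agent (independent) signal
    team_mean = novelty.mean()        # novelty of o_i to the "average agent"

    if reward_type == "independent":
        return own
    if reward_type == "minimum":
        # rewarded only for areas that *no* agent has explored yet
        return novelty.min()
    if reward_type == "covering":
        # rewarded only where the observation is more novel to me than to the average agent
        return own if own > team_mean else 0.0
    if reward_type == "burrowing":
        # rewarded only where the observation is less novel to me than to the average agent
        return own if own < team_mean else 0.0
    if reward_type == "leader_follower":
        # first agent burrows, the others cover the leader's territory
        sub = "burrowing" if agent_i == 0 else "covering"
        return intrinsic_reward(novelty, sub, agent_i)
    raise ValueError(f"unknown reward type: {reward_type}")
```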
In this section we present our approach for simultaneously learning policies trained with different types of intrinsic rewards and dynamically selecting the best one. Simultaneous Policy Learning In order to learn policies for various types of intrinsic rewards in parallel, we utilize a shared replay buffer and off-policy learning to maximize sample efficiency. In other words, we learn policies and value functions for all intrinsic reward types from all collected data, regardless of which policies the data were collected by. This parallel learning is made possible by the fact that we can compute our novelty functions off-policy, given the observations for each agent after each environment transition, which are saved in a replay buffer. For each type of reward, we learn a different "head" for our policies and critics. In other words, we learn a single network for each agent's set of policies that shares early layers and branches out into different heads for each reward type. For critics, we learn a single network across all agents that shares early layers and branches out into separate heads for each agent and reward type. We learn separate heads for intrinsic and extrinsic rewards, as in prior work. We provide a diagram of our model architecture in Figure 1. We index agents by i ∈ {1 ... n} and intrinsic reward types by j ∈ {1 ... m}, where m is the total number of intrinsic reward types that we are considering. The policy for agent i, trained using reward j (in addition to extrinsic rewards), is represented by π_i^j; its parameters consist of a shared base/input (for agent i) in a neural network and a head/output θ_i^j specific to this reward type. The extrinsic critic for policy head π_i^j is parameterized analogously. We remove the symbols representing the parameters of the policies (Θ) and the critics (Ψ) for readability. In our notation we use the absence of a subscript or superscript to refer to a group. For example, π^j refers to all agents' policies trained on intrinsic reward j. We train our critics with a loss function adapted from soft actor-critic, in which Q̄ refers to the target Q-function, an exponentially weighted average of the past Q-functions used for stability, and π̄ are similarly updated target policies. The intrinsic rewards laid out in Table 1 are represented as a function of the observation that results from the action taken, r_{i,j}^{in}(o_i), where j specifies the type of reward. Importantly, we can calculate these loss functions for expected intrinsic and extrinsic returns for all policies given a single environment transition, allowing us to learn multiple policies for each agent in parallel. We train each policy head with a policy gradient in which β is a scalar that determines the weight of the intrinsic rewards relative to extrinsic rewards, and A_i^j is a multi-agent advantage function, used to help with multi-agent credit assignment. Dynamic Policy Selection Now that we have established a method for simultaneously learning policies using different intrinsic reward types, we must devise a means of selecting between these policies. In order to select policies to use for environment rollouts, we must consider which policies maximize extrinsic returns, while taking into account the fact that there may still be "unknown unknowns," or regions that the agents have not seen yet where they may be able to further increase their extrinsic returns.
As such, we must learn a meta-policy that, at the beginning of each episode, selects between the different sets of policies trained on different intrinsic rewards and maximizes extrinsic returns without collapsing to a single set of policies too early. We parameterize the selector policy Π with a vector, φ, that contains an entry for every reward type. The probability of sampling head j is: Π(j) ∝ exp(φ[j]). Unlike the action policies, this high-level policy does not take any inputs, as we simply want to learn which set of policies trained on the individual intrinsic reward functions has the highest expected extrinsic returns from the beginning of the episode. The most sensible metric for selecting policies is the expected extrinsic return given by each policy head. We can use policy gradients to train the policy selector, Π, to maximize this value using the returns received when performing rollouts in the environment. The gradient used to train Π includes a baseline µ_h, a running mean of the returns received by head h in the past, and an entropy term weighted by η, a parameter similar to α for the low-level policies, which promotes entropy in the selector policy. Entropy in the policy selector is important in order to prevent it from collapsing onto a single exploration type that does well at first but does not continue to explore as effectively as others. As such, we can learn a diverse set of behaviors based on various multi-agent intrinsic reward functions and select the one that maximizes performance on the task at hand at any point during training, while continuing to consider other policies that may lead to greater rewards. We begin by describing our evaluation domains and then present experimental results which demonstrate the effectiveness of our approach. We provide additional details in the appendix and will share code for both the model and environments. We use a maximum of four agents in gridworld and two agents in VizDoom. We encode several tasks in both domains related to collecting the items (displayed in yellow in Figure 2), each of which requires a different type of exploration: TASK 1 Agents must cooperatively collect all treasure on the map in order to complete the task; TASK 2 Agents must all collect the same treasure. The first agent to collect a treasure during an episode determines the goal for the rest of the agents. TASK 3 Agents must all collect the specific treasure that is assigned to them. The two agent version of each task uses agents 1-2 and treasures A-B, while the three agent versions use 1-3 and A-C, and the four agent versions use 1-4 and A-D. Agents receive a negative time penalty towards extrinsic rewards at each step, so they are motivated to complete the task as quickly as possible. The only positive extrinsic reward comes from any agent collecting a treasure that is allowed by the specific task, and rewards are shared between all agents. The optimal strategy in TASK 1 is for agents to spread out and explore separate portions of the map, while in TASK 2 they should explore the same areas, and in TASK 3 they should explore independently. We first test our approach using a multi-agent gridworld domain (pictured in Fig. 2a), which allows us to design environments where the primary challenge lies in a combination of exploring the state space efficiently and coordinating behaviors. The environment includes two sources of stochasticity: random transitions and black holes. At each step there is a 10% chance of an agent's action being replaced by a random one.
Furthermore, there are several "black holes" placed around the map which have a probability of opening at each time step. This probability changes at each step using a biased random walk such that it moves toward one, until the hole opens and it resets to zero. If an agent steps into a black hole when it is open, it will be sent back to its starting position. The spaces colored as black are holes that are currently open, while the gray spaces are holes that have the possibility of opening at the next step (the darker they are, the higher the probability). We set the rate of black holes dropping out to be higher in TASK 1 than the other 2 tasks, in order to balance the difficulty. The novelty function for each agent, f_i, which is used for calculating the intrinsic rewards in Table 1, is defined as 1/N^ζ, where N is the number of times that the agent has visited its current cell and ζ is a decay rate selected as a hyperparameter (we find that ζ = 0.7 works well for our purposes). In order to test our method's ability to scale to more complex environments with similarly challenging exploration tasks, we implement tasks analogous to those in our gridworld environment (i.e. extrinsic rewards are defined identically) in the VizDoom framework. We use the "My Way Home" map, which has been used as a test bed for single agent exploration techniques, and modify it for multi-agent tasks (pictured in Figure 2b). Since the agents are moved to a central location closer to their rewards than in the original map, we lower the action repeat from 4 to 2, in order to force agents to take twice as many steps in order to explore the same areas, maintaining the challenging nature of exploration in the original task. As in the gridworld setting, we use count-based intrinsic rewards for VizDoom; however, since VizDoom is not a discrete domain, we separate agents' (x, y) positions into discrete bins and use the counts for these bins. We again find ζ = 0.7 to work well in our experiments. Figure 3 (caption): Shaded region is a 68% confidence interval across 6 runs of the running mean over the past 100 episodes. Our approach (MULTI-EXPLORATION) is competitive with the best individual intrinsic reward function, using the same number of environment samples without any prior knowledge provided. (Right) Ablations of our model in the same setting. We show that both aspects of our approach (the meta-policy selector and the diverse intrinsic reward functions) are crucial for successful completion of exploration tasks requiring coordination. Figure 3a demonstrates the results of our approach over the course of training on the 2 agent version of TASK 1 in gridworld, and the final results on each task/agent/domain combination can be found in Table 2. The full training curves for all settings can be found in the appendix (Section A.4). We train a team of agents using each of the multi-agent intrinsic reward functions defined in Table 1 individually, and then test our dynamic policy selection approach. We find that our approach is competitive with, or outperforms, the best performing individual exploration method in nearly all tasks. This performance is exciting since our method receives no prior information about which type of exploration would work best, while each type carries its own inductive bias. Notably, our learned policy selector learns to select the policies trained on intrinsic rewards that do well individually on the tasks.
For instance, on TASK 1 with 2 agents, we find that our policy selector consistently selects BURROWING and MINIMUM rewards, the two best performing reward functions on that task. Furthermore, we find that our results on the more complex VizDoom domain mirror those in the gridworld, indicating that our methods are not limited to discrete domains, assuming that a reliable way of measuring the novelty of observations exists. Interestingly, our approach is sometimes able to significantly surpass the performance of the best individual reward function on TASK 3. This task requires agents to collect the specific reward assigned to them, so we expect independent exploration to be the most effective; however, exploration types that perform "divide-and-conquer" type behavior such as BURROWING and MINIMUM have the potential to drastically speed up the exploration process if they happen to divide the space correctly, leading to a stark success-failure contrast in runs of these types. Since our method MULTI can select policies trained on these rewards, and otherwise fall back on INDEPENDENT policies if they are not working, we find that our method is able to surpass all individual reward types. We find that our approach is unable to match the performance of the best individual method on TASK 2 in some settings (gridworld with 3 agents and VizDoom). This lack of success may be an indication that these particular settings require commitment to a specific exploration strategy early on in training, highlighting a limitation of our approach. Our method requires testing out all policies until we find one that reaches high extrinsic rewards, which can dilute the effectiveness of exploration early on. In order to better understand how each reward type encourages agents to explore the state space, we visualize their exploration in videos, viewable at the anonymized link below. INDEPENDENT rewards, as expected, result in agents exploring the whole state space without taking other agents into consideration. As a result, on TASK 1, which requires coordination between agents to spread out and explore different areas, INDEPENDENT rewards struggle; however, on TASK 3, where agents receive individualized goals, independent exploration usually performs better, relative to the other methods. TASK 2 also requires coordination, but the rate of black holes dropping out in the gridworld version is lower on that task, making exploration easier. As a result, INDEPENDENT rewards perform well on TASK 2; however, we find that LEADER-FOLLOWER also performs well on this task, especially when more agents are involved, indicating that these rewards do a good job of biasing agents toward exploring similar regions of the environment. MINIMUM rewards prevent agents from exploring the same regions redundantly but can lead to situations where one of the agents is the first to explore all regions that provide sparse extrinsic rewards. In these cases, other agents are not aware of the extrinsic rewards and are also not motivated to explore for them since another agent has already done so. COVERING rewards, as expected, lead to behavior where agents are constantly switching up the regions that they explore. While this behavior does not prove to be useful in the tasks we test since the switching slows down overall exploration progress, it may be useful in scenarios where agents are required to spread out. Finally, BURROWING rewards cause agents to each explore different subregions and continue to explore those regions until they exhaust their options.
This behavior is particularly effective on TASK 1, where agents are best served by spreading out and exploring the whole map in a mutually exclusive fashion. Ablations We compare to a baseline meta-policy which simply selects the action policies uniformly at random. We find that our approach is significantly superior to this baseline (see Figure 3b Multi (Uniform Meta-Policy)). Furthermore, we test a version of our method where all policies (with different random initializations) are trained on independent rewards (Multi (All Independent)). The purpose of this ablation is to test the degree to which the specific multi-agent intrinsic reward functions are helpful, as opposed to simply providing multiple options at each episode. Again, we find that our method outperforms the baseline, indicating that both aspects of our approach (diverse intrinsic reward functions which share information across agents, and a meta-policy selector that maximizes extrinsic rewards) are crucial for success in multi-agent exploration tasks. We perform two further ablations/comparisons. Results on task 1 with 2 agents in GridWorld are viewable in Figure 3b, and on tasks 2 and 3 with 2 agents are viewable in the Appendix (A.5). In the first (Centralized) we compute intrinsic rewards under the assumption that all agents are treated as one agent. In other words, we use the inverse count of the number of times that all agents have jointly taken up their combined positions, rather than considering agents independently. While this reward function will ensure that the global state space is thoroughly searched, it lacks the inductive biases toward spatial coordination that our reward functions incorporate. As such, it does not learn as efficiently as our method. In the second (Multi (No Entropy)) we remove the entropy term from the head selector loss function in order to test its importance. We find that this ablation is unable to match the performance of the full method, indicating that entropy is crucial in making sure that our method does not converge early to a suboptimal policy selector. We propose a set of multi-agent intrinsic reward functions with differing properties, and compare them both qualitatively (through videos) and quantitatively on several multi-agent exploration tasks in both a gridworld domain as well as in VizDoom. Overall, we can see that cooperative multi-agent tasks can, in many cases, benefit from intrinsic rewards that take into account what other agents have explored, but there are various ways to incorporate that information, each with differing properties. As such, we propose a method for learning policies for all intrinsic reward types simultaneously while dynamically selecting the most effective ones. We show that our method is capable of matching or surpassing the performance of the best performing intrinsic reward type on various tasks while using the same number of samples collected from the environment. In future work we hope to introduce methods for directly learning the multi-agent intrinsic reward functions, rather than selecting from a set. The black holes which send agents back to their starting positions if they are stepped into are an important aspect of the environment, as they add difficulty to exploration. The probability, ρ, of a black hole opening at each step, t, evolves as such: ρ t+1 = ρ t + N (µ, σ), where µ = σ = 0.05 for TASK 1 and µ = σ = 0.005 for 2 and 3. 
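As a small illustration of this environment dynamic, the sketch below implements the biased random walk over the black-hole opening probability as we read it from the description above (drift toward one, reset to zero once the hole opens). The class and attribute names are ours, and clipping the probability to [0, 1] is an assumption.

```python
import numpy as np

class BlackHole:
    """Biased random walk over a hole's opening probability, per the appendix description."""
    def __init__(self, mu=0.05, sigma=0.05, rng=None):
        # mu = sigma = 0.05 for TASK 1; 0.005 for TASKS 2 and 3
        self.mu, self.sigma = mu, sigma
        self.rng = rng or np.random.default_rng()
        self.p_open = 0.0  # probability of opening at the current step

    def step(self):
        """Advance one time step; returns True if the hole opens."""
        opened = self.rng.random() < self.p_open
        if opened:
            self.p_open = 0.0  # resets to zero after the hole opens
        else:
            # positive-mean increment biases the walk toward one (clipped, assumed)
            delta = self.rng.normal(self.mu, self.sigma)
            self.p_open = float(np.clip(self.p_open + delta, 0.0, 1.0))
        return opened
```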
Agents observe their global position in (x, y) coordinates (scalars), as well as local information regarding walls in adjacent spaces, the probability of their adjacent spaces opening into a black hole, the relative position of other agents (if they are within 3 spaces), as well as information about which treasures the agent has already collected in the given episode. The global state is represented by the (x, y) coordinates of all agents, as one-hot encoded vectors for x and y separately, as well as the local information of all agents regarding black holes, walls, and treasures collected. Each agent's action space consists of the 4 cardinal directions as well as an option to not move, which is helpful in cases where an agent is waiting for a black hole to be safe to cross. In VizDoom, agents receive their egocentric view (Figure 2c) in the form of 48x48 grayscale images as observations, along with an indicator of which agents (if any) have collected each reward, and we use a vector-based global state which includes all agents' (x, y) positions and velocities, their orientations, as well as the same indicator of which agent has collected each reward. As in the gridworld setting, we use count-based intrinsic rewards for VizDoom; however, since VizDoom is not a discrete domain, we separate agents' (x, y) positions into discrete bins and use the counts for these bins. There are 30 bins in the x dimension and 26 in the y dimension. (x, y) positions in the global state are represented both as scalars and one-hot vectors indicating which bin the agents are currently occupying. Each agent can choose from 3 actions at each time step: turn left, turn right, or go forward. The training procedure is detailed in Algorithm 1, and all hyperparameters are listed in Tables 3 and 4. Hyperparameters were selected by tuning one parameter at a time through intuition on task 1 with 2 agents and then applying to the rest of the settings with minimal changes. Where hyperparameters differ between settings, we make a footnote denoting them as such. Algorithm 1: Training Procedure for Multi-Explore with Soft Actor-Critic (pseudocode given in the original appendix). The image encoder for VizDoom observations consists of four blocks of ReflectionPadding(size=1), Conv2D(out_channels=32, filter_size=3, stride=2), and ReLU, where the first block takes img_obs_channels input channels and each subsequent block takes the previous block's 32 channels, followed by a Flatten of the convolutional output, a Linear layer with output dimension 128, and a ReLU. A.5 MORE ABLATIONS In this section we consider two ablations/comparisons to our model across all three tasks in the 2 agent version of gridworld. In the first (Centralized) we compute intrinsic rewards under the assumption that all agents are treated as one agent. In other words, we use the inverse count of the number of times that all agents have jointly taken up their combined positions, rather than considering agents independently.
While this reward function will ensure that the global state space is thoroughly searched, it lacks the inductive biases toward spatial coordination that our reward functions incorporate. As such, it does not learn as efficiently as our method in any of the three tasks. In the second (Multi (No Entropy)) we remove the entropy term from the head selector loss function in order to test its importance. We find that this ablation is unable to match the performance of the full method, indicating that entropy is crucial in making sure that our method does not converge early to a suboptimal policy selector. In Figure 19 we analyze the behavior of the meta-policy in two separate runs. We evaluate on Task 3, since we find that our method is able to surpass the best individual reward function. This task assigns specific goals to each agent. As such, one might expect that independent exploration would work most effectively in this setting. While independent exploration is effective (see Figure 10), we find that our method can outperform it. In both runs, we find that burrowing rewards are selected when the agents finally learn how to solve the task; however, we find that burrowing rewards are not necessarily successful when deployed on their own. This lack of success is likely due to the fact that these rewards cause the agents to pick a region and commit to exploring it for the duration of training. As such, the agents may pick the "wrong" region at first and never be able to recover. On the other hand, using our method, the meta-policy can wait until the burrowing exploration regions align with the assigned rewards and then select the policies trained on these rewards. This usually ends up being more efficient than waiting for the agents to explore the whole map using independent rewards.
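To make the head-selection mechanism analyzed above concrete, the following sketch implements one plausible version of the meta-policy: a parameter vector φ defines a softmax over reward types, and it is updated with a REINFORCE-style gradient that uses a running mean µ_h of each head's past returns as a baseline plus an entropy bonus weighted by η. The exact gradient expression is not reproduced in the text, so this update (and the default hyperparameter values) should be read as our interpretation rather than the authors' implementation.

```python
import numpy as np

class HeadSelector:
    """Sketch of the meta-policy that picks which intrinsic-reward head to roll out."""
    def __init__(self, n_heads, lr=0.05, eta=0.01, baseline_decay=0.9):
        self.phi = np.zeros(n_heads)   # Pi(j) proportional to exp(phi[j])
        self.mu = np.zeros(n_heads)    # running mean of extrinsic returns per head
        self.lr, self.eta, self.decay = lr, eta, baseline_decay

    def probs(self):
        z = np.exp(self.phi - self.phi.max())
        return z / z.sum()

    def sample(self):
        return int(np.random.choice(len(self.phi), p=self.probs()))

    def update(self, head, ext_return):
        """REINFORCE-style update after an episode rolled out with `head`."""
        p = self.probs()
        advantage = ext_return - self.mu[head]            # baseline: running mean of past returns
        self.mu[head] = self.decay * self.mu[head] + (1 - self.decay) * ext_return
        # gradient of log Pi(head) w.r.t. phi under the softmax parameterization
        grad_logp = -p
        grad_logp[head] += 1.0
        # entropy bonus keeps the selector from collapsing onto one head too early
        logp = np.log(p + 1e-8)
        entropy_grad = -p * (logp - (p * logp).sum())
        self.phi += self.lr * (advantage * grad_logp + self.eta * entropy_grad)
```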
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rkltE0VKwH
We propose several intrinsic reward functions for encouraging coordinated exploration in multi-agent problems, and introduce an approach to dynamically selecting the best exploration method for a given task, online.
Object recognition in the real world requires handling long-tailed or even open-ended data. An ideal visual system needs to reliably recognize the populated visual concepts and meanwhile efficiently learn about emerging new categories with a few training instances. Class-balanced many-shot learning and few-shot learning tackle one side of this problem, via either learning strong classifiers for populated categories or learning to learn few-shot classifiers for the tail classes. In this paper, we investigate the problem of generalized few-shot learning (GFSL): during deployment, a model is required to not only learn about "tail" categories with few shots, but simultaneously classify the "head" and "tail" categories. We propose Classifier Synthesis Learning (CASTLE), a learning framework that learns how to synthesize calibrated few-shot classifiers in addition to the multi-class classifiers of "head" classes, leveraging a shared neural dictionary. CASTLE sheds light upon inductive GFSL by optimizing one clean and effective GFSL learning objective. It demonstrates superior performance to existing GFSL algorithms and strong baselines on MiniImageNet and TieredImageNet data sets. More interestingly, it outperforms previous state-of-the-art methods when evaluated on standard few-shot learning. Visual recognition for objects in the "long tail" has been an important challenge to address. We often have a very limited amount of data on those objects as they are infrequently observed and/or visual exemplars of them are hard to collect. As such, state-of-the-art methods (e.g., deep learning) cannot be directly applied due to their notorious demand for large amounts of annotated data. Few-shot learning (FSL) is mindful of the limited instances (i.e., shots) per "tail" concept, and attempts to address this challenging problem by distinguishing between the data-rich "head" categories as SEEN classes and data-scarce "tail" categories as UNSEEN classes. While it is difficult to build classifiers with data from UNSEEN classes, FSL leverages data from SEEN classes to extract inductive biases for effective classifier acquisition on UNSEEN ones. We refer the reader to the survey literature for an up-to-date overview of few-shot learning. This type of learning, however, creates a chasm in object recognition. Classifiers from many-shot learning for SEEN classes and those from few-shot learning for UNSEEN classes do not mix: they cannot be combined directly to recognize all object categories at the same time. In this paper, we study the problem of Generalized Few-Shot Learning (GFSL), which focuses on the joint classification of both data-rich and data-poor categories. In particular, our goal is for the model trained on the SEEN categories to be capable of incorporating limited UNSEEN class instances, and make predictions for test instances in both the "head" and "tail" of the entire distribution of categories. Figure 1 illustrates the high-level idea of our proposal, in contrast to standard few-shot learning. In contrast to prior works that focus on learning "head" and "tail" concepts in a transductive manner, our learning setup requires inductive modeling of the "tail", which is therefore more challenging as we assume no knowledge about the UNSEEN "tail" categories is available during the model learning phase. Figure 1 (caption): An illustration of generalized few-shot learning (GFSL). GFSL requires extracting an inductive bias from SEEN categories to facilitate efficient learning on few-shot UNSEEN "tail" categories, while maintaining discernibility on "head" classes.
To this end, we propose Classifier Synthesis Learning (CASTLE), where the few-shot classifiers are synthesized based on a shared neural dictionary across classes. Such synthesized few-shot classifiers are then used together with the many-shot classifiers. To this purpose, we create training scenarios by sampling a set of instances from SEEN categories, pretending that they come from UNSEEN ones, and applying the synthesized classifiers (based on these instances) as if they were many-shot classifiers to optimize multi-class classification together with the remaining many-shot SEEN classifiers. In other words, we construct few-shot classifiers to not only perform well on the few-shot classes but also to be competitive when used in conjunction with many-shot classifiers of populated classes. We argue that such highly contrastive learning can benefit few-shot classification with high discernibility in its learned visual embeddings (cf. Section 4.2 and Section 4.4). We empirically validate our approach on two standard benchmark data sets, MiniImageNet and TieredImageNet. The proposed approach retains competitive "head" concept recognition performance while outperforming existing approaches on few-shot learning and generalized few-shot learning. We highlight that CASTLE learns a better calibration between many-shot SEEN classifiers and synthesized UNSEEN classifiers, which naturally addresses the confidence mismatch phenomenon, i.e., SEEN and UNSEEN classifiers having different confidence ranges. We define a K-shot N-way classification task to be one with N classes to predict over and K training examples per class for learning. The training set (i.e., support set) is represented as D_train = {(x_i, y_i)}_{i=1}^{N×K}, where x_i ∈ R^D is an instance and y_i ∈ {0, 1}^N (i.e., a one-hot vector) is its label. Similarly, the test set is D_test and contains i.i.d. samples from the same distribution as D_train. From few-shot learning to generalized few-shot learning. In many-shot learning, where K is large, a classification model f: R^D → {0, 1}^N is learned by minimizing the expected loss E_{(x_i, y_i)∈D_train} ℓ(f(x_i), y_i). Here f is often instantiated as an embedding function φ(·) and a linear classifier Θ: f(x_i) = φ(x_i)Θ. The loss function ℓ(·, ·) measures the discrepancy between the prediction and the true label. On the other hand, few-shot learning (FSL) faces the challenge of transferring knowledge across visual concepts. It assumes two non-overlapping sets of SEEN (S) and UNSEEN (U) classes. During training, it has access to all SEEN classes for learning an inductive bias, which is then transferred to learn good classifiers on U rapidly with a small K. Generalized Few-Shot Learning (GFSL), different from FSL which neglects classification of the S classes, aims at building models that simultaneously predict over S ∪ U categories. As a result, such a model needs to deal with many-shot classification of the |S| SEEN classes alongside learning the |U| emerging UNSEEN classes. Meta-learning for few-shot learning. Meta-learning has been an effective framework for FSL in recent years. The main idea is to mimic the future few-shot learning scenario by optimizing a shared f across K-shot N-way tasks drawn from the SEEN class set S. In particular, a K-shot N-way task D_train^S sampled from S is constructed by randomly choosing N categories from S and K examples in each of them. A corresponding test set D_test^S (a.k.a. query set) is sampled from S to evaluate the resulting few-shot classifier f.
Therefore, we expect a classifier f that "generalizes" well on training few-shot tasks sampled from the SEEN classes to also "generalize" well on few-shot tasks drawn from the UNSEEN class set U. In this paper, we focus on embedding-based methods. Specifically, the classifier f is based on an embedding function, f = φ: R^D → R^d, which transforms input examples into a latent space with d dimensions. φ is learned to pull similar objects close while pushing dissimilar ones far away. For a test instance x_j, the embedding function φ makes a prediction based on a soft nearest neighbor classifier, in which sim(φ(x_j), φ(x_i)) measures the similarity between the test embedding φ(x_j) and each training embedding φ(x_i). When there is more than one instance per class, i.e., K > 1, instances in the same class can be averaged to assist in making a final decision. By learning a good φ, important visual features for few-shot classification are distilled, which will be used for few-shot tasks from the UNSEEN classes. The main idea of CASTLE includes a classifier composition model for synthesizing classifiers with the few-shot training data, and an effective learning algorithm that learns many-shot classifiers and few-shot classifiers (together with the composition model, end-to-end) at the same time. In Section 3.1, we introduce the classifier composition model, which uses the few-shot training data to query a common set of neural bases and then assembles the target "synthesized classifiers". In Section 3.2, we propose a unified learning objective that directly contrasts many-shot classifiers with few-shot classifiers, via constructing classification tasks over U ∪ S categories. It enforces the few-shot classifiers to explicitly compete against the many-shot classifiers during model learning, which leads to more discriminative few-shot classifiers in the GFSL setting. We base our classifier composition model on prior work on classifier synthesis. Different from that approach with a pre-fixed feature embedding, we use a learned embedding function and a neural dictionary. Here we define a dictionary as pairs of "key" and "value" embeddings, where each "key" and "value" is associated with a neural base, which is designed to encode shared primitives for composing classifiers of S ∪ U. Formally, the neural dictionary contains a set of |B| learnable bases B = {b_1, b_2, ..., b_|B|}, with each b_k ∈ R^d. The key and value for the dictionary are generated based on two linear projections U and V of elements in B. For instance, Ub_i and Vb_i represent the generated key and value embeddings. Denote by I[y_i = c] an indicator that selects instances of class c. To synthesize a classifier for a class c, we first compute the class signature as the embedding prototype, defined as the average embedding of all K shots of instances (in a K-shot N-way task): p_c = (1/K) Σ_{(x_i, y_i)∈D_train} I[y_i = c] φ(x_i). We then compute the coefficients α_c for assembling the classifier of class c, by measuring the compatibility score between the class signature p_c and the key embeddings Ub_k of the neural dictionary. The coefficient α_c^k is then normalized by the sum of compatibility scores over all |B| bases, and the normalized coefficients are used to convexly combine the value embeddings and synthesize the classifier. We formulate the classifier composition as a summation of the initial prototype embedding p_c and this residual component, i.e., w_c = p_c + Σ_k α̂_c^k (Vb_k), where α̂_c^k denotes the normalized coefficient. The synthesized classifier is then ℓ2-normalized and used for (generalized) few-shot classification.
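The classifier synthesis just described can be sketched as follows. Because Eqs. 3-5 are not reproduced in the text, the dot-product compatibility, the softmax normalization over bases, and the projection shapes below are our assumptions rather than the exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClassifierSynthesizer(nn.Module):
    """Sketch of CASTLE-style classifier synthesis from a neural dictionary."""
    def __init__(self, emb_dim, n_bases):
        super().__init__()
        self.bases = nn.Parameter(torch.randn(n_bases, emb_dim))  # B: |B| x d learnable bases
        self.key_proj = nn.Linear(emb_dim, emb_dim, bias=False)   # U
        self.val_proj = nn.Linear(emb_dim, emb_dim, bias=False)   # V

    def forward(self, support_emb, support_labels, n_way):
        """support_emb: (N*K, d) embeddings; support_labels: (N*K,) ints in [0, n_way)."""
        # class signatures: average embedding (prototype) per class
        protos = torch.stack([support_emb[support_labels == c].mean(dim=0)
                              for c in range(n_way)])              # (n_way, d)
        keys = self.key_proj(self.bases)                           # (|B|, d)
        vals = self.val_proj(self.bases)                           # (|B|, d)
        # compatibility between prototypes and keys, normalized over the bases
        att = F.softmax(protos @ keys.t(), dim=-1)                 # (n_way, |B|)
        residual = att @ vals                                      # convex combination of values
        return F.normalize(protos + residual, dim=-1)              # prototype + residual, l2-normalized
```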
Since both the embedding "key" and classifier "value" are generated based on the same set of neural bases, the dictionary encodes a compact set of latent features for a wide range of classes. We hope the learned neural bases contain a rich set of classifier primitives to be transferred to novel compositions of emerging visual categories. In addition to transferring knowledge from SEEN to UNSEEN classes as in FSL, in generalized few-shot learning the few-shot classifiers are required to do well when used in conjunction with many-shot classifiers. Therefore, a GFSL classifier f should have a low expected error in the following sense (Eq. 6). Suppose we have sampled a K-shot N-way few-shot learning task D_train^U, which contains |U| visual UNSEEN categories. For each task, the classifier f predicts a test instance in D_test^{S∪U} towards both the tail classes U and the head classes S. In other words, based on D_train^U and the many-shot classifiers Θ_S, a randomly sampled instance in S ∪ U should be effectively predicted. In summary, a GFSL classifier generalizes its joint prediction ability to S ∪ U given D_train^U and Θ_S during inference. Unified learning objective. CASTLE learns a generalizable GFSL classifier via training on the SEEN class set S. For each class s ∈ S, it keeps a many-shot classifier Θ_s (i.e., a linear classifier over the embedding function φ(·)). Next, we sample a "fake" K-shot N-way few-shot task from S, which covers a set C of categories. For the classes in C, we synthesize their classifiers W_C = {w_c | c ∈ C} as in Eq. 5. We treat the remaining S − C classes as the "fake" head classes, and use their corresponding many-shot classifiers Θ_{S−C}. They are combined with the synthesized classifiers W_C (from the few-shot classes C) to form the set of joint classifiers Ŵ = W_C ∪ Θ_{S−C} over all classes in S. Finally, we optimize the learning objective in Eq. 7, a classification loss over all SEEN classes computed with the joint classifiers Ŵ. Although the few-shot classifiers W_C are synthesized using only K training instances (cf. Eq. 3), they are optimized to jointly classify instances from all SEEN categories S. After minimizing the accumulated loss in Eq. 7 over multiple GFSL tasks, the learned model extends its discerning ability to UNSEEN classes so as to have low error in Eq. 6. During inference, CASTLE synthesizes the classifiers for UNSEEN classes based on the neural dictionary with their few-shot training examples, and makes a joint prediction over S ∪ U with the help of the many-shot classifiers Θ_S. Multi-classifier learning. A natural way to minimize Eq. 7 is to implement a stochastic gradient descent step in each mini-batch by sampling one GFSL task, which contains a K-shot N-way training set together with a set of test instances (x_j, y_j) from S. It is clear that increasing the number of GFSL tasks per gradient step can improve the optimization stability. Therefore, we propose an efficient implementation that utilizes a large number of GFSL tasks to compute gradients: from one sampled mini-batch we synthesize multiple sets of few-shot classifiers with different sets C, which are then applied to compute the averaged loss using Eq. 7. In the scope of this paper, CASTLE always uses multi-classifier learning unless explicitly mentioned otherwise. With this, we observed a significant speed-up in terms of convergence (cf. Section C.1 in the appendix for an ablation study).
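A schematic of the unified objective described above: few-shot classifiers are synthesized for a sampled "fake" subset C of SEEN classes and combined with the many-shot classifiers of the remaining classes, and a standard classification loss over all SEEN classes is computed (cf. Eq. 7). The cosine logits, temperature, and helper names below are assumptions, and `synthesizer` refers to the sketch from the previous section.

```python
import torch
import torch.nn.functional as F

def unified_gfsl_loss(embed, synthesizer, theta_seen,
                      batch_x, batch_y, fake_C, support_x, support_y):
    """One step of the unified GFSL objective (a sketch of Eq. 7).

    theta_seen: (|S|, d) many-shot classifiers for all SEEN classes.
    fake_C:     LongTensor of class indices pretended to be UNSEEN this step.
    support_x/y: the K-shot support instances for the classes in fake_C.
    batch_x/y:  query instances drawn from all SEEN classes.
    """
    # synthesize few-shot classifiers for the "fake" tail classes
    sup_emb = embed(support_x)                                    # (|C|*K, d)
    remap = {int(c): i for i, c in enumerate(fake_C.tolist())}    # map to 0..|C|-1
    sup_lbl = torch.tensor([remap[int(y)] for y in support_y])
    w_c = synthesizer(sup_emb, sup_lbl, n_way=len(fake_C))        # (|C|, d)

    # joint classifier set over all SEEN classes: synthesized heads replace
    # the many-shot classifiers of the classes in fake_C
    joint_w = F.normalize(theta_seen, dim=-1).clone()
    joint_w[fake_C] = w_c

    # cosine logits with a fixed temperature (both assumed), cross-entropy over all of S
    logits = F.normalize(embed(batch_x), dim=-1) @ joint_w.t()
    return F.cross_entropy(logits * 10.0, batch_y)
```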
Next, we provide an analysis over algorithms with alternative protocols that measures different aspects of GFSL (cf. Section 4.3). We verify that CASTLE is advantageous as it learns a better calibration between SEEN and UNSEEN classifiers. Finally, we show that CASTLE also benefit standard FSL performances (cf. Section 4.4). Data sets. We consider two benchmark data sets derived from ILSVRC-12 dataset . The miniImageNet dataset has 100 classes and 600 examples per class. For evaluation, we follow the split of Figure A5 of the Appendix provides an illustration of how data are split. Baselines and prior methods. We explore several (strong) choices in deriving classifiers for the SEEN and UNSEEN classes: Multiclass Classifier (MC) + kNN. A multi-class classifier is trained on the SEEN classes as standard many-shot classification . When evaluated on UNSEEN classes for few-shot tasks, we apply the learned feature embedding with a nearest neighbor classifier. ProtoNet + ProtoNet. We train Prototypical Network (a.k.a ProtoNet) on SEEN classes, pretending they were few-shot. When evaluated on the SEEN categories, we randomly sample 100 training instances per category to compute the class prototypes. We use the MC classifier's feature mapping to initialize the embedding function, and use the final embedding function for UNSEEN classes. The prediction is straightforward as both sets of classes are generated with ProtoNet. MC + ProtoNet. We combine the learning objective of and to jointly learn the MC classifier and feature embedding, which trades off between few-shot and many-shot learning. Besides IFSL (a), we also re-implemented existing approaches (or adapted the original release if available), i.e, L2ML and DFSL to compare with CASTLE. Note that L2ML is originally designed in the transductive setting, which we made some adaption for inductive prediction. Please refer to original papers for details. For CASTLE, we use the {Θ S} (i.e, the multiclass classifiers, cf. Section 3.2) for the SEEN classes and the synthesized classifiers for the UNSEEN classes to classify an instance into all classes, and then select the prediction with the highest confidence score. Evaluation measures. Mean accuracy over all SEEN and 5 sampled UNSEEN classes is the main measurement to evaluate a GFSL method . We sample 10,000 1-shot or 5-shot GFSL tasks to evaluate this for the sake of reliability. Besides the few-shot training examples, an equal number of test instances sampled from all head and 5 tail categories are used during the evaluation. The mean and 95% confidence interval are reported. In addition to accuracy, Ren et al. (2018a) also use ∆-value, a measure of average accuracy drop between predicting specific (SEEN or UNSEEN) class and predicting all categories jointly. Methods balance the prediction of SEEN and UNSEEN classes well can receive a low accuracy drop. In the later sections, we introduce two other GFSL measures --the harmonic mean accuracy and the area under SEEN-UNSEEN curve (AUSUC). Please refer to the Section A of the Appendix for more details about experimental setups, implementation details, model optimization, and evaluation measures 3. The main of all methods on miniImageNet is shown in Table 1. We found that CASTLE outperforms all the existing methods as well as our proposed baseline systems in terms of the mean accuracy. Meanwhile, when looked at the ∆-value, CASTLE is least affected between predicting for SEEN/USSEEN classes separately and predicting over all classes jointly. 
However, we argue that neither mean accuracy nor the ∆-value alone is informative enough to characterize a GFSL algorithm's performance. For example, a baseline system, i.e., ProtoNet + ProtoNet, performs better than IFSL in terms of 5-shot mean accuracy but not ∆-value. In this case, how shall we rank these two systems? To answer this question, we propose to use another evaluation measure, the harmonic mean of the mean accuracies over SEEN and UNSEEN categories when they are classified jointly. Harmonic mean is a better GFSL performance measure. Since the numbers of SEEN and UNSEEN classes are most likely not equal, e.g., 64 vs. 5 in our case, directly computing the mean accuracy over all classes is almost always biased. For example, a many-shot classifier that only classifies samples into SEEN classes can receive a better mean accuracy than one that recognizes both SEEN and UNSEEN. Therefore, we argue that the harmonic mean over the mean accuracies can better assess a classifier's performance, as the performance is now negatively affected when a classifier ignores classes (e.g., the MC classifier gets 0% harmonic mean). Specifically, we compute the top-1 accuracy for instances from SEEN and UNSEEN classes, and take their harmonic mean as the performance measure. The results are included on the right side of Table 1. Now we observe that the many-shot baseline MC+kNN has extremely low performance as it tends to ignore UNSEEN categories. Meanwhile, CASTLE remains the best when ranked by the harmonic mean accuracy against others. Evaluating GFSL beyond 5 UNSEEN categories. Besides using harmonic mean accuracy, we argue that another important aspect in evaluating GFSL is to go beyond the 5 sampled UNSEEN categories, as this is never the case in the real world. On the contrary, we care most about GFSL with a large number of UNSEEN classes. To this end, we evaluate GFSL with all available SEEN and UNSEEN categories over both MiniImageNet and TieredImageNet, and report the results in Table 2 and Table 3. We report the mean accuracy over SEEN and UNSEEN categories, as well as the harmonic mean accuracy of all categories. We observe that CASTLE outperforms all approaches in the UNSEEN and, more importantly, the ALL categories sections, across the two data sets. On the SEEN categories, CASTLE remains competitive against the ad hoc many-shot classifier (MC). In this section, we perform analyses to show that (1) tuning a good confidence calibration factor significantly improves the GFSL performance of baseline models, (2) CASTLE balances the confidence scores of SEEN and UNSEEN predictions, requiring no explicit calibration, and (3) CASTLE is consistently better than other approaches across an increasing number of "tail" categories. For more ablation studies about CASTLE, we refer readers to the Appendix (cf. Section C.1). Confidence calibration matters in GFSL. In generalized zero-shot learning, prior work has identified a significant prediction bias between the classification confidence of SEEN and UNSEEN classifiers. We find a similar phenomenon in GFSL. For instance, the ProtoNet + ProtoNet baseline is much more confident on SEEN categories than on UNSEEN categories (the scale of confidence is on average 2.1 times higher). To address this issue, we compute a calibration factor based on the validation set of UNSEEN categories, such that the prediction logits are calibrated by subtracting this factor from the confidence of SEEN categories' predictions. The results of all methods after calibration are shown in Figure 2.
We observe a consistent improvement in the harmonic mean accuracy for all methods, while CASTLE is the least affected. This suggests that CASTLE, learned with the unified GFSL objective, has a well-calibrated classification confidence and does not require additional data or an extra learning phase to search for this calibration factor. Moreover, we use the area under the SEEN-UNSEEN curve (AUSUC) as a measure of different GFSL algorithms. Here, AUSUC is a performance measure that factors out the effect of the calibration factor. To do so, we enumerate over a large range of calibration factors, and subtract each from the confidence scores of the SEEN classifiers. Through this process, the joint prediction performances over SEEN and UNSEEN categories, denoted as S → S ∪ U and U → S ∪ U, vary as the calibration factor changes. For instance, when the calibration factor is infinitely large, we are measuring a classifier that only predicts UNSEEN categories. We denote this as the SEEN-UNSEEN curve. The result is shown in Figure 3. As a result, we observe that CASTLE achieves the largest area under the curve, which indicates that CASTLE is in general a better algorithm than the others across different calibration factors. Robust evaluation of GFSL. Other than the harmonic mean accuracy of all SEEN and UNSEEN categories shown in Tables 2 and 3, we study how the harmonic mean accuracy changes with an increasing number of UNSEEN "tail" concepts. In other words, we show the GFSL performance w.r.t. different numbers of "tail" concepts. We use this as a robust evaluation of each system's GFSL capability. The 1-shot learning results are shown in Figure 4. We observe that CASTLE consistently outperforms other baselines by a clear margin. Finally, we also evaluate our proposed approach's performance on two standard few-shot learning benchmarks, i.e., the miniImageNet and TieredImageNet data sets. The results are shown in Table 4 and Table 5. We compare our approach to previous state-of-the-art methods (e.g., ProtoNet, LEO, and OptNet; see Tables 4 and 5 for the full numbers) and find CASTLE outperforming all of them, in both 1-shot 5-way and 5-shot 5-way accuracy. This supports our hypothesis that jointly learning with many-shot classification forces few-shot classifiers to be discriminative. Please refer to the Appendix for details about task setups, performance measures, and visualizations. Building a high-quality visual system usually requires a large-scale annotated training set with many shots per category. Many large-scale datasets such as ImageNet have an ample number of instances for popular classes. However, the data-scarce "tail" of the category distribution matters. For example, a visual search engine needs to deal with rare objects of interest (e.g., endangered species) or newly defined items (e.g., new smartphone models), which only possess a few data instances. Directly training a system over all classes is prone to over-fitting and can be biased towards the data-rich categories. Few-shot learning (FSL) is proposed to tackle this problem, via meta-learning an inductive bias from the SEEN classes, such that it transfers to the learning process of UNSEEN classes with little training data during model deployment.
Another line of works (; ; ; ;) chooses to learn a common initialization to a pre-specified model configuration and adapt rapidly using fixed steps of gradient descents over the few-shot training data from UNSEEN categories. FSL emphasizes on building models of the UNSEEN classes and ignore its real-world use case of assisting the many-shot recognition of the "'head" categories. A more realistic setting, i.e, low-shot learning, has been studied before (; ; ; ;). The main aim is to recognize the entire set of concepts in a transductive learning framework -during the training of the target model, you have access to both the SEEN and UNSEEN categories. The key difference to our proposed GFSL is that we assume no access to UNSEEN classes in the learning phase, which requires the model to inductively transfer knowledge from SEEN classes to UNSEEN ones during the evaluation. Previous approaches mostly focus on the transductive setup of GFSL. Some of them (; ;) apply the exemplar-based classification paradigms on both SEEN and UNSEEN categories to resolve the transductive learning problem. Others (; Schönfeld et al., 2018;) usually ignore the explicit relationship between SEEN and UNSEEN categories, and learn separate classifiers. Ren et al. (2018a); propose to solve inductive GFSL via either composing UNSEEN with SEEN classifiers or meta-leaning with recurrent back-propagation procedure. is the most related work to CASTLE, where we differ in how we compose classifiers and the unified learning objective, i.e, we used a learned neural dictionary instead of using MC classifiers as bases. In summary, CASTLE learns both many-shot classifiers and synthesized classifiers via optimizing a single unified objective function, where a classifier composition model with a neural dictionary is leveraged for assembling few-shot classifiers. Our experiments highlight that CASTLE not only outperforms existing methods in terms of GFSL performances from many different aspects, but more interestingly, also improves the classifier's discernibility over standard FSL. Following the recent methods (; ;), we use a residual network (ResNet) to implement the embedding backbone φ. We first pre-train this backbone network (also explored by (; ; ;) ) and perform model selection strategy similar to . To learn our methods as well as baseline systems, we then use Momentum SGD with an initial learning rate 1e-4. In the rest of this section, we explain each of the above with complete details. A.1 DATA SET DETAILS. Two benchmark data sets are used in our experiments. The MiniImageNet dataset is a subset of the ILSVRC-12 dataset . There are totally 100 classes and 600 examples in each class. For evaluation, we follow the split of and use 64 of 100 classes for meta-training, 16 for validation, and 20 for meta-test (model evaluation). In other words, a model is trained on few-shot tasks sampled from the 64 SEEN classes set during meta-training, and the best model is selected based on the few-shot classification performance over the 16 class set. The final model is evaluated based on few-shot tasks sampled from the 20 UNSEEN classes. The TieredImageNet (b) is a more complicated version compared with the miniImageNet. It contains 34 super-categories in total, with 20 for meta-training, 6 for validation, and 8 for model testing (meta-test). Each of the super-category has 10 to 30 classes. In detail, there are 351, 97, and 160 classes for meta-training, meta-validation, and meta-test, respectively. 
The divergence of the super-concept leads to a more difficult few-shot classification problem. Since both data sets are constructed by images from ILSVRC-12, we augment the meta-train set of each data set by sampling non-overlapping images from the corresponding classes in ILSVRC-12. The auxiliary meta-train set is used to measure the generalized few-shot learning classification performance on the SEEN class set. For example, for each of the 64 SEEN classes in the MiniImageNet, we collect 200 more non-overlapping images per class from ILSVRC-12 as the test set for many-shot classification. An illustration of the data set split is shown in Figure A5. Figure A5: The split of data in the generalized few-shot classification scenario. In addition to the standard data set like MiniImagetnet (blue part), we collect non-overlapping augmented "head" class instances from the corresponding categories in the ImageNet (red part), to measure the classification ability on the seen classes. Then in the generalized few-shot classification task, few-shot instances are sampled from each of the unseen classes, while the model should have the ability to predict instances from both the "head" and "tail" classes. Following the setting of most recent methods (; ;), we use the residual network to implement the embedding backbone φ. Different from the standard configuration, the literature (; ;) resize the input image to 80 × 80 × 3 for MiniImageNet (while 84 × 84 × 3 for TieredImageNet) and remove the first two down-sampling layers in the network. In concrete words, three residual blocks are used after an initial convolutional layer (with stride 1 and padding 1) over the image, which have channels 160/320/640, stride 2, and padding 2. After a global average pooling layer, it leads to a 640 dimensional embedding. The concrete architecture is visualized as Figure A15. Please refer to Pytorch documentation 4 for complete references of each building blocks. Before the meta-training stage, we try to find a good initialization for the embedding φ. In particular, on MiniImageNet we add a linear layer on the backbone output and optimize a 64-way (while 351-way for TieredImageNet) classification problem on the meta-training set with the cross-entropy loss function. Stochastic gradient descent with initial learning rate 0.1 and momentum 0.9 is used to complete such optimization. The 16 classes in MiniImageNet (resp. 97 classes in TieredImageNet) for model selection also assist the choice of the pre-trained model. After each epoch, we use the current embedding and measures the nearest neighbor based few-shot classification performance on the sampled few-shot tasks from these 16 (resp. 97) classes. The most suitable embedding function is recorded. After that, such learned backbone is used to initialize the embedding part φ of the whole model. In later sections, we will show the effect of pre-training strategy on both few-shot and generalized few-shot classification measures. We use the pre-trained backbone to initialize the embedding part φ of a model for CASTLE and our re-implemented comparison methods such as MC+kNN, ProtoNet+ProtoNet, MC+ProtoNet, L2ML , and DFSL . When there exists a backbone initialization, we set the initial learning rate as 1e-4 and optimize the model with Momentum SGD. The learning rate will be halved after optimizing 2,000 mini-batches. During meta-learning, all methods are optimized over 5-way few-shot tasks, where the number of shots in a task is consistent with the inference (meta-test) stage. 
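As a concrete illustration of the meta-training procedure described above, the snippet below sketches how an N-way K-shot episode can be sampled from a pool of SEEN classes. This is a minimal illustration with an assumed `data_by_class` structure, not the authors' data pipeline; the query-set size follows the 15-instances-per-class convention described later in this appendix.

```python
import random

def sample_episode(data_by_class, n_way=5, k_shot=1, n_query=15):
    """Sample an N-way K-shot episode from a dict {class_id: [instances]}."""
    classes = random.sample(sorted(data_by_class), n_way)
    support, query = [], []
    for label, c in enumerate(classes):
        instances = random.sample(data_by_class[c], k_shot + n_query)
        support += [(x, label) for x in instances[:k_shot]]   # K labeled instances per class
        query += [(x, label) for x in instances[k_shot:]]     # held-out instances for evaluation
    return support, query
```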
For example, if the goal is a 1-shot 5-way model, we sample 1-shot 5-way D test. An illustration of the architecture of CASTLE is shown in Figure A6. For CASTLE, we randomly sample a 24-way task from S in each mini-batch, and re-sample 64 5-way tasks from it. It is notable that all instances in the 24-way task are encoded by the ResNet backbone with same parameters in advance. Therefore, by embedding the synthesized 5-way few-shot classifiers into the global many-shot classifier, it in 64 different configurations of the generalized few-shot classifiers. To evaluate which we randomly sample instances with batch size 128 from S and compute the GFSL objective in Eq. 7. In this section, we provide details about the training and evaluation setups for the generalized few-shot learning, followed by concrete descriptions for comparison methods. Setup. We train a multi-class classifier on the populated SEEN classes following practices of training Residual Networks . Here a ResNet backbone network is used, identical to the ones described in Section A.2. During the training |S|-way classifiers are trained in a supervised learning manner. Training details. During the inference, test examples of S categories are evaluated based on the |S|-way classifiers and |U| categories are evaluated using the support embeddings from D U train with a nearest neighbor classifier. To evaluate the generalized few-shot classification task, we take the union of multi-class classifiers' confidence and ProtoNet confidence as joint classification scores on S ∪ U. Setup. We train a few-shot classifier (initialized by the MC classifier's feature mapping) using the Prototypical Network (a.k.a ProtoNet). The backbone network is the same ResNet as before. Training and inference. During the inference, we compute the class prototypes of SEEN classes via using 100 training instances per category. The class prototypes of UNSEEN classes are computed based on the sampled few-shot training set. During the inference of generalized few-shot learning, the confidence of a test instances is jointly determined by its (negative) distance to both SEEN and UNSEEN class prototypes. Setup. We combine the learning objective of the previous two baselines to jointly learn the MC classifier and feature embedding. Since there are two objectives for many-shot (cross-entropy loss on all SEEN classes) and few-shot (ProtoNet meta-learning objective) classification respectively, it trades off between many-shot and few-shot learning. Therefore, this learned model can be used as multi-class linear classifiers on the "head" categories, and used as ProtoNet on the "tail" categories. Training and inference. During the inference, the model predicts instances from SEEN class S with the MC classifier, while takes advantage of the few-shot prototypes to discern UNSEEN class instances. Figure A7: An illustration of the harmonic mean based GFSL evaluation. S and U denotes the SEEN and UNSEEN instances (x) and labels (y) respectively. S ∪ U is the joint set of S and U. The notation X → Y, X, Y ∈ {S, U, S ∪ U} means computing prediction with instances from X to labels of Y. By computing a performance measure (like accuracy) on the joint label space prediction of SEEN and UNSEEN instances separately, a harmonic mean is computed to obtain the final measure. To evaluate the generalized few-shot classification task, we take the union of multi-class classifiers' confidence and ProtoNet confidence as joint classification scores on S ∪ U. Setup. 
propose learning to model the "tail" (L2ML) by connecting a few-shot classifier with the corresponding many-shot classifier. The method is designed to learn classifier dynamics from data-poor "tail" classes to the data-rich "head" classes. Since L2ML is originally designed to learn with both SEEN and UNSEEN classes in a transductive manner, in our experiment, we adaptive it to out setting. Therefore, we learn a classifier mapping based on the sampled few-shot tasks from SEEN class set S, which transforms a few-shot classifier in UNSEEN class set U inductively. Training and inference. Following , we first train a many-shot classifier W upon the ResNet backbone on the SEEN class set S. We use the same residual architecture as in to implement the classifier mapping f, which transforms a few-shot classifier to a many-shot classifier. During the meta-learning stage, a S-way few-shot task is sampled in each mini-batch, which produces a S-way linear few-shot classifierŴ based on the fixed pre-trained embedding. The objective of L2ML not only regresses the mapped few-shot classifier f (Ŵ) close to the many-shot one W measured by square loss, but also minimize the classification loss of f (Ŵ) over a randomly sampled instances from S. Therefore, this learned model uses a pre-trained multi-class classifier W for those "head" categories, and used the predicted few-shot classifiers with f for the "tail" categories. Setup. Dynamic Few-Shot Learning without forgetting (DFSL) also adopts a generalized few-shot learning objective. It decomposes the GFSL learning with two stages. A cosine classifier together with the backbone is learned at first. The pre-trained cosine classifier is regarded as bases. Based on the fixed backbone, another attention-based network constructs the classifier for a particular class by a linear combination of the elements in the bases. Training and inference. We follow the strategy in to train the DFSL model. Based on the pre-trained backbone and cosine classifier, we construct a dictionary with size |S| whose elements correspond to each category in S. In each mini-batch of meta-training, we sample a few-shot task from the SEEN class set whose classes construct the set C. Then, an attention model composes the classifier for the few-shot task by weighting the |S| − |C| elements in the dictionary not corresponding to C. To evaluate the composed classifier, DFSL samples an equal number of instances from C and S − C for a test. For inference, we use the cosine classifier for "head" classes and composed few-shot classifier for "tail" classes. We take advantage of the auxiliary meta-train set from the benchmark data sets during GFSL evaluations, and an illustration of the data set construction can be found in Figure A5. The notation X → Y with X, Y ∈ {S, U, S ∪ U} means computing prediction with instances from X to labels of Y. For example, S → S ∪ U means we first filter instances come from the SEEN class set (x ∈ S), and predict them into the joint label space (y ∈ S ∪ U). For a GFSL model, we consider its performance with different measurements. An illustration of some criteria is shown in Figure A7. Many-shot accuracy. A model is required to predict the auxiliary SEEN class instances towards all SEEN classes (S → S). This is the same criterion with the standard supervised learning. Few-shot accuracy. Following the standard protocol (; ; ;), we sample 10,000 K-shot N -way tasks from U during inference. 
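Before the task-sampling details that follow, the joint S ∪ U scoring shared by the baselines described above can be summarised in code. The sketch below is an illustrative PyTorch snippet under assumed inputs (embeddings and SEEN-class logits), not the authors' implementation: SEEN classes are scored by the multi-class classifier, UNSEEN classes by negative distances to few-shot prototypes, and the two score blocks are concatenated over the joint label space. In practice the two score ranges may need the calibration discussed elsewhere in this appendix.

```python
import torch

def prototypes(support_emb, support_labels, num_classes):
    """Class prototypes = mean embedding of each class's support instances."""
    return torch.stack([support_emb[support_labels == c].mean(dim=0)
                        for c in range(num_classes)])

def joint_scores(query_emb, seen_logits, unseen_protos):
    """Joint S ∪ U scores: SEEN scores from the multi-class classifier,
    UNSEEN scores from (negative) Euclidean distances to few-shot prototypes."""
    d = torch.cdist(query_emb, unseen_protos)       # (Q, |U|)
    return torch.cat([seen_logits, -d], dim=1)       # (Q, |S| + |U|)

# Prediction over the joint label space:
#   pred = joint_scores(query_emb, seen_logits, protos).argmax(dim=1)
```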
In detail, we first sample N classes from U, and then sample K + 15 instances for each class. The first N K labeled instances (K instances from each of the N classes) are used to build the few-shot classifier, and the remaining 15N (15 instances from each of the N classes) are used to evaluate the quality of such few-shot classifier. During our test, we consider K = 1 and K = 5 as in the literature, and change N ranges from {5, 10, 15, . . ., |U|} as a more robust measure. It is noteworthy that in this test stage, all the instances come from U and are predicted to classes in U (U → U). Generalized few-shot accuracy. Different from many-shot and few-shot evaluations, the generalized few-shot learning takes the joint instance and label spaces into consideration. In other words, the instances come from S ∪ U and their predicted labels also in S ∪ U (S ∪ U → S ∪ U). This is obviously more difficult than the previous many-shot (S → S) and few-shot (U → U) tasks. During the test, with a bit abuse of notations, we sample K-shot S + N -way tasks from S ∪ U. Concretely, we first sample a K-shot N -way task from U, with N K training and 15N test instances respectively. Then, we randomly sample 15N instances from S. Thus in a GFSL evaluation task, there are N K labeled instances from U, and 30N test instances from S ∪ U. We compute the accuracy of S ∪ U as the final measure. Generalized few-shot ∆-value. Since the problem becomes difficult when the predicted label space expands from S → S to S → S ∪ U (and also U → U to U → S ∪ U), the accuracy of a model will have a drop. To measure how the classification ability of a GFSL model changes when working in a GFSL scenario, Ren et al. (2018a) propose the ∆-Value to measure the average accuracy drop. In detail, for each sampled GFSL task, we first compute its many-shot accuracy (S → S) and few-shot accuracy (U → U). Then we calculate the corresponding accuracy of SEEN and UNSEEN instances in the joint label space, i.e, S → S ∪ U and U → S ∪ U. The ∆-Value is the average decrease of accuracy in these two cases. Generalized few-shot harmonic mean. Directly computing the accuracy still gets biased towards the populated classes, so we also consider the harmonic mean as a more balanced measure . By computing performance measurement such as top-1 accuracy and sample-wise Mean Average Precision (MAP) for S → S ∪ U and U → S ∪ U, the harmonic mean is used to average the performance in these two cases as the final measure. An illustration is in Figure A7. Generalized few-shot AUSUC. propose a calibration-agnostic criterion for generalized zero-shot learning. To avoid evaluating a model influenced by a calibration factor between SEEN and UNSEEN classes, they propose to determine the range of the calibration factor for all instances at first, and then plot the SEEN-UNSEEN accuracy curve based on different configurations of the calibration values. Finally, the area under the SEEN-UNSEEN curve is used as a more robust criterion. We follow to compute the AUSUC value for sampled GFSL tasks. In this section, we first do ablation studies on the proposed CASTLE approach, and then provide additional for comparison methods in the GFSL evaluations. In this section, we aim to study the ablated variant of our approach and perform in-depth analyses. Effects on the neural dictionary size |B|. 
We show the effects of the dictionary size (as the ratio of SEEN class size) for the generalized few-shot learning (measured by harmonic mean accuracy when there are 5 UNSEEN classes) in Figure A8. We observe that the neural dictionary with a ratio of 2 or 3 works best amongst all other dictionary sizes. Therefore, we fix the dictionary size as 128 across all experiments. Note that when |B| = 0, our method degenerates to case optimizing the unified objective in Eq. 7 without using the neural dictionary. How well is synthesized classifiers comparing multi-class classifiers? To assess the quality of synthesized classifier, we made a comparison against ProtoNet and also the Multi-class Classifier on the "head" SEEN concepts. To do so, we sample few-shot training instances on each SEEN category to synthesize classifiers (or compute class prototypes for ProtoNet), and then use solely the synthesized classifiers/class prototypes to evaluate multi-class accuracy. The are shown in the Figure A9. We observe that the learned synthesized classifier outperforms over ProtoNet by a large margin. Also, the model trained with unified learning objective (ULO) improves over the vanilla synthesized classifiers. Note that there is still a significant gap left against multi-class classifiers trained on the entire data set. It suggests that the classifier synthesis we learned is effective against using sole instance embeddings while still far from the many-shot multi-class classifiers. Different choices of the classifier synthesis. As in Eq. 3, when there are more than one instance per class in a few-shot task (i.e K > 1), CASTLE compute the averaged embeddings first, and then use the prototype of each class as the input of the neural dictionary to synthesize their corresponding classifiers. Here we explore another choice to deal with multiple instances in each class. We synthesize classifiers based on each instance first, and then average the corresponding synthesized classifiers for each class. This option equals an ensemble strategy to average the prediction of each instance's synthesized classifier. We denote the pre-average strategy (the one used in CASTLE) as "Pre-AVG", and the post-average strategy as "Post-AVG". The 5-Shot 5-way classification on MiniImageNet for these two strategies are shown in Table A6. From the , "Post-AVG" does not improve the FSL and GFSL performance obviously. Since averaging the synthesized classifiers in a hindsight way costs more memory during meta-training, we choose the "Pre-AVG" option to synthesize classifiers when there are more than 1 shot in each class. What is the performance when evaluated with more UNSEEN classes? As mentioned in the analysis of the main text, we now give additional five-shot learning for the incremental evaluation of the generalized few-shot learning (together with one-shot learning ). In addition to the test instances from the "head" 64 classes in MiniImageNet, 5 to 20 novel classes are included to compose the generalized few-shot tasks. Concretely, 1 or 5 instances per novel class are used to construct the "tail" classifier, combined with which the model is asked to do a joint classification of both SEEN and UNSEEN classes. Figure A10 and Figure A11 record the change of generalized few-shot learning performance (harmonic mean) when more UNSEEN classes emerge. We observe that CASTLE consistently outperforms all baseline approaches in each evaluation setup, with a clear margin. How is multiple classifiers learning's impact over the training? (cf. 
Section 3) CASTLE adopts a multi-classifier training strategy, i.e considering multiple GFSL tasks with different combinations of classifiers in a single mini-batch. Here we show the influence of the multi-classifier training method based on their FSL and GFSL performance. Figure A12 and Figure A13 show the change of loss and harmonic mean accuracy (with 5 UNSEEN tasks) when training CASTLE with different number of classifiers based on a pre-trained backbone, respectively. It is obvious that training with multiple classifiers converges faster and generalizes better than the vanilla model, without increasing the computational burden a lot. A more detailed comparison for training with different numbers of classifiers is listed in Table A7, which verifies the effectiveness of the multi-classifier training strategy. In this subsection, we provide concrete values for the GFSL measurements on MiniImageNet. To avoid repetition, only the of 1-Shot GFSL tasks are listed. From Table A8 to Table A11, the number of ways of UNSEEN classes in an inference GFSL task varies from 5 to 20. In addition to the top-1 accuracy, the sample-wise mean average precision (MAP) is also calculated as a basic measure before harmonic mean. As shown in Figure A7, the harmonic mean is the harmonic average of the joint prediction performance of SEEN (S → S ∪ U) and UNSEEN (U → S ∪ U) instances. Although CASTLE cannot achieve high joint label space prediction on SEEN class instances (S → S ∪ U), its high harmonic mean performance from its competitive discerning ability on the joint prediction of UNSEEN instances (S → S ∪ U). Table A9: Concrete evaluation criteria for generalized few-shot classification measurements on MiniImageNet. The GFSL tasks are composed by 1-shot 10-Way UNSEEN class. "HM" denotes the harmonic mean. As mentioned before, to obtain better generalized few-shot learning performances, a confidence calibration procedure between predictions for S and U is necessary. We therefore tune this factor based on the validation UNSEEN classes (e.g in the MiniImageNet cases, we use 16 validation classes to compute this value) and then applied to the evaluation on test UNSEEN classes (e.g corresponding to the 20 test categories in MiniImageNet). Table A12: Concrete evaluation criteria for generalized few-shot classification measurements on MiniImageNet. The GFSL tasks are composed by 1-shot 5-Way UNSEEN class, and the harmonic mean is computed with a calibration factor. "HM" denotes the harmonic mean. As mentioned in the main text, now we show the complete details and more of the study with regard to the effects of calibration factors. The importance of the calibration factor has already been validated in ). We exactly follow the strategy in to complete the calibration by subtracting a bias on the prediction logits of all SEEN classes. In other words, different from the vanilla prediction, a calibration bias is subtracted from the confidence for SEEN classes, to make it balanced with the predictions for the unseen parts. In detail, we choose the range of the bias by sampling 200 generalized few-shot tasks composed by validation instances and record the difference between the maximum value of SEEN and UNSEEN logits. The averaged difference value is used as the range of the bias selection. 30 equally split calibration bias values are used as candidates, and the best one is chosen based on 500 generalized few-shot tasks sampled from the meta-validation set. 
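The calibration-bias selection just described can be sketched in a few lines. The code below is a minimal numpy illustration under assumed inputs, not the released evaluation code: the candidate range is derived from the average gap between the maximum SEEN and UNSEEN logits on validation tasks, and the bias maximising the validation harmonic mean is returned. The `task_metric` evaluator is an assumed helper.

```python
import numpy as np

def select_calibration_bias(val_tasks, task_metric, num_candidates=30):
    """Pick the bias subtracted from SEEN logits that maximises the validation metric.

    val_tasks:   list of (logits, labels, seen_mask) validation GFSL tasks
    task_metric: assumed function (logits, labels, seen_mask) -> harmonic-mean accuracy
    """
    # Range of candidate biases: average gap between max SEEN and max UNSEEN logits.
    gaps = [(l[:, m].max(axis=1) - l[:, ~m].max(axis=1)).mean() for l, _, m in val_tasks]
    hi = float(np.mean(gaps))
    candidates = np.linspace(0.0, hi, num_candidates)
    scores = [np.mean([task_metric(l - b * m, y, m) for l, y, m in val_tasks])
              for b in candidates]
    return candidates[int(np.argmax(scores))]
```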
As a result, we observe that calibrated methods show a consistent improvement in the harmonic mean of accuracy. The results are listed in Table A12 to Table A15, where the number of UNSEEN classes in a GFSL task ranges from 5 to 20. Compared with the results without the calibration factor in Tables A8-A11, the additional calibration step increases the joint prediction ability on UNSEEN instances considerably, and thereby improves the final harmonic mean measurement. CASTLE achieves similar results after using the calibration bias, especially when there are 5 UNSEEN classes. Therefore, CASTLE fits the generalized few-shot learning task and does not require an additional calibration step to balance the SEEN and UNSEEN predictions. To show the discriminative ability of the learned embedding, we visualize the embeddings of 6 randomly selected UNSEEN classes with 50 instances per class from MiniImageNet in Figure A14. The embeddings of four approaches, namely MC + kNN, ProtoNet + ProtoNet, MC + ProtoNet, and CASTLE, are shown. It can be seen that CASTLE captures the instance relationships of UNSEEN classes better than the others.
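For reference, the three headline GFSL measures used throughout these evaluations — the harmonic mean, the ∆-value, and AUSUC — reduce to a few lines of code. The sketch below is a minimal numpy illustration under assumed inputs (per-task accuracies and joint-label-space logits), not the released evaluation code.

```python
import numpy as np

def harmonic_mean(acc_seen_joint, acc_unseen_joint):
    """Harmonic mean of the S -> S∪U and U -> S∪U performances."""
    total = acc_seen_joint + acc_unseen_joint
    return 0.0 if total == 0 else 2 * acc_seen_joint * acc_unseen_joint / total

def delta_value(acc_seen, acc_seen_joint, acc_unseen, acc_unseen_joint):
    """Average accuracy drop when predictions move to the joint label space."""
    return 0.5 * ((acc_seen - acc_seen_joint) + (acc_unseen - acc_unseen_joint))

def ausuc(logits, labels, seen_mask, calibration_factors):
    """Area under the SEEN-UNSEEN curve: sweep a calibration bias subtracted
    from the SEEN logits and integrate the (S->S∪U, U->S∪U) accuracy curve."""
    from_seen = seen_mask[labels]
    seen_acc, unseen_acc = [], []
    for gamma in calibration_factors:
        pred = (logits - gamma * seen_mask).argmax(axis=1)
        correct = pred == labels
        seen_acc.append(correct[from_seen].mean())
        unseen_acc.append(correct[~from_seen].mean())
    order = np.argsort(seen_acc)
    return np.trapz(np.asarray(unseen_acc)[order], np.asarray(seen_acc)[order])
```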
We propose to learn to synthesize few-shot classifiers and many-shot classifiers using a single unified objective function for GFSL.
Machine learning workloads are often expensive to train, taking weeks to converge. The current generation of frameworks relies on custom back-ends in order to achieve efficiency, making it impractical to train models on less common hardware where no such back-ends exist. Knossos builds on recent work that avoids the need for hand-written libraries and instead compiles machine learning models in much the same way one would compile other kinds of software. In order to make the resulting code efficient, the Knossos compiler directly optimises the abstract syntax tree of the program. However, in contrast to traditional compilers that employ hand-written optimisation passes, we take a rewriting approach driven by the $A^\star$ search algorithm and a learned value function that evaluates the future potential cost reduction of applying various rewriting actions to the program. We show that Knossos can automatically learn optimisations that past compilers had to implement by hand. Furthermore, we demonstrate that Knossos can achieve wall-time reductions compared to a hand-tuned compiler on a suite of machine learning programs, including basic linear algebra and convolutional networks. The Knossos compiler has minimal dependencies and can be used on any architecture that supports a C++ toolchain. Since the cost model the proposed algorithm optimises can be tailored to a particular hardware architecture, the proposed approach can potentially be applied to a variety of hardware. While the development of any kind of software can benefit from compilers able to produce fast code, runtime efficiency is particularly important for modern machine learning. In particular, because modern models can take weeks to train, compiler optimisations that lead to execution speed-ups are of huge value. In parallel, machine learning is being deployed on a variety of diverse devices, ranging from wearables to huge clusters of powerful GPUs. Since each architecture has a different performance profile and requires different code optimisations, it is difficult to provide tooling that works fast on all of them. Traditionally, the tension between performance and interoperability is resolved by machine learning frameworks, in which code execution is outsourced to hardware-specific back-ends such as XLA (XLA authors, 2016). While this approach has seen huge initial success, the cost of providing customised back-ends for each target architecture is prohibitive. Moreover, the frameworks also impose custom front-ends that require the programmer to specify the model being trained as a compute graph. Since the compute graph has semantics separate from the host programming language, this process is often error-prone and time-consuming. In order to address these obstacles, a new generation of tools has recently appeared that transform machine learning code using the same techniques that have been used for compiling traditional software. The need for a separate front-end API for machine learning operations is eliminated by including automatic differentiation as a first-class feature of the compiled language. Instead of custom back-ends, modern machine learning compilers use an intermediate representation and perform extensive code optimisations (; ; van ; ; ;). In addition, program optimisation is being modelled as a machine learning task itself, with the compiler learning how to perform rewrites (b; a).
We formalize program optimisation as a finite-horizon Markov Decision Process (MDP), with the reward signal determined by the cost of executing a program. By solving this MDP, we are able to produce fast code tailor-made for any given task and architecture, without relying on backend-specific hand-written libraries. Knossos works by re-writing programs written in an intermediate representation (IR). Akin to JAX and Zygote , all Knossos functions are potentially differentiable, avoiding the syntactic awkwardness that arises from embedding a differentiable program in a host language. The IR can then be transpiled, allowing it to run on any platform that supports a C ++ toolchain. This allows Knossos code to be seamlessly deployed on specialized or embedded hardware without the need of manual tuning, both for training and for deployment of models, enabling a much broader user base than competing approaches. To our knowledge, Knossos is the first compiler that combines RL-based program optimisation, firstclass support for deep learning primitives and the ability to target any architecture supporting the C ++ toolchain. We defer detailed scope comparisons with prior work to Section 4. We empirically demonstrate the benefits of our program optimisation in Section 5, showing that Knossos was able to automatically learn loop fusion, a type of compiler optimisation that previously had to be applied manually. We model code optimisation as a finite-horizon Markov Decision Process (MDP). An MDP is defined as a tuple (S, A, T, R, H, p 0), where S denotes the state space, A denotes the action space, T denotes the transition dynamics, R denotes the rewards, H is the maximum time budget allowed to solve the problem (the horizon) and p 0 is a fixed probability distribution over initial states. We provide a detailed description of the states, transitions and rewards later on this section. States and transitions An MDP state s = (e s, t s) consists of a Knossos program (or expression) e ∈ E and the remaining time budget t s ∈ [0, 1, . . ., H] (i.e., the number of remaining steps), where H is the maximum budget. Any state with t s = 0 is terminating. The initial state distribution p 0 models the expressions that the RL agent is likely to be asked to optimize. A sample Knossos expression is shown in Fig. 1a. The action set A corresponds to different possible ways of rewriting the same expression (see Fig. 1b). The transition function T: S × A → S returns the next state after taking an action. For example, the first rule in Fig. 1b says that adding zero to any expression can be simplified to the expression itself. Once the action is chosen, the transition is deterministic. Because rewrite rules can be applied to different subexpressions, we specify A using generic rewrite rules, which are applied by pattern matching. There are over 50 rules like this -we provide the details in Appendix C. An essential feature of the rewrites is that they do not change the meaning of the program, i.e. by simplifying from one expression to another we also implicitly generate a proof that the expressions are equivalent. The RL agent maintains a policy π(a|s), which defines the probability of taking an action in state s given there are t s steps remaining till the total time budget is exhausted. A policy π generates rollouts τ π. A rollout τ π is defined as a sequence of states, actions and rewards obtained from the MDP τ π = (s 1, a 1, r 1, s 2, a 2, r 2, . . . s H, r H). 
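To make the action space A concrete, the sketch below represents expressions as nested Python tuples in the spirit of the Lisp-like IR, and shows how a generic rewrite rule such as "adding zero can be dropped" is applied by pattern matching at every subexpression, yielding one MDP action per match. This is an illustrative toy, not the Knossos implementation.

```python
# Expressions as nested tuples, e.g. ("add", ("mul", "x", "y"), 0.0)

def add_zero_rule(expr):
    """Rewrite (add e 0) or (add 0 e) -> e. Returns None if the rule does not match."""
    if isinstance(expr, tuple) and expr[0] == "add" and len(expr) == 3:
        _, lhs, rhs = expr
        if rhs == 0.0:
            return lhs
        if lhs == 0.0:
            return rhs
    return None

def actions(expr, rule, path=()):
    """Enumerate MDP actions: every (path, rewritten subexpression) where the rule matches."""
    out = []
    rewritten = rule(expr)
    if rewritten is not None:
        out.append((path, rewritten))
    if isinstance(expr, tuple):
        for i, child in enumerate(expr[1:], start=1):
            out.extend(actions(child, rule, path + (i,)))
    return out

def apply_action(expr, path, replacement):
    """Transition function: rebuild the expression with the subexpression at `path` replaced."""
    if not path:
        return replacement
    i = path[0]
    return expr[:i] + (apply_action(expr[i], path[1:], replacement),) + expr[i + 1:]

example = ("mul", ("add", "x", 0.0), ("add", 0.0, "y"))
for path, rep in actions(example, add_zero_rule):
    print(path, "->", apply_action(example, path, rep))
```

Because the same rule can match at several positions, the number of available actions grows with the size of the expression, which is the exploration difficulty discussed in the next section.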
Since the policy π can be stochastic, it is modelled as a random variable. The goal of the RL agent is to find an optimal policy π* = arg max_π J_π, which attains the best possible return. The return is defined as J_π = E_{τ_π}[ Σ_{t=0}^{H−1} R(s_t, s_{t+1}) ]. Given a policy and the number of timesteps remaining till the end of the episode, we define a value function V^π(s) = E_{τ_π}[ Σ_{i=0}^{t_s−1} R(s_i, s_{i+1}) | s_0 = s ], where t_s denotes the remaining time budget at state s. The optimal value function V* is defined as the value function of an optimal policy π*. We assume access to a function c(s), which provides the cost model, i.e. the computational cost of running e_s, the expression represented by state s = (e_s, t_s), on representative inputs. While developing a perfect cost model is theoretically impossible due to the intractability of the halting problem, very good cost models exist for the particular subset of programs that compilers are asked to optimise. The ideal cost model c_B would correspond to the run-time of the program on typical inputs, but evaluating costs by benchmarking is very computationally intensive. In practice, one can often find a surrogate cost function c, which is much easier to acquire, such that for most initial programs s_0 the reachable state that minimizes the surrogate cost agrees with the one that minimizes the ideal cost, i.e. arg min_{s∈R(s_0)} c(s) = arg min_{s∈R(s_0)} c_B(s), where R(s_0) denotes the set of states reachable from s_0. In other words, the cost function c does not have to produce the same run-time, only the same minimum over programs. We show experimentally in Section 5 that it is indeed possible to reduce the wall-clock time of running a program by optimising such a proxy cost model. Knossos has a modular architecture, making it easy to change the cost function. This makes it possible to quickly re-tune Knossos programs for any target hardware. We stress that the formalism allows us to find optimisations even when getting to the optimized version of the code requires passing through intermediate programs of higher cost (see Fig. 8). Our reward function is based on this cost model. The reward R(s_1, s_2) = c(s_1) − c(s_2) corresponds to the attained reduction in cost when rewriting expression e_{s_1} into e_{s_2}. This formulation ensures that the return J_π equals the total cost reduction attained along the length of the rollout τ. Similarly, the value function corresponds to the expected cost reduction under the current policy. Since our MDP includes a 'no-op' rewrite rule that allows us to keep the current expression and hence the cost, the optimal value function is monotonic in t, i.e. V*((e, t)) ≥ V*((e, t′)) for any e and t ≥ t′. 3 TRAINING THE RL AGENT Hard and Easy Aspects of Rewriting There are two main ways in which the task of rewriting expressions is more challenging than typical RL benchmarks. First, the allowed set of actions not only changes from state to state, but grows with the size of the expression. This makes exploration hard. Second, the states of the MDP, which correspond to the expressions being rewritten, are represented as graphs whose size and topology vary as optimisation progresses. This is unlike traditional deep Reinforcement Learning, which learns either from pixels or from data of fixed shape. While the rewriting task has many features that make it difficult, it is also easier than many traditional RL tasks for three reasons. First, MDP transitions are completely deterministic. Second, the task has a large degree of locality, in the sense that the performance of a program can often be substantially improved by optimising its parts separately.
Third, we can generate state transitions in any order convenient to us, as opposed to the traditional RL setting, where we are constrained by the order imposed by the environment. Overall, we have a problem similar to traditional planning, but one that requires us to generalise well in order to obtain competitive solutions. To do this, Knossos uses a custom RL algorithm based on A* search supported by a value function learned with a graph neural network (Algorithm 1). We describe how to obtain the heuristic in Section 3.2, and the search algorithm in Section 3.1. [Algorithm 2 (pseudocode): the search procedure returns the visited set C, the remaining budgets t, and V_target, the empirical estimate of the maximum cost reduction achievable from each visited state s.] We use the A* algorithm both to train the compiler and to deploy it. A* maintains two priority queues. One queue (O) stores the frontier, i.e. states from which transitions have not been explored yet. The other one (C) stores the states visited so far and is used to avoid exploring the same path twice. The states are explored in the order induced by the A* heuristic, which in our case corresponds to the learned value function V̂ obtained from previous iterations. In particular, node priority is set as f(s) = (c(s_0) − c(s)) + V̂(s). Here, V̂(s) is the estimated future cost reduction obtainable from state s within the remaining t_s timesteps. The quantity c(s_0) − c(s) corresponds to the cost reduction that has already been achieved on the way to s, measured against the cost of the initial expression. Thus, f(s) is an estimate of the maximum possible cost improvement of a trajectory passing through state s. After the search, we compute the empirical estimate of the maximum cost reduction achievable, V_target(s), for each visited state. The estimated value of s with t_s remaining timesteps is the maximum cost reduction found from s within t_s steps. DISTANCE(s, s′) in Algorithm 2 is the number of steps required to reach s′ from s. The algorithm stops after the value function has been evaluated a set number of times; in the code this is represented by the function TERM-CONDITION. A* is well-suited for the rewriting task because it exploits its characteristic features. In particular, it exploits determinism by assuming that a cost reduction achievable once can always be achieved again. It exploits the availability of reset by considering nodes in the order defined by the heuristic function. It exploits locality by preferring rewrites that need a small number of rule applications. Before deciding on A*, we also performed experiments with Monte Carlo Tree Search (MCTS). MCTS does not make use of reset and had worse empirical performance (see Appendix D for details). States in the Knossos MDP correspond to computation graphs. In order to apply deep RL to these graphs, we need to be able to construct differentiable embeddings of them. To do this, we employ Graph Neural Networks based on Gated Recurrent Units. During the forward pass, the GNN begins with an initial embedding of the graph nodes. It then iteratively applies a diffusion process to the graph. At each step, the obtained representation is fed into a gated recurrent unit (GRU). The process implicitly encodes the edge structure of the graph in the obtained representation. We represent a Knossos expression as a graph. The graph nodes correspond to subexpressions (see Fig. 1a). The graph edges are of two kinds. The first kind connects nodes to their parents. In addition, we use a second kind of edge to explicitly mark that two subexpressions are identical.
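The A* loop described above can be sketched compactly. The code below is an illustrative reimplementation, not the Knossos source: it assumes helper functions cost(s), expand(s) (the states reachable by one rewrite) and a learned estimator value(s), explores states in order of f(s) = (c(s_0) − c(s)) + V̂(s), and returns the best cost reduction found. The per-state time budget is omitted for brevity.

```python
import heapq
from itertools import count

def a_star_optimise(s0, cost, expand, value, max_evals=5000):
    """A*-style search over rewrites guided by a learned value estimate."""
    c0 = cost(s0)
    tie = count()                                   # tie-breaker so states are never compared
    frontier = [(-value(s0), next(tie), s0)]        # max-priority queue via negated f(s)
    visited = {s0}
    best_state, best_cost = s0, c0
    evals = 1
    while frontier and evals < max_evals:
        _, _, s = heapq.heappop(frontier)
        if cost(s) < best_cost:
            best_state, best_cost = s, cost(s)
        for nxt in expand(s):
            if nxt in visited:                      # closed set: avoid re-exploring states
                continue
            visited.add(nxt)
            f = (c0 - cost(nxt)) + value(nxt)       # achieved reduction + estimated future reduction
            heapq.heappush(frontier, (-f, next(tie), nxt))
            evals += 1
    return best_state, c0 - best_cost               # best expression found and its cost reduction
```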
See Table 2b for a list of all edge types. Edges can be directed or undirected, with directed edges going in opposite directions treated as distinct. To compute the value function for an expression e and time budget t, we start by computing an initial node embedding h_v^0 ∈ R^d for every node v ∈ N(e), where N(e) is the set of vertices in expression e. The initial node embedding consists of a one-hot encoding of the node type (constant, variable, etc.) followed by zero padding. This embedding is then fed into the following recurrent computation (see Fig. 2a): h_v^{t+1} = f( h_v^t, ⊕_{p∈P} Σ_{u∈A_p(v)} m_p(h_u^t) ), where p ∈ P indexes the different edge types and A_p(v) is the set of neighbours of node v with respect to the p-th edge type. We choose the message function m_p to be a single dense layer for each edge type p and the aggregation operator ⊕ to be the sum of all incoming messages. We use a GRU cell as the recurrent unit f. The final node embedding h_v^T is obtained after T such propagation steps. Finally, the value of expression e is computed by taking a gated (weighted) sum of the final node embeddings h_v^T and passing it through a dense layer: V(e) = g( Σ_{v∈N(e)} σ(g_1(h_v^T)) ⊙ g_2(h_v^T) ), where g, g_1 and g_2 are one-layer dense networks, σ denotes the sigmoid function, ⊙ is the element-wise product, and g outputs one value per remaining time budget. We train the above GNN to approximate the optimal value function V*. Let V̄(s) = V(e_s)[t_s] denote the network output for expression e_s and time budget t_s. To track an approximate lower bound of the optimal value function V*, we minimize the loss l(V̄(s), V_target(s)/t_s), where V_target(s) is defined in Algorithm 2 and corresponds to the best cost improvement obtained with the current policy in t_s steps. Normalisation by t_s is introduced to ease optimisation by ensuring that the target values for all outputs of V are of similar magnitude. The value function estimate V̂(s) can then be obtained from the per-step estimate V̄(s) as V̂(s) = t_s · V̄(s). For the loss function l we use the Huber loss. Details about the optimiser used to minimize the loss l are given in Appendix B. In the pseudocode in Algorithm 1, this optimisation is represented by the function FIT. Knossos builds on a long tradition of compiler technology. Similarly to traditional compilers and the more recent deep learning compilers such as Myia (van), DLVM, ISAM and GLOW, Knossos uses an intermediate representation to optimize programs. However, while these approaches rely on layers of hand-coded optimisation heuristics, Knossos learns the algorithm used to optimize its programs. In this respect, Knossos is a spiritual successor of benchmark-driven, hardware-agnostic optimisation approaches in computational linear algebra and signal processing. However, unlike these approaches, Knossos is a fully-fledged compiler and can optimize arbitrary programs. Moreover, thanks to its Reinforcement-Learning-driven optimizer, Knossos has an advantage over existing approaches that attempt to learn how to optimize arbitrary code. For example, prior work learns the parameters of a code optimizer with a hard-coded hierarchy. REGAL only learns the hyper-parameters of a fixed genetic algorithm that performs the actual optimisation. The TVM compiler (a) learns a cost model over programs, but uses simple simulated annealing to perform the optimisation. Similarly, Chen et al. (2018b) handles only index summation expressions and again relies on simulated annealing. LIFT defines an intermediate language suited for expressing numerical computation, but focuses on providing the right set of rewrite rules rather than on the program optimisation process itself. In Section 5, we demonstrate that the RL optimizer used by Knossos outperforms this approach by a large margin.
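Returning to the value network described above, a minimal PyTorch sketch of the architecture is given below. It is an illustration of the described computation, not the Knossos implementation: the dense adjacency matrices are an assumption made for brevity (the appendix notes a sparse implementation is used in practice), and the exact gating of the readout follows the reconstruction above.

```python
import torch
import torch.nn as nn

class ValueGNN(nn.Module):
    def __init__(self, num_node_types, num_edge_types, hidden=64, horizon=10, steps=10):
        super().__init__()
        assert hidden >= num_node_types          # one-hot type fits in the padded embedding
        self.hidden, self.steps, self.num_node_types = hidden, steps, num_node_types
        self.messages = nn.ModuleList([nn.Linear(hidden, hidden) for _ in range(num_edge_types)])
        self.gru = nn.GRUCell(hidden, hidden)
        self.gate = nn.Linear(hidden, hidden)    # sigmoid gate over node embeddings
        self.embed = nn.Linear(hidden, hidden)
        self.readout = nn.Linear(hidden, horizon)  # one value per remaining time budget

    def forward(self, node_types, adjs):
        # node_types: (N,) long tensor; adjs: list of (N, N) adjacency matrices, one per edge type
        one_hot = torch.nn.functional.one_hot(node_types, self.num_node_types).float()
        h = torch.zeros(len(node_types), self.hidden)
        h[:, : self.num_node_types] = one_hot    # one-hot node type followed by zero padding
        for _ in range(self.steps):
            msg = sum(a @ m(h) for a, m in zip(adjs, self.messages))  # sum of incoming messages
            h = self.gru(msg, h)
        gated = torch.sigmoid(self.gate(h)) * self.embed(h)
        return self.readout(gated.sum(dim=0))    # vector of per-budget value estimates

# Training target for a visited state s with remaining budget t_s (cf. Algorithm 2),
# assuming budgets 1..horizon map to output indices 0..horizon-1:
#   pred = model(node_types, adjs)[t_s - 1]
#   loss = torch.nn.functional.smooth_l1_loss(pred, torch.tensor(v_target / t_s))
```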
Knossos is also related to JAX , which performs just-in-time compilation of Python code using the XLA backend (XLA authors, 2016). Knossos differs from JAX in two ways. First, it uses efficient RL code optimisation, which is architecture-agnostic. In fact, since Knossos generates C ++ code, it supports a much broader variety of target architectures. Also, unlike JAX, it makes use of the benefits of a statically typed languages. In terms of scope, Knossos is also similar to Zygote for Julia . However, unlike these compliers, Knossos makes use of an RL-driven code optimizer. Since Knossos provides first class support for automatic differentiation, it is also related to established deep learning frameworks (; ;). However, unlike Knossos, these frameworks do not learn how to optimize code, instead relying on manually-prepared back-ends. Moreover, using them either requires meta-programming, where the user has to use a high-level language to specify the desired computation graph using constructions external to the language , or is constrained to a restricted subset of the language . In contrast, the Knossos language can be used directly, without manually specifying computation graph constructs or restricting oneself to an allowed subset of the language. In parallel, the idea of automated rewriting to achieve a given objective was explored in the context of automated theorem provers. This is conceptually related to our approach since finding an equivalence between formulae is the same as finding a proof that they are equal. However, recent work in this space has substantial differences in scope. In particular, state-of-the-art work that searches for refutational proofs in first-order logic uses hardcoded features and cannot learn any new ones. Also, the optimal objective is very different. While a mathematical proof is only correct when completely reduced to a tautology, we are satisfied with simplifying an expression by a certain margin, not necessarily in the most optimal way possible. For the Reinforcement Learning part, our algorithm differs from standard techniques in that it has a much larger action space and a state space that consists of graphs, which makes the application of traditional RL algorithms like DQN , A2C and PPO ineffective. AlphaGo, which also performs a search over a large state space, but differs from Knossos in that it learns for pixel observations and uses an action space of bounded size. Reinforcement Learning has also been applied to expression rewriting and scheduling problems . However, since this approach used actor-critic RL that does not exploit reset, it less well-suited for compilation tasks as described in Section 3. We evaluated Knossos in three settings. First, to understand how close and reliably we can achieve the best optimisation, we applied Knossos to a manually curated set of arithmetic expressions, where we know the best available sequence of rewrites. Second, we applied Knossos to a set of linear algebraic operations, which are representative of typical workloads in numerical computing. 
Third, we applied Knossos to code for training a convolutional network on the MNIST dataset. [Figure 4: Knossos IL listings of gemm before and after optimisation — the ksc output materialises the temporaries mat_x, mat_x_6 and mat_y, while the Knossos output fuses them away; each listing is annotated with its cost under the cost model (the fused listing shown is annotated cost=7070100).] In all three settings, we compare Knossos to a hand-written rule-based transpiler of the Knossos IL, which we call ksc. Both Knossos and ksc output C++, which is compiled to binary using gcc with optimisation enabled, ensuring a fair comparison. We describe the results below. While arithmetic expressions are simple, optimising them is not always a simple task. Figure 3 shows an example of two similar arithmetic expressions. Although they look very similar, they require different optimisation strategies to reach their optimal forms. The left expression reaches the optimum via an arithmetic simplification (multiplying both the numerator and the denominator by x), whereas the right expression reaches the optimum via common subexpression elimination. It is difficult for a rule-based compiler to distinguish the two and optimise such similar expressions using different strategies. To test Knossos on arithmetic expressions, we used a training set of 36 arithmetic expressions and a test set of 12 different ones. The details of the experimental setup are given in Appendix B. In this setting, we pick 6 expressions randomly from the training set to train on in each epoch. We ran training for 30 epochs, running 10 repetitions of each experiment with different random seeds. Search depth was limited to 10 and the termination condition in A* was set to 5000 evaluations of the value function. See Appendix B for the full details, including network parameters. We show the results in Figure 5a. It can be seen from the figure that Knossos achieved the oracle cost for all expressions. We also performed an ablation, comparing Knossos to an A* agent that does not perform the GNN recurrence in equation 4 (shown as NoGNN). As a baseline, we compared to greedy best-first search, which picks the next state to explore greedily, without using the value function, i.e. with priority f(s) := c(s_0) − c(s). We also show a comparison to random search and the initial cost of the expression before any optimisation. Bootstrap Mode Similarly to a traditional compiler, where we are given a concrete program to optimize, the expressions used to evaluate Knossos in this benchmark were the same ones that we used during training. Even in this setup, Knossos still generalises, but it does so across sub-expressions of the expressions in the training set. We tested that on 8 expressions, training for 30 epochs.
Other experimental setup is the same as Arithmetic Expressions. Figure 5a shows the comparison of the minimum cost achieved by each agent. It can be seen from the figure that Knossos achieved the best possible cost for all expressions. Linear Algebra Primitives Numerical linear algebra is fundamental to most calculations in scientific computing and machine learning. Primitives such as vector multiplication, plane rotation, matrix multiplications and similar primitives often represent the most time-consuming part of the given computation. To evaluate the performance of Knossos on in this setting, we trained on a set of 11 such linear algebra primitives and evaluated on General Matrix Multiplication (GEMM). We trained for 5 epochs, each of which included optimisation of cost of 6 primitives. Search depth was limited to 30 and the termination condition in A was set to 5000 evaluations of the value function. Figure 6a shows the cost of GEMM. The plot shows for 10 independent runs of the Knossos code optimizer on the same input source file. We used an augmented set of training rules, which included vector operations (see Table 4 in Appendix). Because of the complexity of the task, we split the search into two phases of 15 steps each. The training phases differ in the set of allowed rules. In the first phase, we only allow rules that in large changes to the cost (Table 4). In the second phase, we allow all rules. The shaded area represents one standard deviation across the runs of Knossos. Results show that Knossos produced code of lower cost than the output of the traditional ksc complier according to our cost model. We also performed a benchmark using wall clock time, shown in Fig. 7a, again showing an improvement. In addition, we performed a qualitative evaluation of the output in Fig. 4. In the program obtained by ksc (middle listing), three temporary variables mat x, mat x 6, and mat y corresponding to the of A·B, α·mat x, and β ·C, respectively, are created. In the output of Knossos (bottom listing), all the temporary variables are gone. Hence, Knossos has discovered a form of loop fusion -the type of optimisation that previously had to be built into a complier by a laborious manual process. Convolutional Network In order to evaluate Knossos on workloads characteristic of modern machine learning pipelines, we also evaluated Knossos on a computer vision task. 
We optimize the code for training a convolutional deep network on the MNIST dataset. [Figure: three Knossos IL listings of rev$conv1d, the reverse-mode derivative of a 1-d convolution, at different stages of optimisation, annotated with their costs under the cost model (the visible annotations read cost=102267214109.0 and cost=163955001999.0).] The code represents a typical implementation of a deep learning algorithm and contains primitives such as dense layers, convolutional layers, pooling
layers, and so on. While MNIST is a basic benchmark, we stress that the goal of Knossos was code optimisation as opposed to the computer vision task itself. We trained on 5 expressions and evaluated on a reverse mode of a convolutional layer. We fixed the search depth to 40. The termination condition in A was set to 30000 evaluations of the value function. We used an augmented set of training rules and split the search into two phases of 20 steps each, allowing rules that in large changes to the cost in the first phase and all rules in the second phase. Results are shown in Figure 6b for the cost model and Figure 7b for the wall clock time. The shaded area represents the standard deviation across the runs of Knossos and the ing binary. As above, the Knossos optimizer produced code that outperformed the baseline. We have demonstrated that Knossos is capable of producing code that is faster than the output of a traditional complier. Moreover, unlike traditional compliers, Knossos does not rely on hand-crafted optimisation passes that are very laborious to implement. Instead, traditional optimisation passes are replaced by atomic rewrite rules that can be combined in many ways. In fact, in our benchmark of linear algebra primitives, Knossos was able to automatically discover loop fusion, an optimisation strategy long known to complier designers. Knossos code in our experiments can perform both training and inference and can be run on any hardware supporting the C ++ toolchain, including inexpensive embedded devices. We have introduced Knossos, a new complier targetting machine learning and numerical computation. Thanks to its automatic code optimisation, Knossos produces binaries that achieve better run-times than a traditional, rule-based complier. Knossos can deal with complex code generated by automatic differentiation and automatically discover optimisations that previously required careful complier design. We believe that Knossos will pave the way towards a new generation of future compliers, which will crucially rely on automatically inferring the correct optimisations. It also has a LISP-like surface syntax, which we used to implement our programs. In the future, we plan to provide transpilers, allowing for the compilation of code written in other languages into Knossos. We provide a sample Knossos program in Figure 4.In order to facilitate Machine Learning workloads, the Knossos IL has native support for automatic differentiation. We use a new unified view of automatic differentiation as generalised transposition . Rather than having an explicit distinction between forward mode and reverse mode AD, Knossos uses uses a type system together with a set of consistent rewrite rules. Whenever the gradient operator is used as part of a Knossos algorithm, the complier first generates a syntax tree corresponding to the differentiated program and then applies rewrites to optimize the cost of its execution. This means that the ing AD algorithm is tailor-made and optimized with that exact use case in mind. This is in contrast to systems such as PyTorch, which have hard-coded routines for backward-mode AD. From the perspective of the user, this process is completely transparent in the sense that taking gradients can be applied to any piece of Knossos code. While the details of this process are beyond the scope of this paper, from the perspective of this work, the important feature of AD is that it corresponds to a transformation of the abstract syntax tree. 
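The view of differentiation as a syntax-tree transformation can be made concrete with a toy example. The sketch below is purely illustrative — Knossos' AD is a richer, typed transformation — but it shows why the differentiated tree is a natural input for the rewrite-based optimiser: the raw derivative is full of trivially simplifiable subexpressions.

```python
# Toy derivative of an expression tree w.r.t. the variable "x"
# (expressions as nested tuples, as in the rewrite-rule sketch above).

def d(expr):
    if expr == "x":
        return 1.0
    if not isinstance(expr, tuple):               # constants and other variables
        return 0.0
    op, a, b = expr
    if op == "add":
        return ("add", d(a), d(b))
    if op == "mul":                               # product rule
        return ("add", ("mul", d(a), b), ("mul", a, d(b)))
    raise NotImplementedError(op)

print(d(("mul", "x", ("add", "x", 2.0))))
# -> ('add', ('mul', 1.0, ('add', 'x', 2.0)), ('mul', 'x', ('add', 1.0, 0.0)))
# The output tree contains obvious simplification opportunities (multiplication by 1.0,
# addition of 0.0), which is exactly what the rewrite-based optimiser cleans up.
```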
The resulting AST can then be optimised in the same way as any other code. We now describe the parameters used to perform the experiments reported on in the paper. The parameters used by A* in the four tasks described in Sec. 5 are listed in Tab. 1. The hyper-parameters for the value network training are given in Tab. 2. In the Graph Neural Network, initial node features are one-hot vectors that represent the node types. The used node types are: constant, variable, let, if, tuple, select, +, -, *, /, exp, log, ==, >, >=, or, build, apply, lam, sum, sumbuild, constVec, and deltaVec. Edge types are listed in Tab. 2b. The auxiliary edge type "is-identical" is inserted to identify identical subexpressions. It was added so that it is easier to learn rewrites that rely on matching expressions. The GNN was implemented using a sparse adjacency matrix instead of a dense matrix in order to conserve GPU memory in settings where some expressions grow beyond 10000 nodes during training. We ran the GNN recursion 10 times. For optimization we used the Adam optimizer with learning rate 0.0001 and set the dropout rate to zero for the GNN and 0.2 for the MLP. We list the basic rule set used in the arithmetic expressions benchmark in Tab. 3. The additional rewrite rules used in basic linear algebra and convolutional neural network are given in Tab. 4. In addition to A* search, we compare the performance of Monte Carlo Tree Search using the UCT formula (Kocsis & Szepesvári, 2006). In order to disambiguate across subtly different versions of the algorithm, we describe it below. Each iteration of MCTS consists of four steps (Algorithm 3). 1. Selection: Starting from the root, a tree policy recursively descends through the tree until it reaches a leaf node. 2. Expansion: A child node is added to expand the tree. 3. Simulation: A rollout policy is applied from the new node until the end of the episode. 4. Back-up: The simulation result is backed up through the selected nodes to update their statistics. The tree policy π_t and rollout policy π_r are defined as follows, where s' denotes the node reached by taking action a in s:
π_t(a|s) = argmax_{a ∈ A} [ X(s')/n(s') + β √( ln n(s) / (n(s') + 1) ) ]
π_r(a|s) = softmax_{a ∈ A}( R(s, a) + V(s'), α )
Here, n(s) is a visitation count of state s, and β is a constant to control the exploration bonus. X(s')/n(s') is the average cost reduction achieved by the set of trajectories that passed through s'; as n(s') grows, the exploration bonus β √( ln n(s) / (n(s') + 1) ) is reduced. This way, the agent is encouraged to try a diverse set of actions. We evaluated the performance of A* search and MCTS for both training and test. The experimental setup is the same as the Generalisation to Unseen Data experiment in Section 5 except for the search algorithm used. For MCTS, we used α = 5.0 and β = 0.5 for both training and test. Figure 9a shows the results of running all possible combinations of search algorithms when used for training and test in various configurations. Overall, using A* for both training and test achieved the best performance. In particular, when we fixed the algorithm used during test to A* and varied the training algorithm between A* and MCTS, A* achieved a significantly lower total minimum cost than MCTS. Similarly, when we fixed the algorithm used for training to A* and compared the performance during testing, A* again achieved a significantly lower cost than MCTS.
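A minimal sketch of the selection step implied by the tree policy above is given below; the Node container and the exact bookkeeping of X and n are assumptions made for the example, not the implementation used in the experiments.

import math
from dataclasses import dataclass, field

@dataclass
class Node:
    X: float = 0.0                      # summed cost reduction of trajectories through this node
    n: int = 0                          # visit count
    children: list = field(default_factory=list)

def uct_select(parent, beta=0.5):
    # Tree policy: pick the child maximising average cost reduction X/n plus an
    # exploration bonus that shrinks as the child is visited more often.
    def score(child):
        exploit = child.X / child.n if child.n > 0 else 0.0
        explore = beta * math.sqrt(math.log(parent.n + 1) / (child.n + 1))
        return exploit + explore
    return max(parent.children, key=score)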
Train/Test expression set:
(div (div 1.0 x) (add 1.0 (div 1.0 x)))
(add (div (div 1.0 x) (add (div 1.0 x) 1.0)) (div 1.0 x))
(add (div (div 1.0 x) (add (div 1.0 x) 2.0)) (div 1.0 x))
(mul (div x y) (div x y))
(div (mul (div x y) x) y)
(add (div (mul x y) (add 1.0 (mul x y))) (mul x y))
(add (div 1.0 (add 1.0 (mul x y))) (mul x y))
(div (mul x y) (add 1.0 (mul x y)))
Figure 10: List of expressions in the training set for Linear Algebra Primitives.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SylyHkHYDB
We combine A* search with reinforcement learning to speed up machine learning code
Grasping an object and precisely stacking it on another is a difficult task for traditional robotic control or hand-engineered approaches. Here we examine the problem in simulation and provide techniques aimed at solving it via deep reinforcement learning. We introduce two straightforward extensions to the Deep Deterministic Policy Gradient algorithm (DDPG), which make it significantly more data-efficient and scalable. Our show that by making extensive use of off-policy data and replay, it is possible to find high-performance control policies. Further, our hint that it may soon be feasible to train successful stacking policies by collecting interactions on real robots. Dexterous manipulation is a fundamental challenge in robotics. Researchers have long sought a way to enable robots to robustly and flexibly interact with fixed and free objects of different shapes, materials, and surface properties in the context of a broad range of tasks and environmental conditions. Such flexibility is very difficult to achieve with manually designed controllers. The recent resurgence of neural networks and "deep learning" has inspired hope that these methods will be as effective in the control domain as they are for perception. Indeed, recent work has used neural networks to learn solutions to a variety of control problems BID31 BID6 BID30 BID10 BID16.While the flexibility and generality of learning approaches is promising for robotics, these methods typically require a large amount of data that grows with the complexity of the task. What is feasible on a simulated system, where hundreds of millions of control steps are possible BID22 BID31, does not necessarily transfer to real robot applications due to unrealistic learning times. One solution to this problem is to restrict the generality of the controller by incorporating task specific knowledge, e.g. in the form of dynamic movement primitives BID29, or in the form of strong teaching signals, e.g. kinesthetic teaching of trajectories BID23. Recent works have had success learning flexible neural network policies directly on real robots (e.g. BID4 BID38), but tasks as complex as precise grasping-and-stacking remain daunting. In this paper we investigate in simulation the possibility of learning precise manipulation skills endto-end with a general purpose model-free deep reinforcement learning algorithm. We assess the feasibility of performing analogous experiments on real robotics hardware and provide guidance with respect to the choice of learning algorithm, experimental setup, and the performance that we can hope to achieve. We consider the task of picking up a Lego brick from the table and stacking it onto a second nearby brick using a robotic arm and gripper. This task involves contact-rich interactions between the robotic arm and two freely moving objects. It also requires mastering several sub-skills (reaching, grasping, lifting, and stacking). Each of these sub-skills is challenging in its own right as they require both precision (for instance, successful stacking requires accurate alignment of the two bricks) and as well as robust generalization over a large state space (e.g. different initial positions of the bricks and the initial configuration of the arm). Finally, there exist non-trivial and long-ranging dependencies between the solutions for different sub-tasks: for instance, the ability to successfully stack the brick depends critically on having picked up the brick in a sensible way beforehand. This paper makes several contributions: 1. 
We build on the Deep Deterministic Policy Gradient (DDPG;), a general purpose model-free reinforcement learning algorithm for continuous actions, and extend it in two ways: firstly, we improve the data efficiency of the algorithm by scheduling updates of the network parameters independently of interactions with the environment. Secondly, we overcome the computational and experimental bottlenecks of single-machine single-robot learning by introducing a distributed version of DDPG which allows data collection and network training to be spread out over multiple computers and robots. 2. We show how to use these straightforward algorithmic developments to solve a complex, multi-stage manipulation problem. We further propose two broadly applicable strategies that allow us to reliably find solutions to complex tasks and further reduce the amount of environmental interaction. The first of these strategies is a recipe for designing effective shaping rewards for compositional tasks, while the second biases the distribution of initial states to achieve an effect akin a form of apprenticeship learning. In combination these contributions allow us to reliably learn robust policies for the full stacking task from scratch in less than 10 million environment transitions. This corresponds to less than 10 hours of interaction time on 16 robots. In addition, we show that when states from demonstration trajectories are used as the start states for learning trials the full task can be learned with 1 million transitions (i.e. less than 1 hour of interaction on 16 robots). To our knowledge our provide the first demonstration of end-to-end learning for a complex manipulation problem involving multiple freely moving objects. They are also suggest that it may be possible to learn such non-trivial manipulation skills directly on real robots. Reinforcement learning (RL) approaches solve tasks through repeated interactions with the environment guided by a reward signal of success or failure BID33. A distinction is often made between value-based and policy search methods. The latter have been routinely applied in robotics, in part because they straightforwardly handle continuous and high-dimensional action spaces BID2, and applications include manipulation BID25 BID36 BID4 BID38 BID7, locomotion e.g. BID15 BID20, and a range of other challenges such as helicopter flight BID0 ). However, policy search methods can scale poorly with the number of parameters that need to be estimated, requiring the need for restricted policy classes, that in turn might not be powerful enough for solving complex tasks. One exception are guided policy search methods (GPS) BID38. These employ a teacher algorithm to locally optimize trajectories which are then summarized by a neural network policy. They gain data-efficiency by employing aggressive local policy updates and extensive training of their neural network policy. The teacher can use model-based or model-free BID38 trajectory optimization. The former can struggle with strong discontinuities in the dynamics, and both rely on access to a well defined and fully observed state space. Alternatively, model-free value function approaches enable effective reuse of data and do not require full access to the state space or to a model of the environment. The use of rich function approximators such as neural networks in value function methods dates back many years, e.g. 
BID37 BID34 BID11 BID9, and recent success with deep learning has driven the development of new end-to-end training methods for challenging control problems BID21 BID5. Closely related to the ideas followed in this paper, BID4 demonstrates that value-based methods using neural network approximators can be used for relatively simple robotic manipulation tasks in the real world BID6. This work also followed a recent trend towards the use of experimental rigs that allow parallelized data collection, e.g. BID26, via the use of multiple robots from which experience is gathered simultaneously BID4 BID38. Finally, the use of demonstration data has played an important role in robot learning, both as a means to obtain suitable cost functions BID1 BID13 BID3 BID7 but also to bootstrap and thus speed up learning. For the latter, kinesthetic teaching is widely used BID25 BID38, though the need for a human operator to be able to guide the robot through the full movement can be limiting. In this section we explain the learning problem and summarize the DDPG algorithm. We explain its relationship to other Q-function based RL algorithms in the Appendix. The RL problem consists of an agent interacting with an environment in a sequential manner to maximize the expected sum of rewards. At time t the agent observes the state x_t of the system and produces a control u_t = π(x_t; θ) according to policy π with parameters θ. This leads the environment to transition to a new state x_{t+1} according to the dynamics x_{t+1} ∼ p(·|x_t, u_t), and the agent receives a reward r_t = r(x_t, u_t). The goal is to maximize the expected sum of discounted rewards J(θ) = E_{τ∼ρ_θ}[ Σ_t γ^{t−1} r(x_t, u_t) ], where ρ_θ is the distribution over trajectories τ = (x_0, u_0, x_1, u_1, . . .) induced by the current policy:
ρ_θ(τ) = p(x_0) Π_{t≥0} p(x_{t+1} | x_t, π(x_t; θ)).
DPG (BID32) is a policy gradient algorithm for continuous action spaces that improves the deterministic policy function π via backpropagation of the action-value gradient from a learned approximation to the Q-function. Specifically, DPG maintains a parametric approximation Q(x_t, u_t; φ) to the action value function Q^π(x_t, u_t) associated with π, and φ is chosen to minimize
L(φ) = E_{(x_t, u_t)∼ρ̃} [ (Q(x_t, u_t; φ) − y_t)^2 ]      (1)
where y_t = r(x_t, u_t) + γ Q(x_{t+1}, π(x_{t+1})). Here ρ̃ is usually close to the marginal transition distribution induced by π but often not identical. For instance, during learning u_t may be chosen to be a noisy version of π(x_t; θ), e.g. u_t = π(x_t; θ) + ε where ε ∼ N(0, σ^2), and ρ̃ is then the transition distribution induced by this noisy policy. The policy parameters θ are then updated according to
Δθ ∝ E_{x∼ρ̃} [ ∇_u Q(x, u; φ) |_{u=π(x; θ)} ∇_θ π(x; θ) ].      (2)
DDPG adds experience replay and target networks to the original DPG algorithm: experience is collected into a buffer and updates to θ and φ (eqs. 1, 2) are computed using mini-batch updates with samples from this buffer. A second set of "target networks" is maintained with parameters θ′ and φ′. These are used to compute y_t in eq. (1) and their parameters are slowly updated towards the current parameters θ, φ. Both measures significantly improve the stability of DDPG. The use of a Q-function facilitates off-policy learning. This decouples the collection of experience data from the updates of the policy and value networks, which allows us to make many parameter update steps per step in the environment, ensuring that the networks are well fit to the data that is currently available. The full task that we consider in this paper is to use the arm to pick up one Lego brick from the table and stack it onto the remaining brick.
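Before turning to the task itself, a minimal PyTorch sketch of the critic and actor updates in eqs. (1) and (2) may be helpful. The function and argument names are illustrative, and the soft target-network update with rate tau is one common choice; the pseudo code later in the text instead copies the target networks every S update steps.

import torch

def ddpg_update(batch, actor, critic, target_actor, target_critic,
                actor_opt, critic_opt, gamma=0.99, tau=0.001):
    x, u, r, x_next = batch                       # mini-batch sampled from the replay buffer
    # Critic: regress Q(x, u) onto the bootstrapped target y of eq. (1).
    with torch.no_grad():
        y = r + gamma * target_critic(x_next, target_actor(x_next))
    critic_loss = ((critic(x, u) - y) ** 2).mean()
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
    # Actor: ascend the action-value gradient of eq. (2) by minimising -Q(x, pi(x)).
    actor_loss = -critic(x, actor(x)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
    # Target networks slowly track the online networks.
    with torch.no_grad():
        for net, target in ((actor, target_actor), (critic, target_critic)):
            for p, tp in zip(net.parameters(), target.parameters()):
                tp.mul_(1.0 - tau).add_(tau * p)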
This "composite" task can be decomposed into several subtasks, including grasping and stacking. We consider the full task as well as the two sub-tasks in isolation:Starting state Reward Grasp Both bricks on In every episode the arm starts in a random configuration with an appropriate positioning of gripper and brick. We implement the experiments in a physically plausible simulation in MuJoCo BID35 with the simulated arm being closely matched to a real-world Jaco arm setup in our lab. Episodes are terminated after 150 steps of 50ms of physical simulation time. The agent thus has 7.5 seconds to perform the task. Unless otherwise noted we give a reward of one upon successful completion of the task and zero otherwise. The observation contains information about the angles and angular velocities of the 6 joints of the arm and 3 fingers of the gripper, as well as the position and orientation of the two bricks and relative distances of the two bricks to the pinch position of the gripper (roughly the position where the fingertips would meet if the fingers are closed). The 9-dimensional continuous action directly sets the velocities of the arm and finger joints. In experiments not reported in this paper we have tried using observations with only the raw state of the brick and the arm configuration (i.e. without the vector between the end-effector and brick) This increased the number of environment interactions needed roughly by a factor of two to three. For each experimental condition we optimize the learning rate and train and measure the performance of 10 agents with different random initial network parameters. After every 30 training episodes the agent is evaluated for 10 episodes. We used the mean performance at each evaluation phase as the performance measure presented in all plots. In the plots the line shows the mean performance across agents and the shaded regions correspond to the range between the worst and best performing one In all plots the x-axis represents the number of environment transitions seen so far at an evaluation point (in millions) and the y-axis represent episode return. A video of the full setup and examples of policies solving the component and full tasks can be found here: https://www.youtube.com/watch?v=7vmXOGwLq24. In this section we study two methods for extending the DDPG algorithm and find that they can have significant effect on data and computation efficiency, in some cases making the difference between finding a solution to a task or not. Multiple mini-batch replay steps Deep neural networks can require many steps of gradient descent to converge. In a supervised learning setting this affects purely computation time. In reinforcement learning, however, neural network training is interleaved with the acquisition of interaction experience giving rise to a complex interaction. To gain a better understanding of this effect we modified the original DDPG algorithm as described in to perform a fixed but configurable number of mini-batch updates per step in the environment. In one update was performed after each new interaction step. We refer to DDPG with a configurable number of update steps as DPG-R and tested the impact of this modification on the two primitive tasks Grasp and StackInHand. The are shown in FIG1. The number of update steps has a dramatic effect on the amount of experience data required. 
After one million interactions the original version of DDPG with a single update step (blue traces) appears to have made no progress towards a successful policy for stacking, and only a small number of controllers have learned to grasp. Increasing the number of updates per interaction to 5 greatly improves the (green traces), and with 40 updates (purple) the first successful policies for stacking and grasping are obtained after 200,000 and 300,000 interactions respectively (corresponding to 1,300 and 2,000 episodes). Notably, for both tasks we continue to see a reduction in total environment interaction up to 40 update steps, the maximum used in the experiment. One possible explanation for this effect is the interaction alluded to above: insufficient training may lead to a form of underfitting of the policy. Since the policy is then used for exploration this affects the quality of the data collected in the next iteration which in turn has an effect on training in future iterations leading to overall slow learning. We have observed in various experiments (not shown) that other aspects of the network architecture (layer sizes, non-linearities) can similarly affect learning speed. Finally, it is important to note that one cannot replicate the effect of multiple replay steps simply by increasing the learning rate. In practice we find that attempts to do so make training unstable. Asynchronous DPG Increasing the number of update steps relative to the number of environment interactions greatly improves the data efficiency but also dramatically increases compute time. When the overall run time is dominated by the network updates it may scale linearly with the number of replay steps. In this setting experiments can quickly become impractical and parallelizing computation can provide a solution. Similarly, in a robotics setup the overall run time is typically dominated by the collection of interactions. In this case it is desirable to be able to collect experience from multiple robots simultaneously (e.g. as in BID38 BID4).We therefore develop an asynchronous version of DPG that allows parallelization of training and environment interaction by combining multiple instances of an DPG-R actor and critic that each share their network parameters and can be configured to either share or have independent experience replay buffers. This is inspired by the A3C algorithm proposed in BID22, and also analogous to BID4 BID38: We employ asynchronous updates whereby each worker has its own copy of the parameters and uses it for computing gradients which are then applied to a shared parameter instance without any synchronization. We use the Adam optimizer BID14 with local non-shared first-order statistics and a single shared instance of second-order statistics. The pseudo code of the asynchronous DPG-R is shown in algorithm box 1. 
Initialize global shared critic and actor network parameters: θ Q and θ µ Pseudo code for each learner thread: Initialize critic network Q(s, a|θ Q) and policy network µ(s|θ µ) with weights θ Q and θ µ.Initialize target network Q and µ with weights: DISPLAYFORM0 Initialize replay buffer R for episode = 1, M do Receive initial observation state s1 for t = 1, T do Select action at = µ(st|θ µ) + Nt according to the current policy and exploration noise Perform action at, observe reward rt and new state st+1 Store transition (st, at, rt, st+1) in R for update = 1, R do Sample a random minibatch of N transitions (si, ai, ri, si+1) from R Set yi = ri + γQ (si+1, µ (si+1|θ µ)|θ Q ) Perform asynchronous update of the shared critic parameters by minimizing the loss: DISPLAYFORM1 2 ) Perform asynchronous update of the shared policy parameters using the sampled gradient: DISPLAYFORM2 Copy the shared parameters to the local ones: DISPLAYFORM3 Every S update steps, update the target networks: Figure 2 (right) compares the performance of ADPG-R for different number of update steps and 16 workers (all workers performing both data collection and computing updates). Similar to FIG1 (left) we find that increasing the ratio of update steps per environment steps improves data efficiency, although the effect appears to be somewhat less pronounced than for DPG-R. FIG2 (left) directly compares the single-worker and asynchronous version of DPG-R. In both cases we choose the best performing number of replay steps and learning rate. As we can see, the use of multiple workers does not affect overall data efficiency for StackInHand but it reduced roughly in half for Grasp, with the note that the single worker still hasn't quite converged. DISPLAYFORM4 for end for end forFigure 3 (right) plots the same data but as a function of environment steps per worker. This measure corresponds to the optimal wall clock efficiency that we can achieve, under the assumption that communication time between workers is negligible compared to environment interaction and gradient computation (this usually holds up to a certain degree of parallelization). The theoretical wall clock time for 16 workers is about 16x lower for StackInHand and roughly 8x lower for Grasp. Overall these show that distributing neural network training and data collection across multiple computers and robots can be an extremely effective way of reducing the overall run time of experiments and thus making it feasible to run more challenging experiments. We make extensive use of asynchronous DPG for remaining the experiments. The reward function in the previous section was "sparse" or "pure" reward where a reward of 1 was given for states that correspond to successful task completion (brick lifted above 3cm for grasp; for stack) and 0 otherwise. For this reward to be useful it is necessary that the agent enters the goal region at least some of the time. While possible for each of the two subtasks in isolation, this is highly unlikely for the full task: without further guidance naïve random exploration is very unlikely to lead to a successful grasp-and -stack as we experimentally verify in FIG3.One solution are informative shaping rewards that provide a learning signal even for simple exploration strategies, e.g. by embedding information about the value function in the reward function. This is a convenient way of embedding prior knowledge about the solution and is a widely and successfully used approach for simple problems. 
For complex sequential or compositional tasks such as the one we are interested in here, however, a suitable reward function is often non-obvious and may require considerable effort and experimentation. In this section we propose and analyze several reward functions for the full Stack task, and provide a general recipe that can be applied to other tasks with compositional structure. Shaping rewards are often defined using a distance from or progress towards a goal state. Analogously our composite (shaping) reward functions return an increasing reward as the agent completes components of the full task. They are either piece-wise constant or smoothly varying across different regions of the state space that correspond to completed subtasks. In the case of Stack we use the following reward components (see the Appendix): These reward components can be combined in different ways. We consider three different composite rewards in additional to the original sparse task reward: Grasp shaping: Grasp brick 1 and Stack brick 1, i.e. the agent receives a reward of 0.25 when brick 1 has been grasped and a reward of 1.0 after completion of the full task. Reach and grasp shaping: Reach brick 1, Grasp brick 1 and Stack brick 1, i.e. the agent receives a reward of 0.125 when close to brick 1, a reward of 0.25 when brick 1 has been grasped, and a reward of 1.0 after completion of the full task. Full composite shaping: the sparse reward components as before in combination with the distancebased smoothly varying components A full description of the reward functions is provided in the Appendix. The actual reward functions given above are specific to the stacking task. But the general principle, a piecewise-constant sequence of rewards that increases as components of the tasks are completed, augmented with simple smoothly varying rewards that guide towards completion of individual subtasks should be widely applicable. It is important to note that the above reward functions do not describe all aspects of the task solution: we do not tell the agent how to grasp or stack but merely to bring the arm into a position where grasping (stacking) can be discovered from exploration and the sparse reward component. This eases the burden on the designer and is less likely to change the optimal solution in unwanted ways. In the previous section we described a strategy for designing effective compositional reward functions that alleviate the burden of exploration. However, designing such rewards can still be error prone and we did indeed encounter several unexpected failure cases as shown in the supplemental video (https://www.youtube.com/watch?v=7vmXOGwLq24) and detailed in the Appendix. Furthermore, suitable rewards may rely on privileged information not easily available in a real robotics setup. In this section we describe a second, complementary strategy for embedding prior knowledge into the training process and improving exploration. Specifically we propose to let the distribution of states at which the learning agent is initialized at the beginning of an episode reflect the compositional nature of the task: e.g., instead of initializing the agent at the beginning of the full task with both bricks on the table, we can initialize the agent occasionally with the brick already in its hand and thus prepared for stacking in the same way as when learning the subtask StackInHand in section 5.More generally, we can initialize episodes with states taken from anywhere along or close to successful trajectories. 
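A sketch of the full composite shaping recipe is given below: a piecewise-constant reward that grows as subtasks are completed, plus smooth distance terms guiding towards the next subtask. The predicate and distance inputs correspond to the reach/grasp/stack conditions and shaping terms defined in the Appendix; the particular scaling of the smooth terms is illustrative, not the exact form used in the experiments.

import math

def composite_reward(reached, grasped, stacked, d_hand_b1, d_b1_b2, w1=1.0, w2=1.0):
    # reached/grasped/stacked: subtask predicates (see Appendix); d_hand_b1 and
    # d_b1_b2: distance from the pinch site to brick 1 and from brick 1 to the
    # stacking site above brick 2. The sparse levels 0.125 / 0.25 / 1.0 follow
    # the text; the scaling of the smooth guidance terms is illustrative.
    if stacked:
        return 1.0
    if grasped:                              # guide brick 1 towards the stacking site
        return 0.25 + 0.25 * (1.0 - math.tanh(w2 * d_b1_b2) ** 2)
    if reached:
        return 0.125
    # neither reached nor grasped: guide the empty hand towards brick 1
    return 0.125 * (1.0 - math.tanh(w1 * d_hand_b1) ** 2)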
Suitable states can be either manually defined (as in section 5), or they can be obtained from a human demonstrator or a previously trained agent that can partially solve the task. This can be seen as a form of apprenticeship learning in which we provide teacher information by influencing the state visitation distribution. Unlike many other forms of imitation or apprenticeship learning, however, this approach requires neither complete trajectories nor demonstrator actions.. On all plots, x-axis is millions of transitions of total experience and y-axis is mean episode return. Policies with mean return over 100 robustly perform the full Stack from different starting states. Without reward shaping and basic start states only (a, blue) there is no learning progress. Instructive start states allow learning even with very uninformative sparse rewards indicating only overall task success (a,red).We perform experiments with two methods for generating the starting states. The first one uses the manually defined initial states from section 5 (both bricks located on the table or in states where the first brick is already in the gripper as if the agent just performed a successful grasp). The second method initializes the learning agent at start states sampled randomly from successful demonstration trajectories (derived from agents previously trained end-to-end on the compositional reward).The of these experiments are shown in FIG3. Green traces show for the four reward functions from section 6 in combination with the manually defined start states (from section 5). While there is still no learning for the sparse reward case, obtained with all other reward functions are improved. In particular, even for the second simplest reward function (Grasp shaping) we obtain some controllers that can solve the full task. Learning with the full composite shaping reward is faster and more robust than without the use of instructive states. The leftmost plot of FIG3 (red trace) shows for the case where the episode is initialized anywhere along trajectories from a pre-trained controller (which was obtained using full composite shaping; rightmost blue curve). We use this start state distribution in combination with the basic sparse reward for the overall case (Stack without shaping). Episodes were configured to be 50 steps, which we found to be better suited to this setup with assisted exploration. During testing we still used episodes with 150 steps as before (so that the traces are comparable). We can see a large improvement in performance in comparison to the two-state method variant even in the absence of any shaping rewards. We can learn a robust policy for all seeds within a total of 1 million environment transitions -less than 1 hour of interaction time on 16 simulated robots. These suggest that an appropriate start state distribution not only speeds up learning, it also allows simpler reward functions to be used. In our final experiment we found that the simplest reward function (i.e. only indicating overall experimental success) was sufficient to solve the task. In this case the robustness of trained policies to starting state variation is also encouraging. Over 1000 test trials we obtain 99.2% success for Grasp, 98.2% for StackInHand, and 95.5% for the full Stack task. We have introduced two extensions to the DDPG algorithm which make it a practical method for learning robust policies for complex continuous control tasks. 
We have shown that by decoupling the frequency of network updates from the environment interaction we can dramatically improve data-efficiency. Parallelizing data acquisition and learning substantially reduces wall clock time. In addition, we presented two methods that help to guide the learning process towards good solutions and thus reduce the pressure on exploration strategies and speed up learning. In combination these contributions allow us to solve a challenging manipulation problem end-to-end, suggesting that many hard control problems lie within the reach of modern learning methods. It is of course challenging to judge the transfer of in simulation to the real world. We have taken care to design a physically realistic simulation, and in initial experiments, which we have performed both in simulation and on the physical robot, we generally find a good correspondence of performance and learning speed between simulation and real world. This makes us optimistic that performance numbers may also hold when going to the real world. A second limitation of our simulated setup is that it currently uses information about the state of the environment would require additional instrumentation of the experimental setup, e.g. to determine the position of the two bricks in the work space. These are issues that need to be addressed with care as experiments move to robotics hardware in the lab. Nevertheless, the algorithms and techniques presented here offer important guidance for the application of deep reinforcement learning methods to dexterous manipulation on a real robot. 9 DDPG AND OTHER ALGORITHMS DDPG bears a relation to several other recent model free RL algorithms: The NAF algorithm BID6 which has recently been applied to a real-world robotics problem BID4 can be viewed as a DDPG variant where the Q-function is quadratic in the action so that the optimal action can be easily recovered directly from the Q-function, making a separate representation of the policy unnecessary. DDPG and especially NAF are the continuous action counterparts of DQN BID21, a Q-learning algorithm that recently re-popularized the use of experience replay and target networks to stabilize learning with powerful function approximators such as neural networks. DDPG, NAF, and DQN all interleave mini-batch updates of the Q-function (and the policy for DDPG) with data collection via interaction with the environment. These mini-batch based updates set DDPG and DQN apart from the otherwise closely related NFQ and NFQCA algorithms for discrete and continuous actions respectively. NFQ BID28 and NFQCA BID8 employ the same basic update as DDPG and DQN, however, they are batch algorithms that perform updates less frequently and fully re-fit the Q-function and the policy network after every episode with several hundred iterations of gradient descent with Rprop BID27 and using full-batch updates with the entire replay buffer. The aggressive training makes NFQCA data efficient, but the full batch updates can become impractical with large networks, large observation spaces, or when the number of training episodes is large. Finally, DPG can be seen as the deterministic limit of a particular instance of the stochastic value gradients (SVG) family BID10, which also computes policy gradient via back-propagation of value gradients, but optimizes stochastic policies. 
Target networks DQN DDPG, NAF Full-batch learning with Rprop Parameter resetting NFQ NFQCA In this section we provide further details regarding the composite reward functions described in the main text. For our experiments we derived these from the state vector of the simulation, but they could also be obtained through instrumentation in hardware. The reward functions are defined in terms of the following quantities:• b z: height of brick 1 above table • s B1 {x,y,z}: x,y,z positions of site located roughly in the center of brick 1 • s B2 {x,y,z}: x,y,z positions of site located just above brick 2, at the position where s B1 will be located when brick 1 is stacked on top of brick 2.• s P {x,y,z}: x,y,z positions of the pinch site of the hand -roughly the position where the fingertips would meet if the fingers are closed.. Using the above we can define the following conditions for the successful completion of subtasks:Reach Brick 1 The pinch site of the fingers is within a virtual box around the first brick position. DISPLAYFORM0 where ∆ reach {x,y,z} denote the half-lengths of the sides of the virtual box for reaching. Grasp Brick 1 Brick 1 is located above the table surface by a threshold, θ, that is possible only if the arm is the brick has been lifted. grasp =b z > θ Stack Brick 1 is stacked on brick 2. This is expressed as a box constraint on the displacement between brick 1 and brick 2 measured in the coordinate system of brick 2. stack =(|C DISPLAYFORM1 where ∆ stack {x,y,z} denote the half-lengths of the sides of the virtual box for stacking, and C is the rotation matrix that projects a vector into the coordinate system of brick 2. This projection into the coordinate system of brick 2 is necessary since brick 2 is allowed to move freely. It ensures that the box constraint is considered relative to the pose of brick 2. While this criterion for a successful stack is quite complicated to express in terms of sites, it could be easily implemented in hardware e.g. via a contact sensor attached to brick 2. The full composite reward also includes two distance based shaping components that guide the hand to the brick 1 and then brick 1 to brick 2. These could be approximate and would be relatively simple to implement with a hardware visual system that can only roughly identify the centroid of an object. The shaping components of the reward are given as follows:Reaching to brick 1: DISPLAYFORM0 Reaching to brick 2 for stacking r S2 (s B1, s B2) = 1 − tanh 2 (w 2 s B1 − s 2). Using the above components the reward functions we implement the composite reward functions described in the main text: Stack, Grasp shaping, Reach and grasp shaping, and Full composite shaping can be expressed as in equations below. These make use of the predicates above to determine whether which subtasks have been completed and return a reward accordingly.
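A compact sketch of the three predicates defined above is given below; the 3 cm grasp threshold and the box-constraint structure follow the text, while the function signatures themselves are assumptions made for the example.

import numpy as np

def reach(s_pinch, s_b1, half_extents):
    # Pinch site lies inside a virtual box around brick 1.
    return bool(np.all(np.abs(s_pinch - s_b1) < half_extents))

def grasp(b1_height, theta=0.03):
    # Brick 1 lifted above the table surface by more than the threshold (3 cm).
    return b1_height > theta

def stack(s_b1, s_b2, C, half_extents):
    # Displacement of brick 1 from the target site above brick 2, expressed in
    # brick 2's coordinate frame, lies inside a virtual box.
    rel = C @ (s_b1 - s_b2)
    return bool(np.all(np.abs(rel) < half_extents))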
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SJdCUMZAW
Data-efficient deep reinforcement learning can be used to learning precise stacking policies.
In recent years deep reinforcement learning has been shown to be adept at solving sequential decision processes with high-dimensional state spaces such as in the Atari games. Many reinforcement learning problems, however, involve high-dimensional discrete action spaces as well as high-dimensional state spaces. In this paper, we develop a novel policy gradient methodology for the case of large multidimensional discrete action spaces. We propose two approaches for creating parameterized policies: LSTM parameterization and a Modified MDP (MMDP) giving rise to Feed-Forward Network (FFN) parameterization. Both of these approaches provide expressive models to which backpropagation can be applied for training. We then consider entropy bonus, which is typically added to the reward function to enhance exploration. In the case of high-dimensional action spaces, calculating the entropy and the gradient of the entropy requires enumerating all the actions in the action space and running forward and backpropagation for each action, which may be computationally infeasible. We develop several novel unbiased estimators for the entropy bonus and its gradient. Finally, we test our algorithms on two environments: a multi-hunter multi-rabbit grid game and a multi-agent multi-arm bandit problem. In recent years deep reinforcement learning has been shown to be adept at solving sequential decision processes with high-dimensional state spaces such as in the Go game ) and Atari games BID6, BID7, BID9, BID11, BID12, BID22, BID1 ). In all of these success stories, the size of the action space was relatively small. Many reinforcement learning problems, however, involve high-dimensional action spaces as well as high-dimensional state spaces. Examples include StarCraft BID21, BID4 ), where there are many agents each of which can take a finite number of actions; and coordinating self-driving cars at an intersection, where each car can take a finite set of actions BID17 ).In this paper, we develop a novel policy gradient methodology for the case of large multidimensional action spaces. There are two major challenges in developing such a methodology:• For large multidimensional action spaces, how can we design expressive and differentiable parameterized policies which can be efficiently sampled?• In policy gradient, in order to encourage sufficient exploration, an entropy bonus term is typically added to the objective function. However, in the case of high-dimensional action spaces, calculating the entropy and its gradient requires enumerating all the actions in the action space and running forward and backpropagation for each action, which may be computationally infeasible. How can we efficiently approximate the entropy and its gradient while maintaining desirable exploration?In this paper, we first propose two approaches for parameterizing the policy: a LSTM model and a Modified MDP (MMDP) giving rise to Feed-Forward Network (FFN) model. For both of these parameterizations, actions can be efficiently sampled from the policy distribution, and backpropagation can be employed for training. We then develop several novel unbiased estimators for the entropy bonus and its gradient. These estimators can be combined with stochastic gradient descent giving a new a class of policy gradient algorithms with desirable exploration. Finally, we test our algorithms on two environments: a multi-agent multi-arm bandit problem and a multi-agent hunter-rabbit grid game. 
Consider an MDP with a d-dimensional action space A = A_1 × A_2 × · · · × A_d. Denote a = (a_1, . . ., a_d) for an action in A. A policy π(·|s) specifies for each state s a distribution over the action space A. In the standard RL setting, an agent interacts with an environment over a number of discrete timesteps BID19 BID15. At timestep t, the agent is in state s_t and samples an action a_t from the policy distribution π(·|s_t). The agent then receives a scalar reward r_t and the environment enters the next state s_{t+1}. The agent then samples a_{t+1} from π(·|s_{t+1}) and so on. The process continues until the end of the episode, denoted by T. The return R_t = Σ_{k=0}^{T−t} γ^k r_{t+k} is the discounted accumulated return from time step t until the end of the episode. In the policy gradient formulation, we consider a set of parameterized policies π_θ(·|s), θ ∈ Θ, and attempt to find a good θ within the parameter set Θ. Typically the policy π_θ(·|s) is generated by a neural network with θ denoting the weights and biases in the network. The parameters θ are updated by performing stochastic gradient ascent on the expected reward. One example of such an algorithm is REINFORCE, proposed by BID23, where in a given episode at timestep t the parameters θ are updated as follows:
θ ← θ + α ∇_θ log π_θ(a_t|s_t) (R_t − b_t(s_t))
where α is the learning rate and b_t(s_t) is a baseline. It is well known that the policy gradient algorithm often converges to a local optimum. To discourage convergence to a highly suboptimal policy, the policy entropy is typically added to the update rule:
θ ← θ + α [ ∇_θ log π_θ(a_t|s_t) (R_t − b_t(s_t)) + β ∇_θ H(π_θ(·|s_t)) ]
where
H(π_θ(·|s_t)) = − Σ_{a∈A} π_θ(a|s_t) log π_θ(a|s_t).
This approach is often referred to as adding an entropy bonus or entropy regularization BID23 and is widely used in different applications of neural networks, such as optimal control in Atari games, multi-agent games BID5, and optimizer search for supervised machine learning with RL BID0. β is referred to as the entropy weight. In applying policy gradient to MDPs with large multidimensional action spaces, there are two challenges. First, how do we design an expressive and differentiable parameterized policy which can be efficiently sampled? Second, for the case of large multidimensional action spaces, calculating the entropy and its gradient requires enumerating all the actions in the action space, which may be infeasible. How do we then enhance exploration in a principled way? To abbreviate the notation, we write p_θ(a) for π_θ(a|s_t), with the conditioning on s_t being implicit. We consider schemes whereby the sample components a_i, i = 1, . . ., d, are sequentially generated. In particular, after obtaining a_1, a_2, . . ., a_{i−1}, we generate a_i ∈ A_i from some parameterized distribution p_θ(·|a_1, a_2, . . ., a_{i−1}) defined over the one-dimensional set A_i. After generating the distributions p_θ(·|a_1, a_2, . . ., a_{i−1}), i = 1, . . ., d, and the action components a_1, . . ., a_d sequentially, we can then define
p_θ(a) = Π_{i=1}^{d} p_θ(a_i | a_1, . . ., a_{i−1}).
We now propose two methods for creating the parameterized distributions p_θ(a|a_1, a_2, . . ., a_{i−1}), a ∈ A_i. To our knowledge, these models are novel and have not been studied in the multidimensional action space literature. We assume that the sizes of the one-dimensional action sets are equal, that is, |A_1| = |A_2| = · · · = |A_d| = K. To handle action sets of different sizes, we include inconsequential actions if needed. The policy p_θ(a) can be learned with a recurrent neural network (RNN).
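Both parameterizations introduced next share the same sampling interface: action components are drawn one at a time from the conditional distributions, and the joint log-probability needed for the policy gradient update is accumulated along the way. A minimal sketch, where cond_dist is a stand-in for whichever network (the LSTM or the MMDP feed-forward network described below) produces p_θ(·|a_1, . . ., a_{i−1}):

import torch

def sample_action(cond_dist, d):
    # cond_dist(prefix) returns a length-K tensor of probabilities
    # p_theta(. | a_1, ..., a_{i-1}) for the next component, e.g. a softmax output.
    prefix, log_prob = [], 0.0
    for _ in range(d):
        probs = cond_dist(prefix)
        a_i = int(torch.multinomial(probs, 1))
        log_prob = log_prob + torch.log(probs[a_i])
        prefix.append(a_i)
    return prefix, log_prob          # a = (a_1, ..., a_d) and log p_theta(a)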
Long Short-Term Memory (LSTM), a special flavor of RNN, has recently been used with great success to represent conditional probabilities in language translation tasks BID18 ). Here, as shown in FIG0 (a), we use an LSTM to generate a parameterized multidimensional distribution p θ (·) and to sample a = (a 1, . . ., a d) from that distribution. Specifically, p θ (a|a 1, a 2, . . ., a i−1), a ∈ A i is given by the output of the LSTM. To generate a i, we run a forward pass through the LSTM with the input being a i−1 and the current state s t (and implicitly on a 1, . . ., a i−1 which influences h i−1). This produces a hidden state h i, which is then passed through a linear layer, producing a K dimensional vector. The softmax of this vector is taken to produce the one-dimensional conditional distribution p θ (a|a 1, a 2, ..., a i−1), a ∈ A i. Finally, a i is sampled from this one-dimensional distribution, and is then fed into the next stage of the LSTM to produce a i+1.After generating the action a = (a 1, . . ., a d), and the conditional probabilities p θ (·|a 1, a 2, . . ., a i−1), i = 1,..., d, we can evaluate p θ (a) as the product of the conditional probabilities. During training, we can also use backpropagation to efficiently calculate the first term on the RHS of the update rule in. As an alternative to using a LSTM to create parameterized multidimensional policies, we can modify the underlying MDP to create an equivalent MDP for which the action space is one dimensional at each time step. We refer to this MDP as the Modified MDP (MMDP). In the original MDP, we have state space S and action space A = A 1 × A 2 × · · · × A d where A i = {1, 2, . . ., K}. In MMDP, the state is modified to encapsulate the original state and all the action dimensions selected for state s so far, i.e., (s, a 1, a 2, . . ., a i, 0, . . ., 0) with a 1,..., a i being selected values for action dimensions 1 to i, and 0 being the placeholder for d − i − 1 dimensions. The new action space is A = {0, 1, . . ., K} and the new state space is S × {0, 1, . . ., K} d−1. The state transition probabilities for the MMDP are given by DISPLAYFORM0 where P (s |s, a 1, . . ., a d) is the transition probabiliy of the original MDP. The reward is only generated after all d component actions are taken. It is easily seen that the MMDP is equivalent to the original MDP.Since the MMDP has an one-dimensional action space, we can use a feed-forward network (FFN) to generate each action component as shown in FIG0 ). Note that the FFN input layer size is always |S| + K − 1 and the output layer size is K. As shown in, an entropy bonus is typically included to enhance exploration. However, for large multidimensional action spaces, calculating the entropy and the gradient of the entropy requires enumerating all the actions in the action space and running forward and backpropagation for each action. In this section we develop computationally efficient unbiased estimates for the entropy and its gradient. Let A = (A 1, . . ., A d) denote a random variable with distribution p θ (·). Let H θ denote the exact entropy of the distribution p θ (a): DISPLAYFORM0 (a) The RNN architecture. To generate ai, we input st and ai−1 into the RNN and then pass the ing hidden state hi through a linear layer and a softmax to generate a distribution, from which we sample ai. DISPLAYFORM1... DISPLAYFORM2 The MMDP architecture. To generate ai, we input st and a1, a2,..., ai−1 into a FFN. 
The output is passed through a softmax layer, providing a distribution from which we sample ai. Since the input size of the FFN is fixed, when generating ai, constants 0 serve as placeholders for ai+1,..., a d−1 in the input to the FFN. During training within an episode, for each state s t, the policy (using, for example, LSTM or MMDP) generates an action a = (a 1, a 2, . . ., a d). A crude approximation of the entropy bonus is: DISPLAYFORM0 This approximation is an unbiased estimate of H θ but its variance is likely to be large. To reduce the variance, we can generate M action samples a, a,..., a (M) when in s t and average the log action probabilities over the samples. However, generating a large number of samples is costly, especially when each sample is generated from a neural network, since each sample requires one additional forward pass. In this section, we develop an alternative unbiased estimator for entropy which only requires the one episodic sample. In the course of an episode, an action a = (a 1, a 2, . . ., a d) is generated for each s t. The alternative estimator accounts for the entropy along each dimension of the action space. DISPLAYFORM0 where DISPLAYFORM1 which is the entropy of A i conditioned on a i−1. This approximation of entropy bonus is computationally efficient since for each dimension i, we need to obtain p θ (·|a i−1), its log and gradient anyway during training. We refer to this approximation as the smoothed entropy. The smoothed entropy H θ (A) has several appealing properties. The proofs of Theorem 1 and Theorem 3 are straightforward and omitted. DISPLAYFORM2 is an unbiased estimator of the exact entropy H θ.Theorem 2. If p θ (a) has a multivariable normal distribution with mean and variance depending on θ, then: DISPLAYFORM3 Thus, the smoothed entropy equals the exact entropy for a multi-variate normal parameterization of the policy (Proof in Appendix B). Theorem 3. (i) If there is a sequence of weights θ 1, θ 2,... such that p θn (·) converges to the uniform distribution over A, then sup DISPLAYFORM4 Thus, the smoothed entropy H θ (a) mimics the exact entropy in that it has the same supremum and infinum values as the exact entropy. The above theorems indicate that H θ (a) may serve as a good proxy for H θ: it is an unbiased estimator for H θ, it has the same minimum and maximum values when varying θ; and in the special case when p θ (a) has a multivariate normal distribution, it is actually equal to H θ for all a ∈ A. Our numerical experiments have shown that the smoothed estimator H θ (a) typically has lower variance than the crude estimator H crude θ (a). However, it is not generally true that the smoothed estimator always has lower variance as counterexamples can be found. So far we have been looking at estimates of entropy. But the policy gradient algorithm uses the gradient of the entropy rather than just simply the entropy. As it turns out, the gradient of estimators H crude θ (a) and H θ (a) are not unbiased estimates of the gradient of the entropy. In this subsection, we provide unbiased estimators for the gradient of the entropy. For simplicity, in this section, we assume an one-step decision setting, such as in a multi-armed bandit problem. A straightforward calculation shows: DISPLAYFORM0 Suppose a is one sample from p θ (·). A crude unbiased estimator for the gradient of the entropy therefore is: − log p θ (a)∇ θ log p θ (a) = log p θ (a)∇ θ H crude θ (a). 
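As a concrete illustration, all of the estimators above can be computed from the per-dimension conditional distributions that are produced anyway while the action is sampled. A minimal PyTorch sketch follows; the list-of-tensors layout is an assumption made for the example.

import torch

def entropy_terms(cond_probs, sampled):
    # cond_probs: list of length-K probability tensors p_theta(.|a_1..a_{i-1})
    # produced while sampling; sampled: the chosen component indices (a_1, ..., a_d).
    log_p_a = sum(torch.log(p[a]) for p, a in zip(cond_probs, sampled))
    crude = -log_p_a                                                  # crude entropy estimate
    smoothed = sum(-(p * torch.log(p)).sum() for p in cond_probs)     # smoothed entropy estimate
    # Surrogate term whose gradient is the crude unbiased estimate of the entropy
    # gradient, -log p_theta(a) * grad log p_theta(a).
    crude_grad_surrogate = -log_p_a.detach() * log_p_a
    return crude, smoothed, crude_grad_surrogate

The last term is a surrogate whose gradient equals the crude unbiased estimator −log p_θ(a)∇_θ log p_θ(a) from the preceding paragraph.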
Note that this estimator is equal to the gradient of the crude estimator multiplied by a correction factor. Analogous to the smoothed estimator for entropy, we can also derive a smoothed estimator for the gradient of the entropy. Theorem 4. If a is a sample from p θ (·), then DISPLAYFORM1 is an unbiased estimator for the gradient of the entropy (Proof in Appendix C).Note that this estimate for the gradient of the entropy is equal to the gradient of the smoothed estimate H θ (a) plus a correction term. We refer to this estimate of the entropy gradient as the unbiased gradient estimate. We designed experiments to compare the LSTM and MMDP models, and to also compare how the different entropy approximations perform for both. For each entropy approximation, the entropy weight as described in was tuned to give the highest episode reward. For MMDP, the number of hidden layers was also tuned from 1 to 7. The rest of the hyperparameters are listed in Appendix A. In this environment, there is a n × n grid. At the beginning of each episode d hunters and d rabbits are randomly placed in the grid. The rabbits remain fixed in the episode, and each hunter can move to a neighboring square (including diagonal neighbors) or stay at the current square. So each hunter has nine possible actions, and altogether there are |A| = 9 d actions at each time step. When a hunter enters a square with a rabbit, the hunter captures the rabbit and remains there until the end of the game. In each episode, the goal is for the hunters to capture the rabbits as quickly as possible. Each episode is allowed to run for at most 10,000 time steps. To provide a dense reward signal, we formalize the goal with the following modification: capturing a rabbit gives a reward of 1, which is discounted by the number of time steps taken since the beginning of the episode. The discount factor is 0.8. The goal is to maximize the episode's total discounted reward. After a hunter captures a rabbit, they both become inactive. The representation of an active hunter or rabbit is (1, y position, x position). The representation of an inactive hunter or rabbit is (0, -1, -1). TAB0 shows the performance of the LSTM and MMDP models with different entropy estimates. (smoothed mode entropy is explained in Appendix D). The evaluation was performed in a square grid of 5 by 5 with 5 hunters and 5 rabbits. Training was run for 1 million episodes for each of the seeds. All evaluations are averaged over 1,000 episodes per seed for a total of 5,000 episodes. First, we observe that the LSTM model always does better than the MMDP model, particularly for the episode length. Second, we note that policies obtained with the entropy approximations all perform better than policies obtained without entropy or with crude entropy. For the LSTM model, the best performing approximation is smoothed entropy, reducing the mean episode length by 45% and increasing the mean episode reward by 10% compared to without entropy. We also note that there is not a significant difference in performance between the smoothed entropy estimate, smoothed mode estimate, and the unbiased gradient estimate. As shown in TAB1, smoothed entropy is also more robust to the initial seed than without entropy. For example, for the LSTM model, in the case of without entropy, seed 0 leads to significantly worse than the seeds 1-4. This does not happen to smoothed entropy. We now consider how policies trained with entropy approximations compare with polices trained with exact entropy. 
In order to calculate exact entropy in an acceptable amount of time, we reduced the number of hunters and rabbits to 4 hunters and 4 rabbits. Training was run for 50,000 episodes. TAB2 shows the performance differences between policies trained with entropy approximations and exact entropy. We see that the best entropy approximations perform only slightly worse than exact entropy for both LSTM and MMDP. Once again we see that the LSTM model performs better than the MMDP model. We examine a multi-agent version of the standard multi-armed bandit problem, where there are d agents each pulling one of K arms, with d ≤ K. The k th arm generates a reward r k. The total reward in a round is generated as follows. In each round, each agent chooses an arm. All of the chosen arms are then pulled, with each pulled arm generating a reward. Note that the total number of arms chosen, c, may be less than d since some agents may choose the same arm. The total reward is the sum of rewards from the c chosen arms. The optimal policy is for the d agents to collectively pull the d arms with the highest rewards. Additionally, among all the optimal assignments of d agents to the d arms that yield the highest reward, we add a bonus reward with probability p * if one particular agent-to-arms assignment is chosen. We performed experiments with 4 agents and 10 arms, with the k th arm providing a reward of k. The exceptional assignment gets a bonus of 200 with probability 0.01, and no bonus with probability 0.99. Thus the maximum expected reward is 36. Training was run for 100,000 rounds for each of the seeds. TAB3 shows average for the last 500 of the 100,000 rounds. The for the multi-agent bandit problem are consistent with those for the hunter-rabbit problem. Policies obtained with the entropy approximations all perform better than policies obtained without entropy or with crude entropy, particularly for the percentage of optimal arms pulled. We again note that using the unbiased gradient estimate does not perform significantly better than using the smoothed entropy estimate. There has been limited attention in the RL literature with regards to large discrete action spaces. BID13 proposes generalized value functions in the form of H-value functions, and also propose approximate linear programming as a solution technique. Their methodology is not suited for deep RL, and approximate linear programming may lead to highly sub-optimal solutions. Dulac-Arnold et al. FORMULA1 embeds discrete actions in a continuous space, picks actions in the continuous space and map these actions back into the discrete space. However, their algorithm introduces a new hyper-parameter that requires tuning for every new task. Our approach involves no new hyper-parameter other than those normally used in deep learning. In BID17, each action dimension is treated as an agent and backpropagation is used to learn coordination between the agents. The approach is particularly adept for problems where agents leave and enter the system. However, the approach requires homogenous agents, and has not been shown to solve large-scale problems. Furthermore, the decentralized approach will potentially lead to highly suboptimal polices even though communication is optimized among the agents. To our knowledge, we are the first to propose using LSTMs and a modified MDP to create policies for RL problems with large multidimensional action spaces. 
Although this leads to algorithms that are straightforward, the approaches are natural and well-suited to multidimensional action spaces. We also propose novel estimators for the entropy regularization term that is often used in policy gradient. To the best of our knowledge, no prior work has dealt with approximating the policy entropy for MDP with large multidimensional discrete action space. On the other hand, there has been many attempts to devise methods to encourage beneficial exploration for policy gradient. BID10 modifies the entropy term by adding weights to the log action probabilities, leading to a new optimization objective termed under-appreciated reward exploration. While entropy regularization has been mostly used in algorithms that explicitly parameterize the policies, BID14 applies entropy regularization to Q-learning methods. They make an important observation about the equivalence between policy gradient and entropy regularized Q-learning, which they term soft Q-learning. In this paper, we developed a novel policy gradient methodology for the case of large multidimensional discrete action spaces. We proposed two approaches for creating parameterized policies: LSTM parameterization and a Modified MDP (MMDP) giving rise to Feed-Forward Network (FFN) parameterization. Both of these approaches provide expressive models to which backpropagation can be applied for training. We then developed several novel unbiased estimators for entropy bonus and its gradient. We did experimental work for two environments with large multidimensional action space. For these environments, we found that both the LSTM and MMDP approach could successfully solve large multidimensional action space problems, with the LSTM approach generally performing better. We also found that the smoothed estimates of the entropy and the unbiased gradient estimate of the entropy gradient can help reduce computational cost while not sacrificing significant loss in performance. Hyperparameters for hunter-rabbit gameThe LSTM policy has 128 hidden nodes. For the MMDP policy, the number of hidden layers for smoothed entropy, smoothed mode entropy, unbiased gradient estimate, crude entropy and without entropy are 5, 3, 3, 4 3 and 3 respectively. Each MMDP layer has 128 nodes. We parameterize the baseline in with a feed forward neural network with one hidden layer of size 64. This network was trained using first visit Monte Carlo return to minimize the L1 loss between actual and predicted values of states visited during the epidode. Both the policies and baseline are optimized after each episode with RMSprop BID20 ). The RHS of FORMULA1 To obtain the in TAB2, the entropy weights for LSTM smoothed entropy, LSTM exact entropy, MMDP unbiased gradient estimate and MMDP exact entropy are 0.03, 0.01, 0.03 and 0.01 respectively. The MMDP networks have three layers with 128 nodes in each layer. Experimental are averaged over five seeds. The experiments were run with 4 agents and 10 arms. For the 10 arms, their rewards are i for i = 1,..., 10. The LSTM policy has 32 hidden nodes. The baseline in is a truncated average of the reward of the last 100 rounds. The entropy weight for crude entropy, smoothed entropy and unbiased gradient estimate are 0.005, 0.001 and 0.003 respectively. The learning rates for without entropy, crude entropy, smoothed entropy and unbiased gradient estimate are 0.006, 0.008, 0.002 and 0.005 respectively. Experimental are averaged over ten seeds. 
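The reward structure of the multi-agent bandit used above can be simulated directly from the stated numbers (4 agents, 10 arms paying 1 through 10, a bonus of 200 with probability 0.01 for one exceptional optimal assignment). Which assignment is exceptional is not specified, so the one chosen below is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
K, d = 10, 4                       # 10 arms, 4 agents
rewards = np.arange(1, K + 1)      # arm k pays k
BONUS, P_BONUS = 200.0, 0.01
# Hypothetical choice of the one "exceptional" optimal assignment.
EXCEPTIONAL = (6, 7, 8, 9)         # agent j pulls arm EXCEPTIONAL[j] (0-based)

def round_reward(choices):
    """choices: length-d array of 0-based arm indices, one per agent."""
    distinct = np.unique(choices)              # duplicated choices count once
    r = rewards[distinct].sum()
    if tuple(choices) == EXCEPTIONAL and rng.random() < P_BONUS:
        r += BONUS
    return r

# Optimal play: agents collectively cover the d highest arms (7+8+9+10 = 34),
# plus 0.01 * 200 = 2 expected bonus, giving the stated maximum of 36.
print(round_reward(np.array([6, 7, 8, 9])))
```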
We first note that for DISPLAYFORM0 where X 1 and X 2 are random vectors, we have X 2 | X 1 = x 1 ∼ N (μ,Σ) wherē DISPLAYFORM1 Observe that the covariance matrix of the conditional distribution does not depend on the value of x 1 .Also note that for X ∼ N (µ, Σ), the entropy of X takes the form DISPLAYFORM2 where k is the dimension of X and | · | denotes the determinant. Therefore, the entropy of a multivariate normal random variable depends only on the variance and not on the mean. Because A is multivariate normal, the distribution of A i given A 1 = a 1,..., A i−1 = a i−1 has a normal distribution with a variance σ 2 i that does not depend on a 1,..., a i−1. Therefore DISPLAYFORM3 does not depend on a 1,..., a i−1 and hence H θ (a) does not depend on a. Combining this with the fact that H θ (a) is an unbiased estimator for H θ gives H θ (a) = H θ for all a ∈ A. From, we have: DISPLAYFORM0 We will now use conditional expectation to calculate the terms in the double sum. For i < j: DISPLAYFORM1 For i > j: Combining these three conditional expectations with, we obtain: DISPLAYFORM2 DISPLAYFORM3 Depending on the episodic action a at a given time step in the episode, the smoothed entropy H θ (a) may give unsatisfactory . For example, suppose for a particular episodic action a, H θ (a) H θ. In this case, the policy gradient may ignore the entropy bonus term, thinking that the policy already has enough entropy when it perhaps does not. We therefore consider alternative approximations which may improve performance at modest additional computational cost. First consider Thus in this case, instead of calculating the entropy over a sample action a, we calculate it over the most likely action a *. The problem here is that it is not easy to find a * when the given conditional probabilities p θ (a|a 1, . . ., a i−1) are not in closed form but only available algorithmically as outputs of neural networks. DISPLAYFORM0 A more computationally efficient approach would be to choose the action greedily: a 1 = argmax The actionâ is an approximation for the mode of the distribution p θ (·). As often done in NLP, we can use beam search to determine an action a that is a better approximation, that is, p θ (a) ≥ p θ (â). Indeed, the above H θ definition is beam search with beam size equal to 1. We refer to H θ as smoothed mode entropy. H θ with an appropriate beam size may be a better approximation for the entropy H θ than H θ (a). However, calculating H θ and its gradient comes with some computational cost. For example, with a beam size equal to one, we would have to make two passes through the neural network at each time step: one to obtain the episodic sample a and the other to obtain the greedy actionâ. For beam size n we would need to make n + 1 passes. We note that H θ is a biased estimator for H θ but with no variance. Thus there is a bias-variance tradeoff between H θ (a) and H θ. Note that H θ also satisfies Theorems 2 and 3 in subsection 4.2.
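A sketch of the beam-size-1 (greedy) variant of the smoothed mode entropy described above. It assumes the smoothed estimate is the sum of exact conditional entropies along the chosen prefix, and `conditional` stands in for the policy's conditional distributions.

```python
import numpy as np

def greedy_mode_entropy(conditional, d):
    """Smoothed mode entropy with beam size 1: follow the greedy action
    a_hat_i = argmax p(. | a_hat_<i) and accumulate the exact entropy of each
    conditional distribution along that greedy prefix."""
    prefix, H = [], 0.0
    for _ in range(d):
        p = conditional(prefix)
        H += -(p * np.log(p)).sum()
        prefix.append(int(np.argmax(p)))
    return H, prefix

uniform = lambda prefix, K=9: np.full(K, 1.0 / K)
print(greedy_mode_entropy(uniform, d=5))    # 5 * log 9 for a uniform policy
```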
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rk3b2qxCW
policy parameterizations and unbiased policy entropy estimators for MDPs with large multidimensional discrete action spaces
Neural networks offer high-accuracy solutions to a range of problems, but are computationally costly to run in production systems. We propose a technique called Deep Learning Approximation to take an already-trained neural network model and build a faster (and almost equally accurate) network by manipulating the network structure and coefficients without requiring re-training or access to the training data. Speedup is achieved by applying a sequential series of independent optimizations that reduce the floating-point operations (FLOPs) required to perform a forward pass. An optimal lossy approximation is chosen for each layer by weighing the relative accuracy loss and FLOP reduction. On PASCAL VOC 2007 with the YOLO network, we show an end-to-end 2x speedup in a network forward pass with a 5% drop in mAP that can be re-gained by finetuning, enabling this network (and others like it) to be deployed in compute-constrained systems. An optimal approximation is chosen by calculating the runtime and accuracy loss from all possible approximations listed in Table 1. When chaining approximations, R is the ratio of the final output FLOPs to the FLOPs from W, and A is the product of the accuracy scores for each approximation in the chain, since any error introduced by the first will be carried over to the next. The reduction in FLOPs correlates with the absolute speedup, with the exception of the ResNet50 network, as shown in TAB3. Table 4 shows that the input parameter p can be chosen based on the desired runtime / accuracy tradeoff. Not every network sees much of an improvement in runtime from DLA (for example ResNet50 in TAB3). Additionally, pushing beyond the 2x speedup observed on YOLO without significant accuracy loss is not possible with
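A minimal sketch of the kind of low-rank (SVD) layer approximation DLA applies to trade accuracy for FLOPs: one dense weight matrix is replaced by two smaller factors, and the FLOP ratio R is computed for the chain. The rank-selection rule `keep` is an assumed stand-in for the input parameter p.

```python
import numpy as np

def svd_approximate(W, keep=0.5):
    """Replace one dense layer W (out x in) by two low-rank factors A, B."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    r = max(1, int(keep * len(s)))
    A = U[:, :r] * s[:r]           # (out x r)
    B = Vt[:r, :]                  # (r x in)
    return A, B

def dense_flops(out_dim, in_dim):
    return 2 * out_dim * in_dim    # multiply-adds for one matrix-vector product

W = np.random.randn(1024, 4096)
A, B = svd_approximate(W, keep=0.25)
original = dense_flops(*W.shape)
approx = dense_flops(*A.shape) + dense_flops(*B.shape)
print(f"FLOP ratio R = {approx / original:.2f}")
# forward pass: y = W x  becomes  y = A (B x)
x = np.random.randn(4096)
err = np.linalg.norm(W @ x - A @ (B @ x)) / np.linalg.norm(W @ x)
print(f"relative error = {err:.3f}")
```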
[ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
SyGdZXSboX
Decompose weights to use fewer FLOPs with SVD
Asynchronous distributed methods are a popular way to reduce the communication and synchronization costs of large-scale optimization. Yet, for all their success, little is known about their convergence guarantees in the challenging case of general non-smooth, non-convex objectives, beyond cases where closed-form proximal operator solutions are available. This is all the more surprising since these objectives are the ones appearing in the training of deep neural networks. In this paper, we introduce the first convergence analysis covering asynchronous methods in the case of general non-smooth, non-convex objectives. Our analysis applies to stochastic sub-gradient descent methods both with and without block variable partitioning, and both with and without momentum. It is phrased in the context of a general probabilistic model of asynchronous scheduling accurately adapted to modern hardware properties. We validate our analysis experimentally in the context of training deep neural network architectures. We show their overall successful asymptotic convergence as well as exploring how momentum, synchronization, and partitioning all affect performance. Training parameters arising in Deep Neural Net architectures is a difficult problem in several ways . First, with multiple layers and nonlinear activation functions such as sigmoid and softmax functions, the ultimate optimization problem is nonconvex. Second, with ReLU activation functions and max-pooling in convolutional structures, the problem is nonsmooth, i.e., it is not differentiable everywhere, although typically the set of non-differentiable points is a set of measure zero in the space of the parameters. Finally, in many applications it is unreasonable to load the whole sample size in memory to evaluate the objective function or (sub)gradient, thus samples must be taken, necessitating analysis in a probabilistic framework. The analysis of parallel optimization algorithms using shared memory architectures, motivated by applications in machine learning, was ushered in by the seminal work of (although precursors exist, see the references therein). Further work refined this analysis, e.g. and expanded it to nonconvex problems, e.g. . However, in all of these , a very simplistic model of asynchronous computation is presented to analyze the problem. Notably, it is assumed that every block of the parameter, among the set of blocks of iterates being optimized, has a fixed, equal probability of being chosen at every iteration, with a certain vector of delays that determine how old each block is that is stored in the cache relative to the shared memory. As one can surmise, this implies complete symmetry with regards to cores reading and computing the different blocks. This does not correspond to asynchronous computation in practice. In particular, in the common Non-Uniform Memory Access (NUMA) setting, practical experience has shown that it can be effective for each core to control a set of blocks. Thus, the choice of blocks will depend on previous iterates, which core was last to update, creating probabilistic dependence between the delay vector and the choice of block. This exact model is formalized in Cannelli et al., which introduced a new probabilistic model of asynchronous parallel optimization and presented a coordinate-wise updating successive convex approximation algorithm. 
In this paper, we are interested in studying parallel asynchronous stochastic subgradient descent for general nonconvex nonsmooth objectives, such as the ones arising in the training of deep neural network architectures. Currently, there is no work in the literature specifically addressing this problem. The closest related work is given by and, which consider asynchronous proximal gradient methods for solving problems of the form f (x) + g(x), where f is smooth and nonconvex, and g(x) is nonsmooth, with an easily computable closed form prox expression. This restriction applies to the case of training a neural network which has no ReLUs or max pooling in the architecture itself, i.e., every activation is a smooth function, and there is an additional regularization term, such as an 1. These papers derive expected rates of convergence. In the general case, where the activations themselves are nonsmooth-for instance in the presence of ReLUs-there is no such additive structure, and no proximal operator exists to handle away the non-smoothness and remove the necessity of computing and using subgradients explicitly in the optimization procedure. This general problem of nonsmooth nonconvex optimization is already difficult (see, e.g.,), and the introduction of stochastically uncertain iterate updates creates an additional challenge. Classically, the framework of stochastic approximation, with stochastic estimates of the subgradient approximating elements in a differential inclusion that defines a flow towards minimization of the objective function, is a standard, effective approach to analyzing algorithms for this class of problems. Some texts on the framework include , which we shall reference extensively in the paper, and. See also and Ruszczyński for some classical in convergence of stochastic algorithms for nonconvex nonsmooth optimization. Interest in stochastic approximation has resurfaced recently sparked by the popularity of Deep Neural Network architectures. For instance, see the analysis of nonconvex nonsmooth stochastic optimization with an eye towards such models in and. In this paper, we provide the first analysis for nonsmooth nonconvex stochastic subgradient methods in a parallel asynchronous setting, in the stochastic approximation framework. For this, we employ the state of the art model of parallel computation introduced in Cannelli et al., which we map onto the analysis framework of. We prove show that the generic asynchronous stochastic subgradient methods considered are convergent, with probability 1, for nonconvex nonsmooth functions. This is the first for this class of algorithms, and it combines the state of the art in these two areas, while extending the scope of the therein. Furthermore, given the success of momentum methods (see, e.g.,), we consider the momentum variant of the classical subgradient method, again presenting the first convergence analysis for this class of algorithms. We validate our analysis numerically by demonstrating the performance of asynchronous stochastic subgradient methods of different forms on the problem of ResNet deep network training. We shall consider variants of asynchronous updating with and without write locks and with and without block variable partitioning, showing the nuances in terms of convergence behavior as depending on these strategies and properties of the computational hardware. 
Consider the minimization problem min where f: R n → R is locally Lipschitz continuous (but could be nonconvex and nonsmooth) and furthermore, it is computationally infeasible to evaluate f (x) or an element of the Clarke subdifferential ∂f (x). The problem has many applications in machine learning, including the training of parameters in deep neural networks. In this setting, f (x) is loss function evaluated on some model with x as its parameters, and is dependant on input data A ∈ R n×m and target values y ∈ R m of high dimension, i.e., f (x) = f (x; (A, y)), with x a parameter to optimize with respect to the loss function. In cases of practical interest, f is decomposable in finite-sum form, where l: R m × R m → R represents the training loss and {(A i, y i)} is a partition of (A, y). We are concerned with algorithms that solve in a distributed fashion, i.e., using multiple processing cores. In particular, we are analyzing the following inconsistent read scenario: before computation begins, each core c is allocated a block of variables I c, for which it is responsible to update. At each iteration the core modifies a block of variables i k, chosen randomly among I c. Immediately after core c completes its k-th iteration, it updates the shared memory. A lock is only placed on the shared memory when a core writes to it, thus the process of reading may in computations of the function evaluated at variable values that never existed in memory, e.g., block 1 is read by core 1, then core 3 updates block 2, then core 1 reads block 2, and now block 1 is operating on a vector with the values in blocks 1 and 2 not simultaneously at their present local values at any point in time in shared memory. We shall index iterations to indicate when a core writes a new set of values for the variable into memory. kc n } be the vector of delays for each component of the variable used to evaluate the subgradient estimate, thus the j-th component of x that is used in the computation of the update at k is actually not In this paper, we are interested in applying stochastic approximation methods, of which the classic stochastic gradient descent forms a special case. Since f in is in general nonsmooth, we will exploit subgradient methods. Denote by ξ k the set of mini-batches used to compute an element of The set of minibatches ξ kc is chosen uniformly at random from (A, y), independently at each iteration. By the central limit theorem, the error is asymptotically Gaussian as the total size of the data as well as the size of the mini-batches increases. Asynchronous System Specification. We consider a shared-memory system with p processes concurrently and asynchronously performing computations on independent compute-cores. We interchangeably use the terms process and core. The memory allows concurrent-read-concurrent-write (CRCW) 1 access. The shared-memory system offers word-sized atomic read and fetch-and-add (faa) primitives. Processes use faa to update the components of the variable. We now recall the stochastic subgradient algorithm under asynchronous updating in Algorithm 1, from the perspective of the individual cores. The update of the iterate performed by where m is the momentum constant, required to satisfy 0 < m < 1 Sample i from the variables I c corresponding to c. Sample ξ. 
Compute a subgradient estimate g kc, local to k c Write, with a lock, to the shared memory vector partition Update, with a lock, to the shared memory vector partition k c = k c + 1 8: end while For the discrete time probabilistic model of computation introduced in Cannelli et al., we must present the basic requirements that must hold across cores. In particular, it is reasonable to expect that if some core is entirely faulty, or exponentially deccelerates in its computation, convergence should not be expected to be attained. Otherwise we wish to make the probabilistic assumption governing the asynchronous update scheduling as general as possible in allowing for a variety of possible architectures. The details of the probabilistic assumptions are technical and left to the Supplementary Material. It can be verified that the stochastic approxmation framework discussed in the next section detailing the convergence satisfies these assumptions. We have the standard assumption about the stochastic sub-gradient estimates. These assumptions hold under the standard stochastic gradient approach wherein one samples some subset ξ ⊆ {1, ..., M} of mini-batches uniformly from the set of size |ξ| subsets of {1, ..., M}, done independently at each iteration. This in independent noise at each iteration being applied to the stochastic subgradient term. From these mini-batches ξ, a subgradient is taken for each j ∈ ξ and averaged. Assumption 3.1. The stochastic subgradient estimates g(x, ξ) satisfy, where β(x) defines a bias term that is zero if f (·) is continuously differentiable at x. We provide some more details on the "global" model of asynchronous stochastic updating in the Supplementary material. In this section, we shall redefine the algorithm and its associated model presented in the previous section in a framework appropriate for analysis from the stochastic approximation perspective. Consider the Algorithm described as such, for data block i with respective iteration k, where Y j,i is the estimate of the partial subgradient with respect to block variables indexed by i at local iteration j. In the context of Algorithm 1, the step size is defined to be the subsequence {γ l} where l is the iteration index for the core corresponding to block i. Thus it takes the subsequence of γ k for which i k = i is the block of variables being modified. to denote a selection of some element of the subgradient, with respect to block These are standard conditions implied by the sampling procedure in stochastic gradient methods, introduced by the original Robbins-Monro method . In Stochastic Approximation, the standard approach is to formulate a dynamic system or differential inclusion that the sequence of iterates approaches asymptotically. For this reason, we introduce real time into the model of asynchronous computation, looking at the actual time elapsed between iterations for each block i. Define δτ k,i to be the real elapsed time between the k-th and k + 1-st iteration for block i. We let T k,i = k−1 j=0 δτ j,i and define for σ ≥ 0, p l (σ) = min{j : T j,i ≥ σ} the first iteration at or after σ. 
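Before proceeding with the analysis, the per-core procedure of Algorithm 1 can be sketched concretely. This is a toy sketch only: `subgrad` is a hypothetical stand-in for a minibatch subgradient oracle, the heavy-ball form of the momentum update is assumed, and real implementations would rely on atomic fetch-and-add rather than Python locks.

```python
import numpy as np
from multiprocessing import Array, Lock, Process

def subgrad(x, batch):
    # Hypothetical stochastic subgradient of a nonsmooth toy objective.
    return np.sign(x) + 0.1 * np.random.randn(len(x))

def worker(shared_x, lock, block, n_iter=1000, m=0.5):
    """One core: lock-free (possibly inconsistent) reads of the whole vector,
    momentum-corrected subgradient steps on its own block, locked writes of
    that block only, and a local iteration counter for the step size."""
    x_view = np.frombuffer(shared_x.get_obj())     # shared memory, read without lock
    v = np.zeros(len(block))                       # local momentum buffer
    for k in range(1, n_iter + 1):
        x_local = x_view.copy()                    # inconsistent read is allowed
        batch = np.random.randint(0, 10, size=64)
        g = subgrad(x_local, batch)[block]
        v = m * v + g                              # momentum, 0 < m < 1
        step = 0.1 / np.sqrt(k)                    # locally decreasing step size
        with lock:                                 # consistent write of own block
            x_view[block] -= step * v

if __name__ == "__main__":
    n, p = 8, 2
    shared_x = Array("d", np.random.randn(n).tolist())
    lock = Lock()
    blocks = np.array_split(np.arange(n), p)
    procs = [Process(target=worker, args=(shared_x, lock, b)) for b in blocks]
    [q.start() for q in procs]; [q.join() for q in procs]
    print(np.frombuffer(shared_x.get_obj()))
```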
We assume now that the step-size sequence comes from an underlying real function, i.e., We now define new σ-algebras F k,i and F + k,i defined to measure the random variables {{x 0}, {Y j−1,i : j, i with T j,i < T k+1,i}, {T j,i : j, i with T j,i ≤ T k+1,i}}, and, {{x 0}, {Y j−1,i : j, i with T j,i ≤ T k+1,i}, {T j,i : j, i with T j,i ≤ T k+1,i}}, indicating the set of events up to, and up to and including the computed noisy update at k, respectively. Note that each of these constructions is still consistent with a core updating different blocks at random, with δτ k,i arising from an underlying distribution for δτ k,c(i). Let us relate these σ-algebras to those in the previous section. Note that this takes subsets of random The form of Y k,i defined above incorporates the random variable d k and i k, as in which components are updated and the age of the information used by where the subgradient is evaluated, as well as ξ k by the presence of the Martingale difference noise. For any sequence Z k,i we write Z σ k,i = Z pi(σ)+k,i, where p i (σ) is the least integer greater than or equal to σ. Thus, let δτ σ k,i denote the inter-update times for block i starting at the first update at or after σ, and γ σ k,i the associated step sizes.. We introduce piecewise constant interpolations of the vectors in real-time given by, Now we detail the assumptions on the real delay times. These ensure that the real-time delays do not grow without bound, either on average, or on relevantly substantial probability mass. Intuitively, this means that it is highly unlikely that any core deccelerates exponentially in its computation speed. and there is aū such that for any compact set A, Assumption 3.4. It holds that, This assumption holds if, e.g., the set of x such that f (·) is not continuously differentiable at x is of measure zero, which is the case for objectives of every DNN architecture the authors are aware of. As mentioned earlier, the primary goal of the previous section is to define a stochastic process that approximates some real-time process asymptotically, with this real-time process defined by dynamics for which at the limit the path converges to a stationary point. In particular, we shall see that the process defined for the iterate time scale approximates the path of a differential inclusion, and we shall see that this path defines stationary points of f (·). We must define the notion of an invariant set for a differential inclusion (DI). Definition 3.1. A set Λ ⊂ R n is an invariant set for a DIẋ ∈ g(x) if for all x 0 ∈ Λ, there is a solution x(t), −∞ < t < ∞ that lies entirely in Λ and satisfies x = x 0. Now we state our main . Its complete proof can be found in the Supplementary Material. Theorem 3.1. Let all the stated Assumptions hold. Then, the following system of differential inclusions, holds for any u satisfying 3.3. On large intervals [0, T],x σ (·) spends nearly all of its time, with the fraction going to one as T → ∞ and σ → ∞ in a small neighborhood of a bounded invariant set of. This Theorem shows weak convergence. The extension to convergence with probability one is straightforward and described in the Supplementary material. Finally, we wish to characterize the properties of this invariant set. 
From Corollary 5.11 , we can conclude that problems arising in training of deep neural network architectures, wherein f (x) = l(y j, a L) with l(·) one of several standard loss functions, including logistic or Hinge loss, and a i = ρ i (V i (x)a i−1 ) or i = 1,..., L layers, are activation functions, which are piece-wise defined to be log x, e x, max(0, x) or log(1 + e x), are such that their set of invariants {x *} for its associated differential inclusion satisfies 0 ∈ ∂f (x *), and furthermore the values f (x k) for any iterative algorithm generating {x k} such that x k → x *, an invariant of f (x), converge. Note that the differential inclusions defined above ensure asymptotic convergence to block-wise stationarity, i.e., 0 ∈ ∂ i f (x) for all i. It is clear, however, that every stationary point is also blockwise stationary, i.e., that 0 ∈ ∂f (x) implies 0 ∈ ∂ i f (x) for all i. In practice, the set of block-wise stationary points which are not stationary is not large. One can alternatively consider a variant of the algorithm wherein every core updates the entire vector (thus there is no block partitioning) but locks the shared memory whenever it either reads of writes from it. The same analysis applies to such a procedure. In particular, this amounts to i k = {1, ..., n} for all k and every limit of x σ (t) as either σ → ∞ or t → ∞ is a critical point of f (x) and, with probability one, asymptotically the algorithm converges to a critical point of f (x) (i.e., x such that 0 ∈ ∂f (x)). We describe an experimental evaluation comparing the following algorithms: WIASSM: Write Inconsistent Asynchronous Stochastic Subgradient Method with lock-free read and updates of x k,i. This procedure applied to smooth strongly-convex and smooth nonconvex f (x) is known as HogWild! in and AsySG-incon in , respectively, in the literature. Convergence analysis of HogWild! and AsySG-incon additionally required sparsity of x. They have no provable convergence guarantee for nonsmooth nonconvex models. WCASSM: Write Consistent Asynchronous stochastic subgradient method. WCASSM differs from WIASSM in its use of locks to update x k,i to make consistent writes. AsySG-con in is its counterpart for smooth nonconvex f (x) and sparse x. Figure 1: We plotted the train accuracy and generalization (test) loss and accuracy trajectories for the methods. SGD runs a single process, whereas the asynchronous methods run 10 concurrent processes. In this set of experiments we have no momentum correction. The WIASSM and WCASSM demonstrate better convergence per epoch compared to PASSM. Note that, the single process executing SGD iterations has a better opportunity to use CUDA threads as there is no concurrent use of GPUs by multiple processes. The per epoch performance of PASSM matches that of SGD inferring that amount of subgradient updates are almost identical: in the former it is done collectively by all the concurrent processes accessing disjoint set of tensors, whereas, in the latter it is done by a single process using comparable amount of parallization. We used a momentum = 0.9. It can be observed that with momentum correction the convergence of PASSM improves significantly. experimentally showed that the degree of asynchrony directly relates to momentum; our experiments show that the relative gain in terms of convergence per epoch by momentum correction is better for PASSM that exhibits more asynchrony compared to WCASSM, which uses locks for write consistency. 
The presented Partitioned Asynchronous Stochastic Subgradient Method. We read as well as update x k,i lock-free asynchronously. Hyper-parameters. For each of the methods, we adopt a decreasing step size strategy γ k,i = (α j × γ)/ √ k, where α j > 0 is a constant for the j th processing core. γ is fixed initially. In each of the methods we use an L2 penalty in the form of a weight-decay of 0.0005. Additionally, we introduced an L1 penalty of 0.0001 that simply gets added to the gradients after it has been put through the L2 penalty. In accordance with the theory, we explored the effect of momentum correction: we have two sets of benchmarks, one without momentum and another with a constant momentum of 0.9 while checking the convergence with epochs. In all of the above methods we load the datasets in mini-batches of size 64. We keep the hyper-parameters, in particular, learning rate and mini-batch-size, identical across methods for the sake of statistical fairness. In a shared-memory setting, there is not much to exploit on the front of saving on communication cost as some existing works do; and the computing units, see the system setting and the implementation below, are anyway utilized to their entirety by way of efficient data-parallelization. Dataset and Networks. We used CIFAR10 data set of RGB images. It contains 50000 labeled images for training and 10000 labeled images for testing. We trained a well known We used momentum = 0.9 in each of them. As described, a separate concurrent process keeps on saving a snapshot of the shared model at an interval of 1 minute, simultaneously with the training processes. Firstly, it can be observed that the convergence of PASSM is faster compared to the other two asynchronous methods for identical number of processes. This can be understood in terms of block partitioning the model across processes: it helps reducing the synchronization cost and thereby potentially speeds up the data processing per unit time. Furthermore, we clearly gain in terms of convergence per unit time when we increase the number of processes in PASSM. In contrast, we note that the use of locks by WCASSM actually slows it down when we increase the number of processes. This set of experiments demonstrate that PASSM has better convergence with respect to wall-clock time in addition to the scalability with parallel resources. CNN model Resnet18. ResNet18 has a blocked architecture -of residual blockstotaling 18 convolution layers. Each residual block is followed by a ReLU activation causing nonlinearity. Evidently, training of this neural network offers general nonsmooth nonconvex optimization problems. System Specification. We benchmarked the implementations on a NUMA workstation -2 sockets, 10 cores apiece, running at 2.4GHz (Intel(R) Xeon(R) E5-2640), HT enabled 40 logical cores, Linux 4.18.0-0.bpo.1-amd64 (Debian 9) -containing 4 Nvidia GeForce GTX 1080 GPUs. For a fair evaluation of scalability with cores, we bind the processes restricting their computations -in particular, the cost of data load -to individual CPU cores. In this setting, to evaluate the scalability with respect to wall-clock-time by increasing the availability of parallel resources, we run the experiments with 5 and 10 processes, which do not migrate across CPU sockets. For evaluation with respect to time, we employed a separate concurrent process that keeps on saving a snapshot of the shared model at an interval of 1 minute. Asynchronous Implementation. 
We implemented the asynchronous methods using the open-source Pytorch library and the multi-processing framework of Python. Given the multi-GPU environment, which could excellently exploit data-parallel computation, therefore, we used the nn. DataParallel module of Pytorch library. Thereby, a CNN instance is centrally allocated on one of the GPUs and computations -forward pass to compute the model over a computation graph and backward pass to compute the sub-gradients thereon -employ peer GPUs by replicating the central instance on them for each mini-batch in a data-parallel way. Effectively, the computation of stochastic subgradients happen over each GPU and they are summed and added to the central instance. Note that, this way of implementation exploits parallel resources while effectively simulating a shared-memory asynchronous environment. Model Partitioning. Unlike PASSM, the methods WIASSM, WCASSM and SGD do not partition the model and compute the stochastic subgradients over the entire computation graph of a CNN via backpropagation provided by the autograd module of Pytorch. PASSM partitions the list of leaves, which are tensors corresponding to the weights and biases, of the computation graph into blocks. While computing the stochastic subgradients with respect to a block, we switch off the requires_grad flag of the tensors corresponding to other blocks during backpropagation. This particular implementation component in some savings in stochastic sub-gradient computation with respect layers relatively closer to the output. Keeping this in view, we assigned blocks containing s i ≥ L/p parameter components, where L is the model size and p is the number of processes, to the processes P i computing stochastic sub-gradients corresponding to layers closer to output. Whereas, the process that computes sub-gradient of the layer-block closest to the input is assigned a block containing less than L/p parameter components. The assignments s i aim to balance computation We plotted test-accuracy in terms of Top1 correct match % vs time (in minutes). In can be observed that PASSM offers faster convergence per unit time in accuracy as well compared to the other two asynchronous methods. load, however, it varies across layers depending on the size of the assigned leaves in terms of parameter component. Nevertheless, a blocked architecture such as ResNet does not allow much scope of computation-cost saving on this count: we observed an insignificant difference in average processing time for the same number of epochs irrespective of switching off the requires_grad flag. Notice that, this is not a model parallelization and the stochastic subgradient computation with respect to a leaf depends on the computation path leading to the output. Irrespective of partitioning the model, the multi-GPU-based data-parallel implementation utilizes replication and data partitioning over GPUs while processing a mini-batch. The experimental observations are described in Figures 1, 2, 3, and 4. Summary. The block partitioning design of PASSM has its efficiency in the following: 1) it reduces the cost of optimization per process, since the parameter is partitioned. 
Note that, in case of neural networks, where backpropagation processes almost the entire computation graph irrespective of the location of the leaf, in particular in a blocked architecture such as ResNet, PASSM clearly misses out saving subgradient computation cost by way of parallelization; it can be significantly better if the subgradients with respect to the blocks could be computed independently; and 2) reduces memory traffic and potential write conflicts between processors which we observe in terms of better convergence per unit time. And finally, it is pertinent to highlight that we also observed that momentum correction improves the convergence per epoch of the block partitioning approach whose performance was way lower if we did not use it. In this paper we analyzed the convergence theory of asynchronous stochastic subgradient descent. We found that the state of the art probabilistic model on asynchronous parallel architecture applied to the stochastic subgradient method, with and without the use of momentum, is consistent with standard theory in stochastic approximation and asymptotic convergence with probability one holds for the method under the most general setting of asynchrony. We presented numerical that indicate some possible performance variabilities in three types of asynchrony: block partitioning inconsistent read (for which the above convergence theory applies), full-variable-update consistent write (for which the above convergence theory also applies), and full-variable-update inconsistent read/write (for which no convergence theory exists). Here we give a few more details describing the relation of the probabilistic model of asynchrony to the underlying hardware properties, as modeled in Cannelli et al.. In this section, we present k as a global counter, indicating sequential updates of any block among the variables. In iteration k, the updated iterate x k+1 i k depends on a random vector ζ of ζ k depends on the underlying scheduling or message passing protocol. We use the following formulation, which applies to a variety of architectures..., ζ t ) be the stochastic process representing the evolution of the blocks and minibatches used, as well as the iterate delays. The σ-algebra F is obtained as follows. Let the cylinder Consider the conditional distribution of ζ k+1 given ζ 0:k, we have the following assumptions on the probabilities of block selection and the delays, Assumption 6.1. The random variables ζ k satisfy, 1. There exists a δ such that d for some p min > 0. 3. It holds that, The first condition indicates that there is some maximum possible delay in the vectors, that each element of x used in the computation of x k+1 i k is not too old. The second is an irreducibility condition that there is a positive probability for any block or minibatch to be chosen, given any state of previous realizations of {ζ k}. The last assumption indicates that the set of events in Ω that asymptotically go to zero in conditional probability are of measure zero. In order to enforce global convergence, we wish to use a diminishing step-size. However, at the same time, as synchronization is to be avoided, there must not be a global counter indicating the rate of decrease of the step-size. 
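Concretely, a core can decrease its step size using nothing but a counter of its own completed updates; a small illustrative sketch (the 1/sqrt decay mirrors the experimental schedule described earlier):

```python
num_cores = 4
# Each core keeps only its own update counter; no global iteration index is shared.
local_count = {c: 0 for c in range(num_cores)}

def local_step_size(core, gamma0=0.1):
    local_count[core] += 1
    return gamma0 / local_count[core] ** 0.5
```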
In particular, each core will have its own local step size γ ν(c k,k) where c k is the core, and, defining the random variable Z k as the component of {1, ...,c} that is active at iteration k, the random variable denoting the number of updates performed by core c k, denoted by ν(k) is given by ν(k) In addition, noting that it has been observed that in practice, partitioning variable blocks across cores is more efficient than allowing every processor to have the ability to choose across every variable block . Thus we partition the blocks of variables across cores. We can thus denote c k as being defined uniquely by i k, the block variable index updated at iteration k. for some subsequence, which is antithetical to Assumption 3.1, Part 2. Thus, note that the stepsizes γ ν(c k,k) satisfy, where the limit of the sequence is taken in probability, which is an assumption for the analysis of asynchronous parallel algorithms in. We are now ready to present Algorithm 2. This is presented from the "global" iteration counter perspective. Input: x 0. 1: while Not converged and k < k max do 2: Update Update Set k = k + 1 6: end while 7 APPENDIX B: PRELIMINARY ASSUMPTIONS AND LEMMAS Thus, so is Proof. Uniform integrability of {Y k,i, Y σ k,i ; k, i} follows from Assumption 3.2, part 3. The uniform integrability of; k, i follows from 0 < m < 1 and the fact that a geometric sum of a uniformly integrable sequence is uniformly integrable. Now we define some terminology arising in the theory of weak convergence. We present a indicating sufficient conditions for a property called tightness. Theorem 7.1. (, Theorem 7.3. 3) Consider a sequence of processes {A k (·)} with paths in D(−∞, ∞) such that for all δ > 0 and each t in a dense set of (−∞, ∞) there is a compact set K δ,t such that, inf and for any T > 0, If a sequence is tight then every weak sense limit process is also a continuous time process. We say that A k (t) converges weakly to A if, for any bounded and continuous real-valued function F (·) on R n. Weak convergence is defined in terms of the Skorohod topology, a technical topology weaker than the topology of uniform convergence on bounded intervals, defined in. Convergence of a function f n (·) to f (·) in the Skorohod topology is equivalent to uniform convergence on each bounded time interval. We denote by D j [0, ∞) the j-fold product space of real-valued functions on the interval [0, ∞) that are right continuous with left-hand limits, with the Skorohod topology. It is a complete and separable metric space. Much of the proof of the main Theorem can be taken from the analagous in Chapter 12 of , which considers a particular model of asynchronous stochastic approximation. As we introduced a slightly different model from the literature, some of the details of the procedure are now different, and furthermore we introduced momentum to the algorithm, and thus in the next section we indicate how to treat the distinctions in the proof and show that the still holds. By Theorem 8.6, Chapter 3 in a sufficient condition for tightness of a sequence {A n (·)} is that for each δ > 0 and each t in a dense set in (−∞, ∞), there is a compact set K δ,t such that inf n P[A n (t) ∈ K δ,t ] ≥ 1 − δ and for each positive T, lim δ→0 lim sup n sup |τ |≤T, s≤δ E [|A n (τ + s) − A n (τ)|] = 0. Now since Y k,i is uniformly bounded, and Y σ k,i (·) is its interpolation with jumps only at t being equal to some T k,i, it holds that for all i,. 
This implies the Lipschitz continuity of the subsequence limits with probability one, which exist in the weak sense by Prohorov's Theorem, Theorems 6.1 and 6.2 . As σ → ∞ we denote the weakly convergent subsequence's weak sense limits by, Note that, x i (t) =x i (τ i (t)), x i (t) = x i (N i (t)), N i (τ i (t)) = t. with a set-valued map S(x, T, φ), and by the noise structure of the assumptions, it can easily be seen thatL exists for all possible values of x, T and φ in the notation of the paper. One can see that the uniqueness appears once in the beginning of the proof of Theorem 3.1 with the existence of this T 1 such that the trajectory lies in a specific ball around the limit point for t ≥ T 1. This can be replaced by the trajectory lying in this ball around the invariant set, for T 1 defined as the supremum of sucĥ T 1 associated with every possible subgradient, i.e., element of the DI. Since the subgradient is a compact set and is upper semicontinuous, this supremum exists. Finally, note that Assumption 3.2 is as Assumption 4.1 in and thus similarly implies Theorem 4.1 and Theorem 5.3. This proves that as σ → ∞, w.p.1 x σ (·) converges to an invariant set of the differential inclusion.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BJlSPRVFwS
Asymptotic convergence for stochastic subgradient method with momentum under general parallel asynchronous computation for general nonconvex nonsmooth optimization
Stochastic neural networks with discrete random variables are an important class of models for their expressivity and interpretability. Since direct differentiation and backpropagation is not possible, Monte Carlo gradient estimation techniques have been widely employed for training such models. Efficient stochastic gradient estimators, such Straight-Through and Gumbel-Softmax, work well for shallow models with one or two stochastic layers. Their performance, however, suffers with increasing model complexity. In this work we focus on stochastic networks with multiple layers of Boolean latent variables. To analyze such such networks, we employ the framework of harmonic analysis for Boolean functions. We use it to derive an analytic formulation for the source of bias in the biased Straight-Through estimator. Based on the analysis we propose \emph{FouST}, a simple gradient estimation algorithm that relies on three simple bias reduction steps. Extensive experiments show that FouST performs favorably compared to state-of-the-art biased estimators, while being much faster than unbiased ones. To the best of our knowledge FouST is the first gradient estimator to train up very deep stochastic neural networks, with up to 80 deterministic and 11 stochastic layers. Stochastic neural networks with discrete latent variables have been an alluring class of models for their expressivity and interpretability, dating back to foundational work on Helmholtz machines and sigmoid belief nets . Since they are not directly differentiable, discrete random variables do not mesh well with the workhorse of modern Deep Learning, that is the backpropagation algorithm. Monte Carlo gradient estimation is an effective solution where, instead of computing the true gradients, one can sample gradients from some distribution. The sample estimates can be either biased or unbiased. Unbiased gradient estimates like score function estimators come typically at the cost of high variance leading to slow learning. In contrast, biased gradient estimates such Straight-Through , while efficient, run the risk of convergence to poor minima and unstable training. To this end several solutions have recently been proposed that either reduce variance in unbiased estimators (; ; ; ;) or control bias in biased estimators . These methods, however, have difficulty scaling up to complex neural networks with multiple stochastic layers: low-variance unbiased estimators are too expensive 1, while the compounded bias from the continuous relaxations on multiple stochastic layers leads to poor minima. In this work we focus on biased estimators. Our goal in this paper is a gradient estimator for Boolean random variables that works for any complex -deep or wide-neural network architecture. We resort to the term Boolean instead of binary to emphasize that we work directly on the Boolean space {−1, +1}, without any continuous relaxations or quantizations. With this in mind we re-purpose the framework of harmonic analysis of Boolean functions, widely used in computational learning and computational complexity theory (; ; ;). We cast stochastic neural networks as Boolean functions f (z) over Boolean latent variables z sampled from probability 1. We introduce the framework of harmonic analysis of Boolean functions to analyze discrete stochastic neural networks and their REINFORCE and Straight-Through gradients. We show that stochastic gradients compute Fourier coefficients. 2. 
Based on the above harmonic analysis we present FouST -a low-bias gradient estimator for Boolean latent variables based on three bias reduction steps. As a side contribution, we show that the gradient estimator employed with DARN , originally proposed for autoregressive models, is a strong baseline for gradient estimation in large and complex models with many stochastic layers. 3. We show that FouST is amenable to complex stochastic neural networks with Boolean random variables. To the best of our knowledge, FouST is the first gradient estimate algorithm that can train very deep stochastic neural networks with Boolean latent variables. The practical outcome is a simple gradient estimate algorithm that can be plugged in complex stochastic neural networks with multiple layers of Boolean random variables. We consider Boolean functions on the n-dimensional Boolean cube, f: {−1, +1} n → R. The setting of Harmonic Analysis for Boolean functions is the space of Boolean functions f with a product probability distribution on the Boolean input, that is p(z) = n i=1 p i (z). We denote p i as the probability of the i-th dimension being one, i.e., p i:= p i (z i = +1). We denote the mean and variance of z i by µ i and σ i, respectively. An example of a Boolean function in this setting is a generative neural network f: z → y with a factorized latent distribution, as commonly done in representation learning . In this example, z is the stochastic input -also known as the latent code in stochastic neural networks -taking only two possible values and y is the output, like cross entropy loss. Often, the goal of a generative neural network is to learn or approximate the latent distribution given input data x, i.e., p(z|x), as we will also explore in the experiments. We first introduce a few basic operations in the context of Harmonic Analysis of Boolean functions, which we shall use further on. Further necessary details are in the Appendix A. For a more comprehensive introduction, however, we refer the reader to. Inner product. The inner product of two Boolean functions f and g is: Fourier expansion. Let S be any subset of dimensions of the n-dimensional Boolean cube, S ⊆ [n] = {1, ..., n}. Per S we define a basis function, φ S (z):= i∈S φ i (z i), where for the empty set φ ∅ (z) = 1 and φ i is the z-score normalized dimension, i.e., φ i:= zi−µi σi. For example, under the uniform Bernoulli distribution for the i-th dimension, The 2 n functions φ S form an orthonormal basis for the space of Boolean functions f,, for i = j, where the expectations compute the inner product between two Boolean functions. The last identity derives from the independence of any dimensions i = j. We can then expand the Boolean function f on the set of 2 n orthonormal basis functions, also known as the p-biased Fourier expansion of f. Thef (p) (S) are the Fourier coefficients computed by the inverse Fourier expansion, That is, the inverse expansion is computed by the inner product for Boolean functions defined above. The cardinality of S is the degree of the coefficientf (p) (S). For instance, we have only one degree-0 coefficient,f (p) (∅), which equals to the expected value of f under the distribution p(z), Further, we have n degree-1 coefficientsf (p) (i) = f, φ i and so on. We examine Straight-Through gradient estimates using the framework of Harmonic Analysis of Boolean functions. 
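To make the expansion concrete, the degree-0 and degree-1 p-biased coefficients of a toy Boolean function can be computed exactly by enumerating the cube (feasible only for tiny n); this is purely illustrative and is not the estimator developed below.

```python
import itertools
import numpy as np

def fourier_coefficients(f, p, max_degree=1):
    """Exact p-biased Fourier coefficients of f on {-1,+1}^n, computed by
    enumerating the cube:  f_hat(S) = E_{z ~ p}[ f(z) * phi_S(z) ]."""
    p = np.asarray(p, dtype=float)
    n = len(p)
    mu = 2 * p - 1
    sigma = np.sqrt(1 - mu ** 2)
    cube = np.array(list(itertools.product([-1, 1], repeat=n)), dtype=float)
    probs = np.prod(np.where(cube > 0, p, 1 - p), axis=1)
    fz = np.array([f(z) for z in cube])
    coeffs = {(): float(np.sum(probs * fz))}             # degree-0: E_p[f]
    for d in range(1, max_degree + 1):
        for S in itertools.combinations(range(n), d):
            phi_S = np.prod((cube[:, S] - mu[list(S)]) / sigma[list(S)], axis=1)
            coeffs[S] = float(np.sum(probs * fz * phi_S))
    return coeffs

# toy Boolean function on n = 3 bits
f = lambda z: z[0] * z[1] + 0.5 * z[2]
print(fourier_coefficients(f, p=[0.3, 0.5, 0.8]))
```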
For training the model parameters with a loss function L:= E p(z) [f (z)] we want to compute the gradient ∂ pi L = ∂ pi E p(z) [f (z)] for the i-th dimension. As the random sampling operation in the expectation is not differentiable, propose the Straight-Through estimator that approximates the gradient with. Clearly, the Straight-Through computes a biased approximation to the true gradient. Next, we quantify the bias in the Straight-Through estimator using the harmonic analysis of f (x). For the quantification of bias we first need the following lemma that connects the REINFORCE gradients with the degree-1 Fourier coefficients. The lemma is an extension of Margulis-Russo (; ;) formula. Lemma 1. Let f be a Boolean function. Then, the REINFORCE gradient estimates the degree 1 Fourier coefficientsf Proof. For compactness and clarity, we provide the proof in the Appendix B.1. We introduce new notation. In the following lemma bias ) denotes the bias of the i-th gradient estimate under the distribution p. Also, given the distribution p(z), p i→1/2 (z) is the distribution for which we set p(z i = +1) = p(z i = −1) = 1/2 for a given dimension i. Lemma 2. Let f be a Boolean function, where c k are the Taylor coefficients for the i-th dimension on f (z), that is z i, around 0 and bias Proof. For compactness and clarity, we provide only a proof sketch here showing the basic steps. These steps are also needed later in the description of the proposed gradient estimate algorithm. For the detailed proof please refer to the Appendix B.2. The proof sketch goes as follows. First, we derive a relation between the Fourier coefficients under the unknown distribution p(z) and under the uniform Bernoulli distribution p i→1/2 (z). Then, using this relation we derive the Taylor expansions for the true gradient as well as the Straight-Through gradient estimator. Last, to prove the lemma we compare the two Taylor expansions. Relation between Fourier coefficients under p(z) and p i→1/2 (z). If we expand the function f in terms of its φ S basis as in equation 2 and focus on the i-th dimension, by Lemma 1 we can show that the REINFORCE gradient is given by Taylor expansions of the true and the Straight-Through gradients. The Taylor expansion of Let's first focus on the true gradient. Since we work with Boolean ±1 values, we have that z k i = 1 for even k and z k i = z i for odd k. This will influence the even and the odd terms of the Taylor expansions. Specifically, for the Taylor expansion of the true gradient we can show that The expression in equation 7 implies that the true gradient with respect to the p i is the expected sum of the odd Taylor coefficients. Here we note that although the final expression in equation 7 can also be derived by a finite difference method, it does not make explicit, as in equation 31, the dependence on z i and µ i of the term inside the expectation. Now, let's focus on the Straight-Through gradient. Taking the derivative of the Taylor expansion w.r.t. to z i, we have The Straight-Through gradient is the expectation of equation 8 in the i-th dimension, that is where Comparing the Taylor expansions. By comparing the expansion of the Straight-Through gradient in equation 10 and the expansion of the true gradient in equation 7 and given that bias Taking the expectation in equation 9 under p i→1/2 causes the final term in equation 11 to vanish leaving bias Combining this expression with equation 9 gives the final expression (equation 4) from the lemma. 
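To see the gap analyzed in Lemma 2 on a concrete example, the sketch below compares, by exact enumeration (so there is no sampling noise), the true derivative of E_p[f] with respect to p_1, the Straight-Through expectation of the partial derivative in z_1, and the same expectation taken under p_{1->1/2}. The bivariate f is an arbitrary smooth extension chosen for illustration, and normalization factors coming from the +/-1 parameterization are glossed over.

```python
import itertools
import numpy as np

# Toy smooth extension of a two-bit Boolean function and its z1-derivative.
f  = lambda z1, z2: np.exp(0.8 * z1 + 0.5 * z1 * z2)
df = lambda z1, z2: (0.8 + 0.5 * z2) * f(z1, z2)

p1, p2 = 0.7, 0.2

def expect(g, q1, q2):
    """Exact expectation of g over {-1,+1}^2 under independent Bernoullis."""
    return sum(g(z1, z2)
               * (q1 if z1 > 0 else 1 - q1)
               * (q2 if z2 > 0 else 1 - q2)
               for z1, z2 in itertools.product([-1, 1], repeat=2))

true_grad = expect(lambda z1, z2: f(+1, z2) - f(-1, z2), p1, p2)  # d E[f] / d p1
st_grad   = expect(df, p1, p2)       # Straight-Through expectation under p
st_half   = expect(df, 0.5, p2)      # same expectation under p_{1 -> 1/2}
print(true_grad, st_grad, st_half)
```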
Inspired by the Harmonic Analysis of the Straight-Through gradient estimates, we present a gradient estimate algorithm for deep Boolean latent models, FouST, for Fourier Straight-Through estimator. The algorithm relies on three bias reduction steps on the Straight-Through, lines 2, 3, 5 in Algorithm 1. As detailed earlier, the bias in the Straight-Through estimator is the sum of the bias under the uniform Bernoulli distribution plus extra bias due to non-zero expectation terms in higher-order harmonics when sampling from p(z). Sampling from p i→1/2 instead of p(z) would decrease the total bias from the form in equation 11 by setting the final term to 0. As a first bias reduction step, therefore, we do importance sampling. Specifically, after getting samples from p(z) and computing the gradients ∂ zi f (z) with the Straight-Through, we estimate the expectation under p i→1/2 as arrive at equation 12 in the context of unbiased control variates for quadratic functions. Lemma 2 shows that part of the bias in the Straight-Through estimator is due to the presence of extra factors in the Taylor coefficients. We can reduce the effect of these factors by taking advantage of the moments of the uniform distribution. Recalling that, we can attempt to correct the coefficients in equation 9, which for z k have the form (k + 1)c k, with the same extra factor of k + 1 that appears in the denominator of the kth moment. This suggests that we can sample from an auxiliary variable u and then use the auxiliary variable u with f instead of z and exploit the higher moments of the uniform distribution to reduce bias. For brevity, we illustrate the method with a case study of a two-dimensional z, and a bivariate f (z 1, z 2). As in Lemma 2, the partial true gradient of f (z 1, z 2) w.r.t. the first distribution parameter p 1 equals to "Bernoulli splitting uniform" trick. Assume an auxiliary variable u = (u 1, u 2), which we choose as follows. First, we sample z = (z 1, z 2) from the uniform Bernoulli distribution p 1→1/2 with (i set to 1). Then we take a uniform sample (u 1, u 2) with u i sampled from either The expectation of the gradient under such random sampling is Further detail is in Appendix C.1. We compare equation 14 with equation 13. In equation 14 we observe that the pure terms in z 1, namely terms with j = 0, always match those of the true gradient in equation 13. For j > 0 we obtain mixed terms with coefficients that do not match those of the true gradient in equation 13. Due to the 1 j+1 factor, for small j the mixed-degree terms are closer to the original ones in equation 13. For functions with small mixed degree terms, this can lead to bias reduction, at the cost of an increased variance because of sampling an auxiliary variable. In practice, to manage this bias-variance trade-off and to deal with functions that have greater dependence on mixed degree terms, we use smaller intervals for the random samples as in Algorithm 1. To summarize, for appropriate functions, the "Bernoulli splitting uniform" relies on the continuous variable u conditioned on the binary sample to reduce the bias. However, it is important to emphasize that u is only an auxiliary variable; the actual latent variable z is always binary. Thus, the "Bernoulli splitting uniform" trick does not lead to a relaxation of the sort used by Gumbel-Softmax , where there are no hard samples. Lastly we note that for a univariate f the "Bernoulli splitting uniform" trick leads to an unbiased estimator with an increased variance. 
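The moment identity behind the "Bernoulli splitting uniform" trick, written out under one plausible reading of the sampling scheme (u drawn uniformly between 0 and the sampled z_i):

```latex
\[
u \mid z_i = +1 \;\sim\; \mathrm{Uniform}[0,1]
\;\Longrightarrow\;
\mathbb{E}\big[u^{\,j}\big] = \frac{1}{j+1},
\qquad
u \mid z_i = -1 \;\sim\; \mathrm{Uniform}[-1,0]
\;\Longrightarrow\;
\mathbb{E}\big[u^{\,j}\big] = \frac{(-1)^{j}}{j+1},
\]
```

so evaluating the derivative at u instead of z_i introduces the 1/(j+1) factors that offset the extra (k+1) factors appearing in the Taylor coefficients of equation 9, exactly for the pure terms and approximately for low mixed degrees.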
The Fourier basis does not depend on the particular input representation and any two-valued set, say {−t, t} can be used as the Boolean representation. The choice of a representation, however, does affect the bias as we show next. As a concrete example, we let our input representation be n, where p i = p(z i = +1/2). While we can change the input representation like that, in general the Fourier coefficients in equation 3 will be different than for the {−1, +1} representation. We give the final forms of the gradients here. Details are given in Appendix C.2. Under the p i→1/2 distribution the degree-1 Fourier coefficients are: Note that compared to equation 7, in equation 15 we still get the odd terms c 1, c 3 albeit decayed by inverse powers of 2. Following the same process for the Straight-Through gradient as in equation 10, we have that While this is still biased, compared to equation 7 the bias is reduced by damping higher order terms by inverse powers of 2. The algorithm, described in algorithm 1, is a Straight-Through gradient estimator with the bias reduction steps described above, where a single sample is used to estimate the gradient. We emphasize that the algorithm uses a single sample and a single evaluation of the decoder per example and latent vector sample. Thus, the algorithm has the same complexity as that of the original Straight-Through estimator. Monte Carlo gradient estimators for training models with stochastic variables can be biased or unbiased. Perhaps the best known example of an unbiased gradient estimator is the REINFORCE algorithm . Unfortunately, REINFORCE gives gradients of high variance. For continuous stochastic variables propose the reparameterization trick, which transforms the random variable into a function of deterministic ones perturbed by a fixed noise source, yielding much lower variance gradient estimates. For discrete stochastic variables, REINFORCE is augmented with control variates for variance reduction. A number of control variate schemes have been proposed: NVIL subtracts two baselines (one constant and one input-dependent) from the objective to reduce variance. MuProp uses the first-order Taylor approximation of the function as a baseline. REBAR uses the Gumbel-Softmax trick to form a control variate for unbiased gradient estimates. RELAX generalizes REBAR to include an auxiliary network in the gradient expression and uses continuous relaxations and the reparameterization trick to give unbiased gradients. Regarding biased estimators, a simple choice is the Straight-Through estimator which uses the gradient relative to the sample as that relative to the probability parameter. Another recent approach is to use continuous relaxations of discrete random variables so that the reparameterization trick becomes applicable. The most common example of this being the GumbelSoftmax estimator . Although this is a continuous relaxation, it has been used to define the Gumbel Straight-Through estimator with hard samples. This uses arg max in the forward pass and the Gumbel-Softmax gradient is used as an approximation during in the backward pass. DARN , like MuProp, also uses the first-order Taylor expansion as a baseline but does not add the analytical expectation, making the estimator biased for non-quadratic functions. In this work we focus on biased Straight-Through gradient estimators. Specifically, we analyse how to reduce bias via Fourier expansions of Boolean functions. 
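For reference, the Gumbel Straight-Through estimator mentioned in the discussion above (hard sample in the forward pass, Gumbel-Softmax gradient in the backward pass) can be written in a few lines. This is the standard construction, equivalent to torch.nn.functional.gumbel_softmax(..., hard=True), and is not part of FouST.

```python
import torch
import torch.nn.functional as F

def gumbel_straight_through(logits, tau=1.0):
    """Forward: one-hot arg max of a Gumbel-Softmax sample. Backward: gradient of the
    relaxed sample y_soft (the Gumbel Straight-Through approximation)."""
    y_soft = F.gumbel_softmax(logits, tau=tau, hard=False)
    index = y_soft.argmax(dim=-1, keepdim=True)
    y_hard = torch.zeros_like(y_soft).scatter_(-1, index, 1.0)
    return y_hard + y_soft - y_soft.detach()

logits = torch.randn(4, 10, requires_grad=True)
y = gumbel_straight_through(logits)
print(y.sum(dim=-1))        # exact one-hot samples, yet y carries gradients w.r.t. the logits
```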
The Fourier expansion itself is widely used in computational learning theory with applications to learning low-degree functions , decision trees , constant-depth circuits and juntas . To the best of our knowledge we are the first to explore Fourier expansions for bias reduction of biased stochastic gradient estimators. Experimental Setup. We first validate FouST on a toy setup, where we already know the analytic expression of f (z). Next we validate FouST by training generative models using the variational autoencoder framework of. We optimize the single sample variational lower bound (ELBO) of the log-likelihood. We train variational autoencoders exclusively with Boolean latent variables on OMNIGLOT, CIFAR10, mini-ImageNet and MNIST (Appendix D.1). We train all models using a regular GPU with stochastic gradient descent with a momentum of 0.9 and a batch size of 128. We compare against Straight-Through, GumbelSoftmax, and DARN, although on more complex models some estimators diverge. The were consistent over multiple runs. Details regarding the architectures and hyperparameters used are in Appendix E. Upon acceptance we will open source all code, models, data and experiments., where t ∈ is a continuous target value and z is a sample from the Bernoulli distribution p(z). The optimum is obtained for p(z = +1) ∈ {0, 1}. Figure 1, shows a case with t = 0.45, where the minimizing solution is p(z = +1) = 0. We observe that unlike the Straight-Through estimator, FouST converges to the minimizing deterministic solution (lower is better). Training Stochastic MLPs. We train MLPs with one and two stochastic layers on OMNIGLOT, following the non-linear architecture of. Each stochastic Boolean layer is preceded by two deterministic layers of 200 tanh units. All hyperparameters remain fixed throughout the training. All estimators use one sample per example and a single decoder evaluation. We present in Fig. 2. FouST outperforms other biased gradient estimators in both datasets and architectures. FouST is clearly better than the StraightThrough estimator. Despite the complicated nature of the optimized neural network function f (z) the bias reduction appears fruitful. With one or two stochastic layers we can also use the unbiased REBAR. REBAR is not directly comparable to the estimators we study, since it uses multiple decoder evaluations and for models with multiple stochastic layers, multiple passes through later layers. Nevertheless, as shown in appendix D.1 for MNIST, with two stochastic layers REBAR reaches a worse test ELBO of -94.43 v. -91.94 for FouST. A possibility of worse test than training ELBOs for REBAR was also suggested in the original work . Training Stochastic ResNets. We further validate FouST in a setting where the encoder and decoder are stochastic ResNets, S-ResNets, which are standard ResNets with stochastic layers inserted between ResNet blocks. Similar to MLPs, FouST outperforms other biased estimators in this setting on CIFAR-10 (left in Figure 3). Note that despite the hyperparameter sweep, we were unable to train S-ResNet's with Gumbel-Softmax. So we compare against DARN and Straight-Through only. With an S-ResNet with 12 ResNet blocks and 4 stochatic layers FouST yields a score of 5.08 bits per dimension (bpd). This is comparable to the 5.14 bpd with the categorical VIMCO-trained model In the plots, we observe sharp spikes or slower curving. We hypothesize these are due, respectively, to stochasticity and bias, and are corrected to some degree along the way. Efficiency. 
We compare the efficiency of different estimators in Tab. 1. Like other biased estimators, FouST requires a single sample for estimating the gradients and has similar wallclock times. On MNIST, the unbiased REBAR is 15x and 40x slower than the biased estimators for two and five stochastic layer MLP's respectively. From the above experiments we conclude that FouST allows for efficient and effective training of fully connected and convolutional neural networks with Boolean stochastic variables. Last, we evaluate FouST on more complex neural networks with deeper and wider stochastic layers. We perform experiments with convolutional architectures on the larger scale and more realistic mini-ImageNet . As the scope of this work is not architecture search, we present two architectures inspired from residual networks of varying stochastic depth and width. The first one is a wide S-ResNet, S-ResNet-40-2-800, and has 40 deterministic (with encoder and decoder combined), 2 stochastic layers, and 800 channels for the last stochastic layer. The second, S-ResNet-80-11-256, is very deep with 80 deterministic and 11 stochastic layers, and a last stochastic layer with 256 channels. Architecture details are given in Appendix E.2. In this setup, training with existing unbiased estimators is intractable. We present in Fig. 3. We compare against DARN, since we were unable to train the models with Gumbel-Softmax. Incomplete lines indicate failure. We observe that FouST is able to achieve better training ELBO's in both cases. We conclude that FouST allows for scaling up the complexity of stochastic neural networks in terms of stochastic depth and width. For a Boolean function f the discrete derivative on the i-th latent dimension with a basis function φi is defined as The Fourier expansion of the discrete derivative equals The Fourier expansion of the discrete derivative is derived by equation 2: (i) all bases that do not contain the i-th dimension are constant to φi and thus set to zero, while (ii) for the rest of the terms ∂φ S dφ i = φ S\i from the definition of basis functions φS(z). In the following we differentiate partial derivatives on continuous functions noted with ∂· from discrete derivatives on Boolean functions noted with D·. The i-th discrete derivative is independent of zi both in the above definitions. Proof. We follow O'Donnell (2014, §8.4). In this proof we work with two representations of the Boolean function f. The first is the Fourier expansion of f under the uniform Bernoulli distribution. This is also the representation obtained by expressing f as a polynomial in zi. Since the domain of the function f is the Boolean cube, the polynomial representation is multilinear. That is f (z) = S⊆[n]f (S) j∈S zj. To avoid confusion and to differentiate the representation from the Boolean function, we use f (u) (z) to denote this representation in the following. Note that since this representation is a polynomial it is defined over any input in R n. In particular, The second representation we use is the Fourier expansion of the Boolean function f under p(x). We denote this by f (p). The following relation follows from the fact that when working with the Fourier representation, f (z) is multilinear, E p(z) [zi] = µi and the linearity of expectation. As the partial derivative of f (u) w.r.t. µi is equivalent to discrete derivative of, and keeping in mind that φi = (zi − µi)/σi, we have that We then note that the discrete derivative of f w.r.t. 
zi, Dz i f (u) (µ), from the left hand side of equation 19, is equivalent to the partial derivative of f w.r.t. µi, ∂µ i f (u) (µ). We complete the proof by noting that the right hand side in equation 24 is 1 2 times the REINFORCE gradient. The detailed proof of Lemma 2 is as follows. Proof. We first derive the Taylor expansions for the true gradient as well as the Straight-Through gradient estimator. Then, we prove the lemma by comparing the two Taylor expansions. By expanding the function f in terms of its φS basis as in equation 2 and focusing on the i-th dimension, we have thatf The first term,f (p) (i), is the term corresponding to {i} in the Fourier expansion of f under p i→1/2 (z). That isf This follows from the fact that when moving from p(z) to p i→1/2 (z), (i) we have that φi = zi, and (ii) no other term under the p(z) expansion contributes to the zi term under the p i→1/2 (z) expansion. As a consequence of Lemma 1 the REINFORCE gradient for the i-th dimension is given by Next, we will derive the Taylor expansions of the true and the Straight-Through gradients. The Taylor expansion of f (z) for zi around 0 is where are the Taylor coefficients. All c k are a function of zj, j = i. Let's first focus on the true gradient. Since we work with binary ±1 values, we have that z k i = 1 for even k and z k i = zi for odd k. This will influence the even and the odd terms of the Taylor expansions. Specifically, for the Taylor expansion of the true gradient we have from equation 27 and equation 3 that The expression in equation 32 implies that the true gradient with respect to the pi is the expected sum of the odd Taylor coefficients. Here we note that the although final expression in equation 32 can also be derived by a finite difference method, it does not make explicit, as in equation 31, the dependence on zi and µi of the term inside the expectation. By comparing the expansion of the Straight-Through gradient in equation 35 and the expansion of the true gradient in equation 32, Taking the expectation in equation 34 under p i→1/2 causes the final term in equation 37 to vanish leaving C LOW-BIAS GRADIENT ESTIMATES We describe the case of a bivariate function in detail. For brevity, we focus on a case study of a two-dimensional z, and a bivariate f (z1, z2) with the bivariate Taylor expansion f (z1, z2) = i,j ci,jz As in Lemma 2, the partial true gradient of f (z1, z2) w.r.t. the first distribution parameter p1 equals to Further, the Taylor expansion of "Bernoulli splitting uniform" trick. Assume an auxiliary variable u = (u1, u2), which we choose as follows. First, we sample z = (z1, z2) from the uniform Bernoulli distribution p 1→1/2. Then we take a uniform sample (u1, u2) with ui sampled from either for zi = +1 or from [−1, 0] if zi = −1. At this point it is important to note that the moments of the uniform distribution in [a, b], which simplifies to b/2, b 2 /3, b 3 /4,... for a = 0, and where we think of b as a binary sample i.e., b ∈ {−1, 1}. The expectation of the gradient under such random sampling is We then compare equation 40 with equation 39. In equation 40 we observe that the pure terms in z1, namely terms with j = 0, always match those of the true gradient in equation 39. For j > 0 we obtain mixed terms with coefficients that do not match those of the true gradient in equation 39. However, the partial gradient obtained with the auxiliary variables in equation 40 has coefficients following a decaying trend due to the 1 j+1. 
For small j, that is, the mixed-degree terms are closer to the original ones in equation 39. For functions with smaller mixed degree terms this leads to bias reduction, at the cost of an increased variance due to additional sampling. In practice many functions would have greater dependence on mixed degree terms. For such functions and to manage the bias-variance trade-off we choose smaller intervals for the uniform samples, that is a → b. The Fourier basis does not depend on the particular input representation and any two-valued set, say {−t, t} can be used as the Boolean representation. The choice of a representation, however, does affect the bias as we show next. As a concrete example, we let our input representation be zi ∈ {−1/2, 1/2} n, where pi = p(zi = +1/2). While we can change the input representation like that, in general the Fourier coefficients in equation 3 will be different than for the {−1, +1} representation. Letting h(zi) = 2zi ∈ {−1, 1}, the functions φi are now given Next, we write the Taylor series of f in terms of h(zi), f (z) = c0 + c1zi + c2z Under the p i→1/2 distribution, we still have that E p→1/2 [h(zi)] = 0 and the degree-1 Fourier coefficients are: Note that compared to equation 7, in equation 43 we still get the odd terms c1, c3 albeit decayed by inverse powers of 2. Following the same process like for equation 10, we have that To further judge the effect of our proposed modifications to Straight-Through, we performed ablation experiments where we separately applied scaling and noise to the importance-corrected Straight-Through. These experiments were performed on the single stochastic layer MNIST and OMNIGLOT models. The of the ablation experiments are shown in figure 5. From the figure it can be seen that scaling alone improves optimization in both cases and noise alone helps in the case of MNIST. Noise alone in a worse ELBO in the case of OMNIGLOT, but gives an improvement when combined with scaling. From these we conclude that the proposed modifications are effective. The encoder and decoder networks in this case are MLP's with one or more stochastic layers. Each stochastic layer is preceded by 2 deterministic layers with a tanh activation function. We chose learning rates from {1 × 10 −4, 2 × 10 −4, 4 × 10 −4, 6 × 10 −4}, Gumbel-Softmax temperatures from {0.1, 0.5}, and noise interval length for FouST from {0.1, 0.2}. For these dataset we use a stochastic variant or ResNets . Each network is composed of stacks of layers. Each layer has (i) one regular residual block as in , (ii) followed by at most one stochastic layer, except for the CIFAR architecture B in figure 3 where we used two stochastic layers in the last layer. The stacks are followed by a final stochastic layer in the encoder. We do downsampling at most once per stack. We used two layers per stack. For CIFAR we downsample twice so that the last stochastic layer has feature maps of size 8x8. We chose learning rate from {9 × 10 −7, 1 × 10 −6, 2 × 10 −6, 4 × 10 −6}, the FouST scaling parameter from {0.5, 0.8, 0.9}, and the uniform interval was scaled by a factor from {0.01, 0.05, 0.1} For mini-ImageNet we downsample thrice. We chose the learning rate from {2 × 10 −7, 3 × 10 −7, 4 × 10 −7, 5 × 10 −7}.
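To make the MLP architectures above concrete, the following PyTorch sketch shows one Boolean stochastic layer preceded by two deterministic tanh layers of 200 units, with a plain Straight-Through backward pass. The layer widths follow the description above; the ±1 convention in the backward pass and the omission of FouST's correction steps are simplifications of ours.

```python
import torch
import torch.nn as nn

class BooleanStochasticLayer(nn.Module):
    """One Boolean stochastic layer preceded by two deterministic tanh layers of 200
    units, as in the MLP encoders/decoders above. The backward pass uses the plain
    Straight-Through trick; FouST's bias-reduction steps would modify this gradient."""
    def __init__(self, d_in, d_out, hidden=200):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, hidden), nn.Tanh(),
                                 nn.Linear(hidden, hidden), nn.Tanh(),
                                 nn.Linear(hidden, d_out))

    def forward(self, x):
        p = torch.sigmoid(self.net(x))                 # p_i = P(z_i = +1)
        z_hard = 2.0 * torch.bernoulli(p) - 1.0        # Boolean sample in {-1, +1}
        z_soft = 2.0 * p - 1.0                         # used only for the backward pass
        return z_hard.detach() + z_soft - z_soft.detach()
```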
Bygadh4tDB
We present a low-bias estimator for Boolean stochastic variable models with many stochastic layers.
We propose a novel generative adversarial network for visual attributes manipulation (ManiGAN), which is able to semantically modify the visual attributes of given images using natural language descriptions. The key to our method is to design a novel co-attention module to combine text and image information rather than simply concatenating two features along the channel direction. Also, a detail correction module is proposed to rectify mismatched attributes of the synthetic image, and to reconstruct text-unrelated contents. Finally, we propose a new metric for evaluating manipulation , in terms of both the generation of text-related attributes and the reconstruction of text-unrelated contents. Extensive experiments on benchmark datasets demonstrate the advantages of our proposed method, regarding the effectiveness of image manipulation and the capability of generating high-quality . Image manipulation refers to the task of changing various aspects of given images from low-level colour or texture ) to high-level semantics , and has numerous potential applications in video games, image editing, and computer-aided design. Recently, with the development of deep learning and generative models, automatic image manipulation becomes possible, including image inpainting , image colourisation, style transfer , and domain or attribute translation (; . However, all the above works mainly focus on specific tasks, and only few studies concentrate on more general and user-friendly image manipulation by using natural language descriptions. Also, as shown in Fig.1, current state-of-the-art methods can only generate low-quality images and fail to effectively manipulate given images on more complicated datasets, such as COCO . The less effective performance is mainly because simply concatenating text and image cross-domain features along the channel direction, the model may fail to precisely correlate words and corresponding visual attributes, and thus cannot modified specific attributes required in the text, and conditioned only on a global sentence vector, current state-of-the-art methods lack important fine-grained information at the word-level, which prevents an effective manipulation using natural language descriptions. In this paper, we aim to manipulate given images using natural language descriptions. In particular, we focus on modifying visual attributes (e.g., category, texture, colour, and ) of input images by providing texts that describe desired attributes. To achieve this, we propose a novel generative adversarial network for visual attributes manipulation (ManiGAN), which allows to effectively manipulate given images using natural language descriptions and to produce high-quality . 
The contribution of our proposed method is fourfold: instead of simply concatenating hidden features generated from a natural language description and image features encoded from the input image along the channel direction, we propose a novel co-attention module where both features can collaborate to reconstruct the input image and also keep the synthetic semantically aligned with the given text description, a detail correction module (DCM) is introduced to rectify mismatched attributes, and to reconstruct text-unrelated contents existing in the input image, a new metric is proposed, which can appropriately reflect the generation of text-related visual attributes and the reconstruction of text-unrelated contents involved in the image manipulation, and extensive experiments on the CUB and COCO Figure 1: Examples of image manipulation using natural language descriptions. Current state-of-theart methods only generate low-quality images, and fail to do manipulation on COCO. In contrast, our method allows the input images to be manipulated accurately corresponding to the given text descriptions while preserving text-unrelated contents. to demonstrate the superiority of our model, which outperforms existing state-of-the-art methods both qualitatively and quantitatively. There are few studies focusing on image manipulation using natural language descriptions. proposed a GAN-based encoder-decoder architecture to disentangle the semantics of both input images and text descriptions. implemented a similar architecture, but introduced a text-adaptive discriminator that can provide specific word-level training feedback to the generator. However, both methods are limited in performance due to a less effective text-image concatenation method and a coarse sentence condition. Our work is also related to conditional image manipulation. introduced a VAE-GAN hybridisation model to modify natural images by exploring the latent features. and introduced paired and unpaired image-to-image translation methods based on conditional adversarial networks, respectively. However, all these methods focus mainly on image-to-image same-domain translation instead of image manipulation using cross-domain text descriptions. Recently, text-to-image generation has drawn much attention due to the success of GANs in generating photo-realistic images. first proposed to use conditional GANs to generate plausible images from given text descriptions. stacked multiple GANs to generate high-resolution images from coarse-to fine-scale. implemented a spatial attention mechanism to explore the fine-grained information at the word-level. However, all aforementioned methods mainly focus on generating new photo-realistic images from texts, and not on manipulating specific visual attributes of given images using natural language descriptions. Let I denote an input image required to be modified, and S denote a text description given by a user. We aim to semantically manipulate the input image I using the given text S, and also keep the visual attributes of the modified image I semantically aligned with S while preserving textunrelated contents existing in I. To achieve this, we first adopt the ControlGAN , as our basic framework, as it can effectively control text-to-image generation, and manipulate visual attributes of synthetic images. Then, we propose two novel components: co-attention module, and detail correction module to achieve effective image manipulation. 
We elaborate our model as follow, and the full architecture diagram is shown in Appendix A. As shown in Fig. 2 (a), our co-attention module takes two inputs: the hidden features h ∈ R C×H×D, where C is the number of channels, H and D are the height and width of the feature map, respectively, and the regional image features v ∈ R 256×17×17 of the input image I encoded by the GoogleNet . The activation value h ∈ R C×H×D is given by h = h W (v) + b(v), where W (v) and b(v) are the learned weights and biases dependent on the regional features v, and denotes Hadamard element-wise product. We use W and b to represent the functions that convert the regional features v to scaling and bias values. Then, the activation value h serves as the input for the next stage. We also apply the co-attention module before implementing an image generation network to produce synthetic images; please see Appendix A for more details. This linear combination form has been widely used in normalisation techniques (; ; ;), but, different from them, our co-attention module is only applied at specific positions instead of all normalisation layers, which requires less computational resources, and our co-attention module is designed to incorporate text and image cross-domain information, where W helps the model to focus on text-related visual attributes, while b provides input image information to help to reconstruct text-unrelated contents. Also, we experimentally find that implementing our co-attention module at all normalisation layers fails to produce reasonable images, which indicates that the normalisation techniques may not be suitable for the tasks requiring different domain information. , the functions W and b are implemented by a simple two-layer convolutional network, see Fig. 2 What has been learned by the co-attention module? To better understand what has been learned by our co-attention module, we conduct an ablation study shown in Fig. 3 to evaluate the effectiveness of W and b. As we can see, without W, some visual attributes cannot be perfectly generated (e.g., white belly in row 1 and the red head in row 2), and without b, the text-unrelated contents (e.g., ) are hard to preserve, which verify our assumption that W behaves as an attention function to help the model focus on text-related visual attributes, and b helps to complete missing text-unrelated details existing in the input image. Also, the visualisation of the channel feature maps of W (v) shown in the last three columns of Fig. 3 validates the attention mechanism of W. The main purpose of our model is to incorporate input images and then generate modified images aligned with given text descriptions. Then, it may inevitably produce some new visual attributes or mismatched contents that are not required in the given texts. To fix this issue, we propose a The bird has a black bill, a red crown, and a white belly. (top) This bird has wings that are black, and has a red belly and a red head. detail correction module (DCM) to rectify inappropriate attributes, and to reconstruct text-unrelated contents existing in the input images. The DCM consists of a generator and a discriminator, and is trained alternatively by minimising both objective functions. The generator, shown in Fig. 
2 (b), takes three inputs: the last hidden features h last ∈ R C ×H ×D from the main module (we call our model without the DCM as main module), the word features, and visual features v ∈ R 128×128×128 that are extracted from the input image I by the VGG-16 pretrained on ImageNet . We have also applied GoogleNet and ResNet for feature extraction, but both do not perform well. Please refer to Appendix D for a detailed description of the detail correction module. We train the main module and detail correction module separately, and the generator and discriminator in both modules are trained alternatively by minimising both the generator loss L G and discriminator loss L D. Generator objective. The loss function for the generator follows those used in ControlGAN , but we introduce a regularisation term L reg = 1 − 1 C I H I W I ||I − I|| to prevent the network achieving identity mapping, which can penalise large perturbations when the generated image becomes the same as the input image. where the unconditional adversarial loss makes the synthetic image I indistinguishable from the real image I, the conditional adversarial loss aligns the generated image I with the given text description S, L DAMSM measures the text-image similarity at the word-level to provide finegrained feedback for image generation, L corre determines whether words-related visual attributes exist in the image, and L rec reduces randomness involved in the generation process. λ 1, λ 2, λ 3, and λ 4 are hyperparameters controlling the importance of additional losses. Note that we do not use L rec when we train the detail correction module. Discriminator objective. The loss function for the discriminator follows those used in Control-GAN , and the function used to train the discriminator in the detail correction module is the same as the one used in the last stage of the main module. conditional adversarial loss where S is a given text description randomly sampled from the dataset, the unconditional adversarial loss determines whether the given image is real, and the conditional adversarial loss reflects the semantic similarity between images and texts. Analysis. To prevent the model picking the input image as the solution, i.e., the model becomes an identity mapping network, we first introduce a regularisation term L reg to penalise large perturbations when the generated image becomes the same as the input image, and then we stop the training early when the model reaches a stage achieving the best trade-off between the generation of new visual attributes aligned with given text descriptions and the reconstruction of text-unrelated contents existing in the input images. As for when to stop training, it is based on our proposed measurement metric, called manipulative precision (see Fig. 4), which is discussed in Sec. 4. To evaluate our model, extensive quantitative and qualitative experiments are carried out. Two stateof-the-art approaches on image manipulation using natural language descriptions, SISGAN and TAGAN , are compared on the CUB birds and more complicated COCO datasets. Results for these two baselines are reproduced based on the code released by the authors. Please refer to Appendix A, B, and C for a detailed description of our network structures, the datasets, and training configurations. Quantitative . As mentioned above, our model can generate high-quality images compared with state-of-the-art methods. To demonstrate this, we adopt the inceptions score (IS) as the quantitative evaluation measure. 
In our experiments, we evaluate the IS on a large number of manipulated samples generated from mismatched pairs, i.e., randomly chosen input images manipulated by randomly selected text descriptions. However, as the IS cannot reflect the quality of the content preservation, the L 1 pixel difference (diff) is calculated between the input images and corresponding modified images. Moreover, using the pixel difference alone may falsely report a good reconstruction due to over-training that the model becomes an identity mapping network. To address this issue, we propose a new measurement metric, called manipulative precision (MP), incorporating both the text-image similarity (sim) and the pixel difference, where the text-image similarity is calculated by performing the cosine similarity on the text features and corresponding image features encoded from the modified images. This is based on the intuition that if the manipulated images are generated from an identity mapping network, then the text-image similarity should be low, as the synthetic images cannot perfectly keep a semantic consistence with given text descriptions. Thus, the measurement metric is defined as MP = (1 − diff) × sim. As shown in Table 1, our method has the highest MP values on both the CUB and COCO datasets compared with the state-of-the-art approaches, which demonstrates that our method can better generate text-related visual attributes, and also reconstruct text-unrelated contents existing in the input images. The model without main module (i.e., only having the DCM) gets the highest IS, the lowest L 1 pixel difference, and low text-image similarity. This is because the model has become a identity mapping network and loses the capability of image manipulation. Qualitative . Figs. 5 and 6 show the visual comparison between our ManiGAN, SISGAN , and TAGAN on the CUB and COCO datasets, respectively. It can be seen that both state-of-the-art methods are only able to produce low-quality and cannot effectively manipulate input images on the COCO dataset. However, our method is capable to perform an accurate manipulation and keep a highly semantic consistence between synthetic images and given text descriptions, while preserving text-unrelated contents. For example, shown in the last column of Fig. 6, SISGAN and TAGAN both fail to achieve an effective manipulation, while our model modifies the green grass to dry grass and also maps the cow into a sheep. Note that as birds can have many detailed descriptions (e.g., colour for different parts), we use a long sentence to manipulate them, while the text descriptions for COCO are more abstract and focus mainly on categories, thus we use words to do manipulation for simplicity, which has the same effect as using long detailed text descriptions. The effectiveness of the co-attention module. To verify the effectiveness of the co-attention module, we use the concatenation method to replace all co-attention modules, which concatenates hidden features h and regional features v along the channel direction, shown in Figs. 7 and 8 (d). As we can see that our full model can synthesise an object having exactly the same shape, pose, and position as the one existing in the input image, and also generate new visual attributes aligned with the given text description on the synthetic image. In contrast, as shown in the last two columns of Figs. 7 and 8 (d), with concatenation method, the model cannot reconstruct birds on the CUB bird dataset, and fails to do manipulation on the COCO dataset. 
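The manipulative precision metric defined above is straightforward to compute once the pixel difference and the text-image similarity are available. The sketch below assumes images scaled to [0, 1], an L1 pixel difference averaged over all pixels, and text/image embeddings produced by whatever encoders the evaluation uses; these normalisation choices are our assumptions.

```python
import torch
import torch.nn.functional as F

def manipulative_precision(original, modified, text_feat, image_feat):
    """MP = (1 - diff) * sim: diff is the L1 pixel difference between input and
    manipulated images, sim the cosine similarity between text embeddings and the
    embeddings of the manipulated images (Sec. 4)."""
    diff = (original - modified).abs().mean()
    sim = F.cosine_similarity(text_feat, image_feat, dim=-1).mean()
    return ((1.0 - diff) * sim).item()
```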
This bird is yellow with a yellow belly, and has a yellow beak. A small bird with a red belly, a red crown, and black wings. This bird has wings that are brown, and has an orange belly and an orange breast. A bird that has a red beak, a grey head, and a grey belly. Also, to further validate the effectiveness of the co-attention module, we conduct an ablation study shown in Fig. 8 (c). It can be seen that our model without co-attention module that we just concatenate text and image features before feeding into the main module, which is used in and , fails to produce reasonable images on both datasets. In contrast, our full model can better generate text-required attributes and also reconstruct text-unrelated contents shown in the last column. Table 1 also verifies the effectiveness of our co-attention module, as the values of IS and MP increase significantly when we implement the co-attention module. The effectiveness of the detail correction module and main module. As shown in Fig. 8 (f), our model without detail correction module may miss some visual attributes (e.g., the bird missing the tail at row 2, the zebra missing the mouth at row 3), or generate new contents (e.g., new at row 1, different appearance of bus at row 4), which indicates that the detail correction module can correct inappropriate attributes and reconstruct the text-unrelated contents. Fig. 8 (e) shows that without the main module, our model fails to do image manipulation on both datasets, which just achieves an identity mapping. This is mainly because the model cannot precisely correlate words with corresponding visual attributes, which mostly has been done in the main module. A bird with black eye rings and a black bill, with a yellow crown and white belly. (matched) This bird has a yellow bill, a blue head, blue wings, and yellow belly. This beautiful bird is made up random patches of red, white, black, orange, and brown. (matched) A bird is brown and white in colour, with a grey belly and short orange bill. (given) Text Original Ours, Matched Ours, Given Concat., Matched Concat., Given Figure 7: Analysis of the co-attention module. "Matched" represents the texts matching original images. "Given" represents the texts provided by users. "Concat." denotes that instead of using co-attention, hidden features are concatenated with image features along the channel direction. This bird has a light grey belly, dark grey wings and head with a red beak. This bird has a yellow crown, blue wings and a yellow belly. removing the co-attention module and only concatenating image features and text features before feeding into the main module; d: using concatenation method to replace all co-attention modules; e: removing the main module and just training the DCM only; f: removing the DCM and just training the main module only; g: our full model. We have proposed a novel generative adversarial network for visual attributes manipulation, called ManiGAN, which can semantically manipulate the input images using natural language descriptions. Two novel components are proposed in our model: the co-attention module enables cooperation between hidden features and image features where both features can collaborate to reconstruct the input image and also keep the synthetic semantically aligned with the given text description, and the detail correction module can rectify mismatched visual attributes of the synthetic , and also reconstruct text-unrelated contents existing in the input image. 
Extensive experimental demonstrate the superiority of our proposed method, in terms of both the effectiveness of image manipulation and the capability of generating high-quality . We adopt the ControlGAN as the basic framework and replace batch normalisation with instance normalisation everywhere in the generator network except in the first stage. Basically, the co-attention module can be inserted anywhere in the generator, but we experimentally find that it is best to incorporate the module before upsampling blocks and image generation networks; see Fig. 9. Our method is evaluated on the CUB birds and the MS COCO datasets. The CUB dataset contains 8,855 training images and 2,933 test images, and each image has 10 corresponding text descriptions. As for the COCO dataset, it contains 82,783 training images and 40,504 validation images, and each image has 5 corresponding text descriptions. We preprocess this two datasets based on the methods introduced in. In our setting, we train the detail correction module (DCM) separately from the main module. Once the main module has converged, we train the DCM subsequently and set the main module as the eval mode. There are three stages in the main module, and each stage contains a generator and a discriminator. We train three stages at the same time, and three different-scale images 64×64, 128× 128, 256 × 256 are generated progressively. The main module is trained for 600 epochs on the CUB dataset and 120 epochs on the COCO dataset using the Adam optimiser with the learning rate 0.0002, and β 1 = 0.5, β 2 = 0.999. We do not use any learning rate decay, but for visualising generator output at any given point during the training, we use an exponential running average for the weights of the generator with decay 0.999. As for the DCM, there is a trade-off between generation of text-related attributes and the reconstruction of text-unrelated contents. Based on the manipulative precision (MP) values (see Fig. 4), we find that training 100 epochs for the CUB, and 12 epochs for the COCO to achieve an appropriate balance between generation and reconstruction. The other training setting are the same as in the main module. The hyperparameters λ 1, λ 2, λ 3, and λ 4 are set to 1, 5, 0.5, and 1 for the CUB dataset, and 15, 5, 0.5, and 1 for COCO, respectively. First, the visual features v are converted into the same size as the hidden features h last via a convolutional layer F, denotedṽ = F v, whereṽ ∈ R 128×H ×D. Then, we adopt the spatial attention and channel-wise attention introduced in to generate spatial attentive word-context features s ∈ R C ×H ×D and channel-wise attentive word-context features c ∈ R C ×H ×D, and concatenate these two features with the hidden features h last along the channel direction to generate new hidden features a ∈ R (3 * C)×H ×D. Next, to incorporate the visual featuresṽ, we adopt the co-attention module here, donatedã = a W (ṽ) + b (ṽ), where W and b are learned weights and bias dependent on visual featuresṽ. Then, the transformed featuresã are fed into a series of residual blocks followed by a convolutional layer to generate hidden features e. Before feeding e into a network to generate the output image, we apply the co-attention module on the e again to further strengthen the visual information; see Fig. 2 (b). We also track the trend of manipulation over epoch increases, as shown in Fig. 10. 
The image is smoothly modified to achieve the best balance between the generation of new visual attributes (e.g., dirt ) and the reconstruction of text-unrelated contents (e.g., the appearance of zebras). However, when the epoch goes larger, the generated visual attributes (e.g., dirt ) aligned with the given text description are erased, and the synthetic image becomes more and more similar to the input image. This verifies the existence of the trade-off between the generation of new visual attributes required in the given text description and the reconstruction of contents existing in the input image. We show additional comparison between our ManiGAN, SISGAN , and TAGAN on the CUB and COCO datasets. This bird is blue and grey with a red belly. This bird has wings that are grey and yellow with a yellow belly. This bird is black in colour, with a red crown and a red beak. This green bird has a black crown and a green belly. A bird with brown wings and a yellow body, with a yellow head. A white bird with grey wings and a red bill, with a white belly. Original SISGAN TAGAN Ours Figure 11: Additional between ManiGAN, SISGAN, and TAGAN on the CUB bird dataset. A small blue bird with an orange crown, with a grey belly. This bird has a red head, black eye rings, and a yellow belly. This bird is mostly red with a black beak, and a black tail. This tiny bird is blue and has a red bill and a red belly. This bird has a white head, a yellow bill, and a yellow belly. A white bird with red throat, black eye rings, and grey wings.
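For completeness, the two building blocks defined in the method section can be sketched in PyTorch as follows: the co-attention operation h' = h ⊙ W(v) + b(v) with W and b as small two-layer convolutional networks, and the regularisation term L_reg that discourages the generator from collapsing to an identity mapping. The kernel sizes, the bilinear resizing of the regional features v to the spatial size of h, and the per-pixel normalisation of L_reg are assumptions of ours.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoAttention(nn.Module):
    """h' = h * W(v) + b(v): the learned scale attends to text-related attributes,
    the learned bias re-injects content of the input image."""
    def __init__(self, h_channels, v_channels=256, hidden=128):
        super().__init__()
        def two_layer_conv():
            return nn.Sequential(nn.Conv2d(v_channels, hidden, 3, padding=1),
                                 nn.ReLU(inplace=True),
                                 nn.Conv2d(hidden, h_channels, 3, padding=1))
        self.W, self.b = two_layer_conv(), two_layer_conv()

    def forward(self, h, v):
        v = F.interpolate(v, size=h.shape[2:], mode='bilinear', align_corners=False)
        return h * self.W(v) + self.b(v)

def identity_penalty(fake, real):
    """L_reg = 1 - ||I' - I||_1 / (C_I * H_I * W_I): largest when the output merely
    copies the input, so adding it to the generator loss discourages identity mapping."""
    return (1.0 - (fake - real).abs().mean(dim=(1, 2, 3))).mean()

h, v = torch.randn(2, 64, 32, 32), torch.randn(2, 256, 17, 17)   # hidden / regional features
print(CoAttention(64)(h, v).shape)                               # torch.Size([2, 64, 32, 32])
```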
HJl6tC4KwB
We propose a novel method to manipulate given images using natural language descriptions.
Optimal Transport (OT) naturally arises in many machine learning applications, where we need to handle cross-modality data from multiple sources. Yet the heavy computational burden limits its wide-spread uses. To address the scalability issue, we propose an implicit generative learning-based framework called SPOT (Scalable Push-forward of Optimal Transport). Specifically, we approximate the optimal transport plan by a pushforward of a reference distribution, and cast the optimal transport problem into a minimax problem. We then can solve OT problems efficiently using primal dual stochastic gradient-type algorithms. We also show that we can recover the density of the optimal transport plan using neural ordinary differential equations. Numerical experiments on both synthetic and real datasets illustrate that SPOT is robust and has favorable convergence behavior. SPOT also allows us to efficiently sample from the optimal transport plan, which benefits downstream applications such as domain adaptation. The Optimal Transport (OT) problem naturally arises in a variety of machine learning applications, where we need to handle cross-modality data from multiple sources. One example is domain adaptation: We collect multiple datasets from different domains, and we need to learn a model from a source dataset, which can be further adapted to target datasets BID18 BID8. Another example is resource allocation: We want to assign a set of assets (one data source) to a set of receivers (another data source) so that an optimal economic benefit is achieved BID46 BID17. Recent literature has shown that both aforementioned applications can be formulated as optimal transport problems. The optimal transport problem has a long history, and its earliest literature dates back to Monge. Since then, it has attracted increasing attention and been widely studied in multiple communities such as applied mathematics, probability, economy and geography BID51; BID23. Specifically, we consider two sets of data, which are generated from two different distributions denoted by X ∼ µ and Y ∼ ν.1 We aim to find an optimal joint distribution γ of X and Y, which minimizes the expectation on some ground cost function c, i.e., γ * = arg min γ∈Π(µ,ν) DISPLAYFORM0 The constraint γ ∈ Π(µ, ν) requires the marginal distribution of X and Y in γ to be identical to µ and ν, respectively. The cost function c measures the discrepancy between input X and Y. For crossmodality structured data, the form of c incorporates prior knowledge into optimal transport problem. Existing literature often refers to the optimal expected cost W * (µ, ν) = E (X,Y)∼γ * [c(X, Y)] as Wasserstein distance when c is a distance, and γ * as the optimal transport plan. For domain adaptation, the function c measures the discrepancy between X and Y, and the optimal transport plan γ * essentially reveals the transfer of the knowledge from source X to target Y. For resource allocation, the function c is the cost of assigning resource X to receiver Y, and the optimal transport plan γ To address the scalability and efficiency issues, we propose a new implicit generative learning-based framework for solving optimal transport problems. Specifically, we approximate γ * by a generative model, which maps from some latent variable Z to (X, Y). For simplicity, we denote DISPLAYFORM1 where ρ is some simple latent distribution and G is some operator, e.g., deep neural network or neural ordinary differential equation (ODE). 
Accordingly, instead of directly estimating the probability density of γ *, we estimate the mapping G between Z and (X, Y) by solving DISPLAYFORM2 We then cast equation 3 into a minimax optimization problem using the Lagrangian multiplier method. As the constraints in equation 3 are over the space of continuous distributions, the Lagrangian multiplier is actually infinite dimensional. Thus, we propose to approximate the Lagrangian multiplier by deep neural networks, which eventually delivers a finite dimensional generative learning problem. Our proposed framework has three major benefits: Our formulated minimax optimization problem can be efficiently solved by primal dual stochastic gradient-type algorithms. Many empirical studies have corroborated that these algorithms can easily scale to very large minimax problems in machine learning BID2; Our framework can take advantage of recent advances in deep learning. Many empirical evidences have suggested that deep neural networks can effectively adapt to data with intrinsic low dimensional structures BID33. Although they are often overparameterized, due to the inductive biases of the training algorithms, the intrinsic dimensions of deep neural networks are usually controlled very well, which avoids the curse of dimensionality; Our adopted generative models allow us to efficiently sample from the optimal transport plan. This is very convenient for certain downstream applications such as domain adaptation, where we can generate infinitely many data points paired across domains BID35.Moreover, the proposed framework can also recover the density of entropy regularized optimal transport plan. Specifically, we adopt the neural Ordinary Differential Equation (ODE) approach in to model the dynamics that how Z gradually evolves to G(Z). We then derive the ODE that describes how the density evolves, and solve the density of the transport plan from the ODE. The recovery of density requires no extra parameters, and can be evaluated efficiently. Notations: Given a matrix A ∈ R d×d, det(A) denotes its determinant, tr(A) = i A ii denotes its trace, A F = i,j A 2 ij denotes its Frobenius norm, and |A| denotes a matrix with [|A|] ij = |A ij |. We use dim(v) to denote the dimension of a vector v. We review some knowledge on optimal transport and implicit generative learning. Optimal Transport: The idea of optimal transport (OT) originally comes from Monge, which proposes to solve the following problem, DISPLAYFORM0 where T (·) is a mapping from the space of µ to the space of ν. The optimal mapping T * is referred to as Monge map, and equation 4 is referred to as Monge formulation of optimal transport. Monge formulation, however, is not necessarily feasible. For example, when X is a constant random variable and Y is not, there does not exist such a map T satisfying T (X) ∼ ν. The Kantorovich formulation of our interest in equation 1 is essentially a relaxation of equation 4 by replacing the deterministic mapping with the coupling between µ and ν. Consequently, Kantorovich formulation is guaranteed to be feasible and becomes the classical formulation of optimal transport in existing literature BID1 BID6 BID16 BID50. Implicit Generative Learning: For generative learning problems, direct estimation of a probability density function is not always convenient. For example, we may not have enough prior knowledge to specify an appropriate parametric form of the probability density function (pdf). 
Even when an appropriate parametric pdf is available, computing the maximum likelihood estimator (MLE) can be sometimes neither efficient nor scalable. To address these issues, we resort to implicit generative learning, which do not directly specify the density. Specifically, we consider that the observed variable X is generated by transforming a latent random variable Z (with some known distribution ρ) through some unknown mapping G(·), i.e., X = G(Z). We then can train a generative model by estimating G(·) with a properly chosen loss function, which can be easier to compute than MLE. Existing literature usually refer to the distribution of G(Z) as the push-forward of reference distribution ρ. Such an implicit generative learning approach also enjoys an additional benefit: We only need to choose ρ that is convenient to sample, e.g., uniform or Gaussian distribution, and we then can generate new samples from our learned distribution directly through the estimated mapping G very efficiently. For many applications, the target distribution can be quite complicated, in contrast to the distribution ρ being simple. This actually requires the mapping G to be flexible. Therefore, we choose to represent G using deep neural networks (DNNs), which are well known for its universal approximation property, i.e., DNNs with sufficiently many neurons and properly chosen activation functions can approximate any continuous functions over compact support up to an arbitrary error. Early empirical evidence, including variational auto-encoder and generative adversarial networks (GAN, BID21) have shown great success of parameterizing G with DNNs. They further motivate a series of variants, which adopt various DNN architectures to learn more complicated generative models BID45 BID5 BID54 BID10 BID28.Although the above methods cannot directly estimate the density of the target distribution, for certain applications, we can actually recover the density of G(Z). For example, generative flow methods such as NICE BID13, Real NVP BID14 Glow (Kingma & BID31) impose sparsity constraints on weight matrices, and exploit the hierarchical nature of DNNs to compute the densities layer by layer. Specifically, NICE proposed in BID13 denotes the transitions of densities within a neural network as DISPLAYFORM1, where h i represents the hidden units of the i-th layer and f i is the transition function. NICE suggest to restrict the Jacobian matrices of f i's to be triangular. Therefore, f i's are reversible and the transition of density in each layer can be easily computed. More recently, propose a neural ordinary differential equation (neural ODE) approach to compute the transition from Z to G(Z). Specifically, they introduce a dynamical formulation and parameterizing the mapping G using DNNs with recursive structures: They use an ODE to describe how the input Z gradually evolves towards the output G(Z) in continuous time, dz/dt = ξ(z(t), t), where z(t) denotes the continuous time interpolation of Z, and ξ(·, ·) denotes a feedforward-type DNN. Without loss of generality, we choose z = Z and z = G(Z). Then under certain regularity conditions, the mapping G(·) is guaranteed to be reversible, and the density of G(Z) can be computed in O(d) time, where d is the dimension of Z BID22. For better efficiency and scalability, we propose a new framework -named SPOT (Scalable Pushforward of Optimal Transport) -for solving the optimal transport problem. Before we proceed with the derivation, we first introduce some notations and assumptions. 
Recall that we aim to find an optimal joint distribution γ given by equation 1. For simplicity, we assume that the two marginal distributions X ∼ µ and Y ∼ ν have densities p X (x) and p Y (y) for X ∈ X and Y ∈ Y with compact X and Y, respectively. Moreover, we assume that the joint distribution γ has density p γ. Then we rewrite equation 1 as the following integral form, DISPLAYFORM0 We then convert equation 5 into a minmax optimization problem using the Lagrangian multiplier method. Note that equation 5 contains infinitely many constraints, i.e., the equality constraints need to hold for every x ∈ X and y ∈ Y. Therefore, we need infinitely many Lagrangian multipliers. For notational simplicity, we denote the Lagrangian multipliers associated with x and y by two functions λ X (x): X → R and λ Y (y): Y → R, respectively. Eventually we obtain DISPLAYFORM1 As mentioned earlier, solving p γ in the space of all continuous distributions is generally intractable. Thus, we adopt the push-forward method, which introduces a mapping G from some latent variable DISPLAYFORM2 The latent variable Z follows some distribution ρ that is easy to sample. We then rewrite equation 6 as min DISPLAYFORM3 Note that we have replaced the integrals with expectations, since x∈X p γ (x, y)dx, y∈Y p γ (x, y)dy, p X (x), and p Y (y) are probability density functions. Then we further parameterize G, λ X, and λ Y with neural networks 2. We denote G as the class of neural networks for parameterizing G and similarly F X and F Y as the classes of functions for λ X and λ Y, respectively. Although G, F X, and F Y are finite classes, our parameterization of G cannot exactly represent any continuous distributions of (X, Y) (only up to a small approximation error with sufficiently many neurons). Then the marginal distribution constraints, G X (Z) ∼ µ and G Y (Z) ∼ ν, are not necessarily satisfied. Therefore, the equilibrium of equation 7 does not necessarily exist, since the Lagrangian multipliers can be unbounded. Motivated by BID0, we require the neural networks for parameterizing λ X and λ Y to be η-Lipschitz, denoting as F η X and F η Y, respectively. Here η can be treated as a tuning parameter, and provides a refined control of the constraint violation. Since each η-Lipschitz function can be represented by ηf with f being 1-Lipschitz, we rewrite equation 7 as min DISPLAYFORM4 We apply alternating stochastic gradient algorithm to solve equation 8: in each iteration, we perform a few steps of gradient ascent on λ X and λ Y, respectively for a fixed G, followed by one-step gradient descent on G for fixed λ X and λ Y. We use Spectral Normalization (SN, BID38) to control the Lipschitz constant of λ X and λ Y being smaller than 1. Specifically, SN constrains the spectral norm of each weight matrix W by SN(W) = W/σ(W) in every iteration, where σ(W) denotes the spectral norm of W. Note that σ(W) can be efficiently approximated by a simple one-step power method BID20. Therefore, the computationally intensive SVD can be avoided. We summarize the algorithm in Algorithm 1 with SN omitted. 
Algorithm 1 Mini-batch Primal Dual Stochastic Gradient Algorithm for SPOT Require: DISPLAYFORM5 Initialized networks G, λ X, and λ Y with parameters w, θ, and β, respectively; α, the learning rate; n critic, the number of gradient ascent for λ X and λ Y; n, the batch size while w not converged do DISPLAYFORM6 Connection to Wasserstein Generative Adversarial Networks (WGANs): Our proposed framework equation 8 can be viewed as a multi-task learning version of Wasserstein GANs BID35. Specifically, the mapping G can be viewed as a generator that generates samples in the domains X and Y. The Lagrangian multipliers λ X and λ Y can be viewed as discriminators that evaluate the discrepancies of the generated sample distributions and the target marginal distributions. By restricting DISPLAYFORM7 essentially approximates the Wasserstein distance between the distributions of G X (Z) and X under the Euclidean ground cost BID51, the same holds for Y ). Denote DISPLAYFORM8 which essentially learns two Wasserstein GANs with a joint generator G through the regularizer R. Extension to Multiple Marginal Distributions: Our proposed framework can be straightforwardly extended to more than two marginal distributions. Consider the ground cost function c taking m inputs X 1,..., X m with X i ∼ µ i for i = 1,..., m. Then the optimal transport problem equation 1 becomes the multi-marginal problem BID42: DISPLAYFORM9 where Π(µ 1, µ 2, · · ·, µ m) denotes all the joint distributions with marginal distributions satisfying X i ∼ µ i for all i = 1,..., m. Following the same procedure for two distributions, we cast equation 10 into the following form min DISPLAYFORM10, where G and λ Xi's are all parameterized by neural networks. Existing methods for solving the multi-marginal problem equation 10 suggest to discretize the support of the joint distribution using a refined grid. For complex distributions, the grid size needs to be very large and can be exponential in m BID51. Our parameterization method actually only requires at most 2m neural networks, which further corroborates the scalability and efficiency of our framework. Existing literature has shown that entropy-regularized optimal transportation outperforms the unregularized counterpart in some applications BID15 BID9. This is because the entropy regularizer can tradeoff the estimation bias and variance by controlling the smoothness of the density function. We demonstrate how to efficiently recover the density p γ of the transport plan with entropy regularization. Instead of parameterizing G by a feedforward neural network, we choose the neural ODE approach, which uses neural networks to approximate the transition from input Z towards output G(Z) in the continuous time. Specifically, we take z = Z and z = G(Z). Let z(t) be the continuous interpolation of Z with density p(t) varying according to time t. We split z(t) into z 1 (t) and z 2 (t) such that dim(z 1) = dim(X) and dim(z 2) = dim(Y). We then write the neural ODE as DISPLAYFORM0 where ξ 1 and ξ 2 capture the dynamics of z(t). We parameterize ξ = (ξ 1, ξ 2) by a neural network with parameter w. We describe the dynamics of the joint density p(t) in the following proposition. Proposition 1. Let z, z 1, z 2, ξ 1 and ξ 2 be defined as above. Suppose ξ 1 and ξ 2 are uniformly Lipschitz continuous in z (the Lipschitz constant is independent of t) and continuous in t. 
The log joint density satisfies the following ODE: DISPLAYFORM1 where ∂ξ1 ∂z1 and ∂ξ2 ∂z2 are Jacobian matrices of ξ 1 and ξ 2 with respect to z 1 and z 2, respectively. Proposition 1 is a direct of Theorem 1 in. We can now recover the joint density by taking p γ = p, which further enables us to efficiently compute the entropy regularizer defined as DISPLAYFORM2 Then we consider the entropy regularized Wasserstein distance DISPLAYFORM3 is the objective function in equation 8. Note that here G is a functional operator of ξ, and hence parameterized with w. The training algorithm follows Algorithm 1, except that updating G becomes more complex due to involving the neural ODE and the entropy regularizer. To update G, we are essentially updating w using the gradient g w = ∂(L c + H)/∂w, where is the regularization coefficient. First we compute ∂L c /∂w. We adopt the integral form from in the following DISPLAYFORM4 where a(t) = ∂L c /∂z(t) is the so-called "adjoint variable". The detailed derivation is slightly involved due to the complicated terms in the chain rule. We refer the readers to for a complete argument. The advantage of introducing a(t) is that we can compute a(t) using the following ODE, DISPLAYFORM5 Then we can use a well developed numerical method to compute equation 13 efficiently BID12. Next, we compute ∂H/∂w in a similar procedure with a(t) replaced by b(t) = ∂H/∂ log p(t). We then write DISPLAYFORM6 Using the same numerical method, we can compute ∂H/∂w, which eventually allows us to compute g w and update w. We evaluate the SPOT framework on various tasks: Wasserstein distance approximation, density recovery, paired sample generation and domain adaptation. All experiments are implemented with PyTorch using one GTX1080Ti GPU and a Linux desktop computer with 32GB memory, and we adopt the Adam optimizer with configuration parameters 0.5 and 0.999 . We first demonstrate that SPOT can accurately and efficiently approximate the Wasserstein distance. We take the Euclidean ground cost, i.e. c(x, y) = x − y. Then DISPLAYFORM0 essentially approximates the Wasserstein distance. We take the marginal distributions µ and ν as two Gaussian distributions in R 2 with the same identity covariance matrix. The means are (−2.5, 0) and (2.5, 0), respectively. We find the Wasserstein distance between µ and ν equal to 5 by evaluating its closed-form solution. We generate n = 10 5 samples from both distributions µ and ν, respectively. Note that naively applying discretization-based algorithms by dividing the support according to samples requires at least 40 GB memory, which is beyond the memory capability. We parameterize G X, G Y, λ X, and λ Y with fully connected neural networks without sharing parameters. All the networks use the Leaky-ReLU activation BID37. G X and G Y have 2 hidden layers. λ X and λ Y have 1 hidden layer. The latent variable Z follows the standard Gaussian distribution in R 2. We take the batch size equal to 100.WD vs. Number of Epochs. We compare the algorithmic behavior of SPOT and Regularized Optimal Transport (ROT, BID47) with different regularization coefficients. For SPOT, we set the number of units in each hidden layer equal to 8 and η = 10 4. For ROT, we adopt the code from the authors 3 with only different input samples, learning rates, and regularization coefficients. FIG0 shows the convergence behavior of SPOT and ROT for approximating the Wasserstein distance between µ and ν with different learning rates. 
We observe that SPOT converges to the true Wasserstein distance with only 0.6%, 0.3%, and 0.3% relative errors corresponding to Learning Rates (LR) 10 −3, 10 −4, and 10 −5, respectively. In contrast, ROT is very sensitive to its regularization coefficient. Thus, it requires extensive tuning to achieve a good performance. WD vs. Number of Hidden Units. We then explore the adaptivity of SPOT by increasing the network size, while the input data are generated from some low dimensional distribution. Specifically, the number of hidden units per layer varies from 2 to 2 10. Recall that we parameterize G with two 2-hidden-layer neural networks, and λ X, λ Y with two 1-hidden-layer neural networks. Accordingly, the number of parameters in G varies from 36 to about 2 × 10 6, and that in λ X or λ Y varies from 12 to about 2, 000. The tuning parameter η also varies corresponding to the number of hidden units in λ X, λ Y. We use η = 10 5 for 2 1, 2 2 and 2 3 hidden units per layer, η = 2×10 4 for 2 4, 2 5 and 2 6 hidden units per layer, η = 10 4 for 2 7 and 2 8 hidden units per layer, η = 2 × 10 3 for 2 9, and 2 10 hidden units per layer. FIG1 shows the estimated WD with respect to the number of hidden units per layer. For large neural networks that have 2 9 or 2 10 hidden units per layer, i.e., 5.2 × 10 5 or 2.0 × 10 6 parameters, the number of parameters is far larger than the number of samples. Therefore, the model is heavily overparameterized. As we can observe in FIG1, the relative error however, does not increase as the number of parameters grows. This suggests that SPOT is robust with respect to the network size. We demonstrate that SPOT can effectively recover the joint density with entropy regularization. We adopt the neural ODE approach as described in Section 4. Denote φ(a, b) as the density N (a, b). We take the marginal distributions µ and ν as Gaussian distributions φ and φ(2, 0.5); mixtures of Gaussian 1 2 φ(−1, 0.5) + 1 2 φ(1, 0.5) and 1 2 φ(−2, 0.5)+ 1 2 φ(2, 0.5). The ground cost is the Euclidean square function, i.e., c(x, y) = x−y 2. We run the training algorithm for 6 × 10 5 iterations and in each iteration, we generate 500 samples from µ and ν, respectively. We parameterize ξ with a 3-hidden-layer fully-connected neural network with 64 hidden units per layer, and the latent dimension is 2. We take η = 10 6.Figure 4: Visualization of the marginal distributions and the joint density of the optimal transport plan. Figure 4 shows the input marginal densities and heat maps of output joint densities. We can see that a larger regularization coefficient yields a smoother joint density for the optimal transport plan. Note that with continuous marginal distributions and the Euclidean square ground cost, the joint density of the unregularized optimal transport degenerates to a generalized impulse function (i.e., a generalized Dirac δ function that has nonzero value on a manifold instead of one atom, as shown in Rachev FORMULA0 ; Onural FORMULA1). Entropy regularization prevents such degeneracy by enforcing smoothness of the density. We show that SPOT can generate paired samples (G X (Z), G Y (Z)) from unpaired data X and Y that are sampled from marginal distributions µ and ν, respectively. Synthetic Data. We take the squared Euclidean cost, i.e. c(x, y) = x−y 2, and adopt the same implementation and sample size as in Section 5.1 with learning rate 10 −3 and 32 hidden units per layer. 
FIG3 illustrates the input samples and the generated samples with two sets of different marginal distributions: The upper row corresponds to the same Gaussian distributions as in Section 5.1. The lower row takes X as Gaussian distribution with mean (−2.5, 0) and covariance 0.5I, Y as (sin(Y 1) + Y 2, 2Y 1 − 3), where Y 1 follows a uniform distribution on, and Y 2 follows a Gaussian distribution N (2, 0.1). We observe that the generated samples and the input samples are approximately identically distributed. Additionally, the paired relationship is as expected -the upper mass is transported to the upper region, and the lower mass is transported to the lower region. Real Data. We next show SPOT is able to generate high quality paired samples from two unpaired real datasets: MNIST and MNISTM BID18. The handwritten digits in MNIST and MNISTM datasets have different s and foregrounds (see FIG2 . The digits in paired images however, are expected to have similar contours. We leverage this prior knowledge 4 by adopting a semantic-aware cost function BID34 to extract the edge of handwritten letters, i.e., we use the following cost function DISPLAYFORM0 where C 1 and C 2 denote the Sobel filter BID49, and x j 's and y j 's are the three channels of RGB images. The operator * denotes the matrix convolution. We set We now use separate neural networks to parameterize G X and G Y instead of taking G X and G Y as outputs of a common network. Note that G X and G Y does not share parameters. Specifically, we use two 4-layer convolutional layers in each neural network for G X or G Y, and two 5-layer convolutional neural networks for λ X and λ Y . More detailed network settings are provided in Appendix A.2. The batch size is 32, and we train the framework with 2 × 10 5 iterations until the generated samples become stable. FIG2 shows the generated samples of SPOT. We also reproduce the of CoGAN with the code from the authors 5 . As can be seen, with approximately the same network size, SPOT yields paired images with better quality than CoGAN: The contours of the paired of SPOT are nearly identical, while the of CoGAN have no clear paired relation. Besides, the images corresponding to G Y (Z) in SPOT have colorful foreground and , while in CoGAN there are only few colors. Recall that in SPOT, the paired relation is encouraged by ground cost c, and in CoGAN it is encouraged by sharing parameters. By leveraging prior knowledge in ground cost c, the paired relation is more accurately controlled without compromising the quality of the generated images. DISPLAYFORM1 We further test our framework on more complex real datasets: Photo-Monet dataset and Edges-Shoes dataset. We adopt the Euclidean cost function for Photo-Monet dataset, and the semantic-aware cost function as in MNIST-MNISTM for Edges-Shoes dataset. Other implementations remain the same as the MNIST-MINSTM experiment. FIG5 demonstrates the generated samples of both datasets. We observe that the generated images have a desired paired relation: For each Z, G X (Z) and G Y (Z) gives a pair of corresponding scenery and shoe. The generated images are also of high quality, especially considering that Photo-Monet dataset is a pretty small but complex dataset with 6,288 photos and 1,073 paintings. Optimal transport has been used in domain adaptation, but existing methods are either computationally inefficient BID7, or cannot achieve a state-of-the-art performance BID48. 
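Stepping back to the semantic-aware ground cost used above for MNIST-MNISTM (and again for Edges-Shoes), the sketch below is one plausible reading of it: per-channel Sobel edge responses compared under a squared Euclidean distance. The 3x3 kernels are the standard Sobel filters; any normalization or channel weighting used in the paper may differ.

```python
import torch
import torch.nn.functional as F

# Standard 3x3 Sobel kernels (horizontal / vertical edge responses).
C1 = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
C2 = C1.transpose(2, 3)

def semantic_cost(x, y):
    """Edge-aware ground cost between RGB image batches x, y of shape (B, 3, H, W):
    sum over channels of squared differences between Sobel responses."""
    cost = torch.zeros(x.size(0))
    for j in range(3):                       # loop over the three RGB channels
        xj, yj = x[:, j:j + 1], y[:, j:j + 1]
        for k in (C1, C2):
            ex = F.conv2d(xj, k, padding=1)
            ey = F.conv2d(yj, k, padding=1)
            cost = cost + ((ex - ey) ** 2).flatten(1).sum(dim=1)
    return cost                               # shape (B,)
```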
Here, we demonstrate that SPOT can tackle large scale domain adaptation problems with state-of-the-art performance. In particular, we receive labeled source data {x i} ∼ µ, where each data point is associated with a label v i, and target data {y j} ∼ ν with unknown labels. For simplicity, we use X and Y to denote the random vectors following distributions µ and ν, respectively. The two distributions µ and ν can be coupled in a way that each paired samples of (X, Y) from the coupled joint distribution are likely to have the same label. In order to identify such coupling information between source and target data, we propose a new OT-based domain adaptation method -DASPOT (Domain Adaptation with SPOT) as follows. Specifically, we jointly train an optimal transport plan and two classifiers for X and Y (denoted by D X and D Y, respectively). Each classifier is a composition of two neural networks -an embedding network and a decision network. For simplicity, we denote D X = D e,X • D c,X, where D e,X denotes the embedding network, and D c,X denotes the decision network (respectively for D Y = D e,Y • D c,Y). We expect the embedding networks to extract high level features of the source and target data, and then find an optimal transport plan to align X and Y based on these high level features using SPOT. Here we choose a ground cost c(x, y) = D e,X (x) − D e,Y (y) 2. Let G denote the generator of SPOT. The Wasserstein distance of such an OT problem can be written as DISPLAYFORM0 Meanwhile, we train D X by minimizing the empirical risk DISPLAYFORM1, where E denotes the cross entropy loss function, and train D Y by minimizing DISPLAYFORM2 where [Eventually, the joint training optimize DISPLAYFORM3 where DISPLAYFORM4 is the objective function of OT problem in equation 8 with c defined above, and η s, η da are the tuning parameters. We choose η s = 10 3 for all experiments. We set η da = 0 for the first 10 5 iteration to wait the generators to be well trained. Then we set η da = 10 for the next 3 × 10 5 iteration. We take totally 4 × 10 5 iterations, and set the learning rate equal to 10 −4 and batch size equal to 128 for all experiments. We evaluate DASPOT with the MNIST, MNISTM, USPS BID26, and SVHN BID40 datasets. We denote a domain adaptation task as Source Domain → Target Domain. We compare the performance of DASPOT with other optimal transport based domain adaptation methods: ROT BID48, StochJDOT and DeepJDOT . As can be seen in TAB0, DASPOT achieves equal or better performances on all the tasks. Moreover, we show that DeepJDOT is not as efficient as DASPOT. For example, in the MNIST → USPS task, DASPOT requires 169s running time to achieve a 95% accuracy, while DeepJDOT requires 518s running time to achieve the same accuracy. The reason behind is that DeepJDOT needs to solve a series of optimal transport problems using Sinkhorn algorithm. The implementation of DeepJDOT is adapted from the authors' code 6. Existing literature shows that several stochastic algorithms can efficiently compute the Wasserstein distance between two continuous distributions. These algorithms, however, only apply to the dual of the OT problem equation 1, and cannot provide the optimal transport plan. For example, BID19 suggest to expand the dual variables in two reproducing kernel Hilbert spaces. 
They then apply the Stochastic Averaged Gradient (SAG) algorithm to compute the optimal objective value of OT with continuous marginal distributions or semi-discrete marginal distributions (i.e., one marginal distribution is continuous and the other is discrete). The follow-up work, BID47, parameterizes the dual variables with neural networks and applies Stochastic Gradient Descent (SGD) to achieve better convergence. These two methods, however, can only provide the optimal transport plan and recover the joint density when the densities of the marginal distributions are known, which is prohibitive in most applications, since we only have access to empirical data. Our framework, in contrast, allows us to efficiently compute the joint density from the transformation of the latent variable Z as in Section 4. TAB1 shows the architecture of the two discriminators λ X and λ Y; the two networks have identical architectures and do not share parameters. This CNN architecture is used for USPS, MNIST, and MNISTM, with PReLU activations BID24. TAB2 shows the architecture of the two generators G X and G Y; its last column indicates whether G X and G Y share the same parameters. TAB3 shows the architecture of the two discriminators λ X, λ Y and the two classifiers D X, D Y; its last column uses (·, ·) to denote which group of discriminators share the same parameters. TAB4 shows the architecture of the two generators G X and G Y; its last column indicates whether G X and G Y share the same parameters. The residual block is the same as the one in BID38. TAB5 shows the architecture of the two discriminators λ X, λ Y and the two classifiers D X, D Y; its last column uses (·, ·) to denote which group of discriminators share the same parameters.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
S1xA7ILFOV
Use GAN-based method to scalably solve optimal transport
In this work, we propose a novel formulation of planning which views it as a probabilistic inference problem over future optimal trajectories. This enables us to use sampling methods, and thus, tackle planning in continuous domains using a fixed computational budget. We design a new algorithm, Sequential Monte Carlo Planning, by leveraging classical methods in Sequential Monte Carlo and Bayesian smoothing in the context of control as inference. Furthermore, we show that Sequential Monte Carlo Planning can capture multimodal policies and can quickly learn continuous control tasks. To exhibit intelligent behaviour machine learning agents must be able to learn quickly, predict the consequences of their actions, and explain how they will react in a given situation. These abilities are best achieved when the agent efficiently uses a model of the world to plan future actions. To date, planning algorithms have yielded very impressive . For instance, Alpha Go BID36 relied on Monte Carlo Tree Search (MCTS) BID23 ) to achieve super human performances. Cross entropy methods (CEM) BID34 have enabled robots to perform complex nonprehensile manipulations BID11 and algorithms to play successfully Tetris BID39. In addition, iterative linear quadratic regulator (iLQR) BID21 BID20 BID41 enabled humanoid robots tasks to get up from an arbitrary seated pose.Despite these successes, these algorithms make strong underlying assumptions about the environment. First, MCTS requires a discrete setting, limiting most of its successes to discrete games with known dynamics. Second, CEM assumes the distribution over future trajectories to be Gaussian, i.e. unimodal. Third, iLQR assumes that the dynamics are locally linear-Gaussian, which is a strong assumption on the dynamics and would also assume the distribution over future optimal trajectories to be Gaussian. For these reasons, planning remains an open problem in environments with continuous actions and complex dynamics. In this paper, we address the limitations of the aforementioned planning algorithms by creating a more general view of planning that can leverage advances in deep learning (DL) and probabilistic inference methods. This allows us to approximate arbitrary complicated distributions over trajectories with non-linear dynamics. We frame planning as density estimation problem over optimal future trajectories in the context of control as inference BID10 BID45 BID43; BID47 BID31. This perspective allows us to make use of tools from the inference research community and, as previously mentioned, model any distribution over future trajectories. The planning distribution is complex since trajectories consist of an intertwined sequence of states and actions. Sequential Monte Carlo (SMC) BID38 BID13 BID27 methods are flexible and efficient to model such a T −1 t≥1 p env (s t+1 |s t, a t) T t≥1 π θ (a t |s t) denotes the probability of a trajectory x 1:T under policy π θ. FIG8.1: O t is an observed optimality variable with probability p(O t |s t, a t) = exp(r(s t, a t)).x t = (s t, a t) are the state-action pair variables considered here as latent. Traditionally, in reinforcement learning (RL) problems, the goal is to find the optimal policy that maximizes the expected return E q θ [T t=1 γ t r t]. However, it is useful to frame RL as an inference problem within a probabilistic graphical framework BID33 BID45 BID30. 
First, we introduce an auxiliary binary random variable O t denoting the "optimality" of a pair (s t, a t) at time t and define its probability 1 as p(O t = 1|s t, a t) = exp(r(s t, a t)). O is a convenience variable only here for the sake of modeling. By considering the variables (s t, a t) as latent and O t as observed, we can construct a Hidden Markov Model (HMM) as depicted in figure 2.1. Notice that the link s → a is not present in figure 2.1 as the dependency of the optimal action on the state depends on the future observations. In this graphical model, the optimal policy is expressed as p(a t |s t, O t:T).The posterior probability of this graphical model can be written as 2: DISPLAYFORM0 r(s t, a t) + log p(a t).(2.1)It appears clearly that finding optimal trajectories is equivalent to finding plausible trajectories yielding a high return.1 as in BID30, if the rewards are bounded above, we can always remove a constant so that the probability is well defined.2 Notice that in the rest of the paper, we will abusively remove the product of the action priors T t=1 p(at) = exp T t=1 log p(at) from the joint as in BID30. We typically consider this term either constant or already included in the reward function. See Appendix A.2 for details. Many control as inference methods can be seen as approximating the density by optimizing its variational lower bound: BID43. Instead of directly differentiating the variational lower bound for the whole trajectory, it is possible to take a message passing approach such as the one used in Soft Actor-Critic (SAC) BID17 and directly estimate the optimal policy p(a t |s t, O t:T) using the backward message, i.e a soft Q function instead of the Monte Carlo return. DISPLAYFORM1 Since distributions over trajectories are complex, it is often difficult or impossible to directly draw samples from them. Fortunately in statistics, there are successful strategies for drawing samples from complex sequential distributions, such as SMC methods. For simplicity, in the remainder of this section we will overload the notation and refer to the target distribution as p(x) and the proposal distribution as q(x). We wish to draw samples from p but we only know its unnormalized density. We will use the proposal q to draw samples and estimate p. In the next section, we will define the distributions p and q in the context of planning. Importance sampling (IS): When x can be efficiently sampled from another simpler distribution q i.e. the proposal distribution, we can estimate the likelihood of any point x under p straightforwardly by computing the unnormalized importance sampling weights w(x) ∝ p(x) q(x) and using the identity DISPLAYFORM0 is defined as the normalized importance sampling weights. In practice, one draws N samples from q: {x (n) } N n=1 ∼ q; these are referred to as particles. The set of particles {x (n) } N n=1 associated with their weights {w (n) } N n=1 are simulations of samples from p. That is, we approximate the density p with a weighted sum of diracs from samples of q: DISPLAYFORM1, with x (n) sampled from q where δ x0 (x) denotes the Dirac delta mass located as x 0.Sequential Importance Sampling (SIS): When our problem is sequential in nature x = x 1:T, sampling x 1:T at once can be a challenging or even intractable task. By exploiting the sequential structure, the unnormalized weights can be updated iteratively in an efficient manner: w t (x 1:t) = w t−1 (x 1:t−1) p(xt|x1:t−1) q(xt|x1:t−1). We call this the update step. 
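As a small self-contained illustration of this update step (a toy example with stand-in Gaussian target and proposal, not the planning distributions used later), the sketch below accumulates the log of the unnormalized weights to avoid numerical underflow:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 1000, 20   # number of particles, sequence length

# Toy target p and proposal q: independent Gaussians at every step.
log_p = lambda x: -0.5 * x ** 2 - 0.5 * np.log(2 * np.pi)                      # N(0, 1)
log_q = lambda x: -0.5 * (x / 2) ** 2 - np.log(2) - 0.5 * np.log(2 * np.pi)    # N(0, 4)

log_w = np.zeros(N)   # log of the unnormalized weights
for t in range(T):
    x_t = 2.0 * rng.standard_normal(N)     # sample x_t ~ q
    log_w += log_p(x_t) - log_q(x_t)       # w_t = w_{t-1} * p / q

w = np.exp(log_w - log_w.max())            # numerically stable normalization
w /= w.sum()
print("effective sample size fraction:", 1.0 / (N * (w ** 2).sum()))
```

Running this with a longer horizon T makes the weight impoverishment discussed next directly visible in the shrinking effective sample size.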
This enables us to sample sequentially x t ∼ q(x t |x 1:t−1) to finally obtain the set of particles {x Sequential Importance Resampling (SIR): When the horizon T is long, samples from q usually have a low likelihood under p, and thus the quality of our approximation decreases exponentially with T. More concretely, the unnormalized weights w (n) t converge to 0 with t → ∞. This usually causes the normalized weight distribution to degenerate, with one weight having a mass of 1 and the others a mass of 0. This phenomenon is known as weight impoverishment. One way to address weight impoverishment is to add a resampling step where each particle is stochastically resampled to higher likelihood regions at each time step. This can typically reduce the variance of the estimation from growing exponentially with t to growing linearly. In the context of control as inference, it is natural to see planning as the act of approximating a distribution of optimal future trajectories via simulation. In order to plan, an agent must possess a model of the world that can accurately capture consequences of its actions. In cases where multiple trajectories have the potential of being optimal, the agent must rationally partition its computational resources to explore each possibility. Given finite time, the agent must limit its planning to a finite horizon h. We, therefore, define planning as the act of approximating the optimal distribution over trajectories of length h. In the control-as-inference framework, this distribution is naturally expressed as p(a 1, s 2, . . . s h, a h |O 1:T, s 1), where s 1 represents our current state. As we consider the current state s 1 given, it is equivalent and convenient to focus on the planning distribution with horizon h: p(x 1:h |O 1:T). Bayesian smoothing is an approach to the problem of estimating the distribution of a latent variable conditioned on all past and future observations. One method to perform smoothing is to decompose the posterior with the two-filter formula BID4 BID26: DISPLAYFORM0 This corresponds to a forward-backward messages factorization in a Hidden Markov Model as depicted in figure 3.1. We broadly underline in orange forward variables and in blue backward variables in the rest of this section. DISPLAYFORM1 Figure 3.1: Factorization of the HMM into forward (orange) and backward (blue) messages. Estimating the forward message is filtering, estimating the value of the latent knowing all the observations is smoothing. Filtering is the task of estimating p(x 1:t |O 1:t): the probability of a latent variable conditioned on all past observations. In contrast, smoothing estimates p(x 1:t |O 1:T): the density of a latent variable conditioned on all the past and future measurements. In the belief propagation algorithm for HMMs, these probabilities correspond to the forward message α h (x h) = p(x 1:h |O 1:h) and backward message β h (x h) = p(O h+1:T |x h), both of which are computed recursively. While in discrete spaces these forward and backward messages can be estimated using the sumproduct algorithm, its complexity scales with the square of the space dimension making it unsuitable for continuous tasks. We will now devise efficient strategies for estimating reliably the full posterior using the SMC methods covered in section 2.2. The backward message p(O h+1:T |x h) can be understood as the answer to: "What is the probability of following an optimal trajectory from the next time step on until the end of the episode, given my current state?". 
Importantly, this term is closely related to the notion of value function in RL. Indeed, in the control-as-inference framework, the state-and action-value functions are defined as DISPLAYFORM0 T |s h, a h ) respectively. They are solutions of a soft-Bellman equation that differs a little from the traditional Bellman equation (; ; BID35 BID0 . A more in depth explanation can be found in BID30 . We can show subsequently that: DISPLAYFORM1 Full details can be found in Appendix A.3. Estimating the backward message is then equivalent to learning a value function. This value function as defined here is the same one used in Maximum Entropy RL . Using the of the previous subsections we can now derive the full update of the sequential importance sampling weights. To be consistent with the terminology of section 2.2, we call p(x 1:h |O 1:T) the target distribution and q θ (x 1:h) the proposal distribution. The sequential weight update formula is in our case: DISPLAYFORM0 3) is akin to a maximum entropy advantage function. The change in weight can be interpreted as sequentially correcting our expectation of the return of a trajectory. The full derivation is available in Appendix A.4. Our algorithm is similar to the Auxilliary Particle Filter which uses a one look ahead simulation step to update the weights. Note that we have assumed that our model of the environment was perfect to obtain this slightly simplified form. This assumption is made by most planning algorithms (LQR, CEM . . .): it entails that our plan is only as good as our model is. A typical way to mitigate this issue and be more robust to model errors is to re-plan at each time step; this technique is called Model Predictive Control (MPC) and is commonplace in control theory. We can now use the computations of previous subsections to derive the full algorithm. We consider the root state of the planning to be the current state s t. We aim at building a set of particles {x DISPLAYFORM0 and their weights {w DISPLAYFORM1 representative of the planning density p(x t:t+h |O 1:T) over optimal trajectories. We use SAC BID17 for the policy and value function, but any other Maximum Entropy policy can be used for the proposal distribution. Note that we used the value function estimated by SAC as a proxy the optimal one as it is usually done by actor critic methods. Algorithm 1 SMC Planning using SIR 1: for t in {1, . . ., T} do 2: DISPLAYFORM2 3: DISPLAYFORM3 for i in {t, . . ., t + h} do // Update 6: DISPLAYFORM0 7: DISPLAYFORM1 8: DISPLAYFORM2 // Resampling 10: DISPLAYFORM3 12: end for Sample n ∼ Uniform(1, N).14: We summarize the proposed algorithm in Algorithm 1. At each step, we sample from the proposal distribution or model-free agent (line 6) and use our learned model to sample the next state and reward (line 7). We then update the weights (line 8). In practice we only use one sample to estimate the expectations, thus we may incur a small bias. The resampling step is then performed (line 10-11) by resampling the trajectories according to their weight. After the planning horizon is reached, we sample one of our trajectories (line 13) and execute its first action into the environment (line 15-16). The observations (s t, a t, r t, s t+1) are then collected and added to a buffer (line 17) used to train the model as well as the policy and value function of the model-free agent. An alternative algorithm that does not use the resampling step (SIS) is highlighted in Algorithm 2 in Appendix A.6. 
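The planning loop of Algorithm 1 can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: `policy(s)` returns a proposed action together with its log-probability, `model(s, a)` returns a predicted reward and next state, `value(s)` is the learned soft value function standing in for the backward message, and each expectation is estimated with a single sample, as in the text.

```python
import numpy as np

def smc_plan(s0, policy, model, value, num_particles=64, horizon=10, rng=None):
    """Return the first action of a trajectory sampled from the SIR
    approximation of the planning distribution rooted at state s0."""
    rng = rng or np.random.default_rng()
    states = [np.copy(s0) for _ in range(num_particles)]
    first_actions = [None] * num_particles
    log_w = np.zeros(num_particles)

    for i in range(horizon):
        for n in range(num_particles):
            a, log_pi = policy(states[n])          # proposal (model-free policy)
            r, s_next = model(states[n], a)        # learned model of the environment
            # Weight update: a maximum-entropy advantage,
            #   r + V(s') - V(s) - log pi(a | s).
            log_w[n] += r + value(s_next) - value(states[n]) - log_pi
            if i == 0:
                first_actions[n] = a
            states[n] = s_next
        # Resampling step (SIR): focus computation on promising trajectories.
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        idx = rng.choice(num_particles, size=num_particles, p=w)
        states = [states[j] for j in idx]
        first_actions = [first_actions[j] for j in idx]
        log_w[:] = 0.0                             # weights are uniform after resampling

    n = rng.integers(num_particles)                # sample one surviving trajectory
    return first_actions[n]
```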
DISPLAYFORM0 A schematic view of the algorithm can also be found on figure 3.2. We now discuss shortcomings our approach to planning as inference may suffer from, namely encouraging risk seeking policies. DISPLAYFORM0 we have the root white node st−1, the actions a (n) t−1 are black nodes and the leaf nodes are the s (n) t. We have one particle on the leftmost branch, two on the central branch and one on the rightmost branch.• In each tree, the white nodes represent states and black nodes represent actions. Each bullet point near a state represents a particle, meaning that this particle contains the total trajectory of the branch. The root of the tree represents the root planning state, we expand the tree downward when planning. Bias in the objective: Trajectories having a high likelihood under the posterior defined in Equation 2.1 are not necessarily trajectories yielding a high mean return. Indeed, as log E p exp R(x) ≥ E p R(x) we can see that the objective function we maximize is an upper bound on the quantity of interest: the mean return. This can lead to seeking risky trajectories as one very good outcome in log E exp could dominate all the other potentially very low outcomes, even if they might happen more frequently. This fact is alleviated when the dynamics of the environment are close to deterministic BID30. Thus, this bias does not appear to be very detrimental to us in our experiments 4 as our environments are fairly close to deterministic. The bias in the objective also appears in many control as inference works such as Particle Value Functions (a) and the probabilistic version of LQR proposed in BID43.Bias in the model: A distinct but closely related problem arises when one trains jointly the policy π θ and the model p model, i.e if q(x 1:T) is directly trained to approximate p(x 1:T |O 1:T). In that case, p model (s t+1 |s t, a t) will not approximate p env (s t+1 |s t, a t) but p env (s t+1 |s t, a t, O t:T) BID30. This means the model we learn has an optimism bias and learns transitions that are overly optimistic and do no match the environment's behavior. This issue is simply solved by training the model separately from the policy, on transition data contained in a buffer as seen on line 18 of Algorithm 1. In this section, we show how SMCP can deal with multimodal policies when planning. We believe multimodality is useful for exploring since it allows us to keep a distribution over many promising trajectories and also allows us to adapt to changes in the environment e.g. if a path is suddenly blocked. We applied two version of SMCP: i) with a resampling step (SIR) ii) without a resampling step (SIS) and compare it to CEM on a simple 2D point mass environment 4.1. Here, the agent can control the displacement on (x, y) within the square 2, a = (∆x, ∆y) with maximum magnitude ||a|| = 0.05. The starting position (•) of the agent is (x = 0, y = 0.5), while the goal is at g = (x = 1, y = 0.5). The reward is the agent's relative closeness increment to the goal: (a) Sequential Importance Resampling (SIR): when resampling the trajectories at each time step, the agent is able to focus on the promising trajectories and does not collapse on a single mode.(b) Sequential Importance Sampling (SIS): if we do not perform the resampling step the agent spends most of its computation on uninteresting trajectories and was not able to explore as well.(c) CEM: here the agent samples all the actions at once from a Gaussian with learned mean and covariance. 
We needed to update the parameters 50 times for the agent to find one solution, but it forgot the other one. The proposal distribution is taken to be an isotropic gaussian. Here we plot the planning distribution imagined at t = 0 for three different agents. A darker shade of blue indicates a higher likelihood of the trajectory. Only the agent using Sequential Importance Resampling was able to find good trajectories while not collapsing on a single mode. DISPLAYFORM0 ||st−g|| 2. However, there is a partial wall at the centre of the square leading to two optimal trajectories, one choosing the path below the wall and one choosing the path above. The proposal is an isotropic normal distribution for each planning algorithm, and since the environment's dynamics are known, there is no need for learning: the only difference between the three methods is how they handle planning. We also set the value function to 0 for SIR and SIS as we do not wish to perform any learning. We used 1500 particles for each method, and updated the parameters of CEM until convergence. Our experiment 4.1 shows how having particles can deal with multimodality and how the resampling step can help to focus on the most promising trajectories. The experiments were conducted on the Open AI Gym Mujoco benchmark suite BID5. To understand how planning can increase the learning speed of RL agents we focus on the 250000 first time steps. The Mujoco environments provide a complex benchmark with continuous states and actions that requires exploration in order to achieve state-of-the-art performances. The environment model used for our planning algorithm is the same as the probabilistic neural network used by BID8, it minimizes a gaussian negative log-likelihood model: DISPLAYFORM0 where Σ θ is diagonal and the transitions (s n, a n, s n+1) are obtained from the environment. We added more details about the architecture and the hyperparameters in the appendix A.5.We included two popular planning algorithms on Mujoco as baselines: CEM BID8 and Random Shooting (RS) . Furthermore, we included SAC BID17, a model free RL algorithm, since i) it has currently one of the highest performances on Mujoco tasks, which make it a very strong baseline, and ii) it is a component of our algorithm, as we use it as a proposal distribution in the planning phase. Our suggest that SMCP does not learn as fast as CEM and RS initially as it heavily relies on estimating a good value function. However, SMCP quickly achieves higher performances than CEM and RS. SMCP also learns faster than SAC because it was able to leverage information from the model early in training. Note that our differ slightly from the usually found in the model-based RL literature. This is because we are tackling a more difficult problem: estimating the transitions and the reward function. We are using unmodified versions of the environments which introduces many hurdles. For instance, the reward function is challenging to learn from the state and very noisy. Usually, the environments are modified such that their reward can be computed directly from the state e.g. BID8 3.As in BID18, we assess the significance of our by running each algorithm with multiple seeds (20 random seeds in our case, from seed 0 to seed 19) and we perform a statistical significance test following BID9. We test the hypothesis that our mean return on the last 100k steps is higher than the one obtained by SAC. Our are significant to the 5% for HalfCheetah and Walker2d. See Appendix A.7 for additional details. 
We also report some additional experimental such as effective sample size and model loss in Appendix A.8. Planning as inference: Seeing planning as an inference problem has been explored in cognitive neuroscience by BID3 and BID37. While shedding light on how Bayesian inference could be used in animal and human reasoning, it does not lead to a practical algorithm usable in complex environments. In the reinforcement learning literature, we are only aware of and BID44 that initially framed planning as an inference problem. However, both works make simplifying assumptions on the dynamics and do not attempt to capture the full posterior distribution. In the control theory literature, particle filters are usually used for inferring the true state of the system which is then used for control BID1. BID22 also combined SMC and MPC methods. While their algorithm is similar to ours, the distribution they approximate is not the Bayesian posterior, but a distribution which converges to a Dirac on the best trajectory. More recently, BID28 achieved promising on a rope manipulation task using generative adversarial network BID12 to generate future trajectories. Model based RL: Recent work has been done in order to improve environment modeling and account for different type of uncertainties. BID8 compared the performance of models that account for both aleatoric and epistemic uncertainties by using an ensemble of probabilistic models. BID16 combined the variational autoencoder BID25 ) and a LSTM BID19 to model the world. BID6 used a model to improve the target for temporal difference (TD) learning. Note that this line of work is complementary to ours as SMCP could make use of such models. Other works have been conducted in order to directly learn how to use a model BID15 BID46 BID7.Particle methods and variational inference: BID14 learn a good proposal distribution for SMC methods by minimizing the KL divergence with the optimal proposal. It is conceptually similar to the way we use SAC BID17 but it instead minimizes the reverse KL to the optimal proposal. Further works have combined SMC methods and variational inference (; b; BID29 to obtain lower variance estimates of the distribution of interest. In this work, we have introduced a connection between planning and inference and showed how we can exploit advances in deep learning and probabilistic inference to design a new efficient and theoretically grounded planning algorithm. We additionally proposed a natural way to combine model-free and model-based reinforcement learning for planning based on the SMC perspective. We empirically demonstrated that our method achieves state of the art on Mujoco. Our suggest that planning can lead to faster learning in control tasks. However, our particle-based inference method suffers some several shortcomings. First, we need many particles to build a good approximation of the posterior, and this can be computationally expensive since it requires to perform a forward pass of the policy, the value function and the model for every particle. Second, resampling can also have adverse effects, for instance all the particles could be resampled on the most likely particle, leading to a particle degeneracy. More advanced SMC methods dealing with this issue such as backward simulation BID32 or Particle Gibbs with Ancestor Sampling (PGAS) have been proposed and using them would certainly improve our . Another issue we did not tackle in our work is the use of models of the environment learned from data. 
Imperfect model are known to in compounding errors for prediction over long sequences. We chose to re-plan at each time step (Model Predictive Control) as it is often done in control to be more robust to model errors. More powerful models or uncertainty modeling techniques can also be used to improve the accuracy of our planning algorithm. While the inference and modeling techniques used here could be improved in multiple ways, SMCP achieved impressive learning speed on complex control tasks. The planning as inference framework proposed in this work is general and could serve as a stepping stone for further work combining probabilistic inference and deep reinforcement learning. A.1 ABBREVIATION AND NOTATION p(x) Density of interest. Approximation of the density of interest.t ∈ {1, . . . T} time steps.n ∈ {1, . . . N} particle number.h horizon length. The true joint distribution 2.1 in section 2.1 should actually be written: DISPLAYFORM0 In Mujoco environments, the reward is typically written as DISPLAYFORM1 where f is a function of the state (velocity for HalfCheetah on Mujoco for example). The part α||a t || 2 2can be seen as the contribution from the action prior (here a gaussian prior). One can also consider the prior to be constant (and potentially improper) so that is does not change the posterior p(x 1:T |O 1:T). DISPLAYFORM2 By definition of the optimal value function in BID30. DISPLAYFORM3 We use there the forward-backward equation 3.1 for the numerator and the denominator DISPLAYFORM4 A.5 EXPERIMENT DETAILS Random samples: 1000 transitions are initially collected by a random policy to pretrain the model and the proposal distribution. After which the agents start following their respective policy. Data preprocessing: We normalize the observations to have zero mean and standard deviation 1. The model is used to predict the planning distribution for the horizon h of N particles. We then sample a trajectory according to its weight and return the first action of this trajectory. In our experiments, we fix the maximum number of particles for every method to 2500. For SMCP, the temperature and horizon length are described in TAB2.3. We used a custom implementation with a Gaussian policy for both the SAC baseline and the proposal distribution used for both versions of SMCP. We used Adam with a learning rate of 0.001. The reward scaling suggested by BID17 for all experiments and used an implementation inspired by. We used a two hidden layers with 256 hidden units for the three networks: the value function, the policy and the soft Q functions. Model: We train the model p model to minimize the negative log likelihood of p(s t+1 |s t + ∆ t (s t, a t), σ t (s t, a t)). The exact architectures are detailed in TAB2.3. We train the model to predict the distribution of the change in states and learn a deterministic reward function from the current state and predict the change in state. Additionally, we manually add a penalty on the action magnitude in the reward function to simplify the learning. At the end of each episode we train the model for 10 epochs. Since the training is fairly short, we stored every transitions into the buffer. The model is defined as: DISPLAYFORM0 DISPLAYFORM1 7: DISPLAYFORM2 8: DISPLAYFORM3 9:end for 10:Sample n ∼ Categorical(w The significance of our is done following guidelines from BID9 . We test the hypothesis that the mean return of our method is superior to the one of SAC. We use 20 random seeds (from 0 to 19pro) for each method on each environment. 
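Returning to the dynamics model above, the following is a minimal sketch of the diagonal-Gaussian negative log-likelihood it is trained with; the network producing the mean offset Δ t and the log-variance is left abstract, and the additive constant is dropped, so this is an illustration rather than the exact training code.

```python
import torch

def dynamics_nll(delta, log_var, s, s_next):
    """Negative log-likelihood of s_next under N(s + delta, diag(exp(log_var))).
    delta and log_var are the model outputs for the input pair (s, a)."""
    mean = s + delta
    var = log_var.exp()
    nll = 0.5 * (((s_next - mean) ** 2) / var + log_var).sum(dim=-1)
    return nll.mean()   # constant 0.5 * d * log(2 * pi) omitted
```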
For the significance test, we look at the average return from steps 150k to 250k for SIR-SAC and SAC and conduct a Welch's t-test with unknown variance. We report the p-value for each environment tested on Mujoco; a p-value below 0.05 indicates strong evidence that our method outperforms SAC.
• HalfCheetah-v2: p = 0.003. There is very compelling evidence suggesting we outperform SAC.
• Hopper-v2: p = 0.09. There is no significant evidence suggesting we outperform SAC.
• Walker2d-v2: p = 0.03. There is compelling evidence suggesting we outperform SAC.
A.8.1 EFFECTIVE SAMPLE SIZE. More precisely, the reported values are ESS_i = (Σ_{n=1}^{N} w_i^{(n)})^2 / (N Σ_{n=1}^{N} (w_i^{(n)})^2), where i is the depth of the planning, N is the number of particles, and w_i^{(n)} is the importance weight of particle n at depth i. We can see that as the proposal distribution improves, the ESS also increases. The ESS on HalfCheetah is representative of the one obtained on the other environments. While these values are not high, we remain around 15%, and thus do not suffer heavily from weight degeneracy. We also report the negative log-likelihood loss of the environment's model during training in Figure A.2.
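As a small worked example of this effective sample size (a generic implementation, not the authors' code):

```python
import numpy as np

def effective_sample_size_fraction(w):
    """ESS of a set of nonnegative (unnormalized) importance weights,
    reported as a fraction of the number of particles."""
    w = np.asarray(w, dtype=float)
    return (w.sum() ** 2) / (len(w) * (w ** 2).sum())

print(effective_sample_size_fraction([1, 1, 1, 1]))   # 1.0  (perfectly balanced weights)
print(effective_sample_size_fraction([1, 0, 0, 0]))   # 0.25 (one particle dominates)
```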
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
ByetGn0cYX
Leveraging control as inference and Sequential Monte Carlo methods, we proposed a probabilistic planning algorithm.
Training Generative Adversarial Networks (GANs) is notoriously challenging. We propose and study an architectural modification, self-modulation, which improves GAN performance across different data sets, architectures, losses, regularizers, and hyperparameter settings. Intuitively, self-modulation allows the intermediate feature maps of a generator to change as a function of the input noise vector. While reminiscent of other conditioning techniques, it requires no labeled data. In a large-scale empirical study we observe a relative decrease of 5%-35% in FID. Furthermore, all else being equal, adding this modification to the generator leads to improved performance in 124/144 (86%) of the studied settings. Self-modulation is a simple architectural change that requires no additional parameter tuning, which suggests that it can be applied readily to any GAN. Generative Adversarial Networks (GANs) are a powerful class of generative models successfully applied to a variety of tasks such as image generation BID20; ), learned compression BID15, super-resolution , inpainting , and domain transfer BID13 BID23.Training GANs is a notoriously challenging task BID6 BID15 as one is searching in a high-dimensional parameter space for a Nash equilibrium of a non-convex game. As a practical remedy one applies (usually a variant of) stochastic gradient descent, which can be unstable and lack guarantees. As a , one of the main research challenges is to stabilize GAN training. Several approaches have been proposed, including varying the underlying divergence between the model and data distributions ), regularization and normalization schemes BID7 ), optimization schedules , and specific neural architectures (; BID21 . A particularly successful approach is based on conditional generation; where the generator (and possibly discriminator) are given side information, for example class labels;;. In fact, state-of-the-art conditional GANs inject side information via conditional batch normalization (CBN) layers BID3; BID21. While this approach does help, a major drawback is that it requires external information, such as labels or embeddings, which is not always available. In this work we show that GANs benefit from self-modulation layers in the generator. Our approach is motivated by Feature-wise Linear Modulation in supervised learning (; BID3, with one key difference: instead of conditioning on external information, we condition on the generator's own input. As self-modulation requires a simple change which is easily applicable to all popular generator architectures, we believe that is a useful addition to the GAN toolbox. We provide a simple yet effective technique that can added universally to yield better GANs. We demonstrate empirically that for a wide variety of settings (loss functions, regularizers and normalizers, neural architectures, and optimization settings) that the proposed approach yields between a 5% and 35% improvement in sample quality. When using fixed hyperparameters settings our approach outperforms the baseline in 86%(124/144) of cases. Further, we show that self-modulation still helps even if label information is available. Finally, we discuss the effects of this method in light of recently proposed diagnostic tools, generator conditioning and precision/recall for generative models . Several recent works observe that conditioning the generative process on side information (such as labels or class embeddings) leads to improved models (; ;). 
Two major approaches to conditioning on side information s have emerged: Directly concatenate the side information s with the noise vector z , i.e. z = [s, z]. Condition the hidden layers directly on s, which is usually instantiated via conditional batch normalization BID3 ).Despite the success of conditional approaches, two concerns arise. The first is practical; side information is often unavailable. The second is conceptual; unsupervised models, such as GANs, seek to model data without labels. Including them side-steps the challenge and value of unsupervised learning. We propose self-modulating layers for the generator network. In these layers the hidden activations are modulated as a function of latent vector z. In particular, we apply modulation in a feature-wise fashion which allows the model to re-weight the feature maps as a function of the input. This is also motivated by the FiLM layer for supervised models (; BID3 in which a similar mechanism is used to condition a supervised network on side information. Batch normalization BID12 can improve the training of deep neural nets, and it is widely used in both discriminative and generative modeling ;). It is thus present in most modern networks, and provides a convenient entry point for self-modulation. Therefore, we present our method in the context of its application via batch normalization. In batch normalization the activations of a layer, h, are transformed as DISPLAYFORM0 where µ and σ 2 are the estimated mean and variances of the features across the data, and γ and β are learnable scale and shift parameters. Self-modulation for unconditional (without side information) generation. In this case the proposed method replaces the non-adaptive parameters β and γ with input-dependent β(z) and γ(z), respectively. These are parametrized by a neural network applied to the generator's input FIG0 ). In particular, for layer, we compute In general, it suffices that γ (·) and β (·) are differentiable. In this work, we use a small onehidden layer feed-forward network (MLP) with ReLU activation applied to the generator input z. Specifically, given parameter matrices U and V , and a bias vector b , we compute DISPLAYFORM1 DISPLAYFORM2 We do the same for β(z) with independent parameters. Self-modulation for conditional (with side information) generation. Having access to side information proved to be useful for conditional generation. The use of labels in the generator (and possibly discriminator) was introduced by and later adapted by;. In case that side information is available (e.g. class labels y), it can be readily incorporated into the proposed method. This can be achieved by simply composing the information y with the input z ∈ R d via some learnable function g, i.e. z = g(y, z). In this work we opt for the simplest option and instantiate g as a bi-linear interaction between z and two trainable embedding functions E, E: Y → R d of the class label y, as DISPLAYFORM3 This conditionally composed z can be directly used in Equation 1. Despite its simplicity, we demonstrate that it outperforms the standard conditional models. Discussion. TAB0 summarizes recent techniques for generator conditioning. While we choose to implement this approach via batch normalization, it can also operate independently by removing the normalization part in the Equation 1. 
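A minimal sketch of a self-modulated batch-normalization layer following Equations 1 and 2: γ(z) and β(z) are produced by small one-hidden-layer MLPs applied to the generator input z. The hidden width and initialization here are illustrative assumptions rather than the paper's exact settings.

```python
import torch
import torch.nn as nn

class SelfModulatedBatchNorm2d(nn.Module):
    """Batch normalization whose scale gamma(z) and shift beta(z) are predicted
    from the generator input z by small one-hidden-layer MLPs (Equations 1-2)."""
    def __init__(self, num_features, z_dim, hidden=32):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_features, affine=False)   # (h - mu) / sigma only
        self.gamma = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, num_features))
        self.beta = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, num_features))

    def forward(self, h, z):
        h = self.bn(h)
        g = self.gamma(z).unsqueeze(-1).unsqueeze(-1)   # per-sample scale, broadcast over H, W
        b = self.beta(z).unsqueeze(-1).unsqueeze(-1)    # per-sample shift
        return g * h + b

layer = SelfModulatedBatchNorm2d(num_features=64, z_dim=128)
h = torch.randn(8, 64, 16, 16)   # intermediate feature maps
z = torch.randn(8, 128)          # generator input noise
out = layer(h, z)                # same shape as h, modulated per sample
```

In a generator, such a layer would simply replace each standard batch-normalization layer, with z passed alongside the intermediate feature maps.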
We made this pragmatic choice due to the fact that such conditioning is common (; ;).The second question is whether one benefits from more complex modulation architectures, such as using an attention network BID18 whereby β and γ could be made dependent on all upstream activations, or constraining the elements in γ to which would yield a similar gating mechanism to an LSTM cell BID10. Based on initial experiments we concluded that this additional complexity does not yield a substantial increase in performance. We perform a large-scale study of self-modulation to demonstrate that this method yields robust improvements in a variety of settings. We consider loss functions, architectures, discriminator regularization/normalization strategies, and a variety of hyperparameter settings collected from recent studies (; BID7 ; BID15). We study both unconditional (without labels) and conditional (with labels) generation. Finally, we analyze the through the lens of the condition number of the generator's Jacobian as suggested by , and precision and recall as defined in. Loss functions. We consider two loss functions. The first one is the non-saturating loss proposed in BID6: DISPLAYFORM0 The second one is the hinge loss used in: DISPLAYFORM1 Controlling the Lipschitz constant of the discriminator. The discriminator's Lipschitz constant is a central quantity analyzed in the GAN literature (; BID22 . We consider two state-of-the-art techniques: gradient penalty BID7, and spectral normalization . Without normalization and regularization the models can perform poorly on some datasets. For the gradient penalty regularizer we consider regularization strength λ ∈ {1, 10}.Network architecture. We use two popular architecture types: one based on DCGAN , and another from which incorporates residual connections BID8. The details can be found in the appendix. Optimization hyper-parameters. We train all models for 100k generator steps with the Adam optimizer (We also perform a subset of the studies with 500K steps and discuss it in. We test two popular settings of the Adam hyperparameters (β 1, β 2): (0.5, 0.999) and (0, 0.9). Previous studies find that multiple discriminator steps per generator step can help the training BID6 ), thus we also consider both 1 and 2 discriminator steps per generator step 2. In total, this amounts to three different sets of hyperparameters for (β 1, β 2, disc iter): (0, 0.9, 1), (0, 0.9, 2), (0.5, 0.999, 1). We fix the learning rate to 0.0002 as in. All models are trained with batch size of 64 on a single nVidia P100 GPU. We report the best performing model attained during the training period; although the follow the same pattern if the final model is report. Datasets. We consider four datasets: CIFAR10, CELEBA-HQ, LSUN-BEDROOM, and IMAGENET. The LSUN-BEDROOM dataset BID19 contains around 3M images. We partition the images randomly into a test set containing 30588 images and a train set containing the rest. CELEBA-HQ contains 30k images . We use the 128 × 128 × 3 version obtained by running the code provided by the authors 3. We use 3000 examples as the test set and the remaining examples as the training set. CIFAR10 contains 70K images (32 × 32 × 3), partitioned into 60000 training instances and 10000 testing instances. Finally, we evaluate our method on IMAGENET, which contains 1.3M training images and 50K test images. We re-size the images to 128 × 128 × 3 as done in and BID21.Metrics. Quantitative evaluation of generative models remains one of the most challenging tasks. 
This is particularly true in the context of implicit generative models where likelihood cannot be effectively evaluated. Nevertheless, two quantitative measures have recently emerged: The Inception Score and the Frechet Inception Distance. While both of these scores have some drawbacks, they correlate well with scores assigned by human annotators and are somewhat robust. Inception Score (IS) posits that that the conditional label distribution p(y|x) of samples containing meaningful objects should have low entropy, while the marginal label distribution p(y) should have high entropy. Formally, DISPLAYFORM2 The score is computed using an Inception classifier. Drawbacks of applying IS to model comparison are discussed in BID2.An alternative score, the Frechet Inception Distance (FID), requires no labeled data BID9. The real and generated samples are first embedded into a feature space (using a specific layer of InceptionNet). Then, a multivariate Gaussian is fit each dataset and the distance is computed as DISPLAYFORM3 ), where µ and Σ denote the empirical mean and covariance and subscripts x and g denote the true and generated data, respectively. FID was shown to be robust to various manipulations and sensitive to mode dropping BID9. 2 We also experimented with 5 steps which didn't outperform the 2 step setting. 3 Available at https://github.com/tkarras/progressive_growing_of_gans. In the unpaired setting (as defined in Section 3.2), we compute the median score (across random seeds) and report the best attainable score across considered optimization hyperparameters. SELF-MOD is the method introduced in Section 2 and BASELINE refers to batch normalization. We observe that the proposed approach outperforms the baseline in 30 out of 32 settings. The relative improvement is detailed in TAB3. The standard error of the median is within 3% in the majority of the settings and is presented in TAB7 To test robustness, we run a Cartesian product of the parameters in Section 3.1 which in 36 settings for each dataset (2 losses, 2 architectures, 3 hyperparameter settings for spectral normalization, and 6 for gradient penalty). For each setting we run five random seeds for self-modulation and the baseline (no self-modulation, just batch normalization). We compute the median score across random seeds which in 1440 trained models. We distinguish between two sets of experiments. In the unpaired setting we define the model as the tuple of loss, regularizer/normalization, neural architecture, and conditioning (self-modulated or classic batch normalization). For each model compute the minimum FID across optimization hyperparameters (β 1, β 2, disc iters). We therefore compare the performance of self-modulation and baseline for each model after hyperparameter optimization. The of this study are reported in TAB1, and the relative improvements are in TAB3 and FIG1.We observe the following: When using the RESNET style architecture, the proposed method outperforms the baseline in all considered settings. When using the SNDCGAN architecture, it outperforms the baseline in 87.5% of the cases. The breakdown by datasets is shown in FIG1. FORMULA3 The improvement can be as high as a 33% reduction in FID. We observe similar improvement to the inception score, reported in the appendix. In the second setting, the paired setting, we assess how effective is the technique when simply added to an existing model with the same set of hyperparameters. 
In particular, we fix everything except the type of conditioning -the model tuple now includes the optimization hyperparameters. This in 36 settings for each data set for a total of 144 comparisons. We observe that selfmodulation outperforms the baseline in 124/144 settings. These suggest that self-modulation can be applied to most GANs even without additional hyperparameter tuning. Conditional Generation. We demonstrate that self-modulation also works for label-conditional generation. Here, one is given access the class label which may be used by the generator and the. We observe that the majority "good" models utilize self-modulation. Figure (c) shows that applying self-conditioning is more beneficial on the later layers, but should be applied to each layer for optimal performance. This effect persists across all considered datasets, see the appendix.discriminator. We compare two settings: Generator conditioning is applied via label-conditional Batch Norm BID3 ) with no use of labels in the discriminator (G-COND). Generator conditioning applied as above, but with projection based conditioning in the discriminator (intuitively it encourages the discriminator to use label discriminative features to distinguish true/fake samples), as in (P-CGAN). The former can be considered as a special case of the latter where discriminator conditioning is disabled. For P-CGAN, we use the architectures and hyper-parameter settings of. See the appendix, Section B.3 for details. In both cases, we compare standard label-conditional batch normalization to self-modulation with additional labels, as discussed in Section 2, Equation 3.The are shown in TAB4. Again, we observe that the simple incorporation of self-modulation leads to a significant improvement in performance in the considered settings. Training for longer on IMAGENET. To demonstrate that self-modulation continues to yield improvement after training for longer, we train IMAGENET for 500k generator steps. Due to the increased computational demand we use a single setting for the unconditional and conditional settings models following and , but using only two discriminator steps per generator. We expect that the would continue to improve if training longer. However, currently from 500k steps require training for ∼10 days on a P100 GPU.We compute the median FID across 3 random seeds. After 500k steps the baseline unconditional model attains FID 60.4, self-modulation attains 53.7 (11% improvement). In the conditional setting Where to apply self-modulation? Given the robust improvements of the proposed method, an immediate question is where to apply the modulation. We tested two settings: applying modulation to every batch normalization layer, and applying it to a single layer. The of this ablation are in FIG1. These suggest that the benefit of self-modulation is greatest in the last layer, as may be intuitive, but applying it to each layer is most effective. Self-modulation is a simple yet effective complementary addition to this line of work which makes a significant difference when no side information is available. In addition, when side information is available it can be readily applied as discussed in Section 2 and leads to further improvements. Conditional Modulation. Conditional modulation, using side information to modulate the computation flow in neural networks, is a rich idea which has been applied in various contexts (beyond GANs). Multiplicative and Additive Modulation. 
Existing conditional modulations mentioned above are usually instantiated via Batch Normalization, which include both multiplicative and additive modulation. These two types of modulation also link to other techniques widely used in neural network literature. The multiplicative modulation is closely related to Gating, which is adopted in LSTM BID10, gated PixelCNN (van den), Convolutional Sequence-to-sequence networks BID5 and Squeeze-and-excitation Networks BID11. The additive modulation is closely related to Residual Networks BID8. The proposed method adopts both types of modulation. We present a generator modification that improves the performance of most GANs. This technique is simple to implement and can be applied to all popular GANs, therefore we believe that selfmodulation is a useful addition to the GAN toolbox. Our suggest that self-modulation clearly yields performance gains, however, they do not say how this technique in better models. Interpretation of deep networks is a complex topic, especially for GANs, where the training process is less well understood. Rather than purely speculate, we compute two diagnostic statistics that were proposed recently ignite the discussion of the method's effects. First, we compute the condition number of the generators Jacobian. provide evidence that better generators have a Jacobian with lower condition number and hence regularize using this quantity. We estimate the generator condition number in the same was as. We compute the Jacobian (J z) i,j = δG(z)i δzj at each z in a minibatch, then average the logarithm of the condition numbers computed from each Jacobian. Second, we compute a notion of precision and recall for generative models. define the quantities, F 8 and F 1/8, for generators. These quantities relate intuitively to the traditional precision and recall metrics for classification. Generating points which have low probability under the true data distribution is interpreted as a loss in precision, and is penalized by the F 8 score. Failing to generate points that have high probability under the true data distributions is interpreted as a loss in recall, and is penalized by the F 1/8 score. FIG4 shows both statistics. The left hand plot shows the condition number plotted against FID score for each model. We observe that poor models tend to have large condition numbers; the correlation, although noisy, is always positive. This corroborates the observations in . However, we notice an inverse trend in the vicinity of the best models. The cluster of the best models with self-modulation has lower FID, but higher condition number, than the best models without self-modulation. Overall the correlation between FID and condition number is smaller for self-modulated models. This is surprising, it appears that rather than unilaterally reducing the condition number, self-modulation provides some training stability, yielding models with a small range of generator condition numbers. The right-hand plot in FIG4 shows the F 8 and F 1/8 scores. Models in the upper-left quadrant cover true data modes better (higher precision), and models in the lower-right quadrant produce more modes (higher recall). Self-modulated models tend to favor higher recall. This effect is most pronounced on IMAGENET. Overall these diagnostics indicate that self-modulation stabilizes the generator towards favorable conditioning values. It also appears to improve mode coverage. 
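The condition-number diagnostic used above can be reproduced with a short autograd-based sketch: for each z in a minibatch we form the Jacobian of the generator output with respect to z, take its singular values, and average the logarithm of the per-sample condition numbers. This is an illustrative implementation under our own simplifications (flattening the output and computing a full SVD, which is expensive for large images), not the exact evaluation code used for the figures.

```python
import torch

def mean_log_condition_number(generator, z_batch):
    """Average log condition number of dG(z)/dz over a batch of latent vectors.

    generator : callable mapping a latent vector of shape (1, z_dim) to an image tensor.
    z_batch   : tensor of shape (batch, z_dim).
    """
    log_conds = []
    for z in z_batch:
        z = z.unsqueeze(0).requires_grad_(True)
        # Flatten the generator output so the Jacobian is a 2-D matrix.
        func = lambda latent: generator(latent).reshape(-1)
        J = torch.autograd.functional.jacobian(func, z)    # (out_dim, 1, z_dim)
        J = J.reshape(J.shape[0], -1)                      # (out_dim, z_dim)
        s = torch.linalg.svdvals(J)
        log_conds.append(torch.log(s.max() / s.min().clamp_min(1e-12)))
    return torch.stack(log_conds).mean()
```

In practice one may subsample output pixels or use a small minibatch, since the full Jacobian of a 128x128x3 image is large; the statistic itself is unchanged.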
However, these metrics are very new; further development of analysis tools and theoretical study is needed to better disentangle the symptoms and causes of the self-modulation technique, and indeed of others. Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of gans for improved quality, stability, and variation. A ADDITIONAL We describe the model structures that are used in our experiments in this section. The SNDCGAN architecture we follows the ones used in. Since the resolution of images in CIFAR10is 32 × 32 × 3, while resolutions of images in other datasets are 128 × 128 × 3.There are slightly differences in terms of spatial dimensions for both architectures. The proposed self-modulation is applied to replace existing BN layer, we term it sBN (self-modulated BN) for short in TAB8, 8, 9, 10. The ResNet architecture we also follows the ones used in. Again, due to the resolution differences, two ResNet architectures are used in this work. The proposed self-modulation is applied to replace existing BN layer, we term it sBN (self-modulated BN) for short in TAB0, 12, 13, 14. For the conditional setting with label information available, we adopt the Projection Based Conditional GAN (P-cGAN) . There are both conditioning in generators as well ad discriminators. For generator, conditional batch norm is applied via conditioning on label information, more specifically, this can be expressed as follows, DISPLAYFORM0 Where each label y is associated with a scaling and shifting parameters independently. For discriminator label conditioning, the dot product between final layer feature φ(x) and label embedding E(y) is added back to the discriminator output logits, i.e. D(x, y) = ψ(φ(x)) + φ(x) T E(y) where φ(x) represents the final feature representation layer of input x, and ψ(·) is the linear transformation maps the feature vector into a real number. Intuitively, this type of conditional discriminator encourages discriminator to use label discriminative features to distinguish true/fake samples. Both the above conditioning strategies do not dependent on the specific architectures, and can be applied to above architectures with small modifications. We use the same architectures and hyper-parameter settings 4 as in. More specifically, the architecture is the same as ResNet above, and we compare in two settings: only generator label conditioning is applied, and there is no projection based conditioning in the discriminator, and both generator and discriminator conditioning are applied, which is the standard full P-cGAN.
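The two P-cGAN conditioning mechanisms described in this appendix can be summarized in code. Below is a minimal sketch of (i) label-conditional batch normalization, which keeps an independent scale and shift per class, and (ii) the projection-based discriminator head D(x, y) = psi(phi(x)) + phi(x)^T E(y). Class and module names, sizes, and initializations are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class ConditionalBatchNorm2d(nn.Module):
    """Per-class scale gamma_y and shift beta_y on top of parameter-free batch norm."""

    def __init__(self, num_features, num_classes):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_features, affine=False)
        self.gamma = nn.Embedding(num_classes, num_features)
        self.beta = nn.Embedding(num_classes, num_features)
        nn.init.ones_(self.gamma.weight)
        nn.init.zeros_(self.beta.weight)

    def forward(self, h, y):
        h = self.bn(h)
        g = self.gamma(y).unsqueeze(-1).unsqueeze(-1)
        b = self.beta(y).unsqueeze(-1).unsqueeze(-1)
        return g * h + b


class ProjectionDiscriminatorHead(nn.Module):
    """D(x, y) = psi(phi(x)) + phi(x)^T E(y), with phi(x) the final feature vector."""

    def __init__(self, feature_dim, num_classes):
        super().__init__()
        self.psi = nn.Linear(feature_dim, 1)           # maps the feature vector to a real number
        self.embed = nn.Embedding(num_classes, feature_dim)

    def forward(self, phi_x, y):
        return self.psi(phi_x).squeeze(-1) + (phi_x * self.embed(y)).sum(dim=1)
```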
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
Hkl5aoR5tm
A simple GAN modification that improves performance across many losses, architectures, regularization schemes, and datasets.
Extreme Classification Methods have become of paramount importance, particularly for Information Retrieval (IR) problems, owing to the development of smart algorithms that are scalable to industry challenges. One of the prime class of models that aim to solve the memory and speed challenge of extreme multi-label learning is Group Testing. Multi-label Group Testing (MLGT) methods construct label groups by grouping original labels either randomly or based on some similarity and then train smaller classifiers to first predict the groups and then recover the original label vectors. Recently, a novel approach called MACH (Merged Average Classifiers via Hashing) was proposed which projects the huge label vectors to a small and manageable count-min sketch (CMS) matrix and then learns to predict this matrix to recover the original prediction probabilities. Thereby, the model memory scales O(logK) for K classes. MACH is a simple algorithm which works exceptionally well in practice. Despite this simplicity of MACH, there is a big gap between the theoretical understanding of the trade-offs with MACH. In this paper we fill this gap. Leveraging the theory of count-min sketch we provide precise quantification of the memory-identifiablity tradeoffs. We extend the theory to the case of multi-label classification, where the dependencies make the estimators hard to calculate in closed forms. To mitigate this issue, we propose novel quadratic approximation using the Inclusion-Exclusion Principle. Our estimator has significantly lower reconstruction error than the typical CMS estimator across various values of number of classes K, label sparsity and compression ratio. Extreme Classification has taken center-stage of Data Mining and Information Retrieval research in the past few years (; b; ;). It refers to the vanilla multiclass and multilabel classification problems where the number of classes K is significantly large. A large number of classes K brings a new set of computational and memory challenges in training and deploying classifiers. There have been several paradigms of models that tackle the scale challenge of Extreme Classification like 1-vs-all methods (b; ; Babbar & Schölkopf, 2017), tree based methods (a;), embedding models , etc. (as noted on the popular Extreme Classification Repository). One of the recent approaches proposed to alleviate the scale challenge of Multilabel Classification is Group Testing (; ;). In this method, all labels are grouped randomly into m groups/clusters. Each label may go into more than one group. We first train a classifier that predicts which of these clusters the input belongs to (treating each cluster as a separate label in a multilabel setting). For any given input, we first predict the clusters into which the true labels of the input may have been pooled. We can then identify all the true labels by taking an intersection over the inverted clusters. This approach suffers from a critical problem that even tree based approaches have, i.e., hard assignment of clusters. Since the recovery of true labels depends solely on hard-prediction of clusters, a mistake in the cluster prediction can cost us dearly in the final label prediction. Also, since the labels are pooled randomly, each individual meta-classifier is a weak and noisy one. 
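To make the group-testing recovery procedure concrete, the following sketch shows the hard-decoding step described above: labels are pooled into random groups, a meta-classifier predicts which groups are active, and candidate labels are recovered by intersecting the inverted groups. This is a schematic illustration with assumed function and variable names; it also exposes the brittleness just discussed, since a single wrong group prediction removes a true label from the intersection.

```python
import numpy as np

def make_groups(num_labels, num_groups, groups_per_label=2, seed=0):
    """Randomly pool each label into a few groups; returns a (num_groups, num_labels) mask."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((num_groups, num_labels), dtype=bool)
    for label in range(num_labels):
        for g in rng.choice(num_groups, size=groups_per_label, replace=False):
            mask[g, label] = True
    return mask

def decode(predicted_active_groups, mask):
    """A label survives only if every group it was pooled into is predicted active."""
    inactive = ~np.asarray(predicted_active_groups, dtype=bool)
    ruled_out = mask[inactive].any(axis=0)   # labels falling in at least one inactive group
    return np.flatnonzero(~ruled_out)

# Example: mask = make_groups(num_labels=10000, num_groups=200)
#          labels = decode(meta_classifier_predictions, mask)
```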
In a recent development, Merged Average Classifiers via Hashing (MACH) was proposed that alleviates the hard-prediction problem in Group Testing methods by identifying the best labels based on the sum of prediction probabilities of the respective groups for a given input. In the hindsight, MACH subtly learns to predict a count-min sketch (CMS) matrix of the original probability vector. For the case of multiclass classification (every input having just a single label unlike multilabel), MACH proposes an unbiased estimator to recover the original K dimensional probability vector from the predicted CMS matrix. Multiclass classification naturally fits into the count-min sketch setting as no two labels can appear simultaneously for a given input. But the proposed theory does not naturally extend to multilabel learning. Further, the variance and error bounds for multiclass classification rely heavily on the choice of number of hash tables and the size of each hash table. That aspect has not been explored in prior work. Our Contributions: In this work we broadly make the following contributions: 1) We revisit MACH with a thorough analysis of proposed reconstruction estimator for multiclass learning. In particular, we prove that the variance of estimation is inversely proportional to the product of product of number of hash tables and size of each hash table (in theorem 2). 2) We also obtain a lower bound on hash table hyperparametrs given a tolerance to prediction error (in Theorems 4 and 5). 3) We propose a novel reconstruction estimator for the case of multilabel learning using InclusionExclusion principle (in theorem 6). This estimator comes out as a solution to a quadratic equation (hence we code-name it as 'quadratic estimator'). 4) We simulate multilabel learning setting by generating K dimensional probability vectors and their proxy CMS measurements. We then reconstruct the probability vector using both the mean estimator and the quadratic estimator and show that the reconstruction Mean-Squared Error (MSE) is significantly lower for the new estimator. Count-Min Sketch: Count-Min Sketch (CMS) was proposed to solve the frequency counting problem in large streaming setting. Assume that we have an infinite stream of elements e 1, e 2, e 3,... coming in. Each of these elements can take any value between K distinct ones. Here, K is very large and we cannot afford to store an array of counts to store every element's frequency (limited memory setting). We need a sub-linear efficient data structure from which we can retrieve the frequency of every element. In Count-Min Sketch , we basically assign O(log K)'signatures' to each class using 2-universal hash functions. We use O(log K) different hash functions H 1, H 2, H 3,..., H O(log K), each mapping any class i to a small range of buckets B << K, i.e., H j (i) ∈ {0, 1, 2, ..., B}. We maintain a counting-matrix C of order O(log K) * B. If we encounter class i in the stream of classes, we increment the counts in cells H 1 (i), H 2 (i)....., H O(log K) (i). It is easy to notice that there will be collisions of classes into these counting cells. Hence, the counts for a class in respective cells could be over-estimates of the true count. During inference, we want to know the frequency of a particular element say a 1. We simply go to all the cells where a 1 is mapped to. Each cell gives and over-estimated value of the original frequency of a 1. 
To reduce the offset of estimation, the algorithm proposes to take the minimum of all the estimates as the approximate frequency, i.e., n approx (An example illustration of CMS is shown in figure 1. Connecting CMS and Extreme Classification: Given a data instance x, a vanilla classifier outputs the probabilities p i, i ∈ {1, 2, ..., K}. We want to essentially compress the information of these K numbers to log K,i.e., we can only keep track of log K = BR measurements. Ideally, without any assumption, we cannot compress the information in K numbers to anything less than O(K), if we want to retain all information. However, in classification, the most informative quantity is the identity of arg max p i. If we can identify a scheme that can recover the high probability classes from smaller measurement vector, we can train a small-classifier to map an input to these measurements instead of the big classifier. The foremost class of models to accomplish this task are Encoder and Decoder based models like Compressive Sensing . The connection between compressed sensing and extreme classification was identified in prior works . We provide an intuitive explanation of why compressed sensing or any other sketching algorithm does work like count-min sketch in the appendix A. Figure 2: Schematic diagram of MACH. Both the input and the label vector are independently hashed R times (label vector is hashed from K to B, K being number of classes and B being number of buckets in each of the R hash tables). Small models are then trained in parallel. MACH is a new paradigm for extreme classification that uses universal hashing to reduce memory and computations. MACH randomly merges K classes into B meta-classes or buckets (B K). We then runs any offthe shelf classifier (typically simple feed forward neural networks) to predict the meta classes. This process is repeated R number of times, changing the hash function each time (or by simply changing the random seed of the same hash function, to induce a different random pooling each time). During prediction, MACH aggregates the output from each of the R small meta classifiers to retrieve the best class. In the schema shown in figure 2, the input is assumed to be a large dimensional sparse vector. In order to reduce model size from both ends (input and output), the sparse input can also be feature hashed to a manageable dimension. Please note that the theoretical analysis of MACH is agnostic to the input feature hashing. We are only concerned with retrieving the most relevant labels from the meta-class predictions. The subsequent sections formalize the algorithm and quantify the mean, variance, error bounds and hyper-parameter bounds. We begin with emphasizing that MACH does not assume any dependence among the classes. This is a fairly strong assumption because often in extreme classification, the labels have strong correlations. More so, this assumption is intrinsically violated in the case of multilabel learning. Nevertheless, MACH works extremely well in practice, particularly at industry scale challenges. Let there be K classes originally. We'll hash them to B meta-classes using a universal hash function. We repeat this process R times each with a different hash function (can be obtained by simply changing the random seed each time). We only have an R * B matrix that holds all information about the original probability vector of K dimensions (R * B K). Typical classification algorithms model the probability P r(y = i|x) = p i where i ∈ {0, 1, 2...K − 1}. 
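The count-min sketch just described can be written down in a few lines. The sketch below uses simple hash functions of the form h(x) = (a*x + b) mod p mod B derived from different seeds; the specific hashing scheme and the class interface are our own choices for illustration.

```python
import numpy as np

class CountMinSketch:
    """Count-min sketch with R hash tables of B buckets each."""

    def __init__(self, num_buckets, num_tables, seed=0):
        self.B, self.R = num_buckets, num_tables
        rng = np.random.default_rng(seed)
        # One (a, b) pair per table for a simple 2-universal-style hash.
        self.prime = 2_147_483_647
        self.a = rng.integers(1, self.prime, size=num_tables)
        self.b = rng.integers(0, self.prime, size=num_tables)
        self.counts = np.zeros((num_tables, num_buckets), dtype=np.int64)

    def _buckets(self, item):
        return (self.a * item + self.b) % self.prime % self.B

    def add(self, item, count=1):
        for j, bucket in enumerate(self._buckets(item)):
            self.counts[j, bucket] += count

    def query(self, item):
        # Each cell over-estimates the true count; taking the minimum reduces the offset.
        return min(self.counts[j, bucket] for j, bucket in enumerate(self._buckets(item)))

# cms = CountMinSketch(num_buckets=1000, num_tables=10)
# cms.add(42); cms.query(42)
```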
With MACH, we bypass the hassle of training a huge last layer by instead modelling P r(y = b|x) = P j b for every hash function h j, where b ∈ {0, 1, 2, ..., B − 1} and j ∈ {0, 1, 2, ..., R − 1}. During prediction, we sought to recover the K vector from P j b matrix using an unbiased estimator as shown in subsequent sections. P j hj (i) stands for the probability of the bin (meta-class) that i th class is hashed into in j th repetition. Our goal is to obtain an unbiased estimator of p i in terms of {P From here on, the analysis diverges between Multiclass and Multilabel classification problems. We have With the above equations, given the R classifier models, an unbiased estimator of p i is: Theorem 1. Proof: Proof for this theorem has been given in . For clarity and coherence, we show the proof again here. For any j, we can always write where 1 hj (k)=hj (i) is an indicator random variable (generically denoted by I k from here on) suggesting whether class k has been hashed into the same bin as class i using hash function j. Since the hash function is universal, the expected value of the indicator is 1 B (each class will uniformly be binned into one of the B buckets). Thus This is because the expression k =i p k = 1−p i as the total probability sum up to one. Simplifying, we get. Using linearity of expectation and the fact that E(P j hj (i) ) = E(P k hj (i) ) for any j = k, it is not difficult to see that this value is also equal to Proof: Using the known V ar(aX + b) = a 2 V ar(X) and the fact that variance accumulates over sum of i.i.d random variables, we can write We first need to get V ar(P j hj (i) ). From eqn. 3, Hence, It's easy to see Hence, by merging eqns. 5 and 6, we get We can observe that larger the original probability p i, lower the variance of estimation which suggests that the higher probabilities are retained with high certainty and the lower probabilities are prone to noise. Since we only care for the correct prediction of the best class, we can offset the noise by increasing R. is also the computational complexity of prediction. With MACH, the memory complexity is O(BRd) and the computational complexity is O(BRd + KR) (including inference). To obtain significant savings, we want BR to be significantly smaller than K. We next show that BR ≈ O(log K) is sufficient for uniquely identifying the final class with high probability. Also, we need to tune the two knobs R and B for optimal performance on recovering the original probabilities. The subsequent theorems facilitate the prior knowledge of reasonable values of R and B based on our reconstruction error tolerance. In , the following theorem has been proven Theorem 3. For any B, R = log, guarantees that all pairs of classes c i and c j are distinguishable from each other with probability greater than 1 − δ 1. The above theorem specifies a bound such that no two pair of classes end up in the same bucket on all R hash functions. While this is simple and intuitive, it does not take into account the ease of classification. To be precise, when the difference between the probability of best class and the 2 nd best class is low (predictions are spurious), it is much harder to identify the best class as oppposed to when the difference is higher. Theorem 3 is completely agnostic to such considerations. 
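Before turning to those requirements on R and B, the estimator of Theorem 1 is easy to state operationally: for each class i, average the meta-class probabilities of the buckets that i hashes into across the R repetitions, then apply the affine correction B/(B-1) * (mean - 1/B). A minimal NumPy sketch with assumed variable names follows; it takes the R predicted B-dimensional probability vectors and the R hash assignments and returns the reconstructed K-dimensional scores.

```python
import numpy as np

def mach_multiclass_estimate(meta_probs, hash_assignments):
    """Reconstruct p_i = B/(B-1) * (mean_j P^j_{h_j(i)} - 1/B) for every class i.

    meta_probs       : array (R, B), softmax outputs of the R meta-classifiers for one input.
    hash_assignments : array (R, K), hash_assignments[j, i] = h_j(i) in {0, ..., B-1}.
    """
    R, B = meta_probs.shape
    # Gather, for each class i, the probability of its bucket in every repetition.
    gathered = meta_probs[np.arange(R)[:, None], hash_assignments]   # (R, K)
    mean_over_tables = gathered.mean(axis=0)                         # (K,)
    return (B / (B - 1.0)) * (mean_over_tables - 1.0 / B)

# Prediction is the argmax of the returned K-dimensional vector.
```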
Hence, the next theorems quantifies the requirements on R, B based on our tolerance to recovery error between p i andp i and also the ease of prediction (given by the difference between the p i and p j where i and j are the two best classes respectively). for any random variable X. For our proposed unbiased estimator in theorem 1, we have For a large enough B, B−1 B ≈ 1. Hence, we get the desired If the best class i * has p i * > α and we primarily care for recovering p i * with high probability, then we have RB > and classes i and j are the first and second best respectively (p i > p j > p k f or every k = i, j), Hence, based on the previous two theorems, we can get a reasonable estimate of what bucket size B should we choose and how many models that we need to train in parallel. The major difference between multi-class and multi-label classification from an analysis perspective is that eqn. 1 does not apply anymore. Hence, all the subsequent derivations do not apply in the case of multi-label classification. In the following theorems, we'll derive an approximate estimator using inclusion-exclusion principle to recover original probability vectors from MACH measurements for the case of multi-label classification. Each p i independently takes a value in. If we do not assume any relation between p i, it would be very difficult to derive an estimator. The most realistic assumption on the probability vectors is sparsity. Most real datasets have only few labels per sample even when the number of classes K is huge. For the purpose of analysis, we will assume that where V is the average of number of active labels per input. Theorem 6. Proof: P j hj (i) is the probability of union of all classes that have been hashed to bin h j (i) in j th hash function. Hence, using inclusion-exclusion principle, it can be written as Since all classes are independent of each other, we have Aggregating similar terms, we get In typical multilabel dataset, K runs into the order of millions where B is a few thousands. If we ignore all terms with B in denominator, we essentially end up with a plain mean estimator hj (i) ). We ideally want to use all terms but it is very cumbersome to analyze the summation (please note that the summation doesn't simplify to exponential as we have the clause k j = k l in each summation). In our case, we empirically show later on that even by limiting the expression to first order summation (ignore all terms B 2 or higher powers of B in denominator), we get a much better estimator for true probability. We can simplify the above expression into Solving for p i, we get our desired Unfortunately, proposing an unbiased estimator using the above is hard. One intuitive estimator that can potentially work isp i = Using Jensen's inequality (specifically, E[ Hence, E[p i] ≤ p i and we do not have an unbiased estimator. Nevertheless, the next section details simulation experiments that corroborate that our proposed estimator for multilabel classification has much lower mean-squared-error (MSE) than a plain mean estimator. To simulate the setup for multi-label MACH, we perform the following steps: • Choose a base prob ∈ which says how confident the prediction in the original probability vector is. • Initialize a K dimensional vector p orig = (p 1, p 2, ...., p K) with all zeros. We then implant the value base prob in int(V base prob) number of random locations. We now have a vector p orig which obeys p i = V. 
• Generate 1000 samples of K dimensional label vectors where each dimension i is a Bernoulli random variable with probability p i. These sample labels are realizations of p orig. • Merge each sample label vector into B dimensional binary labels where a bucket b is an OR over the constituent classes {i : h j (i) = b}. We repeat this step for R different hash functions,i.e., for all j ∈ 1, 2,..., R. • For each of R repetitions, calculate the mean of the respective B dimensional labels to get • Reconstruct p approx using theorem 6 and {P j : j = 1, 2, .., R}. • Calculate L2-norm of p orig − p approx • Repeat all above steps for 10000 times (generating a different p orig each time) and report the average L2-norm from the last step (it serves as the reconstruction MSE, lower the better). Following the above steps, we show the comparison of our proposed quadratic estimator in theorem 6 against the plain mean estimator by varying the values of K, B, V and base prob in figure 3. We can infer the following insights from the plots: • As K increases, the MSE grows. This is expected because the reconstructed vector has a small non-zero probability for many of the K classes and this induces noise and hence MSE grows. But the top classes are still retrieved with high certainty. • For any K, V, base prob, the MSE decreases when B increases which is expected (fewer collisions of classes and hence less noisier predictions). As the MSE gets lower, the gains from the square-root estimator are also low. This is good because in scenarios where B and R are small, we can do much better recovery using the proposed estimator. • For any K, B, base prob the MSE increases with V. This is again natural because larger V induces more'true' class collisions and hence the retrieval becomes fuzzy. • For any K, B, V the MSE decreases with base prob, albeit with much little difference than previous cases. This is interesting because a high base prob means that we have few but highly confident'true' classes among K. On the other hand, lower base prob indicates that'true' classes are scattered among a larger subset among K classes. Yet, MACH recovers the original probabilities with commendably low MSE. Varying B for K = 10000 Varying base prob for K = 10000 Varying B for K = 100000 Varying V for K = 100000 Varying base prob for K = 100000 Varying B for K = 1000000 Varying B for K = 1000000 Varying base prob for K = 1000000 Figure 3: Reconstruction Error (MSE) comparison between 1) vanilla mean estimator (plotted in magenta) and 2) proposed square-root estimator (plotted in green); for various configurations of K,B and V. The value of K varies as 10000, 100000, 1000000 for the 1 st, 2 nd and 3 rd rows respectively. In each row, the first plot fixes V, base prob and compares various values of B. The 2 nd plot fixes B, base prob and compares different values of B. The 3 rd one fixes B, V and compares different values of base prob. In all cases, we notice that the square-root estimator is consistently and significantly lower in MSE than the corresponding mean estimator. We perform a rigorous theoretical analysis of using Count-Min-Sketch for Extreme Classification and come up with error bounds and hyper-parameter constraints. We identify a critical shortcoming of reconstruction estimators proposed in prior research. We overcome the shortcoming by treating each bucket in a hash table as a union of merged original classes. 
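The simulation loop and the two estimators compared in Figure 3 can be sketched as follows. The mean estimator simply averages the bucket probabilities across repetitions; for the "quadratic" (square-root) estimator we solve a first-order inclusion-exclusion relation for p_i. The specific quadratic coefficients below reflect our own reading of Theorem 6 and should be treated as an illustrative reconstruction, not reference code; the direct loops are unoptimized and kept simple for clarity.

```python
import numpy as np

def simulate_once(K, B, R, V, base_prob, n_samples=1000, seed=0):
    rng = np.random.default_rng(seed)
    # Sparse ground-truth probability vector with sum_i p_i = V.
    p_orig = np.zeros(K)
    active = rng.choice(K, size=int(V / base_prob), replace=False)
    p_orig[active] = base_prob

    hashes = rng.integers(0, B, size=(R, K))          # h_j(i) for every table j and class i
    labels = rng.random((n_samples, K)) < p_orig      # Bernoulli label realizations

    # OR-pool the labels into buckets, then average over samples -> estimates of P^j_b.
    P = np.zeros((R, B))
    for j in range(R):
        for b in range(B):
            members = (hashes[j] == b)
            P[j, b] = labels[:, members].any(axis=1).mean() if members.any() else 0.0

    gathered = P[np.arange(R)[:, None], hashes]       # (R, K): P^j_{h_j(i)}
    mean_est = gathered.mean(axis=0)                  # plain mean estimator

    # Quadratic estimator: solve p^2 + (B - 1 - V) p + (V - B * P_bar) = 0 for p
    # (our first-order inclusion-exclusion reading of Theorem 6; an assumption).
    c = B - 1.0 - V
    disc = np.clip(c ** 2 - 4.0 * (V - B * mean_est), 0.0, None)
    quad_est = np.clip((-c + np.sqrt(disc)) / 2.0, 0.0, 1.0)

    mse = lambda est: np.linalg.norm(est - p_orig)    # L2 reconstruction error
    return mse(mean_est), mse(quad_est)
```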
Using the inclusion-exclusion principle and a controlled label sparsity assumption, we derive an approximate estimator that reconstructs the original probability vector from the predicted Count-Min Sketch measurements. Our new estimator has significantly lower reconstruction MSE than the prior estimator. Why not Compressive Sensing or Count-Sketch? The measurements in Compressive Sensing are not a probability distribution but rather a few linear combinations of the original probabilities. Imagine a set of classes {cats, dogs, cars, trucks}. Suppose we want to train a classifier that predicts a compressed distribution of classes like {0.6 * cars + 0.4 * cats, 0.5 * dogs + 0.5 * trucks}. There is no intuitive meaning to these compressed classes, and we cannot train the model with a softmax loss, which has proven to work best for classification. We can only attempt to train a regression model that minimizes a norm (e.g., the L1-norm or L2-norm) between the projections of the true K-vector and the predicted K-vectors (like in the case of
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
S1evKR4KvB
How to estimate original probability vector for millions of classes from count-min sketch measurements - a theoretical and practical setup.
Neural networks are commonly used as models for classification for a wide variety of tasks. Typically, a learned affine transformation is placed at the end of such models, yielding a per-class value used for classification. This classifier can have a vast number of parameters, which grows linearly with the number of possible classes, thus requiring increasingly more resources. In this work we argue that this classifier can be fixed, up to a global scale constant, with little or no loss of accuracy for most tasks, allowing memory and computational benefits. Moreover, we show that by initializing the classifier with a Hadamard matrix we can speed up inference as well. We discuss the implications for current understanding of neural network models. Deep neural network have become a widely used model for machine learning, achieving state-ofthe-art on many tasks. The most common task these models are used for is to perform classification, as in the case of convolutional neural networks (CNNs) used to classify images to a semantic category. CNN models are currently considered the standard for visual tasks, allowing far better accuracy than preceding approaches BID21 BID42.Training NN models and using them for inference requires large amounts of memory and computational resources, thus, extensive amount of research has been done lately to reduce the size of networks. BID6 used weight sharing and specification, BID28 used mixed precision to reduce the size of the neural networks by half. BID44 and BID19 used low rank approximations to speed up NNs. BID17, BID24 and BID51, used a more aggressive approach, in which weights, activations and gradients were quantized to further reduce computation during training. Although aggressive quantization benefits from smaller model size, the extreme compression rate comes with a loss of accuracy. Past work noted the fact that predefined BID31 and random BID15 projections can be used together with a learned affine transformation to achieve competitive on several tasks. In this study suggest the reversed proposal -that common NN models used can learn useful representation even without modifying the final output layer, which often holds a large number of parameters that grows linearly with number of classes. Convolutional neural networks (CNNs) are commonly used to solve a variety of spatial and temporal tasks. CNNs are usually composed of a stack of convolutional parameterized layers, spatial pooling layers and fully connected layers, separated by non-linear activation functions. Earlier architectures of CNNs BID22 BID21 used a set of fully-connected layers at later stage of the network, presumably to allow classification based on global features of an image. The final classifier can also be replaced with a convolutional layer with output feature maps matching the number of classes, as demonstrated by BID40.Despite the enormous number of trainable parameters these layers added to the model, they are known to have a rather marginal impact on the final performance of the network BID48 and are easily compressed and reduced after a model was trained by simple means such as matrix decomposition and sparsification BID6. Further more, modern architecture choices are characterized with the removal of most of the fully connected layers BID25 BID42, which was found to lead to better generalization and overall accuracy, together with a huge decrease in the number of trainable parameters. 
Additionally, numerous works showed that CNNs can be trained in a metric learning regime BID2 BID35 BID11, where no explicit classification layer was introduced and the objective regarded only distance measures between intermediate representations. BID7 suggested an all-convolutional network variant, where they kept the original initialization of the classification layer fixed with no negative impact on performance on the Cifar10 dataset. All of these properties provide evidence that fully-connected layers are in fact redundant and play a small role in learning and generalization. Despite the apparent minor role they play, fully-connected layers are still commonly used as classification layers, transforming from the dimension of network features N to the number of required class categories C. Therefore, each classification model must hold N · C number of trainable parameters that grows in a linear manner with the number of classes. This property still holds when the fully-connected layer is replaced with a convolutional classifier as shown by BID40.In this work we claim that for common use-cases of convolutional network, the parameters used for the final classification transform are completely redundant, and can be replaced with a predetermined linear transform. As we will show for the first time, this property holds even in largescale models and classification tasks, such as recent architectures trained on the ImageNet benchmark BID4 ).The use of a fixed transform can, in many cases, allow a huge decrease in model parameters, and a possible computational benefit. We suggest that existing models can, with no other modification, devoid their classifier weights, which can help the deployment of those models in devices with low computation ability and smaller memory capacity. Moreover, as we keep the classifier fixed, less parameters need to be updated, reducing the communication cost for models deployed in distributed systems. The use of a fixed transform which does not depend on the number classes can allow models to scale to a large number of possible outputs, without a linear cost in the number of parameters. We also suggest that these finding might shed light on the importance of the preceding non-linear layers to learning and generalization. We focus our attention on the final representation obtained by the network (the last hidden layer), before the classifier. We denote these representation as x = F (z; θ) where F is assumed to be a deep neural network with input z and parameters θ, e.g., a convolutional network, trained by backpropagation. In common NN models, this representation is followed by an additional affine transformation DISPLAYFORM0 where W and b are also trained by back-propagation. For input x of N length, and C different possible outputs, W is required to be a matrix of N × C. Training is done using cross-entropy loss, by feeding the network outputs through a softmax activation DISPLAYFORM1 and reducing the expected negative log likelihood with respect to ground-truth target t ∈ {1, . . ., C}, by minimizing DISPLAYFORM2 where w i is the i-th column of W. To evaluate our conjecture regarding the importance of the final classification transformation, we replaced the trainable parameter matrix W with a fixed orthonormal projection Q ∈ R N ×C, such that ∀i = j: q i · q j = 0 and q i 2 = 1, where q i is the ith column of Q. 
This can be ensured by a simple random sampling and singular-value decomposition As the rows of classifier weight matrix are fixed with an equally valued L 2 norm, we find it beneficial to also restrict the representation of x by normalizing it to reside on the n-dimensional spherê DISPLAYFORM0 This allows faster training and convergence, as the network does not need to account for changes in the scale of its weights. We now face the problem that q i ·x is bounded between −1 and 1. This causes convergence issues, as the softmax function is scale sensitive, and the network is affected by the inability to re-scale its input. This is similar to the phenomenon described by BID45 with respect to softmax function used for attention mechanisms. In the same spirit, we can amend this issue with a fixed scale T applied to softmax inputs f (y) = softmax(1 T y), also known as a softmax temperature. However, this introduces an additional hyper-parameter which may differ between networks and datasets. Instead, we suggest to introduce a single scalar parameter α to learn the softmax scale, effectively functioning as an inverse of the softmax temperature 1 T. Using normalized weights and an additional scale coefficient is similar in spirit to weightnormalization BID34, with the difference that we use a single scale for all entries in the weight matrix, in contrast to a scale for each row that BID34 uses. We keep the additional vector of bias parameters b ∈ R C, and train using the same negative-loglikelihood criterion. More explicitly, our classifier output is now DISPLAYFORM1 and we minimize the loss: DISPLAYFORM2 where we recall x is the final representation obtained by the network for a specific sample, and t ∈ {1, . . ., C} is the ground-truth label for that sample. Observing the behavior of the α parameter over time revealed a logarithmic growth depicted in graph 1. Interestingly, this is the same behavior exhibited by the norm of a learned classifier, first described by and linked to the generalization of the network. This was recently explained by the under-review work of BID39 as convergence to a max margin classifier. We suggest that using a single parameter will enable a simpler examination and possible further exploration of this phenomenon and its implications. We note that as −1 ≤ q i ·x ≤ 1, we also found it possible to train the network with a simple cosine angle loss:L(x, t) = q i ·x − 1, if i = t, q i ·x + 1, otherwise. allowing to discard the softmax function and its scale altogether, but ing in a slight decrease in final validation accuracy compared to original models. We further suggest the use of a Hadamard matrix BID9 as the final classification transform. Hadamard matrix H is an n × n matrix, where all of its entries are either +1 or −1. Further more, H is orthogonal, such that HH T = nI n where I n is the identity matrix. We can use a truncated Hadamard matrixĤ ∈ {−1, 1}C×N where all C rows are orthogonal as our final classification layer such that y =Ĥx + bThis usage allows two main benefits:• A deterministic, low-memory and easily generated matrix that can be used to classify.• Removal of the need to perform a full matrix-matrix multiplication -as multiplying by a Hadamard matrix can be done by simple sign manipulation and addition. We note that n must be a multiple of 4, but it can be easily truncated to fit normally defined networks. We also note the similarity of using a Hadamard matrix as a final classifier to methods of weight binarization such as the one suggested by BID3. 
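A minimal PyTorch sketch of the fixed classifier described above is given below: the final hidden representation is L2-normalized, multiplied by a fixed matrix whose rows are orthonormal (obtained from the SVD of a random Gaussian matrix) or by a truncated Hadamard matrix, and scaled by a single learned parameter alpha before the bias is added. Shapes, initial values, the assumption that the number of classes does not exceed the feature dimension for the orthonormal variant, and the use of SciPy's Sylvester construction (which requires a power-of-two size, hence the padding and truncation) are our own implementation choices.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F
from scipy.linalg import hadamard

def fixed_orthonormal_weight(num_classes, feat_dim):
    """Orthonormal rows via SVD of a random matrix (assumes num_classes <= feat_dim)."""
    w = torch.randn(num_classes, feat_dim)
    u, _, vT = torch.linalg.svd(w, full_matrices=False)
    return u @ vT                                     # (C, N) with orthonormal rows

def fixed_hadamard_weight(num_classes, feat_dim):
    """Truncated Hadamard matrix; the Sylvester construction needs a power-of-two size."""
    n = 2 ** math.ceil(math.log2(max(num_classes, feat_dim)))
    H = torch.from_numpy(hadamard(n)).float()
    return H[:num_classes, :feat_dim]                 # entries in {-1, +1}

class FixedClassifier(nn.Module):
    def __init__(self, feat_dim, num_classes, use_hadamard=False):
        super().__init__()
        weight = (fixed_hadamard_weight if use_hadamard else fixed_orthonormal_weight)(
            num_classes, feat_dim)
        self.register_buffer("weight", weight)        # fixed: not a trainable Parameter
        self.alpha = nn.Parameter(torch.ones(1))      # single learned softmax scale
        self.bias = nn.Parameter(torch.zeros(num_classes))

    def forward(self, x):
        x_hat = F.normalize(x, dim=1)                 # project features onto the unit sphere
        return self.alpha * F.linear(x_hat, self.weight) + self.bias
```

The returned logits are fed to the usual softmax cross-entropy loss; only alpha, the bias, and the layers preceding the classifier receive gradient updates.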
As the classifier weights are fixed to need only 1-bit precision, it is now possible to focus our attention on the features preceding it.3 EXPERIMENTAL ImageNet 75.3% 75.3% 25,557,032 8.01% DenseNet169 BID14 ImageNet 76.2% 76% 14,149,480 11.76% ShuffleNet BID50 ImageNet 65.9% 65.4% 1,826,555 52.56% We used the well known Cifar10 and Cifar100 datasets by BID20 as an initial test-bed to explore the idea of a fixed classifier. Cifar10 is an image classification benchmark dataset containing 50, 000 training images and 10, 000 test images. The images are in color and contain 32 × 32 pixels. There are 10 possible classes of various animals and vehicles. Cifar100 holds the same number of images of same size, but contains 100 different classes. We trained a residual network of on the Cifar10 dataset. We used a network of depth 56 and the same hyper-parameters used in the original work. We compared two variants: the original model with a learned classifier, and our version, where a fixed transformation is used. The shown in figure 2 demonstrate that although the training error is considerably lower for the network with learned classifier, both models achieve the same classification accuracy on the validation set. Our conjecture is that with our new fixed parameterization, the network can no longer increase the norm of a given sample's representation -thus learning its label requires more effort. As this may happen for specific seen samples -it affects only training error. We also compared using a fixed scale variable α at different values vs. a learned parameter. Results for α = {0.1, 1, 10} are depicted in figure 3 for both training and validation error. As can be seen, similar validation accuracy can be obtained using a fixed scale value (in this case α = 1 or 10 will suffice) at the expense of another hyper-parameter to seek. In all our experiments we opted to train this parameter instead. In all experiments the α scale parameter was regularized with the same weight decay coefficient used on original classifier. We then followed to train a model on the Cifar100 dataset. We used the DenseNet-BC model of BID14 with depth of 100 layers and k = 12. We continued to train according to the original regime and setting described for this network and dataset. Naturally, the higher number of classes caused the number of parameters to grow and encompass about 4% of the whole model. Validation accuracy for the fixed-classifier model remained equally good as the original model, and we continued to observe the same training curve. In order to validate our on a more challenging dataset, we used the Imagenet dataset introduced by BID4. The Imagenet dataset spans over 1000 visual classes, and over 1.2 million samples. CNNs used to classify Imagenet such as BID21,, BID43 usually have a hidden representation leading to the final classifier of at least 1024 dimensions. This architectural choice, together with the large number of classes, causes the size of classifier to exceed millions of parameters and taking a sizable share from the entire model size. We evaluated our fixed classifier method on Imagenet using Resnet50 by with the same training regime and hyper-parameters. By using a fixed classifier, approximately 2-million parameters were removed from the model, accounting for about 8% of the model parameters. Following the same procedure, we trained a Densenet169 model BID14 for which a fixed classifier reduced about 12% of the parameters. 
Similarly to on Cifar10 dataset, we observed the same convergence speed and approximately the same final accuracy on both the validation and training sets. Furthermore, we were interested in evaluating more challenging models where the classifier parameters constitutes the majority amount. For this reason we chose the Shufflenet architecture BID50, which was designed to be used in low memory and limited computing platforms. The Shufflenet network contains about 1.8 million parameters, out of which 0.96 million are part of the final classifier. Fixing the classifier ed with a model with only 0.86 million parameters. This model was trained and found, again, to converge to similar validation accuracy as the original. Interestingly, this method allowed Imagenet training in an under-specified regime, where there are more training samples than number of parameters. This is an unconventional regime for modern deep networks, which are usually over-specified to have many more parameters than training samples BID49. Moreover, many recent theoretical related to neural network training BID47 BID33 BID36 BID37 and even generalization BID5 BID0 BID46 usually assume over-specification. TAB0 summarizes our fixed-classifier on convolutional networks, comparing to originally reported . We offer our drop-in replacement for learned classifier that can be used to train models with fixed classifiers and replicate our 1. As language modeling requires classification of all possible tokens available in the task vocabulary, we were interested to see if a fixed classifier can be used, possible saving a very large number of trainable parameters (vocabulary size can have tens or even hundreds of thousands of different words). Recent works have already found empirically that using the same weights for both word embedding and classifier can yield equal or better than using a separate pair of weights BID18 BID32 BID45. This is compliant with our findings that the linear classifier is largely redundant. To examine further reduction in the number of parameters, we removed both classifier and embedding weights and replaced them with a fixed transform. We trained a language model on the WikiText2 dataset described in BID26, using the same setting in BID27. We used a recurrent model with 2-layers of LSTM BID10 and embedding + hidden size of 512. As the vocabulary of WikiText2 holds about 33K different words, the expected number of parameters in embedding and classifier is about 34-million. This number makes for about 89% from the 38M parameters used for the whole model. We found that using a random orthogonal transform yielded poor compared to learned embedding. We suspect that, in oppose to image classification benchmarks, the embedding layer in language models holds information of the words similarities and relations, thus requiring a fine initialization. To test our intuition, we opted to use pre-trained embeddings using word2vec algorithm by BID29 or PMI factorization as suggested by BID23. We find that using fixed word2vec embeddings, we achieve much better . Specifically, we use 89% less parameters than the fully learned model, and obtain only somewhat worse perplexity. We argue that this implies a required structure in word embedding that stems from semantic relatedness between words and the natural imbalance between classes. However, we suggest that with a much more cost effective ways to train word embeddings (e.g., BID29), we can narrow the gap and avoid their cost when training bigger models. 
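To illustrate the language-model variant, the sketch below initializes the output projection from pretrained word vectors (e.g., word2vec) and keeps it frozen, so that only the recurrent layers, the bias, and a global scale remain trainable. The model layout (2-layer LSTM of size 512, embedding size equal to the hidden size) follows the setting above; the exact way the matrices are tied, frozen, and scaled here is an assumption made for illustration.

```python
import torch
import torch.nn as nn

class FixedEmbeddingLM(nn.Module):
    def __init__(self, pretrained_vectors, hidden_size=512, num_layers=2):
        """pretrained_vectors: (vocab_size, hidden_size) tensor, e.g., word2vec vectors."""
        super().__init__()
        vocab_size, emb_size = pretrained_vectors.shape
        assert emb_size == hidden_size, "assumes embedding size equals hidden size (512/512)"
        self.embed = nn.Embedding.from_pretrained(pretrained_vectors, freeze=True)
        self.lstm = nn.LSTM(emb_size, hidden_size, num_layers, batch_first=True)
        # The output projection reuses the same fixed matrix (tied weights, kept frozen).
        self.register_buffer("out_weight", pretrained_vectors.clone())
        self.alpha = nn.Parameter(torch.ones(1))
        self.bias = nn.Parameter(torch.zeros(vocab_size))

    def forward(self, tokens, state=None):
        h, state = self.lstm(self.embed(tokens), state)
        logits = self.alpha * h @ self.out_weight.t() + self.bias
        return logits, state
```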
In the last couple of years a we observe a rapid growth in the number of classes benchmark datasets contain, for example: Cifar100 BID20 ), ImageNet1K, ImageNet22k (and language modeling BID26 . Therefore the computational demands of the final classifier will increase as well and should be considered no less than the architecture chosen. We use the work by BID41 as our use case, which introduced JFT-300M -an internal Google dataset with over 18K different classes. Using a Resnet50, with a 2048 sized representation, this led to a model with over 36M parameters. This means that over 60% of the model parameters reside in the final classification layer. BID41 further describes the difficulty in distributing this amount of parameters between the training servers, and the need to split them between 50 sub-layers. We also note the fact that the training procedure needs to account for synchronization after each parameter update -which must incur a non-trivial overhead. Our work can help considerably in this kind of scenario -where using a fixed classifier removes the need to do any gradient synchronization for the final layer. Furthermore, using a Hadamard matrix, we can remove the need to save the transformation altogether, and make it more efficient, allowing considerable memory and computational savings. We argue that our method works due to the ability of preceding layers in the network to learn separable representations that are easily classified even when the classifier itself is fixed. This property can be affected when the ratio between learned features and number of classes is small -that is, when C > N . We've been experimenting with such cases, for example Imagenet classification (C = 1000) using mobilenet-0.5 BID13 where N = 512, or reduced version of ResNet where N = 256. In both scenarios, our method converged similarly to a fully learned classifier reaching the same final validation accuracy. This is strengthening our finding, showing that even in cases in which C > N, fixed classifier can provide equally good . Another possible issue may appear when the possible classes are highly correlated. As a fixed orthogonal classifier does not account for this kind of correlation, it may prove hard for the network to learn in this case. This may suggest another reason for the difficulties we experienced in training a language model using an orthogonal fixed classifier, as word classes tend to have highly correlated instances. Understanding that linear classifiers used in NN models are largely redundant allows us to consider new approaches in training and understanding these models. Recent works BID1 suggested a connection between generalization capabilities of models and various norm-related quantities of their weights. Such might be potentially simplified in our model, since we have a single scalar variable (i.e., scale), which seems to be the only relevant parameter in the model (since we normalize the last hidden layer, and fix the last weight layer).The use of fixed classifiers might be further simplified in Binarized Neural Networks BID16, where the activations and weights are restricted to ±1 during propagations. In this case the norm of the last hidden layer is constant for all samples (equal to the square root of the hidden layer width). This constant can be absorbed into the scale constant α, and there is no need in a per-sample normalization as in eq. 
We also plan to further explore more efficient ways to learn word embeddings, where a similar redundancy in classifier weights may suggest simpler forms of token representations, such as low-rank or sparse versions, allowing benefits similar to the fixed transformations we suggested. In this work we suggested removing the parameters from the classification layer used in deep neural networks. We showed empirical results suggesting that keeping the classifier fixed causes little or no decline in classification performance on common balanced datasets such as Cifar and Imagenet, while allowing a noticeable reduction in trainable parameters. We argue that fixing the last layer can reduce the computational complexity of training as well as the communication cost in distributed learning. Furthermore, using a Hadamard matrix as the classifier might lead to some computational benefits when properly implemented, and save the memory otherwise spent on a large number of transformation coefficients. As datasets tend to become more complex over time (e.g., Cifar100, ImageNet1K, ImageNet22k, JFT-300M, and language modeling), we believe that the resource-hungry affine transformation should remain fixed during training, at least partially. We also found that new, efficient methods to create pre-defined word embeddings should be explored, as they require a huge number of parameters that can possibly be avoided when learning a new task. Based on these findings, we recommend that future research focus on the representations learned by the non-linear part of neural networks, up to the final classifier, as the latter seems to be highly redundant.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
S1Dh8Tg0-
You can fix the classifier in neural networks without losing accuracy
This paper introduces NEMO, an approach to unsupervised object detection that uses motion---instead of image labels---as a cue to learn object detection. To discriminate between motion of the target object and other changes in the image, it relies on negative examples that show the scene without the object. The required data can be collected very easily by recording two short videos, a positive one showing the object in motion and a negative one showing the scene without the object. Without any additional form of pretraining or supervision and despite of occlusions, distractions, camera motion, and adverse lighting, those videos are sufficient to learn object detectors that can be applied to new videos and even generalize to unseen scenes and camera angles. In a baseline comparison, unsupervised object detection outperforms off-the shelf template matching and tracking approaches that are given an initial bounding box of the object. The learned object representations are also shown to be accurate enough to capture the relevant information from manipulation task demonstrations, which makes them applicable to learning from demonstration in robotics. An example of object detection that was learned from 3 minutes of video can be found here: http://y2u.be/u_jyz9_ETz4 Object-based representations are a powerful abstraction of our world. Since these representations remove large amounts of information-an image of size 120 × 160 for example has 120 × 160 × 3 = 57.600 dimensions, while the coordinates of an object in that image only have 2 dimensions-object-based representations enable efficient generalization, simulation, planning, communication, etc. But grounding objects in sensory input currently relies on supervised learning, which requires a high number of labeled images, e.g. 500.000 manually annotated segments to learn 80 objects BID22. This paper takes a step towards replacing this labor-intensive supervision by learning to detect objects from videos that can be gathered quickly with minimal supervision and by exploiting the physical properties of objects. A physical object is a collection of matter that moves as a unit. Motion, in turn, can be a strong cue to learn object detection and replace the need for supervision in the form of labeled images. Given a video of a moving object, we can learn object-based representations by optimizing them to describe physically plausible motion BID16. But this approach only works in the absence of visual distractions. With camera motion, other moving objects, or changes in the , motion alone is not sufficient to learn such representations because many features in the image move in a physically plausible way. This paper improves on previous approaches by learning to ignore visual distractions through negative examples, i.e., videos of the scene without the target object but with the distractions. These negative videos are easy to collect because they do not need to be in sync with the positive ones, i.e., they do not need to have the same sequence of camera movements or the same object motions. This paper also addresses the challenge Figure 1: Learning to detect an object from 3 min of video. Left to right: training video of a pen in hand, negative video without pen, two test videos with per frame detections shown as black dots. [video link] of changes between training and test videos, e.g. due to different lighting or changes in the . 
Those changes can be harmful if an object representation is extracted using a standard pyramid-shaped convolutional network, because every pixel directly affects the output, even if it is far from the object's location. Therefore, this paper uses a spatial encoder architecture with a spatial softmax output BID5, which is only affected by the strongest local activations, making it invariant to many visual distractions. The contribution of this paper is to demonstrate unsupervised object detection based on data that is easy and fast to collect. This is achieved by formulating the use of negative examples for object detection as a loss function, combining it with motion-based learning objectives, and using these objectives to train a spatial encoder network with a combination of random search and gradient descent. The resulting method is called learning from negative examples and motion (NEMO). A glimpse of the results is shown in Figure 1. Experimental results in Section 4 show that NEMO can learn new objects from only two short videos of a few minutes, without using pretrained models and without using supervision beyond marking these videos as positive and negative. The results also show that NEMO can learn object detection in the presence of frequent occlusions, distractions, camera motion, and changes in lighting and background. Even though it uses data that can be collected in seconds to minutes, the learned object detection generalizes to new scenes and camera angles and outperforms template matching and tracking baselines. The experiments also show how the learned object representations can be useful to demonstrate tasks such as writing or pick-and-place tasks, e.g. to make robot learning more data-efficient. This work is strongly related to physics-based representation learning, where a latent representation is learned by optimizing consistency with physics. BID30 learn to map images to latent representations by optimizing consistency with a known dynamics model. BID15 and BID16 make more general assumptions about physical interactions and define them as learning objectives. A number of approaches combine physical assumptions with image reconstruction to learn latent representations BID8 BID31 BID5. BID7 learn to embed object regions in an image using spatio-temporal consistency. BID13 learn object embeddings from self-supervised interactions based on object persistence. BID14 learn representations that are equivariant to known ego motion. BID29 learn latent representations from multiple synchronous videos of motion. While these approaches are similar to this paper in spirit, they learn image embeddings, while this paper learns to detect objects in image coordinates. This more constrained object-based representation makes the presented approach particularly robust and efficient. This paper is also connected to the idea of active perception BID2, where action is used to facilitate perception. Motion has been used for a long time to identify and track objects BID23, to segment them BID6, to understand their articulation BID18, and so on. Recently, this idea has been combined with learning in order to generalize beyond the observed motion, e.g. to learn object segmentation from videos of moving objects BID27 and from videos generated by robot interactions BID28. This paper goes in the same direction for learning object detection by introducing ideas from representation learning and by leveraging negative examples. Labeling the training videos as positive and negative examples can also be viewed as weakly supervised learning, which deals with learning from labels that are only partially informative. Weakly supervised object detection relies on image-wide labels to learn to localize the corresponding objects in the image BID26 BID25. While these approaches use image-wide labels to replace object location labels, which are more difficult to obtain, this paper goes a step further and only uses per-video labels, compensating for this reduction of supervision by adding motion as a cue for learning object detection.
FIG0 caption: The variation loss, based on two frames at times t and t + d, enforces variation between detected object locations, resulting in a gradient that pushes z (t) and z (t+d) apart. (c) The slowness loss enforces object detections in frames t and t + 1 to be close, which produces a gradient that pulls z (t) and z (t+1) together. (d) The presence loss enforces that the object is detected in the positive frame t rather than in the negative frame t −, which creates a gradient that increases activations in the positive frame and decreases them in the negative frame.
The key idea of NEMO is to learn to detect an object from two videos, a positive video that shows the target object in motion and a negative video of the same scene without that object. These videos are used to optimize two objectives: 1) Learn to detect something that moves in a physically plausible way in the positive video, such that its location varies over time without instantaneous jumps, which is defined below as a combination of a variation loss and a slowness loss. 2) Learn to detect something that is present in the positive video but not in the negative video, which is defined as a presence loss. These objectives are used to train a spatial encoder network, which produces an object detection based on the strongest activation after a stack of convolutions. Optimization is done by a combination of random search and gradient descent. We will now look in detail into each of these components. NEMO's network architecture is based on the encoder part of deep spatial autoencoders BID5 and is therefore called a spatial encoder. The spatial encoder is a stack of convolutional layers without pooling or subsampling, which uses residual connections BID10, batch normalization BID12, and ReLU nonlinearities BID24. The spatial encoder has n residual blocks, each with c channels and kernel size k (see FIG0). The experiments in this paper used c = 32 channels, up to n = 10 layers, and kernel sizes k ∈ {3, 5, 7}. Since the parameters n and k control the receptive field of the object detector, they must roughly match the size of the target object. The experiments used k = 7 for learning to detect the Roomba and k = 3 for all other objects. The output layer of the spatial encoder has a single channel, followed by a spatial softmax, which produces a probability distribution over the object's location in the image. The mean and mode of this distribution estimate the object's location. The mode can be used during inference because it is more robust to distractions, but not during gradient-based training since it is not differentiable. The spatial encoder is trained by minimizing a combination of three losses (variation, slowness, and presence; see FIG0), which are defined here. Let us denote the input image at time t as I (t) ∈ R h×w, where h and w are the height and width of the image.
We will refer to the spatial encoder as f with parameters θ, and the output of f before the spatial softmax as DISPLAYFORM0. By applying the spatial softmax across image coordinates i and j, we get a probability image P (t) ∈ R h×w and its mean z (t) ∈ R 2 normalized to [−1, 1] as DISPLAYFORM1 The first two losses, variation and slowness, operate on the mean z in positive frames. Together, they measure whether the detected object location z (t) moves in a physically plausible way by comparing pairs of z (t) for different t. The variation loss encodes the assumption that the target object does not stay still by enforcing that z t+d is different from z t for d in some range [d min, d max]. The variation loss measures the proximity using e −distance, which is 1 if z t = z t+d and goes to 0 with increasing distance BID16. DISPLAYFORM2 The hyperparameter β scales this distance and controls how far z t and z t+d need to be apart. The hyperparameters d min and d max define for which time differences variation is enforced; d min should be large enough that the object has typically changed its location in the image after d min frames; d max should be small enough that slower changes in the typically take longer than d max frames. All experiments in this paper use the parameters β = 10, d min = 50, and d max = 100.The slowness loss encodes the assumption that objects move with relatively low velocities, i.e., that their locations at time t and t + 1 are typically close to each other. Consequently, this loss measures the squared distance between z in consecutive time steps t and t + 1, which favors smooth over erratic object trajectories BID32 BID15. DISPLAYFORM3 The presence loss encodes the assumption that the object is present in the positive video but not in the negative one. Taking a positive frame t and a negative frame t −, we can compute the probability of the object being somewhere in the positive frame q (t,t −) by computing the spatial softmax jointly over both frames and summing over all pixels. The loss is then defined as negative log probability. DISPLAYFORM4 Under review as a conference paper at ICLR 2019These three losses are combined in a weighted sum, DISPLAYFORM5, where the weights were chosen such that all gradients have the same order of magnitude. All experiments in this paper use w variation = 2, w slowness = 10, and w presence = 1. The losses are optimized from minibatches of size b, such that every minibatch includes b samples of consecutive frames {(I (t), I (t+1) )} t and b samples of frames d ∈ [d min, d max] steps apart {(I (t), I (t+d) )} t,d, which are used to compute the variation and slowness losses. The presence loss uses all combinations of the positive frames in {(I (t), I (t+d) )} t,d with b negative frames {I (t −) } t − ing in 2b 2 pairs to average over. All experiments use b = 10. For a good initialization of the spatial softmax output, O (t) is divided by a temperature α = 10 before the softmax is applied. For numerical stability of the gradient computation, Gaussian noise ∼ N (µ = 0, σ = 10 −5) is added to z t and q DISPLAYFORM0 When L(θ) is optimized with an the adaptive gradient descent method Adam BID19, it either converges very quickly within a few hundred gradient descent steps or-for some objects-gets stuck in a local optimum. NEMO addresses this problem by optimizing the loss with a combination of random search and gradient descent. It initializes the spatial encoder m = 10 times, optimizes each by a small number, e.g. 
200, gradient descent steps and then finetunes the best model in additional 1000 gradient steps. Stopping the training this early in very fast training time of about 10 minutes on a single GPU and seemed to improve generalization compared to training to convergence, although this needs to be further investigated. The following experiments evaluate object detection for a number of different objects without using pretrained networks and without using any labeled images, instead relying on a few minutes of positive videos showing a moving object and negative videos of the same scene that are quick and easy to obtain (in total about 20 minutes of video for all experiments). All show object detection on test videos not used for training. The show per-frame detections in subsampled 120 × 160 or 90 × 160 videos without applying any tracking or filtering. The experiments evaluate NEMO in three different settings, testing detection accuracy with static and moving cameras for single and multiple objects and generalization to new scenes and camera angles. The experiments also illustrate how learned object detection can enable learning from demonstration, and provide a comparison to template matching and tracking that shows a substantial advantage over these baselines. The data and code based on TensorFlow BID0 and Keras BID4 to reproduce all experiments will be published with this paper. In the first experiment (see Figure 1), the algorithm learned to detect a pen in a hand from a positive video in which the pen was moved in random spirals across the table and a negative video of the hand moving without the pen. Testing the learned object detector on unseen videos of writing "hello" and "world" shows precise and stable per frame object detection from only 2.5 minutes of video. Note how the method learned to be invariant to distractions such as position of the arm. Moving the camera causes motion in the entire video, which makes it more difficult to learn object detection based on motion. This experiment evaluates NEMO on such videos that were recorded by moving the camera in a way that keeps the object in video without constantly centering on it, because that would cancel any object motion in the image. For this set of experiments, the object detector uses the current frame and additionally a difference image from the previous frame as input, which produced more consistent . The in Figure 3 show that NEMO works despite camera motion, that it can handle difficult lighting conditions as for the toy car, and that the learned object detection can even transfer to different scenes, as for the car detector that was trained in the balcony scene and tested in the hallway. The total training data used for learning all three objects is just under 12 minutes of video. This experiment tests detection of multiple objects in a table top setting with five objects. The training data for this experiment are five videos of about 2 minutes, with one video per object in which the object is randomly moved on the table without any other object being present. Each video was used as positive and the remaining four as negative examples to train a different object detector. The in Figure 4 show the output of these five object detectors in a test video in which objects are randomly rearranged on the table. These show mostly correct detections, even under occlusions by the hand and other objects, and generalization to a different camera angle never seen during training. 
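Before moving on, here is a compact sketch of the spatial encoder and the three training objectives defined in the method section above. It is an illustrative PyTorch re-implementation (the released code uses TensorFlow and Keras); the stem convolution, the exact distance inside the variation loss, and the pairing of one positive with one negative frame in the presence loss are simplifying assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock(nn.Module):
    """One residual block: conv-BN-ReLU-conv-BN plus skip connection, no pooling."""
    def __init__(self, c, k):
        super().__init__()
        self.conv1 = nn.Conv2d(c, c, k, padding=k // 2)
        self.conv2 = nn.Conv2d(c, c, k, padding=k // 2)
        self.bn1, self.bn2 = nn.BatchNorm2d(c), nn.BatchNorm2d(c)

    def forward(self, x):
        h = F.relu(self.bn1(self.conv1(x)))
        return F.relu(x + self.bn2(self.conv2(h)))

class SpatialEncoder(nn.Module):
    """n residual blocks with c channels, a 1-channel output map O, and a spatial
    softmax whose mean z lies in [-1, 1]^2 image coordinates."""
    def __init__(self, in_channels=3, c=32, n=10, k=3, alpha=10.0):
        super().__init__()
        self.stem = nn.Conv2d(in_channels, c, k, padding=k // 2)  # assumed entry layer
        self.blocks = nn.Sequential(*[ResBlock(c, k) for _ in range(n)])
        self.head = nn.Conv2d(c, 1, 1)
        self.alpha = alpha                                        # softmax temperature

    def forward(self, img):
        o = self.head(self.blocks(self.stem(img))).squeeze(1)    # (B, H, W)
        b, h, w = o.shape
        p = F.softmax(o.reshape(b, -1) / self.alpha, dim=1).reshape(b, h, w)
        ys = torch.linspace(-1, 1, h, device=img.device)
        xs = torch.linspace(-1, 1, w, device=img.device)
        z = torch.stack([(p.sum(1) * xs).sum(1), (p.sum(2) * ys).sum(1)], dim=1)
        return z, o                                               # mean location and raw map

def variation_loss(z_t, z_td, beta=10.0):
    # exp(-beta * distance): 1 when the two detections coincide, towards 0 as they
    # move apart, so minimizing it pushes z(t) and z(t+d) apart.
    return torch.exp(-beta * (z_t - z_td).norm(dim=1)).mean()

def slowness_loss(z_t, z_t1):
    # Squared distance between consecutive detections favors smooth trajectories.
    return ((z_t1 - z_t) ** 2).sum(dim=1).mean()

def presence_loss(o_pos, o_neg, alpha=10.0):
    # Softmax jointly over a positive and a negative frame; q is the probability
    # mass that falls inside the positive frame.
    b = o_pos.shape[0]
    joint = torch.cat([o_pos.reshape(b, -1), o_neg.reshape(b, -1)], dim=1) / alpha
    q = F.softmax(joint, dim=1)[:, : o_pos.reshape(b, -1).shape[1]].sum(dim=1)
    return -torch.log(q).mean()

def nemo_loss(z_t, z_t1, z_td, o_pos, o_neg):
    # Weighted sum with the weights reported in the paper.
    return (2.0 * variation_loss(z_t, z_td)
            + 10.0 * slowness_loss(z_t, z_t1)
            + 1.0 * presence_loss(o_pos, o_neg))
```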
The video linked in the caption reveals that the object detector is not perfect. It works most of time, but generates two kinds of errors: 1) It occasionally errorneously detects the hand as being the target object even when the object is visible-this is a clear error. 2) More often, it detects the object in a wrong location when it is actually occluded by the hand. This is because, in the current version, the spatial encoder cannot return nothing, it always returns the most likely object location. This could be resolved by estimating the detection uncertainty and only returning the object location when the certainty exceeds a threshold. Since the straight-forward approach of using the value of the pre-softmax activations as a certainty estimate did not produce satisfying , this should be addressed in future work. This experiment illustrates how object-based representations learned by NEMO can enable subsequent learning such as learning from demonstrations (see Figure 5). The experiment uses the object detectors learned in the last experiment and applies them to demonstrations for three pick-and-place tasks. Comparing the object locations in the first and last frame of the demonstration reveals which objects were moved to which locations. Simple statistics over the final object locations across all demonstrations of a task describe the task in a way that could be used as the goal for a planning algorithm or as reward for reinforcement learning. Since there is no other approach for learning object detection from positive videos of moving objects and negative videos without those objects, there is no baseline to fairly compare against. To still put NEMO's performance in the context of existing work on object detection and tracking, this experiment compares it to established template matching and tracking approaches using their OpenCV implementations BID3: template matching with normalized cross correlation BID21, tracking with online boosting (OLB, BID9, tracking with online multiple instance learning , tracking with kernelized correlation filters , and tracking-learning-detection (TLD, BID17. Since all of these baselines need some form of supervision, they were given bounding boxes around the initial object locations to initialize tracking or extract an object template. The baselines were applied to the non-subsampled video with resolution 480 × 640. Figure 6 compares NEMO to these baselines for three test videos from the experiments above, for which 20 equally spaced frames across the video were manually labeled. The show that detecting or tracking the pen is relatively easy. NEMO, MIL, and KCF retain a very low error rate in this video. But for the other two videos, all tracking methods quickly diverge and template matching performs poorly as well, while NEMO solves these videos almost perfectly. The tracking methods fail in these videos due to substantial occlusions during object motion in the table top setting and difficult lighting conditions and fast motion in the toy car example. Template matching cannot handle the change in appearance of the objects based on lighting, scale, and orientation. The additional videos used by NEMO for learning the object detector, however, lead to robustness to these variations, which shows the advantage of unsupervised learning. This paper presented NEMO, a novel approach to unsupervised object detection from short videos of moving objects and negative videos of scenes without those objects. 
By demonstrating data-efficient and robust object detection without the use of image labels, this paper opens up new research directions. There are a number of extensions that would improve the presented approach. Combining it with ensemble methods, for example, could provide an uncertainty estimate required to infer whether the object is visible in the current frame. Integrating the approach with tracking or filtering could exploit temporal consistency not only during training but also during inference. For learning multiple objects in the same scene, merging the different object detectors into a single network could improve performance by sharing intermediate features. And creating a large-scale data-set for this approach would be very valuable to develop it further. Taking a broader view, the presented approach takes a step towards unsupervised learning of object-based representations. While this paper used manually recorded videos, the method can also be applied to data collected by a robot similar to BID13 and BID28 to learn objects autonomously. Acquiring such object-based representations could build a bridge to geometric and symbolic reasoning and enable efficient learning, communication, prediction, and planning in object-based representations.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
Byl9bhA5F7
Learning to detect objects without image labels from 3 minutes of video
Recent powerful pre-trained language models have achieved remarkable performance on most of the popular datasets for reading comprehension. It is time to introduce more challenging datasets to push the development of this field towards more comprehensive reasoning of text. In this paper, we introduce a new Reading Comprehension dataset requiring logical reasoning (ReClor) extracted from standardized graduate admission examinations. As earlier studies suggest, human-annotated datasets usually contain biases, which are often exploited by models to achieve high accuracy without truly understanding the text. In order to comprehensively evaluate the logical reasoning ability of models on ReClor, we propose to identify biased data points and separate them into EASY set while the rest as HARD set. Empirical show that the state-of-the-art models have an outstanding ability to capture biases contained in the dataset with high accuracy on EASY set. However, they struggle on HARD set with poor performance near that of random guess, indicating more research is needed to essentially enhance the logical reasoning ability of current models. Machine reading comprehension (MRC) is a fundamental task in Natural Language Processing, which requires models to understand a body of text and answer a particular question related to the context. With success of unsupervised representation learning in NLP, language pre-training based models such as GPT-2 , BERT , XLNet and RoBERTa have achieved nearly saturated performance on most of the popular MRC datasets (; ; ;). It is time to challenge state-of-the-art models with more difficult reading comprehension tasks and move a step forward to more comprehensive analysis and reasoning over text . In natural language understanding, logical reasoning is an important ability to examine, analyze and critically evaluate arguments as they occur in ordinary language according to the definition from Law School Admission Council (2019a). It is a significant component of human intelligence and is essential in negotiation, debate and writing etc. However, existing reading comprehension datasets have none or merely a small amount of data requiring logical reasoning, e.g., 0% in MCTest dataset and 1.2% in SQuAD according to. One related task is natural language inference, which requires models to label the logical relationships of sentence pairs. However, this task only considers three types of simple logical relationships and only needs reasoning at sentence-level. To push the development of models in logical reasoning from simple logical relationship classification to multiple complicated logical reasoning and from sentence-level to passage-level, it is necessary to introduce a reading comprehension dataset targeting logical reasoning. A typical example of logical reasoning questions is shown in Table 1. Similar to the format of multiple-choice reading comprehension datasets , it contains a context, a question and four options with only one right answer. To answer the question in this example, readers need to identify the logical connections between the lines to pinpoint the conflict, then understand each of the options and select an option that solves the conflict. Human minds need extensive training and practice to get used to complex reasoning, and it will take immense efforts for crowdsourcing workers to design such logical reasoning questions. 
Inspired by the datasets extracted from standardized examinations , we build a dataset by selecting such logical reasoning questions from standardized exams such as GMAT 1 and LSAT 2. We finally collect 6,139 pieces of logical reasoning questions, which constitute a Reading Comprehension dataset requiring logical reasoning (ReClor). Human-annotated datasets usually contain biases (; ; ; ; ;), which are often exploited by neural network models as shortcut solutions to achieve high testing accuracy. For data points whose options can be selected correctly without knowing the contexts and questions, we classify them as biased ones. In order to fully assess the logical reasoning ability of the models, we propose to identify the biased data points and group them as EASY set, and put the rest into HARD set. Based on our experiments on these separate sets, we find that even the state-of-the-art models can only perform well on EASY set and struggle on HARD set as shown in Figure 1. This phenomenon shows that current models can well capture the biases in the dataset but lack the ability to understand the text and reason based on connections between the lines. On the other hand, human beings perform similarly on both the EASY and HARD set. It is thus observed that there is still a long way to go to equip models with true logical reasoning ability. The contributions of our paper are two-fold. First, we introduce ReClor, a new reading comprehension dataset requiring logical reasoning. We use option-only-input baselines trained with different random seeds to identify the data points with biases in the testing set, and group them as EASY set, with the rest as HARD set to facilitate comprehensive evaluation. Second, we evaluate several stateof-the-art models on ReClor and find these pre-trained language models can perform well on EASY set but struggle on the HARD set. This indicates although current models are good at exploiting biases in the dataset, they are far from capable of performing real logical reasoning yet. Reading Comprehension Datasets A variety of reading comprehension datasets have been introduced to promote the development of this field. MCTest is a dataset with 2,000 multiple-choice reading comprehension questions about fictional stories in the format similar to ReClor. proposed SQuAD dataset, which contains 107,785 questionanswer pairs on 536 Wikipedia articles. The authors manually labeled 192 examples of the dataset and found that the examples mainly require reasoning of lexical or syntactic variation. In an analysis of the above-mentioned datasets, found that none of questions requiring logical reasoning in MCTest dataset and only 1.2% in SQuAD dataset . introduced RACE dataset by collecting the English exams for middle and high school Chinese students in the age range between 12 to 18. They hired crowd workers on Amazon Mechanical Turk to label the reasoning type of 500 samples in the dataset and show that around 70 % of the samples are in the category of word matching, paraphrasing or single-sentence reasoning. To encourage progress on deeper comprehension of language, more reading comprehension datasets requiring more complicated reasoning types are introduced, such as iterative reasoning about the narrative of a story (Kočiskỳ et al., 2018), multi-hop reasoning Table 1: An example in the ReClor dataset which is modified from the Law School Admission Council (2019b). 
across multiple sentences and multiple documents , commonsense knowledge reasoning (; ;) and numerical discrete reasoning over paragraphs . However, to the best of our knowledge, although there are some datasets targeting logical reasoning in other NLP tasks mentioned in the next section, there is no dataset targeting evaluating logical reasoning in reading comprehension task. This work introduces a new dataset to fill this gap. Logical Reasoning in NLP There are several tasks and datasets introduced to investigate logical reasoning in NLP. The task of natural language inference, also known as recognizing textual entailment (; ; ; ;) requires models to take a pair of sentence as input and classify their relationship types, i.e., ENTAILMENT, NEUTRAL, or CONTRADICTION. SNLI and MultiNLI datasets are proposed for this task. However, this task only focuses on sentence-level logical relationship reasoning and the relationships are limited to only a few types. Another task related to logical reasoning in NLP is argument reasoning comprehension task introduced by with a dataset of this task. Given an argument with a claim and a premise, this task aims to select the correct implicit warrant from two options. Although the task is on passage-level logical reasoning, it is limited to only one logical reasoning type, i.e., identifying warrants. ReClor and the proposed task integrate various logical reasoning types into reading comprehension, with the aim to promote the development of models in logical reasoning not only from sentence-level to passage-level, but also from simple logical reasoning types to the complicated diverse ones. There have been several datasets extracted from human standardized examinations in NLP, such as RACE dataset mentioned above. Besides, NTCIR QA Lab offers comparative evaluation for solving real-world university entrance exam questions; The dataset of CLEF QA Entrance Exams Task is extracted from standardized English examinations for university admission in Japan; ARC dataset consists of 7,787 science questions targeting student grade level, ranging from 3rd grade to 9th; The dialogue-based multiple-choice reading comprehension dataset DREAM contains 10,197 questions for 6,444 multi-turn multi-party dialogues from English language exams that are designed by human experts to assess the comprehension level of Chinese learners of English. Compared with these datasets, ReClor distinguishes itself by targeting logical reasoning. 3 RECLOR DATA COLLECTION AND ANALYSIS The format of data in ReClor is similar to other multiple-choice reading comprehension datasets , where a data point contains a context, a question and four answer options, among which only one option is right/most suitable. We collect reading comprehen- sion problems that require complicated logical reasoning. However, producing such data requires the ability to perform complex logical reasoning, which makes it hard for crowdsourcing workers to generate such logical questions. Fortunately, we find the reading comprehension problems in some standardized tests, such as GMAT and LSAT, are highly in line with our expectation. We construct a dataset containing 6,139 logical reasoning questions sourced from open websites and books 3. In the original problems, there are five answer options in which only one is right. To comply with fair use of law 4, we shuffle the order of answer options and randomly delete one of the wrong options for each data point, which in four options with one right option and three wrong options. 
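As a rough illustration of that preprocessing step, the sketch below drops one randomly chosen wrong option from a five-option question and reshuffles the remaining four while tracking where the correct answer ends up. The function and argument names are hypothetical; only the described behavior is taken from the text.

```python
import random

def reduce_and_shuffle(options, label, rng=random):
    """options: list of five answer strings; label: index of the right one.
    Returns (four shuffled options, new index of the right answer)."""
    wrong = [i for i in range(len(options)) if i != label]
    dropped = rng.choice(wrong)                          # remove one wrong option
    kept = [opt for i, opt in enumerate(options) if i != dropped]
    right_pos = kept.index(options[label])               # position before shuffling
    order = list(range(len(kept)))
    rng.shuffle(order)                                    # shuffle the option order
    return [kept[i] for i in order], order.index(right_pos)
```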
Furthermore, similar to ImageNet dataset 5, we plan to offer the dataset to researchers/educators who agree to have it for non-commercial research and/or educational use only. As mentioned above, we collect 6,139 data points, in which 91.22% are from actual exams of GMAT and LSAT while others are from high-quality practice exams. They are divided into training set, validation set and testing set with 4,639, 500 and 1,000 data points respectively. The overall statistics of ReClor and comparison with other similar multiple-choice MRC datasets are summarized in Table 2. As shown, ReClor is of comparable size and relatively large vocabulary size. Compared with RACE, the length of the context of ReCor is much shorter. In RACE, there are many redundant sentences in context to answer a question. However, in ReClor, every sentence in the context passages is important, which makes this dataset focus on evaluating the logical reasoning ability of models rather than the ability to extract relevant information from a long context. The length of answer options of ReClor is largest among these datasets. We analyze and manually annotate the types of questions on the testing set and group them into 18 categories, whose percentages and descriptions are shown in Table 3. The percentages of different types of questions reflect those in the logical reasoning module of GMAT and LSAT. Some examples of different types of logical reasoning are listed in Figure 2, and more examples are listed in the Appendix C. Taking two examples, we further express how humans would solve such questions in Table 4, showing the challenge of ReClor. The dataset is collected from exams devised by experts in logical reasoning, which means it is annotated by humans and may introduce biases in the dataset. Recent studies have shown that models can utilize the biases in a dataset of natural language understanding to perform well on the task without truly understanding the text (; ; ; ; ;). It is necessary to analyze such data biases to help evaluate models. In the ReClor dataset, the common context and question are shared across the four options for each data point, so we focus on the analysis of the difference in lexical choice and sentence length of the right and wrong options without contexts and questions. We first investigate the biases of lexical choice. We lowercase the options and then use WordPiece tokenization of BERT BASE , for the tokens in options, we analyze their conditional probability of label l ∈ {right, wrong} given by the token t by p(l|t) = count(t, l)/count(t). The larger the correlation score is for a particular token, the more likely it contributes to the prediction of related option. Many neural network based models such as FastText , Bi-LSTM, GPT , GPT-2 , BERT , XLNet , If the purpose of laws is to contribute to people's happiness, we have a basis for criticizing existing laws as well as proposing new laws. Hence, if that is not the purpose, then we have no basis for the evaluation of existing laws, from which we must conclude that existing laws acquire legitimacy simply because they are the laws Question: The reasoning in the argument is flawed in that the argument Options: A. takes a sufficient condition for a state of affairs to be a necessary condition for it B. draws a about how the world actually is on the basis of claims about how it should be C. infers a causal relationship from the mere presence of a correlation D. 
trades on the use of a term in one sense in a premise and in a different sense in the Answer: A Reasoning Process of Humans: We may first look at the question to understand the specific task of the question -identify a flaw. We then analyze the argument in the context. The 'existing laws acquire legitimacy simply because they are the laws.' is based on the argument (purpose is NOT happiness) → (NOT basis for criticizing laws), which is obtained from the first statement: (purpose is happiness) → (basis for criticizing laws). However, we know ¬A → ¬B cannot be obtained from A → B. Therefore, we should choose option A that describes this flaw. The distractors here are different types of reasoning flaws. Prior knowledge of basic logical rules is needed to correctly answer this question. Psychologist: Phonemic awareness, or the knowledge that spoken language can be broken into component sounds, is essential for learning to read an alphabetic language. But one also needs to learn how sounds are symbolically represented by means of letters; otherwise, phonemic awareness will not translate into the ability to read an alphabetic language. Yet many children who are taught by the whole-language method, which emphasizes the ways words sound, learn to read alphabetic languages. Question: Which one of the following can be properly inferred from the psychologist's statements? A. The whole-language method invariably succeeds in teaching awareness of how spoken language can be broken into component sounds. B. Some children who are taught by the whole-language method are not prevented from learning how sounds are represented by means of letters. C. The whole-language method succeeds in teaching many children how to represent sounds symbolically by means of letters. D. When the whole-language method succeeds in teaching someone how to represent sounds by means of letters, that person acquires the ability to read an alphabetic language. Answer: B Reasoning Process of Humans: Looking at the question and we know that it is asking about implication. From the first two sentences in context, we know that there are two necessary conditions to read an alphabetic language: phonemic awareness and symbolic letters. We also learn [(NOT symbolic letters) AND (phonemic awareness)] → read an alphabetic language (denoted as Formula 1). The last sentence in the context says that many children are taught by the whole-language method to learn a language. As for option A, from the context, we only know the whole language method works for'many' children, which cannot be inferred to'invariably' works. As for option B, combing three sentences in the context, we know that the whole-language method meets the two necessary conditions to learn a language, especially the last sentence mentions'learn to read alphabetic languages'. Children learn to read alphabetic languages means that they must recognize symbolic letters that represent sound because symbolic letters is a necessary condition of read an alphabetic language; otherwise, they cannot read because of Formula 1 mentioned above. Therefore, option B is correct. As for option C, from the context we only know the whole-language method teaches phonemic awareness and read an alphabetic language. Symbolic letters may be taught by other methods, so C is wrong. As for D, similar to C, symbolic letters may be taught by other methods and we also cannot obtain: symbolic letters → read an alphabetic language. Table 4: Two examples to show how humans to solve the questions. 
Context: The current pattern of human consumption of resources, in which we rely on nonrenewable resources, for example metal ore, must eventually change. Since there is only so much metal ore available, ultimately, we must either do without or turn to renewable resources to take its place. A. We cannot indefinitely replace exhausted nonrenewable resources with other nonrenewable resources. B. Consumption of nonrenewable resources will not continue to increase in the near future. C. There are renewable resource replacements for all of the nonrenewable resources currently being consumed. D. Ultimately we cannot do without nonrenewable resources. Context: Some theorists argue that literary critics should strive to be value-neutral in their literary criticism. These theorists maintain that by exposing the meaning of literary works without evaluating them, critics will enable readers to make their own judgments about the works' merits. But literary criticism cannot be completely value-neutral. Thus, some theorists are mistaken about what is an appropriate goal for literary criticism. Q: The argument's follows logically if which one of the following is assumed? A. Any critic who is able to help readers make their own judgments about literary works' merits should strive to produce value-neutral criticism. B. If it is impossible to produce completely value-neutral literary criticism, then critics should not even try to be value-neutral. C. The less readers understand the meaning of a literary work, the less capable they will be of evaluating that work's merits.. D. Critics are more likely to provide criticisms of the works they like than to provide criticisms of the works they dislike. The television show Henry was not widely watched until it was scheduled for Tuesday evenings immediately after That' s Life, the most popular show on television. During the year after the move, Henry was consistently one of the ten most-watched shows on television. Since Henry' s recent move to Wednesday evenings, however, it has been watched by far fewer people. We must conclude that Henry was widely watched before the move to Wednesday evenings because it followed That' s Life and not because people especially liked it. Q: Which one of the following, if true, most strengthens the argument? A. The show that now follows That's Life on Tuesdays has double the number of viewers it had before being moved. B. Henry has been on the air for three years, but That's Life has been on the air for only two years. C. After its recent move to Wednesday, Henry was aired at the same time as the second most popular show on television. D. That's Life was not widely watched during the first year it was aired. Context: When a certain gland becomes cancerous in humans, it produces high levels of a particular protein. A blood test can determine the level of this protein well before a cancer of the gland could be detected by other means. Some doctors recommend that aggressive anticancer treatment should be begun as early as possible for anyone who is tested and is found to have high levels of the protein. Q: Which one of the following, if true, most seriously weakens the doctors' recommendation? A. The blood test for the protein has been in use for some time to monitor the condition of patients who have been diagnosed as having cancer of the gland. B. Before the blood test became available, about one third of all cases of cancer of the gland were detected in early stages. C. 
So far, no patients whose protein levels were found to be normal have subsequently developed cancer of the gland. D. Enlargement of the gland, a common condition infrequently associated with cancer, in high levels of the protein. Context: In a study, pairs of trained dogs were placed side by side and given a command such as "sit". After both obeyed the command, one dog was given a treat while its partner was given no reward at all. Over time, the dogs who went unrewarded began to disobey the command. This shows that dogs have an aversion to being treated unfairly. Q: Which one of the following would be most useful to know in order to evaluate the argument? A. Were dogs who were accustomed to receiving regular rewards prior to the study more inclined to obey the command? B. How many repetitions were required before the unrewarded dogs began to disobey the command? RoBERTa have achieved impressive in various NLP tasks. We challenge these neural models with ReClor to investigate how well they can perform. Details of the baseline models and implementation are shown in the Appendix A and B. As mentioned earlier, biases prevalently exist in human-annotated datasets (; ; ;), which are often exploited by models to perform well without truly understanding the text. Therefore, it is necessary to find out the biased data points in ReClor in order to evaluate models in a more comprehensive manner . To this end, we feed the five strong baseline models (GPT, GPT-2, BERT BASE, XLNet BASE and RoBERTa BASE) with ONLY THE ANSWER OPTIONS for each problem. In other words, we purposely remove the context and question in the inputs. In this way, we are able to identify those problems that can be answered correctly by merely exploiting the biases in answer options without knowing the relevant context and question. However, the setting of this task is a multiple-choice question with 4 probable options, and even a chance baseline could have 25% probability to get it right. To eliminate the effect of random guess, we set four different random seeds for each model and pick the data points that are predicted correctly in all four cases to form the EASY set. Then, the data points which are predicted correctly by the models at random could be nearly eliminated, since any data point only has a probability of (25%) 4 = 0.39% to be guessed right consecutively for four times. Then we unite the sets of data points that are consistently predicted right by each model, because intuitively different models may learn different biases of the dataset. The above process is formulated as the following expression, where C seed1 BERT denotes the set of data points which are predicted correctly by BERT BASE with seed 1, and similarly for the rest. Table 6 shows the average performance for each model trained with four different random seeds and the number of data points predicted correctly by all of them. Finally, we get 440 data points from the testing set C TEST and we denote this subset as EASY set C EASY and the other as HARD set C HARD. Table 6: Average accuracy of each model using four different random seeds with only answer options as input, and the number of their common correct predictions. Among multiple-choice reading comprehension or QA datasets from exams, although the size of ReClor is comparable to those of ACR and DREAM , it is much smaller than. Recent studies (; ; ;) have shown the effectiveness of pre-training on similar tasks or datasets then fine-tuning on the target dataset for transfer learning. 
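The EASY/HARD split described above amounts to a small amount of set arithmetic: for each option-only model, intersect the test points it answers correctly under all four random seeds, then take the union over the five models; everything else forms the HARD set. A minimal sketch follows, where `predictions` is an assumed dict mapping (model, seed) to per-question predicted options and `answers` maps question ids to gold options.

```python
def split_easy_hard(predictions, answers, models, seeds):
    easy = set()
    for model in models:
        per_seed_correct = [
            {qid for qid, pred in predictions[(model, seed)].items() if pred == answers[qid]}
            for seed in seeds
        ]
        easy |= set.intersection(*per_seed_correct)   # right under every seed for this model
    hard = set(answers) - easy
    return easy, hard

# Usage sketch:
# easy, hard = split_easy_hard(preds, gold,
#     models=["gpt", "gpt2", "bert_base", "xlnet_base", "roberta_base"], seeds=[1, 2, 3, 4])
```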
find that by first training on RACE and then further fine-tuning on the target dataset, the performances of BERT BASE on multiple-choice dataset MC500 and DREAM can significantly boost from 69.5% to 81.2%, and from 63.2% to 70.2%, respectively. However, they also find that the model cannot obtain significant improvement even performs worse if it is first fine-tuned on span-based dataset like SQuAD . ReClor is a multiple-choice dataset, so we choose RACE for fine-tuning study. The performance of all tested models on the ReClor is presented in Table 7. This dataset is built on questions designed for students who apply for admission to graduate schools, thus we randomly choose 100 samples from the testing set and divide them into ten tests, which are distributed to ten different graduate students in a university. We take the average of their scores and present it as the baseline of graduate students. The data of ReClor are carefully chosen and modified from only high-quality questions from standardized graduate entrance exams. We set the ceiling performance to 100% since ambiguous questions are not included in the dataset. The performance of fastText is better than random guess, showing that word correlation could be used to help improve performance to some extent. It is difficult for Bi-LSTM to converge on this Table 7: Accuracy (%) of models and human performance. The column Input means whether to input context (C), question (Q) and answer options (A). The RACE column represents whether to first use RACE to fine-tune before training on ReClor. dataset. Transformer-based pre-training models have relatively good performance, close to the performance of graduate students. However, we find that these models only perform well on EASY set with around 70% accuracy, showing these models have an outstanding ability to capture the biases of the dataset, but they perform poorly on HARD set with only around 30% accuracy. In contrast, humans can still keep good performance on HARD set. We notice the difference in testing accuracy performed by graduate students on EASY and HARD set, but this could be due to the small number of students participated in the experiments. Therefore, we say humans perform relatively consistent on both biased and non-biased dataset. It is noticed that if the models are first trained on RACE and then fine-tuned on ReClor, they could obtain significant improvement, especially on HARD set. The overall performance of XLNet LARGE is even very close to that of graduate students. This similar phenomenon can also be observed on DREAM dataset by , which shows the potential of transfer learning for reasoning tasks. However, even after fine-tuning on RACE, the best performance of these strong baselines on HARD set is around 50%, still lower than that of graduate students and far away from ceiling performance. Experiments in different input settings are also done. Compared with the input setting of answer options only (A), the setting of questions and answer options (Q, A) can not bring significant improvement. This may be because some questions e.g., Which one of the following is an assumption required by the argument?, Which one of the following, if true, most strengthens the argument? can be used in the same reasoning types of question, which could not offer much information. Further adding context causes significant boost, showing the high informativeness of the context. We further analyze the model performance with respect to different question types of logical reasoning. 
Some are shown in Figure 4 and the full are shown in Figure 5, 6 and 7 in the Appendix E. Three models of BERT LARGE, XLNet LARGE and RoBERTa LARGE all perform poorly on the type of MATCH PRINCIPLES/POINTS on EASY set. On HARD set, the three models perform poorly on certain types such as STRENGTHEN, WEAKEN and ROLE which require extensive logical reasoning. However, they perform relatively better on other types, such as and SUMMARY/MAIN POINT that are more straight-forward. For the of transfer learning, we analyze XLNet LARGE in detail. Though the overall performance is significantly boosted after fine-tuning on RACE first, the histograms in the bottom of Figure 4 show that on EASY set, accuracy improves a similar amount among different question type, while on HARD set, significant improvement on some question types is observed, such as EVALUATION, and SUM-MARY/MAIN POINT. This may be because these types require less logical reasoning to some extent compared with other types, and similar question types may also be found in RACE dataset. Thus, the pre-training on RACE helps enhance the ability of logical reasoning especially of relatively simple reasoning types, but more methods are still needed to further enhance the ability especially that of relatively complex reasoning types. Figure 4: Performance of models on EASY (left) and HARD (right) testing sets and that of models. XLNet LARGE +Fine-Tune means the model is first fine-tuned on RACE before training on ReClor. In this paper, we introduce ReClor, a reading comprehension dataset requiring logical reasoning, with the aim to push research progress on logical reasoning in NLP forward from sentence-level to passage-level and from simple logical reasoning to multiple complicated one. We propose to identify biased data points and split the testing set into EASY and HARD group for biased and non-biased data separately. We further empirically study the different behaviors of state-of-the-art models on these two testing sets, and find recent powerful transformer-based pre-trained language models have an excellent ability to exploit the biases in the dataset but have difficulty in understanding and reasoning given the non-biased data with low performance close to or slightly better than random guess. These show there is a long way to equip deep learning models with real logical reasoning abilities. We hope this work would inspire more research in future to adopt similar split technique and evaluation scheme when reporting their model performance. We also show by first fine-tuning on a large-scale dataset RACE then fine-tuning on ReClor, the models could obtain significant improvement, showing the potential of transfer learning to solve reasoning tasks. fastText FastText models sentences as a bag of n-grams, and tries to predict the probability of each answer being correct independently. We choose the answer with the highest score as the prediction for the multiple-choice setting. LSTM sentence encoder A two-layer bi-LSTM is randomly initialized as a sentence encoder with GloVe word embedding . With a span of text as input, the last hidden state of the second layer is max-pooled and then fed into a fully-connected layer to compute the output score. GPT and GPT-2 GPT and GPT-2 are both transformer based models which are pre-trained using unsupervised method with a standard language modeling objective. GPT is pre-trained on BooksCorpus; GPT-2 is pre-trained using a larger dataset called WebText. 
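The bi-LSTM baseline described above can be sketched as follows: each (context, question, option) span is encoded by a two-layer bidirectional LSTM over GloVe embeddings, the top-layer outputs are max-pooled over time, and a fully connected layer yields one score per option; the four scores are compared with a softmax over options. This is an illustrative PyTorch sketch, with `glove_matrix` standing for a pre-computed (vocabulary x 300) embedding tensor and the pooling detail assumed.

```python
import torch
import torch.nn as nn

class BiLSTMScorer(nn.Module):
    def __init__(self, glove_matrix, hidden=128):
        super().__init__()
        self.embed = nn.Embedding.from_pretrained(glove_matrix, freeze=False)
        self.lstm = nn.LSTM(glove_matrix.shape[1], hidden, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.score = nn.Linear(2 * hidden, 1)

    def forward(self, tokens):                      # tokens: (B, 4, T), one row per option
        b, n_opts, t = tokens.shape
        x = self.embed(tokens.view(b * n_opts, t))
        out, _ = self.lstm(x)                       # (B*4, T, 2*hidden)
        pooled = out.max(dim=1).values              # max-pool over time
        return self.score(pooled).view(b, n_opts)   # logits over the four options

# Training sketch: loss = nn.CrossEntropyLoss()(model(batch_tokens), correct_option_index)
```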
Here we use the smallest model proposed in as our GPT-2 baseline. To fine-tune on ReClor, the final hidden vector corresponding to the last input token ([ classify] ) is used as the aggregate representation followed by an extra fully connected layer to compute the score. BERT BERT is also a transformer based model which is trained by using BooksCorpus and English Wikipedia in two unsupervised tasks, i.e., Masked LM (MLM) and Next Sentence Prediction (NSP). During fine-tuning, the final hidden vector corresponding to the first input token ([CLS]) is used as the aggregate representation followed by two extra fully connected layers to compute the score. XLNet XLNet The input format of different models is shown in Table 8. start Context delimiter Question || Option classify GPT-2 start Context delimiter Question || Option classify BERT [CLS] Context [SEP] Question || Option [SEP] [PAD]... XLNet <pad>... Context <sep> Question || Option <sep> <cls> RoBERTa <s> Context </s> </s> Question || Option </s> <pad>... Adam is used by all models with β 1 = 0.9, β 2 = 0.999 and = 1e − 8. For fastText, we use its python library 6 by converting ReClor to the required form, and keep the default setting of the hyper parameters. For Bi-LSTM, we use a two-layer Bidirectional LSTM with the GloVe 300d word embedding followed by max-pooling and a fully-connected layer. We train the model for 100 epochs using a batch size of 64 and learning rate of 0.1. A learning rate decay of 0.5 is also applied every 10 epochs. For pre-training models, we modify the code of PyTorch-Transformers of Hugging Face 7 to implement them on ReClor. We use a batch size of 24 and fine-tune for 50 epochs. For different models, we select the best fine-tuning learning rate among Table 9: Hyperparameters for finetuning pre-training language models on ReClor 6.25e-05, 2e-05, 1e-05, and 5e-06 on the validation set. The maximum input sequence length for all models is 256. The hyperparameters are shown in Table 9. Type: Necessary Assumptions Definition: identify the claim that must be true or is required in order for the argument to work Context: Slash-and-burn agriculture involves burning several acres of forest, leaving vegetable ash that provides ample fertilizer for three or four years of bountiful crops. On the cleared land nutrients leach out of the soil, however, and the land becomes too poor to support agriculture. New land is then cleared by burning and the process starts again. Since most farming in the tropics uses this method, forests in this region will eventually be permanently eradicated. Question: The argument depends on the assumption that Options: A. forests in the tropics do not regenerate well enough to restore themselves once they have been cleared by the slash-and-burn method B. some other methods of agriculture are not as destructive to the environment in tropical regions as the slash-and-burn method is C. forests in the tropics are naturally deficient in nutrients that are needed to support the growth of plants that are not native to those regions D. slash-and-burn agriculture is particularly suitable for farming in tropical areas Answer: A Table 10: The definition and an example of the logical reasoning type -Necessary Assumptions Type: Sufficient Assumptions Definition: identify a sufficient assumption, that is, an assumption that, if added to the argument, would make it logically valid Context: Geologist: A new method for forecasting earthquakes has reliably predicted several earthquakes. 
Type: Necessary Assumptions
Definition: identify the claim that must be true or is required in order for the argument to work
Context: Slash-and-burn agriculture involves burning several acres of forest, leaving vegetable ash that provides ample fertilizer for three or four years of bountiful crops. On the cleared land nutrients leach out of the soil, however, and the land becomes too poor to support agriculture. New land is then cleared by burning and the process starts again. Since most farming in the tropics uses this method, forests in this region will eventually be permanently eradicated.
Question: The argument depends on the assumption that
Options:
A. forests in the tropics do not regenerate well enough to restore themselves once they have been cleared by the slash-and-burn method
B. some other methods of agriculture are not as destructive to the environment in tropical regions as the slash-and-burn method is
C. forests in the tropics are naturally deficient in nutrients that are needed to support the growth of plants that are not native to those regions
D. slash-and-burn agriculture is particularly suitable for farming in tropical areas
Answer: A
Table 10: The definition and an example of the logical reasoning type - Necessary Assumptions

Type: Sufficient Assumptions
Definition: identify a sufficient assumption, that is, an assumption that, if added to the argument, would make it logically valid
Context: Geologist: A new method for forecasting earthquakes has reliably predicted several earthquakes. Unfortunately, this method can predict only that an earthquake will fall somewhere within a range of two and a half points on the Richter scale. Thus, since a difference of two and a half points can be the difference between a marginally perceptible shaking and a quake that causes considerable damage, the new method is unlikely to be useful.
Question: Which one of the following, if assumed, enables the geologist's conclusion to be properly inferred?
Options:
A. An earthquake-forecasting method is unlikely to be useful unless its predictions always differentiate earthquakes that are barely noticeable from ones that result in substantial destruction.
B. Several well-established methods for forecasting earthquakes can predict within much narrower ranges than two and a half points on the Richter scale.
C. Even if an earthquake-forecasting method makes predictions within a very narrow range on the Richter scale, this method is not likely to be useful unless its predictions are reliable.
D. An earthquake-forecasting method has not been shown to be useful until it has been used to reliably predict a large number of earthquakes.
Answer: A
Table 11: The definition and an example of the logical reasoning type - Sufficient Assumptions

Type: Strengthen
Definition: identify information that would strengthen an argument
Context: Financial success does not guarantee happiness. This claim is not mere proverbial wisdom but a fact verified by statistics. In a recently concluded survey, only one-third of the respondents who claimed to have achieved financial success reported that they were happy.
Question: Which one of the following, if true, most strongly supports the conclusion drawn from the survey?
Options:
A. Most of the respondents who reported they were unhappy were in fact happy.
B. The respondents who reported financial success were, for the most part, financially successful.
C. Many of the respondents who claimed not to have achieved financial success reported that they were happy five years ago.
D. Many of the respondents who failed to report financial success were in fact financially successful.
Answer: B
Table 12: The definition and an example of the logical reasoning type - Strengthen

Type: Weaken
Definition: identify information that would weaken an argument
Context: "DNA fingerprinting" is a recently-introduced biochemical procedure that uses a pattern derived from a person's genetic material to match a suspect's genetic material against that of a specimen from a crime scene. Proponents have claimed astronomically high odds against obtaining a match by chance alone. These odds are based on an assumption that there is independence between the different characteristics represented by a single pattern.
Question: Which one of the following, if true, casts the most doubt on the claim of the proponents of DNA fingerprinting?
Options:
A. The skill required of laboratory technicians performing the DNA fingerprinting procedure is not extraordinary.
B. There is a generally accepted theoretical basis for interpreting the patterns produced by the procedure.
C. In the whole population there are various different subgroups, within each of which certain sets of genetic characteristics are shared.
D. In the investigation of certain genetic diseases, the techniques used in DNA fingerprinting have traced the transmission of the diseases among the living members of very large families.
Answer: C
Table 13: The definition and an example of the logical reasoning type - Weaken

Type: Evaluation
Definition: identify information that would be useful to know to evaluate an argument
Context: George: Some scientists say that global warming will occur because people are releasing large amounts of carbon dioxide into the atmosphere by burning trees and fossil fuels. We can see, though, that the predicted warming is occurring already. In the middle of last winter, we had a month of springlike weather in our area, and this fall, because of unusually mild temperatures, the leaves on our town's trees were three weeks late in turning color.
Question: Which one of the following would it be most relevant to investigate in evaluating the conclusion of George's argument?
Options:
A. whether air pollution is causing some trees in the area to lose their leaves
B. what proportion of global emissions of carbon dioxide is due to the burning of trees by humans
C. whether unusually warm weather is occurring elsewhere on the globe more frequently than before
D. when leaves on the trees in the town usually change color
Answer: C

Context: To reduce the mosquito population in a resort area, hundreds of trees were planted that bear fruit attractive to birds. Over the years, as the trees matured, they attracted a variety of bird species and greatly increased the summer bird population in the area. As expected, the birds ate many mosquitoes. However, the planting of the fruit trees had the very opposite of its intended effect.
Question: Which one of the following, if true, most helps to explain the apparently paradoxical result?
Options:
A. Most of the species of birds that were attracted by the trees that were planted did not eat mosquitoes.
B. Increases and decreases in mosquito populations tend to follow a cyclical pattern.
C. The species of birds that were attracted in the greatest number by the fruit of the trees that were planted did not eat mosquitoes.
D. The birds attracted to the area by the trees ate many more insects that prey on mosquitoes than they did mosquitoes.
Answer: D

Context: Buying elaborate screensavers - programs that put moving images on a computer monitor to prevent damage - can cost a company far more in employee time than it saves in electricity and monitor protection. Employees cannot resist spending time playing with screensavers that flash interesting graphics across their screens.
Question: Which one of the following most closely conforms to the principle illustrated above?
Options:
A. An electronic keyboard may be cheaper to buy than a piano but more expensive to repair.
B. An energy-efficient insulation system may cost more up front but will ultimately save money over the life of the house.
C. The time that it takes to have a pizza delivered may be longer than it takes to cook a complete dinner.
D. A complicated hotel security system may cost more in customer goodwill than it saves in losses by theft.
Answer: D

Context: The museum's night security guard maintains that the thieves who stole the portrait did not enter the museum at any point at or above ground level. Therefore, the thieves must have gained access to the museum from below ground level.
Question: The flawed pattern of reasoning in the argument above is most similar to that in which one of the following?
Options:
A. As had generally been expected, not all questionnaires were sent in by the official deadline. It follows that plans must have been made for the processing of questionnaires received late.
B. The store's competitors claim that the store, in selling off the shirts at those prices, neither made any profit nor broke even. Consequently, the store's customers must have been able to buy shirts there at less than the store's cost.
C. The product label establishes that this insecticide is safe for both humans and pets. Therefore, the insecticide must also be safe for such wild mammals as deer and rabbits.
D. If the census is to be believed, the percentage of men who are married is higher than the percentage of women who are married. Thus, the census must show a higher number of men than of women overall.
Answer: B

Context: It is an absurd idea that whatever artistic endeavor the government refuses to support it does not allow, as one can see by rephrasing the statement to read: No one is allowed to create art without a government subsidy.
Question: The pattern of reasoning in which one of the following is most similar to that in the argument above?
Options:
A. The notion that every scientist who has been supported by a government grant will be successful is absurd, as one can see by rewording it: No scientist is allowed to do research without a government grant.
B. The notion that every scientist who is supported by a government grant will be successful is absurd, as one can see by rewording it: No scientist lacking governmental support will be successful.
C. The claim that any driver who is not arrested does not break the law is absurd, as one can see by rewording it: Every driver who gets arrested has broken the law.
D. The claim that any driver who is not arrested does not break the law is absurd, as one can see by rewording it: Every driver who breaks the law gets arrested.
Answer: D

Context: If such remote considerations were relevant, almost every other consideration would be too. But this would make determining the seriousness of an offense so difficult that it would be impossible to apply the proportionality principle.
Question: The statement that considerations as remote as what an offender did years ago are relevant to the seriousness of an offense plays which one of the following roles in the argument?
Options:
A. It is an allegedly untenable consequence of a view rejected in the argument's overall conclusion.
B. It is a statement the argument provides grounds to accept and from which the overall conclusion is inferred.
C. It is the overall conclusion in favor of which the argument offers evidence.
D. It is a premise offered in support of an intermediate conclusion of the argument.
Answer: A

Context: The tidal range at a particular location is the difference in height between high tide and low tide. Tidal studies have shown that one of the greatest tidal ranges in the world is found in the Bay of Fundy and reaches more than seventeen meters. Since the only forces involved in inducing the tides are the sun's and moon's gravity, the magnitudes of tidal ranges also must be explained entirely by gravitational forces.
Question: Which one of the following most accurately describes a flaw in the reasoning above?
Options:
A. It does not differentiate between the tidal effect of the sun and the tidal effect of the moon.
B. It fails to consider that the size of a tidal range could be affected by the conditions in which gravitational forces act.
C. It presumes, without providing warrant, that most activity within the world's oceans is a result of an interplay of gravitational forces.
D. It gives only one example of a tidal range.
Answer: B

Context: PhishCo runs a number of farms in the arid province of Nufa, depending largely on irrigation.
Now, as part of a plan to efficiently increase the farms' total production, it plans to drill down to an aquifer containing warm, slightly salty water that will be used to raise fish in ponds. The water from the ponds will later be used to supplement piped-in irrigation water for PhishCo's vegetable fields, and the ponds and accompanying vegetation should help reduce the heat in the area of the farms.
Question: Which of the following would, if true, most strongly suggest that the plan, if implemented, would increase the overall efficiency of PhishCo's farms?
Options:
A. Organic waste from fish in the pond water will help to fertilize fields where it is used for irrigation.
B. Fish raised on PhishCo's farms are likely to be saleable in the nearest urban areas.
C. Ponds will be located on low-lying land now partially occupied by grain crops.
D. The government of Nufa will help to arrange loan financing to partially cover the costs of drilling.
Answer: A